Hacker News | g1a55er's comments

Yes! Honestly, I mostly just use it as a really, really fancy iPad. It's great for watching content, like movies and TV shows. Some of the games are fun too, in small doses.


(Also not a lawyer)

The point of (d) is to create a jurisdictional hook for the US federal government. The federal government only has power over a limited number of offenses, including "interstate commerce"-related offenses, but not most ordinary criminal offenses, which can only be criminalized by the states. So, for the federal government to regulate something like this, the easiest way is to regulate only things that somehow involve interstate commerce. In this case, downloading the Stable Diffusion model over the Internet probably creates enough of a hook, and this defendant is alleged to have done much more than that, so it's probably enough.


Interesting, so an obscene cartoon drawn with a pencil purchased locally would not be an offense, but if the pencil was purchased interstate or by mail, then it would be.

Are there any actual cases on this? i.e., someone downloads a model, creates but does not distribute the results, gets convicted?


I think prosecutors would probably argue that even using a pencil purchased locally meets the bar of making the "visual depiction... produced using materials... or that have been shipped or transported in interstate or foreign commerce by any means, including by computer".

The Commerce Clause has been read very expansively since Wickard v. Filburn[1] in the New Deal era. One of the current legal projects of the Federalist Society, the conservative legal movement from which the current Supreme Court majority stems, is rolling this back and limiting the power of the Commerce Clause. They argue that reading the Commerce Clause so expansively, coupled with modern technological/economic change, gives the Federal Government effectively unchecked power to regulate behavior, contrary to the intention/design of the Constitution. Supporters of the current status quo argue that reading the Commerce Clause too narrowly would make the modern economy unmanageable and ungovernable. It's an open question how far the current Supreme Court will go in paring it back, but they have started narrowing parts of this doctrine.[2]

(There are no prior cases on this in the context of AI generated content in the United States, because this is the first time this offense has been charged for AI generated content in the United States. That's why it's so newsworthy and why I posted it - it's going to set some precedent, one way or another.)

I personally find the much more interesting argument here to be about obscenity. The obscenity doctrine places "obscene" speech completely outside the protection of the First Amendment. Previously, the Supreme Court ruled in Ashcroft v. FSC that for CSAM to be illegal, it must at least meet the bar for obscenity, or it must be produced via actual exploitation.[3] This is the most similar case I am aware of.

Obscenity doctrine in the United States is... a bit of a hot mess, in my humble opinion. The current test is as follows:

> The basic guidelines for the trier of fact must be: (a) whether "the average person, applying contemporary community standards" would find that the work, taken as a whole, appeals to the prurient interest; (b) whether the work depicts or describes, in a patently offensive way, sexual conduct specifically defined by the applicable state law; and (c) whether the work, taken as a whole, lacks serious literary, artistic, political, or scientific value.

To be blunt, I don't think anyone really knows what that word salad means. What are the "contemporary community standards"? Who is the "community"? What does it mean to be "patently offensive"? According to whom? The whole thing that makes exclusions to the First Amendment tenable is that they need to be really, really clear. Otherwise ambiguity in the standard leads to curtailment of expression by a chilling effect.

Furthermore, there's also this ruling[4] - that as far as I can tell is still good law - which makes it unconstitutional to ban the mere possession of obscene material. I don't think that applies here, because in this case, the defendant is alleged to have distributed the material widely. Furthermore, it also stems from a "right to privacy" the current Supreme Court is rather skeptical of.

There's not really a conclusion to all this. The main point here is that something is happening and that will probably result in interesting decisions later on.

[1] https://fedsoc.org/case/wickard-v-filburn (held that Congress could bar farmers from growing wheat on their own land for their own consumption)

[2] https://www.nlc.org/article/2023/06/15/supreme-court-decides... (limited the related Dormant Commerce Clause doctrine)

[3] https://www.oyez.org/cases/2001/00-795

[4] https://en.wikipedia.org/wiki/Stanley_v._Georgia


> I think prosecutors would probably argue that even using a pencil purchased locally meets the bar of making the "visual depiction... produced using materials... or that have been shipped or transported in interstate or foreign commerce by any means, including by computer".

This would arguably expand the Commerce Clause even beyond Wickard v. Filburn.

> Previously, the Supreme Court ruled in Ashcroft vs FSC that for CSAM to be illegal, it must at least meet the bar for obscenity, or it must be produced via actual exploitation.

You mean virtual child pornography? New York v. Ferber established that Congress could ban real child pornography regardless of obscenity, and I don't think Ashcroft v. Free Speech Coalition changed that. Calling all child pornography "CSAM" has also made people confused about which images are illegal.

> The main point here is that something is happening and that will probably result in interesting decisions later on.

Possibly. More probably, it will result in a plea bargain. And courts have resisted clarifying obscenity doctrine.


> In this case, downloading the Stable Diffusion model over the Internet probably creates enough of a hook

A prosecutor would say yes. Would a court?


The first, second, and fourth charges in the indictment[1] are for violating 18 USC 1466A(a)(1)/(b)(1)/(d)(1)/(d)(4), the statutes against producing/distributing/possessing an "obscene" "drawing, cartoon, sculpture, or painting" "depict[ing] a minor engaging in sexually explicit conduct".

[1] https://www.justice.gov/opa/media/1352606/dl?inline


I'm not sure that this continues to be true, now that Git supports partial clones[1]. It's now a fully supported feature in Git to work with only a partial copy of the code and metadata. In my experience, this plus the fsmonitor feature[2] makes Git work fine on very, very large repos. This is how Microsoft scaled Git to work with their massive Windows monorepo[3]. (That particular article describes the earlier work using Git VFS, but it eventually turned into these extensions that were upstreamed into Git, as documented in the Scalar repo[4].)

[1] https://github.blog/2020-12-21-get-up-to-speed-with-partial-...

[2] https://github.blog/2022-06-29-improve-git-monorepo-performa...

[3] https://arstechnica.com/information-technology/2017/02/micro...

[4] https://github.com/microsoft/scalar
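For illustration, here's a minimal sketch of how these pieces fit together on a large repo with a recent Git (the URL and directory path are placeholders, not a real project):

```shell
# Partial clone: download commits and trees, but fetch file contents
# (blobs) lazily, only when they are actually checked out
git clone --filter=blob:none https://example.com/big-monorepo.git
cd big-monorepo

# Optionally restrict the working tree to just the directories you work on
git sparse-checkout set services/my-service

# Enable the built-in filesystem monitor and untracked cache, so
# `git status` only examines paths that actually changed
git config core.fsmonitor true
git config core.untrackedCache true
```

With `--filter=blob:none`, history operations like `git log` stay fast because commit and tree metadata is local, while old blob contents are fetched on demand from the remote.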


Yeah, these all seem like parallel or inspired developments of the same ideas in Git. But they're relatively recent, and when you use this stuff, Git basically ceases to be a DVCS (though the modern Git + GitHub workflow mostly works like centralized version control already).


> GitHub workflow mostly works like centralized version

I mean, the actual full DVCS capabilities are rarely used anywhere and aren't appropriate for most people at all. Yes, the Linux kernel uses them. But with the kernel there is no real authoritative branch. Yes, there are Linus's and his team's branches, but each distro has its own, where they pull in patches etc. For a Red Hat user, you could say their branches are authoritative.

Where in the enterprise would you want that? Usually, there is a team that owns each module; they are the definitive team, and you use their artifacts.

The only case where I can imagine a true DVCS workflow is from a company I worked at: we had a core library that we customised and built apps around for clients. There were upstream updates we would choose to accept because they fixed bugs that affected our clients, and others we might not. There were fixes we couldn't take because they clashed with our modifications. This is a situation where true DVCS is applicable. (We didn't use it, though; it was copy & paste of files from sent emails.)


This is explicitly the case according to Microsoft documentation[1].

"When the administrator uses a Microsoft account to sign in, the clear key is removed, a recovery key is uploaded to the online Microsoft account..."

Microsoft does not break down requests by key disclosure, but they do say in their most recent report for 2022 H2 that they released account content for 522 requests to US criminal authorities in that half. It does not note how many accounts were included in those 522 requests.[2]

[1] https://learn.microsoft.com/en-us/windows/security/operating...

[2] https://www.microsoft.com/en-us/corporate-responsibility/law...


It is on by default in Windows 11 Home if you go through the normal setup experience completely according to the Microsoft documentation. As part of the setup, you sign in to a Microsoft account, which then creates a TPM protector.

"Unlike a standard BitLocker implementation, device encryption is enabled automatically so that the device is always protected... When the administrator uses a Microsoft account to sign in, the clear key is removed, a recovery key is uploaded to the online Microsoft account, and a TPM protector is created. Should a device require the recovery key, the user is guided to use an alternate device and navigate to a recovery key access URL to retrieve the recovery key by using their Microsoft account credentials."

From https://learn.microsoft.com/en-us/windows/security/operating...

This is also how it's reported in the press:

"In fact, the mechanisms to do exactly that are already in place. Windows 11 Home and Windows 11 Pro both support automatic device encryption, with the Home version a more streamlined experience. You just have to sign into the machine with a Microsoft account, which nearly all people do during setup."

From https://www.pcworld.com/article/624593/is-your-windows-11-pc...

My main point is just that if you skip this, like a lot of privacy conscious people do, you might end up inadvertently not having encryption fully enabled.


I would think the other side of this is "if you try to boot another OS one day, surprise, you didn't know the disc was encrypted and can't access any of your files."

That screams anti-competitive behaviour to me-- how many people would stop their "let's try Linux" experiment if you can't mount your existing drive to access previous data?


>That screams anti-competitive behaviour to me

...or they're trying to increase security against physical attacks. The year of the Linux desktop has been a running joke for decades. Microsoft doesn't need disk encryption to keep Linux from gaining traction. Linux is already doing a pretty good job for them.


Well, I could see plenty of other use cases (e.g., "My machine is kaput, can you tether the hard disc and grab my data?"), but this one gives them a legitimate business edge if they intercept it.


You left out the part where it says "If a device uses only local accounts, then it remains unprotected even though the data is encrypted"

I think you are confusing "device encryption" with "disk encryption" (BitLocker)


I quote that exact part of the documentation in the post. I also talk about the difference between "Device encryption" and "BitLocker Device Encryption"

My argument isn't that this isn't documented. It's that it is a bit counterintuitive.

My points are:

1) It would be best if Microsoft just asked whether you wanted encryption when you create a local account. This is what Apple does in this situation. I imagine a large portion of the people creating local accounts on Windows 11 Home are the sort who want to manage their own keys.

2) If you are in that set of people, you should double check your setting if you never thought about it before, because it's easy to miss.


I’m fairly concerned about how broadly some are interpreting “operating in the EU”.

The furthest this has gone, that I'm aware of, is that the Dutch DPA, alongside other EU authorities, issued a fine to an American website, alleging that it subjected itself to EU jurisdiction merely by hosting information about EU citizens. That seems to me to go too far.

https://edpb.europa.eu/news/national-news/2021/dutch-dpa-imp...


The DSA itself imposes EU-wide requirements on the "mitigation" of "illegal hate speech", although it does not define the term (Article 35(1)(c)). My amateur guess for how this will end up is that large Europe-wide platforms will be expected to take down hate speech that is illegal anywhere in Europe, but I don't think we have seen a definitive example yet. It's possible this enforcement action will bring more certainty.

https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=celex%3A...

I think this is going to inevitably create more and more tension between EU and American contexts, because hate speech is affirmatively legal in the U.S., as most recently affirmed in Snyder v. Phelps.


I think the negative attitude towards the EU on Hacker News is the product of a couple things:

- The EU has drastically scaled up regulatory requirements for tech businesses, starting with GDPR, running through the DSA, and probably eventually continuing through the AI Act and the proposed cybersecurity law. Because this is a community mostly centered around people who start or work at or invest in tech businesses, there’s a lot of frustration that the new regulations are making life harder.

- In this case, part of what the EU alleges is that Twitter is not doing enough to actively combat disinformation. People are concerned that what the EU wants in terms of combatting disinformation IS a speech-controlling agenda.

I’m not sure either argument is 100% correct, but I can understand why many are arguing that the EU regulations are going too far, both in terms of requiring too much work for too little gain, and in terms of jeopardizing Internet independence.


I think it's also because the people on Hacker News in general are not stupid, and they can see through the bullshit to the real agendas behind what is being passed.

Let's have a test:

What's wrong with the following:

In 2020, the Netherlands passed a new law stating that anyone who works in a job with an obligation of secrecy cannot be prosecuted for perjury for lying in court under oath.

The example given by the government was, "Consider a lawyer and his client".

Forget for a moment that in this trivial example the lawyer could simply refuse to answer.

Figure it out yet?

Here's a hint: everyone in government has an obligation of secrecy, including prosecutors.

See the problem here? I'm sure that lots of hacker news people would.


Here's a link to a prosecutor that was caught lying:

https://www.telegraaf.nl/nieuws/1039807/liegende-officieren-...

He was lying in court again just before he retired. No consequences. Worse, he was never found out officially. The press were never even interested in the story.

However, two years later the law was changed. Now there can never be consequences in such cases. The Netherlands officially only has a kangaroo court now.


You're also missing the part where local politics will always blame the EU for unpopular legislation (even legislation they supported themselves in the European Parliament) and take credit for all positive EU directives (even ones they fought against).

Among the less informed, this creates a pretty big "EU bogeyman" trend.


Honest question as an American who speaks English as a first language and is studying German as a second:

Section 8, paragraph 1 seems clearly not to require a service provider to block access in order to prevent copyright infringement, provided the provider meets its requirements. I understand that Section 7, paragraph 3 leaves in place blocking remedies specified elsewhere in other laws. However, for the specific case of copyright infringement, this is clearly the narrowest, most specific rule for blocking, and at least in American law that generally means it is the one that takes precedence for the infringement-blocking case.

Also I read Section 7 paragraph 4 to just mean that public WiFi hotspots can be mandated to block infringing content if no other means is available.

Am I reading this wrong? I’m struggling because I’m not sure if my understanding of the German language or German law is wrong here.

