
You can always ignore the BIOS routines and directly touch hardware registers/etc., as long as you know precisely what hardware you are dealing with. Of course, this is what modern OSes do for most hardware after bootstrapping, since calling back into the BIOS isn't really an option.

BIOS routines are purely an abstraction layer, though the abstraction is somewhat leaky, and my understanding is that most hardware tried to be IBM-compatible at the register level, so software that skips the BIOS routines and touches the hardware interfaces directly still works.

e.g. you can see documentation for the VGA card here: https://wiki.osdev.org/VGA_Hardware

The thing is, as far as I know you do not need to use the video BIOS routines even to set the video mode. After all, the video BIOS routines are also just routines that run on the CPU. The only advantage they really have is that of any abstraction: using them keeps you compatible with any card that implements that software interface, even if it doesn't implement the same exact hardware interface. But as far as I know, if you know you're dealing with a VGA-compatible card, you can set the mode by directly flipping CRTC registers and it should work just fine.
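To make that concrete, here's a little sketch (my own, not from any BIOS) that touches a VGA register directly via Linux's /dev/port, no int 10h involved. A real mode set is the same idea, just a scripted series of writes to the CRTC index/data pair at 0x3D4/0x3D5 with values from the osdev docs; this one only does a harmless read of the Miscellaneous Output register (read port 0x3CC) and bails out gracefully without root or VGA hardware:

```python
# Sketch: direct VGA register access, skipping the video BIOS entirely.
# /dev/port exposes x86 I/O port space on Linux; seeking to a port number
# and reading a byte performs an inb on that port.

VGA_MISC_READ = 0x3CC  # Miscellaneous Output register (read side)

def read_vga_misc():
    """Return the Misc Output register value, or None if inaccessible."""
    try:
        with open("/dev/port", "rb") as port:
            port.seek(VGA_MISC_READ)
            data = port.read(1)
            return data[0] if data else None
    except OSError:
        # Not root, no /dev/port (container/non-x86), etc.
        return None

if __name__ == "__main__":
    val = read_vga_misc()
    if val is None:
        print("VGA registers not accessible from here (need root + x86 VGA)")
    else:
        print(f"Misc Output register: 0x{val:02x}")
```

Writes work the same way (open "wb", seek to 0x3D4/0x3D5, write the index then the data byte), which is all a BIOS mode-set routine is ultimately doing.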

Same for disk controllers and etc.


However, since GPUs tended to vary widely when going beyond VGA modes (although I believe even the original VGA controller was capable of 800x600x4 with a suitably lenient autosync monitor), VESA VBE was introduced to make that much easier again.

Windows actually runs the VBIOS in an emulator / VM for switching modes with the default GPU driver.


You know, I thought UEFI actually was doing the same thing, but I am mistaken, at least for anything resembling modern UEFI.

Apparently, modern UEFI just doesn't bother supporting classic video BIOS option ROMs at all, only UEFI option ROMs. For those... you either need to hope your GPU has one for your host CPU architecture, or you can use X86EmulatorDxe, a QEMU-based LGPL module that can run AMD64 drivers on AArch64 but hasn't been updated in a few years, or Intel's MultiArchUefiPkg, a more modern solution using Unicorn Engine that supports both AArch64 and RISC-V. That one is also LGPL, but... Unicorn Engine itself is GPL, which is obviously a potential licensing issue for any would-be users. And Intel archived it (probably alongside tons of their other open source projects) earlier this year, so it is no longer being maintained by them.

Or, finally, you can have an option ROM containing machine-independent EFI Byte Code... which is no longer required by the UEFI spec and was never really used (apparently the primary toolchain was a proprietary Intel C compiler that is no longer updated), so who knows how many UEFI implementations still support it. It was removed from EDK2 in 2023, so presumably newer firmwares won't handle it at all.

I'm sure most people don't care about the trainwreck that is UEFI, but I found this tangent surprisingly interesting. At this point, treating AMD64 as a lingua franca of PC-based computers might just be the best way forward, rather than trying to invent virtual machines. If they wanted to go the VM route, they should've committed fully and only supported option ROMs that were architecture-independent from the get-go.

All of this just so we can initialize the video card and display some messages at boot, huh.


Sorry for your situation, or I suppose, glad to hear it's better. My understanding as a layperson is that antibiotics can be a nightmare for the gut: I don't think it's particularly likely, but you are a lot more likely to get C. diff while taking antibiotics, and I'm sure that's not the only way it can make things worse, alongside the other caveats of antibiotics (e.g. people misusing them in ways that threaten their effectiveness.) So, I can understand why doctors are not always eager to deploy antibiotics when they're not convinced they will help.

I've personally had quite a few experiences navigating health issues, health anxiety, the medical system, etc. Nothing terribly interesting, but, still. I'm actually in the middle of scheduling tests to see if I might, in fact, have an autoimmune condition. If they do find evidence of that, then it will have taken me around 6 years to figure it out from my very first symptoms. Thanks to modern medical science, I have little reason to sweat over it, though. (Of course, I'm still hoping for a negative, but at least in the case of a positive I can have the relief of knowing what the hell was wrong with me all this time.)


Finding out I had gout was a 10 year process. Apparently "full test panels" don't include uric acid tests. And whenever symptoms are aggravated, it's much harder to test for it anyway as the crystals are lodged in your bones and not in your blood stream and urine.

Now I have to go through the entire process again to rule out everything else before getting a fibromyalgia diagnosis.

Anyway, on an unrelated note, antibiotics have literally saved my life on several occasions now, and I am glad that in a few emergencies I was able to access them on the street without needing to navigate the healthcare system. Antibiotics are too tightly controlled for humans and not tightly controlled enough for animals.

That said, I have had to take them enough that I've definitely experienced negative gut effects a couple of times. Now I reach for probiotics after a regimen.


I'm allergic to all cillins and had a spate of issues that required antibiotics 15 years ago, 3 treatments in 17 months.

I'll have to take probiotics the rest of my life. Because I'm not doing a fecal transplant, thanks. Those three times of antibiotics completely wrecked my intestines.


>Because I'm not doing a fecal transplant, thanks.

Why?


Previously on Hacker News (I had bookmarked it):

"Antibiotics damage the colonic mucus barrier in a microbiota-independent manner"

https://news.ycombinator.com/item?id=41516419


Even as a U.S. citizen this hits me frequently: there are a lot of amazing comedians I didn't realize were Canadian for a long time. Norm Macdonald comes to mind, but he's far from alone.

Canada has government programs requiring broadcasters to broadcast Canadian content. So there are strong economic incentives to find, fund, develop, and promote Canadian artists working in various media.

In my undergrad days I learned about the MAPL classification to be considered Canadian content: Music, Artist, Performance, Lyrics

IIRC you need to hit 3/4 to be considered Canadian content.

At that time J Biebs was big, but since his music and lyrics were written by Americans and he performed/recorded in the States, his music was not CanCon despite him being Canadian. So, at the radio station I volunteered at, his music counted toward the non-CanCon side of the 30% CanCon quota.
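That classification is simple enough to sketch as a toy function (this encodes the 3-of-4 threshold as recalled above; the actual CRTC rules have more nuance and exceptions, so treat it as illustrative only):

```python
def is_cancon(music, artist, performance, lyrics, threshold=3):
    """Toy MAPL check: each flag is True if that component is Canadian."""
    points = sum([music, artist, performance, lyrics])
    return points >= threshold

# The Bieber case described above: Canadian artist, but American-written
# music/lyrics and recorded in the States -> only 1 point, not CanCon.
print(is_cancon(music=False, artist=True, performance=False, lyrics=False))  # False
```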


Interesting. He seems extremely Canadian to me. A lot of his jokes play off this sort of fish-out-of-water vibe…

For Norm, just the crude nature of a lot of his jokes felt like something that was very "American", at least certainly in the 90s and 2000s. His delivery, on the other hand, always felt fairly unique to me. It's also possible I just mistook characteristics of Canadian comics as being those of American comics by simply not realizing how many popular comedians are Canadian.

> Without a CLA, that's not doable without the approval of all the contributors.

Three things:

- libogc is BSD 3-clause[1], which is GPL-compatible[2][3]. That means that while you can't just strip off the existing BSD license, you can distribute a combination of BSD- and GPL-licensed source code; the combination just has to be distributed under the terms of the GPL, which is fully compatible with the terms of a BSD license, since all a BSD license requires is maintaining the copyright notice/disclaimer, and nothing about that requirement conflicts with the GPL. (IANAL, so take this with a grain of salt, but I'm pretty positive on this one.)

- Although RTEMS is traditionally GPL, it's actually a modified version of GPL with the GNAT exception[4]. A compiler exception effectively allows users to link the object files into software of any license without invoking the copyleft provisions, so to my understanding this would basically be identical except for people who are modifying the source code of libogc (I imagine this is exceedingly rare.)

- And finally, RTEMS has been re-licensing to BSD 2-clause for a long time[5], so it's very possible all of the relevant code is now available under a BSD license with effectively equivalent permissiveness.

So actually I think this situation is pretty much a nothingburger in that regard...

[1]: https://github.com/devkitPro/libogc/blob/master/libogc_licen...

[2]: https://www.gnu.org/licenses/license-list.en.html#ModifiedBS...

[3]: https://en.wikipedia.org/wiki/License_compatibility

[4]: https://gitlab.rtems.org/rtems/rtos/rtems/-/blob/main/LICENS...

[5]: https://gitlab.rtems.org/rtems/rtos/rtems/-/blob/main/LICENS...


I'm gonna walk through this because I have a bit of experience here on the computer side of things, but I'm not really making an excuse for the fact that the PC version of this is less user-friendly; from my perspective, I fully respect that Apple has done a good job with user experience where PC manufacturers have lagged. However, my main concern is devices turning to e-waste, so the important thing for that isn't UX, it's just how plausible it is to recover once you've bricked. With that out of the way...

> This is in contrast to some PCs, where if you damage the BIOS (e.g. by suddenly losing power during a firmware update), your device may or may not be bricked.

I accidentally destroyed the firmware on a machine that did not have any recovery features while flashing modified UEFI images, leaving it mostly inoperable. I wound up recovering it using flashrom and a Raspberry Pi. I think this counts as a hard brick, but thanks to the modular nature of PCs (e.g. most BIOS chips are socketed so you can pull them out easily), it's not nearly as big of an issue as hard-bricking a device that's more integrated and locked down. It's not instant e-waste because no bricks are permanent.

(It's a little harder for laptops, but I did also flashrom a laptop in a similar fashion, in-circuit using a SOIC8 clamp. This was not a brick recovery but rather messing with coreboot.)

Definitely not for the faint of heart, but a repair technician could do it for you. Alternatively, for PCs with a socketed BIOS, you can buy a new EEPROM that's already flashed with the right firmware--they were readily available on eBay last I looked.

That was probably a decade or more ago by now. Many modern PC motherboards from many vendors have mitigations for this; it was a common enough pain point, after all. For example, my desktop PC has an embedded controller that can boot and rewrite the flash chip for you, using a copy of the BIOS from a USB stick. (It works even if the CPU isn't installed. Pretty cool.)

> There have even been stories of peoples' computers being bricked via rm -rf /, due to removing everything at /sys/firmware/efi/efivars/ (which is actually stored inside the motherboard), and sometimes contains things that the motherboard won't boot without

EFI vars are stored in NVRAM, not the EEPROM. You can usually clear that a couple of ways:

- Use an integrated NVRAM reset system. Some machines have a way to do this listed in their manual. On desktop PC motherboards, it tends to be a jumper you set for a few seconds. Sometimes you will have an easier option, like a button somewhere, or even possibly a key combination at boot (Long time Macintosh fans probably have memorized the NVRAM reset key chord for Apple computers... I wonder if it still works on Apple Silicon.)

- Remove the battery for a few seconds. Usually easily accessible on a desktop. Usually a little less easy to get to on a laptop, but nothing absurd, usually just some screws.

Certainly it could be easier to recover from, but I'd say it's actually not very easy to brick a typical desktop PC in a particularly permanent fashion. The only time I've ever done it was because I was modifying my UEFI image intentionally. Screwing up EFI vars doesn't make most systems unbootable. I have corrupted my EFI vars quite a few times trying to do funny things. UEFI implementations do tend to be buggy, but not all of them are that catastrophically bad.
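As an aside on the rm -rf / story: on Linux you can inspect exactly what lives under /sys/firmware/efi/efivars, and since kernel 4.5 efivarfs marks the variable files immutable, so a stray rm can't remove them without an explicit chattr -i first. A defensive little sketch (safe to run anywhere; it just returns an empty list on non-UEFI or non-Linux machines):

```python
from pathlib import Path

EFIVARS = Path("/sys/firmware/efi/efivars")

def list_efivars():
    """Return UEFI variable names (Name-GUID form), or [] if no efivarfs."""
    try:
        return sorted(p.name for p in EFIVARS.iterdir())
    except OSError:  # directory missing (BIOS boot, container) or unreadable
        return []

if __name__ == "__main__":
    names = list_efivars()
    if names:
        print(f"{len(names)} UEFI variables visible, e.g. {names[0]}")
    else:
        print("no efivarfs here (BIOS boot, container, or non-Linux)")
```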

--

Now... as for whether or not an Apple Silicon device can "physically" be bricked by software, the most obvious way to do that would be to wear the SSD down to the point where it can no longer be rewritten. I think the M4 Mac Mini finally no longer solders the SSD, and the Mac Minis do have a way to recover from this brick (using another Mac to restore to a new SSD), but there are many Macs where, if the SSD is destroyed, it's pretty hard to fix, since you need Apple tools that are hard to obtain in order to pair a new SSD. This is unfortunate because Apple has often made dodgy hardware choices around the SSD (e.g. the notorious TPS62180 buck converter) and doesn't always use SSDs with the best reliability (IIRC they use a lot of Kioxia in the newer Apple Silicon devices, which are not considered bad devices by any means, but are generally considered less durable than e.g. Samsung SSDs.)

Rather than have an Apple device become e-waste due to software issues, in recent years it is much more likely to become e-waste due to hardware issues, as a result of parts pairing and failure-prone components that are not modular even when they really can and should be. (Good on them for rectifying this lately, e.g. with the Mac Mini SSD, but it's a bit sad that it took this long. And on the note of that SSD... Apple, you really could've used a standard electrical interface for it.)

This is somewhat a testament to Apple's software and system design, but it's simultaneously a condemnation of their recent track record with repair. The best we can hope for is that they don't go backwards from this point forward, because they have created a lot of devices that will become e-waste over time for almost no gain for anyone. (I strongly dislike but can understand the justification for something like parts pairing in iPhones and iPads, but much less so for similar mechanisms in computers.)


> Screwing up EFI vars doesn't make most systems unbootable. I have corrupted my EFI vars quite a few times trying to do funny things. UEFI implementations do tend to be buggy, but not all of them are that catastrophically bad.

For what it's worth, I have a laptop here that can be irrevocably bricked (short of having a flash memory dump on hand that can be flashed back) just by messing around with EFI variables through fully intentional operations (i.e. operations available to any program with Administrator privileges on Windows, or the root user on Linux).


As far as I know, virtually all of the EFI vars will be stored in battery-backed NVRAM, so the usual solution is to just clear that by removing the CMOS battery. I am pretty sure the only exceptions are things you definitely cannot read or write from the host OS (e.g. BIOS passwords.) It does require partially disassembling the laptop, though, and I know there are at least a couple of random laptop models that actually stop working if you clear the NVRAM (lol)


>Rather than have an Apple device become e-waste due to software issues, in recent years it is much more likely to become e-waste due to hardware issues

Apple not supporting their hardware after a short time is a software issue causing e-waste. I have a big box full of non-viable Apple hardware that works perfectly well; Apple just decided to stop supporting all those devices - a bunch of tablets, a couple of Apple TVs, an old Apple Watch, several laptops, etc.

Sure, other manufacturers do this too, but none as badly as Apple does IMHO.


Also worth noting: the version of GPL used by RTEMS seems to be one with a compiler exception, so it probably wouldn't have been an issue for HBC.

Yes[0], and if Team Twiizers had consciously decided to use RTEMS code in that way, they probably would have been fine. However, libogc still cannot legally strip out the GPL copyright notices and distribute RTEMS code in that way.

That being said, RTEMS itself is trying to relicense to BSD 2-Clause, which would obviate the concerns over copyleft, but NOT the thing that libogc did. In fact, the 2 clauses left in the BSD 2-Clause license are the ones that require you to retain the copyright notices. So libogc is still in the wrong.

[0] https://gitlab.rtems.org/rtems/rtos/rtems/-/blob/main/LICENS...


FWIW, whether you agree with the accusation or not, it isn't anonymous. The commit history makes it obvious that it's marcan (Hector Martin) making the accusations.

Whether it's really worth all the hoopla is going to be up to taste. I think it's pointless to just not explicitly credit RTEMS, personally, but I suspect the real point of doing this is in large part to distance themselves from the reverse-engineered libogc code.


> FWIW, whether you agree with the accusation or not, it isn't anonymous. The commit history makes it obvious that it's marcan (Hector Martin) making the accusations.

I was referring to this repo by github account "derek57": https://github.com/derek57/libogc

I assume it's anonymous because the account has no public contact info.


> I'm finding it hard to determine that actual harm has occurred here. [...]. But neither fail0verflow nor RTEMS seem to care about any of this.

? There isn't really any evidence that the original RTEMS developers are aware of this situation.

> You don't need permission to use open source code..

"Open source" on its own is just industry jargon. When you use open source code, you are copying it in accordance with an open source copyright license. The copyright license contains certain stipulations around how it is allowed to be used. For example, BSD licenses require that the copyright notice is included when using the code. IANAL but my understanding is if you omit this information even though your work is a derivative work of the original you're in violation of the copyright license.

> So there appears to be two double standards occurring at once.

You should really elaborate who is being held to what standards because I can't make sense of this.


The point is that nobody is being held to anything. Who will make a case in court? There is nobody to enforce the law, and if there was someone, it can be easily corrected by including these license files. Therefore nothing is blocking either project.

> The point is that nobody is being held to anything. Who will make a case in court? There is nobody to enforce the law, [...]

Lawsuits are very expensive for all parties no matter what; there is clearly no intent to engage in legal action. That has nothing to do with anything. They're trying to distance themselves from illicit behavior, including the behavior they already knew about and let slide in 2007.

(And I doubt it's being done for legal reasons, but distancing yourself from illicit behavior does matter; take a look at what happened with Citra. The case partially hinged on their responses to piracy.)

> It can be easily corrected by including these license files. Therefore nothing is blocking either project.

Tell that to the libogc developers who seem to only be interested in burying the problem rather than trying to rectify it in any way.


These points don't seem to be an argument that harm has occurred.

What is harm? Does infringing someone's copyrights not count?

No, it sometimes does not. The crux is that this is a somewhat novel GPL violation, and their knee-jerk reaction to freeze development is extreme. It's a weird story.

They just "froze" upstream development, but it was purely performative; there isn't actually active upstream development.

If you wanted to fork it and continue development you certainly could.


Call it what you want, but it's just disrespectful and unnecessary. I'm sure we've all fucked up somewhere and didn't attribute something correctly, but I feel like once it's been brought to your attention, it's just silly to not at least acknowledge it (especially if people are paying you to work on it). In this case, it's a somewhat serious licensing issue even if it is unlikely to lead to any actual legal action.

Stolen valor isn't really literal theft either, but that doesn't mean it's okay to do it.


Okay, sure. But the question is what harm is being done. Am I understanding you correctly that your answer is that there is none?

Would you accept any definition of harm short of money being lost or someone beating you with a club?

So far I'm just waiting for any definition.

You got examples and didn't like them. That's fine, that just means people won't indulge you anymore.

Can you clarify what examples of harm have been provided? Disrespecting someone is not harming them, if that is what you're getting at? Your comment is quite disrespectful towards my genuine question which you refuse to answer, and yet, I am not harmed. In fact, I am amused, since it's clear you don't have a real answer and are just resorting to ad hominem attacks instead.

No, I'm going to let 'jchw do it for me, because they are more patient than I would have been and make me thankful I didn't go down that route. I don't really want to engage with someone whose argument is "there's no harm because the harm is plagiarism and according to OpenAI plagiarism is OK".

> the harm is plagiarism

How is plagiarism harmful outside of an academic setting? Is it illegal? Who is hurt by it? In what way? Does this supposed harm outweigh the benefit it brings to the rest of society? And, mostly unrelated, why are you okay with bigtech doing it, but not a mere human?

Just admit you realized that you don't actually have an argument. It's a simple question, and you're not able to answer it.

It's okay to admit you were wrong. It shows growth.


I don't recall ever saying that plagarism by big tech was ok.

I'm more curious what your definition of harm is.

(To be clear, this is a completely pointless tangent, "harm" has nothing to do with whether or not you should condone plagiarism. But you seem rather interested in discussing it, so I am kind of curious what answer you're actually looking for.)


I'm specifically asking you (and other HNers) what definition of harm you think applies here. I'm still waiting.

As for not condoning plagiarism, grow up. We're not kids in school anymore. You're (hopefully) an adult who graduated already.

If you're so against plagiarism, how do you feel about LLMs plagiarizing the whole internet? Didn't all the techbros collectively decide for us that this is the future we want?


> I'm specifically asking you (and other HNers) what definition of harm you think applies here. I'm still waiting.

Well, now I asked for yours, and I'm also still waiting.

> As for not condoning plagiarism, grow up. We're not kids in school anymore. You're (hopefully) an adult who graduated already.

Look, man, I'm not saying we should go kill people for committing plagiarism, I don't think this is the worst thing ever, but it definitely reflects a lack of integrity even if the original authors explicitly don't care. It's dishonest and can put the legal status of a software library into genuine question.

i.e. I care if people lie to me even if the lie doesn't matter that much.

And it is not just a thing in school. Anyone who publishes or really writes anything (e.g. books, video scripts, blog posts, etc.) can ruin their career through plagiarism. It's a cultural faux pas.

https://en.wikipedia.org/wiki/Plagiarism

> If you're so against plagiarism, how do you feel about LLMs plagiarizing the whole internet? Didn't all the techbros collectively decide for us that this is the future we want?

That's a whole other can of worms.


> Well, now I asked for yours, and I'm also still waiting.

I asked first and I don't want to influence your response. So, go ahead. You first.

If your only answer is that plagiarism is bad then I agree with that (in certain settings, such as education), but it's clearly no longer considered to be illegal (if it ever was?) or immoral. Just look at all the bigtech LLMs doing so while raising billions without getting into legal trouble. So apparently society has recently decided that this is fine.


> I asked first and I don't want to influence your response. So, go ahead. You first.

It's simple: I'm not dodging the question, it's just that I don't know. It's complicated. It's easy to punch someone in the face and say "I have harmed this person," but things go into the weeds quickly. Like, can you harm someone through inaction? It's a surprisingly deep philosophical question, and I am not a philosopher. I don't think determining exactly what harm is matters in this particular case anyway, and any definition I could come up with would probably have holes in it and lead to a large debate that isn't actually relevant to the point(s) being made.

> If your only answer is that plagiarism is bad then I agree with that (in certain settings, such as education), but it's clearly no longer considered to be illegal (if it ever was?) or immoral. Just look at all the bigtech LLMs doing so while raising billions without getting into legal trouble. So apparently society has recently decided that this is fine.

Say we really did crack the code on how human learning works and distilled it into an algorithm. If you were able to use this algorithm to produce a representation of learned skills and knowledge, e.g. something lossy enough to be considered legally distinct rather than just a compressed form of the training data, then surely this would not be considered a derivative work of the copyright material used to train it. I think most people would agree with this. (Note the obvious caveats, e.g. if your weights do contain obvious artifacts of direct memorization then it would still be a legal problem.)

Clearly we haven't done that yet, but we did do something that sits between "lossless compression" and "human learning". The courts have the unenviable job of trying to figure out where to draw the line when we still don't really understand what's going on.

I don't really like the heist that occurred with machine learning, but I also lack a satisfactory answer on what exactly they did wrong (except for the obvious, e.g. committing massive amounts of piracy and DDoS'ing the entire Internet for the sake of training data.) I don't think anybody could have foreseen decades ago what would happen with machine learning, in order to make laws that would adequately cover it, and tech companies always move way too fast for regulators to keep up.

However, I don't believe that this means that all plagiarism is simply okay, either legally or morally. I just think we lack an adequate legal framework to represent our moral quandaries with big tech machine learning operations, as the traditional notion of plagiarism doesn't cover the complexities of model weights or model outputs. I also don't think that the current legal frameworks will last forever; it's a golden era for ML companies, but assuming they haven't and aren't cracking the code on artificial cognition (I strongly believe they're not near it atm) I believe regulations will eventually catch up some time after the hype has died down.


Alright, my point is that any harm done here is significantly less than what the bigtech LLMs are doing. If plagiarizing code is bad then so is both building & using LLMs. If building & using LLMs is fine, then so is plagiarizing code.

In this case there's a non-commerical open source project that ignored some other project's licenses. This isn't great, but it doesn't affect me, a third party, in the slightest. I have no reason to be upset about this. It doesn't really affect the other projects either, nor does it negatively affect our society. If anything it adds to our society by giving something people are clearly interested in having.

In the case of RTEMS the only thing they're missing out on is attribution. Nintendo isn't missing out on anything at all, people will still be buying their hardware to run this software.

So my argument is that any harm that may have been done is insignificant at best. Hardly worth getting upset about, especially as a third party.

As for the legal argument, it's hypocritical at best. If someone wants to condemn what happened here they should first go after the big boys who are making billions by doing the same thing on a massive scale.

> If you were able to use this algorithm to produce a representation of learned skills and knowledge, e.g. something lossy enough to be considered legally distinct rather than just a compressed form of the training data, then surely this would not be considered a derivative work of the copyright material used to train it. I think most people would agree with this.

If it's okay for an algorithm to do then it's okay for a human to do. So in that case copyright would be dead since the conclusion is you (or a machine learning algorithm) are allowed to ingest some content, then produce similar content.

A simple example is using an LLM to draw an image of some Disney characters. If we say the LLM is allowed to do this because it learned to do so, which we aren't considering to be plagiarism, then why are human artists being sued by Disney for doing the same?

Or in this case, let's say the original authors used an advanced LLM to assist their coding. The LLM once happened to ingest Nintendo's binary blobs during training and was advanced enough to learn from them. It uses this knowledge to produce code that can interface with the hardware which just so happens to look like the original code because that's just how you do it. Is it suddenly not plagiarism anymore? Did it become morally okay because the LLM laundered the code? Is this any different from LLMs ingesting all of github and becoming coding assistants? Why are we okay with that, but not when a human does it?

I know that in the end the legal answer is that if you have enough money you can do whatever you want, but this doesn't answer the moral questions.


> Alright, my point is that any harm done here is significantly less than what the bigtech LLMs are doing. If plagiarizing code is bad then so is both building & using LLMs. If building & using LLMs is fine, then so is plagiarizing code.

This falls under the "two wrongs don't make a right" adage, I'd argue. (To clarify... I agree, at least insofar as LLM training is plagiarism.)

> In this case there's a non-commerical [sic] open source project that ignored some other project's licenses. This isn't great, but it doesn't affect me, a third party, in the slightest. I have no reason to be upset about this.

Personally, I do sometimes get upset about things that don't directly affect me, as a result of empathy, sympathy, and having principles. I think if you really think about it, you'd agree.

> It doesn't really affect the other projects either, nor does it negatively affect our society. If anything it adds to our society by giving something people are clearly interested in having.

Calculating the effect of one person committing plagiarism is impossible. Part of the reason it's taboo is because I think we all agree the world is a better place when people are honest and give credit where credit is due, even when the threat of a lawsuit is not looming. Even if you are going to potentially violate a copyright license, you may as well be forthcoming about it IMO. And I'm not asking for perfection; we all make mistakes, after all. It's really about how you handle things once a mistake is brought to your attention.

As far as this goes, the tricky part is that right now, anyone distributing software that includes libogc, e.g. almost all GameCube and Wii homebrew, is potentially guilty of unauthorized distribution of copyrighted materials. It is hard to quantify how severe this is, but if you are trying to follow the letter of the law closely, especially since you're likely already engaging in gray area activities like console hacking, you probably want to keep a strong distance from illicit activities. I strongly believe that courts will consider how strong your public commitment to not purposefully violating the law was if you wind up going to trial. Just look at how the Discord conversations wound up factoring into the Citra case. Now that everyone is aware, the ball is suddenly in hundreds of people's courts to figure out; presumably most of them will just do nothing and ignore it, but it's difficult to really quantify what damage is done here.

The homebrew scene has a strong reason to distance themselves from software piracy. When the homebrew scene itself is building on top of potential copyright infringement, that's not a good look. It looks an awful lot like hypocrisy.

> In the case of RTEMS the only thing they're missing out on is attribution.

This part needs a deeper investigation. RTEMS has been relicensing to BSD 2-clause for a while, but some of the older code might only have been available under a variant of the GPL. Software that includes libogc today can't possibly be adhering to the RTEMS license since it will be missing the proper copyright notice and disclaimer, so this will take time to resolve. Meanwhile the modified GPL variant is likely OK for most projects, but it might pose licensing issues for some.

> Nintendo isn't missing out on anything at all, people will still be buying their hardware to run this software.

Those statements are largely not related, and not even necessarily true, as plenty of people run homebrew on emulators. In fact, in many cases, you'll wind up running homebrew more in an emulator than on the real machine just because it's easier to do. Some homebrew actually specifically supports emulators and will take advantage of running in one.

While this seems kind of silly, it's actually a big argument in favor of the legitimacy of emulators, as rather than simply emulate the console, you can argue what they actually do is emulate a platform that is a compatible superset of the game console.

> So my argument is that any harm that may have been done is insignificant at best. Hardly worth getting upset about, especially as a third party.

> As for the legal argument, it's hypocritical at best. If someone wants to condemn what happened here they should first go after the big boys who are making billions by doing the same thing on a massive scale.

It is totally possible to condemn both things. However, if you're a member of the homebrew scene, there's a good chance that one of those problems is more personally relevant to you than the other.

> If it's okay for an algorithm to do then it's okay for a human to do. So in that case copyright would be dead since the conclusion is you (or a machine learning algorithm) are allowed to ingest some content, then produce similar content.

> A simple example is using an LLM to draw an image of some disney characters. If we say the LLM is allowed to do this because it learned to do so, which we aren't considering to be plagiarism, then why are human artists being sued by disney for doing the same?

I think this is an even bigger can of worms than the AI one. We actually don't have a lot of case law on the legality of fan art and fan works in general. Note though that legal bullying can be effective even when the plaintiffs have no real leg to stand on, so it's hard to really judge what it means when someone has to fold to legal threats.

Meanwhile, if you think you can get away with it, I'd actually implore you or anyone else daring enough to try selling a blatantly copyright-infringing Disney T-shirt using Stable Diffusion for the artwork. I strongly doubt this would hold up in court. (If it did, it would be very funny.)

> Or in this case, let's say the original authors used an advanced LLM to assist their coding. The LLM once happened to ingest Nintendo's binary blobs during training and was advanced enough to learn from them. It uses this knowledge to produce code that can interface with the hardware which just so happens to look like the original code because that's just how you do it. Is it suddenly not plagiarism anymore? Did it become morally okay because the LLM laundered the code? Is this any different from LLMs ingesting all of github and becoming coding assistants? Why are we okay with that, but not when a human does it?

Actually, you just described some real-world case law, at least in the United States. Recently, Google LLC v. Oracle America, Inc. established that copying code for the sake of interoperability can be considered fair use. Similarly, Atari Games Corp. v. Nintendo and Sega Enterprises Ltd. v. Accolade, Inc. helped establish this earlier, and you could point to Sony's lawsuits against Connectix and Bleem! as well, as both companies used some degree of non-cleanroom reverse engineering.

Copying code for the sake of interoperability can be fair use, even for libogc, even if some of the resulting code is necessarily structurally similar to the original code. However, e.g. just copying decompilations directly out of Hex-Rays or Ghidra is unlikely to hold up. (Disclaimer: obviously, IANAL.)

Today's case-law regarding machine learning models mostly establishes that the model weights themselves are not inherently infringing because of the training data. I'd argue the implicit legality of model outputs is significantly less charted waters and that is exactly why some ML vendors are providing indemnity agreements: they want to reassure their customers that they will not be liable if the model's outputs are found to be infringing, because there absolutely is still risk that model outputs could be found as infringing. It is not blackletter law that anything that comes out of a model is necessarily free of copyright, trademark and patent infringement.

> I know that in the end the legal answer is that if you have enough money you can do whatever you want, but this doesn't answer the moral questions.

Sure, and just because someone does or doesn't choose to sue also doesn't mean something is morally good or bad.


> This falls under the "two wrongs don't make a right" adage, I'd argue. (To clarify... I agree, at least insofar as LLM training is plagiarism.)

The argument is that so far society at large seems to have decided that what bigtech has done with LLMs is not wrong. Everyone is happily using it, pretty much every company is touting their new "AI" features, and lawsuits haven't gained any traction. So if it's not wrong for an LLM, I'd argue it's not wrong for a human, either.

> Personally, I do sometimes get upset about things that don't directly affect me, as a result of empathy, sympathy, and having principles. I think if you really think about it, you'd agree.

True, in this case I feel sympathy for the poor developers who put time into making a free open source tool that brought many people joy now being harassed over something as insignificant as a license dispute. It's all just made up nonsense designed to protect the big boys who can afford the fancy lawyers anyway.

The rest of the post is mostly about the legal angle where I'm sure you're right, but the main take-away is that these people did not really do anything morally wrong. It's just because of legal bullying that they have to be careful. So my distaste is aimed at those who perform the legal bullying and those who enable it, not at their victims.


> The argument is that so far society at large seems to have decided that what bigtech has done with LLMs is not wrong. Everyone is happily using it, pretty much every company is touting their new "AI" features, and lawsuits haven't gained any traction. So if it's not wrong for an LLM, I'd argue it's not wrong for a human, either.

Right, because if one wrong thing is allowed, we should allow... Other wrong things.

That sounds a lot like two wrongs making a right.

> True, in this case I feel sympathy for the poor developers who put time into making a free open source tool that brought many people joy now being harassed over something as insignificant as a license dispute. It's all just made up nonsense designed to protect the big boys who can afford the fancy lawyers anyway.

Okay. Well I feel sympathy for the poor developers who put time into making a free open source tool that brought many people joy now having their work ripped off without credit because I guess it's okay if LLM training is legal for some reason.

I mean, honest to God, how much rationalizing are we going to go through here? It's okay because LLMs? It's okay because it's free so that means plagiarism is fine? Copyright licenses are "made up nonsense"?

Marcan's response is disproportionate, I never even denied this. Doesn't really have any bearing on whether or not this libogc issue is a problem, and it is still a problem.


> Right, because if one wrong thing is allowed, we should allow... Other wrong things.

Unless you think LLMs deserve more rights than actual flesh and blood humans, yes. Either that or get bigtech to stop what they're doing, but we both know that's not going to happen.

> Okay. Well I feel sympathy for the poor developers who put time into making a free open source tool that brought many people joy now having their work ripped off without credit because I guess it's okay if LLM training is legal for some reason.

If LLMs get to do it, so do humans. If humans don't get to do it, then LLMs don't either. Society has embraced LLMs and bigtech is not going to give up their new toy, so it has to become okay for humans, even if you think this is unfortunate.

> I mean, honest to God, how much rationalizing are we going to go through here?

The same amount as people go through to excuse legal bullying.

> It's okay because LLMs?

Again, humans deserve more rights than machines, not less. So apparently, yes.

> It's okay because it's free so that means plagiarism is fine?

Yeah. No one was hurt, not even financially. You can think it's distasteful, that's fine.

> Copyright licenses are "made up nonsense"?

Yup, only serves to make the rich richer anyway.


> Unless you think LLMs deserve more rights than actual flesh and blood humans, yes. Either that or get bigtech to stop what they're doing, but we both know that's not going to happen.

> If LLMs get to do it, so do humans. If humans don't get to do it, then LLMs don't either. Society has embraced LLMs and bigtech is not going to give up their new toy, so it has to become okay for humans, even if you think this is unfortunate.

> Again, humans deserve more rights than machines, not less. So apparently, yes.

Man, you are fucking obsessed with LLMs. This incident predates the existence of LLMs, has nothing to do with LLMs, and plagiarism and ML training are two completely different issues. And, you keep acting like I am saying I think what happened with LLMs is fine, which I have never said at any point. I didn't say that what happened and is happening with LLMs is fine, only that it is a completely different thing that bears no relation to this whatsoever. Nobody mentioned LLMs. It's not a thing here. Stop talking about fucking LLMs.

> Yup, only serves to make the rich richer anyway.

The GPL is a great example of a copyright license that is explicitly not designed to make the rich richer.


Your code is my code actually. I wrote all of it. Where's the harm?

have fun

I find it intriguing but have so far been unwilling to convince myself to give it a try on anything. It has a lot of good ideas, but I think Apple needs to relinquish more control over its future and direction for me to feel like a future plan change at Apple won't jeopardize its usefulness outside of Apple platforms. Presumably the reason why they want to keep it under their own organization is specifically so they can control their own destiny better, since Apple is heavily using Swift themselves; totally understandable, but trusting it for something that needs to be cross-platform in the long term seems potentially unwise.

It's not fool-proof either. Microsoft started the .NET Foundation, but that hasn't stopped them from causing drama by pushing self-serving decisions from the top-down. I don't really fear the same sort of behavior from Apple very much, but I definitely worry that Apple might eventually lose interest on the cross platform part for sure.

This is especially troubling because it is a fairly innovative language. If you get trapped on an unmaintained port of it, moving off of it seems like it might be hard. It's got a lot of clever ideas I haven't seen elsewhere.
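To give a flavor of the kind of ideas I mean (this is just my own illustrative sketch, not anything from the thread): Swift's enums carry associated payloads and the compiler forces `switch` statements to be exhaustive, so adding a case later is a compile error everywhere it isn't handled.

```swift
// Enum cases can carry typed payloads, unwrapped directly in the pattern.
enum LoadState {
    case idle
    case loading(progress: Double)
    case failed(message: String)
}

func describe(_ state: LoadState) -> String {
    switch state {  // no `default` needed: the compiler checks all cases are covered
    case .idle:
        return "idle"
    case .loading(let progress):
        return "loading (\(Int(progress * 100))%)"
    case .failed(let message):
        return "failed: \(message)"
    }
}

print(describe(.loading(progress: 0.5)))   // loading (50%)
print(describe(.failed(message: "timeout")))
```

Other languages have tagged unions too, of course, but combined with value semantics and optionals it makes for a very pleasant baseline; which is exactly why getting stranded on a dead port would sting.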


Swift evolution and governance is open and publicly documented. It will always be dominated by Apple engineers, but the evolution of the language is largely orthogonal to Apple's OS releases.

I'm not sure how much of the standard library is available on the server side. However, I think it's more about the engineers' interest than it is Apple's, and in that respect, the Swift ecosystem has been evolving constantly, e.g., the Swift toolchain was entirely divested from Xcode a month ago.

I can't speak for the .NET ecosystem, but your fears are unfounded. Whether Swift is useful in a cross-platform context is another question, however.


Orthogonal? Odd thing to say given Swift's evolution and release timeline are entirely constrained by Apple's OS release schedule. We're currently going through the spike in evolution proposals by Apple engineers in preparation for the branching of Swift 6.2 next month before WWDC in June.

As for server side, the standard library is entirely available on other platforms, with a subset available for embedded Swift. However, it's fairly limited when compared to something like Python, and cross-platform support for the other libraries like swift-foundation or SwiftNIO is more limited (IIRC SwiftNIO still doesn't support Windows properly).

I'm not sure what you're talking about with the toolchain. Apple has been producing toolchains that can run on macOS outside Xcode for years. Do you mean the integration of swiftly? I think that just brought swiftly support to macOS for the first time.

Ultimately I agree with jchw; Swift would be in a much better position if it wasn't controlled by Apple's process. Features could get more than a few months work at a time. We could have dedicated teams for maintenance intensive parts of the compiler, like the type checker or the diagnostics engine, rather than a single person, or a few people that switch between focus areas.


Firstly, I believe the fears are founded; these fears are a good starting point, since learning and adopting a programming language is a big investment, and you should be careful when making big investments. To say they're unfounded suggests that they have no basis. Disagreed.

Secondly, I don't really feel like this sort of analysis does much to assuage fears, as Apple's business strategy is always going to take priority over what its engineers individually want. Apple of today doesn't have any obvious reason to just go and axe cross-platform Swift, but if that ever changes in the future, they could do it overnight, like it was never there. They could do it tomorrow. It's not much different than an employee getting laid off abruptly.

This is especially true because in truth Apple doesn't really have a strong incentive in the grand scheme of things to support Swift on non-Apple platforms. Even if they use it in this way today, it's certainly not core to their business, and it costs them to maintain, costs that they may eventually decide benefit their competitors more than they help them.

There's no exact heuristic here, either. Go is entirely controlled by Google and does just fine, though it has the advantage of no conflict of interest regarding platforms. Nobody writing Go on Linux servers really has much reason to be concerned about its future; partly because Google has quite a lot of Go running on Linux today, and given how long it took them to e.g. transition to Python 3 internally, I can just about guarantee you that if Go died it would probably not be abrupt. Even if it was, because of the massive number of external stakeholders, it would quickly be picked up by some of the large orgs that have adopted it, like Uber or DigitalOcean. The risk analysis with Go is solid: Google has no particular conflict of interest here, as they don't control the platforms that Go is primarily used on; Google has business reasons to not abruptly discontinue it, especially not on Linux servers; and there are multiple massive stakeholders with a lot of skin in the game who could pick up the pieces if they called it quits.

I believe Apple could also get to that point with Swift, but they might need a different route to get there, as Swift is still largely seen by a lot of outsiders as "That Apple Thing," and that's why I think they need to cede some control. Even if they did fund a Swift foundation, they could still remain substantially in control of the language; but at least having other stakeholders with skin in the game and a seat at the table would do a lot to assuage fears about Swift's future and decouple aspects of governance from Apple in ways that would probably ultimately benefit Swift for everyone.

P.S.: And I'm not singling Apple out here, because I think any rational company has to make tough decisions sometimes, but it's obvious from their past that they definitely don't fear changes of plan. Look all the way back to OpenDoc. Being willing to make bold changes of plan feels like it's a part of their company DNA.

