
The hard part doesn't go away unless you pull in a GC or something similar to take care of lifetime bugs. In Go you always use a GC and circumvent Rust's explicit lifetime management syntax. Some data structures are very hard to write without something akin to a GC, so my bet is on a few opcodes trickling down into mainstream CPUs to make GC easier to implement in optimal time and space. I'm surprised Android hasn't forced anything like that into its supported ARM designs, since such CPU support has been available in production systems since the 1970s and hence isn't anything revolutionary; it's just not on the radar like vector instructions are.
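A hedged sketch of the "hard without a GC" point, using a doubly linked node as the classic example: in Rust you need reference counting plus a Weak back-edge so the two nodes don't form a leaked Rc cycle, whereas a tracing GC (as in Go) would collect the cycle regardless of how the links point. The `Node` type and helper functions here are illustrative, not from any particular library.

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A doubly linked node: strong forward link, weak backward link.
// If both links were strong Rc pointers, the pair would keep each
// other alive forever -- a leak that a tracing GC would not have.
struct Node {
    value: i32,
    next: Option<Rc<RefCell<Node>>>,
    prev: Option<Weak<RefCell<Node>>>, // weak, or the two nodes leak each other
}

fn new_node(value: i32) -> Rc<RefCell<Node>> {
    Rc::new(RefCell::new(Node { value, next: None, prev: None }))
}

// Link a -> b (strong) and b -> a (weak), then follow b's
// back-pointer and return the value found there.
fn link_and_read_back(a: &Rc<RefCell<Node>>, b: &Rc<RefCell<Node>>) -> i32 {
    a.borrow_mut().next = Some(Rc::clone(b));
    b.borrow_mut().prev = Some(Rc::downgrade(a));
    let back = b.borrow().prev.as_ref().unwrap().upgrade().unwrap();
    let v = back.borrow().value;
    v
}

fn main() {
    let a = new_node(1);
    let b = new_node(2);
    assert_eq!(link_and_read_back(&a, &b), 1);
    println!("back-pointer works without leaking a cycle");
}
```

The `Weak` bookkeeping is exactly the kind of manual discipline a GC'd language spares you from.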



Is this based on your experience learning lifetimes in Rust? I very rarely find myself needing to do convoluted lifetime tricks -- most of the code I write doesn't even need explicit annotations. So I'd be curious to hear what project you may have worked on as a beginner where lifetime management became a serious obstacle.


I'm not talking about Rust lifetimes specifically but lifetimes as a common term across many languages. What I mean is that Rust forces you to explicitly deal with the lifetime aspects of variables before the compiler will generate any code. In C and C++ you're free to skip that and introduce leaks or use-after-free bugs, which the compiler accepts because the language definitions allow them. I didn't mean to say it's more difficult; in fact, it's less work to deal with these issues before the code gets generated than to debug mysterious bugs later. Put differently, you're debugging your code while trying to compile it instead of after someone has hopefully managed to trigger the bug and provide enough clues for you to identify it.
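A minimal sketch of that compile-time-versus-runtime point. The function names here are made up for illustration: in C, returning a pointer to a stack local compiles fine and blows up at runtime; the equivalent Rust is rejected before any code is generated, and the compiler nudges you toward returning an owned value instead.

```rust
// The dangling version is rejected at compile time, so it is shown
// only as a comment -- the Rust compiler refuses to build it:
//
//     fn dangling() -> &String {        // error: missing lifetime specifier
//         let s = String::from("oops");
//         &s                            // error: `s` does not live long enough
//     }
//
// The fix the compiler pushes you toward: hand ownership to the caller.
fn owned() -> String {
    let s = String::from("ok");
    s // the value is moved out; no dangling reference is possible
}

fn main() {
    println!("{}", owned());
}
```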


Could you provide references for the claim that hardware support for GCs has been in CPUs for decades?


The Burroughs line of machines, and the Intel iAPX 432 and its derivative, the i960 (which is actually still in use). The project for which Intel designed that CPU together with Siemens would have given us a safer foundation: an Ada-based operating system running on that processor in the 1980s. But then UNIX won and gave us the hegemony of C and all the avoidable security bugs associated with it. Lisp machines had similar designs, and these days there are RISC-V designs with similar features. Just like DJB has been wishing for a few extra arithmetic instructions for crypto in x86 processors, I wish for instructions that would make garbage collectors more efficient and also provide support for memory safety features.

https://en.wikipedia.org/wiki/Intel_iAPX_432

https://en.wikipedia.org/wiki/Intel_i960

https://en.wikipedia.org/wiki/Burroughs_large_systems#Tagged...

http://www.smecc.org/The%20Architecture%20%20of%20the%20Burr...

I'm sure there were similar designs in the 1990s, but I don't have references ready off the top of my head like the above.
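The tag check that tagged architectures like the Burroughs machines and Lisp machines performed in hardware on every word can be sketched in software; the encoding below (low bit distinguishes an immediate integer from an aligned heap pointer) is a hypothetical example of the scheme, similar to what many GC'd runtimes do today without hardware help.

```rust
// One-bit tagging: odd words are immediate integers, even words are
// (aligned) heap pointers. Tagged hardware does this check for free
// on every memory access; software runtimes pay for it on each use.
const TAG_MASK: usize = 0b1;
const TAG_INT: usize = 0b1;

fn make_int(n: isize) -> usize {
    // Shift the value up and set the tag bit.
    ((n as usize) << 1) | TAG_INT
}

fn is_int(w: usize) -> bool {
    w & TAG_MASK == TAG_INT
}

fn int_value(w: usize) -> isize {
    // Arithmetic shift restores the signed value.
    (w as isize) >> 1
}

fn main() {
    let w = make_int(-42);
    assert!(is_int(w));
    assert_eq!(int_value(w), -42);
    println!("tagged word {:#x} decodes to {}", w, int_value(w));
}
```

A GC walking the heap can then skip tagged integers and trace only real pointers, which is one of the things dedicated opcodes could accelerate.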


Thanks. I will definitely look into these when I have the time. I'm actually a grad student in EE but this topic interests me. Maybe I can get something published on hardware accelerated GCs... one day!


Genuinely curious: do they not teach historic designs in EE curricula? You'd want to learn from and improve on them rather than reinvent something half-way, or worse. Also, older designs are much easier to study in full compared to the super-complex logic inside a current-day x86.

Personally, I would call it hardware-assisted GC, if we consider hardware acceleration to be things like GPUs, crypto accelerators, etc.


An undergrad computer architecture course would typically gloss over the history of CPUs and focus on either MIPS or x86 (or both). For example, I did two courses on x86, starting from the 8086 (and 8088), through the Pentium, and some bits of Itanium.

You're right that starting with a simple architecture makes things much easier. 8086 for example operated in 16-bit real mode (segmented memory), and so the memory layout was trivial compared to 32-bit protected mode in x86.
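The real-mode address computation mentioned above is simple enough to write down: a physical address is the 16-bit segment shifted left by four, plus the 16-bit offset, giving a 20-bit address space. A small sketch:

```rust
// 8086 real-mode translation: physical = (segment << 4) + offset.
// Two registers of 16 bits each combine into a 20-bit address,
// so distinct segment:offset pairs can alias the same byte.
fn real_mode_addr(segment: u16, offset: u16) -> u32 {
    ((segment as u32) << 4) + offset as u32
}

fn main() {
    // The classic reset vector F000:FFF0 maps to physical 0xFFFF0.
    let a = real_mode_addr(0xF000, 0xFFF0);
    assert_eq!(a, 0xFFFF0);
    println!("F000:FFF0 -> {:#07x}", a);
}
```

Contrast that with protected mode, where the segment value is a selector into a descriptor table rather than a direct component of the address.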

I haven't taken any graduate architecture courses yet, but my assumption is that they would go into more detail on the development of CPUs through history.



