
Von Neumann is more general and provides more flexibility (what if you need more data space and don't need all that code space?).

There are still new Harvard-architecture microcontrollers being designed. It provides some security in the field and simplifies the chip layout.




It'd be pretty annoying for a JIT compiler to have to manage, too. Java can do a garbage collection with barely any memory left - cool! But it'd be a shame if similarly tricky stuff had to be written just to, e.g., decompile code to make way for more code space. Pages with only one of 'write' or 'execute' privileges work well enough.
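For what it's worth, the usual W^X dance on a POSIX system looks roughly like this (a minimal sketch; the emitted bytes are just an illustrative x86-64 "return 42"):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>
    #include <sys/mman.h>

    /* Sketch of a W^X-friendly JIT flow: emit into a writable,
       non-executable page, then flip it to read+execute before
       calling into it. The emitted bytes are "mov eax, 42; ret". */
    int main(void) {
        size_t len = 4096;
        uint8_t *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                             MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) return 1;

        uint8_t code[] = { 0xb8, 0x2a, 0x00, 0x00, 0x00,  /* mov eax, 42 */
                           0xc3 };                        /* ret         */
        memcpy(page, code, sizeof code);

        /* Drop write permission before executing: the page is never
           writable and executable at the same time. */
        if (mprotect(page, len, PROT_READ | PROT_EXEC) != 0) return 1;

        int (*fn)(void) = (int (*)(void))page;
        printf("%d\n", fn());   /* prints 42 */
        return munmap(page, len);
    }

The page is writable or executable, never both at once, and the JIT only pays an mprotect call per flip.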


Harvard architecture is pretty common in the MCU and ASIC space, especially in tiny low-power processors that might have a small amount of flash that instructions are read out of and an even smaller amount of SRAM for data. The ARM9 architecture, for example, is effectively Harvard.


These use cases could be served by asking the operating system or the hardware to copy data into code memory. It weakens absolute protection, but it would be an escape hatch to enable JIT compilers.


That's half the definitional difference between Harvard and Von Neumann architectures, though: that never the twain shall meet.

There's a second insight on the tip of my tongue here: Harvard architectures can emulate Von Neumann ones, and vice versa. And problems, specific problems, can be solved by either. But it seems to me that there's something interesting about emulation too: that Harvard architectures open themselves up to precisely the same attack vectors if you enable "Copy this data block to code memory".


But not if, instead of copying, they support a "Run this data as a program in a virtual machine with access to these segments of memory" operation. The behavior would be functionally identical to emulating the architecture and interpreting the data, except that it wouldn't be slow and could be done recursively with no cumulative overhead.


If you couldn't transfer data memory to code memory at all, how would it be possible to write an ordinary compiler? If it's possible to run a compiler on the system, just have your JIT do whatever the compiler does.


Write it to the filesystem (presumably in ELF format) and execute the file.
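A minimal sketch of that route on a POSIX system, with the paths, compiler invocation, and generated function all made up for illustration (build with -ldl on older glibcs):

    #include <dlfcn.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch of "compile via the filesystem" instead of writing machine
       code into our own address space: emit C source, let the system
       compiler produce a shared object on disk, then dlopen it. */
    int main(void) {
        FILE *f = fopen("/tmp/jit_gen.c", "w");
        if (!f) return 1;
        fputs("int generated_add(int a, int b) { return a + b; }\n", f);
        fclose(f);

        /* The ordinary ahead-of-time toolchain does the code generation. */
        if (system("cc -shared -fPIC -o /tmp/jit_gen.so /tmp/jit_gen.c") != 0)
            return 1;

        void *lib = dlopen("/tmp/jit_gen.so", RTLD_NOW);
        if (!lib) return 1;

        int (*add)(int, int) = (int (*)(int, int))dlsym(lib, "generated_add");
        if (!add) return 1;

        printf("%d\n", add(2, 40));   /* prints 42 */
        return dlclose(lib);
    }

The process itself never needs a page that is both writable and executable; the toolchain and the loader do all the code generation and mapping.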

Some Harvard architecture implementations provide ways of shoving data into executable space and back (like special copy instructions), but it's not general purpose. Generally you compile on a Von Neumann machine and flash the result to the Harvard one.


Well, you write a bytecode compiler and run an interpreter for it out of code space. It could be a JIT compiler, just not one that generates native code.

This approach was pretty common in the 80s Lisp days IIRC.
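As a toy illustration of why this sidesteps the problem: the interpreter below is the only real code, and the "program" it runs is plain data, so nothing ever has to move from data memory into code memory (the instruction set is invented for the example):

    #include <stdint.h>
    #include <stdio.h>

    /* Toy stack-machine interpreter: the dispatch loop is the fixed code,
       and the bytecode lives entirely in data memory, so a "JIT" targeting
       this instruction set never needs to write to code space. */
    enum { OP_PUSH, OP_ADD, OP_MUL, OP_PRINT, OP_HALT };

    static void run(const uint8_t *bc) {
        int64_t stack[64];
        int sp = 0;
        for (size_t pc = 0; ; ) {
            switch (bc[pc++]) {
            case OP_PUSH:  stack[sp++] = (int8_t)bc[pc++];           break;
            case OP_ADD:   sp--; stack[sp - 1] += stack[sp];         break;
            case OP_MUL:   sp--; stack[sp - 1] *= stack[sp];         break;
            case OP_PRINT: printf("%lld\n", (long long)stack[--sp]); break;
            case OP_HALT:  return;
            }
        }
    }

    int main(void) {
        /* (2 + 5) * 6 = 42, expressed as plain data. */
        const uint8_t program[] = { OP_PUSH, 2, OP_PUSH, 5, OP_ADD,
                                    OP_PUSH, 6, OP_MUL, OP_PRINT, OP_HALT };
        run(program);
        return 0;
    }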


Java already compiles to bytecode, which executes on the JVM. But interpreting bytecode is too slow for most practical applications. JIT compilation to native code is crucial for executing Java programs at competitive speed.

It would be possible to compile everything to machine code ahead of time (pretty much what GraalVM's native image does), but that would compile everything, even code that never ends up being executed*. Without a JIT compiler, it would not be possible to recompile code to take advantage of tracing information gathered at runtime. Also, newly loaded Java code could not be optimized at all.

There are workarounds for all of these. For example, the JIT compiler could generate a new executable with the optimized code and migrate execution state to a new process. But it seems very clunky.

*: Not really an issue for microservices and stripped-down binaries. A huge deal for IDEs and environments with plugin architectures, though.


In a standard Harvard architecture that isn't possible.




