Hacker News

This is great. One step closer to a computation platform where the data just streams continuously through operative "channels" and the execution is mostly encoded in the layout of the data in memory. There is no "program" to load, just a river of data squeezing through cores, which endlessly perform the same operations in parallel (until some other driftwood in the data instructs them to change that, for example to an addition operation).

The complexity of such a program would mostly be handled at compile time, laying out the instruction flow in memory. Data would have physics: the parallel nature of this "simulation" pumping data through would add behaviour roughly similar to our physics. Interaction between elements processed in parallel has a scope, like a light cone in our physics. (SumOf(NrOfCores*LargestChunkOfData))

If you want to safely remove data from, or place new data into, the whole affair, one just leaves large enough "gaps" in the data stream that they can be assumed to act as a firewall.

In my undergrad days I built a little computer science experiment. Nothing fancy, just an idealized, parallel "engine" copying data chunks of varying size from buffer A to buffer B.

https://github.com/PicassoCT/CompScienceExperiments/tree/mas...

Even just this has interesting interaction behaviour. Like sand on a beach, the data gets sorted by size after a number of intervals. One can also insert "dark matter", i.e. dead chunks, into the buffers, which has the effect of "accelerating" large chunks of data while the small chunks stay put. All of this is from the point of view of the buffer, of course, in which one unit of time is one "total copy from A to B".
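A toy sketch of what such an engine could look like (my own guess at the model, not the code from the linked repo): each of N cores copies one chunk at a time at a fixed rate of one size unit per simulated tick, and finished chunks are appended to buffer B in completion order. Under that assumption, small chunks finish earlier and drift toward the front of B, which gives the rough sorting-by-size effect described above.

```python
import heapq
from collections import deque

def copy_pass(buffer_a, num_cores):
    """One 'total copy from A to B' using num_cores parallel cores.

    Each core copies one chunk at a time at a fixed rate (one size
    unit per simulated tick); when a chunk finishes, it is appended
    to buffer B and the core grabs the next pending chunk.  Smaller
    chunks finish earlier, so they drift toward the front of B.
    """
    pending = deque(buffer_a)   # chunks (represented by size) awaiting a core
    in_flight = []              # min-heap of (finish_tick, chunk_size)
    buffer_b = []
    # Each idle core starts on the next pending chunk at tick 0.
    while pending and len(in_flight) < num_cores:
        chunk = pending.popleft()
        heapq.heappush(in_flight, (chunk, chunk))
    while in_flight:
        tick, chunk = heapq.heappop(in_flight)
        buffer_b.append(chunk)  # chunk has fully arrived in B
        if pending:
            nxt = pending.popleft()
            heapq.heappush(in_flight, (tick + nxt, nxt))
    return buffer_b

# Two cores, four chunks: the small chunks overtake the large ones.
print(copy_pass([9, 1, 1, 9], num_cores=2))   # -> [1, 1, 9, 9]
```

Dead "dark matter" chunks could be modelled as zero-size entries that a core skips over for free; the details of how they interact with large chunks would depend on the actual copy mechanics, which this sketch deliberately keeps minimal.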

Sorry, I found this fascinating, even though it might be of no value beyond distraction.



