
In modern usage, the term 'von Neumann machine' is unfortunately poorly defined. You won't find it defined in Hennessy and Patterson (even though they won the von Neumann award); it's discussed but not defined even in appendix L.

There are strong similarities between the Princeton IAS, UNIVAC 1, IBM 701, and even the CDC 1604 (Cray's transistor version). In fact, I'd say the von Neumann reports [0] are the first 'architecture', with apologies to IBM. Sort of, kind of. Don't chop my head off. But that similarity is not what people are thinking of when they say 'von Neumann machine'. It should be, but it isn't.

But you have to actually read the writing to get a sense of what a von Neumann machine really is, and reading those reports is damn hard. The Computer As Von Neumann Planned It [1] is fairly readable.

As an example, the von Neumann machine word had a binary digit (bit) indicating whether a minor cycle (word) was a standard number (data) or an order (an instruction, really two instructions); see section 15.1 of the EDVAC report if you want to look it up. So it's kind of tagged. Lots of other weirdnesses and cool ideas in there.

  Minor cycles fall into two classes: Standard numbers
  and orders. These two categories should be distinguished
  from each other by their respective first units, i.e. by
  the value of i0. We agree accordingly that i0 = 0 is to
  designate a standard number, and i0 = 1 an order.
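To make that tag bit concrete, here is a toy sketch (Python, purely illustrative; the 32-unit minor cycle and putting i0 at the top of the word are just my reading of the report, not a faithful EDVAC model):

  # Toy illustration only, not a faithful EDVAC model: treat a 32-unit
  # minor cycle as tagged by its first unit i0, with i0 = 0 marking a
  # standard number (data) and i0 = 1 marking an order (instruction pair).
  WORD_BITS = 32

  def classify(minor_cycle):
      i0 = (minor_cycle >> (WORD_BITS - 1)) & 1   # pull out the "first unit"
      return "order" if i0 else "standard number"

  data_word = 0x2A                                # tag clear -> standard number
  order_word = (1 << (WORD_BITS - 1)) | 0x2A      # tag set   -> order

  print(classify(data_word))    # standard number
  print(classify(order_word))   # order

In the actual design the units arrive serially, so "first unit" means first in time; I'm just mapping that onto the high bit here for convenience.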
Anyway, these days whenever someone says von Neumann they generally gloss over this blizzard of detail, which they probably were never taught, and just mean scalar. It's doubtful whether they are even distinguishing the Harvard and Princeton architectures. They just mean something basic, fundamental, abstract. But they don't really mean von Neumann.

All of this was the basis of the patent lawsuit mentioned in the article. Over the years, many histories have been written. Reconsidering the Stored-Program Concept [2] is pretty good.

[0] https://library.ias.edu/ecp

[1] http://cva.stanford.edu/classes/cs99s/papers/godfrey-compute...

[2] http://www.markpriestley.net/pdfs/ReconsideringTheStoredProg...




You are making an interesting point, one that is rather ignored in general observations.

We may say that two strains soon emerged among fixed word-length machines. One prefixed address contents with one or more flags to taint the data regarding its type or kind, as in "<fixed-length taint>((<instruction>[<operand>])|<data>)", where the "taint" may indicate data versus instruction, binary versus decimal, a flag to mark a decision, etc. The other was a more generalized form, "(<fixed-length instruction><operand>)|<data>", where a fixed-length instruction represents a serial selector into a hierarchically structured function table and the interpretation of the type of the content of any address is left to context. While the latter model lends itself more easily to self-modifying programs (with dedicated instructions to address either just the instruction part or just the operand exclusively, and simple increments of instructions to perform pointer arithmetic), a technique which soon became the prevalent standard before the advent of B registers and/or stacks, we eventually saw a return of tainting (both at the machine level and the OS level) to ensure program and data integrity ...
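A rough sketch of that contrast, with entirely hypothetical field widths, opcode values, and helper names, just to make the two layouts concrete (nothing here is drawn from any particular machine):

  # Hypothetical word layouts, only to contrast the two strains above.

  # Strain 1: a taint/flag prefixes every word and declares its kind.
  def tainted_word(taint, payload):
      return (taint << 15) | (payload & 0x7FFF)    # 1-bit taint + 15-bit payload

  def is_order(word):
      return (word >> 15) & 1 == 1                 # the kind travels with the word

  # Strain 2: no taint; instruction versus data is purely contextual, and
  # the operand field can be rewritten on its own: the old
  # self-modifying-code trick for stepping through consecutive addresses.
  OPERAND_BITS = 10
  OPERAND_MASK = (1 << OPERAND_BITS) - 1

  def bump_operand(word, step=1):                  # "pointer arithmetic" by editing the instruction
      operand = ((word & OPERAND_MASK) + step) & OPERAND_MASK
      return (word & ~OPERAND_MASK) | operand

  load_100 = (0b000101 << OPERAND_BITS) | 100      # say, "LOAD 100"
  load_101 = bump_operand(load_100)                # now effectively "LOAD 101"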



