There's something amazing about carrying around your dot-matrix printed source code in a roller suitcase. The only thing that says "stable release" more than printing it out on hundreds of pages of paper is launching the computer running the code into space with no hope of doing an update.
Kids today! You have no idea how lucky you are. At the time this code was written, there were no 32" color monitors, IDEs, code editors, or source code management systems. Code was first written out by hand and then typed onto punch cards or paper tape, which were fed into a mainframe computer for assembly. If you made a mistake, you had to find the offending cards (by hand), remove them, and replace them with corrected cards. Get the cards out of order and you were toast. In general, the only way to get a big picture of the code was to print it out, often over and over. Needless to say, paper was in huge demand.
Yup..."back then" you would arrange all your cards in a shoebox and then take a marker and make a diagonal mark from top left to bottom right JUST IN CASE you dropped said shoebox on the way to class to turn in your homework/take home test. Without that mark there's no easy way to recreate the correct order of the punch cards.
Source: dad was an engineering major in the early 70s.
This comment is extremely interesting to me, as my first instinct would be numbering the cards with a marker. But it makes far more sense to draw a line on the edge for visual sorting.
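That numbering instinct is exactly what the card format anticipated: FORTRAN decks conventionally reserved columns 73-80 of each 80-column card for a sequence field, so a dropped deck could be re-sorted mechanically. A toy sketch in Python (the card contents here are entirely made up):

    # Toy sketch: recovering a dropped deck's order from sequence numbers.
    # Assumes the FORTRAN convention of reserving columns 73-80 of each
    # 80-column card for a sequence field; these card images are hypothetical.
    def restore_deck(cards):
        """Sort 80-column card images by their sequence field (cols 73-80)."""
        return sorted(cards, key=lambda card: card[72:80])

    shuffled = [
        "      PRINT 10".ljust(72) + "00000020",
        "C     HYPOTHETICAL EXAMPLE DECK".ljust(72) + "00000010",
        "      END".ljust(72) + "00000040",
        "   10 FORMAT (13H HELLO, WORLD)".ljust(72) + "00000030",
    ]
    for card in restore_deck(shuffled):
        print(card)

(Card sorting machines could do the same sort on those columns, which is part of why the convention existed; the diagonal stripe was the low-tech backup.)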
Also, for all but the richest programmers who had direct access to the computer, the typical experience was that you submitted your stack of cards at the end of the day and got a printout in the morning... Probably why my old programming teacher scorned our iterative approach to repeatedly stumbling upon syntax errors by hitting the 'compile' key and adding the missing commas.
Since each punch card meant something, you really learned the code. Watching out for memory and keeping track of registers was paramount (COMPASS was my HS and freshman initial class). Syntax errors killed, but were overcome by hand-writing code first. The science of CPS courses had to be taught just to be able to pass the class via the assignments. Writing an OS with punch cards took an entire term.
We could get same-day runs if we paid more or had our own account. There were three tiers of cost.
I'm 57 and a working developer. In school, it was a beautiful feeling if your card deck compiled, ran, didn't run out of CPU time, and produced the correct printout, the very first time.
> space probes occasionally do get software updates
Sure. In 2014. Not in 1969.
I once had the very interesting experience of developing an in-place update for a mid-1980s-vintage spacecraft instrument: the magnetometer on the Galileo spacecraft. It used an 1802 processor with 2k bytes of memory and was programmed in Forth. The development system ran on an Apple II. The instrument developed a bad byte of RAM and so a patch had to be developed, but the Apple II that ran the Forth compiler had long since been decommissioned. (It was nine years between when the spacecraft was completed and when it arrived at Jupiter, three of which were a delay caused by the Challenger explosion.)
I ended up writing a complete 1802 emulator and new Forth development environment that was able to compile the original source code and reproduce what was then running on the instrument. That code ran in Common Lisp on a Macintosh II. Using that new development system, I produced a patch, which was only about a dozen bytes long. It was quite an inexpensive project by NASA standards (took about three months from start to finish with just me working on it) but the cost per byte of output code was pretty staggering :-)
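For flavor, here is a minimal sketch (Python, purely illustrative, and nothing like the actual Lisp tooling described above) of the kind of fetch/decode/execute loop at the heart of an 1802 emulator. The chip's regularity helps: sixteen 16-bit registers, one of which (selected by P) serves as the program counter, and opcodes whose high nibble is the operation and low nibble names a register:

    # A minimal, illustrative 1802-style core; it handles just enough
    # opcodes to run the tiny hand-assembled test program at the bottom.
    class CDP1802:
        def __init__(self, memory):
            self.mem = bytearray(memory)  # e.g. 2K, as on the Galileo magnetometer
            self.r = [0] * 16             # R0..RF: 16-bit pointer/scratch registers
            self.p = 0                    # which R register is the program counter
            self.d = 0                    # 8-bit accumulator
            self.running = True

        def fetch(self):
            op = self.mem[self.r[self.p]]
            self.r[self.p] = (self.r[self.p] + 1) & 0xFFFF
            return op

        def step(self):
            op = self.fetch()
            hi, n = op >> 4, op & 0x0F
            if op == 0x00:            # IDL: idle (halt, for our purposes)
                self.running = False
            elif hi == 0x1:           # INC Rn
                self.r[n] = (self.r[n] + 1) & 0xFFFF
            elif hi == 0x2:           # DEC Rn
                self.r[n] = (self.r[n] - 1) & 0xFFFF
            elif hi == 0x8:           # GLO Rn: D <- low byte of Rn
                self.d = self.r[n] & 0xFF
            elif hi == 0xA:           # PLO Rn: low byte of Rn <- D
                self.r[n] = (self.r[n] & 0xFF00) | self.d
            else:
                raise NotImplementedError(f"opcode {op:02X}")

    # Hand-assembled test: INC R3, INC R3, GLO R3, IDL.
    cpu = CDP1802(bytes([0x13, 0x13, 0x83, 0x00]) + bytes(2044))
    while cpu.running:
        cpu.step()
    assert cpu.d == 2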
The LM computer on Apollo 14 had to be updated in flight in order to bypass a faulty abort switch.[1] The patch was applied by radioing the instructions to the crew and having them enter it manually.
Note that this was not a code patch: it was a sequence of changes to the program state that tricked it into ignoring the abort switch. The code itself was stored in read-only core rope memory, so it was physically impossible to change (the memory was written by appropriately winding wires around ferrite cores).
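A loose illustration (not the actual AGC logic; the flag name and checks here are invented) of why a state patch works when the code cannot change: the read-only program consults writable state before honoring the switch, so poking that state is enough to change its behavior.

    # Illustrative only; flag name and logic are invented, not the AGC's.
    erasable = {"abort_in_progress": False}   # writable 'erasable' memory

    def abort_monitor(switch_closed):         # imagine this burned into core rope
        if switch_closed and not erasable["abort_in_progress"]:
            return "ABORT!"
        return "continue descent"

    print(abort_monitor(switch_closed=True))  # the shorting switch forces an abort

    # The Apollo 14-style workaround: convince the program an abort is
    # already underway, so the faulty switch is ignored from then on.
    erasable["abort_in_progress"] = True
    print(abort_monitor(switch_closed=True))  # now: 'continue descent'

The real procedure was of course far more involved and had to be keyed in at the right moments, but the principle is the same: only erasable memory changed.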
> I once had the very interesting experience of developing an in-place update for a mid-1980s-vintage spacecraft instrument: the magnetometer on the Galileo spacecraft.
Sigh. Reading this at the start of my workday is not doing wonders for my morale.
Only tangentially related to the article, but since you are commenting here, I'd like to thank you for "Lisping at JPL". I've read it several times, which caused me to start collecting all the info I could get on the Remote Agent. I couldn't find that much, though. Surely such a project should have generated way more research papers than it did? Since it doesn't seem to be used anymore, where's the source code? It feels like it was killed with fire.
I wonder if current generation spacecraft now have something even half as powerful as the DS1 REPL...
> Reading this at the start of my workday is not doing wonders for my morale
I'm very sorry to hear that. Would it help if I told you how my life sucks now? ;-)
> Surely such a project should have generated way more research papers than it did? Since it doesn't seem to be used anymore, where's the source code?
I have no idea. I strongly suspect that the RAX code was not preserved, though I could be wrong. You might want to contact the JPL public-relations office and ask them.
It is hard to appreciate just how chaotic the entire DS1 project was. The mission was trying to do things that had never been done before, on a budget and schedule that were a tiny fraction of anything that had come before. And then RAX was a little cauldron of chaos within the larger chaos. We were a team forced to work together by largely political rather than technological considerations. We were using CVS for version control (this was the mid-'90s, remember). It was all we could do to come up with something that worked at all. It was very stressful, and when it was over, everyone just wanted to distance themselves from it as fast as possible. There was no money for anyone to properly catalog and archive the code (AFAIK).
> I'm very sorry to hear that. Would it help if I told you how my life sucks now? ;-)
I'm sure it does ;)
However: back in those days, computing skills were in mad demand (much more so than today, impressions to the contrary notwithstanding) and almost everything that you could do with a computer turned out to be something interesting and new.
With all the 'base' problems solved and the enormous resources available today, the creativity and joy have definitely dropped a level or two. That doesn't mean it isn't possible to derive some satisfaction from the creation of something, but it is definitely harder to come by than it was in days past.
Creativity explodes when there are more constraints; we are now, for all practical purposes, almost unconstrained.
> I'm very sorry to hear that. Would it help if I told you how my life sucks now? ;-)
Not at all. It would only increase the (perceived!) overall suckiness of the universe :)
> I have no idea. I strongly suspect that the RAX code was not preserved, though I could be wrong. You might want to contact the JPL public-relations office and ask them.
I didn't know such an attempt could have a greater-than-zero chance of success. Let's see how it goes.
> And then RAX was a little cauldron of chaos within the larger chaos.
And yet, it flew!
> It was very stressful, and when it was over, everyone just wanted to distance themselves from it as fast as possible.
Uh oh. That's far worse than having no budget for archival.
Yes, it did. And it even mostly worked. And even when it didn't work it let us demonstrate in-situ debugging and repair, which had also never been done before (and AFAIK has not been done since).
> I once had the very interesting experience of developing an in-place update for a mid-1980s-vintage spacecraft instrument: the magnetometer on the Galileo spacecraft.
One of the many reasons why I love reading the comments on this site.
Sometimes I like to see what I can figure out in 5 minutes without help or easy search terms; it keeps my Google-fu sharp. Clicking on your name would've been way too easy. Although I probably should've looked just in case before I asked, haha.
Anyway, nice to run into you. I remember your writings were some of the inspiration for my work on an embedded, efficient 4GL. I no longer have the tool, or even remember exactly what I read, but I know your writings factored into it a bit. So, thanks for that, I guess, is all I can say there. :)
Btw, I recently discovered the 1802 in my high-assurance and anti-subversion research in hardware. That foray focused on chips that demonstrated extreme reliability and longevity. I looked into spaceflight given the overlap in requirements, and discovered it there. The Intersil datasheet had an amazing amount of info compared to most chips, with some impressive claims about what it could handle.
Since you had to study it, do you think it's worth using on an older process node for high reliability today? Anything good about it you remember that's worth copying today? Or was it one of the bigger headaches of your career? I'm just curious, as verifiable designs will likely require inspectable fabs... which require tiny CPUs. That means I can't overlook any old wisdom from a time with similar constraints.
I liked the 1802 a lot. Nice orthogonal architecture, lots of registers, made writing the emulator a snap. The only problem with it was that it was dog-slow even in its day: 8 clock cycles per machine cycle. I think you could do better today with a modern FPGA.
I'm doing some work on secure hardware (and software) myself. Would love to compare notes. I sent you an email.
" Equals 2 machine cycles - one Fetch and one Execute operation for all instructions except Long Branch and Long Skip, which require 3 machine cycles - one Fetch and two Execute operations."
Max clock is 3.6MHz, and at 8 clock pulses per machine cycle that's 16 clocks for a typical instruction, so still dog slow compared to most. 3.6MHz is still very usable in lots of control applications, some logging, and possibly a trusted coprocessor if it's a simple function. A speed improvement is a must, though, for an 1802/2.
I remember the 8080 used 3 clock cycles for a machine cycle, and 2 or 3 machine cycles for an instruction. It really took years to get to one instruction per clock cycle. Pretty much all modern processors can do that now (or even three or four per clock).
Thanks, as I couldn't recall how the average CPU performed back then. I was writing it off as the safety/reliability features possibly sapping the performance a bit. Still wondering about the "noise immunity" feature, though; I thought all digital gates had that property.
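A quick back-of-the-envelope, taking the datasheet figures quoted above at face value (8 clock pulses per machine cycle, 2 machine cycles for most instructions):

    clock_hz = 3_600_000                      # 3.6 MHz max clock
    clocks_per_instruction = 2 * 8            # 2 machine cycles x 8 clocks each
    print(clock_hz / clocks_per_instruction)  # => 225000.0 instructions/sec

So roughly 225k instructions per second at best: dog slow indeed, but plenty for the control and logging uses mentioned above.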
Yeah, I know modern-era machines have in-the-field updates, but I assume that wasn't a feature on the Lunar Module. You need a device capable of self-programming, and given the state and cost of memory back then, I would think that an impossibility.
There's a link to the source code on GitHub[0], but is that not the full source? Searching for a comment like 'hello there' from the photo[1] turns up nothing in the repo.
If you enjoy this, you might also enjoy a DIY Apollo Guidance Computer build by John Pultorak [1]:
"John Pultorak created a working reproduction of the Apollo Guidance Computer (AGC), wrote a complete manual that will allow you to build your own Apollo flight computer clone and released it in the public ___domain."
There's mention of interpreted sections in the code. Do you know if it was possible to manually enter interpreted code to have it run by the system afterward - or even to program some of the software by hand (being dictated hex) in case something unexpected during the mission warranted it?
All of the code was in read-only memory (core rope), so it was impossible to modify it.
There was at least one case when the behaviour of the code was significantly changed in flight to work around a hardware failure (shorted button). This involved changing the writable memory, which kept all the state, to selectively disable some parts of the lunar landing program. For more details see: