Hacker News
It's often said that the Analytical Engine was before its time (twitter.com/rygorous)
252 points by luu on Feb 16, 2022 | 95 comments



While this is a good point (yes, many of Babbage's difficulties were self-inflicted; arithmetic on 40-digit numbers didn't need to be done in mechanical hardware and could have been done in software, as the industry started to realize in the 1970s), looking at the historical evolution—how Babbage got to the Analytical Engine—explains why he ended up there:

• Babbage's first insight was that many books of tables (log tables, sine tables, actuarial tables) could be generated mechanically, using finite differences: basically, any "nice" function can be well-approximated by (say) a sixth-degree polynomial, and that was the basis of his Difference Engine. He did understand this; see the paragraph just before his frequently quoted one:

> One gentleman addressed me thus: “Pray, Mr. Babbage, can you explain to me in two words what is the principle of this machine?” Had the querist possessed a moderate acquaintance with mathematics I might in four words have conveyed to him the required information by answering, “The method of differences.” The question might indeed have been answered with six characters thus—

    Δ⁷uₓ = 0
but such information would have been unintelligible to such inquirers.

(This paragraph is followed by the famous:

> On two occasions I have been asked,—“Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

— but he does attempt to briefly answer how error-correction could be built into the machine.)

• In short, as he explains, his Difference Engine can be seen as a glorified version of a simple machine that turns a triple of integers (x, y, z) into the triple (x + y, y + z, z). If you start this machine with the triple (0, 1, 2), then it successively turns it into (1, 3, 2), then (4, 5, 2), then (9, 7, 2), then (16, 9, 2), etc — so in n steps you get n^2 as the first number. (In general, starting with (a, b, c) gives the quadratic function a + nb + (n(n-1)/2)c after n steps.) Extend this to six orders of differences and many digits of precision and you have the Difference Engine, which could compute arbitrary sixth-degree polynomials and could indeed automate a lot of the tables that were being built by hand. (See the quick sketch at the end of this comment.)

• As Tom Forsyth points out in the replies on the Twitter thread (https://twitter.com/tom_forsyth/status/1359572977377890304), for this purpose, he really needed all those digits of precision. (BTW his blog post "Babbage was a true genius" looks great: http://tomforsyth1000.github.io/blog.wiki.html#%5B%5BBabbage... )

> What the Difference Engine did was polynomials by forward differencing, which is just a bunch of adds. You actually do need massive precision there, and/or the numbers have a high dynamic range. So until floating-point, yeah you need a lot of digits.

• The big mistake Babbage seems to have made (IMO), after coming up with this idea for the Difference Engine, was to have grand visions of it (it can do all the tables!), think it would be super useful, and present it to the government. Instead of taking private funding, he thought his great invention should be the property of the country and partly funded by government. Of course, like any engineer, he underestimated how long it would take, and the government entanglement dragged on for twenty years, with the government at various points deciding whether to pour more money in. Meanwhile he came up with loops and branches and arbitrary computation—the Analytical Engine—asked them "hey, I have something better than the project you've been funding, what do you think?", put them in an impossible spot, and was too socially naive to realize that he had become persona non grata.

• John Nagle (Animats) has commented a few times about how, contrary to the "high culture" story of the evolution of computers (Turing, von Neumann etc), the gradual evolution of "calculators" was itself leading up to computers (e.g. https://news.ycombinator.com/item?id=10636154). Something similar appears to have happened with Babbage, where he started with a simple calculating device and, thinking about it more deeply, single-handedly came up with a Turing-complete design and was writing programs for it.

• I started reading Babbage's memoirs "Passages From the Life of a Philosopher" (http://onlinebooks.library.upenn.edu/webbin/book/lookupid?ke...) after seeing an intriguing Knuth reference to it (see Russ Cox's blog post https://research.swtch.com/tictactoe "Play Tic-Tac-Toe with Knuth"). I'm only a third of the way through it, but unlike the popular image (his plans were never completed, the project was a failure, etc), he seems to have been the real deal: he really did understand what computation was possible, and in many respects he was thinking like a programmer. Too bad the machine-making of his time was not up to the task, or we might have had a different history. (According to Wikipedia, William Gibson and Bruce Sterling had the same thought, and came up with their 1990 novel "The Difference Engine", establishing the genre of steampunk.)
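
To make the finite-difference trick above concrete, here is a quick Python sketch (my own illustration, nothing from Babbage) of that (x, y, z) → (x + y, y + z, z) machine, generalized to any number of difference columns:

    # State is a list of differences [f, d1, d2, ..., dk]; dk is held
    # constant. Each "turn of the crank" adds each column into the one
    # to its left, using the old values of the right-hand neighbours.
    def crank(state):
        for i in range(len(state) - 1):
            state[i] += state[i + 1]

    state = [0, 1, 2]          # the (0, 1, 2) example from above
    squares = []
    for n in range(1, 6):
        crank(state)
        squares.append(state[0])
    print(squares)             # [1, 4, 9, 16, 25]

With seven columns instead of three, the same loop tabulates any sixth-degree polynomial, which is essentially all the Difference Engine does, only with decimal wheels instead of Python integers.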


> On two occasions I have been asked,—“Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

It's nice to know, as someone that works with computers, that this layman attitude is eternal and not something unique to our current times.


Although, I always wondered if that was a snide way of asking whether the machine was a fake, which Babbage didn't pick up on.


I was going to say that Poe's law existed even back then, but apparently it was coined by a different Poe in 2005.


Machinery tends to be used to "refine" raw materials. Rough timber is transformed into precisely dimensioned planks, for example. So it's not unreasonable to ask whether a machine for producing numerical data can operate with crude and inaccurate raw materials (inputs).

Consider: The "regula falsi" approach to solving equations in one unknown actually involves a guessed, "false" input. https://en.m.wikipedia.org/wiki/Regula_falsi#The_regula_fals...

Robert Recorde wrote:

  Gesse at this woorke as happe doth leade.
  By chaunce to truthe you may procede.
  And firste woorke by the question,
  Although no truthe therein be don.
  Suche falsehode is so good a grounde,
  That truth by it will soone be founde.
  From many bate to many mo,
  From to fewe take to fewe also.
  With to much ioyne to fewe againe,
  To to fewe adde to manye plaine.
  In crossewaies multiplye contrary kinde,
  All truthe by falsehode for to fynde.
Now, somebody on HN ("wzdd") has previously tried to defend Babbage, on more than one occasion (e.g. https://news.ycombinator.com/item?id=29605135), saying that he wasn't mystified by the misunderstanding, but instead was making some kind of joke at his own expense (not a snide comment at the expense of the questioner). I'm far from convinced.

It seems to me that we shouldn't be celebrating this mean-spirited passage from Babbage's writing.


I always assumed we are only hearing Babbage's side, without the context of the question. Was this part of a series of questions presented to Babbage, almost in a Socratic method, trying to help him understand Parliament's hesitation? Knowing that politicians of that era mostly asked 'to whose benefit' and 'for what purpose', and that most of the upper class were landed gentry, the machine would most likely affect a whole new sector of the economy they didn't understand. How would this machine be used for Law? Could it help the homeless? War widows? Managing pensions?

I presume the question could also be directed at whether the machine would require a specially trained operator, and thus more funding, or whether anyone skilled in the Arts and the Trades could work the machine efficiently without oversight or much necessary direction. Given the machine was a whole new step and direction for the economy, I can understand their hesitation.

A computer without a program, while brilliant, is useless.


WRT the digit count, the piece Babbage missed, IMO, was that he could continue his overall scheme of using automation to 'inflate' a polynomial into a more generic time/space efficiency tradeoff. He needed all of that precision, but it wasn't required in his mill (the Analytical Engine's ALU), because you can chain ops with carries like we do today. In fact, some of the first electronic computers (which fit a similar niche: calculating polynomials to generate tables, generally for WWII artillery) literally had single-digit ALUs in order to get the cost under control enough to be viable. Their results had far more than single-digit precision.
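
A rough sketch of what chaining ops with carries means, with decimal digits stored least-significant first (my own toy illustration in Python, not a claim about how the mill actually worked):

    # Multi-digit addition using only a one-digit "ALU": each step adds
    # two digits plus a carry, exactly what a narrow adder stage can do.
    def add_digits(a, b):
        # a, b: lists of decimal digits, least significant digit first
        out, carry = [], 0
        for i in range(max(len(a), len(b))):
            da = a[i] if i < len(a) else 0
            db = b[i] if i < len(b) else 0
            s = da + db + carry        # at most 9 + 9 + 1 = 19
            out.append(s % 10)         # one result digit
            carry = s // 10            # carry into the next position
        if carry:
            out.append(carry)
        return out

    # 987 + 654 = 1641, computed one digit at a time
    print(add_digits([7, 8, 9], [4, 5, 6]))   # [1, 4, 6, 1]

The width of the ALU then only affects speed, not what precision you can reach.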


People already had tables of logarithms with lower precision. At the time he did a survey of what tables were available in the world, and it turned out the French had just produced some quite impressive tables, using a lot of manual labor. His value proposition to the British Government was that he could deliver something even better at a fraction of the manual labor (basically only a few people to turn the crank). The Brits were enamored with the concept of automation at the time, and he made a proof-of-concept machine with lower precision that he was very happy to demonstrate wherever he went.

Why did he aim for 40 digits? Maybe he did some back-of-the-envelope estimation of the error propagation. If you start with an average error of 10^(-40), then after n additions, in the worst case you can have an error of n*10^(-40). Which is not so bad. But the problem is that in the difference engine you end up adding intermediate results to intermediate results, many, many times over. So if you only look at the worst possible results, you end up needing a lot of precision to start with.

Of course, nowadays we know better how to analyze the propagation of errors, but in his time he either didn't know how, or he couldn't convince other people that he could get the total error under control with a lower precision throughout. In any case, he completely underestimated the difficulty of scaling the project from his toy example to the full-blown final machine.


They had tables, but they were rife with errors. Some accidental, some intentional as a form of copy protection. Britain didn't respect French copyright at the time, so as soon as they put their hands on a book it would have been fine to copy. Therefore his proposition was accuracy, coming from the mechanical nature of what he was proposing.

And everything you said doesn't discount using a lower-precision ALU and chaining the results with carry propagation. We do that today; every Diffie-Hellman your computer performs involves numbers far larger than its ALU width. And bit-serial machines have a long history.


The Difference Engine was doing only additions and subtractions. For additions and subtractions, it does not make sense to split the operands into smaller pieces and then chain the results by carry propagation. Or rather it does make sense, but this is exactly how they are implemented to begin with: the operands are split into groups of one digit each, the operation is done digit by digit, and there's a carry from one digit to the next.

The Difference Engine does not do any multiplications.

As for the stupendous precision: I implemented a difference engine today in Excel to see what's going on. It's a very fun exercise, one can do it in 15 min. But be warned, you'll waste hours playing with the thing.

Here's how the Difference Engine works. You want to calculate a function f at the points 0, dx, 2dx, 3dx, etc. You do it inductively, using the formula

f(x + dx) = f(x) + f'(x)dx

But who is going to give you f'(x)? Well, you calculate that inductively too. Or more precisely (this is crucial) you calculate the whole f'(x)dx inductively:

f'(x + dx) dx = f'(x) dx + f''(x) dx^2

And then for f'' dx^2, you say

f''(x + dx) dx^2 = f''(x) dx^2 + f'''(x) dx^3

At some point you decide to stop, so you assume that a certain derivative is constant. That's the order of the scheme. Wikipedia states that the second Difference Engine envisioned by Babbage had an order of 7 and 31 digits of precision, but I suspect it's a mistake. I think it's more likely an order of 8. The 8th derivative of log(x) evaluated at 1 is -7! = -5040. If you use dx = 0.0001 then you need to store the number 5040*10^-32 = 504*10^-31.

So, I suspect Babbage was trying to produce log tables with increments of 0.0001 with a precision of six digits. If you look at the error of this scheme, you'll see that it grows exponentially, and after just 1000 steps the error is of the order of 10^-6. So you need to reset your starting point, and start again. You produce 1000 more logs, then you reset again. After a while, the scheme becomes better behaved (fundamentally because you move further away from the function's singularity, which is zero). By the time you are around 4, you can run 10000 steps while keeping the error below 10^-6.

Now, it looks like a successor of Babbage, Scheutz, actually built a difference engine. I think the guy deserves as much admiration as Babbage himself, if not more. He used 15 digits and only 4th-order differences. But if you look at engines with different orders, you'll find that higher orders stop giving a significant bang for the buck after the 4th order.

Does 15 digits/4th order make sense? Well, only if you are quite smart, but I think Scheutz was plenty smart: the fourth-order derivative of log at 1 is 3! = 6, so you need to be able to store the number 6*0.0001^4 = 6*10^-16. But how do you do that if you only have 15 digits? Very simple: you observe that for the first thousand or so steps, all the numbers in the table start with 0.0, so you store only the digits after that, and for that 15 digits are enough.
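
For anyone who wants to reproduce the Excel experiment, here is a minimal Python sketch of a Scheutz-style 4th-order difference engine tabulating natural logs. It is my own reconstruction of the scheme described above; the starting differences are taken from a handful of exactly computed values, the way a table maker would have "set" the wheels by hand:

    import math

    # Tabulate ln(x) from x0 in steps of dx with a 4th-order difference
    # engine: compute five values exactly, take their forward differences
    # to initialize the columns, then just crank (additions only).
    x0, dx, steps = 1.0, 0.0001, 1000

    diffs = [math.log(x0 + i * dx) for i in range(5)]
    for order in range(1, 5):
        for i in range(4, order - 1, -1):
            diffs[i] -= diffs[i - 1]
    # diffs is now [f, Δf, Δ²f, Δ³f, Δ⁴f] at x0; Δ⁴f is held constant.

    state = diffs[:]
    worst = 0.0
    for n in range(1, steps + 1):
        for i in range(4):
            state[i] += state[i + 1]          # one turn of the crank
        worst = max(worst, abs(state[0] - math.log(x0 + n * dx)))
    print(worst)   # worst error over the run

Printing the worst error over the run shows the effect described above: the truncation from holding the 4th difference constant slowly accumulates, and at some point you have to reset the engine from freshly computed values.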


> BTW his blog post "Babbage was a true genius" looks great: http://tomforsyth1000.github.io/blog.wiki.html#%5B%5BBabbage...

Highlighting this for extra exposure. For even more info, plus an emulator, see [1] (also linked from the blog post).

[1] https://www.fourmilab.ch/babbage/cards.html


> Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out

Of course the tricky answer is: where possible, that is exactly what we should strive for.

People hate it when their SQL Server responds to a 60 line procedure with "Syntax error" even though I'm sure that's technically correct - while they like it when the Rust compiler spits out an explanation of why what they wrote can't work, pointing at the specific places it's wrong and suggesting what might help.

If you ask Google for "tow the line" it suggests "Did you mean toe the line?" because sure enough that's the expression you probably wanted (unless what you wanted was specifically people explaining why it isn't "tow the line" but don't worry it does link those explanations)


Back in his day that was still not technically possible, and maybe not even in demand at the time (even today "real programmers" brag about not needing such things).

Seems like detecting errors the way Rust does wouldn't occur to someone in a world where basically no "smart objects" of any kind existed. Up until recently, most objects were passive tools under direct human control, with no more intelligence than a punch-card loom at the most.

Lots of people still don't really think much outside of the "Manmade things as passive tools" mindset or see a need for anything more.


> John Nagle (Animats) has commented a few times about how, contrary to the "high culture" story of the evolution of computers (Turing, von Neumann etc), the gradual evolution of "calculators" was itself leading up to computers

That's a great point, and I've been thinking about the similar issue with compilers and type systems. Nowadays people seem to frame type systems as originating from math and logic, but really the first type systems were for instruction selection -- generating different code for a+b when they're ints or floats. It was more of an engineering thing.

So many rules in C support that, and most of them survive in C++ (arrays decay to pointers, etc.)

In Search of Types is a great read: https://www.cs.tufts.edu/~nr/cs257/archive/stephen-kell/in-s...

The last 40 years have seen an impressive confluence of theory and practice in programming language design, with the interesting side-effect of taking the sense of “types” originating in symbolic logic and implanting it into engineering traditions.

I kinda want to read about type systems from a historically accurate perspective. I think it's well known that Ritchie and Thompson didn't agree with many of the type safety improvements in ANSI C (even though ANSI C seems ridiculously weak from a modern perspective).

Somewhat related to this is that there seems to be a ton of exposition on Hindley Milner type systems, but very little on explicit object oriented type systems (Java, C#, C++, Kotlin, Swift, etc.)


> “Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?” In one case a member of the Upper, and in the other a member of the Lower, House put this question. I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.

This was the most memorable quote from his autobiography for me!

GIGO - Garbage In, Garbage Out - was the first thing taught to me in school in Computer Class.

I always used to wonder why they taught us that - it's common sense! Turns out people have been asking this question since the conception of programmable computers!


And yet, Google tries to show the right links to wrong search requests.

*edit: To me, this seems to be the greatest obstacle towards artificial intelligence. We search for AI within computational concepts, where GIGO holds, whereas society already knows what intelligence is. It could be more fruitful to start from there, compared to training NNs until they speak.


> • John Nagle (Animats) has commented a few times about how, contrary to the "high culture" story of the evolution of computers (Turing, von Neumann etc), the gradual evolution of "calculators" was itself leading up to computers (e.g. https://news.ycombinator.com/item?id=10636154). Something similar appears to have happened with Babbage, where he started with a simple calculating device and, thinking about it more deeply, single-handedly came up with a Turing-complete design and was writing programs for it.

Konrad Zuse was a civil engineer who was annoyed by doing manual numerical calculations and so started to build his computers (22-bit floating point machines). He too saw how many engineer-man-centuries these machines could save and approached the Nazi government which of course blundered that and thought computers were irrelevant.


> He too saw how many engineer-man-centuries these machines could save and approached the Nazi government which of course blundered that and thought computers were irrelevant.

Zuse built "computers" and process control for military R&D and production. He wasn't a major concern or priority at a high level, but given resources "thought irrelevant" projects wouldn't have gotten.


> the Nazi government which of course blundered that and thought computers were irrelevant.

Well, it seems they were enthusiastic users of IBM machines, maintenance service, spare parts, training etc.

https://en.wikipedia.org/wiki/IBM_and_the_Holocaust


Up-voted you because of the high quality comment. Thank you for the provided links.


Bit confused by Δ⁷uₓ = 0, should it be read as “approximately equals”?


Either that, or I think the idea is that uₓ is a polynomial of degree at most 6. (Such a polynomial is what the difference engine computes, and it may be a good approximation for the actually desired function.)


Floating point is basically just scientific notation. Which means that, in theory, it might even have been invented back then.


A lot of the problems cited make sense in the context the machine was designed for. Of course it did math in decimal. The designer did math in decimal, the user input was in decimal, they wanted output in decimal, and unlike electrical switches, mechanical switches aren't naturally limited to just on/off states. Of course the register supported 40-digit numbers. This was a machine that would have to be cranked. The clock cycle was literally a cycle. Anything less than 40 and you could more economically pay someone to calculate directly. It's easy to say "just do it in software" when cycles are measured in GHz.


Yeah, I was also wondering about performance when I read this. The Analytical Engine wouldn't have been fast to begin with and using a narrower datapath would have made it even slower. AFAIK if you measure the time-space complexity of emulating wide operations in software it's a loss.

8-bit PCs were derided as toys in the 1970s and with the benefit of hindsight people now scoff at that idea, but PCs really were much slower, less capable, and harder to program than minicomputers.


The reason many 8-bit micros were derided as toys was not the 8-bit CPU. 8-bit CPUs were well respected.

It was because of the crippling lack of RAM and other cost-saving measures.

Having 1KB of RAM (or less) on a single-board computer like the KIM-1 wasn't an issue, because they were programmed in machine code and didn't need to drive a screen.

But 1KB of RAM on a low-cost microcomputer like the ZX80 was beyond painful. It takes 768 bytes for a full screen buffer, leaving just 256 bytes for the BASIC program, all its variables, and the interpreter state. And it took all the CPU time to drive the display: actually running the program, or even pressing a key, would cause the screen to blank and desync.

Even on better micros that had ~4KB of RAM, the scope of the BASIC program you could write was pretty limited.


8 bit computers were basically glorified calculators that could double as video game systems and perhaps BBS terminals. Even data storage was very much an afterthought on those machines until well after floppy disks entered common use. Then again, as late as the 1960s a programmable desk calculator could be quite valuable for plenty of serious uses.


And we're going wide again, as soon as we have the means to do so. The virtualization penalty is real.


> Of course the register supported 40 digit numbers. This was a machine that would have to be cranked. The clock cycle was literally a cycle. Anything less than 40 and you could more economically pay someone to calculate directly.

Yes, but larger registers would need more physical force to manipulate. If you had smaller registers, you could have a higher gear ratio on the crank, allowing the machine to run faster for the same input force.

If they wanted to perform some particularly complicated instructions, then they might want a lower gear ratio instead. The obvious solution: install a gearbox and a shifter.


> This was a machine that would have to be cranked.

The Difference Engine was hand-cranked, but the Analytical Engine was intended to be steam-powered. The thing was going to be the size of a locomotive. Most of that was memory. As I've pointed out before, the big problem in the early days was affordable, fast memory. Babbage's design, at least one version, was to have the ability to store 1000 numbers of 40 digits each. So, 40,000 number wheels, with some kind of mechanism to bring them to the read/write station. Access time would probably have been measured in seconds.

The arithmetic unit wasn't the big part of the machine. It was roughly equivalent to a desktop mechanical desk calculator, after all.

> Something similar appears to have happened with Babbage, where he started with a simple calculating device and, thinking about it more deeply, single-handedly came up with a Turing-complete design and was writing programs for it.

Desktop calculators existed long before Babbage. Leibniz built the first mechanical multiplier around 1673. Mechanical arithmetic was known. Babbage's contribution was the instruction decoder and control unit. Mechanical arithmetic was limited more by cost-effectiveness and reliability than by conception. The commercial breakthrough was cash registers, in the mid 1880s. First really cost-effective application. Babbage's machine might have been buildable, but not cost-effective.

A few years ago, there was some guy in the UK talking about an analytical engine build. But he never got very far. I'm surprised someone doesn't have one running in Minecraft or Unreal Engine.


“Some guy” is John Graham-Cumming (CTO of Cloudflare) and the project website is https://www.plan28.org/ named after Babbage’s final iteration of his design.

I am sitting below a poster of Sydney Padua’s splendid cartoon of Plan 25, and her book and comics are the most entertaining way to learn a bit about Lovelace and Babbage http://sydneypadua.com/2dgoggles/comics/



Babbage shows his latest invention to a dubious Lovelace:

"It's operated by a crank!!" "... Indeed." :-P

http://sydneypadua.com/2dgoggles/lovelace-and-babbage-vs-the...


I'm so glad you mentioned Padua's art and book. Her book ("The Thrilling Adventures of Lovelace and Babbage") was just a joy to read. The footnotes are magnificent, and span multiple pages, and it was just altogether entertaining as heck. The artwork is great as well. :)


40,000 decimal digits is about 16 KB of RAM (40K * log2(10) / 8 bytes). That’s 1980s home computer territory — there’s no way that amount of memory would have been needed for anything. (Given today’s understanding of efficient algorithms.)


Babbage was perhaps thinking a bit too big.

Also, the 40 or 50 digit decimal number thing comes partly from not being clear on how to manage scaling: "However, by inserting an imaginary divider between the same two figure wheels of all variable number columns, thus making all coefficients and numbers within the Store possess the same number of decimal places, decimals could be used."[1] So there was one decimal point ___location for all memory locations. Babbage apparently didn't include a general shift function, which is necessary for rescaling results. If you can shift to discard low order digits, as on mechanical desk calculators, you need maybe 10 digits, and a 20 digit product register, so you can multiply two 10-digit numbers and then round off or truncate the result. Without that, you need a lot more digits to avoid overflow. So close...
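
A toy illustration of the shifting point, in Python (my own sketch, not a claim about Babbage's mechanisms): with a shift you can rescale a double-width product back into the working precision; without it, everything has to line up with the single fixed decimal point, so the word size has to keep growing.

    # Fixed-point numbers with FRAC digits after an implied decimal point.
    FRAC = 10
    SCALE = 10 ** FRAC

    def to_fixed(x):
        return round(x * SCALE)      # store x as an integer count of 10^-10

    def mul_with_shift(a, b):
        # The raw product has 2*FRAC fractional digits (a double-width
        # register); shifting right by FRAC digits rescales it and discards
        # the low-order digits, like rounding on a desk calculator.
        return (a * b) // SCALE

    a, b = to_fixed(3.1415926535), to_fixed(2.7182818284)
    print(mul_with_shift(a, b) / SCALE)   # ~8.5397342226 (pi * e)

Chained multiplications then stay at 10 digits instead of needing 20, then 40, then more, which is the overflow problem described above.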

Useful programmable calculators from the 1970s had 20 to 100 memory locations, each capable of maybe 10 digits. A base Babbage machine with 200 digit wheels of memory, expandable to 1000, would probably have been feasible and moderately useful. Useful for cranking out navigation and gunnery tables, at least. Babbage's difference engine has about that much storage. So that was probably buildable as a minimum viable product.

[1] https://cs.stanford.edu/people/eroberts/courses/soco/project...


That sounds like the fixed point representation is part of what causes the huge memory requirements. It's fascinating how obvious some ideas are in retrospect. Floating point is basically just scientific notation, which was well known AFAIK. But I can imagine that it's hard to make the transfer to use that as the representation for all calculations.


I don't think it's hard. As a kid, I didn't know about floating point but derived it on my own. It seemed logical to just add another number that represented where the binary point was. I'm sure Babbage would have got there if his iteration time hadn't been so slow. He really should have made a small general-purpose computer, as simple as possible, just to experiment with.
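
In that spirit, a minimal sketch of "just add another number that says where the point is", as a toy decimal float in Python (my own illustration, not any historical design):

    # A value is a pair (significand, exponent): significand * 10**exponent.
    # Keeping a fixed number of significand digits is what buys dynamic
    # range without enormous registers.
    DIGITS = 10

    def normalize(sig, exp):
        # Trim the significand back to DIGITS digits, bumping the exponent.
        while abs(sig) >= 10 ** DIGITS:
            sig //= 10
            exp += 1
        return sig, exp

    def fadd(a, b):
        (sa, ea), (sb, eb) = a, b
        if ea < eb:                       # make a the larger-exponent operand
            (sa, ea), (sb, eb) = (sb, eb), (sa, ea)
        sb //= 10 ** (ea - eb)            # align the smaller operand
        return normalize(sa + sb, ea)

    x = (1234567890, -9)                  # 1.23456789
    y = (9876543210, -4)                  # 987654.3210
    print(fadd(x, y))                     # (9876555555, -4), i.e. ~987655.5555

The price, as the aligned digits that fall off the end show, is giving up the uniform absolute precision that fixed point has.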


Babbage's work pre-dated mass production. Today, production people look at something like that and ask "how many different parts?", because banging out a lot of the same part has been a routine job for the last century. Babbage was too early for that. Parts required much hand work and craftsmanship. Metal part manufacturing did not scale yet. It was possible to build Babbage's parts using clockmaking techniques, but it would have required a lot of clockmakers. Eli Terry had a mass-production clock business in the US by 1804, but the gears were wooden. Apprentices made rough clock wheels, and better workers finished them more precisely with hand tools. Precision stamping was still in the future, awaiting the widespread availability of steel in the 1880s.

The Ingersoll dollar watch (1896), "The Watch that made the Dollar Famous", was probably the first high-volume mass produced product with part complexity and precision comparable to what Babbage needed. A few years later, the Computing-Tabulating-Recording company, the predecessor of IBM, was manufacturing the first commercial electromechanical computing devices in quantity. After that, there was steady progress.

Those were the days when New England was to the world what Guangdong is now.


I believe his plans did include a general shift function - what he referred to as “stepping” up/down was just shifting by a digit


Right. There's some question as to whether that was seen as a scaling operation or just as part of multiply or divide.[1]

If you can scale into a reasonable range for each data type, you don't need 50 digits. But that's an abstraction which came decades after Babbage.

[1] https://www.fourmilab.ch/babbage/cards.html


As I’m sure you know, the lack of reliability of computing machines persisted well into the 20th century.


That's extra complexity kind of like adding frequency throttling or sleep mode.


> Anything less than 40 and you could more economically pay someone to calculate directly.

Hiring someone to do the math would defeat the point. The idea was to eliminate human error when doing something like printing tables of common functions like sines or cosines.

A lot of effort went into designing a printer, as they couldn't have humans recording the results without re-introducing human error.


The goal wasn't as much speed (it generated printed tables to be referenced later), but instead accuracy. The contemporary sources of such tables were rife with errors. Some by mistake, some added intentionally as a form of copy protection.


What I find interesting is how historiography works. Babbage is widely cited as a major or even the primary originator of programmable digital computers. What I haven’t been able to get great information about is to what extent there was actually a continuous chain of influence, and to what extent we retroactively recognize that he had the ideas early. Did Turing know about Babbage? Did the ENIAC designers?


I don't think there was any continuous chain of influence. Howard Aiken[0] effectively "rediscovered" Babbage in the late 30s and presented himself as carrying on Babbage's legacy, and did a lot to popularize the connection between modern computing and Babbage.

There's a chapter on this in I. Bernard Cohen's book on Aiken[1].

[0]: https://en.wikipedia.org/wiki/Howard_H._Aiken [1]: https://www.google.com/books/edition/Howard_Aiken/Ld7TgLeQXs...


And Grace Hopper worked for Aiken.


Thanks for the book suggestion. I recently finished "Pioneer Programmer", the autobiography of Jean Jennings Bartik, about her and her colleagues working on the ENIAC. I really enjoyed reading about that age of invention; it will be interesting to find out what was going on at Harvard that whole time.


I've wondered the same thing about Turing's influence on computers (not computer science, which is obvious). ENIAC looks nothing like a Turing machine and I assume Eckert and Mauchly were not aware of Turing's highly classified computer work at Bletchley Park.


I don't know the answer either, but didn't Turing design the British bombe? It's not, as far as I know, Turing complete (or even programmable), but he is quite widely known for his work on this electro-mechanical computer and not just theoretical computer science. He also worked on programmable computers after the war, and apparently is credited with the first complete design of a stored-program computer.


A lot of math seems like that: this cool "new" idea was actually completely worked out 200 years ago by this obscure mathematician but it didn't have any impact. (I mean, they deserve priority, but...)


Sometimes 2000 years ago. I was admiring this quarter-square multiplication algorithm on an eight-bit system that used tables of squares and a clever formula to beat my fastest shift-and-add approach, and was blown away to find out that it was known in Babylonian times.
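
For the curious, the formula is a*b = ((a+b)^2 - (a-b)^2)/4, so a precomputed table of floor(n^2/4) turns a multiply into two lookups and a subtraction. A minimal Python sketch of the idea (my own, assuming 8-bit operands):

    # Quarter-square multiplication: a*b == qsq[a+b] - qsq[abs(a-b)],
    # where qsq[n] = n*n // 4. Exact because a+b and a-b always have the
    # same parity, so the two floors discard the same fractional part.
    qsq = [n * n // 4 for n in range(511)]   # a+b <= 510 for 8-bit a, b

    def qmul(a, b):
        return qsq[a + b] - qsq[abs(a - b)]

    # Spot-check against ordinary multiplication.
    assert all(qmul(a, b) == a * b for a in range(256) for b in range(256))
    print(qmul(200, 123))   # 24600

On an 8-bit CPU the two table lookups and a 16-bit subtraction come out much cheaper than a full shift-and-add loop, at the cost of the table's memory.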


Same for computer science, no? Or anything really... ideas are cheap, seeing the bigger picture and successfully integrating with it is difficult.


Yeah, I think most would say Turing had little influence on computers per se, as opposed to computer science. ENIAC was built in a clear line of succession with other mechanical calculating devices of varying complexity, but I'm not sure its designers grasped how fundamental a threshold they had passed.


This happens with Lovelace too.

As far as I'm aware she has no influence on modern computing, which has always slightly confused me, because she is often raised in campaigns as a "girls can program too!"-type figure when there are lots of other women whose work is used every day, e.g. Frances Allen was one of the first to really lay down the fundamentals of optimizing compilers.


Lovelace gets a lot of attention because she grasped the cultural implications of computers approximately a century ahead of anyone else. She was also the first person in history to get bitten by the programming bug. As an inspirational figure, her practical day-to-day relevance is immaterial.


I want to use this opportunity to plug The Thrilling Adventures of Lovelace And Babbage:

https://en.wikipedia.org/wiki/The_Thrilling_Adventures_of_Lo...

The book is fun and deeply instructional. It has the only illustrated explanation of the analytical engine that I have ever been able to understand.

And yes, Babbage wanted as much RAM as possible ("the store" he called it). This greatly ballooned the size of a mechanical machine as he devised it, further impeding its physical construction.

The book actually pokes fun at this, inventing "Babbage's Law" to replace "Moore's Law": instead of shrinking by half regularly, the machines in this alternate comic book world double in size, and giant construction projects are required to install new RAM.


"I predict that within 10 years, computers will be twice as powerful, ten thousand times larger, and so expensive that only the 5 richest kings of Europe will own them"

- The Simpsons


The impression I have gotten from articles which look at the Analytical Engine more critically is that Babbage had the habit of trying to upsell his investors into a more complex, more risky, more academically interesting device without completing the last one. The Difference Engine actually was completed eventually and ordered by the British government, in a more pragmatic form: https://en.wikipedia.org/wiki/Difference_engine#Scheutzian_c...

That said I do agree with what was said in the thread. There was a lot of accidental complexity in early numeric computers. I feel a true programmable computer could not have come about without the development and refinement of symbolic logic that took place in the 20th century.


His "investors" being in this case the government, and Babbage does not seem to have understood politicians or the constraints that a government operates under.

You can see the timeline in https://www.gutenberg.org/files/57532/57532-h/57532-h.htm#p0... Chapter VI (with the caveat that, though this chapter is written by a third person, this being Chapter VI of Babbage's autobiography/memoirs, clearly it must have been sufficiently sympathetic to him that Babbage included it in his book).

But the short version is that, far from it being a "habit", he seems to have done what you said only once, in 1834, when he started to have ideas for an Analytical Engine, which would have completely superseded the Difference Engine that he had already been building for the government for 11 years at that point (the initial estimate had been 2–3 years). The relationship between the government and him was already strained, but he doesn't seem to have understood that and got further entangled, instead of extricating himself. He asked the government what their plan was (in light of this development), and it took them until 1842 to make a decision (to just give up on it entirely).

----

Slightly longer version of the timeline that I started reconstructing, before abandoning it:

• In around 1812 or 1813, he had the germ of the idea "that all these Tables (pointing to the logarithms) might be calculated by machinery". Between 1820 and 1822, he made his own Difference Engine (two orders of difference, 6 digits).

• In 1823, Babbage started making "a much larger and more perfect engine" for the government, and this is where the trouble starts. The plan was for this machine to have "six orders of differences, each consisting of about twenty places of figures".

• Unfortunately, the conversation was informal and the details of the arrangement were not written down(!), which led to misunderstandings over the years as Babbage came back and asked for more money: work ceased in May 1829, resumed in February 1830, etc. Then in September 1834 that "Analytical Engine" event (idea) happened.

• Just before this, in July 1834, Lardner in The Edinburgh Review wrote "a very elaborate description of this portion of the machine" (took me some searching but I found it! here: https://archive.org/details/edinburghreviewo59macauoft/page/... ), and this inspired Scheutz in Sweden to build a machine as described. As Wikipedia describes, the Swedish machine went up to the third order (not sixth), and had 5-digit numbers (not 20). This was completed in 1843.

• In 1853, a larger (fourth-order, 15 digits) Swedish machine was built, exhibited in 1855 and sold in 1856, delivered in 1857. The British government commissioned a copy of it, which was built in 1859. (Note that this was still smaller than the design which Babbage was close to completing in 1834 or even 1842… oh well.)

• Note the ending of the 1834 article (https://archive.org/details/edinburghreviewo59macauoft/page/...): already in 1834 people were wondering why on earth Babbage and the government don't bring this matter to a quick conclusion.


Ok, I concede that it was exaggerated in what I was reading. What do you think about the other point? Was a computer as we know it today practical to design before the discovery that arithmetic and other principles of math could be worked out using logic?


I'm not an expert (my only qualification is that I read a part of Babbage's book; others must know a lot more), but see http://tomforsyth1000.github.io/blog.wiki.html#%5B%5BBabbage... and (linked from it) https://www.fourmilab.ch/babbage/cards.html — it looks like he did have a reasonable design of a computer in fact. (Except for the too-many-digits-of-precision thing, that the OP tweet thread is about.) He seems to have well understood how not just arithmetic but a lot more could be done mechanically.


"Babbage had the habit of trying to upsell his investors into a more complex, more risky, more academically interesting device without completing the last one." Omg the oracle business model was born....


Don't forget about Moller.


He's missing one key fact in his analysis which makes the conclusion incorrect, imho: early computers were really, really slow!

Early mechanical or relay computers, for example, had dedicated hardware to compute multiplications, divisions, or more complex operations on full-width floating-point registers, and still they took minutes to complete a computation. Doing things in software sounds great written from a multi-GHz modern computer, but it would have been just too impractically slow at the time. Doing something as seemingly simple as a division can take thousands of cycles when you need to do everything "in software" with only additions and subtractions. When your machine has a clock frequency on the order of 1 Hz, this translates to hours for a single operation.

So the issue was mostly technical: the hardware at their disposal was not capable of doing things quickly enough, so they did not design impossible things based on a future understanding of how things should be done. The process of invention is very iterative, building on technological progress. When advances in semiconductors allowed clock speeds millions of times faster, doing things "in software" came naturally.


Really interesting to think about hardware vs software. Thank goodness for Von Neumann for helping us get more generic.

A lot of historians assume that Babbage's work collected dust and was lost for 100 years. This turns out to not be true at all. His work was consulted by the Scheutzes and Jevons. Details on what I found researching him are at https://buriedreads.com/2019/02/09/when-computers-stopped-be...


Good blog post, and thanks for writing it. But though Babbage's work may not have been lost for quite as long as 100 years, there is truth to the usual history, and I don't think these two examples establish the "turns out to not be true at all":

• Charles Babbage lived 1791–1871, dying a couple of months before his 80th birthday. He wrote his Passages from the Life of a Philosopher in 1864.

• The Swedish difference engine, produced by Scheutz father and son, is mentioned in these memoirs. At the time, it was regarded as a smaller-scale/toy/prototype version of the Difference Engine he was engaged in building. So, even though this version did find use "in production" (at the Dudley Observatory at Albany, and an English-made copy in use in "the department of the Registrar-General, at Somerset House"), it was still short of what he actually wanted to build (or had promised to build).

• As for the example you illustrate of W. Stanley Jevons's praise in 1869 of the Difference and Analytical Engine, this too came during Babbage's lifetime, and was for the idea rather than any concrete details of the design: "in his subsequent design for an Analytical Engine, Mr. Babbage has shown that material machinery is capable, in theory at least…" etc.

So it does seem to be the case that at least after Babbage's death in 1871 (if not before), his ideas for the Analytical Engine were dismissed as impractical, or absurdly expensive, or a failure, etc, and no one quite looked at them at least until (going by the Aiken reference you found) 1936, which is 65 years.


Wow this really brings home how recent it was still in the scheme of things. I had it in my mind before this thread and article that Babbage was in the 1600’s or something.


But did Von Neumann also lead us towards a security nightmare? Code is data is very powerful but also a Pandora's box from a security perspective.


As far as I can see, the reason for a large number of digits was to do the equivalent of floating point without having an exponent. Also, a number of digits gets lost doing differences without proper rounding. Konrad Zuse was the first one to do floating point - and he had binary too in his mechanical computer, but it worked on completely different principles from Babbage's machines. Decimal is fine when one uses wheels; Zuse used plates, where binary was much more natural. The digital mechanical multipliers I know of use rods for multiplying and wheels for the adding.


I don't know about Babbage, but the hardware/software tradeoff was well known before microprocessors. The IBM 360 family (1960s) had a range of options, from hardwired monsters, to microcoded but wide machines, all the way down to microcoded versions that had an 8 bit ALU.


As another example of historical complexity driven by a focus on directly encoding the everyday representation of data, early designs for the telegraph system used twenty-six individual wires to separately indicate each letter of the alphabet.

It is also interesting that practical designs then ignored the obvious five-wire simplification of this ("UTF-32 for telegraphy") and settled on Morse code ("UTF-8 for telegraphy").


The costs of stringing wire would be significantly less for the Morse system. There were low wire designs that could be used by untrained staff. The Cooke and Wheatstone system was used in 4 to 6 wire setups.

https://en.m.wikipedia.org/wiki/Cooke_and_Wheatstone_telegra...


Somewhat related is the way one tends to learn mathematics, mostly seeing definitions, proofs, and sequences of lemmas that have been improved on over many decades.

When you learn real analysis you get Cauchy’s epsilon–delta formulation mixed with quantifiers, set theory, the notion of continuous functions, and Riemann integration. You don’t need to first learn vague notions of infinitesimals and explanations that seem to make sense but fall apart when you try them yourself.

When you learn graph theory, you can use the language of set theory with straightforward definitions of what a graph is, and you can see innovations like the probabilistic proof or subjects like Ramsey theory.

When you do applied maths you have tools like contour integration and computers and you don’t need to rely on Euler’s technique of applying the ‘universality of analysis’ (at the time analysis meant what we might now call non-abstract algebra; over history it has meant basically every different topic that is not geometry) and expanding and rearranging infinite expressions and somehow getting the right answer.

When you learn Galois theory you already know what a group is, and you have lots of examples and tools you can use. You also know about rings and fields and polynomial rings. You don’t need to simultaneously invent the notion of a field extension, a group, a functor, and a normal subgroup. When you learn group theory you seem to find all these easily-proven theorems with grand names like Lagrange’s theorem or Cayley’s theorem but maybe they are only easy in a modern context.

When you learn category theory, most of your examples of categories come from things that you learned from materials that post-date category theory and therefore can have a structure that makes the categories more obvious.


All true but certainly, for me, I was never satisfied until I found out the history and stumbles along the way. In this manner, I was able to integrate my own approach to understanding with the original path and not only feel better about my own struggles but overcome some mental blocks to achieve deeper understanding.


I've been very interested in mechanical computing for a while (even going so far as to build my own simple 3D-printed, hand-cranked computer!), but the whole analytical engine thing makes me wonder what might have happened if computing had developed more from the industrial-control / power-loom side of things rather than a need for scientific number crunching. A truly impressive amount of work was done by the mid-to-late 1800s with power looms - 'programs' 10s of thousands of cards long, speeds in excess of 3 Hz, 'production-level' reliability. The people building weaving mechanisms could have readily implemented something equivalent to a 4-bit microcontroller if they had been sufficiently motivated, which might have kicked off a computing revolution from the bottom up rather than starting with supercomputers and gradually figuring out how useful very limited microcontrollers were.


>Babbage and consorts were not alone in making that mistake;

I wouldn't call it a mistake. Maybe they were not alone in taking that tack or direction, but there seems to be a touch of hubris in calling it a mistake.

I get the point of the thread, and there are some nice points in it, but hey, Babbage was born in 1791!


Would it have had adequate memory to do it in software? I guess the code was on ROM (punched cards), but you need RAM too. It's also a whole other invention or series of inventions - just because it's software doesn't make it soft and easy. (What software techniques have we not yet thought of - or is it "all already invented"...?)

It's maybe fair to qualify it as his hardware problems being harder than warranted.

Reminds me of Wozniak: made Breakout in hardware... then made it again, much more quickly and malleably, in software... but (I think?) only realized he could do that after having made the Apple computer. Kinda sorta related: Dragon's Egg (Forward), a novel with a theme that inventions are much easier once you have the idea and know it's possible.


As designed, it had storage space on the order of tens of kilobytes, accounting for much of the complexity and cost. It definitely could have gotten by with significantly less memory and smaller, more practical registers.


I was expecting the thread to end with a link to a design for a binary mechanical computer with software multiply which could have been built back then. Exercise for the reader I guess?


See also Plan 28, a project to build an Analytical Engine, started by HN's own jgrahamc:

https://plan28.org/


People invent things for one purpose, and it turns out, in hindsight, that they are really useful for something else. So much talk about AI not being really intelligent, not being able to think causally, but what about us? We also suffer from a low ability to predict outcomes, and we can only see one step ahead. We think we're so superior when in fact we're doing distributed random search.

Another example - the battery invented in Baghdad 2000 years ago. Why didn't we follow up with the theory of electricity for two millennia?


I was looking into the idea of building a clockwork computer. Probably a 16-bit RISC based on a simple teaching instruction set (just over a dozen instructions). Maybe a 4-bit prototype to work out how to actually implement everything.

Realistically even the prototype project is way beyond my skills but it doesn't look impossible. Actual performance would be in seconds-per-instruction ( mIPs = milli-IPs? ) with perhaps a few hundred words of memory.


Given that you need a really simple ISA, and probably also good code density, how about a Forth CPU, like https://excamera.com/sphinx/fpga-j1.html?


Is there a particularly good book on the history of math/analysis that would be worth reading? I assume it would overlap with these types of computation a lot.


Very interesting article, thanks for sharing!

I hope someone makes a 3D model of such a "minimalist" working Analytical Engine that we can play with.

Would be cool if we could enter a program and see it being processed.

Seeing AE in action - programs being run mechanically - would be so cool!


I love that it would take another century (almost exactly, AE ~1837, Turing's "machines" paper 1936) for the required insight that the computation was interchangeable, and indeed the HW need not be so sophisticated.


"knowing what we do now, you could likely come up with a version that could be realistically built (and kept running reasonably well) with 19th century tech."

No shit!


But seriously.

I think we should put more effort into checking out these old solutions to problems and seeing how we could build them in a better way with modern knowledge.

Many things, like the AE, could probably be built much smaller, since we have more precise tooling available. Smaller size could lead to optimizations already; a smaller mechanical system requires less power to do its work.

But also, like the article suggests, using generally better design. Could be that a mechanical computer, like the AE, is way slower but could work in circumstances that electronic devices can't.


I sometimes forget to forego clickbaity titles.


A lot of the comments here seem to be conflating the Analytical Engine with the Difference Engine.


First time I've seen ryg/farbrausch not talking about the demoscene :) Great post!


They have been built in Meccano, and there are videos!


It was too CISC!





