"What's the expense of getting simplicity? [..] You get simplicity by finding a slightly more sophisticated building block to build your theories out of. It's when you go for a simple building block that anyone can understand with common sense that you start screwing yourself right and left."
After thinking about it for a while, I struggled to get anything profound out of it. His point seems equivalent to saying "a higher level of abstraction can make problems more tractable", which is something we do all the time, in all contexts.
Finding good abstractions is an art, but I don't think it really maps onto insights like Copernicus's. I see a slightly different process at work: discovering a new perspective on a problem. A new perspective can be a different way to decompose a problem, but it doesn't necessarily say anything about the sophistication of the building blocks. Instead, a change is often revealing because it radically alters the kind of relationships between the building blocks, enabling new inferences.
Suppose you need to get from A to B and back. You need to do it often, you need to do it fast.
Your cargo - written notes that the two kings (of A and B respectively) exchange.
So you try to solve the problem by getting faster cars, building better roads, hiring the best drivers - all with the goal of getting the notes from king A to king B as fast as possible.
---
The 'abstraction' I think Alan is talking about is not about how to make the car go faster between A and B - it is about stepping back and actually thinking about the context: what do we actually need? We need a way to transfer messages between A and B.
And pretty soon you come up with the idea of using a membrane to convert voice into electrical signals and send those over a wire.
So you install a cable between A and B and now the kings can chat all day long.
That's a much simpler and more powerful solution - not to the problem you were trying to solve (making the car go faster), but to the problem one scope up: sending messages from A to B.
I don't know if this applies to your post, but I got a lot more out of Alan Kay's talks when I stopped looking for profundity and assumed he's mostly talking about how we're being morons.
What he basically says is: you need an organizational framework that operates on 10-year horizons in order to get a result in 7 years. The people working within that framework need 5-year windows, and you need to hire absolute geniuses and trust that they'll do the right thing.
If you do that, you will get long-term, world-changing inventions relatively cheaply.
I don't think anyone is being a moron for using a staff of less-than-stellar geniuses to get work done in a time frame shorter than 7-10 years.
I believe there's a lot of evidence that we're collectively morons. Consider how close we are to a transformed world. Hunger, wage slavery, environmental destruction, etc. could be a thing of the past if enough of us decided right now to get serious about countering absurd ways of thinking.
Are people really afraid of death? Then we should invite people to organize to solve that. Support each other regardless of gender or other insignificant differences, to accomplish such goals; any menial work can be shared equally. What huge percentage of humanity would go for that?
The moronism comes from a corporate culture that prioritises short-term gain over the invention of ultra-profitable long-term game-changers.
How many of the advances in computing have been inspired by MBAs demanding quarterly growth?
We have no formal tools for valuing our collective future.
Incubators and accelerators come closest, but the emphasis is still on believable prospects for profitability - hence investing in the team, instead of good ideas - and not on open research.
I think the problem is that he uses so much simile and metaphor that you have to work to follow him, and come to expect some kind of philosophical insight as a reward. He goes a long way around the merry-go-round to get to the point, and in the end never really suggests anything practical we can do today.
I understood Alan Kay to be saying that a lot of abstractions are actually making things too simple. I would understand the point to mean "a lower level of abstraction can make the big picture more tractable".
I see where you're coming from, and in a very, very abstract sense it might be true. However, I think there are two different kinds of "abstraction" at play here. The more common is where you have a basic set of building blocks and build abstractions with them. The second is where you adjust the building blocks. That's the one that's profound...and profoundly difficult, I found.
For example, I believe I have discovered that we need a slightly more complex notion of identifier/reference/variable (I call them Polymorphic Identifiers [1][2][3]). Smalltalk says that we only need messages (procedural abstraction) to be polymorphic; identifiers and things like assignment are simple/monomorphic. SELF (another "Power of Simplicity" [4]) essentially said that you don't really need separate identifiers for things at all: all the things that look like identifiers are really message-sends. This makes a lot of sense at first glance, and at second and third. However, there are subtle problems related to the Uniform Access Principle [5], and my contention is that to solve these problems you can't layer abstractions on top; you actually have to go against the grain and make one of the fundamental building blocks (variable access, both read and write) slightly more complicated. And that you actually need the concept "variable access" :-)
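To make the Uniform Access Principle [5] concrete (this illustrates the principle itself, not Polymorphic Identifiers - see the papers for those), here is a minimal sketch in Python with made-up class names: client code reads and writes account.balance the same way whether the balance is stored or computed.

    # Minimal sketch of the Uniform Access Principle, using Python properties.
    # Class and attribute names (AccountV1, AccountV2, balance) are
    # hypothetical illustrations, not taken from the cited papers.

    class AccountV1:
        """balance is plain stored state."""
        def __init__(self, balance):
            self.balance = balance

    class AccountV2:
        """balance is computed from a transaction log, but clients
        still read and write it with the same syntax as before."""
        def __init__(self, transactions=()):
            self._transactions = list(transactions)

        @property
        def balance(self):
            return sum(self._transactions)

        @balance.setter
        def balance(self, target):
            # writes stay uniform too: record whatever adjustment is
            # needed to reach the requested balance
            self._transactions.append(target - self.balance)

    for account in (AccountV1(100), AccountV2([60, 40])):
        account.balance = 150      # same access syntax for both
        print(account.balance)     # prints 150 for both

The point of the paragraph above is that this uniformity has to hold for writes as well as reads, and that even Self-style "everything is a message send" runs into subtle cases where it doesn't - which is what motivates making the read/write primitive itself slightly richer.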
Having worked with PIs for a bit now, I find that this slightly more complicated primitive really does help solve/reduce a lot of complexities in real-world systems. As far as I can tell, a lot of these complexities are fundamentally "storage"-related, but that unifying abstraction is lost because the mechanisms we have available are procedural. Once you have it, you get building blocks similar to shell filters that you can combine to create, as one-liners, variations that would otherwise be many pages of code - similar to McIlroy's shell solution to Jon Bentley's word-count challenge (6 lines of shell vs. Knuth's 12 pages of Pascal/WEB) [6].
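For reference, McIlroy's cited solution [6] is a six-stage shell pipeline (tr | tr | sort | uniq -c | sort -rn | sed). A rough analogue of the "composable filters" point, sketched in Python with made-up function names and assuming the text arrives on stdin:

    # Rough analogue of the word-count pipeline [6], built from small
    # filter-like steps; names and argument handling are illustrative only.
    import re, sys
    from collections import Counter

    def words(text):               # ~ tr -cs A-Za-z '\n' | tr A-Z a-z
        return re.findall(r"[a-z]+", text.lower())

    def top(tokens, k):            # ~ sort | uniq -c | sort -rn | sed kq
        return Counter(tokens).most_common(k)

    if __name__ == "__main__":
        k = int(sys.argv[1]) if len(sys.argv) > 1 else 10
        for word, n in top(words(sys.stdin.read()), k):
            print(n, word)

Run it as, say, python topwords.py 10 < chapter.txt (file name hypothetical).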
However, communicating this appears to be nearly impossible. For example, David Ungar (of Self fame) reviewed my paper and just tore it to shreds, misunderstanding not just the solution, but also the problem, and completely incorrectly claiming that Self already solves those problems. He stopped responding when I showed him the (very simple) examples demonstrating the problem in Self.
> "The big problem with Xerox is they only wanted to make billions. And that's the problem with most companies. When you're doing kind of stuff you're actually in the trillion-dollar range."
The quantity of world-changing tech that came out of the late '70s has never been matched. In fact, it's hard to come up with more than a handful of innovations in tech of the same magnitude that have happened since. http://stackoverflow.com/questions/432922/significant-new-in...
The World Wide Web. Yep, plenty of antecedents - I personally start with Vannevar Bush's Memex for the idea - and there were many attempts with varying levels of un-success, until Tim Berners-Lee put together a winning solution with massively positive network effects.
Which, along with search engines, has resulted in significant, qualitative changes in information publishing and access; in the '70s (or perhaps earlier) Jerry Pournelle was looking forward to the day when you could quickly find the answer to any question that had an answer, and this is one of the biggest steps towards that.
General-purpose technologies (GPTs) are technologies that can affect an entire economy (usually at a national or global level). GPTs have the potential to drastically alter societies through their impact on pre-existing economic and social structures. Examples include the steam engine, railroad, interchangeable parts, electricity, electronics, material handling, mechanization, control theory (automation), the automobile, the computer, and the Internet.
This was the step that made the Internet into a "monster". So even if it only counts as one invention, it's as big as many of those preceding GPTs.
I think the revision of the NSFNet AUP in 1994 (?) to allow commercial use, and the decommissioning of the NSFNet in 1995, was what made the internet into a "monster". Even if people were "only" running NNTP, email, and anonymous FTP, the internet would have taken off massively at that point. This was obvious to me from the time I joined the internet in 1992.
Everything except sovereignty, and some of them are pretty good at working around that.
I'm actually sort of surprised that a company hasn't just "bought out" one or more of the smaller countries -- paying off the citizens to renounce their citizenship, say. It would be easier if you bought two, so as to avoid creating a bunch of stateless people -- pay Country A to grant citizenship to everyone in Country B, and pay the citizens of Country B to renounce their Country B citizenship. You now have Country B to run as you please.
"What's the expense of getting simplicity? [..] You get simplicity by finding a slightly more sophisticated building block to build your theories out of. It's when you go for a simple building block that anyone can understand with common sense that you start screwing yourself right and left."