After thinking about it for a while, I struggled to get anything profound out of it. I feel his point is equivalent to saying "a higher level of abstraction can make problems more tractable" which is something we do all the time in all contexts.
Finding good abstractions is an art, but I don't think it really maps to insights like Copernicus's. I see a slightly different process happening: discovering a new perspective on a problem. A new perspective can be a different way to decompose a problem, but I don't think it says anything about the sophistication of the building blocks. Instead, a revealing change often works by radically changing the type of relationships between the building blocks, which enables new inferences.
Suppose you need to get from A to B and back. You need to do it often, you need to do it fast.
Your cargo: written notes that the two kings (of A and B respectively) exchange.
So you try to solve the problem by getting faster cars, building better roads, hiring the best drivers - all with the goal of getting the notes from king A to king B as fast as possible.
---
The 'abstraction' I think Alan is talking about is not about how to make the car go faster between A and B - it is about stepping back and actually thinking about the context: what do we actually need? We need a way to transfer messages between A and B.
And pretty soon you come up with the idea of using a membrane to convert voice into electrical signals and send those over a wire.
So you install a cable between A and B and now the kings can chat all day long.
That's a much simpler and more powerful solution - not to the problem you were trying to solve (making the car go faster), but to the problem in the enclosing scope: sending messages from A to B.
Don't know if this applies to your post, but I got a lot more out of Alan Kay's talks when I stopped trying to look for profundity and assumed he's mostly talking about how we're being morons.
What he basically says is: you need an organizational framework that works 10 years at a time in order to get a result in 7 years. The people working in this framework need 5-year windows, and you need to hire absolute geniuses and trust that they'll do the right thing.
If you do that, you will get long-term, world-changing inventions relatively cheaply.
I don't think anyone is being a moron for utilizing a staff of less-than-stellar geniuses to get work done in a time frame shorter than 7-10 years.
I believe there's a lot of evidence we're collectively morons. Consider how close we are to a transformed world. Hunger, wage slavery, environmental destruction, etc. could be a thing of the past if enough of us decided right now to be serious about countering absurd ways of thinking.
Are people really afraid of death? Then we should invite people to organize to solve that. Support each other regardless of gender or other insignificant differences, to accomplish such goals; any menial work can be shared equally. What huge percentage of humanity would go for that?
The moronism comes from a corporate culture that prioritises short-term gain over the invention of ultra-profitable long-term game-changers.
How many of the advances in computing have been inspired by MBAs demanding quarterly growth?
We have no formal tools for valuing our collective future.
Incubators and accelerators come closest, but the emphasis is still on believable prospects for profitability - hence investing in the team, instead of good ideas - and not on open research.
I think the problem is he uses so much simile and metaphor that you have to work to follow him, and come to expect some kind of philosophical insight as a reward. He goes a long way around the merry-go-round to get to the point, and in the end never really suggests anything practical we can do today.
I understood Alan Kay to be saying that a lot of abstractions are actually making things too simple. I would understand the point to mean "a lower level of abstraction can make the big picture more tractable".
I see where you're coming from, and in a very, very abstract sense it might be true. However, I think there are two different kinds of "abstraction" at play here. The more common is where you have a basic set of building blocks and build abstractions with them. The second is where you adjust the building blocks. That's the one that's profound...and profoundly difficult, I found.
For example, I believe I have discovered that we need a slightly more complex notion of identifier/reference/variable (I call them Polymorphic Identifiers [1][2][3]). Smalltalk says that we only need messages (procedural abstraction) to be polymorphic; identifiers and things like assignment are simple/monomorphic. SELF (another "Power of Simplicity" [4]) essentially said that you don't really need separate identifiers for things at all: all the things that look like identifiers are really message-sends. This makes a lot of sense at first glance, and at second and third. However, there are subtle problems related to the Uniform Access Principle [5], and my contention is that to solve these problems you can't layer abstractions on top; you actually have to go against the grain and make one of the fundamental building blocks (variable access, both read and write) slightly more complicated. And that you actually need the concept "variable access" :-)
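For readers who haven't run into the Uniform Access Principle before: the idea is that client code shouldn't reveal whether a value is stored or computed. A minimal Python sketch (class and attribute names are mine, purely illustrative) shows the principle on the read side, and also hints at the read/write asymmetry being discussed - assignment needs a separate mechanism:

```python
# Uniform Access Principle: callers should not be able to tell whether
# `temperature` is a stored slot or a computed value.
class Stored:
    def __init__(self):
        self.temperature = 20          # plain stored attribute

class Computed:
    @property
    def temperature(self):             # computed on access, same client syntax
        return 15 + 5

def read(sensor):
    return sensor.temperature          # caller cannot tell which kind it got

assert read(Stored()) == read(Computed()) == 20

# But the write side is a distinct mechanism: assigning to the computed
# attribute fails unless a separate setter is also defined, so reads and
# writes are not symmetric primitives.
try:
    Computed().temperature = 25
except AttributeError:
    pass                               # no setter: the abstraction leaks on assignment
```

This is only an analogy in Python's terms, not a claim about how Self or Polymorphic Identifiers handle the issue; it just illustrates why treating "variable access" (read and write together) as a single first-class concept is a different design choice than layering getters and setters on top.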
Having worked with PIs for a bit now, it looks like this slightly more complicated primitive really does help solve or reduce a lot of complexities in real-world systems. As far as I can tell, a lot of these complexities are fundamentally "storage"-related, but that unifying abstraction is lost because the mechanisms we have available are procedural. Once you have it, you get building blocks similar to shell filters that you can combine to create variations as one-liners that would otherwise be many pages of code, similar to McIlroy's shell solution to Jon Bentley's word-count challenge (6 lines of shell vs. Knuth's 12 pages of Pascal/WEB) [6].
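For context, McIlroy's famous pipeline composed small filters (`tr`, `sort`, `uniq -c`, `sort -rn`, `sed ${k}q`) to list the k most frequent words. The same filter-composition spirit can be sketched in a few lines of Python (function name `top_words` is mine, not from the original challenge):

```python
import re
from collections import Counter

def top_words(text, k):
    # lowercase, split into alphabetic runs, count, take the k most common -
    # roughly the tr | sort | uniq -c | sort -rn | sed ${k}q pipeline
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words).most_common(k)

# 'the' tops the list with count 3 in this sample
print(top_words("the quick brown fox jumps over the lazy dog the", 2))
```

The point of the comparison is not line count per se but that each stage is a small, reusable transformation; variations (case-sensitive, different tokenization) are one-line tweaks rather than structural rewrites.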
However, communicating this appears to be nearly impossible. For example, David Ungar (of Self fame) reviewed my paper and just tore it to shreds, misunderstanding not just the solution, but also the problem, and completely incorrectly claiming that Self already solves those problems. He stopped responding when I showed him the (very simple) examples demonstrating the problem in Self.