I really wonder what the software world would look like today if the Smalltalk environment had predominated. It seems like various ideas from Kay's group get reinvented from time to time, but they're never really mainstream. Even the RAD tools from the 90s somewhat fell out of favor, partially due to the rise of the web, which meant reinventing everything in the browser.
From hearing several talks by Alan Kay since then, it doesn't sound like software quite turned out the way he envisioned it, but he still thinks there's a better way to do things, related to what he was thinking about in the 70s: a living environment of little computers sending messages to one another. More recently he's talked about the idea of modeling biological cells.
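To make that concrete, here's a toy Python sketch of the idea as I understand it - little self-contained objects with private state and a mailbox, interacting only by sending messages. The names (Cell, send, step) are mine, not anything Kay actually specified.

    # Illustrative sketch only: objects as "little computers" that interact
    # solely by putting messages in each other's mailboxes. The names
    # (Cell, send, step) are invented for this example, not Kay's design.
    import queue

    class Cell:
        """A tiny 'computer': private state, a mailbox, and message handlers."""
        def __init__(self, name):
            self.name = name
            self._state = {}
            self._mailbox = queue.Queue()

        def send(self, selector, **args):
            # The only way to interact with a Cell is to send it a message.
            self._mailbox.put((selector, args))

        def step(self):
            # Process one pending message, if there is one.
            try:
                selector, args = self._mailbox.get_nowait()
            except queue.Empty:
                return
            handler = getattr(self, "handle_" + selector, self.handle_unknown)
            handler(**args)

        def handle_set(self, key, value):
            self._state[key] = value

        def handle_greet(self, other):
            other.send("set", key="greeting", value="hello from " + self.name)

        def handle_unknown(self, **args):
            print(self.name + ": message not understood")

    a, b = Cell("a"), Cell("b")
    a.send("greet", other=b)
    a.step()   # a processes 'greet' and sends b a 'set' message
    b.step()   # b processes 'set' and updates its private state

Obviously a pale shadow of a live image, but it gets the "everything is a message" flavor across.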
“his conception of the World Wide Web was infinitely tinier and weaker and terrible. His thing was simple enough with other unsophisticated people to wind up becoming a de facto standard, which we’re still suffering from. You know, [HTML is] terrible and most people can’t see it.”
I assume you're posting this to show that Kay "doesn't get it"?
The more I learn about cutting-edge technologies of the 60s, 70s and 80s, the more I'm convinced that he's entirely correct about the web and mobile. Most developers today are obsessed with tooling and features. Almost no one seems to care whether those tools and features have some kind of fundamentally valid idea behind them.
Just look at the proliferation of HTTP security headers for a clear-cut example of how this works out in the long run.
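To see what that proliferation actually looks like, here's a sketch of the header pile a "hardened" response tends to carry today, written against Python's standard http.server. The values are plausible placeholders, not a hardening recommendation.

    # Sketch of the header pile a "hardened" HTTP response accumulates today.
    # Header values are plausible placeholders, not a security recommendation.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    SECURITY_HEADERS = {
        "Content-Security-Policy": "default-src 'self'",
        "Strict-Transport-Security": "max-age=63072000; includeSubDomains",
        "X-Content-Type-Options": "nosniff",
        "X-Frame-Options": "DENY",
        "Referrer-Policy": "no-referrer",
        "Permissions-Policy": "camera=(), microphone=(), geolocation=()",
    }

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = b"hello"
            self.send_response(200)
            self.send_header("Content-Type", "text/plain")
            self.send_header("Content-Length", str(len(body)))
            # Each of these was bolted on years after HTTP shipped, to patch
            # over some class of attack the original design never anticipated.
            for name, value in SECURITY_HEADERS.items():
                self.send_header(name, value)
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8000), Handler).serve_forever()

Each header papers over a hole that couldn't be fixed in the underlying design, which is exactly the kind of accretion Kay complains about.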
Considering a whole Smalltalk VM is smaller than a lot of hero images on websites, I get the feeling that Kay is correct. I would imagine a containerized / sandboxed environment would have developed. I cannot see how it could be worse than today.
I fear it would turn out equivalent. The starting stack might have been better, but the promise of money to be made and competitive pressures would be the same. We would "worse-is-better" ourselves to roughly the same spot we're in today. Bloat, tracking and clickbait.
Whenever I implore people to cut down on software bloat, I know what I'm really asking is, "fight against the market, slow the decay down just a little bit".
Only we'd have gotten to that spot some 30 years earlier, and who knows what we'd have come up with, given such an early head start?
I work in the space of dynamic language runtimes, and it hurts me to see how ideas pioneered by the Smalltalk and subsequently the Self language[1] runtimes haven't yet been adopted by some of the more popular dynamic languages today.
One argument against this is that the web today is not set up for easy authoring by end users (in fact, it's gotten more difficult to "make websites" as the web has progressed). But there was a hypermedia system around three decades ago that had authoring in mind from the start, HyperCard, which could have been a good prototype for a "real web".
If you have authoring as a primary consideration, then perhaps one-directional, consumer-oriented cultures of technology would not be so prominent. But who knows.
I doubt it. Smalltalk had a coherent design that anticipated many needs that the web (and computers in general) faces today. What we now get as ad-hoc, piecemeal additions would instead have been an organic part of the overall infrastructure.
I recently started experimenting with Squeak. It's a bit dated, but holy crap, that thing is amazing. So many good ideas packed in so few megabytes. The startup time alone blows my mind. It's pretty much a virtual machine with a sort-of-operating-system that runs an IDE for all of its own source code, and it loads faster than 95% of all vanilla desktop apps I use today.
They are still using bitmapped graphics under the hood, which explains some of the limitations. The community has shrunk, which means there's no one to take them into the world of vector graphics. However, the Pharo people are implementing a new vector engine called Bloc.
> They are still using bitmapped graphics under the hood, which explains some of the limitations. The community has shrunk, which means there's no one to take them into the world of vector graphics.
Most (all?) commercial OSes don't use vectors for their GUIs either.
I've noticed this myself, and it makes me ashamed of modern computing honestly. It is as though the community has been taken over by some kind of religion that worships technology for its own sake. Technology is supposed to make our lives better, but there seems to be an increasing trend of sacrificing our own interests in service to increasing technological complexity, and these people embrace it.
He has a good talk where he shows that the original Smalltalk implementation had an IDE, graphics, a word processor, a VM, networking features, etc., in something like 10k lines of code. He then shows how big Microsoft Office is, and refers to a visible bug that MS has been trying to fix for 20 years but can't find. The take-home idea is that we've made things far more complex than we needed to.
Computer programming was better when there were constraints on it. Now that chips really can't get much smaller and Moore's Law is effectively dead, we are entering a new era of programming. Efficiency is going to be the low-hanging fruit for getting better programming solutions.
When I look at what was done with the C64 and the Amiga, I am shocked at what that doomed toy company was able to do. The Amiga was easily 6 years ahead of MS and Apple at the time.
The Amiga's solution to the constraint was custom chips for graphics and sound that we could program easily. The C64 in assembly was so simple you really could pretty much hold the whole machine in your head. If you look at id and Doom working within the 640K limitation, it is pretty amazing stuff.
Memory became "free" and people just ran with whatever they wanted. Parallel computing is proving not to be the easiest thing to implement, but efficient use of memory and cycles will become king on big projects.
Having personally experienced that era, it always saddens me to see an xterm maximized to full screen, with text-mode vi or emacs, replicating the same experience I had with actual VT100 terminals almost 30 years ago.
That is why I live in ecosystems where the communities love their IDEs.
I like the idea of software, libraries and application code all being accessible to each other and able to call each other. Too much functionality is siloed behind application A or B, and the interface exposed by these applications is often woefully inadequate for anything interesting. You see this a lot with smaller projects, where they need to focus on getting the product feature-complete, as well as with much larger projects, where lock-in becomes a supposed requirement. The space in the middle seems to be ever shrinking.
I like the idea of everything being written in the same language and the code being accessible to anyone who is interested. That you can then open any running code in a debugger to see how it works, or in the integrated IDE to alter its behavior, is very appealing. For sure, the ability to edit anything at any time can lead to instability, but there are ways to ameliorate that without being as dramatically inaccessible as today's environment.
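As a pale analogy in Python (a sketch only - the names are made up, and this is nothing like Smalltalk's actual tooling): most dynamic runtimes already let you redefine behavior on a live class while instances are running, which hints at both the appeal and the instability risk.

    # Rough analogy to editing code in a live image: redefining a method on a
    # class while instances are already in use. All names are made up.

    class Mailer:
        def subject_line(self, sender):
            return "Message from " + sender

    inbox = Mailer()
    print(inbox.subject_line("alice"))      # Message from alice

    # "Open the running code and alter its behavior": patch the class in place.
    def noisy_subject_line(self, sender):
        return "[external] Message from " + sender.upper()

    Mailer.subject_line = noisy_subject_line
    print(inbox.subject_line("alice"))      # [external] Message from ALICE

    # The instability risk: nothing stops a patch that breaks every caller.
    Mailer.subject_line = lambda self, sender: None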
I do feel that even in a modern Linux desktop environment (the only place where we might expect to see any Smalltalk-like accessibility and functionality), so much work needs to be done slogging through different languages to figure out how things work. Your email application might be developed in Vala; a lot of library code is in C or C++. Many tools turn out to be written in Python or Ruby, or even in a shell scripting language. While it makes things more interesting for the career software developer, it's clearly making things harder for the end-user who needs to alter a subtle behavior in an application they use every day; so much so that the idea of actually making that change seems foreign to, well, everyone.
Smalltalk has its obvious problems and warts, and I can see why we aren't using it today. But in many ways, the Smalltalk we are looking at today is the very same Smalltalk detailed in the article, which is very similar to the Smalltalk of 1984. Work has been done, and I do not wish to throw shade on projects like Pharo, which are doing some heavy lifting with their work on the language, compiler and runtime. But it seems to me that Smalltalk as a whole is moving away from the "anyone can use a computer" target and more towards developers, where it may well end up another provider of a single application that is siloed away from the rest of the customer's environment, rather than being that environment itself.
> I like the idea of software, libraries and application code all being accessible to each other and able to call each other. Too much functionality is siloed behind application A or B, and the interface exposed by these applications is often woefully inadequate for anything interesting.
Maybe I'm skeptical, but I've gotten to the point where I believe that this specific problem is not going to be solved anytime soon.
Here's how I see it. I believe that unification, specifically within software, is universally based on the idea that we can clean up the world (where "the world" refers to a given software environment) if only we can reimagine it clearly enough and, to borrow a few technical concepts, compute a delta with enough detail and resolution that it'll carry us from where we are to where we want to be. (With the implicit [mis]understanding that all we need to do is make said delta exist, and then it'll do all the work of carrying us.)
I think the above is what we're trying to do - and that we're doing it because we're staring at/barking up the wrong set of trees (the associated forests of which got lost long ago).
Ultimately, the world is a disorganized mess. Any security researcher could entertain you for days with highly detailed, hair-raising examples and proofs of this. I've very recently (finally) realized this is because, at a very high level, the world operates somewhat similarly to the bazaar model Linux uses. Evolution is highly organic, often driven by social effects. There's no sort of cohesion (or "elegance", for want of a better word) that carries from the standpoint of some superscalar bigger picture all the way down to the smallest detail, and if anything is cohesive it only acts within the reach of some small-scale focus, is often further limited by the scope and motivation of that focus, and always ends up conflicting with _something_. A good example of this is software security - there's always going to be a way to break things, no matter what kind of protections are applied.
After quite a few years' confused and anxious pondering, I've reached the conclusion that the planning parts of our brains that we use to architect scope-sensitive solutions to problems don't have limit switches, and will in effect run forever if we let them. This is a problem because they will also never report any sort of ground-truth that could be used as an objective sense of progress - which makes a lot of sense if the problem(s) being studied themselves are forever in motion and never stay put. Furthermore, these parts of our brains seem to have no cognitive load cost metric; "dreaming up castles in the sky" is something we have the potential to enthusiastically engage in all day; the sense of motion we seem to get from this seems to satisfy our "I'm doing something productive!" threshold. When all of these things come together, it's obvious that, sadly, some fall into setting all of their focus and attention knobs to prioritize this "planning and scheming" style of thought, at the cost of incorporating many other styles of architectural study.
For me, one new style I try to prioritize is (as might be obvious) considering the bigger picture, and particularly the lateral subjective connections and the network-effect/unintuitive/unintended ways things can influence each other. I read a comment on here some time ago that left an impression and got me started on this track. It was talking about how it's a highly effective use of one's time to become the best X within a given Y vertical - for example, being the best data scientist/programmer within a group of etymologists, or the best woodworker amongst a pile of photography chefs, or the best psychologist within the software development community. I think this is a remarkable approach for a couple of reasons. Firstly, it mitigates the "front and center" style of focus that seems to be so incredibly devastating to mental organization: because I'm looking at X from the standpoint of Y, I'm not organizing/weighing/judging X by its own merits, memes, culture and hangups, so I'll likely have a highly effective, easygoing outlook on it, much more so than if I were to go "righto then, let's go be a psychologist" *hefts Atlas's burden*. Secondly, because I'm coming at this from a different set of mental models and perspectives, I'm basically getting some really cool procedural cross-pollination for free. This is the kind of mental hacking I think is cool.
Let me fold back to what you were saying before and stitch together a narrative out of some of the things you've said - I hope I'm not taking your words out of context here:
> I like the idea of software, libraries and application code all being accessible to each other and able to call each other. Too much functionality is siloed behind application A or B, and the interface exposed by these applications is often woefully inadequate for anything interesting. ...
> I like the idea of everything being written in the same language and the code being accessible to anyone who is interested. That you can then open any running code in a debugger to see how it works, or in the integrated IDE to alter its behavior, is very appealing.
> ... Your email application might be developed in Vala; a lot of library code is in C or C++. Many tools turn out to be written in Python or Ruby, or even in a shell scripting language. While it makes things more interesting for the career software developer, it's clearly making things harder for the end-user who needs to alter a subtle behavior in an application they use every day; so much so that the idea of actually making that change seems foreign to, well, everyone.
Right. I think I'll address the language thing first.
So, first of all, we have: why so many languages? Why not just one?
Of course, this is a loaded question with some unspoken context. The real answer I want is, how can we eliminate all the languages out there?
Well, each language was created to solve a given problem. So now we need a rough mental sketch that covers all of that history - not all of it of course; perhaps enough to fit on an allegorical napkin.
Vala was written to solve the problem of easily creating applications on Linux. The unspoken sentiment was that it wanted to squash C's brittleness and make software development more straightforward. So it has a focus on accessibility.
C/C++ were of course grown very slowly (and you might even say haphazardly), with zero 20/20-equivalent foresight. As a relevant example http://pzemtsov.github.io/2016/11/06/bug-story-alignment-on-... popped up on here the other day; it covers a good example of how the C standard is unintuitive and hard to follow at the best of times, and also gives a sense of how a C compiler really is a thousand specifications tacked together that (amazingly) don't result in programs that spray singularities and black holes upon process creation. That organization grew out of slow iteration and evolution, not planning; as such we don't have the mental modeling ability to follow and reason about it very easily. (To reiterate, that doesn't mean there's _no_ high-level organization, of course.)
Python was written with the vision of a systems-capable, batteries-included variant of the ABC teaching programming language.
Ruby was written as a more-readable alternative to Perl, with care and attention paid to high-level language design and organization.
Python and Ruby both overwhelmingly occupy the tail end of benchmark graphs. :(
Python has scaled out so much that its creator left the project only a few weeks ago, IIUC citing bus factor and the need to preserve sanity and manage/mitigate burnout.
Ruby's focus on organization produced a language with a perceived high-level elegance; the "web designer" crowd of the early turn of the century bolted it onto Apache with initially heavily-hyped results. For some reason the current trending developer aesthetic seems to be relabeling medium-level languages as low-level languages, so we have people doing "suuper-looow level stuff" in Go, which lends itself heavily toward writing application servers.
Vala was written to make programming more accessible for everyone, but the status quo that has developed around it has kept it tied to the GNOME project, even though it can generate code for other platforms.
All of this is very interesting.
Python was written to "be useful"; it became so useful its creator had to run away from the ensuing chaos produced by all the people successfully using it.
Ruby was written with a particular sense of design aesthetic; it remained "in vogue" until appreciation for the aesthetic it used wore off, and now Ruby occupies this incredibly confusing space that seems to return true for all of "apathetic", "why are you even using this" and "it works really well". Real-world example (that might fall flat): you may be aware that GitHub is pretty much all Ruby on Rails. (...While GitLab is Go. Haha.)
Bash wasn't written. It grew. It evolved. It metamorphosed. It... shapeshifted? It grew some more, and partially fell over. Hmm, I'm reminded of one of the login fortune-quotes I saw earlier today!
> It took 300 years to build and by the time it was 10% built, everyone knew it would be a total disaster. But by then the investment was so big they felt compelled to go on. Since its completion, it has cost a fortune to maintain and is still in danger of collapsing. There are at present no plans to replace it, since it was never really needed in the first place. I expect every installation has its own pet software which is analogous to the above.
> -- K.E. Iverson, on the Leaning Tower of Pisa
Except with bash there was no actual intention to build anything in the first place. So, lacking a purpose, direction, or design spec, it... gained ad-hoc status as the first-class program we use to interact with our machines more than anything else. (What's X? :P)
Hahaha. Yeah, that totally makes sense, and explains UNIX a lot. Really shows the bazaar mentality is pretty old, too.
Okay I'm making bash - and shell{s, syntax} in general - look bad. Concatenative languages are really interesting IMO. I can do things in bash that take much more LOC in an actual programming language. (Case in point: I needed to port a shell script that was behaving erratically (due to misuse of subprocess control hacks :D) and rewrite it in an "actual" programming language, and it promptly grew from 114 to 780 lines.) Also - I've never used a functional language in my life -- yet I have.
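To make the line-count point concrete, here's an invented example (not the script I actually ported): the kind of job that's a one-liner in the shell, written out as an "actual" program in Python.

    # Invented example of the kind of growth described above: the shell
    # one-liner `grep ERROR app.log | sort | uniq -c | sort -rn | head -3`
    # written out in Python. "app.log" is a hypothetical input file.
    from collections import Counter

    def top_errors(path, n=3):
        counts = Counter()
        with open(path) as f:
            for line in f:
                if "ERROR" in line:
                    counts[line.rstrip("\n")] += 1
        return counts.most_common(n)

    if __name__ == "__main__":
        for line, count in top_errors("app.log"):
            print("%7d %s" % (count, line))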
Without purpose or direction, bash seems to have converged/settled upon some reasonably-global balance that permits "a reasonably fair bit" in fewer keystrokes than other languages require - because it is, of course, not a language, but a shell environment. (Now that I think about it, I wonder if bash hasn't unintentionally picked up on the "be the best X in the given {{A..W},YZ} vertical" idea I was talking about earlier.)
I continue to look (honestly, genuinely) for something that allows me to interactively control a computer in the same minimal number of keystrokes that bash et al do. I haven't found anything similar up to now. (K is interesting but I haven't Van Gogh-ified my brain enough to be able to squint at it correctly... yet :P)
The reason for the outline above is to illustrate the extreme difficulty of writing One True Language To Rule Them All. We simply aren't capable of reaching that far - of correctly color-coordinating our brick fences from 10,000 ft up, if you will. The languages mentioned all had different focuses, differing origin stories, different motivations, different goals... and different userbases, different cultures, different community cliques (which surely influenced the language in good and bad ways), different sets of tradeoffs.
And this is why I say, we have to let go. Treating the world as suitable input to feed to a mental model of a REPL will leave us stuck at either the R or E stages (depending on how you look at it) for ever and ever and ever, because the input is constantly moving and changing.
I'm not yet sure how this works, to be honest. We take the slightly shorter view, we make things, and they're cool, and they have a real impact within the scope they target... and from the perspective of the bigger picture it'll all continue to be relative. This is admittedly the bit where I don't know how to emerge with a sense of grounded orientation AND a positive outlook; I've reached the point I describe above, but I don't have any sort of rationalization to justify one particular subset and set of compromises over another. I guess I can say that I've emphasized unbiasedness and not wanting to short-sightedly focus on "just do what makes you happy" at the cost of indecisiveness :S
One other thing.
> For sure, the ability to edit anything at any time can lead to instability, but there are ways to ameliorate that without being as dramatically inaccessible as today's environment.
This is something I got stuck on for about... 6-7 years. The idea that it might be possible to create an environment where you really could edit things in such a way that the result was generally useful. Not perfect, but generally useful.
Heheh. Remember "mashups"? The idea that you could combine bits of data from around the Web onto one webpage? I think that idea kind of happened, but the nicely cohesive organized notion that it'd let us control the data flying around - and even lay it out - kind of never eventuated, and with perhaps obvious reason: such ideas require full-stack integration, down to the smallest detail, in order to work.
Knowing this, I set my sights on minimal levels of accessibility - enough to let people make it look like they put their screens through a blender and customize things fairly extensively, without needing to write code to do special-case rendering; i.e. applications would be "editable" enough to customize them appreciably.
I eventually reached the conclusion that the amount of effort required to architect such a system... and then all the applications for it... would get us to somewhere only slightly north of where we are now. Better, most definitely, and maybe even better enough to justify the cost, but in terms of this idea having a magic breaking-even point? No.
I thought, okay, Facebook is pretty widely hated, but a lot of people use it every day, and they don't seem to hate it enough to not use it, so let's say we apply the formulaic, cookie-cutter model they use for UX, and let's say it catches on. Information would need to be expressed in specific ways so that the UX engine could pick it up and render it correctly.
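Loosely, the shape that implies - sketched in Python with invented names, and nothing to do with how Facebook actually builds things - is that content gets expressed as typed records and a single rendering engine decides how each type looks; anything it doesn't know about simply can't appear.

    # Sketch of the "cookie-cutter UX engine" idea: content expressed as typed
    # records, rendering centralized in one engine. All names are invented.

    RENDERERS = {}

    def renders(kind):
        def register(fn):
            RENDERERS[kind] = fn
            return fn
        return register

    @renders("status")
    def render_status(item):
        return item["author"] + ": " + item["text"]

    @renders("photo")
    def render_photo(item):
        return "[photo] " + item["caption"] + " (" + item["url"] + ")"

    def render_feed(items):
        # Anything the engine doesn't recognize simply can't be shown --
        # which is exactly the constraint such a system imposes on authors.
        unsupported = lambda item: "[unsupported content]"
        return "\n".join(
            RENDERERS.get(item["kind"], unsupported)(item) for item in items
        )

    feed = [
        {"kind": "status", "author": "alice", "text": "hello"},
        {"kind": "photo", "caption": "sunset", "url": "https://example.com/s.jpg"},
        {"kind": "poll", "question": "tabs or spaces?"},
    ]
    print(render_feed(feed))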
And I decided that such a system would only possibly wind up "universally useful", which is the goal I'm aiming for. I can't really assure myself that such an idea would definitely work out.
NB. Bug #2691 in Arc: posting new comments can sometimes result in "that comment is too long", but I've discovered there are no size restrictions for edits. :>