Note: this response deals with this question from a consumer OS perspective; the answer with regard to servers is quite different.
The funny thing about the history of programming languages is that, much like the history of human societies, if you don't know what actually happened it's easy to get it completely wrong by logical analysis. For example, someone who knows nothing about our history might naturally assume that we started with monarchies and then moved on to republics, etc. Only after reading about Rome would they see how messy it really was to get to where we are today.
Similarly, it's easy to think that C was the "first" language and that we then gradually developed more dynamic ideas. But the reality is that C showed up 14 years after Lisp.
All this to say that if you want to truly understand "where we're going", you have to understand that most of the problems we are trying to "fix" are really "fake" or "man-made" problems (from an engineering perspective, that is). Making a program that runs on every architecture is not difficult. C++ wasn't invented to solve this problem, nor did it make things any easier in this department. The reason a program that runs on Mac doesn't run on Windows is that the companies that own the underlying systems don't want it to (I suppose you could thus argue that the real reason is that we allow laws to make this possible). It's not that Windows can't handle drawing Macintosh-looking buttons or something.
Yes, yes, it's certainly the case that at some point it would have been annoying to ship both a PPC bundle and an x86 bundle even if the API problem had magically not existed -- but the reality is that today basically everyone is running the same architecture for all intents and purposes (90+% of your consumer desktop base is on x86 and 90+% of your smartphone base is on ARM).
So the real problem now becomes making a program that runs on Linux/Mac/Windows (or alternatively iOS/Android) that "feels" right on all of them. This remains an unsolved problem, and is arguably why Java failed. Java figured out the technical side just fine (again, it is NOT hard to make something that runs on any piece of metal); the trouble was that Java apps "felt" terrible. Microsoft, meanwhile, correctly understood that Java was a threat and hampered it. As you can see, none of this has anything to do with engineering.
So why is this not the case with JavaScript? Again, for no technical reason: the answer is purely cultural. The trick with JS was that it snuck in through the browser where two important coincidences took place:
1. People didn't have an existing expectation of homogeneity in the browser. It started as a place to post a bunch of documents, and so people got used to different websites looking different. No one complains if website A has different buttons or behaviors than website B, so as an accident of history developers were able to keep adding functionality without complaints that a website "didn't feel right on Mac OS".
2. No large corporation was able to properly assess the danger of JS early enough to kill it. Again, since the web was a place for teens to make emo blogs, it wasn't kept in check like Java was. Hell, Microsoft added XMLHttpRequest: arguably the most important component in the rise of web apps.
So now everything is about leveraging this existing foothold; that's why we bend over backwards to get low-level languages to compile into high-level languages and so forth. Don't try to think of it as a logical progression, or as how you would design the whole system from the ground up; treat it more like a challenge, or an unnecessarily complex logic puzzle.
3. A sea change has taken place with the explosive growth of smartphones, tablets, and other devices. These devices have GUIs that are very different from Mac and Windows, which means that almost all users of Macs and Windows have gotten used to dealing with a much wider variety of UIs than was the case in the early life of Java. Even those who refuse to like any UI alternative unless Jony Ive tells them to have to deal with more than one UI style, and with smart TVs, smart refrigerators, smart cameras, etc., soon everyone will.
(Now, if we could declare the placing of land mines on Web pages a crime against humanity, I would be okay with most other GUI varieties. By land mines I mean :hover effects that blow up in your face and cover what you are trying to look at just because your mouse happened to pass by, forcing the mouse to run around looking for a way to make them go away. Hovers that change the appearance of an element in place: fine. Elements, such as menus, that expand when explicitly clicked and even continue unfolding with a hover thereafter: fine. But elements that jump out of nowhere and block the page just because your mouse was randomly there when the page opened, or passed by on the way across the page to something else: FAIL.)
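To put the distinction in code, here's a minimal JavaScript sketch (the element ids and the open/close logic are invented for illustration, not taken from any real page):

    const menu = document.getElementById('menu');        // hypothetical trigger
    const panel = document.getElementById('menu-panel'); // hypothetical flyout
    let opened = false;

    // Fine: nothing unfolds until the user explicitly clicks.
    menu.addEventListener('click', () => {
      opened = true;
      panel.hidden = false;
    });

    // Fine: once the user has opted in, hover may continue the unfolding.
    menu.addEventListener('mouseenter', () => {
      if (opened) panel.hidden = false;
    });

    // The land mine: the panel opens just because the pointer passed by.
    // menu.addEventListener('mouseenter', () => { panel.hidden = false; });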
"Its not that Windows can't handle drawing Macintosh looking buttons or something."
It's an unfortunate misconception about Mac OS X's UI that it's just a pretty skin. It really isn't, and this is why people complain about things not 'feeling' right.
OS X's UI is all about exposing direct manipulation. When you change a setting in OS X, its effects are supposed to ripple through the UI instantly, without your having to click "OK". There's no "Cancel" either, because the cancel action should be obvious: simply uncheck whatever you just checked.
In other words, OS X acts like a light switch: you should feel free to flip it on and off as much as you like. Windows acts like a light switch that not only asks you to confirm every flip, but whose panel closes itself every time you change something, so you have to open it all up again.
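To make the light-switch model concrete, a tiny browser JavaScript sketch (the checkbox id and applyTheme are invented stand-ins; this is the shape of the idea, not how OS X actually implements it):

    const darkMode = document.querySelector('#dark-mode'); // hypothetical checkbox

    function applyTheme(name) {
      document.body.className = name; // stand-in for whatever the setting really does
    }

    // Direct manipulation: the change takes effect the instant you flip it.
    // "Cancel" is just flipping it back; there is no staged state and no OK.
    darkMode.addEventListener('change', () => {
      applyTheme(darkMode.checked ? 'dark' : 'light');
    });

The dialog model, by contrast, buffers every change in some pending state and applies nothing until an OK button is found and clicked.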
The evolution of web apps in the 2000s is explained by the fact that web developers switched to Mac en masse. As a result, web apps slowly but surely adopted a similar model. They didn't just become prettier; they drastically simplified their interaction model, moving away from "open new form, fill out form, submit form, see result" toward a text view turning into a text field and back again, all without ever leaving the page.
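That pattern is simple enough to sketch in a few lines (ids invented; a real app would also persist the edit somewhere):

    // A text view turns into a text field on click, then back again on blur,
    // all without ever leaving the page.
    const label = document.getElementById('title'); // hypothetical <span>

    label.addEventListener('click', () => {
      const input = document.createElement('input');
      input.value = label.textContent;
      label.replaceWith(input);
      input.focus();

      input.addEventListener('blur', () => {
        label.textContent = input.value; // keep the edit
        input.replaceWith(label);        // back to plain text, in place
      });
    });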
There's basically an entire contingent of programmers who are utterly blind to all this. They operate computers using the mental model of how the hardware and software works, and then laugh at users who feel it makes no sense. In fact, it is the computer that makes no sense, and doesn't act like any other object in your house.
So I disagree that it has nothing to do with engineering. It really does, but it's the kind of engineering that simply isn't on most programmers' radars. It solves problems they don't have, because they work around them in their heads every time they use the computer, so the computer doesn't have to.
Case in point: I go to install a game on Steam. It tells me I don't have enough disk space available. I go remove some files to make room. The total in Steam's dialog does not update. Their "disk space available" display is basically a lie, yet it's a lie that people simply accept as "how things are".
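The fix isn't exotic: the display just has to track reality instead of caching a snapshot. A sketch (getFreeDiskSpace and statusEl are hypothetical stand-ins; the real query is platform-specific):

    // Re-read the actual number instead of showing a stale snapshot.
    async function refreshFreeSpace() {
      const bytes = await getFreeDiskSpace(); // hypothetical platform call
      statusEl.textContent = `${bytes} bytes free`;
    }

    // Re-check when the user comes back to the window (say, after deleting
    // files elsewhere), or simply poll while the dialog is open.
    window.addEventListener('focus', refreshFreeSpace);
    setInterval(refreshFreeSpace, 5000);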