I still think we're missing the point; we're focusing on the wrong benchmarks. We're putting too much emphasis on JS speed, when in reality most web apps lag behind native because of a lack of (perceived) graphics performance.
The shortcut out of this requires a greater variety of GPU-accelerated CSS transitions/DOM changes, as well as easier ways to move computation and DOM manipulation off the main thread, where they cause horribly noticeable UI hiccups. Web Workers are still too primitive (e.g. no DOM access whatsoever) and slow (e.g. no shared memory).
Not saying it's unimportant to improve JS's CPU performance; just saying that we're focusing too much on the wrong battle in the war against native.
WebGL anyone? This is what does most of the heavy lifting in the main asm.js use case: games.
I am still waiting to see a UI framework built using asm.js and WebGL. Maybe even an existing native framework ported over; something like WPF/E (Silverlight) might be about as difficult to port as a game engine.
Asm.js still has no solution to shared memory threads, which will be an issue for any modern game engine I would imagine.
>Asm.js still has no solution to shared memory threads, which will be an issue for any modern game engine I would imagine.
This feels to me like the elephant in the room, not just for asm.js, but for JS in general. We have Web Workers, and that sounds like a solution, so it dampened the discussion of parallelism on the web. But in a lot of situations, Web Workers just don't cut it. Copying memory back and forth is just too inefficient.
I agree. I can appreciate the idea of share-nothing threading (web workers), which is more like multi-process than multi-threading; it's probably the better model for many of the threading use cases out there. However, there are performance-critical scenarios where lightweight shared-memory threads will always win, and they will be required if JS is to be a valid compile target for C/C++ code bases.
If there's a way to do zero-copy message passing in web workers (or equivalently, overhead-free transfer of object ownership between threads), shared memory may be unnecessary.
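For ArrayBuffers this largely exists already in the form of Transferable objects (mentioned further down in the thread). A minimal sketch, assuming a worker.js that knows how to handle the message:

    // Transferring (rather than copying) an ArrayBuffer to a worker.
    // Listing the buffer in postMessage's transfer list moves ownership;
    // afterwards the buffer is "neutered" on the sending side.
    var worker = new Worker("worker.js");          // worker.js assumed to exist
    var buffer = new Float64Array(1 << 20).buffer; // 8 MB of data

    worker.postMessage({ cmd: "process", payload: buffer }, [buffer]);
    console.log(buffer.byteLength); // 0 -- this side no longer owns the data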
That helps, but it still means you can only have the data on one thread at a time. What would be a massive help would be if you could split a buffer into non-overlapping views, and transfer each view to a separate worker. Some algorithms would still be challenging or impossible to implement this way, like parallel prefix sum algorithms, but it would still greatly widen the number of things you could do in parallel.
Yes, at some layer of abstraction there will be the possibility of sharing memory. It's probably better to keep it behind the scenes for all but the most CPU intensive uses.
Why do you think WebCL will be widely deployed in 4 years? I used to think that, but now I'm not so sure. It sure hasn't been given a warm welcome by any of the major browsers. I'm not sure I share your optimism.
I do agree that Transferable Objects will be a tremendous boon, and should address all the shared memory thread complaints elsewhere in these comments.
I had forgotten about RiverTrail. Very cool project, but is it even still in active development? Commits seemed to have trailed off.
Because WebGL will be based off of WebCL, WebGL is a subset targeted at graphics.
And change is accelerating; everything in the future happens faster. So there will be more people to implement WebCL than there were to implement WebGL. More and more people are using advanced web apps, so the pressure to make things faster is higher.
The 'war on native' has multiple fronts and this post is just reporting on one of them. For many apps, I agree that the items you've mentioned are the most significant, and progress is also being made on these fronts (e.g., GPU acceleration of some CSS transitions is already in Firefox; more and more DOM APIs are being exposed to workers). For other apps, though, especially ones using WebGL like games or authoring tools, raw computational throughput is the most significant.
Web workers will never have DOM access. They run in a separate process with a separate URL. That's not an issue. You use postMessage to communicate with the workers.
I'm pretty sure OP knows about postMessage. That wasn't his point.
The only way to pass data is through message passing - but the message passing takes only strings. Just object serialization and deserialization in JS is slow enough to make it infeasible to do large-scale transfer between the two without paying for transmuting the data.
We need immutable object passing for performance. Or locked mutable object passing (like Rust).
And why not the ability to modify DOM elements with the repaints happening on the main thread?
I had exactly this issue when trying to make my OpenSCAD port multithreaded in asm.js - the results of each worker took longer to serialise/deserialise than the time saved by doing them in parallel.
Unfortunately I couldn't even use transferable objects, as the C++ objects in question were CGAL nef polys, and once copied across they would be garbage. It would work, though, if you have objects you can allocate into a specific buffer (using what I've since discovered is called a 'bump allocator').
I think locking the DOM might not be too difficult. To have multi-threaded accessors/getters would be insanely tough given the size of the existing code-base. But a single exclusive lock over the DOM could work.
There are several problems with that. One is that the web worker would block all user input, including scrolling, while you were modifying the DOM. Also, the locking would absolutely have to be manual. Locking before each DOM modification and then unlocking after would be disastrous, since you'd have no guarantees about the DOM's state from one call to the next. And once you introduce manual locking, you're largely defeating all of the safety guaranteed by workers in the first place.
Or think of it this way: If you can lock the DOM reliably and efficiently, then you can write code that shares memory by storing data in the DOM, rather than copying it between workers. That would get you most of what you get from ordinary threads, albeit with only one lock available. This would be such a huge convenience and performance win, that if it were possible, they'd have done it to allow memory sharing without going through the DOM at all. That right there should tell you that allowing DOM access from workers by locking is impractical.
1) I understand the C/C++ -> LLVM -> Emscripten -> asm.js process. But I heard they're also working on supporting GC languages (like Java and Go). How would this work exactly? Wouldn't they first have to port the entire JVM or Go runtime into asm.js? And every time a Java/Go -> asm.js program is downloaded, it would basically also download the entire JVM/Go runtime as well?
2) Would it be possible to use GUI frameworks (like Qt for C++ and maybe Swing for Java in the future) to build GUIs and directly output to canvas?
For (1): You're right that an option is to compile the VM itself to asm.js (since the VM is usually written in C/C++; JITs are an obvious exception since they generate machine code at runtime). This has already been done, e.g., for Java [1] and Lua [2]. What is meant by "supporting GC languages" is translating GC objects in the source language to real JavaScript objects so that the JS GC may be used on them. For statically-typed source languages like .NET/JVM, the proposed Typed Objects extension to JavaScript [3] could be naturally integrated into asm.js to describe classes. This is all a little ways off since Typed Objects needs to be standardized first. Also, the lack of finalizers in JS would limit the fidelity of the translation.
>I fail to see the benefit of building a jvm to run in a javascript engine.
That's because you picked a very specific example to suit your argument.
How about seeing the benefit of building games to run in the browser with near-native speed? Or something like Mathematica? Or an image editor, etc etc...
The new thing is that performance has improved, from 2x slower than native to 1.5x slower than native, and that a big chunk of that is due to float32 optimizations.
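For anyone curious what the float32 work looks like at the source level, here is a hand-written sketch (not the article's code) using Math.fround: wrapping intermediates in fround tells the engine the value only needs single precision, so it can stay in a 32-bit float register instead of a 64-bit double.

    // Scale every element of a Float32Array; fround keeps intermediates
    // in single precision so the engine can avoid double<->float conversions.
    function scaleAll(arr, factor) {
      var f = Math.fround(factor);
      for (var i = 0; i < arr.length; i++) {
        arr[i] = Math.fround(arr[i] * f);
      }
    }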
Dart2js does not currently implement the whole language. Integer/float semantics and support for large integers don't work (it seems like they didn't implement them manually, so it just falls back to doubles?), and there are probably other things I don't know about missing.
I should point out that emscripten and many other JS-targeting compilers (like JSIL) emulate int64 semantics directly so that apps using them do work. It's kind of strange for the Dart team to have not gotten this right yet when Dart is at 1.0.
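Roughly, that emulation carries a 64-bit value around as two unsigned 32-bit halves. A simplified sketch of 64-bit addition (not the actual Emscripten or JSIL code):

    // a and b are assumed to be {hi, lo} pairs of unsigned 32-bit numbers.
    function int64Add(a, b) {
      var lo = (a.lo + b.lo) >>> 0;          // low word, wrapped to 32 bits
      var carry = lo < a.lo ? 1 : 0;         // wrap means a carry occurred
      var hi = (a.hi + b.hi + carry) >>> 0;  // high word plus the carry
      return { hi: hi, lo: lo };
    }

    // int64Add({hi: 0, lo: 0xFFFFFFFF}, {hi: 0, lo: 1}) -> {hi: 1, lo: 0}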
> Integer/float semantics and support for large integers don't work (it seems like they didn't implement them manually, so it just falls back to doubles?), and there are probably other things I don't know about missing.
That's going to be really unfortunate if pages start relying on the proper behavior and become Chrome-only as a result...
The comments on it in the dartlang issue tracker indicate that it has already been a problem; one commenter indicates that his contribution to a crypto library was rejected because he had only tested it against Dart and it didn't work correctly in Dart2js (due to 64-bit ints)
That's not really the problem. Web developers have learned from the past and don't just build web-pages for one browser only. On the contrary... Developers now sometimes need to support outdated browsers with little market share (IE8 for example), instead of actively pushing the users to upgrade.
The discrepancy between Dart and JS numbers is mostly an issue for the developers. When they deal with numbers in specific ranges, they need to run dart2js more frequently (to test if their code works) instead of relying on Dartium (and its F5 development cycle).
After 2 years of Dart development, numbers have rarely been an issue in practice. Developers know when they deal with numbers > 32/53bits and work around it.
Web developers know too well the trap of testing only on one browser, which when Dart comes to Chrome, will be Chrome. Testing all input data in a space that exceeds the integral ___domain of double is hard. Murphy says there will be bugs in other browsers when one's app tests "work in Chrome".
For a wannabe-cross-browser, wannabe-standard (Ecma has a sorry history of proprietary standards: C#, OOXML) language pushed by a super-rich big company with what? 60 or more engineers on the project, optics must matter. Why not fix the bignum-not-emulated-in-JS bug? It's easy, GWT has BigInteger emulation code in JS lying around the Googleplex. Just plug that in if the code needs it.
The obvious risk remains what I cited in HN two years ago: Google letting dart2js be sub-standard, using all its strength and leverage to promote Dart in combination with DartVM, letting users of other browsers "blame their browser" when things don't quite work as well as in Chrome.
Given market share among browsers currently, I think this will backfire, but it could inflict collateral damage too -- on developers as well as on users. And it just plain looks bad. People expect better of Google, and they ought to.
Both are arbitrary-precision immutable integer types. If some corner case differs, my point stands. Google has more than enough engineers on Dart to do the work of writing a bignum library in JS, if they can't find one to use that someone already wrote.
I'm definitely not trolling. I truly thought you would understand the basic difference.
Big integer support in Dart is not an issue of writing a bignum library. That's already been done. The difficulty is in supporting the single integer type in a way that's performant for numbers < 2^53.
Java has fixed-width ints, Dart has infinite precision ints. In Java when dealing with ints, you're always in the int ___domain, and when dealing with BigIntegers you're always in the BigInteger ___domain. In Java operations on ints can overflow. In Dart operations on ints don't overflow - internally, if the result doesn't fit in a machine int, a bigint is returned.
In JavaScript you have a few choices when implementing infinite precision ints:
1) Use a single bigint type for all ints, which will be incredibly slow.
2) Use two types for ints, the built-in number and a BigInteger class. This means a lot of expensive type and overflow checks that slow down basic numeric operations.
3) Don't implement infinite precision ints.
Compile-to-JS languages can use static analysis to do better. dart2js already uses type inferencing extensively to eliminate type checks where it can, but eliminating the checks for ints would require range propagation which is much trickier, and doesn't work in a lot of cases.
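To make the cost of option (2) concrete, here is a sketch of what every addition would have to look like without type or range information; bigIntAdd and toBigInt are hypothetical stand-ins for a bignum library, and Number.isSafeInteger is just one way to express the overflow check.

    function checkedAdd(a, b) {
      if (typeof a === "number" && typeof b === "number") {
        var r = a + b;
        // Fast path: result is still an exactly representable integer (< 2^53).
        if (Number.isSafeInteger(r)) return r;
      }
      // Slow path: fall back to an arbitrary-precision representation.
      return bigIntAdd(toBigInt(a), toBigInt(b));
    }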
> dart2js already uses type inferencing extensively to eliminate type checks where it can, but eliminating the checks for ints would require range propagation which is much trickier, and doesn't work in a lot of cases.
Couldn't you just have changed Dart to fix the semantics?
This is a big issue (no pun intended) that essentially means that dart2js is broken. I predict that, absent some solution to this issue, if Dart takes off, there will be sites that break in non-Chrome browsers when (for example) 64 bit hashing algorithms are involved due to authors only testing in Chrome and not testing the dart2js version.
"I truly thought" you would not raise a question to which V8/Dart people already know the answer. Range prop is not the only solution.
Use JIT techniques to speculate profitably that most ints fit in JS doubles. This can be done in JS-hosted VMs, just as in machine-code-generating native VMs.
Again, this is extremely old hat to the Dart/V8 folks. See, e.g.
In SpiderMonkey, we augment JIT techniques with online type inference that distinguishes int from double. The same type inference could add bignum to the speculative tower. See
could have been proposed and actually worked into ES6 by now, but the Dart team was in stealth mode (Dash) for too long. Still, better late than never. There's time for ES7. Who from Google will actually do the work in V8 and with Ecma TC39?
This is sounding less and less like just plugging GWT's BigInteger implementation into Dart, which is exactly my point when I said you must know the difference.
Using a GWT BigInteger for all dart2js integers would work just fine and provide correct semantics, but if it doesn't meet their performance bar, that's a failure of design - not an excuse for cutting corners and shipping a dart2js that produces wrong behavior.
This could have been accounted for with a better language design, but it wasn't.
Could have been accounted for in language design, dart2js effort, bignums-for-JS effort, or some combination of the last two. I don't want to assume a conclusion, but the lack of a fix for http://code.google.com/p/dart/issues/detail?id=1533 is a problem, even if there's no immediate fix. It is fixable.
No, you changed the argument from correctness to performance.
Makes me wonder if the problem is not entitled sloth with a dash of anti-JS animus, rather: wholesale, '90s-Microsoft-style-lock-in, submarine-the-other-browsers dirty pool.
Of course, if dart2js produced too-slow-JS, that might limit Dart adoption. Not a dilemma, given a good-faith effort to optimize per my comment above -- and much better than forking the language between DartVM and dart2js, leaving a trap for the unwary Dart programmer to fall into when using dart2js to run on all the non-DartVM browsers.
"Make it correct, then make it fast." Is this no longer done where you work? (Which would be Google, amirite? :-P)
> Web developers have learned from the past and don't just build web-pages for one browser only.
This is not true in practice. Just a few days ago I couldn't use clippercard.com on Firefox for Android because the WebKit prefixes broke the design so badly as to be unusable. :(
> When they deal with numbers in specific ranges, they need to run dart2js more frequently (to test if their code works) instead of relying on Dartium (and its F5 development cycle).
And if they don't, their pages lock their users into Chrome…
That's simply not true. There are a ton of applications out there that need true 64-bit integers, not 53-bit integers. Games, crypto, graphics, etc are all fields where you can end up needing 64 bits of integral precision (if not more!)
And this isn't based on 'I've only hit one case where I needed them'; this is based on how many users of compilers like Emscripten and JSIL actively utilize 64-bit arithmetic - sometimes they use it so much that they demand high performance! A half-assed implementation that relies on doubles being 'good enough' is just not going to cut it.
Ok, you've got me really curious what these applications are. 2^53 is 3.67e18, which is about 500 times larger than the number of millimeters from the sun to Pluto's maximum orbit [1] allowing an accurate model of a solar system. It is more kilometers than the diameter of the Milky Way [2], allowing accurate models of a galaxy (given the size of stars, 1 km is pretty accurate). I'm guessing that the 15 TB storage requirement of 400 billion stars [3] is going to be a problem long before the integer precision. So I'm not really seeing how games or graphics are going to need more than this precision...
Crypto needs 256 bit integers minimum, so 64 bit integers are usually convenient multiples, but not terribly helpful.
I'm not saying that 53-bit integers are good enough (although I certainly have never needed anything that large); I have no idea. But I don't see a huge number of applications that obviously need them. So I'm curious...
[3] Each star needs (x, y, z) and (v_x, v_y, v_z), probably also in 53-bit integers, so that is 6 * 53 bits = 318 bits = 40 bytes. 4e11 stars * 40 bytes = 1.6e13 bytes = approximately 14.6 TB.
> Ok, you've got me really curious what these applications are.
In addition to the applications others mentioned, this is an issue in applications that don't need it, but run into it. If for example a program has a bug where a number is multiplied by 2 in a loop, then if it's a double it will eventually become infinity and stop changing, but if it is a bigint then it will continue to increase. This means that the bug will have different effects in those two cases.
While this is a bug, if the developer tests on only one of those platforms (say, the double), and the bug happens to not cause noticeable effects, it might go unfixed, but people on the other platform (with bigints) would get something broken (as a constantly growing bigint, eventually it will be extremely slow or run into some limit and give an exception).
The basic issue is that leaving numeric types to some extent undefined means you open yourself up to risks like this. How common it will be in practice is hard to say, of course, but I've seen such things happen in another language, so I think this can't be simply ignored out of hand.
Ok, I can think straight up of one huge use case. With 64 bit integers, I can blend all four colour channels of a pixel in one hit. For graphics operations this is a huge performance boost. (For those unfamiliar with the concept, you allocate 16 bits of the integer to each of rgba, which are only 8 bit values. The extra 8 bits per channel allows you to handle overflow on operations, which means you can do an op in all four channels simultaneously).
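A sketch of that packed-channel trick, written with BigInt purely to illustrate the arithmetic (the speedup described above of course needs real 64-bit integer registers, which is exactly what's missing; the function names here are just illustrative):

    var LANE_MASK = 0x00FF00FF00FF00FFn;  // low 8 bits of each 16-bit lane

    // Pack four 8-bit channels into one 64-bit value, one per 16-bit lane.
    function pack(r, g, b, a) {
      return (BigInt(r) << 48n) | (BigInt(g) << 32n) | (BigInt(b) << 16n) | BigInt(a);
    }

    // Scale all four channels by an 8-bit factor in a single multiply.
    // Each lane's product stays below 2^16, so lanes never spill into
    // their neighbours; shifting by 8 and masking recovers the channels.
    function scale(pixel, factor) {
      return ((pixel * BigInt(factor)) >> 8n) & LANE_MASK;
    }

    // Alpha-blend two packed pixels: src*alpha + dst*(255-alpha) per channel,
    // approximating the divide by 255 with the shift by 8 inside scale().
    function blend(src, dst, alpha) {
      return scale(src, alpha) + scale(dst, 255 - alpha);
    }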
Any time you multiply two 32-bit numbers, you need 64 bits out to prevent overflow. Sometimes cleverly splitting and rearranging an operation will avoid the overflow, but it's much cheaper in developer time to have 64 bit ints available for intermediate parts of calculations.
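Concretely (assuming plain JS doubles):

    var a = 0xFFFFFFFF;   // 2^32 - 1
    var p = a * a;        // exact answer is 18446744065119617025, which is odd
    console.log(p % 2);   // 0 -- beyond 2^53 the double rounded the low bits away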
Bit sets for 64-element or 8x8 things. One example is chess programs (the board is represented as several 64-bit integers called bit boards), but there are many others. Everything is going to be way slower if you suddenly need to split your bit set in two.
If you think 'modelling the entire milky way' is an average use case you don't know very much about graphics or game engines, and you should stop extrapolating based on your horribly inadequate knowledge in order to draw faulty conclusions. Again, I have end-users building games and other types of software that are using 64-bit integers right now and have told me they actually need 64 bits of precision. In one case they were actually using pairs of 64-bit integers for 128 bits of precision.
For your contrived galaxy example, one thing you need to realize is that value ranges aren't the only thing that matters; value precision is just as important, and variable precision is an absolute trainwreck in game development scenarios because it means that your game's physics are nondeterministic (due to precision variance depending on where you are). This is a real problem that affects real games. EVE Online happens to simulate a massive galaxy and massive solar systems, and it uses integers to represent coordinates precisely for this reason.
So crypto needs '256 bit integers' (why 256? why not 512? 2048? 4096?), thus it's okay that Dart only offers 53 bits of integer precision, and somehow 64 bits wouldn't be better? Does your CPU have 256-bit integer registers? Last I checked, most integer registers only go up to 64 or 128 bits, and the registers bigger than that are for SIMD - i.e. they contain multiple integers, not one big one. I'm not even demanding that Dart implement arbitrary-precision integer arithmetic here - just actually get 64 bit integer arithmetic right, since VIRTUALLY EVERY real programming environment's VM offers it natively to begin with [0]. If you still insist that nothing less than arbitrary precision integer values are sufficient for crypto, then fine - what about 64-bit hashing algorithms (real; used in production)? What about applications that need to do I/O against files/buffers larger than 4GB? What about applications that need to do graphics rendering against image surfaces larger than 4GB? What about audio editing software that needs to operate on long audio recordings larger than 4GB? When 32-bit unix timestamps overflow in 2038, will 64-bit integers still be unimportant?
These are all real use cases. If you ignore them all and still argue that 64 bit integers don't matter, you don't seem to have the ability to imagine software beyond your experience.
[0] Yes, there are a few scripting runtimes without 64-bit integer values built-in. They are all insufficient for anything resembling numeric computation.
> Integer/float semantics and support for large integers don't work (it seems like they didn't implement them manually, so it just falls back to doubles?)
Integers work fine up to 2^53 (9 quadrillion and some).
There is also a "throw_on_javascript_int_overflow" flag, which you can use during development.
Please do not spread FUD. From what I see there are some extreme corner cases in Dart->Js behavior and they are well known. Dart team promises to support modern Browsers, period.
Javascript is Turing complete so of course you can compile anything down to JS. But for performance and convenient debugging native support would be a big help.
I wouldn't say asm.js is a spin-off. In fact it's a strict subset of JavaScript, though designed to be used as a compiler target rather than directly coded. The web browser can then execute the code in a highly optimized (hopefully, eventually near native performance) fashion.
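To make "strict subset" concrete, here is a tiny hand-written asm.js-style module (a sketch, not from the article; the module name is made up). It runs as ordinary JavaScript everywhere; engines that recognise the "use asm" pragma can validate and compile it ahead of time, and simply fall back to normal execution if validation fails.

    function AddModule(stdlib, foreign, heap) {
      "use asm";
      function add(x, y) {
        x = x | 0;            // parameter type annotation: x is an int
        y = y | 0;            // parameter type annotation: y is an int
        return (x + y) | 0;   // result coerced back to int
      }
      return { add: add };
    }

    var add = AddModule().add; // no stdlib/heap needed for this toy module
    console.log(add(2, 3));    // 5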
Could someone please explain to me where mankind is currently going in software development:
Here is my current understanding (approximated heavily to allow seeing the big picture) and presented in non-linear chronological order:
== Note: I am genuinely trying to construct the big picture. Please help me understand and not offer just plain criticism that does not teach me. ==
1. There were multiple processors and operating systems. Programming languages like C were created with the idea of writing programs once and just compiling them for different platforms. But alas, systems were still too different to handle with a common code base. Many languages like C++ came along, but they do not provide cross-platform libraries for many things like GUI development. Third-party libraries did, but native UI/UX was still not achieved?
2. Java was created with the idea of writing programs once (and even compiling to byte code once) and running everywhere, thus solving the compatibility issues C and C++ had across platforms. But alas, platforms were still too different to cover with a common code base (Is this true?), especially with mobile devices coming along that bring different UI/UX requirements. Also, native UI/UX was not achievable (Is this true?).
In the meanwhile:
A. Installation of software on the local machine (aka caching it locally) and constantly updating it was a pain. Automatic updates are popularly used, but unpatched machines still exist.
B. Browsers came along with static pages, then gradually added more and more interactivity.
C. Machines and networks became fast enough to install the software live from a website when a user visits and remove it when the browser cache is cleared.
So now:
3. Applications start getting developed in Javascript (and other technologies) inside a browser. Native UI/UX still performs better on mobile (and on desktops too in my experience). Browsers still struggle somewhat with standardization.
4. The legacy code from #1 above is now being connected to #3 above allowing even operating systems and VMs to run inside the browser.
So now we may have C++ code (originally designed to compile natively) now converted to Javascript running inside a browser, that interprets/JIT's that Javascript to run natively on the machine/OS, where the browser itself is written in C++ or so, runs as a visible or hidden layer on the top of the OS or VM, which by itself is written on C++ or so, and finally runs on the processor on which the original C++ code was designed to run on.
While I certainly appreciate the flexibility of flows this offers, I am still trying to make sense of all this from the progress made by mankind as a whole. What is the ultimate problem that we are trying to solve? Compatibility between platforms and uniformity of experience across them? Improvements made to the software development processes (better languages)? Loading programs securely into the local machine (caching) instead of installing?
No matter which direction I think, it seems to me that if mankind were to think these through, and were not to have the purely historical elements in the picture, the problems we have should "technically" have been solvable more easily than the path mankind has taken.
Again, I am seeking help and inputs to understand this better, not underestimating the value of efforts made by people around the world, or criticizing any such effort, including asm.js.
There is no concerted master plan, only least resistance paths when trying to build something.
Programmer uses language X (JavaScript) and needs to do something sufficiently complex that he doesn't want to write it again (image processing / game dynamics / crypto). Programmer writes a transpiler.
Or programmer A wants to do something heretic for the sake of it, programmer B sees an opportunity for his sufficiently complex library.
Thinking of this as a non-convex optimization problem, economic forces just make mankind constantly evaluate the risk vs. reward it has around its current local point, and move towards what is most likely a local minimum (or at the very least move towards the global minimum in a very convoluted, time-intensive manner).
All of this because of the pile of legacy we have ourselves created (not knowing at that time what the future holds, so not blaming anyone here), including getting the users' minds too adapted to that existing legacy.
The definitions have nothing to do with whether there is a better way. There probably is, but we probably are not going to find it with our brains working how they do. We aren't a hive mind, we aren't going to optimize globally, although we make some attempts here and there.
A cross-platform UI framework would look almost nothing like HTML5. The document model is a significant hindrance to widget-based UI design that is just barely beginning to be worked around with new CSS and HTML features that still lack browser support.
UI design benefits from a completely different structure than text-oriented documents -- layout managers, containers, spacers, grids, reusable components, nearly all missing from current browsers.
>UI design benefits from a completely different structure than text-oriented documents -- layout managers, containers, spacers, grids, reusable components, nearly all missing from current browsers.
Oh, but it's coming soon, via both JavaScript and Dart:
So it demonstrates how much more difficult UI design is using HTML5 versus every other UI design option out there. It's often worth it for the platform independence, but we could've done much better if we designed a UI markup format from scratch.
Note: this response deals with this question from a consumer OS perspective, the answer with regard to servers is quite different.
The funny thing about the history of programming languages is that, much like the properties of human societies, if you don't know the actual history it's easy to get it completely wrong by logical analysis. For example, someone not knowing anything about our history may naturally assume that we started with monarchies then moved on to republics, etc. Only after reading about Rome would they see how messy it really was to get to where we are today.
Similarly, it's easy to think that C was the "first" language, and then gradually we developed more dynamic ideas. But the reality is that C showed up 14 years after Lisp.
All this to say that if you want to truly understand "where we're going", you have to kind of understand that most of the problems we are trying to "fix" are really "fake" or "man-made problems" (from an engineering perspective, that is). Making a program that runs on every architecture is not difficult. C++ wasn't invented to solve this problem, nor did it make things any easier in this department. The reason a program that runs on Mac doesn't run on Windows is that the companies that own the underlying systems don't want it to (I suppose you could thus argue that the reason is that we allow laws to make this possible). It's not that Windows can't handle drawing Macintosh-looking buttons or something.
Yes, yes, it's certainly the case that at some point it would have theoretically been annoying to ship a PPC bundle and an x86 bundle if the API problem had magically not existed -- but the reality is that today basically everyone is running the same architecture for all intents and purposes (90+% of your consumer desktop base is on x86 and 90+% of your smartphone base is on ARM).
So the real problem now becomes making a program that runs on Linux/Mac/PC (or alternatively iOS/Android) that "feels" right on both. This remains an unsolved issue, and is arguably why Java failed. Java figured out the technical aspect just fine (again, it is NOT hard to make something that runs on any piece of metal), however, Java apps "felt" terrible. Similarly, Microsoft correctly understood that Java was a threat and hampered it. As you can see this has nothing to do with engineering.
So why is this not the case with JavaScript? Again, for no technical reason: the answer is purely cultural. The trick with JS was that it snuck in through the browser where two important coincidences took place:
1. People didn't have an existing expectation of homogeneity in the browser. It started as a place to post a bunch of documents, and so people got used to different websites looking different. No one complains if website A has different buttons or behaviors than website B, so as an accident of history developers were able to increasingly add functionality without complaints of a website "not feeling right on Mac OS".
2. No large corporation was able to properly assess the danger of JS early enough to kill it. Again, since the web was a place for teens to make emo blogs, it wasn't kept in check like Java was. Hell, Microsoft added XMLHttpRequest: arguably the most important component in the rise of web apps.
So now everything is about trying to leverage this existing foothold; that's why we bend over backwards to get low-level languages to compile into high-level languages and so forth. Don't try to think about it like a logical progression or how you would design the whole system from the ground up; just treat it more like a challenge or an unnecessarily complex logic puzzle.
3. A sea change has taken place with the explosive growth in smart phones, tablets, and other devices. These devices have GUIs that are very different from Mac and Windows, which means that almost all users of Macs and Windows have gotten used to dealing with a much wider variety of UIs than was the case in the early life of Java. Even those who hate any UI alternative unless Jony Ive tells them they like it have to deal with more than one UI style, and with smart TVs, smart refrigerators, smart cameras, etc., everyone will soon have to.
(Now, if we could declare the placing of land mines (:hover effects that blow up in your face and cover what you are trying to look at if your mouse randomly passes by, forcing your mouse to run around looking for a way to make it go away) on Web pages a crime against humanity, I would be okay with most other GUI varieties. Hovers that change the appearance of an element in place: fine. Elements, such as menus, that expand when explicitly clicked and even continue unfolding with a hover thereafter: fine. But elements that jump out of nowhere and block the page just because your mouse was randomly there when the page opened or passed by on the way across the page to something else: FAIL.)
"Its not that Windows can't handle drawing Macintosh looking buttons or something."
It's an unfortunate misconception about Mac OS X's UI that it's just a pretty skin. It really isn't, and this is why people complain about things not 'feeling' right.
OS X's UI is all about exposing direct manipulation. When you change a setting in OS X, its effects are supposed to ripple through the UI instantly, without having to click "Ok". There's no "Cancel" either, because the cancel action should be obvious: simply uncheck whatever you just checked.
In other words, OS X acts like a light switch: you should feel free to turn it on or off as much as possible. Windows acts like a light switch that not only requires you to confirm turning it on or off, but where the switch actually closes itself off every time you change it, requiring you to open it up again.
The evolution of web apps in the 2000s is explained by the fact that web developers switched to Mac en masse. As a result, web apps slowly but surely adopted a similar model. They didn't just become prettier, but they drastically simplified their model, moved away from the "open new form, fill out form, submit form, see result" model to a text view turning into a text field and then back again, all without ever leaving the page.
There's basically an entire contingent of programmers who are utterly blind to all this. They operate computers using the mental model of how the hardware and software works, and then laugh at users who feel it makes no sense. In fact, it is the computer that makes no sense, and doesn't act like any other object in your house.
So I disagree that it has nothing to do with engineering. It really does, but it's about the kind of engineering that simply isn't on most programmers' radars. It solves problems they don't have, because they work around them in their heads every time they use it, so the computer doesn't have to.
Case in point: I go to install a game on Steam. It tells me I don't have enough disk space available. I go remove some files to make room. The total in Steam's dialog does not update. Their "disk space available" display is basically a lie, yet it's a lie that people simply accept as "how things are".
What advantages would those be? Do they still apply when compared to a modern strongly-typed language with type inference and typeclass-like functionality, like Scala?
Did you just seriously compare languages which took JavaScript and made it even worse (e. g. lexical scoping) with a language with a working type system, traits, a coherent fusion of OOP and FP, typeclasses, higher-kinded and dependent types?
Scala has waayyy too much stuff. Obviously. And it's not coherent. It's a joke.
ToffeeScript, CoffeeScript, and LiveScript are objectively and obviously better than JavaScript (and Scala).
You and I are living in two different universes. One where programming languages are there to solve problems (my universe) and another where programming languages are there to create them or just to prove you are smart enough to use them (yours).
The big advantage of dynamic typing is that your code only needs to be correct on the code paths that are actually taken. That's important for the web, which is full of browser specific hacks, polyfills, feature detection, etc.
Consider the difficulty of doing static type checking for a property like window.localStorage. Its type depends on the runtime environment, and may not exist at all. So static type checking against one browser environment doesn't give you much assurance that your code will type check in other browsers. You really do need to do type checking at runtime.
window.localStorage doesn't exist in (for example) IE 7, and it may or may not exist in IE8 depending on the ___domain. So you can't rely on the interface being the same.
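Which is why the usual pattern is to detect the feature at runtime rather than assume a type for it. A typical sketch (the function name is just illustrative):

    // Returns a usable storage object or null; the try/catch matters because
    // some browsers expose localStorage but throw when you touch it
    // (private browsing, disabled storage, etc.).
    function getStorage() {
      try {
        if (typeof window !== "undefined" && window.localStorage) {
          window.localStorage.setItem("__probe__", "1");
          window.localStorage.removeItem("__probe__");
          return window.localStorage;
        }
      } catch (e) { /* fall through */ }
      return null; // caller falls back to cookies or in-memory storage
    }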
Plenty of libraries written in statically typed languages add features which may not be supported on legacy platforms. There are ways to deal with this. In version X you provide a way to let the client know that function Y is not supported on the current platform.
It's not like the web is some completely different animal, it's just an example of poor/little standardization and a design which has not evolved well over time.
If you disagree with the following opinion, please don't downvote me; instead, enlighten me.
Frankly, I don't see the point of compiling C/C++ to JavaScript. If you're using C/C++ you might as well compile to machine code; it's not like you gain anything by compiling to JavaScript.
What tom said; also typing a url in a browser is a lot easier than installing something. This might sound silly but for the typical computer user (and even many nerds!) it makes a big difference.
Ask a question rather than state a potentially incorrect opinion which sounds like you think you know what you are talking about, but suggests that you don't.
With my limited knowledge, your point initially made sense to me. But clearly you and I don't know enough. Yes, on the face of it, it does seem odd to compile C to JS. In fact, it seems positively mental. But now that I have read the replies, I can see why. The trick is to acknowledge one's knowledge limits.
https://hacks.mozilla.org/2013/12/gap-between-asm-js-and-nat...