Brendan Eich: WebAssembly is a game-changer (infoworld.com)
316 points by alex_hirner on March 10, 2016 | 303 comments



Personally, I think this is terrible (and it really is a game-changer, only not the kind that I'd be happy about). The further we get away from the web as a content delivery vehicle and more and more a delivery for executables that only run as long as you are on a page, the more we will lose those things that made the web absolutely unique. For once the content was the important part and the reader was in control; for once universal accessibility was on the horizon, and the peer-to-peer nature of the internet had half a chance of making the web a permanent read/write medium.

It looks very much as if we're going to lose all of that to vertical data silos that will ship you half-an-app that you can't use without the associated service. We'll never really know what we lost.

It's sad that we don't seem to be able to have the one without losing the other; theoretically it should be possible to do that, but for some reason the trend is definitely in the direction of a permanent eradication of the 'simple' web, where pages rather than programs were the norm.

Feel free to call me a digital Luddite, I just don't think this is what we had in mind when we heralded the birth of the www.


I hate to break it to you, but we're already there. People have been using the web to deliver desktop-like applications for the past decade. Over that period of time, the number of people connected to the Internet has more than doubled [1] and will continue to increase. Whether or not an application is delivered through a browser or natively is inconsequential to most of these users. If we look at the list of the most popular websites [2] we see mostly content-delivery platforms (Google, Bing, Wikipedia etc.) with some popular web apps which resemble desktop software in complexity (Facebook, YouTube, Windows Live etc.)

So we have two paths forward. One, we could try to influence the habits of billions of Internet users who use desktop-like web applications in an attempt to restore the document-based nature of the web; or two, we could provide an alternative to the artifice of modern JavaScript development which allows for better applications to be written and distributed to users that use and rely on them. The latter initiative is the more realistic and productive one, in my opinion.

WebAssembly will not lead to the end-of-times of the Internet as a content delivery vehicle. It is a net positive for the parts of the web that do not fulfill that purpose. If you're worried about the free and open web as a publishing platform, look more to governments and corporations around the world that collude to limit our freedom of expression (Facebook, we're all looking at you [3]).

[1] http://www.internetlivestats.com/internet-users/

[2] https://en.wikipedia.org/wiki/List_of_most_popular_websites

[3] http://www.theatlantic.com/technology/archive/2016/02/facebo...


> Whether or not an application is delivered through a browser or natively is inconsequential to most of these users.

This is probably the most true statement in your entire comment.

Running with the assumption that of course we should ignore the users who aren't "most" users (not a safe one).... I'd go farther, though. Whether or not these users are getting access to the utility they're looking for through:

(1) a web page with some layer of interactivity on top of more-or-less legible document semantics or

(2) a giant-blob-of-code SPA

is also irrelevant, right? Except to some minority of power users who can actually get more utility when someone uses a more basic semantic-document resource-oriented approach.

So it isn't the users who are driving the "let's use the web as a VM for desktop apps!" trend. It's the developers. And it isn't the billions of Internet users whose habits would need to change.

I'd guess the most ready response to this would be something like "How would you have the current Facebook/YouTube/WebMail Experience without this Desktop-Experience-Focus?" But as far as I can tell, the essential utility involved here -- and to a large extent, the best things about the experience -- haven't really changed since a FB page was actually a legible document (a milestone we left down the road a looooong time ago).

The document-centered approach does have its limits. There are some applications it will not support. First-person shooters.... sure, deploy a black-box binary to a runtime.

Facebook, YouTube, and web mail are not really in that category at this point, and developer choices are the primary thing driving the march away from documents here.


> So it isn't the users who are driving the "let's use the web as a VM for desktop apps!" trend. It's the developers.

Really? I was in a public middle school this afternoon that, just a few years ago, was having to buy expensive computers and licenses for MS-Word. Only a few could be bought, and each was its own little separately licensed, separately maintained world that easily became unusable. If a server had problems, your work required access to a specific machine. Access required lots of waiting.

And now? Chromebooks for a tenth the price (don't have to support full, expensive Windows/OSX), which they bought in order to "use the Web as a VM for desktop apps" (the horror!) such as Google's free equivalents of Word, Excel, and PowerPoint. Now everyone gets access, and they can even continue working on their own documents after school from the public library, a home tablet, or any number of other options made possible by the Web (and ever-improving hardware options).

And do you imagine that our science teacher and her students are more interested in A) "pages" about physics with "more-or-less legible document semantics", or B) live physics simulations to experiment with on their Chromebooks and phones? Are uncaring developers the only ones driving these poor users to the latter?

The problem with this "apps are oppression, simple docs for the masses!" meme is that the narrative excludes a vast number of those "underserved" who want good apps more than simple docs and need the Web-as-VM to finally make it possible.


What a wonderful story. Do those children know that everything they do on their Chromebooks is tracked and analysed by Google? Probably not, and they don't even know what tracking is or why it's bad.

They've been sold the idea that web apps and the cloud are beautiful and "free".


Same thing happens if they use Windows (esp Win 10). Did you know that when you search Google or Yahoo (using IE), your query is sent to Microsoft and stored?

http://www.youtube.com/watch?v=iGze_gjJTos&t=9m50s


> was having to buy expensive computers and licenses for MS-Word

How did the developers force that one upon you?


The developers (and commercial environment) hadn't come up with anything better yet, so the local OS with local MS Word application was the best choice.

Now things are different.


> The problem with this "apps are oppression, simple docs for the masses!"

I kind of knew someone was going to confuse my argument and think I was arguing against web applications. Or against the browser as a platform.

It's a sign of how deep a certain kind of thinking goes in the industry. If we're talking about orienting around semantic documents and resources.... we can't be talking about real applications! There's just no way to create interactive applications around those!

(And conversely, unless we're talking about application frameworks, lots of code --preferably not written in JavaScript, of course, which everyone knows isn't a serious language compared to Python -- unless we're working with GUI toolkit metaphors, unless the finished product simply isn't meant to ever be read by a human, we can't be talking about an actual application, right?)

> And do you imagine that our science teacher and her students are more interested in A) "pages" about physics with "more-or-less legible document semantics", or B) live physics simulations to experiment with on their Chromebooks and phones? Are uncaring developers the only ones driving these poor users to the latter?

As the astute reader would note from my earlier post, some applications really don't fit inside a document/resource-oriented paradigm. Some simulations for sure, including first-person shooters and other games.

Other simulations, of course, fit rather nicely as a document with a layer of interactivity, and in that case, yes, the problem would indeed be uncaring developers. Perhaps particularly those who think interactive and semantic are exclusive.


It's only a small minority of users who are affected, but think of how much innovation historically happened due to the web's openness and that small minority's tinkering.


WebAssembly will likely open even more doors because it'll free us from the shackles of the web's document-publishing origins.

And I, for one, can't wait to piss on JavaScript's grave.


Bill Gates' quote fits perfectly: "Develop for it? I’ll piss on it."

http://www.quoteswise.com/bill-gates-quotes-5.html


The majority of web "innovation" already existed in Xerox PARC's hypermedia applications.

The web is just a worse experience of what Xerox already had.


It may be worse, but it's a lot more democratic. Now we are moving away from that two way street into something much more monopolistic.


Democracy for content, tyranny for developers (one legacy language or transpilation).


We have been doing this for -decades- with Shockwave, Java applets, and ActiveX controls.


But they always end up abused. Everyone hates all of those, and that's good, because it causes people to steer away from them. Instead of throwing our hands up, and giving up, we should continue to try to keep those out.


How long before the first WebAssembly exploit?

(And what advantage does WebAssembly have over Java applets? We've been down this road.)


The advantage? Instead of having access to a large, stable ecosystem with 20+ years of engineering behind it (Java/JVM), you'll get to use fun, new half-baked tools that barely work to build your webassembly apps.

In another 20 years time, we might be at the level of productivity we had 20 years ago with VisualBasic.


You really think closed-source Oracle is going to make applets better? People don't like applets because the experience sucks, the tooling is generally lacking, and the libraries are old.

You say that webdev is Applet 2.0? OK, sure; at least it's open and moving forward instead of EOL'd [1], and the experience is better, or else wouldn't we all still be using applets...

[1] http://www.v3.co.uk/v3-uk/news/2443810/oracle-signals-the-en...


No it is Applet 5.0.

Applet 2.0 was Flash, followed by Applet 3.0 aka Silverlight, followed by Applet 4.0 aka Emscripten/asm.js.


Oracle isn't. Sun dropped the ball.

My point is we could've embedded JVM in the browser "correctly" instead of reinventing the wheel with another byte code format.


webasm is not a bytecode format


Yes, it's an IR. It still has all of the issues a bytecode format has, and none of the size advantages.


I cannot find the PDF for the Slim Binaries paper by Franz, so I cite it from memory.

They used the syntax trees of a program as a distribution format, compressed these trees with an LZW variant, and executed them using a simple JIT compiler (basically, tree automata over program trees). The size of Slim Binaries was smaller than JAR files for comparable Java programs. I don't remember the exact percentage reduction; it was about 20-40%, I believe.


It is a binary AST and is explicitly made to be both compact and fast to parse.

Not all intermediate representations are bytecode.


Do you have a source that Webasm's IR won't be as compact as Java bytecode?


The toolchain is C/C++/LLVM, which is pretty well baked.


The toolchain, yes. I'm not sure about the libraries - especially the ones that provide the glue to the JS world (or DOM, directly). I heard there is something, say, SDL support for asm.js or emscripten, but I don't think there's a large and stable ecosystem. More like "some stuff patched to some extent".

Oh, and I think in webdev, there are a lot of... err... modern, creative and innovative guys and gals out there who are always happy to rewrite the world again, because last week's tech isn't cool anymore.


> I'm not sure about the libraries - especially the ones that provide the glue to the JS world (or DOM, directly)

Has the Java world come up with anything better than Swing or SWT? Because those are the libraries that are in the "stable ecosystem with 20+ years of engineering behind it", and they are terrible. Give me the 'innovative' webdev guys and gals any day.


We do actually have JavaFX now, which has some weird historical baggage from back when it was supposed to be its own platform, but is actually pretty nice to work with from a UI standpoint.

It has an XML/CSS-based way to describe the UI/layout and then lets you link that to your code with annotations. It's not perfect, but it's a lot more pleasant to work with than the likes of SWT.


"Oh, and I think in webdev, there is a lot of... err... modern creative and innovative guys and gals out there who are always happy to rewrite the world again, because the last week's tech isn't cool anymore."

well said


The advantage is that in time people agree about web standards.

Good luck getting Visual Basic adopted as a standard ;)


We've gotten worse.


Fun fact: Netscape promised to include vbscript in Navigator but it never happened.


Worse than that, in 2016 it still hasn't matched what VB and Delphi were capable of in the 90's.

I am really happy that the current customer projects I am involved with are all native mobile and desktop applications.


It's amazing, really. A few years ago I was brought in to feature-enhance (80%) / bug-fix (20%) some mil-spec hardware that interfaced with a custom Delphi front-end which was acting as a controller. I was amazed at the functionality:LOC ratio, along with the correctness[0] achieved with purely pre & post asserts[1]. The hotel industry still largely runs on one popular Delphi app that's still chugging away. Other than MS Lightswitch, which is dead[2], nothing even came close re: great RAD.

(Hey, MS employees reading this - now VS Community is free-for-commercial-use, if you want to capture the 5<=revenue<=50 MM market, replicate the Delphi + VCL ecosystem for RAD apps with Azure as the backing technology. Lightswitch already let me target WPF and HTML5, with Xamarin you now have native mobile. You pushed for an "integrated" desktop + surface/tablet + cell-phone a few years too early, but Xamarin as the presentation layer and Azure as your data-store and you've got the feature set that most businesses need. You'd put me out of business, but absolutely take over that mid-market share.)

[0] Pedants, I'm one of you. No, Coq was not used in this project ;)

[1] http://blog.matthewskelton.net/2012/01/29/assert-based-error... (similar to this)

[2] http://janvanderhaegen.com/2015/01/14/its-2015-and-lightswit...


Never used Delphi, but after reading about it I would love an equivalent for web dev. I wrote small abstractions over Rails and/or React, but it's still too low-level for my work.


Imagine you can run (efficiently) Eclipse or Lucene in any browser.

Someone try to do that with WebKit:

http://trevorlinton.github.io/


Eclipse (Che) is already coming to the browser!

https://eclipse.org/che/


Eww. One of the most beautiful parts of Eclipse is the extensive set of plugins. A browser-based IDE means more painful maintenance, more painful extensibility, and a generally worse user experience. How am I going to tie into native binaries for C/C++ compilation like I can do with Eclipse as-is? Or the many plugins that give static checking, etc.? A web-based IDE is a step backwards.


Joke?


> (And what advantage does WebAssembly have over Java applets? We've been down this road.)

The same advantage as Javascript, I'd assume: easy access to the DOM and the concomitant natural integration into a webpage. Java applets were like a portal into a weird world that started with a security warning and all of the widgets looked wrong.


Well, unlike third-party plugins (such as Java, Flash, etc.), wasm will be executed by the browser (like JS) and likely won't have the same privileges that Java/Flash have while running.


Why would WebAssembly be any more prone to exploits than JavaScript?


Because there will probably be bugs that make it possible to break out of the sandbox and run arbitrary code on the target machine. Just like Java applets.


There's no reason to suspect that browser implementors would sandbox wasm any less strictly than JS. Heck, there's no reason to suspect that they wouldn't just re-use the existing JS sandbox.


Thanks, kibwen. I'll make a stronger statement. By definition, wasm and JS are two syntaxes (initially co-expressive, wasm and asm.js) for one VM.

Do people actually read docs any longer? https://github.com/WebAssembly/ has some, my blog covered the 1VM requirement. There won't be a new "sandbox". JS and wasm interoperate over shared objects.
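
To make "interoperate over shared objects" concrete, here's a minimal sketch of what that looks like from the JS side; the API shape follows the design being discussed, and the file name, import names, and exported function are made up for illustration:

    // Fetch a wasm module, instantiate it with a JS import it can call,
    // then call one of its exports from JS. Same engine, same sandbox,
    // same object graph as the rest of the page's JavaScript.
    fetch('demo.wasm')
      .then(response => response.arrayBuffer())
      .then(bytes => WebAssembly.instantiate(bytes, {
        env: {
          // an ordinary JS function handed to the wasm module as an import
          log: value => console.log('from wasm:', value)
        }
      }))
      .then(({ instance }) => {
        // exported wasm functions are ordinary callable JS values
        console.log(instance.exports.add(2, 3));
      });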


> Do people actually read docs any longer

No, not really. People read headlines and make pithy middlebrow comments. Though I also don't think that's a new phenomenon...


> Do people actually read docs any longer? No. People only read headlines that WebAssembly is a replacement for JS ;)


> How long before the first WebAssembly exploit?

It will happen...

> And what advantage does WebAssembly have over Java applets?

Microsoft supports asm.js.


What advantage does WebAssembly have over Java applets?

Not java.


The main thing I didn't like about those was that they were proprietary single vendor blobs that stagnated and ran poorly.


Well, it's no different. Say, ActiveX controls were just x86 binary blobs. Java applets were JVM binary blobs. (Flash applets were a somewhat worse kind of binary blob, with proprietary and undocumented bytecode.) WebAssembly is... again, binary blobs.


The runtime (which I was talking about) isn't a blob.

Plus, minified and obfuscated JavaScript is already quite common. See Closure and asm.js.


Well, my point is OpenJDK/IcedTea-Web isn't a blob either, but it doesn't make Java applets any better. Neither is Wine (which can host ActiveX controls), but I'm not going to try it at home.

And I agree that modern minified, packed and obfuscated JavaScript is also more like a binary blob than source code. WebAssembly is just a logical conclusion of this.


I guess I fail to see how the end result of this will end up any different


Considering WebAssembly is a feature that is going to be standardised and supported in all JavaScript engines, while those other things were either proprietary runtimes or browser-specific APIs, I'd say you have no reason to think this will end up like them.


So, IMO, I don't think any of them failed because they were, as you say, proprietary runtimes or browser-specific APIs rather than standardised and supported in all JavaScript engines.

I think they failed because they were closed off, and nobody could read the shitty code, so people wrote shitty code. If we have things open, it gives people an incentive to make it look/work nice. Sure, not everyone will, but it makes it easier to shame those that don't.


You can't read the code of most C++ programs, yet if operating systems dropped support for C++ there would be a reckoning.


Do people outside of "The Web is for documents only!" camp really hate Flash? I've had many good web experiences thanks to Flash (granted, mostly movies and games, but still...)

People on HN seem to discount how absolutely amazing it is to have a VM that can almost seamlessly connect to a network and pull down awesome programs!


I'm a big fan of Web Assembly but I've hated Flash for a long time. Flash apps tended to peg the CPU and shorten battery life on my laptop, even though most were inconsequential to the web pages that used them (ads). Constant security holes left me paranoid. Crashes left me frustrated. I can't count how many times Flash crashed on me. The UI of most Flash apps left much to be desired. From a technical perspective, it was also a deeply flawed design with a mediocre implementation from one vendor. Most video sites which use Flash seem to be far inferior to their HTML+JavaScript counterparts, only the biggest sites ever had decent Flash players (Vimeo, YouTube), and I've noticed that some Flash video players can't really handle full-screen video with the performance that they should. Ugh. I was glad so many years ago when Flash stopped being bundled with the OS, so I didn't need to take any extra steps to avoid it.


Flash is great -- I love being forced to update flash every 20 minutes.


Flash does provide a lot of nice/cool things. The problem is it was abused because it was "cool". If we could get people to use it only when necessary, then I probably would have been OK with it. Too many people ended up using it in inappropriate places.


I find myself kind of pragmatic but I hated Flash because once it was available people used it for everything, even stuff that never needed it in the first place.


No, we hated them because their performance and UX were bad. Running programs off the Internet, safely, has long been the goal.


I guess I fail to see how this will be any safer?


But this time we do it fully open source, working everywhere unlike ActiveX, and much faster than Java applets.


I wonder if we will see iOS/Android APK apps running in the browser soon.

We already have DOS, the Linux kernel, and Windows 95 running in the browser as very cool demo projects.

Maybe an Angry Birds APK running unmodified inside the browser at close to device speed will be next?


They (APKs) already do in Chrome.

Except that security constraints currently require that APKs are packaged as Chrome extensions and can't be served from the web directly.


You can already do this with App Runtime for Chrome.


> I wonder if we will see ios/android apk app running in browser soon.

or apt-get, yum or pacman...


Eh, I think things were a lot worse during the heyday of Flash and Java applets. Sites have been delivering this kind of content since WebRunner in 1997. At least now we have an open, vendor-neutral, consensus-based standards process and a commitment to multiple major open source implementations, which is something we never had with Java or, worse, Flash.


I think you and I must be looking at different 2016 Webs. In the one I see, content is increasingly locked up in closed, proprietary and/or centralised systems. In the one I see, standards don't matter much any more because there are no usefully stable browsers anyway. In the one I see, one browser is rapidly becoming so dominant that the existence of several others that were formerly influential and competitive is all but irrelevant for many web development projects, just as happened in the days of the IE vs. Netscape browser wars.

We all know how this story ended last time. And yet, here we are watching it all over again and most of us are powerless to do much about it. For all the superficial openness of the modern Web, the reality is that it is now utterly controlled by a small group of browser developers, a small group of centralised content hosts, and a small group of curators, several of which overlap with each other but almost none of which have goals that necessarily align with those of either the average web surfer or the average small content producer.


> ... it is now utterly controlled by a small group of browser developers ...

If the group is small, it's mainly because the job is very difficult. All browsers are a big mess and it takes time to be productive.

The idea of asm.js came from Alon Zakai. He explains on his blog all the experiments he made before arriving at it.

A quote from the oldest post I know about emscripten:

"I want the speed of native code on the web - because I want to run things like game engines there - but I don't want Java, or NaCl, or some plugin. I want to use standard, platform-agnostic web technologies."

http://mozakai.blogspot.fr/2010/07/experiments-with-static-j...


Yes there are few full implementations of the Web. I hope eventually a more modular layout engine comes into being that allows for others to create their own implementations of things without having to wrestle with a large codebase.


> I hope eventually a more modular layout engine comes into being that allows for others to create their own implementations of things without having to wrestle with a large codebase.

Maybe Servo?

The most comprehensible browser codebase I know is Dillo.

http://www.dillo.org/dw/html/annotated.html

http://www.dillo.org/


I used to recommend HTML3.2 and sandboxed/MAC'd Dillo for safe, formatted documents to people who insisted on web browsers being the medium. Worked out fine w/ much less risk. I was grasping at straws when a Chromium member asked me how I'd secure that code with minimal work and no rewrites. How would I even understand it all was my first thought.


I've seen Dillo freeze a few times on big pages, but it's a great project.


> ... which is something we never had with Java or, worse, Flash.

I liked Java applets a lot. It was fantastic to be able to draw pixels or use a toolkit in a web browser at that time.

I gave up Java soon after the dispute between Sun and Microsoft about J++. Java was far less interesting to me without that feature.


I had a thought a couple of days ago: what if Mozilla (or anyone) made a web-native runtime for Java applets to replace the plugin, in the same way that Mozilla made a web-native PDF renderer to replace that plugin? That would give us the good of Java applets - a well-specified, mature, backward-compatible bytecode with a built-in security model - without the bad: poor performance, poor UX, and your programs being trapped in a rectangle that the browser knew nothing about.


js2me: https://github.com/szatkus/js2me

It didn't get much traction because nobody really wants to program in Java on the client if there are alternatives.


Java ME is not Java. Compatibility with existing applets would require implementing more.

Java is not the best (though it's better than JavaScript, and android contradicts your claim), but JVM bytecode as a target is great.


As a counterexample, I will bring up Flash. The file format's open, and the scripting language is... ECMAScript. So an open, vendor-neutral, consensus-based standards process doesn't mean that the implementations themselves are going to be any good.


The file format became partially open in 2008... long after Flash Player became ubiquitous, long after the format was developed and stabilized, without a standards process (excluding ECMAScript), and without any serious attempts to write competing players (after all, Flash was already on the way out by then, plus it would have been harder to take on Flash Player's monopoly of... Flash players, than IE in its heyday).

WebAssembly will have several actual competing implementations, not to mention that it's a drastically smaller addition to the existing Web platform.


The web is an amazing content delivery vehicle, I totally agree! There's a whole class of content I want to be able to just `wget` and be done with it.

The goal of WebAssembly is to open up the reach and easy user experience of the web browser to new types of applications that just aren't possible to build efficiently with current tech.

We're also making sure that wasm is a first-class citizen of the open web: see, for example, our thoughts around ES6 module interop, GC + DOM integration, and view-source to see the textual encoding of wasm [0][1].

[0]: https://github.com/WebAssembly/design/blob/master/Web.md

[1]: https://github.com/WebAssembly/design/blob/master/TextFormat...

(Disclaimer: I work on V8.)


Yes, we'd get a text format, much like the disassembly view of bytecode that has long been possible, and to the same effect. Hell, the plaintext s-expression format is IMO /less/ readable than most assembly formats. Regarding DOM integration, wouldn't that have been possible with other formats by giving plugins access to the DOM?

I'd much rather see that time spent on harnessing the tools we have right now better, rather than on new pie-in-the-sky tools that bring a host of disadvantages, such as being yet another JIT language which resembles a processor from twenty years ago. With an MVP that doesn't include multiprocessing, doesn't include SIMD, etc., I fail to see how this is really better than the status quo.


I'm baffled that anyone would find the s-expression format challenging.


I agree. I'm really running dry on respect for Brendan Eich. None of the moves he's making are for the benefit of user privacy - look at Brave, his new browser project. It replaces ads on the web with his own ads, tracks you, and puts money in his pocket instead of the publisher's pockets. I'm struggling to remember why he was respectable in the first place - for making JavaScript, an awful programming language we've spent 20 years trying to fix? I don't think that his word on these issues is worth anything any longer.


Brave does not track anyone remotely; all data in the clear (which browsers all keep in various caches and history lists) stays on the device. We will pay publishers 55% directly, and users 15%.

I think you didn't read our FAQ or other site docs, and just assumed the worst. Why is that?


I did read the FAQ. It's awful. I didn't realize just how awful it was until I read the FAQ.

How do you pay publishers on sites that aren't partnered with you?


From the FAQ:

8. How do you use Bitcoin (BTC)?

We’re still developing the system, now entirely in the open source on github.com, but at this point we know we will use BTC only for permissionless payment delivery to user and publisher wallets that we will create using BitGo’s APIs. We hope to keep funds in BTC only in monthly payment buffers, to reduce effects of volatility. We intend to let expert users “bring their own BTC” to self-fund their wallets and auto-micropay for as much of their browsing as they like.

See also https://www.brave.com/blogpost_2.html which discusses user and publisher wallets. We'll have more in a week or two on the Brave Ledger payment system.


Ignoring the negativity and taking it as an opportunity to improve your product and/or understanding of a segment of your audience... I'll remember that!


> ... for making JavaScript, an awful programming language we've spent 20 years trying to fix?

First-class functions were not bad for 1995.


Yeah, I have to say that there are many things I love about Javascript. Even the quirky this pointer is at least interesting (though not particularly useful). Internally JS is very elegant IMHO and it is easy to write beautiful code in many different paradigms.

On the downside: Type coercion is always a bad idea. There is an inconsistency between built in types and user constructed types. It has terrible standard class libraries. It is verbose.

But I can name half a dozen other languages that suffer from these problems and more. Judicious use of a transpiler (like coffeescript) and choice of third party libraries will go a long way. Personally, I enjoy working in coffeescript more than Ruby specifically because of the first class functions.

Not the best programming language in the world, but not the worst either, IMHO. It's just unfortunate that some of the more obvious shortcomings weren't fixed early on.


It's also quite average for languages of that time to have first-class functions.

https://en.wikipedia.org/wiki/Timeline_of_programming_langua...


He created JavaScript in 10 days. It's still in use 20 years later. That's very respectable.


Because it had (and still has) a monopoly on code that runs inside the browser. And it's a pig that's had quite a bit of lipstick put on it. It wouldn't be in widespread use if people had the choice.

That all said, WebAssembly is potentially an avenue to break that monopoly, and I do give him credit for seeking to address the issue.


People did have a choice: Java Applets, ActiveX, Flash, VBScript. Those all lost.


Three of those were plugins and the last was proprietary to Internet Explorer only, although I'll admit it was also worse than Javascript. "Javascript: It's better than VBScript!"

And despite being an (awful) plugin with an IDE that cost $600, Flash won in the pre-iPhone/V8 era.


> Three of those were plugins and the last was proprietary to Internet Explorer only

They all started from a similar position; JS wasn't a standard from day one, and there's a reason why it's ECMAScript today.

> And despite being an (awful) plugin with an IDE that cost $600, Flash won in the pre-iPhone/V8 era.

That's laughable, what percentage of the web ran on Flash in 2007, 1%?


>They all started from a similar position; JS wasn't a standard from day one, and there's a reason why it's ECMAScript today.

No, Javascript had the huge advantage of being first; it's the only one on your list that both wasn't a plugin and was supported by IE and Netscape. And yes, modern Javascript is a better technology than any of those, although that's really damning with faint praise.

If Netscape Navigator 2 had shipped with, say, a Perl VM in addition to the Javascript VM, then I can guarantee you that everybody would've written their scripts in Perl instead of Javascript.

>That's laughable, what percentage of the web ran on Flash in 2007, 1%?

Back in the day, the only ways you could write a web app of any complexity/performance were to either use Flash or do it serverside (remember – not long ago! – when the new hot web technology was Ruby on Rails? Remember when it was PHP?). Flash was the only option if you wanted video (well, it or something truly evil like RealPlayer) or if you wanted a consistent and appealing look for your site.

Flash was on 56% of all websites as of 2010: http://www.stevesouders.com/blog/2011/04/05/http-archive-url...

Google trends for Flash vs. Javascript in the SF Bay Area: https://www.google.com/trends/explore#q=%2Fm%2F02p97%2C%20%2...


Flash has long been used to support features not yet available in the DOM APIs, such as copy/paste and video, and of course in ads. There's a huge difference between that and building Flash apps; apps where a Flash-displayed interface is the primary way of using the app. Outside of games those were never very popular, and JS overtook it with Google Maps.


>He created JavaScript in 10 days.

It shows. JavaScript is fucking horrible, to put it mildly.


It's got bad parts and it's got good parts. Once you learn how to ignore the bad parts (and the bad examples), the language really shines. Especially with the newer versions.


Stockholm Syndrome.


Haha, God made the world in 7 days, it's even worse.


I've written DSLs in a day that have seen usage beyond their original intent. JavaScript's a dirty hack that's only had dirtier hacks stacked atop it.


I'm not sure people really ever respected him, or JavaScript for that matter, but sort of just recognized it can pay to pay attention to one of the most popular languages and the man who invented it.


He also founded and built Mozilla, which arguably saved the open web from extinction.

If you don't respect the founder of JavaScript and Mozilla, your standards are very high. I doubt many people commenting here would meet them.


> He also founded and built Mozilla, which arguably saved the open web from extinction.

There was already a better open-source web rendering engine and browser at the time. Maybe we'd've got a KHTML-based browser on Windows much sooner that would have filled the same role.


Look, if given a chance, I'd shake his hand. But that doesn't mean I hold him to high regard or agree with his views, standards, or ideals. I respect some of the things he's done, but I also disrespect other things he's done.

Mozilla is pretty cool I guess, JavaScript is pretty cool I guess. It's all really just whatever and I think he was very much in the right place at the right time for a lot of this, not that he isn't a brilliant man who's obviously far more accomplished than me.


Creating JS in two weeks was an amazing accomplishment. Unfortunately we'd be better off if he had been a worse developer, because then Netscape wouldn't have had the option of shipping it and enshrining its defects as features.


Not to mention his support for prop 8 and pathetic excuse for an apology.


I prefer the term "mindful" to Luddite. Luddites opposed technology; you oppose the direction it's headed.

But I think you're completely right. We took all that made the web unique, and turned it into a black box for abstracting away hardware/OS.

It's hardly surprising though... you can decentralize a network, but power and control over the medium was bound to become centralized in some form.


Luddites actually opposed the "direction technology was headed" as well, not technology for its own sake[0]. Specifically, they were the highly skilled technology workers of their time (factory and textile workers) made obsolete by automation, protesting the way technology made it easier to replace them with unskilled, low-wage labor and to flood the market with low-quality, mass produced goods. Sound like a familiar refrain?

[0]http://www.smithsonianmag.com/ist/?next=/history/what-the-lu...


Looking at technology's influences alone strikes me as a shallow viewpoint. You can find other important factors if you look a bit deeper:

"The British weavers known as Luddites, who destroyed looms precisely 200 years ago, thought rising unemployment within their ranks was due to machinery. But there’s a case to be made that inflation, money supply expansion, budget deficits and trade barriers were equally to blame.

... [The] overall picture was of cheap money leading to labor-saving capital investment, while wages were eroded by inflation and economic activity was dampened by restrictions and excessive government deficits.

The Luddites have been mocked for attacking the productivity-enhancing machinery that was to improve living standards unprecedentedly. But given the economic policies of the time, which bear an uncomfortable resemblance to some of our own, the Luddites were right to believe that only higher unemployment, with no discernible improvement in conditions on the horizon, was their fate."

http://blogs.reuters.com/breakingviews/2011/03/11/were-luddi...

Were Luddites the victims of 2011-style finances? | By Martin Hutchinson | March 11, 2011


huh, I never knew that. Thanks!

I stand by the rest of what I said though... I think we need to be mindful not just of where society is headed, but what kind of society we will live in when we get there.

Technology is important, but it's society that separates utopia from dystopia...


I think what I worry about the most is how poorly the modern web partners with assistive technology: it feels like we're actually taking steps backwards.


That is a very large part of my point: the viewer is no longer in control. That was one of the beautiful concepts of the web, that the reader could be anything: a person, a screen reader, a computer program, and so on.


To me, it seems the trend is to ditch both the app and the browser, and just provide a web api that can be used with anything that supports http.


Despite all the JavaScript, the content stayed on the web. That's the big win; that's what draws people to use it, and what guarantees its popularity for a long time. The really great, awesome thing about the web is its openness. No company can tie you to their programming language, their "app store" ecosystem, insane policies and authoritative restrictions. We should be eternally grateful to Berners-Lee for that.

The silos exist today, the Facebook platform etc. Despite how hard they tried, they did not take over the web.


> "It's sad that we don't seem to be able to have the one without losing the other, theoretically it should be possible to do that but for some reason the trend is definitely in the direction of a permanent eradication of the 'simple' web where pages rather than programs were the norm."

What does it matter what the norm is? Static HTML/CSS is going nowhere; you can still create static content, as you well know (IIRC you run a static blog). The improvements to the dynamic side of the web do not come at the expense of the document-oriented side; both currently coexist, and I see no reason why making the dynamic side faster will change that.

Furthermore, changes to dynamic content can enhance the functionality of the document-focused side of the web. Consider Wikipedia. In some ways a Wiki is a set of documents, but it's a set of documents that grows based on utilising input from those using the service, democratising the accumulation of knowledge. For all its flaws, I can think of no other resource that better embodies the virtues of the web than Wikipedia, and Wikipedia would not have grown to the size it is now without the technology that supports web apps.

That said, I don't agree with the trend for moving everything to the cloud, and I hope we can see that trend reverse with better tools for people to take control of their own data. If more people had cheap home servers that were easy to maintain then the issues surrounding lack of control should be greatly reduced.


the web is and will always be a 'content delivery vehicle'. Apps which run on the web are a form of content.

Not all data is open; that is unfortunate, but realistic. At the same time, huge amounts of data are open and available without an app.

What is the use case where we lose something because of native performance improvements in JavaScript?

Anybody who wants to build a simple static site can still do that, and I'd suggest the majority of the web is still just that, or very close to it.

I really don't understand your comment about 'executables that only run as long as you are on a page'. You can only read content as long as you are on a page as well. Or are you concerned about our ability to do search and data-mining on large volume of available data?


I can hit ctrl-s and save a page, and with modern browsers, all the inline content too. Webapps tend not to have that function.


I upvoted you, but I want to add my voice too.

The web succeeded in part because it was possible for anyone to do "view source" and see what was going on under the hood.

Having that source available is also an important aspect of software freedom.

Losing all that--especially the freedom to see exactly what your browser is executing--for a slight speed increase is ludicrous and I'm very, very sad to see this is being taken so seriously.

We had a huge opportunity here to shape an open and free web. Turning the web into nothing more than a binary distribution platform will undo decades of work and we may never again find ourselves in the lucky confluence of economic prosperity, technological advancement, and governmental benign neglect, that made the open web possible.


Minified JS is hardly "source" though.


And I hate it just as much.


> The further we get away from the web as a content delivery vehicle and more and more a delivery for executables

Interactivity is an increasingly important aspect of media and of our culture. People spend more money and time on games than on movies and TV.

> I just don't think this is what we had in mind when we heralded the birth of the www.

It's never like the framers imagined. It's always stranger and more wonderful than they could have imagined. (And horrible in some ways they couldn't have imagined.)


> Feel free to call me a digital Luddite

Sure. But don't despair, because this future isn't as bleak as you'd assume. With things like Hoodie[0], GunDB[1] and other amazing bits of technology, we can keep the benefits of web-tech for application development, but allow the user to own their data still. Offline-first, easy sync when needed. And really, anything more complex than delivering static HTML pages has the downsides you're mentioning, so unless you want to live in 1995 I can't really understand it from a practical perspective ;)

[0] http://hood.ie/

[1] http://gun.js.org/


I agree with quite a lot of your critique and share some of those concerns, but something doesn't quite add up to me. Why are you assuming that the two models of web-as-delivery-system are mutually exclusive? I don't see how web assembly competes with or causes movement away from the web as we've known it.


Check how many pages of a random sampling will even work at all without running code on them. It's quite worrisome, and I think WebAssembly will accelerate rather than slow down that trend. 10 years ago you could download a page and it would most likely have the relevant content in the downloaded page; the exceptions were the idiots that would make a page that was a frame around some Flash applet holding the actual content.

Now the page is a blank template that will fill itself in using a bunch of under-water calls to the server. Those under-water calls are the result of running a bunch of code and that code will sooner or later end up being written in web assembly and will be mostly opaque.

It's just like those flash only pages of old.


Yeah? I don't see the problem. Why does it matter how a web page is constructed? If it is fast, secure, and does what the user needs, what is the problem?


Because people will take it and say "Hey, with WebAssembly I can rip out the entire DOM and roll my own layout and text rendering directly to a canvas like some sort of game engine! By avoiding standard webpage structures, we can make our ads harder to block!"

And really, who cares about copy/paste, accessibility, fair use, user stylesheets, or any of that, when you could trade it for more resilient advertising? Not publishers, that's for sure.

It's going to open the floodgates to a new generation of those crappy bundled-up Flash websites that gave you very limited ability to interact with their content.

There are of course better things to do with it, but from my limited understanding of WebAssembly this is my biggest prediction for what it'll get used for.


>Because people will take it and say "Hey, with WebAssembly I can rip out the entire DOM and roll my own layout and text rendering directly to a canvas like some sort of game engine! By avoiding standard webpage structures, we can make our ads harder to block!"

You say that as if it was a bad thing. There are two kinds of content (relevant to this discussion at least) that get served through the web - documents/media and apps. DOM/HTML is OK for the former; it's a horrible hack for the latter, and most of your objections to custom GUIs also apply to DOM GUI frameworks + they are dog slow and annoying to use. There is room in the middle (webapps), but there is definitely a class of apps out there that just don't fit into DOM/HTML and get zero benefits from it outside of delivery, which is why they get done that way.


We should not judge the potential of a technology only by the worst-case scenario for which it may be used.

If a promising technology has a high likelihood for serious abuse but also a highly desirable upside, instead of tossing the baby with the bathwater, we should preempt the abuse through culture, policy, and possibly engineering.


Oh definitely. People did some great stuff with Flash, and I'm excited to see what people come up with using WebAssembly. I'd just temper our excitement because it's going to make some things better and it's going to make other things shitty.

Somebody's going to write a DOM-replacement page layout and rendering system, other people are going to adopt it, and we'll have to go through another whole phase of

"Guys, we made this shim layer that will make your website compatible with screenreaders and HTML5 semantic tags again!"

"I dunno, that sounds like work. And people could copy/paste from our articles? I'll pass."

<5 years later>

"The new version of PageRenderingFramework natively supports screen readers. Can we maybe make your content accessible to non-visual browsers now?"

"Yeah, I guess. We'll install it next time we rebuild our website."

On the plus side, I think most websites have outgrown the autoplaying music, so maybe we'll skip that part in this cycle.


The more things change the more they stay the same, I agree.


> We should not judge the potential of a technology only by the worst-case scenario for which it may be used.

That's exactly how we need to judge any important technology. This is a basic part of making things that "fail safely".


The word 'only' means not to exclusively judge it in that manner; it does not exclude risk analysis and mitigation. By your statement's logic, banish or redesign the kitchen knife, and put the genie back in the bottle, since it is a technology that can be used in very bad ways in a 'worst-case scenario', and it is hard to make it 'fail safely'.


Couldn't you do that with JavaScript anyway?


Sure, it's just optimizing some bottlenecks at this point. WebAssembly is just what JavaScript was evolving/heading towards - I think it was clear for almost a decade that it'd end up somewhere around here.


It matters because (1) not everybody can see that web page, (2) the ability to inspect the pages is what caused the knowledge about how the web was made to spread and (3) it is a step backwards in terms of being an 'open' vehicle. That it's fast, secure and does what the user needs is not being disputed.


I think you're possibly underestimating the cost and difficulty of developing apps using low level code (or whatever web assembly is, lower level than html+css anyway).

It's technically difficult enough that I think its main use case will be for somewhat large and well-funded companies that have the budget and expertise to deliver a fairly complex client-side app that has behavior and design that is not easily done in existing web technologies.

for example, all the stuff that people attempt to do now on HTML canvases seems like a likely candidate to be replaced by web assembly apps. this could lead to an era of high performance streaming apps with pretty impressive graphics capabilities. basically just a huge upgrade to emscripten, right?

I'm just not seeing this as being a real threat/competitor with traditional document based content on the web. I don't think that's going anywhere and I don't think web assembly interferes with that on any significant level.


It might get distributed to the browser as low-level code, but it won't be for the developers - they'll just use compilers and toolkits. In fact, I'd bet an integrated tool for compiling the codebase of an Android app to WebAssembly won't take long to appear. It'll only be technically difficult in the first couple of years, if that.


> Why does it matter how a web page is constructed? If it is fast, secure, and does what the user needs, what is the problem?

Because by definition if it is constructed with JavaScript it is neither fast nor secure.


Because it often doesn't do what a user needs. Try using a screen reader on a lot of "modern" websites some time. There are specifications to make this easy, but people ignore them because accessibility is "too expensive".


That has much more to do with the software developers themselves than the underlying technology. I don't buy the accessibility argument at all. WebAssembly and accessibility are not mutually exclusive.


There's always an element of "one step forward, two steps back" with major technology shifts. Every time we've had a big change in platforms, we also had to redo existing engineering work.

I don't think this is doom and gloom for accessibility, though. The future is in general-purpose assistance technologies that mediate any application. You can smell it with the new work in ML. It is not here now, but as with everything in technology, by the time it's mature and widely available, it's nearly obsolete.


> "one step forward, two steps back"

This only works if you're facing away from your destination.


I don't see what vertical data silos have to do with the technology the OP is talking about. Vertical data silos would exist without the ability to run programs client-side, they'd just be less pleasant (e.g. lots more reloading), and less accessible for many people (harder to make usable AI).


The tech helps those data silos to be more opaque, to treat the browser as a one-way consumption device and to make it harder to link that data in the 'normal' way. It's like audio that can't be re-mixed. That's also a reason why things like RSS and other easy and open standards disappear: they make it harder to lock down the data in a silo.


Well, most of the time, audio that can't be remixed[1] can't be remixed because whichever audio engineer was in the studio and did the mix-down and/or whoever did the mastering afterwards lost the tapes. Mixing down to stereo from thirty different takes of a dozen different audio tracks is inherently going to be a little lossy. It's sort of like cutting up the first draft of a novel, where the author presumably has made some editorial decision.

I'm with you and actively avoid any remote SaaS for this reason, discussed at length here[2]. I don't care about the recurring fees, I'll pay them no problem as long as I can run the binary locally, so I can hedge against the acquire-hire-kill by AppleGoogTwit. When that happens not only do I risk losing data, but I risk losing a tool I depend on within my workflow. If my data has even a remote chance of being locked into a specific platform, i.e., if I can't host it Atlassian style[3], I'm not going to use it. This could rapidly turn into a bunch of mini-App Store instances -- where good apps and/or information disappear on a whim of the developer because they're tired of working on an Angular base. Not only does your data live in a walled garden, but you also risk being the unfortunate victim of the Bored Developer Syndrome.

Side-note: Most of the good engineering blogs I read are just stock WordPress themes with RSS enabled. None of that "RSS only the first paragraph so I get more hits" junk. If someone's intentionally silo'ing their data, more often than not it's one of those "4 things you didn't know about Bash!" blogs. Not much value when you can read the GNUinfo docs on it.

[1] I'm presuming you mean, in the sense of actually re-producing off the master tracks, not just EQ'ing

[2] https://news.ycombinator.com/item?id=11250108

[3] https://news.ycombinator.com/item?id=10753650 - Paragraph 2


I expect a redecentralization of the network with wasm.

It could destroy the web and something new could emerge. Something simple and open again.

The web of today is like the X protocol in the 80s: too sophisticated and very hard to implement. VNC can be seen as a replacement for X. Simpler and better in many aspects.

wasm can fail too or have a limited success like webrtc.

In a world full of robots, markdown is enough to render most contents like JSON is better than XML.


I hope (and suspect) that Web Assembly will find a lot of use for people who need to use it. But I also think that CSS and HTML will remain the presentation technology of choice for the vast majority of Web sites. That's because, contrary to popular opinion, HTML and CSS are actually technically pretty good. They have a lot of flaws, but every proposed JS- or wasm-based replacement for them that I've seen has been significantly worse.


> They have a lot of flaws, but every proposed JS- or wasm-based replacement for them that I've seen has been significantly worse.

I'm a fan of Gopher and Wikipedia mobile.

http://gopher.floodgap.com/gopher/gw.lite

https://en.m.wikipedia.org/wiki/?mobileaction=toggle_view_mo...

HTML and CSS are not bad. The problem (for me) is the lack of diversity in the layout engines. It can be explained by the complexity of the standards. It's near impossible to create a new browser from scratch (that can display modern websites).

BTW people can't choose something else because there is nothing else, and every computer has a web browser. It's more of a monopoly.


Why can't both styles coexist? Declarative hypertext resources for things that are document like, and dynamic applications for things that are app like? We simply are augmenting the plain old HTML web with the ability to link to resources with new capabilities. We haven't subtracted anything.


I hear you, but what is there to do about it? You can't stop people from wanting these kinds of features, and there are huge, real benefits.


I have a suggestion: if we all start blogging again on a myriad of domains and subdomains, and start linking to each other not for SEO but to provide users links to interesting stuff, then all this could go away like an ugly nightmare once you are halfway into breakfast.

I know there are a few of you ex-bloggers here: I loved the time when I could find lots and lots and lots of technical stuff in a never-ending web of blogs. Not everyone agreed, but we linked back to the ones we disagreed with without caring about SEO.

Hey, we could even do web rings and RSS and and and...

Edit: There is still a lot of content, I just miss the time where everyone blogged and I wish we would decide to go back, and then do it.


The WWW is currently a bunch of apps. Mediawiki, Wordpress, Node, Django, and thousands of others.

What is the difference between serving HTML/CSS/JS to the browser, and some other stack of UI and algorithms?


They're not pulling JavaScript out of browsers, it's time to give up that dream.

If you're really so passionate about static content why not be part of a project to that aim? You can host a Gopher server and publish to it; it's stunningly easy to do.

Or maybe an alternative that runs on https? I'd be interested in that personally.


I think the solution is to think of code as data that should be freely distributed and hackable as well. It isn't the garden that is the issue, but its walls.


We're heading straight for native code sandboxed in the browser via a bit of a detour. It will be just as hackable as any closed source code that you get from some vendor. Think of it as a slightly more modern version of Java, this time it really is run 'anywhere' as long as 'anywhere' is a browser and the source code stays with the supplier of the web-app.


The source code of my latest web app is over 1 MB of minified and obfuscated JavaScript. Technically you have the source but it's only marginally more useful to you than looking at assembly generated by a C program.

And the benefits of having source are overstated.

I didn't learn web programming by looking at other people's html and javascript. There was a little bit of that but most of the learning is from tutorials and books and experimenting.

And in the old days of C programs delivered as executables, were people unable to learn how to program in C?

Finally, WebAssembly is not a replacement for JavaScript but fills a gap that JavaScript can't (really fast code).

WebAssembly is basically a compiler target for C-like languages, which means that it's dramatically more expensive to "write" WebAssembly code than it is to use JavaScript, so people will only reach for it in cases where there's compelling reason. It'll never be a default choice for writing web apps.


> people will only reach for it in cases where there is a compelling reason.

Why? Won't any javascript (or any other source code) be compiled to webassembly before going to production?


WebAssembly is a low-level statically typed language. JavaScript is a high-level dynamically typed language.

Compiling JavaScript to WebAssembly can't be done as a simple compilation step, at least not with fast results - you really need a JIT or multiple JITs. In other words, you'd need to compile a full JS engine to WebAssembly together with your code.


It was a lot of fun to see that the other day :-)

> "Binaryen" is pronounced in the same manner as "Targaryen": bi-NAIR-ee-in. Or something like that? Anyhow, however Targaryen is correctly pronounced, they should rhyme. Aside from pronunciation, the Targaryen house words, "Fire and Blood", have also inspired Binaryen's: "Code and Bugs."

https://github.com/WebAssembly/binaryen

Do you know XZ Embedded?

http://tukaani.org/xz/embedded.html

I wonder if it could improve startup (with emterpreter?).


I hadn't heard of XZ embedded. Might be worth measuring it, but it would be competing with native gzip in the browser, which is hard to beat on speed. But maybe better compression would be worth it?


LZHAM is interesting in that area. Speed that competes with gzip and compression that competes with LZMA.

https://github.com/richgel999/lzham_codec


Interesting.

It reminds me of encode.ru :)

Do you know the size of the compiled code?


In this discussion https://news.ycombinator.com/item?id=8911369 someone quoted this page http://mattmahoney.net/dc/text.html as saying it compiles to about 150k. I can't find the current listing, but I'm on mobile :/

Btw: the author's blog talks about LZHAM quite a bit http://richg42.blogspot.com/

edit: Looks like it's up to 190k

                    Compression                      Compressed size      Decompresser  Total size   Time (ns/byte)
    Program           Options                       enwik8      enwik9     size (zip)   enwik9+prog  Comp Decomp  Mem Alg Note
    -------           -------                     ----------  -----------  -----------  -----------  ----- -----  --- --- ----
    lzham 1.0         -d29 -x                     25,002,070  202,237,199    191,600 s  202,428,799   1096   6.6 7800 LZ77 70
    xz 5.2.1--lzma2=preset=9e,dict=1GiB,lc=4,pb=0 24,703,772  197,331,816     36,752 xd 197,368,568   5876    20 6000 LZ77 73
    gzip 1.3.5        -9                          36,445,248  322,591,995     38,801 x  322,630,796    101    17  1.6 LZ77


Very cool! I appreciate the details :-)

I ran a quick test with the tool (xzminidec) that comes with XZ Embedded.

    gcc -O3 -s -D XZ_USE_CRC64 -o xzminidec xzminidec.c xz_crc32.c xz_crc64.c xz_dec_stream.c xz_dec_lzma2.c 
    du -b xzminidec
     19472	xzminidec

    ldd xzminidec
     linux-vdso.so.1 (0x00007ffe9905c000)
     libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f105c83b000)
     /lib64/ld-linux-x86-64.so.2 (0x000055d818666000)

    du -b enwik8.zip 
     36445475	enwik8.zip

    du -b enwik8.xz 
     26375764	enwik8.xz

    time unzip enwik8.zip
     real	0m1.023s
     user	0m0.948s
     sys	0m0.068s

    time cat enwik8.xz | ./xzminidec > out-xz.txt
     real	0m2.598s
     user	0m2.504s
     sys	0m0.136s

    cmp enwik8 out-xz.txt 
     (empty)
unzip is faster but the xz decoder is very tiny and the compressed input file is smaller.


The most interesting feature (for me) is its small size:

"Compiled code is 8–20 KiB."

zlib is heavier (about 200 KB).

It could be used to make self-extracting asm.js programs or to compress some embedded resources (like files from "--embedd", fonts, images...).


WebAssembly is a special encoding for a subset of javascript that's easier to optimize. If all arbitrary javascript could be compiled to WebAssembly and get any benefit, then browsers wouldn't bother with asmjs/WebAssembly and would just implement those optimizations in their general Javascript engines.

WebAssembly is a good compilation target for languages with a flat memory space and no garbage collection.


Yep. The web-heads are slowly reimplementing Unix in the browser. [0] We could have avoided this with better, wider-spread support for sandboxing of untrusted executables.

Alas.

[0] They don't have crashdumps yet, but I expect that they soon will.


If you ask me, the rise of the Web as app platform is a scathing indictment of the state of operating systems, both in research and in practice. Operating systems are so bad at providing a safe and mostly stateless sandbox for untrusted code that the Web, despite being a crappy app platform in every other way, has won.

Look at WebGL for example. Over the past 20 years there has been practically zero interest in making a safe, sandboxed version of OpenGL. It wasn't until browser vendors got involved that people even seriously considered it. Browsers had to implement most of the safety features themselves, in user space. The operating system level GPU interfaces will likely never be anywhere near as secure, because apparently OS vendors don't care about running untrusted code.


> Operating systems are so bad at providing a safe and mostly stateless sandbox for untrusted code that the Web, despite being a crappy app platform in every other way, has won.

"Worse is better."

> The operating system level GPU interfaces will likely never be anywhere near as secure, because apparently OS vendors don't care about running untrusted code.

It's handled with emulation and virtualization. The future of safety looks like QubesOS.

https://www.qubes-os.org/


Do you really feel like optimized JavaScript is source code any more than this is?


It's an incremental process. It started with some innocent fluff (and client side input validation) and it ends with signed binaries shipping from trusted sources.

We're somewhere in the middle.


There's . . . already a complete programming language in web pages. I'm not sure what you're trying to accomplish here.


Stallman was right, just for the wrong reasons. Scary. Offline computing is probably going to die.


You can run web apps fully offline since like 2011. The only difference is that the initial "install" is 100kb instead of 100mb.


It's not the web that we are losing -- it's the browser.

It's all still HTTP requests and responses, but the web browser itself is becoming something very different from what it was a decade ago.


is it the web without hyperlinks?


Yes, and with a low-level interface to (virtual) hardware it defines a VM that can run standalone (thus without a browser).


Don't we already have that with the JVM, the CLR, Smalltalk, many Lisps, Erlang, MIX, MMIX, etc?


Yes, nothing new.


What's hilarious is that A) you're pining for something that hasn't existed for years, and B) you're arguing against something (Service Workers) that would bring it back.


On the Rust side, we're working on integrating Emscripten support into the compiler so that we're ready for WebAssembly right out of the gate. Given that the initial release of WebAssembly won't support managed languages, Rust is one of the few languages that is capable of competing with C/C++ in this specific space for the near future. And of course it helps that WebAssembly, Emscripten, and Rust all have strong cross-pollination through Mozilla. :)

If anyone would like to get involved with helping us prepare, please see https://internals.rust-lang.org/t/need-help-with-emscripten-...

EDIT: See also asajeffrey's wasm repo for Rust-native WebAssembly support that will hopefully land in Servo someday: https://github.com/asajeffrey/wasm


As we get closer to having a WebAssembly demo ready in multiple browsers, the group has added a small little website on GitHub [0] that should provide a better overview of the project than browsing the disparate repos (design, spec, etc.).

Since the last time WebAssembly hit HN, we've made a lot of progress designing the binary encoding [1] for WebAssembly.

(Disclaimer: I'm on the V8 team.)

[0]: http://webassembly.github.io/ [1]: https://github.com/WebAssembly/design/blob/master/BinaryEnco...


About the binary encoding... It's a bit easy to armchair these things, and it's too late for WebAsm now... but if you're on the V8 team, you have access to Google's PrefixVarint implementation (originally by Doug Rhode, IIRC from my time as a Google engineer). A 128-bit prefix varint is exactly as big as an LEB128 int in all cases, but is dramatically faster to decode and encode. It's closely related to the encoding used by UTF-8. Doug benchmarked PrefixVarints and found both Protocol Buffer encoding and Protocol Buffer decoding would be significantly faster if they had thought of using a UTF-8-like encoding.

LEB128 requires a mask operation and a branch operation on every single byte (maybe skipping the final byte), so for a 128-bit value that's a mask and a branch for each of its up to 19 bytes. Using 32-bit or 64-bit native loads gets tricky, and I suspect all of the bit twiddling necessary makes it slower than the naive byte-at-a-time mask-and-branch.

    7 bits -> 0xxxxxxx
    14 bits -> 1xxxxxxx 0xxxxxxx
    ...
    35 bits -> 1xxxxxxx 1xxxxxxx 1xxxxxxx 1xxxxxxx 0xxxxxxx
    ...
    128 bits -> 1xxxxxxx 1xxxxxxx 1xxxxxxx ... xxxxxxxx
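A minimal byte-at-a-time decoder for that layout (just a sketch, not wasm's or any engine's actual code; the function name is made up) shows the mask and the branch on every byte:

    #include <stdint.h>

    /* Sketch: decode one unsigned LEB128 value, no overflow checking. */
    const uint8_t *leb128_decode_u64(const uint8_t *p, uint64_t *out) {
      uint64_t value = 0;
      int shift = 0;
      for (;;) {
        uint8_t byte = *p++;
        value |= (uint64_t)(byte & 0x7f) << shift;  /* mask on every byte */
        if ((byte & 0x80) == 0) break;              /* branch on every byte */
        shift += 7;
      }
      *out = value;
      return p;
    }
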
Prefix varints just shift that unary encoding to the front, so you have at most 2 single-byte switch statements, for less branch misprediction, and for larger sizes it's trivial to make use of the processor's native 32-bit and 64-bit load instructions (assuming a processor that supports unaligned loads).

    7 bits -> 0xxxxxxx
    14 bits -> 10xxxxxx xxxxxxxx
    ...
    35 bits -> 11110xxx xxxxxxxx xxxxxxxx xxxxxxxx xxxxxxxx
    ...
    128 bits -> 11111111 11111111 xxxxxxxx xxxxxxxx ... xxxxxxxx
There's literally no advantage to LEB128, other than that more people have heard of it. A 128-bit PrefixVarint is literally always the same number of bytes as the corresponding LEB128; it just puts the length-encoding bits all together so you can more easily branch on them, and doesn't let them get in the way of native loads for your data bits.

Also, zigzag encoding and decoding are faster than sign extension for variable-length integers. Protocol Buffers got that part right.
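For reference, the zigzag mapping (a sketch of the Protocol Buffers scheme; not part of the wasm format) is just a shift and an xor in each direction:

    #include <stdint.h>

    /* Sketch of zigzag for signed 64-bit ints: 0, -1, 1, -2, ... map to
       0, 1, 2, 3, ... so small magnitudes need few varint bytes.
       Assumes the usual arithmetic right shift on signed values. */
    uint64_t zigzag_encode(int64_t n) {
      return ((uint64_t)n << 1) ^ (uint64_t)(n >> 63);
    }

    int64_t zigzag_decode(uint64_t z) {
      return (int64_t)(z >> 1) ^ -(int64_t)(z & 1);
    }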

Note that if there are no non-canonical representations, there can't be security bugs due to developers forgetting to check for non-canonical representations. For this reason, you may want to use a bijective base-256[0] encoding, so that there aren't multiple encodings for a single integer. In the UTF-8 world, there have been several security issues due to UTF-8 decoders not properly checking for non-canonical encodings and programmers doing slightly silly checks against constant byte arrays. A bijective base 256 saves you less than half a percent in space usage, but the cost is only one subtraction at encoding time and one addition at decoding time.

[0]https://en.wikipedia.org/wiki/Bijective_numeration


It's not too late! The wasm binary encoding is open to change up until the browsers ship a stable MVP implementation (then the plan is to freeze the encoding indefinitely at version 1).

The primary advantage of LEB128 is (as you mentioned) that it's a relatively common encoding. PrefixVarint is not an open source encoding IIUC.

We'll do some experiments in terms of speed. If the gains are significant we may be able to adopt something similar (this [0] looks like a related idea).

Thanks for the suggestion.

[0]: http://www.dlugosz.com/ZIP2/VLI.html


PrefixVarint isn't open-source, but the encoding is trivial.

PrefixVarints are a folk theorem of Computer Science, (re-)invented many times and in many places.

I actually coded it up once in Python and once in C before joining Google, and was chatting with an engineer, complaining about the Protocol Buffer varint encoding. The person I was complaining to, said "Yea, Doug Rhode did exactly that, called it PrefixVarint. He benchmarked it much faster."


See my other comments on this thread for a simple implementation of a bijective big-endian prefix varint encoder. You may or may not want a bijective encoding, and probably want little-endian. I'm just used to writing big-endian encoders (for lexicographical sorting reasons), so that was faster for me to whip up as a demonstration of a bijective encoder.

A real implementation would use a switch statement instead of a loop. One might use a lookup table or a few instructions of inline assembly to calculate the number of leading ones in the first byte, and switch on that.
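As a sketch of that idea (using GCC/Clang's __builtin_clz rather than a lookup table or inline assembly; the function name is made up for illustration), counting the leading ones of the first byte needs only one special case, for the all-ones byte:

    #include <stdint.h>

    /* Sketch: number of bytes that follow the first byte, for a high-bit
       unary length prefix. 0xFF (all ones) is the maximum-length case and
       is handled separately because __builtin_clz(0) is undefined. */
    static inline int prefix_extra_bytes(uint8_t first) {
      if (first == 0xFF) return 8;
      return __builtin_clz((uint32_t)(uint8_t)~first) - 24;
    }

    /* A decoder would then switch (prefix_extra_bytes(*p)) on the result. */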


I have been advocating for the PrefixVarint encoding you mention for a while.

One thing I'd mention though: as you've specified it here, it puts the continuation bits as the high bits of the first byte. I think it may be better to put them in the lower bits of that byte instead. It would allow for a simple loop-based implementation of the encoder/decoder (LEB128 also allows this). With continuation bits in the high bits of the first byte, you pretty much have to unroll everything. You have to give each length its own individual code-path, with hard-coded constants for the shifts and continuation bits.

The downside is one extra shift of latency in the one-byte case, imposed on all encoders/decoders.

Unrolling is probably a good idea for optimization anyway, but it seems better to standardize on something that at least allows a simple implementation.

Here is some sample code for a loop-based implementation that uses low bits for continuation bits:

    // Little-endian only. Untested.
    #include <stdint.h>
    #include <string.h>

    // Continuation bits live in the low bits of the first byte:
    // a run of (len - 1) ones terminated by a zero, then the value.
    char *encode(char *p, uint64_t val) {
      int len = 1;
      uint64_t encoded = val << 1;
      uint64_t max = 1ULL << 7;
      while (val >= max) {
        if (len == 8) {
          // About to need a 9th byte.
          // Special case so 64 bits fits in 9 bytes.
          *p++ = 0xff;
          memcpy(p, &val, 8);
          return p + 8;
        }
        encoded = (encoded << 1) | 1;
        max <<= 7;
        len++;
      }
      memcpy(p, &encoded, len);
      return p + len;
    }

    const char *decode(const char *p, uint64_t *val) {
      if ((uint8_t)*p == 0xff) {
        // 9-byte special case
        memcpy(val, p + 1, 8);
        return p + 9;
      }

      // Can optimize with something like
      //   int len = __builtin_ctz(~*p) + 1;
      unsigned char b = *p;
      int len = 1;
      while (b & 1) {
        len++;
        b >>= 1;
      }

      *val = 0;
      memcpy(val, p, len);
      *val >>= len;
      return p + len;
    }


You can have an equally simple implementation (plus one mask operation) if you put the length encoding in the most significant bits. The advantage of having length in the most significant bit is that in the common case (1 byte integers), the decoding is faster.


Are you sure? It does not seem like it will be as simple. When continuation bits are at the top of the first byte, they come between the value bits in the first byte and value bits in the subsequent bytes. This means you have to manipulate them independently, instead of being able to manipulate them as an atomic group. With low continuation bits, all the value bits get to stay together.

If it would be as simple, you should be able to easily modify my sample encoder/decoder above to illustrate.


Oops. You're right for little-endian encoders. (See another comment on this thread for a simple bijective big-endian encoder I whipped up just now.) I've always written big-endian encoders or bijective big-endian encoders, so that byte strings sort the same lexicographically and numerically.

Though, a simple loop encoder and decoder are still easily doable if the unary length encoding is in the most significant bits. You're right, though, that for a little-endian encoder it's slightly simpler to put the unary length encoding in the least significant bits.


I think this is definitely an improvement over the wasm varint implementation. However, wasm bytecode is almost always going to be delivered compressed with gzip or brotli, so measurements of compression and speed should be taken after those. In particular, I'm wondering if a plain non-variable integer encoding would be best, considering how brotli and gzip operate on byte sequences.


This is definitely something I'd really like to see benchmarked: how valuable is it to pile on two different kinds of "compression" compared to only the "complicated" one (gzip or brotli)?


Can you please explain how you'd use "bijective numeration" specifically? What do you think has to be changed or added to your proposal:

    7 bits -> 0xxxxxxx
    14 bits -> 10xxxxxx xxxxxxxx
    ...


A real implementation would probably be switch-driven, but I whipped up a terse implementation for a big-endian bijective encoder to go with my other comment (tested, but test code omitted):

    int varint_u64_encode(uint8_t** start, const uint8_t* limit, uint64_t value) {
      uint64_t offset;
      uint8_t* position;
      int bytes;

      for (bytes = 1, offset = 0x80;  value >= offset && bytes < 9; ++bytes) {
        offset = (offset << 7 ) | 0x80;
      }
      position = *start;
      value -= ( offset >> 7 ) ^ 1;
      if (position + bytes > limit) {return 0; /* not enough space */}
      *position = (((uint8_t)0xFF) << (9-bytes)) | (uint8_t)(value >> ((bytes-1) * 8 ));
      for (++position; --bytes > 0; ++position) {
        *position = (uint8_t) (value >> ((bytes-1) * 8));
      }
      *start = position;
      return 1;
    }
  
    int varint_u64_decode(const uint8_t** start, const uint8_t* limit, uint64_t* result) {
      uint64_t value;
      uint64_t offset;
      const uint8_t* position;
      int bytes;
      uint8_t mask;
  
      position = *start;  offset = 0;
      for(bytes=1, mask = 0x80; (mask & *position) == mask && bytes < 9; ++bytes) {
        mask = 0x80 | (mask >> 1);
        offset = (offset << 7) | 0x80;
      }
      if (position + bytes > limit) { return 0; /* not enough space */}
      value = (~mask) & *position;
      ++ position;
      for(; --bytes > 0; ++position) {
        value = (value << 8) + *position;
      }
      value += offset;
      if (bytes == 9 && value < (1uLL << 56)) {return 0; /* overflow, non-canonical encoding */ }
      *result = value;
      *start = position;
      return 1;
    }


For a bijective base:

    7 bits -> Decode 0xxxxxxx and add 0 (unchanged)
    14 bits -> Decode 10xxxxxx xxxxxxxx and add 0x80
    21 bits -> Decode 110xxxxx xxxxxxxx xxxxxxxx and add 0x4080
In a non-bijective base vs. a bijective base:

    7 bits encode  0 to 2^7 - 1    vs.  0 to 2^7 - 1
    14 bits encode 0 to 2^14 - 1   vs.  2^7 to 2^14 + 2^7 - 1
    21 bits encode 0 to 2^21 - 1   vs.  2^14 + 2^7 to 2^21 + 2^14 + 2^7 - 1
    ...

In the bijective decoding routine, you need to special-case the maximum length case to check for numeric overflow.


Since I started hearing about WebAssembly I cannot stop thinking about the possibilities. For example: NPM compiling C-dependencies together with ECMAScript/JavaScript into a single WebAssembly package that can then run inside the browser.

For people thinking this will close the web even more because the source will not be "human"-readable: remember that JavaScript already gets minified, and other languages get compiled into it (using Emscripten), as well. The benefits I see compared to what we have now:

- Better sharing of code between different applications (desktop, mobile apps, server, web etc.)

- People can finally choose their own favorite language for web-development.

- Closer to the way it will be executed which will improve performance.

- Code compiled from different languages can work / link together.

Then for the UI part there are those common languages / vocabularies we can use to communicate with us humans: HTML, SVG, CSS etc.

I only hope this will improve the "running same code on client or server to render user-interface" situation as well.


More importantly, if we want to make "view source" more palatable in a WebAssembly age, we need to have it support source maps from day 1.


Yes, that would be good for development / debugging (like debug symbols) or as an optional way to give people access to the source.


>For example: NPM compiling C-dependencies together with ECMAScript/JavaScript into a single WebAssembly package that can then run inside the browser.

The reason you'd write stuff in C is (aside from performance) to access native APIs. Browsers and WASM don't let you do that.

WASM in Node could let you do that, meaning you would get cross-platform assembly packages instead of ELF or whatever binary, but you would still need APIs on the platform. For C that's often handled with preprocessor macros that choose which platform you are compiling for, so you can't just "compile to WASM and then magic"; even with WASM you'd have to compile to "WASM + POSIX" and "WASM + Win32" if you want to run on POSIX/Win32, etc., for all platform/API permutations.

TL;DR: WASM is big, but it won't quite be an abstract virtual machine like, say, the JVM or CLR.


Considering how critical SharedArrayBuffer is for achieving parallelism in WebAssembly, I'm hoping we see major browsers clean up their Worker API implementations, or even just comply with spec in the first place.

Right now things are a mess in Web Worker land, and have been for quite some time.


Absolutely agreed. We should be able to debug Web Workers with development tools (e.g. set breakpoints / examine state / etc), nest Web Workers, use console.log from within a Worker, and construct Web Workers from Blob URLs. It's infuriatingly difficult to work with Web Workers without these features, which are missing from most browsers!

I think there's a chicken-and-egg problem with regards to Web Workers: Developers do not use Web Workers because they are hard to use/debug/develop with, and browser vendors do not improve their Web Worker implementations because they have limited adoption. Someone needs to break the cycle.


There's also a host of nasty bugs and implementation deficiencies, depending on the browser.

My favorite is in Chrome with simple DedicatedWorker instances communicating with each other directly via the MessageChannel API. It works, except when the UI thread is blocked, because messages are routed through the UI thread. Firefox doesn't have this problem, but it has its own issues—namely the UI thread for each tab runs in the same OS process, unlike Chrome where tabs are isolated processes.

That said, Firefox and Chrome are far ahead of every other browser in terms of how they implement workers. Other implementations are borderline destitute by comparison (e.g. no DOMHighResTimeStamp available in worker context, no Transferable support for important items).

>Developers do not use Web Workers because they are hard to use/debug/develop with, and browser vendors do not improve their Web Worker implementations because they have limited adoption.

You hit the nail on the head. I think it's also a dislike for the API as a whole, because really workers are just a convoluted way of enforcing thread safety. Personally I'd prefer a far more simple and traditional shared memory model, with developers being afforded enough rope to hang themselves with if they so desired.

The existing methods of transferring data to and from workers are frankly crap, the only light at the end of the tunnel being SharedArrayBuffer and the Atomics API designed around it. The problem is that both are essentially designed for compiled applications, ala asm.js and WebAssembly. In a compiled [browser] environment, the heap is seamlessly allocated on a SharedArrayBuffer, so writing parallel code is nearly identical to the traditional desktop experience from a developer's point of view. In plain old Javascript however, you have to serialize and deserialize native types to and from the buffer, which is expensive. It really makes Javascript seem like a second-class citizen with regards to parallelism.


> It works, except when the UI thread is blocked, because messages are routed through the UI thread.

This bug actually causes a ~40% slowdown in one of my projects, since I use sendMessage() in a Worker context to quickly yield to the browser. I didn't mention it, since I figured it may have been fairly obscure... I'm somewhat glad to hear that others are experiencing pain from it. It gives me some hope that it'll eventually be addressed!


SharedArrayBuffer has been accepted as stage 2 in tc39 so there is now a good chance you will have this from JavaScript as well.

https://github.com/tc39/ecmascript_sharedmem


Right. My point was that the way the API is built is definitely more friendly towards compiled use cases. With JS, you have to manually serialize and deserialize virtually everything that transits the buffer.

Still, it's nice to see the spec move forward.


> We should be able to debug Web Workers with development tools

Being actively worked on, as far as I can tell. I agree that not having this sucks.

> nest Web Workers

Works in Firefox; haven't tested in Edge.

> use console.log from within a Worker

Works in Firefox and Chrome, fails in Safari, haven't tested in Edge.

> construct Web Workers from Blob URLs

Works in every modern browser, I believe.


If anyone at infoworld.com reads these comments:

On the top of the page, there is a horizontal menu containing "App Dev • Cloud • Data Center • Mobile ..."

When I position my cursor above this menu and then use the scroll wheel to begin scrolling down the page, once this menu becomes aligned with my cursor, the page immediately stops scrolling and the scroll wheel functionality is hijacked and used to scroll this menu horizontally instead.

It took a few seconds to realize what was happening. At first I thought the browser was lagging - why else would scrolling ever abruptly stop like that?

I closed the page without reading a single word.


I still think there is a lot of room for static pages with links, in the style that people seem to be prematurely waxing melancholy about when forecasting where WebAssembly _may_ lead the internet. I was always able to find sites of interest that didn't include Flash, Java applets, and company when I just wanted to read something. I find the scroll-hijacking and other JavaScript goodies on modern pages to be either a distraction, or non-functional on some devices. On the other hand, I am particularly happy about, and working with, Pollen in Racket, a creation of Matthew Butterick. Pollen is a language created with Racket for making digital books, books as code, and bringing some long-needed, real-world publishing aesthetics back to the web [1,2]. I may even buy a font of his to get going and support him at the same time!

   [1]  http://docs.racket-lang.org/pollen/
   [2]  http://practical.typography.com


If you want to see Brendan's keynote from O'Reilly Fluent yesterday, a sample went up https://www.youtube.com/watch?v=9UYoKyuFXrM with the full one at https://www.oreilly.com/ideas/brendan-eich-javascript-fluent...


And Alex Russel's keynote (Google) "Progressive web apps and what's next for mobile" can be found at https://www.oreilly.com/ideas/progressive-web-apps-and-whats...


I think the web may split into two.

1) 'Simple' web pages will stick with jquery, react, angular, etc. type code. Where you can still click view source and see what's going on. Where libs are pulled from CDNs etc.

2) 'Complex' saas web apps, where you need native functionality. This will be a huge bonus. I'm in this space. I would love to see my own application as a native app. The UI wins alone make it worth it!


What does 'native functionality' mean for a web app?

Do you mean skipping the DOM and making a Canvas for displaying content? Or do you mean something else?


To me, it's more about choice of programming language than performance. Though the latter is very important, I think the former is what will open up doors to making the browser a platform of choice (pun intended). Currently, it feels like JavaScript is the Comcast of the web. Everyone uses it, but that's only because there aren't any other options available to them.


Definitely agree! I really hope that wasm will kill JavaScript (and CSS, by the way). I just hate this language.


Video of the talk?

EDIT: Here is the full-length one - https://www.oreilly.com/ideas/brendan-eich-javascript-fluent...


Here's his Fluent keynote from yesterday: https://www.youtube.com/watch?v=9UYoKyuFXrM .. full at https://www.oreilly.com/ideas/brendan-eich-javascript-fluent... (click X on the popup window, you don't need to sign in)


Sorry, but most of the discussion here is completely missing the point about WebAssembly.

It is just a technology to make things delivered through the web faster. And it is open. And no less secure than JS. So I think it's great.

Good technology does exactly what the creator wants. And if people don't like some of the things that get created with it, then that is not a problem of the technology itself.

So people can do good things or bad things with it. But on the web, we have the freedom to choose where we go.

And if we don't like ads, for example, we should be aware that website creators still want money for their work, so maybe we should focus on and support a different funding model. I like the pay-what-you-want or donation model the most; Wikipedia shows that this is possible on a large scale...


I want to agree with him; I'd like to see a future where WebAssembly closes the gap between native apps and the web. For better or worse browsers are the new OSes, and I dream of a future where all vendors come up with the equivalent of a POSIX standard, where any web application can access all (or a wide common subset) of any device's capabilities, from the filesystem to native UI elements.


Your comment reminded me of this highly entertaining talk - https://www.destroyallsoftware.com/talks/the-birth-and-death...

Although tongue in cheek, I think it gives some food for thought. I feel like WebAssembly is to asm.js what the modern JS profession is to the old follow-your-cursor effects on webpages: it becomes something to take seriously and use. Having done a bunch of porting with Emscripten, the idea of a browser within a browser doesn't sound as crazy as it used to!


To be honest, WebAssembly isn't really javascript anymore. asm.js was, albeit only sorta-kinda-just-barely (but in an important way), but WebAssembly isn't. There's a reasonable case to be made that in 20 years "everything" will be WebAssembly, but we won't be calling it Javascript, thinking of it like Javascript, or using it like Javascript.

In the long term, this is the death knell for Javascript-as-the-only-choice. Javascript will live on, but when left to fend for itself on its own merits, it's just another 1990s-style dynamic scripting language with little to particularly recommend it over all the other 1990s-style dynamic scripting languages.

But Javascript programmers need not fear this... it will be a very long, gradual transition. You'll have abundant time to make adjustments if you need to, and should you not want to, there will still be Javascript jobs for a very long time.


You act like JavaScript's only upside is the fact that it's required in the browser.

IME the opposite is true. I'm seeing companies flock to it outside of browser contexts, in areas where "code reuse" or "isomorphic/universal" style programs aren't even possible.


It's not that it's the only upside. It's that once out of the browser, it isn't really a standout language. For instance, if you're going to have a "fair fight" out there, one can't help but notice that Python already has everything that we're standing around waiting for in ES6, plus the next couple of iterations.

And that's just Python. You should also check into Perl, Ruby, Lua, and PHP, and that's without straying even a little outside of the "1990s dynamic scripting language" field, to say nothing of what you can find if you leave that behind.

It just isn't that impressive of a language once you remove the browser prop. It isn't necessarily bad, or at least no worse than some other popular languages as well, but there's nothing uniquely good about it in the greater language landscape.

To be honest, anyone who thinks that Javascript does have some sort of unique advantage needs to get out more and learn a few more languages. Even Python, which you'll find goes very quickly if you already know JS. Javascript is very, very not special. Again, since people seem to confuse these things, that does not make it bad, but it's very not special. Very boring, middle-of-the-road scripting language that is, if anything, well behind the other ones in features because of its multi-headed, no-leader-in-practice development model.


But JS isn't just a "worse python" either. I've gotten around when it comes to languages, from business-basic to c++ to go to python, php, ruby, js, lua, and lisp. Having spent a non-trivial amount of time in each, JS has by far one of the best ecosystems I've ever seen.

See, I hate talking about languages because it's hard to define what a language even is.

Is it purely the syntax? Is it syntax+standard library? Or is it the whole set of syntax+libraries+ecosystem+idioms?

From a purely syntax point of view, js is lacking some things and while they are getting fixed, it's taking longer than most would like. And i agree that in this aspect js is currently "mediocre" at best.

From a "whole ecosystem" point of view, js is wonderful. It's fast, secure enough to give arbitrary code from anyone in a browser, has a stupidly huge set of libraries, an "idiomatic" style which works very well for some problems, and it's almost literally everywhere and on everything, has multiple competing implementations that helps drive performance and reduce bugs.

Yeah, it's got its quirks (and in JS's case, a lot of them), but every language its age and older does.

Now if there was some way to magically take all of the "other" parts from js and apply them to another language, you'd have an overnight success, but the fact is that the language syntax is such a small part of what a language truly is.


You're asking for the ability to make perfect UI-spoofing attacks (among other types of attack). It is vitally important to maintain a wall between the browser's UI anything the remote code can touch.


Hey, I've got an idea. How about we just implement this POSIX like standard at the OS layer.

We can call it POSIX.


It's a shame you got downvoted for that, as you make a very good point. This whole trend of making the web browser a poor man's OS is definitely a bit hinky. I mean, how many layers of abstractions built on top of other (redundant) layers of abstraction really make sense?

This stuff is one reason that, despite the advances associated with Moore's Law, the advent of SSD's, and increasing RAM counts, computers don't feel any faster than they did in 1995. It's ridiculous in a way.

Just to play Devil's Advocate: maybe web browsers should be good at, ya know, browsing and leave the other stuff for something else.


> how many layers of abstractions built on top of other (redundant) layers of abstraction really make sense?

As many as needed. This is a political problem, not a technical one.

OSs don't want to provide one common framework for writing and distributing sandboxed one-click-install write-once-run-anywhere applications. So browsers are solving this problem on their own.

Maybe you don't care about this, but users, developers who need to write universal apps, and their marketing managers definitely do.


> It's a shame you got downvoted for that, as you make a very good point.

I think this is because the comment comes off as flippant and snarky.

> This stuff is one reason that, despite the advances associated with Moore's Law, the advent of SSD's, and increasing RAM counts, computers don't feel any faster than they did in 1995. It's ridiculous in a way.

That statement is ridiculous. I've never heard anyone claim that the computers of today don't "feel" any faster than computers of 20 years ago, but if you feel that way I just don't think you're living in the same universe as those of us who walk around with quad core computers in our pockets.

> maybe web browsers should be good at, ya know, browsing and leave the other stuff for something else.

Please define "other stuff" and where you draw the line between that and simply "browsing"


As someone who has been around since before the Web, I can confirm that computers today do not feel any faster... despite the fact that your phone is faster than the fastest computer in the world from that time.

In fact I gave a speech about this at Berkeley last week. I think it'll be online pretty soon.

So now you have at least heard someone claim this.


> As someone who has been around since before the Web,

While this is an impressive credential, it's one that I can also claim.

> I can confirm that computers today do not feel any faster... despite the fact that your phone is faster than the fastest computer in the world from that time.

I appreciate your confirming a subjective feeling based on anecdote, but as someone who was also around in 1995 and has continued to use computers daily since, I'll respectfully provide my own experiences as counter-anecdote to your own. I don't think there's any point in trying to debate our subjective opinions regarding how fast computers feel, but I'll assert that if you sat someone down with a 133mhz P1 desktop with 32MB of ram and a 2ghz i5 with 2GB of ram, 9/10 they'd agree that the 2ghz computer unequivocally feels faster than the 133mhz one.


Dude my first professional programming experiences were on a 486/33. Compared to that a P1/133 is pretty darn fast!

But as you say, there is not much point debating subjectivity here. It's not like I had the foresight to record benchmarks of how long it took web pages to appear, or to open a window, etc, back in the mid-90s.

Edit: How about if I put it this way:

If you go back in time to the 90s and tell everyone "20 years from now, we will have a much more advanced web where EVERYONE WILL HAVE A SUPERCOMPUTER IN THEIR POCKET", people would imagine the web would be amazing, and responsive and beautiful, and we would be doing some seriously intricate stuff.

Instead ... no, we have a pile of junk that only kind of works, and slowly at that. In terms of potential unreached, the web is kind of a massive failure. (Yes, it is "successful" in the sense that we are able to do a lot with it that we could not 20 years ago, but the mediocre is the enemy of the good, and all that).


> people would imagine the web would be amazing, and responsive and beautiful, and we would be doing some seriously intricate stuff.

I think this is what happened. Everyone can agree that there are many examples of extreme over-engineering on the modern web, but sites like gmail, facebook, youtube, twitch, google docs etc, by the standards of 1995 are pretty damn amazing, responsive, and beautiful. Concerns about privacy and ads have made us wary of these trends on the web, but from a purely functional perspective, the modern web has achieved incredible technical feats compared to what was possible in 1995.

> Instead ... no, we have a pile of junk that only kind of works, and slowly at that.

Yes, there is a lot of junk on the web, and yes, it "kind of works", but this is true of all software on all systems. There are plenty of compatibility issues with native software across operating systems, there's also plenty of junk software on the desktop and in mobile app stores. All software is crap and the web is no different, but it isn't especially crappy, it's just that we see a lot more crap on the web because visiting a URL is a lot easier, safer, and more discoverable than executing arbitrary binaries.


> by the standards of 1995 are pretty damn amazing, responsive, and beautiful

Yeah no. If you had gone back to 1995 and told me that gmail was what you would get when I have a supercomputer in my pocket, a super-super computer on my desk, and all web pages are served by SUPER-super-super computers, I would have quit the industry out of depression.

It is some horrible bullshit when you look at it in perspective.

About the quality issue, no surprise that I also disagree there: the web is especially crappy.

I do not consider any piece of software that I use to be performing acceptably (native or web), but there is a stark difference between the native apps and the web apps, in that the native ones are at least kind of close to performing acceptably, and also tend to be a lot more robust.

Web apps not working is just the way of life for the web. Any time I fill out a new web form I expect to have to fill it out three times because of some random BS or another.

Look at all the engineers employed by Facebook and especially Twitter. WHAT DO MOST OF THOSE PEOPLE EVEN DO? Obviously the average productivity, in terms of software functionality per employee per year, is historically low, devastatingly low. What is going on exactly??


I think it's a shame you got downvoted as well. Have an upvote on me.

As to the rest:

> I've never heard anyone claim that the computers of today don't "feel" any faster than computers of 20 years ago,

Interesting, I find it to be a fairly common refrain. In fact, what I'm saying is basically just a paraphrase of Wirth's Law:

https://en.wikipedia.org/wiki/Wirth's_law

> but if you feel that way I just don't think you're living in the same universe as those of us who walk around with quad core computers in our pockets.

Well, I walk around with a quad core computer in my pocket as well, and I still stand by that assertion.

Please define "other stuff" and where you draw the line between that and simply "browsing"

I'll allow that there's some subjectivity there, but when you're talking about a "web application" like, say, Microsoft Outlook online or something, or a programming editor or a CAD program or an image editing program, I can't help but wonder if that stuff should really be done purely "in browser" as opposed to being handed off to another program.

OTOH, I understand (some of) the arguments for doing it this way. Having a uniform experience for all clients, the security holes associated with plugins, avoiding the need to deploy software to individual machines, etc. I'd just like to suggest that people spend some time considering if there are other ways to achieve the same end(s) other than continuing to bloat the web browser until it replicates all the functionality offered by the underlying OS.


> I find it to be a fairly common refrain.

I don't object to the idea that some software trends towards sluggishness because of feature creep or lazy developers, but I take issue with the statement that computers of today don't feel any faster than computers of 20 years ago because there is a large cross section of computing tasks that are wrapped up in the notion of what constitutes a "fast" computer.

For example, in 1995, running Paintshop Pro and netscape on the same machine was about the limit of what my computer could handle at once. Today, I can run photoshop, chrome, Visual Studio and 2 VMs simultaneously without skipping a beat. In 1995 just trying to minimize netscape could result in a 30 second wait while the system attempted to redraw the windows beneath it.

I have distinct memories of how my brain was conditioned to avoid certain actions because it would render the machine practically inoperable if care wasn't taken to ensure that no more than a few programs or operations were performed simultaneously. Today, even on Windows, I can leave dozens of programs (including the browser with a dozen tabs of it's own) open for months at a time and experience zero slow down; compare this with 1995 where restarting a sluggish Windows PC was a daily ritual because it would just become unusable if left with multiple applications running over night. Even in 1999, if I decided I wanted to play the original starcraft, I needed to ensure that I closed all other applications if I wanted to avoid game-breaking slowdown, and even with that, accidentally hitting alt-tab resulted in a 30 second wait while the desktop rendered itself and another 15 seconds for the system to return context to the game. Today, I can leave all my work open in the background, play a few games or seamlessly alt tab to adjust my playlist and then continue working afterwards without any impediment. in 1995 it took my computer 20 to 60 seconds to boot, today, thanks to the SSD, it takes 8 seconds maximum from boot to desktop on Windows, and even faster on Linux.

Today, you don't even have to think about performance (as a user) because the vast majority of common computing tasks can be performed effortlessly by modern systems.


> Having a uniform experience for all clients

An experience that is uniformly slow and uniformly broken a different way on every browser...


I largely agree, but the argument is "If we rely on plugins, some users will have the plugin and some won't and since users don't install plugins, not everybody will be able to use our $THING".

And it is a somewhat legitimate argument. Whether or not it justifies having the browser subsume everything is, IMO, an open question.


I think if we decide heavily siloing / sandboxing is the right thing for software generally, then what you want to do is build an operating system that works that way (kind of like iOS, but with provisions to enable better data sharing so that you can actually make things with that OS).

This would be TREMENDOUSLY better than trying to make the browser into an OS.


What do you think about a browser tab that loads a VM running Linux running OpenJDK that runs a full Java application in its own sandboxed OS instead of an applet, with some mechanism for file transfer to the host OS? You could also support any other language, WINE, Mono, whatever. The point is having a sandboxing mechanism that gives existing native code first class status in the browser. Too hacky?


I agree. I used to be anti mobile app until I came to the same realization. Did you see https://www.qubes-os.org/ posted earlier today? It looks like an interesting sandboxing approach.


> I've never heard anyone claim that the computers of today don't "feel" any faster than computers of 20 years ago

It's a fairly common observation. What Andy giveth, Bill taketh away, and so on.


Someone else linked to a talk that mentioned removing all the layers in a theoretical architecture called METAL (this is an old talk): basically running asm.js directly on the kernel, and even removing the overhead that kernels need to make native code safe (such as the Memory Management Unit), with the result that it would run faster than normal native code.

https://www.destroyallsoftware.com/talks/the-birth-and-death...

The major thing to be gained from all this then is software that can run fast but not have to be recompiled for all the different systems and hardware.


Build some nice sandboxed hypermedia-application APIs into POSIX and get them adopted all over, and then sure, we can talk!


There is an effort in this direction in emscripten (with musl).

For example with pthreads:

https://kripken.github.io/emscripten-site/docs/porting/pthre...


Allowing web sites to access the file system is not a great idea.


Having the capability does not imply having the permissions.


What could possibly go wrong with that.


The permission to read a file's contents in a web app is given by either dragging the file from your desktop to the web app or choosing it with a file picker. To save a file, a web app has to ask the user through a save-as dialog.

This permission model has pretty much always existed; it was just extremely wasteful because you had to first send the file to the server and then send the contents back. The new web file APIs therefore don't add any new security issue but add a massively better UX.

To me this is a much smarter model than something like "can I have full access to your fs, yes/no? BTW this app doesn't work if you say no". I think you are thinking of this stupid model when you say "what could possibly go wrong with that?", but if not, please elaborate.


Not exactly disagreeing with you, but for a long time now the web has been our primary path to most forms of code execution, hasn't it? I mean, if you count HTTP as the web in addition to browsers.


WebAssembly... Wow, if we keep going, we'll re-invent what Sun achieved 20 years ago with Java. If only they hadn't f-ed it up...


The JVM's problem was that it had applets and did not have the DOM integration of JavaScript. I do often wonder what would have happened if, instead of JavaScript in 1995, we had got WebAssembly and WebSockets.


You could actually call into JavaScript from Applets, using something called LiveConnect. See https://docs.oracle.com/javase/tutorial/deployment/applet/in...

Was it simple/easy? No. But you probably wouldn't want to... The DOM is a crappy way to build an application UI. Someday we might figure that out.


The concept of the DOM is so widely considered to be useful in declaring user interfaces, that multiple languages have copied it.

http://fxexperience.com/wp-content/uploads/2011/08/Introduci...

https://en.wikipedia.org/wiki/Extensible_Application_Markup_...

The HTML DOM sucks, but what other DOM is widely available and already installed on literally every machine on the planet?


If you consider custom elements, how is it different from any other way of building ui?


Probably the 2 minute startup time was a bigger problem for applets.


A question for WebAssembly experts: How easy is it to use WebAssembly as a sandboxed embedded scripting mechanism in my own native (C++) application? I am writing a native real-time system (a distributed 3D engine for VR) in which I send scripts over the wire between machines, and I need to call an update() method of these sent scripts like 90 times a frame. I need complete sandboxing, because my trust model is that what is trusted on machine A may be absolutely untrusted on machine B: not only not letting the scripts call any functions other than what I explicitly expose, but I also need hard limits on their memory usage and execution time. Preferably they should execute in-process, so they can reach the memory I let them and be called from the thread I want. Currently I go with Lua, but to get really good performance I will need to research this topic more deeply later.


Are those boxes in the picture Firefox OS phones?

Is this an old picture?


Good catch. The URL of that image [1] seems to indicate it's from April 2014 (or earlier).

Seems Brendan Eich resigned that same month/year [2].

[1]: http://core0.staticworld.net/images/article/2014/04/brendan-...

[2]: http://recode.net/2014/04/03/mozilla-co-founder-brendan-eich...


What is the upgrade path for Emscripten users? I understand that LLVM will have WebAssembly backend, but how will OpenGL to WebGL translation work, for example?


Emscripten can already compile to both asm.js and WebAssembly, with just flipping a switch between them.

All the JS library support code is unchanged, so Emscripten's OpenGL to WebGL layer is used just like before, and the same for all the other libraries.

The WebAssembly backend in LLVM will eventually be used by Emscripten as another way to emit WebAssembly (right now it translates asm.js to WebAssembly), but the new backend is not ready yet.

See also https://github.com/kripken/emscripten/wiki/WebAssembly


If you think WebAssembly (or asm.js) is a good idea, I would very much like you to do the thought experiment of what design decisions something like WebAssembly would have made 15 or 25 years ago, and what consequences those would have today.

Helpful research keywords: Itanium RISC Alpha WAP Power EPIC Java ARM Pentium4 X.25


I can't think of any software development API that ended up being perfect 15 or 25 years later. Javascript certainly isn't. Java applets, ActiveX controls and Flash very much weren't, but, at the time, they did things you couldn't with the standard web stack.

And we're better off for learning the lessons of the failures, creating improved technologies to replace them (HTML5, JIT Javascript engines, etc), and building on the successes to continuously do more things that previously couldn't be done in the browser.

Will WebAssembly be perfect? Of course not. Will there be unanticipated problems? Of course. I would not at all be surprised if it becomes the next Flash. But it's better to move forward and keep innovating with new web technologies instead of letting the platform stagnate.

We've tried feature-freezing the web for a few years; it was called "Internet Explorer 6" and it sucked.


I'm conflicted. On one hand I support open data/raw documents. But this prevents native-like, real-time applications. It also forces developers to work in JavaScript, which is a terrible language.

On the other hand we have lock-in ecosystems, closed silos, that are detrimental to the commons.

The only consolation I have is that if WebAssembly provides a bytecode instead of machine code then we still have the ability to perform reverse engineering.

In the end, we ALL have to do the hard task of informing every single person why Apple/FB/MS/Google are harmful to us and why we should boycott their programs/services.


I wonder if along with these byte code engines we'll get capability grained control systems too. Somehow I doubt it though.

So in the future, when you visit a website they'll be able to, e.g., open windows, pop up unblockable modals, run WebGL, serve bytecode-loaded spam/ads, etc. The end user's option will be to block everything, or live with it.

I do not like this bold new world we're entering.


This is a common confusion somehow. The programming language has nothing to do with the APIs provided by an environment. JavaScript can do all those things now, as long as you run it in an environment that provides APIs to do those things (node.js, electron etc). The browser is not that environment. When you write a keylogger virus in C, you are relying on the APIs provided by the environment to do it, they don't come from the C language.


I wasn't confused at all. Technically there is a difference between a language, an API and a platform.

That's all true in the case of NodeJS and the like, but not true for web browsers. There, the language, the API, and the platform come as one.

The result is that remote websites can execute code on a user's computer, with no control except for simple technical measures.


WebAssembly shouldn't be for end users to use directly; it should be used for implementations of other languages so they can access the same APIs Javascript can.

Add Lua to the browser, add Perl 6 to the browser, etc. There are plenty of decade old W3C specifications that never made it to the browser properly, like XSLT 2.0, XQuery 1.0, XForms, never mind the latest versions of the specs.


I don't see how it can be feasible to use it for implementations of other languages that don't map directly to WebAssembly the way C does. You would have to ship a runtime for the other language along with your application code. The runtime will either be huge, with a long startup time, or small but too slow to be feasible.


Whichever it is, it can't be worse than implementing other languages in Javascript directly or using tools like GWT, and there are plenty of those already (including work on running Perl 6, btw).

Runtimes for a few selected implementations could very well be packaged or installed along with the browser itself. Failing that, they should be cached.


> Runtimes for a few selected implementations could very well be packaged or installed along with the browser itself. Failing that, they should be cached.

WebAssembly isn't related to this. You could standardize on a new language VM that browsers should ship, like was attempted with Dart. What WebAssembly enables is a more efficient VM with better interop with Web APIs than is currently possible, but the most prohibitive thing will not change (having to ship a multi-megabyte VM along with your application code).


Lots of things will change. If it makes sense to do that, then there will be demand for it.

Here's something already https://news.ycombinator.com/item?id=11269736


Is WebAssembly going to be host-URL resource based (like current .js files are), or will it be used as part of some centralized global assembly cache (GAC) solution where assemblies are only usable from a CDN-type authority?


What exactly will be better? One can compile a lot of languages to JavaScript today. JavaScript is fast enough and size doesn't really matter for most use cases. Is WebAssembly going to be much faster than JavaScript?


Compared to asm.js today, it'll start up faster because the format parses quicker. It will also be smaller in size than a gzipped asm.js equivalent.

Edit: Also, browser vendors will optimize it more consistently than they do with asm.js currently.

Also, unlike Emscripten's current approach, the LLVM backend for WebAssembly is upstream, so you might see more frontend languages like Swift and Rust add support for it.


WebAssembly is basically a portable assembly language (as in, lower level than C) that then gets translated, instruction for instruction, into actual assembly language.

It's several layers 'below' JavaScript. It's basically cross-platform, native code.


Moreover, the resulting native code is executed inside the same VM that executes normal Javascript, so it's nothing like launching some Flash file; it just means we can have native-code speed, when needed, inside the Javascript environment. That already existed with asm.js. This step should reduce the overhead of parsing such code, which matters when there is a lot of it, as in games or big programs translated directly from a lower-level code base. Less overhead means less battery drain and a faster program start, for example.


"Fast enough" still isn't fast enough for mobile devices. I have a high end phone and still get multi second freezes from all the JS parsing and executing on modern websites.


Any application relying heavily on 64 bit integer arithmetic will be vastly better...


Has anyone tried NativeScript? https://www.nativescript.org

Heard about it on a podcast recently, haven't had a chance to try.


Just pointing this out to head off future confusion: this comment is very off-topic.

- WebAssembly is a new low level language for client-side scripting in web browsers. Future web browsers will support WebAssembly in the same way they currently support JavaScript. WebAssembly has a number of advantages over JavaScript, including performance and an AST-like syntax that makes it more suitable as a compilation target.

- NativeScript is a framework for developing "cross-platform" mobile apps. It achieves this through a JavaScript/TypeScript API and common UI components that are implemented natively on both iOS and Android.

I have not tried NativeScript and I am skeptical of projects that aim to "bridge the gap" in mobile development. iOS and Android are ever-evolving, so you must rely on the frameworks that target them to stay up to date. Further, these platforms have very different design goals, and the compromises that frameworks like NativeScript make often come at the expense of user experience.

NativeScript could be great! But please be aware of the shortcomings of eschewing native development.


Spreading the worst part of the web to other platforms?


If we keep this up, the web will be almost as good of an application framework as a '90s era desktop application. Yay, progress!


I wish the browser vendors focused on CSS Grid module support as much as they do on WebAssembly.


This looks AWESOME


Thanks for that update that no one asked for.


Please don't be rude.

We detached this subthread from https://news.ycombinator.com/item?id=11262923 and marked it off-topic.


I'm excited to hear the Rust news, but I don't want to make a "+1" comment or whatever, since that clutters the thread. There might be other people who feel the same.


Do you think someone asked for this comment?


WebAssembly = SWF with diff name. Come on!


The format of WebAssembly could be Java ByteCode.


It's lower level than that.


Yeah, great. Transform everything into opaque binary blobs, as far as the eye can see. Wonderful.

Thanks for nothing.


From http://webassembly.github.io/ : "Open and debuggable: WebAssembly is designed to be pretty-printed in a textual format for debugging, testing, experimenting, optimizing, learning, teaching, and writing programs by hand. The textual format will be used when viewing the source of wasm modules on the web."


Yeah, nice: this, and so many other formats that people just throw up their hands and give up on when confronted with the raw binaries they work with on a daily basis, are simply open and wonderful all the time.

Except not.

Portable Executables. ELF binaries. Zip Files. Open Image Formats.

All of these are theoretically open, and perfectly accessible to all, in their raw form.

And yet broadly inaccessible to like 90% of the world's lay people, since the concept of an interpreter eludes them, and in some cases is explicitly denied to them. The same will happen with this.

This puts things on a shelf, well out of reach to many more people. And a very small group of people love that.

So, while encrypted smartphones and email "go dark" to mass surveillance, everything else "goes dark" for ordinary people.


I don't know. I am not sure yet. What do the HN folks think about this?



