
AFAIK, Dart can run on bare JS without any extra support.



Dart2js does not currently implement the whole language. Integer/float semantics and support for large integers don't work (it seems like they didn't implement them manually, so it just falls back to doubles?), and there are probably other things missing that I don't know about.

I should point out that emscripten and many other JS-targeting compilers (like JSIL) emulate int64 semantics directly so that apps using them do work. It's kind of strange for the Dart team to have not gotten this right yet when Dart is at 1.0.
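For reference, the usual emulation approach is to carry a 64-bit value as two unsigned 32-bit halves and handle the carry by hand; here is a minimal sketch of 64-bit addition (illustrative only, not emscripten's or JSIL's actual code):

    // Sketch: an int64 kept as two unsigned 32-bit halves, carry propagated
    // manually. >>> 0 forces each half back into unsigned 32-bit range.
    function i64_add(a, b) {            // a, b are {hi, lo} pairs
      var lo = (a.lo + b.lo) >>> 0;
      var carry = lo < a.lo ? 1 : 0;    // wrapped around => carry into high half
      var hi = (a.hi + b.hi + carry) >>> 0;
      return { hi: hi, lo: lo };
    }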


> Integer/float semantics and support for large integers don't work (it seems like they didn't implement them manually, so it just falls back to doubles?), and there are probably other things I don't know about missing.

That's going to be really unfortunate if pages start relying on the proper behavior and become Chrome-only as a result...


The comments on it in the dartlang issue tracker indicate that it has already been a problem; one commenter notes that his contribution to a crypto library was rejected because he had only tested it against Dart and it didn't work correctly in Dart2js (due to 64-bit ints).


That's not really the problem. Web developers have learned from the past and don't build web pages for only one browser. On the contrary: developers now sometimes need to support outdated browsers with little market share (IE8, for example) instead of actively pushing users to upgrade.

The discrepancy between Dart and JS numbers is mostly an issue for the developers. When they deal with numbers in specific ranges, they need to run dart2js more frequently (to test if their code works) instead of relying on Dartium (and its F5 development cycle).

After 2 years of Dart development, numbers have rarely been an issue in practice. Developers know when they deal with numbers > 32/53 bits and work around it.


Web developers know too well the trap of testing only on one browser, which, when Dart comes to Chrome, will be Chrome. Testing all input data in a space that exceeds the integral ___domain of double is hard. Murphy says there will be bugs in other browsers when one's app tests "work in Chrome".

For a wannabe-cross-browser, wannabe-standard (Ecma has a sorry history of proprietary standards: C#, OOXML) language pushed by a super-rich big company with what? 60 or more engineers on the project, optics must matter. Why not fix the bignum-not-emulated-in-JS bug? It's easy, GWT has BigInteger emulation code in JS lying around the Googleplex. Just plug that in if the code needs it.

The obvious risk remains what I cited in HN two years ago: Google letting dart2js be sub-standard, using all its strength and leverage to promote Dart in combination with DartVM, letting users of other browsers "blame their browser" when things don't quite work as well as in Chrome.

Given market share among browsers currently, I think this will backfire, but it could inflict collateral damage too -- on developers as well as on users. And it just plain looks bad. People expect better of Google, and they ought to.

/be


You must realize how different the situations are between Java BigIntegers and Dart integers.


Please, inform me, if you are not trolling on the cheap.

https://www.dartlang.org/articles/numeric-computation/

http://docs.oracle.com/javase/7/docs/api/java/math/BigIntege...

Both are arbitrary-precision immutable integer types. If some corner case differs, my point stands. Google has more than enough engineers on Dart to do the work of writing a bignum library in JS, if they can't find one to use that someone already wrote.

/be


I'm definitely not trolling. I truly thought you would understand the basic difference.

Big integer support in Dart is not an issue of writing a bignum library. That's already been done. The difficulty is in supporting the single integer type in a way that's performant for numbers < 2^53.

Java has fixed-width ints, Dart has infinite-precision ints. In Java, when dealing with ints you're always in the int ___domain, and when dealing with BigIntegers you're always in the BigInteger ___domain. In Java, operations on ints can overflow. In Dart, operations on ints don't overflow - internally, if the result doesn't fit in a machine int, a bigint is returned.

In JavaScript you have a few choices when implementing infinite precision ints: 1) Use a single bigint type for all ints, which will be incredibly slow. 2) Use two types for ints, the built-in number and a BigInteger class. This means a lot of expensive type and overflow checks that slow down basic numeric operations. 3) Don't implement infinite precision ints.
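For illustration, option 2 amounts to emitting something like the following for every arithmetic operation (a rough sketch; BigInteger and toBigInteger are stand-in names, not what dart2js actually emits):

    // Sketch of option 2: small ints stay plain JS numbers and get promoted
    // to a BigInteger class (stand-in name) once a result would leave the
    // exactly-representable range. The cost is the checks on every op.
    var MAX_EXACT = Math.pow(2, 53);
    function dartAdd(a, b) {
      if (typeof a === 'number' && typeof b === 'number') {
        var r = a + b;
        if (-MAX_EXACT < r && r < MAX_EXACT) return r;  // fast path: still exact
        return BigInteger.fromNumber(a).add(BigInteger.fromNumber(b)); // hypothetical API
      }
      return toBigInteger(a).add(toBigInteger(b));      // slow path: mixed/big operands
    }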

Compile-to-JS languages can use static analysis to do better. dart2js already uses type inferencing extensively to eliminate type checks where it can, but eliminating the checks for ints would require range propagation which is much trickier, and doesn't work in a lot of cases.


> dart2js already uses type inferencing extensively to eliminate type checks where it can, but eliminating the checks for ints would require range propagation which is much trickier, and doesn't work in a lot of cases.

Couldn't you just have changed Dart to fix the semantics?

This is a big issue (no pun intended) that essentially means that dart2js is broken. I predict that, absent some solution to this issue, if Dart takes off, there will be sites that break in non-Chrome browsers when (for example) 64 bit hashing algorithms are involved due to authors only testing in Chrome and not testing the dart2js version.


"I truly thought" you would not raise a question to which V8/Dart people already know the answer. Range prop is not the only solution.

Use JIT techniques to speculate profitably that most ints fit in JS doubles. This can be done in JS-hosted VMs, just as in machine-code-generating native VMs.

Again, this is extremely old hat to the Dart/V8 folks. See, e.g.

http://mraleph.tumblr.com/post/24351748336/explaining-js-vm-...

In SpiderMonkey, we augment JIT techniques with online type inference that distinguishes int from double. The same type inference could add bignum to the speculative tower. See

http://rfrn.org/~shu/drafts/ti.pdf

This avoids most of those checks that you assume are too expensive (and by assuming, so casually fork Dart semantics).

Here is a JS version of [Hackett&Shu, PLDI 2012], from @pcwalton:

https://github.com/pcwalton/doctorjsmm/

Can the full JIT+TI approach to bignum/double/int perform well enough for dart2js? Better find out, because forking semantics is just the wrong thing.

Another road not taken: we hear "can't wait for JS to get better", but Dart started at least by 2010, possibly earlier.

http://wiki.ecmascript.org/doku.php?id=strawman:bignums

could have been proposed and actually worked into ES6 by now, but the Dart team was in stealth mode (Dash) for too long. Still, better late than never. There's time for ES7. Who from Google will actually do the work in V8 and with Ecma TC39?

/be


This is sounding less and less like just plugging GWT's BigInteger implementation into Dart, which is exactly my point when I said you must know the difference.


Using a GWT BigInteger for all dart2js integers would work just fine and provide correct semantics, but if it doesn't meet their performance bar, that's a failure of design - not an excuse for cutting corners and shipping a dart2js that produces wrong behavior.

This could have been accounted for with a better language design, but it wasn't.


Could have been accounted for in language design, dart2js effort, bignums-for-JS effort, or some combination of the last two. I don't want to assume a conclusion, but the lack of a fix for http://code.google.com/p/dart/issues/detail?id=1533 is a problem, even if there's no immediate fix. It is fixable.

/be


No, you changed the argument from correctness to performance.

Makes me wonder if the problem is not entitled sloth with a dash of anti-JS animus, rather: wholesale, '90s-Microsoft-style-lock-in, submarine-the-other-browsers dirty pool.

Of course, if dart2js produced too-slow-JS, that might limit Dart adoption. Not a dilemma, given a good-faith effort to optimize per my comment above -- and much better than forking the language between DartVM and dart2js, leaving a trap for the unwary Dart programmer to fall into when using dart2js to run on all the non-DartVM browsers.

"Make it correct, then make it fast." Is this no longer done where you work? (Which would be Google, amirite? :-P)

/be


> Web developers have learned from the past and don't just build web-pages for one browser only.

This is not true in practice. Just a few days ago I couldn't use clippercard.com on Firefox for Android because the WebKit prefixes broke the design so badly as to be unusable. :(

For numerous other examples: https://bugzilla.mozilla.org/buglist.cgi?quicksearch=evangel...

> When they deal with numbers in specific ranges, they need to run dart2js more frequently (to test if their code works) instead of relying on Dartium (and its F5 development cycle).

And if they don't, their pages lock their users into Chrome…


Very few applications need "big" (in this case: > 2^53) integers.

So far, I only needed them once. I ported a big int micro benchmark a few months ago. Kinda meta. Not sure if that really counts.


That's simply not true. There are a ton of applications out there that need true 64-bit integers, not 53-bit integers. Games, crypto, graphics, etc. are all fields where you can end up needing 64 bits of integral precision (if not more!)

And this isn't based on 'I've only hit one case where I needed them'; this is based on how many users of compilers like Emscripten and JSIL actively utilize 64-bit arithmetic - sometimes they use it so much that they demand high performance! A half-assed implementation that relies on doubles being 'good enough' is just not going to cut it.


Ok, you've got me really curious what these applications are. 2^53 is about 9.0e15, which is more than the number of millimeters from the sun to Pluto's maximum orbit [1], allowing an accurate model of a solar system. At a resolution of a few hundred kilometers it also spans the diameter of the Milky Way [2], allowing accurate models of a galaxy (given the size of stars, a few hundred km is pretty accurate). I'm guessing that the 15 TB storage requirement of 400 billion stars [3] is going to be a problem long before the integer precision. So I'm not really seeing how games or graphics are going to need more than this precision...

Crypto needs 256-bit integers at minimum, so 64-bit integers are convenient building blocks, but not terribly helpful on their own.

I'm not saying that 53-bit integers are good enough (although I certainly have never needed anything that large); I have no idea. But I don't see a huge number of applications that obviously need them. So I'm curious...

[1] According to http://en.wikipedia.org/wiki/Pluto, Pluto's maximum orbit is 7.31e9 km = 7.31e15 mm

[2] 120,000 ly = 120,000 * 9.5e15 m = 1.14e21 m = 1.14e18 km (http://en.wikipedia.org/wiki/Milky_Way)

[3] Each star needs (x, y, z) and (v_x, v_y, v_z), probably also in 53-bit integers, so that is 6 * 53 bits = 318 bits ≈ 40 bytes. 4e11 stars * 40 bytes = 1.6e13 bytes, approximately 14.6 TB.


> Ok, you've got me really curious what these applications are.

In addition to the applications others mentioned, this is an issue for applications that don't need big integers but run into them anyway. If, for example, a program has a bug where a number is multiplied by 2 in a loop, then if it's a double it will eventually become Infinity and stop changing, but if it's a bigint it will continue to increase. This means that the bug will have different effects in those two cases.
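A tiny sketch of the divergence:

    // With doubles the runaway value saturates at Infinity; with
    // arbitrary-precision ints it keeps growing without bound.
    var x = 1;
    for (var i = 0; i < 2000; i++) x *= 2;
    // doubles: x is Infinity after ~1024 iterations and stops changing
    // bigints: x is a 603-digit number and keeps doubling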

While this is a bug, if the developer tests on only one of those platforms (say, the double one) and the bug happens not to cause noticeable effects, it might go unfixed, but people on the other platform (with bigints) would get something broken (a constantly growing bigint eventually becomes extremely slow, or hits some limit and throws).

The basic issue is that leaving numeric types to some extent undefined means you open yourself up to risks like this. How common it will be in practice is hard to say, of course, but I've seen such things happen in another language, so I think it can't simply be dismissed out of hand.


Ok, I can think straight up of one huge use case. With 64-bit integers, I can blend all four colour channels of a pixel in one hit. For graphics operations this is a huge performance boost. (For those unfamiliar with the concept, you allocate 16 bits of the integer to each of RGBA, which are only 8-bit values. The extra 8 bits per channel let you handle overflow on operations, which means you can do an op on all four channels simultaneously.)
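A sketch of the idea using JS's 32-bit bitwise ops (only two channels per int here; with real 64-bit ints the same code would handle all four RGBA channels at once):

    // Lane packing: two 8-bit channels per integer, each in a 16-bit lane,
    // so sums can't spill into the neighbouring channel.
    function pack2(a, b)  { return (a << 16) | b; }
    function avg2(p, q)   { return ((p + q) >>> 1) & 0x00ff00ff; } // both lanes at once
    function unpack2(v)   { return [(v >>> 16) & 0xff, v & 0xff]; }

    unpack2(avg2(pack2(200, 10), pack2(100, 30)));  // [150, 20]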


Any time you multiply two 32-bit numbers, you need 64 bits out to prevent overflow. Sometimes cleverly splitting and rearranging an operation will avoid the overflow, but it's much cheaper in developer time to have 64 bit ints available for intermediate parts of calculations.
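For instance, in JS:

    // Two 32-bit operands whose exact product needs 64 bits; a double
    // silently rounds it, and nearby integers become indistinguishable.
    var a = 0xFFFFFFFF, b = 0xFFFFFFFF;
    a * b;               // ~1.84e19, not the exact 18446744065119617025
    a * b === a * b + 1; // true: above 2^53, adjacent integers collapse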


Bit sets for 64-element or 8x8 things. One example is chess programs (the board is represented as several 64-bit integers called bitboards), but there are many others. Everything is going to be way slower if you suddenly need to split your bit set in two.
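What "splitting the bit set in two" looks like (illustrative, not any particular engine's code):

    // A 64-bit bitboard kept as two unsigned 32-bit halves.
    function isSet(board, square) {   // board = {hi, lo}, square in 0..63
      return square < 32
        ? (board.lo & (1 << square)) !== 0
        : (board.hi & (1 << (square - 32))) !== 0;
    }
    function setBit(board, square) {
      if (square < 32) board.lo = (board.lo | (1 << square)) >>> 0;
      else             board.hi = (board.hi | (1 << (square - 32))) >>> 0;
    }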


Yet Twitter IDs are bigger than 53 bits (they return string IDs now).


There weren't more than 9 quadrillion tweets though.

    >>> Math.pow(2,53) / Math.pow(10, 9) / 7
    1286742.7506772846
Over a million tweets per person? Certainly not.


If you think 'modelling the entire milky way' is an average use case you don't know very much about graphics or game engines, and you should stop extrapolating based on your horribly inadequate knowledge in order to draw faulty conclusions. Again, I have end-users building games and other types of software that are using 64-bit integers right now and have told me they actually need 64 bits of precision. In one case they were actually using pairs of 64-bit integers for 128 bits of precision.

For your contrived galaxy example, one thing you need to realize is that value ranges aren't the only thing that matters; value precision is just as important, and variable precision is an absolute trainwreck in game development scenarios because it means that your game's physics are nondeterministic (due to precision variance depending on where you are). This is a real problem that affects real games. EVE Online happens to simulate a massive galaxy and massive solar systems, and it uses integers to represent coordinates precisely for this reason.

So crypto needs '256 bit integers' (why 256? why not 512? 2048? 4096?), thus it's okay that Dart only offers 53 bits of integer precision, and somehow 64 bits wouldn't be better? Does your CPU have 256-bit integer registers? Last I checked, most integer registers only go up to 64 or 128 bits, and the registers bigger than that are for SIMD - i.e. they contain multiple integers, not one big one. I'm not even demanding that Dart implement arbitrary-precision integer arithmetic here - just actually get 64-bit integer arithmetic right, since VIRTUALLY EVERY real programming environment's VM offers it natively to begin with [0]. If you still insist that nothing less than arbitrary-precision integer values is sufficient for crypto, then fine - what about 64-bit hashing algorithms (real; used in production)? What about applications that need to do I/O against files/buffers larger than 4GB? What about applications that need to do graphics rendering against image surfaces larger than 4GB? What about audio editing software that needs to operate on long audio recordings larger than 4GB? When 32-bit unix timestamps overflow in 2038, will 64-bit integers still be unimportant?

These are all real use cases. If you ignore them all and still argue that 64 bit integers don't matter, you don't seem to have the ability to imagine software beyond your experience.

[0] Yes, there are a few scripting runtimes without 64-bit integer values built-in. They are all insufficient for anything resembling numeric computation.


> Integer/float semantics and support for large integers don't work (it seems like they didn't implement them manually, so it just falls back to doubles?)

Integers work fine up to 2^53 (9 quadrillion and some).

There is also a "throw_on_javascript_int_overflow" flag, which you can use during development.


Is this in the issue tracker at dartlang.org?



Yes


Please do not spread FUD. From what I see, there are some extreme corner cases in Dart-to-JS behavior and they are well known. The Dart team promises to support modern browsers, period.


JavaScript is Turing-complete, so of course you can compile anything down to JS. But for performance and convenient debugging, native support would be a big help.




