It's not just server-side or client-side these days. With Rhino, I'm using JS to compress JS files for production and to generate documentation for those same files.
A generator expression is an easy way to create a function that performs a small part of the calculation each time it's run (and remembers the state it's in). Since code often consumes one item at a time, this is less memory-intensive than first generating the complete list and storing it in memory.
The generator part in the example is the '(y in [])'; the rest is a list comprehension (a compact way of writing a for loop that the language can implement very efficiently and that returns a list).
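Here's a minimal sketch of the same lazy, one-item-at-a-time idea, written with the newer function* generator syntax rather than the expression form quoted above (the names squares and limit are mine, just for illustration):

function* squares(limit) {
  // Each call to next() computes just one value, so the complete
  // list never has to exist in memory at once.
  for (let i = 0; i < limit; i++) {
    yield i * i;
  }
}

for (const sq of squares(5)) {
  console.log(sq); // 0, 1, 4, 9, 16
}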
list = list.sort(function() { return Math.random() > 0.5 ? 1 : -1; });
I'm wondering just how reliably random the result is. Does JavaScript make any guarantee that the comparison function will be called only once for a given pair of values, and if not, could this affect the result?
JavaScript does not guarantee that, and yes, it does affect the results.
I did a bit of playing with this in Firefox, sorting the array [0,1,2,3,4,5,6,7,8,9] and then testing the mean of the first 5 vs the mean of the second 5, over 10,000 iterations. Then, based on the deviation from 4.5 in each set, I nudged the random cutoff up or down.
It seems to produce the most evenly distributed randomness if you change the 0.5 to roughly 0.36725. When I tracked the values so that the comparator returned the same result for the same pair every time, it converged almost exactly on 0.375.
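Roughly this kind of test, for anyone curious (my own sketch, not the original harness; the name biasTest and the exact loop structure are made up):

function biasTest(cutoff, iterations) {
  var firstSum = 0, secondSum = 0;
  for (var i = 0; i < iterations; i++) {
    var arr = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9];
    // The sort hack from above, with an adjustable cutoff.
    arr.sort(function() { return Math.random() > cutoff ? 1 : -1; });
    for (var j = 0; j < 5; j++) { firstSum += arr[j]; }
    for (var k = 5; k < 10; k++) { secondSum += arr[k]; }
  }
  // For an unbiased shuffle, both means should hover around 4.5.
  return [firstSum / (iterations * 5), secondSum / (iterations * 5)];
}

console.log(biasTest(0.5, 10000));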
On the contrary, I bet that depending on the exact sorting algorithm, it would be possible to make some very specific predictions. To pick a simple example, say you use this technique on a quicksort of the "pivot on the first element" variety. Given the nature of the algorithm, the first source element will end up near the center of the "randomized" list with high probability, which is not at all what you'd expect from an unbiased random permutation.
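You can see the effect in a quick simulation (a sketch of my own, not anyone's real sort implementation; quicksortRandomCmp is a toy first-element-pivot quicksort where each "comparison" is just a coin flip):

function quicksortRandomCmp(arr) {
  if (arr.length <= 1) return arr;
  var pivot = arr[0], left = [], right = [];
  for (var i = 1; i < arr.length; i++) {
    // The "comparison" ignores the values entirely, like the sort hack above.
    if (Math.random() > 0.5) { left.push(arr[i]); } else { right.push(arr[i]); }
  }
  return quicksortRandomCmp(left).concat([pivot], quicksortRandomCmp(right));
}

var positionCounts = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
for (var t = 0; t < 10000; t++) {
  var result = quicksortRandomCmp([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
  positionCounts[result.indexOf(0)]++; // where did the original first element land?
}
console.log(positionCounts); // counts pile up around the middle rather than being uniform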
Granted, no real world sort is that simple, and it's even possible that the "clever technique" will produce an unbiased permutation for some sorting algorithms. But why waste time thinking about this when the Knuth shuffle, a proper way to randomly permute a list, only takes a few lines of code more and is so well-documented?
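For reference, a minimal Knuth (Fisher-Yates) shuffle looks something like this (the function name shuffle is just for illustration; it produces every permutation with equal probability, assuming Math.random() is uniform):

function shuffle(arr) {
  // Walk from the end; swap each element with a randomly chosen
  // element at or before it.
  for (var i = arr.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1)); // pick an index in 0..i
    var tmp = arr[i];
    arr[i] = arr[j];
    arr[j] = tmp;
  }
  return arr;
}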
Or it could derail the sort algorithm in some way (an infinite loop?). The point is that the sort makes some assumptions about its comparison function, and it's not clear that those assumptions hold here.