I would rather say that it's OS X returning to its roots: NeXTStep/OpenStep used to have exactly this capability, back when all the rendering was done by the built-in Display PostScript server.
This feature disappeared from OS X, so it's more like a welcome back, now with a user-friendly UI. Back then you usually had to use some terminal command to make it happen.
Amazing that this is the only comment so far that points out this wasn't even Not-Invented-Here Syndrome, because their own OS already had this feature (Display PostScript remoting) twenty years ago. At least until some architect marking their territory was allowed to wilfully break it and replace it with ... nothing.
The "You need a PHD to understand Haskell" slur is so bloody ridiculous. I certainly didn't have a problem with it, and I haven't even finished my degree yet! On a more general note, I find the idea that a programming language can be "too smart" troubling. Is the solution to have dumber programming languages then?
There is a very big difference between "too smart" and "too complicated". I think Haskell is dangerously close to the former category, while C++ is firmly in the latter. I appreciate "smart" for many tasks, but sometimes the operational semantics become important (e.g. when the code is performance-sensitive), and then you have to understand the stack at a pretty deep level, and it can be awfully confusing why a tiny source-level change makes a big performance or space difference.
Sometimes I pretty much know what assembly I want and it's much easier to make a C compiler produce it than to get GHC to produce it.
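A concrete instance of the space point, for anyone who hasn't been bitten by it: the classic foldl vs. foldl' pitfall, where the source-level difference is a single character (a sketch; compile without -O to see the leak clearly):

    import Data.List (foldl')

    -- without optimisation, foldl builds a chain of unevaluated
    -- thunks before forcing any of them, which can exhaust the heap
    slowSum :: [Int] -> Int
    slowSum = foldl (+) 0

    -- foldl' forces the accumulator at each step and runs in
    -- constant space; at the source level the difference is one
    -- apostrophe
    fastSum :: [Int] -> Int
    fastSum = foldl' (+) 0

    main :: IO ()
    main = print (fastSum [1 .. 10000000])

One character, and a program that can die on a ten-million-element list becomes one that runs in constant space.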
I agree that it's hard to get GHC to produce the assembly you want, but I think it's unfair to "blame" GHC for this. In my eyes Haskell is mostly a high-level language, and if you're worrying about the assembly produced, we've left high-level-language territory; it might be easier to just write C and use the FFI.
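And the FFI route really is painless. A minimal sketch (binding the C library's sin, just as a stand-in for whatever hot loop you'd actually move to C):

    {-# LANGUAGE ForeignFunctionInterface #-}

    import Foreign.C.Types (CDouble (..))

    -- bind C's sin directly; the performance-critical code can live
    -- in C while the rest of the program stays in Haskell
    foreign import ccall unsafe "math.h sin"
      c_sin :: CDouble -> CDouble

    main :: IO ()
    main = print (c_sin 1.0)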
Tools that fit the task make it simpler. Otherwise, much of your attention is diverted to the tool itself, and not available for the actual task. This can be fun provided the task isn't pressing.
One example is multiplication in the complex Roman numeral system vs. our simple Arabic numeral system (there's a sketch below).
Another example is PHP (tool) for webapps (task). Worked for YouTube and CDBaby. -- Honestly, I think the concept of embedding commands in HTML is brilliantly simple, because the program is largely isomorphic to the result, making it intuitive to reason about. [disclaimer not a webapp developer]
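To make the numeral example concrete (the sketch promised above; romanToInt here is simplified for illustration, not a validating parser): in positional notation the multiplication itself is one built-in step, while the Roman form has to be decoded before any arithmetic can happen.

    -- decode a Roman numeral by summing symbol values, subtracting
    -- when a smaller value precedes a larger one (e.g. IX = 9)
    romanToInt :: String -> Int
    romanToInt = go . map value
      where
        value c = case c of
          'I' -> 1;   'V' -> 5;   'X' -> 10;   'L' -> 50
          'C' -> 100; 'D' -> 500; 'M' -> 1000; _ -> 0
        go (a:b:rest) | a < b = (b - a) + go rest
        go (a:rest)           = a + go rest
        go []                 = 0

    main :: IO ()
    main = print (romanToInt "XII" * romanToInt "IX")  -- 12 * 9 = 108

All the work goes into the decoding step; once you're in positional notation, the multiplication is free.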
"Another example is PHP (tool) for webapps (task). Worked for YouTube and CDBaby. -- Honestly, I think the concept of embedding commands in HTML is brilliantly simple, because the program is largely isomorphic to the result, making it intuitive to reason about. [disclaimer not a webapp developer]"
Simple until you have to come in and maintain it. There are many levels of simple. What is often simple for the original writer is horrible for the person who picks it up later on and has to extend and maintain it.
No, this is what a GPL developer considers a "loss": specifically, yet another demonstration of why developing for someone else's walled garden is a bad idea.
I'm not sure that Haskell could, given time, subsume dynamically typed programming. But even if it is possible, why would it be desirable? Most Haskell people don't think very highly of dynamic typing, and the extensions to the language reflect that. They generally add more typing, e.g. GADTs, multi-parameter type classes, extensible kinds, etc. And dependent typing is certainly not a move towards dynamic typing! Casting it as such shows a poor understanding of both concepts.
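To make that concrete, here's the standard toy example of a GADT (a sketch, needing nothing beyond the GADTs extension): the type index rejects ill-typed expressions at compile time, which is a move toward more static checking, not less.

    {-# LANGUAGE GADTs #-}

    -- the index on Expr carries the result type, so something like
    -- Add (BoolLit True) (IntLit 1) is rejected by the type checker
    data Expr a where
      IntLit  :: Int  -> Expr Int
      BoolLit :: Bool -> Expr Bool
      Add     :: Expr Int -> Expr Int -> Expr Int
      If      :: Expr Bool -> Expr a -> Expr a -> Expr a

    -- a total, tag-free evaluator falls out of the extra typing
    eval :: Expr a -> a
    eval (IntLit n)  = n
    eval (BoolLit b) = b
    eval (Add x y)   = eval x + eval y
    eval (If c t e)  = if eval c then eval t else eval e

    main :: IO ()
    main = print (eval (If (BoolLit True) (IntLit 1) (Add (IntLit 2) (IntLit 3))))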
The prevalence of ad-hominem attacks against Icaza is unfortunate. You can agree or not with his more-pragmatic-than-some stance towards open source software, Microsoft, etc., but the fact remains that the guy is a first-rate builder and hacker.
There has been a seemingly endless stream of incidents of him shilling for MS, going on for several years now. The most common themes are trying to push Mono as a viable Linux technology, trying to push Mono as a dependency of common Linux systems, and shilling for OOXML like you see here.
Another (slightly less common) theme: ad hominem attacks on [F]OSS figures, organizations, and supporters such as the FSF and Groklaw, like you see here.
He's been at this for years; it's no stretch of the imagination to call it a "long history".
The confusion about the use case of NoSQL probably stems from the term "NoSQL" being so vaguely defined. All you can say for sure about one is that it's not relational. But other than that there's not much in common between (for example) key-value stores, graph databases and object databases.
Result: The author has to qualify all statements with "only applies to some NoSQL databases".
I would love a relational database with some less horrible language than SQL to manage it. That'd be NoSQL, no lie, but considerably different than a KV store.
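Purely as a sketch of what that could look like (the names select, project, and joinOn are made up here, and rows-as-assoc-lists is just the simplest possible model): relational operations as ordinary functions rather than a string-based query language.

    type Row      = [(String, String)]  -- (column name, value)
    type Relation = [Row]

    select :: (Row -> Bool) -> Relation -> Relation
    select = filter

    project :: [String] -> Relation -> Relation
    project cols = map (filter (\(c, _) -> c `elem` cols))

    -- natural join on a single shared column
    joinOn :: String -> Relation -> Relation -> Relation
    joinOn col r s =
      [ x ++ y | x <- r, y <- s, lookup col x == lookup col y ]

    users, orders :: Relation
    users  = [ [("id", "1"), ("name", "ada")]
             , [("id", "2"), ("name", "bob")] ]
    orders = [ [("id", "1"), ("item", "book")] ]

    main :: IO ()
    main = print (project ["name", "item"] (joinOn "id" users orders))

Still fully relational, still declarative, and no SQL string-munging in sight.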
Remember, you can also use any language that compiles to Java bytecode on a JVM. A short list includes Scala, Clojure, Ruby (JRuby), Python (Jython), Groovy, and even JavaScript (via Rhino). So the JVM and Java are very much separate things.
Unfortunately, the Dalvik VM and JVM are quite different too. Dalvik may act like a JVM and talk like a JVM but it isn't a JVM (not according to Google, at least; Oracle thinks otherwise).
There is also the matter that the whole "correctness vs. productivity" debate is a false dichotomy. There is nothing mutually contradictory about correctness and productivity, so there is no reason why you can't have both.
Correctness stems from expressing not only the algorithm but also the results you expect from running it (as types and/or test cases) so they can be verified. But if you somehow convince yourself that you haven't made any mistakes, it's always less work to just write the algorithm (and omit the expected results).
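A minimal sketch of what "expressing the expected results" looks like in practice (assuming the QuickCheck library): the type pins down the shape of the function, and the property writes down what a correct answer must satisfy.

    import Test.QuickCheck

    -- the algorithm itself
    rev :: [Int] -> [Int]
    rev = foldl (flip (:)) []

    -- the expected result, stated separately so it can be checked
    prop_revInvolutive :: [Int] -> Bool
    prop_revInvolutive xs = rev (rev xs) == xs

    main :: IO ()
    main = quickCheck prop_revInvolutive

Writing prop_revInvolutive is exactly the "extra" work described above, and skipping it is cheaper right up until rev turns out to be wrong.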