I feel like the writer of the above article answers part of your question when they write:
"My experience is that when you tackle big problems, that go beyond simple execution but require actual strong engineers, hiring will be a problem, there's just no way around it. Choosing people that fit your development culture and see themselves fit to tackle big problems is a long process, integrating them is also time consuming. In that picture, the chosen language isn't a huge deciding factor."
As to the issue of change, you go with something else when the current stack is bad and/or unable to do something you need. Colin Steele wrote about this when he decided to switch Hotelicopter to Clojure. He first wrote about how the existing PHP stack needed to be re-written:
"For example, at that point, the site ran out of one ginormous subdirectory with hundreds of PHP files scattered like chunks of gorgonzola on your salad, sticking to one another with tenacious glee. There was a “lib” directory, which you think would hold much of the supporting library code, but a good fraction of that actually lived in “site”, and some in “server”. The previous programming staff had felt it good and worthwhile to roll their own half-assed MVC framework, including a barely-baked library for page caching (which broke and took the site down at regular intervals), and components for database abstraction that only worked with - wait for it - MySQL. Every single goddamn file was littered with SQL, like bacon bits on this demonic salad. There was a “log” directory, but the search logs weren’t kept there, they were in “server”. Etc., etc. It made you want to eat a gun."
There are certainly situations where the codebase calls for a complete re-write, and, as the writer says in the quote above, finding good engineers is hard, and the choice of language is a side issue compared to the difficulty of finding good engineers.
Colin could've re-written it using Java. He didn't do that because he doesn't like Java ;)
Language may not be a huge deciding factor, until suddenly it is.
My question is focused more on the existing team, which probably has some pretty good skills in, let's say, Java and OOP, and is now turning 180 degrees toward something purely functional like Clojure, as opposed to Scala (not that I'm a big fan of Scala or anything; it's just an example).
If it were that easy to learn new languages, we wouldn't have a hiring problem: most job openings require the candidate to know a specific language, however much we'd like to believe otherwise.
What do you think people will think when they read this?
"He didn't do that because he doesn't like Java "
Your words imply that somehow the preferences of the top engineers or CTO should not matter. But why should those preferences not matter? If we are talking about people who are talented, then we can start with the assumption that those preferences are probably the distilled wisdom of many years of experience with particular styles of development. We don't need to re-enact the millions of debates that have happened regarding whether Java is good or bad (One side shouting "It is verbose!" the other side shouting: "Type enforcement is good", etc, etc, until the last syllable of recorded time). It is enough to know that good engineers have preferences and those preferences probably have some benefits.
You also wrote:
"My question is focused more toward the existing team..."
What if the current team is terrible? Again, referring to Colin Steele: he fired (or allowed to leave) the entire team that existed when he first arrived.
What if we turn your question around and ask it from the other side: is change ever justified? I'm guessing you would say "Yes".
By your own statement that the preferences of the top engineers should matter, that is exactly when "language does matter". It matters because it's a personal choice, regardless of whether it is the right choice or not.
"What if the current team is terrible?" is not the focus of my question, so Colin's example is arguably out of context for this discussion, and I'm going to leave it at that; there's no point discussing it further, since anyone could switch the underlying technology from Java to Ruby, fire all the Java developers, and hire Ruby developers.
Colin's is a very specific example, written by the man himself, and the justification is sound. Having said that, I could pick some obscure language tomorrow (not that Clojure is obscure these days), force my own preferences through, and, as long as I reached the goal, claim in an article that my personal preferences were the best thing, even if that weren't the case.
But nobody knows the truth... right?
I'm not trying to be negative here, but at the same time, to be honest, no one is willing to admit their mistakes. Especially when there is plenty at stake.
Learning a new language _is_ easy for a good developer. I know of a lot of companies doing just that: finding good developers and then training them on the fly in a new language. And it works well for them.
The reason some companies have a "hiring problem" is that they require X years of experience with framework Y in language Z, even though language Z takes a month to master and framework Y another two weeks.
Why do they do it? Because HR does the hiring, and because "hiring is a problem", HR gets a bigger budget for hiring instead of the development department getting a bigger budget for training.
It's true that you can't learn programming in a few months, or even a year, but if you are already a solid programmer, learning yet another language or framework is easy. Heck, some of us do it just for fun.
One of the advantages of JVM based languages is that you always have a safety net - worst case you are going to fall back and do parts in Java or some other JVM based language which, while perhaps painful, is not going to destroy your entire company the way locking yourself into a poor technology choice might in other circumstances. Of course, the JVM has its own limitations, but it's been around long enough that at least they are known quantities.
IMHO there is no better and faster way to learn a new programming language in-depth than to use it seriously in a project. Of course you have to be an experienced developer to do that, and you have to have the time and freedom to afford it :-)
This article motivated me to consider Clojure for a new business project. Clojure can use the JVM infrastructure, that's a big advantage. Java has become too boring for me (C# also).
You know that saying that "the moment you drive a new car off the lot, its value depreciates by half"? Well, the moment you ship a product, the cost of upgrading the underlying technology stack doubles.
In my case at the previous company it was rather simple. The language known to the team didn't fit the bill. So after a trial period we decided on one.
At several companies I've worked at (and my current company) there have been severe limitations on future growth with the existing technology choices. It's more important to be able to remain agile and have a high ceiling for growth than to use the most familiar thing always.
> Sometimes I do wonder why people decided to use a new language for a critical/important piece of their product while learning on the go as well.
Because flying by the seat of your pants, making shit up as you go and putting something into production without any real understanding of what the fuck is about to happen is one hell of an adrenaline rush.
In theory. In practice, doing that (with Clojure specifically) would be a bit silly. A lot would need to change with the language to make it suitable for OS development, at which point Clojure would lose a lot of the attributes that make it Clojure.
A lot would need to change with the language to make it suitable for OS development
I keep hearing this, or statements along these lines. But I never see an explanation for this. Performance? Managing memory from other processes? Bit twiddling required to create device drivers? Something else? Are these insurmountable?
In C, I can do something like this (pardon my rusty x86/intel syntax):
int variable = 0xdeadbeef;
__asm {
    mov eax, variable   ; load from the C variable
    int 3               ; trip an interrupt... maybe read from an I/O port or something
    mov variable, eax   ; get the result, put it back into the C variable
}
return variable;
This allows me to express certain things to the computer that are outside the range of expressibility of the language model.
What I would like to be able to do in Lisp is to execute the semantic equivalent of the above code fragment. I've considered hacking it into SBCL, but I have had higher priorities so far.
If you were trying to build the world's next major operating system, would you go with Clojure as one of the core languages, specifically the one that developer-created apps are written in? And could Clojure be fast enough to power the actual OS itself too?
With all the new excitement around Clojure, I'm just trying to understand the theoretical and practical limits of the language.
While it's an interesting idea, you would need one or even two of the big three players (MS, Apple, Google) to push this onto the market, otherwise you'd have a complete driver desert.
I'd go one level above: make a HIL bridge between standard x86 drivers and your lisp machine code.
The author claims this first article is satire, but in case you were tempted to take it literally, allow me to rebut:
Why not simply add support for concurrent programming as a library instead?
Clojure's concurrency primitives are libraries. You don't have to use them; it's perfectly OK to use, say, kilim or java.util.concurrent, and many do.
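For instance, here's a minimal sketch of driving java.util.concurrent straight from Clojure, ignoring atoms/refs/agents entirely (the counter is invented for illustration):

```clojure
;; Clojure's own concurrency tools are just libraries; the JVM's
;; toolbox is equally available via interop.
(import '(java.util.concurrent Executors TimeUnit)
        '(java.util.concurrent.atomic AtomicLong))

(def counter (AtomicLong. 0))

(let [pool (Executors/newFixedThreadPool 4)]
  (dotimes [_ 100]
    ;; Clojure fns implement both Runnable and Callable;
    ;; the ^Runnable hint disambiguates the overloaded .submit.
    (.submit pool ^Runnable (fn [] (.incrementAndGet counter))))
  (.shutdown pool)
  (.awaitTermination pool 5 TimeUnit/SECONDS))

(.get counter) ; => 100
```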
Why not provide both immutable and mutable versions of the same data types?
The mutable variants of these structures are already there in java.util, and the stdlib is designed to interact well with java interfaces like List, Map, etc. [1]
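A quick illustration of both points, using nothing beyond the standard library:

```clojure
;; Persistent collections present themselves through the java.util
;; interfaces, so Java APIs accept them directly.
(def v [3 1 2])

(instance? java.util.List v)   ; => true
(java.util.Collections/max v)  ; => 3

;; When a Java API really needs mutation, java.util is right there:
(def mutable (java.util.ArrayList. v))
(java.util.Collections/sort mutable)
(vec mutable)                  ; => [1 2 3]
```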
Also when writing real-world software, what about the effort required to align the multitude of Java libraries that assume an imperative environment with Clojure?
I've used nontrivial Java libraries in Clojure: it looks about the same as the Java code, only shorter and with fewer parentheses. Yes, fewer.
Can these libraries be used easily, safely and with the same performance?
Yes. Obviously the Clojure parts won't run quite as fast as the Java parts, but interop is complete, concise, and well-designed. InvokeVirtual is InvokeVirtual either way.
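To make the "shorter, fewer parentheses" claim concrete, here's a hypothetical StringBuilder chain side by side (the threading-macro form is idiomatic Clojure, not something interop requires):

```clojure
;; The Java chain
;;   new StringBuilder("hello").reverse().toString().toUpperCase();
;; translates almost one-for-one:
(.toUpperCase (.toString (.reverse (StringBuilder. "hello"))))

;; ...and the threading macro flattens the nesting entirely:
(-> (StringBuilder. "hello") .reverse .toString .toUpperCase) ; => "OLLEH"
```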
Meanwhile Clojure makes obvious improvements over CL in reader forms for readability: the vector, set, map, and regex literals make it much easier to write and understand. It may not be as pure as some other Lisps, but it's a competent addition to the family.
Meanwhile Clojure makes obvious improvements over CL
I would beg to differ. The features you note are pretty subjective and not obvious improvements at all. I've done a few small non-trivial prototypes in Clojure. I cannot stand the syntax for literals. Or the mangled hell that is the literal for lambda. At least CL lets you define your own reader macros -- something that Clojure cannot do (yet). These features you mention are not improvements over CL IMO.
Huh, I find the syntax for literals a breath of fresh air--but I also grew up in languages with rich data structure literals and find the {}, (), [], #"", #{} distinctions make it much easier for me to understand, at a glance, the shape of a structure. I also prefer the vector notation for arguments in (fn [arg1 arg2 & friends])--the brackets make it easier to recognize the argument boundaries, especially in one-liners.
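For readers who haven't seen them, the literals in question look like this (a purely illustrative sketch):

```clojure
'(1 2 3)     ; list
[1 2 3]      ; vector
{:a 1 :b 2}  ; map
#{1 2 3}     ; set
#"\d+"       ; regular expression

;; The bracketed argument vector stands out even in a one-liner:
((fn [x y & friends] [x y (count friends)]) 1 2 3 4 5) ; => [1 2 3]
```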
In general, Clojure code seems to have less nesting, which makes it easier for me to read and parse. You're honestly the first person I've heard express a dislike for the reader forms, so I thought it was universally liked.
I think you're right: a lack of configurable reader macros is a problem. I'd also point to the lack of tail recursion (and consequent mucking about with (recur) and (trampoline) as a notable flaw in Clojure. Its error messages are pathologically malicious. On the other hand, I think Clojure's packaging environment, thanks to lein and clojars, is quite good. There also seems to be more consistency in Clojure coding... style? preference? than in CL, which I attribute partly to its young age and small user base, but also, perhaps, to a more opinionated set of attitudes around mutability and types.
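To make the recur/trampoline mucking-about concrete, a small sketch (countdown, my-even?, and my-odd? are made-up names):

```clojure
;; A plain self-call would blow the stack on large n;
;; recur compiles the tail call into a loop.
(defn countdown [n]
  (if (zero? n) :done (recur (dec n))))
(countdown 1000000) ; => :done

;; recur can't express mutual recursion, hence trampoline:
;; each function returns a thunk instead of calling directly.
(declare my-odd?)
(defn my-even? [n] (if (zero? n) true  #(my-odd? (dec n))))
(defn my-odd?  [n] (if (zero? n) false #(my-even? (dec n))))
(trampoline my-even? 100000) ; => true
```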
Regarding the #() form, I almost never reach for it, partly because it never seems to work as expected. (fn) and (partial) feel more natural to me, so I've never taken the time to understand how #() works.
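For anyone unfamiliar with the three forms, a rough comparison (the add-10 example is invented for illustration):

```clojure
;; Three roughly equivalent ways to write "add 10":
(map #(+ 10 %)          [1 2 3]) ; => (11 12 13)
(map (fn [x] (+ 10 x))  [1 2 3]) ; => (11 12 13)
(map (partial + 10)     [1 2 3]) ; => (11 12 13)

;; #() can't nest and always *calls* its body as a function,
;; so #(vector %) works but #([%]) does not -- a common stumbling block.
```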
I also prefer Lisp-1s in general, although I understand that's a more contentious differentiation.
Either way, my CL experience is minimal, so it was wrong of me to claim these as obvious improvements. I'll defer to your expertise here: it sounds like you've used CL enough to understand it better.
The Loper OS guy hates all languages that aren't the Symbolics flavour of Common Lisp and all systems that aren't Symbolics' Genera. His criticism of Clojure is based only on those two facts, not on actual real problems or gripes with the language. He doesn't back up any claims and seems to hate Clojure simply because it's not Symbolics Lisp.
If I hate all food but pancakes, then my opinion on steak is hardly relevant.
I dislike Clojure because it is a cheap knock-off of Common Lisp, with no upsides that I'm aware of - besides trendiness. The article mentions one serious flaw - Clojure barfs Java stack traces, instead of serious debug information (the way Common Lisp does - with restarts, etc.) Another flaw is the lack of reader macros. But what is the point of listing said flaws? I could go on and on, and no one will care a whit. Why? Because Clojure is a product of immature minds who piss on the past work of serious people (the Common Lisp community) simply for the sake of faux-novelty. In this, it resembles Newlisp and other backwards steps disguised as progress. It is, in fact, best understood as yet another product of the idiotic language-of-the-day mentality which gave us Dylan, Python, Ruby, and the many other shoddy "infix Lisps."
> hates all languages that aren't the Symbolics flavour of common lisp
This is patently untrue, as anyone who actually bothers to read my articles knows well. Symbolics (or rather, the MIT Lisp machine architecture their products were based on) was simply an example of a Lisp system done well. Clojure, a thin veneer on the Java cesspool, is not. To run with your analogy, I am a lover of steak, who is upset by tofu peddlers' success in passing off their cheap trash as real meat.
"Astonishing" is exactly what I was thinking of as well. There's a lot of vitriol in this post. You say that you could go on and on and no one would care. But that's not true. If you list good, logical arguments against something, people will listen. Instead, there's an almost instinctive urge to rebel against your points solely because of your post's tone, which is antagonistic to what I assume is your goal.
In my opinion, Clojure doesn't try to one-up Lisp. Rich Hickey recognizes the great value of Lisp and wants to share these features with a wider audience; I can't say I would have ever taken an interest in Lisp if it weren't for Clojure, in fact. In his rationale, he describes looking for:
* A Lisp
* for Functional Programming
* symbiotic with an established Platform
* designed for Concurrency
And he says he couldn't find one. That's not a subjective statement: there wasn't a language with all four of those features before Clojure. And I'll bet there are a lot of people who want those four things. His language does them very well. That doesn't mean it should be used for everything, but he never implied that anyway; quite to the contrary, he explicitly points out the problem ___domain that Clojure addresses.
I think you misunderstand the designers of these new languages. Most of them aren't trying to make something "trendy" (that should be obvious by how many esoteric hobby languages there are). They are trying to solve problems. And if some people other than the author of the language can get some use along the way, well then that's just great.
Clojure isn't the product of "immature minds" either. It's the product of one mind, and one that's spent many years thinking about good language design and many years working with software in industry.
Symbolics was the future. CS has in many ways regressed since then, mostly because of the stupendous success of the worse-is-better unix/c mindset, coupled with the lack of DARPA funding, the AI winter, and the rise of cheap Wintel CPUs. At Goldman Sachs, I was incredibly fortunate to work alongside one of the original Symbolics guys (listed here: http://www.asl.dsl.pipex.com/symbolics/legacy.html). He complained long and hard about how the c/c++/java family of PLs were "absolute shit", how OOP was doomed from the get-go, and why functional would be back someday. This was back in the 1999-2000 timeframe, when Sun was in the driver's seat and everywhere you looked it was OOP or bust. Now, ten years hence, functional is all the rage, and you've got Clojure, functional Java, Scala, several functional JS libs... all that's old is new once again! Who knows, with all this VC $$$ searching for a home, we might actually see the resurgence of a LISP Machine. Symbolics 2.0 FTW!!!
I'm about to start a small hobby web project. What Lisp platform would you recommend? I'm somewhat scared of stories like Reddit's - Lisp being a great language, but implementations having problems with Unicode, threads, networking, scaling, price, etc.
I remember your comments from some time ago, where you described Mathematica as a good Lisp-like language with good development tools. I am also curious about Mathematica. Have you changed your mind? Why?
You'd think I'd be one of the first to like Clojure for bringing Lisp to the "masses". What does irk me about it is that the language outputs some really bizarre stack traces, and that 4clojure.com (which people recommended) has a learning curve that jumps after a couple of exercises like it saw Cthulhu. Speaking of learning Clojure, anyone got a good book/tutorial/demo?
If you have no prior lisp experience then I would recommend Clojure Programming as the starting point - http://www.clojurebook.com/
I found it to be the best beginner book on Clojure and one of the best written programming language books I've come across in a while.
Programming Clojure would be a good followup. I found that Clojure Programming gives the "how" of the language and Programming Clojure gives the "why", if that makes sense.
Clojure in Action is good, but it is definitely an "In Action" book. Depends on your learning style. I've only skimmed the Joy Of Clojure. It looks really good too but I don't think it's targeted at beginners. Could be wrong.
I think the GP is talking about something like C, which has a static type system at compile time. But at runtime, it's all ones and zeroes. In Clojure a symbol is a symbol and the (strong) types of the objects they point to are resolved at runtime. So Clojure's type system is, in a precise sense, the opposite of what the GP was talking about!
It depends, really. Clojure's compiler actually does perform static, compile-time type analysis in certain cases, notably on primitives. For a more traditional form of static type analysis, I imagine you'd use the macro system to provide something like compile-time checked type declarations.
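To sketch what such a macro might look like (checked-add is a hypothetical name, and this toy version only inspects literal arguments):

```clojure
;; Hypothetical: reject non-numeric literal arguments at macroexpansion
;; (i.e. compile) time, before any code runs.
(defmacro checked-add [a b]
  (doseq [arg [a b]]
    (when-not (number? arg)
      (throw (ex-info "checked-add: argument is not a numeric literal"
                      {:arg arg}))))
  `(+ ~a ~b))

(checked-add 1 2)      ; => 3
;; (checked-add 1 "x") ;; would fail while compiling, not at runtime
```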
That said, I'm personally of the opinion that compile-time static type verification isn't quite as powerful as is oft conjectured, in terms of safety and performance. Sufficiently expressive type systems pay some of the same costs as run-time verified types in terms of indirection, and in practice the difference rarely seems significant.
To be more explicit: I assert that C (and, for that matter, Java) do not have strong type systems, in the sense of power, not rigidity. There are huge classes of bugs in kernel code that could be caught by static type verification and can't, because the C type system simply isn't powerful enough to express the invariants required. Yet, somehow, people write successful operating systems in C, and they work not because of the type system but because of careful thought and thorough testing.
Even more to the point: We have written operating systems in dynamically-typed Lisp before, even down to the microcode level. The Symbolics Lisp machines should illustrate that it is perfectly possible to do what the parent asserts is infeasible.
All this said, I certainly wouldn't write an OS in Clojure; though I love the language, the JVM just wouldn't be an appropriate substrate. The instant someone announces an LLVM Clojure compiler, though... ;-)
You make excellent and carefully argued points. It is important to me that I say that even though I do not agree with your opinion on static type systems, I enjoyed reading all of your argument.
Although, one thing is not clear to me. It reads to me like you are saying that indirection incurs only some of the cost of dynamic typing, so why do you follow with "and in practice the difference rarely seems significant"? I can't connect the latter part; clearly I have missed something.
I am of the opinion that you can write complex systems in C because, although as an abstraction of a complex system it is itself complex, there is no incidental complexity as is found in C++. Its small size makes it simple for those who have understood the system it abstracts. I still believe that a more expressive type system would reduce some amount of effort and uncertainty when testing and using C.
Do you consider the Clean, ATS, Haskell and OCaml type systems to be expressive? These languages are expressive, with powerful type systems, and tend to be very fast, often without having to leave idiomatic code. In my experience I have been saved many times by helpful type systems, and I follow them like gradients, which lets me hold more in my head without getting lost in the minute details of correct piping, invocation and application. I find I am much slower with dynamic typing, and pressed with a feeling of paranoia.
I can't explain why preferences are often expressed so viscerally, my only guess is that there must be something biological that influences what type of type system you prefer.
I actually quite like the Haskell and OCaml type systems! I had them in mind while writing my post above. Personally, I dislike type systems which are rigid but weak. Strong compiler verification is awesome in a language like Haskell which offers union types, bounded types, contra- and covariant parameterized types (not commonly referred to as such, but implict in functors), etc. I agree with you that it makes it easier to handle complex codebases, catches huge classes of bugs readily, and allows you to solve problems in less code. [1] Good type systems are programmer amplifiers.
I also love the concision and flexibility that comes with Clojure's idiomatic eschewing of type information: it helps me focus on functional composition instead of the particular data. Both have their advantages. Java just makes me angry. ;-)
Regarding the difference in cost: I meant to say that many "strongly typed" compilers are not yet smart enough to elide many of the run-time indirections and safety checks that dynamic languages must use. Really good type systems, like Haskell's, are different: precomputing finite-___domain functions into lookup tables, finding fixed points, etc.
This is not a ___domain I understand very well, but the comments I've read from language folks (for instance, the Dart VM designers) suggest that type checks in particular have a relatively small impact on performance. Polymorphism still leads to things like vtables, etc, and as I understand it modern x86 is pretty good at handling these cases.
Again, I know very little about actually writing compilers/vms, so if you have further comments I'd be interested to hear 'em!
[1] That said, Haskell's type system drives me nuts in its proliferation of types which are almost but not quite compatible; nothing worse than trying to use two libraries which will only interact with their own particular variant of a String or ByteArray.
Thanks, that clarified things. Unfortunately I too know little about actually writing compilers so my depth is limited. But I will offer some thoughts anyways.
The slowdown in dynamic languages comes from more than type checks, though. The slowdown is because dynamic languages are like some crazy awesome dream world where anything can change at any given moment and things are not necessarily what they seem. So VMs must be extra vigilant, checking many things on assignment: for exceptions, whether the object is still the same class, whether it still has the same methods, etc. The term "dynamic" is almost an understatement! Add to that boxing and the possibility of heterogeneous collections, and slowdowns are the price to pay for all that flexibility. This makes things very hard to predict, both at the compiler level and at the CPU level in terms of branching, which alone is very costly. Dynamic language programs are not easily compressed. There are ways around that, and languages like Clojure offer a sort of compromise: by being able to lock certain parts down into a solid reality, structures (in the sense of regularities) coalesce and can be used to speed up parts of your code. You can choose where to trade flexibility for speed.
You're right, (subtype) polymorphism does incur a cost, but modern CPUs can handle it well. With parametric polymorphism and value types, though, you get no runtime hits, and some free theorems to boot.
That said, how you think has to have some influence. I have never found static types to be constraining; I actually feel like they allow me to more easily plan future consequences. I suppose I trade implementation freedom for the ability to build consequence trees of greater depth and quickly eliminate unproductive branches.
That's a really good observation. I think in cases like Clojure, the dynamic problems aren't quite as bad because of the emphasis on immutability: the compiler can readily generate single-assignment forms from let and def bindings. With type hints, method calls on variables should then be computable at compile-time. (Where type hints are absent, obviously, you pay the runtime reflection cost.)
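A small sketch of the hint/no-hint difference (shout-slow and shout-fast are invented names):

```clojure
;; Ask the compiler to tell us when it falls back to reflection.
(set! *warn-on-reflection* true)

;; No hint: .toUpperCase can't be resolved at compile time,
;; so this emits a reflection warning and reflects at runtime.
(defn shout-slow [s] (.toUpperCase s))

;; With a hint the compiler emits a direct call on java.lang.String.
(defn shout-fast [^String s] (.toUpperCase s))

(shout-fast "clojure") ; => "CLOJURE"
```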
One thing I don't quite understand is how expensive the protocol system is; e.g., if I extend a type with a new protocol at run time, perhaps concurrent with the use of an object of that type, how does the compiler handle it? IIRC protocols are handled as JVM interfaces, so it may just be an update in the interface method table which is resolved... by invokeVirtual, right? I imagine you could pay a significant cost in terms of branch misprediction for the JVM's runtime behavior around interfaces...
It's indeed possible to add type-checking to Clojure by writing macros for type declarations and checking. But Clojure was designed primarily as a dynamic language, and if one wants type checking maybe Haskell fits the bill better, since it has a powerful type system and a syntax that was designed with such a type system in mind from the get-go. I'd suspect that a heavyweight type system would just look and feel cumbersome in Clojure, but maybe I'm wrong.