The implementation of Clojure itself is not idiomatic, nor is it intended to be. Most of the complexity in that example, for instance, is there to deal with chunked lazy sequences. In user-land code, chunking of lazy sequences is purely a performance optimization that you don't have to worry about.
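For instance (a quick REPL sketch; chunked-seq? is in clojure.core), you can observe chunking without it ever changing the answer:

    ;; Ranges happen to be chunked; a hand-rolled lazy-seq is not.
    ;; keep returns the same result either way.
    (chunked-seq? (seq (range 10)))         ;=> true
    (chunked-seq? (lazy-seq (cons 1 nil)))  ;=> false
    (keep #(when (even? %) %) (range 10))   ;=> (0 2 4 6 8)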
Granted, it's not an ideologically pure "turtles all the way down" LISP. But it isn't intended to be. If you want that, I advise you to find an old Lisp Machine and use that.
Is the classic one actually going to be able to leverage tail-call elimination? The recursive call that builds the result sits inside a cons, so it isn't in tail position. Obviously it could be refactored to make a tail-recursive call, but, correct me if I'm wrong, that wouldn't be automatic.
After refactoring, the tail-recursive version would be a bit harder to read than the naive implementation. In a sense the Clojure code is in the same boat; it's just that the optimizations are more complicated, because it's not working with singly linked lists via the "address register" and "decrement register" (i.e., car and cdr).
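To make that concrete, here's roughly what the refactor looks like in Clojure (a sketch; keep-eager is a made-up name, and unlike clojure.core/keep it is eager):

    ;; The naive version's recursive call sits inside cons, so it
    ;; isn't a tail call. This accumulator version is tail-recursive
    ;; via loop/recur, trading away both laziness and some readability.
    (defn keep-eager [f coll]
      (loop [s (seq coll)
             acc []]
        (if s
          (let [x (f (first s))]
            (recur (next s)
                   (if (nil? x) acc (conj acc x))))
          acc)))

    (keep-eager #(when (odd? %) (* % %)) (range 6)) ;=> [1 9 25]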
Because you then don't need the complexity (& boilerplate) of SERIES or LOOP the moment you want to do a series of transformations on a slightly bigger list.
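e.g. (a trivial sketch) chaining transformations is just threading:

    ;; Several lazy transformations composed with ->>; nothing is
    ;; realized until take demands it.
    (->> (range 1000)
         (map #(* % %))
         (filter even?)
         (take 5))
    ;=> (0 4 16 36 64)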
You could write the Clojure version in exactly the same way as the Lisp one (well, you wouldn't say null?, but basically the same). But it wouldn't be as efficient, given the various ways sequences might be implemented in Clojure. That makes sense as part of the standard library, IMO: users of keep don't need to care about whether their sequence is chunked or whatever. And it makes sense that there should be a variety of implementers of sequences: there's more to life than singly-linked lists.
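For reference, the naive transliteration would look something like this (a sketch; my-keep is a made-up name, and it skips clojure.core's chunking and transducer arities):

    ;; A direct transliteration of the classic recursive keep:
    ;; lazy and correct, but it never takes the chunked fast path.
    (defn my-keep [f coll]
      (lazy-seq
        (when-let [s (seq coll)]
          (let [x (f (first s))]
            (if (nil? x)
              (my-keep f (rest s))
              (cons x (my-keep f (rest s))))))))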
(Even Guy Steele, who should know, recommended a couple of years ago that language designers omit cons from their languages!)
Your opinion is one I haven't seen before; do you care to elaborate?