Syntaxes that are hard for computers to parse are also hard for humans to parse.
But nobody literally parses Lisp syntax: you rely on indentation, just like when you're writing C, Java, or shell scripts.
People working in Lisp like Lisp syntax; it's really easy to edit, very readable, very consistent.
Anything new in the language, whether coming from a new release of your dialect or someone's new project, is a list with a new keyword, and predictable syntax! Syntax that works with your editor, tooling, and readability intuitions.
If your work involves a lot of numerical formulas, the Lisp way of writing math isn't always the best. You might have documents describing the code which use conventional math notation, and someone going back and forth between the doc and the code has to deal with the translation.
Most software isn't math formulas. The language features that need ergonomics are those which address program organization.
There are ways to get infix math if you really want it in that code.
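One common way to get infix math is a reader layer that rewrites infix expressions into the prefix form Lisp already understands. Here is a minimal sketch of that idea in Python (illustrative only, not any particular Lisp library's reader macro): a tiny Pratt-style parser that turns infix source into a nested prefix list.

```python
import re

# Binding powers: higher binds tighter, so * and / group before + and -.
BINDING = {'+': 10, '-': 10, '*': 20, '/': 20}

def tokenize(src):
    return re.findall(r'\d+|\w+|[+\-*/()]', src)

def parse(tokens, min_bp=0):
    # Parse one expression; recurse for operators with higher binding power.
    lhs = tokens.pop(0)
    if lhs == '(':
        lhs = parse(tokens)
        tokens.pop(0)  # consume ')'
    while tokens and tokens[0] in BINDING and BINDING[tokens[0]] > min_bp:
        op = tokens.pop(0)
        rhs = parse(tokens, BINDING[op])
        lhs = [op, lhs, rhs]  # build the prefix (Lisp-style) form
    return lhs

def infix_to_prefix(src):
    return parse(tokenize(src))

print(infix_to_prefix("a + b * c"))    # ['+', 'a', ['*', 'b', 'c']]
print(infix_to_prefix("(a + b) * c"))  # ['*', ['+', 'a', 'b'], 'c']
```

In an actual Lisp, the same translation would live behind a reader macro, so the rest of the system never sees anything but ordinary S-expressions.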
> Has anyone done that?
Over and over again, starting in the 1960s.
1. 1960s: the defunct "Lisp 2" project, run by John McCarthy himself. Algol-like syntax generating Lisp code under the hood.
2. 1970s: CGOL by Vaughan Pratt (of Pratt parser fame).
3. Others:
- Sweet Expressions by David A. Wheeler (for Scheme, appearing as a SRFI).
- Dylan
- infix.cl module for Common Lisp
- Racket language and its numerous #lang modules.
The common thread in "syntactic skins" for Lisp is twofold: they all either died, or turned into separate languages which distanced themselves from Lisp.
None of the skins I mentioned above received widespread use, and most are defunct, except perhaps Racket's various #lang modules.
However, Lisp has inspired languages with non-Lisp syntax: some of them with actual Lisp internals. For instance the language R known for statistical capabilities is built on a Lisp core. It has symbols, and cons-cell based lists terminated by NIL, which are used for creating expressions. The cons cells have CAR and CDR fields!
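The cons-cell structure described above is easy to illustrate. This is a toy sketch (not R's actual internal structs): each cell is a CAR field and a CDR field, with NIL terminating the chain, and an expression like `x + y` is stored as the chain `(+ x y)`.

```python
class Cons:
    """A classic cons cell: a CAR field and a CDR field."""
    def __init__(self, car, cdr=None):  # None plays the role of NIL here
        self.car = car
        self.cdr = cdr

def to_list(cell):
    # Walk the CDR chain until NIL, collecting each CAR.
    out = []
    while cell is not None:
        out.append(cell.car)
        cell = cell.cdr
    return out

# An expression like x + y, stored prefix-style as the chain (+ x y):
expr = Cons('+', Cons('x', Cons('y')))
print(to_list(expr))  # ['+', 'x', 'y']
```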
The language Julia is bootstrapped out of something called Femtolisp, which is still buried in there.
> Syntaxes that are hard for computers to parse are also hard for humans to parse.
This is obviously untrue. People aren't computers. Obviously there's some relationship, but it's clearly not 1:1.
> it's really easy to edit
Is it? Kind of looks like a formatting nightmare to me, though presumably auto-formatters are pretty much a requirement (and I guess trivial to implement).
If you remove the syntactic sugar like operators (which includes assignment and curly braces - it all desugars into function calls!), R is basically a lazily evaluated Lisp with S-expressions written as C-style function calls. By "lazily evaluated" here I mean that each argument is actually passed as the underlying S-expr + environment in which it was created, so it can either be evaluated when its value is needed, or parsed if the function actually wants to look at the syntax tree instead (which covers a lot of the same ground as macros).
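That argument-passing scheme can be sketched in a few lines. The following is a hedged illustration in Python, with names that are mine rather than R's API: each argument arrives as an unevaluated expression plus the environment it was written in, and the callee can either force its value or inspect the expression itself.

```python
class Promise:
    """Toy model of an R-style promise: expression + defining environment."""
    def __init__(self, expr_src, env):
        self.expr_src = expr_src        # unevaluated expression (as text here)
        self.env = env                  # environment captured at the call site
        self._value = None
        self._forced = False

    def force(self):
        # Evaluate lazily, once, in the captured environment.
        if not self._forced:
            self._value = eval(self.expr_src, dict(self.env))
            self._forced = True
        return self._value

def show_and_eval(p):
    # A callee can look at the syntax (roughly what R's substitute() gives
    # you) or demand the value, as in ordinary evaluation -- or both.
    return p.expr_src, p.force()

env = {'x': 2, 'y': 3}
print(show_and_eval(Promise('x + y', env)))  # ('x + y', 5)
```

Because the expression survives until it is needed, a function gets much of what Lisp macros provide, without a separate macro-expansion phase.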
So the existence of R is evidence that Lisp's syntax is a serious problem. If a language providing syntactic sugar has a significantly increased adoption rate, that suggests that the bitter pill of your syntax is a problem :-).
Language adoption is a complicated story. Many Lispers won't agree with me anyway. But I do think the poor uptake of Lisp demands an explanation. Its capabilities are strong, so that's not it. Its lack of syntactic sugar is, to me, the obvious primary cause.
R isn't simply syntactic sugar layered over a Lisp-like runtime to make it palatable; it's an implementation of an earlier language, S, using Lisp techniques under the hood.