I will give my answer, noting that not everyone will agree with me :-).
The vast majority of software developers do not put up with Lisp syntax. The vast majority of software developers use a different programming language with a much better syntax, and never consider using Lisp for anything serious. For example, for most developers, a programming language is a laughable nonstarter if it cannot use infix notation for basic arithmetic, as that is the standard notation for mathematics worldwide and is taught to everyone.
As evidence, look at TIOBE:
https://www.tiobe.com/tiobe-index/
... you'll see "Lisp" is #26, lower than Classic Visual Basic, Kotlin, Ada, and SAS. Scheme and Clojure aren't even in the top 50, so adding them wouldn't change much. Lisps will never break the top 10 with their current syntax.
There is a reason for Lisp's syntax: it's homoiconic, enabling powerful macros. Lisp macros are incredibly powerful, because you can manipulate programs as data. Other macro systems generally don't hold a candle to Lisp's. The vast majority of "simple syntactic sugar" systems for Lisp lose homoiconicity, ruining Lisp's macros. This was even the problem for the M-expressions created by Lisp's original developer. Most of those syntactic improvements also tend to NOT be backwards-compatible.
There is a solution. I developed curly-infix expressions (SRFI-105, https://srfi.schemers.org/srfi-105/). Curly infix enables infix notation WITHOUT losing homoiconicity or requiring a specific meaning for symbols. On top of that I developed sweet-expressions (SRFI-110, https://srfi.schemers.org/srfi-110/). These are backwards-compatible (easing transition) and homoiconic (so macros keep working). There are no doubt other approaches, too.
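To give a flavor (these examples follow SRFI-105; the reader just maps {...} onto ordinary lists, so macros see plain s-expressions):

    {a + b}          ; reads as (+ a b)
    {a + b + c}      ; reads as (+ a b c)
    {x * {y + z}}    ; reads as (* x (+ y z))
    (foo {x + 1})    ; curly-infix mixes freely with normal prefix forms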
In general getting anything accepted and used is hard.
Here's my personal take, though: Almost all programmers abandoned Lisps decades ago, in large part due to their terrible syntax. The few who still use Lisp tend to like its notation (otherwise they wouldn't be using Lisp). Since they like Lisp syntax, they don't see the problem. Cue the "this is fine" cartoon dog in a fire. This means that Lisps will never be seriously considered by most software developers today, due to their user-hostile syntax. Some will learn them as a fun toy, but not use them seriously.
I think that is a terrible shame. Lisps have a lot going for them. For certain kinds of problems they can be an excellent tool.
I realize most people who seriously use Lisp will disagree with me. That's fine, you're entitled to your own opinion! But as long as Lisp has terrible syntax (according to most developers), it will continue to be relegated to a (relative) backwater. It will continue to be learned, but primarily as an "elegant weapon" from a bygone "civilized age" https://www.explainxkcd.com/wiki/index.php/297:_Lisp_Cycles
Even MIT's 6.001 (Structure and Interpretation of Computer Programs), which once taught powerful concepts using Scheme, has been replaced by a course using Python. Reason: "starting off with python makes an undergraduate’s initial experiences maximally productive in the current environment". Scheme did not help them be maximally productive. See: https://cemerick.com/blog/2009/03/24/why-mit-now-uses-python...
I like Lisp. I don't like Lisp syntax. Since most Lispers don't want to fix Lisp syntax, or don't believe it's a problem, the vast majority of software developers have decided to use a different programming language that has a syntax they prefer to use. The numbers from TIOBE make it clear Lisp isn't a common choice.
You'll no doubt get different views in the comments. :-)
I don't think it's necessarily infix math. Most code is not math heavy. But code bases tend to be heavily loaded with those language features that drive program organization, like object-orientation.
Common Lisp's "problem" isn't so much (* x (+ y z)) but rather (fun (slot-value foo 'bar) (slot-value (slot-value xyzzy 'bag) 'element)).
The with-slots macro is not really an answer. It's an extra blurb you have to write for something simple that just looks like fun(foo.bar, xyzzy.bag.element) in another language. Slot readers and accessors help. If the class you're using hasn't defined them, should you define them yourself?
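To illustrate, here's a sketch reusing the hypothetical names above:

    ;; Direct slot access:
    (fun (slot-value foo 'bar)
         (slot-value (slot-value xyzzy 'bag) 'element))

    ;; The with-slots version: shorter names, extra wrapping:
    (with-slots (bar) foo
      (with-slots (bag) xyzzy
        (fun bar (slot-value bag 'element))))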
I have a suspicion that newcomers to Lisp who are able to get past the way arithmetic is written balk when they see how object-oriented glue code looks.
In the back of my mind I'm wondering whether regular parentheses couldn't incorporate the curly infix syntax.
In particular, a Lisp-2 helps here. Suppose we have (x + y). If that is interpreted as arguments + and y passed to function x, it makes no sense. There is no such function, and + has no variable binding. So we can take the second interpretation: swap the first two positions.
Even in a Lisp-1, we can simply have a rule that if the second position of a function call is a member of a set of math operators (which could be by symbol identity, without caring about the binding), we do the swap.
The users then just can't use + as a variable name in the second position of a function call.
This swap can be done during the macro-expanding code walk, so the interpreter or compiler only sees (+ x y), just like with brace expressions.
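Here's a minimal sketch of that swap as a plain tree walk (hypothetical code; a real implementation would hook into the macro-expanding walker and handle more cases):

    (defparameter *math-operators* '(+ - * / < > <= >= =))

    (defun swap-infix (form)
      ;; Recursively walk FORM; whenever the second element of a list
      ;; is a known math operator, swap it into operator position.
      (if (not (consp form))
          form
          (let ((walked (mapcar #'swap-infix form)))
            (if (member (second walked) *math-operators*)
                (list* (second walked) (first walked) (cddr walked))
                walked))))

    ;; (swap-infix '(x + (y * z)))  =>  (+ X (* Y Z))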
Today AI can generate code. Sometimes it's even correct.
AI is a useful aid to software developers, but it requires developers to know what they're doing. We need developers to know more, not less, so they can review AI-generated code, fix it when it's wrong, etc.
The 6502 was a great CPU for its time and price point. I wrote many programs in its assembly language. However, if you're going to work on modern systems, there are too many differences for the 6502 to be a good first assembly language (unless the 6502 is your focus).
The 6502 was designed to be easily implemented with relatively few transistors. For that it was amazing. There is a reason it was popular! But its CPU registers are only 8-bit, its call stack is 256 bytes, and for real work you need to use the zero page (zpage) well. None of these are likely to be relevant to a modern software developer using assembly. Its design encourages the use of global locations, again, an approach that doesn't make sense on modern systems.
I say this as someone who admires what the 6502 was able to achieve in its time, and I even have a long-running page dedicated to the 6502: https://dwheeler.com/6502/
If you want retro and easy, the 68000 has it beat in terms of simplicity of development. The 68K is quite regular & a little more like modern systems (sort of).
However, I think starting retro is often a disservice. Most of the time the reason to use assembly is performance. If you're running code on chips costing more than a dollar or so, getting performance out of modern architectures is MUCH different from retro computers. For example, today a lot depends on memory cache latency, yet that is typically not a serious concern on most retro computers. So learners will learn to focus on the wrong problems.
If you want a modern simple architecture, may I suggest RISC-V? It's under a royalty-free open source license, and real computers use it today. It's pretty simple (your basic load-store architecture) and you can pick only the modules you care about. Full disclosure: I work for the Linux Foundation, but I'd say that anyway.
Plausible alternatives are ARM and x86. If you subset them, they're okay.
The reality is that assembly languages reflect the underlying system, and that will change over time.
I wish more PDFs were generated as hybrid PDFs. These are PDFs that also include their original source material. Then you have a document whose format is fixed, but if you need more semantic information, there it is!
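(LibreOffice's "Hybrid PDF" export is one example that I know of: as I understand it, it embeds the editable ODF source as an attachment inside the PDF.)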
One complication is that in typical English, if you say "All my hats....", you are simultaneously making an existence statement that you have at least one hat... but the usual formal logic "forall" quantifier does NOT presume existence. Here's a formal proof that "forall" has a "surprise" meaning for those not well-versed in formal logic: https://us.metamath.org/mpeuni/alimp-surprise.html
I propose that when translating such statements to a formal logic, if that's what you really mean, use an "allsome" quantifier as I've described here: https://dwheeler.com/essays/allsome.html
It's really easy to forget to include an existence quantifier. Having notation specifically designed to automatically include it can avoid some problems.
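In symbols (roughly, per the essay), the allsome quantifier ∀! is the universal claim conjoined with existence:

    ∀! x (P(x) → Q(x))   ≡   (∀x (P(x) → Q(x))) ∧ (∃x P(x))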
> One complication is that in typical English, if you say "All my hats....", you are simultaneously making an existence statement that you have at least one hat... but the usual formal logic "forall" quantifier does NOT presume existence.
Perhaps. But what if someone asks you "are all your hats green?" Then the interpretation is not so clear.
Oh boy... so I actually wrote a thesis in graduate school on conversational implicature, Paul Grice, and various other theories of implying things.
I would actually agree with user dwheeler here.
Whether or not you agree with Gricean implicature theory (I do not), the point is that making a claim about a group that doesn't exist is absurd. Absurd statements do not convey meaning, and language is a tool for communication, thus it is generally an assumed axiom that statements will have meaning. Here, even when people make borderline nonsensical statements, we assume there is a metaphor or language game involved.
So, by making a statement about 'all my hats', if the number of hats you have is zero, then any predication is absurd and the statement is absurd, so given an axiom of not making absurd statements for natural language, you can assume there are at least two hats. Obviously there are no formal rules here, but the functionality of natural language demonstrates that these heuristics exist.
Is "absurd" a term of art here, or you just mean it conflicts with common intuition? This sort of thing comes up a lot in programming languages. For example, is Null=Null true or false? What about Null!=Null? Maybe they can both be true or both false. It's strange because there's no simple obvious right answer but we need some answer and programming languages manage to define that sort of thing so it ends up logically consistent. Closer to this topic, how about a typed collection with no item in it? We expect the type system to enforce "all its items are green" but when it's empty, that constraint would become absurd and we can no longer pass an empty collection to a function that requires a collection of greens?
A simple program to test "all my hats are green" allows the empty set to be all green:
    AllGreen = True
    For each hat in MyHats:
        If hat <> green:
            AllGreen = False
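With zero hats, the loop body never runs, so AllGreen stays True: vacuous truth in executable form.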
I'll add the exchange back here to continue this thread
-----
>>scoofy: I mean, it's important to remember that the axioms of first-order logic are arbitrary. We could easily argue that the truth value of an empty group is undecidable, and that would better correlate to natural language logic.
The fact that we compact these edge cases into arbitrary truth values is just for ease of computing.
This is also relevant to the arbitrary choice of the 'inclusive or' as a default over an 'exclusive or', which most people use in natural language.
---
>foxglacier: This addresses my previous reply to you, thanks. I wonder though if there's a problem in that common natural language is inherently limited to common concepts. Scientists famously use confusing language in their papers but they're writing for people who use the same language so it's OK. For example, they use "consistent with zero" to mean "might be zero" even though a common-language reader can interpret it as "not zero". I suppose logicians use "or" to mean inclusive or in their papers too.
-----
"Absurd" here I wouldn't say is a term of art. I just mean things that not only don't mean anything, but can't mean anything. Here, existence is always extremely relevant. This goes back to Kant's idea that existence can't/shouldn't be a predicate. The idea of talking about the actual color of a nonexistent hat is absurd in that a nonexistent hat can not have a color, period, because having a color presumes existence.
So, when I talk about the logic of natural language, we have to get really philosophical. I presume that there is at least significant equivalence from formal logic to natural language, if not ultimately full equivalence. Formal logic is effectively a model trying to capture logical reasoning, and there are some notable differences for simplicity's sake (the Frege-Russell ambiguity thesis is a common example: https://link.springer.com/chapter/10.1007/978-94-009-4780-1_... ), however, most-if-not-all of these formal logic ambiguity concerns are trivial for natural language to deal with, as any ambiguity can be clarified by an interlocutor.
Where things get really weird, however, is as you go up to the axioms of logic and try to justify them. The idea that the foundations of logic itself are determined either inductively or instinctually is just bizarre. And mapping an inductive/instinctual logic to a formal system runs into a lot of philosophical problems that aren't really practical to worry about. It just gets weird and solipsistic, as it does when you get too caught up in philosophy.
> "but green hats catch fire in the sunlight" - joe
> "and thats why i dont have any hats" - bill
from the link:
> Many conversations have goals other than the exchange of information. One is amusement, which speakers often pursue by making jokes (Lepore & Stone 2015: §11.3). Because the goal is not to provide information, the maxims of Quality, Quantity, and Relation do not apply. If for any of these reasons the Cooperative Principle does not apply, reasoning based on it will be unsound.
i think i disagree - the joke is intended to say that bill doesnt have any hats, but would like one, and only a green one, and only if they didnt catch fire in the sunlight
This is why I point out that an absurd statement still points toward meaning via a metaphor or language game, and I would put wordplay/jokes under that rubric.
In fact, in my thesis, I cited The Naked Jape, by Jimmy Carr specifically in reference to jokes (it has a one-liner on every page). One of my main arguments against Gricean conversational implicature theory was that the theory itself was a form of begging the question or a no-true-Scotsman problem, in that all of the obvious counterexamples to the cooperative principle that exist everywhere are excused as "not conversation."
Again, yes, you can have wordplay, but wordplay is wordplay, and is a language game that exists and is trying to do something in a different framework.
The reason why so many folks have no issue with the puzzle is that they view it as a puzzle (a kind of language game), and not a sensible human communication. This lets them genuinely consider absurd statements and treat them as normal.
Granted all that, but we're not really talking about normal everyday English, but a hypothetical conversation with some mythical entity who can only lie, which is not really a capability of humans; even the most pathological liar among us can and will tell the truth.
So I'd put all that theory in a drawer somewhere and acknowledge that, when we're talking about logic puzzles, the rules of logic are paramount, not grammar.
As I said in a previous part of this thread, the rules of logic are as arbitrary (by definition) as they are paramount, and often diverge from natural language logic: https://news.ycombinator.com/item?id=42365222#42368661
---
I mean, it's important to remember that the axioms of first-order logic are arbitrary. We could easily argue that the truth value of an empty group is undecidable, and that would better correlate to natural language logic.
The fact that we compact these edge cases into arbitrary truth values is just for ease of computing.
This is also relevant to the arbitrary choice of the 'inclusive or' as a default over an 'exclusive or', which most people use in natural language.
This addresses my previous reply to you, thanks. I wonder though if there's a problem in that common natural language is inherently limited to common concepts. Scientists famously use confusing language in their papers but they're writing for people who use the same language so it's OK. For example, they use "consistent with zero" to mean "might be zero" even though a common-language reader can interpret it as "not zero". I suppose logicians use "or" to mean inclusive or in their papers too.
> All my hats....", you are simultaneously making an existence statement that you have at least one hat
It does not. All my unicorns fly. There is no assumption that I have a unicorn. There is an assumption, based on the claim, but it is not a fact.
The puzzle also assumes that "my" implies there is some ownership (we'll take for granted "my" means "has" for simplicity), which is another quibble that unravels the whole thing.
E is correct. I don't see how A comes to be the accepted answer.
This reminds me of one of my favorite threads on the old Internet Infidels web site, in their Philosophy forum. The question was, "Do dogs bark?" There was an enormous amount of discussion on it!
Does "when pigs fly" imply that some day pigs will be able to fly? No; people can understand impossibility when it is used rhetorically in everyday speech.
For example I might say, "all the honest politicians are doing a great job", which conveys my actual meaning, "all politicians are dishonest".
That stops working when it's not obviously rhetorical.
Someone else in the thread mentioned: 'All my kids are in high school'. If you said this to a stranger with no other context, they will 100% think that you have kids. There is no possibility that you meant, 'I am asserting that in the set of my children, each element satisfies the property of being in high school'
I don't understand what point those examples are supposed to convey.
"All of my unicorns can fly -> some of my unicorns can fly -> at least one of my unicorns can fly" still seems to be a valid inference that may get lost in conventional translation into first order logic. And a proposed "allsome" quantifier still seems like a valid remedy for that.
So if “all my hats” doesn’t imply that I have at least one hat, “some of my hats” doesn’t imply it either; otherwise we wouldn’t be able to derive “some” from “all”.
Hence, “some of my hats are green” doesn’t imply that “at least one of my hats is green”. That’s a claim that contradicts both traditional formal logic interpretation and common sense English interpretation.
I think the same of your interpretation of some vs all. Some can contain all, just as it contains none. Both some/all imply, but do not assert existence. Claiming it tautologically defies logic is not compelling.
Well, I think showing that it defies logical inference is quite relevant in the context of that thread being about translating typical English into first order logic to do logical inference.
I worry about sets and consideration of edge cases: legal, programmatic, medical. Adhering to a convention that presupposes meaning, and claiming that interpretation is the only interpretation, cannot be resolved with repetition. I remain unconvinced.
You were the one to claim the only interpretation (unlike OP, who merely claimed a "typical English" interpretation). Moreover, your interpretation directly contradicts both the conventional typical-English interpretation, which would be relevant in a legal context (where making such claims with an empty set in mind would be deemed misleading), and the conventional formal-logical interpretation, which would be relevant in programming (where the truth of `array.every()` doesn't imply the truth of `array.some()`).
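Common Lisp's sequence functions behave the same way, for what it's worth (greenp here is a hypothetical predicate):

    (every #'greenp '())   ; => T   (vacuously true)
    (some  #'greenp '())   ; => NIL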
I would agree that's obvious, if not for the original error.
The liar doesn't necessarily "have" any hats. Again, the assumption that the liar has hats is incorrect because it's relying on a conversational implication, rather than a specific assertion.
Sure, but the question isn't about which statement is possible from the liar's statement, it's about which statement we can conclude from the liar's statement.
The liar could be lying because they have no hats. They could be lying because they have a non-green hat. We cannot conclude E because it's possible that E is not correct.
I mean, it's important to remember that the axioms of first-order logic are arbitrary. We could easily argue that the truth value of an empty group is undecidable, and that would better correlate to natural language logic.
The fact that we compact these edge cases into arbitrary truth values is just for ease of computing.
This is also relevant to the arbitrary choice of the 'inclusive or' as a default over an 'exclusive or', which most people use in natural language.
I have to interpret the question at face value, which may equate to natural language logic. I don't know the specific rules of any of these systems, which are obviously particular, or wildly different from a layman's interpretation. Most of the arguments seem to center around specialized convenience rules (as you mention), which are eventually equated to the one true way to deconstruct meaning. At least, that is what I got out of this thread.
> At GopherCon 1993, it was announced that Gopher servers would need to pay for the privilege of using the protocol... Well, that didn’t work out. People were angry and many felt betrayed. They weren’t quiet about any of it either.
> If one were to attempt to identify a single failure of Gopher in competition with the web, it would be the licensing costs. No such fee existed for the World Wide Web.
This, a thousand times. I watched as this happened. The instant that announcement was made, gopher was finished. Gopher might have lost later as HTML kept adding features, but by the time those features were added to HTML, gopher had already lost.
Similarly, Bertrand Meyer killed Eiffel by trying to charge money for the compiler, and missing the nascent OSS movement. Java was an inferior language in a few important ways but the compiler and runtime were free. He could not compete with both C++ and Java.
A number of people in that era thought this was a fad and that business as usual would prevail.
The licensing fee wasn't the sole reason, but it certainly sounded the death knell for Gopher and gave users reasons to look elsewhere.
For all the gauzy what-could-have-been speculations about Gopher, it really was more like a hierarchical wiki. WWW's freestyle document model quickly expanded to an application platform that could support all manner and style of services. Fees or not, Gopher didn't have a chance.
The Gopher standard mixes a document format with a networking protocol. The HTTP standards don't say a damn thing about HTML, but the Gopher standard defines a standard for sending directory information to a client, right down to the fixed list of file types that can occur in a directory. (MIME type? What's that?) This is the hypertext part of Gopher, as only those directories can link to other places, so constraining that gives you a nice, simple way to have pretty plain-text sites with an enforced separation between lists of links, on one hand, and images and documents, on the other, which HTTP has no equivalent for.
(Not just file types, in fact, in that they have a special type for tn3270 telnet sessions. Yep, those IBM mainframes with block-mode terminals were quite important back then, but it's a bit out of place now. They also have a type for GIF and a generic 'client-figures-it-out' image type. How forward-thinking.)
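For reference, a Gopher menu is just lines of tab-separated fields with the item type as the first character; a sketch per RFC 1436, with hypothetical hosts ('<TAB>' marks a real tab):

    0About this server<TAB>/about.txt<TAB>gopher.example.org<TAB>70
    1Software directory<TAB>/software<TAB>gopher.example.org<TAB>70
    gLogo<TAB>/logo.gif<TAB>gopher.example.org<TAB>70
    TPayroll mainframe<TAB><TAB>tn3270.example.org<TAB>23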
> The HTTP standards don't say a damn thing about HTML
HTTP stands for Hypertext Transfer Protocol. The whole thing was predicated on sending HTML documents and originally couldn't send anything else.
> (MIME Type? What's that?)
MIME is a hack to add attachments to email that was later also hacked into HTTP so you could add "attachments" to a protocol designed only to transfer hypertext; it's a kludge added in version 1.1.
Gopher's main difference from the web was that its linking was directory-based, with directory-tree documents rather than embedded hyperlinks. This was inferior, strictly speaking, since you can easily make a directory-style HTML document on the web, but you could also cross-link.
I'm sure that would have been changed though, along with additional file-types or whatever had Gopher succeeded. The web's success over Gopher was never down to technical details.
> MIME is a hack to add attachments to email that was later also hacked into HTTP so you could add "attachments" to a protocol designed only to transfer hypertext; it's a kludge added in version 1.1.
Like how Gopher got kludged to add other item types? All useful protocols evolve, and Gopher is no exception.
> Gopher's main difference with the web was that its linking was directory-based with directory tree documents rather than embedded hyperlinks. This was inferior, strictly speaking since you can easily make a directory HTML document on the web, but you could also cross-link.
Yes, that's true.
> I'm sure that would have been changed though, along with additional file-types or whatever had Gopher succeeded. The web's success over Gopher was never down to technical details.
Also true, and that New Gopher is now called Gemini.
It also defines a hypertext document format (Gemtext) but it allows HTML-style free linking and (depending on client) inline images, although that's not really what the Gemini users want. Again, this is more social than technical, and more self-consciously social because it's a deliberate reaction to existing paradigms: The Web as it is now (too invasive, too busy) and Gopher as it is now (a moribund retrocomputing exercise that can't realistically incorporate new technologies or serve new goals).
As usual though, more restrictive protocols remove possible use cases from one side of the protocol, but add use cases to the other. It's like JSON versus a Turing-complete configuration language.
A design based on directories of static files makes it a lot easier to mirror the entire site, or a subtree.