Hacker News

This is ridiculous.

The average computer scientist (not only a "programmer", as a JS dev would be) has never written Lean, Coq, or anything similar, and is not aware of the Curry-Haskell-like theorems and their implications.




I think you entirely missed the point. GP put it well:

>> They are conceptually/abstractly rigorous, but in "implementation" are incredibly sloppy.

Maturity in concept-space and the ability to reason abstractly can be achieved without the sort of formal rigor required by programming, which is far less abstract and much more conceptually simple.

I have seen this first hand TAing and tutoring CS1. I regularly had students who put off their required programming course until senior year. As a result, some were well into graduate-level mathematics and at the top of their class but struggled deeply with the rigor required in implementation. Think about, e.g., missing semi-colons at the end of lines, understanding where a variable is defined, understanding how nested loops work, simple recursion, and so on. Consider something as simple as writing a C/Java program that reads lines from a file, parses them according to a simple format, prints out some accumulated value from the process, and handles common errors appropriately. Programming requires a lot more formal rigor than mathematical proof writing.
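Sketched in Python for brevity (the comment says C/Java, and the `name,value` line format is my own assumption), the exercise described amounts to something like:

```python
import sys

def sum_values(path):
    """Sum the integer 'value' field from lines of the form 'name,value'."""
    total = 0
    try:
        with open(path) as f:
            for lineno, line in enumerate(f, start=1):
                line = line.strip()
                if not line:
                    continue  # skip blank lines
                try:
                    _name, value = line.split(",")
                    total += int(value)
                except ValueError:
                    # malformed line: wrong number of fields or non-integer value
                    print(f"skipping malformed line {lineno}: {line!r}", file=sys.stderr)
    except OSError as e:
        print(f"cannot read {path}: {e}", file=sys.stderr)
        return None
    return total
```

Even this toy version forces decisions a proof sketch never would: what exactly counts as malformed, and what happens when the file is missing.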


I didn’t miss his point. He’s plain wrong.

> Programming requires a lot more formal rigor than mathematical proof writing.

This is just wrong?

Syntax rigour has almost nothing to do with correctness.


Just take the downvotes with pride.

You have a valid point, which is that we are not even being rigorous enough about the meaning of the word “rigor” in this context.

- One poster praises how programming needs to be boiled down into executable instructions as "rigor," presumably comparing to an imaginary math prof saying "eh, that sort of problem can probably be solved with a Cholesky decomposition" without telling you how to do that, or what it is, or why it is even germane to the problem. This poster has not seen the sheer number of Java API devs who use the Spring framework every day and have no idea how it does what it does, the number of Git developers who do not understand what Git is or how it uses the filesystem as a simple NoSQL database, or the number of people running on Kubernetes who do not know what the control plane is, what etcd is, or what a custom resource definition is and when it would be useful... If we are comparing apples to apples, and "rigor" means "this person, rather than abstractly indicating that a technique they have run across can fix a problem without exact details of how, knows the technique inside and out and is going to patiently sit down with you until you understand it too," well, I think the point more often goes to the mathematician.

- Meanwhile you invoke correctness and I think you mean not just ontic correctness “this passed the test cases and happens to be correct on all the actual inputs it will be run on” but epistemic correctness “this argument gives us confidence that the code has a definite contract which it will correctly deliver on,” which you do see in programming and computer science, often in terms of “loop invariants” or “amortized big-O analysis” or the like... But yeah most programmers only interact with this correctness by partially specifying a contract in terms of some test cases which they validate.

That, however, would require a much longer and more nuanced discussion, one more appropriate for a blog article than for an HN comment thread. Even this comment, pointing out that there are at least three meanings of "rigor" hiding in plain sight, is too long.
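To make the "loop invariant" sense of epistemic correctness concrete, here is a hypothetical binary search where the comments carry the actual argument for the contract:

```python
def binary_search(xs, target):
    """Return an index of target in sorted list xs, or -1 if absent."""
    lo, hi = 0, len(xs)
    # Invariant: if target is in xs, its index lies in the interval [lo, hi).
    while lo < hi:
        mid = (lo + hi) // 2
        if xs[mid] < target:
            lo = mid + 1   # everything at or below mid is < target
        elif xs[mid] > target:
            hi = mid       # everything at or above mid is > target
        else:
            return mid
    # Loop exit: lo == hi, so the candidate interval is empty.
    return -1
```

The invariant is what upgrades "it passed my test cases" to "here is why it is correct on all inputs" — which is exactly the epistemic/ontic distinction above.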


>> Programming requires a lot more formal rigor than mathematical proof writing.

> This is just wrong? Syntax rigour has almost nothing to do with correctness.

1. It's all well and good to wave your hand at "syntax rigour," but if your code doesn't even parse, then you won't get far toward "correctness." Frustration with having to write code that parses was extremely common among the students I am referring to in my original post; to them it seemed incidental and unnecessary. It might be incidental, but at least for now it's definitely not unnecessary.

2. It's not just syntactic rigor. I gave two other examples which are not primarily syntactic trip-ups: understanding nested loops and simple recursion. (This actually makes sense -- how often in undergraduate math do you write a proof that involves multiple interacting inductions? It happens, but isn't a particularly common item in the arsenal. And even when you do, the precise way in which the two inductions proceed is almost always irrelevant to the argument because you don't care about the "runtime" of a proof. So the fact that students toward the end of undergraduate struggle with this isn't particularly surprising.)
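On point 1, the parser's veto is easy to demonstrate. In Python, a hypothetical one-character slip is rejected before any semantics are even considered:

```python
# An unbalanced parenthesis: the intent is obvious to a human reader,
# but the compiler refuses the program before running a single line.
source = "total = (1 + 2"

try:
    compile(source, "<example>", "exec")
    parsed = True
except SyntaxError:
    parsed = False

print("parsed:", parsed)  # prints "parsed: False"
```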

Even elementary programming ability demands a type of rigor we'll call "implementation rigor": understanding how nested loops actually work and why switching the order of two nested loops might result in wildly different runtimes; understanding that two variables that happen to have the same name at two different points in the program might not be referring to the same piece of memory; etc.
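One toy illustration of the loop-order point (my own hypothetical example): searching a grid for a value that sits at the end of the first row. Once an early return is involved, swapping the two loops changes the amount of work by orders of magnitude:

```python
def steps_row_major(grid, target):
    """Count iterations until target is found, scanning row by row."""
    steps = 0
    for row in grid:
        for x in row:
            steps += 1
            if x == target:
                return steps
    return steps

def steps_col_major(grid, target):
    """Count iterations until target is found, scanning column by column."""
    steps = 0
    for j in range(len(grid[0])):
        for i in range(len(grid)):
            steps += 1
            if grid[i][j] == target:
                return steps
    return steps

grid = [[0] * 100 for _ in range(100)]
grid[0][99] = 1  # target at the end of the first row

print(steps_row_major(grid, 1))  # 100
print(steps_col_major(grid, 1))  # 9901
```

Nothing here is mathematically deep; seeing *which* loop is outermost, and why it matters, is exactly the implementation rigor in question.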

Mathematical maturity doesn't traditionally emphasize this type of "implementation rigor" -- even a mathematician at the end of their undergraduate studies often won't have a novice programmer's level of "implementation rigor".

I am not quite sure why you are being so defensive on this point. To anyone who has educated both mathematicians and computer scientists, it's fairly obvious and plainly observable out in the real world. Going on about Curry-Howard and other abstract nonsense wildly misses it.


Having taught both, I get what you are saying, but the rigor required in programming is quite trivial compared to that in mathematics. Writing a well-structured program is much more comparable to what is involved in careful mathematical writing. It's precisely the internal semantic coherence and consistency, rather than syntactic correctness, that is hardest.


> Syntax rigour has almost nothing to do with correctness.

Yes, that's the point. You did miss it.


And yet I didn’t miss it. I insist.

You need more rigour to prove, let's say, the Beppo Levi theorem than to write a moderately complex piece of software.

Yet you can write it in crappy English, the medium not being the target goal; the ideation process, even poorly transcribed into English, needs to be perfectly rigorous. Otherwise, you have proved nothing.


I'm pretty sure the two of you are working from different definitions of the word "rigorous" here.

(Which, given the topic of conversation here, is somewhat ironic.)


I take it broadly, as nothing was clearly defined. :)


> Syntax rigour has almost nothing to do with correctness.

I see your point: has almost nothing correctness with rigour do to Syntax.

Syntax rigor has to do with correctness to the extent that "correctness" exists outside the mind of the creator. Einstein notation is a decent example: the rigor is inherent in the definition of the syntax, but to a novice it is entirely under-specified and can't be said to be "correct" without its definition being incorporated already... which is the ultimate parent post's point, and the context in which the post you're replying to needs to be taken.

And if you're going to argue "This is just wrong?" (I love the passive-aggressive '?') while ignoring the context of the discussion... QED.


Well, I think we should stop the discussion at the passive-aggressive thing.


but programmers don't write underspecified notational shortcuts, because those are soundly rejected as syntax errors by the compiler or interpreter

this is not about semantics (like dependent types etc.); this is just syntax. it works like this in any language. the only way to get syntax accepted by a compiler is to make it unambiguous

... maybe LLMs will change this game and the programming languages of the future will be allowed to be sloppy, just like mathematics


Some compilers can accept “holes” that can be univocally deduced from context, for instance.

You can think about it as type inference on steroids.
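Lean is one assumed example of such a language; an underscore is a hole the elaborator must fill univocally from context:

```lean
-- `_` is a hole: the elaborator deduces the element type (Nat)
-- from the literals on the right-hand side.
def xs : List _ := [1, 2, 3]
```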


Yep, but for any notational shortcut the compiler infers a single thing. It's still unambiguous as far as the computer is concerned (but it may be confusing to read without context).


It’s not only a notational shortcut as in syntactic sugar though.

It can apply a set of rewrite rules, given that you were able to construct the right type at some point.

It’s type inference on steroids because you can brute-force the solution by applying the rewrite rules and other tactics to propositions until something (a propositional equality) is found, or nothing is.
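In Lean terms (a minimal sketch), that search looks like handing the elaborator a rewrite rule and letting a tactic find the propositional equality:

```lean
-- Explicit term: we name the exact lemma ourselves.
example (a b : Nat) : a + b = b + a := Nat.add_comm a b

-- Tactic search: `simp` applies the given rewrite rule (plus its
-- own lemma set) until both sides become syntactically equal.
example (a b : Nat) : a + b = b + a := by simp [Nat.add_comm]
```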


s/Curry-Haskell/Curry-Howard/g



