Hacker News
Be sceptical of your own work (2009) (terrytao.wordpress.com)
138 points by creer on Aug 16, 2023 | 39 comments



My favorite thing is stumbling upon a piece of project documentation, exclaiming "who the hell wrote this crap?", then discovering that it was me.


I've had this happen often enough that I don't use git blame.

Only once have I looked at something and thought, "cool solution, I would never have thought of that", but apparently I did.


Sometimes I encounter those snippets and I still have a distinct memory of writing something that "I knew my future self would understand".

That is especially frustrating.


I recently came across my own comment in some code, but luckily it hasn’t caused me problems.

“hhh: There's probably a better way to do this, but that is a problem for you, dear reader, who is likely future me.”


That's me all day.


"The first principle is that you must not fool yourself and you are the easiest person to fool"

- Some guy who showed up as an extra in the Oppenheimer movie, playing the bongo drums


That was funny but weird, because as far as I know Feynman didn't take up drumming until his sabbatical in Brazil in 1951.


Shirley, you're joking!


I'm not, and don't call me Shirley.


"Ninety percent of what I do goes in the trash anyway. But that's part of the job. I'm not going to show that to people, because if I don't like it, how can I expect anyone else to like it?"

"I can understand when I like it and others don't, but if I don't like it myself, I don't want to offer it to others, because that risks that they might actually like it."

"Jean Patou, the designer, the fashion designer, from the 20s once said:"

"Never make an ugly dress, someone might buy it."

-- Karl Lagerfeld


I’ve learned to never do the “three designs, pick one” trick unless all three are things you like. Inevitably the boss or customer will pick the worst, most horrible one.


I love these. Thanks.


Very relevant article, considering today's LK-99 news.

Skepticism is at the core of an inquisitive mind, and scientists should be the first to be skeptical of their own work. This is part of being intellectually honest, and science depends on a balance of trust and healthy skepticism. Unfortunately, many get wrapped up in egos, and the publish-or-perish mindset, which corrupts the scientific process.

Skepticism in general is a positive trait to have, especially in today's post-truth world. Facts can be twisted or fabricated to serve any narrative, and being skeptical minimizes the chances of being influenced by propaganda and {mis,dis}information. This will become even more important as our information sources get corrupted by AI and deepfakes.

Schools should be preparing kids to be more skeptical, as it will be an invaluable skill in their lives.


The counterpoint in software is that it's still useful to temporarily solve a narrow problem - and never need to return to that issue. How do you denote that in your code?

I have used multiple dispatch where one sub-type is deliberately handled and the rest is left with "Nice try, you would have to implement that first", and tests in library functions with the same rude response. And I have used strictly defined grammars on the input. ... and I still have lots of TODO list items not properly guarded against in the code. What else do you use in non-trivial cases? Anything deeper, beyond program argument checks?


I use a combination of things, depending on the nature of the temporary measure.

I use TODO comments to document that this part of the code needs to be revisited.

I write a warning to the log file whenever the temporary code executes.

If there are things left unimplemented, but there's an interface to them, then I put code in that's guaranteed to fail at runtime. If the language is C or C++, I'll use something like assert(false).
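A minimal Python sketch of the combination described above: a runtime warning on the temporary path plus a guaranteed failure behind the unimplemented part of the interface (the function and format names here are hypothetical, not from the original comment):

```python
import logging

logger = logging.getLogger(__name__)

def _export_csv(rows):
    # The one case that actually works today.
    return "\n".join(",".join(map(str, r)) for r in rows)

def export_report(rows, fmt="csv"):
    if fmt == "csv":
        # Temporary measure: warn every time the stopgap path runs.
        logger.warning("export_report: csv is the only implemented format")
        return _export_csv(rows)
    # Interface exists, implementation doesn't: fail loudly at runtime,
    # the Python analogue of assert(false) in C/C++.
    raise NotImplementedError(f"export_report: format {fmt!r} not implemented")
```

The point is that the unfinished branch can never silently produce wrong output; it either warns or crashes with a greppable message.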


What I do:

1. Add TODO items when I'm in flow

2. When I'm not in flow, I search TODO items in code/notes I own

3. Add a basic task in a project management tool with no estimations for each TODO item

At least then it's in a system to be triaged at some point.


For me TODO is more literal, reserved for things that must be done. So I use SMELL if it's unsane. If it's somewhat broken, I use BUG and file a low-priority issue. For both, I put long explanations in the code for the future reader.


You make your life substantially easier if all the functions/methods/procedures/blocks/... in your code are "total" -- if they transform every possible input into the output one would expect from the name and type signature. With that as a guiding principle, I tend toward the following strategies, the particular choice of which depends on other details. Most often I'll combine 2/5, throwing a compile-time error for the broken branches, later realizing that the abstraction I chose was stupid, and then writing code that actually does what I want.

1. Inline the work being done. If you've only implemented a single case, it doesn't deserve a named function that does something different from what the name indicates. Refactor it later if you ever need to, once you have more cases.

2. Ensure the error happens at compilation. C# has diagnostic directives. Zig has the @compileError builtin. C/C++ can fail using the preprocessor's #error directive. It's a relatively common feature, and if you ensure that when your buggy/unfinished function compiles it's correct, then that mitigates the harm you might cause (and serves as documentation: @compileError("TODO: ...")).

3. Ensure the type signature captures the problem. If some inputs result in crashes, return an error-code/error-condition/failure-union/optional/nullable/... so that the caller knows shit might happen and can deal with it at runtime if it does.

4. Make it abundantly clear that there's an issue. Name it `unsafe_unfinished_only_works_for_this_subtype_foo_bar` or something similarly onerous.

5. Use a different abstraction. Rather than `def handles_everything(something_that_might_not_be_handled):` -- a great abstraction that you don't actually use yet because you don't actually handle everything, instead write a `def handles_something(something):` function. Refactoring is cheap. If you don't need the full abstraction then don't use it, and definitely don't write an unfinished, buggy version of the abstraction you're not using.

5a. It might be hard to write the type signature of that restricted "something". E.g., let's say your function only works on sorted lists. A wrapper type pointing to the thing you actually want can be a good tool for this. E.g., you might have a SortedList type, and it might expose two methods for instantiating it -- validate_sorted (which might return an error but which checks that the list is actually in order), and unsafe_assert_sorted (where you as a programmer pinky promise that you know what you're doing and create the wrapper object regardless). Any function doing a binary search can now rely on the SortedList behaving appropriately, and all bugs associated with that function call can be tied to the very greppable name "unsafe_assert_sorted". Every possible input to the function yields an appropriate output. I'm not the first person to notice this idea. [0]

6. Limit the visibility. Use inner methods, inner classes, private, protected, underscore prefixes, or whatever else your language has to indicate to the outside world that they're technically allowed to use this method but that they probably want to think twice about it (similarly to the "unsafe_unfinished_..." from above, but also tends to play nicely with tooling).

7. Just finish the thing. Shaky foundations are more expensive than they appear, and when you've written one case it's actually usually pretty fast to implement the rest, especially while you have all the surrounding context in your mind.

8. Ignore all of the above. Sometimes something like performance overrides other concerns. Code is a balancing act.

[0] https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-va...
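The SortedList idea from 5a can be sketched in a few lines of Python (the class and method names follow the comment; the binary-search method is an illustrative addition):

```python
from bisect import bisect_left

class SortedList:
    """Wrapper whose existence certifies the underlying list is sorted."""

    def __init__(self, items):
        # Not meant to be called directly; use the named constructors below.
        self._items = items

    @classmethod
    def validate_sorted(cls, items):
        # Checked constructor: raises if the input is out of order.
        items = list(items)
        if any(a > b for a, b in zip(items, items[1:])):
            raise ValueError("input is not sorted")
        return cls(items)

    @classmethod
    def unsafe_assert_sorted(cls, items):
        # Unchecked constructor: the caller pinky-promises sortedness.
        # The greppable name ties any later search bug back to this call site.
        return cls(list(items))

    def contains(self, x):
        # Binary search may now rely on the sortedness invariant.
        i = bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x
```

Every function taking a SortedList is now total over its inputs; the only place the invariant can be broken is the single unsafe constructor.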


So yes, there is a fundamental difference in approach between a pessimistically (?) strongly typed environment and an optimistically loosely typed one. Okay. Good point.

I find myself writing with an optimistic, more general mindset (so more general typing choices and architecture), where I know that the code I never have to write is the most efficient code. Heh. Or perhaps where I could have spent the time to write safer, narrower classes but couldn't be bothered. THEN I have to make the resulting working program safer before later foot-shooting situations. And the thing is in use, so compile-time errors are not allowed (in this style of thinking).

"assert" was mentioned earlier. Useful and clean and greppable abstration, yes!

Raku has interesting options for types and signatures that I have not yet abused, like:

multi sub doesnt_handle_everything ( Int $n where {$_ < 5} ) { etc; };

or:

subset Monster of Str where { $_ eq any( <godzilla gammera ghidora> ) };


Sounds like great advice when working on the bleeding edge, but kinda terrible for general use. If you are constantly undermining your own self confidence in routine tasks be prepared for a depressive episode.


> If you are constantly undermining your own self confidence in routine tasks be prepared for a depressive episode.

I disagree that self-skepticism implies undermining your own self confidence. With regular practice, it should bolster your confidence in your ability to find mistakes (a valuable skill), and in your ability to produce final outputs with fewer errors.

I guess it's in how you phrase the skepticism: are you skeptical of your ability to do the work, or are you skeptical that your work is free of mistakes? There is a difference.


I agree with your take and I'll support it with different phrasing.

Skepticism and practice resolving it helps you develop a validation framework you can stand on. Is this right or wrong? How right or wrong? How valid or invalid? Over time, you train yourself on that framework.

I'm an actuary by training but currently in a SaaS sales role. The nuance is some roles have lots of opportunities for validation while others are much more qualitative. As an actuary, there's lots of math that you check as either right or wrong, and other math that you validate as accurate (not right or wrong) with some list of assumptions and caveats for error/noise.

I see a lot LESS skepticism in sales peers. People that don't want to buy must not get it, or they're not the right customer, or any number of reasons. A re-evaluation of the pitch, the approach, the discovery of the problem is much less present. Moreover, what is right/wrong or valid/invalid?

I'm fascinated by how it all works in sales (and in math), but the contrast of my experiences tells me you're never hurting your case by looking for validation, by naturally applying some doubt and trying to make the case for the outcome.


To me, skepticism is an orientation towards not believing something. Kind of a posture or position. This, to me, is what seems like a bad way to go about life.

Maybe a better way to frame this is that someone should be honest with themselves. Be aware of their fallibility and use that awareness to inform decision making.

There is definitely a dark side to being overly skeptical by default. It can harm or hinder relationships, unnecessarily reinforce existing patterns/beliefs and deepen blind spots, and I think a failure mode involves obsessive compulsive tendencies + anxiety about the quality of one's work.

I've been on the "clinically unhealthy" side of this mindset, and it's ultimately about balance and framing. The same can be said for optimism, which can become just as dysfunctional if not balanced with a healthy dose of reality.


Agreed. "The goal is to get to the truth, the goal is not about winning arguments."

Being confident in being able to unravel truths, being able to improve, but skeptical of your current self because of potential ingrained biases and cognitive fallibility.


Even in day to day programming, it's too easy to throw something together and see it work and call it a day. A dose of scepticism toward a "solution" is healthy. It's not like software doesn't come with bugs grossly often. Probably grounds for extra self confidence even.


Missing cases, the "Oh, it does fail if the size of the input is divisible by 256" feeling.

Or the O(n^this-exponent-is-way-too-large) algorithms that seem fine with a ten item input but not so nice with 10,000 items.

Or the incomprehensible build script that you (meaning me, of course) just kept adding things to with no systematic naming or sections.

Just three that have bitten me in recent memory.

Even the simplest and most routine tasks can get messed up sometimes.


Critique the work of others as well. A lot of impostor syndrome goes away when you realize that 80% of the people who you think are better than you are cutting corners you won't, and that takes up time. Someone else is cleaning up after them.


I think that would depend on whether you view your work as producing "argument proof results" or just "results." In other words, producing high volume low-quality "trash" may easily undermine self confidence more than producing low-volume high-quality "diamonds."

In either case, the way you frame it matters much more than the process. But I can agree that expecting either "high quality high volume" or aiming for "low quality low volume" are both likely paths to a depressive episode.

Personally, I'd rather have small, flawed diamonds. The fact that they are diamonds nonetheless gives me pride. The fact that I can produce greater quantities of them than flawless gems keeps me employed (which also gives me self-confidence).

Ultimately it's very subjective, regardless.


The classic case: if I write tests, then write code, and the tests pass the first time, I'd better break the code and make sure the tests definitely fail for a good reason.


If you want a structured and rigorous version of this, it's called mutation testing. It can be quite slow, but also very informative when evaluating how well tested a piece of code is.


With TDD you write tests first, then ensure your tests fail, then write code, ... for the reasons you mention.


I think it's also important not to fall in love with your own work. I spent a few years working on a system. It was so much fun. Very technically rewarding. Then one day it needed to go, and I killed it without a second thought.


One such direction software people ARE familiar with is refactoring. At some point we do recognize we have a mess on our hands and it's time to rewrite it on more sane foundations.


This article contains an excellent description of the work of a mathematician. It should be part of any curriculum in the field.


The discussion notes are awesome too. Lots of examples raised.


isn't the word "skeptical"?


Both spellings are correct.


from the same people who brought you maths plural


If anything is evident from the internet, and HN as an example, the lineage of "Ancient Skeptics"* is as extinct as Jimmy Hoffa.

Anyway, as the great philosopher René Descartes said, regarding "skepticism": I think, therefore, I put des cartes before da horse.**

* https://plato.stanford.edu/entries/skepticism-ancient/

** This comment is bad, and I should feel bad ... https://youtu.be/jG2KMkQLZmI



