Hacker News

To be fair though, if your tests are at the right abstraction level, the specific implementation approach you choose shouldn't matter to the test.

Writing the test first also forces you to think about what API you actually want to expose. Once you've got the API right, there is still room for experimenting.
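A minimal sketch of what that can look like, assuming a hypothetical `slugify` function whose signature and behavior are being decided test-first (the function name and parameters here are invented for illustration):

```python
import unittest

# Hypothetical API being designed: writing the tests first forces a decision
# about the call signature (keyword argument, default) and the return type
# before any real implementation exists.
def slugify(title, max_length=50):
    # Just enough implementation to make the tests pass.
    slug = "-".join(title.lower().split())
    return slug[:max_length].rstrip("-")

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_result_respects_max_length(self):
        self.assertLessEqual(len(slugify("a " * 100, max_length=10)), 10)

if __name__ == "__main__":
    unittest.main()
```

The point is that the assertions stay valid even if the internals are later rewritten entirely, because they only exercise the public API.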




That’s only true once you have decided what the abstraction will be. That’s my point: a lot of the time you don’t know yet!


In this approach you decide on the abstraction (i.e., the API) by writing example code (i.e., some tests) that uses the API. The tests are how you decide which abstraction seems to make the most sense.

It sounds like you actually implement the abstraction to decide whether it seems like the right one, which is a lot more work.


Yes, my position is that tests as a client don't really tell you the truth about the abstraction, because they don't represent real usage of it.

It is better to write tests for code when you know what it is and what it should do. Tests also introduce a drag on changing strategies: if the choice you made when you wrote them now turns out not to be the optimal one, you must either change your tests or convince yourself that you were actually right the first time.

If people like to work this way then great, I'm just explaining why for me it feels bad and runs counter to my instincts.


I think I understand what you mean. At the same time though, one crucial takeaway for me from Ian's talk is that my tests might be at too small a scale if they are not useful while I am changing the implementation strategy.

For example, I found it useful to ditch concepts like the testing pyramid and focus on writing e2e tests for my HTTP API instead of trying to cover everything with module- or function-level tests. That makes it much less likely that they need to change during refactorings, and hence they provide more value.
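As a sketch of that style, here is an e2e-ish test using only the Python standard library: a throwaway WSGI app (the `/health` endpoint is invented for this example) is served over real HTTP and the test only talks to it through `urllib`, so swapping the internals never breaks the test:

```python
import json
import threading
import urllib.request
from wsgiref.simple_server import make_server

# Hypothetical application under test: a tiny JSON API.
def app(environ, start_response):
    if environ["PATH_INFO"] == "/health":
        start_response("200 OK", [("Content-Type", "application/json")])
        return [json.dumps({"status": "ok"}).encode()]
    start_response("404 Not Found", [("Content-Type", "application/json")])
    return [b"{}"]

def test_health_endpoint():
    # Port 0 lets the OS pick a free port; the server runs in a daemon thread.
    server = make_server("127.0.0.1", 0, app)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = f"http://127.0.0.1:{server.server_port}/health"
        with urllib.request.urlopen(url) as resp:
            assert resp.status == 200
            assert json.load(resp)["status"] == "ok"
    finally:
        server.shutdown()

if __name__ == "__main__":
    test_health_endpoint()
```

Because the test goes through the HTTP boundary, it only breaks when the observable contract changes, which is close to "a change in requirements".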

I generally think that "What is going to break this test?" is a really powerful question to ask to evaluate how good it is. Any answer apart from "a change in requirements" could be a hint that something is odd about the software design. But to ask this question, I need to write the test first or at least think about what kind of test I would write. At some point, writing the actual test might become unnecessary, as just thinking about it makes you realize a flaw in the design.

Other interesting questions I like to ask myself are: "How much risk is this test eliminating?" and "How costly is it to write and maintain?"


In reality I tend to do both: write example client code to think through the abstraction (some call this “README-driven development”) and then write tests once the implementation is under way. Though you can get the first as a side effect of the second, I find that good tests aren’t really good example code (too fragmented, focused on edge cases, etc.).
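A sketch of the README-driven side of that, with an entirely hypothetical `Cache` API (the class name, `max_entries`, and `get_or_compute` are invented while designing): the "README example" at the bottom is written first to check that the API reads well as client code, and the minimal implementation exists only to back it:

```python
class Cache:
    """Minimal implementation backing the README example below."""

    def __init__(self, max_entries=128):
        self.max_entries = max_entries
        self._data = {}

    def get_or_compute(self, key, compute):
        # Compute and store on a miss; evict the oldest entry when full.
        if key not in self._data:
            if len(self._data) >= self.max_entries:
                self._data.pop(next(iter(self._data)))
            self._data[key] = compute()
        return self._data[key]

# The README example itself: deliberately written as the code you would
# want a user to copy-paste, not as a fragmented edge-case test.
cache = Cache(max_entries=2)
value = cache.get_or_compute("answer", lambda: 6 * 7)
print(value)  # → 42
```

Unlike a test suite, this reads top to bottom as one happy-path story, which is exactly why it exercises the abstraction the way a real client would.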




