
If these are the proportions then I really disagree. And these are actually the proportions I see (or better). If 19 out of 20 changes are done correctly, that probably means one change will fail every week. The question then is how it will fail: a spelling mistake? a single request rejected? a single request failed? customer data lost?
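For illustration, the back-of-envelope math behind "one failure a week" can be sketched like this (the weekly change volume of 20 is my assumption, not stated above):

```python
# Expected failures per week from a per-change defect rate.
bad_changes_per_20 = 1        # 1 out of 20 changes is done incorrectly
changes_per_week = 20         # assumed weekly volume (hypothetical)

expected_failures = changes_per_week * bad_changes_per_20 / 20
print(expected_failures)      # 1.0 -> roughly one failure every week
```

The severity distribution of that one weekly failure is what the question above is really about.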

If you let too many simple issues through, you're likely to fail completely when a very simple thing breaks. First you'll find that your logging is wrong, so you'll need to fix that and reproduce; then you'll find that the rollback doesn't quite work and you've got some bad data to fix manually; then you'll find that this "simple" error was actually masking some really nasty bug; and so on. I've seen that once or twice, and I really believe in the broken windows theory now. I'd rather accept a reasonable slowdown than have people push ahead instead of stopping to think about the long-term issues.

Yes, I'm the guy who initially rejects ~4 out of 5 changes during code review (on average). Then again, I'm also the guy who gets woken up when things fail - not everyone does, but it really gives you an appreciation of why you want to test those exception branches. I hope people will be just as strict about the stuff I submit, too.
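By "testing the exception branches" I mean explicitly exercising the error path, not just the happy path. A minimal sketch (the `save_record` function and its validation rule are hypothetical, purely for illustration):

```python
def save_record(record: dict) -> None:
    """Persist a record; reject ones missing an id (illustrative rule)."""
    if "id" not in record:
        # This is the branch that pages someone at 3am if it's wrong.
        raise ValueError("record is missing an id")
    # ... persist the record here ...

def test_rejects_record_without_id():
    # Drive the exception branch directly and assert it actually fires.
    try:
        save_record({"name": "no id here"})
    except ValueError:
        return  # the error branch we wanted to hit
    raise AssertionError("expected ValueError for a record without an id")

test_rejects_record_without_id()
```

A test like this is exactly the kind of thing a strict reviewer asks for before approving the change.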



