Reliability and quality are obviously important, but you have to question whether all these steps are actually helping you to achieve that.
All of these items in the story add overhead and reduce agility, but potentially do very little to improve quality:
- Mandatory paper trail via fields on the issue tracker
- The need for documented internal sign-offs
- Standards and policies with no clear reason for their existence
- 'Micro' code reviews
- Excessive permissions and security processes internally
- No pragmatism with regards to testing the change
This kind of stuff is a constant and substantial overhead on getting anything done. You need to look at these processes very critically and be very sure of the benefits in terms of quality before introducing them.
I have to take issue with this one. Code (and Requirements, and Design) reviews, when done properly, probably have the greatest impact on improving product quality. No, I don't have time to look up the SEI research on the subject, but it has been consistently shown to be true.
Part of the problem I had with the article was that Shirley was rejecting code for non-specific reasons. You can't reject code and then say "it's not written down anywhere." If it's important enough to reject the code over, it's important enough to write down.
We have mandatory code reviews for all changes and I think it works quite well. A person making a one- or two-line bugfix may think it's quite innocuous, but it may have a bigger impact on other parts of the system than they realize. Many eyes help reduce the likelihood of that.
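To make that concrete with a made-up example (hypothetical names, not code from the story): a "trivial" one-line fix in a shared helper can silently change behaviour for every other caller -- exactly the kind of thing a second pair of eyes catches.

    from dataclasses import dataclass
    from datetime import date
    from typing import Optional

    @dataclass
    class User:
        suspended_at: Optional[date] = None

    # Shared helper used by several callers (hypothetical).
    def is_active(user: User) -> bool:
        # The "innocuous" one-line fix: users whose suspension only starts
        # in the future now count as active. Correct for the bug being
        # fixed, but any caller that relied on the old check
        # (user.suspended_at is None) silently changes behaviour too.
        return user.suspended_at is None or user.suspended_at > date.today()

    # Before the fix the second call returned False; now both print True.
    print(is_active(User()))
    print(is_active(User(suspended_at=date(2999, 1, 1))))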
I dunno... the odd bug fix has the potential to really screw things up, when the person fixing it isn't the person who wrote it originally, or it's been months since then.
1) The fields are probably mandatory because routine entries were so often not filled in correctly, and even MORE overhead was required going back and forth between teams to find out all of the details. If there is a way to avoid this overhead, again, how do you stop people abusing it?
2 & 3) Yeah... bad overhead.
4) A sanity check is ALWAYS a good idea, whether it's a fresh intern or the most seasoned programmer in the company making a change.
5) The security processes might seem excessive, but once again they're probably in place due to issues in the past that required the review/sign-off to be there to stop major disruptions getting through.
6) As someone who worked in QA for a while, there is no way in hell someone can say "push this change through" without some strong questioning of why. QA is there (in most cases) to be that last line of defence, often letting people know just how many people are really going to be affected by a change.
Being on a testing team often gives you a broader view of how all of the components of the code work together, while people working on one project can sometimes get tunnel vision about getting their thing out the door.
Edit: I'd just like to point out, I'm agreeing with you.
I really like to see a pragmatic relationship between Dev and QA where you reach agreement on the best way to build confidence in a change.
QA should absolutely be able to push back on Dev with regards to quality and relevant test cases, but likewise Dev often need to be able to direct the testing and mutually agree on the scope that will get the change signed off adequately.
Exactly. The point bengl3rt was making, I assume, is that cavalier avoidance of process is a bad thing because it allows mistakes to happen that would otherwise be caught. That much is true.
But the assumption that's wrong is that all processes avoid mistakes. Clearly they don't.
And some of the examples here are just plain cargo cult misapplications of good ideas. The point behind code review is to catch design flaws in new code. You never want to demand refactoring of existing code in order to patch features; that hurts, it doesn't help.
Skilled people without a process will always find a way to get things done. Skill begets process. But process doesn’t beget skill. Following a recipe won’t make you a great chef – it just means you can make a competent bolognese. Great chefs don’t need cookery books. They know their medium and their ingredients so well that they can find excellent combinations as they go. The recipe becomes a natural by-product of their work.
Er... great chefs do need cookery books. They may not refer to them as often, but you won't find many chefs out there without a collection of cookery books. They still use recipes for things they're not familiar with.
Besides, skill may beget product, but it doesn't necessarily beget process.
>> Besides, skill may beget product, but it doesn't necessarily beget process.
Oh boy, talk about not seeing the forest for the trees. For skill to beget a successful product, you must follow a process to get from nothing to product. So in the course of creating a successful product you have automatically created a process -- the process to build the product. That process may be used one time, or multiple times -- but it's still a process.
Not seeing the forest for the trees? Hell, if we're going to use that loose a definition, everything is a process, regardless of skill level. Even if you end up without a product, to get the steaming pile of crap you abandoned, you went through a process.
Having just (as in hours ago) been through the wringer of adding new features to a mass of spaghetti that had not been touched in 10 years, the only way I could maintain my sanity, and have some assurance beyond regression testing that I wasn't breaking something, was to refactor the existing code and then insert my changes.
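Roughly the approach, as a made-up sketch (none of this is the actual code): first a pure refactor that provably preserves behaviour, then the change goes into the seam you carved out.

    # Before: the logic I need to touch, tangled inline in a big function.
    def checkout_before(order):
        total = sum(item["price"] * item["qty"] for item in order["items"])
        if order.get("coupon") == "VIP":
            total = total * 0.9
        return round(total, 2)

    # Step 1 -- pure refactor, behaviour identical, checkable by tests:
    def subtotal(order):
        return sum(item["price"] * item["qty"] for item in order["items"])

    def discount_rate(order):
        return 0.10 if order.get("coupon") == "VIP" else 0.0

    def checkout(order):
        return round(subtotal(order) * (1 - discount_rate(order)), 2)

    # Step 2 -- the new feature becomes a small change in discount_rate,
    # not another branch threaded through the original tangle.
    order = {"items": [{"price": 20.0, "qty": 2}], "coupon": "VIP"}
    assert checkout_before(order) == checkout(order) == 36.0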
Sure. That's the point behind refactoring: it makes changes to existing code flow more smoothly. But the case here doesn't fit that at all: the change as described was a change in configuration (that just happened to be stored in a code variable), yet it was being reviewed as if it were a new feature being added through development. That's the "cargo cult" part -- refactoring in the course of development is good. Rules demanding refactoring are bad, because they hit false positives (in this case, a high-priority configuration change).
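To put the false positive in concrete terms (a made-up sketch, not the change from the story): the entire "change" is one edited value, with no new behaviour to review.

    # Configuration that happens to live in a code variable (hypothetical).
    # The whole high-priority change is the edited value below.
    MAX_UPLOAD_SIZE_MB = 50  # was 25; raised for a customer deadline

    def upload_allowed(size_mb: float) -> bool:
        # Unchanged logic -- only the threshold above moved.
        return size_mb <= MAX_UPLOAD_SIZE_MB

    print(upload_allowed(30))  # True after the change, False before

A review rule that blocks that one-value edit until the surrounding code is refactored is exactly the false positive being described.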
If these are the proportions then I really disagree. And actually these are the proportions I see (or better). If 19 out of 20 changes are done correctly, then at a rate of around 20 changes a week that means probably 1 will fail every week. Now the question is how it will fail: a spelling mistake? A single request rejected? A single request failed? Customer data lost?
If you let too many simple issues through, you're likely to find yourself completely failing when a very simple thing breaks. First you'll find that your logging is not correct, so you'll need to fix that and reproduce; then you'll find that the rollback doesn't quite work and you've got some bad data to fix manually; then you'll find that this is actually a simple error masking some really nasty bug; and so on. I've seen that once or twice and I really believe in the broken windows theory now. I'd rather have a reasonable slowdown than people pushing ahead instead of stopping to think about long-term issues.
Yes, I'm the guy who rejects ~4 out of 5 changes during code review initially (on average). Then again, I'm also the guy who gets woken up when things fail - not everyone does, but it really gives you an appreciation of why you want to test those exception branches. I hope people will be just as strict about the stuff I submit.
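For what it's worth, this is the sort of thing I mean by testing the exception branches (hypothetical names, plain unittest.mock):

    import unittest
    from unittest import mock

    def save_order(db, order):
        # The failure path is what pages someone at 3am, so it deserves
        # a test just as much as the happy path does.
        try:
            db.insert(order)
        except ConnectionError:
            db.rollback()
            raise

    class SaveOrderTest(unittest.TestCase):
        def test_rolls_back_on_connection_error(self):
            db = mock.Mock()
            db.insert.side_effect = ConnectionError
            with self.assertRaises(ConnectionError):
                save_order(db, {"id": 1})
            db.rollback.assert_called_once()

    if __name__ == "__main__":
        unittest.main()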