I really don't, at least not on the scale it was a couple of years ago. I expect there are a handful of organisations who are still stuck in IE6 land, but it's no longer the limiting factor it used to be.
> When you're talking about a product that needs to be upgraded frequently to continue to do its job and to maintain security, I'd say the answer is "not very long".
I guess I lack sympathy because, ironically, the vendors are still using old and underpowered tools and casual processes for their development, which is why we need so many security patches. Usually, those tools start with a C and end with the word "compiler". Often, the development processes are trendy, "Agile" things that emphasize prototyping and pushing rapid updates over clear specs and systematic designs, and that treat techniques like unit testing as some sort of divine correctness guarantee instead of a back-up to good basic development practices.
Well, you reap what you sow. If you don't invest in tools and processes that can build robust software that requires little maintenance, I'm going to bitch if you don't invest in supporting your existing software either.
We've had programming languages much better suited to application development than C and its derivatives for many years now, and there is really no excuse for still writing things like networking software using a model that has a terrible risk/benefit ratio for that kind of project.
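To make that concrete, here is a deliberately contrived sketch (my own toy example, not taken from any real browser or networking stack) of the kind of bug that parsing untrusted input in the traditional C style invites, next to a bounds-checked alternative in the same language:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <stdexcept>
    #include <vector>

    // C-style parsing of an untrusted, length-prefixed record. The attacker
    // controls the length byte, and nothing forces us to check it against
    // either buffer's real size.
    void parse_record_unsafe(const std::uint8_t* data, std::size_t size) {
        (void)size;                           // the true input length is right here, but unchecked
        char payload[64];
        std::uint8_t len = data[0];           // attacker-controlled, 0..255
        std::memcpy(payload, data + 1, len);  // overflows payload if len > 64; over-reads data too
        // ... use payload ...
    }

    // The same logic with explicit checks and a bounds-checked container.
    std::vector<std::uint8_t> parse_record_checked(const std::vector<std::uint8_t>& data) {
        if (data.empty()) throw std::runtime_error("truncated record");
        std::size_t len = data.at(0);
        if (data.size() < 1 + len) throw std::runtime_error("length exceeds input");
        return std::vector<std::uint8_t>(data.begin() + 1, data.begin() + 1 + len);
    }

Nothing in the first version forces you to consult the size you were handed, and that whole class of exploit simply doesn't exist in languages (or even library styles) that check bounds for you.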
It's also a common fallacy that writing code significantly less buggy than what we put up with today needs heavyweight processes that cost more. There are many people in many industries who do it every day and have the metrics to prove it.
> Unless you choose to use bleeding-edge beta functionality, your sites will still work just fine.
Sure, but unless people are going to use that bleeding edge technology, why push out new versions of the browser with the corresponding functionality changes? It would be far safer to push only such security patches and bug fixes in existing functionality as are necessary, and make functional changes at a pace that allows for the whole industry to develop sustainably and actually take advantage of the improvements.
> Can you point out what FF5 has done that's so terrible and backwards-incompatible?
Well, so far it seems to have broken quite a few plug-ins and it looks like some of the typography engine bugs are back again, but I've only been using it for two days so I haven't yet tried it with most of the projects I'm involved with.
In any case, shipping one version without breaking anything is nothing to be proud of. You have to ship every version without breaking anything, and neither Firefox nor Chrome has a good track record in that department.
Sorry, I like my pseudonymity shield in this sort of discussion. It makes conflicts of interest far less likely, given that I have a number of professional interests and possibly not everyone I work with would like my honest views on this subject. (I don't think that matters as long as the work I do for them is my best effort at what they're paying me for, but they might disagree.)
Because I'm seriously considering devoting some of my meager resources to forking and long-term maintaining the Firefox 3.6 and 4-5 versions, and wanted to get some non-public feedback on that from someone with the same problems I have.
Well, the nice thing about a pseudonym is I can tell you the same things here that I'd say privately. :-)
FYI in case you didn't know, it looks like they are planning to maintain the 3.x branch for a while, but 5 is being treated as the next step from 4 so they won't be maintaining a 4.x series independently.
Also FWIW, some of the specific problems encountered by projects I've worked on actually crept into the later 3.x series, notably the ones involving text rendering problems and the ones involving Java applets.
So I guess that's a vote from me for "probably more valuable uses for that much of your time", but I suppose YMMV depending on what specific issues you've encountered.
> I really don't, at least not on the scale it was a couple of years ago. I expect there are a handful of organisations who are still stuck in IE6 land, but it's no longer the limiting factor it used to be.
And we can all be thankful for that.
> I guess I lack sympathy because, ironically, the vendors are still using old and underpowered tools and casual processes for their development, which is why we need so many security patches. Usually, those tools start with a C and end with the word "compiler". Often, the development processes are trendy, "Agile" things that emphasize prototyping and pushing rapid updates over clear specs and systematic designs, and that treat techniques like unit testing as some sort of divine correctness guarantee instead of a back-up to good basic development practices.
I suppose you could use HotJava or Lobo if you want a browser not written in C(++). Really, if you think that Firefox is poorly written and managed, why not move to something else? I feel like software companies are always under fire for using outdated techniques and poor quality control and whatever else, and yet these complaints seem so often to come from people who haven't produced anything on the scale of the software they criticize.
> Well, you reap what you sow. If you don't invest in tools and processes that can build robust software that requires little maintenance, I'm going to bitch if you don't invest in supporting your existing software either.
What projects have you worked on that had 5MM lines of code? Considering how complex a modern browser is, I'm not sure the bug rate is especially high. How many people use Firefox every day and rarely encounter bugs? Seems pretty robust to me.
> We've had programming languages much better suited to application development than C and its derivatives for many years now, and there is really no excuse for still writing things like networking software using a model that has a terrible risk/benefit ratio for that kind of project.
I'm sure you'll be looking into Lobo, then.
> It's also a common fallacy that writing code significantly less buggy than what we put up with today needs heavyweight processes that cost more. There are many people in many industries who do it every day and have the metrics to prove it.
I'm not really sure why you're going down this path, since I don't think I ever said anything about this, but I'll bite. What industries are producing significantly less-buggy code at the same (or similar) cost?
> Sure, but unless people are going to use that bleeding edge technology, why push out new versions of the browser with the corresponding functionality changes? It would be far safer to push only such security patches and bug fixes in existing functionality as are necessary, and make functional changes at a pace that allows for the whole industry to develop sustainably and actually take advantage of the improvements.
Safer? Sure. More expensive? That, too. Maintaining multiple branches is not free. Mozilla has been playing that game for a long time, and I think they're sick of it. I doubt they made this decision lightly.
> Well, so far it seems to have broken quite a few plug-ins and it looks like some of the typography engine bugs are back again, but I've only been using it for two days so I haven't yet tried it with most of the projects I'm involved with.
> In any case, shipping one version without breaking anything is nothing to be proud of. You have to ship every version without breaking anything, and neither Firefox nor Chrome has a good track record in that department.
If they can't ship without introducing functional bugs, then I agree they have issues they need to work out. The plugin thing isn't really acceptable, either. If you're going to have other people build on your product, you absolutely need to support them.
> I feel like software companies are always under fire for using outdated techniques and poor quality control and whatever else, and yet these complaints seem so often to come from people who haven't produced anything on the scale of the software they criticize. [...] What projects have you worked on that had 5MM lines of code?
I think you know that I'm not going to answer that when I'm posting under a pseudonym, but I've worked on some reasonably large-scale projects. I doubt anything I've written ultimately has as many end users as Firefox, but I'm definitely in both the millions-of-lines club and the millions-of-users club, probably multiple times by now, and I've worked on projects where faults can cause Very Bad Things to happen.
> I'm not really sure why you're going down this path, since I don't think I ever said anything about this, but I'll bite. What industries are producing significantly less-buggy code at the same (or similar) cost?
My point is that if you're going to write software with a huge attack surface for malware, such as a web browser, and you choose to build it with decades-old technology that makes writing secure code difficult, then it's hard to take seriously any complaints about how hard it is to maintain that code and keep up with security patches.
As for other industries, obvious ones would be defence, aerospace, finance, and infrastructure management. When was the last time you saw a plane fall out of the sky because of software failure, or your electricity supply cut off because the management software for the national grid crashed? (There are also oddities like TeX, but then that was written by Donald Knuth, so it's hardly representative.)
Software developers in these industries typically work to different standards than those developing consumer products, and yes, the up-front costs are typically significantly higher. The thing is, once you account for the time it takes to fix bugs post-release compared to fixing them at an earlier stage of development, the difference in total effort isn't such a large factor. And even then, you still haven't counted the benefit to society (and to your PR) of not having perhaps millions of users disrupted by each serious bug.
The bottom line is that as an industry, we are mostly stuck in the past, but we are there only because of inertia and non-technical factors, not because of engineering merit. As I noted before, this is a rather ironic argument in a discussion about trying to move people onto newer and better browser technologies, but quite apt: it shows how little progress we make if all the better technologies live in their own little worlds.
> If you're going to have other people build on your product, you absolutely need to support them.
Exactly. I think that's the key point that I and a few other people in this thread have been trying to make.
> I think you know that I'm not going to answer that when I'm posting under a pseudonym, but I've worked on some reasonably large-scale projects. I doubt anything I've written ultimately has as many end users as Firefox, but I'm definitely in both the millions-of-lines club and the millions-of-users club, probably multiple times by now, and I've worked on projects where faults can cause Very Bad Things to happen.
Fair enough, but I stand by what I said. Most of the people who criticize large software projects have never worked on a similar scale. Nor have most of them worked on projects with similar performance needs. It's all fine and dandy to criticize the choice to use C++ when you've never had to write a large performant program (not that this necessarily applies to you).
> My point is that if you're going to write software with a huge attack surface for malware, such as a web browser, and you choose to build it with decades-old technology that makes writing secure code difficult, then it's hard to take seriously any complaints about how hard it is to maintain that code and keep up with security patches.
So what's the alternative? To write it in Java, so it's measurably slower and feels even slower than it measures? To write it in C#, so it only runs reliably on Windows? To write it in Smalltalk (yeah, right)? To write it in Haskell, so that only a handful of people have the skills to do the job? Oh, and let's not forget that all of those have major pieces written in C that would still be attack vectors.
> As for other industries, obvious ones would be defence, aerospace, finance, and infrastructure management. When was the last time you saw a plane fall out of the sky because of software failure, or your electricity supply cut off because the management software for the national grid crashed? (There are also oddities like TeX, but then that was written by Donald Knuth, so it's hardly representative.)
Oh, come on. I've worked in the defense industry. The processes are heavier, and I dispute that the quality of the resulting code is better. It's also a ton of C. If most of it were attached to the Internet, people would find holes all over it. Defense software most certainly has bugs, sometimes serious ones. The aerospace industry has similar issues: tons of C and C++. If you're talking about NASA, let's not forget that Spirit landed on Mars and went into fault mode due to software problems. Finance? Seriously? How about the "flash crash" last year, when the Dow plunged and the market briefly shed around a trillion dollars in value thanks to automated trading software?
On the other hand, the space shuttle software was a massive success, with relatively few bugs. It was also developed under an extremely heavyweight process that cost many times more per line than typical software development. And that's okay, because it's the space shuttle, but it's not realistic to expect this of normal software, and it's incorrect to claim that it doesn't cost more. It also has far fewer lines of code than most commercial aircraft systems, because NASA knows that simpler is generally safer. Fewer features mean less complexity, but people don't want a simple browser, because that means NCSA Mosaic.
A major difference between most of the things you mentioned and Firefox is that the things you listed are not constantly exposed to the Internet. The bugs in Firefox that matter are security bugs, and they don't affect Airbus A380s because A380s aren't browsing random Internet sites. The machines that control the electric grid don't have wide-open ports that random attackers can reach. The attack surface of Firefox is huge, and that's why more security bugs are discovered there than in, say, Visa's payment centers. High-reliability systems are also exactly that: systems, not single programs. When Firefox fails, you see it and curse at Mozilla. When Visa's software fails, you probably never see it, because you get rerouted to a different machine. When an Airbus has a machine failure, the backup kicks in, or in the worst case the pilots go manual. That's redundancy that Firefox cannot reasonably provide. (Also, when Visa's software messes up, you just reswipe the card and never realize what actually happened.)
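To be clear about what I mean by redundancy, here's a toy sketch (entirely my own illustration, not how Visa or anyone else actually does it): try the primary, and quietly fall back to a replica rather than showing the user an error.

    #include <functional>
    #include <iostream>
    #include <stdexcept>
    #include <vector>

    // Toy failover wrapper: try each replica in order and return the first result
    // that doesn't throw. The caller only sees an error if every replica fails.
    template <typename T>
    T with_failover(const std::vector<std::function<T()>>& replicas) {
        for (const auto& attempt : replicas) {
            try {
                return attempt();
            } catch (const std::exception&) {
                // in real life: log it, then fall through to the next replica
            }
        }
        throw std::runtime_error("all replicas failed");
    }

    int main() {
        // Simulated "primary" that is down and "backup" that works.
        auto primary = []() -> int { throw std::runtime_error("primary offline"); };
        auto backup  = []() -> int { return 42; };

        int result = with_failover<int>({primary, backup});
        std::cout << "served by backup: " << result << "\n";  // the caller never noticed
    }

The individual components still have bugs; the user-visible reliability comes from the structure wrapped around them.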
> Software developers in these industries typically work to different standards than those developing consumer products, and yes, the up-front costs are typically significantly higher. The thing is, once you account for the time it takes to fix bugs post-release compared to fixing them at an earlier stage of development, the difference in total effort isn't such a large factor. And even then, you still haven't counted the benefit to society (and to your PR) of not having perhaps millions of users disrupted by each serious bug.
I'm going to disagree here. For one, I think you're just handwaving when you claim that the cost to fix bugs balances out the cost of a much heavier process. I don't think that's true, and I think the value proposition for most consumer software makes this clearly a bad idea. For another, you're comparing software that does one relatively unchanging thing against software that is constantly growing. Yeah, the electric grid is pretty reliable. It's also done the same job for decades. Firefox hasn't even existed for a decade (unless you count Netscape Navigator, which is iffy because so much was rewritten). Spending twice as long to build something to get a little more reliability can be a pretty good tradeoff when you'll use it nearly unchanged for 20 years. It's a pretty bad idea when you're going to be actively developing the next version as soon as the current one drops.
> The bottom line is that as an industry, we are mostly stuck in the past, but we are there only because of inertia and non-technical factors, not because of engineering merit. As I noted before, this is a rather ironic argument in a discussion about trying to move people onto newer and better browser technologies, but quite apt: it shows how little progress we make if all the better technologies live in their own little worlds.
I don't think we're stuck in the past. The industries you mentioned use the same languages, and often older ones. I think our engineering and processes are much better than they were a decade or two ago.
> Exactly. I think that's the key point that I and a few other people in this thread have been trying to make.
And I don't dispute that at all. My point is that frequent drops and reliable support are not mutually exclusive. Whether Firefox will deliver both is a good question, but it certainly could at least in theory.
FWIW, I've spent a substantial chunk of my career working with C++, including on heavily mathematical code where C++'s performance did make it a suitable tool for the job. That taught me that advocates of other languages sometimes exaggerate their performance claims.
It also taught me that most software that "needs to be written in C++ for performance reasons" really doesn't. Most of the old arguments against VM-based languages, so-called scripting languages, and functional languages have been out of date for years now as technologies like JIT compilation have matured. As the coming generations of processors introduce more parallel processing power but don't speed up sequential execution much more, languages that provide a natural model for expressing parallel algorithms or for auto-parallelisation behind the scenes will become even more advantageous.
You mentioned some of the aerospace projects, but you automatically said that it wasn't realistic to expect similar quality from "normal software". That is the mindset that I think needs to change in our industry. For example, every study I've ever read about Cleanroom Software Engineering has shown that its up-front development costs (both time and money) are typically within a factor of 2 or 3 of a "normal" software development process, and often much closer. While it doesn't completely eliminate bugs, you do consistently see bug rates at least an order of magnitude lower for Cleanroom jobs. Over the lifetime of most projects, the total development-plus-bug-fixing time is certainly in the same ballpark at that point. There is no evidence that such code is any less maintainable in the face of changing requirements than code developed using any other process (on the contrary, the systematic and carefully documented structure gives you a great start), and the morale of the development team is usually noticeably higher, because they are spending most of their time building interesting stuff instead of fixing bugs that should never have been there.
Elsewhere, there is telecoms control software in the world, written in Erlang, that has been operating continuously for years with a few seconds of downtime in total since it went live. That's some absurd number of 9s of reliability, because the software architecture is fundamentally designed to be fault tolerant.
I guess I'm just trying to say that we shouldn't assume today's routine commercial practice is the most efficient, reliable way of doing things. We know, beyond any reasonable doubt, that it isn't. As an industry, we allow ourselves to be held back by non-technical issues like the availability of ready-trained staff, because development groups are too tight to provide training to improve their people's skills, and by preconceptions that say languages or development processes or software design principles that aren't today's mainstream must be too hard for everyday development tasks outside of niches where quality really matters.
I'd have a lot more sympathy for development groups that struggle to maintain shipping code in the face of evolving security threats and the like if those groups didn't shoot themselves in the foot, stick a noose around their necks and then tie their hands behind their back before they kicked the stool away.
> It also taught me that most software that "needs to be written in C++ for performance reasons" really doesn't.
Most software doesn't need to be written in C++ for the simple reason that most software doesn't actually need to be highly performant. Beyond that, many applications can be made sufficiently performant in other languages. However, I'm not convinced that for software such as a browser, where performance is key to the success of emerging web technologies, anything but C/C++ is really going to do the job. There's a lot of evidence that various languages are almost as fast as C or C++, but rarely as fast. That "almost" can add up in complex programs, especially when you're talking about programs that have to run other programs (i.e. JavaScript). If you disagree, what language do you believe would be appropriate for building a cross-platform web browser?
> You mentioned some of the aerospace projects, but you automatically said that it wasn't realistic to expect similar quality from "normal software". That is the mindset that I think needs to change in our industry. For example, every study I've ever read about Cleanroom Software Engineering has shown that its up-front development costs (both time and money) are typically within a factor of 2 or 3 of a "normal" software development process, and often much closer. While it doesn't completely eliminate bugs, you do consistently see bug rates at least an order of magnitude lower for Cleanroom jobs. Over the lifetime of most projects, the total development-plus-bug-fixing time is certainly in the same ballpark at that point. There is no evidence that such code is any less maintainable in the face of changing requirements than code developed using any other process (on the contrary, the systematic and carefully documented structure gives you a great start), and the morale of the development team is usually noticeably higher, because they are spending most of their time building interesting stuff instead of fixing bugs that should never have been there.
That's not really what I said. Expecting commercial products to use the same heavyweight process that aerospace uses is not realistic. It is not reasonable to expect Mozilla to spend 6 to 9 months implementing the same features that Microsoft or Google develops in 3. That is a strategy for losing the entire market. If the quality bar needs to be raised, it must be done more efficiently than by adopting "defense-grade" process.
Sure, when a code failure causes a missile to detonate inside a fighter jet, you can afford the additional cost (and why not, Uncle Sam is footing that bill). But when a code failure results in a browser restart, you can't justify the extra development effort. For much lower effort you can build a system that says "oops", saves state, and restarts right back to where the user was. Indeed, I think all the major browsers do that now, though I'm not certain, because I can't recall the last time a browser actually crashed on me on the desktop.
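A rough sketch of what "saves state and restarts right back" amounts to (a toy, obviously nothing like real session-restore code, and the file name is made up):

    #include <fstream>
    #include <iostream>
    #include <string>
    #include <vector>

    // Crude "session restore": periodically write the open URLs to disk, and on
    // startup reload whatever the previous run left behind. A real browser does
    // something far more elaborate, but the principle is the same.
    const char* kSessionFile = "session.txt";   // hypothetical location

    void save_session(const std::vector<std::string>& open_tabs) {
        std::ofstream out(kSessionFile, std::ios::trunc);
        for (const auto& url : open_tabs) out << url << "\n";
    }

    std::vector<std::string> restore_session() {
        std::vector<std::string> tabs;
        std::ifstream in(kSessionFile);
        for (std::string url; std::getline(in, url); ) tabs.push_back(url);
        return tabs;                             // empty if there was no previous session
    }

    int main() {
        auto tabs = restore_session();
        if (!tabs.empty()) std::cout << "Restoring " << tabs.size() << " tab(s)\n";

        tabs.push_back("https://example.org/");  // user opens a page
        save_session(tabs);                      // checkpoint before anything can crash
    }

Checkpoint-and-restart buys most of the perceived reliability for a tiny fraction of the cost of making the crash impossible in the first place.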
I also disagree with your assertion that developers are happier on teams that practice heavyweight development processes. I've never heard anything but the opposite. Spending hour upon hour writing and refining specs is hell. It is not coding, and most coders don't like doing it to excess. Experienced coders should also know that a lot of it is wasted time, because inevitably half of the assumptions made turn out to be wrong. Some planning is a good thing. Heavyweight processes are something else entirely. Extra specs and documentation might help reduce bugs, but probably not nearly so much as more/better testing.
> Elsewhere, there is telecoms control software in the world, written in Erlang, that has been operating continuously for years with a few seconds of downtime in total since it went live. That's some absurd number of 9s of reliability, because the software architecture is fundamentally designed to be fault tolerant.
This is a false comparison. I could point out that Firefox has been running (some version, somewhere) continuously for almost a decade, and that would almost be more reasonable. At least then we'd be comparing a bunch of machines to a bunch of machines. Telecom software is not magically bug-free. Quite the opposite, languages like Erlang are designed with failure in mind. Machines fail. Networking devices fail. Antennas fail. Software fails. Rather than asserting that the solution is to write better software, Erlang is an acknowledgement that the solution is a better system. The software isn't better in the sense of having fewer bugs or using a DoD development process. It simply expects failure. When Erlang encounters an error, it can retry the operation, restart the process, or move to another machine. Firefox will retry, it will restart if necessary, and it is adding things like process separation for crash-prone plugins. These things are a net gain for users, but the reality is that faults still happen, and the gains in quality here do not come from fewer bugs but from better response to those bugs.
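And the "restart the process" part isn't Erlang magic either. Here's a bare-bones POSIX sketch (my own toy, not how Firefox or a telecom switch actually does it) of a supervisor that keeps relaunching a crash-prone worker:

    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>
    #include <cstdio>

    // Bare-bones supervisor: run the worker in a child process and restart it
    // whenever it dies abnormally. A crash in the worker never takes down the
    // supervisor, which is the essence of process separation.
    int main(int argc, char* argv[]) {
        if (argc < 2) {
            std::fprintf(stderr, "usage: %s <worker-command> [args...]\n", argv[0]);
            return 1;
        }
        for (;;) {
            pid_t pid = fork();
            if (pid == 0) {                    // child: become the worker
                execvp(argv[1], &argv[1]);
                std::perror("execvp");
                _exit(127);
            }
            int status = 0;
            waitpid(pid, &status, 0);          // parent: wait for the worker to exit
            if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
                break;                         // clean exit: nothing left to supervise
            std::fprintf(stderr, "worker died (status %d), restarting...\n", status);
            sleep(1);                          // back off a little before the restart
        }
        return 0;
    }

All the hard engineering is in what surrounds that loop (preserving state, rate-limiting restarts, escalating persistent failures), but the principle is exactly "expect failure and contain it" rather than "never fail".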
> I guess I'm just trying to say that we shouldn't assume today's routine commercial practice is the most efficient, reliable way of doing things. We know, beyond any reasonable doubt, that it isn't. As an industry, we allow ourselves to be held back by non-technical issues like the availability of ready-trained staff, because development groups are too tight to provide training to improve their people's skills, and by preconceptions that say languages or development processes or software design principles that aren't today's mainstream must be too hard for everyday development tasks outside of niches where quality really matters.
I've never said that today's practices are the most efficient, but going backwards in time to adopt DoD-style development is a move in the wrong direction. Spending 3 times as long on a given development effort may yield some small quality gains, but the overall effect is a loss of value. A product that has 10% fewer meaningful bugs but has 66% fewer necessary features is a failure. The move forwards is to build in more fault tolerance. When you run Firefox, it uses some fault-tolerant techniques already, and will hopefully add more. When you execute a search on Google, it uses fault-tolerant techniques. These are the same techniques that other high-reliability industries use. Stuff still breaks, but they recover. Our industry is not trailing the state of the art here. We're doing the same things. The state of the art can always improve, but that doesn't mean that we are doing a bad job now.
The security of Microsoft's Windows line went way up in the wake of all the XP exploits. It was certainly not a move towards a heavyweight development process that made those gains. In fact, everything I've heard indicates that Microsoft has moved in the other direction, toward "agile" development. What changed was the focus. When security is the top priority, it unsurprisingly gets better. If you want fewer bugs, hire more great testers and have them work closely with developers. Give your developers security training. Hire security experts. Use the best tools you can get. And adopt a culture that puts bug-fixing first on the list. But don't saddle your developers with an antiquated development system, and especially don't claim that this system is somehow leading-edge when the industry has already tried it (numerous times).
> I'd have a lot more sympathy for development groups that struggle to maintain shipping code in the face of evolving security threats and the like if those groups didn't shoot themselves in the foot, stick a noose around their necks and then tie their hands behind their back before they kicked the stool away.
Our posts are getting very long going point-by-point, so I won't address everything you've said individually now. However...
On the efficiency issue, IMHO you're still too focussed on the up-front cost, when it is the long term efficiency that really counts. It doesn't matter if it takes 6 months to develop something properly instead of 3 months to hack it, if the hackers then spend another 6 months patching bugs before anyone can use it.
Even if that weren't true, do we really believe that a three-month release cycle is necessary to compete in the browser market today? It takes longer than that for new technologies to be used in production projects, even by the most die-hard bleeding-edge early adopters. Wouldn't you rather code to a well-defined spec and know that it really was going to work in everyone's browser, even if it took another three months for your favourite new feature to be well specified so you could use it? We've waited years for things like HTML5 and CSS3, so I think we could wait another few weeks to get them right!
Regarding the morale of developers, again, I think you're confusing heavyweight processes generally with my example of Cleanroom in particular. There are plenty of studies that show developers do enjoy working within that process; try a Google Scholar search for Cleanroom Software Engineering and read a few of the papers. Likewise, you also seem to be assuming that Cleanroom is heavy in the sense of being all about writing specs and management overhead, which suggests to me that you've never actually tried it to see what it feels like in practice.
This is exactly the sort of preconception and "I just don't believe it" reaction I think we need to overcome with evidence if our industry is going to improve its performance. Bizarrely, we seem to be spending far more time today worrying about issues like TDD and pair programming, which have much less proven benefit and get highly variable feedback from those who have used them.
I can't help but observe that those ideas are very accessible to a typical journeyman programmer versed in OO languages today, while shifting to something like a Cleanroom process, an Erlang-style software architecture, or the more declarative style that functional programming languages promote requires understanding concepts that for most people are radical and unfamiliar. We fear what we do not understand, and that is holding us back.
A final example on this:
> Extra specs and documentation might help reduce bugs, but probably not nearly so much as more/better testing.
On the contrary, working to good specs with a robust peer review culture is a highly effective quality strategy with an excellent RoI, and it more than stands up to any test-based approach I've seen. The evidence for this is overwhelming, if you choose to seek it out. Again, I refer you to Google Scholar, or a good bookshop. But again, you have to be willing to consider a culture shift and try something with a completely different philosophy to what you do today. You typically won't find this stuff on Cleany McCoder's Internet Echo Blog.
> Spending 3 times as long on a given development effort may yield some small quality gains, but the overall effect is a loss of value. A product that has 10% fewer meaningful bugs but has 66% fewer necessary features is a failure.
Perhaps, but what if it's more like a 10% overhead to better than halve the number of bugs (as in one small academic comparison I just quickly looked up on Google Scholar)? What if a factor of 2-3x in the up-front development costs really cuts your bug rate from 20 bugs/KLOC not to 18 bugs/KLOC but right down to about 1 bug/KLOC (not unusual for Cleanroom with established teams in industrial practice), with all the resulting savings in maintenance costs later as well as the user-visible improvement in quality?
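Just to put illustrative numbers on that, and I stress these are invented to show the shape of the argument rather than measured from any real project: take a 100 KLOC product, assume a plausible average cost to fix a defect after release, and compare the totals.

    #include <cstdio>

    // Back-of-the-envelope comparison. Every number here is invented purely
    // to illustrate the trade-off being discussed, not measured from anywhere.
    int main() {
        const double kloc             = 100;   // size of the product
        const double base_dev_days    = 1000;  // "normal" up-front development effort
        const double fix_days_per_bug = 0.5;   // average post-release cost per defect

        // Typical process: 1x up-front cost, ~20 defects/KLOC escaping to release.
        const double typical = base_dev_days + kloc * 20 * fix_days_per_bug;       // 1000 + 1000

        // Heavier process: 2.5x up-front cost, ~1 defect/KLOC escaping to release.
        const double heavier = 2.5 * base_dev_days + kloc * 1 * fix_days_per_bug;  // 2500 + 50

        std::printf("typical: %.0f days total, heavier: %.0f days total\n", typical, heavier);
        // 2000 vs 2550 days: the same ballpark, before you count the cost to users
        // of shipping 2000 bugs instead of 100.
    }

Shift the fix cost or the bug rates and the conclusion moves, which is exactly why this should be settled with measured data rather than gut feeling.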
I'm picking Cleanroom in particular here just because I happen to know a little about it, but my point isn't that we should all use Cleanroom. My real point is that there are different processes and design techniques and languages and tools out there, some of them very different to typical industrial practice today, and some of them objectively performing much better.
If any software group (I'm not just digging at Firefox, it's an industry-wide problem) wants to complain about maintenance costs and how it's impossible to do much better on quality without silly overheads, I think they should take a look outside the mainstream with an open mind before they make too many bold claims. In the meantime, claims about how it's too hard to both release quickly and maintain basic security patching on other branches (and here I am digging at Firefox's new release strategy) all sound a bit hollow.
For the process efficiency, I simply disagree with your assessment. I'm not aware of any unbiased works that indicate that using a heavyweight process results in even slightly higher quality without significantly slowing time to market. In fact it seems that there are almost no unbiased works in these areas. It's difficult to evaluate processes in general, because you're actually comparing teams. Most of the results I've seen have come from someone pushing their process, whether it's agile, or heavyweight, or whatever. There is a lot of anecdotal evidence that indicates faster iteration yields higher quality and more features, though. It's telling that so many of the industry leaders have moved that direction.
Yes, I think browsers need frequent releases in order to be competitive. I don't know if 3 months is the magic number, but it beats the pants off 1 year. Early standards proposals are refined inside the browsers that ship them first. This is one of the reasons HTML has been so successful, and it's what HTML5 has tried to continue: standardize what works. The opposite, where a committee pushes out a standard and everyone then tries to lamely implement it, is a broken process. Early shipping is a prerequisite for well-formed standards. Shipping a beta standard first also gives a browser vendor a competitive edge, because they can then argue in favor of their particular flavor of implementation becoming the standard, and they already have it implemented. Late shipping loses developer mindshare. "Oh, I guess I have to install Chrome to try the latest CSS widgetmagicwaffles."
I'll look into the cleanroom process you're describing, but so far you've made it sound unattractive. It seems like you've been describing a very heavyweight development process. As for TDD and pair programming, I haven't said anything about those. Pair programming is completely orthogonal to process weight, and I think TDD is overhyped. I feel its primary benefit is that it gets a bunch of unit tests written that help avoid regressions.
The problem with working with good specs is that they take a long time to get done, and they inevitably have mistakes that have to be corrected during dev, often major mistakes. While you're building this massive spec, all your engineers are sitting idle, or going brain dead doing spec reviews. I'm in favor of high-level specs, and the larger the project, the more specific the spec, but I don't for a moment believe that a complete spec is a worthwhile thing. Let's be honest, this is the waterfall model: gather requirements -> build insanely large spec -> implement to spec -> verify implementation -> maintain
I'll look into cleanroom, because it's not something I'm very familiar with, but I have my doubts. The "complete spec" model is extremely expensive. It makes sense for avionics. It makes a lot less sense for desktop software.
Well, you're obviously not going to take my word for it, nor would I want you to, so please do go and read up on some of those other ideas. You're quite right that a process like Cleanroom takes more up-front work, but in practice some of that is repaid in quicker testing cycles before shipping and in fewer, shorter delays to fix bugs afterwards. But as I said before, please don't get too attached to Cleanroom or any particular figures I've mentioned. They're just examples, to try to demonstrate that some popular assumptions don't always stack up when faced with the facts.