I think your example might actually support the parent's POV more than it refutes his stance. Exception handling isn't just slow in 1.8.7. It may be slower in 1.8.7 than in 1.9.2, but exception handling is universally slow. There are a variety of reasons for this, but unwinding the stack is fundamentally slow, and until there's native support for it on the die, that's likely to remain the case.
So, creating a situation where you're guaranteeing an exception is raised during non-exceptional execution flow is generally a bad idea. It may be cleaner or more concise code-wise, but performance-wise it almost certainly will always be slower.
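To make the comparison concrete, here's a minimal sketch of the kind of micro-benchmark involved: probing for a missing method with respond_to? versus letting the call raise NoMethodError and rescuing it. The object and method name are made up for illustration; only the relative cost matters.

```ruby
require "benchmark"

obj = Object.new   # responds to neither check's hypothetical method
n = 100_000

# Guard with respond_to?: no exception is ever raised.
check_time = Benchmark.measure do
  n.times { obj.some_method if obj.respond_to?(:some_method) }
end.real

# Rescue the NoMethodError: an exception object (with backtrace) is
# built and unwound on every iteration.
rescue_time = Benchmark.measure do
  n.times do
    begin
      obj.some_method
    rescue NoMethodError
      nil
    end
  end
end.real

puts format("respond_to?: %.4fs  rescue: %.4fs", check_time, rescue_time)
```

On any Ruby I'm aware of, the rescue loop is slower by a wide margin, because each raise allocates an exception and captures a backtrace before unwinding.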
For the most part I believe the parent is correct in that there is a much stronger emphasis placed on code conciseness/clarity than on avoiding performance anti-patterns. It's hard to measure whether the gains in "code agility" outweigh the local performance hit, since easier-to-read code also makes it easier to surface other performance problems. When there's less code involved, it usually follows that problems are easier to spot and fix. But I worry, because my own experience suggests that death by a thousand cuts is a considerably harder situation to get out of than a handful of gnarlier bottlenecks.
Except that exception handling is fast enough in 1.9.2 that this change was a performance optimization there. Look lower in this topic for benchmarks showing that Rails 3 under 1.9.2 is faster than Rails 2 under 1.8.7.
>performance-wise it almost certainly will always be slower
Except, it wasn't slower in the Ruby that most people running Rails 3 should be running (1.9.2).
I also point out, again, that they immediately fixed this problem when someone pointed out how slow it was in 1.8.7.
I don't really know what your background is, so apologies if I come off as patronizing. If you aren't familiar with how exception handling works under the covers, I encourage you to spend some time looking into it. It's really quite interesting and will likely impact your future design decisions in some way. Learning about longjmp and setjmp clarified a lot of things for me.
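In Ruby, the closest analogue to that setjmp/longjmp-style non-local jump is catch/throw, which skips most of what makes raise/rescue expensive: no exception object is allocated and no backtrace is captured. A small sketch (the method and the :found tag are hypothetical names of my own):

```ruby
# Non-local exit from nested loops via catch/throw. throw jumps
# straight back to the matching catch, making catch's return value
# the second argument to throw -- much like longjmp returning
# control to setjmp, but without raising an exception.
def find_first_negative(rows)
  catch(:found) do
    rows.each do |row|
      row.each { |x| throw(:found, x) if x < 0 }
    end
    nil   # value of the catch block when nothing is thrown
  end
end

p find_first_negative([[1, 2], [3, -4], [5]])  # => -4
p find_first_negative([[1, 2], [3]])           # => nil
```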
Now, I could be completely off-base, and 1.9.2 may be the only system out there that has managed to make exception handling a cheap operation. But I'm highly skeptical of that. I think the more likely scenario is that exception handling is much cheaper in 1.9.2 than in 1.8.7, but still not cheap enough to favor over virtually anything else. I haven't read anything in the linked issue or the following comments suggesting that using respond_to? was any slower than handling a NoMethodError exception. If it is, that points to a failure of respond_to? more than it trumpets the speed of Ruby exception handling. And pretending that exception handling is fast enough for flow control, and thus that other problems needn't be fixed, can land a community in serious trouble.
Now obviously there are exceptions to every rule. But the default position of any programmer really should be to reserve exceptions for exceptional circumstances because they are almost axiomatically slow, regardless of language or platform.
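A classic instance of the rule: parsing user input, where bad input is common rather than exceptional. Both helpers below are hypothetical names of my own, shown only to contrast the two styles.

```ruby
# Anti-pattern when bad input is routine: every failure pays the
# full cost of raising and rescuing an exception.
def int_via_rescue(str)
  Integer(str)
rescue ArgumentError, TypeError
  nil
end

# Validate up front; no exception machinery on the failure path.
def int_via_check(str)
  str =~ /\A[+-]?\d+\z/ ? str.to_i : nil
end

p int_via_rescue("42")    # => 42
p int_via_rescue("oops")  # => nil
p int_via_check("-7")     # => -7
p int_via_check("oops")   # => nil
```

If malformed input is rare, the rescue version is fine and arguably clearer; it's when failure becomes part of the normal flow that the cost adds up.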