At the risk of sounding like a nitpicker, as a statistician, I'll have to comment on the methodology. I believe the crucial clause in the article is
>when the research and analysis firm Forrester recently surveyed our readers about how much time they spend writing in any given language
When the results change over time as indicated by the two charts, that can mean one of two things: either a lot of people who worked in just one language in 2010 now work in several languages, or a lot of people who work in just one language have stopped reading Dr. Dobb's. In order to support the claim that it's mainly the first and not the second possibility, one would have to provide at least some supporting evidence. (Edit: at first I thought that one would want to contact the same people in both surveys, but no, that's not good. It leaves out the effect of new people entering the arena. This doesn't seem to be easy.)
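To make the second possibility concrete, here is a toy simulation with entirely made-up numbers: even if not a single developer changes how many languages they use, churn in the readership alone can move the numbers between the two survey waves.

```python
# Toy simulation (invented numbers): an apparent shift towards polyglot
# programming produced purely by readership churn, with nobody changing
# how many languages they actually use.
import random

random.seed(0)

# Hypothetical 2010 readership: 70% single-language, 30% polyglot.
readers_2010 = ["single"] * 7_000 + ["polyglot"] * 3_000

# Suppose 40% of the single-language readers stop reading before the next
# survey and are replaced by polyglot newcomers. No individual converts.
stayers = ["single"] * int(7_000 * 0.6) + ["polyglot"] * 3_000
newcomers = ["polyglot"] * (len(readers_2010) - len(stayers))
readers_2012 = stayers + newcomers

def surveyed_polyglot_share(readers, n=1_000):
    sample = random.sample(readers, n)
    return sum(r == "polyglot" for r in sample) / n

print(f"2010 survey: {surveyed_polyglot_share(readers_2010):.0%} polyglot")
print(f"2012 survey: {surveyed_polyglot_share(readers_2012):.0%} polyglot")
# The second number comes out far higher, even though nobody changed behaviour.
```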
And also, this perhaps being a matter of taste, I find it questionable to have a caption like "Fraction of programmers..." underneath the charts. As much as I respect and admire Dr. Dobb's (I have published there): not every programmer is a Dr. Dobb's reader.
I'm the author of the article. Let me try to address your thoughtful observations.
>This doesn't seem to be easy.
Quite agreed. It's not an easy thing to measure accurately. I believe the explanation for the numbers is indeed the first of the two options you present, as I wrote in the original piece. As the Dr. Dobb's readership has grown vs. 2010, both in terms of unique visitors to the website and subscribers, I don't think the second option is likely.
>not every programmer is a Dr. Dobb's reader
Quite true. This is a problem inherent in all surveys. The sample sizes for these two questions were 1,143 in 2010 and 500 in 2012, which statistically speaking would be fairly representative samples. The real rub is that programmers are not a homogeneous group, so the results will change a lot from one type of programming to another. For example, Dr. Dobb's does not cater much to embedded developers, so the effect that they would have on the charts is not captured.
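Just to put rough numbers on it, here is a back-of-the-envelope margin-of-error calculation, assuming, purely for illustration, a simple random sample and a worst-case proportion of 50%; it speaks to precision only, not to how representative the readership itself is.

```python
# 95% margin of error under simple random sampling, worst case p = 0.5.
# Illustrative only; it says nothing about representativeness.
import math

def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

for n in (1143, 500):
    print(f"n = {n}: +/- {margin_of_error(n):.1%}")
# n = 1143: +/- 2.9%
# n = 500:  +/- 4.4%
```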
If I assess the 2012 numbers based on what I know anecdotally, they seem accurate insofar as they capture the broad trend towards polyglot programming. What was counterintuitive, at least to me, was how much the trend accelerated in the last two years.
Sorry, I'll have to very strongly disagree on that one. If, for example, you conduct a public opinion poll in the United States, then all that matters is the size of the sample and your method of selecting the sample. If those are both adequate, then you can draw conclusions (with margins of error) about the entire population. Furthermore, you can repeat the same poll at different points in time and draw conclusions on the changes that you observe. What's happening here is different. Your population is all programmers. Of that, a subset is taken, namely, the set of all Dr. Dobb's readers. From that subset, you take your sample. I trust that the size of your sample and the method of taking it are fine. But we don't know if the subset from which your sample is taken is sufficiently random, and we don't know if and how it changes over time.
In short, you cannot draw conclusions about the entire population if your sample is taken from a subset that does not qualify as a random sample. That problem is very definitely not "inherent in all surveys," as you claim.
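To illustrate with a toy simulation (numbers entirely invented): if the subset you sample from is skewed, a larger sample only pins down the subset's value more precisely; it never converges to the true population value.

```python
# Illustrative simulation (invented numbers): a perfectly random sample from a
# non-random subset stays biased no matter how large the sample gets.
import random

random.seed(1)

# Hypothetical population of 1,000,000 programmers, 30% of whom are polyglot.
population = ["polyglot"] * 300_000 + ["single"] * 700_000

# Hypothetical readership of 100,000 in which polyglot programmers are
# over-represented (60% polyglot).
readers = ["polyglot"] * 60_000 + ["single"] * 40_000

def estimate(group, n):
    sample = random.sample(group, n)
    return sum(x == "polyglot" for x in sample) / n

for n in (500, 5_000, 50_000):
    print(f"n = {n:>6}: population sample {estimate(population, n):.1%}, "
          f"reader sample {estimate(readers, n):.1%}")
# The reader-based estimate converges to 60%, not to the true 30%:
# more data shrinks the margin of error, not the bias.
```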
You remind me of this[1] interesting piece of recent research.
It turns out that if you perform social science experiments to determine human behaviour and your sample is almost entirely drawn from a population of white American grad students, your results tell you an awful lot about the psychology of white American grad students, but not so much about human beings in general. Oops. Bang goes almost the entire edifice of modern behavioural science.
Then you spend the rest of your comment restating exactly what he said: namely, that programmers are not a homogeneous community. And, as he points out himself, the Dr. Dobb's community is known not to be representative of the whole, and he gives examples.
One more thing that stuck out like a sore thumb to me: you do not comment on the fact that the first graph (the chronologically later one) splits up the programming languages in a different way than the second one! It wouldn't weaken your point much to use the same key in both graphs, but you don't, and you don't comment on it, which is a little sloppy.
(C and C++ are two separate entries in the first graph, while in the second they are combined into one, adding up their scores and thus exaggerating the upward tendency of the curve in the higher-value range of the x-axis. Something similar happened with VB.NET.)