Grad Student Who Took Down Reinhart and Rogoff Explains Why They're Wrong (businessinsider.com)
164 points by BruceM on April 22, 2013 | 93 comments



I have argued in the past that data and code need to be submitted with academic papers: http://blog.jgc.org/2013/04/the-importance-of-open-code.html This is just the latest example of why, IMHO, code needs to be open.


More specifically, for people not following jgrahamc's full story: he and co-authors wrote a paper making the argument that data and code need to be submitted with academic papers, and the pre-eminent scientific journal, Nature, saw fit to publish it. So the idea is getting attention; hopefully it will gain traction, too.


Information about the Nature paper: http://blog.jgc.org/2012/02/case-for-open-computer-programs....

My co-authors and I chose Nature as the target for the paper because of its reputation and because I believe in going to 11 (http://en.wikipedia.org/wiki/Up_to_eleven).


I didn't realize this was something that was still up for debate. In my experience source code has been a requirement since I entered the game, especially for high-impact journals.


At least as late as 2010, source code wasn't required for Nature or NEJM. People could request it from you, but it wasn't required to actually be submitted up front.


Most importantly -- when researchers know their data and methodology will be out in the open, they'll have a big incentive to make it look clean and presentable. They know they risk getting called out for excluding certain segments of the data set, so they'll have to at the very least add a small remark in the spreadsheet justifying their decision. It also makes it really easy for other researchers to pick other start and end years to test whether the result only holds for the original data. Which, again, encourages researchers to explain and justify the input data they've chosen.

In addition, researchers are likely to discover mistakes while cleaning up the Excel sheet, data sources and code before publishing them. Just like we find mistakes in our own work while refactoring and cleaning up code before we push it to GitHub.

So even when nobody ever looks at the data and code, we can expect the quality of the research to improve significantly: just because the code is there to be looked at.

I think the case in favor of making data & code public for academic research is pretty overwhelming.


I think having data open and available is good for everyone.

The more I learn about this particular incident, the more I feel the scandal is not (a) that famous Harvard professors made a mistake, or (b) that Microsoft Excel is more error prone than alternatives (which seems like nonsense to me, because the more complex the replacement, the more error prone it will be)...

To me the scandal is that you can be a tenured professor in economics and produce work that amounts to simple averages of widely available data and call it a research paper, and that people take you seriously, and presidential candidates use you as a reference, and your department doesn't bat an eyelash.

The fact that they screwed up seems incidental - mistakes happen.

It seems like the kind of back-of-the-envelope work that any old blogger would be capable of doing; we just don't have a way to take good ideas seriously, no matter where they come from and no matter how much we profess to try. At heart, the consensus is still status-driven, and pedigrees matter.


I think having data open and available is good for everyone.

Not necessarily everyone. Collecting good data is hard, long, and tedious, and the 'glory' part is the analysis. People get accolades for making analyses, not for good groundwork.


The argument against this, and I make this as a jaded and struggling graduate student so accept that bias, is that there is no acceptable outlet for negative results. There is also no acceptable outlet for those who consistently produce negative results and so, as I'm sure everyone understands, there is tremendous pressure to produce positive and interesting results. We hear about the times that this is forced due to manipulation and falsification and this is never okay, but there are a great many times where it falls into a nebulous gray area. We double and triple check our code, verify simulation results, and run down the checklists but that never guarantees that we do not have such "excel column errors".

If data and code are open, this is fantastic for science and progress because errors do not replicate and permute, but it can be terrible for individual scientists and graduate students. As always, it comes down to money. If we are forced to correct or, even worse, retract a scientific publication, the community (both local and worldwide) bears down on you like a knife. The stakes are so high that not only is falsification lethal, in many cases honest mistakes can be lethal as well. Conversely, one cannot take an eternity flipping every stone to bulletproof against every possible problem, and it can be very hard to identify which stones to check!

This is not a defense of closed work and closed data, but it's a realization that opening data and code is not a simple and straightforward process. There are severe and deeply embedded cultural problems that make post-publication sharing difficult.


I don't know your field, but the claim that there's no acceptable outlet for negative results isn't always true. There are usually outlets for interesting results, it's just that negative results tend to be less interesting than positive ones. Negative results that blow up accepted wisdom, though...

An example from economics: Meese and Rogoff [1] has (by Google Scholar) about 3000 citations, which is massive for econ, and is one of the key references in Intl Macro. They showed that all of the models in use at the time of publication sucked.

[1] Empirical exchange rate models of the seventies: do they fit out of sample? links to the paper here: http://scholar.google.com/scholar?cluster=170917352624567913...


http://jacquesmattheij.com/Hey+CS+Paper+writers+please+show+...

So did I... however, showing code and data should be mandatory for acceptance of papers. Errors are all too easy to miss, and since papers draw conclusions from the data, a paper without data should simply be refused as incomplete.

Why publications would accept a paper for review and eventual publication without giving reviewers the ability to actually check the results is a mystery to me. The current system basically amounts to 'you're going to have to believe me'. I singled out CS papers because there the code is pretty much the essence of the paper but of course the same goes for every other field.


Yes! We need to see more material under Matt Might's CRAPL license. I feel a strong urge to write off any result that doesn't have accompanying code with it.

http://matt.might.net/articles/crapl/


Great idea, awful awful name - CRAP license, as in 'Community Research/Academic Programming' = CRAP. Why would you make your concept the butt of jokes from the outset?


It's right there on the page: "It's not the kind of code one is proud of." By releasing it as CRAP, you're acknowledging that the code is imperfect, reducing the perceived barrier. If academics feel like you can only release code if it's pristine, it'll never get released, because for the most part, the incentives aren't there to make clean code in an academic setting.


This is to badly miss the point. What matters in this context is accuracy, not elegance.

If I brute-force detection of the first 10000 prime numbers (for some non-CS context) by testing their divisibility from n down to 1, it's horribly inefficient and programmers may laugh, but it's accurate, simple, and easily reproducible. Given the definition of a prime number as one divisible only by itself and 1, it may be better to present a brute-force algorithm than to get into a sideline discussion about validating some new-fangled method like the Sieve of Eratosthenes, that upstart.
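
For instance, a deliberately naive Python sketch of that brute-force approach (illustrative only) would be slow but transparent:

    def is_prime(n):
        # Naive check: n is prime iff no integer from n - 1 down to 2 divides it.
        if n < 2:
            return False
        return all(n % d != 0 for d in range(n - 1, 1, -1))

    primes = []
    candidate = 2
    while len(primes) < 10000:  # first 10000 primes -- deliberately slow, that's the point
        if is_prime(candidate):
            primes.append(candidate)
        candidate += 1

Anyone can read that against the definition of a prime and convince themselves it's correct, which is the property that matters here.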

Science aims at proof rather than efficiency. What matters is the quality of the result, and its reproducibility. If you can get to the same result in a much more timely fashion that's awesome, but that's an engineering achievement rather than a scientific one. I don't think that anyone wants to put out code labeled as CRAP in an attempt to mollify engineers.


Have you seen typical research code? It's crap. If you looked at it, your first reaction would probably be "Wow, this is crap." It's created in a hurry, usually grows by accretion, and the people who write it are usually pretty half-assed at programming, having made the solidly practical decision to focus more on the other aspects of their field. The commenting is usually lousy-to-nonexistent, the indentation is frequently screwy, and everything about it just screams "I am a temporary hack, written only to get publishable results."

And that's to be expected, because of how most scientific code gets written, and what the incentives are. And releasing this code would still be strictly better than not releasing it. We just need to keep the expectations of code quality low, which the CRAPL license does.


Only programmers have high expectations of code quality. If I'm a biologist, my target audience is other biologists, not programmers.

Your comment reads to me like you want non-programmers to ritually humiliate themselves by labeling their stuff as CRAP before you will deign to parse it for correctness. Patronizing people is not a good way to get them on-side.

We just need to keep the expectations of code quality low, which the CRAPL license does.

The purpose of the CRAPL is to make code accessible, not to lower expectations. It's not about what you think of the code, it's about whether the code yields correct results.


How to Design Programs is a great text in general, but I think it holds particular power for researchers. I still write some janky code, but particularly when I'm doing R, I find myself falling easily into the HTDP mindset.

Not to dredge up a functional vs imperative battle, but I feel FP has a lower impedance mismatch with mathematical concepts generally.


Acronyms don't have to be positive to be successful. CRUD is an example of this.


CRUD is often used as a pejorative though.


The name is a feature, not a bug.


Perhaps, but is this reflected by widespread adoption of the license? If not, then maybe it is actually a bug.


I wonder how hard it would be to find CRAPL (or otherwise open-sourced) research code that one would be able to contribute to (as a software engineer) in order to clean it up yet preserve function. (You know, add tests, refactor, etc.) I assume that a lot of that would depend on the public availability of test data to exercise the code with, but it's an interesting idea.


That's an interesting question. I'd guess that most code born of academia has a rather short half life in terms of immediacy for the author. However, the benefit of having any sort of second pass look at anything from a software engineer's perspective would be a major boon.

You might have better luck in a quasi-research public venue. For example, I work at a public agency that uses your tax dollars to forecast travel demand, and then uses the results of that statistical modeling (plus a thick shmear of political wrangling) to decide how to spend more of your tax dollars.

Much of this model code is developed as part of an honest-to-goodness research process (yay!) by the contract software developers that public agencies can afford (not so yay). In other words, things like revision control and unit tests are mostly dismissed as extravagances and unwarranted delays. The validation that is performed doesn't exactly inspire confidence.

I'm betting there are a million and one of these kinds of projects. Some of us are trying to figure out internal and external issues so we can post this sort of thing on places like GitHub. Others are already there.

If you pick a topic you're interested in, and ask the right people, it'll probably be worth some authorship cred.


I don't know how seriously this license should be taken. I've found at least one typo (in 'tested or verfied'), and a few ambiguous statements, for example: 'You make a good-faith attempt to notify the Author of Your work'.

Does that mean to 'notify the Author about Your work', or 'notify the (Author of Your work)' (i.e. yourself)?

If this license is meant to be serious then I should probably notify its author about these concerns.


Unfortunately the license terms are decidedly non-free and antithetical to the released software being used elsewhere.


That is the AER's usual policy [1]; R&R's paper was published in the May issue (Papers and Proceedings), which is for all intents and purposes a completely different and almost non-peer-reviewed journal [2]. I suspect (without any knowledge) that there will be some pressure to distinguish the AER and the Papers and Proceedings more after this episode.

[1] http://www.aeaweb.org/aer/data.php [2] http://www.aeaweb.org/aer/pandp_faq.php


I completely agree. This is the exact problem that we are trying to solve at my current startup, http://Banyan.co


I wish you luck, but I'm not sure a startup is the answer. What's needed is education of scientists and to gain their acceptance. Then a startup will have a market.


You've got a beautiful site, but please fix your SSL installation. Firefox did not let me see it without adding it as an exception.


Thanks for the heads up. We rolled out a deploy that looks like it broke a few things this morning. I'm on it right now.


For anyone interested, Herndon, Ash and Pollin did indeed release their data and code. It can be found here: http://www.peri.umass.edu/fileadmin/pdf/working_papers/worki... . Kudos to them.


One problem is that often the data is highly valued -- the team may have spent years building it. And, in academic circles, prestige comes from papers, not data sets. So the team wants to mine papers out of that data set for years.


One problem is that often the data is highly valued -- the team may have spent years building it.

That was clearly not the case with the flawed paper this OP is concerning itself with. So, not a valid excuse in this case.

At any rate, the team should learn to write papers as they build up the data set. That way they always have a little more data than anyone else working with it. Because without release of the data on which a given paper is based, there's no way to know if the paper is actually valid.


A good data set can have incorrect analysis written about it. "Clearly not the case" doesn't apply.


For me this was the 'tl;dr' quote:

"It would be absurd to think that governments never have to worry about their level of indebtedness. The aim of our paper was much more narrowly focused. We show that, contrary to R&R, there is no definitive threshold for the public debt/GDP ratio, beyond which countries will invariably suffer a major decline in GDP growth."


Slightly longer tl;dr, from the article's abstract:

"Our finding is that when properly calculated, the average real GDP growth rate for countries carrying a public-debt-to-GDP ratio of over 90 percent is actually 2.2 percent, not -0.1 percent as published in Reinhart and Rogoff. That is, contrary to RR, average GDP growth at public debt/GDP ratios over 90 percent is not dramatically different than when debt/GDP ratios are lower."


R&R weren't trying to claim that, were they?


I think they were. Speaking as someone who is generally pro-austerity, I think R&R concluded that after 90% debt/GDP there was a major drop-off in GDP growth. The corrections all but destroy this assertion. That's not to say that debt and GDP growth aren't inversely proportional/correlated -- just that there is no magic level that changes the relationship substantially.

Honestly, there are so many variables and dimensions in this system and so little data I'm not sure we should ever be drawing sweeping conclusions like this.


>I think they were. Speaking as someone who is generally pro-austerity, I think R&R concluded that after 90% debt/GDP there was major drop-off in GDP growth.

That doesn't mean they were asserting what the grad students claim, that the falloff in growth is unavoidable, which sounds like an extreme strawman.


R&R's defense strategy seems currently to pretend that they weren't claiming anything in particular by the paper originally: "hey man, I was just putting it out there. I didn't, like, mean anything by it."

But their own public pronouncements, and the way they responded to others' interpretations of the results, pretty much indicated that they thought they'd found firm evidence that there was some big non-linearity at 90%.


I'm not claiming R/R are saints. I'm just objecting to the ridiculous strawman implied by:

"We show that, contrary to R&R, there is no definitive threshold for the public debt/GDP ratio, beyond which countries will invariably suffer a major decline in GDP growth."

RR's claim is entirely statistical, so of course they're not going to claim that exceeding the ratio always ("invariably") leads to a major decline in growth.

RR's being wrong doesn't mean it's okay to insinuate they hold ridiculous positions that they really don't. That's just compounding public misconceptions.


Have you been following RR's op-eds, etc over the last two years? I have the impression that they've been less circumspect than you claim.


"X carries too much/undesirable risk of Y" != "X unavoidably leads to Y"

The former is what RR said, the latter is the strawman being attributed to them.


The claim by R&R is that yearly GDP growth decreases abruptly (i.e. in a non-linear fashion) when a country's debt exceeds 90% of its GDP. According to them, therefore, governments should have, in times of recession, debt reduction at the top of their economic agendas.

The response by Herndon doesn't insinuate that countries should disregard debt as a matter of concern. It doesn't even say that debt increase isn't correlated with GDP decrease. The main issue the paper points to is that US and European governments have, based on this thesis, adopted drastic policies of austerity and budget cuts, when historically there is no such nonlinearity.


According to them, therefore, governments should have, in times of recession, debt reduction at the top of their economical agendas

I'm not sure this is right. Rogoff said "back in 2008-9, there was a reasonable chance, maybe 20% that we’d end up in another Great Depression. Spending a trillion dollars is nothing to knock that off the table." I didn't see any policy recommendations in the paper.

The main issue that the paper points at is that US and European governments have, based on this thesis, adopted drastic policies of austerity

_Maybe_ for the US, but the timing doesn't match for Europe. For the UK, the usual candidate for self-imposed austerity, Cameron was arguing for austerity before the paper became well-known.


So, there are a few things I don't quite understand. In the original study, Reinhart had erroneously averaged 7 numbers together to get slightly negative growth. The 'correct' result is obtained by averaging eight numbers together, resulting in slightly positive growth. What is unclear to me is how either of these numbers is considered to be very significant. With so few samples in the average, I would draw the conclusion that both measurements agree with one another and that the error bars on their measured result are quite a bit larger than they seem to be suggesting. Is this common in economics?
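
For instance, a quick sanity check with made-up numbers (nothing like the real series, purely illustrative) suggests how wide the error bars on such a small average are:

    import statistics

    # Made-up yearly growth rates (percent) -- purely illustrative,
    # not the actual Reinhart-Rogoff figures.
    growth = [2.4, -3.1, 1.8, 0.9, -2.2, 3.0, 1.5, -1.0]

    def mean_and_se(xs):
        m = statistics.mean(xs)
        se = statistics.stdev(xs) / len(xs) ** 0.5  # standard error of the mean
        return m, se

    print(mean_and_se(growth))       # eight observations
    print(mean_and_se(growth[:-1]))  # drop one observation

    # For numbers with this kind of spread, the standard error of the mean
    # is larger than the mean itself, so small shifts in the average aren't
    # statistically meaningful.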


Macroecon has the simultaneous hindrances of (1) not being able to run a controlled experiment and (2) a dearth of data. It doesn't excuse people for being far too confident in the conclusions they draw, but it does explain why such large differences of opinions can persist (especially on such a politically charged subject). People can't be pulled away from their priors.


> Macroecon has the simultaneous hindrances of (1) not being able to run a controlled experiment and (2) a dearth of data.

Well, except #1 should be "not being able to run experiments in laboratory conditions". Statistical controls in an experiment are still controls, and it is still possible to run statistically-controlled experiments, though (because of the point made briefly by #2) sometimes of limited utility; the dearth of data (specifically, the small number of data points compared to the number of independent variables that need to be controlled for) is what limits the utility of many experiments with statistical controls in the field.


Even if you use rigorous statistical techniques -- and economists do under the heading of econometrics, even inventing new techniques -- it's still really hard to draw meaningful conclusions.

There are just so many variables. And the more interconnected the world becomes, the more variables there are. Point to any one correlation and you'll find a hundred statistical fingers pointing at correlations with totally different consequences.


> Even if you use rigorous statistical techniques -- and economists do under the heading of econometrics, even inventing new techniques -- it's still really hard to draw meaningful conclusions.

> There are just so many variables.

Right. That's the problem with too few data points given the number of independent variables that need to be controlled for. I addressed that explicitly.


Political constraints and welfare concerns effectively rule out "controlled experiment" at the present time.


Agreed.


Seems like a great argument for more transparency. This guy wasted a lot of time trying to guess what they had done. Given that publishing all the supporting materials is approximately free, perhaps journals could start requiring a git link that contains data, code, and paper drafts.


Many respectable journals already do this.

From Science: All data necessary to understand, assess, and extend the conclusions of the manuscript must be available to any reader of Science. All computer codes involved in the creation or analysis of data must also be available to any reader of Science. After publication, all reasonable requests for data and materials must be fulfilled. [...]

http://www.sciencemag.org/site/feature/contribinfo/prep/gen_...


Paper drafts? That's a surefire way to make sure people prepare their papers in secret. The Presidential Records Act didn't suddenly increase the transparency of the presidency; it ensured the president doesn't use email and prompted the Bush administration to move to a secret email server:

http://en.wikipedia.org/wiki/Presidential_Records_Act

http://en.wikipedia.org/wiki/Bush_White_House_e-mail_controv...

By all means, force researchers to publish all the tools necessary to reproduce their results. But you can't expect to set up surveillance in their head.


You might have a point, but I think that's a terrible analogy.

It could go as you suggest. Or it could be like a locker room: if everybody is naked, then nobody cares.

What made me add drafts to the list is Daniel Dennett's energetic description in Consciousness Explained of how he repeatedly circulates drafts of papers to colleagues for comment. At least in philosophy, that's an important part of the process.

Having to show interim steps would make fraud much harder, and it's a zero-overhead thing if people are already backing up their work.


Yes, I often circulate drafts of my papers to colleagues too. I don't post them for the world to see in perpetuity until they reach a certain level of quality.

Your shower analogy doesn't work because there's no way to force people to post drafts. We already have the option of posting drafts. It's called personal websites and/or the arXiv.


Well, the mechanism I suggested for forcing was journals requiring it for publication. People would be obliged to keep a version history of some sort. Careful writers do already, and it's easily automated, so I don't think enforcement would be hard.


"journals could start requiring a git link that contains data, code, and paper drafts."

But when the goal is to create a result rather than report it, transparency is the enemy.


> But when the goal is to create a result rather than report it, transparency is the enemy

I strongly believe that errors are a much more pervasive problem in science and related fields than malice is.


There's also a bunch of stuff between error and malice. As we saw yesterday here, it's very obvious in pharmaceutical research, where the stuff getting funded and published is often heavily biased despite the best intentions of everybody (or almost everybody).

I suspect there are plenty of similar issues in economics. People with money are much more likely to support researchers whose work benefits or protects people with money. E.g., I happened to read a paper from a U of Chicago prof arguing that insider trading is actually beneficial. Boy, I wonder who the big donors are there. Probably not Mother Jones Magazine.


The economic benefit of insider trading is a well established economic possibility. It's not a pure transfer to rich people.


> I strongly believe that errors are a much more pervasive problem in science and related fields than malice is.

The problem is that economics is not a field related to science. The vast majority of public policy economics starts with a conclusion, and then creates facts to support that conclusion. This is not science, it is religion.

(insert disclaimer about this being an overgeneralization)


> I strongly believe that errors are a much more pervasive problem in science and related fields than malice is.

In science generally, probably. In areas tightly connected to perennial areas of sharp ideological policy divides, like macroeconomics, I'm less convinced.


"science and related fields "

But we are talking about economics, and Republican groups like Fox News eat it up.

My favorite example is the 2011 chart distorting the display of the unemployment rate:

http://mediamatters.org/blog/2011/12/12/today-in-dishonest-f...


Fox News is not in the business of science. I used the term "related fields" as a euphemism for Econ and Psych because I didn't want to get drawn into a demarcation debate.

Wherever the line is drawn, the dishonesty of some random Fox News chart is not related to the honesty or dishonesty of actual scientific research.

The much bigger problem in science is error.


" didn't want to get drawn into a demarcation debate."

Econ and Psych really do lie in a gray area, and by avoiding the demarcation debate you completely miss the relevance of the issue at hand.

If policy makers weren't using this study to justify more austerity, then we probably wouldn't have such a prolonged discussion.


Distorting the truth about the state of the economy is not something confined to one partisan side, and the need for data transparency extends beyond economics regardless.


"is not something confined to one partisan side"

True, but I haven't seen a liberal think tank manipulate charts in that specific manner.


Then look harder.


At my startup, http://banyan.co, we are aiming to tackle transparency in academia using git. Our product is barely a few months old, but we're shipping new features & improvements daily.


> This guy wasted a lot of time trying to guess what they had done.

According to the article, many people wasted a lot of time attempting to recreate their results. Further, the paper is highly-cited and it probably shaped, directly or indirectly, opinions, further research directions, and possibly even policy.

We are focusing on the Excel error, which was likely a mistake. I'm really struggling though to justify their choice of weighting/averaging. It's confounding that two highly regarded and experienced academics would choose something so particularly bad. It absolutely should have been noted in the text.
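
As I understand the critique, part of the issue is that R&R average each country's growth first and then weight countries equally, so a country with a single year above the threshold counts as much as one with many. A toy comparison with invented numbers shows how much that choice can matter:

    # Toy comparison (invented numbers) of two averaging schemes for growth
    # rates of countries whose debt/GDP falls in a given bucket.
    bucket = {
        "A": [2.5, 2.7, 2.3, 2.6, 2.4],  # five country-years above the threshold
        "B": [-8.0],                     # a single (bad) country-year
    }

    # Equal weight per country: B's one bad year carries half the total weight.
    country_means = [sum(v) / len(v) for v in bucket.values()]
    by_country = sum(country_means) / len(country_means)

    # Equal weight per country-year: pool every observation.
    pooled = [x for v in bucket.values() for x in v]
    by_year = sum(pooled) / len(pooled)

    print(by_country)  # -2.75
    print(by_year)     # 0.75

Whichever scheme is defensible, the choice swings the headline number, which is exactly why it should have been spelled out in the text.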


From Jelte Wicherts, writing in Frontiers in Computational Neuroscience (an open-access journal), comes a set of general suggestions on how to make the peer-review process in scientific publishing more reliable:

Jelte M. Wicherts, Rogier A. Kievit, Marjan Bakker and Denny Borsboom. Letting the daylight in: reviewing the reviewers and other ways to maximize transparency in science. Front. Comput. Neurosci., 03 April 2012 doi: 10.3389/fncom.2012.00020

http://www.frontiersin.org/Computational_Neuroscience/10.338...

Wicherts does a lot of research on this issue, trying to reduce the number of dubious publications in his main discipline, the psychology of human intelligence. It appears that the discipline of economics needs help with data openness too.

"With the emergence of online publishing, opportunities to maximize transparency of scientific research have grown considerably. However, these possibilities are still only marginally used. We argue for the implementation of (1) peer-reviewed peer review, (2) transparent editorial hierarchies, and (3) online data publication. First, peer-reviewed peer review entails a community-wide review system in which reviews are published online and rated by peers. This ensures accountability of reviewers, thereby increasing academic quality of reviews. Second, reviewers who write many highly regarded reviews may move to higher editorial positions. Third, online publication of data ensures the possibility of independent verification of inferential claims in published papers. This counters statistical errors and overly positive reporting of statistical results. We illustrate the benefits of these strategies by discussing an example in which the classical publication system has gone awry, namely controversial IQ research. We argue that this case would have likely been avoided using more transparent publication practices. We argue that the proposed system leads to better reviews, meritocratic editorial hierarchies, and a higher degree of replicability of statistical analyses."


Note that the Reinhart-Rogoff article was not peer-reviewed.


It's not related to the article, but rather the site hosting it.

I noticed more than one fishy looking 3rd party ___domain loading while the page downloaded, so went into Inspector to see what was up. There are one or more resources loaded from each of the following domains, in many cases including javascript...

2mdn.net, scorecardresearch.com, bizographics.com, tynt.com, optimizely.com, google.com, sail-horizon.com, facebook.com, 247realmedia.com, akamaihd.net, vizu.com, pubmatic.com, imrworldwide.com, advertising.com, googlesyndication.com, doubleclick.net, chartbeat.com, sharethrough.com, fbcdn.net, skimresources.com, gstatic.com, stumbleupon.com, tynt.com, adadvisor.net, youtube.com, shareth.ru, agkn.com, yimg.com

'shareth.ru' seemed particularly suspect, until I realized it was probably sharethrough.com trying to be cute.

There are so many domains being trusted here the drive-bys could have drive-bys.


It's fascinating that part of the problem in the Reinhart and Rogoff paper comes down to an Excel formula error: http://nymag.com/daily/intelligencer/2013/04/grad-student-wh...



One columnist I read said something like "it will be interesting to see what historians make of the fact that a global economic policy was driven in part by an error in an Excel spreadsheet".


It's rare that we get such a clean natural experiment like this in economics; maybe future economists will thank R&R for cleverly introducing this negative exogenous policy shock (sarcasm).


Austerity was a big thing in Europe, and they mostly came to that decision before R+R's paper came out in 2010.


Journals shouldn't publish papers without all the supporting data and computer code necessary to reproduce the result.

This seems obvious, but established academics have a vested interest in not sharing. Sharing data and code opens their work up to impeachment (as is seen here), and it gives others a jumping off point to extend their work.

Both of these side effects of sharing code and data are bad for the careers of successful academics, but good for literally everyone else in the world (including, ironically, these academics). It should be a no brainer, but then again, who referees these papers? The very people who have the most to lose by such a change.

It will take a lot of clamoring from the outside to bring about a world where scientific work is considered not legitimate without full documentation of the experiment performed.


I recall hearing somewhere that the original paper wasn't actually published in a peer-reviewed journal, so requirements for data and code wouldn't really have applied anyway.

Perhaps the greater lesson is that we trust proper journals for a reason.


This was an interesting and well written followup but I'm disappointed in Herndon's decision to publish it on Business Insider. BI is generally full of blogspam and has interstitial ads.


Agreed. BI is edited by Henry Blodget, who was charged with securities fraud as an analyst, which led to him paying millions in settlement and being permanently banned from the securities industry. Really not where you want to publish anything meant to be taken seriously.

http://en.wikipedia.org/wiki/Henry_Blodget


Nonzero chance Herndon gets a percentage of the ad revenue.


True, but BI is actually really good for econ news.


More debate over 20 data points with no explanatory power one way or the other (as Yglesias pointed out). This is not uncommon in macro. Another recent case also centered on work by R&R:

http://johnbtaylorsblog.blogspot.com/2012/10/simple-proof-th...

(one of a host of commentary pieces on both sides... but essentially none in the middle)


I think this method of averaging should hereby be called the Reinhart-Rogoff Average, RRA for short.

I should program it into my business intelligence software, it's gotta be useful for something--like for cooking the books.


This whole situation makes me think: is there a static analysis tool for Excel? Seems like missing a few rows in an Excel calculation could be flaggable.
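
A rough sketch of what an external check could look like, say in Python with openpyxl (hypothetical filename, and only handling the simplest single-column AVERAGE case):

    import re
    import openpyxl

    # Flag AVERAGE formulas whose range stops short of the last
    # non-empty cell in the same column.
    wb = openpyxl.load_workbook("analysis.xlsx")  # hypothetical filename
    rng = re.compile(r"AVERAGE\(([A-Z]+)(\d+):([A-Z]+)(\d+)\)", re.IGNORECASE)

    for ws in wb.worksheets:
        for row in ws.iter_rows():
            for cell in row:
                if not isinstance(cell.value, str) or not cell.value.startswith("="):
                    continue  # not a formula cell
                m = rng.search(cell.value)
                if not m or m.group(1) != m.group(3):
                    continue  # only single-column ranges in this sketch
                col, last = m.group(1), int(m.group(4))
                data_rows = [c.row for c in ws[col]
                             if c.value is not None and c.coordinate != cell.coordinate]
                if data_rows and max(data_rows) > last:
                    print("%s!%s: range ends at row %d but column %s has data down to row %d"
                          % (ws.title, cell.coordinate, last, col, max(data_rows)))

A real tool would need to understand multi-column ranges, other functions, and intentional exclusions, but even something this crude might well catch the kind of slip at issue here.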


There sort of is one built-in. It usually warns you if you're doing things on a group of cells and you're missing one of them.


That grad student won't be getting the big job offers in industry; great way to destroy your career by biting the hand that feeds you.

(Sarcastic, but it's real life)


Doesn't matter, he'll now have a lucrative career being a USG debt apologist.



