We're Incentivizing Bad Science (scientificamerican.com)
341 points by slowhand09 on Oct 29, 2019 | 181 comments



From a previous HN comment https://news.ycombinator.com/item?id=14022158 by dasmoth

“If you pay a man a salary for doing research, he and you will want to have something to point to at the end of the year to show that the money has not been wasted. In promising work of the highest class, however, results do not come in this regular fashion, in fact years may pass without any tangible result being obtained, and the position of the paid worker would be very embarrassing and he would naturally take to work on a lower, or at any rate a different plane where he could be sure of getting year by year tangible results which would justify his salary. The position is this: You want one kind of research, but, if you pay a man to do it, it will drive him to research of a different kind. The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it."

-- Attributed to J.J. Thomson (although I've not been able to turn up a definitive citation -- anyone know where it comes from?)


The quote is part of a speech given by J.J. Thomson in 1916. The speech is partially reproduced (including your part) in the book "The life of Sir J.J. Thomson, O. M. : sometime master of Trinity College Cambridge" by Lord Rayleigh (1942). The original book is on the Internet Archive [1]. The speech appears on pages 198-200, and your quote comes roughly halfway through the reproduced portion [see 2].

[1] Full biography: https://archive.org/details/b29932208

[2] Screenshot of relevant quote: https://imgur.com/a/qicgorD


Thank you so much for finding this. I am so happy to know that this is a real quote and not a mis-attribution to a famous person.

"Not all famous quotes are by famous people"

- Abraham Lincoln


> "Not all famous quotes are by famous people"

Googling this quote points back to RcouF1uZ4gsC as the original author.

https://www.google.com/search?q="Not+all+famous+quotes+are+b...


JJ Thomson was one of the great scientists of the late nineteenth and early twentieth centuries, and certainly famous.

https://en.m.wikipedia.org/wiki/J._J._Thomson


Completely agree about JJ Thomson. Just happy to hear it was he who said it, and not that it was a quote by someone else falsely attributed to JJ Thomson because he is famous.


Ah, I see - thanks for the clarification.


"Those who would give up essential control, to purchase a little temporary memory safety, deserve neither Liberty nor Safety" --Ben Franklin


Taken to an extreme, this is UBI, and I think we would all benefit to some degree from a system like that if we could get from here to there. On the other hand, I imagine there's quite a bit of research that requires expensive tests and/or apparatus to get right, and I'm not sure how you feasibly come to a situation where you have a bunch of hobby physicists on staff and provide them with a particle accelerator, but just for funsies.

I think allowing people to explore their passions is essential to bringing new ideas into fields, and will likely bring a renaissance to some areas of study, and at least portions of other fields, but I'm at a loss as to how it can help at the forefront of fields that require a lot of investment. Rocketry, for another example. Can anyone make a realistic case that another 10,000, or even 100,000 passionate people could achieve what SpaceX has over the last few years? I don't doubt they'd come up with many or all of the same ideas, but the testing of those ideas requires a lot of money.


There are plenty of people passionate about perpetual motion and chemtrails. Until we get to a point where resources are effectively infinite, we still need some form of prioritizing research funding.

Even then, playing Kerbal Space Program does not make you a rocket scientist. It requires a certain dedication to master the various physics and mathematics disciplines needed to contribute to even a part of a rocket, which is itself a significant investment that you won't find possible with UBI alone; there will still need to be some form of dedicated funding through either specific government programs or commercial enterprise.

Salary also has a bit of a sticky effect; people change fields less than jobs, and spend more time in jobs than in funsies hobbies. A rocket designed by a committee of hobbyists will likely perform much like any other design-by-committee process or product.


Well, taken to another extreme, it's Philip Glass working as a plumber and taxi driver (as he did) between compositions.

Robert Hughes, the Australian art critic, filmmaker and writer, wandered into the kitchen of his fashionable loft home in New York’s SoHo to see how the plumber was going, setting up his new dishwasher.

On his knees grappling with the machine, the plumber heard a noise and looked up.

Hughes gasped: “My god, you’re Philip Glass. I can’t believe it. What are you doing here?”

Glass, one of the world’s most famous composers, said afterwards: “It was obvious that I was installing his new dishwasher, and I told him I would soon be finished.”

“But you are an artist,” Hughes protested.

Glass said: “I explained that I was an artist but that I was sometimes a plumber as well, and that he should go away and let me finish.”


There have been suggestions that doing science funding via lottery (with some caveats) would be more effective and more efficient than our current grant proposal based system.


There should first be a good base of teaching assistant funding with enough free time to do research work. A lottery for extra PhD grants would indeed work well on top of this. Right now the key for those grants is often simply top-k grades.

Fun fact: many historical republics and democracies filled political offices by lottery. https://en.wikipedia.org/wiki/Sortition


> UBI

What is that?


Universal Basic Income


Universal basic income


> The only thing to do is to pay him for doing something else and give him enough leisure to do research for the love of it.

heh, I'm currently involved in exactly the process of constructing such an environment. And we are doing exactly that: there is a high-value deliverable that is relatively easy and predictable to achieve (but requires such specialised knowledge that it cannot be attained any other way than through very rare and highly skilled people), and then, to attract actually good people, the people themselves are free to take the rest of their job / time and apply it to solving whatever they see as the most valuable contribution they can make, on any time scale that is relevant.

On the one hand you can look at it as an embarrassing political fig-leaf; on the other, you can actually see it as optimal. I actually don't think completely detaching people from obligations to reality often works out to be ultimately optimal anyway. You need something to calibrate what you are doing against.


The Make-Work Principle: As long as you pay people to do something, they will find something to do.

(from the book The Truth About Everything by Matthew Stewart)


As someone who's currently trying to carry their company into data science (I know, we're late to the party, but homebuilding usually is), this is something that keeps me up at night. I've been hammering the idea into my superiors that if we're going to effectively administer a scientific process, we have to not only change how we manage expectations, but really, throw them out entirely. We have to see the general pursuit of knowledge as its own ROI, because odds are that tangible value won't come from a data science team for a long time.

But having seen how poorly we adopted agile, I'm skeptical this will happen (we devolved to "waterfall w/sprints" due to not properly managing expectations with our users). If we can't get people to move away from hard deadlines for regular releases, how are we going to make them wait for a properly verified hypothesis?

One solution I've seen is organizationally "hiding" science teams from users, and only allowing a select few to drive the direction of the team. But it still comes down to that select few properly managing everyone else's expectations.


Why is a homebuilding company the right place for basic research? That's an interesting choice to join the echelon of Xerox PARC, Bell Labs, Microsoft Research, and Google X.


What do you mean by 'basic research' and why would any of the organizations that you mention be incentivized to solve the problems that plague the homebuilding industry?


OP wrote that their mission was "the general pursuit of knowledge", not "solve the problems that plague the homebuilding industry", and that was a source of tension with their management.


This doesn't work for funding things besides the person's salary. If I want to buy a fancy new microscope or build a collider or even just buy materials, and I don't want to pay for it myself, I'm gonna need funding above and beyond "leisure." The LHC and ISS aren't leisure projects. Even equipment with a less insane scale is super expensive.


I’ve come full circle on this topic through grad school and dissertation and working in product companies.

I think “fundamental research” is not a thing, except possibly in mathematical theory. There is only incremental research + good salesmanship.

Research results should show up at a predictable regular pace, or else the result of that effort should be better & deeper explanation of a negative result which is also valuable.

If you’re sinking 2 years of R&D costs on something and you’re not getting the positive result you wanted and the negative results are not incrementally adding up to a clearer and clearer diff between the state you’re in and the state you’re trying to get to, then it is wasted money and the researcher isn’t being effective in their job.

I really think even difficult research tasks need to be rescoped and broken up into a series of incremental challenges, each of which has a known way to address it. You must treat it with reductionist dogma and eventually you’ll keep breaking it down into constituent parts that have known solutions until you hit on the novel problems to solve and it will be at a level of scope small enough that you can infer the solution from existing methods.

That’s all there is in the world. There’s no miracle cure for cancer or climate change or aging or social inequality. Let alone random business problems. There’s just a big bunch of little tiny problems with boring solutions that get all glued together into bigger messes that are hard to figure out. You can try smashing with a hammer or basically scaling the hammer up or down, that’s about it.

edits: fixed typos


> That’s all there is in [the] world.

That seems like a blanket assertion. It certainly seems like the most "manageable" mode of operating, but looking back at human history it seems like the "big breakthroughs" rarely happened that way.

The model of scientific research as harvesting the tail events (low probability & massive payoff) seems quite incompatible with what you've said.

That said, you probably think the way you do because of your experiences. Can you articulate that better?

> Research results should show up at a predictable regular pace, or else the result of that effort should be better & deeper explanation of a negative result which is also valuable.

To make a slightly more provocative claim, I think that this fetish for predictable/steady research progress is one of the primary causes of the problem discussed in the article. While papers can be generated steadily, piling on details doesn't necessarily make insight. (To quote Alan Kay: "A change of perspective is worth 80 IQ points")

PS: There's an apt SMBC comic (Aaargh, I'm unable to find it right now!) where researchers keep digging a tunnel linearly, and declare the field dead, while there is a big gold mine slightly off to the side.


I disagree. The perception that you need big discrete jumps forward in progress is what leads to the problem of the article. If instead you directly incentivize breaking a problem up into incremental goals from the start, and never pursue giant jumps of progress at all, then you can get it.

The problem is when people have been wasting time on big leaps of progress and then feel pressured to deliver something when they haven’t been pursuing incremental progress all along.


I disagree. In order to make a real scientific breakthrough it is necessary to take a leap in thinking.

Think of the special theory of relativity or quantum mechanics.

On the other hand:

> The problem is when people have been wasting time on big leaps of progress and then feel pressured to deliver something when they haven’t been pursuing incremental progress all along.

That’s why the most upvoted comment with Thomson’s quote is so true. You need to pay gifted people to do simple, predictably realisable things but give them lots of leisure time to pursue big leaps.

Many problems with universities and the academic system come from the fact that the granting system etc. sweats the small stuff, not leaving enough leisure time to pursue big ideas.


Why do you think special relativity or quantum mechanics are “big leap” things? Those are huge systems of theories comprised of thousands of different experiments and mathematical results driven by thousands of different researchers, almost all of whom were pursuing very small scope, narrowly defined incremental gains of knowledge. Even modern quantum computing follows this pattern. People are not solving it with lots of leisure time. They are solving it within private companies as full time researchers or via industry grants, federal grants, university pipelines of incremental experiments, and focus on very straightforward scalability of known existing systems and so on.

If anything, examples like these highlight why the Thomson quote _is wrong_.


> Why do you think special relativity or quantum mechanics are “big leap” things?

It has been more than 90 years since they were invented. At the time they were proposed, they were unexpected and quite unwelcome.

I definitely agree that right now the progress is more linear, distributed and steady, but I wouldn't exactly call quantum computing (an application of theory created ~90 years ago) a "big leap". I would rather compare it to the development of the heat engine from thermodynamics.

By big leaps I meant "paradigm-shifting" kind of things. These things cannot be expected to come from steady pipelines and projects with expected outcomes with anticipated results.


I have not seen an example of a singular thing that is “paradigm shifting.” Even with relativity, the overall set of concepts and results is paradigm shifting, but every individual contributor, including Einstein, pursued small specific incremental pieces and various postulations whose significance wasn’t determined until years later through incremental experiments.


This quote is better than the whole OP article.

However... A flaw in Thomson's idea is that some research is expensive, costing much more than leisure time.

Another approach is angel-investing / business-loan style: Give a large grant, wait a long time, and then call in the note -- to return research that justifies the investment, or repayment with interest.

Or X-prize style: Give outsize awards for outsize accomplishments, and let the researchers take on the risk.


> Give a large grant, wait a long time, and then call in the note

And what are the consequences if the scientist has frittered that money away? How do you decide if a scientist/manager is even deserving of that level of confidence in the first place?


A fine example of this is Jansky's discovery of cosmic radio emissions in the early 1930s. He found them while he was working for Bell Labs. Neither Bell Labs (of all places) nor astronomers were interested in further investigation.

That discovery would lead to radio astronomy ... no thanks to established interests.


But Bell Labs is remembered for things like the transistor, which had huge impact.

There's a basic assumption you're making here and which underlies a lot of the writing on this topic - that it's desirable, even morally virtuous, for research funding to be disconnected from application.

Bell Labs funded the transistor and not radio astronomy because apart from making cool TV documentaries, radio astronomy isn't actually useful for much. If we knew how to travel faster than light and explore the universe it'd be extremely useful, but we don't, so learning things about what a remote corner of the galaxy looked like a few billion years ago is easily argued to be a rather absurd waste of limited research dollars.

It's exactly what this op-ed in the Scientific American is talking about: a research field optimised to produce papers independent of any concrete economic utility function. In a world where such things get funded, what exactly should scientists be measured by? They can't be measured by market success because nobody cares or has any use for their output: their work is pure academic navel/star-gazing. So they pretty much have to be measured by volume of output or respect of their peers, both of which are closed and circular systems of measurement.

In my view the right fix for the science crisis is not to pay scientists to research whatever the hell they like with no success measurement at all: that really is directly equivalent to just firing them all and putting them on social security (or "UBI" as HNers like to call it). The right thing to do would probably be to just slash academic funding dramatically and reduce corporation taxes so corporate research can be given more funding. The net result would still be a drop in the amount of science done, but as Bayer's study makes clear, "not enough science" is not the world's problem right now.


Thanks for the reply. Yes indeed, Bell was famous for its many discoveries and applications. It was also famous (among science and engineering pros of its time) for granting its employees lots of time to spend on their own projects. Why they dropped the ball on Jansky's find is probably complicated.

The person who did lead the way to the (now enormous) field of radio astronomy was Grote Reber. He had a BSEE degree. It was his life-long passion, in his free time, at his own expense, and he had to struggle to get anyone to pay attention. He personally discovered Cygnus A in 1939 (and lots more). But he didn't get the physics Nobel in 1974. Instructive story:

http://www.bigear.org/CSMO/HTML/CS13/cs13p14.htm


It's worth distinguishing between different types of bad science.

1. The first and most egregious type is outright fraud. This is when you intentionally manipulate or fake data. Everyone agrees this is bad, and honest actors are enough to prevent it. In some cases, other honest actors are sufficient to determine if the claims are fishy.

2. The second more subtle type is not paying attention to adaptivity. For example, maybe an investigator wants to look at the data before coming up with a hypothesis to test. This is dangerous because the investigator is already overfitting, so any p-values the investigator computes afterward do not mean what they're supposed to mean. This is less egregious because it's easy to do this just by not being careful or not knowing your statistics very well. A scientist can be honest, but imperfect, and do this. It's also not easy to sniff this out as a reviewer -- the scientist might just omit all the stuff that didn't work. But there appears to be growing awareness of this kind of problem.

3. The third, and hardest to solve, problem is not factoring in the whole population of experiments. This is where 100 labs independently try an idea and one of them gets a genuine (from their limited view) result with a genuine p-value. It's novel and that lab has (in the limited view) been careful about adaptivity and keeping their hypotheses carefully generated. Maybe they've even used carefully generated noise to ensure their conclusions generalize [1] (which would definitely cut down on this problem). So it's pretty much impossible for a reviewer to tell there's a problem, because they don't know about the 100 other people that tried this and failed because the randomness didn't go their way. Short of a public experiment registry, this one is hard to fix, especially because it may be that nobody's being malicious or ignorant. (A small simulation after the footnote below makes this concrete.)

[1] https://arxiv.org/abs/1411.2664
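To see how little it takes, here is a minimal simulation sketch of the third problem (assuming Python with numpy and scipy; the lab count and per-group sample size are illustrative choices, not taken from the comment above):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_labs, n_per_group = 100, 30

    significant = 0
    for _ in range(n_labs):
        # Both groups are drawn from the same distribution: the true effect is zero.
        control = rng.normal(0.0, 1.0, n_per_group)
        treatment = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(control, treatment)
        if p < 0.05:
            significant += 1

    print(f"{significant} of {n_labs} labs found p < 0.05 under a true null")

Roughly five labs will cross p < 0.05 by luck alone; if only those publish, the literature reports a genuine-looking effect that no single reviewer could have flagged.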


I don't think people understand how bad the third one is. If 100 labs try an idea and only one publishes, then clearly we can get a spurious p-value of .01. But what if 10 labs try it, and 7 publish their non-significant results. Then, a meta-analysis takes the results of those 7 studies and combines them together. Here the meta-analysis can get a spurious p-value of .01 with an order of magnitude fewer original studies, and with most of them publishing their results. The culture of selective publishing and of parading any significant result no matter how small the effect size is incompatible with correct science.
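One way to model that scenario, as a rough sketch (again assuming numpy/scipy; published z-scores are combined with Stouffer's method, one standard meta-analytic choice): ten labs study a nonexistent effect, the three least favorable results stay in the file drawer, and the combined p-value of the remaining seven comes out significant far more often than the nominal 5%.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n_meta, n_labs, n_published = 10_000, 10, 7

    false_positives = 0
    for _ in range(n_meta):
        z = rng.standard_normal(n_labs)        # each lab's z-score; the true effect is zero
        published = np.sort(z)[-n_published:]  # the 3 least favorable results go unpublished
        z_combined = published.sum() / np.sqrt(n_published)  # Stouffer's method
        if stats.norm.sf(z_combined) < 0.05:   # one-sided combined p-value
            false_positives += 1

    print(f"meta-analytic false positive rate: {false_positives / n_meta:.1%}")

The exact rate depends on the modeling choices, but the direction is the point: a literature where most studies are published can still badly mislead a meta-analysis.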


How would a meta-analysis wrongly arrive at a spurious p=0.01 when 7 of 10 labs publish their (non-significant) results?


My advisor was guilty of pushing me to do #2. "Your original hypothesis was disconfirmed? Find one that the data does not disconfirm and that'll lead to a follow-up paper!" I dropped out of grad school because I assumed this was the norm.


I feel like evidence disconfirming a hypothesis is as valuable as evidence confirming it. How many other grad students are going to attempt the same experiment/concept if null results are never published?

I wish there were more praise for negative results in publication, because whether the hypothesis is confirmed or not, the knowledge has value.


I think it's partially embarrassing when that happens. Hindsight is 20/20, and people will think it was obvious that it wouldn't be the way you expected.

It's hard to argue that I had justification to think that novel intervention X would have an effect. It turns out it doesn't. Science is often very specialized, there's little chance others would have the very same idea. If it works out, the argumentation would have to be reversed: my idea was very novel and non-obvious but as I show it actually works, which no one would have guessed.

The negative result story only works if the research community would have very strongly expected to see the effect, almost reversing the role of the null and the alternative.


>The negative result story only works if the research community would have very strongly expected to see the effect, almost reversing the role of the null and the alternative

Depends what you mean by "works". If you mean "is reasonably publishable in the current academic climate", then I agree. If you mean "has value", then I disagree.


I mean "sufficiently impresses and/or catches the attention of others, especially other scientists".


> I feel like evidence dis-confirming a hypothesis is as valuable as confirming it.

In particular as that would mitigate problem #3 above ("3. The third, and hardest to solve, problem is not factoring in the whole population of experiments.")


That may not be a wrong assumption.


A good example of no. 2 happened last week. Biogen, a large biotech company, announced it was resurrecting a previously failed drug for Alzheimer's based on a post hoc analysis. The drug has the potential to be the biggest drug of all time, so the stakes are quite high for the company. But the data they have to support their product is quite inconclusive.

I wrote an analysis of their data here: https://www.baybridgebio.com/blog/aducanumab-analysis.html


Point 2 is fine if you perform orthogonal validation. In biology, it is not possible to formulate hypotheses prior to gathering data in all cases. What was the hypothesis for the human genome project? What hypothesis could you conceive before ‘looking’ at the genome?


I would think that #3 is actually easy to solve by increasing the statistical standards.

#1 and #2 are difficult to solve, given the near-exponential growth of funding for academic research.

https://www.researchgate.net/figure/Evolution-of-the-number-...


> #1 and #2 are difficult to solve, given the near-exponential growth of funding for academic research.

Could you show any data that supports the claim about nearly exponential growth of funding for academic research?

AFAIK the funding situation, especially considering the growing number of people in the field, is getting worse rather than better.


The astounding growth in the rate of publications (the chart I linked to) could not have arisen without a tremendous growth in the number of people, and in the funding. I don't think growth can be achieved without a decrease in overall quality of contributors / contributions, and indeed I wouldn't be surprised to see lower funding per author.


>> Short of a public experiment registry, this one is hard to fix, especially because it may be that nobody's being malicious or ignorant.

That seems like a really good idea, that would solve a lot of problems. Can we do that, please?


For some studies this works really well - clinical trials, for example. But a lot of studies don't fit into a clean "Single experiment to measure a single outcome" paradigm, and pre-registration becomes somewhere between difficult and impossible.


There is: https://clinicaltrials.gov/ I have a vague memory of reading somewhere that if you don't register your desired success condition in the trial ahead of time, even if you find something it's not considered a real result.


3. That's why you should not rely on p-values!


The real problem with science is the same as in politics.

We, the other scientists (just like the voters), incentivize certain behaviors and in doing so favor a certain type of scientist (and politician) to prosper.

There are all kinds of scientists (and politicians) competing for your trust (and votes). There are good and bad among them. As long as we reward the bullshitters more, they are the ones that will outcompete the others.

All these rules and regulations that people propose are ineffectual, as long as a certain level of self-criticism is not being applied:

- Stop believing and propagating the bullshit even if it seems to support your preconceived notions (or even the truth). This is very hard to do in practice.

As sad as it sounds: the greatest enemy of good science are the other scientists.

Feels like some sort of prisoner's dilemma: it only works if everyone does it; otherwise, it is best to just not admit anything.


A different framing of this is that politics is, more and more, intruding into science.

Paul Romer:

> Politics does not lead to a broadly shared consensus. It has to yield a decision whether or not a consensus prevails. As a result, political institutions create incentives for participants to exaggerate disagreements between factions. Words that are evocative and ambiguous better serve factional interests than words that are analytical and precise.

> Science is a process that does lead to a broadly shared consensus. It is arguably the only social process that does. Consensus forms around theoretical and empirical statements that are true. In making these statements, a combination of words from natural language and tightly linked symbols from the formal language of mathematics encourages the use of words that are analytical and precise.

from https://paulromer.net/mathiness/Mathiness.pdf


He also left his post as World Bank chief economist after he complained that the "Ease of Doing Business" index suspiciously went down for Chile while a socialist party was in power.


I agree with everything except for the jab at (paid) open access, which looks quite gratuitous to me. It may be true that "authors are willing to pay more to get their articles published in more prestigious journals. So, the more exciting the findings a journal publishes, the more references, the higher the impact the journal, the more submissions they get, the more money they make". But this is also true of non-open-access journals. Journals live off their prestige, and before paid open access was a thing, publishers still wanted to have high-impact journals so that university libraries would subscribe to them. It was a very distorted market (as it is now), but a market nevertheless.

While I often bash for-profit journals for being parasites that do little actual work and profit from withholding access to science that should be public, and for this I would open a bottle of champagne if they disappeared, I don't think journals have much to do with this particular problem of incentivization of bad science. Journals just respond to the demand of publishing more, and shallower, papers. That demand comes from hypercompetitivity in academia, where researchers need to fight for scarce positions and scraps of funding, often paired with too much bureaucratization (selection processes that look at "objective" and "verifiable" metrics like number of papers published at a given impact factor quartile, etc., instead of just asking a bunch of neutral experts whether the person is doing good research, which may be more opaque but also much more meaningful).

As evidence that journals are not the problem in this particular case, in fields like machine learning, where publication happens mostly in arXiv and conferences that don't charge for publishing or reading papers, the problems pointed out in the post also exist. Published models that only beat previous ones because they were lucky with random seeds or data splits are widespread.
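To illustrate the seed/split luck mentioned in the last paragraph, here is a toy sketch with scikit-learn (the dataset and model are arbitrary stand-ins, not anything from an actual paper):

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)

    scores = []
    for seed in range(20):
        # Only the random train/test split changes between runs.
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
        model = LogisticRegression(max_iter=5000).fit(X_tr, y_tr)
        scores.append(model.score(X_te, y_te))

    # Reporting only the best run would "beat" a baseline that is literally
    # the same model; the spread is pure split noise.
    print(f"accuracy: min={min(scores):.3f}, max={max(scores):.3f}")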


> researchers need to fight for scarce positions and scraps of funding

there is quite a lot of funding

> often paired with too much bureaucratization (selection processes that look at "objective" and "verifiable" metrics

this is the problem.

The whole OP reads like a bizarre hit piece on open access. How could scientists paying to publish their work incentivize them to publish more? How would spamming the world with more publications inflate a scientist's impact factor? (It wouldn't -- impact factor would be diluted by the spam)

It was always possible to self-publish and to cite self-published work, and even without journals, a modern scientist can publish on a free webhost for even cheaper than an open-access journal.


> How could scientists paying to publish their work incentivize them to publish more?

I assumed that the point was that the journal is incentivized to publish as many articles as possible, and hence to lower review standards.


Sure, but if the journal publishes too many articles it'll lose prestige. Nature publishes what, 8% of submissions? And that's apparently what they consider financially optimal -- they're a for-profit corporation, they could publish more if they wanted to. I guess the question is just whether this applies equally to open-access and traditional subscription journals.


But Nature doesn't make its living by publishing articles, it does it by selling subscriptions. This requires having high "prestige" so people want to subscribe to it.

If you're just charging for publishing articles, you don't care about whether anyone reads them or about what your "prestige" is, since you don't make any money off of that.

It's true that if I publish a paper it's better for me if it's read and thus cited, but that's much less of a difference compared to published vs not published at all. The entire problem starts with authors not being incentivized to publish a few good articles over churning out as many as possible.


I've noticed a trend across many scientific fields where publications are basically irrelevant since nobody reads them. Status is conferred by high profile talks, which tend to be invited. The material discussed in high profile talks is eventually published in a journal or conference proceeding (in the case of CS) and will get citations, but ideas only spread if they are picked by the program committees. The academics who publish many low impact papers are not really relevant.


This is not applicable to a lot of fields. Generally, only CS and ancillary fields are ones I've encountered where talks and conferences are the primary "currency". In biomedicine, talks will only follow after publications, and are often intentionally paired.


I think we also need to facilitate the work of reviewers. With so many papers and vast literature, reviewing is a PITA and errors keep slipping through the cracks. I'd like to use a simple wiki of "argument refutations", where we can look up previous objections to arguments being made. This is useful work that is currently being lost in the email archives of journal editors.

It would also help if we could open up science to people outside of academia, and begin the process of de-pedestalization of academia altogether, but not in an unregulated, completely flat way - academic discourse cannot be done on Facebook. We know it has to happen and it will happen, but we pretend the current situation can last forever. Academia is turning into a place that sells indulgences.


Television incentivizes forgettable reality TV, the radio incentivizes meaningless poppy music, social media incentivizes bickering about the controversies of today.

But, nowadays, you can also use your TV to watch a French arthouse film, to go to Youtube and be recommended a Japanese jazz album from 1974, to join the conversation on Twitter and ask questions to leaders in their respective fields.

Now you can swim against the current: force all these power- and money-hungry institutions to fundamentally change their tune. Or you can find one of the many new waves to surf. Life is good, science is good, progress is good. The choice, as a scientist, is up to you. Can't write one groundbreaking paper a year? Write two or three mediocre ones. No amount of foundational change is going to make you a groundbreaking scientist. And change the channel once in a while: the world is only getting bigger and more connected.


I agree with this article and have always thought about this but never voiced it.

I think a few things could improve the quality and discovery of published papers:

-After so many publications, could it be made mandatory for a random sample of that author's publications to be tested? I get that there is a limit on resources, but some advanced undergrads could do this with guidance.

-I would love to see some version of a journal of failures. That is, well-intentioned research that had poor outcomes. It happened so frequently in chemistry research that my compounds were useless or the methods to synthesize them did not work, and it would be helpful to document that. Unfortunately, there is no “Journal of Failed Chemistry.” Only research that ostensibly makes a contribution with a clear outcome gets published. So much time is wasted experimenting where you could save another scientist time and encourage them down another path.


>It happened so frequently in chemistry research that my compounds were useless or the methods to synthesize them did not work, and it would be helpful to document that.

This mirrors my experience where a lot of the post-docs carried war stories of things like lab/country-specific humidity playing a role in synthetic methods succeeding (or failing). There were a lot of dark arts/tricks of the trade that people carried around with them: stuff like going that extra mile to dry things of water super thoroughly (even if it was not mentioned in the paper we were referencing).


Many "dark arts" aren't known as dark arts to the people who know them. They're just the things you always do, because if you don't do them, nothing works.

An approximate CS analog: Writing great commit messages, and using an SCM.

You don't have to do them, nobody writes it up because those who know regard it as trivial, but if you don't do them, almost nothing works. Everyone who knows what they're doing does it.


>An approximate CS analog: Writing great commit messages, and using an SCM.

This is definitely not the analogue. The analogue is always doing a clean install of the OS before running your experiment. Or only ever using Arch version x.y.z for replicating lab 1 and maybe a.b.c for replicating lab 2.

It's knowing all the magic undocumented JVM flags ahead of running the application.

Some you know to use as part of war scars/best practices. Some are just pure inside information from working in that lab or having a personal relationship with people in that lab.


There are a bunch of journals like that. If you google "Journal of Negative Results", you will see several for different fields.

However, for some I know, it's really difficult to get in, because there are many more journals of positive than negative results, and there are probably many more negative results than positive... so your negative result had better be especially interesting.


I recently noticed the SURE journal, which is intent on publishing "unsurprising" economics articles.

I just wish there were more, or at least more journals rewarding sound methodology instead of unsound "results". Who knows how much effort is wasted reproducing sound but unremarkable studies simply because they weren't published.


Partly the issue is because of the naming... Like "poor outcomes" and "failed"...

Obviously there would be no awards for the poor outcomes and failed results and we would be back to square one where no one is incentivized to look at them positively.


What's stopping you from creating one? Motivate a few scientists to form a credible review committee, fill in one form [1], and voila, you have the journal you dreamt about on a recognized and perennial web platform.

This is of course some work, but if you really want it to exist it's better not to wait for someone else to do it. Also, allowing anonymous authors would help, I think (to avoid a name being associated with failure).

[1] https://www.openedition.org/15974?file=1

Edit: openedition.org is for humanities, but there are probably alternatives for other fields.


> What's stopping you from creating one?

Creating a journal is a tremendous task, and only highly respected researchers have any chance.

> if you really want it to exists it’s better not to wait for someone else to do it

Again, most people simply cannot create a credible journal. Let's be realistic here.


On failures: many years ago I stumbled across the idea of a `CV of Failures` where you list all of the accomplishments you missed/were rejected from, etc. I think it's a very humbling idea and can see some applicability here.

Example: https://www.princeton.edu/~joha/Johannes_Haushofer_CV_of_Fai...


This is a variant of "the carrot and stick" approach that happens to be very short on the carrot and very heavy on the stick.

Very unlikely to be effective.

Time and again people think that when they have a problem it must mean the punishment was not severe enough. It won't work.

The problem is that good science is not rewarded enough. Nobody rewards you for having published reliable stuff five years ago.


Unless the GP edited, there's no suggestion of punishment in his comment.


maybe a more serious version of the Journal of Irreproducible Results? https://en.wikipedia.org/wiki/Journal_of_Irreproducible_Resu...


There are other ways in which the incentives are misaligned. Granting agencies that prefer to fund researchers who nearly always "succeed" in proving their hypotheses will find that research proposals become low-risk and low-information. In at least one sense, the ideal experiment is one with a 50/50 chance of failing.

Disconfirmation is also important.
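One way to make the 50/50 point precise: the expected information in an experiment's outcome is its Shannon entropy, which for a binary outcome is maximized at even odds. A minimal sketch:

    import math

    def entropy_bits(p: float) -> float:
        # Shannon entropy of a binary outcome with success probability p.
        if p in (0.0, 1.0):
            return 0.0
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    for p in (0.05, 0.25, 0.50, 0.75, 0.95):
        print(f"P(success) = {p:.2f} -> {entropy_bits(p):.3f} bits per experiment")

A proposal that is 95% sure to "succeed" yields about 0.29 bits per experiment; a 50/50 one yields a full bit.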


I don't want to sound too triumphant, but over here in fundamental physics we have decades-old safeguards in place against all these problems.

The big experiments don't have publication bias: they proudly say exactly what they did, even if 90% of the time there are only negative results, because exclusions are important too. Experiments are inherently replicated, with multiple independent simultaneous experiments (LHC) or multiple independent analyses (EHT), data blinding throughout, and even occasionally a further layer of blinding using decoy signals (LIGO). The statistical standards for discovery are, in terms of p-values, roughly five orders of magnitude more stringent (the 5 sigma convention corresponds to a one-sided p of about 3e-7, versus the usual 0.05), and even still people are moving away from p-values entirely.
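For readers outside physics, a quick check (a sketch using scipy) of what the 5 sigma convention means in one-sided p-value terms, next to the 0.05 convention common elsewhere:

    from scipy.stats import norm

    for sigma in (2, 3, 5):
        # One-sided probability of a fluctuation at least `sigma` standard deviations out.
        print(f"{sigma} sigma -> p = {norm.sf(sigma):.2e}")

    # 2 sigma -> p = 2.28e-02
    # 3 sigma -> p = 1.35e-03
    # 5 sigma -> p = 2.87e-07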

The resulting publications are put out for free, publicly, on the ArXiv. Later they are submitted to a relatively small family of low-cost journals, which everybody knows the reputations of.

Hopefully some of these lessons can be adapted to other fields.


>The statistical standards for discovery are, in terms of p-values, roughly five orders of magnitude more stringent

But this has mostly to do with your subject matter being better behaved than others, no? The most profound changes and the healthiest research culture could not make 5 sigma a reasonable goal in psychology.


There's also a very healthy question of whether or not it's worthwhile. A p-value standard as stringent as fundamental physics' would mean leaving a lot of cures on the shelf.


Our statistical standards may be unrealistic for social science, but there was a petition a while ago to lower the cutoff from 0.05 to 0.001. I think that really would help.


While it's certainly more robust, it still has some issues. For example, I remember reading that because the initial result of the experiment measuring the size of the electron was too low, every experiment afterwards reported slightly bigger sizes until they reached the correct value.

Since they trusted the original result, if their results varied too much they discarded them. Ultimately they redid the experiment until they got something closer, and the only values that ended up getting published were results that were slightly bigger.

Correct me if I am wrong, as I read this story a while ago.


I think you're thinking of the oil-drop experiment for the charge of the electron, which was done 110 years ago (1909), and the experiments in the subsequent decades. Feynman, in 1974, so 45 years ago, used it as an example of problems in science. So the original post's point, that particle physics has been doing things more or less correctly for decades, is not tarnished by your example.

More details here: https://en.wikipedia.org/wiki/Oil_drop_experiment#Millikan's...


That's an old Feynman story, but he was kind of bullshitting to make a point. If you actually look at the history of measurements, it doesn't look like that at all -- it instantly corrects to about the correct value right after Millikan and hovers around that value.

https://hsm.stackexchange.com/questions/264/timeline-of-meas...


Actually, if you look at the follow-up answer, they added some more measurements that back up Feynman's claim.


Huh, I didn’t even know! The new answer really does change things.


There is a similar example for the speed of light as well.


This is where measurements by multiple groups come in. I would point out that in 2019 the basis for the SI base units was changed because of issues with the standards. The international community - led primarily by NIST and their counterpart agencies in other countries - noted, among other things, that measurements of the "standard kilogram" prototypes were changing measurably and that the replicates of the official prototypes differed from one another.

They proposed and agreed upon new standards based upon measurements with better precision and accuracy. See the Wikipedia summary [1]

[1] https://en.wikipedia.org/wiki/2019_redefinition_of_the_SI_ba...


This is exactly the sort of thing that blinding is there to prevent. It is the same reason we in cosmology blind our data - to prevent us from biasing ourselves towards what other experiments have found.

Also, the charge-of-the-electron experiments you are talking about happened over 100 years ago, so not that relevant to today :)


Seems like a bias-variance tradeoff. It's not obvious to me that what they did was bad.


Yeah but then you have different problems.

Hundreds of authors on each publication. Whose contributions are real and who is just cruising?


I mean, can't you say that for any large group of people? You can't easily evaluate who's pulling the most weight at Google either. But that's not a problem, because the people at Google deciding promotions do know, and that's what matters. CERN works like that, as does experimental physics at large.


Most positions at CERN are fixed-term. Afterwards you have a problem.


If you leave CERN to apply for a postdoc or professorship somewhere else, they ask for lots of recommendation letters. That's how they learn what you actually did.


Do you think a recommendation letter is a more objective and better evaluation of what someone actually did? That there is less embellishment, less subjectivity in it than in a scientific paper that was meant to reflect on your work as a scientist?

I would expect that recommendation letters heavily favor some individuals for subjective reasons.


As does hiring when a Googler goes to some other company. Nobody is going to see their commits. They have to go on their word, their technical skills, and their references.


The fact that there are other instances when people make subjective decisions is not quite relevant in this instance.

We are talking about the case where authorship on a scientific publication is insufficient evidence that someone had any contribution whatsoever to the paper. So one needs a recommendation letter, in addition to the authorship, where the recommendation letter would presumably state that this individual did actually do some work and is not just cruising.


No, in Spain you do not have that choice...


At the LHC the number is closer to 3500. I "cruise" on around a hundred papers each year: I haven't read them and in all likelihood never will. At the same time I worked really hard on one or two of them.

I have tried to figure out what would be required to remove myself from 98% of the papers we publish, but it turns out to be a lot of work (essentially you have to argue with the head of the experiment, who has other things to worry about).


this would be tough for me; I will not allow my name on a paper unless I read it before publication (and had some sort of useful, direct contribution). I guess large physics collaborations are very different from small bio projects, but still, I wouldn't want my name associated with a paper that later got retracted for something I would have caught.


Well when you do your PhD in particle physics it's just sort of accepted.

And I think it's tough on a lot of people: many people in the experiment feel that they should read every paper, because they feel personally accountable to it. They feel they should insist on changes because otherwise it will reflect badly on them.

To me that's an enormous amount of work and gets in the way of the really interesting science. Within the field everyone knows that I'm not directly accountable for a paper just because my name is on it. Most work is done by small teams of less than 20 people (sometimes just one or two) so it would be absurd to ask me to fully understand, much less feel accountable for, 95% of what comes out.

Personally I'd love an opt-out option. I don't think my colleagues should feel burdened by any weight that my name adds. Beyond that it's a convenience thing: I was updating my CV and thinking "if only there was some automated system to keep track of the papers I contributed to...". Incidentally, we have such a system, but it's only visible within the collaboration.


Maybe this just shows that using authorship credit as a proxy for contribution is a bad system.


Determining attribution for scientific contributions is a far less severe problem than having high-quality scientific contributions in the first place.


> 90% of the time there are only negative results

That’s an understatement.


What people forget is that this was true in the old days as well. Even back in the 60s and 70s there were over ten accelerators throughout the world constantly doing science, and you can bet they were booked up with new speculative searches every month. The vast majority found nothing, and were forgotten. That's because there's a very finite number of things to discover in fundamental physics, which is of course what makes the field so exciting in the first place.


Of course. It’s actually pretty awesome that every time the energy frontier was pushed, something new was discovered, so there was return on every round of investment.


cough 750 GeV cough... ahem. Let me sing you an OPERA about how Andrei Linde was astonished by the biggest BICEP you've ever seen! It belonged to a LIBRA* that was faster than light and could fly through walls!

* Couldn't get DAMA to work :(


Sure, but there's a difference between the accuracy of speculative theory and experiment. Experiment is very reliable -- they never called the 750 GeV bump a discovery of anything, because it didn't meet the statistical standards. Even things that do meet the statistical standards are often not called discoveries, because the experimentalists think there are systematic errors at play, as we see with MiniBooNE and DAMA.

Speculative theory is not reliable, which is why all the papers on the 750 GeV bump were wrong. But it has never been reliable, because, well, it's speculative! For each new phenomenon there's probably only one right explanation, but far more than one paper. Being 99% wrong has been par for the course for almost a century.


I mean, if CMS just hadn't played up their 2.3 sigma excess we probably wouldn't have had the 750 bonanza* so don't pretend there is no blood on the experimental hands :p

OPERA is a bit of a special case, but BICEP just straight up declared discovery and wanted the Nobel, have you seen their video of when they tell Linde?

* Any time I can, I link to the conclusion: http://resonaances.blogspot.com/2016/06/game-of-thrones-750-...


I agree that BICEP was the worst debacle of this kind that’s occurred in recent memory. But also see how the safeguards worked: their results were properly questioned, the hype was over in months, and they did not get a Nobel.


> BICEP just straight up declared discovery and wanted the Nobel, have you seen their video of when they tell Linde?

That’s a bit exaggerated. You mean the champagne-popping video? Well, it’s somewhat awkward in hindsight, but a bit of excitement was warranted at that time. I was at Stanford around that time, might even have talked to Andrei right after, and don’t recall a festive atmosphere or anything.


I feel bad for his wife every time I see it now :(


She’s also a Stanford physicist. Great person.


The high profile debacles were pretty quickly caught, so I’d say the system is working.


There are two aspects: how many errors are published, and how long before they are caught? The BICEP debacle is an example of a claim that should have gotten caught before it was published; doing an important calibration step using a photo of preliminary data?!


Actually, the error was caught before publication. They sent it to ArXiv, the standard pre-publication space, so that other physicists could read and comment on it. It was there that the error was quickly caught.


Oh, I didn't know that. I thought with all the announcements they made, they had a paper ready to publish...


Well, it's all a little ambiguous. You can choose to pre-publish or not, and you can choose to announce whenever you want: before pre-publishing, after but before publishing, and at the moment of publication itself.

It's certainly true that they announced early, but it's also true that the community at large regarded it with appropriate skepticism, causing the whole thing to be self-corrected in months.


"Stanford Professor Andrei Linde celebrates physics breakthrough"

https://youtube.com/watch?v=ZlfIVEy_YOA


>> Hopefully some of these lessons can be adapted to other fields

the primary lesson here is to get a commitment for decades of government funding


Preferably in the billions of dollars. Nothing generates five sigma confidence levels like ten-figure budgets.


Sorry, do you think that fundamental physics gets more than other fields? Have you ever seen a breakdown of research budgets?


I don't have such a breakdown, but I do know that the budget of the LHC alone is a billion dollars a year plus $13B to build it. That wouldn't have been possible if it weren't for a long-term commitment.

And I know that few fields get that kind of commitment. Maybe cancer, maybe something in semiconductors. Lots of other fields would produce things if you offered them many years of billion-dollar commitments.

I'm not saying that it was wrong to give so much to fundamental physics. I'd just be very happy if we gave that level of funding to lots of other things.


CERN and the LHC are to me prime examples of bad science. You build huge machines at incredible cost, wasting hundreds of thousands of man-hours of brilliant young people’s time. They are paid almost nothing (a checkout clerk at a Swiss supermarket makes more than a PhD student at CERN). Then you string along a subset of those for years, exploiting them for further cheap labor, somehow making them believe they are “lucky” to get that opportunity (a postdoc at CERN pays a fraction of what you can make at Google Zurich). In the time between experiments people only ever see simulated data, leading to a rude awakening when actual experimental data comes in (cf. ALICE’s disaster of an analysis pipeline). Then there are literally decades of overpromising on groundbreaking discoveries right around the corner (supersymmetry, extra dimensions, dark matter). Defunding the super collider in the 90s in the US was probably one of the best science policy decisions they made.


> They are paid almost nothing (a checkout clerk at a Swiss supermarket makes more than a PhD student at CERN). Then you string along a subset of those for years, exploiting them for further cheap labor, somehow making them believe they are “lucky” to get that opportunity (a postdoc at CERN pays a fraction of what you can make at Google Zurich).

That’s not unique to LHC, or CERN, or physics. It’s a general problem of academia, where PhDs and postdocs are paid a pittance compared to what they could otherwise earn in the industry. This problem is especially bad in high energy physics of course, since jobs are especially limited, and it’s the brightest people competing against each other, who could easily land jobs on Wall Street or Silicon Valley.

> Then there are literally decades of overpromising on groundbreaking discoveries right around the corner (supersymmetry, extra dimensions, dark matter).

Standard Model works exceedingly well at LHC. No one was actually sure about BSM (beyond Standard Model) so there was no “promise” really. Or the promise is: we may see something interesting, or we may disprove some otherwise interesting theories.

> Defunding the super collider in the 90s in the US was probably one of the best science policy decisions they made.

Cancelling SSC was such a stupid waste of labor and money, it’s painful to see someone touting it as a triumph. Two words: defense budget. Enough said.

Disclosure: I worked for CMS for a while. (Not physically at CERN; was doing data analysis for CMS in the U.S.)


> Cancelling SSC was such a stupid waste of labor and money

No, it wasn't. The construction site was chosen for political reasons and the entire bidding process was beset with problems that would have made it cost even more money, run into a ton of problems (for example fire ants causing multitudes of delays and groundwater seepage making the installation of sensitive electronics probably impossible), and probably not have been finished. The administrators kind of knew it was going to cost more than they pitched even under best of circumstances and were counting on funding momentum to keep it going (well we put this much money into it so we can't just stop now).

What the US did with the SSC money was to put it into LIGO instead.

source: growing up my neighbor was this guy: https://www.npr.org/2019/05/19/723326933/billion-dollar-gamb... and he told me about these things.


I'm pretty positive that people made firm predictions and bets that the LHC would see superpartners (because otherwise the "naturalness" argument would go away). Whether it would be the MSSM or something else was up for debate, but people thought it would be more likely than not that they would see them if the Higgs was found in the predicted energy range. In any case, physics departments at top universities all over the world are stacked with phenomenologists who made their careers working out these predictions.

Anderson made an argument against the SSC https://www.the-scientist.com/opinion-old/the-case-against-t..., which I pretty much agree with. Science funding is finite, and so is the physics talent in a country. Many really good students are funnelled into dead-end careers in high-energy physics (whether theoretical or experimental). It's just a huge waste of human potential, especially given how ruthlessly they are exploited. I know people in the field; a hiring decision between three people was recently described to me as a choice between a 'social case' and two competent workers, one of whom happens to be a friend of mine.

Funnily enough, lots of institutions doing fundamental research in high energy physics either also do military research or receive military funding. Most of Witten's work, for example, has been funded by the Department of Energy. The whole reason CERN was built in a neutral country was that people worried a postwar nuclear arms race would otherwise break out. In France, one of the major institutes contributing to particle physics (the Saclay Nuclear Research Centre) also developed the country's nuclear arsenal and is located next to a major arms manufacturer's research center (Thales).


> more likely than not

Yeah, “more likely than not” isn’t a promise. Sure, a lot of people firmly believed in their theories, so me saying “no one was actually sure” seems wrong, but I was talking about a different kind of “sure”. The community overwhelmingly agreed on the SM, whereas there were huge divides on where the BSM bets were, or even, on roughly the same bet, on where the SUSY scale lies, etc.

> Many really good students are funnelled into dead-end careers in high-energy physics, ...

I was one of the funnelled. We signed up because we were drawn to the fundamental questions, not because of glowing job prospects, which were largely laid out for anyone paying a little bit of attention. Cancelling things and decreasing funding certainly didn’t help; it only led to worse “exploitation”, in your words.

> Funnily enough, lots of institutions doing fundamental research in high energy physics either also do military research or receive military funding.

Institutions do lots of things. Most also receive funding for medical research, so?

In general, modern-day HEP in and of itself hardly contributes anything to the military sector. On the more practical side, powerful magnets, computational methods, etc. should be useful in military applications, but a lot of different areas have such second-order effects. Nevertheless, I’m neither knowledgeable nor enthusiastic about killing machines, so I could be missing some obvious connections.

> Most of Witten's work for example has been funded by the Department of Energy.

Why would you put all DOE funding under defense budget? It’s not DOD. Or would you characterize all renewable energy spending as military spending too?

> In France one of the major institutes contributing to particle physics (Saclay Nuclear Research Centre) also developed their nuclear arsenal...

Particle physics has largely moved on from nuclear physics. (I know, many particle physicists are still interested in cold fusion etc.)


Want to throw out that PhD students at CERN & ETH/EPFL are still paid a living salary. Go outside the ETH system and you can make a fair amount more (nothing considerable, but equivalent to what a high school teacher makes in the States, which for a PhD student is nothing to sneeze at).


> Then there are literally decades of overpromising on groundbreaking discoveries right around the corner (supersymmetry, extra dimensions, dark matter).

Every statement in your diatribe would seem to be predicated on the assumption that discoveries in theoretical physics (what you're complaining about) will result in revolutionary changes to your day-to-day experience (i.e., applied physics, roughly) in a short enough span of time that you can actually enjoy them with an able body.

Probably not. Flight/space and semiconductors were notable exceptions, and they were largely driven by the century of hot and cold war that we know as the 1900s.

Bad science is science that fundamentally doesn't yield a new understanding of the world, either by falsifying theories or confirming them. What you described is:

1. failure to meet exaggerated lay expectations, and

2. poor working conditions in the nonprofit space.

#2 (and possibly #1 depending on where the theoretical discoveries take us) can largely be resolved by pouring more money into science, not less. I'll point back to our century of war if you need past precedent.

Edit: removed a line that was unnecessarily insulting. Sorry about that.


> Every statement in your diatribe would seem to be predicate...

I would be happy if they found any experimental evidence for their predictions. If you have followed the field, the majority opinion was basically that supersymmetry would inevitably be found at the LHC, as long as the Higgs mass was in a certain range (the naturalness argument). It turns out it (narrowly) was in that range, but supersymmetry was not found anyway. Arguably a lot of particle phenomenology in the last 20+ years was bad science in that way, because no new phenomena had actually been observed/measured that needed model building. Of course there is nothing inherently bad about conjecture or theory, but given that there are lots of unsolved problems in physics on which progress was made in that time, a lot of what they were doing seems like a huge waste of time.

Philip Anderson made his case for refocusing physics a long time ago: see his case against the SSC https://www.the-scientist.com/opinion-old/the-case-against-t... and "More is Different" https://science.sciencemag.org/content/177/4047/393.


> Arguably a lot of particle phenomenology in the last 20+ years was bad science.

Perhaps that wouldn't have gone on so long if the SSC wasn't fucking cancelled in '93.


> Arguably a lot of particle phenomenology in the last 20+ years was bad science in that way, because no new phenomena had actually been observed/measured

That is an interesting observation. Would you be willing to guess specifically why there was such a long gap between the LHC and the previous collider? The answer is already in your comment.


Indeed, flight and space have applications, which is why they (completely justifiably) get more funding. People seem to forget this, but the space shuttle alone was funded over 15x more than the LHC was, and by a single country rather than many.


I kinda wish this comment focused more on bad science rather than bad pay. If bad pay implied bad science, then most of science would be bad!

I suppose I'd be much more interested in learning about the latter half of your comment (specific scientific failures).


Nobody forces academics to work for low pay. I mean, that's what I'm doing right now and I certainly don't feel exploited.

> Then there are literally decades of overpromising on groundbreaking discoveries right around the corner (supersymmetry, extra dimensions, dark matter).

Suppose all TeV-scale colliders had been defunded in the 90s, including the LHC. Then we would have forever been stuck with the strong suspicion that a Higgs is there, but no proof, along with untested but plausible-sounding, well-motivated speculation about what else could be out there. Is that not worse than actually knowing?


> I certainly don't feel exploited.

Sure, this is your subjective perception, maybe fuelled by the positive reinforcement that rituals of accomplishment provide and the fact that working on physics is really fun. That's fine, as long as you realise that you are playing a game that is unlikely to yield any economic or societal value (in the case of high-energy physics) at any time scale, with a group of people who have been playing the same game for >20 years with no discernible progress except for the discovery of one particle predicted >40 years ago.

> Suppose all TeV-scale colliders [..]

Well, this is hypothetical, but what would most likely have happened is that the ~10,000+ scientists involved with the LHC and high-energy physics would have gone into different subfields of physics. Hopefully the same would have happened with funding (we don't want it to go to biologists, do we?). Since high-energy physics still attracts some of the best students, this would have disproportionately improved human capital in other areas of physics. Whether the Higgs was there or not was never a super pressing concern anyway. Condensed matter physics, biophysics, environmental physics, and all kinds of (mostly experimental) quantum physics still have discoveries to be made on budgets of ~$1-10 million. Not only that, a lot of those discoveries will have society-level consequences on a time frame of decades. We are, in contrast, very unlikely to derive any benefit from studying the energy scales high-energy physics has reached. For those reasons I think funding high-energy physics is a huge net negative for overall societal progress.


While it is true that nobody forces you to work for low pay, there is definitely misleading advertising at play.

Most young people who pick science do not understand the "real" rules of science, and what it takes to "succeed". Being a good, reliable, hard-working scientist will not ensure your success.

Whereas in many if not most jobs, being good at the job itself usually suffices.


I know what the rules are. In fact the community seems to be extremely transparent about it -- there are sites that keep track of all job offers out there for you, along with historical statistics.

The struggle for jobs is the result of the government funding structure, plus supply and demand. It's not perfect, but it's infinitely preferable to the older system, where you could reliably do science only if you had great personal wealth, or were favored by somebody who did.


First and foremost, we are talking in general, about young scientists. Do you think they all know what they sign up for? I, for one, having seen many hundreds of them, don't think so.

Then I will say that job offers are not relevant here. Who gets the job offer? What do they have to actually do in that job? Where are they going to be in 10 years? What kind of life, job security, job conditions, and career advancement prospects do you have over the long term? For most scientists, these are the most nebulous concepts ever.

You may think you know what the job is - maybe, I wouldn't know.

What I do know is that most people do not understand that a PI (principal investigator) at a university does nothing even remotely similar to what a postdoc (who wants to be a PI) does. The churn and exploitation are very common.


> ALICE’s disaster of an analysis pipeline

Any references to this? First I've heard of it.


You can take a look here: https://github.com/alisw/AliRoot


It is true that moving protobufs is more profitable than moving protons, but I think CERN represents a fundamentally interesting and important scientific endeavor.


Yes, well, every solution we come up with invariably favours well-established scientists... How can a new scientist get anywhere if it takes 10 years to produce substantive work?

There is a conceit in the final paragraph, where it is implied that we are missing out on cures for diseases etc. due to wasteful scientific endeavours. This is not necessarily true. There have been many successes in the current era of medical science. Generally these are driven by technological advances such as monoclonal antibodies or next-generation sequencing.


> Self-regulation by scientists of decades and centuries past has created modern science with all its virtues and triumphs. However, much like the bankers of the early 21st century, we risk allowing new incentives to erode our self-regulation and skew our perceptions and behavior; similar to the risky loans underlying mortgage-backed securities, faulty scientific observations can form a bubble and an unstable edifice. As science is ultimately self-correcting, faulty conclusions are remedied with ongoing study, but this takes a great deal of time.

> Unless and until leadership is taken at a structural and societal level to alter the incentive structure present, the current environment will continue to encourage and promote wasting of resources, squandering of research efforts and delaying of progress; such waste and delay is something that those suffering diseases for which we have inadequate therapy, and those suffering conditions for which we have inadequate technological remedies, can ill afford and should not be forced to endure.

I agree.


The article makes some fair points, but strangely seems to imply that open access is somehow opposed to rigorous peer review, which certainly leaves an odd taste in my mouth.

> Of course, scientific publication is subjected to a high degree of quality control through the peer-review process, which despite the political and societal factors that are ineradicable parts of human interaction, is one of the “crown jewels” of scientific objectivity. However, this is changing. The very laudable goal of “open access journals” is to make sure that the public has free access to the scientific data that its tax dollars are used to generate.


First issue: publications. I think this is not a big problem in the USA relative to China. Is there evidence of American scientists who have greatly benefited in reputation and prestige despite doing very shoddy work? Many scientists have been rebuffed and rejected from grants/awards due to not having enough publications. American scientists, though, are more attuned to the quality and integrity of the journals in which publications appear. Whether for tenure or grants, a Nature or Science or Cell paper will mean a LOT, and many scientists evaluating other scientists wouldn't care much about a journal publication with an Impact Factor under 1.0.

Meanwhile in China, there are numerous article-factory journals that are pay-to-publish; you can put your shoddy work in those and amp up your publication count easily. Surely these exist in the USA too, but are career scientists at major institutions utilizing these shady Chinese journals? There is evidence that some of these Chinese journals are publishing straight-up BS, which is especially easy to do with data analysis, where you could "clean" your data easily. Perhaps it is a cultural or political difference, but I don't see nearly as much rigorous self-reflection from Chinese scientists on this front.

Second issue: grants. Publications are a small slice of this story. Science departments (not humanities) in universities are MAJOR revenue generators for the university. My university took 1/3 of your grant straight off the bat, to cover overheads like shiny facilities, administration, and marketing. Meanwhile, the scientists themselves may make huge salary bonuses or advance their tech/staff substantially when they have substantial grants. So, getting a grant is great for you personally, and improves your chances of further grants.

So, is it really that surprising that there may be pressure to publish at all costs, to p-hack, to reach for those low-impact journals despite their lower reputation, given that universities and scientists BOTH benefit massively, financially, from this incrementalism? Does it really pay to reach for pie-in-the-sky, fundamental sea changes in your field? It seems like a high-variance, high-risk strategy that only very bold, well-funded, devil-may-care scientists would employ.
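
For what it's worth, the pull toward p-hacking is easy to demonstrate numerically. Here is a minimal sketch in Python (the sample size, the number of hypotheses, and the normal approximation are my own choices for illustration, nothing from the article): test 20 hypotheses on pure noise and report only the best p-value, as a pressured researcher might.

    # p-hacking in miniature: many tests on pure noise, report only the best.
    import math, random, statistics

    def p_value(sample, mu=0.0):
        # Two-sided one-sample test, using a normal approximation to the
        # t distribution (adequate at n=30 for illustration).
        n = len(sample)
        t = (statistics.mean(sample) - mu) / (statistics.stdev(sample) / math.sqrt(n))
        return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

    random.seed(0)
    tests = 20
    best = min(p_value([random.gauss(0, 1) for _ in range(30)]) for _ in range(tests))
    print("best of %d null tests: p = %.3f" % (tests, best))
    # Chance of at least one p < 0.05 among 20 independent null tests:
    print("P(at least one 'significant' hit) = %.2f" % (1 - 0.95 ** tests))

With 20 shots at a 5% threshold, a "significant" result on pure noise shows up about two times in three; report only that one and the incentive structure above does the rest.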


I can't comment on China and other fields, but in the US, for AI and robotics, in which I do research and publish, there is definitely a growing trend of an over-emphasis on novelty research that disregards reproducibility in favor of "wow" factors and fancy demo videos. A lot of highly cited research papers/labs tend to be the most heavily promoted ones (on Twitter, in the press, etc.), and they're not necessarily impactful/useful for the rest of the field.


Amidst this worrying trend of shallow research, begging for petty cred and grants more than knowledge, you have to wonder what has become of ethics, of integrity in science. Globalized academia needs some soul-searching, IMHO.


" you have to wonder what has become of ethics, of integrity in science"

When was there ever more ethics or integrity in science than at any other time? The AIDS crisis was a shitshow of choosing prestige and recognition over the lives of a generation. The discovery of DNA was off the back of a woman who was hardly recognized. Henrietta Lacks' cells. The syphilis experiments.


The discovery of DNA was made by Friedrich Miescher, a brilliant Swiss biologist. At the time, nobody knew what the nuclein he purified did (and the results were considered so surprising that his own thesis advisor redid all the experiments manually before letting the data be published).

What you are referring to is the use of Rosalind Franklin's X-ray fibre diffraction images by Watson and Crick to elucidate the 3D structure of DNA, and, depending on the accounts you read, whether she got due credit is arguable. She did publish in the same Nature journal issue as W&C (https://www.nature.com/articles/171740a0.pdf), she got credit for the photos (see the acknowledgements in the W&C paper, http://www.nature.com/genomics/human/watson-crick/), and she was dead by the time the Nobel Prize decision was made (so she could not have received the prize).

I understand many feel very strongly that she was cheated, and while I do believe she was definitely slighted and not given enough credit, the underlying story is fairly complicated. I recommend reading both Dark Lady and The Eighth Day of Creation and then forming your own opinion. Personally, I thought her personal diaries, which she willed to Aaron Klug and which were used in the writing of Eighth Day, were really illuminating.


Sorry, you're right, the story is complicated. But that it is so complicated further raises the question: where are we getting the idea that science was full of ethical humans doing purely ethical things?


> Is there evidence of American scientists who have greatly benefited in reputation and prestige, despite doing very shoddy work?

This is the exact point of the replication crisis. Within social science, the answer is an absolute yes. That's why we have been having this conversation more and more over the past decade.

> Meanwhile in China, there are numerous article-factory-journals that are pay to publish, you can put your shoddy work in those and amp up your publication count easily. Surely, these exist in the USA, but are career scientists at major institutions utilizing these shady Chinese journals?

In China, people know about scientific reputation perfectly well, which is why publication in Nature/Science/Cell ensures great financial reward.

There are shoddy journals everywhere, of course. Yes, top scientists at top institutions and rigorous fields in the US don't use them. The same is true for top scientists at top institutions in China. The average quality is probably lower (due to the inherent difficulties of doing cutting-edge science only two generations removed from mass starvation) but the principle is the same.


Regarding your first point, the West may not be as bad as China, but there are some notably bad actors. Schön [1] fabricated very impressive results at Bell Labs for years before getting caught. In fact, these results were so impressive that they shifted the focus of research groups in his field for years as they tried to get his methods to work. It was such a big scandal that my (and I assume other) universities added a research ethics requirement to their engineering PhD programs as a reactive measure.

[1] https://en.wikipedia.org/wiki/Schön_scandal


A previous Great Leader decided that he would improve our research output in a simple way: we should stop doing humdrum work, and only do work which would succeed famously.

Of course, there is some difficulty in determining which work is going to be brilliant before it is done. But he decided that he could do that, seemingly based on how PR-worthy the proposal was.

At any rate it did immense damage and set back deep research by years. Naturally he left when he could wangle a better job elsewhere.


Every time an arxiv paper gets published to HN, I wonder whether it's even worth reading until findings are peer reviewed..


*worth


thx


How do you quantify rigor?


In mortises?


This can partially be solved with technology: all data and programs should be self-contained and interactive, the editing and modification process should be visible to the general public, and there should be better connections between papers than the current citation mechanism.
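
As a sketch of what "self-contained" could look like in practice (the file name, column name, and hash below are all hypothetical placeholders): the analysis script ships alongside the paper, refuses to run on anything but the exact published data, and rebuilds the result end to end so anyone can diff it against the published number.

    # Hypothetical self-contained analysis shipped with a paper.
    # Standard library only, so there is nothing extra to install.
    import csv, hashlib, statistics

    # Hash published alongside the paper (placeholder value here).
    EXPECTED_SHA256 = "0" * 64

    def verify(path, expected):
        # Refuse to run on anything but the exact published bytes.
        digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
        if digest != expected:
            raise SystemExit(path + ": hash mismatch; not the published data")

    def analyze(path):
        with open(path, newline="") as f:
            return statistics.mean(float(row["measurement"]) for row in csv.DictReader(f))

    verify("data.csv", EXPECTED_SHA256)
    print("mean measurement:", analyze("data.csv"))

Put that script, the data, and its hash under version control and you get the visible modification history for free; the better-connections-between-papers part is a much harder problem.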


Market demands Bad Science, market supplies Bad Science. I do not see a problem here.


A cynical view - alas it captures the truth.

It is very easy to point fingers, to blame it on funding, blame it on journals, blame it on media, but in the end:

- scientists decide who gets funded

- scientists decide who gets published

- scientists make exaggerated claims in the media

The source of the problem is the scientists, who do not understand the damage they are doing to themselves.

I do foresee downvotes, because scientists do not like this idea at all :-)


Most of it is being funded by public means. Industries such as pharma, which rely on that science, did point out the replication crisis in medicine: https://en.wikipedia.org/wiki/Replication_crisis#In_medicine


Not surprising when science has become a tool for political and industrial purposes.


Actually, I think some part of this bad science and low innovation rate is conscious. Governments everywhere realized that, unless they have total social control, people cannot be trusted with exponentially powerful technology of all kinds. So I predict that unless we as humans are totally surveilled and controlled, we will not see great leaps of technology in the foreseeable future.


When I was doing laser spectroscopy research as an undergrad, my prof said there was an informal physics "association" that had taken an oath not to develop anything like another atomic bomb. He asked if I wanted to join and I said yes. So I certainly see some basis for people being scared of fundamental advances in science and technology and wanting to control and place limits on what is developed. There have been a lot of advances in astrophysics (little potential harm to things here on Earth) and not much research in gravity (big consequences if we could actually control gravity). That doesn't require "total social control", just a strong influence at key checkpoints (like funding). Whether there is an active effort to reduce investigation in specific areas of science might be discoverable by a meta-analysis of papers showing which areas of research tend not to get funded, and seeing whether there is a correlation with the potential for social disruption.


At this point, what even is the point of good science if nobody believes it?


If what you said was literally true, that nobody believes it, then there would be little point. But "nobody" is an exaggeration. Scientists are likely to believe it, or at least are in a much better position to evaluate the truth and utility of it, and build upon it.

When it comes to abstract science, e.g. the ultimate origin of the universe, it doesn't materially affect anything whether it is believed or not. But if someone produces a cheaper or longer-lasting battery, the proof is in the pudding, and that basic research will have made a difference.

Then there is the category of hard science which is disbelieved because moneyed interests wish to discredit it, and/or it has become a political shibboleth to discredit the science. Those aren't due to bad science.


The problem is that significant portions of the population refuse to believe anything that comes out of science. This directly affects what politics decides to do about real problems like climate change. This isn't some abstract issue, it's something that's affecting us on an everyday level because people are unwilling to listen to the evidence.


People are dying to believe in science, even when it fails repeatedly or drags on for decades; otherwise the BBC would have stopped reporting on new cancer cures, because people would have had enough. They're mostly disbelieving of big narratives built on the science and of things that look like FUD, even more so when emotional argumentation blankets the scientific arguments.


We need to incentivize thorough reviews as well.


Applied research has always been in fashion; theoretical research is always rare and underfunded. Many books have been written on this subject; I recommend "A University for the 21st Century".

People in glass offices run everything in 2019, and the more of an expert you become in your field, the more professional managers will feel you are a pest to be silenced/hated/removed.


[flagged]


This is absolutely false. It's exactly because of competition that we can trust the journals of Elsevier, even if they are ... Elsevier. Nothing could be worse than a government-owned journal where only the connected ones could publish.


I've always found it distasteful that the Clay Math Prize offers a million dollar reward for proving that P=NP but nothing for revealing the truth should it be otherwise.


I think you misunderstood something. The problem itself is sometimes referred to as the "P=NP" problem, but that's just a name, it is also called the "P vs. NP" problem. If you can prove either P!=NP or P==NP, you get the prize.
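
For reference, here is the question stated formally, using standard textbook definitions rather than anything quoted from the CMI problem statement:

    \mathsf{P}  = \bigcup_{k \ge 1} \mathrm{DTIME}(n^k), \qquad
    \mathsf{NP} = \bigcup_{k \ge 1} \mathrm{NTIME}(n^k), \qquad
    \text{question: } \mathsf{P} \overset{?}{=} \mathsf{NP}

A proof of either equality or inequality answers the question, and hence wins the prize.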

Can you give a reference to back up your interpretation?


> Can you give a reference to back up your interpretation?

Alas, no. I do recall being astonished at my claim, and then being convinced by a colleague (which was backed up by plain language on the CMI page, in my memory...) but now that I'm re-reading (current and archive.org'd) I cannot find such a thing. Disturbing. Yet relieving. Fuck my memory.


Your interpretation is correct. The official problem statement [1] is "does P = NP?", so either "P = NP" or "P != NP" is a valid answer.

[1] https://www.claymath.org/sites/default/files/pvsnp.pdf



