The PhD Metagame: Don't try to reform science – not yet (maxwellforbes.com)
148 points by jxmorris12 55 days ago | 160 comments



This article and most of the comments ignore the social power dynamics and status quo institutional structures of academic science (Science 2): university administrators, power-broker faculty researchers, funding agencies (until recently), publishing companies, higher-ed consultants, etc. There are thousands of potential reforms that would bring Science 2 closer to Science 1 and generally make science life better. Those reforms are articulated by competent scientists and higher-ed journalists every day. If you want to know why science reform isn't happening, ask which powerful interests are benefiting from the existing structures.

Power makes people stupid: powerful people can't imagine a world other than the one that brought them their power. They will say, "That's the way the world is." Let's encourage students to continue to imagine other possible worlds in order to challenge the status quo.


I've never really understood the sentiment that "articulation of a problem = solving that problem." Articulation seems to me to be Step 0 in solving a problem; there need to be people on the ground advocating for why this new ideological framework is "better" than the status quo and actively convincing decision makers or acquiring decision-making positions. Otherwise any amount of highly articulate complaints is just sophistry.


I think calling problem articulation "just sophistry" is overly reductionist. People who make the effort to articulate the problems (e.g., some Chronicle of Higher Ed writers) offer thoughtful readers other possibilities for consideration. Then, in the rare case that a powerful decision-maker perceives a tension in the status quo, there exist well articulated potential actions to resolve the tension. This is why think-tanks write white papers. The narrative that "people on the ground" is a necessary condition for reform dissuades thoughtful problem articulation. "People on the ground" is one way to influence decision-makers, but it is not necessary. Watch CSPAN when a septuagenarian Senator references his/her granddaughter's comment as influencing his/her vote.


I think a senator being influenced by a grandchild is a good mental case study in productive dissemination of an ideology. There are many people in leadership roles who may sometimes be on the lookout for strategies to tackle problems, but the only way those strategies become actionable is if someone nearby 1) has had the idea communicated to them and 2) is able to rhetorically sway those commanding the decision-making process (this is an instant victory if a sufficient decision-making position has been captured by allies). Ultimately the ideas themselves only gain material traction through a dissemination network with a connection to the people making decisions.


I see. For you, "people on the ground" includes a grandchild's comment. In my experience, "people on the ground" has implied "don't try to do anything on your own," which dissuades action and consequently promotes the status quo's persistence. When you say "dissemination network," I hear you saying a group of people is necessary. But a group is not necessary. A group is one possible way. But powerful people are influenced by far less than a group of people every day. See also: lobbyists. "Start a popular ideological movement" and "become a lobbyist" warrant very different life choices.


Unfortunately there are many popular ideological movements with little to no penetration in the structures that actually swing material conditions. That disconnect between the holders of an ideology and the existing power centers leads to intense cognitive dissonance. Generally, organizing is helpful in achieving anything political (i.e. affecting the distribution of resources). I feel like it'd be very hard to form a popular ideological movement without any form of collectiveness; a movement consisting of one individual writing for themselves to read hardly seems popular.

The concept of lobbying itself has been basically shattered in our modern world, with businesses having a near-infinite amount of resources to exploit it. I don't think there's anything inherently unreasonable about conveying your understanding of the importance, impact, and potential consequences of major choices to key decision makers.


Most political lobbying pertains to matters that are completely off the radar of news media. In fact, if the topic you're lobbying on is in the news, you are probably failing.

They tend to be intensely practical and specific, rather than morally heated hot-button topics. Like building infrastructure, securing a government contract, or amending/removing a new regulation in your sector of business (e.g. making sure a new law on tobacco exempts cigar manufacturers).


Established institutions become like organisms with feedback loops that maintain the status quo, akin to homeostasis. Any change to the status quo is seen as a threat and is dealt with accordingly.


Naively, you’d think such a revolutionary paper [BERT] would be met with open arms. But when it was given the best paper award (at NAACL 2019), the postdocs I talked to universally grumbled about it. Why? It wasn’t interesting, they bemoaned. “It just scaled some stuff up.”

I'm from that community, was there when they presented it, have used and still use BERT a lot, and still if it were my decision I wouldn't have given it the best paper award, even in hindsight.

BERT the model has been, of course, enormously influential. I still use it, even after the generative LLM revolution (which also stands on its shoulders). I greatly respect its authors and am truly grateful that they published and open-sourced it.

BERT the paper? It's not really well written (almost everyone who wants to understand Transformers or BERT turns to a blog, because the papers are so bad) and it's not stellar in terms of scientific insight, because indeed, it scales some stuff up and comes up with a lot of magic numbers that are there presumably because other alternatives were tried and those happened to work. Or maybe not even that, because some stuff included in BERT actually turned out to be useless (see RoBERTa), so I guess they just winged much of it and it worked.

From a scientific paper, I would expect much more explanation: why the architecture is designed like this, why this number of layers/dimensions and not that, etc., all of which that paper thoroughly lacks. No one will learn to do better science from that paper; it's not a paper a PhD student would benefit much from reading except as a curiosity (they can of course benefit from downloading, using, and getting to know the model, just not from the paper).

Maybe create a "best model award", "best software award" or whatever, but in my view as an academic a best paper award is just not for this.


Coming up with something that is new, original, and actually works better than anything before it is quite hard. It does not happen often. When it happens, it should be embraced and cherished. Because without these discoveries, science is not worth anything at all. These discoveries are what science is all about. Yes, making sense of the discoveries is also science, but that is the easy part of science.


It happens every day. The job description of an engineer is basically to apply their knowledge of fundamentals to design and improve products, not simply copy the existing products. Products in every field must incrementally and monotonically improve in some way in order to compete. It is often not even science at all, just development.


We are talking about different things. Incremental engineering can definitely be part of it, but at some point something different happens: the discovery.


I don't think that this is what GP wrote about. They complain about the paper and not the software. Imho, most scientific papers are really badly written, because most scientists (and most people) are bad writers.


If you want beautiful writing, read poetry. You should give the best paper award to the paper with the best invention / discovery. Because as much as I like a well-written paper, I like a paper with a great invention / discovery even better, even if badly presented. Remember, you get what you incentivise. There are too many papers out there that, while nicely written, do not move the needle at all.


I would certainly appreciate beautiful writing, but I want something far more basic: Good writing. I want a text that is pleasant to read. That does not mean that the text should be dumbed down or that I expect it to be easy. A good text avoids unnecessary jargon and is simplified to make it very clear what the reader should take away from reading the text. A lot of the technical details can be delegated to the supplement so that the main text can remain clear and focused.


I think that best paper really means best discovery, or best finding.

No one really cares how well written a paper is.


I agree the best paper award should go to the paper with the strongest contributions, perhaps with writing quality as secondary. I don’t agree with your last sentence.

Writing quality is often a make or break thing when it comes to whether a paper is accepted or not, mostly because it makes a paper easier to understand, ie its contributions and the evidence that they are truly there are easy for a reviewer to pick up on and appreciate.

Furthermore, a well-written paper is a far larger contribution to the research community - the audience beyond the reviewers - than a poorly written paper with the same contributions, for the same reasons: well-written papers can be a joy to read, particularly if a leap in contributions is presented in an easy-to-understand way.

I understand that these things are cultural (ie field specific) but this has been my experience.


A badly written paper is bad at delivering its core content of scientific knowledge. The reading process is less efficient, the understanding comes slower, and discussing the core ideas and findings is harder and less fruitful.

A well written paper transports the reader to the perspective of the writer. It’s not about poetry or aesthetics.

Bad handwriting is illegible. Good handwriting is clear. Calligraphic aesthetics is another ___domain.


> comes up with a lot of magic numbers that are there presumably because other alternatives were tried and those happened to work.

The process of experimentation is what makes Computer Science "science"!


A central part of experimentation in scientific fields is to document the experiments in a very comprehensive way, including:

- which other options you also tried, but which failed

- hypotheses that explain the worse results of the other options

- hypotheses for why the chosen option gives better results

- ideally, testable predictions that these hypotheses imply

- etc.

Simply saying "other alternatives were tried and those happened to work" is not science, but tinkering around combined with magical thinking.
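
To make the contrast concrete, here is a minimal sketch (mine, with purely illustrative field names and numbers) of what recording an experiment that way might look like in Python:

  from dataclasses import dataclass

  @dataclass
  class ExperimentRecord:
      config: dict       # the option actually tried
      metric: float      # the observed result
      hypothesis: str    # why we expected this option to help (or hurt)
      prediction: str    # a testable implication of that hypothesis, if any

  log = [
      ExperimentRecord({"layers": 24}, 0.871,
                       "more depth helps only when data is plentiful",
                       "halving the training set should widen the gap"),
      ExperimentRecord({"layers": 12}, 0.884,
                       "12 layers balance capacity against overfitting", ""),
  ]
  # Publishing the whole log, failures included, is what separates documented
  # experimentation from "we tried things and this one worked".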


Do you take issue with the 'purely empirical' approach (just trying out variants and seeing which sticks) or only with its insufficient documentation?

I don't know how you'd improve on the former. For a lot of it there simply isn't any sound theoretical foundation, so you just end up with flimsy post-hoc rationalizations.

While I agree that it's unfortunate that people often just present magic numbers without explaining where they come from, in my experience providing documentation for how one arrives at these often enough gets punished because it draws more attention to them. That is, reviewers will e.g. complain about preliminary experiments, asking for theoretical analysis or question why only certain variants were tried, whereas magic numbers are just kind of accepted.


Seems pretty clear they aren't objecting to throwing stuff at the wall and seeing what sticks, but with calling the outcome of sticky-wall work "science".

I'd say that's a bit of a strict take on science; one could be generous and compare it to a biologist going out into the forest and coming back with a report on finding a new lichen.

Though admittedly, these days the biologist is probably expected to report details about their search strategy, which the sticky-wall researchers don't.


The biologist would be expected to describe the lichen in detail, including where it was found, its expected ecology, its place in the ecosystem, life-cycle, structure, etc. It is no longer 1696 where we can go spear some hapless fish, bring back its desiccated body, and let our fellow gentleman ogle over its weirdness.


I'm not GP, but I don't think they are taking issue with the fact that e.g. layer counts or architecture were arrived at empirically rather than from first principles.

Rather that when you do come to something empirically, you need to validate your findings by e.g. ablations, hypothesis testing, case studies, etc...
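
As a rough illustration (not how any particular paper did it; the names and numbers below are made up), an ablation study can be as simple as retraining with one component disabled at a time and comparing against the full model:

  import random

  def evaluate(config: dict) -> float:
      # Stand-in for "train a model under `config`, score it on held-out data".
      random.seed(str(sorted(config.items())))
      return 0.85 + random.random() * 0.05

  full = {"next_sentence_pred": True, "segment_embeddings": True, "layers": 12}
  baseline = evaluate(full)
  for component in ["next_sentence_pred", "segment_embeddings"]:
      ablated = {**full, component: False}
      delta = baseline - evaluate(ablated)
      print(f"{component}: contributes {delta:+.3f} to the metric")

A component whose removal turns out to cost nothing (cf. what RoBERTa found about next-sentence prediction) was never validated, only shipped.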


Exactly, I can confirm this is what I meant.


> I don't know how you'd improve on the former. For a lot of it there simply isn't any sound theoretical foundation, so you just end up with flimsy post-hoc rationalizations.

So great science would come up with a sound theoretical foundation, or at least strong arguments as to why no such foundation can exist.


Max Planck argued that change takes time because good ideas need enough staying power to outlive their detractors:

> A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it … An important scientific innovation rarely makes its way by gradually winning over and converting its opponents: it rarely happens that Saul becomes Paul. What does happen is that its opponents gradually die out, and that the growing generation is familiarized with the ideas from the beginning: another instance of the fact that the future lies with the youth.

https://www.benwhite.com/misc/good-ideas-need-to-outlive-the...


I suspect that there are myriad counterexamples of this belief, and that it's not even testable. Perhaps one idea that he was involved in -- quantum mechanics -- encountered resistance because it had glaring problems that were not resolved with one killer test or theory, but the problems were gradually overshadowed by the empirical success of the program.

Please don't turn this -- or any hypothesis -- into a "law" of how science works.


Based on our understanding of human psychology, incomplete as it is, it seems reasonable to argue that this trend, if it exists, would not be a binary rule but one whose strength depends on several factors: how strong the evidence is, how many different experiments produce evidence that aligns with the new theory, and how much better the new theory explains existing problems in the data.

If the existing theory predicts results that are 50% off, and the new theory is 45% off in the other direction, then things aren't likely to be accepted. If it is instead 0.1% off, that makes a much stronger argument. The issue with rejecting the first case outright is that often the experiments themselves have imperfections, but those are much slower to work out and refine. When the new theory doesn't need the existing experiments to be refined, I would guess long term experts are much more likely to entertain it.


> depends upon numerous factors that coincide with how strong the evidence is

Might I venture the guess that seeing tangible advantages for oneself when using that new theory is more important? If you adopt it, will you become one or more of more famous, more successful, able to advance your own work, become part of a more reputable group, etc.?

If it is merely something that will not affect you, there is little or no incentive to change one's view.


It's not a law. If human civilization goes extinct, no more "scientific truths" will be discovered. Or an authoritarian society could gain total control (perhaps with robotics), and for whatever reason, they decide to ban some fact and mandate that everyone be taught the lie.

But scientific truths have a tendency to be adopted that falsehoods don't because truths are verified by other truths, whereas falsehoods contradict them. A lie can only be so large before it becomes self-contradictory, the truth is incomprehensibly large yet coherent.

If quantum mechanics has problems, as we do more experiments, we'll encounter these problems more and more, until eventually they can no longer be overshadowed. I predict quantum mechanics will never be entirely thrown out, but it will end up as a simplified approximation of a more complex "true" model; which is still taught in schools and used wherever extra accuracy isn't needed, like classical mechanics is today.

I think Max Planck's quote matters in practice too. In theory, you can discover a "scientific truth" and be recognized for it, but only long after your death, and only after someone re-discovered it (i.e. you didn't advance scientific knowledge at all).

However, most people aren't particularly unique, which means that if you discover something, chances are others have discovered it, or are at least close enough that you can easily convince them. You may not be able to convince the dominant "in-group", but if your idea is obvious enough (which it probably is if it's true and you managed to discover it), you can form an "out-group", which will grow as the idea gets verified (by truths) far more often than contradicted (by lies, because there are fewer of the latter, since a lie can only be so large without being self-contradictory).

Why do science in the first place? If your only goal is to predict something, you're doing it for yourself, so do "Science 1" and listen to others, but only to correct yourself. If your only goal is status, the truth doesn't matter to you, so do "Science 2" and make others happy. If your goal is to further scientific knowledge, I recommend you do both with preference for "Science 1": prioritize being correct, but explain your idea very well and make others happy when possible without sacrificing correctness (diplomacy).

It's important to note that when you can't change the majority's factual belief, you should really evaluate your own, because usually in such cases you're the one who's wrong. But otherwise: when you can't change others' beliefs, the next best thing is to (as much as possible) not care what they believe, even if they are the majority.


>If quantum mechanics has problems, as we do more experiments, we'll encounter these problems more and more

This is a non sequitur. If you do more experiments, you'll encounter them. But if there were such a wrong theory that everyone insisted was correct, it's not necessarily the case that more experiments will occur. I can't do an experiment to invalidate... quantum mechanics is in a regime where anything less than hundreds of thousands of dollars does not even get you started (never mind the lacking expertise). It's also unlikely to be financially lucrative in the timespans that would entice an investor (especially one averse to pissing off the status quo, as most are). By doling out the grant money (or not) to those people who will preserve the status quo, one could let a false theory survive for decades or even centuries.

We seem to have this mythology of science from a past era, where some maverick can just bust in and start embarrassing people with unignorable truths. If ever there was such a time it exists no more. The stakes have never been lower, trillions won't be lost if we get quantum mechanics wrong. Thousands won't die in a quantum mechanics accident that could have been avoided. The problems could persist indefinitely.


It's true that a false theory can survive indefinitely, especially if it doesn't have real-world impact. Classical physics survived for centuries before being disproven by modern physics, because the differences are subtle except in extreme circumstances. Maybe quantum physics is accurate enough that the circumstances for non-negligible differences between it and "the truth" are so extreme, we never test them, therefore it's never disproven.

However, right now people are spending large amounts of money to build quantum chips. They're not explicitly trying to disprove quantum physics, but they are implicitly testing it through their experiments. And if these tests suggest that quantum physics has fundamental issues, they'll investigate, if only so that, should they realize quantum chips are impossible, they can stop spending money trying to build them.


Every transistor in every existing computer is designed on the basis of quantum mechanics because they're dealing with electrons and atoms.


Quantum physics is too basic for any experiment to not test it. Like the other poster said, all the truths are linked together in physics and math especially.


> If quantum mechanics has problems, as we do more experiments, we'll encounter these problems more and more, until eventually they can no longer be overshadowed.

The problem is rather that a lot of physicists tend to "hand-wave away" the fact that existing problems (like "what is a measurement?" and the "sudden collapse of the wave function") actually are problems in the theory, insisting instead that the theory is basically correct (as evidenced by the many experiments that were done) and that these "problems" are simply open, unanswered research questions.


They're problems when they matter experimentally. If quantum computers are failing because their qubits keep getting measured, the companies building them will spend a lot of money to discover why, which will refine our definition of "measurement" at least in that context.


> They're problems when they matter experimentally.

This is exactly an example of the "hand-waving problems away" point that I made. :-)


They should be hand-waved away.

I'm an industrial physicist. I've noticed that physicists, and the general public, often have different ideas about what problems we should be versed in. And people are surprised when they learn that most physicists are not theoreticians. Were it up to the public, we'd all be working on warp drive, infinite energy, and explaining quantum mechanics. ;-)

We all learned about the problems in both undergraduate and graduate training, and in discussions and readings. I attended a lecture about it by John Bell. I was excited about it, but I also had an experiment to finish.

I think physics is utterly unique in having a theory with seemingly infallible predictive power and zero explanatory power. But if someone asks me about it in the lunch room, all I can do is shrug it off. The fact that this paradox hasn't stopped us dead in our tracks, in 100 years, that's the problem.


> I'm an industrial physicist. I've noticed that physicists, and the general public, often have different ideas about what problems we should be versed in.

Rather say: you have studied physics, but what you actually work on and are interested in is engineering. :-)

Addendum: Just to be very clear: there is nothing wrong with being excited about engineering problems from industry.


Actually, I'm a scientist, not an engineer. I don't want to be an engineer. I realize we're outnumbered by engineers, and many people have never met a physicist. My parents were both industrial scientists too. We exist. There's a general sense that while there's some overlap, scientists and engineers are not the same, and may even think differently.


This tracks perfectly with Kuhn's Structure of Scientific Revolutions. The paradigm only shifts when the old guard dies off. Scientific progress has much more to do with ego, institutions, and human nature than with the scientific method.

https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Re...


Tends to be the issue with a lot of human society. Although, reading through the summary on Wikipedia, I'd suggest that a lot of the "paradigm" shifting often has to do with humanity's monopolistic rent extraction / tragedy of the commons nature.

Almost every part of society, including science tends to collapse towards those that have access to ideas and resources, and are placed within society to be able to take advantage of them, and those that are exploited. Even the Nobel Prize was found to mostly favor those who were born to win a Nobel Prize. Best way to win a Nobel Prize? Be born to a wealthy family with a prestigious science background.

Those that have access tend to devolve into monopolistic tendencies, exploiting the existing "paradigm" for their own gain, while punishing, excluding, or minimizing those that suggest alternatives that might in some way disrupt their power structure and control. Money, politics, military, industry, and a lot of other parts of society seem to all work about the same. Best way to work at the White House? Be born to a wealthy family with political access.

Almost every person who has retrospectively been considered "great" or "revolutionary" centuries or millennia later was punished, excluded, or minimized during their own lifetime, while their contributions were then reevaluated, fought over, fetishized, and collected after their death. Galileo, mentioned elsewhere, is a relatively well-known example: some success and acceptance at the time, but mostly condemnation for "vehement suspicion of heresy." Van Gogh's work only began to attract attention in the last year of his life ... right before he shot himself in the chest with a revolver from being so mentally tortured.


Why wait when we could just not be dicks?


If your solution to some problem relies on “If everyone would just...” then you do not have a solution. Everyone is not going to just. At no time in the history of the universe has everyone just, and they’re not going to start now.



Did you just


It can be part of a viable solution.

Someone’s plan when investing in early solar panel R&D went something like: if everyone would just… “follow their economic interests”, then driving down the cost of panels will dramatically increase adoption, further driving down costs in a feedback loop.

Unlike most “if everyone would just” plans that one actually worked because the desired behavior aligned with people’s interests.


> Someone’s plan when investing in early Solar panel R&D went something like: If everyone would just… “follow their economic interests” driving down the cost of panels will dramatically increase adoption further driving down costs in a feedback loop.

I'm not sure that is a good parallel. The difference is that we didn't need "everyone" to innovate on solar panels. It was enough if "someone" did, and those who didn't got left behind with their inefficient processes. That's not a true "everyone would just" situation.


Getting a handful of people to act differently is rarely the issue, especially if they think they’ll get rich by doing so.

The customer base continually expanding is the “tough” side of the equation. Hundreds of millions of people behaving differently is the hard part of those “if everyone would just” plans.


It's not just a tendency to assume the best or worst instead of investigating and dealing with reality. It's a full-blown feature of evolution:

It's OP. It signals engagement and understanding to society while expending no energy on work. Idealisation is not retardation, it's optimisation.


You need both people with their heads in the clouds dreaming of fantasy futures as you also need those promoting the status quo. The former gives us direction and hope (motivation) while the latter gives us stability. There is a balance. To use reinforcement learning terms: exploration and exploitation. In reality there are many more subsets that each pull in different directions. I agree that this is optimization and most of them play essential roles. But it also means we should adjust the weights and pay attention when one starts to dominate and throw things out of balance. And I'd argue that this is exactly what has happened in academia. The bureaucrats won and threw things out of balance. I'm not asking to go somewhere we've never been before, but I think many have because they don't know what the past looked like.
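
For what it's worth, the exploration/exploitation trade-off the analogy borrows is literally a single knob in the simplest bandit algorithms; a toy epsilon-greedy sketch (names hypothetical), where epsilon is the "weight" I'm arguing needs re-tuning:

  import random

  def choose(estimates: list[float], epsilon: float) -> int:
      if random.random() < epsilon:              # explore: try something new
          return random.randrange(len(estimates))
      return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

  # epsilon = 0.0 -> pure status quo; epsilon = 1.0 -> pure dreaming.
  print(choose([0.2, 0.5, 0.3], epsilon=0.1))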


  > “If everyone would just...” then you do not have a solution
Unfortunately this is not a reasonable argument. I get where you're coming from but what I'm asking is that everyone just do their job. Surely "do your job" has to be a reasonable version of this.

What I mean by "not be a dick" is to check the alignment of the process's goals against what we're actually achieving. What is the point? The author of the article lays out a lot of reasons and even states how well known these things are. Which unfortunately means someone needs to actually take action. When we're in a situation where many people want change but no one is willing to fight for that change, then we will just keep doing what we've been doing and heading where we've been headed. Even if that is knowingly off a cliff.

I don't need everyone to just do something, I only need a few more people to stand up. And yes, I will tell those that are saying "keep your head down" to shut up. Some things are worth fighting for and for me, one of those things is the integrity of science.


>[W]hat I'm asking is that everyone just do their job.

>I don't need everyone to just do something.

Naked contradiction. Either everyone needs to just do their job or not everyone needs to just do their job.

>Surely "do your job" has to be a reasonable version of this.

There are entire fields of research centered around answering, in exquisite detail, why people don't 'just' do their jobs like good little worker bees. Some terms I'm aware of that you may find useful to look into, in rough order of how general to the problem they are: agency problems; the Coase theorem; malicious compliance; work to rule; collective bargaining; moral hazard; perverse incentives; adverse selection; rent seeking; regulatory capture. If you want to read up on people trying to design actually working systems from scratch, look into the world of mechanism design, starting with auctions and branching outwards.

>When we're in a situation where many people want change but no one is willing to fight for that change, then we will just keep doing what we've been doing [...]

One could argue that the past ~century of scientific and technological development has probably beaten any other 100-year period you could pick, hands down, along any natural metric. So "what we've been doing" is actually pretty great, and it may not be a good idea to stake such a hugely important enterprise on some newfangled and only theoretical ways of doing things.


> >[W]hat I'm asking is that everyone just do their job.

> >I don't need everyone to just do something.

> Naked contradiction. Either everyone needs to just do their job or not everyone needs to just do their job.

It's not a contradiction; "just something" is not the same thing as "their job". "I need everyone to do their job" does not contradict "I don't need everyone to just do something." (Emphasis added for clarity about the differences.)


  > One could argue that the past ~century of scientific and technological development has probably beat any other 100 year period 
You could make this argument about most centuries. But it's a meaningless argument if the metrics you're evaluating on are implicit and assumed to be well agreed upon by all others.

My reply is the same to the other arguments you've made


Almost no one is a dick on purpose. Everyone believes in their truths. Mutually incompatible truths. That is perceived as being a dick from the other person's truth perspective. That's the tragedy of the situation. Everybody is right to some extent. Ego, plus interest, plus strong beliefs, makes it hard to move away from one's own truth. So no, we can't stop being dicks. We're just being normal humans.


It would be nice if that were true. But school and university have taught people to first and foremost be obedient. That means a large majority of people don't care about the truth at all, only about what the relevant authorities are saying.

Many of those authorities have learnt that deciding what is true based on reality is good and all, but you live longer and better by making friends and not disagreeing with them.

This is a societal failing.


This is such a weird perspective because the attitude I associate with people who go through the university system on the academic path is not obedience at all, but ruthless self advancement. Like literally where are all these obedient people you are talking about?

I would characterize the problem with science as being a failure to increase the available resources commensurate with the population of people capable of doing science. In this situation, the competition becomes sufficiently fierce that it is statistically better to lie, cheat or knee-cap your competitors in some other way than it is to actually do good science, which is unreliable. What you see as fealty to scientific authority is actually just a system which has become totally dominated by resource competition to the exclusion of its actual purpose.


> school and university has taught people to first and foremost be obedient.

That's not inevitable. I count myself lucky that it didn't happen to me.

Unfortunately I don't know how to improve schools to the level of those I attended over half a century ago. And I lack the get up and go to make it happen anyway.


  > Almost no one is a dick on purpose.
This is true, but that does not mean people are not being dicks.

What is important is to have self reflection and to recognize when you have been a dick, to apologize (make amends if necessary), and try not to repeat. Yes, habits die hard, but we can still improve. But the biggest dick move is to double down. We've created a culture where we act as if being unintentionally wrong is a bad thing and that the worst thing you can do is admit a mistake and self-correct. But we are always wrong (to some degree) and so the only thing there is is to self-correct.

So yes, we can stop being dicks. It's how humans evolved. We wouldn't have expanded from our small tribes to villages, to cities, to countries, to a global economy if we weren't capable of this. The arc may be slow and noisy, but it has always expanded to be more inclusive.


Because you don’t know which side is the “dicks”. Evidence is rarely conclusive, and progress isn’t made by kowtowing to the researcher with the biggest mouth.


Why risk it when you can be a dick and assure your success?


If you are using a first order approximation, then yes, this is the correct strategy. But when you consider the elements of time or consider that there are other players in the field (incorporating either will do, both is better), then this strategy becomes far from optimal. In fact, it will lead you down the wrong direction, making your own life harder. The thing is, bullshit compounds. Builds up slowly, but we all know that's how you boil a frog.

Humans have always advanced through the formation of coalitions. To optimize your own success you have to simultaneously optimize the success of others.


There is right now a thing that you believe strongly that is false. If someone were to point it out to you you would get angry. C'est la vie.


Yes. But you have to convince me that it is false (of course)[0]. I'm actually happy to tell you what I think the best way to do that would be. Would I get angry? Probably not, but it is hard to know.

In fact, this is even how I review papers. I am much more detailed than my peers, and get very specific. I always also include a list at the end detailing what factors are the most important and what I think the authors could do to change my mind (if I'm rejecting). If I'm accepting, I'll also argue my points to the other reviewers and make them stand for their arguments.

Truthfully, if no one is willing to change their minds, I'd say they can't be a scientist. It is a fundamental requirement, simply because we are all wrong, all the time. While we can get ever closer to it, absolute truth is fundamentally unobtainable. So you must always be able to update your beliefs, or else you will become more wrong as time marches on.

[0] I also recognize that the inability to convince me does not mean I am right and the other person is wrong. But this too is why I specifically make a point to try to help the other person. At least as long as I believe they are acting in good faith. If I am wrong then I WANT to know. I take no shame in being wrong, but I take a lot of shame in being unwilling to right myself.


I actually really want to test your theory on myself. . . I wonder how I could best do that.


My method is to help your "adversary". The way I think about it is this: we can't obtain absolute truth, so we're always somewhat wrong; we have limited data and information, so we need to be able to consider what others have that we don't. Arguments can be both adversarial and cooperative, right?

If your goal is to seek truth, then you need to reframe the setting. It is not "I defend my position and they make their case", that is allowing yourself to change but framed to maintain your current belief. Sure, you have good reason to maintain your belief and I'm not saying you shouldn't hold this, but it should be a byproduct of seeking truth rather than the premise.


Just pick a position you feel strongly about and imagine how your world would change if it was false. How your relationships would change. How stupid your previous statements would be.

Pick anything. Climate change is a big one. I would personally have to eat some crow if it were shown to be false.


It is worth reflecting on group dynamics - every group has a small core of members who set the standards for everyone else. They can reform the group. No-one else can. Sometimes an outsider can manage through sheer force of personality.

Groups display some sad behaviours where most of the members know that what they are doing is silly, but the leadership is committed to it, so they go along anyway. Sometimes it gets bad enough that they quit the group, but that is all they can do, since most never have a chance at reforming it. This turns up formally in corporations (the executive team have the power to reform), but that is actually just a mirror of a natural social dynamic.

Consider religion for the more natural form of this. It gives a good indication of how the ratio of insider to outsiders pans out in practice. Catholicism is a very well established one, and how many people have the moral and practical authority to reform their doctrine? Not many.


Yes, I think the story of Catholicism actually points to the idea that ignoring or even defying the norms of your community as part of an effort to form an alternative community is often more effective than trying to stick it out long enough to earn the clout to reform it. Luther's 95 theses were distributed in 1517, but the Catholic Church did not seriously reform the sale of indulgences until the Council of Trent in 1563, when it was banned. By that point a fifth of Europe was Protestant.


There were a few reform movements before that, with somewhat more mixed results. See the Cathars and Jan Hus.


You made an important point for academic reform there. Regularly allowing fast, unbureaucratic formation of groups, and automatically dissolving already-formed hierarchical structures, would help. The reactor internals can't be allowed to stratify.


Perhaps Science and Religion are undergoing some sort of schism. Looking back several hundred years, the universities that were forming priests were also educating the elite and turning out polymaths. Perhaps there is a blossoming of specialization in these latter days, where kids don't need church or a noble family to put them on track for some really heady science work.

In fact, university systems are practically religions unto themselves, with perceptible and distinctive campus cultures, Greek fraternities/sororities and what-have-you clubs to join; there is certainly a churchy blueprint being followed in education.

It seems to me that religion and medicine and architecture and various other sciences were tightly intertwined. It's why the reactionary Christians are rejecting Science qua Science and Scientism: if science is not subject to religious ethics or morals, then science has power that the church doesn't.

I don't know about Asia and the rest of the world, but seeing as the Catholic Church set up systems such as universities and hospitals, it's altogether unsurprising that religion is woven into the DNA of those disciplines even as they secularize and syncretize.


> Perhaps Science and Religion are undergoing some sort of schism.

There's no schism. Religion and science are both ostensibly attempting to describe the underlying truth. The issue is that any institution/engine that has money/power, will obviously attract people that want money and power too, truth be damned.

The idea that the institution is about 'uncovering truth' is certainly given lip service/held as an article of faith, as this is required for the naive masses to invest themselves into the idea (science/religion), but the underlying reality is a somewhat dirtier political jostling for money/power.


Yup, this is why I never join truth-seeking organizations. You'd be amazed how many people are only motivated to find the truth because they want to turn it to some profitable use somehow. Thank goodness number theory has no practical applications whatsoever.


> truth-seeking organizations

I am unfamiliar with this term, and it gives me pause. If an organization does not seek truth, what does it seek? Also, religions typically consider themselves as guardians and authorities of Truth, and disseminate/preach it through leadership/missionary/evangelization activities. Surely, the seekers of truth are ones who join and follow such paths?

In terms of Christianity, "finding the Truth" involves following Jesus and sacrificing our lives to do it. Sure, a televangelist or notable preacher could line his pockets, electroplate his Learjet, and influence politicians, but a follower's "profit" is moral and intangible, in exchange for actual cash, goods, and services we donate freely, or that's not tax-deductible!

In terms of universities and science, I'm curious how you became educated in Number Theory without joining a truth-seeking university, picking up truth-seeking textbooks, and hearkening to truth-seeking professors. Science as an industry or career path attracts ordinary people trying to make a living, whether that's in a community college, a four-year, or a prestigious Ivy League or Oxford/Cambridge setting. Certainly, the money, power, and prestige are attractive and draw in followers just the same.


> Groups display some sad behaviours where most of the members know that what they are doing but the leadership is committed to something silly so they go along anyway. Sometimes it gets bad enough that they quit the group, but that is all they can do since most never have a chance at reforming it. This turns up formally in corporations (the executive team have the power to reform) but that is actually just a mirror of a natural social dynamic.

It’s weird how people talk about our modern very unnatural corporations and how they behave and then rush to insist that it’s all a reflection of totally natural and organic stuff.

Yeah I think they doth insist too much.


> BERT was nothing short of a revolution in the field when it happened. You could cleanly draw a line pre-BERT and post-BERT. After it came out, something absurd like 95% of papers used it. It was so good, nobody could ignore it.

>Naively, you’d think such a revolutionary paper would be met with open arms. But when it was given the best paper award (at NAACL 2019), the postdocs I talked to universally grumbled about it. Why? It wasn’t interesting, they bemoaned. “It just scaled some stuff up.”

I still hear people making this complaint, despite the extraordinary success of scaling over the last few years.

It's pretty clear that scaling is the winning method (although exactly how to make best use of scale is an open question), but many researchers find it repulsive.


Ha, I remember when the scaling stuff started taking over in computer vision and how it annoyed us grad students. Suddenly companies (obviously, gross places where research and intellectual curiosity goes to die) were able to produce results that were much better than universities and they did it in a way that was inaccessible to us. Of course, not to mention in such a boring way. We didn’t have nearly the compute resources or any way of getting them. Now it’s slightly better I think.


> It's pretty clear that scaling is the winning method (although exactly how to make best use of scale is an open question), but many researchers find it repulsive.

"Winning" in the sense of "perhaps suitable to build practical applications that make a lot of money" - perhaps (even though I'd claim that whether this is true is still an open question).

On the other hand, "winning" in the sense of "getting a deeper understanding why the method works and having a model that can be analyzed deeply very well", then I would clearly say that scaling is not the winning method.

Scientific research is about truth-seeking, so many researchers are in particular interested in the second interpretation.


Limiting yourself to methods that are easy to understand is like looking for your keys under the streetlight. Small-scale methods may be easy to analyze, but they lack the richness and complexity that makes intelligence interesting.


> Small-scale methods may be easy to analyze, but they lack the richness and complexity that makes intelligence interesting.

I don't disagree.

But what a scientist would do after having strong evidence that huge scaling might help is attempting to understand what part of the much larger complexity leads to this qualitative change.


Horrible article telling bright eyed prospective grad students to fellate the prestige obsessed academic politics machine rather than actually try to advance science.

Remember that Einstein told journal editors to piss off [1] when they tried to get his papers peer reviewed.

[1] https://theconversation.com/hate-the-peer-review-process-ein...


I happen to be an (associate) editor of an academic journal.

What the people who critique the publication process are missing: 90% of submissions are crap - unfit for publication.

We need some process to gate-keep.

a) The venue of publication is a good signal of whether the time to read a paper is well spent.

b) PhD students learning the craft need objective feedback. The supervising professor/university often has the incentive to "just submit" -- even if they know that a publication does not meet the quality standard.

Before peer-review, somebody also needed to make a decision on what to publish. This typically fell to a single individual. The editor or some well-known member of the community who could recommend a paper for publication. On old journal issues they even mention the "recommender".

So the question is not whether peer review is bad; the question is which alternative gate-keeping process would be better. Otherwise we will drown in crap publications (even more) and PhD students won't get an honest feedback signal upon which they can improve their craft.


As a (former) reviewer at 5 journals, I disagree first and foremost with the notion that

> We need some process to gate-keep.

Journals, when print was the medium through which academic research was disseminated, had to gatekeep because there were practical considerations regarding how many articles they could put in each issue. With online repositories like arxiv, this is hardly a concern anymore.

Someone putting a crap article on arxiv does not hurt anyone else, and I'm saying this as a person who recommended tons of articles to be rejected because they had atrocious grammar/spelling issues. Worst case, it gets 0 attention and is ignored by the research community.

Something not being published in a journal/conference proceedings clearly does not prevent it from drawing tons of research attention, as we saw in numerous cases like the Adam optimizer [1].

Which brings us to the second point: what even is the purpose of a journal now? The answer is that the sole function of a journal now is gatekeeping, with the presupposition that, as you observed

> The venue of publication is a good signal, whether the time to read a paper is well-spent

Except, well, top journals have tons of articles that get 0 citations too. Clearly the filter fails at this purpose as well. So, why gatekeep at all then? Because if we did not have some exclusive prestigious journal, the plebs would not be separated from the esteemed titans of academia with the biggest grants, most prestigious scholarships and diplomas from the most famous universities.

The only reason we need to gatekeep today is to feed the academic prestige and politics machine. If you care about the science, upload the goddamn PDF to arxiv, tell your colleagues about your research at a conference, and let the scientific community decide whether your idea is interesting.

[1] https://arxiv.org/abs/1412.6980


The Adam optimizer was published at ICLR, a top conference in machine learning. So, basically, analogous to a peer-reviewed journal for the purposes of this discussion. ML (and some other subfields of computer science) have the particularity that the really competitive gatekeeping happens even more in conferences than in journals.

That said, there definitely are very relevant papers that are not published in any peer-reviewed venue. A good example is "Language Models are Unsupervised Multitask Learners" (the GPT-2 paper, which I would argue started the whole generative LLM revolution). But I think if you look for this kind of papers, you will find something in common to all of them: they are by very well known researchers, elite institutions or influential companies. That's why people went out of their way to read them even if they were posted somewhere without peer review.

If you removed peer review and just relied on posting to arXiv or similar, new researchers, or researchers from less known institutions, would have no chance at all to make an impact. It's peer review that allows them to be able to submit to a top journal, where the editor and reviewers will read their paper, and they can get a somewhat fair chance.

PS: I don't really like the peer review system that much either. It's just that every alternative that I have seen proposed so far is worse.


> The Adam optimizer was published at ICLR, a top conference in machine learning.

Fair, must have misremembered that one.

> If you removed peer review and just relied on posting to arXiv or similar, new researchers, or researchers from less known institutions, would have no chance at all to make an impact.

I disagree on this one. I did my PhD at an institution that ranks in the top 10 in the most well known university rankings, and I distinctly remember that one time when I was submitting a manuscript to a prominent journal in my field, got some reviews back which weren't positive yet were quite valid criticisms, and my professor told me not to worry because the editor is his buddy and my manuscript will get published for sure.

When that sort of "scratch my back and I'll scratch yours" culture exists in journals I don't see how peer review can be an equalizer. It just means everyone who publishes at a journal, including the less well esteemed ones, can claim they went through the rigor of peer review. Of course, we all know peer review is just a vibe check and is actually not that rigorous at all, and besides no one cares unless you published in a prestigious journal anyway. The less revered journals exist to collect $5k in open access fees for the publisher in return for hosting a pdf at the marginal cost of maybe a cent a year.


It seems to me that something like eLife's model is the best solution to this [0]: you still have a minimal amount of curation, but generally if a paper is written well enough and within the field it won't be desk rejected. Then, it gets published on the site and sent off for peer review. Peer reviewers assess how sound the paper is and pass a judgement which readers can view, as well as provide some recommendations to the authors make it stronger. The authors can then either revise the paper, or do nothing at all. In either case, papers don't languish in reviewer hell and the larger scientific community gets to see it.

[0] https://elifesciences.org/inside-elife/66d43597/elife-s-new-...


Exactly, everybody can publish on arXiv. And there are enough semi-predatory journals/conferences which basically accept everybody. Especially since LLMs can rewrite any paragraph nicely.

So the role of journals and conferences is not to prevent "the word getting out". It is to provide value by a curated list of on-topic and high-quality publications.

So no need to wade through tons of crap. Especially for PhD students who might take more time to detect crap as such.

In my experience, the publications at the good venues get a lot more eye-balls and by consequence citations. So there seem to be a lot of people who like this role as a "filter" for what to focus on.


> Someone putting a crap article on arxiv does not hurt anyone else

> ... only reason we need to gatekeep today is to feed the academic prestige and politics machine

This to me says you may not have experienced some parts of the (long-term) research process. It suggests that you have infinite resources to filter out noise, which is probably not the case. It suggests you're willing to spend a lot more time figuring out why something doesn't quite make sense rather than getting to the heart of the problem; while this is fine in many cases, it sucks when you're hot on the trail of something interesting and you're slammed by a million twisty paths full of half-baked hot takes.

We need to filter ("gate-keep" is a pretty inflammatory term) information and processes so that we don't have 12 different screw types with 12 different electric screwdrivers, instead of "just" 6 (sigh). We need to come to consensus, and that means some things go in and some things are left out. We need many mechanisms to filter.

> tell your colleagues about your research at a conference and let the scientific community decide on whether your idea is interesting.

All of these things feel like filters, when does a filter become a gate: colleagues - i.e. not everybody, but some selected few, who and how?; conference - filter (well, gate!); scientific community - != your baker's community; decisions directed by you, not on my own (i.e. a pointer to my paper) - filter.

[Edit for formatting, sort of.]


Conference does not imply gatekeeping. There are many conferences out there which accept almost all submissions.

As for the "scientific community" being a filter, there is a difference between "elite" researchers being the ultimate arbiters of scientific truth via their positions in the editorial teams of journals versus everyone being allowed to publish on open platforms like arxiv and bad ideas/quackery being filtered out naturally.

Because the former is what makes or breaks a scientist's career, grad students and postdocs hyper optimize to publish at prestigious venues, as opposed to optimizing for doing science. These two are aligned only sometimes.

Per Goodhart's law: "When a measure becomes a target, it ceases to be a good measure." [1]

[1] https://archive.org/details/ImprovingRatingsAuditInTheBritis...


> Conference does not imply gatekeeping.

Nor do journals.

I think it's straightforward to argue that many of today's conferences are as bad as journals; accepting submissions is only one way conferences filter. IRL they are prohibitively expensive to enter, let alone attend (but again, see "Zoom"), and therefore eliminate all but the elite; they are run by commercial entities in all cases with more than ~200 people; they are more or less required venues for networking, and therefore for selling yourself for a tiny chance at academic permanence; they give plenaries to elites (filtering to one voice); they have special symposia by invite only, with other submissions dumped into inaccessible parallel sessions (which one will you choose to see?); and the submissions you make are published in much more ephemeral ways and tend to be more difficult to discover in the long term, making the event important but the research less so (at least in my experience); etc.


Yes, there are many scam conferences that don't do any peer review. They are a huge problem. They waste researchers' time and money and exist only to extract dollars from people, not to advance science.


> Someone putting a crap article on arxiv does not hurt anyone else,

But having a browsable collection of the verified non-crap articles on any given topic helps most everyone working in that area.


PNAS still mentions the recommender.


> What the people who critique the publication process are missing: 90% of submissions are crap - unfit for publication.

Publication is antiquated. HN doesn't need reviewers to boost the best content or to provide commentary on how to improve a paper or fact check its contents. Join the 21st century.


We are on the cusp of being able to plow through vast quantities of literature and data in an instant using multimodal machine learning models. Journal articles are written for people. The landscape is changing. We are headed toward a future where scientists upload data and thoughts en masse into the cloud to be consumed by interpreter models that in turn feed back into the scientific machine. Data quality and attribution (scientist performance rating) would be allocated automatically by models.


The post made some points and you’ve said it was wrong to make them. Who am I supposed to believe? I would like to understand more about your position, but you’ve chosen not to actually address any of their points.

Sorry, don’t mistake my tone for condescension. I just wanted to explain how this looks to an outside observer.


The post is telling young researchers to ignore the main reason they went into research - scientific discovery (what the author calls "Science 1") - and focus on playing the academia prestige pissing contest game ("Science 2") in a cynical career optimization move.

Academic politics has always existed, of course. We should not be under any illusions that the greats of the past just wrote their manuscripts in isolation. But it was not an industrialized machine like it is now, and incentives in academia are misaligned to such a degree that the machinations of academic politics have killed the reason academia exists in the first place: scientific discovery.

To advance in the academic cursus honorum, you go to a prestigious undergrad to get into a prestigious grad school, so you can get a prestigious postdoc grant, so you can get a tenured position at a prestigious institution, so you can get a fat and prestigious government grant, which you use to hire bright young students who want to do the same. Note that scientific advancement plays no role in this cycle; it's actually safer to pursue incremental and irrelevant improvements, which you get published thanks to the connections you made throughout your prestige-optimization career.

As a result, academia has produced no notable scientific advances in a long time. It has instead evolved into an organism that selects for individuals who excel at funneling money into it under the guise of doing science, while not necessarily doing science (though occasionally, incidentally, it may).

The kind of behaviour the author is promoting amounts to telling individual prospective grad students: you're small, the academic-politics machine is big, yes, we all know it's a farce, but you just need to suck it up and play the game. In doing so, prospective grad students will strengthen the very machine that is killing the thing they want to cherish and promote, in the hope of receiving a few scraps in return.

(Writing all this as a PhD and former journal reviewer)


Hot take: the elephant in the room is that the firehose of easy discoveries has run dry in most physical sciences. Accordingly, the academic community is probably an order of magnitude oversized, but there is little incentive for it to scale itself down. As a result you see an academic machine idling.


“Everything that can be invented has been invented.” -- Charles H. Duell, Commissioner of the USPTO (1899)

He was just as wrong then as you are now.


I think of that quote every day, as I sit hoping I am wrong and await the warp drives :-D


Invention is a slow process filled with periods where progress feels like it has stalled. Then we have a breakthrough, sometimes major, other times incremental. Seeing it in hindsight is easy; living through it is hard, as we measure our experiences in seconds.


There's a pragmatic issue involved: A PhD student faces a high risk of getting knocked off of their track and leaving without a degree due to circumstances entirely outside of their control. Delays are super costly, both financially and by adding to the risk. This is a hugely asymmetric burden, and overshadows all other considerations. For this reason, the top three priorities of the PhD student are:

1. Finish

2. Finish

3. Finish


I would add: avoid any activities that don't add value to your thesis, and obviously weigh them against potential conflicts. Having your PhD a year earlier is better value than a year later with "better experience" (it depends on the situation, but having a written thesis makes things a little easier).


Indeed, and if you feel that there's more work to be done along the lines of your thesis, funded advisors will often let you stay on for a while as a postdoc. This can also help if you end up finishing without a job offer.


If you don't play their game, you are not allowed to play at all. Catch-22.


Stockholm syndrome. Only if the game is getting grant money out of that system.


For PhDs, the game is "finishing the degree".


That has always been an arcane ritual. Of course there are silly rules.


The idealized (Science 1) / realpolitik (Science 2) dichotomy is both real and, at first, depressing. I also did a PhD in machine learning, and became quite disillusioned after seeing how the sausage was made and how different the process is from how I had imagined it. At the same time, engaging in 'game change' within Science 2 (perhaps not as a PhD student, but after you have some security) is, I think, one of science's highest moral callings. The aim is not necessarily to inch Science 2 towards an impossible Science 1, but to help science take itself more seriously. It really is a messy social process, and there are ways that social process can work better or worse towards the public good (itself a scientific question). The goal is to contribute towards Science 2 becoming a better, and ideally better-at-self-improving, Science 2.


I really think a lot of it falls on the funding agencies. While communication is key, there are some real questions to answer. If a professor has 100 researchers under him and is churning out 2 papers per day, how much of an expert is he really? If his cousin is getting a PhD under him without doing the legwork, shouldn't he be fired? Should we keep funding the same person excessively? It doesn't help that professors often rely on immigrant labor, meaning they can have a real chokehold over the lives of their staff. I really think there should be upper limits on what a professor can get away with; they need to be answerable for how taxpayer money is being burned.


I think you're raising a very important point: it seems like everyone pretends that you can 'scale up' scientific research, when it's more like a service (which doesn't scale). One problem might be that the people allocating the funding also stand to gain from pretending that they can "supervise" infinite amounts of research with no diminution of quality.


During my PhD (2011-2014), I had naive dreams of changing science (http://offtopicarium.wikidot.com/v1:open-science-2-0). Instead, I understood that I cannot change the world, but I can change myself - and moved to data science (https://p.migdal.pl/blog/2015/12/sci-to-data-sci).

> Don’t try to reform science. Not yet. Not in your PhD.

So, I don't agree with that. Yes, you won't succeed, but without this kind of idealism, academia is doomed.


Maybe I'm nit-picking, but the article started out problematic for me with its placement of "maybe?" after the scientific method in its definitions. Without the scientific method as the basis, science is meaningless. Both of the ways they define the "idealized concept" are 100% applicable to religions and other lines of thought. It is the methodology that differentiates science from all the other ways of doing those things.


100%. I wish more people paid attention to the _methodology_. A lot of people define science as the search for "truth", when really it is the methodology that distinguishes it from religion. The process of hypothesizing and then subjecting a theory to rigorous, rational, evidence-based scrutiny is THE key differentiator.


Also the fact that a theory is refutable: a theory is not _the_ truth but the best explanation so far, and if something proves it completely or partially wrong, we move on to the new, better theory.


Yes, methodology is important, but it’s unclear that the way scientists really work has much to do with the naive description of the scientific method that’s taught in school.


It's not the methodology in and of itself. The key factor that makes science science and not just an arbitrary belief is falsifiability. No matter how you come up with an idea, if it's falsifiable then it is worth pondering because if it's wrong then it can be proven wrong.

It's only when we enter the ___domain of unfalsifiable things that we move from systems that can be challenged or tested into systems of belief. For instance, most of social science is not scientific, because the concepts are generally entirely unfalsifiable. The "Journal of Personality and Social Psychology" is one of the leading journals in its ___domain, and also one of the most frequently cited by the media due to its often catchy headlines. It's also one of the best examples of the replication crisis: only about 20% of papers published in that journal can be replicated.

Does that mean that 80% of the papers in it are thus fake and false? Nope. Because the entire ___domain is just completely unfalsifiable, so the complete inability to replicate the overwhelming majority of what that journal claims has done little to change its premier place in social psychology. It's just entertainment with some standard deviations attached.

So looping back to religion, the issue isn't the methodology. It's the lack of falsifiability. You simply cannot disprove the concept of e.g. a spirit because it's inherently immeasurable, untestable, and unknowable. Yet the lack of falsifiability does not mean false. For instance the exact same is true of consciousness. If I claim you're a philosophical zombie [1], you can't prove I'm wrong (or right), because the entire notion of consciousness is unfalsifiable.

[1] - https://en.wikipedia.org/wiki/Philosophical_zombie


> the concept of e.g. a spirit because it's inherently immeasurable, untestable, and unknowable

Do you mean by artificial means, or sensing machines, or something?

Because spirits are a metaphysical but very human concept, and I answer that they can be perceived, discerned, described, and known [perhaps not fully or objectively].

Faith and reason are both in operation for science and religion. There is a complex history of falsified doctrines, miracles, apparitions, and communities for mainstream Christianity, as I am sure all others.

You can't dissect a Eucharistic host or throw it into a Mass Spectrometer and find Jesus molecules, [unless you're Carlo Acutis] but if one billion people can tell the difference, who are we to judge?

It's better known than Prozac, and possibly more effective, what do you think?


Falsifiability is exclusively about objectivity and measurability. For instance I think it's fairly safe to say that every single human of working faculties would claim they are conscious. This doesn't mean humans are conscious, because it can't be falsified. Our philosophical zombie, for instance, would say exactly the same no doubt.

And I have no idea if the reference to Prozac was random, but yes, it is a good example. Its effect is not falsifiable at all; it's based on self-reporting. And controlled studies, particularly those not carried out by parties with a vested interest in the outcome, show its effect to be scarcely better than placebo. Some might cling to that with "well, that does mean it's still better!", yet Prozac has extreme and rather rapid-onset side effects, which make it basically impossible not to know whether you're getting the real thing. This completely ruins double-blinding, and could largely explain what little effect it does show.


There is an underlying precarity in the academy that is so deep it almost feels like a natural part of science. This is part of what makes reform so difficult. Early on, rocking the boat feels like career suicide. Later on, if you are lucky enough to become established, you are much less likely to feel like deep reform is necessary: after all, the system benefited you. And even then, the precarity doesn't go away. You still compete for grants like everyone else, and your trainees are attempting to become established. Why should they be the ones to shoulder the risk of, e.g., ignoring the glam journals and exclusively putting out preprints?


This seems like a tautology; by definition, most people are not so unique that they can't be replaced in a large enough organization.

This was different in the past because academia was vastly smaller back then, so practically everyone from post-doc on up was a literal genius, or close to it. And thus much harder to replace.


This is a terrible take.

Grad students try the hardest to change things because they are the most affected. The problem with the "table the issue for later" argument is that you just keep doing it, and we end up with exactly the system we have. Maybe it isn't something for a PhD student to do, but there's always something. Professors are still overworked.

There was a good post on BlueSky recently[0] that quoted from the instructions for reviewing PNAS

  The purpose of peer review is not to demonstrate proficiency in identifying flaws
I think this is an issue many have when doing any form of quality control: every single work has flaws, and every single work needs more. The problem I see, especially in machine learning, is that we are not focusing on what matters: validating hypotheses. This requires far more than looking at plots and tables. It really requires you to think about the paper you read.

But I think there's a fundamental alignment problem. An irony in ML, since surely this is far easier than the AI alignment problem. The purpose of publishing is to communicate. Are we actually doing that? Is our review process improving communication? Or is it just gatekeeping and blocking out voices? It is one thing to reject works because they communicate poorly, don't evidence their hypothesis, or are outright fraud, but why are we blocking anything else? This stupid notion of prestige? That's never going to end well.

Not to mention all the wasted time and money...

[0] https://bsky.app/profile/docbecca.bsky.social/post/3lkbec2hi...


Why is it a terrible take? You first have to understand a community in order to reform it. You can't just say "I actually have no idea how to do research in this field, have not contributed anything substantial, have no money or soft power, but let me tell y'all how to do better science according to my subjective and very limited understanding."


  > You can't just say "I actually have no idea how to do research in this field
I am a researcher in the field, I have SOTA models in ML, and I have a good number of citations for my experience level. Sure, I'm no rockstar, but neither am I below average.

I'm not sure why you jumped to the conclusion that I'm not part of this field.


Why do people go through the system, hate the system, then gleefully enforce the system (or an even more brutal version of the system) when they are at the top?


> Why do people go through the system, hate the system, then gleefully enforce the system (or an even more brutal version of the system) when they are at the top?

Survivorship bias.

The people who get to the top are those who can play sufficiently well by the rules of the system.


It’s called being institutionalized.

You can let it happen or resist its influence over you from the start. Maybe through reform. Maybe by doing your own thing.


This is a classic question asked about any abusive cycle. Why do children who were abused by their parents have a high likelihood of continuing the cycle through spousal and child abuse? The prevailing theory is that the behavior becomes normalized and people don't know how to do things differently. We are imitation learners. Capable of more, but this is built into us.

As another commenter mentioned: survivorship bias. "It sucks, but it's working, right?" We often want to convince ourselves of this because it can justify the bad stuff that happened; we rewrite it in our heads because that helps us not get depressed. But there's always room to improve, and I think that is the better aspect to focus on. Sure, what we're doing may work, but that is not a reason not to improve.

In fact, one of the most frustrating aspects to me is that it is the job of a scientist/researcher/engineer to find problems and then fix them. That's why I find maintaining the status quo so infuriating: it is in direct opposition to the fundamental framework we work in, which is to always be improving.


This is expected: people doing frontier research for humanity on shambolic wages, while their finances are controlled by stakeholders who know fuck all about science.


Growing up in a country where every single adult I saw was essentially part of a corrupt regime that never changed, it made sense to me to try to reform science right from the get-go, even if I did it somewhat stupidly. I had to recalibrate after coming to the West, where things are somewhat better.


I liked the piece. Interesting that the OP calls 'Science 2' what many call 'the scientific enterprise':

https://osf.io/preprints/metaarxiv/pkahc_v1

^ This is a review I co-authored of a meta-science book that focuses on the scientific enterprise. (The book is oriented more toward the life sciences than compsci.) Although my personal preference is the term 'praxis of science', I go with one of the more frequently used terms. But I do appreciate the enumeration approach, which inherently juxtaposes Science 1 vs. Science 2.


Well, I resigned my postdoc and decided to pursue "Science 1" as he terms it. Let's see how successful I am in a year or two...


Best of luck (not being sarcastic). Hacker News celebrates the risk-taking of startups for a reason: great leaps often require a risky jump. What you're doing is similar.

Please post about your experience! I'd love to read what happened.


Thanks! It's hard to describe, but ultimately I realized that neither academia nor industry is willing to invest in moonshot type ideas. Moreover, academics have to worry about funding and tenure (in addition to teaching and service). Funding is easiest when you propose research topics that people believe stand a chance of success. For wacky ideas it's difficult to make that case.

Anyway, I have enough money to fund myself for at least a couple of years, so my goal is to make the best of that time and see if I can upend the dominant paradigm in NLP (humans don't need nearly as much data to learn; I'm going to pursue ideas that let computers be similarly data-efficient).


Article seems to call for passively accepting the status quo until one is an associate professor, at a minimum. That’s a long time to wait, a lot of dues to pay. You can’t point out the naked emperors unless you wear a crown yourself. If you can’t be a frustrated idealist in grad school, when can you be?


I did my PhD 2004-2007. I'm not sure whether it was the subject (pure math), my particular advisor or just being two decades ago but I was never aware of any pressure to "publish or perish". I didn't end up continuing in academia post-PhD, but my advisor certainly wanted me to do so, so would have demanded more publication if that was necessary. I'm thankful that I was able to just focus on the quality of research (type I science) and publish when it made sense.

Having said that, I now work in industry research and frequently find myself having to remember and remind others (particularly junior researchers) not to fight the system of industry research too hard if they want to actually get anyone to take notice of their research.


Something rubs me the wrong way about this article. I struggle to articulate it, but I think it's the way success in science now requires political ability.

It's rare to find people who are both good scientists and good politicians.


True, but I also see the wisdom. As a PhD student, chances are you are overworked, in the clutches of the institution, dependent on the higher-ups, and being mentally crushed by it all. (Source: I know, and have known, people in such programs.) Trying to reform on top of that, while being a nobody, has lots of costs and not much expected value.

I would focus on staying sane psychologically and socially, and on not feeling that I'm wasting my years, as much as possible. Though if you can participate in some movement of your colleagues with potential for positive change, do that. Just don't overburden yourself alone.

I kind of think academia needs to be reformed and unwound from the outside. But the news increasingly convinces me that the 'outside' might be unable to do it in ways that aren't appallingly dumb. So my faint hope is that the much-maligned "gen Z work ethic" will eventually force some change, and the old academia barons will just die out.


Is that something wrong with the article, or is it something that the article is right about, which is wrong in academia?


Likely the latter. But the article expresses no interest or desire in changing it.


Basically it covers only what you should not do as a new researcher, without getting into what you should do and when. So it skews towards cynicism.


It is the stated assumption that competition for limited resources will lead to the most qualified person getting the position. This is true only in the most tautological sense. The biggest sin of Science 2 is that all the survivors believe they were also the most qualified to practice Science 1.


As someone who left their PhD, what rubs me is how accurate it is, lol.


> While lone wolves can go off and do Science 1 on their own, if you’re reading this, that’s probably not you.

At the moment, yes it is; but when HackerNews buggers off, things will be back to normal.


In my day to day I write code libraries. Some of that code is boring, but some is making something very simple out of something very complicated.

Since I do this commercially, not for science, and since I don't agree with software patents (on principle), I don't "describe" the method anywhere.

Now it's possible that in that code is some novel thing. Perhaps an insight, perhaps an algorithm or technique. (I don't really think there is, such is my modesty, but it's possible.)

Obviously from a "science" point of view, my work is meaningless. And perhaps that speaks to the point of academic science versus the "real world". Their goal is to "write stuff down" - not "do stuff" as much as "record stuff".

In the context of the above quote, I believe we might be doing scientific things (even novel things) but if we're not actively sharing that knowledge it's not "science".

Now of course lots of us -do- document things in blogs etc. But this informal writing on the internet is adding to a haystack, in which there might be gems, but how can they ever be found?


> Now of course lots of us -do- document things in blogs etc. But this informal writing on the internet is adding to a haystack, in which there might be gems, but how can they ever be found?

Give Copilot and the others some time to chew through GitHub, I guess. Perhaps one day, an LLM will write the successor to Chrome.


I suspect we may have hit peak LLM. Or if not yet, then just about. The AI that is capable of writing Chrome will be something other than an LLM.


There's no peak, just an eternally slowing climb that still tends towards AGI, though probably not before the heat-death of the Universe.


While that may well apply to the overall AI program as such, I don't think it describes LLMs.

Throwing more parameters at LLMs is no longer yielding appreciable improvements, and once you have thrown all the data you can find at them, that dimension is done as well.
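
For what it's worth, the published scaling-law fits make the same point quantitatively. A minimal sketch using the Chinchilla-style loss form, with constants roughly the fits reported by Hoffmann et al. (2022); treat the exact numbers as illustrative, not authoritative:

  # L(N, D) = E + A / N**alpha + B / D**beta
  # N = parameters, D = training tokens. Constants are roughly the
  # Chinchilla fits (Hoffmann et al., 2022), used here for illustration.
  E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

  def loss(n_params, n_tokens):
      return E + A / n_params**alpha + B / n_tokens**beta

  D = 1.4e12  # fix the data budget at ~1.4T tokens
  for N in (1e9, 1e10, 1e11, 1e12):
      print(f"N = {N:.0e}  predicted loss = {loss(N, D):.3f}")
  # Each 10x in parameters shaves off a shrinking sliver of loss,
  # and the E + B / D**beta floor set by finite data never moves.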


Startups, when done well, and with the right attitude, are science 1. Especially if you operate off of design thinking and jobs-to-be-done type principles.

Startups work best when the product fits perfectly into the market need. This requires meticulous investigation into the scope and shape of the market need as well as what approaches (business model, product, onboarding, value prop, etc) best satisfy it.

Observe -> theory -> hypotheses -> test -> repeat


With the author's emphasis on Science 2, I found quite provocative an earlier post of his: https://maxwellforbes.com/posts/your-paper-is-an-ad/

Submitted it for discussion at https://news.ycombinator.com/item?id=43403596


What a PhD student hopefully gets from an advisor is more targeted advice than the type of boilerplate generic advice in this article and others like it.

An advisor who knows what the student wants to accomplish, and what they are capable of accomplishing, should be able to determine when building a working system is more valuable than pushing out yet another paper, when reforming science in a small way is likely to succeed, and so on.


The logic “BERT came out and didn’t follow norms. The establishment awarded it best paper because it was a revolutionary result. This shows how the establishment values following norms over revolutionary results” doesn’t scan for me.


Speaking as someone who left tech to get a Ph.D. in a non CS field...

Broadly speaking, I agree with the author's point that one needs to learn the rules of the game before trying to futz with them, which means one will ultimately be more effective learning the ropes in the first few years of a Ph.D. program. Afterwards, one will be in a much better position to change things.

One big issue I see is that the skills academic training engenders are almost orthogonal to management. And, unlike most of human history, we are now in an Information Age, with both private and public knowledge-production economies. The private knowledge economy (i.e., tech, broadly writ) uses many practices that are barely heard of in academia. Minor case in point: project management software is not the norm, at least not in my field.

For those who are interested in this topic, there's a very interesting set of proposals for how to bring "Science 2" much closer to "Science 1" in Michael Nielsen's and Kanjun Qiu's monograph/book "A Vision of Meta-Science." [1] Fair warning: it is very, very long. But the first part is quite short and proposes a number of interesting Science 2 reforms that should interest HN readers: tenure insurance (proposed by none other than Patrick Collison), funding by grant-rating variance, etc.

I'm still finishing the essay, but so far it's the best thing on the state of science I've read to date.

[1] https://scienceplusplus.org/metascience/


Has "science 2" actually acomplished anything worthwhile.


Something about the website is melting my phone while I try to read the article. I think it's maybe those animations? I couldn't finish reading because the tab froze.

