>I say that’s also important in giving certain types of government advice. Supposing a senator asked you for advice about whether drilling a hole should be done in his state; and you decide it would be better in some other state. If you don’t publish such a result, it seems to me you’re not giving scientific advice. You’re being used. If your answer happens to come out in the direction the government or the politicians like, they can use it as an argument in their favor; if it comes out the other way, they don’t publish it at all. That’s not giving scientific advice.
We saw this just recently when the EU tried to hide a study that showed piracy actually increased game sales. I wonder what other studies are being suppressed because they don't fit the narrative.
> I wonder what other studies are being suppressed because they don't fit the narrative
In Nature / Science / Cell, your article submissions are likely to be rejected if they go against the ideas supported by highly cited researchers in your particular subfield, as these will be the reviewers.
Just see what happened in the case of, e.g., Alzheimer's disease [1]. Eventually, new ideas push through, but it takes too long due to the issue I mentioned above.
Claiming that new drugs with exclusive patents are always more effective than out-of-patent drugs is another cargo cult theme promoted in many highly corporatized biomedical research centers.
Demonstrating that the drug discovered by your colleagues down the hall (who are pushing for their little startup biotech outfit to be acquired by Pfizer or Merck or Gilead, in an effort backed by the institution's IP office and administration) is less effective for a given condition than a drug discovered 50 years ago would be considered impolite, and would lessen your chances of getting tenure and having a successful career.
For example, molnupiravir and nirmatrelvir (the antiviral component of Paxlovid), heavily promoted for COVID treatment, may be no more effective than older out-of-patent antihistamine drugs:
> "Since antihistamines seems to hold a crucial prognostic role in the management of Covid‐19, there is a need to identify and repurpose some potential antihistamine drugs. One study suggested diphenhydramine, hydroxyzine, and azelastine to be considered in repositioning, then further research in them. [17] Due to its potent, less side effects, rapid onset of action, specificity, antiallergic, and anti‐inflammatory properties, [24] Cetirizine might be an important drug of consideration in managing Covid‐19 patients at the moment compared to other antihistamines or histamine receptors (H2, H3, and H4)."
Corporatization and the profit motive have an undeniably corrupting effect on scientific integrity, that's clear enough.
One of the issues here is how intellectual property works for pharmaceuticals. A new drug is patentable. An old drug already has generic competitors. We allow doctors to prescribe a drug for just about any reason (off-label use is legal), but only the new ones get investigated for effectiveness (because they are the most profitable as there is less competition).
The fix: recognition of the massive IPR imbalances, and adoption of national-strategic funding for drug development with no IPR attached.
If the corporates are now gaming the system, then return it to a public utility function, and pay for drug development out of the public purse.
It is arguably true that we'd plateau at "good enough" drugs, and for things like insulin maybe not push hard for the incredible advances of rapid-acting insulin over reliable, understood, cheap, ubiquitous insulin. So I can see there are downsides, but it's fundamentally health economics: if you want to argue that the IPR justifying $1000+ per month insulin shots is what motivates making the best insulin out there, I think you're also arguing for a really bad outcome, like my former hypertension drug, which was an isomer twist of a well-understood COX inhibitor just to get the IPR renewed. Somebody really had to work hard on that one.
Sabine Hossenfelder basically posted a rant on YouTube last week about particle physicists and how they are doing science wrong.
Basically she said you are supposed to change your theory based on results that don't match it. What particle physicists have been doing for decades is looking at gaps in the results and imagining particle theories that don't alter the expected results at the currently explored energy levels, only at the unexplored ones. Then, when people get funding to look into these frontiers/gaps and find that everything looks the same, they adjust their theories to push them further out into the gaps.
That's not Science, she emphatically states.
For myself it sounds like saying, "well you don't know Unicorns don't exist because we haven't looked everywhere, what if they're in <unexplored forests in the Congo>?" Then the Congo gets explored and they say, "well maybe they're in Antarctica".
Eh, your analogy isn't apt. The problem in physics is that the space of possible theories has been searched so exhaustively that what these physicists are doing is just about the only choice.
To make your analogy close to physics -- imagine you found a unicorn horn and can tell from the biological makeup that it is indeed a horn grown from some sort of horse-like mammal. If you searched 99.9999% of the earth for animals that might have grown that horn and found nothing, then the best thing to do would be to continue looking within the remaining 0.0001%.
We can't figure out what comes after the standard model of particle physics, but we know it's wrong. E.g. we know black holes exist and something has to happen within them, but we don't have a theory that can predict what that is. In other words: we found a unicorn horn, so we have to keep desperately searching for the unicorn.
Hossenfelder is right that it's shitty to have to keep pushing the energy boundaries in the hope of seeing some interesting behavior. But there's literally nothing else to do. We've searched 99.9999% of earth and there's no unicorn. Do we start denying the existence of the unicorn horn that we're holding?
> The problem in physics is that the space of possible theories has been searched so exhaustively that what these physicists are doing is just about the only choice.
Can't say I agree with that one. I think we're capping out on the physics that Einstein and his ilk laid out for us to explore. This is an ideal time, in the next 20-40 years, for a new revolutionary genius to figure out the next big thing.
I'm not going to make any sci-fi prognostications, but I have a feeling there's a new threshold coming up, one of those unknown unknowns that we'll suddenly figure out thanks to a flash of insight by the right person in the right place at the right time.
Counterpoint: there are major unresolved puzzles in symplectic geometry, and every indication is that they touch on deep physics. It could easily take research mathematicians fifty more years to build a clear picture of what's going on.
The state of physics these days is we have 100 physicists of Einstein's calibre alive right now, but not enough experimental data for them to make any headway on fundamental questions. For all we know, someone already published the theory of everything, but because there was no way to test it, it got ignored completely.
I don't know; the amount of experiments and work vying for funding is far bigger than what can be funded, and if a big chunk of the funds is allocated to building bigger accelerators, then a lot of other experiments or work might not get funded. I'm not saying it's better to fund other projects over the next big accelerator or dark matter detector, but I don't think it's right to say there is nothing else to do besides that...
Serial contrarian jumps on table and shouts. More at 11.
A telltale sign of a crank is attacking career scientists, which is her brand. What insight is she offering? Destroying is easy, creating is hard. Lies spread faster than truth. In general she is a force of confusion rather than enlightenment. It's too bad because she seems capable, if only she'd follow through rather than chase headlines.
She’s not the only or first one to criticize contemporary physics funding, and the focus on mathematical beauty to guide new theories. String Theory received a lot of blowback for how much time it sucked up without empirical results because the math was very appealing. Funding is finite, and so are careers.
Any reason to frame this argument as contrarian? Apart from your decision to label the author so - presumably for the sake of legitimizing your argument?
That would've been a great reply had the title of the main post not been "Cargo Cult Science".
For me, a "career scientist" is someone that publishes papers solely to hold a position at an org / institute / uni / etc (yes, yes, everyone needs food on the table... not the point here. One can still have integrity). The career scientist couldn't care less about being right, only about how many citations he's gonna get and whether anyone's going to discover any flaws in his paper.
>The career scientist couldn't care less about being right
What you are describing is philosophically, definitively, not science. The career scientist is a scientist who is committed to the scientific method. The career crank is committed to selling the notion that they have all the answers. That could be a citation chaser or a youtube contrarian.
I agree. But unerringly honest people are vanishingly rare, and the second someone with integrity encounters a field of research that could undermine others' research, there is a silent outgrouping and sometimes expulsion. Some science just isn't allowed to breathe.
Why? Do you believe the same is true of other occupations as well? Does a doctor just care about how many prescriptions they write, or a police officer about how many people they arrest? Have you considered that people aspire to goals other than money, despite being paid?
>Does a doctor just care about how many prescriptions they write, or a police officer about how many people they arrest?
Someone hasn't heard of ticket quotas before, have they? Or of the pharmaceutical companies that for years effectively bought off doctors with expensive trips based on how readily they prescribed the drugs being sold. We've had to add regulations to both of these fields to prevent exactly this from happening, and in many places it still does.
Honesty and playing by the rules is often the ___domain of the new, shiny-eyed initiates. Unfortunately, not having integrity is often easier for most people.
Lol not trying to be snarky but if you think academics push out papers for purely monetary reasons then I suspect you are very far from academia in your day to day life. Of course PIs have goals besides money! If their main goal was ever money they're an idiot for sticking to the university lab, and it's not like science profs didn't have other career opportunities available.
That doesn't change the fact that the most common non-monetary goal of academics is in fact pretty well aligned with behaviors such as: publishing very frequently/flag planting, farming citations on old publications, exaggerating work in presentations and media press releases, etc. Not all of these behaviors are inherently bad but they easily can be and too often are taken to an extreme.
Said goal is of course intellectual credit, or from a more cynical lens pure ego stroking. And it is indeed a rampant non-monetary incentive in academia.
Anyway, I don't see the problem with someone who has recently pursued a PhD or beyond in academic science ripping on "career scientists". This is the sort of (mostly) informed criticism that science should welcome. Though some are overly negative, they usually still have relevant insights into problems in their field, and it is silly to throw the baby out with the bathwater.
That is markedly different from someone that frequently criticizes scientists from the outside while lacking any actual insight into the relevant technical complications, especially if that person is casting a wide net in what they criticize.
The goal of a lot of modern scientific criticism is to further the conversation amongst those presently in the field, and help those adjacent to the field make more informed decisions - including prospective/very early career trainees, potential interdisciplinary collaborators, and anyone that has a role in directing possibly relevant funding opportunities. If criticisms get co-opted by other individuals for irrelevant goals that is unfortunate but it does not make the criticism invalid.
It's also somewhat difficult in the social media age to avoid commentary that gains steam in the scientific community from reaching broader public exposure, which indeed opens it to misinterpretation. But the alternative of walled off information a la the old school journal publication system is obviously problematic for other reasons.
That is a long-winded way of saying that I think some amount of harsh, publicly viewable criticism is sorely needed by a number of scientific fields right now, including my own (very much not physics). The fact that criticisms which, from what I can gather, were initially internally directed eventually gained such external exposure tells me that the message resonated with many scientists, even if just earlier-career ones. Whether this person has started to profiteer on their cynicism is tangential to the validity of the original core message IMO.
Things like this make me want to read Against Method [1]. Like, yeah that seems 'not science' in one serious sense, but also not really? It falls into what I understand of Kuhn's 'normal science'. Slightly adjusting their theories to explain existing data is exactly what you should expect scientists to do, until the gaps get so big no adjustments are sufficiently concise.
For a defense of the 'unicorns in unexplored forests', that was pretty much the approach that led to the discovery of most of the standard model.
Plus, given dark matter and dark energy, we know something must be out there; even if it's not unicorns, there's something.
> Plus, given dark matter and dark energy, we know something must be out there; even if it's not unicorns, there's something.
Because science is paradigmatic, this is the problem. We do not know something else must be out there, because we have no direct evidence of either Dark Matter or Dark Energy, which, btw, have absolutely nothing to do with one another. We might just not yet have the intuition to understand what is causing our perceived need for these labels.
I, for one, am absolutely convinced Dark Matter is absolute bunk and will be widely accepted not to exist within the next 50 years. The initial reason for positing Dark Matter, to explain the rotational velocities of spiral galaxies, which appear to have more matter than we can see, has been addressed: the original observation treated the galaxy as an isolated inertial frame without considering the cluster of galaxies around it, which can explain spiral galaxy rotational velocities without any need for Dark Matter. But because science is paradigmatic, and even in the Space Age correct information takes decades to disseminate, many if not most are still convinced Dark Matter is needed, even though no one knows what it is. It's nothing, really; it's a phantom, imagination gone haywire. Also, as other unexplained observations occurred, these too got lumped into Dark Matter, and galaxy clusters cannot explain those. So the problem becomes the shame of not just saying "we don't know, we don't understand this," and instead labelling it something, to fake that we do understand it. This is a flaw of modern science, or at least of cosmology: the unwillingness to accept that we don't (yet) know the precise reasons for what is observed.
That said, sometimes it's worthwhile to try something completely different. If all we did was "adjust", we would still be using epicycles to describe orbits.
Feynman mentions a similar point in QED when he talks about the "science" of proton decay. The proton hasn't had the courtesy to actually decay in an experiment, so all they do is keep publishing increasingly high lower bounds on the proton's half-life.
It's fascinating that someone can make a career out of a phenomenon not being observed.
Folks talk about the lack of support for things like replication studies and the publication of negative results in the life sciences, as contributing to the widely proclaimed replication crisis.
Establishing a solid negative result that rules out hypotheses is worthwhile if unglamorous.
Citing a lower bound on the half-life (equivalently, an upper bound on the decay rate) is all that can be credibly reported. It's how you put an error bar on a rate that's non-negative but consistent with zero.
Sure, but when an event supposedly happens on average less frequently than once every 10^32 years maybe it’s time to consider the possibility that the sought phenomenon never occurs. At least that appeared to me to be what Feynman was getting at.
The life sciences equivalent would be a study of unicorn sightings. None so far, but they could be out there so please fund my research!
Placing a lower limit on the half-life means the corresponding upper limit on the decay rate still includes zero, so they are considering the possibility.
Whether to fund difficult research, when it's reasonable to give up, and when it escapes beyond reasonable definitions of science, are separate questions. Physics is wrestling with these questions in a number of areas, such as the search for an empirically testable theory that reconciles quantum mechanics and gravity.
The life sciences have no equivalent. Precision measurement is not their bag, which is OK. Physics is the first to arrive at such a roadblock. Maybe the search for proofs of some famous math conjectures is an analogy to draw guidance from.
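To put rough numbers on the bound-setting described above: with zero observed decays, Poisson statistics give an upper limit on the decay rate and hence a lower limit on the half-life. A back-of-the-envelope sketch (the 1e33 protons and ten-year watch are illustrative assumptions, not any real experiment's figures):

```python
import math

def half_life_lower_bound(n_protons, years_observed, cl=0.90):
    """Lower bound on half-life when zero decays are observed.

    With zero events, the Poisson upper limit on the expected count
    is mu = -ln(1 - cl), about 2.30 at 90% confidence. The per-proton
    decay rate is then bounded by lambda < mu / (N * T), and the
    half-life by T_half > ln(2) / lambda.
    """
    mu = -math.log(1 - cl)                          # limit on expected decays
    rate_limit = mu / (n_protons * years_observed)  # per-proton rate, per year
    return math.log(2) / rate_limit                 # half-life limit, in years

# Watching ~1e33 protons for 10 years and seeing nothing:
print(f"T_half > {half_life_lower_bound(1e33, 10):.2e} years")
```

Each extra order of magnitude in protons or observation time pushes the bound up one power of ten, which is why the published lower bounds keep climbing while a rate of exactly zero is never excluded.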
Something is there causing those phenomena. People have a theory that it may be unicorns, so they are out looking for unicorns. Those are supposed to be hard to find, and people are still searching in the hardest possible places that could hold them. It may also not be unicorns, in which case nobody has any idea what it may be. Some people are trying to build such theories, but none has a good one yet.
On this level of detail, there is nothing wrong at all with any party of the above. All of the problems are with finer details.
It matters when the limited resources are spent chasing unicorns that are vanishingly unlikely to actually be there.
That is part of the argument. Those dollars could go to places other than giant particle accelerators, which haven't shown much since the Higgs boson, and no theory offers more than a guess that they will.
The thing is that this pattern of pushing to unexplored energy levels did work before. Unicorns were found in the unexplored forests of the Congo. In fact an entire zoo of unicorns of all shapes and forms was found, an incredible achievement that stands among the most important things our gray-tinted gel has managed to internalize about the universe. You can't blame modern-day particle physicists for being born late to the unicorn-chasing party...
So the argument now is: for how much longer is it reasonable to keep roaming difficult terrain in the hope of a repeat miracle? It's practically impossible to answer. "All unicorns have been discovered" has been a scientist's cry as far back as Laplace.
> You can't blame modern day particle physicists for being born late for the Unicorn chasing party...
1950s, 1960s, 1970s: particle physics was a fast-moving field, where theory and experiment moved hand in hand. Guided by an intuition for what is mathematically elegant, you propose an extension of current theories, and soon someone in experimental physics with a particle collider publishes results that confirm your theory. Or the other way round: experimental physicists publish new collider results, and theorists rush in to explain them with new theories.
But you had to be fast, and greedy (in a good way). If you didn't work out and publish the new mathematical expansion of current theories, someone else surely did.
It was also a good time to be a scientist in academia: Population was growing (15% per decade in USA, 10% per decade in Europe), economy was growing, and graduates from a previous (smaller) generation found jobs and careers in the expanding university system, to teach the next (larger) generation of students.
The standard model was completed in the 1970s. Theoretical particle physics was a well-oiled, battle-tested, fast-spinning machine, ready to continue the mad gold rush of the past 30 years: follow your intuition of mathematical elegance, propose new theories, wait for experiments to confirm them.
But experiments stopped finding traces of new particles. The standard model of particle physics was complete, and apparently all its particles had already been discovered. Well, you had to wait a bit longer for a few: the top quark (1995), the tau neutrino (2000), the Higgs boson (2012).
The fast machine of theoretical physics kept rotating and spinning out mathematical structures and theories. But experiments were no longer confirming, or even guiding, the development of theory. The mode of work that had worked beautifully up to the mid-1970s didn't work anymore. Supersymmetry made predictions, but all turned out to be wrong. String theory was such a complicated mathematical construction that we don't even know if it makes predictions (in the Popperian sense) at all.
So yes, we can blame particle physics. What first worked for 30 years, hasn't been working anymore for 40 years. You should try something different.
A good summary of key attributes of that journey (why I like HN) but I would insist the conclusion doesn't follow directly and is rushed.
We know that we are far from having figured out all fundamental physics: All the "dark" stuff in cosmology points to the pieces of the puzzle not fitting well together, if at all.
The question is what sort of local experiments can help lift that next veil of ignorance, and whether we are collectively capable of imagining and pursuing them.
The issue with local physics is that it is what it is (as Feynman who triggered this thread might say) and what is accessible to us to explore does not match to perfection the socioeconomic convulsions of homo sapiens.
The high-energy accelerator physics paradigm (which is but one of the possible local physics windows) may have been exhausted for our current circumstances, but I would bet not in general. The pattern of all physical systems we know is that if you keep pushing them, they keep revealing new behavior. We may have hit an energy gap that requires too many resources to cross. But maybe the only resource missing is imagination. Who is to tell?
What is my conclusion? I would agree the success patterns of the past will not repeat on a cookie-cutter basis. And theoretical physics is certainly gyrating uncontrollably in the absence of experimental anchors. This only raises the bar for people to do a better job at connecting the dots.
The last time we found something that was predicted by the Standard Model was 1983, wasn't it? That's 40 years.
edit: her complaint is that once funding sources realize that 'particle physicists shill ridiculous ideas for research grants', all of the money dries up and won't return for at least a generation or two. That's an existential threat to the field. That sounds like a legitimate complaint to me. Unfortunately she's also shining a spotlight on the thing she's worried about other people noticing.
> The last time we found something that was predicted by the Standard Model was 1983, wasn't it? That's 40 years.
IANAPhysicist so correct me if I'm wrong but the last time we found something predicted [1] by the Standard Model was the Higgs Boson in 2012. Before that it was the Tau neutrino and the quark-gluon plasma in 2000, the top quark in 1995, and then (the discovery I think you're referring to) the W and Z bosons in 1983
Also not a physicist, but I think the distinction being made here is that Higgs et al. explained existing phenomena that did not make sense within the existing theory, and, oh by the way, if we're correct we've also created a hole in the standard model where a particle exists with certain properties that were a bit nebulous.
For instance, general relativity explains gravitational lensing and predicted that frame dragging would also be discovered, but also explained other problems. The theories that make a physicist's day seem to tie the past to the present and the present to the future:
This happened, we don't know why. Here's a theory that explains those measurements. Here's a prediction we can test now/with a billion dollars that will corroborate my theory, and here are some new consequences of the theory that require <future tech> to be sure that this is 'true' instead of an accident of math. Like atomic clocks or giant particle accelerators.
Higgs et al. didn't just predict the boson though, right? They explained other properties of particle physics, and their model also predicted a particle.
Is that a lot? On what basis could we decide? (1) I am not saying it isn't; I am just saying it is a rather subjective call, which looks at the prior pace of discoveries and extrapolates a judgement.
(1) it took more than that from the first gravitational wave detector to an actual discovery
I'm going to start repeating myself in separate replies here.
Einstein didn't predict gravitational waves as a standalone theory. Gravitational waves are a prediction of General Relativity. General Relativity was shown to be plausible based on data already at hand and data they could collect.
Gravitational waves and lensing and frame dragging all confirm what we already suspected to be true. Which is good and bad news, because they increase the corroborating evidence for an existing theory, and they take out huge chunks of opportunity for some other theory to explain the same phenomena while introducing new loopholes or consequences that would let us do new things, like build Foundation power supplies (nuclear reactors the size of gumballs) or FTL travel, or explain why galaxies move 'wrong'.
Physics is not sci-fi. As the good professor might say, it's stranger and always more surprising than the bizarre combinatorial pseudoscience our brains might concoct in the absence of experimental guard rails.
I don't see what the problem with Sabine's analysis is? I thought it was great.
Shouldn't particle physicists have more concrete expectations of what they will find, rather than "we want to find X, and we are going to search the entire energy frontier to find it"? The problem is that the energy frontier is limitless, so the ROI could be extremely poor unless we have a good idea of what to expect.
If we knew what to expect, it would be engineering not science. The fact is, we have tried having expectations, most notably supersymmetry, and those haven't turned out very well because we are literally venturing into the unknown. Part of the fun is that sometimes a wild theory works out (GR, Higgs) and sometimes it doesn't.
I think if we push the energy frontier one more time, there's no real reason to expect us to find anything -- but there's also no reason for us not to find anything.
Have you been reading the threads about the reproducibility crisis that pop up here regularly? Essentially some people worry we have more people who need to write papers than we can sustain with our current research horizon, and people are making shit up or at least being overly optimistic.
As Feynman put it, "The first principle is that you must not fool yourself, and you are the easiest person to fool."
To all reproducibility problems I say “publish more”. Do you plan to do a study? Register it and publish your methodology before data collection. Have you collected data? Publish it. Have you finished the study? Believe it or not, publish it.
Do you want to do a 20 year long study on a particle collider built specifically for you right after finishing your PhD? Well, sorry, you have to build your reputation so that society sees that you are not one of those people who “are making shit up or at least being overly optimistic”.
> For myself it sounds like saying, "well you don't know Unicorns don't exist because we haven't looked everywhere, what if they're in <unexplored forests in the Congo>?" Then the Congo gets explored and they say, "well maybe they're in Antarctica".
As a counterpoint to this: by all accounts Millikan had worked out the charge on an electron but had not validated it by experiment. Knowing what the result should be, he then designed an experiment to arrive at this result. Not very scientific at all.
Hypothesis: the charge on an electron is X, as predicted by Y. Experiment: observe the charge on an electron. The experiment could have shown the charge to be other than X: in which case (assuming repeated experiments confirmed it) the hypothesis was wrong and a new one would have been needed. "Y" wasn't invented out of nothing: it was built on other observations, hypotheses, and theories. Seems pretty scientific to me, but I'm biased (since I'm a physicist).
I think we (physicists) are eventually going to have to realize we're at the point where the models we have should be taken for what they are (incomplete), and re-examine the data and construct new hypotheses to design experiments for. I'm certainly not the physicist to do that (I haven't been practicing for almost 15 years now, and even when I was, I was not the Einstein-caliber physicist to do it), but there will be someone or some group who does. I hope soon, as I'd like to see it!
You see this constantly in information technology contexts as well. People who, at work, are required to collaborate with many others on a large-scale project, and who then bring Kubernetes-style containers and ideas about "deployment" home to their personal webserver. And this would be fine, silly recreation, if only they didn't write up "How to host from home" articles describing their k8s/etc/etc setup as if it were required in this new context.
Basically any time you see the word "deployal" outside of a business or institutional IT context you're looking at cargo cult behavior.
But it is not only people cargo-culting at home; it is also people doing it on the job. Instead of looking at the various alternatives and truly evaluating them, weighing the benefits and downsides each brings, people jump to whatever is currently hyped or fashionable and then justify it with a circular argument from popularity.
Instead of jumping straight to Kubernetes whenever the talk is about clusters, people should ask whether the conceptual overhead is actually worth it. They should ask whether the Google-scale tool is truly appropriate for what they want to do. Similar situation, of course, with all the hyped JS frameworks, when a simple statically rendered website/page would do.
Basically it is not engineering any longer, but cargo cult and hype. And that is how shitty products are made.
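For contrast, here is a deliberately boring sketch of what "deployment" for a personal static site can amount to: the Python standard library alone, no containers or orchestration. (The in-process fetch is just to show the server answers; a real setup would simply run the server and walk away.)

```python
import threading
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler
from urllib.request import urlopen

# Serve the current directory on an ephemeral port.
handler = partial(SimpleHTTPRequestHandler, directory=".")
server = HTTPServer(("127.0.0.1", 0), handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Fetch the root once to confirm the "site" is up.
port = server.server_address[1]
status = urlopen(f"http://127.0.0.1:{port}/").status
print(status)  # 200
server.shutdown()
```

That is the entire stack for a hobby-scale site; everything beyond it (reverse proxies, TLS, containers) should be added because a concrete need appears, not because the big-company playbook lists it.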
I find myself thinking of this speech more and more often. It seems to apply to more and more of society, as I see it. The scariest versions I’ve personally come across is Cargo Cult Medicine and Cargo Cult Courts…
There isn't even such a thing as "the methodology of science". People disagree about it and it is constantly evolving. Who should write that curriculum?
Sure, there's a lot of common ground that could be taught (and it is taught in some places, we read some Popper in high school). But it's also very abstract, difficult to teach without extensive examples and of very limited use in most jobs. Hardly "the central problem".
If science had no methodology, it would hardly be able to produce results?
Of course it has one, and people reading "some Popper" in high school, evidently without understanding any of it, exemplifies my point.
(By the way, evolution does not mean constant change of "everything", but gradual adaptation with most stuff staying constant)
The scientific method is a learning algorithm designed to discern truth. Claiming that it is "of very limited use in most jobs" says all kinds of things about those jobs; it doesn't say anything about its obviously paramount importance.
All that your comment says is that you disagree and think I'm stupid. Care to offer any object level arguments? For example, do you think making each university student take three lectures of philosophy of science would be better than the current system? What would it achieve and why do you think that? Did you learn something from reading and understanding Popper that you didn't know previously and that helped you in doing something you cared about?
I don't see it as a general public problem but I actually do see it as an ongoing scientific training problem. Biology PhD programs these days spend a lot more time teaching prior biology results than they do having students engage with what it means to be a scientist, in the limited time that they teach at all.
"Students" are then sorted into a lab where they learn the specific scientific cookbook of the subfield and in some cases never even try to apply it truly on their own before graduating, because they are used as a lab monkey for 4+ years. Collaborative side projects or substantial additional course work are rare, so students are often functionally the same as a lab employee.
The lab has minimal external incentive to train the person to become a good scientist, and there isn't much cultural pressure either, so the training focus is instead to become a good [hyperspecific topic] lab member.
To graduate a student needs a deep understanding of what was previously shown in their niche, but they really do not need to engage with much at all to do with meta science, and it is wholly possible to graduate without ever demonstrating the ability to ask good scientific questions from the genesis of a project through to its completion, even in the context of the norms of said niche subfield.
Student experiences of course vary widely, and even for those that do the menial slog version of a bio PhD, the attitude surrounding it varies. Some seem to be happy for the guidance when their advisor is like this while others resent the lack of intellectual freedom.
Regardless, I find this to be pervasive in the environments I've been able to observe, even at a few of the supposedly top life sciences PhD programs in the US. To think deeply about meta science in any official capacity as part of PhD training requires you to go out of your way.
Meanwhile on the other end of the spectrum it is possible to receive a "top" PhD and stay in academia for quite a long time through only a combination of hard work and learning the experimental best practices of the field as if it were a trade (and that might be underrating the nuance in the trades honestly, I don't know enough to say).
I do think it is a difficult problem to avoid to at least some extent, because open questions in biology increase in complexity quite a lot over time, with a big time and financial commitment required to do a number of projects these days. Without radical changes in labor sources for wet lab work I'm not sure the issue can be sufficiently addressed.
I also imagine that even in less expensive/slow fields there is an increasingly unapproachable body of prior knowledge to learn, and people don't want to be trainees forever. So I believe a large philosophical paradigm shift might be needed too.
I'd love to see multiple different types of PhDs introduced with different graduation requirements, funding structures, and intended postgrad career paths, while maintaining the core thesis writing experience across the board.
> Student experiences of course vary widely, and even for those that do the menial slog version of a bio PhD, the attitude surrounding it varies. Some seem to be happy for the guidance when their advisor is like this while others resent the lack of intellectual freedom.
This section I think captures the problem well: most good (future) scientists will resent the lack of intellectual freedom, while the less capable will appreciate the guidance. This mechanism pushes academia more and more towards mediocre Cargo Cult Science, is my impression.
And I think we’re seeing the same theme playing out all through society. It’s convenient for the conformists at the top that need cheap compliant laborers, and it’s good for the (many) mediocre conventional-minded that enjoy it. But it’s not good for results.
> This mechanism pushes academia more and more towards mediocre Cargo Cult Science
Ultimately, a cargo cult is a scientifically sound application of empiricism that just happens to completely fail, whereupon it may be rejected. If scientific methodology is evaluated empirically (which I endorse) in terms of enabling scientific progress, you end up with a cargo cult.
For example, why is it a good idea to have falsifiable hypotheses? Most philosophers don't adhere to this standard. You might make a great argument on the grounds of what scientific progress means, how tangible it is and how it prevents discussions that are in some precise sense a waste of time. However, if you ground your epistemology on experience, the best and most convincing reason to adopt falsifiability is that it's historically been enabling science in ways that we like. One non-obvious prerequisite for this is that people seem to more easily agree on whether something is falsifiable than on whether it is true. I think this is an empirical psychological fact.
Concerning what is a good scientist, I consider as the important metric their individual contribution to humanity's knowledge. It may be that the biggest marginal contributions are made by people who identify epistemological deficiencies in their field and initiate paradigm shifts. However, I believe most of the total contributions to scientific knowledge are made in ways that barely benefit from deep epistemological insights.
I think it's very hard to argue "it's not good for results" that we have too many compliant laborers, if "results" is advancing human knowledge. Successfully teaching philosophy of science to people such that they can meaningfully apply it in practice is hard and requires significant investment. I'm not convinced such efforts are the best way to advance science. Rather, I think a more effective measure would be better aligning the incentives of scientists with the goal of advancing humanity's knowledge.
You need both types for sure. The major problem is that they've been confounded, and the more creative ones are subsequently being driven out at an alarming rate.
There are also not enough spots for the worker bee types, because even though many do reach PI status, the majority don't. The majority of people in general do not become a tenured professor, and the entire academic system is essentially like minor league baseball where either you make it all the way or you exit.
There should be way more laborers working on bio research, a lot of which doesn't even require understanding the science on the level of the "tradesman" PhD, let alone understanding the science on the level of being able to break new ground. This could involve both new viable long-term positions as a laborer in a university bio lab and a greater diversity in the type of biology research that can occur in industry. But government funding agencies need to get their shit together first.
>To think deeply about meta science in any official capacity as part of PhD training requires you to go out of your way.
Yup. Not just life sciences, this applies to physics as well.
>I also imagine that even in less expensive/slow fields there is an increasingly unapproachable body of prior knowledge to learn, and people don't want to be trainees forever.
This is where AI might come in handy, for compressing the time it takes to ingest the prior work by providing tailored educational material, or even obviating the need for it by augmenting our intelligence (esp. when it comes to recall/search). Otherwise, we'll eventually be in the situation of Schlemiel the painter [0].
> a large philosophical paradigm shift might be needed too
Why? The way to learn a lot of prior knowledge is focused study and good teaching. Philosophy has little to do with it. I agree that it would make sense to have more specialized lab workers (which definitely is a trade and should be learned by doing, like one) assisting scientists whose education must be broader.
Another thing concerning the philosophy is that the current environment and competition for resources at research institutions rather discourages good scientific practice. E.g., it encourages overselling results while never attempting to reproduce them. This is getting better in my opinion, as e.g. in the open science initiative. However, I find it very hard to argue that teaching more philosophy of science would do much good here. It's a problem of incentives.
Another question is whether science would actually progress faster (or in some other way better) if the philosophical ideals were followed more closely. I don't see how the situation in bio labs that you describe would benefit from all PhD students having heard three lectures on philosophy of science. The occasions where the philosophy actually informs decisions in the lab are very rare (though sometimes very important) and there seem to be enough people who know these things well enough for their purposes when this knowledge benefits the lab. When leaving academia, few students will have a chance to apply these concepts, and most at least know that scientific methodology is a thing and will learn it if actually needed for what they do. Furthermore, most fields have their own somewhat incompatible folklore methodologies that do not easily generalize to unrelated problems.
No, as fields get deeper and deeper there is a point at which it is no longer humanly possible to cover the prior material exhaustively in a vaguely reasonable timeframe. We can keep digging narrower and narrower niches at the expense of basic scientific literacy, or we can realize that at some point the system cannot keep scaling in a one-dimensional way. Or I guess we can make people go to school until they're 60.
If you think what I'm proposing is three lectures on philosophy of science then either you have zero idea how a PhD program works or you're arguing in bad faith. The fact that someone can graduate from a top 5 biology PhD program without ever demonstrating the ability to come up with a decent research question nor to independently plan a decent study to address a given research question is appalling. The fact that the programs actively discourage students from exploring anything intellectual outside of the narrow framework of their chosen lab, even projects that would clearly fit under the program's broader umbrella, is also appalling.
The primary point isn't whether students can learn those things on their own later. The primary point is that this is a huge indictment of the culture of graduate training. It is creating a legion of fucking drones as our next set of profs. Many of them are truly not aware that they lack this expertise.
Also, a PhD is explicitly designed as training to be in academia. There are a small set of industry jobs that require a PhD, and there are more industry jobs that appreciate a PhD, but really the only reason that PhDs have shifted so much towards industry is because the academic job market is completely fucked. It is only recently that biology professors will entertain the idea that someone should choose to leave academia after a PhD and that is not considered failure. Yet at the same time they do a bad job of preparing others for academic research.
> If you think what I'm proposing is three lectures on philosophy of science than either you have zero idea how a PhD program works or you're arguing in bad faith.
Talking about three lectures was meant as a question of what you actually propose (you might say that's bad faith). However, I believe you will find many people who you can convince that this is an important problem and they will suggest the "lecture approach" in full seriousness. That's also what I would expect from administrators who can actually implement some change in response to the problem.
You still haven't said what measures you would actually endorse. I will try to guess a second time: Do you want to make each PhD student design their own experiment, perhaps with individual hands-off supervision to teach methodology? Either these experiments will be "toy experiments" to teach them methodology, or they would have to make a real contribution to science, in which case you would have to massively increase research budgets (or massively reduce numbers of PhD students) and add a LOT of non-PhD student researchers who do all the work for your newly made PhD-PI. In the latter case, these experiments might not actually deliver on their promise of generating much knowledge and, I believe, would prove an investment in the PhD's education at the cost of less scientific output.
You will not convince administrators making economic decisions to implement either of these, even if they agree with you about the urgency of the problem. I think it's quite instructive to think about why this is so. Which brings me back to my main claim: the important problem in modern science is the incentive structure, not the lack of scientific methodology knowledge in PhD students.
You are right about the incentive structure being a main problem in science.
But at the heart of the reasons why this structure is the way it is, you find people's utter lack of knowledge about how science does and should work in the first place.
Remarkably, nearly all the commenters here betray this very lack of knowledge by talking about it as "philosophy". As if it was some practically irrelevant jibberjabber.
Science does not work by magic though, with people going through some motions of an occult ritual. It's an algorithm, implemented insanely badly currently.
I'm not sure why you think "philosophy" and "practically irrelevant jibberjabber" are synonymous. Or maybe you just think that's what commenters are implying? That's certainly not what I was trying to imply. Scientific thinking can be super practical, and it need not involve meta thinking for everyone that uses it. But there absolutely are philosophical discussions that remain of relevance today, discussions that some part of the scientific community ought to be having.
Besides, academic science has become more formalized than it used to be, not less. People are running the algorithm badly IMO not because they are trying to be philosophical or eschew the previous fundamentals of science, but because they lack a deep understanding of what they are attempting to implement. The guardrails that were put up to try and force things to be more formalized have backfired into becoming a bad crutch for shitty science.
It's also not surprising that certain metrics and incentive structures would break in a career environment where resources have become scarce. Things implemented in good faith 25 years ago may not have even been a bad idea at the time, but they definitely were reacting to a different era with different job expectations and different open questions.
Anyway, I definitely agree with most of your broader concerns but I'm not sure if I'm misinterpreting your statement about philosophy. In my experience present grad students are hyperfixated on highly unimportant (from a training perspective) technical details. When I've been around PhD students chatting about science at lunch or house parties or whatever I more often hear discussions at the level of abstraction of mouse perfusion techniques than philosophy of science. It is certainly theoretically possible to have too many eggheads, but that's been the opposite of my experience.
> at the heart of the reasons why this structure is the way it is, you find people's utter lack of knowledge about how science does and should work
I don't think people's lack of understanding scientific methods is the main factor shaping the incentive structure, though it's very interesting how big a role it plays and I would love to hear more detailed arguments.
In order to change the system, there must be enough alignment on what the actual goals are and what incentives might benefit them, ways to effectively organize around it, and ways to actually enforce the new incentives. (Perhaps even more things...)
I think there are big hurdles on each of the above. First, concerning alignment on the goal, many people don't agree that faster scientific progress is actually good. You might argue that trying to teach people epistemology might change that. I neither think it would nor that it should, but one could argue about it a lot. "Indoctrinating" people starting at a young age that "science and knowledge = good" as a moral value might work (it is being done and I mostly endorse that) but that's different from teaching the scientific method.
The really important factor here is, of course, that many individuals and organizations benefit from the current system and don't want to change it.
Concerning organization, political interests and competition hinder it somewhat, but co-operation between scientists is quite high and increasing. Hopefully that trend will continue and not succumb to geopolitical pressures.
Concerning enforcing better incentives, that's very hard. For example, we can't objectively and reliably evaluate someone's scientific contributions. Any metric you set up is going to be gamed, etc. I think the incentives have improved over the (very short) history of collaborative science and the trend looks positive. Take for example the trend towards open access publication and growing awareness of many issues (replication crisis, etc.).
Generally, I would say that understanding how science (the system) works and how it should work (or would, given certain changes) is very hard. Similarly, I also think really deep understanding how science (the process) should work is also hard and applying that knowledge in practice is much harder still. At some point you get diminishing returns for the efforts of teaching it. You still haven't proposed how you would promote this knowledge. I can't think of anything that seems likely to do much good but would love to hear about it.
> Remarkably, nearly all the commenters here betray this very lack of knowledge by talking about it as "philosophy"
I believe epistemology and philosophy of science are philosophy. I also believe that having and improving these things is very useful and one cannot overstate their role in getting science where it is now. This is in my mind perfectly consistent with there not being a great need (way?) to teach more of it to students.
Perhaps an explanation of how you would teach it would make me realize we haven't been talking about quite the same thing?
It's harder to game metrics when there is a wider diversity of them and the way in which they are weighted in different decision making committees is different. Because it's hard to simultaneously game a bunch of different metrics, and the benefits from gaming a single one are probably worse than the benefits of just organically doing what you think is best.
But yeah I totally agree that these are extremely hard problems with no silver bullet. Philosophically and from a "how do systems of humans work together" perspective. Teaching methodology of science like a cookbook I also agree is the wrong play and just digging in further to what got us where we are.
I'm curious what makes you say scientists are collaborating more? Stats or has that been your experience? I've seen a lot of semi political collaborations where people are mostly just posturing to get on more papers. I have also seen a decent number of good faith interdisciplinary collaborations, though it seems to be a bit of a coin flip whether they flame out over disagreements. I've personally not seen much cooperation within the same field though that I'd consider genuine.
Thanks for sparking the interesting discussion in any event, it's not often I end up debating people on both sides of a debate so this has been fun.
In my field there's lots of honest and fruitful collaboration. It's not all perfect but definitely a good situation. Technology has been making collaboration easier and faster as well. Globalization has led to international scientific collaboration being ever more expected and normal.
Look into the organization of CERN for example. It's a huge project, highly politicized in many ways, with many parties that have their own goals and motives. Yet they all realize that a huge and sufficiently effective collaboration is the only way to do this science at all. And they are doing it.
Most decisions made about the structure of PhD training are not made by random administrators like undergrad. The head of a program will be a faculty member in that department actively doing research and advising students, usually a faculty member that has been very successful in their research output (at least by official metrics). The decisions on how to structure first year curriculum, what to require for graduation, etc. are all made by PIs in that department.
Of course you are 100% right that those PIs are short term incentivized to keep the cheap labor coming. I think there's a decent chance that this utterly backfires on them in the next few decades, but I guess we will see. It has already started to backfire as far as obtaining postdocs goes.
Anyway, a small scale example of a better academic requirement IMO that would definitely be implementable by those making program decisions (because it already has at UCSD neurobio, perhaps elsewhere) is to change qualifying exams. In engineering programs I know quals involve grad level course work exams too so they're already a big time commitment, but in many bio programs it is simply a project proposal presentation and you get asked some questions on the background lit.
Because the project proposal is just a project you are planning to do in your lab for your PhD, it is not uncommon for it to be pre-scaffolded by the PI, not requiring original thought. At UCSD they require that you write your quals proposal/presentation on a different neurobio topic entirely from anything to do with your chosen lab. This makes it far more likely that students will put together an in depth scientific proposal themselves before embarking on executing what is often someone else's. Of course their proposal does not get implemented, it is just evaluated and given feedback by a committee of established faculty members.
However what I was getting at on a bigger level is that not all PhDs need specialized meta science skills, but some do. It doesn't have to happen overnight, but I'd like to see the process started by having some sort of branching in PhD requirements, perhaps like the quals change I described along with differing year 1 curriculums being optional. This shouldn't require many resources because at the graduate level it is completely fair game to stick some students in a room and give them some suggested references and a problem, and check back a month later. The important part is having not only the clear encouragement but also the protected time.
Oh and I think more opportunities to work with fellow students on meaningful projects is sorely needed too. PhD students from different labs mostly only help each other on a very surface level, it is rare to see students work together across labs in a way that each would be contributing authors on the other's paper. This has the opportunity to still be productive from a traditional viewpoint but also to naturally produce more thinking about a range of science.
Anyway, at a minimum I think professors can stop telling PhD students that e.g. wanting to attend a meta science reading group is a waste of time. Having that as a component of PhD education need not be compulsory or even expected, but for it to be culturally frowned upon is bizarre to me (though the selfish reasoning behind the attitude is obvious).
So the incentive structures part is again true, but incentive structures aren't going to change without an attitude shift and a willingness to diversify intellectually. And it kind of is chicken/egg - the incentive structures created a bad educational environment for some time now, and some of those people are already early career profs. They're in an even worse position to be trying to fix things because they're more ignorant of the fundamental problems to begin with. Seriously there are bad actors but there are also people that are just remarkably unaware of how to be remotely statistically rigorous for example. Either that or they are really good actors who are willing to look dumb in front of certain peers to further their long term goals, but academics don't strike me as the type to be content with playing dumb (if it's being perceived by others in any event).
Regardless, I'd be happy to start from a place of changing incentive structures if that were possible, and work from there. But at the end of the day I think this is an important factor for the incentive structures to consider. Bad training systems can create a real telephone game effect after a few generations.
> there are also people that are just remarkably unaware of how to be remotely statistically rigorous
That reminds me of D. Kahneman's self-righteous book where he recounts teaching statistics for years "without it ever occurring to him that it might be used in his research". Claiming to have learned this lesson, he proceeds to promote his book while half of its claims have failed to reproduce. He's still a celebrity in his field.
My point being: you really think this situation ever used to be better?
In my opinion, we have deviated from the true essence of "Science" by using it to determine "what ought to be" instead of focusing on answering the fundamental question of "what is."
I know that this is not particularly in vogue now to say, but the social sciences especially did a whole lot to discredit the idea of science being something that is objective. They pretend to "do science" but instead use it either to "Launder Ideas" (https://www.wsj.com/articles/idea-laundering-in-academia-115...) or for pure activism (for example, a paper published in Hypatia suggested that male students should be trained to propagate the feminist message, much like the way viruses spread: https://www.researchgate.net/publication/315583936_Women's_S...).
Not even going to mention the Grievance studies affair.
I have embraced accelerationism and hope that "science" (and academia specifically) fails quickly so that it can rise again with greater resilience and the ability to withstand pressure from activist organizations. Let's hope the hard sciences renaissance is nigh.
The social sciences indeed have the politically charged element, but I think you've hit on a broader point here. Investigating "what is" needs to involve to at least some extent a true curiosity and an adaptable plan. Modern science is so formalized that I find these ingredients to take way too much of a back seat - at least in what I've observed of modern biology.
I swear more often than not the way proposed experiments are framed reeks of a "what ought to be" perspective. Not because it's wishful thinking or an externally incentivized conclusion, but just because questions are often both incremental and kinda binary. So it is reasonably likely that what they've proposed to check will turn out about how they expected, and if it doesn't it may not be publishable at all. By crafting the proposal in a particular (fairly common) style, they've inadvertently turned it into "what ought to be".
By contrast, I loved talking to my really old professors because there were some awesome ones that you could tell spent a decent amount of time in their day just absolutely fucking around. Should that be the universal model of science? No, but I do believe you need those types, and I've struggled to find much of a modern equivalent. The tight job market doesn't help.
Of course with fewer low hanging fruit over time as well as greater experimental costs, it is indeed more difficult to effectively spitball now. And sure there are fewer random novel observations that would be worth a fun off the cuff paper for others to follow-up on. But I still can't shake the feeling that the scientific system has partially self-imposed the eradication of this style of curiosity, in a misguided attempt to formalize science.
From my perspective, there seems to be a massive underlying assumption throughout the system that scientific outputs should stand as nearly atomic units, with little chance for "unscientific" pieces of work to contribute to an eventually formalized scientific result. That clashes awfully with the ever increasing scale of open questions, and it doesn't play well with the individualistic academic ego stroking credit model either.
If you stumble upon a website that makes the stylistic choice that text should take the whole width of your screen, you can toggle the "Reader view" feature in Firefox to put it back into a more usual format (F9 on Windows).
If you use Chrome-based browsers, there are bookmarklets (bookmarks with a `javascript:` URI) that can do the same.
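A minimal sketch of such a bookmarklet, which caps the page's line length and centers the text column. The `65ch` width and the particular style properties are my own assumptions (a common "comfortable measure"), not anything prescribed above; adjust to taste.

```javascript
// Width-limiting bookmarklet: save a bookmark whose URL is this whole
// string, then click it on a page whose text spans the full window.
const bookmarklet =
  "javascript:(function(){" +
  "var s=document.body.style;" +
  "s.maxWidth='65ch';" +   // cap the measure at roughly 65 characters
  "s.margin='0 auto';" +   // center the resulting column
  "s.padding='0 1em';" +   // small gutter for narrow windows
  "})();";
```

Pasting the string (everything from `javascript:` onward) into a bookmark's URL field is enough; no extension is needed.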
Those websites use the full width of your browser window, not your whole screen (on desktop at least; on mobile the story is different). That means that you, who have control over your window, have full control over how wide you want the text to be.
Stylistic choice? Isn’t that literally basic html the way it was meant to be used? I think you were supposed to display it in an actual browser window.
There are two parts to this problem. One is his reflections on what is unquestionably outlier belief, like reiki and reflexology and spoon bending. The other is his intense dislike of the low-bar p-hacking in social sciences and the lack of rigour, even in his own field.
They aren't the same thing. Labeling both of them 'cargo cult' science does the problem space a disservice.
Bad science (belief that the proof is 'in', or that low-bar experiments prove things despite a lack of reliable, reproducible outcomes) isn't the same thing as the implications of the Drake Equation for space funding, or a lack of funding for VLSI engineering research because the money went on feel-good stats about alternative medicines.
What happened with the Shuttle, was false belief in the hierarchy of science and a fear about speaking out. What happened in tertiary education since Feynman died is that peer review has decayed into a game to tenure, and has nothing to do with the real outcome of why scientists exist.
During his lifetime in nuclear physics, it was normal for scientists to lodge patent applications. But that said, it wasn't the main game on the Manhattan project, and Leo Szilard used his patent rights as a lever to try and get a say in the socialized and politicized outcomes of his discoveries. Now, most scientists who work in a field with fertile IPR are really strongly driven to the IPO implications of that IPR. It's not a tool, a lever, it's the main game. Adding 5% efficiency to classic solar cells in a reproducible patent-free manner right now would be transformative for everyone. Adding 5 patents which contribute 0.1% efficiency to classic solar cells might pay back better.
Feynman sat to one side of that game. He lived through a time of tenure and a role in science, which nowadays would be very hard to maintain.
Cargo cult tech: visit Silicon Valley, copy some striking behavioral traits not practiced in your little corner of the planet and hope that you will "eat the world" somehow.
> copy some striking behavioral traits not practiced in your little corner of the planet
If only they did.
Entrepreneurship, Great Universities, celebration of failures (having 2 failed startups is seen as a badge of honor to VCs) and privately administered capital (low tax rate!) willing to take risk. These are the elements vital to any potential foreign Silicon Valley competitors and they only exist here in America.
All of the "Silicon Valley of X" initiatives I've seen were either led by bureaucrats with no ties to the valley or by people who had little to no understanding of these fundamental traits. It was beanbag chairs and people writing JS on MacBooks (for 1/3 of what people who actually came to the valley made).
Ha ha, yes, this harder-to-copy structure is what I was alluding to. But I think that, too, is a conventional reading that doesn't really explain the disparity between regions.
Great universities, massive amounts of capital, reckless, lemming-style investment behavior, etc. exist in many places.
Imho the key difference is a deep-seated and essentially naive American belief in technology as a "force for good". This helps coordinate society and over time creates the conditions for significant breakthroughs.
In older parts of the world, that naivete is matched instead with lots of cynicism. They have seen wave upon wave of tech simply facilitate the next calamity.
That cynical view is closer to the truth as we can see in how digital tech has become increasingly a force for evil.
But the tragedy with tech is that "somebody will do it". You just hope it is eventually going to be shaped by societies that can control it and eliminate its worst implications.
It seems like VCs are doing a lot of cargo cult tech.
Cargo cults work if you have enough money. Throw it at the wall fast enough, for long enough, and a very few things will stick. Then you can rest on the laurels that Fortuna gave you. This is why Silicon Valley cannot be replicated elsewhere, except perhaps in Switzerland or some filthy-rich petrostate.
But those only copy the cultural aspects that favor those doing the copying, and since it’s conventional-minded people that copy and the “secret” SV ingredient is putting independent-minded people in charge the whole process is forever doomed to failure…
But it keeps a lot of bureaucrats busy, and (very) gainfully employed.
Unfortunately some people tend to cherry-pick their favourite details. In my country there's a politician who says that the iPhone's development was paid for by the US government.
Humans are innately imaginative outside of the bounds of reality. Most people I know believe in things that aren't real, whether it be crystals, chakras, essential oils, random mythologies about random things.
Most interestingly, these are generally weak beliefs strongly held. You'll never convince these people that they've divorced themselves from reality, and you have nearly nothing to gain from trying.
Psychologists, doctors, lawyers, sitting congresspeople, etc. This isn't something that Facebook moms have a monopoly on.
We are really bad at recognizing our own flights of fancy (and, of course, recognizing what we're really bad at).
> Most people I know believe in things that aren't real, whether it be crystals, chakras, essential oils, random mythologies about random things.
You're using known pseudo-science, magic examples.
Do you want to have real fun? Try using computer science fads or political ideologies. For every new age belief, there's an even more ridiculous one, based on the same evidence (none) and hurting people and businesses. Compared to those, crystals are harmless.
Well said. For me, a person who admittedly has some pretty far-out beliefs/theories, it’s because there is generally no penalty for having these beliefs. Sure, at work where things count, I make sure my theories adhere to reality. But when I’m out chilling in nature? My goal then is to have fun, and entertaining crazy ideas is my idea of a good time. People who have to be correct all the time are kind of missing out and expending a lot of useless energy.
Plus, the universe is so wild that even modern science probably doesn’t cut as deep as we think it does.
The book Sapiens builds on this idea with the myths lots of people believe in, or "shared fictions" or artificial truths or whatever you want to call it. Like money, for example: it's only worth anything as long as people believe it has purchasing power, even if really it's just collections of 0s and 1s or symbols printed on textiles. Extrapolate that idea to many other things people collectively believe in, and that lets us build stable societies.
Yep. And given a significant problem and a placebo, would you rather:
1) Not use the placebo, because even though it seems to work for you, it's been scientifically shown in trials to do nothing
2) Use the placebo, because it seems to work; somehow get good results
A huge number of people will choose #1. They are extremely close to cargo cult without even realizing it.
I'd say that if there's a pos/neg polarization to cargo cult, they are on the negative side, looking for things to reject in culty ways. But whether you're consuming product or rejecting it, the same shaky mindsets apply.
> 1) Not use the placebo, because even though it seems to work for you, it's been scientifically shown in trials to do nothing
This is the opposite of a placebo. A placebo is a treatment that has been scientifically shown in trials to do something even though it shouldn't. For example, taking a pill or drinking a cup of "medicine" might make your headache feel better even though the pill is just sugar and the medicine is just colored water, and just eating a spoonful of sugar or drinking the same amount of water doesn't have the same effect if you don't believe it's medicine.
We use placebos as the comparison in clinical trials precisely because placebos do something, and we want to show that taking the pill with the actual medicine does more than just taking a sugar pill does. Because if it doesn't--might as well just take the sugar pill.
Placebos do something, and if that something is beneficial then by all means take them. If I can take a sugar pill that makes me feel better I don't really care whether it's psychosomatic or pharmacological. The FDA cares, because they want to encourage real treatments and discourage quackery.
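To make that trial logic concrete, here is a toy simulation (all the numbers are hypothetical, made up purely for illustration) of why a drug arm is compared against a placebo arm rather than against no treatment: the placebo arm itself improves over baseline, so the drug has to beat the placebo, not zero.

```python
import random
import statistics

random.seed(0)

# Hypothetical symptom scores (lower is better), 100 patients per arm.
baseline = [random.gauss(50, 5) for _ in range(100)]  # no treatment at all
placebo = [random.gauss(45, 5) for _ in range(100)]   # sugar pill: placebo effect alone
drug = [random.gauss(40, 5) for _ in range(100)]      # real medicine on top of placebo effect

placebo_effect = statistics.mean(baseline) - statistics.mean(placebo)
drug_vs_placebo = statistics.mean(placebo) - statistics.mean(drug)

# Comparing the drug against baseline would conflate both effects;
# comparing it against the placebo arm isolates the pharmacological one.
print(f"placebo effect vs baseline: {placebo_effect:.1f}")
print(f"drug effect beyond placebo: {drug_vs_placebo:.1f}")
```

In this sketch both differences come out around 5 points: the sugar pill "works", and the trial only counts the drug as effective because it does measurably more than the sugar pill.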
That's interesting, I see the term has multiple definitions, to include contradictory definitions around the tested value of the placebo itself.
It reminds me of some other terms in my own areas of specialty, and the meaning they "properly" carry, vs. the meaning they generally carry when used by the public.
So we hook up ChatGPT to procedurally generate medicine sounding names to sugar pills, and then figure out which name gives the best placebo effect in patients? Then we just rotate some of the better ones in patients to maximize the placebo effect, when they say the effect wears off they get a new name on the pill they take.
But they're generally not actually beliefs, in the sense that they wouldn't stake any substantial value on them. They are "performative" beliefs that are indicative of belonging to a certain subculture. What's interesting is when those performative beliefs collide with factual consequences (eg, COVID deniers dying of COVID). Nobody has died of "belief" in astrology as far as I know. I imagine most astrology fans would cop out if there were real consequences involved.
When people talk about essential oils in contexts like these, they're most likely referring to some people having the belief that they have medicinal properties; can cure autism etc.
Lots of people believe essential oils can cure cancer. Any big oncology center has stories about people who show up presenting horrible symptoms well beyond what's normally seen. They spent 2 years just doing essential oils or whatever (often encouraged by a quack) on some readily treatable thing, and when they do finally give in and come to the ER it's basically too late.
Tragedies like these make me very hesitant about "harmless" falsehoods.
I also wonder what cancer survival rates look like if you filter people who self select out of conventional healthcare for a period of time like your example above.
I wonder if belief in alternate medicine correlates with unaffordable healthcare. It is much easier to believe that there is a better way if the correct procedure would ruin you financially.
Another example is how to treat criminals. We obviously have made no progress—lots of theory, but no progress—in decreasing the amount of crime by the method that we use to handle criminals.
Psychology is really hard to do well. I tend to look for real-world data that casts light on a thing, because psychological experiments involving humans are so often designed so very poorly.
For example, it was found that when women take an interest in a celebrity, they do a search for his marital or relationship status. When men take an interest in a celebrity, they do a search for nude photos of her.
This is people in the privacy of their own homes on their own computers looking for something to make themselves hot. They aren't worried about what other people might expect from them, yet male and female strategies differ in ways that fit with historical norms that women are usually looking for a relationship and men are often just looking for sex (at least at first).
You do realize that was intended to be humorous, yes? It's true that one should not defer blindly to authority, but experts nonetheless know some things that non-experts don't. That's what makes them experts.
Exactly. And it was about experts questioning the beliefs of other experts, particularly experts in the past. Galileo questioning Aristotle. Einstein questioning Newton. Not about random quacks thinking they have opinions equal to the experts.
There's a huge set of potentially politically fraught middle grounds that exist between Einstein -> Newton and random quack -> some modern distinguished scientist. And by politically fraught I mean career politics that directly impact the scientific landscape, not "politics".
It is my impression that back in the day it was more common for academics with differing opinions to banter or at times straight up disrespect each other semi-publicly. This could be historical survivorship bias, but it is a fact that aspiring scientists had a much greater likelihood of becoming a tenured PI in previous generations, and usually reached that status much earlier in their lives.
I fear that with the state of the academic job market along with present cultural norms, it is less common now for an early career investigator to question the quality of an established PI's work, or even for PIs to meaningfully criticize each other in scientifically accessible formats.
Besides, questioning the work of past scientists is implicitly questioning the research of many modern scientists, at least if that questioning is to be considered remotely novel. There are socially acceptable avenues to question past science, but there are also cases where modern PIs of influence have built so much on an assumption from prior results, such that the impact of their work would indeed be threatened by questions of the foundation.
It is important that the latter situation is handled in a manner consistent with scientific ideals. I don't know what Feynman was exactly referring to with his quote, but I'd be surprised if he was focused on the behavior of random lay people. Someone immersed in science is probably going to be most concerned with the behavior of fellow scientists and emerging trainees.
I think you are missing an important bit of the contemporary political landscape: Feynman's quote is used by today's quacks -- particularly young-earth creationists and climate-change denialists -- to discredit the entire scientific enterprise.
I assumed that was the intention of the comment but I don't think HN is really a board for young earth creationists or climate change denialists. Maybe I'm wrong about that but I think there are legitimate questions to be debated on this topic and it's annoying to see critical discussions of science regularly side tracked by concerns about how someone's random crazy uncle might misinterpret the criticisms. It's not like the OP posted the quote on a climate research thread, so I don't see why it's especially relevant.
There are a surprising number of YECs (many of whom are also climate-change denialists -- it's all part of the same ideology) lurking here. I suspect it's part of an organized covert effort to spread the doctrine. Part of the strategy is to discredit science and scientists in general.
I'd think YEC is a small subset of climate deniers even if most YEC are climate deniers. I can image a climate denier contingency here but I have a hard time believing that YECs bother with HN. I'm not up to date on conspiracy theorists though. Are there any threads you can point to about YEC here?
Regardless, this is also a great way to stifle important scientific discourse. Nobody wants scientific commentary to happen hidden from public view, so more and more happens on preprint servers and social media. But then nobody wants to make even good faith criticisms because they don't want the general public to misinterpret, aided by bad faith spinning by other parties.
Of course you will see criticisms related to e.g. labor conditions, but when it comes to criticism of scientific practice or even in some cases specific results, there are always a bunch of people ready to comment about how most critics are quacks and we need to be careful about extremism. This is awfully convenient for the scientists that are doing subpar work.
> I have a hard time believing that YECs bother with HN.
Hard to say. It's hard to get a YEC to come out of the closet here. Maybe I'm a little overly sensitized to YECs in particular because I spent some time on the YouTube YEC debate circuit.
But it hardly matters. What matters is that there are a lot of anti-science ideologues out there, some of them are well organized and well-funded, they are developing a collection of stock rhetoric (including that Feynman quote) which can be very persuasive, and some of that rhetoric appears regularly here. Whenever I see it I try to smack it down, but sometimes it feels like whack-a-mole.
I think the problem is that two different notions of expert have been widely confounded within the present scientific training structure, let alone within public perception.
There is a huge amount of prior literature on many topics, so to be an expert in something like immunology you're almost definitely a PhD, or maybe a very academically inclined MD. However this notion of expert simply means you have learned intricate details about many of the important previous results, most of which are probably true. It does not mean you actually have the ability to do good science in the field of immunology.
But for any technical question in your niche that isn't uncharted territory you can serve as an expert. Such expertise has obvious uses in many other domains, and as there isn't really a great term for it, it gets called scientific expertise. Perhaps the person may be called a biologist, but that also has scientific undertones which may or may not be warranted.
I'm sure you are aware of this, but I don't often see it explicitly stated, and so it feels like people talk past each other in a sense. It is wholly possible to approach e.g. chemistry in a non-scientific way, just as something like business can be approached in a scientific way. I don't think the double language will be resolved any time soon, since I don't see much effort to make this distinction in the actual design of PhD training - forget about random internet forums.
"When they got a number closer to Millikan’s value they didn’t look so hard. And so they eliminated the numbers that were too far off, and did other things like that. We’ve learned those tricks nowadays, and now we don’t have that kind of a disease."
Astrophysicists are extremely motivated to find alternatives to dark matter.
The "problem" is that the evidence for the existence of dark matter, what fraction of the universe it makes up, where it is, and some of its general properties (e.g., temperature) come from many different directions, and they point to the same conclusions.
Dark matter is a relatively simple postulate that explains a large number of disparate observations, from the relative abundances of different atomic elements to the amount of gravitational lensing in distant galaxy clusters.
It's not that astrophysicists are straining to make everything fit into the dark matter paradigm. It's that everything seems to fit into the paradigm very naturally. That's what overcomes the queasiness over dark matter having never been directly observed in a lab.
When you don't understand something, and are not inclined to take the time to understand it, you just label it as cargo cult science, or pseudoscience.
Ironically, scientists love to do this! Take a new area with promising results, hype the shit out of it until the worst and/or least ethical scientists truly create a pseudoscientific tangent out of it, and then drop the area entirely for a decade because now anything vaguely associated with it is taboo. Labeling real science as cargo cult science sounds to me like cargo cult science with extra steps.
Granted such a violent burst doesn't always happen when an academic bubble is created. But at times I've found academic backlash to be shockingly irrational in its breadth. I think the most common factor is when the pseudoscience makes it to the point of having an impact on actual humans. But it is still sad to see a state when even tenured faculty will shy away from public discussion of certain questions because of vague associations, as it feels entirely unscientific to me.
The funniest part is when the area reemerges with a new name, as if extremely surface level branding is actually important to receiving grants or publishing papers. Well I suppose it is unfortunately, but this is one of those cases that feels so thinly veiled it might as well be scientific satire.
My personal favorite example is the galvanic skin response hype from 50+ years ago, which morphed into more of a fringe science primarily because of flaws with lie detection applications - flaws that should've been extremely obvious concerns to test just based on the OG literature + common scientific sense. In the last decade or so it's been revamped as electrodermal activity and unsurprisingly shown great promise for certain health applications.
EDA is presently the main focus of a big group at MIT, it has been included in some Fitbit models, it is used in FDA approved devices for epilepsy monitoring, and overall the relevant publications per year keep trending back up. A far cry from the looks you'd get if you discussed galvanic skin response in certain supposedly scientific circles back in the day.
In absolute numbers the rebranded GSR has actually already fully recovered from the couple decades of low research activity, though relative to total academic output now versus then it probably still has some distance to go.
His analogy of teaching resonated too much with me. I have been feeling exactly that for a couple of years, and it feels scarily like when I abandoned the belief in god many years ago. There is that nagging feeling I can’t shake that, no matter how I look at it, I can’t reject the null hypothesis that organized education is basically useless.
Don’t get me wrong, I don’t think educational institutions shouldn’t exist; surely smart people need a place to go and do things and get together and work. The point to me is that, whatever it is that smart people are doing, they’d mostly do that anyway no matter what kind of education they’re getting.
Whenever I bring it up with colleagues, I usually get into heated arguments where I become the 1 in an invariably N-to-1 aggressive debate. Plus, my wife is a primary school teacher with a Master's degree in Education who loves teaching, so obviously that is a very delicate subject to bring up at home. I have noticed that people like her, who love teaching, usually have a very emotional attachment to it from their own personal experiences and how they see education as transformative, and that makes any kind of objective research on teaching very hard to do. It’s almost like teaching is such a holy subject that any teaching has to be good, and criticizing it is like defending imperialism or capitalism or rich people or something like that (it doesn’t help that my wife came from a very poor background and got out of that through education, and has taught most of her career in poor neighborhoods; that makes a delicate subject even more touchy).
My personal opinion is that I am only human, and seeing myself as some kind of ”savior through teaching” is unfair, to me and to others (yes, I do some teaching too). Most of the time I focus on trying to give smart students a platform to grow, and while I also try to motivate the other students as much as I can (I do believe you can ”fish” some of them out of the pool), I don’t usually pull my hair out or lose sleep over them. One, because I don’t feel like I have the right to keep patronizing people as if I know what’s objectively better for them; lots of people out there are doing much better than me without formal education. Second, because I do not have the illusion that I (or most other teachers) know a really good teaching method that can make a significant difference. It would be like losing sleep because a glass of holy water that I blessed did not save a cancer patient.
This is a lot of patronizing hair tearing over what amounts to a pretty banal HN greatest hit: “intelligent” people don’t really need school, and everyone else can get fucked.
Yeah, he seems to be making it a matter of "well the smart people, the ones who really matter, don't need education, so all the sub 99th percentile students don't really matter because the smart ones aren't benefitting"
So teaching kids to read, understand logical inference, showing them different parts of the world and how to look at it, doesn't matter because the wonderful gifted kids would have happened upon those insights anyway? It seems pretty reductive to assume the point of education is just to foster a small cadre of geniuses. Does every other non-gifted kid not deserve to be helped by education because they couldn't have educated themselves?
Thanks for both points, I’ll think about them. However, you basically just confirmed what I argued: it’s impossible to criticize educational methods. Just because education ”can” be useful, that doesn’t mean it ”is” useful. I wish every person in the world could have access to transformative education, but the truth of the matter is that almost no teacher out there can actually do that; most are just very well-intentioned witch doctors. And since the subject is impossible to criticize (and the ”research” area is mired in politics and virtue signalling), it’s probably going to stay that way. So, calling me elitist is all good, but I’m pretty sure I’m doing more than most other people by being as rational as I can and not promising to cure an ailment for which I have no remedy.
Your comment only seemed to be about the current uselessness of education as a value-add for smart kids. You never mentioned the benefit it offered to average or less intelligent kids. Do you believe that their needs aren't being addressed either?
> you basically just confirmed what I argued: it’s impossible to criticize educational methods.
Not sure how my comment confirmed that. All I pointed out is the fact that your gripe with education seemed mostly focused on the failure to help smart kids, ignoring whether helping other kids could be evidence for or against the effectiveness of the education system in the western world. With regards to that I have a question for you. Is the percentage of kids in any age cohort you consider "smart" a product of their educational environment, or is the proportion of smart kids to normal kids the same no matter what methods, within the realm of methods readily available to educators nowadays, we use?
> your gripe with education seemed mostly focused on the failure to help smart kids
Not at all, I definitely did not express myself correctly. I really have no worries that the smart kids will mostly thrive, no matter what educational methods they are subjected to. Sure, some systems will work against them, but that is unavoidable.
My gripe is, first of all, with the fact that it is impossible to criticize education and, as a consequence, it is impossible to improve it. Nobody really knows whether one or another type/method works better than any other, and yet many people pretend to know, and pretend to have scientific knowledge about it. But because teaching is such an emotional subject, any teaching seems to be good teaching, and as such, we'll never really know which one works or not.
Given that, I don't really feel the obligation to use one method or the other, and I also don't feel like the success or failure of a student is my success or my fault. Don't get me wrong, I do my absolute best to do a good job. I believe in teaching by example, I believe in connecting theory to practice, I believe in reaching out to eventual special needs of specific students, etc. However, I also believe (and make it very clear to my students) that their success is still 95% based on their own hard work and motivation, and only 5% based on whatever it is that I offer them. I simply don't know whether my teaching is good or not; I'll never know.
As such, it is irresponsible of me to try to convince them (like some kind of religious leader or a witch doctor) that by following my "methods" they will thrive, and if they haven't, it's because they didn't follow me closely enough. Sure, some will pass the exams with straight A's. Some students are very good at overfitting to the institutional assessments, and will build a kind of fake safety and comfort from their results. Does that mean they will do well out there? I have no idea. Others will fail my exams miserably. Some students are bored by structured, predictable goals and need more complex challenges; others are in the wrong area; others simply don't want to study. Does that mean they won't do well out there? I have no idea.
As such, I think that the only honest thing to do is this: offer them everything you can, but make it clear to them that they'll only learn as much as they work to learn. The school will give them more space and time to make errors and learn from them, in a less traumatic way than the real world. But if they don't use that, i.e. if they don't actually work hard to learn and experiment and fail and try again, there is nothing I can do to make that better. Sorry. I'm only human, and I don't pretend to have a solution to this problem just to try to justify my existence as a teacher.
As for your question: I believe there is definitely an effect, but it could be good or bad. Some teaching will give motivation for students to develop their abilities; others will kill their creativity. My point is exactly that: we don't know, and never will. We're just witch doctors trying every possible plant to treat a disease. Randomly, some will actually work, some will not, and some will make it worse. But since we cannot check it objectively, we'll never know which is which.
“Your profession is valueless, and you only think otherwise because you have an irrational emotional connection to it” is not the incisive, conversation-provoking criticism you seem to think it is.
I wouldn't say it is useless. But there's an extent to which "I can explain it to you, but I can't understand it for you." There's an interesting school of thought saying that in order to understand an idea, one must perform a creative act similar to that of the individual who originated the idea. (B/c verbal communication is very limited.) For instance, in order to comprehend the idea of Newton's Laws, one needs in some sense the capability to have come up with them, because teaching is really just prompting the pupil to reinvent the concept in their own mind.
IMO, this is why "natural language processing" with LLMs will not amount to "natural language understanding." Natural language explanations are targeted at intelligences capable of forming concepts based on IRL experience (naive physics, etc). The goal of teaching is to induce the formation of the concepts, not to directly transmit the concepts.
https://www.ghacks.net/2017/09/22/inconvenient-eu-piracy-stu...