I'm the furthest thing from a scientist unless you count 3,000 hours of PBS Space Time, but I love science, and so science/academia fraud feels to me like about the worst kind of fraud you can commit. Financial fraud can cause suicides and ruin lives, sure, but academic fraud seems to set the whole of humanity back. I also feel that through my life I've (maybe wrongly) placed a great deal of respect and trust in scientists, mostly that they understand that their work is of the utmost importance and so the downstream consequences of mucking around are just too grave. Stuff like this seems to bother me more than it rationally should. Are people who commit this type of science fraud just really evil humans? Am I overthinking this? Do scientists go to jail for academic fraud?
Pick up an old engineering book at some point, something from the mid-1800s or early 1900s, and you'll quickly realize that the trust people put in science isn't what it should be. The scientific method works over a long period of time, but blindly trusting a peer-reviewed study that just came out, any study, takes almost as much faith as religion, especially if you're not a high-level researcher in the same field who has spent a good amount of time reading the methodology yourself. If you go to the social sciences, the amount of crock that gets published is incredible.
As a quick example, any book about electricity from the early 1900s will include quite serious sections about the positive effects of electromagnetic radiation (or "EM field therapies"), teaching you about different frequencies and modulations for different illnesses and how doctors apply them. Today these devices are peddled by scammers of the same ilk as the ones who align your chakras with the right stone on your forehead.
Going to need some citations here since the texts that I'm familiar with from that time period are "A Treatise on Electricity and Magnetism" by Maxwell (mid-late 1800s) and "A History of the Theories of Aether and Electricity" by E. T. Whittaker, neither of which mentions anything of the sort. I suspect you are choosing from texts that at the time likely would not have been considered academic or standard.
Sure, check out Audel's electric guide, 1927 printing, volume 10, for example. This is a series of books intended for practitioners at the time, in wide circulation in the USA. The "Electric Therapeutics" section changed a lot with each edition, so it's even interesting to compare across older versions if you can find multiple. I didn't want to reference specific old niche books, but if you're willing to eBay it, I guess you can check. My point was more general than just electrical engineering, though.
Because those two texts are the two among literally thousands of scientific publications that have survived the test of time, which is exactly the point being made.
This might seem crazy to hear now, but when Maxwell first published A Dynamical Theory of the Electromagnetic Field in 1865, no one cared; it received very little attention at the time.
It was decades later in 1888 with the work of Hertz that Maxwell's equations started to gain significance within the scientific community.
Your points of memory are not counterpoints.
Those are the ones that lived - and are not indicative of the general quality of science during those times. Obvious survivor bias.
The fact that you can recall those reinforces the point that the value is determined by how long it is useful and remembered, not the fact that it was published.
Indeed, but you are clearly missing the historical context, as these were two highly celebrated and referenced texts of the time period by leading scientists. However, it appears that the leading scientific minds (of whom Maxwell and Whittaker were two) did not include these uses in their texts. I do not dispute that science can be wrong (in fact it is almost always 'wrong' in the end), nor do I dispute that there could have been published research in those applications. I would argue that these applications were likely fringe at best within the scientific community by the mid-1800s.
There are of course incredible scientists that went down disappointing paths (eg Shockley, Dyson, Pauling) in terms of their research output later on, though one must remember that typically this occurs outside their original field of expertise.
If you read my comment you will see that I am asking for references to the claims the previous author made. I simply provided my own references, which were written at the time and are representative of it, and which do not corroborate the tall tale of the previous author. If you have any references to support their claim, I would be interested in perusing them.
And what's to say that other highly celebrated and highly referenced texts from that time were not based on bad science or were outright frauds? Your memory of them?
Picking the winners as examples is not good sampling.
The originator explicitly said that 'any engineering book' would contain these references, thus it would seem that this was at least a widespread belief among physicists and engineers at the time. Do you have any example?
Again, you and the original poster seem to have this understanding that scientists and engineers from the mid-1800s to early 1900s are not to be trusted. I think that this assertion should be backed by considerable evidence, and that burden is of course mostly on you.
I don't dispute that there were doctors applying electricity and/or magnetism to the body in an "un"-rigorous manner, but is there documentation that suggests that the scientists at the time had come to the conclusion that it worked?
Also notably, Whittaker's work was a 'loser'. I chose it specifically for this purpose. I had read parts of it previously because it was a 'loser' as he chose to dispute Einstein's contributions to special relativity.
We've gotten into the territory of just repeating ourselves, so I don't want to do that.
I will say that
> Again, you and the original poster seem to have this understanding that scientists and engineers from the mid 1800s to early 1900s are not to be trusted.
Is not correct as far as I believe. Instead, we're saying that there's no reason to believe any study until it has stood the test of time. The longer it remains impactful the better.
I am building from this statement:
> The scientific method works over a long period of time, but to blindly trust a peer review study that just came out, any study, is almost as much faith as religion, specially if you're not a high level researcher in the same field and have spent a good amount of time reading their methodology yourself.
Saying that textbooks from Maxwell's era had misunderstandings and bad information is not saying their authors were inept; it's saying that this is how it works, it always has, and it always will be that way. That's it, really. The fact that good science came from it is to be expected, and the fact that bad science existed is also not surprising.
I think you interpreted the statement about the 1900s textbooks being wrong as a slander against the entire era, which is not how I read it, and certainly not what I meant to imply by any of my comments.
>Is not correct as far as I believe. Instead, we're saying that there's no reason to believe any study until it has stood the test of time. The longer it remains impactful the better.
Upon rereading, I agree; I apologize to you and OP for my misunderstanding. However, I still have to disagree, at least semantically, with "standing the test of time." I am not really familiar with the processes in the biological or social sciences, but coming from a physical-science background, any result of interest will need to be built upon quite quickly. Either some kind of design will be reproduced to improve or use it, or, in the case of a theoretical result, it will be awaiting some kind of experiment to validate it.
>> The scientific method works over a long period of time, but to blindly trust a peer review study that just came out, any study, is almost as much faith as religion, specially if you're not a high level researcher in the same field and have spent a good amount of time reading their methodology yourself.
I don't necessarily disagree with this statement (besides the 'long period of time'). Though I would also say that simply mistrusting the result has the same issue, so the only correct way forward seems to me to be to act as if it does not exist until you gain the expertise.
>Saying that textbooks from Maxwell's era had misunderstandings and bad information is not saying they are inept, it's saying that that is how it works, it always has and always will be that way. That's it, really. The fact that good science came from it is to be expected, and the fact that bad science existed is also not to be suprising.
I'm not familiar with textbooks of that era as through personal curiosity I've only read a few. I would still like to see an example of such an occurrence to understand the context under which these treatments are discussed. If these fallacious techniques were widespread enough to be popular in textbooks there must be some kind of literature supporting them?
>I think you interpreted the statement about the 1900s textbooks being wrong as a slander against the entire era, which is not how I read it, and certainly not what I meant to imply by any of my comments.
I will admit to being a bit hotheaded in the initial response, which I apologize for.
You have made so many mistakes in your reading that I would urge you next time to carefully re-read people's posts before responding to them. Also, never quote someone without using their actual words. For example, it was not said that any engineering book would contain those references; it was a general statement, not a categorical one.
>Again, you and the original poster seem to have this understanding that scientists and engineers from the mid 1800s to early 1900s are not to be trusted.
No, once again that's not what was said. The concept being communicated is that the scientific method works over long periods of time, not short ones. Over long periods of time, such as 200 years, the work that survives peer review and remains significant today is work like Maxwell's on electromagnetism, as opposed to Dr. Franz Mesmer's on animal magnetism.
You are taking little bits and pieces of what people are saying, misconstruing them and reinterpreting them, and then forming an argument that is not a genuine representation of the original comment.
>You have made so many mistakes in your reading that I would urge you next time to carefully re-read people's posts before responding to them. Also never quote someone without using their actual words. For example, it was not explicitly said that any engineering book would contain those references, it was a general statement not a categorical statement.
I will admit to some mistakes in comprehension and a poor literal quoting, though I will also maintain that I captured the majority of the essence of what was written in the quote. In the case of "any book about electricity..." vs "any engineering book" the only books that would be relevant to the discussion should be engineering OR science books relating to electricity.
>No, once again that's not what was said. The concept being communicated is that the scientific method works over long periods of time, not short ones. Over long periods of time, such as 200 years, the work that survives peer review and remains significant today are things like Maxwell's work on electromagnetism as opposed to Dr. Franz Mesmer's work on animal magnetism.
With respect to "time" I will refer to another comment I made below in response to the previous OP.
I think it's interesting that you use Mesmer as an example, because his work failed to gain the acceptance of the scientific societies of the time, and it was from the late 1700s, significantly earlier than the proposed mid-1800s to early 1900s.
Here's a more recent (1950) example that I think makes parent's point quite well:
> I assume that the reader is familiar with the idea of extra-sensory perception, and the meaning of the four items of it, viz. telepathy, clairvoyance, precognition and psycho-kinesis. These disturbing phenomena seem to deny all our usual scientific ideas. How we should like to discredit them! Unfortunately the statistical evidence, at least for telepathy, is overwhelming. It is very difficult to rearrange one's ideas so as to fit these new facts in. Once one has accepted them it does not seem a very big step to believe in ghosts and bogies. The idea that our bodies move simply according to the known laws of physics, together with some others not yet discovered but somewhat similar, would be one of the first to go.
Anyone on this site who doesn't know what this is from should feel a bit of shame in the current era of hype around machine intelligence, so I'll leave it as an exercise to the reader if you aren't already familiar with this paper.
Alan Turing was an incredible computer scientist and mathematician.
Unfortunately, he was out of his area of expertise in physics and human biology/neuroscience (not sure where telepathy would sit if it were to be rigorously studied). This is akin to Freeman Dyson on global warming.
That scientists can have strange ideas is something nobody can dispute. That those strange ideas enter into scientific legitimacy is another story entirely.
The point is that I don't believe Turing's ideas were widely considered strange at the time. The point is more that, even under conditions of honest action, it's very easy for educated, smart, and sincere thinkers to take as fact something that with time we come to believe is wildly not fact.
Science, even at its most sincere, should always be approached with thoughtful skepticism. The phrase I hear touted often these days, "trust the science," is in essence not how science should be thought of.
There is a difference between "not considered strange at the time" and "science" via scientific publication and subsequent consensus validating the idea. I have mentioned that luminaries can have odd ideas multiple times in this thread, it's not something I seek to deny. However, as I continue to reiterate, these ideas are generally:
1. outside their areas of expertise
2. not validated by independent scientific research
I completely agree that science should be approached with thoughtful skepticism, and I agree that 'trust the science' might not be the best semantics to use. However, it is not clear that skepticism from all parties should be given equal weight. Most of the time, people should "trust the science" because they are not equipped to be skeptical.
Exactly, but the problem is that everyone has to be the first to publish about something, and then every paper after that which deals with the same thing is relegated to second-level venues (or lower). So whatever the first word about something is becomes the accepted view, and it is super hard to contest.
The default state of the human brain almost seems to be a form of anti-science, blind faith in what you already believe, especially if you stand to gain personally from what you believe being true.
What is most incredible to me is even knowing and believing the above, I fall prey to this all the time.
The best example is psychology. The entire field needs to be scrapped and restarted; nothing you read in any of those papers can be trusted. It's just heaping piles of bad research dressed up with a thin veil of statistical respectability.
I treat the field like medicine in the 19th century: nice motives, but unless someone drags me to the operating room because I've just been shot, I won't have any surgery.
We use EM radiation for illnesses, and doctors apply it. It's one of the most important diagnostic and treatment options we have. I think what you're referring to is invalid therapies ("woo," snake oil, or just plain ignorance/greed), but it's hard to distinguish those from legitimate therapies at times.
Gamma knife? Basically the entire field of radiotherapy?
TMS is magnetic, not EM (the coil generates a magnetic field, which induces localized currents in the body being treated)
OP is making a distinction between "EM Radiation" (i.e. "light") and "Quasistatic fields".
This is warranted because they are pretty different: light has a frequency distribution, diffracts, etc., and can be focused to propagate energy over distances large compared to its source, whereas quasistatic fields (by definition) have no frequency distribution and die off as 1/r^2 or faster.
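A minimal numeric sketch of that falloff contrast (illustrative only, not a field solver; the exponents are the standard textbook ones: far-field radiation amplitude falls as 1/r, while a static dipole field falls as 1/r^3, consistent with the "1/r^2 or faster" claim above):

```python
# Relative field amplitude vs distance, normalized to 1 at r = 1.
def radiation_amplitude(r: float) -> float:
    return 1.0 / r        # far-field (radiating) amplitude ~ 1/r

def quasistatic_dipole(r: float) -> float:
    return 1.0 / r**3     # static dipole field ~ 1/r^3

for r in (1.0, 10.0, 100.0):
    print(f"r={r:>5}: radiation {radiation_amplitude(r):.6f}, "
          f"quasistatic dipole {quasistatic_dipole(r):.6f}")
```

At r = 100 the radiated amplitude is still 1% of its r = 1 value, while the quasistatic dipole field has dropped to one part in a million, which is why something like a TMS coil only affects tissue immediately next to it.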
I can't parse what you are saying, but there's a difference between EM radiation and a magnetic field (and the resulting locally induced currents).
Think in terms of an MRI machine: it puts you in a giant magnet (causing the various nuclear spins to align with the field) and then sends in a bunch of EM radiation (radio-frequency pulses). The former is a magnetic field, not EM radiation.
I think the error is putting trust in scientists as people, instead of putting trust in science as a methodology. The methodology is designed to rely on trusting a process, not trusting individuals, to arrive at the truth.
I guess it also reinforces the supreme importance of reproducibility. Seems like no research result should be taken seriously until at least one other scientist or group of scientists are able to reproduce the result.
And if the work isn't sufficiently defined to the point of being reproducible, it should be considered a garbage study.
There is no way to do any kind of science without putting trust in people. Science is not the universe as it is presented. Science is the human interpretation of observation. People are who carry out and interpret experiments. There is no set of methodology you can adopt that will ever change that. "Reproducibility" is important, but it is not a silver bullet. You cannot run any experiment exactly in the same way ever.
If you have independent measurements you cannot rule out bias from prior results. Look at the error bars here on published values of the electron charge and tell me that methodology or reproducibility shored up the result. https://hsm.stackexchange.com/questions/264/timeline-of-meas...
TFA is about a person who literally faked the observations. Everyone on this sub is trying to shoehorn in their preferred view of "how to fix science" when the problem here has nothing to do with any of it.
The initial GP comment made the point that, at some level, science requires trust. And (in the case of TFA) specifically trust that the person making the observations is actually performing the experiments and recording them correctly -- rather than making them up. You can verify and replicate (and we do quite a bit of that, modulo the fact that resource constraints are a huge problem in science) but without some degree of trust you're in trouble.
But the OP's suggestion was that to fix this specific problem of faked observations, you should separate interpretation and observation. I don't see how that fixes this problem at all. And in my view, the first step in solving the problem is to come up with some sense of how serious it is: meaning, rather than dwelling on each terrible isolated case and panicking, try to determine the actual prevalence and the overall impacts. With that information you can make resource-allocation decisions about how to address it. The HN response is much too emotional for anything useful to come of it (except for more anger and confirmation bias).
You're talking about science, the methodology. They're talking about science, the social institution. Scientists lying is a problem with the latter, not the former.
The way I sum it up is: science is a method, which is not equivalent to the institution of science, and because that institution is run by humans it will contain and perpetrate all the ills of any human group.
This error really went viral during the pandemic and continues to this day. We're in for an Orwellian future if the public does not cultivate some skeptic impulse.
I'd say the public needs to develop some rational impulse; it already has plenty of skepticism, to the point where people no longer trust science the methodology. Instead, they genuinely believe there is some alternative for finding the truth, and now simply believe the same old superstitions and bunk that people held prior to the scientific revolution.
Speaking of Orwell, I don't think science comes into it. Rather, when people stop believing in democracy, things will degenerate into authoritarianism. It's generally pretty hard to use science the methodology to implement an authoritarian government as the scientific method by definition will follow the evidence, not the will of a dictator.
However, something that looks like science but isn't could be used, especially if the public doesn't understand science and thus can't spot things that claim to be science but don't actually follow the scientific method.
Critical thinking = the ability to be skeptical, literally it is the ability to criticize.
Great critical thinkers become lawyers, post modernist intellectuals, and other parts of the "talking" class of intellectuals. Unfortunately, it's far easier to talk shit than it is to build things. We've massively over-valued critical thinking over constructive thinking.
Most people want to dunk on science. Few people want to submit their own papers to conferences. Many people act like submitting papers is impossible for non-Ph.D's. We have a lack of constructive oriented thinking.
>Many people act like submitting papers is impossible for non-Ph.D's.
I agree. But academia reinforces this perception. I feel like PhDs only give serious consideration to the utterances of other PhDs. The rest of the public consists of the unwashed masses, and at best gets the smiling-nod treatment from the PhD.
PS: I (a non-PhD) managed to publish a paper during the pandemic (doi: 10.3389/fphar.2022.909945). One of the biggest barriers was the item quoted above, and the bogeyman of "epistemic trespass" in general, operating in my own psychology. I've since become noisy in advocating for the #DeSci movement.
> I'd say the public needs to develop some rational impulse, it already has plenty of skepitism to the point where people no longer trust science the methodology.
Methodologies are inanimate - I may trust that a methodology is fine, but once Humans become involved I do not trust.
> Instead, they genuinely believe there is some alternative to finding the truth
There are several alternate means, the field of philosophy (that birthed science) has been working on such problems for ages, and has all sorts of utility, just sitting there waiting to be used by Humanity.
> and now simply believe the same old superstitions and bunk that people have prior to the scientific revolution.
Not possible for you to know, unless there are indeed forms of supernatural (beyond current scientific knowledge) forms of perception.
> Rather, when people stop believing in democracy, things will degenerate into authoritarianism.
Once again, not possible for you to know.
> It's generally pretty hard to use science the methodology to implement an authoritarian government
COVID demonstrated that to be incorrect.
> as the scientific method by definition will follow the evidence, not the will of a dictator.
Incorrect. Something defined to be true necessarily being true only works in metaphysics, such as linguistics.
And again, the scientific method is inanimate.
> However, something that looks like science but isn't could be used, especially if the public doesn't understand science and thus can't spot things that claim to be science but don't actually follow the scientific method.
On a scale of 1 to 10, how comprehensively and accurately do you believe you understand science?
Science is an anarchic enterprise. There is no "one scientific method", and anyone telling you there is has something to sell to you (likely academic careerism). https://en.wikipedia.org/wiki/Against_Method
How does this work for things like COVID vaccines, where waiting for a reproduction study would leave hundreds of thousands dead? Ultimately there needs to be some level of trust in scientific institutions as well. I do think placing higher value on reproducibility studies might help the issue somewhat, but I think there also needs to be a larger culture shift of accountability and a higher purpose than profit.
I believe if we taught philosophy in school to a non-trivial level we wouldn't have to rely on trust/faith.
I wonder if it's possible to get people to wonder why the one discipline that has the tools to deal with all of these epistemic, logical, etc issues isn't taught in school. You'd think it would be something that people would naturally wonder about, but maybe our fundamentalist (and false) focus on science as the one and only source of knowledge has damaged our ability to wonder independently.
Suppose you need to make a decision on a topic that's contingent on P being true, which someone has already tested. How would you go about making the decision without testing P yourself (because that would mean that you would have to do the same for every decision in your life)?
> How would you go about making the decision without testing P yourself (because that would mean that you would have to do the same for every decision in your life)?
This does not seem necessary for my ask above.
There may be many approaches, some impossible/invalid, but perhaps not all.
You're far from a scientist, so it's easy for you to put scientists/academia on a pedestal.
For most of the people who end up in these scandals, this is just the day job that their various choices and random chance led them to. They're just ordinary humans responding to ordinary incentives, in light of whatever consequences and risks they may or may not have considered.
Other careers, like teaching, medicine, and engineering have similar problems.
As a scientist, I agree, although for not quite the reason you gave. Scientists are given tremendous freedom and resources by society (public dollars, but also private dollars like at my industry research lab). I think scientists have a corresponding higher duty for honesty.
Jobs at top institutions are worth much more than their nominal salary, as evidenced by how much those people could be making in the private sector. (They are compensated mostly in freedom and intellectual stimulation.) Unambiguously faking data, which is the sort of thing a bad actor might do to get a top job, should be considered at least as bad a moral transgression as stealing hundreds of thousands or perhaps a few million dollars.
(What is the downside? I have never once heard a researcher express feeling threatened or wary of being falsely/unjustly accused of fraud.)
In my view, prosecuting the bad actors alone will not fix science. Science is by its nature a community, because only a small number of people have the expertise (and university positions) to participate. A healthy scientific discipline and a healthy community are the same thing. Just as "tough on crime" initiatives alone often do not help a troubled community, punishing scientific fraud harshly will not by itself fix the problem. Because the community is small, to catch the bad actors you will either have insiders policing themselves or non-expert outsiders rendering judgments. It's easy for well-intentioned policing efforts to turn into power struggles.
This is why I think the most effective way is to empower good actors. Ensure open debate, limit the power of individuals, and prevent over-concentration of power in a small group. These efforts are harder to implement than you might think, because they run against our desire to have scientific superstars and celebrities, but I think they will go a long way toward building a healthy community.
I agree with you, science fraud is terrible. It pollutes and breaks the scientific method. Enormous resources are wasted, not just by the fraudster but also by all the other well-meaning scientists who base their work on it.
In my experience no, most fraudsters are not evil people, they just follow the incentives and almost non-existent disincentives.
Being a scientist has become just a job; you find all kinds of people there.
As far as I know, no one goes to jail. The worst possible outcome (and a very rare one) is losing the job; more likely it's just the reputation.
I have the displeasure of having acquaintances who have done some pretty bad things, of the fraud and bribery persuasion. They did so because they had no regard for the secondary consequences. However, this didn't mean 'I understand this horrible secondary consequence is going to happen, but I don't care'; that would be evil. Instead, it's more common not to dedicate an iota of time to thinking about possible negative effects at all.
You'll see this all over risky startups. What starts as hopeful optimism only becomes fraud over time, when the consequences of not committing fraud also seem horrible. It's easy to follow the road until all your choices are horrible in different ways, and people pick the one that's better for those around them, yet worse for everyone else.
Our judgment of societal ills and the concept of "evil" rests too much on the question of "is this a bad person?" today. Most people who do heinous things are not bad people, but the fact that they did bad things really ought to be enough to mete out punishment.
Lack of foresight isn't a virtue, it's as much of a vice as knowing the consequences and ignoring them. If you lack foresight and that causes you to commit fraud, you committed fraud, plain and simple. That is evil.
IMO “evil” is a misconception. People have different beliefs and psychological needs, and placed in certain incentive structures that has the outcomes that we see. You can call certain behaviors “evil”, but that doesn’t explain anything about why the behaviors occur.
Nope. “Evil” still provides no explanation and no understanding of why and how things happen there. It’s the same thing as believing in miracles created by a god.
The context here is from the root comment: “Are people who commit this type of science fraud just really evil humans?”. “Just really evil” implies that there is no other explanation, and that the fraud is committed as a function of them being “really evil”.
I don’t actually know what people mean when they label someone as “evil”, other than “is doing/saying/thinking stuff I find very reprehensible”. Which doesn’t make sense when you insert it into the above statement: “Are people who commit this type of science fraud just humans who do stuff I find really reprehensible?” Well, I guess it sounds like they are.
It seems like people want to assign a character trait when they say “person X is evil”, but I don’t believe such a generic character trait exists (and what exactly it is supposed to mean if it existed). What’s worse, it obfuscates and prevents understanding the actual character traits and circumstances that lead to the respective behavior.
I agree, "evil" is a misconception, there exists no such thing as an "evil" person, in reality, just as there is no such thing as a "darling" person, in reality. But both expressions work as an expression of sentiment. When we use it we aim to communicate that we feel no empathy for such people (in case they are "evil"); they can without further ado be thrown in the dungeon. It is a dehumanizing construct enabling hate, same as calling people vermin, or monsters, but with religious connotations, exposing a will to exclude such people from the community (often for god reason), enabling going to war, or to exploit.
However, by removing empathy, we also reduce the possibility to understand the human motivations behind heinous acts (there always are), find solutions, build bridges, make truces, end wars. So maybe we should go lightly on the "evil" stuff, as much as possible.
Perhaps if you define evil as a low quantity of ability or commitment to search for and act in accordance to what is ultimately true then that will better resonate with you. Of course, that will necessarily lead to questions regarding the nature of truth and whether it exists, but that is beyond the scope of a short reply :)
Physical pain is objective. Someone inflicting physical pain is evil unless it’s in self defense or common sense situations like a doctor performing surgery.
What is a general definition of “evil” that one could derive this from? And how does this relate to the actual reasons why someone would inflict physical pain? Are soldiers in a war evil when they happen to inflict physical pain outside of self defense? Or is that another “common-sense” exception?
The concept is emotionally laden and ill-defined, and has little relation to why the designated behaviors actually happen. It’s an incoherent concept that has no explanatory power.
Exactly. In fact, all things in the universe are subjective except exactly one thing, which is that all other things are subjective. This is epistemological monism, and it's the only coherent view.
Socrates got it. "I know that I know nothing" (else)
Because we're cowards, and declaring vast swaths of our economy and society to be evil is not good for our future prospects. In other words, you can't tell people not to put radium up their asshole!
It's complicated. Historically, scientific fraud could be construed as 'well-intentioned' - typically a researcher in a cutting edge field might think they understood how a system worked, and wanting to be first to publish for reasons of career advancement, would cook up data so they could get their paper into print before anyone else.
Indeed, I believe many academic careers were kicked off in this manner. Where it all goes wrong is when other more diligent researchers fail to reproduce said fraudulent research - this is what brought down famous fraudster Jan Hendrik Schön in the field of plastic-based organic electronics, which involved something like 9 papers in Science and Nature. There are good books and documentaries on that one. This will only be getting worse with AI data generation, as most of those frauds were detected by banal data replication, obvious cuts and pastes, etc.
However, when you add a big financial driver, things really go off the rails. A new pharmaceutical brings investors sniffing for a big payout, and cooking data to make the patentable 'discovery' look better than it is is a strong incentive to commit egregious fraud. Bug-eyed greed makes people do foolish things.
People like us think scientists care about big-money things, but they largely don't care about that stuff as much as they care about prestige in their field. Prominent scientists get huge rewards of power and influence, as well as indirect money from leveraging that influence. When you start to think that way, the incentives for fraud become very "minor" and "petty" compared to what you are thinking of.
> Stuff like this seems to bother me more than it rationally should.
It's bothering you a rational amount, actually. These people have done serious damage to lots of lives and humanity in general. Society as a whole has at least as much interest in punishing them as it does for financial fraudsters. They should burn.
> There was a period of time when science was advanced by the aristocrats who were self funded and self motivated.
From a distance the practice of science in early modern and Enlightenment times might look like the disinterested pursuit of knowledge for its own sake. If you read the detailed history of the times you'll see that the reality was much more messy.
Today we only remember the great thinkers of these times, and tend to see a linear accumulation of knowledge. If you look at the history of the period you realise that there was a vast and confusing babble, and it was very hard at the time to distinguish the valid science from the superstition, the blind regurgitation of classical authority, the soothsayers and yes, the fraudsters.
For example Kepler considered his work on the Music of the Spheres (google it) to be more important than, and the ultimate goal of, his research on the mechanics of planetary motion. Newton dabbled in alchemy, and his dispute with Leibniz was very very bitchy with some dubious jostling for priority. And there was no end of dubious research and outright fraud going on at the time. So no, it was not a golden era of disinterested research.
See for example the Wikipedia articles on Phlogiston, the Music of the Spheres, the long and hard-fought battle over Epicycles, etc.
Not the OP, but I remember reading about many twists and turns on the road to various inventions described in Matt Ridley's "How Innovation Works". I personally like "Happy Accidents: Serendipity in Major Medical Breakthroughs in the Twentieth Century" by Morton Meyers.
Generally, the fields that have a Nobel in them attract the glory hounds and therefore the fraudsters. The ones that don't, like geology or archeology for example, don't get the glory hounds.
Anytime you see champagne bottles up on a professor's top shelf with little tags for Nature publications (or something like that), then you know they are a glory hound.
When you see beer bottles in the trash, then you know they're in it for more than themselves.
It seems like this could ultimately fall under the category of financial fraud, since the allegations are that he may have favorably misrepresented the results of drug trials where he was credited as an inventor of the drug that's now worth hundreds of millions of dollars.
Evil is a much simpler explanation than recognizing that if you were in the same position with the same incentives, you would do the same thing. It's not just one event, it's a whole career of normalizing deviation from your values. Maybe you think you'd have morals that would have stopped you, maybe those same morals would have ensured you were never in a position to PI research like that.
Scientific fraud can also compound really badly because people will try to replicate it, and the easiest results to fake are usually the most expensive...
I also watched almost all episodes of PBS Spacetime. Some of them multiple times. I'm so happy that Spacetime exists and also that Matt was recruited as a host (in place of Gabe). Highly recommended channel, superb content!
It is the same flavor of fraud as financial fraud. It is about personal gain, and avoiding loss.
This kind of fraud happens because scientists are rewarded greatly for coming up with new, publishable, interesting results. They are punished severely for failing to do that.
You could be the department's best professor in terms of teaching, but if you aren't publishing, your job is at risk at many universities.
Scientists in Academia are incentivized to publish papers. If they can take shortcuts, and get away with it, they will. That's the whole problem, that's human nature.
This is why you don't see nearly as many industry scientists coming out with fraudulent papers. If Shell's scientists publish a paper, they aren't rewarded for that; if they come up with some efficient new way to refine oil they are rewarded, and they also might publish a paper if they feel like it.
A lot of companies reward employees for publications. Mine certainly does. Also an oil company may not be such a great example since they directly and covertly rewarded scientists for publishing papers undermining climate change research.
As a collective endeavor to seek out higher truth, maybe some amount of fraud is necessary to train the immune system of the collective body, so to speak, so that it's more resilient in the long-term. But too much fraud, I agree, could tip into mistrust of the entire system. My fear is that AI further exacerbates this problem, and only AI itself can handle wading through the resulting volume of junk science output.
This is pretty funny. I usually hear this kind of language when a religious person is so devastated when their priest or pastor does something wrong that it causes them to leave their religion altogether. Are you going to do the same thing for scientism?
I'm not a particularly religious person, and I didn't realize what you described is something that happens with any great frequency. Nevertheless, I suppose one is able to leave a particular place of worship without leaving a religion. As with any way people form their views on something societal like this, it's on a spectrum? Religion, Politics, Science, Sex, Education, whatever.