This actually feels like an amazing step in the right direction.
If AI can help spot obvious errors in published papers, it can do it as part of the review process. And if it can do it as part of the review process, authors can run it on their own work before submitting. It could massively raise the quality level of a lot of papers.
What's important here is that it's part of a process involving the experts themselves -- the authors, the peer reviewers. They can easily dismiss false positives, and more importantly they get warnings about statistical mistakes or other aspects of the paper that aren't their primary area of expertise but can contain gotchas.
Students and researchers already send their own papers to a plagiarism checker to look for "real" and unintended flags before actually submitting, and make revisions accordingly. This is a known, standard practice that is widely accepted.
And let's say someone modifies their faked lab results so that no AI can detect any evidence of photoshopping images. Their results get published. Well, nobody will be able to reproduce their work (unless other people also publish fraudulent work from there), and fellow researchers will raise questions, like, a lot of them. Also, guess what, even today badly photoshopped results often don't get caught for a few years, and in hindsight it's just some low-effort image manipulation -- copying a part of an image and pasting it elsewhere.
I doubt any of this changes anything. There is a lot of competition in academia, and depending on the field, things may move very fast. Evading AI detection of fraudulent work likely doesn't give anyone enough of an advantage to survive in a competitive field.
>Their results get published. Well, nobody will be able to reproduce their work (unless other people also publish fraudulent work from there), and fellow researchers will raise questions, like, a lot of them.
Sadly you seem to underestimate how widespread fraud is in academia and overestimate how big the punishment is. In the worst case, when someone finds you guilty of fraud, you will get a slap on the wrist. In the usual case absolutely nothing will happen and you will be free to keep publishing fraud.
It depends; independent organizations that track this stuff are able to call out unethical research and make sure there is more than a slap on the wrist. I also suspect that things may get better as the NIH has forced all research to be in electronic lab notebooks and published in open access journals. https://x.com/RetractionWatch
> I also suspect that things may get better as the NIH has forced all research to be in electronic lab notebooks and published in open access journals.
Alternatively, now that NIH has been turned into a tool for enforcing ideological conformity on research instead of focusing on quality, things will get much worse.
> Sadly you seem to underestimate how widespread fraud is in academia
Anyway, I think "wishful thinking" is way more rampant and problematic than fraud, i.e. work done in a way that does not fully explore its weaknesses.
People shouldn't be trying to publish before they know how to properly define a study and analyze the results. Publications also shouldn't be willing to publish work that does a poor job at following the fundamentals of the scientific method.
Attributing it to wishful thinking and assuming good intent isn't a bad idea here, but that still leaves us with a scientific (or academic) industry that is completely inept at doing what it is meant to do - science.
I don’t actually believe that this is true if “academia” is defined as the set of reputable researchers from R1 schools and similar. If you define Academia as “anyone anywhere in the world who submits research papers” then yes, it has vast amounts of fraud in the same way that most email is spam.
Within the reputable set, as someone convinced that fraud is out of control, have you ever tried to calculate the fraud rate as a percentage, with a numerator and a denominator (either the number of papers published or the number of reputable researchers)? I would be very interested, and stunned, if it was over 0.1% or even 0.01%.
There is lots of evidence that p-hacking is widespread (some estimate that up to 20% of papers are p-hacked). This problem also exists in top institutions; in fact, in some fields it appears to be WORSE in higher-ranking unis - https://mitsloan.mit.edu/sites/default/files/inline-files/P-...
Where is that evidence? The paper you cite suggests that p-hacking is done in experimental accounting studies but not archival ones.
Generally speaking, evidence suggests that fraud rates are low (lower than in most other human endeavours). This study cites 2% [1]. This is similar to the numbers that Elisabeth Bik reports. For comparison, self-reported doping rates were between 6 and 9% here [2].
The 2% figure isn't a study of the fraud rate, it's just a survey asking academics if they've committed fraud themselves. Ask them to estimate how many other academics commit fraud and they say more like 10%-15%.
That 15% is actually whether they know someone who has committed academic misconduct, not fraud (although there is an overlap, it's not the same), and it is across all levels (i.e. from PI to PhD student). So this will very likely overestimate fraud, as we would be double counting (i.e. multiple reporters will know the same person). Importantly, the paper also says that if people reported the misconduct, it had consequences in the majority of cases.
And again, just for comparison, >30% of elite athletes say that they know someone who doped.
See my other reply to Matthew. It's very dependent on how you define fraud, which field you look at, which country you look at, and a few other things.
Depending on what you choose for those variables it can range from a few percent up to 100%.
I agree and am disappointed to see you in gray text. I'm old enough to have seen too many pendulum swings from new truth to thought-terminating cliche, and am increasingly frustrated by a game of telephone, over years, leading to it being common wisdom that research fraud is done all the time and it's shrugged off.
There's some real irony in that, as we wouldn't have gotten to this point without a ton of self-policing over the years, where it was exposed with great consequence.
> 0.04% of papers are retracted. At least 1.9% of papers have duplicate images "suggestive of deliberate manipulation". About 2.5% of scientists admit to fraud, and they estimate that 10% of other scientists have committed fraud. 27% of postdocs said they were willing to select or omit data to improve their results. More than 50% of published findings in psychology are false. The ORI, which makes about 13 misconduct findings per year, gives a conservative estimate of over 2000 misconduct incidents per year.
Although publishing untrue claims isn't the same thing as fraud, editors of well known journals like The Lancet or the New England Journal of Medicine have estimated that maybe half or more of the claims they publish are wrong. Statistical consistency detectors run over psych papers find that ~50% fail such checks (e.g. that computed means are possible given the input data). The authors don't care, when asked to share their data so the causes of the check failures can be explored they just refuse or ignore the request, even if they signed a document saying they'd share.
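(For the curious, the kind of consistency check meant here is nothing exotic. Here's a minimal sketch in the spirit of the GRIM test, assuming integer-valued survey items; the numbers below are made up for illustration.)

    # A GRIM-style consistency check (sketch): for integer-valued items,
    # the reported mean times the sample size must be close to a whole number.
    def mean_is_possible(reported_mean, n, decimals=2):
        """True if some integer total, divided by n, rounds to the reported mean."""
        nearest_total = round(reported_mean * n)
        return round(nearest_total / n, decimals) == round(reported_mean, decimals)

    print(mean_is_possible(3.48, 25))  # True:  87 / 25 = 3.48
    print(mean_is_possible(3.51, 25))  # False: no integer total over 25 items gives 3.51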
You don't have these sorts of problems in cryptography but a lot of fields are rife with it, especially if you use a definition of fraud that includes pseudoscientific practices. The article goes into some of the issues and arguments with how to define and measure it.
0.04% is an extremely small number and (it needs to be said) also includes papers retracted due to errors and other good-faith corrections. Remember that we want people to retract flawed papers! Treating it as evidence of fraud is not only a mischaracterization of the result but also a choice that is bad for a society that wants quality scientific results.
The other two metrics seem pretty weak. 1.9% of papers in a vast database containing 40 journals show signs of duplication. But then dig into the details: apparently a huge fraction of those are in one journal and in two specific years. Look at Figure 1 and it just screams “something very weird is going on here, let’s look closely at this methodology before we accept the top line results.”
The final result is a meta-survey based on surveys done across scientists all over the world, including surveys that are written in other languages, presumably based on scientists also publishing in smaller local journals. Presumably this covers a vast range of scientists with different reputations. As I said before, if you cast a wide net that includes everyone doing science in the entire world, I bet you’ll find tons of fraud. This study just seems to do that.
The point about 0.04% is not that it's low, it's that it should be much higher. Getting even obviously fraudulent papers retracted is difficult, and the image duplications are being found by unpaid volunteers, not via some comprehensive process, so the numbers are lower bounds, not upper. You can find academic fraud in bulk with a tool as simple as grep, and yet papers found that way are typically not retracted.
Example: select the tortured phrases section of this database. It's literally nothing fancier than a big regex:
"A novel approach on heart disease prediction using optimized hybrid deep learning approach", published in Multimedia Tools and Applications.
This paper has been run through a thesaurus spinner yielding garbage text like "To advance the expectation exactness of the anticipated heart malady ___location show" (heart disease -> heart malady). It also has nothing to do with the journal it's published in.
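To give a sense of how low-tech that detection is, here's a rough sketch of the regex idea in Python. Apart from "heart malady" (from the paper above), the phrase list is just illustrative, not any screener's actual dictionary:

    import re

    # Illustrative "tortured phrase" substitutions. Only "heart malady"
    # (thesaurus-spun "heart disease") comes from the paper above; real
    # screeners use much larger curated lists.
    TORTURED_PHRASES = [
        "heart malady",               # heart disease
        "counterfeit consciousness",  # artificial intelligence
        "profound learning",          # deep learning
        "irregular woodland",         # random forest
    ]

    PATTERN = re.compile("|".join(map(re.escape, TORTURED_PHRASES)), re.IGNORECASE)

    def flag_tortured_phrases(text):
        """Return every suspected thesaurus-spun phrase found in the text."""
        return PATTERN.findall(text)

    sample = ("To advance the expectation exactness of the anticipated "
              "heart malady ___location show")
    print(flag_tortured_phrases(sample))  # ['heart malady']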
Now you might object that the paper in question comes from India and not an R1 American university, which is how you're defining reputable. The journal itself does, though. It's edited by an academic in the Dept. of Computer Science and Engineering, Florida Atlantic University, which is an R1. It also has many dozens of people with the title of editor at other presumably reputable western universities like Brunel in the UK, the University of Salerno, etc:
Clearly, none of the so-called editors of the journal can be reading what's submitted to it. Zombie journals run by well-known publishers like Springer Nature are common. They auto-publish blatant spam yet always have a gazillion editors at well-known universities. This stuff is so basic that both generation and detection predate LLMs entirely, but it doesn't get fixed.
Then you get into all the papers that aren't trivially fake but fake in advanced, undetectable ways, or which are merely using questionable research practices... the true rate of retraction, if standards were at the level laymen imagine, would be orders of magnitude higher.
> found by unpaid volunteers, not via some comprehensive process
"Unpaid volunteers" describes the majority of the academic publication process so I'm not sure what you're point is. It's also a pretty reasonable approach - readers should report issues. This is exactly how moderation works the web over.
Mind that I'm not arguing in favor of the status quo. Merely pointing out that this isn't some smoking gun.
> you might object that the paper in question comes from India and not an R1 American university
Yes, it does rather seem that you're trying to argue one thing (ie the mainstream scientific establishment of the western world is full of fraud) while selecting evidence from a rather different bucket (non-R1 institutions, journals that aren't mainstream, papers that aren't widely cited and were probably never read by anyone).
> The journal itself does, though. It's edited by an academic in ...
That isn't how anyone I've ever worked with assessed journal reputability. At a glance that journal doesn't look anywhere near high end to me.
Remember that, just as with books, anyone can publish any scientific writeup that they'd like. By raw numbers, most published works of fiction aren't very high quality.[0] That doesn't say anything about the skilled fiction authors or the industry as a whole though.
> but it doesn't get fixed.
Is there a problem to begin with? People are publishing things. Are you seriously suggesting that we attempt to regulate what people are permitted to publish or who academics are permitted to associate with on the basis of some magical objective quality metric that doesn't currently exist?
If you go searching for trash you will find trash. Things like industry and walk of life have little bearing on it. Trash is universal.
You are lumping together a bunch of different things that no professional would ever consider to belong to the same category. If you want to critique mainstream scientific research then you need to present an analysis of sources that are widely accepted as being mainstream.
The inconsistent standards seen in this type of discussion damage sympathy amongst the public, and cause people who could be allies in future to just give up. Every year more articles on scientific fraud appear in all kinds of places, from newspapers to HN to blogs, yet the reaction is always https://prod-printler-front-as.azurewebsites.net/media/photo...
Academics draw a salary to do their job, but when they go AWOL on tasks critical to their profession suddenly they're all unpaid volunteers. This Is Fine.
Journals don't retract fraudulent articles without a fight, yet the low retraction rate is evidence that This Is Fine.
The publishing process is a source of credibility so rigorous it places academic views well above those of the common man, but when it publishes spam on auto-pilot suddenly journals are just some kind of abandoned subreddit and This Is Fine "but I'm not arguing in favor of it".
And the darned circular logic. Fraud is common but This Is Fine because reputable sources don't do it, where the definition of reputable is totally ad-hoc beyond not engaging in fraud. This thread is an exemplar: today reputable means American R1 universities because they don't do bad stuff like that, except when their employees sign off on it but that's totally different. The editor of The Lancet has said probably half of what his journal publishes is wrong [1] but This Is Fine until there's "an analysis of sources that are widely accepted as being mainstream".
Reputability is meaningless. Many of the supposedly top universities have hosted star researchers, entire labs [2] and even presidents who were caught doing long cons of various kinds. This Is Not Fine.
> Academics draw a salary to do their job, but when they go AWOL on tasks critical to their profession suddenly they're all unpaid volunteers. This Is Fine.
Academics are paid by grants to work on concrete research and by their institution to work on the tasks the institution pays for. These institutions do not pay for general "tasks critical to their profession".
> This Is Fine.
That is about as fine as me not working on an open source project on my employer's time.
> The inconsistent standards seen in this type of discussion
I assume you must be referring to the standards of the person I was replying to?
> damages sympathy amongst the public
Indeed. The sort of misinformation seen in this thread, presented in an authoritative tone, damages the public perception of the mainstream scientific establishment.
> Every year more articles on scientific fraud appear in all kinds of places
If those articles reflect the discussion in this thread so far then I'd suggest that they amount to little more than libel.
Even if those articles have substance, you are at a minimum implying a false equivalence - that the things being discussed in this thread (and the various examples provided) are the same as those articles. I already explained in the comment you directly replied to why the discussion in this thread is not an accurate description of the reality.
> Academics draw a salary to do their job, but when they go AWOL on tasks critical to their profession
Do they? It was already acknowledged that they do unpaid labor in this regard. If society expects those tasks to be performed to a higher standard then perhaps resources need to be allocated for it. How is it reasonable to expect an employee to do something they aren't paid to do? If management's policies leave the business depending on an underfunded task then that is entirely management's fault, no?
> The publishing process is a source of credibility ... but when it publishes ...
As I already pointed out previously, this is conflating distinct things. Credible journals are credible. Ones that aren't aren't. Pointing to trash and saying "that isn't credible" isn't useful. Nobody ever suggested it was.
If you lack such basic understanding of the field then perhaps you shouldn't be commenting on it in such an authoritative tone?
> This Is Fine "but I'm not arguing in favor of it".
Precisely how do you propose that we regulate academics' freedom of speech (and the related freedom of the press) without violating their fundamental human rights? Unless you have a workable proposal in this regard your complaints are meaningless.
I also eagerly await this objective and un-gameable metric of quality that your position appears to imply.
> Fraud is common but This Is Fine because reputable sources don't do it, where the definition of reputable is totally ad-hoc beyond not engaging in fraud.
Violent crime is common [on planet earth] but this isn't relevant to our discussion because the place we live [our city/state/country] doesn't have this issue, where the definition for "place we live" is rather arbitrarily defined by some squiggles that were drawn on a map.
Do you see the issue with what you wrote now?
If a journal publishes low quality papers that makes that journal a low quality outlet, right? Conversely, if the vast majority of the materials it publishes are high quality then it will be recognized as a high quality outlet. As with any other good it is on the consumer to determine quality for themselves.
If you object to the above then please be sure that you have a workable proposal for how to do things differently that doesn't infringe on basic human rights (but I repeat myself).
> today reputable means American R1 universities because they don't do bad stuff like that
It's a quick way of binning things. A form of profiling. By the metrics it holds up - a large volume of high quality work and few examples (relative to the total) of bad things happening.
> except when their employees sign off on it but that's totally different
The provided example upthread was a journal editor, not an author. No one (at least that I'm aware of) is assessing a paper based on the editors attached to the journal that it appeared in. I'm really not sure what your point is here other than to illustrate that you haven't the faintest idea how this stuff actually works in practice.
> The editor of The Lancet has said
Did you actually read the article you refer to here? "Wrong conclusions" is not "fraud" or even "misconduct". Many valid criticisms of the current system are laid out in that article. None of them support the claims made by you and others in this comments section.
> star researchers, entire labs [2] and even presidents who were caught doing long cons of various kinds. This Is Not Fine.
We finally agree on something! It is not fine. Which is why, naturally, those things generally had consequences once discovered.
An argument can certainly be made that those things should have been discovered earlier. That it should have been more difficult to do them. That the perverse incentives that led to many of them are a serious systemic issue.
In fact those exact arguments are the ones being made in the essay you linked. You will also find that a huge portion (likely the majority) of the scientific establishment in the west agrees with them. But agreeing that there are systemic issues is not the same as having a workable solution let alone having the resources and authority to implement it.
Thanks for the link to the randomly-chosen paper. It really brightened my day to move my eyes over the craziness of this text. Who needs "The Onion" when Springer is providing this sort of comedy?
It's hyperbole to the level that obfuscates, unfortunately. 50% of psych findings being wrong doesn't mean "right all the time except in exotic edge cases" like pre-quantum physics, it means they have no value at all and can't be salvaged. And very often the cause turns out to be fraud, which is why there is such a high rate of refusing to share the raw data from experiments - even when they signed agreements saying they'd do so on demand.
Not trying to be hostile but as a source on metrics, that one is grossly misleading in several ways. There's lots of problems with scientific publication but gish gallop is not the way to have an honest conversation about them.
Papers that can't be reproduced sound like they're not very useful, either.
I know it's not as simple as that, and "useful" can simply mean "cited" (a sadly overrated metric). But surely it's easier to get hired if your work actually results in something somebody uses.
Papers are reproducible in exactly the same way that github projects are buildable, and in both cases anything that comes fully assembled for you is already a product.
If your academic research results in immediately useful output, all of the people waiting for that to happen step in, and you no longer need to worry about employment.
The "better" journals are listed in JCR. Nearly 40% of them have impact factor less than 1, it means that on average papers in them are cited less than 1 times.
Conclusion: even in better journals, the average paper is rarely cited at all, which means that definitely the public has rarely heard of it or found it useful.
> Papers that can't be reproduced sound like they're not very useful, either.
They’re not useful at all. Reproduction of results isn’t sexy, so nobody does it. Almost feels like science is built on a web of funding trying to buy the desired results.
Reproduction is boring, but it would often happen incidentally to building off someone else's results.
You tell me that this reaction creates X, and I need X to make Y. If I can't make my Y, sooner or later it's going to occur to me that X is the cause.
Like I said, I know it's never that easy. Bench work is hard and there are a million reasons why your idea failed, and you may not take the time to figure out why. You won't report such failures. And complicated results, like in sociology, are rarely attributable to anything.
I've had this idea that reproduction studies on one's CV should become a sort of virtue signal, akin to philanthropy among the rich. This way, some percentage of one's work would need to be reproduction work or otherwise they would be looked down upon, and this would create the right incentive to do it.
Yeah... it's more in the less Pure domains... and mostly overseas?... :-)
https://xkcd.com/435/
"A 2016 survey by Nature on 1,576 researchers who took a brief online questionnaire on reproducibility found that more than 70% of researchers have tried and failed to reproduce another scientist's experiment results (including 87% of chemists, 77% of biologists, 69% of physicists and engineers, 67% of medical researchers, 64% of earth and environmental scientists, and 62% of all others), and more than half have failed to reproduce their own experiments."
Which researchers are using plagiarism detectors? I'm not aware that this is a known and widely accepted practice. They are used by students and teachers for student papers (in courses etc.), but nobody I know would use them when submitting research. I also don't see why even unethical researchers would use one; it wouldn't increase your acceptance chances dramatically.
> Well, nobody will be able to reproduce their work (unless other people also publish fraudulent work from there)
In theory, yes; in practice, the original results establishing amyloid beta protein as the main cause of Alzheimer's were faked, and it wasn't caught for 16 years. A member of my family took a med based on it and died in the meantime.
You're right that this won't change the incentives for dishonest researchers. Unfortunately there's no equivalent of "short sellers" in research, people who are incentivized to find fraud.
AI is definitely a good thing (TM) for those honest researchers.
AI is fundamentally much more of a danger to the fraudsters. Because they can only calibrate their obfuscation to today's tools. But the publications are set in stone and can be analyzed by tomorrow's tools. There are already startups going through old papers with modern tools to detect manipulation [0].
Every tool cuts both ways. This won't remove the need for people to be good, but hopefully reduces the scale of the problems to the point where good people (and better systems) can manage.
FWIW while fraud gets headlines, unintentional errors and simply crappy writing are much more common and bigger problems I think. As reviewer and editor I often feel I'm the first one (counting the authors) to ever read the paper beginning to end: inconsistent notation & terminology, unnecessary repetitions, unexplained background material, etc.
> unethical researchers could run it on their own work before submitting. It could massively raise the plausibility of fraudulent papers
The real low hanging fruit that this helps with is detecting accidental errors and preventing researchers with legitimate intent from making mistakes.
Research fraud and its detection is always going to be an adversarial process between those trying to commit it and those trying to detect it. Where I see tools like this making a difference against fraud is that it may also make fraud harder to plausibly pass off as errors if the fraudster gets caught. Since the tools can improve over time, I think this increases the risk that research fraud will be detected by tools that didn't exist when the fraud was perpetrated and which will ideally lead to consequences for the fraudster. This risk will hopefully dissuade some researchers from committing fraud.
Normally I'm an AI skeptic, but in this case there's a good analogy to post-quantum crypto: even if the current state of the art allows fraudulent researchers to evade detection by today's AI by using today's AI, their results, once published, will remain unchanged as the AI improves, and tomorrow's AI will catch them...
Doesn't matter. Lots of bad papers get caught the moment they're published and read by someone, but there's no followup. The institutions don't care if they publish auto-generated spam that can be detected on literally a single read-through; they aren't going to deploy advanced AI on their archives of papers to create consequences a decade later:
Are we talking about "bad papers", "fraud", "academic misconduct", or something else? It's a rather important detail.
You would ideally expect blatant fraud to have repercussions, even decades later.
You probably would not expect low quality publications to have direct repercussions, now or ever. This is similar to unacceptably low performance at work. You aren't getting immediately reprimanded for it, but if it keeps up consistently then you might not be working there for much longer.
> The institutions don't care if they publish auto-generated spam
The institutions are generally recognized as having no right to interfere with freedom to publish or freedom to associate. This is a very good thing. So good in fact that it is pretty much the entire point of having a tenure system.
They do tend to get involved if someone commits actual (by which I mean legally defined) fraud.
I think it’s not always a world scale problem as scientific niches tend to be small communities. The challenge is to get these small communities to police themselves.
For the rarer world-scale papers we can dedicate more resources to vetting them.
Based on my own experience as a peer reviewer and scientist, the issue is not necessarily in detecting plagiarism or fraud. It is in getting editors to care after a paper is already published.
During peer review, this could be great. It could stop a fraudulent paper before it causes any damage. But in my experience, I have never gotten a journal editor to retract an already-published paper that had obvious plagiarism in it (very obvious plagiarism in one case!). They have no incentive to do extra work after the fact with no obvious benefit to themselves. They choose to ignore it instead. I wish it wasn't true, but that has been my experience.
I already ask AI to be a harsh reviewer on a manuscript before submitting it. Sometimes blunders are there because of how close you are to the work. It hadn't occurred to me that bad "scientists" could use it to avoid detection.
I would add that I've never gotten anything particularly insightful in return... but it has pointed out some things that could be written more clearly, or where I forgot to cite a particular standardized measure, etc.
Just as plagiarism checkers harden the output of plagiarists.
This goes back to a principle of safety engineering: the safer, more reliable, and more trustworthy you make the system, the more catastrophic the failures when they happen.
Humans are already capable of “post-truth”. This is enabled by instant global communication and social media (not dismissing the massive benefits these can bring), and led by dictators who want fealty over independent rational thinking.
The limitations of slow news cycles and slow information transmission lend themselves to slow, careful thinking. Especially compared to social media.
The communication enabled by the internet is incredible, but this aspect of it is so frustrating. The cat is out of the bag, and I struggle to identify a solution.
The other day I saw a Facebook post of a national park announcing they'd be closed until further notice. Thousands of comments, 99% of which were divisive political banter assuming this was the result of a top-down order. A very easy-to-miss 1% of the comments were people explaining that the closure was due to a burst pipe or something to that effect. It's reminiscent of the "tragedy of the commons" concept. We are overusing our right to spew nonsense to the point that it's masking the truth.
How do we fix this? Guiding people away from the writings of random nobodies in favor of mainstream authorities doesn't feel entirely proper.
> Guiding people away from the writings of random nobodies in favor of mainstream authorities doesn't feel entirely proper.
Why not? I think the issue is the word "mainstream". If by mainstream, we mean pre-Internet authorities, such as leading newspapers, then I think that's inappropriate and an odd prejudice.
But we could use 'authorities' to improve the quality of social media - that is, create a category of social media that follows high standards. There's nothing about the medium that prevents it.
There's not much difference between a blog entry and scientific journal publication: The founders of the scientific method wrote letters and reports about what they found; they could just as well have posted it on their blogs, if they could.
At some point, a few decided they would follow certain standards --- You have to see it yourself. You need publicly verifiable evidence. You need a falsifiable claim. You need to prove that the observed phenomena can be generalized. You should start with a review of prior research following this standard. Etc. --- Journalists follow similar standards, as do courts.
There's no reason bloggers can't do the same, or some bloggers and social media posters, and then they could join the group of 'authorities'. Why not? For the ones who are serious and want to be taken seriously, why not? How could they settle for less for their own work product?
Redesign how social media works (and then hope that people are willing to adopt the new model). Yes, I know, technical solutions, social problems. But sometimes the design of the tool is the direct cause of the issue. In other cases a problem rooted in human behavior can be mitigated by carefully thought out tooling design. I think both of those things are happening with social media.
It baffles me that somebody can be a professor, director, whatever -- meaning: taking the place of somebody _really_ qualified -- and not get dragged through court after falsifying a publication until nothing is left of that betrayer.
It's not only the damage to society due to false, misleading claims. If those publications decide who gets tenure, a research grant, etc., then the careers of others are massively damaged as well.
A retraction due to fraud already torches your career. It's a black mark that makes it harder to get funding, and it's one of the few reasons a university might revoke tenure. And you will be explaining it to every future employer in an interview.
There generally aren't penalties beyond that in the West because - outside of libel - lying is usually protected as free speech.
They should work like the Polish plagiarism-detection system, legally required for all students' theses.
You can't just put stuff into that system and tweak your work until there are no issues. It only runs after your final submission. If there are issues, appropriate people are notified and can manually resolve them I think (I've never actually hit that pathway).
My hope is that ML can be used to point out real world things you can't fake or work around, such as why an idea is considered novel or why the methodology isn't just gaming results or why the statistics was done wrong.
We are "upgrading" from making errors to committing fraud. I think that difference will still be important to most people. In addition I don't really see why an unethical, but not idiotic, researcher would assume, that the same tool that they could use to correct errors, would not allow others to check for and spot the fraud they are thinking of committing instead.
I very much suspect this will fall into the same behaviors as AI-submitted bug reports in software.
Obviously it's useful when desired, they can find real issues. But it's also absolutely riddled with unchecked "CVE 11 fix now!!!" spam that isn't even correct, exhausting maintainers. Some of those are legitimate accidents, but many are just karma-farming for some other purpose, to appear like a legitimate effort by throwing plausible-looking work onto other people.
Or it could become a gameable review step like first line resume review.
I think the only structural way to change research publication quality en masse is to change the incentives of the publishers, grant recipients, tenure track requirements, and grad or post doc researcher empowerment/funding.
That is a tall order, so I suspect we’ll get more of the same, and now there will be 100-page, 100% articles just like there are 4-5 page top-rank resumes. Whereas a dumb human can tell you that a 1-page resume or a 2000-word article should suffice to get the idea across (barring tenuous proofs or explanations of methods).
Edit: incentives of anonymous reviewers as well that can occupy an insular sub-industry to prop up colleagues or discredit research that contradicts theirs.
This is exactly the kind of task we need to be using AI for - not content generation, but these sort of long running behind the scenes things that are difficult for humans, and where false positives have minimal cost.
The current review mechanism is based on how expensive it is to do the review. If it can be done cheaply it can be replaced with a continuous review system. With each discovery previous works at least need adjusted wording. What starts out an educated guess or an invitation for future research can be replaced with or directly linked to newer findings. An entire body of work can simultaneously drift sideways and offer a new way to measure impact.
In another world of reviews... Copilot can now be added as a PR reviewer if a company allows/pays for it. I've started doing it right before adding any of my actual peers. It's only been a week or so and it did catch one small thing for me last week.
This type of LLM use feels like spell check, except for basic logic. As long as we still have people who know what they are doing reviewing stuff AFTER the AI review, I don't see any downsides.
There’s no such thing as an obvious error in most fields. What would the AI say to someone who claimed the earth orbited the sun 1000 years ago? I don’t know how it could ever know the truth unless it starts collecting its own information. It could be useful for a field that operates from first principles like math but more likely is that it just blocks everyone from publishing things that go against the orthodoxy.
I am pretty sure that "the Earth orbited the Sun 1,000 years ago", and I think I could make a pretty solid argument about it from human observations of the behavior of, well, everything, around and after the year AD 1025.
It seems that there was an alternating occurrence of "days" and "nights" of approximately the same length as today.
A comparison of the ecosystem and civilization of the time vs. ours is fairly consistent with the hypothesis that the Earth hasn't seen the kind of major gravity disturbances that would have happened if our planet had only been captured into Sun orbit within the last 1,000 years.
If your AI rates my claim as an error, it might have too many false positives to be of much use, don't you think?
Of course you could when even a 1st grader knows this is true.
You have to be delusional to believe this would have been so easy 1,000 years ago, though, when everyone would be saying you were not only wrong but completely insane, maybe even such a heretic as to be worthy of being burned at the stake. Certainly worthy of house arrest for such ungodly thoughts when everyone knows man is the center of the universe and naturally the sun revolves around the earth.
A few centuries later but people did not think Copernicus was insane or a heretic.
> everyone knows man is the center of the universe and naturally the sun revolves around the earth
more at the bottom of the universe. They saw the earth as corrupt, closest to hell, and it was hell at the centre. Outside the earth the planets and stars were thought pure and perfect.
I don't think this is about expecting AI to successfully fact-check observations, let alone do its own research.
I think it is more about using AI to analyze research papers 'as written', focusing on the methodology of experiments, the soundness of any math or source code used for data analysis, cited sources, and the validity of the argumentation supporting the final conclusions.
I think that responsible use of AI in this way could be very valuable during research as well as peer review.
So long as they don't build the models to rely on earlier papers, it might work. Fraudulent or mistaken earlier work, taken as correct, could easily lead to newer papers that disagree with it or don't use the older data being flagged as wrong/mistaken. This sort of checking needs to drill down as far as possible.
I agree it should be part of the composition and review processes.
> It could massively raise the quality level of a lot of papers.
Is there an indication that the difference is 'massive'? Reading the OP, it wasn't clear to me how significant these errors are. For example, maybe they are simple factual errors such as the wrong year on a citation.
> They can easily dismiss false positives
That may not be the case - it is possible that the error reports may not be worthwhile. Based on the OP's reporting on accuracy, it doesn't seem like that's the case, but it could vary by field, type of content (quantitative, etc.), etc.
If the LLM spots a mistake with 90% precision, it's pretty good. If it's a 10% precision, people still might take a look if they publish a paper once per year. If it's 1% - forget it.
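Back-of-the-envelope, with made-up numbers, that's roughly the reviewer burden at each level:

    # Rough reviewer-burden math at different (assumed) precision levels.
    # Precision = true positives / all flags raised.
    for precision in (0.90, 0.10, 0.01):
        false_alarms_per_hit = (1 - precision) / precision
        print(f"precision {precision:.0%}: "
              f"{false_alarms_per_hit:.1f} false alarms per real error found")
    # precision 90%: 0.1 false alarms per real error found
    # precision 10%: 9.0 false alarms per real error found
    # precision 1%: 99.0 false alarms per real error found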
I thought about it a while back. My concept was using RLHF to train an LLM to extract key points, their premises, and generate counter-questions. A human could filter the questions. That feedback becomes training material.
Once better with numbers, maybe have one spot statistical errors. I think a constantly-updated, field-specific checklist for human reviewers made more sense on that, though.
For a data source, I thought OpenReview.net would be a nice start.
> If AI can help spot obvious errors in published papers, it can do it as part of the review process.
If it could explain what's wrong, that would be awesome. Something tells me we don't have that kind of explainability yet. If we do, people could get advice on what's wrong with their research and improve it. So many scientists would love a tool like that. So if ya got it, let's go!
The peer review process is not working right now with AI (Actual Intelligence) from humans; why would it work with these tools?
Perhaps a better suggestion would be to set up industrial AI to attempt to reproduce each of the 1,000 most cited papers in every ___domain, flagging those that fail to reproduce, probably most of them...
Totally agree! If done right, this could shift the focus of peer review toward deeper scrutiny of ideas and interpretations rather than just error-spotting