It's using data from the Integrative Human Microbiome Project [1], so, in a sense, hundreds of high-quality biological replicates support their findings. Obelisks were found in a substantial fraction of them.
They then expanded the search to millions of publicly available sequences and found ~30k distinct classes(!) of Obelisk elements. One could argue that the quality of each of these "experiments" may not match IHMP, but still, the signal is more than sufficient to clearly demonstrate the existence, and implied significance, of these elements.
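To make the "searching millions of sequences" part concrete: a toy sketch (not the authors' actual pipeline) of the exact-seed matching that tools like BLAST start from. Real surveys use indexed alignment over millions of reads; the sequence names and data here are invented for illustration.

```python
# Toy illustration: scan a collection of RNA reads for an exact seed
# match, the first step of seed-and-extend homology search.

def find_hits(query_seed, sequences):
    """Return indices of sequences containing the exact seed."""
    return [i for i, seq in enumerate(sequences) if query_seed in seq]

# Hypothetical mini "database" of reads
reads = [
    "AUGGCCAUUGUAAUGGGCC",
    "GGCCUUAACCGGUAGCUAG",
    "CCAUUGUAAUGACGUACGU",
]

hits = find_hits("CAUUGUAAUG", reads)
print(hits)  # reads 0 and 2 contain the seed
```

The point is that this kind of search is deterministic: anyone with the same public data and the same query gets the same hits, which is why "just check the data" is a meaningful form of verification here.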
No doubt there are valid concerns but the value of comments formed by internet clichés and dismissive tropes is... lower than what we want here. When it comes with a tone of entitled aggressiveness, that's even worse—that's enough to drive away the people who spend their lives working on a subject, and that would be the worst possible outcome for HN.
It's based on data gathered from scientific instruments that are well-characterized and whose results are validated by being used for other findings. They're not administering a questionnaire to someone's microbiome and doing sample statistics on the results. They found novel RNA sequences in microbiome sample data using well-understood and repeatable methods, and confirmed that the RNA sequences seem to exist. You can repeat the analysis yourself and make that determination.
I think you're trying to draw a comparison here between two things of completely different categories, and I feel like you might not understand how different they are.
Don't be a dick. It's totally normal to be skeptical and doubtful of some over-the-top claims. The onus is squarely on the person presenting the information to justify it. Your points about replication are accurate, but let's at least keep up a pretense of civility.
The claim is not over-the-top at all, though. They did a survey of funny RNA sequences in gut samples and didn't find anything that links them to known sequences from other organisms. Nothing spectacular was claimed.
It’s such a different thing entirely, the question of replication doesn’t make sense. See my other comment.
I think you’d be better served by trying to understand how psychological research has been done, why people go on about replication, what results are suspect from psych, and how they developed in the first place.
Because it seems like you're trying to justify the actions in the psych world by pointing at other disciplines, but it just doesn't make sense.
What they both share are epistemic questions: How much knowledge can they deliver? And is science the basis for this knowledge?
When it comes to discrediting psychology here, it comes down to folks arguing: no, psychology isn't a basis for knowledge because it isn't scientific, because its claims are rarely replicated.
If the replication of claims is required for something to be science, I fail to see how this case is different. If it's not a case of special pleading, it appears identical to it.
Is this study scientific? It doesn't appear to be so.
The sequences they found are already in various databases and are not linked to any organisms we know. It's like discovering an interesting new insect in a national park. That's as different as it can be from psychological research. What is there to replicate?
So basically this study is "we looked at various publicly available datasets, and here is a thing we noticed that has previously gone unnoticed".
They noticed it in multiple datasets.
What would replication look like here besides someone else looking at the dataset and agreeing that they also see the curiosities?
Or do you mean someone measuring a new fresh set of data and looking at it?
Asking for replication in this case is surprising, because seemingly the entire value is to prompt other research to go figure out what’s going on with these things.
My reading is that the GP takes issue more with how the social sciences are discussed, and is using an arbitrary replication-less finding in the 'hard sciences' (this one) as a podium to speak from, regardless of why replication doesn't exist for it, or whether that reason makes sense.
Rather obviously, I'd think, if someone else looks at similar samples and sees no "Obelisks" at all. Or looks at their samples.
> What is H_0 in this study?
"The stuff we're seeing here doesn't exist"?
Or, IOW, "All the electron microscopes used have the same weird bug, showing obelisk-shaped pieces of RNA material where there really isn't anything at all"?
Feels a bit like what you want is really "We were all high on funny mushrooms when we saw that".
Look, I see from your other comments that you're actually not talking about this, but something else entirely. And you may well have a point there, but in order to make that point here, you would have had to come up with something a lot better than this. Because I'm fairly sure they weren't all high on funny mushrooms. Aren't you too, really?
It's more that it's a category error. Replication has a context where it's important, e.g. a treatment and a cause-and-effect relationship; it doesn't apply here. If you say "we did this to people and it caused them to respond this way X% of the time," that's something that needs to replicate for us to know whether to take it seriously. If you look at a bunch of separate data sources and find that many of them show a certain thing verifiably exists, that doesn't need to replicate to be taken seriously. The data should be checked to make sure the thing actually does exist, but that's verification, not replication.
So most findings in research psychology definitely need replication. But the idea that the existence of something is verifiably found across a bunch of different datasets doesn't need a new set of experiments to show -- you just check the data.
One thing that makes soft sciences softer is that it is inherently more difficult to achieve the same statistical significance, due both to higher underlying variances and to the logistics of obtaining large samples. Publishing with higher p-values results in lower reproducibility by definition.
The problem with psychology is that despite the fact that the subject matter is inherently more difficult to study, researchers are forced into the same publish-or-perish system as biologists and mathematicians. A higher degree of skepticism towards new studies might technically be prejudice, but it’s certainly justified.
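The variance point above can be made quantitative with a standard back-of-envelope calculation (my own illustration, not from the thread): the per-group sample size for a two-sample z-test grows with the square of the underlying standard deviation, so noisier subject matter needs much bigger studies to reach the same significance and power.

```python
from statistics import NormalDist

def n_per_group(sigma, delta, alpha=0.05, power=0.8):
    """Approximate per-group sample size for a two-sample z-test
    detecting a mean difference delta with standard deviation sigma."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for two-sided test
    z_beta = z.inv_cdf(power)           # critical value for desired power
    return 2 * ((z_alpha + z_beta) * sigma / delta) ** 2

# Doubling the standard deviation quadruples the sample needed
# for the same effect size, significance level, and power.
low_var = n_per_group(sigma=1.0, delta=0.5)
high_var = n_per_group(sigma=2.0, delta=0.5)
print(round(low_var), round(high_var))  # 63 251
```

With subject pools that are expensive to recruit, that quadratic cost is exactly the logistics problem that pushes underpowered studies, and thus higher p-values, into the literature.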
1: https://pubmed.ncbi.nlm.nih.gov/31142853/