There are numerous changes over the first two days. Exercise for the reader: find which HN comments inspired them.
To note: I really have no problem with him updating his piece to reflect accurate criticism; I do take issue with doing it silently, and with not reflecting on how it should influence his thoroughness in the future.
It's using data from the Integrative Human Microbiome Project [1], so, in a sense, hundreds of high-quality biological replicates support their findings. Obelisks were found in a substantial portion of them.
They then expanded the search to millions of sequences, which are publicly available, and found ~30k different classes(!) of Obelisk elements. One could argue that the quality of each of these "experiments" may not be as good as the IHMP's, but still, the signal is more than sufficient to clearly demonstrate the existence and implied significance of these elements.
No doubt there are valid concerns but the value of comments formed by internet clichés and dismissive tropes is... lower than what we want here. When it comes with a tone of entitled aggressiveness, that's even worse—that's enough to drive away the people who spend their lives working on a subject, and that would be the worst possible outcome for HN.
It's based on data gathered from scientific instruments that are well-characterized and whose results are validated by being used for other findings. They're not administering a questionnaire to someone's microbiome and doing sample statistics on the results. They found novel RNA sequences in microbiome sample data using well-understood and repeatable methods, and confirmed that the RNA sequences seem to exist. You can repeat the analysis yourself and make that determination.
I think you're trying to draw a comparison here between two things in completely different categories, and I feel like you might not understand how different they are.
Don't be a dick. It's totally normal to be skeptical and doubtful of some over-the-top claims. The onus is squarely on the person presenting the information to justify it. Your points about replication are accurate, but let's at least keep a pretense of civility.
The claim is not over-the-top at all though. They did a survey of funny RNA sequences in gut samples and didn't find anything that links them to known sequences from other organisms. Nothing spectacular was claimed.
It’s such a different thing entirely, the question of replication doesn’t make sense. See my other comment.
I think you’d be better served by trying to understand how psychological research has been done, why people go on about replication, what results are suspect from psych, and how they developed in the first place.
Because it seems like you're trying to justify the practices of the psych world by appeal to other disciplines, but it just doesn't make sense.
What they both share are epistemic questions: How much knowledge can they deliver? And is science the basis for this knowledge?
When it comes to discrediting psychology here, the argument folks make is: no, psychology isn't a basis for knowledge because it isn't scientific, because its claims are rarely replicated.
If the replication of claims is required for something to be science, I fail to see how this is different. If it's not a case of special pleading, it appears identical to it.
Is this study scientific? It doesn't appear to be so.
The sequences they found are already in various databases and are not linked to any organisms we know. It's like discovering an interesting new insect in a national park. That's as different as it can be from psychological research. What is there to replicate?
So basically this study is “we looked at various publicly available data sets of information, and here is a thing we notice that has previously gone unnoticed”.
They noticed it in multiple datasets.
What would replication look like here besides someone else looking at the dataset and agreeing that they also see the curiosities?
Or do you mean someone measuring a new fresh set of data and looking at it?
Asking for replication in this case is surprising, because seemingly the entire value is to prompt other research to go figure out what’s going on with these things.
My reading is that the GP takes issue more with how social sciences are discussed, and is using an arbitrary replication-less finding in the 'hard sciences' (this one) as a podium to speak from--regardless of why replication doesn't exist for it, or whether that reason makes sense.
Rather obviously, I'd think, if someone else looks at similar samples and sees no "Obelisks" at all. Or looks at their samples.
> What is H_0 in this study?
"The stuff we're seeing here doesn't exist"?
Or, IOW, "All the electron microscopes used have the same weird bug, showing obelisk-shaped pieces of RNA material where there really isn't anything at all"?
Feels a bit like what you want is really "We were all high on funny mushrooms when we saw that".
Look, I see from your other comments that you're actually not talking about this, but something else entirely. And you may well have a point there, but in order to make that point here, you would have had to come up with something a lot better than this. Because I'm fairly sure they weren't all high on funny mushrooms. Aren't you too, really?
It's more that it's a category error. Replication has a context where it's important, e.g. a treatment and a cause-and-effect relationship. It doesn't apply here. If you say "we did this to people and it caused them to respond this way X% of the time," that's something that needs to replicate for us to know if we should take it seriously. If you look at a bunch of separate data sources and find that a bunch of them show that a certain thing verifiably exists, that doesn't need to replicate to be taken seriously. The data should be checked to make sure the thing actually does exist, but that's verification.
So most findings in research psychology definitely need replication. But the idea that the existence of something is verifiably found across a bunch of different datasets doesn't need a new set of experiments to show -- you just check the data.
One thing that makes soft sciences softer is that it is inherently more difficult to achieve the same statistical significance, due to both higher underlying variances and the logistics of obtaining large samples. Publishing with higher p-values results in lower reproducibility by definition.
The problem with psychology is that despite the fact that the subject matter is inherently more difficult to study, researchers are forced into the same publish-or-perish system as biologists and mathematicians. A higher degree of skepticism towards new studies might technically be prejudice, but it’s certainly justified.
Does anyone have examples of OSS projects using this framework, and having good documentation?
I find the framework doesn't really speak to me (as a user, or prospective docs writer), and IME trying it out in communities I'm a part of, it didn't particularly improve the situation, as I had perhaps naively expected.
Is there anything comparable/support for this in jujutsu? The files git-crypt handles are not added to .gitignore — they’re instead added to .gitattributes.
The result is that jj commits them, which is not what you want.
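For context, git-crypt selects files with a filter attribute rather than ignoring them; a typical `.gitattributes` looks something like this (file paths are illustrative):

```
# .gitattributes — files git-crypt transparently encrypts on commit
secrets.env    filter=git-crypt diff=git-crypt
config/*.key   filter=git-crypt diff=git-crypt
```

As far as I know, jj doesn't run git's clean/smudge filters, which is why files marked this way get snapshotted and committed as-is instead of being encrypted.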
I just finished converting my rather large (just under 500 lines) init.vim to Lua. It took way longer than I had hoped. I feel like I've forgotten the motivation and what benefits it was supposed to bring. At this point I really don't want to consider another conversion, using Fennel or otherwise.
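For anyone contemplating the same conversion, much of it is mechanical line-by-line translation; a sketch of the common cases (option names and mappings are illustrative, and this assumes a recent Neovim with `vim.keymap.set` and callable `vim.cmd`):

```lua
-- init.vim:  set number shiftwidth=4
-- becomes:
vim.opt.number = true
vim.opt.shiftwidth = 4

-- init.vim:  nnoremap <leader>w :write<CR>
-- becomes:
vim.keymap.set("n", "<leader>w", vim.cmd.write, { desc = "Save buffer" })
```

The long tail — autocommands, conditionals, plugin-specific vimscript — is where most of the time goes, in my experience.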
I think about writing the config in Fennel all the time, but I'm not a big config tinkerer and already worry about struggling with Lua when an upgrade goes awry.
I’ve found the neovim ecosystem to churn way more in recent years than it did when I initially started using it 8 years ago. It really reminds me of the JS ecosystem of the past decade: a full-featured plugin that works great decides to strip things back so that its functionality is all pluggable, and you suddenly have to wade through two major releases’ migration docs to get things working kind of like they were before. I digress, to point out why it might be hard to have yet another layer to translate through. Though if you don’t have kids, I’d say “go for it!”
I have a 3500-line config file in Lua that I converted from VimScript last year. I find Lua way easier to maintain than VimScript.
Fennel smooths over some Lua quirks, so I think this is a good use case, since Fennel has some cool features that help keep the code maintainable. Right now I am moving the plugins that I maintain to Fennel.
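For anyone curious what that looks like in practice, a Neovim config fragment in Fennel maps fairly directly onto the Lua API (this sketch assumes a compile step via something like hotpot.nvim or the standalone fennel compiler; the option and mapping are illustrative):

```fennel
;; illustrative: set options and a keymap in Fennel
(set vim.opt.number true)
(set vim.opt.shiftwidth 4)
(vim.keymap.set :n "<leader>w" vim.cmd.write {:desc "Save buffer"})
```

Since Fennel compiles to plain Lua, you can mix converted and unconverted files during a gradual migration.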