Grad students try the hardest to change things because they are the most affected. The problem with the "table the issue for later" argument is that you just keep tabling it, and we end up with exactly the system we have. Maybe fixing it isn't a PhD student's job, but there's always something to do. Professors are still overworked.
There was a good post on BlueSky recently[0] that quoted the reviewer instructions for PNAS:
> The purpose of peer review is not to demonstrate proficiency in identifying flaws
I think this is an issue many have when doing any form of quality control. Every single work has flaws, and every single work needs more. The problem I see, especially in machine learning, is that we are not focusing on what matters: validating hypotheses. This requires far more than looking at plots and tables. It requires you to actually think about the paper you read.
But I think there's a fundamental alignment problem. An irony in ML, since surely this is an easier problem than AI alignment. The purpose of publishing is to communicate. Are we actually doing that? Is our review process improving communication? Or is it just gatekeeping and blocking out voices? It is one thing to reject works because they communicate poorly, don't support their hypotheses, or are outright fraud, but why are we blocking anything else? This stupid notion of prestige? That's never going to end well.
Why is it a terrible take? You first have to understand a community in order to reform it. You can't just say "I actually have no idea how to do research in this field, have not contributed anything substantial, have no money or soft power, but let me tell y'all how to do better science according to my subjective and very limited understanding."
> You can't just say "I actually have no idea how to do research in this field
I am a researcher in the field, I have SOTA models in ML, and I have a good number of citations for the experience level. Sure, I'm no rockstar, but neither am I below average.
I'm not sure why you jumped to the conclusion that I'm not part of this field.
Why do people go through the system, hate the system, then gleefully enforce the system (or an even more brutal version of the system) when they are at the top?
> Why do people go through the system, hate the system, then gleefully enforce the system (or an even more brutal version of the system) when they are at the top?
Survivorship bias.
The people who get to the top are those who can play sufficiently well by the rules of the system.
This is a classic question asked about any abusive cycle. Why do children who were abused by their parents have a high likelihood of committing spousal and child abuse themselves, continuing the cycle? The prevailing theories are that the behavior becomes normalized and that victims don't know how to do things differently. We are imitation learners. Capable of more, but this is built into us.
As another commenter mentioned, survivorship bias. "It sucks, but it's working, right?" Often we want to convince ourselves of this because it can justify the bad stuff that happened. We want to rewrite the story in our heads because it helps us not get depressed. But there's always room to improve, and I think that's the better aspect to focus on. Sure, what we're doing may work, but that is not a reason we shouldn't improve.
In fact, one of the most frustrating aspects to me is that it is the job of a scientist/researcher/engineer to find problems and then fix them. That's why I find maintaining the status quo so infuriating. It is in direct opposition to the fundamental framework we work in: always be improving.
Not to mention all the wasted time and money...
[0] https://bsky.app/profile/docbecca.bsky.social/post/3lkbec2hi...