I haven't really found stackoverflow to be that humiliating (compared to some IRC rooms or forums); basic questions get asked and answered all the time. But the worst part is when you want to do something off the beaten path.
Q: how do I do thing X in C?
A: Why do you need to know this? The C standard doesn't say anything about X. The answer will depend on your compiler and platform. Are you sure you want to do X instead of Y? What version of Ubuntu are you running?
This being HN, I'd love to hear from one of the many IRC channel mods who literally typed (I'd guess copy/pasted) this kind of text into their chat room topics and auto-responders.
If you're out there-- how does it feel to know that what you meant as an efficient course-correction for newcomers was instead a social shaming that cut so deep that the message you wrote is still burned verbatim into their memory after all these years?
To be clear, I'm taking OP's experience as a common case of IRC newbies at that time on many channels. I certainly experienced something like it (though I can't remember the exact text), and I've read many others post on HN about the same behavior from the IRC days.
> social shaming that cut so deep that the message you wrote is still burned verbatim into their memory after all these years
Oh my, this reminded me how some 20 years ago I was a high school kid and dared to install a more nerdy Linux distro (which I won't name here) on my home computer. After some big upgrade, the system stopped booting, and when in panic I asked for help at the official forum, I got responses that were shaming me for blindly copying commands from their official website without consulting some README files. That's how I switched to Debian and never looked back.
>If you're out there-- how does it feel to know that what you meant as an efficient course-correction for newcomers was instead a social shaming that cut so deep that the message you wrote is still burned verbatim into their memory after all these years?
I've never been an IRC mod for a channel of anywhere near that significance.
But as someone who has put in considerable effort for the last few years trying to explain basic fundamental ideas about what Stack Overflow is and how it's supposed to work to people (including to people whose accounts are 15+ years old but who insist on continuing to treat the Q&A like a discussion forum)...
... I'm kinda envious.
Being presented with a text entry box, and the implied contract that what you type there will be broadcast to a wide audience, is not supposed to imply a right to reject the community's telos and substitute your own. Gatekeeping of this sort is important; otherwise you end up with the Wikipedia page for "Dog" being flooded with people trying to get free veterinary consultation. Communities are allowed to have goals and purposes that aren't obvious and which don't match the apparent design, and certainly they aren't required to have goals which encompass everything possible with the site's software.
There are countless places on the Internet that work like a discussion forum where you can post a question, or a request for help, or just a general state of confusion, about a problem where you're just completely lost; and where you can expect to have a back-and-forth multi-way conversation with others to try and diagnose things or come to a state of understanding; and where nobody cares if your real concern ever surfaces in the form of an explicit, coherent question; and where there's no expectation that any of this dialogue should ever be useful to anyone else.
Stack Overflow is not that, explicitly and by design; and that came about specifically so that people who do have some clue and want to find an answer to an actual question, can do so without having to pick through a dialogue of the sort described above, following an arbitrarily long chain of posts of questionable relevance (that might well end in "never mind, I fixed it" with no explanation).
But in order to become a repository of such questions, people presented with the question submission form... need to be restricted to asking such questions.
That is really absurd! AFAIK, it is not possible to pose a question to a human in C++.
This level of dogmatism and ignorance of human communication reminds me of a TL I worked with once who believed that their project's C codebase was "self-documenting". They would categorically reject PRs that contained comments, even "why" comments that were legitimately informative. It was a very frustrating experience, but at least I have some anecdotes now that are funny in retrospect.
Self-documenting code is one of the worst ideas in programming. Like you, I've had to work with teams where my PRs would be blocked until I removed my comments. I'm not talking about pointless comments like "# loop through the array" but JSDoc-style comments describing why a function was needed.
I will no longer work anywhere that has this kind of culture.
Hard to agree or disagree without real examples. I've worked with people who insist on writing paragraphs of stories as comments on top of some pretty obviously self-descriptive code. In those cases, the comments were indeed just clutter that would likely soon be out of date anyway. Besides, code that needs huge comments like that should usually just be refactored. It's pretty rare to actually need written comments to explain what's going on when the code is written semantically and thoughtfully.
To be clear, I'm not talking about self-indulgent essays embedded in comments. I'm talking about comments that convey context for humans.
Here's a made-up example:
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

// Parses a Foo message [spec]. Returns true iff the parse was successful.
// The `out` and `in` pointers must be non-null.
//
// [spec]: https://foo.example/spec
bool parse_foo(foo_t* out, const char* in, size_t in_len) {
assert(out);
assert(in);
// TODO(tracker.example/issue#123) Delete the Foo v1 parser. Nobody will
// be sending us Foo v1 messages once all the clients are upgraded to Bar
// v7.1.
//
// Starting in Foo v2, messages begin with a two-byte version identifier.
// Prior to v2, there was no version tag at all, so we have to make an
// educated guess.
//
// For more context on the Foo project's decision to add a version tag:
// https://foo.example/specv2#breaking-change-version-tag
uint16_t tag_or_len;
if (!consume_u16(&tag_or_len, &in, &in_len)) {
return false;
}
if (tag_or_len == in_len) {
return parse_foo_v1(out, in, in_len);
}
return parse_foo_v2(out, in, in_len);
}
On that old project, I believe the TL would have rejected each of these comments on the grounds that the code is self-documenting. I find this to be absurd:
(1) Function comments are necessary to define a contract with the caller. Without them, callers are just guessing at proper use, which is particularly dangerous in C. Imagine if a caller guessed that the parser would gracefully degrade into a validator when `out` is NULL (there's a small sketch of that misuse after this list). (This is a mild example! I'm sure I could come up with an example where the consequence is UB or `rm -rf /`.)
(2) The TODO comment links to the issue tracker. There's no universe in which a reader could have found issue#123 purely by reading the code. On the aforementioned project, we were constantly rediscovering issues after wasting hours/days retreading old territory.
(3) Regarding the "Starting in Foo v2" comment... OK, fine, maybe this could have been inferred from reading the code. But it eases the reader into what's about to happen while providing further context with a link. On balance, I think this kind of "what" comment is worth including even though it mirrors the code.
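To make (1) concrete, here's a quick sketch of that misuse (my own hypothetical addition, reusing the made-up `parse_foo` above) that an undocumented contract invites:
// Hypothetical caller that assumes a NULL `out` turns the parser into a
// validator. With the contract comment, a reviewer catches this immediately;
// without it, the call compiles fine and dies on the assert at runtime (or is
// UB if asserts are compiled out with NDEBUG).
bool is_valid_foo(const char* in, size_t in_len) {
    return parse_foo(NULL, in, in_len);
}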
A one-week ban on the first message is clearly gatekeeping. What a bunch of jerks. A one-hour ban would have been a lot more appropriate, escalating from there if the person can't follow the rules.
Don't even get me started on how dumb rule 2 is, though. And rule 3 doesn't even work for normal English, as many things are abbreviated, e.g. this example.
And of course, you didn't greet and wait, you just put a pleasantry in the same message. Jeez.
I'm 100% sure I'd never have gone back after that rude ban.
> And of course, you didn't greet and wait, you just put a pleasantry in the same message. Jeez.
I'm pretty sure that "rule" was more aimed towards "just ask your question" rather than "greet, make smalltalk, then ask your question".
I have similar rules, though I don't communicate them as aggressively, and I don't ban people for breaking them; I just don't reply to greetings coming from people I know aren't looking to talk to me to ask me how I've been. It's a lot easier if you send the question you have instead of sending "Hi, how are you?" and then waiting for 3 minutes to type out your question.
He did send the question. In the same message. It's just that he was polite about it. He didn't send a greeting and wait for a response before continuing.
I think reading “u” takes longer than reading “you”.
With “u”, I have to pause for a moment and think “that’s not a normal word; I wonder if they meant to type ‘i’ instead (and just hit a little left of target)?” and then maybe read the passage twice to see which is more likely.
I don’t think it’s quite as much a contradiction. (It still could be more gruff than needed.)
edit: I remember how some communities changed into: "The help isn't good enough, you should help harder, I want you to help me according to my conventions." Then they leave after getting their answer and are never seen again, rather than joining the help desk themselves.
I learned a bunch of programming languages on IRC, and the C and C++ communities on freenode were by far the most toxic I've encountered.
Now that Rust is successfully assimilating those communities, I have noticed the same toxicity on less well-moderated forums, like the subreddit. The Discord, luckily, is still great.
It's probably really important to separate the curmudgeons from the fresh initiates to provide an enjoyable and positive experience for both groups. Discord makes that really easy.
In the Ruby IRC channel curmudgeons would simply be shot down instantly with MINASWAN style arguments. In the Haskell IRC channel I guess it was basically accepted that everyone was learning new things all the time, and there was always someone willing to teach at the level you were trying to learn.
Not my experience. IRC was 'toxic' since forever, but that's not toxicity, that's the inability to read emotion through transactional plain text. Once one accounts for that in their mental model, IRC is just fine.
Yes, immature people are everywhere, but SO took it to a new level before they had to implement a code of conduct. I remember asking questions and getting "this is a common misconception, maybe you're looking for X instead" type of actually helpful and kind answers.
At some point it got to where, if you aren't asking a complete problem that can be modeled as a logic statement, you're labeled as stupid for not knowing better. The thing is, if I knew better or had already found the answer, I wouldn't be asking SO in the first place.
After a couple of incidents, I left the place for the better. I can do my own research, and share my knowledge elsewhere.
Now that they're training their own and others' models with that corpus, I'll never add a single dot to their dataset.
Stack Overflow always had a code of conduct - it just wasn't always called that, and it started out based on some naive assumptions about human behaviour.
Someone who tells you something along the lines of "this is a common misconception, maybe you're looking for X instead" is being helpful and kind, and is not at all "labeling you as stupid". On Stack Overflow, it's not at all required that you actually need the question answered, as asked, in order to ask it. (You're even actively encouraged to ask and answer your own questions, as long as both question and answer meet the usual standards, as a way to share your expertise.)
It has never been acceptable to insult others openly on the site - but moderation resources are, and always have been, extremely strained, and the community simply doesn't see certain things as insulting or "mean" that others might. In particular, downvotes aren't withheld out of sympathy, because they are understood to be purely for content rating - but many users take them personally anyway. Curators commonly get yelled at when they comment (thus identifying themselves) to explain policy in polite but blunt copy-paste terms; and they get seethed at when they don't comment. There is a standard set of reasons to close questions (https://meta.stackoverflow.com/questions/417476), and a how-to-ask guide (https://stackoverflow.com/help/how-to-ask), and a site tour (https://stackoverflow.com/tour), and an entire Meta site with a variety of FAQ entries and other common references (for example, I wrote the "proposed faq" of https://meta.stackoverflow.com/questions/429808); but very few people seem interested in checking the extant documentation for the community norms, and then feel slighted when the community norms aren't what they assume they ought to be (because it works that way everywhere else). Communities should be allowed to have their own norms, so that they can pursue their own goals (https://meta.stackoverflow.com/questions/254770).
A ton of users confuse "know better or find the answer ahead of time" with debugging. Stack Overflow allows for what are commonly called "debugging questions", but this does not mean questions wherein you ask someone to debug the code for you. Why? Because that can't ever be helpful to someone else - nobody else has your code, and thus they can't benefit from someone locating the bug in your code. The "debugging questions" that Stack Overflow does want are questions about behaviour that isn't understood, after you have isolated the misbehaving part. These are ideally presented in the form of a "minimal reproducible example" or MRE (https://stackoverflow.com/help/minimal-reproducible-example).
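To illustrate (my own toy example, not one from the site's documentation), an MRE for a "why does this print 0?" question can be as small as:
#include <stdio.h>

int main(void) {
    double ratio = 1 / 3;   // integer division: 1 / 3 evaluates to 0 before the assignment
    printf("%f\n", ratio);  // prints 0.000000, which often surprises beginners
    return 0;
}
The whole program fits in the question, reproduces the surprising behaviour on its own, and contains nothing unrelated to it - which is exactly what makes it answerable for the next reader with the same problem.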
Stack Overflow explicitly expects you to do research before asking your question (https://meta.stackoverflow.com/questions/261592) - because most of the goal of the site is to cover questions that can't be answered that way. Traditional reference documentation is only part of the puzzle (https://diataxis.fr/); the Q&A site format allows for explanations (for "debugging questions") and how-to guides (in response to simple queries about how to do some atomic, well-specified task). (Tutorials don't fit in this format because they don't start with a clear question from the student.)
The major misunderstanding is that SO exists to help the question author first. It is not IRC. The most value comes from googling a topic and getting existing answers on SO.
In other words, perhaps in your very specific case your question is not an XY problem, but for the vast majority of visitors from google it won't be so. https://en.wikipedia.org/wiki/XY_problem
Personally, I always answered SO from at least two perspectives: how the question looks for someone coming from google and how the author might interpret it.
I think it depends on how the question is constructed:
- I want to do X, how do I do it?
- I was thinking of doing X to achieve Y, wonder if that's a good idea?
Sometimes, I really want to do X, I know it may be questionable, I know the safest is "probably don't want to do it", and yet, that's not someone else's (or an LLM's) business, I know exactly what I want to do, and I'm asking if anyone knows HOW, not IF.
So IMO it's not a flaw, it's a very useful feature, and I really do hope LLMs stay that way.
I think there can be a middle ground. I think it's fine if LLMs warn you but still answer the question the way you asked. I don't always know when i should be asking if something is a good idea or not.
I always wonder about that. Very often it seems that you need to be able to tell the LLM that it's wrong. And then it happily corrects itself. But if you don't know that the answer is wrong, how can you get the correct answer?
This happened to me the other day. I had framed a question in the ordinal case, and since I was trying to offload thinking anyways, I forgot that my use case was rotated, and failed to apply the rotation when testing the LLM answer. I corrected it twice before it wrapped around to the same (correct) previous answer, and that’s when I noticed my error. I apologized, added the rotation piece to my question, and it happily gave me a verifiably correct answer.
A large majority of the complaints we get of this form on the Meta site, whenever there's an actual object case, turn out to be really clear-cut duplicates where the OP is quibbling on an irrelevant detail instead of trying to understand the explanation in the answer and adapt the code to individual circumstances.
I find that this is mainly a problem in languages that attract "practical"/"best tool for the job" Philistines. Not going to name names right now but I had never really experienced this until I started using languages from a certain Washington based software company.
SO does suck, but i've found that if you clarify in the question what you want, and pre-empt the Y instead of X type answers, you will get some results.
i haven't been using SO recently, so this problem may have devolved now. if people remaining today on SO are such that they just question the premise whenever something slightly off the beaten track is asked, then i guess the site has really died.
I mean, those all sound like good questions. You might be a super genius, but most people who ask how to do X actually want to do Y. And if they DO want X, then those other questions about compiler and OS version really matter. The fact that you didn’t include them in your question shows you aren’t really respecting the time of the experts on the platform. If you know you are doing something unusual, then you need to provide a lot more context.
It’s also quite fun when you ask niche questions that haven’t been asked or answered yet ("How do I do X with Y?"), and just get downvoted for some reason.
That’s when I stopped investing any effort into that community.
Turned out it was, counter-intuitively, impossible. And not documented anywhere.
For the major programming languages, it must be a pretty esoteric question if it does not have an answer yet.
Increasingly, the free products of experts are stolen from them with the pretext that "users need to be protected". Entire open source projects are stolen by corporations and the experts are removed using the CoC wedge.
Now SO answers are stolen because the experts are not trained like hotel receptionists (while being short of time and unpaid).
I'm sure that the corporations who steal are very polite and CoC compliant, and when they fire all developers once an AGI is developed, the firing notices will be in business speak, polite, express regret and wish you all the best in your future endeavors!
One man's tangent is another man's big picture. It may be the case of course that some people guilty of CoC overreach are shaking in their boots right now because they went further than their corporations wanted them to go.
The main issue with Stack Overflow (and similar public Q&A platforms) is that many contributors do not know what they do not know, leading to inaccurate answers.
Additionally, these platforms tend to attract a fair amount of spam (self promotion etc) which can make it very hard to find high-quality responses.
I’m not sure how to take your comment, but I feel the same(?) way. I love that I can use LLMs to explore topics that I don’t know well enough to find the right language to get hits on. I used to be able to do this with google: after a few queries and skimming through the page 5 hits, I’d eventually find the one phrase that cracks open the topic. I haven’t been able to do that with google for at least 10 years. I do it regularly with LLMs today.
They are extraordinarily useful for this! "Blah blah blah high level naive description of what I want to know about, what is the term of art for this?"
Then equipped with the right term it's way easier to find reliable information about what you need.
QA platforms and blogging platforms both seem to have finite lifespans. QA forums (Stack Overflow, Quora, Yahoo Answers) do seem to last longer, but they need to be moderated pretty aggressively or they turn into homework help platforms.
Blogging platforms are the worst though. Medium looked pretty OK when it first came out. But now it is just a platform for self-promotion. Substack is like 75% of the way through that transition IMO.
People who do interesting things spend most of their time doing the thing. So, non-practicing bloggers and other influencers will naturally overwhelm the people who actually have anything interesting to report.
I don’t want to be that guy saying this, but 99% of the top results on google from Medium related to anything technical are literally reworded/reframed versions of the official quick start guide.
There are some very rare gems, but it is hard to find those among the above-mentioned ocean of reworded quick starts disguised as “how to X” or “fixing Y”. It almost reminds me of the SEO junk you get when you search “how to restart iPhone”: answers that dance around letting it die from battery drain and then charging it, installing this software, taking it to the Apple repair shop, or going to settings and traversing many steps, while never saying that if you are between these models you can use the power+volume up button trick.
Somebody who just summarizes tutorials can write like 10 Medium posts in the time it takes an actual practitioner to do something legitimately interesting.
What you mention has indeed been a serious problem from day one.
But to me the worst issue is that it's now "Dead Overflow": most answers are completely, totally and utterly outdated. And given that they made the mistake of having the concept of an "accepted answer" (which should never have existed), it only makes the issue worse.
If it's a question about things that don't change often, like algorithms, then it's OK. But for anything "tech", technical rot is a very real thing.
To me SO has both outdated and inaccurate answers.
I agree that "accepted answer" is a misfeature; but the default is for questions to remain open to new answers indefinitely.
On the one hand, good new answers can remain buried under mediocre and outdated answers more or less indefinitely, because they received hundreds of questionable upvotes in the past. (Despite popular perception, the average culture of Stack Overflow is very heavily weighted towards upvoting. I have statistical analysis of this on the meta site somewhere.)
On the other hand, there are tons of popular reference questions that had to be "protected" so that people couldn't just sign up and contribute the 100th answer (yes, the count really does range into triple digits in several cases, especially if you have access to deleted answers) to a question where there are maybe five actually reasonable-considered-distinct answers possible. And this protection is a quite low barrier - 10 reputation, IIRC. (I'm not sure what motivates people to write these new answers. It might have something to do with being able to point to a Stack Overflow profile which boasts that you've "reached" millions of people, due to its laughably naive algorithm for that.)
>The main issue with Stack Overflow (and similar public Q&A platforms) is that many contributors do not know what they do not know, leading to inaccurate answers.
The best Q&A platform would be one where experts and scientists answer questions, but sites like Wikipedia and Reddit showed that a broad audience can also be pretty good at providing useful information and moderating it.
I've gotten answers from OpenAI that were technically correct but quite horrible in the longer term. I've gotten the same kinds of answers on Stack Overflow, but there other people are eager to add the necessary feedback. I got the same feedback from an LLM but only because in that case I knew enough to ask for it.
Maybe we can get this multi-headed advantage back from LLMs by applying a team of divergent AIs to the same problem. I've had other occasions when OpenAI gave me crap that Claude corrected, and vice versa.
I too can write made up criticism if that’s what my boss wants in the workplace — but that doesn’t suddenly invalidate my ability to criticize my own work to improve it.
I've been arguing with Copilot back and forth where it gave me a half-working solution that seemed overly complicated, but since I was new to the tech used, I couldn't say what exactly was wrong. After a couple of hours, I googled the background, trusted my instinct, and was able to simplify the code.
In that situation, I iteratively improved the solution by telling Copilot that things seemed too complicated and that this or that wasn't working. That led the LLM to actually come back with better ideas. I kept asking myself why something like what you propose isn't baked into the system.
The papers I've read have shown LLM critics to be quite bad at their work. If you give an LLM a few known good and bad results, I think you'll see the LLM is just as likely to make good results bad as it is to make bad results good.
I approach it the same way as the things I build myself - testing and measuring.
Although if I’m truly honest with myself, even after many years of developing, the true cycle of me writing code is: over confidence, then shock it didn’t work 100% the first time, wondering if there is a bug in the compiler, and then reality setting in that of course the compiler is fine and I just made my 15th off-by-one error of the day :)
The flipside to this is you can’t get answers to anything _recent_, since the models are trained on content that is years behind. My feeling is it’s getting increasingly difficult to figure out issues on the latest versions of libraries & tools, as the only options are private Discords (which aren’t even googleable).
Not my daily experience. It’s been impossible to get relevant answers to questions on multiple languages and frameworks, no matter the model. O1 frequently generates code using deprecated libraries (and is unable to fix it with iteration).
Not to mention there will be no data for the model to learn the new stuff anyway, since places like SO will get zero responses about the new stuff for the model to crawl.
I feel like this will be really beneficial in work environments. LLMs provide a lot of psychological safety when asking “dumb” questions that your coworkers might judge you for.
At the same time, if a coworker comes asking me for something _strange_, my first response is to gently inquire as to the direction of their efforts instead of helping them find an answer. Often enough, this ends up going back up their "call stack" to some goofy logic branch, which we then undo together, and everyone is pleased.
Full ACK. It has been liberating to be able to chat about a topic I always wanted to catch up on. And, even though I read a lot of apologies, at least nobody is telling me "That's not what you actually want."
I am very curious to see how this is going to impact STEM education. Such a big part of an engineer's education happens informally by asking peers, teachers, and strangers questions. Different groups are more or less likely to do that consistently (e.g. https://journals.asm.org/doi/10.1128/jmbe.00100-21), and it can impact their progress. I've learned most from publicly asking "dumb" questions.
It won't. If you look at advanced engineering/mathematics material online, it is abysmal in terms of actually "explaining" the content. Most of the learning and understanding of intricacies happens via dialogue with professors/mentors/colleagues/etc.
That said, when that is not available, LLMs do an excellent job of rubber ducky-ing complicated topics.
To your latter point - that’s where I think most of the value of LLMs in education is. They can explain code beyond the educational content that’s already available out there. They are pretty decent at finding and explaining code errors. Someone who’s ramping up their coding skills can make a lot of progress with those two features alone.
Many of the forums I enjoyed in the past have become heavily burdened by rules, processes, and expectations. They are frequented by people who spend hours every day reading everything and calling out any misstep.
Some of them are so overburdened that navigating all of the rules and expectations becomes a skill in itself. A single innocent misstep turns simple questions into lectures about how you’ve violated the rules.
One Slack I joined has created a Slackbot to enforce these rules. It became a game in itself for people to add new rules to the bot. Now it triggers on a large dictionary of problematic words such as “blind” (potentially offensive to people with vision impairments. Don’t bother discussing poker.). It gives a stern warning if anyone accidentally says “crazy” (offensive to those with mental health problems) or “you guys” (how dare you be so sexist).
They even created a rule that you have to make sure someone wants advice about a situation before offering it, because a group of people decided it was too presumptuous and potentially sexist (I don’t know how) for people to give advice when the other person may have only wanted to vent. This creates the weirdest situations where someone posts a question in channels named “Help and advice” and then lurkers wait to jump on anyone who offers advice if the question wasn’t explicitly phrased in a way that unequivocally requested advice.
It’s all so very tiresome to navigate. Some people appear to thrive in this environment where there are rules for everything. People who memorize and enforce all of the rules on others get to operate a tiny little power trip while opening an opportunity to lecture internet strangers all day.
It’s honestly refreshing to go from that to asking an LLM that you know isn’t going to turn your question into a lecture on social issues because you used a secretly problematic word or broke rule #73 on the ever growing list of community rules.
> Some people appear to thrive in this environment where there are rules for everything. People who memorize and enforce all of the rules on others get to operate a tiny little power trip while opening an opportunity to lecture internet strangers all day.
Toddlers go through this sometimes around ages 2 or 3. They discover the "rules" for the first time and delight in brandishing them.
The reason those rules are created is because at some point something happened that necessitated that rule. (Not always of course, there are dictatorial mods.)
The fundamental problem is that communities/forums (in the general sense, e.g., market squares) don't scale, period. Because moderation and (transmission and error correction of) social mores don't scale.
> The reason those rules are created is because at some point something happened that necessitated that rule. (Not always of course, there are dictatorial mods.)
Maybe initially, but in the community I’m talking about rules are introduced to prevent situations that might offend someone. For example, the rule warning against using the word “blind” was introduced by someone who thought it was a good thing to do in case a person with vision issues maybe got offended by it at some point in the future.
It’s a small group of people introducing the rules. Introducing a new rule brings a lot of celebration for the person’s thoughtfulness and earns a lot of praise and thanks for making the community safer. It’s turned into a meta-game in itself, much like how I feel when I navigate Stack Overflow.
Stack Overflow's policies exist for clearly documented reasons that are directly motivated by a clearly documented purpose (which happens not to be aligned with what most new users want the site to be, but they aren't the ones who should get to decide what the site is). They are certainly not politically motivated or created for the sake of a power trip. They came out of years of discussion on the meta site, and have changed over time due to the community realizing that previous iterations missed the mark.
I get good answers all the time on SO, or used to. My problem is that I've been downvoted several times for a "stupid question" and also been downvoted for not knowing what I was talking about in an area I'm an expert in.
I had one question that was a bit odd and went against testing dogma, so I had a friend post it. He pulled it 30 minutes later as he was already down 30 votes. It was a thing that's not best practice in most cases but is, in certain situations, the only way to do it. Like when you're testing APIs you don't control.
In some sections people also want textbook-quality or better answers from random strangers on the internet.
The final part is that you at least used to have to build up a lot of karma to be able to post effectively, or at all, in some sections or be seen. Which is a real catch-22.
-30 votes would be extremely unusual on SO. That amount of votes even including upvotes in such a short time would be almost impossible. The only way you get that kind of massive voting is either if the question hits the "Hot Network Questions" or if an external site like HN with a high population of SO users links to it and drives lots of traffic. Questions with a negative score won't hit the hot network questions, so it seems very unlikely to me that it could be voted on that much.
You can get +30 from the HNQ list, but -30 is much harder, because the association bonus only gives you 101 rep, and the threshold for downvoting is 125.
I get useful info from SO all the time, so often that these days it’s rare I have to ask a question. When I do, the issue seems to be it’s likely niche enough that an answer could take days or weeks, which is too bad, but fair enough. It’s also rare I can add an answer these days but I’m glad when I can.
I submit that what SO is doing is working; it's just that SO is not what some people want it to be.
SO is not a pure Q&A site. It is essentially a wiki where the contents are formatted as Q&As, and asking questions is merely a method to contribute toward this wiki. This is why, e.g., duplicates are aggressively culled.
>is not a pure Q&A site. It is essentially a wiki where the contents are formatted as Q&As
The thing is, the meta community of Stack Overflow - and of other similar sites like Codidact - generally understand "Q&A site" to mean the exact thing you describe.
The thing where you "ask a question"[0] and start off a chain of responses which ideally leads to you sorting out your problem is what we call a discussion forum. The Q&A format is about so much more than labelling one post as a "question" and everything else as an "answer" or a "comment" and then organizing the "answers" in a certain way on the page.
[0]: which doesn't necessarily have a question mark or a question word in it, apparently, and which rambles without a clear point of focus, and which might be more of a generic request for help - see https://meta.stackoverflow.com/questions/284236.
It's working just fine. The decline in the rate of new questions is generally seen as a good thing, as it's a sign of reaching a point where the low-hanging fruit has been properly picked and dealt with and new questions are only concerned with things that actually need to be updated because the surrounding world has changed (i.e., new versions of APIs and libraries).
>Pretty hard to argue at this point that the problem is with the users being too shit to use the platform.
On the contrary. Almost everyone who comes to the site seems to want to use it in a way that is fundamentally at odds with the site's intended purpose. The goal is to build a searchable repository of clear, focused, canonicalized questions - such that you can find them with a search engine, recognize that you've found the right question, understand the scope of the question, and directly see the best answers to the highest-quality version of that question. When people see a question submission form and treat it as they would the post submission form on a discussion forum, that actively pollutes said repository. It takes time away from subject-matter experts; it makes it harder for curators to find duplicates and identify the best versions thereof to canonicalize; and it increases the probability that the next person with the same problem, armed with a search engine, will find a dud.
We're talking about stackoverflow right? The website is a veritable gold mine of carefully answered queries. Sure, some people are shit, but how often are you unable to get at least some progress on a question from it? I find it useful in 90-95% of queries, i find the answers useful in 99% of queries that match my question. The thing is amazing! I Google search a problem, and there's 5 threads of people with comparable issues, even if no one has my exact error, the debugging and advice around the related errors is almost always enough to get me over the hump.
Why all the hate? AI answers can suck, definitely. Stackoverflow literally holds the modern industry up. Next time you have a technical problem or error you don't understand, go ahead and avoid the easy answers given on the platform and see how much better the web is without it. I don't understand: what kind of questions do you have?
Nobody is criticising the content that is on the site. The problem is an incredibly hostile user base that will berate you if you don't ask your question in the exact right way, or if you ask a question that implies a violation of some kind of best practice (for which you don't provide context because it's irrelevant to the question).
As for the AI, it can only erode the quality of the content on SO.
Well, I believe the underlying problem of platforms like StackOverflow, ticketing systems (in-house and public) and even CRMs is not really solvable. The problem is, the quality of an answer is actually not easy to determine. All the mechanisms we have are hacks, and better solutions would need more resources... which leads to skewed incentives, and ultimately to a "knowledge" db that's actually not very good. People are incentivized to collect karma points, or whatever it is. But these metrics don't really reflect the quality of their work... Crowdsourcing these mechanisms via upvotes or whatever also doesn't really work, because quantity is not quality... As said, I believe this is a problem we cannot solve.
Half joking, but I am pretty tired of SO pedantry.