
In the seven years I've been on HN, it has gone through different phases, each with a noticeable change in the quality of the comments.

One big shift came at the beginning of COVID, when everyone went work-from-home. Another came when Elon Musk bought Twitter (now X). There have been one or two other events I've noticed, but those are the ones I can recall now. For a short while, many of the comments were from low-grade Russian and Chinese trolls, but almost all of those are long gone. I don't know if it was a technical change at HN or a strategy change externally.

I don't know whether it's internal, external, or just fed by internet trends, but while HN is resistant, it is certainly not immune to the ills affecting the rest of the internet.




16-year HN vet here.

This place has both changed a _lot_ and also very little, depending on which axis you want to analyze. One thing that has been pretty consistent, however, is the rather minimal amount of trolls/bots. There are some surges from time to time, but they really don't last that long.


HN has mechanisms to detect upvotes and comments that seem to be promoting a product or otherwise coordinated. I'm not sure what they do behind the scenes or how effective it is, but it's something. Also, other readers downvote bot spam. Comments that are obviously bot/LLM-generated quite often seem to be "dead", as are posts that are clearly just content/ad-farm links, product promotions, or way off-topic.
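For illustration, here's a minimal sketch of one common heuristic for the voting side: flag pairs of accounts whose upvote histories overlap far more than chance would suggest. To be clear, HN's actual tooling isn't public, so this is only a guess at the general shape; the names, data, and threshold are all made up.

    from itertools import combinations

    # Toy data: account -> set of story ids that account upvoted
    votes = {
        "alice": {1, 2, 3, 4, 5, 6, 7, 8},
        "bob":   {1, 2, 3, 4, 5, 6, 7, 9},  # suspiciously similar to alice
        "carol": {2, 10, 11, 12, 13, 14},
    }

    def jaccard(a: set, b: set) -> float:
        """Overlap of two vote sets: 0.0 (disjoint) to 1.0 (identical)."""
        return len(a & b) / len(a | b)

    THRESHOLD = 0.6  # invented cutoff; a real system would tune this

    for (u1, s1), (u2, s2) in combinations(votes.items(), 2):
        score = jaccard(s1, s2)
        if score >= THRESHOLD:
            print(f"possible voting ring: {u1} and {u2} (overlap {score:.2f})")

A real system would presumably also weight by timing and by how obscure the shared items are, since everyone overlaps on front-page hits.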


How are you so sure these users are actually bots? Just because someone disagrees with you about Russia or China doesn't mean that's evidence of a bot, no matter how stupid their opinion is.


I don't know about anyone else, but to me a lot of bot traffic is very obvious. I don't have the expertise to describe the feeling that low-quality bot text gives me, but it sticks out like a sore thumb. It's too verbose, not specific enough to the discussion, and so on.

I'm sure there are real pros who sneak automated propaganda in front of my eyes without my noticing, but then again I probably just think they are human trolls.


> but it sticks out like a sore thumb

Could you give some examples of HN comments that "stick out like a sore thumb"?

> It's too verbose, not specific enough to the discussion, and so on.

That, to me, just sounds like the average person who feels deeply about something but isn't used to productive arguments/debates. I come across this frequently on HN, Twitter, and everywhere else, including real life, where I know for a fact the person I'm speaking to is not a robot (I'm 99% sure, at least).


Sorry, I didn't mean to give the impression that I was talking about HN comments specifically. I was talking about spotting bot content out on the open internet.

As for verbosity, I don't mean simply using a lot of text, but rather using a lot of superfluous words and sentences.

People tend not to write in comments the way they would in an article.


Hacker News isn't the place to bring that up, regardless of your opinion, so out-of-context political posts should be viewed with at least some scrutiny.


That's true, but maybe there should be a meta section of the site where these topics can be openly discussed?

While I appreciate dang's perspective[1], and agree that most of these are baseless accusations, I also think it's inevitable that a site with seemingly zero bot-mitigation techniques, where accounts and comments can be easily automated, has some, or I would wager _a lot_, of bot activity.

I would definitely appreciate some transparency here. E.g. are there any automated or manual bot detection and prevention techniques in place? If so, can these accounts and their comments be flagged as such?

[1]: https://news.ycombinator.com/item?id=41710142


We're not going to have a meta section for reasons I've explained in the past:

https://news.ycombinator.com/item?id=22649383 (March 2020)

https://news.ycombinator.com/item?id=24902628 (Oct 2020)

I've responded to your other point here: https://news.ycombinator.com/item?id=41713361


There are a few horsemen of the online community apocalypse:

1) Politics 2) Religion 3) Meta

Fundamentally, productive discussion is problem solving. A high-signal-to-noise community is almost always boring; see r/Badeconomics, for example.

Politics and religion are low-barrier-to-entry topics that always result in flame wars, which then proceed to drag all other behavior down.

Meta is similar: to have a high-signal community with a large user base, you regularly filter out thousands of accounts and comments. Meta spaces inevitably become the gathering point for those accounts and users, and their sheer volume ends up making public refutations and evidence sharing impossible.

As a result, meta becomes impossible to engage with at the level it was envisioned.

In my experience, all meta areas become staging grounds to target or harass moderation. HN is unique in the level of communication from Dang.


This I agree with: off-topic is off-topic and should be removed/flagged. But I'm guessing we're not talking about simple rule/guideline-breaking here.


> How are you so sure these users are actually bots?

I stated nothing about bots. Re-read what I wrote.


Bots, trolls, foreign agents: a dear child has many names. The point is the same: name-calling without evidence does nothing to solve the problem.


Ignoring that there are problems doesn't solve anything either.


>How are you so sure these users are actually bots? Just because someone disagrees with you about Russia or China doesn't mean that's evidence of a bot, no matter how stupid their opinion is.

If the account is new and promoting the Ruzzian narrative by denying reality, I can be 99% sure it is a paid person copy-pasting arguments from a KGB manual; the other 1% is a homo sovieticus with some free time.


> If the account is new and promoting the Ruzzian narrative by denying reality, I can be 99% sure it is a paid person copy-pasting arguments from a KGB manual; the other 1% is a homo sovieticus with some free time.

I'm not as certain as you are about that. The last time the US had a presidential election, it seemed like almost half the country was either absolutely bananas and out of their minds, or made up of robots.

But reality turns out to be less exciting. People are just dumb and spew whatever propaganda they happen to come across "at the right time". The same is true for Russians as it is for Americans.


I think it's mostly a timing thing. It's one thing for someone to say something dumb, but it's another for someone to say it immediately on a new account. That, to me, screams bot behavior. Also if they have a laser focus, like if I open a Twitter account and every single tweet is some closely related propaganda point.
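To make that concrete, here's a rough sketch of the kind of scoring I mean. The weights, cutoff, and data are invented for illustration; this doesn't describe any real platform's detector.

    from dataclasses import dataclass

    @dataclass
    class Account:
        age_days: int  # account age when it started posting
        posts: list    # one topic label per post (toy data)

    def suspicion(acct: Account) -> float:
        """Crude 0..1 score: brand-new account plus single-topic focus."""
        newness = 1.0 if acct.age_days < 7 else 0.0
        if acct.posts:
            top = max(acct.posts.count(t) for t in set(acct.posts))
            focus = top / len(acct.posts)
        else:
            focus = 0.0
        return 0.5 * newness + 0.5 * focus

    fresh_zealot = Account(age_days=1, posts=["propaganda"] * 9 + ["cooking"])
    old_regular = Account(age_days=2000, posts=["rust", "cooking", "space", "meta"])

    print(suspicion(fresh_zealot))  # 0.95: new account, laser focus
    print(suspicion(old_regular))   # 0.125: aged account, varied interests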


[flagged]


Nationalistic flamewar will get you banned here, regardless* of which country you have a problem with. No more of this, please.

https://news.ycombinator.com/newsguidelines.html

* https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...


If I am not allowed to share my observations, will I be allowed to just provide some links instead and let the community inform themselves? Or are links to real news events also not allowed?


It depends on the link, really. Most "real news events" are off-topic as the site guidelines explain: https://news.ycombinator.com/newsguidelines.html. For a more in-depth explanation of how we look at political topics on HN, see https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so....


OK, I will use Wikipedia links. My problem is with Ruzzians (the ZZ refers to the Russians who support the invasion and war crimes) making new accounts and commenting here. We should not let these people spread misinformation here, or bring bullshit like "Russia is as bad/good as the USA". At least they should use a regular, years-old account so they risk a ban, like I risk my account when debating them.

Checking now, I see the guy was flagged (https://news.ycombinator.com/user?id=ajsdawzu), but he had time to spread his stuff.


Anyone who’s spent any amount of time in this space can spot them pretty quickly/easily. They tend to stick to certain scripts and themes and almost never deviate.


In my experience, that's not true. Rather, people are much too quick to jump to the conclusion that so-and-so is a bot (or a troll, a shill, a foreign agent, etc.), when the other's views are outside the range of what feels normal to them.

I've written a lot about this dynamic because it's so fundamental. Here are some of the longer posts (mini essays, really):

https://news.ycombinator.com/item?id=39158911 (Jan 2024)

https://news.ycombinator.com/item?id=35932851 (May 2023)

https://news.ycombinator.com/item?id=27398725 (June 2021)

https://news.ycombinator.com/item?id=23308098 (May 2020)

Since HN has many users with different backgrounds from all over the world, it has a lot of user pairs (A, B) where A's views don't seem normal to B and vice versa. This is why we have the following rule, which has held up well over the years:

"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email [email protected] and we'll look at the data." - https://news.ycombinator.com/newsguidelines.html


In my research and experience, it is. I'm making no comment about bots/shills on this site, either; I'm responding to the plausibility of the original comment.


> I'm making no comment about bots/shills on this site, either; I'm responding to the plausibility of the original comment.

The original comment:

> I wonder the same about HN. Has anyone done this kind of analysis? Me good LLM

Slightly disingenuous to argue from the standpoint of "I'm talking about the whole internet" when this thread is specifically about HN. But whatever floats your boat.


Hey, I would appreciate if you could address some of my questions here[1].

I do think it's unrealistic to believe that there is absolutely zero bot activity, so at least some of those accusations might be true.

[1]: https://news.ycombinator.com/item?id=41711060


The claim is not "zero bot activity" - how would one even begin to support that?

Rather, the claim is that accusations about other users being bots/shills/etc. overwhelmingly turn out, when investigated, to have zero evidence in favor of them. And I do mean overwhelmingly. That is perhaps the single most consistent phenomenon we've observed on HN, and it has strong implications.

If you want further explanation of how we approach these issues, the links in my GP comment (https://news.ycombinator.com/item?id=41710142) go into it in depth. If you read those and still have a question that isn't answered there, I can take a crack at it. Since you ask (in your other comment) whether HN has any protections against this kind of thing at all, I think you should look at those past explanations—for example the first paragraph of https://news.ycombinator.com/item?id=27398725.


Alright, thanks. I read your explanations and they do answer some of my questions.

I'm still surprised that the percentage of this activity here is so low, below 0.1%, as you say. Given that the modern internet is flooded by bots—over 60% in the case of ProductHunt as estimated by the article, and a third of global internet traffic[1]—how do you a) know that you're detecting all of them accurately (given that it seems like a manual process that takes a lot of effort), and b) explain that it's so low here compared to most other places?

[1]: https://investors.fastly.com/news/news-details/2024/New-Fast...


From what I understand, users accuse others of being shills and bots, and are largely wrong.

Dang and team use other tools to remove the actual bots that they can find evidence for.

So yes, there are bots, but human reports tend to be more about disagreements than actual bot identification.


intended's reply is correct.

Most of the bot activity we know about on HN has to do with voting rings and things like that, people trying to promote their commercial content. To the extent that they post things, it's mostly low-quality stuff that either gets killed by software, flagged by users, or eventually reported to us.

When it comes to political, ideological, nationalistic arguments and the like, that's where we see little (if any) evidence. Those are the areas where users are most likely to accuse each other of not being human, or posting in bad faith, etc., so that's what I've written about in the posts that I linked to.

There's still always the possibility that some bad actors are running campaigns too sophisticated for us to detect and crack down on. I call this the Sufficiently Smart Manipulator problem and you can find past takes on it here: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

I can't say whether or not this exists (that follows by definition—"sufficiently" means smart enough to evade detection). All I can tell you is that in specific cases people ask us to look into, there are usually obvious reasons not to believe this interpretation. For example would a sufficiently smart manipulator be smart enough to have been posting about Julia macros back in 2017, or the equivalent? You can always make a case for "yes" but those cases end up having to stretch pretty thin.


Dang, I appreciate your moderation approach here and completely agree with most of what you said. In my experience over the ~18 months I've been here, this has been a welcome bastion against typical bot/campaign activity; nowhere on the web has seemed safe for the last ~dozen years. Most of what I've written here applies to my research of foreign bot activity on social networks, particularly in election years, where you can far more easily piece together associations between accounts, narratives, and writing styles, connect a lot more dots than on a site like this, and conclude very definitively that yes, this is a bot.

My original comment was just meant to chime in that, in the wild over the last ten years, I've encountered an extraordinary amount of this kind of activity (which I confirmed; I really do research this stuff on the side and have written quite a lot about it), which would lend credibility to anyone who felt they experienced bot activity on this site. I haven't done a full test on this site yet, because I don't think it's allowed, but at a glance I suspect particular topics and keywords attract swarms of voting/downvoting activity, which you alluded to in your post. I think the threshold of 500 upvotes to downvote is a bit low, but clearly what you are doing is working. I'm only writing all of this out to make it very clear I am not making any criticisms or commentary about this site and how it handles bots/smurfs/etc.

Most of my research centers around the 2016 and 2020 political cycles. Since the invention, release, and mass distribution of LLMs, I personally think this stuff has proliferated far beyond what anyone can imagine right now and renders most of my old methods worthless, but for now that's just a hypothesis.

Again, I appreciate the moderation of this site; it's one of the few places left where I can converse with reasonably intelligent and curious people, compared to the rest of the web. Whatever you are doing, please keep doing it.


I think HN may in general be an outlier here. Typically, outright political content is not allowed, along with religious content, which is quite often intertwined with politics. Because of the higher quality of the first-pass filter here (users flagging this stuff), you don't see the campaigns here that you do on typical social media.

For example, on Reddit you'll see accounts that are primed: they reuse older upvoted, mostly on-topic replies from existing users on new posts about the same topic, to build a natural-looking account. Then at some point they'll switch to their intended purpose.
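Here's a sketch of how one might flag that priming pattern, assuming access to a corpus of older comments to compare against. The word-shingle similarity below is just a stand-in for whatever platforms actually run, and all the sample data is invented.

    def shingles(text: str, n: int = 3) -> set:
        """Set of n-word shingles, lowercased."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def similarity(a: str, b: str) -> float:
        """Jaccard overlap of two comments' shingle sets."""
        sa, sb = shingles(a), shingles(b)
        if not sa or not sb:
            return 0.0
        return len(sa & sb) / len(sa | sb)

    older_comments = [
        "This kernel patch finally fixes the scheduler latency regression",
    ]
    new_account_posts = [
        "this kernel patch finally fixes the scheduler latency regression",  # recycled
        "My own take on a completely unrelated topic, written fresh",
    ]

    for post in new_account_posts:
        if any(similarity(post, old) > 0.8 for old in older_comments):
            print(f"possible recycled comment: {post!r}")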


Thank you. I appreciate your positive outlook on these things. It helps counteract my negative one. :)

For example, when you say "The answer to the Sufficiently Smart Manipulator is the Sufficiently Healthy Community", that sounds reasonable, but I see a few issues with it.

1. These individuals are undetectable by definition. They can infiltrate communities and direct conversations and opinions without raising any alarms. Sometimes these are long-term operations that take years, and involve building trust and relationships. For all intents and purposes, they may seem like just another member of the community, which they partly are. But they have an agenda that masquerades as strong opinions, and are protected by tolerance and inclusivity, i.e. the paradox of tolerance.

2. Because they're difficult to detect, they can easily overrun the community. What happens when they're a substantial percentage of it? The line between fact and fiction becomes blurry, and it's not possible to counteract bad arguments with better ones, simply because they become a matter of opinion. Ultimately, those who shout louder, in larger numbers, and from a better position get heard the most.

These are not conspiracy theories. Psyops and propaganda are very real and happen all around us in ways we often can't detect. We can only see the effects, like increased polarization and confusion, but are not able to trace them back to the source.

Moreover, with the recent advent of AI, how long until these operations are fully autonomous? What if they already are? Bots can be deployed by the thousands, and their capabilities improve every day.

So I'm not sure that a Sufficiently Healthy Community alone has a chance of counteracting this. I don't have the answer either, but I can't help but see this trend in most online communities. Can we do a better job at detection? What does that even look like?


If you come up with good ideas on this problem you should share them, but the core of this thread is that having commenters in a thread calling out other commenters as psyops, propaganda, bots, and shills doesn't work, and it gravely harms the community, far more than any psyop could.


Does it, though? The reason I ask such a loaded question is that I believe this is actually part of the 'healthy community' framework. It can be thought of as the community's immune system responding to what it perceives as outside threats, and it is, in my opinion, one of the most well-known phenomena in internet communities, one that far predates HN.

The modern analogy for this problem is the 'Nazi Bar' problem, and it's related to the whole Eternal September phenomenon. I think HN does a good enough job of kicking out the really low-quality posters, but the culture of a forum will always gradually shift based on the fringes of what is allowed or not.


How is that different from humans? Humans have themes/areas they care more about and are more likely to discuss with others. It's not hard to imagine there are Russians/Chinese who care deeply about their country, just like there are Americans who care deeply about the US.


A human aggressively taking a particular line and a bot doing so may be equivalent; do we need to differentiate there?


If the comment is off-topic/breaking the guidelines/rules, it should be removed, full stop.

The difference is that the bot's comment should be removed regardless of whether that particular comment breaks the rules, as HN is specifically a forum for humans. The human's comment, granted it doesn't break the rules, shouldn't be, no matter how shitty their opinion/view is.


If posts make HN a less interesting place to converse, I don't see why humans should get a pass, and I don't see anything in the guidelines to support that view either.


C'mon. When you have an account that is less than a year old and has 542 posts, 541 of which repeat very specific Kremlin narratives verbatim, it isn't difficult to make a guess. Is your contention that they are actually difficult to spot, or that they don't exist at all? Because both of those views are hilariously false.


I feel like you're speaking about specific accounts here, since it's so obvious and exact. Care to share the HN accounts you're thinking of?

My contention is that people jump to "it's just a bot" when someone parrots obvious government propaganda they disagree with, when the average person is just as likely to parrot obvious propaganda without involving computers at all.

People are just generally stupid by themselves, and reducing it to "Robots be robotting" doesn't feel very helpful when there is an actual problem to address.


No, I'm not. And I don't/won't post any specific accounts. I'm speaking more generally, and no one is jumping to anything here; you're projecting an argument that absolutely no one is making. The original claim was that Russian/Chinese bots were on this platform and left. I've only been here about 1.5 years, so I don't know the validity of that claim, but I have a fair amount of experience and research from the last ten years or so on the topic of foreign misinformation campaigns on the web, so it sounds like a very valid claim, given how widespread these campaigns were across the entire web.

It isn't an entirely new or unknown concept, and that isn't what is happening here. You're making a lot of weird assumptions, especially given that the US government wrote several hundred pages about this exact topic years ago.


> and no one is jumping to anything here, you're projecting an argument that absolutely no one is making

You literally claimed "when you have accounts with these stats, and they say these specific things, it isn't difficult to guess...", which ends with "that they're bots", I'm guessing. Read around in this very submission for more examples of people making "the jump".

I'm not saying there aren't any "foreign misinformation campaigns on the web", so I'm not sure who is projecting here.


I “literally” did not say that. You seem to be doing the very thing I said, projecting arguments no one is making. Certainly I’m not.


Ten years ago those accounts existed, too. Back then we called them "people."


Not at all; ten years ago Russian misinformation campaigns on Twitter and Meta platforms were alive and well. There was an entire several-hundred-page report about it, even.





