Hacker News

I moderate a forum, and a user recently started answering questions with links to his blog, where he posted AI-generated pages of answers on those topics.

The posts don't offer anything novel or personal to the conversation; they only repeat the most common talking points on the topic. Ugh.




This is a very hard problem: "who is the original author of a string of facts?" "Is that string of facts sound, or was it altered?" It feels like the end of truth.

I know that truth is relative but it's like there's no point in using the word truth anymore. Everything is just becoming a collection of words.


Exactly how I feel. I'm especially worried about trust in historical facts. Renowned and trustworthy institutions, even if they have their own biases, may not have an easy time competing against tons of AI-generated content.


I'm optimistic that this will force society to take critical thinking more seriously, treating it like a skill as fundamental as language, rather than lazily relying on shaky concepts like "renown" and "reputation." Our current society often rewards appeals to authority over well-reasoned arguments supported by strong evidence (e.g., the CDC's initial recommendation to avoid masking, despite mounting published evidence, and, later, continued insistence on the prioritization of sterilizing surfaces over mask distribution, again despite mounting published evidence). I'm hopeful that, given a higher noise floor, we'll all do our best to develop better filtering algorithms.


Yes. The user claims he wrote one of the posts but admits that AI wrote some others. The tone and style look identical.


What if an AI-generated picture turns out to be of a real person?


Wow - unintentional deep-fake. Fantastic point.

Another challenge to our notions of identity, brought on by the evolution of technology.


I really don't want the internet populated with meaningless garbage to give traffic to companies I don't care about. Hopefully Google will create a classifier and downrank anyone who just shoots out AI-generated bullshit. The process for identifying AI-generated content does look fucking bonkers though.
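For what it's worth, the crudest possible sketch of a downranking signal might look something like the toy below. Everything here is invented for illustration (the phrase list, the score weights, the threshold); real detection of generated text, if it's feasible at all, would need far stronger models than lexical heuristics:

```python
# Toy sketch of a "generic boilerplate" signal: score text by vocabulary
# repetition plus the density of stock filler phrases, then sink pages
# scoring above a threshold. Purely illustrative, not a real detector.
import re

# Hypothetical list of filler phrases common in low-effort generated answers.
STOCK_PHRASES = [
    "it is important to note",
    "in today's fast-paced world",
    "there are many factors to consider",
]

def blandness_score(text: str) -> float:
    """Return a score in [0, 1]; higher means more generic-looking."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    # A low type/token ratio suggests repetitive vocabulary.
    repetition = 1.0 - len(set(words)) / len(words)
    # Fraction of known stock phrases that appear in the text.
    lowered = text.lower()
    stock = sum(p in lowered for p in STOCK_PHRASES) / len(STOCK_PHRASES)
    return 0.5 * repetition + 0.5 * stock

def downrank(results: list[str], threshold: float = 0.4) -> list[str]:
    """Reorder results so high-blandness pages sink to the bottom."""
    # Stable sort on a boolean key: flagged pages move after unflagged ones.
    return sorted(results, key=lambda r: blandness_score(r) >= threshold)
```

Even this toy shows why the problem is hard: the signal is trivially gamed by paraphrasing, which is exactly what generators are good at.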


Google can't even successfully detect the shitty "we copied all of StackOverflow's Q&As and put ads around it" clones. I tend to doubt their ability to do something 100x as difficult.


Can't or just doesn't need to? Their business isn't search, it's ads.


If search quality degrades enough that people stop using it, it will impact ads.


How bad does search quality have to get before people go back to books? Until that point, it's not bad for ads. (In fact, if the ads are the highest-quality search results…)


Or people could go to a competing search engine...


> Hopefully google will create a classifier and downrank anyone who just shoots out AI generated bullshit.

Only if it affects their bottom line. And I doubt that's going to happen.


It's kinda funny (in a very serious way), but if newspapers can stick around long enough, they are the solution to this.

Curated content, by trusted publishers guaranteed not to use ML generation.

Curated libraries for facts, curated newspapers for daily events.



