Hacker News

I think the key is to think through the incentives for future authors.

As a thought experiment, say that the idea someday becomes mainstream that there is no reason to read any book or research publication because you can just ask an AI to describe and quote at length from the contents of anything you might want to read. In such a future, I think it's reasonable to predict that there would be less incentive to publish, and thus fewer people publishing things.

In that case, I would argue the "hurt" is primarily to society as a whole, and also to people who might have otherwise enjoyed a career in writing.

Having said that, I don't think we're particularly close to living in that future. For one thing, I'd say that the ability to receive compensation from holding a copyright doesn't seem to be the most important incentive for people to create things (written or otherwise), though it is for some people. But mostly, I just don't think this idea of chatting with an AI instead of reading things is very mainstream, perhaps in part because it isn't easy to get an AI to quote at length. What I don't know is whether this is likely to change, or how quickly.




  there is no reason to read any book or research publication because you can just ask an AI to describe and quote at length from the contents of anything you might want to read
I think this is the fundamental misunderstanding at the heart of a lot of the anger over this, beyond the basic "corporations in general are out of control and living authors should earn a fair wage" points that existed before this.

You summarize well how we aren't there yet, but I'd say the answer to your final implied question is "not likely to change at all". Even when my fellow traitors-to-humanity are done with our cognitive AGI systems that employ intuitive algorithms in symphony with deliberative symbolic ones, at the end of the day, information theory holds for them just as much as it does for us. LLMs are not built to memorize knowledge; they're built to intuitively transform text. The only way to get verbatim copies of "anything you might want to read" is fundamentally to store a copy of it. Full stop, end of story, will never not be true.
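The information-theoretic point can be made concrete with a back-of-envelope calculation. All figures below are illustrative assumptions (a 70B-parameter model, a ~10T-token training corpus), not measurements of any particular system; the point is only the order of magnitude:

```python
# Rough sketch: a model's parameters hold far fewer bytes than its
# training corpus, so verbatim recall of everything is impossible
# without storing the data separately. All numbers are assumptions.

params = 70e9            # assumed model size: 70 billion parameters
bytes_per_param = 2      # assumed 16-bit storage per parameter
model_bytes = params * bytes_per_param

corpus_tokens = 10e12    # assumed training corpus: ~10 trillion tokens
bytes_per_token = 4      # assumed rough average bytes of text per token
corpus_bytes = corpus_tokens * bytes_per_token

ratio = corpus_bytes / model_bytes
print(f"corpus holds ~{ratio:.0f}x more bytes than the model's weights")
```

Under these assumed numbers the corpus is a few hundred times larger than the weights, so even a lossless-compression-perfect model could retain only a small fraction of its training text verbatim.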

In that light, such a future seems as easy to avoid today as it was 5 years ago: not trivial, but well within the bounds of our legal and social systems. If someone makes a bot with copies of recent literature, and the authors wrote that lit under a social contract that promised them royalties, then the obvious move is to stop them.

Until then, as you say: only extremists and laymen who don't know better are using LLMs to replace published literature altogether. Everyone else knows that the UX isn't there, and that the chance of confident error is way too high.



