I get that a lot of people are opposed to AI, but blocking random IP ranges seems like a really inappropriate way to do it; the friendly fire is going to be massive. The robots.txt approach is fine, but it would be nice if it were standardized so you don't have to keep updating it for every new company (a generic no-LLM-crawling directive, for example).
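For illustration, here's roughly what the per-company approach looks like today: a robots.txt sketch opting out of a few AI crawlers by their published user-agent strings (GPTBot for OpenAI, CCBot for Common Crawl, Google-Extended for Google's AI-training opt-out; the exact names vary by vendor and shift as new companies appear, which is exactly the maintenance problem):

  # Opt out of some known AI/LLM crawlers.
  # Each company needs its own entry; there is no generic directive.
  User-agent: GPTBot
  Disallow: /

  User-agent: CCBot
  Disallow: /

  User-agent: Google-Extended
  Disallow: /

  # Everything else may crawl normally.
  User-agent: *
  Disallow:

A standardized directive would collapse all of that into a single rule that new crawlers would be obliged to honor.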


