No, we don't enforce any robots.txt restrictions ourselves. We also don't do any scraping ourselves. We provide browser infrastructure that operates like any normal browser would; what users choose to do with it is up to them. We're building tools that give AI agents the same web access capabilities that humans have, and we don't think it's our place to impose any additional limitations.
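
(For anyone wondering what the declined enforcement would even look like client-side: here is a minimal Python sketch using the stdlib urllib.robotparser. The agent name "ExampleAgentBot" is hypothetical; this illustrates the general mechanism, not this vendor's code.)

    import urllib.request
    import urllib.robotparser
    from urllib.parse import urlsplit

    # Hypothetical agent name; the vendor's actual UA string is not published.
    USER_AGENT = "ExampleAgentBot"

    def fetch_if_allowed(url: str) -> bytes | None:
        """Fetch url only if the site's robots.txt permits our user agent."""
        parts = urlsplit(url)
        robots_url = f"{parts.scheme}://{parts.netloc}/robots.txt"

        rp = urllib.robotparser.RobotFileParser()
        rp.set_url(robots_url)
        rp.read()  # a missing (404) robots.txt is treated as allow-all

        if not rp.can_fetch(USER_AGENT, url):
            return None  # disallowed by robots.txt; skip the fetch

        req = urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
        with urllib.request.urlopen(req) as resp:
            return resp.read()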



It is 100% your responsibility what your servers do to other people's servers in this context, and wanton negligence is not an excuse that will stop your servers from being evicted by hosting companies.


"You make the tools; what people do with them isn't up to you." I can tolerate some form of that argument on some level, but it is at the very least short-sighted on your part not to have been better equipped to respond to concerns people have about potential misuse.

If what has been said elsewhere in this thread is true, that you provide documentation on how to circumvent attempts to detect and block your service while resisting requests for helpful information such as the IP ranges you use and how user agents are set, then you have strayed far from being neutral and hands-off.

"it's not our place" is not actual neutrality, it's performative or complicit neutrality. Actual neutrality would be perhaps not providing ways to counter your service, but also not documenting how to circumvent people from trying. And if this is what your POV is, fine! You are far from alone--given the state of the automated browsing/scraping ecosystem right now, plenty of people feel this way. Be honest about it! Don't deflect questions. Don't give misleading answers/information. That's what carries this into sketchy territory


Do you publish an IP range?
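
(If ranges were published, blocking would only take a few lines server-side. A sketch with placeholder CIDR blocks from the documentation ranges, since no real ranges have been published:)

    import ipaddress

    # Hypothetical CIDR blocks; the vendor has not published real ranges.
    PUBLISHED_RANGES = [
        ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3, placeholder
        ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2, placeholder
    ]

    def is_from_service(remote_addr: str) -> bool:
        """Return True if the request IP falls inside a published range."""
        ip = ipaddress.ip_address(remote_addr)
        return any(ip in net for net in PUBLISHED_RANGES)

    # e.g. in a request handler:
    #   if is_from_service(request.remote_addr): return 403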