+1 on this. Improving Gemini apps and live mode will go such a long way for them. Google actually has the best model line-up now but the apps and APIs hold them back so much.
Hey - the puzzle looks interesting. I’d say we focus a lot on simplicity and flexible reporting: we have a built-in real-time spreadsheet, so you can build the kind of report that matters to you as a founder rather than the one we force you to look at.
No, we don't enforce any robots.txt restrictions ourselves. We also don't do any scraping ourselves. We provide browser infrastructure that operates like any normal browser would - what users choose to do with it is up to them. We're building tools that give AI agents the same web access capabilities that humans have, and we don't think it's our place to impose any additional limitations.
It is 100% your responsibility what your servers do to other people's servers in this context, and wanton negligence is not an excuse that will stop your servers from being evicted by hosting companies.
"You make the tools; what people do with them isn't up to you." I can tolerate some form of that opinion/argument on some level, but it is at the very least short-sighted on your part not to have been better equipped to respond to the concerns people have about potential misuse.
If what has been said elsewhere in this thread is true - that you provide documentation on how to circumvent attempts to detect and block your service, and that you resist providing helpful information such as the IP ranges you use and how user agents are set - then you have strayed far from being neutral and hands-off.
"it's not our place" is not actual neutrality, it's performative or complicit neutrality. Actual neutrality would be perhaps not providing ways to counter your service, but also not documenting how to circumvent people from trying. And if this is what your POV is, fine! You are far from alone--given the state of the automated browsing/scraping ecosystem right now, plenty of people feel this way. Be honest about it! Don't deflect questions. Don't give misleading answers/information. That's what carries this into sketchy territory
And yeah MCP is super promising. We announced this on X and LinkedIn yesterday and the response has been really good. A lot of people with a bunch of use cases.
One surprising thing is there’s also a bunch of semi/non-technical people using our MCP server, and the installation experience for them right now just absolutely sucks.
I think once auth and 1-click install are solved, MCP could become the standard way to integrate tools with LLMs
The MCP protocol is really not that good. Its stateful nature makes it only suitable as a local desktop RPC pipe - certainly not something that will work well on mobile, nor anything anyone would want to run and maintain in a server-to-server context.
It is fine if that is the scope. It is also understandable why Anthropic chose a stateful protocol where stateless HTTP would be more than enough: they are catering to the default transport layer, which is stdio-based and requires state to be established.
There are also other aspects of it that are unnecessarily complex and resource intensive for no good reason.
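To make the stateful point concrete, here's a rough sketch of what a client has to do with the default stdio transport (JSON-RPC messages over a child process's pipes). The server command and protocol version string are placeholders, not a real package:

    # Minimal sketch, not a real client: with the stdio transport the client owns a
    # long-lived child process and must run an initialize handshake before any call,
    # so the session state lives in that process pair.
    import json
    import subprocess

    proc = subprocess.Popen(
        ["some-mcp-server"],  # placeholder command for an MCP server binary
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )

    def rpc(id_, method, params=None):
        # MCP messages are JSON-RPC 2.0, newline-delimited over the pipe
        proc.stdin.write(json.dumps(
            {"jsonrpc": "2.0", "id": id_, "method": method, "params": params or {}}
        ) + "\n")
        proc.stdin.flush()
        return json.loads(proc.stdout.readline())

    # Handshake first, then calls - every later request depends on this session
    rpc(1, "initialize", {"protocolVersion": "2024-11-05", "capabilities": {},
                          "clientInfo": {"name": "sketch", "version": "0.0"}})
    tools = rpc(2, "tools/list")

A stateless HTTP version of the same thing would just be one self-contained POST per call, which is what makes it trivial to load-balance and run server-to-server.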
Well, OpenAPI. You don't need weird debugging tools nobody knows how to use, a stateful protocol that is hard to troubleshoot, etc. There is plenty of support already built into standard HTTP services and Swagger - an abundance of tools and documentation too - and what we call function calling is basically JSON Schema, which is at the core of Swagger/OpenAPI definitions.
MCP is trying to reinvent OpenAPI but in the wrong way.
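For illustration (hand-rolled, not taken from any particular SDK, and assuming the common name/description/parameters function-calling shape): the parameters block handed to an LLM is plain JSON Schema, the same schema language an OpenAPI operation already uses, so existing Swagger tooling can describe the same call.

    # Hypothetical tool definition in the usual function-calling shape; the
    # "parameters" field is ordinary JSON Schema.
    get_weather_tool = {
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
                "units": {"type": "string", "enum": ["metric", "imperial"]},
            },
            "required": ["city"],
        },
    }

    # Roughly the same information expressed as an OpenAPI fragment, which
    # existing codegen, validation, and docs tooling already understands.
    openapi_fragment = {
        "paths": {
            "/weather": {
                "get": {
                    "operationId": "get_weather",
                    "summary": "Look up current weather for a city",
                    "parameters": [
                        {"name": "city", "in": "query", "required": True,
                         "schema": {"type": "string"}},
                        {"name": "units", "in": "query",
                         "schema": {"type": "string", "enum": ["metric", "imperial"]}},
                    ],
                }
            }
        }
    }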
1) Yep, you just pay for browser time and proxy usage
2) We use a handful of proxy providers under the hood ourselves. There are a lot of shady ones out there, but we only work with providers whose sourcing we’ve vetted. Different providers source proxies in different ways - directly from ISPs, paying end sources for proxies, etc.