Hacker News

I actually think that many of the concerns the author doesn’t have are the more concerning ones, and some of the concerns they do have are likely to be less of a big deal.

The privacy concern I find particularly overstated. It is identical to concerns that existed long before AI entered the fray: anytime you send data to a system someone else controls, you run those exact same risks. I also think there’s an overstated fear that an app focused on private data (something like Signal) would one day add AI functionality out of the blue and suddenly ship your data off to a hive mind.

Any app that is willing to cross that line already has done so (e.g., Facebook).

It also seems technologically straightforward to perform many AI tasks without compromising privacy; e.g., chips with local-first AI compute capability are reaching consumer-level devices. Even the much-maligned Windows Recall feature specifically emphasizes that it never sends information to Microsoft servers or processes data in the cloud.

The risk isn’t that Signal itself will add AI features; it’s that AI will be built into the OS that’s running Signal (Apple Intelligence, Windows Recall, etc.). These systems watch everything you do on-device by default and can learn a ton from your E2EE messages regardless of the intentions of the Signal devs.

I would say that, in practice, that doesn’t seem to be how some of these examples are actually architected.

Apple Intelligence, for example, only seems to read information from Apple’s own default apps as of today, and Apple’s developer documentation suggests that these capabilities require developers to opt in by implementing features via its APIs.

It isn’t an accurate read of the situation to say that Apple Intelligence is “watching everything you do” like a service observing your screen output at all times.

Even the service that does do exactly that (Windows Recall) ships with an extensive set of controls, enabled by default, for filtering out specific apps, private browsing sessions, and other sensitive information: https://support.microsoft.com/en-us/windows/privacy-and-cont...

So I think the reality is that a lot of the big players making this technology recognize the privacy and security concerns and are designing their AI applications to address those concerns.

I personally feel that AI products frequently launch with more transparency about data usage than a lot of Web 2.0-era applications like Facebook did.


On privacy, the shift in incentives has changed the game. “More is better” is the new mantra, and the perceived value of gathering and labeling arbitrary organic data has gone up significantly. This offsets or outright obliterates the liability aspect of data-hoarding, which will have privacy implications for individuals referenced in some of those datasets.


