
Their demos looked like how I imagined AI before ChatGPT ever existed. It was a personalized, context aware, deeply integrated way of interacting with your whole system.

I really enjoyed the explanation for how they planned on tackling server-enabled AI tasks while making the best possible effort to keep your requests private. Auditable server software that runs on Apple hardware is probably as good as you can get for tasks like that. Even better would be making it OSS.

There was one demo where you could talk to Siri about your mom and it would understand the context because of stuff that she (your mom) had written in one of her emails to you... that's the kind of stuff I think we all imagined an AI world would look like. I'm really impressed with the vision they described, and I think they've honestly jumped to the front of the pack in a way that hasn't been well appreciated up to this point.

It's not just the raw AI capabilities from the models themselves, which I think many of us already get the feeling are going to be commoditized at some point in the future, but rather the hardware and system-wide integrations that make use of those models that matters starting today. Obviously how the experience will be when it's available to the public is a different story, but the vision alone was impressive to me. Basically, Apple again understands the UX.

I wish Apple the best of luck and I'm excited to see how their competitors plan on responding. The announcement today I think was actually subtle compared to what the implications are going to be. It's exciting to think that it may make computing easier for older people.




Until this gets into reviewers' hands, I think it's fair to say that we really have no idea how good any of this is. When it comes to AI being able to do "all kinds of things," it's easy to demo some really cool stuff, but if it falls on its face all the time in the real world, you end up with the current Siri.

Remember this ad? https://www.youtube.com/watch?v=sw1iwC7Zh24 12 years ago, they promised a bunch of things that I still wouldn't trust Siri to pull off.


These are all very basic commands that Siri pulls off flawlessly whenever I use it.


Most of the stuff shown is much faster to do yourself if you have your hands free, and if you don't, you have to pray to the gods that Siri doesn't fuck up for whatever reason.

Even something as simple as setting a timer, Siri will bork at least 1 time in 10. I know that for sure, since I worked at a friend's restaurant two summers ago and used Siri's timer heavily to time french fries blanching (many batches, for at least 2 hours every day or every 2 days); this damn thing would regularly set the wrong time or not understand at all, even though it was always the same damn duration and the conditions were always similar.

On the other hand, the Google Home at my cousin's place follows my commands without mistakes, even though it doesn't even have the luxury of knowing my voice.

People who think Siri is good are either delusional or have special godlike skills. But considering how many hilarious "demos" I have gotten from Apple fan friends, I'll say it's the former.

I myself have used iPhone/Apple Watch/Macs since forever, so it's not like I'm hating for free. It just goddamn sucks, like too much Apple stuff recently...


> People who think Siri is good are either delusional or have special godlike skills. But considering how many hilarious "demos" I have gotten from Apple fan friends, I'll say it's the former.

Or maybe they just have good experiences? Why do they have to be delusional?


Because they're usually in a tech bubble where Apple is the best there is at everything, and they've never really tried any alternatives.

So they think it's good, and it's a delusion because it wouldn't objectively be considered good if it were compared side by side with the competition.

I know that for sure because I've spent a lot of time with people like that, and I used to be a bit like that myself. Most people find it easier to see the world in black and white, just like religions do with good/bad, and people who really like Apple stuff are very often like that.


I think too many people assumed that because ChatGPT is a conversational interface, that's how all AI should be designed, which is like assuming computers would always be command lines instead of GUIs. Apple has done a good job of providing purpose-built GUIs for AI stuff here, and I think it will be interesting to watch that stuff get deeper.


> There was one demo where you could talk to Siri about your mom and it would understand the context because of stuff that she (your mom) had written in one of her emails to you... that's the kind of stuff that I think we all imagined an AI world would look like.

I can't help but find all of this super creepy.


We're really just describing an on-device search tool with a much better interface. It's only creepy if you treat it like a person, which Apple is pretty careful not to do too much.


Yep, it's an assistant; they didn't add some weird app where you can talk to a virtual granny lol


Yep.

I remember vividly the comment on Windows Recall that said if the same was done by Apple it would be applauded. Here we are.


At the risk of sounding like an Apple apologist, Apple has a pretty good (though not perfect) track record for privacy and security.

Microsoft on the other hand… well, I understand they just pulled the Recall feature after it was discovered the data wasn't even encrypted at rest?!


If anything, Recall is MORE privacy-respectful than this, since everything is stored and processed on your device and you can access (and easily alter) the database, exclude specific applications and websites (Edge only, for now), etc.

I'm not saying it's not an awful feature, I will disable it as soon as it is installed.

The fact that it's not encrypted at rest really is the least of my concerns (though it does show the lack of care and planning). For this to be a problem, an attacker would already need enough access to your computer to either get your encryption key or do devastating damage anyway.

> At the risk of sounding like an Apple apologist, Apple has a pretty good (though not perfect) track record for privacy and security.

"Not perfect" is enough to be concerned. I would also not be surprised if their good reputation is due more to their being better at hiding their data collection and related scandals than to any care for the user.


I thought that the problem with Recall is that it takes screenshots (potentially of sensitive things like passwords, or private browsing sessions) and stores new data that you never intended to store in the first place.

This Apple AI is not storing anything new; it's just processing the data that you already stored, as long as they pay close attention to invalidating the index when things get deleted.

The cloud processing is a little concerning but presumably you will be able to turn it off, and it doesn’t seem much different to using iCloud anyway.


The screenshots are not storing anything new; they're just a visual trail of already-existing activity. It literally just makes the history easier to browse, that's it. Someone motivated could reconstruct the same activity from the logs/histories of the various programs.

The distinction is made by people who seem hell bent on trashing Microsoft for everything and glorifying everything Apple does.


I strongly disagree. My expectation of what’s on my screen is that it’s ephemeral unless I take a screen shot.

Here’s an example. I always use a random password when creating accounts for (e.g.) databases, but not every UI supports this, so I have a little shell script that generates one. I then copy and paste it from the terminal. Once I close the terminal window and copy something else, that password exists in only one place.
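As a sketch, a generator like that could be a one-liner over /dev/urandom (hypothetical; the commenter's actual script isn't shown, and the 24-character length is an assumption):

```shell
#!/bin/sh
# Hypothetical sketch of such a password generator: read random bytes,
# keep only alphanumerics, and emit the first 24 characters.
LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24
echo
```

Restricting to alphanumerics keeps the output paste-safe for UIs that reject punctuation; widen the `tr` character class if symbols are wanted.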

With recall, it’s now stored permanently. Someone who gets access to my screen history is a step closer to getting into my stuff.

Of course there are workarounds. But my expectation of how my screen works informs the actions I take on it.

Here’s another example. I recently clicked on a link from HN and ended up on a page containing explicit images. At work. As soon as I realised what I was looking at, I clicked away.

How long until my visual history is to be interrogated, characterised, and used to find me guilty of looking at inappropriate material in the workplace? Such a system is not going to care about my intentions. Even if I’m not disciplined, I’d certainly be embarrassed.


I don't think the above poster was really referring to who does it, but to the fact that having a conversation about your mom with your phone is creepy to begin with.


As opposed to what: If you hired an actual human assistant, it wouldn't be?


Having other people read through my stuff and respond for me is creepy regardless.


This is what executive assistants do all day.

Some people view housekeepers the same way. “I can’t let someone go through and touch all of my personal belongings. That’s just creepy.”

There’s a wide range of what people find creepy and also what people can and do get used to.


Assistants are generally limited to people who can afford one; I think that's a fair assumption. And out of that group, not everyone is actually going to have one. Which leaves very, very few people who do.

Why would this translate to everybody wanting to have one?


What does that have to do with creepiness?


How many people out there are hiring personal assistants?


There's something else: it pushes people to dive even more heavily into the ecosystem. If it works the way they showed, you really want it to understand your life, so you'll want all your devices helping to build that net of data, providing your context to every device for answering questions about events and such. Meaning, hey, maybe I should get an Apple TV instead of a Chromecast so that Siri knows about my shows too.


I'm just unhappy that this will mostly end up making the moat larger and the platform lock-in more painful either way. iPhones have been going up in price, and once you're deep in this, serious compute will simply be extortion, as leaving the Apple universe will be nigh impossible.

Also, no competitor is going to be as good at integrating everything, since none of them have systems that are as integrated.


i'd be skeptical of the marketing used for the security/privacy angle. won't be surprised if there is subpoena-able data out of this in some court case.

i might have missed it but there has not been much talk about guardrails or ethical use with their tools, and what they are doing about it in terms of potential abuse.


You can read the details of their approach for the privacy/security aspects of the cloud compute portion here. https://security.apple.com/blog/private-cloud-compute/


The question I have is how deeply it's integrated into non-Apple apps, like Signal (still no Siri support) or Outlook.


It sounds like app creators need to build in support using SiriKit and App Intents. If they're using either already, a fair bit of integration will be automatic.


Hope I can keep Apple's fingers from getting "deeply integrated" with my personal data.



