I actually think Apple is once again in a unique position here, thanks to its hardware/software integration.
Their ability to run computation on device, and to design silicon optimized for it, is unparalleled.
A huge Achilles heel of current models like GPT-4 is that they can’t be run locally. And there are tons of use cases where we don’t necessarily want to share what we’re doing with OpenAI.
That’s why, if Apple weren’t so far behind on the actual models (Siri is still a joke a decade later), they’d be in great shape hardware-wise.
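
To make the "run locally" point concrete, here's a minimal sketch of fully offline inference, assuming llama-cpp-python is installed and a quantized GGUF model file has already been downloaded (the model path below is a placeholder, not a real artifact):

    # Nothing here touches the network: no API key, no telemetry.
    from llama_cpp import Llama

    # Placeholder path; any quantized GGUF model works.
    llm = Llama(model_path="./models/llama-7b.Q4_K_M.gguf", n_ctx=2048)

    # The prompt never leaves the machine.
    out = llm("Summarize why on-device inference matters for privacy:",
              max_tokens=64)
    print(out["choices"][0]["text"])

The tradeoff, of course, is that a quantized 7B model on consumer hardware is a long way from GPT-4 quality, which is exactly the gap Apple's silicon advantage can't close on its own.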
Google has some impressive on-device AI software, such as Google Translate (translation), Google Photos (object detection, removal, inpainting), and Recorder (multi-speaker speech-to-text). Most of this is possible without their Tensor chip, but it's more efficient with it.