They're probably just using an already trained model. They're almost certainly not training a model on-device.
AFAIK both the iPhone and the Pixel have a chip that is specifically designed to do forward propagation on a deep neural network. I don't know much about them though.
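For what it's worth, "forward propagation" here just means running inputs through an already-trained network's layers. A minimal NumPy sketch (weights are made up for illustration; real on-device models would be run through something like Core ML or TensorFlow Lite, not NumPy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" weights for a tiny 2-layer net:
# 8 inputs -> 16 hidden units -> 3 output classes.
W1, b1 = rng.standard_normal((8, 16)), np.zeros(16)
W2, b2 = rng.standard_normal((16, 3)), np.zeros(3)

def forward(x):
    """One inference pass: matrix multiplies plus nonlinearities.
    This is the workload the phone NPUs are built to accelerate."""
    h = np.maximum(x @ W1 + b1, 0.0)      # ReLU hidden layer
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max())     # numerically stable softmax
    return e / e.sum()

probs = forward(rng.standard_normal(8))
print(probs)  # 3 class probabilities summing to 1
```

No gradients, no weight updates — that's the training half, which (as said above) almost certainly stays server-side.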
True - but what model would they want to use on-device?
All the feed ranking stuff is done server-side, I assume... And apart from that, the app is just for posting and reading text and pictures. I don't really see any use for ML there.
If one could profitably offload server-side CPU work to the client, then every app would be mining bitcoins on-device.
But it turns out that user complaints about a laggy app, a hot phone, and bad battery life (and therefore lost users and lost ad revenue) far outweigh any CPU time savings on the server.
Are they using on-device ML? Why?