The data source is linked and is based on the ARM Datacenter Energy prediction.
But I don't think it's too far-fetched.
The compute needed for digital twins - simulating a whole army of robots, then uploading the result to the robots, which still need a ton of compute themselves - is not unrealistic.
Cars like Teslas have A TON of compute built in too.
And we have seen what suddenly happens to an LLM when you scale up the number of parameters. We were in an investment hell where it was not clear what to invest in (the crypto, blockchain, and NFT bubbles had burst), but AI opened up the sky again.
If we continue like this, it will not be far-fetched that everyone has their own private agent running and pays for it (private/isolated for data security), plus a work agent.
I have to think economies of scale are coming into play for Apple. They can cut deals for chips and other components at a scale no one else is really capable of, and they have the luxury of being able to pay up front if they need to.
It could make sense for you to consider an EV with bi-directional charging. With off-grid solar, you can charge your truck/car/EV for free, and/or extend your off-grid battery by discharging the car into your home.
A 100 kWh battery can help you out when you come back home, if you know you can charge it again the next day.
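To put a rough number on that, here is a back-of-envelope sketch. All figures besides the 100 kWh pack size are my assumptions (how much charge you reserve for driving, inverter losses, household draw), not anything from the comment:

```python
# Back-of-envelope: how long a 100 kWh EV pack could cover a household
# load via bi-directional charging. All constants below are assumptions.

EV_CAPACITY_KWH = 100.0      # pack size, as in the comment
USABLE_FRACTION = 0.5        # assume you only drain half, keeping range to drive
INVERTER_EFFICIENCY = 0.9    # assumed conversion losses when feeding the home
HOUSEHOLD_LOAD_KW = 1.0      # assumed average household draw

usable_kwh = EV_CAPACITY_KWH * USABLE_FRACTION * INVERTER_EFFICIENCY
hours = usable_kwh / HOUSEHOLD_LOAD_KW
print(f"~{usable_kwh:.0f} kWh usable -> ~{hours:.0f} h of backup at {HOUSEHOLD_LOAD_KW} kW")
# -> ~45 kWh usable -> ~45 h of backup at 1.0 kW
```

So even under conservative assumptions, the car bridges a day or two of average household load, which is why it works as an extension of an off-grid battery.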
When I was 18, I looked at notes from my new coworker/manager and asked him how he learned to write that nicely. He told me that he struggled with writing cursive and just started writing letter by letter (print-letter style?).
I changed my 'font' that day, and suddenly I was at least able to read what I wrote!
Nevertheless, cursive is where the speed is. In university I was writing at the speed of the professor talking and still keeping it halfway readable, even for others - even though the look of it tended toward Arabic (I write in a Latin script). Good times; now I can barely scratch out a shopping list before I break my wrist.
On federated learning: you just make sure to keep this mechanism at the right stage of your pipeline.
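For concreteness, a minimal sketch of federated averaging (FedAvg), which is the usual shape of that pipeline stage: raw data stays on each client, and only locally updated weights reach the aggregation step. The toy 1-D model, the one-step local update, and all the numbers are my illustration, not anything from the comment:

```python
# Minimal FedAvg sketch (toy 1-D linear model y = w * x).
# Raw (x, y) pairs never leave the clients; the server only sees weights.

def local_update(w, client_data, lr=0.1):
    """Hypothetical local training: one gradient step on squared error."""
    grad = sum(2 * (w * x - y) * x for x, y in client_data) / len(client_data)
    return w - lr * grad

def fed_avg(global_w, clients):
    """Server stage: average locally updated weights, weighted by data size."""
    total = sum(len(c) for c in clients)
    return sum(local_update(global_w, c) * len(c) for c in clients) / total

clients = [[(1.0, 2.0), (2.0, 4.0)], [(3.0, 6.0)]]  # toy data, true slope = 2
w = 0.0
for _ in range(50):
    w = fed_avg(w, clients)
print(round(w, 2))  # converges to 2.0
```

The privacy property comes entirely from where the aggregation sits: move it before local training, or ship the raw data instead of the weights, and the isolation is gone.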