Apple has been struggling to fix Siri since its inception in 2011, but today they find themselves even further behind. The solution? Apple Intelligence, but that didn't pan out as expected. The Californian company has now resorted to using Google's Gemini to solve their AI woes. In this episode we explore what this means for Apple, the wider AI industry, and consumers.



For a long while I wondered what would happen when Apple (a company that prides itself on its products "just working") inevitably collided with the generative AI hype train.
At first I thought they might stay out of the whole thing, but they didn't, and it's been funny watching them struggle to integrate even the simplest aspects of generative AI into their products. Anyone who knows how LLMs work knows that they are wholly different from the natural language processing that goes into Siri.
It's such a shame, because Apple has spent decades building the perfect platform for LLMs to thrive in. Hopefully Gemma/Gemini integration can fill the gap.
They still need natural language processing to translate the LLM's output into something the device understands. LLMs just make the interface feel more human.
Why is their platform perfect for LLMs?
AppIntents
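For context: App Intents is Apple's framework for exposing discrete app actions to the system (Siri, Shortcuts, Spotlight), which is why it's often cited as a natural plug-in point for an LLM. A minimal sketch of what one looks like, with a hypothetical "open note" action (the intent name and parameter are illustrative, not from any real app):

```swift
import AppIntents

// Hypothetical intent: lets the system (or an LLM routing layer)
// ask the app to open a note by title.
struct OpenNoteIntent: AppIntent {
    static var title: LocalizedStringResource = "Open Note"

    // The system fills this parameter from the user's request.
    @Parameter(title: "Note Title")
    var noteTitle: String

    func perform() async throws -> some IntentResult {
        // A real app would look up the note and navigate to it here.
        return .result()
    }
}
```

Because each intent declares typed, named parameters, a model doesn't have to drive the UI; it only has to map a natural-language request onto a structured action the app already advertises.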
Cause it relies on stealing data from users, and AI lives for regurgitation of stolen data