Apple weighs Gemini to resurrect Siri
Apple is reportedly exploring a partnership with Google to use Gemini — Google’s advanced AI platform — as the foundation for a rebuilt Siri. The move follows Google’s recent “Made by Google” event, where the company showcased a new wave of assistant features powered by Gemini, including the proactive Magic Cue functionality. Those demos highlighted how far Siri currently lags behind competitors in anticipatory, context-aware assistance.
Why the timing matters
Apple’s long-promised “Personal Siri,” intended to fetch contextually relevant information from apps and content on an iPhone, has been delayed to iOS 26.4 next spring. That delay leaves Siri trailing features already shipping on Google’s Pixel devices. For example, Pixel’s Magic Cue can scan your Gmail, calendar entries, screenshots and other local content to surface a reservation number or transaction details while you’re on a call — removing the friction of switching apps to retrieve critical information.
What’s happening behind the scenes
Bloomberg’s Mark Gurman reports that Apple has initiated early, private discussions with Google about creating a custom Gemini-based model for Siri. According to people familiar with the talks, Google has begun training a model intended to run on Apple’s servers. Apple hasn’t made a final decision and is still weighing its options, including in-house development and licensing from external providers.
Two codenames: Linwood and Glenwood
Inside Apple, teams are said to be developing two iterations of Siri: one called Linwood, which uses Apple’s internally developed models, and another called Glenwood, which would integrate external technology. The Glenwood approach would allow Apple to plug in partner models (like Gemini) as fallbacks or core engines for the assistant’s capabilities.

Product features and comparisons
Pixel’s Magic Cue and Google’s broader “Personal Intelligence” strategy emphasize proactive assistance — surfacing flight numbers, meeting summaries, or contact details before you ask. Apple’s Personal Siri is intended to offer similar behavior: proactively digging through Mail, Messages, Calendar, and user content to serve actionable snippets at the right moment. The difference today is that Google’s implementation is live and tightly integrated with Gemini’s large-model capabilities, while Apple’s version remains delayed and under active redesign.
Feature snapshot
- Contextual retrieval: Magic Cue retrieves data from Gmail, calendar, and images; Apple’s Personal Siri plans to do the same across iPhone data sources.
- Proactive prompts: Google surfaces information during tasks like calls; Apple aims to add similar anticipatory suggestions in iOS 26.4.
- Fallback chatbots: Apple already supports third-party chatbots (ChatGPT, Claude, etc.) within Apple Intelligence as alternative answers when Siri falls short.
Use cases and advantages
Integrating Gemini into Siri could accelerate Apple’s roadmap for intelligent, helpful features. Practical use cases include:
- Customer service calls: Automatically display reservation numbers, boarding passes, or confirmation codes while you speak with an agent.
- Meeting prep: Summarize upcoming events and surface attachments, notes, or relevant messages before a call starts.
- Travel and logistics: Pull itineraries, tickets, and check-in links from email and present them when needed.
For users and enterprises, the advantage is reduced task switching and faster access to critical information — improving productivity and user experience on iPhone and iPad.
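Conceptually, this kind of contextual retrieval boils down to scanning locally indexed data for entities relevant to the task at hand and surfacing them without a prompt. A minimal sketch in Python — the data, patterns, and function names here are invented for illustration and are not Apple’s or Google’s implementation:

```python
import re

# Toy "local data sources": in a real assistant these would be
# mail, messages, and calendar entries indexed on-device.
LOCAL_MESSAGES = [
    "Your table for 2 is booked. Reservation #RX-48213.",
    "Flight UA1142 departs SFO 09:05, confirmation code QJ7K2M.",
    "Lunch with Dana moved to Thursday.",
]

# Hypothetical patterns for entities worth surfacing mid-call.
PATTERNS = {
    "reservation": re.compile(r"Reservation #([A-Z0-9-]+)"),
    "confirmation": re.compile(r"confirmation code ([A-Z0-9]+)"),
}

def surface_context(messages):
    """Return (kind, value) pairs a proactive assistant might show."""
    hits = []
    for msg in messages:
        for kind, pattern in PATTERNS.items():
            match = pattern.search(msg)
            if match:
                hits.append((kind, match.group(1)))
    return hits

print(surface_context(LOCAL_MESSAGES))
# [('reservation', 'RX-48213'), ('confirmation', 'QJ7K2M')]
```

The real systems rely on large models rather than regexes, but the product shape is the same: match local content against the current context (a call, a trip, a meeting) and present only the extracted snippet, so the user never has to switch apps.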
Market relevance and strategic implications
Apple’s discussions with Google reflect the shifting competitive landscape in AI assistants. Big tech companies are racing to deliver not just accurate answers, but anticipatory, privacy-aware contextual intelligence embedded across devices. For Google, supplying Gemini to Apple wouldn’t be unprecedented: Google already powers AI features for other OEMs, including Samsung. For Apple, adopting a partner model may be a pragmatic shortcut to close the gap while its internal teams refine Linwood.
Privacy, architecture, and deployment considerations
Reports suggest the model Google is training would run on Apple’s servers, which raises questions about data flow, on-device processing, and enterprise compliance. Apple has emphasized user privacy as a core differentiator; any partnership will require strict contractual and technical safeguards — including edge processing, encryption, and limiting telemetry — to align with Apple’s policies and customer expectations.
Who’s in the running and what’s next?
Apple has explored several partners this year, from Anthropic and OpenAI to startups like Perplexity AI. Anthropic was reportedly an early favorite until cost and contract demands led Apple to explore other routes. According to sources, CEO Tim Cook recently told employees that Apple “must win in AI,” reflecting executive urgency to close the feature gap while maintaining Apple’s platform standards.
Whether Apple ultimately integrates Gemini, continues building Linwood, or combines both approaches with Glenwood remains a decision likely to be finalized in the coming weeks. For consumers and enterprise customers, the outcome will shape how quickly Siri moves from a reactive voice assistant to a proactive, context-aware productivity tool.
