This is what the new Siri powered by Gemini that Apple is preparing will look like.

  • Siri will undergo a two-phase transformation: first improvements in iOS 26.4 and then a major overhaul in iOS 27.
  • Apple will rely on Google's Gemini models, rebranded as Apple Foundation Models v10 and v11.
  • The new Siri will be able to understand the screen, use personal data, and function as an advanced chatbot.
  • Part of the processing will be done in Apple's private cloud, and Apple is considering using Google's infrastructure.

Siri powered by Gemini

Apple is preparing a major shift for Siri with the help of Gemini, Google's family of artificial intelligence models, whose integration will also reach products such as CarPlay. After several setbacks with its AI strategy and an alliance with ChatGPT that never quite took off, the company has decided to rely on external technology to accelerate the evolution of its assistant.

In the coming months, iPhone, iPad, and Mac users will begin to see Siri evolve from a command-based assistant into a conversational chatbot capable of understanding context, the screen, and personal data. All of this will arrive in stages, with an initial batch of new features in iOS 26.4 and a much more profound overhaul in iOS 27 and the equivalent systems for iPad and Mac.

A billion-dollar deal for a Siri with a Gemini brain

According to various leaks and analyses, Apple has closed an agreement valued at around one billion dollars a year to integrate the Gemini core into its ecosystem. In practice, the company "rents" Google's AI technology to use as the foundation for its own generative models.

Although it will ultimately rely on Gemini, Apple has chosen to name its models Apple Foundation Models v10 and v11, a way to preserve its own branding and minimize Google's public prominence within its products.

These models will have approximately 1.2 trillion parameters. Initially, they will run on servers in Apple's private cloud. For the more advanced version, the company is even considering relying directly on Google's infrastructure, especially its Tensor Processing Units (TPUs).

This move comes after Apple acknowledged internally that developing a fully competitive model of its own would have meant additional years of work and further delays, something difficult to justify in a context where OpenAI, Google, Microsoft, and Anthropic are advancing at great speed.


Timeline: from the first features in iOS 26.4 to the big leap in iOS 27

Reports agree that Apple will showcase the first capabilities of the new Gemini-powered Siri in the second half of February, possibly through a dedicated presentation or a small keynote featuring live demos of the revamped assistant.

These first new features will arrive with iOS 26.4, whose testing phase will begin in February and whose public release is expected between March and April. In this version, Siri will primarily gain contextual intelligence and better responses, but will not yet become a full-fledged chatbot.

The major update will be reserved for iOS 27 and the Siri chatbot. This new generation of systems will be presented at the next WWDC, Apple's annual developers conference traditionally held in June, and will mark the introduction of a completely redesigned Siri.

In that second phase, the assistant will be designed natively as a generative AI chatbot similar to ChatGPT or Gemini itself, capable of maintaining extended dialogues by voice or text and deeply integrated into the operating system.

What will the new Siri be able to do with Gemini?

Apple's plan envisions Siri moving beyond simply responding to basic commands and becoming a much more flexible tool. Thanks to the Gemini-based models, the assistant will be able to analyze the content of the iPhone or iPad screen to better understand what the user is doing at any given moment and act accordingly.

In addition, it will be able to take into account part of the user's personal history and data stored on the device (such as Siri Shortcuts) to perform more complex tasks: from locating a specific document to finding a particular photo or drafting a message based on prior information.

The planned functions include the ability to search the internet, generate images, write or summarize texts, analyze documents, and write code. It will also be able to interpret what appears on the screen and perform direct actions, such as adjusting device settings, managing the calendar, or editing an image, without the user having to navigate through menus.

Another key feature will be the assistant's memory. Like other chatbots, Siri will be able to remember some past interactions to continue conversations, although with limits designed to protect privacy: Apple prefers to restrict how long and what type of information is retained before it is automatically forgotten.

Activating the assistant will remain familiar: you can summon it simply by saying "Siri" or by pressing the power button, so the change will be most noticeable in the way you talk to it and the types of tasks it will be able to handle.

Project "Campos" and Apple's two-phase strategy

Internally, Siri's redesign goes by the code name "Campos". Under this project, the assistant will become the first AI chatbot that Apple deeply integrates into all its operating systems.

The strategy will be structured in two clearly differentiated stages. The first update, iOS 26.4, will add smarter features on top of Siri's existing base, leveraging Apple Foundation Models v10. It will serve as a transition, designed to test the waters with users.

In the second phase, with iOS 27, iPadOS 27, and macOS 27, the traditional version of the assistant will be replaced by the new chatbot, which will rely on Apple Foundation Models v11, comparable to the Gemini 3 models. These models will be more complex and computationally demanding.

Due to that added complexity, Apple is considering running this advanced version of Siri on Google's infrastructure, using its data centers and Tensor chips for cloud processing, especially for functions that cannot be executed efficiently on the device itself.

At the same time, the company will continue working on a very deep integration with iOS, iPadOS, and macOS, so that the assistant is perceived as a natural part of the system and not as an isolated application, although independent apps are also being tested internally during the development phase.

Everything suggests that the coming months will be decisive in determining whether this bet on a Gemini-powered Siri allows Apple to position itself on par with the current leaders in generative AI and regain ground in an area where it has been seen as lagging behind, especially in Europe and Spain, where the progressive rollout of these features will shape the daily experience of millions of users.
