In brief

  • Google says Gemini Intelligence will let Android devices complete multi-step tasks across apps with user approval.
  • New features include AI-powered browsing, smarter autofill, custom widgets, and voice-to-text cleanup tools.
  • Gemini Intelligence launches first on Samsung Galaxy S26 and Google Pixel 10 devices this summer.

Google wants Android phones to act less like collections of separate apps and more like AI agents that handle tasks in the background.

Google on Tuesday introduced “Gemini Intelligence,” a new AI feature for Android that the company says will automate tasks across apps, personalize device interfaces, and help users complete everyday actions with less manual input.

According to Google, the rollout will begin this summer on Samsung Galaxy S26 and Google Pixel 10 phones before expanding later this year to watches, cars, glasses, and laptops tied to the Android ecosystem.

“Soon, devices with Gemini Intelligence will do all that and more,” Google said in a statement. “Gemini will navigate tasks for you—whether it’s snagging a front-row bike for your spin class or finding your class syllabus in Gmail then putting the books you need in your cart. Gemini handles the logistics while you stay in the moment.”

One of the biggest changes, Google said, involves multi-step app automation. Instead of switching between apps manually, users will be able to ask Gemini to complete actions across services.

“Instead of manually switching between apps and copying data, Gemini can turn visual context into instant action,” the tech giant said. “Imagine you have a long grocery list on your notes app. Just long press the power button over the list and ask Gemini to build a shopping cart with all of the items for delivery.”

Gemini Intelligence also introduces a redesigned Android interface based on Material 3 Expressive, which the company says reduces distractions and helps users stay focused.

Despite handing the AI deeper control of the device, Google said users stay in charge: Gemini acts only after receiving a command, stops once the task is complete, and still requires user approval for final confirmations.

Gemini Intelligence will also bring AI-powered browsing to Chrome, expand autofill using information from connected apps, introduce a multilingual voice-cleanup feature called Rambler, and let users create custom Android widgets using natural language prompts.

“With Rambler, you don't have to worry about getting your words exactly right before you start,” Google said. “You can speak naturally, and it will take the important parts, then fit them all together into a concise message.”

In addition to changes to Android devices, Google also unveiled the Googlebook, the first laptop designed for Gemini Intelligence.

“Over 15 years ago, we introduced the Chromebook, a laptop built for a cloud-first world,” Google wrote. “Now, as we are moving from an operating system to an intelligence system, we see an opportunity to rethink laptops again.”

However, Google did not say whether the Googlebook would replace the Chromebook, or on what timeline.

Google’s rollout also arrives as rival smartphone makers struggle to deliver on ambitious AI promises. Earlier this month, Apple agreed to a $250 million settlement over claims it misled consumers about “Apple Intelligence” features that were delayed or never arrived on new iPhones, including an upgraded Siri experience. Apple later said it would use Google’s Gemini to help power some AI products, including Siri.

Google’s years of investment in Gemini models, Android integrations, and AI infrastructure put the company in a stronger position to bring AI agent functionality directly into consumer devices.
