Google announced plans to deploy Gemini Intelligence on Android devices beginning this summer, describing the platform as an integrated hardware-and-software system designed to automate multi-step tasks across apps while upholding user privacy and control.
The company said the first wave of availability will land on the latest Samsung Galaxy and Google Pixel phones, explicitly naming the Galaxy S26 and Pixel 10 as initial targets. Google expects Gemini Intelligence to reach additional Android form factors, including smartwatches, in-car systems, smart glasses, and laptops, later this year.
Gemini Intelligence is built to orchestrate sequences of actions across multiple applications. Google cited examples such as booking fitness classes and compiling grocery delivery carts from simple shopping lists. The system can use visual material captured from the device to enrich context: users will be able to long-press the power button to add a screenshot or image, enabling Gemini to translate visual information into executable tasks. Google gave examples of converting handwritten notes into delivery carts and identifying travel tours on Expedia from photographs of printed brochures.
Starting in late June, Google plans to extend Gemini into Chrome, furnishing tools for research, summarization and comparisons. Chrome-based Gemini will also include auto-browse functionality intended to complete tasks such as booking appointments and arranging parking reservations on users' behalf.
Autofill with Google will make use of Gemini's Personal Intelligence to populate and complete complex forms by pulling relevant information from connected apps. Google emphasized that any linkage to apps for this purpose will require explicit user opt-in.
The launch also brings several named features. Rambler is designed to take natural speech and render it as polished text, with the capability to handle multiple languages within a single message. Create My Widget will allow users to produce custom widgets by describing them in natural language. The interface for Gemini Intelligence will reflect an updated visual approach, implemented with the Material 3 Expressive design language.
Google positions Gemini Intelligence as a system that combines device-level inputs with cloud-powered models to streamline common, multi-step workflows, while emphasizing user control over data access. The staged rollout and opt-in requirements mean functionality will initially be limited to supported devices and consenting users.
Summary
Gemini Intelligence will arrive on new Pixel and Galaxy phones this summer and expand to other Android devices later in the year. It automates cross-app tasks, can act on screen or image content captured via a long-press of the power button, integrates into Chrome with auto-browse features beginning in late June, and introduces Rambler and Create My Widget. Autofill with Google will use Personal Intelligence only when users opt in.