Posted by Kateryna Semenova – Sr. Developer Relations Engineer
AI is reshaping how users interact with their favorite apps, opening new avenues for developers to create intelligent experiences. At Google I/O, we showcased how Android is making it easier than ever for you to build smart, personalized and creative apps. And we’re committed to providing you with the tools needed to innovate across the full development stack in this evolving landscape.
This year, we focused on making AI accessible across the spectrum, from on-device processing to cloud-powered capabilities. Here are the top 3 announcements you need to know for building with AI on Android from Google I/O ‘25:
#1 Leverage the efficiency of Gemini Nano for on-device AI experiences
For on-device AI, we announced a new set of ML Kit GenAI APIs powered by Gemini Nano, our most efficient and compact model, designed and optimized for running directly on mobile devices. These APIs provide high-level, easy integration for common tasks including text summarization, proofreading, rewriting content in different styles, and generating image descriptions. Building on-device offers significant benefits such as local data processing and offline availability at no additional cost for inference. To start integrating these features, explore the ML Kit GenAI documentation, the sample on GitHub and watch the “Gemini Nano on Android: Building with on-device GenAI” talk.
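As an illustration, summarizing an article on-device might look roughly like the sketch below. The class and method names follow the ML Kit GenAI Summarization API as documented, but treat them as assumptions and verify against the current ML Kit GenAI docs before use; the feature-download handling in particular is simplified.

```kotlin
import android.content.Context
import com.google.mlkit.genai.summarization.Summarization
import com.google.mlkit.genai.summarization.SummarizationRequest
import com.google.mlkit.genai.summarization.SummarizerOptions
import kotlinx.coroutines.guava.await

// Sketch: summarize text on-device with Gemini Nano via the
// ML Kit GenAI Summarization API. Runs only on supported devices;
// the model may need to be downloaded before first use (omitted here).
suspend fun summarizeArticle(context: Context, article: String): String {
    val options = SummarizerOptions.builder(context)
        .setInputType(SummarizerOptions.InputType.ARTICLE)
        .setOutputType(SummarizerOptions.OutputType.ONE_BULLET)
        .setLanguage(SummarizerOptions.Language.ENGLISH)
        .build()
    val summarizer = Summarization.getClient(options)

    // Build the request and run inference locally; no data leaves the device.
    val request = SummarizationRequest.builder(article).build()
    return summarizer.runInference(request).await().summary
}
```

Because inference is local, this works offline and incurs no per-call cost, but it is limited to devices where Gemini Nano is available.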
#2 Seamlessly integrate on-device ML/AI with your own custom models
The Google AI Edge platform enables building and deploying a wide range of pretrained and custom models on edge devices, and supports frameworks like TensorFlow, PyTorch, Keras, and JAX, allowing for more customization in apps. The platform now also offers improved support for on-device hardware accelerators and a new AI Edge Portal service for broad coverage of on-device benchmarking and evals. If you are looking for GenAI language models on devices where Gemini Nano is not available, you can use other open models via the MediaPipe LLM Inference API.
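A minimal sketch of that path is shown below, assuming an open model (for example, a Gemma variant) has already been converted and pushed to the device. The option names mirror the MediaPipe LLM Inference API for Android; the model path and token limit are placeholder values.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Sketch: run an open language model on-device through the
// MediaPipe LLM Inference API when Gemini Nano is not available.
fun runLocalLlm(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.task") // example path
        .setMaxTokens(512)
        .build()
    val llmInference = LlmInference.createFromOptions(context, options)

    // Synchronous single-shot generation; streaming variants also exist.
    return llmInference.generateResponse(prompt)
}
```

In practice you would create the `LlmInference` instance once and reuse it, since model loading is the expensive step.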
Serving your own custom models on-device can pose challenges related to handling large model downloads and updates, which can impact the user experience. To improve this, we’ve launched Play for On-Device AI in beta. This service is designed to help developers manage custom model downloads efficiently, ensuring the right model size and speed are delivered to each Android device precisely when needed.
For more information, watch the “Small language models with Google AI Edge” talk.
#3 Power your Android apps with Gemini Flash, Pro and Imagen using Firebase AI Logic
For more advanced generative AI use cases, such as complex reasoning tasks, analyzing large amounts of data, processing audio or video, or generating images, you can use larger models from the Gemini Flash and Gemini Pro families, and Imagen, running in the cloud. These models are well suited for scenarios requiring advanced capabilities or multimodal inputs and outputs. And since the AI inference runs in the cloud, any Android device with an internet connection is supported. They are easy to integrate into your Android app using Firebase AI Logic, which provides a simplified, secure way to access these capabilities without managing your own backend. Its SDK also includes support for conversational AI experiences using the Gemini Live API and for generating custom contextual visual assets with Imagen. To learn more, check out our sample on GitHub and watch the “Enhance your Android app with Gemini Pro and Flash, and Imagen” session.
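Calling a cloud-hosted Gemini model through Firebase AI Logic can be sketched as follows. This assumes a Firebase project is already configured in the app; the API shape follows the Firebase AI Logic Kotlin SDK, and the model name is an example, not a recommendation.

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.ai.ai
import com.google.firebase.ai.type.GenerativeBackend

// Sketch: send a prompt to a cloud-hosted Gemini model via
// Firebase AI Logic; no backend of your own is required.
suspend fun askGemini(prompt: String): String? {
    val model = Firebase.ai(backend = GenerativeBackend.googleAI())
        .generativeModel("gemini-2.5-flash") // example model name

    // Returns the generated text, or null if the response had none.
    return model.generateContent(prompt).text
}
```

Since inference happens in the cloud, this works on any Android device with a network connection, and the SDK handles authentication and transport for you.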
These powerful AI capabilities can also be brought to life in immersive Android XR experiences. You can find corresponding documentation, samples and the technical session: “The future is now, with Compose and AI on Android XR“.

Get inspired and start building with AI on Android today
We introduced a new open source app, Androidify, to help developers build AI-driven Android experiences using Gemini APIs, ML Kit, Jetpack Compose, CameraX, Navigation 3, and adaptive design. Users can create a personalized Android bot with Gemini and Imagen via the Firebase AI Logic SDK. Additionally, it incorporates ML Kit pose detection to detect a person in the camera viewfinder. The full code sample is available on GitHub for exploration and inspiration. Discover more AI examples in our Android AI Sample Catalog.

Choosing the right Gemini model depends on understanding your specific needs and each model’s capabilities, including modality, complexity, context window, offline capability, cost, and device reach. To explore these considerations further and see all our announcements in action, check out the AI on Android at I/O ‘25 playlist on YouTube and explore our documentation.
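The trade-offs above can be captured in a toy decision helper. This is purely illustrative: the enum names and rules are this sketch’s own simplification of the considerations listed, not an official API or an exhaustive policy.

```kotlin
// Illustrative only: a toy helper mirroring the model-selection
// considerations (offline capability, task complexity, context size).
enum class ModelChoice { GEMINI_NANO_ON_DEVICE, GEMINI_FLASH_CLOUD, GEMINI_PRO_CLOUD }

fun chooseModel(
    needsOffline: Boolean,
    needsComplexReasoning: Boolean,
    needsLargeContext: Boolean
): ModelChoice = when {
    // Offline or privacy-sensitive: on-device Gemini Nano, no inference cost.
    needsOffline -> ModelChoice.GEMINI_NANO_ON_DEVICE
    // Heavy reasoning or large inputs: a larger cloud model.
    needsComplexReasoning || needsLargeContext -> ModelChoice.GEMINI_PRO_CLOUD
    // Otherwise a fast, cost-efficient cloud default.
    else -> ModelChoice.GEMINI_FLASH_CLOUD
}

fun main() {
    println(chooseModel(needsOffline = true, needsComplexReasoning = false, needsLargeContext = false))
    // → GEMINI_NANO_ON_DEVICE
    println(chooseModel(needsOffline = false, needsComplexReasoning = true, needsLargeContext = false))
    // → GEMINI_PRO_CLOUD
}
```

Real apps would weigh more dimensions (cost ceilings, latency, device reach), but the shape of the decision is the same.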
We’re excited to see what you’ll build with the power of Gemini!