Google announced a broad set of advancements across its artificial intelligence portfolio throughout June, underscoring its long-term investment in the field and its commitment to integrating AI into a vast array of products and scientific research. The updates span from core model enhancements and developer tools to new consumer-facing features and breakthroughs in specialized domains like genomics, weather prediction, and robotics.
For more than two decades, the technology giant has consistently poured resources into machine learning and AI research, tools, and infrastructure. This sustained effort aims to build products that enhance daily life for a broader user base. Teams across Google are actively exploring methods to harness AI’s benefits in diverse fields, including healthcare, crisis response, and education, as part of an ongoing initiative to regularly update the public on their progress.
Google AI Advancements for AI Models and Developer Empowerment
Central to Google’s June announcements were expansions and optimizations of its foundational AI models, particularly within the Gemini family, alongside the introduction of powerful new tools for developers.
Expanding the Gemini 2.5 Family
Google expanded its Gemini 2.5 family of models, making Gemini 2.5 Flash and Pro generally available. A notable addition is 2.5 Flash-Lite, described as Google’s most cost-efficient and fastest 2.5 model to date. This strategic move aims to make advanced AI capabilities more accessible and efficient for a wider range of applications and users.
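For developers, these models are reached through the Gemini API. The minimal sketch below assumes the google-genai Python SDK and illustrative model identifiers such as "gemini-2.5-flash-lite" and "gemini-2.5-pro"; the exact names may differ, so check the current Gemini API documentation.

```python
# Minimal sketch: calling the 2.5 models through the Gemini API with the
# google-genai Python SDK. Model identifiers below are assumptions and may
# differ from the names listed in Google's documentation.
from google import genai

client = genai.Client(api_key="YOUR_API_KEY")  # or configure via environment variable

# Route a latency-sensitive request to the lighter model and a harder one to Pro.
for model_id, prompt in [
    ("gemini-2.5-flash-lite", "Summarize this sentence in five words: the cat sat on the mat."),
    ("gemini-2.5-pro", "Explain the trade-offs between model size and latency."),
]:
    response = client.models.generate_content(model=model_id, contents=prompt)
    print(f"[{model_id}] {response.text[:120]}")
```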
New Tools for Developers
In a bid to empower the developer community, Google introduced Gemini CLI, an open-source AI agent designed to bring Gemini capabilities directly into the terminal environment. This tool facilitates coding, problem-solving, and task management, offering access to Gemini 2.5 Pro free of charge with a personal Google account, or enhanced access via a Google AI Studio or Vertex AI key.
Furthermore, Google made Imagen 4, its latest text-to-image model, available for developers. Imagen 4 is accessible for paid preview in the Gemini API and for limited free testing in Google AI Studio. The company highlights that Imagen 4 offers significantly improved text rendering compared to prior image models, with general availability anticipated in the coming weeks.
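A request to the paid preview might look like the following sketch, again assuming the google-genai Python SDK; the model id "imagen-4.0-generate-preview-06-06" is a placeholder assumption, and the current preview name should be taken from Google AI Studio.

```python
# Minimal sketch: generating an image with Imagen via the Gemini API using the
# google-genai Python SDK. The model id is an assumption for the paid preview.
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

result = client.models.generate_images(
    model="imagen-4.0-generate-preview-06-06",
    prompt="A hand-lettered poster that reads 'Hello, Imagen 4' in bold type",
    config=types.GenerateImagesConfig(number_of_images=1),
)

# Each generated image is returned as raw bytes that can be written to disk.
with open("poster.png", "wb") as f:
    f.write(result.generated_images[0].image.image_bytes)
```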
Enhancing User Experience Across Products
Beyond core models and developer tools, Google rolled out a suite of AI-powered enhancements designed to refine user interactions across its popular products, from search to photo management and personal computing.
Innovations in AI Mode for Search
AI Mode, Google’s most powerful AI search experience, received several significant upgrades. The company also provided a closer look at the development history of AI Mode, illustrating how it evolved into its current, more sophisticated form.
- Voice Search Capabilities: A new feature, Search Live with voice, was introduced, allowing users to talk, listen, and explore in real time with AI Mode within the Google app for Android and iOS. This enables free-flowing, back-and-forth voice conversations with Search, including the ability to explore links from across the web. This multi-tasking capability is exemplified by scenarios such as finding real-time tips for a trip while simultaneously packing. Transcripts of these voice searches are saved in AI Mode history for future reference.
- Interactive Financial Charts: AI Mode now supports interactive charts for financial data, stocks, and mutual funds. Users can compare and analyze information over specific time periods, obtain interactive graphs, and receive comprehensive explanations for their queries. The custom Gemini model’s advanced multi-step reasoning and multimodal capabilities in AI Mode facilitate follow-up questions, providing a more dynamic and insightful data visualization experience.
Smarter Photo Management with Ask Photos
The Ask Photos feature, which leverages Gemini models, saw improvements and broader availability for Google Photos users. It now allows for more complex queries, such as “what did I eat on my trip to Barcelona?”, to find specific photos. Simultaneously, Google reports that it returns more photos faster for simpler searches like “beach” or “dogs,” enhancing overall efficiency and accuracy in photo retrieval.
AI Integration in Chromebooks
The latest Lenovo Chromebook Plus 14 was launched with several new AI features aimed at boosting productivity. These include Smart grouping for organizing open tabs and documents, AI image editing within the Gallery app, and the capability to extract and convert text from images into editable text. The device also features custom wallpapers of Jupiter, created using generative AI in partnership with NASA, highlighting the creative applications of AI in consumer hardware.
Public Sharing for NotebookLM
Google introduced a new way to share NotebookLM notebooks publicly. Users can now share a notebook with anyone via a single link, making it easier to distribute content ranging from overviews of nonprofit projects and product manuals for businesses to study guides for educational purposes. This enhances collaboration and dissemination of information created within NotebookLM.
Advancing AI in Specialized Domains and Scientific Research
Google’s AI efforts extend significantly into specialized scientific and societal applications, showcasing the technology’s potential beyond mainstream consumer products.
Gemini for Education
Recognizing the unique needs of the educational community, Google introduced Gemini for Education, a tailored version of the Gemini app. Unveiled at the International Society for Technology in Education (ISTE) conference, this new AI solution aims to support both learners and educators. Its potential applications range from personalizing learning experiences for students to assisting teachers in generating compelling educational content, marking a significant step in integrating AI into pedagogical practices.
AlphaGenome: Understanding the Human Genome
DeepMind, Google’s AI research arm, unveiled AlphaGenome, a new unifying DNA sequence model designed to enhance the understanding of the human genome. This model advances regulatory variant-effect prediction and promises to shed new light on genome function. To accelerate scientific research, AlphaGenome is being made available in preview via the AlphaGenome API for non-commercial research, with plans for a broader model release in the future. This initiative represents a profound application of AI in unlocking complex biological insights.
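For researchers, access is through the AlphaGenome API and its Python client. The sketch below follows the general quick-start pattern for scoring a regulatory variant; the module paths, the create() call, and the predict_variant() arguments shown here are assumptions and may not match the released client exactly, so consult the official documentation before use.

```python
# Rough sketch of querying the AlphaGenome API with its Python client
# (pip install alphagenome). Names and signatures below are assumptions
# based on the published quick-start pattern, not a verified reference.
from alphagenome.data import genome
from alphagenome.models import dna_client

model = dna_client.create("YOUR_ALPHAGENOME_API_KEY")

# Score a single-nucleotide variant within a surrounding genomic interval.
interval = genome.Interval(chromosome="chr22", start=35_677_410, end=36_725_986)
variant = genome.Variant(
    chromosome="chr22", position=36_201_698,
    reference_bases="A", alternate_bases="C",
)

outputs = model.predict_variant(
    interval=interval,
    variant=variant,
    requested_outputs=[dna_client.OutputType.RNA_SEQ],  # predicted RNA expression tracks
    ontology_terms=["UBERON:0001157"],  # example tissue ontology term
)
```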
Weather Lab: Improving Tropical Cyclone Prediction
Google DeepMind and Google Research launched Weather Lab, an interactive website dedicated to sharing their AI weather models. Weather Lab features experimental tropical cyclone predictions, and Google is collaborating with the U.S. National Hurricane Center to support its forecasts and warnings during this cyclone season. This partnership underscores AI’s potential in critical areas like disaster preparedness and climate science.
AI in Cancer Research and Treatment
Google highlighted how its AI breakthroughs are bringing hope to cancer research and treatment. Ruth Porat, Google’s President and Chief Investment Officer, addressed the American Society of Clinical Oncology (ASCO), emphasizing how Google’s AI research shows promising avenues for early detection and improved treatment of cancer. This commitment to healthcare applications demonstrates AI’s transformative potential in addressing some of humanity’s most pressing health challenges.
Gemini Robotics On-Device: AI for Physical Robots
Building on previous announcements, Google introduced Gemini Robotics On-Device, marking a significant step in bringing AI directly to local robotic devices. In March, Google had showcased Gemini Robotics as its most advanced Vision-Language-Action (VLA) model, capable of bringing multimodal reasoning and real-world understanding to machines. Gemini Robotics On-Device is optimized to run efficiently on the robot itself, equipping robots with strong general-purpose dexterity and task generalization. This advancement signifies Google’s push towards more autonomous and capable physical robots, with Gemini 2.5 further enhancing robotics and embodied intelligence.
Looking Forward
The comprehensive array of AI announcements in June reaffirms Google’s position at the forefront of artificial intelligence innovation. From making powerful AI models more accessible and efficient for developers to embedding intelligent features directly into consumer products, and extending its research into complex scientific and medical fields, Google is demonstrating a multi-faceted approach to leveraging AI. These developments suggest a future where AI continues to permeate various aspects of technology, science, and daily life, promising enhanced capabilities and new possibilities across numerous sectors.