Google shipped a dense batch of AI updates in March 2026, touching Search, Workspace, Maps, Pixel devices, and its core Gemini model lineup. The company expanded Search Live to more than 200 countries, rolled out Gemini integration across Google Docs, Sheets, and Slides, and released the Gemini 3.1 Flash-Lite model for developers. Here is everything that changed.
Google Search Live Goes Global
The biggest consumer-facing launch of the month was Search Live expanding to more than 200 countries and territories. Previously limited to the US, Search Live lets users have a real-time voice conversation with Google Search - asking follow-up questions, refining queries, and getting spoken answers without typing.
Google also made Canvas available in AI Mode throughout the United States. Canvas is the interactive workspace that appeared in Search last year, allowing users to edit, iterate on, and export AI-generated content directly within the search results page.
Gemini Arrives in Google Workspace
Google Docs, Sheets, Slides, and Drive all received Gemini integration in March 2026. The Sheets integration stood out - Google claimed it achieved state-of-the-art performance on spreadsheet benchmarks, meaning the AI can generate formulas, build pivot tables, analyze datasets, and create charts from plain English prompts more accurately than competing tools.
In Docs, Gemini can draft, summarize, and restructure documents. In Slides, it generates presentation layouts from text descriptions. Drive gained AI-powered search that understands file contents rather than just filenames, making it possible to find documents by describing what they contain. These tools join a broader wave of AI productivity tools reshaping the workplace in 2026.
Google Maps Gets Smarter
Maps received two significant AI upgrades. Ask Maps lets users type or speak natural language questions such as "Where can I find a quiet cafe near downtown with outdoor seating?" and get filtered, ranked results. Previously, this kind of multi-attribute search required manual filter adjustments.
Immersive Navigation adds a 3D overhead perspective during turn-by-turn directions. Instead of a flat 2D map, drivers see a rendered aerial view of upcoming intersections, lane merges, and highway exits - particularly useful in complex urban interchanges.
Personal Intelligence Expands
Google grouped several features under a "personal intelligence" umbrella. AI Mode in Search now draws on personal context - past searches, saved places, calendar events - to provide more relevant answers when users opt in. Gemini in Chrome provides contextual suggestions based on the page a user is viewing.
The Gemini app also gained tools to help users switch from competing AI assistants, including the ability to import conversation history and memory data from other platforms. Google's AI suite now spans search, documents, and research - for a deep dive into one of its most underrated tools, read our NotebookLM guide for 2026.
March Pixel Drop
Pixel phones received their monthly feature drop with two notable AI additions. Circle to Search now includes outfit breakdown, which identifies individual clothing items in a photo and finds similar products online. Magic Cue uses location and time context to proactively surface nearby restaurant suggestions.
Google also expanded its live translation feature for Pixel Buds and compatible headphones to iOS devices and additional countries. The feature translates spoken conversation in near real-time through the earbuds.
Health and Fitness Updates
Google published The Check Up 2026, its annual health technology report, and announced $10 million in new funding for health-related AI research. The Fitbit health coach - an AI-powered feature that provides personalized fitness and wellness guidance - expanded to more Fitbit and Pixel Watch models.
New Gemini 3.1 Models and AI Developer Tools
On the developer side, Google released Gemini 3.1 Flash-Lite and Gemini Flash Live. Flash-Lite is positioned as a lightweight, cost-efficient model for high-volume tasks like classification, extraction, and summarization. Flash Live targets real-time conversational applications where latency matters more than deep reasoning. For context on how these new models compare to the competition, see our Claude vs ChatGPT comparison for 2026.
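For a sense of how a lightweight model like this slots into a high-volume pipeline, here is a minimal classification sketch. It assumes the google-genai Python SDK (`pip install google-genai`) and uses the model name as reported in the announcement; the exact model identifier and availability are assumptions, and any text model would work the same way.

```python
# Sketch: routing a high-volume classification task to a lightweight model.
# The model name below comes from the announcement and is unverified;
# swap in whatever identifier your API key has access to.

def build_classification_prompt(text: str, labels: list[str]) -> str:
    """Constrain the model to answer with exactly one label."""
    return (
        "Classify the following text into exactly one of these labels: "
        + ", ".join(labels)
        + ".\nReply with the label only.\n\nText: "
        + text
    )

def classify(client, text: str, labels: list[str]) -> str:
    # client is a genai.Client(api_key=...) instance
    response = client.models.generate_content(
        model="gemini-3.1-flash-lite",  # assumed identifier, per the article
        contents=build_classification_prompt(text, labels),
    )
    return response.text.strip()

if __name__ == "__main__":
    from google import genai
    client = genai.Client()  # reads GEMINI_API_KEY from the environment
    print(classify(client, "My package never arrived.",
                   ["billing", "shipping", "other"]))
```

Constraining the reply to a bare label keeps parsing trivial, which matters when the same call runs millions of times.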
Google AI Studio gained a vibe coding feature powered by a new tool called Antigravity. The coding agent lets developers describe an application in natural language and generates working code, joining the growing field of AI coding tools alongside Cursor, Windsurf, and Replit Agent. Developers who prefer a no-code approach can explore our guide on building AI apps without writing code.
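The describe-then-generate loop behind these agents can be sketched in a few lines. This is a generic illustration of the pattern, not Antigravity's actual implementation; the SDK call and model name are assumptions, and the only firm logic is extracting the fenced code block from the model's reply.

```python
# Generic prompt-to-code loop, as used by "vibe coding" style agents.
# Not Antigravity's implementation; the model name is an assumption.
import re

def extract_code_block(reply: str, lang: str = "python") -> str:
    """Pull the first fenced code block out of a model reply."""
    match = re.search(rf"```{lang}\n(.*?)```", reply, re.DOTALL)
    if match is None:
        raise ValueError("model reply contained no fenced code block")
    return match.group(1)

def generate_app(client, description: str) -> str:
    # client is a genai.Client(api_key=...) instance
    prompt = (
        "Write a complete, runnable Python program for the following "
        "request. Return it as a single fenced python code block.\n\n"
        + description
    )
    response = client.models.generate_content(
        model="gemini-3.1-flash-lite",  # assumed; any code-capable model works
        contents=prompt,
    )
    return extract_code_block(response.text)
```

Real agents add the steps this sketch omits: running the generated code, feeding errors back into the prompt, and iterating until it passes.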
Lyria 3 Pro and AlphaGo at 10
Google DeepMind released Lyria 3 Pro, the latest version of its music generation model. Lyria 3 Pro can produce full tracks up to three minutes long across multiple genres and instrument combinations. The model is available through AI Studio for experimental use.
The company also marked the 10th anniversary of AlphaGo's first victory against a professional Go player. Google published a retrospective examining how techniques developed for AlphaGo - reinforcement learning, Monte Carlo tree search, and neural-network position evaluation - influenced the development of modern large language models.
Source: blog.google
Frequently Asked Questions
What is Google Search Live and where is it available?
Search Live is a feature that lets users talk to Google Search using real-time voice conversation. In March 2026, Google expanded it from the US to more than 200 countries and territories, making it available to the vast majority of global Search users.
What changed with Gemini in Google Workspace?
Google added Gemini AI capabilities directly into Docs, Sheets, Slides, and Drive. The Sheets integration achieved what Google called state-of-the-art performance for spreadsheet tasks, allowing users to generate formulas, analyze data, and create charts through natural language prompts.
What are the new Gemini 3.1 models released in March 2026?
Google released two new models: Gemini 3.1 Flash-Lite, a lightweight model optimized for speed and cost-efficiency, and Gemini Flash Live, designed for real-time conversational applications. Both are available to developers through Google AI Studio.
What is vibe coding in Google AI Studio?
Google introduced an experimental feature called vibe coding through its Antigravity coding agent in AI Studio. It lets developers describe what they want to build in natural language and have the AI generate functional application code, similar to tools like Cursor and Replit Agent.
How did Google Maps change in March 2026?
Google Maps received two major AI upgrades: Ask Maps, which lets users ask natural language questions about places and routes, and Immersive Navigation, which provides a 3D overhead view of upcoming turns and landmarks during navigation.
The Bottom Line
March 2026 was one of Google's densest months for AI releases in recent memory. The through-line across every announcement - Search Live, Workspace AI, Maps intelligence, Pixel features, and new Gemini models - is that Google is embedding generative AI into every product surface its billions of users touch daily. For developers, the Gemini 3.1 Flash-Lite model and AI Studio's vibe coding tools open new build paths. For consumers, the changes mean Google's core products are becoming conversational by default. The pace suggests April will bring more of the same.
Continue reading related coverage in News or browse all stories on the articles page.