
Editor’s note: Sundar Pichai spoke on stage at Google Cloud Next 2025 in Las Vegas. Below is an edited version of his remarks.
The chance to improve lives and reimagine things is why Google has been investing in AI and machine learning for more than two decades. We see it as the most important way we can advance our mission: to organize the world’s information and make it universally accessible and useful.
This week at Google Cloud Next, we’re sharing how AI can help advance your mission. It’s exciting to see how our products are helping companies of all sizes do more with AI — and translate those benefits to customers. Here are some highlights:
Ironwood, our seventh-generation Google TPU
Since 2013, we’ve invested heavily in our custom AI chips, called Tensor Processing Units, or TPUs, and we continue to make massive improvements in performance and efficiency at scale. Today I’m proud to announce that our seventh-generation TPU, Ironwood, is coming later this year. Compared to our first publicly available TPU, Ironwood achieves 3,600 times better performance. It’s the most powerful chip we’ve ever built and will enable the next frontier of AI models. Over the same period, we’ve also become 29 times more energy efficient.
Cloud Wide Area Network: Our global private network, for your business
We need our global infrastructure to move at “Google speed” — with near-zero latency — supporting services like Gmail, Photos and Search for billions of users worldwide. And we use it for training our most capable model, Gemini.
Google’s backbone network is unparalleled, spanning more than 200 countries and territories, powered by over two million miles of fiber. Today, I’m pleased to announce that we are making Google’s global private network available to enterprises around the world. We call it Cloud Wide Area Network, or WAN.
Cloud WAN leverages Google’s planet-scale network. Optimized for application performance, it delivers over 40% faster performance while reducing total cost of ownership by up to 40%. Companies like Nestlé and Citadel Securities are already using this network for faster, more reliable solutions. And it will be available to all Google Cloud customers later this month.
Research advances in quantum computing and AI
This progress is laying the foundation for breakthroughs across multiple fields. Quantum computing is a great example. Our newest quantum chip, Willow, cracked a key challenge in quantum error correction that eluded researchers for three decades. It can reduce errors exponentially as we scale up using more qubits. The Willow chip really paves the way for a useful, large-scale quantum computer down the road.
Our infrastructure enables the next layer of the stack: research and models. Over the last decade, our research teams have pushed the boundaries of AI forward. And today, they are accelerating science and discovery — from our AlphaFold breakthrough with protein folding to WeatherNext, our state-of-the-art weather forecasting models.
Gemini 2.5, our advanced reasoning model
A couple weeks ago we released a new model, Gemini 2.5, which is a thinking model that can reason through its thoughts before responding. It’s our most intelligent AI model — ever.
And it is the best model in the world, according to the Chatbot Arena leaderboard. It’s state-of-the-art across a range of benchmarks requiring advanced reasoning. That includes the highest score — ever — on Humanity’s Last Exam, one of the hardest industry benchmarks, designed to capture the frontier of human knowledge and reasoning. Gemini 2.5 Pro is available now for everyone in Google AI Studio, Vertex AI and in the Gemini app.
Gemini 2.5 Flash: Our most cost-efficient thinking model
We’re also announcing Gemini 2.5 Flash, our low-latency and most cost-efficient thinking model. With 2.5 Flash, you can control how much the model reasons, balancing performance with your budget.
Gemini 2.5 Flash is coming soon in Google AI Studio, Vertex AI and in the Gemini app. We’ll be sharing more details on the model and its performance soon.
Products and platforms powered by world-class AI
Our goal is to always bring our latest AI advances into the fourth layer of our stack: products and platforms. Today all 15 of our half-billion-user products — including seven with 2 billion users — are using our Gemini models. Deploying AI at this scale requires world-class inference capabilities, which enterprises can also draw on to build their own AI-powered applications.
Gemini is also helping us create net-new products and experiences. NotebookLM is one example, used by 100,000 businesses. It uses long context, multimodality and our latest thinking models to show information in powerful ways. Veo 2 is a leading video generation model. Major film studios, entertainment companies and top advertising agencies around the world are using it to bring their stories to life.
We’re focused on getting these advancements into the hands of both consumers and enterprises. That focus is what allows us to innovate at the cutting edge and push the boundaries of what’s possible, for us — and for you. The result: better and faster innovation for everyone.