Everything announced at Google I/O 2024

A presenter at Google I/O shows information on a new AI project.

Android, Wear OS, and Pixel may be Google’s household names, but it was Google Gemini, its emerging AI technology, that stole the limelight at Google I/O 2024. The company’s annual software showcase sets the stage for everything it has planned for the coming year, and this year, CEO Sundar Pichai unambiguously declared that Google is in its “Gemini era.” From AI searches in your Google Photos to virtual AI assistants that will work alongside you, Google is baking Gemini into absolutely everything, and the implications are enormous. Here’s an overview of everything Google announced this year.

Gemini takeover

Google's Ask Photos debut.

Users upload more than 6 billion photos to Google Photos every day, so it’s little wonder that we could use a hand sifting through them all. Gemini will be added to Google Photos this summer, adding extra search abilities through the Ask Photos function. For instance, ask it “what’s my license plate again” and it’ll search through your photos to find the most likely answer, saving you from needing to manually look through your photos to find it yourself.

NotebookLM, spotlighted at last year’s Google I/O, is also getting Gemini, pushing its AI smarts to even higher levels. During the presentation, Google showed it tailoring a physics lesson to use basketball as an example. This sort of personalized learning is likely to become more prevalent in the future.

Gemini 1.5 Pro will be available for all developers and advanced users starting today in over 35 languages.

Gemini Agents can do it for you

Gemini isn’t just for asking questions and summarizing data; Google wants it to actually get things done for you, too. While it can’t vacuum or take out the trash, Agents is a new AI assistant you can assign tasks to. Google demonstrated it by taking a photo of a pair of shoes and telling Agents to return them: it used AI to identify the shoes, searched Gmail for the receipt, and offered to initiate a return via email. It could also be used to plan vacations, work trips, and other information-heavy tasks.

Project Astra

Project Astra demonstration on a phone.

Another experimental project for Google is Astra, which ties Gemini into cameras and allows it to understand and interpret the world around it. In the demo we saw, Astra was able to identify a speaker, break down which part of the speaker made noises, and read code and explain it. Astra could also be used to add AI into a pair of smart glasses, allowing you to ask questions about things you see without holding up a phone camera.

We have seen something like this before (ChatGPT has shown off similar tech), but it’s hard not to be impressed by Astra. Unfortunately, there’s no word yet on when it might be released, or at what price.

Generative AI

Generative AI is the most mainstream form of AI out there, and Google isn’t ignoring it. Its newest image-generation model is called Imagen 3, and Google claims it’s the company’s best yet at turning words into images and at understanding prompts.

Donald Glover sitting in a cabin with a movie crew.

Beyond images, Google has been working hard on creating AI models for generating music, as well as Veo, an AI model that can create some very impressive HD videos. Prompts can be used to edit existing videos, so you don’t need to recreate videos from scratch every time, and the video examples shown definitely look better than most videos created by AI. Google is lending the power of Veo to Donald Glover, who is in the process of creating a movie using this new AI model.

Worried about generated images, sounds, and videos being used for nefarious purposes? Google has added SynthID to Gemini’s creations: an invisible watermark that identifies content created by AI. The image and video tools can be found in ImageFX and VideoFX.

Generative AI will also appear in Google Search. AI Overviews will summarize results at the top of the page, rather than simply sending you off to various websites. Multistep reasoning will break down complex requests, tapping into Google’s indexes to provide the most relevant information. It can even help you plan a trip.

One of the most impressive elements of AI Overviews is the ability to ask a question while using Google Lens and get a customized, relevant overview that answers it. AI Overviews will be available across the U.S. starting today.

Gemini and Workspace

Gemini has been available in Google’s Workspace for a little while now, but Google is ready to push it to the next level. A Gemini-powered side panel will be available next month. Gemini is also coming to Meet in more languages.

And as you might expect, Gemini is rolling out to Gmail. Ask it to summarize information from your kid’s school and it can do that, or it can simply sum up long emails so you don’t have to. Type a question or prompt, and Gemini will respond or perform an action; for instance, it can gather separate quotes for building work and compile them into a list for you. Smart Reply is also getting an upgrade in the form of Contextual Smart Replies. These abilities roll out to Workspace Labs users this summer.

You may soon be working alongside AI, too. Google showed off an “AI teammate” named Chip, who was in charge of monitoring resources for the team. Chip could answer questions in Google Workspace chats, remember when decisions had been made, and track the progress of a given project.

The Gemini app

Google Gemini on smartphone.

The Gemini app is effectively an upgraded Google Assistant: you can communicate with it in all the same ways you would with Google Assistant, including text and voice, but you’ll also be able to use video and a more conversational style of speaking, known as Gemini Live.

Then there are Gemini Gems: smaller, customized versions of Gemini specialized for particular niches. If you use Gemini in the same specific way over and over, you can create a Gem to save time. For instance, you could customize a Gem to tell you stories in a style you prefer, rather than feeding a generic AI chatbot the same prompts every time.

The Gemini app can do a number of things you’d expect from Gemini, including planning a trip and laying out an itinerary. This function rolls out this summer.

AI and Android

Google Gemini on iOS and Android.

Naturally, Google is baking Gemini into its mobile operating system, too. Android will be the first mobile OS to include such an advanced AI model, which makes it the platform of choice for AI fans.

Circle to Search was the first piece of this to arrive, but this year, Google is also making Gemini the standard AI assistant on Android, with more AI functions coming under the hood.

Think of Gemini on Android as being Google Assistant on steroids. It will be able to contextually understand the content on your screen, including being able to figure out summaries from YouTube videos, create images for replies, and answer any questions you might have — without ever leaving the screen you’re looking at.

Accessibility is a key area for AI. The TalkBack feature has been around for a while, but now, thanks to Gemini, it can describe images in detail, giving users with vision impairments an easier way to use their phones. And since Gemini runs on-device, it’s fast.

Gemini will also help deal with spam and scam callers. It will listen in on your calls and warn you when it detects suspicious activity; since everything happens on-device, the information never leaves your phone. This feature is still being tweaked, though, and won’t be available for a while yet.