Google's new AI model is lighter, more efficient and even more intelligent

Gemini 1.5 Flash.

Google held its annual developer conference, Google I/O, on Tuesday, and guess what? There was more generative AI news. The company unveiled Gemini 1.5 Flash, which it says is its lightest and most efficient artificial intelligence model yet.

Google has so many AI projects (including one of the best AI image generators) that it's hard to keep track of them all and what they each do, but Gemini 1.5 Flash is essentially a leaner, more efficient version of Gemini 1.5 Pro. Google Cloud users can integrate it into their own apps, and it can rapidly summarise conversations, caption images and videos, and extract data from large documents and tables.

In a blog post, Google said Gemini 1.5 Flash is lighter than Gemini 1.5 Pro but still highly capable of multimodal reasoning across vast amounts of information. The new model was trained by 1.5 Pro through a process called 'distillation,' in which the most essential knowledge and skills from a larger model are transferred to a smaller, more efficient one.
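Google hasn't published the details of how it distilled Flash from 1.5 Pro, but as a rough illustration of the general idea, knowledge distillation is commonly implemented by training the smaller 'student' model to match the softened output distribution of the larger 'teacher' alongside the usual supervised loss. The PyTorch sketch below shows that generic recipe; the temperature and weighting values are placeholders, and none of this is Google's actual training code.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Generic knowledge-distillation loss: blend cross-entropy on the
    ground-truth labels with a KL term that pushes the student's softened
    predictions towards the teacher's."""
    # Soften both distributions with the temperature, then compare them.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    kl = F.kl_div(soft_student, soft_teacher, reduction="batchmean")
    kl = kl * (temperature ** 2)  # standard scaling so gradients stay comparable

    # Ordinary supervised loss on the hard labels.
    ce = F.cross_entropy(student_logits, labels)

    return alpha * kl + (1 - alpha) * ce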

Developers can integrate Gemini models into their applications with Google AI Studio and Google Cloud Vertex AI. Google says Flash has a one-million-token context window by default, which means it can process one hour of video, 11 hours of audio, codebases with more than 30,000 lines of code, or over 700,000 words.
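As a rough sketch of what that integration looks like, the snippet below uses the google-generativeai Python package to call Gemini 1.5 Flash for a simple summarisation prompt via Google AI Studio; the API key placeholder and the example prompt are illustrative, not taken from Google's announcement.

import google.generativeai as genai

# Configure the client with a key created in Google AI Studio
# (placeholder value - substitute your own key).
genai.configure(api_key="YOUR_API_KEY")

# Select the lightweight Flash model announced at I/O.
model = genai.GenerativeModel("gemini-1.5-flash")

# Ask Flash to summarise a conversation, one of the use cases
# Google highlights for the model.
response = model.generate_content(
    "Summarise this conversation in one sentence: "
    "Alice asked Bob to move the product launch to Friday, and Bob agreed."
)
print(response.text)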

Gemini 1.5 Flash

Meanwhile, Google also announced Gemma 2, a new generation of open models for "responsible AI innovation". It also updated its Responsible Generative AI Toolkit with LLM Comparator, a tool for evaluating the quality of model responses.

For more generative AI news, see the damp squib that was the KFC AI ad campaign.