Google Bard is switching to a more 'capable' language model, CEO confirms

Updates will come as soon as next week.


People haven't exactly been impressed in the short time since Google released its "experimental conversational AI service" Bard. Coming up against OpenAI's ChatGPT and Microsoft's Bing Chat (also powered by OpenAI's GPT-4), users have found its responses to be less knowledgeable and detailed than those of its rivals. That could be set to change, however, after Google CEO Sundar Pichai confirmed on The New York Times podcast "Hard Fork" that Bard will be moving from its current LaMDA-based model to larger-scale PaLM models in the coming days.

When asked how he felt about responses to Bard's release, Pichai commented: "We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, be it in reasoning, coding."

To frame the difference, Google said it had trained LaMDA with 137 billion parameters when it shared details about the language model last year. PaLM, on the other hand, was said to have been trained with around 540 billion parameters. Both models may have evolved and grown since early 2022, but the contrast likely shows why Google is now slowly transitioning Bard over to PaLM, with its larger scale and more diverse answers.

Pichai claims not to be worried about how fast Google's AI develops compared to its competitors'. When Bard first debuted in February, he acknowledged that its reliance on LaMDA kept it at a smaller scale, but framed the lower computing demands as a benefit, giving more users the chance to test it out and provide feedback. Pichai also gave assurances that Google would be doing its own analysis of Bard's safety and quality once it's provided with real-world feedback.

To that end, Pichai said Google doesn't want to release a "more capable model before we can fully make sure we can handle it well. We are all in very, very early stages. We will have even more capable models to plug in over time. But I don’t want it to be just who’s there first, but getting it right is very important to us."

That thought is on the minds of the more than 1,800 people (including tech leaders and AI researchers) who have signed an open letter calling for a minimum six-month pause on the development of AI technology "more powerful than GPT-4."

Pichai doesn't think this can be effectively done without involving the government, but agrees with the need for guidance: "AI is too important an area not to regulate. It’s also too important an area not to regulate well. So I’m glad these conversations are underway."
