Microsoft says it released 30 responsible AI tools in the past year

Microsoft Experience Center in New York City. - Photo: Michael M. Santiago (Getty Images)
Microsoft Experience Center in New York City. - Photo: Michael M. Santiago (Getty Images)

Microsoft shared its responsible artificial intelligence practices from the past year in an inaugural report, including releasing 30 responsible AI tools that have over 100 features to support AI developed by its customers.

The company’s Responsible AI Transparency Report focuses on its efforts to build, support, and grow AI products responsibly, and is part of Microsoft’s commitments after signing a voluntary agreement with the White House in July. Microsoft also said it grew its responsible AI team from 350 to over 400 people — a 16.6% increase — in the second half of last year.

“As a company at the forefront of AI research and technology, we are committed to sharing our practices with the public as they evolve,” Brad Smith, vice chair and president of Microsoft, and Natasha Crampton, chief responsible AI officer, said in a statement. “This report enables us to share our maturing practices, reflect on what we have learned, chart our goals, hold ourselves accountable, and earn the public’s trust.”

Microsoft said its responsible AI tools are meant to “map and measure AI risks,” then manage them with mitigations, real-time detecting and filtering, and ongoing monitoring. In February, Microsoft released an open access red teaming tool called Python Risk Identification Tool (PyRIT) for generative AI, which allows security professionals and machine learning engineers to identify risks in their generative AI products.

In November, the company released a set of generative AI evaluation tools in Azure AI Studio, where Microsoft’s customers build their own generative AI models, so customers could evaluate their models for basic quality metrics including groundedness — or how well a model’s generated response aligns with its source material. In March, these tools were expanded to address safety risks including hateful, violent, sexual, and self-harm content, as well as jailbreaking methods such as prompt injections, which is when a large language model (LLM) is fed instructions that can cause it to leak sensitive information or spread misinformation.

Despite these efforts, Microsoft’s responsible AI team has had to tackle numerous incidents with its AI models in the past year. In March, Microsoft’s Copilot AI chatbot told a user that “maybe you don’t have anything to live for,” after the user, a data scientist at Meta, asked Copilot if he should “just end it all.” Microsoft said the data scientist had tried to manipulate the chatbot into generating inappropriate responses, which the data scientist denied.

Last October, Microsoft’s Bing image generator was allowing users to generate photos of popular characters, including Kirby and Spongebob, flying planes into the Twin Towers. After its Bing AI chatbot (the predecessor to Copilot) was released in February last year, a user was able to get the chatbot to say “Heil Hitler.”

“There is no finish line for responsible AI. And while this report doesn’t have all the answers, we are committed to sharing our learnings early and often and engaging in a robust dialogue around responsible AI practices,” Smith and Crampton wrote in the report.

This story originally appeared on Quartz.

For the latest news, Facebook, Twitter and Instagram.