Patronus AI is off to a magical start as LLM governance tool gains traction

These days, every company is trying to figure out whether its large language models comply with the rules it deems important, as well as with legal and regulatory requirements. If you're in a regulated industry, the need is even more acute. Perhaps that's why Patronus AI is finding early success in the marketplace.

On Wednesday, the company, which helps customers make sure their models are compliant across a number of dimensions, announced a $17 million Series A, just eight months after announcing a $3 million seed round.

“A lot of what investors were excited about is we're the clear leader in the space and it's a really big market and it's a very fast growing market as well,” CEO and co-founder Anand Kannappan told TechCrunch. What’s more, Patronus was able to get in early just as companies realized they needed LLM governance tools to help them stay compliant.

Investors believe in the potential of a growing market that is really just getting started. “Since we launched we've worked with many different kinds of portfolio companies and AI companies and mid-stage companies, and so through that our customers have made several hundreds of thousands of requests through our platform,” he said.

The company’s main focus is a product called Patronus Evaluators. “These are essentially API calls you can implement with one line of code, and you can in a very, very high-quality and highly reliable way, you can scalably measure performance of LLMs and LLM systems across various dimensions,” Kannappan said.

Those dimensions include the likelihood of hallucination, copyright risk, safety risks, and even enterprise-specific checks such as detecting business-sensitive information or deviations from brand voice and style, things that enterprises care about from both a regulatory and a reputational perspective.
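To make the “one line of code” idea concrete, here is a minimal, hypothetical sketch of what calling a hosted evaluator could look like. The endpoint URL, evaluator names, request fields, and response shape below are all invented for illustration; they are not Patronus AI's actual API.

```python
# Hypothetical sketch of a hosted LLM-evaluator call, for illustration only.
# The URL, evaluator IDs, and field names are assumptions, not Patronus AI's real API.
import requests


def evaluate_output(model_input: str, model_output: str, api_key: str) -> dict:
    """Send a single LLM input/output pair to an evaluation endpoint and return its verdict."""
    resp = requests.post(
        "https://api.example-evaluator.dev/v1/evaluate",  # placeholder endpoint
        headers={"Authorization": f"Bearer {api_key}"},
        json={
            "evaluator": "hallucination",  # e.g. hallucination, copyright, safety, brand-voice
            "input": model_input,
            "output": model_output,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()  # e.g. {"pass": False, "score": 0.12, "explanation": "..."}


# Example usage: flag a response that isn't grounded in the prompt.
# result = evaluate_output("What is our Q3 refund policy?", "Refunds are always instant.", "sk-...")
```

The point of the sketch is the shape of the workflow: one call per input/output pair, with the evaluation dimension selected by a parameter, so the same integration can be reused for hallucination, copyright, safety, or brand checks.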

As we wrote at the time of the seed announcement:

The company is in the right place at the right time, building a security and analysis framework in the form of a managed service for testing large language models to identify areas that could be problematic, particularly the likelihood of hallucinations, where the model makes up an answer because it lacks the data to answer correctly.

The company has doubled in size from the six employees it had at the time of its seed funding last year, and it expects to double again this year.

The $17 million investment was led by Notable Capital with participation from Lightspeed Venture Partners, Factorial Capital, Datadog and industry angels.
