TechCrunch
The AI Safety Institute, the U.K.'s recently established AI safety body, has released a toolset designed to "strengthen AI safety" by making it easier for industry, research organizations and academia to develop AI evaluations.

Called Inspect, the toolset, which is available under an open source MIT License, aims to assess certain capabilities of AI models, including models' core knowledge and ability to reason, and to generate a score based on the results. In a press release announcing the news on Friday, the Safety Institute claimed that Inspect marks "the first time that an AI safety testing platform which has been spearheaded by a state-backed body has been released for wider use."