TechCrunch
OpenAI has formed a new committee to oversee "critical" safety and security decisions related to the company's projects and operations. According to a post on the company's corporate blog, the Safety and Security Committee will be responsible for evaluating OpenAI's safety processes and safeguards over the next 90 days. Its members are CEO Sam Altman; OpenAI board members Bret Taylor, Adam D'Angelo and Nicole Seligman; chief scientist Jakub Pachocki; Aleksander Madry, who leads OpenAI's "preparedness" team; Lilian Weng, head of safety systems; Matt Knight, head of security; and John Schulman, head of "alignment science."