Another OpenAI employee announced she quit over safety concerns hours before two execs resigned

  • Another OpenAI employee announced she quit over safety concerns, following two high-profile resignations.

  • Gretchen Krueger announced her resignation on X, echoing the concerns of Ilya Sutskever and Jan Leike.

  • Krueger said tech firms can "disempower those seeking to hold them accountable" by sowing division.

OpenAI faces more turmoil as another employee announces she quit over safety concerns.

The announcement comes after the resignations of high-profile executives Ilya Sutskever and Jan Leike, who co-led OpenAI's now-dissolved Superalignment safety research team.

Gretchen Krueger announced in a thread on X on Wednesday that she quit the ChatGPT maker on May 14 and that it "was not an easy decision to make."

Krueger wrote, "I resigned a few hours before hearing the news about @ilyasut and @janleike, and I made my decision independently. I share their concerns. I also have additional and overlapping concerns."

She added that more needs to be done to improve "decision-making processes; accountability; transparency; documentation; policy enforcement; the care with which we use our own technology; and mitigations for impacts on inequality, rights, and the environment."

The former policy researcher said these worries are shared by many people and communities, can "influence how aspects of the future can be charted, and by whom," and shouldn't be "misread as narrow, speculative, or disconnected."

In another post, she wrote, "One of the ways tech companies in general can disempower those seeking to hold them accountable is to sow division among those raising concerns or challenging their power. I care deeply about preventing this."

Last week, chief scientist and cofounder Ilya Sutskever announced his resignation, followed by executive Jan Leike. The pair co-led OpenAI's Superalignment team, which was tasked with building safeguards to prevent artificial general intelligence from going rogue.

Leike accused OpenAI of putting "shiny products" ahead of safety and claimed that the Superalignment team was "struggling for compute" resources and that "it was getting harder and harder to get this crucial research done."

The exits this week and last followed those of two other safety researchers, Daniel Kokotajlo and William Saunders, who quit in recent months for similar reasons. Kokotajlo said he left after "losing confidence that it [OpenAI] would behave responsibly around the time of AGI."

OpenAI didn't immediately respond to a request for comment from Business Insider.
