OpenAI puts ‘shiny products’ ahead of safety, says departing researcher

A former senior OpenAI employee said the company behind ChatGPT prioritized “shiny products” over safety, revealing he quit after disagreements over key goals reached a “tipping point.”

Jan Leike was a key safety researcher at OpenAI as co-head of its superalignment team, which worked to ensure that powerful AI systems align with human values and goals. His intervention comes ahead of next week’s global artificial intelligence summit in Seoul, where policymakers, experts and tech leaders will discuss oversight of the technology.

Leike resigned days after the San Francisco-based company released its latest AI model, GPT-4o. His exit makes him the second senior OpenAI employee to leave this week, following the resignation of Ilya Sutskever, OpenAI’s co-founder and the other co-head of superalignment.

Leike detailed the reasons for his departure in a thread on X posted on Friday, in which he said safety culture had become a lower priority.

“In recent years, safety culture and processes have taken a back seat to shiny products,” he wrote.

Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.

— Jan Leike (@janleike) May 17, 2024

OpenAI was founded with the goal of ensuring that artificial general intelligence, which it describes as “AI systems that are generally smarter than humans,” benefits all of humanity. In his X posts, Leike said he had been at odds with OpenAI’s leadership over the company’s priorities for some time, but that the standoff had “finally reached a tipping point.”

Leike said OpenAI, which also developed the Dall-E image generator and the Sora video generator, should invest more resources in issues such as safety, social impact, confidentiality and security for its next generation of models.

“These problems are quite difficult to solve, and I am concerned that we are not on a trajectory to get there,” he wrote, adding that it is getting “harder and harder” for his team to do research.

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI is shouldering an enormous responsibility on behalf of all of humanity,” Leike wrote, adding that OpenAI “must become a safety-first AGI company.”

Sam Altman, chief executive of OpenAI, responded to Leike’s thread with a message on X, thanking his former colleague for his contributions to the company’s safety culture.


“He’s right, we have a lot more to do; we are committed to doing it,” he wrote.

Sutskever, who was also OpenAI’s chief scientist, wrote in his X post announcing his departure that he was confident OpenAI “will build AGI that is both safe and beneficial” under its current leadership. Sutskever initially supported Altman’s ouster as head of OpenAI last November, before backing his reinstatement after days of internal turmoil at the company.

Leike’s warning came as an international panel of AI experts released an inaugural report on AI safety, which said there was disagreement over the likelihood of powerful AI systems evading human control. The report also cautioned that regulators could be left behind by rapid advances in the technology, warning of a “potential disparity between the pace of technological progress and the pace of regulatory responses.”
