2 min read

OpenAI Fires Two Researchers For Leaking Information

On April 11, 2024, The Information reported that Leopold Aschenbrenner and Pavel Izmailov were fired from OpenAI for leaking information. Exactly what they stand accused of leaking wasn't reported.
The cover image for Leopold's personal blog.

One is interested in First Amendment law and recently referenced the work of nuclear-secrets whistleblower Daniel Ellsberg.

Both were hired last summer to fill open positions on OpenAI's Superalignment team, led by Ilya Sutskever.

Leopold Aschenbrenner entered Columbia University at 15 and graduated as valedictorian at 19. He was the recipient of an Emergent Ventures grant and was a researcher on longtermism at the Global Priorities Institute. He received grants from the Centre for Effective Altruism, the Future of Humanity Institute, and Columbia University's Institute for Social and Economic Research and Policy (ISERP). His working paper on Existential Risk and Growth can be read here.

He has an interest in First Amendment law, according to his blog, and in 2021 he read whistleblower Daniel Ellsberg's The Doomsday Machine: Confessions of a Nuclear War Planner, which is interesting in light of what he's been accused of. This is probably just a coincidence, since nuclear war occupies the existential-risk space along with superintelligence.

On the other hand, why pick Ellsberg if his interest was nuclear war as an existential risk, when there are so many other authors on that topic?

The Ellsberg reference was from a 2021 interview on the Ben Leoh Chats podcast (1:00:02).

Leopold Aschenbrenner (1:00:02): I think nuclear is very underrated. There's actually a great book by Daniel Ellsberg, the Pentagon Papers guy. He was the nuclear war planner, and he recently released a book about Confessions of a Nuclear War Planner. I think nuclear is underrated. I think partially the reason I'm very attuned to this is I feel proof from family history, I'm still very rooted in the Cold War or something like that. I think nuclear is underrated. I also think what is underrated is...

Pavel Izmailov started on the Superalignment team at OpenAI but, according to his GitHub profile, shifted over to work on reasoning. Pavel received a BSc in applied math and computer science from the Faculty of Computational Mathematics and Cybernetics at Lomonosov Moscow State University, a master's degree from Cornell, and his Ph.D. from NYU, where he will return in the fall of 2025 as an Assistant Professor in the Tandon CSE department and, by courtesy, the Courant CS department (Courant Computer Science is where Yann LeCun is a professor). Based upon his CV, he excelled in academics.

The last paper that Pavel and Leopold were both involved with came from the Superalignment Generalization team: "Weak-to-Strong Generalization: Eliciting Strong Capabilities with Weak Supervision."