Daniel Kokotajlo
Executive Director, AI Futures Project
About
Daniel Kokotajlo is an AI safety researcher and Executive Director of the AI Futures Project, a Berkeley-based nonprofit researching the future impacts of artificial intelligence. A former philosophy PhD student, he worked at AI Impacts and the Center on Long-Term Risk before joining OpenAI as a governance researcher in 2022. He resigned from OpenAI in 2024, forfeiting approximately $2 million in equity by refusing to sign a non-disparagement agreement, citing a loss of confidence that the company would behave responsibly around AGI development. He co-organized the Right to Warn initiative for AI whistleblower protections and published AI 2027, a detailed AGI forecast scenario read by over a million people.
Key Contributions
- Co-founded and leads the AI Futures Project nonprofit
- Published AI 2027, a detailed AGI forecast read by over 1 million people
- Resigned from OpenAI in 2024, forfeiting approximately $2M in equity rather than sign a non-disparagement agreement, to speak freely about AI safety
- Co-organized the Right to Warn initiative for AI whistleblower protections
- Wrote the prescient 2021 forecast "What 2026 Looks Like," anticipating the chatbot boom and inference-time scaling
Videos & Interviews
2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo
Dwarkesh Patel podcast — 3-hour deep dive on the AI 2027 scenario
Watch on YouTube
Why the AI Race Ends in Disaster (with Daniel Kokotajlo)
Future of Life Institute podcast on AI acceleration risks
Watch on YouTube
Daniel Kokotajlo on what a hyperspeed robot economy might look like
80,000 Hours podcast on AI 2027 updates and robot economy scenarios
Watch on YouTube