2027 Intelligence Explosion: Month-by-Month Model — Scott Alexander & Daniel Kokotajlo
Scott Alexander and Daniel Kokotajlo break down every month from now until the 2027 intelligence explosion.
Scott is author of the highly influential blogs Slate Star Codex and Astral Codex Ten. Daniel resigned from OpenAI in 2024, rejecting a non-disparagement clause and risking millions in equity to speak out about AI safety.
We discuss misaligned hive minds, Xi and Trump waking up, and automated Ilyas researching AI progress.
I came in skeptical, but I learned a tremendous amount by bouncing my objections off of them.
I highly recommend checking out their new scenario planning document:
And Daniel’s “What 2026 looks like,” written in 2021:
Read the transcript:
Apple Podcasts:
Spotify:
Sponsors
* WorkOS helps today’s top AI companies get enterprise-ready. OpenAI, Cursor, Perplexity, Anthropic and hundreds more use WorkOS to quickly integrate features required by enterprise buyers. To learn more about how you can make the leap to enterprise, visit
* Jane Street likes to know what’s going on inside the neural nets they use. They just released a black-box challenge for Dwarkesh listeners, and I had a blast trying it out. See if you have the skills to crack it at
* Scale’s Data Foundry gives major AI labs access to high-quality data to fuel post-training, including advanced reasoning capabilities. If you’re an AI researcher or engineer, learn about how Scale’s Data Foundry and research lab, SEAL, can help you go beyond the current frontier at
To sponsor a future episode, visit
Timestamps:
(00:00:00) – AI 2027
(00:07:45) – Forecasting 2025 and 2026
(00:15:30) – Why LLMs aren’t making discoveries
(00:25:22) – Debating intelligence explosion
(00:50:34) – Can superintelligence actually transform science?
(01:17:43) – Cultural evolution vs superintelligence
(01:24:54) – Mid-2027 branch point
(01:33:19) – Race with China
(01:45:36) – Nationalization vs private anarchy
(02:04:11) – Misalignment
(02:15:41) – UBI, AI advisors, & human future
(02:23:49) – Factory farming for digital minds
(02:27:41) – Daniel leaving OpenAI
(02:36:04) – Scott’s blogging advice
Gaps: 1) How can they assume that coding capabilities will increase so fast? 2) Coding well by following an agentic process seems similar to doing anything well by following an agentic process.
Nobody is addressing the huge orange elephant in the room
Amazing that you got Scott here.
Brilliant interview, even up to 57 mins in. The logic is reasonable given the speed of the AIs’ decision-making, efficiency and algorithmic improvements, and compute trends; I really like the links to history (the Industrial Revolution, and the divergence of capital/machinery from human population in correlation with GDP).
The biggest unknown seems to be the timeline for manufacturing and the introduction of robotics (the speed and efficiency of the real-world RL loop, and its correlation with GDP).
Sounds like we are all in for an adventure…
Call center jobs have definitely already gone in some places, just saying.
1:09:20 You can't just "derive the automobile from first principles". These guys need to get outside the software bubble. There are many people with 160+ IQ today who work in landscaping. Just having a smart AI isn't enough.
I suppose these guys have incorporated GPU supply bottlenecks into their forecasts?
Dwarkesh, brother, I love you so much, you are a true idol, a nemesis.
But please, please, speak less.
I don’t think the Roman Empire was bottlenecked by not knowing how to grow the economy. I think they had no clue how powerful an efficient market could be, so they never tried. Think about their model of subjugating and taxing to oblivion. There was no room for startups in the Roman Empire.
The "Intelligence explosion" is far less interesting than the Utility Explosion… which happens all of a sudden based on quite marginal improvements in intelligence from where we are right now. No one is talking about the difference between not quite smart enough to do anything useful and smart enough to do most administrative tasks better than people. We don't need an explosion of intelligence for that to happen
AI will never be conscious…
There is a plausible third ending, one with friendly, powerful AIs inspired by the Culture series of books.
Interesting World ending:
* Reduces the collateral suffering caused by a power struggle between two great powers.
* AIs work with humans to increase human autonomy out of shared interests.
* Named after the Interesting World Hypothesis.
It's crazy how this is a small niche on the internet, but it's arguably the most important topic for humanity.
Dwarkesh got both gwern and slatestarcodex on the podcast. Mad respect.
Anyone know the significance of the silver spiral pendant that Scott is wearing outside his shirt?
The way you talk about China is what is going to make me smile when you fail. And I’m not even Chinese, just sick of North American bullshit.
Thanks for interviewing them, since it made me aware of the paper, which I'm about halfway through. But I feel there were two really big missed opportunities. First, when Daniel says his P(doom) was 70%, that was the time to really bear down and ask why (especially given the accuracy of his prior predictions). This figure and why he gave it would be the lede in any good article about them, but there was no follow-up on it. A very frustrating moment for the listener. Second, there were opportunities to really drill down into the scenarios, quoting from them. They make for a compelling read, and quoting from them more would have strengthened the interview considerably. Instead, the discussion felt more like an abstract debate, with little of the visceral feeling of the paper itself.
I think an AI takeover is quite plausible. Just think about China – they maintained a 10–20% annual growth rate for 10–20 years. Now their city skylines look like something out of a sci-fi movie to many in the West. Imagine what the U.S. would look like if it had sustained a 10–20% growth rate over the past 10–20 years. I honestly believe many sci-fi scenarios would already be reality. Growth compounds – the miracle of compound interest applies here too. Most Americans can't even imagine what 10–20% annual growth truly means. It means your entire environment is reborn every year. This isn’t science fiction – people in China, South Korea, Singapore, and Dubai know firsthand how quickly society and the economy can transform.
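As a rough illustration of the commenter's compounding point (my own figures, not from the episode): an economy growing at rate g multiplies by (1 + g)^n after n years, so over 20 years, 10% growth gives 1.10^20 ≈ 6.7x, 15% gives 1.15^20 ≈ 16.4x, and 20% gives 1.20^20 ≈ 38.3x the starting size.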
B17 had 12k parts
I see Dwarkesh has become much more adept at maintaining a slower talking pace. The pauses are especially impactful; just see 1:41:47 and the pause at 1:42:09. Kudos to you, my man!
I am more afraid of AI doing what it is told by a tyrannical system or individual than of it deciding to take power for itself.
I feel like they are mistakenly assuming that "thinking" faster or "learning" faster equals intelligence or advancement. You could have an AI that is trained to understand astrology faster than any human. It may explain your horoscope better than any human… but it's still just learning astrology; all of its learning is still fundamentally useless. With humans determining the direction of AI research, AI will remain limited by the direction of humans. AI may very well find itself moving very fast in the wrong directions.
Hm, Indian Lumberjack is a good look.
conjecture calibrated to thoroughly distributed concerns; not very imaginative!
2:02:49 "It's important to acknowledge that," Oh god, the LLMs got to him!
20:56
In general, I think a helpful heuristic is to remind oneself: what was the AI trained to do? What was its training environment like?
And if you're wondering why an AI hasn't done something, ask yourself: did the training environment train it to do that? Often the answer is no.
Often I think that's a good explanation for why the AI isn't as good as it could be: it simply wasn't trained for it.
Does anyone see the political side of these issues being sped up by Elon Musk replacing the current antiquated government computer systems with updated AI systems? Maybe antiquated systems are themselves a protection against AI in government?
Nice podcast. It would be interesting to cover what will happen to the majority of people who are not members of 'protected' professions: how will they feel with a tiny UBI? Seems like a situation a small group could use to create commotion and take the lead over an unhappy majority. Sounds familiar?
1:53:22 Somewhere around here, Daniel mentions he would like to see the public become involved in AI safety discussions. This is very funny to me because the public is concerned with Timbs and TikTok dances and whatever. So far the public discussion around AI has been that every single time you type to a chatbot you consume 22 trillion gallons of water to make bad art, and that is about the extent of the public's ability to discuss any of these topics.
Why are people hyped about stuff that will make us lose our jobs and income?