Julian Schrittwieser
Julian recently highlighted a striking trend: the length of AI tasks that models can complete autonomously has been doubling roughly every seven months. That is, the horizon of what counts as a tractable software problem keeps stretching at an exponential pace. But here’s the critical link—this cadence mirrors another, deeper curve: the relentless growth of compute budgets.
The Compute Exponent
Since 2010, the compute used in frontier AI training runs has doubled about every 6 months. OpenAI’s 2018 “AI and Compute” analysis found an even steeper 3.4-month doubling over 2012–2018 (AlexNet through AlphaGo Zero). Updated datasets (Epoch AI; Sevilla et al.) put the recent rate closer to 5.7–6 months. That equates to roughly a 4× increase per year, far outpacing Moore’s Law, which delivered about 2× every two years.
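A quick way to sanity-check the per-year figure: a doubling time of d months implies an annual multiplier of 2^(12/d). A minimal Python sketch, using the approximate doubling times cited above:

```python
# Annual growth multiplier implied by a doubling time of d months: 2 ** (12 / d)
for label, d in [("AI and Compute era (3.4 mo)", 3.4),
                 ("Epoch-style estimate (5.7 mo)", 5.7),
                 ("round figure (6 mo)", 6.0)]:
    print(f"{label}: ~{2 ** (12 / d):.1f}x per year")
```

The 6-month figure works out to 4× per year; the early-era 3.4-month figure to roughly 11×.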
Overlay Julian’s “7-month task horizon doubling” and you get a resonance: capability and compute are climbing almost in lockstep.
Efficiency as a Second Exponent
Raw FLOPs aren’t the whole story. Algorithmic efficiency, the ability to reach the same performance with less compute, has also doubled about every 16 months. Between 2012 and 2019, OpenAI measured a 44× efficiency gain in training to AlexNet-level accuracy on ImageNet. Put differently: even if compute growth slowed, capabilities would still ratchet forward thanks to smarter architectures, optimization methods, and training tricks.
Capabilities grow as a product of these exponents:
- Compute doubling: ~6 months.
- Task horizon doubling: ~7 months.
- Efficiency doubling: ~16 months.
It’s multiplicative, not additive. That’s why Julian’s log-plots look eerily linear: the world is running on compound exponents.
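A minimal sketch of how these rates compound over a fixed horizon. The doubling times are the approximate figures above; treating “effective capability” as the simple product of compute and efficiency gains is my illustrative simplification, not a result from the underlying studies:

```python
def multiplier(doubling_months: float, horizon_years: float) -> float:
    """Growth multiplier implied by a constant doubling time over a horizon."""
    return 2 ** (horizon_years * 12 / doubling_months)

horizon = 5  # years
compute = multiplier(6, horizon)      # raw training compute, ~6-month doubling
efficiency = multiplier(16, horizon)  # algorithmic efficiency, ~16-month doubling

# The exponents multiply: effective compute = raw compute x efficiency gains.
print(f"Over {horizon} years: compute ~{compute:,.0f}x, "
      f"efficiency ~{efficiency:.0f}x, effective ~{compute * efficiency:,.0f}x")
```

On a log axis each factor is a straight line, and their product just adds the slopes, which is why the combined curve still plots as a straight line.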
Forward Projection: 10–20 Years
Projecting these curves is speculative, but three scenarios illustrate the terrain (a short sketch after the list shows how to run the compounding arithmetic yourself):
1. Status Quo Scaling
- Compute: 6–8 month doubling through 2030, slowing to ~12 months after.
- Impact: 32–128× more compute by 2030, ~300–3,000× by 2035.
- Power: AI datacenter demand climbs to 2–3× by 2030, still within grid-expansion forecasts but regionally disruptive.
2. Capital-Constrained
- Compute: 12–18 month doubling (bottlenecks in GPUs, fabs, siting).
- Impact: 6–64× in 10 years.
- Power: AI’s share of datacenter demand rises, but stays closer to IEA “base case.”
3. Aggressive Acceleration
- Compute: 4–6 month doubling until 2028, then 9–12 months.
- Impact: 1,000–8,000× in a decade.
- Power: risks overshooting current energy build-outs without massive nuclear/renewable deployments.
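As referenced above, here is a hedged sketch of the compounding arithmetic behind projections like these. The phase schedule below is one illustrative reading of the status-quo scenario, assuming a 2025 baseline; the exact multiplier is sensitive to the assumed start year and rates, which is why the scenarios are stated as wide ranges rather than point estimates:

```python
def compound(phases: list[tuple[float, float]]) -> float:
    """Total compute multiplier for a schedule of (years, doubling_time_months) phases."""
    total = 1.0
    for years, doubling_months in phases:
        total *= 2 ** (years * 12 / doubling_months)
    return total

# e.g. ~8-month doubling for the first five years, then ~12-month doubling after
print(f"~{compound([(5, 8), (5, 12)]):,.0f}x over the decade")
```

Swapping in the capital-constrained or aggressive schedules is just a matter of changing the phase list.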
Bubble or Not?
Every exponential curve invites the same question: is this a bubble? The comparison to dot-com or crypto bubbles is natural. But the distinguishing factor here is that AI capabilities are not speculative abstractions—they are already producing measurable, valuable output. Models are writing code, designing drugs, optimizing logistics, and automating knowledge work. These are not promises of future utility; they are present-day functions.
Still, there are “bubble dynamics” at the edges:
- Capital concentration: Trillions are being funneled into chips, datacenters, and startups—faster than most sectors can absorb productively.
- Valuation froth: Multiples on frontier labs and semiconductor firms assume near-perfect continuation of the exponent.
- Energy and resource limits: Physical bottlenecks (grid capacity, water for cooling, rare earths) may force abrupt slowdowns that investors don’t price in.
The implication: AI is not a fake bubble like tulips or meme coins, but it may be a productive bubble—akin to railroads or electrification. Enormous overbuild, inevitable shakeouts, but also irreversible infrastructure that reshapes economies.
Investor Alpha: Leading Indicators That a Pop Is Near
Below are pragmatic, measurable signals that the AI buildout is heading for a correction (even if only a temporary one). For each, I note what to watch and what would constitute a red flag:
- Bookings vs. billings (equipment makers)
  - Watch: quarterly bookings, book-to-bill ratios, and backlog trajectories for lithography and back-end equipment vendors.
  - Red flag: two consecutive quarters of bookings deceleration, or book-to-bill below 1.0 while backlogs shrink, suggesting fab and packaging over-ordering is unwinding.
- Foundry/packaging utilization
  - Watch: leading-edge utilization at top foundries and OSATs; guidance on HPC/AI vs. handset/PC mix.
  - Red flag: utilization slipping below the 80–85% “healthy” zone, or CAPEX cuts tied to weaker AI demand, implying near-term digestion.
- Hyperscaler CAPEX vs. AI revenue realization
  - Watch: cloud providers’ AI/“GenAI” revenue disclosures vs. AI CAPEX growth.
  - Red flag: a widening gap where CAPEX grows 40–60% Y/Y but recognized AI revenue grows only in the teens, signaling monetization risk and ROI scrutiny ahead.
- Component pricing & lead-time rollover (HBM, NICs, advanced substrates)
  - Watch: spot/contract prices and lead times across the AI server stack.
  - Red flag: synchronized price declines and normalizing lead times across multiple components, the classic sign that demand is easing relative to supply.
- Inventory days (accelerators, servers, memory)
  - Watch: channel and customer inventory days in OEM/ODM earnings and hyperscaler commentary.
  - Red flag: a sharp inventory build alongside stable shipments, a prelude to price cuts or order push-outs.
- Enterprise adoption surveys vs. spending
  - Watch: CIO surveys on production deployments, data readiness, and realized ROI.
  - Red flag: persistent reports of low ROI and data-readiness shortfalls even as budgets expand, pointing to elongating payback periods.
- Power & siting frictions
  - Watch: utility interconnection queues, PUE trends, moratoriums or delays in key metros, and wholesale power spikes.
  - Red flag: mounting grid constraints or curtailments that push projects out by more than 12–18 months, driving CAPEX deferrals and geography-driven slowdowns.
- Macro risk appetite
  - Watch: margin debt, high-beta factor performance, first-day IPO pops, and credit spreads.
  - Red flag: margin debt rolling over and credit tightening while AI leaders still trade at peak multiples, a typical late-cycle tension.
- Regulatory shocks
  - Watch: export controls, data-residency mandates, content-liability regimes, or safety rules that directly restrict model scale or chip supply.
  - Red flag: sudden scope expansions (e.g., new export bans or licensing regimes) that disrupt expected hardware shipments or model training plans.
- Narrative cracks in earnings language
  - Watch: subtle guidance shifts such as “normalizing demand,” “elongated qualification,” “customer digestion,” and “mix headwinds.”
  - Red flag: multiple adjacent suppliers and customers using the same cautionary language within a single earnings season, the kind of coordination you only see at inflection points.
How to synthesize: Create a simple dashboard with five buckets—Supply (bookings/utilization), Demand (hyperscaler AI revenue, surveys), Pricing (HBM/NICs), Balance Sheets (inventory/margin debt), and Constraints (power/regulation). When 3+ buckets flash red for two consecutive quarters, the odds of a correction jump materially.
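A minimal sketch of that dashboard logic. The five bucket names and the “3+ buckets red for two consecutive quarters” trigger come from the text above; the data layout, quarter labels, and sample readings are hypothetical:

```python
BUCKETS = ["Supply", "Demand", "Pricing", "Balance Sheets", "Constraints"]

# red_flags[quarter][bucket] = True if that bucket flashed red that quarter (sample data)
red_flags = {
    "2025Q3": {"Supply": True, "Demand": False, "Pricing": True,
               "Balance Sheets": True, "Constraints": False},
    "2025Q4": {"Supply": True, "Demand": True, "Pricing": True,
               "Balance Sheets": True, "Constraints": False},
}

def correction_warning(prev_q: str, curr_q: str, threshold: int = 3) -> bool:
    """True if `threshold` or more buckets were red in both of two consecutive quarters."""
    red_in_both = [b for b in BUCKETS
                   if red_flags[prev_q].get(b) and red_flags[curr_q].get(b)]
    return len(red_in_both) >= threshold

print(correction_warning("2025Q3", "2025Q4"))  # True: Supply, Pricing, Balance Sheets red twice
```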
Why This Matters for Technesis
Technesis—technology’s inevitable emergence alongside sentience—plays out here in purest form. Compute, efficiency, and task length are three interlocking gears. Each turn multiplies the others. The result: an exponential cultural force that policymakers, investors, and even many technologists still underestimate.
We’re not simply watching bigger models. We are watching time itself compress: problems that once took human teams months shrink into hours of machine work. By 2035, the cumulative effect of these exponents could be 1,000×–10,000× today’s frontier.
Whether we call it a bubble or an industrial revolution, the trajectory is the same: society must grapple with the compounding tempo of Technesis.