The Future of AI Is Infrastructure, Not Apps

For years, AI was framed as a software story: smarter algorithms, better models, faster inference. That framing is now outdated. The future of AI is increasingly defined by physical infrastructure—power, land, chips, cooling, and grid access.

Modern AI systems are constrained less by ideas than by electrons. Training and operating large models require data centers running at industrial scale, often measured in hundreds of megawatts or more. This has shifted the center of gravity of AI from Silicon Valley engineering teams to utilities, energy regulators, equipment manufacturers, and local governments.

This infrastructure reality explains why AI investment cycles behave more like energy or telecom build-outs than traditional software booms. Once projects are financed and permitted, they tend to proceed regardless of short-term demand fluctuations. Capacity arrives because capital is already committed.

The implication is important: future AI growth will be steadier but more capital-intensive. Returns will accrue to those who control scarce inputs—power interconnection, grid equipment, specialized manufacturing—not just those who write code.

The next phase of AI competition will be decided as much in zoning hearings and transformer factories as in research labs.


Why AI Growth Will Be Slower—and More Durable—Than Headlines Suggest

AI adoption is often portrayed as either explosive or disappointing. In reality, the future looks more incremental and durable.

General-purpose technologies rarely deliver instant, economy-wide productivity gains. Instead, value compounds gradually as organizations redesign workflows, retrain staff, and restructure incentives. AI follows this pattern.

While early adoption has been rapid, most organizations still use AI at the margins—summarization, search, support tools—rather than as a core operating layer. That limits short-term financial impact but builds familiarity and institutional learning.

This slower diffusion is not a weakness. It reduces systemic risk and makes AI growth less dependent on speculative behavior. As AI becomes embedded into routine processes, it becomes harder to unwind—and less likely to “crash” in the dramatic sense often implied.

The future of AI is not exponential forever. It is cumulative.

The AI Power Constraint Will Shape the Next Decade

Electricity has quietly become one of the most important constraints on the future of artificial intelligence.

As AI models grow larger and inference volumes rise, power availability increasingly determines where AI capacity can be built. Modern data centers supporting AI workloads operate at industrial scale, often requiring hundreds of megawatts of continuous power. This is not a software bottleneck—it is a grid, generation, and transmission problem.

Regions with surplus generation, fast interconnection timelines, and regulatory clarity are now structurally advantaged. Conversely, traditional technology hubs with aging grids, limited transmission capacity, or slow permitting processes may struggle to scale AI infrastructure, regardless of talent availability or capital.

This shift has strategic implications. AI investment is increasingly flowing toward jurisdictions with reliable baseload power, access to natural gas or nuclear generation, and predictable utility regulation. In practice, this means parts of the U.S. Midwest, Southeast, and energy-rich international regions are becoming AI infrastructure hubs—often far from legacy tech clusters.

Power constraints also introduce economic discipline. Electricity costs, grid congestion fees, and curtailment risk now directly affect AI unit economics. As a result, model efficiency, hardware optimization, and workload scheduling are becoming competitive advantages rather than engineering afterthoughts.
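To make that unit-economics point concrete, here is a minimal back-of-envelope sketch. Every figure in it is a hypothetical assumption for illustration (per-GPU power draw, throughput, PUE overhead, and the two electricity prices), not measured data from any real deployment.

```python
# Back-of-envelope sketch: how electricity price feeds into AI serving cost.
# All numbers below are illustrative assumptions, not measurements.

def energy_cost_per_million_tokens(
    gpu_power_kw: float,           # assumed average draw per GPU, in kW
    tokens_per_gpu_second: float,  # assumed inference throughput
    price_per_kwh: float,          # electricity price, in $/kWh
    pue: float = 1.3,              # assumed power usage effectiveness (cooling/overhead)
) -> float:
    """Electricity cost, in dollars, to serve one million tokens on one GPU."""
    seconds = 1_000_000 / tokens_per_gpu_second
    kwh = gpu_power_kw * pue * seconds / 3600  # energy consumed, facility-level
    return kwh * price_per_kwh

# Identical hardware and workload, two hypothetical electricity markets:
cheap = energy_cost_per_million_tokens(0.7, 400, 0.04)   # low-cost region
pricey = energy_cost_per_million_tokens(0.7, 400, 0.16)  # congested grid

print(f"${cheap:.4f} vs ${pricey:.4f} per million tokens")
```

Under these assumptions the same workload costs four times as much in the expensive market, which is why siting, efficiency, and workload scheduling show up directly in margins rather than remaining engineering afterthoughts.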

Over the next decade, AI strategy and energy policy will converge. Governments that treat AI as a long-term electricity planning challenge—rather than a purely digital opportunity—will attract more durable investment. Companies that secure power early and build relationships with utilities will enjoy advantages that competitors cannot quickly replicate.

The future of AI will be shaped not just by algorithms, but by electrons.


Why the “AI Bubble” Debate Misses the Point

The question “Is AI a bubble?” assumes a binary outcome: either AI succeeds spectacularly or collapses under its own hype. That framing misunderstands how general-purpose technologies typically evolve.

True bubbles involve speculative demand detached from real economic use. AI does not fit that definition. It is already embedded in customer service, software development, logistics, fraud detection, marketing, and research workflows. The debate is not whether AI has value—it clearly does—but whether expectations are aligned with the pace of value realization.

What is more likely than a crash is a repricing of timelines. Infrastructure has been built on assumptions of rapid utilization growth and fast enterprise monetization. If those assumptions prove optimistic, the adjustment will come through slower capital deployment, valuation compression, or project delays—not widespread abandonment of AI.

Importantly, AI infrastructure does not become useless if revenue growth slows. Data centers, power connections, and compute assets remain productive over long lifecycles. That distinguishes AI from past speculative episodes tied to short-lived assets or purely financial engineering.

The real risk is not technological failure, but financial mismatch: short-term return expectations applied to long-duration assets. When that mismatch is corrected, markets can feel volatile even if the underlying technology remains sound.

The “bubble” question distracts from the more relevant one: who carries the downside if timelines stretch?


AI Returns Will Lag Adoption—and That’s Normal

One of the most common sources of skepticism around AI is the gap between rapid adoption and uneven financial results. Many organizations report widespread AI use but modest or unclear bottom-line impact.

This is not unusual. Historically, general-purpose technologies—from electricity to enterprise software—delivered productivity gains gradually. Early value appears in narrow tasks, local efficiencies, and time savings, rather than immediate margin expansion.

AI often improves how work is done before it changes what work is done. Organizations may save hours, reduce errors, or improve responsiveness without immediately capturing those gains in financial statements. In many cases, savings are absorbed into higher service levels or redeployed labor rather than eliminated costs.

Meaningful returns usually require deeper change: redesigned workflows, revised performance metrics, new decision rights, and updated governance. Without those changes, AI remains a layer of augmentation rather than transformation.

This lag does not imply failure. It reflects the reality that organizational change moves slower than technology deployment. Over time, as processes are re-engineered and incentives realigned, the same AI tools can deliver materially higher returns.

The future of AI returns is less about model breakthroughs and more about managerial execution.
