Last week, Sam Altman stood at BlackRock’s 2026 US Infrastructure Summit in Washington DC and said something that has been ricocheting around the internet ever since. “We see a future where intelligence is a utility, like electricity or water,” he said, “and people buy it from us on a meter.”
The crowd nodded. Tech Twitter argued about it for days. The Reddit thread in r/singularity collected 1,700 comments in 48 hours, which is a number you do not hit unless something genuinely landed with people.
I have been watching how AI companies talk about their business models for a while now, and the framing deserves a closer look.
Not because Altman is necessarily wrong about the direction things are heading, but because the analogy he chose to make his case accidentally argues the opposite of what he wants to say.
If intelligence is going to be a utility like electricity, a lot of what that actually means should concern OpenAI’s investors far more than its critics.

What Sam Altman Actually Said
To understand what is at stake, it helps to know the full context of where and why he said this. From what I can tell, the venue was not an accident.
The quote came from Altman’s appearance at BlackRock’s 2026 US Infrastructure Summit, a conference aimed at large-scale investment in physical infrastructure. The audience was not tech enthusiasts. It was institutional investors, fund managers, and the kind of people who finance power grids.
His pitch centered on one idea: AI is the next great infrastructure layer, on par with electricity or water systems, and OpenAI is positioning itself as the company that will deliver it.
The business model centers on tokens, the units AI models use to process text. “Fundamentally our business,” Altman said, “is going to look like selling tokens from various AI models at different price points and capability levels.”
He also floated a longer-term goal that has gotten less attention: making intelligence “too cheap to meter.”
That phrase is lifted directly from a 1954 speech about nuclear power, in which Lewis Strauss, then chairman of the US Atomic Energy Commission, predicted electricity would eventually be so cheap it would not be worth billing for. It never happened. Instead, electricity became one of the most tightly regulated industries in every developed country on earth.
What OpenAI's Current Pricing Actually Looks Like
Before getting into the bigger picture, here is where things stand today. The metered model Altman describes is already partially in place for developers:
| Plan | Price | Who It Is For | Key Limits |
|---|---|---|---|
| Free | $0/mo | Casual users | Rate-limited, older model, no advanced tools |
| Plus | $20/mo | Regular users | Message caps during peak hours |
| Pro | $200/mo | Power users | Near-unlimited access to all models |
| API pay-per-use | Per million tokens | Developers and businesses | Costs scale directly with usage |
The API tier is already a metered utility in everything but name. The bigger question Altman is raising is whether this consumption-based model will eventually replace flat subscriptions for everyone.
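To make the metered model concrete, here is a rough sketch of the arithmetic behind per-token billing. The rates below are hypothetical placeholders for illustration, not OpenAI's published prices; check the current pricing page before relying on any numbers.

```python
def monthly_api_cost(input_tokens: int, output_tokens: int,
                     price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimate a monthly API bill from token counts.

    Prices are dollars per million tokens; the figures passed in
    below are invented for illustration, not real rates.
    """
    return (input_tokens / 1_000_000) * price_in_per_m + \
           (output_tokens / 1_000_000) * price_out_per_m

# A chatbot handling 2,000 conversations a month, each averaging
# 3,000 input tokens and 1,000 output tokens:
cost = monthly_api_cost(
    input_tokens=2_000 * 3_000,    # 6M tokens in
    output_tokens=2_000 * 1_000,   # 2M tokens out
    price_in_per_m=2.50,           # hypothetical $/1M input tokens
    price_out_per_m=10.00,         # hypothetical $/1M output tokens
)
print(f"${cost:.2f}")  # prints $35.00
```

The asymmetry between input and output pricing is the detail most people miss: verbose model responses can dominate the bill even when your prompts are short.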
The Hypocrisy Is Real and It Matters

The comment that hit 2,600 upvotes on Reddit was not about electricity. It was a quote from Altman himself, dated 2015:
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
That was OpenAI’s founding mission, written when the company was structured as a nonprofit. The phrase “unconstrained by a need to generate financial return” was not decorative. It was the entire point.
OpenAI was specifically created as a counterweight to for-profit AI labs that might race toward dangerous capabilities under commercial pressure.
OpenAI is now valued at $730 billion. It closed a $110 billion funding round in early 2026 with Amazon, NVIDIA, and SoftBank. And Altman was standing in front of infrastructure investors describing a future where his company meters and sells intelligence to the world.
The gap between “unconstrained by financial return” and “people buy it from us on a meter” is not a small pivot. It is a complete reversal of the founding premise, dressed up in the language of public good.
The Timeline of That Reversal
- 2015: OpenAI founded as a nonprofit to develop AI safely and openly, for humanity’s benefit
- 2019: Converted to a capped profit structure to attract investment, with investor returns capped at 100x
- 2023: ChatGPT reaches 100 million users; commercial pressure intensifies
- 2024: OpenAI begins converting to a for-profit corporation, removing the profit cap
- 2025: Raises at a $300 billion valuation; restructuring deal approved
- 2026: Altman at a BlackRock infrastructure summit describing intelligence as a metered product, with the company now valued at $730 billion
That is an eleven-year arc from “benefit humanity, unconstrained by profit” to “buy it from us on a meter.”
The Electricity Analogy Backfires Badly
Here is the thing about using electricity as your comparison: electricity did not stay in the hands of one company.
The comment that cut through loudest on Reddit captured it well: “The funny thing about the electricity analogy is that it actually destroys his own argument. Electricity got commoditized. No single company owns it.
Utilities are heavily regulated, prices are capped, and in most countries the grid is publicly owned or at least publicly controlled. So if intelligence really does become like electricity, that means OpenAI eventually becomes a regulated utility with thin margins and government oversight.”
That is exactly the problem with the metaphor, and it is one I am surprised more commentators have not pressed him on. Real utilities, the actual electricity, water, and gas systems, operate under a completely different set of rules from the one Altman is describing.
What Real Utility Regulation Would Mean for OpenAI
If AI intelligence were genuinely treated like electricity in the US, here is what that would look like in practice:
- A federal or state regulator would set maximum prices per token
- OpenAI would be required to provide service to all customers, including low-income ones, often below cost
- Infrastructure investment plans would require regulatory pre-approval
- The company could not change pricing or cut service without filing with the regulator
- Profit margins would be capped, typically in the 8 to 12 percent range at which US utilities operate
- Shareholders would receive regulated, predictable but modest returns, not venture-scale multiples
OpenAI’s investors who just valued it at $730 billion are not pricing in utility-style regulated margins. The electricity metaphor, taken seriously, is an argument for the exact kind of oversight that would collapse OpenAI’s current valuation.
The sharpest Reddit comment nailed it: “What Sam actually means is he wants to be the oil company, not the electric company. He just knows that sounds worse.”
What Metered Intelligence Means for You Right Now
For most people, the abstract debate about OpenAI’s corporate structure matters less than the practical question: what does this pricing trajectory mean for the AI tools you use every day?
The answer is that tiered access to AI is already here, and it is already stratifying by income.
Take ChatGPT’s current setup. The free tier gives you a degraded experience during busy periods. Plus users at $20 a month get better access but still hit message caps. Pro users at $200 a month get something closer to unconstrained access.
Developers building apps on OpenAI’s API pay for every token they process, which means every AI-powered feature in every product you use has a cost being passed along somewhere in the pricing chain.
AI companion tools like Nomi AI and Candy AI, for example, run on large language models. As OpenAI and other providers shift toward consumption-based pricing, the companies building on top of those models will pass those costs to subscribers.
What you pay for an AI subscription today already reflects what the underlying model provider charges for compute.
That relationship will only become more direct.
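To illustrate how those costs flow through, here is a hedged sketch of the pricing math an app builder faces. Every number in it is an invented assumption, not data from any real product or provider.

```python
def required_sub_price(tokens_per_user: int, provider_price_per_m: float,
                       target_margin: float) -> float:
    """Minimum monthly subscription price an app builder must charge
    to hit a target gross margin on model costs alone.

    All inputs are hypothetical; real apps also carry hosting,
    support, and payment-processing costs on top of this.
    """
    model_cost = tokens_per_user / 1_000_000 * provider_price_per_m
    return model_cost / (1 - target_margin)

# An AI companion app whose average user burns 5M tokens a month,
# at an assumed blended rate of $3 per million tokens, targeting
# a 70% gross margin:
price = required_sub_price(5_000_000, 3.00, 0.70)
print(f"${price:.2f}")  # prints $50.00
```

Run the same math with the provider rate doubled and the required subscription price doubles with it, which is why downstream apps reprice so quickly when the model layer does.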
I covered this dynamic when ChatGPT Plus started feeling like less value for the money. The value-per-dollar has shifted noticeably as usage caps have tightened while the sticker price stayed the same.
Who Benefits from Metered Pricing and Who Does Not
| User type | Current experience | In a metered world |
|---|---|---|
| Light users | Free tier, limited but functional | Could get genuinely cheap pay-as-you-go access |
| Moderate users | $20/mo Plus with occasional caps | Potentially higher cost if usage-based billing kicks in |
| Power users | $200/mo Pro, mostly unconstrained | Could save money if per-token costs drop enough |
| Developers | Pay per token already | No model change, just price shifts over time |
| Low-income users | Free tier is the ceiling | At risk if free tiers shrink or get paywalled |
| App builders | Pass token costs to subscribers | Margins squeezed as provider costs fluctuate |
The people a strict metered model hurts most are those who can only access the free tier today. In a fully usage-billed world, not being able to afford it means no access, the same way not paying your electric bill means your lights go out.
A March 2026 NBC News poll found AI’s net favorability at minus 20 points nationally, with 46% viewing AI negatively and only 26% positively. Among younger voters aged 18 to 34, the number was minus 44.
People already sense that the AI economy is not being built for them.
The Alternatives Are Getting Stronger

The counter-narrative to Altman’s vision is not just criticism. There is a real competing model gaining ground faster than most casual observers realize.
When Reddit commenters say they want local models to win, they are pointing at something real. The open-source AI movement, led by Meta’s Llama series and a growing number of labs releasing model weights freely, is building toward a world where you do not need to buy intelligence from anyone on a meter.
You run it yourself, on your own hardware, at zero per-query cost.
That future is already here for developers with the right setup. Ollama, LM Studio, and similar tools let you run capable models locally today.
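As a rough way to reason about when running locally pays off, here is a break-even sketch. The hardware price and cloud rate are invented assumptions, and it ignores electricity, maintenance, and the quality gap between local and frontier models.

```python
def breakeven_tokens(hardware_cost: float, cloud_price_per_m: float) -> float:
    """Tokens you would need to process before a one-time hardware
    purchase beats per-token cloud billing (electricity, upkeep,
    and model quality differences are deliberately ignored)."""
    return hardware_cost / cloud_price_per_m * 1_000_000

# Invented numbers: a $2,000 local GPU machine vs an assumed
# blended cloud rate of $5 per million tokens.
print(f"{breakeven_tokens(2_000, 5.00):,.0f} tokens")  # prints 400,000,000 tokens
```

At that assumed rate, the hardware pays for itself only after hundreds of millions of tokens, which is why local inference makes sense for heavy, routine workloads long before it makes sense for casual use.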
If you want to build AI-powered workflows without paying per token, tools like Dynamiq let you orchestrate open-weight models in a structured, production-ready way. I have written about how AI agents work and why they matter if you want to understand that layer before jumping in.
For most users, local models are not yet a drop-in replacement for the best frontier models. GPT-4 class performance still requires either paying OpenAI or owning serious hardware. But the gap is closing.
The quality gap on common benchmarks was around ten points 18 months ago and is closer to three or four today, and some open models are competitive on specific tasks right now.
The real question is whether that gap closes fast enough before OpenAI cements its position as the dominant infrastructure layer for intelligence.
That race is already underway, and its outcome will shape what the future of AI and work actually looks like.
How to Think About Your AI Spending Right Now
None of this means you should panic or cancel everything tomorrow. But a few things are worth getting right before the pricing model shifts further.
Here is how I would approach it as a regular AI user:
- Audit what you are actually using. If you are on a $20 or $200 plan, check your actual usage stats. If you are not hitting the limits that justify the tier, you may be paying for headroom you do not use.
- Spread across multiple tools. Locking into one provider makes you more exposed when prices change. Claude, Gemini, and Perplexity all have solid free or low-cost tiers. Using them in rotation hedges your exposure.
- Watch the token math on apps you subscribe to. Products built on OpenAI or Anthropic’s API will raise prices as their own underlying costs change. If a tool you rely on suddenly gets more expensive, that is often exactly why.
- Take open-source models seriously. They are not there yet for every use case, but they are further along than most people realize. Running a local model for routine tasks can offset the cost of keeping one paid subscription for heavier work.
- Follow the regulatory debate. Whether AI gets treated like a regulated utility is being actively discussed in Washington right now. Where that lands will affect every AI tool you pay for.
Concrete example of auditing your spend:
Before: “I pay $20 for ChatGPT Plus and $20 for Claude Pro and use both a few times a week.”
After: Log into each account and find your usage stats or message history. If you are hitting less than half your limits each month on either one, drop it to free and put the $20 toward the single tool you actually max out. You will almost certainly get the same output for less money.
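The audit logic above can be written down as a small comparison. The per-message cost here is a guess you would have to supply yourself; nothing in this sketch reflects any provider's real metered rate.

```python
def cheaper_plan(msgs_per_month: int, flat_price: float,
                 est_cost_per_msg: float) -> str:
    """Return which billing model is cheaper for a given usage level.

    est_cost_per_msg is your own estimate of what one message would
    cost under usage-based billing; it is an assumption, not a quote.
    """
    metered_total = msgs_per_month * est_cost_per_msg
    return "flat subscription" if flat_price < metered_total else "pay-as-you-go"

# A light user: ~40 messages a month at a guessed $0.10 per message.
print(cheaper_plan(40, 20.00, 0.10))     # prints pay-as-you-go ($4 vs $20 flat)
# A heavy user: ~1,500 messages a month at the same guessed rate.
print(cheaper_plan(1_500, 20.00, 0.10))  # prints flat subscription ($20 vs $150)
```

The crossover point is just `flat_price / est_cost_per_msg` messages per month; if your real usage sits well below that line, the flat plan is subsidizing headroom you never touch.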
For a broader look at which paid AI subscriptions actually hold their value right now, my roundup of paid AI tools worth keeping covers the current field.
