AI is at the PageRank moment
History doesn’t repeat itself, but it often rhymes
I was barely a year old when the internet was taking off, so most of my context here comes from reading and intuition. But the patterns feel familiar enough to be worth calling out. When Google won search, it wasn’t because they had the only algorithm. For a while, everyone had one: AltaVista, Yahoo. They were all good enough, and eventually they all started looking the same.
Google’s edge wasn’t just PageRank. In many ways, PageRank was the first clear milestone: not the first search algorithm, but the one that scaled effectively, showed the system really worked, became the common standard, and then drove value once paired with the right distribution and business model. It was distribution (default placement in browsers), monetisation (AdWords), and a product flywheel that made the experience sticky. The “search wars” ended once that combination clicked, and what followed was the real unlock: the information economy, digital marketing, ecommerce, and entire industries built on top of stable infrastructure.
The convergence
GPT‑5, Claude 4.1, Gemini 2.5, and Grok 4 all shipped in mid‑2025. Across providers, the patterns are increasingly similar: tool use/function calling, long contexts, retrieval/caching, structured outputs, and in some cases explicit routing between fast and deeper‑reasoning modes or multi‑agent variants (for example, Grok 4 Heavy). That convergence makes it feel like we’re only months away from frontier models that are close in day‑to‑day capability for most work.
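As a concrete taste of that shared surface, here’s a minimal sketch of one converging primitive, structured outputs, using OpenAI’s Python SDK. The model ID and schema are illustrative placeholders; the other providers expose the same capability with slightly different envelopes.

```python
# Minimal structured-output sketch (OpenAI Python SDK).
# The model ID and schema are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-2024-08-06",  # placeholder: any schema-capable model
    messages=[{"role": "user", "content": "Name two frontier models."}],
    response_format={
        "type": "json_schema",
        "json_schema": {
            "name": "model_list",
            "strict": True,
            "schema": {
                "type": "object",
                "properties": {
                    "models": {"type": "array", "items": {"type": "string"}},
                },
                "required": ["models"],
                "additionalProperties": False,
            },
        },
    },
)

print(resp.choices[0].message.content)  # JSON guaranteed to match the schema
```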
Public leaderboards and evaluation round‑ups show small margins between the top models rather than runaway winners. Pricing bands are also narrowing: consumer chat tiers cluster in the same range (with new low‑cost plans in some regions), and API pricing is trending down via cheaper SKUs and features like caching and batching. Meanwhile, the current way of scaling models appears to deliver diminishing returns. Training and infrastructure costs are huge, each model’s half‑life is short as competitors ship upgrades, and most labs are still pushing the same transformer recipe, which makes it hard to build a durable moat or justify relentless reinvestment.
Everything becomes a commodity
These research labs are already on a path to becoming infrastructure companies, something like “AWS for intelligence”. The APIs and features they now ship look less like experiments and more like standardised building blocks that any developer can plug into. This is what makes them feel like a commodity layer in the making.
Look at OpenAI’s API surface: file uploads, vector store access, structured responses, evals, tracing. It reads less like a “chatbot product” and more like cloud infrastructure. The primitives are being standardised for developers, not just end users. And it’s not just OpenAI. Anthropic, Google, xAI, Cohere, and Mistral are all exposing nearly identical primitives: tracing, evals, long contexts, caching, structured outputs, and tool use. Everyone is converging on the same infra surface.
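To illustrate how interchangeable these surfaces have become, here’s the same hypothetical weather‑lookup tool declared against both the OpenAI and Anthropic Python SDKs. The model IDs and tool name are placeholders; the point is that the request shapes are nearly identical.

```python
# The same hypothetical tool, declared for two providers.
# Model IDs are placeholders; API keys are read from the environment.
from openai import OpenAI
import anthropic

schema = {
    "type": "object",
    "properties": {"city": {"type": "string"}},
    "required": ["city"],
}

# OpenAI wraps tools in a {"type": "function", ...} envelope
# and calls the JSON schema "parameters".
OpenAI().chat.completions.create(
    model="gpt-4o",  # placeholder model ID
    messages=[{"role": "user", "content": "Weather in Lagos?"}],
    tools=[{
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": schema,
        },
    }],
)

# Anthropic's shape is nearly identical, minus the envelope,
# with the schema under "input_schema" instead.
anthropic.Anthropic().messages.create(
    model="claude-sonnet-4-20250514",  # placeholder model ID
    max_tokens=256,
    messages=[{"role": "user", "content": "Weather in Lagos?"}],
    tools=[{
        "name": "get_weather",
        "description": "Look up current weather for a city",
        "input_schema": schema,
    }],
)
```

Swapping providers is mostly a matter of renaming keys in the envelope, which is exactly what a commodity layer looks like.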
When the product looks the same and the price is the same, the moat shifts to:
- Who owns the rails (infra, APIs, data centres, developer platforms)
- Who owns the users (700m ChatGPT users, Microsoft’s enterprise channels, Google defaults, Elon’s X)
This is how commoditisation typically happens: once differentiation narrows, offerings look alike, pricing compresses, and competition shifts to who controls scale, rails, and distribution. It doesn’t happen overnight, but the trend is clear.
Companies are finding their positions in the market
There used to be real doubts about whether AI would stick, but the trajectory over the past few years is clearly different. People are searching more and asking longer, more complex questions; Google’s AI Overviews are actually showing more links, surfacing more websites, and driving higher‑quality clicks. ChatGPT, meanwhile, is becoming a discovery engine in its own right, guiding users to new products, services, and content. Other real use cases are surfacing: shoppers using AI to compare products across sites, travellers planning trips through conversational agents, students and researchers drafting work with copilots, and businesses integrating AI to triage support tickets or generate code. Enterprises are embedding copilots directly into workflows from Office to Salesforce to Google Workspace, which drives daily use.
What’s different now is that companies aren’t just experimenting; they are staking positions. Consumer apps like ChatGPT have reached hundreds of millions of users. Enterprise copilots are embedded across core productivity suites. Infrastructure providers resemble utilities, with APIs standardising across OpenAI, Anthropic, Cohere, Google, and others. Large incumbents are locking in distribution and partnerships: Microsoft with OpenAI, Amazon with Anthropic, Google with its own models. Categories are firming up, leaders are visible, and consolidation is underway. The market is no longer asking who has the smartest model; it’s asking who secures the strongest position before the window closes.
The near‑term feels familiar: convergence, commoditisation, distribution wars. The playbook is recognisable, much like the end of the search wars after PageRank proved itself: narrow differentiation at the model level, competition shifting to rails and distribution, and consolidation among a few large players. It’s the same rhythm: once the underlying models converge, the contest moves to scale and channels.