You have a great idea for a meme to share with your group chat. You’re not an artist, but luckily you have help: ChatGPT. You prompt the model and, after a few iterations, get it just right. You proudly post it to Discord or Slack, wherever. You have your joke, and you’re happy.
You know who isn’t happy? OpenAI.
Why? Well, you used ChatGPT to make content and then left their platform to go elsewhere. In the early days of ChatGPT, taking content to other websites drove user growth. But now, as the household name in large language models (LLMs), OpenAI wants ChatGPT to capture that traffic. OpenAI wants platform lock-in.
That lock-in means discovering, creating, sharing, and interacting with content within the OpenAI ecosystem. More and more, especially for younger people, AI models are becoming the front door to the internet. They are becoming the new encyclopedia, the new search engine, and, ultimately, the new browser.
The idea of LLM capabilities melding with a browser is not radical; Perplexity, the LLM-powered search engine, launched its browser, Comet, on July 9, 2025. There have been rumors of an OpenAI browser and even a full operating system.
The trend for LLMs and LLM platforms is integration. As LLMs are given more capabilities and responsibilities, more people use them. The more people use them, the more companies build out integrations and features. This positive feedback loop means that LLMs will take an increasingly large role in how humans interact with the web.
In the early days of the web, content was static and served unidirectionally. There were no interactive features, like comments or likes. This “read-only” version of the internet was retroactively assigned the moniker “Web 1.0.” In the early 21st century, Web 2.0 brought with it the interactivity we know today.
The thing is, Web 2.0 was built for humans to read. Think about JavaScript and HTML, tools created specifically to make it easier, and more appealing, for us to ingest content. But now there are new, non-human users. LLMs and “agents” need their own web, one geared to the way they ingest information, one with standardized content formats and interaction rules designed for AI. Enter: the Model Web.
Right now, LLMs can’t navigate the internet the way a human can. The tool-use and connection standards being developed today, the ones companies like OpenAI, Anthropic, and Google have built to let AI access websites, are likely only shims connecting the Model Web to the Human Web.
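Those shims are easy to picture. Here is a minimal sketch of one in Python, assuming nothing beyond the standard library: it fetches a page built for human browsers and flattens it into plain text a model can consume. The URL and the crude extraction logic are illustrative assumptions, not any vendor’s actual tooling.

```python
# A toy "shim" between the Human Web and the Model Web: fetch an HTML page
# built for people and flatten it into plain text for a model.
# The URL and extraction logic are illustrative, not any vendor's real tooling.
from html.parser import HTMLParser
from urllib.request import urlopen


class TextExtractor(HTMLParser):
    """Collect visible text, skipping script and style blocks."""

    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth and data.strip():
            self.chunks.append(data.strip())


def page_as_model_text(url: str) -> str:
    """Return a plain-text rendering of a human-oriented page."""
    html = urlopen(url).read().decode("utf-8", errors="replace")
    parser = TextExtractor()
    parser.feed(html)
    return "\n".join(parser.chunks)


if __name__ == "__main__":
    print(page_as_model_text("https://example.com"))  # placeholder target
```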
If the future of technology were that clean and linear, the next step would be the completion of a Model Web 1.0: essentially, a “read-only” web geared toward LLMs and agents. More likely, we’ll see a mix of Web 1.0 (standardized access and presentation) and Web 2.0 (interactivity) characteristics evolving at the same time.
The good news: large portions of the Human Web are already hosted on technologies such as WordPress, which abstract away almost all of the technical implementation and naturally separate content from form. Put simply: most websites are already built on platforms that make it easy to manage and publish content without worrying about the underlying code. Much of the groundwork for the Model Web has been laid by years of moving huge amounts of content creation and hosting onto a few hundred platforms instead of millions of bespoke solutions.
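That separation is already machine-readable in many cases. As a rough sketch, assuming a site exposes the standard WordPress REST API (most WordPress installs serve it at /wp-json/wp/v2/), a model-side consumer can pull content as structured JSON rather than scraping rendered pages; the site URL below is a placeholder.

```python
# Sketch: read content as structured JSON from a site running WordPress,
# assuming the standard REST API (/wp-json/wp/v2/) is enabled.
# The site URL is a placeholder.
import json
from urllib.request import urlopen

SITE = "https://example-blog.com"  # hypothetical WordPress site

with urlopen(f"{SITE}/wp-json/wp/v2/posts?per_page=5") as resp:
    posts = json.load(resp)

for post in posts:
    # Content and form arrive separately: the title and link are plain data,
    # with presentation left entirely to whoever renders them.
    print(post["title"]["rendered"])
    print(post["link"])
```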
Ultimately, the relationship between the Model Web and the Human Web must move from parasitic to symbiotic. The negative effects of today’s Model Web are real and already affecting the health of the Human Web. Sites not only fail to benefit from consumption by LLMs, they are actively hurt by the reduction in traffic and, consequently, ad revenue. Large players are attempting to counteract this erosion; Cloudflare, for example, has a product to monetize scraping. The long-term impact and viability of such efforts are uncertain, but it is clear today that the status quo will not stand.
On the flip side: individual websites and hosting platforms are more likely to add restrictions if they feel these AI companies are benefiting without giving anything back. LLMs and agents may get better at processing sites made for humans, but they can only consume what they’re allowed to access.
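The gatekeeping mechanism already exists in robots.txt. Below is a short Python sketch of the consuming side, checking access with the standard-library parser against the robots.txt tokens published for the major AI crawlers (GPTBot, ClaudeBot, Google-Extended); the site and path are placeholders.

```python
# Sketch: respect robots.txt before letting an agent read a page.
# The crawler tokens below are the published ones for OpenAI, Anthropic,
# and Google's AI-training opt-out; the site URL and path are placeholders.
from urllib.robotparser import RobotFileParser

SITE = "https://example-blog.com"  # hypothetical site
AGENTS = ["GPTBot", "ClaudeBot", "Google-Extended"]

robots = RobotFileParser(f"{SITE}/robots.txt")
robots.read()

for agent in AGENTS:
    allowed = robots.can_fetch(agent, f"{SITE}/posts/some-article")
    print(f"{agent}: {'allowed' if allowed else 'blocked'}")
```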
The incentives align for content producers and LLM providers to focus the Model Web on discovery and on directing traffic to the Human Web. Content producers will create AI-friendly “discoverability layers” that benefit both them and the models. Think of it as SEO for the Model Web. If a site’s content is easy for LLMs and agents to parse, more humans will find it through AI assistance. For those who do not adapt, it will be like not showing up in Google Search results.
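What that discoverability layer looks like is still unsettled. One plausible shape, sketched below with an entirely hypothetical file name and schema, is a small site-level index that hands models the titles, summaries, and canonical URLs they would otherwise have to scrape, so the humans they assist end up back on the site.

```python
# Sketch of a "discoverability layer": a machine-readable index a site could
# publish alongside its human-facing pages. The file name and schema here
# are hypothetical, not an existing standard.
import json

index = {
    "site": "https://example-blog.com",
    "updated": "2025-07-09",
    "entries": [
        {
            "url": "https://example-blog.com/posts/model-web",
            "title": "The Model Web",
            "summary": "Why LLMs need a web of their own.",
        },
    ],
}

# A model or agent could fetch this one small file instead of crawling every
# page, then send humans to the canonical URLs it lists.
with open("model-index.json", "w") as f:
    json.dump(index, f, indent=2)
```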
The losers in this new paradigm are traditional search engines; embedding models and ranking algorithms can be replaced by an archivist with intimate knowledge of the content available in the Model Web. Years of complaints about the quality of Google search, previously unassailable, have opened space for a personal librarian to topple the giant. A Model Web enables the kind of discoverability that Google and its competitors once only dreamed about. OpenAI and other platforms, including Google’s own DeepMind, are well-positioned to take on the Google search engine as the backbone of discoverability on the web.
This evolution brings up several big questions. What does social media made for model interaction look like? How do you make and consume content for this kind of platform? What are the new rules for user experience? It’s difficult to predict the dynamics of that scenario.
We probably will not wake up one day to a fully formed Model Web. It will creep in gradually through small integrations until it becomes the new normal. The incentives are already aligned. AI companies want LLMs to have content they can easily consume, and producers want traffic. The next era of the web will be more than humans clicking around in browsers. It will be humans and AI coexisting in a shared digital ecosystem, each shaping what the other sees.