We’ve been here before.
At so many pivotal moments in our adoption of digital technology, people and businesses have mistaken a company’s walled garden for the broader, more powerful network underneath. In the 1990s, many people genuinely believed AOL was the internet. When I left Facebook in 2013, hundreds of people asked how I would function “without the web.” Over and over, packaged products (operating systems, app stores, streaming services) eclipse quieter, less expensive, bottom-up alternatives like Linux or torrents. We forget they exist.
Today we’re making the same mistake with large language models.
To many of us, “AI” now means choosing among a handful of commercial LLMs such as ChatGPT, Claude, Gemini, or Grok, perhaps even choosing the one that matches our cultural or political sensibilities. But these systems share important structural limitations: they are centralized, expensive, energy-intensive operations that depend on massive data centers, rare chips, and proprietary data stores. Because they’re trained on roughly the same public internet, they also tend to generate the same generalized, flattened results. Companies using them wholesale often end up replacing their own expertise with recombinations of whatever is already out there.
This is how AI will do to businesses what social media did to publications, and what the early web did to retailers who went online without a strategy. Using the same generic tools as everyone else produces the same generic results. Worse, outsourcing core knowledge processes to a black-box service replaces the long-term development of internal capacity—especially junior employees learning through real practice—with cheaper but future-eroding automation.
The limits of centralized AI
Commercial language models are optimized for generality and scale. That scale is impressive, but it creates real constraints for organizations. Centralized LLMs require:
- Large volumes of training data scraped from the open web
- Expensive server infrastructure and power consumption
- Constant external connectivity
- Business models built around subscription, token fees, or upselling
For many companies, these models become another outsourced dependency. Every time a commercial LLM updates itself—which can happen weekly—your workflows change underneath you. Your proprietary data may be exposed to third-party APIs. And your differentiation erodes, because the model’s knowledge is drawn from the same public corpus available to your competitors.
Meanwhile, the narrative surrounding AI has encouraged businesses to believe that this centralized path is the only viable one—that achieving meaningful AI capability requires enormous data centers, billion-dollar training runs, and participation in a global race toward Artificial General Intelligence.
But none of this is a requirement for using AI productively.
A practical alternative already exists
You do not need frontier-scale models to benefit from AI. A growing ecosystem of open-source, locally deployable language models provides organizations with far more autonomy, privacy, and control.
A $100 Raspberry Pi, or any modest home or office server, can run a compact open-source model using tools like Ollama or GPT4All. These models don’t “learn” on the fly the way people do, but they can produce high-quality responses while remaining completely contained within your own environment. More importantly, they can be paired with a private knowledge base using retrieval systems, an approach usually called retrieval-augmented generation (RAG). That means the model can reference your own research library, internal documentation, or curated public resources like Wikipedia, without training on the entire internet and without sending your data to an external provider.
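To make this concrete, here is a minimal sketch of that retrieval pattern using Ollama’s Python client. It assumes Ollama is installed locally and that the llama3.2 and nomic-embed-text models have been pulled; the model choices and document snippets are illustrative placeholders, not a prescription.

```python
import math
import ollama  # pip install ollama; assumes a local Ollama install

# A stand-in for your private knowledge base: internal docs, research
# notes, policies. In practice these would be loaded from your own files.
documents = [
    "Our returns policy allows exchanges within 30 days of purchase.",
    "The Q3 field study found customers strongly prefer in-store pickup.",
    "Server maintenance windows are Sundays, 02:00 to 04:00 local time.",
]

def embed(text):
    """Turn text into a vector using a locally hosted embedding model."""
    return ollama.embeddings(model="nomic-embed-text", prompt=text)["embedding"]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Index the knowledge base once. Nothing leaves this machine.
index = [(doc, embed(doc)) for doc in documents]

def ask(question):
    # Retrieve the most relevant snippet from the private knowledge base...
    q_vec = embed(question)
    best_doc = max(index, key=lambda pair: cosine(q_vec, pair[1]))[0]
    # ...and hand it to the local model as context for its answer.
    reply = ollama.chat(
        model="llama3.2",
        messages=[
            {"role": "system",
             "content": f"Answer using this internal context:\n{best_doc}"},
            {"role": "user", "content": question},
        ],
    )
    return reply["message"]["content"]

print(ask("When do maintenance windows happen?"))
```

The design point is that both the index and the model live on hardware you control; swapping in a proper vector database or a larger model changes the scale, not the architecture.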
These systems build on your own data instead of extracting it, strengthen your institutional memory instead of commoditizing it, and run at a fraction of the cost.
This approach allows an organization to create an AI system aligned with its actual priorities, values, and domain expertise. It becomes a private assistant rather than a generalized product shaped by the incentives of a trillion-dollar platform. And the alternative doesn’t have to be a solitary effort.
Neighborhoods, campuses, or company departments can form a “mesh network”—a set of devices connected directly through Wi-Fi or cables rather than through the public internet. One node can host a local model; others can contribute or withhold their own data stores. Instead of a single company owning the infrastructure and the knowledge, you get something closer to a community data commons or a digital library system.
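One such pattern is simple enough to sketch. The snippet below assumes a neighboring node at the hypothetical LAN address 192.168.1.50 hosts a model with Ollama, started with OLLAMA_HOST=0.0.0.0 so it accepts connections from the local network; the address and model name are placeholders.

```python
import requests  # pip install requests

# Hypothetical neighbor on the local mesh; Ollama serves an HTTP API
# on port 11434 by default. No public internet connection is involved.
NEIGHBOR_NODE = "http://192.168.1.50:11434"

def query_shared_model(prompt):
    """Send a prompt to the model hosted on another node in the mesh."""
    resp = requests.post(
        f"{NEIGHBOR_NODE}/api/generate",
        json={"model": "llama3.2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(query_shared_model("Summarize this week's shared community notes."))
```

Each participating device decides for itself what to share: a node that hosts nothing can still query, and a node with a sensitive data store can simply keep it off the mesh.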
Projects like the High Desert Institute’s LoreKeeper’s Guild are already experimenting with this approach. Their “Librarian” initiative envisions local libraries acting as the data hubs for mesh-networked AI systems—resilient enough to function even during connectivity disruptions. But their deeper innovation is architectural. These systems give organizations access to powerful language capabilities without subscription costs, lock-in, data extraction, or exposure of proprietary information.
Local or community models enable organizations to:
- Curate their own data
- Maintain complete privacy by keeping computation on-site
- Reduce latency to near zero
- Preserve and strengthen internal expertise
- Avoid recurring token or API costs
And they do so using energy and computing resources that are orders of magnitude lower than those required by frontier-scale models.
Why decentralized AI matters now
The more institutions adopt localized or mesh-based AI, the less they are compelled to fund the centralized companies racing toward AGI. Those companies have made an effective argument that sophisticated AI is possible only through their services. But much of what organizations pay for is not their own productivity; it is the construction of massive server farms, procurement of rare chips, and long-term bets on energy-intensive infrastructure.
By contrast, in-house or community-run systems can be deployed once and maintained indefinitely. A week of setup can eliminate a decade of subscription payments. A small rural library has already demonstrated the feasibility of operating a self-hosted LLM node; a Fortune 500 company should have no trouble doing the same.
Still, history suggests that most organizations will choose the convenient option rather than the autonomous one. Few people accessed the early internet directly; they chose AOL. Today, many will continue to choose centralized AI services, even when they offer the least control. But what social media companies did to businesses that mistook them for “the internet” will be mild compared to what comes when companies mistake these proprietary interfaces for “AI” itself.
Decentralized AI already exists. The question now is whether we’ll choose to use it.