
We’re entering an era of computing that feels less and less human-centered. Paradoxically, tech companies remain fixated on mining every detail of our personal data. The familiar, task-specific interfaces we once used are being pushed aside, replaced by generative AI and LLM-driven chatbots that upend how we interact with software.
Instead of opening a dedicated app for writing, research, coding, or even emotional support, we’re funneled into a single chatbot window. OpenAI promises to “Let AI do the work for you—designed to handle any task,” while Anthropic touts Claude as a fantasy-fulfillment engine with the tagline “If you can dream it, Claude can help you do it.”
The pitch is clear: a one-stop shop for everything.
The shrinking interface
Search engines have trained us to expect answers from a single field. Now chatbots take this a step further: the text box has swallowed other applications, even as its output demands endless refinement and fact-checking through that same text box. And text isn’t the endgame. Voice assistants like Alexa, Siri, and Google Assistant primed us for hands-free interaction, which is poised to replace the text box entirely. Tech companies are chasing a future where speech replaces typing and the interface nearly disappears. The real prize isn’t usability for us; it’s the value they capture from what we say and how we say it.
This change marks a sharp departure from the last 45 years of interface design. Apple, inspired by Xerox PARC, championed user-centered design: graphical icons, so-called “what-you-see-is-what-you-get” (WYSIWYG) editors, and intuitive metaphors that empowered people to create and communicate. For decades, this approach made computing accessible. But with the rise of big data, priorities shifted. People generated ever-larger archives of digital traces (emails, documents, photos, browsing histories) and were persuaded to store them in “clouds” for easy retrieval. Tech companies soon realized that, taken together, this data formed a vast global knowledge corpus that could be mined and monetized.
The rise of surveillance-driven business models pushed firms toward increasingly quantitative forms of “user profiling.” The focus shifted from designing tools to help us get work done to extracting patterns that served corporate goals. We ceased to be seen as people with needs and instead became raw material for metrics, models, and market dominance.
As big data fed the models behind AI and LLMs, this shift accelerated. These systems now operate on top of what we do, but without the capacity to understand why we do it. Stripped of the context that qualitative research uncovers, quantitative analysis can easily misinterpret intent. Chatbots produce inconsistent answers depending on the prompt, and we must constantly refine queries just to get something useful. For those who mistake these tools for “truth machines,” the risks are profound, even fatal, as in documented cases where chatbots have coached people toward self-harm.
This lack of contextual, qualitative research isn’t new. Apple, often held up as the gold standard for user-centered design, initially resisted user research. Early on, Steve Jobs insisted that “people don’t know what they want until you give it to them.” That bias became ingrained in Apple’s culture, where it has persisted in various forms over the years, and it spread through the rest of Silicon Valley and beyond as former employees changed jobs or founded new companies. Media and business-school case studies perpetuated the myth, and as a result a culture of quantitative-data-first product design has taken hold across the tech industry, reinforced by the belief that big data mining is the only way to understand people. It’s not.
This big data collection movement comes with another siphoning of our free labor and time: the survey. After every engagement, businesses nag us for feedback on the agents, algorithms, services, and products they’ve already tracked us using, nudging their algorithms while collecting even more data about us. It’s exhausting.
Scaling at our expense
Companies justify this approach as a way to scale an interface (for them). But “scaling” often means flattening differences among users, and it handles cultural differences poorly in a global context. The all-in-one chatbot promises universality yet introduces new frictions: translation errors when models are trained on mismatched languages, hallucinations from incomplete training sets, and endless cycles of prompt refinement. Instead of simplifying our work, these systems demand more labor from us.
The effect is recursive. We feed chatbots queries; the chatbots pass them to LLMs, which pattern-match words to generate answers, right or wrong. Those answers circulate back into search engines that are themselves increasingly infused with LLM output, making verification a nightmare. Meanwhile, our conversations with the chatbots are mined to train future models.
The user interfaces on our devices have become less like tools and more like receptacles for collection. Where we once used a tool to get our work done, we now train tools to do the work so that we can, in turn, finish ours. This dynamic exploits our energy and labor, propping up systems that may one day replace us (if they haven’t already).
Today’s design trajectory aims to erase the interface altogether, replacing it with conversation under surveillance: mechanized eavesdropping dressed up as dialogue. Behind the scenes, algorithms sift and stitch together fragments of training data (not always accurate, not always complete) to generate songs, code, images, or advice pirated from a corpus humanity built over lifetimes. Sometimes that LLM “advice,” filtered through a chatbot, extends into domains as risky as psychological counseling or nuclear operations, putting us directly in harm’s way. At scale, this is terrifying.
It isn’t fair to say that user-centered design is gone, at least not yet. It’s still here, but the target users have changed. We used to be the users these companies centered; now the LLMs are their focus. And like it or not, our role is to enable that success.