When it comes to market segmentation, I rarely see truly well-documented cases.
At a simpler level, we think of classic matrices such as BCG’s or McKinsey’s. But the real exercise of segmentation is far more complex. In certain contexts, it comes close to the behavior of a tensor: multiple dimensions, cross-dependencies, distinct weights, temporality, and contextual factors that shift the meaning of data depending on the axis being analyzed.
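To make the metaphor concrete, here is a minimal sketch of a market as a labeled, weighted, time-indexed array. The segment names, metrics, and weights below are purely illustrative assumptions, not a prescribed model.

```python
# A toy "market tensor": segments x metrics x quarters, filled with
# placeholder values. Each axis carries its own meaning and weights.
import numpy as np

segments = ["enterprise", "mid-market", "smb"]
metrics = ["revenue", "margin", "churn_risk"]  # hypothetical dimensions
quarters = ["Q1", "Q2", "Q3", "Q4"]

market = np.random.default_rng(0).random((3, 3, 4))  # illustrative data

# Distinct weights per metric: the same numbers read differently
# depending on the axis being analyzed.
metric_weights = np.array([0.5, 0.3, 0.2])

# Collapse the metric axis into one weighted score per segment and
# quarter, keeping temporality intact along the remaining axis.
scores = np.einsum("m,smq->sq", metric_weights, market)
print(dict(zip(segments, scores.round(2).tolist())))
```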
Thinking like a tensor is practicing Model Thinking, which remains, above all, an analog discipline. It requires a brain, not a machine.
The challenge is necessarily multidisciplinary, and this is exactly where executives struggle, spending enormous amounts of time compensating for immature teams.
Even when business operators manage to bring quantitative data from ERP, CRM, or sector reports (which are often scarce or methodologically fragile), the information set must be normalized. This process demands an additional set of competencies: statistical knowledge, data-cleaning techniques, sampling concepts, dimensional modeling, and even systems logic to avoid collinearity and redundancy.
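As a rough illustration of that normalization step, the sketch below z-scores a few hypothetical ERP/CRM columns and flags near-collinear pairs with a simple correlation threshold; real pipelines involve far more than this.

```python
# Basic normalization plus a naive collinearity check over hypothetical
# ERP/CRM fields; column names and values are invented for illustration.
import pandas as pd

df = pd.DataFrame({
    "revenue":   [120, 340, 90, 410, 230],
    "headcount": [10, 30, 8, 38, 21],
    "tickets":   [5, 2, 9, 7, 4],
})

# z-score normalization so magnitudes are comparable across sources
z = (df - df.mean()) / df.std(ddof=0)

# flag near-collinear pairs (|r| > 0.9) to avoid redundant dimensions
corr = z.corr().abs()
cols = list(corr.columns)
flagged = [(a, b, round(corr.loc[a, b], 2))
           for i, a in enumerate(cols)
           for b in cols[i + 1:]
           if corr.loc[a, b] > 0.9]
print(flagged)  # here, revenue and headcount are nearly redundant
```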
When unstructured data is added, the challenge grows further.
This includes everything from more sophisticated sentiment analysis to qualitative inputs from field teams, customer recordings, or information mined from third-party sources. In these cases, the problem is not confined to normalization: It involves interpreting, validating, reducing noise, and converting natural language into structures that can interface with transactional data. It is epistemological, not just technical.
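As a deliberately naive sketch of that conversion, the snippet below turns a free-text field note into a validated record that can sit next to transactional data. The keyword lists are invented placeholders; a real system would use NLP models or LLM annotators instead.

```python
# Turning an unstructured field note into a structured, validated signal.
# The keyword heuristics are placeholders for a real NLP/LLM annotator.
from dataclasses import dataclass

@dataclass
class Signal:
    account_id: str
    sentiment: str  # a validated label, not raw natural language
    source: str

NEGATIVE = {"churn", "cancel", "complaint"}  # illustrative vocabularies
POSITIVE = {"renew", "expand", "satisfied"}

def annotate(account_id: str, note: str, source: str) -> Signal:
    words = set(note.lower().split())
    if words & NEGATIVE:
        label = "negative"
    elif words & POSITIVE:
        label = "positive"
    else:
        label = "neutral"  # noise reduction: ambiguous notes stay neutral
    return Signal(account_id, label, source)

print(annotate("A-17", "customer hinted they may cancel next quarter",
               "field team"))
```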
SERIOUS SEGMENTATION
Serious segmentation is not a mere snapshot of the market. It plots and overlays multiple layers: data on strategic human resources (both internal and competitive), asset acquisition history, technological maturity, revenues and margins, pricing elasticity, media activity, public opinion, and ecosystem maps revealing the true position of players.
Good segmentation uncovers unclaimed revenue, positioning errors, pricing failures, ignored clusters, asymmetries between capability and discourse, and even subtle competitor movements that go unnoticed at the tactical level.
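A trivial sketch of the overlay idea: two of the layers above joined on a shared player key, so that asymmetries between capability and results become visible in a single view. Players, values, and the maturity scale are invented for illustration.

```python
# Overlaying two hypothetical layers (financials and technological
# maturity) on a shared key; all names and figures are illustrative.
import pandas as pd

financials = pd.DataFrame({
    "player": ["A", "B", "C"],
    "revenue": [500, 120, 260],
    "margin": [0.22, 0.08, 0.15],
})
maturity = pd.DataFrame({
    "player": ["A", "B", "C"],
    "tech_maturity": [3, 5, 2],  # ordinal scale, assumed 1-5
})

layers = financials.merge(maturity, on="player")
# e.g. player B: high maturity, weak margins, a capability/result gap
print(layers.sort_values("tech_maturity", ascending=False))
```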
The entire process demands other equally essential competencies: dataset modeling, command of relational tables, use of manipulation languages such as SQL, Python, or R, basic and applied statistics, visualization techniques, clustering, similarity analysis, and, above all, the ability to formulate hypotheses. Without hypotheses, there is no segmentation. There is only table sorting.
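To ground the point about hypotheses, here is a minimal clustering sketch. The hypothesis, assumed purely for illustration, is that usage intensity and price sensitivity separate two customer groups; the data is synthetic.

```python
# Hypothesis-driven clustering on synthetic data: do usage intensity and
# price sensitivity really separate two groups, as hypothesized?
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
X = np.vstack([
    rng.normal([0.8, 0.2], 0.1, (40, 2)),  # heavy use, low sensitivity?
    rng.normal([0.3, 0.7], 0.1, (40, 2)),  # light use, price-driven?
])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X)
)
# If the clusters match the hypothesized split, the segmentation has a
# reading; if not, we have produced table sorting with extra steps.
print(np.bincount(labels))
```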
THE AGENT ERA
In the so-called era of agents (some already speak of the decade of agents), a complementary arsenal emerges to support these processes: agents capable of cleaning and normalizing data; agents for web scraping and data enrichment; agents that classify and label content using LLMs as annotators; statistical-automation agents able to perform clustering, PCA, or churn analysis; reconciliation agents capable of resolving deduplication and probabilistic matching; and competitive-simulation agents designed to test elasticity scenarios, pricing movements, or anticipated reactions of market players.
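To show what one of these agents does at its core, here is a probabilistic-matching sketch using only the standard library. The record names and the 0.7 threshold are illustrative assumptions; production reconciliation uses far richer similarity models.

```python
# The core of a reconciliation agent: probabilistic (fuzzy) matching of
# near-duplicate records across systems. Names and threshold are invented.
from difflib import SequenceMatcher

crm = ["Acme Industries Ltd", "Globex Corp", "Initech"]
erp = ["ACME Industries Limited", "Globex Corporation", "Umbrella Inc"]

def similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

matches = [(c, e, round(similarity(c, e), 2))
           for c in crm
           for e in erp
           if similarity(c, e) > 0.7]  # probabilistic, not exact, matching
print(matches)  # Acme/ACME and Globex pairs survive; Initech has no match
```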
Only as a last resort, and not as the first option as leaders outside tech hubs tend to believe, does RAG (retrieval-augmented generation) enter the picture.
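For completeness, here is a toy version of the retrieval half of RAG, using bag-of-words cosine similarity. Production systems would use embeddings and would feed the retrieved context to a generation model, a step omitted here; documents and query are invented examples.

```python
# The retrieval step of RAG in miniature: score documents against a query
# and surface the best context. Documents and query are invented examples.
import math
from collections import Counter

docs = [
    "segment pricing elasticity report for the mid-market",
    "churn analysis of enterprise accounts",
    "competitor media activity summary",
]

def vec(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

query = vec("pricing elasticity in the mid-market segment")
best = max(docs, key=lambda d: cosine(query, vec(d)))
print(best)  # this context would then ground an LLM's generated answer
```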
This article could list agents available in the ecosystem for immediate use, but it is fundamentally about the capabilities that precede automation.
Before any automation, there is foundational knowledge: truly understanding the discipline of segmentation, knowing principles of market behavior, and having clarity about the information models that generate strategic insights for guiding portfolio, productive capacity, and competitive advantage. No GPU, no matter how powerful, replaces this conceptual clarity.
And this clarity is not necessarily the exclusive responsibility of IT, the CTO, or marketing teams (marketing understood here according to the American Marketing Association’s definition). Segmentation belongs to multidimensional leaders capable of moving fluidly across strategy, operations, data, behavior, and finance.
The provocative question remains: Do these leaders exist in the analog perspective, prior to automation? Many companies try to leap directly from subjective culture to algorithmic culture without building the intermediate methodological culture, and this is one of the silent sources of failure today.
There is robust literature on segmentation and, it must be said, it demands intellectual musculature. I recommend Malcolm McDonald and Ian Dunbar’s Market Segmentation.
Peter Fader, from the Wharton School, offers a more financial and pricing-oriented view in The Customer-Base Audit.
Naturally, these two works offer only a glimpse of the thinking that underlies structured segmentation.
FINAL THOUGHTS
Finally, two observations.
First, what I have just written is not something that ChatGPT—even as a “generative” model—would spontaneously produce. LLMs do not naturally form implicit assumptions across domains, nor do they articulate disciplinary layers whose connection depends on human repertoire and has not been previously mapped. They operate on existing corpora; they do not originate new paradigms on their own.
Second, most business schools today, aside from a small group of highly specialized institutions, tend not to emphasize this mode of thinking. Not by fault, but by design. Their structure was built to serve the needs of upward-moving managers, not to cultivate the broader, integrative perspective required of executive-level decision makers.
This gap in knowledge for top leadership has a structural explanation: The audience is relatively small, and therefore not the core economic engine of educational institutions. As a result, many executive leaders find themselves without ongoing renewal of their knowledge matrix, even in an era that promotes “continuous learning.”
A paradox of our time.
Rodrigo Magnago is a researcher and director at RMagnago Critical Thinking.