A professor I follow on X, Ethan Mollick [1], recently made an observation that struck a deep chord with me and cuts to the heart of AI adoption as I’ve watched models being commercialised in enterprise environments. Mollick puts it bluntly (and I paraphrase): AI labs, run by coders, keep building supercool tools for coding while leaving other forms of work stuck with generic chatbots [2]. Unless you own a frontier model, he argues, your ability to build specialized AI for your field is limited [3]. Coding shows us what is possible, but it also highlights the imbalance.
Take software development as an example. Developers now rely on copilots that debug, explain, and even generate entire libraries of code: sharp, precise, and tightly integrated into their workflows. Other professions, by contrast, are still working with more generic assistants: clever, but shallow. To bridge that gap, many companies have tried building custom AI agents. Yet the pace of change is so fast that by the time those agents are operationalised, the major labs’ own platforms are already becoming agentic, exposing the weaknesses of simpler solutions. This “evolutionary glitch,” where custom tools risk obsolescence before they are fully deployed, often goes unnoticed by coders, who are themselves sprinting ahead with increasingly powerful copilots.
So why has coding become such a privileged domain? The reasons are structural, not temporary. Coders both build the AI systems and use them, creating a tight feedback loop where problems are identified and solved almost instantly. On top of that, code is an unusually rich kind of data: structured, public, and easily testable. Training data is abundant, and outcomes are unambiguous: a line of code either compiles or it doesn’t, a test suite either passes or fails. Contrast that with law, medicine, or marketing, where the data is private, messy, and the outcomes are often subjective or even contradictory. In those domains, two different answers can both be “right” or “wrong,” which makes progress slower and far harder to measure.
This combination has made coding the natural proving ground for frontier labs; that bias is unlikely to fade. The labs are run by people with coding backgrounds, building tools first and foremost for their own work. Each release improves not just the tools themselves but the labs’ ability to build the next generation. It is a compounding advantage: a self-reinforcing loop where coders and labs accelerate together, leaving other domains struggling to keep pace.
The New Rules of Strategic Advantage
From owning models to owning context
For businesses outside of software development, this dynamic creates a series of strategic challenges that require rethinking how competitive advantage is built and how AI tools are implemented in domains that lack the strong feedback loops coders enjoy.
For years, the central technology debate has been whether to build or buy, but another truth is becoming apparent. With AI evolving so rapidly, organizations' in-house technical teams simply can't develop AI agents fast enough to compete with the evolution of agentic AI. Building custom chatbots on APIs may offer short-term benefits and be cheaper than buying, but organizations must ask themselves whether those projects can realistically keep pace with the more integrated and capable solutions that agentic AI platforms provide.
The AI labs' own agentic platforms are evolving too quickly, integrating memory, tools, and project management in ways that internal projects cannot easily replicate. Organizations will increasingly need to adopt those platforms, then focus on adapting them to their unique needs. This represents a major shift in the build-or-buy paradigm: the question is now whether to prioritize speed and capability, or to insist on complete control and accept obsolescence. When the “buy” option is evolving faster than internal development can match, competitive advantage moves from owning the technology to mastering its application.
This shift is also creating a two-tiered workforce. Coders are being supercharged by their copilots, while non-technical staff often use less powerful generic assistants. This risks creating a divide where technical employees become “super-agents” and everyone else remains “agent-lite,” at best. For business and technology leaders, the challenge is how to extend the benefits of AI beyond engineering by investing in training, redesigning roles, and embedding AI tools into every function so that the productivity boost is shared across the organization.
Yet the divide is not simply coders versus everyone else; it is about which domains already have mature AI-native tools and which do not. Design, writing, and sales are developing powerful copilots of their own – often powered by frontier models – though they evolve at different speeds and with uneven sophistication. Coding sprinted ahead first, but other knowledge domains are following on their own timelines.
The pace of adoption is shaped not only by technical feasibility but also by regulation, data sensitivity, and organizational risk tolerance. Highly regulated fields such as healthcare and finance move cautiously because of liability and compliance requirements, even when the technology is ready. By contrast, creative industries and marketing face fewer structural constraints and can experiment more freely, accelerating the emergence of domain-specific copilots. The divide, therefore, reflects not just technical maturity but the institutional barriers that determine how quickly AI can be woven into real-world workflows.
From my vantage point in cybersecurity, this divergence is especially critical. Here, AI is already accelerating both attack and defence. Coders with access to advanced copilots can rapidly probe systems, automate exploits, or generate polymorphic malware. Meanwhile, defenders can use the same tools to triage alerts, automate incident response, or hunt for vulnerabilities in ways that were previously impossible. The asymmetry emerges quickly: those with access to specialized copilots operate at an entirely different speed and scale than those relying on generic assistants. For business leaders, this is a preview of what may happen across many other domains if the gap between “super-agents” and “agent-lite” workers is left unaddressed.
The question of data sovereignty and intellectual property becomes even more critical in this new paradigm. To be effective, agentic AI tools need deep access to a company's internal knowledge: documents, emails, databases, even strategy memos. This creates an uncomfortable dependency where your most valuable data becomes the training ground for tools you don't control. How much of this knowledge can safely be exposed to third-party platforms? What protections are needed to ensure that proprietary information does not become part of someone else's ecosystem? For many companies, the answer will determine whether AI becomes a strategic advantage or a dangerous liability.
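What such protections can look like in practice is easier to see with a sketch. Below is a minimal, hypothetical policy gate in Python that blocks or redacts content before anything leaves the organizational boundary for a third-party platform. The patterns, labels, and function names are my own illustrative assumptions rather than any vendor’s API; a real deployment would lean on a dedicated data-loss-prevention service and a governance-approved data-handling policy.

```python
import re

# Hypothetical policy gate: illustrative only, not a real product API.
# Patterns and labels are assumptions; production systems would use a
# dedicated DLP/classification service and a reviewed data policy.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "iban":  re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}
BLOCKED_LABELS = {"strategy-memo", "customer-pii"}  # assumed internal labels

def guard_outbound(text: str, labels: set[str]) -> str:
    """Refuse blocked document classes and redact known sensitive patterns
    before any content is sent to an external model API."""
    if labels & BLOCKED_LABELS:
        raise PermissionError(f"Blocked by data policy: {labels & BLOCKED_LABELS}")
    for name, pattern in SENSITIVE_PATTERNS.items():
        text = pattern.sub(f"[REDACTED-{name.upper()}]", text)
    return text

# Only the guarded text ever reaches the third-party platform.
safe_prompt = guard_outbound("Contact jane@example.com re: Q3 pricing.", {"sales-note"})
print(safe_prompt)  # -> "Contact [REDACTED-EMAIL] re: Q3 pricing."
```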
In practice, however, the question is not purely build or buy, but how to hybridize. Many organizations are already fine-tuning open-source models, layering RAG pipelines, and assembling custom agents on top of frontier APIs. The real challenge is deciding how much of the AI stack to customize versus consume as-a-service. Competitive advantage may rest not only in data, workflows, agility, and trust, but also in mastering this hybrid landscape.
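As a sketch of what “layering a RAG pipeline” means in practice, the toy example below indexes a company’s own documents and augments a prompt with the best match. The embedding function is a deliberately crude stand-in so the sketch runs as-is; a real pipeline would call an actual embedding model and send the augmented prompt to whichever frontier or open-source model the organization has chosen.

```python
import math

# Toy RAG sketch. embed() is a crude stand-in (character histogram) so the
# example runs without dependencies; swap in a real embedding model and an
# LLM client for actual use.
def embed(text: str) -> list[float]:
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))

# The proprietary documents are the moat; the model underneath is replaceable.
documents = [
    "Refund policy: customers may return goods within 30 days.",
    "Shipping: orders dispatch within 2 business days.",
]
index = [(doc, embed(doc)) for doc in documents]

def retrieve(query: str, k: int = 1) -> list[str]:
    q = embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [doc for doc, _ in ranked[:k]]

def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    # A real pipeline would send this augmented prompt to the chosen model.
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

print(answer("How long do customers have to return an item?"))
```

The point of the sketch is where the advantage sits: the documents and the retrieval layer belong to the organization, while the model underneath is interchangeable.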
This reality reframes the build-vs-buy dilemma even further. If only a handful of labs can build frontier models, does that mean that true innovation is out of reach for everyone else? Not entirely. While companies may not be able to train their own frontier models, they can still innovate in ways that matter. The competitive game moves away from model building and toward something more fundamental: context building.
The New Locus of Advantage: Context and Orchestration
In Michael Porter’s terms, strategy has always been about the sources of sustainable advantage. In the age of agentic AI, owning the model is no longer that source. The new locus of strategic advantage is context: the data you control, the workflows you own, the agility you cultivate, the trust you build, and the way you orchestrate it all together. When frontier models are out of reach, competitive advantage shifts to these five pillars that define the “last mile” of AI adoption. Each is inseparable from a company’s identity and operations, and none can simply be outsourced to a lab.
Proprietary Data. It is not just about having unique data, but having the highest-quality, most-structured, most-contextual data. This data becomes a company's non-replicable moat, as it creates the most valuable fine-tuning and Retrieval-Augmented Generation (RAG) applications. The companies that win will be those whose data creates unique insights that no competitor can replicate, regardless of which frontier model they're using.
Workflow Mastery. This involves redesigning operations so they are natively AI-driven, not just bolted onto old processes. It means creating new forms of human-AI collaboration and developing organizational muscle memory for AI integration that competitors cannot easily copy. The advantage goes to companies that discover optimal divisions of labour between humans and AI agents, creating workflows that become increasingly difficult to replicate.
Speed and Agility. If everyone has access to similar foundational models, the advantage goes to whoever can deploy, test, and iterate fastest. This favours companies with flat hierarchies and fast approval processes over those weighed down by legacy structures. In a world where AI capabilities evolve monthly, the ability to rapidly identify, integrate, and optimize new tools becomes a sustainable competitive advantage.
Trust and Human Capital. The ability to deploy AI in ways that are secure, compliant, and explainable – particularly in regulated industries – will be a major differentiator. This must be paired with building a workforce where every employee is an effective AI operator, not just the engineers. Companies that can extend AI capabilities across their entire organization while maintaining security and compliance will have advantages that pure technology cannot provide.
Orchestration. Alongside these four pillars lies another layer of advantage: how effectively an organization orchestrates its AI stack. Few will fully “build,” and fewer still can afford to only “buy.” The reality is almost always hybrid: fine-tuning open-source models, integrating APIs, layering retrieval-augmented pipelines, and constructing custom agents on top of frontier platforms. In this light, the competitive question is no longer purely “build versus buy,” but rather “assemble versus stagnate.”
But orchestration itself is not without risk. Combining open-source models, proprietary APIs, and custom agents can create complexity that overwhelms organizations without the right technical maturity. Poorly managed, this hybrid approach can fragment systems, inflate costs, and undermine performance, potentially leaving companies worse off than those that standardize on a single unified platform. Orchestration offers outsized rewards, but it demands governance, coordination, and a sober assessment of organizational limits. Done well, it becomes a multiplier; done poorly, it becomes a liability.
Taken together, these five pillars of context – data, workflows, speed, trust, and orchestration – represent where sustainable competitive advantage now lies. They require deep organizational commitment and can’t be easily copied or commoditized.
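To make the orchestration pillar concrete, here is a minimal sketch of a routing policy that keeps sensitive work on a self-hosted open-source model and sends only non-sensitive, high-complexity tasks to a frontier API. The model clients, thresholds, and Task fields are illustrative assumptions; in a real organization the routing rule would be a governance decision, not something hard-coded by one engineer.

```python
from dataclasses import dataclass
from typing import Callable

# Minimal orchestration sketch. The two client functions are stand-ins for a
# self-hosted open-source model and a hosted frontier platform; names and
# thresholds are assumptions for illustration.
@dataclass
class Task:
    prompt: str
    sensitive: bool      # contains data that must stay in-house?
    complexity: float    # 0.0 (trivial) .. 1.0 (frontier-grade reasoning)

def local_open_model(prompt: str) -> str:
    return f"[local model] {prompt[:40]}..."   # stand-in for a self-hosted model

def frontier_api(prompt: str) -> str:
    return f"[frontier API] {prompt[:40]}..."  # stand-in for a hosted platform

def route(task: Task) -> Callable[[str], str]:
    # Policy: sensitive data never leaves the boundary, whatever the cost in
    # capability; everything else goes wherever capability-per-cost is best.
    if task.sensitive:
        return local_open_model
    return frontier_api if task.complexity > 0.5 else local_open_model

task = Task("Summarize this draft strategy memo.", sensitive=True, complexity=0.8)
print(route(task)(task.prompt))  # routed to the local model by policy
```

Even this toy version shows why orchestration demands governance: the routing rule encodes decisions about data sovereignty, capability, and cost, and those are business choices before they are engineering ones.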
The Broader Locus of Risk: Society and Governance
While companies can adapt to this new competitive landscape, the concentration of advanced AI capabilities in a handful of labs raises broader questions that extend far beyond any single organization’s strategy. The structural bias toward coding reflects priorities that could have profound societal implications. These challenges play out across multiple levels: from ecosystem fragility, to geopolitical and corporate inequality, to questions of governance and, ultimately, to the future of human work and autonomy.
The Problem of “AI Monoculture.” (Ecosystem-level fragility)
When a handful of companies control the foundational models that power the world’s most advanced tools, society becomes vulnerable to an “AI monoculture.” This is similar to relying on a single crop, which can be wiped out by a single pest. If a single lab’s model has a bias, a flaw, or a “hallucination,” that problem could propagate across countless industries simultaneously. How do we ensure diversity and resilience in the AI ecosystem?

Open-source efforts such as LLaMA or Mistral provide an important hedge against overconcentration. They can serve as fallback options and help diversify the ecosystem. But it would be a mistake to see them as full counterweights. The true moat for frontier labs lies less in raw model capability and more in the ecosystems they control. Platforms that integrate deeply into productivity suites, search engines, or proprietary plugin architectures create powerful lock-in effects. Once a company’s workflows depend on these integrations, the switching costs become prohibitive, even if open-source or alternative models reach comparable performance. For this reason, open-source models are likely to play important but limited roles – filling niches, enabling experimentation, or serving as safeguards – rather than replacing frontier labs as the primary engines of business adoption.
The Widening “Intelligence Asymmetry.” (Geopolitical and corporate inequality)
The classic information asymmetry of corporate governance, where management has better information than shareholders, is now evolving into a much deeper “intelligence asymmetry.” As some companies and countries gain access to more powerful, agentic AI tools than others, what are the ethical and economic implications? Will this deepen existing inequalities between the “AI-haves” and the “AI-have-nots”?
Regulation and Accountability. (Institutional governance)
If an agentic AI makes a mistake that leads to a financial loss, a medical misdiagnosis, or a legal error, who is responsible? The company that used the tool? The developer of the frontier model? The human who supervised the agent? The increasingly complex and multi-step nature of agentic AI makes it difficult to assign clear accountability. This raises critical questions for regulators and legal systems that are ill-equipped to handle this new paradigm. It also underscores why owning the context (the data, workflows, governance, and human oversight surrounding AI use) matters as much as the technology itself. Organizations that fail to define and control this context risk not only poor outcomes but also accountability and explainability gaps they will struggle to defend.
The Future of Work and Human Autonomy. (Individual human impact)
If the tools for human-AI collaboration remain generic for most fields while becoming highly specialized for coding, what does this mean for the future of non-technical professions? Will the “soft skills” of strategy, empathy, and creativity be sufficient to thrive, or will they too require specialized AI tools that may not be developed for years to come?
Conclusion
Mollick is right that coding has enjoyed a head start, and this advantage appears to be structural rather than temporary. But the mistake for businesses would be to chase the labs by trying to build frontier models internally in an attempt to replicate the advantage coders already enjoy. The real opportunity is to recognize where advantage truly lies: in the five pillars of competitive strength – the data you control, the workflows you own, the speed and agility you cultivate, the trust and human capital you build, and the way you orchestrate the AI stack itself. Together these define the context in which models become useful.
In this transition toward agentic AI, you do not need to own the model to win; you need to own the context – and that means mastering the discipline of orchestration. Done well, orchestration amplifies the other pillars; done poorly, it becomes the weak link in the chain. Crucially, owning the context is also the foundation for accountability. Without it, organizations risk deploying powerful AI systems they cannot explain, defend, or govern.
The question I see most relevant for leaders, in both business and society, is whether we can build this context in ways that create value without reinforcing lock-in, widening inequalities, or introducing new vulnerabilities. The answer will determine not just who wins in the marketplace, but how AI shapes the future of resilience, innovation, responsibility, and human agency itself.
[1] On X under @emollick; he also blogs (worth subscribing) at https://www.oneusefulthing.org/
[2] https://x.com/emollick/status/1967704853171638494
[3] https://x.com/emollick/status/1967706150218174958