The AI Complexity Trap: Why Treating Your Company Like a Machine Fails When AI Arrives

Executive Summary

In our earlier work, The AI Amplification Effect, we described how faster AI-generated insight combined with flatter organizations allows strategic decisions to travel farther and faster before correction can intervene — creating dangerous performance oscillations.

The AI Complexity Trap explains what happens next.

Artificial intelligence is not just accelerating insight. It is fundamentally changing the operating conditions in which companies must compete. What used to be manageable complexity is rapidly becoming dominant, emergent, and unpredictable. Organizations that continue to treat AI as a faster version of the old “complicated” machine — something that can be planned, inserted, and controlled — are walking into a trap.

Adoption is widespread: McKinsey’s 2025 State of AI survey shows ~88% of organizations now use AI in at least one function. Yet scaling remains elusive. Nearly two-thirds have not rolled it out enterprise-wide, and only 39% report any measurable EBIT impact — typically under 5%.

Mid-market companies ($100M–$2B) feel this shift most acutely. They generate real strategic complexity from AI but lack the deep buffers of larger enterprises. Executive teams become the primary bottleneck.

Consider a mid-market distributor that deployed AI-driven dynamic pricing. Margins improved quickly in several regions. But the same model triggered unexpected demand spikes that strained inventory, disrupted fulfillment, and increased customer complaints. The model performed as designed. The system did not.

This is the AI Complexity Trap in action.

In this briefing we explain the phase shift from complicated to complex operating conditions, why most current leadership approaches are mismatched, and — most importantly — the practical execution discipline required to succeed. You will learn:

  • The five practical dimensions that separate complicated from complex environments — and what they mean for your AI initiatives

  • Why initiative overload, change fatigue, and performance volatility are predictable outcomes of the old model

  • A proven response framework (contain before you scale; distribute sensing, centralize judgment; build deliberate pause points) aligned with Cynefin and Ethan Mollick’s Leadership–Crowd–Lab model

  • How leading mid-market companies are restoring judgment-rich stabilization without rebuilding bureaucracy

The advantage will not go to the organizations that move fastest. It will go to those that can absorb speed without losing control.

1. The Phase Shift Underway

For decades, corporate decision-making followed a predictable rhythm: assess, plan, execute.

That rhythm was shaped by constraint. It took time to gather information, run analysis, and build consensus. Those delays acted as a natural control system. Decisions moved slowly enough that organizations could absorb, adjust, and correct along the way.

Artificial intelligence has removed that constraint.

Work that once required weeks and teams of analysts can now be completed in minutes. Insight is no longer scarce. It is continuous. The result is not simply a faster organization. It is a more exposed one.

Decisions now move through the system faster than the system can respond.

This is the dynamic we described in The AI Amplification Effect. Faster insight, combined with thinner management layers, allows decisions to travel farther and faster before correction can occur. What begins as a local improvement can quickly propagate across the organization, creating consequences that only become visible after they have taken hold.

In one mid-market company, a set of AI-generated recommendations led to rapid changes in pricing, marketing spend, and inventory positioning within weeks. Each decision was supported by credible analysis. Each made sense in isolation. Together, they created interactions no one had fully anticipated. Demand patterns shifted, operational strain increased, and leadership found itself reacting rather than directing.

This is not a failure of analytics; it is a change in operating conditions. The front end of strategy has accelerated, but execution has not.

Organizations are now operating in an environment where:

  • decisions are generated faster than they can be evaluated

  • actions propagate faster than they can be coordinated

  • consequences emerge faster than they can be understood

This shift is best understood as a move from complicated to complex operating conditions.

In complicated systems, outcomes can be planned by breaking work into parts. In complex systems, outcomes emerge from interactions across teams, data, and customer behavior. Cause and effect are often only visible in hindsight.

Artificial intelligence is pushing organizations across this boundary.

As Stanley McChrystal observed in a different context, organizations designed for predictable environments struggle when faced with systems defined by constant interaction and adaptation. The solution is not better planning; it is a different way of operating.

Mid-market organizations feel this shift most acutely. They generate enough complexity for these interactions to matter but lack the structural depth to absorb them. Executive teams become the primary point of coordination and, increasingly, the primary bottleneck.

The result is a growing gap between how decisions are generated and how they are executed. This gap is where instability begins.

2. Understanding Complicated vs. Complex in Business Terms

Most operating models have always contained both structured processes and dynamic interactions. What is changing is the balance.

Artificial intelligence is increasing both the volume and velocity of interactions across the business. Decisions made in one area now trigger faster and less predictable effects elsewhere. What was once manageable complexity is becoming the dominant condition of daily operations.

In a complicated system, work can be broken down into parts and optimized. Planning works because cause and effect are stable enough to anticipate.

In a complex system, outcomes emerge from interactions. Those interactions evolve over time and are shaped by feedback loops across teams, data, and customer behavior. The system cannot be fully understood in advance. It must be observed as it operates.

AI is accelerating this shift.

A pricing model no longer affects only pricing. It influences demand, which affects operations, which shapes customer experience and future behavior. A marketing model does not just optimize conversion. It changes how quickly signals move through the system and how other functions must respond.

How AI Is Shifting Operating Conditions

Complicated vs. Complex Systems

  • Causality: linear in complicated systems, non-linear in complex ones. In practice, pricing improves margins locally but creates demand spikes that disrupt supply elsewhere.

  • Reducibility: in complicated systems the parts determine the whole; in complex systems the interactions do. In practice, automation improves response time but increases repeat contacts.

  • History: complicated systems are static; complex systems are path-dependent. In practice, models reinforce historical patterns unless actively corrected.

  • Knowability: complicated systems carry predictable risks; complex systems carry emergent ones. In practice, options look sound but create downstream effects after deployment.

  • Governance: complicated systems reward control; complex systems require enablement. In practice, central controls slow response while local actions fragment the system.

Many leaders still rely on decision models designed for complicated environments. Those models assume problems can be understood upfront and execution will follow predictably. This assumption no longer holds.

Organizations now generate high-quality insights but struggle to translate them into coordinated action. Local optimizations succeed, while system-level effects emerge later. This is why many AI initiatives succeed in isolation but fail to scale. The issue is the interaction, not the idea.

3. Challenges Created by the Shift

When AI-driven decisions are managed with tools designed for stable conditions, the result is instability.

This shows up in consistent ways:

Initiative overload
Leadership teams face a surge of credible AI opportunities, each competing for the same limited resources. In one $250 million company, more than a dozen initiatives launched within a single quarter. None were flawed. Together, they overwhelmed execution capacity.

Change fatigue
Teams absorb continuous waves of change without clear prioritization. Adoption slows not from resistance, but from overload. Tools are implemented, but behaviors stop evolving.

Cross-functional misalignment
Decisions in one function create downstream effects elsewhere. A marketing model increases demand. Operations cannot keep up. Customer experience deteriorates.

Performance volatility
Gains scale quickly. Failures scale faster. Forecasting improvements increase variability. Supply chains react defensively. System responsiveness declines.

These issues are not isolated; they reflect a mismatch between operating conditions and decision models.

For mid-market companies, the effect is amplified. The system cannot absorb the speed and volume of change. The gap between insight and execution widens.

4. Why Mid-Market Companies Are Especially Exposed

Mid-market companies are large enough to generate complexity, but not large enough to buffer it.

In larger organizations, layers absorb volatility. In mid-market firms, those layers are thin. The burden falls on the executive team. This creates a consistent pattern.

A small leadership group must evaluate and oversee a growing number of AI initiatives while running the business. Each initiative requires coordination and moves faster than traditional decision cycles allow.

In one $180 million company, leadership simultaneously managed pricing, maintenance, and customer service AI initiatives. The constraint was leadership bandwidth, not capital or technology.

This creates three pressures:

  • resource contention

  • decision fatigue

  • organizational strain

Without a different approach, these pressures compound. Ideas move faster than they can be evaluated. Signals are missed. Resources are committed before risks are understood.

The result is predictable: stalled scaling, fragmented execution, and declining confidence.

5. How Leaders Should Respond

In complex conditions, the challenge is not generating ideas. It is controlling how decisions move through the system. Most organizations already have more AI opportunities than they can absorb. The constraint is execution.

The companies that make progress adopt a different discipline.

Contain before you scale
New ideas enter through bounded tests with clear limits. This is not about proving value in isolation; it is about understanding interaction effects.

In one company, a pricing model deployed broadly created volatility. Resetting to a single-region test revealed second-order effects before scaling.

Distribute sensing, centralize judgment
Signals come from across the organization. Decisions do not.

Distributed decision-making fragments execution. Effective operators maintain a clear center for prioritization and sequencing.

Build deliberate pause points
Speed without reflection creates volatility. Structured checkpoints allow organizations to review results, adjust direction, and stop initiatives early when needed. These are control mechanisms, not delays.

This model depends on clear roles.

Leadership sets direction and constraints. The broader organization generates and tests ideas. A small integration layer converts successful experiments into repeatable practices.

This aligns with the Leadership–Crowd–Lab model described by Ethan Mollick and the probe–sense–respond logic of the Cynefin framework. The value lies not in the frameworks, but in the discipline they impose.

Organizations that adopt this approach do not eliminate complexity. They operate within it.

6. The New Leadership Imperative

Artificial intelligence is changing the conditions under which organizations operate. The risk is not slow adoption. It is applying outdated models to a system that no longer supports them.

Many organizations continue to rely on planning and control mechanisms designed for predictability. As complexity increases, those mechanisms break down.

The result is visible.

A consumer company scaled AI-driven forecasting and promotion models quickly after early success. Within two quarters, demand variability increased, inventory imbalances grew, and service levels declined. Leadership added controls and oversight, but the system continued to strain.

The issue was not the models. It was the lack of a mechanism to observe and absorb second-order effects before scaling. By the time the impact was clear, resources had been committed. Several initiatives were paused. Confidence declined. The organization reverted to more familiar patterns.

This pattern is becoming more common: success scales quickly, interactions emerge later, and instability follows.

In today’s increasingly complex world, the advantage will not go to those who move fastest. It will go to those who can absorb speed without losing control.

That is the new leadership imperative.

About the Authors

Dev D’Souza, Jon Watts, and Pete Perkins collaborated on this article. Propel Strategy Group is an operator-led advisory firm focused on large-scale operational and technological transformation for mid-market companies. Their work examines how leadership, operating models, and execution must evolve as artificial intelligence accelerates strategic decision-making.

References

McKinsey & Company. 2025. The State of AI in 2025.

Cilliers, Paul. 1998. Complexity and Postmodernism: Understanding Complex Systems. London: Routledge.

Snowden, David J., and Mary E. Boone. 2007. “A Leader’s Framework for Decision Making.” Harvard Business Review 85 (11): 68–76.

McChrystal, Stanley A., Tantum Collins, David Silverman, and Chris Fussell. 2015. Team of Teams: New Rules of Engagement for a Complex World. New York: Portfolio/Penguin.

Mollick, Ethan. 2024. Co-Intelligence: Living and Working with AI. New York: Portfolio/Penguin.

Mollick, Ethan. 2026. “Weird AI, the stage is yours.” The Economist, April 2026.

Sargut, Gökçe, and Rita Gunther McGrath. 2011. “Learning to Live with Complexity.” Harvard Business Review 89 (9): 68–76.
