SuperSkill 3: Big Picture Thinking: Change Readiness in the Age of AI

A pharmaceutical company accelerates drug development by optimising each stage of its pipeline independently. Clinical trials run faster. Manufacturing scales more efficiently. Regulatory submissions arrive sooner. Yet post-market surveillance reveals safety signals that earlier, slower processes would have caught. The gains in speed have produced costs in outcomes that only become visible years later.

A technology firm reorganises around autonomous teams, each empowered to move quickly on their products. Velocity increases. Features ship faster. But the products begin to diverge in ways that confuse customers and create integration problems. What worked for each team separately fails when the pieces must function together.

These are not unusual cases. They represent a pattern that has become more common as organisations pursue efficiency through specialisation and speed. The pattern is not that people fail at their jobs. The pattern is that success within a narrow frame can produce failure at the level of the whole.

The cognitive capacity in question

What distinguishes those who anticipate these failures from those who are surprised by them?

The difference is not intelligence in the conventional sense. Nor is it experience alone, since experienced people are often caught off guard by systemic effects. The distinguishing factor is a particular cognitive orientation: the capacity to step back from immediate concerns and grasp how elements relate within a broader context.

Researchers in cognitive science describe this as high-level construal, the ability to think abstractly about situations rather than focusing only on concrete details. In organisational theory, it corresponds to systems thinking, the practice of understanding how components influence one another and how the whole behaves differently from the sum of its parts.

This capacity involves several distinct but related abilities. It requires shifting between levels of analysis, moving from immediate tasks to long-term implications and back. It requires tolerance for ambiguity, since complex systems rarely yield simple answers. And it requires what psychologists call cognitive flexibility, the willingness to update mental models when evidence suggests they no longer fit.

One clarification is essential. This is not the same as ignoring details in favour of abstractions. Effective practitioners of this skill do not operate at altitude while others handle the ground. They integrate details into larger frameworks. They understand which specifics matter for the whole and which are locally significant but systemically irrelevant. The capacity involves synthesis, not withdrawal from substance.

What the research shows

The empirical foundation for this capacity spans multiple disciplines, though much of the evidence comes from specific contexts rather than direct measurement of the skill itself.

Controlled experiments have demonstrated that inducing a broader perspective changes decision-making in measurable ways. In a series of studies involving approximately 500 participants, researchers found that prompting people to think about overarching goals rather than immediate gains led to choices that maximised collective benefit, even when those choices meant receiving less personally. The shift in perspective reduced short-term self-interest and improved aggregate outcomes.

Leadership research has consistently identified pattern recognition and systems orientation as cognitive capacities that distinguish high performers. Reviews of what makes leaders effective note that exceptional performers excel at recognising complex interdependencies. German research on complex problem-solving found that participants who approached dynamic simulations with attention to multiple variables and feedback loops achieved better control than those who focused narrowly.

Historical cases provide additional texture. The well-documented example of Royal Dutch Shell's scenario planning in the early 1970s illustrates how structured practices can produce advantage. Shell's development of multiple long-range scenarios had anticipated the possibility of an oil supply disruption. When the 1973 crisis arrived, the company responded faster than competitors who had not rehearsed the possibility. Internal accounts confirm that the scenario work had created both cognitive and organisational readiness months before events unfolded.

Cross-cultural psychology offers a different angle. Research comparing cognitive styles across cultures found that individuals oriented toward holistic thinking, attending to context and relationships, processed information differently from those with more analytic orientations. They were more likely to notice background changes and explain events in terms of interactions rather than isolated causes. While neither style is uniformly superior, the findings suggest that orientation toward context affects what people perceive and how they reason.

The evidence includes important caveats. One study found that prompting people to imagine their distant future could, under certain conditions, increase indulgent behaviour rather than reduce it. Those with strong self-focus sometimes reasoned that life is short and present enjoyment is warranted. This underscores that the effects of broad thinking depend on framing and direction. The capacity itself is not automatically beneficial.

The mechanisms at work

Several pathways explain how this orientation produces different outcomes.

The first operates at the level of attention. Adopting a broader frame shifts focus from immediate pressures to longer time horizons and wider scope. This reduces what researchers call present bias, the tendency to overweight near-term consequences relative to distant ones. Decisions become more aligned with objectives that matter over time rather than with whatever is most salient in the moment.

The second pathway operates through anticipation. Mapping how parts of a system influence one another makes it possible to foresee where interventions will have unintended effects. Research on quality improvement in healthcare has documented this mechanism: teams that understood systemic interactions could explain why certain changes succeeded while others failed, because they saw dynamics that narrow perspectives missed.

A third pathway involves transfer across contexts. Individuals who consistently think in broad terms accumulate patterns and analogies that accelerate future reasoning. A disruption in one industry may follow a shape previously observed in another. Recognising the pattern enables faster response. Cognitive research suggests that expert decision-makers rely on deep structural similarities rather than surface features when navigating novel situations.

At the organisational level, the capacity produces coordination benefits. Managers who maintain awareness of the whole system are more likely to align their efforts with others, ensuring that local improvements do not create problems elsewhere. When this orientation is widespread, the organisation develops what researchers describe as collective adaptive capacity: the ability to sense environmental shifts and respond coherently rather than fragmenting under pressure.

The AI relationship

The rise of AI creates a complex dynamic for this capacity. The relationship runs in both directions and carries both opportunity and risk.

AI can extend the reach of broad thinking. Machine learning models can process data at scales beyond human capacity, detecting patterns and simulating system behaviours that inform strategic analysis. Emerging applications in foresight help teams scan for weak signals, visualise interdependencies, and stress-test assumptions. A decision-maker equipped with these tools can explore more possibilities than would otherwise be feasible.

The limitation is that current AI systems do not understand context, causality, or meaning in the way humans integrate them. AI excels at optimising defined objectives within bounded domains. It does not reliably question whether those domains are correctly specified or whether the objectives capture what actually matters. It pursues local optima defined by its training, which can produce exactly the narrow optimisation that human oversight is meant to prevent.

Many consequential failures have occurred when automated systems optimised metrics without understanding broader context. Recommendation algorithms maximising engagement surfaced content that users later regretted. Pricing algorithms optimising for revenue created patterns that damaged customer relationships. These were not failures of technical capability. They were failures to situate technical capability within a framework that accounted for effects beyond the immediate objective.

For individuals, the risk runs in the opposite direction. When AI handles integrative analysis, people may stop exercising their own capacity for synthesis. Research on automation bias documents how reliance on machine outputs can lead to complacency. A decision-maker who trusts AI recommendations may not question how those recommendations fit the larger picture or what factors the algorithm has omitted.

Studies on navigation technology provide an instructive parallel. Heavy GPS users develop weaker spatial memory and wayfinding ability than those who navigate without assistance. The tool handles the task, but the underlying cognitive capacity weakens from disuse. The same dynamic may apply to strategic thinking. If AI consistently provides the integrated view, people may stop developing their own capacity to construct it.

A further concern is developmental. The experiences that build broad thinking often involve grappling with complexity directly: synthesising disparate information, identifying patterns, discovering connections that were not obvious. If AI handles these tasks, the learning opportunities that would normally develop integrative capacity are bypassed. The immediate output may be acceptable, but the pathway that would build deeper understanding is short-circuited.

Consequences of absence

When this capacity is missing, the effects are predictable but often slow to surface.

Decisions produce consequences that surprise their makers. Initiatives optimise one metric while degrading others. Problems solved in one area reappear in different forms elsewhere. Resources flow to whoever argues most persuasively for local needs rather than to where they would produce the greatest return for the whole.

Under pressure, the absence becomes more pronounced. Research on threat rigidity documents how stress narrows attention and reinforces reliance on familiar approaches. Without the habit of stepping back, decision-makers focus harder on what is directly in front of them, missing the systemic dynamics that may be driving the situation.

At the organisational level, the absence produces brittleness. Performance may be acceptable when conditions are stable, but the capacity to sense emerging patterns and respond proactively is missing. When disruption arrives, there is no shared framework for interpretation or coordination.

Post-mortems of corporate failures frequently identify this pattern. Leadership was not incompetent within their frame. Their frame was simply too narrow to encompass what was happening. By the time the limitations became obvious, the organisation had selected against the capacity that might have provided warning.

Building and sustaining the capacity

Unlike narrow technical skills that may become obsolete as tools evolve, this capacity has a structure that allows it to compound. Each complex situation navigated adds to a repertoire of patterns. Each long-term consequence observed refines the mental models used to anticipate future outcomes. The more varied the situations encountered, the broader the base of analogies available for future use.

The compounding depends on exercise. Individuals who remain in narrow roles for extended periods may find their systemic perspective weakening. Organisations that automate integrative work may inadvertently weaken the capacity in their people, even as they increase access to processed information.

Development typically occurs through experience rather than instruction: rotation through different functions, exposure to unfamiliar domains, involvement in decisions where trade-offs span boundaries. These experiences force the integration that builds the skill. Classroom training can introduce concepts, but the capacity itself develops through repeated practice in contexts that demand it.

Organisational conditions matter. Cultures that reward narrow optimisation and penalise questions about broader effects will suppress the behaviour that builds the skill. Cultures that value systemic awareness and create space for integrative thinking will develop it. The choice of what to measure, what to reward, and what questions to ask in decision-making shapes whether the capacity grows or atrophies.

Looking forward

The environments in which decisions are made will continue to grow more interconnected. The consequences of local actions will continue to propagate further and faster. The gap between what narrow optimisation achieves and what systemic awareness could achieve will continue to widen.

Tools will improve. AI will become more capable of pattern detection, simulation, and scenario generation. These developments will make the capacity for human synthesis more valuable, not less. The tools will handle processing. Humans will need to handle meaning: determining what the patterns signify, what the simulations imply, what the scenarios demand.

The individuals and organisations that develop this capacity deliberately will navigate complexity more effectively than those who assume the tools will substitute for it. The capacity is not guaranteed to develop on its own. It requires conditions that support it, experiences that build it, and awareness that it matters. The alternative is to discover its absence only when circumstances reveal what a broader view would have shown earlier.

Book a call with Rahim

"The goal isn't more technology. It's more capable humans."

Rahim Hirji