Neither AI Hype Nor Doom
In the last quarter, your organisation likely approved at least one AI initiative. You may have signed off on a pilot, allocated budget to a platform, or greenlit a vendor relationship. Your leadership team probably discussed AI at least twice. You might even have published a statement about responsible AI use.
And yet, if you are honest, the conversation probably oscillated between two poles. On one side: breathless enthusiasm about transformation, disruption, and competitive advantage. On the other: anxious hand-wringing about job losses, regulatory risk, and worst-case scenarios. The meeting likely ended without a clear framework for deciding which fears were legitimate and which opportunities were real.
This oscillation is not a failure of your leadership team. It reflects the broader public discourse, which has become trapped between two equally unhelpful extremes.
The common view is that organisations must choose between AI enthusiasm and AI caution. This is wrong because both positions share the same flaw: they treat AI as a force that happens to organisations rather than a tool that organisations shape through deliberate choices.
If your AI strategy is driven by either hype or doom, you will make decisions optimised for narratives rather than outcomes, and your organisation will either overinvest in capability you cannot absorb or underinvest in capability your competitors will use against you.
This is the drift versus design problem at its most visible. Drift happens when organisations let the loudest voices in the AI conversation set their agenda. They adopt tools because competitors did, or they delay adoption because a headline spooked the board. Design looks different. It starts with a clear-eyed assessment of what AI can and cannot do, builds human capability alongside technical capability, and makes decisions based on your specific context rather than generic narratives.
DEFINITION
The Third Way
A strategic posture toward AI that rejects both uncritical enthusiasm and paralysing pessimism in favour of deliberate capability building, grounded in realistic assessment of current technology and investment in enduring human skills.
Why it matters: Most AI failures stem not from technical limitations but from organisations operating at the extremes, either rushing deployment without readiness or avoiding engagement until forced by competition.
How it shows up: Organisations taking the Third Way can articulate what specific problems AI solves for them, can name the human capabilities that remain essential, and can describe their governance approach without either dismissing risk or catastrophising it.
The opposite: Defaulting to vendor narratives, competitor behaviour, or headline sentiment as the primary input to AI strategy.
The Hype Camp and Its Blind Spots
The hype camp consists of those who see AI as an unqualified good and imply it is a solution to nearly every problem. These voices trumpet breakthroughs loudly, often claiming we are on the verge of human-level intelligence or a technology-driven transformation of everything.
The blind spots are predictable. Hype-driven thinking overlooks practical realities: errors, bias, the gap between demonstration and deployment, the time required for organisational absorption. It ignores the difference between what AI can do in a controlled environment and what it can do reliably at scale in your specific context.
The organisational consequences are concrete. Companies that buy into unchecked hype chase fads and mirages. They pour resources into initiatives under pressure not to miss out, only to find the technology was not mature enough or the use case was ill-conceived. Inflated promises create later disillusionment that poisons the well for genuine innovation.
What I have observed in organisations is a consistent pattern: hype-driven adoption without corresponding investment in human readiness leads to expensive pilots that never scale, tools that employees work around rather than with, and a growing cynicism that makes the next initiative harder to launch.
The Doom Camp and Its Paralysis
The doom camp consists of those who see catastrophe lurking in every AI advance. These voices warn that AI will steal jobs en masse, entrench control in the wrong hands, or escape our oversight entirely. While some concerns are grounded in real risks, the doom camp often takes them to extremes that prevent productive action.
The blind spot is techno-paralysis: a belief that doing nothing is the only safe path. This posture fixates on worst-case hypotheticals at the expense of pragmatic engagement with current reality. It ignores that risk exists in inaction as well as action.
The organisational consequences are equally concrete. Companies that succumb to exaggerated fears risk stagnation. They refuse to adopt tools even when they could provide value, or delay innovation until conditions are perfect. In industry surveys, business leaders report that while they worry about AI misuse, they fear being left behind even more. Falling behind in AI adoption is now seen as a greater threat to competitiveness than economic recessions or regulatory changes.
What I have observed in organisations is that doom-driven avoidance creates a different kind of debt. Talent leaves for more forward-thinking competitors. Operational inefficiencies compound. And when adoption eventually becomes unavoidable, the organisation lacks the muscle memory and institutional knowledge that earlier engagement would have built.
MODEL
The Posture Spectrum
The model has three zones along a single axis.
The Hype Zone: urgency without strategy, adoption without readiness assessment, and metrics focused on deployment speed rather than value delivery.
The Doom Zone: delay as default, risk assessment without risk appetite, and governance that prevents engagement rather than enabling it safely.
The Third Way Zone: deliberate engagement, human capability investment alongside technical investment, and governance that creates clarity rather than paralysis.
Use this when: Your leadership team is setting or revisiting AI strategy, or when you notice your AI conversations oscillating between extreme optimism and extreme caution without resolution.
What I would do differently tomorrow: Map your current leadership conversation to the spectrum. If you are clustered at either extreme, ask what evidence would move you toward the centre. If you are already in the Third Way zone, ask what governance structures are keeping you there.
Why Both Extremes Fail in Practice
Neither extreme holds up in the real world because technology development is rarely all-or-nothing. AI progress is incremental and occurs within social, economic, and regulatory contexts. It brings both opportunities and limits, moves fast in some areas but slower in others, and can be governed with normal tools rather than panic or fantasy.
Even among early adopters, AI accounts for only a few percent of work tasks. Widespread adoption takes years, just as it did for electricity or the internet. Extreme predictions, whether utopian or dystopian, are simplistic. AI is neither going to solve all problems overnight nor ruin everything overnight.
Crucially, both viewpoints divert organisations from the middle path of responsible progress. Hyperbolic optimism can lead to corners being cut: deploying AI without ethical guardrails. Hyperbolic pessimism can lead to throwing out the baby with the bathwater: refusing engagement that would build necessary capability.
The truth is that society needs to balance innovation with caution. The goal should be to maximise benefits while minimising harms, and that requires a blend of enthusiasm and vigilance.
Wrong Approach vs Right Approach
The wrong approach treats AI strategy as headline-driven. The right approach treats AI strategy as context-driven, shaped by your specific problems and capabilities.
The wrong approach treats AI as either magic or menace. The right approach treats AI as a powerful but manageable tool that responds to governance.
The wrong approach invests only in technical capability. The right approach invests in human and technical capability together, recognising that one without the other creates fragility.
The wrong approach creates governance that prevents engagement. The right approach creates governance that enables safe engagement.
The wrong approach measures deployment speed. The right approach measures value delivery and capability growth.
The Cost of Getting This Wrong
Capability debt: Organisations that avoid AI build a growing gap between their current capacity and what the market requires, a gap that becomes harder to close the longer it compounds.
Credibility debt: Organisations that over-promise on AI create internal cynicism and external scepticism that make future initiatives harder to launch and fund.
Talent flight: Nearly half of professionals in recent surveys said they would consider leaving a company that lagged significantly in AI deployment. The doom posture accelerates this.
Governance theatre: Organisations at either extreme often create governance that looks serious but achieves nothing. Hype organisations rubber-stamp; doom organisations prohibit without enabling alternatives.
Strategic blindness: Both extremes prevent organisations from seeing AI clearly. Hype obscures limitations; doom obscures opportunities. Both lead to decisions based on narrative rather than evidence.
Diagnostic: Where Is Your Organisation?
Answer these three questions honestly:
Can your leadership team articulate three specific problems AI currently solves for your organisation, with measurable outcomes?
Can they name the human capabilities that remain essential regardless of AI advancement?
Can they describe your AI governance approach in a way that neither dismisses risk nor prevents engagement?
DIAGNOSTIC STATES
GREEN: Yes to all three. Your leadership has a grounded view of AI. Next step: Document your current posture and share it broadly. Build on this foundation.
AMBER: Yes to one or two. You have partial clarity but gaps that create risk. Next step: Identify which question you cannot answer and prioritise closing that gap in the next quarter.
RED: No to all three. You are operating at an extreme or without coherent strategy. Next step: Schedule a leadership session specifically to establish your Third Way position before your next AI investment decision.
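For teams that want to run this diagnostic across many business units, the mapping from answers to states is mechanical enough to script. This is a minimal sketch; the function name and data shape are illustrative assumptions, not part of the diagnostic itself:

```python
def diagnostic_state(answers):
    """Map yes/no answers to the three diagnostic questions
    onto the GREEN / AMBER / RED states described above.

    answers: an iterable of three booleans, one per question.
    """
    yes_count = sum(bool(a) for a in answers)
    if yes_count == 3:
        return "GREEN"   # grounded view: document and share your posture
    if yes_count >= 1:
        return "AMBER"   # partial clarity: close the gap next quarter
    return "RED"         # no coherent strategy: convene leadership first

# Example: yes on problems and capabilities, no on governance
print(diagnostic_state([True, True, False]))  # AMBER
```

The point of scripting it is not the trivial logic but the discipline: forcing each unit to record an explicit yes or no, rather than a hedge, for every question.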
The Third Way as Strategy
The Third Way is not a compromise between enthusiasm and caution. It is a fundamentally different posture that starts from different premises.
First premise: AI can greatly increase efficiency and output, but it also carries risks. It can erode human judgment, create accountability gaps, or tempt organisations to over-rely on automation. We do not downplay these issues. We face them directly.
Second premise: the organisations that win in the long run will not be those who either mindlessly adopt AI or summarily reject it. They will be those who redesign work around the human core, identifying what humans do best, fortifying those uniquely human skills, and using AI to augment rather than replace human decision-making.
Third premise: this requires investing in specific human capabilities that remain essential even as AI advances. These are the SuperSkills that technology cannot easily supplant: curiosity to continually learn and question, change-readiness to adapt as the landscape shifts, big-picture thinking to strategise amid complexity, empathy to lead and collaborate, global adaptability to operate across contexts, principled innovation to ensure ethical standards guide technology use, and an augmented mindset to effectively integrate AI into work and life.
These capabilities counterbalance AI weaknesses and guard against its risks. While AI handles narrow tasks and data at scale, it lacks holistic judgment, ethics, creativity, and interpersonal understanding. By developing these human capabilities, we ensure that people remain in the loop and capable of steering AI toward positive outcomes.
The Third Way Filter
Who uses it: Leadership team member, AI program lead, or anyone evaluating an AI initiative.
When: Before any AI investment decision, pilot launch, or vendor selection.
How: For each proposed initiative, require written answers to four questions: (1) What specific problem does this solve, with measurable outcome? (2) What human capability does this augment? (3) How will we know if the augmentation worked? (4) What is the governance mechanism that enables safe use?
Output: A one-page summary for each initiative that can be reviewed in five minutes and compared across initiatives.
How to know it worked: Initiatives that cannot answer all four questions get flagged for further work rather than approved or rejected outright. The conversation shifts from enthusiasm versus caution to evidence versus gaps.
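The filter can be sketched as a simple gate over the one-page summary. This is an illustrative sketch; the field names and return values are assumptions introduced here, not terms from the filter itself:

```python
# The four required questions from the Third Way Filter,
# keyed by illustrative field names for the one-page summary.
FILTER_QUESTIONS = [
    "problem_with_measurable_outcome",   # (1) what problem, what outcome?
    "human_capability_augmented",        # (2) which human capability?
    "augmentation_success_signal",       # (3) how will we know it worked?
    "governance_mechanism",              # (4) what enables safe use?
]

def review_initiative(summary):
    """Return 'proceed' only when all four questions have written
    answers; otherwise flag for further work, never outright reject."""
    missing = [q for q in FILTER_QUESTIONS if not summary.get(q)]
    if not missing:
        return "proceed"
    return "flag for further work: " + ", ".join(missing)

# Example: an initiative with no governance answer gets flagged, not killed
draft = {
    "problem_with_measurable_outcome": "cut invoice-matching time 40%",
    "human_capability_augmented": "analyst judgment on exceptions",
    "augmentation_success_signal": "exception resolution time per analyst",
}
print(review_initiative(draft))
```

Note that an incomplete summary is returned as a flag rather than a rejection, mirroring the principle above: the conversation shifts from enthusiasm versus caution to evidence versus gaps.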
The Strongest Objection
The strongest objection to the Third Way is that it sounds like fence-sitting. In a world where AI capabilities are advancing rapidly, perhaps decisive action in one direction is better than measured consideration. Perhaps the hype camp is right that early movers will capture value that cautious organisations miss. Perhaps the doom camp is right that the risks are so severe that caution is the only responsible posture.
This objection has validity. Measured consideration can become an excuse for inaction. Balance can become paralysis by another name.
But the evidence does not support either extreme as a viable long-term strategy. Organisations that rush to adopt without readiness create expensive failures. Organisations that refuse to engage create capability gaps that competitors exploit. The Third Way is not fence-sitting. It is the recognition that sustainable advantage comes from building capability rather than chasing or avoiding narratives.
Key Takeaways
If your AI strategy is driven by either hype or doom, you will make decisions optimised for narratives rather than outcomes.
The organisations that win in the long run will be those who redesign work around the human core.
The Third Way is not a compromise. It is a fundamentally different posture that starts from different premises.
Capability debt compounds. Every quarter you avoid engagement, the gap between your current capacity and market requirements grows.
What the Evidence Shows
What we know
AI adoption accounts for only a small percentage of work tasks even among early adopters. Widespread diffusion takes years.
Business leaders report fearing competitive lag more than AI risks, economic recession, or regulatory change.
Talent increasingly considers AI strategy when evaluating employers. Nearly half would consider leaving a company that lags significantly.
What we do not know yet
The precise timeline for AI capability development in different domains.
The long-term effects on labour markets across different skill levels and geographies.
What I have observed in organisations
The most successful AI implementations pair technical investment with explicit human capability development.
Governance quality correlates more strongly with outcomes than governance quantity.
Organisations that can articulate their AI posture clearly make better and faster decisions than those who cannot.
The Question That Remains
Your organisation will make AI decisions this quarter. Some will be explicit, debated in leadership meetings. Others will be implicit, made by teams and individuals responding to the tools and pressures in front of them.
The question is not whether you will engage with AI. The question is whether your engagement will be shaped by the narratives that happen to be loudest this week, or by a deliberate posture that you have chosen and can defend.
The Third Way is available. The only barrier is the discipline to hold it.
Rahim Hirji

