The AI Reading Strategy That Turns Books Into Decisions

Your team's AI reading is creating the illusion of alignment, not the reality. Fix it not by reading more, but by using the Reading Compass to diagnose your team's single biggest blind spot and assigning three targeted books. This article provides the diagnostic, the shortlist, and the discussion format. If you are looking for the long list of the 30 best books about AI, you can find it here.

Undirected reading produces well-informed individuals and misaligned teams.

Your board has read plenty about AI. The CFO dog-eared The Coming Wave. The CHRO circulated an article on algorithmic bias. The CTO quoted Ethan Mollick in every strategy slide. By lunch at the last offsite, they were debating whether AI was an existential threat or a productivity tool, and still had not agreed whether to approve the pilot on the agenda.

Twelve months later, the leadership team has consumed thirty books between them, each read as personal curiosity rather than team capability. They still cannot make one decision together, and that is not a knowledge gap. It is strategic drift.

The common view is that well-read leaders make better AI decisions. The common view is wrong. When leadership teams read as individuals rather than as a team, they create the illusion of alignment while hiding critical capability gaps. If your leadership team reads widely but not strategically, you breed false confidence, misaligned investment, and stalled decisions. That is how you end up buying tools before you agree the delegation boundary, or writing an ethics charter before you understand the workflow.

The fix is not more reading. It is strategic reading: a gap-mapped approach that turns knowledge into shared language and shared language into decisions.

AI Reading Strategy (definition): A gap-mapped approach to selecting and sequencing AI literature across a leadership team to build shared vocabulary, aligned risk assessment, and decision-ready understanding. The opposite: the AI book club, well-intentioned, unsequenced, producing interesting dinner conversation and zero operational change.

The Cost of Getting This Wrong

Before the framework, the pain.

  • Decision paralysis. Your leadership team uses the same words ("bias," "alignment," "readiness") to mean different things. Every governance conversation starts from scratch. Pilots stall. Partnerships drift.

  • False confidence. Leaders who have read one corner of the landscape believe they understand the whole map. They have a quarter of the picture and the conviction of someone holding the full thing.

  • Capital misallocation. Leaders without operational understanding cannot challenge vendor claims. They buy tools that solve impressive but irrelevant problems. Misaligned teams spend more on pilots, then spend again to unwind them.

  • Reputational exposure. Leaders without structural critique produce AI ethics policies that satisfy PR and fail employees, opening the door to talent attrition and regulatory scrutiny.

  • Governance theatre. Policies written for today's AI, blindsided by tomorrow's.

Symptom checker: Is your team in drift?

  • Your AI governance conversations restart from first principles every meeting.

  • Procurement is driving AI strategy, not the leadership team.

  • Your AI ethics charter was written by comms or legal, not the risk team and the people affected.

  • You have spent more on AI pilots than on defining what "success" looks like.

  • The board agrees AI is important but cannot agree on what to do about it.

If you ticked two or more, you are in drift.

Drift vs Design

Drift exists when your leadership team cannot answer the same three questions with the same words:

  1. Which specific decisions are we currently delegating to AI, and which are human-only red lines?

  2. What AI decisions are we making in the next 90 days?

  3. What harms are we willing to tolerate, and which are non-negotiable?

Undirected reading (Drift) vs strategic reading (Design):

  • Drift: each leader reads what interests them. Design: reading is mapped to team-level capability gaps.

  • Drift: vocabulary diverges and debates restart every meeting. Design: shared terms emerge and debates move forward because definitions are stable.

  • Drift: "We buy tools and look for use cases." Design: "We define the decision friction, then find the solver."

  • Drift: books are consumed and forgotten. Design: books are sequenced, discussed, and converted into tools.

  • Drift: "We've all read about AI." Design: "We have shared language for the three decisions in front of us."

The Model: The Reading Compass

Four quadrants. Each represents a distinct leadership need and a distinct failure mode. No leader needs to read all the books below. Every leadership team needs coverage across all four.

The rule: no quadrant, no decision.

A decision made without operational reality (Q3) is a vendor's dream. A decision made without structural critique (Q2) is a reputational time bomb. A decision made without governance (Q4) is a regulatory gap you will discover in public.

Quadrant 1 – Situational Awareness. What is happening, why now, and how should leaders frame it? This is where most leaders start, and, dangerously, where most stay.

Quadrant 2 – Structural Critique. Who pays the cost and where does power concentrate? Leaders who skip this quadrant make confident decisions built on incomplete maps.

Quadrant 3 – Operational Reality and Limits. How does this actually work, what does it cost, and where does it break? The danger zone. Every leader thinks they understand AI because they use ChatGPT. Confidence here inversely correlates with actual knowledge.

Quadrant 4 – Governance and Long-Term Risk. What are the control problems and who is responsible? This quadrant exists to inform policy, delegation boundaries, and escalation rules, not speculation. Leaders who skip it build policies that sound right and fail under pressure.

Industry calibration:

  • Financial services: weight Q2 and Q4 higher.

  • Operations-heavy industries: start with Q3 or you will buy fantasy technology.

  • Media and creative services: Q2 is where your workforce risks live.

What I would do differently with my team immediately:

  1. Audit which quadrant your leadership team has concentrated in.

  2. Identify the gap quadrant.

  3. Assign three books from it before the next strategy session.

Diagnostic: The Reading Compass Audit

Score your leadership team's collective coverage per quadrant:

  • Green: At least three leaders have read deeply here. They reference shared concepts in decisions without prompting.

  • Amber: One or two leaders have read here. Knowledge is held individually. The team relies on one person to "translate."

  • Red: No systematic reading. The team is operating on assumptions, headlines, or vendor briefings.

Green across all four: Run a tabletop exercise on an AI incident. Review your AI portfolio against risk appetite. Set model risk thresholds.

Amber in one or two: Assign the gap quadrant as structured pre-reading before your next strategy session.

Red in any quadrant: That is your most dangerous blind spot. Three books, three leaders, thirty days.
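If your team's reading log lives in a spreadsheet, the traffic-light rule above reduces to a few lines of code. Here is a minimal sketch in Python, assuming you have already counted how many leaders have read deeply in each quadrant; the team data and names below are hypothetical illustrations, not a prescribed tool:

```python
# Minimal sketch of the traffic-light rule above.
# Thresholds come straight from the text; the team data is hypothetical.

def score_quadrant(deep_readers: int) -> str:
    """Return Green/Amber/Red for one Reading Compass quadrant,
    given the number of leaders who have read deeply in it."""
    if deep_readers >= 3:
        return "Green"  # shared concepts referenced without prompting
    if deep_readers >= 1:
        return "Amber"  # knowledge held individually; one "translator"
    return "Red"        # assumptions, headlines, vendor briefings

# Hypothetical leadership team: heavy in Q1, silent in Q2.
coverage = {"Q1": 4, "Q2": 0, "Q3": 1, "Q4": 2}

scores = {q: score_quadrant(n) for q, n in coverage.items()}
gap = min(coverage, key=coverage.get)  # the weakest quadrant

print(scores)              # {'Q1': 'Green', 'Q2': 'Red', ...}
print("Start with:", gap)  # Q2: three books, three leaders, thirty days
```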

What We Know, What We Don't, What I've Observed

What we know. Shared mental models improve team decision quality. This is well-established in organisational psychology (Cannon-Bowers & Salas, 2001; DeChurch & Mesmer-Magnus, 2010). Teams that share vocabulary and frameworks make faster, more consistent decisions under uncertainty. There is no reason to believe AI decisions are exempt from this pattern.

What we don't know yet. Whether structured reading programmes produce measurably better AI governance outcomes than other interventions (executive education, consulting briefings, hands-on tool experimentation). The evidence base for "reading as team capability building" is strong in principle but thin on AI-specific measurement. We are operating on reasonable inference, not controlled trials.

What I've observed in organisations. Leadership teams that read strategically, even modestly, converge on shared definitions faster and spend less time re-litigating first principles. The most common failure mode is not insufficient reading but lopsided reading: heavy concentration in Q1 and Q4 (the "big picture" and "big risk" quadrants), with almost nothing in Q2 and Q3 (the quadrants that connect to live decisions about people, money, and systems).

The Strategic Shortlist: 4 Books Per Quadrant

Selected for impact, accessibility, relevance to live decisions, and the ability to generate shared language in a single discussion. Assign by quadrant, not by personal interest. The goal is coverage, not consensus. ★ marks the start-here pick.

Quadrant 1 – Situational Awareness

★ Co-Intelligence – Ethan Mollick (2024). The most practical book for people living with AI at work. Best for: adoption decisions in the next quarter.

The Coming Wave – Mustafa Suleyman with Michael Bhaskar (2023). The foundational strategic text of the pre-regulation era. Best for: framing why containment is harder than deployment.

Supremacy – Parmy Olson (2024). The definitive narrative of how the initial AI race was fought. 2024 FT Business Book of the Year. Best for: competitive dynamics before partnership decisions.

Nexus – Yuval Noah Harari (2024). The widest lens on AI available. Best for: board-level sensemaking and historical context before committing to a strategy.

Quadrant 2 – Structural Critique

★ Atlas of AI – Kate Crawford (2021, updated 2025). AI as industrial system: labour, minerals, energy, surveillance, power. Best for: any leader signing off on procurement without understanding the supply chain.

AI Snake Oil – Arvind Narayanan & Sayash Kapoor (2024). Dismantles exaggerated claims and equips leaders to tell breakthroughs from vendor theatre. Best for: pre-reading before any RFP.

Code Dependent – Madhumita Murgia (2024). Human-first reporting on the people in AI's shadow. Best for: CHROs and anyone responsible for workforce impact.

Unmasking AI – Joy Buolamwini (2023). Bias in deployed systems and the fight for accountability. Best for: teams deploying AI in hiring, customer service, or any human-facing context.

Quadrant 3 – Operational Reality and Limits

★ Power and Prediction – Agrawal, Gans, Goldfarb (2022). AI lowers the cost of prediction, which reshapes decisions, organisations, and markets. Best for: any leader approving budgets or evaluating build-vs-buy.

Artificial Intelligence: A Guide for Thinking Humans – Melanie Mitchell (2019). What modern AI can and cannot do, and why generalisation remains hard. Best for: challenging technical claims in plain language.

You Look Like a Thing and I Love You – Janelle Shane (2019). Funny, revealing, and rigorous about failure modes. Best for: understanding why AI fails, not just that it fails.

These Strange New Minds – Christopher Summerfield (2025). How LLMs emerged, how they process information, and where the analogy with human minds breaks. Best for: going one level deeper on capability without going technical.

Quadrant 4 – Governance and Long-Term Risk

★ The Alignment Problem – Brian Christian (2020). Why optimisation fails in the real world and what alignment research actually tries to do. Best for: writing policy on deployment, risk appetite, or human oversight.

Human Compatible – Stuart Russell (2019). The control problem explained by one of the field's central figures. Best for: governance committees setting guardrails for autonomous systems.

The Algorithm – Hilke Schellmann (2024). Hiring systems, workplace surveillance, scoring, automated management. Best for: CHROs and GCs evaluating AI in HR and workforce monitoring.

Life 3.0 – Max Tegmark (2017). Long-run scenarios and governance choices. Best for: board members who need to think beyond the 3-year horizon.

The 4-Book Starter Kit

For teams that are Red across multiple quadrants. One book per quadrant. Read by the CEO and at least one direct report. Discuss before any AI strategy session.

  • Q1: Co-Intelligence (Mollick). Why this one: practical, current, grounded in real use.

  • Q2: Atlas of AI (Crawford). Why this one: the corrective every confident leader needs.

  • Q3: Power and Prediction (Agrawal et al.). Why this one: the economic logic behind every AI investment.

  • Q4: The Alignment Problem (Christian). Why this one: why good intentions produce bad systems.

Role-based paths (3 books per role):

  • CEO: The Coming Wave (Q1), Atlas of AI (Q2), Human Compatible (Q4).

  • CHRO: Code Dependent (Q2), The Algorithm (Q2/Q4), Power and Prediction (Q3).

  • CTO / CDO: Co-Intelligence (Q1), AI Snake Oil (Q2), These Strange New Minds (Q3).

  • CFO / CRO: Power and Prediction (Q3), AI Snake Oil (Q2), The Alignment Problem (Q4).

  • GC / Chief Risk Officer: Human Compatible (Q4), The Algorithm (Q2/Q4), Atlas of AI (Q2).

The complete reference list of all 30 titles, mapped to quadrants with author, year, and decision relevance, is available as a separate downloadable resource.

The Leadership Reading Audit

Who uses it: CEO, CHRO, or transformation lead. When: Before any AI strategy session, board education programme, or governance review. Pre-audit question: What is the one AI-dependent decision we must get right in the next quarter?

How:

  1. List every AI-related book, article, or report your leadership team has read in the past 12 months.

  2. Map each title to one of the four Reading Compass quadrants.

  3. Score each quadrant Green / Amber / Red.

  4. Identify the weakest quadrant.

  5. Assign three books from that quadrant to at least three leaders, with a 30-day deadline and a structured discussion scheduled.
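Steps 1 to 4 can be automated over the same kind of spreadsheet export. A minimal sketch under stated assumptions: the reading_log entries and the QUADRANT_OF title-to-quadrant map below are hypothetical placeholders, and the mapping still has to be maintained by hand (step 2 is a judgment call, not a lookup):

```python
# Sketch of steps 1-4 of the audit. The reading_log data and the
# QUADRANT_OF mapping are hypothetical; maintain the mapping by hand.

from collections import defaultdict

QUADRANT_OF = {  # step 2: each title maps to one Reading Compass quadrant
    "Co-Intelligence": "Q1",
    "Atlas of AI": "Q2",
    "Power and Prediction": "Q3",
    "The Alignment Problem": "Q4",
}

reading_log = [  # step 1: everything the team read in the past 12 months
    ("CEO", "Co-Intelligence"),
    ("CFO", "Co-Intelligence"),
    ("CTO", "Co-Intelligence"),
    ("CHRO", "Atlas of AI"),
]

readers = defaultdict(set)
for leader, title in reading_log:
    readers[QUADRANT_OF.get(title, "unmapped")].add(leader)

weakest, weakest_count = None, float("inf")
for q in ("Q1", "Q2", "Q3", "Q4"):  # step 3: score each quadrant
    n = len(readers[q])
    score = "Green" if n >= 3 else "Amber" if n >= 1 else "Red"
    print(q, score)
    if n < weakest_count:
        weakest, weakest_count = q, n

print("Gap quadrant:", weakest)  # step 4: assign three books from it
```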

The 45-minute discussion:

  • 10 min: shared definitions (one slide, lock in terms for your decision).

  • 20 min: implications for three live decisions (what does the gap quadrant change?).

  • 15 min: agree one guardrail and one workflow experiment.

Output: A Strategic Readiness Memo: (1) our key decision, (2) our critical blind spot, (3) our mitigation: books assigned, leaders responsible, guardrail agreed, experiment defined.

How to know it worked: The go/no-go on the pilot takes 45 minutes instead of four meetings.

How This Fails

The Homework Trap. The CEO assigns books; the leadership team resists. Fix: frame it as "pre-reading for a decision we need to make," not professional development.

The Book Club Death. Discussion becomes literary appreciation, not operational translation. Fix: anchor every discussion to a live decision. No decision anchor, no discussion.

The Single-Reader Risk. One person reads all four quadrants and becomes the team's "AI explainer." Fix: distribute reading so capability is held collectively, not individually.

The "We Don't Have Time" Objection

The strongest objection is not that this is too much work. It is that it feels unnecessary, because your team already reads. But reading without a compass is navigation without a map. You will move, but you will not know if you are going in circles.

Minimum viable version (30 days): Run the diagnostic. Identify the Red quadrant. Assign one book plus one author interview or podcast per leader. Hold one 45-minute discussion tied to a live decision. Produce one shared glossary page and one guardrail.

Standard version (90 days): Month 1, one book per gap quadrant plus one adversarial report. Month 2, one policy decision and shared glossary locked in. Month 3, review against live pilot, repeat diagnostic, assign next gap.

"Undirected reading produces well-informed individuals and misaligned teams."

"The board had approved an AI ethics charter and a £12 million transformation budget, but no one could explain what a large language model could and could not reliably do."

"Three books in the right quadrant, read by the right people, discussed in the right room, will do more than twenty books consumed alone."

Book Bridge

Chapter 7 of SuperSkills maps how the Reading Compass feeds into the Delegation Boundary, a decision tool that prevents the workflow failures described above, and introduces the Augmented Mindset: the capability that determines whether leaders engage with AI as passive consumers or active designers of human-machine collaboration. For the full framework, including the Five Loops, see SuperSkills: The Seven Human Skills for the AI Age (Kogan Page, July 2026).

Internal links

  • Up: Drift vs Design: The Defining Choice of the AI Era

  • Sideways: The Augmented Mindset: What It Actually Means to Work With AI

  • Forward: What Does "AI-Ready" Actually Mean for a Leadership Team?

The test is whether your next AI decision (hire, deploy, buy, or wait) takes 45 minutes instead of four meetings because the hard thinking happened in the reading, not in the room.

If it does not, you do not have a knowledge problem. You have a team alignment problem.

Start with one quadrant. Assign three books. Schedule one conversation. Then ask: what do we now agree on, and what decision can we make today that we could not make before?

Book a call with Rahim

"The goal isn't more technology. It's more capable humans."


"The goal isn't more technology. It's more capable humans."


Rahim Hirji