The Missed Reps

The real cost of AI in service isn't what it does to humans today. It's what it does to the humans who were supposed to become senior tomorrow.

Primary reader: Board member, CEO, CHRO, Transformation lead

Primary use case: Strategy offsite pre-read, executive briefing, governance framework anchor

There is a growing argument in the business press, and it sounds reasonable on first hearing. It goes like this. Large language models are now articulate, patient, consistent, and knowledgeable in ways that human agents often are not. Customers in certain contexts are starting to prefer them. If that trend continues, AI will set a standard for service that raises the bar for actual humans. The mediocre agent, the distracted analyst, the tired associate on their fourth call of the afternoon, may find themselves measured against a machine that does not get tired, does not go off-script, and does not have a bad morning.

The argument is sincere, and the observations behind it are accurate. I spoke to a partner in a services firm recently who told me his clients had quietly stopped asking for junior analysts on calls, preferring the AI draft to the junior's first attempt. It is happening, and it will accelerate.

But the argument measures the wrong thing. It measures the quality of a single interaction at a single moment. It does not ask what those interactions were producing before the AI took them over. In my discussions with leaders across more than thirty countries over the last three years, the clearest pattern I see is not that AI is raising the bar for humans. It is that AI is quietly removing the reps that made humans good at the job in the first place. The missed reps are the cost nobody is measuring, and they are larger than the efficiency gains that triggered them.

The bar argument

Before dismantling the argument, it is worth stating it fairly.

Jonathan Peachey put the question most sharply in a Substack essay in October 2025. At an AI ethics event at the House of Lords, Dr Claire Benn of Cambridge's new Ethics of AI, Data and Algorithms programme had asked a roomful of senior leaders what tasks they would still want to do themselves in a future where AI capabilities far exceeded today's. They held on to activities that created human connection or were intrinsically satisfying. They happily handed over the mundane. Then one attendee said that his firm's clients were becoming less tolerant of human contact-centre agents, preferring the concise, consistent style of the AI bot. Peachey asked the question that sat underneath it. As LLMs get better at seeming human, will they raise the bar for actual humans? Will we start judging one another against the calm, articulate, knowledgeable and perfectly patient standard set by machines?

The observation is consistent with what the research finds. A September 2025 meta-analysis published in the Journal of Marketing by Holger Roschk at Aalborg University Business School and colleagues, drawing on 327 experimental studies involving almost 282,000 participants, found that in many service contexts artificial agents were received almost as positively as human ones, and in some contexts more positively. Discretion-sensitive purchases, standardised calculations, and negative responses were the settings where AI outperformed humans in customer preference. The human advantage, when present, concentrated in emotionally charged or highly ambiguous interactions. The rest of the customer journey, the study suggests, may be ground on which AI now competes with humans credibly or wins outright.

So the provocation has teeth. Against the AI's calm, articulate, patient, knowledgeable baseline, the variable human is at risk of looking worse. Not because humans have got worse. Because the bar has moved, and the bar now includes things, like infinite patience and perfect consistency, that humans were never trying to offer in the first place.

If the argument ended there, it would be difficult to answer. But the argument assumes the only thing a human interaction produces is the interaction itself. And that is not how humans are made.

The missed reps

Every senior in a professional services firm, in a bank, in a consultancy, in a hospital, in a law firm, became senior by doing work that, in hindsight, mostly did not need to be done by them. The junior lawyer who took the notes at the client meeting. The first-year associate who drafted the memo that was rewritten three times before it was sent. The graduate analyst whose first model contained an error that the vice-president caught and explained. The contact-centre agent who handled a furious caller at 4pm on a Friday and figured out, by trial and error and with help from a supervisor, what to do when the script ran out.

None of those reps were efficient. Most of them produced outputs that the organisation could have got faster and cheaper elsewhere. That was not the point. The point was that the human doing the work was absorbing something through the work. Context. Tone. The shape of the client's worry underneath the question they had asked. The feel of when an answer was complete and when it needed another layer. The taste for what good looked like in this firm, this sector, this moment.

Call these the missed reps. They are what AI quietly collects when a firm re-routes the interactions that used to build its people. The tier-one call handled by a bot is also the call that would have taught a new agent what a customer sounds like when they are about to escalate. The first-draft memo produced by the language model is also the draft that would have taught the associate which questions to ask the partner before starting. The calculation the AI ran in three seconds is also the calculation the graduate would have laboured over for an afternoon, making mistakes, noticing patterns, forming the judgement they will need when, in ten years, they are the senior who has to spot that the AI has done the calculation wrong.

The missed reps do not show up in this quarter's results. They show up in the bench strength of the firm five and ten years out. They show up in the conversation at a promotion panel when the committee realises that the candidate who looks excellent on output has never had to sit with a client through a difficult meeting and absorb how to be useful when the plan falls apart. They show up in the leadership pipeline that suddenly has a shape nobody noticed it was acquiring.

This is not a new kind of problem, and the argument has a long lineage. Schön wrote about reflective practice in the 1980s; the medieval guilds had the apprenticeship model long before that. Organisations have always debated whether to keep junior work in-house or outsource it to back offices and shared-service centres. But the current moment is different in scale and in kind. The work that produced seniors is being removed from the firm altogether, to a system that cannot, by its nature, be promoted. The reps are not being outsourced. They are being retired.

What the evidence suggests

Three strands of recent work deserve to be read together.

The first is the Klarna case, which is the most publicly documented version of the pattern. Between 2022 and 2024 the Swedish fintech eliminated approximately 700 customer service positions, replacing them with an AI assistant built in partnership with OpenAI. In February 2024 the company announced that the assistant had handled 2.3 million conversations in its first month of operation and was doing the equivalent work of 700 full-time agents. The CEO Sebastian Siemiatkowski presented the move as a template for the industry. By mid-2025 Klarna was rehiring. Siemiatkowski acknowledged publicly that the company had "overestimated AI's capabilities and underappreciated the human aspects of service delivery," and that customer satisfaction had dropped on complex cases. Klarna's reversal is the headline story. What is told less often is that the firm now has to rebuild, from scratch, a customer service capability that developed over years of human agents handling both routine and complex interactions. The reps cannot be recovered retrospectively. The new agents will learn, but they will learn on a different, narrower set of interactions, because the AI still handles most of the routine traffic. The people who would have learned by volume have fewer reps available to learn on.

The second is the argument that Steve Hasker, the CEO of Thomson Reuters, made in a Fortune commentary in November 2025. Drawing on a conversation with Nandan Nelivigi, a partner in New York at the global law firm White & Case with a thirty-year corporate law career, Hasker described the way entry-level legal work has been progressively automated over decades, and the way generative AI has accelerated that process into a different register. Nelivigi put it directly. "A lot of the process with AI is not going to be happening in a big conference room where people can observe how others do it. It's going to be happening on your personal computer, on your screen or on your phone. So there needs to be a different approach to conveying some of the basic skills on how people will need to work and be trained, and to train themselves to some extent." Hasker's own survey of nearly 2,300 knowledge workers, Thomson Reuters' annual Future of Professionals report, found that 81% had used AI-powered tools to start or edit their work, while only 22% of their employers had a clear AI strategy. The training systems are behind the tool use by a considerable margin, and the senior professionals who ought to be redesigning the apprenticeship model are themselves still adjusting to the environment in which their juniors will now be formed.

The third strand is the research on where AI actually lifts performance. The 2025 study by Cui and Demirer, covering nearly 5,000 developers at Microsoft, Accenture, and a Fortune 100 manufacturer, found that AI gave junior developers productivity gains two to three times larger than those enjoyed by senior developers. This is widely reported as evidence that AI closes the gap between juniors and seniors. The study measured productivity, not the formation of judgement, and I should be clear that what follows is my reading of the data rather than its finding. I think the gap closes partly because AI is doing the work that used to distinguish the two. A junior who produces senior-level output today because the AI is pulling them up is also a junior who has not yet built the judgement that made the senior worth pulling up from in the first place. The output looks senior. The capability underneath it is not. When that junior becomes the senior in charge of the AI, the absence of that underlying capability will matter.

Read together, the three strands point in the same direction. Where AI is deployed most aggressively, in exactly the tasks that juniors used to grow through, short-term output improves and the firm's long-term capability erodes. The firms that deploy AI most eagerly are also, usually unknowingly, building the largest long-term capability gap.

The deliberate response

The answer is not to be sentimental about manual processes or to hold back from AI adoption. The economics will not allow it and the case for AI in most service contexts is real.

The answer is to treat the apprenticeship question as a first-order strategic decision rather than an HR afterthought. Every AI deployment that removes a category of interaction from humans is also removing a learning opportunity. Leaders should assume, by default, that they must replace the learning value of the removed reps with something else, deliberately, or accept that their senior bench will hollow out.

The firms I have seen doing this well are doing three things in particular.

They are counting. Not usage of AI tools, but the deliberate practice a junior gets in a year. A firm that knows its first-year associates used to handle forty client interactions a year and now handle twelve is a firm that can make a decision about what to do about the other twenty-eight. A firm that does not count cannot make that decision.

They are redesigning what juniors do, rather than simply giving them less. The best response is not to have graduates review AI output, which teaches them to recognise surface error but not to build judgement. The best response is to give them a smaller number of harder interactions, with more senior coaching around each one. Fewer reps, of better quality, with intentional development structure around them. This costs more per rep and produces, in time, a stronger senior.

They are protecting the reps the AI cannot replace. The hard client meeting, the genuine complaint, the negotiation where the other side does not follow a predictable script. These are the interactions that teach most and the ones that AI handles worst. A firm that routes them to AI because AI is cheaper is optimising for today's margin at the cost of tomorrow's capability. A firm that deliberately routes them to humans, even where an AI solution exists, is making a different bet. It is betting that a generation of people who have handled hard interactions will be worth more, in fifteen years, than the labour cost saved by routing them to a bot.

None of this is complicated. It is, however, uncomfortable, because it requires leaders to hold a short-term cost in order to protect a long-term capability whose absence will not be visible for years. Most organisations are not set up for that kind of decision. The performance system rewards this quarter's efficiency, not the 2035 partnership bench.

What we are actually measuring

Return for a moment to the provocation. Will AI raise the bar for humans?

In a single interaction, measured on a single dimension at a single moment, yes. A well-deployed AI will often be more patient, more articulate, more consistent, and more available than the human it replaces. That is a real fact about the technology, and it is going to change customer expectations.

But the service interaction is not the only product of a service interaction. The other product is the person doing it. Over a career, that person is the accumulated result of every hard call they handled, every first draft they got wrong, every awkward meeting they survived. The quality of an industry, ten years from now, will be set by what we let that person do between now and then. We are in the middle of quietly deciding to let them do less of it, on the grounds that the AI does it more smoothly today.

Peachey's question, which is a good one, was whether AI raises the bar for humans. The question that sits underneath it is whether we still know how to make humans. Not in the abstract. In the specific, patient, accumulated sense of how a firm turns a graduate into a partner, an agent into a manager, an analyst into someone whose judgement you would trust in a crisis.

Peachey, incidentally, is one of the sharpest voices writing about AI and professional services at the moment, and his Substack is worth following. He asked the right question. I just think the answer sits one layer deeper than the interaction itself.

The firms that answer that question deliberately will build seniors who are more capable than their predecessors, because their AI-augmented juniors will have been given reps of a better kind than the old reps could provide. The firms that do not answer it will not notice the cost for several years. When they do notice it, the cost will be the absence of people they had assumed would be there, and there will be no quick way to produce them. The efficiency was real, and so is the capability gap that is now compounding underneath it. Both need to be on the same page of the strategy document.

_______________________________________________

Rahim Hirji is the founder of The SuperSkills Intelligence Company and author of SuperSkills: The Seven Human Skills for the AI Age (Kogan Page, July 2026). He advises organisations on human capability development in the AI age.

Book a call with Rahim


"The goal isn't more technology. It's more capable humans."



Rahim Hirji