Synthetic Seniority

Synthetic seniority is the phenomenon where AI tools enable junior professionals to produce output that looks senior, sounds senior, and passes casual inspection as senior, while the underlying judgement, pattern recognition, and contextual wisdom that genuine seniority requires have not been built. The output is polished. The capability underneath it is not. Synthetic seniority is one of the least visible and most consequential effects of AI adoption in professional organisations, and it sits at the centre of the SuperSkills framework's argument that human capability must be deliberately designed rather than assumed.

What it looks like

A second-year consultant submits a strategy deck to a partner. The structure is tight, the analysis is credible, the recommendations are defensible. The partner approves it with minor edits. What the partner does not see is that the consultant used a large language model to generate the initial structure, draft the executive summary, and stress-test the recommendations before submitting. The work is good. Whether the consultant could have produced it without the AI is a question nobody thinks to ask, because the output is all anyone sees.

In a law firm, a trainee produces a client memo on a regulatory question. The memo is well-reasoned, clearly written, and cites the right authorities. A senior associate reviews it and notes that it is better than most trainee work. The trainee used an AI tool to identify the relevant case law, draft the initial argument, and check for gaps. The senior associate does not know this, and the trainee does not volunteer it, because nobody has established whether this is expected, permitted, or a problem.

In a financial services team, a graduate analyst builds a model that the vice-president describes as unusually mature. The graduate is praised in the team meeting. What the vice-president does not know is that the graduate used AI to structure the model, identify the assumptions, and flag the sensitivities. The graduate received credit for output she did not fully produce and is now expected to perform at that level consistently, with or without the tool.

These are not hypothetical examples. I have encountered versions of each in my conversations with leaders across professional services, financial services, legal, and consulting firms in the last eighteen months. The pattern is consistent enough to name.

How it works

Synthetic seniority is produced by the interaction of three forces.

The first is the nature of AI tools themselves. Large language models are trained on vast quantities of professional output: strategy documents, legal memos, financial analyses, board papers, consulting frameworks. When a junior professional uses one, the tool draws on patterns from thousands of senior-quality documents. The output inherits the structure, tone, and apparent rigour of senior work, because that is what the training data contains. The tool does not know the user is junior. It produces senior-sounding output regardless.

The second force is the invisibility of the assistance. Unlike asking a colleague for help, which is visible and social, using an AI tool is private. Nobody sees the prompt. Nobody sees the drafts. Nobody sees how much of the final output was generated and how much was shaped by the person submitting it. In most organisations there is no norm, no policy, and no cultural expectation that would make this visible. The junior has no incentive to disclose, and the senior has no mechanism to detect.

The third force is the way organisations assess capability. Promotion panels, performance reviews, and talent discussions are built around output. What did this person produce? How good was it? How does it compare to peers? These systems assume a stable relationship between output quality and underlying capability. AI breaks that assumption. A junior producing senior-quality output is no longer a reliable signal that the junior has senior-quality judgement. But the assessment systems have not caught up, and in most organisations nobody has yet asked whether they need to.

The three forces compound. The tools produce senior-looking output. The use is invisible. The assessment systems reward the output without questioning its origins. The result is a population of professionals who are advancing on the basis of work that overstates their current capability, into roles that will demand the judgement their career path has not yet built.

This is a drift problem. Nobody decided to promote people beyond their capability. Nobody chose to break the relationship between output and judgement. It happened because the tools arrived faster than the systems that would need to account for them.

What the research suggests

The clearest signal that synthetic seniority is already being felt comes from the EY 2025 Work Reimagined Survey, which covered 15,000 employees and 1,500 employers across 29 countries. The headline finding was that 88% of employees now use AI in their daily work, yet almost all of that use is limited to basic applications: search, summarisation, and first-draft generation. Only 5% are using AI in ways that transform how they work rather than simply accelerating what they already do.

The finding that matters for synthetic seniority sits one level deeper. Thirty-seven per cent of employees said they worry that overreliance on AI could erode their skills and expertise. That is not a speculative concern voiced by commentators. It is the felt experience of workers who can see, in their own practice, that the tool is doing work their capability used to do. They are not wrong to worry. The survey also found that organisations investing in AI on fragile talent foundations (weak learning cultures, insufficient development, and misaligned rewards) saw productivity benefits lag by over 40%. The tool without the human foundation does not produce the gains. It produces the appearance of gains, which is precisely what synthetic seniority describes.

What happens if it goes unaddressed

The consequences of synthetic seniority are not immediate. They are structural, and they surface over years rather than quarters.

The first consequence is a promotion pipeline that produces leaders who have never operated without AI support. When those leaders face a situation the AI cannot help with, whether that is a novel client problem, a reputational crisis, or a judgement call with incomplete information, they will lack the accumulated experience that previous generations of leaders built through years of doing the work manually. The gap will not be visible until the moment it matters.

The second consequence is a collapse of the signal that organisations rely on to identify talent. If output quality no longer correlates reliably with underlying capability, then the systems built to reward output (performance ratings, bonus allocations, promotion decisions) are rewarding the wrong thing. Organisations will promote on fluency and discover slowly that fluency and judgement are not the same.

The third consequence is cultural. When juniors are praised for AI-assisted output without disclosure, a norm is set. The norm is that the AI is part of you, that using it is unremarkable, and that the line between your work and the tool's work does not need to be drawn. In many contexts that norm is healthy. In the specific context of professional development, where the point of the work is to build the person doing it, that norm quietly removes the developmental signal from the work itself.

What to do about it

The response to synthetic seniority is not to restrict AI use. The economics and the capability case are too strong, and a ban would be both unenforceable and counterproductive.

The response is to separate the assessment of output from the assessment of capability, deliberately and visibly.

Some firms I work with have started requiring juniors to annotate their own AI use: not as surveillance, but as a development conversation. "Show me what you prompted, what the AI gave you, and what you changed." That conversation, done well, is itself a training exercise. It forces the junior to articulate what they understood and what they borrowed. It gives the senior a window into the junior's actual capability, not the capability the output implies.

Others are redesigning what counts as evidence of readiness for promotion. Rather than relying solely on output quality, they are introducing moments that test judgement directly: live client interactions, simulated crises, oral examinations on work the candidate submitted. These are not new ideas. The medical profession has used structured oral examinations for decades. What is new is the need for them in professions that have historically relied on output alone.

The SuperSkills framework positions this as an Augmented Mindset challenge: the capacity to work with AI deliberately, understanding what the tool contributes and what you contribute, and being honest about the difference. A professional with a strong Augmented Mindset does not hide the AI's contribution. They understand it well enough to explain where it helped, where it misled, and where their own judgement overrode it. That transparency is the antidote to synthetic seniority, and it is a capability that must be trained, not assumed.

"The goal isn't more technology. It's more capable humans."


"The goal isn't more technology. It's more capable humans."


Rahim Hirji