Dear reader,
For the past several weeks, we have explored the mechanics of AI governance, from the unglamorous but essential work of building a solid data foundation to the complex challenge of controlling autonomous agents. But behind every technical and regulatory debate lies a deeply human question, one that is likely on the mind of every member of your team: “Will this technology take my job?”

This is not an unreasonable fear. Recent articles, like a widely circulated piece from Axios, paint a grim picture of mass displacement, feeding a narrative of human obsolescence. While it is true that AI will automate many tasks, I believe this focus on replacement is a fundamental misreading of the coming transformation. It is a failure of imagination.

The more interesting, and I would argue more accurate, story is not one of replacement but of augmentation. The future does not belong to the AI that replaces the lawyer, the analyst, or the strategist. It belongs to the lawyer, the analyst, and the strategist who learn how to use AI to amplify their skills to an extraordinary degree. This issue is about how to lead your organisation through that transformation, moving from a culture of fear to one of intelligent, human-led augmentation.
The Rise of the Centaurs: New Hybrid Roles
The idea that AI will simply eliminate jobs wholesale is a blunt and unsophisticated prediction. A more likely outcome is the creation of a new class of hybrid roles—“Centaurs,” as they are sometimes called—professionals who combine their deep human expertise with the computational power of AI. These are not science fiction; they are emerging in organisations right now.
The AI Trainer / Curator:
Responsibilities: This is the person who teaches the AI. They are responsible for curating the high-quality, proprietary data that an AI is trained on, and for providing the continuous feedback needed to correct its mistakes and refine its performance. They are part data steward, part subject-matter expert, and part teacher.
Key Skills: Deep domain expertise is non-negotiable. An AI trainer for a legal AI must be an experienced lawyer. They also need strong analytical skills to spot subtle biases in data and a patient, pedagogical mindset.
Example: At a major investment bank, a veteran financial analyst now spends half her time “training” a proprietary market analysis AI. She feeds it curated research reports and uses her decades of experience to correct its interpretations, teaching it the unwritten rules and nuanced context of their specific market niche.
The AI Ethicist / Auditor:
Responsibilities: This role serves as the organisation’s conscience. They are responsible for running the “Ethical Litmus Tests” we discussed in our last issue, conducting AI Impact Assessments, and leading the “red teaming” exercises designed to uncover hidden biases and potential harms.
Key Skills: This is a deeply multidisciplinary role, requiring a background in ethics or law, a strong understanding of technology, and the diplomatic skill to challenge technical teams without alienating them.
Example: A European insurance company has created a small team of AI Ethicists. Before any new AI-powered underwriting model is deployed, this team must sign off on a formal audit, which includes statistical bias testing and a qualitative assessment of its potential impact on vulnerable customers.
The AI System Orchestrator:
Responsibilities: As organisations deploy not one but dozens of AI tools and agents, a new role is emerging: the orchestrator who designs how these different systems interact with each other and with human workflows. They are the architects of the human-AI collaboration process.
Key Skills: This requires a unique blend of systems thinking, user experience (UX) design, and a deep understanding of business processes. They are less focused on building individual models and more focused on designing the entire factory.
Example: A large logistics company has an AI System Orchestrator whose job is to design the workflow between an AI that predicts shipping delays, an agent that automatically re-routes shipments, and the human logistics managers who must approve high-cost changes.
The Enduring Human Advantage
The common thread in all these new roles is that they amplify uniquely human skills. While AI is exceptionally good at calculation, pattern recognition, and prediction, there are several areas where humans retain a profound and, I believe, enduring advantage.
Complex Critical Thinking: An AI can analyse a dataset and tell you what happened. A human expert can look at the same result and tell you why it matters. This ability to apply context, to understand second- and third-order consequences, and to ask the right questions remains a deeply human skill. AI can provide a beautifully rendered map, but it takes a human to decide where to go.
True Creativity & “Zero-to-One” Innovation: Generative AI is a master of recombination. It can brilliantly remix existing ideas, styles, and data. However, it cannot create something from nothing. The “zero-to-one” leap of a truly novel idea—the kind of thinking that creates a new market or a new paradigm—remains the province of human creativity. AI is a powerful tool for brainstorming and iteration, but it is not (yet) a source of genuine invention.
Complex Ethical Reasoning: An AI can be programmed with a set of ethical rules. But it cannot navigate a novel ethical dilemma that requires balancing competing values. It cannot understand the spirit of the law, only the letter. The ability to make a difficult judgment call in a grey area, weighing compassion against fairness or justice against mercy, is perhaps the most human skill of all.
Deep Empathy and Persuasion: An AI can be trained to mimic empathetic language. But it cannot form a genuine human connection. The ability to sit with a client, understand their unspoken fears, build a relationship based on trust, and persuade them of a course of action is a fundamentally human process. As Klarna discovered, you can’t automate empathy.
Leadership in the Augmented Age: A Practical Guide
Navigating this transformation is one of the most significant leadership challenges of our time. It requires moving beyond fear and embracing a proactive strategy for augmentation. Here are four key areas of focus:
1. Foster AI Literacy, Not Just AI Skills: The goal is not to turn everyone into a data scientist. It is to create a culture where everyone in the organisation has a basic, pragmatic understanding of what AI is, what it can do, and what it cannot. This can be achieved through practical, hands-on workshops (e.g., “A Manager’s Guide to Prompt Engineering”) and by demystifying the technology in internal communications.
2. Move from Reskilling to “New-Skilling”: Traditional reskilling often focuses on teaching old dogs new tricks. A more effective approach is “new-skilling”: identifying the emerging hybrid roles your organisation will need (like the ones profiled above) and creating clear career pathways for your existing talent to move into them. This is not just about offering training courses; it’s about creating apprenticeships and on-the-job learning opportunities.
3. Cultivate Psychological Safety: In a time of transformation, fear is a powerful inhibitor of innovation. Leaders must create a culture of psychological safety where employees feel safe to experiment with AI, to fail, and to talk openly about their anxieties. This means celebrating smart experiments that don’t work out and framing AI not as a threat, but as a new tool that everyone can learn to master.
4. Measure What Matters: Augmentation, Not Just Automation: The wrong way to measure AI success is by simply counting the number of tasks automated or the number of roles eliminated. The right way is to measure augmentation. Are your teams making better decisions? Are they solving more complex problems? Is the quality of their strategic thinking improving? Success is not about doing the same work faster; it’s about elevating the nature of the work itself. For a more detailed look at this, you can refer to my article: “Augmentation, Not Replacement: A Leader’s Guide to the Human-AI Workforce.”
Conclusion
The narrative of “human vs. machine” is simple, dramatic, and almost entirely wrong. The real story is one of partnership. The future of work is not a world without humans; it is a world where humans are amplified by powerful new tools, freeing us from the drudgery of repetitive tasks to focus on the deeply human work of creativity, critical thinking, and connection. The challenge for us as leaders is not to predict the future, but to build it. It is to lead our teams through this transformation with a clear vision, a pragmatic mindset, and a deep-seated belief in the enduring value of human ingenuity. In our next issue, we will expand on this, exploring the broader societal impact of enterprise AI and the C-suite’s expanding responsibility as stewards of this powerful technology.
Until then, lead with foresight.
All the best, Krzysztof
