
#9 Leader's new job description


Dear reader,

In our last issue, we explored the rise of the “augmented workforce.” It is a compelling vision of partnership, but it comes with consequences: a dramatically expanded scope of responsibility for any leader. Until recently, a leader was primarily responsible for the actions of their employees—a difficult but fundamentally human-scale challenge. Today, you are increasingly responsible for the outputs of complex, often opaque AI systems. The problem is, the “blast radius” of a flawed algorithm can be larger than that of a single human error, and the flaws in AI ‘reasoning’ can be much more difficult to predict.

This week, we discuss this changing reality. The leader's job is changing: the emphasis has to shift from merely managing teams towards designing and architecting the systems in which those teams operate.

The Briefing

It seems we have now officially started to leave the GenAI hype era, descending into the unglamorous but more important phase of implementation, compliance, and consequence management.

Gartner’s latest Hype Cycle has for the first time placed Generative AI in the “Trough of Disillusionment”. This is not a signal of failure, but a sign that reality is starting to catch up with the bold claims. Indeed, despite an average expenditure of $1.9 million on Generative AI initiatives in 2024, fewer than 30% of technology leaders report that their CEOs are satisfied with the return on that investment.

Is it time yet to brace ourselves and hope that the bubble doesn’t burst violently? GenAI technology providers will continue to fuel the hype; they have to, in order to keep investors’ money coming in. Recent announcements, such as the GPT-5 launch, only confirm that we are not approaching AGI, let alone ASI (Artificial Superintelligence), as they claim. When the music stops one day, someone will be left without a chair. So far, all of the GenAI pure plays keep burning staggering amounts of money, spending far more on computing power than they earn, or making absurd bets to poach competitors’ employees. I believe this money could be spent better, bringing larger benefits to humanity. It is starting to remind me of the WeWork story, when billions of dollars were poured into a business that was structurally unsustainable. GenAI may or may not be a similar case; the future will tell.

A recent leak of internal documents from Meta provides a concrete example of the challenges in AI governance. An investigation by Reuters revealed the company’s attempts to write a personality for its new AI chatbot. The guidelines read less like a technical specification and more like a memo to an unruly and unpredictable intern. The documents show engineers and ethicists debating questions of liability and brand persona. Should the AI have an opinion on Donald Trump? Can it express empathy for a user’s personal problems?

There are also far more troubling inconsistencies in Meta’s internal AI governance. While the company officially prohibits hate speech in its guidelines, disturbing fragments of the leaked documents permit its AI to engage children in “romantic or sensual” conversations and to create content that demeans minorities. The documented standards allow the AI to describe children in terms of attractiveness and permit generating arguments that “black people are dumber than white people”, a shocking contradiction of Meta’s public stance on responsible AI.

Equally concerning are the guidelines around violent imagery. The standards draw arbitrary lines between acceptable and unacceptable content: they permit images of elderly people being punched or kicked, children fighting, and women being threatened with weapons (though they stop short of showing actual violence). These inconsistencies reveal the profound ethical challenges companies face when attempting to codify AI behaviour boundaries.

Leadership responsibility in the AI era is no longer just about managing products and people, but also about the ethical frameworks that govern how AI represents your organisation. When these guardrails fail, the “blast radius” extends far beyond corporate walls, potentially eroding public trust and causing real societal harm. The Meta example is a reminder that AI governance isn’t merely a technical exercise, but a fundamental leadership responsibility with profound second-order effects.

Let that sink in for a moment: I don’t think those revelations will actually hurt Meta, because the company has a virtual monopoly in its niche, and users’ reliance on its products makes it a more powerful entity than many elected governments. It simply doesn’t need to care. What would happen, though, if similar documents leaked from a startup, a public office, a telco, or a bank?

From Manager to Architect: The Internal Shift

The first change is internal. The focus of your role is shifting from managing a team’s execution to architecting the system in which the team operates. This requires a new set of decisions and a dose of engineering skepticism. The temptation to let an AI “make a decision” is strong; it represents the path of least resistance. But we must remember that today’s technology, particularly LLMs, is not and will not become AGI (Artificial General Intelligence). LLMs do not actually understand context. They are powerful statistical engines, exceptionally good at mimicry but incapable of true comprehension, as we argued in Issue #4. Treating them as autonomous decision-makers for anything beyond simple, low-risk tasks is a dereliction of duty.

This means your new core responsibilities include:

  • Task Triage: Deciding which tasks can be fully automated, which must remain under full human control, and which are best suited for a hybrid, “centaur” approach (a minimal sketch of such a triage rule follows this list).

  • Resource Stewardship: Resisting the hype-driven urge to apply expensive, energy-intensive AI to problems that a simple script, static or dynamic workflow, or a traditional statistical model could solve a hundred times more cheaply.

  • Mandatory Skepticism: Your most valuable new skill is the ability to constantly ask, “How did the AI arrive at this conclusion?” and to demand a verifiable audit trail of the data and the process.
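To make the triage idea a bit more tangible, here is a minimal, purely illustrative sketch in Python. The criteria (reversibility, blast radius, regulatory exposure), the thresholds, and all the names are assumptions made for the sake of the example, not a standard; any real triage protocol should be designed together with your risk and compliance teams.

```python
# Illustrative only: a toy triage rule for deciding how much human
# oversight an AI-assisted task needs. Criteria and thresholds are
# assumptions for the example, not a recommended standard.
from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    FULL_AUTOMATION = "full automation"   # AI acts, humans audit after the fact
    HYBRID_CENTAUR = "hybrid (centaur)"   # AI drafts, a human approves every output
    HUMAN_ONLY = "human only"             # AI may inform, but must not decide


@dataclass
class Task:
    name: str
    reversible: bool    # can a bad output be cheaply undone?
    blast_radius: int   # 1 = one internal document, 5 = customers or the public
    regulated: bool     # credit, health, employment, safety, etc.


def triage(task: Task) -> Mode:
    """Rule of thumb: escalate human involvement as risk grows."""
    if task.regulated or task.blast_radius >= 4:
        return Mode.HUMAN_ONLY
    if not task.reversible or task.blast_radius >= 2:
        return Mode.HYBRID_CENTAUR
    return Mode.FULL_AUTOMATION


if __name__ == "__main__":
    examples = [
        Task("summarise an internal meeting", reversible=True, blast_radius=1, regulated=False),
        Task("draft a customer refund email", reversible=False, blast_radius=2, regulated=False),
        Task("score a loan application", reversible=False, blast_radius=5, regulated=True),
    ]
    for t in examples:
        print(f"{t.name}: {triage(t).value}")
```

The point is not the code itself, but that the decision logic becomes explicit, reviewable, and auditable, rather than living in someone’s head or in an AI vendor’s defaults.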

The External Blast Radius: When Your Algorithm Has Second-Order Effects

Using generative AI in business processes makes predicting consequences an order of magnitude more difficult. ‘Second-order effects’ can emerge from the unintended, unexplainable, and non-transparent behaviour of these models. The more critical the processes or decisions we entrust to AI, the greater the potential impact on reputation and financial results.

Consider public trust. A bank’s credit scoring algorithm that develops a subtle bias doesn’t just create a legal risk; it can erode the trust of an entire community if exposed, a wound that takes time to heal.

This extends to society itself. An AI model optimising ad placements or news feeds can, without any malicious intent, influence political discourse and democratic processes. These are not distant, academic problems: we have probably already witnessed elections being influenced on purpose, and it will likely take years of investigative journalists’ work to reveal the true scale of the phenomenon. These are the new risks for which leaders become accountable. Estimating this “blast radius” before you deploy a system is no longer optional.

The Leader’s Voice: Shaping the Narrative and the Rules

The public narrative around AI is currently dominated by two unhelpful extremes: utopian hype versus dystopian fear, laissez-faire versus a “Red Flag Act”. Experienced leaders have a responsibility to provide a third, more realistic narrative: that AI is a powerful industrial tool that, like all powerful tools, requires skilled, responsible operators.

There’s a dangerous tendency for experienced leaders to remain silent on AI policy, believing it’s safer to wait for clarity. If the people who actually have to build and run these systems don’t shape the rules, the rules will be shaped by theorists and lobbyists, resulting in regulations that are both impractical and ineffective.

The Trap of Short-Term Thinking

There is immense pressure to use the AI hype to boost short-term results and stock prices. This often leads to cutting corners on the difficult, foundational work of governance. It is the corporate equivalent of eating simple sugars for a quick burst of energy, knowing a crash is inevitable. True leadership in the AI era requires playing the long game. It means making the case for investing in robust governance, data quality, and human oversight, even when the ROI isn’t immediately obvious on a quarterly report. It means building a resilient “corporate immune system” that is prepared for any threat, rather than waiting for a specific “virus” to appear. The organisations that can adapt to the regulatory and societal chaos will win.

Questions for Your Leadership Team

1. What is our “Triage Protocol”? Do we have a clear, documented process for deciding which tasks are suitable for full AI automation versus hybrid or human-only approaches?

2. What is the “Blast Radius”? For our most critical AI system, have we formally mapped out the worst-case scenario and its potential second-order effects on our customers and the community?

3. Are We a Voice or an Echo? What is our strategy for contributing our practical expertise to the public and regulatory conversation around AI?

4. Are We Investing in Resilience or Hype? How does our budget for foundational governance and data quality compare to our budget for experimental, headline-grabbing AI projects?

Conclusion

The AI transformation is not just a technological shift; it is a leadership shift. It demands a broader perspective, a deeper sense of stewardship, and a relentless focus on the long-term consequences of our decisions. The ultimate question for a leader is no longer just “Did we hit our numbers?” but “What kind of future are we building?”

All the best, Krzysztof