Welcome to part six of our Simia series: The Unsaid Truths of Organisational Life. This bold series lifts the lid on the silence and cultural contradictions of the modern workplace, drawing on insights from over 3,500 leaders across the ASEAN region.

In this edition, we explore The A.I. Conundrum: the complex psychological tension employees feel as they build the very systems that may eventually disrupt their own roles. We examine why the A.I. transition is no longer just about productivity, but a deeply personal shift in identity and future relevance.

Over the past few months, I’ve heard a strange and increasingly common tension in workshops and leadership sessions around A.I. People are starting to feel like they are building the very agents that may eventually replace parts of their own role. 

This is not because they are resisting technology, or because they are anti-A.I. It is because, for the first time, the transition is beginning to feel personal. As people design workflows, prompts, and systems to make their jobs easier, they are slowly realising the system may no longer need them in the same way afterwards. As one participant put it: “People are not being open about their use of A.I., they are using it all the time, creating the illusion that they are doing the work, without realising that the A.I. will soon do their work without the need for them to be there.”

That is a psychologically complex place for organisations and employees to sit, and it is why the conversation about A.I. cannot just be about productivity, efficiency, or adoption. It has to become a conversation about identity and future relevance.

The A.I. Conundrum

One of the most insightful examples I heard recently came from an I.T. professional who had spent years developing highly technical coding expertise. He explained that he suddenly realised something uncomfortable: The thing he once believed was rare and difficult, writing code, had become incredibly accessible through A.I.

But rather than panic, he reframed his role. Instead of seeing himself purely as a coder, he started seeing himself as:

  • A designer of systems
  • An integrator of people and technology
  • A translator between business needs and A.I. capability
  • Someone who creates the conditions for better human decision-making

In other words, he stopped competing with A.I. and started building value around it.

That shift feels incredibly important, because future-proofing probably won’t come from protecting tasks; it will come from expanding human contribution.

The organisations that handle this transition best may not be the ones that implement A.I. the fastest. They may be the ones that help people understand:

“What do I now need to become?”

That is a much harder conversation. But probably the most important one.