Deloitte recently stated in their ‘Tech Trends’ report that ‘2023 will be the year we learn to trust our AI colleagues’. Erica recently tested that statement with several senior L&D leaders to gauge their initial reaction. The response tended to be hesitant, with some discomfort and even some protest.
Why this reaction? Well, it was the word ‘trust’.
Trust is perceived as a human privilege, which is earned and can be easily broken. So how does that fit with AI that has no agenda or sense of motivation but relies on human input to create a response?
It’s easy to get swept away by the current mania (both positive and negative) – remembering that AI has no agenda of its own helps us feel a sense of control and calm. Your teams are in the same space as many others.
Maybe they are concerned about what they are hearing about this technology, are unsure of what trust means in this equation and/or are wary of the future when it comes to jobs and skills.
None of us have the magic bullet here, but what we can do in learning and development is to look ahead and be proactive in navigating what is coming, step by step.
As a human race, we go through the same emotional journey when a large change happens.
We saw it in the 1970s as computers arrived in the workplace – we feared they would take over the world and spiral out of control. The same happened in the 1980s when we began developing robotics to take on manual tasks – we convinced ourselves these robots would take our jobs and put us all out of work.
We had no trust in this revolution until we began to understand the detail and its relevance to our lives.
Sounds familiar, right?
We are now doing the same with artificial intelligence. And the narrative isn’t helped by the wonderful movies and shows we have all consumed over the last 30 years, whether that’s The Terminator film franchise, The Matrix (my personal favourite), or any other story in which our robot overlords turn on humanity and plunge us into a constant state of war.
I’ve previously blogged about key considerations to ready your organisations for AI, including the introduction of a digital ethics framework, and creating the space for the ‘Average Joe’ to start using generative AI. This blogpost is the next in the series and I want you to consider elevating skills in your organisation as one of the best ways to gain buy-in, advocacy and most importantly, ‘trust’.
These skills sit in three clear categories:
I call them the ‘Human Distinctive Skills’. They include:
I want to add:
These are the skills which come from our higher-order thinking and are unlikely to be replicated by AI; they are the skills which make us inherently human.
Yes, but it’s up to us to build that trust in our teams – it’s not a task for the machines.
You can build trust in AI by developing those ‘human distinctive skills’ and creating opportunities for people to get excited about this change, while regularly assessing confidence levels and communicating transparently.
This, alongside the continuous development of leadership capability and culture, will serve you well in the dawn of AI.