Can We Trust AI?

Deloitte recently stated in their ‘Tech Trends’ report that ‘2023 will be the year we learn to trust our AI colleagues’. Erica recently tested that statement with several senior L&D leaders to gauge their initial reaction. It tended to be a little hesitant, with some discomfort, and even some protest.

Why this reaction? Well, it was the word ‘trust’.

Trust is perceived as a human privilege, which is earned and can be easily broken. So how does that fit with AI that has no agenda or sense of motivation but relies on human input to create a response?

Learning to trust AI

It’s easy to get swept away by the current mania (both positive and negative) – remembering that this is a familiar pattern helps us to feel a sense of control and calm. Your teams are in the same space as many others.

Maybe they are concerned about what they are hearing about this technology, are unsure of what trust means in this equation and/or are wary of the future when it comes to jobs and skills.

None of us have the magic bullet here, but what we can do in learning and development is to look ahead and be proactive in navigating what is coming, step by step.

Managing change and generating trust

As a human race, we go through the same emotional journey when a large change happens.

We saw it in the 1970s, as computers began entering workplaces – we feared they would take over the world and get out of control. The same happened in the 1980s, when robotics began taking on manual tasks – we convinced ourselves these robots would take our jobs and put us all out of work.

We had no trust in this revolution until we started to understand more detail and relevance to mankind.

Sounds familiar, right?

We are now doing the same with artificial intelligence. And the narrative isn’t helped by the wonderful movies and shows we have all consumed over the last 30 years, whether this be The Terminator film franchise, The Matrix (my personal favourite), or any other story which sees our robot overlords turn on humanity and put us into a constant state of war.

I’ve previously blogged about key considerations to ready your organisations for AI, including the introduction of a digital ethics framework, and creating the space for the ‘Average Joe’ to start using generative AI. This blogpost is the next in the series and I want you to consider elevating skills in your organisation as one of the best ways to gain buy-in, advocacy and most importantly, ‘trust’.

 

Trust and AI: elevating skills in your organisation

These skills sit in three clear categories:

  • skills needed for the 20% of work that AI can’t do.
  • skills needed to excite people and help them take their career to the next step.
  • skills that drive the competitive edge in your organisation.


I call them the ‘Human Distinctive Skills’. They include:

  1. Creativity – the ability to bring new ideas and innovations.
  2. Critical thinking – the objective analysis of an issue to form a judgment.
  3. Learning agility – applying new skills and appropriate learning in record time.


I want to add a fourth:

  4. Applying context and clear expression – understanding the conditions and circumstances surrounding an object or situation, and articulating them clearly and transparently.

These are the skills which come from our higher-order thinking and are the least likely to be replicated by AI; they are the skills which make us inherently human.

Can we trust AI?

Yes, but it’s up to us to build that trust in our teams – it’s not a task for the machines.

You can build trust in AI by developing those ‘human distinctive skills’ and providing opportunities for people to be excited about this change, while repeatedly assessing confidence levels and being transparent in communication.

This, alongside the continuous development of leadership capability and culture, will serve you well in the dawn of AI.

Author: Erica Farmer
