#471 Trust in Artificial Intelligence Is Not a Technology Problem. It Is a Leadership One.
By Niels Brabandt
Artificial Intelligence has reached an uncomfortable stage of maturity. It is no longer a future promise, nor a niche experiment. AI is everywhere. And precisely because of that, trust in Artificial Intelligence has become one of the most underestimated leadership challenges facing organisations today.
Business leaders often frame resistance to AI as a mindset issue: employees are described as risk-averse, backward-looking, or insufficiently curious. That narrative is convenient. It is also wrong.
The real issue is not whether people trust Artificial Intelligence as a technology. The issue is whether they trust the context in which they are expected to use it.
Why Intelligent People Hesitate to Trust AI
Almost every business decision maker has experienced the same moment: an AI system delivers an answer that looks convincing, only to turn out to be incorrect, fabricated, or entirely fictional. The initial reaction is surprise, sometimes even shock. The more reflective reaction is caution.
That caution is rational.
When AI fails in a private context, the consequences are limited. A wrong travel suggestion may cost time or money. In an organisational context, however, the perceived risk is existential. In many legal and employment environments, employees understand very clearly that using AI and being wrong can have far harsher consequences than avoiding AI altogether and being slow.
From a rational risk perspective, avoiding AI is often the safer option.
This is why trust in Artificial Intelligence cannot be mandated. It must be earned.
The Cultural Contradiction at the Heart of AI Adoption
Many organisations claim to promote a culture of tolerance for mistakes. In practice, employees can usually recall concrete examples where someone took an initiative, made an error, and paid a disproportionate price for it. These stories travel faster than any internal communication campaign.
The result is predictable. The moment people observe negative consequences attached to AI usage, adoption stalls. Not because employees are anti-technology, but because they are pro-survival.
This creates a fundamental contradiction. Organisations want innovation, speed, and AI-driven efficiency. Employees want predictability, psychological safety, and fairness. Until leadership resolves this contradiction, trust in Artificial Intelligence will remain superficial.
Why Awareness Training Often Makes the Problem Worse
In response, many organisations turn to awareness training. In principle, this is the right move. In practice, it often backfires.
The market is flooded with self-declared AI experts whose confidence far exceeds their qualifications. Shallow explanations, oversimplifications, and factually incorrect claims do not build trust. They destroy it. Employees are quick to recognise when substance is missing.
Trust in Artificial Intelligence must be grounded in science, not slogans. Without academically and professionally sound foundations, awareness training becomes a box-ticking exercise that changes nothing.
Leadership Cannot Be Delegated to PowerPoint Slides
One of the most consistent patterns observed in AI transformations is leadership detachment. Leaders approve training budgets, mandate tool usage, and then continue working exactly as before.
This behaviour is fatal.
If leaders do not use AI visibly and responsibly themselves, no amount of training will compensate. Trust is not created through instructions. It is created through observation.
Some organisations have begun to embed AI adoption directly into leadership objectives, including measurable implementation goals and tangible consequences. This approach sends a clear signal: AI usage is not optional, and leadership accountability is real.
In smaller organisations, the dynamic is different but no less powerful. When ownership, investment, and shared responsibility are visible, trust grows organically. People understand why AI is being used and what is at stake.
Trust Requires Structure, Not Optimism
Trust in Artificial Intelligence does not emerge from enthusiasm alone. It requires structure.
Employees need immediate access to support when questions arise. This may be an internal service desk, curated learning resources, or clearly defined escalation paths. Crucially, help must be available at the moment of need, not weeks later in a follow-up workshop.
When professional qualification, leadership role modelling, and practical support converge, something remarkable happens. AI stops being perceived as a threat and starts being seen as an instrument.
The Hard Truth About Trust in AI
Building trust in Artificial Intelligence is demanding. It requires investment, discipline, and leadership courage. There are no shortcuts.
But the alternative is far more costly: silent resistance, shadow processes, and missed opportunities hidden behind a façade of compliance.
AI will not fail because it is imperfect. It will fail because leadership underestimated the human dimension of trust.
And that is not a technology problem. It is a leadership one.
About the author
Niels Brabandt is a leadership consultant, trainer, and speaker specialising in leadership, organisational behaviour, and the responsible implementation of Artificial Intelligence in business contexts. He works internationally with organisations ranging from global corporations to public institutions.
---
More on this topic in this week's videocast and podcast with Niels Brabandt: Videocast / Apple Podcasts / Spotify
The transcript of the videocast and podcast can be found below this article.
Is excellent leadership important to you?
Let's have a chat: NB@NB-Networks.com
Contact: Niels Brabandt on LinkedIn
Website: www.NB-Networks.biz
Niels Brabandt is an expert in sustainable leadership with more than 20 years of experience in practice and science.
Niels Brabandt: Professional Training, Speaking, Coaching, Consulting, Mentoring, Project & Interim Management. Event host, MC, Moderator.
Podcast and Videocast Transcript
Niels Brabandt
Artificial Intelligence. You probably know that moment when you think, "Yeah, I have a result by AI," and hmm, not sure if this is correct. And why do you have that hesitation? Most likely because, just like me, you had the moment where you got a result from AI and you found out it was fake, it was wrong, it didn't exist in reality. Utter shock, of course: how can the magic AI suddenly be wrong? And that's what we're going to talk about today: how to trust Artificial Intelligence.
Niels Brabandt
When it comes to Artificial Intelligence, one aspect is crucially important: AI is basically everywhere right now. However, you expect your people to trust Artificial Intelligence, and the question is how to achieve that. Way too often I've seen organisations start from the point of saying, "Oh yeah, we just have too many backward-facing employees, they are not risk-open enough. You have to go forward, you have to use this."
Niels Brabandt
Organisations often claim, "We have a culture of tolerance when something goes wrong." And usually within 15 minutes, someone steps up and says, "I remember that example where someone did something wrong, and the only thing that happened was they faced massive negative consequences." And that's exactly the problem here. When people see that someone took a step forward and faced negative consequences, they usually stop using a technology immediately, because they say, "Oh, it's all about risk again, so I'd rather not have my name in the first row."
Niels Brabandt
How do you gain trust in Artificial Intelligence in your organization? First, of course, a disclaimer: I'm going to show you a source in a minute. It's all under the Fair Use Act, so I can quote this here, and of course under the Zitatrecht, the quotation right in German law. You see, Business Insider already said, "Getting workers to trust and adopt AI is forcing HR people to reinvent themselves." Exactly. It's just not enough to say, "Use that."
Niels Brabandt
And you might have seen the picture on the internet where an AI in different forms, usually depicted as some sort of computer or robot, answers a human being who asks, "Oh, is this food poisoned?" The computer says, "No, it's not." And suddenly you see that the human being is dead, and the computer says, "Oh, it was poisoned anyway. Sorry for that. Do you want to learn more about toxic and poisoned food?"
Niels Brabandt
And we probably all, including myself, had this moment: AI said something wrong, you call out the AI saying, "This is wrong," and then AI says, "Oh yeah, correct, it's wrong, you were right. Do you want to learn something correct now?" And you wonder, "Why can't you just give me the right thing in the first place?" And that's exactly what we're going to talk about today.
Niels Brabandt
When it comes to trust in Artificial Intelligence, you have to see one situation first: AI is absolutely everywhere, and that makes people insecure. Insecure not because they don't trust the technology; they don't trust the circumstances. They say, "If I use it for my private travel planning and it goes wrong, I face the consequences by myself. I can live with it. I have to book another train, another flight, fair enough." But in an organization, especially in countries where employment laws allow people to be fired on the spot, they say, "If I do this with AI and get it wrong, I might get fired. If I do this slowly, without the AI, I might get, let's say, a warning letter, but I definitely won't get fired." So they obviously go for the lower-risk option, at least in many cases.
Niels Brabandt
So you see, AI is everywhere. The problem is that this omnipresence makes people insecure about what to do right now, because in times of fake news, fake videos, and deepfakes, many people say, "I really can't distinguish the fake news and fake videos from the real ones." Some can, some not so much. And then they say, "Look, I'm just not trusting anything new. I know when I go to Microsoft Excel, doing it step by step by myself, adding all these cells, I know this works. There's no AI there, and I just deactivate Copilot in Microsoft, so I just work by myself. Everything's fine here."
Niels Brabandt
Fake news and the like are one of the massive issues. So if you are amongst the people who are unsettled by fake news and say, "I'm not really sure about all this," this episode is for you. And if you are a leader in one of these departments and say, "I have people who really don't trust the technology," this episode is also for you. You have to offer help, and the first help is always awareness training.
Niels Brabandt
The problem with awareness training is that the vast majority of awareness trainings I see on the market are anything but qualified. Someone with huge confidence, who has probably written half of a Wikipedia article about AI, suddenly is an AI expert, stepping up with massive confidence and telling things which are wildly wrong. So you need professional content. You need to be sure that the person talking about AI is professionally qualified.
Niels Brabandt
And I don't want to put myself in the spotlight here. However, people can of course say, "Brabandt, what are you talking about? What are your credentials? Can you tell me something?" Of course. Regarding my AI qualifications: first, I worked for Microsoft and New Horizons Computer Learning Centers, and I was into scripting, automated learning, and machine learning way before the topic of AI even came up. And I got my professional AI knowledge from the University of Pennsylvania's Wharton Business School, followed by Vanderbilt, and then vendor certifications by Microsoft directly, almost 10 different qualifications in the last year alone. So I think I'm not talking from a bad position here.
Niels Brabandt
The general principle is: science leads the way. When people do not have credentials backed by science, there is no point in listening to them. Absolutely no point, because these people will not help you in any way. Science leads the way.
Niels Brabandt
Very important here: you might say, "Yeah, but we are such a large organization, it's just too complicated to train all of them," or, on the other end of the scale, "Oh, we're such a small organization, we don't have the resources." Look, during the last 12 months, I probably gave about 50 different awareness trainings regarding AI on a global scale, anywhere from Australia to Canada, from South America to Japan, and all over Europe, onsite and online, and it really is about planning. In one large organization, I just gave an awareness training for 50 managers: first, we had a group of 500 for a general training, then 50 for a more specific training, and then groups of five for very specialized training for each department. So it goes from 500 to 50 to five: everyone gets some general knowledge, then something more specific, and finally something specialized just for them.
Niels Brabandt
Very important here: implementation is always based on professional qualification. When you put people at the front who are not professional trainers, not professional coaches, who do not hold the credentials and have no qualification in methodology, didactics, and education, there's no chance this is going to work. Plus, they need the scientific education in AI. And when you say there aren't many such people, I agree, there aren't many, but that's not your people's problem. If, because there aren't many, you get some bad, cheap ones, people will not listen. And even if they listen, they will simply say, "Yep, I took the online training, ticked the box. Will I change anything? Let me check: nothing. I will simply continue as always." And that is a massive issue. Professional qualification is always step number one.
Niels Brabandt
And the most important part in any department is: leaders must lead. I have seen way too many examples where leaders said, "Oh great, all of my people are qualified, so you do this now, because I will keep doing whatever I did before." One large corporation, where I was the presenter and keynote speaker just last week, said they hold their leaders accountable. It is in their goal agreements which pieces of AI need to be implemented by the end of the year. So leaders are evaluated, and they also receive bonuses, or the opposite of bonuses, when they do not achieve these goals.
Niels Brabandt
In smaller organizations, people usually stick to what they invested money in, because often the owners are there, or someone who invested in the company says, "I put money on the table, so please do what I invested in," and people say, "Look, we all own this here, and we all make it happen together." So professional qualification needs to build on the principle that leaders must lead. And on top of that, you must have resources. These can be the internal IT service desk or anyone who likes to help. These can be a website or any kind of learning resources. They need to be available for the people in the moment of need.
Niels Brabandt
When you do it the way we just discussed here, then AI and the trust in Artificial Intelligence in your organization is going to be a massive success, and I wish you all the best implementing that in your organization. And when you now say, "Whoa, that sounds pretty demanding," I can tell you it is.
Niels Brabandt
So first, of course, thank you very much again for listening. If you're watching me on YouTube right now, feel free to leave a like there, subscribe to my channel, and leave a comment if you like. You can, of course, also leave a review on Apple Podcasts or Spotify, five stars, thank you very much for doing so. And recommend this podcast and videocast amongst your friends and colleagues, on social media, anywhere you like.
Niels Brabandt
Or go to my website, nb-networks.biz. However, if you would now like to discuss something, then, as this is a business podcast, most people will say, "I have something to discuss. It is internal. It is from my organization, my company, my business. I can't put it in the comment section on Facebook," and I fully agree. Send me an email. That's what most people did: nb@nb-networks.com. If you send me an email, I guarantee to get back to you within 24 hours or less, so I'm looking forward to hearing from you.
Niels Brabandt
If you need something very specific, a trainer, speaker, coach, consultant, mentor, or project and interim manager, let me know. If you just have a couple of questions and want to discuss them, feel free to reach out as well. Looking forward to hearing from you. In addition to what we do here, we have live sessions. We just had a very successful one, almost booked out, last Friday.
Niels Brabandt
When you go to expert.nb-networks.com, you can sign up with your email address there. You receive only one email every Wednesday morning. It's 100% content, ad-free guaranteed. And in there, you find full access to all the podcasts, articles, and videocasts, everything in English and German, all at no charge for our listeners and readers. You also find the date and time of our next live session, so I'm looking forward to seeing you there.
Niels Brabandt
Of course, you can also just connect with me on LinkedIn and message me there, or follow me on Instagram and message me there. You can also like my page on Facebook or follow my YouTube channel. Looking forward to hearing from you. Any message will be answered within 24 hours or less, meaning if you don't get an answer within 24 hours, your message probably didn't get through, but usually messages do get through, so I'm looking forward to hearing from you.
Niels Brabandt
However, the most important bit is always the last thing that I say: apply, apply, apply what you hear in this podcast, because only when you apply what you hear will you see the positive results that you obviously want to see in your organization. I wish you all the best implementing all of this right now. And of course, as usual, at the end of this podcast, there's only one thing left for me to say: thank you very much for your time.