#433 AI in the C-Suite: Why Critical Thinking Is Non-Negotiable - an article by Niels Brabandt

AI in the C-Suite: Why Critical Thinking Is Now Non-Negotiable

article by Niels Brabandt

 

From wrong distances to fabricated case law: artificial intelligence accelerates decisions and, when left unchecked, accelerates errors. Niels Brabandt lays out how leaders can harness AI productively without sacrificing judgement and governance.

 

The wake-up call: when “fast” gets mistaken for “correct”

Start with a deceptively small error. A navigation AI in Zurich, Switzerland, estimates a 12-minute walk that actually takes 30–35 minutes. “Trivial” until costings, logistics or service levels are built on the same misread. Minutes become margin. Niels Brabandt uses the example to show how plausible outputs still corrode planning if leaders stop verifying.

Another everyday miss: a restaurant PDF the AI cannot parse. It recommends dishes that aren’t on the menu; only a screenshot upload fixes the read. Amusing at lunch, expensive when similar flaws seep into product data, proposals or supplier shortlists.

Then there’s the viral “fact” that, from next year, every job ad must state a salary. Ask enough AIs and some will “confirm” it, yet it is not universally true. Organisations that codify such myths into process and copy pay in reputational currency.

At the extreme, a law firm cited AI-invented cases and was sanctioned. The reputational damage dwarfed the immediate penalty. The point, Brabandt argues, is not to shun AI but to restore judgement to the centre of the workflow.

 

The core failure: the One-vs-Many syndrome

Leaders often prefer a single, neat answer over multiple sources because “it saves time.” That bias is dangerous when the one answer is wrong. Brabandt’s antidote is a disciplined loop: Analysis, Evaluation, Judgement. First, break the problem down and surface facts; then weigh sources and biases; only then decide. Primary sources sit above secondary summaries, including AI digests and encyclopaedias.

AI errors do happen. Brabandt cites publications by Professor Graham Jones of the University of Buckingham indicating roughly 0.7% hallucination in AI answers versus around 7% human misremembering or misattribution. The lesson is neither that AI is trustworthy nor that people are hopeless: both require procedure, not intuition.

 

Governance before glamour: which steps executive boards should implement now

1) Awareness and qualification. One-off “cheap and cheerful” courses create false confidence. What works: scenario-based training that teaches people to verify outputs, not just generate them. Leaders can acquire decision-grade literacy in one to two days, then deepen practice.

2) Policy with roles.

  • Define permitted uses (drafting, ideation, research paths) and uses prohibited without expert review (legal conclusions, medical guidance, financial forecasts).

  • Apply a four-eyes rule to offers, contracts and external communications touched by AI.

  • Require a source standard: check a primary source; document secondary sources.

3) Data hygiene and access control. Brabandt recounts an internal AI that, after a hasty rollout, returned names, dates of birth and personnel-file links when asked about “the last five redundancies.” That is a governance failure, not a model feature. Fix: strict role-based access, logging, prompt review and output filters.
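The fix named above can be sketched in code. What follows is a minimal illustration, not a description of any specific product: the field names, roles and permission table are all hypothetical, and a real deployment would enforce access in the retrieval layer as well, not only on the output side.

```python
# Minimal sketch of a role-based output filter for an internal AI assistant.
# Field names and roles are illustrative assumptions, not from the article.

SENSITIVE_FIELDS = {"name", "date_of_birth", "personnel_file_link"}

ROLE_PERMISSIONS = {
    "hr_manager": SENSITIVE_FIELDS,  # may see personal data
    "employee": set(),               # may see none of it
}

def filter_output(record: dict, role: str) -> dict:
    """Redact any sensitive field the requesting role is not allowed to see."""
    allowed = ROLE_PERMISSIONS.get(role, set())  # unknown roles get nothing
    return {
        key: (value if key not in SENSITIVE_FIELDS or key in allowed
              else "[REDACTED]")
        for key, value in record.items()
    }

record = {"name": "J. Doe", "date_of_birth": "1980-01-01", "department": "Sales"}
print(filter_output(record, "employee"))
```

The design point is the default: a role that is not explicitly listed sees nothing sensitive, which is the opposite of the hasty rollout described above.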

4) Quality gates.

  • Fact-check: sample primary-source verification.

  • Bias screen: examine political, commercial and cultural skew in sources.

  • Hallucination heuristics: red flags include “too perfect” citations (precise case numbers, DOIs) that do not resolve.
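The “too perfect” citation heuristic can be partly automated: extract citation-like strings from AI output and route them to a human for primary-source verification. A minimal Python sketch, with illustrative patterns only (a generic DOI shape and one US federal-reporter case format); real coverage would need many more citation formats:

```python
import re

# Citation-like patterns that warrant manual verification (illustrative only).
DOI_PATTERN = re.compile(r"10\.\d{4,9}/\S+")                     # e.g. 10.1000/xyz123
CASE_PATTERN = re.compile(r"\b\d{1,4}\s+F\.\s?(?:2d|3d|Supp\.)\s+\d+\b")  # e.g. 123 F.3d 456

def flag_citations(text: str) -> list[str]:
    """Return citation-like strings that should be resolved against a primary source."""
    return DOI_PATTERN.findall(text) + CASE_PATTERN.findall(text)

print(flag_citations("See DOI 10.1000/xyz123 and Smith v. Jones, 123 F.3d 456."))
```

Flagging is the easy half; the gate only works if a human then checks that each flagged DOI or case number actually resolves.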

5) Red-teaming. Task a group to deliberately break your AI process. Feed it edge cases, attempt prompt leaks, and try to elicit sensitive data. Fold the findings back into policy and tech.

6) Measurement. Track error rates, rework cost, time-to-decision and the primary/secondary source mix. Review monthly; adjust quarterly.
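As a sketch of what measurement could look like in practice, here is a minimal Python example. The decision log, its field names and the metric definitions are hypothetical assumptions; the point is only that each KPI named above reduces to a simple aggregate once decisions are actually logged.

```python
from statistics import mean

# Hypothetical decision log; field names are illustrative, not a standard.
decisions = [
    {"error": False, "rework_hours": 0.0, "days_to_decision": 2,
     "primary_sources": 3, "secondary_sources": 1},
    {"error": True,  "rework_hours": 4.5, "days_to_decision": 6,
     "primary_sources": 0, "secondary_sources": 4},
    {"error": False, "rework_hours": 1.0, "days_to_decision": 3,
     "primary_sources": 2, "secondary_sources": 2},
]

def kpi_summary(log: list[dict]) -> dict:
    """Compute the review metrics named above from a decision log."""
    primary = sum(d["primary_sources"] for d in log)
    secondary = sum(d["secondary_sources"] for d in log)
    return {
        "error_rate": sum(d["error"] for d in log) / len(log),
        "avg_rework_hours": mean(d["rework_hours"] for d in log),
        "avg_days_to_decision": mean(d["days_to_decision"] for d in log),
        "primary_share": primary / (primary + secondary),
    }

print(kpi_summary(decisions))
```

A monthly review then compares these numbers against the previous baseline rather than against gut feeling.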

 

A 30/60/90-day plan leaders can actually run

Days 0–30

  • Inventory AI use-cases by value and risk.

  • Deliver decision-maker training focused on verification.

  • Publish Policy v1; lock down access; activate logging.

Days 31–60

  • Introduce quality gates and a red-team routine.

  • Enforce the primary-source workflow.

  • Baseline KPIs (error rate, rework time, source mix).

Days 61–90

  • Update to Policy v2 from lessons learned.

  • Scale to more use-cases while keeping heightened review on sensitive ones.

  • Communicate wins (where AI sped decisions) and near-misses (where governance saved you).

 

The false economy of “cheap & fast”

“We have no time for preparation” is how organisations accumulate invisible debt: corrections, legal exposure, trust erosion. Brabandt describes teams that skipped expertise and shipped an internal AI anyway, only to watch it expose personal data on day one. The eventual bill exceeds the cost of robust training and governance.

 

The leadership crux: AI plus judgement

AI is neither a silver bullet nor a saboteur. AI is a force multiplier for quality and for mistakes. Leadership means exploiting the speed while hard-wiring critical thinking. Pair AI with Analysis, Evaluation, Judgement, and you compound advantage. Use AI without it, and you compound error, just faster than you ever did before. Niels Brabandt’s message is blunt because the stakes are high.

 

Executive checklist (for you to pin)

  • Source order matters: primary first, secondary second.

  • Guardrails: if outputs touch legal, medical or financial claims, or personal data, require expert review.

  • Watch for “too perfect” citations: verify any exact case numbers, DOIs or quotes.

  • Document assumptions and sources.

  • Measure monthly; adjust quarterly.

 

About Niels Brabandt

Niels Brabandt is a leadership expert and host of the “Leadership Podcast”, advising organisations on evidence-based, situational leadership and AI governance. The examples and recommendations in this article are drawn from his recent episode about Artificial Intelligence, Leadership and Critical Thinking.

 

Final statement: AI belongs in the boardroom but only alongside critical thinking. Master both, and you will make faster, better decisions. Skip the second, and you will decide faster, but wrong.

 

Niels Brabandt

---

More on this topic in this week's videocast and podcast with Niels Brabandt: Videocast / Apple Podcasts / Spotify

The transcript of the videocast and podcast appears below this article.

 

Is excellent leadership important to you?

Let's have a chat: NB@NB-Networks.com

 

Contact: Niels Brabandt on LinkedIn

Website: www.NB-Networks.biz

 

Niels Brabandt is an expert in sustainable leadership with more than 20 years of experience in practice and science.

Niels Brabandt: Professional Training, Speaking, Coaching, Consulting, Mentoring, Project & Interim Management. Event host, MC, moderator.

Podcast Transcript

Niels Brabandt

AI. And just to be sure: AI in general is a good thing. No one is going to start the discussion of: oh, we don’t need AI, it’s all nonsense, no one needs that, society is better off without it. You will not change the progress that is happening just by disliking it. However, I receive quite a number of emails, contacts, social media requests and so on.

People say: I actually get the feeling that certain other aspects of our behaviour deteriorate quite quickly and quite often. Unfortunately, some people shared a meme with me which says that MIT conducted a study finding that people who use ChatGPT use less of their brains. Here’s the information: that study never took place. It’s fake news, it’s a meme, nothing else.

I have to be fair here: it’s pretty obvious that this is a meme, because MIT doesn’t publish studies by meme. They would go via science magazines, or maybe the Washington Post, the Financial Times, whatever other outlet, and press releases, not memes. So be careful what you share, because quite a number of people posted that on LinkedIn, and it probably wasn’t the best choice. The question now is: how do we deal with any kind of change in our skills due to AI? When we talk about critical thinking, people say: I get the feeling that others take things for granted as soon as they read them on the Internet. That is a bit too general, but we need to talk about what is happening here and how AI is influencing our ability to make decisions. And of course the very first thing people often ask is: do you actually have cases? Cases not taken from the press, but cases you have seen firsthand where AI was wildly wrong?

And yes, I have seen these cases. I could tell you about 150 cases from my clients alone, but of course they are under a strict NDA, so they are confidential. So I am going to tell you cases, from small to large, that affected me personally. A very simple example: I went for dinner, which is not too unusual, but before dinner I went to the movies. I wanted to watch a film, The Life of Chuck, way better than I thought, so I highly recommend it. I went on foot from Albisriederplatz in Zurich, where I live, to the Arena Cinema. And I asked the AI how long that would take, because at the moment I am documenting everything I do, all the activities, everything I eat, because I’m losing a bit of weight.

That was my personal choice. You probably saw across the last hundred videos that I lost a bit of weight; more or less my decision. And the AI said: yeah, it’s about a 12-minute walk. Is it? I lived there for a couple of years.

It’s a 35-minute walk. When you go quickly, it’s a 30-minute walk, but hardly ever quicker than that, except when you go really fast, or march, and that’s not walking, that’s marching. Before someone asks: can we make a German joke about marching? Not here, not today, not now. So this was the very first case, and you cannot just say: hey, sometimes AI is wrong, this just happens. Let me give you an example of why that matters.

Let’s say you are a logistics company and you have to deliver goods across countries where you are not knowledgeable about the local circumstances, and you are wrong by two thirds on distances. Most likely, during a public tender, you will hand in something where at the end you have to fulfil a legally binding contract, and in due course of the project you will find out you are running out of money, because fuel and energy costs are way too expensive and the timings you promised cannot be held. At least the AI was right on the unit: it used kilometres, not miles, because we use kilometres in Switzerland.

It’s miles in the UK or in the US. Still, being wrong by two thirds is quite something. Then I tested a couple of other cities. It was very good in Paris, Madrid, Vienna, Milan, areas I knew, where I could also check. But many people say: I use AI because it’s quicker. With Google Maps I have to open the app and type in start and finish; with the AI, I just chat and get a result, because it has speech recognition.

Here we have the very first proper issue, because many people do not double-check; they say: I don’t want to double-check, that’s what I have the AI for. And that’s only the beginning. So after the movies I went for dinner. A couple of weeks later I met a colleague, Lee Vorin, at Balance in London. Due to the food programme I am on, I always upload the menu up front and then ask the AI what the best pick is for my diet plans, my weight-loss plans; that is my personal choice. It has worked brilliantly hundreds of times. So I upload the menu’s PDF, and immediately it says: you should pick the sea bass.

I said: cool, good choice, and sat down. When the waiter arrived I immediately wanted to order, but he was on his way somewhere else, so he just put the menu on the table. Just out of interest, I looked into it and found out there is no sea bass on the menu. It was exactly the menu you saw online, but no sea bass on there. So I told the AI: excuse me, AI, there is no sea bass on there. And the AI said: oh yeah, I just looked at it, you’re right, there is no sea bass on there.

To which I thought: well, maybe you should have read it in the first place, what about that? So, what’s your next pick? It says: the chicken burger is the next best pick. I looked into the menu, and there was no chicken burger either. Turns out it was obviously lying to me: I had asked the AI up front, can you read the PDF, and it said yes. Then I uploaded a screenshot.

All of a sudden, that was working. And that is something deeply troubling: you upload certain data files and the system does not tell you when it cannot read them. You probably know from any Windows system since the 1990s that when you open a file which doesn’t work, you immediately get an error message, or very weird signs on the screen which tell you: that’s not what I wanted. AIs take things in, may or may not understand what they see, and then say: yeah, I think my recommendation is this, just do this.

And that, of course, is plainly dangerous. Just imagine you put a proposal together using AI, and suddenly you quote university studies that do not exist, or you cite lawsuits as a reference that never happened. By the way, both of these have already happened: companies handed in proposals for tenders, and when the client properly checked, they said: look, we cannot work with you, because what you actually do is make stuff up. And when you then say: yeah, it was the AI, they say: that’s your problem, not ours. And by the way, when you now think: what does he have on his wrist?

Just to show you: this is the Manchester Pride weekend, which I am recording this from, so greetings from Manchester. One is the wristband you need for the whole village, and the other is a wristband of Talk Out Loud, a charity about mental health which I support. So I am just raising awareness for mental health issues and situations.

We have to help these people. So the restaurant issue is the second case, and it is not about the restaurant here, it is about making stuff up. Of course, now people say: yeah, we know, AIs hallucinate. Graham Jones at the University of Buckingham in the UK just published a great blog article on that: about 0.7% of all AI answers have some sort of hallucination. You might say that is almost one out of a hundred.

So when I ask 100 questions, which I can quickly do in one day, one is probably partially or completely wrong. Yeah, true. And by the way, science has also spoken on humans, and there is also a quote from Graham Jones of the University of Buckingham here: humans are at about 7%, so ten times more. And when you now say: not me? The problem with hallucinations is that we do not realise them. You have something in your mind which you think you remember correctly, but then you arrive and see it is just not there. I give you a very simple example; I was absolutely sure about this one. I go to Manchester Pride every year to support the Pride movement here.

It is very, very important, especially at the moment, to step up for minorities, women, persecuted people, anyone. And I had a very strong opinion on the order in which the bars stand on Canal Street. Turns out I had mixed two of them up: I was absolutely convinced I was right and I was completely wrong on the matter. So you see, the restaurant issue tells you AIs can factually make things up. Sometimes you will spot it easily, as when someone says: oh, the distance between your home and the cinema is 52 chicken.

Then someone will say: I don’t think chicken is a distance unit, is it? But when someone just says: yeah, it’s a 12-minute walk, and you have never been there, you will probably just believe it, because you want to save the time and you do not want to double-check. And this, by the way, can get way worse. Take the situation we have right now in Germany: you have a tonne of LinkedIn posts which go on about: awareness! From 2026, you must have the salary mentioned.

In every single job ad; it is a legal obligation. And by the way, the AI says the same; I looked it up on the AI. And that is exactly the problem. I looked it up as well, because I knew that this is not correct. I knew because I deal with recruiting all the time, giving speeches at Europe’s largest recruiting fair and expo, which takes place again in a couple of weeks’ time. The issue is that AIs sometimes pick up stuff which is wrong, because many people wrote wrong stuff, but it sounds very convincing.

It is not 100% obligatory in every situation, for every job, absolutely anywhere. People just looked it up on the AI and repeated it, and suddenly hundreds of people repeat the same nonsense. And by the way, the whole MIT meme: the image looks quite important, but when you look at it, it basically says that ChatGPT users use 15% of their brain and people who don’t use ChatGPT use 85% of their brain. You would spot that on the street, a difference of 70 percentage points in brain use.

I mean, with some people you might think: yeah, you probably really only use 15%. But that claim is so absurdly, wildly far from reality. Be sure that you are always spot on: you need to look things up, because this kind of recruiting misinformation will very soon enter the corporate world. You will have recruiters making wrong statements because they looked something up on AI and did not double-check and cross-check it. So the question is: where did we lose our critical thinking? Of course, not all of us lost it, but some of us did. First of all, we have to define what it actually means.

It is always a three-step process: you have an analysis, then an evaluation, and then you come to a judgement. And just to be very clear: you may have heard from inspirational motivational speakers: never judge, never pass judgement. That is another piece of nonsense from motivational inspirational speakers, who, ever since this planet was seen in the universe, have never said anything that actually sustainably changed life for the better on a factual or scientific basis. Which is why they are motivational inspirational speakers. Analysis means you look at the different components of what you see; then you have to evaluate them.

You also have to critically evaluate the sources. A very simple example: when you look into, say, a socialist magazine on the one hand and Fox News on the other, you will probably see two extremes on a certain matter, with strong opinions on both sides. You have to critically analyse who said what. As a self-employed entrepreneur, someone who gives work to others, someone who is part of the Liberal party in Switzerland, of course I have a certain bias. And of course, when I write scientifically, I try to take that out as much as I can.

However, when I have discussions with other entrepreneurs or people across the political spectrum, we disagree on certain matters based on our evaluations of the exact same thing. Analysis, evaluation, judgement: you have to come to a conclusion. That is what critical thinking is about. So why do many people rely on AI instead? They say it is just so practical, it saves so much time.

Look, Niels, I give you my very honest opinion: I don’t have time. And then people say something like: you know, Niels, when I go on Google I get 15 results, maybe 20 or 30, all of which I have to read through, and then I have to stick with whatever I get out of that. Or I just ask the AI, and the AI says: that’s it. And that is exactly the problem: the one-versus-many situation. When you go on Google or any other search engine, Bing, Yahoo, whatever it is, you find many sources and you can make up your mind from different angles.

That is what critical thinking is about. When an AI tells you this is the only way to go, even when the AI is very sure, it does not mean it is factually correct. Then people say: but I want it to be correct, because if I have to double-check, I might as well do the whole thing from scratch. And that is why we have science. We distinguish between primary and secondary sources. Primary sources are where the study is, where the real facts are, or at least as close to the facts as possible.

With as little bias as possible: peer-reviewed scientific journals, whatever you take there. When people say: I go to primary sources, for example Wikipedia; well, you probably saw that Wikipedia quotes studies, which tells you it is not a primary source, because primary sources are exactly on the matter. Wikipedia is already a secondary source. And regarding its review process, by the way: most of the authors come from a non-diverse background, which already puts a pretty strong bias into Wikipedia. But that is a completely different issue. The primary-versus-secondary-source issue is a valid one, and I know it takes time. I did an MBA, an Executive MBA, I am now writing my third master’s degree, a Master of Science, and I am probably planning to do a doctorate afterwards.

And that takes time. If you just ask AI, the results you get are fairly poor on a scientific level, nowhere near what you need for serious writing, I can tell you. So be sure that you really stick to proper sources, look at them, balance your views and make up your mind from there. Of course, some people will now say: how do I get that into an organisation? We have no time left as it is; how could I ever do that? That is a very reasonable point, so let us talk about how to implement this in organisations.

Number one is always awareness. I cannot stress it often enough: the 49-dollar (or pound, or euro) online class that people are supposed to watch during the lunch break or on their mobile phone, or listen to as an MP3 on the drive to and from work, is not going to make the cut. I am very sorry: it is not going to make the cut. The implementation needs to stay close to the matter, so that people interact with a professionally educated person.

So people need to be qualified, and 98% of people talking about AI on the market are not. I got my education from the University of Pennsylvania and Wharton Business School; you can look up these institutions. Those courses were bloody hard. I am now adding another qualification from Vanderbilt University, also not the worst university on this planet. It takes a lot of time, it is a lot of work and it is bloody hard, to be fair, but it is very good. When people say: oh, I just read this article, and here are the top 100 prompts for a great sales executive to find the best clients with passive income and write cold emails that 98% of recipients open, that is lies left, right and centre.

And unfortunately, I have to tell you: when people say they talk about AI, I look closely at what their experience is, but also what their qualification is. No qualification? You can talk to the wall; I have absolutely no time for your opinion, because proper education is the key. Without it, there is no way you get access to scientific evidence and the real facts. You cannot just scroll through LinkedIn, pick up a couple of postings and say: oh, look at me, I am now an AI expert. The speed of light might be really fast, but not as fast as the speed at which people today declare themselves AI experts.

So be sure that the qualifications are on the level where they actually should be. And I know what the issue always is: time. People say: I don’t have time to double-check, I don’t have time to cross-check the AI, because I use the AI to save time. And by the way, we don’t have time for any kind of professional education. Then my question always is:

But you have time to tidy up the mess you might have created by putting a logistics bid into a public tender where all the distances are wrong? You have time to correct everything the AI made up, after you have embarrassed yourself in front of clients and customers, including the reputational and relational damage? You have time to background-check and rewrite everything afterwards? So you have time for all of that, but no time for a proper implementation of a sustainable AI approach in the organisation? Well, if you still say: yeah, we don’t have the time, then please do not call yourself a great leader. Call yourself anything else; maybe call yourself some manager. That will be the most appropriate, and it is already the most polite term I can use here. Awareness, qualification and time.

That is the key when you implement everything we just talked about, whether in a very brief way for smaller organisations or in a fuller way for larger ones. I can tell you what is never the best pick, because of course it happened immediately when people said: oh, the European Union has this new guideline, people need AI education. So, Niels, can you do, like, for €49, just a presentation? Just say something; we don’t actually care what you say, but we need to tick that box, right? We need to tick a box that we educated people.

Can you do that? And the answer is no. We have a guideline about fees, and I stick to it; I am not a rule-breaker there. Very important: you need to take a quality approach, because if you don’t, the catastrophe is in the making, and you will find out either way too late or never at all. Customers will not say: hey, you did something wrong. Customers will simply say: yeah, we decided for someone else.

And you will probably never find out why it happened. Be sure to pick a quality approach to AI and be sure that critical thinking stays front and centre, and everything will become better in your organisation. Because the best solution is never the AI on its own or the humans on their own. It is always: the AI produces something, and humans optimise it for their usage, including the double-checking and cross-checking. That, by the way, is how I found out that I had almost ordered something that is not on the menu, and that I had supposedly walked in 12 minutes a distance I actually walk in 35.

No, it was simply wrong, and I found out by coincidence, but also thanks to the education I received: I always double- and triple-check what the AI gives me. The AI can make good suggestions, but it hallucinated quite a bit, and, by the way, we humans do so even more. Stick to quality and everything will go well. I wish you all the best doing so. And now to a few requests that get posed regularly. The first, of course: if you watch this on YouTube, leave a like here, subscribe to my channel, and put the little alarm bell on so you don’t miss out on future videos.

Put a comment there; I am looking forward to hearing from you. When you listen to this on Apple Podcasts or Spotify, feel free to leave a review there. Thank you very much for five stars, and recommend this podcast to friends, colleagues, anyone around you, because this week there is a big announcement: 100,000 listeners. You probably saw it on social media; I posted it there as well. We reached 100,000 listeners after five years.

I started this five years ago thinking: let’s hope we get anywhere near 5 to 10,000, and now we are at 100,000 listens. Absolutely amazing. Thank you very much for your ongoing support. A couple of things we only have on YouTube are the leadership tips, the YouTube Shorts; as the name says, YouTube Shorts are only on YouTube. So it really pays off to put the little alarm bell on, because you will not miss out on any future episodes, and we usually publish a couple of leadership tips per week. When you listen on Apple Podcasts or Spotify, feel free to follow and subscribe there, and you can also go to my website, NB-Networks.biz, to see what I do for a living.

When you now say: I would really like to discuss something, but not in public, because it concerns my company and needs to stay confidential: everything I receive via email stays confidential. What you put out in public others can of course read, but my email inbox is 100% confidential. NB@NB-Networks.com; feel free to send me an email, and I look forward to being in touch with you there. If you are looking for live sessions: yes, we have them. Go to expert.nb-networks.com and put your email address in there; no worries, you receive only one email every Wednesday morning, and it is 100% content, ad-free guaranteed.

In this email you get, first, free access, with no paywall, to all the articles in English and German, more than 400 in each language. Second, full access to all the podcasts, English and German, more than 400 episodes published. And of course you see the live sessions, which we only publish via the Leadership Letter; there is no other way to access them. I look forward to seeing you there.

Of course, you can also follow and connect with me on social media: LinkedIn, Instagram, Facebook and YouTube. I look forward to being in touch with you there. However, the most important thing is always the last bit that I say: apply, apply, apply what you heard in this podcast and videocast. Because only when you apply what you heard will you see the positive change that you want to achieve in your organisation. Feel free to contact me anytime when you need something specific: training, speaking, coaching, consulting, mentoring, project or interim management. I am very happy to talk about that. And if you just have a question, text me or send me an email and we will take it from there.

I am looking forward to hearing from you. At the end of this podcast and videocast, as usual, there is only one thing left for me to say: thank you very much for 100,000 listeners, happy Pride from Manchester, and, for today, thank you very much for your time.

Niels Brabandt