#490 The AI-Ready Human: How Leaders Can Prepare Organisations and People for the Age of Artificial Intelligence - Niels Brabandt interviews Paul Slater

Artificial intelligence is reshaping the world of work at a speed few organisations have previously experienced. For executives and decision makers, the central question is no longer whether AI will influence their organisations. The real question is whether the people inside those organisations are ready.

In a recent episode of the Leadership Podcast and Leadership Videocast, leadership expert Niels Brabandt spoke with Paul Slater, author of the book The AI-Ready Human. Their conversation explored how leaders, teams and organisations can prepare for a future in which artificial intelligence becomes a permanent collaborator in daily work. The discussion moved beyond the usual debate about prompts and tools and instead focused on the human behaviours required to thrive in an AI-driven environment.

The insights shared by Paul Slater and Niels Brabandt offer an important perspective for board members, executives and senior leaders who are currently navigating the complex relationship between human capability and artificial intelligence.

Why AI Readiness Is a Human Challenge

Paul Slater’s work began with a simple observation. When generative AI tools such as ChatGPT first appeared, many people were immediately enthusiastic. Some quickly experimented, adopted the tools and began integrating them into their daily workflows. However, several months later the results were very different.

Some individuals continued to thrive with AI. Others struggled significantly.

This contrast led Paul Slater to a deeper question. Artificial intelligence is not merely another digital tool that can be mastered through technical instruction. It is a force that changes how work itself is structured. As a result, the human behaviours required for success are also changing.

AI readiness therefore depends less on technical knowledge and more on adaptability, learning behaviour and the ability to collaborate with intelligent systems.

Why Organisations Matter Both Less and More

One of the most thought-provoking ideas discussed by Paul Slater in his conversation with Niels Brabandt concerns the role of organisations in the AI era.

On one level, artificial intelligence reduces the need for certain traditional organisational structures. In earlier decades companies required elaborate systems for organising and retrieving information. Entire departments existed to manage filing systems, documentation and structured data management.

Modern AI systems increasingly perform those functions automatically.

However, the organisational challenge does not disappear. It simply shifts. Individuals must now organise their time, attention and priorities in a far more ambiguous working environment. Work itself has become less structured.

At the same time organisations must design structures that combine human expertise with AI capabilities in a coherent way. Artificial intelligence does not automatically create an effective organisation. Poor organisational design can still undermine even the most advanced technology.

Control and Governance in the Age of AI

Another central theme in the discussion between Paul Slater and Niels Brabandt is control. Artificial intelligence promises extraordinary capabilities. It can analyse large amounts of information, generate insights and support decision making at unprecedented speed.

Yet this promise also introduces risk.

If leaders begin by assuming that AI outputs are automatically reliable, they may reduce human oversight too quickly. This can lead to flawed decisions, governance failures or operational damage.

Paul Slater emphasises that governance frameworks must evolve alongside technological adoption. Human judgement remains essential. Organisations must design systems that combine AI capabilities with strong oversight and multidisciplinary teams capable of identifying errors, biases or hallucinations.

In other words, artificial intelligence expands human capability but it does not eliminate the need for human responsibility.

The Governance Balance

The conversation also highlights a familiar challenge for many organisations. Overly restrictive policies often produce unintended consequences.

When organisations block useful tools or restrict access excessively, employees frequently find alternative paths. They may use personal accounts or external systems that bypass internal controls.

This phenomenon has appeared repeatedly throughout technological history. The early days of the internet produced similar patterns when employees circumvented firewalls in order to access online resources.

Paul Slater argues that the long term solution lies not primarily in tighter restrictions but in better education. AI literacy must become widespread within organisations. Employees need to understand how probabilistic technologies function and how to interact with them responsibly.

Without this understanding, organisations risk combining fear, uncertainty and incomplete knowledge. The result can be irrational behaviour at both leadership and employee level.

Why Adaptability Matters More Than Ever

Artificial intelligence also introduces a deeper psychological challenge within organisations. Many employees fear that AI may eventually replace their roles.

Paul Slater emphasises that leaders must address these concerns transparently. Technological transformation has always created both opportunity and disruption. However, the speed and scale of AI development mean that future labour market outcomes remain uncertain.

Leaders must therefore create environments where continuous learning, resilience and adaptability become core competencies.

Employees who actively develop their ability to adapt to new technologies are more likely to identify new opportunities within changing organisational structures. The organisations that succeed will be those that encourage learning cultures and open dialogue about technological change.

A Structural Recommendation for Boards

During the interview Niels Brabandt asked Paul Slater a question frequently raised at board level: if a company could introduce only one structural change in response to AI, what should that change be?

Paul Slater proposed the concept of a Future of Work Executive. This role would focus on understanding how technological change affects the structure of work itself. It would integrate elements of human resources, technology strategy, workplace design and organisational development.

The purpose of such a role would be to ensure that organisations remain adaptable. Instead of responding to change through isolated initiatives, companies would build the capacity to adapt continuously.

Artificial intelligence is not merely another technology cycle. It signals a broader transformation in how work is organised and how organisations operate.

Leadership in the Age of AI

The conversation between Niels Brabandt and Paul Slater ultimately points to a simple but powerful conclusion. Artificial intelligence does not remove the need for leadership. It increases the importance of leadership.

Executives must guide their organisations through uncertainty while simultaneously developing the human capabilities required to work effectively with intelligent systems.

The future will belong to organisations that cultivate adaptable people, transparent governance and leaders who understand both the technological and the human dimensions of artificial intelligence.

Niels Brabandt

---

More on this topic in this week's videocast and podcast with Niels Brabandt: Videocast / Apple Podcasts / Spotify

For the videocast’s and podcast’s transcript, read below this article.

 

Is excellent leadership important to you?

Let's have a chat: NB@NB-Networks.com

 

Contact: Niels Brabandt on LinkedIn

Website: www.NB-Networks.biz

 

Niels Brabandt is an expert in sustainable leadership with more than 20 years of experience in practice and science.

Niels Brabandt: Professional Training, Speaking, Coaching, Consulting, Mentoring, Project & Interim Management. Event host, MC, Moderator.

Podcast and Videocast Transcript

Niels Brabandt

The question is, are you ready? And of course, are you ready for AI? And some people wonder, can you ever be ready for AI? We have an expert on the matter with us here today. Hello and welcome, Paul Slater.

Paul Slater

Hi, nice to be with you today.

Niels Brabandt

Thank you very much for taking the time. So you wrote the book "The AI-Ready Human," which we not only see in your background, we see it here. First, of course, I have to ask: when people wonder whether humans can ever be ready for AI, and then to put it into a book, what was your core motivation to write this book?

Paul Slater

It actually came from working with, I would say, probably a couple of hundred people that I started to observe as part of the work that I was doing in learning development. And I started to observe some really interesting patterns.

Paul Slater

I saw some people that straight out of the gate, as soon as ChatGPT 3.5 came out, they were all over it. It was kind of their first opportunity as human beings to interact directly with something that they recognized as AI.

Paul Slater

Some of those people were the most enthusiastic about it right off the bat. I went back to those same people six to nine months later and saw a very different picture. Some of them were continuing to thrive, but some of them were really struggling.

Paul Slater

And it really started to occur to me that despite the fact that a lot of people were regarding this purely as a tool that you could master if you were just figuring out how to engineer a prompt, it was actually much more complicated than that. And it was something that was disrupting the world of work itself and changing how we need to work.

Paul Slater

And so then it really became a bit of an obsession for me. What are the right, or what are the most effective human behaviors to deal with this new world of work?

Niels Brabandt

Excellent. So one of your chapters asks, and I found this very interesting and also really fascinating, "Why do organizations matter less and more with AI?" What do you mean by that?

Niels Brabandt

Because some people say, "Look, on the one side, I am afraid that my job is going to be gone by tomorrow. On the other side, why do we need all these supervisors and managers when basically AI can do all of that? So why are they still sitting here?"

Niels Brabandt

What do you mean by the organizational part in your book?

Paul Slater

Yeah, so really kind of it's actually two meanings of the same thing. So one is the organization itself, and then there's the act of organization. And of course, in the English language, those two things are conflated.

Paul Slater

In terms of the act of organizing, the act of organizing for an individual, it is something where effectively, if you think about it, in order to succeed as individuals, we used to have to be very, very organized. We had to organize all of our data. If you go back far enough, there were entire job titles and entire parts of companies which were filing rooms. And when you needed to get something, you had to file it in the right place so that you could find it in the right place.

Paul Slater

Effectively, all of that is gone. But as an individual, we need a new type of organization in order to replace it. We need to organize our day. We need to organize our life in certain ways because work itself has become very unstructured and ambiguous.

Paul Slater

And then the other meaning of the word "organization," as in the company and things like that, we are at a point now where we can structure organizations in a whole bunch of different ways and use AI to help us. But we still need to structure the organization in a way that works for the business that we're in. And so you can still get organization horribly wrong, even with the help of AI.

Niels Brabandt

Excellent. And that's the perfect transition, perfect segue to my next point, when people say, "We're really interested in AI mostly because we see every one of our competitors does it, so we need to do something with AI, but we're just aware that we are not really sure how to keep it under control."

Niels Brabandt

You have a whole chapter on control in your book. And I'll give you an example, which I told you before the interview as well, from one of my clients. One of their suppliers said they had AI in place, and they made decisions based on it for staffing: where should salespeople go, where should marketing go.

Niels Brabandt

And they did this based on AI data, only to find out by pure coincidence, when someone manually checked, that there were wild hallucinations in the data. So on the one hand, people say, "AI, of course, is fascinating. We like to have it. Everyone's doing it. However, when I have to background-check everything an AI does, I can actually just do the work myself in the first place." Or, "Where am I wrong here? How can I keep this under control?"

Paul Slater

Yeah. It's funny. When I'm often asked, "What is the most significant issue associated with AI?" and when it comes to the interface between humans and AI, I think it is that issue of control. And what's really intriguing here is that AI has the promise to allow us to be able to take control of our organizations, of our teams, of ourselves in ways that we were unable to before because we don't have to sweat the small stuff.

Paul Slater

But on the other hand, it can be the cause of us potentially losing control. And you hit on something which I think is extremely important. When you're working with AI, if you start from a point of complete trust, if you just assume that AI is going to give you the right information, if you just assume that the way AI does things is going to be largely as good as or better than the human beings inside your organization, and you relax your controls, your human oversight, for example, because of that, you can end up in a world of hurt.

Paul Slater

You can end up not just with failed AI deployments, but potentially putting your business at risk because of it. And particularly in the area of governance, I think this is where organizations are really starting to try to figure this out. It's not, "I've spent $50 million on an AI deployment and it failed." It's, "I've spent $50 million on an AI deployment and it's caused an issue that has cost my business $100 million or $200 million."

Paul Slater

So when you think about the mechanisms we have inside organizations, governance and safety mechanisms, I would really focus on the role of the humans inside those organizations. And then when you come down to a team level, it's also about how you structure your team. We want to start building teams that are highly multidisciplinary, so that as you're bringing AI into your team, you've got the right set of human skills inside the team to make sure that you're using AI safely and effectively, and you're creating the right combination of human skills and AI capabilities to succeed as a business.

Niels Brabandt

Yeah, excellent. And now people say, "Okay, governance, very important word here, so we put the compliance department in charge." And I'll give you an example of what I just saw at a company where one of my friends worked. They said, "Okay, we have these amazing tools." Compliance showed up, and they ended up with, "Okay, to make everything safe, you now have Copilot Chat with no libraries, no share function, and it doesn't have access to any internal data."

Niels Brabandt

So what people immediately did was take their private ChatGPT accounts and use them at work. They just circumvented basically the whole process. So how do you give people access on the one hand without the fear that they're going to wildly breach governance or even data protection rules, which in some countries, for example in Europe, can cost you up to 4% of your annual revenue when you get caught? So how do you keep the right balance? Because I know that you have balance in your book as well.

Paul Slater

Yeah, well, we've seen this story before, I would say, multiple times. One that really sticks out for me was when the internet first became a thing. Inside many organizations, people did not have the internet on their desktop. There might be one or two computers inside the company that had it. So what did everybody do? Well, there were still phone jacks in floors in those days. So people would just basically get a modem off the shelf, plug it in, plug it into their computer, and they had access to the internet, circumventing a firewall inside the organization designed to protect it. We've seen that story before. Anytime you are over-restrictive in the controls you put in, people inside your organization will create what for them is a justifiable exception, but what in reality, for the organization, is an undeniable risk.

Paul Slater

I think there is something fundamental at work here that we really have to consider. You can make the policy decisions at the top of the organization, but the responsibility for keeping an organization safe and effective and getting the most benefit out of AI belongs to basically everybody inside the organization. And what that means is that there is a significant job of education that needs to happen here.

Paul Slater

Because right now, what you have is people all the way through organizations, I would say even in some cases including some IT departments, that really don't have a full and deep understanding of what the technology is. They don't have a deep, fundamental understanding of what it means to interact with technology that is fundamentally probabilistic in nature, versus the technology that we have been using for the previous 20 or 30 years. They don't understand that it is fundamentally different. And every human that could potentially interact with AI needs a level of AI literacy that extends beyond "I know how to type a prompt." It extends to a deep understanding of how to do so in a way that is safe and effective.

Paul Slater

Without that, what you end up with is limited information combined with fear and uncertainty and doubt that leads to irrational behaviors. There's an irrationality in terms of the way that people are behaving right now, and it stems from a lack of education and understanding of the technology as a whole.

Niels Brabandt

Yeah, excellent. So you would say professional training is becoming more and more important, especially regarding AI?

Paul Slater

Yeah, I would. And there is a challenge there, and I come from a background of learning and development, so I don't want to put anyone on the spot.

Niels Brabandt

I know. This is why I'm asking.

Paul Slater

But there is a challenge here in the sense that, in my experience, many learning and development organizations do not really understand the technology either. So I think it's basically infrastructure, policy and culture, of which learning and development understands only a part.

Paul Slater

I'm seeing an emergence of AI enablement functions that are separate from L&D but work very closely with L&D. And I think that's probably better. And inside AI enablement, you need a deep understanding, yes, of the technology, but also of organizational change management, human behavior, learning and development, those types of skills as well.

Niels Brabandt

Yeah, excellent. And you also talk about motivation and resilience in your book. When you now say AI is coming, you, of course, will see that some people say, "I am a bit nervous about this." And not only nervous because some people might think they might lose their job, but also because the atmosphere in the organization might become a bit tense.

Niels Brabandt

Because I'll give you an example. In one organization, it went through the press. People said, "No one is going to lose their job." However, there were quite a number of what people would call the usual admin jobs. And after people retired, they realized those jobs were not coming back. So someone retires, and instead of someone new getting hired, as it was back in the day, the job is simply gone.

Niels Brabandt

And that, of course, gets closer and closer each day. How do you keep the organization running when not everyone in your organization is sitting there with a bilingual Stanford MBA, saying, "I'm going to find a job anyway, 100%"? So how do you keep the organization running when fear suddenly gets closer and closer to these people?

Paul Slater

Yeah. And I think probably the key answer to that question is that it's extremely important to have rational discussions, based on transparency and clarity, about the direction the organization is going, the role of AI, and the role of human beings within it. And that's not happening inside a lot of organizations. A lot of organizations are not really talking about it effectively.

Niels Brabandt

Unfortunately.

Paul Slater

There is a non-zero risk that as this technology evolves, particularly as you start to see more convergence between AI and automation, which we're already seeing with the emergence of agents, and more convergence with robotics and new computing trends like quantum and so on, significant unemployment will occur. We can't deny that. We don't know. But we don't know that it's not going to be the case.

Paul Slater

And just making tired arguments like, "Well, it hasn't happened previously. Every time we've had major technology advances, we've had good employment numbers. Therefore, it must happen again." We don't know. It's only been a couple of hundred years since the Industrial Revolution. And because we don't know, I think we have to be open and honest about that.

Paul Slater

But what we do have to do is help people understand that if you don't just embrace the technology in the sense of playing with it, but really start to think seriously about the job you're going to have in three years' time, in five years' time, and really focus on honing traits like resilience and, specifically, adaptability, being the most adaptable human you can be, then there are real opportunities for you. So we have to create an environment where everybody is learning all the time, everybody is adapting all the time, and everybody is leaning into a world of potential future uncertainty, knowing that because they are really adaptable humans, they're going to find a spot that works for them.

Niels Brabandt

Excellent. And I have two more questions to wrap this interview up. When I prepared for this interview, I looked into certain clusters of questions appearing more and more. And there is one question which was asked by almost 80% of all board-level people participating in the survey. They are asking, "If you advise at board level, and you can only pick one, just one structural change regarding AI that you insist on in front of a board, what would it be?"

Paul Slater

Structural change?

Niels Brabandt

Yeah.

Paul Slater

I think I alluded to it a little bit before. I think that inside every organization above a certain size, there needs to be what I call a Future of Work Executive. They don't necessarily have to be in the C-suite, but it's probably better if they are. And I think of the future of work in that context as being a fairly all-encompassing function. It encompasses HR, it encompasses corporate real estate, it encompasses forward-looking technology.

Paul Slater

But the idea here is that AI is a signal, in a sense. AI is a signal that the how, what, when, where and why of work is changing much more rapidly than it ever has before, and that you need the right people, the right facilities, and the right infrastructure, policy and culture to take advantage of that. So companies themselves need to be very change-ready, leaning into change, so that change becomes a continuous competency rather than a series of individual change initiatives. And so I think a restructuring around the idea of rapid technological change, probably under the auspices of a Future of Work Executive, feels to me like the direction we need to build in, so that organizations are adaptable and change-ready by design.

Niels Brabandt

Brilliant. I think future of work executive is something which you probably should trademark by tomorrow. Because to me, that sounds really, really good, especially with your background. Amazing. And, of course, one final question I have to ask. When now people say, "Hey, I think Paul could really be helpful for our organization, either as a consultant, coach, mentor, or as a keynote speaker for our conference," how can they get in touch with you?

Paul Slater

Okay, I do all of those, so yes. paul@palslater.ai is how to contact me directly, or you can go to palslater.ai and learn more about me and the work I do, or just read the book. Through any of those, you'll find ways to get in touch with me. And, of course, I'm always available on LinkedIn.

Niels Brabandt

Perfect. These are the perfect final words. The AI-Ready Human, Paul Slater, thank you very much for your time.

Paul Slater

Thank you.
