Leadership is rapidly evolving as artificial intelligence becomes deeply integrated into how organizations operate and make decisions. In a world where volatility, uncertainty, complexity, and ambiguity are the norm, leaders are being challenged to not only understand the role of AI—but also to harness it in ways that enhance what makes leadership truly human. This episode dives into why leaders can no longer afford to treat AI as a separate technical domain, but rather as a strategic partner that augments decision making, stretches creativity, and humanizes leadership itself.

As generative AI rapidly advances, the question is not whether leaders should use these tools, but how to do so thoughtfully, ethically, and effectively. This conversation explores how leaders can remain at the center, leverage augmentation, and make deliberate choices that will define competitive advantage and organizational health in the next decade. The episode also emphasizes the necessity of skepticism, risk awareness, and the cultivation of skills—like “human calming”—that will set exceptional leaders apart in an AI-rich world.

Meet Bob

Bob Johansen is a renowned futurist and leadership thinker, co-author of the third edition of “Leaders Make the Future,” which introduces 10 essential skills for AI-augmented leadership. With decades at the Institute for the Future and experience teaching at the Army War College, Bob brings unparalleled insight into how leaders can prepare for—and shape—the AI-empowered future. His co-authors, Gabe and Jeremy, bring expertise in generative AI research and practical development, rounding out a team committed to equipping leaders for the next wave of technological transformation.

Timestamped Overview

  • [00:05:27] Introduction to AI and Leadership: Exploring why AI is now essential to leadership development and future success.
  • [00:07:00] The Human Core of Leadership: Discussing the enduring need for human-centered leadership—augmented, not replaced, by technology.
  • [00:08:34] Evolution of Leadership Skills: How classic leadership skills stand the test of time, but require a generative AI lens.
  • [00:09:34] Rethinking “Cyborg Leadership”: Moving beyond science fiction to practical digital augmentation for leaders.
  • [00:11:46] Developing the 10 AI-Augmented Leadership Skills: Why these skills matter and the unique place of “human calming.”
  • [00:15:06] The Role of Human Calming: Centering leaders’ intention and composure in an AI-driven world.
  • [00:17:36] The Value of Skepticism: Why questioning, challenging, and stretching assumptions is vital in adopting new technology.
  • [00:19:17] Embracing vs. Rejecting AI: Strategies for experimenting, learning, and building organizational trust with emerging tools.
  • [00:24:00] Facing Risks and Unknowns: Assessing current and future risks—including cyber threats and over-focusing on efficiency.
  • [00:30:03] Effectiveness vs. Efficiency: Shifting the leadership focus toward innovation, not just automation.
  • [00:34:03] All Hands on Deck: Why AI is a human and organizational story—not just a technical one.
  • [00:35:12] Future-Back Thinking: Blending human and machine, and why leaders must choose to play and prototype.
  • [00:39:44] Managing the Noise: How scalable foresight and intentional augmentation can help leaders cut through information overload.
  • [00:43:35] Practical Takeaways: Real-life examples of how leaders can use AI tools to augment creativity and effectiveness.
  • [00:46:22] Embracing Skeptical Foresight: Encouraging leaders to challenge assumptions and stretch their strategic thinking.
  • [00:50:02] Don’t Get Discouraged: The learning curve of AI and the importance of hands-on experimentation.

Guest Resources

Related Articles and Podcasts

Join Our Elite Mastermind Community

Join Scott and our dynamic Mastermind Community! 🚀

 

Unlock the power of growth-focused leadership with a group of like-minded individuals who are passionate about taking their leadership skills to the next level. 🌟

 

Ready to transform your leadership journey? Click here for more information! 👉📈

Leave an iTunes Review

Get a FREE membership!

If you’re enjoying the show, leave us a review on your favorite podcast app. If your review is chosen as the Review-of-the-Week, you’ll get a free month in the Leader Growth Mastermind!

What to do: Write a review, send an email to scott@movingforwardleadership.com with a screen capture of the review, and wait to hear it read out on the show! 

Thanks for the amazing support!  

 


Unlock Your Peak Leadership Potential with Personalized 1-to-1 Coaching

Elevate your leadership to its highest potential with personalized 1-to-1 coaching from Scott. Discover the path to peak performance and achieve unparalleled success in your leadership journey. Ready to unlock your leadership’s full potential?

Subscribe to the Peak Performance Leadership Podcast

Join thousands of leaders worldwide who are transforming their leadership skills with the Peak Performance Leadership podcast. Unleash your full potential and stay at the forefront of leadership trends. Subscribe now and embark on your leadership journey of excellence!

Follow us on Your Favorite Social Media



Transcript

The following is an AI generated transcript which should be used for reference purposes only. It has not been verified or edited to reflect what was actually said in the podcast episode. 


 

Scott McCarthy:
Gents, welcome to the show. And by gents, for the listener, I don’t have one guest, I don’t have two guests, I have three guests today. We have Bob, Gabe and Jeremy. Gents, how are we doing today?

Bob Johansen:
Doing great.

Scott McCarthy:
All right, so we are talking AI and leadership. Bob, I’m going to kick this one to you first. So first off, out of the gate, like normally when we talk leadership, we’re talking about humans. Human to human. You know, that connection, that interaction, motivation, getting people to believe in the vision and the mission of an organization. Yet you’ve written a book with these two guys all about the exact opposite of this. So for the skeptical listener out there wondering why we’re talking about leading in AI: well, why are we? And why did you write a book on it in the first place?

Bob Johansen:
Sure. Well, we agree with you, Scott, that the essence of leadership is human. The human has got to be in the center. We’re futurists. However, we think future back from 10 years ahead. And we’ve been doing a long term study on leadership and have concluded that if you look 10 years ahead, all of us, all of us who are successful are going to be augmented in some way. And we get to choose how we’re augmented. So we’re really at a kind of key tipping point here where if you think 10 years ahead and you think top leaders, we’re all going to be cyborgs, we’re all going to be augmented humans in some sense.

Bob Johansen:
But the subtitle of our book, you know, the title is Leaders Make the Future. The subtitle is 10 New Skills to Humanize Leadership with Generative AI. So we’re not talking about automating, we’re talking about humanizing, in effect, re-enchanting leadership, but also augmenting. So it’s an opportunity over the next decade to ask, what do humans do best, what do computers do best? And most importantly, how can we partner to do things that have never been done before?

Scott McCarthy:
That’s super interesting, Gabe, I see you nodding along there. You got anything you want to add into this conference?

Gabe:
Yeah, absolutely, Scott. So this is the third edition of Leaders Make the Future. Bob wrote the first and second editions on his own, and those editions laid out the 10 leadership skills that leaders need to thrive in, you know, a chaotic world, in a world that is constantly changing, that is volatile, where there’s a lot of uncertainty, a lot of things just kind of shifting all around us. And it turns out that the leadership skills have stood the test of time. It really is a testament to, you know, Bob being just a wonderful futurist and being able to look long and see what’s out there. But we found ourselves with a unique opportunity to add in the lens of generative AI. And so that’s where Jeremy was brought on and I was brought on, to just kind of look at the 10 skills and see, you know,

Gabe:
How can we reshape them and reform them and put them in a new light, given that we are at such an opportune moment right now? You know, we believe that history will look back at where we are today and really say this was a defining moment for humans, for leadership, for all of humanity. So it was really exciting to go back to that original work and give it the spin, give it that new angle of considering generative AI and what it means, what it will mean, for human leadership in the future.

Scott McCarthy:
Oh, that’s awesome. I love it. Now, Jeremy, Bob said something that piqued my interest and scared me at the same time. And that was the whole notion of, like, cyborgs. And like, I’m just thinking Star Wars and Star Trek smashing together. Like, how are we foreseeing the future here? Am I going to have pieces attached to my head or something? Where do you see this going?

Jeremy:
I think the way we see it is less about whether there are pieces attached to your head and more about being able to use digital tools to expand the kinds of things that we thought were human before, and solely human. Things like the hunch, things like the feel for the situation, things like your gut instinct and reasoning. We know that the way that AI models work is not the way a human mind works, but it is doing something that looks a whole lot like thinking. I don’t know, maybe we should call it, like, thunking or grokking or something so that we can stop arguing about whether it’s really thinking. Because it’s clearly helping with that kind of thing, whatever it is. And for senior leaders, this is really important. Why? Because when you’re a senior leader, what gets pushed up to you is the things that don’t have an obvious answer. If they have an obvious answer and you’re a senior leader focusing on it.

Gabe:
You really shouldn’t be.

Jeremy:
You should be pushing that down to others in the organization. And so you need to be looking at things that don’t have a lot of historical data to drive them, things where you need to take a small amount of information and do a lot with it. And that’s what the new type of generative AI tools have the potential, although not the guarantee, of helping with.

Scott McCarthy:
I loved what you said there. You know, if the answer is obvious, then you know you shouldn’t be doing it because that’s not your job. That’s how I would have put it. Right? Like if someone comes to me, as I told you guys before we hit record, my day job is as a senior Canadian army officer and commander of the largest supply depot in the country right now. And like if the guys come to me with a question and it’s like, why am I being asked this? That’s basically how you summed it up there, which is beautiful. Now I wanted to flip over to Bob. Now as you said, this is the third edition of the book, and you wrote the first two and came up with 10 AI-augmented leadership skills. Why 10? Why not 12 or 16? Where did 10 come from?

Bob Johansen:
Well, we just refined them, and they developed, as Gabe said, over the years. The first edition was done in 2009, and we’ve tested them globally, and it came out to about 10. It turns out that nine of the 10 in the new edition are augmented. But the 10th, the 10th is called Human Calming. It’s really what’s inside of us that leadership should be based in. We’re focused on top leadership, senior and rising-star senior leaders. So we work a lot with companies and with large nonprofits.

Bob Johansen:
I also teach with the Army War College. So I get the new three-star generals in their first week in Washington, and they read this book and we talk about leadership, we talk about strategy. And the key thing is to figure out, as Jeremy said, how you can assist, augment, enhance the human abilities that we humans have to do things like develop clarity or flip dilemmas or manage polarization. We have to learn how to do all those things. And what generative AI does well is help us stretch. So, you know, I’ve been writing a long time, Scott. You and I have known each other for several books now. I’ve done 15 books.

Bob Johansen:
For the last two years, I’ve been augmented myself by a generative AI program that Jeremy created for me that I’ve nicknamed Stretch, and I call it Stretch because it’s there to stretch my mind, and it’s continuously stretching me. Now I don’t go there for answers. This is not a question-and-answer machine. I don’t go there to just ask simple things. I go there to expand my mind about complicated things. So I have ongoing conversations with Stretch pretty much all during the day as I’m writing. And the idea of it is to continuously stretch me. But it’s really important to note: while I get really involved in these deep conversations,

Bob Johansen:
I don’t trust Stretch. I don’t trust Stretch because Stretch can sound enthusiastic even if Stretch doesn’t know what it’s talking about. So you have to be really careful of this. On the other hand, it’s awesome in terms of expanding my ability to think. And, you know, in spite of all the books I’ve written, I still have blank-page syndrome. I get stuck right at the beginning. Stretch is great to help me get started, and it’s never where I end, but it gets me going.

Scott McCarthy:
That’s amazing. Just to have that companion. I definitely laughed when you said you don’t trust Stretch. Like, do you have a secret life of being a secret agent or something? You’re like, oh, I talk with them all the time, but I never trust them. You know, Bob, you mentioned something that really piqued my interest, and that was the last skill you mentioned, which was human calming. Like, I want to jump past all the other ones. I want to jump to this one, because this one, to me, is the most oxymoronish. So we want to use AI to calm a human.

Scott McCarthy:
Is this what I’m getting from you?

Bob Johansen:
Well, it’s more that you want to calm the human before you use the AI. So, you know, the AI can help because you can offload the things that you’re not as good at. The AI can help to stretch your thinking about complicated topics. The AI can help in terms of providing examples or signals from the real world that you might not know about. These generative AI systems are so expansive in terms of their knowledge base, but they really need that human calming at the center. So the human calming is more something we’ve got to do on our own, to decide what our intention is. These aren’t routine tools. These aren’t routine tools. They’re really a social communication medium, a collective intelligence, essentially, that requires both digital and human input.

Scott McCarthy:
That’s super interesting. Gabe, do you have more to jump in on with this topic?

Gabe:
You know, there’s not much credit I can take, but I will take some credit for that 10th skill of human calming. Bob and Jeremy came together initially to start this book and start thinking about generative AI. And they came to me last and they said, you know, Gabe, we’d love you to join the project. And I said, I will gladly join and be supportive and be 100% in, only if you let me play the role of the skeptic. Only if you let me play the role of asking the difficult questions. Not the kind of skeptic who thinks AI is a hoax and is going to go away.

Gabe:
But kind of saying, what if people rejected AI? What if we didn’t use it for everything? What if what we know about AI completely ends up being wrong, or just ends up going in a different direction? And to Bob and Jeremy’s credit, I mean, almost immediately they said, yes, absolutely, we see the value of having a skeptic join the team. And so I think that’s where a little bit of that human calming skill initially came from. It was saying, you know, this isn’t a book that’s saying AI is the answer to everything. This is a book that’s saying AI can be a useful tool. You know, both Jeremy and I started off at the Institute as Bob’s research assistants. We’ve since, you know, moved on and gone up. But Bob used to sit next to us and have conversations with us. And, you know, maybe Bob trusted us more than he did Stretch.

Gabe:
Maybe he didn’t trust us that much, but we used to play that role of having those conversations and expanding those viewpoints and asking questions. And, you know, Bob’s criterion for bringing on someone to help him is that they think differently from him in an interesting way. And I think that’s what we’re going for here. And that’s how we’ve programmed Stretch: to think differently, to act differently, to present things differently, in order to stretch the conversation, in order to stretch what we think we know and what we think we want to know, and ultimately get us to a place that allows us to be better prepared for the future. Ultimately, as futurists, that’s our job. How can we go exploring the possibilities of the future in order to be more resilient, in order to be better prepared for whatever unfolds in the future?

Scott McCarthy:
No. That’s amazing. Wow, I’m loving this conversation already, guys. Jeremy, I want to double right back to the beginning. You know, we started with the end, but let’s go back to the beginning, in that, you know, I talked about AI and humanity aspects and stuff like this. What do you say to leaders out there who go, well, you know, this ain’t for us, we’re a people-first organization, or, you know, pick 50 million different other arguments you might have against using AI in this type of nature. I’d love to hear your thoughts on how do we go about, I’m saying convincing, but, you know, just making that counterargument back to them.

Jeremy:
Well, they might be right. I mean, there are organizations today that still don’t need a website, though they’re few and far between. And there are some organizations that don’t make much use of mobile phones. They certainly do exist. And so I certainly wouldn’t assume that AI or generative AI tools are right for a specific leader or a specific organization. I would say that we just don’t have enough information about what they do right now, and how they should be used and how they could be used within organizations, to really make that determination confidently. And so what I would encourage leaders to do is experiment with their company in doing this. Right now we have this new technology.

Jeremy:
It’s been around for a few years. We’re starting to get a few glimmers of the kinds of things it can do, but we’re all catching up together. Even the people creating the models, the model labs, aren’t fully aware of how they work yet, or of what they really can or cannot do. And so I’d be skeptical of a leader who said, we know this isn’t for us, and I would suspect that they’re probably leaving a lot of value on the table. What I would suggest to that leader is to bring people in on it, say, we’re not sure about this technology, but we’d like to try it out. Let’s do it together. These are the clear yes zones for where we want to try this. These are the clear no zones.

Jeremy:
And you do have to do the yes zones. Organizations are usually pretty good at the no zones. These are the yes zones. We’re going to do it here, and when you do it, you’re going to share what you learned, and then we’re going to give you a badge or some kind of reward for helping us learn together. And it may be the case that at the end of that, your organization learns together that there’s really no use for this. But that’s probably not going to happen. And if you don’t create those kinds of programs so that people can learn together in your organization, two things will happen. Your organization is going to lose a lot of value from not gathering those learnings.

Jeremy:
Because although we know a little bit more than other folks, there are no experts on this yet. The earliest models of the kind that we’re using today were appearing in 2019, 2020. So no one has a lot of experience with this. And then, two: people are going to start using it without asking you, and then they’re going to hide it, and then you get shadow IT, the same way that we had with social media and all kinds of other tools. And we know that that isn’t an effective way to innovate in an organization. So that’s what I would say: try it out, and try it out with your people, so that they trust that you’re doing it with them in mind, and reward them for the learning that they give the organization.

Bob Johansen:
And Scott, you’re a military guy. You understand risk. The risk here is thinking you know more than you actually know. The people that really worry me about generative AI are those who are sure about what it is, those who are certain about what it is, whether you decide, oh, this is great, or you decide, this is awful. But if you’re certain, you’ve got a dilemma, because nobody really should be certain. We’re so early in the process. AI itself has been around a long time, but this generative AI, large language models, and the kind of interactive collective intelligence media like Stretch, those are really very new and almost instantly practical, but with risk associated with them. And even the developers don’t know exactly what they do.

Bob Johansen:
So we all need to be humble and we all need to be experimenting, but we really have to resist certainty. Whether it’s positive certainty about AI or negative certainty, nobody should be certain right now.

Scott McCarthy:
That’s a great point, Bob. Thanks for jumping on there and adding that in. And Jeremy, I appreciate your thoughts there. I would offer to you that, yeah, you’re 100% right. Normally organizations find ways to say no before they say yes to something. I am an oddity, as Bob may remember. I am the inverse. Like, hey, let’s try it. Let’s go.

Scott McCarthy:
What’s the worst that’s going to happen? I don’t know, but we’ll figure it out. But Bob, you bring up a really good point, and that is, no doubt the listener right now has risks going, like, red light, red light, red light, flashing like crazy. So what are some of those risks that you guys normally hear about or are foreseeing down the line? And if you know what the counters are to those risks, or the mitigations, or what have you, I’d love to know what those are as well.

Bob Johansen:
Sure. And Jeremy, I think you should answer this, because you have the greatest technical depth of the three of us. But I want to start by saying the University of Pennsylvania Wharton School professor Ethan Mollick, who’s written a really nice introductory book about this, says at the beginning of the book that if you want to truly understand generative AI, you have to at least invest three sleepless nights. Three sleepless nights. Now that leads to humility. At the end of three sleepless nights, which I’ve done, actually more than that, you still don’t understand exactly what’s happening here. But there are risks associated with it. So the biggest risk I’d introduce is to think you know what it is. But Jeremy, you take it from here.

Jeremy:
Well, let me start with the present-day risks and then go out to the future, because I think both will be interesting in different ways for your audience. There are a couple of things that I think are worth thinking about with the present-day risk. The first thing is that most of the risks around using AI software are neither more nor less than those of using other software. A misunderstanding that a lot of leaders have is that simply by using the AI tools, the tools are learning from you. Fortunately, this is not the case. In order for an AI to learn from customer usage, the company has to very carefully and deliberately collect the data and then spend a whole lot of time and money remaking the model in order to get that data into it, along with a whole lot of other tooling. Now if you don’t trust the company, you don’t trust the company. They certainly can do that.

Jeremy:
But if all of your data is already in SharePoint, don’t worry about using AI models through Azure. If all of your app is running on AWS, don’t worry about using AI models through Amazon. You know, if you have an SLA, of course, like, these are important; company data is important. But the reviews actually should not be any more stringent for AI than for any other software tool. Now this is relaxing, right? Because we have patterns of review for all of these things that we’ve been using for a long time. The other risk I think in the near term is that you let the people working for you just take whatever off-the-shelf product the company you already have an SLA with is offering you. So, oh, it’s so easy, we’ll just do Copilot because we’re already with Microsoft.

Jeremy:
Oh, it’s so easy, we’ll just do Gemini because we’re already with Google. That’s a big mistake, because who you’re already with may not have the best AI tools, and you’ll lose a lot of value because of that. Jumping out to the future, I think it’s a little bit more interesting. The biggest risk is that people focus too much on efficiency and miss what they should focus on: effectiveness. We see people talking a lot about automation, and automation saves costs.

Jeremy:
And of course, this is something that we want in our businesses. But if you’re offered a car and you are just using it to drive to the same places that you would have walked to, you’re missing the point. The purpose of a car or a plane is not to more quickly go to the place that you would have walked to. It’s to go places that you never could have gone to. And so, our job is to be writers and thinkers, and a bit to make AI apps. When we’re writing books right now, we’re asking, how do we write books better? But we’re also asking, what books can we write now that in the past would have taken us 50 years to write, but now we can do in five years, maybe. You know, if your job is to make products for people, ask yourself, what are the products that we never could have dreamed of offering people that maybe we can now? So the biggest risk is just optimizing for what you have instead of opening your mind to what’s truly new.

Bob Johansen:
Now, Jeremy, you should touch on cyber risk and cybercrime a bit too.

Jeremy:
Yeah. Unfortunately, the increase in capabilities for cybersecurity right now is skewed toward the offensive. But there are more techniques coming out to defend against risk in the cyber sphere. It’s a whole big topic unto itself. But I guess what I can say, in short, is that LLM tools are still insecure, so don’t give them access to things that you wouldn’t want anyone to have access to.

Bob Johansen:
Yeah. And I think the reality is the bad people have access to this stuff too, and they have fewer constraints than we do, like laws.

Scott McCarthy:
Yeah. Not to mention morals and ethics and all those other things too. Right. But to go back to something Jeremy said there, which, like, I went, oh, that was good: the effectiveness versus efficiency debate. And I loved your example. Like, hey, if I got a car, why am I going to use that to drive 20 seconds away where I could just walk, or two minutes away where I could just walk? Let’s use this for something new, and let’s treat it as a new capability. That’s kind of the verbiage that we use in the military when we talk about capabilities. Let’s try to develop this into a new capability.

Scott McCarthy:
Right?

Bob Johansen:
Exactly.

Scott McCarthy:
I definitely enjoyed the risk part. And like, hey, just because you’re using one tool, or it comes with the suite. Like we have access to Copilot because we use the Microsoft suite right now. Is that the best one? To be determined. Jeremy is shaking his head no, for the audience who can’t see him.

Scott McCarthy:
He’s shaking his head violently, though. And I’ll get to you in a second. But, you know, we’ve got to look at what is the tool that fits our needs. And that’s the thing that I think as leaders we can’t forget. Just because it gets packaged with the rest of the software you’re using doesn’t mean that it’s the one that fits your needs the best. That’s how I look at it. Bob, you had a point to add in there.

Bob Johansen:
Yeah. The great management guru Peter Drucker said the definition of efficiency is doing things right. And that’s really important. And a lot of the current effort around generative AI is focused on efficiency, doing things right. We’re futurists, though, and we look 10 years out. That’s not the big story. That’s not the big story. It’s kind of easy to get things right in that sense and control that.

Bob Johansen:
So go ahead and do it. We’re not at all against that. But the big story is effectiveness, which Peter Drucker defined as doing the right things.

Scott McCarthy:
Love it. I love it. You know, I often ask my team because, you know, like many organ... Sorry, can’t talk here. This is something I just don’t edit out. But like many organizations out there, you know, we’re short-staffed, with high demands on us right now. And the thing I keep asking my team is, yes, we are working, yes, we are busy, but are we doing the right things? Are we doing the work that we’re supposed to be doing? I assume that’s the same type of conversation we could have with AI.

Scott McCarthy:
Are we using it the way we should be using it?

Bob Johansen:
Yes.

Scott McCarthy:
You know, Gabe, I haven’t heard from you, so. Yeah, you’re jumping in, so jump in, man.

Gabe:
I’d just like to say, one of the things that we know from futures work is that when the external environment is constantly changing, we have a tendency to hunker down and continue to do the same things, we continue to do more of the same things that we’ve done historically, because it’s almost that innate need of, like, wanting to have control, wanting things to stay the same as the external environment changes. What we advocate for at the Institute for the Future is helping people understand that when things are changing, that is probably the most opportune time to say, is the way that we work actually the right way? Are the processes and the flows that we have still the right ones? Or should we use kind of this moment of chaos, this moment of change, this moment of uncertainty, as an opportunity to leapfrog ahead and start to do things differently? And so kind of when we’re talking about this era, when we’re talking about what we do and what we don’t want to do, and kind of testing those assumptions, now is probably the strongest time to actually do it. And, you know, this isn’t just a technology story. This isn’t just a conversation that the CTO and the CIO should be having. The CHRO should be involved, the chief people officer should be involved, the chief marketing people, everyone across the organization should be involved. Because, yes, this is a technology, but this is not purely a technology story. This is a human story.

Gabe:
And if you are human, you should be at the table being a part of this conversation.

Scott McCarthy:
Oh, love it. All hands on deck. Because you know it’s going to affect us all. And that’s why it drives me nuts when I hear the old adage of, oh, leadership? That’s an HR problem. Like, no, no, no, no, no, it’s not. Sorry, I am not an HR officer.

Scott McCarthy:
I am an operations guy. But here I am. I am in charge of everything when it comes to my team. So, no, it’s all hands on deck. And I don’t think AI is going to be any different whatsoever. Gents, moving forward, how do you see the whole divergence between humanity and AI and the machines and all this stuff? Where do you see this going? I know, Bob, you kind of mentioned earlier that it’s coming together, but I also foresee potential conflict inflection points. So, Bob, I’d like to start with you: where do we see this moving forward?

Bob Johansen:
Sure. So we like to think future back instead of just present forward. And if you think future back, the reason why you do that is if you think future back at least 10 years out, sometimes more. That’s where you find clarity. You know, clarity lives out there in the future. It’s on the horizon. And it helps you figure out, well, where are we going right now? The present is so noisy, so painfully noisy, so dangerously noisy, that it all boils down to kind of an us versus them. And sometimes the us versus them is the humans versus the tech.

Bob Johansen:
If you think future back, that’s kind of silly, because there’s no way they’re going to be separate. They’re going to be blended one way or another. They’ll either be blended humanistically or they’ll be blended in a way that dehumanizes. And we get to choose. That’s the human calming. That’s the choice window we have right now. But you only have it if you choose to play, if you choose to learn enough to play, if you choose to get involved in the conversation. If you just say no early to GenAI, you’re checking out of the game.

Bob Johansen:
This game is going to be played with augmented media and augmented tools. That’s just obvious 10 years out. We just have to figure out how that’s going to work. And as Jeremy was saying, the only way to get there now is to prototype our way toward it, to learn as we go.

Jeremy:
And you’ll have to use different techniques than you’re using today, different team structures, different methods for guiding the organization. We talk a lot about this in the book, in the section on augmented human-agent swarms: in 10 years, leaders aren’t going to be leading organizations of just people or just agents. It’ll be swarms of thousands of humans and agents together. So this sounds really intractable. And it will be if you try to use command and control or very hierarchical methods to guide it. You’ll have to think about it differently. But we have methods for guiding huge, interrelated, complex systems, largely from the realms of finance, for instance, or politics or policy, using guides and nudges and incentives to steer the interactions of the group. And if a leader does have clarity, they will be able to do that.

Jeremy:
It’ll require a lot. We use the framework from Kentaro Toyama of intention, discernment, and self-control, which is what grounds that human calming. And the reason for that is that in this world of human-agent swarms, there are going to be so many distractions. I mean, there already are, right? The risk of being distracted will be so high, the amount of noise will be so much more than there is even today. I know that seems impossible, but if you’re able to get out of that, then you’ll be able to create a lot of value and really do things that are totally impossible in today’s organizations.

Scott McCarthy:
So how do we get out of that noise? Or how do you foresee us getting out of that noise? Because it’s terrible right now. Like, just in the 34 minutes and 15 seconds of recording, I lost track of how many times my phone has buzzed with notifications, and I’ve got a ton turned off already as it is. Like, there’s things going on all around. The noise is already insanity. I can’t imagine. Some people leave all notifications on, subscribe to 50 million newsletters in the hopes that, oh, maybe one day they’ll have a coupon code for me because I’m actually looking to buy that one thing. So I’m going to receive 5,000 emails a day and never look at them, because there’s just too many.

Scott McCarthy:
And if it’s going to get worse with all these AI bots and these swarms that you talked about, how are we going to cut through the noise? To me, it sounds like it’s going to be almost crippling.

Bob Johansen:
I think, Gabe, your notion of scalable foresight fits right in here.

Gabe:
Yeah, Scott, you know, I think you might naturally be a futurist, in case you ever want to look for a new career, because what we do is ask those kinds of questions. And, you know, one of the research agendas from the institute this year is understanding what we’re unofficially calling making sense of the slop from the Internet. Right? We’ve had the Internet for a while now. We finally have machine learning, AI, digital twins, all of these sophisticated tools that are supposed to make sense of everything that we’ve developed. But a lot of what we’ve developed is cat videos and viral TikTok trends and all of this slop that is out there.

Gabe:
So how do you expect intelligent systems to come in to unintelligent noise and make sense of it? And so the whole concept here is saying, you know, I am the kind of person who wants to learn new things, so I’m going to deploy AI and all of the tools that I need to help me filter through the noise, figure out what is truly new, and present that to me. A leader might say, you know, I just want the tools that I’m using in the future to reinforce what I already think and what I already know. You may agree or you may disagree with that, but that might be the way in which someone uses it. And so I think the question here is, what is it that you need augmentation help with? In a lot of the conversations that I’ve had with leaders, we have to change our way of thinking. A lot of leaders, especially in HR, assume that AI can be deployed to replace human beings. If you put AI at the center, the human is no longer in the picture, and you see where that takes you. The conversation that I’m trying to have with these HR leaders in particular is: what if we kept the human at the center, and we understood really well where the edges of their competence were, and then deployed AI at the edge? How much further could you actually go? How much better of a partnership would that be? It’s not a replacement story.

Gabe:
It really is an augmentation and a stretching story. And so I think your question, Scott, is exactly what it is that we try to do. And as Bob mentioned, this concept of scalable foresight is taking that and applying it at scale. Not having it just sit with one person or one team or one area of the business, or even society, but really scaling it so that it’s commonplace, so that it’s the common framework and the common understanding moving forward.

Scott McCarthy:
Wow. Love it. And just one side comment: if I’m putting AI on the edges of my capability, it doesn’t have to go that far to hit the edge. It had better be a robust system; it’s got a lot of work to do. All right, guys, listen, we’re going to slowly wrap up here. But before we wrap up, I kind of want to do a last roundtable with the three of you. I’m going to toss a question out, and each one of you gets a chance to answer it from your own perspective. And that is: what’s one thing, from your perspective, that a leader can walk away with today and feel better about AI, the future, or something they might be able to implement right now, tomorrow, what have you.

Scott McCarthy:
So I’ll start with the man himself, Bob, and then we’ll go Gabe, and then Jeremy after that. So, Bob, what do you think?

Bob Johansen:
Yeah, let me give you a practical example from last night. I’m working on a new book that’ll probably come out in 2026, all about the concept of faith in the future. And I’m really struggling with the title. In the middle of the night, I came up with a title idea. So I came down to my study and started talking to Stretch. I’ve just learned to talk to Stretch; I don’t just type to Stretch now. So I sat down, opened a conversation, and said, Stretch.

Bob Johansen:
I had this middle-of-the-night idea. Here’s the idea for a title; here’s the subtitle idea. And I just mused for a while, just talking, kind of rambling. I was half awake. And Stretch then said, oh, interesting ideas, Bob. Let’s play with four or five different variations. And it did four or five different variations. And then I came back and gave reactions and said, well, let’s switch over to typing now, because I want to actually see these all as I’m working. And in about an hour I had an improved title.

Bob Johansen:
Still not the final title, but it took a middle-of-the-night idea and made it something tangible that I could actually begin to work with. I started by talking; I shifted to typing. So the big takeaway is this stuff is now practical. You can take it into your daily life and apply it in practical ways, but it’s up to you to learn how to talk, how to have conversations with this. It isn’t just a question-and-answer machine. This isn’t your grandpa’s Google; this isn’t Wikipedia. It’s a whole new thing. But it’s actually very practical.

Bob Johansen:
If you figure out how, you can try it out yourself. And half of it’s going to be you learning new skills.

Scott McCarthy:
I love it. I am also in the middle of writing my first book. Not like my 500th like you, Bob, but my first ever real book.

Bob Johansen:
Good.

Scott McCarthy:
And I woke up in the middle of the night with not the title but the ending. And I had to get up and write it: this is how it has to end.

Bob Johansen:
Good for you.

Scott McCarthy:
You know, it was like 3 o’clock in the morning. I’m sitting there writing, and anyone would probably be like, what the hell are you doing? But yeah, I had to do the same thing.

Bob Johansen:
And when you woke up, was it still good?

Scott McCarthy:
Sorry, say what? Sorry, Bob, what was that?

Bob Johansen:
When you woke up and look back at what you’d written in the middle of the night, was it still good?

Scott McCarthy:
It needed a little bit of fine-tuning. But the whole idea, the premise of how it ends, was, I think, spot on, but maybe a little bit biased.

Bob Johansen:
Good for you.

Scott McCarthy:
So, thanks, Bob. Gabe, thoughts?

Gabe:
Yeah, well, just picking up on where Bob left off: I’m just glad I’m not the one he’s calling at 3 a.m. to workshop with, because I’m not sure how productive a conversation that would be, as much as I love and respect Bob. You know, I think, Scott, my big lesson here is: be the skeptic. Your organization, your partner, your family, your friends will have certain ideas and desires for where this should go and where this shouldn’t go. Be the skeptic. Be the one who asks hard questions. Don’t reject all ideas; just be the skeptic who asks the questions that guide the conversation in a new light. I mean, that is strategic foresight. That skill of foresight is walking in and saying: what if we flipped our assumptions on their head? In what ways are we prepared? In what ways are we not prepared? And how does that stretch and evolve the conversation? You know, there’s a certain arrogance in a lot of us, especially in positions of power and leadership, in business and in government, all across the board: we assume that today is going to be exactly the same as it was yesterday, and tomorrow will be the same as it was today.

Gabe:
And then we wake up and we’re told, you know, the world is upside down, and you can no longer shake hands with people and you can no longer go into the office. And we say, ah, we never could have imagined it. We never saw that coming. We were never prepared. But the reality is that we never took the time to think through and ask those difficult questions and play out, at least for a little bit, what if something different happened? What if the world started to look different? What if the assumptions that we hold today no longer hold true in the future? And so what I’ve learned is, you know, be the skeptic. Find partners, find thought leaders who are willing to embrace you as a skeptic, just like Bob and Jeremy did for me, and bring you into the conversation, not try to shut you out and silence your voice, but really invite the challenge, invite the mind-stretching conversations and the questions. That would be my biggest piece of advice, especially as this new technology continues to take hold.

Scott McCarthy:
Love it. And last but not least, Jeremy.

Jeremy:
Yeah, Scott, you asked for something that would help people feel better about AI, and first, I would just say that, you know, there’s no need to feel good about it. I think a lot of people’s worries are really valid, and the risks are real. But to answer your question, I think the main thing is that if you’re trying it out and it’s not working for you yet, that doesn’t mean you’re doing it wrong, particularly if you’re trying to do tasks that you’re accustomed to doing by hand, so to speak. If you’ve been doing a task for 10, 20, 30 years a certain kind of way, and then you use a new tool to do it, the first time you use that new tool, it’s not going to be easier or faster or better. And neither is the second, and neither is the third. But the fifth time, it will be; the sixth time, it’s going to be really fast or really good. And then from that point you can start asking the question of, well, what can I do now that I never was able to do at all? And this is hard.

Jeremy:
For leaders, there are a lot of things to focus on, a lot of things floating up to you day to day, and taking time out of your day to sit down and play around with a new software tool isn’t one of them. But you really do need to do that with this, because you need to know it hands-on, and you need to be able to use it in your work and be a part of your organization’s learning. But don’t get discouraged if at first it seems like it’s not immediately providing you massive results in the way that you might have heard it should.

Scott McCarthy:
Awesome. I appreciate it, guys. I appreciate you all taking time out of your busy schedules, all of us coming together from literally across the continent: me in Montreal, we have San Diego, we have basically everywhere. So thanks for taking the time. Bob, last question over to you, you being the boss and all: how can people find you, follow you, be part of your journey, find the book? Shameless plug.

Scott McCarthy:
Have at it, sir.

Bob Johansen:
Yeah, we’ll give you all the links to the book. You can buy it any place. It’s the third edition of Leaders Make the Future. And the Institute for the Future’s website, we can give you the link to that; it’s iftf.org. And then, Jeremy, you should give the Handshake website as well. Jeremy’s Handshake company actually does the kind of development that I’m using and talking about and writing about now.

Scott McCarthy:
Yeah, go ahead Jeremy.

Jeremy:
Oh yeah, you can find our writing at handshakefyi.substack.com, or check out our website at handshake.fyi.

Scott McCarthy:
Gabe, do you have anything, any shameless plugs, or are you just along for the ride?

Gabe:
I’m along for the ride, Scott. Thank you.

Bob Johansen:
Thank you.

Scott McCarthy:
Awesome.

Gabe:
Really appreciate it.

Scott McCarthy:
Hey, awesome. And for listeners, as always, it’s easy: these links are gonna be in the show notes for you. Just go to leaddontboss.com slash the episode number in digits. So if it’s episode 123, it’s 1-2-3 in digits. Gents, thanks again. I appreciate you all. This has been fantastic.

Bob Johansen:
Thanks for what you’re doing.

Jeremy:
Thanks, Scott.