
Human First, AI Forward: Navigating the Future of Thought Leadership | Jen Cohen, Stephanie Grayson, Amelia Ross, Spencer Ante


Where Human Ingenuity Meets Machine Intelligence

AI is transforming thought leadership—but not by replacing humans. In this episode, four experts explore how AI can amplify insight, speed up strategy, and reshape content—while still requiring human judgment, empathy, and creativity.

What happens when AI meets the human mind in the world of thought leadership?

In this special compilation episode of Leveraging Thought Leadership, host Bill Sherman brings together four voices at the forefront of AI and innovation: Jen Cohen, Stephanie Grayson, Amelia Ross, and Spencer Ante. Each offers a powerful, firsthand take on how AI is changing the way we lead, communicate, and make decisions.

Jen Cohen explores how AI can amplify—rather than replace—human potential. She’s seen how automation can unlock joy and confidence, from safe driving to elder care. For Cohen, AI is about purpose, empathy, and enabling people to thrive through their “ikigai.”

Stephanie Grayson shares how AI is becoming an essential research partner in the world of B2B thought leadership. But she also flags real risks—from bias to hallucinations—and warns that without humans in the loop, AI can damage brand trust faster than it can build it.

Amelia Ross breaks down how natural language processing (NLP) can make sense of real-time chaos, like the war in Ukraine. She reveals how AI-driven insights helped governments and companies cut through disinformation and find clarity in crisis—demonstrating the societal power of smart content strategy.

And Spencer Ante takes us to the edge of the AI frontier, where generative tools like ChatGPT are reshaping not just how we work, but how we think. A former investigative journalist, he calls on business leaders to collaborate with AI—not fear it—and to focus on managing truth, training systems, and preserving human creativity and empathy.

From content to ethics, strategy to scalability, this episode is a must-listen for anyone leading in a world shaped by machines—and guided by human insight.

Four Key Takeaways

AI should be used to amplify human capability, not replace it. Whether in automated driving or elder care, AI can remove risk and enhance joy—while preserving purpose and human involvement.

Thought leadership must remain human-led and carefully validated. AI is a powerful research assistant, but without human oversight, it risks spreading misinformation and damaging trust.

AI-driven NLP can turn unstructured real-time data into actionable insight, helping organizations cut through disinformation and respond quickly to fast-changing global events.

AI won’t replace human creativity—it will augment intelligence. Leaders must learn to collaborate with AI tools, supervise them, and guide them with empathy and strategy.


Transcript

Bill Sherman Welcome to Leveraging Thought Leadership. I’m your host, Bill Sherman. In today’s special compilation episode, we dive into one of the most transformative forces of our time, artificial intelligence. Specifically, how it intersects with thought leadership, business strategy, and innovation. You’ll hear from four expert voices we’ve recorded over the past few years: Jen Cohen, Stephanie Grayson, Amelia Ross, and Spencer Ante. Each of them will be identified by the role they served in when we interviewed them, and each brings a unique perspective on how AI and large language models are being used, what benefits they offer, and where caution is still needed. We begin with Jen Cohen, who was VP of Core Engineering at the Toyota Research Institute when we spoke. Jen believes the future of AI isn’t about replacing humans, it’s about amplifying us. You mentioned something that I want to explore a little bit deeper: you were talking about the concept of human amplification, and helping us do more through automation. And you’ve mentioned a term, ikigai. What is ikigai? And how does it fit into some of the work the Institute is doing? Because I think it’s an interesting concept on its own from a thought leadership perspective.

Jen Cohen I love the concept. Studies of ikigai teach us that we feel most fulfilled when our lives incorporate work that we love and that helps society. One of the things we think about is how we enable people to achieve their ikigai. A lot of that thinking relates to automation with a human touch. Human amplification, which we talked about; let me give you an example. I think it was two years ago, we had a slalom track set up: a track with lots of turns and curves, laid out with cones. And they had a lot of us drivers get in the car and said, go through the cones, get up to 30 miles an hour, and try not to hit any. I didn’t get through; I only got to 25 miles an hour and I still hit cones.

Bill Sherman Cones. Yeah.

Jen Cohen Then they turned on the automated driving, and it was an assist. It didn’t drive the whole car. I was still driving, but I could go 30 miles an hour all the way down the track, and it helped me turn. It assisted me; it amplified my skills. I didn’t hit one cone. And one of the things our CEO talks about is the joy of driving. I’ve never had so much fun, because I had no risk. I wasn’t going to hurt anybody or the car, but I got to go faster through those cones and not hit a single one. And I think that’s human amplification, not human replacement.

Bill Sherman And that leads to the question: if you take out the risk, or if you take out the danger, whether that’s aging in place or driving, what new opportunities open up when we reduce the risk of negative consequences? You talked about joy, but I think there are a number of human emotions, and you talked about empathy in the work and the projects. There’s a very human part tied not only to the research that Toyota Research Institute is doing, but also to the problems it’s trying to solve.

Jen Cohen Absolutely. When I think about the elders that are aging in place, and look at how much care our researchers take in interviewing people and understanding problems, and understanding that ikigai is important in this, people don’t want just to have a robot running around the house doing everything. People want to be involved. So this idea of having the robot help make the meal, but still letting the human be a participant in that, I think is such a key part of how we can leverage that empathy: understanding that people need purpose and it’s not just about replacing them. And so, yeah, I definitely think that’s in line with the way that we think. And if it’s automated driving, you know, every bit of safety we have out there doesn’t just help the person in that car, it helps all the people on the road with that car. So I love that that altruistic part is key to TRI. That’s one of the reasons I work there.

Bill Sherman For anyone I see working on thought leadership and research development, that touchstone of why they do it becomes critically important, because it’s not just knowledge for knowledge’s sake; it’s about solving problems that people encounter, problems that have a material impact in the world. Next, we turn to Stephanie Grayson. She’s an experienced B2B content marketer and a seasoned thought leadership practitioner. She brings a unique perspective on how AI intersects with content and thought leadership, and where its strengths and limits lie.

Stephanie Grayson One of the most important things about thought leadership is that it gives you pause, right? Great thought leadership should stop you, really stop you in your tracks for a moment, and make you say, now that is an interesting idea, or that is an interesting thought. So, circling back to the question you asked about where AI will be helpful: of course, to your point, I’m speaking about AI where we are today; I don’t have a crystal ball. Although it is interesting, because I too am a consumer of thought leadership, constantly reading up on where people who are very deep into data science and the AI space think we might go, and soon, with the further development and sophistication of AI. But speaking as we are today, in 2023, I think it is for this reason: thought leadership is a conversation starter where we’re thinking of something novel. It’s not just a reorganization and repetition of things that are already out there, right? If it’s a really fresh idea, there’s no training data the generative AI will have access to until you feed it that idea.

Bill Sherman Right, right.

Stephanie Grayson But these ideas that are coming out are originating with humans. And yes, of course, if you have great ideas and you give them as training data to generative AI, it can learn, and it can learn to reason and do things like that in the future. Even now, we’re starting to see elements of that. Even though ChatGPT is the media darling, we are also starting to see a lot of excitement around something called AI agents. So you have things like AutoGPT, where instead of being task-focused, it’s more of a suite of things that can prompt itself, right? With ChatGPT, you might have a human prompting the AI: hey, can you suggest a listicle of five things about a certain topic, et cetera. Whereas if you’re working with an AI agent, instead of you prompting it for every little thing, you give it, let’s say, a goal. The agent will then break that goal down and prompt itself. It still has to go through the process, but you see it, quote unquote, thinking about each piece, assigning itself each prompt, and then executing on each prompt. In terms of the future, that’s going to be the interesting thing to watch. It’s also a concern, because things like bias can creep in, and will, right? Because it’s learning from humans, and humans are not perfect; we come with our own set of biases. AI hallucinations are also an issue, where an AI can very confidently, very confidently in fact, if you read it, it sounds sure of itself, hallucinate things that either haven’t happened, right.
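The goal-decomposition loop Grayson describes can be sketched in a few lines. This is a minimal illustration, not any real agent framework’s API: `call_model` is a hypothetical stand-in that returns canned text in place of an actual LLM call, and the function names are invented for the example.

```python
# Sketch of an agent loop: instead of a human prompting the model for each
# step, the agent takes one goal, asks the model to break it into subtasks,
# then prompts itself once per subtask. `call_model` is a placeholder for a
# real LLM API call (hypothetical; returns canned text for this sketch).

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; returns a fixed plan or an echo result."""
    if prompt.startswith("PLAN:"):
        return "1. Research topic\n2. Draft outline\n3. Write summary"
    return f"[result for: {prompt}]"

def run_agent(goal: str) -> list[str]:
    # Step 1: the agent prompts the model to decompose the goal into subtasks.
    plan = call_model(f"PLAN: break this goal into numbered subtasks: {goal}")
    subtasks = [line.split(". ", 1)[1] for line in plan.splitlines()]
    # Step 2: the agent prompts *itself* for each subtask and collects results.
    return [call_model(f"EXECUTE: {task}") for task in subtasks]

results = run_agent("Write a listicle of five AI trends")
for r in results:
    print(r)
```

The key contrast with plain chat use is visible in the two steps: the human supplies only the goal, and every subsequent prompt is generated by the loop itself.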

Bill Sherman That lawyer who tried to use it in a court filing. That’s exactly the example I was thinking of. So go ahead and tell that story for us.

Stephanie Grayson Or don’t exist, right. A lawyer erroneously tried to defend a client by citing case precedents the AI offered to him, precedents that really didn’t exist, but the AI generated them very confidently. That’s something.

Bill Sherman And then the judge said, hey, your opposing counsel can’t find these cases, can you provide the transcripts? They went back to the AI, the AI generated the documents and made them look real. So they went two layers deep down a rabbit hole of hallucination, right? And it wound up being some real-world trouble, having to appear before the judge to explain why they thought this was a good idea. Right? Yeah.

Stephanie Grayson And I think it really speaks to why human involvement will continue to be, now and in the future, a very important thing: humans working with AI, right, augmenting with AI, but not replacing themselves with AI completely, because who is minding the store here, right? I know personally, when I’ve used several different generative AI tools, I have had, for example, links where I go to click them and I’m getting error pages, or they don’t exist. So you absolutely must have a human in the loop, because thought leadership done well can mean amazing things for a business. Thought leadership done poorly, or with inaccuracies, can really ruin a reputation in a big way, if people feel they can’t trust what’s coming from the company in terms of its thought leadership.

Bill Sherman Well, and from a thought leadership creation perspective, I think one of the ways you can use AI now, at least some of the tools, is as a research assistant. If I were to put it in an academic setting, I would say: go give me an annotated bibliography of what people are saying out there, who the voices are, and what their points of view are. I think you can task it with that to get a good snapshot of an ecosystem of conversation. But you then have to validate and ask: is this real or a hallucination? Who do I agree with? Where are they going? What is their direction? So there are limitations. It’s sort of like having an intern or a research assistant at this point.

Stephanie Grayson You know, look, even if you were using a research assistant, there are reasons why we need more than one person minding the data and minding the research. Whenever an important piece of thought leadership is being put out there by a company, I would hope that more than one person looked at it, and I would absolutely hope that more than one person weighed in on it. So I think in general, AI or human, if you’re working on an important piece of thought leadership, a few different sets of eyes should be looking at it.

Bill Sherman If you’re enjoying this episode of Leveraging Thought Leadership, please make sure to subscribe. If you’d like to help spread the word about our podcast, please leave a five-star review at ratethispodcast.com/LTL and share it with your friends. We’re available on Apple podcasts and on all major listening apps, as well as thoughtleadershipleverage.com/podcast.

Bill Sherman Next, we talk with Amelia Ross, Director of Content Strategy at Primer AI. Amelia specializes in making complex AI concepts understandable and actionable. You talk a little bit about being controversial and creating a conversation around something that may not be on people’s radar. And one of the things I know your organization has recently captured attention with, as we’re recording here in May of 2022, is the war in Ukraine. Can you use that as an example of how it attracted attention and got people thinking about AI in a new way?

Amelia Ross Right. My company, Primer AI, does artificial intelligence and natural language processing. Natural language processing is essentially taking unstructured data and being able to run AI models against it. And what is unstructured data? Unstructured data can be anything that’s written, anything that is a video, et cetera. So you can imagine that in really rapidly evolving events, like the Russian invasion of Ukraine, so much information is coming in in non-written form. What we did in terms of our thought leadership is we took one of our products in particular, called Command, and we basically fed it information about what was going on in real time in Ukraine. We then created thought leadership content around that: why is real-time situational awareness so key, not only for government organizations who are involved, but for corporations who might have employees or facilities in areas of risk? It turned out to be a really good way for us to engage a much broader audience beyond what I would say our initial cohort is, and in fact to have many reporters coming in and asking to find out, through our services, what was really happening, all too quickly.
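The shape of the pipeline Ross describes, unstructured text in, structured and queryable records out, can be sketched with a toy example. Production systems like the one she mentions use trained NLP models; here, simple keyword matching stands in for the model, and the topic names and sample reports are invented for illustration.

```python
# Toy sketch: turn unstructured incoming text into structured, queryable
# records. Keyword matching stands in for a trained NLP model; the topics
# and reports below are hypothetical examples, not real product output.

from collections import Counter

TOPIC_KEYWORDS = {
    "infrastructure": {"bridge", "railway", "power"},
    "humanitarian": {"refugees", "aid", "evacuation"},
}

def tag_report(text: str) -> dict:
    """Attach topic labels to one piece of free text."""
    tokens = set(text.lower().split())
    topics = [t for t, kws in TOPIC_KEYWORDS.items() if tokens & kws]
    return {"text": text, "topics": topics}

reports = [
    "Aid convoys and evacuation routes opened near the border",
    "Power grid and railway lines damaged overnight",
]
tagged = [tag_report(r) for r in reports]

# Once tagged, the stream becomes structured data you can aggregate and
# query for situational awareness, e.g. counting mentions per topic.
summary = Counter(t for rec in tagged for t in rec["topics"])
print(summary)
```

The point of the sketch is the transformation itself: once free text carries labels, questions like "what topics spiked in the last hour" become ordinary queries over structured records.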

Bill Sherman And that ability to process real-time, unwritten information and make sense of it speaks to what you say about reaching different audiences. In some ways, you gave a concrete example of something that is now possible that many people wouldn’t have thought possible. Is that correct?

Amelia Ross Yes, I would think people who were in government agencies would know this is possible, and I think people on the cutting edge of natural language processing would know this is possible. But many people, everyday citizens, would not know the extent to which this is possible, and possible in a way that creates safety for citizens and safety for employees. So I think this was a very good theme for us to communicate broadly. It helps communicate the possibilities of NLP, or natural language processing, in a way that is maybe less fearful. And it’s also very engaging, because one of the things that I think we all face as consumers of information is the amount of disinformation that’s in the world. We really use some of our technologies to surface what could possibly be disinformation. So I think we touched upon a number of themes that many people in the world are considering when they take a look at artificial intelligence and the state of the world today.

Bill Sherman And with that, as you said, those in government, those working in AI and natural language processing, are more familiar and more comfortable with this. But if you’re leading an organization, if you are running a business asking how this will impact me and what we’ve been doing, let alone if you are a citizen in a society asking how information flows, this touches on some very broad themes. It goes beyond the economic buyer and comes back to that question of influencing conversations within society.

Amelia Ross Yes, and I think you’ll see Primer and you’ll see other AI vendors talking about the democratization of artificial intelligence, where it really is something that can be pervasive in society for a societal good.

Bill Sherman And now, to wrap up, we turn to Spencer Ante. Most recently, Spencer was the head of insights at Meta, and he has a background in business journalism and investigative reporting. He’s been studying how generative AI is reshaping the business landscape. So we’ve talked about some of the past and some of the present. I want to push into the future. You and I have had conversations around collaborative journalism and AI, right? Given what we’ve seen over the last 12 months, particularly on using AI to generate content, it’s both fascinating, and then there’s the question of where it sits within the world of thought leadership. So let me ask you that question. What are you feeling? What are you seeing? And how’s it going to change?

Spencer Ante Yeah, this is the, this is the big question that a lot of people are asking right now. And you mentioned the term club journalism. That’s a term like I came up with when I was at Edelman and the whole idea was like, make the journalist your friend instead of your foe, right? Cause a lot of companies have an adversarial relationship with the media, especially the bigger ones. And so it was like, make the journalists your friend, instead of your fo by like hiring the journalists, getting them to tell your story. It’s the same thing. I think with AI make AI, your friend. Not your fo now. AI has been around for a long time. It seems to be reaching a new inflection point with chat GPT and Dolly and all these like gener, so-called generative AI tools and services. And, you know, if you’ve used them, you could see that like it’s, they’re able to do more than they’ve ever been able to, like I just, they are far more than a chat bot, right? Yeah. I went in this morning and I showed my son. I’m like, what is the main theme of animal farm? Okay. I typed that in there 30 seconds later, it starts spending out this like. Very like expertly composed paragraph. Okay. That was pretty good. But listen, the whole, the whole fear of AI is that it’s going to replace humans. Now I do think there will be certain situations where AI does actually take over like some lower level jobs of individuals that, um, in particular that where there’s like rote or very like commoditized work, but on the whole. It’s more an issue of society figuring out. How to collaborate and, and literally supervise AI. That’s going to be the next 10 to 20 years, right? Because if you’ve ever studied AI, you know that all AI needs to be trained. Okay. Well, who does the training humans lead the training? Humans tell the AI what to, uh, what to learn, what to ingest, what information to absorb. And so if you, you, it’s the old computer saying garbage in garbage out. 
If you put garbage into, into chat GPT or other AI, It’s going to produce garbage and, you know, there’s a lot of instances of where you put in questions to these, these tools and they come back with like essays that have really key factual errors and accuracies.

Bill Sherman And that’s exactly something I wanted to talk to you about, because you mentioned the concept of being an advocate for truth earlier, right? If your AI and its algorithm are working from information which is not accurate, which is not valid, it can argue incredibly persuasively for something which is flat-out wrong, right? It can create the best essay in the world, or the best campaign, but start from flawed premises that every human would look at and go: that’s not right.

Spencer Ante Yeah. And that’s exactly why I’m not scared of AI, to be honest with you, for the most part, because I know, having worked with it and studied it, that it’s going to require human oversight and supervision for a while now. Who knows, 30, 40, 50 years down the road, these systems do get smarter; that is one of the remarkable things about AI, these neural networks and machine learning systems do get smarter over time. But that could take a long time, right? There was one example, I think somebody wrote an article where you put in, who created the iPod, and it says John Sculley created the iPod at Apple. Anyone who knows Apple knows that’s not true; it came from the Steve Jobs era. So, when I was at Bloomberg, one of the dirty little secrets of working at Bloomberg was that the reason they were so fast is that 60 to 70 percent of an article was regurgitated information from older articles. They basically copied and pasted; in an earnings story, you would take a bunch of stuff from another story.

Bill Sherman Well, you knew what the template was going to be on an EARNIC story to be galled.

Spencer Ante Yeah, yeah. That’s a form of automation. So this has been going on for a while; it’s just reaching a new level of sophistication. But still, human beings need to manage it. And so I think a lot of jobs are going to evolve. If you look at the job description of something going forward, it’s going to say: working with AI systems to manage them, and so on. You’re going to see a lot more of that.

Bill Sherman It’s going to be more similar to: do you have basic word processing and spreadsheet skills, right? Can you manage those? Can you formulate a query? Can you create a data set that’s going to allow you to answer the questions you want to answer? And so, the ability to curate knowledge. I think, even if we go further back, I think of my father, who spent most of his career as a professional librarian. He said: you don’t have to know all the answers, you just need to know how to find the right answers and which sources to trust. And I think that’s the same approach we can use for AI, right? One, what do we feed the system so that it has valid information? And two, how do we evaluate it?

Spencer Ante Yeah. And, you know, there are obviously technical advantages to these systems that human beings can’t replicate. Healthcare is a really interesting example. I think AI is going to play an increasingly important role in healthcare, because if you’re a doctor, you need to stay on top of all the research in your field, and that’s not easy to do when you’re serving patients all the time. This was the promise of Watson, which never really came to fruition, but you could have a medical assistant that could quickly tell you what the state of the art of the research is on a particular issue. You could augment human intelligence. And I think that’s what AI is going to do: it’s going to augment human intelligence, but not replace it. The idea of empathy is a really interesting thing. Machines don’t really have empathy. Do they have creativity? They’re getting more creative, because they’re absorbing all this information and coming up with more creative responses, but I think the realm of true creativity and original ideas will still belong to humanity.

Bill Sherman Whether you’re designing robots, crafting content strategies, or building enterprise software, one thing is clear. AI is changing the landscape of business, leadership, and thought leadership. But with change comes responsibility. And as our guests have shown, thought leadership isn’t just about evangelizing technology, it’s about guiding it with empathy, strategy, and ethics.

Bill Sherman Okay. You’ve made it to the end of the episode, and that means you’re probably someone deeply interested in thought leadership. Want to learn even more? Here are three recommendations. First, check out the back catalog of our podcast episodes. There are a lot of great conversations with people at the top of their game, and thought leadership, as well as just starting out. Second, subscribe to our newsletter that talks about the business of thought leadership. And finally, feel free to reach out to me. My day job is helping people with big insights. Take them to scale through the practice of thought leadership. Maybe you’re looking for strategy, or maybe you want to polish up your ideas or even create new products and offerings. I’d love to chat with you. Thanks for listening.

Bill Sherman works with thought leaders to launch big ideas within well-known brands. He is the COO of Thought Leadership Leverage.
