Making Ideas Accessible | Keith Goode
How AI models can assist managers in making better decisions.
An interview with Keith Goode about using AI in HR.
AI is a hot topic right now, but many don’t fully understand the ramifications.
How do you present data sets and explainable AI to the average person?
Today we sit down with Keith Goode, the Vice President of Services at ZeroedIn Technologies, which provides HR solutions by combining people data with business data into one source of truth.
Our conversation begins by exploring how complex ideas like AI can be made usable by people who are not familiar with them. Keith shares how he starts by identifying the business issue or concern at hand and where AI could be useful in solving it. From there, his team seeks the right data to answer the questions, qualifies that information by looking at historical data, and aggregates it all to generate an AI model that can predict outcomes based on predetermined traits.
Building a data set is a crucial part of creating an accurate AI model. Keith explains how they use a template of key factors that have worked in the past, coupled with data elements that are unique and important to the client. He continues by explaining how the model can be tested and updated to make it more accurate.
If you want a better understanding of AI in terms everyone can understand, you’ll want to tune into this episode.
Three Key Takeaways
- Having the ability to create content that speaks to an audience on their level, be it introductory or expert, is the key to reaching a wider audience.
- Connecting your product or service with the consumer’s insights is how you hit the sweet spot.
- When engaging others on large scale research you need to allow them to engage with and shape the content.
If you need a strategy to bring your thought leadership to market, Thought Leadership Leverage can assist you! Contact us for more information. In addition, we can help you implement marketing, research, and sales. Let us help you so you can devote yourself to what you do best.
Transcript
Bill Sherman How do you make ideas accessible to your audience? Thought leadership is, well, by definition a complex idea, a counterintuitive idea, or even one that’s at the cutting edge. And those sorts of ideas aren’t easy to put into simple six-second soundbites for your target audience. So today I’m speaking with Keith Goode. He’s the VP of products and services at Zeroed In Technologies, a firm that focuses on workforce intelligence. In today’s conversation, we’ll explore how to make ideas approachable versus going too deep too quickly. We also talk about experimenting with different sources of data to find new insights. And finally, we talk about helping people find answers to good questions. I’m Bill Sherman, and you’re listening to Leveraging Thought Leadership. Ready? Let’s begin.
Bill Sherman Welcome to the show, Keith.
Keith Goode Hey. Well, it’s great to be here. Thanks for having me.
Bill Sherman So I want to dive in from a perspective. There’s a lot of conversation about AI, but there’s not a lot of conversation around how you explain AI, right? How do you make an idea accessible? Because there’s some tension there. You can get someone who’s technically proficient and they can bury you in stats, or you can sort of gloss over it. So how do you do it? How do you get someone familiar with a new idea like AI, which sprang on pretty much the consumer and the business user at the end of 2022? How do you explain an idea? Not too deep, not too narrow, but just right. Sort of the Goldilocks approach.
Keith Goode Right, right. And there are a lot of things out there to help in that process. But I would say, through my experience, the first step is obviously identifying a business issue, a concern, an idea that AI can help answer. So first, understanding the problem, understanding the issue, is the first step. Second is looking for data that’s going to help answer it, and that could be a very broad topic. We do a lot around using AI to help predict who might be a flight risk within an organization. As you know, flight risk can be very expensive. You hire someone, you train someone; those are the primary costs. And then there’s the opportunity cost: where do you think that person is going to go? They could go to your competitor. So we look at, and work with, organizations to identify key elements that might help us better understand where flight risk comes from, and the data that can support that. We then quantify that information. We look at historical data, aggregate it together, and generate an AI model. An AI model is unique in that it looks at every instance of a prediction, it looks at the features you establish as predictive of the outcome, and it determines, for each occurrence, which features might be most important. So each prediction is fairly unique. That’s what AI brings to the table. But we take it a step further and use what’s called explainable AI. That’s being able to reverse a given prediction back through the model to ask, well, why was Bill’s prediction different if we used the same features? Of course, the values might be different. So it analyzes the values and says, well, for Bill we found these features more important, and for Keith we found these features more important. And that helps people take that prediction and do something about it.
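Keith’s description of explainable AI, reversing one person’s prediction back through the model to see which features drove it, can be sketched with a toy model. Everything here (the feature names, weights, and baseline) is a hypothetical illustration, not ZeroedIn’s actual model:

```python
import math

# Hypothetical weights for a toy flight-risk model (not a real product model).
WEIGHTS = {"tenure_years": -0.4, "months_since_raise": 0.15, "commute_miles": 0.05}
BIAS = -1.0

def flight_risk(person):
    """Probability that a person leaves, from a simple logistic model."""
    z = BIAS + sum(WEIGHTS[f] * person[f] for f in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def explain(person, baseline):
    """Per-feature contribution: how much each feature's deviation from a
    baseline ("average employee") pushes this person's score up or down.
    The same model yields different top drivers for different people."""
    return {f: WEIGHTS[f] * (person[f] - baseline[f]) for f in WEIGHTS}

baseline = {"tenure_years": 5, "months_since_raise": 12, "commute_miles": 10}
bill = {"tenure_years": 1, "months_since_raise": 24, "commute_miles": 30}
keith = {"tenure_years": 12, "months_since_raise": 6, "commute_miles": 5}

# Same features, different values: each prediction gets its own explanation.
bill_drivers = explain(bill, baseline)    # months_since_raise is the top driver
keith_drivers = explain(keith, baseline)  # long tenure pushes risk down
```

Production systems typically use attribution methods such as SHAP values over more complex models rather than raw logistic coefficients, but the shape of the output, a per-person ranking of feature contributions, is the same.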
Bill Sherman Right. Because if you were to give a warning that says this person is likely to leave your organization, but you don’t know why, then if I’m a manager or leader I go, I don’t want to lose this person. What do I do? If it’s not explainable, it’s not actionable in a lot of ways. You can’t just walk up and say, hey, are you thinking of leaving? Don’t do that, right?
Keith Goode Right, yes. Exactly right. You get a prediction, and in a lot of systems it’s kind of a black box. You just see this value: this person is predicted to be a flight risk, say, and then you scratch your head and ask why. And in a lot of solutions, that prediction may not come from that client’s data, but may have been aggregated with different companies’ data that is very dissimilar to yours. So not knowing where that prediction is coming from, and what was used to drive it, kind of reduces its value.
Bill Sherman It does. And you mentioned the concept of black box on data. Right. So you mentioned models and I want to make this a little bit more visible. So if you and I were to come up with factors and a list of factors that promote flight risk, you know, we could sit down. You and I could come up with a list of a couple dozen, probably within an hour. Right. If we were to turn to specialists on that, the organizational psychologists of the world, they would add another layer of expertise. But when you get to AI, the number of dimensions that a model could be looking at is so much more complex, right?
Keith Goode Oh, it really is. And it reminds me of kind of a unique story. We worked with a large retail company, and we had a wide range of HR and business data for those factors, those features. And you think retail, you think high turnover, right? But that’s at the store level. Behind the store, in the corporate office, you’ve got very tenured people. In the old days, what the statisticians would do is create multiple different models for those different categories of people. But using AI, it was able to quickly determine what type of factors, what type of features, were going to be important to a prediction, based off of where the person was working and how they traversed through the organization. So it’s really exciting to see these models learn from their data to come up with a better prediction. And it saves the time of building a whole bunch of different models that then have to be maintained on a regular basis.
Bill Sherman Well, and it shifts, in some ways, the burden of the expert. Because those experts are still highly useful for the interpretation, but even more useful for asking the right questions, because that’s the part that the black box, the model, can’t do in AI.
Keith Goode Right, exactly. And we’ve taken it a step further. We’ve done the explainable AI, but we also came up with a very unique concept where, basically, we use a waterfall graph. If you’re not familiar with that: you start at a baseline point, and then certain features may have raised your prediction of leaving and some may have lowered it. So you have a bar for each one of those factors, and you can see, based off the bar width, how much each factor mattered for one prediction in determining whether that person is going to stay or leave. And then, on top of that, because we can now take that one person’s prediction and run it back through the model, we can actually do some tweaking and say, well, suppose we gave this person a pay raise, how would that impact their flight risk? Or if we gave them a promotion, or if they were demoted, or if they changed to a different store? It’s amazing to see how putting that back through the model can instantly generate an idea, or better decision-making, about working on that person’s ability to stay within the organization.
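The waterfall-and-tweak workflow Keith describes can be sketched the same way: lay out each feature’s contribution as a signed bar, then change one value and re-score the record. Again, the features and weights here are invented for illustration:

```python
import math

# Invented weights for illustration; positive contributions raise flight risk.
WEIGHTS = {"pay_percentile": -0.03, "months_since_promotion": 0.06}
BIAS = 0.5

def prob_leave(person):
    """Flight-risk probability from a toy logistic model."""
    z = BIAS + sum(w * person[f] for f, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

def waterfall(person):
    """(feature, contribution) bars, largest magnitude first, for plotting."""
    bars = [(f, w * person[f]) for f, w in WEIGHTS.items()]
    return sorted(bars, key=lambda bar: -abs(bar[1]))

def what_if(person, feature, new_value):
    """Change in risk from re-running the record with one feature tweaked."""
    tweaked = {**person, feature: new_value}
    return prob_leave(tweaked) - prob_leave(person)

person = {"pay_percentile": 30, "months_since_promotion": 20}
# What would moving this person to the 50th pay percentile do to their risk?
raise_effect = what_if(person, "pay_percentile", 50)  # negative: risk drops
```

The what-if step is just the forward pass run twice; no retraining is needed, which is why the scenarios can be explored interactively once the model is baked.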
Bill Sherman Right. Because again, if you’re a manager or a leader, the last thing you want to do is something that has either a low chance of impact or a high chance of negative impact if you’re trying to keep the person, right? You know, good intentions gone wrong.
Keith Goode Yes, exactly, exactly. So what-if modeling is something you also get once that model’s been baked and you have the capability to send a prediction back through the model.
Bill Sherman So let’s talk about sources and data. Some of this goes back to the classic correlations. You know, in a Stats 101 class, you learn that as the sale of ice cream goes up, so does the murder rate. And it’s the classic lesson that correlation is not causation. No, people don’t go on killing sprees because they have ice cream. It’s the heat; there’s an underlying factor. So as you’re building something like explainable AI, is it a set of data sources you’re already pulling from, I could imagine, plus ones on the general economy and things like that, and then you layer in the company’s data? Or how does that work?
Keith Goode Yeah, that’s a great question. Typically, we work through a template. We have a template of key factors, key features, that we’ve used in the past. But when we work with our clients, our model is extendable; it’s configurable. So we will work with them to identify different data elements and then test those by generating the model. You generate the model, then run it against test data to determine how effective it was based off those features. Maybe the model is not that effective; maybe it’s not above that 95% confidence threshold we’re looking for. So we may have to drop one feature and add a couple more. And of course, the client’s data, and how much historical data they have, can have an impact on that. We’ve worked with different clients to find very unique elements: everything from the well-known metrics like tenure, promotions, and salary, to very unique things like how far a person’s commute to work is, if they’re still coming in, or how many friends they have, based off of their engagement scores. We can look at how we could use some of those features off the engagement score to help generate that model. So there are a lot of different things we could use to fit a model for a given client, and we work with them, and they have ideas, too, that are unique to their organization.
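The generate-test-iterate loop Keith outlines (fit on history, score on held-out data, swap features if the model misses the bar) might look like this in miniature. The one-rule "model", the synthetic data, and the candidate features are all toy assumptions; only the 95% bar comes from the conversation:

```python
import random

random.seed(0)  # deterministic toy data

def make_history(n=1000):
    """Synthetic historical records: leaving mostly tracks short tenure,
    with a little label noise. shoe_size is a deliberately useless feature."""
    rows = []
    for _ in range(n):
        tenure = random.uniform(0, 10)
        noise = random.random() < 0.02
        rows.append({"tenure": tenure,
                     "shoe_size": random.uniform(5, 13),
                     "left": (tenure < 3) ^ noise})
    return rows

def fit_cut(rows, feature):
    """Toy 'training': pick the single cut on one feature that best
    separates leavers from stayers on the training rows."""
    cuts = [c / 2 for c in range(0, 27)]  # 0.0, 0.5, ..., 13.0
    return max(cuts, key=lambda c: sum((r[feature] < c) == r["left"] for r in rows))

def accuracy(rows, feature, cut):
    return sum((r[feature] < cut) == r["left"] for r in rows) / len(rows)

history = make_history()
train, test = history[:800], history[800:]  # hold back rows for testing

chosen = None
for feature in ["shoe_size", "tenure"]:  # candidate features from the template
    cut = fit_cut(train, feature)
    acc = accuracy(test, feature, cut)
    if acc >= 0.95:  # the confidence bar; a feature that misses it is dropped
        chosen = (feature, cut, acc)
        break
```

Real models use many features and cross-validation instead of a single cut, but the loop has the same shape: fit, score on held-out data, and swap features when the score misses the bar.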
Bill Sherman So I want to ask a few questions about explainable AI, and then follow up from a leadership perspective. Who are you trying to explain explainable AI to? Who’s the audience, if you will?
Keith Goode Yeah, that’s the important thing. Because, again, tech for the sake of tech is just not solving a business problem. It’s got to be a business problem. In our case, what we found, for example, at the retail organization we worked with is that we were targeting store managers and regional managers, so that they could look at an individual person and identify what could be done to reduce turnover. And in those organizations, a lot of times that also equated to potential training opportunities. So we were looking at the actual workers outside of HR, the partners of HR, if you will, who could use that information to make better decisions.
Bill Sherman Right. And so there are several layers I want to unpack. You’ve got, probably, the individual who makes the decision to bring your firm in, so you’ve got a buyer as a target audience. You’ve got the folks within HR and IT who are charged with implementation and making it work. But you’ve identified the end user, right? And that person is the one who’s going to either make this work on a very simple level or struggle, right, to create value. Yeah.
Keith Goode And it is a struggle to think that just implementing it is enough, and they’ll come. It’s not the Field of Dreams. It does take effort; it does take resources to drive behavior. This is something new, it’s unique. We’ve seen amazing technology come into the marketplace recently. However, the adoption of, and satisfaction with, the technology is just not there. If you look at some recent studies around the adoption and satisfaction of HR tech over the past year, unfortunately it’s not what it should be. So everybody needs to do better. But I think it’s also about a level of adoption, a level of changing people’s behavior to utilize this. That means making the technology easy to find, and easy for someone who’s not technically savvy to figure out. So there’s a generational gap that also has to be considered.
Bill Sherman And I’m also thinking that when you get to that end user level, that retail manager, for example, you could have someone who says, look, I spent a lot of time getting to know each of my people. I don’t need software. I don’t need artificial intelligence. I manage by walking around. I know the store, right? Why are you telling me you know better with this data? Right? Yeah.
Keith Goode Yeah. And that’s really interesting, and I think it’s great that you have those people. But think of it as just an augmentation. I’ve seen people, too, who walk around and think they have the whole picture. Then all of a sudden they see some pieces of data put together in front of them and they’ll say, oh, I didn’t know that, right? And then the light bulb goes off. It’s an additional augmentation those people can utilize now. So there is a little bit of training, a little bit of a curve, a little bit of effort to get them there. And it doesn’t happen overnight. It is not, oh, we implemented this new technology yesterday and everybody’s going to start using it tomorrow. Not quite. Yeah.
Bill Sherman And I think there it’s the difference between telling someone, hey, here’s a tool and a better idea, versus, okay, you don’t have to do this anymore, trust the system. Because until there’s some proof and you can show me, hey, this works both in predicting and in helping me prevent people from leaving, I’m going to be skeptical, right? Yeah.
Keith Goode Yeah. And think about it: you have a prediction, something was done, and now that prediction didn’t come true. So where does that trust go? That’s why we’ve always got to periodically retrain those models and always check the relevance and effectiveness of a model, to ensure that the confidence is there and that it’s not a one-time thing. We typically work with our clients and retrain models either semiannually or annually. And of course, the new data is always changing, so we’re always running the new data against the model. But retraining the model is something you should be considering on a periodic basis, depending on the type of model.
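The retraining discipline Keith mentions reduces to a simple policy check: retrain on a calendar cadence, or sooner if measured accuracy on fresh labeled data drifts below the confidence bar. The semiannual cadence and the 95% bar come from the conversation; the exact day counts are illustrative assumptions:

```python
from datetime import date, timedelta

RETRAIN_EVERY = timedelta(days=182)  # roughly semiannual cadence
MIN_ACCURACY = 0.95                  # the confidence bar discussed above

def needs_retrain(last_trained, today, recent_accuracy):
    """Retrain if the model is overdue on the calendar, or if its accuracy
    on the newest labeled data has drifted below the bar."""
    overdue = (today - last_trained) >= RETRAIN_EVERY
    drifted = recent_accuracy < MIN_ACCURACY
    return overdue or drifted

# A model trained in January, still accurate in April: leave it alone.
keep = needs_retrain(date(2024, 1, 1), date(2024, 4, 1), 0.97)   # False
# Same model in September: the semiannual clock has run out.
stale = needs_retrain(date(2024, 1, 1), date(2024, 9, 1), 0.97)  # True
```

The drift check is what catches the case Bill raises, a prediction that didn’t come true: falling accuracy on new data triggers a retrain before the calendar does.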
Bill Sherman If you’re enjoying this episode of Leveraging Thought Leadership, please make sure to subscribe. If you’d like to help spread the word about our podcast, please leave a five-star review at ratethispodcast.com/ltl and share it with your friends. We’re available on Apple Podcasts and on all major listening apps, as well as at thoughtleadershipleverage.com/podcasts.
Bill Sherman So I want to ask two questions, and I’m going to ask them together. We’ll focus on one, then turn to the other. How did you get into thought leadership, and how did your organization get into thought leadership? I want to start with you. I want to hear your journey. How did you come to be an advocate and a voice for these ideas?
Keith Goode Well, I’ve been in this space a long time. My background has been in IT, in information systems. And every day I wake up thinking, how can we add value based off of this data? There is nothing more exciting to me than taking some data that’s been sitting on the shelf for the last five years, aggregating it somehow, and making better use of it. That’s what really jazzes me, what the geek in me really enjoys. I think, however, the secret is really being able to take those unique solutions and make them effective and sustainable on a regular basis. So my background has been in information systems and HR, primarily actually in the learning space. At the company I’m with now, my colleague Chris Moore and I have been in the learning space for, I hate to say, almost 30 years now. The learning space is really interesting, too: we find that it’s data rich but information poor. So we developed ZeroedIn. We’re not a transactional system. We’re not an HR system. We’re not a learning system. We’re not a benefits administration system. But we collect the data from those different systems to bring it together and provide value and information across the HR spectrum, and then configure in business data, whether it’s finance, productivity, or sales data, in order to gain value from the HR data. The challenge has really been that HR is highly transactional; everything’s been transactional. But being able to take that, roll it up, and aggregate it into information where it’s been missing, and to satisfy that type of gap, is very rewarding.
Bill Sherman And so for. Zeroed in out in zeroed in decide. And what is it been doing from a thought leadership perspective to get these ideas out there? I mean, obviously you don’t have a budget for towers, for example, right?
Keith Goode ZeroedIn is a really unique organization. We’re privately held, but we’ve been profitable. We actually saw the dot-com era, we saw that expansion, and then the, quote, more-or-less collapse of it. So we’ve always been very fiscally conservative in our growth, which has been good and bad, because you see these other organizations kind of come and go. They get all that VC money, and all of a sudden they’re huge. We’ve really tried to build it organically: organically with our clients, organically identifying what the needs are. And I’ve been very proud of our technology group, our developers, who understand AI, who understand generative AI, who understand the technology well enough to build the platform and yet make it configurable for each of our clients’ uniqueness. That’s really what we’ve tried to do: focus on adding value, as opposed to chasing the next shiny object.
Bill Sherman Well, I think there are a couple of layers there that I want to call out. In making the technology explainable, you make it accessible to individuals. And if you’re trying to grow organically, it’s important for people in the organization to be talking about these ideas. It’s even more important for people within the organization to be able to explain those ideas to their peers, as well as to colleagues at other organizations, saying, hey, I found this new tool, you should check it out too. By making an idea approachable, explainable, and clear, that’s how you get word of mouth, and people start pulling ideas beyond you just having to talk about them again and again.
Keith Goode Yeah, yeah. And I see that as a bit of a struggle, I think, for people in today’s world.
Bill Sherman Oh, it’s not easy. It’s really hard work.
Keith Goode Right, right, right. They want that quick fix. They want the, you know, this-is-the-answer, one-click answer. And I think what we’re trying to do is provide that information in a unique way that can help them get there. But by all means, it’s still people driven. It’s still people looking at things in a different way that can drive better decision-making. There isn’t a button you click that says, here’s your decision. That’s not there, and I don’t think it should be.
Bill Sherman No, I don’t think it should be either. I mean, we’re not talking about going onto Amazon and doing a one-click buy-now sort of thing, right? We’re talking about people in the workplace, leading and responding to their needs and helping to understand patterns of behavior. And for that, you still need the human touch, the level where you go, yeah, here’s what the data is missing. I know that employee has a daughter who’s been sick, she’s been out, and her attention has been split, but it’s not because she’s losing interest in the organization. She’s got a sick child, and has had a sick child for a few months. Right? Yes, yes.
Keith Goode We work with a city manager, and our representative there is such a great guy. He’ll give us a data challenge, and we’ll go through the data. Part of the analysis process is maybe honing in on a given person in a given department, and sure enough, he knows that person. And we’re talking about an almost 10,000-person organization. We’ll find an anomaly in something and start talking about it. It’s been great working with them. We’ve done things like, for example, in municipalities they do a lot of budgeting, and the budgeting is typically done in Excel spreadsheets outside of the HR system. Then, of course, you have the HR transactional system. So we’ve been able to look at the budgeted positions and correlate them to actual positions. And of course, people change every month; the budget is a budget for the year, but people shift all the time. So we came up with a metric we call a turnover savings metric, which I’m sure has been out there for a long time, but it’s probably more valuable in municipalities than in private organizations. We’ve looked at the churn inside an organization to say, hey, you’ve got a gap: this position was supposed to be filled, and for whatever reason it wasn’t. That could have been due to movement, maybe a person going to another department or another division, or to turnover, whatever it might be. So there are a lot of factors to consider in determining a turnover metric. But at the end of the day, here’s the amount of budgeted money that perhaps wasn’t utilized. 
And he’s looking at ways to put it back into service bonuses or merit bonuses, to make people more engaged in the workforce and rewarded for what they’re doing, based off of something that was budgeted. To me, that is really exciting, and it’s changing the way people feel about working there.
Bill Sherman That’s a very cool point, right? How do you reapply those dollars more effectively once you have those insights? Yeah. So as we begin to wrap up here, Keith, I want to ask you a question. I want you to think back to yourself earlier in your career, as you started to get into the world of thought leadership. What advice would you give yourself? What do you wish you had known?
Keith Goode I would say: look for the business problems. Look for the problems. Don’t take a solution and look for a problem; start with a problem and look for a solution. That is what I wish I had known in the past. Where I am today is driving value based off of solving problems. And I’d like to say that technology has that in mind, but unfortunately I don’t think it does. I think people are taking technology and trying to find a home for it, and sometimes those homes don’t match up. So I would say, look for the business problems, and then for the technology to help solve those problems. Does that resonate with you, Bill?
Bill Sherman Yeah, it does. Because at the end of the day, in organizational thought leadership, it’s an idea that creates impact for your organization, for your end users, for your employees. If you’ve got an idea but you can’t apply it to the business, if you can’t make sense of it, well, it may be an interesting idea, but it’s unlikely to be an applied idea. It’s more likely to be an academic idea that sits on the shelf, where people look and go, yeah, that’s a good thing to know, but what do we do with it? You’ve got to force that last mile, right, from idea to action. That’s the language of business.
Keith Goode Yes. And something I was thinking about, too: if you’re trying to solve a problem and it’s not being solved by, say, the tech or the idea, don’t continue to spin that wheel. Right, right. It’s a hard thing to do. You want something to work. You want it to work, you want it to work. And knowing when you need to find a different way, that’s sometimes a challenge.
Bill Sherman There’s a phrase in the teaching of writing: kill your darlings. The ideas, the turns of phrase and language that you have as you’re doing creative writing, maybe a short story or a novel, you have to be willing to let go to create a better reading experience for your reader. And I think what you’re talking about is very much a kill-your-darlings experience. You’ve got a pet idea, something you love; it’s not working; you can’t make it work. Let it go. Right? Not all ideas are ones you have to bring across the page.
Keith Goode That’s right, that’s right. You can always use it in another story, right?
Bill Sherman Yeah, exactly. So I want to thank you for joining me today. This has been a fantastic conversation.
Keith Goode Thank you. And I hope your audience is finding it as valuable as I have. If they’d like to continue the conversation, I’d be more than happy to do so. I can be reached through ZeroedIn.com or my LinkedIn account. And again, I’d always be happy to continue the conversation.
Bill Sherman If you’re interested in organizational thought leadership, then I invite you to subscribe to the OrgTL newsletter. Each month, we talk about the people who create, curate, and deploy thought leadership on behalf of their organizations. Go to the website, OrgTL.com, and choose ‘join our newsletter’. I’ll leave a link to the website, as well as my LinkedIn profile, in the show notes. Thanks for listening, and I look forward to hearing what you thought of the show.