
Creating Frameworks and Models from Expert Insights | Dr. A.J. Marsden


The process of creating a model from initial theory to validation.

An interview with Dr. A.J. Marsden about using validated models to confirm your thought leadership insights.

What is a model? How can you make one? How do you know if it is useful? And what is a validated model?

Statistician George E. P. Box once said, “All models are wrong, but some of them are useful.”

Models give us ways to depict how our insights relate to one another and create structure that can be supported by statistical data – validating and confirming your ideas. Today, we take a deep dive into the process that turns insights into validated models. We explore the topic with Dr. A.J. Marsden, Associate Professor in Beacon College’s Psychology and Human Services Department; she’s also Thought Leadership Leverage’s resident Organizational Psychologist.

A.J. clearly defines models, and describes where they come from and how they function. We learn the basic components of a model, and why models need to be clean, simple, and free from anything that isn’t essential.

A.J. goes on to explain why a good model needs structure and well-defined dimensions, and breaks down how insights interact with each other on a fundamental level. She also explains why you need to define observable behaviors in order to measure those dimensions, providing the data needed to demonstrate the model’s validity and reliability.

Creating a validated model means finding any weak spots in your content and correcting them. A.J. describes the process and goes into the statistical techniques used to establish predictive validity. From there, a model can be written up in a technical report that provides the numerical evidence to back up the insight and demonstrate the model’s validity.

If you are a stats nerd, data curious, or just struggling to identify what makes your insights “tick,” this is the perfect episode for you.

Three Key Takeaways:
  • When gathering data, it is important to use participants who know and understand the concepts being used in the model.
  • Part of building a good model is understanding how the dimensions interact, and how the ideas support one another.
  • While you can create a model on your own, it can only be confirmed by a statistics professional who can develop a technical report. Without that, a model isn’t validated.

If you need a strategy to bring your thought leadership to market, Thought Leadership Leverage can assist you! Contact us for more information. In addition, we can help you implement marketing, research, and sales. Let us help you so you can devote yourself to what you do best.


 


Transcript

Bill Sherman Everyone has opinions, but experts have insights. And some of those insights become thought leadership. Expert insights usually start with hunches and gut instinct based on pattern recognition and reflection. Today, we’re talking about organizing your insights into frameworks and models and then how to validate them. That means collecting data and running the numbers, often through advanced statistics. So I’ve invited Dr. A.J. Marsden, who serves as the organizational psychologist here at Thought Leadership Leverage. She is also an associate professor in the Department of Psychology and Human Services at Beacon College. In this episode, we’ll be talking about how to organize your insights and put them to the test. I’m Bill Sherman. And you’re listening to Leveraging Thought Leadership. Ready? Let’s begin. Welcome to the show, A.J.

A.J. Marsden Well, thank you so much for having me, Bill. I appreciate it.

Bill Sherman So one of my favorite quotes comes from statistician George E. P. Box, and he said, “All models are wrong, but some of them are useful.” So my question for you as a statistician is: what’s a model?

A.J. Marsden That is a great question and one that I get asked quite often. What is a model? Where do they come from? How do they work? Right. A lot of times models start in our head, right? We read something that we find interesting and we relate it back to our life and we say, hey, I remember when that happened in this other situation and this happened, and we start to make these connections, and then we start to see more connections. We read more information, and then slowly over time, we start to build almost a theory or a hypothesis in our mind of how these things work together. And then once we start to kind of dig into it, we can start building a structure of how these things maybe correlate with one another, how they relate with one another, how they don’t relate with one another. And then from there we can actually start collecting data and putting down ideas, and then we can move from “what is a model” to “here’s something that’s well thought out and structured; let’s test it, let’s get some numbers and data,” and then we can move from “here’s a model” to “hey, here’s a validated model.” Right? So it’s basically just taking ideas and then trying to find some research or experiences to back that up and finding connections. So that’s how we start.

Bill Sherman So one of the things that you point to and I think is worth underlining, especially in organizations, your experts often have mental models in their heads, which they’re not even aware of.

A.J. Marsden Yes, absolutely. And it’s my job and the job of those of us that work at Cielo to get those thought leaders to take those ideas that are in their head and start to make sense of them. Right. Put them down on paper and let’s see what this looks like visually. Try to explain to us what’s going on in your head. And then once we start to be able to dig deep into their thoughts, then we, as the researchers, can start to get a better understanding of the words that they use, the terminology, the jargon, and where they’re coming from. So we can start putting definitions together and things like that.

Bill Sherman Well, and part of that expert mental model is making instinctive or experience-based decisions about what’s relevant to this issue and what’s noise.

A.J. Marsden Yes, that is really, really difficult, because a model can get out of hand so fast, right? Because you’re like, oh, this is important and this is important and we need to include this and we need to include this. And that’s where we oftentimes have to take a step back with our thought leaders and say, okay, all right, what’s essential? What’s really, really important? What could you do without, right? What’s not so important? Maybe you think it’s important right now, but let’s sit down, talk about it, define it, and then we can determine if this is really something that needs to be in there or if this is just, like you said, kind of background noise that’s not that important.

Bill Sherman And over time, you can develop that sort of internal expertise if you’re the practitioner, if you’re the expert. But there’s a process of bringing the framework to life. And it may be as simple as you start by drawing a two-by-two diagram for the first time, or you’re standing in front of a meeting and you say, okay, based on this idea, there are four things that are important. All of a sudden, that’s the birth of the model right there.

A.J. Marsden Exactly. So they kind of start, like you said, with it kind of mapped out in their head, but they haven’t really thought about it that much. But once we can get them to put it down on paper, oftentimes that’s when we really start narrowing down on this is what’s important, this is what needs to be included in this model, and this other stuff is not quite as important.

Bill Sherman So let’s turn to the second half of that quote from George E. P. Box, where he said, but some of them are useful, right, in terms of models. So what makes a good model? Why are some models better than others? And I know that’s a loaded question that we could spend the entire rest of this conversation and probably four other episodes on. But what makes a good model, at least to start us off?

A.J. Marsden To start us off? I mean, you really have to have that good idea, right? That well-thought-out good idea where it’s structured, where you find kind of the rough edges. We talk about this a lot, where we want to have clear definitions, clearly defined lines between this point that’s really important and this point over here that’s really important. What makes those points distinct? So we start there, and then once we get it down on paper, we have to look at the model from an observable point of view. I know that sounds kind of weird, but we have to look at the information that’s in that model and say, okay, can we actually observe the things that are in this model? If I talk about good communication and I’m developing a model around good communication that decreases conflict, one of the things in there is going to be interpersonal skills, right? What exactly is that? What does that mean? I have to have a good definition of interpersonal skills. And then from that good definition of interpersonal skills, I have to have behaviors that somebody else could see. So I can’t say “knows how to communicate really well.” Right. Well, what does that look like? If I were to observe another person, what does that look like? So instead of saying “knows how to communicate,” it could say “asks other people questions.” Right. Because that’s a behavior that I could actually see. So we need to make sure that the model includes observable stuff, what we call observable behaviors. If that’s not in the model, then that’s a red flag, right? Because we need to be able to see those things in the model. So from there we can say, all right, now we have these observable behaviors that measure these distinct factors or dimensions of this particular idea or construct, and then we can start gathering some data. And that’s where it’s really important: we need to make sure that the participants we use are not just, you know, random anybodies, but actually people who know the content, know the person, and understand these ideas. And then they can give us their feedback, their information. We call this process a Q sort, where they sort the different behaviors and add to the definitions and make sure that it makes sense to them. So it’s kind of our first little nugget of information, of data.
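
To make that Q-sort step concrete, here is a minimal sketch in Python of tallying how consistently raters assign behavior statements to hypothesized dimensions. The behaviors, dimension names, data, and the 75% agreement cutoff are all illustrative assumptions, not details from the episode.

```python
# A minimal sketch of summarizing Q-sort results, assuming each rater has
# assigned every behavior statement to one hypothesized dimension.
# All names and values below are hypothetical.
import pandas as pd

# Each row: one rater's placement of one behavior statement.
qsort = pd.DataFrame({
    "behavior": ["asks questions", "asks questions", "asks questions",
                 "summarizes others", "summarizes others", "summarizes others"],
    "dimension": ["interpersonal", "interpersonal", "listening",
                  "listening", "listening", "listening"],
})

# Share of raters agreeing on the most common dimension for each behavior.
agreement = (
    qsort.groupby("behavior")["dimension"]
    .agg(lambda s: s.value_counts(normalize=True).iloc[0])
    .rename("top_dimension_agreement")
)

# Behaviors with low agreement are candidates for clearer definitions.
print(agreement[agreement < 0.75])
```

Low-agreement items are the ones worth revisiting with the expert before any formal data collection.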

Bill Sherman So you referred to the concept of dimension. So I want to unpack that a little bit more. What is a dimension and why should someone who’s practicing thought leadership care about it?

A.J. Marsden Good question. I like to use the concept of intelligence. Way back when, we used to think there was this one overarching idea of intelligence, right? Spearman’s g. We know today that intelligence really is more like there are different types of intelligence, right? So we have this overarching idea of intelligence, but then from that we have clusters or different types of intelligence. We have creative intelligence, interpersonal intelligence, we have logical intelligence. Each of those you can think of as almost a dimension of overarching intelligence. All of those different types of intelligence are clustered together into dimensions, and that covers the entire overarching concept of intelligence.

Bill Sherman So they’re the Lego bricks, if you will, that go and build up the model.

A.J. Marsden Exactly. Yes. You could think of it as Legos. That’s a great way to talk about it. Basically, each block in the model creates the structure of the Lego house, right? Each block is important. Each pillar creates the foundation of the whole overarching idea.

Bill Sherman And to your point, whether it’s around intelligence or we go to something like Maslow’s hierarchy of needs, which probably many people know. If you were to start treating Maslow’s pyramid as a set of Legos, right, and shifting them and saying, well, really, food, clothing, and shelter belong higher up on the pyramid than that, then you start having some problems, because you’re asking what fits where and what are the relationships between these layers in the Maslow pyramid.

A.J. Marsden Exactly. Exactly. And that’s part of building a good model, right? Understanding how things interact with one another. So I have this one idea: worker satisfaction. What all factors into worker satisfaction? Is it how long they’ve been there? Is it their relationships with their coworkers? Is it the relationship with their boss? All of those things feed into that one idea. So when I’m creating the model of what impacts worker satisfaction, it has to include all of those other concepts and the relationships with those other concepts. Maybe it’s a positive relationship with how satisfied I am with my coworkers, but a negative relationship with how satisfied I am with my boss, right? So those are the types of things you have to understand, and you have to include that in the model when you’re thinking about it.

Bill Sherman So you’ve talked about the term validation of a model. What is validity?

A.J. Marsden That’s a great question, and I get asked this all the time. I like to talk about reliability and validity together because oftentimes they get confused. People think reliability and validity are the same thing and that you can use those words interchangeably, but that’s not the case at all. Reliability is all about consistency. Am I consistently measuring something the same way over and over and over again? And you would want that with certain things like personality traits, right? You expect extroversion to be consistently scored the same way over and over again. I’m an extroverted person, so I should get about the same score every single time I take it, right? That means it’s a reliable assessment. It’s consistent. That’s good. Validity is a little bit different. It’s not just, am I measuring something consistently? It’s, am I measuring what I think I’m measuring? That’s what validity is all about. Am I measuring what I think I’m measuring? Am I measuring this concept accurately? Here’s my definition; am I measuring it the way I’m defining it? And with validity, there’s lots of different ways that we can assess that, and that’s when we start talking about different types of validity. But overall, reliability is all about consistency and validity is all about accuracy. Think about a bull’s-eye, right? If I’m hitting all in the same area, but it’s over here in the upper right-hand corner, I’m reliable. I’m hitting it consistently in the same area. But I am not accurate. I’m not valid. If I’m hitting them kind of all over the place, I’m not valid and I’m not reliable. That’s just scattered all over.

Bill Sherman It’s random. Yeah.

A.J. Marsden It’s random. Yeah. But if I’m hitting in the middle consistently, every single arrow goes right in the middle, that’s valid and it’s reliable. And that’s what we’re trying to do whenever we talk about validity and reliability of assessments and models: are we measuring it consistently, and are we measuring it accurately?
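
For readers who want to see the consistency idea as a number, here is a minimal sketch of one common reliability statistic, Cronbach’s alpha, computed from made-up item responses. The data and the rough 0.70 rule of thumb are illustrative assumptions, not figures from the episode.

```python
# A minimal sketch of a reliability check (Cronbach's alpha), assuming rows
# are respondents and columns are items meant to measure the same dimension.
# The ratings below are invented for illustration.
import numpy as np

items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 4],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
])  # shape: (respondents, items)

k = items.shape[1]
item_vars = items.var(axis=0, ddof=1)        # variance of each item
total_var = items.sum(axis=1).var(ddof=1)    # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)

print(f"Cronbach's alpha: {alpha:.2f}")  # values around 0.70+ are often treated as acceptable
```

Validity, as A.J. notes, takes more than a single coefficient; this only speaks to consistency.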

Bill Sherman And so then, how do you know if you’re measuring what you want to measure? Or maybe the better question is, how do you build a ruler?

A.J. Marsden Good, good question. And it really comes down to my favorite thing in the world, which is statistics: gathering data, right? And then from that data, being able to do some really cool advanced statistical techniques to determine if there is reliability and if there is validity. Some of it’s just as easy as looking at correlations between items. Are these items in this assessment or in this model all correlated? Do they all relate to one another? Or we could take it a little bit deeper and do some more high-level statistics, things like factor analysis. So I hypothesize that my model has four dimensions or four factors, or, as you called them earlier, four clusters, right? I can run an exploratory and a confirmatory factor analysis and determine if that really is the truth or not. The data will tell us. It’s an advanced data reduction technique that looks at the data from multiple angles. It looks at the error in the data, and then from there it can tell you, yes, here are these four dimensions, your hypothesis is correct, or, all of these numbers over here don’t look very good, I think your hypothesis might be wrong. So it really does come down to what the data tells us. Some of it’s really easy to interpret, and some of it’s more art than science.
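
As a rough illustration of the factor-analysis step A.J. describes, here is a minimal sketch of an exploratory factor analysis in Python. It assumes the third-party factor_analyzer package and uses randomly generated placeholder responses, so the hypothesized four-factor structure will not actually show up in this toy data; with real survey responses you would look for each item loading strongly on one factor.

```python
# A minimal sketch of an exploratory factor analysis (EFA), assuming the
# third-party `factor_analyzer` package is installed. The response data
# here is random placeholder data, not real survey results.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
responses = pd.DataFrame(
    rng.integers(1, 6, size=(300, 12)),              # 300 participants, 12 items rated 1-5
    columns=[f"item_{i+1}" for i in range(12)],
)

# Hypothesis: these 12 items cluster into 4 dimensions.
fa = FactorAnalyzer(n_factors=4, rotation="varimax")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns)
print(loadings.round(2))           # with real data, items should load strongly on one factor each
print(fa.get_factor_variance())    # variance explained by each factor
```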

Bill Sherman So when you talk about how many factors you have, and whether you have the number that you think you have, I think of the classic scene from Mel Brooks in History of the World, where, as Moses, he’s coming down from the mountain with three stone tablets and he says, I bring you these fifteen. One of the tablets drops. Ten! Ten Commandments, right? That’s the situation you don’t want to be in when you’re practicing thought leadership: you don’t want to say there are seven things that are really important in this framework and then have the data show there are really only four.

A.J. Marsden Right. Exactly. Exactly. And if the data shows us that, it’s unfortunate for sure. But then that’s when we just take a step back and say, all right, we’re not thinking about this right. There’s something that’s overlapping, it’s mashing together weird. Something weird is going on, not necessarily with the data, but with the model itself. So you have to take a step back and look at it from new angles, maybe bring in fresh eyes.

Bill Sherman Well, and you point to the fresh eyes on that, because this is the difference between the expert mental model in your head, where things can be squishy and undefined, and then, as you start defining and sharing it with others, do people interpret your framework or your way of looking at the world the same way that you do? Can they look at that framework and understand it? Right? So it could be lost in translation.

A.J. Marsden Absolutely. Absolutely. It could definitely happen that way. And that’s why we do so many rounds of data collection and validation. Let’s look at the data from this point of view. All right, let’s slice the data this way and look at demographics, which is something that I love to do. Are there gender differences? Are there sex differences? And then what does that tell us about this model? So if there are gender differences, why are there gender differences? We don’t necessarily need to recall the whole model, but we need to ask ourselves, why does this exist?

Bill Sherman There’s, I think, in thought leadership, an implicit approach that if it’s clear to me, it will be clear to others.

A.J. Marsden Yes.

Bill Sherman That’s not always true.

A.J. Marsden No, no. So many times in my work, I have thought that I understood what the client wanted. And even the client was just like, yeah, I think that this is what’s going on. And then once we put it out there to the masses, once we sent it out to 300 or 400 different participants, we realized, wow, we were definitely not thinking about this the way that the majority of people were thinking about this. And that’s great. That’s exactly what the data is supposed to do. It’s supposed to tell you where the weak points are in your model and then how you could potentially fix that, make it better. So the whole point of data collection is not, I mean, it’s great when it comes back and everything’s perfect, right? But the point is what we always talk about: let’s try to break the model. If we can break the model, we found the weak spot, and then we can go back at it and make it even stronger. So that’s our goal, to break the model.

Bill Sherman Yeah. And it’s that iterative process of, I think of it as almost hammering away at it until, when you strike at it, you know it’s, as you said earlier, valid and reliable. And it is very rare in my experience, and I’d be interested in hearing about yours, that you get it right on the first shot.

A.J. Marsden Oh, hardly ever. Yeah, hardly ever. It’s just so rare for it to be perfect the first time and for you not to have to remove items or even change dimensions. I’ve had it come back and it’s like, oh my goodness, we thought we had five dimensions, we definitely do not have five dimensions, and you have to start all over. I would say maybe once or twice has it come back where, oh, it’s just a matter of removing two or three items and now it’s fine. But that is really, really rare. Incredibly rare. Oftentimes it takes multiple rounds of data collection.

Bill Sherman If you’re enjoying this episode of Leveraging Thought Leadership, please make sure to subscribe. If you’d like to help spread the word about our podcast, please leave a five star review at ratethispodcast.com/ltl and share it with your friends. We’re available on Apple Podcasts and on all major listening apps as well as ThoughtLeadershipLeverage.com/podcasts.

Bill Sherman So you talked about observable behaviors in terms of collecting data that’s tied to individuals, but you can collect data from other sources as well. You can collect performance data and other things. How do those two marry together to make sure that a model actually reflects reality rather than just theory?

A.J. Marsden That’s a great question. That really comes down to what we call predictive validity. Is the model able to predict behavior in the future? If it can, then that’s great. That’s ultimately what we want, right? So if I’m a thought leader and I’m all about making organizations better when it comes to communication, transparency, things like that, I want to make sure that the training program that I’m putting them through actually has the outcomes that I want, right? So what I can do, after I get that outcome data, is go back and correlate it to the model and see if there is a correlation between the outcome data and the actual model data. And if there is, I can say, look, this model predicts this behavior. This training actually helps. And that’s what we want, right? It’s essentially proof, although I don’t like that word, but it’s essentially proof that our model and our training program works.
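
A predictive-validity check like the one A.J. describes often boils down to a correlation between assessment scores and an outcome measured later. Here is a minimal sketch using SciPy; the scores and outcome numbers are invented for illustration.

```python
# A minimal sketch of a predictive-validity check: correlate assessment scores
# gathered before a training program with an outcome measured afterward.
# All numbers below are hypothetical.
from scipy.stats import pearsonr

model_scores = [3.2, 4.1, 2.8, 4.5, 3.9, 2.5, 4.8, 3.1]   # scores from the model's assessment
outcomes     = [62, 78, 55, 84, 75, 51, 90, 60]            # e.g., a later performance metric

r, p = pearsonr(model_scores, outcomes)
print(f"r = {r:.2f}, p = {p:.3f}")  # a strong, significant correlation supports predictive validity
```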

Bill Sherman Evidence, you know.

A.J. Marsden Yes, evidence.

Bill Sherman Exactly. So many organizations have people who have these internal expert mental models. How would you offer advice, a tip or two, if you’re a head of thought leadership and, you know, someone has this in their head? How do you help draw that out of them? Let’s start there. What are good ways to get an expert mental model onto paper?

A.J. Marsden I would say first things first, just brainstorm it. I call it throwing it up on paper. Write anything that’s in your head, just get it down on paper. That way you can actually see it, it’s not just in your head, and other people can see it too, and maybe they make connections that you don’t see or that you didn’t think of before. So if you could get it into a shared document or put it up on a big whiteboard, something where everybody can see it, then just start, like I said, just start throwing up your ideas, anything, even if it sounds stupid. Just throw it up there and maybe it’ll turn into something that’s useful later, or maybe it’ll get thrown away. It doesn’t matter. There’s no wrong information when it comes to brainstorming models. So first and foremost, just encourage that. Get the information out there so other people can look at it, and then from there have a brainstorming session and see what other people think about it. Like I said, maybe other people have ideas that you hadn’t thought of.

Bill Sherman And encourage them to help you break the framework or model, or to explain what they don’t understand, right? You will be surprised.

A.J. Marsden Yeah, that can be really tough. When somebody is vulnerable and they put their ideas out there like that, right, and they come to you and they say, I have this idea and I’m really hesitant to share it with you, it might be stupid, I don’t know, but they put it out there and they’re vulnerable. And then you come around and you’re like, okay, this is great, but now we have to break it. They can have some really adverse reactions to that. Sometimes they’re like, why did we do all this work if you’re just going to make me break it? But if you can approach it in a way that is, you know, compassionate, helping if you will, then they’re going to be more open to the idea of, okay, you’re right, let’s try to make this better. We’ll break it in order to make it better. So approach it in a very positive way. It can be hard, though. It can be hard.

Bill Sherman So is there any way, as you’re building this framework and putting it to use, to have a sense of whether it’s valid and reliable without collecting data and doing statistics? Say I am not a statistician and I don’t have access to one. How do I know my model is useful, going back to that quote?

A.J. Marsden That’s a good question. I would say, you know, try to do some type of informal or internal version of what we would call a Q sort, right, where you put all of this information together and you say, okay, what do you think about this? Have each individual person who’s looked at it give their own feedback, but in an anonymous way, so that I won’t go back to you and be like, oh, why did you think this? or get angry about things. Try to collect a little bit of anonymous feedback from the people who have been working with the model; that way you get some more honesty, some more information regarding it. And then from there I would say bring it to other people, show it to other people. Try to get as much feedback from outside sources as possible, too. That’s not going to make it valid, right? It’s not going to make it 100% valid. But the more people you have look at it, the more people that give you affirmation of, yes, this is what I’ve seen in my organization, or this is what happens when that happens, then that’s a little bit more data, right? It’s anecdotal data for sure, but it’s a little bit more that you can use. So hopefully at some point in time, either you can afford to bring in a statistician to do the stats on your model, or, I don’t know if you can necessarily learn it all yourself, but maybe you could take a stab at some of it. But to say that this is a validated model, you have to have some data and some backing in that way. I mean, that’s just how it is. A good model is going to have a technical report behind it, period.

Bill Sherman And that’s one of the places I wanted to go. What is a technical report?

A.J. Marsden A technical report is honestly one of my favorite things to read, because it’s super nerdy and filled with numbers. Most people will flip through it and say, all right, this is great, but what are all these matrices and charts and columns of numbers, right? And then it’s my job to sit down and say, okay, this is what all of this means. This is good, this is bad. These numbers indicate this. So really, it’s a great way to demonstrate to other people the numbers that are driving the model. If I were to take a technical report to just, like, a GM someplace, again, they probably wouldn’t care too much. But if I were to take that technical report to maybe an I-O psychologist that they have on staff, that I-O psychologist could look at it and say, okay, yes, I 100% back us bringing this model into our organization, because they have numbers that back up what they’re trying to sell us. Right.

Bill Sherman And it requires you to show your work. I think of it almost like long division. You can’t just say this is reliable, this is valid; you have to show the results of the statistical tests and the actual numbers. And if you know what those numbers mean and how to interpret them, then it’s easy for someone who understands that language to say, yes, this is good, this is effective, or, I have concerns.

A.J. Marsden Exactly. Exactly. And I’ve definitely gotten technical reports in the past where I’ve looked at them and thought, oh wow, the error in this model is really, really high, I don’t know if I would necessarily trust this, in which case, you know, you can always ask them about it. I’ve definitely seen some tech reports that have had correlations in there that were maybe a little questionable, things like that, which just make me say, all right, maybe this isn’t exactly measuring what they think it’s measuring, this could be better, maybe the outcomes aren’t going to be the same. But in most of the tech reports that I see, the first things I look for are the correlations. I look at the reliability coefficients, and I look at the fit indices for the EFA and the CFA, the exploratory and the confirmatory factor analysis. What do those indices look like? Is the model fitting the data the way it’s supposed to? If it’s not, then the model’s broken. It’s not there.
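
For readers curious what “the model fitting the data” looks like in a technical report, here is a minimal sketch of pulling confirmatory-factor-analysis fit indices in Python. It assumes the third-party semopy package; the two-dimension model specification, the item names, and the simulated data are all illustrative assumptions, and the cutoffs in the comment are common rules of thumb rather than anything cited in the episode.

```python
# A minimal sketch of checking CFA fit indices, assuming the third-party
# `semopy` package is installed. The data is simulated so the example runs
# on its own; a real check would use actual participant responses.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(0)
latent = rng.normal(size=(300, 2))                                  # two underlying dimensions
noise = rng.normal(scale=0.5, size=(300, 6))
items = np.column_stack([latent[:, 0]] * 3 + [latent[:, 1]] * 3) + noise
responses = pd.DataFrame(items, columns=[f"item_{i+1}" for i in range(6)])

# Hypothesized measurement model: two dimensions, three items each.
spec = """
interpersonal =~ item_1 + item_2 + item_3
listening     =~ item_4 + item_5 + item_6
"""

model = semopy.Model(spec)
model.fit(responses)

fit = semopy.calc_stats(model)        # fit indices such as CFI, TLI, RMSEA
print(fit[["CFI", "TLI", "RMSEA"]])   # rough rules of thumb: CFI/TLI above ~.90, RMSEA below ~.08
```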

Bill Sherman Well, and if you’re on the buy side of this and you’re trying to bring an idea into an organization, like you said, you want to be able to say, okay, how tested has this idea been? It might seem really catchy or glitzy on the surface, but you need to make sure that it actually works, that it is, to that opening point, useful and will be useful to your organization. And a validated assessment is pretty much the gold standard.

A.J. Marsden Absolutely. Yes, absolutely. If you can get an assessment that’s been validated, that’s reliable, I mean, that’s amazing. I would definitely say, yes, use it.

Bill Sherman So as we begin to wrap up, you’ve been working in the space of thought leadership and data and assessments for a number of years now. What advice would you give someone who is either data-focused or data-curious around thought leadership?

A.J. Marsden Well, I would say if you’re data curious, dig in there. Just start learning as much as you can. Dive into those reports, ask questions. Statisticians oftentimes love to answer questions, and if you’re not sure, go on Reddit or go online and ask. Statisticians are online, too, right? We love to answer your questions. But dig into it, dig into the data. And then the next step, and I would not normally recommend this for anything serious that you want to do, but if you want to take a stab at maybe creating a survey or creating your own overarching model or idea, try it, see what happens. Now, don’t market it or try to sell it or use it for anything serious at work. It could just be something fun that you start playing around with, right? So take a stab at it, and then when you get a little bit of data back from your friends or your employees, play around with it. Download it into Excel. Figure out some means and standard deviations. Play around with it. Learn about it. That’s really the best way that you can understand and get to know data: just ask questions, play with it, gather it, have fun with it.
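
For anyone taking A.J. up on that “play around with it” suggestion, here is a minimal sketch of the same idea in Python rather than Excel. The CSV filename and column layout are hypothetical; the assumption is simply that each row is one respondent and each column is one survey question.

```python
# A minimal sketch of exploring exported survey data, assuming a CSV where
# each row is a respondent and each column is a question. The filename is
# hypothetical.
import pandas as pd

df = pd.read_csv("survey_responses.csv")

print(df.describe())                          # means, standard deviations, and ranges per item
print(df.corr(numeric_only=True).round(2))    # a quick look at how the items relate to one another
```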

Bill Sherman Well, and you say that and it makes me think. I mean, my undergraduate training was in English and theater, and I did graduate in both, right? And it was only after coming out of school that I wound up studying statistics. I would add to that it’s never too late if you’re data curious. And the thing that I found most useful is the ability to frame hypotheses and questions and then test them with data.

A.J. Marsden Yes, that’s like my favorite thing, right? Just like, yeah, this question, I need an answer to this. Well, let’s gather some data, right? So that’s one thing that I love doing in my research and statistics classes. Oftentimes, you know, I’ll be talking to friends about cool things that we witness, and I’m like, oh, that’s a really good hypothesis, I’m going to make my stats students do that as an experiment in class, right? So, you know.

Bill Sherman I know how all those stats and psych experiments happen. They’re happening in conversations for the professors. Got it. Okay. So this has been a delightful conversation, A.J., and thank you for joining us to dive into the world of models, assessments, validation, and statistics.

A.J. Marsden Thank you so much for having me.

Bill Sherman If you’re interested in organizational thought leadership, then I invite you to subscribe to the OrgTL newsletter. Each month we talk about the people who create, curate, and deploy thought leadership on behalf of their organizations. Go to the website, OrgTL.com, and choose “Join our newsletter.” I’ll leave a link to the website as well as my LinkedIn profile in the show notes. Thanks for listening, and I look forward to hearing what you thought of the show.

Peter Winick has deep expertise in helping those with deep expertise. He is the CEO of Thought Leadership Leverage. Visit Peter on Twitter!
