Episode 106

106 — Charting the Conjoint Frontier: Steve Cohen's Legacy in Market Research

Published on: 22nd April, 2024

How did conjoint analysis revolutionize market research?

In this episode of the Greenbook Podcast, host Lenny Murphy interviews Steve Cohen, founder and CEO of In4mation Insights and a notable figure in market research. Steve recounts his pioneering work in developing methodologies like choice-based conjoint and MaxDiff, detailing his career from early film forecasting models at Polaroid to groundbreaking academic contributions. He discusses innovations in market research tools, such as integrating budget constraints in choice modeling and enhancing analysis with behavioral economics concepts like regret minimization. We also explore the potential of advanced computational technologies and AI to enhance market research processes.

You can reach out to Steve on LinkedIn.

Many thanks to Steve for being our guest. Thanks also to our producer, Natalie Pusch; and our editor, Big Bad Audio.

Mentioned in this episode:

Join us at an IIEX Event!

Visit greenbook.org/events to learn more about events in Asia, the Americas, and Europe. Use code PODCAST for 20% off general admission at all upcoming events.

Transcript
Lenny:

Hello, everybody, Lenny Murphy with another edition of the Greenbook Podcast. Thank you so much for taking time out of your day to spend it with me and my guest. And we love all of our guests equally, but occasionally we have folks that just have a special place in my heart. That is the case today. We have Steve Cohen, the founder and CEO of In4mation Insights, and also just a legend in the industry. Maybe we should call this our Legend Series—

Steve:

[laugh].

Lenny:

—for the podcast. Steve, welcome.

Steve:

Well, thank you so much, Lenny, it’s great to be here with you. We have known each other since neither of us had gray hair. So, it’s been a while, I think [laugh].

Lenny:

[laugh] Yeah, it has been—it has been a while. Yeah, we met at ARF, back in the heyday of ARF?

Steve:

I—you know, my brain doesn’t go that far back. It’s sort of like, you know, first in, first out [laugh] you know? So [laugh].

Lenny:

[laugh] I understand. It’s, you know, it’s getting a little clogged up there. We got to make room. So—

Steve:

Exactly.

Lenny:

—with my usual hyperbole—but in this case, maybe not hyperbole—I introduce you as a legend. Your bio and background are so extensive, I cannot do it justice. So, why don’t you brag a little bit for our audience and give them a sense of why I consider you a legend. You’ll get it, audience, from the [crosstalk 00:01:26].

Steve:

I’m sure I [laugh]—right. It’s like, “Enough of your talking about me. Let me talk about me,” right?

Lenny:

[laugh] Right, right.

Steve:

So, I actually entered the market research industry in 1983 or so, and at the time I was working on film forecasting models for Polaroid Corporation. And soon thereafter, in about 1986, I transferred over to In4mation Insights at a group called the Custom Projects Group. And I started doing work in choice-based conjoint in the late 1980s—excuse me, I’m getting my dates wrong. It was ’83 I moved over to do work in choice-based conjoint. There’s a fella named Jordan Louviere, who is acknowledged as the founder, and first mover, if you will, of choice-based conjoint. He wrote a paper that appeared in the Journal of Marketing Research in December ’83. I called him on the phone, and for years, he was saying I was the only person who would call him on the phone to discuss the paper. So, I was really the first person to be doing choice-based conjoint commercially. Sawtooth kind of came around about ten years after that with their choice-based conjoint product. In the meantime, I was involved in doing an academic paper on latent class choice-based conjoint, latent class being a market segmentation tool, so basically, you’re finding groups of people who have similar kinds of choice drivers, if you will. By then I was involved in, sort of, what I consider the first paper to be looking at menu-based conjoint analysis—about 2000 is when that paper was published in the Journal of Marketing Research. And then in 2002, I presented a paper on using MaxDiff at the ESOMAR conference in Barcelona. And then the next year, I won best paper across all ESOMAR conferences in 2003, and they published sort of like a companion book of all the papers that had been published that year; mine was the lead article, and I won 5,000 euros for best paper that year. Hope I spent it wisely. I don’t remember what [laugh] I spent it on. And then I also, in 2003, presented MaxDiff at the Sawtooth conference.
I won best paper at that, and then about six months later, Sawtooth had their software product for doing MaxDiff. And then essentially in 2011, I won what’s called the Parlin Award, and that’s the award given by the American Marketing Association for, sort of, lifetime contributions in marketing research and marketing science. At the time, they were doing one year a practitioner, the following year an academic. I think they’ve gone to all academic because they couldn’t find enough practitioners to [laugh] give it to. But it’s really quite a prestigious award. It’s the best award given away by the American Marketing Association. And then in 2013, there’s an organization called the New York Market Research Council, out of New York City. It’s a lot of big names in market research in the New York area. They elected me to the Marketing Research Hall of Fame. And then in 2016—there’s another award given out by the Institute of Marketing Science, which is an academic group. They give out an award called the Buck Weaver Award—and I’ll tell you who Buck Weaver is in just a second, if you’ve never heard of him—and that’s for lifetime contributions in marketing science [raspberry]. So, uh… it’s been quite a ride. Maybe I’ll just go back for a second and tell you and your listeners who Buck Weaver is because I’m sure the number of people who’ve heard of Buck Weaver is probably really infinitesimal. So, do you know who Buck Weaver is?

Lenny:

I do not.

Steve:

Okay. Well, this is going to be a good lesson here. So, Buck Weaver, it turns out, is the only person I’ve been able to find—the only market researcher I’ve ever been able to find—who was on a cover of Time Magazine. Buck worked for Alfred Sloan at General Motors, and he was essentially a guy who was sending out zillions of mail surveys every year. You know, in the picture of him, he’s sitting there very, you know, formally—tie, suit, the whole deal—behind his desk, he’s got stacks of pieces of paper all over his desk, and the caption says, “A million opinions makes a fact.” [laugh].

Lenny:

[laugh]. All right.

Steve:

So, Buck is—so they give out the Buck Weaver Award every year. And he kind of was lost for a while in the sands of time. And there’s a fellow named Vince Barabba. Vince used to be the head of the US Census, and he was also the head of market research at General Motors. And the way the story goes is that he was apparently, you know, in the archive room one day, and he came upon Buck Weaver and said, “Who is Buck Weaver?” And one of the cool things you can do, which I wish I had with me, is that you can actually go on eBay and you can purchase some of Buck’s old surveys that he sent out, you know, little pamphlets about so big, and you kind of look at what he did, and you say, “God, things haven’t changed that much, have they?” You know, you look at the little pictures and questions. At the time in the ’30s, they were really interested in something called streamlining, so everything, you know, had to be—I don’t know if you remember these old pictures of trains going really fast with the lines, you know, behind the cars. So, they were talking about streamlining everything. It was very big in the 1930s, so he had a lot of questions about streamlining. Anyway, so that’s who Buck Weaver is. Anyway, my sort of passion, if you will, has been looking at choice models, and that’s partially what we’re here to talk about today, so I’ll stop there for a second, if you want to stop me from blabbering on.

Lenny:

Well, no, the point here is for you to blabber—

Steve:

To blabber [laugh].

Lenny:

[laugh] Yes, I mean, it’s interesting blabber. So, I always think of you as the father of MaxDiff—

Steve:

Thank you.

Lenny:

And—you’re welcome to add—what an advancement it was in choice-based models, overall: much simpler, much cleaner, you know, some greater utility. Now, let’s get really nerdy. And for the audience who may not be as familiar with conjoint and choice-based models, why don’t you give an explanation of the approach and some of the use cases. And I think that’s where, also, you’ve really pushed the limits within In4mation Insights in embedding choice-based models, Bayes, all that great stuff. From an analytical standpoint, you’ve done some amazing stuff. So, let’s talk about that for inspiration for our audience, and how these tools can be used, because they can do some amazing stuff.

Steve:

Yeah, well, you know, certainly the number of applications—from what I’ve read in the industry press and, you know, the academic articles and so on, they say there’s roughly 20 to 30,000 applications worldwide every year, these days, of choice-based conjoint. And the idea behind it is, what you’re looking to do is find out what the drivers of people’s choices are. And you know, it’s very important, certainly in the academic discipline of decision science, and also microeconomics is another place that it comes from. And the idea in microeconomics is that people try to maximize the utility of what they buy. So, what we want to understand is, by presenting you with a series of choices from a set of products, or a set of services, or whatever it is—a set of items, really—what are the characteristics of those items and products or services that make you say, “Yeah, I’ll buy that one, and I won’t buy the other one.” So, it’s a technique that’s called decompositional. In other words, you’re going to break down the product or service or whatever it is into a series of product features or attributes that it may have, and what you want to do, you know, once you break it down into pieces, is say, “Well, what’s the impact of each of those pieces on the eventual decision to buy?” Now obviously, it’s a technique that doesn’t capture the actual, yes-I’m-really-going-to-buy-that-thing behavior—it’s what people say is important to them—so that’s probably the biggest downfall of choice models: you don’t actually observe the true choice. You’re giving people a hypothetical choice. But there has been some work showing that the hypothetical choices are pretty good in terms of predicting what people will do when they go out and choose. Now, it really falls more in the System 2 thinking rather than System 1 thinking, so System 2 thinking is more sort of a considered decision.
System 1 is sort of like [ptht], you know, here it is, quick, buy it, kind of thing, you know, don’t think about it a whole lot, which one would you do? But there has been some work with these choice models to try and see if you could do it in System 1 thinking. So, for example, you might have a timer on the choice that people make—you know, you actually measure the amount of time they’re thinking—and people who make the choice quickly, you put into System 1, and people who are making a more considered choice, you put into System 2. Which of course conflicts with the whole idea of, if you have somebody who’s making quick choices, are they just trying to speed through a survey? So, you have to kind of separate one from the other along the way. But these choice models have been used, you know, since really 1983, 1984, and that’s about when I started getting introduced to it. You mentioned MaxDiff also because that, it turns out—I mean, I didn’t make this whole thing up out of my own head. I mentioned Jordan Louviere, the professor, and he had done some work—he had published a paper in, oh, the early ’90s, maybe the early ’80s or so. The paper was looking at decisions around energy and water conservation in a city in Canada, and they were presenting people with different options, which is really what MaxDiff does: you’ve got a bunch of different—I’ll call them objects or statements or whatever you want to call it—and you’re trying to figure out what’s more important to people.
I started working with that whole notion in the mid-’90s or so because I had been working for one of the computer companies, and we were looking at what was driving the choice of desktop computers for IT buyers—for the IT department, you know, together with the finance department, buying computers for use by their different, sort of, levels of people, from, let’s say, an admin up through a workstation that might go to somebody else. And they had asked me, you know, let’s try and figure out what’s driving these choices, but we don’t want them to necessarily say, “Should I have a faster or a slower computer,” but, “How important is speed in the decision?” “How important is ease-of-use in the decision?” So, they were more abstract kinds of ideas, as opposed to, let’s say, you know, do you want the 1 megahertz fast computer versus the 1.5 megahertz. And that’s really what you do in conjoint: you try and get very specific about what the attributes and what the levels of those attributes are, whereas MaxDiff is a more abstract kind of thing. And so, I started doing it in the mid-’90s, had a lot of good success with it, and I really didn’t publish what was going on until the work I did in 2002 at the ESOMAR conference, and then the following year at the ESOMAR conference again. For that particular work, I was working with a company in Panama, and they were looking at the abstract reasons why people drank coffee. One of their biggest clients, I believe, was Nestle. And so, we had a sort of an attribute rating test.
So, we had 13 attributes, and we said, “How important is, you know, ‘gets me to wake up in my day’?” You know, “Helps me with my digestion,” “Helps me to be friendly with people around me,” “Jumpstarts, you know, what I wanted to do,” those kinds of things. Those were all one-to-five ratings, which, before MaxDiff, a lot of people were doing—those kinds of things—and finding that everything was important. So, I like to say, if it wasn’t important, you wouldn’t put it on the list in the first place, you know? And then, you know, in lots of countries, there are tendencies to use the high end of the scale, or the low end of the scale, or the middle. And so, you’ve got, you know, scale-use bias of one kind or another you’ve got to take into account. And so, when we analyzed the ratings data—it was actually six countries in Central America we were looking at, about 150 to 200 interviews in each of those countries—we found that some countries had a tendency to be yaysayers, and so when we did the typical factor analysis, cluster analysis kind of thing—which, by the way, I hate—you get segments that are all yaysayers. You know, whoo-whoo hoo-ha. You know, big deal. It doesn’t tell you anything about what drove it. Whereas a similar group of people, 700 or 800 people, did the MaxDiff task and everything came out just beautiful. And that’s why we won best paper, essentially, that year. But MaxDiff is really a variant of choice-based conjoint. And it’s gotten pretty mature at this point, you know, which kind of drives me crazy, Lenny, to be honest with you, because all the companies say they do MaxDiff, and the royalty checks haven’t shown up yet. I’m—

Lenny:

[laugh].

Steve:

—I’m still waiting for them. And, you know, so be it. That happens. [unintelligible 00:14:53], of course, you know, for the betterment of mankind, when [laugh] you put it out there. So, there’s lots of, you know, work that’s been done with choice-based conjoint. I still love what it is and what it does, but to tell you the truth—I mean, this is part of what’s rankling me these days—is that if you look at the last 20 years or so of choice-based conjoint, you’ll find there hasn’t been a whole lot new being done. Certainly, like I said, the work I did in MaxDiff was 2003, so that’s 21 years ago at this point. Like I said, Sawtooth came out with their MaxDiff software late 2003, if I’m not mistaken, or maybe early 2004, so it’s been 20 years, and they’ve done a lot of good things with MaxDiff and made it easy to use. So, they’ve got bandit MaxDiff, and express MaxDiff, and only using the most versus the least, or the best versus the worst. But, you know, you look at what’s been done—and I’m specifically talking about market research—there hasn’t been a whole lot done. If you look at, you know, the pure-play conjoint kinds of companies—and I don’t mean to disparage them; I’m not going to name them—the, you know, online research kinds of companies, data collection companies, even the big consulting companies like McKinsey or BCG, they’re all essentially using the same technology that I was involved in develop—you know, introducing to the market research world in the ’80s and the ’90s, you know? So, one of the things that’s really sort of motivated me to look at new things is the fact that everybody’s doing the same old, same old. It’s the same thing over and over again, and nobody’s really said, “Well, can I look under the hood,” and sort of say, “Can I do things a little better?”
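To make the contrast with one-to-five ratings concrete, here is a minimal sketch (not Steve’s actual methodology, and the coffee-attribute names and data are invented for illustration) of a MaxDiff best-worst exercise scored the simplest possible way, with best-minus-worst counts. Because every answer is a forced relative choice, a yaysaying respondent has no rating scale to inflate.

```python
from collections import defaultdict

def maxdiff_scores(tasks):
    """Best-minus-worst counting scores for a MaxDiff exercise.

    Each task records the items shown on one screen, plus which one
    the respondent picked as 'best' and which as 'worst'.
    """
    best = defaultdict(int)
    worst = defaultdict(int)
    shown = defaultdict(int)
    for t in tasks:
        for item in t["shown"]:
            shown[item] += 1
        best[t["best"]] += 1
        worst[t["worst"]] += 1
    # Score = (times chosen best - times chosen worst) / times shown
    return {i: (best[i] - worst[i]) / shown[i] for i in shown}

# Hypothetical coffee-attribute tasks, four items shown per screen
tasks = [
    {"shown": ["wakes me up", "aids digestion", "social", "jumpstart"],
     "best": "wakes me up", "worst": "social"},
    {"shown": ["wakes me up", "aids digestion", "social", "jumpstart"],
     "best": "jumpstart", "worst": "social"},
]
print(maxdiff_scores(tasks))
```

In practice these counts are only a first look; the MaxDiff work described above would estimate item utilities with a proper choice model, but the counting version shows why relative choices sidestep scale-use bias.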

Lenny:

Well, I hear you, and we could argue that for researchers in industry, other than changes of maybe form factor, you know, from mail to phone to online—kind of the interface—the ways we go about doing things from a design and analytical perspective haven’t changed much. They’ve maybe gotten far more efficient, more automated, et cetera, et cetera. I mean, gosh, I remember the first time that I did a conjoint study in 2005, and what a pain in the ass, right [laugh]?

Steve:

Yeah, oh sure.

Lenny:

You know, utilities, and the simulator, and it was really expensive and complex. And, you know, very challenging to do, where now we embed MaxDiff into GRIT without a second thought because it’s just so easy, technologically. And maybe for the audience—and you keep me honest here—when I think about the difference between, let’s call it traditional conjoint and MaxDiff, my go-to is if I need to understand the optimal configuration for a product based on features, then that’s when, as a solution to embed within my study, that’s what conjoint is for. If I need to understand the trade-offs on more conceptual things, that’s where MaxDiff is. So—

Steve:

Absolutely. Yep.

Lenny:

Yep. Got it? Good. Okay. So, I’m not an idiot. That’s good.

Steve:

Yeah.

Lenny:

[laugh] Now, to your point, you said you’re experimenting, you’ve been trying to do new things, and I know from over the years of our conversations, that you have been doing that. So. So, tell us, what’s the new stuff? What’s the advancement?

Steve:

For sure. Well, before I go there, let me also mention that you kind of got MaxDiff and choice-based conjoint—you know, I think you pegged them pretty well, and I want to agree with you on what you said. One of the things that I don’t see being done a lot is the two of them being used in the same study. So, in other words, you’ve got the conceptual part, and you can have the more detailed part in the same study. And now you say, “Well, how can I combine those in some way to know how, maybe, the conceptual part, sort of, maps to the more detailed part?” And there are ways to do that through latent class models—I wrote a paper that appeared in the Journal of Marketing Research, I think it was ’96 or ’98, that we called joint segmentation. So, you’re basically saying, I can segment people in the MaxDiff, so [unintelligible 00:19:22] people have different, sort of, theoretical or conceptual drivers, and I can also have a segmentation based on the choices that people made—and maybe there’s different drivers—and the two of them, hopefully, should be related to one another in some way. And so, what this joint segmentation does is it estimates the choice segments from the choice-based conjoint, it estimates the MaxDiff conceptual segments, and it estimates at the same time the [cross-tie 00:19:50] between them. And I haven’t seen many people doing that [laugh], if at all, but, you know, it may be happening because I don’t see everything. Anyway, let’s talk about other new things that have been sort of occupying my time. And there’s really—I made some notes here, if I may—
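As a toy illustration of the cross-tie idea, assume the joint model boils down to a prior probability table over segment pairs, updated by each respondent’s two likelihoods; this is a heavy simplification of the actual latent class machinery, and all numbers here are invented.

```python
def joint_posterior(pi, l_choice, l_maxdiff):
    """Posterior segment membership in a joint segmentation.

    pi[s][t]     -- prior prob of (conjoint segment s, MaxDiff segment t)
    l_choice[s]  -- likelihood of this respondent's conjoint choices in s
    l_maxdiff[t] -- likelihood of their MaxDiff answers in t

    The 'cross-tie' is exactly the pi table: it says how the conceptual
    segments map onto the detailed choice segments.
    """
    post = [[pi[s][t] * l_choice[s] * l_maxdiff[t]
             for t in range(len(pi[0]))] for s in range(len(pi))]
    z = sum(sum(row) for row in post)
    return [[p / z for p in row] for row in post]

pi = [[0.4, 0.1],
      [0.1, 0.4]]            # cross-tie: the two segmentations tend to line up
l_choice = [0.9, 0.1]        # conjoint choices look like choice segment 0
l_maxdiff = [0.8, 0.2]       # MaxDiff answers also look like conceptual segment 0
post = joint_posterior(pi, l_choice, l_maxdiff)
print(post)
```

In the real model the pi table and the segment-level parameters are estimated simultaneously; this sketch only shows how, once estimated, the cross-tie links a respondent’s two kinds of data into one joint membership.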

Lenny:

[laugh].

Steve:

—[laugh] Just to remind myself what to say—but the first one that I believe is probably, you know, could be the next big thing, all right, in choice-based conjoint. So here, I’m going to start that particular discussion off by saying to you, “All right”—and this is not a trick, all right, so if you don’t know the answer, it’s all right. This—“So, what’s the one thing that people don’t have enough of?”

Lenny:

Time?

Steve:

Excellent. Good. That’s the correct answer [laugh].

Lenny:

Oh, all right.

Steve:

Okay, now—

Lenny:

Thank you sir, may I have another?

Steve:

[laugh]. Putting—putting—yes, you may [crosstalk 00:20:42] give you another one. Okay, so putting time aside, what’s the second thing people don’t have enough of?

Lenny:

Money?

Steve:

Yeah, exactly. Exactly. Time and money. And why is that? Well, again, if you look at, sort of, microeconomics, I’m sure you may have heard the expression, “Choices are made under constraints.” You know, people don’t have enough time, they can’t do everything they want to do, and people typically don’t have enough money, I mean, unless you’re, you know, Jeff Bezos or somebody like that; you’ll have enough money to do what you want to do. So now, let me ask you another question that has nothing to do with time and money. But if you’re going to do a choice-based conjoint—and I know you’ve seen quite a few in your day, and I ask your listeners to think about it—has anybody ever included the buyers’ budget in a choice-based conjoint—so I’m not saying, you know, ask them outside the survey saying, you know, “How much money do you have to spend,” or, “How much do you typically spend,” or anything like that—have it inside the modeling exercise?

Lenny:

No, that’s—because that’s usually with the pricing, you know? Van Westendorp or something of that nature. It’s a whole other section—

Steve:

Correct.

Lenny:

—so, not embedded in. Yeah.

Steve:

Okay. And if I may also say that people are making a choice—in the choice-based conjoint, they’re making choices—but we haven’t put a constraint on them, nor do we know what their own constraints are. So, I’m not saying I’m going to put a constraint on them, but what I’m going to ask is, can I figure out what your particular constraint is, your particular budget? And I can do that by analyzing the choices people made. Now, let me also say that there are other kinds of constraints that are out there in the world. For example, some people can’t eat gluten; some people can’t eat—you know, don’t buy products that have extra sugar in them, they have to have, you know, artificial sweeteners. Some people don’t buy products with too much salt. A physician may not prescribe something because there’s too many side effects. There’s all sorts of constraints that we have.

Lenny:

Or self-imposed values, right? All of those things. I’m not going to support that company, or I’ve made these ethical choices, right? Yeah, that’s a complex configuration. Yeah.

Steve:

Absolutely. So, what we want to do is, within the context of the choice-based conjoint, we want to figure out what the person’s constraints are. Now, I must say, trying to figure out a value-based constraint is a bit more difficult, but it turns out there’s a whole set of literature out there on taboo kinds of choices that people make. The taboo choice literature says things like—well, apparently, some researchers went to the Middle East, and they said, “Would you be willing to give up the claims you have to your ancestral lands if I give you a big amount of money?” And people said, “No.” All right. So, that’s really a taboo. Taboo constraints have to do with things like trading off values of health, peace, those kinds of things, against money. So, if you will, the taboo trade-off is—I’m trying to remember exactly what it is; I can’t remember, but it’s sort of li—oh, the sublime versus the—

Lenny:

Material?

Steve:

Yeah, the sublime versus material, if you will. So, material is the money; sublime is the peace, health—

Lenny:

The values, the—

Steve:

Kinds of things. The values, the values. So, that kind of stuff typically can’t be done in a choice-based conjoint, but you can put those other things in, you know: what’s the sugar content? What’s the gluten content? What’s the cost along the way? And so, these new budget models—I’m saying budget, it’s [unintelligible 00:24:15] a sense of a constraint, if you will—when you actually do a side-by-side analysis, which I’ve done on probably a dozen datasets so far, when you understand the underlying math, it turns out the budget models will give you a better result than, or an equal result to, the standard choice-based conjoint. So, it’s always going to be at least as good or even better. By even better, I mean, if you’re looking at things like a measure of fit to the model—an R-squared kind of thing, which may be a measure of how well the model is doing—or if you’re doing holdout tasks, you know, so you have a training set of choices and a holdout set of choices, it’ll predict better to the holdouts, and so on. And I think it’s just remarkable that it’s able to do that. And it’s really the only technology that I know of that allows you to put budgets into the modeling. I’ve been amazed at how good the results are, and how interpretable they are. One of the interesting things that happens with it is with price sensitivity—individuals’ price sensitivity. We’re typically using these Bayesian models, and for the people who don’t know what Bayesian models do, one of the things they do is get you coefficients, or elasticities, or sensitivities, or weights associated with each of the attributes that you’re testing, at the individual level. So, you would have your own set of weights for each of the attributes, and I’d have my own set of weights. They’re idiosyncratic to each of us, and to each of the people.
And so, it turns out one of the things that happens is that the budget model finds—because it’s estimating not only the budget, but also the price sensitivity—the price sensitivity under a budget model is lower, being more positive, if you will. So, people are less price sensitive under the budget model. And you go, “Wait a second, what’s going on there?” Well, what’s happening really is the following—and I’ll use a very simple example—let’s say you go to the supermarket, and you want to buy a six-pack of beer, all right? You know, you’ve got all sorts of choices when you go to the beer case: you’ve got the Belgian beers, you’ve got the American beers, you’ve got the Italian beers, the British beers, you’ve got all these beers to choose from. And let’s say you say to yourself, “Well, I’m not going to spend more than $15. That’s my budget.” And so, what the budget model says is any beers that are over $15, you’re not going to buy because it’s past your budget. Now, there are situations where maybe you will exceed your budget, but you’re not going to exceed your budget by 2, 3, 4x of what it was, you know? You’ll let it creep up a little bit along the way. And so, what happens is that anything that is above your budget, you will never buy. Essentially, the utility or value placed on it, because it’s too expensive, is zero. And so, let’s say you have a choice of beer that’s $12 or $13, under a $15 budget—well usually, who cares? $12, $13, who cares? So, price sensitivity is flatter, if you will. Now, one of the things I found is that there’s all this variability in choices you’ve got to take into account, and since price gets less important, other things like brands and features tend to get more important.
So, now that price becomes less important in the choice, other things sort of suck up all the variation within choices that happens, and they get more interesting and more important. So, brands start getting more important. To which I say, well, isn’t that nice, to be able to tell a brand manager that people are less price sensitive than we were thinking they might be, and brands are really more important along the way. So that, if you will, is the budget model, and the budget model, I’m really excited about.
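The beer example above can be sketched as a budget-censored logit. This is a minimal illustration of the idea as described, not In4mation Insights’ actual model: any alternative priced above the respondent’s budget gets zero choice probability, and the remaining alternatives compete under a standard multinomial logit. The brand values, prices, and coefficients are invented.

```python
import math

def choice_probs(alternatives, beta_price, budget):
    """Multinomial-logit choice probabilities with a hard budget cutoff."""
    utils = []
    for alt in alternatives:
        if alt["price"] > budget:
            utils.append(float("-inf"))  # over budget: never chosen
        else:
            utils.append(alt["brand_value"] + beta_price * alt["price"])
    m = max(utils)
    exps = [math.exp(u - m) if u != float("-inf") else 0.0 for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

beers = [
    {"name": "domestic", "brand_value": 1.0, "price": 12.0},
    {"name": "belgian",  "brand_value": 1.5, "price": 13.0},
    {"name": "import",   "brand_value": 2.0, "price": 18.0},  # over a $15 budget
]
probs = choice_probs(beers, beta_price=-0.1, budget=15.0)
print(probs)  # third probability is exactly 0
```

Note how the effect Steve describes falls out of the structure: once the $18 import is censored, the $12-versus-$13 contest is decided mostly by brand value, so the estimated price coefficient can stay flat while brand soaks up the variation. The real model additionally infers each person’s budget from their observed choices rather than taking it as given.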

Lenny:

I’m just thinking of use cases, right? The—

Steve:

Oh, there’s tons of them, yeah.

Lenny:

Yeah. I mean, I always go back to the famous statement from the CMO of P&G, that their goal is to have a one-to-one relationship with everybody on the planet in real time. And, you know, they do that by delivering the right message to the right person at the right time. And—

Steve:

With the right price.

Lenny:

Right. The right price, the right configuration of products, and we have that personalization, right—the technology is [enabling 00:28:35] that scalability now, to have very personalized configurations of products—and what you are describing is a central piece of driving that. And I’ll give you one example—it was actually really, really interesting—this week, for me. We put our car in the shop, thinking that it just needed some work. Get the phone call. “No, you need a whole new transmission.” And so, the decision at that point goes, for me, of, “Well, nope, I guess we’re buying a new car.” So, [laugh] and it was such a great experience to be able to—now, this was targeted towards me, but the exercise was going online before I ever—because I hate going on car lots, right? I know I’m going to pick out what the hell I want before I even get on there—and combining all the different components while juggling towards price. Here’s my—you know, this is the range that I need to be in, but here’s the features and capabilities, et cetera, et cetera, and going through all of that, which was a real-world version of the exercise you’re describing. And I don’t think we realize how we do that every day, so often.

Steve:

Oh, absolutely. Yeah, I mean, I’ve done this budget model with things as inexpensive as a dozen eggs. I found about 10% of the people were budget constrained, meaning that they had a budget limit that was less than the maximum price we tested. So, we tested a dozen eggs that went from $2 to $6, but 10% of them said they wouldn’t go as high as $6 [unintelligible 00:30:02]. So, one of the cool things that happens is, now that I have sort of a budget limit for each person, I can start segmenting people into, sort of, low, medium, high budgets—or, you know, however many groups along the way—and I can start profiling: who are they, what turns them on, you know, what are their demographics, values, other shopping behaviors, and so on. So, it’s really a cool little technique; I really like it a lot. The next one is what’s called attribute non-attendance in the literature—ANA. And the idea is that the basic choice model that we use for choice-based conjoint assumes that people are paying attention to all the things that you show them, and they are trading those off. It’s a compensatory model, meaning I’m willing to pay more money to get better features; I’m trading off, you know, spending money versus getting something I like. And the attribute non-attendance model says, “Well, wait a second.” If we have situations where, you know, people may be confused, or new to the category, or there’s a lot of features that we’re showing people—a lot of features and levels, lots of products we’re showing certain people—people do what’s called making a choice under a heuristic. A heuristic is, I’m going to simplify the decision for myself in some way. I’m going to pay attention to the things that I care about, and I won’t pay attention to the things I don’t care about, right?

Lenny:

So, looking at reviews, for instance.

Steve:

Yeah, whatever it is they're going to do. But they simplify the decision for themselves. And so, if you were to do a choice-based conjoint study, and you find that a particular attribute has a value of zero, all right, or close to zero, now the question you have to ask yourself is, does that mean they don't care about it, or they didn't pay attention to it and that's why it's zero? So, what these attribute non-attendance models do—I could get really nerdy and tell you the detailed math, but the basic idea is, they try to understand, for each person, did they pay attention or not? For each of the attributes, the model is going to, sort of, turn it on and turn it off, okay? And what it's going to say is, do I better fit this person's behavior when I turn it on, or turn it off?
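[Editor's note: the turn-it-on, turn-it-off idea can be illustrated with a toy multinomial logit fit comparison. The part-worths and choices below are made-up numbers, and this is a single-respondent caricature of what the real ANA models do probabilistically, not Steve's actual method.]

```python
import math

# Toy attribute non-attendance (ANA) check for one respondent: compare
# the log-likelihood of their observed choices when the price attribute
# is "attended" (its part-worth counts) versus "non-attended" (forced
# to zero). Whichever fits better suggests whether price was ignored.

# Each task: (list of alternatives as (brand_utility, price_utility),
#             index of the chosen alternative)
tasks = [
    ([(1.0, -0.5), (0.2, 0.3)], 0),
    ([(0.8, -1.0), (0.1, 0.2)], 0),
    ([(0.9, -0.2), (0.3, 0.4)], 0),
]

def log_likelihood(tasks, attend_price):
    """Log-likelihood of the observed choices under a multinomial logit,
    optionally switching the price part-worths off."""
    ll = 0.0
    for alts, chosen in tasks:
        utils = [b + (p if attend_price else 0.0) for b, p in alts]
        denom = sum(math.exp(u) for u in utils)
        ll += utils[chosen] - math.log(denom)
    return ll

ll_on = log_likelihood(tasks, attend_price=True)
ll_off = log_likelihood(tasks, attend_price=False)

# This respondent keeps choosing the strong brand despite unfavorable
# price terms, so the model fits better with price switched off:
# they look like a price non-attender.
attends_price = ll_on > ll_off
```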

Lenny:

Did that come into effect in tw—did you start playing with that in 2020 when scarcity became a driver versus features?

Steve:

I did not, no. It wasn't me, but a lot of it has to do, you know, again, with situations where—a lot of the stuff that's in the academic literature looks at cases where there's just a lot of things people have to pay attention to. And certainly we know heuristics are out there all the time, and behavioral economics certainly tells us that people use heuristics all the time to simplify decision-making. Which allows me to segue into another thing that I'm working on—this one's been around for a while, and I know [unintelligible 00:32:57] published a paper on this idea. So, I mentioned earlier that everybody likes to utility-maximize, maximize utility along the way. And, again, if you look at what's happening within the context of the choice-based conjoint, people are essentially saying, "I'm going to look at the features, prices, brands, whatever, of each product individually, I'm going to mentally calculate which one is the best, the one with the highest utility, and that's what I'm going to choose." What behavioral economics tells us is, uh-uh, actually context is important in what people do. They kind of take into account not only the thing they might choose, but what other things are out there, right? And you may have heard the expression, "Losses loom larger than gains." And so, what some people do is they want to avert losses—loss aversion—or, the official term is regret minimization: I minimize the regret of not choosing the other thing, if you will. And so, some people are utility maximizers, some people are regret minimizers, or loss avoiders, and there are probably a whole lot of other kinds of heuristics people are using that we're not clever enough to figure out. So, it turns out, if you're looking at a choice-based conjoint and everybody's using utility maximization, you get one set of results.
And it turns out, if you use the regret minimization tools and techniques, you get another set of results. And if you compare the two in terms of fit, the literature basically says that they're not that far off one another. The fit is about the same, but since in the regret minimization, loss aversion world, you're taking into account what other things are out there, you'll be acting more like a behavioral economist, and [laugh] it turns out the predictions that you get from that approach are different from the predictions of the utility maximization approach, because what you choose by trying to avoid a loss depends upon what else is available for you to choose from, okay? So, in that case, what you're able to do is to segment people into these different groups, and it turns out that figuring out who's which gives you even better results than being monolithic about utility maximization or monolithic about regret minimization. A hybrid approach is actually the best thing that you get out of it.
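[Editor's note: the point that regret minimization is context-dependent—what you pick depends on what else is on the table—can be shown with a toy example in the spirit of random regret minimization models. The regret formula and part-worth numbers below are illustrative assumptions, not the specific model from the papers Steve mentions.]

```python
import math

# Toy contrast between a utility maximizer and a regret minimizer.
# Each alternative is a tuple of attribute part-worths.

def utility(alt):
    """Additive utility: just sum the part-worths."""
    return sum(alt)

def regret(alt, others):
    """Regret of an alternative: for every competitor and attribute,
    accumulate ln(1 + exp(competitor - this)), so being beaten on an
    attribute hurts more than winning on it helps (losses loom larger)."""
    r = 0.0
    for other in others:
        for a, b in zip(alt, other):
            r += math.log1p(math.exp(b - a))
    return r

def choose(alts, rule):
    if rule == "max_utility":
        return max(range(len(alts)), key=lambda i: utility(alts[i]))
    # regret minimizer: pick the alternative with the lowest total regret
    return min(range(len(alts)),
               key=lambda i: regret(alts[i], alts[:i] + alts[i + 1:]))

# Two specialists and a compromise option (hypothetical part-worths):
alts = [(1.0, 0.0),   # strong on attribute 1 only
        (0.5, 0.5),   # balanced compromise
        (0.0, 1.1)]   # strongest on attribute 2, highest total utility

choice_u = choose(alts, "max_utility")
choice_r = choose(alts, "min_regret")
```

The utility maximizer picks the highest-total option, while the regret minimizer gravitates to the compromise that is never badly beaten on any attribute—two different predictions from the same data, which is exactly why segmenting people by decision rule pays off.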

Lenny:

Is there almost a matrix that can be—and again, I'm thinking personally—we'll use the 2020 example—I was never a hoarder or prepper before 2020, but you know what—

Steve:

[laugh].

Lenny:

—my God, we started stocking up. We always have toilet paper, right?

Steve:

[laugh].

Lenny:

[laugh]. We have lots and lots of toilet paper. Just thinking through, I can almost envision a matrix, a decision-making matrix, right, of this integration between, kind of, System 1, System 2 and the more utilitarian components of decision-making, and you're describing this holy grail of understanding these factors that drive decision-making by segment and profile. Almost like the Big Five personality profiles, but now it's driven by these other utilities that you're describing. That's pretty cool stuff, Steve.

Steve:

It is very cool stuff. I am… I'm trying to think. The last thing I'm working on is—and we can wrap up the conversation as soon as I pontificate about this last one, [laugh] if you don't mind—so, one of the things that bothered me is, you know, Sawtooth has this thing they call adaptive conjoint analysis—adaptive choice-based conjoint, excuse me. And essentially what they do is they give a bunch of pre-questions, you know, would you rather have [unintelligible 00:36:45] one, two, or three, or what's important to you, two or three? And then they adapt to that and then show you choices to make. And it's pretty good stuff. I personally have never used it, to tell you the truth, because I'm more interested in—there's been a whole set of papers written about what are called, I'll use the phrase, IASB—Individually Adapted Sequential Bayesian choice designs. And so, the idea behind that is, let me give you a choice task to do—a typical choice you might get in choice-based conjoint—I'll keep track of that, maybe do two or three of those, and then, after the third task, before the fourth, I'm going to, kind of, run a little model and figure out what's important to you, all right? And what I'm going to do is then decide, okay, what's the best thing to give you as the fourth thing to look at, based on what you told me you liked in the first one, two, three?
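[Editor's note: the adaptive loop Steve outlines—answer a few tasks, fit a quick model, then pick the most informative next task—can be caricatured as below. This is a toy counting model with a utility-balance heuristic, purely to convey the flavor; the published IASB designs use proper Bayesian updating and design criteria, and every name and data point here is hypothetical.]

```python
# Toy adaptive choice design: after some answered tasks, score each
# attribute level, then pick the next task whose alternatives are
# closest in estimated utility, since near-ties are most informative.

def fit_utilities(answered):
    """Crude per-level score: +1 each time a level appears in a chosen
    alternative, -1 each time it appears in a rejected one."""
    scores = {}
    for alts, chosen in answered:
        for i, alt in enumerate(alts):
            for level in alt:
                scores[level] = scores.get(level, 0) + (1 if i == chosen else -1)
    return scores

def next_task(candidates, scores):
    """Utility-balance heuristic: the smaller the estimated gap between
    a task's two alternatives, the more its answer tells us."""
    def gap(task):
        a, b = task
        ua = sum(scores.get(level, 0) for level in a)
        ub = sum(scores.get(level, 0) for level in b)
        return abs(ua - ub)
    return min(candidates, key=gap)

# Three answered tasks would precede this in practice; two suffice here.
# This respondent picked brandA both times, regardless of price.
answered = [
    ((("brandA", "lowprice"), ("brandB", "highprice")), 0),
    ((("brandA", "highprice"), ("brandB", "lowprice")), 0),
]
scores = fit_utilities(answered)

candidates = [
    (("brandA", "lowprice"), ("brandB", "highprice")),  # lopsided: brandA dominates
    (("brandB", "lowprice"), ("brandB", "highprice")),  # same brand: isolates price
]
task = next_task(candidates, scores)
```

Because brand preference is already well resolved, the picker chooses the same-brand task that isolates price—which is the whole point of adapting the design person by person.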

Lenny:

Is that what—remember [F ANOVA 00:37:43]?

Steve:

Yes.

Lenny:

That sounds very similar to their approach.

Steve:

It is similar, but I think this is a bit more rigorous than what they were doing. Because it's really hard, in the sense of, you've got to run a model in between, and then you've got to decide what you're going to show somebody. All of that's got to be done—if you're doing an online interview—lightning-fast. Lightning-fast. And a lot of [unintelligible 00:38:07] ANOVA, I mean, I don't know the full details of what they were doing, but the big barrier has been doing it fast, when you get down to it. So, it turns out, there's some work that was done in the machine learning world that I've glommed onto, where they're figuring out how to do that in another domain, if you will, and we're trying to develop some software to do just that with people: given the first three, what should the fourth one be, and then if you have the first four, what should the fifth one be, and so on, and so on. That's still a work in progress on our end, but you know, we're hoping to finish in the next, you know, two, three months or so, and test it out. So unfortunately, I don't know who I'm going to talk to about it, [laugh] but.

Lenny:

Well, let’s have a conversation offline. The—

Steve:

Yeah, super. Super. Yeah. So, that's kind of what I've been working at. And I'm trying to introduce some new innovations, you know, to the people listening to this or watching it—things that I hope will have people say, "Yeah, in the last twenty years, we haven't done anything, but you know, Steve's at it again with some new stuff." [laugh].

Lenny:

That’s cool stuff. Now, we are almost out of time, but I wanted to get to, you know, the dreaded two-letter word in this—although I would—we can probably save that for another conversation.

Steve:

Sure.

Lenny:

But I would guess on what we’re talking about, especially from the speed component, that because of the implementation of generative AI, and the efficiencies, particularly at the chip level, right, to power that—

Steve:

Absolutely, yeah.

Lenny:

—that that is going to open up even more capabilities, which I would argue would make the solutions that you're building more critical, more accessible, because we're just expanding computing power, and data synthesis, and the ability to get to the analytics in a much more efficient way. Is that, kind of, where your head is?

Steve:

Well actually, all these things I’ve described to you, we’re writing in Python, which allows you to go from CPU computing, which is what we’re typically using, to GPU computing. And the GPU is the graphics processing unit, and the big maker of graphics cards is Nvidia, and I, unfortunately, did not buy Nvidia stock—

Lenny:

I know [laugh].

Steve:

Several years ago [laugh].

Lenny:

I was kicking myself a few weeks ago, too, with that.

Steve:

Exactly, yeah. So, it turns out, we've been looking at these. The problem with trying to do conjoint studies with a GPU is the communication between the CPU and the GPU takes time—even with how fast everything is, it still takes time—and the kinds of datasets we have are not really big enough to take advantage of the GPU. In the context of artificial intelligence, ChatGPT, all those kinds of things, they're looking at enormous databases. And so, the GPU can—

Lenny:

Well, so we just need to do more conjoint.

Steve:

[laugh].

Lenny:

Right?

Steve:

Right. A couple of million respondents, and you'll have enough data to do it. But a typical stu—I mean, we've tried it with, like, 2, 3,000 respondents, and it just doesn't go much faster on the GPU than on the typical CPU. So, the problem, again, is the communication—it just doesn't work fast, let's put it that way—yet. So, we'll see what happens. We'll see what happens.

Lenny:

Well, Steve, you’ve been incredibly gracious with your time. Our listeners have been gracious as well. It is always a pleasure to catch up with you. And again, now listeners, I hope you realize why I said Steve was a legend. The man is.

Steve:

[laugh].

Lenny:

So, honored to have the chance to talk to you.

Steve:

Well please, thank you so much.

Lenny:

Yeah, seriously—

Steve:

It’s always good to talk to you. Oh, it’s good to talk to you, Lenny, really.

Lenny:

I appreciate—it's always—you always bring a smile to my face when I see you at a conference. It's been too long, and it's great to have this chance to catch up, and know that you're still building this body of work that has huge long-term implications for the entire industry. And listeners, that's the takeaway, right?

Steve:

Thank you.

Lenny:

When you think about all these things that we do, these are our standard tools in the tool belt, and they will continue to be. The form factor may change, right, but the underlying thesis and the approach will absolutely—they’re here to stay, and this is the man who helped bring them to life. So, thank you.

Steve:

Wow, thank you so much for the compliments. And always great to see you, and I’m glad you’re doing well. And let’s try and figure out when we’re going to do this again, okay?

Lenny:

Absolutely. All right. Last thing: where can people find you, Steve?

Steve:

Well, I'm at In4mation Insights—information is spelled with the number four instead of F-O-R. So, it's I-N, the number four, M-A-T-I-O-N Insights, and you can just email steve@in4ins.com. In4mation Insights: I-N-4-I-N-S dot com. That's the easiest way to get to me. Steve at.

Lenny:

Okay, there we go. All right. Thank you so much, Steve, really appreciate it. I want to give a big shout-out to our producer, Natalie—she is the one who keeps all the wheels turning—to our editors, to our sponsors, and most of all, to our listeners. So, that’s it for this edition of the Greenbook Podcast. Everybody, take care, and we’ll talk again soon. Bye-bye.


About the Podcast

Greenbook Podcast
Exploring the future of market research and consumer insights
Immerse yourself in the evolving world of market research, insights and analytics, as hosts Lenny Murphy and Karen Lynch explore factors impacting our industry with some of its most innovative, influential practitioners. Spend less than an hour weekly exploring the latest technologies, methodologies, strategies, and emerging ideas with Greenbook, your guide to the future of insights.
