Episode 75

75 — AI Roundtable: Facing the Next Frontier of Research with Julian Dailly, Lenny Murphy, and Gregg Archibald

Published on: 28th August, 2023

What does the future of market research hold in a world of ever-evolving AI technology?

In this episode, our panel of industry experts discusses synthetic respondents, the balance between human analysis and machine learning, and the upskilling of researchers in the face of disruptive change. As AI promises efficiency and scalability, the dialogue weighs automation against human insight, ensuring research remains insightful and actionable. Envisioning AI as a potent disruptor, the panel shares its optimism for improved accuracy, scalability, and industry innovation. Join us to explore AI’s role in shaping market research’s future – are you prepared for the transformation?

You can reach out to Julian on LinkedIn

You can reach out to Gregg on LinkedIn

You can reach out to Lenny on LinkedIn

Many thanks to Julian, Gregg, and Lenny for being our guests. Thanks also to our producer, Natalie Pusch; our editor, James Carlisle; and this episode's sponsor, SurveyMonkey.

Check out some of the links from this episode:

IIEX.AI

https://success.appen.com/hc/en-us

Transcript
Karen:

Hello, everybody. Welcome to another episode of the GreenBook Podcast. It is a pleasure to be here with you all today. I’m Karen Lynch, hosting today, and I’m joined by three guests, which is really exciting for me. They’ve actually all been on the show before. Julian Dailly is the founder and CEO of Savio. Julian, I’m going to start with you and let you introduce yourself to our audience personally. You can do a better job than I can, most likely. Welcome to the show.

Julian:

Hi. Thanks a lot, Karen. I’m the CEO and founder of Savio. It’s a network for finding flexible talent. In addition to finding professionals for work, Savio itself offers clients some solutions to their insight needs, usually powered by AI itself. As a little sideline, I also help other industries and businesses get into AI from a productivity and process point of view as well.

Karen:

Well, it’s great to have you here and have you back. I know that we’ll probably put the last episode that you were on in our show notes. I know Lenny hosted you for a great conversation not that long ago. It’s great to have you back. Our second guest, everybody, is Gregg Archibald. Some of you have heard him on the show before as well. He is one of the two managing partners at Gen2 Advisors. Gregg, I’m going to let you do the same thing Julian just did and introduce yourself to our audience in the way that only you can.

Gregg:

Thanks, Karen. I’m Gregg. I’m managing partner at Gen2 Advisors, just repeating what Karen said. We are basically a management consulting company for the insights industry. In that role, we work with a lot of different suppliers and brand organizations, helping them solve unique problems in the insights space.

Karen:

Thank you, Gregg. It’s good to have you back as well. Our third guest is actually somebody who you all know pretty well. Lenny Murphy, he’s the other managing partner at Gen2 Advisors, but he’s also GreenBook’s chief advisor for insights and development. He’s got a hand in Savio too. He’s instrumental, serving on the board right now, working regularly with Julian as well. Of course, he’s our other podcast host. Lenny, it’s good to have you here. I want you to explain a little bit about why we want you on this panel, too, having this conversation about AI. That is our topic of the day. Give us your thoughts.

Lenny:

Well, thanks. Well, I ask myself the same question. Why does anybody want me to participate? This is obviously a topic we’ve been paying lots of attention to, both in my role within Gen2 and GreenBook and as advisor to many companies. It’s the topic du jour. I’ve spent a lot of time over the last eight months having many conversations with lots of folks on this and, maybe more importantly, working with Gregg and others on thinking through the pragmatic components. My hope for this particular conversation: there have been lots of hype discussions, and maybe now we’re moving past that. Gartner says we’re moving into the trough of disillusionment; I prefer to think of it as the plateau of pragmatism.

Karen:

Sound bites.

Lenny:

There we go. Sound bite. You heard it here first. Julian and Gregg, I know from working with them that we’re all engaged in various ways in getting a handle on that now. What is happening? Who’s doing what? What are the business impacts that we are seeing and that we anticipate? I think that’s a piece of the conversation around AI that it’s time for.

Karen:

Yeah. Excellent. Thank you. I’m glad you’re participating in this conversation as well. Before we went live, friends, Lenny and I were having a conversation about the fact that Gregg and I were just on a call that was a part of ESOMAR’s task force work, which was a SWOT analysis done live with the ESOMAR audience. People who called in were able to contribute whether they saw a strength in AI or some weaknesses in AI, opportunities and threats, et cetera.

Gregg:

Absolutely. I want to say that the biggest point that was made is this is impacting every step of the research process. It’s being utilized by a lot of different companies. There are data-quality, insight-quality concerns. All of those are in the process of being resolved. They’re not going to be resolved this week, but I think all of those things are going to be more unified and more resolved over the course of the next year or so.

Karen:

Yeah, yeah. It’s interesting, because I know you and I have talked recently about what happened at the end of November of last year. Now that we’re ticking into fall, we’re three-quarters of the year into this. It’s been an interesting ride. I’m glad we are where we are. I think that what would be really interesting to start this off, especially for you, Gregg, and Julian, is to share a little bit about your uses, what you’re seeing with your clients, the people that you’re talking to, the people that you’re advising. How are people using it effectively right now?

Julian:

I think, first of all, it’s a different thing to different people. It’s very much followed a kind of early-adopter curve, in that some people are seeing it from a very creative point of view. They’re looking at it and thinking, “Oh, there are things I couldn’t do before that I wanted to do,” but there was something stopping them, because it was too complicated, or it wasn’t possible. Those people are using it for, what I would say, creative tasks. Then there are some people who are using it because they’ve interpreted it as a way to go faster. They’re using it as a productivity, timesaving, effort-reduction type of solution. Then there are those people who are at the outer edge. It’s either not on their radar, or, if it is on their radar, there’s a blocker. They’re resisting it.

Karen:

Yeah, in the more creative space. Gregg, how about you? I’d love to get your thoughts on that. What are you seeing out there?

Gregg:

Yeah. I actually want to cover two topics with that single question. First of all, think about the research process all the way from the initial development of a hypothesis through study design, the survey instrument, data collection, analysis, blah, blah, blah. Right? AI is picking up on all of that middle operational stuff, whether it’s developing PowerPoint presentations or writing surveys, discussion guides, and screeners. It’s not doing it completely and solely, but certainly it’s a good starting point. On the data collection piece of it, there are synthetic respondents and designing the analysis. AI is stepping into all of those research processes. Where we still provide a lot of value is on the extreme front end and back end: what are the problems, and what do we do about the problems? I think, if you would’ve asked me this question, yeah, three months ago, I would’ve given a different answer.

Karen:

Go for it, Lenny. I see you nodding. I know that you’ve got thoughts. This is your opening.

Lenny:

I always do. Absolutely agree with what Julian and Gregg said. Actually, I really appreciate, Julian, that perspective on the business-model component. It reminds me, from that standpoint, that it’s not substantively different than the path that we have been on. Right? We’ve been on this DIY and automation trend for quite some time. The democratization of insights, et cetera, et cetera. I think that the real opportunity now, more to Gregg’s point of what we see the industry looking like, will be continual acceleration of the technology companies, because they are best positioned to house and manage data. Inherently, that is the power of these technologies. They will unlock the ability for us to synthesize information that we had been envisioning for years with big data and all that good stuff, but it was too much of a pain in the butt, and we couldn’t figure out how to make all the data work together.

Karen:

I want to hover there for a minute on this concept of synthetic respondents and even synthetic data. Shout out to you, Julian, because you shared on your LinkedIn wall that you were going to be recording this episode. One of your contacts, [Margaret So], shared that she wants to get our point of view on synthetic data. The quote I have is, “I’m shocked that anyone would use this for survey research. Modeling, okay, as we already make assumptions, but I think it’s wrong to replace consumers, people who actually use the product or service.” She goes on to say, “Makes no sense whatsoever, and I think there are ethical issues.” Let’s dig in. Gregg, you’re smiling. Let’s dig in and get Margaret some of the answers that she needs. Then, Julian, certainly chime in.

Gregg:

Okay. I think we’re going to have a divergence of opinions on this. [Laughs]

Karen:

[Laughs] Good, good. I love it.

Gregg:

Lenny started it out with a solution looking for a problem. No, no, no. Fundamentally, synthetic respondents are using human responses and reorganizing them in a different way. I don’t want to make this look like humans are not involved in this. They are. That’s where the information comes from. These really are human responses. The way it’s being used today is not so much answering surveys. If we think about the Likert scale, AI today doesn’t do a very good job of answering a Likert scale.

Lenny:

I’m going to jump in for a second to echo what Gregg said. We’re not rumbling, Gregg. We’ll have to find another reason to rumble. I talked about this in a webinar last week. Within this edition of GRIT, we had over 600 AI-generated completes. We found them. The trigger for finding them was that the open ends were too damn good. They were very thoughtful. They were very long, and they were absolutely grammatically correct. Set that aside, it’s—that’s fraud. We removed them from GRIT.

Karen:

We’re probably going to have to have an entire episode just debriefing what we learned, because, Lenny, I think what’s poignant about all of it is what you’re saying: we saw this within the industry, in a very specific industry report with industry participants. That felt unnerving, so imagine what the general-population results to a survey might be. Hats off to our research director for data-cleaning efforts that were significant and worth every painstaking moment he took to clean our data. Anyway, Julian, I know you’re chomping at the bit to chime in on this conversation. What’s on your mind here?

Julian:

I think the synthetic respondents really go right to the heart of the debate about AI and MRX, actually, even though it’s a specific use case. I think it goes right to the heart of it, because it’s almost like the Holy Grail, isn’t it? I’ve heard MRX CEOs in the agency world, in the more traditional areas, saying that if you end up with a substitutable respondent, then, actually, it’s the end of the industry, because it’s all built around this continuous value chain of asking real people questions and, as Gregg alluded to, the research process. I think it’s a really big question, and it’s also a theoretical frontier in terms of—and I hear what Lenny says about—I’ve seen synthetic responders. It’s interesting. Whether or not what you are actually seeing is the discussion of a topic through the medium of an imagined person or whether you’re actually seeing a synthetic responder.

Karen:

Yeah. One of the questions we had is, how do we ensure that the synthetic respondents are adapting to the behavioral changes that people make every day? The person booking the travel, in a week he might fall madly in love with somebody who wants to do all the booking. The next thing you know, his behaviors have changed. He’s out. He’s not even in that business anymore. How do we make sure, as people’s behavior changes, that the machines are being trained equally in a common way?

Gregg:

Yeah. I talked about esoteric topics. Really, this comes down to how broad an issue we are talking about in consumer behavior. If it’s a broad issue that changes slowly, it’s not a big deal, because there’s enough information to see some nuance. Where things start to become more difficult is in the more extreme situations, as Julian was alluding to when he used the phrase, “You really need that dissonance for innovation.” Julian, I actually wrote that down. There’s absolute truth in that, but when the innovation is marginal and the topic is broad, it’s identifiable. When the innovation is extreme—if we think about the microwave or the first airplane or something like this, where it is a truly new innovation—those are not going to come from anything appearing to be synthetic.

Julian:

Well, let’s face it. Right? We are effectively doing it anyway, right, which is that when we use a focus group to test an advert, we’re saying these people are going to stand in for millions and millions of people.

Gregg:

Yeah, yeah. That’s what we do. Right? We’re not doing a census. We do sampling of populations to try and predict larger behavior. Yes. Yeah, great point.

Julian:

It’s not necessarily a new thing. It’s whether or not we can trust taking away that last plank. Ironically, it might be more powerful, because what we tend to find is that all research—outside of, maybe, exceptional circumstances—is somehow constrained by time and money. Why do we do 4 focus groups and not 8 and not 16 and not 32? It’s because we don’t have that much money, it doesn’t make any sense, or it takes too long. These are some of the constraints we learn to live with when we want to test something that we are going to extrapolate onto a national market or global market, et cetera. We may be able to go beyond those constraints if we can have 10,000 people who are good enough, maybe even better, at overcoming some of those constraints, and get to better outcomes altogether.

…project for the same reason: to get to that emotional context to build out these models more efficiently and effectively. It’s exciting times for technology providers in the industry if they figure out how to integrate and work together effectively to help unlock this potential.

Karen:

I want to segue here—as I’m watching the clock—from the training of the machines to the flipside of what we’re talking about: the upskilling of the humans doing the research. Right? We’ve talked a lot about synthetic respondents, but there’s this other part of the whole equation, which has to do with the human who is going to bring some balance to what we’re learning. Gregg, to your point, thinking about once we bring sensory testing and all of that to an AI level, everything will change. How do we get to the balance of what a human can contribute to the analysis versus what these models are going to be capable of doing for us at great scale and with great efficiencies?

Gregg:

The big piece there, if I look at the process, right, is that the future of market researchers will be prompt engineering. It’ll be a big, big piece of the job. If I look at what we really do, our job is to try and identify solutions to problems—and that’s 100 percent our job—and in that context, nothing really changes. To date, and I don’t see it happening in the near term, we can get some information. We can get some insights from utilizing different AI tools, but we can’t get a solution to a problem. What I mean by that is, let’s say we come up with: people are unhappy with—I don’t know—Bluetooth speakers. Now it’s someone’s job to go, “Okay. What is the state of Bluetooth technology? What are the next technologies? What are competitors doing? What is the economic environment?”

Julian:

I tend to agree. I think that the future, probably, can be seen by looking at other industries that have gone through similar changes. If I think about the design and illustration industry, the arrival of something like Midjourney, or Stable Diffusion, or DALL·E is much more of a hammer blow than it is for MRX, because there used to be people who would get paid for weeks to create some of the stuff that comes out in seconds now. They are going through a similar change, but theirs is much more dramatic and much more rapid. Similarly, we all saluted when automation took hold of the automotive industry, and robots started making cars. It was all totally fine. Similarly with automation in warehouses and recommendation engines for commodities like car insurance, et cetera.

Karen:

Well, let’s hope. I love what you were saying, Julian, about AI helping the research become more usable, more actionable. Maybe people are actually going to take the findings from some AI-supported research and act on them quicker. Because I think that a lot of frustration for many, many researchers is that we did all this work, and nothing’s happening as a result of that. Recommendations aren’t taken. People aren’t doing the work after the fact because—for whatever reason.

Lenny:

I actually don’t. I’m sitting here, one, being very appreciative that I’m glad that I get to work with all three of you every single day, because, damn, you’re smart.

Karen:

[Laughs] Come on, Lenny. Go on.

Julian:

Thank you, Lenny. Thank you, thank you.

Lenny:

[Laughs] No. That’s my last nice comment to anybody today. No more. No. I think this is a—it’s a great topic. We’ve joked about, “Oh, the drinking games. Talk about AI. Take a drink,” but it is a—an incredibly disruptive technology. We’re foolish to think otherwise, even if Gartner’s saying, “Oh. We’re moving into the trough of disillusionment.”

Karen:

Ah, well, of course, we are too. Julian, any last words you’d like to add here?

Julian:

Well, I was at IIEX Austin this year, and I was struck by the different speeds of adoption, I suppose, from the different presentations that were given. Some people were at the very beginning; they were showing big slides about, “I’ve asked ChatGPT this, and I’m giving it 6 out of 10.” Other people, like Yabble, were on stage with P&G talking about fully integrated solutions that were at scale. I suppose the question is how we can help people move along and overcome some ambivalence that might, in the end, become a self-fulfilling prophecy: that they end up getting left behind not because AI is going to kill them, but because, somehow, they’ve given up.

Karen:

To build on what Julian was just saying, our AI event is coming up soon—it will be happening on September 7th and 8th, which will be after this episode airs—and Natalie, our producer, will put that in the show notes as well. At that event, hopefully, we can help people so that they become survivors of this disruptive moment in time, Julian. Thank you. Gregg, go ahead. Before I bring us to the close, share your final thoughts as well.

Gregg:

Yeah. Building on what Julian said, all the organizations, GreenBook and IIEX and SMR and ARF and the Insights Association, are doing things to help the industry. Take advantage of those things. I’m going to lighten it up a little bit from Julian’s example and remind people that in the ’80s there was a band called Timbuk 3. The way that I’m looking at this change that’s happening is, “the future’s so bright, I gotta wear shades.” Yeah. Now, whip out the harmonica there.

Julian:

That’s great.

Gregg:

You got to add the harmonica.

Karen:

I love those pop culture references that age us. It’s time and time again that that comes up, gentlemen. Anyway, well, no, I think there is so much to look forward to. I think that, when we were talking earlier about some of the challenges, with my creative-problem-solving hat on, I’m always thinking, “How might we,” instead of looking at the, “Oh, no. What if? These are the risks. These are the threats. These are the challenges.”

About the Podcast

Greenbook Podcast
Exploring the future of market research and consumer insights
Immerse yourself in the evolving world of market research, insights and analytics, as hosts Lenny Murphy and Karen Lynch explore factors impacting our industry with some of its most innovative, influential practitioners. Spend less than an hour weekly exploring the latest technologies, methodologies, strategies, and emerging ideas with Greenbook, your guide to the future of insights.
