Episode 96

96 — Back to the Futurist: An Insights Journey Beyond Tomorrow with Ari Popper

Published on: 5th February, 2024

What do futurists see on the horizon? A world where imagination and innovation converge to create tomorrow's reality.

In this episode of the Greenbook Podcast, we sit down with Ari Popper, the founder and CEO of SciFutures, to explore the intriguing intersection of science fiction and corporate innovation. Popper delves into the unique approach of SciFutures, which collaborates with science fiction writers to craft compelling future scenarios that help the world's largest companies envision and build their preferred futures. Through stories of transformative projects, such as revolutionizing home improvement with Lowe's and pioneering the future of mood and emotion measurement, Popper illustrates the power of storytelling in sparking real-world innovation. The conversation also navigates the ethical landscapes of emerging technologies, the societal implications of AI, and synthetic consumers, highlighting Popper's hopeful vision for a future where technology enhances humanity's best qualities.

You can reach out to Ari on LinkedIn.

Many thanks to Ari for being our guest. Thanks also to our producer, Natalie Pusch; and our editor, Big Bad Audio.

Mentioned in this episode:

Join us at an IIEX Event!

Visit greenbook.org/events to learn more about events in Asia, the Americas, and Europe. Use code PODCAST for 20% off general admission at all upcoming events.

Transcript
Lenny:

Hello, everybody. It’s Lenny Murphy with another edition of the Greenbook Podcast. Thank you so much for taking time out of your busy day to join me and my guest, because we know you probably have better things to do. Hopefully, this will be worthwhile. Today, I am joined by the man, the myth, the legend: Ari Popper, founder and CEO of SciFutures. Ari, welcome.

Ari:

Hey, Lenny. Hello, everyone. It’s amazing to be back with you again after, I think, it’s almost 13 years.

Lenny:

It has been a long time since we have shared any type of stage, virtual or in person. Although, back in the day, certainly had you roped into IIEX and other things on a regular basis, but you’ve been too cool to spend time with us until recently. Thank you for deigning to, actually. Give us some of your—[laughs]

Ari:

Now I’m thinking have I—have I become less cool, or has everything else become cooler and—I don’t know. Maybe a bit of both. [Laughs]

Lenny:

I don’t know. We think about SciFutures. We’re living in a lot of the future that you predicted, which is probably the segue. Obviously, Ari and I do know each other. We go way back, and I’ve looked forward to this conversation quite a bit. For our listeners, why don’t you explain your background and what SciFutures does? That’ll give a little context for why I think this is going to be a fun and cool conversation.

Ari:

Oh, sure, Lenny. I am the CEO and founder of SciFutures. We’re a foresight and innovation firm. We work with the world’s largest companies. What we do is we help them create their preferred futures. The way we do that is we use the power of story. We work with science fiction writers around the world, grounded in all the science facts and the emerging technologies and all these amazing changes that are happening around us every day. We use that as raw material to create these preferred visions of the future. Then we work backwards from those preferred future visions to help them create it. Kind of like a full-service innovation accelerator. That’s really what we do.

Lenny:

To make the research connection, do you want to talk a little bit about your pre-SciFutures world so folks understand where the roots are?

Ari:

Yeah. I used to be the president of BrainJuicer, which rebranded to System1. I actually was one of the first employees that set up the North American business and worked there for six years with John Kearon—the man, the myth, the legend—and all the great folks there. Yeah. That’s where I was before. Then, before that, I worked at Kantar for Millward Brown. I was the vice president there. You can take the man out of research, but you can’t take the research out of the man. I’m back. I’m back in the research world a bit as well.

Lenny:

All right. Now, that is an interesting through line, and I remember—maybe see if you remember this—we were sitting in a bar in Las Vegas, and you were talking about this idea at that point. Do you remember? We were at a—at a—at a conference, and you were talking about this idea of using—you love science fiction. You love writing and this narrative: how to turn that into an ideation process to build innovation. That sounds like the coolest thing in the world and a lot of hard work, so good luck, Ari. Then you did it. Talk us through. What was that inspiration? Because you spent your career in research and thinking there was a real need and opportunity to shift into this different mode.

Ari:

A lot of that inspiration came from how storytelling and science fiction made me feel. I was a big sci-fi fan. I used to read a lot of sci-fi as a kid. I used to watch sci-fi. Still do. I always got excited about the visions and the technologies and the geeky stuff and even the societal stuff that science fiction portrays. Then I realized that you look at the foresight agencies and the foresight work that large organizations do, and it’s all very dry and very left-brained, [unintelligible 00:04:28] if you want. It’s no wonder big things don’t happen. It’s no wonder that huge innovations don’t happen in these large organizations. They’re intellectually satisfied by all the reports and the scenario plans, but that content doesn’t result in meaningful action. Then working at System1, we know that, in order for people to change their behaviors, they need to feel. They need to have an emotional connection with advertising and marketing. For some reason, my brain—my crazy brain connected the dots and said, “Well, let’s create almost like the System1 content for the foresight and innovation world.” It works when you develop content that makes people feel excited about the future and also gives them a reason to believe it. They’ll want to build it, and it’ll result in much more transformative innovation. That’s the greatest joy that I’ve got from my career—over the last 13 years—seeing our content transform these large companies to do really meaningful stuff. I wish I could do more of it. It’s such a powerful way to create transformation and positive transformations. We need it more now than we’ve ever needed it because of how powerful these emerging technologies are. Yeah. I think 90 percent of startups fail. I think it might be even more. I think when I started SciFutures, I was bright-eyed and optimistic, but I was also like, “This is probably a one-year endeavor, and I’m back doing other things.” Yeah. It’s amazing how we’re over a decade in and still doing this work.

Lenny:

I think one of the differences—and you correct me if I’m wrong—was it’s not just you guiding a brand into thinking about new stuff, but you actually help them build it, if I recall correctly. There were quite a few. Lowe’s comes to mind with some of the work that was done there that I’m aware of. That you were instrumental in taking the idea, building it, and helping them roll that out. Do you want to talk a little bit about that process of how you helped actually incubate and accelerate the concepts into actual execution as well?

Ari:

Sure. Yeah. For some clients, there’s a handoff process. We’ve created the vision, giving them the emotional, visceral content to sell in. They get sold in, and then they run with that. For others, they actually need help developing proof of concepts and prototyping, and we do that as well. The Lowe’s work was our most famous public example, or one of them. I’ve got a few more. We envisioned the future of home improvement where people could walk around in their kitchen using some kind of next-reality device and, in real time, do the whole renovation. Then, with a flick of the wrist or a voice command, the Lowe’s trucks come, and they basically design it exactly how you imagined it. That was very science fiction in 20—what was it—13, 14. Today, it’s like, “Oh, yeah. Yeah, yeah. I could see that.” The technologies now that we predicted or anticipated—I think, is a better word—are here. How do you create 3D assets from scratch? Well, you pick up your phone, you take three photographs of it, and the AI and the algorithms do the rest. At the time, you had to model them by hand or, if you were lucky, buy a $20,000 scanner and scan them. The whole industry’s kind of evolved pretty much how we anticipated it to, and it’s only going to get even more powerful now with generative AI. That’s a whole separate conversation. Yeah. That’s one example. We’ve done work around mood and emotion. What does it mean when you can measure consumers’ moods passively? What does a mood economy look like in the future? That’s also coming on leaps and bounds. There’s a whole conversation we can have about the future of mood and emotion. We’ve built a mood lab for a client where a lot of the technologies we anticipated are now online. What we’re spending a lot of time today working on—and, Lenny, you and I spoke about this recently—is what we’re calling synthetic consumers. 
What happens when you can do research without respondents, or you can create your—have your best salesperson in every meeting all the time, or the CEO in every meeting? You make sure she turns up in every single meeting and has a point of view. These are sci-fi-like concepts, but they are absolutely reality today and accelerating. I think the last point I’ll say—because it’s hard for me to shut up once I start talking, so interrupt me—but one of the [unintelligible 00:09:14] in working with SciFutures is that, yes, we have visions that are transformative, and we help create stories and narratives of the future that get people immersed in them, but if it’s not grounded in what’s possible and plausible, and if there aren’t immediate actions that you can take today, the work is basically not useful. It can be inspirational, but inspiration without action is—it’s just a—it’s just entertainment. We have our feet on the ground. We came back off, probably, [unintelligible 00:09:50] tour of doing tours with CES where we talked to all the emerging technology companies and startups that we think are cool, and we bring that back into our clients. You need that vision, but you also need that grounding as well.

Lenny:

All right. I have three questions I’m dying to ask. I’m not going to ask them all at once, but I’ll—we’ll rapid-fire. Right? So, what’s an example of work that you’ve done that you are the most proud of? You’re like, “You know what? That makes all of this worthwhile. That was incredibly cool, game-changing stuff.”

Ari:

Wow. That’s a great question, Lenny. I think this is the true answer—although it might not be that satisfying for you—but every time we see the work that we’ve done translate into meaningful initiatives within our clients, I am absolutely over the moon. This morning I was talking to a client about these future magazines that we created. We’ve done two for them. We’re about to do a third. These are like, “Imagine if our industry transforms, and this is how we’ll be in 5 to 10 years.” These beautifully designed, visual-narrative-driven artifacts from the future got passed around the entire organization. This is a large CPG. You’ll know exactly who they are. They made it all the way to the CEO, and it was—it was wonderful, because it basically sparked pilot tests, transformations, and mindset capabilities. That fills me with immense pride. We did work for a large hospital client last year, and I was there. I saw all our assets around the room. I saw people using the language that we created as part of their conversations about the future. There was a sense of possibility and that onboarding and acceptance in the organization. One of the artifacts from the work that we did with this particular client—it’s a health and wellness client—they actually used the vision that we created for them and created a prototype using generative AI. They played me the video when I was there, and it used the same art, the same language. I was immensely proud. I couldn’t wait to tell the team. Yeah. There’s lots of those. I could break client confidentiality and tell you specifics, but I’d probably rather not do that on a public podcast.

Lenny:

Sure. No. No worries. Generalities are fine. All right. Flip side: what is a concept you’ve come up with that you were like, “Oh, holy crap. I hope we never do this. This is scary.”

Ari:

Yeah. There’s a few. When you’re working in the future, and you’re imagining possibilities, there are, obviously, dystopian scenarios that are actually sadly quite easy to anticipate. The Cambridge Analytica scandal, for example, was actually pretty easy to predict. It’s quite easy to anticipate. I think we’re seeing a lot of dystopian visions. We don’t generally do a lot of—we do use dystopia as a way to understand the consequence of inaction or action that isn’t deliberate, and it’s useful. We don’t dwell on dystopias. We usually have an aspirational vision of the future. Regardless, a lot of, frankly, what we’re seeing today with deepfakes, with the lack of truth, with complete social—social media is breaking the world, really. Let’s call it what it is. It’s breaking the world. A lot of that we saw early on. One of the potential dystopias that we have anticipated—that we hope we don’t see but probably will—is when these immersive technologies become so real and so compelling to our animal brains that we’ll lose people in the metaverse worlds. We’ll lose people in these completely fake, digital, immersive realities. I think it would become a public health crisis of sorts. Those are, sadly, fairly easy to see. Then, of course, there’s a lot of questions about the future of work, the nature of work, the nature of social cohesion, the future of social connection, human connection, all of that. There are some, I would say, pretty awful scenarios or visions that, unfortunately, are plausible. Our clients tend to be the Fortune 500s of the world. We focus mainly on commercial innovation, but we certainly don’t ignore the ethical responsibilities that these organizations have and, particularly, leaders have. You don’t need to hire SciFutures to anticipate some of the—some of the scary things that potentially could unfold.

Lenny:

Okay. No DARPA work.

Ari:

We’ve done work in the military. Yeah, no. Not for DARPA, but we’ve done work in the military. You go to bed thinking, “Oh, my god.” It’s interesting. Yeah.

Lenny:

All right. We’ll leave it there. That sounds like it bordered on national security issues, [laughs] and I don’t want to get into any of that. All right. The third question, and you’ve hinted at that—right—the—okay. You’ve been doing this 13 years. A lot of technologies are now coming or are reality. Right? AR, VR, AI—yeah—3D printing, yada, yada, yada. Right? All of those things are real. They may not be ubiquitous in all categories yet, but it’s certainly—it’s only an issue of computing power and cost to—before they are. Okay. Cool. What do you see next? What is the next technology that you think, “Okay. Now 5 years from now, 10 years from now, whatever, we’re not—this is going to be old hat. This would be like talking about mobile.” There’s going to be—it’s a whole new thing. What do you think that looks like?

Ari:

I think it’s really all about AI now. This is the AI—the next 25 years is all about AI, because there’s a ripple effect in my world where, yeah, you have silos of different technologies, like the mixed-reality world, energy, the IoT devices—all these what were independent technology verticals evolving. It’s all being compounded and accelerated because of AI. Yes. The next 5 to 10 years, it’s going to be AI, much more powerful AI, much more useful AI, much more intelligent AI. All these separate industries are going to benefit from it as well. It’s an incredibly transformative force, and we’re just starting. We haven’t even left the starting blocks yet. We could talk about the other disruptive technologies like quantum computing and the genetic engineering that’s happening in the biomedical space. Really, I think for us, what’s most interesting and what we work on with our clients over the next 5 to 10 years is: let’s imagine—based upon these current trends and all these technologies—let’s look and see how people will be shopping, how they’ll be buying, how markets will be working, what products and services your organizations will need to provide in a very mature AI economy. How do you need to restructure or rethink your organization if consumers have their own personal AI agents that are out in the marketplace shopping for them and organizations or cities have their own AI agents that are the primary interface? I’ve coined this term “the algorithm-to-algorithm economy.” A to A. We’ve heard of B to B and B to C. Okay. This is A to A. That’s happening. Apple, Amazon, Microsoft, Google—they’re working on consolidating all your individual data sets and creating AI agents that can interface with them and then go out into the marketplace for you. 
If that’s the case, if in the next 5 to 10 years you have personal AI agents that are shopping for Ari or planning a holiday or even a date night, how do brands, how do our traditional clients, how do they need to operate in those environments? What can they do today to be ready for that? What sort of competencies and skills do you need to be ready for that? That’s really where we’re spending quite a bit of time. It’s exciting, because there’s tremendous opportunity for efficiencies in the marketplace, to really delight consumers, to give them products and services that they really want and need. There’s also a huge shadow to this type of vision of the future. Taking away free will, nudge tyranny, nudging people within an inch of their lives, creating echo chambers and blindness. From the brands’ point of view, controlling your brand, making sure you’re searchable. If AIs are looking for you, how do they find you? Yeah. There’s lots to think about in this area alone in the next 5 to 10 years.

Lenny:

Yeah. Agreed. I think that’s where we are as well, right there with AI. It is interesting to think about the—somewhat today, I think, that AI doesn’t create anything. It remixes things really, really, really, really well. Right? I don’t think we’ll be stuck there for very long. I think that there will be—probably sooner than we even think—maybe not artificial general intelligence or sentience but something that looks really damn close that does have the ability to create in new ways, and the opportunities that will unlock for new products, et cetera, et cetera. I think that’s really interesting, and I suspect that will unlock robotics as well in a new way. Not just the digital avatars or digital agent type of world that’s pretty easy to see now but actual I, Robot-type of a—of a world where there’s—there are robotic assistants that are driven by that. Certainly we see it in manufacturing. I know in Japan they’ve been playing for quite a while with personal robotic assistants, especially for the elderly and in healthcare. We’ll see a lot more of that start to emerge over the next few years. Yes, you being a big sci-fi geek, I think—I can’t help but always think, “I’ve seen this movie. It doesn’t end well.” I can’t think of too many movies I’ve seen where that future is like, “Woo, that’s wonderful.” It’s usually like, “Oh, my god. Run from the robot overlords that’re trying to kill us.” [Laughs] Interesting times we live in. [Laughs] [Unintelligible 00:21:35].

Ari:

Indeed. Yeah, yeah. I think, if you’re creating science fiction, your job is to entertain. Often we get entertained by being afraid or terrified. That’s fine. It also absurdly sets up expectations that aren’t necessarily true or inevitable. I think there is a movement in the science fiction communities to create aspirational visions. I applaud that. Yeah, yeah. AI’s a big one. I think what I’m—what I’m passionate about is making sure we preserve our humanity in this transformation process. That’s a real passion area for me. What does it mean to be human? What’s special about—what’s unique about our humanity, and how do we make sure that, 10 years from now or 15 years from now, we don’t wake up and look around and go, “Oh, shit. Why did we do that for the sake of X, Y, and Z?” I think that’s a big one. Also, we don’t want to go to a world of technofeudalism. That feels like a potential future as well. We’ve got these billionaires building bunkers now, and it’s like, “Eh. Please, let’s not go there.” That’s something that we as a society, as a collective, need to agree on. We need to put systems and processes in place for us to manage that. That’s beyond the remit of SciFutures, but it—but it certainly—being in the room with corporate leaders, it gives us the opportunity to give them a sense of their responsibility as well in shaping the conversation and being a stakeholder. That’s a privilege and something I take seriously. My business is about helping our clients be ready for the future in a way that drives shareholder value but ultimately also attempts to leave the world a better place in that process. There’s a lot of tension, natural tension, in trying to resolve both of those integral demands that corporations have. Storytelling is a great way to do that. You can really play with—you can really collaborate on visions and responsibilities and roles using narrative and using visions of the future. 
You’d be amazed at how quick—when we work with clients on the topic of AI on purely commercial grounds—how do we use generative AI to improve a supply chain, whatever it might be—you’ll be amazed at how quickly the conversation gets esoteric and existential. It’s a couple of questions, and you’d be like, “Oh, okay. Here we are again.” Yeah.

Lenny:

Yeah. I think of the world as there’s two—you’re either Star Trek or Star Wars. Right? My guess is that you’re in the Star Trek camp versus the Star Wars camp? Would that be accurate?

Ari:

I have to say I enjoyed the original Star Wars universe a lot more than the recent one, although it’s getting a lot better.

Lenny:

Yeah. Well, that’s a given. [Laughs]

Ari:

[Laughs] My ethos is definitely more Star Trek. I like Star Trek, but I really love Star Wars. Yeah. Yes.

Lenny:

All right. All right. I think I’m a little mix of both. I love the positive and humane view of the future of Star Trek. I love the adventure component of Star Wars. Yes, my inner—my inner Luke. Right? Actually, in my marriage, I’m Han, and she’s Leia. Which makes sense, being a marriage. Right? We didn’t want to do the whole Luke-Leia thing. [Laughs] That kiss. Ugh.

Ari:

[Laughs] Oh. Ew.

Lenny:

George, did you really have the whole thing written?

Ari:

Oh, man.

Lenny:

Anyway, our geek flag is flying. I think it’s important to that idea of the narrative. Right? There is. We’ve got the Gibson visions of the future, which, oh, god, no. I don’t want to live there. The Asimov visions of the future, which probably are a little more pragmatic in where we really are maybe going. That Star Trek, what I liked about it was the—when you looked at the history—and we’re really getting geeky here, audience, but I think it’s relevant—in that Star Trek future, they went through some crap. Right? There were world wars. There was almost the annihilation of civilization. There was a lot of humanity learning some really hard lessons getting to that point of, “Okay. Maybe now we’re a mature species,” which has always felt like that’s probably pretty realistic as well. To your point, I think we still have some mistakes that we’re going to experience with these technologies. Hopefully we are learning to deploy that ethical component and framework around it as well. That’s my really long-winded way of reacting to what you said about helping your clients. It’s not just about the tech. It’s also let’s make sure that we’re doing the right thing. Right? This is innovation for good, not just innovation.

Ari:

Exactly, exactly. I think one of the new things we’re doing—what’s new at SciFutures—is leadership training, so future-literacy training. That’s something our clients asked us for. It was something that came to us, and it’s great, because you get to sit in a room with leaders. You get to immerse them in stories and in visions of the future. We have a scenario that we use for one of our clients, which is the healing home. What if your home’s really smart and knows what you need at all times, and all your devices are connected? It’s a great scenario. Then we say, “Okay. What does that mean for you as a leader if this is how the world is operating? What does it mean for you as a leader when you have AI colleagues with their own motivations and needs?” I spoke to you about the A to A economy as well. Then it’s like, okay, you get a visceral sense of what it means to lead in that future. Then it’s like, okay, what are the leadership skills you need to be successful? Part of that is navigating that tension: just because I can doesn’t mean I always should. What are our values? What are my values as an individual? What are our values as an organization, as a team? How do we pull that into our innovation? It’s very philosophical. Ten, fifteen years ago, you weren’t really having conversations that much about ethics and about philosophy in business meetings. You were talking about ROI, and you were talking about—it was kind of—we knew what the template was to be—to be a good leader. Now, with so much change, so much uncertainty, so much transformation, there really are lots and lots of conversations about ethics and philosophy. What does it mean to be human? It’s amazing. These are now practical conversations. They’re not these wonderful—you’re sitting in front of the campfire, warming your s’mores. This is like, “Damn it. We need to know this in the next two to three years.” It’s great. 
We need more philosophy grads coming into the business world.

Lenny:

I agree. I love that. You mentioned that’s new. Let’s actually pivot back into the business for a minute. What else is new? What are you working on? What are you excited about this year that’s going to be a new product offering or new opportunities that’s in front of you?

Ari:

Yeah. The leadership training’s great. I really love that part of our business. We want to do more of it. It’s a different buyer to the typical buyers that we have. We’ve been working mostly with R&D teams, strategy, a little bit of foresight and innovation, a little bit with marketing, but mostly, I’d say, R&D. This buyer’s usually learning and development, so it’s a different client altogether. They recognize that we need our leaders to be aware of what’s coming and how to lead through that. We’ve been doing a lot of work with J.P. Morgan, actually, for the last two and a half years. We piloted for half a year, and now we’ve perfected it. I’m so proud of the program, and we’re starting to work with other clients now in that as well. I want to do more of that. I really love it, because you have a real impact with the leaders that you—that you train. You can really help open their minds about, “Oh, wow. This is seriously coming around the corner, and this is what I need to be ready.” The leadership training we’re doing a lot of. On the more classic SciFutures side, a lot in health and wellness. A ton. We’re experts in that. Not just physical health. Emotion, human connection, which touches a lot of our clients that are in the CPG space. Then, as you know, we’re doing a lot of work around synthetic consumers. This is an area where I think it’s going to completely blow up. You probably know more about this than me, coming from the research side and being closer to it, but I’ve got clients asking me, “How do we do research now? How do we create personas of our customers that are fully AI?” We’ve been doing quite a bit of work now, and we actually got to a proof of concept for a client trying to figure out what’s plausible at this point in time. Yeah. That’s a lot. That’s tons. It keeps us really, really busy working in those areas.

Lenny:

Sounds like it. I want to be conscious of your time as well as that of our listeners. Listen, we’re recording this at the end of January 2024. The zeitgeist seems to be that this is going to be a weird year. I can think of 101 different definitions of weird or applications of weird. Since you are a futurist, what’s—what do you think we may see happen this year, that somewhat black swan? I don’t mean all the scary stuff. There’s plenty of that. What do you think, like, “Hey, it may be time that this happens”? Aliens land. Whatever the case may be. What do you think? What do we have to look forward to, like, “Ah, this could happen this year. Pay attention”?

Ari:

Well, I’ll tell you what—I’ll tell you what I hope for. I hope that we, as a species and also as Americans—I understand your audience is not just American, but this trend is happening all over the world—that we can see what unites us and what’s common amongst us more than what divides us. I hope that there’s some kind of catalyzing event that brings us together. I don’t know what that catalyzing event is, and we could imagine a number of scenarios that catalyze us. I have hope that, ultimately, as a human species and as human beings, we will continue to evolve and appreciate our differences and welcome them and learn to live with each other peacefully and respectfully. That’s my hope. I sadly don’t see that right now from the mainstream and also from what I pick up on more of the [unintelligible 00:33:53] fringe and social trends. It’s concerning. I don’t want to leave on a low note, but I do think—I do think we’re in for a bit of a bumpy ride as a species and as a society as we figure out how we handle truth, how we respect differences, how we integrate technologies as transformative as these into our ways of life while still preserving core human values. It’s a growing-up period. Every human being goes through it, and we go through growth journeys and learning journeys where we fail, and we hurt ourselves, and we get bruised. We’re probably going to have to go through that. I hope it’s not too destructive, and I hope that we can learn and come through it stronger. Yeah. It does feel like we’re at a really pivotal point. To be fair, every society has that. Every society. There was the Vietnam War in 1968 and [unintelligible 00:34:52] and the Second World War. Yes. It really does feel like we’re in a very pivotal time once again.

Lenny:

Well said, my friend.

Ari:

Yeah. Thank you.

Lenny:

I share that hope, and I’m sure that all of our listeners do as well. Yeah. Don’t want to leave it on a downer note. You’re right. Here’s the reality of that. The work that you do, the work that we all do—especially in the insights space—we do create the future. Right? Every single one of us that plays in this industry, this world, we help contribute to creating some future. Right? Whether it’s a product or a message or a concept, we touch it. Right? I was doing work in the mid-2000s on 5G. Right? We knew where this was going to go. I think it’s great to have a conversation like this, because it reminds me—and hopefully reminds all of us—that, yes, sometimes the world seems like a big, scary, out-of-control place. The last few years, COVID, all of that. Certainly, there’s some trauma we’re all working through with that, because, in some ways, it was like, “Oh, wait.” We have so much influence over our future, and it does start with being good humans. Doesn’t it?

Ari:

Yeah. It really does. I love that sentiment. My partner says something that’s really wise. She’s like, “Every single thing that exists in the world today was in someone’s imagination before it happened.” Everything. Someone had to imagine it. I love what you’re saying. It’s like, “Well, let’s get better at imagining great things.” That’s what we’re trying to do at SciFutures. You’re right. We can all do that. Let’s get much better at imagining positive, uplifting futures [unintelligible 00:36:44].

Lenny:

Agreed. Agreed. All right. Well, we’ll segue out from that positive statement. Was there anything that you wanted to touch on that we didn’t?

Ari:

No. It’s great to catch up with you, and I always love chatting to you. Thank you. I appreciate the opportunity to chat to you and hopefully to your listeners as well. Thank you.

Lenny:

Oh, well, ditto. Let’s not have years go by with only the excuse of doing a podcast interview to chat again. We can go lots of places with our private conversations about this. Where can people find you?

Ari:

SciFutures.com or Ari.Popper@SciFutures.

Lenny:

Okay. Well, man, thank you so much, Ari. It was great to catch up. Best of luck visualizing your future of success as we go from here. I guess that is it for this edition of the Greenbook Podcast. I want to give a big shout-out to our producer, Natalie, who wasn’t feeling well today. Natalie, get well soon, and thank you for all that you do. Our editor, Big Bad Audio. To our sponsors and, of course, to you, our listeners, because, without you, really, Ari and I would not have had this chance to get back together. Doing the podcast was the only reason we did. Thank you. You really are important on so many levels, and we appreciate you. That’s it for this edition of the Greenbook Podcast. Everybody be well, take care, and we’ll catch you at the next one. Buh-bye.


About the Podcast

Greenbook Podcast
Exploring the future of market research and consumer insights
Immerse yourself in the evolving world of market research, insights and analytics, as hosts Lenny Murphy and Karen Lynch explore factors impacting our industry with some of its most innovative, influential practitioners. Spend less than an hour weekly exploring the latest technologies, methodologies, strategies, and emerging ideas with Greenbook, your guide to the future of insights.
