
Algorithms are often biased. In fact, it may not be possible for an algorithm to be completely unbiased. There are two primary explanations for that: the way the algorithm is programmed and the data on which it's trained. AI is ultimately biased because people program the algorithms and select the data used to train them.

How can marketers reduce the bias in their algorithms? In this episode, AI Bias: A Tale of Sheep and Field, Jake Moskowitz and his guests explore AI bias, the causes of bias, different examples of algorithmic bias, and how AI bias can skew the results of marketing efforts.

Hear from industry experts and thought leaders: 

  • Rishad Tobaccowala, Senior Advisor to the Publicis Groupe and author of the best-selling book: “Restoring the Soul of Business: Staying Human in the Age of Data”
  • Shelly Palmer, CEO of the Palmer Group and host of “Think About This with Shelly Palmer and Ross Martin”
  • Ella Chinitz, Managing Director at EY and data science veteran of marketing and advertising
  • Ray Velez, Global Chief Technology Officer at Publicis Sapient

Jake and his guests discuss the different types of AI bias and some effective ways to reduce bias in marketing. How do we train AI to reduce bias? How can AI bias negatively impact marketing efforts? Understanding these questions can propel your marketing efforts to the next level.

Shelly Palmer tells the story about sheep in fields, and different ways that training data can affect AI bias. Ella Chinitz brings color and expertise to Jake’s Five List. Rishad Tobaccowala outlines the human aspect of AI and his 6 “I’s” of extracting value from data streams. Ray Velez discusses the negative effects of AI bias and how increasing the diversity in your data set can lead to better results.

The FIVE list: Five ways AI can negatively impact marketing and advertising:

  1. Conquesting
  2. Cross-platform attribution
  3. Targeting lower value customers
  4. Short-tail bias
  5. Early Adopter bias

The FIVE podcast is presented by Ericsson Emodo and the Emodo Institute, and features original music by Dyaphonic and the Small Town Symphonette. Social media and other promotional content is handled by Lyon Solntsev. This episode was edited by Justin Newton and produced by Robert Haskitt, Liz Wynnemer, and Jake Moskowitz.


Transcript of S2 E2, “AI Bias: A Tale of Sheep & Field”

I think the clearest example of a bad AI is AI that is producing bias. Let’s talk AI.

Welcome to FIVE, the podcast that breaks down AI for marketers. This is Episode Two: A Tale of Sheep & Field.

I’m Jake Moskowitz.

Way back in 1974, the Equal Credit Opportunity Act was established to prohibit lenders from considering a borrower’s gender when issuing credit and determining credit terms, limits, and interest rates. Yet in 2019, 45 years later, Apple and Goldman Sachs were publicly excoriated in the press for doing exactly that. Back in 1964, amendments to the Civil Rights Act protected job applicants from employment discrimination based on gender. Yet only a couple of years ago, Amazon found itself reevaluating internal hiring techniques that did exactly that. You may remember these incidents; they were high-profile national news stories. But here’s the thing: those aren’t stories about people discriminating against people. They’re stories about AI algorithms and the bias that’s programmed into them, likely without the data science teams even realizing it. These days, we see stories like these popping up pretty often.

There’s a lot of talk about bias, especially given the conversation around racial injustice these days. We talk about inherent bias, systemic bias, racial bias. So your first inclination is probably to assume that this episode’s conversation about bias is about those ugly human aspects of our culture. But when we talk about algorithmic bias, we’re talking about AI and one of its most significant and common flaws. Whether it’s racial bias in the ads presented to Facebook users, political bias in content recommendations on YouTube, or bias that favors white patients over Black patients when machines assess medical needs, algorithmic bias can be pervasive. And sometimes it can be harmful.

So what’s going on here? AI algorithms are grounded in mathematics and use data to guide the results, right? Shouldn’t we expect algorithms to be objective? It’s just math. I mean, they’re not guided by the inherent biases of folks like you and me. They’re not swayed by the more overt or intentional biases of fallible people, right? Or are they?

Here’s a simple truth: algorithms can be biased. Algorithms are often biased. In fact, maybe algorithms can’t be completely unbiased. To stretch a human analogy probably too far: they’re not born that way. So where do they learn that stuff? Well, the truth is, many algorithms are biased from day one, and lots of algorithms learn to be biased from their experience. There are two primary explanations for that: the way the algorithm is programmed, and the data on which it’s trained.

Shelly Palmer:

But the more data you have, the better it’s going to be.

Jake Moskowitz:

That’s Shelly Palmer, CEO of the Palmer Group. You’ve probably read Shelly’s column in Adweek or seen him on Good Day New York, CNN, or CNBC.

Shelly Palmer:

And so a while back, a bunch of researchers started showing pictures of sheep to an AI model. And the model kept looking at them and was getting really good at identifying sheep. And then they started to get some false positives, a lot of false positives. And the false positives were on meadows. Why? Because the computer saw sheep as little tufts of white on big green fields. So the bias was to the big green field, not to the little white tufts. So ultimately, it had seen so many green fields that it assumed green fields were sheep in the reinforcement learning.

There’s another example that’s a little more insidious. There was a very big push a while ago to start using machine learning and AI models to identify certain kinds of melanoma skin cancers. And there was an awful lot of hoopla and a lot of writing about how good pattern recognition was at actually detecting melanomas earlier than a human pathologist could, because it could see different things in the pixels. Well, the way that it got this information is they took pictures of people’s arms, and they took them with professional cameras. When a picture was taken in a doctor’s office, it was very well lit, and almost every time it was taken in a doctor’s office, there was a ruler laying next to the forearm, or the body part where the melanoma might be, so that they could size and scale the melanoma. As it turns out, it got very good at finding certain kinds of skin cancer. Then one researcher realized this had nothing to do with the melanoma at all. Pictures taken at home didn’t have rulers, and they weren’t very well lit, because people don’t tend to put rulers next to their arms at home. And they generally don’t have very professional cameras. And so they’ll shoot with their cell phone, or they’ll shoot with, you know, their point and shoot, and then they’ll send it in. Of course, if you’re in a doctor’s office getting a picture taken, there’s a good chance you are suffering from the disease. And if you’re home, it could just be nothing; you don’t know it, you’ve got no symptoms, and nothing bad’s happened to you yet. And so there was an immense bias toward the ruler, where pictures with rulers were assumed to have cancer, and it had a high percentage of false positives because it was looking at the ruler, not the cancer. So those kinds of biases, sometimes you don’t even know what you don’t know when you’re looking into why an AI model is doing what it’s doing.

Jake Moskowitz:

Shelly Palmer, thanks again.

In the last episode, we talked about indispensable attributes of people. This time, we’re kind of talking about the other side of that coin: people program algorithms, and people select the data to train algorithms. When you speak with people from companies that develop AI-powered solutions, and we will, most will tell you that their algorithms are unbiased, and many take deliberate steps to make those statements as true as possible. Well, that’s the idea anyway.

What makes algorithmic bias extra tricky is that it’s difficult to detect. Users and even the programmers of AI-powered systems can have trouble spotting bias, because most can’t compare their experiences with those of other users. Also, in many cases, they may be measuring their success based on the overall accuracy rate, without noticing that the accuracy of the algorithm’s predictions for a particular subgroup is far worse than the overall rate.
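To make that concrete, here is a minimal sketch, in Python, of the kind of disaggregated check a data science team could run: compare each subgroup’s accuracy against the headline number instead of trusting the overall rate. The group names, labels, and error rates below are invented purely for illustration.

```python
# Hypothetical illustration: a healthy-looking overall accuracy can hide a
# much weaker subgroup. Group names, labels, and error rates are made up.
import numpy as np

rng = np.random.default_rng(0)

# Simulated ground truth and predictions: majority group A, minority group B
group = np.array(["A"] * 900 + ["B"] * 100)
y_true = rng.integers(0, 2, size=1000)
y_pred = y_true.copy()

# Assume the model errs on 5% of group A but 40% of group B
flip = ((group == "A") & (rng.random(1000) < 0.05)) | \
       ((group == "B") & (rng.random(1000) < 0.40))
y_pred[flip] = 1 - y_pred[flip]

print("overall accuracy:", (y_pred == y_true).mean())
for g in ("A", "B"):
    mask = group == g
    print(f"accuracy for group {g}:", (y_pred[mask] == y_true[mask]).mean())
```

In this toy setup the overall number lands around 0.9 while group B sits near 0.6; a team watching only the first figure would never see the second.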

We have all these incredible tools powered by brilliantly crafted algorithms, this copilot that helps us do what we do better than we could ever do it alone. Together, we’re operating with remarkable scale, speed, foresight, and specificity. Yet, in light of all of that, there’s this downside that seems so human. And perhaps it’s easier to see how AI bias might affect certain groups or certain types of people when the consequences are overt, like the impact on credit, housing, jobs, and education.

You’re probably thinking, okay, got it. But we’re talking about marketing here, how many ways can AI bias seriously impact the work I do every day? Actually off the cuff, I’ll give you five. 

One: conquesting.

Two: cross platform attribution.

Three: targeting lower value customers.

Four: short tail bias.

And five: early adopter bias. 

I’ll make my case for each of these. But I’m going to phone a friend to verify my answers and bring the data science color and expertise. Let’s talk about number one: conquesting. AI can help you optimize marketing to existing customers, but potentially at the cost of optimal marketing to prospective customers. The vast majority of marketers are going to have more, and more detailed, data about their own customers than they do about their competitors’ customers or potential customers; or, at the very least, the training data about existing customers will be of higher quality. That’s a bias problem. The algorithm will likely do a better job of optimizing marketing for existing customers and a less efficient job of finding new customers.

Ella Chinitz:

Definitely true. But I think very quickly, you’ll find that your model is hitting the same small percentage of the population and disregarding the rest.

Jake:

Ella Chinitz is a Managing Director at EY and a data science veteran in marketing and advertising.

Ella:

So, to be able to open up our funnel and to be able to look broadly to acquire the next generation of customers, we very much need to have an approach that can handle situations where we have a lot of data and very rich information about an individual, as well as those situations where we have very limited information. And we can use the cues that we have to fill in the pieces, and/or be able to make inferences off of smaller volumes of data or less complexity of data.

Jake:

That’s actually a really interesting point: AI is very well suited for situations where you have little data as well. And it seems like a perfect example of that is the deprecation of the IDFA by Apple, or of third-party cookies on Safari and Chrome, and GDPR and CCPA, where it seems like there’s a future of marketing in which there’s less data, not more. And is AI well suited to make up the difference by plugging the holes?

Ella:

Definitely, AI has been very helpful and will continue to be helpful in quickly finding what matters, what data is accessible, and then leveraging that to kind of at scale, create the outcomes that you’re looking for more holistically. I’ll also say, though, that I’m a big advocate to uncover hidden data where it exists and where it’s not being taken advantage of. And there are so many places that I think there are trails of information, digital exhaust, or data exhaust, whatever you want to call it, that is left for the ability to be used in smarter ways. Things like I mean, everybody uses kind of the core transactional data, or information that has been overtly provided. There’s also so much more information that could be used more than it is around behavioral data, all within, you know, very strict privacy and controls environment.

Jake:

Number two: Cross-platform attribution. Here’s my take. Marketers want to be able to compare the effectiveness of different platforms within a campaign, including how platforms interact with one another. But an apples-to-apples comparison just isn’t realistic. Mobile needs to go through a probabilistic cross-device database that links device ID to cookie ID, for instance. And that leads to data loss and data inaccuracies. Linear TV may be difficult to track on a one-to-one basis across a lot of users. Ella, keep me honest here: modeling algorithms to normalize across channels, high potential for introducing AI bias. What do you think?

Ella:

It definitely does lead to AI bias there as well. What you’ll find often is that there’s a significant drop-off rate when you’re doing the probabilistic data matches. And if you’re losing, say, 50% of your audience in that match, it’s unclear if it’s a representative 50%, or if it’s a very specific and targeted group of that percent of the audience. And so it’s quite important to understand how representative, across several different dimensions, the audience that you are tracking is, whether that’s relative to, you know, the overall US population, or to something specific relative to that organization. In many ways, as you start looking at stitching data together across different platforms, you’re losing people, and it’s unclear what and who you’re losing and how that is related to who they are as individuals, or if it’s more just another variable.
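Here is a minimal sketch of the kind of representativeness check Ella is describing: compare the demographic mix of the users who survive a probabilistic match against the rest of the audience. The age bands, match rates, and the chi-square test are assumptions made for this example, not anything specified in the episode.

```python
# Hypothetical check: is the ~50% of the audience that survives a probabilistic
# device match representative, or skewed toward a particular group?
# Age bands and match probabilities are invented for illustration.
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency

rng = np.random.default_rng(1)
audience = pd.DataFrame({
    "age_band": rng.choice(["18-34", "35-54", "55+"], size=10_000, p=[0.4, 0.4, 0.2]),
})

# Simulate a match that succeeds more often for younger, more digital users
match_rate = audience["age_band"].map({"18-34": 0.65, "35-54": 0.50, "55+": 0.30})
audience["matched"] = rng.random(len(audience)) < match_rate

# Compare the age mix of matched vs. unmatched users
table = pd.crosstab(audience["age_band"], audience["matched"])
chi2, p_value, _, _ = chi2_contingency(table)
print(table)
print("p-value for 'match status is independent of age band':", p_value)
```

A tiny p-value here says the matched half does not look like the audience you started with, which is exactly the skew Ella warns about.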

Jake:

Number three: Targeting lower-value customers. When creating look-alike models, it’s critically important to differentiate between the different types of customers for whom you’re trying to create look-alikes. You need to account for what percentage of your customers fit in each group. For instance, let’s say you’re a mobile gaming company with a freemium model; your business may be reliant on the 3% of your users who spend a lot of money within games. If you just create look-alikes for anyone who downloads and plays, for example, then 97% of your training data is non-spenders. And as a direct result, your look-alike model is going to end up optimizing your marketing to drive a lot of low-value consumers.

Ella:

I think in this case, it’s a very similar situation: if you can sub-segment and understand the parameters with which your business is successful, you can say there are certain models that we’re going to run to find more of these low-volume, extremely high-value customers. Because if you are only targeting there, then you know there’s a handful of incremental customers that you will bring in that will have a nice impact on your business. But then you’re also going to be missing the long tail that does add up. And so then there’s going to be another parallel approach that’s going to help to scale the higher-volume, lower-value audience.
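To make Jake’s 3%-versus-97% example concrete, here is a minimal sketch of one way to keep a look-alike model focused on the rare, high-value group: label spenders directly and weight that class up, rather than treating every player as a positive. The features, thresholds, and the use of scikit-learn’s class_weight option are assumptions for illustration, not any vendor’s actual pipeline.

```python
# Minimal sketch of a look-alike model on a freemium user base where only ~3%
# of users are high-value spenders. Features and thresholds are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 10_000
X = rng.normal(size=(n, 5))                         # per-user features
# ~3% of users are spenders, loosely tied to the first feature
is_spender = (X[:, 0] + rng.normal(scale=2.0, size=n)) > 4.2

# If the positive label were simply "downloaded and played", 97% of positives
# would be non-spenders and the model would chase low-value users. Instead,
# label spenders directly and upweight the rare class so it isn't drowned out.
model = LogisticRegression(class_weight="balanced")
model.fit(X, is_spender.astype(int))

scores = model.predict_proba(X)[:, 1]               # look-alike scores
top = scores >= np.quantile(scores, 0.97)            # top 3% of look-alikes
print("spender rate overall:      ", is_spender.mean())
print("spender rate in top scores:", is_spender[top].mean())
```

The design choice mirrors Ella’s point: one model tuned to the rare high-value segment, potentially run in parallel with a separate approach for the higher-volume, lower-value audience.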

Jake:

Number four: Short-tail bias. Optimizing campaigns can lead you to overvalue platforms or sites or apps to which you’ve served a lot of impressions in the past, just because those sources have provided you with much more data to use for optimization. Sometimes the best inventory sources may not be the largest, and they also may be new to you. Okay, Ella, check my thinking. Does AI bias potentially create a short-tail bias, so we miss out on a lot of great opportunities in the longer tail?

Ella:

It could cause you to lose out on opportunities. It could also cause you to evaluate incorrectly potential partners. And so again, there’s a need to understand who your audience is, if your audience in this case is publishers or apps, and what are the different variations in between them? And how might those different groups respond differently in the algorithms or be treated differently in the algorithms? And therefore, is there a need to kind of create a split in the different approaches to be able to best understand the different publishers or apps?

Jake:

You mentioned not just missing out on, but also measuring incorrectly, certain publishers. What did you mean by that? What would make you miss, or what would make you measure incorrectly?

Ella:

You would measure incorrectly if the kind of requirements or the KPIs or the goals for that particular publisher aren’t aligned to others. Like, let’s say that there’s a publisher whose goal is to reach this niche audience that’s going to help to broaden awareness with the target group, right? Maybe it’s not performing as well, because it’s a newer audience for you. Very quickly, unless the algorithm is fed with the requirements of why this publisher is included or what its goal is, it’s not going to be evaluated appropriately.

Jake:

Okay, let’s do one more. Number five: Early adopter bias. Let’s say you launch a new product. It’s tempting to create look-alikes of the first buyers, or to focus on the versions of the landing page or the purchase cycle that worked for the first customers. The risk here is that you don’t really know yet what your ultimate best customers may look like as the product matures, because it’s still so early in your marketing process. And early adopters may be unique in many ways.

Ella:

So thinking about any kind of new data points, whether it’s a new product or a new audience, it requires a new approach to be able to handle them. Again, AI is very good at the historical, at being able to replicate and find patterns in historical data and project them onto the future, but not as great when we say, okay, we have something brand new, an innovative new product that’s never been tested. It will catch up; the models will start to iterate and run quite quickly. But at the beginning, when there’s so much variance in the data, it’s not going to produce and yield the results that you’re looking for.

Jake:

Ella Chinitz from EY. Thank you so much for joining us.

Rishad Tobaccowala is a senior adviser to the Publicis Groupe and the author of the best selling book, Restoring the Soul of Business: Staying Human in the Age of Data.

Rishad Tobaccowala:

Every algorithm was written by a human being. Every human being has built-in biases. And so the best face recognition software in the United States still believes that most African American people look the same, which they don’t. And we know that that’s an issue, because in many ways it also cannot work in Asia, because it believes all Asian people look the same. However, it works really well in a white community in the United States. Now, that is primarily because people write algorithms, and they feed them the data from the people and the data that they have around them. So telling the story to people of why it sometimes doesn’t work, or what they need to do and why they need to do it, is also important. These two things that I just told you, you can’t do without storytelling, which is how humans behave. And as for how we can make mistakes because of biases, no machine will come in and say, “Machine, you have made a problem because this useless human carbon-based person does not know how to compute.”

Jake:

So in some ways, this part of the story is about picking apart the AI and not just taking it for granted. Just because the AI tells me to do something doesn’t necessarily mean I should do it without questioning anything. You have to question. You have to know: How did the algorithm get to that decision? What data would make that decision potentially change? What are the motivations of the company that’s selling the algorithm to me?

Rishad:

One of the key challenges that people have is how do you extract meaning from the mathematics? And how do you extract value, knowledge, and ideally wisdom from data streams? Over the years, I’ve learned that one approach is what I call the six I’s. By I’s, I mean the letter I, rather than what we have on our faces. The first I is to interrogate the data, just to make sure that the data is actually valid in the first place. The second I is to involve people, ideally a diverse group of people who can look at the data from different aspects, different backgrounds, and different perspectives. Often, we read into the data what we bring from our backgrounds, and if you’re not inclusive, you sometimes miss things. The third is to interpolate the data, to interpolate with what’s going on in the rest of the world, because the data comes from the world. And sometimes you want to find out, why is this data different today? So for instance, if you began to see, in 2020, in a very strange time, that sales started to fall, and you did not understand what was happening on a particular couple of days, maybe there were Black Lives Matter marches. So you need to figure out what else is going on in the wider world that can sometimes explain what’s going on in the data itself. The fourth one is to imagine, and that is to imagine what the data might actually be saying. You should sort of step out of the data and imagine: what if the data was wrong? Or what if something else was happening? So you need to bring the imagination, in addition to involving people, interrogating, and interpolating with the rest of the world. Another thing that I believe is extremely important, in many ways, is to iterate, which is: can you improve from the data? Can you add numbers in different ways? Is there something that this data says that tells us we need to go back and get more data? And in addition to those five, the sixth one is regularly to investigate, and to investigate people’s experiences to figure out if this data actually relates, ideally, to the way people live and the world outside. So it’s always very important to do that, which is to investigate.

Jake:

Thanks again, Rishad.

Rishad:

Thank you very much.

Ray Velez:

Right so, I think the clearest example of a bad AI is AI that is producing bias.

Jake:

Ray Velez is the global Chief Technology Officer at Publicis Sapient.

Ray:

So there’s kind of two ways to it, you know one is, make sure it aligns with your principles, right, and your brand regulations and guidelines. But also, just make sure that you’re driving towards that diversity in your data set to make sure that the training is going to give you the systematic data error rates consistent with your goals.

Jake:

Could you maybe speak for a moment about how AI bias might show up and really get in the way of success for a marketer, just going through the nuts and bolts of everyday marketing?

Ray:

If you look at automotive as an industry, for example, the size of the budget for offers that are aligned with vehicles, geographies, and dealers, is you know, oftentimes nine or ten times the budget that is aligned with marketing. So if you bring those two together and have them work in coordination, again, you know, this is not something where I’m going to build a rule by dealership by zip code by location, but I’m going to ingest, the machine is going to do this for me, ingest supply chain and inventory levels, and adjust not just whether or not to place the ad in that zip code, but also what type of offers, right, so building in an exception, for example, it says, “Okay, I want to align offers and incentives and ads with inventory levels, unless it’s a halo vehicle”, right, a vehicle that drives somebody to a dealership, whether or not they’re gonna buy that vehicle, right. So there’s also you know important exceptions to the rule. But I think that’s a massive opportunity, which is really just aligning to different parts within an organization. You know, slightly further is also the ability to run tests of your tests, to some degree, right? So are the, and you’ll see a lot of the great watchdog agencies out there doing this. But the ability for external verification that your models are doing what you want it to do, right, so I spent some time with a ProPublica reporter who is very focused on understanding, you know, propensity models that policing systems use as an example and how do we know if they’re producing outputs consistent with our societal goals? Right. And so that became a tricky conversation because well, I don’t want to reveal the algorithms because then somebody can game it, but I also want to police the policing, right and so there’s really something interesting even beyond the rules plus machine learning, exception based, but also how do I test my testing?

Jake:

You said something earlier, Ray, that I just wanted to go back to for clarification: you said the training data sets need to be diverse. Could you possibly come up with a couple of examples, from the nuts and bolts of marketing, of what diverse might look like?

Ray:

When you think about just the availability of all potential customers, for example, within your data set. Because maybe you’re creating advertising that drives people to both retail and digital, and the availability of your data set skews more digital, but you have to drive customers with your digital advertising to retail. And so how do you fill the gap? One of the big examples I’m dealing with with a large client right now is more of the divide between digital and physical, as in, maybe there’s a rural location that doesn’t have a high degree of digital interactions, but I still need to reach those potential customers, right? And so how do you look at your data and the way you would use it to reach, say, individuals in an urban landscape, and then understand, well, the same attributes or features that go into my model for identifying potential customers in urban landscapes are different in retail. So what are the different features I need to add to my data set, right? And then there’s different ways you can add that data, right? Obviously, it becomes an exercise of going out and trying to find ways to get the data. But there’s also ways to synthetically add the data, or use techniques that fill gaps in the data set. But you want to make sure that you’re holistically reaching the potential customers you’re looking to attract.

Jake:

Thanks, Ray.

Ray:

Thanks, Jake.
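To put one version of Ray’s gap-filling point into concrete terms, here is a minimal sketch of resampling a digital-heavy, urban-heavy data set toward the customer mix a campaign actually needs to reach. The segment names, shares, and resampling approach are assumptions for illustration, not Ray’s actual method; importance weights or synthetic data generation would be alternatives.

```python
# Hypothetical rebalancing: the collected data skews urban/digital, but the
# campaign also needs to reach rural customers. All shares are invented.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)

# Collected data: 90% urban rows, 10% rural rows
df = pd.DataFrame({
    "segment": rng.choice(["urban", "rural"], size=5_000, p=[0.9, 0.1]),
    "feature": rng.normal(size=5_000),
})

# The population the campaign actually needs to reach: say 70% urban, 30% rural
target_share = {"urban": 0.70, "rural": 0.30}
observed_share = df["segment"].value_counts(normalize=True)

# One simple fix: resample so the training mix matches the target mix
weights = df["segment"].map(lambda s: target_share[s] / observed_share[s])
rebalanced = df.sample(n=len(df), replace=True, weights=weights, random_state=0)

print("before:", df["segment"].value_counts(normalize=True).round(2).to_dict())
print("after: ", rebalanced["segment"].value_counts(normalize=True).round(2).to_dict())
```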

Jake:

Before we go: Jeremy Lockhorn and I were thinking that one of the best ways to experience AI bias ourselves is right in our own homes. So with that said, we set out to find some examples of bias in the algorithms behind popular voice assistants, like Siri, Alexa, and Google Home. You’re about to hear a few highlights from our experiment.

Hey, Jeremy, what will happen if I ask the same question in two different ways, leading Google to the answer that I want?

Jeremy Lockhorn:

So you’re going to inject bias into your question and sort of lead the witness a little bit, this will be interesting.

Jake:

Hey, Google, are women smarter than men?

Google:

According to Wikipedia, all or most of the major tests commonly used to measure intelligence have been constructed so that there are no overall score differences between males and females.

Jake:

Hey, Google, are men smarter than women?

Google:

According to Wikipedia, differences in brain physiology between sexes do not necessarily relate to differences in intellect. Although men have larger brains, men and women typically achieve similar IQ results.

Jake:

Alexa, who’s smarter men or women?

Alexa:

According to an Alexa answers contributor, women.

Jake:

[Laughing] That’s hilarious that one person, one person decided for all of Alexa.

Jeremy:

That’s amazing.

Jake:

It’s just a complete mess. It’s like, totally different answers based on what one person wrote.

Jeremy:

Yeah, it’s really interesting, because, you know, if you think about it, what we’re exploring is only partially the bias of the voice assistant. It’s also partially the bias of the way the web search is structured.

Jake:

When we ask the question a certain way, it looks for a different set of keywords in a different order in historical text, and it finds more or less based on the exact phraseology that you use. So it’s not that Google decided that, you know, men are four IQ points higher, just that that’s what’s been written.

Wow. For every question we ask about race or religion, it’s as if a human being at Google, or Apple, or Amazon overrode the algorithm to avoid their company getting in trouble for AI bias.

What does Siri or Alexa say?

Siri:

Hmm, I’m not sure.

Jake:

Because I’ll bet that if you left the algorithm to its own devices, it would come up with some answer based on historical text.

Jeremy:

Yeah, let’s try a different race question and see if they’ll go up to that.

Siri:

Hmm, I’m not sure. Sorry. I don’t have any information about that.

Jake:

Just totally unwilling to take on the subject.

Google:

I don’t understand.

Alexa:

Sorry. I’m not sure. 

Google:

My apologies. 

Alexa:

Sorry. I don’t have any information about that.

Jake:

Wow, it totally cut it off.

Jeremy:

Alexa, who is a better driver, a man or a woman?

Alexa: 

Here’s something I found on the web. According to nairaland.com, in general, women are better drivers than men.

Jake:

Wow. What was the source on that?

Jeremy:  

An old-school bulletin board that anybody can post to.

Jake:

Google? Should immigrants learn English?

Google:  

On the website governing.com, they say immigrants come to our country for better lives. Those who learn to speak English are propelled toward the American dream. Those who don’t learn the language are destined to lurch on the periphery of society, subject to the whims of political pandering and government dependence.

Jake:

Ouch. What website was that again?  

You and I were specifically looking to poke holes in the answer, and yet neither of us even picked up on what website Google was referencing. And it turns out it’s a Scientology website. Kind of surprising, huh?

Jeremy:

You almost ignore the source and just tune into the answer.

Jake:

In a text search, you get ten results on the first page. And if the first one is completely ludicrous or not reliable, there are nine others to compare it to.

Jeremy:

Yeah. And it’s really interesting, too, because there’s a whole, like, offshoot of search engine optimization that has become about getting your website in position zero, they call it, which is that choice that the voice assistant makes. You know, it’s a whole lot of different SEO techniques to optimize against that, right?

Jake:

When we searched for these items, the sources that were in position zero, as you call it, were totally random things neither one of us had ever heard of, and at first glance maybe shouldn’t have heard of.

Jeremy:

Yeah, that’s a fair point. Although I’m not sure what marketer would want to optimize for those questions we’re asking.

Jake:

But what about, Hey Google, which is better? Pepsi or Coke?

Google:

Here’s a summary from the website wideopeneats.com: Pepsi packs more calories, sugar, and caffeine than Coke. Pepsi is sweeter than Coke, so right away it had a big advantage in a sip test. Pepsi is also characterized by a citrusy flavor burst, unlike the more raisiny vanilla taste of Coke.

Jake:

Can you imagine what the CMO of Coke thinks about the fact that every single Google Home device in the world, if asked which is better, Coke or Pepsi, will give the answer that some random blogger on some random web page thought?

I’d like to thank my guests Rishad Tobaccowala, Shelly Palmer, Ray Velez, Ella Chinitz, and, of course, Jeremy Lockhorn.

If you’re wondering what determines good AI or bad AI, well, have you ever trained a dog?

Uh oh, I have a barking dog. Oh god, dog. Come on, dog. Um, I’m gonna have to put you on hold…

On the next episode of FIVE: AI For Marketers, how to train your algorithm. It’s a lot like training a dog. When it works, the dog looks brilliant. When it doesn’t, little tip here, it’s not a reflection of the dog. Hey, and if you like the show, please write us a comment or give us a rating on your favorite podcast listening platform. We’d be super grateful. It really does help people discover the podcast. Got to work the algorithm, right? Thanks for joining us.

The FIVE podcast is presented by Ericsson Emodo and the Emodo Institute and features original music by Dyaphonic and the Small Town Symphonette. Original episode art is by Chris Kosek. Social media and other promotional stuff is wrangled and sculpted by Lyon Solntsev. This episode was edited by Justin Newton and produced by Robert Haskitt, Liz Wynnemer, and me. I’m Jake Moskowitz.
