Jake Moskowitz and his guests explore algorithm training and the importance of good training data and practices in AI-powered marketing solutions.

We might say an algorithm is good or bad, but in reality, an AI algorithm is either well-trained or not well-trained. Just like a dog.

You don’t need to be a professional dog trainer to have a dog as a copilot, and you don’t need to be a professional algorithm trainer to have an algorithm as a copilot. But it sure helps to understand how they think and how they learn. Understanding how algorithms are trained can help marketers understand how algorithms think, how they work and whether or not a particular AI solution is a good choice for specific marketing purposes.

Hear from industry experts and thought leaders: 

  • Charlie Archibald, VP of Data Science at MediaMath
  • Kyra Sundance, nationally renowned dog trainer, best-selling author and performer
  • Jeremy Lockhorn, Global Head of Partner Solutions at Ericsson Emodo, speaker and mobile marketing expert

Dog training expert Kyra Sundance helps Jake compare algorithm training to dog training.

Charlie Archibald, VP of Data Science at MediaMath, shares thoughts on how to evaluate AI-powered marketing tools, misconceptions about AI in marketing and key factors that come into play when training an algorithm.

Jake and Jeremy Lockhorn call on marketing colleague Lyon Solntsev to help them look for truth and meaning in vendor marketing claims.

The Five List
Five tips for training an AI algorithm (it’s a lot like training a dog).

  1. Training diversity rules out false patterns.
  2. Train one trick at a time.
  3. Training is never done.
  4. Positive reinforcement.
  5. The training has to be consistent.

In this episode, Jake references the new book by Kyra Sundance, bestselling author of “101 Dog Tricks.” Kyra’s new book is “The Joy of Dog Training.”

The Five podcast is presented by Ericsson Emodo and the Emodo Institute, and features original music by Dyaphonic and the Small Town Symphonette. Social media and promotional content was composed and conducted by Lyon Solntsev. This episode was edited by Justin Newton and produced by Robert Haskitt, Liz Wynnemer, and Jake Moskowitz.


Transcript of S2 E3: How Algorithms Work 

Kyra:

Well, there’s the saying in dog training, it’s always the trainer’s fault.

Jake:

Let’s talk AI. Welcome to FIVE, the podcast that breaks down AI for marketers. This is episode three, How Algorithms Think. I’m Jake Moskowitz.

These days you can’t read the morning news without seeing at least one story about the economy. And it’s usually not good news. But you know what’s booming? The dog economy. Seriously. Since we all started sheltering at home, it seems like everybody’s gotten a dog. The demand for dogs has gone through the roof and nearly doubled in some markets. Chewy.com, a pet e-commerce site, reported a 46% year-over-year increase in sales. A pet insurance company called Trupanion reported a 29% increase. That’s good for dogs, good for dog owners, good for manufacturers of dog bowls, dog food and dog toys. And of course, it’s good for dog trainers. Not many dogs come preloaded with tricks or managed behaviors. That gives us an analogy we can all relate to.

Because in this episode, we’re going to talk about how to train an algorithm. It’s a lot like training a dog: when it works, the dog looks brilliant. When it doesn’t, it’s not a reflection of the dog. It’s a reflection of the dog’s training.

Wherever you’re listening to our show, you’re likely to see a dog or two. One dog might be sitting quietly looking at the owner waiting for a command, another may be tugging, growling, barking or chewing on something it’s not supposed to, just totally uncontrollable and not responding to its owner at all.

What’s the first thought that pops into your head? My first thought is that the first dog is well trained, and the second dog, not so much. And that’s kind of a perfect analogy for AI, because with AI, it’s all about the training. We might say an algorithm is good or bad. But in reality, the algorithm is either well trained or not well trained, just like a dog. Understanding how algorithms are trained can help you to understand how algorithms think, how they work and whether or not a particular AI solution is a good choice for your marketing purposes. And here’s the thing: you don’t have to get down into the weeds to get it. You don’t need to be a professional dog trainer to have a dog as a copilot. And you don’t need to be a professional algorithm trainer to have an algorithm as a copilot. But it sure helps to understand how they think and how they learn.

Are there differences between training a dog and training an AI algorithm? Yeah, but just a few. And those differences are relevant too. For example, dogs have instincts; algorithms don’t. So when you get a dog, it comes with a set of behaviors and assumptions, like jumping on you is an effective way to get your attention. It takes some work to change those, to get the dog to unlearn instincts you don’t like. If you do nothing to train the dog, the dog still does stuff, like it or not. If you don’t do anything to train an algorithm, the algorithm will do nothing. It’s totally useless. It only does something if you train it.

Here’s another: dogs are always learning. AI algorithms only learn when they’re given data. Nothing will change with an algorithm unless you proactively feed it data and teach it something. When you hear the terms training data or training set, that’s the data that’s used to teach the algorithm to do its thing. Also, you know, dogs kind of choose their reward based on what they care about. Maybe it’s a treat, or a belly rub. Meanwhile, we choose what dogs have to do to get the reward.

Algorithms are the opposite, they have no goal of their own. Trainers tell algorithms what the goal is, and the algorithm tells them what to do in order to achieve it. The algorithm doesn’t care, for instance, whether a consumer converts or not. But once it’s trained to look for conversions, the algorithm determines what needs to be done to get one of those conversions. Those are some differences.

Now let’s talk about training an algorithm like training a dog. You’re a marketer or business leader. Your goal is to drive business forward, not to get lost in the technical rabbit holes of AI. But in order to know what you’re working with, you do at least need to know where the rabbit holes are. So let’s stay surface level here. Let’s pretend for a few minutes that instead of benefiting from an algorithm, you’re the one training it. Consider this episode your mini training book for your algorithm. Dog training books, I’m sure, have lots of tips, but this little book has just five. And just to ensure that our dog training analogy is completely legit, I’ve invited a nationally renowned expert, Kyra Sundance, to help me make my case. Hey, Kyra, thanks for joining us.

Kyra:

I’m happy to be here.

Jake:

Kyra, your work has taken you to some pretty incredible places. Can we start there for a minute?

Kyra:

I’m a professional dog trainer. And I spent my career doing professional stunt dog shows at NBA halftime shows, in circuses and at corporate events. I’ve also had considerable experience training dogs, teaching them to do tricks and to be better-behaved members of our family. We’ve performed at halftime shows and circuses and Major League Baseball games. In fact, we had one show where we got a call from a representative of the king of Morocco, who flew us out to Marrakech to do a performance at the Crown Prince’s birthday party. I think the Crown Prince was like four or five at the time. So we were just entertainment at his birthday party.

Jake:

And now you’re on the FIVE podcast. This is great. Let’s dive in. 

Algorithm training tip number one, training diversity rules out false patterns. 

In our last episode, we talked about AI bias. So this will kind of sound familiar, both algorithms and dogs are always looking for patterns in the data that they’re ingesting. So you have to be incredibly careful to make sure the data that you’re providing a dog or an algorithm doesn’t have patterns in it that you didn’t intend. That usually comes from having certain examples missing. So you really need to make sure your training data is diverse and thorough. And by thorough, I mean, you need to have a lot of examples that are different from one another, with every example well represented.

Kyra:

Yeah, that happens all the time. In fact, there’s a saying that dogs are the world’s best associative learners. So dogs are very good about making associations. My dog, in fact, used to have an association that every time I put on my running shoes, my dog got all happy and ran in circles, because she thought we were going for a walk. She thought running shoes equals a walk. I didn’t even know I was teaching her this, but she picked up on that subtle signal. I think a common scenario is that we inadvertently teach our dogs bad behavior; you have to look back to the cause and effect and realize you actually taught that association. For the running shoes association, where my dog always thought we were going on a walk, the fix was to give my dog additional scenarios where running shoes don’t equal a walk. So sometimes I would put on my running shoes and do vacuuming, or put on my running shoes and, you know, do the dishes or something different. So my dog then had to refine her association. And it’s not always that running shoes equals walk; it might be refined to running shoes plus my owner picks up her water bottle equals a walk.
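Kyra’s running-shoes story maps neatly onto training data. As a purely hypothetical sketch (nothing like this appears in the episode), a naive frequency learner draws the same false association, and only diverse counter-examples rule it out:

```python
# Hypothetical illustration: a naive learner estimates P(walk | running_shoes)
# from (signals, walk?) examples. With biased data, shoes alone look predictive.

def p_walk_given_shoes(examples):
    """Fraction of shoe-wearing examples that ended in a walk."""
    with_shoes = [walk for signals, walk in examples if "running_shoes" in signals]
    return sum(with_shoes) / len(with_shoes)

# Biased training set: running shoes always precede a walk.
biased = [
    ({"running_shoes", "water_bottle"}, True),
    ({"running_shoes", "water_bottle"}, True),
    ({"slippers"}, False),
]
print(p_walk_given_shoes(biased))   # 1.0 -- shoes alone "predict" a walk

# Diverse training set: shoes also appear without a walk (vacuuming, dishes),
# so the learner is forced to refine the pattern to shoes *plus* water bottle.
diverse = biased + [
    ({"running_shoes", "vacuum"}, False),
    ({"running_shoes", "dishes"}, False),
]
print(p_walk_given_shoes(diverse))  # 0.5 -- the false pattern is ruled out
```

The names (`running_shoes`, `water_bottle`) are invented for the sketch; the point is only that missing examples create unintended patterns.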

Jake:

Training tip number two, train one trick at a time.

Both dogs and algorithms are really only good at one thing at a time. So if you immediately try to teach a dog to sit in a loud park from 26 feet away, it’s a hopeless venture. You have to train each one of those three things individually before you ever start putting them together. And by the time you’re putting it together, you’re really putting together multiple individual tricks. It’s not one trick. Kyra, can a dog be trained to be great at sniffing out drugs and truffles and be a great athlete?

Kyra:

Generally, if you want an expert in any one of those cases, you’re going to specialize in just one sport or one area. A dog can be a jack of all trades, but sometimes to the detriment of every one of those sports. So if you want a dog to be the best he can be as a drug-sniffing dog, you teach him only that one thing.

Jake:

And the same thing is true with an algorithm. You can’t make one algorithm that is good for everything. You can’t make an algorithm that’s good for brand lift and driving online conversions and driving in-store traffic and driving completed video views, because those are all very different things that each require a different algorithm with different training data. One reason algorithms can’t replace humans is because they don’t have the reasoning that humans do. So they lack the ability to put multiple things together all at once. Algorithms are good at one thing, but that one thing they can hone better than any human, or even better than any group of humans. There’s a great term for what happens if you try to teach an algorithm a second thing. It’s called catastrophic forgetting. Basically, if you take an algorithm and try to expand it to do something new, the algorithm immediately forgets the first thing it learned. That’s true for dogs too. Key point here: if you see claims that an algorithm is great at a lot of things, that’s a red flag. Each thing it’s supposedly good at needs to be evaluated individually.
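Catastrophic forgetting is easy to demonstrate at toy scale. The sketch below is an invented illustration, not anything from the episode: a single perceptron learns task A perfectly, and then training the same weights on a conflicting task B wipes out what it learned.

```python
# Hypothetical toy demo of catastrophic forgetting with a single perceptron.

def train(w, data, epochs=10, lr=0.1):
    """Online perceptron updates over (features, label) pairs."""
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
            w = [wi + lr * (y - pred) * xi for wi, xi in zip(w, x)]
    return w

def accuracy(w, data):
    hits = sum((1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0) == y
               for x, y in data)
    return hits / len(data)

# Each example is ((feature, bias), label). Task B reverses task A's labels.
task_a = [((1, 1), 1), ((2, 1), 1), ((-1, 1), 0), ((-2, 1), 0)]
task_b = [((1, 1), 0), ((2, 1), 0), ((-1, 1), 1), ((-2, 1), 1)]

w = train([0.0, 0.0], task_a)
print(accuracy(w, task_a))  # 1.0 -- learned task A

w = train(w, task_b)        # now train the same weights on task B...
print(accuracy(w, task_a))  # 0.0 -- ...and task A is completely forgotten
```

Real models forget less abruptly than this deliberately conflicting pair of tasks, but the mechanism, new updates overwriting old weights, is the same.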

Training tip number three, training is never done.

Both dogs and algorithms are extremely perceptive, much more so than humans. They pick up on tiny differences or variations, and if there are major changes, you might have to just start over. That means you have to constantly be retraining both. So for a dog: if you train your dog to sit, and then you get comfortable and assume the dog knows how to sit, and you stop rewarding the dog for sitting, the dog will eventually stop sitting, because it’ll think, well, sitting doesn’t get me a treat.

Kyra:

You’re never done training. It’s a constant process of reinforcement. The environment that your dog was initially trained in will never remain exactly the same, so we have to give updated feedback all along the way. For example, if you have a child, and you initially trained your dog how to behave around that baby, then as the baby grows and starts to walk and talk in full sentences, you’re going to have to keep giving feedback to your dog on how you want the dog to react.

Jake:

Algorithms are the same, in the sense that the market is always changing, even if we as humans don’t perceive small differences. For instance, the differences in the way bid requests look in the programmatic universe from one month to the previous, or the week of Valentine’s Day versus the week before Valentine’s Day, or during COVID versus pre-COVID: algorithms can see the difference. So with an algorithm, you have to keep feeding it data or it will stop learning, and at some point, it will be irrelevant. An effective AI program isn’t a one-time thing. It’s a complex, ongoing initiative, always questioning, doubting, reevaluating and optimizing in a search for perfection.

Training tip number four, positive reinforcement.

Kyra:

In dog training, learning occurs with successes. So if you’re training a dog to do a trick, maybe shake hands, and you’re trying and he doesn’t do it, and you’re trying and he doesn’t do it, and he doesn’t do it, then at the end of five minutes, what has that dog learned? He’s learned nothing. The dog only learns if he gets a success and he gets rewarded. That’s why in every stage of dog training, we set goals small enough that the dog can achieve them. We want the dog to have a ton of positive outcomes so he can make an association between cause and effect and start to find a pattern.

Jake:

Algorithms are exactly the same. Let’s say you’re training an algorithm on online conversions, and the conversion rate is 2% of clicks, with a click-through rate of perhaps 0.5%. Although that’s not outrageous for online conversions, that calculates to a very small percentage of ad impressions. Mixing metaphors here: if you’re training on a small number of events, and there are only a few needles, or positive conversions, in the haystack, the algorithm isn’t going to work very well, because it doesn’t see enough positive outcomes to find strong patterns. So if haystack needles are rare, you need bigger haystacks. Finding those haystack needles is the positive reinforcement algorithms need to learn.
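The funnel math is worth making explicit. Using the episode’s own numbers, a 0.5% click-through rate and a 2% conversion rate on clicks:

```python
# Back-of-the-envelope math from the episode's numbers: positive outcomes
# (conversions) are vanishingly rare per ad impression.

impressions = 1_000_000
ctr = 0.005    # click-through rate: 0.5% of impressions become clicks
cvr = 0.02     # conversion rate: 2% of clicks become conversions

clicks = impressions * ctr
conversions = clicks * cvr

print(int(clicks))                         # 5000 clicks
print(int(conversions))                    # 100 conversions
print(f"{conversions / impressions:.4%}")  # 0.0100% of impressions convert
```

One hundred needles per million-impression haystack is why rare positives demand much larger training sets.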

Training tip number five, the training has to be consistent. Both dogs and AI algorithms know only what they’ve been shown. So it’s critical that the training data set is a truth set. And it has to be consistent. For instance, if you teach your dog one day that sitting gets him a treat, but the next day you forget to give him a treat when he sits, he’s going to get confused, and you won’t get the results you’re looking for.

Kyra:

You have to be very consistent. If your dog is not allowed to jump on you when you’re wearing good clothes, then he shouldn’t be allowed to jump on you when you’re wearing play clothes. This consistency will make it easier for him to learn.

Jake:

What happens if you’re not consistent?

Kyra:

Well, like everything in life, you’re going to get out of it what you put into it. So garbage in garbage out, right? The best animal trainers are the ones that are not sloppy, but instead think through what it is that they want and how they’re going to get it.

Jake:

Algorithms are the same. And the surest way to know that your training data is consistent is ensuring it’s the truth. Having a truth set is a huge differentiator in the AI marketplace. All in, there’s a key principle here. Blame the trainer or the data, not the algorithm.

Kyra:

Well there’s the saying in dog training. It’s always the trainer’s fault.

Jake:

Kyra, thank you so much for joining us.

Kyra:

Thanks for having me, Jake. It was a pleasure.

Jake:

By the way, if you’ve got a dog, and these days you probably do, be sure to pick up Kyra’s new book, The Joy of Dog Training. It’s a thorough book of rules for training your dog Kyra’s way: simple, step by step and positive.

Dogs and algorithms can learn pretty much anything. You can train a dog to attack people if you really want to, and the same is true for an algorithm. You can train an algorithm to create fraud or to spread malware, for example, just as effectively as you can teach it to detect fraud or fight malware. When you see an algorithm that’s not performing the way you want it to perform, remember: it’s not about bad algorithms and good algorithms. It’s about the quality of the training set and the quality of the trainer. Surface-level stuff. Okay, so: diversity, one trick at a time, continuous training, positive reinforcement, truth and consistency. Five tips for training an effective algorithm.

Okay, you can put your marketing hat back on now. And now that you’re dressed for the occasion, let’s put all of that in a purely marketing context. Charlie Archibald is VP of data science at MediaMath, a leading global demand side platform. Charlie, thanks for taking time out to talk with me.

Charlie:

Thank you so much for having me, I’m excited to be here.

Jake:

Charlie, maybe a good place to start here is that AI is really becoming a key component of so many of the tools marketers rely on every day. Of course, MediaMath has a number of algorithms that make it work, but how much do marketers really need to know about AI algorithms to be proficient in their jobs?

Charlie:

Yeah, that’s a great question. I would say that marketers probably don’t need to get bogged down in the weeds of what particular technique or algorithms are being used. You know, I think what ultimately is important for the marketer is whether the particular AI solution that they’re working with is driving the desired business outcome. At the end of the day, that’s what’s important to them. Is it improving the ROI for a particular campaign? Is it freeing up the team’s time through automation to focus on more important things? If it’s delivering the results that it promises, is it transparent in how it delivers them, so that the marketer can then, you know, take those insights and apply it to other areas of their business?

The other area that I think is important is for the marketer to understand if what they’re purchasing is in fact AI, right? There’s a tremendous amount of hype around AI everywhere you look, everyone wants to utilize AI in their business, everyone thinks they need it, and not so coincidentally, suddenly, everybody is also selling it, whether or not what they have under the hood, is truly in fact, AI. And so I think that can be a tricky thing for marketers to navigate because, you know, everything sounds great from the marketing material. But understanding what you’re actually getting is a little harder to ascertain sometimes.

Jake:

Yeah, that’s exactly where I was headed. I’ll ask you a clarifying question about both of those. The first one: I totally agree that marketers don’t need to get bogged down in the details of how exactly an algorithm works. That’s kind of the core theme of the show. The key is whether the algorithm is moving the needle. But the scary thing for me is, a lot of times the needle itself is AI-based, like the example of attribution algorithms that are giving credit to specific consumer touchpoints. You focus on the results, but often, maybe even more often in the future, the measurement itself will be AI-based. So it becomes harder to just decide whether AI is good based on the results, if the results themselves are essentially AI guesses.

Charlie:

Yeah, I guess you get into a little bit of a chicken-and-egg kind of situation there. But I understand the point that you’re trying to make, particularly when it comes to, say, attribution, where AI may be deciding where to allocate the credit. But you can still run an A/B test to understand: if in one case I’m utilizing this AI solution and in another case I’m not, and I am attributing data in the same manner, am I seeing lift by making use of this or not? And does it justify the investment that I’m putting into it?

Jake:

That’s really interesting. So maybe one takeaway is, AI can get really complicated, and maybe one key as a buyer of AI is to not get bogged down in the complication. Just think basic, like you might with anything else that’s not AI: use very simple A/B tests to figure out if the AI is working, rather than a really sophisticated AI-based determination of whether your AI is working, right?
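A minimal sketch of that idea, with invented numbers (the `lift_and_z` helper is hypothetical, not a MediaMath tool): compare a control arm without the AI solution to a treatment arm with it, attributed the same way, and check whether the lift looks real.

```python
# Hypothetical A/B evaluation of an AI solution: relative lift plus a
# standard two-proportion z-score on conversion rates. Numbers are invented.
import math

def lift_and_z(conv_a, n_a, conv_b, n_b):
    """Relative lift of arm B over arm A, and a pooled two-proportion z-score."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)                 # pooled conversion rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error of the difference
    return (p_b - p_a) / p_a, (p_b - p_a) / se

# Arm A: no AI. Arm B: AI-driven bidding. Same attribution for both.
lift, z = lift_and_z(conv_a=100, n_a=50_000, conv_b=130, n_b=50_000)
print(f"lift: {lift:.0%}")  # lift: 30%
print(f"z: {z:.2f}")        # z: 1.98 -- borderline significant at the 95% level
```

The point of keeping the test this basic is exactly Jake’s: you don’t need an AI-based measurement to judge whether the AI justifies its cost.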

Charlie:

Right. I mean, while the AI itself may be complicated, it’s supposed to try to simplify things for you at the end of the day, right? Whether that’s through automation or driving performance with less effort on the end user or marketer, right? So it should hopefully allow the marketer to focus on some of those bigger kind of business decisions and concepts without, you know, worrying about the nitty gritty of what levers are being pulled by the algorithm, or what decisions are being made down at the impression level that’s too far down in the weeds.

Jake:

Awesome. But that said, you also mentioned that everybody is selling AI today. And it’s important to understand whether their claims are real or not. How do marketers spot red flags or certain terms that indicate whether companies are telling the truth or exaggerating or maybe hiding something?

Charlie:

Yeah, I think, obviously, it depends on what type of AI solution you are considering working with, right? And it’s always helpful, if possible, if you can bring somebody to those conversations that has more of that technical background, whether it’s a data scientist, or, you know, a statistician, or whomever. But at the end of it, I think it’s never safe to take the marketing materials at face value, right? You’ve got to probe deeper and ask questions. And as you go through that process, you know, you hope that the red flags kind of start to show up.

And I can give an example here of one of my experiences with this earlier in my career, where I was evaluating a vendor who was supplying a probabilistic identity solution. And part of the cost structure of working with that vendor had to do with the scale of identity matches that they were creating and sending back to you. And so because it was probabilistic, presumably on the back end they had some sort of, you know, dial that they could tweak, in which case you’re lessening your scale but presumably getting improvement in quality. And I remember having a conversation and asking, well, in addition to the matches, can you pass me some sort of score indicating the quality of the match, so that I can study it and understand it from my own point of view? Is there some sort of score threshold at which point, if I look at the trade-off between performance and additional scale, it just doesn’t make sense? Like, yeah, I’m getting additional scale, but the performance just isn’t worth it.

And the response that I got back, and I think I was talking to the CTO of the company at the time, was: well, everything that we do is of the highest quality, so we don’t send a score. Everything’s of the highest quality, right? And so when I got that response, that you’re not going to give me any insight into the quality of what you’re producing, that was something that sent up red flags for me. So I don’t know that there’s really any blueprint or specific question that you have to ask, but I do think, as a general practice, it is worthwhile to probe into the offering of the AI solution to get a sense of whether you do get any red flags, or if you do have any questions about the authenticity of what it is that you’re buying.

Jake:

That’s a great transition, because the next question I was going to ask you was, what are some of the misconceptions that you think people have about AI? Your example points to one of the biggest misconceptions in my view, which is that AI is binary: either yes or no. Yes, I should bid on it; no, I shouldn’t bid on it. But the reality, as you described really well in this example, is that it’s not binary, it’s a score. And it’s up to a human to decide: what score do I feel comfortable with? Because it’s basically a trade-off of scale or quality, as you described really well.

Charlie:

So with respect to some of the common misconceptions, I think another one of them, honestly, is that AI is some sort of silver bullet or that it’s just kind of magic, right? And it sounds kind of silly to say that, but I think when you consider the marketing hype surrounding AI, and its ability to do all of these unbelievable things, and you couple that with the fact that many people don’t have a really strong understanding of what goes on inside the black box, so to speak, it can start to feel a little bit, you know, magical. And I think the other kind of common misconception, which is tied to that, is how quickly AI solutions can be generated.

There’s a lot that goes into building an AI or machine learning solution. And so if you take marketing for example, right? We’re dealing with these massive volumes of data, you want to be able to make real time decisions on that data. Right off the bat, you have a huge data engineering and architectural challenge. And that’s before you even get to the machine learning side of things. It’s not going to happen overnight. There’s a lot of research, trial and error, iteration that goes into producing and maintaining a quality solution. So I think, unfortunately, sometimes what can happen is because there’s this misconception that, well, it’s kind of magic and you know, just flip a switch and you make it work, right? You get this misalignment between kind of this idealistic kind of business notion of what AI is going to do for me. And the reality that, you know, it’s something that takes time and a lot of work and iteration to get to a good solution.

Jake:

I would add to that, and I’ll ask, if you think I’m right here, that iteration is not just about making an algorithm good. It’s also about keeping an algorithm fresh, right? Because the real world is constantly changing. And almost by definition, any algorithm is at all times outdated, because it’s based on data from before. And the data now is the data now, and you have to continually access training data, and be constantly retraining just to stay accurate, let alone get more accurate.

Charlie:

That’s 100% correct. And, you know, at MediaMath, we update our algorithms every single day for exactly the reason that you just described.

Jake:

And one of the key things that makes for good training data is a high level of confidence that you’re going to have access to it long into the future so that you can continue to refine the algorithm.

Charlie:

Yes, right. That’s exactly right. If you are changing the inputs into your model, you’ve got to build a new model, right? Like, if I have a model that’s reliant upon a particular data point or input feature that I had yesterday, and that goes away for whatever reason, I have to update the model accordingly. But you’re right, in terms of the question of what makes for good training data. Yes, data consistency is important. The data that you’re training on has to be representative of the population that you’re trying to predict. To give a good example here: we’re approaching the fall, and we’ve got Black Friday and Cyber Monday coming up in the not-too-distant future. We know that during those periods of the year, marketers behave very, very differently, particularly those in the retail space, right? There are going to be huge influxes of spend, CPMs are going to go wild, conversion rates are all over the place because of the promotions that are being run. If I’m trying to predict what are reasonable bid prices during that period based off of data from today, it’s not going to do a fantastic job, right? Because today, and now, is not very representative of that particular time in the future.

And so that’s one area. The other is accuracy, and I think that within marketing, this is an important one, right? If you’re partnering and bringing in data from other places, or even the data that you’re collecting yourself, accuracy of that data is really important. And we’re talking about an ecosystem that’s incredibly complex, where you’ve got huge volumes of data working its way through data pipelines. If you have failures, or if a conversion tag malfunctions, that can introduce error into your data set and skew your predictions. If you’re working with a third party that’s providing you, say, demographic data, and you’re trying to use that as an input to decide how likely somebody is to take some action with respect to a particular brand or product, and it’s probabilistic in nature and happens to be not very accurate, then you’re going to be making decisions based off of inaccurate data, and you’re not going to get good results, right? So all of those things, I think, go into that idea of what makes for good training data. And the last one, and probably the most obvious one, is you want to have a lot of it.
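Charlie’s Black Friday point can be sketched with a deliberately tiny “model”. This is an invented illustration, not MediaMath’s approach: the model just predicts the average bid price it saw in training, and it goes badly stale when the market shifts.

```python
# Hypothetical illustration of training data going unrepresentative: a model
# frozen on quiet-season CPMs badly mispredicts holiday-season CPMs.

def train_model(prices):
    """The 'model' is simply the mean bid price observed in training."""
    return sum(prices) / len(prices)

def error(model, prices):
    """Mean absolute error of the constant prediction."""
    return sum(abs(model - p) for p in prices) / len(prices)

january = [1.0, 1.1, 0.9, 1.0]       # invented quiet-month CPMs
black_friday = [2.9, 3.1, 3.0, 3.2]  # invented holiday-spike CPMs

stale = train_model(january)
print(round(error(stale, black_friday), 2))      # 2.05 -- way off

retrained = train_model(black_friday)
print(round(error(retrained, black_friday), 2))  # 0.1 -- representative data helps
```

A real bidder is vastly more complex, but the failure mode is the same: predictions are only as current as the data they were trained on.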

Jake:

Charlie Archibald, MediaMath’s VP of Data Science, thanks so much.

Charlie:

Yeah, thank you, Jake. It was a pleasure.

Jake:

Before we go, Jeremy Lockhorn and I are going to put what we learned today into action, and we’ve invited a special guest to help us out: Lyon Solntsev, who works behind the scenes to make this podcast and get it out there. So Jeremy, Lyon’s brought us a few marketing phrases, real phrases that are used by some martech and adtech companies to describe their AI-based solutions. You and I will chime in with a few thoughts on whether they’re helpful, accurate, responsible, that sort of thing. Lyon, ready to get started?

Lyon:

Yep, let’s do it. Our AI technology allows brands and agencies to extract key signals outside of the obvious that contribute to the consumer journey from unaware of a need all the way through to conversion.

Jeremy:

So you’re going to solve all of my problems, basically. You know, I always get a little skeptical when there’s a promise made that a machine is going to identify something that’s not obvious, you know, without a lot of specifics behind it. And, you know, it kind of promises the world, right? From every part of the customer journey, as opposed to focusing on a very specific piece of it; these algorithms are more likely to be trained against specific pieces as opposed to the entire thing.

Jake:

Also, not to nitpick, but outside of the obvious, that terminology, to me, plays off one of the most unfortunate misconceptions about AI: that it’s a black box you can just throw at something, and it discovers things that no human could ever discover. But the reality is, there’s much more interaction between human and machine, where you need expertise to point the computer at something in particular, based on a hypothesis that there might be information or patterns there that are important. You can’t just throw it at a dataset and have it bring up things that are outside of the obvious that no one ever thought of before.

Lyon:

Okay, number two. We factor more than 50,000 variables into the algorithm that decides who will see an ad and when, all with the same goal in mind: achieving predefined marketing objectives, from completed views in the upper funnel to site visits that drive consideration.

Jake:

In order for a training dataset to be useful to an algorithm, it has to be really cleansed, organized, structured, accurate, all set up for success. And to say you have 50,000 variables tells me there’s no way all 50,000 of them are thorough, cleansed, accurate and make sense. So in some ways, saying you just throw everything at the algorithm delegitimizes the AI program.
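Jake’s point, that each training variable has to be cleansed and validated before it’s useful, can be made concrete with a small data-audit sketch. This is not from the episode; the variable names, rows and threshold are hypothetical, and real pipelines use far richer checks:

```python
# Toy data-quality audit: before a variable goes into a training set,
# measure how often it is missing or out of range. Variables that fail
# would need cleaning (or exclusion) before training.

rows = [
    {"age": 34,   "ctr": 0.02},
    {"age": None, "ctr": 0.04},
    {"age": 29,   "ctr": 1.7},   # a click-through rate above 1.0 is impossible: dirty value
    {"age": 41,   "ctr": None},
    {"age": 55,   "ctr": 0.01},
]

# Validity rule per variable (hypothetical ranges).
VALID = {"age": lambda v: 0 < v < 120, "ctr": lambda v: 0.0 <= v <= 1.0}

def audit(rows, max_bad_rate=0.2):
    """Return the variables whose missing/invalid rate exceeds the threshold."""
    report = {}
    for var, ok in VALID.items():
        bad = sum(1 for r in rows if r[var] is None or not ok(r[var]))
        report[var] = bad / len(rows)
    return {v: rate for v, rate in report.items() if rate > max_bad_rate}

print(audit(rows))  # flags the variables that need cleaning
```

With 50,000 variables, a check like this would have to pass for every single one of them, which is the heart of Jake’s skepticism.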

Lyon:

Number three: “Patented and proprietary unsupervised machine learning algorithms work without labeled input data to automatically detect new and previously unidentified fraud and abuse patterns.”

Jake:

The thing about fraud specifically is that nobody actually knows. I love the fact that they’re being specific about the methodology: using unstructured, meaning unlabeled, data to have a machine find patterns. But then a human has to label those patterns. A human has to decide, oh, that pattern, that’s fraud. And the thing with fraud is you don’t actually know. It’s not like looking at a bunch of pictures of sheep in a field, where if the computer says, well, I see a bunch of green stuff, you can say, oh, that green stuff is grass. It’s not like that in this particular case. You don’t know that something is fraud; you can only guess that it’s fraud.
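The workflow Jake describes, where the machine groups unlabeled data and a human then decides what each group means, can be sketched in a few lines. This is a minimal illustration, not the vendor’s method; the click-through-rate values and the “suspicious” label rule are hypothetical:

```python
# Minimal 1-D k-means: the machine clusters unlabeled data points,
# then a human inspects each cluster and assigns it a meaning.
# Values are hypothetical per-source click-through rates.

rates = [0.011, 0.018, 0.024, 0.015, 0.97, 0.95, 0.99, 0.021]

def kmeans_1d(xs, k=2, iters=20):
    centroids = [min(xs), max(xs)]  # deterministic initialization
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for x in xs:
            nearest = min(range(k), key=lambda j: abs(x - centroids[j]))
            groups[nearest].append(x)
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]
    return centroids, groups

centroids, groups = kmeans_1d(rates)

# The human-in-the-loop step: a person looks at each cluster and names it.
# The algorithm found the pattern; it never "knew" what fraud was.
labels = {i: ("suspicious" if c > 0.5 else "normal")
          for i, c in enumerate(centroids)}

for i, g in enumerate(groups):
    print(labels[i], sorted(g))
```

Jake’s caveat lives in that last labeling step: “suspicious” is a human guess about the cluster, not ground truth the machine verified.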

Jeremy:

Yeah, that’s interesting. Didn’t it say specifically that there’s no need to input labels, or to label? Did I hear that right?

Jake:

Yeah. I mean, I think it’s interesting that they’re using unstructured data. They’re not saying, oh, I know what fraud looks like, go out and find it. They’re saying, go find a bunch of patterns, and we’ll tell you which ones are likely to be fraud.

Jeremy:

Yeah.

Jake:

Which I like. It’s just that the problem with fraud specifically is you can’t actually know, so you’re basically training an algorithm on guesses.

Jeremy:

Right. And the way the statement reads, it’s very black-boxy: we have this patented technology that is unsupervised and looks for patterns, just trust us, it works. You quickly become skeptical.

Jake:

Thank you, guys. I’d like to thank my guests Charlie Archibald, Kyra Sundance, Lyon Solntsev and, of course, Jeremy Lockhorn. On the next episode of FIVE: the retail sector is going through a massive amount of turmoil and transformation. How can AI help physical retailers grow, and help both consumer brands and retail brands win at e-commerce? Hey, and if you like the show, please write us a comment or give us a rating on your favorite podcast listening platform. We’d be super grateful, and it definitely helps more people discover the show. Yeah, let’s talk about that algorithm one of these days. Thanks for joining us.

The FIVE podcast is presented by Ericsson Emodo and the Emodo Institute and features original music by Dyaphonic and the Small Town Symphonette, original episode art is by Chris Kosek. Social media and other promotional stuff was composed and conducted by Lyon Solntsev. This episode was edited by Justin Newton and produced by Robert Haskitt, Liz Wynnemer and me. I’m Jake Moskowitz.
