Self-Driving Cars, Suicidal Rodents, and the Responsible Use of AI in Digital Marketing
As recently reported by NPR in the article “Should Self-Driving Cars Have Ethics?,” last week a group of researchers published the results of their “Moral Machine” experiment. This online survey, with millions of participants, “posed a series of moral dilemmas involving a self-driving car with brakes that suddenly give out.” In short, should the car in question act in the best interests of pedestrians or the driver, of the one or the many, of the young or the old, of people or animals? Perhaps not so surprisingly, for this last choice, most survey participants said the moral thing to do was to spare people over animals. Unlike some of the other choices, this was a near-universal preference. And all I could think when I read this was, “Yes, but what about all the squirrels?”
As I write this, it’s autumn here in Connecticut. Every weekday, I drive one or two hours along back roads lined with quaint homes and gorgeous trees painted in Rockwellian hues of yellow and orange and red. In each of those trees, by my reckoning, live dozens of squirrels, each with a brain smaller than a peanut. Which means that a dozen times during each trip, I’m faced with a squirrel who has decided that now, as I’m barreling toward it in my car, is the exact right time to cross the road. If I’m lucky, this rodent daredevil will make the trip quickly, its tiny legs and heart pumping frantically, and reach the other side without any drama. If I’m not so lucky, then about halfway across the road, the squirrel will just inexplicably decide to … stop.
This happens to me several times a week. Often enough that I know how to handle it. But how would a self-driving car with artificial intelligence handle it? For me, a human being who has a conscience and doesn’t want to end anything’s life if it can be avoided, it’s a matter of assessing the options in the span of a second and implementing the best solution possible. If I can quickly jog the steering wheel and avoid the squirrel without driving into a ditch or a pedestrian or oncoming traffic, I will. After that, all I can really do is take a deep breath through gritted teeth and wait to see if the squirrel manages to thread the needle between my tires. More often than not, it does, and I’m always relieved to see it in my rear view mirror, scurrying the rest of the way across the road.
So far this year, my win-loss record for avoiding squirrels with stage fright is about 150 and 1. And that one loss bothers me more than it probably should. But the fact is, it wouldn’t bother a self-driving car. If anything, the car’s record would likely be far worse. After all, I doubt most engineers will design an AI that actually “cares” about squirrels, in the sense of making morality-based decisions on their behalf. To be clear, the animals participants were asked about in the survey were dogs and cats, and some people feel an affection for dogs and cats that is stronger than their affection for other humans. But ask them how they feel about a spider or a rat or a snake or even a squirrel, and it’s usually a different story. So if humans are the ones defining the moral architecture of self-driving cars, dictating which life forms are worth taking action to avoid hitting and which aren’t, then those back roads may very well resemble a small-animal horror show before long.
Why is this important? Because self-driving cars are currently an AI outlier. Yes, there are a few of them on the roads, but only a few, and according to TheStreet, only 8.5% of cars sold by the year 2025 will be capable of making these kinds of decisions. So all this talk of “what should smart cars do?” is really a placeholder for the broader issue posed in the abstract of the Moral Machine experiment: “With the rapid development of artificial intelligence have come concerns about how machines will make moral decisions, and the major challenge of quantifying societal expectations about the ethical principles that should guide machine behaviour.” Previously, the only people who had to worry about such things were sci-fi writers like Asimov and Heinlein. But now the future is here. While self-driving cars aren’t yet mainstream, practical usage of artificial intelligence and machine learning can be found everywhere these days, across a variety of industries, regularly affecting our everyday lives in ways that most people aren’t even aware of. Like in marketing.
As a digital marketing professional, alarm bells go off when I read articles like one found on the Content Marketing Institute site last year, which opens with a rather bold, fear-inducing statement: “Every day your team postpones using innovative AI-powered solutions in your content marketing, you’re losing competitive edge.” To its credit, the article is informative and thorough, and everything in it is something a marketer in this day and age should probably know about. Here are the high points:
- AI-enhanced Pay Per Click Advertising
- AI-driven “Personalized” Websites and Emails
- AI-powered Content/Article Creation
- AI-powered Customer Service (chatbots)
- AI-powered “Churn Prediction” (guessing when a customer will leave)
- AI-driven Customer Insights and Algorithms
- AI-powered Facial Recognition
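
To demystify one item on that list: “churn prediction” is, at its simplest, just a score computed from behavioral signals. Here is a deliberately crude sketch in Python — the `Customer` fields, weights, and thresholds are all invented for illustration; a real system would learn them from historical data rather than hard-coding them:

```python
from dataclasses import dataclass

@dataclass
class Customer:
    # Hypothetical behavioral signals; real systems track many more.
    days_since_last_purchase: int
    purchases_last_90_days: int
    support_tickets_open: int

def churn_score(c: Customer) -> float:
    """Toy churn score in [0, 1]: higher means more likely to leave.

    The weights (0.6, 0.25, 0.15) and cutoffs (180 days, 3 tickets)
    are illustrative assumptions, not taken from any real model.
    """
    score = 0.0
    # Long inactivity is the strongest signal in this toy model.
    score += min(c.days_since_last_purchase / 180, 1.0) * 0.6
    # Zero recent purchases adds a fixed chunk of risk.
    score += (0.25 if c.purchases_last_90_days == 0 else 0.0)
    # Unresolved support tickets add a little more.
    score += min(c.support_tickets_open / 3, 1.0) * 0.15
    return round(score, 3)

active = Customer(days_since_last_purchase=7, purchases_last_90_days=4,
                  support_tickets_open=0)
lapsing = Customer(days_since_last_purchase=200, purchases_last_90_days=0,
                   support_tickets_open=2)
print(churn_score(active))   # low risk
print(churn_score(lapsing))  # high risk
```

The point of the sketch is that there’s no judgment in it, only arithmetic — which is exactly the unease the rest of this piece is about.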
Again, while I believe that all of these technologies are incredibly powerful and potentially valuable tools in any marketer’s toolkit … most of them also scare the hell out of me. And probably not in the “you should be doing this” way the author intended.
Because what about all the squirrels?
You’ll notice I quote-unquoted the word “Personalized” in the list above, and that was deliberate, because it always bugs me to see that word used in this context. There is nothing the least bit personal about having your demographic and behavioral data gathered, dissected, deconstructed, analyzed, batched, and used to try to sell you something. We’ve all been on the receiving end of it, we all know what it feels like, and yet those of us who are marketing professionals have all been guilty of doing it ourselves, in one way or another. It’s a necessary evil these days: sending emails to “segments” or using automation tools to improve conversion rates. I’ve done it myself, too many times, and no doubt will again before long.
Still, I can’t help but worry when it’s not somebody like me, another human being with a conscience, making decisions about whether or not (and how) to interrupt somebody’s day with an ad or an email. Because if I don’t trust that self-driving car not to mow down squirrels indiscriminately on back roads, then I certainly don’t trust AI-driven marketing tools to treat people like human beings instead of revenue opportunities. To be honest, I’m not even convinced it’s an effective revenue strategy in the long run. Guy Gonzalez sums it up quite nicely in a recent post about digital strategy for authors:
Literal billions of dollars have been invested in tools and platforms that have attempted to automate the humanity out of marketing. If you’ve ever been sucked into a company’s marketing funnel, you’ve seen the series of “personal” emails typical of that hamfisted approach…. Too many marketers still believe there’s a magic bullet, though, that engagement can be mapped, measured, and automated, allowing them to inject their products into conversations that will magically translate to sales without ever truly immersing themselves in the communities they’re trying to sell things to.
Don’t get me wrong. Automation certainly has much to offer marketers in any industry. I even see some potential for AI … as long as it’s tempered by basic humanity. Maybe that sounds a bit naive to some. But the question the survey was exploring about the proposed “morality” of self-driving cars was important, critical even, and it’s one that should be asked any time we get to a point where we hand off to a machine decisions that were once made by people. AI for marketing can and does free up human resources by performing monotonous data-processing and targeting tasks in bulk. Which is great, as long as it doesn’t result in practices that A) hyper-value quantity of conversions over quality of human interaction, or B) view potential customers as nothing more than a collection of data points to be fed to an algorithm-driven AI that will never actually be a customer itself.
Because the scariest thing about trying to teach a car to make decisions about who to kill and who to save is that unlike a human driver, it will never have any skin in the game. Literally. It will never be that person (or squirrel) in the middle of the road, wondering if they’re going to die. So how can it truly understand the implications of a bad choice? Likewise for AI-powered marketing solutions that are aimed at living, breathing people with real world problems and busy schedules and quarterly goals.
All I’m really saying here is drive responsibly. Make human decisions.
And keep an eye out for the squirrels.