Are brands approaching a crisis in authenticity?
Google has demonstrated how its Duplex voice agent can successfully trick a hair salon into booking an appointment over the phone. “Deep fake” videos can now show live action footage of people, famous or otherwise, saying things they never said.
Fake social accounts fool fans of brands, and bots conduct text conversations in the “voice” of celebrities.
It’s not just the deliberate mislabeling of critical news as “fake news.” More and more, new technological capabilities or malicious innovations are weakening our ability to use our eyes, ears and sense of human interaction to understand when we’re talking with an actual human — and when we’re talking with an imposter.
Are we entering a crisis of authenticity for brands? Is it comparable to the days before October 30, 1938, when consumers thought everything they heard on “radio news” was real?
That is, until Orson Welles broadcast his fake-reality “War of the Worlds.”
To get a sense, we checked in with the chronicler of all things marketing — and conference chair of our upcoming MarTech East conference in Boston — Scott Brinker.
It’s not difficult to see how one might think we’re “coming to a tipping point,” he told me. “Almost everyone I speak to in marketing, who is serious about their brand, thinks that trust and authenticity” have become a big deal.
Two key drivers
There are two key drivers challenging trust and authenticity.
In one, the ability to pass the famous Turing Test — where a computer-generated conversation is indistinguishable from a human’s responses — will become increasingly available to all kinds of brands, criminals and others.
The other driver is the number of ways in which brand, celebrity and other kinds of identity can be easily impersonated, without the need to pretend your intelligent agent is human.
Threat intelligence firm IntSights, for instance, estimates that about half of its corporate clients have to battle fake social media accounts set up with their brand identities.
Influencer marketing platform theAmplify spun up its KalaniBot on the Kik messaging app to converse with fans and dispense images and video about Cover Girl products, as a stand-in for online celebrity Kalani Hilliker. theAmplify President Amy Luca told me that the bot identifies itself as a stand-in, but primarily at the beginning of a conversation.
Obviously, Brinker said, the “presumed best practice” for brands is to make any pretense evident. After Google got a lot of negative feedback when it demonstrated how Duplex could trick a human into booking an appointment on the phone, it indicated that, in the future, any use of this or similarly indistinguishable agents should be identified as non-human.
But that’s assuming the originator intends to operate in its best interest, to preserve its reputation. Just as there are countless email or phone scammers who pretend to be Microsoft or the IRS, there will undoubtedly be countless scammers who will add the new Turing Test-passing tools to their growing toolkit of impersonations.
At least in part, Brinker noted, brands can be protected against misuse of these tools — such as, say, a completely realistic but fake video of Elon Musk saying that electric cars are worthless — by existing laws covering libel, slander and fraud.
But, leaving aside the question of whether the current laws are sufficient for these new threats to brand reputation, there’s the question of enforcement. There are also laws against scams that purport to be from the IRS, but they still show up every day in people’s emails and on their phones.
‘Hard to build it back’
A key unknown, though, is at what point the prevalence of inauthentic brand experiences leads consumers to temper their trust, just as online reviews and marketing calls are now often mistrusted.
theAmplify’s Luca pointed out that one of her firm’s clients, soap maker Dove, now adds its logo and a visual indicator of “no digital distortion” to brand-related images it distributes. She suggests that there may be a need for third-party validation so that brands can assure consumers a given video, photo or conversation is actually from the brand.
“Once [trust] is broken,” she pointed out, “it’s hard to build it back.”
Chris Paradysz, CEO and founder of marketing firm The PMX Agency, told me that the false pretenses under which Cambridge Analytica used consumer data on Facebook, and the massive resulting scandal, created a “groundswell of realization that people have to take control of what’s going on.”
“The guarantee about who you are talking to” needs to come from the platform on which the talking takes place, Eric Lam, co-founder and CEO of influencer firm Revfluence, told me. If there is a fake social account for a brand on Facebook, for instance, that social giant needs to police identities and, when notified, take the fake ones down.
Similarly, if there are cheap and easy ways to create Duplex-like voice agents, and they start bombarding consumers with realistic phone calls, phone companies will have to start developing ways to flag them.
It’s an arms race with the new impersonators, Brinker noted: legal and technical countermeasures, and brands’ best practices, will simply have to keep pace with each new way of attacking brand reputations.
The alternative is “madness or the end of civilization,” Brinker said, although he did acknowledge that this dire possibility is “non-zero.”