Keeping content real with Henry Ajder
You’ve heard the buzzwords: deepfakes, generative AI, ChatGPT. But how do you make sense of it all while juggling the demands of marketing?
On this episode of The CMO Show, we sit down with Henry Ajder—an expert in generative AI and deepfakes—recorded live at Adobe’s Executive Forum in Sydney.
Henry shares his vision for a future where content authenticity is transparent and trustworthy. As an advocate for the Content Authenticity Initiative, he’s on a mission to help marketers navigate the challenges of AI with confidence.
Don’t miss this invaluable conversation on what’s next in AI and how you can stay ahead.
You might also like:
Adapting to the age of Chief Multipurpose Officers with Nine, Adore Beauty, and LEGO
Adobe Digital Report 2024: Trends shaping our digital experience
Credits
####
The CMO Show Production Team
Producer – Pamela Obeid
Audio Engineers – Ed Cheng & Daniel Marr
####
Transcript:
Mark Jones:
You’ve heard all the buzzwords. Deepfakes. Generative AI. ChatGPT.
You’re hearing them, but maybe you’re not really understanding them.
How can you, as a marketer, keep all the balls in the air AND find the time to digest this AI mumbo jumbo?
Our very special guest on today’s episode might have just the answer for you.
--
Hello, Mark Jones here, thanks for joining us on The CMO Show.
The CMO Show is a podcast made for and by marketing leaders, created by ImpactInstitute, and proudly supported by Adobe.
Now, I recently had the privilege of being invited down to the Adobe Executive Forum in Sydney, where I was treated to some fascinating chats with some very fascinating people.
First cab off the rank is Henry Ajder, an expert on generative AI and deepfakes. He’s also an Advisor to the Content Authenticity Initiative, and if you’re not familiar with the CAI, you’re truly missing out.
Luckily, Henry and I get into all of that and more on today’s episode. It was quite the chat, so let’s jump straight into it.
Welcome to the CMO Show. My very special guest today is Henry Ajder, and he's an expert on AI deepfakes. Henry, first question, are you real?
Henry Ajder:
Yeah, I think so. Jet lag from coming down here, not so sure, but on the whole, I think I'm pretty authentic right now.
Mark Jones:
I'm so glad, and thank you for humouring me. There's this big question of trust, authenticity, and brand reputation, which really matters to CMOs. So can you give me the high-level picture of what's going on?
Henry Ajder:
Sure. So we are in a moment right now where awareness of AI-generated content, deepfakes, is at an all-time high. The last couple of years have been pretty astonishing in terms of the adoption of the technology, the progress with the technology itself in terms of the realism of the outputs, and the accessibility of the tools for generating this stuff. But actually marketers and advertisers were some of the earliest adopters of this, pre this kind of current gen AI wave that we sort of see.
Mark Jones:
So it's their fault.
Henry Ajder:
In part maybe, they've certainly been experimenting. Right?
Mark Jones:
Okay.
Henry Ajder:
And part of that process has been understanding what's resonated, what's gone down well with audiences and with customers, and also what hasn't done so well.
When it comes to use cases, when it comes to trust, when it comes to transparency, these are all really big questions. They are critical not just to marketers, but to society at large. The whole of the space of synthetic content is changing the way we think about this. What we think of as authentic is no longer diametrically opposed to synthetic. But what we are also seeing is people increasingly being distrustful of everything they see and hear, and becoming increasingly suspicious of the way that AI is shaping the content that they consume.
So for CMOs, it's a really challenging time and an exciting time simultaneously, all of these new tools are coming out, all of these new possibilities, all of these new opportunities. At the same time, getting it right is really, really difficult. And it's something where we've seen some pretty big examples of them getting it wrong. And then also some pretty good success stories too.
Mark Jones:
I haven't heard anyone say synthetic content before, so that's kind of fun. It does show there's a lot of innovation going on. My question is, we're now at a stage where we still can pick what's real or not, sort of.
Henry Ajder:
I push back on that.
Mark Jones:
Okay.
Henry Ajder:
I would push back on that. And I think just to clarify, so when I talk about synthetic media, what I'm really talking about is the output of generative AI. The terms deepfake and generative AI are thrown around a lot, but synthetic media is a slightly more scientific, unambiguous term for the outputs, whether that's images, videos, audio, text, you name it. Marketers, CMOs in this space, they've been playing with some really interesting applications. We've seen some really interesting use cases, whether it's in the large language model space with the likes of ChatGPT, Claude and Gemini (Jasper was a big early product in the marketing space for AI), or in audio-visual, which in my world is the more prominent case, I guess. But this all comes with a backdrop of increasing distrust in what we see, because that assumption that we can, by the naked eye or ear, sort of determine what is real or not has broken down. And I think that overconfidence is actually one of the challenges right now: you'll see articles out there saying, here are the top five things to look for to spot AI-generated content, and the scientific research just shows it doesn't hold up.
Mark Jones:
Meaning we can't tell anymore.
Henry Ajder:
In a lot of cases we can't. It's not to say all, but in a lot of cases we can't, particularly with spaces like audio, for example, or indeed some forms of images, we just can't.
Mark Jones:
So my next question was going to be when do we panic? And I think the answer is now, is that right?
Henry Ajder:
Well, without getting too doomy, and I'm often asked to get doomy, and I am conscious that I want this to be something which is pragmatic and hopeful. And I don't think we have to be mutually exclusively scared or excited. I think we can be both, but we are at a point right now where people think they can determine what is AI-generated or not. And often they can't.
And as people are starting to recognise this, we're reaching this point where people are sceptical about all content, not just the stuff that turns out to be AI-generated, right? It's a concept called the liar's dividend: when everything can be AI-generated, everything could be. And we're at this point where we have to tell people in good faith, "Look, you can't trust what you see or hear anymore, really. And you don't really have the abilities as an individual to determine that. By the way, the technology for getting us out of this mess is still pretty nascent. So good luck."
Mark Jones:
It's not a strategy though, is it? Good luck?
Henry Ajder:
No, it's not. Precisely, and that's why the work I do with the Content Authenticity Initiative is really important. It's why trust and transparency are so important for marketers and people developing products around these technologies. Because if there's one golden rule that I think CMOs and people in the space need to know, it's this: people don't like to be fooled. People don't like to be treated like an idiot, or to have stuff hidden from them in plain sight. And that's why, when navigating this space, when you're deploying these tools, it's really important to provide that layer of transparency, to build trust. Because if you don't have trust, what do you have? In my view, it's the most important currency in the generative age.
Mark Jones:
I want to talk about the Content Authenticity Initiative in just a moment. First, there's a responsibility that marketers have here to shape the conversation, and part of that actually is internal education.
Henry Ajder:
Absolutely.
Mark Jones:
Helping to lead the conversation. Because it actually just seems to me at a career professional level, there's a bit of an existential risk here because the general public doesn't trust marketers to start with. This is actually making it worse. And I'm not just being flippant because it actually impacts brand reputation. Reputation has value in terms of an organisation, not just as a market driver, but of course there's capital associated with it. So this is a non-trivial issue.
Henry Ajder:
Oh, it's huge. It's absolutely huge. Brand equity and reputation I think are really on the line. And from some of the cases we've seen where it's gone badly, you can see the impact. For example, Levi's, the jeans manufacturer, did a campaign where they used AI-generated diverse models to try and increase the diversity of their campaigns, to which a massive and, in my estimation, predictable backlash came forward saying, "Why aren't you paying real diverse people to do the modelling?" Similarly, with Mountain Dew, they did an advert where they brought back a synthetically resurrected version of Bob Ross, the famous TV painter, painting a bottle of Mountain Dew, to which all of his fans said, "This is in such poor taste. This is not in the spirit of his kind of ethos."
And so there are some examples where companies understandably are excited about the technology, they want to put it to work, they have what they see as great ideas, but they may be, as you said, kind of internally caught in their own pattern of thinking and not necessarily thinking about the societal attitudes and the speed of change and what people see as acceptable or in good taste.
Mark Jones:
So actually, the fun part about this is that it gets back to our humanity and how do we call that out? So what's your suggestion? What's the mindset CMOs should bring to those two examples you just spoke about?
Henry Ajder:
Well, I think it's kind of what I refer to as almost an innovation tightrope, right? You are kind of constantly trying to balance pushing forward new ideas, new formats, new strategies with understanding where the public is at and where your customers and consumers are. Sometimes you can change attitudes in society by leading by example, and things that people may not necessarily think of as, "Oh, that sounds great," or, "Oh, I love that kind of content," they come to love because people experiment and do new things.
A great example of this was a campaign done by Lay's and PepsiCo with Lionel Messi, where you could have an avatar of Messi speak to you in multiple different languages that he doesn't speak, to create personalised messages for your friends and family. It was clearly fake. It was disclosed as such, and it was really, really good. I think it won a Cannes Lion award.
Mark Jones:
And fun. I was going to say.
Henry Ajder:
Yeah, exactly. Fun. And it wasn't trying to deceive people. It was, again, clearly marked as AI-generated.
Mark Jones:
We know the rules of the game.
Henry Ajder:
Yeah, right. Exactly. You know the arena you are entering into. And I think this idea of arenas is really important here, because in some, perhaps more traditional formats, if you're starting to use AI and that's not expected, that's when people aren't delighted by the surprise. They feel misled or they feel deceived. So that innovation tightrope is really about understanding your audience, which I think is really, really critical. Doing a bit of social listening, understanding attitudes in society, and looking in some respects to some of the first movers and what they got right and what they got wrong. I don't think there's anything wrong with being a first mover. I think there are some really big advantages to trying new things and being bold and being brave. But there are also a lot of learnings. As I mentioned, this is not brand new. It's not as new as some of the applications. Marketers and advertisers have been doing this for some time. So there's a good kind of archive of examples out there of how it's worked and how it hasn't.
Mark Jones:
So you've really got to understand the ground rules.
Okay so, we’ve heard all about the responsibilities that marketers have to shape the conversation around AI.
That’s all well and good, but we’re all time-poor professionals, so how are marketers supposed to make time for this? Can communities like the Content Authenticity Initiative help?
Stick around. We’re about to find out.
Mark Jones:
Let's go to the Content Authenticity Initiative, which Adobe and many other organisations have been leading for quite some time. Give us the quick headline: what is it and why should CMOs care?
Henry Ajder:
Sure. So fundamentally, the Content Authenticity Initiative is about advocating for the adoption of what are called cryptographically secure metadata standards. Now, that's a really kind of dull term.
Mark Jones:
I was just thinking about that this morning.
Henry Ajder:
Gets you going, right?
Mark Jones:
Yeah.
Henry Ajder:
The better way to understand this is the adoption of digital nutrition labels.
Mark Jones:
Nice.
Henry Ajder:
So it's a way to provide transparency about how a piece of content has been created, whether that's captured on a DSLR or generated using a tool like DALL-E 3 or Firefly from Adobe. And it provides a way to track how it's been edited, how it's been produced fundamentally. And the way that CAI is working is to try and encourage adoption of this by various different stakeholders all across digital content creation, whether that's marketers, whether that's entertainment companies, whether that's photography companies themselves and the cameras they're developing. And the reason I think this is so important is that detection in particular, using AI to spot AI, is, in my view, somewhat of a losing proposition. It's not very robust, so it's not particularly accurate. And at scale, it doesn't work particularly well.
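To make the "digital nutrition label" idea a little more concrete, here is a minimal sketch of the kind of provenance record a content credential is meant to carry. The field names and values are illustrative assumptions only, not the actual C2PA manifest schema.

```python
# Illustrative only: a toy "digital nutrition label" for one asset.
# Field names here are assumptions for the sake of example; the real
# C2PA manifest format is richer and strictly specified.
provenance_label = {
    "asset": "campaign_hero.jpg",
    "created_with": "text-to-image generator",   # or e.g. "captured on DSLR"
    "issued_by": "Example Brand Pty Ltd",         # hypothetical issuer
    "ai_generated": True,
    "edits": [
        {"action": "crop", "tool": "photo editor"},
        {"action": "colour_grade", "tool": "photo editor"},
    ],
}

# A platform that supports the standard would render these fields behind
# the small credential icon shown in the corner of the image.
for field, value in provenance_label.items():
    print(f"{field}: {value}")
```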
Mark Jones:
You're also asking AI to monitor itself.
Henry Ajder:
To an extent, to an extent. There are many different problems with that in other contexts, but in this one, it's more about, it's just not particularly reliable at scale right now. And I don't think it ever will be. So in my view, this is the best solution we have to provide people with that sense of confidence about what they're consuming.
Mark Jones:
So just to understand, we are talking about metadata, some form of embedding of codes and some other form of data in the content, right?
Henry Ajder:
So it is more attached to it.
Mark Jones:
Attached to it.
Henry Ajder:
So if you take a picture right now on your phone, there will be metadata with that image, which will travel with it. It won't necessarily surface, so to speak. It won't be easily accessible on social media or on other platforms, but it will be there. A good example of this is when Trump the other day claimed that a Kamala Harris rally photo was fake and AI-generated. One of the journalists who reported on that story went to the campaign and said, "Well, can I see the metadata?" And they could. So it's already there in some form. But what this standard does, and it's called C2PA, that's the technical standard that CAI is advocating for, is cryptographically secure the metadata to the content and then provide a seal. It's called a content credential. And so if you try and tamper with that metadata, the seal breaks.
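A rough way to picture the seal Henry describes: the metadata is bound to the exact content bytes, so changing either one breaks verification. The sketch below is a simplified illustration using a keyed hash; real C2PA content credentials use digital signatures and certificate chains rather than a shared secret, so treat this as an analogy for the tamper-evidence concept, not the standard itself.

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"issuer-private-key-placeholder"  # hypothetical key material, illustration only

def seal(image_bytes: bytes, metadata: dict) -> str:
    """Bind the metadata to the exact image bytes and return a 'seal' (hex digest)."""
    payload = hashlib.sha256(image_bytes).hexdigest() + json.dumps(metadata, sort_keys=True)
    return hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()

def verify(image_bytes: bytes, metadata: dict, credential: str) -> bool:
    """Recompute the seal; if the metadata or the pixels have changed, it no longer matches."""
    return hmac.compare_digest(seal(image_bytes, metadata), credential)

image = b"...example image bytes..."
metadata = {"created_with": "text-to-image generator", "issued_by": "Example Brand Pty Ltd"}

credential = seal(image, metadata)
print(verify(image, metadata, credential))     # True: seal intact
metadata["created_with"] = "captured on DSLR"  # tamper with the label
print(verify(image, metadata, credential))     # False: the seal breaks
```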
Mark Jones:
And will that be legally enforceable or useful in the law, in court?
Henry Ajder:
Well, so again, the courtroom question is a huge one. We are now entering an age where lawyers for both defence and prosecution are questioning the authenticity of media being presented in evidential proceedings. Elon Musk's lawyers have done this. So potentially in that context.
Mark Jones:
I only ask because that's the ultimate standard. So otherwise everything, it's just... I'm wanting to understand what are the boundaries here.
Henry Ajder:
So I think this idea of an ultimate standard is critical, because if we have 30 of these kinds of standards, it doesn't actually separate the signal from the noise. It becomes a new form of noise, right? We've done this with other technologies in the past: globally unifying around a singular standard is really important. And when it comes to legislation and governments looking at potentially mandating this, there are certainly ongoing conversations. There are no laws on the books explicitly saying that this must be done with this standard. In my view, it's a matter of time, because the challenges are not going to become less prominent, they're not going to become less challenging.
Mark Jones:
What's the brand implication? Because nobody likes watermarks on their content. And much of the content we talk about at scale, and I'm also thinking about content we produce for campaigns. There's just so much of it. And the consumers don't care. Ultimately, they're entertained by something or not at sort of a consumer level. So how do we deal with this from a brand point of view?
Henry Ajder:
Sure. Marketers and creatives, as you said, really don't like having big ugly watermarks all over their content. It's not particularly intuitive. It's not something they really like, understandably so. What this provides, what content credentials provide, which I think is useful, is it doesn't have to necessarily be a big ugly logo over your content. The idea is that this standard is supported by platforms, and we're in early days of this right now, but the hopeful end goal is all social media platforms will have support for this standard.
So you'll get a little icon up here just in the corner of the image, which then provides a drop-down of this digital nutrition label. And the way this kind of fits in for, I guess, marketers and advertising, why it's important I think, is yes, people may not ultimately care in some context, but in others they certainly do. So if you have a campaign where you're using voice synthesis technologies to make it so that you only have to record one campaign once, and then you can translate into different languages whilst retaining the person's voice style and the performance, some people might feel deceived or misled if they don't know that's going on.
And I think particularly in other contexts, for example, if you're using AI generated imagery or other formats, people might again feel misled. They might say, "Oh, well, I thought this was an actual photo of the product, not a stylized generative version." So again, going back to this idea, people don't like to be fooled, and particularly when making purchasing decisions or things like this. It's in my view, a kind of responsibility of marketers to embrace these technologies and reap the trust dividend that you mentioned earlier, which is so important.
Mark Jones:
And if it is in the platform, it means that we can filter content in a better way. So you can actually set up your trust filters for want of a better term, and those sorts of things, right?
Henry Ajder:
Absolutely. And it's so critical with this standard that it gets this wide adoption. If there's only a few people or a few organisations using it, it's not going to change the reflexive attitude in society at the moment. It's trust what you see, or at least that's a message that a lot of people are putting out there. What we want to push is don't trust what you see, but look for the credential, right?
Mark Jones:
Yes.
Henry Ajder:
It is verify then trust, as opposed to trust and verify.
Mark Jones:
Just briefly, are there any quick examples of how this is working that you think will help people understand what we're talking about?
Henry Ajder:
Well, yeah, so CAI has a lot of members. Some of the big tech companies are also involved, and some of them are already deploying the C2PA standard. So right now, if you generate an image using Firefly, for example, Adobe's text-to-image tool, and that gets published on LinkedIn, you will get a little content credential, a little CR logo, appear in the corner of that image, and if you click on it, it will then say the credential was issued by X company, based on Y tool that they were using. So it's already starting to surface on LinkedIn, which I imagine quite a few of your listeners use. They might have even seen some of these going around.
Mark Jones:
Yeah, I'm sure.
Henry Ajder:
So it's already out there. It's being built into cameras as well. So Leica have released an actual hardware camera, which allows you to sign this metadata in the images that you take. And what I would say is I would love to be able to give you some more examples of how marketers are already using this, but at the moment, there aren't that many. And this is where it'd be really great to see some of your listeners become early adopters and lead the way on this, because I think it's not the case that this is a speculative gamble as to whether this is going to be important or whether this is going to change. I think this is the future. So it's important to get in early and be the trailblazers.
Mark Jones:
What's your best advice for CMOs getting their heads around this as leaders within the organisation? There's an expectation they're not just keeping up, but leading it internally.
Henry Ajder:
So I think when you're talking to the C-suite, when you're talking to other people in the organisation and you're trying to evangelise for this approach, it is really about bringing it back to basics fundamentally. It's about saying these technologies are revolutionary, they're transformative, and they can really improve our campaigns. They can help us reach people in ways that we've never done before, hyper-personalisation, all of these different avenues where there's value. But there's a risk of undermining that entire value chain if we don't get the fundamentals and the foundations right. And again, you don't want to be in a position where you are laying foundations that in two or three years you are having to dig up, or you are having backlash from your customers or consumers because you've done it the wrong way. This is not a technology which is incredibly sophisticated and hard to implement. There's an API, there's a web app in some contexts for this to be used. So it's about laying those solid foundations to reap the real rewards and build that trust relationship with your customers in the generative age.
Mark Jones:
The challenge is most CMOs don't have much time. They're not going to go off and get a degree on this stuff.
Henry Ajder:
No.
Mark Jones:
So are we just talking about hiring in more experts? It starts to add up.
Henry Ajder:
I think... Yeah, so I fully appreciate that people are busy and have a lot going on. I think you don't necessarily need to hire in specific content authentication experts. In my view, it's the same people who are perhaps evangelising for generative AI and kind of innovation within your organisation, whether that's a CIO or another role under the CMO office. It's equally as important that they're considering this side of things as well. Likewise, reputation management, as you mentioned earlier. I think these are people who have to be thinking about these technologies and looking at your tech stack as well. So there are lots of people that this can kind of fall under, but if a CMO can lead these conversations, that would make my day.
Mark Jones:
Henry, we are nearly out of time, but a quick question because of who you are and what you know, tell me what you think about this consciousness idea of gen AI.
Henry Ajder:
Oh, bloody hell. You're ending on a cracker.
Mark Jones:
I know, on a cracker. Just summarise all of it, would you mind?
Henry Ajder:
Yeah, that's a simple yes or no answer.
Mark Jones:
And I'll tell you why, because I saw this, it's pretty popular on the internet, these two fake podcast hosts realising that the whole thing isn't real, that they're not real. And then you start to feel this empathy for AI that's about to sort of disappear off into the mist.
Henry Ajder:
Right? Okay. So the consciousness question, simply put, even though I did four years of academic philosophy, I still do not feel I am an authority to make a definitive point on whether AI can reach consciousness or already has. I personally don't think it has. And I think what you just touched on with the NotebookLM example, the podcasting one, is much more an interesting reflection on how we as humans are still so primitive in the way that our brains work, that we can't help but associate consciousness and human identity with things that sound and feel human.
Mark Jones:
Okay. That's really interesting to me. We're projecting our humanity, our empathy, all these complex emotions onto AI. And you think that's really what's going on?
Henry Ajder:
Correct. And I think we're primed biologically to do so, and that's a challenge. That's where, again, having the disclosure and having the transparency is so important because if you're tagging or tying into that thread within us, that's very potent. That's very powerful.
Mark Jones:
Henry, if we had more time, I'd keep asking you questions about that, but we don't. That was a great spot to end, actually.
Henry Ajder:
Yeah. Yeah.
Mark Jones:
So Henry, thank you.
Henry Ajder:
Food for thought.
Mark Jones:
Yeah. Henry Ajder, thank you so much for your time and joining us on the CMO Show.
Henry Ajder:
My pleasure. Thanks for having me.
Mark Jones:
That was Henry Ajder, generative AI and deepfake expert.
There’s a lot to unpack here. One thing that’s for certain is that AI is here to stay, and ignoring it won’t do you a lot of good.
The Content Authenticity Initiative and similar communities all have one common principle in mind – making AI safe and transparent for all who use it. Now that’s a movement I’d get behind.
Special thanks to Henry for joining me on today’s episode. You’ve been listening to The CMO Show, created by ImpactInstitute and supported by Adobe, and I’m your host, Mark Jones.
Thanks for tuning in. We’ll catch you next time.