This edition of Digiday’s daily CES Briefing looks at the need for brands to adopt new SEO strategies to deal with AI agents, an interview with Mastercard’s Raja Rajamannar about agency compensation models in the AI era, and how Dotdash Meredith used OpenAI to power its contextual advertising product, D/Cipher.
Expect to hear a lot about search engine optimization in 2025. Except it won’t be called that.
“It’s not about search engine optimization anymore. It’s about response engines,” said Digitas CEO Amy Lanzi.
Instead of figuring out how to rank high in Google’s search results, brands will need to figure out how to reach the AI agents that are expected to handle human tasks such as booking travel itineraries. “Brands have to be the answer to the questions you ask in Gemini or whatever [other generative AI tool],” Lanzi said.
This notion of response engine optimization—or AI agent optimization, or whatever it’s eventually called—was a major topic of discussion among agency executives during this year’s Consumer Electronics Show.
“The reality of AI agents is gathering information and bringing it back [to people], so that is searching in 2025 and beyond. Making sure you’re there from an SEO perspective is absolutely critical,” said Kelly Metz, chief investment officer of OMD USA.
How marketers make sure their brands are selected when someone asks an AI agent to plan a summer vacation or handle Christmas shopping goes far beyond traditional SEO tactics.
“Search was the answer to discovery. Now it’s going to be an ingredient in a different paradigm,” said Jeff Geheb, global chief experience officer and global head of enterprise solutions at VML.
Historically, SEO tactics have focused on tying a brand to specific keywords by seeding content on the web so that Google’s search engine learns to make that association when someone types one of those keywords into a search query. Keyword-based tactics won’t cut it when the large language models powering AI agents can go beyond keyword recognition to understand the context and concepts underlying the language and judge for themselves what’s the best [insert product type] rather than relying on articles from publishers stating the best [insert product type].
“The reality of AI agents is that they go to your website to bring information to users. Navigating that is challenging for brands. These are questions like, ‘Do I want them to go to my website? How can I use media partnerships to get more out of this experience to make sure I come across to agents in the right way?’” Metz said.
“It’s not a ‘store near me.’ It’s about ‘the perfect place to make me beautiful’ because I want to win ‘best brand to buy makeup from,’” Lanzi said. “It’s a completely different way of thinking about winning in search. That’s why Reddit is interesting.”
Thanks to an agreement between Reddit and Google to share user data for LLM training, what people say about brands on Reddit — in the colorful language people use on the platform — can color AI agents’ assessments as much as, if not more than, brands’ own content pages and articles from publishers.
But Reddit is just one example of a broader challenge. As LLMs ingest content on and off the web, marketers will have a harder time trying to curate, let alone control, what information about their brands and products is exposed to LLMs.
At the same time, brands will likely develop their own AI agents, which may end up interacting with the AI agents used by humans and become central to whatever this new SEO ends up being called.
“The moniker we’re moving to is M-to-M: machine to machine,” said Brian Yamada, chief innovation officer at VML. “In this next era that’s starting to emerge, we’re going to let our agents talk to consumer agents. Then brands will have to think about brand APIs, what datasets to make available. Agents will decide what the experience layer is.”
That experience layer is, well, reality.
The way brands pay agencies for their work looks set to change as AI tools undermine the billable-hours model. Exactly what the new agency compensation model(s) will be is anyone’s guess. But some guesses carry more weight than others — like the ones coming from a CMO. So Digiday sat down with Mastercard’s chief marketing and communications officer Raja Rajamannar during CES to discuss the matter.
This transcript has been edited for length and clarity.
What do you think about how the agency compensation model needs to change?
I’ll give you a little anecdote and then I’ll tell you why I’m saying what I’m saying: At Mastercard, we get requests for proposals all the time from our clients, our potential customers. In the past, it took approximately seven weeks from receipt of the RFP to writing the first draft response. Seven weeks. Today, it takes less than a day, including human supervision. There was no increase in the workforce; it’s the brilliance of the AI that makes the process ridiculously efficient without sacrificing quality.
So one of the things I’m challenging my own team with is that if there’s efficiency in our ecosystem, whether it’s within our own team or with our partners, which are the agencies, we need to challenge the existing model. There is a significant opportunity for efficiency. And I think agencies need to reinvent that model.
How would you like to see that change? Because the whole idea of billable hours is completely different in a world where AI tools reduce the time it takes to complete client projects.
If I were to go by billable hours, it would be brutal for agencies, and we shouldn’t drive agencies to extinction. There must be another model.
It can be a project model: for this project, to get this output, I will give you this much. Or it could be a combination: [the agency] dedicates three [full-time employees] to [the brand], [the brand] pays for the three FTEs completely, plus the cost of the tokens used for AI. So cost-plus could be the second model.
The third [model type] is reward based on results. If I say I’m trying to drive awareness and preference for my new service or product from X to Y, I’ll pay you ABC dollars to do it. Now you, as an agency, figure out how the hell to do that, and I’m willing to pay, say, $100,000 per point. If you [as the agency] manage to achieve that increase [at a cost to the agency of] $5,000 and pocket $95,000, God bless you. But I look at it from my perspective: what is each percentage point worth to me?
Have you offered it to your agencies?
Not yet. All of this is a work in progress. We are discovering AI ourselves. I need to have enough information and knowledge and experience to say, “You know what? I know the work you are doing will only cost you so much. I can demand that you reduce it by 70% or 80% or whatever, but that won’t help you. I know it will put you out of business. So let’s move to performance-based compensation.”
Dotdash Meredith’s deal with OpenAI goes beyond content licensing. The IAC-owned publisher is also using the ChatGPT maker’s AI technology to enhance its D/Cipher contextual advertising product.
But first, some background. D/Cipher effectively indexes DDM publications’ article pages by content-related topics, so an advertiser looking to reach brides can target that audience by displaying ads on articles of interest to brides — which can include wedding-related articles as well as stories on other topics that brides over-index in reading. When DDM introduced D/Cipher in 2023, this indexing process used natural language processing to identify common keywords. But in the second half of last year, OpenAI’s large language model entered the mix.
DDM now uses OpenAI’s APIs to let the LLM find connections across the publisher’s corpus of articles, in a similar way to how ChatGPT is able to process text to understand the underlying concepts of what is written, not just the surface-level words.
“The new world of OpenAI makes it much better because we’re not just talking about words as tokens, but as concepts, as conceptual structures. It connects concepts, not just language itself,” Jon Roberts, chief innovation officer at DDM, said in an interview.
After running the DDM content library through OpenAI’s technology, 70% of the content connections identified by the LLM were “basically the same, but 30% were significantly different. It was obviously better,” Roberts said.
And to be clear, it’s not theoretical. “We have active campaigns where these types of insights at a given taxonomy level improve results for advertisers,” said Lindsay Van Kirk, general manager of D/Cipher.
For example: DDM ran a campaign for a major CPG advertiser that was launching a new luxury hair care product. One of the audiences the advertiser wanted to reach was brides. DDM ran the campaign through D/Cipher and was able to see that the campaign fell short of the main benchmark provided by the client when brides were included.
“Based on this measure of user engagement, brides were 30 to 40% less likely to engage than the average ad viewer,” Roberts said.
DDM was able to detect this particular audience’s drop-off because the campaign ran on both wedding-related and non-wedding-related content, and only people viewing wedding-related pieces underperformed. DDM recommended removing brides from the campaign, and as a result the campaign “overperformed [the client’s higher] stretch benchmark,” Van Kirk said.