
As agencies evolve AI tools for influencer vetting, they’re also discovering the tech’s limitations


Influencer agencies have embraced generative AI applications in the past year as they seek to reduce the time it takes to secure creators for brand campaigns.

Client response to these solutions has been mixed. In recent months, however, agencies operating in this space have found one area with a clear application for AI tools: brand safety.

Screening creators can take anywhere from “a few days to a few weeks, depending on the depth of the analysis,” said James Clarke, senior director of digital and social at PepsiCo Foods US. AI solutions aim to reduce that screening time per engagement.

The setup varies between agencies, but most have combined AI-powered search tools with generative AI applications that analyze the content of social media posts. The systems are used both to find creators who fit the brand and to disqualify those whose output falls outside the client’s guidelines.

In recent years, marketers have become more sensitive and cautious when choosing influencer partnerships. According to agency leaders, client guardrails can cover anything from depictions of alcohol consumption to swearing or political speech. According to Aynsley Moffitt, director of product and growth at Open Influence, “most” clients limit screening to the last six months of a creator’s social activity, although some have asked the company to review content as far back as five years.

“This end-to-end approach helps protect brand reputation while identifying creators who will be authentic advocates for our clients,” said Moffitt.

“It’s the first line of defense,” added Ben Jeffries, CEO and co-founder of creative agency Influencer.

The time saved can allow an agency to take on more business or manage more creator relationships, enabling it to serve larger campaigns.

“It’s an efficiency and scale play,” Mae Karwowski, founder of WPP-owned creator agency Obviously, told Digiday. “The more time we can save on ‘first pass’ content or reviewing creator profiles, the more time we can spend on strategy and optimization.”

The demand among advertisers for time-saving AI solutions for creator vetting – and the number of companies offering them – is growing. Viral Nation has offered such a solution since 2023, but agencies are not the only companies answering that demand.

Lightricks, the software company behind apps like Facetune, is currently developing an AI-based creator vetting tool called SafeCollab. The tool entered open beta in November, after the company partnered with six brand advertisers, including PepsiCo, through 2024.

The software scans and analyzes influencer activity using public social posts, plus access granted by creators who use Lightricks’ Popular Pays influencer marketing platform, said Corbett Drumney, vice president of brand partnerships at Lightricks. It currently covers Instagram and TikTok, with X and YouTube to follow.

“Basically, it does a cursory Google search for them on the internet, summarizes it, and then summarizes the [social] content it indexed,” he explained.

According to PepsiCo’s Clarke, the beta test using SafeCollab was one of several ongoing trials using artificial intelligence for creator marketing.

“By leveraging emerging technologies … we’re confident our teams will be able to move faster, more efficiently and increase the effectiveness of screening creators,” he said in an email. Clarke declined to share the “red lines” PepsiCo used to disqualify creators.

Each background report generated by the software costs several hundred dollars, Drumney said, though he declined to provide exact figures. SafeCollab is intended as a self-service app for in-house brand teams, but agencies like Influencer, Obviously and Open Influence offer similar solutions for vetting creators.

Creator marketing agency Props offers a solution built with API access to Google’s Gemini model, while Influencer has used proprietary software built with API access to ChatGPT across its entire client list since October.

Before that point, Influencer staff had to approve creators by manually crawling their channels. Influencer’s Jeffries said the AI solution has helped speed up the work of vetting influencers for partnerships, tracking campaign outputs and informing the agency’s follow-up reports.

But automating creator vetting has implications beyond saving time. The tendency of large language models to hallucinate answers or reproduce the biases of their training data makes them a shaky foundation for decision-making.

Props’ solution, called Ollie, sometimes misinterprets the images it’s asked to analyze, said Megan Matera, the agency’s director of client success. In one example she gave, a creator posted a picture of themselves drinking beer in a hot tub. “Ollie referred to it as ‘bathing in beer,’” said Matera.

In the case of SafeCollab, Drumney said his team is in the process of adding custom filters to the software because its default settings flagged too many posts as risky. He said the company’s clients found the results “too alarming,” adding that “we’re going to have to change the way we present these things.”

To avoid such issues, none of the tools built by Lightricks or the agencies above are permitted to decide on their own which creators get a green or red light. Instead, the systems flag problematic posts for a human employee to make the call.
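That “flag, don’t decide” pattern can be sketched in a few lines. This is a hypothetical illustration, not any vendor’s actual code: a simple keyword match stands in for the LLM-based content analysis the agencies use, and every name here (`Post`, `Flag`, `screen_posts`, the guardrail list) is invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Post:
    creator: str
    text: str

@dataclass
class Flag:
    post: Post
    matched_guardrails: list

def screen_posts(posts, guardrails):
    """Flag posts that touch a client guardrail topic.

    Note what this function does NOT do: it never returns a
    pass/fail verdict per creator. It only surfaces posts for a
    human reviewer, mirroring the human-in-the-loop setup the
    agencies describe.
    """
    flags = []
    for post in posts:
        # Case-insensitive match against each client guardrail term.
        hits = [g for g in guardrails if g.lower() in post.text.lower()]
        if hits:
            flags.append(Flag(post=post, matched_guardrails=hits))
    return flags
```

A reviewer would then work through the returned flags creator by creator; nothing is auto-rejected by the system itself.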

“We always let our human team review all work done through AI,” said Obviously’s Karwowski. “Having experienced team members review any insights generated means we can provide real-time feedback when something seems off,” added Open Influence’s Moffitt.

Still, there’s an uncomfortable parallel between the rise of AI-powered influencer selection and brand marketers’ overuse of programmatic brand-safety filters, which have been blamed for effectively defunding news organizations.

As influencer marketing becomes increasingly “programmatic” – in both mindset and execution – automating the task of vetting creators risks repeating that harm, defunding creators whose activities are deemed outside the realm of the acceptable without their knowledge.

Jeffries suggested that human intervention in AI-powered selection systems will always be necessary.

“Influence marketing isn’t just media buying, it’s creative buying,” he concluded.


