
AI Brand Safety Tech Faces Criticism Over Effectiveness and Transparency

Adalytics’ recent report has cast doubt on the effectiveness of AI-powered brand safety technology, prompting concerns from industry insiders about the reliability of these tools and what advertisers are actually paying for.

The report scrutinizes whether brand safety tech from companies like DoubleVerify (DV) and Integral Ad Science (IAS) can effectively detect and prevent ads from appearing alongside inappropriate content such as hate speech, sexual references, or violence. The findings have led to widespread surprise among advertisers, who are now questioning the efficacy of these systems.

Both DV and IAS have defended their technologies against the report’s criticisms. IAS contends that its commitment to digital media quality is unwavering and that it aims to set global standards for trust and transparency. Similarly, DoubleVerify has disputed the report’s framing and pointed to the comprehensive suitability settings it offers advertisers.

However, despite these assurances, there are concerns within the advertising and tech industries about the lack of transparency in how these tools operate, which complicates the ability to fully understand and evaluate their performance.

Experts and industry insiders are calling for more transparency to address the identified issues. One expert likened the current situation to a famous Star Wars scene, suggesting that the industry is experiencing a significant moment of realization regarding brand safety technology.

Questions about the real-time analysis of web pages, the categorization of user-generated content versus news content, and the methodology behind AI models remain unanswered. The lack of direct responses from DV and IAS has only intensified the call for greater openness.

Rocky Moss from DeepSee.io argues that measurement firms need to provide detailed data about page-level categorization accuracy and reliability. Advertisers should inquire about how vendors handle uncategorized URLs, paywalls, and potential overreliance on aggregate ratings.

Moss stresses that while categorization models inherently include some degree of error, vendors should be honest about these limitations to avoid misleading clients. The current lack of disclosure, he believes, undermines trust in the industry.
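The kind of disclosure Moss describes can be pictured as an audit: compare a vendor's page-level categories against a hand-labeled sample and report agreement, uncategorized rates, and the most common confusions. The sketch below is purely illustrative — the category names, the sample, and the metric choices are invented here and do not reflect any vendor's actual taxonomy or methodology.

```python
from collections import Counter

def categorization_error_report(labels):
    """Summarize vendor page-level categories against a hand-labeled audit.

    `labels` is a list of (vendor_category, audited_category) pairs.
    Category names are hypothetical, not any vendor's taxonomy.
    """
    total = len(labels)
    agree = sum(1 for vendor, audited in labels if vendor == audited)
    uncategorized = sum(1 for vendor, _ in labels if vendor == "uncategorized")
    mismatches = Counter(pair for pair in labels if pair[0] != pair[1])
    return {
        "sample_size": total,
        "agreement_rate": agree / total,          # how often vendor matched the audit
        "uncategorized_rate": uncategorized / total,  # pages the vendor left unrated
        "top_mismatches": mismatches.most_common(3),  # most frequent confusions
    }

# A toy audit sample: vendor label vs. human-reviewed label.
sample = [
    ("news", "news"),
    ("news", "user_generated"),
    ("uncategorized", "news"),
    ("violence", "news"),
    ("news", "news"),
]
report = categorization_error_report(sample)
```

Even a report this simple would let an advertiser see the error rate Moss says vendors should be disclosing, and where uncategorized URLs are inflating the gap.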

The distinction between brand safety and user safety is increasingly blurred, according to Tiffany Xingyu Wang of the Oasis Consortium. Wang suggests that the traditional focus on blocklists is no longer sufficient and that advertisers need more sophisticated tools that address both brand suitability and user safety.

Seekr, a company specializing in content evaluation, provides a model that offers transparent scoring and detailed explanations of content risks. This level of transparency, according to Seekr’s Pat LaCroix, can lead to better business decisions and align performance with advertiser expectations, challenging the status quo of brand safety technology.
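Transparent scoring of the kind described here pairs a numeric suitability score with human-readable reasons for it. A minimal sketch of that idea follows; the signal names, weights, and baseline are assumptions made up for illustration, not Seekr's (or any vendor's) actual model.

```python
def score_page(signals, weights=None):
    """Toy suitability scorer returning both a score and its reasons.

    Signal names and weights are hypothetical; the point is that every
    adjustment to the score is surfaced as an explanation.
    """
    weights = weights or {
        "hate_speech": -40,
        "graphic_violence": -30,
        "clickbait_headline": -10,
        "bylined_journalism": 15,
    }
    score = 50  # neutral baseline before any signals apply
    reasons = []
    for signal in signals:
        delta = weights.get(signal, 0)
        score += delta
        reasons.append(f"{signal}: {delta:+d}")  # record why the score moved
    return max(0, min(100, score)), reasons

# A bylined news page with a clickbait headline: 50 + 15 - 10 = 55.
score, reasons = score_page(["bylined_journalism", "clickbait_headline"])
```

The design choice is the explanation list itself: an advertiser seeing `bylined_journalism: +15` alongside the number can judge whether the model's reasoning matches their suitability standards, which is exactly the transparency the article says opaque scores lack.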
