Adverse media screening is an essential aspect of safeguarding a company’s reputation, and to achieve this, many organizations are turning to artificial intelligence (AI) solutions. According to a recent survey by cybersecurity firm Ripjar, 71% of companies now utilize AI or machine learning (ML) as a core component in their screening processes. The report, titled “2023 State of Adverse Media Screening,” provides insights into the attitudes and practices of compliance professionals across various countries. While AI adoption is growing, some compliance teams still rely on manual processes, highlighting the need for further technological advancements.
AI adoption in adverse media screening
The survey reveals that AI adoption is steadily becoming standard practice in adverse media screening. Among the 205 compliance professionals surveyed in Benelux, Sweden, Finland, Germany, France, Italy, the United Kingdom, and the United Arab Emirates, only 14% have no plans to integrate Large Language Models (LLMs) like ChatGPT into their screening processes. This indicates widespread recognition of AI's potential to improve risk detection and compliance outcomes.
Embracing AI for efficiency and risk mitigation
Ripjar’s CEO and co-founder, Jeremy Annis, emphasizes that adopting AI and machine learning in adverse media screening marks a transformative era. The technology empowers compliance teams to handle a vast range of challenges effectively. The report points out that 62% of respondents acknowledge an over-reliance on manual processes, leading to time and resource inefficiencies. However, 71% of firms using technology for screening already incorporate AI or ML, showcasing a growing trend toward efficiency and risk mitigation.
ChatGPT, an AI-powered language model, has gained significant popularity since its launch in November 2022. The platform’s monthly active users nearly doubled between December 2022 and January 2023, reaching 100 million. By June 2023, ChatGPT achieved over 1.6 billion site visits, demonstrating its widespread adoption across various industries.
Cautious approach towards AI
While AI adoption is rising, some compliance teams remain hesitant about fully embracing AI and LLMs. The report indicates that 14% of respondents show no interest in using these platforms for adverse media screening. This hesitancy may stem from concerns about the technology's trustworthiness and potential workforce displacement. However, the survey also reveals that 58% of respondents are at least somewhat confident in LLMs, suggesting growing trust in AI solutions.
The report highlights that 50% of the surveyed companies are considering adopting LLMs in the future, while an additional 34% are eager to explore these models. Despite the popularity of ChatGPT and the growth of generative AI, only 2% of respondents currently use LLMs in their adverse media screening processes.
Importance of trust and model governance
Gabriel Hopkins, Chief Product Officer at Ripjar, emphasizes the importance of working with a trusted solution that delivers both performance and the required level of model governance. This is crucial to alleviating concerns about AI and ensuring organizations can leverage advanced technology to mitigate risks and proactively maintain compliance in an ever-changing landscape.
The 2023 State of Adverse Media Screening report by Ripjar underscores the increasing role of AI and machine learning in the fight against financial crime. Compliance teams are gradually recognizing the advantages of technology-driven solutions for managing risks, handling vast amounts of unstructured data, and prioritizing alerts for analysts. While some hesitation remains, the transformative potential of AI in adverse media screening is clear. As organizations navigate a challenging regulatory environment, embracing advanced technology becomes essential for optimizing compliance efforts and safeguarding their reputation.