In recent years, the use of artificial intelligence (AI) in government operations has become increasingly prevalent, and it has sparked significant debates about privacy, accountability, and fairness. A recent revelation that the U.S. Immigration and Customs Enforcement (ICE) agency has used AI to monitor the social media activity of visa applicants has ignited fresh discussion of the role of algorithms in determining who is considered “risky.”
The AI-powered tool: Giant Oak Search Technology (GOST)
Documents obtained through a Freedom of Information Act (FOIA) action have revealed that ICE has been utilizing an AI-powered tool known as Giant Oak Search Technology (GOST) since 2014. GOST is designed to scan the social media posts of visa applicants and assign them a “social media score” ranging from 1 to 100. This score is determined based on whether the posts are “derogatory” toward the United States. If an applicant’s score raises concerns, ICE analysts can review flagged images and profiles to assess whether they pose a potential risk.
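The scoring-and-review workflow the documents describe could be sketched roughly as follows. GOST’s actual model, vocabulary, and flagging threshold are not public, so every concrete value here (the keyword list, the scoring formula, the cutoff) is a hypothetical placeholder, not a reconstruction of the real system:

```python
# Hypothetical sketch of a threshold-based flagging workflow like the one
# described in the FOIA documents. GOST's internals are not public; the
# keyword list, scoring formula, and cutoff below are invented placeholders.

DEROGATORY_TERMS = {"term_a", "term_b"}  # placeholder vocabulary, not GOST's
FLAG_THRESHOLD = 75                      # assumed cutoff, not documented

def score_posts(posts):
    """Assign a 1-100 'social media score'; higher means more flagged content."""
    if not posts:
        return 1
    # Count how many posts contain at least one flagged term.
    hits = sum(
        any(term in post.lower() for term in DEROGATORY_TERMS)
        for post in posts
    )
    # Scale the fraction of matching posts onto the 1-100 range.
    return 1 + round(99 * hits / len(posts))

def review_queue(applicants):
    """Return (name, score) pairs whose score meets the analyst-review cutoff."""
    return [
        (name, score)
        for name, posts in applicants.items()
        if (score := score_posts(posts)) >= FLAG_THRESHOLD
    ]
```

Even in this toy form, the sketch illustrates the civil-liberties concern raised below: the outcome hinges entirely on an opaque vocabulary and an arbitrary threshold that the people being scored never see.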
A multi-million-dollar investment
The scope of this AI surveillance program is significant, with ICE reportedly paying over $10 million to Giant Oak for the technology since 2017. Moreover, Giant Oak’s reach extends beyond ICE, as the company also holds contracts with other government agencies, including the Drug Enforcement Administration (DEA), Air Force, State Department, and Treasury Department. The widespread use of such technology in various government agencies has prompted privacy advocates to question its implications for civil liberties.
Privacy advocates sound the alarm
Privacy advocates argue that using AI algorithms to screen social media content raises serious concerns. Patrick Toomey, deputy director of the American Civil Liberties Union’s (ACLU) national security project, expressed his concerns, stating, “The government should not be using algorithms to scrutinize our social media posts and decide which of us is ‘risky.’” Toomey called for transparency and urged the Department of Homeland Security (DHS) to explain how its systems determine an individual’s risk and what consequences those flagged by its algorithms may face.
The origins of social media surveillance
The social media surveillance program under discussion began as a pilot in 2016, primarily targeting potential visa overstayers. Separately, in 2019, the Trump administration implemented rules requiring visa applicants to disclose the social media accounts they had used over the previous five years as part of the screening process. These measures were introduced in an attempt to enhance national security, but they have since become a subject of controversy.
The risk of discrimination
Experts have raised concerns that these AI-driven practices may inadvertently lead to discrimination. Automated systems, they argue, could disproportionately flag applicants from certain countries or backgrounds, potentially resulting in unjust outcomes. A notable case in 2019 saw a Harvard student denied entry to the United States due to the social media activity of their friends, highlighting the potential for unintended consequences.
The future of AI in immigration enforcement
While ICE’s contract with Giant Oak reportedly ended in 2022, the broader issue of using AI to assess applicants’ social media content continues to be a topic of concern. The need for more oversight and regulation to prevent abuse and protect civil liberties is apparent. The implications of these practices extend beyond immigration enforcement and raise fundamental questions about the balance between security and individual privacy in the digital age.
The use of AI algorithms to monitor and assess social media content in visa screening has put the government’s role in deciding who counts as a potential risk under scrutiny. Privacy advocates argue that such practices raise significant civil liberties concerns, and experts warn of the risk of discrimination based on nationality or background. As the technology evolves, striking a balance between national security and the protection of individual privacy rights becomes increasingly important. The case of GOST and its use by ICE serves as a stark reminder of the challenges that lie ahead in the age of AI-powered surveillance.