Artificial intelligence (AI) is increasingly penetrating various sectors, extending even to border control. The US Immigration and Customs Enforcement (ICE) agency has turned to an AI-driven tool called Giant Oak Search Technology (GOST) to sift through social media content deemed “derogatory” to the United States. The revelation, first reported by 404 Media, has sparked concerns about the privacy and ethical implications of such surveillance practices.
US immigration uses GOST to scan social media posts
According to the confidential documents cited in the report, GOST helps US immigration authorities scan social media posts and assess whether an individual poses a risk to the U.S. The report sheds light on a powerful system that processes this information to inform critical decisions about allowing or denying entry into the country. The system allegedly evaluates an individual’s social media posts and assigns each a score from one to 100 based on its relevance to the user’s mission.
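To make the idea of a bounded one-to-100 relevance score concrete, here is a deliberately naive sketch: a keyword-overlap scorer that rates a post against a set of analyst-defined terms. The function name, the sample “mission” terms, and the keyword-matching approach are all hypothetical illustrations; GOST’s actual methodology is proprietary and has not been disclosed.

```python
# Hypothetical illustration only: a toy keyword-overlap scorer.
# This is NOT how GOST works; its methodology is not public.

def relevance_score(post: str, mission_terms: set[str]) -> int:
    """Score a post from 1 to 100 by the share of mission terms it mentions."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    if not mission_terms:
        return 1
    overlap = len(words & {t.lower() for t in mission_terms})
    fraction = overlap / len(mission_terms)
    return max(1, min(100, round(fraction * 100)))


if __name__ == "__main__":
    mission = {"border", "visa", "fraud"}  # hypothetical analyst-defined terms
    posts = [
        "Enjoying the weekend hiking with friends",
        "Discussing visa paperwork and border crossing dates",
    ]
    for p in posts:
        print(relevance_score(p, mission), "-", p)
```

A real system of this kind would almost certainly rely on statistical or language-model scoring rather than keyword matching; the sketch only conveys what a bounded relevance score looks like in practice.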
Social media reviews are not entirely new and have previously been used to investigate potential threats, but the integration of AI tools raises questions about the balance between national security and individual liberties, since such tools can process vast amounts of information at a speed no human team can match. Patrick Toomey, Deputy Director of the ACLU’s National Security Project, expressed reservations about the government’s use of algorithms to scrutinize social media posts.
AI’s impact on conflict and public perception
Toomey emphasized concerns about the government employing algorithms to judge people’s perceived risk, particularly when the technology lacks transparency and accountability. AI has also become far more consequential in the geopolitical landscape. Beyond its use in political scenarios, it is playing a significant role in the Israel-Palestine conflict, with both sides using the technology to bolster their positions or mount attacks against the other.
Public apprehension toward AI persists, particularly where personal privacy is concerned. A Pew Research Center study found that 32% of Americans believe AI used in hiring and evaluating workers could harm job applicants and employees. Additionally, a Reuters poll conducted over the summer found that a majority of Americans view AI as a threat to humanity. The broader implications of these technological advances are vast: while AI offers efficiency and precision, it also raises challenges for individual privacy rights, an enduring conundrum that accompanies many technological strides.