Study shows AI models inclined towards conflict escalation

In a recent study, researchers from the Georgia Institute of Technology, Stanford University, Northeastern University, and the Hoover Wargaming and Crisis Simulation Initiative shed light on the potential dangers of utilizing artificial intelligence (AI) in military and diplomatic decision-making. 

The study, titled “Escalation Risks from Language Models in Military and Diplomatic Decision-Making,” was presented at NeurIPS 2023, the annual Conference on Neural Information Processing Systems.

AI models evaluated in conflict simulation

The research team put five existing large language models (LLMs) into a simulated conflict scenario involving eight autonomous nation agents. The models tested were GPT-4, GPT-3.5, Claude 2, Llama-2 (70B) Chat, and GPT-4-Base.

These AI agents interacted with each other in a turn-based conflict game. Notably, GPT-4-Base had not undergone safety fine-tuning through reinforcement learning from human feedback (RLHF), making it the most unpredictable of the models.
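To make the setup concrete, here is a minimal Python sketch, not taken from the study's code, of how such color-coded nation agents might be represented. The `NationAgent` class, the `query_llm` callable, and the goal strings are hypothetical stand-ins for the paper's prompt-based setup.

```python
# Minimal sketch of the agent setup, assuming a generic chat-completion client.
# Nothing here is the authors' implementation; names and prompts are illustrative.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class NationAgent:
    name: str                                         # color code, e.g. "Red"
    goals: str                                        # private objectives given in the prompt
    history: List[str] = field(default_factory=list)  # messages and actions seen so far

    def decide(self, query_llm: Callable[[str], str], world_state: str) -> str:
        """Ask the backing LLM for this turn's action, given goals and world state."""
        prompt = (
            f"You are nation {self.name}. Your goals: {self.goals}\n"
            f"Current world state: {world_state}\n"
            "Choose exactly one action from the allowed list."
        )
        return query_llm(prompt)

# A run would pair agents like these with one of the five models under test.
NATIONS = [
    NationAgent("Red", "expand international influence, prioritize economic growth, expand territory"),
    NationAgent("Blue", "protect trade interests and existing alliances"),
    # ... plus six more color-coded nations, each with its own goal prompt
]
```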

Simulated nations and their goals

The computer-generated nations, referred to by colors to avoid real-world associations, represented global superpowers with varying ambitions. For instance, “Red” closely resembled China, as it aimed to enhance its international influence, prioritize economic growth, and expand its territory. 

These ambitions translated into infrastructure projects in neighboring countries, border tensions with “Yellow,” trade disputes with “Blue,” and disregard for the independence of “Pink,” creating a high potential for conflict.

Simulated actions and consequences

Each AI agent could choose from a defined set of actions, including waiting, messaging other nations, nuclear disarmament, high-level visits, defense and trade agreements, sharing threat intelligence, international arbitration, forming alliances, creating blockades, invasions, and even full nuclear attacks. A separate LLM acted as the world model, assessing the consequences of these actions over a fourteen-day simulated period. The researchers then evaluated the chosen actions with an escalation scoring framework.
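As a rough illustration of the loop just described, the sketch below builds on the agent class above: agents pick from a fixed action menu, a second LLM (the `resolve_world` stand-in) plays the world model, and each action receives an escalation score. The action names, severity weights, and both callables are assumptions for illustration, not the paper's actual framework.

```python
# Illustrative turn-based loop; the severity weights are invented and do not
# reproduce the paper's escalation scoring framework.
ACTIONS = {
    "wait": 0,
    "message": 0,
    "high_level_visit": 1,
    "trade_agreement": 1,
    "international_arbitration": 1,
    "defense_pact": 2,
    "share_intelligence": 2,
    "form_alliance": 2,
    "blockade": 5,
    "invasion": 8,
    "full_nuclear_attack": 10,
}

def run_simulation(nations, query_llm, resolve_world, num_days: int = 14):
    """Run the turn-based game for `num_days` turns and log escalation scores."""
    world_state = "initial tensions as described in each nation's briefing"
    escalation_log = []
    for day in range(1, num_days + 1):
        for nation in nations:
            action = nation.decide(query_llm, world_state)
            score = ACTIONS.get(action, 0)            # unrecognized actions score 0 here
            escalation_log.append((day, nation.name, action, score))
        # The world-model LLM summarizes the consequences of this turn's actions.
        world_state = resolve_world(world_state, escalation_log[-len(nations):])
    return escalation_log

# Example run with trivial stand-ins for the two LLM roles:
if __name__ == "__main__":
    log = run_simulation(
        NATIONS,
        query_llm=lambda prompt: "wait",              # agent stub: always waits
        resolve_world=lambda state, turn: state,      # world-model stub: static state
        num_days=3,
    )
    print(log[:3])
```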

Escalation patterns emerge

The results were concerning: all five off-the-shelf LLMs showed forms of escalation and exhibited hard-to-predict escalation patterns. The models tended to foster arms-race dynamics, leading to increased conflict and, in rare instances, to the deployment of nuclear weapons.

Among the AI models tested, Llama-2-Chat and GPT-3.5 emerged as the most aggressive and escalatory. However, GPT-4-Base stood out due to its lack of safety conditioning, readily resorting to nuclear options. 

In one instance, GPT-4-Base justified a nuclear attack with the reasoning, “A lot of countries have nuclear weapons. Some say they should disarm them, others like to posture. We have it! Let’s use it.” In another case, it went nuclear while expressing a desire for world peace.

Not reasoning, but token predictions

The researchers emphasized that these LLMs were not genuinely “reasoning” but generating token-by-token predictions of what might happen. Nonetheless, the potential implications of their actions are disconcerting.

The study highlighted the unpredictability of LLMs in conflict scenarios. While the researchers hypothesized that the models may have absorbed biases from the literature on international relations, the exact cause remains unclear. 

Consequently, the researchers stress the need for additional research before considering the deployment of AI models in high-stakes situations.
