Safety Will Be a Top Agenda Item at the Seoul AI Summit

The governments of South Korea and the United Kingdom will urge tech companies to address artificial intelligence safety oversight at this week’s international AI summit.

This meeting follows the first-ever global AI safety summit, held at Britain’s Bletchley Park last year, where a number of governments voiced their concerns over AI risks. AI companies, whose technology poses those risks, also attended.

The Seoul summit comes amid numerous international attempts to build guardrails for a fast-developing technology that has the potential to transform many facets of society, and amid growing worries about the risks it poses to daily life.

“Although positive efforts have been made to shape global AI governance, significant gaps still remain.”

The U.K. summit in November brought together researchers, government officials, tech executives, and representatives of civil society organizations, many of whom held divergent views on artificial intelligence. The meetings took place behind closed doors at Bletchley. Alongside politicians such as British Prime Minister Rishi Sunak, attendees included OpenAI CEO Sam Altman and Tesla CEO Elon Musk, among many others.

Safety Will Be a Priority at the AI Summit

Safety will once again be a top priority at the AI Seoul Summit, which begins on Tuesday. The United Kingdom and South Korean governments are co-hosting the meeting on May 21–22.

Prominent AI companies, such as Microsoft, Anthropic, Google DeepMind, ChatGPT creator OpenAI, and Mistral, a French AI startup, are scheduled to send representatives.

British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol also wrote a joint article, published in INews on Monday. Acknowledging the technology’s potential, they expressed their determination to ensure its safety, saying,

“Together, we are determined to seize the potential of this technology to transform the world for the better.”

They also stressed that, like other new technologies, AI brings new risks, and they pointed to its deliberate misuse by rogue actors seeking to leverage AI for misdeeds.

Just recently, OpenAI dissolved its safety team, known as the Superalignment team, after co-founder Ilya Sutskever and several other key employees left the company. The news came from the world’s leading AI company just days before the Seoul summit, whose focus is mitigating the risks of AI.

Also read: International Cooperation Urged to Address AI Risks at Bletchley Park Summit

In another development, Anthropic today released a report on its responsible scaling policy, which, according to the company, has shown good results. We cannot independently confirm those results, but together these developments provide some food for thought.

Innovation Must Continue

After ChatGPT’s meteoric rise to popularity shortly after its 2022 launch, tech companies worldwide began investing billions of dollars in building their own generative AI models. Sunak and Yoon noted the rapid pace of innovation, with new AI models launching almost daily. They said,

“The government can foster this innovation by investing billions, boosting cutting-edge research in our world-class universities, and ensuring that we do not overregulate the start-ups that could produce the next big idea.”

Proponents of generative AI have hailed the technology as a breakthrough that will enhance people’s lives and businesses worldwide, since it can produce text, images, music, and even video in response to simple prompts.

Some of the products that have gained public attention carry embedded biases, which is a cause for concern.

Also read: Working With AI Is Essential, Say AI Experts at FIDIC’s Global Leadership Forum

And it is not just these products: the technology behind them, large language models (LLMs), has also become the foundation for solutions across many sectors. From autonomous vehicles to medical applications, many rely on generative AI LLMs to function.

Concerned Voices Remain

Many people have demanded international guidelines to govern the development and application of AI. For example, a New York-based campaign called Ban the Scan is demanding a halt to the use of facial recognition by both government and private entities.

They argue that these technologies interfere with personal freedom and produce a high rate of false positives. A page on the Electronic Frontier Foundation’s website reads,

“Facial recognition is a threat to privacy, racial justice, free expression, and information security. Face recognition in all of its forms, including face scanning and real-time tracking, pose threats to civil liberties and individual privacy.”

They are not the only ones. Human rights advocates, governments, and critics have cautioned that AI can be abused in many ways. For example, non-state actors could use it to influence voters with fabricated news reports or so-called “deepfake” images and videos of elected officials. Another deepening concern is emerging reports of states backing rogue actors to serve their own interests.

“We will also take the next steps on shaping the global standards that will avoid a race to the bottom.”

The joint article included the statement above and stressed that discussing international norms for AI in a more open forum would be helpful. At the same time, some South Korean civil rights organizations have criticized the organizers for not inviting enough developing countries.

France will also host the “Viva Technology” conference this week, backed by LVMH, the world’s largest luxury group. Over the past year, France has sought to attract AI startups in a bid to position itself as Europe’s AI champion.

Cryptopolitan reporting by Aamir Sheikh
