AI regulation, a three-way battle between the US, UK and EU... “Korea must secure global leadership” [AI War Season 2]
Date: 2023.11.13

The world is debating ‘who will regulate artificial intelligence (AI), how, and to what extent?’ On the surface, governments appear united around the goal of ‘safe AI’, but behind the scenes a war is being waged to steer the course of regulation to suit each player’s own interests. It is a fierce contest between the US, which wants to promote its technology, the UK, which wants to be the world’s ‘mediator’, and the EU, which wants to be the world’s ‘referee’. Some say Korea also needs a clear regulatory direction to support its national industrial competitiveness.


◇ US, UK, EU 'three-way war'

The EU, which has no homegrown big tech companies, has been preparing its AI regulation bill (the AI Act) since 2021, earlier than anyone else. Last June, the European Parliament passed a draft of the law, which regulates generative AI technologies such as ChatGPT by classifying them into four levels of risk.

Mauritz-Jan Prinze, a special adviser on AI technology who met with the JoongAng Ilbo at the European Commission headquarters in Brussels, Belgium, on the 16th of last month, said, “The AI law is essential for AI that is in line with EU values and for fair competition between large corporations and startups. Only when regulations become clearer will the competitiveness of AI startups (in EU member states) improve.” The EU is now coordinating the details of the law with the goal of finalizing it within the year.

The UK held an AI Safety Summit on the 1st and secured the ‘Bletchley Declaration’, a pledge to cooperate on safe AI, from 28 countries including the G7. British Prime Minister Rishi Sunak is known to have pushed ahead with inviting a Chinese representative to the talks despite opposition at home. The ambition is to reach beyond blocs and become a ‘global AI mediator’. The UK also decided to invest a total of 1.5 billion pounds (about 2.5 trillion won) in AI, next-generation supercomputers, and quantum computing research, calculating that a referee backed by its own technological capability can set itself apart from the EU.


The United States, home to the majority of AI companies, is leading the way in ‘looking after its own’. On the 30th of last month, U.S. President Joe Biden signed an executive order governing the development and provision of AI services. Its gist is to require companies to report AI safety test results to the government, but it applies only to large AI models above a certain size. Regulatory uncertainty was removed for big tech, while small AI startups were exempted from regulation. Big tech unanimously welcomed the executive order, calling it “an important step forward in AI technology governance” (Microsoft) and pledging, “We will work with the government to maximize AI’s potential in a faster and safer way” (Google).

◇ Korea, from ‘digital order’ to ‘AI order’
Korea will co-host a ‘mini-summit’ with the UK in May next year as a follow-up to the AI Safety Summit. As this is an opportunity to secure global leadership, some point out that a more strategic approach to regulation is needed. The ‘Digital Bill of Rights’ announced by the Ministry of Science and ICT last month is a declarative document laying out digital norms and order, and a ‘Bill on fostering the AI industry and creating a foundation for trust’ is pending in the National Assembly.

Ha Jeong-woo, head of Naver Cloud’s AI Innovation Center, said, “Korea can strengthen its voice in the international community by sharing cases such as how it secured safety with AI in the public sector.” There are also calls for Korea to take the lead in developing benchmarks for evaluating large language models (LLMs). Minyoung Hwang, vice president of Selectstar, an AI company participating in the government’s ‘LLM Reliability Benchmark Data’ project, said, “There is no international standard for evaluating the ethics and safety of LLMs, so Korea can lead the way.” Choi Jae-sik, a professor at the KAIST Kim Jae-cheol Graduate School of AI, said, “Fields such as defense, medical care, and manufacturing, where AI has a significant impact on people’s lives and property, will be industries that require thorough national verification.”

◇ AI research community: ‘Regulate the use, not the basic technology’
The AI technology industry is concerned that the regulatory competition among governments around the world could lead to ‘blanket regulation’. Gillian Hadfield, a law professor at the University of Toronto who served as a senior policy adviser to OpenAI, told the JoongAng Ilbo, “Regulating all AI is the same as regulating the entire economy.” Her argument is that rather than regulating the technology itself, regulators should target how the technology is used, so that it cannot be put to malicious ends.

※This article was produced with support from the Press Promotion Fund raised through government advertising fees.