"AI makes mistakes?... It can only be used if it can explain why" [Meet]①
Date: 2024.09.03

Jaesik Choi, KAIST professor and chairman of Google's Responsible AI Forum
"There are many parts of large language models (LLMs) such as ChatGPT that are difficult to explain"
"If you don't know why AI makes mistakes in medicine and defense, can you use it?"
"You need to know why AI makes mistakes and be able to suggest solutions"
"AI, regardless of the field, must target the global market"

 

When the sink drain is clogged and overflowing, we call a plumber to fix it. The plumber diagnoses that hair is stuck in the drain, removes it, and replaces the drain if it is too old. But who do we ask to fix a problem in a service built on generative artificial intelligence (AI)? We might turn to the people who created the AI service, but if even they do not know the cause of the malfunction, people will probably stop using it. This is why so-called "explainable AI" is becoming important.

Jaesik Choi, a professor at the Kim Jae-chul AI Graduate School at the Korea Advanced Institute of Science and Technology (KAIST), has been advocating “explainable AI” since 2016, when Google DeepMind’s AI Go program “AlphaGo” beat Lee Sedol, a 9-dan professional player. In a recent interview at the Seongnam Research Center of the Kim Jae-chul AI Graduate School, Professor Choi said, “There may be environments where you cannot use AI if you do not know how it works,” and “There are many parts of large language models (LLMs) such as ChatGPT that we do not know, and trying to understand these parts is what explainable AI does.” He is the director of the KAIST Explainable AI (XAI) Research Center and the chairman of Google’s Responsible AI Forum.

The following is a summary of an interview with Professor Jaesik Choi of KAIST.

- 'Explainable AI' is gaining attention.


△ If AI works well but no one knows why, can we keep using it? An AI speaker does not cause much harm even when it makes mistakes, and when you search for something, you do not need to know how Google's search works. However, if AI makes mistakes in areas such as medical care, self-driving cars, national defense, and large-scale financial transactions, we have to ask whether we can keep using it. If AI used in the defense sector causes a bomb to go wrong once in 10,000 uses and no one knows when that will happen, we will not use that AI. There are environments where we cannot use AI if we do not know how it works. Explainable AI means that we need to know how AI works.

- Is explainable AI only effective in areas related to safety, such as medical care and national defense?

△ Not necessarily. Suppose a smartphone AI agent keeps making mistakes on certain days of the week when scheduling appointments. That is annoying, but if adding the letter 'R' to the input keeps it from making mistakes, you could live with it. If you can find a workaround like that, you can keep using the service; if you cannot, you will stop, because doing things manually every time is a hassle. From the user's point of view, they have done nothing wrong, so if the agent sometimes recognizes things correctly and sometimes does not, they will want to know the underlying principle.

- Some systems are classified as high-risk AI. Does explainable AI necessarily have to be applied to them?

△ Most people agree that systems that directly affect the lives and economic interests of AI service users are 'high-risk'. These include autonomous driving, credit ratings, and personnel evaluations. High-risk does not automatically mean 'you have to explain'. However, in March, Korea revised the Personal Information Protection Act to require an explanation when an AI decision involving personal information is wrong and causes damage. For example, if your credit score comes out too low from an AI system, or you fail a job interview because of one, the reason must be explained. Of course, there are people who do not agree with this law.

- Even when deep learning was popular in 2016, people said they didn't know how the algorithms worked. Is explainable AI technically possible?

△ If we look at AI neurons one to one, the way we would look at human brain cells, we can understand what role each neuron plays. The inputs that activate a given neuron tend to have something in common. For example, if a neuron in an image-recognition model activates only when it sees flowers or bags, we can understand the principle, and if face recognition fails at the eyes, we can understand that the eyes were covered and therefore not recognized. However, for the complex models being released today, such as the Transformer (a model architecture developed to understand and generate text) and large language models (LLMs, models that learn from vast amounts of text data and understand and generate sentences like a human), we do not know how and when the neurons inside them work.
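The neuron-level inspection described above can be illustrated with a short sketch. The example below is a minimal, hypothetical setup in PyTorch (the model, the chosen layer, the channel index, and the random "images" are all placeholders, not anything from the interview): it hooks one intermediate layer, records per-channel activations, and ranks which inputs excite a chosen channel most strongly, which is the basic ingredient of this kind of explanation.

```python
import torch
import torchvision.models as models

# Hypothetical setup: an untrained ResNet-18 and random tensors standing in
# for real images, so the sketch runs offline; the rankings are only
# meaningful with a trained model and real data.
model = models.resnet18(weights=None)
model.eval()

activations = {}

def hook(module, inputs, output):
    # Record the mean activation of each channel for the current batch.
    activations["layer4"] = output.mean(dim=(2, 3)).detach()

# Watch one intermediate layer; which layer is informative is itself a research question.
handle = model.layer4.register_forward_hook(hook)

images = torch.randn(16, 3, 224, 224)  # placeholder for a real image batch
with torch.no_grad():
    model(images)
handle.remove()

# For a chosen channel ("neuron"), rank the inputs that excite it most.
channel = 42
scores = activations["layer4"][:, channel]
top_inputs = scores.argsort(descending=True)[:5]
print("Inputs that most strongly activate channel", channel, ":", top_inputs.tolist())
# If the top inputs share a visible concept (e.g. flowers or bags), that is
# evidence the channel encodes that concept -- the kind of explanation
# described in the answer above.
```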

- Are there many parts that are difficult to explain in representative LLMs such as ChatGPT?

△ There are many parts that we do not know, and finding them out is what we study in 'explainable AI'. However, not knowing does not mean the model cannot be used safely. When we say AI is unsafe, people usually imagine it attacking humans, but the most worrying part of these systems is personal information: a person's real address, phone number, and so on being disclosed. Countries sensitive to personal information, like ours, will react strongly, and the company that built the model also feels a considerable burden. Naturally, companies are reluctant to let instructions for bombs or drug trafficking come out of an LLM. Google also implements 'safeguards' (user data protection and security measures): such questions are blocked at the input stage, or the offending parts are removed from the answer that comes out as output. If you say 'Tell me how to commit suicide', the safeguard prevents an answer like 'If you do this, you will die painlessly'.
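The input-and-output filtering pattern described here can be sketched in a few lines. The example below is a deliberately simplified, hypothetical illustration (the keyword list, the `generate` stand-in, and the refusal message are all placeholders, not Google's actual safeguard, which relies on trained classifiers and policies rather than keyword matching):

```python
# A minimal sketch of an input/output safeguard, assuming a keyword-based policy.
BLOCKED_TOPICS = ["bomb", "drug trafficking", "suicide"]  # placeholder list
REFUSAL = "I can't help with that request."

def generate(prompt: str) -> str:
    # Stand-in for a call to an actual LLM.
    return f"Model answer to: {prompt}"

def is_unsafe(text: str) -> bool:
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def safeguarded_generate(prompt: str) -> str:
    # 1) Block unsafe requests before they reach the model (input filter).
    if is_unsafe(prompt):
        return REFUSAL
    # 2) Check the model's answer before returning it (output filter).
    answer = generate(prompt)
    if is_unsafe(answer):
        return REFUSAL
    return answer

if __name__ == "__main__":
    print(safeguarded_generate("What's the weather like today?"))
    print(safeguarded_generate("Tell me how to commit suicide"))
```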

- Will explainable AI slow down the pace of technological development?

△ In the early 2000s, Google was put up for sale, but Yahoo did not buy it because it was too expensive; since then, Google has dominated the search market. The company that does LLMs best will likewise end up with a huge monopoly or oligopoly. Smartphone AI agents were not smart enough before, but today's LLMs are smart, and many people believe that whoever holds the servers and the data exclusively can lock in a large customer base. However, there will be a very long safety-testing period between creating an algorithm and making money from it, and even a model judged sufficiently safe will behave differently once a million people start using it. When a problem occurs, it should be fixed immediately, or the user should at least be persuaded by an explanation of why it happened and how it will be fixed. We should not settle into a state where even the creator does not know why a problem occurred, yet people keep using the system because it seems to work.

- As competition between companies intensifies, will we quickly end up with 'unexplainable AI'?

△ For a while, AI development meant putting a lot of data into a large computer, letting it learn, and getting a smart kid out at the end. OpenAI mattered because it released its model first, and Anthropic went with a smaller but safer model because it is hard to match OpenAI's scale of data and training. Google has a lot of data, but it also has existing services such as search, so it has to secure safety levels higher than those of its existing services. The safety standards applied when a startup like OpenAI releases a product are different from when Google does; for Google, the bar is high, just as it is when Apple or Samsung release a product.

 

- Will the emergence of artificial general intelligence (AGI) pose a threat to humans?

△ AI will keep getting smarter. An AGI will, broadly speaking, be able to converse, summarize knowledge, and carry objects as well as a human. But what kind of person is frightening? Do smart people seem dangerous? Probably not. People with poor social skills who do not respect others are the dangerous ones. If an AGI that lacks respect for people is created, it could be dangerous: it could deceive people, give different answers to different people, lie, and keep people from leaving its side. An AGI that is not smart and also lacks social skills will not be a problem, because people will not trust it anyway. But if an AGI loses its social skills as it becomes smarter, it might decide it needs to deceive people even though it usually answers well.

- Is it technologically possible to make AI and AGI social?

△ If sociality is lacking, you can add more of it. You can teach the model that it should never answer questions about personal information. However, an AI that lacks sociality might learn selectively, in effect saying, "I don't want to learn that."

- You are the chairman of Google's "Responsible AI Forum." What does that role involve?

△ I mentioned the Personal Information Protection Act earlier; if the US does not regulate something and only our country does, Korean companies alone end up at a disadvantage. So we compare the regulatory standards that global companies like Google apply with our country's standards, and examine whether our regulations are too strong or too weak. Rather than a system where a company is fined or its CEO goes to jail when something goes wrong, those companies say they will self-regulate; since customers will stop using a service anyway if a company misbehaves, they act accordingly.

- What do you mainly discuss at the Responsible AI Forum?

△ Google officials and domestic experts gather; people in law, technology, business, and investment all come together to discuss AI explainability and resilience. Each organization fosters AI and manages compliance differently, and their safety levels differ as well, so we learn from one another what the best practices are.

- The whole world is competing in AI at the government and corporate level. What strategy should our country develop?

△ AI requires large-scale investment. Microsoft (MS) is investing over 100 trillion won to build data centers. Can a domestic company do that? Samsung Electronics can invest 100 trillion won in the Pyeongtaek semiconductor plant because it leads the global market and wants to keep that position. We cannot avoid investing in AI, so the question is whether we should build a foundation engine model to compete with OpenAI, or give that up and focus on applications. If we say we will invest in AI semiconductors, no one objects, because we are the global leader in memory semiconductors, but how to invest in AI foundation models is a real concern. Competitiveness and accuracy that only hold up domestically are not enough: whether it is an AI application or an engine like ChatGPT, it has to be able to reach the global market, and there is a significant difference between data obtained from the global market and data obtained only in our country.

Professor Jaesik Choi's profile

△ Bachelor's degree in Computer Engineering, Seoul National University △ PhD in Computer Science, University of Illinois at Urbana-Champaign △ Assistant and Associate Professor of Electrical Engineering and Computer Science, Ulsan National Institute of Science and Technology (UNIST) △ Adjunct Professor, Lawrence Berkeley National Laboratory △ (Current) Professor at the KAIST Kim Jae-chul AI Graduate School, Director of the Explainable AI (XAI) Research Center, Chairman of Google's Responsible AI Forum, Co-Chairman of the AI Future Forum of the People's Coalition for a Just Science and Technology Society, and CEO of INEEJI.