Clues to Enhancing Artificial Intelligence Found in Brain Science

Artificial intelligence (AI), which once merely mimicked human capabilities, has begun to surpass humans in certain areas. AI now demonstrates reasoning and argumentation, domains once thought to be uniquely human. The emergence of AI that can think like humans is on the horizon.

The secret to creating human-like AI lies in understanding humans. Director C Justin LEE of the Center for Cognition and Sociality and CHA Mi Young of the Data Science Group within the Institute for Basic Science (IBS) have found clues to enhancing AI’s learning abilities in the human brain. They discovered that the memory and learning mechanisms of AI models bear similarities to the memory consolidation process in the human brain. This research was presented last December at the Conference on Neural Information Processing Systems (NeurIPS), one of the world’s most prestigious AI conferences.

In humans, short-term memory is converted into long-term memory in a part of the brain called the hippocampus. During this process, NMDA receptors in neurons play a crucial role in memory formation by regulating the strength of neural connections. The research team focused on the nonlinearity of NMDA receptors, which conduct ions only under specific conditions. They found that AI models exhibit a nonlinearity similar to that of NMDA receptors, mirroring the memory consolidation process in the human hippocampus. This suggests that AI forms long-term memories in a manner similar to humans.
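
As a rough illustration of that voltage-dependent gating, the sketch below computes the NMDA receptor current using the commonly cited Jahr-Stevens form of the magnesium block. The parameter values are textbook-style defaults chosen for illustration, not figures from the IBS study.

```python
import numpy as np

def nmdar_current(v_mv, g_max=1.0, mg_mM=1.0, e_rev_mv=0.0):
    """Approximate NMDA receptor current as a function of membrane voltage.

    Uses the widely cited Jahr & Stevens (1990) form of the voltage-dependent
    magnesium block. Parameter values are illustrative defaults, not values
    taken from the study discussed in this article.
    """
    mg_block = 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mv))
    return g_max * mg_block * (v_mv - e_rev_mv)

# The channel is strongly suppressed at hyperpolarized voltages and opens up
# once the Mg2+ block is relieved -- the sigmoid-like gating that the article
# compares with the nonlinearity found in AI models.
voltages = np.linspace(-80, 40, 7)
print(np.round(nmdar_current(voltages), 2))
```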

Co-first author Dr. KIM Dongkyum stated, "Based on studies of the similarities between humans and AI, we can enhance AI’s performance." We interviewed him to learn more about the background and outcomes of this research.

Q. Please introduce yourself.

A. I am KIM Dongkyum, a postdoctoral researcher in the Data Science Group at the IBS Center for Mathematical and Computational Sciences. I joined the research team after earning my Ph.D. in 2022.

Q. What does the Data Science Group do?

A. Data science involves finding and analyzing hidden patterns in big data to solve problems. The IBS Data Science Group conducts various research projects aimed at solving social challenges using artificial intelligence. For example, we are developing an AI customs officer that can detect illegal activities such as smuggling or falsifying the declared country of origin just by analyzing import/export customs declarations. We also develop algorithms to analyze how sleep duration varies with geographic and cultural influences. Recently, we have been collaborating with various research teams to develop technologies that can detect economic changes in North Korea using satellite imagery or predict climate change.

Q. How did this research start?

A. I have always been very interested in neuroscience. After joining IBS, I met Dr. KWON Jae from Director C Justin Lee's group, who was in turn interested in artificial intelligence. This meeting gave us an opportunity to exchange ideas. We discussed existing research suggesting that the functioning of AI models is similar to the computational processes in the hippocampus of the brain. This discussion highlighted potential areas for studying the correlation between the human brain and AI models, which led us to start this research.

Q. Have you always been interested in neuroscience?

A. Yes, I have been very interested in understanding AI through neuroscience since my Ph.D. studies. Neuroscience in particular offers numerous tools, such as fMRI, for analyzing brain signal data. I have conducted research that applies these tools to the analysis of AI models. Recently, I was the second author of a paper published in Nature Communications that used neuroscience analysis tools to study AI.

Q. Please explain this research.

A. The process by which the human brain converts short-term memory into long-term memory is called "memory consolidation." A key finding of this research is that a similar memory consolidation process is observed not only in humans but also in AI models.

Transformer-based AI models process data through "self-attention layers" and "feed-forward layers." Previous studies have shown that the data processing in self-attention layers is similar to how the human hippocampus stores information. In this research, we focused on the AI model's feed-forward layers and discovered that the nonlinearity in these layers closely resembles the nonlinearity of NMDA receptors in the hippocampus. This similarity suggests a comparable mechanism of memory formation in both human brains and AI models.
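
To make the analogy concrete, here is a minimal sketch of a transformer feed-forward block whose activation has the sigmoid-gated form x * sigmoid(alpha * x). This shape mirrors the magnesium-gated NMDA receptor current shown earlier; the class name and the parameter alpha are illustrative choices for this sketch, not the authors' released code.

```python
import torch
import torch.nn as nn

class NMDAInspiredFeedForward(nn.Module):
    """Transformer feed-forward block with a sigmoid-gated activation.

    The activation x * sigmoid(alpha * x) has the same overall shape as the
    magnesium-gated NMDA receptor current: suppressed for small or negative
    inputs, roughly linear for large positive inputs. alpha controls how
    sharply the gate opens (alpha = 1 is SiLU; alpha of about 1.702
    approximates GELU). This is an illustrative sketch, not the study's code.
    """
    def __init__(self, d_model=512, d_hidden=2048, alpha=1.0):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_hidden)
        self.w_out = nn.Linear(d_hidden, d_model)
        self.alpha = alpha

    def forward(self, x):
        h = self.w_in(x)
        h = h * torch.sigmoid(self.alpha * h)  # NMDAR-like nonlinearity
        return self.w_out(h)

ff = NMDAInspiredFeedForward()
out = ff(torch.randn(2, 16, 512))  # (batch, sequence length, d_model)
print(out.shape)                   # torch.Size([2, 16, 512])
```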

Q. How can the research findings be applied?

A. In the brain, increasing the efficiency of converting short-term memory into long-term memory means stronger long-term memories can be formed with minimal energy. The same applies to artificial intelligence. We have identified a way to maximize long-term memory with minimal training. This discovery could lay the foundation for future "low-cost, high-performance" AI systems.

Q. What was the most challenging part of the research?

A. I wasn't very familiar with the fields of neuroscience or brain science, and the researchers in those fields weren't familiar with AI models. This led to difficulties in communication, such as understanding terminology and concepts related to modeling. However, this interdisciplinary research also gave me an opportunity to gain deeper insights into my own field. For instance, AI researchers often treat weight changes simply as something that happens through the algorithm's learning process. In contrast, brain science research delves into why synaptic weights change, exploring the specific biological processes behind them. These brain science studies have greatly aided our understanding of AI.

Q. Are there any fields you want to research in the future?

A. One of the problems with artificial intelligence is that it retains everything indiscriminately, including sensitive information. In contrast, humans do not remember everything and can be selective about what they retain and recall. When this selective process fails, painful memories persist, which is one reason people such as military personnel suffer from post-traumatic stress disorder (PTSD). So despite the similarities between AI and human memory, there are significant differences in how they operate. I am interested in researching how large language models (LLMs) retain certain data and, further, how we can enable them to forget specific information.

Q. Is there anything you need for future research?

A. For this study, we utilized IBS's GPU cluster to perform the necessary computations. However, future research on large language models (LLMs) will require more extensive computational resources to handle larger-scale calculations. Fortunately, IBS's supercomputer, "Olaf," is scheduled to begin full operation in March 2024, which should provide the resources needed for our work. Access to such resources would greatly facilitate our research.
