Selective processing in the brain's visual cortex boosts AI image recognition ability

A smarter AI technology that resembles the human brain has emerged.



The research team led by Director Lee Chang-joon of the Cognition and Sociality Research Group at the Institute for Basic Science (IBS, President Noh Do-young), together with the team led by Professor Song Kyung-woo of the Department of Applied Statistics at Yonsei University, has developed a new technology that enhances the image recognition capabilities of AI by applying the way the brain's visual cortex selects and processes visual information.

The human visual system has excellent recognition capabilities: it can recognize objects at a glance and quickly pick out important information even in complex environments. Existing AI models, by contrast, still fall short of this ability.



In the actual brain's visual cortex, neurons are broadly and smoothly connected around a center, and the connection strength gradually changes with distance (a, b). In contrast, neurons in conventional convolutional neural networks (CNNs) process information only from a fixed rectangular area (e.g., 3×3 or 5×5) (c, d). [Photo = IBS]



Conventional convolutional neural networks (CNNs) perform well with relatively few operations, but because they split and analyze images with small square filters, they struggle to understand broad context or relationships between spatially separated pieces of information.
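To make that constraint concrete, the short PyTorch sketch below (an illustration written for this article, not code from the study) builds a standard convolutional layer: every output value is computed from a fixed 3×3 square patch of the input, regardless of what the image contains.

```python
import torch
import torch.nn as nn

# A conventional CNN layer: each output pixel is computed from a fixed
# 3x3 square patch of the input, no matter what the image contains.
conv = nn.Conv2d(in_channels=3, out_channels=16, kernel_size=3, padding=1)

x = torch.randn(1, 3, 32, 32)   # a dummy 32x32 RGB image
y = conv(x)                     # output shape: (1, 16, 32, 32)
```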



The research team focused on how the visual cortex of the human brain selectively processes visual information. The human visual cortex does not process all information equally, but selectively responds by focusing only on prominent features or important parts.



In this process, neurons have a structure that smoothly covers a wide area and selectively responds only to the necessary information. The research team proposed 'Lp-convolution', a technique that applies this approach to significantly improve the performance of CNN models.



Lp-convolution is designed to let AI prioritize key information when analyzing images, much as humans do. A 'mask' (a weight-map-style filter) generated automatically for each image emphasizes important parts and naturally excludes less important parts, like neurons in the visual cortex.



This mask adjusts its shape on its own during the learning process, allowing it to consistently focus on important features even in various environments.
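The idea of a smooth, learnable mask can be sketched in a few lines of PyTorch. The code below is an illustrative approximation, not the authors' implementation: a large-kernel convolution whose weights are multiplied by a mask of the form exp(-(|Δx/σ|^p + |Δy/σ|^p)), where the exponent p and width σ are learned, so the mask's shape adjusts during training. For simplicity this sketch uses a single learned mask shared across images rather than one generated per image, and the class name LpMaskedConv2d and its parameters are assumptions made for this example.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LpMaskedConv2d(nn.Module):
    """Illustrative sketch of an Lp-style masked convolution (not the study's code):
    a large-kernel convolution whose weights are modulated by a smooth mask whose
    shape (exponent p) and width (sigma) are learned during training."""

    def __init__(self, in_ch, out_ch, kernel_size=7, p_init=2.0, sigma_init=2.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, padding=kernel_size // 2)
        # Learn p and sigma in log space so they stay positive.
        self.log_p = nn.Parameter(torch.log(torch.tensor(p_init)))
        self.log_sigma = nn.Parameter(torch.log(torch.tensor(sigma_init)))
        # Kernel coordinates centred at zero, e.g. -3..3 for a 7x7 kernel.
        coords = torch.arange(kernel_size) - (kernel_size - 1) / 2
        yy, xx = torch.meshgrid(coords, coords, indexing="ij")
        self.register_buffer("yy", yy)
        self.register_buffer("xx", xx)

    def forward(self, x):
        p = self.log_p.exp()
        sigma = self.log_sigma.exp()
        # Smooth, centre-weighted mask: exp(-(|dx/sigma|^p + |dy/sigma|^p)).
        # p = 2 gives a Gaussian-like bump; larger p flattens toward a square;
        # smaller p concentrates the weight near the centre.
        mask = torch.exp(-((self.xx.abs() / sigma) ** p + (self.yy.abs() / sigma) ** p))
        weight = self.conv.weight * mask  # emphasise the centre, softly fade the edges
        return F.conv2d(x, weight, self.conv.bias, padding=self.conv.padding)

# Example: from the outside, the layer behaves like an ordinary convolution.
layer = LpMaskedConv2d(3, 16, kernel_size=7)
x = torch.randn(1, 3, 32, 32)
print(layer(x).shape)  # torch.Size([1, 16, 32, 32])
```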



"Just as people quickly grasp the core of a complex scene, Lp-convolution helps AI utilize computational resources efficiently while enabling more accurate analysis by incorporating the brain's information processing method," explained first author



Kwon Jae, an IBS postdoctoral researcher at the Max Planck Institute for Security and Information Protection in Germany. "Lp-convolution can greatly contribute to not only improving AI performance but also mimicking and understanding how the brain processes information," said IBS Director Lee Chang-joon. "It will be a good example of a new convergence model in which AI and brain science can advance together."



This study, titled 'Brain-Inspired Lp-Convolution Benefits Large Kernels and Aligns Better with Visual Cortex', has been accepted to the AI conference the International Conference on Learning Representations (ICLR) and will be presented at ICLR 2025, to be held in Singapore from April 24 to 28.





https://www.inews24.com/view/blogger/1836904
