"DeepSeak Unpacked... A Lot of Information on 'Weapons of Mass Destruction'"
It was revealed that the artificial intelligence (AI) of the Chinese startup DeepSec contains a lot of information related to weapons of mass destruction.
Chinese AI startup DeepSeek logo [Photo = Yonhap News]
Kim Myeong-ju, director of the AI Safety Research Institute, made this announcement on the 17th at an emergency joint forum, "DeepSeek's Impact and Future Prospects," hosted online by the Federation of Korean Science and Technology Societies, the Korean Academy of Science and Technology, and the National Life Science Advisory Group.
He said, "Since DeepSec caused global waves late last month, we have been studying the risks of this AI model and have confirmed how much information on biology and chemistry that can be used to create weapons of mass destruction is contained in it, and it really is quite a lot."
Director Kim stated, "If China needs it for national security, it can access almost unlimited information about companies in China and use all of its subscribers' personal information, so it can conduct so-called 'profiling' to check for party affiliation." He also analyzed that DeepSeek could contain a backdoor.
He emphasized, "'Hidden code' is code that users cannot detect under normal conditions but that is activated under special circumstances. Because DeepSeek is open source, hidden code could plant a backdoor in everything that is later built on top of it."
Lee Sang-geun, a professor at the Artificial Intelligence Lab of Korea University's Graduate School of Information Security, cited Cisco's blog in noting that DeepSeek ranked worst among major AI models for jailbreak vulnerability.
Jailbreaking an AI model refers to an attack that breaks through the model's built-in guardrails to make it perform tasks its developers never intended.
According to Cisco, the DeepSeek model's jailbreak success rate reached 100%, followed by Meta's Llama 3.1 (96%) and OpenAI's GPT-4o (86%).
Meanwhile, the Personal Information Protection Commission (PIPC) confirmed that user information from the DeepSeek app had been transmitted to TikTok's parent company, ByteDance, and suspended new downloads of the app until the service is improved.
The service will resume once improvements are made in accordance with Korea's Personal Information Protection Act.
https://www.inews24.com/view/blogger/1814696