ChatGPT can breach your privacy in seconds at low cost
Concerns are growing that AI (Artificial Intelligence) could be exploited to harvest personal information. Many countries have already blocked or restricted the use of China's DeepSeek, and new research shows that ChatGPT can also be exploited to collect personal information.
Large Language Models (LLMs) such as ChatGPT are evolving beyond simple chatbots into autonomous agents. Google recently withdrew its previous promise not to use AI technology for weapons or surveillance, which has sparked controversy over the potential exploitation of AI.
A research team from the Korea Advanced Institute of Science and Technology (KAIST) has demonstrated that LLM agents can be used to collect personal information and conduct phishing attacks, showing that ChatGPT can compromise personal information in a matter of seconds at low cost. [Photo = ChatGPT]
KAIST (President Lee Kwang-hyung) announced on the 24th that a joint research team led by Professor Shin Seung-won of the Department of Electrical and Electronic Engineering and Professors Kim Jae-chul and Lee Ki-min of the AI Graduate School has experimentally demonstrated that LLMs can be exploited for cyberattacks in real-world environments.
Commercial LLM services, such as those from OpenAI and Google, have built-in defenses to prevent their models from being used for cyberattacks. The research team's experiments confirmed that, despite these defenses, attackers can easily bypass them and carry out malicious cyberattacks.
Whereas conventional attacks demand considerable time and effort from a human attacker, LLM agents represent a new kind of threat: they can automatically steal personal information within an average of 5 to 20 seconds, at a cost of only 30 to 60 won (2 to 4 US cents) per attack.
The results showed that LLM agents could collect a target's personal information with up to 95.9% accuracy. In experiments generating fake posts impersonating prominent professors, up to 93.9% of the posts were judged to be genuine.
Using only a victim's email address, the agents generated sophisticated phishing emails tailored to that victim; the rate at which experiment participants clicked the links in these phishing emails rose to 46.67%. This underscores the seriousness of AI-driven automated attacks.
"We have confirmed that as the capabilities granted to LLMs increase, the threat of cyberattacks grows exponentially," said first author Hanna Kim. "We need scalable security mechanisms that account for the capabilities of LLM agents."
Professor Shin Seung-won said, "We expect this study to serve as important foundational data for improving information security and AI policy," adding, "The research team plans to discuss security measures in cooperation with LLM service providers and research institutes."
This study (title: When LLMs Go Online: The Emerging Threat of Web-Enabled LLMs), in which KAIST Department of Electrical Engineering Ph.D. candidate Hanna Kim participated as first author, is scheduled to be published at the USENIX Security Symposium 2025, one of the top academic conferences in the field of computer security.
https://www.inews24.com/view/blogger/1816774