The Threat of Malicious Code Distribution through Hallucinations

Researchers Discover How ChatGPT Hallucinations Can Be Exploited in Malicious Code Distribution

Vulnerability and risk management firm Vulcan Cyber has shown how hackers could exploit AI chatbots such as ChatGPT to distribute malicious code packages to software developers. The issue stems from hallucinations, which occur when large language models produce factually incorrect or nonsensical information that nevertheless appears plausible. In its analysis, Vulcan's researchers noted that ChatGPT, probably because it was trained on older data, recommended code libraries that do not exist. Vulcan warned that threat actors could collect the names of such non-existent packages and publish malicious versions under them, which developers might then download on the strength of ChatGPT's recommendations. Because developers and security teams regularly use AI chatbots to answer cybersecurity and programming questions, this vulnerability is particularly concerning.

The Researchers’ Analysis

Vulcan researchers collected common coding questions from the Stack Overflow platform and put them to ChatGPT in the context of Python and Node.js. Of the more than four hundred questions asked, roughly one hundred drew responses that referenced at least one Python or Node.js package that does not exist; in total, ChatGPT's responses mentioned over 150 non-existent packages. To demonstrate how the method would work in the wild, the researchers created a proof-of-concept package capable of exfiltrating confidential system information from a device and uploaded it to the npm registry.
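The core of Vulcan's check, whether a recommended name actually resolves in a public registry, is straightforward to reproduce. The sketch below is not Vulcan's tooling; it is a minimal illustration, assuming the third-party `requests` library, that queries the public PyPI and npm registry endpoints, where an HTTP 404 means no package of that name has been published.

```python
import requests

def exists_on_pypi(name: str) -> bool:
    """Return True if a package with this name is published on PyPI."""
    resp = requests.get(f"https://pypi.org/pypi/{name}/json", timeout=10)
    return resp.status_code == 200

def exists_on_npm(name: str) -> bool:
    """Return True if a package with this name is published on npm."""
    resp = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
    return resp.status_code == 200

# Hypothetical usage: screen package names pulled from a chatbot answer
# before anyone runs `pip install` or `npm install` on them.
for pkg in ["requests", "some-hallucinated-package"]:
    print(pkg, "-> PyPI:", exists_on_pypi(pkg), "| npm:", exists_on_npm(pkg))
```

A name that resolves on neither registry is exactly the kind of gap an attacker could later claim with a malicious upload.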

Exploitation by Hackers

It is not easy to tell whether a software package recommended by ChatGPT is malicious if the attacker effectively obfuscates their work or uses additional techniques, such as building a trojan package that is genuinely functional. Given how routinely threat actors plant malicious code libraries in well-known repositories, it is crucial for developers to authenticate libraries and confirm they are legitimate. This is even more critical when acting on suggestions from sources such as ChatGPT, which may recommend packages that do not exist, or that did not exist until threat actors created them.
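One concrete way to authenticate a dependency is to record its expected digest when it is first vetted, then verify every later download against that pin. The sketch below illustrates the idea in plain Python; it is not a substitute for established tooling such as pip's hash-checking mode, and the URL and digest shown are hypothetical placeholders.

```python
import hashlib
import requests

def sha256_of(url: str) -> str:
    """Download an artifact and return its SHA-256 hex digest."""
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return hashlib.sha256(resp.content).hexdigest()

# Hypothetical pin, recorded when the dependency was first reviewed.
PINNED_URL = "https://files.pythonhosted.org/packages/example-1.0.tar.gz"
PINNED_SHA256 = "replace-with-the-digest-recorded-at-review-time"

if sha256_of(PINNED_URL) == PINNED_SHA256:
    print("digest matches the vetted pin")
else:
    print("MISMATCH: artifact changed since it was vetted")
```

A tampered or swapped artifact fails the comparison even if its name and version look identical.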

There is a real danger that hackers could leverage such AI chatbots to distribute malicious code packages to an unsuspecting audience and mount a supply chain attack: by collecting the names of packages recommended by ChatGPT and publishing malicious versions under those names, they could trick developers into downloading the malware.

Precautions for Developers

Developers must validate the libraries they use and confirm that they are authentic. They should also check their open-source dependencies for any signs of tampering and monitor them for emerging threats. It is equally important that companies limit their use of pre-packaged software library components and do not rely on AI-based technologies alone to secure their products.
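Some of this vetting can be automated from public registry metadata. The sketch below is an illustration rather than a complete defense: it flags npm packages that are very new or have few releases, the profile a freshly squatted hallucinated name would typically show. The seven-day and three-version thresholds are arbitrary assumptions chosen for demonstration.

```python
from datetime import datetime, timezone

import requests

def npm_metadata_flags(name: str, min_age_days: int = 7, min_versions: int = 3) -> list[str]:
    """Fetch public npm registry metadata and return simple heuristic warnings."""
    resp = requests.get(f"https://registry.npmjs.org/{name}", timeout=10)
    if resp.status_code != 200:
        return [f"{name}: not found on npm (a hallucinated name an attacker could still claim)"]
    data = resp.json()
    warnings = []
    # The registry reports creation time as an ISO timestamp ending in "Z".
    created = datetime.fromisoformat(data["time"]["created"].replace("Z", "+00:00"))
    age_days = (datetime.now(timezone.utc) - created).days
    if age_days < min_age_days:
        warnings.append(f"{name}: created only {age_days} day(s) ago")
    version_count = len(data.get("versions", {}))
    if version_count < min_versions:
        warnings.append(f"{name}: only {version_count} published version(s)")
    return warnings or [f"{name}: no red flags from basic metadata checks"]

print(npm_metadata_flags("express"))  # a long-established package should pass
```

Checks like these complement, rather than replace, reviewing a package's source code and maintainer history.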

Conclusion

The rising prevalence and usefulness of AI-based chatbots and conversational agents are creating new vulnerabilities across a broad range of computing systems. Software developers must remain alert to these dangers and work actively to keep their software secure, authentic, and safe. The potential exploitation of ChatGPT by hackers underscores the need for everyone to pay attention to the risks associated with AI chatbots and other AI-based systems. Such risks must be handled cautiously and addressed adequately, or they could jeopardize the integrity and security of systems and their users.
