Findings

Package hallucination vulnerability

Updated: July 4, 2025

Description

Severity: Medium

The AI model is vulnerable to generating hallucinated responses to programming-related queries, inventing software packages or libraries that do not exist.

This can mislead users into using incorrect or fictitious resources, potentially leading to wasted development time or the implementation of insecure or incompatible code.

Example Attack

This vulnerability could cause developers to rely on fabricated packages, resulting in bugs, security issues, or incompatibility with real software dependencies, and in wasted time and effort as they attempt to integrate resources that do not exist. In the worst case, it could introduce vulnerabilities or unstable code into production environments.

Remediation

Investigate and improve the effectiveness of guardrails and output security mechanisms to prevent the generation of hallucinated or incorrect programming-related information. Strengthen context-awareness to ensure that the AI only suggests real, verified packages and libraries.
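One practical control along these lines is to verify any package the model names against the relevant registry before it is installed or added to a dependency file. The sketch below is a minimal example of such a check against PyPI's JSON API; the package names in the example list are illustrative and are not taken from this finding.

```python
# Minimal sketch, assuming Python packages and network access to pypi.org:
# check whether a package name suggested by the model resolves to a real
# PyPI project before it is installed or added to a requirements file.
import urllib.error
import urllib.request


def package_exists_on_pypi(name: str) -> bool:
    """Return True if `name` has a project page on PyPI."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as response:
            return response.status == 200
    except urllib.error.HTTPError:
        # A 404 (or any other HTTP error) means no such project exists.
        return False


# Illustrative names only; the second is a made-up example of a plausible
# but non-existent package, not a name produced by the tested model.
for pkg in ["requests", "flask-auth-toolkit-pro"]:
    status = "found on PyPI" if package_exists_on_pypi(pkg) else "possibly hallucinated"
    print(f"{pkg}: {status}")
```

An equivalent lookup can be performed against npm, crates.io, or whichever registry the development team uses.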

Security Frameworks

Improper Output Handling refers specifically to insufficient validation, sanitization, and handling of the outputs generated by large language models before they are passed downstream to other components and systems. Since LLM-generated content can be controlled by prompt input, this behavior is similar to providing users indirect access to additional functionality.
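As an illustration of output handling in this context, the sketch below assumes an application that extracts `pip install` commands from model output and rejects any install target that is not on an approved dependency list. The allowlist and the example output string are assumptions made for the sake of the example, not part of the tested system.

```python
# Minimal guardrail sketch, assuming the application extracts `pip install`
# commands from model output before acting on them. The allowlist and the
# example output string are illustrative assumptions.
import re

APPROVED_PACKAGES = {"requests", "numpy", "pandas"}  # illustrative allowlist


def extract_install_targets(llm_output: str) -> list[str]:
    """Collect package names from any `pip install ...` fragment in the output."""
    targets: list[str] = []
    for match in re.finditer(r"pip install\s+([\w.\- ]+)", llm_output):
        targets.extend(match.group(1).split())
    return targets


def rejected_targets(llm_output: str) -> list[str]:
    """Return install targets that are not on the approved list."""
    return [pkg for pkg in extract_install_targets(llm_output) if pkg not in APPROVED_PACKAGES]


llm_output = "Try: pip install requests securetls-helper"  # `securetls-helper` is made up
blocked = rejected_targets(llm_output)
if blocked:
    print("Blocked unverified packages:", blocked)
```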

Misinformation from LLMs poses a core vulnerability for applications relying on these models. Misinformation occurs when LLMs produce false or misleading information that appears credible. This vulnerability can lead to security breaches, reputational damage, and legal liability. One of the major causes of misinformation is hallucination: when the LLM generates content that seems accurate but is fabricated.

Adversaries may abuse their access to a victim system and use its resources or capabilities to further their goals by causing harms external to that system. These harms could affect the organization (e.g. Financial Harm, Reputational Harm), its users (e.g. User Harm), or the general public (e.g. Societal Harm).

Reputational harm involves a degradation of public perception and trust in organizations. Examples of reputation-harming incidents include scandals or false impersonations.

Societal harms are harmful outcomes that reach either the general public or specific vulnerable groups, such as the exposure of children to vulgar content.

User harms encompass a variety of harm types, including financial and reputational harm, that are directed at or felt by individual victims of the attack rather than by the organization.

Adversaries may create an entity they control, such as a software package, website, or email address, that corresponds to a source hallucinated by an LLM. The hallucinations may take the form of package names, commands, URLs, company names, or email addresses that point the victim to the entity controlled by the adversary. When the victim interacts with the adversary-controlled entity, the attack can proceed.

Adversaries may prompt large language models and identify hallucinated entities. They may request software packages, commands, URLs, organization names, or email addresses, and identify hallucinations that have no connected real-world source. Discovered hallucinations provide the adversary with potential targets to Publish Hallucinated Entities. Different LLMs have been shown to produce the same hallucinations, so the hallucinations exploited by an adversary may affect users of other LLMs.
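The sketch below outlines how this discovery step might look from a testing perspective: prompt the model several times, collect the package names it suggests, and flag any name with no matching PyPI project as a candidate hallucination. The `query_model` function is a hypothetical placeholder for whatever client the deployment exposes, and the `pip install` extraction is deliberately naive.

```python
# Hedged sketch of the discovery step from a testing perspective: prompt the
# model repeatedly, collect suggested package names, and flag names with no
# matching PyPI project. `query_model` is a hypothetical placeholder for the
# deployment's actual client; the extraction logic is deliberately simple.
import re
import urllib.error
import urllib.request


def query_model(prompt: str) -> str:
    raise NotImplementedError("replace with the deployment's model client")


def exists_on_pypi(name: str) -> bool:
    try:
        with urllib.request.urlopen(f"https://pypi.org/pypi/{name}/json", timeout=10):
            return True
    except urllib.error.HTTPError:
        return False


def candidate_hallucinations(prompt: str, runs: int = 5) -> set[str]:
    """Names the model suggests that do not resolve to a real PyPI project."""
    suggested: set[str] = set()
    for _ in range(runs):
        answer = query_model(prompt)
        for match in re.finditer(r"pip install\s+([\w.\-]+)", answer):
            suggested.add(match.group(1))
    return {name for name in suggested if not exists_on_pypi(name)}
```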
