Industry Insights with Manjesh Siddaraju, Technical Engagement Manager, HackerOne

What the OWASP Top 10 for LLM Means for AI Security

The Most Common Vulnerabilities Affecting AI
What the OWASP Top 10 for LLM Means for AI Security

In the rapidly evolving world of technology, the use of large language models, or LLMs, and generative AI, or GAI, in applications has become increasingly prevalent. While these models offer incredible benefits for automation and efficiency, they also present unique security challenges. The Open Web Application Security Project, or OWASP, released "Top 10 for LLM Applications 2023" a comprehensive guide to the most critical security risks to LLM applications.

See Also: How to Unlock the Power of Zero Trust Network Access Through a Life Cycle Approach

At HackerOne, we strive to be at the forefront of AI security research. Here's our perspective on the Top 10 list for LLM vulnerabilities and how organizations can prevent these critical security risks.

LLM01: Prompt Injection

One of the most commonly discussed LLM vulnerabilities, prompt injection is a vulnerability during which an attacker manipulates the operation of a trusted LLM through crafted inputs, either directly or indirectly.

Solutions

  • Enforce privilege control on LLM access to the back-end system.
  • Segregate external content from user prompts.
  • Keep humans in the loop for extensible functionality.

LLM02: Insecure Output Handling

Insecure output handling occurs when an LLM output is accepted without scrutiny, potentially exposing back-end systems.

Solutions

  • Treat the model output as any other untrusted user content and validating inputs.
  • Conduct pen testing to uncover insecure outputs and identify opportunities for more secure handling techniques.

LLM03: Training Data Poisoning

Training data poisoning refers to the manipulation of data or fine-tuning of processes that introduce vulnerabilities, backdoors or biases and could compromise the model's security, effectiveness or ethical behavior.

Solutions

  • Verify the supply chain of training data and the legitimacy of targeted training data.
  • Use strict vetting or input filters for specific training data or categories of data sources.

LLM04: Model Denial of Service

Model denial of service occurs when attackers cause resource-heavy operations on LLMs, leading to service degradation or high costs.

Solutions

  • Implement input validation and sanitization and enforce limits/caps.
  • Cap resource use per request.
  • Continuously monitor the resource utilization of LLMs.

LLM05: Supply Chain Vulnerabilities

The supply chain in LLMs can be vulnerable, affecting the integrity of training data, machine learning models and deployment platforms. Supply chain vulnerabilities in LLMs can lead to biased outcomes, security breaches and even complete system failures.

Solutions

  • Carefully vet data sources and suppliers.
  • Use only reputable plug-ins.
  • Conduct sufficient monitoring and proper patch management.

LLM06: Sensitive Information Disclosure

Sensitive information disclosure occurs when LLMs inadvertently reveal confidential data, exposing proprietary algorithms, intellectual property and private or personal information.

Solutions

  • Integrate adequate data input/output sanitization and scrubbing techniques.
  • Implement robust input validation and sanitization methods.
  • Leverage hacker-based adversarial testing to identify possible sensitive information disclosure issues.

LLM07: Insecure Plug-In Design

Insecure LLM plug-ins can be prone to malicious requests leading to a wide range of harmful and undesired behaviors, up to and including sensitive data exfiltration and remote code execution.

Solutions

  • Enforce strict parameterized input.
  • Use appropriate authentication and authorization mechanisms.
  • Require manual user intervention and approval for sensitive actions.

LLM08: Excessive Agency

Excessive agency is typically caused by excessive functionality, excessive permissions and/or excessive autonomy, enabling damaging actions to be performed in response to unexpected or ambiguous outputs from an LLM.

Solutions

  • Limit the tools, functions and permissions to only the minimum necessary for the LLM.
  • Require human approval for major and sensitive actions.

LLM09: Overreliance

Organizations and the individuals that comprise them can come to overrely on LLMs without the knowledge and validation mechanisms required to ensure information is accurate, vetted and secure.

Solutions

  • Regularly monitor LLM outputs with trusted external sources.
  • Break down complex tasks into more manageable ones.
  • Communicate and train the benefits as well as the risks and limitations of LLMs at an organizational level.

LLM10: Model Theft

Model theft occurs when there is unauthorized access, copying or exfiltration of proprietary LLM models. This can lead to economic loss, reputational damage and unauthorized access to highly sensitive data.

Solutions

  • Implement strong access controls.
  • Monitor and audit access logs to catch suspicious activity.
  • Automate governance and compliance tracking.
  • Leverage hacker-based testing to identify vulnerabilities that could lead to model theft.

Securing the Future of LLMs

This new release by the OWASP Foundation enables organizations looking to adopt LLM technology to guard against common pitfalls. In many cases, organizations simply are unable to catch every vulnerability. With HackerOne, teams can:



About the Author

Manjesh Siddaraju, Technical Engagement Manager, HackerOne

Manjesh Siddaraju, Technical Engagement Manager, HackerOne

Technical Engagement Manager, HackerOne

Siddaraju has over seven years of experience in web application security, bug bounties, AI/ML, code auditing, network, API, mobile, and IoT testing. He has extensive experience in building and managing enterprise cybersecurity governance structures, policies and processes and has contributed vulnerability disclosures to Facebook, Google, Twitter, Apple, etc. He is actively involved in AI security research and is a core team member of the OWASP Top 10 for LLM.




Around the Network

Our website uses cookies. Cookies enable us to provide the best experience possible and help us understand how visitors use our website. By browsing inforisktoday.asia, you agree to our use of cookies.