
Volunteers needed to help secure AI models
The nonprofit Open Worldwide Application Security (OWASP) is looking for some cybersecurity volunteers to help define a set of best practices for securing artificial intelligence (AI) applications, hopefully before a major incident.
Previously, OWASP created a list of the Top 10 risks and mitigation for large language models (LLMs) that include:
- Prompt Injections that make use of malicious input to cause unintended actions.
- Insecure output handling where results generated are accepted without scrutiny in a way that exposes backend systems.
- Training data poisoning that injects, for example, vulnerabilities and biases, into the model
- Model denial of service attacks that degree service levels or increase overall costs.
- Supply chain vulnerabilities in which the software used to build a model has been compromised.
- Sensitive information disclosures that result in confidential data being included in LLM outputs
- Insecure plugin designs that enable remote code execution (RCE).
- Excessive agency that leads to permissions and privileges being exploited.
- Overreliance depending on LLMs without supervision can lead to misinformation, miscommunication and compliance issues.
- Models are a form of intellectual property that cybercriminals are likely to try and steal.
The (OWASP) Top 10 for LLM Application Security Project is now looking to expand the scope of that effort to, for example, provide advice on how best to respond to Deep Fakes created using AI in addition to publishing guidance on how to build an AI Security Center of Excellence. Other existing project initiatives and working groups that need additional volunteers address risk and exploit data mapping, LLM AI cyber threat intelligence, secure AI adoption, and AI red teaming and evaluation.
The OWASP Top 10 for LLM Application Security Project already has 550 contributors from 110 companies and plans to update its Top 10 list for LLM risks and mitigations twice a year starting in 2025. The challenge is the limited number of cybersecurity professionals with AI expertise. Cybersecurity professionals who gain this expertise should be able to command a premium for their expertise. One of the best ways to gain that expertise is to help define what’s required in the first place by joining a group that is trying to define a set of best practices for everyone to follow.
It's still early days so far as actual compromises of an AI model are concerned, but in much the same way that cybercriminals are already targeting software supply chains, it’s only a matter of time. In fact, an AI model at its core is simply another type of artifact that is being added to the software supply chain. More challenging still, many of those AI models are being constructed using tools with known vulnerabilities, resulting in many dependencies that might one day be exploited.
Unfortunately, data scientists tend to have even less cybersecurity knowledge than application developers. Many organizations with mixed success are trying to teach application developers the fundamentals of cybersecurity. The next major issue will be trying to teach data scientists who have even less knowledge of those fundamentals the same lessons.

Informe de Barracuda sobre Ransomware 2025
Principales conclusiones sobre la experiencia y el impacto del ransomware en las organizaciones de todo el mundo
Suscríbase al blog de Barracuda.
Regístrese para recibir Threat Spotlight, comentarios de la industria y más.

Seguridad de vulnerabilidades gestionada: corrección más rápida, menos riesgos, cumplimiento normativo más fácil
Descubra lo fácil que es encontrar las vulnerabilidades que los ciberdelincuentes quieren explotar.