Navigating Security in Generative AI Development
As generative AI applications become increasingly prevalent in production environments, developers face unique security challenges that traditional application security may not fully address. This session explores emerging security patterns and community-driven initiatives for protecting GenAI applications, with a focus on practical threat modeling techniques developed through open source collaboration.
The rapid adoption of generative AI in production applications has introduced novel adversarial attack risks and security considerations that the open source community is actively working to address. Through the lens of ongoing work from key industry working groups - including the OpenSSF, OPEA Security Working Group, and others - this talk will examine how the security community is coming together to establish best practices and frameworks for secure AI development.
Participants will gain insights into the current work-in-progress status of various open source security initiatives, including draft frameworks, proposed standards, and evolving threat taxonomies. The session will highlight how these community efforts are shaping practical security approaches, from prompt injection prevention to model supply chain security.