How to Secure Against Generative AI and Protect AI Systems | Briefing
Event Overview
Two of the most common questions we get about generative AI are relatively simple: How can I secure my organization against bad actors using generative AI? And how can I protect my LLM-powered architecture, AI systems, and data? Following his key session at NVIDIA GTC, WWT's Global Head of AI Security Kent Noyes shares critical security challenges surrounding generative AI and how you can overcome them.
What to expect
- What makes up a comprehensive AI security approach.
- Current and emerging AI-generated threats, such as common jailbreaks and deepfakes.
- LLM-powered architectures and how to secure them.
- The rapidly evolving generative AI security ecosystem.
Goals and Objectives
Better understand risk in the context of AI: how security affects LLMs and API extensions, how security teams can lean on copilots, and how those teams can position themselves for the uncertain (yet exciting) future of AI advancement.
Who should attend?
- C-level leaders looking to securely drive AI transformation.
- Business leaders seeking to understand how the executive suite should think about AI security.
- Business and IT leaders looking to gain insight into today's complex AI environment.
- Security teams and personnel wanting to understand the important role security plays in driving AI success.
Related content
How You Can Secure AI Strategies | Research
NVIDIA GTC
Key Takeaways from NVIDIA GTC 2024