Foundations Lab  · On-demand

AI Prompt Injection Lab

Foundations Lab

Solution overview

Prompt Injection, also sometimes called jailbreaking, refers to an LLM being manipulated by an attacker through carefully crafted prompts or inputs. These inputs cause the LLM to unknowingly conduct the attacker's malicious intentions, and oftentimes, the LLM will behave in ways that it would normally not behave.

This lab introduces users to the risks of direct and indirect prompt injection to Large Language Model (LLM) systems through real-time queries of an LLM to see how LLMs can be tricked into revealing private information if given the correct prompt.

Lab diagram

Loading

Technologies

Contributors

Labs are secured to WWT customers and partners. Login to access.