Prompt Injection Attack ChatGPT is a security vulnerability that can be used to manipulate the output of ChatGPT, a large language model chatbot developed by OpenAI. The attack works by injecting malicious code into the prompt, which is the text that is used to generate the chatbot's response. This malicious code can then be used to steal sensitive data from the user, such as their login credentials or credit card information. To protect yourself from this attack, you should be careful about the prompts that you use when interacting with ChatGPT. You should also be aware of the signs of a prompt injection attack, such as unexpected behavior from the chatbot or requests for sensitive information.