Web LLM Attacks
• What are LLMs?
• Interactive Interfaces and Use Cases
• Security Considerations
• Protecting Against LLM Attacks
• Exploiting LLM APIs with Excessive Agency
• Exploiting Vulnerabilities in LLM APIs
• Indirect Prompt Injection
• Exploiting Insecure Output Handling in LLMs
• LLM Zero-Shot Learning Attacks
• LLM Homographic Attacks
• LLM Model Poisoning with Code Injection
• Chained Prompt Injection
• Conclusion
• References
• Security Researchers…