Research · February 9 · 3 min read

A one-prompt attack that breaks LLM safety alignment

As LLMs and diffusion models power more applications, their safety alignment becomes critical.