<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/">
  <channel>
    <title>JailbreakDB</title>
    <description>An indexed, searchable catalog of LLM jailbreak techniques. Each entry records the prompt pattern, the models it works on, when it stopped working, and the behavior it exploits, sourced from primary disclosure where possible, with honest attribution.</description>
    <link>https://jailbreakdb.com/</link>
    <language>en</language>
    <item>
      <title>What this site is for</title>
      <link>https://jailbreakdb.com/posts/welcome/</link>
      <guid isPermaLink="true">https://jailbreakdb.com/posts/welcome/</guid>
      <description>JailbreakDB covers offensive AI security from a working practitioner&apos;s perspective. Here&apos;s what we publish.</description>
      <pubDate>Sun, 03 May 2026 00:00:00 GMT</pubDate>
      <category>meta</category>
      <dc:creator>JailbreakDB Editorial</dc:creator>
    </item>
  </channel>
</rss>