Statistics for topic jailbreak
RepositoryStats tracks 518,325 GitHub repositories; 121 of them are tagged with the jailbreak topic. The most common primary language for repositories using this topic is C (19). Other languages include Swift (16), Python (14), Shell (13), and Objective-C (12).
Stargazers over time for topic jailbreak
Most starred repositories for topic jailbreak
Trending repositories for topic jailbreak
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, AI Prompt Engineering, Adversarial Machine Learning.
A reading list for large models safety, security, and privacy.
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Bypass restricted and censored content on AI chat prompts 😈
SeaShell Framework is an iOS post-exploitation framework that enables you to access the device remotely, control it and extract sensitive information.
(i18n/CLI) Not the first, but the best phone call recorder with TrollStore.
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, AI Prompt Engineering, Adversarial Machine Learning.
Bypass restricted and censored content on AI chat prompts 😈
An easy-to-use Python framework to generate adversarial jailbreak prompts.
A reading list for large models safety, security, and privacy.
SeaShell Framework is an iOS post-exploitation framework that enables you to access the device remotely, control it and extract sensitive information.
(i18n/CLI) Not the first, but the best phone call recorder with TrollStore.
A reading list for large models safety, security, and privacy.
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, AI Prompt Engineering, Adversarial Machine Learning.
Bypass restricted and censored content on AI chat prompts 😈
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
An easy-to-use Python framework to generate adversarial jailbreak prompts.
(i18n/CLI) Not the first, but the best phone call recorder with TrollStore.
Awesome things you can do with ChatGPT + Code Interpreter combo 🔥
A framework to evaluate the generalization capability of safety alignment for LLMs
Jailbreak for A8 through A11, T2 devices, on iOS/iPadOS/tvOS 15.0, bridgeOS 5.0 and higher.
A cross-platform desktop client for the jailbroken New Bing AI Copilot (Sydney ver.) built with Go and Wails (previously based on Python and Qt).
[arXiv:2311.03191] "DeepInception: Hypnotize Large Language Model to Be Jailbreaker"