34 results found Sort:

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Created 2024-03-15
223 commits to main branch, last one about a month ago
200
1.6k
mit
19
The Security Toolkit for LLM Interactions
Created 2023-07-27
501 commits to main branch, last one about a month ago
98
1.2k
apache-2.0
16
LLM Prompt Injection Detector
Created 2023-04-24
345 commits to main branch, last one about a year ago
A playground of highly experimental prompts, tools & scripts for machine intelligence models from DeepSeek, OpenAI, Anthropic, Meta, Mistral, Google, xAI & others.
Created 2023-01-03
66 commits to main branch, last one 10 days ago
69
903
apache-2.0
16
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance m...
Created 2023-04-26
279 commits to main branch, last one 4 months ago
81
777
gpl-3.0
13
a prompt injection scanner for custom LLM applications
Created 2023-07-15
28 commits to main branch, last one about a month ago
43
558
apache-2.0
6
💼 another CV template for your job application, yet powered by Typst and more
Created 2023-04-28
137 commits to main branch, last one about a month ago
Every practical and proposed defense against prompt injection.
Created 2024-04-01
24 commits to main branch, last one about a month ago
41
377
apache-2.0
11
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Created 2023-09-04
230 commits to main branch, last one about a year ago
Self-hardening firewall for large language models
Created 2023-06-18
19 commits to main branch, last one about a year ago
Prompts of GPT-4V & DALL-E3 to full utilize the multi-modal ability. GPT4V Prompts, DALL-E3 Prompts.
Created 2023-09-30
44 commits to main branch, last one about a year ago
28
221
apache-2.0
7
Dropbox LLM Security research code and results
Created 2023-08-01
37 commits to main branch, last one 11 months ago
This repository provides a benchmark for prompt Injection attacks and defenses
Created 2023-10-19
35 commits to main branch, last one 3 days ago
prompt attack-defense, prompt Injection, reverse engineering notes and examples | 提示词对抗、破解例子与笔记
Created 2023-05-16
13 commits to main branch, last one about a month ago
16
167
apache-2.0
4
gpt_server是一个用于生产级部署LLMs或Embedding的开源框架。
Created 2023-12-16
338 commits to main branch, last one 5 days ago
A benchmark for prompt injection detection systems.
Created 2024-03-27
59 commits to main branch, last one 2 months ago
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
Created 2024-10-24
26 commits to main branch, last one 4 months ago
Code scanner to check for issues in prompts and LLM calls
Created 2025-03-14
51 commits to master branch, last one 14 days ago
A prompt injection game to collect data for robust ML research
Created 2023-06-05
1,954 commits to main branch, last one 3 months ago
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
Created 2024-08-12
319 commits to main branch, last one 8 days ago
This class is a broad overview and dive into Exploiting AI and the different attacks that exist, and best practice strategies.
Created 2024-01-22
822 commits to main branch, last one 9 days ago
1
42
apache-2.0
2
Build production ready apps for GPT using Node.js & TypeScript
Created 2023-02-04
41 commits to main branch, last one about a year ago
My inputs for the LLM Gandalf made by Lakera
Created 2023-06-10
2 commits to main branch, last one about a year ago
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using...
Created 2023-11-21
40 commits to main branch, last one about a year ago
jailbreakme.xyz is an open-source decentralized app (dApp) where users are challenged to try and jailbreak pre-existing LLMs in order to find weaknesses and be rewarded. 🏆
Created 2024-12-01
195 commits to main branch, last one about a month ago
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Created 2023-05-26
799 commits to main branch, last one about a month ago
This repository has no description...
Created 2025-03-09
192 commits to main branch, last one 7 days ago
Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platform provider.
Created 2023-10-13
2 commits to main branch, last one about a year ago
LLM | Security | Operations in one github repo with good links and pictures.
Created 2024-07-31
47 commits to main branch, last one 3 months ago