26 results found Sort:

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Created 2024-03-15
167 commits to main branch, last one 23 hours ago
158
1.3k
mit
19
The Security Toolkit for LLM Interactions
Created 2023-07-27
492 commits to main branch, last one about a month ago
81
1.1k
apache-2.0
15
LLM Prompt Injection Detector
Created 2023-04-24
345 commits to main branch, last one 10 months ago
68
854
apache-2.0
15
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance m...
Created 2023-04-26
279 commits to main branch, last one 12 hours ago
Advanced Code and Text Manipulation Prompts for Various LLMs. Suitable for Siri, GPT-4o, Claude, Llama3, Gemini, and other high-performance open-source LLMs.
Created 2023-01-03
49 commits to main branch, last one 5 months ago
automatically tests prompt injection attacks on ChatGPT instances
Created 2023-07-15
17 commits to main branch, last one 11 months ago
28
466
apache-2.0
7
💼 another CV template for your job application, yet powered by Typst and more
Created 2023-04-28
125 commits to main branch, last one 20 days ago
Every practical and proposed defense against prompt injection.
Created 2024-04-01
15 commits to main branch, last one 5 months ago
36
316
apache-2.0
11
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Created 2023-09-04
230 commits to main branch, last one 9 months ago
# Prompt Engineering Hub ⭐️ If you find this helpful, give it a star to show your support! This repository is a one-stop resource for prompt engineering. Also available on: https://promptengineeringh...
Created 2024-10-31
215 commits to main branch, last one 3 days ago
Self-hardening firewall for large language models
Created 2023-06-18
19 commits to main branch, last one 8 months ago
Prompts of GPT-4V & DALL-E3 to full utilize the multi-modal ability. GPT4V Prompts, DALL-E3 Prompts.
Created 2023-09-30
44 commits to main branch, last one about a year ago
23
217
apache-2.0
7
Dropbox LLM Security research code and results
Created 2023-08-01
37 commits to main branch, last one 6 months ago
This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
Created 2023-10-19
24 commits to main branch, last one 2 months ago
prompt attack-defense, prompt Injection, reverse engineering notes and examples | 提示词对抗、破解例子与笔记
Created 2023-05-16
12 commits to main branch, last one about a year ago
A benchmark for prompt injection detection systems.
Created 2024-03-27
57 commits to main branch, last one 2 months ago
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
Created 2024-10-24
20 commits to main branch, last one 3 days ago
A prompt injection game to collect data for robust ML research
Created 2023-06-05
1,948 commits to main branch, last one 8 months ago
1
39
apache-2.0
1
Build production ready apps for GPT using Node.js & TypeScript
Created 2023-02-04
41 commits to main branch, last one about a year ago
My inputs for the LLM Gandalf made by Lakera
Created 2023-06-10
2 commits to main branch, last one about a year ago
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using...
Created 2023-11-21
40 commits to main branch, last one 11 months ago
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Created 2023-05-26
787 commits to main branch, last one 16 days ago
Website Prompt Injection is a concept that allows for the injection of prompts into an AI system via a website's. This technique exploits the interaction between users, websites, and AI systems to exe...
Created 2024-03-11
9 commits to main branch, last one 8 months ago
The Prompt Injection Testing Tool is a Python script designed to assess the security of your AI system's prompt handling against a predefined list of user prompts commonly used for injection attacks. ...
Created 2024-03-20
4 commits to main branch, last one 8 months ago
Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platform provider.
Created 2023-10-13
2 commits to main branch, last one about a year ago
A Python package designed to detect prompt injection in text inputs utilizing state-of-the-art machine learning models from Hugging Face. The main focus is on ease of use, enabling developers to integ...
Created 2024-03-22
38 commits to main branch, last one 23 days ago