27 results found Sort:

ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Created 2024-03-15
172 commits to main branch, last one 19 days ago
178
1.4k
mit
18
The Security Toolkit for LLM Interactions
Created 2023-07-27
492 commits to main branch, last one 3 months ago
87
1.2k
apache-2.0
15
LLM Prompt Injection Detector
Created 2023-04-24
345 commits to main branch, last one about a year ago
Advanced Code and Text Manipulation Prompts for Various LLMs. Suitable for Siri, GPT-4o, Claude, Llama3, Gemini, and other high-performance open-source LLMs.
Created 2023-01-03
51 commits to main branch, last one about a month ago
67
874
apache-2.0
16
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance m...
Created 2023-04-26
279 commits to main branch, last one 2 months ago
66
725
gpl-3.0
13
a prompt injection scanner for custom LLM applications
Created 2023-07-15
21 commits to main branch, last one 4 days ago
34
501
apache-2.0
7
💼 another CV template for your job application, yet powered by Typst and more
Created 2023-04-28
128 commits to main branch, last one 9 days ago
# Prompt Engineering Hub ⭐️ If you find this helpful, give it a star to show your support! This repository is a one-stop resource for prompt engineering. Also available on: https://promptengineeringh...
Created 2024-10-31
325 commits to main branch, last one 19 days ago
Every practical and proposed defense against prompt injection.
Created 2024-04-01
15 commits to main branch, last one 8 months ago
37
343
apache-2.0
11
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Created 2023-09-04
230 commits to main branch, last one about a year ago
Self-hardening firewall for large language models
Created 2023-06-18
19 commits to main branch, last one 11 months ago
Prompts of GPT-4V & DALL-E3 to full utilize the multi-modal ability. GPT4V Prompts, DALL-E3 Prompts.
Created 2023-09-30
44 commits to main branch, last one about a year ago
25
220
apache-2.0
7
Dropbox LLM Security research code and results
Created 2023-08-01
37 commits to main branch, last one 8 months ago
This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
Created 2023-10-19
28 commits to main branch, last one 13 days ago
prompt attack-defense, prompt Injection, reverse engineering notes and examples | 提示词对抗、破解例子与笔记
Created 2023-05-16
12 commits to main branch, last one about a year ago
13
148
apache-2.0
4
gpt_server是一个用于生产级部署LLMs或Embedding的开源框架。
Created 2023-12-16
285 commits to main branch, last one 21 days ago
A benchmark for prompt injection detection systems.
Created 2024-03-27
57 commits to main branch, last one 4 months ago
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
Created 2024-10-24
26 commits to main branch, last one about a month ago
A prompt injection game to collect data for robust ML research
Created 2023-06-05
1,954 commits to main branch, last one about a month ago
My inputs for the LLM Gandalf made by Lakera
Created 2023-06-10
2 commits to main branch, last one about a year ago
1
41
apache-2.0
1
Build production ready apps for GPT using Node.js & TypeScript
Created 2023-02-04
41 commits to main branch, last one about a year ago
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
Created 2024-08-12
307 commits to main branch, last one 13 days ago
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using...
Created 2023-11-21
40 commits to main branch, last one about a year ago
jailbreakme.xyz is an open-source decentralized app (dApp) where users are challenged to try and jailbreak pre-existing LLMs in order to find weaknesses and be rewarded. 🏆
Created 2024-12-01
187 commits to main branch, last one a day ago
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Created 2023-05-26
793 commits to main branch, last one 7 hours ago
Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platform provider.
Created 2023-10-13
2 commits to main branch, last one about a year ago
A Python package designed to detect prompt injection in text inputs utilizing state-of-the-art machine learning models from Hugging Face. The main focus is on ease of use, enabling developers to integ...
Created 2024-03-22
38 commits to main branch, last one 3 months ago