28 results found Sort:
- Filter by Primary Language:
- Python (11)
- Jupyter Notebook (5)
- TypeScript (2)
- HTML (1)
- JavaScript (1)
- Typst (1)
- +
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
Created
2024-03-15
221 commits to main branch, last one 9 days ago
The Security Toolkit for LLM Interactions
Created
2023-07-27
497 commits to main branch, last one 23 days ago
LLM Prompt Injection Detector
Created
2023-04-24
345 commits to main branch, last one about a year ago
Advanced Code and Text Manipulation Prompts for Various LLMs. Suitable for Deepseek, GPT o1, Claude, Llama3, Gemini, and other high-performance open-source LLMs.
Created
2023-01-03
55 commits to main branch, last one 29 days ago
🔍 LangKit: An open-source toolkit for monitoring Large Language Models (LLMs). 📚 Extracts signals from prompts & responses, ensuring safety & security. 🛡️ Features include text quality, relevance m...
Created
2023-04-26
279 commits to main branch, last one 3 months ago
a prompt injection scanner for custom LLM applications
Created
2023-07-15
28 commits to main branch, last one 7 days ago
💼 another CV template for your job application, yet powered by Typst and more
Created
2023-04-28
137 commits to main branch, last one 9 days ago
# Prompt Engineering Hub ⭐️ lovable.dev no code builders: https://www.aidevelopers.tech/
ai
prompt
prompts
generative-ai
prompt-tuning
prompt-toolkit
prompt-learning
prompt-generator
prompt-injection
ai-prompt-engineer
prompt-engineering
ai-prompt-engineering
prompt-engineer-course
artificial-intelligence
prompt-engineering-jobs
prompts-stable-diffusion
prompt-engineering-course
prompt-engineering-github
prompt-engineering-courses
prompt-engineering-certification
Created
2024-10-31
325 commits to main branch, last one about a month ago
Every practical and proposed defense against prompt injection.
Created
2024-04-01
24 commits to main branch, last one 21 days ago
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Created
2023-09-04
230 commits to main branch, last one about a year ago
Self-hardening firewall for large language models
Created
2023-06-18
19 commits to main branch, last one about a year ago
Prompts of GPT-4V & DALL-E3 to full utilize the multi-modal ability. GPT4V Prompts, DALL-E3 Prompts.
Created
2023-09-30
44 commits to main branch, last one about a year ago
Dropbox LLM Security research code and results
Created
2023-08-01
37 commits to main branch, last one 9 months ago
This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
Created
2023-10-19
28 commits to main branch, last one about a month ago
prompt attack-defense, prompt Injection, reverse engineering notes and examples | 提示词对抗、破解例子与笔记
Created
2023-05-16
13 commits to main branch, last one 17 days ago
gpt_server是一个用于生产级部署LLMs或Embedding的开源框架。
Created
2023-12-16
294 commits to main branch, last one a day ago
A benchmark for prompt injection detection systems.
Created
2024-03-27
59 commits to main branch, last one about a month ago
Project Mantis: Hacking Back the AI-Hacker; Prompt Injection as a Defense Against LLM-driven Cyberattacks
Created
2024-10-24
26 commits to main branch, last one 3 months ago
A prompt injection game to collect data for robust ML research
Created
2023-06-05
1,954 commits to main branch, last one 2 months ago
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
Created
2024-08-12
307 commits to main branch, last one about a month ago
My inputs for the LLM Gandalf made by Lakera
Created
2023-06-10
2 commits to main branch, last one about a year ago
Build production ready apps for GPT using Node.js & TypeScript
Created
2023-02-04
41 commits to main branch, last one about a year ago
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using...
Created
2023-11-21
40 commits to main branch, last one about a year ago
jailbreakme.xyz is an open-source decentralized app (dApp) where users are challenged to try and jailbreak pre-existing LLMs in order to find weaknesses and be rewarded. 🏆
Created
2024-12-01
195 commits to main branch, last one 13 days ago
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Created
2023-05-26
799 commits to main branch, last one 11 days ago
Turning Gandalf against itself. Use LLMs to automate playing Lakera Gandalf challenge without needing to set up an account with a platform provider.
Created
2023-10-13
2 commits to main branch, last one about a year ago
Short list of indirect prompt injection attacks for OpenAI-based models.
Created
2024-06-08
44 commits to main branch, last one 2 months ago
Easy to use LLM Prompt Injection Detection / Detector Python Package
Created
2024-03-22
46 commits to main branch, last one 9 days ago