37 results found
Filter by primary language:
- Python (20)
- Jupyter Notebook (7)
- HTML (2)
- Svelte (1)
- TeX (1)
- CSS (1)
- Vue (1)
- Go (1)
- Solidity (1)
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳 Docker-friendly. ⚡ Always in sync with SharePoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, a...
Created 2023-07-19 · 237 commits to main branch, last one a day ago
🐢 Open-Source Evaluation & Testing for AI & LLM systems
Created 2022-03-06 · 10,273 commits to main branch, last one 3 days ago
the LLM vulnerability scanner
Created 2023-05-10 · 1,833 commits to main branch, last one a day ago
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
Created 2023-08-01 · 19 commits to main branch, last one 3 months ago
The Security Toolkit for LLM Interactions
Created 2023-07-27 · 501 commits to main branch, last one 26 days ago
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
Created 2024-04-11 · 581 commits to main branch, last one 3 days ago
A secure low-code honeypot framework leveraging LLMs for system virtualization.
Created 2022-05-08 · 264 commits to main branch, last one 3 days ago
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Created 2024-01-31 · 94 commits to master branch, last one 16 days ago
A powerful tool for automated LLM fuzzing. It is designed to help developers and security researchers identify and mitigate potential jailbreaks in their LLM APIs.
Created 2024-12-03 · 186 commits to main branch, last one 10 days ago
Papers and resources related to the security and privacy of LLMs 🤖
Created 2023-11-15 · 44 commits to main branch, last one 4 months ago
A security scanner for your LLM agentic workflows
Created 2025-02-12 · 59 commits to main branch, last one a day ago
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Created 2023-09-04 · 230 commits to main branch, last one about a year ago
🏴☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
Created 2024-06-23 · 186 commits to main branch, last one 16 days ago
This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
Created 2023-10-19 · 28 commits to main branch, last one 2 months ago
Toolkits for creating a human-in-the-loop approval layer to monitor and guide AI agent workflows in real time.
Created 2024-10-13 · 146 commits to main branch, last one 4 months ago
Experimental tools to backdoor large language models by re-writing their system prompts at a raw parameter level. This allows you to potentially execute offline remote code execution without running a...
Created 2025-01-30 · 38 commits to main branch, last one 7 days ago
AI-driven Threat modeling-as-a-Code (TaaC-AI)
Created 2023-12-14 · 67 commits to main branch, last one 10 months ago
The fastest Trust Layer for AI Agents
Created 2024-03-11 · 235 commits to main branch, last one about a month ago
Whistleblower is an offensive security tool for testing for system prompt leakage and capability discovery of an AI application exposed through an API. Built for AI engineers, security researchers and...
Created 2024-06-23 · 28 commits to main branch, last one 8 months ago
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
Created 2024-03-29 · 40 commits to main branch, last one 11 months ago
Framework for LLM evaluation, guardrails and security
Created 2024-03-02 · 9 commits to main branch, last one 7 months ago
This repository contains various attacks against large language models.
Created 2024-04-15 · 25 commits to main branch, last one 10 months ago
A benchmark for prompt injection detection systems.
Created 2024-03-27 · 59 commits to main branch, last one 2 months ago
Framework for testing vulnerabilities of large language models (LLM).
Created 2024-09-05 · 9 commits to release branch, last one about a month ago
An Execution Isolation Architecture for LLM-Based Agentic Systems
Created 2024-03-07 · 10 commits to main branch, last one 2 months ago
A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
Created 2024-01-04 · 34 commits to main branch, last one 12 months ago
A comprehensive resource hub compiling all LLM papers accepted at the International Conference on Learning Representations (ICLR) 2024.
Topics: llm, llms, llmops, llm-agent, llm-privacy, llm-serving, llm-security, llm-training, llm-framework, llm-inference, llm-prompting, llm-evaluation, pretrained-models, pretrained-weights, large-language-model, large-language-models, pretrained-language-model, large-language-models-for-graph-learning, large-language-models-and-translation-systems
Created 2024-03-18 · 5 commits to main branch, last one about a year ago
Code scanner to check for issues in prompts and LLM calls
Created 2025-03-14 · 51 commits to master branch, last one 6 days ago
intents engine
Created 2024-02-01 · 188 commits to main branch, last one 5 months ago
The most comprehensive prompt hacking course available, recording our progress on prompt engineering and prompt hacking.
Created 2024-08-12 · 319 commits to main branch, last one 16 hours ago