31 results found Sort:
- Filter by Primary Language:
- Python (16)
- Jupyter Notebook (6)
- HTML (2)
- Svelte (1)
- CSS (1)
- TeX (1)
- Go (1)
- Solidity (1)
- +
Ready-to-run cloud templates for RAG, AI pipelines, and enterprise search with live data. 🐳Docker-friendly.⚡Always in sync with Sharepoint, Google Drive, S3, Kafka, PostgreSQL, real-time data APIs, a...
Created
2023-07-19
220 commits to main branch, last one 4 days ago
🐢 Open-Source Evaluation & Testing for AI & LLM systems
Created
2022-03-06
10,170 commits to main branch, last one about a month ago
the LLM vulnerability scanner
Created
2023-05-10
1,544 commits to main branch, last one 7 days ago
[CCS'24] A dataset consists of 15,140 ChatGPT prompts from Reddit, Discord, websites, and open-source datasets (including 1,405 jailbreak prompts).
Created
2023-08-01
19 commits to main branch, last one about a month ago
The Security Toolkit for LLM Interactions
Created
2023-07-27
492 commits to main branch, last one 3 months ago
Agentic LLM Vulnerability Scanner / AI red teaming kit 🧪
Created
2024-04-11
263 commits to main branch, last one a day ago
A secure low code honeypot framework, leveraging AI for System Virtualization.
Created
2022-05-08
241 commits to main branch, last one 4 days ago
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Created
2024-01-31
89 commits to master branch, last one 4 months ago
Papers and resources related to the security and privacy of LLMs 🤖
Created
2023-11-15
44 commits to main branch, last one 2 months ago
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
Created
2023-09-04
230 commits to main branch, last one 12 months ago
🏴☠️ Hacking Guides, Demos and Proof-of-Concepts 🥷
Created
2024-06-23
178 commits to main branch, last one 12 days ago
This repository provides implementation to formalize and benchmark Prompt Injection attacks and defenses
Created
2023-10-19
28 commits to main branch, last one 6 days ago
Toolkits to create a human-in-the-loop approval layer to monitor and guide AI agents workflow in real-time.
Created
2024-10-13
146 commits to main branch, last one 2 months ago
AI-driven Threat modeling-as-a-Code (TaaC-AI)
Created
2023-12-14
67 commits to main branch, last one 7 months ago
The fastest && easiest LLM security guardrails for CX AI Agents and applications.
Created
2024-03-11
221 commits to main branch, last one 10 days ago
Ultra-fast, low latency LLM prompt injection/jailbreak detection ⛓️
Created
2024-03-29
40 commits to main branch, last one 9 months ago
Framework for LLM evaluation, guardrails and security
Created
2024-03-02
9 commits to main branch, last one 4 months ago
A benchmark for prompt injection detection systems.
Created
2024-03-27
57 commits to main branch, last one 4 months ago
This repository contains various attack against Large Language Models.
Created
2024-04-15
25 commits to main branch, last one 8 months ago
Framework for testing vulnerabilities of large language models (LLM).
Created
2024-09-05
6 commits to release branch, last one 10 days ago
SecGPT: An execution isolation architecture for LLM-based systems
Created
2024-03-07
8 commits to main branch, last one 2 months ago
A benchmark for evaluating the robustness of LLMs and defenses to indirect prompt injection attacks.
Created
2024-01-04
34 commits to main branch, last one 9 months ago
intents engine
Created
2024-02-01
188 commits to main branch, last one 3 months ago
It is a comprehensive resource hub compiling all LLM papers accepted at the International Conference on Learning Representations (ICLR) in 2024.
llm
llms
llmops
llm-agent
llm-privacy
llm-serving
llm-security
llm-training
llm-framework
llm-inference
llm-prompting
llm-evaluation
pretrained-models
pretrained-weights
large-language-model
large-language-models
pretrained-language-model
large-language-models-for-graph-learning
large-language-models-and-translation-systems
Created
2024-03-18
5 commits to main branch, last one 9 months ago
LLM security and privacy
Created
2023-08-30
41 commits to main branch, last one 3 months ago
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using...
Created
2023-11-21
40 commits to main branch, last one about a year ago
This is The most comprehensive prompt hacking course available, which record our progress on a prompt engineering and prompt hacking course.
Created
2024-08-12
307 commits to main branch, last one 6 days ago
安全手册,企业安全实践、攻防与安全研究知识库
Created
2023-11-22
39 commits to main branch, last one 2 months ago
Whispers in the Machine: Confidentiality in LLM-integrated Systems
Created
2023-05-26
792 commits to main branch, last one 7 days ago
Risks and targets for assessing LLMs & LLM vulnerabilities
Created
2022-12-13
38 commits to main branch, last one 8 months ago