sinanw / llm-security-prompt-injection

This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.

Date Created 2023-11-21 (about a year ago)
Commits 40 (last one 11 months ago)
Stargazers 34 (0 this week)
Watchers 3 (0 this week)
Forks 7
License mit
Ranking

RepositoryStats indexes 584,777 repositories, of these sinanw/llm-security-prompt-injection is ranked #542,823 (7th percentile) for total stargazers, and #422,609 for total watchers. Github reports the primary language for this repository as Jupyter Notebook, for repositories using this language it is ranked #15,296/17,124.

sinanw/llm-security-prompt-injection is also tagged with popular topics, for these it's ranked: cybersecurity (#872/943)

Star History

Github stargazers over time

Watcher History

Github watchers over time, collection started in '23

Recent Commit History

40 commits on the default branch (main) since jan '22

Yearly Commits

Commits to the default branch (main) per year

Issue History

Languages

The only known language in this repository is Jupyter Notebook

updated: 2024-11-05 @ 03:51pm, id: 721855992 / R_kgDOKwal-A