sinanw / llm-security-prompt-injection
This project investigates the security of large language models by performing binary classification of a set of input prompts to discover malicious prompts. Several approaches have been analyzed using classical ML algorithms, a trained LLM model, and a fine-tuned LLM model.
RepositoryStats indexes 584,777 repositories, of these sinanw/llm-security-prompt-injection is ranked #542,823 (7th percentile) for total stargazers, and #422,609 for total watchers. Github reports the primary language for this repository as Jupyter Notebook, for repositories using this language it is ranked #15,296/17,124.
Star History
Github stargazers over time
Watcher History
Github watchers over time, collection started in '23
Recent Commit History
40 commits on the default branch (main) since jan '22
Yearly Commits
Commits to the default branch (main) per year
Issue History
Languages
The only known language in this repository is Jupyter Notebook
updated: 2024-11-05 @ 03:51pm, id: 721855992 / R_kgDOKwal-A