Quick Start
Get started with BENGUARD in minutes. Protect your LLM applications from prompt injection, jailbreaks, PII leaks, toxic content, and 10 more threat categories.
2. Make Your First Request
Send a POST request to scan any prompt before passing it to your LLM.
curl -X POST https://benguard.io/api/v1/scan \
  -H "X-API-Key: ben_your_api_key_here" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Hello, can you help me with my homework?"
  }'
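If you are integrating from Python rather than the shell, the same request can be made with the requests library. This is a minimal sketch that assumes only the endpoint, header, and payload shown in the curl example above:

# Minimal Python sketch of the scan request, assuming the endpoint and
# X-API-Key header from the curl example above.
import requests

API_KEY = "ben_your_api_key_here"
SCAN_URL = "https://benguard.io/api/v1/scan"

def scan_prompt(prompt: str) -> dict:
    # Submit the prompt to the scan endpoint and return the parsed JSON result.
    response = requests.post(
        SCAN_URL,
        headers={"X-API-Key": API_KEY, "Content-Type": "application/json"},
        json={"prompt": prompt},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

print(scan_prompt("Hello, can you help me with my homework?"))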
3. Handle the Response
Check the response to determine if the prompt is safe or contains threats.
{
  "is_valid": true,
  "status": "safe",
  "risk_score": 0.05,
  "threat_types": [],
  "details": {
    "results": [
      {
        "scanner": "prompt_injection",
        "threat_detected": false,
        "risk_score": 0.02,
        "confidence": 0.95
      }
    ]
  },
  "request_id": "req_abc123"
}

Integration Flow
User Input → BENGUARD API → {is_valid: true} → LLM → Response
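In code, this flow amounts to scanning the user input first and only forwarding it to your model when the scan passes. The sketch below reuses the scan_prompt helper from the earlier Python example; call_llm is a placeholder for whatever LLM client your application already uses.

# Sketch of the integration flow above: scan first, call the LLM only if valid.
# scan_prompt() is the helper from the earlier sketch; call_llm() is a stand-in.

def call_llm(prompt: str) -> str:
    # Placeholder: replace with your application's actual LLM call.
    return f"LLM response to: {prompt}"

def handle_user_input(user_input: str) -> str:
    scan = scan_prompt(user_input)  # POST /api/v1/scan (see earlier sketch)
    if not scan["is_valid"]:
        threats = ", ".join(scan["threat_types"]) or "unknown threat"
        return f"Request blocked ({threats})."
    return call_llm(user_input)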