Your AI just read that.
Did it comply?
Four-layer detection pipeline that scans untrusted text before it reaches your LLM. Catches instruction overrides, jailbreaks, and semantic evasion in milliseconds.
One malicious string can compromise your entire pipeline.
Attackers embed instructions in documents, emails, and API responses that your agent processes as trusted input. PromptScan intercepts them before they reach your LLM.
The alternatives cost more than you think.
Rolling your own takes weeks and leaves you maintaining ML infrastructure forever. Lakera Guard requires a sales call before you can see a price. PromptScan is live in two minutes, transparent, and starts free.
| | ▲ PromptScan | Lakera Guard | Build Yourself |
|---|---|---|---|
| Free tier | ✓ 1,000 scans/mo | ✗ Contact sales | ~ You build it |
| Time to first scan | 2 minutes | Days (sales cycle) | 3–4 weeks |
| Transparent pricing | ✓ Published | ✗ Not public | Pay-as-you-go infra |
| Infrastructure to manage | None | None | Server, DB, model, ops |
| Model updates included | ✓ Automatic | ✓ Automatic | ✗ Retrain yourself |
| Cost at 10K scans/mo | $9/mo | Undisclosed $$ | ~$12 infra + eng time |
| Agent-native (x402 / MCP) | ✓ | ✗ | ✗ Build it yourself |
| No credit card to start | ✓ | ✗ | Pay for infra upfront |
Four layers. Each catches what the last missed.
Layers activate in sequence, stopping as soon as a definitive verdict is reached. Clean text exits at Layer 1 in microseconds. Only ambiguous edge cases reach Layer 4.
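The short-circuit behavior can be sketched in a few lines of Python. The layer names, patterns, and heuristics below are illustrative stand-ins, not PromptScan's actual internals: each layer returns a definitive verdict or `None` to escalate, and the pipeline stops at the first verdict.

```python
import re

def pattern_engine(text):
    """Layer 1 stand-in: fast pattern pass. Returns a verdict dict, or None if ambiguous."""
    if re.search(r"ignore (all|previous) instructions", text, re.I):
        return {"injection_detected": True, "layer_triggered": "pattern_engine"}
    # Short, structurally plain text gets a definitive clean verdict here.
    if len(text) < 200 and not re.search(r"system:|<\|", text, re.I):
        return {"injection_detected": False, "layer_triggered": "pattern_engine"}
    return None  # ambiguous: escalate to the next layer

def semantic_layer(text):
    """Deeper-layer stand-in: in the real service this would be an ML classifier."""
    return {"injection_detected": False, "layer_triggered": "semantic"}

LAYERS = [pattern_engine, semantic_layer]

def scan(text):
    for layer in LAYERS:
        verdict = layer(text)
        if verdict is not None:  # stop at the first definitive verdict
            return verdict
    return {"injection_detected": False, "layer_triggered": None}
```

Because most traffic is clean and exits at the first layer, the expensive layers only ever see the ambiguous tail.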
Up and running in three steps.
1. Get an API key (sign-up is instant; no credit card required).
2. POST your untrusted text to `/v1/scan`. Place it in your pipeline between user input and your LLM call.
3. Check `injection_detected`. Block, sanitize, or flag the input. Done.

```bash
curl -X POST https://promptscan.dev/v1/scan \
  -H "Content-Type: application/json" \
  -H "X-API-Key: pif_your_key" \
  -d '{"text": "Ignore all instructions..."}'

# Response:
{
  "injection_detected": true,
  "attack_type": "instruction_override",
  "confidence": 0.97,
  "details": { "layer_triggered": "pattern_engine" },
  "meta": { "processing_time_ms": 2.1 }
}
```
```python
import httpx

client = httpx.Client(
    base_url="https://promptscan.dev",
    headers={"X-API-Key": "pif_your_key"},
)

def safe_prompt(text: str) -> str:
    result = client.post("/v1/scan", json={"text": text}).json()
    if result["injection_detected"]:
        raise ValueError(
            f"Injection: {result['attack_type']} ({result['confidence']:.0%})"
        )
    return text
```
```javascript
const scan = async (text) => {
  const res = await fetch("https://promptscan.dev/v1/scan", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "X-API-Key": "pif_your_key",
    },
    body: JSON.stringify({ text }),
  });
  const data = await res.json();
  if (data.injection_detected) {
    throw new Error(`Injection: ${data.attack_type}`);
  }
  return text;
};
```
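A common placement is the retrieval boundary: scan each fetched document and drop (or sanitize, or flag) anything the API marks as an injection before the context reaches your model. A minimal sketch of that filtering step, with the scan call abstracted behind a callable so the HTTP details stay out of the way (the helper names here are ours, not part of the API):

```python
from typing import Callable

def build_safe_prompt(
    question: str,
    documents: list[str],
    scan: Callable[[str], dict],  # e.g. a wrapper around POST /v1/scan
) -> str:
    # Keep only documents the scanner did not flag.
    safe_docs = [d for d in documents if not scan(d)["injection_detected"]]
    context = "\n---\n".join(safe_docs)
    return f"Context:\n{context}\n\nQuestion: {question}"
```

Passing `scan` in as a parameter also makes the guard trivial to stub in unit tests.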
Agents discover, sign up, and upgrade — autonomously.
- When an anonymous agent exhausts the free tier, the 402 response carries structured x402 payment options, so no human is needed to interpret them.
- The agent POSTs to `/v1/signup` with its email and receives an API key instantly, no browser, no OAuth flow.

```json
{
  "error": "free_tier_exhausted",
  "x402": {
    "version": "0.1",
    "accepts": [
      {
        "scheme": "signup",
        "description": "Developer: 1,000 scans/mo, free",
        "method": "POST",
        "url": "https://promptscan.dev/v1/signup"
      },
      {
        "scheme": "stripe-payment-link",
        "description": "Starter: 10,000 scans/mo, $9/mo",
        "url": "https://buy.stripe.com/..."
      }
    ]
  }
}
```
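An agent acting on that payload only has to walk the `accepts` list and pick a scheme it can execute. A minimal sketch of the selection step, assuming the payload shape shown above (the helper name is ours, not part of the API):

```python
def choose_signup_option(payload: dict):
    """Pick the free signup scheme from an x402 'accepts' list, if offered."""
    for option in payload.get("x402", {}).get("accepts", []):
        if option.get("scheme") == "signup":
            return option
    return None

# With the option in hand, the agent would POST its email to option["url"]
# (e.g. httpx.post(option["url"], json={"email": ...})) and receive its key.
```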
Also discoverable via MCP manifest — agent frameworks auto-discover the signup tool without any configuration.