Lupid is an Agent Runtime Protection ecosystem — infrastructure for running AI agents securely. Observe every action they take, and enforce what they can and can’t do at runtime.
LLM-driven agents make decisions you can’t fully predict at deploy time. A requests.get() your code never wrote can still appear in production. Static analysis catches some of this; runtime enforcement catches the rest.
Lupid evaluates every agent action against your policy at the moment it happens. Allowed actions go through. Disallowed ones don’t.
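The idea can be sketched in a few lines. This is an illustrative toy, not the Lupid policy language or SDK: a default-deny policy with a URL allowlist, evaluated per action.

```python
from dataclasses import dataclass
from urllib.parse import urlparse

# Illustrative only — names and structure are hypothetical, not Lupid's API.
@dataclass
class Policy:
    allowed_hosts: set[str]

    def evaluate(self, action: str, target: str) -> bool:
        """Return True if the action may proceed, False if it must be blocked."""
        if action == "http_request":
            return urlparse(target).hostname in self.allowed_hosts
        # Default-deny: any action type without an explicit rule is blocked.
        return False

policy = Policy(allowed_hosts={"api.example.com"})

policy.evaluate("http_request", "https://api.example.com/v1/data")   # allowed
policy.evaluate("http_request", "https://evil.example.net/exfil")    # blocked
```

The default-deny fall-through is the important design choice: an agent action the policy author never anticipated is blocked, not silently permitted.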
Policy enforcement
Define rules for what agents can access — URLs, file paths, system calls, API endpoints. Enforce them at runtime.
Action observability
Full visibility into every action your agents take. Trace tool calls, inspect payloads, and audit decisions.
Audit trail
Every policy evaluation is logged. Know exactly what happened, when, and why it was allowed or blocked.
Framework agnostic
Works with LangChain, CrewAI, AutoGen, custom agents, or any stack that runs LLM-driven tool calls.
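Framework-agnostic interception can be sketched as a plain wrapper around any callable tool, checked before each invocation. Again illustrative: `guarded`, `PolicyViolation`, and `delete_path` are hypothetical names, not part of any framework or the Lupid SDK.

```python
from functools import wraps

class PolicyViolation(Exception):
    """Raised when a tool call is blocked by policy."""

def guarded(tool, is_allowed):
    """Wrap any callable so each call is policy-checked before it runs."""
    @wraps(tool)
    def wrapper(*args, **kwargs):
        if not is_allowed(tool.__name__, args, kwargs):
            raise PolicyViolation(f"blocked: {tool.__name__}{args}")
        return tool(*args, **kwargs)
    return wrapper

def delete_path(path):
    # Stand-in for a real filesystem tool; the wrapper is what matters here.
    return f"deleted {path}"

# Policy: this agent may only delete under /tmp.
safe_delete = guarded(delete_path, lambda name, args, kw: args[0].startswith("/tmp/"))

safe_delete("/tmp/scratch.txt")   # allowed, runs normally
# safe_delete("/etc/passwd")      # raises PolicyViolation
```

Because the wrapper only assumes "a callable tool", the same pattern applies to a LangChain tool, a CrewAI task action, or a hand-rolled function dispatch table.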
Getting Started
Set up Lupid and add runtime protection to your first agent.
API Reference
Full reference for the Lupid SDK, policy language, and configuration.
Guides
Step-by-step walkthroughs for common scenarios and integrations.