Introduction

Lupid is an Agent Runtime Protection ecosystem: infrastructure for running AI agents securely. It observes every action your agents take and enforces what they can and can't do at runtime.

LLM-driven agents make decisions you can’t fully predict at deploy time. A requests.get() your code never wrote can still appear in production. Static analysis catches some of this; runtime enforcement catches the rest.

Lupid evaluates every agent action against your policy at the moment it happens. Allowed actions go through. Disallowed ones don’t.

  1. Your agent decides to act — a tool call, an API request, a file write.
  2. Lupid intercepts the action — before it reaches the outside world, the action is checked against your policy.
  3. Policy decides — if allowed, the action proceeds normally. If not, it’s blocked and your agent receives a denial response.
  4. Everything is logged — every action, every decision, every policy evaluation is recorded for observability.
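The four steps above can be sketched in a few lines. Everything here — the `Policy` class, the rule format, the action shape — is an illustrative assumption, not the actual Lupid SDK:

```python
# Illustrative sketch of the intercept -> decide -> log loop.
# Class and field names are assumptions made for this example.
from dataclasses import dataclass, field

@dataclass
class Decision:
    allowed: bool
    reason: str

@dataclass
class Policy:
    allowed_hosts: set          # hosts the agent may talk to
    audit_log: list = field(default_factory=list)

    def evaluate(self, action: dict) -> Decision:
        # Steps 1-3: the agent's action is intercepted and checked
        # against policy before it reaches the outside world.
        host = action.get("host", "")
        if host in self.allowed_hosts:
            decision = Decision(True, f"host {host!r} is allowlisted")
        else:
            decision = Decision(False, f"host {host!r} is not allowlisted")
        # Step 4: every evaluation is recorded for observability.
        self.audit_log.append(
            {"action": action, "allowed": decision.allowed, "reason": decision.reason}
        )
        return decision

policy = Policy(allowed_hosts={"api.example.com"})
ok = policy.evaluate({"tool": "http_get", "host": "api.example.com"})
blocked = policy.evaluate({"tool": "http_get", "host": "evil.example.net"})
print(ok.allowed, blocked.allowed)  # True False
```

A denied action raises no exception in this sketch; the agent simply receives `Decision(allowed=False, ...)` as its denial response, mirroring step 3.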

Policy enforcement

Define rules for what agents can access — URLs, file paths, system calls, API endpoints. Enforce them at runtime.
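As a rough sketch of what such rules could look like, here is a first-match-wins rule table over URLs and file paths, matched with glob patterns. The rule format is invented for illustration; Lupid's actual policy language may differ:

```python
# Hypothetical policy rules; deny-by-default via the trailing "*" rules.
from fnmatch import fnmatch

RULES = [
    {"kind": "url",  "pattern": "https://api.example.com/*", "effect": "allow"},
    {"kind": "path", "pattern": "/tmp/agent/*",              "effect": "allow"},
    {"kind": "path", "pattern": "*",                         "effect": "deny"},
    {"kind": "url",  "pattern": "*",                         "effect": "deny"},
]

def check(kind: str, target: str) -> bool:
    # First matching rule wins; anything unmatched is denied.
    for rule in RULES:
        if rule["kind"] == kind and fnmatch(target, rule["pattern"]):
            return rule["effect"] == "allow"
    return False

print(check("url", "https://api.example.com/v1/users"))  # True
print(check("path", "/etc/passwd"))                      # False
```

First-match-wins with explicit deny rules at the bottom keeps the default posture restrictive: anything you forget to allowlist is blocked.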

Action observability

Full visibility into every action your agents take. Trace tool calls, inspect payloads, and audit decisions.

Audit trail

Every policy evaluation is logged. Know exactly what happened, when, and why it was allowed or blocked.
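A single audit record might capture the what, when, and why as a structured log line. The field names below are assumptions for illustration, not Lupid's actual log schema:

```python
# Sketch of one audit record per policy evaluation, serialized as JSON.
import json
from datetime import datetime, timezone

def audit_record(action: dict, allowed: bool, rule: str) -> str:
    return json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),  # when
        "action": action,                              # what the agent tried
        "allowed": allowed,                            # the decision
        "rule": rule,                                  # why it was allowed/blocked
    })

line = audit_record(
    {"tool": "write_file", "path": "/tmp/agent/out.txt"},
    True,
    "allow /tmp/agent/*",
)
print(line)
```

Structured records like this make the trail queryable: filter by tool, by decision, or by the rule that fired.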

Framework agnostic

Works with LangChain, CrewAI, AutoGen, custom agents, or any stack that runs LLM-driven tool calls.
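The reason framework agnosticism is possible: every one of those stacks ultimately invokes a plain tool function, and a guard can wrap that function. The decorator and the `ALLOWED_TOOLS` policy below are illustrative, not the Lupid SDK:

```python
# Framework-agnostic guard: wrap any tool callable, whatever stack calls it.
# ALLOWED_TOOLS and ActionDenied are assumptions made for this sketch.
from functools import wraps

ALLOWED_TOOLS = {"search", "calculator"}

class ActionDenied(Exception):
    pass

def guarded(tool_name: str):
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if tool_name not in ALLOWED_TOOLS:
                # The agent receives a denial instead of the tool result.
                raise ActionDenied(f"tool {tool_name!r} blocked by policy")
            return fn(*args, **kwargs)
        return wrapper
    return decorate

@guarded("calculator")
def add(a: int, b: int) -> int:
    return a + b

@guarded("shell")
def run_shell(cmd: str) -> str:
    return "never reached"

print(add(2, 3))  # 5
try:
    run_shell("rm -rf /")
except ActionDenied as exc:
    print(exc)
```

Because the wrapping happens at the callable boundary, the same guard works whether the tool is registered with LangChain, CrewAI, AutoGen, or a hand-rolled agent loop.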

Getting Started

Set up Lupid and add runtime protection to your first agent.

API Reference

Full reference for the Lupid SDK, policy language, and configuration.

Guides

Step-by-step walkthroughs for common scenarios and integrations.