TL;DR
- APort is pre-action authorization infrastructure for AI agents. It enforces policy before a tool call executes — not after.
- The core spec is the Open Agent Passport (OAP): agent identity + declarative policy + cryptographic audit trail. Open source, Apache 2.0.
- 53ms median latency. Same input, same decision, every time. No model inference in the enforcement path.
- 0% bypass rate under restrictive policy in a public adversarial testbed (879 attempts, $5,000 bounty unclaimed).
- Works across OpenClaw, Cursor, Claude Code, LangChain, CrewAI, OpenAI SDK, and others. Framework-agnostic.
The one-sentence version
APort answers a question that OAuth, API gateways, and model alignment cannot: "Should this specific AI agent be allowed to execute this specific tool call, right now, under this policy?"
If the answer is no, the tool call never runs. The decision is logged. The agent gets a denial reason. No damage done.
Why this exists
AI agents execute real-world actions through tool calls — functions that transfer funds, query databases, execute shell commands, send messages, and delegate to sub-agents. Today, the decision to execute a tool call is made in one of two places:
- The model — via alignment training (probabilistic, bypassable via prompt injection)
- The application — via ad hoc validation code (inconsistent, framework-specific, no audit trail)
Neither is an authorization layer. Neither enforces a declarative policy. Neither produces a verifiable record of what was authorized and what was denied.
This is the same gap the industry faced before OAuth for web APIs and before RBAC for multi-user systems: a missing infrastructure layer, not a missing feature.
Published data on the current state:
- 27.2% of engineering teams have abandoned framework-provided authorization and reverted to custom, hardcoded logic
- 492+ MCP servers were found exposed without authentication or encryption in production
- 90% of government organizations lack purpose-binding controls for AI agents
How APort works
APort operates at the tool call boundary — the moment between when an agent decides to do something and when the action actually executes.
```
Agent decides to call a tool
              ↓
APort intercepts: before_tool_call(tool, params, passport, context)
              ↓
Policy engine evaluates: ALLOW | DENY | ESCALATE
        ↓                          ↓
ALLOW → tool executes       DENY → tool blocked
        ↓                          ↓
signed attestation          signed attestation (denial reason)
```
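The flow above can be sketched end to end: intercept, evaluate, then either execute or block, emitting a record either way. The hook signature matches the diagram; the policy check, registry, and attestation dict are illustrative stand-ins, not the aport-agent-guardrails API.

```python
# End-to-end sketch of the interception flow. Only before_tool_call's
# shape comes from the diagram; everything else is illustrative.

def before_tool_call(tool, params, passport, context):
    """Deterministic policy check: no model inference on this path."""
    if tool not in passport["authorized_capabilities"]:
        return "DENY", f"{tool} not in passport"
    return "ALLOW", "capability authorized"

def run_tool(tool, params, passport, context, registry):
    verdict, reason = before_tool_call(tool, params, passport, context)
    record = {"action": tool, "verdict": verdict, "reason": reason}  # signed in APort
    if verdict == "ALLOW":
        record["result"] = registry[tool](params)
    return record  # on DENY the tool never ran; the record carries the reason
```

On a denial, `run_tool` returns before the registry lookup ever happens, which is the property the diagram is drawing: the block occurs pre-execution, not after.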
Three components make this work:
1. Agent Passport
A signed credential that identifies the agent and declares what it's authorized to do. Think of it as an OAuth token that scopes to actions, not just APIs.
```json
{
  "agent_id": "support-bot-prod",
  "authorized_capabilities": [
    "finance.payment.refund",
    "messaging.message.send"
  ],
  "assurance_level": "L2",
  "policy_packs": ["oap:finance:v1", "oap:comms:v1"],
  "delegatable": false
}
```
If the agent tries system.command.execute — not in its passport — the call is denied before it runs.
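As a minimal sketch of that check, assuming nothing beyond the passport fields shown above (the helper name `capability_allowed` is hypothetical):

```python
# Illustrative capability check against the passport above.
# A sketch of the concept, not the aport-agent-guardrails API.

passport = {
    "agent_id": "support-bot-prod",
    "authorized_capabilities": [
        "finance.payment.refund",
        "messaging.message.send",
    ],
}

def capability_allowed(tool: str, passport: dict) -> bool:
    """A tool call is allowed only if the passport declares it."""
    return tool in passport["authorized_capabilities"]

capability_allowed("finance.payment.refund", passport)   # True
capability_allowed("system.command.execute", passport)   # False: denied before it runs
```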
2. Policy Packs
Declarative rules that define what's allowed, under what conditions:
```yaml
policy_pack: oap:finance:v1
rules:
  - tool_pattern: "finance.payment.refund"
    action: DENY
    unless:
      - assurance_level: L3
      - amount: "<= 500"
    reason: "Refund requires L3 assurance and max $500"
```
29 policy packs ship in v1.0.20, covering finance, data operations, code, messaging, legal, governance, and MCP tools. You can also write custom policies.
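The semantics of the rule above (deny unless every `unless` condition holds) can be sketched as a small evaluator. Field names mirror the YAML; the evaluator itself and the `evaluate` function name are illustrative, not APort's engine.

```python
# Sketch of evaluating the oap:finance:v1 rule above: refunds are
# DENIED unless assurance is L3 and the amount is <= 500.

RULE = {
    "tool_pattern": "finance.payment.refund",
    "unless": {"assurance_level": "L3", "max_amount": 500},
    "reason": "Refund requires L3 assurance and max $500",
}

def evaluate(tool: str, params: dict, context: dict) -> tuple:
    if tool != RULE["tool_pattern"]:
        return "ALLOW", "rule does not apply"
    cond = RULE["unless"]
    if (context.get("assurance_level") == cond["assurance_level"]
            and params.get("amount", 0) <= cond["max_amount"]):
        return "ALLOW", "all unless-conditions met"
    return "DENY", RULE["reason"]

evaluate("finance.payment.refund", {"amount": 5000}, {"assurance_level": "L3"})
# → ("DENY", "Refund requires L3 assurance and max $500")
```

The same inputs always produce the same verdict, which is what "deterministic, no model inference in the enforcement path" means in practice.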
3. Signed Audit Trail
Every decision — allow or deny — produces a cryptographically signed record:
```json
{
  "decision_id": "dec_7f4c2b1a",
  "agent_id": "support-bot-prod",
  "action": "finance.payment.refund",
  "verdict": "DENY",
  "reason": "Amount 5000 exceeds limit 500",
  "signature": "ed25519:..."
}
```
This is the compliance artifact. SOX, GDPR, HIPAA, and the EU AI Act all require answers to "what happened, was it authorized, who authorized it?" APort generates one signed record for every decision.
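What makes the record tamper-evident is that the signature covers the serialized decision. APort's records use Ed25519, as the `signature` field above shows; the sketch below substitutes stdlib HMAC-SHA256 so it runs without extra dependencies, and the key and function names are illustrative.

```python
# Sketch of producing and verifying a tamper-evident decision record.
# Stand-in: HMAC-SHA256 instead of the Ed25519 signatures APort uses.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # stand-in for a real private key

def sign(record: dict) -> str:
    payload = json.dumps(record, sort_keys=True).encode()
    return "hmac-sha256:" + hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()

def verify(record: dict, signature: str) -> bool:
    return hmac.compare_digest(sign(record), signature)
```

Any edit to the record after signing (say, flipping `"verdict": "DENY"` to `"ALLOW"`) changes the serialized payload and invalidates the signature, which is what lets an auditor trust the trail.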
Properties
| Property | What it means |
|---|---|
| Deterministic | Same input → same decision. No model inference in the enforcement path. |
| Bypass-resistant | Runs at the framework/platform level. A jailbroken model cannot skip the policy check. |
| Fail-closed | Authorization service down? Tool calls denied by default. |
| Fast | 53ms median, p99 under 77ms (cloud API). |
| Framework-agnostic | OpenClaw, Cursor, Claude Code, LangChain, CrewAI, OpenAI SDK, Generic adapter. |
| Open spec | OAP v1.0, Apache 2.0, DOI: 10.5281/zenodo.18901595, /.well-known/oap/. |
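The fail-closed property in the table is the one most easily gotten wrong in ad hoc integrations, where an exception often falls through to "allow." A minimal sketch, assuming a hypothetical `check_policy` callable standing in for the remote authorization check:

```python
# Fail-closed sketch: if the authorization service is unreachable or
# errors out, the call is treated as DENY rather than silently allowed.

def authorize(tool: str, params: dict, check_policy) -> str:
    try:
        return check_policy(tool, params)  # "ALLOW" | "DENY" | "ESCALATE"
    except Exception:
        return "DENY"  # service down → deny by default (fail-closed)
```

The design choice is that an outage degrades to blocked tool calls, never to unauthorized ones.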
What APort is NOT
- Not model alignment. APort doesn't train or fine-tune models. It enforces policy outside the model.
- Not a sandbox. Sandboxes (NemoClaw, E2B) contain the blast radius of an action that has already run. APort prevents the action from executing at all.
- Not post-hoc evaluation. Evaluation tools (Promptfoo, Galileo) test agents after the fact. APort blocks bad calls in real time.
These layers are complementary. A production agent stack needs alignment, evaluation, sandboxing, and pre-action authorization. APort is the authorization layer.
Evidence: Vault CTF
We ran a public adversarial testbed — the Vault CTF — with a $5,000 bounty for bypassing the authorization layer.
| Metric | Result |
|---|---|
| Total authorization decisions | 4,437 |
| Unique attack sessions | 1,151 |
| L1 (no policy, model only): attacker success | 74.6% |
| L5 (full OAP policy): attacker success | 0.0% (879 attempts) |
| Bounty claimed | $0 of $5,000 |
| Statistical confidence | 99% CI met |
Same model. Same users. Same prompts. The only variable: whether a deterministic policy was enforced before the tool call.
Full results: arXiv:2603.20953
Get started
```bash
# Install for your framework
npx @aporthq/aport-agent-guardrails openclaw
npx @aporthq/aport-agent-guardrails cursor
npx @aporthq/aport-agent-guardrails claude-code

# Or via npm/pip for library use
npm install @aporthq/aport-agent-guardrails
pip install aport-agent-guardrails-langchain
```
- Spec: github.com/aporthq/aport-spec
- Implementation: github.com/aporthq/aport-agent-guardrails
- Platform: aport.io
- Research: arXiv:2603.20953