# APort vs Safiron
Model-based guardians improve as their training distributions grow, but they remain classifiers. OAP separates "what the model wants" from "what policy permits."
Safiron (AuraGen + GRPO) represents the state of the art in learned pre-execution screening: powerful when the guardian sees trajectories similar to its training data.
OAP is deliberately non-learned in its decision core, so adversaries cannot game the policy evaluator with the same tactics used against frontier models.
| Comparison point | OAP / APort | Safiron |
|---|---|---|
| Evaluator | Deterministic rules + expressions over structured context. | Guardian neural model scoring proposed actions. |
| Robustness class | Bounded to policy language; no gradient-based attacks on the evaluator. | Inherits adversarial robustness limits of the guardian LLM. |
| Explainability | Stable deny codes (`oap.*`) for automation and SIEM routing. | Model rationales may help humans; less standardized for compliance. |
| Complements | Use Safiron as an upstream risk scorer; OAP as the hard gate. | Use OAP to constrain what the guardian is allowed to approve. |
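To make the "deterministic rules + expressions over structured context" row concrete, here is a minimal sketch of such an evaluator. All names (`Rule`, `evaluate`, the specific `oap.*` deny codes, and the context fields) are hypothetical illustrations, not the real APort API:

```python
# Deterministic policy gate sketch: pure predicates over structured context,
# each bound to a stable deny code suitable for automation and SIEM routing.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Rule:
    deny_code: str                        # stable code, e.g. "oap.limit_exceeded" (illustrative)
    violates: Callable[[dict], bool]      # pure predicate over the structured context

RULES = [
    Rule("oap.tool_not_allowed", lambda ctx: ctx["tool"] not in ctx["allowed_tools"]),
    Rule("oap.limit_exceeded",   lambda ctx: ctx.get("amount", 0) > ctx["max_amount"]),
]

def evaluate(ctx: dict) -> tuple[bool, list[str]]:
    """Return (allowed, deny_codes). Same input always yields the same output."""
    codes = [r.deny_code for r in RULES if r.violates(ctx)]
    return (not codes, codes)

allowed, codes = evaluate({
    "tool": "wire_transfer",
    "allowed_tools": {"wire_transfer"},
    "amount": 5000,
    "max_amount": 1000,
})
# allowed == False, codes == ["oap.limit_exceeded"]
```

Because the evaluator is a fixed function of its input, there is no gradient to attack and no sampling variance to exploit: the same call under the same policy always produces the same deny codes.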
## Use Safiron when
- You want ML-driven prioritization of risky trajectories
- You can retrain guardians as attack patterns evolve
- You accept probabilistic pre-filters before a hard policy layer
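The "probabilistic pre-filter before a hard policy layer" pattern can be sketched as below. `guardian_score` and `hard_gate` are hypothetical stand-ins for a Safiron-style risk model and an OAP-style deterministic gate, not real APIs:

```python
# Layered decision sketch: a learned screener scores the action first,
# but the deterministic gate makes the authoritative allow/deny call.
RISK_THRESHOLD = 0.8

def guardian_score(action: dict) -> float:
    # Stand-in for a learned risk model returning a probability of harm.
    return 0.95 if action.get("tool") == "delete_database" else 0.1

def hard_gate(action: dict) -> bool:
    # Deterministic policy: only explicitly allow-listed tools may pass.
    return action.get("tool") in {"search", "summarize"}

def decide(action: dict) -> str:
    if guardian_score(action) >= RISK_THRESHOLD:
        return "deny:guardian"   # probabilistic pre-filter fires first
    if not hard_gate(action):
        return "deny:policy"     # hard gate decides regardless of the score
    return "allow"

# decide({"tool": "delete_database"}) -> "deny:guardian"
# decide({"tool": "wire_transfer"})   -> "deny:policy"
# decide({"tool": "search"})          -> "allow"
```

Note that a guardian false negative (a low score on a risky action) still hits the hard gate, which is the point of the layering.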
## Use OAP / APort when
- You need court- or auditor-friendly deterministic denials
- You cannot afford guardian false negatives on financial or data tools
- You want policies editable without RL training cycles
## Why teams choose OAP / APort
### Hard final gate
Learned screeners can feed signals; OAP still decides allow/deny.
### Spec-backed decisions
Customers can diff policy pack versions like infrastructure-as-code.
### Fail-closed operations
If verification is down, tool calls stop; there is no silent guardian bypass.
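The fail-closed behavior can be sketched in a few lines. `verify_remote`, its exception, and the `oap.verifier_unavailable` code are hypothetical stand-ins, not the real APort interface:

```python
# Fail-closed sketch: if the policy verifier cannot be reached, the tool
# call is denied rather than silently waved through.
class VerifierUnavailable(Exception):
    pass

def verify_remote(ctx: dict) -> bool:
    # Simulate an outage of the verification service.
    raise VerifierUnavailable("policy service down")

def gate(ctx: dict) -> tuple[bool, str]:
    try:
        return (verify_remote(ctx), "oap.ok")          # illustrative allow code
    except VerifierUnavailable:
        return (False, "oap.verifier_unavailable")     # fail closed, never open

allowed, code = gate({"tool": "wire_transfer"})
# allowed == False, code == "oap.verifier_unavailable"
```

The invariant is that no exception path returns `True`: an outage degrades to denial, which is observable and recoverable, rather than to an unlogged bypass.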