AI Governance & Security Platform

Every agent call. Audited.
Every tool use. Policy-checked.
Every artifact. Cryptographically yours.

r5e is the governance and orchestration layer between your AI agents and your infrastructure. Not another harness. Not an SDK you bolt on. A runtime with real security.

Request a Pilot
THE AI GATEWAY

Shadow AI? We've got you.

r5e acts as an AI gateway: it proxies and audits every AI call across your organization. Every. Single. One.

Proxy & Audit

Every AI API call routed through the gateway gets logged, policy-checked, and attributed. No blind spots. No shadow calls slipping through.
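Conceptually, each proxied call yields one attributed, policy-checked audit record. A minimal sketch of that per-call flow in Python (the function names, the toy admission rule, and the in-memory log are all illustrative, not the r5e API):

```python
import hashlib
import time

AUDIT_LOG = []  # stand-in for the gateway's durable audit store

def audited_call(agent_id: str, provider: str, prompt: str) -> dict:
    """Log, policy-check, and attribute a single AI call (gateway sketch)."""
    record = {
        "ts": time.time(),
        "agent": agent_id,            # attribution: who made the call
        "provider": provider,
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
    }
    if len(prompt) > 100_000:         # toy admission policy: cap prompt size
        record["verdict"] = "denied"
        AUDIT_LOG.append(record)      # denials are logged too: no blind spots
        raise PermissionError("admission policy: prompt too large")
    record["verdict"] = "allowed"
    AUDIT_LOG.append(record)
    # ... forward the request to the upstream provider here ...
    return record

entry = audited_call("agent:code-review", "anthropic", "summarize this diff")
print(entry["verdict"])  # allowed
```

Because the record is written before the verdict is enforced, even rejected calls leave a trace.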

SBOM & CVE Checks

Hosted SBOMs for MCP tools and npx packages, verified against CVE databases in real time. Remember the axios leak? Yeah. This is why.
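At its simplest, the matching step reduces to looking up each (package, version) pair from an SBOM against an advisory database. A sketch with a toy advisory table (real checks query live CVE feeds; the entry below reuses the CVE id from the detection transcript later on this page):

```python
# Toy advisory database: package name -> vulnerable version -> CVE id.
ADVISORIES = {
    "mcp-toolkit": {"3.2.1": "CVE-2026-41882"},
}

def check_sbom(components: list[tuple[str, str]]) -> list[str]:
    """Return CVE ids matching any (name, version) component in the SBOM."""
    return [
        ADVISORIES[name][version]
        for name, version in components
        if version in ADVISORIES.get(name, {})
    ]

hits = check_sbom([("left-pad", "1.3.0"), ("mcp-toolkit", "3.2.1")])
print(hits)  # ['CVE-2026-41882']
```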

Agent Identity

Your agents authenticate with employee SSO or machine identity. Your call. We support whatever you already use: Okta, Entra ID, SAML, LDAP.

[Diagram] Your agents (Claude · Codex · Custom) send all traffic through the r5e Gateway (policy admission, identity verification, SBOM validation, audit logging, prompt safety check; backed by the Gateway, Registry, and SBOM services); approved calls continue on to the AI providers (Anthropic · OpenAI · Google · Local LLMs).
SECURITY ARCHITECTURE

Three layers. Zero trust gaps.

Auditors need to answer three different questions. We built a layer for each one.

01 Execution Attestation

What image/runtime was used. What agent, tools, and auth were bound. What workspace was attached. Signed provenance around the container + mounts + agent identity.

The question it answers: “Was the environment constrained as claimed?”
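A sketch of what a signed provenance statement could look like. HMAC stands in here for the real signature scheme (production attestation would use asymmetric, KMS- or hardware-backed keys), and every name and field is illustrative:

```python
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # illustration only; never hard-code real keys

def attest(image_digest: str, mounts: list[str], agent_identity: str) -> dict:
    """Produce a signed statement binding runtime, workspace, and identity."""
    claims = {
        "image": image_digest,        # what image/runtime was used
        "mounts": sorted(mounts),     # what workspace was attached
        "agent": agent_identity,      # what identity was bound
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "sig": sig}

def verify(statement: dict) -> bool:
    """Reject any statement whose claims were altered after signing."""
    payload = json.dumps(statement["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, statement["sig"])

st = attest("sha256:ab12cd", ["/workspace:ro"], "agent:code-review")
print(verify(st))  # True
```

Canonical JSON (sorted keys) makes the signature deterministic; any edit to the claims invalidates it.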
02 Transparency Log

Certificate-Transparency-style append-only log. Every important event gets committed — hash-linked, signed, and externally anchored. Tamper-evident by design, not by promise.

The question it answers: “Was the record tamper-evident and append-only?”
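The core of hash-linking fits in a few lines. This sketch shows only the tamper-evidence idea; a real CT-style log adds Merkle inclusion proofs, signed tree heads, and external anchoring on top (class and field names are illustrative):

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

class TransparencyLog:
    """Append-only log: each entry commits to the previous entry's hash."""

    def __init__(self):
        self.entries = []

    def append(self, event: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else GENESIS
        body = json.dumps({"prev": prev, "event": event}, sort_keys=True).encode()
        digest = hashlib.sha256(body).hexdigest()
        self.entries.append({"prev": prev, "event": event, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any edit, insertion, or deletion breaks it."""
        prev = GENESIS
        for entry in self.entries:
            body = json.dumps({"prev": prev, "event": entry["event"]},
                              sort_keys=True).encode()
            if entry["prev"] != prev or entry["hash"] != hashlib.sha256(body).hexdigest():
                return False
            prev = entry["hash"]
        return True

log = TransparencyLog()
log.append({"type": "tool_call", "agent": "agent:code-review"})
log.append({"type": "artifact_written"})
print(log.verify())  # True
```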
03 Artifact Integrity

Merkle-tree hashing for every artifact — prompts, outputs, patches, referenced files. Content-addressed storage means every blob is verifiable. Is this the exact artifact that was produced? Prove it.

The question it answers: “Do these artifacts actually correspond to that record?”
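A minimal Merkle-root computation over artifact blobs, illustrating why any single-byte change is detectable. Sketch only, under the usual duplicate-last-leaf convention for odd levels; r5e's actual tree layout isn't shown here:

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blobs: list[bytes]) -> str:
    """Fold content-addressed leaves pairwise up to a single root hash."""
    level = [_h(b) for b in blobs] or [_h(b"")]
    while len(level) > 1:
        if len(level) % 2:                 # odd level: duplicate last node
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0].hex()

artifacts = [b"prompt", b"model output", b"patch.diff"]
root = merkle_root(artifacts)
# Changing any blob changes the root:
assert merkle_root([b"prompt", b"tampered", b"patch.diff"]) != root
```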
The architecture in one line: Attestation = boundary + CT log = history + Merkle graph = integrity.
REAL-TIME THREAT DETECTION

We monitor agent intent. In near real time.

When every AI call flows through our runtime, we don't just log it. We watch it. Standard DSA plus intent analysis means we can catch a prompt-poisoning attack as it happens: detect it, alert on it, and stop it from propagating to anything else. Then we lead your security team straight to it, either to delete it or, better yet, to leave it in place and work the audit log with your network team to figure out what poisoned it in the first place. Rogue agent? Bad npx package? Compromised MCP tool? The evidence chain is already there.

$ r5e watch --intent-monitor  
14:23:05 anomalous intent pattern in agent:coded
14:23:05 propagation blocked → 4 downstream tasks held
14:23:06 alert fired → #security-oncall
14:23:08 source traced → npx:mcp-toolkit@3.2.1
14:23:08 SBOM mismatch → CVE-2026-41882 (known)
14:23:09 evidence chain preserved for forensics

“We don’t compete with Wiz. We don’t compete with Datadog. We take your existing tools, supplement them, integrate with them. We’re flexible so you don’t have to be.”

A much smaller ship to turn
INTEGRATIONS

We're flexible so you don't have to be.

Integrate with your existing security stack. We supplement your tools, not replace them. We're a much smaller ship to turn.

[Diagram] Identity providers (Okta, Entra ID, SAML, LDAP) feed r5e (Gateway, Registry, SBOM), which connects into your security stack (Wiz, Datadog, Splunk, CI/CD, Registries).
DISCOVERY

Your AI security advisor.

An agent that reads your policy-as-code, examines your tools, maps your gaps, and helps your EIS team reason about what's missing.

1. Agent Scans: Reads your policy-as-code, examines CI/CD configs, inventories security tools already deployed.

2. Finds Gaps: Maps coverage holes — missing policies, unmonitored endpoints, shadow AI usage patterns.

3. Suggests Policies: Proposes sane defaults based on your stack. Not generic templates — contextual recommendations.

4. Team Reviews: Your team reasons over findings with the agent. Accept, modify, or reject — humans stay in the loop.
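At its simplest, the gap-mapping step reduces to a set difference between the controls your policy baseline requires and the policies actually deployed. A sketch with illustrative rule names (borrowed from elsewhere on this page, not a fixed r5e vocabulary):

```python
# Illustrative baseline of required controls.
REQUIRED = {
    "require-signed-identity",
    "budget-cap",
    "sbom-validation",
    "prompt-safety",
}

def find_gaps(deployed_policies: set[str]) -> set[str]:
    """Map coverage holes: required controls with no deployed policy."""
    return REQUIRED - deployed_policies

gaps = find_gaps({"require-signed-identity", "budget-cap"})
print(sorted(gaps))  # ['prompt-safety', 'sbom-validation']
```

In practice the agent derives both sets from your policy-as-code and tool inventory rather than from hand-written literals.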

EXTENSIBILITY

Built to extend.
Not to lock in.

K8s-style declarative API. CustomResourceDefinitions as first-class citizens. Compatible with every major harness; if yours isn't, wire it up with a few API definitions.

agent-policy.yaml
apiVersion: r5e.io/v1alpha1
kind: AgentPolicy
metadata:
  name: code-review-agent
  namespace: engineering
spec:
  harness: claude-code
  admission:
    - rule: require-signed-identity
    - rule: budget-cap
      params:
        max_tokens: 50000
  permissions:
    - resource: "repos/*/pulls"
      actions: [read, comment, approve]
    - resource: "repos/*/branches/main"
      actions: [read] # no direct push
  escalation:
    notify: ["#security-review"]
    requireApproval: true

Make a PR. We'll probably approve it pretty quickly.

Stop hamstringing your agents
with walled gardens.

Governance that actually works. For teams that actually ship.

Request a Pilot thomas@r5e-ai.com California C-Corp. Built for regulated industries.