The Vigil Journal
Writing from the defense layer.
Threat research. Engineering deep-dives. Protocol specs. Notes from the company. No hot takes, no hype cycles. What we are building, what we are finding, and what we think it means.
Featured · Latest
Standards
We filed our NIST RFI. Here is what we said.
On March 9, 2026 we submitted our response to the NIST AI Safety Institute's Request for Information on AI audit standards (docket NIST-2025-0035). This post walks through the core argument: why existing audit frameworks cannot address agent-economy risk, why observability alone is insufficient, and why cryptographically sealed audit trails (VOAF) are the only workable substrate for regulatory review in a world where AI acts.
“Measurement is not prevention. You cannot audit your way out of a probabilistic system. The audit trail has to be cryptographic, verifiable without trust in the issuer, and emitted at every action, not assembled after the fact.”
Read the full response →
Threat Research
Prompt injection is a statistical problem, not a security patch.
Every prompt-injection defense that relies on an LLM to catch attacks will eventually be broken by a better-crafted prompt. The only workable defense is statistical: behavioral baselines, scope enforcement, deterministic policy.
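The "deterministic policy" piece of that argument can be shown in a few lines. This is an illustrative sketch, not Vigil's implementation: the scope names are invented, and the point is only that an allowlist check has no model in the loop, so no crafted prompt can argue its way past it.

```python
# Deterministic scope enforcement: a plain set-membership check.
# No LLM evaluates the request, so there is nothing to "jailbreak".
# Scope names below are illustrative, not a real Vigil policy.
ALLOWED_SCOPES = {"calendar:read", "email:draft"}

def in_scope(requested: str) -> bool:
    """Return True only if the requested scope is explicitly allowlisted."""
    return requested in ALLOWED_SCOPES

print(in_scope("email:draft"))   # True
print(in_scope("files:delete"))  # False
```

However persuasive the injected prompt, `in_scope("files:delete")` is `False` every time; the failure mode is a policy gap, not a statistical one.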
Engineering
The four-model detection ensemble, explained.
Isolation Forest. LSTM drift. Bayesian anomaly score. Multi-Window CUSUM. Why we picked these four, how they vote, and why none of them ever involves an LLM on the critical path.
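The voting idea can be sketched in miniature. The detector names below mirror the post, but the scores, thresholds, and quorum rule are invented for illustration; the real ensemble's weighting is described in the full article.

```python
# Minimal majority-vote ensemble sketch: four detector scores, no LLM
# anywhere on the path. Scores/thresholds here are made-up examples.
from dataclasses import dataclass

@dataclass
class DetectorVote:
    name: str
    score: float      # normalized anomaly score in [0, 1]
    threshold: float  # per-detector alert threshold

def ensemble_alert(votes: list[DetectorVote], quorum: int = 2) -> bool:
    """Raise an alert when at least `quorum` detectors exceed threshold."""
    firing = [v for v in votes if v.score >= v.threshold]
    return len(firing) >= quorum

votes = [
    DetectorVote("isolation_forest",  0.82, 0.75),  # fires
    DetectorVote("lstm_drift",        0.40, 0.60),
    DetectorVote("bayesian_anomaly",  0.91, 0.80),  # fires
    DetectorVote("multiwindow_cusum", 0.55, 0.70),
]
print(ensemble_alert(votes))  # two of four fire -> True
```

The quorum rule is the part that matters: no single noisy detector can trigger an alert on its own, and no single blind spot can suppress one.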
Threat Research
What the McKinsey Lilli breach tells us.
A multi-billion dollar firm's internal AI system was compromised through the prompt layer. No audit trail existed. This is not a McKinsey problem. It is the default state of the agent economy.
Company
v2.0 has shipped. What changed.
362 tests passing. The Execution Gate moved from prototype to production. The five-plane architecture landed. The Gateway surface entered private beta. Here is the full engineering changelog in plain English.
Standards
Publishing TAP v1.0. Identity for AI agents.
The Trust Attestation Protocol v1.0 specification is public. Who an agent is. What it can do. Who vouches for it. Revocable in one call. Designed to be adopted without Vigil.
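The teaser's three questions map naturally onto a record with three fields. This sketch uses invented field names, not the published TAP v1.0 schema; it only illustrates the "revocable in one call" property, where every capability check fails the instant the attestation is pulled.

```python
# Illustrative attestation record in the spirit of the TAP teaser.
# Field names are assumptions, not the published TAP v1.0 schema.
from dataclasses import dataclass

@dataclass
class Attestation:
    agent_id: str
    capabilities: set[str]  # what the agent can do
    voucher: str            # who vouches for it
    revoked: bool = False

    def revoke(self) -> None:
        """Revocation is a single call; every check fails afterwards."""
        self.revoked = True

    def permits(self, capability: str) -> bool:
        return not self.revoked and capability in self.capabilities

att = Attestation("agent-7", {"read_calendar", "send_email"}, "acme-corp")
print(att.permits("send_email"))  # True
att.revoke()
print(att.permits("send_email"))  # False: revocation gates everything
```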
Threat Research
Why providers cannot build the defense layer.
Structural conflict. OpenAI, Anthropic, and Google all have incentives that misalign with user safety at the tail. A standalone defense layer is the only durable answer. Here is the market logic.
Company
Vigil Gateway: private beta is open.
Cloud agent coverage in one URL change. Four-tier pricing. OpenAI, Anthropic, Google, Groq supported at launch. Framework partnerships in discussion with LangChain, LangGraph, and Cloudflare.
Standards
VOAF-M: turning audit trails into training data.
VOAF-M is the training-ready variant of the Vigil Open Audit Format. One export, two outputs: a regulatory audit record and a ready-to-use JSONL training set for personal and enterprise model fine-tunes.
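The "one export, two outputs" idea is a projection: the same sealed event serves as the audit record, and a subset of its fields becomes one JSONL training line. The event shape and field names below are illustrative assumptions, not the published VOAF schema.

```python
# Sketch of projecting an audit event into a JSONL training line.
# The event shape is a hypothetical example, not the real VOAF-M format.
import json

audit_event = {
    "action": "send_email",
    "input": "Draft a status update to the team",
    "output": "Subject: Weekly status update",
    "verdict": "allowed",
    "seal": "sha256:deadbeef",  # placeholder seal for illustration
}

def to_training_record(event: dict) -> str:
    """Keep only the prompt/completion pair; the seal and verdict stay
    in the audit record, not the training set."""
    record = {"prompt": event["input"], "completion": event["output"]}
    return json.dumps(record)

print(to_training_record(audit_event))
```

Because the training line is a pure function of the sealed event, the fine-tune set inherits the audit trail's provenance for free.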
Engineering
The Execution Gate. How we hold actions pre-execution.
Request-response pipeline architecture. Three-tier action classification. How we hold a wire transfer, surface it to the user, and revoke cleanly if rejected. With diagrams.
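The hold-then-decide flow can be sketched as a tiny state machine. The tier assignments and action names below are invented for illustration; the real classification logic and diagrams are in the full post.

```python
# Sketch of a pre-execution gate with three-tier classification.
# Tier assignments here are illustrative, not Vigil's policy.
from enum import Enum
from typing import Optional

class Tier(Enum):
    ALLOW = "allow"  # execute immediately
    HOLD = "hold"    # park pre-execution, await the user's decision
    BLOCK = "block"  # refuse outright

TIERS = {
    "read_calendar": Tier.ALLOW,
    "wire_transfer": Tier.HOLD,
    "disable_audit_log": Tier.BLOCK,
}

def gate(action: str, approved: Optional[bool] = None) -> str:
    tier = TIERS.get(action, Tier.HOLD)  # unknown actions default to a hold
    if tier is Tier.ALLOW:
        return "executed"
    if tier is Tier.BLOCK:
        return "rejected"
    # HOLD: the action never ran, so rejection is a clean revoke,
    # not a rollback of something already executed.
    if approved is None:
        return "held"
    return "executed" if approved else "revoked"

print(gate("read_calendar"))                  # executed
print(gate("wire_transfer"))                  # held
print(gate("wire_transfer", approved=False))  # revoked
```

The design point the post makes is visible even at this scale: because the wire transfer is held *before* execution, rejecting it is a no-op rather than an undo.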
From the archive.
See all 47 posts →
Feb 04, 2026
Why Vigil exists. The founding thesis, two months in.
Company
11 min
Jan 28, 2026
VARP: revoking a compromised agent across every surface.
Standards
7 min
Jan 22, 2026
Glasswing: what Anthropic's 40-org coalition actually means.
Threat Research
9 min
Jan 14, 2026
Why we do TLS interception. And why it is safe when done right.
Engineering
13 min
Jan 08, 2026
DeepMind's six attack categories, mapped to Vigil modes.
Threat Research
10 min
Dec 28, 2025
Local-first AI security. Why none of your data leaves your Mac.
Engineering
8 min
The Vigil Brief
One email a month. Zero fluff.
Threat research, protocol updates, engineering notes, and the occasional piece of raw company data. Sent once a month. Written by the founders. Unsubscribe in one click.
Platform
The AI defense platform for AI
Build
v2.1.0 · 362 tests · 11 crates · 31 endpoints · <10ms p99
Patents
VIGIL-2026-001 · VIGIL-2026-002
Regulatory
NIST docket 2025-0035 · mmk-190r-hvap