FailSafe by PhT Labs

Contract testing for multi-agent AI systems

Validate handoffs, prevent data leakage, and enforce compliance policies — in milliseconds.

Getting Started

$ pip install failsafe-ai

Your existing code

A standard LangGraph pipeline: the research node passes its full state, leaked secret included, directly to the writer.

from langgraph.graph import StateGraph, START, END

graph = StateGraph(dict)
graph.add_node("research", lambda s: {
    "sources": ["arxiv.org"],
    "api_key": "sk-secret-0x90F",  # leaked to next node
})
graph.add_node("writer", lambda s: print("writer sees:", s) or {})
graph.add_edge(START, "research")
graph.add_edge("research", "writer")
graph.add_edge("writer", END)
graph.compile().invoke({"query": "AI safety"})
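The research-to-writer handoff above is exactly where a contract check belongs. A minimal sketch of the idea in plain Python — the allowed-field list and secret pattern here are illustrative assumptions, not FailSafe's actual API:

```python
import re

# Hypothetical contract for the research -> writer handoff:
# fields the writer may receive, plus a pattern for secret-looking values.
ALLOWED_FIELDS = {"query", "sources"}
SECRET_PATTERN = re.compile(r"sk-[A-Za-z0-9-]+")

def check_handoff(state: dict) -> list[str]:
    """Return a list of contract violations for a handoff payload."""
    violations = []
    for field, value in state.items():
        if field not in ALLOWED_FIELDS:
            violations.append(f"unexpected field: {field}")
        if isinstance(value, str) and SECRET_PATTERN.search(value):
            violations.append(f"secret-like value in field: {field}")
    return violations

state = {"query": "AI safety", "sources": ["arxiv.org"],
         "api_key": "sk-secret-0x90F"}
print(check_handoff(state))
```

Run against the leaky state, this flags `api_key` twice: once as a field the writer should never see, and once because its value matches the secret pattern.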

Built for production agent systems

Validate in milliseconds

Deterministic contract rules execute without LLM calls — sub-millisecond validation.

Prevent data leakage

Allow/deny field lists and pattern detection block sensitive data from crossing agent boundaries.
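A sketch of what field scrubbing at an agent boundary can look like — the deny list and PII pattern are illustrative stand-ins, not FailSafe's configuration format:

```python
import re

# Hypothetical deny list and sensitive-value patterns.
DENY_FIELDS = {"api_key", "password"}
PII_PATTERNS = [re.compile(r"\b\d{3}-\d{2}-\d{4}\b")]  # US-SSN-shaped strings

def scrub(state: dict) -> dict:
    """Drop denied fields and redact string values matching PII patterns."""
    clean = {}
    for field, value in state.items():
        if field in DENY_FIELDS:
            continue  # field never crosses the boundary
        if isinstance(value, str) and any(p.search(value) for p in PII_PATTERNS):
            value = "[REDACTED]"
        clean[field] = value
    return clean
```

Denied fields are removed outright, while pattern hits are redacted in place, so downstream agents still receive a structurally complete state.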

Compliance policies

Pre-built policy packs for finance regulations and GDPR. Load with a single line.
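Conceptually, a policy pack is a named bundle of deny-patterns you can attach in one line. A sketch under that assumption — the pack name and patterns below are illustrative, not FailSafe's shipped packs:

```python
import re

# Hypothetical policy pack: a named bundle of deny-patterns.
POLICY_PACKS = {
    "gdpr-basic": {
        "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
        "iban": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
    },
}

def load_policy(name: str) -> dict:
    return POLICY_PACKS[name]

def violations(state: dict, policy: dict) -> list[str]:
    """Report which policy rules fire on which fields."""
    found = []
    for field, value in state.items():
        if not isinstance(value, str):
            continue
        for rule, pattern in policy.items():
            if pattern.search(value):
                found.append(f"{rule} in {field}")
    return found

policy = load_policy("gdpr-basic")  # the "single line" load
```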

LLM-as-judge

Natural-language rules evaluated by an LLM, for nuanced validation beyond what deterministic checks can express.
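The shape of an LLM-judged rule can be sketched as a yes/no question posed to any callable — the judge below is a deterministic stub standing in for a real LLM call, and the whole interface is an assumption, not FailSafe's API:

```python
def judge_rule(rule: str, payload: str, ask_llm) -> bool:
    """Evaluate a natural-language rule against a handoff payload.

    `ask_llm` is any callable answering a yes/no question; a real
    deployment would back it with an LLM (stubbed here for the sketch).
    """
    question = (
        f"Rule: {rule}\nPayload: {payload}\n"
        "Does the payload satisfy the rule? Answer yes or no."
    )
    return ask_llm(question).strip().lower().startswith("yes")

# Stub judge: rejects payloads mentioning internal hostnames.
stub = lambda q: "no" if "internal." in q else "yes"
print(judge_rule("No internal hostnames may leave the system",
                 "see internal.phtlabs.dev", stub))  # False
```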

Full audit trail

Every handoff logged to SQLite with violations, timestamps, and trace IDs.
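A minimal sketch of such an audit log using the standard-library `sqlite3` module — the schema is illustrative, not FailSafe's actual table layout:

```python
import sqlite3
import time
import uuid

# Hypothetical audit table: one row per handoff.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE handoffs (
    trace_id TEXT, src TEXT, dst TEXT, violations TEXT, ts REAL)""")

def log_handoff(src: str, dst: str, violations: list[str]) -> str:
    """Record a handoff and return its trace ID."""
    trace_id = uuid.uuid4().hex
    conn.execute(
        "INSERT INTO handoffs VALUES (?, ?, ?, ?, ?)",
        (trace_id, src, dst, ";".join(violations), time.time()),
    )
    return trace_id

tid = log_handoff("research", "writer", ["secret-like value in api_key"])
row = conn.execute(
    "SELECT src, dst, violations FROM handoffs WHERE trace_id = ?",
    (tid,)).fetchone()
print(row)  # ('research', 'writer', 'secret-like value in api_key')
```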

Warn or block modes

Choose whether violations log warnings or actively block handoffs. Configure per-contract.
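The warn/block distinction can be sketched as a single enforcement switch — the function and exception names here are hypothetical, chosen only to illustrate the two modes:

```python
import warnings

class HandoffBlocked(Exception):
    """Raised when a blocking contract rejects a handoff."""

def enforce(violations: list[str], mode: str = "warn") -> None:
    """Apply per-contract enforcement: 'warn' logs, 'block' raises."""
    if not violations:
        return  # clean handoff proceeds in either mode
    message = "handoff violations: " + "; ".join(violations)
    if mode == "block":
        raise HandoffBlocked(message)
    warnings.warn(message)

enforce(["leaked api_key"], mode="warn")    # warns, handoff proceeds
# enforce(["leaked api_key"], mode="block")  # would raise HandoffBlocked
```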