Provenance

An architectural layer specification for cross-layer attribution in AI systems.

Most AI governance work treats the audit trail as documentation produced about the system. Provenance treats it as a property of the system. The distinction is structural: in an environment where AI agents take thousands of actions per day, governance latency that scales with human review becomes the binding constraint on what an organization can do safely. The substrate that makes every action, decision, and artifact reconstructable, attributable, and reversible at the layer where it happens is the layer the AI stack has been missing.
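To make the "reconstructable, attributable, reversible" property concrete, here is a deliberately minimal sketch of what attribution as a system property can look like: an append-only, hash-chained ledger where every entry names an actor, an action, and an artifact, and commits to the entry before it. This is purely illustrative and is not drawn from the Provenance specification; the function names, field names, and in-memory ledger are hypothetical choices for this sketch.

```python
import hashlib
import json
import time

def record_action(ledger, actor, action, artifact_hash):
    """Append an attributed, tamper-evident entry to an in-memory ledger.

    Each entry commits to the previous entry's hash, so the full chain
    of actions can be reconstructed and verified after the fact.
    (Illustrative only; not the Provenance spec's data model.)
    """
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "actor": actor,              # who (human or agent) acted
        "action": action,            # what was done
        "artifact": artifact_hash,   # hash of what it produced
        "prev": prev_hash,           # binding to the prior entry
        "ts": time.time(),
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def verify(ledger):
    """Recompute every hash link; any edit to a past entry breaks the chain."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        digest = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["entry_hash"] != digest:
            return False
        prev = entry["entry_hash"]
    return True
```

The point of the sketch is the structural one made above: because attribution is recorded in-line with the action rather than reconstructed afterward by human review, verification is a constant-time-per-entry check rather than a review process whose latency scales with the number of actions.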

Provenance is a layered architecture under active development. The position piece below makes the case for the layer and engages the adjacent literature directly. The architecture document, co-authored by Nathan Kling and Ben Fisher, specifies the layer in implementable detail; it is in development and will publish in late summer 2026.

Current artifacts

Position piece · May 2026

Provenance: The Layer the AI Stack Has Been Missing

Identity, action, artifact, and decision are each being argued for separately. Their integration is the missing layer.

Read the position piece →

What's coming

Architecture document · Target: late August 2026

Provenance: A Layer Specification for Cross-Layer Attribution in AI Systems

By Nathan Kling and Ben Fisher. The layer specification: four layers, interface contracts, cryptographic binding model, regulatory mapping, and a reference implementation sketch. In active development.

Project context

Provenance is sibling work to The Lattice, a foundational paper on organizational design in the intelligence abundance era. The Lattice describes the organizational form that AI leverage requires. Provenance is the structural substrate that form depends on. The two projects publish independently and reference each other directly.

Contact

Questions, critiques, or collaboration interest: nathan@thinkingmanagement.com.