# Governance analytics (dashboard)
The Governance analytics screen helps teams show that controls are working: approval rates, how long human decisions take, and why evaluations were rejected. It complements the row-level Audit log with workspace-level aggregates.
## Where to find it

- URL: /dashboard/analytics (authenticated).
- Navigation: the Governance analytics entry in the sidebar (same session and workspace membership rules as the rest of the dashboard).
## Filters

- Workspace — same picker pattern as Audit.
- Period — last 7, 30, 90, or 365 days (based on each evaluation’s created_at).
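As a minimal sketch of how a period filter maps to a time window (the helper name and millisecond arithmetic are illustrative, not the app's actual code):

```typescript
// Illustrative only: compute the window start for a period filter.
// Evaluations are then filtered on created_at >= windowStart(days).
type PeriodDays = 7 | 30 | 90 | 365;

function windowStart(days: PeriodDays, now: Date = new Date()): Date {
  // Subtract the period length in milliseconds from "now".
  return new Date(now.getTime() - days * 24 * 60 * 60 * 1000);
}
```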
## Data scope
- The page loads up to 2,000 of the most recent evaluations in the selected workspace and time window, then loads related approval stages and audit rows in batches. Very high-volume workspaces may see a capped sample; widen the window or export from the database if you need full history for compliance.
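The batched loading of related rows can be sketched as a generic helper; the batch size and the `fetchByIds` callback are assumptions, not the page's actual implementation:

```typescript
// Illustrative only: fetch related rows (e.g. approval stages, audit
// rows) for a capped list of evaluation ids, in fixed-size batches.
async function loadInBatches<T>(
  ids: string[],
  fetchByIds: (chunk: string[]) => Promise<T[]>,
  batchSize = 200, // assumed batch size; not documented
): Promise<T[]> {
  const out: T[] = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    // Each batch issues one query for a slice of ids.
    out.push(...(await fetchByIds(ids.slice(i, i + batchSize))));
  }
  return out;
}
```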
## Metrics (definitions)
| Metric | Definition |
|---|---|
| Evaluations | Count of evaluation rows in the workspace and period (subject to the cap above). |
| Policy pass rate | Among evaluations in a resolved terminal state (auto_approved, approved, rejected, execution_complete), the percentage that ended allowed (auto_approved, approved, or execution_complete). |
| Human approval rate | Among evaluations that have at least one approval stage and reached a terminal outcome (approved, execution_complete, or rejected), the percentage that ended approved or completed (not human-rejected). |
| Awaiting human | Count with status pending_human. |
| Time to first human approval | For human-gated evaluations that ultimately reached approved or execution_complete: elapsed time from evaluation created_at to the earliest stage decided_at where the stage status is approved. |
| Time to resolution (human-gated) | For human-gated evaluations with a terminal outcome: elapsed time from evaluation created_at to the latest stage decided_at among stages that have a decision timestamp. |
| Status mix | Counts by evaluation status in the window. |
| Rejection reasons | For evaluations with status rejected, the reason is derived from audit events in precedence order: (1) an approval_rejected event → Human rejection; (2) approval_timeout_escalation_exhausted → Approval timeout (escalation exhausted); (3) approval_timeout with payload.action other than escalate_next → Approval timeout (auto-reject); (4) the first evaluation_completed event with engine_outcome no_match or reject → No matching rule or Policy denied (engine), respectively; (5) otherwise Other / unknown. See APPROVAL_TIMEOUTS.md. |
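The rejection-reason precedence can be sketched as a classifier. The AuditRow shape is an assumption; the event and field names follow the table:

```typescript
// Illustrative shape of an audit row; the real schema may differ.
interface AuditRow {
  event: string;
  payload?: { action?: string; engine_outcome?: string };
}

// Classify a rejected evaluation by checking its audit events in the
// precedence order documented in the Rejection reasons row.
function rejectionReason(audits: AuditRow[]): string {
  if (audits.some((a) => a.event === "approval_rejected")) {
    return "Human rejection";
  }
  if (audits.some((a) => a.event === "approval_timeout_escalation_exhausted")) {
    return "Approval timeout (escalation exhausted)";
  }
  if (
    audits.some(
      (a) => a.event === "approval_timeout" && a.payload?.action !== "escalate_next",
    )
  ) {
    return "Approval timeout (auto-reject)";
  }
  const completed = audits.find(
    (a) =>
      a.event === "evaluation_completed" &&
      ["no_match", "reject"].includes(a.payload?.engine_outcome ?? ""),
  );
  if (completed) {
    return completed.payload?.engine_outcome === "no_match"
      ? "No matching rule"
      : "Policy denied (engine)";
  }
  return "Other / unknown";
}
```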
Median and average times are computed in the app from these deltas (not stored aggregates).
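A minimal sketch of that in-app aggregation, assuming the deltas are elapsed milliseconds derived from created_at and decided_at:

```typescript
// Median of a list of elapsed-time deltas; null when there is no data.
function median(values: number[]): number | null {
  if (values.length === 0) return null;
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  // Odd count: middle element; even count: mean of the two middle elements.
  return sorted.length % 2 === 1 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Plain arithmetic mean of the same deltas.
function average(values: number[]): number | null {
  if (values.length === 0) return null;
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}
```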
## Related

- Audit log — per-evaluation detail and timeline: /dashboard/audit (linked from the analytics page).
- Audit data model — evaluations, approval_stages, audit_log (see core migrations and DATA_HANDLING.md).