Most automation tools optimize for speed. Fluxo optimizes for controlled speed: fast enough to scale, governed enough to trust. That is why Human Review is a first-class subsystem in Fluxo.
Human Review is not a UI checkbox. It is a system of record: persisted tasks, routing decisions, audit trails, and explicit state transitions. When a workflow pauses for review, the platform records exactly what happened so the run can be resumed safely and explained later.
What happens when Human Review runs
When the HUMAN_REVIEW node executes, Fluxo:
- resolves draft content and context paths from upstream graph data
- creates a ReviewTask with status PENDING
- stores the original content, revision number, and routing metadata
- emits review-created events for notifications and automations
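The steps above can be sketched roughly as follows. The `ReviewTask` name, the `PENDING` status, and the stored fields mirror the text; everything else (field names, the event-bus list) is illustrative, not Fluxo's actual schema:

```python
from dataclasses import dataclass, field
import time

events = []  # stand-in for an event bus; Fluxo emits review-created events here

@dataclass
class ReviewTask:
    run_id: str
    content: str                                   # original draft content
    routing: dict = field(default_factory=dict)    # routing metadata
    revision: int = 1
    status: str = "PENDING"
    created_at: float = field(default_factory=time.time)

def create_review_task(run_id: str, content: str, routing: dict) -> ReviewTask:
    # Persist the task with status PENDING, then emit a review-created event
    # so notifications and automations can react.
    task = ReviewTask(run_id=run_id, content=content, routing=routing)
    events.append(("review.created", task.run_id))
    return task

task = create_review_task("run-42", "Draft blog post", {"destination": "cms"})
```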
Execution waits for a decision and resumes only when a valid transition is recorded. That single design choice is what makes review reliable: you can always answer who approved what, when, and why.
Governance assessment pipeline
Routing is only useful if it is grounded in data. Before routing, Fluxo computes a governance assessment that combines multiple signals:
- deterministic rule checks (PII, links, numeric claims, forbidden terms, length bounds)
- historical trust metrics (first-pass volume and approval rate)
- optional model self-evaluation
- risk flag severity scoring to derive confidence
This produces a concrete confidence score, risk flags, and reasons. Those fields are persisted on the task for auditability and later analytics.
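A minimal sketch of how such an assessment might combine signals into a confidence score. The rule checks and penalty weights here are assumptions for illustration; Fluxo's real rules and weighting are more extensive:

```python
import re

def assess(draft: str, trust: dict) -> dict:
    """Combine deterministic rule checks with trust history to derive confidence.

    `trust` holds historical metrics, e.g. {"approval_rate": 0.9}.
    All thresholds and weights below are illustrative assumptions.
    """
    flags = []
    if re.search(r"\b\d{3}-\d{2}-\d{4}\b", draft):   # naive SSN-shaped PII check
        flags.append(("pii", "high"))
    if "http://" in draft:                           # insecure-link rule
        flags.append(("insecure_link", "medium"))
    if len(draft.split()) > 2000:                    # length bound
        flags.append(("length", "low"))

    # Start from historical approval rate, subtract severity-weighted penalties.
    penalty = {"high": 0.5, "medium": 0.2, "low": 0.05}
    confidence = trust.get("approval_rate", 0.5)
    for _, severity in flags:
        confidence -= penalty[severity]
    confidence = max(0.0, min(1.0, confidence))
    return {"confidence": confidence, "flags": flags}
```

The assessment dict is what gets persisted on the task, so later analytics can explain why a given draft scored the way it did.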
Policy routing decisions
Routing policy can choose:
- STANDARD_REVIEW
- AUTO_APPROVED
- ESCALATED
- BLOCKED
Auto-approval is guarded by strict checks: confidence threshold, word-count bounds, no high-severity flags, minimum trust history, and downstream destination risk constraints. Fluxo only allows low-risk destination classes for auto-approval paths.
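The guard logic could look like this. The four decision names come from the text; the specific thresholds (confidence 0.85, word-count bounds, 20-task trust history) are placeholder assumptions:

```python
def route(assessment: dict, trust: dict, destination_risk: str,
          word_count: int) -> str:
    """Choose a routing decision from a governance assessment (sketch)."""
    severities = {sev for _, sev in assessment["flags"]}

    # High-severity flags never auto-approve: block risky destinations,
    # escalate the rest to a senior reviewer.
    if "high" in severities:
        return "BLOCKED" if destination_risk == "high" else "ESCALATED"

    # Auto-approval requires every guard to pass at once.
    if (assessment["confidence"] >= 0.85
            and 50 <= word_count <= 1500
            and trust.get("first_pass_count", 0) >= 20
            and destination_risk == "low"):
        return "AUTO_APPROVED"

    return "STANDARD_REVIEW"
```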
Review actions and state transitions
Review actions are implemented through explicit service functions with access checks:
- approve
- reject
- request revision
Transitions write timestamps, actor identity, optional edited content, and audit logs. They also emit webhook dispatch and completion events so external systems remain synchronized.
Revision regeneration loop
For revision requests, Fluxo can regenerate from the nearest upstream AI node. Fluxo resolves the regeneration target from graph structure, applies revision feedback, executes the AI node again, and creates a new review cycle with an incremented revision number.
I cap revision loops to protect runtime health and avoid an infinite regenerate/review oscillation.
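The bounded loop can be sketched as below. The cap value and dict shape are assumptions; `regenerate` stands in for re-executing the upstream AI node with the revision feedback applied:

```python
MAX_REVISIONS = 3  # assumed cap; protects against endless oscillation

def revision_cycle(task: dict, regenerate) -> dict:
    """Open a new review cycle with regenerated content, bounded by a cap."""
    if task["revision"] >= MAX_REVISIONS:
        raise RuntimeError("revision limit reached; requires a human decision")
    # Re-run the nearest upstream AI node with the reviewer's feedback.
    new_content = regenerate(task["content"], task.get("feedback", ""))
    return {**task,
            "content": new_content,
            "revision": task["revision"] + 1,  # incremented revision number
            "status": "PENDING"}               # back into review
```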
External review links
Fluxo supports public review sessions for stakeholders without full Fluxo accounts:
- signed, expiring session tokens
- per-link permissions (approve/reject/request revision/edit content)
- collector modes (none, name, name+email)
- IP/user-agent logging and rate limits
This lets agencies involve clients while keeping proper audit trails.
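The signed, expiring token mechanism can be sketched with standard HMAC primitives. The key name, payload layout, and TTL here are illustrative assumptions, not Fluxo's wire format:

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # assumption: per-deployment signing key

def make_token(session_id: str, ttl: int = 3600) -> str:
    # Payload carries the session id and an absolute expiry timestamp;
    # the HMAC signature makes both tamper-evident.
    expires = int(time.time()) + ttl
    payload = f"{session_id}.{expires}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str) -> bool:
    payload, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    # Constant-time comparison, then expiry check.
    if not hmac.compare_digest(sig, expected):
        return False
    return int(payload.split(".")[1]) >= time.time()
```

Per-link permissions and collector mode would ride along in the signed payload, so a tampered link fails verification rather than silently gaining rights.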
Notification and delivery model
Review events can notify users over Slack, Discord, Telegram, or email. Deliveries are persisted with status and provider identifiers. Retries and digest/reminder flows run through scheduled background functions.
This lets teams operate review SLAs as an actual process, not a best-effort inbox.
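A persisted delivery attempt might look like this. The retry budget and status names are assumptions; `send` stands in for a provider call (Slack, Discord, Telegram, email) that returns a provider identifier or raises:

```python
MAX_ATTEMPTS = 5  # assumed retry budget before giving up

def attempt_delivery(record: dict, send) -> dict:
    """Record delivery state so scheduled retries can pick up failures."""
    try:
        record["provider_id"] = send(record["channel"], record["payload"])
        record["status"] = "DELIVERED"
    except Exception:
        record["attempts"] = record.get("attempts", 0) + 1
        record["status"] = ("FAILED" if record["attempts"] >= MAX_ATTEMPTS
                            else "RETRY_SCHEDULED")
    return record
```

Because every attempt is persisted with its status, a background function can sweep `RETRY_SCHEDULED` records on a schedule instead of relying on in-memory retry state.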
Why this matters
Many automation products optimize for speed alone. I optimized Fluxo for controlled speed, and Human Review is what makes that tradeoff credible: every pause, decision, and resume is recorded, explainable, and enforceable.