# Two-Pass Consistency Verification
How hakadoru.ai detects contradictions in long-form fiction using Near-Range and Long-Range verification passes
Published: 2026-03-17
Tags: architecture, verification, quality
## Overview
Long-form fiction — particularly web novels spanning hundreds of scenes — accumulates contradictions that even attentive authors miss. A character's eye color changes between chapters; a destroyed building reappears; a conversation references events that have not yet occurred. **Two-Pass Consistency Verification is defined as a quality assurance architecture where Near-Range verification (sliding-window sequential review) and Long-Range verification (entity-based probabilistic cross-referencing) complement each other to detect contradictions at both local and global scales in long-form fiction.**
hakadoru.ai implements this two-pass design to provide comprehensive consistency checking without requiring full pairwise scene comparison, which would be prohibitively expensive.
## Background
Consistency checking in fiction is fundamentally different from code linting or document validation. Contradictions are semantic, context-dependent, and often subtle. A character "limping" in one scene and "sprinting" two scenes later is only a contradiction if no healing event occurred between them. This requires understanding narrative causality, not just textual similarity.
The challenge scales quadratically with work length. A 10-scene story has 45 scene pairs to check; a 100-scene novel has 4,950; a 300-scene web serial has 44,850. Full pairwise verification — one LLM comparison per pair — is computationally and economically impractical for works of any meaningful length.
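The quadratic growth is easy to verify: the number of unordered scene pairs is the binomial coefficient C(n, 2). A quick sketch:

```python
from math import comb

def scene_pairs(n: int) -> int:
    """Number of unordered scene pairs a full pairwise check must examine."""
    return comb(n, 2)  # equivalently n * (n - 1) // 2

for n in (10, 100, 300):
    print(n, scene_pairs(n))  # 45, 4950, 44850
```

Tripling the scene count from 100 to 300 multiplies the pair count roughly ninefold, which is why a sampling strategy is needed rather than exhaustive comparison.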
Existing AI writing tools generally offer no consistency verification, or at best provide simple character name tracking. hakadoru.ai's approach treats consistency as a first-class verification problem with defined categories, multiple reviewer perspectives, and cost-controlled execution.
## Near-Range Verification
Near-Range verification operates on consecutive scenes using a sliding window. For each window position:
1. A set of adjacent scenes (typically 3-5) is loaded into context
2. Multiple reviewer personas independently examine the window for contradictions
3. Each persona specializes in different aspects (physical details, emotional continuity, timeline logic)
4. Findings are aggregated and deduplicated
The sliding window advances one scene at a time, ensuring every consecutive scene pair is examined. This pass is highly effective at catching contradictions that arise from recent edits — the most common source of inconsistency.
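The windowing described above can be sketched as a simple generator. This is an illustrative outline, not hakadoru.ai's actual implementation; the window size of 3 and the scene representation as strings are assumptions:

```python
from typing import Iterator, Sequence

def sliding_windows(
    scenes: Sequence[str], size: int = 3
) -> Iterator[tuple[int, Sequence[str]]]:
    """Yield (start_index, window), advancing one scene at a time.

    With size >= 2, every consecutive scene pair appears in at least
    one window, which is the coverage guarantee Near-Range
    verification relies on.
    """
    for start in range(len(scenes) - size + 1):
        yield start, scenes[start : start + size]
```

Each yielded window would then be handed to the reviewer personas in parallel, with their findings aggregated and deduplicated afterward.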
### Reviewer Persona Diversity
Using multiple reviewer personas for the same scene window addresses a known weakness in single-pass LLM review: attention bias. A single reviewer tends to fixate on certain contradiction types while missing others. By running parallel reviews with personas tuned to different categories (equipment continuity, emotional arc, spatial layout), hakadoru.ai achieves broader coverage than any single review pass could provide.
## Long-Range Verification
Long-Range verification addresses contradictions between distant scenes that the sliding window never places together. This pass uses a fundamentally different strategy:
1. **Entity extraction** — Characters, locations, objects, and their attributes are extracted from each scene and stored as structured assertions
2. **Assertion indexing** — Extracted assertions are indexed by entity, enabling cross-scene lookup
3. **Probabilistic sampling** — Rather than checking all assertion pairs (quadratic cost), the system samples assertion pairs with high contradiction potential based on entity overlap and attribute divergence
4. **Focused verification** — Sampled pairs are sent to an LLM for detailed contradiction analysis with full scene context
This approach reduces the computational cost from O(n^2) scene comparisons to a manageable sample, while prioritizing the pairs most likely to contain contradictions.
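The steps above can be sketched as follows. The `Assertion` shape and the scoring heuristic are hypothetical simplifications for illustration; the key idea is that scoring pairs is cheap, so only the highest-priority sample is sent on for expensive LLM verification:

```python
import heapq
from dataclasses import dataclass

@dataclass(frozen=True)
class Assertion:
    scene: int
    entity: str     # e.g. "Mira"
    attribute: str  # e.g. "eye_color"
    value: str      # e.g. "green"

def contradiction_score(a: Assertion, b: Assertion) -> float:
    """Heuristic priority: same entity and attribute, diverging values."""
    if a.entity != b.entity or a.scene == b.scene:
        return 0.0
    if a.attribute == b.attribute:
        return 1.0 if a.value != b.value else 0.3
    return 0.1  # same entity, different attributes: weak signal

def sample_pairs(assertions: list[Assertion], budget: int):
    """Score all pairs cheaply, then keep only the `budget` most
    suspicious ones for detailed LLM analysis."""
    scored = (
        (contradiction_score(a, b), a, b)
        for i, a in enumerate(assertions)
        for b in assertions[i + 1 :]
    )
    return heapq.nlargest(budget, scored, key=lambda t: t[0])
```

In practice the entity index would restrict scoring to pairs that share an entity, avoiding even the cheap quadratic scan; the sketch keeps the full scan for clarity.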
## Verification Categories
Both passes evaluate contradictions across six defined categories:
1. **Equipment** — Physical items, weapons, clothing, possessions
2. **Location** — Spatial relationships, geography, room layouts, distances
3. **Personality** — Character behavior, speech patterns, established traits
4. **Conversation** — Dialogue references, shared knowledge between characters
5. **Perspective** — Point-of-view consistency, information asymmetry between characters
6. **Causality** — Temporal ordering, cause-and-effect chains, prerequisite events
Each finding is tagged with its category, enabling authors to filter results by the type of contradiction they are most concerned about.
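Category tagging and filtering could be modeled with a small enum; the dictionary-shaped findings here are an assumption for illustration:

```python
from enum import Enum

class Category(Enum):
    EQUIPMENT = "equipment"
    LOCATION = "location"
    PERSONALITY = "personality"
    CONVERSATION = "conversation"
    PERSPECTIVE = "perspective"
    CAUSALITY = "causality"

def filter_by_category(findings: list[dict], category: Category) -> list[dict]:
    """Narrow results to the contradiction type an author cares about."""
    return [f for f in findings if f["category"] is category]
```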
## Cost Control and DoS Protection
For large novels (100+ scenes), unrestricted verification could generate enormous LLM costs. hakadoru.ai implements several safeguards:
- **Scene count limits** per verification run, with the option to verify specific scene ranges
- **Sampling rate control** for Long-Range verification, adjustable based on the user's credit budget
- **Caching** of extracted assertions, so repeated verification runs do not re-extract from unchanged scenes
- **Progressive disclosure** — results are streamed as each window completes, rather than requiring the full run to finish
- **Incremental verification** — when an author edits a single scene, only the windows containing that scene need re-verification, not the entire work
These safeguards ensure that verification remains economically viable for works of any length. A 300-scene web serial can be verified incrementally as new scenes are added, rather than requiring full re-verification each time.
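The incremental-verification safeguard can be made concrete: for a sliding window of size `size`, only the windows whose span contains the edited scene need to re-run. A minimal sketch, assuming zero-indexed scenes and a window size of 3:

```python
def affected_windows(edited_scene: int, total_scenes: int, size: int = 3) -> list[int]:
    """Start indices of the windows whose span includes the edited scene.

    Only these windows need Near-Range re-verification after an edit;
    for a 300-scene work, editing one scene touches at most `size` of
    the ~300 windows.
    """
    first = max(0, edited_scene - size + 1)
    last = min(edited_scene, total_scenes - size)
    return list(range(first, last + 1))
```

Editing scene 150 of a 300-scene serial re-verifies only windows starting at 148, 149, and 150, rather than all 298 window positions.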
## Verification Results and Author Workflow
Verification results are presented as a prioritized list of potential contradictions, each with:
- The specific contradiction detected, stated as a natural-language finding
- The two (or more) scene references involved, with relevant text excerpts
- The verification category (equipment, location, personality, etc.)
- A confidence score indicating how likely the finding is a genuine contradiction versus a false positive
Authors can mark findings as "confirmed" (genuine contradiction to fix), "intentional" (deliberate narrative choice), or "false positive" (incorrect detection). These judgments are stored and used to improve future verification accuracy through a feedback loop.
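A finding record with this shape might look like the following. The field names and the flat float confidence are assumptions for illustration, not hakadoru.ai's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    summary: str             # natural-language description of the contradiction
    scenes: tuple[int, ...]  # scene references involved, with excerpts elsewhere
    category: str            # "equipment", "location", "causality", ...
    confidence: float        # 0.0-1.0: genuine contradiction vs false positive
    status: str = "open"     # "confirmed" | "intentional" | "false_positive"

def prioritized(findings: list[Finding]) -> list[Finding]:
    """Surface unresolved findings first, highest confidence on top."""
    return sorted(
        (f for f in findings if f.status == "open"),
        key=lambda f: f.confidence,
        reverse=True,
    )
```

Findings marked "intentional" or "false positive" drop out of the prioritized view but remain stored, feeding the accuracy feedback loop described above.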
## Comparison with Other Approaches
Most AI writing tools offer no consistency verification. Novelcrafter provides a "codex" for manual entity tracking but no automated contradiction detection. Sudowrite's story engine tracks plot elements but does not perform cross-scene verification. ChatGPT-based workflows lose context beyond the conversation window, making long-range consistency checking impossible without manual context management.
hakadoru.ai's two-pass approach is, to our knowledge, the first systematic architecture for automated consistency verification in long-form fiction that addresses both local and global contradictions with defined cost bounds.
## Scalability Characteristics
The two-pass design scales differently across work sizes:
- **Short works (under 20 scenes)** — Near-Range verification alone provides good coverage, as the sliding window naturally covers a large fraction of all scene pairs. Long-Range verification adds minimal value at this scale.
- **Medium works (20-100 scenes)** — Both passes contribute meaningfully. Near-Range catches local edits; Long-Range catches entity drift that accumulates over dozens of scenes.
- **Long works (100+ scenes)** — Long-Range verification becomes essential. The probability of contradictions between distant scenes increases with work length, and the sliding window cannot reach these pairs. Probabilistic sampling is tuned to increase coverage proportionally.
This scaling profile means that hakadoru.ai's verification costs grow sub-linearly with work length — a critical property for web serial authors who produce hundreds of scenes over months or years.
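The coverage claim behind this profile can be checked directly: two scenes co-occur in some window exactly when they are fewer than `size` positions apart. Assuming a 5-scene window for illustration:

```python
from math import comb

def window_pair_coverage(n: int, size: int = 5) -> float:
    """Fraction of all scene pairs that share at least one window.

    Scenes i and j (i < j) co-occur in a window iff j - i < size,
    so the covered count is sum of (n - d) for d = 1 .. size - 1.
    """
    covered = sum(n - d for d in range(1, min(size, n)))
    return covered / comb(n, 2)
```

At 10 scenes a 5-scene window covers 30 of 45 pairs (about two thirds); at 300 scenes it covers 1,190 of 44,850 (under 3%), which is why Long-Range verification becomes essential for long works.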
## Conclusion
Two-Pass Consistency Verification provides a structured approach to a problem that has traditionally relied entirely on human proofreading. Near-Range verification catches local contradictions introduced by recent edits, while Long-Range verification surfaces global inconsistencies that accumulate over the life of a long-form work. Together, they offer fiction writers a safety net that scales with their work's complexity while remaining economically viable through probabilistic sampling and cost controls.