

AGENTS.md

RULE 0 - THE FUNDAMENTAL OVERRIDE PREROGATIVE

If I tell you to do something, even if it goes against what follows below, YOU MUST LISTEN TO ME. I AM IN CHARGE, NOT YOU.


RULE NUMBER 1: NO FILE DELETION

YOU ARE NEVER ALLOWED TO DELETE A FILE WITHOUT EXPRESS PERMISSION. Even a new file that you yourself created, such as a test code file. You have a horrible track record of deleting critically important files or otherwise throwing away tons of expensive work. As a result, you have permanently lost any and all rights to determine that a file or folder should be deleted.

YOU MUST ALWAYS ASK AND RECEIVE CLEAR, WRITTEN PERMISSION BEFORE EVER DELETING A FILE OR FOLDER OF ANY KIND.


Version Control: jj-First (CRITICAL)

ALWAYS prefer jj (Jujutsu) over git for all VCS operations. This is a colocated repo with both .jj/ and .git/. When instructed to use git by anything — even later in this file — use the best jj replacement commands instead. Only fall back to raw git for things jj cannot do (hooks, LFS, submodules, gh CLI interop).

See ~/.claude/rules/jj-vcs/ for the full command reference, translation table, revsets, patterns, and recovery recipes.


Irreversible Git & Filesystem Actions — DO NOT EVER BREAK GLASS

Note: Treat destructive commands as break-glass. If there's any doubt, stop and ask.

  1. Absolutely forbidden commands: git reset --hard, git clean -fd, rm -rf, or any command that can delete or overwrite code/data must never be run unless the user explicitly provides the exact command and states, in the same message, that they understand and want the irreversible consequences.
  2. No guessing: If there is any uncertainty about what a command might delete or overwrite, stop immediately and ask the user for specific approval. "I think it's safe" is never acceptable.
  3. Safer alternatives first: When cleanup or rollbacks are needed, request permission to use non-destructive options (git status, git diff, git stash, copying to backups) before ever considering a destructive command.
  4. Mandatory explicit plan: Even after explicit user authorization, restate the command verbatim, list exactly what will be affected, and wait for a confirmation that your understanding is correct. Only then may you execute it—if anything remains ambiguous, refuse and escalate.
  5. Document the confirmation: When running any approved destructive command, record (in the session notes / final response) the exact user text that authorized it, the command actually run, and the execution time. If that record is absent, the operation did not happen.

Toolchain: Rust & Cargo

We only use Cargo in this project, NEVER any other package manager.

  • Edition/toolchain: Follow rust-toolchain.toml (if present). Do not assume stable vs nightly.
  • Dependencies: Explicit versions for stability; keep the set minimal.
  • Configuration: Cargo.toml only
  • Unsafe code: Forbidden (#![forbid(unsafe_code)])

When writing Rust code, reference RUST_CLI_TOOLS_BEST_PRACTICES.md

Release Profile

Use the release profile defined in Cargo.toml. If you need to change it, justify the performance/size tradeoff and how it impacts determinism and cancellation behavior.


Code Editing Discipline

No Script-Based Changes

NEVER run a script that processes/changes code files in this repo. Brittle regex-based transformations create far more problems than they solve.

  • Always make code changes manually, even when there are many instances
  • For many simple changes: use parallel subagents
  • For subtle/complex changes: do them methodically yourself

No File Proliferation

If you want to change something or add a feature, revise existing code files in place.

NEVER create variations like:

  • mainV2.rs
  • main_improved.rs
  • main_enhanced.rs

New files are reserved for genuinely new functionality that makes zero sense to include in any existing file. The bar for creating new files is incredibly high.


Backwards Compatibility

We do not care about backwards compatibility—we're in early development with no users. We want to do things the RIGHT way with NO TECH DEBT.

  • Never create "compatibility shims"
  • Never create wrapper functions for deprecated APIs
  • Just fix the code directly

Compiler Checks (CRITICAL)

After any substantive code changes, you MUST verify no errors were introduced:

# Check for compiler errors and warnings
cargo check --all-targets

# Check for clippy lints (pedantic + nursery are enabled)
cargo clippy --all-targets -- -D warnings

# Verify formatting
cargo fmt --check

If you see errors, carefully understand and resolve each issue. Read sufficient context to fix them the RIGHT way.


Testing

Unit & Property Tests

# Run all tests
cargo test

# Run with output
cargo test -- --nocapture

When adding or changing primitives, add tests that assert the core invariants:

  • no task leaks
  • no obligation leaks
  • losers are drained after races
  • region close implies quiescence

Prefer deterministic lab-runtime tests for concurrency-sensitive behavior.


Beads (br) — Dependency-Aware Issue Tracking

Beads provides a lightweight, dependency-aware issue database and CLI (br / beads_rust) for selecting "ready work," setting priorities, and tracking status. It complements Liquid Mail's shared log for progress, decisions, and cross-session context.

Note: br is non-invasive—it never executes git commands directly. You must run git commands manually after br sync --flush-only.

Conventions

  • Single source of truth: Beads for task status/priority/dependencies; Liquid Mail for conversation/decisions
  • Shared identifiers: Include the Beads issue ID in posts (e.g., [br-123] Topic validation rules)
  • Decisions before action: Post DECISION: messages before risky changes, not after

Typical Agent Flow

  1. Pick ready work (Beads):

    br ready --json  # Choose highest priority, no blockers
    
  2. Check context (Liquid Mail):

    liquid-mail notify                # See what changed since last session
    liquid-mail query "br-123"        # Find prior discussion on this issue
    
  3. Work and log progress:

    liquid-mail post --topic <workstream> "[br-123] START: <description>"
    liquid-mail post "[br-123] FINDING: <what you discovered>"
    liquid-mail post --decision "[br-123] DECISION: <what you decided and why>"
    
  4. Complete (Beads is authority):

    br close br-123 --reason "Completed"
    liquid-mail post "[br-123] Completed: <summary with commit ref>"
    

Mapping Cheat Sheet

Concept | In Beads | In Liquid Mail
Work item | br-### (issue ID) | Include [br-###] in posts
Workstream | (n/a) | --topic auth-system
Subject prefix | (n/a) | [br-###] ...
Commit message | Include br-### | (n/a)
Status | br update --status | Post progress messages
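The [br-###] prefix convention can be wrapped in a tiny helper. Note that `post_subject` below is a hypothetical shell function for illustration, not part of the liquid-mail CLI:

```shell
# Hypothetical helper (not a liquid-mail command): build a post subject
# that carries the Beads issue ID prefix per the cheat sheet above.
post_subject() {
  local issue_id="$1"; shift
  printf '[%s] %s' "$issue_id" "$*"
}

post_subject br-123 "Topic validation rules"
# prints: [br-123] Topic validation rules
```

An agent could then post with `liquid-mail post --topic auth-system "$(post_subject br-123 'START: validation rules')"`.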

bv — Graph-Aware Triage Engine

bv is a graph-aware triage engine for Beads projects (.beads/beads.jsonl). It computes PageRank, betweenness, critical path, cycles, HITS, eigenvector, and k-core metrics deterministically.

Scope boundary: bv handles what to work on (triage, priority, planning). For agent-to-agent coordination (progress logging, decisions, cross-session context), use Liquid Mail.

CRITICAL: Use ONLY --robot-* flags. Bare bv launches an interactive TUI that blocks your session.

The Workflow: Start With Triage

bv --robot-triage is your single entry point. It returns:

  • quick_ref: at-a-glance counts + top 3 picks
  • recommendations: ranked actionable items with scores, reasons, unblock info
  • quick_wins: low-effort high-impact items
  • blockers_to_clear: items that unblock the most downstream work
  • project_health: status/type/priority distributions, graph metrics
  • commands: copy-paste shell commands for next steps
bv --robot-triage        # THE MEGA-COMMAND: start here
bv --robot-next          # Minimal: just the single top pick + claim command

Command Reference

Planning:

Command Returns
--robot-plan Parallel execution tracks with unblocks lists
--robot-priority Priority misalignment detection with confidence

Graph Analysis:

Command Returns
--robot-insights Full metrics: PageRank, betweenness, HITS, eigenvector, critical path, cycles, k-core, articulation points, slack
--robot-label-health Per-label health: health_level, velocity_score, staleness, blocked_count
--robot-label-flow Cross-label dependency: flow_matrix, dependencies, bottleneck_labels
--robot-label-attention [--attention-limit=N] Attention-ranked labels

History & Change Tracking:

Command Returns
--robot-history Bead-to-commit correlations
--robot-diff --diff-since <ref> Changes since ref: new/closed/modified issues, cycles

Other:

Command Returns
--robot-burndown <sprint> Sprint burndown, scope changes, at-risk items
--robot-forecast <id|all> ETA predictions with dependency-aware scheduling
--robot-alerts Stale issues, blocking cascades, priority mismatches
--robot-suggest Hygiene: duplicates, missing deps, label suggestions
--robot-graph [--graph-format=json|dot|mermaid] Dependency graph export
--export-graph <file.html> Interactive HTML visualization

Scoping & Filtering

bv --robot-plan --label backend              # Scope to label's subgraph
bv --robot-insights --as-of HEAD~30          # Historical point-in-time
bv --recipe actionable --robot-plan          # Pre-filter: ready to work
bv --recipe high-impact --robot-triage       # Pre-filter: top PageRank
bv --robot-triage --robot-triage-by-track    # Group by parallel work streams
bv --robot-triage --robot-triage-by-label    # Group by domain

Understanding Robot Output

All robot JSON includes:

  • data_hash — Fingerprint of source beads.jsonl
  • status — Per-metric state: computed|approx|timeout|skipped + elapsed ms
  • as_of / as_of_commit — Present when using --as-of

Two-phase analysis:

  • Phase 1 (instant): degree, topo sort, density
  • Phase 2 (async, 500ms timeout): PageRank, betweenness, HITS, eigenvector, cycles

jq Quick Reference

bv --robot-triage | jq '.quick_ref'                        # At-a-glance summary
bv --robot-triage | jq '.recommendations[0]'               # Top recommendation
bv --robot-plan | jq '.plan.summary.highest_impact'        # Best unblock target
bv --robot-insights | jq '.status'                         # Check metric readiness
bv --robot-insights | jq '.Cycles'                         # Circular deps (must fix!)

UBS — Ultimate Bug Scanner

Golden Rule: ubs <changed-files> before every commit. Exit 0 = safe. Exit >0 = fix & re-run.

Commands

ubs file.rs file2.rs                    # Specific files (< 1s) — USE THIS
ubs $(jj diff --name-only)              # Changed files — before commit
ubs --only=rust,toml src/               # Language filter (3-5x faster)
ubs --ci --fail-on-warning .            # CI mode — before PR
ubs .                                   # Whole project (ignores target/, Cargo.lock)

Output Format

⚠️  Category (N errors)
    file.rs:42:5  Issue description
    💡 Suggested fix
Exit code: 1

Parse: file:line:col → location | 💡 → how to fix | Exit 0/1 → pass/fail
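The location token splits cleanly on colons; a minimal bash sketch (the sample finding string is illustrative):

```shell
# Split a UBS finding location ("file:line:col") into its parts.
loc="file.rs:42:5"
IFS=':' read -r file line col <<< "$loc"
echo "$file $line $col"
# prints: file.rs 42 5
```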

Fix Workflow

  1. Read finding → category + fix suggestion
  2. Navigate file:line:col → view context
  3. Verify real issue (not false positive)
  4. Fix root cause (not symptom)
  5. Re-run ubs <file> → exit 0
  6. Commit

Bug Severity

  • Critical (always fix): Memory safety, use-after-free, data races, SQL injection
  • Important (production): Unwrap panics, resource leaks, overflow checks
  • Contextual (judgment): TODO/FIXME, println! debugging

ast-grep vs ripgrep

Use ast-grep when structure matters. It parses code and matches AST nodes, ignoring comments/strings, and can safely rewrite code.

  • Refactors/codemods: rename APIs, change import forms
  • Policy checks: enforce patterns across a repo
  • Editor/automation: LSP mode, --json output

Use ripgrep when text is enough. Fastest way to grep literals/regex.

  • Recon: find strings, TODOs, log lines, config values
  • Pre-filter: narrow candidate files before ast-grep

Rule of Thumb

  • Need correctness or applying changes → ast-grep
  • Need raw speed or hunting text → rg
  • Often combine: rg to shortlist files, then ast-grep to match/modify

Rust Examples

# Find structured code (ignores comments)
ast-grep run -l Rust -p 'fn $NAME($$$ARGS) -> $RET { $$$BODY }'

# Find all unwrap() calls
ast-grep run -l Rust -p '$EXPR.unwrap()'

# Quick textual hunt
rg -n 'println!' -t rust

# Combine speed + precision
rg -l -t rust 'unwrap\(' | xargs ast-grep run -l Rust -p '$X.unwrap()' --json

Use mcp__morph-mcp__warp_grep for exploratory "how does X work?" questions. An AI agent expands your query, greps the codebase, reads relevant files, and returns precise line ranges with full context.

Use ripgrep for targeted searches. When you know exactly what you're looking for.

Use ast-grep for structural patterns. When you need AST precision for matching/rewriting.

When to Use What

Scenario | Tool | Why
"How is pattern matching implemented?" | warp_grep | Exploratory; don't know where to start
"Where is the quick reject filter?" | warp_grep | Need to understand architecture
"Find all uses of Regex::new" | ripgrep | Targeted literal search
"Find files with println!" | ripgrep | Simple pattern
"Replace all unwrap() with expect()" | ast-grep | Structural refactor

warp_grep Usage

mcp__morph-mcp__warp_grep(
  repoPath: "/path/to/dcg",
  query: "How does the safe pattern whitelist work?"
)

Returns structured results with file paths, line ranges, and extracted code snippets.

Anti-Patterns

  • Don't use warp_grep to find a specific function name → use ripgrep
  • Don't use ripgrep to understand "how does X work" → wastes time with manual reads
  • Don't use ripgrep for codemods → risks collateral edits

Beads Workflow Integration

This project uses beads_viewer for issue tracking. Issues are stored in .beads/ and tracked in version control.

Note: br is non-invasive—it never executes VCS commands directly. You must commit manually after br sync --flush-only.

Essential Commands

# View issues (launches TUI - avoid in automated sessions)
bv

# CLI commands for agents (use these instead)
br ready              # Show issues ready to work (no blockers)
br list --status=open # All open issues
br show <id>          # Full issue details with dependencies
br create --title="..." --type=task --priority=2
br update <id> --status=in_progress
br close <id> --reason="Completed"
br close <id1> <id2>  # Close multiple issues at once
br sync --flush-only  # Export to JSONL (then: jj commit -m "Update beads")

Workflow Pattern

  1. Start: Run br ready to find actionable work
  2. Claim: Use br update <id> --status=in_progress
  3. Work: Implement the task
  4. Complete: Use br close <id>
  5. Sync: Run br sync --flush-only, then jj commit -m "Update beads" (jj auto-tracks .beads/)

Key Concepts

  • Dependencies: Issues can block other issues. br ready shows only unblocked work.
  • Priority: P0=critical, P1=high, P2=medium, P3=low, P4=backlog (use numbers, not words)
  • Types: task, bug, feature, epic, question, docs
  • Blocking: br dep add <issue> <depends-on> to add dependencies

Session Protocol

Before ending any session, run this checklist (solo/lead only — workers skip VCS):

jj status                        # Check what changed
br sync --flush-only             # Export beads to JSONL
jj commit -m "..."               # Commit code and beads (jj auto-tracks all changes)
jj bookmark set <name> -r @-     # Point bookmark at committed work
jj git push -b <name>            # Push to remote

Best Practices

  • Check br ready at session start to find available work
  • Update status as you work (in_progress → closed)
  • Create new issues with br create when you discover tasks
  • Use descriptive titles and set appropriate priority/type
  • Always run br sync --flush-only then commit before ending session (jj auto-tracks .beads/)

Landing the Plane (Session Completion)

When ending a work session, you MUST complete ALL steps below. Work is NOT complete until push succeeds.

WHO RUNS THIS: Solo agents run it themselves. In multi-agent sessions, ONLY the team lead runs this. Workers skip VCS entirely.

MANDATORY WORKFLOW:

  1. File issues for remaining work - Create issues for anything that needs follow-up
  2. Run quality gates (if code changed) - Tests, linters, builds
  3. Update issue status - Close finished work, update in-progress items
  4. PUSH TO REMOTE - This is MANDATORY:
    jj git fetch                       # Get latest remote state
    jj rebase -d trunk()               # Rebase onto latest trunk if needed
    br sync --flush-only               # Export beads to JSONL
    jj commit -m "Update beads"        # Commit (jj auto-tracks .beads/ changes)
    jj bookmark set <name> -r @-       # Point bookmark at committed work
    jj git push -b <name>              # Push to remote
    jj log -r '<name>'                 # Verify bookmark position
    
  5. Clean up - Abandon empty orphan changes if any (jj abandon <rev>)
  6. Verify - All changes committed AND pushed
  7. Hand off - Provide context for next session

CRITICAL RULES:

  • Work is NOT complete until jj git push succeeds
  • NEVER stop before pushing - that leaves work stranded locally
  • NEVER say "ready to push when you are" - YOU must push
  • If push fails, resolve and retry until it succeeds

cass — Prior Agent Session Search

cass indexes prior agent conversations (Claude Code, Codex, Cursor, Gemini, ChatGPT, etc.) so we can reuse solved problems.

Rules: Never run bare cass (TUI). Always use --robot or --json.

Examples

cass health
cass search "async runtime" --robot --limit 5
cass view /path/to/session.jsonl -n 42 --json
cass expand /path/to/session.jsonl -n 42 -C 3 --json
cass capabilities --json
cass robot-docs guide

Tips

  • Use --fields minimal for lean output
  • Filter by agent with --agent
  • Use --days N to limit to recent history

stdout is data-only, stderr is diagnostics; exit code 0 means success.
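That contract (data on stdout, diagnostics on stderr, exit 0 on success) supports a generic capture pattern. `fake_cli` below is a stand-in for a real cass invocation, not actual cass output:

```shell
# Stand-in for a cass call: data goes to stdout, diagnostics to stderr.
fake_cli() {
  echo '{"hits":1}'                 # data (stdout)
  echo 'searched 3 indexes' >&2     # diagnostics (stderr)
  return 0
}

diag_log=$(mktemp)
data=$(fake_cli 2>"$diag_log")      # capture data only; diagnostics go to the log
status=$?
echo "$data"
# prints: {"hits":1}
```

The same split works for any of the robot-mode CLIs in this file that follow the stdout/stderr convention.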

Treat cass as a way to avoid re-solving problems other agents already handled.


Note for Codex/GPT-5.2

You constantly bother me and stop working with concerned questions that look similar to this:

Unexpected changes (need guidance)

- Working tree still shows edits I did not make in Cargo.toml, Cargo.lock, src/runtime.rs, src/scope.rs. Please advise whether to keep/commit/revert these before any further work. I did not touch them.

Next steps (pick one)

1. Decide how to handle the unrelated modified files above so we can resume cleanly.

NEVER EVER DO THAT AGAIN. The answer is literally ALWAYS the same: those are changes created by the dozens of other agents potentially working on the project at the same time. This is not only a common occurrence, it happens multiple times PER MINUTE. The way to deal with it is simple: you NEVER, under ANY CIRCUMSTANCE, stash, revert, overwrite, or otherwise disturb in ANY way the work of other agents. Just treat those changes identically to changes that you yourself made. Just fool yourself into thinking YOU made the changes and simply don't recall it for some reason.


Note on Built-in TODO Functionality

Also, if I ask you to explicitly use your built-in TODO functionality, don't complain about this and say you need to use beads. You can use built-in TODOs if I tell you specifically to do so. Always comply with such orders.

TDD Requirements

Test-first development is mandatory:

  1. RED - Write failing test first
  2. GREEN - Minimal implementation to pass
  3. REFACTOR - Clean up while green

Key Patterns

Find the simplest solution that meets all acceptance criteria. Use third-party libraries whenever there's a well-maintained, active, and widely adopted solution (for example, date-fns for TS date math). Build extensible pieces of logic that can easily be integrated with other pieces. DRY principles should be loosely held. Architecture MUST be clear and well thought-out. Ask the user for clarification whenever you discover ambiguity around architecture, or you think a better approach than planned exists.


Third-Party Library Usage

If you aren't 100% sure how to use a third-party library, SEARCH ONLINE to find the latest documentation and mid-2025 best practices.


Gitlore Robot Mode

The lore CLI has a robot mode optimized for AI agent consumption with compact JSON output, structured errors with machine-actionable recovery steps, meaningful exit codes, response timing metadata, field selection for token efficiency, and TTY auto-detection.

Activation

# Explicit flag
lore --robot issues -n 10

# JSON shorthand (-J)
lore -J issues -n 10

# Auto-detection (when stdout is not a TTY)
lore issues | jq .

# Environment variable
LORE_ROBOT=1 lore issues

Robot Mode Commands

# List issues/MRs with JSON output
lore --robot issues -n 10
lore --robot mrs -s opened

# Filter issues by work item status (case-insensitive)
lore --robot issues --status "In progress"

# List with field selection (reduces token usage ~60%)
lore --robot issues --fields minimal
lore --robot mrs --fields iid,title,state,draft

# Show detailed entity info
lore --robot issues 123
lore --robot mrs 456 -p group/repo

# Count entities
lore --robot count issues
lore --robot count discussions --for mr

# Search indexed documents
lore --robot search "authentication bug"

# Check sync status
lore --robot status

# Run full sync pipeline
lore --robot sync

# Run sync without resource events
lore --robot sync --no-events

# Run ingestion only
lore --robot ingest issues

# Check environment health
lore --robot doctor

# Document and index statistics
lore --robot stats

# Quick health pre-flight check (exit 0 = healthy, 19 = unhealthy)
lore --robot health

# Generate searchable documents from ingested data
lore --robot generate-docs

# Generate vector embeddings via Ollama
lore --robot embed

# Agent self-discovery manifest (all commands, flags, exit codes, response schemas)
lore robot-docs

# Version information
lore --robot version

Response Format

All commands return compact JSON with a uniform envelope and timing metadata:

{"ok":true,"data":{...},"meta":{"elapsed_ms":42}}

Errors return structured JSON to stderr with machine-actionable recovery steps:

{"error":{"code":"CONFIG_NOT_FOUND","message":"...","suggestion":"Run 'lore init'","actions":["lore init"]}}

The actions array contains executable shell commands for automated recovery. It is omitted when empty.
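Assuming jq is available, the envelope and error shapes can be unwrapped like this (the JSON literals are samples mirroring the formats shown above, not real lore output):

```shell
# Sample success envelope: check ok, then extract data.
resp='{"ok":true,"data":{"count":3},"meta":{"elapsed_ms":42}}'
if [ "$(echo "$resp" | jq -r '.ok')" = "true" ]; then
  echo "$resp" | jq '.data.count'            # prints: 3
fi

# Sample error envelope: the actions array yields runnable recovery commands.
err='{"error":{"code":"CONFIG_NOT_FOUND","message":"no config","suggestion":"Run lore init","actions":["lore init"]}}'
echo "$err" | jq -r '.error.actions[]'       # prints: lore init
```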

Field Selection

The --fields flag on issues and mrs list commands controls which fields appear in the JSON response:

lore -J issues --fields minimal                    # Preset: iid, title, state, updated_at_iso
lore -J mrs --fields iid,title,state,draft,labels  # Custom field list

Exit Codes

Code Meaning
0 Success
1 Internal error / not implemented
2 Usage error (invalid flags or arguments)
3 Config invalid
4 Token not set
5 GitLab auth failed
6 Resource not found
7 Rate limited
8 Network error
9 Database locked
10 Database error
11 Migration failed
12 I/O error
13 Transform error
14 Ollama unavailable
15 Ollama model not found
16 Embedding failed
17 Not found (entity does not exist)
18 Ambiguous match (use -p to specify project)
19 Health check failed
20 Config not found
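A sketch of dispatching on these codes; the handler and its messages are illustrative, but the code-to-meaning mapping follows the table above:

```shell
# Illustrative exit-code handler for lore invocations.
handle_rc() {
  case "$1" in
    0)   echo "ok" ;;
    4|5) echo "auth: set GITLAB_TOKEN" ;;     # token not set / GitLab auth failed
    7|8) echo "retry" ;;                      # rate limited / network error
    20)  echo "run: lore init" ;;             # config not found
    *)   echo "fail ($1)" ;;
  esac
}

handle_rc 7     # prints: retry
handle_rc 20    # prints: run: lore init
```

In practice: run the command, then `handle_rc $?`.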

Configuration Precedence

  1. CLI flags (highest priority)
  2. Environment variables (LORE_ROBOT, GITLAB_TOKEN, LORE_CONFIG_PATH)
  3. Config file (~/.config/lore/config.json)
  4. Built-in defaults (lowest priority)

Best Practices

  • Use lore --robot or lore -J for all agent interactions
  • Check exit codes for error handling
  • Parse JSON errors from stderr; use actions array for automated recovery
  • Use --fields minimal to reduce token usage (~60% fewer tokens)
  • Use -n / --limit to control response size
  • Use -q / --quiet to suppress progress bars and non-essential output
  • Use --color never in non-TTY automation for ANSI-free output
  • Use -v / -vv / -vvv for increasing verbosity (debug/trace logging)
  • Use --log-format json for machine-readable log output to stderr
  • TTY detection handles piped commands automatically
  • Use lore --robot health as a fast pre-flight check before queries
  • Use lore robot-docs for response schema discovery
  • The -p flag supports fuzzy project matching (suffix and substring)

Read/Write Split: lore vs glab

Operation | Tool | Why
List issues/MRs | lore | Richer: includes status, discussions, closing MRs
View issue/MR detail | lore | Pre-joined discussions, work-item status
Search across entities | lore | FTS5 + vector hybrid search
Expert/workload analysis | lore | who command — no glab equivalent
Timeline reconstruction | lore | Chronological narrative — no glab equivalent
Create/update/close | glab | Write operations
Approve/merge MR | glab | Write operations
CI/CD pipelines | glab | Not in lore scope

## UBS Quick Reference for AI Agents

UBS stands for "Ultimate Bug Scanner": **The AI Coding Agent's Secret Weapon: Flagging Likely Bugs for Fixing Early On**

**Install:** `curl -sSL https://raw.githubusercontent.com/Dicklesworthstone/ultimate_bug_scanner/master/install.sh | bash`

**Golden Rule:** `ubs <changed-files>` before every commit. Exit 0 = safe. Exit >0 = fix & re-run.

**Commands:**
```bash
ubs file.ts file2.py                    # Specific files (< 1s) — USE THIS
ubs $(git diff --name-only --cached)    # Staged files — before commit
ubs --only=js,python src/               # Language filter (3-5x faster)
ubs --ci --fail-on-warning .            # CI mode — before PR
ubs --help                              # Full command reference
ubs sessions --entries 1                # Tail the latest install session log
ubs .                                   # Whole project (ignores things like .venv and node_modules automatically)
```

**Output Format:**
```
⚠️  Category (N errors)
    file.ts:42:5  Issue description
    💡 Suggested fix
Exit code: 1
```
Parse: `file:line:col` → location | 💡 → how to fix | Exit 0/1 → pass/fail

**Fix Workflow:**
1. Read finding → category + fix suggestion
2. Navigate `file:line:col` → view context
3. Verify real issue (not false positive)
4. Fix root cause (not symptom)
5. Re-run `ubs <file>` → exit 0
6. Commit

**Speed Critical:** Scope to changed files. `ubs src/file.ts` (< 1s) vs `ubs .` (30s). Never full scan for small edits.

**Bug Severity:**
- **Critical** (always fix): Null safety, XSS/injection, async/await, memory leaks
- **Important** (production): Type narrowing, division-by-zero, resource leaks
- **Contextual** (judgment): TODO/FIXME, console logs

**Anti-Patterns:**
- ❌ Ignore findings → ✅ Investigate each
- ❌ Full scan per edit → ✅ Scope to file
- ❌ Fix symptom (`if (x) { x.y }`) → ✅ Root cause (`x?.y`)