diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl index 7d0d49c..3b6264a 100644 --- a/.beads/issues.jsonl +++ b/.beads/issues.jsonl @@ -2,7 +2,7 @@ {"id":"bd-10i","title":"Epic: CP2 Gate D - Resumability Proof","description":"## Background\nGate D validates resumability and crash recovery. Proves that cursor and watermark mechanics prevent massive refetch after interruption. This is critical for large projects where a full refetch would take hours.\n\n## Acceptance Criteria (Pass/Fail)\n- [ ] Kill mid-run, rerun -> bounded redo (not full refetch from beginning)\n- [ ] Cursor saved at page boundary (not item boundary)\n- [ ] No redundant discussion refetch after crash recovery\n- [ ] No watermark advancement on partial pagination failure\n- [ ] Single-flight lock prevents concurrent ingest runs\n- [ ] `--full` flag resets MR cursor to NULL\n- [ ] `--full` flag resets ALL `discussions_synced_for_updated_at` to NULL\n- [ ] `--force` bypasses single-flight lock\n\n## Validation Script\n```bash\n#!/bin/bash\nset -e\n\nDB_PATH=\"${XDG_DATA_HOME:-$HOME/.local/share}/gitlab-inbox/db.sqlite3\"\n\necho \"=== Gate D: Resumability Proof ===\"\n\n# 1. Test single-flight lock\necho \"Step 1: Test single-flight lock...\"\ngi ingest --type=merge_requests &\nFIRST_PID=$!\nsleep 1\n\n# Try second ingest - should fail with lock error\nif gi ingest --type=merge_requests 2>&1 | grep -q \"lock\\|already running\"; then\n echo \" PASS: Second ingest blocked by lock\"\nelse\n echo \" FAIL: Lock not working\"\nfi\nwait $FIRST_PID 2>/dev/null || true\n\n# 2. Test --force bypasses lock\necho \"Step 2: Test --force flag...\"\ngi ingest --type=merge_requests &\nFIRST_PID=$!\nsleep 1\nif gi ingest --type=merge_requests --force 2>&1; then\n echo \" PASS: --force bypassed lock\"\nelse\n echo \" Note: --force test inconclusive\"\nfi\nwait $FIRST_PID 2>/dev/null || true\n\n# 3. 
Check cursor state\necho \"Step 3: Check cursor state...\"\nsqlite3 \"$DB_PATH\" \"\n SELECT resource_type, updated_at, gitlab_id\n FROM sync_cursors \n WHERE resource_type = 'merge_requests';\n\"\n\n# 4. Test crash recovery\necho \"Step 4: Test crash recovery...\"\n\n# Record current cursor\nCURSOR_BEFORE=$(sqlite3 \"$DB_PATH\" \"\n SELECT updated_at FROM sync_cursors WHERE resource_type = 'merge_requests';\n\")\necho \" Cursor before: $CURSOR_BEFORE\"\n\n# Force full sync and kill\necho \" Starting full sync then killing...\"\ngi ingest --type=merge_requests --full &\nPID=$!\nsleep 5 && kill -9 $PID 2>/dev/null || true\nwait $PID 2>/dev/null || true\n\n# Check cursor was saved (should be non-null if any page completed)\nCURSOR_AFTER=$(sqlite3 \"$DB_PATH\" \"\n SELECT updated_at FROM sync_cursors WHERE resource_type = 'merge_requests';\n\")\necho \" Cursor after kill: $CURSOR_AFTER\"\n\n# Re-run and verify bounded redo\necho \" Re-running (should resume from cursor)...\"\ntime gi ingest --type=merge_requests\n# Should be faster than first full sync\n\n# 5. 
Test --full reset\necho \"Step 5: Test --full resets watermarks...\"\n\n# Check watermarks before\nWATERMARKS_BEFORE=$(sqlite3 \"$DB_PATH\" \"\n SELECT COUNT(*) FROM merge_requests \n WHERE discussions_synced_for_updated_at IS NOT NULL;\n\")\necho \" Watermarks set before --full: $WATERMARKS_BEFORE\"\n\n# Record cursor before\nCURSOR_BEFORE_FULL=$(sqlite3 \"$DB_PATH\" \"\n SELECT updated_at, gitlab_id FROM sync_cursors WHERE resource_type = 'merge_requests';\n\")\necho \" Cursor before --full: $CURSOR_BEFORE_FULL\"\n\n# Run --full\ngi ingest --type=merge_requests --full\n\n# Check cursor was reset then rebuilt\nCURSOR_AFTER_FULL=$(sqlite3 \"$DB_PATH\" \"\n SELECT updated_at, gitlab_id FROM sync_cursors WHERE resource_type = 'merge_requests';\n\")\necho \" Cursor after --full: $CURSOR_AFTER_FULL\"\n\n# Watermarks should be set again (sync completed)\nWATERMARKS_AFTER=$(sqlite3 \"$DB_PATH\" \"\n SELECT COUNT(*) FROM merge_requests \n WHERE discussions_synced_for_updated_at IS NOT NULL;\n\")\necho \" Watermarks set after --full: $WATERMARKS_AFTER\"\n\necho \"\"\necho \"=== Gate D: PASSED ===\"\n```\n\n## Watermark Safety Test (Simulated Network Failure)\n```bash\n# This tests that watermark doesn't advance on partial failure\n# Requires ability to simulate network issues\n\n# 1. Get an MR that needs discussion sync\nMR_ID=$(sqlite3 \"$DB_PATH\" \"\n SELECT id FROM merge_requests \n WHERE discussions_synced_for_updated_at IS NULL \n OR updated_at > discussions_synced_for_updated_at\n LIMIT 1;\n\")\n\n# 2. Note current watermark\nWATERMARK_BEFORE=$(sqlite3 \"$DB_PATH\" \"\n SELECT discussions_synced_for_updated_at FROM merge_requests WHERE id = $MR_ID;\n\")\necho \"Watermark before: $WATERMARK_BEFORE\"\n\n# 3. 
Simulate network failure (requires network manipulation)\n# Option A: Block GitLab API temporarily\n# Option B: Run in a container with network limits\n# Option C: Use the automated test instead:\ncargo test does_not_advance_discussion_watermark_on_partial_failure\n\n# 4. Verify watermark unchanged after failure\nWATERMARK_AFTER=$(sqlite3 \"$DB_PATH\" \"\n SELECT discussions_synced_for_updated_at FROM merge_requests WHERE id = $MR_ID;\n\")\necho \"Watermark after failure: $WATERMARK_AFTER\"\n[ \"$WATERMARK_BEFORE\" = \"$WATERMARK_AFTER\" ] && echo \"PASS: Watermark preserved\"\n```\n\n## Test Commands (Quick Verification)\n```bash\n# Check cursor state:\nsqlite3 ~/.local/share/gitlab-inbox/db.sqlite3 \"\n SELECT * FROM sync_cursors WHERE resource_type = 'merge_requests';\n\"\n\n# Check watermark distribution:\nsqlite3 ~/.local/share/gitlab-inbox/db.sqlite3 \"\n SELECT \n SUM(CASE WHEN discussions_synced_for_updated_at IS NULL THEN 1 ELSE 0 END) as needs_sync,\n SUM(CASE WHEN discussions_synced_for_updated_at IS NOT NULL THEN 1 ELSE 0 END) as synced\n FROM merge_requests;\n\"\n\n# Test --full resets (check before/after):\nsqlite3 ~/.local/share/gitlab-inbox/db.sqlite3 \"SELECT COUNT(*) FROM merge_requests WHERE discussions_synced_for_updated_at IS NOT NULL;\"\ngi ingest --type=merge_requests --full\n# During full sync, watermarks should be NULL, then repopulated\n```\n\n## Critical Automated Tests\nThese tests MUST pass for Gate D:\n```bash\ncargo test does_not_advance_discussion_watermark_on_partial_failure\ncargo test full_sync_resets_discussion_watermarks\ncargo test cursor_saved_at_page_boundary\n```\n\n## Dependencies\nThis gate requires:\n- bd-mk3 (ingest command with --full and --force support)\n- bd-ser (MR ingestion with cursor mechanics)\n- bd-20h (MR discussion ingestion with watermark safety)\n- Gates A, B, C must pass first\n\n## Edge Cases\n- Very fast sync: May complete before kill signal reaches; retest with larger project\n- Lock file stale: If 
previous run crashed, lock file may exist; --force handles this\n- Clock skew: Cursor timestamps should use server time, not local time","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-26T22:06:02.124186Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:48:21.060596Z","closed_at":"2026-01-27T00:48:21.060555Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-10i","depends_on_id":"bd-mk3","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-11mg","title":"Add CLI flags: --as-of, --explain-score, --include-bots, --all-history","description":"## Background\nThe who command needs new CLI flags for reproducible scoring (--as-of), score transparency (--explain-score), bot inclusion (--include-bots), and full-history mode (--all-history). The default --since for expert mode changes from 6m to 24m. At this point query_expert() already accepts the new params (from bd-13q8).\n\n## Approach\nModify WhoArgs struct and run_who() (who.rs:276). The command uses clap derive macros.\n\n### New clap fields on WhoArgs:\n```rust\n#[arg(long, value_name = \"TIMESTAMP\")]\npub as_of: Option<String>,\n\n#[arg(long, conflicts_with = \"detail\")]\npub explain_score: bool,\n\n#[arg(long)]\npub include_bots: bool,\n\n#[arg(long, conflicts_with = \"since\")]\npub all_history: bool,\n```\n\n### run_who() changes (who.rs:276):\n1. Default --since: \"6m\" -> \"24m\" for expert mode\n2. **Path canonicalization**: Call normalize_query_path() on raw path input at top of run_who(), before build_path_query(). Store both original and normalized for robot JSON.\n3. Parse --as-of: RFC3339 or YYYY-MM-DD (append T23:59:59.999Z for end-of-day UTC) -> i64 millis. Default: now_ms()\n4. Parse --all-history: set since_ms = 0\n5. Thread as_of_ms, explain_score, include_bots through to query_expert()\n6. 
Update the production query_expert() callsite (line ~311) from the default values bd-13q8 set to the actual parsed flag values\n\n### Robot JSON resolved_input additions:\n- scoring_model_version: 2\n- path_input_original: raw user input\n- path_input_normalized: after normalize_query_path()\n- as_of_ms/as_of_iso\n- window_start_iso/window_end_iso/window_end_exclusive: true\n- since_mode: \"all\" | \"24m\" | user value\n- excluded_usernames_applied: true|false\n\n### Robot JSON per-expert (explain_score): score_raw + components object\n### Human output (explain_score): parenthetical after score: `42 (author:28.5 review:10.0 notes:3.5)`\n\n## TDD Loop\n\n### RED (write these 8 tests first):\n- test_explain_score_components_sum_to_total: components sum == score_raw within tolerance\n- test_as_of_produces_deterministic_results: two runs with same as_of -> identical\n- test_as_of_excludes_future_events: event after as_of excluded entirely\n- test_as_of_exclusive_upper_bound: event at exactly as_of_ms excluded (strict <)\n- test_since_relative_to_as_of_clock: since window from as_of, not wall clock\n- test_explain_and_detail_are_mutually_exclusive: clap parse error\n- test_excluded_usernames_filters_bots: renovate-bot filtered, jsmith present\n- test_include_bots_flag_disables_filtering: both appear with --include-bots\n\n### GREEN: Add clap args, parse logic, robot JSON fields, human output format.\n### VERIFY: cargo test -p lore -- test_explain_score test_as_of test_since_relative test_excluded test_include_bots\n\n## Acceptance Criteria\n- [ ] All 8 tests pass green\n- [ ] --as-of parses RFC3339 and YYYY-MM-DD (end-of-day UTC)\n- [ ] --explain-score conflicts with --detail (clap error at parse time)\n- [ ] --all-history conflicts with --since (clap error at parse time)\n- [ ] Default --since is 24m for expert mode\n- [ ] Robot JSON includes scoring_model_version: 2\n- [ ] Robot JSON includes path_input_original AND path_input_normalized\n- [ ] Robot JSON includes 
score_raw + components when --explain-score\n- [ ] Human output appends component parenthetical when --explain-score\n\n## Files\n- MODIFY: src/cli/commands/who.rs (WhoArgs struct, run_who at line 276, robot/human output rendering)\n\n## Edge Cases\n- YYYY-MM-DD parsing: chrono NaiveDate then to end-of-day UTC (T23:59:59.999Z)\n- as_of in the past with --since: since window = as_of_ms - duration, not now - duration\n- since_mode in robot JSON: \"all\" for --all-history, \"24m\" default, user value otherwise\n- Path normalization runs BEFORE path resolution","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-09T17:00:11.115322Z","created_by":"tayloreernisse","updated_at":"2026-02-12T20:43:04.413786Z","closed_at":"2026-02-12T20:43:04.413729Z","close_reason":"Implemented by time-decay swarm: 3 agents, 12 tasks, 621 tests passing, all quality gates green","compaction_level":0,"original_size":0,"labels":["cli","scoring"],"dependencies":[{"issue_id":"bd-11mg","depends_on_id":"bd-13q8","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-11mg","depends_on_id":"bd-18dn","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-12ae","title":"OBSERV: Add structured tracing fields to rate-limit/retry handling","description":"## Background\nRate limit and retry events are currently logged at WARN with minimal context (src/gitlab/client.rs:~157). 
This enriches them with structured fields so MetricsLayer can count them and -v mode shows actionable retry information.\n\n## Approach\n### src/gitlab/client.rs - request() method (line ~119-171)\n\nCurrent 429 handling (~line 155-158):\n```rust\nif response.status() == StatusCode::TOO_MANY_REQUESTS && attempt < Self::MAX_RETRIES {\n let retry_after = Self::parse_retry_after(&response);\n tracing::warn!(retry_after_secs = retry_after, attempt, path, \"Rate limited by GitLab, retrying\");\n sleep(Duration::from_secs(retry_after)).await;\n continue;\n}\n```\n\nReplace with INFO-level structured log:\n```rust\nif response.status() == StatusCode::TOO_MANY_REQUESTS && attempt < Self::MAX_RETRIES {\n let retry_after = Self::parse_retry_after(&response);\n tracing::info!(\n path = %path,\n attempt = attempt,\n retry_after_secs = retry_after,\n status_code = 429u16,\n \"Rate limited, retrying\"\n );\n sleep(Duration::from_secs(retry_after)).await;\n continue;\n}\n```\n\nFor transient errors (network errors, 5xx responses), add similar structured logging:\n```rust\ntracing::info!(\n path = %path,\n attempt = attempt,\n error = %e,\n \"Retrying after transient error\"\n);\n```\n\nKey changes:\n- Level: WARN -> INFO (visible in -v mode, not alarming in default mode)\n- Added: status_code field for 429\n- Added: structured path, attempt fields for all retry events\n- These structured fields enable MetricsLayer (bd-3vqk) to count rate_limit_hits and retries\n\n## Acceptance Criteria\n- [ ] 429 responses log at INFO with fields: path, attempt, retry_after_secs, status_code=429\n- [ ] Transient error retries log at INFO with fields: path, attempt, error\n- [ ] lore -v sync shows retry activity on stderr (INFO is visible in -v mode)\n- [ ] Default mode (no -v) does NOT show retry lines on stderr (INFO filtered out)\n- [ ] File layer captures all retry events (always at DEBUG+)\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/gitlab/client.rs (modify 
request() method, lines ~119-171)\n\n## TDD Loop\nRED:\n - test_rate_limit_log_fields: mock 429 response, capture log output, parse JSON, assert fields\n - test_retry_log_fields: mock network error + retry, assert structured fields\nGREEN: Change log level and add structured fields\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- parse_retry_after returns 0 or very large values: the existing logic handles this\n- All retries exhausted: the final attempt returns the error normally. No special logging needed (the error propagates).\n- path may contain sensitive data (project IDs): project IDs are not sensitive in this context","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T15:55:02.448070Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:21:42.304259Z","closed_at":"2026-02-04T17:21:42.304213Z","close_reason":"Changed 429 rate-limit logging from WARN to INFO with structured fields: path, attempt, retry_after_secs, status_code=429 in both request() and request_with_headers()","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-12ae","depends_on_id":"bd-3pk","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-12um","title":"Phase 3f-Step2: Cx threading — command layer + embedding pipeline","description":"## What\nWiden Cx threading to the remaining async command handlers and embedding pipeline.\n\n## Why\nLower-risk step after Step 1 validates the pattern. 
These modules don't have the same fan-out patterns as the orchestrator, so they're simpler to migrate.\n\n## Rearchitecture Context (2026-03-06)\n- cli/commands/embed.rs remains a single file (not split)\n- embedding/pipeline.rs remains unchanged\n- Command dispatch handlers are in src/app/handlers.rs (include\\!'d into main.rs scope)\n\n## Functions That Need cx: &Cx Added\n\n| Module | Functions |\n|--------|-----------|\n| cli/commands/embed.rs | run_embed() |\n| embedding/pipeline.rs | embed_documents(), embed_batch_group() |\n\n## Notes\n- embed_batch_group likely has its own join_all for concurrent embedding requests — same region-wrapping pattern as orchestrator\n- Embedding pipeline has configurable concurrency (config.embedding.concurrency) — verify region respects this\n- If Step 1 surfaces problems, this step hasn't been started yet (risk reduction)\n\n## Files Changed\n- src/cli/commands/embed.rs (~5 LOC)\n- src/embedding/pipeline.rs (~10 LOC)\n\n## Testing\n- cargo check --all-targets\n- cargo test\n- Manual: lore embed with Ctrl+C — verify graceful shutdown\n\n## Depends On\n- Phase 3f-Step1 (orchestration path must work first)","status":"open","priority":2,"issue_type":"task","created_at":"2026-03-06T18:41:16.721761Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:53:19.399Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-12um","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:16.723082Z","created_by":"tayloreernisse"},{"issue_id":"bd-12um","depends_on_id":"bd-zgke","type":"blocks","created_at":"2026-03-06T18:42:54.658467Z","created_by":"tayloreernisse"}]} +{"id":"bd-12um","title":"Phase 3f-Step2: Cx threading — command layer + embedding pipeline","description":"## What\nWiden Cx threading to the remaining async command handlers and embedding pipeline.\n\n## Why\nLower-risk step after Step 1 validates the pattern. 
These modules don't have the same fan-out patterns as the orchestrator, so they're simpler to migrate.\n\n## Rearchitecture Context (2026-03-06)\n- cli/commands/embed.rs remains a single file (not split)\n- embedding/pipeline.rs remains unchanged\n- Command dispatch handlers are in src/app/handlers.rs (include\\!'d into main.rs scope)\n\n## Functions That Need cx: &Cx Added\n\n| Module | Functions |\n|--------|-----------|\n| cli/commands/embed.rs | run_embed() |\n| embedding/pipeline.rs | embed_documents(), embed_batch_group() |\n\n## Notes\n- embed_batch_group likely has its own join_all for concurrent embedding requests — same region-wrapping pattern as orchestrator\n- Embedding pipeline has configurable concurrency (config.embedding.concurrency) — verify region respects this\n- If Step 1 surfaces problems, this step hasn't been started yet (risk reduction)\n\n## Files Changed\n- src/cli/commands/embed.rs (~5 LOC)\n- src/embedding/pipeline.rs (~10 LOC)\n\n## Testing\n- cargo check --all-targets\n- cargo test\n- Manual: lore embed with Ctrl+C — verify graceful shutdown\n\n## Depends On\n- Phase 3f-Step1 (orchestration path must work first)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-03-06T18:41:16.721761Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.650925Z","closed_at":"2026-03-06T21:11:12.650877Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-12um","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:16.723082Z","created_by":"tayloreernisse"},{"issue_id":"bd-12um","depends_on_id":"bd-zgke","type":"blocks","created_at":"2026-03-06T18:42:54.658467Z","created_by":"tayloreernisse"}]} {"id":"bd-13b","title":"[CP0] CLI entry point with Commander.js","description":"## Background\n\nCommander.js provides the CLI framework. 
The main entry point sets up the program with all subcommands. Uses ESM with proper shebang for npx/global installation.\n\nReference: docs/prd/checkpoint-0.md section \"CLI Commands\"\n\n## Approach\n\n**src/cli/index.ts:**\n```typescript\n#!/usr/bin/env node\n\nimport { Command } from 'commander';\nimport { version } from '../../package.json' with { type: 'json' };\nimport { initCommand } from './commands/init';\nimport { authTestCommand } from './commands/auth-test';\nimport { doctorCommand } from './commands/doctor';\nimport { versionCommand } from './commands/version';\nimport { backupCommand } from './commands/backup';\nimport { resetCommand } from './commands/reset';\nimport { syncStatusCommand } from './commands/sync-status';\n\nconst program = new Command();\n\nprogram\n .name('gi')\n .description('GitLab Inbox - Unified notification management')\n .version(version);\n\n// Global --config flag available to all commands\nprogram.option('-c, --config <path>', 'Path to config file');\n\n// Register subcommands\nprogram.addCommand(initCommand);\nprogram.addCommand(authTestCommand);\nprogram.addCommand(doctorCommand);\nprogram.addCommand(versionCommand);\nprogram.addCommand(backupCommand);\nprogram.addCommand(resetCommand);\nprogram.addCommand(syncStatusCommand);\n\nprogram.parse();\n```\n\nEach command file exports a Command instance:\n```typescript\n// src/cli/commands/version.ts\nimport { Command } from 'commander';\nimport { version } from '../../package.json' with { type: 'json' };\n\nexport const versionCommand = new Command('version')\n .description('Show version information')\n .action(() => {\n console.log(`gi version ${version}`);\n });\n```\n\n## Acceptance Criteria\n\n- [ ] `gi --help` shows all commands and global options\n- [ ] `gi --version` shows version from package.json\n- [ ] `gi <command> --help` shows command-specific help\n- [ ] `gi --config ./path` passes config path to commands\n- [ ] Unknown command shows error and suggests --help\n- [ ] Exit code 0 on success, non-zero on error\n- [ ] Shebang line works for npx 
execution\n\n## Files\n\nCREATE:\n- src/cli/index.ts (main entry point)\n- src/cli/commands/version.ts (simple command as template)\n\nMODIFY (later beads):\n- package.json (add \"bin\" field pointing to dist/cli/index.js)\n\n## TDD Loop\n\nN/A for CLI entry point - verify with manual testing:\n\n```bash\nnpm run build\nnode dist/cli/index.js --help\nnode dist/cli/index.js version\nnode dist/cli/index.js unknown-command # should error\n```\n\n## Edge Cases\n\n- package.json import requires Node 20+ with { type: 'json' } assertion\n- Alternative: read version from package.json with readFileSync\n- Command registration order affects help display - alphabetical preferred\n- Global options must be defined before subcommands","status":"closed","priority":1,"issue_type":"task","created_at":"2026-01-24T16:09:50.499023Z","created_by":"tayloreernisse","updated_at":"2026-01-25T03:10:49.224627Z","closed_at":"2026-01-25T03:10:49.224499Z","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-13b","depends_on_id":"bd-gg1","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-13lp","title":"Epic: CLI Intelligence & Market Position (CLI-IMP)","description":"## Strategic Context\n\nAnalysis of glab (GitLab CLI) vs lore reveals a clean architectural split: lore = ALL reads (issues, MRs, search, who, timeline, intelligence); glab = ALL writes (create, update, approve, merge, CI/CD).\n\nLore is NOT duplicating glab. The overlap is minimal (both list issues/MRs), but lore output is curated, flatter, and richer (closing MRs, work-item status, discussions pre-joined). 
Agents reach for glab by default because they have been trained on it — a discovery problem, not a capability problem.\n\nThree layers: (1) Foundation — make lore the definitive read path; (2) Intelligence — ship half-built features (hybrid search, timeline, per-note search); (3) Alien Artifact — novel intelligence (explain, related, brief, drift).\n\n## Progress (as of 2026-02-12)\n\n### Shipped\n- Timeline CLI (bd-2wpf): CLOSED. 5-stage pipeline with human and robot renderers working end-to-end.\n- `who` command: Expert, Workload, Reviews, Active, Overlap modes all functional.\n- Search infrastructure: hybrid.rs, vector.rs, rrf.rs all implemented and tested (not yet wired to CLI).\n\n### In Progress\n- Foundation: bd-kvij (skill rewrite), bd-91j1 (robot-docs), bd-2g50 (data gaps)\n- Intelligence: bd-1ksf (hybrid search wiring), bd-2l3s (per-note search)\n- Alien Artifact: bd-1n5q (brief), bd-8con (related), bd-9lbr (explain), bd-1cjx (drift)\n\n## Success Criteria\n- [x] Timeline CLI shipped with human and robot renderers (bd-2wpf CLOSED)\n- [ ] Zero agent skill files reference glab for read operations\n- [ ] robot-docs comprehensive enough for zero-training agent bootstrap\n- [ ] Hybrid search (FTS + vector + RRF) wired to CLI and default\n- [ ] Per-note search operational at note granularity\n- [ ] At least one Tier 3 alien artifact feature prototyped (brief, related, explain, or drift)\n\n## Architecture Notes\n- main.rs is 2579 lines with all subcommand handlers\n- CLI commands in src/cli/commands/ (16 modules: auth_test, count, doctor, embed, generate_docs, init, ingest, list, search, show, stats, sync, sync_status, timeline, who, plus mod.rs)\n- Database: 21 migrations wired (001-021), LATEST_SCHEMA_VERSION = 21\n- Raw payloads for issues store 15 fields: assignees, author, closed_at, created_at, description, due_date, id, iid, labels, milestone, project_id, state, title, updated_at, web_url\n- Missing from raw payloads: closed_by, confidential, upvotes, 
downvotes, weight, issue_type, time_stats, health_status (ingestion pipeline doesn't capture these)\n- robot-docs current output keys: name, version, description, activation, commands, aliases, exit_codes, clap_error_codes, error_format, workflows","status":"open","priority":1,"issue_type":"epic","created_at":"2026-02-12T15:44:23.993267Z","created_by":"tayloreernisse","updated_at":"2026-02-12T16:08:36.417919Z","compaction_level":0,"original_size":0,"labels":["cli-imp","epic"]} {"id":"bd-13pt","title":"Display closing MRs in lore issues output","description":"## Background\nThe `entity_references` table stores MR->Issue 'closes' relationships (from the closes_issues API), but this data is never displayed when viewing an issue. This is the 'Development' section in GitLab UI showing which MRs will close an issue when merged.\n\n**System fit**: Data already flows through `fetch_mr_closes_issues()` -> `store_closes_issues_refs()` -> `entity_references` table. We just need to query and display it.\n\n## Approach\n\nAll changes in `src/cli/commands/show.rs`:\n\n### 1. Add ClosingMrRef struct (after DiffNotePosition ~line 57)\n```rust\n#[derive(Debug, Clone, Serialize)]\npub struct ClosingMrRef {\n pub iid: i64,\n pub title: String,\n pub state: String,\n pub web_url: Option<String>,\n}\n```\n\n### 2. Update IssueDetail struct (line ~59)\n```rust\npub struct IssueDetail {\n // ... existing fields ...\n pub closing_merge_requests: Vec<ClosingMrRef>, // NEW - add after discussions\n}\n```\n\n### 3. Add ClosingMrRefJson struct (after NoteDetailJson ~line 797)\n```rust\n#[derive(Serialize)]\npub struct ClosingMrRefJson {\n pub iid: i64,\n pub title: String,\n pub state: String,\n pub web_url: Option<String>,\n}\n```\n\n### 4. Update IssueDetailJson struct (line ~770)\n```rust\npub struct IssueDetailJson {\n // ... existing fields ...\n pub closing_merge_requests: Vec<ClosingMrRefJson>, // NEW\n}\n```\n\n### 5. 
Add get_closing_mrs() function (after get_issue_discussions ~line 245)\n```rust\nfn get_closing_mrs(conn: &Connection, issue_id: i64) -> Result<Vec<ClosingMrRef>> {\n let mut stmt = conn.prepare(\n \"SELECT mr.iid, mr.title, mr.state, mr.web_url\n FROM entity_references er\n JOIN merge_requests mr ON mr.id = er.source_entity_id\n WHERE er.target_entity_type = 'issue'\n AND er.target_entity_id = ?\n AND er.source_entity_type = 'merge_request'\n AND er.reference_type = 'closes'\n ORDER BY mr.iid\"\n )?;\n \n let mrs = stmt\n .query_map([issue_id], |row| {\n Ok(ClosingMrRef {\n iid: row.get(0)?,\n title: row.get(1)?,\n state: row.get(2)?,\n web_url: row.get(3)?,\n })\n })?\n .collect::<Result<Vec<_>, _>>()?;\n \n Ok(mrs)\n}\n```\n\n### 6. Update run_show_issue() (line ~89)\n```rust\nlet closing_mrs = get_closing_mrs(&conn, issue.id)?;\n// In return struct:\nclosing_merge_requests: closing_mrs,\n```\n\n### 7. Update print_show_issue() (after Labels section ~line 556)\n```rust\nif !issue.closing_merge_requests.is_empty() {\n println!(\"Development:\");\n for mr in &issue.closing_merge_requests {\n let state_indicator = match mr.state.as_str() {\n \"merged\" => style(\"merged\").green(),\n \"opened\" => style(\"opened\").cyan(),\n \"closed\" => style(\"closed\").red(),\n _ => style(&mr.state).dim(),\n };\n println!(\" !{} {} ({})\", mr.iid, mr.title, state_indicator);\n }\n}\n```\n\n### 8. 
Update From<&IssueDetail> for IssueDetailJson (line ~799)\n```rust\nclosing_merge_requests: issue.closing_merge_requests.iter().map(|mr| ClosingMrRefJson {\n iid: mr.iid,\n title: mr.title.clone(),\n state: mr.state.clone(),\n web_url: mr.web_url.clone(),\n}).collect(),\n```\n\n## Acceptance Criteria\n- [ ] `cargo test test_get_closing_mrs` passes (4 tests)\n- [ ] `lore issues <iid>` shows Development section when closing MRs exist\n- [ ] Development section shows MR iid, title, and state\n- [ ] State is color-coded (green=merged, cyan=opened, red=closed)\n- [ ] `lore -J issues <iid>` includes closing_merge_requests array\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n- `src/cli/commands/show.rs` - ALL changes\n\n## TDD Loop\n\n**RED** - Add tests to `src/cli/commands/show.rs` `#[cfg(test)] mod tests`:\n\n```rust\nfn seed_issue_with_closing_mr(conn: &Connection) -> (i64, i64) {\n conn.execute(\n \"INSERT INTO projects (id, gitlab_project_id, path_with_namespace, web_url, created_at, updated_at)\n VALUES (1, 100, 'group/repo', 'https://gitlab.example.com', 1000, 2000)\", []\n ).unwrap();\n conn.execute(\n \"INSERT INTO issues (id, gitlab_id, iid, project_id, title, state, author_username,\n created_at, updated_at, last_seen_at) VALUES (1, 200, 10, 1, 'Bug fix', 'opened', 'dev', 1000, 2000, 2000)\", []\n ).unwrap();\n conn.execute(\n \"INSERT INTO merge_requests (id, gitlab_id, iid, project_id, title, state, author_username,\n source_branch, target_branch, created_at, updated_at, last_seen_at)\n VALUES (1, 300, 5, 1, 'Fix the bug', 'merged', 'dev', 'fix', 'main', 1000, 2000, 2000)\", []\n ).unwrap();\n conn.execute(\n \"INSERT INTO entity_references (project_id, source_entity_type, source_entity_id,\n target_entity_type, target_entity_id, reference_type, source_method, created_at)\n VALUES (1, 'merge_request', 1, 'issue', 1, 'closes', 'api', 3000)\", []\n ).unwrap();\n (1, 1) // (issue_id, mr_id)\n}\n\n#[test]\nfn test_get_closing_mrs_empty() {\n let 
conn = setup_test_db();\n // seed project + issue with no closing MRs\n conn.execute(\"INSERT INTO projects ...\", []).unwrap();\n conn.execute(\"INSERT INTO issues ...\", []).unwrap();\n let result = get_closing_mrs(&conn, 1).unwrap();\n assert!(result.is_empty());\n}\n\n#[test]\nfn test_get_closing_mrs_single() {\n let conn = setup_test_db();\n seed_issue_with_closing_mr(&conn);\n let result = get_closing_mrs(&conn, 1).unwrap();\n assert_eq!(result.len(), 1);\n assert_eq!(result[0].iid, 5);\n assert_eq!(result[0].title, \"Fix the bug\");\n assert_eq!(result[0].state, \"merged\");\n}\n\n#[test]\nfn test_get_closing_mrs_ignores_mentioned() {\n let conn = setup_test_db();\n seed_issue_with_closing_mr(&conn);\n // Add a 'mentioned' reference that should be ignored\n conn.execute(\n \"INSERT INTO merge_requests (id, gitlab_id, iid, project_id, title, state, author_username,\n source_branch, target_branch, created_at, updated_at, last_seen_at)\n VALUES (2, 301, 6, 1, 'Other MR', 'opened', 'dev', 'other', 'main', 1000, 2000, 2000)\", []\n ).unwrap();\n conn.execute(\n \"INSERT INTO entity_references (project_id, source_entity_type, source_entity_id,\n target_entity_type, target_entity_id, reference_type, source_method, created_at)\n VALUES (1, 'merge_request', 2, 'issue', 1, 'mentioned', 'note_parse', 3000)\", []\n ).unwrap();\n let result = get_closing_mrs(&conn, 1).unwrap();\n assert_eq!(result.len(), 1); // Only the 'closes' ref\n}\n\n#[test]\nfn test_get_closing_mrs_multiple_sorted() {\n let conn = setup_test_db();\n seed_issue_with_closing_mr(&conn);\n // Add second closing MR with higher iid\n conn.execute(\n \"INSERT INTO merge_requests (id, gitlab_id, iid, project_id, title, state, author_username,\n source_branch, target_branch, created_at, updated_at, last_seen_at)\n VALUES (2, 301, 8, 1, 'Another fix', 'opened', 'dev', 'fix2', 'main', 1000, 2000, 2000)\", []\n ).unwrap();\n conn.execute(\n \"INSERT INTO entity_references (project_id, source_entity_type, 
source_entity_id,\n target_entity_type, target_entity_id, reference_type, source_method, created_at)\n VALUES (1, 'merge_request', 2, 'issue', 1, 'closes', 'api', 3000)\", []\n ).unwrap();\n let result = get_closing_mrs(&conn, 1).unwrap();\n assert_eq!(result.len(), 2);\n assert_eq!(result[0].iid, 5); // Lower iid first\n assert_eq!(result[1].iid, 8);\n}\n```\n\n**GREEN** - Implement get_closing_mrs() and struct updates\n\n**VERIFY**: `cargo test test_get_closing_mrs && cargo clippy --all-targets -- -D warnings`\n\n## Edge Cases\n- Empty closing MRs -> don't print Development section\n- MR in different states -> color-coded appropriately \n- Cross-project closes (target_entity_id IS NULL) -> not displayed (unresolved refs)\n- Multiple MRs closing same issue -> all shown, ordered by iid","status":"closed","priority":1,"issue_type":"feature","created_at":"2026-02-05T15:15:37.598249Z","created_by":"tayloreernisse","updated_at":"2026-02-05T15:26:09.522557Z","closed_at":"2026-02-05T15:26:09.522506Z","close_reason":"Implemented: closing MRs (Development section) now display in lore issues . 
All 4 new tests pass.","compaction_level":0,"original_size":0,"labels":["ISSUE"]} @@ -13,10 +13,10 @@ {"id":"bd-157","title":"[CP1] Issue transformer with label extraction","description":"Transform GitLab issue payloads to normalized database schema.\n\n## Module\nsrc/gitlab/transformers/issue.rs\n\n## Structs\n\n### NormalizedIssue\n- gitlab_id: i64\n- project_id: i64 (local DB project ID)\n- iid: i64\n- title: String\n- description: Option\n- state: String\n- author_username: String\n- created_at, updated_at, last_seen_at: i64 (ms epoch)\n- web_url: String\n\n### NormalizedLabel (CP1: name-only)\n- project_id: i64\n- name: String\n\n## Functions\n\n### transform_issue(gitlab_issue: &GitLabIssue, local_project_id: i64) -> NormalizedIssue\n- Convert ISO timestamps to ms epoch using iso_to_ms()\n- Set last_seen_at to now_ms()\n- Clone string fields\n\n### extract_labels(gitlab_issue: &GitLabIssue, local_project_id: i64) -> Vec\n- Map labels vec to NormalizedLabel structs\n\nFiles: \n- src/gitlab/transformers/mod.rs\n- src/gitlab/transformers/issue.rs\nTests: tests/issue_transformer_tests.rs\nDone when: Unit tests pass for payload transformation and label extraction","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-01-25T15:42:47.719562Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.736142Z","closed_at":"2026-01-25T17:02:01.736142Z","deleted_at":"2026-01-25T17:02:01.736129Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-159p","title":"Add get_issue_by_iid and get_mr_by_iid to GitLabClient with wiremock tests","description":"## Background\nSurgical sync needs to fetch a single issue or MR by its project-scoped IID from GitLab REST API during the preflight phase. The existing `GitLabClient` has `paginate_issues` and `paginate_merge_requests` for bulk streaming, but no single-entity fetch by IID. 
The GitLab v4 API provides `/api/v4/projects/:id/issues/:iid` and `/api/v4/projects/:id/merge_requests/:iid` endpoints that return exactly one entity or 404.\n\nThese methods are used by the surgical preflight (bd-3sez) to validate that requested IIDs actually exist on GitLab before committing to the ingest phase. They must return the full `GitLabIssue` / `GitLabMergeRequest` structs (same as the paginated endpoints return) so they can be passed directly to `process_single_issue` / `process_single_mr`.\n\n## Approach\n\n### Step 1: Add `get_issue_by_iid` method (src/gitlab/client.rs)\n\nAdd after the existing `get_version` method (~line 112):\n\n```rust\npub async fn get_issue_by_iid(\n &self,\n project_id: u64,\n iid: u64,\n) -> Result {\n self.request(&format!(\"/api/v4/projects/{project_id}/issues/{iid}\"))\n .await\n}\n```\n\nThis reuses the existing `request()` method which already handles:\n- Rate limiting (via `RateLimiter`)\n- Retry on 429 (up to `MAX_RETRIES`)\n- 404 → `LoreError::GitLabNotFound { resource }`\n- 401 → `LoreError::GitLabAuthFailed`\n- JSON deserialization into `GitLabIssue`\n\n### Step 2: Add `get_mr_by_iid` method (src/gitlab/client.rs)\n\n```rust\npub async fn get_mr_by_iid(\n &self,\n project_id: u64,\n iid: u64,\n) -> Result {\n self.request(&format!(\"/api/v4/projects/{project_id}/merge_requests/{iid}\"))\n .await\n}\n```\n\n### Step 3: Add wiremock tests (src/gitlab/client_tests.rs or inline #[cfg(test)])\n\nFour tests using the same wiremock pattern as `src/gitlab/graphql_tests.rs`:\n1. `get_issue_by_iid_success` — mock 200 with full GitLabIssue JSON, verify deserialized fields\n2. `get_issue_by_iid_not_found` — mock 404, verify `LoreError::GitLabNotFound`\n3. `get_mr_by_iid_success` — mock 200 with full GitLabMergeRequest JSON, verify deserialized fields\n4. 
`get_mr_by_iid_not_found` — mock 404, verify `LoreError::GitLabNotFound`\n\n## Acceptance Criteria\n- [ ] `GitLabClient::get_issue_by_iid(project_id, iid)` returns `Result`\n- [ ] `GitLabClient::get_mr_by_iid(project_id, iid)` returns `Result`\n- [ ] 404 response maps to `LoreError::GitLabNotFound`\n- [ ] 401 response maps to `LoreError::GitLabAuthFailed` (inherited from `handle_response`)\n- [ ] Successful responses deserialize into the correct struct types\n- [ ] All 4 wiremock tests pass\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n- MODIFY: src/gitlab/client.rs (add two pub async methods)\n- CREATE: src/gitlab/client_tests.rs (wiremock tests, referenced via `#[cfg(test)] #[path = \"client_tests.rs\"] mod tests;` at bottom of client.rs)\n\n## TDD Anchor\nRED: Write 4 wiremock tests in `src/gitlab/client_tests.rs`:\n\n```rust\nuse super::*;\nuse crate::core::error::LoreError;\nuse wiremock::matchers::{header, method, path};\nuse wiremock::{Mock, MockServer, ResponseTemplate};\n\n#[tokio::test]\nasync fn get_issue_by_iid_success() {\n let server = MockServer::start().await;\n let issue_json = serde_json::json!({\n \"id\": 1001,\n \"iid\": 42,\n \"project_id\": 5,\n \"title\": \"Fix login bug\",\n \"state\": \"opened\",\n \"created_at\": \"2026-01-15T10:00:00Z\",\n \"updated_at\": \"2026-02-01T14:30:00Z\",\n \"author\": { \"id\": 1, \"username\": \"dev1\", \"name\": \"Developer One\", \"avatar_url\": null, \"web_url\": \"https://gitlab.example.com/dev1\" },\n \"web_url\": \"https://gitlab.example.com/group/repo/-/issues/42\",\n \"labels\": [],\n \"milestone\": null,\n \"assignees\": [],\n \"closed_at\": null,\n \"closed_by\": null,\n \"description\": \"Login fails on mobile\"\n });\n\n Mock::given(method(\"GET\"))\n .and(path(\"/api/v4/projects/5/issues/42\"))\n .and(header(\"PRIVATE-TOKEN\", \"test-token\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(&issue_json))\n .mount(&server)\n 
.await;\n\n let client = GitLabClient::new(&server.uri(), \"test-token\", Some(100.0));\n let issue = client.get_issue_by_iid(5, 42).await.unwrap();\n assert_eq!(issue.iid, 42);\n assert_eq!(issue.title, \"Fix login bug\");\n}\n\n#[tokio::test]\nasync fn get_issue_by_iid_not_found() {\n let server = MockServer::start().await;\n\n Mock::given(method(\"GET\"))\n .and(path(\"/api/v4/projects/5/issues/999\"))\n .respond_with(ResponseTemplate::new(404).set_body_json(serde_json::json!({\"message\": \"404 Not Found\"})))\n .mount(&server)\n .await;\n\n let client = GitLabClient::new(&server.uri(), \"test-token\", Some(100.0));\n let err = client.get_issue_by_iid(5, 999).await.unwrap_err();\n assert!(matches!(err, LoreError::GitLabNotFound { .. }));\n}\n\n#[tokio::test]\nasync fn get_mr_by_iid_success() {\n let server = MockServer::start().await;\n let mr_json = serde_json::json!({\n \"id\": 2001,\n \"iid\": 101,\n \"project_id\": 5,\n \"title\": \"Add caching layer\",\n \"state\": \"merged\",\n \"created_at\": \"2026-01-20T09:00:00Z\",\n \"updated_at\": \"2026-02-10T16:00:00Z\",\n \"author\": { \"id\": 2, \"username\": \"dev2\", \"name\": \"Developer Two\", \"avatar_url\": null, \"web_url\": \"https://gitlab.example.com/dev2\" },\n \"web_url\": \"https://gitlab.example.com/group/repo/-/merge_requests/101\",\n \"source_branch\": \"feature/caching\",\n \"target_branch\": \"main\",\n \"draft\": false,\n \"merge_status\": \"can_be_merged\",\n \"labels\": [],\n \"milestone\": null,\n \"assignees\": [],\n \"reviewers\": [],\n \"merged_by\": null,\n \"merged_at\": null,\n \"closed_at\": null,\n \"closed_by\": null,\n \"description\": \"Adds Redis caching\"\n });\n\n Mock::given(method(\"GET\"))\n .and(path(\"/api/v4/projects/5/merge_requests/101\"))\n .and(header(\"PRIVATE-TOKEN\", \"test-token\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(&mr_json))\n .mount(&server)\n .await;\n\n let client = GitLabClient::new(&server.uri(), \"test-token\", Some(100.0));\n let 
mr = client.get_mr_by_iid(5, 101).await.unwrap();\n assert_eq!(mr.iid, 101);\n assert_eq!(mr.title, \"Add caching layer\");\n assert_eq!(mr.source_branch, \"feature/caching\");\n}\n\n#[tokio::test]\nasync fn get_mr_by_iid_not_found() {\n let server = MockServer::start().await;\n\n Mock::given(method(\"GET\"))\n .and(path(\"/api/v4/projects/5/merge_requests/999\"))\n .respond_with(ResponseTemplate::new(404).set_body_json(serde_json::json!({\"message\": \"404 Not Found\"})))\n .mount(&server)\n .await;\n\n let client = GitLabClient::new(&server.uri(), \"test-token\", Some(100.0));\n let err = client.get_mr_by_iid(5, 999).await.unwrap_err();\n assert!(matches!(err, LoreError::GitLabNotFound { .. }));\n}\n```\n\nGREEN: Add the two methods to `GitLabClient`.\nVERIFY: `cargo test get_issue_by_iid && cargo test get_mr_by_iid`\n\n## Edge Cases\n- The `request()` method already handles 429 retries, so no extra retry logic is needed in the new methods.\n- The GitLabIssue/GitLabMergeRequest fixture JSON must include all required (non-Option) fields. Check the struct definitions in `src/gitlab/types.rs` if deserialization fails — the test fixtures above include the minimum required fields based on the struct definitions.\n- The `project_id` parameter is the GitLab-side numeric project ID (not the local SQLite row ID). The caller must resolve this from the local `projects` table's `gitlab_project_id` column.\n\n## Dependency Context\nThis is a leaf/foundation bead with no upstream dependencies. 
Downstream bead bd-3sez (surgical.rs) calls these methods during preflight to fetch entities by IID before ingesting.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-17T19:12:14.447996Z","created_by":"tayloreernisse","updated_at":"2026-02-18T21:04:58.567083Z","closed_at":"2026-02-18T21:04:58.567041Z","close_reason":"Completed: all implementation work done, code reviewed, tests passing","compaction_level":0,"original_size":0,"labels":["surgical-sync"],"dependencies":[{"issue_id":"bd-159p","depends_on_id":"bd-1i4i","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-16m8","title":"OBSERV: Record item counts as span fields in sync stages","description":"## Background\nMetricsLayer (bd-34ek) captures span fields, but the stage functions must actually record item counts INTO their spans. This is the bridge between \"work happened\" and \"MetricsLayer knows about it.\"\n\n## Approach\nIn each stage function, after the work loop completes, record counts into the current span:\n\n### src/ingestion/orchestrator.rs - ingest_project_issues_with_progress() (~line 110)\nAfter issues are fetched and discussions synced:\n```rust\ntracing::Span::current().record(\"items_processed\", result.issues_upserted);\ntracing::Span::current().record(\"items_skipped\", result.issues_skipped);\ntracing::Span::current().record(\"errors\", result.errors);\n```\n\n### src/ingestion/orchestrator.rs - drain_resource_events() (~line 566)\nAfter the drain loop:\n```rust\ntracing::Span::current().record(\"items_processed\", result.fetched);\ntracing::Span::current().record(\"errors\", result.failed);\n```\n\n### src/documents/regenerator.rs - regenerate_dirty_documents() (~line 24)\nAfter the regeneration loop:\n```rust\ntracing::Span::current().record(\"items_processed\", result.regenerated);\ntracing::Span::current().record(\"items_skipped\", result.unchanged);\ntracing::Span::current().record(\"errors\", result.errored);\n```\n\n### 
src/embedding/pipeline.rs - embed_documents() (~line 36)\nAfter embedding completes:\n```rust\ntracing::Span::current().record(\"items_processed\", result.embedded);\ntracing::Span::current().record(\"items_skipped\", result.skipped);\ntracing::Span::current().record(\"errors\", result.failed);\n```\n\nIMPORTANT: These fields must be declared as tracing::field::Empty in the #[instrument] attribute (done in bd-24j1). You can only record() a field that was declared at span creation. Attempting to record an undeclared field silently does nothing.\n\n## Acceptance Criteria\n- [ ] MetricsLayer captures items_processed for each stage\n- [ ] MetricsLayer captures items_skipped and errors when non-zero\n- [ ] Fields match the span declarations from bd-24j1\n- [ ] extract_timings() returns correct counts in StageTiming\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/ingestion/orchestrator.rs (record counts in ingest + drain functions)\n- src/documents/regenerator.rs (record counts in regenerate)\n- src/embedding/pipeline.rs (record counts in embed)\n\n## TDD Loop\nRED: test_stage_fields_recorded (integration: run pipeline, extract timings, verify counts > 0)\nGREEN: Add Span::current().record() calls at end of each stage\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- Span::current() returns a disabled span if no subscriber is registered (e.g., in tests without subscriber setup). record() on disabled span is a no-op. Tests need a subscriber.\n- Field names must exactly match the declaration: \"items_processed\" not \"itemsProcessed\"\n- Recording must happen BEFORE the span closes (before function returns). 
Place at end of function but before Ok(result).","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T15:54:32.011236Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:27:38.620645Z","closed_at":"2026-02-04T17:27:38.620601Z","close_reason":"Added tracing::field::Empty declarations and Span::current().record() calls in 4 functions: ingest_project_issues, ingest_project_merge_requests, drain_resource_events, regenerate_dirty_documents, embed_documents","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-16m8","depends_on_id":"bd-24j1","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-16m8","depends_on_id":"bd-34ek","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-16m8","depends_on_id":"bd-3er","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-16qf","title":"Phase 4b: Switch timeline_seed_tests to #[asupersync::test]","description":"## What\nSwitch 13 tests in timeline/timeline_seed_tests.rs from #[tokio::test] to #[asupersync::test].\n\n## Rearchitecture Context (2026-03-06)\nThe timeline subsystem was extracted from core/ into its own top-level module:\n- src/core/timeline_seed_tests.rs -> src/timeline/timeline_seed_tests.rs\n- src/core/timeline.rs -> src/timeline/types.rs\n- src/core/timeline_seed.rs -> src/timeline/seed.rs\n- src/core/timeline_expand.rs -> src/timeline/expand.rs\n- src/core/timeline_collect.rs -> src/timeline/collect.rs\n\n## Why\nThese tests are pure CPU/SQLite with no HTTP and no tokio APIs. They are safe and straightforward to migrate. Running them on asupersync validates the test macro works correctly.\n\n## Implementation\n```rust\n// Before\n#[tokio::test]\nasync fn test_foo() { ... }\n\n// After\n#[asupersync::test]\nasync fn test_foo() { ... 
}\n```\n\n## Files Changed\n- src/timeline/timeline_seed_tests.rs (~13 test attribute changes)\n\n## Testing\n- cargo test -- timeline_seed (all 13 must pass)\n\n## Depends On\n- Phase 3a (asupersync must be in deps for the test macro)","status":"open","priority":2,"issue_type":"task","created_at":"2026-03-06T18:41:35.654811Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:52:19.475910Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-4","testing"],"dependencies":[{"issue_id":"bd-16qf","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:35.656827Z","created_by":"tayloreernisse"},{"issue_id":"bd-16qf","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:56.052380Z","created_by":"tayloreernisse"}]} +{"id":"bd-16qf","title":"Phase 4b: Switch timeline_seed_tests to #[asupersync::test]","description":"## What\nSwitch 13 tests in timeline/timeline_seed_tests.rs from #[tokio::test] to #[asupersync::test].\n\n## Rearchitecture Context (2026-03-06)\nThe timeline subsystem was extracted from core/ into its own top-level module:\n- src/core/timeline_seed_tests.rs -> src/timeline/timeline_seed_tests.rs\n- src/core/timeline.rs -> src/timeline/types.rs\n- src/core/timeline_seed.rs -> src/timeline/seed.rs\n- src/core/timeline_expand.rs -> src/timeline/expand.rs\n- src/core/timeline_collect.rs -> src/timeline/collect.rs\n\n## Why\nThese tests are pure CPU/SQLite with no HTTP and no tokio APIs. They are safe and straightforward to migrate. Running them on asupersync validates the test macro works correctly.\n\n## Implementation\n```rust\n// Before\n#[tokio::test]\nasync fn test_foo() { ... }\n\n// After\n#[asupersync::test]\nasync fn test_foo() { ... 
}\n```\n\n## Files Changed\n- src/timeline/timeline_seed_tests.rs (~13 test attribute changes)\n\n## Testing\n- cargo test -- timeline_seed (all 13 must pass)\n\n## Depends On\n- Phase 3a (asupersync must be in deps for the test macro)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-03-06T18:41:35.654811Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.656849Z","closed_at":"2026-03-06T21:11:12.656699Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-4","testing"],"dependencies":[{"issue_id":"bd-16qf","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:35.656827Z","created_by":"tayloreernisse"},{"issue_id":"bd-16qf","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:56.052380Z","created_by":"tayloreernisse"}]} {"id":"bd-17n","title":"OBSERV: Add LoggingConfig to Config struct","description":"## Background\nLoggingConfig centralizes log file settings so users can customize retention and disable file logging. It follows the same #[serde(default)] pattern as SyncConfig (src/core/config.rs:32-78) so existing config.json files continue working with zero changes.\n\n## Approach\nAdd to src/core/config.rs, after the EmbeddingConfig struct (around line 120):\n\n```rust\n#[derive(Debug, Clone, Deserialize)]\n#[serde(default)]\npub struct LoggingConfig {\n /// Directory for log files. Default: None (= XDG data dir + /logs/)\n pub log_dir: Option,\n\n /// Days to retain log files. Default: 30. Set to 0 to disable file logging.\n pub retention_days: u32,\n\n /// Enable JSON log files. 
Default: true.\n pub file_logging: bool,\n}\n\nimpl Default for LoggingConfig {\n fn default() -> Self {\n Self {\n log_dir: None,\n retention_days: 30,\n file_logging: true,\n }\n }\n}\n```\n\nAdd to the Config struct (src/core/config.rs:123-137), after the embedding field:\n\n```rust\n#[serde(default)]\npub logging: LoggingConfig,\n```\n\nNote: Using impl Default rather than default helper functions (default_retention_days, default_true) because #[serde(default)] on the struct applies Default::default() to the entire struct when the key is missing. This is the same pattern used by SyncConfig.\n\n## Acceptance Criteria\n- [ ] Deserializing {} as LoggingConfig yields retention_days=30, file_logging=true, log_dir=None\n- [ ] Deserializing {\"retention_days\": 7} preserves file_logging=true default\n- [ ] Existing config.json files (no \"logging\" key) deserialize without error\n- [ ] Config struct has .logging field accessible\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/core/config.rs (add LoggingConfig struct + Default impl, add field to Config)\n\n## TDD Loop\nRED: tests/config_tests.rs (or inline #[cfg(test)] mod):\n - test_logging_config_defaults\n - test_logging_config_partial\nGREEN: Add LoggingConfig struct, Default impl, field on Config\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- retention_days=0 means disable file logging entirely (not \"delete all files\") -- document this in the struct doc comment\n- log_dir with a relative path: should be resolved relative to CWD or treated as absolute? 
Decision: treat as absolute, document it\n- Missing \"logging\" key in JSON: #[serde(default)] handles this -- the entire LoggingConfig gets Default::default()","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T15:53:55.471193Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:10:22.751969Z","closed_at":"2026-02-04T17:10:22.751921Z","close_reason":"Added LoggingConfig struct with log_dir, retention_days, file_logging fields and serde defaults","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-17n","depends_on_id":"bd-2nx","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-17v","title":"[CP1] gi sync-status enhancement","description":"## Background\n\nThe `gi sync-status` command shows synchronization state: last successful sync time, cursor positions per project/resource, and overall health. This helps users understand when data was last refreshed and diagnose sync issues.\n\n## Approach\n\n### Module: src/cli/commands/sync_status.rs (enhance existing or create)\n\n### Handler Function\n\n```rust\npub async fn handle_sync_status(conn: &Connection) -> Result<()>\n```\n\n### Data to Display\n\n1. **Last sync run**: From `sync_runs` table\n - Started at, completed at, status\n - Issues fetched, discussions fetched\n\n2. **Cursor positions**: From `sync_cursors` table\n - Per (project, resource_type) pair\n - Show updated_at_cursor as human-readable date\n - Show tie_breaker_id (GitLab ID of last processed item)\n\n3. 
**Overall counts**: Quick summary\n - Total issues, discussions, notes in DB\n\n### Output Format\n\n```\nLast Sync\n─────────\nStatus: completed\nStarted: 2024-01-25 10:30:00\nCompleted: 2024-01-25 10:35:00\nDuration: 5m 23s\n\nCursor Positions\n────────────────\ngroup/project-one (issues):\n Last updated_at: 2024-01-25 10:30:00\n Last GitLab ID: 12345\n\nData Summary\n────────────\nIssues: 1,234\nDiscussions: 5,678\nNotes: 12,345 (excluding 2,000 system)\n```\n\n### Queries\n\n```sql\n-- Last sync run\nSELECT * FROM sync_runs ORDER BY started_at DESC LIMIT 1\n\n-- Cursor positions\nSELECT p.path, sc.resource_type, sc.updated_at_cursor, sc.tie_breaker_id\nFROM sync_cursors sc\nJOIN projects p ON sc.project_id = p.id\n\n-- Data summary\nSELECT COUNT(*) FROM issues\nSELECT COUNT(*) FROM discussions\nSELECT COUNT(*), SUM(is_system) FROM notes\n```\n\n## Acceptance Criteria\n\n- [ ] Shows last sync run with status and timing\n- [ ] Shows cursor position per project/resource\n- [ ] Shows total counts for issues, discussions, notes\n- [ ] Handles case where no sync has run yet\n- [ ] Formats timestamps as human-readable local time\n\n## Files\n\n- src/cli/commands/sync_status.rs (create or enhance)\n- src/cli/mod.rs (add SyncStatus variant if new)\n\n## TDD Loop\n\nRED:\n```rust\n#[tokio::test] async fn sync_status_shows_last_run()\n#[tokio::test] async fn sync_status_shows_cursor_positions()\n#[tokio::test] async fn sync_status_handles_no_sync_yet()\n```\n\nGREEN: Implement handler with queries and formatting\n\nVERIFY: `cargo test sync_status`\n\n## Edge Cases\n\n- No sync has ever run - show \"No sync runs recorded\"\n- Sync in progress - show \"Status: running\" with started_at\n- Cursor at epoch 0 - means fresh start, show \"Not started\"\n- Multiple projects - show cursor for 
each","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-25T17:02:38.409353Z","created_by":"tayloreernisse","updated_at":"2026-01-25T23:03:21.851557Z","closed_at":"2026-01-25T23:03:21.851496Z","close_reason":"Implemented gi sync-status showing last run, cursor positions, and data summary","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-17v","depends_on_id":"bd-208","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-18ai","title":"Phase 4a: Adapter integration tests (asupersync-native)","description":"## What\nAdd 3-5 asupersync-native integration tests that exercise the HTTP adapter against a simple HTTP server (not wiremock). These close the gap between wiremock tests (on tokio) and production (on asupersync).\n\n## Why\nThe 42 wiremock tests validate protocol correctness but run on tokio. The adapter actual behavior under the production runtime is untested by mocked-response tests. These tests close the \"works on tokio but does it work on asupersync?\" gap.\n\n## Tests to Write (~50 LOC each)\n\n1. **GET with headers + JSON response** — verify header passing and JSON deserialization through adapter\n2. **POST with JSON body** — verify Content-Type injection and body serialization\n3. **429 + Retry-After** — verify adapter surfaces rate-limit responses correctly\n4. **Timeout** — verify adapter asupersync::time::timeout wrapper fires\n5. **Large response rejection** — verify body size guard triggers at 64 MiB\n\n## Test Infrastructure\nUse a simple HTTP server (hyper or raw TCP listener) instead of wiremock. 
These tests use #[asupersync::test] since they exercise the production runtime.\n\n## Files Changed\n- src/http.rs or tests/http_integration.rs (NEW, ~250 LOC)\n\n## Testing\n- cargo test (new tests must pass)\n\n## Depends On\n- Phase 1 (adapter must exist)\n- Phase 3a (asupersync must be in deps)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:41:29.108891Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:42:55.745136Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-4","testing"],"dependencies":[{"issue_id":"bd-18ai","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:29.116461Z","created_by":"tayloreernisse"},{"issue_id":"bd-18ai","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:55.745113Z","created_by":"tayloreernisse"},{"issue_id":"bd-18ai","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:55.600869Z","created_by":"tayloreernisse"}]} +{"id":"bd-18ai","title":"Phase 4a: Adapter integration tests (asupersync-native)","description":"## What\nAdd 3-5 asupersync-native integration tests that exercise the HTTP adapter against a simple HTTP server (not wiremock). These close the gap between wiremock tests (on tokio) and production (on asupersync).\n\n## Why\nThe 42 wiremock tests validate protocol correctness but run on tokio. The adapter actual behavior under the production runtime is untested by mocked-response tests. These tests close the \"works on tokio but does it work on asupersync?\" gap.\n\n## Tests to Write (~50 LOC each)\n\n1. **GET with headers + JSON response** — verify header passing and JSON deserialization through adapter\n2. **POST with JSON body** — verify Content-Type injection and body serialization\n3. **429 + Retry-After** — verify adapter surfaces rate-limit responses correctly\n4. **Timeout** — verify adapter asupersync::time::timeout wrapper fires\n5. 
**Large response rejection** — verify body size guard triggers at 64 MiB\n\n## Test Infrastructure\nUse a simple HTTP server (hyper or raw TCP listener) instead of wiremock. These tests use #[asupersync::test] since they exercise the production runtime.\n\n## Files Changed\n- src/http.rs or tests/http_integration.rs (NEW, ~250 LOC)\n\n## Testing\n- cargo test (new tests must pass)\n\n## Depends On\n- Phase 1 (adapter must exist)\n- Phase 3a (asupersync must be in deps)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:41:29.108891Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.652737Z","closed_at":"2026-03-06T21:11:12.652691Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-4","testing"],"dependencies":[{"issue_id":"bd-18ai","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:29.116461Z","created_by":"tayloreernisse"},{"issue_id":"bd-18ai","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:55.745113Z","created_by":"tayloreernisse"},{"issue_id":"bd-18ai","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:55.600869Z","created_by":"tayloreernisse"}]} {"id":"bd-18bf","title":"NOTE-0B: Immediate deletion propagation for swept notes","description":"## Background\nWhen sweep deletes stale notes, orphaned note documents remain in search results until generate-docs --full runs. This erodes dataset trust. Propagate deletion to documents immediately in the same transaction.\n\n## Approach\nUpdate both sweep functions (issue + MR) to use set-based SQL that deletes documents and dirty_sources entries for stale notes before deleting the note rows:\n\nStep 1: DELETE FROM documents WHERE source_type = 'note' AND source_id IN (SELECT id FROM notes WHERE discussion_id = ? AND last_seen_at < ? 
AND is_system = 0)\nStep 2: DELETE FROM dirty_sources WHERE source_type = 'note' AND source_id IN (same subquery)\nStep 3: DELETE FROM notes WHERE discussion_id = ? AND last_seen_at < ?\n\nDocument DELETE cascades to document_labels/document_paths via ON DELETE CASCADE (defined in migration 007_documents.sql). FTS trigger documents_ad auto-removes FTS entry (defined in migration 008_fts5.sql). Same pattern for mr_discussions.rs sweep.\n\nNote: MR sweep_stale_notes() at line 551 uses a different WHERE clause (project_id + discussion_id IN subquery + last_seen_at). Apply the same document propagation pattern with the matching subquery.\n\n## Files\n- MODIFY: src/ingestion/discussions.rs (update sweep_stale_issue_notes from NOTE-0A)\n- MODIFY: src/ingestion/mr_discussions.rs (update sweep_stale_notes at line 551)\n\n## TDD Anchor\nRED: test_issue_note_sweep_deletes_note_documents_immediately — setup 3 notes with documents, re-sync 2, sweep, assert stale doc deleted.\nGREEN: Add document/dirty_sources DELETE before note DELETE in sweep functions.\nVERIFY: cargo test sweep_deletes_note_documents -- --nocapture\nTests: test_mr_note_sweep_deletes_note_documents_immediately, test_sweep_deletion_handles_note_without_document, test_set_based_deletion_atomicity\n\n## Acceptance Criteria\n- [ ] Stale note sweep deletes corresponding documents in same transaction\n- [ ] Stale note sweep deletes corresponding dirty_sources entries\n- [ ] Non-system notes only — system notes never have documents (is_system = 0 filter)\n- [ ] Set-based SQL (not per-note loops) for performance\n- [ ] Works for both issue and MR discussion sweeps\n- [ ] No error when sweeping notes that have no documents (DELETE WHERE on absent rows = no-op)\n- [ ] All 4 tests pass\n\n## Dependency Context\n- Depends on NOTE-0A (bd-3bpk): uses sweep_stale_issue_notes/sweep_stale_notes functions created/modified in that bead\n- Depends on NOTE-2A (bd-1oi7): documents table must accept source_type='note' (migration 
024 adds CHECK constraint)\n\n## Edge Cases\n- System notes: WHERE clause filters with is_system = 0 (system notes never get documents)\n- Notes without documents: DELETE WHERE on non-existent document is a no-op in SQLite\n- FTS consistency: documents_ad trigger (migration 008) handles FTS cleanup on document DELETE","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T16:59:33.412628Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:15.082355Z","closed_at":"2026-02-12T18:13:15.082307Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["per-note","search"]} {"id":"bd-18dn","title":"Add normalize_query_path() pure function for path canonicalization","description":"## Background\nPlan section 3a (iteration 6, feedback-6). User input paths like `./src//foo.rs` or whitespace-padded paths fail path resolution even when the file exists in the database. A syntactic normalization function runs before `build_path_query()` to reduce false negatives.\n\n## Approach\nAdd `normalize_query_path()` as a private pure function in who.rs (near `half_life_decay()`):\n\n```rust\nfn normalize_query_path(input: &str) -> String {\n let trimmed = input.trim();\n let stripped = trimmed.strip_prefix(\"./\").unwrap_or(trimmed);\n // Collapse repeated /\n let mut result = String::with_capacity(stripped.len());\n let mut prev_slash = false;\n for ch in stripped.chars() {\n if ch == '/' {\n if !prev_slash { result.push('/'); }\n prev_slash = true;\n } else {\n result.push(ch);\n prev_slash = false;\n }\n }\n result\n}\n```\n\nCalled once at top of `run_who()` before `build_path_query()`. 
Robot JSON `resolved_input` includes both `path_input_original` (raw) and `path_input_normalized` (after canonicalization).\n\nRules:\n- Strip leading `./`\n- Collapse repeated `/` (e.g., `src//foo.rs` -> `src/foo.rs`)\n- Trim leading/trailing whitespace\n- Preserve trailing `/` (signals explicit prefix intent)\n- Purely syntactic — no filesystem or DB lookups\n\n## TDD Loop\n\n### RED (write first):\n```rust\n#[test]\nfn test_path_normalization_handles_dot_and_double_slash() {\n assert_eq!(normalize_query_path(\"./src//foo.rs\"), \"src/foo.rs\");\n assert_eq!(normalize_query_path(\" src/bar.rs \"), \"src/bar.rs\");\n assert_eq!(normalize_query_path(\"src/foo.rs\"), \"src/foo.rs\"); // unchanged\n assert_eq!(normalize_query_path(\"\"), \"\"); // empty passthrough\n}\n\n#[test]\nfn test_path_normalization_preserves_prefix_semantics() {\n assert_eq!(normalize_query_path(\"./src/dir/\"), \"src/dir/\"); // trailing slash preserved\n assert_eq!(normalize_query_path(\"src/dir\"), \"src/dir\"); // no trailing slash = file\n}\n```\n\n### GREEN: Implement normalize_query_path (5-10 lines).\n### VERIFY: `cargo test -p lore -- test_path_normalization`\n\n## Acceptance Criteria\n- [ ] test_path_normalization_handles_dot_and_double_slash passes\n- [ ] test_path_normalization_preserves_prefix_semantics passes\n- [ ] Function is private (`fn` not `pub fn`)\n- [ ] No DB or filesystem dependency — pure string function\n- [ ] Called in run_who() before build_path_query()\n- [ ] Robot JSON resolved_input includes path_input_original and path_input_normalized\n\n## Files\n- src/cli/commands/who.rs (function near half_life_decay, call site in run_who)\n\n## Edge Cases\n- Empty string -> empty string\n- Only whitespace -> empty string\n- Multiple leading ./ (\"././src\") -> strip first \"./\" only per plan spec\n- Trailing slash preserved for prefix 
intent","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T19:29:27.954857Z","created_by":"tayloreernisse","updated_at":"2026-02-12T20:43:04.406259Z","closed_at":"2026-02-12T20:43:04.406217Z","close_reason":"Implemented by time-decay swarm: 3 agents, 12 tasks, 621 tests passing, all quality gates green","compaction_level":0,"original_size":0,"labels":["scoring"]} {"id":"bd-18qs","title":"Implement entity table + filter bar widgets","description":"## Background\nThe entity table and filter bar are shared widgets used by Issue List, MR List, and potentially Search results. The entity table supports sortable columns with responsive width allocation. The filter bar provides a typed DSL for filtering with inline diagnostics.\n\n## Approach\nEntity Table (view/common/entity_table.rs):\n- EntityTable widget: generic over row type\n- TableRow trait: fn cells(&self) -> Vec, fn sort_key(&self, col: usize) -> Ordering\n- Column definitions: name, min_width, flex_weight, alignment, sort_field\n- Responsive column fitting: hide low-priority columns as terminal narrows\n- Keyboard: j/k scroll, J/K page scroll, Tab cycle sort column, Enter select, g+g top, G bottom\n- Visual: alternating row colors, selected row highlight, sort indicator arrow\n\nFilter Bar (view/common/filter_bar.rs):\n- FilterBar widget wrapping ftui TextInput\n- DSL parsing (crate filter_dsl.rs): quoted values (\"in progress\"), negation prefix (-closed), field:value syntax (author:taylor, state:opened, label:bug), free-text search\n- Inline diagnostics: unknown field names highlighted, cursor position for error\n- Applied filter chips shown as tags below the input\n\nFilter DSL (filter_dsl.rs):\n- parse_filter_tokens(input: &str) -> Vec\n- FilterToken enum: FieldValue{field, value}, Negation{field, value}, FreeText(String), QuotedValue(String)\n- Validation: known fields per entity type (issues: state, author, assignee, label, milestone, status; MRs: state, author, reviewer, 
target_branch, source_branch, label, draft)\n\n## Acceptance Criteria\n- [ ] EntityTable renders with responsive column widths\n- [ ] Columns hide gracefully when terminal is too narrow\n- [ ] j/k scrolls, Enter selects, Tab cycles sort column\n- [ ] Sort indicator (arrow) shows on active sort column\n- [ ] FilterBar captures text input and parses DSL tokens\n- [ ] Quoted values preserved as single token\n- [ ] Negation prefix (-closed) creates exclusion filter\n- [ ] field:value syntax maps to typed filter fields\n- [ ] Unknown field names highlighted as error\n- [ ] Filter chips rendered below input bar\n\n## Files\n- CREATE: crates/lore-tui/src/view/common/entity_table.rs\n- CREATE: crates/lore-tui/src/view/common/filter_bar.rs\n- CREATE: crates/lore-tui/src/filter_dsl.rs\n\n## TDD Anchor\nRED: Write test_parse_filter_basic in filter_dsl.rs that parses \"state:opened author:taylor\" and asserts two FieldValue tokens.\nGREEN: Implement parse_filter_tokens with field:value splitting.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml test_parse_filter\n\nAdditional tests:\n- test_parse_quoted_value: \"in progress\" -> single QuotedValue token\n- test_parse_negation: -closed -> Negation token\n- test_parse_mixed: state:opened \"bug fix\" -wontfix -> 3 tokens of correct types\n- test_column_hiding: EntityTable with 5 columns hides lowest priority at 60 cols\n\n## Edge Cases\n- Filter DSL must handle Unicode in values (CJK issue titles)\n- Empty filter string should show all results (no-op)\n- Very long filter strings must not overflow the input area\n- Tab cycling sort must wrap around (last column -> first)\n- Column widths must respect min_width even when terminal is very 
narrow","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T16:58:07.586225Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:28.085981Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-18qs","depends_on_id":"bd-1cl9","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-18qs","depends_on_id":"bd-6pmy","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -48,7 +48,7 @@ {"id":"bd-1ht","title":"Epic: Gate 5 - Code Trace (lore trace)","description":"## Background\n\nGate 5 implements 'lore trace' — answers 'Why was this code introduced?' by tracing from a file path through the MR that modified it, to the issue that motivated the MR, to the discussions with decision rationale. Capstone of Phase B.\n\nGate 5 ships Tier 1 only (API-only, no local git). Tier 2 (git blame via git2-rs) deferred to Phase C.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Gate 5 (Sections 5.1-5.7).\n\n## Prerequisites\n\n- Gates 1-2 COMPLETE: entity_references populated, resource events fetched\n- Gate 4 (bd-14q): provides mr_file_changes table + resolve_rename_chain algorithm\n- entity_references source_method: 'api' | 'note_parse' | 'description_parse'\n- discussions/notes tables for DiffNote content\n- merge_requests.merged_at exists (migration 006). Use COALESCE(merged_at, updated_at) for ordering.\n\n## Architecture\n\n- **No new tables.** Trace queries combine mr_file_changes, entity_references, discussions/notes\n- **Query flow:** file -> mr_file_changes -> MRs -> entity_references (closes/related) -> issues -> discussions with DiffNote context\n- **Tier 1:** File-level granularity only. 
Cannot trace a specific line to its introducing commit.\n- **Path parsing:** Supports 'src/foo.rs:45' syntax — line number parsed but deferred with Tier 2 warning.\n- **Rename aware:** Reuses file_history::resolve_rename_chain for multi-path matching.\n\n## Children (Execution Order)\n\n1. **bd-2n4** — Trace query logic: file -> MR -> issue -> discussion chain (src/core/trace.rs)\n2. **bd-9dd** — CLI command with human + robot output (src/cli/commands/trace.rs)\n\n## Gate Completion Criteria\n\n- [ ] `lore trace ` shows MRs with linked issues + discussion context\n- [ ] Output includes MR -> issue -> discussion chain\n- [ ] DiffNote snippets show content on the traced file\n- [ ] Cross-references from entity_references used for MR->issue linking\n- [ ] :line suffix parses and emits Tier 2 warning\n- [ ] Robot mode JSON with tier: 'api_only'\n- [ ] Graceful handling when no MR data found (suggest sync with fetchMrFileChanges)\n","status":"closed","priority":1,"issue_type":"feature","created_at":"2026-02-02T21:31:01.141053Z","created_by":"tayloreernisse","updated_at":"2026-02-18T21:44:16.629378Z","closed_at":"2026-02-18T21:44:16.629324Z","close_reason":"Gate 5 complete. trace command shipped, all impl children closed. Only doc update tasks (bd-1v8, bd-2fc) remain as independent tasks.","compaction_level":0,"original_size":0,"labels":["epic","gate-5","phase-b"],"dependencies":[{"issue_id":"bd-1ht","depends_on_id":"bd-14q","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1ht","depends_on_id":"bd-1se","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1i2","title":"Integrate mark_dirty_tx into ingestion modules","description":"## Background\nThis bead integrates dirty source tracking into the existing ingestion pipelines. Every entity upserted during ingestion must be marked dirty so the document regenerator knows to update the corresponding search document. 
The critical constraint: mark_dirty_tx() must be called INSIDE the same transaction that upserts the entity — not after commit.\n\n**Key PRD clarification:** Mark ALL upserted entities dirty (not just changed ones). The regenerator's hash comparison handles \"unchanged\" detection cheaply — this avoids needing change detection in ingestion.\n\n## Approach\nModify 4 existing ingestion files to add mark_dirty_tx() calls inside existing transaction blocks per PRD Section 6.1.\n\n**1. src/ingestion/issues.rs:**\nInside the issue upsert loop, after each successful INSERT/UPDATE:\n```rust\ndirty_tracker::mark_dirty_tx(&tx, SourceType::Issue, issue_row.id)?;\n```\n\n**2. src/ingestion/merge_requests.rs:**\nInside the MR upsert loop:\n```rust\ndirty_tracker::mark_dirty_tx(&tx, SourceType::MergeRequest, mr_row.id)?;\n```\n\n**3. src/ingestion/discussions.rs:**\nInside discussion insert (issue discussions, full-refresh transaction):\n```rust\ndirty_tracker::mark_dirty_tx(&tx, SourceType::Discussion, discussion_row.id)?;\n```\n\n**4. src/ingestion/mr_discussions.rs:**\nInside discussion upsert (write phase):\n```rust\ndirty_tracker::mark_dirty_tx(&tx, SourceType::Discussion, discussion_row.id)?;\n```\n\n**Discussion Sweep Cleanup (PRD Section 6.1 — CRITICAL):**\nWhen the MR discussion sweep deletes stale discussions (`last_seen_at < run_start_time`), **delete the corresponding document rows directly** — do NOT use the dirty queue for cleanup. The `ON DELETE CASCADE` on `document_labels`/`document_paths` and the `documents_embeddings_ad` trigger handle all downstream cleanup.\n\n**PRD-exact CTE pattern:**\n```sql\n-- In src/ingestion/mr_discussions.rs, during sweep phase.\n-- Uses a CTE to capture stale IDs atomically before cascading deletes.\n-- This is more defensive than two separate statements because the CTE\n-- guarantees the ID set is captured before any row is deleted.\nWITH stale AS (\n SELECT id FROM discussions\n WHERE merge_request_id = ? 
AND last_seen_at < ?\n)\n-- Step 1: delete orphaned documents (must happen while source_id still resolves)\nDELETE FROM documents\n WHERE source_type = 'discussion' AND source_id IN (SELECT id FROM stale);\n-- Step 2: delete the stale discussions themselves\nDELETE FROM discussions\n WHERE id IN (SELECT id FROM stale);\n```\n\n**NOTE:** If SQLite version doesn't support CTE-based multi-statement, execute as two sequential statements capturing IDs in Rust first:\n```rust\nlet stale_ids: Vec<i64> = conn.prepare(\n \"SELECT id FROM discussions WHERE merge_request_id = ? AND last_seen_at < ?\"\n)?.query_map(params![mr_id, run_start], |r| r.get(0))?\n .collect::<Result<Vec<i64>, _>>()?;\n\nif !stale_ids.is_empty() {\n // Delete documents FIRST (while source_id still resolves)\n conn.execute(\n \"DELETE FROM documents WHERE source_type = 'discussion' AND source_id IN (...)\",\n ...\n )?;\n // Then delete the discussions\n conn.execute(\n \"DELETE FROM discussions WHERE id IN (...)\",\n ...\n )?;\n}\n```\n\n**IMPORTANT difference from dirty queue pattern:** The sweep deletes documents DIRECTLY (not via dirty_sources queue). This is because the source entity is being deleted — there's nothing for the regenerator to regenerate from. 
The cascade handles FTS, labels, paths, and embeddings cleanup.\n\n## Acceptance Criteria\n- [ ] Every upserted issue is marked dirty inside the same transaction\n- [ ] Every upserted MR is marked dirty inside the same transaction\n- [ ] Every upserted discussion (issue + MR) is marked dirty inside the same transaction\n- [ ] ALL upserted entities marked dirty (not just changed ones) — regenerator handles skip\n- [ ] mark_dirty_tx called with &Transaction (not &Connection)\n- [ ] mark_dirty_tx uses upsert with ON CONFLICT to reset backoff state (not INSERT OR IGNORE)\n- [ ] Discussion sweep deletes documents DIRECTLY (not via dirty queue)\n- [ ] Discussion sweep uses CTE (or Rust-side ID capture) to capture stale IDs before cascading deletes\n- [ ] Documents deleted BEFORE discussions (while source_id still resolves)\n- [ ] ON DELETE CASCADE handles document_labels, document_paths cleanup\n- [ ] documents_embeddings_ad trigger handles embedding cleanup\n- [ ] `cargo build` succeeds\n- [ ] Existing ingestion tests still pass\n\n## Files\n- `src/ingestion/issues.rs` — add mark_dirty_tx calls in upsert loop\n- `src/ingestion/merge_requests.rs` — add mark_dirty_tx calls in upsert loop\n- `src/ingestion/discussions.rs` — add mark_dirty_tx calls in insert loop\n- `src/ingestion/mr_discussions.rs` — add mark_dirty_tx calls + direct document deletion in sweep\n\n## TDD Loop\nRED: Existing tests should still pass (regression); new tests:\n- `test_issue_upsert_marks_dirty` — after issue ingest, dirty_sources has entry\n- `test_mr_upsert_marks_dirty` — after MR ingest, dirty_sources has entry\n- `test_discussion_upsert_marks_dirty` — after discussion ingest, dirty_sources has entry\n- `test_discussion_sweep_deletes_documents` — stale discussion documents deleted directly\n- `test_sweep_cascade_cleans_labels_paths` — ON DELETE CASCADE works\nGREEN: Add mark_dirty_tx calls in all 4 files, implement sweep with CTE\nVERIFY: `cargo test ingestion && cargo build`\n\n## Edge 
Cases\n- Upsert that doesn't change data: still marks dirty (regenerator hash check handles skip)\n- Transaction rollback: dirty mark also rolled back (atomic, inside same txn)\n- Discussion sweep with zero stale IDs: CTE returns empty, no DELETE executed\n- Large batch of upserts: each mark_dirty_tx is O(1) INSERT with ON CONFLICT\n- Sweep deletes document before discussion: order matters for source_id resolution","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:27:09.540279Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:39:17.241433Z","closed_at":"2026-01-30T17:39:17.241390Z","close_reason":"Added mark_dirty_tx calls in issues.rs, merge_requests.rs, discussions.rs, mr_discussions.rs (2 paths)","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1i2","depends_on_id":"bd-38q","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1i4i","title":"Implement run_sync_surgical orchestration function","description":"## Background\n\nThe surgical sync pipeline needs a top-level orchestration function that coordinates the full pipeline for syncing specific IIDs. Unlike `run_sync` (lines 63-360 of `src/cli/commands/sync.rs`) which syncs all projects and all entities, `run_sync_surgical` targets specific issues/MRs by IID within a single project. The pipeline stages are: resolve project, record sync run, preflight fetch, check cancellation, acquire lock, ingest with TOCTOU guards, inline dependent enrichment (discussions, events, diffs), scoped doc regeneration, scoped embedding, finalize recorder, and build `SyncResult`.\n\n## Approach\n\nCreate `pub async fn run_sync_surgical()` in a new file `src/cli/commands/sync_surgical.rs`. 
Signature:\n\n```rust\npub async fn run_sync_surgical(\n config: &Config,\n options: SyncOptions,\n run_id: Option<&str>,\n signal: &ShutdownSignal,\n) -> Result\n```\n\nThe function reads `options.issue_iids` and `options.mr_iids` (added by bd-1lja) to determine target IIDs. Pipeline:\n\n1. **Resolve project**: Call `resolve_project(conn, project_str)` from `src/core/project.rs` to get `gitlab_project_id`.\n2. **Start recorder**: `SyncRunRecorder::start(&recorder_conn, \"surgical-sync\", run_id)`. Note: `succeed()` and `fail()` consume `self`, so control flow must ensure exactly one terminal call.\n3. **Preflight fetch**: For each IID, call `get_issue_by_iid` / `get_mr_by_iid` (bd-159p) to confirm the entity exists on GitLab and capture `updated_at` for TOCTOU.\n4. **Check cancellation**: `if signal.is_cancelled() { recorder.fail(...); return Ok(result); }`\n5. **Acquire lock**: `AppLock::new(conn, LockOptions { name: \"surgical-sync\".into(), stale_lock_minutes: config.sync.stale_lock_minutes, heartbeat_interval_seconds: config.sync.heartbeat_interval_seconds })`. Lock must `acquire(force)` and `release()` on all exit paths.\n6. **Ingest with TOCTOU**: For each preflight entity, call surgical ingest (bd-3sez). Compare DB `updated_at` with preflight `updated_at`; skip if already current. Record outcome in `EntitySyncResult`.\n7. **Inline dependents**: For ingested entities, fetch discussions, resource events (if `config.sync.fetch_resource_events`), MR diffs (if `config.sync.fetch_mr_file_changes`). Use `config.sync.requests_per_second` for rate limiting.\n8. **Scoped docs**: Call `run_generate_docs_for_sources()` (bd-hs6j) with only the affected entity source IDs.\n9. **Scoped embed**: Call `run_embed_for_document_ids()` (bd-1elx) with only the regenerated document IDs.\n10. **Finalize**: `recorder.succeed(conn, &metrics, total_items, total_errors)`.\n11. 
**Build SyncResult**: Populate surgical fields (bd-wcja): `surgical_mode: Some(true)`, `surgical_iids`, `entity_results`, `preflight_only`.\n\nIf `options.preflight_only` is set, return after step 3 with the preflight data and skip steps 4-10.\n\nProgress output uses `stage_spinner_v2(icon, label, msg, robot_mode)` from `src/cli/progress.rs` line 18 during execution, and `format_stage_line(icon, label, summary, elapsed)` from `src/cli/progress.rs` line 67 for completion lines. Stage icons via `Icons::sync()` from `src/cli/render.rs` line 208. Error completion uses `color_icon(icon, has_errors)` from `src/cli/commands/sync.rs` line 55.\n\n## Acceptance Criteria\n\n1. `run_sync_surgical` compiles and runs the full pipeline for 1+ issue IIDs\n2. Preflight-only mode returns early with fetched entity data, no DB writes beyond recorder\n3. TOCTOU: entities whose DB `updated_at` matches preflight `updated_at` are skipped with `skipped_toctou` outcome\n4. Cancellation at any stage between preflight and ingest stops processing, calls `recorder.fail()`\n5. Lock is acquired before ingest and released on all exit paths (success, error, cancellation)\n6. `SyncResult` surgical fields are populated: `surgical_mode`, `surgical_iids`, `entity_results`\n7. Robot mode produces valid JSON with per-entity outcomes\n8. 
Human mode shows stage spinners and completion lines\n\n## Files\n\n- `src/cli/commands/sync_surgical.rs` — new file, main orchestration function\n- `src/cli/commands/mod.rs` — add `pub mod sync_surgical;`\n\n## TDD Anchor\n\nTests in `src/cli/commands/sync_surgical.rs` or a companion `sync_surgical_tests.rs`:\n\n```rust\n#[cfg(test)]\nmod tests {\n use super::*;\n use crate::core::db::{create_connection, run_migrations};\n use std::path::Path;\n use wiremock::{MockServer, Mock, ResponseTemplate};\n use wiremock::matchers::{method, path_regex};\n\n fn test_config(mock_url: &str) -> Config {\n let mut config = Config::default();\n config.gitlab.url = mock_url.to_string();\n config.gitlab.token = \"test-token\".to_string();\n config\n }\n\n fn setup_db() -> rusqlite::Connection {\n let conn = create_connection(Path::new(\":memory:\")).unwrap();\n run_migrations(&conn).unwrap();\n // Insert test project\n conn.execute(\n \"INSERT INTO projects (gitlab_project_id, path_with_namespace, web_url)\n VALUES (1, 'group/project', 'https://gitlab.example.com/group/project')\",\n [],\n ).unwrap();\n conn\n }\n\n #[tokio::test]\n async fn surgical_sync_single_issue_end_to_end() {\n let server = MockServer::start().await;\n // Mock: GET /projects/:id/issues?iids[]=7 returns one issue\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(\n serde_json::json!([{\n \"id\": 100, \"iid\": 7, \"project_id\": 1, \"title\": \"Test\",\n \"state\": \"opened\", \"created_at\": \"2026-01-01T00:00:00Z\",\n \"updated_at\": \"2026-02-17T00:00:00Z\",\n \"author\": {\"id\": 1, \"username\": \"dev\", \"name\": \"Dev\"},\n \"web_url\": \"https://gitlab.example.com/group/project/-/issues/7\"\n }])\n ))\n .mount(&server).await;\n // Mock discussions endpoint\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/7/discussions\"))\n 
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n robot_mode: true,\n issue_iids: vec![7],\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n let result = run_sync_surgical(&config, options, Some(\"test01\"), &signal).await.unwrap();\n\n assert_eq!(result.surgical_mode, Some(true));\n assert_eq!(result.surgical_iids.as_ref().unwrap().issues, vec![7]);\n let entities = result.entity_results.as_ref().unwrap();\n assert_eq!(entities.len(), 1);\n assert_eq!(entities[0].outcome, \"synced\");\n }\n\n #[tokio::test]\n async fn preflight_only_returns_early() {\n let server = MockServer::start().await;\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([{\n \"id\": 100, \"iid\": 7, \"project_id\": 1, \"title\": \"Test\",\n \"state\": \"opened\", \"created_at\": \"2026-01-01T00:00:00Z\",\n \"updated_at\": \"2026-02-17T00:00:00Z\",\n \"author\": {\"id\": 1, \"username\": \"dev\", \"name\": \"Dev\"},\n \"web_url\": \"https://gitlab.example.com/group/project/-/issues/7\"\n }])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n robot_mode: true,\n issue_iids: vec![7],\n preflight_only: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n let result = run_sync_surgical(&config, options, Some(\"test02\"), &signal).await.unwrap();\n\n assert_eq!(result.preflight_only, Some(true));\n assert_eq!(result.issues_updated, 0); // No actual ingest happened\n }\n\n #[tokio::test]\n async fn cancellation_before_ingest_fails_recorder() {\n let server = MockServer::start().await;\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([{\n \"id\": 100, \"iid\": 7, 
\"project_id\": 1, \"title\": \"Test\",\n \"state\": \"opened\", \"created_at\": \"2026-01-01T00:00:00Z\",\n \"updated_at\": \"2026-02-17T00:00:00Z\",\n \"author\": {\"id\": 1, \"username\": \"dev\", \"name\": \"Dev\"},\n \"web_url\": \"https://gitlab.example.com/group/project/-/issues/7\"\n }])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n robot_mode: true,\n issue_iids: vec![7],\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n signal.cancel(); // Cancel before we start\n let result = run_sync_surgical(&config, options, Some(\"test03\"), &signal).await.unwrap();\n\n // Result should indicate cancellation\n assert_eq!(result.issues_updated, 0);\n }\n}\n```\n\n## Edge Cases\n\n- **Entity not found on GitLab**: Preflight returns 404 for an IID. Record `EntitySyncResult { outcome: \"not_found\" }` and continue with remaining IIDs.\n- **All entities skipped by TOCTOU**: Every entity's `updated_at` matches DB. Result has `entity_results` with all `skipped_toctou`, zero actual sync work.\n- **Mixed success/failure**: Some IIDs succeed, some fail. All recorded in `entity_results`. Function returns `Ok` with partial results, not `Err`.\n- **SyncRunRecorder consume semantics**: `succeed()` and `fail()` take `self` by value. The orchestrator must ensure exactly one terminal call. Use an `Option` pattern: `let mut recorder = Some(recorder); ... 
recorder.take().unwrap().succeed(...)`.\n- **Lock contention**: If another sync holds the lock and `force` is false, fail with clear error before any ingest.\n- **Empty IID lists**: If both `options.issue_iids` and `options.mr_iids` are empty, return immediately with default `SyncResult` (no surgical fields set).\n\n## Dependency Context\n\n- **Depends on (upstream)**: bd-wcja (SyncResult fields), bd-1lja (SyncOptions extensions), bd-159p (get_by_iid client methods), bd-3sez (surgical ingest/preflight/TOCTOU), bd-kanh (per-entity helpers), bd-arka (SyncRunRecorder surgical methods), bd-1elx (scoped embed), bd-hs6j (scoped docs), bd-tiux (migration 027)\n- **Blocks (downstream)**: bd-3bec (wiring into run_sync), bd-3jqx (integration tests)\n- This is the keystone bead — it consumes all upstream primitives and is consumed by the final wiring and integration test beads.","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-17T19:17:24.197299Z","created_by":"tayloreernisse","updated_at":"2026-02-18T20:36:39.596508Z","closed_at":"2026-02-18T20:36:39.596455Z","close_reason":"run_sync_surgical orchestrator: 719-line pipeline with preflight/TOCTOU/ingest/dependents/docs/embed stages, Option pattern, graceful embed failures","compaction_level":0,"original_size":0,"labels":["surgical-sync"],"dependencies":[{"issue_id":"bd-1i4i","depends_on_id":"bd-3bec","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-1iuj","title":"Phase 0d: Error type migration — NetworkErrorKind enum","description":"## What\nModify LoreError in src/core/error.rs to:\n1. Remove the reqwest::Error From impl (Http variant)\n2. Add a NetworkErrorKind enum for error classification\n3. Change GitLabNetworkError to use kind: NetworkErrorKind + detail: Option instead of source: Option\n\n## Why\nThe adapter layer (Phase 1) uses GitLabNetworkError { detail: Option }, which requires this error type change before the adapter compiles. 
This MUST precede Phase 1. Placed in Phase 0 so Phases 1-3 compile as a unit.\n\n## Implementation\n\n### New enum in src/core/error.rs:\n\\`\\`\\`rust\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum NetworkErrorKind {\n Timeout,\n ConnectionRefused,\n DnsResolution,\n Tls,\n Other,\n}\n\\`\\`\\`\n\n### Change GitLabNetworkError variant:\n\\`\\`\\`rust\n// Remove:\n#[error(\\\"HTTP error: {0}\\\")]\nHttp(#[from] reqwest::Error),\n\n// Change:\n#[error(\\\"Cannot connect to GitLab at {base_url}\\\")]\nGitLabNetworkError {\n base_url: String,\n kind: NetworkErrorKind,\n detail: Option,\n},\n\\`\\`\\`\n\n## Error Granularity Rationale\nFlattening all HTTP errors to detail: Option would lose distinction between timeouts, TLS failures, DNS resolution, and connection resets. NetworkErrorKind preserves actionable error categories without coupling LoreError to any HTTP client. The adapter execute() method classifies errors at the boundary:\n- Timeout from asupersync::time::timeout -> NetworkErrorKind::Timeout\n- Transport errors -> classified by error type into appropriate kind\n- Unknown errors -> NetworkErrorKind::Other\n\nThis keeps LoreError client-agnostic while preserving retry decisions based on error type (e.g., retry on timeout but not on TLS).\n\n## CRITICAL: Finding All Affected Call Sites\nRemoving Http(#[from] reqwest::Error) breaks all code that relies on the implicit From conversion (i.e., ? on reqwest::Error in functions returning LoreError/Result).\n\nTo find all affected sites:\n\\`\\`\\`bash\n# Find all files using reqwest error types or ? on reqwest calls\nrg 'reqwest::Error' src/ --files-with-matches\nrg '\\.send\\(\\)\\.await\\?' 
src/ --files-with-matches\n# Also check for direct GitLabNetworkError construction\nrg 'GitLabNetworkError' src/ --files-with-matches\n\\`\\`\\`\n\n### Known sites to update:\n- src/gitlab/client.rs — GitLabNetworkError construction in request(), request_with_headers()\n- src/gitlab/graphql.rs — GitLabNetworkError construction in execute_query()\n- src/embedding/ollama.rs — may construct errors from reqwest failures\n\n### Pattern for temporary fix:\n\\`\\`\\`rust\n// Before (implicit From):\nlet response = self.client.get(&url).send().await?;\n// After (explicit conversion — temporary until Phase 2 rewrites these):\nlet response = self.client.get(&url).send().await\n .map_err(|e| LoreError::GitLabNetworkError {\n base_url: url.clone(),\n kind: NetworkErrorKind::Other,\n detail: Some(format!(\\\"{e:?}\\\")),\n })?;\n\\`\\`\\`\n\nThese temporary conversions are throwaway work — Phase 2 rewrites all these call sites to use the adapter. But they must compile after Phase 0d.\n\n## Files Changed\n- src/core/error.rs (~15 LOC changed)\n- src/gitlab/client.rs (update GitLabNetworkError construction sites)\n- src/gitlab/graphql.rs (same)\n- src/embedding/ollama.rs (same, if any)\n\n## Testing\n- cargo check --all-targets (MUST compile cleanly)\n- cargo clippy --all-targets -- -D warnings\n- cargo test (existing tests should pass — verify no tests match on Http variant or source field)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:38:46.412914Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:49:43.279537Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-0"],"dependencies":[{"issue_id":"bd-1iuj","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:38:46.415930Z","created_by":"tayloreernisse"}]} +{"id":"bd-1iuj","title":"Phase 0d: Error type migration — NetworkErrorKind enum","description":"## What\nModify LoreError in src/core/error.rs to:\n1. 
Remove the reqwest::Error From impl (Http variant)\n2. Add a NetworkErrorKind enum for error classification\n3. Change GitLabNetworkError to use kind: NetworkErrorKind + detail: Option instead of source: Option\n\n## Why\nThe adapter layer (Phase 1) uses GitLabNetworkError { detail: Option }, which requires this error type change before the adapter compiles. This MUST precede Phase 1. Placed in Phase 0 so Phases 1-3 compile as a unit.\n\n## Implementation\n\n### New enum in src/core/error.rs:\n\\`\\`\\`rust\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum NetworkErrorKind {\n Timeout,\n ConnectionRefused,\n DnsResolution,\n Tls,\n Other,\n}\n\\`\\`\\`\n\n### Change GitLabNetworkError variant:\n\\`\\`\\`rust\n// Remove:\n#[error(\\\"HTTP error: {0}\\\")]\nHttp(#[from] reqwest::Error),\n\n// Change:\n#[error(\\\"Cannot connect to GitLab at {base_url}\\\")]\nGitLabNetworkError {\n base_url: String,\n kind: NetworkErrorKind,\n detail: Option,\n},\n\\`\\`\\`\n\n## Error Granularity Rationale\nFlattening all HTTP errors to detail: Option would lose distinction between timeouts, TLS failures, DNS resolution, and connection resets. NetworkErrorKind preserves actionable error categories without coupling LoreError to any HTTP client. The adapter execute() method classifies errors at the boundary:\n- Timeout from asupersync::time::timeout -> NetworkErrorKind::Timeout\n- Transport errors -> classified by error type into appropriate kind\n- Unknown errors -> NetworkErrorKind::Other\n\nThis keeps LoreError client-agnostic while preserving retry decisions based on error type (e.g., retry on timeout but not on TLS).\n\n## CRITICAL: Finding All Affected Call Sites\nRemoving Http(#[from] reqwest::Error) breaks all code that relies on the implicit From conversion (i.e., ? on reqwest::Error in functions returning LoreError/Result).\n\nTo find all affected sites:\n\\`\\`\\`bash\n# Find all files using reqwest error types or ? 
on reqwest calls\nrg 'reqwest::Error' src/ --files-with-matches\nrg '\\.send\\(\\)\\.await\\?' src/ --files-with-matches\n# Also check for direct GitLabNetworkError construction\nrg 'GitLabNetworkError' src/ --files-with-matches\n\\`\\`\\`\n\n### Known sites to update:\n- src/gitlab/client.rs — GitLabNetworkError construction in request(), request_with_headers()\n- src/gitlab/graphql.rs — GitLabNetworkError construction in execute_query()\n- src/embedding/ollama.rs — may construct errors from reqwest failures\n\n### Pattern for temporary fix:\n\\`\\`\\`rust\n// Before (implicit From):\nlet response = self.client.get(&url).send().await?;\n// After (explicit conversion — temporary until Phase 2 rewrites these):\nlet response = self.client.get(&url).send().await\n .map_err(|e| LoreError::GitLabNetworkError {\n base_url: url.clone(),\n kind: NetworkErrorKind::Other,\n detail: Some(format!(\\\"{e:?}\\\")),\n })?;\n\\`\\`\\`\n\nThese temporary conversions are throwaway work — Phase 2 rewrites all these call sites to use the adapter. 
But they must compile after Phase 0d.\n\n## Files Changed\n- src/core/error.rs (~15 LOC changed)\n- src/gitlab/client.rs (update GitLabNetworkError construction sites)\n- src/gitlab/graphql.rs (same)\n- src/embedding/ollama.rs (same, if any)\n\n## Testing\n- cargo check --all-targets (MUST compile cleanly)\n- cargo clippy --all-targets -- -D warnings\n- cargo test (existing tests should pass — verify no tests match on Http variant or source field)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:38:46.412914Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.560623Z","closed_at":"2026-03-06T21:11:12.560572Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-0"],"dependencies":[{"issue_id":"bd-1iuj","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:38:46.415930Z","created_by":"tayloreernisse"}]} {"id":"bd-1j1","title":"Integration test: full Phase B sync pipeline","description":"## Background\n\nThis integration test proves the full Phase B sync pipeline works end-to-end. Since Gates 1 and 2 are already implemented and closed, this test validates that the complete pipeline — including Gate 4 mr_diffs draining — works together.\n\n## Codebase Context\n\n- **Gates 1-2 FULLY IMPLEMENTED (CLOSED):** resource events fetch, closes_issues API, system note parsing (note_parser.rs), entity_references extraction (references.rs)\n- **Gate 4 in progress:** migration 016 (mr_file_changes), fetch_mr_diffs, drain_mr_diffs — already wired in orchestrator (lines 708-726, 1514+)\n- **26 migrations exist** (001-026). LATEST_SCHEMA_VERSION = 26. 
In-memory DB must run all 26.\n- Orchestrator has drain_resource_events() (line 932), drain_mr_closes_issues() (line 1254), and drain_mr_diffs() (line 1514).\n- wiremock crate used in existing tests (check dev-dependencies in Cargo.toml)\n- src/core/dependent_queue.rs: enqueue_job(), claim_jobs(), complete_job(), fail_job() with exponential backoff\n- IngestProjectResult and IngestMrProjectResult track counts for all drain phases\n\n## Approach\n\nCreate tests/phase_b_integration.rs:\n\n### Test Setup\n\n1. In-memory SQLite DB with all 26 migrations (001-026)\n2. wiremock mock server with:\n - /api/v4/projects/:id/issues — 2 test issues\n - /api/v4/projects/:id/merge_requests — 1 test MR\n - /api/v4/projects/:id/issues/:iid/resource_state_events — state events\n - /api/v4/projects/:id/issues/:iid/resource_label_events — label events\n - /api/v4/projects/:id/merge_requests/:iid/resource_state_events — merge event with source_merge_request_iid\n - /api/v4/projects/:id/merge_requests/:iid/closes_issues — linked issues\n - /api/v4/projects/:id/merge_requests/:iid/diffs — file changes\n - /api/v4/projects/:id/issues/:iid/discussions — discussion with system note \"mentioned in !1\"\n3. Config with fetch_resource_events=true and fetch_mr_file_changes=true\n4. Use dependent_concurrency=1 to avoid timing issues\n\n### Test Flow\n\n```rust\n#[tokio::test]\nasync fn test_full_phase_b_pipeline() {\n // 1. Set up mock server + DB with all 26 migrations\n // 2. Run ingest issues + MRs (orchestrator functions)\n // 3. Verify pending_dependent_fetches enqueued: resource_events, mr_closes_issues, mr_diffs\n // 4. Drain all dependent fetch queues\n // 5. Assert: resource_state_events populated (count > 0)\n // 6. Assert: resource_label_events populated (count > 0)\n // 7. Assert: entity_references has closes ref with source_method='api'\n // 8. Assert: entity_references has mentioned ref with source_method='note_parse'\n // 9. 
Assert: mr_file_changes populated from diffs API\n // 10. Assert: pending_dependent_fetches fully drained (no stuck locks)\n}\n```\n\n### Assertions (SQL)\n\n```sql\nSELECT COUNT(*) FROM resource_state_events -- > 0\nSELECT COUNT(*) FROM resource_label_events -- > 0\nSELECT COUNT(*) FROM entity_references WHERE reference_type = 'closes' AND source_method = 'api' -- >= 1\nSELECT COUNT(*) FROM entity_references WHERE source_method = 'note_parse' -- >= 1\nSELECT COUNT(*) FROM mr_file_changes -- > 0\nSELECT COUNT(*) FROM pending_dependent_fetches WHERE locked_at IS NOT NULL -- = 0\n```\n\n## Acceptance Criteria\n\n- [ ] Test creates DB with all 26 migrations, mocks, and runs full pipeline\n- [ ] resource_state_events and resource_label_events populated\n- [ ] entity_references has closes ref (source_method='api') and mentioned ref (source_method='note_parse')\n- [ ] mr_file_changes populated from diffs mock\n- [ ] pending_dependent_fetches fully drained (no stuck locks, no retryable jobs)\n- [ ] Test runs in < 10 seconds\n- [ ] `cargo test --test phase_b_integration` passes\n\n## Files\n\n- CREATE: tests/phase_b_integration.rs\n\n## TDD Anchor\n\nRED: Write test with all assertions — should pass if all Gates are wired correctly.\n\nGREEN: If anything fails, it indicates a missing orchestrator connection — fix the wiring.\n\nVERIFY: cargo test --test phase_b_integration -- --nocapture\n\n## Edge Cases\n\n- Paginated mock responses: include Link header for multi-page responses\n- Empty pages: verify graceful handling\n- Use dependent_concurrency=1 to avoid timing issues in test environment\n- Stale lock reclaim: test that locks older than stale_lock_minutes are reclaimed\n- If Gate 4 drain_mr_diffs is not fully wired yet, the mr_file_changes assertion will fail — this is the intended RED signal\n\n## Dependency Context\n\n- **bd-8t4 (resource_state_events extraction)**: CLOSED. 
Provides drain_resource_events() which populates resource_state_events and resource_label_events tables.\n- **bd-3ia (closes_issues)**: CLOSED. Provides drain_mr_closes_issues() which populates entity_references with reference_type='closes', source_method='api'.\n- **bd-1ji (note parsing)**: CLOSED. Provides note_parser.rs which extracts \"mentioned in !N\" patterns and stores as entity_references with source_method='note_parse'.\n- **dependent_queue.rs**: Provides the claim/complete/fail lifecycle. All three drain functions use this.\n- **orchestrator.rs**: Contains all drain functions. drain_mr_diffs() at line 1514+ populates mr_file_changes.","status":"open","priority":3,"issue_type":"task","created_at":"2026-02-02T22:42:26.355071Z","created_by":"tayloreernisse","updated_at":"2026-02-17T16:52:30.970742Z","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1j1","depends_on_id":"bd-1ji","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1j1","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1j1","depends_on_id":"bd-3ia","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1j1","depends_on_id":"bd-8t4","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1j5o","title":"Verification: quality gates, query plan check, real-world validation","description":"## Background\n\nPost-implementation verification checkpoint. Runs after all code beads complete to validate the full scoring model works correctly against real data, not just test fixtures.\n\n## Approach\n\nExecute 8 verification steps in order. 
Each step has a binary pass/fail outcome.\n\n### Step 1: Compiler check\n```bash\ncargo check --all-targets\n```\nPass: exit 0\n\n### Step 2: Clippy\n```bash\ncargo clippy --all-targets -- -D warnings\n```\nPass: exit 0\n\n### Step 3: Formatting\n```bash\ncargo fmt --check\n```\nPass: exit 0\n\n### Step 4: Test suite\n```bash\ncargo test -p lore\n```\nPass: all tests green, including 31 new decay/scoring tests\n\n### Step 5: UBS scan\n```bash\nubs src/cli/commands/who.rs src/core/config.rs src/core/db.rs\n```\nPass: exit 0\n\n### Step 6: Query plan verification (manual)\nRun against real database:\n```bash\ncargo run --release -- who --path MeasurementQualityDialog.tsx -vvv 2>&1 | grep -i \"query plan\"\n```\nOr use sqlite3 CLI with EXPLAIN QUERY PLAN on the expert SQL (both exact and prefix modes).\n\nPass criteria (6 checks):\n- matched_notes_raw branch 1 uses existing new_path index\n- matched_notes_raw branch 2 uses idx_notes_old_path_author\n- matched_file_changes_raw uses idx_mfc_new_path_project_mr and idx_mfc_old_path_project_mr\n- reviewer_participation uses idx_notes_diffnote_discussion_author\n- mr_activity CTE joins merge_requests via primary key from matched_file_changes\n- Path resolution probes (old_path leg) use idx_notes_old_path_project_created\nDocument observed plan as SQL comment near the CTE.\n\n### Step 7: Performance baseline (manual)\n```bash\ntime cargo run --release -- who --path MeasurementQualityDialog.tsx\ntime cargo run --release -- who --path src/\ntime cargo run --release -- who --path Dialog.tsx\n```\nPass criteria (soft SLOs):\n- Exact path: p95 < 200ms\n- Prefix: p95 < 300ms\n- Suffix: p95 < 500ms\nRecord timings as SQL comment for future regression reference.\n\n### Step 8: Real-world validation\n```bash\ncargo run --release -- who --path MeasurementQualityDialog.tsx\ncargo run --release -- who --path MeasurementQualityDialog.tsx --explain-score\ncargo run --release -- who --path MeasurementQualityDialog.tsx --as-of 
2025-06-01\ncargo run --release -- who --path MeasurementQualityDialog.tsx --all-history\n```\nPass criteria:\n- [ ] Recency discounting visible (recent authors rank above old reviewers)\n- [ ] --explain-score components sum to total (within f64 tolerance)\n- [ ] --as-of produces identical results on repeated runs\n- [ ] Assigned-only reviewers rank below participated reviewers on same MR\n- [ ] Known renamed file path resolves and credits old expertise\n- [ ] LGTM-only reviewers classified as assigned-only\n- [ ] Closed MRs at ~50% contribution visible via --explain-score\n\n## Acceptance Criteria\n- [ ] Steps 1-5 pass (exit 0)\n- [ ] Step 6: query plan documented with all 6 index usage points confirmed\n- [ ] Step 7: timing baselines recorded\n- [ ] Step 8: all 7 real-world checks pass\n\n## Files\n- All files modified by child beads (read-only verification)\n- Add SQL comments near CTE with observed EXPLAIN QUERY PLAN output\n\n## Edge Cases\n- SQLite planner may choose different plans across versions — document version\n- Timing varies by hardware — record machine specs alongside baselines\n- Real DB may have NULL merged_at on old MRs — state-aware fallback handles this","status":"closed","priority":3,"issue_type":"task","created_at":"2026-02-09T17:00:59.287720Z","created_by":"tayloreernisse","updated_at":"2026-02-12T20:43:04.415816Z","closed_at":"2026-02-12T20:43:04.415772Z","close_reason":"Implemented by time-decay swarm: 3 agents, 12 tasks, 621 tests passing, all quality gates green","compaction_level":0,"original_size":0,"labels":["scoring"],"dependencies":[{"issue_id":"bd-1j5o","depends_on_id":"bd-1b50","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1j5o","depends_on_id":"bd-1vti","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1je","title":"Implement pending discussion queue","description":"## Background\nThe pending discussion queue tracks discussions that need to be 
fetched from GitLab. When an issue or MR is updated, its discussions may need re-fetching. This queue is separate from dirty_sources (which tracks entities needing document regeneration) — it tracks entities needing API calls to GitLab. The queue uses the same backoff pattern as dirty_sources for consistency.\n\n## Approach\nCreate `src/ingestion/discussion_queue.rs`:\n\n```rust\nuse crate::core::backoff::compute_next_attempt_at;\n\n/// Noteable type for discussion queue.\n#[derive(Debug, Clone, Copy)]\npub enum NoteableType {\n Issue,\n MergeRequest,\n}\n\nimpl NoteableType {\n pub fn as_str(&self) -> &'static str {\n match self {\n Self::Issue => \"Issue\",\n Self::MergeRequest => \"MergeRequest\",\n }\n }\n}\n\npub struct PendingFetch {\n pub project_id: i64,\n pub noteable_type: NoteableType,\n pub noteable_iid: i64,\n pub attempt_count: i32,\n}\n\n/// Queue a discussion fetch. ON CONFLICT DO UPDATE resets backoff (consistent with dirty_sources).\npub fn queue_discussion_fetch(\n conn: &Connection,\n project_id: i64,\n noteable_type: NoteableType,\n noteable_iid: i64,\n) -> Result<()>;\n\n/// Get next batch of pending fetches (WHERE next_attempt_at IS NULL OR <= now).\npub fn get_pending_fetches(conn: &Connection, limit: usize) -> Result>;\n\n/// Mark fetch complete (remove from queue).\npub fn complete_fetch(\n conn: &Connection,\n project_id: i64,\n noteable_type: NoteableType,\n noteable_iid: i64,\n) -> Result<()>;\n\n/// Record fetch error with backoff.\npub fn record_fetch_error(\n conn: &Connection,\n project_id: i64,\n noteable_type: NoteableType,\n noteable_iid: i64,\n error: &str,\n) -> Result<()>;\n```\n\n## Acceptance Criteria\n- [ ] queue_discussion_fetch uses ON CONFLICT DO UPDATE (consistent with dirty_sources pattern)\n- [ ] Re-queuing resets: attempt_count=0, next_attempt_at=NULL, last_error=NULL\n- [ ] get_pending_fetches respects next_attempt_at backoff\n- [ ] get_pending_fetches returns entries ordered by queued_at ASC\n- [ ] complete_fetch 
removes entry from queue\n- [ ] record_fetch_error increments attempt_count, computes next_attempt_at via shared backoff\n- [ ] NoteableType.as_str() returns \"Issue\" or \"MergeRequest\" (matches DB CHECK constraint)\n- [ ] `cargo test discussion_queue` passes\n\n## Files\n- `src/ingestion/discussion_queue.rs` — new file\n- `src/ingestion/mod.rs` — add `pub mod discussion_queue;`\n\n## TDD Loop\nRED: Tests in `#[cfg(test)] mod tests`:\n- `test_queue_and_get` — queue entry, get returns it\n- `test_requeue_resets_backoff` — queue, error, re-queue -> attempt_count=0\n- `test_backoff_respected` — entry with future next_attempt_at not returned\n- `test_complete_removes` — complete_fetch removes entry\n- `test_error_increments_attempts` — error -> attempt_count=1, next_attempt_at set\nGREEN: Implement all functions\nVERIFY: `cargo test discussion_queue`\n\n## Edge Cases\n- Queue same (project_id, noteable_type, noteable_iid) twice: ON CONFLICT resets state\n- NoteableType must match DB CHECK constraint exactly (\"Issue\", \"MergeRequest\" — capitalized)\n- Empty queue: get_pending_fetches returns empty Vec","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:27:09.505548Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:31:35.496454Z","closed_at":"2026-01-30T17:31:35.496405Z","close_reason":"Implemented discussion_queue with queue/get/complete/record_error + 6 tests","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1je","depends_on_id":"bd-hrs","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1je","depends_on_id":"bd-mem","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -60,7 +60,7 @@ {"id":"bd-1l1","title":"[CP0] GitLab API client with rate limiting","description":"## Background\n\nThe GitLab client handles all API communication with rate limiting to avoid 429 errors. Uses native fetch (Node 18+). 
Rate limiter adds jitter to prevent thundering herd. All errors are typed for clean error handling in CLI commands.\n\nReference: docs/prd/checkpoint-0.md section \"GitLab Client\"\n\n## Approach\n\n**src/gitlab/client.ts:**\n```typescript\nexport class GitLabClient {\n private baseUrl: string;\n private token: string;\n private rateLimiter: RateLimiter;\n\n constructor(options: { baseUrl: string; token: string; requestsPerSecond?: number }) {\n this.baseUrl = options.baseUrl.replace(/\\/$/, '');\n this.token = options.token;\n this.rateLimiter = new RateLimiter(options.requestsPerSecond ?? 10);\n }\n\n async getCurrentUser(): Promise\n async getProject(pathWithNamespace: string): Promise\n private async request(path: string, options?: RequestInit): Promise\n}\n\nclass RateLimiter {\n private lastRequest = 0;\n private minInterval: number;\n\n constructor(requestsPerSecond: number) {\n this.minInterval = 1000 / requestsPerSecond;\n }\n\n async acquire(): Promise {\n // Wait if too soon since last request\n // Add 0-50ms jitter\n }\n}\n```\n\n**src/gitlab/types.ts:**\n```typescript\nexport interface GitLabUser {\n id: number;\n username: string;\n name: string;\n}\n\nexport interface GitLabProject {\n id: number;\n path_with_namespace: string;\n default_branch: string;\n web_url: string;\n created_at: string;\n updated_at: string;\n}\n```\n\n**Integration tests with MSW (Mock Service Worker):**\nSet up MSW handlers that mock GitLab API responses for /api/v4/user and /api/v4/projects/:path\n\n## Acceptance Criteria\n\n- [ ] getCurrentUser() returns GitLabUser with id, username, name\n- [ ] getProject(\"group/project\") URL-encodes path correctly\n- [ ] 401 response throws GitLabAuthError\n- [ ] 404 response throws GitLabNotFoundError\n- [ ] 429 response throws GitLabRateLimitError with retryAfter from header\n- [ ] Network failure throws GitLabNetworkError\n- [ ] Rate limiter enforces minimum interval between requests\n- [ ] Rate limiter adds random jitter 
(0-50ms)\n- [ ] tests/integration/gitlab-client.test.ts passes (6 tests)\n\n## Files\n\nCREATE:\n- src/gitlab/client.ts\n- src/gitlab/types.ts\n- tests/integration/gitlab-client.test.ts\n- tests/fixtures/mock-responses/gitlab-user.json\n- tests/fixtures/mock-responses/gitlab-project.json\n\n## TDD Loop\n\nRED:\n```typescript\n// tests/integration/gitlab-client.test.ts\ndescribe('GitLab Client', () => {\n it('authenticates with valid PAT')\n it('returns 401 for invalid PAT')\n it('fetches project by path')\n it('handles rate limiting (429) with Retry-After')\n it('respects rate limit (requests per second)')\n it('adds jitter to rate limiting')\n})\n```\n\nGREEN: Implement client.ts and types.ts\n\nVERIFY: `npm run test -- tests/integration/gitlab-client.test.ts`\n\n## Edge Cases\n\n- Path with special characters (spaces, slashes) must be URL-encoded\n- Retry-After header may be missing - default to 60s\n- Network timeout should be handled (use AbortController)\n- Rate limiter jitter prevents multiple clients syncing in lockstep\n- baseUrl trailing slash should be stripped","status":"closed","priority":1,"issue_type":"task","created_at":"2026-01-24T16:09:49.842981Z","created_by":"tayloreernisse","updated_at":"2026-01-25T03:06:39.520300Z","closed_at":"2026-01-25T03:06:39.520131Z","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1l1","depends_on_id":"bd-gg1","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1lj5","title":"Epic: Asupersync Migration — Replace Tokio + Reqwest","description":"## Background\n\nGitlore uses tokio as its async runtime and reqwest as its HTTP client. 
Both work, but:\n- Ctrl+C during join_all silently drops in-flight HTTP requests with no cleanup\n- ShutdownSignal is a hand-rolled AtomicBool with no structured cancellation\n- No deterministic testing for concurrent ingestion patterns\n- tokio provides no structured concurrency guarantees\n\nAsupersync is a cancel-correct async runtime with region-owned tasks, obligation tracking, and deterministic lab testing. Replacing tokio+reqwest gives us structured shutdown, cancel-correct ingestion, and testable concurrency.\n\n## Trade-offs Accepted\n- Nightly Rust required (asupersync dependency)\n- Pre-1.0 runtime dependency (mitigated by adapter layer + version pinning)\n- Deeper function signature changes for Cx threading\n\n## Why Not tokio CancellationToken + JoinSet?\nConsidered and rejected: CancellationToken/JoinSet fix cancel-cleanly but don't give obligation tracking (compile-time proof all spawned work is awaited) or deterministic lab testing. These prevent future concurrency bugs, not just the current Ctrl+C issue. Also, fixing Ctrl+C with tokio first then migrating doubles the effort. If asupersync proves unviable, the fallback IS tokio + CancellationToken + JoinSet.\n\n## Escape Hatch Triggers\n1. Nightly breakage > 7 days with no fix\n2. TLS incompatibility on macOS (system CA store)\n3. Breaking asupersync API change requiring > 2 days rework\n4. Wiremock test incompatibility unresolvable in 1 day\n\n## Execution Order\nPhase 0 (prep) -> Decision Gate -> Phases 1-3 (atomic) -> Phase 4 (tests) -> Phase 5 (hardening)\n\n## Rearchitecture Context (2026-03-06)\nA major code reorganization was completed before implementation began. Key changes affecting this migration:\n\n### main.rs thinned\n- main.rs is now ~419 LOC (was ~3744). 
Handler code lives in src/app/handlers.rs, src/app/errors.rs, src/app/robot_docs.rs via include!() chain.\n- Signal handlers, command dispatch, and Cx threading targets are in src/app/handlers.rs.\n\n### CLI commands split into folder modules\n- cli/commands/sync.rs -> cli/commands/sync/ (mod.rs, run.rs, render.rs, surgical.rs, sync_tests.rs)\n- cli/commands/ingest.rs -> cli/commands/ingest/ (mod.rs, run.rs, render.rs)\n- cli/commands/sync_surgical.rs -> cli/commands/sync/surgical.rs\n- cli/commands/list.rs -> cli/commands/list/ (mod.rs, issues.rs, mrs.rs, notes.rs, render_helpers.rs)\n- cli/commands/show.rs -> cli/commands/show/ (mod.rs, issue.rs, mr.rs, render.rs)\n\n### Timeline extracted from core\n- core/timeline*.rs -> timeline/ (types.rs, seed.rs, expand.rs, collect.rs + tests)\n\n### Cross-references extracted from core\n- core/note_parser.rs -> xref/note_parser.rs\n- core/references.rs -> xref/references.rs\n\n### Ingestion storage extracted from core\n- core/payloads.rs -> ingestion/storage/payloads.rs\n- core/events_db.rs -> ingestion/storage/events.rs\n- core/dependent_queue.rs -> ingestion/storage/queue.rs\n- core/sync_run.rs -> ingestion/storage/sync_run.rs\n\n### Embedding chunks merged\n- embedding/chunk_ids.rs + embedding/chunking.rs -> embedding/chunks.rs\n\n### Test support centralized\n- src/test_support.rs (NEW) — shared test helpers\n\n### CLI args extracted\n- cli/mod.rs split: args structs moved to cli/args.rs\n\n### Documents extractor split\n- documents/extractor.rs -> documents/extractor/ (mod.rs, common.rs, issues.rs, mrs.rs, discussions.rs, notes.rs)\n\n## File Change Summary (updated)\n~20+ files modified across the migration, 1 new file (src/http.rs), ~400-500 LOC changed total. 
File paths in child beads have been updated to reflect the rearchitecture.\n\n## Reference\nFull plan: plans/asupersync-migration.md\nRearchitecture plan: PROPOSED_CODE_FILE_REORGANIZATION_PLAN.md","status":"open","priority":1,"issue_type":"epic","created_at":"2026-03-06T18:37:52.914426Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:52:46.575504Z","compaction_level":0,"original_size":0,"labels":["asupersync"]} {"id":"bd-1lja","title":"Add --issue, --mr, -p, --preflight-only CLI flags and SyncOptions extensions with validation","description":"## Background\nSurgical sync is invoked via `lore sync --issue 123 --mr 456 -p myproject`. This bead adds the CLI flags to `SyncArgs` (clap struct), extends `SyncOptions` with surgical fields, and wires them together in `handle_sync_cmd` with full validation. This is the user-facing entry point for the entire surgical sync feature.\n\nThe existing `SyncArgs` struct at lines 760-805 of `src/cli/mod.rs` defines all CLI flags for `lore sync`. `SyncOptions` at lines 20-29 of `src/cli/commands/sync.rs` is the runtime options struct passed to `run_sync`. `handle_sync_cmd` at lines 2070-2096 of `src/main.rs` bridges CLI args to SyncOptions and calls `run_sync`.\n\n## Approach\n\n### Step 1: Add flags to SyncArgs (src/cli/mod.rs, struct SyncArgs at line ~760)\n\nAdd after the existing `timings` field:\n\n```rust\n/// Surgically sync specific issues by IID (repeatable, must be positive)\n#[arg(long, value_parser = clap::value_parser!(u64).range(1..), action = clap::ArgAction::Append)]\npub issue: Vec,\n\n/// Surgically sync specific merge requests by IID (repeatable, must be positive)\n#[arg(long, value_parser = clap::value_parser!(u64).range(1..), action = clap::ArgAction::Append)]\npub mr: Vec,\n\n/// Scope to a single project (required when --issue or --mr is used, falls back to config.defaultProject)\n#[arg(short = 'p', long)]\npub project: Option,\n\n/// Validate remote entities exist without any DB content writes. 
Runs preflight network fetch only.\n#[arg(long, default_value_t = false)]\npub preflight_only: bool,\n```\n\n**Why u64 with range(1..)**: IIDs are always positive. Parse-time validation gives immediate, clear error messages from clap.\n\n### Step 2: Extend SyncOptions (src/cli/commands/sync.rs, struct SyncOptions at line ~20)\n\nAdd fields:\n\n```rust\npub issue_iids: Vec,\npub mr_iids: Vec,\npub project: Option,\npub preflight_only: bool,\n```\n\nAdd helper:\n\n```rust\nimpl SyncOptions {\n pub const MAX_SURGICAL_TARGETS: usize = 100;\n\n pub fn is_surgical(&self) -> bool {\n !self.issue_iids.is_empty() || !self.mr_iids.is_empty()\n }\n}\n```\n\n### Step 3: Wire in handle_sync_cmd (src/main.rs, function handle_sync_cmd at line ~2070)\n\nAfter existing SyncOptions construction (~line 2088):\n\n1. **Dedup IIDs** before constructing options:\n```rust\nlet mut issue_iids = args.issue;\nlet mut mr_iids = args.mr;\nissue_iids.sort_unstable();\nissue_iids.dedup();\nmr_iids.sort_unstable();\nmr_iids.dedup();\n```\n\n2. **Add new fields** to the SyncOptions construction.\n\n3. **Validation** (after options creation, before calling run_sync):\n- Hard cap: `issue_iids.len() + mr_iids.len() > MAX_SURGICAL_TARGETS` → error with count\n- Project required: if `is_surgical()`, use `config.effective_project(options.project.as_deref())`. 
If None → error saying `-p` or `defaultProject` is required\n- Incompatible flags: `--full` + surgical → error\n- Embed leakage guard: `--no-docs` without `--no-embed` in surgical mode → error (stale embeddings for regenerated docs)\n- `--preflight-only` requires surgical mode → error if not `is_surgical()`\n\n## Acceptance Criteria\n- [ ] `lore sync --issue 123` parses correctly (issue_iids = [123])\n- [ ] `lore sync --issue 123 --issue 456` produces deduplicated sorted vec\n- [ ] `lore sync --mr 789` parses correctly\n- [ ] `lore sync --issue 0` rejected at parse time by clap (range 1..)\n- [ ] `lore sync --issue -1` rejected at parse time by clap (u64 parse failure)\n- [ ] `lore sync -p myproject --issue 1` sets project = Some(\"myproject\")\n- [ ] `lore sync --preflight-only --issue 1 -p proj` sets preflight_only = true\n- [ ] `SyncOptions::is_surgical()` returns true when issue_iids or mr_iids is non-empty\n- [ ] `SyncOptions::is_surgical()` returns false when both vecs are empty\n- [ ] `SyncOptions::MAX_SURGICAL_TARGETS` is 100\n- [ ] Validation: `--issue 1` without `-p` and no defaultProject → error mentioning `-p`\n- [ ] Validation: `--issue 1` without `-p` but with defaultProject in config → uses defaultProject (no error)\n- [ ] Validation: `--full --issue 1 -p proj` → incompatibility error\n- [ ] Validation: `--no-docs --issue 1 -p proj` (without --no-embed) → embed leakage error\n- [ ] Validation: `--no-docs --no-embed --issue 1 -p proj` → accepted\n- [ ] Validation: `--preflight-only` without --issue/--mr → error\n- [ ] Validation: >100 combined targets → hard cap error\n- [ ] Normal `lore sync` (without --issue/--mr) still works identically\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n- MODIFY: src/cli/mod.rs (add fields to SyncArgs, ~line 805)\n- MODIFY: src/cli/commands/sync.rs (extend SyncOptions + is_surgical + MAX_SURGICAL_TARGETS)\n- MODIFY: src/main.rs (wire fields + validation 
in handle_sync_cmd)\n\n## TDD Anchor\nRED: Write tests in `src/cli/commands/sync.rs` (in a `#[cfg(test)] mod tests` block):\n\n```rust\n#[cfg(test)]\nmod tests {\n use super::*;\n\n fn default_options() -> SyncOptions {\n SyncOptions {\n full: false,\n no_status: false,\n no_docs: false,\n no_embed: false,\n timings: false,\n issue_iids: vec![],\n mr_iids: vec![],\n project: None,\n preflight_only: false,\n }\n }\n\n #[test]\n fn is_surgical_with_issues() {\n let opts = SyncOptions { issue_iids: vec![1], ..default_options() };\n assert!(opts.is_surgical());\n }\n\n #[test]\n fn is_surgical_with_mrs() {\n let opts = SyncOptions { mr_iids: vec![10], ..default_options() };\n assert!(opts.is_surgical());\n }\n\n #[test]\n fn is_surgical_empty() {\n let opts = default_options();\n assert!(!opts.is_surgical());\n }\n\n #[test]\n fn max_surgical_targets_is_100() {\n assert_eq!(SyncOptions::MAX_SURGICAL_TARGETS, 100);\n }\n}\n```\n\nGREEN: Add the fields and `is_surgical()` method.\nVERIFY: `cargo test is_surgical && cargo test max_surgical_targets`\n\nAdditional validation tests (in integration or as unit tests on a `validate_surgical_options` helper if extracted):\n- `preflight_only_requires_surgical` — SyncOptions with preflight_only=true, empty iids → error\n- `surgical_no_docs_requires_no_embed` — SyncOptions with no_docs=true, no_embed=false, is_surgical=true → error\n- `surgical_incompatible_with_full` — SyncOptions with full=true, is_surgical=true → error\n\n## Edge Cases\n- Clap `ArgAction::Append` allows `--issue 1 --issue 2` but NOT `--issue 1,2` (no value_delimiter). 
This is intentional — comma-separated values are ambiguous and error-prone.\n- Duplicate IIDs like `--issue 123 --issue 123` are handled by dedup in handle_sync_cmd, not rejected.\n- The `effective_project` method on Config (line 309 of config.rs) already handles the `-p` / defaultProject fallback: `cli_project.or(self.default_project.as_deref())`.\n- The `-p` short flag does not conflict with any existing SyncArgs flags.\n\n## Dependency Context\nThis is a leaf dependency with no upstream blockers. Can be done in parallel with bd-1sc6, bd-159p, bd-tiux. Downstream bead bd-1i4i (orchestrator) reads these fields to dispatch surgical vs standard sync.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-17T19:12:43.921399Z","created_by":"tayloreernisse","updated_at":"2026-02-18T21:03:39.041002Z","closed_at":"2026-02-18T21:03:39.040947Z","close_reason":"Completed: --issue, --mr, -p, --preflight-only CLI flags, SyncOptions.is_surgical(), MAX_SURGICAL_TARGETS, validation","compaction_level":0,"original_size":0,"labels":["surgical-sync"],"dependencies":[{"issue_id":"bd-1lja","depends_on_id":"bd-1i4i","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-1lli","title":"Decision Gate: Verify nightly + asupersync compile and behavioral smoke tests","description":"## What\nBefore investing in Phases 1-3, verify that asupersync compiles on the pinned nightly with tls-native-roots on macOS AND that HTTP behavior and API shape are correct. This is a GO/NO-GO gate.\n\n## Why\nCompile-only gating is insufficient — the migration failure modes are semantic (HTTP behavior parity), not just syntactic. A broken TLS stack, missing timeout support, or different-than-assumed API shape would make the entire migration unviable.\n\n## Implementation\n\n1. Create rust-toolchain.toml with pinned nightly date:\n\\`\\`\\`toml\n[toolchain]\nchannel = \\\"nightly-2026-03-01\\\"\n\\`\\`\\`\n\n2. 
Add asupersync to Cargo.toml temporarily (can be in a test binary):\n\\`\\`\\`toml\nasupersync = { version = \\\"0.2\\\", features = [\\\"tls\\\", \\\"tls-native-roots\\\"] }\n\\`\\`\\`\n\n3. Build and verify compilation succeeds.\n\n4. **API shape verification** — confirm the following APIs exist and match plan assumptions:\n a. HttpClient::with_config(HttpClientConfig) — connection pool configuration\n b. HttpClient::request(Method, url, headers, body) -> response with status/reason/headers/body\n c. asupersync::time::timeout(duration, future) — per-request timeout wrapper\n d. asupersync::time::sleep(duration) — rate limiter backoff replacement\n e. #[asupersync::main] macro — entrypoint with cx: &Cx parameter\n f. #[asupersync::test] macro — test attribute\n g. cx.spawn(name, async closure) — named task spawning\n h. cx.region(|scope| async { scope.spawn(...) }) — region-scoped fan-out\n i. cx.shutdown_signal().await — signal handling\n If ANY of these don't exist or have significantly different signatures, document the delta and assess rework cost.\n\n5. Run behavioral smoke tests in a throwaway binary or integration test:\n a. TLS validation: HTTPS GET to a public endpoint succeeds with valid cert\n b. DNS resolution: Request using hostname (not IP) resolves correctly\n c. Redirect handling: GET to a 301/302 endpoint — verify adapter returns redirect status\n d. Timeout behavior: Request to slow/non-responsive endpoint times out within configured duration\n e. Connection pooling: 4 sequential requests to same host reuse connections\n\n## Decision Criteria\n- If compilation fails: STOP. Evaluate tokio CancellationToken fallback.\n- If TLS doesn't work on macOS: STOP. Try tls-webpki-roots. If that fails too, fallback.\n- If timeouts don't fire: STOP. Core requirement.\n- If API shape differs significantly (>2 day rework): STOP. 
Evaluate fallback.\n- If all pass: PROCEED to Phases 1-3.\n\n## Escape Hatch (documented in epic)\nIf gate fails, fall back to: tokio + CancellationToken + JoinSet. The adapter layer design is still valid — swap asupersync::http for reqwest behind same crate::http::Client API.\n\n## Files Changed\n- rust-toolchain.toml (NEW, 3 LOC)\n- Cargo.toml (temporary test dep, will be finalized in Phase 3a)\n- Throwaway test binary or integration test for smoke tests\n\n## Output\nDocument results: pass/fail for each of the 5 smoke tests + API shape verification. If all pass, update this bead with results and close.","status":"open","priority":0,"issue_type":"task","created_at":"2026-03-06T18:39:05.093888Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:55:28.333301Z","compaction_level":0,"original_size":0,"labels":["asupersync","gate"],"dependencies":[{"issue_id":"bd-1lli","depends_on_id":"bd-1iuj","type":"blocks","created_at":"2026-03-06T18:42:49.101418Z","created_by":"tayloreernisse"},{"issue_id":"bd-1lli","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:39:05.097877Z","created_by":"tayloreernisse"},{"issue_id":"bd-1lli","depends_on_id":"bd-1p58","type":"blocks","created_at":"2026-03-06T18:55:28.333282Z","created_by":"tayloreernisse"},{"issue_id":"bd-1lli","depends_on_id":"bd-21fb","type":"blocks","created_at":"2026-03-06T18:55:28.144786Z","created_by":"tayloreernisse"},{"issue_id":"bd-1lli","depends_on_id":"bd-2xct","type":"blocks","created_at":"2026-03-06T18:55:28.241144Z","created_by":"tayloreernisse"}]} +{"id":"bd-1lli","title":"Decision Gate: Verify nightly + asupersync compile and behavioral smoke tests","description":"## What\nBefore investing in Phases 1-3, verify that asupersync compiles on the pinned nightly with tls-native-roots on macOS AND that HTTP behavior and API shape are correct. 
This is a GO/NO-GO gate.\n\n## Why\nCompile-only gating is insufficient — the migration failure modes are semantic (HTTP behavior parity), not just syntactic. A broken TLS stack, missing timeout support, or different-than-assumed API shape would make the entire migration unviable.\n\n## Implementation\n\n1. Create rust-toolchain.toml with pinned nightly date:\n\\`\\`\\`toml\n[toolchain]\nchannel = \\\"nightly-2026-03-01\\\"\n\\`\\`\\`\n\n2. Add asupersync to Cargo.toml temporarily (can be in a test binary):\n\\`\\`\\`toml\nasupersync = { version = \\\"0.2\\\", features = [\\\"tls\\\", \\\"tls-native-roots\\\"] }\n\\`\\`\\`\n\n3. Build and verify compilation succeeds.\n\n4. **API shape verification** — confirm the following APIs exist and match plan assumptions:\n a. HttpClient::with_config(HttpClientConfig) — connection pool configuration\n b. HttpClient::request(Method, url, headers, body) -> response with status/reason/headers/body\n c. asupersync::time::timeout(duration, future) — per-request timeout wrapper\n d. asupersync::time::sleep(duration) — rate limiter backoff replacement\n e. #[asupersync::main] macro — entrypoint with cx: &Cx parameter\n f. #[asupersync::test] macro — test attribute\n g. cx.spawn(name, async closure) — named task spawning\n h. cx.region(|scope| async { scope.spawn(...) }) — region-scoped fan-out\n i. cx.shutdown_signal().await — signal handling\n If ANY of these don't exist or have significantly different signatures, document the delta and assess rework cost.\n\n5. Run behavioral smoke tests in a throwaway binary or integration test:\n a. TLS validation: HTTPS GET to a public endpoint succeeds with valid cert\n b. DNS resolution: Request using hostname (not IP) resolves correctly\n c. Redirect handling: GET to a 301/302 endpoint — verify adapter returns redirect status\n d. Timeout behavior: Request to slow/non-responsive endpoint times out within configured duration\n e. 
Connection pooling: 4 sequential requests to same host reuse connections\n\n## Decision Criteria\n- If compilation fails: STOP. Evaluate tokio CancellationToken fallback.\n- If TLS doesn't work on macOS: STOP. Try tls-webpki-roots. If that fails too, fallback.\n- If timeouts don't fire: STOP. Core requirement.\n- If API shape differs significantly (>2 day rework): STOP. Evaluate fallback.\n- If all pass: PROCEED to Phases 1-3.\n\n## Escape Hatch (documented in epic)\nIf gate fails, fall back to: tokio + CancellationToken + JoinSet. The adapter layer design is still valid — swap asupersync::http for reqwest behind same crate::http::Client API.\n\n## Files Changed\n- rust-toolchain.toml (NEW, 3 LOC)\n- Cargo.toml (temporary test dep, will be finalized in Phase 3a)\n- Throwaway test binary or integration test for smoke tests\n\n## Output\nDocument results: pass/fail for each of the 5 smoke tests + API shape verification. If all pass, update this bead with results and close.","status":"closed","priority":0,"issue_type":"task","created_at":"2026-03-06T18:39:05.093888Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.563988Z","closed_at":"2026-03-06T21:11:12.563939Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration 
branch","compaction_level":0,"original_size":0,"labels":["asupersync","gate"],"dependencies":[{"issue_id":"bd-1lli","depends_on_id":"bd-1iuj","type":"blocks","created_at":"2026-03-06T18:42:49.101418Z","created_by":"tayloreernisse"},{"issue_id":"bd-1lli","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:39:05.097877Z","created_by":"tayloreernisse"},{"issue_id":"bd-1lli","depends_on_id":"bd-1p58","type":"blocks","created_at":"2026-03-06T18:55:28.333282Z","created_by":"tayloreernisse"},{"issue_id":"bd-1lli","depends_on_id":"bd-21fb","type":"blocks","created_at":"2026-03-06T18:55:28.144786Z","created_by":"tayloreernisse"},{"issue_id":"bd-1lli","depends_on_id":"bd-2xct","type":"blocks","created_at":"2026-03-06T18:55:28.241144Z","created_by":"tayloreernisse"}]} {"id":"bd-1m8","title":"Extend 'lore stats --check' for event table integrity and queue health","description":"## Background\nThe existing stats --check command validates data integrity. Need to extend it for event tables (referential integrity) and dependent job queue health (stuck locks, retryable jobs). This provides operators and agents a way to detect data quality issues after sync.\n\n## Approach\nExtend src/cli/commands/stats.rs check mode:\n\n**New checks:**\n\n1. Event FK integrity:\n```sql\n-- Orphaned state events (issue_id points to non-existent issue)\nSELECT COUNT(*) FROM resource_state_events rse\nWHERE rse.issue_id IS NOT NULL\n AND NOT EXISTS (SELECT 1 FROM issues i WHERE i.id = rse.issue_id);\n-- (repeat for merge_request_id, and for label + milestone event tables)\n```\n\n2. 
Queue health:\n```sql\n-- Pending jobs by type\nSELECT job_type, COUNT(*) FROM pending_dependent_fetches GROUP BY job_type;\n-- Stuck locks (locked_at older than 5 minutes)\nSELECT COUNT(*) FROM pending_dependent_fetches WHERE locked_at IS NOT NULL AND locked_at < ?;\n-- Retryable jobs (attempts > 0, not locked)\nSELECT COUNT(*) FROM pending_dependent_fetches WHERE attempts > 0 AND locked_at IS NULL;\n-- Max attempts (jobs that may be permanently failing)\nSELECT job_type, MAX(attempts) FROM pending_dependent_fetches GROUP BY job_type;\n```\n\n3. Human output per check: PASS / WARN / FAIL with counts\n```\nEvent FK integrity: PASS (0 orphaned events)\nQueue health: WARN (3 stuck locks, 12 retryable jobs)\n```\n\n4. Robot JSON: structured health report\n```json\n{\n \"event_integrity\": {\n \"status\": \"pass\",\n \"orphaned_state_events\": 0,\n \"orphaned_label_events\": 0,\n \"orphaned_milestone_events\": 0\n },\n \"queue_health\": {\n \"status\": \"warn\",\n \"pending_by_type\": {\"resource_events\": 5, \"mr_closes_issues\": 2},\n \"stuck_locks\": 3,\n \"retryable_jobs\": 12,\n \"max_attempts_by_type\": {\"resource_events\": 5}\n }\n}\n```\n\n## Acceptance Criteria\n- [ ] Detects orphaned events (FK target missing)\n- [ ] Detects stuck locks (locked_at older than threshold)\n- [ ] Reports retryable job count and max attempts\n- [ ] Human output shows PASS/WARN/FAIL per check\n- [ ] Robot JSON matches structured schema\n- [ ] Graceful when event/queue tables don't exist\n\n## Files\n- src/cli/commands/stats.rs (extend check mode)\n\n## TDD Loop\nRED: tests/stats_check_tests.rs:\n- `test_stats_check_events_pass` - clean data, verify PASS\n- `test_stats_check_events_orphaned` - delete an issue with events remaining, verify FAIL count\n- `test_stats_check_queue_stuck_locks` - set old locked_at, verify WARN\n- `test_stats_check_queue_retryable` - fail some jobs, verify retryable count\n\nGREEN: Add the check queries and formatting\n\nVERIFY: `cargo test stats_check -- 
--nocapture`\n\n## Edge Cases\n- FK with CASCADE should prevent orphaned events in normal operation — but manual DB edits or bugs could cause them\n- Tables may not exist if migration 011 not applied — check table existence before querying\n- Empty queue is PASS (not WARN for \"no jobs found\")\n- Distinguish between \"0 stuck locks\" (good) and \"queue table doesn't exist\" (skip check)","status":"closed","priority":3,"issue_type":"task","created_at":"2026-02-02T21:31:57.422916Z","created_by":"tayloreernisse","updated_at":"2026-02-03T16:23:13.409909Z","closed_at":"2026-02-03T16:23:13.409717Z","close_reason":"Extended IntegrityResult with orphan_state/label/milestone_events and queue_stuck_locks/queue_max_attempts. Added FK integrity queries for all 3 event tables and queue health checks. Updated human output with PASS/WARN/FAIL indicators and robot JSON.","compaction_level":0,"original_size":0,"labels":["cli","gate-1","phase-b"],"dependencies":[{"issue_id":"bd-1m8","depends_on_id":"bd-2zl","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1m8","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1m8","depends_on_id":"bd-tir","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1mf","title":"[CP1] gi sync-status enhancement","description":"Enhance sync-status from CP0 stub to show issue cursors.\n\nOutput:\n- Last run timestamp and duration\n- Cursor positions per project (issues resource_type)\n- Entity counts (issues, discussions, notes)\n\nFiles: src/cli/commands/sync-status.ts (update existing)\nDone when: Shows cursor positions and counts after 
ingestion","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T15:20:36.449088Z","created_by":"tayloreernisse","updated_at":"2026-01-25T15:21:35.157235Z","closed_at":"2026-01-25T15:21:35.157235Z","deleted_at":"2026-01-25T15:21:35.157232Z","deleted_by":"tayloreernisse","delete_reason":"delete","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-1mju","title":"Vertical slice integration test + SLO verification","description":"## Background\nThe vertical slice gate validates that core screens work together end-to-end with real data flows and meet performance SLOs. This is a manual + automated verification pass.\n\n## Approach\nCreate integration tests in crates/lore-tui/tests/:\n- test_full_nav_flow: Dashboard -> press i -> IssueList loads -> press Enter -> IssueDetail loads -> press Esc -> back to IssueList with cursor preserved -> press H -> Dashboard\n- test_filter_requery: IssueList -> type filter -> verify re-query triggers and results update\n- test_stale_result_guard: rapidly navigate between screens, verify no stale data displayed\n- Performance benchmarks: run M-tier fixture, measure p95 nav latency, assert < 75ms\n- Stuck-input check: fuzz InputMode transitions, assert always recoverable via Esc or Ctrl+C\n- Cancel latency: start sync, cancel, measure time to acknowledgment, assert < 2s\n\n## Acceptance Criteria\n- [ ] Full nav flow test passes without panic\n- [ ] Filter re-query test shows updated results\n- [ ] No stale data displayed during rapid navigation\n- [ ] p95 nav latency < 75ms on M-tier fixtures\n- [ ] Zero stuck-input states across 1000 random key sequences\n- [ ] Sync cancel acknowledged p95 < 2s\n- [ ] All state preserved correctly on back-navigation\n\n## Files\n- CREATE: crates/lore-tui/tests/vertical_slice.rs\n\n## TDD Anchor\nRED: Write test_dashboard_to_issue_detail_roundtrip that navigates Dashboard -> IssueList -> IssueDetail -> Esc -> IssueList, asserts cursor position 
preserved.\nGREEN: Ensure all navigation and state preservation is wired up.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml vertical_slice\n\n## Edge Cases\n- Tests need FakeClock and synthetic DB fixtures (not real GitLab)\n- ftui test harness required for rendering tests without TTY\n- Performance benchmarks may vary by machine — use relative thresholds\n\n## Dependency Context\nRequires all Phase 2 screens: Dashboard, Issue List, Issue Detail, MR List, MR Detail.\nRequires NavigationStack, TaskSupervisor, DbManager from Phase 1.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T17:00:18.310264Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:33.796953Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-1mju","depends_on_id":"bd-3pxe","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1mju","depends_on_id":"bd-3t1b","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1mju","depends_on_id":"bd-3ty8","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1mju","depends_on_id":"bd-8ab7","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -75,7 +75,7 @@ {"id":"bd-1oi7","title":"NOTE-2A: Schema migration for note documents (migration 024)","description":"## Background\nThe documents and dirty_sources tables have CHECK constraints limiting source_type to ('issue', 'merge_request', 'discussion'). Need to add 'note' as valid source_type. SQLite doesn't support ALTER CONSTRAINT, so use the table-rebuild pattern. Uses migration slot 024 (022 = query indexes, 023 = issue_detail_fields already exists).\n\n## Approach\nCreate migrations/024_note_documents.sql:\n\n1. Rebuild dirty_sources: CREATE dirty_sources_new with CHECK adding 'note', INSERT SELECT, DROP old, RENAME.\n2. 
Rebuild documents (complex — must preserve FTS consistency):\n - Save junction table data (_doc_labels_backup, _doc_paths_backup)\n - Drop FTS triggers (documents_ai, documents_ad, documents_au — defined in migration 008_fts5.sql)\n - Drop junction tables (document_labels, document_paths — defined in migration 007_documents.sql)\n - Create documents_new with updated CHECK adding 'note'\n - INSERT INTO documents_new SELECT * FROM documents (preserves rowids for FTS)\n - Drop documents, rename new\n - Recreate all indexes (idx_documents_project_updated, idx_documents_author, idx_documents_source, idx_documents_content_hash — see migration 007_documents.sql for definitions)\n - Recreate junction tables + restore data from backups\n - Recreate FTS triggers (see migration 008_fts5.sql for trigger SQL)\n - INSERT INTO documents_fts(documents_fts) VALUES('rebuild')\n3. Defense-in-depth triggers:\n - notes_ad_cleanup: AFTER DELETE ON notes WHEN old.is_system = 0 → delete doc + dirty_sources for source_type='note', source_id=old.id\n - notes_au_system_cleanup: AFTER UPDATE OF is_system ON notes WHEN NEW.is_system = 1 AND OLD.is_system = 0 → delete doc + dirty_sources\n4. Drop temp backup tables\n\nRegister as (\"024\", include_str!(\"../../migrations/024_note_documents.sql\")) in MIGRATIONS array in src/core/db.rs. 
Position AFTER the \"023\" entry.\n\n## Files\n- CREATE: migrations/024_note_documents.sql\n- MODIFY: src/core/db.rs (add (\"024\", include_str!(...)) to MIGRATIONS array, after line 75)\n\n## TDD Anchor\nRED: test_migration_024_allows_note_source_type — INSERT with source_type='note' should succeed in both documents and dirty_sources.\nGREEN: Implement the table rebuild migration.\nVERIFY: cargo test migration_024 -- --nocapture\nTests: test_migration_024_preserves_existing_data, test_migration_024_fts_triggers_intact, test_migration_024_row_counts_preserved, test_migration_024_integrity_checks_pass, test_migration_024_fts_rebuild_consistent, test_migration_024_note_delete_trigger_cleans_document, test_migration_024_note_system_flip_trigger_cleans_document, test_migration_024_system_note_delete_trigger_does_not_fire\n\n## Acceptance Criteria\n- [ ] INSERT source_type='note' succeeds in documents and dirty_sources\n- [ ] All existing data preserved through table rebuild (row counts match before/after)\n- [ ] FTS triggers fire correctly after rebuild (insert a doc, verify FTS entry exists)\n- [ ] documents_fts row count == documents row count after rebuild\n- [ ] PRAGMA foreign_key_check returns no violations\n- [ ] notes_ad_cleanup trigger fires on note deletion (deletes document + dirty_sources)\n- [ ] notes_au_system_cleanup trigger fires when is_system flips 0→1\n- [ ] System note deletion does NOT trigger notes_ad_cleanup (is_system = 1 guard)\n- [ ] All 9 tests pass\n\n## Edge Cases\n- Rowid preservation: INSERT INTO documents_new SELECT * preserves id column = rowid for FTS consistency\n- CRITICAL: Must save/restore junction table data (ON DELETE CASCADE on document_labels/document_paths would delete them when documents table is dropped)\n- The FTS rebuild at end is a safety net for any rowid drift\n- Empty database: migration is a no-op (all SELECTs return 0 rows, tables rebuilt with new 
CHECK)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T17:01:35.164340Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:24.078558Z","closed_at":"2026-02-12T18:13:24.078512Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["per-note","search"],"dependencies":[{"issue_id":"bd-1oi7","depends_on_id":"bd-18bf","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1oi7","depends_on_id":"bd-22ai","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1oi7","depends_on_id":"bd-ef0u","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1oo","title":"Register migration 015 in db.rs and create migration 016 for mr_file_changes","description":"## Background\n\nThis bead creates the `mr_file_changes` table that stores which files each MR touched, enabling Gate 4 (file-history) and Gate 5 (trace). It maps MRs to the file paths they modify.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 4.1 (Schema).\n\n## Codebase Context — CRITICAL Migration Numbering\n\n- **LATEST_SCHEMA_VERSION = 14** (MIGRATIONS array in db.rs includes 001-014)\n- **Migration 015 exists on disk** (`migrations/015_commit_shas_and_closes_watermark.sql`) but is **NOT registered** in `src/core/db.rs` MIGRATIONS array\n- `merge_commit_sha` and `squash_commit_sha` are already on merge_requests (added by 015 SQL) and already used in `src/ingestion/merge_requests.rs`\n- `closes_issues_synced_for_updated_at` also added by 015 and used in orchestrator.rs\n- **This bead must FIRST register migration 015 in db.rs**, then create migration 016 for mr_file_changes\n- pending_dependent_fetches already has `job_type='mr_diffs'` in CHECK constraint (migration 011)\n- Schema version auto-computes: `LATEST_SCHEMA_VERSION = MIGRATIONS.len() as i32`\n\n## Approach\n\n### Step 1: Register existing migration 015 in 
db.rs\n\nAdd to MIGRATIONS array in `src/core/db.rs` (after the \"014\" entry):\n\n```rust\n(\n \"015\",\n include_str!(\"../../migrations/015_commit_shas_and_closes_watermark.sql\"),\n),\n```\n\nThis makes LATEST_SCHEMA_VERSION = 15.\n\n### Step 2: Create migration 016 for mr_file_changes\n\nCreate `migrations/016_mr_file_changes.sql`:\n\n```sql\n-- Migration 016: MR file changes table\n-- Powers file-history and trace commands (Gates 4-5)\n\nCREATE TABLE mr_file_changes (\n id INTEGER PRIMARY KEY,\n merge_request_id INTEGER NOT NULL REFERENCES merge_requests(id) ON DELETE CASCADE,\n project_id INTEGER NOT NULL REFERENCES projects(id) ON DELETE CASCADE,\n old_path TEXT,\n new_path TEXT NOT NULL,\n change_type TEXT NOT NULL CHECK (change_type IN ('added', 'modified', 'renamed', 'deleted')),\n UNIQUE(merge_request_id, new_path)\n);\n\nCREATE INDEX idx_mfc_project_path ON mr_file_changes(project_id, new_path);\nCREATE INDEX idx_mfc_project_old_path ON mr_file_changes(project_id, old_path) WHERE old_path IS NOT NULL;\nCREATE INDEX idx_mfc_mr ON mr_file_changes(merge_request_id);\nCREATE INDEX idx_mfc_renamed ON mr_file_changes(project_id, change_type) WHERE change_type = 'renamed';\n\nINSERT INTO schema_version (version, applied_at, description)\nVALUES (16, strftime('%s', 'now') * 1000, 'MR file changes table');\n```\n\n### Step 3: Register migration 016 in db.rs\n\n```rust\n(\n \"016\",\n include_str!(\"../../migrations/016_mr_file_changes.sql\"),\n),\n```\n\nLATEST_SCHEMA_VERSION will auto-compute to 16.\n\n## Acceptance Criteria\n\n- [ ] Migration 015 registered in MIGRATIONS array in src/core/db.rs\n- [ ] Migration file exists at `migrations/016_mr_file_changes.sql`\n- [ ] `mr_file_changes` table has columns: id, merge_request_id, project_id, old_path, new_path, change_type\n- [ ] UNIQUE constraint on (merge_request_id, new_path)\n- [ ] CHECK constraint on change_type: added, modified, renamed, deleted\n- [ ] 4 indexes: project+new_path, project+old_path 
(partial), mr_id, project+renamed (partial)\n- [ ] Migration 016 registered in MIGRATIONS array\n- [ ] LATEST_SCHEMA_VERSION auto-computes to 16\n- [ ] `lore migrate` applies both 015 and 016 successfully on a v14 database\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/core/db.rs` (register migrations 015 AND 016 in MIGRATIONS array)\n- `migrations/016_mr_file_changes.sql` (NEW)\n\n## TDD Loop\n\nRED: `lore migrate` on v14 database says \"already up to date\" (015 not registered)\n\nGREEN: Register 015 in db.rs, create 016 file, register 016 in db.rs. `lore migrate` applies both.\n\nVERIFY:\n```bash\ncargo check --all-targets\nlore --robot migrate\nsqlite3 ~/.local/share/lore/lore.db '.schema mr_file_changes'\nsqlite3 ~/.local/share/lore/lore.db \"SELECT version FROM schema_version ORDER BY version DESC LIMIT 1\"\n```\n\n## Edge Cases\n\n- Databases already at v15 via manual migration: 015 will be skipped, only 016 applied\n- old_path is NULL for added files, populated for renamed/deleted\n- No lines_added/lines_removed columns (spec does not require them; removed to match spec exactly)\n- Partial indexes only index relevant rows for rename chain BFS performance\n","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:34:08.837816Z","created_by":"tayloreernisse","updated_at":"2026-02-05T21:40:46.766136Z","closed_at":"2026-02-05T21:40:46.766074Z","close_reason":"Completed: registered migration 015 in db.rs MIGRATIONS array, created migration 016 (mr_file_changes table with 4 indexes, CHECK constraint, UNIQUE constraint), registered 016 in db.rs. LATEST_SCHEMA_VERSION auto-computes to 16. 
cargo check, clippy, and fmt all pass.","compaction_level":0,"original_size":0,"labels":["gate-4","phase-b","schema"],"dependencies":[{"issue_id":"bd-1oo","depends_on_id":"bd-14q","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1oo","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1oyf","title":"NOTE-1D: robot-docs integration for notes command","description":"## Background\nAdd the notes command to the robot-docs manifest so agents can discover it. Also forward-prep SearchArgs --type to accept \"note\"/\"notes\" (duplicates work in NOTE-2F but is safe to do early).\n\n## Approach\n1. Robot-docs manifest is in src/main.rs, function handle_robot_docs() starting at line 2087. The commands JSON is built at line 2090 with serde_json::json!. Add a \"notes\" entry following the pattern of \"issues\" (line 2107 area) and \"mrs\" entries:\n\n \"notes\": {\n \"description\": \"List notes from discussions with rich filtering\",\n \"flags\": [\"--limit/-n \", \"--author/-a \", \"--note-type \", \"--contains \", \"--for-issue \", \"--for-mr \", \"-p/--project \", \"--since \", \"--until \", \"--path \", \"--resolution \", \"--sort \", \"--asc\", \"--include-system\", \"--note-id \", \"--gitlab-note-id \", \"--discussion-id \", \"--format \", \"--fields \", \"--open\"],\n \"robot_flags\": [\"--format json\", \"--fields minimal\"],\n \"example\": \"lore --robot notes --author jdefting --since 1y --format json --fields minimal\",\n \"response_schema\": {\n \"ok\": \"bool\",\n \"data\": {\"notes\": \"[NoteListRowJson]\", \"total_count\": \"int\", \"showing\": \"int\"},\n \"meta\": {\"elapsed_ms\": \"int\"}\n }\n }\n\n2. Update SearchArgs.source_type value_parser in src/cli/mod.rs (line 560) to include \"note\":\n value_parser = [\"issue\", \"mr\", \"discussion\", \"note\"]\n (This is also done in NOTE-2F but is safe to do in either order — value_parser is additive)\n\n3. 
Add \"notes\" to the command list in handle_robot_docs (line 662 area where command names are listed).\n\n## Files\n- MODIFY: src/main.rs (add notes to robot-docs commands JSON at line 2090 area, add to command list at line 662)\n- MODIFY: src/cli/mod.rs (add \"note\" to SearchArgs source_type value_parser at line 560)\n\n## TDD Anchor\nSmoke test: cargo run -- --robot robot-docs | jq '.data.commands.notes' should return the notes command entry.\nVERIFY: cargo test -- --nocapture (no dedicated test needed — robot-docs is a static JSON generator)\n\n## Acceptance Criteria\n- [ ] lore robot-docs output includes notes command with all flags\n- [ ] notes command has response_schema, example, and robot_flags\n- [ ] SearchArgs accepts --type note\n- [ ] All existing tests still pass\n\n## Dependency Context\n- Depends on NOTE-1A (bd-20p9), NOTE-1B (bd-3iod), NOTE-1C (bd-25hb): command must be fully wired before documenting (the manifest should describe actual working behavior)\n\n## Edge Cases\n- robot-docs --brief mode: notes command should still appear in brief output\n- Value parser order doesn't matter — \"note\" can be added at any position in the array","status":"closed","priority":3,"issue_type":"task","created_at":"2026-02-12T17:01:04.191582Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:15.359505Z","closed_at":"2026-02-12T18:13:15.359457Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["cli","per-note","search"]} -{"id":"bd-1p58","title":"Phase 0c: Replace tokio::join! with futures::join!","description":"## What\nReplace tokio::join! with futures::join! at 2 sites in gitlab/client.rs (lines ~741, 748).\n\n## Why\nfutures::join! is runtime-agnostic and already in deps (futures crate). This removes a tokio dependency without changing any behavior.\n\n## Rearchitecture Context (2026-03-06)\ngitlab/client.rs was modified but not moved during the rearchitecture. 
Line numbers shifted slightly from the original (~729, 736) to approximately (~741, 748). The file remains at src/gitlab/client.rs.\n\n## Implementation\n```rust\n// Before\nuse tokio::join;\nlet (result_a, result_b) = tokio::join!(future_a, future_b);\n\n// After\nuse futures::join;\nlet (result_a, result_b) = futures::join!(future_a, future_b);\n```\n\n## Files Changed\n- src/gitlab/client.rs (~4 LOC changed: 2 import + 2 macro calls, at lines ~741, 748)\n\n## Testing\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n- Existing pagination tests should pass unchanged\n\n## Post-Phase 0 Remaining Tokio\nAfter 0a-0c, remaining tokio in production code is minimal:\n- #[tokio::main] (1 site in src/main.rs)\n- tokio::spawn + tokio::signal::ctrl_c (4 handlers in src/app/handlers.rs, to be consolidated in Phase 0a)\n- tokio::time::sleep (1 import)","status":"open","priority":2,"issue_type":"task","created_at":"2026-03-06T18:38:27.847787Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:53:53.528155Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-0"],"dependencies":[{"issue_id":"bd-1p58","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:38:27.850127Z","created_by":"tayloreernisse"}]} +{"id":"bd-1p58","title":"Phase 0c: Replace tokio::join! with futures::join!","description":"## What\nReplace tokio::join! with futures::join! at 2 sites in gitlab/client.rs (lines ~741, 748).\n\n## Why\nfutures::join! is runtime-agnostic and already in deps (futures crate). This removes a tokio dependency without changing any behavior.\n\n## Rearchitecture Context (2026-03-06)\ngitlab/client.rs was modified but not moved during the rearchitecture. Line numbers shifted slightly from the original (~729, 736) to approximately (~741, 748). 
The file remains at src/gitlab/client.rs.\n\n## Implementation\n```rust\n// Before\nuse tokio::join;\nlet (result_a, result_b) = tokio::join!(future_a, future_b);\n\n// After\nuse futures::join;\nlet (result_a, result_b) = futures::join!(future_a, future_b);\n```\n\n## Files Changed\n- src/gitlab/client.rs (~4 LOC changed: 2 import + 2 macro calls, at lines ~741, 748)\n\n## Testing\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n- Existing pagination tests should pass unchanged\n\n## Post-Phase 0 Remaining Tokio\nAfter 0a-0c, remaining tokio in production code is minimal:\n- #[tokio::main] (1 site in src/main.rs)\n- tokio::spawn + tokio::signal::ctrl_c (4 handlers in src/app/handlers.rs, to be consolidated in Phase 0a)\n- tokio::time::sleep (1 import)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-03-06T18:38:27.847787Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.557306Z","closed_at":"2026-03-06T21:11:12.557264Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-0"],"dependencies":[{"issue_id":"bd-1p58","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:38:27.850127Z","created_by":"tayloreernisse"}]} {"id":"bd-1pzj","title":"Implement responsive layout system (LORE_BREAKPOINTS + Responsive)","description":"## Background\n\nEvery TUI view needs to adapt its layout to terminal width. The PRD defines a project-wide breakpoint system using FrankenTUI native `Responsive` with 5 tiers: Xs (<60), Sm (60-89), Md (90-119), Lg (120-159), Xl (160+). This is cross-cutting infrastructure — Dashboard uses it for 1/2/3-column layout, Issue/MR lists use it for column visibility, Search/Who use it for split-pane toggling. Without this, every view would reinvent breakpoint logic.\n\n## Approach\n\nDefine a single `layout.rs` module in the TUI crate with:\n\n1. 
**`LORE_BREAKPOINTS` constant** — `Breakpoints::new(60, 90, 120)` (Xl defaults to Lg + 40 = 160)\n2. **`classify_width(width: u16) -> Breakpoint`** — wrapper around `LORE_BREAKPOINTS.classify_width(area.width)`\n3. **Helper functions** using `Responsive`:\n - `dashboard_columns(bp: Breakpoint) -> u16` — 1 (Xs/Sm), 2 (Md), 3 (Lg/Xl)\n - `show_preview_pane(bp: Breakpoint) -> bool` — false (Xs/Sm), true (Md+)\n - `table_columns(bp: Breakpoint, screen: Screen) -> Vec` — returns visible column set per breakpoint per screen\n\nReference FrankenTUI types:\n```rust\nuse ftui::layout::{Breakpoints, Breakpoint, Responsive, Flex, Constraint};\npub const LORE_BREAKPOINTS: Breakpoints = Breakpoints::new(60, 90, 120);\n```\n\nThe `Responsive` wrapper provides breakpoint-aware values with inheritance — `.new(base)` sets Xs, `.at(bp, val)` sets overrides, `.resolve_cloned(bp)` walks downward.\n\n## Acceptance Criteria\n- [ ] `LORE_BREAKPOINTS` constant defined with sm=60, md=90, lg=120\n- [ ] `classify_width()` returns correct Breakpoint for widths: 40->Xs, 60->Sm, 90->Md, 120->Lg, 160->Xl\n- [ ] `dashboard_columns()` returns 1 for Xs/Sm, 2 for Md, 3 for Lg/Xl\n- [ ] `show_preview_pane()` returns false for Xs/Sm, true for Md+\n- [ ] All helpers use `Responsive` (not bare match), so inheritance is automatic\n- [ ] Module is `pub` and importable by all view modules\n\n## Files\n- CREATE: crates/lore-tui/src/layout.rs\n- MODIFY: crates/lore-tui/src/lib.rs (add `pub mod layout;`)\n\n## TDD Anchor\nRED: Write `test_classify_width_boundaries` that asserts classify_width(59)=Xs, classify_width(60)=Sm, classify_width(89)=Sm, classify_width(90)=Md, classify_width(119)=Md, classify_width(120)=Lg, classify_width(159)=Lg, classify_width(160)=Xl.\nGREEN: Implement LORE_BREAKPOINTS and classify_width().\nVERIFY: cargo test -p lore-tui classify_width\n\nAdditional tests:\n- test_dashboard_columns_per_breakpoint\n- test_show_preview_pane_per_breakpoint\n- test_responsive_inheritance (Sm inherits 
from Xs when no Sm override set)\n\n## Edge Cases\n- Terminal width of 0 or 1 must not panic — classify to Xs\n- Very wide terminals (>300 cols) should still work — classify to Xl\n- All Responsive values must have an Xs base so resolve never fails\n\n## Dependency Context\n- Depends on bd-3ddw (crate scaffold) which creates the crates/lore-tui/ workspace\n- Consumed by all view beads (bd-35g5 Dashboard, bd-3ei1 Issue List, bd-2kr0 MR List, bd-1zow Search, bd-u7se Who, etc.) for layout decisions","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T19:29:34.077080Z","created_by":"tayloreernisse","updated_at":"2026-02-12T19:29:46.129189Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-1pzj","depends_on_id":"bd-3ddw","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1q8z","title":"WHO: Epic — People Intelligence Commands","description":"## Background\n\nThe current beads roadmap focuses on Gate 4/5 (file-history, code-trace) — archaeology queries requiring mr_file_changes data that does not exist yet. Meanwhile, the DB has rich people/activity data (280K notes, 210K discussions, 33K DiffNotes with file positions, 53 active participants) that can answer collaboration questions immediately with zero new tables or API calls.\n\n## Scope\n\nThis epic builds `lore who` — a pure SQL query layer answering 5 questions:\n1. **Expert**: \"Who should I talk to about this feature/file?\" (DiffNote path analysis)\n2. **Workload**: \"What is person X working on?\" (open issues, authored/reviewing MRs, unresolved discussions)\n3. **Reviews**: \"What review patterns does person X have?\" (DiffNote **prefix** category extraction)\n4. **Active**: \"What discussions are actively in progress?\" (unresolved resolvable discussions)\n5. 
**Overlap**: \"Who else has MRs/notes touching my files?\" (path-based activity overlap)\n\n## Plan Reference\n\nFull implementation plan with 8 iterations of review: `docs/who-command-design.md`\n\n## Children (Execution Order)\n\n1. **bd-34rr** — Migration 017: 5 composite indexes for query performance\n2. **bd-2rk9** — CLI skeleton: WhoArgs, Commands::Who, dispatch, stub file\n3. **bd-2ldg** — Mode resolution, path helpers, run_who entry point, all result types\n4. **bd-zqpf** — Expert mode query (CTE + MR-breadth scoring)\n5. **bd-s3rc** — Workload mode query (4 SELECT queries)\n6. **bd-m7k1** — Active mode query (CTE + global/scoped SQL variants)\n7. **bd-b51e** — Overlap mode query (dual role tracking + accumulator)\n8. **bd-2711** — Reviews mode query (prefix extraction + normalization)\n9. **bd-1rdi** — Human terminal output for all 5 modes\n10. **bd-3mj2** — Robot JSON output for all 5 modes\n11. **bd-tfh3** — Comprehensive test suite (20+ tests)\n12. **bd-zibc** — VALID_COMMANDS + robot-docs manifest\n13. **bd-g0d5** — Verification gate (check, clippy, fmt, EXPLAIN QUERY PLAN)\n\n## Design Principles (from plan)\n\n- All SQL fully static — no format!() for query text, LIMIT bound as ?N\n- prepare_cached() everywhere for statement caching\n- (?N IS NULL OR ...) 
nullable binding except Active mode (two SQL variants for index selection)\n- Self-review exclusion on all DiffNote-based branches\n- Deterministic output: sorted GROUP_CONCAT, sorted HashSet-derived vectors, stable tie-breakers\n- Truncation transparency: LIMIT+1 pattern with truncated bool\n- Bounded payloads: capped arrays with *_total + *_truncated metadata\n- Robot-first reproducibility: input + resolved_input with since_mode tri-state\n\n## Files\n\n| File | Action | Description |\n|---|---|---|\n| `src/cli/commands/who.rs` | CREATE | All 5 query modes + human/robot output |\n| `src/cli/commands/mod.rs` | MODIFY | Add `pub mod who` + re-exports |\n| `src/cli/mod.rs` | MODIFY | Add `WhoArgs` struct + `Commands::Who` variant |\n| `src/main.rs` | MODIFY | Add dispatch arm + `handle_who` fn + VALID_COMMANDS + robot-docs |\n| `src/core/db.rs` | MODIFY | Add migration 017: composite indexes for who query paths |\n\n## TDD Loop\n\nEach child bead has its own RED/GREEN/VERIFY cycle. The epic TDD strategy:\n- RED: Tests in bd-tfh3 (written alongside query beads)\n- GREEN: Query implementations in bd-zqpf, bd-s3rc, bd-m7k1, bd-b51e, bd-2711\n- VERIFY: bd-g0d5 runs `cargo test` + `cargo clippy` + EXPLAIN QUERY PLAN\n\n## Acceptance Criteria\n\n- [ ] `lore who src/path/` shows ranked experts with scores\n- [ ] `lore who @username` shows workload across all projects\n- [ ] `lore who @username --reviews` shows categorized review patterns\n- [ ] `lore who --active` shows unresolved discussions\n- [ ] `lore who --overlap src/path/` shows other contributors\n- [ ] `lore who --path README.md` handles root files\n- [ ] `lore -J who ...` produces valid JSON with input + resolved_input\n- [ ] All indexes verified via EXPLAIN QUERY PLAN\n- [ ] cargo check + clippy + fmt + test all pass\n\n## Edge Cases\n\n- This epic has zero new tables — all queries are pure SQL over existing schema + migration 017 indexes\n- Gate 4/5 beads are NOT dependencies — who command works independently 
with current data\n- If DB has <1000 notes, queries will work but results will be sparse — this is expected for fresh installations\n- format_relative_time() is duplicated from list.rs intentionally (private fn, small blast radius > refactoring shared module)\n- lookup_project_path() is local to who.rs — single invocation per run, does not warrant shared utility","status":"closed","priority":1,"issue_type":"epic","created_at":"2026-02-08T02:39:39.538892Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:38.665143Z","closed_at":"2026-02-08T04:10:38.665094Z","close_reason":"All 13 child beads implemented: migration 017 (5 composite indexes), CLI skeleton with WhoArgs/dispatch/robot-docs, 5 query modes (expert/workload/active/overlap/reviews), human terminal + robot JSON output, 20 tests. All quality gates pass: cargo check, clippy (pedantic+nursery), fmt, test.","compaction_level":0,"original_size":0} {"id":"bd-1qf","title":"[CP1] Discussion and note transformers","description":"## Background\n\nDiscussion and note transformers convert GitLab API discussion responses into our normalized schema. They compute derived fields like `first_note_at`, `last_note_at`, resolvable/resolved status, and note positions. 
These are pure functions with no I/O.\n\n## Approach\n\nCreate transformer module with:\n\n### Structs\n\n```rust\n// src/gitlab/transformers/discussion.rs\n\npub struct NormalizedDiscussion {\n pub gitlab_discussion_id: String,\n pub project_id: i64,\n pub issue_id: i64,\n pub noteable_type: String, // \"Issue\"\n pub individual_note: bool,\n pub first_note_at: Option<i64>, // min(note.created_at) in ms epoch\n pub last_note_at: Option<i64>, // max(note.created_at) in ms epoch\n pub last_seen_at: i64,\n pub resolvable: bool, // any note is resolvable\n pub resolved: bool, // all resolvable notes are resolved\n}\n\npub struct NormalizedNote {\n pub gitlab_id: i64,\n pub project_id: i64,\n pub note_type: Option<String>, // \"DiscussionNote\" | \"DiffNote\" | null\n pub is_system: bool, // from note.system\n pub author_username: String,\n pub body: String,\n pub created_at: i64, // ms epoch\n pub updated_at: i64, // ms epoch\n pub last_seen_at: i64,\n pub position: i32, // 0-indexed array position\n pub resolvable: bool,\n pub resolved: bool,\n pub resolved_by: Option<String>,\n pub resolved_at: Option<i64>,\n}\n```\n\n### Functions\n\n```rust\npub fn transform_discussion(\n gitlab_discussion: &GitLabDiscussion,\n local_project_id: i64,\n local_issue_id: i64,\n) -> NormalizedDiscussion\n\npub fn transform_notes(\n gitlab_discussion: &GitLabDiscussion,\n local_project_id: i64,\n) -> Vec<NormalizedNote>\n```\n\n## Acceptance Criteria\n\n- [ ] `NormalizedDiscussion` struct with all fields\n- [ ] `NormalizedNote` struct with all fields\n- [ ] `transform_discussion` computes first_note_at/last_note_at from notes array\n- [ ] `transform_discussion` computes resolvable (any note is resolvable)\n- [ ] `transform_discussion` computes resolved (all resolvable notes resolved)\n- [ ] `transform_notes` preserves array order via position field (0-indexed)\n- [ ] `transform_notes` maps system flag to is_system\n- [ ] Unit tests cover all computed fields\n\n## Files\n\n- src/gitlab/transformers/mod.rs (add `pub mod 
discussion;`)\n- src/gitlab/transformers/discussion.rs (create)\n\n## TDD Loop\n\nRED:\n```rust\n// tests/discussion_transformer_tests.rs\n#[test] fn transforms_discussion_payload_to_normalized_schema()\n#[test] fn extracts_notes_array_from_discussion()\n#[test] fn sets_individual_note_flag_correctly()\n#[test] fn flags_system_notes_with_is_system_true()\n#[test] fn preserves_note_order_via_position_field()\n#[test] fn computes_first_note_at_and_last_note_at_correctly()\n#[test] fn computes_resolvable_and_resolved_status()\n```\n\nGREEN: Implement transform_discussion and transform_notes\n\nVERIFY: `cargo test discussion_transformer`\n\n## Edge Cases\n\n- Discussion with single note - first_note_at == last_note_at\n- All notes are system notes - still compute timestamps\n- No notes resolvable - resolvable=false, resolved=false\n- Mix of resolved/unresolved notes - resolved=false until all done","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-25T17:02:38.196079Z","created_by":"tayloreernisse","updated_at":"2026-01-25T22:27:11.485112Z","closed_at":"2026-01-25T22:27:11.485058Z","close_reason":"Implemented NormalizedDiscussion, NormalizedNote, transform_discussion, transform_notes with 9 passing unit tests","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1qf","depends_on_id":"bd-1np","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -106,7 +106,7 @@ {"id":"bd-1xuf","title":"Implement attention state SQL computation","description":"## Background\nThe attention state model (AC-4) is the core intelligence of the dashboard. It classifies each work item based on comment activity: who commented last, and when. 
The computation uses CTEs and a CASE expression with 5 branches, evaluated in priority order.\n\n## Approach\nCreate `src/cli/commands/me/attention.rs`:\n```rust\nuse rusqlite::Connection;\nuse crate::core::error::Result;\nuse super::types::{AttentionState, MeIssue, MeMrAuthored, MeMrReviewing};\n\npub fn enrich_attention_states(\n conn: &Connection,\n username: &str,\n issues: &mut [MeIssue],\n authored_mrs: &mut [MeMrAuthored],\n reviewing_mrs: &mut [MeMrReviewing],\n) -> Result<()>\n```\n\nThis runs AFTER the raw items are fetched. For each work item type, execute a single SQL query that computes the attention state for all items at once, then update the `attention_state` field on each struct.\n\n### Schema context for the SQL\n\n**notes** (`src/core/db.rs` migration 002): `id, gitlab_id, discussion_id (FK→discussions), project_id, note_type, is_system (INTEGER 0/1), author_username, body, created_at (ms epoch), updated_at, ...`\n\n**discussions** (migration 002): `id, gitlab_discussion_id, project_id, issue_id (FK→issues, nullable), merge_request_id (nullable), noteable_type ('Issue'|'MergeRequest'), ...`\n- CHECK constraint: exactly one of `issue_id`/`merge_request_id` is non-NULL based on `noteable_type`\n\n**mr_reviewers** (migration 006): `merge_request_id (FK), username TEXT, PRIMARY KEY(merge_request_id, username)`\n\n**merge_requests** (migration 006): `draft INTEGER NOT NULL DEFAULT 0` (0/1 boolean)\n\n### Attention SQL pattern\n\nFor issues, the query computes per-issue attention state:\n```sql\nWITH my_latest AS (\n SELECT d.issue_id,\n MAX(n.created_at) AS ts\n FROM notes n\n JOIN discussions d ON n.discussion_id = d.id\n WHERE n.is_system = 0\n AND n.author_username = ?username\n AND d.issue_id IS NOT NULL\n GROUP BY d.issue_id\n),\nothers_latest AS (\n SELECT d.issue_id,\n MAX(n.created_at) AS ts\n FROM notes n\n JOIN discussions d ON n.discussion_id = d.id\n WHERE n.is_system = 0\n AND n.author_username != ?username\n AND d.issue_id IS NOT NULL\n 
GROUP BY d.issue_id\n),\nany_latest AS (\n SELECT d.issue_id,\n MAX(n.created_at) AS ts\n FROM notes n\n JOIN discussions d ON n.discussion_id = d.id\n WHERE n.is_system = 0\n AND d.issue_id IS NOT NULL\n GROUP BY d.issue_id\n)\nSELECT i.id,\n CASE\n -- 2. needs_attention: others commented AND (I haven't OR others' latest > mine)\n WHEN ol.ts IS NOT NULL AND (ml.ts IS NULL OR ol.ts > ml.ts) THEN 'needs_attention'\n -- 3. stale: any notes exist AND latest > 30 days old\n WHEN al.ts IS NOT NULL AND al.ts < (?now_ms - 30 * 86400 * 1000) THEN 'stale'\n -- 4. awaiting_response: my latest >= others' latest\n WHEN ml.ts IS NOT NULL AND (ol.ts IS NULL OR ml.ts >= ol.ts) THEN 'awaiting_response'\n -- 5. not_started: zero non-system notes\n ELSE 'not_started'\n END AS attention\nFROM issues i\nLEFT JOIN my_latest ml ON ml.issue_id = i.id\nLEFT JOIN others_latest ol ON ol.issue_id = i.id\nLEFT JOIN any_latest al ON al.issue_id = i.id\nWHERE i.id IN (...)\n```\n\nFor MRs, prepend a `not_ready` check before the CASE:\n```sql\nWHEN mr.draft = 1 AND NOT EXISTS (\n SELECT 1 FROM mr_reviewers mrr WHERE mrr.merge_request_id = mr.id\n) THEN 'not_ready'\n```\n\n### Implementation strategy\n\n1. Collect all internal DB `id` values from the fetched issue/MR vecs\n2. Run the CTE query with those IDs as parameters (use `rusqlite::params_from_iter` or IN-list)\n3. Build a `HashMap` from results\n4. Iterate over the mutable slices, setting `attention_state` from the map\n5. Default to `NotStarted` for any ID not found (shouldn't happen but safe fallback)\n\nUse `AttentionState::from_sql_str(s)` (from bd-1vai) to convert the SQL CASE string to the enum.\n\nNote: The fetched structs (`MeIssue`, `MeMrAuthored`, `MeMrReviewing`) will need an internal `id: i64` field (the DB primary key) for this lookup. This field should NOT serialize to JSON (use `#[serde(skip)]`). 
Ensure bd-3bwh includes this.\n\n## Acceptance Criteria\n- [ ] not_ready: MR with draft=1 AND zero mr_reviewers → NotReady (AC-4.4.1)\n- [ ] not_ready: MR with draft=1 BUT has reviewers → falls through to other states\n- [ ] needs_attention: others commented, I haven't → NeedsAttention (AC-4.4.2)\n- [ ] needs_attention: others' latest > my latest → NeedsAttention\n- [ ] stale: notes exist but latest is >30 days old → Stale (AC-4.4.3)\n- [ ] stale: zero notes → NOT stale (falls to not_started)\n- [ ] not_started: zero non-system notes from anyone → NotStarted (AC-4.4.4)\n- [ ] awaiting_response: my latest >= others' latest → AwaitingResponse (AC-4.4.5)\n- [ ] awaiting_response: only my notes, no others → AwaitingResponse\n- [ ] Applied to issues, authored MRs, and reviewing MRs (AC-4.5)\n- [ ] Only considers non-system notes (is_system = 0) (AC-4.2)\n- [ ] Uses AttentionState::from_sql_str() for enum conversion\n\n## Files\n- CREATE: src/cli/commands/me/attention.rs\n- MODIFY: src/cli/commands/me/mod.rs (add `pub mod attention;`)\n\n## TDD Anchor\nRED: Write `test_needs_attention_when_others_commented_after_me` using in-memory DB (`create_connection(Path::new(\":memory:\"))` + `run_migrations(&conn)`). Insert a project, issue, issue_assignee, discussion (noteable_type='Issue', issue_id set), two notes: mine at created_at=100000, theirs at created_at=200000. Call `enrich_attention_states`. 
Assert issue's attention_state = NeedsAttention.\n\nGREEN: Implement the SQL CTE query.\nVERIFY: `cargo test attention`\n\nAdditional tests:\n- test_needs_attention_others_commented_i_havent\n- test_awaiting_response_i_commented_last\n- test_awaiting_response_only_my_notes\n- test_not_started_zero_notes\n- test_stale_old_notes (set any_latest.ts to > 30 days ago using a fixed \"now\" param)\n- test_stale_not_applied_to_zero_notes\n- test_not_ready_draft_no_reviewers (insert MR with draft=1, no mr_reviewers rows)\n- test_not_ready_draft_with_reviewers_falls_through\n- test_system_notes_excluded (insert is_system=1 note, verify not counted)\n\n## Edge Cases\n- notes join through discussions: `notes.discussion_id → discussions.id`, then `discussions.issue_id / discussions.merge_request_id`\n- \"30 days\" stale threshold: use current time (ms) minus `30 * 86_400 * 1_000` ms. Pass `now_ms` as a parameter for testability (don't call `SystemTime::now()` inside SQL).\n- An item with ONLY system notes → zero non-system notes → `not_started`\n- NULL handling: LEFT JOINs mean `ml.ts`, `ol.ts`, `al.ts` can all be NULL — the CASE handles this with IS NOT NULL / IS NULL checks\n- The `id IN (...)` list may be empty (no issues fetched) — skip the query entirely for empty slices\n\n## Dependency Context\nUses `AttentionState` enum and its `from_sql_str()` from bd-1vai (same `types.rs` file).\nUses `MeIssue`, `MeMrAuthored`, `MeMrReviewing` structs from bd-3bwh — these need an internal `id: i64` field with `#[serde(skip)]`.\nRuns after the fetch queries from bd-joja (issues), bd-1obt (authored MRs), bd-1fgr (reviewing MRs).\nCalled by the handler in bd-1vv8.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-19T19:37:34.521556Z","created_by":"tayloreernisse","updated_at":"2026-02-20T16:09:13.054002Z","closed_at":"2026-02-20T16:09:13.053951Z","close_reason":"Implemented by lore-me agent 
swarm","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1xuf","depends_on_id":"bd-1fgr","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1xuf","depends_on_id":"bd-1obt","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1xuf","depends_on_id":"bd-joja","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1y7q","title":"Write invariant tests for ranking system","description":"## Background\nInvariant tests catch subtle ranking regressions that example-based tests miss. These test properties that must hold for ANY input, not specific values.\n\n## Approach\n\n### test_score_monotonicity_by_age:\nGenerate 50 random (age_ms, half_life_days) pairs using a simple LCG PRNG (deterministic seed for reproducibility). Assert decay(older) <= decay(newer) for all pairs where older > newer. Tests the pure half_life_decay() function only.\n\n### test_row_order_independence:\nInsert the same 5 signals in two orderings (forward and reverse). Run query_expert on both -> assert identical username ordering and identical scores (f64 bit-equal). Use a deterministic dataset with varied timestamps.\n\n### test_reviewer_split_is_exhaustive:\nSet up 3 reviewers on the same MR:\n1. Reviewer with substantive DiffNotes (>= 20 chars) -> must appear in participated ONLY\n2. Reviewer with no DiffNotes -> must appear in assigned-only ONLY\n3. Reviewer with trivial note (< 20 chars) -> must appear in assigned-only ONLY\nUse --explain-score to verify each reviewer's components: participated reviewer has reviewer_participated > 0 and reviewer_assigned == 0; others have reviewer_assigned > 0 and reviewer_participated == 0.\n\n### test_deterministic_accumulation_order:\nInsert signals for one user with 15 MRs at varied timestamps. Run query_expert 100 times in a loop. 
Assert all 100 runs produce the exact same f64 score (use == not approx, to verify bit-identical results from sorted accumulation).\n\n## Acceptance Criteria\n- [ ] All 4 tests pass\n- [ ] No flakiness across 10 consecutive cargo test runs\n- [ ] test_score_monotonicity covers at least 50 random pairs\n- [ ] test_deterministic_accumulation runs at least 100 iterations\n\n## Files\n- src/cli/commands/who.rs (test module)\n\n## Edge Cases\n- LCG PRNG for monotonicity test: use fixed seed, not rand crate (avoid dependency)\n- Bit-identical f64: use assert_eq!(a, b) not approx — the deterministic ordering guarantees this\n- Row order test: must insert in genuinely different orders, not just shuffled within same transaction","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-09T17:00:35.774542Z","created_by":"tayloreernisse","updated_at":"2026-02-09T17:17:18.920235Z","closed_at":"2026-02-09T17:17:18.920188Z","close_reason":"Tests distributed to implementation beads: monotonicity->bd-1soz, row_order+split+deterministic->bd-13q8","compaction_level":0,"original_size":0,"labels":["scoring","test"]} {"id":"bd-1y8","title":"Implement chunk ID encoding module","description":"## Background\nsqlite-vec uses a single integer rowid for embeddings. To store multiple chunks per document, we encode (document_id, chunk_index) into a single rowid using a multiplier. This module is shared between the embedding pipeline (encode on write) and vector search (decode on read). 
The encoding scheme supports up to 1000 chunks per document.\n\n## Approach\nCreate `src/embedding/chunk_ids.rs`:\n\n```rust\n/// Multiplier for encoding (document_id, chunk_index) into a single rowid.\n/// Supports up to 1000 chunks per document (32M chars at 32k/chunk).\npub const CHUNK_ROWID_MULTIPLIER: i64 = 1000;\n\n/// Encode (document_id, chunk_index) into a sqlite-vec rowid.\n///\n/// rowid = document_id * CHUNK_ROWID_MULTIPLIER + chunk_index\npub fn encode_rowid(document_id: i64, chunk_index: i64) -> i64 {\n document_id * CHUNK_ROWID_MULTIPLIER + chunk_index\n}\n\n/// Decode a sqlite-vec rowid back into (document_id, chunk_index).\npub fn decode_rowid(rowid: i64) -> (i64, i64) {\n let document_id = rowid / CHUNK_ROWID_MULTIPLIER;\n let chunk_index = rowid % CHUNK_ROWID_MULTIPLIER;\n (document_id, chunk_index)\n}\n```\n\nAlso create the parent module `src/embedding/mod.rs`:\n```rust\npub mod chunk_ids;\n// Later beads add: pub mod ollama; pub mod pipeline;\n```\n\nUpdate `src/lib.rs`: add `pub mod embedding;`\n\n## Acceptance Criteria\n- [ ] `encode_rowid(42, 0)` == 42000\n- [ ] `encode_rowid(42, 5)` == 42005\n- [ ] `decode_rowid(42005)` == (42, 5)\n- [ ] Roundtrip: decode(encode(doc_id, chunk_idx)) == (doc_id, chunk_idx) for all valid inputs\n- [ ] CHUNK_ROWID_MULTIPLIER is 1000\n- [ ] `cargo test chunk_ids` passes\n\n## Files\n- `src/embedding/chunk_ids.rs` — new file\n- `src/embedding/mod.rs` — new file (module root)\n- `src/lib.rs` — add `pub mod embedding;`\n\n## TDD Loop\nRED: Tests in `#[cfg(test)] mod tests`:\n- `test_encode_single_chunk` — encode(1, 0) == 1000\n- `test_encode_multi_chunk` — encode(1, 5) == 1005\n- `test_decode_roundtrip` — property test over range of doc_ids and chunk_indices\n- `test_decode_zero_chunk` — decode(42000) == (42, 0)\n- `test_multiplier_value` — assert CHUNK_ROWID_MULTIPLIER == 1000\nGREEN: Implement encode_rowid, decode_rowid\nVERIFY: `cargo test chunk_ids`\n\n## Edge Cases\n- chunk_index >= 1000: not expected 
(documents that large would be pathological), but no runtime panic — just incorrect decode. The embedding pipeline caps chunks well below this.\n- document_id = 0: valid (encode returns chunk_index directly)","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-30T15:26:34.060769Z","created_by":"tayloreernisse","updated_at":"2026-01-30T16:51:59.048910Z","closed_at":"2026-01-30T16:51:59.048843Z","close_reason":"Completed: chunk_ids module with encode_rowid/decode_rowid, CHUNK_ROWID_MULTIPLIER=1000, 6 tests pass","compaction_level":0,"original_size":0} -{"id":"bd-1yky","title":"Phase 4c: Cancellation integration tests (asupersync-native)","description":"## What\nAdd asupersync-native integration tests for cancellation and region semantics. These test the PRIMARY MOTIVATION for this migration — structured cancellation.\n\n## Why\nWiremock tests validate protocol/serialization but cannot test cancellation. The deterministic lab runtime is one of the primary motivations for this migration.\n\n## Tests to Write\n\n### 1. Ctrl+C during fan-out\nSimulate cancellation mid-batch in orchestrator. Verify:\n- All in-flight tasks are drained\n- No task leaks\n- No obligation leaks\n\n### 2. Region quiescence\nAfter a region completes (normal or cancelled), verify no background tasks remain running.\n\n### 3. Transaction integrity under cancellation\nCancel during an ingestion batch that has fetched data but not yet written to DB. 
Verify:\n- No partial data is committed\n- DB is in a consistent state\n- Next sync picks up where it left off\n\n## Test Infrastructure\nUse asupersync deterministic lab runtime for reproducible cancellation timing.\n\n## Files Changed\n- tests/cancellation_tests.rs (NEW, ~150-200 LOC)\n\n## Testing\n- cargo test -- cancellation (all new tests must pass)\n\n## Depends On\n- Phase 3f-Step1 (region wrapping must be in place)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:41:45.716082Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:42:57.177851Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-4","testing"],"dependencies":[{"issue_id":"bd-1yky","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:45.720121Z","created_by":"tayloreernisse"},{"issue_id":"bd-1yky","depends_on_id":"bd-zgke","type":"blocks","created_at":"2026-03-06T18:42:57.177821Z","created_by":"tayloreernisse"}]} +{"id":"bd-1yky","title":"Phase 4c: Cancellation integration tests (asupersync-native)","description":"## What\nAdd asupersync-native integration tests for cancellation and region semantics. These test the PRIMARY MOTIVATION for this migration — structured cancellation.\n\n## Why\nWiremock tests validate protocol/serialization but cannot test cancellation. The deterministic lab runtime is one of the primary motivations for this migration.\n\n## Tests to Write\n\n### 1. Ctrl+C during fan-out\nSimulate cancellation mid-batch in orchestrator. Verify:\n- All in-flight tasks are drained\n- No task leaks\n- No obligation leaks\n\n### 2. Region quiescence\nAfter a region completes (normal or cancelled), verify no background tasks remain running.\n\n### 3. Transaction integrity under cancellation\nCancel during an ingestion batch that has fetched data but not yet written to DB. 
Verify:\n- No partial data is committed\n- DB is in a consistent state\n- Next sync picks up where it left off\n\n## Test Infrastructure\nUse asupersync deterministic lab runtime for reproducible cancellation timing.\n\n## Files Changed\n- tests/cancellation_tests.rs (NEW, ~150-200 LOC)\n\n## Testing\n- cargo test -- cancellation (all new tests must pass)\n\n## Depends On\n- Phase 3f-Step1 (region wrapping must be in place)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:41:45.716082Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.659001Z","closed_at":"2026-03-06T21:11:12.658956Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-4","testing"],"dependencies":[{"issue_id":"bd-1yky","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:45.720121Z","created_by":"tayloreernisse"},{"issue_id":"bd-1yky","depends_on_id":"bd-zgke","type":"blocks","created_at":"2026-03-06T18:42:57.177821Z","created_by":"tayloreernisse"}]} {"id":"bd-1yu","title":"[CP1] GitLab types for issues, discussions, notes","description":"Add TypeScript interfaces for GitLab API responses.\n\nTypes to add to src/gitlab/types.ts:\n- GitLabIssue: id, iid, project_id, title, description, state, timestamps, author, labels[], labels_details?, web_url\n- GitLabDiscussion: id (string), individual_note, notes[]\n- GitLabNote: id, type, body, author, timestamps, system, resolvable, resolved, resolved_by, resolved_at, position?\n\nFiles: src/gitlab/types.ts\nDone when: Types compile and match GitLab API 
documentation","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-01-25T15:19:00.558718Z","created_by":"tayloreernisse","updated_at":"2026-01-25T15:21:35.153996Z","closed_at":"2026-01-25T15:21:35.153996Z","deleted_at":"2026-01-25T15:21:35.153993Z","deleted_by":"tayloreernisse","delete_reason":"delete","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-1yx","title":"Implement rename chain resolution for file-history","description":"## Background\n\nRename chain resolution is the core algorithm for Gate 4. When querying history of src/auth.rs, it finds MRs that touched the file when it was previously named src/authentication.rs. This is reused by Gate 5 (trace) as well.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 4.6 (Rename Handling).\n\n## Codebase Context\n\n- mr_file_changes table (migration 016, bd-1oo): merge_request_id, project_id, old_path, new_path, change_type\n- change_type='renamed' rows have both old_path and new_path populated\n- Partial index `idx_mfc_renamed` on (project_id, change_type) WHERE change_type='renamed' optimizes BFS queries\n- Also `idx_mfc_project_path` on (project_id, new_path) and `idx_mfc_project_old_path` partial index\n- No timeline/trace/file_history modules exist yet in src/core/\n\n## Approach\n\nCreate `src/core/file_history.rs`:\n\n```rust\nuse std::collections::HashSet;\nuse std::collections::VecDeque;\nuse rusqlite::Connection;\nuse crate::core::error::Result;\n\n/// Resolves a file path through its rename history.\n/// Returns all equivalent paths (original + renames) for use in queries.\n/// BFS in both directions: forward (old_path -> new_path) and backward (new_path -> old_path).\npub fn resolve_rename_chain(\n conn: &Connection,\n project_id: i64,\n path: &str,\n max_hops: usize, // default 10 from CLI\n) -> Result<Vec<String>> {\n let mut visited: HashSet<String> = HashSet::new();\n let mut queue: VecDeque<String> = VecDeque::new();\n\n visited.insert(path.to_string());\n 
queue.push_back(path.to_string());\n\n let forward_sql = \"SELECT mfc.new_path FROM mr_file_changes mfc \\\n WHERE mfc.project_id = ?1 AND mfc.old_path = ?2 AND mfc.change_type = 'renamed'\";\n let backward_sql = \"SELECT mfc.old_path FROM mr_file_changes mfc \\\n WHERE mfc.project_id = ?1 AND mfc.new_path = ?2 AND mfc.change_type = 'renamed'\";\n\n while let Some(current) = queue.pop_front() {\n if visited.len() > max_hops + 1 { break; }\n\n // Forward: current was the old name -> discover new names\n let mut stmt = conn.prepare(forward_sql)?;\n let forward: Vec<String> = stmt.query_map(\n rusqlite::params![project_id, current],\n |row| row.get(0),\n )?.filter_map(|r| r.ok()).collect();\n\n // Backward: current was the new name -> discover old names\n let mut stmt = conn.prepare(backward_sql)?;\n let backward: Vec<String> = stmt.query_map(\n rusqlite::params![project_id, current],\n |row| row.get(0),\n )?.filter_map(|r| r.ok()).collect();\n\n for discovered in forward.into_iter().chain(backward) {\n if visited.insert(discovered.clone()) {\n queue.push_back(discovered);\n }\n }\n }\n\n Ok(visited.into_iter().collect())\n}\n```\n\nRegister in `src/core/mod.rs`: add `pub mod file_history;`\n\n## Acceptance Criteria\n\n- [ ] `resolve_rename_chain()` follows renames in both directions (forward + backward)\n- [ ] Cycles detected via HashSet (same path never visited twice)\n- [ ] Bounded at max_hops (default 10)\n- [ ] No renames found: returns vec with just the original path\n- [ ] max_hops=0: returns just original path without querying DB\n- [ ] Module registered in src/core/mod.rs as `pub mod file_history;`\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/core/file_history.rs` (NEW)\n- `src/core/mod.rs` (add `pub mod file_history;`)\n\n## TDD Loop\n\nRED:\n- `test_rename_chain_no_renames` — returns just original path\n- `test_rename_chain_forward` — a.rs -> b.rs -> c.rs: starting from a.rs finds all three\n- 
`test_rename_chain_backward` — starting from c.rs finds a.rs and b.rs\n- `test_rename_chain_cycle_detection` — a->b->a terminates without infinite loop\n- `test_rename_chain_max_hops_zero` — returns just original path\n- `test_rename_chain_max_hops_bounded` — chain longer than max is truncated\n\nTests need in-memory DB with migrations applied through 016 + mr_file_changes test data with change_type='renamed'.\n\nGREEN: Implement BFS with visited set.\n\nVERIFY: `cargo test --lib -- file_history`\n\n## Edge Cases\n\n- File never renamed: single-element vec\n- Circular rename (a->b->a): visited set prevents infinite loop\n- max_hops=0: return just original path, no queries executed\n- Case sensitivity: paths are case-sensitive (Linux default, matches GitLab behavior)\n- Multiple renames from same old_path: BFS discovers all branches\n","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:34:08.985345Z","created_by":"tayloreernisse","updated_at":"2026-02-13T14:00:46.354253Z","closed_at":"2026-02-13T14:00:46.354201Z","close_reason":"Implemented resolve_rename_chain() BFS in src/core/file_history.rs with 8 tests covering: no renames, forward chain, backward chain, cycle detection, max_hops=0, max_hops bounded, branching renames, project isolation. All 765 tests pass, clippy+fmt clean.","compaction_level":0,"original_size":0,"labels":["gate-4","phase-b","query"],"dependencies":[{"issue_id":"bd-1yx","depends_on_id":"bd-14q","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1yx","depends_on_id":"bd-1oo","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-1yz","title":"Implement MR document extraction","description":"## Background\nMR documents are similar to issue documents but include source/target branch information in the header. The extractor queries merge_requests and mr_labels tables. 
Like issue extraction, it produces a DocumentData struct for the regeneration pipeline.\n\n## Approach\nImplement `extract_mr_document()` in `src/documents/extractor.rs`:\n\n```rust\n/// Extract a searchable document from a merge request.\n/// Returns None if the MR has been deleted from the DB.\npub fn extract_mr_document(conn: &Connection, mr_id: i64) -> Result<Option<DocumentData>>\n```\n\n**SQL queries (from PRD Section 2.2):**\n```sql\n-- Main entity\nSELECT m.id, m.iid, m.title, m.description, m.state, m.author_username,\n m.source_branch, m.target_branch,\n m.created_at, m.updated_at, m.web_url,\n p.path_with_namespace, p.id AS project_id\nFROM merge_requests m\nJOIN projects p ON p.id = m.project_id\nWHERE m.id = ?\n\n-- Labels\nSELECT l.name FROM mr_labels ml\nJOIN labels l ON l.id = ml.label_id\nWHERE ml.merge_request_id = ?\nORDER BY l.name\n```\n\n**Document format:**\n```\n[[MergeRequest]] !456: Implement JWT authentication\nProject: group/project-one\nURL: https://gitlab.example.com/group/project-one/-/merge_requests/456\nLabels: [\"feature\", \"auth\"]\nState: opened\nAuthor: @johndoe\nSource: feature/jwt-auth -> main\n\n--- Description ---\n\nThis MR implements JWT-based authentication...\n```\n\n**Key difference from issues:** The `Source:` line with `source_branch -> target_branch`.\n\n## Acceptance Criteria\n- [ ] Deleted MR returns Ok(None)\n- [ ] MR document has `[[MergeRequest]]` prefix with `!` before iid (not `#`)\n- [ ] Source line shows `source_branch -> target_branch`\n- [ ] Labels sorted alphabetically in JSON array\n- [ ] content_hash computed from full content_text\n- [ ] labels_hash computed from sorted labels\n- [ ] paths is empty (MR-level docs don't have DiffNote paths; those are on discussion docs)\n- [ ] `cargo test extract_mr` passes\n\n## Files\n- `src/documents/extractor.rs` — implement `extract_mr_document()`\n\n## TDD Loop\nRED: Tests in `#[cfg(test)] mod tests`:\n- `test_mr_document_format` — verify header matches PRD template with Source 
line\n- `test_mr_not_found` — returns Ok(None)\n- `test_mr_no_description` — header only\n- `test_mr_branch_info` — Source line correct\nGREEN: Implement extract_mr_document with SQL queries\nVERIFY: `cargo test extract_mr`\n\n## Edge Cases\n- MR with NULL description: skip \"--- Description ---\" section\n- MR with NULL source_branch or target_branch: omit Source line (shouldn't happen in practice)\n- Draft MRs: state field captures this, no special handling needed","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-30T15:25:45.521703Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:30:04.308781Z","closed_at":"2026-01-30T17:30:04.308598Z","close_reason":"Implemented extract_mr_document() with Source line, PRD format, and 5 tests","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1yz","depends_on_id":"bd-36p","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-1yz","depends_on_id":"bd-hrs","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -117,7 +117,7 @@ {"id":"bd-20e","title":"Define TimelineEvent model and TimelineEventType enum","description":"## Background\n\nThe TimelineEvent model is the foundational data type for Gate 3's timeline feature. All pipeline stages (seed, expand, collect, interleave) produce or consume TimelineEvents. 
This must be defined first because every downstream bead (bd-32q, bd-ypa, bd-3as, bd-dty, bd-2f2) depends on these types.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 3.3 (Event Model).\n\n## Codebase Context\n\n- Migration 011 created: resource_state_events, resource_label_events, resource_milestone_events, entity_references, pending_dependent_fetches\n- source_method CHECK constraint: `'api' | 'note_parse' | 'description_parse'` (NOT spec's 'api_closes_issues' etc.)\n- reference_type CHECK constraint: `'closes' | 'mentioned' | 'related'`\n- LATEST_SCHEMA_VERSION = 14\n\n## Approach\n\nCreate `src/core/timeline.rs` with the following types:\n\n```rust\n/// The core timeline event. All pipeline stages produce or consume these.\n/// Spec ref: Section 3.3 \"Event Model\"\n#[derive(Debug, Clone, Serialize)]\npub struct TimelineEvent {\n pub timestamp: i64, // ms epoch UTC\n pub entity_type: String, // \"issue\" | \"merge_request\"\n pub entity_id: i64, // local DB id (internal, not in JSON output)\n pub entity_iid: i64,\n pub project_path: String,\n pub event_type: TimelineEventType,\n pub summary: String, // human-readable one-liner\n pub actor: Option<String>, // username or None for system\n pub url: Option<String>, // web URL for the event source\n pub is_seed: bool, // true if from seed phase, false if expanded\n}\n\n/// Per spec Section 3.3. 
Serde tagged enum for JSON output.\n/// IMPORTANT: entity_type is String (not &'static str) because serde Serialize\n/// requires owned types for struct fields when deriving.\n#[derive(Debug, Clone, Serialize)]\n#[serde(tag = \"kind\", rename_all = \"snake_case\")]\npub enum TimelineEventType {\n Created,\n StateChanged { state: String }, // spec: just the target state\n LabelAdded { label: String },\n LabelRemoved { label: String },\n MilestoneSet { milestone: String },\n MilestoneRemoved { milestone: String },\n Merged, // spec: unit variant\n NoteEvidence {\n note_id: i64, // spec: required\n snippet: String, // first ~200 chars of matching note body\n discussion_id: Option<String>, // spec: optional\n },\n CrossReferenced { target: String }, // compact target ref like \"!567\" or \"#234\"\n}\n\n/// Internal entity reference used across pipeline stages.\n#[derive(Debug, Clone, Serialize)]\npub struct EntityRef {\n pub entity_type: String, // String not &'static str — needed for Serialize\n pub entity_id: i64,\n pub entity_iid: i64,\n pub project_path: String,\n}\n\n/// An entity discovered via BFS expansion.\n/// Spec ref: Section 3.5 \"expanded_entities\" JSON structure.\n#[derive(Debug, Clone, Serialize)]\npub struct ExpandedEntityRef {\n pub entity_ref: EntityRef,\n pub depth: u32,\n pub via_from: EntityRef, // the entity that referenced this one\n pub via_reference_type: String, // \"closes\", \"mentioned\", \"related\"\n pub via_source_method: String, // \"api\", \"note_parse\", \"description_parse\"\n}\n\n/// Reference to an unsynced external entity.\n/// Spec ref: Section 3.5 \"unresolved_references\" JSON structure.\n#[derive(Debug, Clone, Serialize)]\npub struct UnresolvedRef {\n pub source: EntityRef,\n pub target_project: Option<String>,\n pub target_type: String,\n pub target_iid: i64,\n pub reference_type: String,\n}\n\n/// Complete result from the timeline pipeline.\n#[derive(Debug, Clone, Serialize)]\npub struct TimelineResult {\n pub query: String,\n pub 
events: Vec<TimelineEvent>,\n pub seed_entities: Vec<EntityRef>,\n pub expanded_entities: Vec<ExpandedEntityRef>,\n pub unresolved_references: Vec<UnresolvedRef>,\n}\n```\n\nImplement `Ord` on `TimelineEvent` for chronological sort: primary key `timestamp`, tiebreak by `entity_id` then event_type discriminant.\n\nAlso implement `PartialEq`, `Eq`, `PartialOrd` (required by Ord).\n\nRegister in `src/core/mod.rs`: `pub mod timeline;`\n\n## Acceptance Criteria\n\n- [ ] `src/core/timeline.rs` compiles with no warnings\n- [ ] All struct fields use `String` not `&'static str` (required for `#[derive(Serialize)]`)\n- [ ] `TimelineEventType` has exactly 9 variants matching spec Section 3.3\n- [ ] `NoteEvidence` has `note_id: i64`, `snippet: String`, `discussion_id: Option<String>`\n- [ ] `ExpandedEntityRef.via_source_method` documents codebase values: api, note_parse, description_parse\n- [ ] `Ord` impl sorts by (timestamp, entity_id, event_type discriminant)\n- [ ] `PartialEq`, `Eq`, `PartialOrd` derived or implemented\n- [ ] Module registered in `src/core/mod.rs`\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/core/timeline.rs` (NEW)\n- `src/core/mod.rs` (add `pub mod timeline;`)\n\n## TDD Loop\n\nRED: Create `src/core/timeline.rs` with `#[cfg(test)] mod tests`:\n- `test_timeline_event_sort_by_timestamp` - events sort chronologically\n- `test_timeline_event_sort_tiebreak` - same-timestamp events sort stably\n- `test_timeline_event_type_serializes_tagged` - serde JSON uses `kind` tag\n- `test_note_evidence_has_note_id` - note_id present in serialized output\n\nGREEN: Implement the types and Ord trait.\n\nVERIFY: `cargo test --lib -- timeline`\n\n## Edge Cases\n\n- Ord must be consistent and total for all valid TimelineEvent pairs\n- NoteEvidence snippet truncated to 200 chars at construction, not in the type\n- entity_type uses String to satisfy serde Serialize derive requirements\n- url field: constructed from project_path + entity_type + iid; None for entities 
without web_url","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:33:08.569126Z","created_by":"tayloreernisse","updated_at":"2026-02-05T21:43:02.449502Z","closed_at":"2026-02-05T21:43:02.449454Z","close_reason":"Completed: Created src/core/timeline.rs with TimelineEvent, TimelineEventType (9 variants), EntityRef, ExpandedEntityRef, UnresolvedRef, TimelineResult. Ord impl sorts by (timestamp, entity_id, event_type discriminant). entity_id skipped in serde output. 6 tests pass. All quality gates pass.","compaction_level":0,"original_size":0,"labels":["gate-3","phase-b","types"],"dependencies":[{"issue_id":"bd-20e","depends_on_id":"bd-ike","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-20h","title":"Implement MR discussion ingestion module","description":"## Background\nMR discussion ingestion with critical atomicity guarantees. Parse notes BEFORE destructive DB operations to prevent data loss. Watermark ONLY advanced on full success.\n\n## Approach\nCreate `src/ingestion/mr_discussions.rs` with:\n1. `IngestMrDiscussionsResult` - Per-MR stats\n2. `ingest_mr_discussions()` - Main function with atomicity guarantees\n3. Upsert + sweep pattern for notes (not delete-all-then-insert)\n4. 
Sync health telemetry for debugging failures\n\n## Files\n- `src/ingestion/mr_discussions.rs` - New module\n- `tests/mr_discussion_ingestion_tests.rs` - Integration tests\n\n## Acceptance Criteria\n- [ ] `IngestMrDiscussionsResult` has: discussions_fetched, discussions_upserted, notes_upserted, notes_skipped_bad_timestamp, diffnotes_count, pagination_succeeded\n- [ ] `ingest_mr_discussions()` returns `Result<IngestMrDiscussionsResult>`\n- [ ] CRITICAL: Notes parsed BEFORE any DELETE operations\n- [ ] CRITICAL: Watermark NOT advanced if `pagination_succeeded == false`\n- [ ] CRITICAL: Watermark NOT advanced if any note parse fails\n- [ ] Upsert + sweep pattern using `last_seen_at`\n- [ ] Stale discussions/notes removed only on full success\n- [ ] Selective raw payload storage (skip system notes without position)\n- [ ] Sync health telemetry recorded on failure\n- [ ] `does_not_advance_discussion_watermark_on_partial_failure` test passes\n- [ ] `atomic_note_replacement_preserves_data_on_parse_failure` test passes\n\n## TDD Loop\nRED: `cargo test does_not_advance_watermark` -> test fails\nGREEN: Add ingestion with atomicity guarantees\nVERIFY: `cargo test mr_discussion_ingestion`\n\n## Main Function\n```rust\npub async fn ingest_mr_discussions(\n conn: &Connection,\n client: &GitLabClient,\n config: &Config,\n project_id: i64,\n gitlab_project_id: i64,\n mr_iid: i64,\n local_mr_id: i64,\n mr_updated_at: i64,\n) -> Result<IngestMrDiscussionsResult>\n```\n\n## CRITICAL: Atomic Note Replacement\n```rust\n// Record sync start time for sweep\nlet run_seen_at = now_ms();\n\nwhile let Some(discussion_result) = stream.next().await {\n let discussion = match discussion_result {\n Ok(d) => d,\n Err(e) => {\n result.pagination_succeeded = false;\n break; // Stop but don't advance watermark\n }\n };\n \n // CRITICAL: Parse BEFORE destructive operations\n let notes = match transform_notes_with_diff_position(&discussion, project_id) {\n Ok(notes) => notes,\n Err(e) => {\n warn!(\"Note transform failed; preserving existing notes\");\n 
result.notes_skipped_bad_timestamp += discussion.notes.len();\n result.pagination_succeeded = false;\n continue; // Skip this discussion, don't delete existing\n }\n };\n \n // Only NOW start transaction (after parse succeeded)\n let tx = conn.unchecked_transaction()?;\n \n // Upsert discussion with run_seen_at\n // Upsert notes with run_seen_at (not delete-all)\n \n tx.commit()?;\n}\n```\n\n## Stale Data Sweep (only on success)\n```rust\nif result.pagination_succeeded {\n // Sweep stale discussions\n conn.execute(\n \"DELETE FROM discussions\n WHERE project_id = ? AND merge_request_id = ?\n AND last_seen_at < ?\",\n params![project_id, local_mr_id, run_seen_at],\n )?;\n \n // Sweep stale notes\n conn.execute(\n \"DELETE FROM notes\n WHERE discussion_id IN (\n SELECT id FROM discussions\n WHERE project_id = ? AND merge_request_id = ?\n )\n AND last_seen_at < ?\",\n params![project_id, local_mr_id, run_seen_at],\n )?;\n}\n```\n\n## Watermark Update (ONLY on success)\n```rust\nif result.pagination_succeeded {\n mark_discussions_synced(conn, local_mr_id, mr_updated_at)?;\n clear_sync_health_error(conn, local_mr_id)?;\n} else {\n record_sync_health_error(conn, local_mr_id, \"Pagination incomplete or parse failure\")?;\n warn!(\"Watermark NOT advanced; will retry on next sync\");\n}\n```\n\n## Selective Payload Storage\n```rust\n// Only store payload for DiffNotes and non-system notes\nlet should_store_note_payload =\n !note.is_system() ||\n note.position_new_path().is_some() ||\n note.position_old_path().is_some();\n```\n\n## Integration Tests (CRITICAL)\n```rust\n#[tokio::test]\nasync fn does_not_advance_discussion_watermark_on_partial_failure() {\n // Setup: MR with updated_at > discussions_synced_for_updated_at\n // Mock: Page 1 returns OK, Page 2 returns 500\n // Assert: discussions_synced_for_updated_at unchanged\n}\n\n#[tokio::test]\nasync fn does_not_advance_discussion_watermark_on_note_parse_failure() {\n // Setup: Existing notes in DB\n // Mock: Discussion 
with note having invalid created_at\n // Assert: Original notes preserved, watermark unchanged\n}\n\n#[tokio::test]\nasync fn atomic_note_replacement_preserves_data_on_parse_failure() {\n // Setup: Discussion with 3 valid notes\n // Mock: Updated discussion where note 2 has bad timestamp\n // Assert: All 3 original notes still in DB\n}\n```\n\n## Edge Cases\n- HTTP error mid-pagination: preserve existing data, log error, no watermark advance\n- Invalid note timestamp: skip discussion, preserve existing notes\n- System notes without position: don't store raw payload (saves space)\n- Empty discussion: still upsert discussion record, no notes","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:42.335714Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:22:43.207057Z","closed_at":"2026-01-27T00:22:43.206996Z","close_reason":"Implemented MR discussion ingestion module with full atomicity guarantees:\n- IngestMrDiscussionsResult with all required fields\n- parse-before-destructive pattern (transform notes before DB ops)\n- Upsert + sweep pattern with last_seen_at timestamps\n- Watermark advanced ONLY on full pagination success\n- Selective payload storage (skip system notes without position)\n- Sync health telemetry for failure debugging\n- All 163 tests passing","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-20h","depends_on_id":"bd-3ir","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-20h","depends_on_id":"bd-3j6","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-20h","depends_on_id":"bd-iba","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-20p9","title":"NOTE-1A: Note query layer data types and filters","description":"## Background\nPhase 1 adds a lore notes command for direct SQL query over the notes table. 
This chunk implements the data structures, filter logic, and query function following existing patterns in src/cli/commands/list.rs. The existing file contains: IssueListRow/Json/Result (for issues), MrListRow/Json/Result (for MRs), ListFilters/MrListFilters, query_issues(), query_mrs().\n\n## Approach\nAdd to src/cli/commands/list.rs (after the existing MR query code):\n\nData structures:\n- NoteListRow: id, gitlab_id, author_username, body, note_type, is_system, created_at, updated_at, position_new_path, position_new_line, position_old_path, position_old_line, resolvable, resolved, resolved_by, noteable_type (from discussions.noteable_type), parent_iid (i64), parent_title, project_path\n- NoteListRowJson: ISO timestamp variants (created_at_iso, updated_at_iso using ms_to_iso from crate::core::time) + #[derive(Serialize)]\n- NoteListResult: notes: Vec<NoteListRow>, total_count: i64\n- NoteListResultJson: notes: Vec<NoteListRowJson>, total_count: i64, showing: usize\n- NoteListFilters: limit (usize), project (Option<String>), author (Option<String>), note_type (Option<String>), include_system (bool), for_issue_iid (Option<i64>), for_mr_iid (Option<i64>), note_id (Option<i64>), gitlab_note_id (Option<i64>), discussion_id (Option<String>), since (Option<String>), until (Option<String>), path (Option<String>), contains (Option<String>), resolution (Option<String>), sort (String), order (String)\n\nQuery function pub fn query_notes(conn: &Connection, filters: &NoteListFilters, config: &Config) -> Result<NoteListResult>:\n- Time window: parse since/until relative to single anchored now_ms via parse_since (from crate::core::time). --until date = end-of-day (23:59:59.999). 
Validate since_ms <= until_ms.\n- Core SQL: SELECT from notes n JOIN discussions d ON n.discussion_id = d.id JOIN projects p ON n.project_id = p.id LEFT JOIN issues i ON d.issue_id = i.id LEFT JOIN merge_requests m ON d.merge_request_id = m.id WHERE {dynamic_filters} ORDER BY {sort} {order}, n.id {order} LIMIT ?\n- Filter mappings:\n - author: COLLATE NOCASE, strip leading @ (same pattern as existing list filters)\n - note_type: exact match\n - project: resolve_project(conn, project_str) from crate::core::project\n - since/until: n.created_at >= ?ms / n.created_at <= ?ms\n - path: trailing / = LIKE prefix match with escape_like (from crate::core::project), else exact match on position_new_path\n - contains: LIKE %term% COLLATE NOCASE on n.body with escape_like for %, _\n - resolution: \"unresolved\" → n.resolvable = 1 AND n.resolved = 0, \"resolved\" → n.resolvable = 1 AND n.resolved = 1, \"any\" → no filter\n - for_issue_iid/for_mr_iid: requires project_id context. Validation at query layer (return error if no project and no defaultProject), NOT as clap requires.\n - include_system: when false (default), add n.is_system = 0\n - note_id: exact match on n.id\n - gitlab_note_id: exact match on n.gitlab_id\n - discussion_id: exact match on d.gitlab_discussion_id\n- Use dynamic WHERE clause building with params vector (same pattern as query_issues/query_mrs)\n\n## Files\n- MODIFY: src/cli/commands/list.rs (add NoteListRow, NoteListRowJson, NoteListResult, NoteListResultJson, NoteListFilters, query_notes)\n\n## TDD Anchor\nRED: test_query_notes_empty_db — setup DB with no notes, call query_notes, assert total_count == 0.\nGREEN: Implement NoteListFilters + query_notes with basic SELECT.\nVERIFY: cargo test query_notes_empty_db -- --nocapture\n28 tests from PRD: test_query_notes_empty_db, test_query_notes_filter_author, test_query_notes_filter_author_strips_at, test_query_notes_filter_author_case_insensitive, test_query_notes_filter_note_type, 
test_query_notes_filter_project, test_query_notes_filter_since, test_query_notes_filter_until, test_query_notes_filter_since_and_until_combined, test_query_notes_invalid_time_window_rejected, test_query_notes_until_date_uses_end_of_day, test_query_notes_filter_contains, test_query_notes_filter_contains_case_insensitive, test_query_notes_filter_contains_escapes_like_wildcards, test_query_notes_filter_path, test_query_notes_filter_path_prefix, test_query_notes_filter_for_issue_requires_project, test_query_notes_filter_for_mr_requires_project, test_query_notes_filter_for_issue_uses_default_project, test_query_notes_filter_resolution_unresolved, test_query_notes_filter_resolution_resolved, test_query_notes_sort_created_desc, test_query_notes_sort_created_asc, test_query_notes_deterministic_tiebreak, test_query_notes_limit, test_query_notes_combined_filters, test_query_notes_filter_note_id_exact, test_query_notes_filter_gitlab_note_id_exact, test_query_notes_filter_discussion_id_exact, test_note_list_row_json_conversion\n\n## Acceptance Criteria\n- [ ] NoteListRow/Json/Result/Filters structs defined with all fields\n- [ ] query_notes returns notes matching all filter combinations\n- [ ] Author filter is case-insensitive and strips @ prefix\n- [ ] Time window validates since <= until with clear error message including swap suggestion\n- [ ] --until date uses end-of-day (23:59:59.999)\n- [ ] Path filter: trailing / = prefix match with LIKE escape, otherwise exact\n- [ ] Contains filter: case-insensitive body substring with LIKE escape for %, _\n- [ ] for_issue_iid/for_mr_iid require project context (error if no --project and no defaultProject)\n- [ ] Default: exclude system notes (is_system = 0). 
--include-system overrides.\n- [ ] ORDER BY includes n.id tiebreaker for deterministic results\n- [ ] All 28+ tests pass\n\n## Edge Cases\n- parse_until_with_anchor: YYYY-MM-DD --until returns end-of-day (not start-of-day)\n- Inverted time window: --since 30d --until 90d → error message suggesting swap\n- LIKE wildcards in --contains: % and _ escaped via escape_like (from crate::core::project)\n- IID without project: error at query layer (not clap) to support defaultProject\n- Discussion with NULL noteable_type: LEFT JOIN handles gracefully (parent_iid/title will be None)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T17:00:26.741853Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:24.378983Z","closed_at":"2026-02-12T18:13:24.378936Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["cli","per-note","search"],"dependencies":[{"issue_id":"bd-20p9","depends_on_id":"bd-1oyf","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-20p9","depends_on_id":"bd-25hb","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-20p9","depends_on_id":"bd-3iod","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-21fb","title":"Phase 0a: Extract signal handler to core/shutdown.rs","description":"## What\nExtract the 4 identical Ctrl+C signal handlers from the command handler code into a single function in core/shutdown.rs.\n\n## Why\nReduces tokio surface area before the runtime swap. 4 spawn sites -> 1 function. 
The function body changes in Phase 3d (signal handler rewrite for asupersync), so centralizing first makes that change trivial.\n\n## Rearchitecture Context (2026-03-06)\nA major code reorganization moved command handler dispatch logic out of main.rs into physical files under src/app/ using include!() chains:\n- src/main.rs (419 LOC, thin entry) -> include!(\"app/dispatch.rs\") -> include!(\"handlers.rs\"), include!(\"errors.rs\"), include!(\"robot_docs.rs\")\n- All handler code is LOGICALLY still in main.rs scope but PHYSICALLY in src/app/handlers.rs (~1730 LOC)\n\n## Current State\nThe 4 signal handler sites are in src/app/handlers.rs (physically), at approximately:\n- Line ~214 (sync handler)\n- Line ~1535 (ingest handler)\n- Line ~1661 (surgical sync handler)\n- Line ~1718 (embed handler)\n\nEach handler is a tokio::spawn block that:\n1. Awaits tokio::signal::ctrl_c()\n2. Prints interrupt message\n3. Calls signal.cancel()\n4. Awaits second ctrl_c for force quit via process::exit(130)\n\n## Implementation\n\nNew file: src/core/shutdown.rs\n```rust\npub fn install_ctrl_c_handler(signal: ShutdownSignal) {\n tokio::spawn(async move {\n let _ = tokio::signal::ctrl_c().await;\n eprintln!(\"\\nInterrupted, finishing current batch... 
(Ctrl+C again to force quit)\");\n signal.cancel();\n let _ = tokio::signal::ctrl_c().await;\n std::process::exit(130);\n });\n}\n```\n\nReplace all 4 sites in src/app/handlers.rs with: install_ctrl_c_handler(signal.clone());\n\n## Files Changed\n- src/core/shutdown.rs (NEW ~20 LOC)\n- src/core/mod.rs (add pub mod shutdown)\n- src/app/handlers.rs (replace 4 handler blocks with 1-line calls)\n\n## Testing\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n- Existing tests should pass unchanged\n\n## Notes\n- This is independently valuable cleanup regardless of whether the full migration happens\n- The function signature will change in Phase 3d to accept &Cx instead of using tokio::spawn","status":"open","priority":2,"issue_type":"task","created_at":"2026-03-06T18:38:08.054343Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:51:18.765957Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-0"],"dependencies":[{"issue_id":"bd-21fb","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:38:08.057882Z","created_by":"tayloreernisse"}]} +{"id":"bd-21fb","title":"Phase 0a: Extract signal handler to core/shutdown.rs","description":"## What\nExtract the 4 identical Ctrl+C signal handlers from the command handler code into a single function in core/shutdown.rs.\n\n## Why\nReduces tokio surface area before the runtime swap. 4 spawn sites -> 1 function. 
The function body changes in Phase 3d (signal handler rewrite for asupersync), so centralizing first makes that change trivial.\n\n## Rearchitecture Context (2026-03-06)\nA major code reorganization moved command handler dispatch logic out of main.rs into physical files under src/app/ using include!() chains:\n- src/main.rs (419 LOC, thin entry) -> include!(\"app/dispatch.rs\") -> include!(\"handlers.rs\"), include!(\"errors.rs\"), include!(\"robot_docs.rs\")\n- All handler code is LOGICALLY still in main.rs scope but PHYSICALLY in src/app/handlers.rs (~1730 LOC)\n\n## Current State\nThe 4 signal handler sites are in src/app/handlers.rs (physically), at approximately:\n- Line ~214 (sync handler)\n- Line ~1535 (ingest handler)\n- Line ~1661 (surgical sync handler)\n- Line ~1718 (embed handler)\n\nEach handler is a tokio::spawn block that:\n1. Awaits tokio::signal::ctrl_c()\n2. Prints interrupt message\n3. Calls signal.cancel()\n4. Awaits second ctrl_c for force quit via process::exit(130)\n\n## Implementation\n\nNew file: src/core/shutdown.rs\n```rust\npub fn install_ctrl_c_handler(signal: ShutdownSignal) {\n tokio::spawn(async move {\n let _ = tokio::signal::ctrl_c().await;\n eprintln!(\"\\nInterrupted, finishing current batch... 
(Ctrl+C again to force quit)\");\n signal.cancel();\n let _ = tokio::signal::ctrl_c().await;\n std::process::exit(130);\n });\n}\n```\n\nReplace all 4 sites in src/app/handlers.rs with: install_ctrl_c_handler(signal.clone());\n\n## Files Changed\n- src/core/shutdown.rs (NEW ~20 LOC)\n- src/core/mod.rs (add pub mod shutdown)\n- src/app/handlers.rs (replace 4 handler blocks with 1-line calls)\n\n## Testing\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n- Existing tests should pass unchanged\n\n## Notes\n- This is independently valuable cleanup regardless of whether the full migration happens\n- The function signature will change in Phase 3d to accept &Cx instead of using tokio::spawn","status":"closed","priority":2,"issue_type":"task","created_at":"2026-03-06T18:38:08.054343Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.487068Z","closed_at":"2026-03-06T21:11:12.485805Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-0"],"dependencies":[{"issue_id":"bd-21fb","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:38:08.057882Z","created_by":"tayloreernisse"}]} {"id":"bd-221","title":"Create migration 008_fts5.sql","description":"## Background\nFTS5 (Full-Text Search 5) provides the lexical search backbone for Gate A. The virtual table + triggers keep the FTS index in sync with the documents table automatically. This migration must be applied AFTER migration 007 (documents table exists). The trigger design handles NULL titles via COALESCE and only rebuilds the FTS entry when searchable text actually changes (not metadata-only updates).\n\n## Approach\nCreate `migrations/008_fts5.sql` with the exact SQL from PRD Section 1.2:\n\n1. **Virtual table:** `documents_fts` using FTS5 with porter stemmer, prefix indexes (2,3,4), external content backed by `documents` table\n2. 
**Insert trigger:** `documents_ai` — inserts into FTS on document insert, uses COALESCE(title, '') for NULL safety\n3. **Delete trigger:** `documents_ad` — removes from FTS on document delete using the FTS5 delete command syntax\n4. **Update trigger:** `documents_au` — only fires when `title` or `content_text` changes (WHEN clause), performs delete-then-insert to update FTS\n\nRegister migration 8 in `src/core/db.rs` MIGRATIONS array.\n\n**Critical detail:** The COALESCE is required because FTS5 external-content tables require exact value matching for delete operations. If NULL was inserted, the delete trigger couldn't match it (NULL != NULL in SQL).\n\n## Acceptance Criteria\n- [ ] `migrations/008_fts5.sql` file exists\n- [ ] `documents_fts` virtual table created with `tokenize='porter unicode61'` and `prefix='2 3 4'`\n- [ ] `content='documents'` and `content_rowid='id'` set (external content mode)\n- [ ] Insert trigger `documents_ai` fires on document insert with COALESCE(title, '')\n- [ ] Delete trigger `documents_ad` fires on document delete using FTS5 delete command\n- [ ] Update trigger `documents_au` only fires when `old.title IS NOT new.title OR old.content_text != new.content_text`\n- [ ] Prefix search works: query `auth*` matches \"authentication\"\n- [ ] After bulk insert of N documents, `SELECT count(*) FROM documents_fts` returns N\n- [ ] Schema version 8 recorded in schema_version table\n- [ ] `cargo test migration_tests` passes\n\n## Files\n- `migrations/008_fts5.sql` — new file (copy exact SQL from PRD Section 1.2)\n- `src/core/db.rs` — add migration 8 to MIGRATIONS array\n\n## TDD Loop\nRED: Register migration in db.rs, `cargo test migration_tests` fails (SQL file missing)\nGREEN: Create `008_fts5.sql` with all triggers\nVERIFY: `cargo test migration_tests && cargo build`\n\n## Edge Cases\n- Metadata-only updates (e.g., changing `updated_at` or `labels_hash`) must NOT trigger FTS rebuild — the WHEN clause prevents this\n- NULL titles must use 
COALESCE to empty string in both insert and delete triggers\n- The update trigger does delete+insert (not FTS5 'delete' + regular insert atomically) — this is the correct FTS5 pattern for content changes","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:25:25.763146Z","created_by":"tayloreernisse","updated_at":"2026-01-30T16:56:13.131830Z","closed_at":"2026-01-30T16:56:13.131771Z","close_reason":"Completed: migration 008_fts5.sql with FTS5 virtual table, 3 sync triggers (insert/delete/update with COALESCE NULL safety), prefix search, registered in db.rs, cargo build + tests pass","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-221","depends_on_id":"bd-hrs","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-226s","title":"Epic: Time-Decay Expert Scoring Model","description":"## Background\n\nReplace flat-weight expertise scoring with exponential half-life decay, split reviewer signals (participated vs assigned-only), dual-path rename awareness, and new CLI flags (--as-of, --explain-score, --include-bots, --all-history).\n\n**Plan document:** plans/time-decay-expert-scoring.md (iteration 6, target 8)\n**Beads revision:** 3 (updated line numbers to match v0.7.0 codebase, fixed migration 022->026, fixed test count 27->31, added explicit callsite update scope to bd-13q8)\n\n## Children (Execution Order)\n\n### Layer 0 — Foundation (no deps)\n- **bd-2w1p** — Add half-life fields and config validation to ScoringConfig\n- **bd-1soz** — Add half_life_decay() pure function\n- **bd-18dn** — Add normalize_query_path() pure function\n\n### Layer 1 — Schema + Helpers (depends on Layer 0)\n- **bd-2ao4** — Add migration 026 for dual-path and reviewer participation indexes (5 indexes)\n- **bd-2yu5** — Add timestamp-aware test helpers (insert_mr_at, insert_diffnote_at, insert_file_change_with_old_path)\n- **bd-1b50** — Update existing tests for new ScoringConfig fields 
(..Default::default())\n\n### Layer 2 — SQL + Path Probes (depends on Layer 1)\n- **bd-1hoq** — Restructure expert SQL with CTE-based dual-path matching (8 CTEs, mr_activity, parameterized ?5/?6)\n- **bd-1h3f** — Add rename awareness to path resolution probes (build_path_query + suffix_probe)\n\n### Layer 3 — Rust Aggregation (depends on Layer 2)\n- **bd-13q8** — Implement Rust-side decay aggregation with reviewer split + update all 17 existing query_expert() callsites\n\n### Layer 4 — CLI (depends on Layer 3)\n- **bd-11mg** — Add CLI flags: --as-of, --explain-score, --include-bots, --all-history, path normalization\n\n### Layer 5 — Verification (depends on Layer 4)\n- **bd-1vti** — Run full test suite: 31 new tests + all existing tests, no regressions\n- **bd-1j5o** — Quality gates, query plan check (6 index points), real-world validation\n\n## Revision 3 Delta (from revision 2)\n- **Migration number**: 022 -> 026 (latest existing is 025_note_dirty_backfill.sql)\n- **Test count**: 27 -> 31 (correct tally: 2+3+2+1+2+13+8=31)\n- **Line numbers**: All beads updated to match v0.7.0 codebase (ScoringConfig at 155, validate_scoring at 274, query_expert at 641, build_path_query at 467, suffix_probe at 596, run_who at 276, test helpers at 2469-2598, test_expert_scoring_weights at 3551)\n- **bd-13q8 scope**: Now explicitly documents updating all 17 existing query_expert() callsites (1 production + 16 test) when changing signature from 7 to 10 params\n- **bd-2yu5**: insert_file_change_with_old_path now has complete SQL implementation (was placeholder)\n\n## Files Modified\n- src/core/config.rs (ScoringConfig struct at line 155, validation at line 274)\n- src/cli/commands/who.rs (decay function, normalize_query_path, SQL, aggregation, CLI flags, tests)\n- src/core/db.rs (MIGRATIONS array — add (\"026\", ...) 
entry)\n- CREATE: migrations/026_scoring_indexes.sql (5 new indexes)\n\n## Acceptance Criteria\n- [ ] All 31 new tests pass (across all child beads)\n- [ ] All existing tests pass unchanged (decay ~1.0 at now_ms())\n- [ ] cargo check + clippy + fmt clean\n- [ ] ubs scan clean on modified files\n- [ ] EXPLAIN QUERY PLAN shows 6 index usage points (manual verification)\n- [ ] Real-world validation: who --path on known files shows recency discounting\n- [ ] who --explain-score component breakdown sums to total\n- [ ] who --as-of produces deterministic results across runs\n- [ ] Assigned-only reviewers rank below participated reviewers\n- [ ] Old file paths resolve and credit expertise after renames\n- [ ] Path normalization: ./src//foo.rs resolves identically to src/foo.rs\n\n## Edge Cases\n- f64 NaN guard in half_life_decay (hl=0 -> 0.0)\n- Deterministic f64 ordering via mr_id sort before summation\n- Closed MR multiplier applied via state_mult in SQL (not Rust string match)\n- Trivial notes (< reviewer_min_note_chars) classified as assigned-only\n- Exclusive upper bound on --as-of prevents future event leakage\n- Config upper bounds prevent absurd values (3650-day cap, 4096-char cap, NaN/Inf rejection)","status":"closed","priority":1,"issue_type":"epic","created_at":"2026-02-09T16:58:58.007560Z","created_by":"tayloreernisse","updated_at":"2026-02-12T20:43:09.211271Z","closed_at":"2026-02-12T20:43:09.211222Z","close_reason":"Epic complete: time-decay expert scoring model implemented. 
3-agent swarm, 12 tasks, 621 tests, all quality gates green, real-world validation passed.","compaction_level":0,"original_size":0} {"id":"bd-227","title":"[CP1] gi count issues/discussions/notes commands","description":"Count entities in the database.\n\n## Module\nsrc/cli/commands/count.rs\n\n## Clap Definition\nCount {\n #[arg(value_parser = [\"issues\", \"mrs\", \"discussions\", \"notes\"])]\n entity: String,\n \n #[arg(long, value_parser = [\"issue\", \"mr\"])]\n r#type: Option<String>,\n}\n\n## Commands\n- gi count issues → 'Issues: N'\n- gi count discussions → 'Discussions: N'\n- gi count discussions --type=issue → 'Issue Discussions: N'\n- gi count notes → 'Notes: N (excluding M system)'\n- gi count notes --type=issue → 'Issue Notes: N (excluding M system)'\n\n## Implementation\n- Simple COUNT(*) queries\n- For notes, also count WHERE is_system = 1 for system note count\n- Filter by noteable_type when --type specified\n\nFiles: src/cli/commands/count.rs\nDone when: Counts match expected values from GitLab","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T16:58:25.648805Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.920135Z","closed_at":"2026-01-25T17:02:01.920135Z","deleted_at":"2026-01-25T17:02:01.920129Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} @@ -125,17 +125,17 @@ {"id":"bd-22li","title":"OBSERV: Implement SyncRunRecorder lifecycle helper","description":"## Background\nThe sync_runs table exists (migration 001) but NOTHING writes to it.
SyncRunRecorder encapsulates the INSERT-on-start, UPDATE-on-finish lifecycle, fixing this bug and enabling sync history tracking.\n\n## Approach\nCreate src/core/sync_run.rs:\n\n```rust\nuse crate::core::metrics::StageTiming;\nuse crate::core::error::Result;\nuse rusqlite::Connection;\n\npub struct SyncRunRecorder {\n row_id: i64,\n}\n\nimpl SyncRunRecorder {\n /// Insert a new sync_runs row with status='running'.\n pub fn start(conn: &Connection, command: &str, run_id: &str) -> Result<Self> {\n let now_ms = crate::core::time::now_ms();\n conn.execute(\n \"INSERT INTO sync_runs (started_at, heartbeat_at, status, command, run_id)\n VALUES (?1, ?2, 'running', ?3, ?4)\",\n rusqlite::params![now_ms, now_ms, command, run_id],\n )?;\n let row_id = conn.last_insert_rowid();\n Ok(Self { row_id })\n }\n\n /// Mark run as succeeded with full metrics.\n pub fn succeed(\n self,\n conn: &Connection,\n metrics: &[StageTiming],\n total_items: usize,\n total_errors: usize,\n ) -> Result<()> {\n let now_ms = crate::core::time::now_ms();\n let metrics_json = serde_json::to_string(metrics)\n .unwrap_or_else(|_| \"[]\".to_string());\n conn.execute(\n \"UPDATE sync_runs\n SET finished_at = ?1, status = 'succeeded',\n metrics_json = ?2, total_items_processed = ?3, total_errors = ?4\n WHERE id = ?5\",\n rusqlite::params![now_ms, metrics_json, total_items, total_errors, self.row_id],\n )?;\n Ok(())\n }\n\n /// Mark run as failed with error message and optional partial metrics.\n pub fn fail(\n self,\n conn: &Connection,\n error: &str,\n metrics: Option<&[StageTiming]>,\n ) -> Result<()> {\n let now_ms = crate::core::time::now_ms();\n let metrics_json = metrics\n .map(|m| serde_json::to_string(m).unwrap_or_else(|_| \"[]\".to_string()));\n conn.execute(\n \"UPDATE sync_runs\n SET finished_at = ?1, status = 'failed', error = ?2,\n metrics_json = ?3\n WHERE id = ?4\",\n rusqlite::params![now_ms, error, metrics_json, self.row_id],\n )?;\n Ok(())\n }\n}\n```\n\nRegister in 
src/core/mod.rs:\n```rust\npub mod sync_run;\n```\n\nNote: SyncRunRecorder takes self (not &self) in succeed/fail to enforce single-use lifecycle. You start a run, then either succeed or fail it -- never both.\n\nThe existing time::now_ms() helper (src/core/time.rs) returns milliseconds since epoch as i64. Used by the existing sync_runs schema (started_at, finished_at are INTEGER ms).\n\n## Acceptance Criteria\n- [ ] SyncRunRecorder::start() inserts row with status='running', started_at set\n- [ ] SyncRunRecorder::succeed() updates status='succeeded', finished_at set, metrics_json populated\n- [ ] SyncRunRecorder::fail() updates status='failed', error set, finished_at set\n- [ ] fail() with Some(metrics) stores partial metrics in metrics_json\n- [ ] fail() with None leaves metrics_json as NULL\n- [ ] succeed/fail consume self (single-use enforcement)\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/core/sync_run.rs (new file)\n- src/core/mod.rs (register module)\n\n## TDD Loop\nRED:\n - test_sync_run_recorder_start: in-memory DB, start(), query sync_runs, assert status='running'\n - test_sync_run_recorder_succeed: start() then succeed(), assert status='succeeded', metrics_json parseable\n - test_sync_run_recorder_fail: start() then fail(), assert status='failed', error set\n - test_sync_run_recorder_fail_with_partial_metrics: fail with Some(metrics), assert metrics_json has data\nGREEN: Implement SyncRunRecorder\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- Connection lifetime: SyncRunRecorder stores row_id, not Connection. The caller must ensure the same Connection is used for start/succeed/fail.\n- Panic during sync: if the program panics between start() and succeed()/fail(), the row stays as 'running'. 
The existing stale lock detection (stale_lock_minutes) handles this.\n- metrics_json encoding: serde_json::to_string on Vec<StageTiming> produces a JSON array string.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T15:54:51.364617Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:38:04.903657Z","closed_at":"2026-02-04T17:38:04.903610Z","close_reason":"Implemented SyncRunRecorder with start/succeed/fail lifecycle, 4 passing tests","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-22li","depends_on_id":"bd-1o4h","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-22li","depends_on_id":"bd-3pz","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-22li","depends_on_id":"bd-apmo","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-22uw","title":"NOTE-2F: Search CLI --type note support","description":"## Background\nAllow lore search --type note to filter search results to note-type documents. The search pipeline uses SearchFilters.source_type which maps to SourceType::parse() for filtering.\n\n## Approach\n1. Update SearchArgs.source_type value_parser in src/cli/mod.rs (line 560):\n Current: value_parser = [\"issue\", \"mr\", \"discussion\"]\n New: value_parser = [\"issue\", \"mr\", \"discussion\", \"note\"]\n\n2. Update search results display in src/cli/commands/search.rs (line 333-338):\n Add match arm in the type_prefix match:\n \"note\" => \"Note\",\n\n3. Verify SourceType::parse() already handles \"note\" (done by NOTE-2B). The search pipeline in search.rs line 105-108 calls SourceType::parse() on the CLI source_type value. 
No changes needed to search logic itself — the filter propagates through SearchFilters to the SQL WHERE clause automatically.\n\n## Files\n- MODIFY: src/cli/mod.rs (line 560, add \"note\" to value_parser array)\n- MODIFY: src/cli/commands/search.rs (line 333-338, add \"note\" => \"Note\" match arm)\n\n## TDD Anchor\nCompile test: verify lore search --type note is accepted by clap (no \"invalid value\" error).\nSmoke test: with note documents in DB, verify lore search --type note returns only note results.\nVERIFY: cargo test -- --nocapture (existing search tests still pass)\n\n## Acceptance Criteria\n- [ ] lore search --type note accepted by clap parser\n- [ ] lore search --type note filters to note documents only\n- [ ] Note results display \"Note\" prefix in human table output (line 340-346)\n- [ ] Robot JSON includes source_type: \"note\" field (already handled by SearchResultDisplay serialization)\n- [ ] All existing search tests still pass\n\n## Dependency Context\n- Depends on NOTE-2D (bd-2ezb): note documents must exist in DB to be searched (regenerator must handle SourceType::Note)\n\n## Edge Cases\n- No note documents indexed yet: returns 0 results (same as any empty type filter)\n- Mixed search (no --type flag): note documents appear alongside issue/mr/discussion results","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T17:02:32.836882Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:15.638041Z","closed_at":"2026-02-12T18:13:15.637987Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["cli","per-note","search"]} {"id":"bd-23a4","title":"OBSERV: Wire SyncRunRecorder into sync and ingest commands","description":"## Background\nWith SyncRunRecorder implemented and MetricsLayer available, we wire them into the actual sync and ingest command handlers. 
This makes every sync/ingest invocation create a database record with full metrics.\n\n## Approach\n### src/cli/commands/sync.rs - run_sync() (line ~54)\n\nBefore the pipeline:\n```rust\nlet recorder = SyncRunRecorder::start(&conn, \"sync\", &run_id)?;\n```\n\nAfter pipeline succeeds:\n```rust\nlet stages = metrics_handle.extract_timings();\nlet total_items = stages.iter().map(|s| s.items_processed).sum::<usize>();\nlet total_errors = stages.iter().map(|s| s.errors).sum::<usize>();\nrecorder.succeed(&conn, &stages, total_items, total_errors)?;\n```\n\nOn pipeline failure (wrap pipeline in match or use a helper):\n```rust\nmatch pipeline_result {\n Ok(result) => {\n let stages = metrics_handle.extract_timings();\n recorder.succeed(&conn, &stages, total_items, total_errors)?;\n Ok(result)\n }\n Err(e) => {\n let stages = metrics_handle.extract_timings();\n recorder.fail(&conn, &e.to_string(), Some(&stages))?;\n Err(e)\n }\n}\n```\n\n### src/cli/commands/ingest.rs - run_ingest() (line ~107)\n\nSame pattern: start before pipeline, succeed/fail after.\n\nNote: run_sync() calls run_ingest() internally. Both will create sync_runs records. This is intentional -- standalone ingest should also be tracked. But when run_sync calls run_ingest, the ingest record is a child operation. Consider: should we skip the ingest recorder when called from sync? Decision: keep both records. The run_id differs, and sync-status can distinguish by the \"command\" column.\n\nActually, re-reading the code: run_sync() (line 54-178) calls run_ingest() for issues and MRs. If both create sync_runs rows, we get 3 rows per sync (1 sync + 2 ingest). This is fine -- command='sync' vs command='ingest:issues' distinguishes them.\n\n### Connection sharing\nrun_sync and run_ingest already have access to a Connection. SyncRunRecorder::start takes &Connection.\n\n### MetricsLayer handle\nmetrics_handle must be passed from main.rs through handle_sync_cmd/handle_ingest to run_sync/run_ingest. 
This requires adding a parameter. Alternative: use a thread-local or global. Prefer parameter passing for testability.\n\n## Acceptance Criteria\n- [ ] Every lore sync creates a sync_runs row with status transitioning running -> succeeded/failed\n- [ ] Every lore ingest creates a sync_runs row\n- [ ] metrics_json contains serialized Vec<StageTiming> on success\n- [ ] Failed syncs record error message and partial metrics\n- [ ] sync_runs.run_id matches run_id in log files and robot JSON\n- [ ] total_items_processed and total_errors are populated\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/cli/commands/sync.rs (wire SyncRunRecorder + extract_timings in run_sync)\n- src/cli/commands/ingest.rs (wire SyncRunRecorder in run_ingest)\n- src/main.rs (pass metrics_handle to command handlers)\n\n## TDD Loop\nRED: test_sync_creates_run_record (integration: run sync, query sync_runs, assert row exists with metrics)\nGREEN: Wire SyncRunRecorder into run_sync and run_ingest\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- Database locked: SyncRunRecorder operations happen on the main connection. If a concurrent process holds the lock, the INSERT/UPDATE will wait (WAL mode) or error. Use existing lock handling.\n- Partial failure: if ingest issues succeeds but ingest MRs fails, the sync recorder should fail() with partial metrics (stages from issues but not MRs).\n- metrics_handle lifetime: must outlive the root span. 
Since it's an Arc clone, this is guaranteed.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T15:54:51.414504Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:41:04.963794Z","closed_at":"2026-02-04T17:41:04.963749Z","close_reason":"Wired SyncRunRecorder into handle_sync_cmd and handle_ingest in main.rs","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-23a4","depends_on_id":"bd-22li","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-23a4","depends_on_id":"bd-34ek","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-23a4","depends_on_id":"bd-3pz","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-23xb","title":"Phase 5c: Full verification and hardening pass","description":"## What\nRun the complete verification checklist and resolve any remaining issues.\n\n## Verification Checklist\n\n```bash\ncargo check --all-targets\ncargo clippy --all-targets -- -D warnings\ncargo fmt --check\ncargo test\n```\n\n## Specific Things to Verify\n\n1. **async-stream on nightly** — Does async_stream 0.3 compile on current nightly? If not, manual Stream impl is the fallback.\n\n2. **TLS root certs on macOS** — Does tls-native-roots pick up system CA certs? Test with production GitLab endpoint.\n\n3. **Connection pool under concurrency** — Do join_all batches (4-8 concurrent requests to same host) work without pool deadlock? Stress test.\n\n4. **Pagination streams** — Do async_stream::stream! pagination generators work unchanged? Test with multi-page GitLab results.\n\n5. **Wiremock test isolation** — Do 42 wiremock tests pass with tokio only in dev-deps? Verify no accidental production tokio usage.\n\n6. **Build time comparison** — Measure before/after build times. Document delta.\n\n7. 
**Binary size comparison** — Measure before/after binary sizes.\n\n## Rollback readiness\nVerify that reverting the atomic Phases 1-3 commit cleanly restores the tokio+reqwest setup. Test by checking out the pre-migration commit and running cargo test.\n\n## Files Changed\n- Potentially any file if issues are found\n- Documentation updates if behavior differs\n\n## Depends On\n- Phase 4a, 4b, 4c (all test migration complete)\n- Phase 5a (HTTP parity verified)\n- Phase 5b (DB invariants verified)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:42:35.571383Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:55:28.045029Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-5"],"dependencies":[{"issue_id":"bd-23xb","depends_on_id":"bd-12um","type":"blocks","created_at":"2026-03-06T18:48:01.617572Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-16qf","type":"blocks","created_at":"2026-03-06T18:42:59.903775Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-18ai","type":"blocks","created_at":"2026-03-06T18:42:59.817629Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:42:35.602135Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-1yky","type":"blocks","created_at":"2026-03-06T18:42:59.995719Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-26km","type":"blocks","created_at":"2026-03-06T18:43:00.177732Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-2815","type":"blocks","created_at":"2026-03-06T18:43:00.086917Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-2pyn","type":"blocks","created_at":"2026-03-06T18:55:28.044984Z","created_by":"tayloreernisse"}]} +{"id":"bd-23xb","title":"Phase 5c: Full verification and hardening pass","description":"## What\nRun the complete verification 
checklist and resolve any remaining issues.\n\n## Verification Checklist\n\n```bash\ncargo check --all-targets\ncargo clippy --all-targets -- -D warnings\ncargo fmt --check\ncargo test\n```\n\n## Specific Things to Verify\n\n1. **async-stream on nightly** — Does async_stream 0.3 compile on current nightly? If not, manual Stream impl is the fallback.\n\n2. **TLS root certs on macOS** — Does tls-native-roots pick up system CA certs? Test with production GitLab endpoint.\n\n3. **Connection pool under concurrency** — Do join_all batches (4-8 concurrent requests to same host) work without pool deadlock? Stress test.\n\n4. **Pagination streams** — Do async_stream::stream! pagination generators work unchanged? Test with multi-page GitLab results.\n\n5. **Wiremock test isolation** — Do 42 wiremock tests pass with tokio only in dev-deps? Verify no accidental production tokio usage.\n\n6. **Build time comparison** — Measure before/after build times. Document delta.\n\n7. **Binary size comparison** — Measure before/after binary sizes.\n\n## Rollback readiness\nVerify that reverting the atomic Phases 1-3 commit cleanly restores the tokio+reqwest setup. 
Test by checking out the pre-migration commit and running cargo test.\n\n## Files Changed\n- Potentially any file if issues are found\n- Documentation updates if behavior differs\n\n## Depends On\n- Phase 4a, 4b, 4c (all test migration complete)\n- Phase 5a (HTTP parity verified)\n- Phase 5b (DB invariants verified)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:42:35.571383Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.691261Z","closed_at":"2026-03-06T21:11:12.691215Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-5"],"dependencies":[{"issue_id":"bd-23xb","depends_on_id":"bd-12um","type":"blocks","created_at":"2026-03-06T18:48:01.617572Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-16qf","type":"blocks","created_at":"2026-03-06T18:42:59.903775Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-18ai","type":"blocks","created_at":"2026-03-06T18:42:59.817629Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:42:35.602135Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-1yky","type":"blocks","created_at":"2026-03-06T18:42:59.995719Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-26km","type":"blocks","created_at":"2026-03-06T18:43:00.177732Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-2815","type":"blocks","created_at":"2026-03-06T18:43:00.086917Z","created_by":"tayloreernisse"},{"issue_id":"bd-23xb","depends_on_id":"bd-2pyn","type":"blocks","created_at":"2026-03-06T18:55:28.044984Z","created_by":"tayloreernisse"}]} {"id":"bd-247","title":"Implement issue document extraction","description":"## Background\nIssue documents are the simplest document type — a structured header + description 
text. The extractor queries the existing issues and issue_labels tables (populated by ingestion) and assembles a DocumentData struct. This is one of three entity-specific extractors (issue, MR, discussion) that feed the document regeneration pipeline.\n\n## Approach\nImplement `extract_issue_document()` in `src/documents/extractor.rs`:\n\n```rust\n/// Extract a searchable document from an issue.\n/// Returns None if the issue has been deleted from the DB.\npub fn extract_issue_document(conn: &Connection, issue_id: i64) -> Result<Option<DocumentData>>\n```\n\n**SQL queries (from PRD Section 2.2):**\n```sql\n-- Main entity\nSELECT i.id, i.iid, i.title, i.description, i.state, i.author_username,\n i.created_at, i.updated_at, i.web_url,\n p.path_with_namespace, p.id AS project_id\nFROM issues i\nJOIN projects p ON p.id = i.project_id\nWHERE i.id = ?\n\n-- Labels\nSELECT l.name FROM issue_labels il\nJOIN labels l ON l.id = il.label_id\nWHERE il.issue_id = ?\nORDER BY l.name\n```\n\n**Document format:**\n```\n[[Issue]] #234: Authentication redesign\nProject: group/project-one\nURL: https://gitlab.example.com/group/project-one/-/issues/234\nLabels: [\"bug\", \"auth\"]\nState: opened\nAuthor: @johndoe\n\n--- Description ---\n\nWe need to modernize our authentication system...\n```\n\n**Implementation steps:**\n1. Query issue row — if not found, return Ok(None)\n2. Query labels via junction table\n3. Format header with [[Issue]] prefix\n4. Compute content_hash via compute_content_hash()\n5. Compute labels_hash via compute_list_hash()\n6. paths is always empty for issues (paths are only for DiffNote discussions)\n7. 
Return DocumentData with all fields populated\n\n## Acceptance Criteria\n- [ ] Deleted issue (not in DB) returns Ok(None)\n- [ ] Issue with no description: content_text has header only (no \"--- Description ---\" section)\n- [ ] Issue with no labels: Labels line shows \"[]\"\n- [ ] Issue with labels: Labels line shows sorted JSON array\n- [ ] content_hash is SHA-256 of the full content_text\n- [ ] labels_hash is SHA-256 of sorted label names joined by newline\n- [ ] paths_hash is empty string hash (issues have no paths)\n- [ ] project_id comes from the JOIN with projects table\n- [ ] `cargo test extract_issue` passes\n\n## Files\n- `src/documents/extractor.rs` — implement `extract_issue_document()`\n\n## TDD Loop\nRED: Test in `#[cfg(test)] mod tests`:\n- `test_issue_document_format` — verify header format matches PRD template\n- `test_issue_not_found` — returns Ok(None) for nonexistent issue_id\n- `test_issue_no_description` — no description section when description is NULL\n- `test_issue_labels_sorted` — labels appear in alphabetical order\n- `test_issue_hash_deterministic` — same issue produces same content_hash\nGREEN: Implement extract_issue_document with SQL queries\nVERIFY: `cargo test extract_issue`\n\n## Edge Cases\n- Issue with NULL description: skip \"--- Description ---\" section entirely\n- Issue with empty string description: include section but with empty body\n- Issue with very long description: no truncation here (hard cap applied by caller)\n- Labels with special characters (quotes, commas): JSON array handles escaping","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-30T15:25:45.490145Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:28:13.974948Z","closed_at":"2026-01-30T17:28:13.974891Z","close_reason":"Implemented extract_issue_document() with SQL queries, PRD-compliant format, and 7 
tests","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-247","depends_on_id":"bd-36p","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-247","depends_on_id":"bd-hrs","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-24j1","title":"OBSERV: Add #[instrument] spans to ingestion stages","description":"## Background\nTracing spans on each sync stage create the hierarchy that (1) makes log lines filterable by stage, (2) Phase 3's MetricsLayer reads to build StageTiming trees, and (3) gives meaningful context in -vv stderr output.\n\n## Approach\nAdd #[instrument] attributes or manual spans to these functions:\n\n### src/ingestion/orchestrator.rs\n1. ingest_project_issues_with_progress() (line ~110):\n```rust\n#[instrument(skip_all, fields(stage = \"ingest_issues\", project = %project_path))]\npub async fn ingest_project_issues_with_progress(...) -> Result {\n```\n\n2. The MR equivalent (ingest_project_mrs_with_progress or similar):\n```rust\n#[instrument(skip_all, fields(stage = \"ingest_mrs\", project = %project_path))]\n```\n\n3. Inside the issue ingest function, add child spans for sub-stages:\n```rust\nlet _fetch_span = tracing::info_span!(\"fetch_pages\", project = %project_path).entered();\n// ... fetch logic\ndrop(_fetch_span);\n\nlet _disc_span = tracing::info_span!(\"sync_discussions\", project = %project_path).entered();\n// ... discussion sync logic\ndrop(_disc_span);\n```\n\n4. drain_resource_events() (line ~566):\n```rust\nlet _span = tracing::info_span!(\"fetch_resource_events\", project = %project_path).entered();\n```\n\n### src/documents/regenerator.rs\n5. regenerate_dirty_documents() (line ~24):\n```rust\n#[instrument(skip_all, fields(stage = \"generate_docs\"))]\npub fn regenerate_dirty_documents(conn: &Connection) -> Result {\n```\n\n### src/embedding/pipeline.rs\n6. 
embed_documents() (line ~36):\n```rust\n#[instrument(skip_all, fields(stage = \"embed\"))]\npub async fn embed_documents(...) -> Result {\n```\n\n### Important: field declarations for Phase 3\nThe #[instrument] fields should include empty recording fields that Phase 3 (bd-16m8) will populate:\n```rust\n#[instrument(skip_all, fields(\n stage = \"ingest_issues\",\n project = %project_path,\n items_processed = tracing::field::Empty,\n items_skipped = tracing::field::Empty,\n errors = tracing::field::Empty,\n))]\n```\n\nThis declares the fields on the span so MetricsLayer can capture them when span.record() is called later.\n\n## Acceptance Criteria\n- [ ] JSON log lines show nested span context: sync > ingest_issues > fetch_pages\n- [ ] Each stage span has a \"stage\" field with the stage name\n- [ ] Per-project spans include \"project\" field\n- [ ] Spans are visible in -vv stderr output as bracketed context\n- [ ] Empty recording fields declared for items_processed, items_skipped, errors\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/ingestion/orchestrator.rs (spans on ingest functions and sub-stages)\n- src/documents/regenerator.rs (span on regenerate_dirty_documents)\n- src/embedding/pipeline.rs (span on embed_documents)\n\n## TDD Loop\nRED:\n - test_span_context_in_json_logs: mock sync, capture JSON, verify span chain\n - test_nested_span_chain: verify parent-child: sync > ingest_issues > fetch_pages\n - test_span_elapsed_on_close: create span, sleep 10ms, verify elapsed >= 10\nGREEN: Add #[instrument] and manual spans to all stage functions\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- #[instrument] on async fn: uses tracing::Instrument trait automatically. 
Works with tokio.\n- skip_all is essential: without it, #[instrument] tries to Debug-format all parameters, which may not implement Debug or may be expensive.\n- Manual span drop: for sub-stages within a single function, use explicit drop(_span) to end the span before the next sub-stage starts. Otherwise spans overlap.\n- tracing::field::Empty: declares a field that can be recorded later. If never recorded, it appears as empty/missing in output (not zero).","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T15:54:07.821068Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:19:34.307672Z","closed_at":"2026-02-04T17:19:34.307624Z","close_reason":"Added #[instrument] spans to ingest_project_issues_with_progress, ingest_project_merge_requests_with_progress, drain_resource_events, regenerate_dirty_documents, embed_documents","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-24j1","depends_on_id":"bd-2ni","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-24j1","depends_on_id":"bd-2rr","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-25hb","title":"NOTE-1C: Human and robot output formatting for notes","description":"## Background\nImplement the 4 output formatters for the notes command: human table, robot JSON, JSONL streaming, and CSV export.\n\n## Approach\nAdd to src/cli/commands/list.rs (after the query_notes function from NOTE-1A):\n\n1. 
pub fn print_list_notes(result: &NoteListResult) — human table:\n Use comfy-table (already in Cargo.toml) following the pattern of print_list_issues/print_list_mrs.\n Columns: ID | Author | Type | Body (truncated to 60 chars + \"...\") | Path:Line | Parent | Created\n ID: colored_cell with Cyan for gitlab_id\n Author: @username with Magenta\n Type: \"Diff\" for DiffNote, \"Disc\" for DiscussionNote, \"-\" for others\n Path: position_new_path:line (or \"-\" if no path)\n Parent: \"Issue #N\" or \"MR !N\" from noteable_type + parent_iid\n Created: format_relative_time (existing helper in list.rs)\n\n2. pub fn print_list_notes_json(result: &NoteListResult, elapsed_ms: u64, fields: Option<&[String]>) — robot JSON:\n Standard envelope: {\"ok\":true,\"data\":{\"notes\":[...],\"total_count\":N,\"showing\":M},\"meta\":{\"elapsed_ms\":U64}}\n Supports --fields via filter_fields() from crate::cli::robot\n Same pattern as print_list_issues_json.\n\n3. pub fn print_list_notes_jsonl(result: &NoteListResult) — one JSON object per line:\n Each line is one NoteListRowJson serialized. No envelope. Ideal for jq/notebook pipelines.\n Use serde_json::to_string for each row, println! each line.\n\n4. pub fn print_list_notes_csv(result: &NoteListResult) — CSV output:\n Check if csv crate is already used in the project. 
If not, use manual CSV with proper escaping:\n - Header row with field names matching NoteListRowJson\n - Quote fields containing commas, quotes, or newlines\n - Escape internal quotes by doubling them\n Alternatively, if adding csv crate (add csv = \"1\" to Cargo.toml [dependencies]), use csv::WriterBuilder for RFC 4180 compliance.\n\nHelper: Add a truncate_body(body: &str, max_len: usize) -> String function for the human table truncation.\n\n## Files\n- MODIFY: src/cli/commands/list.rs (4 print functions + truncate_body helper)\n- POSSIBLY MODIFY: Cargo.toml (add csv = \"1\" if using csv crate for CSV output)\n\n## TDD Anchor\nRED: test_truncate_note_body — assert 200-char body truncated to 60 + \"...\"\nGREEN: Implement truncate_body helper.\nVERIFY: cargo test truncate_note_body -- --nocapture\nTests: test_csv_output_basic (CSV output has correct header + escaped fields), test_jsonl_output_one_per_line (each line parses as valid JSON)\n\n## Acceptance Criteria\n- [ ] Human table renders with colored columns, truncated body, relative time\n- [ ] Robot JSON follows standard envelope with timing metadata\n- [ ] --fields filtering works on JSON output (via filter_fields)\n- [ ] JSONL outputs one valid JSON object per line\n- [ ] CSV properly escapes commas, quotes, and newlines in body text\n- [ ] Multi-byte chars handled correctly in CSV and truncation\n- [ ] All 3 tests pass\n\n## Dependency Context\n- Depends on NOTE-1A (bd-20p9): uses NoteListRow, NoteListRowJson, NoteListResult structs\n\n## Edge Cases\n- Empty body in table: show \"-\" or empty cell\n- Very long body with multi-byte chars: truncation must respect char boundaries (use .chars().take(n) not byte slicing)\n- JSONL with body containing newlines: serde_json::to_string escapes \\n correctly\n- CSV with body containing quotes: must double them per RFC 
4180","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T17:00:53.482055Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:24.304235Z","closed_at":"2026-02-12T18:13:24.304188Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["cli","per-note","search"],"dependencies":[{"issue_id":"bd-25hb","depends_on_id":"bd-1oyf","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-25rl","title":"Phase 2b: Migrate gitlab/graphql.rs to HTTP adapter","description":"## What\nReplace all reqwest usage in gitlab/graphql.rs with the crate::http adapter.\n\n## Why\nGraphQL module handles POST requests with Bearer auth + JSON body. Smaller scope than client.rs (~20 LOC changed).\n\n## Changes\n\n### Request construction\n```rust\n// Before\nlet response = self.http.post(&url)\n .header(\"Authorization\", format!(\"Bearer {}\", self.token))\n .header(\"Content-Type\", \"application/json\")\n .json(&body).send().await?;\nlet json: Value = response.json().await?;\n\n// After\nlet bearer = format!(\"Bearer {}\", self.token);\nlet response = self.http.post_json(&url, &[\n (\"Authorization\", &bearer),\n], &body).await?;\nlet json: Value = response.json()?; // sync — body already buffered\n```\n\n### Status matching\n```rust\n// Before: response.status().as_u16()\n// After: response.status // already u16\n```\n\n## Files Changed\n- src/gitlab/graphql.rs (~20 LOC changed)\n\n## Testing\n- All 30 graphql_tests.rs tests must pass (wiremock, still on tokio)\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n\n## Depends On\n- Phase 1 (adapter layer must exist)","notes":"WIREMOCK COMPATIBILITY: Same risk as Phase 2a (bd-lhj8). After migration, 30 graphql_tests run on #[tokio::test] but call code using asupersync HTTP. See Phase 2a description for resolution options. If asupersync HTTP is runtime-agnostic, no issue. 
Verify during Decision Gate.","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:39:52.295380Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:49:50.328964Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-2"],"dependencies":[{"issue_id":"bd-25rl","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:39:52.297175Z","created_by":"tayloreernisse"},{"issue_id":"bd-25rl","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:50.737566Z","created_by":"tayloreernisse"}]} +{"id":"bd-25rl","title":"Phase 2b: Migrate gitlab/graphql.rs to HTTP adapter","description":"## What\nReplace all reqwest usage in gitlab/graphql.rs with the crate::http adapter.\n\n## Why\nGraphQL module handles POST requests with Bearer auth + JSON body. Smaller scope than client.rs (~20 LOC changed).\n\n## Changes\n\n### Request construction\n```rust\n// Before\nlet response = self.http.post(&url)\n .header(\"Authorization\", format!(\"Bearer {}\", self.token))\n .header(\"Content-Type\", \"application/json\")\n .json(&body).send().await?;\nlet json: Value = response.json().await?;\n\n// After\nlet bearer = format!(\"Bearer {}\", self.token);\nlet response = self.http.post_json(&url, &[\n (\"Authorization\", &bearer),\n], &body).await?;\nlet json: Value = response.json()?; // sync — body already buffered\n```\n\n### Status matching\n```rust\n// Before: response.status().as_u16()\n// After: response.status // already u16\n```\n\n## Files Changed\n- src/gitlab/graphql.rs (~20 LOC changed)\n\n## Testing\n- All 30 graphql_tests.rs tests must pass (wiremock, still on tokio)\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n\n## Depends On\n- Phase 1 (adapter layer must exist)","notes":"WIREMOCK COMPATIBILITY: Same risk as Phase 2a (bd-lhj8). After migration, 30 graphql_tests run on #[tokio::test] but call code using asupersync HTTP. See Phase 2a description for resolution options. 
If asupersync HTTP is runtime-agnostic, no issue. Verify during Decision Gate.","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:39:52.295380Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.586481Z","closed_at":"2026-03-06T21:11:12.575147Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-2"],"dependencies":[{"issue_id":"bd-25rl","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:39:52.297175Z","created_by":"tayloreernisse"},{"issue_id":"bd-25rl","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:50.737566Z","created_by":"tayloreernisse"}]} {"id":"bd-25s","title":"robot-docs: Add Ollama dependency discovery to manifest","description":"## Background\n\nAdd Ollama dependency discovery to robot-docs so agents know which commands need Ollama and which work without it. Currently robot-docs lists commands, exit codes, workflows, and aliases — but has no dependency information.\n\n## Codebase Context\n\n- handle_robot_docs() in src/main.rs (line ~1646) returns RobotDocsData JSON\n- RobotDocsData struct has fields: commands, exit_codes, workflows, aliases, clap_error_codes\n- Currently 18 documented commands in the manifest\n- Ollama required for: embed, search --mode=semantic, search --mode=hybrid\n- Not required for: all Phase B temporal commands (timeline, file-history, trace), lexical search, count, ingest, stats, sync, doctor, health, who, show, issues, mrs, etc.\n- No dependencies field exists yet in RobotDocsData\n\n## Approach\n\n### 1. Add dependencies field to RobotDocsData (src/main.rs):\n\n```rust\n#[derive(Serialize)]\nstruct RobotDocsData {\n // ... 
existing fields ...\n    dependencies: DependencyInfo,\n}\n\n#[derive(Serialize)]\nstruct DependencyInfo {\n    ollama: OllamaDependency,\n}\n\n#[derive(Serialize)]\nstruct OllamaDependency {\n    required_by: Vec<String>,\n    not_required_by: Vec<String>,\n    install: HashMap<String, String>, // {\"macos\": \"brew install ollama\", \"linux\": \"curl ...\"}\n    setup: String, // \"ollama pull nomic-embed-text\"\n    note: String,\n}\n```\n\n### 2. Populate in handle_robot_docs():\n\n```json\n{\n  \"ollama\": {\n    \"required_by\": [\"embed\", \"search --mode=semantic\", \"search --mode=hybrid\"],\n    \"not_required_by\": [\"issues\", \"mrs\", \"search --mode=lexical\", \"timeline\", \"file-history\", \"count\", \"ingest\", \"stats\", \"sync\", \"doctor\", \"health\", \"who\", \"show\", \"status\"],\n    \"install\": {\"macos\": \"brew install ollama\", \"linux\": \"curl -fsSL https://ollama.ai/install.sh | sh\"},\n    \"setup\": \"ollama pull nomic-embed-text\",\n    \"note\": \"Lexical search and all temporal features work without Ollama.\"\n  }\n}\n```\n\n## Acceptance Criteria\n\n- [ ] `lore robot-docs | jq '.data.dependencies.ollama'` returns structured info\n- [ ] required_by lists embed and semantic/hybrid search modes\n- [ ] not_required_by lists all commands that work without Ollama (including Phase B if they exist)\n- [ ] Install instructions for macos and linux\n- [ ] setup field includes \"ollama pull nomic-embed-text\"\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n- [ ] `cargo fmt --check` passes\n\n## Files\n\n- MODIFY: src/main.rs (add DependencyInfo/OllamaDependency structs, update RobotDocsData, populate in handle_robot_docs)\n\n## TDD Anchor\n\nNo unit test needed — this is static metadata.
Verify with:\n\n```bash\ncargo check --all-targets\ncargo run --release -- robot-docs | jq '.data.dependencies.ollama.required_by'\ncargo run --release -- robot-docs | jq '.data.dependencies.ollama.not_required_by'\n```\n\n## Edge Cases\n\n- Keep not_required_by up to date as new commands are added — consider a comment in the code listing which commands to check\n- Phase B commands (timeline, file-history, trace) must be in not_required_by once they exist\n- If a command conditionally needs Ollama (like search with --mode flag), list the specific flag combination in required_by\n\n## Dependency Context\n\n- **RobotDocsData** (src/main.rs ~line 1646): the existing struct that this bead extends. Currently has commands (Vec), exit_codes (Vec), workflows (Vec), aliases (Vec), clap_error_codes (Vec). Adding a dependencies field is additive — no breaking changes.\n- **handle_robot_docs()**: the function that constructs and returns the JSON. All data is hardcoded in the function — no runtime introspection needed.","status":"open","priority":4,"issue_type":"feature","created_at":"2026-01-30T20:26:43.169688Z","created_by":"tayloreernisse","updated_at":"2026-02-17T16:53:20.425853Z","compaction_level":0,"original_size":0,"labels":["enhancement","robot-mode"]} {"id":"bd-26f2","title":"Implement common widgets (status bar, breadcrumb, loading, error toast, help overlay)","description":"## Background\nCommon widgets appear across all screens: the status bar shows context-sensitive key hints and sync status, the breadcrumb shows navigation depth, the loading spinner indicates background work, the error toast shows transient errors with auto-dismiss, and the help overlay (?) 
shows available keybindings.\n\n## Approach\nCreate crates/lore-tui/src/view/common/mod.rs and individual widget files:\n\nview/common/mod.rs:\n- render_breadcrumb(frame, area, nav: &NavigationStack, theme: &Theme): renders \"Dashboard > Issues > #42\" trail\n- render_status_bar(frame, area, registry: &CommandRegistry, screen: &Screen, mode: &InputMode, theme: &Theme): renders bottom bar with key hints and sync indicator\n- render_loading(frame, area, load_state: &LoadState, theme: &Theme): renders centered spinner for LoadingInitial, or subtle refresh indicator for Refreshing\n- render_error_toast(frame, area, msg: &str, theme: &Theme): renders floating toast at bottom-right with error message\n- render_help_overlay(frame, area, registry: &CommandRegistry, screen: &Screen, theme: &Theme): renders centered modal with keybinding list from registry\n\nCreate crates/lore-tui/src/view/mod.rs:\n- render_screen(frame, app: &LoreApp): top-level dispatch — renders breadcrumb + screen content + status bar + optional overlays (help, error toast, command palette)\n\n## Acceptance Criteria\n- [ ] Breadcrumb renders all stack entries with \" > \" separator\n- [ ] Status bar shows contextual hints from CommandRegistry\n- [ ] Loading spinner animates via tick subscription\n- [ ] Error toast auto-positions at bottom-right of screen\n- [ ] Help overlay shows all commands for current screen from registry\n- [ ] render_screen routes to correct per-screen view function\n- [ ] Overlays (help, error, palette) render on top of screen content\n\n## Files\n- CREATE: crates/lore-tui/src/view/mod.rs\n- CREATE: crates/lore-tui/src/view/common/mod.rs\n\n## TDD Anchor\nRED: Write test_breadcrumbs_format that creates a NavigationStack with Dashboard > IssueList, calls breadcrumbs(), asserts [\"Dashboard\", \"Issues\"].\nGREEN: Implement breadcrumbs() in NavigationStack (already in nav task) and render_breadcrumb.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml 
test_breadcrumbs\n\n## Edge Cases\n- Breadcrumb must truncate from the left if stack is too deep for terminal width\n- Status bar must handle narrow terminals (<60 cols) gracefully — show abbreviated hints\n- Error toast must handle very long messages with truncation\n- Help overlay must scroll if there are more commands than terminal height\n\n## Dependency Context\nUses NavigationStack from \"Implement NavigationStack\" task.\nUses CommandRegistry from \"Implement CommandRegistry\" task.\nUses LoadState from \"Implement AppState composition\" task.\nUses Theme from \"Implement theme configuration\" task.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T16:57:13.520393Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:25.901669Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-26f2","depends_on_id":"bd-1qpp","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-26f2","depends_on_id":"bd-1v9m","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-26f2","depends_on_id":"bd-2tr4","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-26f2","depends_on_id":"bd-38lb","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-26f2","depends_on_id":"bd-5ofk","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-26km","title":"Phase 5b: DB transaction invariant audit","description":"## What\nAudit all ingestion paths for transaction boundary correctness under cancellation. Verify 4 invariants. This is a CODE AUDIT — the corresponding tests are written in Phase 4c (bd-1yky). 
This bead identifies violations; 4c tests enforce them.\n\n## Rearchitecture Context (2026-03-06)\nKey files for this audit:\n- src/ingestion/orchestrator.rs — main orchestration (unchanged location)\n- src/ingestion/surgical.rs — ingestion-level surgical sync logic (unchanged location)\n- src/cli/commands/sync/surgical.rs — CLI command wrapper (was cli/commands/sync_surgical.rs)\n- src/ingestion/storage/ — persistence helpers moved here from core/ (payloads.rs, events.rs, queue.rs, sync_run.rs)\n\n## Invariants\n\n### INV-1: Atomic batch writes\nEach ingestion batch (issues, MRs, discussions) writes to DB inside a single unchecked_transaction(). If not committed, no partial data visible. Audit all paths — fix any that write outside a transaction.\n\n### INV-2: Region cancellation cannot corrupt committed data\nCancelled region may abandon in-flight HTTP but must not interrupt DB transaction mid-write. This holds because SQLite transactions are synchronous — once tx.execute() starts, it runs to completion regardless of task cancellation. VERIFY this holds for WAL mode.\n\n### HARD RULE: No .await between transaction open and commit/rollback\nCancellation can fire at any .await point. If .await exists between unchecked_transaction() and tx.commit(), cancelled region could drop transaction guard mid-batch, rolling back partial writes silently.\n\nIf any path must do async work mid-transaction, restructure to fetch-then-write: complete all async work first, then open transaction, write synchronously, commit.\n\n### INV-3: No partial batch visibility\nIf cancellation fires after fetching N items but before batch transaction commits, zero items from that batch are persisted. Next sync resumes via cursor-based pagination.\n\n### INV-4: ShutdownSignal + region cancellation are complementary\nExisting ShutdownSignal check-before-write pattern in orchestrator loops (if signal.is_cancelled() { break; }) remains first line of defense. 
Region cancellation is second — ensures in-flight HTTP tasks are drained even if orchestrator loop moved past signal check. Both must remain active.\n\n## Audit Procedure\n1. Grep for all unchecked_transaction() and begin_transaction() calls in ingestion/ and cli/commands/sync/\n2. For each: verify no .await between open and commit\n3. For each: verify the entire batch write is inside a single transaction\n4. Document any violations and fix them (restructure to fetch-then-write)\n5. Verify WAL mode compatibility for INV-2\n\n## Relationship to Phase 4c\nPhase 4c (bd-1yky) writes integration tests that ENFORCE these invariants at runtime. This bead (5b) is the static code AUDIT that finds violations. If this audit finds issues, fix the code first, then Phase 4c tests confirm the fix holds under cancellation.\n\n## Files to Audit\n- src/ingestion/orchestrator.rs (primary target)\n- src/ingestion/surgical.rs (ingestion-level surgical logic)\n- src/ingestion/issues.rs, merge_requests.rs, discussions.rs, mr_discussions.rs (batch write paths)\n- src/ingestion/storage/payloads.rs, events.rs, queue.rs, sync_run.rs (persistence helpers, moved from core/)\n- No new test files (tests are in Phase 4c)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:42:21.137834Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:54:19.939206Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-5"],"dependencies":[{"issue_id":"bd-26km","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:42:21.141037Z","created_by":"tayloreernisse"},{"issue_id":"bd-26km","depends_on_id":"bd-zgke","type":"blocks","created_at":"2026-03-06T18:42:58.930966Z","created_by":"tayloreernisse"}]} +{"id":"bd-26km","title":"Phase 5b: DB transaction invariant audit","description":"## What\nAudit all ingestion paths for transaction boundary correctness under cancellation. Verify 4 invariants. 
This is a CODE AUDIT — the corresponding tests are written in Phase 4c (bd-1yky). This bead identifies violations; 4c tests enforce them.\n\n## Rearchitecture Context (2026-03-06)\nKey files for this audit:\n- src/ingestion/orchestrator.rs — main orchestration (unchanged location)\n- src/ingestion/surgical.rs — ingestion-level surgical sync logic (unchanged location)\n- src/cli/commands/sync/surgical.rs — CLI command wrapper (was cli/commands/sync_surgical.rs)\n- src/ingestion/storage/ — persistence helpers moved here from core/ (payloads.rs, events.rs, queue.rs, sync_run.rs)\n\n## Invariants\n\n### INV-1: Atomic batch writes\nEach ingestion batch (issues, MRs, discussions) writes to DB inside a single unchecked_transaction(). If not committed, no partial data visible. Audit all paths — fix any that write outside a transaction.\n\n### INV-2: Region cancellation cannot corrupt committed data\nCancelled region may abandon in-flight HTTP but must not interrupt DB transaction mid-write. This holds because SQLite transactions are synchronous — once tx.execute() starts, it runs to completion regardless of task cancellation. VERIFY this holds for WAL mode.\n\n### HARD RULE: No .await between transaction open and commit/rollback\nCancellation can fire at any .await point. If .await exists between unchecked_transaction() and tx.commit(), cancelled region could drop transaction guard mid-batch, rolling back partial writes silently.\n\nIf any path must do async work mid-transaction, restructure to fetch-then-write: complete all async work first, then open transaction, write synchronously, commit.\n\n### INV-3: No partial batch visibility\nIf cancellation fires after fetching N items but before batch transaction commits, zero items from that batch are persisted. 
Next sync resumes via cursor-based pagination.\n\n### INV-4: ShutdownSignal + region cancellation are complementary\nExisting ShutdownSignal check-before-write pattern in orchestrator loops (if signal.is_cancelled() { break; }) remains first line of defense. Region cancellation is second — ensures in-flight HTTP tasks are drained even if orchestrator loop moved past signal check. Both must remain active.\n\n## Audit Procedure\n1. Grep for all unchecked_transaction() and begin_transaction() calls in ingestion/ and cli/commands/sync/\n2. For each: verify no .await between open and commit\n3. For each: verify the entire batch write is inside a single transaction\n4. Document any violations and fix them (restructure to fetch-then-write)\n5. Verify WAL mode compatibility for INV-2\n\n## Relationship to Phase 4c\nPhase 4c (bd-1yky) writes integration tests that ENFORCE these invariants at runtime. This bead (5b) is the static code AUDIT that finds violations. If this audit finds issues, fix the code first, then Phase 4c tests confirm the fix holds under cancellation.\n\n## Files to Audit\n- src/ingestion/orchestrator.rs (primary target)\n- src/ingestion/surgical.rs (ingestion-level surgical logic)\n- src/ingestion/issues.rs, merge_requests.rs, discussions.rs, mr_discussions.rs (batch write paths)\n- src/ingestion/storage/payloads.rs, events.rs, queue.rs, sync_run.rs (persistence helpers, moved from core/)\n- No new test files (tests are in Phase 4c)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:42:21.137834Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.682092Z","closed_at":"2026-03-06T21:11:12.682048Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration 
branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-5"],"dependencies":[{"issue_id":"bd-26km","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:42:21.141037Z","created_by":"tayloreernisse"},{"issue_id":"bd-26km","depends_on_id":"bd-zgke","type":"blocks","created_at":"2026-03-06T18:42:58.930966Z","created_by":"tayloreernisse"}]} {"id":"bd-26lp","title":"Implement CLI integration (lore tui command + binary delegation)","description":"## Background\nThe lore CLI binary needs a tui subcommand that launches the lore-tui binary. This is runtime binary delegation — lore finds lore-tui via PATH lookup and execs it, passing through relevant flags. Zero compile-time dependency from lore to lore-tui. The TUI is the human interface; the CLI is the robot/script interface.\n\n## Approach\nAdd a tui subcommand to the lore CLI:\n\n**CLI side** (`src/cli/tui.rs`):\n- Add `Tui` variant to the main CLI enum with flags: --config, --sync, --fresh, --render-mode, --ascii, --no-alt-screen\n- Implementation: resolve lore-tui binary via PATH lookup (std::process::Command with \"lore-tui\")\n- Pass through all flags as CLI arguments\n- If lore-tui not found in PATH, print helpful error: \"lore-tui binary not found. 
Install with: cargo install --path crates/lore-tui\"\n- Exec (not spawn+wait) using std::os::unix::process::CommandExt::exec() for clean process replacement on Unix\n\n**Binary naming**: The binary is `lore-tui` (hyphenated), matching the crate name.\n\n## Acceptance Criteria\n- [ ] lore tui launches lore-tui binary from PATH\n- [ ] All flags (--config, --sync, --fresh, --render-mode, --ascii, --no-alt-screen) are passed through\n- [ ] Missing binary produces helpful error with install instructions\n- [ ] Uses exec() on Unix for clean process replacement (no zombie parent)\n- [ ] Robot mode: lore --robot tui returns JSON error if binary not found\n- [ ] lore tui --help shows TUI-specific flags\n\n## Files\n- CREATE: src/cli/tui.rs\n- MODIFY: src/cli/mod.rs (add tui subcommand to CLI enum)\n- MODIFY: src/main.rs (add match arm for Tui variant)\n\n## TDD Anchor\nRED: Write `test_tui_binary_not_found_error` that asserts the error message includes install instructions when lore-tui is not in PATH.\nGREEN: Implement the binary lookup and error handling.\nVERIFY: cargo test tui_binary -- --nocapture\n\nAdditional tests:\n- test_tui_flag_passthrough (verify all flags are forwarded)\n- test_tui_robot_mode_json_error (structured error when binary missing)\n\n## Edge Cases\n- lore-tui binary exists but is not executable — should produce clear error\n- PATH contains multiple lore-tui versions — uses first match (standard PATH behavior)\n- Windows: exec() not available — fall back to spawn+wait+exit with same code\n- User runs lore tui in robot mode — should fail with structured JSON error (TUI is human-only)\n\n## Dependency Context\nDepends on bd-2iqk (Doctor + Stats screens) for phase ordering. 
The CLI integration is one of the last Phase 4 tasks because it requires lore-tui to be substantially complete for the delegation to be useful.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T17:02:39.602970Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:34.449333Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-26lp","depends_on_id":"bd-1df9","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-26lp","depends_on_id":"bd-2iqk","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-2711","title":"WHO: Reviews mode query (query_reviews)","description":"## Background\n\nReviews mode answers \"What review patterns does person X have?\" by analyzing the **prefix** convention in DiffNote bodies (e.g., **suggestion**: ..., **question**: ..., **nit**: ...). Only counts DiffNotes on MRs the user did NOT author (m.author_username != ?1).\n\n## Approach\n\n### Three queries:\n1. **Total DiffNotes**: COUNT(*) of DiffNotes by user on others' MRs\n2. **Distinct MRs reviewed**: COUNT(DISTINCT m.id) \n3. 
**Category extraction**: SQL-level prefix parsing + Rust normalization\n\n### Category extraction SQL:\n```sql\nSELECT\n  SUBSTR(ltrim(n.body), 3, INSTR(SUBSTR(ltrim(n.body), 3), '**') - 1) AS raw_prefix,\n  COUNT(*) AS cnt\nFROM notes n\nJOIN discussions d ON n.discussion_id = d.id\nJOIN merge_requests m ON d.merge_request_id = m.id\nWHERE n.author_username = ?1\n  AND n.note_type = 'DiffNote' AND n.is_system = 0\n  AND m.author_username != ?1\n  AND ltrim(n.body) LIKE '**%**%' -- only bodies with **prefix** pattern\n  AND n.created_at >= ?2\n  AND (?3 IS NULL OR n.project_id = ?3)\nGROUP BY raw_prefix ORDER BY cnt DESC\n```\n\nKey: `ltrim(n.body)` tolerates leading whitespace before **prefix** (common in practice).\n\n### normalize_review_prefix() in Rust:\n```rust\nfn normalize_review_prefix(raw: &str) -> String {\n    let s = raw.trim().trim_end_matches(':').trim().to_lowercase();\n    // Strip parentheticals like \"(non-blocking)\"\n    let s = if let Some(idx) = s.find('(') { s[..idx].trim().to_string() } else { s };\n    // Merge nit/nitpick variants\n    match s.as_str() {\n        \"nitpick\" | \"nit\" => \"nit\".to_string(),\n        other => other.to_string(),\n    }\n}\n```\n\n### HashMap merge for normalized categories, then sort by count DESC\n\n### ReviewsResult struct:\n```rust\npub struct ReviewsResult {\n    pub username: String,\n    pub total_diffnotes: u32,\n    pub categorized_count: u32,\n    pub mrs_reviewed: u32,\n    pub categories: Vec<ReviewCategory>,\n}\npub struct ReviewCategory { pub name: String, pub count: u32, pub percentage: f64 }\n```\n\nNo LIMIT needed — categories are naturally bounded (few distinct prefixes).\n\n## Files\n\n- `src/cli/commands/who.rs`\n\n## TDD Loop\n\nRED:\n```\ntest_reviews_query — insert 3 DiffNotes (2 with **prefix**, 1 without); verify total=3, categorized=2, categories.len()=2\ntest_normalize_review_prefix — \"suggestion\" \"Suggestion:\" \"suggestion (non-blocking):\" \"Nitpick:\" \"nit (non-blocking):\" \"question\" \"TODO:\"\n```\n\nGREEN: Implement query_reviews + 
normalize_review_prefix\nVERIFY: `cargo test -- reviews`\n\n## Acceptance Criteria\n\n- [ ] test_reviews_query passes (total=3, categorized=2)\n- [ ] test_normalize_review_prefix passes (nit/nitpick merge, parenthetical strip)\n- [ ] Only counts DiffNotes on MRs user did NOT author\n- [ ] Default since window: 6m\n\n## Edge Cases\n\n- Self-authored MRs excluded (m.author_username != ?1) — user's notes on own MRs are not \"reviews\"\n- ltrim() handles leading whitespace before **prefix**\n- Empty raw_prefix after normalization filtered out (!normalized.is_empty())\n- Percentage calculated from categorized_count (not total_diffnotes)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:40:53.350210Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.599252Z","closed_at":"2026-02-08T04:10:29.599217Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. All quality gates pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-2711","depends_on_id":"bd-2ldg","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2711","depends_on_id":"bd-34rr","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-2815","title":"Phase 5a: HTTP behavior parity verification","description":"## What\nVerify HTTP behavior parity between reqwest (old) and asupersync h1 (new) for all implicit behaviors reqwest provided.\n\n## Acceptance Criteria Table\n\n| reqwest default | Criterion | Test |\n|-----------------|-----------|------|\n| Auto redirect (up to 10) | If GitLab returns 3xx, must not silently lose response. Follow or surface clear error. | wiremock 301 -> verify adapter returns redirect status |\n| Auto gzip/deflate | Not required (JSON small) | N/A |\n| Proxy from HTTP_PROXY | If set, requests route through it. If unsupported, document. 
| Set HTTP_PROXY=127.0.0.1:9999 -> verify connection targets proxy |\n| Connection keep-alive | Pagination batches (4-8 sequential) must reuse connections | Measure with ss/netstat: 8 requests should use <=2 TCP connections |\n| System DNS | Hostnames must resolve via OS resolver | lore sync against hostname (not IP) |\n| Content-Length on POST | Must include header (some proxies/WAFs require) | Inspect outgoing headers in wiremock test |\n| TLS cert validation | HTTPS must validate certs using system CA store | lore sync against production GitLab + fail against self-signed |\n\n## Implementation\nWrite specific test for each row. Document pass/fail results. Any failure is a blocker that must be resolved before migration is complete.\n\n## Files Changed\n- tests/http_parity_tests.rs (NEW, ~200 LOC)\n- Or extend existing adapter integration tests\n\n## Depends On\n- Phase 2a, 2b, 2c (all HTTP modules migrated)\n- Phase 3a (asupersync in deps)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:41:59.839078Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:42:58.687388Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-5","testing"],"dependencies":[{"issue_id":"bd-2815","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:59.843026Z","created_by":"tayloreernisse"},{"issue_id":"bd-2815","depends_on_id":"bd-25rl","type":"blocks","created_at":"2026-03-06T18:42:58.332421Z","created_by":"tayloreernisse"},{"issue_id":"bd-2815","depends_on_id":"bd-2n82","type":"blocks","created_at":"2026-03-06T18:42:58.483829Z","created_by":"tayloreernisse"},{"issue_id":"bd-2815","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:58.687338Z","created_by":"tayloreernisse"},{"issue_id":"bd-2815","depends_on_id":"bd-lhj8","type":"blocks","created_at":"2026-03-06T18:42:58.191948Z","created_by":"tayloreernisse"}]} +{"id":"bd-2815","title":"Phase 5a: HTTP behavior parity 
verification","description":"## What\nVerify HTTP behavior parity between reqwest (old) and asupersync h1 (new) for all implicit behaviors reqwest provided.\n\n## Acceptance Criteria Table\n\n| reqwest default | Criterion | Test |\n|-----------------|-----------|------|\n| Auto redirect (up to 10) | If GitLab returns 3xx, must not silently lose response. Follow or surface clear error. | wiremock 301 -> verify adapter returns redirect status |\n| Auto gzip/deflate | Not required (JSON small) | N/A |\n| Proxy from HTTP_PROXY | If set, requests route through it. If unsupported, document. | Set HTTP_PROXY=127.0.0.1:9999 -> verify connection targets proxy |\n| Connection keep-alive | Pagination batches (4-8 sequential) must reuse connections | Measure with ss/netstat: 8 requests should use <=2 TCP connections |\n| System DNS | Hostnames must resolve via OS resolver | lore sync against hostname (not IP) |\n| Content-Length on POST | Must include header (some proxies/WAFs require) | Inspect outgoing headers in wiremock test |\n| TLS cert validation | HTTPS must validate certs using system CA store | lore sync against production GitLab + fail against self-signed |\n\n## Implementation\nWrite specific test for each row. Document pass/fail results. 
Any failure is a blocker that must be resolved before migration is complete.\n\n## Files Changed\n- tests/http_parity_tests.rs (NEW, ~200 LOC)\n- Or extend existing adapter integration tests\n\n## Depends On\n- Phase 2a, 2b, 2c (all HTTP modules migrated)\n- Phase 3a (asupersync in deps)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:41:59.839078Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.674081Z","closed_at":"2026-03-06T21:11:12.674033Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-5","testing"],"dependencies":[{"issue_id":"bd-2815","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:59.843026Z","created_by":"tayloreernisse"},{"issue_id":"bd-2815","depends_on_id":"bd-25rl","type":"blocks","created_at":"2026-03-06T18:42:58.332421Z","created_by":"tayloreernisse"},{"issue_id":"bd-2815","depends_on_id":"bd-2n82","type":"blocks","created_at":"2026-03-06T18:42:58.483829Z","created_by":"tayloreernisse"},{"issue_id":"bd-2815","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:58.687338Z","created_by":"tayloreernisse"},{"issue_id":"bd-2815","depends_on_id":"bd-lhj8","type":"blocks","created_at":"2026-03-06T18:42:58.191948Z","created_by":"tayloreernisse"}]} {"id":"bd-296a","title":"NOTE-1E: Composite query index and author_id column (migration 022)","description":"## Background\nThe notes table needs composite covering indexes for the new query_notes() function, plus the author_id column for immutable identity (NOTE-0D). Combined in a single migration to avoid an extra migration step. Migration slot 022 is available (021 = work_item_status, 023 = issue_detail_fields already exists).\n\n## Approach\nCreate migrations/022_notes_query_index.sql with:\n\n1. 
Composite index for author-scoped queries (most common pattern):\n CREATE INDEX IF NOT EXISTS idx_notes_user_created\n ON notes(project_id, author_username COLLATE NOCASE, created_at DESC, id DESC)\n WHERE is_system = 0;\n\n2. Composite index for project-scoped date-range queries:\n CREATE INDEX IF NOT EXISTS idx_notes_project_created\n ON notes(project_id, created_at DESC, id DESC)\n WHERE is_system = 0;\n\n3. Discussion JOIN indexes (check if they already exist first):\n CREATE INDEX IF NOT EXISTS idx_discussions_issue_id ON discussions(issue_id);\n CREATE INDEX IF NOT EXISTS idx_discussions_mr_id ON discussions(merge_request_id);\n\n4. Immutable author identity column (for NOTE-0D):\n ALTER TABLE notes ADD COLUMN author_id INTEGER;\n CREATE INDEX IF NOT EXISTS idx_notes_author_id ON notes(author_id) WHERE author_id IS NOT NULL;\n\nRegister in src/core/db.rs MIGRATIONS array as (\"022\", include_str!(\"../../migrations/022_notes_query_index.sql\")). Insert BEFORE the existing (\"023\", ...) entry. 
LATEST_SCHEMA_VERSION auto-increments via MIGRATIONS.len().\n\n## Files\n- CREATE: migrations/022_notes_query_index.sql\n- MODIFY: src/core/db.rs (add (\"022\", include_str!(...)) to MIGRATIONS array, insert at position before \"023\" entry around line 73)\n\n## TDD Anchor\nRED: test_migration_022_indexes_exist — run_migrations on in-memory DB, verify 4 new indexes exist in sqlite_master.\nGREEN: Create migration file with all CREATE INDEX statements.\nVERIFY: cargo test migration_022 -- --nocapture\n\n## Acceptance Criteria\n- [ ] Migration 022 creates idx_notes_user_created partial index\n- [ ] Migration 022 creates idx_notes_project_created partial index\n- [ ] Migration 022 creates idx_discussions_issue_id (or is no-op if exists)\n- [ ] Migration 022 creates idx_discussions_mr_id (or is no-op if exists)\n- [ ] Migration 022 adds author_id INTEGER column to notes\n- [ ] Migration 022 creates idx_notes_author_id partial index\n- [ ] MIGRATIONS array in db.rs includes (\"022\", ...) before (\"023\", ...)\n- [ ] Existing tests still pass with new migration\n- [ ] Test verifying all indexes exist passes\n\n## Edge Cases\n- Partial indexes exclude system notes (is_system = 0) — filters 30-50% of notes\n- COLLATE NOCASE on author_username matches the query's case-insensitive comparison\n- author_id is nullable (existing notes won't have it until re-synced)\n- IF NOT EXISTS on all CREATE INDEX statements makes migration idempotent","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T17:01:18.127989Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:15.435624Z","closed_at":"2026-02-12T18:13:15.435576Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["per-note","search"],"dependencies":[{"issue_id":"bd-296a","depends_on_id":"bd-jbfw","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-29qw","title":"Implement Timeline screen (state + action + 
view)","description":"## Background\nThe Timeline screen renders a chronological event stream from the 5-stage timeline pipeline (SEED -> HYDRATE -> EXPAND -> COLLECT -> RENDER). Events are color-coded by type and can be scoped to an entity, author, or time range.\n\n## Approach\nState (state/timeline.rs):\n- TimelineState: events (Vec), query (String), query_input (TextInput), query_focused (bool), selected_index (usize), scroll_offset (usize), scope (TimelineScope)\n- TimelineScope: All, Entity(EntityKey), Author(String), DateRange(DateTime, DateTime)\n\nAction (action.rs):\n- fetch_timeline(conn, scope, limit, clock) -> Vec: runs the timeline pipeline against DB\n\nView (view/timeline.rs):\n- Vertical event stream with timestamp gutter on the left\n- Color-coded event types: Created(green), Updated(yellow), Closed(red), Merged(purple), Commented(blue), Labeled(cyan), Milestoned(orange)\n- Each event: timestamp | entity ref | event description\n- Entity refs navigable via Enter\n- Query bar for filtering by text or entity\n- Keyboard: j/k scroll, Enter navigate to entity, / focus query, g+g top\n\n## Acceptance Criteria\n- [ ] Timeline renders chronological event stream\n- [ ] Events color-coded by type\n- [ ] Entity references navigable\n- [ ] Scope filters: all, per-entity, per-author, date range\n- [ ] Query bar filters events\n- [ ] Keyboard navigation works (j/k/Enter/Esc)\n- [ ] Timestamps use injected Clock\n\n## Files\n- MODIFY: crates/lore-tui/src/state/timeline.rs (expand from stub)\n- MODIFY: crates/lore-tui/src/action.rs (add fetch_timeline)\n- CREATE: crates/lore-tui/src/view/timeline.rs\n\n## TDD Anchor\nRED: Write test_fetch_timeline_scoped that creates issues with events, calls fetch_timeline with Entity scope, asserts only that entity's events returned.\nGREEN: Implement fetch_timeline with scope filtering.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml test_fetch_timeline\n\n## Edge Cases\n- Timeline pipeline may not be fully 
implemented in core yet — degrade gracefully if SEED/HYDRATE/EXPAND stages are not available, fall back to raw events\n- Very long timelines: VirtualizedList or lazy loading for performance\n- Events with identical timestamps: stable sort by entity type, then iid\n\n## Dependency Context\nUses timeline pipeline types from src/core/timeline.rs if available.\nUses Clock for timestamp rendering from \"Implement Clock trait\" task.\nUses EntityKey navigation from \"Implement core types\" task.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T17:01:05.605968Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:33.993830Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-29qw","depends_on_id":"bd-1zow","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-29qw","depends_on_id":"bd-nwux","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-2ac","title":"Create migration 009_embeddings.sql","description":"## Background\nMigration 009 creates the embedding storage layer for Gate B. It introduces a sqlite-vec vec0 virtual table for vector search and an embedding_metadata table for tracking provenance per chunk. Unlike migrations 007-008, this migration REQUIRES sqlite-vec to be loaded before it can be applied. The migration runner in db.rs must load the sqlite-vec extension first.\n\n## Approach\nCreate `migrations/009_embeddings.sql` per PRD Section 1.3.\n\n**Tables:**\n1. `embeddings` — vec0 virtual table with `embedding float[768]`\n2. `embedding_metadata` — tracks per-chunk provenance with composite PK (document_id, chunk_index)\n3. Orphan cleanup trigger: `documents_embeddings_ad` — deletes ALL chunk embeddings when a document is deleted using range deletion `[doc_id * 1000, (doc_id + 1) * 1000)`\n\n**Critical: sqlite-vec loading:**\nThe migration runner in `src/core/db.rs` must load sqlite-vec BEFORE applying any migrations. 
This means adding extension loading to the `create_connection()` or `run_migrations()` function. sqlite-vec is loaded via:\n```rust\nconn.load_extension_enable()?;\nconn.load_extension(\"vec0\", None)?; // or platform-specific path\nconn.load_extension_disable()?;\n```\n\nRegister migration 9 in `src/core/db.rs` MIGRATIONS array.\n\n## Acceptance Criteria\n- [ ] `migrations/009_embeddings.sql` file exists\n- [ ] `embeddings` vec0 virtual table created with `embedding float[768]`\n- [ ] `embedding_metadata` table has composite PK (document_id, chunk_index)\n- [ ] `embedding_metadata.document_id` has FK to documents(id) ON DELETE CASCADE\n- [ ] Error tracking fields: last_error, attempt_count, last_attempt_at\n- [ ] Orphan cleanup trigger: deletes embeddings WHERE rowid in [doc_id*1000, (doc_id+1)*1000)\n- [ ] Index on embedding_metadata(last_error) WHERE last_error IS NOT NULL\n- [ ] Index on embedding_metadata(document_id)\n- [ ] Schema version 9 recorded\n- [ ] Migration runner loads sqlite-vec before applying migrations\n- [ ] `cargo build` succeeds\n\n## Files\n- `migrations/009_embeddings.sql` — new file (copy exact SQL from PRD Section 1.3)\n- `src/core/db.rs` — add migration 9 to MIGRATIONS array; add sqlite-vec extension loading\n\n## TDD Loop\nRED: Register migration in db.rs, `cargo test migration_tests` fails\nGREEN: Create SQL file + add extension loading\nVERIFY: `cargo test migration_tests && cargo build`\n\n## Edge Cases\n- sqlite-vec not installed: migration fails with clear error (not a silent skip)\n- Migration applied without sqlite-vec loaded: `CREATE VIRTUAL TABLE` fails with \"no such module: vec0\"\n- Documents deleted before embeddings: trigger fires but vec0 DELETE on empty range is safe\n- vec0 doesn't support FK cascades: that's why we need the explicit 
trigger","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:33.958178Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:22:26.478290Z","closed_at":"2026-01-30T17:22:26.478229Z","close_reason":"Completed: migration 009_embeddings.sql with vec0 table, embedding_metadata with composite PK, orphan cleanup trigger, registered in db.rs","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-2ac","depends_on_id":"bd-221","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -173,7 +173,7 @@ {"id":"bd-2ms","title":"[CP1] Unit tests for transformers","description":"Comprehensive unit tests for issue and discussion transformers.\n\n## Issue Transformer Tests (tests/issue_transformer_tests.rs)\n\n- transforms_gitlab_issue_to_normalized_schema\n- extracts_labels_from_issue_payload\n- handles_missing_optional_fields_gracefully\n- converts_iso_timestamps_to_ms_epoch\n- sets_last_seen_at_to_current_time\n\n## Discussion Transformer Tests (tests/discussion_transformer_tests.rs)\n\n- transforms_discussion_payload_to_normalized_schema\n- extracts_notes_array_from_discussion\n- sets_individual_note_flag_correctly\n- flags_system_notes_with_is_system_true\n- preserves_note_order_via_position_field\n- computes_first_note_at_and_last_note_at_correctly\n- computes_resolvable_and_resolved_status\n\n## Test Setup\n- Load from test fixtures\n- Use serde_json for deserialization\n- Compare against expected NormalizedX structs\n\nFiles: tests/issue_transformer_tests.rs, tests/discussion_transformer_tests.rs\nDone when: All transformer unit tests pass","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T16:59:04.165187Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:02.015847Z","closed_at":"2026-01-25T17:02:02.015847Z","deleted_at":"2026-01-25T17:02:02.015841Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct 
deps","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-2mz","title":"Epic: Gate A - Lexical MVP","description":"## Background\nGate A delivers the lexical search MVP — the foundation that works without sqlite-vec or Ollama. It introduces the document layer (documents, document_labels, document_paths), FTS5 indexing, search filters, and the search + stats + generate-docs CLI commands. Gate A is independently shippable — users get working search with FTS5 only.\n\n## Gate A Deliverables\n1. Document generation from issues/MRs/discussions with FTS5 indexing\n2. Lexical search + filters + snippets + lore stats\n\n## Bead Dependencies (execution order)\n1. **bd-3lc** — Rename GiError to LoreError (no deps, enables all subsequent work)\n2. **bd-hrs** — Migration 007 (blocked by bd-3lc)\n3. **bd-221** — Migration 008 FTS5 (blocked by bd-hrs)\n4. **bd-36p** — Document types + extractor module (blocked by bd-3lc)\n5. **bd-18t** — Truncation logic (blocked by bd-36p)\n6. **bd-247** — Issue extraction (blocked by bd-36p, bd-hrs)\n7. **bd-1yz** — MR extraction (blocked by bd-36p, bd-hrs)\n8. **bd-2fp** — Discussion extraction (blocked by bd-36p, bd-hrs, bd-18t)\n9. **bd-1u1** — Document regenerator (blocked by bd-36p, bd-38q, bd-hrs)\n10. **bd-1k1** — FTS5 search (blocked by bd-221)\n11. **bd-3q2** — Search filters (blocked by bd-36p)\n12. **bd-3lu** — Search CLI (blocked by bd-1k1, bd-3q2, bd-36p)\n13. **bd-3qs** — Generate-docs CLI (blocked by bd-1u1, bd-3lu)\n14. **bd-pr1** — Stats CLI (blocked by bd-hrs)\n15. 
**bd-2dk** — Project resolution (blocked by bd-3lc)\n\n## Acceptance Criteria\n- [ ] `lore search \"query\"` returns FTS5 results with snippets\n- [ ] `lore search --type issue --label bug \"query\"` filters correctly\n- [ ] `lore generate-docs` creates documents from all entities\n- [ ] `lore generate-docs --full` regenerates everything\n- [ ] `lore stats` shows document/FTS/queue counts\n- [ ] `lore stats --check` verifies FTS consistency\n- [ ] No sqlite-vec dependency in Gate A","status":"closed","priority":1,"issue_type":"task","created_at":"2026-01-30T15:25:09.721108Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:54:44.243610Z","closed_at":"2026-01-30T17:54:44.243562Z","close_reason":"All Gate A sub-beads complete. Lexical MVP delivered: document extraction (issue/MR/discussion), FTS5 indexing, search with filters/snippets/RRF, generate-docs CLI, stats CLI with integrity check/repair.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-2mz","depends_on_id":"bd-3lu","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2mz","depends_on_id":"bd-3qs","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2mz","depends_on_id":"bd-pr1","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-2n4","title":"Implement trace query: file -> MR -> issue -> discussion chain","description":"## Background\n\nThe trace query builds a chain from file path -> MRs -> issues -> discussions, combining data from mr_file_changes (Gate 4), entity_references (Gate 2), and the existing discussions/notes tables. 
This is the backend for the trace CLI command.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 5.4 (Query Flow Tier 1).\n\n## Codebase Context\n\n- entity_references table (migration 011): source_entity_type, source_entity_id, target_entity_type, target_entity_id, reference_type, source_method\n- mr_file_changes table (migration 016, bd-1oo): merge_request_id, project_id, old_path, new_path, change_type\n- discussions table: issue_id, merge_request_id\n- notes table: discussion_id, author_username, body, created_at, is_system, position_new_path (for DiffNotes)\n- merge_requests table: iid, title, state, author_username, web_url, merged_at, updated_at\n- issues table: iid, title, state, web_url\n- resolve_rename_chain() from bd-1yx (src/core/file_history.rs) provides multi-path matching\n- reference_type values: 'closes', 'mentioned', 'related'\n\n## Approach\n\nCreate `src/core/trace.rs`:\n\n```rust\nuse rusqlite::Connection;\nuse crate::core::file_history::resolve_rename_chain;\nuse crate::core::error::Result;\n\n#[derive(Debug, Clone, Serialize)]\npub struct TraceChain {\n pub merge_request: TraceMr,\n pub issues: Vec,\n pub discussions: Vec,\n}\n\n#[derive(Debug, Clone, Serialize)]\npub struct TraceMr {\n pub iid: i64,\n pub title: String,\n pub state: String,\n pub author_username: String,\n pub web_url: Option,\n pub merged_at: Option,\n pub merge_commit_sha: Option,\n pub file_change_type: String,\n}\n\n#[derive(Debug, Clone, Serialize)]\npub struct TraceIssue {\n pub iid: i64,\n pub title: String,\n pub state: String,\n pub web_url: Option,\n pub reference_type: String, // \"closes\", \"mentioned\", \"related\"\n}\n\n#[derive(Debug, Clone, Serialize)]\npub struct TraceDiscussion {\n pub author_username: String,\n pub body_snippet: String, // truncated to 500 chars\n pub created_at: i64,\n pub is_diff_note: bool, // true if position_new_path matched\n}\n\n#[derive(Debug, Clone, Serialize)]\npub struct TraceResult {\n pub path: String,\n 
pub resolved_paths: Vec,\n pub chains: Vec,\n}\n\npub fn run_trace(\n conn: &Connection,\n project_id: i64,\n path: &str,\n follow_renames: bool,\n include_discussions: bool,\n limit: usize,\n) -> Result {\n // 1. Resolve rename chain (unless !follow_renames)\n let paths = if follow_renames {\n resolve_rename_chain(conn, project_id, path, 10)?\n } else {\n vec![path.to_string()]\n };\n\n // 2. Find MRs via mr_file_changes for all resolved paths\n // Dynamic IN-clause for path set\n // 3. For each MR, find linked issues via entity_references\n // 4. If include_discussions, fetch DiffNote discussions on traced file\n // 5. Order chains by COALESCE(merged_at, updated_at) DESC, apply limit\n}\n```\n\n### SQL for step 2 (find MRs):\n\nBuild dynamic IN-clause placeholders for the resolved path set:\n```sql\nSELECT DISTINCT mr.id, mr.iid, mr.title, mr.state, mr.author_username,\n mr.web_url, mr.merged_at, mr.updated_at, mr.merge_commit_sha,\n mfc.change_type\nFROM mr_file_changes mfc\nJOIN merge_requests mr ON mr.id = mfc.merge_request_id\nWHERE mfc.project_id = ?1\n AND (mfc.new_path IN (...placeholders...) 
OR mfc.old_path IN (...placeholders...))\nORDER BY COALESCE(mr.merged_at, mr.updated_at) DESC\nLIMIT ?N\n```\n\n### SQL for step 3 (linked issues):\n```sql\nSELECT i.iid, i.title, i.state, i.web_url, er.reference_type\nFROM entity_references er\nJOIN issues i ON i.id = er.target_entity_id\nWHERE er.source_entity_type = 'merge_request'\n AND er.source_entity_id = ?1\n AND er.target_entity_type = 'issue'\n```\n\n### SQL for step 4 (DiffNote discussions):\n```sql\nSELECT n.author_username, n.body, n.created_at, n.position_new_path\nFROM notes n\nJOIN discussions d ON d.id = n.discussion_id\nWHERE d.merge_request_id = ?1\n AND n.position_new_path IN (...placeholders...)\n AND n.is_system = 0\nORDER BY n.created_at ASC\n```\n\nRegister in `src/core/mod.rs`: `pub mod trace;`\n\n## Acceptance Criteria\n\n- [ ] run_trace() returns chains ordered by COALESCE(merged_at, updated_at) DESC\n- [ ] Rename-aware: uses all paths from resolve_rename_chain\n- [ ] Issues linked via entity_references (closes, mentioned, related)\n- [ ] DiffNote discussions correctly filtered to traced file paths via position_new_path\n- [ ] Discussion body_snippet truncated to 500 chars\n- [ ] Empty result (file not in any MR) returns TraceResult with empty chains\n- [ ] Limit applies to number of chains (MRs), not total discussions\n- [ ] Module registered in src/core/mod.rs as `pub mod trace;`\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/core/trace.rs` (NEW)\n- `src/core/mod.rs` (add `pub mod trace;`)\n\n## TDD Loop\n\nRED:\n- `test_trace_empty_file` — unknown file returns empty chains\n- `test_trace_finds_mr` — file in mr_file_changes returns chain with correct MR\n- `test_trace_follows_renames` — renamed file finds historical MRs\n- `test_trace_links_issues` — MR with entity_references shows linked issues\n- `test_trace_limits_chains` — limit=1 returns at most 1 chain\n- `test_trace_no_follow_renames` — 
follow_renames=false only matches literal path\n\nTests need in-memory DB with migrations applied through 016 + test fixtures for mr_file_changes, entity_references, discussions, notes.\n\nGREEN: Implement SQL queries and chain assembly.\n\nVERIFY: `cargo test --lib -- trace`\n\n## Edge Cases\n\n- MR with no linked issues: chain has empty issues vec\n- Same issue linked from multiple MRs: appears in each chain independently\n- DiffNote on old_path (before rename): captured via resolved path set\n- include_discussions=false: skip DiffNote query for performance\n- Null merged_at: falls back to updated_at for ordering\n- Dynamic IN-clause: use rusqlite::params_from_iter for parameterized queries\n","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:34:32.738743Z","created_by":"tayloreernisse","updated_at":"2026-02-18T21:10:55.618405Z","closed_at":"2026-02-18T21:10:55.618337Z","close_reason":"Trace query backend implemented","compaction_level":0,"original_size":0,"labels":["gate-5","phase-b","query"],"dependencies":[{"issue_id":"bd-2n4","depends_on_id":"bd-1ht","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2n4","depends_on_id":"bd-3ia","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2n4","depends_on_id":"bd-z94","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-2n82","title":"Phase 2c: Migrate embedding/ollama.rs to HTTP adapter","description":"## What\nReplace all reqwest usage in embedding/ollama.rs with the crate::http adapter.\n\n## Why\nOllama module handles health checks (GET) and embedding requests (POST JSON). 
~20 LOC changed.\n\n## Changes\n\n### Health check\n```rust\n// Before: reqwest::get(url).await?.json::()?\n// After:\nlet response = self.client.get(&url, &[]).await?;\nlet tags: TagsResponse = response.json()?;\n```\n\n### Embed batch\n```rust\n// Before: self.client.post(&url).json(&request).send().await?\n// After:\nlet response = self.client.post_json(&url, &[], &request).await?;\nif !response.is_success() {\n let status = response.status;\n let body = response.text()?;\n return Err(LoreError::EmbeddingFailed { document_id: 0, reason: format!(\"HTTP {status}: {body}\") });\n}\nlet embed_response: EmbedResponse = response.json()?;\n```\n\n### Standalone health check (check_ollama_health)\nCurrently creates a temporary reqwest::Client. Replace:\n```rust\npub async fn check_ollama_health(base_url: &str) -> bool {\n let client = Client::with_timeout(Duration::from_secs(5));\n let url = format!(\"{base_url}/api/tags\");\n client.get(&url, &[]).await.map_or(false, |r| r.is_success())\n}\n```\n\n## Files Changed\n- src/embedding/ollama.rs (~20 LOC changed)\n\n## Testing\n- All 4 pipeline_tests.rs tests must pass (wiremock, still on tokio)\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n\n## Depends On\n- Phase 1 (adapter layer must exist)","notes":"WIREMOCK COMPATIBILITY: Same risk as Phase 2a (bd-lhj8). After migration, 4 pipeline_tests run on #[tokio::test] but call code using asupersync HTTP. 
See Phase 2a description for resolution options.","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:40:06.280100Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:49:51.651003Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-2"],"dependencies":[{"issue_id":"bd-2n82","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:06.303504Z","created_by":"tayloreernisse"},{"issue_id":"bd-2n82","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:50.928621Z","created_by":"tayloreernisse"}]} +{"id":"bd-2n82","title":"Phase 2c: Migrate embedding/ollama.rs to HTTP adapter","description":"## What\nReplace all reqwest usage in embedding/ollama.rs with the crate::http adapter.\n\n## Why\nOllama module handles health checks (GET) and embedding requests (POST JSON). ~20 LOC changed.\n\n## Changes\n\n### Health check\n```rust\n// Before: reqwest::get(url).await?.json::()?\n// After:\nlet response = self.client.get(&url, &[]).await?;\nlet tags: TagsResponse = response.json()?;\n```\n\n### Embed batch\n```rust\n// Before: self.client.post(&url).json(&request).send().await?\n// After:\nlet response = self.client.post_json(&url, &[], &request).await?;\nif !response.is_success() {\n let status = response.status;\n let body = response.text()?;\n return Err(LoreError::EmbeddingFailed { document_id: 0, reason: format!(\"HTTP {status}: {body}\") });\n}\nlet embed_response: EmbedResponse = response.json()?;\n```\n\n### Standalone health check (check_ollama_health)\nCurrently creates a temporary reqwest::Client. 
Replace:\n```rust\npub async fn check_ollama_health(base_url: &str) -> bool {\n let client = Client::with_timeout(Duration::from_secs(5));\n let url = format!(\"{base_url}/api/tags\");\n client.get(&url, &[]).await.map_or(false, |r| r.is_success())\n}\n```\n\n## Files Changed\n- src/embedding/ollama.rs (~20 LOC changed)\n\n## Testing\n- All 4 pipeline_tests.rs tests must pass (wiremock, still on tokio)\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n\n## Depends On\n- Phase 1 (adapter layer must exist)","notes":"WIREMOCK COMPATIBILITY: Same risk as Phase 2a (bd-lhj8). After migration, 4 pipeline_tests run on #[tokio::test] but call code using asupersync HTTP. See Phase 2a description for resolution options.","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:40:06.280100Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.591090Z","closed_at":"2026-03-06T21:11:12.589370Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-2"],"dependencies":[{"issue_id":"bd-2n82","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:06.303504Z","created_by":"tayloreernisse"},{"issue_id":"bd-2n82","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:50.928621Z","created_by":"tayloreernisse"}]} {"id":"bd-2nb","title":"[CP1] Issue ingestion module","description":"Fetch and store issues with cursor-based incremental sync.\n\nImplement ingestIssues(options) → { fetched, upserted, labelsCreated }\n\nLogic:\n1. Get current cursor from sync_cursors\n2. Paginate through issues updated after cursor\n3. Apply local filtering for tuple cursor semantics\n4. For each issue:\n - Store raw payload (compressed)\n - Upsert issue record\n - Extract and upsert labels\n - Link issue to labels via junction\n5. 
Update cursor after each page commit\n\nFiles: src/ingestion/issues.ts\nTests: tests/integration/issue-ingestion.test.ts\nDone when: Issues, labels, issue_labels populated correctly with resumable cursor","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-01-25T15:19:50.701180Z","created_by":"tayloreernisse","updated_at":"2026-01-25T15:21:35.154318Z","closed_at":"2026-01-25T15:21:35.154318Z","deleted_at":"2026-01-25T15:21:35.154316Z","deleted_by":"tayloreernisse","delete_reason":"delete","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-2nfs","title":"Implement snapshot test infrastructure + terminal compat matrix","description":"## Background\nSnapshot tests ensure deterministic rendering using FakeClock and ftui's test backend. They capture rendered TUI output as styled text and compare against golden files, catching visual regressions without a real terminal. The terminal compatibility matrix is a separate documentation artifact, not an automated test.\n\n## Approach\n\n### Snapshot Infrastructure\n\n**Test Backend**: Use `ftui_harness::TestBackend` (or equivalent from ftui-harness crate) which captures rendered output as a Buffer without needing a real terminal. 
If ftui-harness is not available, create a minimal TestBackend that implements ftui's backend trait and stores cells in a `Vec>`.\n\n**Deterministic Rendering**:\n- Inject FakeClock (from bd-2lg6) to freeze all relative time computations (\"2 hours ago\" always renders the same)\n- Fix terminal size to 120x40 for all snapshot tests\n- Use synthetic DB fixture with known data (same fixture pattern as parity tests)\n\n**Snapshot Capture Flow**:\n```rust\nfn capture_snapshot(app: &LoreApp, size: (u16, u16)) -> String {\n let backend = TestBackend::new(size.0, size.1);\n // Render app.view() to backend\n // Convert buffer cells to plain text with ANSI annotations\n // Return as String\n}\n```\n\n**Golden File Management**:\n- Golden files stored in `crates/lore-tui/tests/snapshots/` as `.snap` files\n- Naming: `{test_name}.snap` (e.g., `dashboard_default.snap`)\n- Update mode: set env var `UPDATE_SNAPSHOTS=1` to overwrite golden files instead of comparing\n- Use `insta` crate (or manual file comparison) for snapshot assertion\n\n**Fixture Data** (synthetic, deterministic):\n- 50 issues (mix of opened/closed/locked states, various labels)\n- 25 MRs (mix of opened/merged/closed/draft)\n- 100 discussions with notes\n- Known timestamps relative to FakeClock's frozen time\n\n### Snapshot Tests\n\nEach test:\n1. Creates in-memory DB with fixture data\n2. Creates LoreApp with FakeClock frozen at 2026-01-15T12:00:00Z\n3. Sets initial screen state\n4. Renders via TestBackend at 120x40\n5. 
Compares output against golden file\n\nTests to implement:\n- `test_dashboard_snapshot`: Dashboard screen with fixture counts and recent activity\n- `test_issue_list_snapshot`: Issue list with default sort, showing state badges and relative times\n- `test_issue_detail_snapshot`: Single issue detail with description and discussion thread\n- `test_mr_list_snapshot`: MR list showing draft indicators and review status\n- `test_search_results_snapshot`: Search results with highlighted matches\n- `test_empty_state_snapshot`: Dashboard with empty DB (zero issues/MRs)\n\n### Terminal Compatibility Matrix (Documentation)\n\nThis is a manual verification checklist, NOT an automated test. Document results in `crates/lore-tui/TERMINAL_COMPAT.md`:\n\n| Feature | iTerm2 | tmux | Alacritty | kitty |\n|---------|--------|------|-----------|-------|\n| True color (RGB) | | | | |\n| Unicode width (CJK) | | | | |\n| Box-drawing chars | | | | |\n| Bold/italic/underline | | | | |\n| Mouse events | | | | |\n| Resize handling | | | | |\n| Alt screen | | | | |\n\nFill in during manual QA, not during automated test implementation.\n\n## Acceptance Criteria\n- [ ] At least 6 snapshot tests pass with golden files committed to repo\n- [ ] All snapshots use FakeClock frozen at 2026-01-15T12:00:00Z\n- [ ] All snapshots render at fixed 120x40 terminal size\n- [ ] Dashboard snapshot matches golden file (deterministic)\n- [ ] Issue list snapshot matches golden file (deterministic)\n- [ ] Empty state snapshot matches golden file\n- [ ] UPDATE_SNAPSHOTS=1 env var overwrites golden files for updates\n- [ ] Golden files are plain text (diffable in version control)\n- [ ] TERMINAL_COMPAT.md template created (to be filled during manual QA)\n\n## Files\n- CREATE: crates/lore-tui/tests/snapshot_tests.rs\n- CREATE: crates/lore-tui/tests/snapshots/ (directory for golden files)\n- CREATE: crates/lore-tui/tests/snapshots/dashboard_default.snap\n- CREATE: 
crates/lore-tui/tests/snapshots/issue_list_default.snap\n- CREATE: crates/lore-tui/tests/snapshots/issue_detail.snap\n- CREATE: crates/lore-tui/tests/snapshots/mr_list_default.snap\n- CREATE: crates/lore-tui/tests/snapshots/search_results.snap\n- CREATE: crates/lore-tui/tests/snapshots/empty_state.snap\n- CREATE: crates/lore-tui/TERMINAL_COMPAT.md (template)\n\n## TDD Anchor\nRED: Write `test_dashboard_snapshot` that creates LoreApp with FakeClock and fixture DB, renders Dashboard at 120x40, asserts output matches `snapshots/dashboard_default.snap`. Fails because golden file does not exist yet.\nGREEN: Render the Dashboard, run with UPDATE_SNAPSHOTS=1 to generate golden file, then run normally to verify match.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml snapshot\n\n## Edge Cases\n- Golden file encoding: always UTF-8, normalize line endings to LF\n- FakeClock must be injected into all components that compute relative time (e.g., \"2 hours ago\")\n- Snapshot diffs on CI: print a clear diff showing expected vs actual when mismatch occurs\n- Fixture data must NOT include non-deterministic values (random IDs, current timestamps)\n- If ftui-harness API changes, TestBackend shim may need updating\n\n## Dependency Context\n- Uses FakeClock from bd-2lg6 (Implement Clock trait)\n- Uses all screen views from Phase 2 (Dashboard, Issue List, MR List, Detail views)\n- Uses TestBackend from ftui-harness crate (or custom implementation)\n- Depends on bd-3h00 (session persistence) per phase ordering — screens must be complete before snapshotting\n- Downstream: bd-nu0d (fuzz tests) and bd-3fjk (race tests) depend on this 
infrastructure","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T17:03:54.220114Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:38.126586Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-2nfs","depends_on_id":"bd-1b6k","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2nfs","depends_on_id":"bd-3h00","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-2ni","title":"OBSERV Epic: Phase 2 - Spans + Correlation IDs","description":"Add tracing spans to all sync stages and generate UUID-based run_id for correlation. Every log line within a sync run includes run_id in JSON span context. Nested spans produce correct parent-child chains.\n\nDepends on: Phase 1 (subscriber must support span recording)\nUnblocks: Phase 3 (metrics), Phase 5 (rate limit logging)\n\nFiles: src/cli/commands/sync.rs, src/cli/commands/ingest.rs, src/ingestion/orchestrator.rs, src/documents/regenerator.rs, src/embedding/pipeline.rs, src/main.rs\n\nAcceptance criteria (PRD Section 6.2):\n- Every log line includes run_id in JSON span context\n- Nested spans produce chain: fetch_pages includes parent ingest_issues span\n- run_id is 8-char hex (truncated UUIDv4)\n- Spans visible in -vv stderr output","status":"closed","priority":1,"issue_type":"epic","created_at":"2026-02-04T15:53:08.935218Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:19:38.721297Z","closed_at":"2026-02-04T17:19:38.721241Z","close_reason":"Phase 2 complete: run_id correlation IDs generated at sync/ingest entry, root spans with .instrument() for async, #[instrument] on 5 key pipeline functions","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-2ni","depends_on_id":"bd-2nx","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -183,7 +183,7 @@ {"id":"bd-2o49","title":"Epic: TUI Phase 5.6 — CLI/TUI 
Parity Pack","description":"## Background\nPhase 5.6 ensures the TUI displays the same data as the CLI robot mode, preventing drift between interfaces. Tests compare TUI query results against CLI --robot output for counts, list data, detail data, and search results.\n\n## Acceptance Criteria\n- [ ] Dashboard counts match lore --robot count output\n- [ ] Issue/MR list data matches lore --robot issues/mrs output\n- [ ] Issue/MR detail data matches lore --robot issues/mrs output\n- [ ] Search results identity (same IDs, same order) matches lore --robot search output\n- [ ] Terminal safety sanitization applied consistently in TUI and CLI","status":"open","priority":1,"issue_type":"epic","created_at":"2026-02-12T17:05:36.087371Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:51.586917Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-2o49","depends_on_id":"bd-1b6k","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-2og9","title":"Implement entity cache + render cache","description":"## Background\nEntity cache provides near-instant detail view reopens during Enter/Esc drill workflows by caching IssueDetail/MrDetail payloads. Render cache prevents per-frame recomputation of expensive render artifacts (markdown to styled text, discussion tree shaping). 
Both use bounded LRU eviction with selective invalidation.\n\n## Approach\n\n### Entity Cache (entity_cache.rs)\n\n```rust\nuse std::collections::HashMap;\n\npub struct EntityCache {\n entries: HashMap, // value + last-access tick\n capacity: usize,\n tick: u64,\n}\n\nimpl EntityCache {\n pub fn new(capacity: usize) -> Self;\n pub fn get(&mut self, key: &EntityKey) -> Option<&V>; // updates tick\n pub fn put(&mut self, key: EntityKey, value: V); // evicts oldest if at capacity\n pub fn invalidate(&mut self, keys: &[EntityKey]); // selective by key set\n}\n```\n\n- `EntityKey` is `(EntityType, i64)` from core types (bd-c9gk) — e.g., `(EntityType::Issue, 42)`\n- Default capacity: 64 entries (sufficient for typical drill-in/out workflows)\n- LRU eviction: on `put()` when at capacity, find entry with lowest tick and remove it\n- `get()` bumps the access tick to keep recently-accessed entries alive\n- `invalidate()` takes a slice of changed keys (from sync results) and removes only those entries — NOT a blanket clear\n\n### Render Cache (render_cache.rs)\n\n```rust\npub struct RenderCacheKey {\n content_hash: u64, // FxHash of source content\n terminal_width: u16, // width affects line wrapping\n}\n\npub struct RenderCache {\n entries: HashMap,\n capacity: usize,\n}\n\nimpl RenderCache {\n pub fn new(capacity: usize) -> Self;\n pub fn get(&self, key: &RenderCacheKey) -> Option<&V>;\n pub fn put(&mut self, key: RenderCacheKey, value: V);\n pub fn invalidate_width(&mut self, keep_width: u16); // remove entries NOT matching this width\n pub fn invalidate_all(&mut self); // theme change = full clear\n}\n```\n\n- Default capacity: 256 entries\n- Used for: markdown->styled text, discussion tree layout, issue body rendering\n- `content_hash` uses `std::hash::Hasher` with FxHash (or std DefaultHasher) on source text\n- `invalidate_width(keep_width)`: on terminal resize, remove entries cached at old width\n- `invalidate_all()`: on theme change, clear everything (colors 
changed)\n- Both caches are NOT thread-safe (single-threaded TUI event loop). No Arc/Mutex needed.\n\n### Integration Point\nBoth caches live as fields on the main LoreApp struct. Cache miss falls through to normal DB query transparently — the action functions check cache first, query DB on miss, populate cache on return.\n\n## Acceptance Criteria\n- [ ] EntityCache::get returns Some for recently put items\n- [ ] EntityCache::put evicts the least-recently-accessed entry when at capacity\n- [ ] EntityCache::invalidate removes only the specified keys, leaves others intact\n- [ ] EntityCache capacity defaults to 64\n- [ ] RenderCache::get returns Some for matching (hash, width) pair\n- [ ] RenderCache::invalidate_width removes entries with non-matching width\n- [ ] RenderCache::invalidate_all clears everything\n- [ ] RenderCache capacity defaults to 256\n- [ ] Both caches are Send (no Rc, no raw pointers) but NOT required to be Sync\n- [ ] No unsafe code\n\n## Files\n- CREATE: crates/lore-tui/src/entity_cache.rs\n- CREATE: crates/lore-tui/src/render_cache.rs\n- MODIFY: crates/lore-tui/src/lib.rs (add `pub mod entity_cache; pub mod render_cache;`)\n\n## TDD Anchor\nRED: Write `test_entity_cache_lru_eviction` that creates EntityCache with capacity 3, puts 4 items, asserts first item (lowest tick) is evicted and the other 3 remain.\nGREEN: Implement LRU eviction using tick-based tracking.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml entity_cache\n\nAdditional tests:\n- test_entity_cache_get_bumps_tick (accessed item survives eviction over older untouched items)\n- test_entity_cache_invalidate_selective (removes only specified keys)\n- test_entity_cache_invalidate_nonexistent_key (no panic)\n- test_render_cache_width_invalidation (entries at old width removed, current width kept)\n- test_render_cache_invalidate_all (empty after call)\n- test_render_cache_capacity_eviction\n\n## Edge Cases\n- Invalidating an EntityKey not in the cache is a no-op (no 
panic)\n- Zero-capacity cache: all gets return None, all puts are no-ops (degenerate but safe)\n- RenderCacheKey equality: two different strings can have the same hash (collision) — accept this; worst case is a wrong cached render that gets corrected on next invalidation\n- Entity cache should NOT be prewarmed synchronously during sync — sync results just invalidate stale entries, and the next view() call repopulates on demand\n\n## Dependency Context\nDepends on bd-c9gk (core types) for EntityKey type definition.\nBoth caches are integrated into LoreApp (bd-6pmy) as struct fields.\nAction functions (from Phase 2/3 screen beads) check cache before querying DB.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T17:03:25.520201Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:34.626204Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-2og9","depends_on_id":"bd-1df9","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2og9","depends_on_id":"bd-c9gk","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-2px","title":"[CP1] Epic: Issue Ingestion","description":"Ingest all issues, labels, and issue discussions from configured GitLab repositories with resumable cursor-based incremental sync. 
This establishes the core data ingestion pattern reused for MRs in CP2.\n\n## Success Criteria\n- gi ingest --type=issues fetches all issues (count matches GitLab UI)\n- Labels extracted from issue payloads (name-only)\n- Label linkage reflects current GitLab state (removed labels unlinked on re-sync)\n- Issue discussions fetched per-issue (dependent sync)\n- Cursor-based sync is resumable (re-running fetches 0 new items)\n- Discussion sync skips unchanged issues (per-issue watermark)\n- Sync tracking records all runs\n- Single-flight lock prevents concurrent runs\n\n## Internal Gates\n- Gate A: Issues only (cursor + upsert + raw payloads + list/count/show)\n- Gate B: Labels correct (stale-link removal verified)\n- Gate C: Dependent discussion sync (watermark prevents redundant refetch)\n- Gate D: Resumability proof (kill mid-run, rerun; bounded redo)\n\nReference: docs/prd/checkpoint-1.md","status":"tombstone","priority":1,"issue_type":"epic","created_at":"2026-01-25T15:42:13.167698Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.638609Z","closed_at":"2026-01-25T17:02:01.638609Z","deleted_at":"2026-01-25T17:02:01.638606Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"epic","compaction_level":0,"original_size":0} -{"id":"bd-2pyn","title":"Phase 4d: E2E runtime acceptance + structured logging","description":"## What\nAdd a dedicated end-to-end acceptance bead that runs the production CLI flow under asupersync with structured, high-signal logging. This is the missing bridge between unit/integration validation and real user workflow confidence.\n\n## Why\nCurrent beads cover adapter tests, cancellation tests, parity checks, and hardening, but we still need a single e2e acceptance path that proves the full CLI behavior with runtime semantics, cancellation, and resume behavior in one place.\n\n## Scope\nCreate an e2e test harness (script or integration test) that:\n1. 
Seeds a realistic project fixture (issues + MRs + discussions) via mocked GitLab endpoints.\n2. Runs `lore sync` with robot output and verbose logging enabled.\n3. Triggers Ctrl+C during active fan-out to validate graceful cancellation behavior.\n4. Re-runs sync to confirm resume behavior and no corruption/stuck dependent jobs.\n5. Emits machine-readable run artifacts (JSON summary + log file path) for CI debugging.\n\n## Logging Requirements (detailed)\nAt minimum, logs/artifacts must include:\n- run/session ID\n- phase transitions (preflight, fetch, ingest, dependent drains, finalize)\n- per-entity outcome (`synced|failed|skipped`) with IID and entity type\n- cancellation signal timing and source (first Ctrl+C vs force exit path)\n- timeout classifications (`NetworkErrorKind`) when applicable\n- retry attempts and terminal failure reasons\n- final counters matching SyncResult fields\n\n## Acceptance Criteria\n- [ ] One command executes the full e2e flow deterministically in CI/local\n- [ ] Cancellation mid-flight is exercised and validated (not just happy path)\n- [ ] Second run resumes cleanly; no stuck locks/jobs remain\n- [ ] Structured logs are persisted and easy to inspect after failure\n- [ ] E2E summary includes pass/fail per scenario and key counters\n- [ ] `cargo test` or CI task wiring documented in bead comments\n\n## Suggested Location\n- `tests/asupersync_e2e.rs` OR `tests/e2e/asupersync_smoke.sh` (+ Rust helper)\n- Keep implementation in-repo (no external infra requirement)\n\n## Dependency Context\nThis bead executes after major migration and phase-4 runtime test work is in place. 
It is an acceptance gate input for Phase 5 hardening.","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:55:18.278302Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:55:27.938600Z","compaction_level":0,"original_size":0,"labels":["asupersync","e2e","logging","phase-4","testing"],"dependencies":[{"issue_id":"bd-2pyn","depends_on_id":"bd-16qf","type":"blocks","created_at":"2026-03-06T18:55:27.823086Z","created_by":"tayloreernisse"},{"issue_id":"bd-2pyn","depends_on_id":"bd-18ai","type":"blocks","created_at":"2026-03-06T18:55:27.709595Z","created_by":"tayloreernisse"},{"issue_id":"bd-2pyn","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:55:18.282072Z","created_by":"tayloreernisse"},{"issue_id":"bd-2pyn","depends_on_id":"bd-1yky","type":"blocks","created_at":"2026-03-06T18:55:27.938579Z","created_by":"tayloreernisse"}]} +{"id":"bd-2pyn","title":"Phase 4d: E2E runtime acceptance + structured logging","description":"## What\nAdd a dedicated end-to-end acceptance bead that runs the production CLI flow under asupersync with structured, high-signal logging. This is the missing bridge between unit/integration validation and real user workflow confidence.\n\n## Why\nCurrent beads cover adapter tests, cancellation tests, parity checks, and hardening, but we still need a single e2e acceptance path that proves the full CLI behavior with runtime semantics, cancellation, and resume behavior in one place.\n\n## Scope\nCreate an e2e test harness (script or integration test) that:\n1. Seeds a realistic project fixture (issues + MRs + discussions) via mocked GitLab endpoints.\n2. Runs `lore sync` with robot output and verbose logging enabled.\n3. Triggers Ctrl+C during active fan-out to validate graceful cancellation behavior.\n4. Re-runs sync to confirm resume behavior and no corruption/stuck dependent jobs.\n5. 
Emits machine-readable run artifacts (JSON summary + log file path) for CI debugging.\n\n## Logging Requirements (detailed)\nAt minimum, logs/artifacts must include:\n- run/session ID\n- phase transitions (preflight, fetch, ingest, dependent drains, finalize)\n- per-entity outcome (`synced|failed|skipped`) with IID and entity type\n- cancellation signal timing and source (first Ctrl+C vs force exit path)\n- timeout classifications (`NetworkErrorKind`) when applicable\n- retry attempts and terminal failure reasons\n- final counters matching SyncResult fields\n\n## Acceptance Criteria\n- [ ] One command executes the full e2e flow deterministically in CI/local\n- [ ] Cancellation mid-flight is exercised and validated (not just happy path)\n- [ ] Second run resumes cleanly; no stuck locks/jobs remain\n- [ ] Structured logs are persisted and easy to inspect after failure\n- [ ] E2E summary includes pass/fail per scenario and key counters\n- [ ] `cargo test` or CI task wiring documented in bead comments\n\n## Suggested Location\n- `tests/asupersync_e2e.rs` OR `tests/e2e/asupersync_smoke.sh` (+ Rust helper)\n- Keep implementation in-repo (no external infra requirement)\n\n## Dependency Context\nThis bead executes after major migration and phase-4 runtime test work is in place. 
It is an acceptance gate input for Phase 5 hardening.","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:55:18.278302Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.667730Z","closed_at":"2026-03-06T21:11:12.667170Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","e2e","logging","phase-4","testing"],"dependencies":[{"issue_id":"bd-2pyn","depends_on_id":"bd-16qf","type":"blocks","created_at":"2026-03-06T18:55:27.823086Z","created_by":"tayloreernisse"},{"issue_id":"bd-2pyn","depends_on_id":"bd-18ai","type":"blocks","created_at":"2026-03-06T18:55:27.709595Z","created_by":"tayloreernisse"},{"issue_id":"bd-2pyn","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:55:18.282072Z","created_by":"tayloreernisse"},{"issue_id":"bd-2pyn","depends_on_id":"bd-1yky","type":"blocks","created_at":"2026-03-06T18:55:27.938579Z","created_by":"tayloreernisse"}]} {"id":"bd-2rk9","title":"WHO: CLI skeleton — WhoArgs, Commands::Who, dispatch arm","description":"## Background\n\nWire up the CLI plumbing so `lore who --help` works and dispatch reaches the who module. This is pure boilerplate — no query logic yet.\n\n## Approach\n\n### 1. 
src/cli/mod.rs — WhoArgs struct (after TimelineArgs, ~line 195)\n\n```rust\n#[derive(Parser)]\n#[command(after_help = \"\\x1b[1mExamples:\\x1b[0m\n lore who src/features/auth/ # Who knows about this area?\n lore who @asmith # What is asmith working on?\n lore who @asmith --reviews # What review patterns does asmith have?\n lore who --active # What discussions need attention?\n lore who --overlap src/features/auth/ # Who else is touching these files?\n lore who --path README.md # Expert lookup for a root file\")]\npub struct WhoArgs {\n /// Username or file path (path if contains /)\n pub target: Option,\n\n /// Force expert mode for a file/directory path (handles root files like README.md, Makefile)\n #[arg(long, help_heading = \"Mode\", conflicts_with_all = [\"active\", \"overlap\", \"reviews\"])]\n pub path: Option,\n\n /// Show active unresolved discussions\n #[arg(long, help_heading = \"Mode\", conflicts_with_all = [\"target\", \"overlap\", \"reviews\", \"path\"])]\n pub active: bool,\n\n /// Find users with MRs/notes touching this file path\n #[arg(long, help_heading = \"Mode\", conflicts_with_all = [\"target\", \"active\", \"reviews\", \"path\"])]\n pub overlap: Option,\n\n /// Show review pattern analysis (requires username target)\n #[arg(long, help_heading = \"Mode\", requires = \"target\", conflicts_with_all = [\"active\", \"overlap\", \"path\"])]\n pub reviews: bool,\n\n /// Time window (7d, 2w, 6m, YYYY-MM-DD). Default varies by mode.\n #[arg(long, help_heading = \"Filters\")]\n pub since: Option,\n\n /// Scope to a project (supports fuzzy matching)\n #[arg(short = 'p', long, help_heading = \"Filters\")]\n pub project: Option,\n\n /// Maximum results per section (1..=500)\n #[arg(short = 'n', long = \"limit\", default_value = \"20\",\n value_parser = clap::value_parser!(u16).range(1..=500),\n help_heading = \"Output\")]\n pub limit: u16,\n}\n```\n\n### 2. Commands enum — add Who(WhoArgs) after Timeline, before hidden List\n\n### 3. 
src/cli/commands/mod.rs — add `pub mod who;` and re-exports:\n```rust\npub use who::{run_who, print_who_human, print_who_json, WhoRun};\n```\n\n### 4. src/main.rs — dispatch arm + handler:\n```rust\nSome(Commands::Who(args)) => handle_who(cli.config.as_deref(), args, robot_mode),\n```\n\n### 5. src/cli/commands/who.rs — stub file with signatures that compile\n\n## Files\n\n- `src/cli/mod.rs` — WhoArgs struct + Commands::Who variant\n- `src/cli/commands/mod.rs` — pub mod who + re-exports\n- `src/main.rs` — dispatch arm + handle_who function + imports\n- `src/cli/commands/who.rs` — CREATE stub file\n\n## TDD Loop\n\nRED: `cargo check --all-targets` fails (missing who module)\nGREEN: Create stub who.rs with empty/todo!() implementations, wire up all 4 files\nVERIFY: `cargo check --all-targets && cargo run -- who --help`\n\n## Acceptance Criteria\n\n- [ ] `cargo check --all-targets` passes\n- [ ] `lore who --help` displays all flags with correct grouping (Mode, Filters, Output)\n- [ ] `lore who --active --overlap foo` rejected by clap (conflicts_with)\n- [ ] `lore who --reviews` rejected by clap (requires target)\n- [ ] WhoArgs is pub and importable from lore::cli\n\n## Edge Cases\n\n- conflicts_with_all on --path must NOT include \"target\" (--path is used alongside positional target in some cases... actually no, --path replaces target — check the plan: it conflicts with active/overlap/reviews but NOT target. Wait, looking at the plan: --path does NOT conflict with target. But if both target and --path are provided, --path takes priority in resolve_mode. The clap struct allows both.)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:39:58.436660Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.594923Z","closed_at":"2026-02-08T04:10:29.594882Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. 
All quality gates pass.","compaction_level":0,"original_size":0} {"id":"bd-2rqs","title":"Dynamic shell completions for file paths (lore complete-path)","description":"## Background\n\nTab-completion for lore commands currently only covers static subcommand/flag names via clap_complete v4 (src/main.rs handle_completions(), line ~1667). Users frequently type file paths (for who --path, file-history) and entity IIDs (for issues, mrs, show) manually. Dynamic completions would allow tab-completing these from the local SQLite database.\n\n**Pattern:** kubectl, gh, docker all use hidden subcommands for dynamic completions. clap_complete v4 has a custom completer API that can shell out to these hidden subcommands.\n\n## Codebase Context\n\n- **Static completions**: Commands::Completions variant in src/cli/mod.rs, handled by handle_completions() in src/main.rs (line ~1667) using clap_complete::generate()\n- **clap_complete v4**: Already in Cargo.toml. Supports custom completer API for dynamic values.\n- **Commands taking IIDs**: IssuesArgs (iid: Option), MrsArgs (iid: Option), Drift (for: EntityRef), Show (hidden, takes entity ref)\n- **path_resolver**: src/core/path_resolver.rs (245 lines). build_path_query() (lines 71-187) and suffix_probe() (lines 192-240) resolve partial paths against mr_file_changes. SuffixResult::Ambiguous(Vec) returns multiple matches — perfect for completions.\n- **who --path**: WhoArgs has `path: Option` field, already uses path_resolver\n- **DB access**: create_connection() from src/core/db.rs, config loading from src/core/config.rs\n- **Performance**: Must complete in <100ms. SQLite queries against indexed columns are sub-ms.\n\n## Approach\n\n### 1. 
Hidden Subcommands (src/cli/mod.rs)\n\nAdd hidden subcommands that query the DB and print completion candidates:\n\n```rust\n/// Hidden: emit file path completions for shell integration\n#[command(name = \"complete-path\", hide = true)]\nCompletePath {\n /// Partial path prefix to complete\n prefix: String,\n /// Project scope\n #[arg(short = 'p', long)]\n project: Option,\n},\n\n/// Hidden: emit issue IID completions\n#[command(name = \"complete-issue\", hide = true)]\nCompleteIssue {\n /// Partial IID prefix\n prefix: String,\n #[arg(short = 'p', long)]\n project: Option,\n},\n\n/// Hidden: emit MR IID completions\n#[command(name = \"complete-mr\", hide = true)]\nCompleteMr {\n /// Partial IID prefix\n prefix: String,\n #[arg(short = 'p', long)]\n project: Option,\n},\n```\n\n### 2. Completion Handlers (src/cli/commands/completions.rs NEW)\n\n```rust\npub fn complete_path(conn: &Connection, prefix: &str, project_id: Option) -> Result> {\n // Use suffix_probe() from path_resolver if prefix looks like a suffix (no leading /)\n // Otherwise: SELECT DISTINCT new_path FROM mr_file_changes WHERE new_path LIKE ?||'%' LIMIT 50\n // Also check old_path for rename awareness\n}\n\npub fn complete_issue(conn: &Connection, prefix: &str, project_id: Option) -> Result> {\n // SELECT iid, title FROM issues WHERE CAST(iid AS TEXT) LIKE ?||'%' ORDER BY updated_at DESC LIMIT 30\n // Output: \"123\\tFix login bug\" (tab-separated for shell description)\n}\n\npub fn complete_mr(conn: &Connection, prefix: &str, project_id: Option) -> Result> {\n // SELECT iid, title FROM merge_requests WHERE CAST(iid AS TEXT) LIKE ?||'%' ORDER BY updated_at DESC LIMIT 30\n // Output: \"456\\tAdd OAuth support\"\n}\n```\n\n### 3. Wire in main.rs\n\nAdd match arms for CompletePath, CompleteIssue, CompleteMr. Each:\n1. Opens DB connection (read-only)\n2. Resolves project if -p given\n3. Calls completion handler\n4. Prints one candidate per line to stdout\n5. Exits 0\n\n### 4. 
Shell Integration\n\nUpdate handle_completions() to generate shell scripts that call the hidden subcommands. For fish:\n```fish\ncomplete -c lore -n '__fish_seen_subcommand_from issues' -a '(lore complete-issue \"\")'\ncomplete -c lore -n '__fish_seen_subcommand_from who' -l path -a '(lore complete-path (commandline -ct))'\n```\n\nSimilar for bash (using `_lore_complete()` function) and zsh.\n\n## Acceptance Criteria\n\n- [ ] `lore complete-path \"src/co\"` prints matching file paths from mr_file_changes\n- [ ] `lore complete-issue \"12\"` prints matching issue IIDs with titles\n- [ ] `lore complete-mr \"45\"` prints matching MR IIDs with titles\n- [ ] All three hidden subcommands respect -p for project scoping\n- [ ] All three complete in <100ms (SQLite indexed queries)\n- [ ] Empty prefix returns recent/popular results (not all rows)\n- [ ] Hidden subcommands don't appear in --help or completions themselves\n- [ ] Shell completion scripts (fish, bash, zsh) call hidden subcommands for dynamic values\n- [ ] Static completions (subcommands, flags) still work as before\n- [ ] No DB connection attempted if DB doesn't exist (graceful degradation — return no completions)\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n- [ ] `cargo fmt --check` passes\n\n## Files\n\n- MODIFY: src/cli/mod.rs (add CompletePath, CompleteIssue, CompleteMr hidden variants)\n- CREATE: src/cli/commands/completions.rs (complete_path, complete_issue, complete_mr handlers)\n- MODIFY: src/cli/commands/mod.rs (add pub mod completions)\n- MODIFY: src/main.rs (match arms for hidden subcommands + update handle_completions shell scripts)\n\n## TDD Anchor\n\nRED:\n- test_complete_path_suffix_match (in-memory DB with mr_file_changes rows, verify suffix matching returns correct paths)\n- test_complete_issue_prefix (in-memory DB with issues, verify IID prefix filtering)\n- test_complete_mr_prefix (same for MRs)\n- test_complete_empty_prefix_returns_recent 
(verify limited results ordered by updated_at DESC)\n\nGREEN: Implement completion handlers with SQL queries.\n\nVERIFY: cargo test --lib -- completions && cargo check --all-targets\n\n## Edge Cases\n\n- DB doesn't exist yet (first run before sync): return empty completions, exit 0 (not error)\n- mr_file_changes empty (sync hasn't run with --fetch-mr-diffs): complete-path returns nothing, no error\n- Very long prefix with no matches: empty output, exit 0\n- Special characters in paths (spaces, brackets): shell quoting handled by completion framework\n- Project ambiguous with -p: exit 18, same as other commands (resolve_project pattern)\n- IID prefix \"0\": return nothing (no issues/MRs have iid=0)\n\n## Dependency Context\n\n- **path_resolver** (src/core/path_resolver.rs): provides suffix_probe() which returns SuffixResult::Exact/Ambiguous/NotFound — reuse for complete-path instead of raw SQL when prefix looks like a suffix\n- **mr_file_changes** (migration 016): provides new_path/old_path columns for file path completions\n- **clap_complete v4** (Cargo.toml): provides generate() for static completions and custom completer API for dynamic shell integration","status":"open","priority":3,"issue_type":"feature","created_at":"2026-02-13T16:31:48.589428Z","created_by":"tayloreernisse","updated_at":"2026-02-17T16:51:21.891406Z","compaction_level":0,"original_size":0,"labels":["cli-ux","gate-4"]} {"id":"bd-2rr","title":"OBSERV: Replace subscriber init with dual-layer setup","description":"## Background\nThis is the core infrastructure bead for Phase 1. It replaces the single-layer subscriber (src/main.rs:44-58) with a dual-layer registry that separates stderr and file concerns. The file layer provides always-on post-mortem data; the stderr layer respects -v flags.\n\n## Approach\nReplace src/main.rs lines 44-58 with a function (e.g., init_tracing()) that:\n\n1. 
Build stderr filter from -v count (or RUST_LOG override):\n```rust\nfn build_stderr_filter(verbose: u8, quiet: bool) -> EnvFilter {\n if let Ok(rust_log) = std::env::var(\"RUST_LOG\") {\n return EnvFilter::new(rust_log);\n }\n if quiet {\n return EnvFilter::new(\"lore=warn,error\");\n }\n match verbose {\n 0 => EnvFilter::new(\"lore=info,warn\"),\n 1 => EnvFilter::new(\"lore=debug,warn\"),\n 2 => EnvFilter::new(\"lore=debug,info\"),\n _ => EnvFilter::new(\"trace,debug\"),\n }\n}\n```\n\n2. Build file filter (always lore=debug,warn unless RUST_LOG set):\n```rust\nfn build_file_filter() -> EnvFilter {\n if let Ok(rust_log) = std::env::var(\"RUST_LOG\") {\n return EnvFilter::new(rust_log);\n }\n EnvFilter::new(\"lore=debug,warn\")\n}\n```\n\n3. Assemble the registry:\n```rust\nlet stderr_layer = fmt::layer()\n .with_target(false)\n .with_writer(SuspendingWriter);\n// Conditionally add .json() based on log_format\n\nlet file_appender = tracing_appender::rolling::daily(log_dir, \"lore\");\nlet (non_blocking, _guard) = tracing_appender::non_blocking(file_appender);\nlet file_layer = fmt::layer()\n .json()\n .with_writer(non_blocking);\n\ntracing_subscriber::registry()\n .with(stderr_layer.with_filter(build_stderr_filter(cli.verbose, cli.quiet)))\n .with(file_layer.with_filter(build_file_filter()))\n .init();\n```\n\nCRITICAL: The non_blocking _guard must be held for the program's lifetime. Store it in main() scope, NOT in the init function. If the guard drops, the file writer thread stops and buffered logs are lost.\n\nCRITICAL: Per-layer filtering requires each .with_filter() to produce a Filtered type. The two layers will have different concrete types (one with json, one without). This is fine -- the registry accepts heterogeneous layers via .with().\n\nWhen --log-format json: wrap stderr_layer with .json() too. This requires conditional construction. 
Two approaches:\n A) Use Box> for dynamic dispatch (simpler, tiny perf hit)\n B) Use an enum wrapper (zero cost but more code)\nRecommend approach A for simplicity. The overhead is one vtable indirection per log event, dwarfed by I/O.\n\nWhen file_logging is false (LoggingConfig.file_logging == false): skip adding the file layer entirely.\n\n## Acceptance Criteria\n- [ ] lore sync writes JSON log lines to ~/.local/share/lore/logs/lore.YYYY-MM-DD.log\n- [ ] lore -v sync shows DEBUG lore::* on stderr, deps at WARN\n- [ ] lore -vv sync shows DEBUG lore::* + INFO deps on stderr\n- [ ] lore -vvv sync shows TRACE everything on stderr\n- [ ] RUST_LOG=lore::gitlab=trace overrides -v for both layers\n- [ ] lore --log-format json sync emits JSON on stderr\n- [ ] -q + -v: -q wins (stderr at WARN+)\n- [ ] -q does NOT affect file layer (still DEBUG+)\n- [ ] File layer does NOT use SuspendingWriter\n- [ ] Non-blocking guard kept alive for program duration\n- [ ] Existing behavior unchanged when no new flags passed\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/main.rs (replace lines 44-58, add init_tracing function or inline)\n\n## TDD Loop\nRED:\n - test_verbosity_filter_construction: assert filter directives for verbose=0,1,2,3\n - test_rust_log_overrides_verbose: set env, assert TRACE not DEBUG\n - test_quiet_overrides_verbose: -q + -v => WARN+\n - test_json_log_output_format: capture file output, parse as JSON\n - test_suspending_writer_dual_layer: no garbled stderr with progress bars\nGREEN: Implement build_stderr_filter, build_file_filter, assemble registry\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- _guard lifetime: if guard is dropped early, buffered log lines are lost. MUST hold in main() scope.\n- Type erasure: stderr layer with/without .json() produces different types. Use Box> or separate init paths.\n- Empty RUST_LOG string: env::var returns Ok(\"\"), which EnvFilter::new(\"\") defaults to TRACE. 
May want to check is_empty().\n- File I/O error on log dir: tracing-appender handles this gracefully (no panic), but logs will be silently lost. The doctor command (bd-2i10) can diagnose this.","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T15:53:55.577025Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:15:04.384114Z","closed_at":"2026-02-04T17:15:04.384062Z","close_reason":"Replaced single-layer subscriber with dual-layer setup: stderr (human/json, -v controlled) + file (always-on JSON, daily rotation via tracing-appender)","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-2rr","depends_on_id":"bd-17n","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2rr","depends_on_id":"bd-1k4","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2rr","depends_on_id":"bd-1o1","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2rr","depends_on_id":"bd-2nx","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2rr","depends_on_id":"bd-gba","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -194,10 +194,10 @@ {"id":"bd-2ug","title":"[CP1] gi ingest --type=issues command","description":"CLI command to orchestrate issue ingestion.\n\n## Module\nsrc/cli/commands/ingest.rs\n\n## Clap Definition\n#[derive(Subcommand)]\npub enum Commands {\n Ingest {\n #[arg(long, value_parser = [\"issues\", \"merge_requests\"])]\n r#type: String,\n \n #[arg(long)]\n project: Option,\n \n #[arg(long)]\n force: bool,\n },\n}\n\n## Implementation\n1. Acquire app lock with heartbeat (respect --force for stale lock)\n2. Create sync_run record (status='running')\n3. For each configured project (or filtered --project):\n - Call orchestrator to ingest issues and discussions\n - Show progress (spinner or progress bar)\n4. 
Update sync_run (status='succeeded', metrics_json with counts)\n5. Release lock\n\n## Output Format\nIngesting issues...\n\n group/project-one: 1,234 issues fetched, 45 new labels\n\nFetching discussions (312 issues with updates)...\n\n group/project-one: 312 issues → 1,234 discussions, 5,678 notes\n\nTotal: 1,234 issues, 1,234 discussions, 5,678 notes (excluding 1,234 system notes)\nSkipped discussion sync for 922 unchanged issues.\n\n## Error Handling\n- Lock acquisition failure: exit with DatabaseLockError message\n- Network errors: show GitLabNetworkError, exit non-zero\n- Rate limiting: respect backoff, show progress\n\nFiles: src/cli/commands/ingest.rs, src/cli/commands/mod.rs\nTests: tests/integration/sync_runs_tests.rs\nDone when: Full issue + discussion ingestion works end-to-end","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-01-25T16:57:58.552504Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.875613Z","closed_at":"2026-01-25T17:02:01.875613Z","deleted_at":"2026-01-25T17:02:01.875607Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-2um","title":"[CP1] Epic: Issue Ingestion","description":"Ingest all issues, labels, and issue discussions from configured GitLab repositories with resumable cursor-based incremental sync. 
This checkpoint establishes the core data ingestion pattern that will be reused for MRs in Checkpoint 2.\n\n## Success Criteria\n- gi ingest --type=issues fetches all issues (count matches GitLab UI)\n- Labels extracted from issue payloads (name-only)\n- Label linkage reflects current GitLab state (removed labels unlinked on re-sync)\n- Issue discussions fetched per-issue (dependent sync)\n- Cursor-based sync is resumable (re-running fetches 0 new items)\n- Discussion sync skips unchanged issues (per-issue watermark)\n- Sync tracking records all runs (sync_runs table)\n- Single-flight lock prevents concurrent runs\n\n## Internal Gates\n- **Gate A**: Issues only - cursor + upsert + raw payloads + list/count/show working\n- **Gate B**: Labels correct - stale-link removal verified; label count matches GitLab\n- **Gate C**: Dependent discussion sync - watermark prevents redundant refetch; concurrency bounded\n- **Gate D**: Resumability proof - kill mid-run, rerun; bounded redo and no redundant discussion refetch\n\n## Reference\ndocs/prd/checkpoint-1.md","status":"closed","priority":1,"issue_type":"epic","created_at":"2026-01-25T17:02:38.075224Z","created_by":"tayloreernisse","updated_at":"2026-01-25T23:27:15.347364Z","closed_at":"2026-01-25T23:27:15.347317Z","close_reason":"CP1 Issue Ingestion complete: all sub-tasks done, 71 tests pass, CLI commands working","compaction_level":0,"original_size":0} {"id":"bd-2w1p","title":"Add half-life fields and config validation to ScoringConfig","description":"## Background\nThe flat-weight ScoringConfig (config.rs:155-167) has only 3 fields: author_weight (25), reviewer_weight (10), note_bonus (1). Time-decay scoring needs half-life parameters, a reviewer split (participated vs assigned-only), closed MR discount, substantive-note threshold, and bot filtering.\n\n## Approach\nExtend the existing ScoringConfig struct at config.rs:155. 
Add new fields with #[serde(default)] and camelCase rename to match existing convention (authorWeight, reviewerWeight, noteBonus). Extend the Default impl at config.rs:169 with new defaults. Extend validate_scoring() at config.rs:274-291 (currently validates 3 weights >= 0).\n\n### New fields to add:\n```rust\n#[serde(rename = \"reviewerAssignmentWeight\")]\npub reviewer_assignment_weight: i64, // default: 3\n#[serde(rename = \"authorHalfLifeDays\")]\npub author_half_life_days: u32, // default: 180\n#[serde(rename = \"reviewerHalfLifeDays\")]\npub reviewer_half_life_days: u32, // default: 90\n#[serde(rename = \"reviewerAssignmentHalfLifeDays\")]\npub reviewer_assignment_half_life_days: u32, // default: 45\n#[serde(rename = \"noteHalfLifeDays\")]\npub note_half_life_days: u32, // default: 45\n#[serde(rename = \"closedMrMultiplier\")]\npub closed_mr_multiplier: f64, // default: 0.5\n#[serde(rename = \"reviewerMinNoteChars\")]\npub reviewer_min_note_chars: u32, // default: 20\n#[serde(rename = \"excludedUsernames\")]\npub excluded_usernames: Vec<String>, // default: vec![]\n```\n\n### Validation additions to validate_scoring() (config.rs:274):\n- All *_half_life_days must be > 0 AND <= 3650\n- All *_weight / *_bonus must be >= 0\n- reviewer_assignment_weight must be >= 0\n- closed_mr_multiplier must be finite (not NaN/Inf) AND in (0.0, 1.0]\n- reviewer_min_note_chars must be >= 0 AND <= 4096\n- excluded_usernames entries must be non-empty strings\n- Return LoreError::ConfigInvalid with clear message on failure\n\n## TDD Loop\n\n### RED (write first):\n```rust\n#[test]\nfn test_config_validation_rejects_zero_half_life() {\n let mut cfg = ScoringConfig::default();\n assert!(validate_scoring(&cfg).is_ok());\n cfg.author_half_life_days = 0;\n assert!(validate_scoring(&cfg).is_err());\n cfg.author_half_life_days = 180;\n cfg.reviewer_half_life_days = 0;\n assert!(validate_scoring(&cfg).is_err());\n cfg.reviewer_half_life_days = 90;\n cfg.closed_mr_multiplier = 0.0;\n 
assert!(validate_scoring(&cfg).is_err());\n cfg.closed_mr_multiplier = 1.5;\n assert!(validate_scoring(&cfg).is_err());\n cfg.closed_mr_multiplier = 1.0;\n assert!(validate_scoring(&cfg).is_ok());\n}\n\n#[test]\nfn test_config_validation_rejects_absurd_half_life() {\n let mut cfg = ScoringConfig::default();\n cfg.author_half_life_days = 5000; // > 3650 cap\n assert!(validate_scoring(&cfg).is_err());\n cfg.author_half_life_days = 3650; // boundary: valid\n assert!(validate_scoring(&cfg).is_ok());\n cfg.reviewer_min_note_chars = 5000; // > 4096 cap\n assert!(validate_scoring(&cfg).is_err());\n cfg.reviewer_min_note_chars = 4096; // boundary: valid\n assert!(validate_scoring(&cfg).is_ok());\n}\n\n#[test]\nfn test_config_validation_rejects_nan_multiplier() {\n let mut cfg = ScoringConfig::default();\n cfg.closed_mr_multiplier = f64::NAN;\n assert!(validate_scoring(&cfg).is_err());\n cfg.closed_mr_multiplier = f64::INFINITY;\n assert!(validate_scoring(&cfg).is_err());\n cfg.closed_mr_multiplier = f64::NEG_INFINITY;\n assert!(validate_scoring(&cfg).is_err());\n}\n```\n\n### GREEN: Add fields to struct + Default impl + validation rules.\n### VERIFY: cargo test -p lore -- test_config_validation\n\n## Acceptance Criteria\n- [ ] test_config_validation_rejects_zero_half_life passes\n- [ ] test_config_validation_rejects_absurd_half_life passes\n- [ ] test_config_validation_rejects_nan_multiplier passes\n- [ ] ScoringConfig::default() returns correct values for all 11 fields\n- [ ] cargo check --all-targets passes\n- [ ] Existing config deserialization works (#[serde(default)] fills new fields)\n- [ ] validate_scoring() is pub(crate) or accessible from config.rs test module\n\n## Files\n- MODIFY: src/core/config.rs (struct at line 155, Default impl at line 169, validate_scoring at line 274)\n\n## Edge Cases\n- f64 comparison: use .is_finite() for NaN/Inf check, > 0.0 and <= 1.0 for range\n- Vec default: use Vec::new()\n- Upper bounds prevent silent misconfig (5000-day half-life 
effectively disables decay)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-09T16:59:14.654469Z","created_by":"tayloreernisse","updated_at":"2026-02-12T20:43:04.400186Z","closed_at":"2026-02-12T20:43:04.399988Z","close_reason":"Implemented by time-decay swarm: 3 agents, 12 tasks, 621 tests passing, all quality gates green","compaction_level":0,"original_size":0,"labels":["scoring"]} -{"id":"bd-2wok","title":"Phase 3c: Entrypoint migration (#[asupersync::main])","description":"## What\nReplace #[tokio::main] with #[asupersync::main] in main.rs and thread Cx through command dispatch.\n\n## Rearchitecture Context (2026-03-06)\nmain.rs has been thinned to ~419 LOC. It now uses include!() to pull in handler code:\n- src/main.rs -> include!(\"app/dispatch.rs\") -> include!(\"errors.rs\"), include!(\"handlers.rs\"), include!(\"robot_docs.rs\")\n- The #[tokio::main] attribute is on the main() fn in src/main.rs\n- Command dispatch handler functions are physically in src/app/handlers.rs but logically in main.rs scope\n\n## Implementation\n\n```rust\n// Before\n#[tokio::main]\nasync fn main() -> Result<()> { ... }\n\n// After\n#[asupersync::main]\nasync fn main(cx: &Cx) -> Outcome<()> { ... }\n```\n\nThe Cx parameter is the runtime context that enables region-scoped task management. 
It flows from main() through all async command dispatch arms.\n\n## Key Considerations\n- Return type changes from Result<()> to Outcome<()> — verify what Outcome provides and how error handling maps\n- Cx is borrowed (&Cx), not owned — it lives for the duration of the program\n- Command dispatch handler functions in src/app/handlers.rs must receive cx through the dispatch chain\n- The include!() chain means cx flows from main.rs through the included handler functions seamlessly (same scope)\n\n## Files Changed\n- src/main.rs (~10 LOC for entrypoint attribute change)\n- src/app/handlers.rs (handler function signatures gain cx: &Cx — these are include!'d into main.rs scope)\n\n## Testing\n- cargo check --all-targets (part of atomic commit)\n- Binary must launch and show help text\n\n## Depends On\n- Phase 3a (asupersync must be in Cargo.toml)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:40:29.594563Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:51:56.747622Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-2wok","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:29.612761Z","created_by":"tayloreernisse"},{"issue_id":"bd-2wok","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:51.738254Z","created_by":"tayloreernisse"}]} +{"id":"bd-2wok","title":"Phase 3c: Entrypoint migration (#[asupersync::main])","description":"## What\nReplace #[tokio::main] with #[asupersync::main] in main.rs and thread Cx through command dispatch.\n\n## Rearchitecture Context (2026-03-06)\nmain.rs has been thinned to ~419 LOC. 
It now uses include!() to pull in handler code:\n- src/main.rs -> include!(\"app/dispatch.rs\") -> include!(\"errors.rs\"), include!(\"handlers.rs\"), include!(\"robot_docs.rs\")\n- The #[tokio::main] attribute is on the main() fn in src/main.rs\n- Command dispatch handler functions are physically in src/app/handlers.rs but logically in main.rs scope\n\n## Implementation\n\n```rust\n// Before\n#[tokio::main]\nasync fn main() -> Result<()> { ... }\n\n// After\n#[asupersync::main]\nasync fn main(cx: &Cx) -> Outcome<()> { ... }\n```\n\nThe Cx parameter is the runtime context that enables region-scoped task management. It flows from main() through all async command dispatch arms.\n\n## Key Considerations\n- Return type changes from Result<()> to Outcome<()> — verify what Outcome provides and how error handling maps\n- Cx is borrowed (&Cx), not owned — it lives for the duration of the program\n- Command dispatch handler functions in src/app/handlers.rs must receive cx through the dispatch chain\n- The include!() chain means cx flows from main.rs through the included handler functions seamlessly (same scope)\n\n## Files Changed\n- src/main.rs (~10 LOC for entrypoint attribute change)\n- src/app/handlers.rs (handler function signatures gain cx: &Cx — these are include!'d into main.rs scope)\n\n## Testing\n- cargo check --all-targets (part of atomic commit)\n- Binary must launch and show help text\n\n## Depends On\n- Phase 3a (asupersync must be in Cargo.toml)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:40:29.594563Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.619665Z","closed_at":"2026-03-06T21:11:12.619615Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration 
branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-2wok","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:29.612761Z","created_by":"tayloreernisse"},{"issue_id":"bd-2wok","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:51.738254Z","created_by":"tayloreernisse"}]} {"id":"bd-2wpf","title":"Ship timeline CLI with human and robot renderers","description":"## Problem\nThe timeline pipeline (5-stage SEED->HYDRATE->EXPAND->COLLECT->RENDER) is implemented but not wired to the CLI. This is one of lore's most unique features — chronological narrative reconstruction from resource events, cross-references, and notes — and it is invisible to users and agents.\n\n## Current State\n- Types defined: src/core/timeline.rs (TimelineEvent, TimelineSeed, etc.)\n- Seed stage: src/core/timeline_seed.rs (FTS search -> seed entities)\n- Expand stage: src/core/timeline_expand.rs (cross-reference expansion)\n- Collect stage: src/core/timeline_collect.rs (event gathering from resource events + notes)\n- CLI command structure: src/cli/commands/timeline.rs (exists but incomplete)\n- Remaining beads: bd-1nf (CLI wiring), bd-2f2 (human renderer), bd-dty (robot renderer)\n\n## Acceptance Criteria\n1. lore timeline 'authentication refactor' works end-to-end:\n - Searches for matching entities (SEED)\n - Fetches raw data (HYDRATE)\n - Expands via cross-references (EXPAND with --depth flag, default 1)\n - Collects events chronologically (COLLECT)\n - Renders human-readable narrative (RENDER)\n2. Human renderer output:\n - Chronological event stream with timestamps\n - Color-coded by event type (state change, label change, note, reference)\n - Actor names with role context\n - Grouped by day/week for readability\n - Evidence snippets from notes (first 200 chars)\n3. 
Robot renderer output (--robot / -J):\n - JSON array of events with: timestamp, event_type, actor, entity_ref, body/snippet, metadata\n - Seed entities listed separately (what matched the query)\n - Expansion depth metadata (how far from seed)\n - Total event count and time range\n4. CLI flags:\n - --project (scope to project)\n - --since (time range)\n - --depth N (expansion depth, default 1, max 3)\n - --expand-mentions (follow mention references, not just closes/related)\n - -n LIMIT (max events)\n5. Performance: timeline for a single issue with 50 events renders in <200ms\n\n## Relationship to Existing Beads\nThis supersedes/unifies: bd-1nf (CLI wiring), bd-2f2 (human renderer), bd-dty (robot renderer). Those can be closed when this ships.\n\n## Files to Modify\n- src/cli/commands/timeline.rs (CLI wiring, flag parsing, output dispatch)\n- src/core/timeline.rs (may need RENDER stage types)\n- New: src/cli/render/timeline_human.rs or inline in timeline.rs\n- New: src/cli/render/timeline_robot.rs or inline in timeline.rs","status":"closed","priority":1,"issue_type":"feature","created_at":"2026-02-12T15:46:16.246889Z","created_by":"tayloreernisse","updated_at":"2026-02-12T15:50:43.885226Z","closed_at":"2026-02-12T15:50:43.885180Z","close_reason":"Already implemented: run_timeline(), print_timeline(), print_timeline_json_with_meta(), handle_timeline() all exist and are fully wired. Code audit 2026-02-12.","compaction_level":0,"original_size":0,"labels":["cli","cli-imp"],"dependencies":[{"issue_id":"bd-2wpf","depends_on_id":"bd-13lp","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-2x2h","title":"Implement Sync screen (running + summary modes + progress coalescer)","description":"## Background\nThe Sync screen provides real-time progress visualization during data synchronization. 
The TUI drives sync directly via lore library calls (not subprocess) — this gives direct access to progress callbacks, proper error propagation, and cooperative cancellation via CancelToken. The TUI is the primary human interface; the CLI serves robots/scripts.\n\nAfter sync completes, the screen transitions to a summary view showing exact changed entity counts. A progress coalescer prevents render thrashing by batching rapid progress updates.\n\nDesign principle: the TUI is self-contained. It does NOT detect or react to external CLI sync operations. If someone runs lore sync externally, the TUI's natural re-query on navigation handles stale data implicitly.\n\n## Approach\nCreate state, action, and view modules for the Sync screen:\n\n**State** (crates/lore-tui/src/screen/sync/state.rs):\n- SyncScreenMode enum: FullScreen, Inline (for use from Bootstrap screen)\n- SyncState enum: Idle, Running(SyncProgress), Complete(SyncSummary), Error(String)\n- SyncProgress: per-lane progress (issues, MRs, discussions, notes, events, statuses) with counts and ETA\n- SyncSummary: changed entity counts (new, updated, deleted per type), duration, errors\n- ProgressCoalescer: buffers progress updates, emits at most every 100ms to prevent render thrash\n\n**sync_delta_ledger** (crates/lore-tui/src/screen/sync/delta_ledger.rs):\n- SyncDeltaLedger: in-memory per-run record of changed entity IDs\n- Fields: new_issue_iids (Vec), updated_issue_iids (Vec), new_mr_iids (Vec), updated_mr_iids (Vec)\n- record_change(entity_type, iid, change_kind) — called by sync progress callback\n- summary() -> SyncSummary — produces the final counts for the summary view\n- Purpose: after sync completes, the dashboard and list screens can use the ledger to highlight \"new since last sync\" items\n\n**Action** (crates/lore-tui/src/screen/sync/action.rs):\n- start_sync(db: &DbManager, config: &Config, cancel: CancelToken) -> Cmd\n- Calls lore library ingestion functions directly: ingest_issues, ingest_mrs, 
ingest_discussions, etc.\n- Progress callback sends Msg::SyncProgress(lane, count, total) via channel\n- On completion sends Msg::SyncComplete(SyncSummary)\n- On cancel sends Msg::SyncCancelled(partial_summary)\n\n**Per-project fault isolation:** If sync for one project fails, continue syncing other projects. Collect per-project errors and display in summary view. Don't abort entire sync on single project failure.\n\n**View** (crates/lore-tui/src/screen/sync/view.rs):\n- Running view: per-lane progress bars with counts/totals, overall ETA, cancel hint (Esc)\n- Stream stats footer: show items/sec throughput for active lanes\n- Summary view: table of entity types with new/updated/deleted columns, total duration, per-project error list\n- Error view: error message with retry option\n- Inline mode: compact single-line progress for embedding in Bootstrap screen\n\nThe Sync screen uses TaskSupervisor for the background sync task with cooperative cancellation.\n\n## Acceptance Criteria\n- [ ] Sync screen launches sync via lore library calls (NOT subprocess)\n- [ ] Per-lane progress bars update in real-time during sync\n- [ ] ProgressCoalescer batches updates to at most 10/second (100ms floor)\n- [ ] Esc cancels sync cooperatively via CancelToken, shows partial summary\n- [ ] Sync completion transitions to summary view with accurate change counts\n- [ ] Summary view shows new/updated/deleted counts per entity type\n- [ ] Error during sync shows error message with retry option\n- [ ] Sync task registered with TaskSupervisor (dedup by TaskKey::Sync)\n- [ ] Per-project fault isolation: single project failure doesn't abort entire sync\n- [ ] SyncDeltaLedger records changed entity IDs for post-sync highlighting\n- [ ] Stream stats footer shows items/sec throughput\n- [ ] ScreenMode::Inline renders compact single-line progress for Bootstrap embedding\n- [ ] Unit tests for ProgressCoalescer batching behavior\n- [ ] Unit tests for SyncDeltaLedger record/summary\n- [ ] Integration 
test: mock sync with FakeClock verifies progress -> summary transition\n\n## Files\n- CREATE: crates/lore-tui/src/screen/sync/state.rs\n- CREATE: crates/lore-tui/src/screen/sync/action.rs\n- CREATE: crates/lore-tui/src/screen/sync/view.rs\n- CREATE: crates/lore-tui/src/screen/sync/delta_ledger.rs\n- CREATE: crates/lore-tui/src/screen/sync/mod.rs\n- MODIFY: crates/lore-tui/src/screen/mod.rs (add pub mod sync)\n\n## TDD Anchor\nRED: Write test_progress_coalescer_batches_rapid_updates that sends 50 progress updates in 10ms and asserts coalescer emits at most 1.\nGREEN: Implement ProgressCoalescer with configurable floor interval.\nVERIFY: cargo test -p lore-tui sync -- --nocapture\n\nAdditional tests:\n- test_sync_cancel_produces_partial_summary\n- test_sync_complete_produces_full_summary\n- test_sync_error_shows_retry\n- test_sync_dedup_prevents_double_launch\n- test_delta_ledger_records_changes: record 5 new issues and 3 updated MRs, assert summary counts\n- test_per_project_fault_isolation: simulate one project failure, verify others complete\n\n## Edge Cases\n- Sync cancelled immediately after start — partial summary with zero counts is valid\n- Network timeout during sync — error state with last-known progress preserved\n- Very large sync (100k+ entities) — progress coalescer prevents render thrash\n- Sync started while another sync TaskKey::Sync exists — TaskSupervisor dedup rejects it\n- Inline mode from Bootstrap: compact rendering, no full progress bars\n\n## Dependency Context\nUses TaskSupervisor from bd-3le2 for dedup and cancellation. Uses DbManager from bd-2kop for database access. Uses lore library ingestion module directly for sync operations. 
Used by Bootstrap screen (bd-3ty8) in inline mode.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T17:02:09.481354Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:34.266057Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-2x2h","depends_on_id":"bd-1df9","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2x2h","depends_on_id":"bd-3le2","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2x2h","depends_on_id":"bd-u7se","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-2xct","title":"Phase 0b: Replace tokio::sync::Mutex with std::sync::Mutex in rate limiter","description":"## What\nIn gitlab/client.rs, replace tokio::sync::Mutex with std::sync::Mutex for the rate limiter lock.\n\n## Why\nThe rate limiter lock guards a tiny synchronous critical section (check Instant::now(), compute delay). No async work inside the lock. std::sync::Mutex is correct here and removes a tokio dependency.\n\n## Current State (gitlab/client.rs ~line 9-10)\n```rust\nuse tokio::sync::Mutex;\n// ...\nlet delay = self.rate_limiter.lock().await.check_delay();\n```\n\n## Implementation\n```rust\nuse std::sync::Mutex;\n// ...\nlet delay = self.rate_limiter.lock().expect(\"rate limiter poisoned\").check_delay();\n```\n\nUse .expect() over .unwrap() for clarity. Poisoning is near-impossible here (trivial Instant::now() check), but the explicit message aids debugging.\n\n## CRITICAL CONSTRAINT (document at lock site)\nstd::sync::Mutex blocks the executor thread while held. This is safe ONLY because the critical section is a single Instant::now() comparison with no I/O. 
If the rate limiter ever grows to include async work (HTTP calls, DB queries), it must move back to an async-aware lock.\n\nAdd a comment at the lock site documenting this constraint.\n\n## Files Changed\n- src/gitlab/client.rs (~5 LOC changed)\n\n## Testing\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n- Existing rate limiter tests should pass unchanged","status":"in_progress","priority":2,"issue_type":"task","created_at":"2026-03-06T18:38:19.345252Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:24:18.991914Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-0"],"dependencies":[{"issue_id":"bd-2xct","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:38:19.349242Z","created_by":"tayloreernisse"}]} +{"id":"bd-2xct","title":"Phase 0b: Replace tokio::sync::Mutex with std::sync::Mutex in rate limiter","description":"## What\nIn gitlab/client.rs, replace tokio::sync::Mutex with std::sync::Mutex for the rate limiter lock.\n\n## Why\nThe rate limiter lock guards a tiny synchronous critical section (check Instant::now(), compute delay). No async work inside the lock. std::sync::Mutex is correct here and removes a tokio dependency.\n\n## Current State (gitlab/client.rs ~line 9-10)\n```rust\nuse tokio::sync::Mutex;\n// ...\nlet delay = self.rate_limiter.lock().await.check_delay();\n```\n\n## Implementation\n```rust\nuse std::sync::Mutex;\n// ...\nlet delay = self.rate_limiter.lock().expect(\"rate limiter poisoned\").check_delay();\n```\n\nUse .expect() over .unwrap() for clarity. Poisoning is near-impossible here (trivial Instant::now() check), but the explicit message aids debugging.\n\n## CRITICAL CONSTRAINT (document at lock site)\nstd::sync::Mutex blocks the executor thread while held. This is safe ONLY because the critical section is a single Instant::now() comparison with no I/O. 
If the rate limiter ever grows to include async work (HTTP calls, DB queries), it must move back to an async-aware lock.\n\nAdd a comment at the lock site documenting this constraint.\n\n## Files Changed\n- src/gitlab/client.rs (~5 LOC changed)\n\n## Testing\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n- Existing rate limiter tests should pass unchanged","status":"closed","priority":2,"issue_type":"task","created_at":"2026-03-06T18:38:19.345252Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.555501Z","closed_at":"2026-03-06T21:11:12.555307Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-0"],"dependencies":[{"issue_id":"bd-2xct","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:38:19.349242Z","created_by":"tayloreernisse"}]} {"id":"bd-2y79","title":"Add work item status via GraphQL enrichment","description":"## Summary\n\nGitLab 18.2+ has native work item status (To do, In progress, Done, Won't do, Duplicate) available ONLY via GraphQL, not REST. 
This enriches synced issues with status information by making supplementary GraphQL calls after REST ingestion.\n\n**Plan document:** plans/work-item-status-graphql.md\n\n## Critical Findings (from API research)\n\n- **EE-only (Premium/Ultimate)** — Free tier won't have the widget at all\n- **GraphQL auth differs from REST** — must use `Authorization: Bearer `, NOT `PRIVATE-TOKEN`\n- **Must use `workItems` resolver, NOT `project.issues`** — legacy issues path doesn't expose status widgets\n- **5 categories:** TRIAGE, TO_DO, IN_PROGRESS, DONE, CANCELED (not 3 as originally assumed)\n- **Max 100 items per GraphQL page** (standard GitLab limit)\n- **Custom statuses possible on 18.5+** — can't assume only system-defined statuses\n\n## Migration\n\nUses migration **021** (001-020 already exist on disk).\nAdds `status_name TEXT` and `status_category TEXT` to `issues` table (both nullable).\n\n## Files\n\n- src/gitlab/graphql.rs (NEW — minimal GraphQL client + status fetcher)\n- src/gitlab/mod.rs (add pub mod graphql)\n- src/gitlab/types.rs (WorkItemStatus, WorkItemStatusCategory enum)\n- src/core/db.rs (migration 021 in MIGRATIONS array)\n- src/core/config.rs (fetch_work_item_status toggle in SyncConfig)\n- src/ingestion/orchestrator.rs (enrichment step after issue sync)\n- src/cli/commands/show.rs (display status)\n- src/cli/commands/list.rs (status in list output + --status filter)\n\n## Acceptance Criteria\n\n- [ ] GraphQL client POSTs queries with Bearer auth and handles errors\n- [ ] Status fetched via workItems resolver with pagination\n- [ ] Migration 021 adds status_name and status_category to issues\n- [ ] lore show issue displays status (when available)\n- [ ] lore --robot show issue includes status in JSON\n- [ ] lore list issues --status filter works\n- [ ] Graceful degradation: Free tier, old GitLab, disabled GraphQL all handled\n- [ ] Config toggle: fetch_work_item_status (default true)\n- [ ] cargo check + clippy + tests 
pass","status":"closed","priority":1,"issue_type":"feature","created_at":"2026-02-05T18:32:39.287957Z","created_by":"tayloreernisse","updated_at":"2026-02-17T15:08:29.499020Z","closed_at":"2026-02-17T15:08:29.498969Z","close_reason":"Already implemented: GraphQL status enrichment shipped in v0.8.x — migration 021, graphql.rs, --status filter, --no-status flag all complete","compaction_level":0,"original_size":0,"labels":["api","phase-b"]} {"id":"bd-2ygk","title":"Implement user flow integration tests (9 PRD flows)","description":"## Background\n\nThe PRD Section 6 defines 9 end-to-end user flows that exercise cross-screen navigation, state preservation, and data flow. The existing vertical slice test (bd-1mju) covers one flow (Dashboard -> Issue List -> Issue Detail -> Sync). These integration tests cover the remaining 8 flows plus re-test the vertical slice from a user-journey perspective. Each test simulates a realistic keystroke sequence using FrankenTUI's test harness and verifies that the correct screens are reached with the correct data visible.\n\n## Approach\n\nCreate a test module `tests/tui_user_flows.rs` with 9 test functions, each simulating a keystroke sequence against a FrankenTUI `TestHarness` with a pre-populated test database. Tests use `FakeClock` for deterministic timestamps.\n\n**Test database fixture**: A shared setup function creates an in-memory SQLite DB with ~20 issues, ~10 MRs, ~30 discussions, a few experts, and timeline events. This fixture is reused across all flow tests.\n\n**Flow tests**:\n\n1. **`test_flow_find_expert`** — Dashboard -> `w` -> type \"src/auth/\" -> verify Expert mode results appear -> `↓` select first person -> `Enter` -> verify navigation to Issue List filtered by that person\n2. **`test_flow_timeline_query`** — Dashboard -> `t` -> type \"auth timeout\" -> `Enter` -> verify Timeline shows seed events -> `Enter` on first event -> verify entity detail opens -> `Esc` -> back on Timeline\n3. 
**`test_flow_quick_search`** — Any screen -> `/` -> type query -> verify results appear -> `Tab` (switch mode) -> verify mode label changes -> `Enter` -> verify entity detail opens\n4. **`test_flow_sync_and_browse`** — Dashboard -> `s` -> `Enter` (start sync) -> wait for completion -> verify Summary shows deltas -> `i` -> verify Issue List filtered to new items\n5. **`test_flow_review_workload`** — Dashboard -> `w` -> `Tab` (Workload mode) -> type \"@bjones\" -> verify workload sections appear (assigned, authored, reviewing)\n6. **`test_flow_command_palette`** — Any screen -> `Ctrl+P` -> type \"mrs draft\" -> verify fuzzy match -> `Enter` -> verify MR List opened with draft filter\n7. **`test_flow_morning_triage`** — Dashboard -> `i` -> verify Issue List (opened, sorted by updated) -> `Enter` on first -> verify Issue Detail -> `Esc` -> verify cursor preserved on same row -> `j` -> verify cursor moved\n8. **`test_flow_direct_screen_jumps`** — Issue Detail -> `gt` -> verify Timeline -> `gw` -> verify Who -> `gi` -> verify Issue List -> `H` -> verify Dashboard (clean reset)\n9. 
**`test_flow_risk_sweep`** — Dashboard -> scroll to Insights -> `Enter` on first insight -> verify pre-filtered Issue List\n\nEach test follows the pattern:\n```rust\n#[test]\nfn test_flow_X() {\n let (harness, app) = setup_test_harness_with_fixture();\n // Send keystrokes\n harness.send_key(Key::Char('w'));\n // Assert screen state\n assert_eq!(app.current_screen(), Screen::Who);\n // Assert visible content\n let frame = harness.render();\n assert!(frame.contains(\"Expert\"));\n}\n```\n\n## Acceptance Criteria\n- [ ] All 9 flow tests exist and compile\n- [ ] Each test uses the shared DB fixture (no per-test DB setup)\n- [ ] Each test verifies screen transitions via `current_screen()` assertions\n- [ ] Each test verifies at least one content assertion (rendered text contains expected data)\n- [ ] test_flow_morning_triage verifies cursor preservation after Enter/Esc round-trip\n- [ ] test_flow_direct_screen_jumps verifies the g-prefix navigation chain\n- [ ] test_flow_sync_and_browse verifies delta-filtered navigation after sync\n- [ ] All tests use FakeClock for deterministic timestamps\n- [ ] Tests complete in <5 seconds each (no real I/O)\n\n## Files\n- CREATE: crates/lore-tui/tests/tui_user_flows.rs\n- MODIFY: (none — this is a new test file only)\n\n## TDD Anchor\nRED: Write `test_flow_morning_triage` first — it exercises the most common daily workflow (Dashboard -> Issue List -> Issue Detail -> back with cursor preservation). 
Start with just the Dashboard -> Issue List transition.\nGREEN: Requires all Phase 2 core screens to be working; the test itself is the GREEN verification.\nVERIFY: cargo test -p lore-tui test_flow_morning_triage\n\nAdditional tests: All 9 flows listed above.\n\n## Edge Cases\n- Flow tests must handle async data loading — use harness.tick() or harness.wait_for_idle() to let async tasks complete before asserting\n- g-prefix timeout (500ms) — tests must send the second key within the timeout; use harness clock control\n- Sync flow test needs a mock sync that completes quickly — use a pre-populated SyncDeltaLedger rather than running actual sync\n\n## Dependency Context\n- Depends on bd-1mju (vertical slice integration test) which establishes the test harness patterns and fixture setup.\n- Depends on bd-2nfs (snapshot test infrastructure) which provides the FakeClock and TestHarness setup.\n- Depends on all Phase 2 core screen beads (bd-35g5 Dashboard, bd-3ei1 Issue List, bd-8ab7 Issue Detail, bd-2kr0 MR List, bd-3t1b MR Detail) being implemented.\n- Depends on Phase 3 power feature beads (bd-1zow Search, bd-29qw Timeline, bd-u7se Who, bd-wzqi Command Palette) being implemented.\n- Depends on bd-2x2h (Sync screen) for the sync+browse flow 
test.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T19:29:41.060826Z","created_by":"tayloreernisse","updated_at":"2026-02-12T19:29:52.743563Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-2ygk","depends_on_id":"bd-1mju","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-1zow","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-29qw","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-2kr0","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-2nfs","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-2x2h","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-35g5","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-3ei1","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-3t1b","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-8ab7","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-u7se","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2ygk","depends_on_id":"bd-wzqi","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-2yo","title":"Fetch MR diffs API and populate mr_file_changes","description":"## Background\n\nThis bead fetches MR diff metadata from the GitLab API and populates the mr_file_changes table created by migration 016. 
It extracts only file-level metadata (paths, change type) and discards actual diff content.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 4.3 (Ingestion).\n\n## Codebase Context\n\n- pending_dependent_fetches already has `job_type='mr_diffs'` in CHECK constraint (migration 011)\n- dependent_queue.rs has: enqueue_job(), claim_jobs(), complete_job(), fail_job() with exponential backoff\n- Orchestrator pattern: enqueue after entity ingestion, drain after primary ingestion completes\n- GitLab client uses fetch_all_pages() for pagination\n- Existing drain patterns in orchestrator.rs: drain_resource_events() and drain_mr_closes_issues() — follow same pattern\n- config.sync.fetch_mr_file_changes flag guards enqueue (see bd-jec)\n- mr_file_changes table created by migration 016 (bd-1oo) — NOT 015 (015 is commit SHAs)\n- merge_commit_sha and squash_commit_sha already captured during MR ingestion (src/ingestion/merge_requests.rs lines 184, 205-206, 230-231) — no work needed for those fields\n\n## Approach\n\n### 1. API Client — add to `src/gitlab/client.rs`:\n\n```rust\npub async fn fetch_mr_diffs(\n &self,\n project_id: i64,\n mr_iid: i64,\n) -> Result<Vec<GitLabMrDiff>> {\n let path = format!(\"/projects/{project_id}/merge_requests/{mr_iid}/diffs\");\n self.fetch_all_pages(&path, &[(\"per_page\", \"100\")]).await\n .or_else(|e| coalesce_not_found(e, Vec::new()))\n}\n```\n\n### 2. Types — add to `src/gitlab/types.rs`:\n\n```rust\n#[derive(Debug, Clone, Deserialize, Serialize)]\npub struct GitLabMrDiff {\n    pub old_path: String,\n    pub new_path: String,\n    pub new_file: bool,\n    pub renamed_file: bool,\n    pub deleted_file: bool,\n    // Ignore: diff, a_mode, b_mode, generated_file (not stored)\n}\n```\n\nAdd `GitLabMrDiff` to `src/gitlab/mod.rs` re-exports.\n\n### 3. 
Change Type Derivation (in new file):\n\n```rust\nfn derive_change_type(diff: &GitLabMrDiff) -> &'static str {\n if diff.new_file { \"added\" }\n else if diff.renamed_file { \"renamed\" }\n else if diff.deleted_file { \"deleted\" }\n else { \"modified\" }\n}\n```\n\n### 4. DB Storage — new `src/ingestion/mr_diffs.rs`:\n\n```rust\npub fn upsert_mr_file_changes(\n conn: &Connection,\n mr_local_id: i64,\n project_id: i64,\n diffs: &[GitLabMrDiff],\n) -> Result {\n // DELETE FROM mr_file_changes WHERE merge_request_id = ?\n // INSERT each diff row with derived change_type\n // DELETE+INSERT is simpler than UPSERT for array replacement\n}\n```\n\nAdd `pub mod mr_diffs;` to `src/ingestion/mod.rs`.\n\n### 5. Queue Integration — in orchestrator.rs:\n\n```rust\n// After MR upsert, if config.sync.fetch_mr_file_changes:\nenqueue_job(conn, project_id, \"merge_request\", mr_iid, mr_local_id, \"mr_diffs\")?;\n```\n\nAdd `drain_mr_diffs()` following the drain_mr_closes_issues() pattern. Call it after drain_mr_closes_issues() in the sync pipeline.\n\n## Acceptance Criteria\n\n- [ ] `fetch_mr_diffs()` calls GET /projects/:id/merge_requests/:iid/diffs with pagination\n- [ ] GitLabMrDiff type added to src/gitlab/types.rs and re-exported from src/gitlab/mod.rs\n- [ ] Change type derived: new_file->added, renamed_file->renamed, deleted_file->deleted, else->modified\n- [ ] mr_file_changes rows have correct old_path, new_path, change_type\n- [ ] Old rows deleted before insert (clean replacement per MR)\n- [ ] Jobs only enqueued when config.sync.fetch_mr_file_changes is true\n- [ ] 404/403 API errors handled gracefully (empty result, not failure)\n- [ ] drain_mr_diffs() added to orchestrator.rs sync pipeline\n- [ ] `pub mod mr_diffs;` added to src/ingestion/mod.rs\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/gitlab/client.rs` (add fetch_mr_diffs method)\n- `src/gitlab/types.rs` (add GitLabMrDiff struct)\n- 
`src/gitlab/mod.rs` (re-export GitLabMrDiff)\n- `src/ingestion/mr_diffs.rs` (NEW — upsert_mr_file_changes + derive_change_type)\n- `src/ingestion/mod.rs` (add pub mod mr_diffs)\n- `src/ingestion/orchestrator.rs` (enqueue mr_diffs jobs + drain_mr_diffs)\n\n## TDD Loop\n\nRED:\n- `test_derive_change_type_added` - new_file=true -> \"added\"\n- `test_derive_change_type_renamed` - renamed_file=true -> \"renamed\"\n- `test_derive_change_type_deleted` - deleted_file=true -> \"deleted\"\n- `test_derive_change_type_modified` - all false -> \"modified\"\n- `test_upsert_replaces_existing` - second upsert replaces first\n\nGREEN: Implement API client, type derivation, DB ops, orchestrator wiring.\n\nVERIFY: `cargo test --lib -- mr_diffs`\n\n## Edge Cases\n\n- MR with 500+ files: paginate properly via fetch_all_pages\n- Binary files: handled as modified (renamed_file/new_file/deleted_file all false)\n- File renamed AND modified: renamed_file=true takes precedence\n- Draft MRs: still fetch diffs\n- Deleted MR: 404 -> empty vec via coalesce_not_found()\n- merge_commit_sha/squash_commit_sha: already handled in merge_requests.rs ingestion — NOT part of this bead\n","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:34:08.939514Z","created_by":"tayloreernisse","updated_at":"2026-02-08T18:27:05.993580Z","closed_at":"2026-02-08T18:27:05.993482Z","close_reason":"Implemented: GitLabMrDiff type, fetch_mr_diffs client method, upsert_mr_file_changes in new mr_diffs.rs module, enqueue_mr_diffs_jobs + drain_mr_diffs in orchestrator, migration 020 for diffs_synced_for_updated_at watermark, progress events, autocorrect registry. 
All 390 tests pass, clippy clean.","compaction_level":0,"original_size":0,"labels":["api","gate-4","phase-b"],"dependencies":[{"issue_id":"bd-2yo","depends_on_id":"bd-14q","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2yo","depends_on_id":"bd-1oo","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2yo","depends_on_id":"bd-jec","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-2yo","depends_on_id":"bd-tir","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -228,7 +228,7 @@ {"id":"bd-38lb","title":"Implement CommandRegistry (keybindings, help, palette)","description":"## Background\nCommandRegistry is the single source of truth for all actions, keybindings, CLI equivalents, palette entries, help text, and status hints. All keybinding/help/status/palette definitions are generated from this registry — no hardcoded duplicate maps in view/state modules.\n\n## Approach\nCreate crates/lore-tui/src/commands.rs:\n- CommandDef struct: id (String), label (String), keybinding (Option), cli_equivalent (Option), help_text (String), status_hint (String), available_in (Vec or ScreenFilter)\n- CommandRegistry struct: commands (Vec), by_key (HashMap>), by_screen (HashMap>)\n- build_registry() -> CommandRegistry: registers all commands with their keybindings\n- lookup_key(key: &KeyEvent, screen: &Screen, mode: &InputMode) -> Option<&CommandDef>\n- palette_entries(screen: &Screen) -> Vec<&CommandDef>: returns commands available for palette\n- help_entries(screen: &Screen) -> Vec<&CommandDef>: returns commands for help overlay\n- status_hints(screen: &Screen) -> Vec<&str>: returns hints for status bar\n\n## Acceptance Criteria\n- [ ] CommandRegistry is the sole source of keybinding definitions\n- [ ] lookup_key respects InputMode (no keybinding leaks through Text mode)\n- [ ] palette_entries returns commands sorted by label\n- [ 
] help_entries returns all commands available on a given screen\n- [ ] status_hints returns context-appropriate hints\n- [ ] cli_equivalent populated for commands that have a lore CLI counterpart\n\n## Files\n- CREATE: crates/lore-tui/src/commands.rs\n\n## TDD Anchor\nRED: Write test_registry_lookup_quit that builds registry, looks up 'q' in Normal mode on Dashboard, asserts it maps to Quit command.\nGREEN: Implement build_registry with quit command registered.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml test_registry\n\n## Edge Cases\n- g-prefix keybindings (gi, gm, g/, gt, gw, gs) require two-key sequences — registry must support this\n- Command availability varies by screen — lookup must check available_in filter\n- InputMode::Text should block all normal keybindings except Esc and Ctrl+P","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T16:56:57.098613Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:25.817115Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-38lb","depends_on_id":"bd-2tr4","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-38lb","depends_on_id":"bd-c9gk","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-38q","title":"Implement dirty source tracking module","description":"## Background\nDirty source tracking drives incremental document regeneration. When entities are upserted during ingestion, they're marked dirty. The regenerator drains this queue. The key constraint: mark_dirty_tx() takes &Transaction to enforce atomic marking inside the entity upsert transaction. Uses ON CONFLICT DO UPDATE (not INSERT OR IGNORE) to reset backoff on re-queue.\n\n## Approach\nCreate \\`src/ingestion/dirty_tracker.rs\\` per PRD Section 6.1.\n\n```rust\nconst DIRTY_SOURCES_BATCH_SIZE: usize = 500;\n\n/// Mark dirty INSIDE existing transaction. 
Takes &Transaction, NOT &Connection.\n/// ON CONFLICT resets ALL backoff/error state (not INSERT OR IGNORE).\n/// This ensures fresh updates are immediately eligible, not stuck behind stale backoff.\npub fn mark_dirty_tx(\n tx: &rusqlite::Transaction<'_>,\n source_type: SourceType,\n source_id: i64,\n) -> Result<()>;\n\n/// Convenience wrapper for non-transactional contexts.\npub fn mark_dirty(conn: &Connection, source_type: SourceType, source_id: i64) -> Result<()>;\n\n/// Get dirty sources ready for processing.\n/// WHERE next_attempt_at IS NULL OR next_attempt_at <= now\n/// ORDER BY attempt_count ASC, queued_at ASC (failed items deprioritized)\n/// LIMIT 500\npub fn get_dirty_sources(conn: &Connection) -> Result<Vec<(SourceType, i64)>>;\n\n/// Clear dirty entry after successful processing.\npub fn clear_dirty(conn: &Connection, source_type: SourceType, source_id: i64) -> Result<()>;\n```\n\n**PRD-specific details:**\n- get_dirty_sources ORDER BY: \\`attempt_count ASC, queued_at ASC\\` (failed items processed AFTER fresh items)\n- mark_dirty_tx ON CONFLICT resets: queued_at, attempt_count=0, last_attempt_at=NULL, last_error=NULL, next_attempt_at=NULL\n- SourceType parsed from string in query results via match on \\\"issue\\\"/\\\"merge_request\\\"/\\\"discussion\\\"\n- Invalid source_type in DB -> rusqlite::Error::FromSqlConversionFailure\n\n**Error recording is in regenerator.rs (bd-1u1)**, not dirty_tracker. 
The dirty_tracker only marks, gets, and clears.\n\n## Acceptance Criteria\n- [ ] mark_dirty_tx takes &Transaction<'_>, NOT &Connection\n- [ ] ON CONFLICT DO UPDATE resets: attempt_count=0, next_attempt_at=NULL, last_error=NULL, last_attempt_at=NULL\n- [ ] Uses ON CONFLICT DO UPDATE, NOT INSERT OR IGNORE (PRD explains why)\n- [ ] get_dirty_sources WHERE next_attempt_at IS NULL OR <= now\n- [ ] get_dirty_sources ORDER BY attempt_count ASC, queued_at ASC\n- [ ] get_dirty_sources LIMIT 500\n- [ ] get_dirty_sources returns Vec<(SourceType, i64)>\n- [ ] clear_dirty DELETEs entry\n- [ ] Queue drains completely when called in loop\n- [ ] \\`cargo test dirty_tracker\\` passes\n\n## Files\n- \\`src/ingestion/dirty_tracker.rs\\` — new file\n- \\`src/ingestion/mod.rs\\` — add \\`pub mod dirty_tracker;\\`\n\n## TDD Loop\nRED: Tests:\n- \\`test_mark_dirty_tx_inserts\\` — entry appears in dirty_sources\n- \\`test_requeue_resets_backoff\\` — mark, simulate error state, re-mark -> attempt_count=0, next_attempt_at=NULL\n- \\`test_get_respects_backoff\\` — entry with future next_attempt_at not returned\n- \\`test_get_orders_by_attempt_count\\` — fresh items before failed items\n- \\`test_batch_size_500\\` — insert 600, get returns 500\n- \\`test_clear_removes\\` — entry gone after clear\n- \\`test_drain_loop\\` — insert 1200, loop 3 times = empty\nGREEN: Implement all functions\nVERIFY: \\`cargo test dirty_tracker\\`\n\n## Edge Cases\n- Empty queue: get returns empty Vec\n- Invalid source_type string in DB: FromSqlConversionFailure error\n- Concurrent mark + get: ON CONFLICT handles race condition","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:27:09.434845Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:31:35.455315Z","closed_at":"2026-01-30T17:31:35.455127Z","close_reason":"Implemented dirty_tracker with mark_dirty_tx, get_dirty_sources, clear_dirty, record_dirty_error + 8 
tests","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-38q","depends_on_id":"bd-36p","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-38q","depends_on_id":"bd-hrs","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-38q","depends_on_id":"bd-mem","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-39w","title":"[CP1] Test fixtures for mocked GitLab responses","description":"## Background\n\nTest fixtures provide mocked GitLab API responses for unit and integration tests. They enable testing without a live GitLab instance and ensure consistent test data across runs.\n\n## Approach\n\n### Fixture Files\n\nCreate JSON fixtures that match GitLab API response shapes:\n\n```\ntests/fixtures/\n├── gitlab_issue.json # Single issue\n├── gitlab_issues_page.json # Array of issues (pagination test)\n├── gitlab_discussion.json # Single discussion with notes\n└── gitlab_discussions_page.json # Array of discussions\n```\n\n### gitlab_issue.json\n\n```json\n{\n \"id\": 12345,\n \"iid\": 42,\n \"project_id\": 100,\n \"title\": \"Test issue title\",\n \"description\": \"Test issue description\",\n \"state\": \"opened\",\n \"created_at\": \"2024-01-15T10:00:00.000Z\",\n \"updated_at\": \"2024-01-20T15:30:00.000Z\",\n \"closed_at\": null,\n \"author\": {\n \"id\": 1,\n \"username\": \"testuser\",\n \"name\": \"Test User\"\n },\n \"labels\": [\"bug\", \"priority::high\"],\n \"web_url\": \"https://gitlab.example.com/group/project/-/issues/42\"\n}\n```\n\n### gitlab_discussion.json\n\n```json\n{\n \"id\": \"6a9c1750b37d513a43987b574953fceb50b03ce7\",\n \"individual_note\": false,\n \"notes\": [\n {\n \"id\": 1001,\n \"type\": \"DiscussionNote\",\n \"body\": \"First comment in thread\",\n \"author\": { \"id\": 1, \"username\": \"testuser\", \"name\": \"Test User\" },\n \"created_at\": \"2024-01-16T09:00:00.000Z\",\n \"updated_at\": 
\"2024-01-16T09:00:00.000Z\",\n \"system\": false,\n \"resolvable\": true,\n \"resolved\": false,\n \"resolved_by\": null,\n \"resolved_at\": null,\n \"position\": null\n },\n {\n \"id\": 1002,\n \"type\": \"DiscussionNote\",\n \"body\": \"Reply to first comment\",\n \"author\": { \"id\": 2, \"username\": \"reviewer\", \"name\": \"Reviewer\" },\n \"created_at\": \"2024-01-16T10:00:00.000Z\",\n \"updated_at\": \"2024-01-16T10:00:00.000Z\",\n \"system\": false,\n \"resolvable\": true,\n \"resolved\": false,\n \"resolved_by\": null,\n \"resolved_at\": null,\n \"position\": null\n }\n ]\n}\n```\n\n### Helper Module\n\n```rust\n// tests/fixtures/mod.rs\n\npub fn load_fixture(name: &str) -> T {\n let path = PathBuf::from(env!(\"CARGO_MANIFEST_DIR\"))\n .join(\"tests/fixtures\")\n .join(name);\n let content = std::fs::read_to_string(&path)\n .expect(&format!(\"Failed to read fixture: {}\", name));\n serde_json::from_str(&content)\n .expect(&format!(\"Failed to parse fixture: {}\", name))\n}\n\npub fn gitlab_issue() -> GitLabIssue {\n load_fixture(\"gitlab_issue.json\")\n}\n\npub fn gitlab_issues_page() -> Vec {\n load_fixture(\"gitlab_issues_page.json\")\n}\n\npub fn gitlab_discussion() -> GitLabDiscussion {\n load_fixture(\"gitlab_discussion.json\")\n}\n```\n\n## Acceptance Criteria\n\n- [ ] gitlab_issue.json deserializes to GitLabIssue correctly\n- [ ] gitlab_issues_page.json contains 3+ issues for pagination tests\n- [ ] gitlab_discussion.json contains multi-note thread\n- [ ] gitlab_discussions_page.json contains mix of individual_note true/false\n- [ ] At least one fixture includes system: true note\n- [ ] Helper functions load fixtures without panic\n\n## Files\n\n- tests/fixtures/gitlab_issue.json (create)\n- tests/fixtures/gitlab_issues_page.json (create)\n- tests/fixtures/gitlab_discussion.json (create)\n- tests/fixtures/gitlab_discussions_page.json (create)\n- tests/fixtures/mod.rs (create)\n\n## TDD Loop\n\nRED:\n```rust\n#[test] fn 
fixture_gitlab_issue_deserializes()\n#[test] fn fixture_gitlab_discussion_deserializes()\n#[test] fn fixture_has_system_note()\n```\n\nGREEN: Create JSON fixtures and helper module\n\nVERIFY: `cargo test fixture`\n\n## Edge Cases\n\n- Include issue with empty labels array\n- Include issue with null description\n- Include system note (system: true)\n- Include individual_note: true discussion (standalone comment)\n- Timestamps must be valid ISO 8601","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-25T17:02:38.433752Z","created_by":"tayloreernisse","updated_at":"2026-01-25T22:48:08.415195Z","closed_at":"2026-01-25T22:48:08.415132Z","close_reason":"Created 4 JSON fixture files (issue, issues_page, discussion, discussions_page) with helper tests - 6 tests passing","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-39w","depends_on_id":"bd-1np","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-39yp","title":"Phase 3a: Cargo.toml + rust-toolchain.toml dependency swap","description":"## What\nSwap runtime and HTTP client dependencies in Cargo.toml. 
Finalize rust-toolchain.toml.\n\n## Cargo.toml Changes\n\n### Remove from [dependencies]:\n```toml\nreqwest = { version = \"0.12\", features = [\"json\"] }\ntokio = { version = \"1\", features = [\"rt-multi-thread\", \"macros\", \"time\", \"signal\"] }\n```\n\n### Add to [dependencies]:\n```toml\nasupersync = { version = \"0.2\", features = [\"tls\", \"tls-native-roots\"] }\n```\n\n### Keep unchanged:\n```toml\nasync-stream = \"0.3\"\nfutures = { version = \"0.3\", default-features = false, features = [\"alloc\"] }\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\nurlencoding = \"2\"\n```\n\n### Add/update [dev-dependencies]:\n```toml\ntokio = { version = \"1\", features = [\"rt\", \"macros\"] } # Only for wiremock tests\n# Keep: tempfile, wiremock\n```\n\n## rust-toolchain.toml\n```toml\n[toolchain]\nchannel = \"nightly-2026-03-01\" # Pin specific date\n```\nNever use bare \"nightly\" in production. Update date as needed when newer nightlies are verified.\n\n## Build time consideration\nMeasure build time before/after. asupersync may be heavier than tokio. 
Document in bead comments.\n\n## Files Changed\n- Cargo.toml (~10 LOC)\n- rust-toolchain.toml (finalize from decision gate)\n- Cargo.lock (auto-updated)\n\n## Testing\n- cargo check --all-targets (will fail until Phases 3b-3f complete — this is expected)\n- This bead is part of the atomic Phases 1-3 commit\n\n## Depends On\n- Decision Gate (nightly verified)\n- Phase 1 (adapter uses asupersync::http)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:40:18.579026Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:42:51.439458Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-39yp","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:18.582221Z","created_by":"tayloreernisse"},{"issue_id":"bd-39yp","depends_on_id":"bd-1lli","type":"blocks","created_at":"2026-03-06T18:42:51.236638Z","created_by":"tayloreernisse"},{"issue_id":"bd-39yp","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:51.439438Z","created_by":"tayloreernisse"}]} +{"id":"bd-39yp","title":"Phase 3a: Cargo.toml + rust-toolchain.toml dependency swap","description":"## What\nSwap runtime and HTTP client dependencies in Cargo.toml. 
Finalize rust-toolchain.toml.\n\n## Cargo.toml Changes\n\n### Remove from [dependencies]:\n```toml\nreqwest = { version = \"0.12\", features = [\"json\"] }\ntokio = { version = \"1\", features = [\"rt-multi-thread\", \"macros\", \"time\", \"signal\"] }\n```\n\n### Add to [dependencies]:\n```toml\nasupersync = { version = \"0.2\", features = [\"tls\", \"tls-native-roots\"] }\n```\n\n### Keep unchanged:\n```toml\nasync-stream = \"0.3\"\nfutures = { version = \"0.3\", default-features = false, features = [\"alloc\"] }\nserde = { version = \"1\", features = [\"derive\"] }\nserde_json = \"1\"\nurlencoding = \"2\"\n```\n\n### Add/update [dev-dependencies]:\n```toml\ntokio = { version = \"1\", features = [\"rt\", \"macros\"] } # Only for wiremock tests\n# Keep: tempfile, wiremock\n```\n\n## rust-toolchain.toml\n```toml\n[toolchain]\nchannel = \"nightly-2026-03-01\" # Pin specific date\n```\nNever use bare \"nightly\" in production. Update date as needed when newer nightlies are verified.\n\n## Build time consideration\nMeasure build time before/after. asupersync may be heavier than tokio. 
Document in bead comments.\n\n## Files Changed\n- Cargo.toml (~10 LOC)\n- rust-toolchain.toml (finalize from decision gate)\n- Cargo.lock (auto-updated)\n\n## Testing\n- cargo check --all-targets (will fail until Phases 3b-3f complete — this is expected)\n- This bead is part of the atomic Phases 1-3 commit\n\n## Depends On\n- Decision Gate (nightly verified)\n- Phase 1 (adapter uses asupersync::http)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:40:18.579026Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.604192Z","closed_at":"2026-03-06T21:11:12.601697Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-39yp","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:18.582221Z","created_by":"tayloreernisse"},{"issue_id":"bd-39yp","depends_on_id":"bd-1lli","type":"blocks","created_at":"2026-03-06T18:42:51.236638Z","created_by":"tayloreernisse"},{"issue_id":"bd-39yp","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:51.439438Z","created_by":"tayloreernisse"}]} {"id":"bd-3a4k","title":"CLI: list issues status column, filter, and robot fields","description":"## Background\nList issues needs a Status column in the table, status fields in robot JSON, and a --status filter for querying by work item status name. The filter supports multiple values (OR semantics) and case-insensitive matching.\n\n## Approach\nExtend list.rs row types, SQL, table rendering. Add --status Vec to clap args. Build dynamic WHERE clause with COLLATE NOCASE. Wire into both ListFilters constructions in main.rs. 
Register in autocorrect.\n\n## Files\n- src/cli/commands/list.rs (row types, SQL, table, filter, color helper)\n- src/cli/mod.rs (--status flag on IssuesArgs)\n- src/main.rs (wire statuses into both ListFilters)\n- src/cli/autocorrect.rs (add --status to COMMAND_FLAGS)\n\n## Implementation\n\nIssueListRow + IssueListRowJson: add 5 status fields (all Option<String>)\nFrom<&IssueListRow> for IssueListRowJson: clone all 5 fields\n\nquery_issues SELECT: add i.status_name, i.status_category, i.status_color, i.status_icon_name, i.status_synced_at after existing columns\n Existing SELECT has 12 columns (indices 0-11). New columns: indices 12-16.\n Row mapping: status_name: row.get(12)?, ..., status_synced_at: row.get(16)?\n\nListFilters: add pub statuses: &'a [String]\n\nWHERE clause builder (after has_due_date block):\n if statuses.len() == 1: \"i.status_name = ? COLLATE NOCASE\" + push param\n if statuses.len() > 1: \"i.status_name IN (?, ?, ...) COLLATE NOCASE\" + push all params\n\nTable: add \"Status\" column header (bold) between State and Assignee\n Row: match &issue.status_name -> Some: colored_cell_hex(status, color), None: Cell::new(\"\")\n\nNew helper:\n fn colored_cell_hex(content, hex: Option<&str>) -> Cell\n If no hex or colors disabled: Cell::new(content)\n Parse 6-char hex, use Cell::new(content).fg(Color::Rgb { r, g, b })\n\nIn src/cli/mod.rs IssuesArgs:\n #[arg(long, help_heading = \"Filters\")]\n pub status: Vec<String>,\n\nIn src/main.rs handle_issues (~line 695):\n ListFilters { ..., statuses: &args.status }\nIn legacy List handler (~line 2421):\n ListFilters { ..., statuses: &[] }\n\nIn src/cli/autocorrect.rs COMMAND_FLAGS \"issues\" entry:\n Add \"--status\" between existing flags\n\n## Acceptance Criteria\n- [ ] Status column appears in table between State and Assignee\n- [ ] NULL status -> empty cell\n- [ ] Status colored by hex in human mode\n- [ ] --status \"In progress\" filters correctly\n- [ ] --status \"in progress\" matches \"In progress\" (COLLATE 
NOCASE)\n- [ ] --status \"To do\" --status \"In progress\" -> OR semantics (both returned)\n- [ ] Robot: status_name, status_category in each issue JSON\n- [ ] --fields supports status_name, status_category, status_color, status_icon_name, status_synced_at\n- [ ] --fields minimal does NOT include status fields\n- [ ] Autocorrect registry test passes (--status registered)\n- [ ] cargo check --all-targets passes\n\n## TDD Loop\nRED: test_list_filter_by_status, test_list_filter_by_status_case_insensitive, test_list_filter_by_multiple_statuses\nGREEN: Implement all changes across 4 files\nVERIFY: cargo test list_filter && cargo test registry_covers\n\n## Edge Cases\n- COLLATE NOCASE is ASCII-only but sufficient (all system statuses are ASCII)\n- Single-value uses = for simplicity; multi-value uses IN with dynamic placeholders\n- --status combined with other filters (--state, --label) -> AND logic\n- autocorrect registry_covers_command_flags test will FAIL if --status not registered\n- Legacy List command path also constructs ListFilters — needs statuses: &[]\n- Column index offset: new columns start at 12 (0-indexed)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-11T06:42:26.438Z","created_by":"tayloreernisse","updated_at":"2026-02-11T07:21:33.421297Z","closed_at":"2026-02-11T07:21:33.421247Z","close_reason":"Implemented by agent swarm — all quality gates pass (595 tests, 0 failures)","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3a4k","depends_on_id":"bd-2y79","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3a4k","depends_on_id":"bd-3dum","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-3ae","title":"Epic: CP2 Gate A - MRs Only","description":"## Background\nGate A validates core MR ingestion works before adding complexity. Proves the cursor-based sync, pagination, and basic CLI work. 
This is the foundation - if Gate A fails, nothing else matters.\n\n## Acceptance Criteria (Pass/Fail)\n- [ ] `gi ingest --type=merge_requests` completes without error\n- [ ] `SELECT COUNT(*) FROM merge_requests` > 0\n- [ ] `gi list mrs --limit=5` shows 5 MRs with iid, title, state, author\n- [ ] `gi count mrs` shows total count matching DB query\n- [ ] MR with `state=locked` can be stored (if exists in test data)\n- [ ] Draft MR shows `draft=1` in DB and `[DRAFT]` in list output\n- [ ] `work_in_progress=true` MR shows `draft=1` (fallback works)\n- [ ] `head_sha` populated for MRs with commits\n- [ ] `references_short` and `references_full` populated\n- [ ] Re-run ingest shows \"0 new MRs\" or minimal refetch (cursor working)\n- [ ] Cursor saved at page boundary, not item boundary\n\n## Validation Script\n```bash\n#!/bin/bash\nset -e\n\nDB_PATH=\"${XDG_DATA_HOME:-$HOME/.local/share}/gitlab-inbox/db.sqlite3\"\n\necho \"=== Gate A: MRs Only ===\"\n\n# 1. Clear any existing MR data for clean test\necho \"Step 1: Reset MR cursor for clean test...\"\nsqlite3 \"$DB_PATH\" \"DELETE FROM sync_cursors WHERE resource_type = 'merge_requests';\"\n\n# 2. Run MR ingestion\necho \"Step 2: Ingest MRs...\"\ngi ingest --type=merge_requests\n\n# 3. Verify MRs exist\necho \"Step 3: Verify MR count...\"\nMR_COUNT=$(sqlite3 \"$DB_PATH\" \"SELECT COUNT(*) FROM merge_requests;\")\necho \" MR count: $MR_COUNT\"\n[ \"$MR_COUNT\" -gt 0 ] || { echo \"FAIL: No MRs ingested\"; exit 1; }\n\n# 4. Verify list command\necho \"Step 4: Test list command...\"\ngi list mrs --limit=5\n\n# 5. Verify count command\necho \"Step 5: Test count command...\"\ngi count mrs\n\n# 6. Verify draft handling\necho \"Step 6: Check draft MRs...\"\nDRAFT_COUNT=$(sqlite3 \"$DB_PATH\" \"SELECT COUNT(*) FROM merge_requests WHERE draft = 1;\")\necho \" Draft MR count: $DRAFT_COUNT\"\n\n# 7. 
Verify head_sha population\necho \"Step 7: Check head_sha...\"\nSHA_COUNT=$(sqlite3 \"$DB_PATH\" \"SELECT COUNT(*) FROM merge_requests WHERE head_sha IS NOT NULL;\")\necho \" MRs with head_sha: $SHA_COUNT\"\n\n# 8. Verify references\necho \"Step 8: Check references...\"\nREF_COUNT=$(sqlite3 \"$DB_PATH\" \"SELECT COUNT(*) FROM merge_requests WHERE references_short IS NOT NULL;\")\necho \" MRs with references: $REF_COUNT\"\n\n# 9. Verify cursor saved\necho \"Step 9: Check cursor...\"\nCURSOR=$(sqlite3 \"$DB_PATH\" \"SELECT updated_at, gitlab_id FROM sync_cursors WHERE resource_type = 'merge_requests';\")\necho \" Cursor: $CURSOR\"\n[ -n \"$CURSOR\" ] || { echo \"FAIL: Cursor not saved\"; exit 1; }\n\n# 10. Re-run and verify minimal refetch\necho \"Step 10: Re-run ingest (should be minimal)...\"\ngi ingest --type=merge_requests\n# Output should show minimal or zero new MRs\n\necho \"\"\necho \"=== Gate A: PASSED ===\"\n```\n\n## Test Commands (Quick Verification)\n```bash\n# Run these in order:\ngi ingest --type=merge_requests\ngi list mrs --limit=10\ngi count mrs\n\n# Verify in DB:\nsqlite3 ~/.local/share/gitlab-inbox/db.sqlite3 \"\n SELECT \n COUNT(*) as total,\n SUM(CASE WHEN draft = 1 THEN 1 ELSE 0 END) as drafts,\n SUM(CASE WHEN head_sha IS NOT NULL THEN 1 ELSE 0 END) as with_sha,\n SUM(CASE WHEN references_short IS NOT NULL THEN 1 ELSE 0 END) as with_refs\n FROM merge_requests;\n\"\n\n# Re-run (should be no-op):\ngi ingest --type=merge_requests\n```\n\n## Dependencies\nThis gate requires these beads to be complete:\n- bd-3ir (Database migration)\n- bd-5ta (GitLab MR types)\n- bd-34o (MR transformer)\n- bd-iba (GitLab client pagination)\n- bd-ser (MR ingestion module)\n\n## Edge Cases\n- `locked` state is transitional (merge in progress); may not exist in test data\n- Some older GitLab instances may not return `head_sha` for all MRs\n- `work_in_progress` is deprecated but should still work as fallback\n- Very large projects (10k+ MRs) may take significant time on 
first sync","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-26T22:06:00.966522Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:48:21.057298Z","closed_at":"2026-01-27T00:48:21.057225Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3ae","depends_on_id":"bd-iba","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3ae","depends_on_id":"bd-ser","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-3as","title":"Implement timeline event collection and chronological interleaving","description":"## Background\n\nThe event collection phase is steps 4-5 of the timeline pipeline (spec Section 3.2). It takes seed + expanded entity sets and collects all their events from resource event tables, then interleaves chronologically.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 3.2 steps 4-5, Section 3.3 (Event Model).\n\n## Codebase Context\n\n- resource_state_events: columns include state, actor_username (not actor_gitlab_id for display), created_at, issue_id, merge_request_id, source_merge_request_iid, source_commit\n- resource_label_events: columns include action ('add'|'remove'), label_name (NULLABLE since migration 012), actor_username, created_at\n- resource_milestone_events: columns include action ('add'|'remove'), milestone_title (NULLABLE since migration 012), actor_username, created_at\n- issues table: created_at, author_username, title, web_url\n- merge_requests table: created_at, author_username, title, web_url, merged_at, updated_at\n- All timestamps are ms epoch UTC (stored as INTEGER)\n\n## Approach\n\nCreate `src/core/timeline_collect.rs`:\n\n```rust\nuse rusqlite::Connection;\nuse crate::core::timeline::{TimelineEvent, TimelineEventType, EntityRef, ExpandedEntityRef};\n\npub fn collect_events(\n conn: &Connection,\n seed_entities: &[EntityRef],\n expanded_entities: &[ExpandedEntityRef],\n 
evidence_notes: &[TimelineEvent], // from seed phase\n since_ms: Option<i64>, // --since filter\n limit: usize, // -n flag (default 100)\n) -> Result<Vec<TimelineEvent>> { ... }\n```\n\n### Event Collection Per Entity\n\nFor each entity (seed + expanded), collect:\n\n1. **Creation event** (`Created`):\n ```sql\n -- Issues:\n SELECT created_at, author_username, title, web_url FROM issues WHERE id = ?1\n -- MRs:\n SELECT created_at, author_username, title, web_url FROM merge_requests WHERE id = ?1\n ```\n\n2. **State changes** (`StateChanged { state }`):\n ```sql\n SELECT state, actor_username, created_at FROM resource_state_events\n WHERE (issue_id = ?1 OR merge_request_id = ?1)\n AND (?2 IS NULL OR created_at >= ?2) -- since filter\n ORDER BY created_at ASC\n ```\n NOTE: For MRs, a state='merged' event also produces a separate Merged variant.\n\n3. **Label changes** (`LabelAdded`/`LabelRemoved`):\n ```sql\n SELECT action, label_name, actor_username, created_at FROM resource_label_events\n WHERE (issue_id = ?1 OR merge_request_id = ?1)\n AND (?2 IS NULL OR created_at >= ?2)\n ORDER BY created_at ASC\n ```\n Handle NULL label_name (deleted label): use \"[deleted label]\" as fallback.\n\n4. **Milestone changes** (`MilestoneSet`/`MilestoneRemoved`):\n ```sql\n SELECT action, milestone_title, actor_username, created_at FROM resource_milestone_events\n WHERE (issue_id = ?1 OR merge_request_id = ?1)\n AND (?2 IS NULL OR created_at >= ?2)\n ORDER BY created_at ASC\n ```\n Handle NULL milestone_title: use \"[deleted milestone]\" as fallback.\n\n5. **Merge event** (Merged, MR only):\n Derive from merge_requests.merged_at (preferred) OR resource_state_events WHERE state='merged'. 
Skip StateChanged when state='merged' — emit only the Merged variant.\n\n### Chronological Interleave\n\n```rust\nevents.sort(); // Uses Ord impl from bd-20e\nif let Some(since) = since_ms {\n events.retain(|e| e.timestamp >= since);\n}\nevents.truncate(limit);\n```\n\nRegister in `src/core/mod.rs`: `pub mod timeline_collect;`\n\n## Acceptance Criteria\n\n- [ ] Collects Created, StateChanged, LabelAdded/Removed, MilestoneSet/Removed, Merged, NoteEvidence events\n- [ ] Merged events deduplicated from StateChanged{merged} — emit only Merged variant\n- [ ] NULL label_name/milestone_title handled with fallback text\n- [ ] --since filter applied to all event types\n- [ ] Events sorted chronologically with stable tiebreak\n- [ ] Limit applied AFTER sorting\n- [ ] Evidence notes from seed phase included\n- [ ] is_seed correctly set based on entity source\n- [ ] Module registered in src/core/mod.rs\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/core/timeline_collect.rs` (NEW)\n- `src/core/mod.rs` (add `pub mod timeline_collect;`)\n\n## TDD Loop\n\nRED:\n- `test_collect_creation_event` - entity produces Created event\n- `test_collect_state_events` - state changes produce StateChanged events\n- `test_collect_merged_dedup` - state='merged' produces Merged not StateChanged\n- `test_collect_null_label_fallback` - NULL label_name uses fallback text\n- `test_collect_since_filter` - old events excluded\n- `test_collect_chronological_sort` - mixed entity events interleave correctly\n- `test_collect_respects_limit`\n\nTests need in-memory DB with migrations 001-014 applied.\n\nGREEN: Implement SQL queries and event assembly.\n\nVERIFY: `cargo test --lib -- timeline_collect`\n\n## Edge Cases\n\n- MR with merged_at=NULL and no state='merged' event: no Merged event emitted\n- Entity with 0 events in resource tables: only Created event returned\n- NULL actor_username: actor field is None\n- Timestamps at exact 
--since boundary: use >= (inclusive)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:33:08.703942Z","created_by":"tayloreernisse","updated_at":"2026-02-05T21:53:01.160429Z","closed_at":"2026-02-05T21:53:01.160380Z","close_reason":"Completed: Created src/core/timeline_collect.rs with event collection for Created, StateChanged, LabelAdded/Removed, MilestoneSet/Removed, Merged, NoteEvidence. Merged dedup (state=merged skipped in favor of Merged variant). NULL label/milestone fallbacks. Since filter, chronological sort, limit. 10 tests pass.","compaction_level":0,"original_size":0,"labels":["gate-3","phase-b","query"],"dependencies":[{"issue_id":"bd-3as","depends_on_id":"bd-1ep","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3as","depends_on_id":"bd-ike","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3as","depends_on_id":"bd-ypa","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -304,7 +304,7 @@ {"id":"bd-b51e","title":"WHO: Overlap mode query (query_overlap)","description":"## Background\n\nOverlap mode answers \"Who else has MRs/notes touching my files?\" — helps identify potential reviewers, collaborators, or conflicting work at a path. 
Tracks author and reviewer roles separately for richer signal.\n\n## Approach\n\n### SQL: two static variants (prefix/exact) with reviewer + author UNION ALL\n\nBoth branches return: username, role, touch_count (COUNT DISTINCT m.id), last_seen_at, mr_refs (GROUP_CONCAT of project-qualified refs).\n\nKey differences from Expert:\n- No scoring formula — just touch_count ranking\n- mr_refs collected for actionable output (group/project!iid format)\n- Rust-side merge needed (can't fully aggregate in SQL due to HashSet dedup of mr_refs across branches)\n\n### Reviewer branch includes:\n- Self-review exclusion: `n.author_username != m.author_username`\n- MR state filter: `m.state IN ('opened','merged')`\n- Project-qualified refs: `GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid))`\n\n### Rust accumulator pattern:\n```rust\nstruct OverlapAcc {\n username: String,\n author_touch_count: u32,\n review_touch_count: u32,\n touch_count: u32,\n last_seen_at: i64,\n mr_refs: HashSet<String>, // O(1) dedup from the start\n}\n// Build HashMap from rows\n// Convert to Vec, sort, bound mr_refs\n```\n\n### Bounded mr_refs:\n```rust\nconst MAX_MR_REFS_PER_USER: usize = 50;\nlet mr_refs_total = mr_refs.len() as u32;\nlet mr_refs_truncated = mr_refs.len() > MAX_MR_REFS_PER_USER;\n```\n\n### Deterministic sort: touch_count DESC, last_seen_at DESC, username ASC\n\n### format_overlap_role():\n```rust\nfn format_overlap_role(user: &OverlapUser) -> &'static str {\n match (user.author_touch_count > 0, user.review_touch_count > 0) {\n (true, true) => \"A+R\", (true, false) => \"A\",\n (false, true) => \"R\", (false, false) => \"-\",\n }\n}\n```\n\n### OverlapResult/OverlapUser structs include path_match (\"exact\"/\"prefix\"), truncated bool, per-user mr_refs_total + mr_refs_truncated\n\n## Files\n\n- `src/cli/commands/who.rs`\n\n## TDD Loop\n\nRED:\n```\ntest_overlap_dual_roles — user is author of MR 1 and reviewer of MR 2 at same path; verify A+R role, both touch counts > 0, mr_refs contain \"team/backend!\"\ntest_overlap_multi_project_mr_refs — same iid 100 in two projects; verify both \"team/backend!100\" and \"team/frontend!100\" present\ntest_overlap_excludes_self_review_notes — author comments on own MR; review_touch_count must be 0\n```\n\nGREEN: Implement query_overlap with both SQL variants + accumulator\nVERIFY: `cargo test -- overlap`\n\n## Acceptance Criteria\n\n- [ ] test_overlap_dual_roles passes (A+R role detection)\n- [ ] test_overlap_multi_project_mr_refs passes (project-qualified refs unique)\n- [ ] test_overlap_excludes_self_review_notes passes\n- [ ] Default since window: 30d\n- [ ] mr_refs sorted alphabetically for deterministic output\n- [ ] touch_count uses coherent units (COUNT DISTINCT m.id on BOTH branches)\n\n## Edge Cases\n\n- Both branches count MRs (not DiffNotes) for coherent touch_count — mixing units produces misleading totals\n- mr_refs from GROUP_CONCAT may contain duplicates across branches — HashSet handles dedup\n- Project scoping on n.project_id (not m.project_id) for index alignment\n- mr_refs sorted before output (HashSet iteration is nondeterministic)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:40:46.729921Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.598708Z","closed_at":"2026-02-08T04:10:29.598673Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. 
All quality gates pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-b51e","depends_on_id":"bd-2ldg","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-b51e","depends_on_id":"bd-34rr","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-bcte","title":"Implement filter DSL parser state machine","description":"## Background\n\nThe Issue List and MR List filter bars accept typed filter expressions (e.g., `state:opened author:@asmith label:\"high priority\" -milestone:v2.0`). The PRD Appendix B defines a full state machine: Inactive -> Active -> FieldSelect/FreeText -> ValueInput. The parser needs to handle field:value pairs, negation prefix (`-`), quoted values with spaces, bare text as free-text search, and inline error diagnostics when an unrecognized field name is typed. This is a substantial subsystem that the entity table filter bar widget (bd-18qs) depends on for its core functionality.\n\n## Approach\n\nCreate a `filter_dsl.rs` module with:\n\n1. **FilterToken enum** — `Field { name: String, value: String, negated: bool }` | `FreeText(String)` | `Error { position: usize, message: String }`\n2. **`parse_filter(input: &str) -> Vec<FilterToken>`** — Tokenizer that handles:\n - `field:value` — recognized fields: state, author, assignee, label, milestone, since, project (issue); + reviewer, draft, target, source (MR)\n - `-field:value` — negation prefix strips the `-` and sets `negated: true`\n - `field:\"quoted value\"` — double-quoted values preserve spaces\n - bare words — collected as `FreeText` tokens\n - unrecognized field names — produce `Error` token with position and message\n3. 
**FilterBarState** state machine:\n - `Inactive` — filter bar not focused\n - `Active(Typing)` — user typing, no suggestion yet\n - `Active(Suggesting)` — 200ms pause triggers field name suggestions\n - `FieldSelect` — dropdown showing recognized field names after `:`\n - `ValueInput` — context-dependent completions (e.g., state values: opened/closed/all)\n4. **`apply_issue_filter(tokens: &[FilterToken]) -> IssueFilterParams`** — converts tokens to query parameters\n5. **`apply_mr_filter(tokens: &[FilterToken]) -> MrFilterParams`** — MR variant with reviewer, draft, target/source fields\n\n## Acceptance Criteria\n- [ ] `parse_filter(\"state:opened\")` returns one Field token with name=\"state\", value=\"opened\", negated=false\n- [ ] `parse_filter(\"-label:bug\")` returns one Field with negated=true\n- [ ] `parse_filter('author:\"Jane Doe\"')` returns one Field with value=\"Jane Doe\" (quotes stripped)\n- [ ] `parse_filter(\"foo:bar\")` where \"foo\" is not a recognized field returns Error token with position\n- [ ] `parse_filter(\"state:opened some text\")` returns Field + FreeText tokens\n- [ ] `parse_filter(\"\")` returns empty vec\n- [ ] FilterBarState transitions match the Appendix B state machine diagram\n- [ ] apply_issue_filter correctly maps all 7 issue fields (state, author, assignee, label, milestone, since, project)\n- [ ] apply_mr_filter correctly maps additional MR fields (reviewer, draft, target, source)\n- [ ] Inline error diagnostics include the character position of the unrecognized field\n\n## Files\n- CREATE: crates/lore-tui/src/widgets/filter_dsl.rs\n- MODIFY: crates/lore-tui/src/widgets/mod.rs (add `pub mod filter_dsl;`)\n\n## TDD Anchor\nRED: Write `test_parse_simple_field_value` that asserts `parse_filter(\"state:opened\")` returns `[Field { name: \"state\", value: \"opened\", negated: false }]`.\nGREEN: Implement the tokenizer for the simplest case.\nVERIFY: cargo test -p lore-tui parse_simple\n\nAdditional tests:\n- test_parse_negation\n- 
test_parse_quoted_value\n- test_parse_unrecognized_field_produces_error\n- test_parse_mixed_tokens\n- test_parse_empty_input\n- test_apply_issue_filter_maps_all_fields\n- test_apply_mr_filter_maps_additional_fields\n- test_filter_bar_state_transitions\n\n## Edge Cases\n- Unclosed quote (`author:\"Jane`) — treat rest of input as the value, produce warning token\n- Empty value (`state:`) — produce Error token, not a Field with empty value\n- Multiple colons (`field:val:ue`) — first colon splits, rest is part of value\n- Unicode in field values (`author:@rene`) — must handle multi-byte chars correctly\n- Very long filter strings (>1000 chars) — must not allocate unbounded; truncate with error\n\n## Dependency Context\n- Depends on bd-18qs (entity table + filter bar widgets) which provides the TextInput widget and filter bar rendering. This bead provides the PARSER that bd-18qs's filter bar CALLS.\n- Consumed by bd-3ei1 (Issue List) and bd-2kr0 (MR List) for converting user filter input into query parameters.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T19:29:37.516695Z","created_by":"tayloreernisse","updated_at":"2026-02-12T19:29:47.312394Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-bcte","depends_on_id":"bd-18qs","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-bjo","title":"Implement vector search function","description":"## Background\nVector search queries the sqlite-vec virtual table for nearest-neighbor documents. Because documents may have multiple chunks, the raw KNN results need deduplication by document_id (keeping the best/lowest distance per document). 
The function over-fetches 3x to ensure enough unique documents after dedup.\n\n## Approach\nCreate `src/search/vector.rs`:\n\n```rust\npub struct VectorResult {\n pub document_id: i64,\n pub distance: f64, // Lower = closer match\n}\n\n/// Search documents using sqlite-vec KNN query.\n/// Over-fetches 3x limit to handle chunk dedup.\npub fn search_vector(\n conn: &Connection,\n query_embedding: &[f32], // 768-dim embedding of search query\n limit: usize,\n) -> Result<Vec<VectorResult>>\n```\n\n**SQL (KNN query):**\n```sql\nSELECT rowid, distance\nFROM embeddings\nWHERE embedding MATCH ?\n AND k = ?\nORDER BY distance\n```\n\n**Algorithm:**\n1. Convert query_embedding to raw LE bytes\n2. Execute KNN with k = limit * 3 (over-fetch for dedup)\n3. Decode each rowid via decode_rowid() -> (document_id, chunk_index)\n4. Group by document_id, keep minimum distance (best chunk)\n5. Sort by distance ascending\n6. Take first `limit` results\n\n## Acceptance Criteria\n- [ ] Returns deduplicated document-level results (not chunk-level)\n- [ ] Best chunk distance kept per document (lowest distance wins)\n- [ ] KNN with k parameter (3x limit)\n- [ ] Query embedding passed as raw LE bytes\n- [ ] Results sorted by distance ascending (closest first)\n- [ ] Returns at most `limit` results\n- [ ] Empty embeddings table returns empty Vec\n- [ ] `cargo build` succeeds\n\n## Files\n- `src/search/vector.rs` — new file\n- `src/search/mod.rs` — add `pub use vector::{search_vector, VectorResult};`\n\n## TDD Loop\nRED: Integration tests need sqlite-vec + seeded embeddings:\n- `test_vector_search_basic` — finds nearest document\n- `test_vector_search_dedup` — multi-chunk doc returns once with best distance\n- `test_vector_search_empty` — empty table returns empty\n- `test_vector_search_limit` — respects limit parameter\nGREEN: Implement search_vector\nVERIFY: `cargo test vector`\n\n## Edge Cases\n- All chunks belong to same document: returns single result\n- Query embedding wrong dimension: sqlite-vec may error 
— handle gracefully\n- Over-fetch returns fewer than limit unique docs: return what we have\n- Distance = 0.0: exact match (valid result)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:50.270357Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:44:56.233611Z","closed_at":"2026-01-30T17:44:56.233512Z","close_reason":"Implemented search_vector with KNN query, 3x over-fetch, chunk dedup. 3 tests pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-bjo","depends_on_id":"bd-1y8","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-bjo","depends_on_id":"bd-2ac","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} -{"id":"bd-bqoc","title":"Phase 1: Build HTTP adapter layer (src/http.rs)","description":"## What\nCreate src/http.rs (~100 LOC) — an adapter layer wrapping asupersync HttpClient with ergonomic methods matching gitlore usage patterns.\n\n## Why\nAsupersync HttpClient is lower-level than reqwest:\n- Headers: Vec<(String, String)> not typed HeaderMap/HeaderValue\n- Body: Vec<u8> not a builder with .json()\n- Status: raw u16 not StatusCode enum\n- Response: body already buffered, no async .json().await\n- No per-request timeout\n\nWithout an adapter, every call site becomes 5-6 lines of boilerplate. 
The adapter also isolates gitlore from asupersync pre-1.0 HTTP API changes.\n\n## API Design\n\n### Client struct\n```rust\npub struct Client {\n inner: HttpClient,\n timeout: Duration,\n}\n```\n\n### Methods\n- Client::with_timeout(Duration) -> Self — constructor with connection pool config\n- client.get(url, headers) -> Result<Response>\n- client.get_with_query(url, params, headers) -> Result<Response>\n- client.post_json(url, headers, body: &T) -> Result<Response>\n- (private) client.execute(method, url, headers, body) -> Result<Response>\n\n### Response struct\n```rust\npub struct Response {\n pub status: u16,\n pub reason: String,\n pub headers: Vec<(String, String)>,\n body: Vec<u8>, // private, accessed via methods\n}\n```\n\n### Response methods\n- response.is_success() -> bool (200..300)\n- response.json::<T>() -> Result<T>\n- response.text() -> Result<String>\n- response.header(name) -> Option<&str> (case-insensitive)\n- response.headers_all(name) -> Vec<&str> (multi-value, for Link header pagination)\n\n### Timeout behavior\nEvery request wrapped with asupersync::time::timeout(self.timeout, ...). Maps to:\n- Timeout -> LoreError::GitLabNetworkError { kind: Timeout }\n- Transport error -> classified via classify_transport_error(&e) -> NetworkErrorKind\n\n### Response body size guard\nMAX_RESPONSE_BODY_BYTES = 64 MiB. Safety net for runaways. 
Normal responses are <1 MiB.\n\n### URL query helper\nappend_query_params(url, params) handles:\n- URLs with existing ?query -> appends with &\n- URLs with #fragment -> inserts query before fragment\n- Empty params -> returns URL unchanged\n- Repeated keys -> preserved (GitLab uses repeated labels[])\n\n### Connection pool config\n- max_connections_per_host: 6\n- max_total_connections: 100\n- idle_timeout: 90s\n\n## Default timeouts by consumer\n- GitLab REST/GraphQL: 30s\n- Ollama: configurable (default 60s)\n- Ollama health check: 5s\n\n## Files Changed\n- src/http.rs (NEW, ~100 LOC)\n- src/main.rs or src/lib.rs (add pub mod http)\n\n## Testing\n- Unit tests for append_query_params edge cases (existing query, fragments, empty params, repeated keys)\n- Unit tests for Response::header case-insensitivity\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n\n## Depends On\n- Phase 0d (error types: NetworkErrorKind must exist)\n- Decision Gate (must verify asupersync compiles first)","status":"open","priority":1,"issue_type":"feature","created_at":"2026-03-06T18:39:26.325053Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:42:50.116739Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-1"],"dependencies":[{"issue_id":"bd-bqoc","depends_on_id":"bd-1iuj","type":"blocks","created_at":"2026-03-06T18:42:49.922028Z","created_by":"tayloreernisse"},{"issue_id":"bd-bqoc","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:39:26.329649Z","created_by":"tayloreernisse"},{"issue_id":"bd-bqoc","depends_on_id":"bd-1lli","type":"blocks","created_at":"2026-03-06T18:42:50.116708Z","created_by":"tayloreernisse"}]} +{"id":"bd-bqoc","title":"Phase 1: Build HTTP adapter layer (src/http.rs)","description":"## What\nCreate src/http.rs (~100 LOC) — an adapter layer wrapping asupersync HttpClient with ergonomic methods matching gitlore usage patterns.\n\n## Why\nAsupersync HttpClient is lower-level than reqwest:\n- 
Headers: Vec<(String, String)> not typed HeaderMap/HeaderValue\n- Body: Vec<u8> not a builder with .json()\n- Status: raw u16 not StatusCode enum\n- Response: body already buffered, no async .json().await\n- No per-request timeout\n\nWithout an adapter, every call site becomes 5-6 lines of boilerplate. The adapter also isolates gitlore from asupersync pre-1.0 HTTP API changes.\n\n## API Design\n\n### Client struct\n```rust\npub struct Client {\n inner: HttpClient,\n timeout: Duration,\n}\n```\n\n### Methods\n- Client::with_timeout(Duration) -> Self — constructor with connection pool config\n- client.get(url, headers) -> Result<Response>\n- client.get_with_query(url, params, headers) -> Result<Response>\n- client.post_json(url, headers, body: &T) -> Result<Response>\n- (private) client.execute(method, url, headers, body) -> Result<Response>\n\n### Response struct\n```rust\npub struct Response {\n pub status: u16,\n pub reason: String,\n pub headers: Vec<(String, String)>,\n body: Vec<u8>, // private, accessed via methods\n}\n```\n\n### Response methods\n- response.is_success() -> bool (200..300)\n- response.json::<T>() -> Result<T>\n- response.text() -> Result<String>\n- response.header(name) -> Option<&str> (case-insensitive)\n- response.headers_all(name) -> Vec<&str> (multi-value, for Link header pagination)\n\n### Timeout behavior\nEvery request wrapped with asupersync::time::timeout(self.timeout, ...). Maps to:\n- Timeout -> LoreError::GitLabNetworkError { kind: Timeout }\n- Transport error -> classified via classify_transport_error(&e) -> NetworkErrorKind\n\n### Response body size guard\nMAX_RESPONSE_BODY_BYTES = 64 MiB. Safety net for runaways. 
Normal responses are <1 MiB.\n\n### URL query helper\nappend_query_params(url, params) handles:\n- URLs with existing ?query -> appends with &\n- URLs with #fragment -> inserts query before fragment\n- Empty params -> returns URL unchanged\n- Repeated keys -> preserved (GitLab uses repeated labels[])\n\n### Connection pool config\n- max_connections_per_host: 6\n- max_total_connections: 100\n- idle_timeout: 90s\n\n## Default timeouts by consumer\n- GitLab REST/GraphQL: 30s\n- Ollama: configurable (default 60s)\n- Ollama health check: 5s\n\n## Files Changed\n- src/http.rs (NEW, ~100 LOC)\n- src/main.rs or src/lib.rs (add pub mod http)\n\n## Testing\n- Unit tests for append_query_params edge cases (existing query, fragments, empty params, repeated keys)\n- Unit tests for Response::header case-insensitivity\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings\n\n## Depends On\n- Phase 0d (error types: NetworkErrorKind must exist)\n- Decision Gate (must verify asupersync compiles first)","status":"closed","priority":1,"issue_type":"feature","created_at":"2026-03-06T18:39:26.325053Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.566789Z","closed_at":"2026-03-06T21:11:12.566744Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-1"],"dependencies":[{"issue_id":"bd-bqoc","depends_on_id":"bd-1iuj","type":"blocks","created_at":"2026-03-06T18:42:49.922028Z","created_by":"tayloreernisse"},{"issue_id":"bd-bqoc","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:39:26.329649Z","created_by":"tayloreernisse"},{"issue_id":"bd-bqoc","depends_on_id":"bd-1lli","type":"blocks","created_at":"2026-03-06T18:42:50.116708Z","created_by":"tayloreernisse"}]} {"id":"bd-c9gk","title":"Implement core types (Msg, Screen, EntityKey, AppError, InputMode)","description":"## Background\nThe core types form the 
message-passing backbone of the Elm Architecture. Every user action and async result flows through the Msg enum. Screen identifies navigation targets. EntityKey provides safe cross-project entity identity. AppError enables structured error display. InputMode controls key dispatch routing.\n\n## Approach\nCreate crates/lore-tui/src/message.rs with:\n- Msg enum (~40 variants): RawEvent, Tick, Resize, NavigateTo, GoBack, GoForward, GoHome, JumpBack, JumpForward, OpenCommandPalette, CloseCommandPalette, CommandPaletteInput, CommandPaletteSelect, IssueListLoaded{generation, rows}, IssueListFilterChanged, IssueListSortChanged, IssueSelected, MrListLoaded{generation, rows}, MrListFilterChanged, MrSelected, IssueDetailLoaded{generation, key, detail}, MrDetailLoaded{generation, key, detail}, DiscussionsLoaded{generation, discussions}, SearchQueryChanged, SearchRequestStarted{generation, query}, SearchExecuted{generation, results}, SearchResultSelected, SearchModeChanged, SearchCapabilitiesLoaded, TimelineLoaded, TimelineEntitySelected, WhoResultLoaded, WhoModeChanged, SyncStarted, SyncProgress, SyncProgressBatch, SyncLogLine, SyncBackpressureDrop, SyncCompleted, SyncCancelled, SyncFailed, SyncStreamStats, SearchDebounceArmed, SearchDebounceFired, DashboardLoaded, Error, ShowHelp, ShowCliEquivalent, OpenInBrowser, BlurTextInput, ScrollToTopCurrentScreen, Quit\n- impl From for Msg (FrankenTUI requirement) — maps Resize, Tick, and wraps everything else in RawEvent\n- Screen enum: Dashboard, IssueList, IssueDetail(EntityKey), MrList, MrDetail(EntityKey), Search, Timeline, Who, Sync, Stats, Doctor, Bootstrap\n- Screen::label() -> &str and Screen::is_detail_or_entity() -> bool\n- EntityKey { project_id: i64, iid: i64, kind: EntityKind } with EntityKey::issue() and EntityKey::mr() constructors\n- EntityKind enum: Issue, MergeRequest\n- AppError enum: DbBusy, DbCorruption(String), NetworkRateLimited{retry_after_secs}, NetworkUnavailable, AuthFailed, ParseError(String), 
Internal(String) with Display impl\n- InputMode enum: Normal, Text, Palette, GoPrefix{started_at: Instant} with Default -> Normal\n\n## Acceptance Criteria\n- [ ] Msg enum compiles with all ~40 variants\n- [ ] From impl converts Resize->Msg::Resize, Tick->Msg::Tick, other->Msg::RawEvent\n- [ ] Screen enum has all 12 variants with label() and is_detail_or_entity() methods\n- [ ] EntityKey::issue() and EntityKey::mr() constructors work correctly\n- [ ] EntityKey derives Debug, Clone, PartialEq, Eq, Hash\n- [ ] AppError Display shows user-friendly messages for each variant\n- [ ] InputMode defaults to Normal\n\n## Files\n- CREATE: crates/lore-tui/src/message.rs\n\n## TDD Anchor\nRED: Write test_entity_key_equality that asserts EntityKey::issue(1, 42) == EntityKey::issue(1, 42) and EntityKey::issue(1, 42) != EntityKey::mr(1, 42).\nGREEN: Implement EntityKey with derives.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml test_entity_key\n\n## Edge Cases\n- Generation fields (u64) in Msg variants are critical for stale result detection — must be present on all async result variants\n- EntityKey equality must include both project_id AND iid AND kind — bare iid is unsafe with multi-project datasets\n- AppError::NetworkRateLimited retry_after_secs is Option — GitLab may not provide Retry-After header","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T16:53:37.143607Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:22.248862Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-c9gk","depends_on_id":"bd-1cj0","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-c9gk","depends_on_id":"bd-3ddw","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-cbo","title":"[CP1] Cargo.toml updates - async-stream and futures","description":"Add required dependencies for async pagination streams.\n\n## Changes\nAdd to Cargo.toml:\n- 
async-stream = \"0.3\"\n- futures = \"0.3\"\n\n## Why\nThe pagination methods use async generators which require async-stream crate.\nfutures crate provides StreamExt for consuming the streams.\n\n## Done When\n- cargo check passes with new deps\n- No unused dependency warnings\n\nFiles: Cargo.toml","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-01-25T15:42:31.143927Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.661666Z","closed_at":"2026-01-25T17:02:01.661666Z","deleted_at":"2026-01-25T17:02:01.661662Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-cq2","title":"[CP1] Integration tests for label linkage","description":"Integration tests verifying label linkage and stale removal.\n\n## Tests (tests/label_linkage_tests.rs)\n\n- clears_existing_labels_before_linking_new_set\n- removes_stale_label_links_on_issue_update\n- handles_issue_with_all_labels_removed\n- preserves_labels_that_still_exist\n\n## Test Scenario\n1. Create issue with labels [A, B]\n2. Verify issue_labels has links to A and B\n3. Update issue with labels [B, C]\n4. 
Verify A link removed, B preserved, C added\n\n## Why This Matters\nThe clear-and-relink pattern ensures GitLab reality is reflected locally.\nIf we only INSERT, removed labels would persist incorrectly.\n\nFiles: tests/label_linkage_tests.rs\nDone when: Stale label links correctly removed on resync","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T16:59:10.665771Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:02.062192Z","closed_at":"2026-01-25T17:02:02.062192Z","deleted_at":"2026-01-25T17:02:02.062188Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} @@ -332,7 +332,7 @@ {"id":"bd-kanh","title":"Extract orchestrator per-entity logic and implement inline dependent helpers","description":"## Background\n\nThe orchestrator's drain functions (`drain_resource_events` at line 932, `drain_mr_closes_issues` at line 1254, `drain_mr_diffs` at line 1514) are private and tightly coupled to the job queue system (`pending_dependent_fetches`, `claim_jobs`, `complete_job`). They batch-process all entities for a project, not individual ones. 
Surgical sync needs per-entity versions of these operations.\n\nThe underlying storage functions already exist and are usable:\n- `store_resource_events(conn, project_id, entity_type, entity_local_id, state_events, label_events, milestone_events)` (orchestrator.rs:1100) — calls `upsert_state_events`, `upsert_label_events`, `upsert_milestone_events`\n- `store_closes_issues_refs(conn, project_id, mr_local_id, closes_issues)` (orchestrator.rs:1409) — inserts entity references\n- `upsert_mr_file_changes(conn, project_id, mr_local_id, diffs)` (mr_diffs.rs:26) — already pub\n\nThe GitLabClient methods for fetching are also already pub:\n- `fetch_all_resource_events(gitlab_project_id, entity_type, iid)` -> (state, label, milestone) events\n- `fetch_mr_closes_issues(gitlab_project_id, iid)` -> Vec\n- `fetch_mr_diffs(gitlab_project_id, iid)` -> Vec\n\nThe gap: no standalone per-entity functions that fetch + store for a single entity without the job queue machinery.\n\n## Approach\n\nCreate standalone helper functions in `src/ingestion/surgical.rs` (or a new `src/ingestion/surgical_dependents.rs` sub-module) that surgical.rs calls after ingesting each entity:\n\n1. **`fetch_and_store_resource_events_for_entity`** (async): Takes `client`, `conn`, `project_id`, `gitlab_project_id`, `entity_type` (\"issue\"|\"merge_request\"), `entity_iid`, `entity_local_id`. Calls `client.fetch_all_resource_events()`, then `store_resource_events()` (needs `pub(crate)` visibility, currently private in orchestrator.rs). Updates the watermark column (`resource_events_synced_for_updated_at`).\n\n2. **`fetch_and_store_discussions_for_entity`** (async): For issues, calls existing `ingest_issue_discussions()`. For MRs, calls `ingest_mr_discussions()`. Both are already pub. This is a thin routing wrapper.\n\n3. **`fetch_and_store_closes_issues_for_entity`** (async, MR-only): Calls `client.fetch_mr_closes_issues()`, then `store_closes_issues_refs()` (needs `pub(crate)`). Updates watermark.\n\n4. 
**`fetch_and_store_file_changes_for_entity`** (async, MR-only): Calls `client.fetch_mr_diffs()`, then `upsert_mr_file_changes()` (already pub). Updates watermark.\n\nVisibility changes needed in orchestrator.rs (part of bd-1sc6):\n- `store_resource_events` -> `pub(crate)`\n- `store_closes_issues_refs` -> `pub(crate)`\n- `update_resource_event_watermark_tx` -> `pub(crate)` (or inline the SQL)\n- `update_closes_issues_watermark_tx` -> `pub(crate)` (or inline)\n\n## Acceptance Criteria\n\n- [ ] `fetch_and_store_resource_events_for_entity` fetches all 3 event types and stores them in one transaction\n- [ ] `fetch_and_store_discussions_for_entity` routes to correct discussion ingest function by entity type\n- [ ] `fetch_and_store_closes_issues_for_entity` fetches and stores closes_issues refs for MRs\n- [ ] `fetch_and_store_file_changes_for_entity` fetches and stores MR diffs\n- [ ] Each helper updates the appropriate watermark column after successful store\n- [ ] Each helper returns a result struct with counts (fetched, stored, skipped)\n- [ ] All helpers are `pub(crate)` for use by the orchestration function (bd-1i4i)\n- [ ] Config-gated: resource events only fetched if `config.sync.fetch_resource_events == true`, file changes only if `config.sync.fetch_mr_file_changes == true`\n\n## Files\n\n- `src/ingestion/surgical.rs` (add helper functions, or create `surgical_dependents.rs` sub-module)\n- `src/ingestion/orchestrator.rs` (change `store_resource_events`, `store_closes_issues_refs`, watermark functions to `pub(crate)` — via bd-1sc6)\n\n## TDD Anchor\n\nTests in `src/ingestion/surgical_tests.rs` (bd-x8oq):\n\n```rust\n#[tokio::test]\nasync fn test_fetch_and_store_resource_events_for_issue() {\n let conn = setup_db();\n let mock = MockServer::start().await;\n // Mock state/label/milestone event endpoints\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/\\d+/issues/\\d+/resource_state_events\"))\n 
.respond_with(ResponseTemplate::new(200).set_body_json(json!([])))\n .mount(&mock).await;\n // ... similar for label and milestone\n let client = make_test_client(&mock);\n let result = fetch_and_store_resource_events_for_entity(\n &client, &conn, /*project_id=*/1, /*gitlab_project_id=*/100,\n \"issue\", /*iid=*/42, /*local_id=*/1,\n ).await.unwrap();\n assert_eq!(result.fetched, 0); // empty events\n // Verify watermark updated\n let watermark: Option = conn.query_row(\n \"SELECT resource_events_synced_for_updated_at FROM issues WHERE id = 1\",\n [], |r| r.get(0),\n ).unwrap();\n assert!(watermark.is_some());\n}\n\n#[tokio::test]\nasync fn test_fetch_and_store_closes_issues_for_mr() {\n let conn = setup_db();\n let mock = MockServer::start().await;\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/\\d+/merge_requests/\\d+/closes_issues\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(json!([\n {\"iid\": 10, \"project_id\": 100}\n ])))\n .mount(&mock).await;\n let client = make_test_client(&mock);\n let result = fetch_and_store_closes_issues_for_entity(\n &client, &conn, 1, 100, /*mr_iid=*/5, /*mr_local_id=*/1,\n ).await.unwrap();\n assert_eq!(result.stored, 1);\n}\n\n#[tokio::test]\nasync fn test_fetch_and_store_file_changes_for_mr() {\n // Similar: mock /diffs endpoint, verify upsert_mr_file_changes called\n}\n\n#[tokio::test]\nasync fn test_resource_events_skipped_when_config_disabled() {\n // config.sync.fetch_resource_events = false -> returns Ok with 0 counts\n}\n```\n\n## Edge Cases\n\n- `fetch_all_resource_events` returns 3 separate Results (state, label, milestone). If one fails (e.g., 403 on milestone events), the others should still be stored. 
Partial success handling.\n- `fetch_mr_closes_issues` on a deleted MR returns 404: `coalesce_not_found` already handles this in the client, returning empty vec.\n- Watermark update must happen AFTER successful store, not before, to avoid marking as synced when store failed.\n- Discussion ingest for MRs uses `prefetch_mr_discussions` (async) + `write_prefetched_mr_discussions` (sync) two-phase pattern. The helper must handle both phases.\n- If `config.sync.fetch_resource_events` is false, skip resource event fetch entirely (return empty result).\n- If `config.sync.fetch_mr_file_changes` is false, skip file changes fetch entirely.\n\n## Dependency Context\n\n- **Blocked by bd-3sez**: surgical.rs must exist before adding helpers to it\n- **Blocked by bd-1sc6 (indirectly via bd-3sez)**: `store_resource_events` and `store_closes_issues_refs` need `pub(crate)` visibility\n- **Blocks bd-1i4i**: Orchestration function calls these helpers after each entity ingest\n- **Blocks bd-3jqx**: Integration tests exercise the full surgical pipeline including these helpers\n- **Uses existing pub APIs**: `GitLabClient::fetch_all_resource_events`, `fetch_mr_closes_issues`, `fetch_mr_diffs`, `upsert_mr_file_changes`, `ingest_issue_discussions`, `ingest_mr_discussions`","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-17T19:15:42.863072Z","created_by":"tayloreernisse","updated_at":"2026-02-18T21:04:58.569185Z","closed_at":"2026-02-18T21:04:58.569141Z","close_reason":"Completed: all implementation work done, code reviewed, tests passing","compaction_level":0,"original_size":0,"labels":["surgical-sync"],"dependencies":[{"issue_id":"bd-kanh","depends_on_id":"bd-1i4i","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-kvij","title":"Rewrite agent skills to mandate lore for all reads","description":"## Background\nAgent skills and AGENTS.md files currently allow agents to choose between glab and lore for read operations. 
Agents default to glab (familiar from training data) even though lore returns richer data. Need a clean, enforced boundary: lore=reads, glab=writes.\n\n## Approach\n1. Audit all config files for glab read patterns\n2. Replace each with lore equivalent\n3. Add explicit Read/Write Split section to AGENTS.md and CLAUDE.md\n\n## Translation Table\n| glab (remove) | lore (replace with) |\n|------------------------------------|----------------------------------|\n| glab issue view N | lore -J issues N |\n| glab issue list | lore -J issues -n 50 |\n| glab issue list -l bug | lore -J issues --label bug |\n| glab mr view N | lore -J mrs N |\n| glab mr list | lore -J mrs |\n| glab mr list -s opened | lore -J mrs -s opened |\n| glab api '/projects/:id/issues' | lore -J issues -p project |\n\n## Files to Audit\n\n### Project-level\n- /Users/tayloreernisse/projects/gitlore/AGENTS.md — primary project instructions\n\n### Global Claude config\n- ~/.claude/CLAUDE.md — global instructions (already has lore section, verify no glab reads)\n\n### Skills directory\nScan all .md files under ~/.claude/skills/ for glab read patterns.\nLikely candidates: any skill that references GitLab data retrieval.\n\n### Rules directory\nScan all .md files under ~/.claude/rules/ for glab read patterns.\n\n### Work-ghost templates\n- ~/projects/work-ghost/tasks/*.md — task templates that reference glab reads\n\n## Verification Commands\nAfter all changes:\n```bash\n# Should return ZERO matches (no glab read commands remain)\nrg 'glab issue view|glab issue list|glab mr view|glab mr list|glab api.*issues|glab api.*merge_requests' ~/.claude/ AGENTS.md --type md\n\n# These should REMAIN (write operations stay with glab)\nrg 'glab (issue|mr) (create|update|close|delete|approve|merge|note|rebase)' ~/.claude/ AGENTS.md --type md\n```\n\n## Read/Write Split Section to Add\nAdd to AGENTS.md and ~/.claude/CLAUDE.md:\n```markdown\n## Read/Write Split: lore vs glab\n\n| Operation | Tool | Why 
|\n|-----------|------|-----|\n| List issues/MRs | lore | Richer: includes status, discussions, closing MRs |\n| View issue/MR detail | lore | Pre-joined discussions, work-item status |\n| Search across entities | lore | FTS5 + vector hybrid search |\n| Expert/workload analysis | lore | who command — no glab equivalent |\n| Timeline reconstruction | lore | Chronological narrative — no glab equivalent |\n| Create/update/close | glab | Write operations |\n| Approve/merge MR | glab | Write operations |\n| CI/CD pipelines | glab | Not in lore scope |\n```\n\n## TDD Loop\nThis is a config-only task — no Rust code changes. Verification is via grep:\n\nRED: Run verification commands above, expect matches (glab reads still present)\nGREEN: Replace all glab read references with lore equivalents\nVERIFY: Run verification commands, expect zero glab read matches\n\n## Acceptance Criteria\n- [ ] Zero glab read references in AGENTS.md\n- [ ] Zero glab read references in ~/.claude/CLAUDE.md\n- [ ] Zero glab read references in ~/.claude/skills/**/*.md\n- [ ] Zero glab read references in ~/.claude/rules/**/*.md\n- [ ] glab write references preserved (create, update, close, approve, merge, CI)\n- [ ] Read/Write Split section added to AGENTS.md\n- [ ] Read/Write Split section added to ~/.claude/CLAUDE.md\n- [ ] Fresh agent session uses lore for reads without prompting (manual verification)\n\n## Edge Cases\n- Skills that use glab api for data NOT in lore (e.g., CI pipeline data, project settings) — these should remain\n- glab MCP server references — evaluate case-by-case (keep for write operations)\n- Shell aliases or env vars that invoke glab for reads — out of scope unless in config files\n- Skills that use `glab issue list | jq` for ad-hoc queries — replace with `lore -J issues | jq`\n- References to glab in documentation context (explaining what tools exist) vs operational context (telling agent to use glab) — only replace operational 
references","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-12T15:44:56.530081Z","created_by":"tayloreernisse","updated_at":"2026-02-12T16:49:04.598735Z","closed_at":"2026-02-12T16:49:04.598679Z","close_reason":"Agent skills rewritten: AGENTS.md and CLAUDE.md updated with read/write split mandating lore for reads, glab for writes","compaction_level":0,"original_size":0,"labels":["cli","cli-imp"],"dependencies":[{"issue_id":"bd-kvij","depends_on_id":"bd-13lp","type":"parent-child","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-lcb","title":"Epic: CP2 Gate E - CLI Complete","description":"## Background\nGate E validates all CLI commands are functional and user-friendly. This is the final usability gate - even if all data is correct, users need good CLI UX to access it.\n\n## Acceptance Criteria (Pass/Fail)\n\n### List Command\n- [ ] `gi list mrs` shows MR table with columns: iid, title, state, author, branches, updated\n- [ ] `gi list mrs --state=opened` filters to only opened MRs\n- [ ] `gi list mrs --state=merged` filters to only merged MRs\n- [ ] `gi list mrs --state=closed` filters to only closed MRs\n- [ ] `gi list mrs --state=locked` filters locally (not server-side filter)\n- [ ] `gi list mrs --draft` shows only draft MRs\n- [ ] `gi list mrs --no-draft` excludes draft MRs\n- [ ] Draft MRs show `[DRAFT]` prefix in title column\n- [ ] `gi list mrs --author=username` filters by author\n- [ ] `gi list mrs --assignee=username` filters by assignee\n- [ ] `gi list mrs --reviewer=username` filters by reviewer\n- [ ] `gi list mrs --target-branch=main` filters by target branch\n- [ ] `gi list mrs --source-branch=feature/x` filters by source branch\n- [ ] `gi list mrs --label=bugfix` filters by label\n- [ ] `gi list mrs --limit=N` limits output\n\n### Show Command\n- [ ] `gi show mr ` displays full MR detail\n- [ ] Show includes: title, description, state, draft status, author\n- [ ] Show includes: assignees, reviewers, 
labels\n- [ ] Show includes: source_branch, target_branch\n- [ ] Show includes: detailed_merge_status (e.g., \"mergeable\")\n- [ ] Show includes: merge_user and merged_at for merged MRs\n- [ ] Show includes: discussions with author and date\n- [ ] DiffNote shows file context: `[src/file.ts:45]`\n- [ ] Multi-line DiffNote shows range: `[src/file.ts:45-48]`\n- [ ] Resolved discussions show `[RESOLVED]` marker\n\n### Count Command\n- [ ] `gi count mrs` shows total count\n- [ ] Count shows state breakdown: opened, merged, closed\n\n### Sync Status\n- [ ] `gi sync-status` shows MR cursor position\n- [ ] Sync status shows last sync timestamp\n\n## Validation Script\n```bash\n#!/bin/bash\nset -e\n\nDB_PATH=\"${XDG_DATA_HOME:-$HOME/.local/share}/gitlab-inbox/db.sqlite3\"\n\necho \"=== Gate E: CLI Complete ===\"\n\n# 1. Test list command (basic)\necho \"Step 1: Basic list...\"\ngi list mrs --limit=5 || { echo \"FAIL: list mrs failed\"; exit 1; }\n\n# 2. Test state filters\necho \"Step 2: State filters...\"\nfor state in opened merged closed; do\n echo \" Testing --state=$state\"\n gi list mrs --state=$state --limit=3 || echo \" Warning: No $state MRs\"\ndone\n\n# 3. Test draft filters\necho \"Step 3: Draft filters...\"\ngi list mrs --draft --limit=3 || echo \" Note: No draft MRs found\"\ngi list mrs --no-draft --limit=3 || echo \" Note: All MRs are drafts?\"\n\n# 4. Check [DRAFT] prefix\necho \"Step 4: Check [DRAFT] prefix...\"\nDRAFT_IID=$(sqlite3 \"$DB_PATH\" \"SELECT iid FROM merge_requests WHERE draft = 1 LIMIT 1;\")\nif [ -n \"$DRAFT_IID\" ]; then\n if gi list mrs --limit=100 | grep -q \"\\[DRAFT\\]\"; then\n echo \" PASS: [DRAFT] prefix found\"\n else\n echo \" FAIL: Draft MR exists but no [DRAFT] prefix in output\"\n fi\nelse\n echo \" Skip: No draft MRs to test\"\nfi\n\n# 5. 
Test author/assignee/reviewer filters\necho \"Step 5: User filters...\"\nAUTHOR=$(sqlite3 \"$DB_PATH\" \"SELECT author_username FROM merge_requests LIMIT 1;\")\nif [ -n \"$AUTHOR\" ]; then\n echo \" Testing --author=$AUTHOR\"\n gi list mrs --author=\"$AUTHOR\" --limit=3\nfi\n\nREVIEWER=$(sqlite3 \"$DB_PATH\" \"SELECT username FROM mr_reviewers LIMIT 1;\")\nif [ -n \"$REVIEWER\" ]; then\n echo \" Testing --reviewer=$REVIEWER\"\n gi list mrs --reviewer=\"$REVIEWER\" --limit=3\nfi\n\n# 6. Test branch filters\necho \"Step 6: Branch filters...\"\nTARGET=$(sqlite3 \"$DB_PATH\" \"SELECT target_branch FROM merge_requests LIMIT 1;\")\nif [ -n \"$TARGET\" ]; then\n echo \" Testing --target-branch=$TARGET\"\n gi list mrs --target-branch=\"$TARGET\" --limit=3\nfi\n\n# 7. Test show command\necho \"Step 7: Show command...\"\nMR_IID=$(sqlite3 \"$DB_PATH\" \"SELECT iid FROM merge_requests LIMIT 1;\")\ngi show mr \"$MR_IID\" || { echo \"FAIL: show mr failed\"; exit 1; }\n\n# 8. Test show with DiffNote context\necho \"Step 8: Show with DiffNote...\"\nDIFFNOTE_MR=$(sqlite3 \"$DB_PATH\" \"\n SELECT DISTINCT m.iid\n FROM merge_requests m\n JOIN discussions d ON d.merge_request_id = m.id\n JOIN notes n ON n.discussion_id = d.id\n WHERE n.position_new_path IS NOT NULL\n LIMIT 1;\n\")\nif [ -n \"$DIFFNOTE_MR\" ]; then\n echo \" Testing MR with DiffNotes: !$DIFFNOTE_MR\"\n OUTPUT=$(gi show mr \"$DIFFNOTE_MR\")\n if echo \"$OUTPUT\" | grep -qE '\\[[^]]+:[0-9]+\\]'; then\n echo \" PASS: File context [path:line] found\"\n else\n echo \" FAIL: DiffNote should show [path:line] context\"\n fi\nelse\n echo \" Skip: No MRs with DiffNotes\"\nfi\n\n# 9. Test count command\necho \"Step 9: Count command...\"\ngi count mrs || { echo \"FAIL: count mrs failed\"; exit 1; }\n\n# 10. 
Test sync-status\necho \"Step 10: Sync status...\"\ngi sync-status || echo \" Note: sync-status may need implementation\"\n\necho \"\"\necho \"=== Gate E: PASSED ===\"\n```\n\n## Test Commands (Quick Verification)\n```bash\n# List with all column types visible:\ngi list mrs --limit=10\n\n# Show a specific MR:\ngi show mr 42\n\n# Count with breakdown:\ngi count mrs\n\n# Complex filter:\ngi list mrs --state=opened --reviewer=alice --target-branch=main --limit=5\n```\n\n## Expected Output Formats\n\n### gi list mrs\n```\nMerge Requests (showing 5 of 1,234)\n\n !847 Refactor auth to use JWT tokens merged @johndoe main <- feature/jwt 3d ago\n !846 Fix memory leak in websocket handler opened @janedoe main <- fix/websocket 5d ago\n !845 [DRAFT] Add dark mode CSS variables opened @bobsmith main <- ui/dark-mode 1w ago\n !844 Update dependencies to latest versions closed @alice main <- chore/deps 2w ago\n```\n\n### gi show mr 847\n```\nMerge Request !847: Refactor auth to use JWT tokens\n================================================================================\n\nProject: group/project-one\nState: merged\nDraft: No\nAuthor: @johndoe\nAssignees: @janedoe, @bobsmith\nReviewers: @alice, @charlie\nLabels: enhancement, auth, reviewed\nSource: feature/jwt\nTarget: main\nMerge Status: merged\nMerged By: @alice\nMerged At: 2024-03-20 14:30:00\n\nDescription:\n Moving away from session cookies to JWT-based authentication...\n\nDiscussions (3):\n\n @janedoe (2024-03-16) [src/auth/jwt.ts:45]:\n Should we use a separate signing key for refresh tokens?\n\n @johndoe (2024-03-16):\n Good point. I'll add a separate key with rotation support.\n\n @alice (2024-03-18) [RESOLVED]:\n Looks good! 
Just one nit about the token expiry constant.\n```\n\n### gi count mrs\n```\nMerge Requests: 1,234\n opened: 89\n merged: 1,045\n closed: 100\n```\n\n## Dependencies\nThis gate requires:\n- bd-3js (CLI commands implementation)\n- All previous gates must pass first\n\n## Edge Cases\n- Ambiguous MR iid across projects: should prompt for `--project` or show error\n- Very long titles: should truncate with `...` in list view\n- Empty description: should show \"No description\" or empty section\n- No discussions: should show \"No discussions\" message\n- Unicode in titles/descriptions: should render correctly","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-26T22:06:02.411132Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:48:21.061166Z","closed_at":"2026-01-27T00:48:21.061125Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-lcb","depends_on_id":"bd-3js","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} -{"id":"bd-lhj8","title":"Phase 2a: Migrate gitlab/client.rs to HTTP adapter","description":"## What\nReplace all reqwest usage in gitlab/client.rs with the new crate::http adapter.\n\n## Why\nThis is the largest HTTP module (~80 LOC changed). 
It handles REST API calls, pagination, rate limiting, and response parsing.\n\n## Changes\n\n### Imports\n\\`\\`\\`rust\n// Remove\nuse reqwest::header::{ACCEPT, HeaderMap, HeaderValue};\nuse reqwest::{Client, Response, StatusCode};\n// Add\nuse crate::http::{Client, Response};\n\\`\\`\\`\n\n### Client construction (lines ~68-96)\n\\`\\`\\`rust\n// Before: reqwest::Client::builder().default_headers(h).timeout(d).build()\n// After:\nlet client = Client::with_timeout(Duration::from_secs(30));\n\\`\\`\\`\n\n### request() method (lines ~129-170)\n\\`\\`\\`rust\n// Before\nlet response = self.client.get(&url)\n .header(\\\"PRIVATE-TOKEN\\\", &self.token)\n .send().await?;\n// After\nlet response = self.client.get(&url, &[\n (\\\"PRIVATE-TOKEN\\\", &self.token),\n (\\\"Accept\\\", \\\"application/json\\\"),\n]).await?;\n\\`\\`\\`\n\n### request_with_headers() method (lines ~510-559)\n\\`\\`\\`rust\n// Before: self.client.get(&url).query(params).header(...)\n// After: self.client.get_with_query(&url, params, &[...])\n// headers already owned in response.headers — no .clone() needed\n\\`\\`\\`\n\n### handle_response() — becomes sync\n\\`\\`\\`rust\n// Before: async fn (consumed body with .text().await)\n// After: fn (body already buffered in Response)\nfn handle_response(&self, response: Response, path: &str) -> Result {\n match response.status {\n 401 => Err(LoreError::GitLabAuthFailed),\n 404 => Err(LoreError::GitLabNotFound { resource: path.into() }),\n 429 => {\n let retry_after = response.header(\\\"retry-after\\\")\n .and_then(|v| v.parse().ok())\n .unwrap_or(60);\n Err(LoreError::GitLabRateLimited { retry_after })\n }\n s if (200..300).contains(&s) => response.json::(),\n s => Err(LoreError::Other(format!(\\\"GitLab API error: {} {}\\\", s, response.reason))),\n }\n}\n\\`\\`\\`\n\n### Pagination — minimal structural changes\nasync_stream::stream! and header parsing stay the same. 
Only response type changes:\n\\`\\`\\`rust\n// Before: headers.get(\\\"x-next-page\\\").and_then(|v| v.to_str().ok())\n// After: response.header(\\\"x-next-page\\\")\n\\`\\`\\`\n\n### parse_link_header_next — signature change\nChange from (headers: &HeaderMap) to (headers: &[(String, String)]) and find by case-insensitive name. Use response.headers_all(\\\"link\\\") for multi-value Link headers.\n\n## WIREMOCK TEST COMPATIBILITY WARNING\nAfter this migration, the 4 client_tests.rs tests run on #[tokio::test] but call production code that now uses crate::http::Client (backed by asupersync::HttpClient). This may cause runtime failures if asupersync HTTP requires the asupersync runtime context.\n\n**Possible resolutions (decide during implementation):**\na) asupersync HTTP client is runtime-agnostic (works under tokio) — ideal, no changes needed\nb) Switch client_tests to #[asupersync::test] — may conflict with wiremock which needs tokio\nc) Add a test-only constructor that accepts an injected HTTP client trait — cleanest but most work\nd) Use reqwest in dev-deps as a test double behind the adapter interface\n\nVerify option (a) first during the Decision Gate. If it doesn't work, option (b) or (d) is the fallback. 
This is flagged as escape hatch trigger #4 in the epic.\n\n## Files Changed\n- src/gitlab/client.rs (~80 LOC changed)\n\n## Testing\n- All 4 client_tests.rs tests must pass (verify runtime compatibility per warning above)\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:39:44.065450Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:49:18.998296Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-2"],"dependencies":[{"issue_id":"bd-lhj8","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:39:44.069082Z","created_by":"tayloreernisse"},{"issue_id":"bd-lhj8","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:50.433154Z","created_by":"tayloreernisse"}]} +{"id":"bd-lhj8","title":"Phase 2a: Migrate gitlab/client.rs to HTTP adapter","description":"## What\nReplace all reqwest usage in gitlab/client.rs with the new crate::http adapter.\n\n## Why\nThis is the largest HTTP module (~80 LOC changed). 
It handles REST API calls, pagination, rate limiting, and response parsing.\n\n## Changes\n\n### Imports\n\\`\\`\\`rust\n// Remove\nuse reqwest::header::{ACCEPT, HeaderMap, HeaderValue};\nuse reqwest::{Client, Response, StatusCode};\n// Add\nuse crate::http::{Client, Response};\n\\`\\`\\`\n\n### Client construction (lines ~68-96)\n\\`\\`\\`rust\n// Before: reqwest::Client::builder().default_headers(h).timeout(d).build()\n// After:\nlet client = Client::with_timeout(Duration::from_secs(30));\n\\`\\`\\`\n\n### request() method (lines ~129-170)\n\\`\\`\\`rust\n// Before\nlet response = self.client.get(&url)\n .header(\\\"PRIVATE-TOKEN\\\", &self.token)\n .send().await?;\n// After\nlet response = self.client.get(&url, &[\n (\\\"PRIVATE-TOKEN\\\", &self.token),\n (\\\"Accept\\\", \\\"application/json\\\"),\n]).await?;\n\\`\\`\\`\n\n### request_with_headers() method (lines ~510-559)\n\\`\\`\\`rust\n// Before: self.client.get(&url).query(params).header(...)\n// After: self.client.get_with_query(&url, params, &[...])\n// headers already owned in response.headers — no .clone() needed\n\\`\\`\\`\n\n### handle_response() — becomes sync\n\\`\\`\\`rust\n// Before: async fn (consumed body with .text().await)\n// After: fn (body already buffered in Response)\nfn handle_response(&self, response: Response, path: &str) -> Result {\n match response.status {\n 401 => Err(LoreError::GitLabAuthFailed),\n 404 => Err(LoreError::GitLabNotFound { resource: path.into() }),\n 429 => {\n let retry_after = response.header(\\\"retry-after\\\")\n .and_then(|v| v.parse().ok())\n .unwrap_or(60);\n Err(LoreError::GitLabRateLimited { retry_after })\n }\n s if (200..300).contains(&s) => response.json::(),\n s => Err(LoreError::Other(format!(\\\"GitLab API error: {} {}\\\", s, response.reason))),\n }\n}\n\\`\\`\\`\n\n### Pagination — minimal structural changes\nasync_stream::stream! and header parsing stay the same. 
Only response type changes:\n\\`\\`\\`rust\n// Before: headers.get(\\\"x-next-page\\\").and_then(|v| v.to_str().ok())\n// After: response.header(\\\"x-next-page\\\")\n\\`\\`\\`\n\n### parse_link_header_next — signature change\nChange from (headers: &HeaderMap) to (headers: &[(String, String)]) and find by case-insensitive name. Use response.headers_all(\\\"link\\\") for multi-value Link headers.\n\n## WIREMOCK TEST COMPATIBILITY WARNING\nAfter this migration, the 4 client_tests.rs tests run on #[tokio::test] but call production code that now uses crate::http::Client (backed by asupersync::HttpClient). This may cause runtime failures if asupersync HTTP requires the asupersync runtime context.\n\n**Possible resolutions (decide during implementation):**\na) asupersync HTTP client is runtime-agnostic (works under tokio) — ideal, no changes needed\nb) Switch client_tests to #[asupersync::test] — may conflict with wiremock which needs tokio\nc) Add a test-only constructor that accepts an injected HTTP client trait — cleanest but most work\nd) Use reqwest in dev-deps as a test double behind the adapter interface\n\nVerify option (a) first during the Decision Gate. If it doesn't work, option (b) or (d) is the fallback. 
This is flagged as escape hatch trigger #4 in the epic.\n\n## Files Changed\n- src/gitlab/client.rs (~80 LOC changed)\n\n## Testing\n- All 4 client_tests.rs tests must pass (verify runtime compatibility per warning above)\n- cargo check --all-targets\n- cargo clippy --all-targets -- -D warnings","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:39:44.065450Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.570436Z","closed_at":"2026-03-06T21:11:12.569778Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-2"],"dependencies":[{"issue_id":"bd-lhj8","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:39:44.069082Z","created_by":"tayloreernisse"},{"issue_id":"bd-lhj8","depends_on_id":"bd-bqoc","type":"blocks","created_at":"2026-03-06T18:42:50.433154Z","created_by":"tayloreernisse"}]} {"id":"bd-ljf","title":"Add embedding error variants to LoreError","description":"## Background\nGate B introduces Ollama-dependent operations that need distinct error variants for clear diagnostics. Each error has a unique exit code, a descriptive message, and an actionable suggestion. These errors must integrate with the existing LoreError enum pattern (renamed from GiError in bd-3lc).\n\n## Approach\nExtend `src/core/error.rs` with 4 new variants per PRD Section 4.3.\n\n**ErrorCode additions:**\n```rust\npub enum ErrorCode {\n // ... existing (InternalError=1 through TransformError=13)\n OllamaUnavailable, // exit code 14\n OllamaModelNotFound, // exit code 15\n EmbeddingFailed, // exit code 16\n}\n```\n\n**LoreError additions:**\n```rust\n/// Ollama-specific connection failure. Use instead of Http for Ollama errors\n/// because it includes base_url for actionable error messages.\n#[error(\"Cannot connect to Ollama at {base_url}. 
Is it running?\")]\nOllamaUnavailable {\n base_url: String,\n #[source]\n source: Option,\n},\n\n#[error(\"Ollama model '{model}' not found. Run: ollama pull {model}\")]\nOllamaModelNotFound { model: String },\n\n#[error(\"Embedding failed for document {document_id}: {reason}\")]\nEmbeddingFailed { document_id: i64, reason: String },\n\n#[error(\"No embeddings found. Run: lore embed\")]\nEmbeddingsNotBuilt,\n```\n\n**code() mapping:**\n- OllamaUnavailable => ErrorCode::OllamaUnavailable\n- OllamaModelNotFound => ErrorCode::OllamaModelNotFound\n- EmbeddingFailed => ErrorCode::EmbeddingFailed\n- EmbeddingsNotBuilt => ErrorCode::EmbeddingFailed (shares exit code 16)\n\n**suggestion() mapping:**\n- OllamaUnavailable => \"Start Ollama: ollama serve\"\n- OllamaModelNotFound => \"Pull the model: ollama pull nomic-embed-text\"\n- EmbeddingFailed => \"Check Ollama logs or retry with 'lore embed --retry-failed'\"\n- EmbeddingsNotBuilt => \"Generate embeddings first: lore embed\"\n\n## Acceptance Criteria\n- [ ] All 4 error variants compile\n- [ ] Exit codes: OllamaUnavailable=14, OllamaModelNotFound=15, EmbeddingFailed=16\n- [ ] EmbeddingsNotBuilt shares exit code 16 (mapped to ErrorCode::EmbeddingFailed)\n- [ ] OllamaUnavailable has `base_url: String` and `source: Option`\n- [ ] EmbeddingFailed has `document_id: i64` and `reason: String`\n- [ ] Each variant has actionable .suggestion() text per PRD\n- [ ] ErrorCode Display: OLLAMA_UNAVAILABLE, OLLAMA_MODEL_NOT_FOUND, EMBEDDING_FAILED\n- [ ] Robot mode JSON includes code + suggestion for each variant\n- [ ] `cargo build` succeeds\n\n## Files\n- `src/core/error.rs` — extend LoreError enum + ErrorCode enum + impl blocks\n\n## TDD Loop\nRED: Add variants, `cargo build` fails on missing match arms\nGREEN: Add match arms in code(), exit_code(), suggestion(), to_robot_error(), Display\nVERIFY: `cargo build && cargo test error`\n\n## Edge Cases\n- OllamaUnavailable with source=None: still valid (used when no HTTP error 
available)\n- EmbeddingFailed with document_id=0: used for batch-level failures (not per-doc)\n- EmbeddingsNotBuilt vs OllamaUnavailable: former means \"never ran embed\", latter means \"Ollama down right now\"","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:33.994316Z","created_by":"tayloreernisse","updated_at":"2026-01-30T16:51:20.385574Z","closed_at":"2026-01-30T16:51:20.385369Z","close_reason":"Completed: Added 4 LoreError variants (OllamaUnavailable, OllamaModelNotFound, EmbeddingFailed, EmbeddingsNotBuilt) and 3 ErrorCode variants with exit codes 14-16. cargo build succeeds.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-ljf","depends_on_id":"bd-3lc","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-lsz","title":"Epic: Gate B - Hybrid MVP","description":"## Background\nGate B adds semantic search capabilities via Ollama embeddings and sqlite-vec vector storage. It builds on Gate A's document layer, adding the embedding pipeline, vector search, RRF-based hybrid ranking, and graceful degradation when Ollama is unavailable. Gate B is independently shippable on top of Gate A.\n\n## Gate B Deliverables\n1. Ollama-powered embedding pipeline with sqlite-vec storage\n2. Hybrid search (RRF-ranked vector + lexical) with rich filtering + graceful degradation\n\n## Bead Dependencies (execution order, after Gate A)\n1. **bd-mem** — Shared backoff utility (no deps)\n2. **bd-1y8** — Chunk ID encoding (no deps)\n3. **bd-3ez** — RRF ranking (no deps)\n4. **bd-ljf** — Embedding error variants (blocked by bd-3lc)\n5. **bd-2ac** — Migration 009 embeddings (blocked by bd-hrs)\n6. **bd-335** — Ollama API client (blocked by bd-ljf)\n7. **bd-am7** — Embedding pipeline (blocked by bd-335, bd-2ac, bd-1y8)\n8. **bd-bjo** — Vector search (blocked by bd-2ac, bd-1y8)\n9. **bd-2sx** — Embed CLI (blocked by bd-am7)\n10. 
**bd-3eu** — Hybrid search (blocked by bd-3ez, bd-bjo, bd-1k1, bd-3q2)\n\n## Acceptance Criteria\n- [ ] `lore embed` builds embeddings for all documents via Ollama\n- [ ] `lore embed --retry-failed` re-attempts failed embeddings\n- [ ] `lore search --mode=hybrid \"query\"` uses both FTS + vector\n- [ ] `lore search --mode=semantic \"query\"` uses vector only\n- [ ] Graceful degradation: Ollama down -> FTS fallback with warning\n- [ ] `lore search --explain` shows vector_rank, fts_rank, rrf_score\n- [ ] sqlite-vec loaded before migration 009","status":"closed","priority":1,"issue_type":"task","created_at":"2026-01-30T15:25:13.462602Z","created_by":"tayloreernisse","updated_at":"2026-01-30T18:02:57.669194Z","closed_at":"2026-01-30T18:02:57.669142Z","close_reason":"All Gate B sub-beads complete: backoff, chunk IDs, RRF, error variants, migration 009, Ollama client, embedding pipeline, vector search, embed CLI, hybrid search","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-lsz","depends_on_id":"bd-2sx","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-lsz","depends_on_id":"bd-3eu","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-m7k1","title":"WHO: Active mode query (query_active)","description":"## Background\n\nActive mode answers \"What discussions are actively in progress?\" by finding unresolved resolvable discussions with recent activity. 
This is the most complex query due to the CTE structure and the dual SQL variant requirement.\n\n## Approach\n\n### Two static SQL variants (CRITICAL — not nullable-OR):\nActive mode uses separate global vs project-scoped SQL strings because:\n- With (?N IS NULL OR d.project_id = ?N), SQLite can't commit to either index at prepare time\n- Global queries need idx_discussions_unresolved_recent_global (single-column last_note_at)\n- Scoped queries need idx_discussions_unresolved_recent (project_id, last_note_at)\n- Selected at runtime: `match project_id { None => sql_global, Some(pid) => sql_scoped }`\n\n### CTE structure (4 stages):\n```sql\nWITH picked AS (\n -- Stage 1: Select limited discussions using the right index\n SELECT d.id, d.noteable_type, d.issue_id, d.merge_request_id,\n d.project_id, d.last_note_at\n FROM discussions d\n WHERE d.resolvable = 1 AND d.resolved = 0\n AND d.last_note_at >= ?1\n ORDER BY d.last_note_at DESC LIMIT ?2\n),\nnote_counts AS (\n -- Stage 2: Count all non-system notes per discussion (ACTUAL note count)\n SELECT n.discussion_id, COUNT(*) AS note_count\n FROM notes n JOIN picked p ON p.id = n.discussion_id\n WHERE n.is_system = 0\n GROUP BY n.discussion_id\n),\nparticipants AS (\n -- Stage 3: Distinct usernames per discussion, then GROUP_CONCAT\n SELECT x.discussion_id, GROUP_CONCAT(x.author_username, X'1F') AS participants\n FROM (\n SELECT DISTINCT n.discussion_id, n.author_username\n FROM notes n JOIN picked p ON p.id = n.discussion_id\n WHERE n.is_system = 0 AND n.author_username IS NOT NULL\n ) x\n GROUP BY x.discussion_id\n)\n-- Stage 4: Join everything\nSELECT p.id, p.noteable_type, COALESCE(i.iid, m.iid), COALESCE(i.title, m.title),\n proj.path_with_namespace, p.last_note_at,\n COALESCE(nc.note_count, 0), COALESCE(pa.participants, '')\nFROM picked p\nJOIN projects proj ON p.project_id = proj.id\nLEFT JOIN issues i ON p.issue_id = i.id\nLEFT JOIN merge_requests m ON p.merge_request_id = m.id\nLEFT JOIN note_counts nc ON 
nc.discussion_id = p.id\nLEFT JOIN participants pa ON pa.discussion_id = p.id\nORDER BY p.last_note_at DESC\n```\n\n### CRITICAL BUG PREVENTION: note_counts and participants MUST be separate CTEs.\nA single CTE with `SELECT DISTINCT discussion_id, author_username` then `COUNT(*)` produces a PARTICIPANT count, not a NOTE count. A discussion with 5 notes from 2 people would show note_count: 2 instead of 5.\n\n### Participants post-processing in Rust:\n```rust\nlet mut participants: Vec = csv.split('\\x1F').map(String::from).collect();\nparticipants.sort(); // deterministic — GROUP_CONCAT order is undefined\nconst MAX_PARTICIPANTS: usize = 50;\nlet participants_total = participants.len() as u32;\nlet participants_truncated = participants.len() > MAX_PARTICIPANTS;\n```\n\n### Total count also uses two variants (global/scoped), same match pattern.\n\n### Unit separator X'1F' for GROUP_CONCAT (not comma — usernames could theoretically contain commas)\n\n## Files\n\n- `src/cli/commands/who.rs`\n\n## TDD Loop\n\nRED:\n```\ntest_active_query — insert discussion + 2 notes by same user; verify:\n - total_unresolved_in_window = 1\n - discussions.len() = 1\n - participants = [\"reviewer_b\"]\n - note_count = 2 (NOT 1 — this was a real regression in iteration 4)\n - discussion_id > 0\ntest_active_participants_sorted — insert notes by zebra_user then alpha_user; verify sorted [\"alpha_user\", \"zebra_user\"]\n```\n\nGREEN: Implement query_active with both SQL variants and the shared map_row closure\nVERIFY: `cargo test -- active`\n\n## Acceptance Criteria\n\n- [ ] test_active_query passes with note_count = 2 (not participant count)\n- [ ] test_active_participants_sorted passes (alphabetical order)\n- [ ] discussion_id included in output (stable entity ID for agents)\n- [ ] Default since window: 7d\n- [ ] Bounded participants: cap 50, with total + truncated metadata\n\n## Edge Cases\n\n- note_count vs participant_count: MUST be separate CTEs (see bug prevention above)\n- 
GROUP_CONCAT order is undefined — sort participants in Rust after parsing\n- SQLite doesn't support GROUP_CONCAT(DISTINCT col, separator) — use subquery with SELECT DISTINCT then GROUP_CONCAT\n- Two SQL variants: prepare exactly ONE statement per invocation (don't prepare both)\n- entity_type mapping: \"MergeRequest\" -> \"MR\", else \"Issue\"","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:40:38.995549Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.598085Z","closed_at":"2026-02-08T04:10:29.598047Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. All quality gates pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-m7k1","depends_on_id":"bd-2ldg","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-m7k1","depends_on_id":"bd-34rr","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} @@ -345,7 +345,7 @@ {"id":"bd-pgdw","title":"OBSERV: Add root tracing span with run_id to sync and ingest","description":"## Background\nA root tracing span per command invocation provides the top of the span hierarchy. All child spans (ingest_issues, fetch_pages, etc.) inherit the run_id field, making every log line within a run filterable by jq.\n\n## Approach\nIn run_sync() (src/cli/commands/sync.rs:54), after generating run_id, create a root span:\n\n```rust\npub async fn run_sync(config: &Config, options: SyncOptions) -> Result {\n let run_id = &uuid::Uuid::new_v4().to_string()[..8];\n let _root = tracing::info_span!(\"sync\", %run_id).entered();\n // ... existing sync pipeline code\n}\n```\n\nIn run_ingest() (src/cli/commands/ingest.rs:107), same pattern:\n\n```rust\npub async fn run_ingest(...) -> Result {\n let run_id = &uuid::Uuid::new_v4().to_string()[..8];\n let _root = tracing::info_span!(\"ingest\", %run_id, resource_type).entered();\n // ... 
existing ingest code\n}\n```\n\nCRITICAL: The _root guard must live for the entire function scope. If it drops early (e.g., shadowed or moved into a block), child spans lose their parent context. Use let _root (underscore prefix) to signal intentional unused binding that's kept alive for its Drop impl.\n\nFor async functions, use .entered() NOT .enter(). In async Rust, Span::enter() returns a guard that is NOT Send, which prevents the future from being sent across threads. However, .entered() on an info_span! creates an Entered which is also !Send. For async, prefer:\n\n```rust\nlet root_span = tracing::info_span!(\"sync\", %run_id);\nasync move {\n // ... body\n}.instrument(root_span).await\n```\n\nOr use #[instrument] on the function itself with the run_id field.\n\n## Acceptance Criteria\n- [ ] Root span established for every sync and ingest invocation\n- [ ] run_id appears in span context of all child log lines\n- [ ] jq 'select(.spans[]? | .run_id)' can extract all lines from a run\n- [ ] Span is active for entire function duration (not dropped early)\n- [ ] Works correctly with async/await (span propagated across .await points)\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/cli/commands/sync.rs (add root span in run_sync, line ~54)\n- src/cli/commands/ingest.rs (add root span in run_ingest, line ~107)\n\n## TDD Loop\nRED: test_root_span_propagates_run_id (capture JSON log output, verify run_id in span context)\nGREEN: Add root spans to run_sync and run_ingest\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- Async span propagation: .entered() is !Send. For async functions, use .instrument() or #[instrument]. The run_sync function is async (line 54: pub async fn run_sync).\n- Nested command calls: run_sync calls run_ingest internally. If both create root spans, we get a nested hierarchy: sync > ingest. 
This is correct behavior -- the ingest span becomes a child of sync.\n- Span storage: tracing-subscriber registry handles span storage automatically. No manual setup needed beyond adding the layer.","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T15:54:07.771605Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:19:33.006274Z","closed_at":"2026-02-04T17:19:33.006227Z","close_reason":"Added root tracing spans with run_id to run_sync() and run_ingest() using .instrument() pattern for async compatibility","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-pgdw","depends_on_id":"bd-2ni","type":"parent-child","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-pgdw","depends_on_id":"bd-37qw","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-pr1","title":"Implement lore stats CLI command","description":"## Background\nThe stats command provides visibility into the document/search/embedding pipeline health. It reports counts (DocumentStats, EmbeddingStats, FtsStats, QueueStats), verifies consistency between tables (--check), and repairs inconsistencies (--repair). 
This is essential for diagnosing sync issues and validating Gate A/B/C correctness.\n\n## Approach\nCreate `src/cli/commands/stats.rs` per PRD Section 4.6.\n\n**Stats structs (PRD-exact):**\n```rust\n#[derive(Debug, Serialize)]\npub struct Stats {\n pub documents: DocumentStats,\n pub embeddings: EmbeddingStats,\n pub fts: FtsStats,\n pub queues: QueueStats,\n}\n\n#[derive(Debug, Serialize)]\npub struct DocumentStats {\n pub issues: usize,\n pub mrs: usize,\n pub discussions: usize,\n pub total: usize,\n pub truncated: usize,\n}\n\n#[derive(Debug, Serialize)]\npub struct EmbeddingStats {\n /// Documents with at least one embedding (chunk_index=0 exists in embedding_metadata)\n pub embedded: usize,\n pub pending: usize,\n pub failed: usize,\n /// embedded / total_documents * 100 (document-level, not chunk-level)\n pub coverage_pct: f64,\n /// Total chunks across all embedded documents\n pub total_chunks: usize,\n}\n\n#[derive(Debug, Serialize)]\npub struct FtsStats { pub indexed: usize }\n\n#[derive(Debug, Serialize)]\npub struct QueueStats {\n pub dirty_sources: usize,\n pub dirty_sources_failed: usize,\n pub pending_discussion_fetches: usize,\n pub pending_discussion_fetches_failed: usize,\n}\n```\n\n**IntegrityCheck struct (PRD-exact):**\n```rust\n#[derive(Debug, Serialize)]\npub struct IntegrityCheck {\n pub documents_count: usize,\n pub fts_count: usize,\n pub embeddings_count: usize,\n pub metadata_count: usize,\n pub orphaned_embeddings: usize,\n pub hash_mismatches: usize,\n pub ok: bool,\n}\n```\n\n**RepairResult struct (PRD-exact):**\n```rust\n#[derive(Debug, Serialize)]\npub struct RepairResult {\n pub orphaned_embeddings_deleted: usize,\n pub stale_embeddings_cleared: usize,\n pub missing_fts_repopulated: usize,\n}\n```\n\n**Core functions:**\n- `run_stats(config) -> Result` — gather all stats\n- `run_integrity_check(config) -> Result` — verify consistency\n- `run_repair(config) -> Result` — fix issues\n\n**Integrity checks (per PRD):**\n1. 
documents count == documents_fts count\n2. All `embeddings.rowid / 1000` map to valid `documents.id` (orphan detection)\n3. `embedding_metadata.document_hash == documents.content_hash` for chunk_index=0 rows (staleness uses `document_hash`, NOT `chunk_hash`)\n\n**Repair operations (PRD-exact):**\n1. Delete orphaned embedding_metadata (document_id NOT IN documents)\n2. Delete orphaned vec0 rows: `DELETE FROM embeddings WHERE rowid / 1000 NOT IN (SELECT id FROM documents)` — uses `rowid / 1000` for chunked scheme\n3. Clear stale embeddings: find documents where `embedding_metadata.document_hash != documents.content_hash` (chunk_index=0 comparison), delete ALL chunks for those docs (range-based: `rowid >= doc_id * 1000 AND rowid < (doc_id + 1) * 1000`)\n4. FTS rebuild: `INSERT INTO documents_fts(documents_fts) VALUES('rebuild')` — full rebuild, NOT optimize. PRD note: partial fix is fragile with external-content FTS; rebuild is guaranteed correct.\n\n**CLI args (PRD-exact):**\n```rust\n#[derive(Args)]\npub struct StatsArgs {\n #[arg(long)]\n check: bool,\n #[arg(long, requires = \"check\")]\n repair: bool, // --repair requires --check\n}\n```\n\n## Acceptance Criteria\n- [ ] Document counts by type: issues, mrs, discussions, total, truncated\n- [ ] Embedding coverage is document-level (not chunk-level): `embedded / total * 100`\n- [ ] Embedding stats include total_chunks count\n- [ ] FTS indexed count reported\n- [ ] Queue stats: dirty_sources + dirty_sources_failed, pending_discussion_fetches + pending_discussion_fetches_failed\n- [ ] --check verifies: FTS count == documents count, orphan embeddings, hash mismatches\n- [ ] Orphan detection uses `rowid / 1000` for chunked embedding scheme\n- [ ] Hash mismatch uses `document_hash` (not `chunk_hash`) for document-level staleness\n- [ ] --repair deletes orphaned embeddings (range-based for chunks)\n- [ ] --repair clears stale metadata (document_hash != content_hash at chunk_index=0)\n- [ ] --repair uses FTS `rebuild` 
(not `optimize`) for correct-by-construction repair\n- [ ] --repair requires --check (Clap `requires` attribute)\n- [ ] Human output: formatted with aligned columns\n- [ ] JSON output: `{\"ok\": true, \"data\": stats}`\n- [ ] `cargo build` succeeds\n\n## Files\n- `src/cli/commands/stats.rs` — new file\n- `src/cli/commands/mod.rs` — add `pub mod stats;`\n- `src/cli/mod.rs` — add StatsArgs, wire up stats subcommand\n- `src/main.rs` — add stats command handler\n\n## TDD Loop\nRED: Integration tests:\n- `test_stats_empty_db` — all counts 0, coverage 0%\n- `test_stats_with_documents` — correct counts by type\n- `test_integrity_check_healthy` — ok=true when consistent\n- `test_integrity_check_fts_mismatch` — detects FTS/doc count divergence\n- `test_integrity_check_orphan_embeddings` — detects orphaned rowids\n- `test_repair_rebuilds_fts` — FTS count matches after repair\n- `test_repair_cleans_orphans` — orphaned embeddings deleted\n- `test_repair_clears_stale` — stale metadata cleared (doc_hash mismatch)\nGREEN: Implement stats, integrity check, repair\nVERIFY: `cargo build && cargo test stats`\n\n## Edge Cases\n- Empty database: all counts 0, coverage 0%, no integrity issues\n- Gate A only (no embeddings table): skip embedding stats gracefully\n- --repair on healthy DB: no-op, reports \"no issues found\" / zero counts\n- FTS rebuild on large DB: may be slow\n- --repair without --check: Clap rejects (requires attribute enforces dependency)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:50.232629Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:54:31.065586Z","closed_at":"2026-01-30T17:54:31.065501Z","close_reason":"Implemented stats CLI with document counts by type, embedding coverage, FTS index count, queue stats, --check integrity (FTS mismatch, orphan embeddings, stale metadata), --repair (rebuild FTS, delete orphans, clear stale). Human + JSON output. 
Builds clean.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-pr1","depends_on_id":"bd-3qs","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-qpk3","title":"Add gitlab.username config field","description":"## Background\nThe `lore me` command needs to know the current user's GitLab username to query for assigned issues, authored MRs, and reviewing MRs. Currently `GitLabConfig` in `src/core/config.rs` has `base_url`, `token_env_var`, and optional `token` — but no username field.\n\n## Approach\nAdd an optional `username` field to `GitLabConfig` at `src/core/config.rs:8-19`:\n```rust\n#[derive(Debug, Clone, Deserialize)]\npub struct GitLabConfig {\n #[serde(rename = \"baseUrl\")]\n pub base_url: String,\n\n #[serde(rename = \"tokenEnvVar\", default = \"default_token_env_var\")]\n pub token_env_var: String,\n\n #[serde(default)]\n pub token: Option,\n\n #[serde(default)]\n pub username: Option, // NEW — AC-1.1\n}\n```\n\nThe field is single-word so it needs NO `serde(rename)` — just `#[serde(default)]` so existing configs without it parse cleanly (same pattern as `token`). 
Username is case-sensitive (AC-1.3) — store as-is, no normalization.\n\n## Acceptance Criteria\n- [ ] `GitLabConfig` has `pub username: Option` field\n- [ ] Field uses `#[serde(default)]` (like `token` does on line 17)\n- [ ] Field deserializes from `gitlab.username` in config.json\n- [ ] Field is optional — existing configs without it still parse correctly (Option + default)\n- [ ] Username stored as-is (case-sensitive, no lowercasing)\n- [ ] cargo test passes (no existing tests should break)\n\n## Files\n- MODIFY: src/core/config.rs (add field to GitLabConfig struct around line 18)\n\n## TDD Anchor\nRED: Write `test_config_parses_username` in `src/core/config.rs` tests:\n```rust\nlet json = r#\"{\"gitlab\":{\"baseUrl\":\"https://gitlab.com\",\"username\":\"jdoe\"},\"projects\":[]}\"#;\nlet config: Config = serde_json::from_str(json).unwrap();\nassert_eq!(config.gitlab.username, Some(\"jdoe\".to_string()));\n```\nAlso test missing field:\n```rust\nlet json = r#\"{\"gitlab\":{\"baseUrl\":\"https://gitlab.com\"},\"projects\":[]}\"#;\nlet config: Config = serde_json::from_str(json).unwrap();\nassert_eq!(config.gitlab.username, None);\n```\nGREEN: Add the field to the struct.\nVERIFY: `cargo test config_parses_username`\n\n## Edge Cases\n- Existing config files without the field must not break (Option + #[serde(default)] handles this)\n- Empty string `\"\"` → Some(\"\") — let the resolution function (bd-1f1f) handle treating it as None\n- The `ScoringConfig.excluded_usernames: Vec` is unrelated — different purpose\n\n## Dependency Context\nNo upstream dependencies. 
Consumed by bd-1f1f (username resolution reads this field).","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-19T19:35:03.806072Z","created_by":"tayloreernisse","updated_at":"2026-02-20T16:09:13.039716Z","closed_at":"2026-02-20T16:09:13.039655Z","close_reason":"Implemented by lore-me agent swarm","compaction_level":0,"original_size":0} -{"id":"bd-qpq6","title":"Phase 3d: Signal handler rewrite for asupersync","description":"## What\nRewrite the signal handler in core/shutdown.rs (extracted in Phase 0a) to use asupersync Cx instead of tokio::spawn.\n\n## Rearchitecture Context (2026-03-06)\n- Signal handler call sites are in src/app/handlers.rs (physically), include\\!'d into main.rs scope\n- After Phase 0a extracts them to core/shutdown.rs, the callers will be in src/app/handlers.rs\n\n## Implementation\n\n```rust\n// Phase 0a version (tokio):\npub fn install_ctrl_c_handler(signal: ShutdownSignal) {\n tokio::spawn(async move { ... });\n}\n\n// Phase 3d version (asupersync):\npub async fn install_ctrl_c_handler(cx: &Cx, signal: ShutdownSignal) {\n cx.spawn(\"ctrl-c-handler\", async move |cx| {\n cx.shutdown_signal().await;\n eprintln\\!(\"\\nInterrupted, finishing current batch... (Ctrl+C again to force quit)\");\n signal.cancel();\n cx.shutdown_signal().await;\n std::process::exit(130);\n });\n}\n```\n\n## Cleanup Concern\nstd::process::exit(130) on second Ctrl+C bypasses all drop guards, flush operations, and asupersync region cleanup. This is intentional (user demanded hard exit) but means any in-progress DB transaction will be abandoned mid-write. 
SQLite journaling makes this safe (uncommitted transactions rolled back on next open).\n\nConsider logging a warning before exit so users understand incomplete operations may need re-sync.\n\nVerify this holds for WAL mode if enabled.\n\n## Signature Change\n- fn -> async fn\n- Takes cx: &Cx as first parameter\n- Callers in src/app/handlers.rs must pass cx (these are include\\!'d into main.rs scope)\n\n## Files Changed\n- src/core/shutdown.rs (~10 LOC rewrite)\n- src/app/handlers.rs (update call sites to pass cx — physically here, logically in main.rs via include\\!())\n\n## Testing\n- cargo check --all-targets\n- Manual test: launch lore sync, press Ctrl+C, verify graceful shutdown message\n\n## Depends On\n- Phase 0a (signal handler must be extracted first)\n- Phase 3c (entrypoint must provide Cx)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:40:42.496146Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:52:10.827329Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-qpq6","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:42.501252Z","created_by":"tayloreernisse"},{"issue_id":"bd-qpq6","depends_on_id":"bd-21fb","type":"blocks","created_at":"2026-03-06T18:42:52.410229Z","created_by":"tayloreernisse"},{"issue_id":"bd-qpq6","depends_on_id":"bd-2wok","type":"blocks","created_at":"2026-03-06T18:42:52.540421Z","created_by":"tayloreernisse"}]} +{"id":"bd-qpq6","title":"Phase 3d: Signal handler rewrite for asupersync","description":"## What\nRewrite the signal handler in core/shutdown.rs (extracted in Phase 0a) to use asupersync Cx instead of tokio::spawn.\n\n## Rearchitecture Context (2026-03-06)\n- Signal handler call sites are in src/app/handlers.rs (physically), include\\!'d into main.rs scope\n- After Phase 0a extracts them to core/shutdown.rs, the callers will be in src/app/handlers.rs\n\n## Implementation\n\n```rust\n// Phase 0a version 
(tokio):\npub fn install_ctrl_c_handler(signal: ShutdownSignal) {\n tokio::spawn(async move { ... });\n}\n\n// Phase 3d version (asupersync):\npub async fn install_ctrl_c_handler(cx: &Cx, signal: ShutdownSignal) {\n cx.spawn(\"ctrl-c-handler\", async move |cx| {\n cx.shutdown_signal().await;\n eprintln\\!(\"\\nInterrupted, finishing current batch... (Ctrl+C again to force quit)\");\n signal.cancel();\n cx.shutdown_signal().await;\n std::process::exit(130);\n });\n}\n```\n\n## Cleanup Concern\nstd::process::exit(130) on second Ctrl+C bypasses all drop guards, flush operations, and asupersync region cleanup. This is intentional (user demanded hard exit) but means any in-progress DB transaction will be abandoned mid-write. SQLite journaling makes this safe (uncommitted transactions rolled back on next open).\n\nConsider logging a warning before exit so users understand incomplete operations may need re-sync.\n\nVerify this holds for WAL mode if enabled.\n\n## Signature Change\n- fn -> async fn\n- Takes cx: &Cx as first parameter\n- Callers in src/app/handlers.rs must pass cx (these are include\\!'d into main.rs scope)\n\n## Files Changed\n- src/core/shutdown.rs (~10 LOC rewrite)\n- src/app/handlers.rs (update call sites to pass cx — physically here, logically in main.rs via include\\!())\n\n## Testing\n- cargo check --all-targets\n- Manual test: launch lore sync, press Ctrl+C, verify graceful shutdown message\n\n## Depends On\n- Phase 0a (signal handler must be extracted first)\n- Phase 3c (entrypoint must provide Cx)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:40:42.496146Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.631927Z","closed_at":"2026-03-06T21:11:12.631882Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration 
branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-qpq6","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:42.501252Z","created_by":"tayloreernisse"},{"issue_id":"bd-qpq6","depends_on_id":"bd-21fb","type":"blocks","created_at":"2026-03-06T18:42:52.410229Z","created_by":"tayloreernisse"},{"issue_id":"bd-qpq6","depends_on_id":"bd-2wok","type":"blocks","created_at":"2026-03-06T18:42:52.540421Z","created_by":"tayloreernisse"}]} {"id":"bd-r3wm","title":"Description","description":"Another test","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T16:52:04.745618Z","updated_at":"2026-02-12T16:52:10.757707Z","closed_at":"2026-02-12T16:52:10.757667Z","close_reason":"test artifacts","compaction_level":0,"original_size":0} {"id":"bd-s3rc","title":"WHO: Workload mode query (query_workload)","description":"## Background\n\nWorkload mode answers \"What is person X working on?\" — a four-section snapshot of a user's active work items: assigned issues, authored MRs, MRs they're reviewing, and unresolved discussions they participate in.\n\n## Approach\n\nFour independent SQL queries, all using the same parameter pattern: `rusqlite::params![username, project_id, since_ms, limit_plus_one]`\n\n### Key design decisions:\n- **since_ms is Option**: unlike other modes, Workload has NO default time window. Unresolved discussions and open issues are relevant regardless of age. When --since is explicitly provided, (?3 IS NULL OR ...) activates filtering.\n- **Canonical refs**: SQL computes project-qualified references directly:\n - Issues: `p.path_with_namespace || '#' || i.iid` -> \"group/project#42\"\n - MRs: `p.path_with_namespace || '!' 
|| m.iid` -> \"group/project!100\"\n- **Discussions**: use EXISTS subquery to check user participation, CASE for ref separator (# vs !)\n\n### Query 1: Open issues assigned to user\n```sql\nSELECT i.iid, (p.path_with_namespace || '#' || i.iid) AS ref,\n i.title, p.path_with_namespace, i.updated_at\nFROM issues i\nJOIN issue_assignees ia ON ia.issue_id = i.id\nJOIN projects p ON i.project_id = p.id\nWHERE ia.username = ?1 AND i.state = 'opened'\n AND (?2 IS NULL OR i.project_id = ?2)\n AND (?3 IS NULL OR i.updated_at >= ?3)\nORDER BY i.updated_at DESC LIMIT ?4\n```\n\n### Query 2: Open MRs authored (similar pattern, m.author_username = ?1)\n### Query 3: Open MRs where user is reviewer (JOIN mr_reviewers, includes m.author_username in output)\n### Query 4: Unresolved discussions where user participated (EXISTS notes subquery)\n\n### Per-section truncation:\n```rust\nlet assigned_issues_truncated = assigned_issues.len() > limit;\nlet assigned_issues = assigned_issues.into_iter().take(limit).collect();\n// ... 
same for all 4 sections\n```\n\n### WorkloadResult struct:\n```rust\npub struct WorkloadResult {\n pub username: String,\n pub assigned_issues: Vec,\n pub authored_mrs: Vec,\n pub reviewing_mrs: Vec,\n pub unresolved_discussions: Vec,\n pub assigned_issues_truncated: bool,\n pub authored_mrs_truncated: bool,\n pub reviewing_mrs_truncated: bool,\n pub unresolved_discussions_truncated: bool,\n}\n```\n\n## Files\n\n- `src/cli/commands/who.rs`\n\n## TDD Loop\n\nRED: `test_workload_query` — insert project, issue+assignee, MR; verify assigned_issues.len()=1, authored_mrs.len()=1\nGREEN: Implement all 4 queries with prepare_cached()\nVERIFY: `cargo test -- workload`\n\n## Acceptance Criteria\n\n- [ ] test_workload_query passes\n- [ ] Canonical refs contain project path (group/project#iid format)\n- [ ] since_ms=None means no time filtering (all open items returned)\n- [ ] All 4 sections have independent truncation flags\n\n## Edge Cases\n\n- since_ms is Option (not i64) — Workload is the only mode with optional time window\n- Discussions: --since filters on d.last_note_at (recent activity), not creation time\n- Reviewing MRs: include m.author_username in output (who wrote the MR being reviewed)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:40:27.800169Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.597273Z","closed_at":"2026-02-08T04:10:29.597228Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. All quality gates pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-s3rc","depends_on_id":"bd-2ldg","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-s3rc","depends_on_id":"bd-34rr","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-ser","title":"Implement MR ingestion module","description":"## Background\nMR ingestion module with cursor-based sync. 
Follows the same pattern as issue ingestion from CP1. Discussion sync eligibility is determined via DB query AFTER ingestion (not in-memory collection) to avoid memory growth on large projects.\n\n## Approach\nCreate `src/ingestion/merge_requests.rs` with:\n1. `IngestMergeRequestsResult` - Aggregated stats\n2. `ingest_merge_requests()` - Main ingestion function\n3. `upsert_merge_request()` - Single MR upsert\n4. Helper functions for labels, assignees, reviewers, cursor management\n\n## Files\n- `src/ingestion/merge_requests.rs` - New module\n- `src/ingestion/mod.rs` - Export new module\n- `tests/mr_ingestion_tests.rs` - Integration tests\n\n## Acceptance Criteria\n- [ ] `IngestMergeRequestsResult` has: fetched, upserted, labels_created, assignees_linked, reviewers_linked\n- [ ] `ingest_merge_requests()` returns `Result`\n- [ ] Page-boundary cursor updates (not item-count modulo)\n- [ ] Tuple-based cursor filtering: `(updated_at, gitlab_id)`\n- [ ] Transaction per MR for atomicity\n- [ ] Raw payload stored for each MR\n- [ ] Labels: clear-and-relink pattern (removes stale)\n- [ ] Assignees: clear-and-relink pattern\n- [ ] Reviewers: clear-and-relink pattern\n- [ ] `reset_discussion_watermarks()` for --full sync\n- [ ] `cargo test mr_ingestion` passes\n\n## TDD Loop\nRED: `cargo test ingest_mr` -> module not found\nGREEN: Add ingestion module with full logic\nVERIFY: `cargo test mr_ingestion`\n\n## Main Function Signature\n```rust\npub async fn ingest_merge_requests(\n conn: &Connection,\n client: &GitLabClient,\n config: &Config,\n project_id: i64, // Local DB project ID\n gitlab_project_id: i64, // GitLab project ID\n full_sync: bool, // Reset cursor if true\n) -> Result\n```\n\n## Ingestion Loop (page-based)\n```rust\nlet mut page = 1u32;\nloop {\n let page_result = client.fetch_merge_requests_page(...).await?;\n \n for mr in &page_result.items {\n // Tuple cursor filtering\n if let (Some(cursor_ts), Some(cursor_id)) = (cursor_updated_at, cursor_gitlab_id) {\n if 
mr_updated_at < cursor_ts { continue; }\n if mr_updated_at == cursor_ts && mr.id <= cursor_id { continue; }\n }\n \n // Begin transaction\n let tx = conn.unchecked_transaction()?;\n \n // Store raw payload\n let payload_id = store_payload(&tx, ...)?;\n \n // Transform and upsert\n let transformed = transform_merge_request(&mr, project_id)?;\n let upsert_result = upsert_merge_request(&tx, &transformed.merge_request, payload_id)?;\n \n // Clear-and-relink labels\n clear_mr_labels(&tx, local_mr_id)?;\n for label in &labels { ... }\n \n // Clear-and-relink assignees\n clear_mr_assignees(&tx, local_mr_id)?;\n for username in &transformed.assignee_usernames { ... }\n \n // Clear-and-relink reviewers\n clear_mr_reviewers(&tx, local_mr_id)?;\n for username in &transformed.reviewer_usernames { ... }\n \n tx.commit()?;\n \n // Track for cursor\n last_updated_at = Some(mr_updated_at);\n last_gitlab_id = Some(mr.id);\n }\n \n // Page-boundary cursor flush\n if let (Some(updated_at), Some(gitlab_id)) = (last_updated_at, last_gitlab_id) {\n update_cursor(conn, project_id, \"merge_requests\", updated_at, gitlab_id)?;\n }\n \n if page_result.is_last_page { break; }\n page = page_result.next_page.unwrap_or(page + 1);\n}\n```\n\n## Full Sync Watermark Reset\n```rust\nfn reset_discussion_watermarks(conn: &Connection, project_id: i64) -> Result<()> {\n conn.execute(\n \"UPDATE merge_requests\n SET discussions_synced_for_updated_at = NULL,\n discussions_sync_attempts = 0,\n discussions_sync_last_error = NULL\n WHERE project_id = ?\",\n [project_id],\n )?;\n Ok(())\n}\n```\n\n## DB Helper Functions\n- `get_cursor(conn, project_id) -> (Option, Option)` - Get (updated_at, gitlab_id)\n- `update_cursor(conn, project_id, resource_type, updated_at, gitlab_id)`\n- `reset_cursor(conn, project_id, resource_type)`\n- `upsert_merge_request(conn, mr, payload_id) -> Result`\n- `clear_mr_labels(conn, mr_id)`\n- `link_mr_label(conn, mr_id, label_id)`\n- `clear_mr_assignees(conn, mr_id)`\n- 
`upsert_mr_assignee(conn, mr_id, username)`\n- `clear_mr_reviewers(conn, mr_id)`\n- `upsert_mr_reviewer(conn, mr_id, username)`\n\n## Edge Cases\n- Cursor rewind may cause refetch of already-seen MRs (tuple filtering handles this)\n- Large projects: 10k+ MRs - page-based cursor prevents massive refetch on crash\n- Labels/assignees/reviewers may change - clear-and-relink ensures correctness","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:41.967459Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:15:24.526208Z","closed_at":"2026-01-27T00:15:24.526142Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-ser","depends_on_id":"bd-34o","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-ser","depends_on_id":"bd-3ir","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-ser","depends_on_id":"bd-iba","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} @@ -367,9 +367,9 @@ {"id":"bd-y095","title":"Implement SyncDeltaLedger for post-sync filtered navigation","description":"## Background\n\nAfter a sync completes, the Sync Summary screen shows delta counts (+12 new issues, +3 new MRs). Pressing `i` or `m` should navigate to Issue/MR List filtered to show ONLY the entities that changed in this sync run. The SyncDeltaLedger is an in-memory data structure (not persisted to DB) that records the exact IIDs of new/updated entities during a sync run. It lives for the duration of one TUI session and is cleared when a new sync starts. If the ledger is unavailable (e.g., after app restart), the Sync Summary falls back to a timestamp-based filter using `sync_status.last_completed_at`.\n\n## Approach\n\nCreate a `sync_delta.rs` module with:\n\n1. 
**`SyncDeltaLedger` struct**:\n ```rust\n pub struct SyncDeltaLedger {\n issues_new: Vec<i64>, // IIDs of newly created issues\n issues_updated: Vec<i64>, // IIDs of updated (not new) issues\n mrs_new: Vec<i64>, // IIDs of newly created MRs\n mrs_updated: Vec<i64>, // IIDs of updated MRs\n discussions_new: usize, // count only (too many to track individually)\n events_new: usize, // count only\n completed_at: Option<i64>, // timestamp when sync finished (fallback anchor)\n }\n ```\n2. **Builder pattern** — `SyncDeltaLedger::new()` starts empty, populated during sync via:\n - `record_issue(iid: i64, is_new: bool)`\n - `record_mr(iid: i64, is_new: bool)`\n - `record_discussions(count: usize)`\n - `record_events(count: usize)`\n - `finalize(completed_at: i64)` — marks ledger as complete\n3. **Query methods**:\n - `new_issue_iids() -> &[i64]` — for `i` key navigation in Summary mode\n - `new_mr_iids() -> &[i64]` — for `m` key navigation\n - `all_changed_issue_iids() -> Vec<i64>` — new + updated combined\n - `all_changed_mr_iids() -> Vec<i64>` — new + updated combined\n - `is_available() -> bool` — true if finalize() was called\n - `fallback_timestamp() -> Option<i64>` — completed_at for timestamp-based fallback\n4. **`clear()`** — resets all fields when a new sync starts\n\nThe ledger is owned by `SyncState` (part of `AppState`) and populated by the sync action handler when processing `SyncResult` from `run_sync()`. 
The existing `SyncResult` struct (src/cli/commands/sync.rs:30) already tracks `issues_updated` and `mrs_updated` counts but not individual IIDs — the TUI sync action will need to collect IIDs from the ingest callbacks.\n\n## Acceptance Criteria\n- [ ] `SyncDeltaLedger::new()` creates an empty ledger with `is_available() == false`\n- [ ] `record_issue(42, true)` adds 42 to `issues_new`; `record_issue(43, false)` adds to `issues_updated`\n- [ ] `new_issue_iids()` returns only new IIDs, `all_changed_issue_iids()` returns new + updated\n- [ ] `finalize(ts)` sets `is_available() == true` and stores the timestamp\n- [ ] `clear()` resets everything back to empty with `is_available() == false`\n- [ ] `fallback_timestamp()` returns None before finalize, Some(ts) after\n- [ ] Ledger handles >10,000 IIDs without issues (just Vec growth)\n\n## Files\n- CREATE: crates/lore-tui/src/sync_delta.rs\n- MODIFY: crates/lore-tui/src/lib.rs (add `pub mod sync_delta;`)\n\n## TDD Anchor\nRED: Write `test_empty_ledger_not_available` that asserts `SyncDeltaLedger::new().is_available() == false` and `new_issue_iids().is_empty()`.\nGREEN: Implement the struct with new() and is_available().\nVERIFY: cargo test -p lore-tui sync_delta\n\nAdditional tests:\n- test_record_and_query_issues\n- test_record_and_query_mrs\n- test_finalize_makes_available\n- test_clear_resets_everything\n- test_all_changed_combines_new_and_updated\n- test_fallback_timestamp\n\n## Edge Cases\n- Recording the same IID twice (e.g., issue updated twice during sync) — should deduplicate or allow duplicates? Allow duplicates (Vec, not HashSet) for simplicity; consumers can deduplicate if needed.\n- Very large syncs with >50,000 entities — Vec is fine, no cap needed.\n- Calling query methods before finalize — returns data so far (is_available=false signals incompleteness).\n\n## Dependency Context\n- Depends on bd-2x2h (Sync screen) which owns SyncState and drives the sync lifecycle. 
The ledger is a field of SyncState.\n- Consumed by Sync Summary mode's `i`/`m` key handlers to produce filtered Issue/MR List navigation with exact IID sets.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T19:29:38.738460Z","created_by":"tayloreernisse","updated_at":"2026-02-12T19:29:48.475698Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-y095","depends_on_id":"bd-2x2h","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-ymd","title":"[CP1] Final validation - Gate A through D","description":"Run all tests and verify all internal gates pass.\n\n## Gate A: Issues Only (Must Pass First)\n- [ ] gi ingest --type=issues fetches all issues from configured projects\n- [ ] Issues stored with correct schema, including last_seen_at\n- [ ] Cursor-based sync is resumable (re-run fetches only new/updated)\n- [ ] Incremental cursor updates every 100 issues\n- [ ] Raw payloads stored for each issue\n- [ ] gi list issues and gi count issues work\n\n## Gate B: Labels Correct (Must Pass)\n- [ ] Labels extracted and stored (name-only)\n- [ ] Label links created correctly\n- [ ] Stale label links removed on re-sync (verified with test)\n- [ ] Label count per issue matches GitLab\n\n## Gate C: Dependent Discussion Sync (Must Pass)\n- [ ] Discussions fetched for issues with updated_at advancement\n- [ ] Notes stored with is_system flag correctly set\n- [ ] Raw payloads stored for discussions and notes\n- [ ] discussions_synced_for_updated_at watermark updated after sync\n- [ ] Unchanged issues skip discussion refetch (verified with test)\n- [ ] Bounded concurrency (dependent_concurrency respected)\n\n## Gate D: Resumability Proof (Must Pass)\n- [ ] Kill mid-run, rerun; bounded redo (cursor progress preserved)\n- [ ] No redundant discussion refetch after crash recovery\n- [ ] Single-flight lock prevents concurrent runs\n\n## Final Gate (Must Pass)\n- [ ] All unit tests pass (cargo 
test)\n- [ ] All integration tests pass (mocked with wiremock)\n- [ ] cargo clippy passes with no warnings\n- [ ] cargo fmt --check passes\n- [ ] Compiles with --release\n\n## Validation Commands\ncargo test\ncargo clippy -- -D warnings\ncargo fmt --check\ncargo build --release\n\nFiles: All CP1 files\nDone when: All gate criteria pass","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-01-25T16:59:26.795633Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:02.132613Z","closed_at":"2026-01-25T17:02:02.132613Z","deleted_at":"2026-01-25T17:02:02.132608Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-ypa","title":"Implement timeline expand phase: BFS cross-reference expansion","description":"## Background\n\nThe expand phase is step 3 of the timeline pipeline (spec Section 3.2). Starting from seed entities, it performs BFS over entity_references to discover related entities not matched by keywords.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 3.2 step 3, Section 3.5 (expanded_entities JSON).\n\n## Codebase Context\n\n- entity_references table exists (migration 011) with columns: source_entity_type, source_entity_id, target_entity_type, target_entity_id, target_project_path, target_entity_iid, reference_type, source_method, created_at\n- reference_type CHECK: `'closes' | 'mentioned' | 'related'`\n- source_method CHECK: `'api' | 'note_parse' | 'description_parse'` — use these values in provenance, NOT the spec's original values\n- Indexes: idx_entity_refs_source (source_entity_type, source_entity_id), idx_entity_refs_target (target_entity_id WHERE NOT NULL)\n\n## Approach\n\nCreate `src/core/timeline_expand.rs`:\n\n```rust\nuse std::collections::{HashSet, VecDeque};\nuse rusqlite::Connection;\nuse crate::core::timeline::{EntityRef, ExpandedEntityRef, UnresolvedRef};\n\npub struct ExpandResult {\n pub 
expanded_entities: Vec<ExpandedEntityRef>,\n pub unresolved_references: Vec<UnresolvedRef>,\n}\n\npub fn expand_timeline(\n conn: &Connection,\n seeds: &[EntityRef],\n depth: u32, // 0=no expansion, 1=default, 2+=deep\n include_mentions: bool, // --expand-mentions flag\n max_entities: usize, // cap at 100 to prevent explosion\n) -> Result<ExpandResult> { ... }\n```\n\n### BFS Algorithm\n\n```\nvisited: HashSet<(String, i64)> = seeds as set (entity_type, entity_id)\nqueue: VecDeque<(EntityRef, u32)> for multi-hop\n\nFor each seed:\n query_neighbors(conn, seed, edge_types) -> outgoing + incoming refs\n - Outgoing: SELECT target_* FROM entity_references WHERE source_entity_type=? AND source_entity_id=? AND reference_type IN (...)\n - Incoming: SELECT source_* FROM entity_references WHERE target_entity_type=? AND target_entity_id=? AND reference_type IN (...)\n - Unresolved (target_entity_id IS NULL): collect in UnresolvedRef, don't traverse\n - New resolved: add to expanded with provenance (via_from, via_reference_type, via_source_method)\n - If current_depth < depth: enqueue for further BFS\n```\n\n### Edge Type Filtering\n\n```rust\nfn edge_types(include_mentions: bool) -> Vec<&'static str> {\n if include_mentions {\n vec![\"closes\", \"related\", \"mentioned\"]\n } else {\n vec![\"closes\", \"related\"]\n }\n}\n```\n\n### Provenance (Critical for spec compliance)\n\nEach expanded entity needs via object per spec Section 3.5:\n- via_from: EntityRef of the entity that referenced this one\n- via_reference_type: from entity_references.reference_type column\n- via_source_method: from entity_references.source_method column (**codebase values: 'api', 'note_parse', 'description_parse'**)\n\nRegister in `src/core/mod.rs`: `pub mod timeline_expand;`\n\n## Acceptance Criteria\n\n- [ ] BFS traverses outgoing AND incoming edges in entity_references\n- [ ] Default: only \"closes\" and \"related\" edges (not \"mentioned\")\n- [ ] --expand-mentions: also traverses \"mentioned\" edges\n- [ ] depth=0: returns empty expanded 
list\n- [ ] max_entities cap prevents explosion (default 100)\n- [ ] Provenance: via_source_method uses codebase values (api/note_parse/description_parse), NOT spec values\n- [ ] Unresolved references (target_entity_id IS NULL) collected, not traversed\n- [ ] No duplicates: visited set by (entity_type, entity_id)\n- [ ] Self-references skipped\n- [ ] Module registered in src/core/mod.rs\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/core/timeline_expand.rs` (NEW)\n- `src/core/mod.rs` (add `pub mod timeline_expand;`)\n\n## TDD Loop\n\nRED: Tests in `src/core/timeline_expand.rs`:\n- `test_expand_depth_zero` - returns empty\n- `test_expand_finds_linked_entity` - seed issue -> closes -> linked MR\n- `test_expand_bidirectional` - starting from target also finds source\n- `test_expand_respects_max_entities`\n- `test_expand_skips_mentions_by_default`\n- `test_expand_includes_mentions_when_flagged`\n- `test_expand_collects_unresolved`\n- `test_expand_tracks_provenance` - verify via_source_method is 'api' not 'api_closes_issues'\n\nTests need in-memory DB with migrations 001-014 applied + entity_references test data.\n\nGREEN: Implement BFS.\n\nVERIFY: `cargo test --lib -- timeline_expand`\n\n## Edge Cases\n\n- Circular references: visited set prevents infinite loop\n- Entity referenced from multiple seeds: first-come provenance wins\n- Empty entity_references: returns empty, not error\n- Cross-project refs with NULL target_entity_id: add to unresolved","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:33:08.659381Z","created_by":"tayloreernisse","updated_at":"2026-02-05T21:49:46.868460Z","closed_at":"2026-02-05T21:49:46.868410Z","close_reason":"Completed: Created src/core/timeline_expand.rs with BFS cross-reference expansion. Bidirectional traversal, depth limiting, mention filtering, max entity cap, provenance tracking, unresolved reference collection. 10 tests pass. 
All quality gates pass.","compaction_level":0,"original_size":0,"labels":["gate-3","phase-b","query"],"dependencies":[{"issue_id":"bd-ypa","depends_on_id":"bd-32q","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-ypa","depends_on_id":"bd-3ia","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-ypa","depends_on_id":"bd-ike","type":"parent-child","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} -{"id":"bd-yq49","title":"Phase 3e: Rate limiter sleep migration","description":"## What\nReplace tokio::time::sleep with asupersync::time::sleep in the rate limiter backoff.\n\n## Implementation\n```rust\n// Before\nuse tokio::time::sleep;\nsleep(delay).await;\n\n// After\nuse asupersync::time::sleep;\nsleep(delay).await;\n```\n\n## Why This Is Trivial\nThe sleep API is identical — both take a Duration and return a future that resolves after the delay. The only change is the import path.\n\n## Files Changed\n- src/gitlab/client.rs (~2 LOC: 1 import + 1 usage, or just the import if usage is identical)\n\n## Testing\n- cargo check --all-targets\n\n## Depends On\n- Phase 3a (asupersync must be in Cargo.toml)","status":"open","priority":2,"issue_type":"task","created_at":"2026-03-06T18:40:48.567703Z","created_by":"tayloreernisse","updated_at":"2026-03-06T18:42:53.372517Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-yq49","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:48.569191Z","created_by":"tayloreernisse"},{"issue_id":"bd-yq49","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:53.372486Z","created_by":"tayloreernisse"}]} +{"id":"bd-yq49","title":"Phase 3e: Rate limiter sleep migration","description":"## What\nReplace tokio::time::sleep with asupersync::time::sleep in the rate limiter backoff.\n\n## Implementation\n```rust\n// Before\nuse 
tokio::time::sleep;\nsleep(delay).await;\n\n// After\nuse asupersync::time::sleep;\nsleep(delay).await;\n```\n\n## Why This Is Trivial\nThe sleep API is identical — both take a Duration and return a future that resolves after the delay. The only change is the import path.\n\n## Files Changed\n- src/gitlab/client.rs (~2 LOC: 1 import + 1 usage, or just the import if usage is identical)\n\n## Testing\n- cargo check --all-targets\n\n## Depends On\n- Phase 3a (asupersync must be in Cargo.toml)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-03-06T18:40:48.567703Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.637953Z","closed_at":"2026-03-06T21:11:12.637608Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-yq49","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:40:48.569191Z","created_by":"tayloreernisse"},{"issue_id":"bd-yq49","depends_on_id":"bd-39yp","type":"blocks","created_at":"2026-03-06T18:42:53.372486Z","created_by":"tayloreernisse"}]} {"id":"bd-z0s","title":"[CP1] Final validation - Gate A through D","description":"Run all tests and verify all internal gates pass.\n\n## Gate A: Issues Only (Must Pass First)\n- [ ] gi ingest --type=issues fetches all issues from configured projects\n- [ ] Issues stored with correct schema, including last_seen_at\n- [ ] Cursor-based sync is resumable (re-run fetches only new/updated)\n- [ ] Incremental cursor updates every 100 issues\n- [ ] Raw payloads stored for each issue\n- [ ] gi list issues and gi count issues work\n\n## Gate B: Labels Correct (Must Pass)\n- [ ] Labels extracted and stored (name-only)\n- [ ] Label links created correctly\n- [ ] **Stale label links removed on re-sync** (verified with test)\n- [ ] Label count per issue matches GitLab\n\n## Gate C: Dependent Discussion Sync (Must Pass)\n- [ ] 
Discussions fetched for issues with updated_at advancement\n- [ ] Notes stored with is_system flag correctly set\n- [ ] Raw payloads stored for discussions and notes\n- [ ] discussions_synced_for_updated_at watermark updated after sync\n- [ ] **Unchanged issues skip discussion refetch** (verified with test)\n- [ ] Bounded concurrency (dependent_concurrency respected)\n\n## Gate D: Resumability Proof (Must Pass)\n- [ ] Kill mid-run, rerun; bounded redo (cursor progress preserved)\n- [ ] No redundant discussion refetch after crash recovery\n- [ ] Single-flight lock prevents concurrent runs\n\n## Final Gate (Must Pass)\n- [ ] All unit tests pass (cargo test)\n- [ ] All integration tests pass (mocked with wiremock)\n- [ ] cargo clippy passes with no warnings\n- [ ] cargo fmt --check passes\n- [ ] Compiles with --release\n\n## Validation Commands\ncargo test\ncargo clippy -- -D warnings\ncargo fmt --check\ncargo build --release\n\n## Data Integrity Checks\n- SELECT COUNT(*) FROM issues matches GitLab issue count\n- Every issue has a raw_payloads row\n- Every discussion has a raw_payloads row\n- Labels in issue_labels junction all exist in labels table\n- Re-running gi ingest --type=issues fetches 0 new items\n- After removing a label in GitLab and re-syncing, the link is removed\n\nFiles: All CP1 files\nDone when: All gate criteria pass","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-25T17:02:38.459095Z","created_by":"tayloreernisse","updated_at":"2026-01-25T23:27:09.567537Z","closed_at":"2026-01-25T23:27:09.567478Z","close_reason":"All gates pass: 71 tests, clippy clean, fmt clean, release build 
successful","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-z0s","depends_on_id":"bd-17v","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-z0s","depends_on_id":"bd-2f0","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-z0s","depends_on_id":"bd-39w","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-z0s","depends_on_id":"bd-3n1","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-z0s","depends_on_id":"bd-o7b","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-z0s","depends_on_id":"bd-v6i","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-z94","title":"Implement 'lore file-history' command with human and robot output","description":"## Background\n\nThe file-history command is Gate 4's user-facing CLI. It answers \"which MRs touched this file, and why?\"\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 4.4-4.5.\n\n## Codebase Context\n\n- CLI pattern: Commands enum in src/cli/mod.rs, handler in src/main.rs, output in src/cli/commands/\n- Project resolution: resolve_project() returns project_id or exit 18 (Ambiguous)\n- Robot mode: {ok, data, meta} envelope pattern\n- merge_requests.merged_at exists (migration 006) — order by COALESCE(merged_at, updated_at) DESC\n- discussions table: issue_id, merge_request_id\n- notes table: position_new_path for DiffNotes (used for --discussions flag)\n- **mr_file_changes table**: migration 016 — already exists and is populated by drain_mr_diffs() (orchestrator.rs lines 708-726, 1514+)\n- resolve_rename_chain() from bd-1yx (src/core/file_history.rs) for rename handling\n- VALID_COMMANDS array in src/main.rs (line ~448)\n- **26 migrations** exist (001-026). LATEST_SCHEMA_VERSION derived from MIGRATIONS.len().\n\n## Approach\n\n### 1. 
FileHistoryArgs subcommand (`src/cli/mod.rs`):\n```rust\n/// Show MRs that touched a file, with linked issues and discussions\n#[command(name = \"file-history\")]\nFileHistory(FileHistoryArgs),\n```\n\n```rust\n#[derive(Parser, Debug)]\npub struct FileHistoryArgs {\n /// File path to trace history for\n pub path: String,\n /// Scope to a specific project (fuzzy match)\n #[arg(short = 'p', long)]\n pub project: Option<String>,\n /// Include discussion snippets from DiffNotes on this file\n #[arg(long)]\n pub discussions: bool,\n /// Disable rename chain resolution\n #[arg(long = \"no-follow-renames\")]\n pub no_follow_renames: bool,\n /// Only show merged MRs\n #[arg(long)]\n pub merged: bool,\n /// Maximum results\n #[arg(short = 'n', long = \"limit\", default_value = \"50\")]\n pub limit: usize,\n}\n```\n\n### 2. Query logic (`src/cli/commands/file_history.rs`):\n\n1. Resolve project (exit 18 on ambiguous)\n2. Call resolve_rename_chain() unless --no-follow-renames\n3. Query mr_file_changes for all resolved paths\n4. JOIN merge_requests for MR details\n5. Optionally fetch DiffNote discussions on the file (notes.position_new_path)\n6. Order by COALESCE(merged_at, updated_at) DESC\n7. Apply --merged filter and --limit\n\n### 3. Human output:\n```\nFile History: src/auth/oauth.rs (via 3 paths, 5 MRs)\nRename chain: src/authentication/oauth.rs -> src/auth/oauth.rs\n\n !456 \"Implement OAuth2 flow\" merged @alice 2024-01-22 modified\n !489 \"Fix OAuth token expiry\" merged @bob 2024-02-15 modified\n !512 \"Refactor auth module\" merged @carol 2024-03-01 renamed\n```\n\n### 4. 
Robot JSON:\n```json\n{\n \"ok\": true,\n \"data\": {\n \"path\": \"src/auth/oauth.rs\",\n \"rename_chain\": [\"src/authentication/oauth.rs\", \"src/auth/oauth.rs\"],\n \"merge_requests\": [\n {\n \"iid\": 456,\n \"title\": \"Implement OAuth2 flow\",\n \"state\": \"merged\",\n \"author\": \"alice\",\n \"merged_at\": \"2024-01-22T...\",\n \"change_type\": \"modified\",\n \"discussion_count\": 12,\n \"file_discussion_count\": 4,\n \"merge_commit_sha\": \"abc123\"\n }\n ]\n },\n \"meta\": {\n \"total_mrs\": 5,\n \"renames_followed\": true,\n \"paths_searched\": 2\n }\n}\n```\n\n## Acceptance Criteria\n\n- [ ] `lore file-history src/foo.rs` works with human output\n- [ ] `lore --robot file-history src/foo.rs` works with JSON envelope\n- [ ] Rename chain displayed in human output when renames detected\n- [ ] Robot JSON includes rename_chain array\n- [ ] --no-follow-renames disables resolution (queries literal path only)\n- [ ] --merged filters to merged MRs only\n- [ ] --discussions includes DiffNote snippets from notes.position_new_path matching\n- [ ] -p for project scoping (exit 18 on ambiguous)\n- [ ] -n limits results\n- [ ] No MR history: friendly message (exit 0, not error)\n- [ ] \"file-history\" added to VALID_COMMANDS array\n- [ ] robot-docs manifest includes file-history command\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n- [ ] `cargo fmt --check` passes\n\n## Files\n\n- MODIFY: src/cli/mod.rs (FileHistoryArgs struct + Commands::FileHistory variant)\n- CREATE: src/cli/commands/file_history.rs (query + human + robot output)\n- MODIFY: src/cli/commands/mod.rs (add pub mod file_history + re-exports)\n- MODIFY: src/main.rs (handler dispatch + VALID_COMMANDS + robot-docs entry)\n\n## TDD Anchor\n\nRED: No unit tests for CLI wiring — verify with cargo check + manual run.\n\nGREEN: Implement query, human renderer, robot renderer.\n\nVERIFY:\n```bash\ncargo check --all-targets\ncargo run --release -- 
file-history --help\ncargo run --release -- file-history src/main.rs\ncargo run --release -- --robot file-history src/main.rs\n```\n\n## Edge Cases\n\n- File path with spaces: clap handles quoting\n- Path not in any MR: empty result, friendly message, exit 0 (not error)\n- MRs ordered by COALESCE(merged_at, updated_at) DESC (unmerged MRs use updated_at)\n- --discussions with no DiffNotes: empty discussion section, not error\n- rename_chain omitted from robot JSON when --no-follow-renames is set\n- mr_file_changes table empty (sync hasn't fetched diffs yet): friendly message suggesting `lore sync`\n\n## Dependency Context\n\n- **bd-1yx (resolve_rename_chain)**: provides resolve_rename_chain() in src/core/file_history.rs — takes a path and returns Vec of all historical paths. MUST be implemented before this bead.\n- **bd-2yo / migration 016 (mr_file_changes)**: provides the mr_file_changes table with new_path, old_path, change_type columns. Already exists and is populated by drain_mr_diffs() in orchestrator.rs (lines 708-726, 1514+).\n- **bd-3ia (closes_issues)**: provides entity_references with reference_type='closes' linking MRs to issues. 
Used for \"linked issues\" column if extended later.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:34:09.027259Z","created_by":"tayloreernisse","updated_at":"2026-02-17T17:57:21.258978Z","closed_at":"2026-02-17T17:57:21.258929Z","close_reason":"Implemented file-history command with human/robot output, rename chain resolution, DiffNote discussions, --merged/--no-follow-renames filters, autocorrect registry, robot-docs manifest","compaction_level":0,"original_size":0,"labels":["cli","gate-4","phase-b"],"dependencies":[{"issue_id":"bd-z94","depends_on_id":"bd-14q","type":"parent-child","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-z94","depends_on_id":"bd-1yx","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-z94","depends_on_id":"bd-2yo","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-z94","depends_on_id":"bd-3ia","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} -{"id":"bd-zgke","title":"Phase 3f-Step1: Cx threading — orchestration path","description":"## What\nThread Cx from main() through command dispatch into the orchestrator and ingestion modules. Replace join_all batches with region-scoped tasks. This is the BIG PAYOFF of the migration.\n\n## Why\nThis is where structured cancellation actually matters. If Ctrl+C fires during a prefetch batch, the region cancels all in-flight HTTP requests with bounded cleanup instead of silently dropping them.\n\n## Rearchitecture Context (2026-03-06)\nA major code reorganization restructured several files referenced by this bead:\n- main.rs is now thin (419 LOC). 
Command dispatch + handlers live in src/app/handlers.rs (via include\\!() chain from main.rs)\n- cli/commands/sync.rs -> split into cli/commands/sync/ folder (mod.rs, run.rs, render.rs, surgical.rs, sync_tests.rs)\n- cli/commands/ingest.rs -> split into cli/commands/ingest/ folder (mod.rs, run.rs, render.rs)\n- cli/commands/sync_surgical.rs -> moved to cli/commands/sync/surgical.rs\n\n## Functions That Need cx: &Cx Added\n\n| Module | Functions |\n|--------|-----------|\n| src/app/handlers.rs | Command dispatch handler functions for sync, ingest (physically in handlers.rs, logically in main.rs scope via include\\!()) |\n| cli/commands/sync/run.rs | run_sync() |\n| cli/commands/ingest/run.rs | run_ingest_command(), run_ingest() |\n| cli/commands/sync/surgical.rs | run_sync_surgical() |\n| ingestion/orchestrator.rs | ingest_issues(), ingest_merge_requests(), ingest_discussions(), etc. |\n| ingestion/surgical.rs | surgical_sync() |\n\n## Region Wrapping for join_all Batches (orchestrator.rs)\n\n```rust\n// Before\nlet prefetched_batch = join_all(prefetch_futures).await;\n\n// After — cancel-correct region\n// NOTE: Pattern depends on asupersync region API. If scope.spawn() returns JoinHandle,\n// prefer collecting handles and awaiting them (preserves ordering).\n```\n\n## CRITICAL: Semantic Differences from join_all\n\n1. **Ordering**: join_all preserves input order. Channel-based collection does NOT. If downstream assumes positional alignment, send (index, result) tuples and sort. Or use JoinHandle if available.\n\n2. **Error aggregation**: join_all runs all futures even if some fail. Region cancellation may lose some results. Decide per call site: partial results OK, or retry entire batch?\n\n3. **Backpressure**: Verify asupersync region API doesn't impose implicit concurrency caps.\n\n4. **Late result loss on cancellation**: Tasks that completed but haven't been received may be lost. 
Caller must treat cancelled region results as incomplete.\n\nAUDIT EVERY join_all CALL SITE for all 4 assumptions before choosing the pattern.\n\n## Estimated Scope\n~10 functions gain cx: &Cx parameter in this step. Focus on the orchestration hot path only.\n\n## Files Changed\n- src/app/handlers.rs (dispatch handler functions — physically here, logically in main.rs via include\\!())\n- src/cli/commands/sync/run.rs (was cli/commands/sync.rs)\n- src/cli/commands/ingest/run.rs (was cli/commands/ingest.rs)\n- src/cli/commands/sync/surgical.rs (was cli/commands/sync_surgical.rs)\n- src/ingestion/orchestrator.rs (~30 LOC for region wrapping)\n- src/ingestion/surgical.rs\n\n## Testing\n- cargo check --all-targets\n- cargo test (all existing tests must pass)\n- Manual: lore sync with Ctrl+C mid-batch — verify graceful drain\n\n## Depends On\n- Phase 3c (entrypoint provides Cx)","status":"open","priority":1,"issue_type":"task","created_at":"2026-03-06T18:41:08.482800Z","created_by":"tayloreernisse","updated_at":"2026-03-06T19:51:43.829038Z","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-zgke","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:08.486544Z","created_by":"tayloreernisse"},{"issue_id":"bd-zgke","depends_on_id":"bd-2wok","type":"blocks","created_at":"2026-03-06T18:42:53.810343Z","created_by":"tayloreernisse"}]} +{"id":"bd-zgke","title":"Phase 3f-Step1: Cx threading — orchestration path","description":"## What\nThread Cx from main() through command dispatch into the orchestrator and ingestion modules. Replace join_all batches with region-scoped tasks. This is the BIG PAYOFF of the migration.\n\n## Why\nThis is where structured cancellation actually matters. 
If Ctrl+C fires during a prefetch batch, the region cancels all in-flight HTTP requests with bounded cleanup instead of silently dropping them.\n\n## Rearchitecture Context (2026-03-06)\nA major code reorganization restructured several files referenced by this bead:\n- main.rs is now thin (419 LOC). Command dispatch + handlers live in src/app/handlers.rs (via include\\!() chain from main.rs)\n- cli/commands/sync.rs -> split into cli/commands/sync/ folder (mod.rs, run.rs, render.rs, surgical.rs, sync_tests.rs)\n- cli/commands/ingest.rs -> split into cli/commands/ingest/ folder (mod.rs, run.rs, render.rs)\n- cli/commands/sync_surgical.rs -> moved to cli/commands/sync/surgical.rs\n\n## Functions That Need cx: &Cx Added\n\n| Module | Functions |\n|--------|-----------|\n| src/app/handlers.rs | Command dispatch handler functions for sync, ingest (physically in handlers.rs, logically in main.rs scope via include\\!()) |\n| cli/commands/sync/run.rs | run_sync() |\n| cli/commands/ingest/run.rs | run_ingest_command(), run_ingest() |\n| cli/commands/sync/surgical.rs | run_sync_surgical() |\n| ingestion/orchestrator.rs | ingest_issues(), ingest_merge_requests(), ingest_discussions(), etc. |\n| ingestion/surgical.rs | surgical_sync() |\n\n## Region Wrapping for join_all Batches (orchestrator.rs)\n\n```rust\n// Before\nlet prefetched_batch = join_all(prefetch_futures).await;\n\n// After — cancel-correct region\n// NOTE: Pattern depends on asupersync region API. If scope.spawn() returns JoinHandle,\n// prefer collecting handles and awaiting them (preserves ordering).\n```\n\n## CRITICAL: Semantic Differences from join_all\n\n1. **Ordering**: join_all preserves input order. Channel-based collection does NOT. If downstream assumes positional alignment, send (index, result) tuples and sort. Or use JoinHandle if available.\n\n2. **Error aggregation**: join_all runs all futures even if some fail. Region cancellation may lose some results. 
Decide per call site: partial results OK, or retry entire batch?\n\n3. **Backpressure**: Verify asupersync region API doesn't impose implicit concurrency caps.\n\n4. **Late result loss on cancellation**: Tasks that completed but haven't been received may be lost. Caller must treat cancelled region results as incomplete.\n\nAUDIT EVERY join_all CALL SITE for all 4 assumptions before choosing the pattern.\n\n## Estimated Scope\n~10 functions gain cx: &Cx parameter in this step. Focus on the orchestration hot path only.\n\n## Files Changed\n- src/app/handlers.rs (dispatch handler functions — physically here, logically in main.rs via include\\!())\n- src/cli/commands/sync/run.rs (was cli/commands/sync.rs)\n- src/cli/commands/ingest/run.rs (was cli/commands/ingest.rs)\n- src/cli/commands/sync/surgical.rs (was cli/commands/sync_surgical.rs)\n- src/ingestion/orchestrator.rs (~30 LOC for region wrapping)\n- src/ingestion/surgical.rs\n\n## Testing\n- cargo check --all-targets\n- cargo test (all existing tests must pass)\n- Manual: lore sync with Ctrl+C mid-batch — verify graceful drain\n\n## Depends On\n- Phase 3c (entrypoint provides Cx)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-03-06T18:41:08.482800Z","created_by":"tayloreernisse","updated_at":"2026-03-06T21:11:12.644670Z","closed_at":"2026-03-06T21:11:12.644623Z","close_reason":"asupersync migration complete — 5 commits pushed to asupersync-migration branch","compaction_level":0,"original_size":0,"labels":["asupersync","phase-3"],"dependencies":[{"issue_id":"bd-zgke","depends_on_id":"bd-1lj5","type":"parent-child","created_at":"2026-03-06T18:41:08.486544Z","created_by":"tayloreernisse"},{"issue_id":"bd-zgke","depends_on_id":"bd-2wok","type":"blocks","created_at":"2026-03-06T18:42:53.810343Z","created_by":"tayloreernisse"}]} {"id":"bd-zibc","title":"WHO: VALID_COMMANDS + robot-docs manifest","description":"## Background\n\nRegister the who command in main.rs so that typo suggestions work and 
robot-docs manifest includes the command for agent self-discovery.\n\n## Approach\n\n### 1. VALID_COMMANDS array (~line 471 in suggest_similar_command):\nAdd \"who\" after \"timeline\":\n```rust\nconst VALID_COMMANDS: &[&str] = &[\n \"issues\", \"mrs\", /* ... existing ... */ \"timeline\", \"who\",\n];\n```\n\n### 2. robot-docs manifest (handle_robot_docs, after \"timeline\" entry):\n```json\n\"who\": {\n \"description\": \"People intelligence: experts, workload, active discussions, overlap, review patterns\",\n \"flags\": [\"\", \"--path \", \"--active\", \"--overlap \", \"--reviews\", \"--since \", \"-p/--project\", \"-n/--limit\"],\n \"modes\": {\n \"expert\": \"lore who — Who knows about this area? (also: --path for root files)\",\n \"workload\": \"lore who — What is someone working on?\",\n \"reviews\": \"lore who --reviews — Review pattern analysis\",\n \"active\": \"lore who --active — Active unresolved discussions\",\n \"overlap\": \"lore who --overlap — Who else is touching these files?\"\n },\n \"example\": \"lore --robot who src/features/auth/\",\n \"response_schema\": {\n \"ok\": \"bool\",\n \"data\": {\n \"mode\": \"string\",\n \"input\": {\"target\": \"string|null\", \"path\": \"string|null\", \"project\": \"string|null\", \"since\": \"string|null\", \"limit\": \"int\"},\n \"resolved_input\": {\"mode\": \"string\", \"project_id\": \"int|null\", \"project_path\": \"string|null\", \"since_ms\": \"int\", \"since_iso\": \"string\", \"since_mode\": \"string (default|explicit|none)\", \"limit\": \"int\"},\n \"...\": \"mode-specific fields\"\n },\n \"meta\": {\"elapsed_ms\": \"int\"}\n }\n}\n```\n\n### 3. 
workflows JSON — add people_intelligence:\n```json\n\"people_intelligence\": [\n \"lore --robot who src/path/to/feature/\",\n \"lore --robot who @username\",\n \"lore --robot who @username --reviews\",\n \"lore --robot who --active --since 7d\",\n \"lore --robot who --overlap src/path/\",\n \"lore --robot who --path README.md\"\n]\n```\n\n## Files\n\n- `src/main.rs`\n\n## TDD Loop\n\nVERIFY: `cargo check && cargo run --release -- robot-docs | python3 -c \"import json,sys; d=json.load(sys.stdin); assert 'who' in d['commands']\"`\n\n## Acceptance Criteria\n\n- [ ] \"who\" in VALID_COMMANDS\n- [ ] `lore robot-docs` JSON contains who command with all 5 modes\n- [ ] workflows contains people_intelligence array\n- [ ] cargo check passes\n\n## Edge Cases\n\n- The VALID_COMMANDS array is used for typo suggestion via Levenshtein distance — ensure \"who\" does not collide with other short commands (it does not; closest is \"show\" at distance 2)\n- robot-docs JSON is constructed via serde_json::json!() macro inside a raw string — ensure no trailing commas or JSON syntax errors in the manually-written JSON block\n- The response_schema in robot-docs is documentation-only (not validated at runtime) — ensure it matches actual output structure from bd-3mj2\n- If handle_robot_docs location has changed since plan was written, search for \"robot-docs\" or \"robot_docs\" in main.rs to find current location","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:41:35.098890Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.601819Z","closed_at":"2026-02-08T04:10:29.601785Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. 
All quality gates pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-zibc","depends_on_id":"bd-2rk9","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-zqpf","title":"WHO: Expert mode query (query_expert)","description":"## Background\n\nExpert mode answers \"Who should I talk to about this feature/file?\" by analyzing DiffNote activity at a given path. It scores users by a combination of review breadth (distinct MRs reviewed), authorship breadth (distinct MRs authored), and review intensity (DiffNote count). This is the primary use case for lore who.\n\n## Approach\n\nSingle CTE with two UNION ALL branches (reviewer + author), then SQL-level aggregation, scoring, sorting, and LIMIT.\n\n### Key SQL pattern (prefix variant — exact variant replaces LIKE with =):\n\n```sql\nWITH activity AS (\n -- Reviewer branch: DiffNotes on other people's MRs\n SELECT n.author_username AS username, 'reviewer' AS role,\n COUNT(DISTINCT m.id) AS mr_cnt, COUNT(*) AS note_cnt,\n MAX(n.created_at) AS last_seen_at\n FROM notes n\n JOIN discussions d ON n.discussion_id = d.id\n JOIN merge_requests m ON d.merge_request_id = m.id\n WHERE n.note_type = 'DiffNote' AND n.is_system = 0\n AND n.author_username IS NOT NULL\n AND (m.author_username IS NULL OR n.author_username != m.author_username) -- self-review exclusion\n AND m.state IN ('opened','merged')\n AND n.position_new_path LIKE ?1 ESCAPE '\\'\n AND n.created_at >= ?2\n AND (?3 IS NULL OR n.project_id = ?3)\n GROUP BY n.author_username\n\n UNION ALL\n\n -- Author branch: MR authors with DiffNote activity at this path\n SELECT m.author_username AS username, 'author' AS role,\n COUNT(DISTINCT m.id) AS mr_cnt, 0 AS note_cnt,\n MAX(n.created_at) AS last_seen_at\n FROM merge_requests m\n JOIN discussions d ON d.merge_request_id = m.id\n JOIN notes n ON n.discussion_id = d.id\n WHERE n.note_type = 'DiffNote' AND n.is_system = 0\n AND m.author_username IS NOT NULL\n AND 
n.position_new_path LIKE ?1 ESCAPE '\\'\n AND n.created_at >= ?2\n AND (?3 IS NULL OR n.project_id = ?3)\n GROUP BY m.author_username\n)\nSELECT username,\n SUM(CASE WHEN role='reviewer' THEN mr_cnt ELSE 0 END) AS review_mr_count,\n SUM(CASE WHEN role='reviewer' THEN note_cnt ELSE 0 END) AS review_note_count,\n SUM(CASE WHEN role='author' THEN mr_cnt ELSE 0 END) AS author_mr_count,\n MAX(last_seen_at) AS last_seen_at,\n (SUM(CASE WHEN role='reviewer' THEN mr_cnt ELSE 0 END) * 20 +\n SUM(CASE WHEN role='author' THEN mr_cnt ELSE 0 END) * 12 +\n SUM(CASE WHEN role='reviewer' THEN note_cnt ELSE 0 END) * 1) AS score\nFROM activity\nGROUP BY username\nORDER BY score DESC, last_seen_at DESC, username ASC\nLIMIT ?4\n```\n\n### Two static SQL strings selected via `if pq.is_prefix { sql_prefix } else { sql_exact }` — the only difference is LIKE vs = on position_new_path. Both use prepare_cached().\n\n### Scoring formula: review_mr * 20 + author_mr * 12 + review_notes * 1\n- MR breadth dominates (prevents \"comment storm\" gaming)\n- Integer arithmetic (no f64 display issues)\n\n### LIMIT+1 truncation pattern:\n```rust\nlet limit_plus_one = (limit + 1) as i64;\n// ... 
query with limit_plus_one ...\nlet truncated = experts.len() > limit;\nlet experts = experts.into_iter().take(limit).collect();\n```\n\n### ExpertResult struct:\n```rust\npub struct ExpertResult {\n pub path_query: String,\n pub path_match: String, // \"exact\" or \"prefix\"\n pub experts: Vec<Expert>,\n pub truncated: bool,\n}\npub struct Expert {\n pub username: String, pub score: i64,\n pub review_mr_count: u32, pub review_note_count: u32,\n pub author_mr_count: u32, pub last_seen_ms: i64,\n}\n```\n\n## Files\n\n- `src/cli/commands/who.rs`\n\n## TDD Loop\n\nRED:\n```\ntest_expert_query — insert project, MR, discussion, 3 DiffNotes; verify expert ranking\ntest_expert_excludes_self_review_notes — author_a comments on own MR; review_mr_count must be 0\ntest_expert_truncation — 3 experts, limit=2 -> truncated=true, len=2; limit=10 -> false\n```\n\nGREEN: Implement query_expert with both SQL variants\nVERIFY: `cargo test -- expert`\n\n## Acceptance Criteria\n\n- [ ] test_expert_query passes (reviewer_b ranked first by score)\n- [ ] test_expert_excludes_self_review_notes passes (author_a has review_mr_count=0)\n- [ ] test_expert_truncation passes (truncated flag correct at both limits)\n- [ ] Default since window: 6m\n\n## Edge Cases\n\n- Self-review: MR author commenting on own diff must NOT count as reviewer (filter n.author_username != m.author_username with IS NULL guard on m.author_username)\n- MR state: only 'opened' and 'merged' — closed/unmerged MRs are noise\n- Project scoping is on n.project_id (not m.project_id) to maximize index usage\n- Author branch also filters n.is_system = 0 for consistency","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:40:20.990590Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.596337Z","closed_at":"2026-02-08T04:10:29.596299Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests.
All quality gates pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-zqpf","depends_on_id":"bd-2ldg","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-zqpf","depends_on_id":"bd-34rr","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} diff --git a/.beads/last-touched b/.beads/last-touched index cd11194..b730a4f 100644 --- a/.beads/last-touched +++ b/.beads/last-touched @@ -1 +1 @@ -bd-26km +bd-23xb