From 6b18df11b17711476c49fdd7896c1f980116d4ef Mon Sep 17 00:00:00 2001 From: teernisse Date: Fri, 13 Mar 2026 13:18:16 -0400 Subject: [PATCH] chore(beads): update issue tracking state Co-Authored-By: Claude Opus 4.6 --- .beads/issues.jsonl | 3 +++ 1 file changed, 3 insertions(+) diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl index 9d7137b..e676c9f 100644 --- a/.beads/issues.jsonl +++ b/.beads/issues.jsonl @@ -233,6 +233,7 @@ {"id":"bd-3a4k","title":"CLI: list issues status column, filter, and robot fields","description":"## Background\nList issues needs a Status column in the table, status fields in robot JSON, and a --status filter for querying by work item status name. The filter supports multiple values (OR semantics) and case-insensitive matching.\n\n## Approach\nExtend list.rs row types, SQL, table rendering. Add --status Vec to clap args. Build dynamic WHERE clause with COLLATE NOCASE. Wire into both ListFilters constructions in main.rs. Register in autocorrect.\n\n## Files\n- src/cli/commands/list.rs (row types, SQL, table, filter, color helper)\n- src/cli/mod.rs (--status flag on IssuesArgs)\n- src/main.rs (wire statuses into both ListFilters)\n- src/cli/autocorrect.rs (add --status to COMMAND_FLAGS)\n\n## Implementation\n\nIssueListRow + IssueListRowJson: add 5 status fields (all Option)\nFrom<&IssueListRow> for IssueListRowJson: clone all 5 fields\n\nquery_issues SELECT: add i.status_name, i.status_category, i.status_color, i.status_icon_name, i.status_synced_at after existing columns\n Existing SELECT has 12 columns (indices 0-11). New columns: indices 12-16.\n Row mapping: status_name: row.get(12)?, ..., status_synced_at: row.get(16)?\n\nListFilters: add pub statuses: &'a [String]\n\nWHERE clause builder (after has_due_date block):\n if statuses.len() == 1: \"i.status_name = ? COLLATE NOCASE\" + push param\n if statuses.len() > 1: \"i.status_name IN (?, ?, ...) 
COLLATE NOCASE\" + push all params\n\nTable: add \"Status\" column header (bold) between State and Assignee\n Row: match &issue.status_name -> Some: colored_cell_hex(status, color), None: Cell::new(\"\")\n\nNew helper:\n fn colored_cell_hex(content, hex: Option<&str>) -> Cell\n If no hex or colors disabled: Cell::new(content)\n Parse 6-char hex, use Cell::new(content).fg(Color::Rgb { r, g, b })\n\nIn src/cli/mod.rs IssuesArgs:\n #[arg(long, help_heading = \"Filters\")]\n pub status: Vec,\n\nIn src/main.rs handle_issues (~line 695):\n ListFilters { ..., statuses: &args.status }\nIn legacy List handler (~line 2421):\n ListFilters { ..., statuses: &[] }\n\nIn src/cli/autocorrect.rs COMMAND_FLAGS \"issues\" entry:\n Add \"--status\" between existing flags\n\n## Acceptance Criteria\n- [ ] Status column appears in table between State and Assignee\n- [ ] NULL status -> empty cell\n- [ ] Status colored by hex in human mode\n- [ ] --status \"In progress\" filters correctly\n- [ ] --status \"in progress\" matches \"In progress\" (COLLATE NOCASE)\n- [ ] --status \"To do\" --status \"In progress\" -> OR semantics (both returned)\n- [ ] Robot: status_name, status_category in each issue JSON\n- [ ] --fields supports status_name, status_category, status_color, status_icon_name, status_synced_at\n- [ ] --fields minimal does NOT include status fields\n- [ ] Autocorrect registry test passes (--status registered)\n- [ ] cargo check --all-targets passes\n\n## TDD Loop\nRED: test_list_filter_by_status, test_list_filter_by_status_case_insensitive, test_list_filter_by_multiple_statuses\nGREEN: Implement all changes across 4 files\nVERIFY: cargo test list_filter && cargo test registry_covers\n\n## Edge Cases\n- COLLATE NOCASE is ASCII-only but sufficient (all system statuses are ASCII)\n- Single-value uses = for simplicity; multi-value uses IN with dynamic placeholders\n- --status combined with other filters (--state, --label) -> AND logic\n- autocorrect registry_covers_command_flags test 
will FAIL if --status not registered\n- Legacy List command path also constructs ListFilters — needs statuses: &[]\n- Column index offset: new columns start at 12 (0-indexed)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-11T06:42:26.438Z","created_by":"tayloreernisse","updated_at":"2026-02-11T07:21:33.421297Z","closed_at":"2026-02-11T07:21:33.421247Z","close_reason":"Implemented by agent swarm — all quality gates pass (595 tests, 0 failures)","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3a4k","depends_on_id":"bd-2y79","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3a4k","depends_on_id":"bd-3dum","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-3ae","title":"Epic: CP2 Gate A - MRs Only","description":"## Background\nGate A validates core MR ingestion works before adding complexity. Proves the cursor-based sync, pagination, and basic CLI work. This is the foundation - if Gate A fails, nothing else matters.\n\n## Acceptance Criteria (Pass/Fail)\n- [ ] `gi ingest --type=merge_requests` completes without error\n- [ ] `SELECT COUNT(*) FROM merge_requests` > 0\n- [ ] `gi list mrs --limit=5` shows 5 MRs with iid, title, state, author\n- [ ] `gi count mrs` shows total count matching DB query\n- [ ] MR with `state=locked` can be stored (if exists in test data)\n- [ ] Draft MR shows `draft=1` in DB and `[DRAFT]` in list output\n- [ ] `work_in_progress=true` MR shows `draft=1` (fallback works)\n- [ ] `head_sha` populated for MRs with commits\n- [ ] `references_short` and `references_full` populated\n- [ ] Re-run ingest shows \"0 new MRs\" or minimal refetch (cursor working)\n- [ ] Cursor saved at page boundary, not item boundary\n\n## Validation Script\n```bash\n#!/bin/bash\nset -e\n\nDB_PATH=\"${XDG_DATA_HOME:-$HOME/.local/share}/gitlab-inbox/db.sqlite3\"\n\necho \"=== Gate A: MRs Only ===\"\n\n# 1. 
Clear any existing MR data for clean test\necho \"Step 1: Reset MR cursor for clean test...\"\nsqlite3 \"$DB_PATH\" \"DELETE FROM sync_cursors WHERE resource_type = 'merge_requests';\"\n\n# 2. Run MR ingestion\necho \"Step 2: Ingest MRs...\"\ngi ingest --type=merge_requests\n\n# 3. Verify MRs exist\necho \"Step 3: Verify MR count...\"\nMR_COUNT=$(sqlite3 \"$DB_PATH\" \"SELECT COUNT(*) FROM merge_requests;\")\necho \" MR count: $MR_COUNT\"\n[ \"$MR_COUNT\" -gt 0 ] || { echo \"FAIL: No MRs ingested\"; exit 1; }\n\n# 4. Verify list command\necho \"Step 4: Test list command...\"\ngi list mrs --limit=5\n\n# 5. Verify count command\necho \"Step 5: Test count command...\"\ngi count mrs\n\n# 6. Verify draft handling\necho \"Step 6: Check draft MRs...\"\nDRAFT_COUNT=$(sqlite3 \"$DB_PATH\" \"SELECT COUNT(*) FROM merge_requests WHERE draft = 1;\")\necho \" Draft MR count: $DRAFT_COUNT\"\n\n# 7. Verify head_sha population\necho \"Step 7: Check head_sha...\"\nSHA_COUNT=$(sqlite3 \"$DB_PATH\" \"SELECT COUNT(*) FROM merge_requests WHERE head_sha IS NOT NULL;\")\necho \" MRs with head_sha: $SHA_COUNT\"\n\n# 8. Verify references\necho \"Step 8: Check references...\"\nREF_COUNT=$(sqlite3 \"$DB_PATH\" \"SELECT COUNT(*) FROM merge_requests WHERE references_short IS NOT NULL;\")\necho \" MRs with references: $REF_COUNT\"\n\n# 9. Verify cursor saved\necho \"Step 9: Check cursor...\"\nCURSOR=$(sqlite3 \"$DB_PATH\" \"SELECT updated_at, gitlab_id FROM sync_cursors WHERE resource_type = 'merge_requests';\")\necho \" Cursor: $CURSOR\"\n[ -n \"$CURSOR\" ] || { echo \"FAIL: Cursor not saved\"; exit 1; }\n\n# 10. 
Re-run and verify minimal refetch\necho \"Step 10: Re-run ingest (should be minimal)...\"\ngi ingest --type=merge_requests\n# Output should show minimal or zero new MRs\n\necho \"\"\necho \"=== Gate A: PASSED ===\"\n```\n\n## Test Commands (Quick Verification)\n```bash\n# Run these in order:\ngi ingest --type=merge_requests\ngi list mrs --limit=10\ngi count mrs\n\n# Verify in DB:\nsqlite3 ~/.local/share/gitlab-inbox/db.sqlite3 \"\n SELECT \n COUNT(*) as total,\n SUM(CASE WHEN draft = 1 THEN 1 ELSE 0 END) as drafts,\n SUM(CASE WHEN head_sha IS NOT NULL THEN 1 ELSE 0 END) as with_sha,\n SUM(CASE WHEN references_short IS NOT NULL THEN 1 ELSE 0 END) as with_refs\n FROM merge_requests;\n\"\n\n# Re-run (should be no-op):\ngi ingest --type=merge_requests\n```\n\n## Dependencies\nThis gate requires these beads to be complete:\n- bd-3ir (Database migration)\n- bd-5ta (GitLab MR types)\n- bd-34o (MR transformer)\n- bd-iba (GitLab client pagination)\n- bd-ser (MR ingestion module)\n\n## Edge Cases\n- `locked` state is transitional (merge in progress); may not exist in test data\n- Some older GitLab instances may not return `head_sha` for all MRs\n- `work_in_progress` is deprecated but should still work as fallback\n- Very large projects (10k+ MRs) may take significant time on first sync","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-26T22:06:00.966522Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:48:21.057298Z","closed_at":"2026-01-27T00:48:21.057225Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3ae","depends_on_id":"bd-iba","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3ae","depends_on_id":"bd-ser","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-3as","title":"Implement timeline event collection and chronological interleaving","description":"## Background\n\nThe event collection phase is steps 4-5 of the 
timeline pipeline (spec Section 3.2). It takes seed + expanded entity sets and collects all their events from resource event tables, then interleaves chronologically.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 3.2 steps 4-5, Section 3.3 (Event Model).\n\n## Codebase Context\n\n- resource_state_events: columns include state, actor_username (not actor_gitlab_id for display), created_at, issue_id, merge_request_id, source_merge_request_iid, source_commit\n- resource_label_events: columns include action ('add'|'remove'), label_name (NULLABLE since migration 012), actor_username, created_at\n- resource_milestone_events: columns include action ('add'|'remove'), milestone_title (NULLABLE since migration 012), actor_username, created_at\n- issues table: created_at, author_username, title, web_url\n- merge_requests table: created_at, author_username, title, web_url, merged_at, updated_at\n- All timestamps are ms epoch UTC (stored as INTEGER)\n\n## Approach\n\nCreate `src/core/timeline_collect.rs`:\n\n```rust\nuse rusqlite::Connection;\nuse crate::core::timeline::{TimelineEvent, TimelineEventType, EntityRef, ExpandedEntityRef};\n\npub fn collect_events(\n conn: &Connection,\n seed_entities: &[EntityRef],\n expanded_entities: &[ExpandedEntityRef],\n evidence_notes: &[TimelineEvent], // from seed phase\n since_ms: Option, // --since filter\n limit: usize, // -n flag (default 100)\n) -> Result> { ... }\n```\n\n### Event Collection Per Entity\n\nFor each entity (seed + expanded), collect:\n\n1. **Creation event** (`Created`):\n ```sql\n -- Issues:\n SELECT created_at, author_username, title, web_url FROM issues WHERE id = ?1\n -- MRs:\n SELECT created_at, author_username, title, web_url FROM merge_requests WHERE id = ?1\n ```\n\n2. 
**State changes** (`StateChanged { state }`):\n ```sql\n SELECT state, actor_username, created_at FROM resource_state_events\n WHERE (issue_id = ?1 OR merge_request_id = ?1)\n AND (?2 IS NULL OR created_at >= ?2) -- since filter\n ORDER BY created_at ASC\n ```\n NOTE: For MRs, a state='merged' event also produces a separate Merged variant.\n\n3. **Label changes** (`LabelAdded`/`LabelRemoved`):\n ```sql\n SELECT action, label_name, actor_username, created_at FROM resource_label_events\n WHERE (issue_id = ?1 OR merge_request_id = ?1)\n AND (?2 IS NULL OR created_at >= ?2)\n ORDER BY created_at ASC\n ```\n Handle NULL label_name (deleted label): use \"[deleted label]\" as fallback.\n\n4. **Milestone changes** (`MilestoneSet`/`MilestoneRemoved`):\n ```sql\n SELECT action, milestone_title, actor_username, created_at FROM resource_milestone_events\n WHERE (issue_id = ?1 OR merge_request_id = ?1)\n AND (?2 IS NULL OR created_at >= ?2)\n ORDER BY created_at ASC\n ```\n Handle NULL milestone_title: use \"[deleted milestone]\" as fallback.\n\n5. **Merge event** (Merged, MR only):\n Derive from merge_requests.merged_at (preferred) OR resource_state_events WHERE state='merged'. 
Skip StateChanged when state='merged' — emit only the Merged variant.\n\n### Chronological Interleave\n\n```rust\nevents.sort(); // Uses Ord impl from bd-20e\nif let Some(since) = since_ms {\n events.retain(|e| e.timestamp >= since);\n}\nevents.truncate(limit);\n```\n\nRegister in `src/core/mod.rs`: `pub mod timeline_collect;`\n\n## Acceptance Criteria\n\n- [ ] Collects Created, StateChanged, LabelAdded/Removed, MilestoneSet/Removed, Merged, NoteEvidence events\n- [ ] Merged events deduplicated from StateChanged{merged} — emit only Merged variant\n- [ ] NULL label_name/milestone_title handled with fallback text\n- [ ] --since filter applied to all event types\n- [ ] Events sorted chronologically with stable tiebreak\n- [ ] Limit applied AFTER sorting\n- [ ] Evidence notes from seed phase included\n- [ ] is_seed correctly set based on entity source\n- [ ] Module registered in src/core/mod.rs\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/core/timeline_collect.rs` (NEW)\n- `src/core/mod.rs` (add `pub mod timeline_collect;`)\n\n## TDD Loop\n\nRED:\n- `test_collect_creation_event` - entity produces Created event\n- `test_collect_state_events` - state changes produce StateChanged events\n- `test_collect_merged_dedup` - state='merged' produces Merged not StateChanged\n- `test_collect_null_label_fallback` - NULL label_name uses fallback text\n- `test_collect_since_filter` - old events excluded\n- `test_collect_chronological_sort` - mixed entity events interleave correctly\n- `test_collect_respects_limit`\n\nTests need in-memory DB with migrations 001-014 applied.\n\nGREEN: Implement SQL queries and event assembly.\n\nVERIFY: `cargo test --lib -- timeline_collect`\n\n## Edge Cases\n\n- MR with merged_at=NULL and no state='merged' event: no Merged event emitted\n- Entity with 0 events in resource tables: only Created event returned\n- NULL actor_username: actor field is None\n- Timestamps at exact 
--since boundary: use >= (inclusive)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:33:08.703942Z","created_by":"tayloreernisse","updated_at":"2026-02-05T21:53:01.160429Z","closed_at":"2026-02-05T21:53:01.160380Z","close_reason":"Completed: Created src/core/timeline_collect.rs with event collection for Created, StateChanged, LabelAdded/Removed, MilestoneSet/Removed, Merged, NoteEvidence. Merged dedup (state=merged skipped in favor of Merged variant). NULL label/milestone fallbacks. Since filter, chronological sort, limit. 10 tests pass.","compaction_level":0,"original_size":0,"labels":["gate-3","phase-b","query"],"dependencies":[{"issue_id":"bd-3as","depends_on_id":"bd-1ep","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3as","depends_on_id":"bd-ike","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3as","depends_on_id":"bd-ypa","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} +{"id":"bd-3bb5","title":"Add iids filter to ListFilters and MrListFilters","description":"## Summary\nAdd an `iids: Option<&'a [i64]>` field to both `ListFilters` and `MrListFilters` so callers can pre-filter issue/MR lists by a set of known IIDs. This enables the search-first-then-filter pattern needed by `lore brief` topic mode, and benefits any future command that needs \"give me issues matching a search.\"\n\n## Changes Required\n\n### src/cli/commands/list/issues.rs\n\n1. Add field to `ListFilters<'a>` (after `order`):\n```rust\npub iids: Option<&'a [i64]>,\n```\n\n2. In `query_issues()` (the private fn that builds SQL), when `iids` is `Some`, add a WHERE clause:\n```rust\nif let Some(iids) = filters.iids {\n if !iids.is_empty() {\n let placeholders = iids.iter().map(|_| \"?\").collect::>().join(\",\");\n where_clauses.push(format!(\"i.iid IN ({placeholders})\"));\n for id in iids {\n params.push(Box::new(*id));\n }\n }\n}\n```\n\n3. 
Update all existing call sites of `ListFilters` to include `iids: None`. Grep for `ListFilters {` to find them — they're in:\n - `src/cli/commands/list/issues.rs` (tests)\n - `src/main.rs` (handle_list_issues)\n - `src/cli/commands/me/mod.rs` (if it constructs ListFilters directly)\n\n### src/cli/commands/list/mrs.rs\n\nSame pattern:\n\n1. Add `iids: Option<&'a [i64]>` to `MrListFilters<'a>`\n2. Add WHERE clause in `query_mrs()`\n3. Update all existing call sites to include `iids: None`\n\n### Finding all call sites\n\n```bash\nrg 'ListFilters\\s*\\{' src/ --type rust\nrg 'MrListFilters\\s*\\{' src/ --type rust\n```\n\nEvery match needs `iids: None` appended.\n\n## TDD\n\nRED tests (add to existing test modules in issues.rs and mrs.rs):\n\n- `test_list_issues_iids_filter`: Insert 5 issues, query with `iids: Some(&[1, 3])`, assert only IIDs 1 and 3 returned\n- `test_list_issues_iids_empty_slice`: `iids: Some(&[])` should return all issues (empty slice = no filter)\n- `test_list_issues_iids_none`: `iids: None` should return all issues (existing behavior preserved)\n- `test_list_issues_iids_combined_with_state`: `iids: Some(&[1,2,3])` + `state: Some(\"opened\")` — only opened issues in the IID set\n- `test_list_mrs_iids_filter`: Same as issues but for MRs\n- `test_list_mrs_iids_combined_with_state`: Same as issues but for MRs\n\nGREEN: Add the field and WHERE clause.\n\nVERIFY:\n```bash\ncargo test list:: && cargo clippy --all-targets -- -D warnings && cargo fmt --check\n```\n\n## Edge Cases\n- `iids: Some(&[])` — treat as no filter (don't add empty `IN ()` which is invalid SQL)\n- `iids: Some(&[999999])` — nonexistent IID returns empty result (not an error)\n- Large IID sets — SQLite handles IN clauses with hundreds of values fine; no special batching needed for our scale\n\n## Acceptance Criteria\n- [ ] `ListFilters` has `iids: Option<&'a [i64]>` field\n- [ ] `MrListFilters` has `iids: Option<&'a [i64]>` field\n- [ ] Existing call sites updated with `iids: None` (no 
behavior change)\n- [ ] WHERE clause correctly filters when iids is Some and non-empty\n- [ ] Empty slice treated as no filter\n- [ ] All existing tests still pass\n- [ ] New tests cover iids filtering, empty slice, combination with state filter","status":"open","priority":2,"issue_type":"task","created_at":"2026-03-13T15:14:47.441377Z","created_by":"tayloreernisse","updated_at":"2026-03-13T15:14:47.444188Z","compaction_level":0,"original_size":0,"labels":["cli-imp"],"dependencies":[{"issue_id":"bd-3bb5","depends_on_id":"bd-1n5q","type":"parent-child","created_at":"2026-03-13T15:14:47.444011Z","created_by":"tayloreernisse"}]} {"id":"bd-3bec","title":"Wire surgical dispatch in run_sync and update robot-docs","description":"## Background\n\nThe existing `run_sync` function (lines 63-360 of `src/cli/commands/sync.rs`) handles the normal full-sync pipeline. Once `run_sync_surgical` (bd-1i4i) is implemented, this bead wires the dispatch: when `SyncOptions` contains issue or MR IIDs, route to the surgical path instead of the normal path. This also requires updating `handle_sync_cmd` (line 2120 of `src/main.rs`) to pass through the new CLI fields (bd-1lja), and updating the robot-docs schema to document the new surgical response fields.\n\n## Approach\n\nThree changes:\n\n**1. Dispatch in `run_sync` (src/cli/commands/sync.rs)**\n\nAdd an early check at the top of `run_sync` (after line 68):\n\n```rust\npub async fn run_sync(\n config: &Config,\n options: SyncOptions,\n run_id: Option<&str>,\n signal: &ShutdownSignal,\n) -> Result {\n // Surgical dispatch: if any IIDs specified, route to surgical pipeline\n if options.is_surgical() {\n return run_sync_surgical(config, options, run_id, signal).await;\n }\n\n // ... existing normal sync pipeline unchanged ...\n}\n```\n\n**2. 
Update `handle_sync_cmd` (src/main.rs line 2120)**\n\nPass new fields from `SyncArgs` into `SyncOptions`:\n\n```rust\nlet options = SyncOptions {\n full: args.full && !args.no_full,\n force: args.force && !args.no_force,\n no_embed: args.no_embed,\n no_docs: args.no_docs,\n no_events: args.no_events,\n robot_mode,\n dry_run,\n // New surgical fields (from bd-1lja)\n issue_iids: args.issue.clone(),\n mr_iids: args.mr.clone(),\n project: args.project.clone(),\n preflight_only: args.preflight_only,\n};\n```\n\nAlso: when surgical mode is detected (issues/MRs non-empty), skip the normal SyncRunRecorder setup in `handle_sync_cmd` since `run_sync_surgical` manages its own recorder.\n\n**3. Update robot-docs (src/main.rs handle_robot_docs)**\n\nAdd documentation for the surgical sync response format. The robot-docs output should include:\n- New CLI flags: `--issue`, `--mr`, `-p`/`--project`, `--preflight-only`\n- Surgical response fields: `surgical_mode`, `surgical_iids`, `entity_results`, `preflight_only`\n- `EntitySyncResult` schema: `entity_type`, `iid`, `outcome`, `error`, `toctou_reason`\n- Exit codes for surgical-specific errors\n\n## Acceptance Criteria\n\n1. `lore sync --issue 7 -p group/project` dispatches to `run_sync_surgical`, not normal sync\n2. `lore sync` (no IIDs) follows the existing normal pipeline unchanged\n3. `handle_sync_cmd` passes `issues`, `merge_requests`, `project`, `preflight_only` from args to options\n4. `lore robot-docs` output includes surgical sync documentation\n5. All existing sync tests pass without modification\n6. 
Robot mode JSON output for surgical sync matches documented schema\n\n## Files\n\n- `src/cli/commands/sync.rs` — add dispatch check at top of `run_sync`, add `use super::sync_surgical::run_sync_surgical`\n- `src/main.rs` — update `handle_sync_cmd` to pass new fields, update robot-docs text\n- `src/cli/commands/mod.rs` — ensure `sync_surgical` module is public (may already be done by bd-1i4i)\n\n## TDD Anchor\n\nTests in `src/cli/commands/sync.rs` or a companion test file:\n\n```rust\n#[cfg(test)]\nmod dispatch_tests {\n use super::*;\n\n #[test]\n fn sync_options_with_issues_is_surgical() {\n let options = SyncOptions {\n issue_iids: vec![7],\n ..SyncOptions::default()\n };\n assert!(options.is_surgical());\n }\n\n #[test]\n fn sync_options_without_iids_is_normal() {\n let options = SyncOptions::default();\n assert!(!options.is_surgical());\n }\n\n #[test]\n fn sync_options_with_mrs_is_surgical() {\n let options = SyncOptions {\n mr_iids: vec![10, 20],\n ..SyncOptions::default()\n };\n assert!(options.is_surgical());\n }\n\n #[tokio::test]\n async fn dispatch_routes_to_surgical_when_issues_present() {\n // Integration-level test: verify run_sync with IIDs calls surgical path.\n // This test uses wiremock to mock the surgical path's GitLab calls.\n // The key assertion: when options.issue_iids is non-empty, the function\n // does NOT attempt the normal ingest flow (no project cursor queries).\n let server = wiremock::MockServer::start().await;\n wiremock::Mock::given(wiremock::matchers::method(\"GET\"))\n .and(wiremock::matchers::path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(wiremock::ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([{\n \"id\": 100, \"iid\": 7, \"project_id\": 1, \"title\": \"Test\",\n \"state\": \"opened\", \"created_at\": \"2026-01-01T00:00:00Z\",\n \"updated_at\": \"2026-02-17T00:00:00Z\",\n \"author\": {\"id\": 1, \"username\": \"dev\", \"name\": \"Dev\"},\n \"web_url\": 
\"https://gitlab.example.com/group/project/-/issues/7\"\n }])))\n .mount(&server).await;\n\n let mut config = Config::default();\n config.gitlab.url = server.uri();\n config.gitlab.token = \"test-token\".to_string();\n let options = SyncOptions {\n issue_iids: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n let result = run_sync(&config, options, Some(\"dispatch-test\"), &signal).await;\n\n // Should succeed via surgical path (or at least not panic from normal path)\n assert!(result.is_ok());\n let r = result.unwrap();\n assert_eq!(r.surgical_mode, Some(true));\n }\n\n #[test]\n fn robot_docs_includes_surgical_sync() {\n // Verify the robot-docs string contains surgical sync documentation\n // This tests the static text, not runtime behavior\n let docs = include_str!(\"../../../src/main.rs\");\n // The robot-docs handler should mention surgical sync\n // (Actual assertion depends on how robot-docs are generated)\n }\n}\n```\n\n## Edge Cases\n\n- **Dry-run + surgical**: `handle_sync_cmd` currently short-circuits dry-run before SyncRunRecorder setup (line 2149). Surgical dry-run should also short-circuit, but preflight-only is the surgical equivalent. Clarify: `--dry-run --issue 7` should be treated as `--preflight-only --issue 7`.\n- **Normal sync recorder vs surgical recorder**: `handle_sync_cmd` creates a `SyncRunRecorder` for normal sync (line 2159). When dispatching to surgical, skip this since `run_sync_surgical` creates its own. Use `!options.is_surgical()` to decide.\n- **Robot-docs backward compatibility**: New fields are additive. 
Existing robot-docs consumers that ignore unknown fields are unaffected.\n- **No project specified with IIDs**: If `--issue 7` is passed without `-p project`, the dispatch should fail with a clear usage error (validation in bd-1lja).\n\n## Dependency Context\n\n- **Depends on (upstream)**: bd-1i4i (the `run_sync_surgical` function to call), bd-1lja (SyncOptions extensions with `issues`, `merge_requests`, `project`, `preflight_only` fields), bd-wcja (SyncResult surgical fields for assertion)\n- **No downstream dependents** — this is the final wiring bead for the main code path.\n- Must NOT modify the normal sync pipeline behavior. The dispatch is a pure conditional branch at function entry.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-17T19:18:10.648172Z","created_by":"tayloreernisse","updated_at":"2026-02-18T20:36:35.149830Z","closed_at":"2026-02-18T20:36:35.149779Z","close_reason":"Surgical dispatch wired: run_sync routes to run_sync_surgical when is_surgical(), handle_sync_cmd skips recorder for surgical mode, dry-run+surgical→preflight-only, removed wrong embed validation, robot-docs updated with surgical schema","compaction_level":0,"original_size":0,"labels":["surgical-sync"]} {"id":"bd-3bo","title":"[CP1] gi count issues/discussions/notes commands","description":"Count entities in the database.\n\nCommands:\n- gi count issues → 'Issues: N'\n- gi count discussions --type=issue → 'Issue Discussions: N'\n- gi count notes --type=issue → 'Issue Notes: N (excluding M system)'\n\nFiles: src/cli/commands/count.ts\nDone when: Counts match expected values from GitLab","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T15:20:16.190875Z","created_by":"tayloreernisse","updated_at":"2026-01-25T15:21:35.156293Z","closed_at":"2026-01-25T15:21:35.156293Z","deleted_at":"2026-01-25T15:21:35.156290Z","deleted_by":"tayloreernisse","delete_reason":"delete","original_type":"task","compaction_level":0,"original_size":0} 
{"id":"bd-3bpk","title":"NOTE-0A: Upsert/sweep for issue discussion notes","description":"## Background\nIssue discussion note ingestion uses a delete/reinsert pattern (DELETE FROM notes WHERE discussion_id = ? at line 132-135 of src/ingestion/discussions.rs then re-insert). This makes notes.id unstable across syncs. MR discussion notes already use upsert (ON CONFLICT(gitlab_id) DO UPDATE at line 470-536 of src/ingestion/mr_discussions.rs) producing stable IDs. Phase 2 depends on stable notes.id as source_id for note documents.\n\n## Approach\nRefactor src/ingestion/discussions.rs to match the MR pattern in src/ingestion/mr_discussions.rs:\n\n1. Create shared NoteUpsertOutcome struct (in src/ingestion/discussions.rs, also used by mr_discussions.rs):\n pub struct NoteUpsertOutcome { pub local_note_id: i64, pub changed_semantics: bool }\n\n2. Replace insert_note() (line 201-233) with upsert_note_for_issue(). Current signature is:\n fn insert_note(conn: &Connection, discussion_id: i64, note: &NormalizedNote, payload_id: Option) -> Result<()>\n New signature:\n fn upsert_note_for_issue(conn: &Connection, discussion_id: i64, note: &NormalizedNote, last_seen_at: i64, payload_id: Option) -> Result\n\n Use ON CONFLICT(gitlab_id) DO UPDATE SET body, note_type, updated_at, last_seen_at, resolvable, resolved, resolved_by, resolved_at, position_old_path, position_new_path, position_old_line, position_new_line, position_type, position_line_range_start, position_line_range_end, position_base_sha, position_start_sha, position_head_sha\n\n IMPORTANT: The current issue insert_note() only populates: gitlab_id, discussion_id, project_id, note_type, is_system, author_username, body, created_at, updated_at, last_seen_at, position (integer array order), resolvable, resolved, resolved_by, resolved_at, raw_payload_id. It does NOT populate the decomposed position columns (position_new_path, etc.). The MR upsert_note() at line 470 DOES populate all decomposed position columns. 
Your upsert must include ALL columns from the MR pattern. The NormalizedNote struct (from src/gitlab/transformers.rs) has all position fields.\n\n3. Change detection via pre-read: SELECT existing note before upsert, compare semantic fields (body, note_type, resolved, resolved_by, positions). Exclude updated_at/last_seen_at from semantic comparison. Use IS NOT for NULL-safe comparison.\n\n4. Add sweep_stale_issue_notes(conn, discussion_id, last_seen_at) — DELETE FROM notes WHERE discussion_id = ? AND last_seen_at < ?\n\n5. Replace the delete-reinsert loop (lines 132-139) with:\n for note in notes { let outcome = upsert_note_for_issue(&tx, local_discussion_id, ¬e, last_seen_at, None)?; }\n sweep_stale_issue_notes(&tx, local_discussion_id, last_seen_at)?;\n\n6. Update upsert_note() in mr_discussions.rs (line 470) to return NoteUpsertOutcome with same semantic change detection. Current signature returns Result<()>.\n\nReference files:\n- src/ingestion/mr_discussions.rs: upsert_note() line 470, sweep_stale_notes() line 551\n- src/ingestion/discussions.rs: insert_note() line 201, delete pattern line 132-135\n- src/gitlab/transformers.rs: NormalizedNote struct definition\n\n## Files\n- MODIFY: src/ingestion/discussions.rs (refactor insert_note -> upsert + sweep, lines 132-233)\n- MODIFY: src/ingestion/mr_discussions.rs (return NoteUpsertOutcome from upsert_note at line 470)\n\n## TDD Anchor\nRED: test_issue_note_upsert_stable_id — insert 2 notes, record IDs, re-sync same gitlab_ids, assert IDs unchanged.\nGREEN: Implement upsert_note_for_issue with ON CONFLICT.\nVERIFY: cargo test upsert_stable_id -- --nocapture\nTests: test_issue_note_upsert_detects_body_change, test_issue_note_upsert_unchanged_returns_false, test_issue_note_upsert_updated_at_only_does_not_mark_semantic_change, test_issue_note_sweep_removes_stale, test_issue_note_upsert_returns_local_id\n\n## Acceptance Criteria\n- [ ] upsert_note_for_issue() uses ON CONFLICT(gitlab_id) DO UPDATE\n- [ ] Local note IDs 
stable across re-syncs of identical data\n- [ ] changed_semantics = true only for body/note_type/resolved/position changes\n- [ ] changed_semantics = false for updated_at-only changes\n- [ ] sweep removes notes with stale last_seen_at\n- [ ] MR upsert_note() returns NoteUpsertOutcome\n- [ ] Issue upsert populates ALL position columns (matching MR pattern)\n- [ ] All 6 tests pass, clippy clean\n\n## Edge Cases\n- NULL body: IS NOT comparison handles NULLs correctly\n- UNIQUE(gitlab_id) already exists on notes table (migration 002)\n- last_seen_at prevents stale-sweep of notes currently being ingested\n- Issue notes currently don't populate position_new_path etc. — the new upsert must extract these from NormalizedNote (check that the transformer populates them for issue DiffNotes)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T16:59:14.783336Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:24.151831Z","closed_at":"2026-02-12T18:13:24.151781Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["per-note","search"],"dependencies":[{"issue_id":"bd-3bpk","depends_on_id":"bd-18bf","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3bpk","depends_on_id":"bd-2b28","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3bpk","depends_on_id":"bd-2ezb","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-3bpk","depends_on_id":"bd-jbfw","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} @@ -261,6 +262,7 @@ {"id":"bd-3l56","title":"Add lore sync --tui convenience flag","description":"## Background\n\nThe PRD defines two CLI entry paths to the TUI: `lore tui` (full TUI) and `lore sync --tui` (convenience shortcut that launches the TUI directly on the Sync screen in inline mode). The `lore tui` command is covered by bd-26lp. 
This bead adds the `--tui` flag to the existing `SyncArgs` struct, which delegates to the `lore-tui` binary with `--sync` flag.\n\n## Approach\n\nTwo changes to the existing lore CLI crate (NOT the lore-tui crate):\n\n1. **Add `--tui` flag to `SyncArgs`** in `src/cli/mod.rs`:\n ```rust\n /// Show sync progress in interactive TUI (inline mode)\n #[arg(long)]\n pub tui: bool,\n ```\n\n2. **Handle the flag in sync command dispatch** in `src/main.rs` (or wherever Commands::Sync is matched):\n - If `args.tui` is true, call `resolve_tui_binary()` (from bd-26lp) and spawn it with `--sync` flag\n - Forward the config path if specified\n - Exit with the lore-tui process exit code\n - If lore-tui is not found, print a helpful error message\n\nThe `resolve_tui_binary()` function is implemented by bd-26lp (CLI integration). This bead simply adds the flag and the early-return delegation path in the sync command handler.\n\n## Acceptance Criteria\n- [ ] `lore sync --tui` is accepted by the CLI parser (no unknown flag error)\n- [ ] When `--tui` is set, the sync command delegates to `lore-tui --sync` binary\n- [ ] Config path is forwarded if `--config` was specified\n- [ ] If lore-tui binary is not found, prints error with install instructions and exits non-zero\n- [ ] `lore sync --tui --full` does NOT pass `--full` to lore-tui (TUI has its own sync controls)\n- [ ] `--tui` flag appears in `lore sync --help` output\n\n## Files\n- MODIFY: src/cli/mod.rs (add `tui: bool` field to `SyncArgs` struct at line ~776)\n- MODIFY: src/main.rs or src/cli/commands/sync.rs (add early-return delegation when `args.tui`)\n\n## TDD Anchor\nRED: Write `test_sync_tui_flag_accepted` that verifies `SyncArgs` can be parsed with `--tui` flag.\nGREEN: Add the `tui: bool` field to SyncArgs.\nVERIFY: cargo test sync_tui_flag\n\nAdditional tests:\n- test_sync_tui_flag_default_false (not set by default)\n\n## Edge Cases\n- `--tui` combined with `--dry-run` — the TUI handles dry-run internally, so `--dry-run` 
should be ignored when `--tui` is set (or warn)\n- `--tui` when lore-tui binary does not exist — clear error, not a panic\n- `--tui` in robot mode (`--robot`) — nonsensical combination, should error with \"cannot use --tui with --robot\"\n\n## Dependency Context\n- Depends on bd-26lp (CLI integration) which implements `resolve_tui_binary()` and `validate_tui_compat()` functions that this bead calls.\n- The SyncArgs struct is at src/cli/mod.rs:739. The existing fields are: full, no_full, force, no_force, no_embed, no_docs, no_events, no_file_changes, dry_run, no_dry_run.","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-02-12T19:29:40.785182Z","created_by":"tayloreernisse","updated_at":"2026-03-11T18:34:24.478277Z","deleted_at":"2026-03-11T18:34:24.478273Z","deleted_by":"tayloreernisse","delete_reason":"delete","original_type":"task","compaction_level":0,"original_size":0,"labels":["TUI"]} {"id":"bd-3lc","title":"Rename GiError to LoreError across codebase","description":"## Background\nThe codebase currently uses `GiError` as the primary error enum name (legacy from when the project was called \"gi\"). Checkpoint 3 introduces new modules (documents, search, embedding) that import error types. Renaming before Gate A work begins prevents every subsequent bead from needing to reference the old name and avoids merge conflicts across parallel work streams.\n\n## Approach\nMechanical find-and-replace using `ast-grep` or `sed`:\n1. Rename the enum declaration in `src/core/error.rs`: `pub enum GiError` -> `pub enum LoreError`\n2. Update the type alias: `pub type Result<T> = std::result::Result<T, LoreError>;`\n3. Update re-exports in `src/core/mod.rs` and `src/lib.rs`\n4. Update all `use` statements across ~16 files that import `GiError`\n5. Update any `GiError::` variant construction sites\n6. Run `cargo build` to verify no references remain\n\n**Do NOT change:**\n- Error variant names (ConfigNotFound, etc.) 
— only the enum name\n- ErrorCode enum — it's already named correctly\n- RobotError — already named correctly\n\n## Acceptance Criteria\n- [ ] `cargo build` succeeds with zero warnings about GiError\n- [ ] `rg GiError src/` returns zero results\n- [ ] `rg LoreError src/core/error.rs` shows the enum declaration\n- [ ] `src/core/mod.rs` re-exports `LoreError` (not `GiError`)\n- [ ] `src/lib.rs` re-exports `LoreError`\n- [ ] All `use crate::core::error::LoreError` imports compile\n\n## Files\n- `src/core/error.rs` — enum rename + type alias\n- `src/core/mod.rs` — re-export update\n- `src/lib.rs` — re-export update\n- All files matching `rg 'GiError' src/` (~16 files: ingestion/*.rs, cli/commands/*.rs, gitlab/*.rs, main.rs)\n\n## TDD Loop\nRED: `cargo build` fails after renaming enum but before fixing imports\nGREEN: Fix all imports; `cargo build` succeeds\nVERIFY: `cargo build && rg GiError src/ && echo \"FAIL: GiError references remain\" || echo \"PASS: clean\"`\n\n## Edge Cases\n- Some files may use `GiError` in string literals (error messages) — do NOT rename those, only type references\n- `impl From<...> for GiError` blocks must become `impl From<...> for LoreError`\n- The `thiserror` derive macro on the enum does not reference the name, so no macro changes needed","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:25:25.694773Z","created_by":"tayloreernisse","updated_at":"2026-01-30T16:50:10.612340Z","closed_at":"2026-01-30T16:50:10.612278Z","close_reason":"Completed: renamed GiError to LoreError across all 16 files, cargo build + 164 tests pass","compaction_level":0,"original_size":0} {"id":"bd-3le2","title":"Implement TaskSupervisor (dedup + cancellation + generation IDs)","description":"## Background\nBackground tasks (DB queries, sync, search) are managed by a centralized TaskSupervisor that prevents redundant work, enables cooperative cancellation, and uses generation IDs for stale-result detection. 
This is the ONLY allowed path for background work — state handlers return ScreenIntent, not Cmd::task directly.\n\n## Approach\nCreate crates/lore-tui/src/task_supervisor.rs:\n- TaskKey enum: LoadScreen(Screen), Search, SyncStream, FilterRequery(Screen) — dedup keys, NOT generation-bearing\n- TaskPriority enum: Input(0), Navigation(1), Background(2)\n- CancelToken: AtomicBool wrapper with cancel(), is_cancelled()\n- TaskHandle struct: key (TaskKey), generation (u64), cancel (Arc<CancelToken>), interrupt (Option<InterruptHandle>)\n- TaskSupervisor struct: active (HashMap<TaskKey, TaskHandle>), generation (AtomicU64)\n- submit(key: TaskKey) -> TaskHandle: cancels existing task with same key (via CancelToken), increments generation, stores new handle, returns TaskHandle\n- is_current(key: &TaskKey, generation: u64) -> bool: checks if generation matches active handle\n- complete(key: &TaskKey, generation: u64): removes handle if generation matches\n- cancel_all(): cancels all active tasks (used on quit)\n\n## Acceptance Criteria\n- [ ] submit() with existing key cancels previous task's CancelToken\n- [ ] submit() returns handle with monotonically increasing generation\n- [ ] is_current() returns true only for the latest generation\n- [ ] complete() removes handle only if generation matches (prevents removing newer task)\n- [ ] CancelToken is Arc-wrapped and thread-safe (Send+Sync)\n- [ ] TaskHandle includes optional InterruptHandle for SQLite cancellation\n- [ ] Generation counter never wraps during reasonable use (AtomicU64)\n\n## Files\n- CREATE: crates/lore-tui/src/task_supervisor.rs\n\n## TDD Anchor\nRED: Write test_submit_cancels_previous that submits two tasks with same key, asserts first task's CancelToken is cancelled.\nGREEN: Implement submit() with cancel-on-supersede logic.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml test_submit_cancels\n\nAdditional tests:\n- test_is_current_after_supersede: old generation returns false, new returns true\n- test_complete_removes_handle: after complete, 
key is absent from active map\n- test_complete_ignores_stale: completing with old generation doesn't remove newer task\n- test_generation_monotonic: submit() always returns increasing generation values\n\n## Edge Cases\n- CancelToken uses Relaxed ordering — sufficient for cooperative cancellation polling\n- Generation u64 overflow is theoretical but worth noting (would require 2^64 submissions)\n- submit() must cancel old task BEFORE storing new handle to prevent race conditions\n- InterruptHandle is rusqlite-specific — only set for tasks that lease a reader connection","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-02-12T16:56:21.102488Z","created_by":"tayloreernisse","updated_at":"2026-03-11T18:34:23.711426Z","deleted_at":"2026-03-11T18:34:23.711422Z","deleted_by":"tayloreernisse","delete_reason":"delete","original_type":"task","compaction_level":0,"original_size":0,"labels":["TUI"]} +{"id":"bd-3lg5","title":"lore brief: human renderer (lipgloss dashboard)","description":"## Summary\nImplement the human-mode renderer for `lore brief` in `src/cli/commands/brief/render_human.rs`, following the established pattern from `me/render_human.rs`. Takes a `BriefResponse` and prints a styled multi-section dashboard using lipgloss Theme, Table, section_divider, Icons, and StyledCell.\n\n## Architectural Model: me/render_human.rs\n\nThe `me` command's human renderer (727 lines) is the direct template. 
It uses:\n- `section_divider(\"Section Name (count)\")` for section headers\n- 4-space indent for items\n- `Theme::issue_ref()` / `Theme::mr_ref()` for entity references\n- `Theme::username()` for @mentions\n- `Theme::dim()` for metadata (timestamps, status, counts)\n- `Theme::warning()` + `Icons::warning()` for warnings\n- `Table` with `StyledCell` for columnar alignment (activity feed)\n- `format_relative_time()` for timestamps\n- `truncate()` for titles in tabular rows\n- `Icons::issue_opened()`, `Icons::mr_opened()`, `Icons::mr_draft()` for entity state\n\nAll imports from `crate::cli::render::{self, Align, GlyphMode, Icons, LoreRenderer, StyledCell, Table, Theme}`.\n\n## File: src/cli/commands/brief/render_human.rs\n\n### Header\n```rust\npub fn print_brief_header(response: &BriefResponse) {\n // section_divider(\"Brief: {query}\")\n // Counts line: \"3 open issues 2 active MRs top expert: @teernisse\"\n // No attention legend (unlike me — brief is domain-focused, not ego-centric)\n}\n```\n\n### Open Issues\n```rust\npub fn print_brief_issues(issues: &[BriefIssue]) {\n // Skip if empty (don't show \"Open Issues (0)\")\n // section_divider(\"Open Issues (N)\")\n // Each row: Icons::issue_opened() + Theme::issue_ref() + truncated title + status + time\n // 4-space indent\n}\n```\n\n### Active MRs\n```rust\npub fn print_brief_mrs(mrs: &[BriefMr]) {\n // Skip if empty\n // section_divider(\"Active MRs (N)\")\n // Each row: Icons::mr_opened()/mr_draft() + Theme::mr_ref() + truncated title + author + draft tag + time\n}\n```\n\n### Experts\n```rust\npub fn print_brief_experts(experts: &[BriefExpert]) {\n // Skip if empty\n // section_divider(\"Experts (N)\")\n // Table: 3 cols (username, score, last active), indent 4\n // Theme::username() for @name, Theme::dim() for score and time\n}\n```\n\n### Recent Activity\n```rust\npub fn print_brief_activity(events: &[BriefActivityEvent]) {\n // Skip if empty\n // section_divider(\"Recent Activity (N)\")\n // Table: 5 cols 
(badge, ref, summary, actor, time), indent 4\n // Mirrors me's print_activity_section pattern exactly\n // Badge styles: note=info, status=warning, label=accent, assign/review=success\n}\n```\n\n### Unresolved Threads\n```rust\npub fn print_brief_threads(threads: &[BriefThread]) {\n // Skip if empty\n // section_divider(\"Unresolved Threads (N)\")\n // Each thread:\n // Line 1: styled entity ref + note count + time (right-aligned)\n // Lines 2+: FULL note body text, wrapped at terminal width, indented\n // CRITICAL: Note body is NEVER truncated. Full text wraps naturally.\n}\n```\n\n### Related\n```rust\npub fn print_brief_related(related: &[BriefRelated]) {\n // Skip if empty (also skip if None — no embeddings)\n // section_divider(\"Related (N)\")\n // Table: 4 cols (ref, title, similarity score, time), indent 4\n}\n```\n\n### Warnings\n```rust\npub fn print_brief_warnings(warnings: &[String]) {\n // Skip if empty\n // Each warning: Icons::warning() + Theme::warning() + text, indent 2\n // Placed at bottom of output, separated from data sections\n}\n```\n\n### Full Dashboard Entry Point\n```rust\npub fn print_brief(response: &BriefResponse) {\n print_brief_header(response);\n\n // Path mode: experts first (prominent position)\n if response.mode == \"path\" {\n print_brief_experts(&response.experts);\n }\n\n print_brief_issues(&response.open_issues);\n print_brief_mrs(&response.active_mrs);\n\n // Non-path modes: experts after issues/MRs\n if response.mode != \"path\" {\n print_brief_experts(&response.experts);\n }\n\n print_brief_activity(&response.recent_activity);\n print_brief_threads(&response.unresolved_threads);\n\n if let Some(ref related) = response.related {\n print_brief_related(related);\n }\n\n print_brief_warnings(&response.warnings);\n println!();\n}\n```\n\n## CRITICAL: No truncation on notes or discussions\n\nNote/discussion body text is NEVER truncated in human output. Full text wraps naturally within the terminal width. 
This applies to:\n- Unresolved thread first-note body (the most important case)\n- Activity feed body previews\n- Related section titles (full, not truncated)\n\nIssue/MR **titles** still truncate since they're single-line tabular rows (matches existing `me` behavior).\n\n## Design Decisions\n\n| Decision | Choice | Rationale |\n|----------|--------|-----------|\n| Header style | `section_divider` + counts line | Lighter than me's full header |\n| Attention states | No — use entity state icons | Brief is domain-focused, not ego-centric |\n| Empty sections | Omit entirely | Scannable; empty sections are noise |\n| Warnings | Bottom of output | Separated from data for scannability |\n| Experts position | After header in path mode, after MRs otherwise | Path mode = expertise is the question |\n\n## Example Output\n\n```\n -- Brief: authentication -----------------------------------------------\n 3 open issues 2 active MRs top expert: @teernisse\n\n -- Open Issues (3) -----------------------------------------------------\n o #3864 Fix token refresh race condition [In progress] 3d\n o #3801 Add OAuth2 PKCE support 12d\n o #3800 Session expiry not respecting timezone 45d\n\n -- Active MRs (2) ------------------------------------------------------\n <-> !456 Implement refresh token rotation @jdoe [draft] 1d\n <-> !443 Add PKCE flow to auth middleware @asmith 5d\n\n -- Experts (3) ----------------------------------------------------------\n @teernisse 42 pts last active 2d ago\n @jdoe 28 pts last active 5d ago\n @asmith 15 pts last active 12d ago\n\n -- Recent Activity (5) -------------------------------------------------\n note #3864 \"Added retry logic for token refresh\" @jdoe 1d\n status #3864 reopened @teernisse 3d\n note !456 \"PKCE challenge method discussion\" @asmith 5d\n\n -- Unresolved Threads (2) ----------------------------------------------\n #3864 \"Should we invalidate all sessions on token rotation, or\n only the compromised one? 
The RFC recommends full rotation\n but our mobile clients would all disconnect.\" 5 notes 3d\n !456 \"PKCE code verifier length seems short at 43 chars,\n RFC 7636 recommends 43-128. Should we bump to 128\n for extra security margin?\" 2 notes 1d\n\n -- Related (3) ----------------------------------------------------------\n #3750 Session management overhaul 0.85 12d\n !412 Add CORS headers for auth endpoints 0.72 20d\n #3699 Password reset flow broken 0.68 30d\n\n ! Issue #3800 has no activity for 45 days\n ! Issue #3801 is unassigned\n```\n\n(Unicode/Nerd glyphs render in place of ASCII placeholders)\n\n## TDD\n\nRED:\n- `test_print_brief_omits_empty_sections`: BriefResponse with empty issues vec, capture stdout, assert \"Open Issues\" not present\n- `test_print_brief_shows_populated_sections`: BriefResponse with data, assert section headers present\n- `test_print_brief_threads_not_truncated`: Thread with 200-char body, assert full text appears in output\n- `test_print_brief_path_mode_experts_first`: mode=\"path\", capture stdout, assert \"Experts\" appears before \"Open Issues\"\n- `test_print_brief_warnings_at_bottom`: Warnings present, assert warning text appears after all section dividers\n\nGREEN: Implement the renderer functions.\n\nVERIFY:\n```bash\ncargo test brief::render && cargo clippy --all-targets -- -D warnings && cargo fmt --check\n```\n\n## Acceptance Criteria\n- [ ] Human output uses lipgloss Theme/Table/section_divider/Icons from render.rs\n- [ ] Note/discussion body text is NEVER truncated\n- [ ] Empty sections are omitted (not shown with count 0)\n- [ ] Path mode places Experts section prominently (before issues)\n- [ ] Warnings rendered at bottom with warning icon and style\n- [ ] Activity feed mirrors me's print_activity_section pattern\n- [ ] All entity refs styled with Theme::issue_ref()/mr_ref()\n- [ ] Output is scannable and follows existing lore visual 
language","status":"open","priority":2,"issue_type":"task","created_at":"2026-03-13T15:14:56.147019Z","created_by":"tayloreernisse","updated_at":"2026-03-13T15:14:56.149836Z","compaction_level":0,"original_size":0,"labels":["cli-imp","intelligence"],"dependencies":[{"issue_id":"bd-3lg5","depends_on_id":"bd-1n5q","type":"parent-child","created_at":"2026-03-13T15:14:56.148926Z","created_by":"tayloreernisse"},{"issue_id":"bd-3lg5","depends_on_id":"bd-i4lo","type":"blocks","created_at":"2026-03-13T15:14:56.149826Z","created_by":"tayloreernisse"}]} {"id":"bd-3lu","title":"Implement lore search CLI command (lexical mode)","description":"## Background\nThe search CLI command is the user-facing entry point for Gate A lexical search. It orchestrates the search pipeline: query parsing -> FTS5 search -> filter application -> result hydration (single round-trip) -> display. Gate B extends this same command with --mode=hybrid and --mode=semantic. The hydration query is critical for performance — it fetches all display fields + labels + paths in one SQL query using json_each() + json_group_array().\n\n## Approach\nCreate `src/cli/commands/search.rs` per PRD Section 3.4.\n\n**Key types:**\n- `SearchResultDisplay` — display-ready result with all fields (dates as ISO via `ms_to_iso`)\n- `ExplainData` — ranking explanation for --explain flag (vector_rank, fts_rank, rrf_score)\n- `SearchResponse` — wrapper with query, mode, total_results, results, warnings\n\n**Core function:**\n```rust\npub fn run_search(\n config: &Config,\n query: &str,\n mode: SearchMode,\n filters: SearchFilters,\n explain: bool,\n) -> Result<SearchResponse>\n```\n\n**Pipeline:**\n1. Parse query + filters\n2. Execute search based on mode -> ranked doc_ids (+ explain ranks)\n3. Apply post-retrieval filters via apply_filters() preserving ranking order\n4. Hydrate results in single DB round-trip using json_each + json_group_array\n5. 
Attach snippets: prefer FTS snippet, fallback to `generate_fallback_snippet()` for semantic-only\n6. Convert timestamps via `ms_to_iso()` from `crate::core::time`\n7. Build SearchResponse\n\n**Hydration query (critical — single round-trip, replaces 60 queries with 1):**\n```sql\nSELECT d.id, d.source_type, d.title, d.url, d.author_username,\n d.created_at, d.updated_at, d.content_text,\n p.path_with_namespace AS project_path,\n (SELECT json_group_array(dl.label_name)\n FROM document_labels dl WHERE dl.document_id = d.id) AS labels,\n (SELECT json_group_array(dp.path)\n FROM document_paths dp WHERE dp.document_id = d.id) AS paths\nFROM json_each(?) AS j\nJOIN documents d ON d.id = j.value\nJOIN projects p ON p.id = d.project_id\nORDER BY j.key\n```\n\n**Human output uses `console::style` for terminal formatting:**\n```rust\nuse console::style;\n// Type prefix in cyan\nprintln!(\"[{}] {} - {} ({})\", i+1, style(type_prefix).cyan(), title, score);\n// URL in dim\nprintln!(\" {}\", style(url).dim());\n```\n\n**JSON robot mode includes elapsed_ms in meta (PRD Section 3.4):**\n```rust\npub fn print_search_results_json(response: &SearchResponse, elapsed_ms: u64) {\n let output = serde_json::json!({\n \"ok\": true,\n \"data\": response,\n \"meta\": { \"elapsed_ms\": elapsed_ms }\n });\n println!(\"{}\", serde_json::to_string_pretty(&output).unwrap());\n}\n```\n\n**CLI args in `src/cli/mod.rs` (PRD Section 3.4):**\n```rust\n#[derive(Args)]\npub struct SearchArgs {\n query: String,\n #[arg(long, default_value = \"hybrid\")]\n mode: String,\n #[arg(long, value_name = \"TYPE\")]\n r#type: Option<String>,\n #[arg(long)]\n author: Option<String>,\n #[arg(long)]\n project: Option<String>,\n #[arg(long, action = clap::ArgAction::Append)]\n label: Vec<String>,\n #[arg(long)]\n path: Option<String>,\n #[arg(long)]\n after: Option<String>,\n #[arg(long)]\n updated_after: Option<String>,\n #[arg(long, default_value = \"20\")]\n limit: usize,\n #[arg(long)]\n explain: bool,\n #[arg(long, default_value = \"safe\")]\n fts_mode: 
String,\n}\n```\n\n**IMPORTANT: default_value = \"hybrid\"** — When Ollama is unavailable, hybrid mode gracefully degrades to FTS-only with a warning (not an error). `lore search` works without Ollama.\n\n## Acceptance Criteria\n- [ ] Default mode is \"hybrid\" (not \"lexical\") per PRD\n- [ ] Hybrid mode degrades gracefully to FTS-only when Ollama unavailable (warning, not error)\n- [ ] All filters work (type, author, project, label, path, after, updated_after, limit)\n- [ ] Label filter uses `clap::ArgAction::Append` for repeatable --label flags\n- [ ] Hydration in single query (not N+1) — uses json_each + json_group_array\n- [ ] Timestamps converted via `ms_to_iso()` for display (ISO format)\n- [ ] Human output uses `console::style` for colored type prefix (cyan) and dim URLs\n- [ ] JSON robot mode includes `elapsed_ms` in `meta` field\n- [ ] Semantic-only results get fallback snippets via `generate_fallback_snippet()`\n- [ ] Empty results show friendly message: \"No results found for 'query'\"\n- [ ] \"No data indexed\" message if documents table empty\n- [ ] --explain shows vector_rank, fts_rank, rrf_score per result\n- [ ] --fts-mode=safe preserves prefix `*` while escaping special chars\n- [ ] --fts-mode=raw passes FTS5 MATCH syntax through unchanged\n- [ ] --mode=semantic with 0% embedding coverage returns LoreError::EmbeddingsNotBuilt (not OllamaUnavailable)\n- [ ] SearchArgs registered in cli/mod.rs with Clap derive\n- [ ] `cargo build` succeeds\n\n## Files\n- `src/cli/commands/search.rs` — new file\n- `src/cli/commands/mod.rs` — add `pub mod search;`\n- `src/cli/mod.rs` — add SearchArgs struct, wire up search subcommand\n- `src/main.rs` — add search command handler\n\n## TDD Loop\nRED: Integration test requiring DB with documents\n- `test_lexical_search_returns_results` — FTS search returns hits\n- `test_hydration_single_query` — verify no N+1 (mock/inspect query count)\n- `test_json_output_includes_elapsed` — robot mode JSON has meta.elapsed_ms\n- 
`test_empty_results_message` — zero results shows friendly message\n- `test_fallback_snippet` — semantic-only result uses truncated content\nGREEN: Implement run_search + hydrate_results + print functions\nVERIFY: `cargo build && cargo test search`\n\n## Edge Cases\n- Zero results: display friendly empty message, JSON returns empty array\n- --mode=semantic with 0% embedding coverage: return LoreError::EmbeddingsNotBuilt\n- json_group_array returns \"[]\" for documents with no labels — parse as empty array\n- Very long snippets: truncated at display time\n- Hybrid default works without Ollama: degrades to FTS-only with warning\n- ms_to_iso with epoch 0: return valid ISO string (not crash)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:13.109876Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:52:24.320923Z","closed_at":"2026-01-30T17:52:24.320857Z","close_reason":"Implemented search CLI with FTS5 + RRF ranking, single-query hydration (json_each + json_group_array), adaptive recall, all filters, --explain, human + JSON output. Builds clean.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3lu","depends_on_id":"bd-1k1","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-3lu","depends_on_id":"bd-3q2","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-3lu","depends_on_id":"bd-3qs","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-3mj2","title":"WHO: Robot JSON output for all 5 modes","description":"## Background\n\nRobot-mode JSON output following the standard lore envelope: `{\"ok\":true,\"data\":{...},\"meta\":{\"elapsed_ms\":N}}`. 
Includes both raw CLI args (input) and computed values (resolved_input) for agent reproducibility.\n\n## Approach\n\n### Envelope structs:\n```rust\n#[derive(Serialize)]\nstruct WhoJsonEnvelope { ok: bool, data: WhoJsonData, meta: RobotMeta }\n\n#[derive(Serialize)]\nstruct WhoJsonData {\n mode: String,\n input: serde_json::Value,\n resolved_input: serde_json::Value,\n #[serde(flatten)]\n result: serde_json::Value,\n}\n```\n\n### print_who_json(run, args, elapsed_ms):\n- `input`: raw CLI args `{ target, path, project, since, limit }`\n- `resolved_input`: `{ mode, project_id, project_path, since_ms, since_iso, since_mode, limit }`\n- `result`: mode-specific JSON via *_to_json() functions using serde_json::json\\!() macro\n\n### Mode-specific JSON fields:\n- **Expert**: path_query, path_match, truncated, experts[] with ISO last_seen_at\n- **Workload**: username, 4 entity arrays with ref/project_path/ISO timestamps, summary{} counts, truncation{} per-section bools\n- **Reviews**: username, total_diffnotes, categorized_count, mrs_reviewed, categories[] with rounded percentages\n- **Active**: total_unresolved_in_window, truncated, discussions[] with discussion_id + participants + participants_total + participants_truncated\n- **Overlap**: path_query, path_match, truncated, users[] with role + touch counts + mr_refs + mr_refs_total + mr_refs_truncated\n\n### Key implementation detail — #[serde(flatten)] on result field:\nThe `result` field uses `#[serde(flatten)]` so mode-specific keys are merged into the top-level data object rather than nested. This means `data.experts` (not `data.result.experts`).\n\n### Timestamps: all use ms_to_iso() for ISO 8601 format in JSON output\n\n### Percentage rounding: Reviews categories use `(percentage * 10.0).round() / 10.0` for single decimal precision\n\n## Files\n\n- `src/cli/commands/who.rs`\n\n## TDD Loop\n\nNo unit tests for JSON serialization — the serde_json::json\\!() macro produces correct JSON by construction. 
Verification via manual robot mode invocation.\nVERIFY: `cargo check && cargo run --release -- -J who src/features/global-search/ | python3 -m json.tool`\n\n## Acceptance Criteria\n\n- [ ] cargo check passes\n- [ ] JSON output validates (valid JSON, no trailing content)\n- [ ] input echoes raw CLI args\n- [ ] resolved_input includes since_mode tri-state (default/explicit/none)\n- [ ] All timestamps in ISO 8601 format\n- [ ] Bounded metadata present (participants_total, mr_refs_total, truncation object)\n- [ ] #[serde(flatten)] correctly merges result keys into data object\n\n## Edge Cases\n\n- `#[serde(flatten)]` on the result Value means mode-specific keys must not collide with mode/input/resolved_input — verified by convention (expert uses \"experts\", workload uses \"username\", etc.)\n- serde_json::json\\!() panics are impossible for valid Rust expressions, but verify that all row.get() values in *_to_json() handle None fields correctly (author_username in WorkloadMr is Option<String> — json\\!() serializes None as null, which is correct)\n- ms_to_iso() must handle 0 and very old timestamps gracefully — produces \"1970-01-01T00:00:00Z\" for epoch 0, which is valid\n- Reviews percentage rounding: categories summing to >100% due to rounding is acceptable (display artifact) — agent consumers should not assert sum == 100\n- println\\!() for JSON output (not eprintln\\!) — errors go to stderr, data to stdout, matching all other robot-mode commands\n- If a mode returns empty results, the JSON should still be valid (empty arrays, zero counts) — serde handles this correctly","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:41:15.280907Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.600331Z","closed_at":"2026-02-08T04:10:29.600297Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. 
All quality gates pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3mj2","depends_on_id":"bd-2711","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-3mj2","depends_on_id":"bd-b51e","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-3mj2","depends_on_id":"bd-m7k1","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-3mj2","depends_on_id":"bd-s3rc","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-3mj2","depends_on_id":"bd-zqpf","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-3mk","title":"[CP1] gi list issues command","description":"List issues from the database.\n\nFlags:\n- --limit=N (default: 20)\n- --project=PATH (filter by project)\n- --state=opened|closed|all (default: all)\n\nOutput: Table with iid, title, state, author, relative time\n\nFiles: src/cli/commands/list.ts\nDone when: List displays issues with proper filtering and formatting","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T15:20:10.400664Z","created_by":"tayloreernisse","updated_at":"2026-01-25T15:21:35.155211Z","closed_at":"2026-01-25T15:21:35.155211Z","deleted_at":"2026-01-25T15:21:35.155209Z","deleted_by":"tayloreernisse","delete_reason":"delete","original_type":"task","compaction_level":0,"original_size":0} @@ -324,6 +326,7 @@ {"id":"bd-hrs","title":"Create migration 007_documents.sql","description":"## Background\nMigration 007 creates the document storage layer that Gate A's entire search pipeline depends on. It introduces 5 tables: `documents` (the searchable unit), `document_labels` and `document_paths` (for filtered search), and two queue tables (`dirty_sources`, `pending_discussion_fetches`) that drive incremental document regeneration and discussion fetching in Gate C. 
This is the most-depended-on bead in the project (6 downstream beads block on it).\n\n## Approach\nCreate `migrations/007_documents.sql` with the exact SQL from PRD Section 1.1. The schema is fully specified in the PRD — no design decisions remain.\n\nKey implementation details:\n- `documents` table has `UNIQUE(source_type, source_id)` constraint for upsert support\n- `document_labels` and `document_paths` use `WITHOUT ROWID` for compact storage\n- `dirty_sources` uses composite PK `(source_type, source_id)` with `ON CONFLICT` upsert semantics\n- `pending_discussion_fetches` uses composite PK `(project_id, noteable_type, noteable_iid)`\n- Both queue tables have `next_attempt_at` indexed for efficient backoff queries\n- `labels_hash` and `paths_hash` on documents enable write optimization (skip unchanged labels/paths)\n\nRegister the migration in `src/core/db.rs` by adding entry 7 to the `MIGRATIONS` array.\n\n## Acceptance Criteria\n- [ ] `migrations/007_documents.sql` file exists with all 5 CREATE TABLE statements\n- [ ] Migration applies cleanly on fresh DB (`cargo test migration_tests`)\n- [ ] Migration applies cleanly after CP2 schema (migrations 001-006 already applied)\n- [ ] All foreign keys enforced: `documents.project_id -> projects(id)`, `document_labels.document_id -> documents(id) ON DELETE CASCADE`, `document_paths.document_id -> documents(id) ON DELETE CASCADE`, `pending_discussion_fetches.project_id -> projects(id)`\n- [ ] All indexes created: `idx_documents_project_updated`, `idx_documents_author`, `idx_documents_source`, `idx_documents_hash`, `idx_document_labels_label`, `idx_document_paths_path`, `idx_dirty_sources_next_attempt`, `idx_pending_discussions_next_attempt`\n- [ ] `labels_hash TEXT NOT NULL DEFAULT ''` and `paths_hash TEXT NOT NULL DEFAULT ''` columns present on `documents`\n- [ ] Schema version 7 recorded in `schema_version` table\n- [ ] `cargo build` succeeds after registering migration in db.rs\n\n## Files\n- 
`migrations/007_documents.sql` — new file (copy exact SQL from PRD Section 1.1)\n- `src/core/db.rs` — add migration 7 to `MIGRATIONS` array\n\n## TDD Loop\nRED: Add migration to db.rs, run `cargo test migration_tests` — fails because SQL file missing\nGREEN: Create `migrations/007_documents.sql` with full schema\nVERIFY: `cargo test migration_tests && cargo build`\n\n## Edge Cases\n- Migration must be idempotent-safe if applied twice (INSERT into schema_version will fail on second run — this is expected and handled by the migration runner's version check)\n- `WITHOUT ROWID` tables (document_labels, document_paths) require explicit PK — already defined\n- `CHECK` constraint on `documents.source_type` must match exactly: `'issue','merge_request','discussion'`\n- `CHECK` constraint on `documents.truncated_reason` allows NULL or one of 4 specific values","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:25:25.734380Z","created_by":"tayloreernisse","updated_at":"2026-01-30T16:54:12.854351Z","closed_at":"2026-01-30T16:54:12.854149Z","close_reason":"Completed: migration 007_documents.sql with 5 tables (documents, document_labels, document_paths, dirty_sources, pending_discussion_fetches), 8 indexes, registered in db.rs, cargo build + migration tests pass","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-hrs","depends_on_id":"bd-3lc","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-hs6j","title":"Implement run_generate_docs_for_sources scoped doc regeneration","description":"## Background\n\nCurrently `regenerate_dirty_documents()` in `src/documents/regenerator.rs` processes ALL entries in the `dirty_sources` table. 
The surgical sync pipeline needs a scoped variant that only regenerates documents for specific `(source_type, source_id)` pairs — the ones produced by the surgical ingest step.\n\nThe dirty_sources table schema: `(source_type TEXT, source_id INTEGER)` primary key, where source_type is one of `'issue'`, `'merge_request'`, `'discussion'`, `'note'`. After `ingest_issue_by_iid` or `ingest_mr_by_iid` calls `mark_dirty()`, these rows exist in dirty_sources with the matching keys.\n\nThe existing `regenerate_one(conn, source_type, source_id, cache)` private function does the actual work for a single source. The scoped function can call it directly for each provided key, without going through `get_dirty_sources()` which pulls from the full table.\n\nKey requirement: the function must return the `document_id` values of regenerated documents so the scoped embedding step (bd-1elx) can process only those documents.\n\n## Approach\n\nAdd `regenerate_dirty_documents_for_sources()` to `src/documents/regenerator.rs`:\n\n```rust\npub struct RegenerateForSourcesResult {\n pub regenerated: usize,\n pub unchanged: usize,\n pub errored: usize,\n pub document_ids: Vec, // IDs of regenerated docs for scoped embedding\n}\n\npub fn regenerate_dirty_documents_for_sources(\n conn: &Connection,\n source_keys: &[(SourceType, i64)],\n) -> Result\n```\n\nImplementation:\n1. Create a `ParentMetadataCache` (same as bulk path).\n2. Iterate over provided `source_keys`.\n3. For each key, call `regenerate_one(conn, source_type, source_id, &mut cache)`.\n4. On success (changed=true): call `clear_dirty()`, query `documents` table for the document_id by `(source_type, source_id)`, push to `document_ids` vec.\n5. On success (changed=false): call `clear_dirty()`, still query for document_id (content unchanged but may need re-embedding if model changed).\n6. 
On error: call `record_dirty_error()`, increment errored count.\n\nAlso export from `src/documents/mod.rs`: `pub use regenerator::{RegenerateForSourcesResult, regenerate_dirty_documents_for_sources};`\n\n## Acceptance Criteria\n\n- [ ] `regenerate_dirty_documents_for_sources` only processes the provided source_keys, not all dirty_sources\n- [ ] Returns `document_ids` for all successfully processed documents (both regenerated and unchanged)\n- [ ] Clears dirty_sources entries for successfully processed sources\n- [ ] Records errors for failed sources without aborting the batch\n- [ ] Exported from `src/documents/mod.rs`\n- [ ] Existing `regenerate_dirty_documents` is unchanged (no regression)\n\n## Files\n\n- `src/documents/regenerator.rs` (add new function + result struct)\n- `src/documents/mod.rs` (export new function + struct)\n\n## TDD Anchor\n\nTests in `src/documents/regenerator_tests.rs` (add to existing test file):\n\n```rust\n#[test]\nfn test_scoped_regen_only_processes_specified_sources() {\n let conn = setup_test_db();\n // Insert 2 issues with dirty markers\n insert_test_issue(&conn, 1, \"Issue A\");\n insert_test_issue(&conn, 2, \"Issue B\");\n mark_dirty(&conn, SourceType::Issue, 1).unwrap();\n mark_dirty(&conn, SourceType::Issue, 2).unwrap();\n\n // Regenerate only issue 1\n let result = regenerate_dirty_documents_for_sources(\n &conn,\n &[(SourceType::Issue, 1)],\n ).unwrap();\n\n assert!(result.regenerated >= 1 || result.unchanged >= 1);\n // Issue 1 dirty cleared, issue 2 still dirty\n let remaining = get_dirty_sources(&conn).unwrap();\n assert_eq!(remaining.len(), 1);\n assert_eq!(remaining[0], (SourceType::Issue, 2));\n}\n\n#[test]\nfn test_scoped_regen_returns_document_ids() {\n let conn = setup_test_db();\n insert_test_issue(&conn, 1, \"Issue A\");\n mark_dirty(&conn, SourceType::Issue, 1).unwrap();\n\n let result = regenerate_dirty_documents_for_sources(\n &conn,\n &[(SourceType::Issue, 1)],\n ).unwrap();\n\n 
assert!(!result.document_ids.is_empty());\n // Verify document_id exists in documents table\n let exists: bool = conn.query_row(\n \"SELECT EXISTS(SELECT 1 FROM documents WHERE id = ?1)\",\n [result.document_ids[0]], |r| r.get(0),\n ).unwrap();\n assert!(exists);\n}\n\n#[test]\nfn test_scoped_regen_handles_missing_source() {\n let conn = setup_test_db();\n // Source key not in dirty_sources, regenerate_one will fail or return None\n let result = regenerate_dirty_documents_for_sources(\n &conn,\n &[(SourceType::Issue, 9999)],\n ).unwrap();\n // Should handle gracefully: either errored=1 or unchanged with no doc_id\n assert_eq!(result.document_ids.len(), 0);\n}\n```\n\n## Edge Cases\n\n- Source key exists in dirty_sources but the underlying entity was deleted: `regenerate_one` returns `None` from the extractor, calls `delete_document`, returns `Ok(true)`. No document_id to return.\n- Source key not in dirty_sources at all (already cleared by concurrent process): `regenerate_one` still works (it reads from the entity tables, not dirty_sources). But `clear_dirty` is a no-op DELETE.\n- Same source_key appears twice in the input slice: second call is idempotent (dirty already cleared, doc already up to date).\n- `unchanged` documents: content_hash matches, but we still need the document_id for embedding (model version may have changed). Include in `document_ids`.\n- Error in one source must not abort processing of remaining sources.\n\n## Dependency Context\n\n- **No blockers**: Uses only existing functions (`regenerate_one`, `clear_dirty`, `record_dirty_error`) which are all private to the regenerator module. 
New function lives in same module.\n- **Blocks bd-1i4i**: Orchestration function calls this after surgical ingest to get document_ids for scoped embedding.\n- **Feeds bd-1elx**: `document_ids` output is the input for `run_embed_for_document_ids`.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-17T19:16:14.014030Z","created_by":"tayloreernisse","updated_at":"2026-02-18T21:03:44.734135Z","closed_at":"2026-02-18T21:03:44.733986Z","close_reason":"Completed: regenerate_dirty_documents_for_sources with scoped dirty tracking, 3 tests","compaction_level":0,"original_size":0,"labels":["surgical-sync"],"dependencies":[{"issue_id":"bd-hs6j","depends_on_id":"bd-1i4i","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-hu3","title":"Write migration 011: resource event tables, entity_references, and dependent fetch queue","description":"## Background\nPhase B needs three new event tables and a generic dependent fetch queue to power temporal queries (timeline, file-history, trace). 
These tables store structured event data from GitLab Resource Events APIs, replacing fragile system note parsing for state/label/milestone changes.\n\nMigration 010_chunk_config.sql already exists, so Phase B starts at migration 011.\n\n## Approach\nCreate migrations/011_resource_events.sql with the exact schema from the Phase B spec (§1.2 + §2.2):\n\n**Event tables:**\n- resource_state_events: state changes (opened/closed/reopened/merged/locked) with source_merge_request_id for \"closed by MR\" linking\n- resource_label_events: label add/remove with label_name\n- resource_milestone_events: milestone add/remove with milestone_title + milestone_id\n\n**Cross-reference table (Gate 2):**\n- entity_references: source/target entity pairs with reference_type (closes/mentioned/related), source_method provenance, and unresolved reference support (target_entity_id NULL with target_project_path + target_entity_iid)\n\n**Dependent fetch queue:**\n- pending_dependent_fetches: generic job queue with job_type IN ('resource_events', 'mr_closes_issues', 'mr_diffs'), locked_at crash recovery, exponential backoff via attempts + next_retry_at\n\n**All tables must have:**\n- CHECK constraints for entity exclusivity (issue XOR merge_request) on event tables\n- UNIQUE constraints (gitlab_id + project_id for events, composite for queue, multi-column for references)\n- Partial indexes (WHERE issue_id IS NOT NULL, WHERE target_entity_id IS NULL, etc.)\n- CASCADE deletes on project_id and entity FKs\n\nRegister in src/core/db.rs MIGRATIONS array:\n```rust\n(\"011\", include_str!(\"../../migrations/011_resource_events.sql\")),\n```\n\nEnd migration with:\n```sql\nINSERT INTO schema_version (version, applied_at, description)\nVALUES (11, strftime('%s', 'now') * 1000, 'Resource events, entity references, and dependent fetch queue');\n```\n\n## Acceptance Criteria\n- [ ] migrations/011_resource_events.sql exists with all 4 tables + indexes + constraints\n- [ ] src/core/db.rs MIGRATIONS array 
includes (\"011\", include_str!(...))\n- [ ] `cargo build` succeeds (migration SQL compiles into binary)\n- [ ] `cargo test migration` passes (migration applies cleanly on fresh DB)\n- [ ] All CHECK constraints enforced (issue XOR merge_request on event tables)\n- [ ] All UNIQUE constraints present (prevents duplicate events/refs/jobs)\n- [ ] entity_references UNIQUE handles NULL coalescing correctly\n- [ ] pending_dependent_fetches job_type CHECK includes all three types\n\n## Files\n- migrations/011_resource_events.sql (new)\n- src/core/db.rs (add to MIGRATIONS array, line ~46)\n\n## TDD Loop\nRED: Add test in tests/migration_tests.rs:\n- `test_migration_011_creates_event_tables` - verify all 4 tables exist after migration\n- `test_migration_011_entity_exclusivity_constraint` - verify CHECK rejects both NULL and both non-NULL for issue_id/merge_request_id\n- `test_migration_011_event_dedup` - verify UNIQUE(gitlab_id, project_id) rejects duplicate events\n- `test_migration_011_entity_references_dedup` - verify UNIQUE constraint with NULL coalescing\n- `test_migration_011_queue_dedup` - verify UNIQUE(project_id, entity_type, entity_iid, job_type)\n\nGREEN: Write the migration SQL + register in db.rs\n\nVERIFY: `cargo test migration_tests -- --nocapture`\n\n## Edge Cases\n- entity_references UNIQUE uses COALESCE for NULLable columns — test with both resolved and unresolved refs\n- pending_dependent_fetches job_type CHECK — ensure 'mr_diffs' is included (Gate 4 needs it)\n- SQLite doesn't enforce CHECK on INSERT OR REPLACE — verify constraint behavior\n- The entity exclusivity CHECK must allow exactly one of issue_id/merge_request_id to be non-NULL\n- Verify CASCADE deletes work (delete project → all events/refs/jobs deleted)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:31:23.933894Z","created_by":"tayloreernisse","updated_at":"2026-02-03T16:06:28.918228Z","closed_at":"2026-02-03T16:06:28.917906Z","close_reason":"Already completed 
in prior session, re-closing after accidental reopen","compaction_level":0,"original_size":0,"labels":["gate-1","phase-b","schema"],"dependencies":[{"issue_id":"bd-hu3","depends_on_id":"bd-2zl","type":"parent-child","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} +{"id":"bd-i4lo","title":"lore brief: core section assembly, robot mode, CLI wiring","description":"## Summary\nImplement the core `lore brief` command: section assembly logic, BriefResponse type, robot mode JSON output, and CLI wiring in main.rs. This is the business logic that composes existing `run_*` functions into a unified situational awareness response.\n\n## Files to Create/Modify\n\n- **NEW:** `src/cli/commands/brief/mod.rs` — `run_brief()`, `BriefResponse`, section assembly, robot JSON\n- **MODIFY:** `src/cli/commands/mod.rs` — add `pub mod brief;`\n- **MODIFY:** `src/main.rs` — register `Brief` subcommand in `Commands` enum, add dispatch\n\n## BriefResponse Type\n\n```rust\n#[derive(Debug, Serialize)]\npub struct BriefResponse {\n pub mode: String, // \"topic\", \"path\", \"person\", \"entity\"\n pub query: String, // the input query/path/person\n pub summary: String, // auto-generated one-liner from section counts\n pub open_issues: Vec,\n pub active_mrs: Vec,\n #[serde(skip_serializing_if = \"Vec::is_empty\")]\n pub experts: Vec,\n pub recent_activity: Vec,\n pub unresolved_threads: Vec,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n pub related: Option>, // None when no embeddings\n pub warnings: Vec,\n}\n```\n\nDefine lightweight Brief-prefixed structs that map from the existing run_* return types. Keep fields minimal for the robot mode 12KB token budget.\n\n## Section Assembly (run_brief)\n\n`run_brief` is `pub async fn`. Sync callees (`run_list_issues`, `run_list_mrs`, `run_who`) are called directly. Async callees (`run_related`, `run_timeline`) use `.await`. 
No `spawn_blocking` needed.\n\n### Topic mode scoping (search-first, IID filter)\n\n```rust\nlet search_iids: Vec = if mode == \"topic\" {\n let search_result = run_search(config, query, ...).await?;\n search_result.results.iter().map(|r| r.iid).collect()\n} else {\n vec![]\n};\n// Then pass iids to ListFilters { iids: if search_iids.is_empty() { None } else { Some(&search_iids) }, ... }\n```\n\n### Verified function signatures (2026-03-13)\n\n| Module | Function | Signature |\n|--------|----------|-----------|\n| list/issues.rs:133 | `run_list_issues` | `fn(config: &Config, filters: ListFilters<'a>) -> Result` |\n| list/mrs.rs:120 | `run_list_mrs` | `fn(config: &Config, filters: MrListFilters<'a>) -> Result` |\n| who/mod.rs:115 | `run_who` | `fn(config: &Config, args: &WhoArgs) -> Result` |\n| search.rs:69 | `run_search` | `async fn(config: &Config, query: &str, cli_filters: SearchCliFilters, fts_mode: FtsQueryMode, requested_mode: &str, explain: bool) -> Result` |\n| related.rs:92 | `run_related` | `async fn(config: &Config, query_or_type: &str, iid: Option, limit: usize, project: Option<&str>) -> Result` |\n| timeline.rs:71 | `run_timeline` | `async fn(config: &Config, params: &TimelineParams) -> Result` |\n\n### CRITICAL: ListFilters/MrListFilters have lifetimes, no Default\n\nAll fields must be constructed explicitly. See the epic bead (bd-1n5q) for full field listings.\n\n### CRITICAL: WhoArgs is a clap struct\n\nTo trigger expert mode: set `path = Some(...)`. To trigger workload mode: set `target = Some(username)`. There is no `mode` field — mode resolved internally by `resolve_mode()`. 
Full field list in epic bead.\n\n### Section details\n\n| Section | Source | Limit | Fallback |\n|---------|--------|-------|----------|\n| open_issues | `run_list_issues` with state=opened, iids filter for topic mode | section_limit | empty vec |\n| active_mrs | `run_list_mrs` with state=opened, iids filter for topic mode | section_limit | empty vec |\n| experts | `run_who` Expert mode (path mode only) | 3 | empty vec or omit |\n| recent_activity | `run_timeline` | section_limit | empty vec |\n| unresolved_threads | Direct SQL: discussions WHERE resolved=false | section_limit | empty vec |\n| related | `run_related` | section_limit | None (no embeddings) |\n| warnings | computed from issue dates/state | all | empty vec |\n\n### Warning generation\nDetect: stale issues (>30d no activity), unassigned issues. See epic bead for `compute_warnings` implementation.\n\n### Mode-specific behavior\n\n- **Topic** (`query` is set, no `--path`/`--person`): Search-first IID filter for issues/MRs. No experts section unless a path can be inferred.\n- **Path** (`--path`): Experts section prominent. Issues/MRs scoped by file path.\n- **Person** (`--person`): Issues filtered by assignee, MRs by author. No experts section.\n- **Entity** (`query` matches `issues N` or `mrs N`): Single entity focus with cross-references.\n\n## Clap Registration\n\n```rust\nBrief {\n /// Free-text topic, entity type, or omit for project-wide brief\n query: Option,\n #[arg(long)]\n path: Option,\n #[arg(long)]\n person: Option,\n #[arg(short, long)]\n project: Option,\n #[arg(long, default_value = \"5\")]\n section_limit: usize,\n},\n```\n\n## Robot Mode Output\n\nStandard envelope: `{\"ok\": true, \"data\": BriefResponse, \"meta\": {\"elapsed_ms\": N, \"sections_computed\": [...]}}`. 
See epic bead for full schema.\n\n## TDD\n\nRED:\n- `test_brief_topic_returns_all_sections`: insert test data, brief with topic query, assert all section keys present\n- `test_brief_path_uses_who_expert`: brief --path, assert experts populated\n- `test_brief_person_filters_by_assignee`: brief --person, assert issues filtered to assignee\n- `test_brief_warnings_stale_issue`: insert stale issue, assert warning generated\n- `test_brief_token_budget`: robot JSON under 12000 bytes\n- `test_brief_no_embeddings_graceful`: related is None when no embeddings\n- `test_brief_empty_topic`: zero matches returns valid response with empty sections\n\nGREEN: Implement run_brief and wire into main.rs.\n\nVERIFY:\n```bash\ncargo test brief:: && cargo clippy --all-targets -- -D warnings && cargo fmt --check\ncargo run --release -- -J brief 'throw time' | jq '.data | keys'\ncargo run --release -- -J brief 'throw time' | wc -c\n```\n\n## Acceptance Criteria\n- [ ] `lore brief TOPIC` returns all sections for free-text topic\n- [ ] `lore brief --path PATH` returns path-focused briefing with experts\n- [ ] `lore brief --person USERNAME` returns person-focused briefing\n- [ ] `lore brief issues N` returns entity-focused briefing\n- [ ] Robot mode output under 12000 bytes\n- [ ] Each section degrades gracefully if its data source is unavailable\n- [ ] summary field is auto-generated one-liner from section counts\n- [ ] warnings detect: stale issues (>30d), unassigned\n- [ ] Performance: <2s total\n- [ ] Command registered in main.rs and 
robot-docs","status":"open","priority":2,"issue_type":"task","created_at":"2026-03-13T15:14:51.678982Z","created_by":"tayloreernisse","updated_at":"2026-03-13T15:14:51.681267Z","compaction_level":0,"original_size":0,"labels":["cli-imp","intelligence"],"dependencies":[{"issue_id":"bd-i4lo","depends_on_id":"bd-1n5q","type":"parent-child","created_at":"2026-03-13T15:14:51.680417Z","created_by":"tayloreernisse"},{"issue_id":"bd-i4lo","depends_on_id":"bd-3bb5","type":"blocks","created_at":"2026-03-13T15:14:51.681255Z","created_by":"tayloreernisse"}]} {"id":"bd-iba","title":"Add GitLab client MR pagination methods","description":"## Background\nGitLab client pagination for merge requests and discussions. Must support robust pagination with fallback chain because some GitLab instances/proxies strip headers.\n\n## Approach\nAdd to existing `src/gitlab/client.rs`:\n1. `MergeRequestPage` struct - Items + pagination metadata\n2. `parse_link_header_next()` - RFC 8288 Link header parsing\n3. `fetch_merge_requests_page()` - Single page fetch with metadata\n4. `paginate_merge_requests()` - Async stream for all MRs\n5. 
`paginate_mr_discussions()` - Async stream for MR discussions\n\n## Files\n- `src/gitlab/client.rs` - Add pagination methods\n\n## Acceptance Criteria\n- [ ] `MergeRequestPage` struct exists with `items`, `next_page`, `is_last_page`\n- [ ] `parse_link_header_next()` extracts `rel=\"next\"` URL from Link header\n- [ ] Pagination fallback chain: Link header > x-next-page > full-page heuristic\n- [ ] `paginate_merge_requests()` returns `Pin>>>`\n- [ ] `paginate_mr_discussions()` returns `Pin>>>`\n- [ ] MR endpoint uses `scope=all&state=all` to include all MRs\n- [ ] `cargo test client` passes\n\n## TDD Loop\nRED: `cargo test fetch_merge_requests` -> method not found\nGREEN: Add pagination methods\nVERIFY: `cargo test client`\n\n## Struct Definitions\n```rust\n#[derive(Debug)]\npub struct MergeRequestPage {\n pub items: Vec,\n pub next_page: Option,\n pub is_last_page: bool,\n}\n```\n\n## Link Header Parsing (RFC 8288)\n```rust\n/// Parse Link header to extract rel=\"next\" URL.\nfn parse_link_header_next(headers: &reqwest::header::HeaderMap) -> Option {\n headers\n .get(\"link\")\n .and_then(|v| v.to_str().ok())\n .and_then(|link_str| {\n // Format: ; rel=\"next\", ; rel=\"last\"\n for part in link_str.split(',') {\n let part = part.trim();\n if part.contains(\"rel=\\\"next\\\"\") || part.contains(\"rel=next\") {\n if let Some(start) = part.find('<') {\n if let Some(end) = part.find('>') {\n return Some(part[start + 1..end].to_string());\n }\n }\n }\n }\n None\n })\n}\n```\n\n## Pagination Fallback Chain\n```rust\nlet next_page = match (link_next, x_next_page, items.len() as u32 == per_page) {\n (Some(_), _, _) => Some(page + 1), // Link header present: continue\n (None, Some(np), _) => Some(np), // x-next-page present: use it\n (None, None, true) => Some(page + 1), // Full page, no headers: try next\n (None, None, false) => None, // Partial page: we're done\n};\n```\n\n## Fetch Single Page\n```rust\npub async fn fetch_merge_requests_page(\n &self,\n 
gitlab_project_id: i64,\n updated_after: Option,\n cursor_rewind_seconds: u32,\n page: u32,\n per_page: u32,\n) -> Result {\n let mut params = vec![\n (\"scope\", \"all\".to_string()),\n (\"state\", \"all\".to_string()),\n (\"order_by\", \"updated_at\".to_string()),\n (\"sort\", \"asc\".to_string()),\n (\"per_page\", per_page.to_string()),\n (\"page\", page.to_string()),\n ];\n // Apply cursor rewind for safety\n // ...\n}\n```\n\n## Async Stream Pattern\n```rust\npub fn paginate_merge_requests(\n &self,\n gitlab_project_id: i64,\n updated_after: Option,\n cursor_rewind_seconds: u32,\n) -> Pin> + Send + '_>> {\n Box::pin(async_stream::try_stream! {\n let mut page = 1u32;\n let per_page = 100u32;\n loop {\n let page_result = self.fetch_merge_requests_page(...).await?;\n for mr in page_result.items {\n yield mr;\n }\n if page_result.is_last_page {\n break;\n }\n match page_result.next_page {\n Some(np) => page = np,\n None => break,\n }\n }\n })\n}\n```\n\n## Edge Cases\n- `scope=all` required to include all MRs (not just authored by current user)\n- `state=all` required to include merged/closed (GitLab defaults may exclude)\n- `locked` state cannot be filtered server-side (use local SQL filtering)\n- Cursor rewind should clamp to 0 to avoid negative timestamps","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:41.633065Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:13:05.613625Z","closed_at":"2026-01-27T00:13:05.613440Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-iba","depends_on_id":"bd-5ta","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-ike","title":"Epic: Gate 3 - Decision Timeline (lore timeline)","description":"## Background\n\nGate 3 is the first user-facing temporal feature: `lore timeline `. 
It answers \"What happened with X?\" by finding matching entities via FTS5, expanding cross-references, collecting all temporal events, and rendering a chronological narrative.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Gate 3 (Sections 3.1-3.6).\n\n## Prerequisites (All Complete)\n\n- Gates 1-2 COMPLETE: resource_state_events, resource_label_events, resource_milestone_events, entity_references all populated\n- FTS5 search index (CP3): working search infrastructure for keyword matching\n- Migration 015 (commit SHAs, closes watermark) exists on disk (registered by bd-1oo)\n\n## Architecture — 5-Stage Pipeline\n\n```\n1. SEED: FTS5 keyword search -> matched document IDs (issues, MRs, notes)\n2. HYDRATE: Map document IDs -> source entities + top matched notes as evidence\n3. EXPAND: BFS over entity_references (depth-limited, edge-type filtered)\n4. COLLECT: Gather events from all tables for seed + expanded entities\n5. RENDER: Sort chronologically, format as human or robot output\n```\n\nNo new tables required. All reads are from existing tables at query time.\n\n## Children (Execution Order)\n\n1. **bd-20e** — Define TimelineEvent model and TimelineEventType enum (types first)\n2. **bd-32q** — Implement timeline seed phase: FTS5 keyword search to entity IDs\n3. **bd-ypa** — Implement timeline expand phase: BFS cross-reference expansion\n4. **bd-3as** — Implement timeline event collection and chronological interleaving\n5. **bd-1nf** — Register lore timeline command with all flags (CLI wiring)\n6. **bd-2f2** — Implement timeline human output renderer\n7. 
**bd-dty** — Implement timeline robot mode JSON output\n\n## Gate Completion Criteria\n\n- [ ] `lore timeline ` returns chronologically ordered events\n- [ ] Seed entities found via FTS5 keyword search (issues, MRs, and notes)\n- [ ] State, label, and milestone events interleaved from resource event tables\n- [ ] Entity creation and merge events included\n- [ ] Evidence-bearing notes included as note_evidence events (top FTS5 matches, bounded default 10)\n- [ ] Cross-reference expansion follows entity_references to configurable depth\n- [ ] Default: follows closes + related edges; --expand-mentions adds mentioned\n- [ ] --depth 0 disables expansion\n- [ ] --since filters by event timestamp\n- [ ] -p scopes to project\n- [ ] Human output is colored and readable\n- [ ] Robot mode returns structured JSON with expansion provenance\n- [ ] Unresolved (external) references included in JSON output\n","status":"closed","priority":1,"issue_type":"feature","created_at":"2026-02-02T21:31:01.036474Z","created_by":"tayloreernisse","updated_at":"2026-02-06T13:49:21.285350Z","closed_at":"2026-02-06T13:49:21.285302Z","close_reason":"Gate 3 complete: all 7 children closed. Timeline pipeline fully implemented with SEED->HYDRATE->EXPAND->COLLECT->RENDER stages, human+robot renderers, CLI wiring with 9 flags, robot-docs manifest entry","compaction_level":0,"original_size":0,"labels":["epic","gate-3","phase-b"],"dependencies":[{"issue_id":"bd-ike","depends_on_id":"bd-1se","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-ike","depends_on_id":"bd-2zl","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-jbfw","title":"NOTE-0D: Capture immutable author_id in note upserts","description":"## Background\nCore use case is year-scale reviewer profiling. GitLab usernames are mutable — a user can change username mid-year, fragmenting author queries. GitLab note payloads include note.author.id (immutable integer). 
Capturing this provides a stable identity anchor for longitudinal analysis.\n\n## Approach\n1. The author_id column and index are added by migration 022 (NOTE-1E, bd-296a). This bead only handles the ingestion code changes.\n\n2. Populate author_id during upsert: In both upsert_note_for_issue() (from NOTE-0A) and upsert_note() in mr_discussions.rs, add author_id to INSERT column list and ON CONFLICT DO UPDATE SET clause. The value comes from the GitLab API note.author.id field.\n\n3. Check NormalizedNote in src/gitlab/transformers.rs — if it doesn't have an author_id field yet, add it there. The GitLab REST API returns notes with: { \"author\": { \"id\": 12345, \"username\": \"jdefting\", ... } }. Extract author.id in the transformer.\n\n4. Semantic change detection: author_id changes do NOT trigger changed_semantics = true. It's an identity anchor, not content. Do not include author_id in the pre-read comparison fields.\n\n## Files\n- MODIFY: src/ingestion/discussions.rs (add author_id to upsert_note_for_issue INSERT and ON CONFLICT SET)\n- MODIFY: src/ingestion/mr_discussions.rs (add author_id to upsert_note INSERT and ON CONFLICT SET, line 470)\n- MODIFY: src/gitlab/transformers.rs (add author_id: Option to NormalizedNote if missing, extract from API note.author.id)\n\n## TDD Anchor\nRED: test_issue_note_upsert_captures_author_id — insert note with author_id=12345, assert stored correctly.\nGREEN: Add author_id to INSERT/UPDATE clauses and transformer.\nVERIFY: cargo test captures_author_id -- --nocapture\nTests: test_mr_note_upsert_captures_author_id, test_note_upsert_author_id_nullable (old API responses without author.id), test_note_author_id_survives_username_change\n\n## Acceptance Criteria\n- [ ] author_id populated in upsert_note_for_issue INSERT and ON CONFLICT SET\n- [ ] author_id populated in MR upsert_note INSERT and ON CONFLICT SET\n- [ ] NormalizedNote has author_id: Option field\n- [ ] Transformer extracts author.id from GitLab API note payload\n- 
[ ] author_id = None handled gracefully (older API responses)\n- [ ] author_id change does NOT trigger changed_semantics\n- [ ] All 4 tests pass\n\n## Dependency Context\n- Depends on NOTE-0A (bd-3bpk): modifies upsert functions created in NOTE-0A\n- Depends on NOTE-1E (bd-296a): migration 022 adds the author_id column + index to the notes table. Column must exist before ingestion code can write to it.\n\n## Edge Cases\n- Old GitLab instances: author.id may be missing from API response — use None\n- Self-hosted GitLab: some versions may not include author block — handle gracefully\n- author_id is nullable INTEGER — no NOT NULL constraint","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T16:59:55.097158Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:15.254328Z","closed_at":"2026-02-12T18:13:15.254247Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["per-note","search"]}