diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl index 7595106..7f86b28 100644 --- a/.beads/issues.jsonl +++ b/.beads/issues.jsonl @@ -85,7 +85,7 @@ {"id":"bd-1uc","title":"Implement DB upsert functions for resource events","description":"## Background\nNeed to store fetched resource events into the three event tables created by migration 011. The existing DB pattern uses rusqlite prepared statements with named parameters. Timestamps from GitLab are ISO 8601 strings that need conversion to ms epoch UTC (matching the existing time.rs parse_datetime_to_ms function).\n\n## Approach\nCreate src/core/events_db.rs (new module) with three upsert functions:\n\n```rust\nuse rusqlite::Connection;\nuse super::error::Result;\n\n/// Upsert state events for an entity.\n/// Uses INSERT OR REPLACE keyed on UNIQUE(gitlab_id, project_id).\npub fn upsert_state_events(\n conn: &Connection,\n project_id: i64, // local DB project id\n entity_type: &str, // \"issue\" | \"merge_request\"\n entity_local_id: i64, // local DB id of the issue/MR\n events: &[GitLabStateEvent],\n) -> Result\n\n/// Upsert label events for an entity.\npub fn upsert_label_events(\n conn: &Connection,\n project_id: i64,\n entity_type: &str,\n entity_local_id: i64,\n events: &[GitLabLabelEvent],\n) -> Result\n\n/// Upsert milestone events for an entity.\npub fn upsert_milestone_events(\n conn: &Connection,\n project_id: i64,\n entity_type: &str,\n entity_local_id: i64,\n events: &[GitLabMilestoneEvent],\n) -> Result\n```\n\nEach function:\n1. Prepares INSERT OR REPLACE statement\n2. For each event, maps GitLab types to DB columns:\n - `actor_gitlab_id` = event.user.map(|u| u.id)\n - `actor_username` = event.user.map(|u| u.username.clone())\n - `created_at` = parse_datetime_to_ms(&event.created_at)?\n - Set issue_id or merge_request_id based on entity_type\n3. Returns count of upserted rows\n4. Wraps in a savepoint for atomicity per entity\n\nRegister module in src/core/mod.rs:\n```rust\npub mod events_db;\n```\n\n## Acceptance Criteria\n- [ ] All three upsert functions compile and handle all event fields\n- [ ] Upserts are idempotent (re-inserting same event doesn't duplicate)\n- [ ] Timestamps converted to ms epoch UTC via parse_datetime_to_ms\n- [ ] actor_gitlab_id and actor_username populated from event.user (handles None)\n- [ ] entity_type correctly maps to issue_id/merge_request_id (other is NULL)\n- [ ] source_merge_request_id populated for state events (iid from source_merge_request)\n- [ ] source_commit populated for state events\n- [ ] label_name populated for label events\n- [ ] milestone_title and milestone_id populated for milestone events\n- [ ] Returns upserted count\n\n## Files\n- src/core/events_db.rs (new)\n- src/core/mod.rs (add `pub mod events_db;`)\n\n## TDD Loop\nRED: tests/events_db_tests.rs (new):\n- `test_upsert_state_events_basic` - insert 3 events, verify count and data\n- `test_upsert_state_events_idempotent` - insert same events twice, verify no duplicates\n- `test_upsert_label_events_with_actor` - verify actor fields populated\n- `test_upsert_milestone_events_null_user` - verify user: null doesn't crash\n- `test_upsert_state_events_entity_exclusivity` - verify only one of issue_id/merge_request_id set\n\nSetup: create_test_db() helper that applies migrations 001-011, inserts a test project + issue + MR.\n\nGREEN: Implement the three functions\n\nVERIFY: `cargo test events_db -- --nocapture`\n\n## Edge Cases\n- parse_datetime_to_ms must handle GitLab's format: \"2024-03-15T10:30:00.000Z\" and \"2024-03-15T10:30:00.000+00:00\"\n- INSERT OR REPLACE will fire CASCADE deletes if there are FK references to these rows — currently no other table references event rows, so this is safe\n- entity_type must be validated (\"issue\" or \"merge_request\") — panic or error on invalid\n- source_merge_request field contains an MR ref object, not an ID — extract .iid for DB column","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:31:57.242549Z","created_by":"tayloreernisse","updated_at":"2026-02-03T16:19:14.169437Z","closed_at":"2026-02-03T16:19:14.169233Z","close_reason":"Implemented upsert_state_events, upsert_label_events, upsert_milestone_events, count_events in src/core/events_db.rs. Uses savepoints for atomicity, LoreError::Database via ? operator for clean error handling.","compaction_level":0,"original_size":0,"labels":["db","gate-1","phase-b"],"dependencies":[{"issue_id":"bd-1uc","depends_on_id":"bd-2zl","type":"parent-child","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1uc","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} {"id":"bd-1up1","title":"Implement File History screen (per-file MR timeline with rename tracking)","description":"## Background\nThe File History screen shows which MRs touched a file over time, with rename-aware tracking and optional DiffNote discussion snippets. It wraps run_file_history() from src/cli/commands/file_history.rs (added in v0.8.1) in a TUI view. While Trace answers \"why was this code introduced?\", File History answers \"what happened to this file?\" — a chronological MR timeline.\n\nThe core query resolves rename chains via BFS (resolve_rename_chain from src/core/file_history.rs), finds all MRs with mr_file_changes entries matching any renamed path, and optionally fetches DiffNote discussions on those paths.\n\n## Data Shapes (from src/cli/commands/file_history.rs)\n\n```rust\npub struct FileHistoryResult {\n pub path: String,\n pub rename_chain: Vec, // resolved paths via BFS\n pub renames_followed: bool,\n pub merge_requests: Vec,\n pub discussions: Vec,\n pub total_mrs: usize,\n pub paths_searched: usize,\n}\n\npub struct FileHistoryMr {\n pub iid: i64,\n pub title: String,\n pub state: String, // merged/opened/closed\n pub author_username: String,\n pub change_type: String, // added/modified/deleted/renamed\n pub merged_at_iso: Option,\n pub updated_at_iso: String,\n pub merge_commit_sha: Option,\n pub web_url: Option,\n}\n\npub struct FileDiscussion {\n pub discussion_id: String,\n pub author_username: String,\n pub body_snippet: String,\n pub path: String,\n pub created_at_iso: String,\n}\n```\n\nrun_file_history() signature (src/cli/commands/file_history.rs):\n```rust\npub fn run_file_history(\n config: &Config, // used only for DB path — bd-1f5b will extract query-only version\n path: &str,\n project: Option<&str>,\n no_follow_renames: bool,\n merged_only: bool,\n include_discussions: bool,\n limit: usize,\n) -> Result\n```\n\nAfter bd-1f5b extracts the query logic, the TUI will call a Connection-based variant:\n```rust\npub fn query_file_history(\n conn: &Connection,\n project_id: Option,\n path: &str,\n follow_renames: bool,\n merged_only: bool,\n include_discussions: bool,\n limit: usize,\n) -> Result\n```\n\n## Approach\n\n**Screen enum** (message.rs):\nAdd Screen::FileHistory variant (no parameters). Label: \"File History\". Breadcrumb: \"File History\".\n\n**Path autocomplete**: Same mechanism as Trace screen — query DISTINCT new_path from mr_file_changes. Share the known_paths cache with Trace if both are loaded, or each screen maintains its own (simpler).\n\n**State** (state/file_history.rs):\n```rust\n#[derive(Debug, Default)]\npub struct FileHistoryState {\n pub path_input: String,\n pub path_focused: bool,\n pub result: Option,\n pub selected_mr_index: usize,\n pub follow_renames: bool, // default true\n pub merged_only: bool, // default false\n pub show_discussions: bool, // default false\n pub scroll_offset: u16,\n pub known_paths: Vec, // autocomplete cache\n pub autocomplete_matches: Vec,\n pub autocomplete_index: usize,\n}\n```\n\n**Action** (action.rs):\n- fetch_file_history(conn, project_id, path, follow_renames, merged_only, show_discussions, limit) -> Result: calls query_file_history from file_history module (after bd-1f5b extraction)\n- fetch_known_paths(conn, project_id): shared with Trace screen (same query)\n\n**View** (view/file_history.rs):\n- Top: path input with autocomplete dropdown + toggle indicators [renames: on] [merged: off] [discussions: off]\n- If renames followed: rename chain breadcrumb (path_a -> path_b -> path_c) in dimmed text\n- Summary line: \"N merge requests across M paths\"\n- Main area: chronological MR list (sorted by updated_at descending):\n - Each row: MR state icon + !iid + title + @author + change_type tag + date\n - If show_discussions: inline discussion snippets beneath relevant MRs (indented, dimmed, author + date + body_snippet)\n- Footer: \"showing N of M\" when total_mrs > limit\n- Keyboard:\n - j/k: scroll MR list\n - Enter: navigate to MrDetail(EntityKey::mr(project_id, iid))\n - /: focus path input\n - Tab: cycle autocomplete suggestions when path focused\n - r: toggle follow_renames (re-fetches)\n - m: toggle merged_only (re-fetches)\n - d: toggle show_discussions (re-fetches)\n - q: back\n\n**Contextual entry points** (wired from other screens):\n- MR Detail: h on a file path opens File History pre-filled with that path\n- Expert mode (Who screen): when viewing a file path's experts, h opens File History for that path\n- Requires other screens to expose selected_file_path() -> Option\n\n## Acceptance Criteria\n- [ ] Screen::FileHistory added to message.rs Screen enum with label and breadcrumb\n- [ ] FileHistoryState struct with all fields, Default impl\n- [ ] Path input with autocomplete dropdown from mr_file_changes (same mechanism as Trace)\n- [ ] Rename chain displayed as breadcrumb when renames_followed is true\n- [ ] Chronological MR list with state icons (merged/opened/closed) and change_type tags\n- [ ] Enter on MR navigates to MrDetail(EntityKey::mr(project_id, iid))\n- [ ] r toggles follow_renames, m toggles merged_only, d toggles show_discussions — all re-fetch\n- [ ] Discussion snippets shown inline beneath MRs when toggled on\n- [ ] Summary line showing \"N merge requests across M paths\"\n- [ ] Footer truncation indicator when total_mrs > display limit\n- [ ] Empty state: \"No MRs found for this file\" with hint \"Run 'lore sync --fetch-mr-file-changes' to populate\"\n- [ ] Contextual navigation: h on file path in MR Detail opens File History pre-filled\n- [ ] Registered in command palette (label \"File History\", keywords [\"history\", \"file\", \"changes\"])\n- [ ] AppState.has_text_focus() updated to include file_history.path_focused\n- [ ] AppState.blur_text_focus() updated to include file_history.path_focused = false\n\n## Files\n- MODIFY: crates/lore-tui/src/message.rs (add Screen::FileHistory variant + label)\n- CREATE: crates/lore-tui/src/state/file_history.rs (FileHistoryState struct + Default)\n- MODIFY: crates/lore-tui/src/state/mod.rs (pub mod file_history, pub use FileHistoryState, add to AppState, update has_text_focus/blur_text_focus)\n- MODIFY: crates/lore-tui/src/action.rs (add fetch_file_history, share fetch_known_paths with Trace)\n- CREATE: crates/lore-tui/src/view/file_history.rs (render_file_history fn)\n- MODIFY: crates/lore-tui/src/view/mod.rs (add Screen::FileHistory dispatch arm)\n\n## TDD Anchor\nRED: Write test_fetch_file_history_returns_mrs in action tests. Setup: in-memory DB, insert project, MR (state=\"merged\", merged_at set), mr_file_changes row (new_path=\"src/lib.rs\", change_type=\"modified\"). Call fetch_file_history(conn, Some(project_id), \"src/lib.rs\", true, false, false, 50). Assert: result.merge_requests.len() == 1, result.merge_requests[0].iid matches.\nGREEN: Implement fetch_file_history calling query_file_history.\nVERIFY: cargo test -p lore-tui file_history -- --nocapture\n\nAdditional tests:\n- test_file_history_empty: path \"nonexistent.rs\" returns empty merge_requests\n- test_file_history_rename_chain: insert rename A->B, query A, assert rename_chain=[\"A\",\"B\"] and MRs touching B are included\n- test_file_history_merged_only: merged_only=true excludes opened/closed MRs\n- test_file_history_discussions: show_discussions=true populates discussions vec with DiffNote snippets\n- test_file_history_limit: insert 10 MRs, limit=5, assert merge_requests.len()==5 and total_mrs==10\n- test_autocomplete: shared with Trace tests\n\n## Edge Cases\n- File never modified by any MR: empty state with helpful message and sync hint\n- Rename chain with cycles: BFS visited set in resolve_rename_chain prevents infinite loop\n- Very long file paths: truncate from left in list view (...path/to/file.rs)\n- Hundreds of MRs for a single file: default limit 50, footer shows total count\n- Discussion body_snippet may contain markdown/code — render as plain text, no parsing\n- No mr_file_changes data at all: hint that sync needs --fetch-mr-file-changes (config.sync.fetch_mr_file_changes)\n- Project scope: if global_scope.project_id is set, pass it to query and autocomplete\n\n## Dependency Context\n- bd-1f5b (blocks): Extracts query_file_history(conn, ...) from run_file_history(config, ...) in src/cli/commands/file_history.rs. The current function opens its own DB connection from Config — TUI needs a Connection-based variant since it manages its own connection.\n- src/core/file_history.rs: resolve_rename_chain() used by query_file_history internally. TUI does not call it directly.\n- FileHistoryResult, FileHistoryMr, FileDiscussion: currently defined in src/cli/commands/file_history.rs — bd-1f5b should move these to core or make them importable.\n- Navigation: uses NavigationStack.push(Screen::MrDetail(key)) from crates/lore-tui/src/navigation.rs.\n- AppState composition: FileHistoryState added as field in AppState (state/mod.rs ~line 154-174). has_text_focus/blur_text_focus at lines 194-207 must include file_history.path_focused.\n- Autocomplete: fetch_known_paths query identical to Trace screen — consider extracting to shared helper in action.rs.\n- Contextual entry: requires MrDetailState to expose selected file path. Deferred if MR Detail not yet built.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-18T18:14:13.179338Z","created_by":"tayloreernisse","updated_at":"2026-02-19T03:47:22.812185Z","closed_at":"2026-02-19T03:47:22.811968Z","close_reason":"File History screen complete: state, action, view, full wiring. 579 TUI tests pass.","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-1up1","depends_on_id":"bd-1f5b","type":"blocks","created_at":"2026-02-18T18:14:33.412864Z","created_by":"tayloreernisse"},{"issue_id":"bd-1up1","depends_on_id":"bd-nwux","type":"parent-child","created_at":"2026-02-18T18:14:13.180816Z","created_by":"tayloreernisse"}]} {"id":"bd-1ut","title":"[CP0] Final validation - tests, lint, typecheck","description":"## Background\n\nFinal validation ensures everything works together before marking CP0 complete. This is the integration gate - all unit tests, integration tests, lint, and type checking must pass. Manual smoke tests verify the full user experience.\n\nReference: docs/prd/checkpoint-0.md sections \"Definition of Done\", \"Manual Smoke Tests\"\n\n## Approach\n\n**Automated checks:**\n```bash\n# All tests pass\nnpm run test\n\n# TypeScript strict mode\nnpm run build # or: npx tsc --noEmit\n\n# ESLint with no errors\nnpm run lint\n```\n\n**Manual smoke tests (from PRD table):**\n\n| Command | Expected | Pass Criteria |\n|---------|----------|---------------|\n| `gi --help` | Command list | Shows all commands |\n| `gi version` | Version number | Shows installed version |\n| `gi init` | Interactive prompts | Creates valid config |\n| `gi init` (config exists) | Confirmation prompt | Warns before overwriting |\n| `gi init --force` | No prompt | Overwrites without asking |\n| `gi auth-test` | `Authenticated as @username` | Shows GitLab username |\n| `GITLAB_TOKEN=invalid gi auth-test` | Error message | Non-zero exit, clear error |\n| `gi doctor` | Status table | All required checks pass |\n| `gi doctor --json` | JSON object | Valid JSON, `success: true` |\n| `gi backup` | Backup path | Creates timestamped backup |\n| `gi sync-status` | No runs message | Stub output works |\n\n**Definition of Done gate items:**\n- [ ] `gi init` writes config to XDG path and validates projects against GitLab\n- [ ] `gi auth-test` succeeds with real PAT\n- [ ] `gi doctor` reports DB ok + GitLab ok\n- [ ] DB migrations apply; WAL + FK enabled; busy_timeout + synchronous set\n- [ ] App lock mechanism works (concurrent runs blocked)\n- [ ] All unit tests pass\n- [ ] All integration tests pass (mocked)\n- [ ] ESLint passes with no errors\n- [ ] TypeScript compiles with strict mode\n\n## Acceptance Criteria\n\n- [ ] `npm run test` exits 0 (all tests pass)\n- [ ] `npm run build` exits 0 (TypeScript compiles)\n- [ ] `npm run lint` exits 0 (no ESLint errors)\n- [ ] All 11 manual smoke tests pass\n- [ ] All 9 Definition of Done gate items verified\n\n## Files\n\nNo new files created. This bead verifies existing work.\n\n## TDD Loop\n\nThis IS the final verification step:\n\n```bash\n# Automated\nnpm run test\nnpm run build\nnpm run lint\n\n# Manual (requires GITLAB_TOKEN set with valid token)\ngi --help\ngi version\ngi init # go through setup\ngi auth-test\ngi doctor\ngi doctor --json | jq .success # should output true\ngi backup\ngi sync-status\ngi reset --confirm\ngi init # re-setup\n```\n\n## Edge Cases\n\n- Test coverage should be reasonable (aim for 80%+ on core modules)\n- Integration tests may flake on CI - check MSW setup\n- Manual tests require real GitLab token - document in README\n- ESLint may warn vs error - only errors block\n- TypeScript noImplicitAny catches missed types","status":"closed","priority":1,"issue_type":"task","created_at":"2026-01-24T16:09:52.078907Z","created_by":"tayloreernisse","updated_at":"2026-01-25T03:37:51.858558Z","closed_at":"2026-01-25T03:37:51.858474Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1ut","depends_on_id":"bd-1cb","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1ut","depends_on_id":"bd-1gu","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1ut","depends_on_id":"bd-1kh","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1ut","depends_on_id":"bd-38e","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1ut","depends_on_id":"bd-3kj","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} -{"id":"bd-1v8","title":"Update robot-docs manifest with Phase B commands","description":"## Background\n\nThe robot-docs manifest is the agent self-discovery mechanism. It must include all Phase B commands so agents can discover temporal intelligence features.\n\n## Codebase Context\n\n- handle_robot_docs() in src/main.rs (line ~1646) returns JSON with commands, exit_codes, workflows, aliases, clap_error_codes\n- Currently 18 commands documented in the manifest\n- VALID_COMMANDS array in src/main.rs (line ~448): [\"issues\", \"mrs\", \"search\", \"sync\", \"ingest\", \"count\", \"status\", \"auth\", \"doctor\", \"version\", \"init\", \"stats\", \"generate-docs\", \"embed\", \"migrate\", \"health\", \"robot-docs\", \"completions\"]\n- Phase B adds 3 new commands: timeline, file-history, trace\n- count gains new entity: \"references\" (bd-2ez)\n- Existing workflows: first_setup, daily_sync, search, pre_flight\n\n## Approach\n\n### 1. Add commands to handle_robot_docs() JSON:\n\n```json\n\"timeline\": {\n \"description\": \"Chronological timeline of events matching a keyword query\",\n \"flags\": [\"\", \"-p \", \"--since \", \"--depth \", \"--expand-mentions\", \"-n \"],\n \"example\": \"lore --robot timeline 'authentication' --since 30d\"\n},\n\"file-history\": {\n \"description\": \"Which MRs touched a file, with rename chain resolution\",\n \"flags\": [\"\", \"-p \", \"--discussions\", \"--no-follow-renames\", \"--merged\", \"-n \"],\n \"example\": \"lore --robot file-history src/auth/oauth.rs\"\n},\n\"trace\": {\n \"description\": \"Trace file -> MR -> issue -> discussions decision chain\",\n \"flags\": [\"\", \"-p \", \"--discussions\", \"--no-follow-renames\", \"-n \"],\n \"example\": \"lore --robot trace src/auth/oauth.rs\"\n}\n```\n\n### 2. Update count command to mention \"references\" entity\n\n### 3. Add temporal_intelligence workflow:\n```json\n\"temporal_intelligence\": {\n \"description\": \"Query temporal data about project history\",\n \"steps\": [\n \"lore sync (ensure events fetched with fetchResourceEvents=true)\",\n \"lore timeline '' for chronological event history\",\n \"lore file-history for file-level MR history\",\n \"lore trace for file -> MR -> issue -> discussion chain\"\n ]\n}\n```\n\n### 4. Add timeline, file-history, trace to VALID_COMMANDS array\n\n## Acceptance Criteria\n\n- [ ] robot-docs includes timeline, file-history, trace commands\n- [ ] count references documented\n- [ ] temporal_intelligence workflow present\n- [ ] VALID_COMMANDS includes all 3 new commands\n- [ ] Examples are valid, runnable commands\n- [ ] cargo check --all-targets passes\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n\n- src/main.rs (update handle_robot_docs + VALID_COMMANDS array)\n\n## TDD Loop\n\nVERIFY: lore robot-docs | jq '.data.commands.timeline'\nVERIFY: lore robot-docs | jq '.data.workflows.temporal_intelligence'","status":"open","priority":3,"issue_type":"task","created_at":"2026-02-02T22:43:07.859092Z","created_by":"tayloreernisse","updated_at":"2026-02-05T20:17:38.827205Z","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1v8","depends_on_id":"bd-1ht","type":"parent-child","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1v8","depends_on_id":"bd-2ez","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1v8","depends_on_id":"bd-2n4","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} +{"id":"bd-1v8","title":"Update robot-docs manifest with Phase B commands","description":"## Background\n\nThe robot-docs manifest is the agent self-discovery mechanism. It must include all Phase B commands so agents can discover temporal intelligence features.\n\n## Codebase Context\n\n- handle_robot_docs() in src/main.rs (line ~1646) returns JSON with commands, exit_codes, workflows, aliases, clap_error_codes\n- Currently 18 commands documented in the manifest\n- VALID_COMMANDS array in src/main.rs (line ~448): [\"issues\", \"mrs\", \"search\", \"sync\", \"ingest\", \"count\", \"status\", \"auth\", \"doctor\", \"version\", \"init\", \"stats\", \"generate-docs\", \"embed\", \"migrate\", \"health\", \"robot-docs\", \"completions\"]\n- Phase B adds 3 new commands: timeline, file-history, trace\n- count gains new entity: \"references\" (bd-2ez)\n- Existing workflows: first_setup, daily_sync, search, pre_flight\n\n## Approach\n\n### 1. Add commands to handle_robot_docs() JSON:\n\n```json\n\"timeline\": {\n \"description\": \"Chronological timeline of events matching a keyword query\",\n \"flags\": [\"\", \"-p \", \"--since \", \"--depth \", \"--expand-mentions\", \"-n \"],\n \"example\": \"lore --robot timeline 'authentication' --since 30d\"\n},\n\"file-history\": {\n \"description\": \"Which MRs touched a file, with rename chain resolution\",\n \"flags\": [\"\", \"-p \", \"--discussions\", \"--no-follow-renames\", \"--merged\", \"-n \"],\n \"example\": \"lore --robot file-history src/auth/oauth.rs\"\n},\n\"trace\": {\n \"description\": \"Trace file -> MR -> issue -> discussions decision chain\",\n \"flags\": [\"\", \"-p \", \"--discussions\", \"--no-follow-renames\", \"-n \"],\n \"example\": \"lore --robot trace src/auth/oauth.rs\"\n}\n```\n\n### 2. Update count command to mention \"references\" entity\n\n### 3. Add temporal_intelligence workflow:\n```json\n\"temporal_intelligence\": {\n \"description\": \"Query temporal data about project history\",\n \"steps\": [\n \"lore sync (ensure events fetched with fetchResourceEvents=true)\",\n \"lore timeline '' for chronological event history\",\n \"lore file-history for file-level MR history\",\n \"lore trace for file -> MR -> issue -> discussion chain\"\n ]\n}\n```\n\n### 4. Add timeline, file-history, trace to VALID_COMMANDS array\n\n## Acceptance Criteria\n\n- [ ] robot-docs includes timeline, file-history, trace commands\n- [ ] count references documented\n- [ ] temporal_intelligence workflow present\n- [ ] VALID_COMMANDS includes all 3 new commands\n- [ ] Examples are valid, runnable commands\n- [ ] cargo check --all-targets passes\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n\n- src/main.rs (update handle_robot_docs + VALID_COMMANDS array)\n\n## TDD Loop\n\nVERIFY: lore robot-docs | jq '.data.commands.timeline'\nVERIFY: lore robot-docs | jq '.data.workflows.temporal_intelligence'","status":"in_progress","priority":3,"issue_type":"task","created_at":"2026-02-02T22:43:07.859092Z","created_by":"tayloreernisse","updated_at":"2026-02-19T14:01:25.024024Z","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1v8","depends_on_id":"bd-1ht","type":"parent-child","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1v8","depends_on_id":"bd-2ez","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-1v8","depends_on_id":"bd-2n4","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} {"id":"bd-1v8t","title":"Add WorkItemStatus type and SyncConfig toggle","description":"## Background\nThe GraphQL status response returns name, category, color, and iconName fields. We need a Rust struct that deserializes this directly. Category is stored as raw Option (not an enum) because GitLab 18.5+ supports custom statuses with arbitrary category values. We also need a config toggle so users can disable status enrichment.\n\n## Approach\nAdd WorkItemStatus to the existing types module. Add fetch_work_item_status to the existing SyncConfig with default_true() helper. Also add WorkItemStatus to pub use re-exports in src/gitlab/mod.rs.\n\n## Files\n- src/gitlab/types.rs (add struct after GitLabMergeRequest, before #[cfg(test)])\n- src/core/config.rs (add field to SyncConfig struct + Default impl)\n- src/gitlab/mod.rs (add WorkItemStatus to pub use)\n\n## Implementation\n\nIn src/gitlab/types.rs (needs Serialize, Deserialize derives already in scope):\n #[derive(Debug, Clone, Serialize, Deserialize)]\n pub struct WorkItemStatus {\n pub name: String,\n pub category: Option,\n pub color: Option,\n #[serde(rename = \"iconName\")]\n pub icon_name: Option,\n }\n\nIn src/core/config.rs SyncConfig struct (after fetch_mr_file_changes):\n #[serde(rename = \"fetchWorkItemStatus\", default = \"default_true\")]\n pub fetch_work_item_status: bool,\n\nIn impl Default for SyncConfig (after fetch_mr_file_changes: true):\n fetch_work_item_status: true,\n\n## Acceptance Criteria\n- [ ] WorkItemStatus deserializes: {\"name\":\"In progress\",\"category\":\"IN_PROGRESS\",\"color\":\"#1f75cb\",\"iconName\":\"status-in-progress\"}\n- [ ] Optional fields: {\"name\":\"To do\"} -> category/color/icon_name are None\n- [ ] Unknown category: {\"name\":\"Custom\",\"category\":\"SOME_FUTURE_VALUE\"} -> Ok\n- [ ] Null category: {\"name\":\"In progress\",\"category\":null} -> None\n- [ ] SyncConfig::default().fetch_work_item_status == true\n- [ ] JSON without fetchWorkItemStatus key -> defaults true\n- [ ] cargo check --all-targets passes\n\n## TDD Loop\nRED: test_work_item_status_deserialize, test_work_item_status_optional_fields, test_work_item_status_unknown_category, test_work_item_status_null_category, test_config_fetch_work_item_status_default_true, test_config_deserialize_without_key\nGREEN: Add struct + config field\nVERIFY: cargo test test_work_item_status && cargo test test_config\n\n## Edge Cases\n- serde rename \"iconName\" -> icon_name (camelCase in GraphQL)\n- Category is Option, NOT an enum\n- Config key is camelCase \"fetchWorkItemStatus\" matching existing convention","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-11T06:41:42.790001Z","created_by":"tayloreernisse","updated_at":"2026-02-11T07:21:33.416990Z","closed_at":"2026-02-11T07:21:33.416950Z","close_reason":"Implemented by agent swarm — all quality gates pass (595 tests, 0 failures)","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1v8t","depends_on_id":"bd-2y79","type":"parent-child","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} {"id":"bd-1v9m","title":"Implement AppState composition + LoadState + ScreenIntent","description":"## Background\nAppState is the top-level state composition — each field corresponds to one screen. State is preserved when navigating away (never cleared on pop). LoadState enables stale-while-revalidate: screens show last data during refresh with a spinner. ScreenIntent is the pure return type from state handlers — they never launch async tasks directly.\n\n## Approach\nCreate crates/lore-tui/src/state/mod.rs:\n- AppState struct: dashboard (DashboardState), issue_list (IssueListState), issue_detail (IssueDetailState), mr_list (MrListState), mr_detail (MrDetailState), search (SearchState), timeline (TimelineState), who (WhoState), sync (SyncState), command_palette (CommandPaletteState), global_scope (ScopeContext), load_state (ScreenLoadStateMap), error_toast (Option), show_help (bool), terminal_size ((u16, u16))\n- LoadState enum: Idle, LoadingInitial, Refreshing, Error(String)\n- ScreenLoadStateMap: wraps HashMap, get()/set()/any_loading()\n- AppState methods: set_loading(), set_error(), clear_error(), has_text_focus(), blur_text_focus(), delegate_text_event(), interpret_screen_key(), handle_screen_msg()\n- ScreenIntent enum: None, Navigate(Screen), RequeryNeeded(Screen)\n- handle_screen_msg() matches Msg variants and returns ScreenIntent (NEVER Cmd::task)\n\nCreate stub per-screen state files (just Default-derivable structs):\n- state/dashboard.rs, issue_list.rs, issue_detail.rs, mr_list.rs, mr_detail.rs, search.rs, timeline.rs, who.rs, sync.rs, command_palette.rs\n\n## Acceptance Criteria\n- [ ] AppState derives Default and compiles with all screen state fields\n- [ ] LoadState has Idle, LoadingInitial, Refreshing, Error variants\n- [ ] ScreenLoadStateMap::get() returns Idle for untracked screens\n- [ ] ScreenLoadStateMap::any_loading() returns true when any screen is loading\n- [ ] has_text_focus() checks all filter/query focused flags\n- [ ] blur_text_focus() resets all focus flags\n- [ ] handle_screen_msg() returns ScreenIntent, never Cmd::task\n- [ ] ScreenIntent::RequeryNeeded signals that LoreApp should dispatch supervised query\n\n## Files\n- CREATE: crates/lore-tui/src/state/mod.rs\n- CREATE: crates/lore-tui/src/state/dashboard.rs (stub)\n- CREATE: crates/lore-tui/src/state/issue_list.rs (stub)\n- CREATE: crates/lore-tui/src/state/issue_detail.rs (stub)\n- CREATE: crates/lore-tui/src/state/mr_list.rs (stub)\n- CREATE: crates/lore-tui/src/state/mr_detail.rs (stub)\n- CREATE: crates/lore-tui/src/state/search.rs (stub)\n- CREATE: crates/lore-tui/src/state/timeline.rs (stub)\n- CREATE: crates/lore-tui/src/state/who.rs (stub)\n- CREATE: crates/lore-tui/src/state/sync.rs (stub)\n- CREATE: crates/lore-tui/src/state/command_palette.rs (stub)\n\n## TDD Anchor\nRED: Write test_load_state_default_idle that creates ScreenLoadStateMap, asserts get(&Screen::Dashboard) returns Idle.\nGREEN: Implement ScreenLoadStateMap with HashMap defaulting to Idle.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml test_load_state\n\n## Edge Cases\n- LoadState::set() removes Idle entries from the map to prevent unbounded growth\n- Screen::IssueDetail(key) comparison for HashMap: requires Screen to impl Hash+Eq or use ScreenKind discriminant\n- has_text_focus() must be kept in sync as new screens add text inputs","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T16:56:42.023482Z","created_by":"tayloreernisse","updated_at":"2026-02-12T20:35:46.811462Z","closed_at":"2026-02-12T20:35:46.811406Z","close_reason":"Implemented state/ module: AppState (11 screen fields + cross-cutting), LoadState (4 variants), ScreenLoadStateMap (auto-prune Idle), ScreenIntent (None/Navigate/RequeryNeeded), ScopeContext, 10 per-screen state stubs. 12 tests. Quality gate green (114 total).","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-1v9m","depends_on_id":"bd-c9gk","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} {"id":"bd-1vti","title":"Write decay and scoring example-based tests (TDD)","description":"## Background\nAll implementation beads (bd-1soz through bd-11mg) include their own inline TDD tests. This bead is the integration verification: run the full test suite and confirm everything works together with no regressions.\n\n## Approach\nRun cargo test and verify:\n1. All NEW tests pass (31 tests across implementation beads)\n2. All EXISTING tests pass unchanged (existing who tests, config tests, etc.)\n3. No test interference (--test-threads=1 mode)\n4. All tests in who.rs test module compile and run cleanly\n\nTest count by bead:\n- bd-1soz: 2 (test_half_life_decay_math, test_score_monotonicity_by_age)\n- bd-2w1p: 3 (test_config_validation_rejects_zero_half_life, _absurd_half_life, _nan_multiplier)\n- bd-18dn: 2 (test_path_normalization_handles_dot_and_double_slash, _preserves_prefix_semantics)\n- bd-1hoq: 1 (test_expert_sql_returns_expected_signal_rows)\n- bd-1h3f: 2 (test_old_path_probe_exact_and_prefix, test_suffix_probe_uses_old_path_sources)\n- bd-13q8: 13 (decay integration + invariant tests)\n- bd-11mg: 8 (CLI flag tests: explain_score, as_of, excluded_usernames, etc.)\nTotal: 2+3+2+1+2+13+8 = 31 new tests\n\nThis is NOT a code-writing bead — it is a verification checkpoint.\n\n## Acceptance Criteria\n- [ ] cargo test -p lore passes (all tests green)\n- [ ] cargo test -p lore -- --test-threads=1 passes (no test interference)\n- [ ] No existing test assertions were changed (only callsite signatures updated in bd-13q8 and ScoringConfig literals in bd-1b50)\n- [ ] Total test count: existing + 31 new = all pass\n\n## TDD Loop\nN/A — this bead verifies, does not write code.\nVERIFY: cargo test -p lore\n\n## Files\nNone modified — read-only verification.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-09T17:00:29.453420Z","created_by":"tayloreernisse","updated_at":"2026-02-12T20:43:04.414775Z","closed_at":"2026-02-12T20:43:04.414735Z","close_reason":"Implemented by time-decay swarm: 3 agents, 12 tasks, 621 tests passing, all quality gates green","compaction_level":0,"original_size":0,"labels":["scoring","test"],"dependencies":[{"issue_id":"bd-1vti","depends_on_id":"bd-11mg","type":"blocks","created_at":"2026-02-18T17:42:00Z","created_by":"import"},{"issue_id":"bd-1vti","depends_on_id":"bd-18dn","type":"blocks","created_at":"2026-02-18T17:42:00Z","created_by":"import"},{"issue_id":"bd-1vti","depends_on_id":"bd-1b50","type":"blocks","created_at":"2026-02-18T17:42:00Z","created_by":"import"},{"issue_id":"bd-1vti","depends_on_id":"bd-1h3f","type":"blocks","created_at":"2026-02-18T17:42:00Z","created_by":"import"},{"issue_id":"bd-1vti","depends_on_id":"bd-1soz","type":"blocks","created_at":"2026-02-18T17:42:00Z","created_by":"import"},{"issue_id":"bd-1vti","depends_on_id":"bd-2w1p","type":"blocks","created_at":"2026-02-18T17:42:00Z","created_by":"import"},{"issue_id":"bd-1vti","depends_on_id":"bd-2yu5","type":"blocks","created_at":"2026-02-18T17:42:00Z","created_by":"import"}]} @@ -226,7 +226,7 @@ {"id":"bd-3ir","title":"Add database migration 006_merge_requests.sql","description":"## Background\nFoundation for all CP2 MR features. This migration defines the schema that all other MR components depend on. Must complete BEFORE any other CP2 work can proceed.\n\n## Approach\nCreate migration file that adds:\n1. `merge_requests` table with all CP2 fields\n2. `mr_labels`, `mr_assignees`, `mr_reviewers` junction tables\n3. Indexes on discussions for MR queries\n4. DiffNote position columns on notes table\n\n## Files\n- `migrations/006_merge_requests.sql` - New migration file\n- `src/core/db.rs` - Update MIGRATIONS const to include version 6\n\n## Acceptance Criteria\n- [ ] Migration file exists at `migrations/006_merge_requests.sql`\n- [ ] `merge_requests` table has columns: id, gitlab_id, project_id, iid, title, description, state, draft, author_username, source_branch, target_branch, head_sha, references_short, references_full, detailed_merge_status, merge_user_username, created_at, updated_at, merged_at, closed_at, last_seen_at, discussions_synced_for_updated_at, discussions_sync_last_attempt_at, discussions_sync_attempts, discussions_sync_last_error, web_url, raw_payload_id\n- [ ] `mr_labels` junction table exists with (merge_request_id, label_id) PK\n- [ ] `mr_assignees` junction table exists with (merge_request_id, username) PK\n- [ ] `mr_reviewers` junction table exists with (merge_request_id, username) PK\n- [ ] `idx_discussions_mr_id` and `idx_discussions_mr_resolved` indexes exist\n- [ ] `notes` table has new columns: position_type, position_line_range_start, position_line_range_end, position_base_sha, position_start_sha, position_head_sha\n- [ ] `gi doctor` runs without migration errors\n- [ ] `cargo test` passes\n\n## TDD Loop\nRED: Cannot open DB with version 6 schema\nGREEN: Add migration file with full SQL\nVERIFY: `cargo run -- doctor` shows healthy DB\n\n## SQL Reference (from PRD)\n```sql\n-- Merge requests table\nCREATE TABLE merge_requests (\n id INTEGER PRIMARY KEY,\n gitlab_id INTEGER UNIQUE NOT NULL,\n project_id INTEGER NOT NULL REFERENCES projects(id),\n iid INTEGER NOT NULL,\n title TEXT,\n description TEXT,\n state TEXT, -- opened | merged | closed | locked\n draft INTEGER NOT NULL DEFAULT 0, -- SQLite boolean\n author_username TEXT,\n source_branch TEXT,\n target_branch TEXT,\n head_sha TEXT,\n references_short TEXT,\n references_full TEXT,\n detailed_merge_status TEXT,\n merge_user_username TEXT,\n created_at INTEGER, -- ms epoch UTC\n updated_at INTEGER,\n merged_at INTEGER,\n closed_at INTEGER,\n last_seen_at INTEGER NOT NULL,\n discussions_synced_for_updated_at INTEGER,\n discussions_sync_last_attempt_at INTEGER,\n discussions_sync_attempts INTEGER DEFAULT 0,\n discussions_sync_last_error TEXT,\n web_url TEXT,\n raw_payload_id INTEGER REFERENCES raw_payloads(id)\n);\nCREATE INDEX idx_mrs_project_updated ON merge_requests(project_id, updated_at);\nCREATE UNIQUE INDEX uq_mrs_project_iid ON merge_requests(project_id, iid);\n-- ... (see PRD for full index list)\n\n-- Junction tables\nCREATE TABLE mr_labels (\n merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,\n label_id INTEGER REFERENCES labels(id) ON DELETE CASCADE,\n PRIMARY KEY(merge_request_id, label_id)\n);\n\nCREATE TABLE mr_assignees (\n merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,\n username TEXT NOT NULL,\n PRIMARY KEY(merge_request_id, username)\n);\n\nCREATE TABLE mr_reviewers (\n merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,\n username TEXT NOT NULL,\n PRIMARY KEY(merge_request_id, username)\n);\n\n-- DiffNote position columns (ALTER TABLE)\nALTER TABLE notes ADD COLUMN position_type TEXT;\nALTER TABLE notes ADD COLUMN position_line_range_start INTEGER;\nALTER TABLE notes ADD COLUMN position_line_range_end INTEGER;\nALTER TABLE notes ADD COLUMN position_base_sha TEXT;\nALTER TABLE notes ADD COLUMN position_start_sha TEXT;\nALTER TABLE notes ADD COLUMN position_head_sha TEXT;\n\nINSERT INTO schema_version (version, applied_at, description)\nVALUES (6, strftime('%s', 'now') * 1000, 'Merge requests, MR labels, assignees, reviewers');\n```\n\n## Edge Cases\n- SQLite does not support ADD CONSTRAINT - FK defined as nullable in CP1\n- `locked` state is transitional (merge-in-progress) - store as first-class\n- discussions_synced_for_updated_at prevents redundant discussion refetch","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:40.101470Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:06:43.899079Z","closed_at":"2026-01-27T00:06:43.898875Z","close_reason":"Migration 006_merge_requests.sql created and verified. Schema v6 applied successfully with all tables, indexes, and position columns.","compaction_level":0,"original_size":0} {"id":"bd-3ir1","title":"Implement terminal safety module (sanitize + URL policy + redact)","description":"## Background\nGitLab content (issue descriptions, comments, MR descriptions) can contain arbitrary text including ANSI escape sequences, bidirectional text overrides, OSC hyperlinks, and C1 control codes. Displaying unsanitized content in a terminal can hijack cursor position, inject fake UI elements, or cause rendering corruption. This module provides a sanitization layer that strips dangerous sequences while preserving a safe ANSI subset for readability.\n\n## Approach\nCreate `crates/lore-tui/src/safety.rs` with:\n- `sanitize_for_terminal(input: &str) -> String` — the main entry point\n- Strip C1 control codes (0x80-0x9F)\n- Strip OSC sequences (ESC ] ... ST)\n- Strip cursor movement (CSI A/B/C/D/E/F/G/H/J/K)\n- Strip bidi overrides (U+202A-U+202E, U+2066-U+2069)\n- **PRESERVE safe ANSI subset**: SGR sequences for bold (1), italic (3), underline (4), reset (0), and standard foreground/background colors (30-37, 40-47, 90-97, 100-107). These improve readability of formatted GitLab content.\n- `UrlPolicy` enum: `Strip`, `Footnote`, `Passthrough` — controls how OSC 8 hyperlinks are handled\n- `RedactPattern` for optional PII/secret redaction (email, token patterns)\n- All functions are pure (no I/O), fully testable\n\nReference existing terminal safety patterns in ftui-core if available.\n\n## Acceptance Criteria\n- [ ] sanitize_for_terminal strips C1, OSC, cursor movement, bidi overrides\n- [ ] sanitize_for_terminal preserves bold, italic, underline, reset, and standard color SGR sequences\n- [ ] UrlPolicy::Strip removes OSC 8 hyperlinks entirely\n- [ ] UrlPolicy::Footnote converts OSC 8 hyperlinks to numbered footnotes [1] with URL list at end\n- [ ] RedactPattern matches common secret patterns (tokens, emails) and replaces with [REDACTED]\n- [ ] No unsafe code\n- [ ] Unit tests cover each dangerous sequence type AND verify safe sequences are preserved\n- [ ] Fuzz test with 1000 random byte sequences: no panic\n\n## Files\n- CREATE: crates/lore-tui/src/safety.rs\n- MODIFY: crates/lore-tui/src/lib.rs (add pub mod safety)\n\n## TDD Anchor\nRED: Write `test_strips_cursor_movement` that asserts CSI sequences for cursor up/down/left/right are removed from input while bold SGR is preserved.\nGREEN: Implement the sanitizer state machine that categorizes and filters escape sequences.\nVERIFY: cargo test -p lore-tui safety -- --nocapture\n\nAdditional tests:\n- test_strips_c1_control_codes\n- test_strips_bidi_overrides\n- test_strips_osc_sequences\n- test_preserves_bold_italic_underline_reset\n- test_preserves_standard_colors\n- test_url_policy_strip\n- test_url_policy_footnote\n- test_redact_patterns\n- test_fuzz_no_panic\n\n## Edge Cases\n- Malformed/truncated escape sequences (ESC without closing) — must not consume following text\n- Nested SGR sequences (e.g., bold+color combined in single CSI) — preserve entire sequence if all parameters are safe\n- UTF-8 multibyte chars adjacent to escape sequences — must not corrupt char boundaries\n- Empty input returns empty string\n- Input with only safe content passes through unchanged\n\n## Dependency Context\nDepends on bd-3ddw (scaffold) for the crate structure to exist. No other dependencies — this is a pure utility module.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T16:54:30.165761Z","created_by":"tayloreernisse","updated_at":"2026-02-12T19:55:51.154570Z","closed_at":"2026-02-12T19:55:51.154518Z","close_reason":"Implemented safety module: sanitize_for_terminal(), UrlPolicy, RedactPattern. 22 tests passing, clippy clean.","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-3ir1","depends_on_id":"bd-3ddw","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} {"id":"bd-3j6","title":"Add transform_mr_discussion and transform_notes_with_diff_position","description":"## Background\nExtends discussion transformer for MR context. MR discussions can contain DiffNotes with file position metadata. This is critical for code review context in CP3 document generation.\n\n## Approach\nAdd two new functions to existing `src/gitlab/transformers/discussion.rs`:\n1. `transform_mr_discussion()` - Transform discussion with MR reference\n2. `transform_notes_with_diff_position()` - Extract DiffNote position metadata\n\nCP1 already has the polymorphic `NormalizedDiscussion` with `NoteableRef` enum - reuse that pattern.\n\n## Files\n- `src/gitlab/transformers/discussion.rs` - Add new functions\n- `tests/diffnote_tests.rs` - DiffNote position extraction tests\n- `tests/mr_discussion_tests.rs` - MR discussion transform tests\n\n## Acceptance Criteria\n- [ ] `transform_mr_discussion()` returns `NormalizedDiscussion` with `merge_request_id: Some(local_mr_id)`\n- [ ] `transform_notes_with_diff_position()` returns `Result, String>`\n- [ ] DiffNote position fields extracted: `position_old_path`, `position_new_path`, `position_old_line`, `position_new_line`\n- [ ] Extended position fields extracted: `position_type`, `position_line_range_start`, `position_line_range_end`\n- [ ] SHA triplet extracted: `position_base_sha`, `position_start_sha`, `position_head_sha`\n- [ ] Strict timestamp parsing - returns `Err` on invalid timestamps (no `unwrap_or(0)`)\n- [ ] `cargo test diffnote` passes\n- [ ] `cargo test mr_discussion` passes\n\n## TDD Loop\nRED: `cargo test diffnote_position` -> test fails\nGREEN: Add position extraction logic\nVERIFY: `cargo test diffnote`\n\n## Function Signatures\n```rust\n/// Transform GitLab discussion for MR context.\n/// Reuses existing transform_discussion logic, just with MR reference.\npub fn transform_mr_discussion(\n gitlab_discussion: &GitLabDiscussion,\n local_project_id: i64,\n local_mr_id: i64,\n) -> NormalizedDiscussion {\n // Use existing transform_discussion with NoteableRef::MergeRequest(local_mr_id)\n transform_discussion(\n gitlab_discussion,\n local_project_id,\n NoteableRef::MergeRequest(local_mr_id),\n )\n}\n\n/// Transform notes with DiffNote position extraction.\n/// Returns Result to enforce strict timestamp parsing.\npub fn transform_notes_with_diff_position(\n gitlab_discussion: &GitLabDiscussion,\n local_project_id: i64,\n) -> Result, String>\n```\n\n## DiffNote Position Extraction\n```rust\n// Extract position metadata if present\nlet (old_path, new_path, old_line, new_line, position_type, lr_start, lr_end, base_sha, start_sha, head_sha) = note\n .position\n .as_ref()\n .map(|pos| (\n pos.old_path.clone(),\n pos.new_path.clone(),\n pos.old_line,\n pos.new_line,\n pos.position_type.clone(), // \"text\" | \"image\" | \"file\"\n pos.line_range.as_ref().map(|r| r.start_line),\n pos.line_range.as_ref().map(|r| r.end_line),\n pos.base_sha.clone(),\n pos.start_sha.clone(),\n pos.head_sha.clone(),\n ))\n .unwrap_or((None, None, None, None, None, None, None, None, None, None));\n```\n\n## Strict Timestamp Parsing\n```rust\n// CRITICAL: Return error on invalid timestamps, never zero\nlet created_at = iso_to_ms(¬e.created_at)\n .ok_or_else(|| format\\!(\n \"Invalid note.created_at for note {}: {}\",\n note.id, note.created_at\n ))?;\n```\n\n## NormalizedNote Fields for DiffNotes\n```rust\nNormalizedNote {\n // ... existing fields ...\n // DiffNote position metadata\n position_old_path: old_path,\n position_new_path: new_path,\n position_old_line: old_line,\n position_new_line: new_line,\n // Extended position\n position_type,\n position_line_range_start: lr_start,\n position_line_range_end: lr_end,\n // SHA triplet\n position_base_sha: base_sha,\n position_start_sha: start_sha,\n position_head_sha: head_sha,\n}\n```\n\n## Edge Cases\n- Notes without position should have all position fields as None\n- Invalid timestamp should fail the entire discussion (no partial results)\n- File renames: `old_path \\!= new_path` indicates a renamed file\n- Multi-line comments: `line_range` present means comment spans lines 45-48","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:41.208380Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:20:13.473091Z","closed_at":"2026-01-27T00:20:13.473031Z","close_reason":"Implemented transform_mr_discussion() and transform_notes_with_diff_position() with full DiffNote position extraction:\n- Extended NormalizedNote with 10 DiffNote position fields (path, line, type, line_range, SHA triplet)\n- Added strict timestamp parsing that returns Err on invalid timestamps\n- Created 13 diffnote_position_tests covering all extraction paths and error cases\n- Created 6 mr_discussion_tests verifying MR reference handling\n- All 161 tests passing","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3j6","depends_on_id":"bd-3ir","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-3j6","depends_on_id":"bd-5ta","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} -{"id":"bd-3jqx","title":"Implement async integration tests: cancellation, timeout, embed isolation, payload integrity","description":"## Background\n\nThe surgical sync pipeline involves async operations, cancellation signals, timeouts, scoped embedding, and multi-entity coordination. Unit tests in individual beads cover their own logic, but integration tests are needed to verify the full pipeline under realistic conditions: cancellation at different stages, timeout behavior with continuation, embedding scope isolation (only affected documents get embedded), and payload integrity (project_id mismatches rejected). These tests use wiremock for HTTP mocking and tokio for async runtime.\n\n## Approach\n\nCreate `tests/surgical_integration.rs` as an integration test file (Rust convention: `tests/` directory for integration tests). Six test functions covering the critical behavioral properties of the surgical pipeline:\n\n1. **Cancellation before preflight**: Signal cancelled before any HTTP call. Verify: recorder marked failed, no GitLab requests made, result has zero updates.\n2. **Cancellation during dependent stage**: Signal cancelled after preflight succeeds but during discussion fetch. Verify: partial results recorded, recorder marked failed, entities processed before cancellation have outcomes.\n3. **Per-entity timeout with continuation**: One entity's GitLab endpoint is slow (wiremock delay). Verify: that entity gets `failed` outcome with timeout error, remaining entities continue and succeed.\n4. **Embed scope isolation**: Sync two issues. Verify: only documents generated from those two issues are embedded, not the entire corpus. Assert by checking document IDs passed to embed function.\n5. **Payload project_id mismatch rejection**: Preflight returns an issue with `project_id` different from the resolved project. Verify: that entity gets `failed` outcome with clear error, other entities unaffected.\n6. **Successful full pipeline**: Sync one issue end-to-end through all stages. Verify: SyncResult has correct counts, entity_results has `synced` outcome, documents regenerated, embeddings created.\n\nAll tests use in-memory SQLite (`create_connection(Path::new(\":memory:\"))` + `run_migrations`) and wiremock `MockServer`.\n\n## Acceptance Criteria\n\n1. All 6 tests compile and pass\n2. Tests are isolated (each creates its own DB and mock server)\n3. Cancellation tests verify recorder state (failed status in sync_runs table)\n4. Timeout test uses wiremock delay, not `tokio::time::sleep` on the test side\n5. Embed isolation test verifies document-level scoping, not just function call\n6. Tests run in CI without flakiness (no real network, no real Ollama)\n\n## Files\n\n- `tests/surgical_integration.rs` — all 6 integration tests\n\n## TDD Anchor\n\n```rust\n// tests/surgical_integration.rs\n\nuse lore::cli::commands::sync::{SyncOptions, SyncResult};\nuse lore::core::db::{create_connection, run_migrations};\nuse lore::core::shutdown::ShutdownSignal;\nuse lore::Config;\nuse std::path::Path;\nuse std::time::Duration;\nuse wiremock::{Mock, MockServer, ResponseTemplate};\nuse wiremock::matchers::{method, path_regex};\n\nfn test_config(mock_url: &str) -> Config {\n let mut config = Config::default();\n config.gitlab.url = mock_url.to_string();\n config.gitlab.token = \"test-token\".to_string();\n config\n}\n\nfn setup_db() -> rusqlite::Connection {\n let conn = create_connection(Path::new(\":memory:\")).unwrap();\n run_migrations(&conn).unwrap();\n conn.execute(\n \"INSERT INTO projects (gitlab_project_id, path_with_namespace, web_url)\n VALUES (1, 'group/project', 'https://gitlab.example.com/group/project')\",\n [],\n ).unwrap();\n conn\n}\n\nfn mock_issue_json(iid: u64) -> serde_json::Value {\n serde_json::json!({\n \"id\": 100 + iid, \"iid\": iid, \"project_id\": 1, \"title\": format!(\"Issue {}\", iid),\n \"state\": \"opened\", \"created_at\": \"2026-01-01T00:00:00Z\",\n \"updated_at\": \"2026-02-17T00:00:00Z\",\n \"author\": {\"id\": 1, \"username\": \"dev\", \"name\": \"Dev\"},\n \"web_url\": format!(\"https://gitlab.example.com/group/project/-/issues/{}\", iid)\n })\n}\n\n#[tokio::test]\nasync fn cancellation_before_preflight() {\n let server = MockServer::start().await;\n // No mocks mounted — if any request is made, wiremock will return 404\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n signal.cancel(); // Cancel before anything starts\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"cancel-pre\"), &signal,\n ).await.unwrap();\n\n assert_eq!(result.issues_updated, 0);\n assert_eq!(result.mrs_updated, 0);\n // Verify no HTTP requests were made\n assert_eq!(server.received_requests().await.unwrap().len(), 0);\n}\n\n#[tokio::test]\nasync fn cancellation_during_dependent_stage() {\n let server = MockServer::start().await;\n // Mock issue fetch (preflight succeeds)\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([mock_issue_json(7)])))\n .mount(&server).await;\n // Mock discussion fetch with delay (gives time to cancel)\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/7/discussions\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([]))\n .set_body_delay(Duration::from_secs(2)))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n // Cancel after a short delay (after preflight, during dependents)\n let signal_clone = signal.clone();\n tokio::spawn(async move {\n tokio::time::sleep(Duration::from_millis(200)).await;\n signal_clone.cancel();\n });\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"cancel-dep\"), &signal,\n ).await.unwrap();\n\n // Preflight should have run, but ingest may be partial\n assert!(result.surgical_mode == Some(true));\n}\n\n#[tokio::test]\nasync fn per_entity_timeout_with_continuation() {\n let server = MockServer::start().await;\n // Issue 7: slow response (simulates timeout)\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\\?.*iids\\[\\]=7\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([mock_issue_json(7)]))\n .set_body_delay(Duration::from_secs(30)))\n .mount(&server).await;\n // Issue 42: fast response\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\\?.*iids\\[\\]=42\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([mock_issue_json(42)])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7, 42],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n // With a per-entity timeout, issue 7 should fail, issue 42 should succeed\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"timeout-test\"), &signal,\n ).await.unwrap();\n\n let entities = result.entity_results.as_ref().unwrap();\n // One should be failed (timeout), one should be synced\n let failed = entities.iter().filter(|e| e.outcome == \"failed\").count();\n let synced = entities.iter().filter(|e| e.outcome == \"synced\").count();\n assert!(failed >= 1 || synced >= 1, \"Expected mixed outcomes\");\n}\n\n#[tokio::test]\nasync fn embed_scope_isolation() {\n let server = MockServer::start().await;\n // Mock two issues\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([\n mock_issue_json(7), mock_issue_json(42)\n ])))\n .mount(&server).await;\n // Mock empty discussions for both\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/\\d+/discussions\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7, 42],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n no_embed: false,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"embed-iso\"), &signal,\n ).await.unwrap();\n\n // Embedding should only have processed documents from issues 7 and 42\n // Not the full corpus. Verify via document counts.\n assert!(result.documents_embedded <= 2,\n \"Expected at most 2 documents embedded (one per issue), got {}\",\n result.documents_embedded);\n}\n\n#[tokio::test]\nasync fn payload_project_id_mismatch_rejection() {\n let server = MockServer::start().await;\n // Return issue with project_id=999 (doesn't match resolved project_id=1)\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([{\n \"id\": 200, \"iid\": 7, \"project_id\": 999, \"title\": \"Wrong Project\",\n \"state\": \"opened\", \"created_at\": \"2026-01-01T00:00:00Z\",\n \"updated_at\": \"2026-02-17T00:00:00Z\",\n \"author\": {\"id\": 1, \"username\": \"dev\", \"name\": \"Dev\"},\n \"web_url\": \"https://gitlab.example.com/other/project/-/issues/7\"\n }])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"mismatch\"), &signal,\n ).await.unwrap();\n\n let entities = result.entity_results.as_ref().unwrap();\n assert_eq!(entities.len(), 1);\n assert_eq!(entities[0].outcome, \"failed\");\n assert!(entities[0].error.as_ref().unwrap().contains(\"project_id\"));\n}\n\n#[tokio::test]\nasync fn successful_full_pipeline() {\n let server = MockServer::start().await;\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([mock_issue_json(7)])))\n .mount(&server).await;\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/7/discussions\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))\n .mount(&server).await;\n // Mock any resource event endpoints\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/7/resource_\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n no_embed: true, // Skip embed to avoid Ollama dependency\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"full-pipe\"), &signal,\n ).await.unwrap();\n\n assert_eq!(result.surgical_mode, Some(true));\n assert_eq!(result.surgical_iids.as_ref().unwrap().issues, vec![7]);\n assert_eq!(result.preflight_only, Some(false));\n\n let entities = result.entity_results.as_ref().unwrap();\n assert_eq!(entities.len(), 1);\n assert_eq!(entities[0].entity_type, \"issue\");\n assert_eq!(entities[0].iid, 7);\n assert_eq!(entities[0].outcome, \"synced\");\n assert!(entities[0].error.is_none());\n\n assert!(result.issues_updated >= 1);\n assert!(result.documents_regenerated >= 1);\n}\n```\n\n## Edge Cases\n\n- **Wiremock delay vs tokio timeout**: Use `set_body_delay` on wiremock, not `tokio::time::sleep` in tests. The per-entity timeout in the orchestrator (bd-1i4i) should use `tokio::time::timeout` around the HTTP call.\n- **Embed isolation without Ollama**: Tests that verify embed scoping should either mock Ollama or use `no_embed: true` and verify the document ID list passed to the embed function. The `successful_full_pipeline` test uses `no_embed: true` to avoid requiring a running Ollama server in CI.\n- **Test isolation**: Each test creates its own `MockServer`, in-memory DB, and `ShutdownSignal`. No shared state between tests.\n- **Flakiness prevention**: Cancellation timing tests (test 2) use deterministic delays (cancel after 200ms, response delayed 2s). If flaky, increase the gap between cancel time and response delay.\n- **CI compatibility**: No real GitLab, no real Ollama, no real filesystem locks (in-memory DB means AppLock may need adaptation for tests — consider a test-only lock bypass or use a temp file DB for lock tests).\n\n## Dependency Context\n\n- **Depends on (upstream)**: bd-1i4i (the `run_sync_surgical` function under test), bd-wcja (SyncResult surgical fields to assert), bd-1lja (SyncOptions extensions), bd-3sez (surgical ingest for TOCTOU test), bd-arka (SyncRunRecorder for recorder state assertions), bd-1elx (scoped embed for isolation test), bd-kanh (per-entity helpers)\n- **No downstream dependents** — this is a terminal test-only bead.\n- These tests validate the behavioral contracts that all upstream beads promise. They are the acceptance gate for the surgical sync feature.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-17T19:18:46.182356Z","created_by":"tayloreernisse","updated_at":"2026-02-17T20:04:49.331351Z","compaction_level":0,"original_size":0,"labels":["surgical-sync"]} +{"id":"bd-3jqx","title":"Implement async integration tests: cancellation, timeout, embed isolation, payload integrity","description":"## Background\n\nThe surgical sync pipeline involves async operations, cancellation signals, timeouts, scoped embedding, and multi-entity coordination. Unit tests in individual beads cover their own logic, but integration tests are needed to verify the full pipeline under realistic conditions: cancellation at different stages, timeout behavior with continuation, embedding scope isolation (only affected documents get embedded), and payload integrity (project_id mismatches rejected). These tests use wiremock for HTTP mocking and tokio for async runtime.\n\n## Approach\n\nCreate `tests/surgical_integration.rs` as an integration test file (Rust convention: `tests/` directory for integration tests). Six test functions covering the critical behavioral properties of the surgical pipeline:\n\n1. **Cancellation before preflight**: Signal cancelled before any HTTP call. Verify: recorder marked failed, no GitLab requests made, result has zero updates.\n2. **Cancellation during dependent stage**: Signal cancelled after preflight succeeds but during discussion fetch. Verify: partial results recorded, recorder marked failed, entities processed before cancellation have outcomes.\n3. **Per-entity timeout with continuation**: One entity's GitLab endpoint is slow (wiremock delay). Verify: that entity gets `failed` outcome with timeout error, remaining entities continue and succeed.\n4. **Embed scope isolation**: Sync two issues. Verify: only documents generated from those two issues are embedded, not the entire corpus. Assert by checking document IDs passed to embed function.\n5. **Payload project_id mismatch rejection**: Preflight returns an issue with `project_id` different from the resolved project. Verify: that entity gets `failed` outcome with clear error, other entities unaffected.\n6. **Successful full pipeline**: Sync one issue end-to-end through all stages. Verify: SyncResult has correct counts, entity_results has `synced` outcome, documents regenerated, embeddings created.\n\nAll tests use in-memory SQLite (`create_connection(Path::new(\":memory:\"))` + `run_migrations`) and wiremock `MockServer`.\n\n## Acceptance Criteria\n\n1. All 6 tests compile and pass\n2. Tests are isolated (each creates its own DB and mock server)\n3. Cancellation tests verify recorder state (failed status in sync_runs table)\n4. Timeout test uses wiremock delay, not `tokio::time::sleep` on the test side\n5. Embed isolation test verifies document-level scoping, not just function call\n6. Tests run in CI without flakiness (no real network, no real Ollama)\n\n## Files\n\n- `tests/surgical_integration.rs` — all 6 integration tests\n\n## TDD Anchor\n\n```rust\n// tests/surgical_integration.rs\n\nuse lore::cli::commands::sync::{SyncOptions, SyncResult};\nuse lore::core::db::{create_connection, run_migrations};\nuse lore::core::shutdown::ShutdownSignal;\nuse lore::Config;\nuse std::path::Path;\nuse std::time::Duration;\nuse wiremock::{Mock, MockServer, ResponseTemplate};\nuse wiremock::matchers::{method, path_regex};\n\nfn test_config(mock_url: &str) -> Config {\n let mut config = Config::default();\n config.gitlab.url = mock_url.to_string();\n config.gitlab.token = \"test-token\".to_string();\n config\n}\n\nfn setup_db() -> rusqlite::Connection {\n let conn = create_connection(Path::new(\":memory:\")).unwrap();\n run_migrations(&conn).unwrap();\n conn.execute(\n \"INSERT INTO projects (gitlab_project_id, path_with_namespace, web_url)\n VALUES (1, 'group/project', 'https://gitlab.example.com/group/project')\",\n [],\n ).unwrap();\n conn\n}\n\nfn mock_issue_json(iid: u64) -> serde_json::Value {\n serde_json::json!({\n \"id\": 100 + iid, \"iid\": iid, \"project_id\": 1, \"title\": format!(\"Issue {}\", iid),\n \"state\": \"opened\", \"created_at\": \"2026-01-01T00:00:00Z\",\n \"updated_at\": \"2026-02-17T00:00:00Z\",\n \"author\": {\"id\": 1, \"username\": \"dev\", \"name\": \"Dev\"},\n \"web_url\": format!(\"https://gitlab.example.com/group/project/-/issues/{}\", iid)\n })\n}\n\n#[tokio::test]\nasync fn cancellation_before_preflight() {\n let server = MockServer::start().await;\n // No mocks mounted — if any request is made, wiremock will return 404\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n signal.cancel(); // Cancel before anything starts\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"cancel-pre\"), &signal,\n ).await.unwrap();\n\n assert_eq!(result.issues_updated, 0);\n assert_eq!(result.mrs_updated, 0);\n // Verify no HTTP requests were made\n assert_eq!(server.received_requests().await.unwrap().len(), 0);\n}\n\n#[tokio::test]\nasync fn cancellation_during_dependent_stage() {\n let server = MockServer::start().await;\n // Mock issue fetch (preflight succeeds)\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([mock_issue_json(7)])))\n .mount(&server).await;\n // Mock discussion fetch with delay (gives time to cancel)\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/7/discussions\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([]))\n .set_body_delay(Duration::from_secs(2)))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n // Cancel after a short delay (after preflight, during dependents)\n let signal_clone = signal.clone();\n tokio::spawn(async move {\n tokio::time::sleep(Duration::from_millis(200)).await;\n signal_clone.cancel();\n });\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"cancel-dep\"), &signal,\n ).await.unwrap();\n\n // Preflight should have run, but ingest may be partial\n assert!(result.surgical_mode == Some(true));\n}\n\n#[tokio::test]\nasync fn per_entity_timeout_with_continuation() {\n let server = MockServer::start().await;\n // Issue 7: slow response (simulates timeout)\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\\?.*iids\\[\\]=7\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([mock_issue_json(7)]))\n .set_body_delay(Duration::from_secs(30)))\n .mount(&server).await;\n // Issue 42: fast response\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\\?.*iids\\[\\]=42\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([mock_issue_json(42)])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7, 42],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n // With a per-entity timeout, issue 7 should fail, issue 42 should succeed\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"timeout-test\"), &signal,\n ).await.unwrap();\n\n let entities = result.entity_results.as_ref().unwrap();\n // One should be failed (timeout), one should be synced\n let failed = entities.iter().filter(|e| e.outcome == \"failed\").count();\n let synced = entities.iter().filter(|e| e.outcome == \"synced\").count();\n assert!(failed >= 1 || synced >= 1, \"Expected mixed outcomes\");\n}\n\n#[tokio::test]\nasync fn embed_scope_isolation() {\n let server = MockServer::start().await;\n // Mock two issues\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([\n mock_issue_json(7), mock_issue_json(42)\n ])))\n .mount(&server).await;\n // Mock empty discussions for both\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/\\d+/discussions\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7, 42],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n no_embed: false,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"embed-iso\"), &signal,\n ).await.unwrap();\n\n // Embedding should only have processed documents from issues 7 and 42\n // Not the full corpus. Verify via document counts.\n assert!(result.documents_embedded <= 2,\n \"Expected at most 2 documents embedded (one per issue), got {}\",\n result.documents_embedded);\n}\n\n#[tokio::test]\nasync fn payload_project_id_mismatch_rejection() {\n let server = MockServer::start().await;\n // Return issue with project_id=999 (doesn't match resolved project_id=1)\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([{\n \"id\": 200, \"iid\": 7, \"project_id\": 999, \"title\": \"Wrong Project\",\n \"state\": \"opened\", \"created_at\": \"2026-01-01T00:00:00Z\",\n \"updated_at\": \"2026-02-17T00:00:00Z\",\n \"author\": {\"id\": 1, \"username\": \"dev\", \"name\": \"Dev\"},\n \"web_url\": \"https://gitlab.example.com/other/project/-/issues/7\"\n }])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"mismatch\"), &signal,\n ).await.unwrap();\n\n let entities = result.entity_results.as_ref().unwrap();\n assert_eq!(entities.len(), 1);\n assert_eq!(entities[0].outcome, \"failed\");\n assert!(entities[0].error.as_ref().unwrap().contains(\"project_id\"));\n}\n\n#[tokio::test]\nasync fn successful_full_pipeline() {\n let server = MockServer::start().await;\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues\"))\n .respond_with(ResponseTemplate::new(200)\n .set_body_json(serde_json::json!([mock_issue_json(7)])))\n .mount(&server).await;\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/7/discussions\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))\n .mount(&server).await;\n // Mock any resource event endpoints\n Mock::given(method(\"GET\"))\n .and(path_regex(r\"/api/v4/projects/1/issues/7/resource_\"))\n .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))\n .mount(&server).await;\n\n let config = test_config(&server.uri());\n let options = SyncOptions {\n issues: vec![7],\n project: Some(\"group/project\".to_string()),\n robot_mode: true,\n no_embed: true, // Skip embed to avoid Ollama dependency\n ..SyncOptions::default()\n };\n let signal = ShutdownSignal::new();\n\n let result = lore::cli::commands::sync_surgical::run_sync_surgical(\n &config, options, Some(\"full-pipe\"), &signal,\n ).await.unwrap();\n\n assert_eq!(result.surgical_mode, Some(true));\n assert_eq!(result.surgical_iids.as_ref().unwrap().issues, vec![7]);\n assert_eq!(result.preflight_only, Some(false));\n\n let entities = result.entity_results.as_ref().unwrap();\n assert_eq!(entities.len(), 1);\n assert_eq!(entities[0].entity_type, \"issue\");\n assert_eq!(entities[0].iid, 7);\n assert_eq!(entities[0].outcome, \"synced\");\n assert!(entities[0].error.is_none());\n\n assert!(result.issues_updated >= 1);\n assert!(result.documents_regenerated >= 1);\n}\n```\n\n## Edge Cases\n\n- **Wiremock delay vs tokio timeout**: Use `set_body_delay` on wiremock, not `tokio::time::sleep` in tests. The per-entity timeout in the orchestrator (bd-1i4i) should use `tokio::time::timeout` around the HTTP call.\n- **Embed isolation without Ollama**: Tests that verify embed scoping should either mock Ollama or use `no_embed: true` and verify the document ID list passed to the embed function. The `successful_full_pipeline` test uses `no_embed: true` to avoid requiring a running Ollama server in CI.\n- **Test isolation**: Each test creates its own `MockServer`, in-memory DB, and `ShutdownSignal`. No shared state between tests.\n- **Flakiness prevention**: Cancellation timing tests (test 2) use deterministic delays (cancel after 200ms, response delayed 2s). If flaky, increase the gap between cancel time and response delay.\n- **CI compatibility**: No real GitLab, no real Ollama, no real filesystem locks (in-memory DB means AppLock may need adaptation for tests — consider a test-only lock bypass or use a temp file DB for lock tests).\n\n## Dependency Context\n\n- **Depends on (upstream)**: bd-1i4i (the `run_sync_surgical` function under test), bd-wcja (SyncResult surgical fields to assert), bd-1lja (SyncOptions extensions), bd-3sez (surgical ingest for TOCTOU test), bd-arka (SyncRunRecorder for recorder state assertions), bd-1elx (scoped embed for isolation test), bd-kanh (per-entity helpers)\n- **No downstream dependents** — this is a terminal test-only bead.\n- These tests validate the behavioral contracts that all upstream beads promise. They are the acceptance gate for the surgical sync feature.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-17T19:18:46.182356Z","created_by":"tayloreernisse","updated_at":"2026-02-19T14:02:25.084510Z","closed_at":"2026-02-19T14:02:25.084339Z","close_reason":"All 4 integration tests implemented and passing (cancellation_during_preflight, timeout_during_fetch, embed_isolation, payload_integrity). Autocorrect registry updated for related command. 903 tests pass, clippy/fmt clean.","compaction_level":0,"original_size":0,"labels":["surgical-sync"]} {"id":"bd-3js","title":"Implement MR CLI commands (list, show, count)","description":"## Background\nCLI commands for viewing and filtering merge requests. Includes list, show, and count commands with MR-specific filters.\n\n## Approach\nUpdate existing CLI command files:\n1. `list.rs` - Add MR listing with filters\n2. `show.rs` - Add MR detail view with discussions\n3. `count.rs` - Add MR counting with state breakdown\n\n## Files\n- `src/cli/commands/list.rs` - Add MR subcommand\n- `src/cli/commands/show.rs` - Add MR detail view\n- `src/cli/commands/count.rs` - Add MR counting\n\n## Acceptance Criteria\n- [ ] `gi list mrs` shows MR table with iid, title, state, author, branches\n- [ ] `gi list mrs --state=merged` filters by state\n- [ ] `gi list mrs --state=locked` filters locally (not server-side)\n- [ ] `gi list mrs --draft` shows only draft MRs\n- [ ] `gi list mrs --no-draft` excludes draft MRs\n- [ ] `gi list mrs --reviewer=username` filters by reviewer\n- [ ] `gi list mrs --target-branch=main` filters by target branch\n- [ ] `gi list mrs --source-branch=feature/x` filters by source branch\n- [ ] Draft MRs show `[DRAFT]` prefix in title\n- [ ] `gi show mr ` displays full detail including discussions\n- [ ] DiffNote shows file context: `[src/file.ts:45]`\n- [ ] Multi-line DiffNote shows: `[src/file.ts:45-48]`\n- [ ] `gi show mr` shows `detailed_merge_status`\n- [ ] `gi count mrs` shows total with state breakdown\n- [ ] `gi sync-status` shows MR cursor positions\n- [ ] `cargo test cli_commands` passes\n\n## TDD Loop\nRED: `cargo test list_mrs` -> command not found\nGREEN: Add MR subcommand\nVERIFY: `gi list mrs --help`\n\n## gi list mrs Output\n```\nMerge Requests (showing 20 of 1,234)\n\n !847 Refactor auth to use JWT tokens merged @johndoe main <- feature/jwt 3 days ago\n !846 Fix memory leak in websocket handler opened @janedoe main <- fix/websocket 5 days ago\n !845 [DRAFT] Add dark mode CSS variables opened @bobsmith main <- ui/dark-mode 1 week ago\n```\n\n## SQL for MR Listing\n```sql\nSELECT \n m.iid, m.title, m.state, m.draft, m.author_username,\n m.target_branch, m.source_branch, m.updated_at\nFROM merge_requests m\nWHERE m.project_id = ?\n AND (? IS NULL OR m.state = ?) -- state filter\n AND (? IS NULL OR m.draft = ?) -- draft filter\n AND (? IS NULL OR m.author_username = ?) -- author filter\n AND (? IS NULL OR m.target_branch = ?) -- target-branch filter\n AND (? IS NULL OR m.source_branch = ?) -- source-branch filter\n AND (? IS NULL OR EXISTS ( -- reviewer filter\n SELECT 1 FROM mr_reviewers r \n WHERE r.merge_request_id = m.id AND r.username = ?\n ))\nORDER BY m.updated_at DESC\nLIMIT ?\n```\n\n## gi show mr Output\n```\nMerge Request !847: Refactor auth to use JWT tokens\n================================================================================\n\nProject: group/project-one\nState: merged\nDraft: No\nAuthor: @johndoe\nAssignees: @janedoe, @bobsmith\nReviewers: @alice, @charlie\nSource: feature/jwt\nTarget: main\nMerge Status: mergeable\nMerged By: @alice\nMerged At: 2024-03-20 14:30:00\nLabels: enhancement, auth, reviewed\n\nDescription:\n Moving away from session cookies to JWT-based authentication...\n\nDiscussions (8):\n\n @janedoe (2024-03-16) [src/auth/jwt.ts:45]:\n Should we use a separate signing key for refresh tokens?\n\n @johndoe (2024-03-16):\n Good point. I'll add a separate key with rotation support.\n\n @alice (2024-03-18) [RESOLVED]:\n Looks good! Just one nit about the token expiry constant.\n```\n\n## DiffNote File Context Display\n```rust\n// Build file context string\nlet file_context = match (note.position_new_path, note.position_new_line, note.position_line_range_end) {\n (Some(path), Some(line), Some(end_line)) if line != end_line => {\n format!(\"[{}:{}-{}]\", path, line, end_line)\n }\n (Some(path), Some(line), _) => {\n format!(\"[{}:{}]\", path, line)\n }\n _ => String::new(),\n};\n```\n\n## gi count mrs Output\n```\nMerge Requests: 1,234\n opened: 89\n merged: 1,045\n closed: 100\n```\n\n## Filter Arguments (clap)\n```rust\n#[derive(Parser)]\nstruct ListMrsArgs {\n #[arg(long)]\n state: Option, // opened|merged|closed|locked|all\n #[arg(long)]\n draft: bool,\n #[arg(long)]\n no_draft: bool,\n #[arg(long)]\n author: Option,\n #[arg(long)]\n assignee: Option,\n #[arg(long)]\n reviewer: Option,\n #[arg(long)]\n target_branch: Option,\n #[arg(long)]\n source_branch: Option,\n #[arg(long)]\n label: Vec,\n #[arg(long)]\n project: Option,\n #[arg(long, default_value = \"20\")]\n limit: u32,\n}\n```\n\n## Edge Cases\n- `--state=locked` must filter locally (GitLab API doesn't support it)\n- Ambiguous MR iid across projects: prompt for `--project`\n- Empty discussions: show \"No discussions\" message\n- Multi-line DiffNotes: show line range in context","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:43.354939Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:37:31.792569Z","closed_at":"2026-01-27T00:37:31.792504Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3js","depends_on_id":"bd-20h","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-3js","depends_on_id":"bd-ser","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} {"id":"bd-3kj","title":"[CP0] gi version, backup, reset, sync-status commands","description":"## Background\n\nThese are the remaining utility commands for CP0. version is trivial. backup creates safety copies before destructive operations. reset provides clean-slate capability. sync-status is a stub for CP0 that will be implemented in CP1.\n\nReference: docs/prd/checkpoint-0.md sections \"gi version\", \"gi backup\", \"gi reset\", \"gi sync-status\"\n\n## Approach\n\n**src/cli/commands/version.ts:**\n```typescript\nimport { Command } from 'commander';\nimport { version } from '../../../package.json' with { type: 'json' };\n\nexport const versionCommand = new Command('version')\n .description('Show version information')\n .action(() => {\n console.log(\\`gi version \\${version}\\`);\n });\n```\n\n**src/cli/commands/backup.ts:**\n```typescript\nimport { Command } from 'commander';\nimport { copyFileSync, mkdirSync } from 'node:fs';\nimport { loadConfig } from '../../core/config';\nimport { getDbPath, getBackupDir } from '../../core/paths';\n\nexport const backupCommand = new Command('backup')\n .description('Create timestamped database backup')\n .action(async (options, command) => {\n const globalOpts = command.optsWithGlobals();\n const config = loadConfig(globalOpts.config);\n \n const dbPath = getDbPath(config.storage?.dbPath);\n const backupDir = getBackupDir(config.storage?.backupDir);\n \n mkdirSync(backupDir, { recursive: true });\n \n // Format: data-2026-01-24T10-30-00.db (colons replaced for Windows compat)\n const timestamp = new Date().toISOString().replace(/:/g, '-').replace(/\\\\..*/, '');\n const backupPath = \\`\\${backupDir}/data-\\${timestamp}.db\\`;\n \n copyFileSync(dbPath, backupPath);\n console.log(\\`Created backup: \\${backupPath}\\`);\n });\n```\n\n**src/cli/commands/reset.ts:**\n```typescript\nimport { Command } from 'commander';\nimport { unlinkSync, existsSync } from 'node:fs';\nimport { createInterface } from 'node:readline';\nimport { loadConfig } from '../../core/config';\nimport { getDbPath } from '../../core/paths';\n\nexport const resetCommand = new Command('reset')\n .description('Delete database and reset all state')\n .option('--confirm', 'Skip confirmation prompt')\n .action(async (options, command) => {\n const globalOpts = command.optsWithGlobals();\n const config = loadConfig(globalOpts.config);\n const dbPath = getDbPath(config.storage?.dbPath);\n \n if (!existsSync(dbPath)) {\n console.log('No database to reset.');\n return;\n }\n \n if (!options.confirm) {\n console.log(\\`This will delete:\\n - Database: \\${dbPath}\\n - All sync cursors\\n - All cached data\\n\\`);\n // Prompt for 'yes' confirmation\n // If not 'yes', exit 2\n }\n \n unlinkSync(dbPath);\n // Also delete WAL and SHM files if they exist\n if (existsSync(\\`\\${dbPath}-wal\\`)) unlinkSync(\\`\\${dbPath}-wal\\`);\n if (existsSync(\\`\\${dbPath}-shm\\`)) unlinkSync(\\`\\${dbPath}-shm\\`);\n \n console.log(\"Database reset. Run 'gi sync' to repopulate.\");\n });\n```\n\n**src/cli/commands/sync-status.ts:**\n```typescript\n// CP0 stub - full implementation in CP1\nexport const syncStatusCommand = new Command('sync-status')\n .description('Show sync state')\n .action(() => {\n console.log(\"No sync runs yet. Run 'gi sync' to start.\");\n });\n```\n\n## Acceptance Criteria\n\n- [ ] `gi version` outputs \"gi version X.Y.Z\"\n- [ ] `gi backup` creates timestamped copy of database\n- [ ] Backup filename is Windows-compatible (no colons)\n- [ ] Backup directory created if missing\n- [ ] `gi reset` prompts for 'yes' confirmation\n- [ ] `gi reset --confirm` skips prompt\n- [ ] Reset deletes .db, .db-wal, and .db-shm files\n- [ ] Reset exits 2 if user doesn't type 'yes'\n- [ ] `gi sync-status` outputs stub message\n\n## Files\n\nCREATE:\n- src/cli/commands/version.ts\n- src/cli/commands/backup.ts\n- src/cli/commands/reset.ts\n- src/cli/commands/sync-status.ts\n\n## TDD Loop\n\nN/A - simple commands, verify manually:\n\n```bash\ngi version\ngi backup\nls ~/.local/share/gi/backups/\ngi reset # type 'no'\ngi reset --confirm\nls ~/.local/share/gi/data.db # should not exist\ngi sync-status\n```\n\n## Edge Cases\n\n- Backup when database doesn't exist - show clear error\n- Reset when database doesn't exist - show \"No database to reset\"\n- WAL/SHM files may not exist - check before unlinking\n- Timestamp with milliseconds could cause very long filename\n- readline prompt in non-interactive terminal - handle SIGINT","status":"closed","priority":1,"issue_type":"task","created_at":"2026-01-24T16:09:51.774210Z","created_by":"tayloreernisse","updated_at":"2026-01-25T03:31:46.227285Z","closed_at":"2026-01-25T03:31:46.227220Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3kj","depends_on_id":"bd-13b","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"},{"issue_id":"bd-3kj","depends_on_id":"bd-3ng","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} {"id":"bd-3l56","title":"Add lore sync --tui convenience flag","description":"## Background\n\nThe PRD defines two CLI entry paths to the TUI: `lore tui` (full TUI) and `lore sync --tui` (convenience shortcut that launches the TUI directly on the Sync screen in inline mode). The `lore tui` command is covered by bd-26lp. This bead adds the `--tui` flag to the existing `SyncArgs` struct, which delegates to the `lore-tui` binary with `--sync` flag.\n\n## Approach\n\nTwo changes to the existing lore CLI crate (NOT the lore-tui crate):\n\n1. **Add `--tui` flag to `SyncArgs`** in `src/cli/mod.rs`:\n ```rust\n /// Show sync progress in interactive TUI (inline mode)\n #[arg(long)]\n pub tui: bool,\n ```\n\n2. **Handle the flag in sync command dispatch** in `src/main.rs` (or wherever Commands::Sync is matched):\n - If `args.tui` is true, call `resolve_tui_binary()` (from bd-26lp) and spawn it with `--sync` flag\n - Forward the config path if specified\n - Exit with the lore-tui process exit code\n - If lore-tui is not found, print a helpful error message\n\nThe `resolve_tui_binary()` function is implemented by bd-26lp (CLI integration). This bead simply adds the flag and the early-return delegation path in the sync command handler.\n\n## Acceptance Criteria\n- [ ] `lore sync --tui` is accepted by the CLI parser (no unknown flag error)\n- [ ] When `--tui` is set, the sync command delegates to `lore-tui --sync` binary\n- [ ] Config path is forwarded if `--config` was specified\n- [ ] If lore-tui binary is not found, prints error with install instructions and exits non-zero\n- [ ] `lore sync --tui --full` does NOT pass `--full` to lore-tui (TUI has its own sync controls)\n- [ ] `--tui` flag appears in `lore sync --help` output\n\n## Files\n- MODIFY: src/cli/mod.rs (add `tui: bool` field to `SyncArgs` struct at line ~776)\n- MODIFY: src/main.rs or src/cli/commands/sync.rs (add early-return delegation when `args.tui`)\n\n## TDD Anchor\nRED: Write `test_sync_tui_flag_accepted` that verifies `SyncArgs` can be parsed with `--tui` flag.\nGREEN: Add the `tui: bool` field to SyncArgs.\nVERIFY: cargo test sync_tui_flag\n\nAdditional tests:\n- test_sync_tui_flag_default_false (not set by default)\n\n## Edge Cases\n- `--tui` combined with `--dry-run` — the TUI handles dry-run internally, so `--dry-run` should be ignored when `--tui` is set (or warn)\n- `--tui` when lore-tui binary does not exist — clear error, not a panic\n- `--tui` in robot mode (`--robot`) — nonsensical combination, should error with \"cannot use --tui with --robot\"\n\n## Dependency Context\n- Depends on bd-26lp (CLI integration) which implements `resolve_tui_binary()` and `validate_tui_compat()` functions that this bead calls.\n- The SyncArgs struct is at src/cli/mod.rs:739. The existing fields are: full, no_full, force, no_force, no_embed, no_docs, no_events, no_file_changes, dry_run, no_dry_run.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T19:29:40.785182Z","created_by":"tayloreernisse","updated_at":"2026-02-19T04:47:46.349240Z","closed_at":"2026-02-19T04:47:46.349151Z","close_reason":"Added --tui flag to SyncArgs with early-return delegation to lore-tui --sync. Robot mode check, config forwarding, autocorrect registry updated.","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-3l56","depends_on_id":"bd-26lp","type":"blocks","created_at":"2026-02-12T19:34:39Z","created_by":"import"}]} diff --git a/.beads/last-touched b/.beads/last-touched index 1a45c25..e276029 100644 --- a/.beads/last-touched +++ b/.beads/last-touched @@ -1 +1 @@ -bd-2ez +bd-3jqx diff --git a/src/main.rs b/src/main.rs index 43ce653..3f2e804 100644 --- a/src/main.rs +++ b/src/main.rs @@ -2668,12 +2668,19 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box", "-f/--for "], + "flags": ["", "-f/--for "], "example": "lore --robot count issues", "response_schema": { - "ok": "bool", - "data": {"entity": "string", "count": "int", "system_excluded?": "int", "breakdown?": {"opened": "int", "closed": "int", "merged?": "int", "locked?": "int"}}, - "meta": {"elapsed_ms": "int"} + "standard": { + "ok": "bool", + "data": {"entity": "string", "count": "int", "system_excluded?": "int", "breakdown?": {"opened": "int", "closed": "int", "merged?": "int", "locked?": "int"}}, + "meta": {"elapsed_ms": "int"} + }, + "references": { + "ok": "bool", + "data": {"entity": "references", "total": "int", "by_type": {"closes": "int", "mentioned": "int", "related": "int"}, "by_method": {"api": "int", "note_parse": "int", "description_parse": "int"}, "unresolved": "int"}, + "meta": {"elapsed_ms": "int"} + } } }, "stats": { @@ -2814,6 +2821,20 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box", "", "-n/--limit", "-p/--project"], + "modes": { + "entity": "lore related issues 42 -- Find entities similar to issue #42", + "query": "lore related 'authentication bug' -- Find entities matching free text" + }, + "example": "lore --robot related issues 42 -n 5", + "response_schema": { + "ok": "bool", + "data": {"source": {"source_type": "string", "iid": "int?", "title": "string?"}, "query": "string?", "mode": "entity|query", "results": "[{source_type:string, iid:int, title:string, url:string?, similarity_score:float, shared_labels:[string], project_path:string?}]"}, + "meta": {"elapsed_ms": "int", "mode": "string", "embedding_dims": 768, "distance_metric": "l2"} + } + }, "notes": { "description": "List notes from discussions with rich filtering", "flags": ["--limit/-n ", "--author/-a ", "--note-type ", "--contains ", "--for-issue ", "--for-mr ", "-p/--project ", "--since ", "--until ", "--path ", "--resolution ", "--sort ", "--asc", "--include-system", "--note-id ", "--gitlab-note-id ", "--discussion-id ", "--format ", "--fields ", "--open"], @@ -2846,9 +2867,13 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box MR -> issue -> discussion decision chain", + "related: Semantic similarity discovery via vector embeddings", + "drift: Discussion divergence detection from original intent", "notes: Rich note listing with author, type, resolution, path, and discussion filters", "stats: Database statistics with document/note/discussion counts", - "count: Entity counts with state breakdowns", + "count: Entity counts with state breakdowns and reference analysis", "embed: Generate vector embeddings for semantic search via Ollama" ], "read_write_split": "lore = ALL reads (issues, MRs, search, who, timeline, intelligence). glab = ALL writes (create, update, approve, merge, CI/CD)." @@ -2900,9 +2925,10 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box' --since 30d", - "lore --robot timeline '' --depth 2" + "lore --robot sync (ensure events fetched with fetchResourceEvents=true)", + "lore --robot timeline '' for chronological event history", + "lore --robot file-history for file-level MR history", + "lore --robot trace for file -> MR -> issue -> discussion chain" ], "people_intelligence": [ "lore --robot who src/path/to/feature/",