From fd0a40b1815d2e28654c6c905a4c797355900317 Mon Sep 17 00:00:00 2001
From: teernisse
Date: Thu, 26 Feb 2026 11:06:59 -0500
Subject: [PATCH] chore: update beads and GitLab TODOs integration plan

Update beads issue tracking state and expand the GitLab TODOs
notifications integration design document with additional
implementation details.

Co-Authored-By: Claude Opus 4.5
---
 .beads/issues.jsonl                                |   2 +-
 .beads/last-touched                                |   2 +-
 .../gitlab-todos-notifications-integration.md      | 760 +++++++++++++-----
 3 files changed, 569 insertions(+), 195 deletions(-)

diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl
index e3d34c5..c14ee52 100644
--- a/.beads/issues.jsonl
+++ b/.beads/issues.jsonl
@@ -270,7 +270,7 @@
 {"id":"bd-6pmy","title":"Implement LoreApp Model trait (full update/view skeleton)","description":"## Background\nLoreApp is the central Model implementation for FrankenTUI's Elm Architecture. It owns all state (AppState), the navigation stack, task supervisor, db manager, clock, config, and crash context. The update() method is the single entry point for all state transitions, implementing a 5-stage key dispatch pipeline. The view() method routes to per-screen render functions.\n\n## Approach\nExpand crates/lore-tui/src/app.rs:\n- LoreApp struct fields: config (Config), db (DbManager), state (AppState), navigation (NavigationStack), supervisor (TaskSupervisor), clock (Box), input_mode (InputMode), command_registry (CommandRegistry), crash_context (CrashContext)\n- init() -> Cmd: install crash_context panic hook, return Cmd::task that loads dashboard data\n- update(msg: Msg) -> Option>: push CrashEvent to crash_context FIRST, then full dispatch with 5-stage interpret_key pipeline:\n 1. Quit check (q in Normal mode, Ctrl+C always)\n 2. InputMode routing (Text->delegate to text widget, Palette->delegate to palette, GoPrefix->check timeout+destination)\n 3. Global shortcuts (H=Home, Esc=back, Ctrl+P=palette, g=prefix, Ctrl+O/I=jump)\n 4. 
Screen-local keys (delegate to AppState::interpret_screen_key)\n 5. Fallback (unhandled key, no-op)\n\n**Key normalization pass in interpret_key():**\nBefore the 5-stage pipeline, normalize terminal key variants:\n- Backspace variants: map Delete/Backspace to canonical Backspace\n- Alt key variants: map Meta+key to Alt+key\n- Shift+Tab: map BackTab to Shift+Tab\n- This ensures consistent behavior across terminals (iTerm2, Alacritty, Terminal.app, tmux)\n\n- For non-key messages: match on Msg variants, update state, optionally return Cmd::task for async work\n- Stale result guard: check supervisor.is_current() before applying *Loaded results\n- view(frame): match navigation.current() to dispatch to per-screen view functions (stub initially)\n- subscriptions(): tick timer (250ms for spinner animation), debounce timers\n\n## Acceptance Criteria\n- [ ] LoreApp struct compiles with all required fields including crash_context\n- [ ] init() installs panic hook and returns a Cmd that triggers dashboard load\n- [ ] update() pushes CrashEvent to crash_context before dispatching\n- [ ] update() handles Msg::Quit by returning None\n- [ ] update() handles NavigateTo by pushing nav stack and spawning load_screen\n- [ ] update() handles GoBack by popping nav stack\n- [ ] interpret_key normalizes Backspace/Alt/Shift+Tab variants before dispatch\n- [ ] interpret_key 5-stage pipeline dispatches correctly per InputMode\n- [ ] GoPrefix times out after 500ms (checked via clock.now())\n- [ ] Stale results dropped: IssueListLoaded with old generation ignored\n- [ ] view() routes to correct screen render function based on navigation.current()\n- [ ] subscriptions() returns tick timer\n\n## Files\n- MODIFY: crates/lore-tui/src/app.rs (expand from minimal to full implementation)\n\n## TDD Anchor\nRED: Write test_quit_returns_none that creates LoreApp (with FakeClock, in-memory DB), calls update(Msg::Quit), asserts it returns None.\nGREEN: Implement update() with Quit match arm.\nVERIFY: 
cargo test --manifest-path crates/lore-tui/Cargo.toml test_quit\n\nAdditional tests:\n- test_navigate_to_pushes_stack: update(NavigateTo(IssueList)) changes navigation.current()\n- test_go_back_pops_stack: after push, GoBack returns to previous screen\n- test_stale_result_dropped: IssueListLoaded with old generation doesn't update state\n- test_go_prefix_timeout: GoPrefix cancels after 500ms (using FakeClock)\n- test_key_normalization_backspace: both Delete and Backspace map to canonical Backspace\n- test_crash_context_records_events: after update(), crash_context.events.len() increases\n\n## Edge Cases\n- update() must handle rapid-fire messages without blocking (no long computations in update)\n- Ctrl+C must always quit regardless of InputMode (safety escape)\n- GoPrefix must cancel on any non-destination key, not just on timeout\n- Text mode must pass Esc through to blur text input first, then Normal mode handles Esc for navigation\n- Key normalization must handle unknown/exotic key codes gracefully (pass through unchanged)\n\n## Dependency Context\nUses DbManager from \"Implement DbManager\" (bd-2kop).\nUses Clock/FakeClock from \"Implement Clock trait\" (bd-2lg6).\nUses Msg, Screen, InputMode from \"Implement core types\" (bd-c9gk).\nUses NavigationStack from \"Implement NavigationStack\" (bd-1qpp).\nUses TaskSupervisor from \"Implement TaskSupervisor\" (bd-3le2).\nUses CrashContext from \"Implement crash_context ring buffer\" (bd-2fr7).\nUses CommandRegistry from \"Implement CommandRegistry\" (bd-38lb).\nUses AppState from \"Implement AppState composition\" 
(bd-1v9m).","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T16:55:27.130909Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:25.486879Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-6pmy","depends_on_id":"bd-1qpp","type":"blocks","created_at":"2026-02-12T17:09:39.201885Z","created_by":"tayloreernisse"},{"issue_id":"bd-6pmy","depends_on_id":"bd-1v9m","type":"blocks","created_at":"2026-02-12T17:09:39.220385Z","created_by":"tayloreernisse"},{"issue_id":"bd-6pmy","depends_on_id":"bd-2emv","type":"blocks","created_at":"2026-02-12T17:09:39.191058Z","created_by":"tayloreernisse"},{"issue_id":"bd-6pmy","depends_on_id":"bd-2fr7","type":"blocks","created_at":"2026-02-12T18:11:13.784914Z","created_by":"tayloreernisse"},{"issue_id":"bd-6pmy","depends_on_id":"bd-2kop","type":"blocks","created_at":"2026-02-12T17:09:39.229673Z","created_by":"tayloreernisse"},{"issue_id":"bd-6pmy","depends_on_id":"bd-2lg6","type":"blocks","created_at":"2026-02-12T17:09:39.238835Z","created_by":"tayloreernisse"},{"issue_id":"bd-6pmy","depends_on_id":"bd-2tr4","type":"blocks","created_at":"2026-02-12T18:11:25.486853Z","created_by":"tayloreernisse"},{"issue_id":"bd-6pmy","depends_on_id":"bd-38lb","type":"blocks","created_at":"2026-02-12T17:09:39.248006Z","created_by":"tayloreernisse"},{"issue_id":"bd-6pmy","depends_on_id":"bd-3le2","type":"blocks","created_at":"2026-02-12T17:09:39.211701Z","created_by":"tayloreernisse"}]} {"id":"bd-88m","title":"[CP1] Issue ingestion module","description":"Fetch and store issues with cursor-based incremental sync.\n\n## Module\nsrc/ingestion/issues.rs\n\n## Key Structs\n\n### IngestIssuesResult\n- fetched: usize\n- upserted: usize\n- labels_created: usize\n- issues_needing_discussion_sync: Vec\n\n### IssueForDiscussionSync\n- local_issue_id: i64\n- iid: i64\n- updated_at: i64\n\n## Main Function\npub async fn ingest_issues(conn, client, config, project_id, gitlab_project_id) -> 
Result\n\n## Logic\n1. Get current cursor from sync_cursors (updated_at_cursor, tie_breaker_id)\n2. Paginate through issues updated after cursor with cursor_rewind_seconds\n3. Apply local filtering for tuple cursor semantics:\n - Skip if issue.updated_at < cursor_updated_at\n - Skip if issue.updated_at == cursor_updated_at AND issue.id <= cursor_gitlab_id\n4. For each issue passing filter:\n - Begin transaction\n - Store raw payload (compressed)\n - Transform and upsert issue\n - Clear existing label links (DELETE FROM issue_labels)\n - Extract and upsert labels\n - Link issue to labels via junction\n - Commit transaction\n - Track for discussion sync eligibility\n5. Incremental cursor update every 100 issues\n6. Final cursor update\n7. Determine issues needing discussion sync: where updated_at > discussions_synced_for_updated_at\n\n## Helper Functions\n- get_cursor(conn, project_id) -> (Option, Option)\n- get_discussions_synced_at(conn, issue_id) -> Option\n- upsert_issue(conn, issue, payload_id) -> usize\n- get_local_issue_id(conn, gitlab_id) -> i64\n- clear_issue_labels(conn, issue_id)\n- upsert_label(conn, label) -> bool\n- get_label_id(conn, project_id, name) -> i64\n- link_issue_label(conn, issue_id, label_id)\n- update_cursor(conn, project_id, resource_type, updated_at, gitlab_id)\n\nFiles: src/ingestion/mod.rs, src/ingestion/issues.rs\nTests: tests/issue_ingestion_tests.rs\nDone when: Issues, labels, issue_labels populated correctly with resumable cursor","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-01-25T16:57:35.655708Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.806982Z","deleted_at":"2026-01-25T17:02:01.806977Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-8ab7","title":"Implement Issue Detail (state + action + view)","description":"## Background\nThe Issue Detail screen shows a single issue with 
progressive hydration: Phase 1 loads metadata (fast), Phase 2 loads discussions asynchronously, Phase 3 loads thread bodies on expand. All subqueries run inside a single read transaction for snapshot consistency.\n\n## Approach\nState (state/issue_detail.rs):\n- IssueDetailState: current_key (Option), metadata (Option), discussions (Vec), discussions_loaded (bool), cross_refs (Vec), tree_state (TreePersistState), scroll_offset (usize)\n- IssueMetadata: iid, title, description, state, author, assignee, labels, milestone, created_at, updated_at, web_url, status_name, status_icon, closing_mr_iids, related_issue_iids\n- handle_key(): j/k scroll, Enter expand discussion thread, d open description, x cross-refs, o open in browser, t scoped timeline, Esc back to list\n\nAction (action.rs):\n- fetch_issue_detail(conn, key, clock) -> Result: uses with_read_snapshot for snapshot consistency. Fetches metadata, discussion count, cross-refs in single transaction.\n- fetch_discussions(conn, key) -> Result, LoreError>: loads discussions for the issue, separate async call (Phase 2 of hydration)\n\nView (view/issue_detail.rs):\n- render_issue_detail(frame, state, area, theme): header (IID, title, state badge, labels), description (markdown rendered with sanitization), discussions (tree widget), cross-references section\n- Header: \"Issue #42 — Fix auth flow [opened]\" with colored state badge\n- Description: rendered markdown, scrollable\n- Discussions: loaded async, shown with spinner until ready\n- Cross-refs: closing MRs, related issues as navigable links\n\n## Acceptance Criteria\n- [ ] Metadata loads in Phase 1 (p95 < 75ms on M-tier)\n- [ ] Discussions load async in Phase 2 (spinner shown while loading)\n- [ ] All detail subqueries run inside single read transaction (snapshot consistency)\n- [ ] Description text sanitized via sanitize_for_terminal()\n- [ ] Discussion tree renders with expand/collapse\n- [ ] Cross-references navigable via Enter\n- [ ] Esc returns to Issue List 
with cursor position preserved\n- [ ] Open in browser (o) uses classify_safe_url before launching\n- [ ] Scoped timeline (t) navigates to Timeline filtered for this entity\n\n## Files\n- MODIFY: crates/lore-tui/src/state/issue_detail.rs (expand from stub)\n- MODIFY: crates/lore-tui/src/action.rs (add fetch_issue_detail, fetch_discussions)\n- CREATE: crates/lore-tui/src/view/issue_detail.rs\n\n## TDD Anchor\nRED: Write test_fetch_issue_detail_snapshot in action.rs that inserts an issue with 2 discussions, calls fetch_issue_detail, asserts metadata and discussion count are correct.\nGREEN: Implement fetch_issue_detail with read transaction.\nVERIFY: cargo test --manifest-path crates/lore-tui/Cargo.toml test_fetch_issue_detail\n\n## Edge Cases\n- Issue with no description: show placeholder \"[No description]\"\n- Issue with hundreds of discussions: paginate or lazy-load beyond first 50\n- Cross-refs to entities not in local DB: show as text-only (not navigable)\n- Issue description with embedded images: show [image] placeholder (no inline rendering)\n- Entity cache (future): near-instant reopen during Enter/Esc drill workflows\n\n## Dependency Context\nUses discussion tree and cross-ref widgets from \"Implement discussion tree + cross-reference widgets\" task.\nUses EntityKey, Msg from \"Implement core types\" task.\nUses with_read_snapshot from DbManager from \"Implement DbManager\" task.\nUses sanitize_for_terminal from \"Implement terminal safety module\" task.\nUses Clock for timestamps from \"Implement Clock trait\" 
task.","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-12T16:59:10.081146Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:11:28.338916Z","compaction_level":0,"original_size":0,"labels":["TUI"],"dependencies":[{"issue_id":"bd-8ab7","depends_on_id":"bd-1cl9","type":"blocks","created_at":"2026-02-12T18:11:28.338883Z","created_by":"tayloreernisse"},{"issue_id":"bd-8ab7","depends_on_id":"bd-1d6z","type":"blocks","created_at":"2026-02-12T17:09:48.627780Z","created_by":"tayloreernisse"},{"issue_id":"bd-8ab7","depends_on_id":"bd-3ei1","type":"blocks","created_at":"2026-02-12T17:09:48.617739Z","created_by":"tayloreernisse"}]} -{"id":"bd-8con","title":"lore related: semantic similarity discovery","description":"## Background\nGiven any entity or free text, find semantically related entities using vector embeddings. No other GitLab tool does this — glab, GitLab Advanced Search, and even paid tiers are keyword-only. This finds conceptual connections humans miss.\n\n## Current Infrastructure (Verified 2026-02-12)\n- sqlite-vec extension loaded via sqlite3_vec_init in src/core/db.rs:84\n- Embeddings stored in: embedding_metadata table (chunk info) + vec0 virtual table named `embeddings` (vectors)\n- Migration 009 creates embedding infrastructure\n- search_vector() at src/search/vector.rs:43 — works with sqlite-vec KNN queries\n- OllamaClient::embed_batch() at src/embedding/ollama.rs:103 — batch embedding\n- Model: nomic-embed-text, 768 dimensions, context_length=2048 tokens (~1500 bytes)\n- 61K documents in DB, embedding coverage TBD\n\n### sqlite-vec Distance Metric\nThe `embeddings` virtual table is `vec0(embedding float[768])`. sqlite-vec's MATCH query returns L2 (Euclidean) distance by default. Lower distance = more similar. The `search_vector()` function returns `VectorResult { document_id: i64, distance: f64 }`.\n\n## Approach\n\n### Entity Mode: lore related issues N\n1. 
Look up document for issue N:\n```sql\nSELECT d.id, d.content_text\nFROM documents d\nJOIN issues i ON d.source_type = 'issue' AND d.source_id = i.id\nWHERE i.iid = ?1 AND i.project_id = (SELECT id FROM projects WHERE ...)\n```\nNOTE: `documents.source_id` is the internal DB id from the source table (issues.id), NOT the GitLab IID. See migration 007 comment: `source_id INTEGER NOT NULL -- local DB id in the source table`.\n\n2. Get its embedding: Look up via embedding_metadata which maps document_id -> rowid in the vec0 table:\n```sql\nSELECT em.rowid\nFROM embedding_metadata em\nWHERE em.document_id = ?1\nLIMIT 1 -- use first chunk's embedding as representative\n```\nThen extract the embedding vector from the vec0 table to use as the KNN query.\n\nAlternatively, embed the document's content_text on-the-fly via OllamaClient (simpler, more robust):\n```rust\nlet embedding = client.embed_batch(&[&doc.content_text]).await?[0].clone();\n```\n\n3. Call search_vector(conn, &embedding, limit * 2) for KNN — multiply limit to have room after filtering self\n4. Exclude self (filter out source document_id from results)\n5. Hydrate results: join documents -> issues/mrs/discussions for title, url, labels, author\n6. Compute shared_labels: parse `documents.label_names` (JSON array string) for both source and each result, intersect\n7. Return ranked list\n\n### Query Mode: lore related 'free text'\n1. Embed query via OllamaClient::embed_batch(&[query_text])\n2. Call search_vector(conn, &query_embedding, limit)\n3. Hydrate and return (same as entity mode minus self-exclusion)\n\n### Key Design Decision\nThis is intentionally SIMPLER than hybrid search. No FTS, no RRF. Pure vector similarity. The point is conceptual relatedness, not keyword matching.\n\n### Distance to Similarity Score Conversion\nsqlite-vec returns L2 (Euclidean) distance. 
Convert to 0-1 similarity:\n```rust\n/// Convert L2 distance to a 0-1 similarity score.\n/// Uses inverse relationship: closer (lower distance) = higher similarity.\n/// The +1 prevents division by zero and ensures score is in (0, 1].\nfn distance_to_similarity(distance: f64) -> f64 {\n 1.0 / (1.0 + distance)\n}\n```\nFor normalized embeddings (which nomic-embed-text produces), L2 distance ranges roughly 0-2. This formula maps:\n- distance 0.0 -> similarity 1.0 (identical)\n- distance 1.0 -> similarity 0.5\n- distance 2.0 -> similarity 0.33\n\n### Label Extraction for shared_labels\n```rust\nfn parse_label_names(label_names_json: &Option) -> HashSet {\n label_names_json\n .as_deref()\n .and_then(|s| serde_json::from_str::>(s).ok())\n .unwrap_or_default()\n .into_iter()\n .collect()\n}\n\nlet source_labels = parse_label_names(&source_doc.label_names);\nlet result_labels = parse_label_names(&result_doc.label_names);\nlet shared: Vec = source_labels.intersection(&result_labels).cloned().collect();\n```\n\n## Function Signatures\n\n```rust\n// New: src/cli/commands/related.rs\npub struct RelatedArgs {\n pub entity_type: Option, // \"issues\" or \"mrs\"\n pub entity_iid: Option,\n pub query: Option, // free text mode\n pub project: Option,\n pub limit: Option,\n}\n\npub async fn run_related(\n config: &Config,\n args: RelatedArgs,\n) -> Result\n\n// Reuse from src/search/vector.rs:43\npub fn search_vector(\n conn: &Connection,\n query_embedding: &[f32],\n limit: usize,\n) -> Result>\n// VectorResult { document_id: i64, distance: f64 }\n\n// Reuse from src/embedding/ollama.rs:103\npub async fn embed_batch(&self, texts: &[&str]) -> Result>>\n```\n\n## Robot Mode Output Schema\n```json\n{\n \"ok\": true,\n \"data\": {\n \"source\": { \"type\": \"issue\", \"iid\": 3864, \"title\": \"...\" },\n \"query\": \"switch throw time...\",\n \"results\": [{\n \"source_type\": \"issue\",\n \"iid\": 3800,\n \"title\": \"Rail Break Card\",\n \"url\": \"...\",\n \"similarity_score\": 
0.87,\n \"shared_labels\": [\"customer:BNSF\"],\n \"shared_authors\": [],\n \"project_path\": \"vs/typescript-code\"\n }]\n },\n \"meta\": { \"elapsed_ms\": 42, \"mode\": \"entity\", \"embedding_dims\": 768, \"distance_metric\": \"l2\" }\n}\n```\n\n## Clap Registration\n```rust\n// In src/main.rs Commands enum, add:\nRelated {\n /// Entity type (\"issues\" or \"mrs\") or free text query\n query_or_type: String,\n /// Entity IID (when first arg is entity type)\n iid: Option,\n /// Maximum results\n #[arg(short = 'n', long, default_value = \"10\")]\n limit: usize,\n /// Scope to project (fuzzy match)\n #[arg(short, long)]\n project: Option,\n},\n```\n\n## TDD Loop\nRED: Tests in src/cli/commands/related.rs:\n- test_related_entity_excludes_self: insert doc + embedding for issue, query related, assert source doc not in results\n- test_related_shared_labels: insert 2 docs with overlapping labels (JSON in label_names), assert shared_labels computed correctly\n- test_related_empty_embeddings: no embeddings in DB, assert exit code 14 with helpful error\n- test_related_query_mode: embed free text via mock, assert results returned\n- test_related_similarity_score_range: all scores between 0.0 and 1.0\n- test_distance_to_similarity: unit test the conversion function (0.0->1.0, 1.0->0.5, large->~0.0)\n\nGREEN: Implement related command using search_vector + hydration\n\nVERIFY:\n```bash\ncargo test related:: && cargo clippy --all-targets -- -D warnings\ncargo run --release -- -J related issues 3864 -n 5 | jq '.data.results[0].similarity_score'\n```\n\n## Acceptance Criteria\n- [ ] lore related issues N returns top-K semantically similar entities\n- [ ] lore related mrs N works for merge requests\n- [ ] lore related 'free text' works as concept search (requires Ollama)\n- [ ] Results exclude the input entity itself\n- [ ] similarity_score is 0-1 range (higher = more similar), converted from L2 distance\n- [ ] Robot mode includes shared_labels (from documents.label_names JSON), 
shared_authors per result\n- [ ] Human mode shows ranked list with titles, scores, common labels\n- [ ] No embeddings in DB: exit code 14 with message \"Run 'lore embed' first\"\n- [ ] Ollama unavailable (query mode only): exit code 14 with suggestion\n- [ ] Performance: <1s for 61K documents\n- [ ] Command registered in main.rs and robot-docs\n\n## Edge Cases\n- Entity has no embedding (added after last lore embed): embed its content_text on-the-fly via OllamaClient, or exit 14 if Ollama unavailable\n- All results have very low similarity (<0.3): include warning \"No strongly related entities found\"\n- Entity is a discussion (not issue/MR): should still work (documents table has discussion docs)\n- Multiple documents per entity (discussion docs): use the entity-level document, not discussion subdocs\n- Free text query very short (1-2 words): may produce noisy results, add warning\n- Entity not found in DB: exit code 17 with suggestion to sync\n- Ambiguous project: exit code 18 with suggestion to use -p flag\n- documents.label_names may be NULL or invalid JSON — parse_label_names handles both gracefully\n\n## Dependency Context\n- **bd-1ksf (hybrid search)**: BLOCKER. Shares OllamaClient infrastructure. Also ensures async search.rs patterns are established. 
Related reuses the same vector search infrastructure.\n\n## Files to Create/Modify\n- NEW: src/cli/commands/related.rs\n- src/cli/commands/mod.rs (add pub mod related; re-export)\n- src/main.rs (register Related subcommand in Commands enum, add handle_related fn)\n- Reuse: search_vector() from src/search/vector.rs, OllamaClient from src/embedding/ollama.rs","status":"open","priority":2,"issue_type":"feature","created_at":"2026-02-12T15:46:58.665923Z","created_by":"tayloreernisse","updated_at":"2026-02-18T21:44:52.749551Z","compaction_level":0,"original_size":0,"labels":["cli-imp","intelligence","search"],"dependencies":[{"issue_id":"bd-8con","depends_on_id":"bd-13lp","type":"parent-child","created_at":"2026-02-12T15:46:58.668835Z","created_by":"tayloreernisse"}]} +{"id":"bd-8con","title":"lore related: semantic similarity discovery","description":"## Background\nGiven any entity or free text, find semantically related entities using vector embeddings. No other GitLab tool does this — glab, GitLab Advanced Search, and even paid tiers are keyword-only. This finds conceptual connections humans miss.\n\n## Current Infrastructure (Verified 2026-02-12)\n- sqlite-vec extension loaded via sqlite3_vec_init in src/core/db.rs:84\n- Embeddings stored in: embedding_metadata table (chunk info) + vec0 virtual table named `embeddings` (vectors)\n- Migration 009 creates embedding infrastructure\n- search_vector() at src/search/vector.rs:43 — works with sqlite-vec KNN queries\n- OllamaClient::embed_batch() at src/embedding/ollama.rs:103 — batch embedding\n- Model: nomic-embed-text, 768 dimensions, context_length=2048 tokens (~1500 bytes)\n- 61K documents in DB, embedding coverage TBD\n\n### sqlite-vec Distance Metric\nThe `embeddings` virtual table is `vec0(embedding float[768])`. sqlite-vec's MATCH query returns L2 (Euclidean) distance by default. Lower distance = more similar. 
The `search_vector()` function returns `VectorResult { document_id: i64, distance: f64 }`.\n\n## Approach\n\n### Entity Mode: lore related issues N\n1. Look up document for issue N:\n```sql\nSELECT d.id, d.content_text\nFROM documents d\nJOIN issues i ON d.source_type = 'issue' AND d.source_id = i.id\nWHERE i.iid = ?1 AND i.project_id = (SELECT id FROM projects WHERE ...)\n```\nNOTE: `documents.source_id` is the internal DB id from the source table (issues.id), NOT the GitLab IID. See migration 007 comment: `source_id INTEGER NOT NULL -- local DB id in the source table`.\n\n2. Get its embedding: Look up via embedding_metadata which maps document_id -> rowid in the vec0 table:\n```sql\nSELECT em.rowid\nFROM embedding_metadata em\nWHERE em.document_id = ?1\nLIMIT 1 -- use first chunk's embedding as representative\n```\nThen extract the embedding vector from the vec0 table to use as the KNN query.\n\nAlternatively, embed the document's content_text on-the-fly via OllamaClient (simpler, more robust):\n```rust\nlet embedding = client.embed_batch(&[&doc.content_text]).await?[0].clone();\n```\n\n3. Call search_vector(conn, &embedding, limit * 2) for KNN — multiply limit to have room after filtering self\n4. Exclude self (filter out source document_id from results)\n5. Hydrate results: join documents -> issues/mrs/discussions for title, url, labels, author\n6. Compute shared_labels: parse `documents.label_names` (JSON array string) for both source and each result, intersect\n7. Return ranked list\n\n### Query Mode: lore related 'free text'\n1. Embed query via OllamaClient::embed_batch(&[query_text])\n2. Call search_vector(conn, &query_embedding, limit)\n3. Hydrate and return (same as entity mode minus self-exclusion)\n\n### Key Design Decision\nThis is intentionally SIMPLER than hybrid search. No FTS, no RRF. Pure vector similarity. 
The point is conceptual relatedness, not keyword matching.\n\n### Distance to Similarity Score Conversion\nsqlite-vec returns L2 (Euclidean) distance. Convert to 0-1 similarity:\n```rust\n/// Convert L2 distance to a 0-1 similarity score.\n/// Uses inverse relationship: closer (lower distance) = higher similarity.\n/// The +1 prevents division by zero and ensures score is in (0, 1].\nfn distance_to_similarity(distance: f64) -> f64 {\n 1.0 / (1.0 + distance)\n}\n```\nFor normalized embeddings (which nomic-embed-text produces), L2 distance ranges roughly 0-2. This formula maps:\n- distance 0.0 -> similarity 1.0 (identical)\n- distance 1.0 -> similarity 0.5\n- distance 2.0 -> similarity 0.33\n\n### Label Extraction for shared_labels\n```rust\nfn parse_label_names(label_names_json: &Option) -> HashSet {\n label_names_json\n .as_deref()\n .and_then(|s| serde_json::from_str::>(s).ok())\n .unwrap_or_default()\n .into_iter()\n .collect()\n}\n\nlet source_labels = parse_label_names(&source_doc.label_names);\nlet result_labels = parse_label_names(&result_doc.label_names);\nlet shared: Vec = source_labels.intersection(&result_labels).cloned().collect();\n```\n\n## Function Signatures\n\n```rust\n// New: src/cli/commands/related.rs\npub struct RelatedArgs {\n pub entity_type: Option, // \"issues\" or \"mrs\"\n pub entity_iid: Option,\n pub query: Option, // free text mode\n pub project: Option,\n pub limit: Option,\n}\n\npub async fn run_related(\n config: &Config,\n args: RelatedArgs,\n) -> Result\n\n// Reuse from src/search/vector.rs:43\npub fn search_vector(\n conn: &Connection,\n query_embedding: &[f32],\n limit: usize,\n) -> Result>\n// VectorResult { document_id: i64, distance: f64 }\n\n// Reuse from src/embedding/ollama.rs:103\npub async fn embed_batch(&self, texts: &[&str]) -> Result>>\n```\n\n## Robot Mode Output Schema\n```json\n{\n \"ok\": true,\n \"data\": {\n \"source\": { \"type\": \"issue\", \"iid\": 3864, \"title\": \"...\" },\n \"query\": \"switch throw 
time...\",\n \"results\": [{\n \"source_type\": \"issue\",\n \"iid\": 3800,\n \"title\": \"Rail Break Card\",\n \"url\": \"...\",\n \"similarity_score\": 0.87,\n \"shared_labels\": [\"customer:BNSF\"],\n \"shared_authors\": [],\n \"project_path\": \"vs/typescript-code\"\n }]\n },\n \"meta\": { \"elapsed_ms\": 42, \"mode\": \"entity\", \"embedding_dims\": 768, \"distance_metric\": \"l2\" }\n}\n```\n\n## Clap Registration\n```rust\n// In src/main.rs Commands enum, add:\nRelated {\n /// Entity type (\"issues\" or \"mrs\") or free text query\n query_or_type: String,\n /// Entity IID (when first arg is entity type)\n iid: Option,\n /// Maximum results\n #[arg(short = 'n', long, default_value = \"10\")]\n limit: usize,\n /// Scope to project (fuzzy match)\n #[arg(short, long)]\n project: Option,\n},\n```\n\n## TDD Loop\nRED: Tests in src/cli/commands/related.rs:\n- test_related_entity_excludes_self: insert doc + embedding for issue, query related, assert source doc not in results\n- test_related_shared_labels: insert 2 docs with overlapping labels (JSON in label_names), assert shared_labels computed correctly\n- test_related_empty_embeddings: no embeddings in DB, assert exit code 14 with helpful error\n- test_related_query_mode: embed free text via mock, assert results returned\n- test_related_similarity_score_range: all scores between 0.0 and 1.0\n- test_distance_to_similarity: unit test the conversion function (0.0->1.0, 1.0->0.5, large->~0.0)\n\nGREEN: Implement related command using search_vector + hydration\n\nVERIFY:\n```bash\ncargo test related:: && cargo clippy --all-targets -- -D warnings\ncargo run --release -- -J related issues 3864 -n 5 | jq '.data.results[0].similarity_score'\n```\n\n## Acceptance Criteria\n- [ ] lore related issues N returns top-K semantically similar entities\n- [ ] lore related mrs N works for merge requests\n- [ ] lore related 'free text' works as concept search (requires Ollama)\n- [ ] Results exclude the input entity itself\n- [ ] 
similarity_score is 0-1 range (higher = more similar), converted from L2 distance\n- [ ] Robot mode includes shared_labels (from documents.label_names JSON), shared_authors per result\n- [ ] Human mode shows ranked list with titles, scores, common labels\n- [ ] No embeddings in DB: exit code 14 with message \"Run 'lore embed' first\"\n- [ ] Ollama unavailable (query mode only): exit code 14 with suggestion\n- [ ] Performance: <1s for 61K documents\n- [ ] Command registered in main.rs and robot-docs\n\n## Edge Cases\n- Entity has no embedding (added after last lore embed): embed its content_text on-the-fly via OllamaClient, or exit 14 if Ollama unavailable\n- All results have very low similarity (<0.3): include warning \"No strongly related entities found\"\n- Entity is a discussion (not issue/MR): should still work (documents table has discussion docs)\n- Multiple documents per entity (discussion docs): use the entity-level document, not discussion subdocs\n- Free text query very short (1-2 words): may produce noisy results, add warning\n- Entity not found in DB: exit code 17 with suggestion to sync\n- Ambiguous project: exit code 18 with suggestion to use -p flag\n- documents.label_names may be NULL or invalid JSON — parse_label_names handles both gracefully\n\n## Dependency Context\n- **bd-1ksf (hybrid search)**: BLOCKER. Shares OllamaClient infrastructure. Also ensures async search.rs patterns are established. 
Related reuses the same vector search infrastructure.\n\n## Files to Create/Modify\n- NEW: src/cli/commands/related.rs\n- src/cli/commands/mod.rs (add pub mod related; re-export)\n- src/main.rs (register Related subcommand in Commands enum, add handle_related fn)\n- Reuse: search_vector() from src/search/vector.rs, OllamaClient from src/embedding/ollama.rs","status":"closed","priority":2,"issue_type":"feature","created_at":"2026-02-12T15:46:58.665923Z","created_by":"tayloreernisse","updated_at":"2026-02-26T14:21:12.786863Z","closed_at":"2026-02-26T14:21:12.786818Z","close_reason":"Implemented: entity mode (issues/mrs), query mode (free text), similarity scoring, shared labels, robot/human output","compaction_level":0,"original_size":0,"labels":["cli-imp","intelligence","search"],"dependencies":[{"issue_id":"bd-8con","depends_on_id":"bd-13lp","type":"parent-child","created_at":"2026-02-12T15:46:58.668835Z","created_by":"tayloreernisse"}]} {"id":"bd-8t4","title":"Extract cross-references from resource_state_events","description":"## Background\nresource_state_events includes source_merge_request (with iid) for 'closed by MR' events. 
After state events are stored (Gate 1), post-processing extracts these into entity_references for the cross-reference graph.\n\n## Approach\nCreate src/core/references.rs (new module) or add to events_db.rs:\n\n```rust\n/// Extract cross-references from stored state events and insert into entity_references.\n/// Looks for state events with source_merge_request_id IS NOT NULL (meaning \"closed by MR\").\n/// \n/// Directionality: source = MR (that caused the close), target = issue (that was closed)\npub fn extract_refs_from_state_events(\n conn: &Connection,\n project_id: i64,\n) -> Result // returns count of new references inserted\n```\n\nSQL logic:\n```sql\nINSERT OR IGNORE INTO entity_references (\n source_entity_type, source_entity_id,\n target_entity_type, target_entity_id,\n reference_type, source_method, created_at\n)\nSELECT\n 'merge_request',\n mr.id,\n 'issue',\n rse.issue_id,\n 'closes',\n 'api_state_event',\n rse.created_at\nFROM resource_state_events rse\nJOIN merge_requests mr ON mr.project_id = rse.project_id AND mr.iid = rse.source_merge_request_id\nWHERE rse.source_merge_request_id IS NOT NULL\n AND rse.issue_id IS NOT NULL\n AND rse.project_id = ?1;\n```\n\nKey: source_merge_request_id stores the MR iid, so we JOIN on merge_requests.iid to get the local DB id.\n\nRegister in src/core/mod.rs: `pub mod references;`\n\nCall this after drain_dependent_queue in the sync pipeline (after all state events are stored).\n\n## Acceptance Criteria\n- [ ] State events with source_merge_request_id produce 'closes' references\n- [ ] Source = MR (resolved by iid), target = issue\n- [ ] source_method = 'api_state_event'\n- [ ] INSERT OR IGNORE prevents duplicates with api_closes_issues data\n- [ ] Returns count of newly inserted references\n- [ ] No-op when no state events have source_merge_request_id\n\n## Files\n- src/core/references.rs (new)\n- src/core/mod.rs (add `pub mod references;`)\n- src/cli/commands/sync.rs (call after drain step)\n\n## TDD Loop\nRED: 
tests/references_tests.rs:\n- `test_extract_refs_from_state_events_basic` - seed a \"closed\" state event with source_merge_request_id, verify entity_reference created\n- `test_extract_refs_dedup_with_closes_issues` - insert ref from closes_issues API first, verify state event extraction doesn't duplicate\n- `test_extract_refs_no_source_mr` - state events without source_merge_request_id produce no refs\n\nSetup: create_test_db with migrations 001-011, seed project + issue + MR + state events.\n\nGREEN: Implement extract_refs_from_state_events\n\nVERIFY: `cargo test references -- --nocapture`\n\n## Edge Cases\n- source_merge_request_id may reference an MR not synced locally (cross-project close) — the JOIN will produce no match, which is correct behavior (ref simply not created)\n- Multiple state events can reference the same MR for the same issue (reopen + re-close) — INSERT OR IGNORE handles dedup\n- The merge_requests table might not have the MR yet if sync is still running — call this after all dependent fetches complete","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:32:33.619606Z","created_by":"tayloreernisse","updated_at":"2026-02-04T20:13:28.219791Z","closed_at":"2026-02-04T20:13:28.219633Z","compaction_level":0,"original_size":0,"labels":["extraction","gate-2","phase-b"],"dependencies":[{"issue_id":"bd-8t4","depends_on_id":"bd-1ep","type":"blocks","created_at":"2026-02-02T21:32:42.945176Z","created_by":"tayloreernisse"},{"issue_id":"bd-8t4","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-02-02T21:32:33.621025Z","created_by":"tayloreernisse"},{"issue_id":"bd-8t4","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-02-02T22:41:50.562935Z","created_by":"tayloreernisse"}]} {"id":"bd-91j1","title":"Comprehensive robot-docs as agent bootstrap","description":"## Background\nAgents reach for glab because they already know it from training data. 
lore robot-docs exists but is not comprehensive enough to serve as a zero-training bootstrap. An agent encountering lore for the first time should be able to use any command correctly after reading robot-docs output alone.\n\n## Current State (Verified 2026-02-12)\n- `handle_robot_docs()` at src/main.rs:2069\n- Called at no-args in robot mode (main.rs:165) and via Commands::RobotDocs { brief } (main.rs:229)\n- Current output top-level keys: name, version, description, activation, commands, aliases, exit_codes, clap_error_codes, error_format, workflows\n- Missing: response_schema per command, example_output per command, quick_start section, glab equivalence table\n- --brief flag exists but returns shorter version of same structure\n- main.rs is 2579 lines total\n\n## Current robot-docs Output Structure\n```json\n{\n \"name\": \"lore\",\n \"version\": \"0.6.1\",\n \"description\": \"...\",\n \"activation\": { \"flags\": [\"--robot\", \"-J\"], \"env\": \"LORE_ROBOT=1\", \"auto_detect\": \"non-TTY\" },\n \"commands\": [{ \"name\": \"...\", \"description\": \"...\", \"flags\": [...], \"example\": \"...\" }],\n \"aliases\": { ... },\n \"exit_codes\": { ... },\n \"clap_error_codes\": { ... },\n \"error_format\": { ... },\n \"workflows\": { ... }\n}\n```\n\n## Approach\n\n### 1. 
Add quick_start section\nTop-level key with glab-to-lore translation and lore-exclusive feature summary:\n```json\n\"quick_start\": {\n \"glab_equivalents\": [\n { \"glab\": \"glab issue list\", \"lore\": \"lore -J issues -n 50\", \"note\": \"Richer: includes labels, status, closing MRs\" },\n { \"glab\": \"glab issue view 123\", \"lore\": \"lore -J issues 123\", \"note\": \"Includes discussions, work-item status\" },\n { \"glab\": \"glab mr list\", \"lore\": \"lore -J mrs\", \"note\": \"Includes draft status, reviewers\" },\n { \"glab\": \"glab mr view 456\", \"lore\": \"lore -J mrs 456\", \"note\": \"Includes discussions, file changes\" },\n { \"glab\": \"glab api '/projects/:id/issues'\", \"lore\": \"lore -J issues -p project\", \"note\": \"Fuzzy project matching\" }\n ],\n \"lore_exclusive\": [\n \"search: FTS5 + vector hybrid search across all entities\",\n \"who: Expert/workload/reviews analysis per file path or person\",\n \"timeline: Chronological event reconstruction across entities\",\n \"stats: Database statistics with document/note/discussion counts\",\n \"count: Entity counts with state breakdowns\"\n ]\n}\n```\n\n### 2. Add response_schema per command\nFor each command in the commands array, add a `response_schema` field showing the JSON shape:\n```json\n{\n \"name\": \"issues\",\n \"response_schema\": {\n \"ok\": \"boolean\",\n \"data\": { \"type\": \"array|object\", \"fields\": [\"iid\", \"title\", \"state\", \"...\"] },\n \"meta\": { \"elapsed_ms\": \"integer\" }\n }\n}\n```\nCommands with multiple output shapes (list vs detail) need both documented.\n\n### 3. Add example_output per command\nRealistic truncated JSON for each command. Keep each example under 500 bytes.\n\n### 4. Token budget enforcement\n- --brief mode: ONLY quick_start + command names + invocation syntax. Target <4000 tokens (~16000 bytes).\n- Full mode: everything. 
Target <12000 tokens (~48000 bytes).\n- Measure with: `cargo run --release -- --robot robot-docs --brief | wc -c`\n\n## TDD Loop\nRED: Tests in src/main.rs or new src/cli/commands/robot_docs.rs:\n- test_robot_docs_has_quick_start: parse output JSON, assert quick_start.glab_equivalents array has >= 5 entries\n- test_robot_docs_brief_size: --brief output < 16000 bytes\n- test_robot_docs_full_size: full output < 48000 bytes\n- test_robot_docs_has_response_schemas: every command entry has response_schema key\n- test_robot_docs_commands_complete: assert all registered commands appear (issues, mrs, search, who, timeline, count, stats, sync, embed, doctor, health, ingest, generate-docs, show)\n\nGREEN: Add quick_start, response_schema, example_output to robot-docs output\n\nVERIFY:\n```bash\ncargo test robot_docs && cargo clippy --all-targets -- -D warnings\ncargo run --release -- --robot robot-docs | jq '.quick_start.glab_equivalents | length'\n# Should return >= 5\ncargo run --release -- --robot robot-docs --brief | wc -c\n# Should be < 16000\n```\n\n## Acceptance Criteria\n- [ ] robot-docs JSON has quick_start.glab_equivalents array with >= 5 entries\n- [ ] robot-docs JSON has quick_start.lore_exclusive array\n- [ ] Every command entry has response_schema showing the JSON shape\n- [ ] Every command entry has example_output with realistic truncated data\n- [ ] --brief output is under 16000 bytes (~4000 tokens)\n- [ ] Full output is under 48000 bytes (~12000 tokens)\n- [ ] An agent reading ONLY robot-docs can correctly invoke any lore command\n- [ ] cargo test passes with new robot_docs tests\n\n## Edge Cases\n- Commands with multiple output shapes (e.g., issues list vs issues detail via iid) need both schemas documented\n- --fields flag changes output shape -- document the effect in the response_schema\n- robot-docs output must be stable across versions (agents may cache it)\n- Version field should match Cargo.toml version\n\n## Files to Modify\n- src/main.rs fn 
handle_robot_docs() (~line 2069) — add quick_start section, response_schema, example_output\n- Consider extracting to src/cli/commands/robot_docs.rs if the function exceeds 200 lines","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-12T15:44:40.495479Z","created_by":"tayloreernisse","updated_at":"2026-02-12T16:49:01.043915Z","closed_at":"2026-02-12T16:49:01.043832Z","close_reason":"Robot-docs enhanced with quick_start (glab equivalents, lore exclusives, read/write split) and example_output for issues/mrs/search/who","compaction_level":0,"original_size":0,"labels":["cli","cli-imp","robot-mode"],"dependencies":[{"issue_id":"bd-91j1","depends_on_id":"bd-13lp","type":"parent-child","created_at":"2026-02-12T15:44:40.497236Z","created_by":"tayloreernisse"}]} {"id":"bd-9av","title":"[CP1] gi sync-status enhancement","description":"Enhance sync-status from CP0 stub to show issue cursors.\n\n## Changes to src/cli/commands/sync_status.rs\n\nUpdate the existing stub to show:\n- Last run timestamp and duration\n- Cursor positions per project (issues resource_type)\n- Entity counts (issues, discussions, notes)\n\n## Output Format\nLast sync: 2026-01-25 10:30:00 (succeeded, 45s)\n\nCursors:\n group/project-one\n issues: 2026-01-25T10:25:00Z (gitlab_id: 12345678)\n\nCounts:\n Issues: 1,234\n Discussions: 5,678\n Notes: 23,456 (4,567 system)\n\nFiles: src/cli/commands/sync_status.rs\nDone when: Shows cursor positions and counts after ingestion","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T16:58:27.246825Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.968507Z","deleted_at":"2026-01-25T17:02:01.968503Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} diff --git a/.beads/last-touched b/.beads/last-touched index ec8bd5b..a646e0c 100644 --- a/.beads/last-touched +++ b/.beads/last-touched @@ -1 +1 @@ -bd-1tv8 +bd-8con 
diff --git a/plans/gitlab-todos-notifications-integration.md b/plans/gitlab-todos-notifications-integration.md index e873e3c..8dbc0d5 100644 --- a/plans/gitlab-todos-notifications-integration.md +++ b/plans/gitlab-todos-notifications-integration.md @@ -2,235 +2,433 @@ plan: true title: "GitLab TODOs Integration" status: proposed -iteration: 1 -target_iterations: 3 +iteration: 4 +target_iterations: 4 beads_revision: 1 related_plans: [] created: 2026-02-23 -updated: 2026-02-23 +updated: 2026-02-26 +audit_revision: 4 --- # GitLab TODOs Integration ## Summary -Add GitLab TODO support to lore. Todos are fetched during sync, stored locally, and surfaced through: -1. A new `--todos` section in `lore me` -2. Enrichment of the activity feed in `lore me` -3. A standalone `lore todos` command +Add GitLab TODO support to lore. Todos are fetched during sync, stored locally, and surfaced through a standalone `lore todos` command and integration into the `lore me` dashboard. **Scope:** Read-only. No mark-as-done operations. 
--- -## Design Decisions (from interview) +## Workflows -| Decision | Choice | -|----------|--------| -| Write operations | **Read-only** — no mark-as-done | -| Storage | **Persist locally** in SQLite | -| Integration | Three-way: activity enrichment + `--todos` flag + `lore todos` | -| Action types | Core only: assigned, mentioned, directly_addressed, approval_required, build_failed, unmergeable | -| Niche actions | Skip display (but store): merge_train_removed, member_access_requested, marked | -| Project filter | **Always account-wide** — `--project` does NOT filter todos | -| Sync timing | During normal `lore sync` | -| Non-synced projects | Include with `[external]` indicator | -| Attention state | **Separate signal** — todos don't boost attention | -| Summary header | Include pending todo count | -| Grouping | By action type: Assignments \| Mentions \| Approvals \| Build Issues | -| History | **Pending only** — done todos not tracked | -| `lore todos` filters | **None** — show all pending, simple | -| Robot mode | Yes, standard envelope | -| Target types | All GitLab supports (Issue, MR, Epic, Commit, etc.) | +### Workflow 1: Morning Triage (Human) ---- +1. User runs `lore me` to see personal dashboard +2. Summary header shows "5 pending todos" alongside issue/MR counts +3. Todos section groups items: 2 Assignments, 2 Mentions, 1 Approval Required +4. User scans Assignments — sees issue #42 assigned by @manager +5. User runs `lore todos` for full detail with body snippets +6. User clicks target URL to address highest-priority item +7. After marking done in GitLab, next `lore sync` removes it locally -## Out of Scope +### Workflow 2: Agent Polling (Robot Mode) -- Write operations (mark as done) -- Done todo history tracking -- Filters on `lore todos` command -- Todo-based attention state boosting -- Notification settings API integration (deferred to separate plan) +1. Agent runs `lore --robot health` as pre-flight check +2. 
Agent runs `lore --robot me --fields minimal` for dashboard +3. Agent extracts `pending_todo_count` from summary — if 0, skip todos +4. If count > 0, agent runs `lore --robot todos` +5. Agent iterates `data.todos[]`, filtering by `action` type +6. Agent prioritizes `approval_required` and `build_failed` for immediate attention +7. Agent logs external todos (`is_external: true`) for manual review + +### Workflow 3: Cross-Project Visibility + +1. User is mentioned in a project they don't sync (e.g., company-wide repo) +2. `lore sync` fetches the todo anyway (account-wide fetch) +3. `lore todos` shows item with `[external]` indicator and project path +4. User can still click target URL to view in GitLab +5. Target title may be unavailable — graceful fallback to "Untitled" --- ## Acceptance Criteria -### AC-1: Database Schema +Behavioral contract. Each AC is a single testable statement. -- [ ] **AC-1.1:** Create `todos` table with columns: - - `id` INTEGER PRIMARY KEY - - `gitlab_todo_id` INTEGER NOT NULL UNIQUE - - `project_id` INTEGER REFERENCES projects(id) ON DELETE SET NULL (nullable for non-synced) - - `target_type` TEXT NOT NULL (Issue, MergeRequest, Commit, Epic, etc.) - - `target_id` INTEGER (GitLab ID of target entity) - - `target_iid` INTEGER (IID for issues/MRs, nullable) - - `target_url` TEXT NOT NULL - - `target_title` TEXT - - `action_name` TEXT NOT NULL (assigned, mentioned, etc.) 
- - `author_id` INTEGER - - `author_username` TEXT - - `body` TEXT (the todo message/snippet) - - `state` TEXT NOT NULL (pending) - - `created_at` INTEGER NOT NULL (epoch ms) - - `updated_at` INTEGER NOT NULL (epoch ms) - - `synced_at` INTEGER NOT NULL (epoch ms) - - `project_path` TEXT (for display even if project not synced) -- [ ] **AC-1.2:** Create index `idx_todos_state_action` on `(state, action_name)` -- [ ] **AC-1.3:** Create index `idx_todos_target` on `(target_type, target_id)` -- [ ] **AC-1.4:** Create index `idx_todos_created` on `(created_at DESC)` -- [ ] **AC-1.5:** Migration increments schema version +### Storage -### AC-2: GitLab API Client +| ID | Behavior | +|----|----------| +| AC-1 | Todos are persisted locally in SQLite | +| AC-2 | Each todo is uniquely identified by its GitLab todo ID | +| AC-3 | Todos from non-synced projects are stored with their project path | -- [ ] **AC-2.1:** Add `fetch_todos()` method to GitLab client -- [ ] **AC-2.2:** Fetch only `state=pending` todos -- [ ] **AC-2.3:** Handle pagination (use existing pagination pattern) -- [ ] **AC-2.4:** Parse all target types GitLab returns -- [ ] **AC-2.5:** Extract project path from `target_url` for non-synced projects +### Sync -### AC-3: Sync Pipeline +| ID | Behavior | +|----|----------| +| AC-4 | `lore sync` fetches all pending todos from GitLab | +| AC-5 | Sync fetches todos account-wide, not per-project | +| AC-6 | Todos marked done in GitLab are removed locally on next sync | +| AC-7 | Transient sync errors do not delete valid local todos | +| AC-8 | `lore sync --no-todos` skips todo fetching | +| AC-9 | Sync logs todo statistics (fetched, inserted, updated, deleted) | -- [ ] **AC-3.1:** Add todos sync step to `lore sync` pipeline -- [ ] **AC-3.2:** Sync todos AFTER issues/MRs (ordering consistency) -- [ ] **AC-3.3:** Snapshot semantics: fetch all pending, upsert, delete missing (= marked done elsewhere) -- [ ] **AC-3.4:** Track `synced_at` timestamp -- [ ] **AC-3.5:** Log 
todo sync stats: fetched, inserted, updated, deleted -- [ ] **AC-3.6:** Add `--no-todos` flag to skip todo sync +### `lore todos` Command -### AC-4: Action Type Handling +| ID | Behavior | +|----|----------| +| AC-10 | `lore todos` displays all pending todos | +| AC-11 | Todos are grouped by action type: Assignments, Mentions, Approvals, Build Issues | +| AC-12 | Each todo shows: target title, project path, author, age | +| AC-13 | Non-synced project todos display `[external]` indicator | +| AC-14 | `lore todos --limit N` limits output to N todos | +| AC-15 | `lore --robot todos` returns JSON with standard `{ok, data, meta}` envelope | +| AC-16 | `lore --robot todos --fields minimal` returns reduced field set | +| AC-17 | `todo` and `td` are recognized as aliases for `todos` | -- [ ] **AC-4.1:** Store ALL action types from GitLab -- [ ] **AC-4.2:** Display only core actions: - - `assigned` — assigned to issue/MR - - `mentioned` — @mentioned in comment - - `directly_addressed` — @mentioned at start of comment - - `approval_required` — approval needed on MR - - `build_failed` — CI failed on your MR - - `unmergeable` — merge conflicts on your MR -- [ ] **AC-4.3:** Skip display (but store) niche actions: `merge_train_removed`, `member_access_requested`, `marked` +### `lore me` Integration -### AC-5: `lore todos` Command +| ID | Behavior | +|----|----------| +| AC-18 | `lore me` summary includes pending todo count | +| AC-19 | `lore me` includes a todos section in the full dashboard | +| AC-20 | `lore me --todos` shows only the todos section | +| AC-21 | Todos are NOT filtered by `--project` flag (always account-wide) | +| AC-22 | Warning is displayed if `--project` is passed with `--todos` | +| AC-23 | Todo events appear in the activity feed for local entities | -- [ ] **AC-5.1:** New subcommand `lore todos` (alias: `todo`) -- [ ] **AC-5.2:** Display all pending todos, no filters -- [ ] **AC-5.3:** Group by action type: Assignments | Mentions | Approvals | Build Issues 
-- [ ] **AC-5.4:** Per-todo display: target title, project path, author, age, action -- [ ] **AC-5.5:** Flag non-synced project todos with `[external]` indicator -- [ ] **AC-5.6:** Human-readable output with colors/icons -- [ ] **AC-5.7:** Robot mode: standard `{ok, data, meta}` envelope +### Action Types -### AC-6: `lore me --todos` Section +| ID | Behavior | +|----|----------| +| AC-24 | Core actions are displayed: assigned, mentioned, directly_addressed, approval_required, build_failed, unmergeable | +| AC-25 | Niche actions are stored but not displayed: merge_train_removed, member_access_requested, marked | -- [ ] **AC-6.1:** Add `--todos` flag to `MeArgs` -- [ ] **AC-6.2:** When no section flags: show todos in full dashboard -- [ ] **AC-6.3:** When `--todos` flag only: show only todos section -- [ ] **AC-6.4:** Todos section grouped by action type -- [ ] **AC-6.5:** Todos NOT filtered by `--project` (always account-wide) -- [ ] **AC-6.6:** Robot mode includes `todos` array in dashboard response +### Attention State -### AC-7: `lore me` Summary Header +| ID | Behavior | +|----|----------| +| AC-26 | Todos do not affect attention state calculation | +| AC-27 | Todos do not appear in "since last check" cursor-based inbox | -- [ ] **AC-7.1:** Add `pending_todo_count` to `MeSummary` struct -- [ ] **AC-7.2:** Display todo count in summary line (human mode) -- [ ] **AC-7.3:** Include `pending_todo_count` in robot mode summary +### Error Handling -### AC-8: Activity Feed Enrichment +| ID | Behavior | +|----|----------| +| AC-28 | 403 Forbidden on todos API logs warning and continues sync | +| AC-29 | 429 Rate Limited respects Retry-After header | +| AC-30 | Malformed todo JSON logs warning, skips that item, and disables purge for that sync | -- [ ] **AC-8.1:** Todos with local issue/MR target appear in activity feed -- [ ] **AC-8.2:** New `ActivityEventType::Todo` variant -- [ ] **AC-8.3:** Todo events show: action type, author, target in summary -- [ ] **AC-8.4:** 
Sorted chronologically with other activity events -- [ ] **AC-8.5:** Respect `--since` filter on todo `created_at` +### Documentation -### AC-9: Non-Synced Project Handling +| ID | Behavior | +|----|----------| +| AC-31 | `lore todos` appears in CLI help | +| AC-32 | `lore robot-docs` includes todos schema | +| AC-33 | CLAUDE.md documents the todos command | -- [ ] **AC-9.1:** Store todos even if target project not in config -- [ ] **AC-9.2:** Display `[external]` indicator for non-synced project todos -- [ ] **AC-9.3:** Show project path (extracted from target URL) -- [ ] **AC-9.4:** Graceful fallback when target title unavailable +### Quality -### AC-10: Attention State - -- [ ] **AC-10.1:** Attention state calculation remains note-based (unchanged) -- [ ] **AC-10.2:** Todos are separate signal, do not affect attention state -- [ ] **AC-10.3:** Document this design decision in code comments - -### AC-11: Robot Mode Schema - -- [ ] **AC-11.1:** `lore todos --robot` returns: - ```json - { - "ok": true, - "data": { - "todos": [{ - "id": 123, - "gitlab_todo_id": 456, - "action": "mentioned", - "target_type": "Issue", - "target_iid": 42, - "target_title": "Fix login bug", - "target_url": "https://...", - "project_path": "group/repo", - "author_username": "jdoe", - "body": "Hey @you, can you look at this?", - "created_at_iso": "2026-02-20T10:00:00Z", - "is_external": false - }], - "counts": { - "total": 8, - "assigned": 2, - "mentioned": 5, - "approval_required": 1, - "build_failed": 0, - "unmergeable": 0 - } - }, - "meta": {"elapsed_ms": 42} - } - ``` -- [ ] **AC-11.2:** `lore me --robot` includes `todos` and `pending_todo_count` in response -- [ ] **AC-11.3:** Support `--fields minimal` for token efficiency - -### AC-12: Documentation - -- [ ] **AC-12.1:** Update CLAUDE.md with `lore todos` command reference -- [ ] **AC-12.2:** Update `lore robot-docs` manifest with todos schema -- [ ] **AC-12.3:** Add todos to CLI help output - -### AC-13: Quality Gates - -- [ ] 
**AC-13.1:** `cargo check --all-targets` passes -- [ ] **AC-13.2:** `cargo clippy --all-targets -- -D warnings` passes -- [ ] **AC-13.3:** `cargo fmt --check` passes -- [ ] **AC-13.4:** `cargo test` passes with new tests +| ID | Behavior | +|----|----------| +| AC-34 | All quality gates pass: check, clippy, fmt, test | --- -## Technical Notes +## Architecture -### GitLab API Endpoint +Designed to fulfill the acceptance criteria above. + +### Module Structure ``` -GET /api/v4/todos?state=pending +src/ +├── gitlab/ +│ ├── client.rs # fetch_todos() method (AC-4, AC-5) +│ └── types.rs # GitLabTodo struct +├── ingestion/ +│ └── todos.rs # sync_todos(), purge-safe deletion (AC-6, AC-7) +├── cli/commands/ +│ ├── todos.rs # lore todos command (AC-10-17) +│ └── me/ +│ ├── types.rs # MeTodo, extend MeSummary (AC-18) +│ └── queries.rs # query_todos() (AC-19, AC-23) +└── core/ + └── db.rs # Migration 028 (AC-1, AC-2, AC-3) ``` -Response fields: id, project, author, action_name, target_type, target, target_url, body, state, created_at, updated_at +### Data Flow -### Sync Deletion Strategy - -Snapshot semantics: a todo disappearing from API response means it was marked done elsewhere. Delete from local DB to stay in sync. 
- -### Project Path Extraction - -For non-synced projects, extract path from `target_url`: ``` -https://gitlab.com/group/subgroup/repo/-/issues/42 - ^^^^^^^^^^^^^^^^^ extract this +GitLab API Local SQLite CLI Output +─────────── ──────────── ────────── +GET /api/v4/todos → todos table → lore todos +(account-wide) (purge-safe sync) lore me --todos ``` -### Action Type Grouping +### Key Design Decisions + +| Decision | Rationale | ACs | +|----------|-----------|-----| +| Account-wide fetch | GitLab todos API is user-scoped, not project-scoped | AC-5, AC-21 | +| Purge-safe deletion | Transient errors should not delete valid data | AC-7 | +| Separate from attention | Todos are notifications, not engagement signals | AC-26, AC-27 | +| Store all actions, display core | Future-proofs for new action types | AC-24, AC-25 | + +### Existing Code to Extend + +| Type | Location | Extension | +|------|----------|-----------| +| `MeSummary` | `src/cli/commands/me/types.rs` | Add `pending_todo_count` field | +| `ActivityEventType` | `src/cli/commands/me/types.rs` | Add `Todo` variant | +| `MeDashboard` | `src/cli/commands/me/types.rs` | Add `todos: Vec` field | +| `SyncArgs` | `src/cli/mod.rs` | Add `--no-todos` flag | +| `MeArgs` | `src/cli/mod.rs` | Add `--todos` flag | + +--- + +## Implementation Specifications + +Each IMP section details HOW to fulfill specific ACs. 
+ +### IMP-1: Database Schema + +**Fulfills:** AC-1, AC-2, AC-3 + +**Migration 028:** + +```sql +CREATE TABLE todos ( + id INTEGER PRIMARY KEY, + gitlab_todo_id INTEGER NOT NULL UNIQUE, + project_id INTEGER REFERENCES projects(id) ON DELETE SET NULL, + gitlab_project_id INTEGER, + target_type TEXT NOT NULL, + target_id TEXT, + target_iid INTEGER, + target_url TEXT NOT NULL, + target_title TEXT, + action_name TEXT NOT NULL, + author_id INTEGER, + author_username TEXT, + body TEXT, + created_at INTEGER NOT NULL, + updated_at INTEGER NOT NULL, + synced_at INTEGER NOT NULL, + sync_generation INTEGER NOT NULL DEFAULT 0, + project_path TEXT +); + +CREATE INDEX idx_todos_action_created ON todos(action_name, created_at DESC); +CREATE INDEX idx_todos_target ON todos(target_type, target_id); +CREATE INDEX idx_todos_created ON todos(created_at DESC); +CREATE INDEX idx_todos_sync_gen ON todos(sync_generation); +CREATE INDEX idx_todos_gitlab_project ON todos(gitlab_project_id); +CREATE INDEX idx_todos_target_lookup ON todos(target_type, project_id, target_iid); +``` + +**Notes:** +- `project_id` nullable for non-synced projects (AC-3) +- `gitlab_project_id` nullable — TODO targets include non-project entities (Namespace, etc.) 
+- No `state` column — we only store pending todos +- `sync_generation` enables two-generation grace purge (AC-7) + +--- + +### IMP-2: GitLab API Client + +**Fulfills:** AC-4, AC-5 + +**Endpoint:** `GET /api/v4/todos?state=pending` + +**Types to add in `src/gitlab/types.rs`:** + +```rust +#[derive(Debug, Deserialize)] +pub struct GitLabTodo { + pub id: i64, + pub project: Option<GitLabTodoProject>, + pub author: Option<GitLabTodoAuthor>, + pub action_name: String, + pub target_type: String, + pub target: Option<GitLabTodoTarget>, + pub target_url: String, + pub body: Option<String>, + pub state: String, + pub created_at: String, + pub updated_at: String, +} + +#[derive(Debug, Deserialize)] +pub struct GitLabTodoProject { + pub id: i64, + pub path_with_namespace: String, +} + +#[derive(Debug, Deserialize)] +pub struct GitLabTodoTarget { + pub id: serde_json::Value, // i64 or String (commit SHA) + pub iid: Option<i64>, + pub title: Option<String>, +} + +#[derive(Debug, Deserialize)] +pub struct GitLabTodoAuthor { + pub id: i64, + pub username: String, +} +``` + +**Client method in `src/gitlab/client.rs`:** + +```rust +pub fn fetch_todos(&self) -> impl Stream<Item = Result<GitLabTodo>> { + self.paginate("/api/v4/todos?state=pending") +} +``` + +--- + +### IMP-3: Sync Pipeline Integration + +**Fulfills:** AC-4, AC-5, AC-6, AC-7, AC-8, AC-9 + +**New file: `src/ingestion/todos.rs`** + +**Sync position:** Account-wide step after per-project sync and status enrichment. + +``` +Sync order: +1. Issues (per project) +2. MRs (per project) +3. Status enrichment (account-wide GraphQL) +4. Todos (account-wide REST) ← NEW +``` + +**Purge-safe deletion pattern:** + +```rust +pub struct TodoSyncResult { + pub fetched: usize, + pub upserted: usize, + pub deleted: usize, + pub generation: i64, + pub purge_allowed: bool, +} + +pub fn sync_todos(conn: &Connection, client: &GitLabClient) -> Result<TodoSyncResult> { + // 1.
Get next generation + let generation: i64 = conn.query_row( + "SELECT COALESCE(MAX(sync_generation), 0) + 1 FROM todos", + [], |r| r.get(0) + )?; + + let mut fetched = 0; + let mut purge_allowed = true; + + // 2. Fetch and upsert all todos + for result in client.fetch_todos()? { + match result { + Ok(todo) => { + upsert_todo_guarded(conn, &todo, generation)?; + fetched += 1; + } + Err(e) => { + // Malformed JSON: log warning, skip item, disable purge + warn!("Skipping malformed todo: {e}"); + purge_allowed = false; + } + } + } + + // 3. Two-generation grace purge: delete only if missing for 2+ consecutive syncs + // This protects against pagination drift (new todos inserted during traversal) + let deleted = if purge_allowed { + conn.execute("DELETE FROM todos WHERE sync_generation < ? - 1", [generation])? + } else { + 0 + }; + + Ok(TodoSyncResult { fetched, upserted: fetched, deleted, generation, purge_allowed }) +} +``` + +**Concurrent-safe upsert:** + +```sql +INSERT INTO todos (..., sync_generation) VALUES (?, ..., ?) +ON CONFLICT(gitlab_todo_id) DO UPDATE SET + ..., + sync_generation = excluded.sync_generation, + synced_at = excluded.synced_at +WHERE excluded.sync_generation >= todos.sync_generation; +``` + +**"Success" for purge (all must be true):** +- Every page fetch completed without error +- Every todo JSON decoded successfully (any decode failure sets `purge_allowed=false`) +- Pagination traversal completed (not interrupted) +- Response was not 401/403 +- Zero todos IS valid for purge when above conditions met + +**Two-generation grace purge:** +Todos are deleted only if missing for 2 consecutive successful syncs (`sync_generation < current - 1`). +This protects against false deletions from pagination drift (new todos inserted during traversal). 
+ +--- + +### IMP-4: Project Path Extraction + +**Fulfills:** AC-3, AC-13 + +```rust +use once_cell::sync::Lazy; +use regex::Regex; + +pub fn extract_project_path(url: &str) -> Option<&str> { + static RE: Lazy<Regex> = Lazy::new(|| { + Regex::new(r"https?://[^/]+/(.+?)/-/(?:issues|merge_requests|epics|commits)/") + .expect("valid regex") + }); + + RE.captures(url) + .and_then(|c| c.get(1)) + .map(|m| m.as_str()) +} +``` + +**Usage:** Prefer `project.path_with_namespace` from API when available. Fall back to URL extraction for external projects. + +--- + +### IMP-5: `lore todos` Command + +**Fulfills:** AC-10, AC-11, AC-12, AC-13, AC-14, AC-15, AC-16, AC-17 + +**New file: `src/cli/commands/todos.rs`** + +**Args:** + +```rust +#[derive(Parser)] +#[command(alias = "todo")] +pub struct TodosArgs { + #[arg(short = 'n', long)] + pub limit: Option<usize>, +} +``` + +**Autocorrect aliases in `src/cli/mod.rs`:** + +```rust +("td", "todos"), +("todo", "todos"), +``` + +**Action type grouping:** | Group | Actions | |-------|---------| @@ -239,36 +437,212 @@ https://gitlab.com/group/subgroup/repo/-/issues/42 | Approvals | `approval_required` | | Build Issues | `build_failed`, `unmergeable` | +**Robot mode schema:** + +```json +{ + "ok": true, + "data": { + "todos": [{ + "id": 123, + "gitlab_todo_id": 456, + "action": "mentioned", + "target_type": "Issue", + "target_iid": 42, + "target_title": "Fix login bug", + "target_url": "https://...", + "project_path": "group/repo", + "author_username": "jdoe", + "body": "Hey @you, can you look at this?", + "created_at_iso": "2026-02-20T10:00:00Z", + "is_external": false + }], + "counts": { + "total": 8, + "assigned": 2, + "mentioned": 5, + "approval_required": 1, + "build_failed": 0, + "unmergeable": 0, + "other": 0 + } + }, + "meta": {"elapsed_ms": 42} +} +``` + +**Minimal fields:** `gitlab_todo_id`, `action`, `target_type`, `target_iid`, `project_path`, `is_external` + +--- + +### IMP-6: `lore me` Integration + +**Fulfills:** AC-18, AC-19, AC-20,
AC-21, AC-22, AC-23
+
+**Types to add/extend in `src/cli/commands/me/types.rs`:**
+
+```rust
+// EXTEND
+pub struct MeSummary {
+    // ... existing fields ...
+    pub pending_todo_count: usize, // ADD
+}
+
+// EXTEND
+pub enum ActivityEventType {
+    // ... existing variants ...
+    Todo, // ADD
+}
+
+// EXTEND
+pub struct MeDashboard {
+    // ... existing fields ...
+    pub todos: Vec<MeTodo>, // ADD
+}
+
+// NEW
+pub struct MeTodo {
+    pub id: i64,
+    pub gitlab_todo_id: i64,
+    pub action: String,
+    pub target_type: String,
+    pub target_iid: Option<i64>,
+    pub target_title: Option<String>,
+    pub target_url: String,
+    pub project_path: String,
+    pub author_username: Option<String>,
+    pub body: Option<String>,
+    pub created_at: i64,
+    pub is_external: bool,
+}
+```
+
+**Warning for `--project` with `--todos` (AC-22):**
+
+```rust
+if args.todos && args.project.is_some() {
+    eprintln!("Warning: Todos are account-wide; project filter not applied");
+}
+```
+
+---
+
+### IMP-7: Error Handling
+
+**Fulfills:** AC-28, AC-29, AC-30
+
+| Error | Behavior |
+|-------|----------|
+| 403 Forbidden | Log warning, skip todo sync, continue with other entities |
+| 429 Rate Limited | Respect `Retry-After` header using existing retry policy |
+| Malformed JSON | Log warning with todo ID, skip item, set `purge_allowed=false`, continue batch |
+
+**Rationale for purge disable on malformed JSON:** If we can't decode a todo, we don't know its `gitlab_todo_id`. Without that, we might accidentally purge a valid todo that was simply malformed in transit. Disabling purge for that sync is the safe choice.
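The error table above can be condensed into a pure decision function. This is an illustrative sketch only — `TodoSyncError`, `on_sync_error`, and the action strings are hypothetical names, not existing code:

```rust
// Hypothetical model of the IMP-7 error policy.
#[derive(Debug)]
enum TodoSyncError {
    Forbidden,                             // 403: token lacks scope
    RateLimited { retry_after_secs: u64 }, // 429: honor Retry-After
    MalformedItem(String),                 // undecodable todo JSON
}

// Returns the intended behavior and flips `purge_allowed` when the
// sync can no longer be trusted as a complete snapshot.
fn on_sync_error(err: &TodoSyncError, purge_allowed: &mut bool) -> &'static str {
    match err {
        TodoSyncError::Forbidden => "warn; skip todo sync; continue other entities",
        TodoSyncError::RateLimited { .. } => "back off per Retry-After via existing retry policy",
        TodoSyncError::MalformedItem(_) => {
            // Without a decodable gitlab_todo_id the purge cannot be trusted.
            *purge_allowed = false;
            "warn; skip item; continue batch"
        }
    }
}
```

Note that only the malformed-item case disables the purge; 403 and 429 abort or delay the sync before the purge step is reached at all.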
+
+---
+
+### IMP-8: Test Fixtures
+
+**Fulfills:** AC-34
+
+**Location:** `tests/fixtures/todos/`
+
+**`todos_pending.json`:**
+```json
+[
+  {
+    "id": 102,
+    "project": {"id": 2, "path_with_namespace": "diaspora/client"},
+    "author": {"id": 1, "username": "admin"},
+    "action_name": "mentioned",
+    "target_type": "Issue",
+    "target": {"id": 11, "iid": 4, "title": "Inventory system"},
+    "target_url": "https://gitlab.example.com/diaspora/client/-/issues/4",
+    "body": "@user please review",
+    "state": "pending",
+    "created_at": "2026-02-20T10:00:00.000Z",
+    "updated_at": "2026-02-20T10:00:00.000Z"
+  }
+]
+```
+
+**`todos_empty.json`:** `[]`
+
+**`todos_commit_target.json`:** (target.id is string SHA)
+
+**`todos_niche_actions.json`:** (merge_train_removed, etc.)
+
 ---
 
 ## Rollout Slices
 
+### Dependency Graph
+
+```
+Slice A ──────► Slice B ──────┬──────► Slice C
+(Schema)        (Sync)        │        (`lore todos`)
+                              │
+                              └──────► Slice D
+                                       (`lore me`)
+
+Slice C ───┬───► Slice E
+Slice D ───┘     (Polish)
+```
+
 ### Slice A: Schema + Client
-- Migration 028
-- `GitLabTodo` type
-- `fetch_todos()` client method
-- Unit tests for deserialization
+
+**ACs:** AC-1, AC-2, AC-3, AC-4, AC-5
+**IMPs:** IMP-1, IMP-2, IMP-4
+**Deliverable:** Migration + client method + deserialization tests pass
 
 ### Slice B: Sync Integration
-- `src/ingestion/todos.rs`
-- Integrate into `lore sync`
-- `--no-todos` flag
-- Sync stats
+
+**ACs:** AC-6, AC-7, AC-8, AC-9, AC-28, AC-29, AC-30
+**IMPs:** IMP-3, IMP-7
+**Deliverable:** `lore sync` fetches todos; `--no-todos` works
 
 ### Slice C: `lore todos` Command
-- CLI args + dispatch
-- Human + robot rendering
-- Autocorrect aliases
+
+**ACs:** AC-10, AC-11, AC-12, AC-13, AC-14, AC-15, AC-16, AC-17, AC-24, AC-25
+**IMPs:** IMP-5
+**Deliverable:** `lore todos` and `lore --robot todos` work
 
 ### Slice D: `lore me` Integration
-- `--todos` flag
-- Summary count
-- Activity feed enrichment
+
+**ACs:** AC-18, AC-19, AC-20, AC-21, AC-22, AC-23, AC-26, AC-27
+**IMPs:** IMP-6
+**Deliverable:** `lore me --todos` works; summary shows count
 
 ### Slice E: Polish
-- Edge case tests
-- Documentation updates
-- `robot-docs` manifest
+
+**ACs:** AC-31, AC-32, AC-33, AC-34
+**IMPs:** IMP-8
+**Deliverable:** Docs updated; all quality gates pass
+
+---
+
+## Design Decisions
+
+| Decision | Choice | Rationale |
+|----------|--------|-----------|
+| Write operations | Read-only | Complexity; glab handles writes |
+| Storage | SQLite | Consistent with existing architecture |
+| Project filter | Account-wide only | GitLab API is user-scoped |
+| Action type display | Core only | Reduce noise; store all for future |
+| Attention state | Separate signal | Todos are notifications, not engagement |
+| History | Pending only | Simplicity; done todos have no value locally |
+| Grouping | By action type | Matches GitLab UI; aids triage |
+| Purge strategy | Two-generation grace | Protects against pagination drift during sync |
+
+---
+
+## Out of Scope
+
+- Write operations (mark as done)
+- Done todo history tracking
+- Filters beyond `--limit`
+- Todo-based attention state boosting
+- Notification settings API
 ---
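One wrinkle the `todos_commit_target.json` fixture (IMP-8) exercises: commit targets carry a string SHA in `target.id`, where issue and MR targets carry an integer. A hypothetical std-only sketch of one way to represent that — `TargetId` and `parse_target_id` are illustrative names, not existing code:

```rust
// Illustrative representation for todo target IDs: integers for
// issues/MRs, string SHAs for commit targets.
#[derive(Debug, PartialEq)]
enum TargetId {
    Int(i64),
    Sha(String),
}

// Classify a raw target id: anything that parses as i64 is an
// integer id, everything else is treated as a commit SHA.
fn parse_target_id(raw: &str) -> TargetId {
    match raw.parse::<i64>() {
        Ok(n) => TargetId::Int(n),
        Err(_) => TargetId::Sha(raw.to_string()),
    }
}
```

Keeping the two cases in one sum type avoids a lossy cast when deserializing the commit-target fixture.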