diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl index 8b809b1..2e31f21 100644 --- a/.beads/issues.jsonl +++ b/.beads/issues.jsonl @@ -20,7 +20,7 @@ {"id":"bd-1i2","title":"Integrate mark_dirty_tx into ingestion modules","description":"## Background\nThis bead integrates dirty source tracking into the existing ingestion pipelines. Every entity upserted during ingestion must be marked dirty so the document regenerator knows to update the corresponding search document. The critical constraint: mark_dirty_tx() must be called INSIDE the same transaction that upserts the entity — not after commit.\n\n**Key PRD clarification:** Mark ALL upserted entities dirty (not just changed ones). The regenerator's hash comparison handles \"unchanged\" detection cheaply — this avoids needing change detection in ingestion.\n\n## Approach\nModify 4 existing ingestion files to add mark_dirty_tx() calls inside existing transaction blocks per PRD Section 6.1.\n\n**1. src/ingestion/issues.rs:**\nInside the issue upsert loop, after each successful INSERT/UPDATE:\n```rust\ndirty_tracker::mark_dirty_tx(&tx, SourceType::Issue, issue_row.id)?;\n```\n\n**2. src/ingestion/merge_requests.rs:**\nInside the MR upsert loop:\n```rust\ndirty_tracker::mark_dirty_tx(&tx, SourceType::MergeRequest, mr_row.id)?;\n```\n\n**3. src/ingestion/discussions.rs:**\nInside discussion insert (issue discussions, full-refresh transaction):\n```rust\ndirty_tracker::mark_dirty_tx(&tx, SourceType::Discussion, discussion_row.id)?;\n```\n\n**4. src/ingestion/mr_discussions.rs:**\nInside discussion upsert (write phase):\n```rust\ndirty_tracker::mark_dirty_tx(&tx, SourceType::Discussion, discussion_row.id)?;\n```\n\n**Discussion Sweep Cleanup (PRD Section 6.1 — CRITICAL):**\nWhen the MR discussion sweep deletes stale discussions (`last_seen_at < run_start_time`), **delete the corresponding document rows directly** — do NOT use the dirty queue for cleanup. The `ON DELETE CASCADE` on `document_labels`/`document_paths` and the `documents_embeddings_ad` trigger handle all downstream cleanup.\n\n**PRD-exact CTE pattern:**\n```sql\n-- In src/ingestion/mr_discussions.rs, during sweep phase.\n-- Uses a CTE to capture stale IDs atomically before cascading deletes.\n-- This is more defensive than two separate statements because the CTE\n-- guarantees the ID set is captured before any row is deleted.\nWITH stale AS (\n  SELECT id FROM discussions\n  WHERE merge_request_id = ? AND last_seen_at < ?\n)\n-- Step 1: delete orphaned documents (must happen while source_id still resolves)\nDELETE FROM documents\n WHERE source_type = 'discussion' AND source_id IN (SELECT id FROM stale);\n-- Step 2: delete the stale discussions themselves\nDELETE FROM discussions\n WHERE id IN (SELECT id FROM stale);\n```\n\n**NOTE:** If the SQLite version doesn't support the CTE-based multi-statement form, execute as two sequential statements capturing IDs in Rust first:\n```rust\nlet stale_ids: Vec<i64> = conn.prepare(\n    \"SELECT id FROM discussions WHERE merge_request_id = ? 
AND last_seen_at < ?\"\n)?.query_map(params![mr_id, run_start], |r| r.get(0))?\n    .collect::<Result<Vec<i64>, _>>()?;\n\nif !stale_ids.is_empty() {\n    // Delete documents FIRST (while source_id still resolves)\n    conn.execute(\n        \"DELETE FROM documents WHERE source_type = 'discussion' AND source_id IN (...)\",\n        ...\n    )?;\n    // Then delete the discussions\n    conn.execute(\n        \"DELETE FROM discussions WHERE id IN (...)\",\n        ...\n    )?;\n}\n```\n\n**IMPORTANT difference from dirty queue pattern:** The sweep deletes documents DIRECTLY (not via dirty_sources queue). This is because the source entity is being deleted — there's nothing for the regenerator to regenerate from. The cascade handles FTS, labels, paths, and embeddings cleanup.\n\n## Acceptance Criteria\n- [ ] Every upserted issue is marked dirty inside the same transaction\n- [ ] Every upserted MR is marked dirty inside the same transaction\n- [ ] Every upserted discussion (issue + MR) is marked dirty inside the same transaction\n- [ ] ALL upserted entities marked dirty (not just changed ones) — regenerator handles skip\n- [ ] mark_dirty_tx called with &Transaction (not &Connection)\n- [ ] mark_dirty_tx uses upsert with ON CONFLICT to reset backoff state (not INSERT OR IGNORE)\n- [ ] Discussion sweep deletes documents DIRECTLY (not via dirty queue)\n- [ ] Discussion sweep uses CTE (or Rust-side ID capture) to capture stale IDs before cascading deletes\n- [ ] Documents deleted BEFORE discussions (while source_id still resolves)\n- [ ] ON DELETE CASCADE handles document_labels, document_paths cleanup\n- [ ] documents_embeddings_ad trigger handles embedding cleanup\n- [ ] `cargo build` succeeds\n- [ ] Existing ingestion tests still pass\n\n## Files\n- `src/ingestion/issues.rs` — add mark_dirty_tx calls in upsert loop\n- `src/ingestion/merge_requests.rs` — add mark_dirty_tx calls in upsert loop\n- `src/ingestion/discussions.rs` — add mark_dirty_tx calls in insert loop\n- `src/ingestion/mr_discussions.rs` — add mark_dirty_tx calls + direct document deletion in sweep\n\n## TDD Loop\nRED: Existing tests should still pass (regression); new tests:\n- `test_issue_upsert_marks_dirty` — after issue ingest, dirty_sources has entry\n- `test_mr_upsert_marks_dirty` — after MR ingest, dirty_sources has entry\n- `test_discussion_upsert_marks_dirty` — after discussion ingest, dirty_sources has entry\n- `test_discussion_sweep_deletes_documents` — stale discussion documents deleted directly\n- `test_sweep_cascade_cleans_labels_paths` — ON DELETE CASCADE works\nGREEN: Add mark_dirty_tx calls in all 4 files, implement sweep with CTE\nVERIFY: `cargo test ingestion && cargo build`\n\n## Edge Cases\n- Upsert that doesn't change data: still marks dirty (regenerator hash check handles skip)\n- Transaction rollback: dirty mark also rolled back (atomic, inside same txn)\n- Discussion sweep with zero stale IDs: CTE returns empty, no DELETE executed\n- Large batch of upserts: each mark_dirty_tx is O(1) INSERT with ON CONFLICT\n- Sweep deletes document before discussion: order matters for source_id resolution","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:27:09.540279Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:39:17.241433Z","closed_at":"2026-01-30T17:39:17.241390Z","close_reason":"Added mark_dirty_tx calls in issues.rs, merge_requests.rs, discussions.rs, mr_discussions.rs (2 
paths)","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1i2","depends_on_id":"bd-38q","type":"blocks","created_at":"2026-01-30T15:29:35.105551Z","created_by":"tayloreernisse"}]} {"id":"bd-1j1","title":"Integration test: full Phase B sync pipeline","description":"## Background\nAfter all Gate 1-2 components are built, we need an integration test proving the full pipeline works end-to-end: sync → enqueue dependent fetches → drain queue → extract refs from state events → parse system notes for refs. Without this, individual unit tests pass but the pipeline may not wire together correctly.\n\n## Approach\nCreate tests/phase_b_integration.rs with a comprehensive test suite:\n\n```rust\n#[tokio::test]\nasync fn test_phase_b_sync_pipeline_integration() {\n // 1. Create test DB with migrations 001-012\n // 2. Seed: project, issues, MRs, discussions with system notes\n // 3. Seed: resource_state_events with source_merge_request_id\n // 4. Seed: dependent_fetch_queue entries (state_events, label_events)\n // 5. Run drain_dependent_queue (mocked HTTP → fixture JSON)\n // 6. Run extract_refs_from_state_events\n // 7. Run extract_refs_from_system_notes\n // 8. Assert: entity_references populated with correct source/target/type/method\n // 9. Assert: no duplicate refs (INSERT OR IGNORE worked)\n // 10. Assert: unresolved cross-project refs stored correctly\n}\n```\n\nUse wiremock or a trait-based HTTP mock for GitLab API responses. Fixture files in tests/fixtures/phase_b/.\n\n## Acceptance Criteria\n- [ ] Test creates DB, runs all migrations through 012\n- [ ] Test seeds realistic data (issues, MRs, state events, system notes)\n- [ ] Test runs the full pipeline in correct order\n- [ ] Test verifies entity_references from all 3 sources: closes_issues API, state events, system notes\n- [ ] Test verifies deduplication across sources\n- [ ] Test verifies unresolved cross-project references\n- [ ] Test passes with `cargo test phase_b_integration -- --nocapture`\n\n## Files\n- tests/phase_b_integration.rs (new)\n- tests/fixtures/phase_b/state_events.json (new)\n- tests/fixtures/phase_b/label_events.json (new)\n- tests/fixtures/phase_b/system_notes.json (new)\n\n## TDD Loop\nRED: tests/phase_b_integration.rs:\n- `test_full_pipeline_produces_entity_references` - seeds all data, runs full pipeline, asserts entity_references populated from state events + system notes + closes_issues API\n- `test_pipeline_deduplication_across_sources` - same ref discovered by API and system note → single row in entity_references\n- `test_pipeline_unresolved_cross_project_refs` - system note mentioning external project → entity_references row with target_entity_id=NULL and target_iid populated\n- `test_pipeline_empty_queue_succeeds` - no queue entries → pipeline completes with 0 refs, no error\n- `test_pipeline_migrations_001_through_012` - verify all migrations apply cleanly in sequence on fresh DB\n\nSetup: create_test_db helper applying all migrations, seed_phase_b_fixtures() populating issues, MRs, discussions, notes (including system notes with \"closed by !123\" patterns), resource_state_events with source_merge_request fields.\n\nGREEN: Wire pipeline calls in correct order, create fixture JSON files\n\nVERIFY: `cargo test phase_b_integration -- --nocapture`\n\n## Edge Cases\n- Empty queue: pipeline completes successfully with 0 refs\n- All refs duplicate: INSERT OR IGNORE produces 0 new inserts\n- Mixed sources: same ref discovered by API + system note → single entry\n- Migration failure: test should 
fail clearly if migrations don't apply cleanly","status":"open","priority":3,"issue_type":"task","created_at":"2026-02-02T22:42:26.355071Z","created_by":"tayloreernisse","updated_at":"2026-02-03T13:42:58.964288Z","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1j1","depends_on_id":"bd-1ji","type":"blocks","created_at":"2026-02-02T22:43:27.941002Z","created_by":"tayloreernisse"},{"issue_id":"bd-1j1","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-02-02T22:43:40.577709Z","created_by":"tayloreernisse"},{"issue_id":"bd-1j1","depends_on_id":"bd-3ia","type":"blocks","created_at":"2026-02-02T22:43:28.048311Z","created_by":"tayloreernisse"},{"issue_id":"bd-1j1","depends_on_id":"bd-8t4","type":"blocks","created_at":"2026-02-02T22:43:27.996061Z","created_by":"tayloreernisse"}]} {"id":"bd-1je","title":"Implement pending discussion queue","description":"## Background\nThe pending discussion queue tracks discussions that need to be fetched from GitLab. When an issue or MR is updated, its discussions may need re-fetching. This queue is separate from dirty_sources (which tracks entities needing document regeneration) — it tracks entities needing API calls to GitLab. The queue uses the same backoff pattern as dirty_sources for consistency.\n\n## Approach\nCreate `src/ingestion/discussion_queue.rs`:\n\n```rust\nuse crate::core::backoff::compute_next_attempt_at;\n\n/// Noteable type for discussion queue.\n#[derive(Debug, Clone, Copy)]\npub enum NoteableType {\n Issue,\n MergeRequest,\n}\n\nimpl NoteableType {\n pub fn as_str(&self) -> &'static str {\n match self {\n Self::Issue => \"Issue\",\n Self::MergeRequest => \"MergeRequest\",\n }\n }\n}\n\npub struct PendingFetch {\n pub project_id: i64,\n pub noteable_type: NoteableType,\n pub noteable_iid: i64,\n pub attempt_count: i32,\n}\n\n/// Queue a discussion fetch. 
ON CONFLICT DO UPDATE resets backoff (consistent with dirty_sources).\npub fn queue_discussion_fetch(\n    conn: &Connection,\n    project_id: i64,\n    noteable_type: NoteableType,\n    noteable_iid: i64,\n) -> Result<()>;\n\n/// Get next batch of pending fetches (WHERE next_attempt_at IS NULL OR <= now).\npub fn get_pending_fetches(conn: &Connection, limit: usize) -> Result<Vec<PendingFetch>>;\n\n/// Mark fetch complete (remove from queue).\npub fn complete_fetch(\n    conn: &Connection,\n    project_id: i64,\n    noteable_type: NoteableType,\n    noteable_iid: i64,\n) -> Result<()>;\n\n/// Record fetch error with backoff.\npub fn record_fetch_error(\n    conn: &Connection,\n    project_id: i64,\n    noteable_type: NoteableType,\n    noteable_iid: i64,\n    error: &str,\n) -> Result<()>;\n```\n\n## Acceptance Criteria\n- [ ] queue_discussion_fetch uses ON CONFLICT DO UPDATE (consistent with dirty_sources pattern)\n- [ ] Re-queuing resets: attempt_count=0, next_attempt_at=NULL, last_error=NULL\n- [ ] get_pending_fetches respects next_attempt_at backoff\n- [ ] get_pending_fetches returns entries ordered by queued_at ASC\n- [ ] complete_fetch removes entry from queue\n- [ ] record_fetch_error increments attempt_count, computes next_attempt_at via shared backoff\n- [ ] NoteableType.as_str() returns \"Issue\" or \"MergeRequest\" (matches DB CHECK constraint)\n- [ ] `cargo test discussion_queue` passes\n\n## Files\n- `src/ingestion/discussion_queue.rs` — new file\n- `src/ingestion/mod.rs` — add `pub mod discussion_queue;`\n\n## TDD Loop\nRED: Tests in `#[cfg(test)] mod tests`:\n- `test_queue_and_get` — queue entry, get returns it\n- `test_requeue_resets_backoff` — queue, error, re-queue -> attempt_count=0\n- `test_backoff_respected` — entry with future next_attempt_at not returned\n- `test_complete_removes` — complete_fetch removes entry\n- `test_error_increments_attempts` — error -> attempt_count=1, next_attempt_at set\nGREEN: Implement all functions\nVERIFY: `cargo test discussion_queue`\n\n## Edge Cases\n- Queue same (project_id, noteable_type, noteable_iid) twice: ON CONFLICT resets state\n- NoteableType must match DB CHECK constraint exactly (\"Issue\", \"MergeRequest\" — capitalized)\n- Empty queue: get_pending_fetches returns empty Vec","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:27:09.505548Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:31:35.496454Z","closed_at":"2026-01-30T17:31:35.496405Z","close_reason":"Implemented discussion_queue with queue/get/complete/record_error + 6 tests","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1je","depends_on_id":"bd-hrs","type":"blocks","created_at":"2026-01-30T15:29:35.034753Z","created_by":"tayloreernisse"},{"issue_id":"bd-1je","depends_on_id":"bd-mem","type":"blocks","created_at":"2026-01-30T15:29:35.071573Z","created_by":"tayloreernisse"}]} -{"id":"bd-1ji","title":"Parse system notes for cross-reference patterns","description":"## Background\nSystem notes contain cross-reference patterns like 'mentioned in !{iid}', 'closed by !{iid}', etc. This is best-effort, English-only extraction that supplements the structured API data from bd-3ia and bd-8t4. 
Runs as a local post-processing step (no API calls).\n\n## Approach\nCreate src/core/note_parser.rs:\n\n```rust\nuse regex::Regex;\nuse lazy_static::lazy_static;\n\n/// A parsed cross-reference from a system note.\npub struct ParsedCrossRef {\n    pub reference_type: String, // \"mentioned\" | \"closes\"\n    pub target_entity_type: String, // \"issue\" | \"merge_request\" \n    pub target_iid: i64,\n    pub target_project_path: Option<String>, // None = same project\n}\n\nlazy_static! {\n    static ref MENTIONED_RE: Regex = Regex::new(\n        r\"mentioned in (?:(?P<project>[\\w\\-]+/[\\w\\-]+))?(?P<sigil>[#!])(?P<iid>\\d+)\"\n    ).unwrap();\n    static ref CLOSED_BY_RE: Regex = Regex::new(\n        r\"closed by (?:(?P<project>[\\w\\-]+/[\\w\\-]+))?(?P<sigil>[#!])(?P<iid>\\d+)\"\n    ).unwrap();\n}\n\n/// Parse a system note body for cross-references.\npub fn parse_cross_refs(body: &str) -> Vec<ParsedCrossRef>\n\n/// Extract cross-references from all system notes and insert into entity_references.\n/// Queries notes WHERE is_system = 1, parses body text, resolves to entity_references.\npub fn extract_refs_from_system_notes(\n    conn: &Connection,\n    project_id: i64,\n) -> Result<ExtractResult>\n\npub struct ExtractResult {\n    pub inserted: usize,\n    pub skipped_unresolvable: usize,\n    pub parse_failures: usize, // logged at debug level\n}\n```\n\nSigil mapping: `#` = issue, `!` = merge_request\n\nResolution logic:\n1. If target_project_path is None (same project): look up entity by iid in local DB → set target_entity_id\n2. If target_project_path is Some: check if project is synced locally\n   - If yes: resolve to local entity id\n   - If no: store as unresolved (target_entity_id=NULL, target_project_path=path, target_entity_iid=iid)\n\nInsert with source_method='system_note_parse', INSERT OR IGNORE for dedup.\n\nCall after drain_dependent_queue and extract_refs_from_state_events in the sync pipeline.\n\n## Acceptance Criteria\n- [ ] 'mentioned in !123' → mentioned ref, target=MR iid 123\n- [ ] 'mentioned in #456' → mentioned ref, target=issue iid 456\n- [ ] 'mentioned in group/project!789' → cross-project mentioned ref\n- [ ] 'closed by !123' → closes ref\n- [ ] Cross-project refs stored as unresolved when target project not synced\n- [ ] source_method = 'system_note_parse'\n- [ ] Parse failures logged at debug level (not errors)\n- [ ] Idempotent (INSERT OR IGNORE)\n- [ ] Only processes is_system=1 notes\n\n## Files\n- src/core/note_parser.rs (new)\n- src/core/mod.rs (add `pub mod note_parser;`)\n- src/cli/commands/sync.rs (call after other ref extraction steps)\n\n## TDD Loop\nRED: tests/note_parser_tests.rs:\n- `test_parse_mentioned_in_mr` - \"mentioned in !567\" → ParsedCrossRef { mentioned, merge_request, 567 }\n- `test_parse_mentioned_in_issue` - \"mentioned in #234\" → ParsedCrossRef { mentioned, issue, 234 }\n- `test_parse_mentioned_cross_project` - \"mentioned in group/repo!789\" → with project path\n- `test_parse_closed_by_mr` - \"closed by !567\" → ParsedCrossRef { closes, merge_request, 567 }\n- `test_parse_multiple_refs` - note with two mentions → two refs\n- `test_parse_no_refs` - \"Updated the description\" → empty vec\n- `test_extract_refs_from_system_notes_integration` - seed DB with system notes, verify entity_references created\n\nGREEN: Implement regex patterns and extraction logic\n\nVERIFY: `cargo test note_parser -- --nocapture`\n\n## Edge Cases\n- Non-English GitLab instances: \"ajouté l'étiquette ~bug\" won't match — this is accepted limitation, logged at debug\n- Multi-level group paths: \"mentioned in top/sub/project#123\" — regex needs to handle arbitrary depth 
([\\w\\-]+(?:/[\\w\\-]+)+)\n- Note body may contain markdown links that look like refs: \"[#123](url)\" — the regex should handle this correctly since the prefix \"mentioned in\" is required\n- Same ref mentioned multiple times in same note — dedup via INSERT OR IGNORE\n- Note may reference itself (e.g., system note on issue #123 says \"mentioned in #123\") — technically valid, store it","status":"open","priority":3,"issue_type":"task","created_at":"2026-02-02T21:32:33.663304Z","created_by":"tayloreernisse","updated_at":"2026-02-02T22:41:50.672968Z","compaction_level":0,"original_size":0,"labels":["gate-2","parsing","phase-b"],"dependencies":[{"issue_id":"bd-1ji","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-02-02T21:32:33.665218Z","created_by":"tayloreernisse"},{"issue_id":"bd-1ji","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-02-02T22:41:50.672947Z","created_by":"tayloreernisse"}]} +{"id":"bd-1ji","title":"Parse system notes for cross-reference patterns","description":"## Background\nSystem notes contain cross-reference patterns like 'mentioned in !{iid}', 'closed by !{iid}', etc. This is best-effort, English-only extraction that supplements the structured API data from bd-3ia and bd-8t4. Runs as a local post-processing step (no API calls).\n\n## Approach\nCreate src/core/note_parser.rs:\n\n```rust\nuse regex::Regex;\nuse lazy_static::lazy_static;\n\n/// A parsed cross-reference from a system note.\npub struct ParsedCrossRef {\n    pub reference_type: String, // \"mentioned\" | \"closes\"\n    pub target_entity_type: String, // \"issue\" | \"merge_request\" \n    pub target_iid: i64,\n    pub target_project_path: Option<String>, // None = same project\n}\n\nlazy_static! {\n    static ref MENTIONED_RE: Regex = Regex::new(\n        r\"mentioned in (?:(?P<project>[\\w\\-]+/[\\w\\-]+))?(?P<sigil>[#!])(?P<iid>\\d+)\"\n    ).unwrap();\n    static ref CLOSED_BY_RE: Regex = Regex::new(\n        r\"closed by (?:(?P<project>[\\w\\-]+/[\\w\\-]+))?(?P<sigil>[#!])(?P<iid>\\d+)\"\n    ).unwrap();\n}\n\n/// Parse a system note body for cross-references.\npub fn parse_cross_refs(body: &str) -> Vec<ParsedCrossRef>\n\n/// Extract cross-references from all system notes and insert into entity_references.\n/// Queries notes WHERE is_system = 1, parses body text, resolves to entity_references.\npub fn extract_refs_from_system_notes(\n    conn: &Connection,\n    project_id: i64,\n) -> Result<ExtractResult>\n\npub struct ExtractResult {\n    pub inserted: usize,\n    pub skipped_unresolvable: usize,\n    pub parse_failures: usize, // logged at debug level\n}\n```\n\nSigil mapping: `#` = issue, `!` = merge_request\n\nResolution logic:\n1. If target_project_path is None (same project): look up entity by iid in local DB → set target_entity_id\n2. 
If target_project_path is Some: check if project is synced locally\n - If yes: resolve to local entity id\n - If no: store as unresolved (target_entity_id=NULL, target_project_path=path, target_entity_iid=iid)\n\nInsert with source_method='system_note_parse', INSERT OR IGNORE for dedup.\n\nCall after drain_dependent_queue and extract_refs_from_state_events in the sync pipeline.\n\n## Acceptance Criteria\n- [ ] 'mentioned in !123' → mentioned ref, target=MR iid 123\n- [ ] 'mentioned in #456' → mentioned ref, target=issue iid 456\n- [ ] 'mentioned in group/project!789' → cross-project mentioned ref\n- [ ] 'closed by !123' → closes ref\n- [ ] Cross-project refs stored as unresolved when target project not synced\n- [ ] source_method = 'system_note_parse'\n- [ ] Parse failures logged at debug level (not errors)\n- [ ] Idempotent (INSERT OR IGNORE)\n- [ ] Only processes is_system=1 notes\n\n## Files\n- src/core/note_parser.rs (new)\n- src/core/mod.rs (add `pub mod note_parser;`)\n- src/cli/commands/sync.rs (call after other ref extraction steps)\n\n## TDD Loop\nRED: tests/note_parser_tests.rs:\n- `test_parse_mentioned_in_mr` - \"mentioned in !567\" → ParsedCrossRef { mentioned, merge_request, 567 }\n- `test_parse_mentioned_in_issue` - \"mentioned in #234\" → ParsedCrossRef { mentioned, issue, 234 }\n- `test_parse_mentioned_cross_project` - \"mentioned in group/repo!789\" → with project path\n- `test_parse_closed_by_mr` - \"closed by !567\" → ParsedCrossRef { closes, merge_request, 567 }\n- `test_parse_multiple_refs` - note with two mentions → two refs\n- `test_parse_no_refs` - \"Updated the description\" → empty vec\n- `test_extract_refs_from_system_notes_integration` - seed DB with system notes, verify entity_references created\n\nGREEN: Implement regex patterns and extraction logic\n\nVERIFY: `cargo test note_parser -- --nocapture`\n\n## Edge Cases\n- Non-English GitLab instances: \"ajouté l'étiquette ~bug\" won't match — this is accepted limitation, logged at debug\n- Multi-level group paths: \"mentioned in top/sub/project#123\" — regex needs to handle arbitrary depth ([\\w\\-]+(?:/[\\w\\-]+)+)\n- Note body may contain markdown links that look like refs: \"[#123](url)\" — the regex should handle this correctly since the prefix \"mentioned in\" is required\n- Same ref mentioned multiple times in same note — dedup via INSERT OR IGNORE\n- Note may reference itself (e.g., system note on issue #123 says \"mentioned in #123\") — technically valid, store it","status":"closed","priority":3,"issue_type":"task","created_at":"2026-02-02T21:32:33.663304Z","created_by":"tayloreernisse","updated_at":"2026-02-04T20:13:33.398960Z","closed_at":"2026-02-04T20:13:33.398868Z","close_reason":"Completed: parse_cross_refs regex parser, extract_refs_from_system_notes DB function, wired into orchestrator. 17 tests passing.","compaction_level":0,"original_size":0,"labels":["gate-2","parsing","phase-b"],"dependencies":[{"issue_id":"bd-1ji","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-02-02T21:32:33.665218Z","created_by":"tayloreernisse"},{"issue_id":"bd-1ji","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-02-02T22:41:50.672947Z","created_by":"tayloreernisse"}]} {"id":"bd-1k1","title":"Implement FTS5 search function and query sanitization","description":"## Background\nFTS5 search is the core lexical retrieval engine. It wraps SQLite's FTS5 with safe query parsing that prevents user input from causing SQL syntax errors, while preserving useful features like prefix search for type-ahead. 
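As a rough illustration of the quoting idea (a hand-written sketch, not the final to_fts_query, which must also preserve a trailing * on alphanumeric tokens):\n\n```rust\n/// Quote each whitespace-separated token so FTS5 treats it literally:\n/// a leading '-' is no longer parsed as the NOT operator, and internal\n/// quotes are doubled, FTS5's escape for a quote inside a quoted string.\nfn quote_tokens(raw: &str) -> String {\n    raw.split_whitespace()\n        .map(|t| format!(\"\\\"{}\\\"\", t.replace('\"', \"\\\"\\\"\")))\n        .collect::<Vec<_>>()\n        .join(\" \")\n}\n```\n\n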
The search function returns ranked results with BM25 scores and contextual snippets. This module is the Gate A search backbone and also provides fallback search when Ollama is unavailable in Gate B.\n\n## Approach\nCreate `src/search/` module with `mod.rs` and `fts.rs` per PRD Section 3.1-3.2.\n\n**src/search/mod.rs:**\n```rust\nmod fts;\nmod filters;\n// Later beads add: mod vector; mod hybrid; mod rrf;\npub use fts::{search_fts, to_fts_query, FtsResult, FtsQueryMode, generate_fallback_snippet, get_result_snippet};\n```\n\n**src/search/fts.rs — key functions:**\n\n1. `to_fts_query(raw: &str, mode: FtsQueryMode) -> String`\n   - Safe mode: wrap each token in quotes, escape internal quotes, preserve trailing * on alphanumeric tokens\n   - Raw mode: pass through unchanged\n\n2. `search_fts(conn: &Connection, query: &str, limit: usize, mode: FtsQueryMode) -> Result<Vec<FtsResult>>`\n   - Uses `bm25(documents_fts)` for ranking\n   - Uses `snippet(documents_fts, 1, '<b>', '</b>', '...', 64)` for context\n   - Column index 1 = content_text (0=title)\n\n3. `generate_fallback_snippet(content_text: &str, max_chars: usize) -> String`\n   - For semantic-only results without FTS snippets\n   - Uses `truncate_utf8()` for safe byte boundaries\n\n4. `truncate_utf8(s: &str, max_bytes: usize) -> &str`\n   - Walks backward from max_bytes to find nearest char boundary\n\n5. `get_result_snippet(fts_snippet: Option<&str>, content_text: &str) -> String`\n   - Prefers FTS snippet, falls back to truncated content\n\nUpdate `src/lib.rs`: add `pub mod search;`\n\n## Acceptance Criteria\n- [ ] Porter stemming works: search \"searching\" matches document containing \"search\"\n- [ ] Prefix search works: `auth*` matches \"authentication\"\n- [ ] Empty query returns empty Vec (no error)\n- [ ] Special characters don't cause FTS5 errors: `-`, `\"`, `:`, `*`\n- [ ] Query `\"-DWITH_SSL\"` returns results (dash not treated as NOT operator)\n- [ ] Query `C++` returns results (special chars preserved in quotes)\n- [ ] Safe mode preserves trailing `*` on alphanumeric tokens: `auth*` -> `\"auth\"*`\n- [ ] Raw mode passes query unchanged\n- [ ] BM25 scores returned (lower = better match)\n- [ ] Snippets contain `<b>` tags around matches\n- [ ] `generate_fallback_snippet` truncates at word boundary, appends \"...\"\n- [ ] `truncate_utf8` never panics on multi-byte codepoints\n- [ ] `cargo test fts` passes\n\n## Files\n- `src/search/mod.rs` — new file (module root)\n- `src/search/fts.rs` — new file (FTS5 search + query sanitization)\n- `src/lib.rs` — add `pub mod search;`\n\n## TDD Loop\nRED: Tests in `fts.rs` `#[cfg(test)] mod tests`:\n- `test_safe_query_basic` — \"auth error\" -> `\"auth\" \"error\"`\n- `test_safe_query_prefix` — \"auth*\" -> `\"auth\"*`\n- `test_safe_query_special_chars` — \"C++\" -> `\"C++\"`\n- `test_safe_query_dash` — \"-DWITH_SSL\" -> `\"-DWITH_SSL\"`\n- `test_safe_query_quotes` — `he said \"hello\"` -> escaped\n- `test_raw_mode_passthrough` — raw query unchanged\n- `test_empty_query` — returns empty vec\n- `test_truncate_utf8_emoji` — truncate mid-emoji walks back\n- `test_fallback_snippet_word_boundary` — truncates at space\nGREEN: Implement to_fts_query, search_fts, helpers\nVERIFY: `cargo test fts`\n\n## Edge Cases\n- Query with only whitespace: treated as empty, returns empty\n- Query with only special characters: quoted, may return no results (not an error)\n- Very long query (1000+ chars): works but may be slow (no explicit limit)\n- FTS5 snippet returns empty string: fallback to truncated content_text\n- Non-alphanumeric prefix: `C++*` — 
NOT treated as prefix (special chars present)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:13.005179Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:23:35.204290Z","closed_at":"2026-01-30T17:23:35.204106Z","close_reason":"Completed: to_fts_query (safe/raw modes), search_fts with BM25+snippets, generate_fallback_snippet, get_result_snippet, truncate_utf8 reuse, 13 tests pass","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1k1","depends_on_id":"bd-221","type":"blocks","created_at":"2026-01-30T15:29:24.374108Z","created_by":"tayloreernisse"}]} {"id":"bd-1k4","title":"OBSERV: Add get_log_dir() helper to paths module","description":"## Background\nA centralized helper for the log directory path ensures consistent XDG compliance and directory creation. The existing get_data_dir() (src/core/paths.rs:40-43) returns ~/.local/share/lore/. We add a sibling that appends /logs/.\n\n## Approach\nAdd to src/core/paths.rs, after get_db_path() (around line 53):\n\n```rust\n/// Get the log directory path. Creates the directory if it doesn't exist.\npub fn get_log_dir(config_override: Option<&str>) -> PathBuf {\n let dir = if let Some(path) = config_override {\n PathBuf::from(path)\n } else {\n get_data_dir().join(\"logs\")\n };\n std::fs::create_dir_all(&dir).ok();\n dir\n}\n```\n\nThe config_override comes from LoggingConfig.log_dir (bd-17n). When None, uses XDG default.\n\nExisting pattern to follow (src/core/paths.rs:40-53):\n- get_data_dir() -> PathBuf (returns ~/.local/share/lore/)\n- get_db_path(config_override: Option<&str>) -> PathBuf\n\n## Acceptance Criteria\n- [ ] get_log_dir(None) returns ~/.local/share/lore/logs/\n- [ ] get_log_dir(Some(\"/tmp/custom\")) returns /tmp/custom\n- [ ] Directory is created if it doesn't exist\n- [ ] Function is pub and accessible from other modules\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/core/paths.rs (add get_log_dir function after line ~53)\n\n## TDD Loop\nRED: test_get_log_dir_default, test_get_log_dir_override (use tempdir)\nGREEN: Add get_log_dir() function\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- create_dir_all failure (e.g., permissions): .ok() swallows error silently. This matches get_db_path() which also doesn't create dirs. Consider: should we propagate the error? The subscriber init will fail anyway if the dir doesn't exist, providing a clear error.\n- Trailing slash: PathBuf handles this correctly","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T15:53:55.525165Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:10:22.907812Z","closed_at":"2026-02-04T17:10:22.907763Z","close_reason":"Added get_log_dir() helper mirroring get_db_path/get_backup_dir pattern","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-1k4","depends_on_id":"bd-2nx","type":"parent-child","created_at":"2026-02-04T15:53:55.526345Z","created_by":"tayloreernisse"}]} {"id":"bd-1kh","title":"[CP0] Raw payload handling - compression and deduplication","description":"## Background\n\nRaw payload storage allows replaying API responses for debugging and audit. Compression reduces storage for large payloads. 
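(Because the hash is computed over the pre-compression JSON bytes, duplicate detection is independent of the compression setting: the same payload stored once with compress=true and once with compress=false still dedupes to a single row.) 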
SHA-256 deduplication prevents storing identical payloads multiple times (important for frequently polled resources that haven't changed).\n\nReference: docs/prd/checkpoint-0.md section \"Raw Payload Handling\"\n\n## Approach\n\n**src/core/payloads.ts:**\n```typescript\nimport { createHash } from 'node:crypto';\nimport { gzipSync, gunzipSync } from 'node:zlib';\nimport Database from 'better-sqlite3';\nimport { nowMs } from './time';\n\ninterface StorePayloadOptions {\n projectId: number | null;\n resourceType: string; // 'project' | 'issue' | 'mr' | 'note' | 'discussion'\n gitlabId: string; // TEXT because discussion IDs are strings\n payload: unknown; // JSON-serializable object\n compress: boolean; // from config.storage.compressRawPayloads\n}\n\nexport function storePayload(db: Database.Database, options: StorePayloadOptions): number | null {\n // 1. JSON.stringify the payload\n // 2. SHA-256 hash the JSON bytes\n // 3. Check for duplicate by (project_id, resource_type, gitlab_id, payload_hash)\n // 4. If duplicate, return existing ID\n // 5. If compress=true, gzip the JSON bytes\n // 6. INSERT with content_encoding='gzip' or 'identity'\n // 7. Return lastInsertRowid\n}\n\nexport function readPayload(db: Database.Database, id: number): unknown {\n // 1. SELECT content_encoding, payload FROM raw_payloads WHERE id = ?\n // 2. If gzip, decompress\n // 3. JSON.parse and return\n}\n```\n\n## Acceptance Criteria\n\n- [ ] storePayload() with compress=true stores gzip-encoded payload\n- [ ] storePayload() with compress=false stores identity-encoded payload\n- [ ] Duplicate payload (same hash) returns existing row ID, not new row\n- [ ] readPayload() correctly decompresses gzip payloads\n- [ ] readPayload() returns null for non-existent ID\n- [ ] SHA-256 hash computed from pre-compression JSON bytes\n- [ ] Large payloads (100KB+) compress to ~10-20% of original size\n\n## Files\n\nCREATE:\n- src/core/payloads.ts\n- tests/unit/payloads.test.ts\n\n## TDD Loop\n\nRED:\n```typescript\n// tests/unit/payloads.test.ts\ndescribe('Payload Storage', () => {\n describe('storePayload', () => {\n it('stores uncompressed payload with identity encoding')\n it('stores compressed payload with gzip encoding')\n it('deduplicates identical payloads by hash')\n it('stores different payloads for same gitlab_id')\n })\n\n describe('readPayload', () => {\n it('reads uncompressed payload')\n it('reads and decompresses gzip payload')\n it('returns null for non-existent id')\n })\n})\n```\n\nGREEN: Implement storePayload() and readPayload()\n\nVERIFY: `npm run test -- tests/unit/payloads.test.ts`\n\n## Edge Cases\n\n- gitlabId is TEXT not INTEGER - discussion IDs are UUIDs\n- Compression ratio varies - some JSON compresses better than others\n- null projectId valid for global resources (like user profile)\n- Hash collision extremely unlikely with SHA-256 but unique index enforces","status":"closed","priority":1,"issue_type":"task","created_at":"2026-01-24T16:09:50.189494Z","created_by":"tayloreernisse","updated_at":"2026-01-25T03:19:12.854771Z","closed_at":"2026-01-25T03:19:12.854372Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1kh","depends_on_id":"bd-3ng","type":"blocks","created_at":"2026-01-24T16:13:09.055338Z","created_by":"tayloreernisse"}]} @@ -117,7 +117,7 @@ {"id":"bd-3eu","title":"Implement hybrid search with adaptive recall","description":"## Background\nHybrid search is the top-level search orchestrator that combines FTS5 lexical results with sqlite-vec 
semantic results via RRF ranking. It supports three modes (Lexical, Semantic, Hybrid) and implements adaptive recall (wider initial fetch when filters are applied) and graceful degradation (falls back to FTS when Ollama is unavailable). All modes use RRF for consistent --explain output.\n\n## Approach\nCreate `src/search/hybrid.rs` per PRD Section 5.3.\n\n**Key types:**\n```rust\n#[derive(Debug, Clone, Copy, PartialEq, Eq)]\npub enum SearchMode {\n    Hybrid,   // Vector + FTS with RRF\n    Lexical,  // FTS only\n    Semantic, // Vector only\n}\n\nimpl SearchMode {\n    pub fn from_str(s: &str) -> Option<Self> {\n        match s.to_lowercase().as_str() {\n            \"hybrid\" => Some(Self::Hybrid),\n            \"lexical\" | \"fts\" => Some(Self::Lexical),\n            \"semantic\" | \"vector\" => Some(Self::Semantic),\n            _ => None,\n        }\n    }\n\n    pub fn as_str(&self) -> &'static str {\n        match self {\n            Self::Hybrid => \"hybrid\",\n            Self::Lexical => \"lexical\",\n            Self::Semantic => \"semantic\",\n        }\n    }\n}\n\npub struct HybridResult {\n    pub document_id: i64,\n    pub score: f64, // Normalized RRF score (0-1)\n    pub vector_rank: Option<usize>,\n    pub fts_rank: Option<usize>,\n    pub rrf_score: f64, // Raw RRF score\n}\n```\n\n**Core function (ASYNC, PRD-exact signature):**\n```rust\npub async fn search_hybrid(\n    conn: &Connection,\n    client: Option<&OllamaClient>, // None if Ollama unavailable\n    ollama_base_url: Option<&str>, // For actionable error messages\n    query: &str,\n    mode: SearchMode,\n    filters: &SearchFilters,\n    fts_mode: FtsQueryMode,\n) -> Result<(Vec<HybridResult>, Vec<String>)>\n```\n\n**IMPORTANT — client is `Option<&OllamaClient>`:** This enables graceful degradation. When Ollama is unavailable, the caller passes `None` and hybrid mode falls back to FTS-only with a warning. The `ollama_base_url` is separate so error messages can include it even when client is None.\n\n**Adaptive recall constants (PRD Section 5.3):**\n```rust\nconst BASE_RECALL_MIN: usize = 50;\nconst FILTERED_RECALL_MIN: usize = 200;\nconst RECALL_CAP: usize = 1500;\n```\n\n**Recall formula:**\n```rust\nlet requested = filters.clamp_limit();\nlet top_k = if filters.has_any_filter() {\n    (requested * 50).max(FILTERED_RECALL_MIN).min(RECALL_CAP)\n} else {\n    (requested * 10).max(BASE_RECALL_MIN).min(RECALL_CAP)\n};\n```\n\n**Mode behavior:**\n- **Lexical:** FTS only -> rank_rrf with empty vector list (single-list RRF)\n- **Semantic:** Vector only -> requires client (error if None) -> rank_rrf with empty FTS list\n- **Hybrid:** Both FTS + vector -> rank_rrf with both lists\n- **Hybrid with client=None:** Graceful degradation to Lexical with warning, NOT error\n\n**Graceful degradation logic:**\n```rust\nSearchMode::Hybrid => {\n    let fts_results = search_fts(conn, query, top_k, fts_mode)?;\n    let fts_tuples: Vec<_> = fts_results.iter().map(|r| (r.document_id, r.rank)).collect();\n\n    match client {\n        Some(client) => {\n            let query_embedding = client.embed_batch(vec![query.to_string()]).await?;\n            let embedding = query_embedding.into_iter().next().unwrap();\n            let vec_results = search_vector(conn, &embedding, top_k)?;\n            let vec_tuples: Vec<_> = vec_results.iter().map(|r| (r.document_id, r.distance)).collect();\n            let ranked = rank_rrf(&vec_tuples, &fts_tuples);\n            // ... map to HybridResult\n            Ok((results, warnings))\n        }\n        None => {\n            warnings.push(\"Ollama unavailable, falling back to lexical search\".into());\n            let ranked = rank_rrf(&[], &fts_tuples);\n            // ... 
map to HybridResult\n            Ok((results, warnings))\n        }\n    }\n}\n```\n\n## Acceptance Criteria\n- [ ] Function is `async` (per PRD — Ollama client methods are async)\n- [ ] Signature takes `client: Option<&OllamaClient>` (not required)\n- [ ] Signature takes `ollama_base_url: Option<&str>` for actionable error messages\n- [ ] Returns `(Vec<HybridResult>, Vec<String>)` — results + warnings\n- [ ] Lexical mode: FTS-only results ranked via RRF (single list)\n- [ ] Semantic mode: vector-only results ranked via RRF; error if client is None\n- [ ] Hybrid mode: both FTS + vector results merged via RRF\n- [ ] Graceful degradation: client=None in Hybrid falls back to FTS with warning (not error)\n- [ ] Adaptive recall: unfiltered max(50, limit*10), filtered max(200, limit*50), capped 1500\n- [ ] All modes produce consistent --explain output (vector_rank, fts_rank, rrf_score)\n- [ ] SearchMode::from_str accepts aliases: \"fts\" for Lexical, \"vector\" for Semantic\n- [ ] `cargo build` succeeds\n\n## Files\n- `src/search/hybrid.rs` — new file\n- `src/search/mod.rs` — add `pub use hybrid::{search_hybrid, HybridResult, SearchMode};`\n\n## TDD Loop\nRED: Tests (some integration, some unit):\n- `test_lexical_mode` — FTS results only\n- `test_semantic_mode` — vector results only\n- `test_hybrid_mode` — both lists merged\n- `test_graceful_degradation` — None client falls back to FTS with warning in warnings vec\n- `test_adaptive_recall_unfiltered` — recall = max(50, limit*10)\n- `test_adaptive_recall_filtered` — recall = max(200, limit*50)\n- `test_recall_cap` — never exceeds 1500\n- `test_search_mode_from_str` — \"hybrid\", \"lexical\", \"fts\", \"semantic\", \"vector\", invalid\nGREEN: Implement search_hybrid\nVERIFY: `cargo test hybrid`\n\n## Edge Cases\n- Both FTS and vector return zero results: empty output (not error)\n- FTS returns results but vector returns empty: RRF still works (single-list)\n- Very high limit (100) with filters: recall = min(5000, 1500) = 1500\n- Semantic mode with client=None: error (OllamaUnavailable), not degradation\n- Semantic mode with 0% coverage: return LoreError::EmbeddingsNotBuilt","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:50.343002Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:56:16.631748Z","closed_at":"2026-01-30T17:56:16.631682Z","close_reason":"Implemented hybrid search with 3 modes (lexical/semantic/hybrid), graceful degradation when Ollama unavailable, adaptive recall (50-1500), RRF fusion. 6 tests pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3eu","depends_on_id":"bd-1k1","type":"blocks","created_at":"2026-01-30T15:29:24.913458Z","created_by":"tayloreernisse"},{"issue_id":"bd-3eu","depends_on_id":"bd-335","type":"blocks","created_at":"2026-01-30T15:29:25.025502Z","created_by":"tayloreernisse"},{"issue_id":"bd-3eu","depends_on_id":"bd-3ez","type":"blocks","created_at":"2026-01-30T15:29:24.987809Z","created_by":"tayloreernisse"},{"issue_id":"bd-3eu","depends_on_id":"bd-bjo","type":"blocks","created_at":"2026-01-30T15:29:24.950761Z","created_by":"tayloreernisse"}]} {"id":"bd-3ez","title":"Implement RRF ranking","description":"## Background\nReciprocal Rank Fusion (RRF) combines results from multiple retrieval systems (FTS5 lexical + sqlite-vec semantic) into a single ranked list without requiring score normalization. Documents appearing in both lists rank higher than single-list documents. 
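For a concrete sense of scale: with the k = 60 constant used here, a document ranked first in both lists scores 1/61 + 1/61 ≈ 0.0328, while a document ranked first in only one list scores 1/61 ≈ 0.0164, so dual-list membership roughly doubles the score at the top ranks. 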
This is the core ranking algorithm for hybrid search in Gate B.\n\n## Approach\nCreate `src/search/rrf.rs` per PRD Section 5.2.\n\n```rust\nuse std::collections::HashMap;\n\nconst RRF_K: f64 = 60.0;\n\npub struct RrfResult {\n    pub document_id: i64,\n    pub rrf_score: f64,        // Raw RRF score\n    pub normalized_score: f64, // Normalized to 0-1 (rrf_score / max)\n    pub vector_rank: Option<usize>, // 1-indexed rank in vector list\n    pub fts_rank: Option<usize>,    // 1-indexed rank in FTS list\n}\n\n/// Input: tuples of (document_id, score/distance) — already sorted by retriever.\n/// Ranks are 1-indexed (first result = rank 1).\n/// Score = sum of 1/(k + rank) for each list containing the document.\npub fn rank_rrf(\n    vector_results: &[(i64, f64)], // (doc_id, distance)\n    fts_results: &[(i64, f64)],    // (doc_id, bm25_score)\n) -> Vec<RrfResult>\n```\n\n**Algorithm (per PRD):**\n1. Build HashMap<i64, RrfResult> keyed by document_id\n2. For each vector result at position i: score += 1/(K + (i+1)), record vector_rank = i+1 (**1-indexed**)\n3. For each FTS result at position i: score += 1/(K + (i+1)), record fts_rank = i+1 (**1-indexed**)\n4. Sort descending by rrf_score\n5. Normalize: each result.normalized_score = result.rrf_score / max_score (best = 1.0)\n\n**Key PRD details:**\n- Ranks are **1-indexed** (rank 1 = best, not rank 0)\n- Input is `&[(i64, f64)]` tuples, NOT custom structs\n- Output has both `rrf_score` (raw) and `normalized_score` (0-1)\n\n## Acceptance Criteria\n- [ ] Documents in both lists score higher than single-list documents\n- [ ] Single-list documents are included (not dropped)\n- [ ] Ranks are 1-indexed (first element = rank 1)\n- [ ] Raw RRF score available in rrf_score field\n- [ ] Normalized score: best = 1.0, all in [0, 1]\n- [ ] Results sorted descending by rrf_score\n- [ ] vector_rank and fts_rank tracked per result for --explain\n- [ ] Empty input lists handled (return empty)\n- [ ] One empty list + one non-empty returns results from non-empty list\n\n## Files\n- `src/search/rrf.rs` — new file\n- `src/search/mod.rs` — add `mod rrf; pub use rrf::{rank_rrf, RrfResult};`\n\n## TDD Loop\nRED: Tests in `#[cfg(test)] mod tests`:\n- `test_dual_list_ranks_higher` — doc in both lists scores > doc in one list\n- `test_single_list_included` — FTS-only and vector-only docs appear\n- `test_normalization` — best score is 1.0, all in [0, 1]\n- `test_empty_inputs` — empty returns empty\n- `test_ranks_are_1_indexed` — verify vector_rank/fts_rank start at 1\n- `test_raw_and_normalized_scores` — both fields populated correctly\nGREEN: Implement rank_rrf()\nVERIFY: `cargo test rrf`\n\n## Edge Cases\n- Duplicate document_id within same list: shouldn't happen, use first occurrence\n- Single result in one list, zero in other: normalized_score = 1.0\n- Very large input lists: HashMap handles efficiently","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:50.309012Z","created_by":"tayloreernisse","updated_at":"2026-01-30T16:53:04.128560Z","closed_at":"2026-01-30T16:53:04.128498Z","close_reason":"Completed: RRF ranking with 1-indexed ranks, raw+normalized scores, vector_rank/fts_rank provenance, 7 tests pass","compaction_level":0,"original_size":0} {"id":"bd-3hy","title":"[CP1] Test fixtures for mocked GitLab responses","description":"Create mock response files for integration tests using wiremock.\n\n## Fixtures to Create\n\n### tests/fixtures/gitlab_issue.json\nSingle issue with labels:\n- id, iid, project_id, title, description, state\n- author object\n- labels array 
(string names)\n- timestamps\n- web_url\n\n### tests/fixtures/gitlab_issues_page.json\nArray of issues simulating paginated response:\n- 3-5 issues with varying states\n- Mix of labels\n\n### tests/fixtures/gitlab_discussion.json\nSingle discussion:\n- id (string)\n- individual_note: false\n- notes array with 2+ notes\n- Include one system note\n\n### tests/fixtures/gitlab_discussions_page.json\nArray of discussions:\n- Mix of individual_note true/false\n- Include resolvable/resolved examples\n\n## Edge Cases to Cover\n- Issue with no labels (empty array)\n- Issue with labels_details (ignored in CP1)\n- Discussion with individual_note=true (single note)\n- System notes with system=true\n- Resolvable notes\n\nFiles: tests/fixtures/gitlab_issue.json, gitlab_issues_page.json, gitlab_discussion.json, gitlab_discussions_page.json\nDone when: wiremock handlers can use fixtures for deterministic tests","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T16:59:01.206436Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.991367Z","deleted_at":"2026-01-25T17:02:01.991362Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} -{"id":"bd-3ia","title":"Fetch closes_issues API and populate entity_references","description":"## Background\nGET /projects/:id/merge_requests/:iid/closes_issues returns issues that will close when MR merges. This is the most reliable source for MR→issue relationships. Uses the generic dependent fetch queue (job_type = 'mr_closes_issues').\n\n## Approach\n\n**1. Add API endpoint to GitLab client (src/gitlab/client.rs):**\n```rust\n/// Fetch issues that will be closed when this MR merges.\npub async fn fetch_mr_closes_issues(\n    &self, \n    project_id: i64, \n    iid: i64\n) -> Result<Vec<GitLabIssueRef>>\n```\n\nNew type in src/gitlab/types.rs:\n```rust\n#[derive(Debug, Clone, Deserialize)]\npub struct GitLabIssueRef {\n    pub id: i64,\n    pub iid: i64,\n    pub project_id: i64,\n    pub title: String,\n    pub state: String,\n    pub web_url: String,\n}\n```\n\nURL: `GET /api/v4/projects/{project_id}/merge_requests/{iid}/closes_issues?per_page=100`\n\n**2. Enqueue jobs during MR ingestion:**\nIn orchestrator.rs, after MR upsert:\n```rust\nenqueue_job(conn, project_id, \"merge_request\", iid, local_id, \"mr_closes_issues\", None)?;\n```\n\nThis is always enqueued (not gated by a config flag) because cross-reference data is fundamental to all temporal queries.\n\n**3. Process jobs in drain step:**\nIn the drain dispatcher (from bd-1ep), handle \"mr_closes_issues\" job_type:\n```rust\nlet closes_issues = client.fetch_mr_closes_issues(gitlab_project_id, job.entity_iid).await?;\nfor issue_ref in &closes_issues {\n    let target_id = resolve_issue_local_id(conn, project_id, issue_ref.iid);\n    insert_entity_reference(conn, EntityReference {\n        source_entity_type: \"merge_request\",\n        source_entity_id: job.entity_local_id,\n        target_entity_type: \"issue\",\n        target_entity_id: target_id, // Some(id) or None for cross-project\n        target_project_path: if target_id.is_none() { Some(resolve_project_path(issue_ref.project_id)) } else { None },\n        target_entity_iid: if target_id.is_none() { Some(issue_ref.iid) } else { None },\n        reference_type: \"closes\",\n        source_method: \"api_closes_issues\",\n        created_at: None,\n    })?;\n}\n```\n\n**4. 
Insert helper for entity_references:**\nAdd to src/core/references.rs:\n```rust\npub fn insert_entity_reference(conn: &Connection, ref_: &EntityReference) -> Result<bool>\n// INSERT OR IGNORE, returns true if inserted\n```\n\n## Acceptance Criteria\n- [ ] closes_issues API called for all MRs during sync\n- [ ] Entity references created with reference_type='closes', source_method='api_closes_issues'\n- [ ] Source = MR, target = issue (correct directionality)\n- [ ] Cross-project issues stored as unresolved (target_entity_id=NULL, target_project_path set)\n- [ ] Idempotent: re-sync doesn't create duplicate references\n- [ ] 404 on deleted MR handled gracefully (fail_job)\n\n## Files\n- src/gitlab/client.rs (add fetch_mr_closes_issues)\n- src/gitlab/types.rs (add GitLabIssueRef)\n- src/core/references.rs (add insert_entity_reference helper)\n- src/ingestion/orchestrator.rs (enqueue mr_closes_issues jobs)\n- src/core/drain.rs or sync.rs (handle mr_closes_issues in drain dispatcher)\n\n## TDD Loop\nRED: tests/references_tests.rs:\n- `test_closes_issues_creates_references` - mock closes_issues response, verify entity_references rows\n- `test_closes_issues_cross_project_unresolved` - issue from different project stored as unresolved\n- `test_closes_issues_idempotent` - process same job twice, verify no duplicates\n\ntests/gitlab_types_tests.rs:\n- `test_deserialize_issue_ref` - verify GitLabIssueRef deserialization\n\nGREEN: Implement API endpoint, enqueue hook, drain handler, insert helper\n\nVERIFY: `cargo test references -- --nocapture && cargo test gitlab_types -- --nocapture`\n\n## Edge Cases\n- closes_issues API returns issues from OTHER projects (cross-project closing) — must check if issue is in local DB\n- Empty response (MR doesn't close any issues) — no refs created, job still completed\n- MR may close the same issue via description (\"Closes #123\") and via commits — API deduplicates, but our INSERT OR IGNORE handles it too\n- The closes_issues API may return stale data for draft MRs (issues that *would* close but haven't yet)","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-02T21:32:33.561956Z","created_by":"tayloreernisse","updated_at":"2026-02-02T22:41:50.613792Z","compaction_level":0,"original_size":0,"labels":["api","gate-2","phase-b"],"dependencies":[{"issue_id":"bd-3ia","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-02-02T21:32:33.563366Z","created_by":"tayloreernisse"},{"issue_id":"bd-3ia","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-02-02T22:41:50.613776Z","created_by":"tayloreernisse"},{"issue_id":"bd-3ia","depends_on_id":"bd-tir","type":"blocks","created_at":"2026-02-02T21:32:42.860463Z","created_by":"tayloreernisse"}]} +{"id":"bd-3ia","title":"Fetch closes_issues API and populate entity_references","description":"## Background\nGET /projects/:id/merge_requests/:iid/closes_issues returns issues that will close when MR merges. This is the most reliable source for MR→issue relationships. Uses the generic dependent fetch queue (job_type = 'mr_closes_issues').\n\n## Approach\n\n**1. 
Add API endpoint to GitLab client (src/gitlab/client.rs):**\n```rust\n/// Fetch issues that will be closed when this MR merges.\npub async fn fetch_mr_closes_issues(\n    &self, \n    project_id: i64, \n    iid: i64\n) -> Result<Vec<GitLabIssueRef>>\n```\n\nNew type in src/gitlab/types.rs:\n```rust\n#[derive(Debug, Clone, Deserialize)]\npub struct GitLabIssueRef {\n    pub id: i64,\n    pub iid: i64,\n    pub project_id: i64,\n    pub title: String,\n    pub state: String,\n    pub web_url: String,\n}\n```\n\nURL: `GET /api/v4/projects/{project_id}/merge_requests/{iid}/closes_issues?per_page=100`\n\n**2. Enqueue jobs during MR ingestion:**\nIn orchestrator.rs, after MR upsert:\n```rust\nenqueue_job(conn, project_id, \"merge_request\", iid, local_id, \"mr_closes_issues\", None)?;\n```\n\nThis is always enqueued (not gated by a config flag) because cross-reference data is fundamental to all temporal queries.\n\n**3. Process jobs in drain step:**\nIn the drain dispatcher (from bd-1ep), handle \"mr_closes_issues\" job_type:\n```rust\nlet closes_issues = client.fetch_mr_closes_issues(gitlab_project_id, job.entity_iid).await?;\nfor issue_ref in &closes_issues {\n    let target_id = resolve_issue_local_id(conn, project_id, issue_ref.iid);\n    insert_entity_reference(conn, EntityReference {\n        source_entity_type: \"merge_request\",\n        source_entity_id: job.entity_local_id,\n        target_entity_type: \"issue\",\n        target_entity_id: target_id, // Some(id) or None for cross-project\n        target_project_path: if target_id.is_none() { Some(resolve_project_path(issue_ref.project_id)) } else { None },\n        target_entity_iid: if target_id.is_none() { Some(issue_ref.iid) } else { None },\n        reference_type: \"closes\",\n        source_method: \"api_closes_issues\",\n        created_at: None,\n    })?;\n}\n```\n\n**4. Insert helper for entity_references:**\nAdd to src/core/references.rs:\n```rust\npub fn insert_entity_reference(conn: &Connection, ref_: &EntityReference) -> Result<bool>\n// INSERT OR IGNORE, returns true if inserted\n```\n\n## Acceptance Criteria\n- [ ] closes_issues API called for all MRs during sync\n- [ ] Entity references created with reference_type='closes', source_method='api_closes_issues'\n- [ ] Source = MR, target = issue (correct directionality)\n- [ ] Cross-project issues stored as unresolved (target_entity_id=NULL, target_project_path set)\n- [ ] Idempotent: re-sync doesn't create duplicate references\n- [ ] 404 on deleted MR handled gracefully (fail_job)\n\n## Files\n- src/gitlab/client.rs (add fetch_mr_closes_issues)\n- src/gitlab/types.rs (add GitLabIssueRef)\n- src/core/references.rs (add insert_entity_reference helper)\n- src/ingestion/orchestrator.rs (enqueue mr_closes_issues jobs)\n- src/core/drain.rs or sync.rs (handle mr_closes_issues in drain dispatcher)\n\n## TDD Loop\nRED: tests/references_tests.rs:\n- `test_closes_issues_creates_references` - mock closes_issues response, verify entity_references rows\n- `test_closes_issues_cross_project_unresolved` - issue from different project stored as unresolved\n- `test_closes_issues_idempotent` - process same job twice, verify no duplicates\n\ntests/gitlab_types_tests.rs:\n- `test_deserialize_issue_ref` - verify GitLabIssueRef deserialization\n\nGREEN: Implement API endpoint, enqueue hook, drain handler, insert helper\n\nVERIFY: `cargo test references -- --nocapture && cargo test gitlab_types -- --nocapture`\n\n## Edge Cases\n- closes_issues API returns issues from OTHER projects (cross-project closing) — must check if issue is in local DB\n- Empty response (MR doesn't close any issues) — no refs created, job still 
completed\n- MR may close the same issue via description (\"Closes #123\") and via commits — API deduplicates, but our INSERT OR IGNORE handles it too\n- The closes_issues API may return stale data for draft MRs (issues that *would* close but haven't yet)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:32:33.561956Z","created_by":"tayloreernisse","updated_at":"2026-02-04T20:15:54.763773Z","closed_at":"2026-02-04T20:15:54.763643Z","compaction_level":0,"original_size":0,"labels":["api","gate-2","phase-b"],"dependencies":[{"issue_id":"bd-3ia","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-02-02T21:32:33.563366Z","created_by":"tayloreernisse"},{"issue_id":"bd-3ia","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-02-02T22:41:50.613776Z","created_by":"tayloreernisse"},{"issue_id":"bd-3ia","depends_on_id":"bd-tir","type":"blocks","created_at":"2026-02-02T21:32:42.860463Z","created_by":"tayloreernisse"}]} {"id":"bd-3ir","title":"Add database migration 006_merge_requests.sql","description":"## Background\nFoundation for all CP2 MR features. This migration defines the schema that all other MR components depend on. Must complete BEFORE any other CP2 work can proceed.\n\n## Approach\nCreate migration file that adds:\n1. `merge_requests` table with all CP2 fields\n2. `mr_labels`, `mr_assignees`, `mr_reviewers` junction tables\n3. Indexes on discussions for MR queries\n4. DiffNote position columns on notes table\n\n## Files\n- `migrations/006_merge_requests.sql` - New migration file\n- `src/core/db.rs` - Update MIGRATIONS const to include version 6\n\n## Acceptance Criteria\n- [ ] Migration file exists at `migrations/006_merge_requests.sql`\n- [ ] `merge_requests` table has columns: id, gitlab_id, project_id, iid, title, description, state, draft, author_username, source_branch, target_branch, head_sha, references_short, references_full, detailed_merge_status, merge_user_username, created_at, updated_at, merged_at, closed_at, last_seen_at, discussions_synced_for_updated_at, discussions_sync_last_attempt_at, discussions_sync_attempts, discussions_sync_last_error, web_url, raw_payload_id\n- [ ] `mr_labels` junction table exists with (merge_request_id, label_id) PK\n- [ ] `mr_assignees` junction table exists with (merge_request_id, username) PK\n- [ ] `mr_reviewers` junction table exists with (merge_request_id, username) PK\n- [ ] `idx_discussions_mr_id` and `idx_discussions_mr_resolved` indexes exist\n- [ ] `notes` table has new columns: position_type, position_line_range_start, position_line_range_end, position_base_sha, position_start_sha, position_head_sha\n- [ ] `gi doctor` runs without migration errors\n- [ ] `cargo test` passes\n\n## TDD Loop\nRED: Cannot open DB with version 6 schema\nGREEN: Add migration file with full SQL\nVERIFY: `cargo run -- doctor` shows healthy DB\n\n## SQL Reference (from PRD)\n```sql\n-- Merge requests table\nCREATE TABLE merge_requests (\n id INTEGER PRIMARY KEY,\n gitlab_id INTEGER UNIQUE NOT NULL,\n project_id INTEGER NOT NULL REFERENCES projects(id),\n iid INTEGER NOT NULL,\n title TEXT,\n description TEXT,\n state TEXT, -- opened | merged | closed | locked\n draft INTEGER NOT NULL DEFAULT 0, -- SQLite boolean\n author_username TEXT,\n source_branch TEXT,\n target_branch TEXT,\n head_sha TEXT,\n references_short TEXT,\n references_full TEXT,\n detailed_merge_status TEXT,\n merge_user_username TEXT,\n created_at INTEGER, -- ms epoch UTC\n updated_at INTEGER,\n merged_at INTEGER,\n closed_at INTEGER,\n 
last_seen_at INTEGER NOT NULL,\n discussions_synced_for_updated_at INTEGER,\n discussions_sync_last_attempt_at INTEGER,\n discussions_sync_attempts INTEGER DEFAULT 0,\n discussions_sync_last_error TEXT,\n web_url TEXT,\n raw_payload_id INTEGER REFERENCES raw_payloads(id)\n);\nCREATE INDEX idx_mrs_project_updated ON merge_requests(project_id, updated_at);\nCREATE UNIQUE INDEX uq_mrs_project_iid ON merge_requests(project_id, iid);\n-- ... (see PRD for full index list)\n\n-- Junction tables\nCREATE TABLE mr_labels (\n merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,\n label_id INTEGER REFERENCES labels(id) ON DELETE CASCADE,\n PRIMARY KEY(merge_request_id, label_id)\n);\n\nCREATE TABLE mr_assignees (\n merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,\n username TEXT NOT NULL,\n PRIMARY KEY(merge_request_id, username)\n);\n\nCREATE TABLE mr_reviewers (\n merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,\n username TEXT NOT NULL,\n PRIMARY KEY(merge_request_id, username)\n);\n\n-- DiffNote position columns (ALTER TABLE)\nALTER TABLE notes ADD COLUMN position_type TEXT;\nALTER TABLE notes ADD COLUMN position_line_range_start INTEGER;\nALTER TABLE notes ADD COLUMN position_line_range_end INTEGER;\nALTER TABLE notes ADD COLUMN position_base_sha TEXT;\nALTER TABLE notes ADD COLUMN position_start_sha TEXT;\nALTER TABLE notes ADD COLUMN position_head_sha TEXT;\n\nINSERT INTO schema_version (version, applied_at, description)\nVALUES (6, strftime('%s', 'now') * 1000, 'Merge requests, MR labels, assignees, reviewers');\n```\n\n## Edge Cases\n- SQLite does not support ADD CONSTRAINT - FK defined as nullable in CP1\n- `locked` state is transitional (merge-in-progress) - store as first-class\n- discussions_synced_for_updated_at prevents redundant discussion refetch","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:40.101470Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:06:43.899079Z","closed_at":"2026-01-27T00:06:43.898875Z","close_reason":"Migration 006_merge_requests.sql created and verified. Schema v6 applied successfully with all tables, indexes, and position columns.","compaction_level":0,"original_size":0} {"id":"bd-3j6","title":"Add transform_mr_discussion and transform_notes_with_diff_position","description":"## Background\nExtends discussion transformer for MR context. MR discussions can contain DiffNotes with file position metadata. This is critical for code review context in CP3 document generation.\n\n## Approach\nAdd two new functions to existing `src/gitlab/transformers/discussion.rs`:\n1. `transform_mr_discussion()` - Transform discussion with MR reference\n2. 
`transform_notes_with_diff_position()` - Extract DiffNote position metadata\n\nCP1 already has the polymorphic `NormalizedDiscussion` with `NoteableRef` enum - reuse that pattern.\n\n## Files\n- `src/gitlab/transformers/discussion.rs` - Add new functions\n- `tests/diffnote_tests.rs` - DiffNote position extraction tests\n- `tests/mr_discussion_tests.rs` - MR discussion transform tests\n\n## Acceptance Criteria\n- [ ] `transform_mr_discussion()` returns `NormalizedDiscussion` with `merge_request_id: Some(local_mr_id)`\n- [ ] `transform_notes_with_diff_position()` returns `Result<Vec<NormalizedNote>, String>`\n- [ ] DiffNote position fields extracted: `position_old_path`, `position_new_path`, `position_old_line`, `position_new_line`\n- [ ] Extended position fields extracted: `position_type`, `position_line_range_start`, `position_line_range_end`\n- [ ] SHA triplet extracted: `position_base_sha`, `position_start_sha`, `position_head_sha`\n- [ ] Strict timestamp parsing - returns `Err` on invalid timestamps (no `unwrap_or(0)`)\n- [ ] `cargo test diffnote` passes\n- [ ] `cargo test mr_discussion` passes\n\n## TDD Loop\nRED: `cargo test diffnote_position` -> test fails\nGREEN: Add position extraction logic\nVERIFY: `cargo test diffnote`\n\n## Function Signatures\n```rust\n/// Transform GitLab discussion for MR context.\n/// Reuses existing transform_discussion logic, just with MR reference.\npub fn transform_mr_discussion(\n gitlab_discussion: &GitLabDiscussion,\n local_project_id: i64,\n local_mr_id: i64,\n) -> NormalizedDiscussion {\n // Use existing transform_discussion with NoteableRef::MergeRequest(local_mr_id)\n transform_discussion(\n gitlab_discussion,\n local_project_id,\n NoteableRef::MergeRequest(local_mr_id),\n )\n}\n\n/// Transform notes with DiffNote position extraction.\n/// Returns Result to enforce strict timestamp parsing.\npub fn transform_notes_with_diff_position(\n gitlab_discussion: &GitLabDiscussion,\n local_project_id: i64,\n) -> Result<Vec<NormalizedNote>, String>\n```\n\n## DiffNote Position Extraction\n```rust\n// Extract position metadata if present\nlet (old_path, new_path, old_line, new_line, position_type, lr_start, lr_end, base_sha, start_sha, head_sha) = note\n .position\n .as_ref()\n .map(|pos| (\n pos.old_path.clone(),\n pos.new_path.clone(),\n pos.old_line,\n pos.new_line,\n pos.position_type.clone(), // \"text\" | \"image\" | \"file\"\n pos.line_range.as_ref().map(|r| r.start_line),\n pos.line_range.as_ref().map(|r| r.end_line),\n pos.base_sha.clone(),\n pos.start_sha.clone(),\n pos.head_sha.clone(),\n ))\n .unwrap_or((None, None, None, None, None, None, None, None, None, None));\n```\n\n## Strict Timestamp Parsing\n```rust\n// CRITICAL: Return error on invalid timestamps, never zero\nlet created_at = iso_to_ms(&note.created_at)\n .ok_or_else(|| format\!(\n \"Invalid note.created_at for note {}: {}\",\n note.id, note.created_at\n ))?;\n```\n\n## NormalizedNote Fields for DiffNotes\n```rust\nNormalizedNote {\n // ... 
existing fields ...\n // DiffNote position metadata\n position_old_path: old_path,\n position_new_path: new_path,\n position_old_line: old_line,\n position_new_line: new_line,\n // Extended position\n position_type,\n position_line_range_start: lr_start,\n position_line_range_end: lr_end,\n // SHA triplet\n position_base_sha: base_sha,\n position_start_sha: start_sha,\n position_head_sha: head_sha,\n}\n```\n\n## Edge Cases\n- Notes without position should have all position fields as None\n- Invalid timestamp should fail the entire discussion (no partial results)\n- File renames: `old_path \\!= new_path` indicates a renamed file\n- Multi-line comments: `line_range` present means comment spans lines 45-48","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:41.208380Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:20:13.473091Z","closed_at":"2026-01-27T00:20:13.473031Z","close_reason":"Implemented transform_mr_discussion() and transform_notes_with_diff_position() with full DiffNote position extraction:\n- Extended NormalizedNote with 10 DiffNote position fields (path, line, type, line_range, SHA triplet)\n- Added strict timestamp parsing that returns Err on invalid timestamps\n- Created 13 diffnote_position_tests covering all extraction paths and error cases\n- Created 6 mr_discussion_tests verifying MR reference handling\n- All 161 tests passing","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3j6","depends_on_id":"bd-3ir","type":"blocks","created_at":"2026-01-26T22:08:54.207801Z","created_by":"tayloreernisse"},{"issue_id":"bd-3j6","depends_on_id":"bd-5ta","type":"blocks","created_at":"2026-01-26T22:08:54.244201Z","created_by":"tayloreernisse"}]} {"id":"bd-3js","title":"Implement MR CLI commands (list, show, count)","description":"## Background\nCLI commands for viewing and filtering merge requests. Includes list, show, and count commands with MR-specific filters.\n\n## Approach\nUpdate existing CLI command files:\n1. `list.rs` - Add MR listing with filters\n2. `show.rs` - Add MR detail view with discussions\n3. 
`count.rs` - Add MR counting with state breakdown\n\n## Files\n- `src/cli/commands/list.rs` - Add MR subcommand\n- `src/cli/commands/show.rs` - Add MR detail view\n- `src/cli/commands/count.rs` - Add MR counting\n\n## Acceptance Criteria\n- [ ] `gi list mrs` shows MR table with iid, title, state, author, branches\n- [ ] `gi list mrs --state=merged` filters by state\n- [ ] `gi list mrs --state=locked` filters locally (not server-side)\n- [ ] `gi list mrs --draft` shows only draft MRs\n- [ ] `gi list mrs --no-draft` excludes draft MRs\n- [ ] `gi list mrs --reviewer=username` filters by reviewer\n- [ ] `gi list mrs --target-branch=main` filters by target branch\n- [ ] `gi list mrs --source-branch=feature/x` filters by source branch\n- [ ] Draft MRs show `[DRAFT]` prefix in title\n- [ ] `gi show mr <iid>` displays full detail including discussions\n- [ ] DiffNote shows file context: `[src/file.ts:45]`\n- [ ] Multi-line DiffNote shows: `[src/file.ts:45-48]`\n- [ ] `gi show mr` shows `detailed_merge_status`\n- [ ] `gi count mrs` shows total with state breakdown\n- [ ] `gi sync-status` shows MR cursor positions\n- [ ] `cargo test cli_commands` passes\n\n## TDD Loop\nRED: `cargo test list_mrs` -> command not found\nGREEN: Add MR subcommand\nVERIFY: `gi list mrs --help`\n\n## gi list mrs Output\n```\nMerge Requests (showing 20 of 1,234)\n\n !847 Refactor auth to use JWT tokens merged @johndoe main <- feature/jwt 3 days ago\n !846 Fix memory leak in websocket handler opened @janedoe main <- fix/websocket 5 days ago\n !845 [DRAFT] Add dark mode CSS variables opened @bobsmith main <- ui/dark-mode 1 week ago\n```\n\n## SQL for MR Listing\n```sql\nSELECT \n m.iid, m.title, m.state, m.draft, m.author_username,\n m.target_branch, m.source_branch, m.updated_at\nFROM merge_requests m\nWHERE m.project_id = ?\n AND (? IS NULL OR m.state = ?) -- state filter\n AND (? IS NULL OR m.draft = ?) -- draft filter\n AND (? IS NULL OR m.author_username = ?) -- author filter\n AND (? IS NULL OR m.target_branch = ?) -- target-branch filter\n AND (? IS NULL OR m.source_branch = ?) -- source-branch filter\n AND (? IS NULL OR EXISTS ( -- reviewer filter\n SELECT 1 FROM mr_reviewers r \n WHERE r.merge_request_id = m.id AND r.username = ?\n ))\nORDER BY m.updated_at DESC\nLIMIT ?\n```\n\n## gi show mr Output\n```\nMerge Request !847: Refactor auth to use JWT tokens\n================================================================================\n\nProject: group/project-one\nState: merged\nDraft: No\nAuthor: @johndoe\nAssignees: @janedoe, @bobsmith\nReviewers: @alice, @charlie\nSource: feature/jwt\nTarget: main\nMerge Status: mergeable\nMerged By: @alice\nMerged At: 2024-03-20 14:30:00\nLabels: enhancement, auth, reviewed\n\nDescription:\n Moving away from session cookies to JWT-based authentication...\n\nDiscussions (8):\n\n @janedoe (2024-03-16) [src/auth/jwt.ts:45]:\n Should we use a separate signing key for refresh tokens?\n\n @johndoe (2024-03-16):\n Good point. I'll add a separate key with rotation support.\n\n @alice (2024-03-18) [RESOLVED]:\n Looks good! 
Just one nit about the token expiry constant.\n```\n\n## DiffNote File Context Display\n```rust\n// Build file context string\nlet file_context = match (note.position_new_path, note.position_new_line, note.position_line_range_end) {\n (Some(path), Some(line), Some(end_line)) if line != end_line => {\n format!(\"[{}:{}-{}]\", path, line, end_line)\n }\n (Some(path), Some(line), _) => {\n format!(\"[{}:{}]\", path, line)\n }\n _ => String::new(),\n};\n```\n\n## gi count mrs Output\n```\nMerge Requests: 1,234\n opened: 89\n merged: 1,045\n closed: 100\n```\n\n## Filter Arguments (clap)\n```rust\n#[derive(Parser)]\nstruct ListMrsArgs {\n #[arg(long)]\n state: Option<String>, // opened|merged|closed|locked|all\n #[arg(long)]\n draft: bool,\n #[arg(long)]\n no_draft: bool,\n #[arg(long)]\n author: Option<String>,\n #[arg(long)]\n assignee: Option<String>,\n #[arg(long)]\n reviewer: Option<String>,\n #[arg(long)]\n target_branch: Option<String>,\n #[arg(long)]\n source_branch: Option<String>,\n #[arg(long)]\n label: Vec<String>,\n #[arg(long)]\n project: Option<String>,\n #[arg(long, default_value = \"20\")]\n limit: u32,\n}\n```\n\n## Edge Cases\n- `--state=locked` must filter locally (GitLab API doesn't support it)\n- Ambiguous MR iid across projects: prompt for `--project`\n- Empty discussions: show \"No discussions\" message\n- Multi-line DiffNotes: show line range in context","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:43.354939Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:37:31.792569Z","closed_at":"2026-01-27T00:37:31.792504Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3js","depends_on_id":"bd-20h","type":"blocks","created_at":"2026-01-26T22:08:55.209249Z","created_by":"tayloreernisse"},{"issue_id":"bd-3js","depends_on_id":"bd-ser","type":"blocks","created_at":"2026-01-26T22:08:55.117728Z","created_by":"tayloreernisse"}]} @@ -139,7 +139,7 @@ {"id":"bd-4qd","title":"Write unit tests for core algorithms","description":"## Background\nUnit tests verify the core algorithms in isolation: document extraction formatting, FTS query sanitization, RRF scoring, content hashing, backoff curves, and filter helpers. These tests don't require a database or external services — they test pure functions and logic.\n\n## Approach\nAdd #[cfg(test)] mod tests blocks to each module:\n\n**1. src/documents/extractor.rs:**\n- test_source_type_parse_all_aliases — every alias resolves correctly\n- test_source_type_parse_unknown — returns None\n- test_source_type_as_str_roundtrip — as_str matches parse input\n- test_content_hash_deterministic — same input = same hash\n- test_list_hash_order_independent — sorted before hashing\n- test_list_hash_empty — empty vec produces consistent hash\n\n**2. src/documents/truncation.rs:**\n- test_truncation_edge_cases (per bd-18t TDD Loop)\n\n**3. src/search/fts.rs:**\n- test_to_fts_query_basic — \"auth error\" -> quoted tokens\n- test_to_fts_query_prefix — \"auth*\" preserves prefix\n- test_to_fts_query_special_chars — \"C++\" quoted correctly\n- test_to_fts_query_dash — \"-DWITH_SSL\" quoted (not NOT operator)\n- test_to_fts_query_internal_quotes — escaped by doubling\n- test_to_fts_query_empty — empty string returns empty\n\n**4. src/search/rrf.rs:**\n- test_rrf_dual_list — docs in both lists score higher\n- test_rrf_normalization — best score = 1.0\n- test_rrf_empty — empty returns empty\n\n**5. 
src/core/backoff.rs:**\n- test_exponential_curve — delays double each attempt\n- test_cap_at_one_hour — high attempt_count capped\n- test_jitter_range — within [0.9, 1.1) factor\n\n**6. src/search/filters.rs:**\n- test_has_any_filter — true/false for various filter combos\n- test_clamp_limit — 0->20, 200->100, 50->50\n- test_path_filter_from_str — trailing slash = Prefix\n\n**7. src/search/hybrid.rs (hydration round-trip):**\n- test_single_round_trip_query — verify hydration SQL produces correct structure\n\n## Acceptance Criteria\n- [ ] All edge cases covered per PRD acceptance criteria\n- [ ] Tests are unit tests (no DB, no network, no Ollama)\n- [ ] `cargo test` passes with all new tests\n- [ ] No test depends on execution order\n- [ ] Tests cover: document extractor formats, truncation, RRF, hashing, FTS sanitization, backoff, filters\n\n## Files\n- In-module tests in: extractor.rs, truncation.rs, fts.rs, rrf.rs, backoff.rs, filters.rs, hybrid.rs\n\n## TDD Loop\nThese tests ARE the TDD loop for their respective beads. Each implementation bead should write its tests first (RED), then implement (GREEN).\nVERIFY: `cargo test`\n\n## Edge Cases\n- Tests with Unicode: include emoji, CJK characters in truncation tests\n- Tests with empty strings: empty queries, empty content, empty labels\n- Tests with boundary values: limit=0, limit=100, limit=101","status":"closed","priority":3,"issue_type":"task","created_at":"2026-01-30T15:27:21.712924Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:46:00.059346Z","closed_at":"2026-01-30T17:46:00.059292Z","close_reason":"All acceptance criteria tests already exist across modules. 276 tests passing (189 unit + 87 integration).","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-4qd","depends_on_id":"bd-18t","type":"blocks","created_at":"2026-01-30T15:29:35.356715Z","created_by":"tayloreernisse"},{"issue_id":"bd-4qd","depends_on_id":"bd-1k1","type":"blocks","created_at":"2026-01-30T15:29:35.320913Z","created_by":"tayloreernisse"},{"issue_id":"bd-4qd","depends_on_id":"bd-36p","type":"blocks","created_at":"2026-01-30T15:29:35.465589Z","created_by":"tayloreernisse"},{"issue_id":"bd-4qd","depends_on_id":"bd-3ez","type":"blocks","created_at":"2026-01-30T15:29:35.393455Z","created_by":"tayloreernisse"},{"issue_id":"bd-4qd","depends_on_id":"bd-mem","type":"blocks","created_at":"2026-01-30T15:29:35.427448Z","created_by":"tayloreernisse"}]} {"id":"bd-5ta","title":"Add GitLab MR types to types.rs","description":"## Background\nGitLab API types for merge requests. These structs define how we deserialize GitLab API responses. 
Must handle deprecated field aliases for backward compatibility with older GitLab instances.\n\n## Approach\nAdd new structs to `src/gitlab/types.rs`:\n- `GitLabMergeRequest` - Main MR struct with all fields\n- `GitLabReviewer` - Reviewer with optional approval state\n- `GitLabReferences` - Short and full reference strings\n\nUse serde `#[serde(alias = \"...\")]` for deprecated field fallbacks.\n\n## Files\n- `src/gitlab/types.rs` - Add new structs after existing GitLabIssue\n- `tests/fixtures/gitlab_merge_request.json` - Test fixture\n\n## Acceptance Criteria\n- [ ] `GitLabMergeRequest` struct exists with all fields from PRD\n- [ ] `detailed_merge_status` field exists (non-deprecated)\n- [ ] `#[serde(alias = \"merge_status\")]` on `merge_status_legacy` for fallback\n- [ ] `merge_user` field exists (non-deprecated)\n- [ ] `merged_by` field exists for fallback\n- [ ] `draft` and `work_in_progress` both exist (draft preferred, WIP fallback)\n- [ ] `sha` field maps to `head_sha` in transformer\n- [ ] `references: Option<GitLabReferences>` for short/full refs\n- [ ] `state: String` supports \"opened\", \"merged\", \"closed\", \"locked\"\n- [ ] Fixture deserializes without error\n- [ ] `cargo test` passes\n\n## TDD Loop\nRED: Add test that deserializes fixture -> struct not found\nGREEN: Add GitLabMergeRequest, GitLabReviewer, GitLabReferences structs\nVERIFY: `cargo test gitlab_types`\n\n## Struct Definitions (from PRD)\n```rust\n#[derive(Debug, Clone, Deserialize)]\npub struct GitLabMergeRequest {\n pub id: i64,\n pub iid: i64,\n pub project_id: i64,\n pub title: String,\n pub description: Option<String>,\n pub state: String, // \"opened\" | \"merged\" | \"closed\" | \"locked\"\n #[serde(default)]\n pub draft: bool,\n #[serde(default)]\n pub work_in_progress: bool, // Deprecated fallback\n pub source_branch: String,\n pub target_branch: String,\n pub sha: Option<String>, // head_sha\n pub references: Option<GitLabReferences>,\n pub detailed_merge_status: Option<String>,\n #[serde(alias = \"merge_status\")]\n pub merge_status_legacy: Option<String>,\n pub created_at: String,\n pub updated_at: String,\n pub merged_at: Option<String>,\n pub closed_at: Option<String>,\n pub author: GitLabAuthor,\n pub merge_user: Option<GitLabAuthor>,\n pub merged_by: Option<GitLabAuthor>,\n #[serde(default)]\n pub labels: Vec<String>,\n #[serde(default)]\n pub assignees: Vec<GitLabAuthor>,\n #[serde(default)]\n pub reviewers: Vec<GitLabReviewer>,\n pub web_url: String,\n}\n\n#[derive(Debug, Clone, Deserialize)]\npub struct GitLabReferences {\n pub short: String, // e.g. \"\\!123\"\n pub full: String, // e.g. 
\"group/project\\!123\"\n}\n\n#[derive(Debug, Clone, Deserialize)]\npub struct GitLabReviewer {\n pub id: i64,\n pub username: String,\n pub name: String,\n}\n```\n\n## Test Fixture (create tests/fixtures/gitlab_merge_request.json)\n```json\n{\n \"id\": 12345,\n \"iid\": 42,\n \"project_id\": 100,\n \"title\": \"Add user authentication\",\n \"description\": \"Implements JWT auth flow\",\n \"state\": \"merged\",\n \"draft\": false,\n \"work_in_progress\": false,\n \"source_branch\": \"feature/auth\",\n \"target_branch\": \"main\",\n \"sha\": \"abc123def456\",\n \"references\": { \"short\": \"\\!42\", \"full\": \"group/project\\!42\" },\n \"detailed_merge_status\": \"mergeable\",\n \"merge_status\": \"can_be_merged\",\n \"created_at\": \"2024-01-15T10:00:00Z\",\n \"updated_at\": \"2024-01-20T14:30:00Z\",\n \"merged_at\": \"2024-01-20T14:30:00Z\",\n \"closed_at\": null,\n \"author\": { \"id\": 1, \"username\": \"johndoe\", \"name\": \"John Doe\" },\n \"merge_user\": { \"id\": 2, \"username\": \"janedoe\", \"name\": \"Jane Doe\" },\n \"merged_by\": { \"id\": 2, \"username\": \"janedoe\", \"name\": \"Jane Doe\" },\n \"labels\": [\"enhancement\", \"auth\"],\n \"assignees\": [{ \"id\": 3, \"username\": \"bob\", \"name\": \"Bob Smith\" }],\n \"reviewers\": [{ \"id\": 4, \"username\": \"alice\", \"name\": \"Alice Wong\" }],\n \"web_url\": \"https://gitlab.example.com/group/project/-/merge_requests/42\"\n}\n```\n\n## Edge Cases\n- `locked` state is transitional (merge in progress) - rare but valid\n- Some older instances may not return `detailed_merge_status`\n- Some older instances may not return `merge_user` (use `merged_by` fallback)\n- `work_in_progress` is deprecated but still returned by some instances","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:40.498088Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:08:35.520229Z","closed_at":"2026-01-27T00:08:35.520167Z","close_reason":"Added GitLabMergeRequest, GitLabReviewer, GitLabReferences structs. Updated GitLabNotePosition with position_type, line_range, and SHA triplet fields. All 23 type tests passing.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-5ta","depends_on_id":"bd-3ir","type":"blocks","created_at":"2026-01-26T22:08:53.981911Z","created_by":"tayloreernisse"}]} {"id":"bd-88m","title":"[CP1] Issue ingestion module","description":"Fetch and store issues with cursor-based incremental sync.\n\n## Module\nsrc/ingestion/issues.rs\n\n## Key Structs\n\n### IngestIssuesResult\n- fetched: usize\n- upserted: usize\n- labels_created: usize\n- issues_needing_discussion_sync: Vec\n\n### IssueForDiscussionSync\n- local_issue_id: i64\n- iid: i64\n- updated_at: i64\n\n## Main Function\npub async fn ingest_issues(conn, client, config, project_id, gitlab_project_id) -> Result\n\n## Logic\n1. Get current cursor from sync_cursors (updated_at_cursor, tie_breaker_id)\n2. Paginate through issues updated after cursor with cursor_rewind_seconds\n3. Apply local filtering for tuple cursor semantics:\n - Skip if issue.updated_at < cursor_updated_at\n - Skip if issue.updated_at == cursor_updated_at AND issue.id <= cursor_gitlab_id\n4. For each issue passing filter:\n - Begin transaction\n - Store raw payload (compressed)\n - Transform and upsert issue\n - Clear existing label links (DELETE FROM issue_labels)\n - Extract and upsert labels\n - Link issue to labels via junction\n - Commit transaction\n - Track for discussion sync eligibility\n5. Incremental cursor update every 100 issues\n6. 
Final cursor update\n7. Determine issues needing discussion sync: where updated_at > discussions_synced_for_updated_at\n\n## Helper Functions\n- get_cursor(conn, project_id) -> (Option<i64>, Option<i64>)\n- get_discussions_synced_at(conn, issue_id) -> Option<i64>\n- upsert_issue(conn, issue, payload_id) -> usize\n- get_local_issue_id(conn, gitlab_id) -> i64\n- clear_issue_labels(conn, issue_id)\n- upsert_label(conn, label) -> bool\n- get_label_id(conn, project_id, name) -> i64\n- link_issue_label(conn, issue_id, label_id)\n- update_cursor(conn, project_id, resource_type, updated_at, gitlab_id)\n\nFiles: src/ingestion/mod.rs, src/ingestion/issues.rs\nTests: tests/issue_ingestion_tests.rs\nDone when: Issues, labels, issue_labels populated correctly with resumable cursor","status":"tombstone","priority":2,"issue_type":"task","created_at":"2026-01-25T16:57:35.655708Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.806982Z","deleted_at":"2026-01-25T17:02:01.806977Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} -{"id":"bd-8t4","title":"Extract cross-references from resource_state_events","description":"## Background\nresource_state_events includes source_merge_request (with iid) for 'closed by MR' events. After state events are stored (Gate 1), post-processing extracts these into entity_references for the cross-reference graph.\n\n## Approach\nCreate src/core/references.rs (new module) or add to events_db.rs:\n\n```rust\n/// Extract cross-references from stored state events and insert into entity_references.\n/// Looks for state events with source_merge_request_id IS NOT NULL (meaning \"closed by MR\").\n/// \n/// Directionality: source = MR (that caused the close), target = issue (that was closed)\npub fn extract_refs_from_state_events(\n conn: &Connection,\n project_id: i64,\n) -> Result<usize> // returns count of new references inserted\n```\n\nSQL logic:\n```sql\nINSERT OR IGNORE INTO entity_references (\n source_entity_type, source_entity_id,\n target_entity_type, target_entity_id,\n reference_type, source_method, created_at\n)\nSELECT\n 'merge_request',\n mr.id,\n 'issue',\n rse.issue_id,\n 'closes',\n 'api_state_event',\n rse.created_at\nFROM resource_state_events rse\nJOIN merge_requests mr ON mr.project_id = rse.project_id AND mr.iid = rse.source_merge_request_id\nWHERE rse.source_merge_request_id IS NOT NULL\n AND rse.issue_id IS NOT NULL\n AND rse.project_id = ?1;\n```\n\nKey: source_merge_request_id stores the MR iid, so we JOIN on merge_requests.iid to get the local DB id.\n\nRegister in src/core/mod.rs: `pub mod references;`\n\nCall this after drain_dependent_queue in the sync pipeline (after all state events are stored).\n\n## Acceptance Criteria\n- [ ] State events with source_merge_request_id produce 'closes' references\n- [ ] Source = MR (resolved by iid), target = issue\n- [ ] source_method = 'api_state_event'\n- [ ] INSERT OR IGNORE prevents duplicates with api_closes_issues data\n- [ ] Returns count of newly inserted references\n- [ ] No-op when no state events have source_merge_request_id\n\n## Files\n- src/core/references.rs (new)\n- src/core/mod.rs (add `pub mod references;`)\n- src/cli/commands/sync.rs (call after drain step)\n\n## TDD Loop\nRED: tests/references_tests.rs:\n- `test_extract_refs_from_state_events_basic` - seed a \"closed\" state event with source_merge_request_id, verify entity_reference created\n- `test_extract_refs_dedup_with_closes_issues` - insert ref from 
closes_issues API first, verify state event extraction doesn't duplicate\n- `test_extract_refs_no_source_mr` - state events without source_merge_request_id produce no refs\n\nSetup: create_test_db with migrations 001-011, seed project + issue + MR + state events.\n\nGREEN: Implement extract_refs_from_state_events\n\nVERIFY: `cargo test references -- --nocapture`\n\n## Edge Cases\n- source_merge_request_id may reference an MR not synced locally (cross-project close) — the JOIN will produce no match, which is correct behavior (ref simply not created)\n- Multiple state events can reference the same MR for the same issue (reopen + re-close) — INSERT OR IGNORE handles dedup\n- The merge_requests table might not have the MR yet if sync is still running — call this after all dependent fetches complete","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-02T21:32:33.619606Z","created_by":"tayloreernisse","updated_at":"2026-02-02T22:41:50.562956Z","compaction_level":0,"original_size":0,"labels":["extraction","gate-2","phase-b"],"dependencies":[{"issue_id":"bd-8t4","depends_on_id":"bd-1ep","type":"blocks","created_at":"2026-02-02T21:32:42.945176Z","created_by":"tayloreernisse"},{"issue_id":"bd-8t4","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-02-02T21:32:33.621025Z","created_by":"tayloreernisse"},{"issue_id":"bd-8t4","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-02-02T22:41:50.562935Z","created_by":"tayloreernisse"}]} +{"id":"bd-8t4","title":"Extract cross-references from resource_state_events","description":"## Background\nresource_state_events includes source_merge_request (with iid) for 'closed by MR' events. After state events are stored (Gate 1), post-processing extracts these into entity_references for the cross-reference graph.\n\n## Approach\nCreate src/core/references.rs (new module) or add to events_db.rs:\n\n```rust\n/// Extract cross-references from stored state events and insert into entity_references.\n/// Looks for state events with source_merge_request_id IS NOT NULL (meaning \"closed by MR\").\n/// \n/// Directionality: source = MR (that caused the close), target = issue (that was closed)\npub fn extract_refs_from_state_events(\n conn: &Connection,\n project_id: i64,\n) -> Result<usize> // returns count of new references inserted\n```\n\nSQL logic:\n```sql\nINSERT OR IGNORE INTO entity_references (\n source_entity_type, source_entity_id,\n target_entity_type, target_entity_id,\n reference_type, source_method, created_at\n)\nSELECT\n 'merge_request',\n mr.id,\n 'issue',\n rse.issue_id,\n 'closes',\n 'api_state_event',\n rse.created_at\nFROM resource_state_events rse\nJOIN merge_requests mr ON mr.project_id = rse.project_id AND mr.iid = rse.source_merge_request_id\nWHERE rse.source_merge_request_id IS NOT NULL\n AND rse.issue_id IS NOT NULL\n AND rse.project_id = ?1;\n```\n\nKey: source_merge_request_id stores the MR iid, so we JOIN on merge_requests.iid to get the local DB id.\n\nRegister in src/core/mod.rs: `pub mod references;`\n\nCall this after drain_dependent_queue in the sync pipeline (after all state events are stored).\n\n## Acceptance Criteria\n- [ ] State events with source_merge_request_id produce 'closes' references\n- [ ] Source = MR (resolved by iid), target = issue\n- [ ] source_method = 'api_state_event'\n- [ ] INSERT OR IGNORE prevents duplicates with api_closes_issues data\n- [ ] Returns count of newly inserted references\n- [ ] No-op when no state events have source_merge_request_id\n\n## Files\n- 
src/core/references.rs (new)\n- src/core/mod.rs (add `pub mod references;`)\n- src/cli/commands/sync.rs (call after drain step)\n\n## TDD Loop\nRED: tests/references_tests.rs:\n- `test_extract_refs_from_state_events_basic` - seed a \"closed\" state event with source_merge_request_id, verify entity_reference created\n- `test_extract_refs_dedup_with_closes_issues` - insert ref from closes_issues API first, verify state event extraction doesn't duplicate\n- `test_extract_refs_no_source_mr` - state events without source_merge_request_id produce no refs\n\nSetup: create_test_db with migrations 001-011, seed project + issue + MR + state events.\n\nGREEN: Implement extract_refs_from_state_events\n\nVERIFY: `cargo test references -- --nocapture`\n\n## Edge Cases\n- source_merge_request_id may reference an MR not synced locally (cross-project close) — the JOIN will produce no match, which is correct behavior (ref simply not created)\n- Multiple state events can reference the same MR for the same issue (reopen + re-close) — INSERT OR IGNORE handles dedup\n- The merge_requests table might not have the MR yet if sync is still running — call this after all dependent fetches complete","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:32:33.619606Z","created_by":"tayloreernisse","updated_at":"2026-02-04T20:13:28.219791Z","closed_at":"2026-02-04T20:13:28.219633Z","compaction_level":0,"original_size":0,"labels":["extraction","gate-2","phase-b"],"dependencies":[{"issue_id":"bd-8t4","depends_on_id":"bd-1ep","type":"blocks","created_at":"2026-02-02T21:32:42.945176Z","created_by":"tayloreernisse"},{"issue_id":"bd-8t4","depends_on_id":"bd-1se","type":"parent-child","created_at":"2026-02-02T21:32:33.621025Z","created_by":"tayloreernisse"},{"issue_id":"bd-8t4","depends_on_id":"bd-hu3","type":"blocks","created_at":"2026-02-02T22:41:50.562935Z","created_by":"tayloreernisse"}]} {"id":"bd-9av","title":"[CP1] gi sync-status enhancement","description":"Enhance sync-status from CP0 stub to show issue cursors.\n\n## Changes to src/cli/commands/sync_status.rs\n\nUpdate the existing stub to show:\n- Last run timestamp and duration\n- Cursor positions per project (issues resource_type)\n- Entity counts (issues, discussions, notes)\n\n## Output Format\nLast sync: 2026-01-25 10:30:00 (succeeded, 45s)\n\nCursors:\n group/project-one\n issues: 2026-01-25T10:25:00Z (gitlab_id: 12345678)\n\nCounts:\n Issues: 1,234\n Discussions: 5,678\n Notes: 23,456 (4,567 system)\n\nFiles: src/cli/commands/sync_status.rs\nDone when: Shows cursor positions and counts after ingestion","status":"tombstone","priority":3,"issue_type":"task","created_at":"2026-01-25T16:58:27.246825Z","created_by":"tayloreernisse","updated_at":"2026-01-25T17:02:01.968507Z","deleted_at":"2026-01-25T17:02:01.968503Z","deleted_by":"tayloreernisse","delete_reason":"recreating with correct deps","original_type":"task","compaction_level":0,"original_size":0} {"id":"bd-9dd","title":"Implement 'lore trace' command with human and robot output","description":"## Background\nlore trace shows the MR→issue→discussion chain for a file. CLI wiring with human and robot output per spec §5.5.\n\n## Approach\nCreate src/cli/commands/trace.rs:\n\n**1. 
CLI args (in src/cli/mod.rs):**\n```rust\n/// Trace a file to its motivating decisions\n#[command(name = \"trace\")]\nTrace(TraceArgs),\n\n#[derive(clap::Args)]\npub struct TraceArgs {\n /// File path to trace (optionally :line for future Tier 2)\n #[arg(required = true)]\n pub path: String,\n\n /// Scope to a specific project\n #[arg(short = 'p', long = \"project\")]\n pub project: Option<String>,\n\n /// Disable rename chain resolution\n #[arg(long = \"no-follow-renames\")]\n pub no_follow_renames: bool,\n\n /// Maximum number of MRs to show\n #[arg(short = 'n', long = \"limit\", default_value = \"20\")]\n pub limit: usize,\n}\n```\n\n**2. Path parsing:**\n```rust\nfn parse_trace_path(input: &str) -> (String, Option<u32>) {\n // \"src/foo.rs:45\" → (\"src/foo.rs\", Some(45))\n // \"src/foo.rs\" → (\"src/foo.rs\", None)\n if let Some((path, line)) = input.rsplit_once(':') {\n if let Ok(n) = line.parse::<u32>() {\n return (path.to_string(), Some(n));\n }\n }\n (input.to_string(), None)\n}\n```\n\nIf line number provided, warn: \"Line-level tracing requires git integration (future feature). Proceeding with file-level trace.\"\n\n**3. Human output** (spec §5.5):\n```\nTrace: src/auth/oauth.rs\n────────────────────────\n\n!567 feat: add OAuth2 provider MERGED 2024-03-25\n → Closes #234: Migrate to OAuth2\n → 12 discussion comments, 4 on this file\n → Decision: Use rust-oauth2 crate (discussed in #234, comment by @alice)\n\n!612 fix: token refresh race condition MERGED 2024-04-10\n → Closes #299: OAuth2 login fails for SSO users\n → 5 discussion comments, 2 on this file\n → [src/auth/oauth.rs:45] \"Add mutex around refresh to prevent double-refresh\"\n```\n\n**4. Robot JSON:**\n```json\n{\n \"ok\": true,\n \"data\": {\n \"file_path\": \"src/auth/oauth.rs\",\n \"rename_chain\": [\"src/auth/handler.rs\", \"src/auth/oauth.rs\"],\n \"trace_chains\": [{\n \"mr\": { \"iid\": 567, \"title\": \"...\", \"state\": \"merged\", ... },\n \"issues\": [{ \"iid\": 234, \"title\": \"...\", \"reference_type\": \"closes\", ... }],\n \"discussions\": [{ \"note_id\": 123, \"snippet\": \"...\", \"position_line\": 45, ... }]\n }]\n },\n \"meta\": { \"total_mrs\": 2, \"tier\": \"api_only\" }\n}\n```\n\n**5. Graceful empty state:**\n\"No MR data found for this file. 
Run lore sync with fetchMrFileChanges: true\"\n\n## Acceptance Criteria\n- [ ] `lore trace src/auth/oauth.rs` shows trace chains\n- [ ] Human output matches spec §5.5 format\n- [ ] Robot JSON structured with trace_chains array\n- [ ] :line suffix parsed with Tier 2 warning\n- [ ] -p flag for project scoping\n- [ ] --no-follow-renames disables rename chain\n- [ ] Graceful empty state message\n- [ ] meta.tier = \"api_only\" (Tier 1)\n\n## Files\n- src/cli/commands/trace.rs (new)\n- src/cli/commands/mod.rs (add `pub mod trace;`)\n- src/cli/mod.rs (add Trace variant + TraceArgs)\n- src/main.rs (add handler)\n\n## TDD Loop\nRED: tests/trace_command_tests.rs:\n- `test_trace_command_basic` - end-to-end with seeded data\n- `test_trace_command_empty_state` - no data, verify message\n- `test_trace_command_line_number_warning` - :45 suffix, verify warning\n- `test_trace_command_robot_json` - verify JSON structure\n- `test_parse_trace_path` - unit test path parsing\n\nGREEN: Implement CLI wiring + output renderers\n\nVERIFY: `cargo test trace_command -- --nocapture && cargo build`\n\n## Edge Cases\n- Path with colons in directory name (unlikely but possible on macOS): rsplit_once handles this (last colon)\n- File path that doesn't exist in mr_file_changes: empty state, not error\n- Very long DiffNote snippets: truncate to ~200 chars","status":"open","priority":2,"issue_type":"task","created_at":"2026-02-02T21:34:32.788530Z","created_by":"tayloreernisse","updated_at":"2026-02-02T21:50:13.552923Z","compaction_level":0,"original_size":0,"labels":["cli","gate-5","phase-b"],"dependencies":[{"issue_id":"bd-9dd","depends_on_id":"bd-1ht","type":"parent-child","created_at":"2026-02-02T21:34:32.789920Z","created_by":"tayloreernisse"},{"issue_id":"bd-9dd","depends_on_id":"bd-2n4","type":"blocks","created_at":"2026-02-02T21:34:37.941327Z","created_by":"tayloreernisse"}]} {"id":"bd-am7","title":"Implement embedding pipeline with chunking","description":"## Background\nThe embedding pipeline takes documents, chunks them (paragraph-boundary splitting with overlap), sends chunks to Ollama for embedding via async HTTP, and stores vectors in sqlite-vec + metadata. It uses keyset pagination, concurrent HTTP requests via FuturesUnordered, per-batch transactions, and dimension validation.\n\n## Approach\nCreate \\`src/embedding/pipeline.rs\\` per PRD Section 4.4. **The pipeline is async.**\n\n**Constants (per PRD):**\n```rust\nconst BATCH_SIZE: usize = 32; // texts per Ollama API call\nconst DB_PAGE_SIZE: usize = 500; // keyset pagination page size\nconst EXPECTED_DIMS: usize = 768; // nomic-embed-text dimensions\nconst CHUNK_MAX_CHARS: usize = 32_000; // max chars per chunk\nconst CHUNK_OVERLAP_CHARS: usize = 500; // overlap between chunks\n```\n\n**Core async function:**\n```rust\npub async fn embed_documents(\n conn: &Connection,\n client: &OllamaClient,\n selection: EmbedSelection,\n concurrency: usize, // max in-flight HTTP requests\n progress_callback: Option<Box<dyn Fn(usize, usize)>>, // (processed, total)\n) -> Result<EmbedResult>\n```\n\n**EmbedSelection:** Pending | RetryFailed\n**EmbedResult:** { embedded, failed, skipped }\n\n**Algorithm (per PRD):**\n1. count_pending_documents(conn, selection) for progress total\n2. Keyset pagination loop: find_pending_documents(conn, DB_PAGE_SIZE, last_id, selection)\n3. For each page:\n a. Begin transaction\n b. For each doc: clear_document_embeddings(&tx, doc.id), split_into_chunks(&doc.content)\n c. Build ChunkWork items with doc_hash + chunk_hash\n d. Commit clearing transaction\n4. 
Batch ChunkWork texts into Ollama calls (BATCH_SIZE=32)\n5. Use **FuturesUnordered** for concurrent HTTP, cap at \`concurrency\`\n6. collect_writes() in per-batch transactions: validate dims (768), store LE bytes, write metadata\n7. On error: record_embedding_error per chunk (not abort)\n8. Advance keyset cursor\n\n**ChunkWork struct:**\n```rust\nstruct ChunkWork {\n doc_id: i64,\n chunk_index: usize,\n doc_hash: String, // SHA-256 of FULL document (staleness detection)\n chunk_hash: String, // SHA-256 of THIS chunk (provenance)\n text: String,\n}\n```\n\n**Splitting:** split_into_chunks(content) -> Vec<(usize, String)>\n- Documents <= CHUNK_MAX_CHARS: single chunk (index 0)\n- Longer: split at paragraph boundaries (\\\\n\\\\n), fallback to sentence/word, with CHUNK_OVERLAP_CHARS overlap\n\n**Storage:** embeddings as raw LE bytes, rowid = encode_rowid(doc_id, chunk_idx)\n**Staleness detection:** uses document_hash (not chunk_hash) because it's document-level\n\nAlso create \\`src/embedding/change_detector.rs\\` (referenced in PRD module structure):\n```rust\npub fn detect_embedding_changes(conn: &Connection) -> Result<Vec<i64>>;\n```\n\n## Acceptance Criteria\n- [ ] Pipeline is async (uses FuturesUnordered for concurrent HTTP)\n- [ ] concurrency parameter caps in-flight HTTP requests\n- [ ] progress_callback reports (processed, total)\n- [ ] New documents embedded, changed re-embedded, unchanged skipped\n- [ ] clear_document_embeddings before re-embedding (range delete vec0 + metadata)\n- [ ] Chunking at paragraph boundaries with 500-char overlap\n- [ ] Short documents (<32k chars) produce exactly 1 chunk\n- [ ] Embeddings stored as raw LE bytes in vec0\n- [ ] Rowids encoded via encode_rowid(doc_id, chunk_index)\n- [ ] Dimension validation: 768 floats per embedding (mismatch -> record error, not store)\n- [ ] Per-batch transactions for writes\n- [ ] Errors recorded in embedding_metadata per chunk (last_error, attempt_count)\n- [ ] Keyset pagination (d.id > last_id, not OFFSET)\n- [ ] Pending detection uses document_hash (not chunk_hash)\n- [ ] \\`cargo build\\` succeeds\n\n## Files\n- \\`src/embedding/pipeline.rs\\` — new file (async)\n- \\`src/embedding/change_detector.rs\\` — new file\n- \\`src/embedding/mod.rs\\` — add \\`pub mod pipeline; pub mod change_detector;\\` + re-exports\n\n## TDD Loop\nRED: Unit tests for chunking:\n- \\`test_short_document_single_chunk\\` — <32k produces [(0, full_content)]\n- \\`test_long_document_multiple_chunks\\` — >32k splits at paragraph boundaries\n- \\`test_chunk_overlap\\` — adjacent chunks share 500-char overlap\n- \\`test_no_paragraph_boundary\\` — falls back to char boundary\nIntegration tests need Ollama or mock.\nGREEN: Implement split_into_chunks, embed_documents (async)\nVERIFY: \\`cargo test pipeline\\`\n\n## Edge Cases\n- Empty document content_text: skip (don't embed)\n- No paragraph boundaries: split at CHUNK_MAX_CHARS with overlap\n- Ollama error for one batch: record error per chunk, continue with next batch\n- Dimension mismatch (model returns 512 instead of 768): record error, don't store corrupt data\n- Document deleted between pagination and embedding: skip gracefully","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:34.093701Z","created_by":"tayloreernisse","updated_at":"2026-01-30T17:58:58.908585Z","closed_at":"2026-01-30T17:58:58.908525Z","close_reason":"Implemented embedding pipeline: chunking at paragraph boundaries with 500-char overlap, change detector (keyset pagination, hash-based staleness), async 
embed via Ollama with batch processing, dimension validation, per-chunk error recording, LE byte vector storage. 7 chunking tests pass. 289 total tests.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-am7","depends_on_id":"bd-1y8","type":"blocks","created_at":"2026-01-30T15:29:24.697418Z","created_by":"tayloreernisse"},{"issue_id":"bd-am7","depends_on_id":"bd-2ac","type":"blocks","created_at":"2026-01-30T15:29:24.732567Z","created_by":"tayloreernisse"},{"issue_id":"bd-am7","depends_on_id":"bd-335","type":"blocks","created_at":"2026-01-30T15:29:24.660199Z","created_by":"tayloreernisse"}]} diff --git a/.beads/last-touched b/.beads/last-touched index 72b3adf..bb7812b 100644 --- a/.beads/last-touched +++ b/.beads/last-touched @@ -1 +1 @@ -bd-1ht +bd-3ia