diff --git a/.beads/issues.jsonl b/.beads/issues.jsonl
index e676c9f..7ff3245 100644
--- a/.beads/issues.jsonl
+++ b/.beads/issues.jsonl
@@ -215,7 +215,7 @@
 {"id":"bd-32mc","title":"OBSERV: Implement log retention cleanup at startup","description":"## Background\nLog files accumulate at ~1-10 MB/day. Without cleanup, they grow unbounded. Retention runs BEFORE subscriber init so deleted file handles aren't held open by the appender.\n\n## Approach\nAdd a cleanup function, called from main.rs before the subscriber is initialized (before current line 44):\n\n```rust\n/// Delete log files older than retention_days.\n/// Matches files named lore.YYYY-MM-DD.log in the log directory.\npub fn cleanup_old_logs(log_dir: &Path, retention_days: u32) -> std::io::Result<usize> {\n if retention_days == 0 {\n return Ok(0); // 0 means file logging disabled, don't delete\n }\n let cutoff = SystemTime::now() - Duration::from_secs(u64::from(retention_days) * 86400);\n let mut deleted = 0;\n\n for entry in std::fs::read_dir(log_dir)? {\n let entry = entry?;\n let name = entry.file_name();\n let name_str = name.to_string_lossy();\n\n // Only match lore.YYYY-MM-DD.log pattern\n if !name_str.starts_with(\"lore.\") || !name_str.ends_with(\".log\") {\n continue;\n }\n\n if let Ok(metadata) = entry.metadata() {\n if let Ok(modified) = metadata.modified() {\n if modified < cutoff {\n std::fs::remove_file(entry.path())?;\n deleted += 1;\n }\n }\n }\n }\n Ok(deleted)\n}\n```\n\nPlace this function in src/core/paths.rs (next to get_log_dir) or a new src/core/log_retention.rs. Prefer paths.rs since it's small and related.\n\nCall from main.rs:\n```rust\nlet log_dir = get_log_dir(config.logging.log_dir.as_deref());\nlet _ = cleanup_old_logs(&log_dir, config.logging.retention_days);\n// THEN init subscriber\n```\n\nNote: Config must be loaded before cleanup runs. Current main.rs parses Cli at line 60, but config loading happens inside command handlers. 
This means we need to either:\n A) Load config early in main() before subscriber init (preferred)\n B) Defer cleanup to after config load\n\nSince the subscriber must also know log_dir, approach A is natural: load config -> cleanup -> init subscriber -> dispatch command.\n\n## Acceptance Criteria\n- [ ] Files matching lore.*.log older than retention_days are deleted\n- [ ] Files matching lore.*.log within retention_days are preserved\n- [ ] Non-matching files (e.g., other.txt) are never deleted\n- [ ] retention_days=0 skips cleanup entirely (no files deleted)\n- [ ] Errors on individual files don't prevent cleanup of remaining files\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/core/paths.rs (add cleanup_old_logs function)\n- src/main.rs (call cleanup before subscriber init)\n\n## TDD Loop\nRED:\n - test_log_retention_cleanup: create tempdir with lore.2026-01-01.log through lore.2026-02-04.log, run with retention_days=7, assert old deleted, recent preserved\n - test_log_retention_ignores_non_log_files: create other.txt alongside old log files, assert other.txt untouched\n - test_log_retention_zero_days: retention_days=0, assert nothing deleted\nGREEN: Implement cleanup_old_logs\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- SystemTime::now() precision varies by OS; use file modified time, not name parsing (simpler and more reliable)\n- read_dir on non-existent directory: get_log_dir creates it first, so this shouldn't happen. But handle gracefully.\n- Permissions error on individual file: log a warning, continue with remaining files (don't propagate)\n- Race condition: another process creates a file during cleanup. 
Not a concern -- we only delete old files.","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T15:53:55.627901Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:15:04.452086Z","closed_at":"2026-02-04T17:15:04.452039Z","close_reason":"Implemented cleanup_old_logs() with date-pattern matching and retention_days config, runs at startup before subscriber init","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-32mc","depends_on_id":"bd-17n","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-32mc","depends_on_id":"bd-1k4","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-32mc","depends_on_id":"bd-2nx","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-32q","title":"Implement timeline seed phase: FTS5 keyword search to entity IDs","description":"## Background\n\nThe seed phase is steps 1-2 of the timeline pipeline (spec Section 3.2): SEED + HYDRATE. 
It converts a keyword query into entity IDs via FTS5 search and collects evidence note candidates.\n\n**Spec reference:** `docs/phase-b-temporal-intelligence.md` Section 3.2 steps 1-2.\n\n## Codebase Context\n\n- FTS5 index exists: documents_fts table (migration 008)\n- documents table: id, source_type ('issue'|'merge_request'|'discussion'), source_id, project_id, created_at, content\n- discussions table: id, issue_id, merge_request_id\n- notes table: discussion_id, author_username, body, created_at, is_system, id (note_id)\n- Safe FTS query builder: src/search/fts.rs has to_fts_query(raw, FtsQueryMode::Safe) for sanitizing user input\n- projects table: path_with_namespace\n- issues/merge_requests: iid, project_id\n\n## Approach\n\nCreate `src/core/timeline_seed.rs`:\n\n```rust\nuse crate::core::timeline::{EntityRef, TimelineEvent, TimelineEventType};\nuse rusqlite::Connection;\n\npub struct SeedResult {\n pub seed_entities: Vec<EntityRef>,\n pub evidence_notes: Vec<TimelineEvent>, // NoteEvidence events\n}\n\npub fn seed_timeline(\n conn: &Connection,\n query: &str,\n project_id: Option<i64>,\n since_ms: Option<i64>,\n max_seeds: usize, // default 50\n) -> Result<SeedResult> { ... 
}\n```\n\n### SQL for SEED + HYDRATE (entity discovery):\n```sql\nSELECT DISTINCT d.source_type, d.source_id, d.project_id,\n CASE d.source_type\n WHEN 'issue' THEN (SELECT iid FROM issues WHERE id = d.source_id)\n WHEN 'merge_request' THEN (SELECT iid FROM merge_requests WHERE id = d.source_id)\n WHEN 'discussion' THEN NULL -- discussions map to parent entity below\n END AS iid,\n CASE d.source_type\n WHEN 'issue' THEN (SELECT p.path_with_namespace FROM projects p JOIN issues i ON i.project_id = p.id WHERE i.id = d.source_id)\n WHEN 'merge_request' THEN (SELECT p.path_with_namespace FROM projects p JOIN merge_requests m ON m.project_id = p.id WHERE m.id = d.source_id)\n WHEN 'discussion' THEN NULL\n END AS project_path\nFROM documents_fts fts\nJOIN documents d ON d.id = fts.rowid\nWHERE documents_fts MATCH ?1\n AND (?2 IS NULL OR d.project_id = ?2)\nORDER BY rank\nLIMIT ?3\n```\n\nFor 'discussion' source_type: resolve to parent entity via discussions.issue_id or discussions.merge_request_id.\n\n### SQL for evidence notes (top 10 FTS5-matched notes):\n```sql\nSELECT n.id as note_id, n.body, n.created_at, n.author_username,\n disc.id as discussion_id,\n CASE WHEN disc.issue_id IS NOT NULL THEN 'issue' ELSE 'merge_request' END as parent_type,\n COALESCE(disc.issue_id, disc.merge_request_id) AS parent_entity_id\nFROM documents_fts fts\nJOIN documents d ON d.id = fts.rowid\nJOIN discussions disc ON disc.id = d.source_id AND d.source_type = 'discussion'\nJOIN notes n ON n.discussion_id = disc.id AND n.is_system = 0\nWHERE documents_fts MATCH ?1\nORDER BY rank\nLIMIT 10\n```\n\nEvidence notes become TimelineEvent with:\n- event_type: NoteEvidence { note_id, snippet (first 200 chars), discussion_id }\n- Use to_fts_query(query, FtsQueryMode::Safe) to sanitize user input before MATCH\n\nRegister in `src/core/mod.rs`: `pub mod timeline_seed;`\n\n## Acceptance Criteria\n\n- [ ] seed_timeline() returns entities from FTS5 search\n- [ ] Entities deduplicated (same entity from 
multiple docs appears once)\n- [ ] Discussion documents resolved to parent entity (issue or MR)\n- [ ] Evidence notes capped at 10\n- [ ] Evidence note snippets truncated to 200 chars (safe UTF-8 boundary)\n- [ ] Uses to_fts_query(query, FtsQueryMode::Safe) for input sanitization\n- [ ] --since filter works\n- [ ] -p filter works\n- [ ] Empty result for zero-match queries (not error)\n- [ ] Module registered in src/core/mod.rs\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n\n## Files\n\n- `src/core/timeline_seed.rs` (NEW)\n- `src/core/mod.rs` (add `pub mod timeline_seed;`)\n\n## TDD Loop\n\nRED:\n- `test_seed_deduplicates_entities`\n- `test_seed_resolves_discussion_to_parent`\n- `test_seed_empty_query_returns_empty`\n- `test_seed_evidence_capped_at_10`\n- `test_seed_evidence_snippet_truncated`\n- `test_seed_respects_since_filter`\n\nTests need in-memory DB with migrations 001-014 + documents/FTS test data.\n\nGREEN: Implement FTS5 queries and deduplication.\n\nVERIFY: `cargo test --lib -- timeline_seed`\n\n## Edge Cases\n\n- FTS5 MATCH invalid syntax: to_fts_query(query, FtsQueryMode::Safe) sanitizes\n- Discussion orphans: LEFT JOIN handles deleted notes\n- UTF-8 truncation: use char_indices() to find safe 200-char boundary\n- Discussion source resolving to both issue_id and merge_request_id: prefer issue_id (shouldn't happen but be defensive)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-02T21:33:08.615908Z","created_by":"tayloreernisse","updated_at":"2026-02-05T21:47:07.966488Z","closed_at":"2026-02-05T21:47:07.966437Z","close_reason":"Completed: Created src/core/timeline_seed.rs with seed_timeline() function. FTS5 search to entity IDs with discussion-to-parent resolution, entity deduplication, evidence note extraction (capped, snippet-truncated). 12 tests pass. 
All quality gates pass.","compaction_level":0,"original_size":0,"labels":["gate-3","phase-b","query"],"dependencies":[{"issue_id":"bd-32q","depends_on_id":"bd-20e","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-32q","depends_on_id":"bd-ike","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-335","title":"Implement Ollama API client","description":"## Background\nThe Ollama API client provides the HTTP interface to the local Ollama embedding server. It handles health checks (is Ollama running? does the model exist?), batch embedding requests (up to 32 texts per call), and error translation to LoreError variants. This is the lowest-level embedding component — the pipeline (bd-am7) builds on top of it.\n\n## Approach\nCreate \\`src/embedding/ollama.rs\\` per PRD Section 4.2. **Uses async reqwest (not blocking).**\n\n```rust\nuse reqwest::Client; // NOTE: async Client, not reqwest::blocking\nuse serde::{Deserialize, Serialize};\nuse crate::core::error::{LoreError, Result};\n\npub struct OllamaConfig {\n pub base_url: String, // default \\\"http://localhost:11434\\\"\n pub model: String, // default \\\"nomic-embed-text\\\"\n pub timeout_secs: u64, // default 60\n}\n\nimpl Default for OllamaConfig { /* PRD defaults */ }\n\npub struct OllamaClient {\n client: Client, // async reqwest::Client\n config: OllamaConfig,\n}\n\n#[derive(Serialize)]\nstruct EmbedRequest { model: String, input: Vec<String> }\n\n#[derive(Deserialize)]\nstruct EmbedResponse { model: String, embeddings: Vec<Vec<f32>> }\n\n#[derive(Deserialize)]\nstruct TagsResponse { models: Vec<ModelInfo> }\n\n#[derive(Deserialize)]\nstruct ModelInfo { name: String }\n\nimpl OllamaClient {\n pub fn new(config: OllamaConfig) -> Self;\n\n /// Async health check: GET /api/tags\n /// Model matched via starts_with (\\\"nomic-embed-text\\\" matches \\\"nomic-embed-text:latest\\\")\n pub async fn health_check(&self) -> Result<()>;\n\n /// Async batch embedding: POST 
/api/embed\n /// Input: Vec<String> of texts, Response: Vec<Vec<f32>> of 768-dim embeddings\n pub async fn embed_batch(&self, texts: Vec<String>) -> Result<Vec<Vec<f32>>>;\n}\n\n/// Quick health check without full client (async).\npub async fn check_ollama_health(base_url: &str) -> bool;\n```\n\n**Error mapping (per PRD):**\n- Connection refused/timeout -> LoreError::OllamaUnavailable { base_url, source: Some(e) }\n- Model not in /api/tags -> LoreError::OllamaModelNotFound { model }\n- Non-200 from /api/embed -> LoreError::EmbeddingFailed { document_id: 0, reason: format!(\\\"HTTP {}: {}\\\", status, body) }\n\n**Key PRD detail:** Model matching uses \\`starts_with\\` (not exact match) so \\\"nomic-embed-text\\\" matches \\\"nomic-embed-text:latest\\\".\n\n## Acceptance Criteria\n- [ ] Uses async reqwest::Client (not blocking)\n- [ ] health_check() is async, detects server availability and model presence\n- [ ] Model matched via starts_with (handles \\\":latest\\\" suffix)\n- [ ] embed_batch() is async, sends POST /api/embed\n- [ ] Batch size up to 32 texts\n- [ ] Returns Vec<Vec<f32>> with 768 dimensions each\n- [ ] OllamaUnavailable error includes base_url and source error\n- [ ] OllamaModelNotFound error includes model name\n- [ ] Non-200 response mapped to EmbeddingFailed with status + body\n- [ ] Timeout: 60 seconds default (configurable via OllamaConfig)\n- [ ] \\`cargo build\\` succeeds\n\n## Files\n- \\`src/embedding/ollama.rs\\` — new file\n- \\`src/embedding/mod.rs\\` — add \\`pub mod ollama;\\` and re-exports\n\n## TDD Loop\nRED: Tests (unit tests with mock, integration needs Ollama):\n- \\`test_config_defaults\\` — verify default base_url, model, timeout\n- \\`test_health_check_model_starts_with\\` — \\\"nomic-embed-text\\\" matches \\\"nomic-embed-text:latest\\\"\n- \\`test_embed_batch_parse\\` — mock response parsed correctly\n- \\`test_connection_error_maps_to_ollama_unavailable\\`\nGREEN: Implement OllamaClient\nVERIFY: \\`cargo test ollama\\`\n\n## Edge Cases\n- Ollama returns model name with 
version tag (\\\"nomic-embed-text:latest\\\"): starts_with handles this\n- Empty texts array: send empty batch, Ollama returns empty embeddings\n- Ollama returns wrong number of embeddings (2 texts, 1 embedding): caller (pipeline) validates\n- Non-JSON response: reqwest deserialization error -> wrap appropriately","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-30T15:26:34.025099Z","created_by":"tayloreernisse","updated_at":"2026-01-30T16:58:17.546852Z","closed_at":"2026-01-30T16:58:17.546794Z","close_reason":"Completed: OllamaClient with async health_check (starts_with model matching), embed_batch, error mapping to LoreError variants, check_ollama_health helper, 4 tests pass","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-335","depends_on_id":"bd-ljf","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} -{"id":"bd-343o","title":"Fetch and store GitLab linked issues (Related to)","description":"## Background\n\nGitLab's \"Linked items\" provides bidirectional issue linking distinct from \"closes\" and \"mentioned\" references. This data is only available via the issue links API (GET /projects/:id/issues/:iid/links). The goal is to fetch these links during sync and store them as entity_references so they appear in `lore show issue` and are queryable.\n\n**Why:** Currently `lore show issue` displays closing MRs (via `get_closing_mrs()` in show.rs:~line 1544) but has NO related issues section. This bead adds that capability.\n\n## Codebase Context\n\n- **entity_references table** (migration 011): reference_type CHECK: 'closes' | 'mentioned' | 'related'; source_method CHECK: 'api' | 'note_parse' | 'description_parse'\n- **pending_dependent_fetches** (migration 011): job_type CHECK: 'resource_events' | 'mr_closes_issues' | 'mr_diffs'. 
No later migrations modified this table.\n- **CRITICAL:** Adding 'issue_links' to job_type CHECK requires recreating pending_dependent_fetches table (SQLite can't ALTER CHECK constraints). Migration **027** must copy data, drop, recreate with expanded CHECK, and reinsert.\n- **Orchestrator** (src/ingestion/orchestrator.rs, 1745 lines): Three drain functions exist — drain_resource_events() (line 932), drain_mr_closes_issues() (line 1254), drain_mr_diffs() (line 1514). Follow the same claim/complete/fail pattern from dependent_queue.rs.\n- **dependent_queue.rs**: enqueue_job(), claim_jobs(), complete_job(), fail_job() with exponential backoff\n- **show.rs** (1544 lines): Has get_closing_mrs() for closing MR display. NO related_issues section exists yet.\n- **GitLab API**: GET /projects/:id/issues/:iid/links returns link_type: \"relates_to\", \"blocks\", \"is_blocked_by\"\n- **Migration count**: 26 migrations exist (001-026). Next migration = **027**.\n\n## Approach\n\n### Phase 1: API Client (src/gitlab/client.rs)\n```rust\npub async fn fetch_issue_links(\n &self,\n project_id: i64,\n issue_iid: i64,\n) -> Result<Vec<GitLabIssueLink>> {\n // GET /projects/:id/issues/:iid/links\n // Use fetch_all_pages() + coalesce_not_found()\n}\n```\n\n### Phase 2: Types (src/gitlab/types.rs)\n```rust\n#[derive(Debug, Deserialize)]\npub struct GitLabIssueLink {\n pub id: i64,\n pub iid: i64,\n pub title: String,\n pub state: String,\n pub web_url: String,\n pub link_type: String, // \"relates_to\", \"blocks\", \"is_blocked_by\"\n pub link_created_at: Option<String>,\n}\n```\n\n### Phase 3: Migration 027 (migrations/027_issue_links_job_type.sql)\nRecreate pending_dependent_fetches with expanded CHECK:\n```sql\nCREATE TABLE pending_dependent_fetches_new (\n id INTEGER PRIMARY KEY,\n project_id INTEGER NOT NULL REFERENCES projects(id) ON DELETE CASCADE,\n entity_type TEXT NOT NULL CHECK (entity_type IN ('issue', 'merge_request')),\n entity_iid INTEGER NOT NULL,\n entity_local_id INTEGER NOT NULL,\n job_type TEXT 
NOT NULL CHECK (job_type IN (\n 'resource_events', 'mr_closes_issues', 'mr_diffs', 'issue_links'\n )),\n payload_json TEXT,\n enqueued_at INTEGER NOT NULL,\n attempts INTEGER NOT NULL DEFAULT 0,\n last_error TEXT,\n next_retry_at INTEGER,\n locked_at INTEGER,\n UNIQUE(project_id, entity_type, entity_iid, job_type)\n);\nINSERT INTO pending_dependent_fetches_new SELECT * FROM pending_dependent_fetches;\nDROP TABLE pending_dependent_fetches;\nALTER TABLE pending_dependent_fetches_new RENAME TO pending_dependent_fetches;\n-- Recreate indexes from migration 011 (idx_pdf_job_type, idx_pdf_next_retry)\n```\n\nRegister in MIGRATIONS array in src/core/db.rs (entry 27).\n\n### Phase 4: Ingestion (src/ingestion/issue_links.rs NEW)\n```rust\npub async fn fetch_and_store_issue_links(\n conn: &Connection,\n client: &GitLabClient,\n project_id: i64,\n issue_local_id: i64,\n issue_iid: i64,\n) -> Result<usize> {\n // 1. Fetch links from API\n // 2. Resolve target issue to local DB id (SELECT id FROM issues WHERE project_id=? AND iid=?)\n // 3. Insert into entity_references: reference_type='related', source_method='api'\n // 4. Create bidirectional refs: A->B and B->A\n // 5. Skip self-links\n // 6. 
Cross-project: store with target_entity_id=NULL (unresolved)\n}\n```\n\n### Phase 5: Queue Integration (src/ingestion/orchestrator.rs)\n- Enqueue 'issue_links' job after issue ingestion (near the existing resource_events enqueue)\n- Add drain_issue_links() following drain_mr_closes_issues() pattern (lines 1254-1512)\n- Config gate: add `sync.fetchIssueLinks` (default true) to config, like existing `sync.fetchResourceEvents`\n\n### Phase 6: Display (src/cli/commands/show.rs)\nIn `lore show issue 123`, add \"Related Issues\" section after closing MRs.\nPattern: query entity_references WHERE source_entity_type='issue' AND source_entity_id=<id> AND reference_type='related'.\n\n## Acceptance Criteria\n\n- [ ] API client fetches issue links with pagination (fetch_all_pages + coalesce_not_found)\n- [ ] Stored as entity_reference: reference_type='related', source_method='api'\n- [ ] Bidirectional: A links B creates both A->B and B->A references\n- [ ] link_type captured (relates_to, blocks, is_blocked_by) — stored as 'related' for now\n- [ ] Cross-project links stored as unresolved (target_entity_id NULL)\n- [ ] Self-links skipped\n- [ ] Migration **027** recreates pending_dependent_fetches with 'issue_links' in CHECK\n- [ ] Migration registered in MIGRATIONS array in src/core/db.rs\n- [ ] `lore show issue 123` shows related issues section\n- [ ] `lore --robot show issue 123` includes related_issues in JSON\n- [ ] Config gate: sync.fetchIssueLinks (default true, camelCase serde rename)\n- [ ] `cargo check --all-targets` passes\n- [ ] `cargo clippy --all-targets -- -D warnings` passes\n- [ ] `cargo fmt --check` passes\n\n## Files\n\n- MODIFY: src/gitlab/client.rs (add fetch_issue_links)\n- MODIFY: src/gitlab/types.rs (add GitLabIssueLink)\n- CREATE: src/ingestion/issue_links.rs\n- MODIFY: src/ingestion/mod.rs (add pub mod issue_links)\n- MODIFY: src/ingestion/orchestrator.rs (enqueue + drain_issue_links)\n- CREATE: migrations/027_issue_links_job_type.sql\n- MODIFY: 
src/core/db.rs (add migration 027 to MIGRATIONS array)\n- MODIFY: src/core/config.rs (add sync.fetchIssueLinks)\n- MODIFY: src/cli/commands/show.rs (display related issues)\n\n## TDD Anchor\n\nRED:\n- test_issue_link_deserialization (types.rs: deserialize GitLabIssueLink from JSON)\n- test_store_issue_links_creates_bidirectional_references (in-memory DB, insert 2 issues, store link, verify 2 rows in entity_references)\n- test_self_link_skipped (same issue_iid both sides, verify 0 rows)\n- test_cross_project_link_unresolved (target not in DB, verify target_entity_id IS NULL)\n\nGREEN: Implement API client, ingestion, migration, display.\n\nVERIFY: cargo test --lib -- issue_links\n\n## Edge Cases\n\n- Cross-project links: target not in local DB -> unresolved reference (target_entity_id NULL)\n- Self-links: skip entirely\n- UNIQUE constraint on entity_references prevents duplicate refs on re-sync\n- \"blocks\"/\"is_blocked_by\" semantics not modeled in entity_references yet — store as 'related'\n- Table recreation migration: safe because pending_dependent_fetches is transient queue data that gets re-enqueued on next sync\n- Recreated table must restore indexes: idx_pdf_job_type, idx_pdf_next_retry (check migration 011 for exact definitions)\n\n## Dependency Context\n\n- **entity_references** (migration 011): provides the target table. 
reference_type='related' already in CHECK.\n- **dependent_queue.rs**: provides enqueue_job/claim_jobs/complete_job/fail_job lifecycle used by drain_issue_links()\n- **orchestrator drain pattern**: drain_mr_closes_issues() (line 1254) is the closest template — fetch API data, insert entity_references, complete job","status":"open","priority":2,"issue_type":"feature","created_at":"2026-02-05T15:14:25.202900Z","created_by":"tayloreernisse","updated_at":"2026-02-17T16:50:44.934373Z","compaction_level":0,"original_size":0,"labels":["ISSUE"]} +{"id":"bd-343o","title":"Fetch and store GitLab linked issues (Related to)","description":"-","notes":"ADDENDUM (2026-03-13): Integrate with `lore related` command. Explicit GitLab issue links should be surfaced as high-confidence results in `related` output, ranked above semantic-only matches. The related command should blend: (1) explicit links from entity_references where reference_type='related' and source_method='api', (2) structural links (closes/mentioned), (3) semantic similarity (existing behavior). New acceptance criteria: `lore related issues 42` surfaces explicitly-linked issues first with a distinct marker (e.g. 'linked' vs 'semantic'). `lore --robot related issues 42` includes a `link_type` field distinguishing explicit from semantic matches.","status":"open","priority":2,"issue_type":"feature","created_at":"2026-02-05T15:14:25.202900Z","created_by":"tayloreernisse","updated_at":"2026-03-13T19:24:50.210717Z","compaction_level":0,"original_size":0,"labels":["ISSUE"]} {"id":"bd-34ek","title":"OBSERV: Implement MetricsLayer custom tracing subscriber layer","description":"## Background\nMetricsLayer is a custom tracing subscriber layer that records span timing and structured fields, then materializes them into Vec<StageTiming>. 
This avoids threading a mutable collector through every function signature -- spans are the single source of truth.\n\n## Approach\nAdd to src/core/metrics.rs (same file as StageTiming):\n\n```rust\nuse std::collections::HashMap;\nuse std::sync::{Arc, Mutex};\nuse std::time::Instant;\nuse tracing::span::{Attributes, Id, Record};\nuse tracing::Subscriber;\nuse tracing_subscriber::layer::{Context, Layer};\nuse tracing_subscriber::registry::LookupSpan;\n\n#[derive(Debug)]\nstruct SpanData {\n name: String,\n parent_id: Option<Id>,\n start: Instant,\n fields: HashMap<String, serde_json::Value>, // captured via FieldVisitor\n}\n\n#[derive(Debug, Clone)]\npub struct MetricsLayer {\n spans: Arc<Mutex<HashMap<u64, SpanData>>>,\n completed: Arc<Mutex<Vec<(u64, StageTiming)>>>,\n}\n\nimpl MetricsLayer {\n pub fn new() -> Self {\n Self {\n spans: Arc::new(Mutex::new(HashMap::new())),\n completed: Arc::new(Mutex::new(Vec::new())),\n }\n }\n\n /// Extract timing tree for a completed run.\n /// Call this after the root span closes.\n pub fn extract_timings(&self) -> Vec<StageTiming> {\n let completed = self.completed.lock().unwrap();\n // Build tree: find root entries (no parent), attach children\n // ... 
tree construction logic\n }\n}\n\nimpl<S> Layer<S> for MetricsLayer\nwhere\n S: Subscriber + for<'a> LookupSpan<'a>,\n{\n fn on_new_span(&self, attrs: &Attributes<'_>, id: &Id, ctx: Context<'_, S>) {\n let parent_id = ctx.span(id).and_then(|s| s.parent().map(|p| p.id()));\n let mut fields = HashMap::new();\n // Visit attrs to capture initial field values\n let mut visitor = FieldVisitor(&mut fields);\n attrs.record(&mut visitor);\n\n self.spans.lock().unwrap().insert(id.into_u64(), SpanData {\n name: attrs.metadata().name().to_string(),\n parent_id,\n start: Instant::now(),\n fields,\n });\n }\n\n fn on_record(&self, id: &Id, values: &Record<'_>, _ctx: Context<'_, S>) {\n // Capture recorded fields (items_processed, items_skipped, errors)\n if let Some(data) = self.spans.lock().unwrap().get_mut(&id.into_u64()) {\n let mut visitor = FieldVisitor(&mut data.fields);\n values.record(&mut visitor);\n }\n }\n\n fn on_close(&self, id: Id, _ctx: Context<'_, S>) {\n if let Some(data) = self.spans.lock().unwrap().remove(&id.into_u64()) {\n let elapsed = data.start.elapsed();\n let timing = StageTiming {\n name: data.name,\n project: data.fields.get(\"project\").and_then(|v| v.as_str()).map(String::from),\n elapsed_ms: elapsed.as_millis() as u64,\n items_processed: data.fields.get(\"items_processed\").and_then(|v| v.as_u64()).unwrap_or(0) as usize,\n items_skipped: data.fields.get(\"items_skipped\").and_then(|v| v.as_u64()).unwrap_or(0) as usize,\n errors: data.fields.get(\"errors\").and_then(|v| v.as_u64()).unwrap_or(0) as usize,\n sub_stages: vec![], // Will be populated during extract_timings tree construction\n };\n self.completed.lock().unwrap().push((id.into_u64(), timing));\n }\n }\n}\n```\n\nNeed a FieldVisitor struct implementing tracing::field::Visit to capture field values.\n\nRegister in subscriber stack (src/main.rs), alongside stderr and file layers:\n```rust\nlet metrics_layer = MetricsLayer::new();\nlet metrics_handle = metrics_layer.clone(); // Clone Arc for later 
extraction\n\nregistry()\n .with(stderr_layer.with_filter(stderr_filter))\n .with(file_layer.with_filter(file_filter))\n .with(metrics_layer) // No filter -- captures all spans\n .init();\n```\n\nPass metrics_handle to command handlers so they can call extract_timings() after the pipeline completes.\n\n## Acceptance Criteria\n- [ ] MetricsLayer captures span enter/close timing\n- [ ] on_record captures items_processed, items_skipped, errors fields\n- [ ] extract_timings() returns correctly nested Vec<StageTiming> tree\n- [ ] Parallel spans (multiple projects) both appear as sub_stages of parent\n- [ ] Thread-safe: Arc<Mutex<...>> allows concurrent span operations\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- src/core/metrics.rs (add MetricsLayer, FieldVisitor, tree construction)\n- src/main.rs (register MetricsLayer in subscriber stack)\n\n## TDD Loop\nRED:\n - test_metrics_layer_single_span: enter/exit one span, extract, assert one StageTiming\n - test_metrics_layer_nested_spans: parent + child, assert child in parent.sub_stages\n - test_metrics_layer_parallel_spans: two sibling spans, assert both in parent.sub_stages\n - test_metrics_layer_field_recording: record items_processed=42, assert captured\nGREEN: Implement MetricsLayer with on_new_span, on_record, on_close, extract_timings\nVERIFY: cargo test && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- Span ID reuse: tracing may reuse span IDs after close. Using remove on close prevents stale data.\n- Lock contention: Mutex per operation. For high-span-count scenarios, consider parking_lot::Mutex. But lore's span count is low (<100 per run), so std::sync::Mutex is fine.\n- extract_timings tree construction: iterate completed Vec, build parent->children map, then recursively construct StageTiming tree. Root entries have parent_id matching the root span or None.\n- MetricsLayer has no filter: it sees ALL spans. 
To avoid noise from dependency spans, check if span name starts with known stage names, or rely on the \"stage\" field being present.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T15:54:31.960669Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:25:25.523811Z","closed_at":"2026-02-04T17:25:25.523730Z","close_reason":"Implemented MetricsLayer custom tracing subscriber layer with span timing capture, rate-limit/retry event detection, tree extraction, and 12 unit tests","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-34ek","depends_on_id":"bd-1o4h","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-34ek","depends_on_id":"bd-24j1","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-34ek","depends_on_id":"bd-3er","type":"parent-child","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-34o","title":"Implement MR transformer","description":"## Background\nTransforms GitLab MR API responses into normalized schema for database storage. 
Handles deprecated field fallbacks and extracts metadata (labels, assignees, reviewers).\n\n## Approach\nCreate new transformer module following existing issue transformer pattern:\n- `NormalizedMergeRequest` - Database-ready struct\n- `MergeRequestWithMetadata` - MR + extracted labels/assignees/reviewers\n- `transform_merge_request()` - Main transformation function\n- `extract_labels()` - Label extraction helper\n\n## Files\n- `src/gitlab/transformers/merge_request.rs` - New transformer module\n- `src/gitlab/transformers/mod.rs` - Export new module\n- `tests/mr_transformer_tests.rs` - Unit tests\n\n## Acceptance Criteria\n- [ ] `NormalizedMergeRequest` struct exists with all DB columns\n- [ ] `MergeRequestWithMetadata` contains MR + label_names + assignee_usernames + reviewer_usernames\n- [ ] `transform_merge_request()` returns `Result<MergeRequestWithMetadata>`\n- [ ] `draft` computed as `gitlab_mr.draft || gitlab_mr.work_in_progress`\n- [ ] `detailed_merge_status` prefers `detailed_merge_status` over `merge_status_legacy`\n- [ ] `merge_user_username` prefers `merge_user` over `merged_by`\n- [ ] `head_sha` extracted from `sha` field\n- [ ] `references_short` and `references_full` extracted from `references` Option\n- [ ] Timestamps parsed with `iso_to_ms()`, errors returned (not zeroed)\n- [ ] `last_seen_at` set to `now_ms()`\n- [ ] `cargo test mr_transformer` passes\n\n## TDD Loop\nRED: `cargo test mr_transformer` -> module not found\nGREEN: Add transformer with all fields\nVERIFY: `cargo test mr_transformer`\n\n## Struct Definitions\n```rust\n#[derive(Debug, Clone)]\npub struct NormalizedMergeRequest {\n pub gitlab_id: i64,\n pub project_id: i64,\n pub iid: i64,\n pub title: String,\n pub description: Option<String>,\n pub state: String,\n pub draft: bool,\n pub author_username: String,\n pub source_branch: String,\n pub target_branch: String,\n pub head_sha: Option<String>,\n pub references_short: Option<String>,\n pub references_full: Option<String>,\n pub detailed_merge_status: Option<String>,\n pub merge_user_username: 
Option,\n pub created_at: i64,\n pub updated_at: i64,\n pub merged_at: Option,\n pub closed_at: Option,\n pub last_seen_at: i64,\n pub web_url: String,\n}\n\n#[derive(Debug, Clone)]\npub struct MergeRequestWithMetadata {\n pub merge_request: NormalizedMergeRequest,\n pub label_names: Vec,\n pub assignee_usernames: Vec,\n pub reviewer_usernames: Vec,\n}\n```\n\n## Function Signature\n```rust\npub fn transform_merge_request(\n gitlab_mr: &GitLabMergeRequest,\n local_project_id: i64,\n) -> Result\n```\n\n## Key Logic\n```rust\n// Draft: prefer draft, fallback to work_in_progress\nlet is_draft = gitlab_mr.draft || gitlab_mr.work_in_progress;\n\n// Merge status: prefer detailed_merge_status\nlet detailed_merge_status = gitlab_mr.detailed_merge_status\n .clone()\n .or_else(|| gitlab_mr.merge_status_legacy.clone());\n\n// Merge user: prefer merge_user\nlet merge_user_username = gitlab_mr.merge_user\n .as_ref()\n .map(|u| u.username.clone())\n .or_else(|| gitlab_mr.merged_by.as_ref().map(|u| u.username.clone()));\n\n// References extraction\nlet (references_short, references_full) = gitlab_mr.references\n .as_ref()\n .map(|r| (Some(r.short.clone()), Some(r.full.clone())))\n .unwrap_or((None, None));\n\n// Head SHA\nlet head_sha = gitlab_mr.sha.clone();\n```\n\n## Edge Cases\n- Invalid timestamps should return `Err`, not zero values\n- Empty labels/assignees/reviewers should return empty Vecs, not None\n- `state` must pass through as-is (including 
\"locked\")","status":"closed","priority":2,"issue_type":"task","created_at":"2026-01-26T22:06:40.849049Z","created_by":"tayloreernisse","updated_at":"2026-01-27T00:11:48.501301Z","closed_at":"2026-01-27T00:11:48.501241Z","close_reason":"done","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-34o","depends_on_id":"bd-3ir","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"},{"issue_id":"bd-34o","depends_on_id":"bd-5ta","type":"blocks","created_at":"2026-03-04T20:00:21Z","created_by":"import"}]} {"id":"bd-34rr","title":"WHO: Migration 017 — composite indexes for query paths","description":"## Background\n\nWith 280K notes, the path/timestamp queries for lore who will degrade without composite indexes. Existing indexes cover note_type and position_new_path separately (migration 006) but not as composites aligned to the who query patterns. This is a non-breaking, additive-only migration.\n\n## Approach\n\nAdd as entry 17 (index 16) in the MIGRATIONS array in src/core/db.rs. 
LATEST_SCHEMA_VERSION auto-updates via MIGRATIONS.len() as i32.\n\n### Exact SQL for the migration entry:\n\n```sql\n-- Migration 017: Composite indexes for who query paths\n\n-- Expert/Overlap: DiffNote path prefix + timestamp filter.\n-- Leading with position_new_path (not note_type) because the partial index\n-- predicate already handles the constant filter.\nCREATE INDEX IF NOT EXISTS idx_notes_diffnote_path_created\n ON notes(position_new_path, created_at, project_id)\n WHERE note_type = 'DiffNote' AND is_system = 0;\n\n-- Active/Workload: discussion participation lookups.\nCREATE INDEX IF NOT EXISTS idx_notes_discussion_author\n ON notes(discussion_id, author_username)\n WHERE is_system = 0;\n\n-- Active (project-scoped): unresolved discussions by recency.\nCREATE INDEX IF NOT EXISTS idx_discussions_unresolved_recent\n ON discussions(project_id, last_note_at)\n WHERE resolvable = 1 AND resolved = 0;\n\n-- Active (global): unresolved discussions by recency (no project scope).\n-- Without this, (project_id, last_note_at) can't satisfy ORDER BY last_note_at DESC\n-- efficiently when project_id is unconstrained.\nCREATE INDEX IF NOT EXISTS idx_discussions_unresolved_recent_global\n ON discussions(last_note_at)\n WHERE resolvable = 1 AND resolved = 0;\n\n-- Workload: issue assignees by username.\nCREATE INDEX IF NOT EXISTS idx_issue_assignees_username\n ON issue_assignees(username, issue_id);\n```\n\n### Not added (already adequate):\n- merge_requests(author_username) — idx_mrs_author (migration 006)\n- mr_reviewers(username) — idx_mr_reviewers_username (migration 006)\n- notes(discussion_id) — idx_notes_discussion (migration 002)\n\n## Files\n\n- `src/core/db.rs` — append to MIGRATIONS array as entry index 16\n\n## TDD Loop\n\nRED: `cargo test -- test_migration` (existing migration tests should still pass)\nGREEN: Add the migration SQL string to the array\nVERIFY: `cargo test && cargo check --all-targets`\n\n## Acceptance Criteria\n\n- [ ] MIGRATIONS array has 17 
entries (index 0-16)\n- [ ] LATEST_SCHEMA_VERSION is 17\n- [ ] cargo test passes (in-memory DB runs all migrations including 017)\n- [ ] No existing index names conflict\n\n## Edge Cases\n\n- The SQL uses CREATE INDEX IF NOT EXISTS — safe for idempotent reruns\n- Partial indexes (WHERE clause) keep index size small: ~33K of 280K notes for DiffNote index","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-08T02:39:49.397860Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.593561Z","closed_at":"2026-02-08T04:10:29.593519Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. All quality gates pass.","compaction_level":0,"original_size":0} @@ -317,7 +317,7 @@ {"id":"bd-e48d","title":"Render activity feed with event badges in human mode","description":"## Background\nHuman-mode activity output needs fast scanning: event badge, actor, entity reference, summary, time, optional preview, and optional project path. Badge color semantics must match AC-6.4 while covering all event types emitted by me activity queries (`note`, `status`, `label`, `milestone`, `assign`, `unassign`, `review_request`).\n\n## Approach\nImplement activity rendering in `src/cli/commands/me/render_human.rs`.\n\n### 1. Badge renderer\nAdd helper:\n```rust\nfn render_event_badge(event_type: &str) -> String\n```\n\nMapping:\n- `note` -> cyan\n- `status` -> amber\n- `label` -> purple\n- `assign` / `unassign` / `review_request` -> green\n- `milestone` -> magenta\n- fallback -> dim bracket label\n\nMode behavior:\n- Color-capable mode: background pill style (fg/bg contrast).\n- ASCII/no-color mode: bracketed text label using foreground color only (`[note]`, `[status]`, etc).\n\n### 2. 
Row renderer\nFor each activity item:\n- line 1: badge + `@actor` + entity ref (`#iid`/`!iid`) + summary + optional `(you)` + relative timestamp\n- line 2: quoted `body_preview` (only for `note`, newline-collapsed)\n- line 3: `project_path` dimmed unless single-project scope\n\n### 3. Own-action styling\nIf `is_own`:\n- append `(you)`\n- dim the full first line to reduce visual priority\n\n## Acceptance Criteria\n- [ ] Badge colors: note=cyan, status=amber, label=purple, assign/unassign/review_request=green, milestone=magenta\n- [ ] ASCII fallback renders bracket labels with colored foreground text\n- [ ] `@actor` rendered with username color\n- [ ] Issue refs (`#iid`) and MR refs (`!iid`) use existing ref palette\n- [ ] Own actions include `(you)` and dimmed emphasis\n- [ ] `body_preview` only appears for `note` events\n- [ ] `body_preview` newlines are replaced with spaces\n- [ ] Project path line suppressed for single-project scope\n- [ ] Section header is `Activity (N)` via `section_divider`\n- [ ] Empty state renders `No recent activity`\n\n## Files\n- MODIFY: `src/cli/commands/me/render_human.rs`\n\n## TDD Anchor\nRED:\n- `test_event_badge_color_mapping_includes_unassign`\n- `test_activity_own_action_includes_you_suffix`\n- `test_activity_preview_only_for_note`\n- `test_activity_project_path_suppressed_single_project`\n- `test_activity_ascii_badges`\n\nGREEN:\n- Implement badge + row rendering with event coverage parity.\n\nVERIFY:\n- `cargo test me_render_activity`\n\n## Edge Cases\n- Unknown event types should still render safely with a neutral badge.\n- `actor` may be empty/system fallback; renderer must not panic.\n- `review_request` labels are long; spacing should remain readable without fixed-width assumptions.\n\n## Dependency Context\nConsumes `MeActivityItem` from `bd-3bwh` and event types produced by `bd-b3r3` + `bd-2nl3`.\nIntegrated by `bd-1vv8` handler.\n\nDependencies:\n -> bd-1vxq (blocks) - Render summary header and attention 
legend\n\nDependents:\n <- bd-1vv8 (blocks) - Implement me command handler: wire queries to renderers","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-19T19:39:46.381793Z","created_by":"tayloreernisse","updated_at":"2026-02-20T16:09:13.062218Z","closed_at":"2026-02-20T16:09:13.062179Z","close_reason":"Implemented by lore-me agent swarm","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-e48d","depends_on_id":"bd-1vxq","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-ef0u","title":"NOTE-2B: SourceType enum extension for notes","description":"## Background\nThe SourceType enum in src/documents/extractor.rs (line 15-19) needs a Note variant for the document pipeline to handle note-type documents.\n\n## Approach\nIn src/documents/extractor.rs:\n1. Add Note variant to SourceType enum (line 15-19, after Discussion):\n pub enum SourceType { Issue, MergeRequest, Discussion, Note }\n\n2. Add match arm to as_str() (line 22-28): Self::Note => \"note\"\n\n3. Add parse aliases (line 30-37): \"note\" | \"notes\" => Some(Self::Note)\n\n4. Display impl (line 40-43) already delegates to as_str() — no change needed.\n\n5. IMPORTANT: Also update seed_dirty() in src/cli/commands/generate_docs.rs (line 66-70) which has a match on SourceType that maps to table names. SourceType::Note should NOT be added to this match — notes are seeded differently (by querying the notes table, not by table name pattern). 
This is handled by NOTE-2E.\n\n## Files\n- MODIFY: src/documents/extractor.rs (SourceType enum at line 15, as_str at line 22, parse at line 30)\n\n## TDD Anchor\nRED: test_source_type_parse_note — assert SourceType::parse(\"note\") == Some(SourceType::Note)\nGREEN: Add Note variant and match arms.\nVERIFY: cargo test source_type_parse_note -- --nocapture\nTests: test_source_type_note_as_str (assert as_str() == \"note\"), test_source_type_note_display (assert format!(\"{}\", SourceType::Note) == \"note\"), test_source_type_parse_notes_alias (assert parse(\"notes\") works)\n\n## Acceptance Criteria\n- [ ] SourceType::Note variant exists\n- [ ] as_str() returns \"note\"\n- [ ] parse() accepts \"note\", \"notes\" (case-insensitive via to_lowercase)\n- [ ] Display trait works via as_str delegation\n- [ ] No change to seed_dirty() match — that's a separate bead (NOTE-2E)\n- [ ] All 4 tests pass, clippy clean\n- [ ] CRITICAL: regenerate_one() in src/documents/regenerator.rs (line 86-91) has exhaustive match on SourceType — adding Note variant will cause a compile error until NOTE-2D adds the match arm. Either add a temporary todo!() or coordinate with NOTE-2D.\n\n## Dependency Context\n- Depends on NOTE-2A (bd-1oi7): migration 024 must exist so test DBs accept source_type='note' in documents/dirty_sources tables\n\n## Edge Cases\n- Exhaustive match: Adding the variant breaks regenerate_one() (line 86-91) and seed_dirty() (line 66-70) until downstream beads handle it. 
Agent should add temporary unreachable!() arms with comments referencing the downstream bead IDs.","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T17:01:45.555568Z","created_by":"tayloreernisse","updated_at":"2026-02-12T18:13:24.004157Z","closed_at":"2026-02-12T18:13:24.004106Z","close_reason":"Implemented by agent swarm","compaction_level":0,"original_size":0,"labels":["per-note","search"],"dependencies":[{"issue_id":"bd-ef0u","depends_on_id":"bd-18yh","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-ef0u","depends_on_id":"bd-2ezb","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} {"id":"bd-epj","title":"[CP0] Config loading with Zod validation","description":"## Background\n\nConfig loading is critical infrastructure - every CLI command needs the config. Uses Zod for schema validation with sensible defaults. Must handle missing files gracefully with typed errors.\n\nReference: docs/prd/checkpoint-0.md sections \"Configuration Schema\", \"Config Resolution Order\"\n\n## Approach\n\n**src/core/config.ts:**\n```typescript\nimport { z } from 'zod';\nimport { readFileSync } from 'node:fs';\nimport { ConfigNotFoundError, ConfigValidationError } from './errors';\nimport { getConfigPath } from './paths';\n\nexport const ConfigSchema = z.object({\n gitlab: z.object({\n baseUrl: z.string().url(),\n tokenEnvVar: z.string().default('GITLAB_TOKEN'),\n }),\n projects: z.array(z.object({\n path: z.string().min(1),\n })).min(1),\n sync: z.object({\n backfillDays: z.number().int().positive().default(14),\n staleLockMinutes: z.number().int().positive().default(10),\n heartbeatIntervalSeconds: z.number().int().positive().default(30),\n cursorRewindSeconds: z.number().int().nonnegative().default(2),\n primaryConcurrency: z.number().int().positive().default(4),\n dependentConcurrency: z.number().int().positive().default(2),\n }).default({}),\n storage: z.object({\n dbPath: 
z.string().optional(),\n backupDir: z.string().optional(),\n compressRawPayloads: z.boolean().default(true),\n }).default({}),\n embedding: z.object({\n provider: z.literal('ollama').default('ollama'),\n model: z.string().default('nomic-embed-text'),\n baseUrl: z.string().url().default('http://localhost:11434'),\n concurrency: z.number().int().positive().default(4),\n }).default({}),\n});\n\nexport type Config = z.infer<typeof ConfigSchema>;\n\nexport function loadConfig(cliOverride?: string): Config {\n const path = getConfigPath(cliOverride);\n // throws ConfigNotFoundError if missing\n // throws ConfigValidationError if invalid\n}\n```\n\n## Acceptance Criteria\n\n- [ ] `loadConfig()` returns validated Config object\n- [ ] `loadConfig()` throws ConfigNotFoundError if file missing\n- [ ] `loadConfig()` throws ConfigValidationError with Zod errors if invalid\n- [ ] Empty optional fields get default values\n- [ ] projects array must have at least 1 item\n- [ ] gitlab.baseUrl must be valid URL\n- [ ] All number fields must be positive integers\n- [ ] tests/unit/config.test.ts passes (8 tests)\n\n## Files\n\nCREATE:\n- src/core/config.ts\n- tests/unit/config.test.ts\n- tests/fixtures/mock-responses/valid-config.json\n- tests/fixtures/mock-responses/invalid-config.json\n\n## TDD Loop\n\nRED:\n```typescript\n// tests/unit/config.test.ts\ndescribe('Config', () => {\n it('loads config from file path')\n it('throws ConfigNotFoundError if file missing')\n it('throws ConfigValidationError if required fields missing')\n it('validates project paths are non-empty strings')\n it('applies default values for optional fields')\n it('loads from XDG path by default')\n it('respects GI_CONFIG_PATH override')\n it('respects --config flag override')\n})\n```\n\nGREEN: Implement loadConfig() function\n\nVERIFY: `npm run test -- tests/unit/config.test.ts`\n\n## Edge Cases\n\n- JSON parse error should wrap in ConfigValidationError\n- Zod error messages should be human-readable\n- File exists but empty → 
ConfigValidationError\n- File has extra fields → should pass (Zod strips by default)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-01-24T16:09:49.091078Z","created_by":"tayloreernisse","updated_at":"2026-01-25T03:04:32.592139Z","closed_at":"2026-01-25T03:04:32.592003Z","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-epj","depends_on_id":"bd-gg1","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]} -{"id":"bd-flwo","title":"Interactive path selection for ambiguous matches (TTY picker)","description":"When a partial file path matches multiple files, show an interactive numbered picker in TTY mode instead of a hard error. In robot mode, return candidates as structured JSON in the error envelope. Use dialoguer crate for selection UI. The path_resolver module already detects ambiguity via SuffixResult::Ambiguous and limits to 11 candidates.","status":"open","priority":3,"issue_type":"feature","created_at":"2026-02-13T16:31:50.005222Z","created_by":"tayloreernisse","updated_at":"2026-02-13T16:31:50.007520Z","compaction_level":0,"original_size":0,"labels":["cli-ux","gate-4"]} +{"id":"bd-flwo","title":"Interactive path selection for ambiguous matches (TTY picker)","description":"When a partial file path matches multiple files, show an interactive numbered picker in TTY mode instead of a hard error. In robot mode, return candidates as structured JSON in the error envelope. Use dialoguer crate for selection UI. The path_resolver module already detects ambiguity via SuffixResult::Ambiguous and limits to 11 candidates.\n\n## Current Behavior\n\n`build_path_query()` in `src/core/path_resolver.rs` returns `Err(LoreError::Ambiguous(String))` when suffix probe finds 2+ distinct paths (after `try_resolve_rename_ambiguity` also fails). The error message is a formatted string with candidates listed as indented lines. 
All callers propagate this as a hard error — users must re-run with the full path or `-p` flag.\n\n## Desired Behavior\n\n### TTY Mode (Interactive)\nWhen stdout is a TTY and robot mode is OFF:\n1. Intercept `LoreError::AmbiguousPath` in the handler before it propagates\n2. Show a `dialoguer::Select` picker with the candidate paths\n3. Retry with the selected full path and continue normally\n\nExample UX:\n```\nMultiple files match 'utils.rs':\n> src/auth/utils.rs\n src/api/utils.rs\n lib/shared/utils.rs\n```\n\n### Robot Mode (JSON)\nWhen robot mode is ON:\n1. Return structured candidates in the error JSON envelope (stderr)\n2. The error code remains `AMBIGUOUS` (exit 18)\n3. Add a `candidates` array to the error object\n\nExample robot output (stderr):\n```json\n{\n \"error\": {\n \"code\": \"AMBIGUOUS\",\n \"message\": \"'utils.rs' matches multiple paths. Use the full path or -p to scope.\",\n \"suggestion\": \"Pass the full path, or use -p to scope to a specific project.\",\n \"candidates\": [\n \"src/auth/utils.rs\",\n \"src/api/utils.rs\",\n \"lib/shared/utils.rs\"\n ]\n }\n}\n```\n\n### Non-TTY, Non-Robot\nSame as current behavior (formatted error message to stderr). The only internal change is the error now originates from `AmbiguousPath { message, candidates }` instead of `Ambiguous(String)`, but the `Display` impl uses the `message` field directly, so formatted output is identical.\n\n## Architecture: New `AmbiguousPath` Variant (REQUIRED)\n\nAdd a new error variant to `LoreError` in `src/core/error.rs`:\n\n```rust\n#[error(\"{message}\")]\nAmbiguousPath {\n message: String,\n candidates: Vec<String>,\n}\n```\n\n**Note:** Use `#[error(\"{message}\")]` (NOT `\"Ambiguous path: {message}\"`) because `message` already contains the full formatted text like \"'utils.rs' matches multiple paths...\". 
Wrapping it would produce redundant output.\n\n**Why not modify `Ambiguous(String)`?** There are 9 construction sites for `Ambiguous(String)` across the codebase (project.rs ×2, show/issue.rs, show/mr.rs, explain.rs ×2, drift.rs, related.rs, timeline/types.rs) used for project/entity ambiguity — NOT path ambiguity. Changing the variant shape would require updating all of them (out of scope). A dedicated variant avoids this entirely.\n\n### Error wiring for `AmbiguousPath`:\n\n**`code()` match arm** (error.rs ~line 208):\n```rust\nSelf::AmbiguousPath { .. } => ErrorCode::Ambiguous,\n```\nMaps to the same error code (exit 18) as the existing `Ambiguous(String)`.\n\n**`suggestion()` match arm** (error.rs ~line 257):\n```rust\nSelf::AmbiguousPath { .. } => Some(\n \"Pass the full path, or use -p to scope to a specific project.\",\n),\n```\nThis is PATH-focused, unlike the existing `Ambiguous` suggestion which is PROJECT-focused.\n\n**`actions()` match arm** (error.rs ~line 306):\n```rust\nSelf::AmbiguousPath { .. } => vec![],\n```\nNo machine-actionable recovery commands — the `candidates` array in `RobotError` serves this role instead.\n\n**`is_permanent_api_error()`**: No change needed (returns false for this variant by default).\n\n### Construction site change in `path_resolver.rs`:\n\nAt `src/core/path_resolver.rs:168`, change:\n```rust\n// Before:\nErr(LoreError::Ambiguous(format!(\n \"'{trimmed}' matches multiple paths. Use the full path or -p to scope:\\n{list}\"\n)))\n\n// After:\nErr(LoreError::AmbiguousPath {\n message: format!(\n \"'{trimmed}' matches multiple paths. 
Use the full path or -p to scope:\\n{list}\"\n ),\n candidates,\n})\n```\n\nThe `candidates` variable is already in scope from `SuffixResult::Ambiguous(candidates)` at line 152.\n\n### RobotError modification:\nAdd a `candidates` field to `RobotError` in `src/core/error.rs`:\n\n```rust\n#[derive(Debug, Serialize)]\npub struct RobotError {\n pub code: String,\n pub message: String,\n #[serde(skip_serializing_if = \"Option::is_none\")]\n pub suggestion: Option<String>,\n #[serde(skip_serializing_if = \"Vec::is_empty\")]\n pub actions: Vec<String>,\n #[serde(skip_serializing_if = \"Vec::is_empty\")]\n pub candidates: Vec<String>,\n}\n```\n\nIn `to_robot_error()`, populate `candidates` from `AmbiguousPath`:\n```rust\nlet candidates = match self {\n Self::AmbiguousPath { candidates, .. } => candidates.clone(),\n _ => vec![],\n};\nRobotError {\n code: self.code().to_string(),\n message: self.to_string(),\n suggestion: self.suggestion().map(String::from),\n actions,\n candidates,\n}\n```\n\n**Prior art:** `RobotErrorSuggestionData` in `src/app/handlers.rs:1177` already extends the error envelope with `valid_values: Option<Vec<String>>`. The `candidates` field follows the same pattern.\n\n### Picker Integration Point\n\nThe picker MUST live in the CLI/handler layer, NOT in `core/path_resolver.rs`. The core module should never import dialoguer or know about TTY state.\n\n**Pattern**: Each handler that calls `build_path_query` (or calls a function that does) catches `LoreError::AmbiguousPath` and either:\n- Shows the picker (TTY, non-robot) and retries with the selected full path\n- Lets the error propagate for robot/non-TTY serialization\n\n### TTY Detection\n\nUse `std::io::IsTerminal` from Rust std (already used in the codebase at `src/cli/mod.rs:94` and `src/cli/render.rs:252`). 
Do NOT add `atty` crate.\n\n```rust\nuse std::io::IsTerminal;\nlet is_tty = std::io::stdout().is_terminal();\n```\n\n### Shared Helper for file-history / trace\n\nA shared helper function in `src/cli/` encapsulates the \"resolve path with optional picker\" logic for the two call sites that directly invoke `build_path_query`:\n\n```rust\n/// Resolve a user-supplied path, showing an interactive picker on ambiguity (TTY only).\n/// In robot mode or non-TTY, returns the AmbiguousPath error unchanged.\npub fn resolve_path_interactive(\n conn: &Connection,\n path: &str,\n project_id: Option<i64>,\n robot_mode: bool,\n) -> Result {\n match build_path_query(conn, path, project_id) {\n Err(LoreError::AmbiguousPath { candidates, .. })\n if !robot_mode && std::io::stdout().is_terminal() =>\n {\n let selection = dialoguer::Select::new()\n .with_prompt(format!(\"Multiple files match '{path}'\"))\n .items(&candidates)\n .default(0)\n .interact()\n .map_err(|e| LoreError::Other(format!(\"Selection cancelled: {e}\")))?;\n build_path_query(conn, &candidates[selection], project_id)\n }\n other => other,\n }\n}\n```\n\n**Note:** This helper covers `handle_file_history` and `handle_trace` only. The `who` command uses a different interception pattern (see below).\n\n## Affected Call Sites (4 total)\n\n### Direct `build_path_query` calls — use `resolve_path_interactive`:\n\n1. **`src/app/handlers.rs:1302`** — `handle_file_history` (has `robot_mode` param)\n - Replace `build_path_query(&conn_tmp, &normalized, project_id_tmp)?` with `resolve_path_interactive(&conn_tmp, &normalized, project_id_tmp, robot_mode)?`\n\n2. **`src/app/handlers.rs:1359`** — `handle_trace` (has `robot_mode` param)\n - Replace `build_path_query(&conn, &normalized, project_id)?` with `resolve_path_interactive(&conn, &normalized, project_id, robot_mode)?`\n\n### `build_path_query` calls inside `run_who` — intercept at handler level:\n\n3. 
**`src/cli/commands/who/expert.rs:37`** — `query_expert` calls `build_path_query`\n4. **`src/cli/commands/who/overlap.rs:19`** — `query_overlap` calls `build_path_query`\n\nThese are called indirectly by `run_who()` which dispatches via `resolve_mode(args)`. The interception happens at `handle_who` (src/app/robot_docs.rs, function starts at line 622):\n\n**Architecture context:** `run_who` internally calls `resolve_mode(args)` to extract the path from `WhoArgs`. `resolve_mode` checks `args.path` FIRST (highest priority), then `args.overlap`, then `args.target`. This means the handler can override the path by setting `args.path` and retrying.\n\n```rust\n// In handle_who (src/app/robot_docs.rs:622, run_who call at line 632)\nfn handle_who(\n config_override: Option<&str>,\n mut args: WhoArgs, // already `mut` in current code\n robot_mode: bool,\n) -> Result<(), Box<dyn std::error::Error>> {\n let start = std::time::Instant::now();\n let config = Config::load(config_override)?;\n if args.project.is_none() {\n args.project = config.default_project.clone();\n }\n\n let run = match run_who(&config, &args) {\n Err(LoreError::AmbiguousPath { candidates, .. })\n if !robot_mode && std::io::stdout().is_terminal() =>\n {\n let selection = dialoguer::Select::new()\n .with_prompt(\"Multiple files match the path\")\n .items(&candidates)\n .default(0)\n .interact()\n .map_err(|e| LoreError::Other(format!(\"Selection cancelled: {e}\")))?;\n // Set args.path to override — resolve_mode checks args.path first\n args.path = Some(candidates[selection].clone());\n run_who(&config, &args)?\n }\n other => other?,\n };\n\n let elapsed_ms = start.elapsed().as_millis() as u64;\n if robot_mode {\n print_who_json(&run, &args, elapsed_ms);\n } else {\n print_who_human(&run.result, run.resolved_input.project_path.as_deref());\n }\n Ok(())\n}\n```\n\n**Why `args.path = Some(...)` works:** `resolve_mode()` (at `src/cli/commands/who/mod.rs:59`) checks `args.path` first. 
Setting it to `Some(full_path)` forces Expert mode with the resolved path, regardless of whether the original input came from `--overlap`, `target`, or `--path`.\n\n**Key constraints:**\n- `query_expert`/`query_overlap` must NOT receive robot_mode or dialoguer dependencies\n- The handler is the right interception boundary — it already knows about robot_mode and TTY\n- `WhoArgs` is already `mut` in `handle_who`'s signature\n\n## Scope Boundary\n\nThis bead covers ONLY path ambiguity from `build_path_query` / `suffix_probe`. It does NOT cover:\n- Project ambiguity (`resolve_project` in `src/core/project.rs`) — uses `Ambiguous(String)`\n- Issue/MR ambiguity (`show/issue.rs`, `show/mr.rs`) — uses `Ambiguous(String)`\n- Entity resolution ambiguity (`explain.rs`, `drift.rs`, `related.rs`, `timeline/types.rs`) — uses `Ambiguous(String)`\n\nThose remain on `Ambiguous(String)` and can be separate beads if interactive resolution is desired.\n\n## Testing\n\n### Unit Tests\n- Add test that `LoreError::AmbiguousPath` carries candidates vec correctly\n- Add test that `LoreError::AmbiguousPath` maps to `ErrorCode::Ambiguous` (exit 18)\n- Add test that `RobotError` serialization includes `candidates` array when non-empty\n- Add test that `RobotError` serialization omits `candidates` when empty (skip_serializing_if)\n- Existing ambiguity tests in path_resolver_tests.rs remain unchanged (they test core logic, not UI)\n- Update the path_resolver test at who_tests.rs:2087 that matches on `SuffixResult::Ambiguous(paths)` — it should still work since the core SuffixResult enum is unchanged, only the LoreError construction differs\n\n### Integration Tests\n- Test robot mode: verify JSON error includes `candidates` array with correct paths\n- Test non-TTY: verify error message still contains candidate paths as text\n- Test that the helper retries with the correct path after simulated selection\n\n### Manual Testing\n- TTY picker: run `lore who utils.rs` (or similar ambiguous path) in a 
terminal\n- Verify arrow-key selection works\n- Verify Ctrl+C cancels cleanly (returns non-zero exit, no panic)\n- Verify robot mode: `lore --robot who utils.rs` returns structured candidates\n\n## Dependencies\n- `dialoguer = \"0.12\"` — already in Cargo.toml (line 25), already used in init flow (src/app/handlers.rs:873)\n- `std::io::IsTerminal` — stdlib, already used in src/cli/mod.rs:94 and src/cli/render.rs:252\n- No new crate dependencies needed\n\n## Acceptance Criteria\n1. TTY + non-robot: `dialoguer::Select` picker appears, user selects, command completes normally\n2. Robot mode: error JSON includes `candidates: [...]` array\n3. Non-TTY + non-robot: formatted error with candidate list (current behavior preserved)\n4. All existing path_resolver tests pass unchanged\n5. Picker shows at most 11 candidates (existing limit from suffix_probe LIMIT 11)\n6. Ctrl+C during picker exits cleanly (non-zero exit, no panic)\n7. Helper function is shared across file_history and trace call sites (no duplication)\n8. The who command intercepts at the handler level via `args.path` mutation, not inside query_expert/query_overlap\n9. New `AmbiguousPath` variant does NOT affect any existing `Ambiguous(String)` call sites (9 sites confirmed)\n10. `RobotError.candidates` is empty-vec-omitted for all non-path-ambiguity errors\n11. `AmbiguousPath` suggestion is path-focused (\"Pass the full path...\"), not project-focused like `Ambiguous`","notes":"## Polish Audit (2026-03-13)\n\n### Issues Found & Fixed\n\n1. **Architectural contradiction (CRITICAL)**: Original Option A proposed changing `LoreError::Ambiguous(String)` to carry structured data, but excluded all other Ambiguous construction sites from scope. Fixed by recommending new `AmbiguousPath` variant instead.\n\n2. **Wrong call site location**: `handle_who` is in `src/app/robot_docs.rs:622`, not `src/app/handlers.rs`. The who build_path_query calls are inside `query_expert`/`query_overlap` which lack robot_mode. 
Fixed by specifying handler-level interception.\n\n3. **TTY detection unspecified**: Codebase uses `std::io::IsTerminal` (Rust std), not `atty`. Added explicit specification.\n\n4. **RobotError modification under-specified**: Added concrete field addition and `to_robot_error()` population code.\n\n5. **`try_resolve_rename_ambiguity` not mentioned**: The AmbiguousPath error only fires after rename resolution also fails. Added to \"Current Behavior\" for clarity.\n\n6. **Referenced prior art**: `RobotErrorSuggestionData` in handlers.rs already extends error envelope with `valid_values`. Candidates field follows same pattern.","status":"open","priority":3,"issue_type":"feature","created_at":"2026-02-13T16:31:50.005222Z","created_by":"tayloreernisse","updated_at":"2026-03-13T19:13:31.601462Z","compaction_level":0,"original_size":0,"labels":["cli-ux","gate-4"]}
{"id":"bd-g0d5","title":"WHO: Verification gate — check, clippy, fmt, EXPLAIN QUERY PLAN","description":"## Background\n\nFinal verification gate before the who epic is considered complete. Confirms code quality, test coverage, and index utilization against real data.\n\n## Approach\n\n### Step 1: Compiler checks\n```bash\ncargo check --all-targets\ncargo clippy --all-targets -- -D warnings\ncargo fmt --check\ncargo test\n```\n\n### Step 2: Manual smoke test (against real DB)\n```bash\ncargo run --release -- who src/features/global-search/\ncargo run --release -- who @asmith\ncargo run --release -- who @asmith --reviews\ncargo run --release -- who --active\ncargo run --release -- who --active --since 30d\ncargo run --release -- who --overlap libs/shared-frontend/src/features/global-search/\ncargo run --release -- who --path README.md\ncargo run --release -- who --path Makefile\ncargo run --release -- -J who src/features/global-search/ # robot mode\ncargo run --release -- -J who @asmith # robot mode\ncargo run --release -- who src/features/global-search/ -p typescript # project scoped\n```\n\n### Step 3: EXPLAIN QUERY PLAN verification\n```bash\n# Expert: should use idx_notes_diffnote_path_created\nsqlite3 ~/.local/share/lore/lore.db \"\n EXPLAIN QUERY PLAN\n SELECT n.author_username, COUNT(*), MAX(n.created_at)\n FROM notes n\n WHERE n.note_type = 'DiffNote' AND n.is_system = 0\n AND n.position_new_path LIKE 'src/features/global-search/%' ESCAPE '\\\\'\n AND n.created_at >= 0\n GROUP BY n.author_username;\"\n\n# Active global: should use idx_discussions_unresolved_recent_global\nsqlite3 ~/.local/share/lore/lore.db \"\n EXPLAIN QUERY PLAN\n SELECT d.id, d.last_note_at FROM discussions d\n WHERE d.resolvable = 1 AND d.resolved = 0 AND d.last_note_at >= 0\n ORDER BY d.last_note_at DESC LIMIT 20;\"\n\n# Active scoped: should use idx_discussions_unresolved_recent\nsqlite3 ~/.local/share/lore/lore.db \"\n EXPLAIN QUERY PLAN\n SELECT d.id, d.last_note_at FROM discussions d\n WHERE d.resolvable = 1 AND d.resolved = 0 AND d.project_id = 1\n AND d.last_note_at >= 0\n ORDER BY d.last_note_at DESC LIMIT 20;\"\n```\n\n## Files\n\nNo files modified — verification only.\n\n## TDD Loop\n\nThis bead is the TDD VERIFY phase for the entire epic. No code written.\nVERIFY: All commands in Steps 1-3 must succeed. Document results.\n\n## Acceptance Criteria\n\n- [ ] cargo check --all-targets: 0 errors\n- [ ] cargo clippy --all-targets -- -D warnings: 0 warnings\n- [ ] cargo fmt --check: no formatting changes needed\n- [ ] cargo test: all tests pass (including 20+ who tests)\n- [ ] Expert EXPLAIN shows idx_notes_diffnote_path_created\n- [ ] Active global EXPLAIN shows idx_discussions_unresolved_recent_global\n- [ ] Active scoped EXPLAIN shows idx_discussions_unresolved_recent\n- [ ] All 5 modes produce reasonable output against real data\n- [ ] Robot mode produces valid JSON for all modes\n\n## Edge Cases\n\n- DB path may differ from ~/.local/share/lore/lore.db — check config with `lore -J doctor` first to get actual db_path\n- EXPLAIN QUERY PLAN output format varies by SQLite version — look for the index name in any output column, not an exact string match\n- If the DB has not been synced recently, smoke tests may return empty results — run `lore sync` first if needed\n- Project name \"typescript\" in the -p flag may not exist — use an actual project from `lore -J status` output\n- The real DB may not have migration 017 yet — run `cargo run --release -- migrate` first if the who command fails with a missing index error\n- clippy::pedantic + clippy::nursery are enabled — common issues: arrays vs vec![] for sorted collections, too_many_arguments on test helpers (use #[allow])","status":"closed","priority":3,"issue_type":"task","created_at":"2026-02-08T02:41:42.642988Z","created_by":"tayloreernisse","updated_at":"2026-02-08T04:10:29.606672Z","closed_at":"2026-02-08T04:10:29.606631Z","close_reason":"Implemented by agent team: migration 017, CLI skeleton, all 5 query modes, human+robot output, 20 tests. All quality gates pass.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-g0d5","depends_on_id":"bd-tfh3","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"},{"issue_id":"bd-g0d5","depends_on_id":"bd-zibc","type":"blocks","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]}
{"id":"bd-gba","title":"OBSERV: Add tracing-appender dependency to Cargo.toml","description":"## Background\ntracing-appender provides non-blocking, daily-rotating file writes for the tracing ecosystem. It's the canonical solution used by tokio-rs projects. We need it for the file logging layer (Phase 1) that writes JSON logs to ~/.local/share/lore/logs/.\n\n## Approach\nAdd tracing-appender to [dependencies] in Cargo.toml (line ~54, after the existing tracing-subscriber entry):\n\n```toml\ntracing-appender = \"0.2\"\n```\n\nAlso add the \"json\" feature to tracing-subscriber since the file layer and --log-format json both need it:\n\n```toml\ntracing-subscriber = { version = \"0.3\", features = [\"env-filter\", \"json\"] }\n```\n\nCurrent tracing deps (Cargo.toml lines 53-54):\n tracing = \"0.1\"\n tracing-subscriber = { version = \"0.3\", features = [\"env-filter\"] }\n\n## Acceptance Criteria\n- [ ] cargo check --all-targets succeeds with tracing-appender available\n- [ ] tracing_appender::rolling::daily() is importable\n- [ ] tracing-subscriber json feature is available (fmt::layer().json() compiles)\n- [ ] cargo clippy --all-targets -- -D warnings passes\n\n## Files\n- Cargo.toml (modify lines 53-54 region)\n\n## TDD Loop\nRED: Not applicable (dependency addition)\nGREEN: Add deps, run cargo check\nVERIFY: cargo check --all-targets && cargo clippy --all-targets -- -D warnings\n\n## Edge Cases\n- Ensure tracing-appender 0.2 is compatible with tracing-subscriber 0.3 (both from tokio-rs/tracing monorepo, always compatible)\n- The \"json\" feature on tracing-subscriber pulls in serde_json, which is already a dependency","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T15:53:55.364100Z","created_by":"tayloreernisse","updated_at":"2026-02-04T17:10:22.520471Z","closed_at":"2026-02-04T17:10:22.520423Z","close_reason":"Added tracing-appender 0.2 and json feature to tracing-subscriber","compaction_level":0,"original_size":0,"labels":["observability"],"dependencies":[{"issue_id":"bd-gba","depends_on_id":"bd-2nx","type":"parent-child","created_at":"2026-03-04T20:02:52Z","created_by":"import"}]}
{"id":"bd-gcnx","title":"NOTE-TEST: Test bead","description":"type: task","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-12T16:58:40.129030Z","updated_at":"2026-02-12T16:58:47.794167Z","closed_at":"2026-02-12T16:58:47.794116Z","close_reason":"test","compaction_level":0,"original_size":0}
diff --git a/.beads/last-touched b/.beads/last-touched
index 168b9ce..81eb274 100644
--- a/.beads/last-touched
+++ b/.beads/last-touched
@@ -1 +1 @@
-bd-1n5q
+bd-343o