Add beads issue tracking infrastructure

Initialize beads (br) for dependency-aware issue tracking in this project.
Beads provides a lightweight task database with graph-aware triage, used
for coordinating work on the JSONL-first discovery feature.

Files added:
- .beads/config.yaml: Project configuration (issue prefix, defaults)
- .beads/issues.jsonl: Issue database with JSONL-first discovery tasks
- .beads/metadata.json: Beads metadata (commit correlation, etc.)
- .beads/.gitignore: Ignore SQLite databases, lock files, temp files

Also ignore .bv/ (beads viewer local state) in project .gitignore.

Beads is non-invasive: it never executes git/jj commands. The .beads/
directory is manually committed alongside code changes.

Usage:
- br ready --json: Find unblocked work
- br update <id> --status in_progress: Claim a task
- br close <id> --reason "done": Complete a task
- bv --robot-triage: Get graph-aware recommendations
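The "find unblocked work" query above can be sketched against the JSONL format this commit adds. A minimal sketch (the `readyIssues` helper is hypothetical, not part of beads): an issue is ready when it is not closed and every issue it depends on is closed, using the `id`, `status`, and `dependencies[].depends_on_id` fields visible in .beads/issues.jsonl.

```typescript
// Hypothetical helper mirroring "br ready" semantics over a JSONL snapshot.
interface Dep { issue_id: string; depends_on_id: string; type: string }
interface Issue { id: string; status: string; dependencies?: Dep[] }

function readyIssues(jsonl: string): string[] {
  const issues = jsonl
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line) as Issue);
  const byId = new Map(issues.map((i): [string, Issue] => [i.id, i]));
  return issues
    .filter((i) => i.status !== "closed")
    .filter((i) =>
      (i.dependencies ?? []).every(
        (d) => byId.get(d.depends_on_id)?.status === "closed"
      )
    )
    .map((i) => i.id);
}

// Tiny inline fixture using the same field shapes as .beads/issues.jsonl:
const sample = [
  JSON.stringify({ id: "bd-1", status: "closed" }),
  JSON.stringify({
    id: "bd-2",
    status: "open",
    dependencies: [{ issue_id: "bd-2", depends_on_id: "bd-1", type: "blocks" }],
  }),
  JSON.stringify({
    id: "bd-3",
    status: "open",
    dependencies: [{ issue_id: "bd-3", depends_on_id: "bd-2", type: "blocks" }],
  }),
].join("\n");

console.log(readyIssues(sample)); // bd-2 is ready; bd-3 is blocked by open bd-2
```

Unknown `depends_on_id` values yield `undefined` from the map lookup and keep the issue blocked, which is the conservative choice for a partial snapshot.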

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
commit 66f8cc3eb4 (parent 04343f6a9a)
Author: teernisse
Date: 2026-02-28 00:53:49 -05:00
5 changed files with 34 additions and 0 deletions

.beads/issues.jsonl (new file, +12 lines)

@@ -0,0 +1,12 @@
{"id":"bd-10w","title":"CP3: Unit tests for Tier 1 index validation","description":"## Background\nTier 1 validation has specific edge cases around timestamp normalization and stale detection that must be tested independently from the full discovery pipeline.\n\n## Approach\nAdd tests to tests/unit/session-discovery.test.ts under describe \"Tier 1 index validation\". Tests create temp project directories with both .jsonl files and sessions-index.json, then verify Tier 1 behavior.\n\nTest cases:\n1. Uses index messageCount/summary/firstPrompt when index modified matches file mtime within 1s\n2. Rejects stale index entries when mtime differs by > 1s (falls through to Tier 3)\n3. Handles missing modified field in index entry (falls through to Tier 2/3)\n4. SessionEntry.created and .modified always from stat even when Tier 1 is trusted\n5. Missing sessions-index.json: all sessions still discovered via Tier 2/3\n6. Corrupt sessions-index.json (invalid JSON): warning logged, all sessions still discovered\n7. 
Legacy index format (raw array, no version wrapper): still parsed correctly\n\n## Acceptance Criteria\n- [ ] All 7 test cases pass\n- [ ] Tests use temp directories with controlled file timestamps\n- [ ] npm run test -- session-discovery passes\n\n## Files\n- MODIFY: tests/unit/session-discovery.test.ts (add tests in \"Tier 1 index validation\" describe block)\n\n## TDD Loop\nRED: Write tests (fail until bd-3g5 Tier 1 implementation is done)\nGREEN: Tests pass after Tier 1 implementation\nVERIFY: npm run test -- session-discovery\n\n## Edge Cases\n- Setting file mtime in tests: use fs.utimes() to control mtime precisely\n- Index with ISO string timestamps vs epoch ms: test both formats","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T17:48:25.641546Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:27:28.878028Z","closed_at":"2026-02-04T18:27:28.877984Z","close_reason":"Tests already written as part of bd-3g5: 6 tests covering Tier 1 hit, miss, no-modified-field, missing index, corrupt index, stat-derived timestamps.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-10w","depends_on_id":"bd-3g5","type":"blocks","created_at":"2026-02-04T17:49:30.660263Z","created_by":"tayloreernisse"}]}
{"id":"bd-18a","title":"CP2: Implement MetadataCache with dirty-flag write-behind and atomic writes","description":"## Background\nThe persistent metadata cache at ~/.cache/session-viewer/metadata.json avoids re-parsing unchanged JSONL files across server restarts. This is Tier 2 in the tiered lookup — checked after Tier 1 (index) fails, before Tier 3 (full parse).\n\n## Approach\nCreate MetadataCache class in src/server/services/metadata-cache.ts:\n\n```typescript\ninterface CacheEntry {\n mtimeMs: number;\n size: number;\n messageCount: number;\n firstPrompt: string;\n summary: string;\n created: string; // ISO from file birthtime\n modified: string; // ISO from file mtime\n firstTimestamp: string;\n lastTimestamp: string;\n}\n\ninterface CacheFile {\n version: 1;\n entries: Record<string, CacheEntry>; // keyed by absolute file path\n}\n\nexport class MetadataCache {\n constructor(cachePath?: string) // default: ~/.cache/session-viewer/metadata.json\n\n async load(): Promise<void> // Load from disk, graceful on missing/corrupt\n get(filePath: string, mtimeMs: number, size: number): CacheEntry | null // Tier 2 lookup\n set(filePath: string, entry: CacheEntry): void // Mark dirty\n async save(existingPaths?: Set<string>): Promise<void> // Prune stale, atomic write if dirty\n async flush(): Promise<void> // Force write if dirty (for shutdown)\n}\n```\n\nKey behaviors:\n1. load(): Read + JSON.parse cache file. On corrupt/missing: start empty, no error.\n2. get(): Return entry only if mtimeMs AND size match. Otherwise return null (cache miss).\n3. set(): Store entry, set dirty flag to true.\n4. save(existingPaths): If dirty, prune entries whose keys are not in existingPaths, write to temp file then fs.rename (atomic). Reset dirty flag.\n5. flush(): Same as save() but without pruning. Called on shutdown.\n6. Shutdown hooks: Register process.on(\"SIGTERM\") and process.on(\"SIGINT\") handlers that call flush(). 
Register once in module scope or via an init function.\n\nWrite-behind strategy: discoverSessions() calls save() asynchronously after returning results. The Promise is fire-and-forget but errors are logged.\n\nIntegrate into discoverSessions() in session-discovery.ts:\n- Load cache once on first call (lazy init)\n- Before Tier 3 parse: check cache.get(filePath, stat.mtimeMs, stat.size)\n- After Tier 3 parse: call cache.set(filePath, extractedEntry)\n- After building all entries: fire-and-forget cache.save(discoveredPaths)\n\n## Acceptance Criteria\n- [ ] MetadataCache class exported from src/server/services/metadata-cache.ts\n- [ ] Cache hit returns entry when mtimeMs + size match\n- [ ] Cache miss (returns null) when mtimeMs or size differ\n- [ ] Dirty flag only set when set() is called (not on load/get)\n- [ ] save() is no-op when not dirty\n- [ ] Atomic writes: temp file + rename pattern\n- [ ] Corrupt cache file loads gracefully (empty cache, no throw)\n- [ ] Missing cache file loads gracefully (empty cache, no throw)\n- [ ] Stale entries pruned on save\n- [ ] Shutdown hooks registered for SIGTERM/SIGINT\n- [ ] Cache directory created if it does not exist (mkdir -p equivalent)\n- [ ] npm run test passes\n\n## Files\n- CREATE: src/server/services/metadata-cache.ts\n- MODIFY: src/server/services/session-discovery.ts (integrate cache into tiered lookup)\n\n## TDD Loop\nRED: tests/unit/metadata-cache.test.ts — tests:\n - \"returns null for unknown file path\"\n - \"returns entry when mtimeMs and size match\"\n - \"returns null when mtimeMs differs\"\n - \"returns null when size differs\"\n - \"save is no-op when not dirty\"\n - \"save writes to disk when dirty\"\n - \"save prunes entries not in existingPaths\"\n - \"load handles missing cache file\"\n - \"load handles corrupt cache file\"\n - \"atomic write: file not corrupted on crash\"\nGREEN: Implement MetadataCache\nVERIFY: npm run test -- metadata-cache\n\n## Edge Cases\n- Cache directory does not exist: 
create with fs.mkdir recursive\n- Cache file locked by another process: log warning, continue without cache\n- Server killed with SIGKILL (hard kill): cache may be lost — acceptable, rebuilt on next cold start\n- Concurrent save() calls: second save waits for first (or coalesce via dirty flag)\n- Very large cache (3000+ entries): JSON serialization should still be < 50ms","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T17:48:03.919559Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:25:16.365118Z","closed_at":"2026-02-04T18:25:16.365065Z","close_reason":"Implemented MetadataCache class with dirty-flag write-behind, atomic writes (temp+rename), prune stale entries, load/save/flush. Integrated into discoverSessions() as Tier 2 lookup. 13 unit tests.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-18a","depends_on_id":"bd-34v","type":"blocks","created_at":"2026-02-04T17:49:30.583939Z","created_by":"tayloreernisse"}]}
{"id":"bd-1dn","title":"CP4: Bounded concurrency for stat and parse phases","description":"## Background\nCold start with 3,103 files requires bounded parallelism to avoid file-handle exhaustion and IO thrash. Without limits, Node.js will attempt thousands of concurrent fs.stat() and fs.readFile() calls, potentially hitting EMFILE errors.\n\n## Approach\nAdd a lightweight concurrency limiter. Options:\n- Install p-limit (npm i p-limit) — well-maintained, zero deps, 1.2KB\n- Or hand-roll a simple semaphore (~15 lines)\n\nRecommendation: p-limit for clarity and maintenance. It is ESM-only since v4, so use dynamic import or pin v3.x if the project uses CJS.\n\nImplementation in session-discovery.ts:\n\n```typescript\nimport pLimit from \"p-limit\";\n\nconst STAT_CONCURRENCY = 64;\nconst PARSE_CONCURRENCY = 8;\n\n// In discoverSessions(), per project:\nconst statLimit = pLimit(STAT_CONCURRENCY);\nconst parseLimit = pLimit(PARSE_CONCURRENCY);\n\n// Stat phase: batch all files\nconst statResults = await Promise.all(\n jsonlFiles.map(f => statLimit(() => safeStat(f)))\n);\n\n// Parse phase: only Tier 3 misses\nconst parseResults = await Promise.all(\n tier3Files.map(f => parseLimit(() => safeReadAndExtract(f)))\n);\n```\n\nsafeStat() wraps fs.stat in try/catch, returns null on ENOENT/EACCES (with debug log).\nsafeReadAndExtract() wraps fs.readFile + extractSessionMetadata, returns null on failure.\n\nPerformance targets:\n- Cold start (no cache, no index): < 5s for 3,103 files\n- Warm start (cache exists, few changes): < 1s\n- Incremental (cache + few new files): ~500ms + ~50ms per new file\n\n## Acceptance Criteria\n- [ ] p-limit (or equivalent) added as dependency\n- [ ] Stat phase uses concurrency limit of 64\n- [ ] Parse phase uses concurrency limit of 8\n- [ ] ENOENT and EACCES errors from stat silently handled (debug log, skip file)\n- [ ] Read errors silently handled (debug log, skip file)\n- [ ] No EMFILE errors on cold start with 3000+ files\n- [ ] Warm start < 
1s verified on real dataset (manual verification step)\n- [ ] npm run test passes\n\n## Files\n- MODIFY: package.json (add p-limit dependency)\n- MODIFY: src/server/services/session-discovery.ts (wrap stat + parse in concurrency limiters)\n\n## TDD Loop\nRED: Manual performance test — time cold start on real ~/.claude/projects\nGREEN: Add concurrency limits, re-test\nVERIFY: npm run test && manual timing of warm/cold starts\n\n## Edge Cases\n- p-limit v4+ is ESM-only: check if project tsconfig uses \"module\": \"ESNext\" or \"NodeNext\". If CJS, use p-limit@3 or hand-roll.\n- Concurrency limits are per-project. With many small projects, total concurrency could still be high. Consider a global limiter shared across projects if needed.\n- Files actively being written during stat phase: mtime captured at stat time, content may differ at read time. Next discovery pass will re-extract (mtime changed).","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T17:48:36.609991Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:28:36.268754Z","closed_at":"2026-02-04T18:28:36.268402Z","close_reason":"Added mapWithLimit() concurrency limiter (32 concurrent ops per project) to prevent EMFILE on large session directories. Hand-rolled to avoid external dependency. No behavior change to existing tests.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1dn","depends_on_id":"bd-34v","type":"blocks","created_at":"2026-02-04T17:49:30.683780Z","created_by":"tayloreernisse"}]}
{"id":"bd-1ed","title":"CP1: Unit tests for filesystem-first discovery and metadata extraction","description":"## Background\nDiscovery correctness must be verified with tests covering the key scenarios from the PRD edge cases table. These tests validate the integration of filesystem scanning + metadata extraction.\n\n## Approach\nAdd tests to the existing test file, building on its temp directory pattern (uses os.tmpdir, writes .jsonl fixtures, cleans up).\n\nTest cases:\n1. Discovers all .jsonl files without sessions-index.json present\n2. SessionEntry.created from stat.birthtimeMs, .modified from stat.mtimeMs\n3. Duration computed from first/last JSONL timestamps (not index)\n4. Silently skips files that disappear between readdir and stat (create file, delete before stat mock)\n5. Empty .jsonl file returns messageCount: 0, session still appears in list\n6. extractSessionMetadata().messageCount === parseSessionContent().messages.length on fixture data\n7. Sessions sorted by modified descending\n8. Path traversal in filename rejected (symlink or \"..\" in name)\n9. 
Multiple project directories scanned and merged\n\n## Acceptance Criteria\n- [ ] All 9+ test cases pass\n- [ ] Tests use temp directories (not real ~/.claude/projects)\n- [ ] Cleanup runs even on test failure (afterEach or try/finally)\n- [ ] npm run test -- session-discovery passes\n\n## Files\n- MODIFY: tests/unit/session-discovery.test.ts (add describe block \"filesystem-first discovery\")\n\n## TDD Loop\nRED: Write tests (will fail until bd-34v discovery rewrite is done)\nGREEN: Tests pass after discovery loop rewrite\nVERIFY: npm run test -- session-discovery\n\n## Edge Cases\n- Test cleanup must handle partially-created temp dirs\n- stat.birthtimeMs may equal 0 on some filesystems (Linux ext4) — test should not hardcode platform-specific birthtimeMs behavior","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T17:47:52.189987Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:23:46.313213Z","closed_at":"2026-02-04T18:23:46.313152Z","close_reason":"Tests already written as part of bd-34v implementation. 9 tests cover all spec requirements: filesystem discovery, stat timestamps, JSONL duration, TOCTOU resilience, empty files, sorting, extension filtering, multi-project aggregation.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-1ed","depends_on_id":"bd-34v","type":"blocks","created_at":"2026-02-04T17:49:30.541764Z","created_by":"tayloreernisse"},{"issue_id":"bd-1ed","depends_on_id":"bd-3pr","type":"blocks","created_at":"2026-02-04T17:49:30.559261Z","created_by":"tayloreernisse"}]}
{"id":"bd-1tm","title":"CP0: Extract countMessagesForLine() and classifyLine() shared helpers","description":"## Background\nThe PRD requires exact message counts — list counts must match detail-view counts. The existing extractMessages() function (session-parser.ts:78-233) has non-trivial expansion rules that must be encoded in a shared helper so both the metadata extractor and full parser produce identical counts.\n\n## Approach\nExtract two helpers from extractMessages() logic:\n\n```typescript\nexport type LineClassification =\n | \"user\" | \"assistant\" | \"progress\" | \"file-history-snapshot\"\n | \"summary\" | \"system\" | \"queue-operation\" | \"unknown\";\n\nexport function classifyLine(parsed: RawLine): LineClassification\nexport function countMessagesForLine(parsed: RawLine): number\n```\n\nExpansion rules countMessagesForLine must encode:\n- type=user, string content: 1\n- type=user, array content: count of (tool_result + text) blocks in array\n- user text block containing \"<system-reminder>\": still counts as 1 (reclassified as system_message)\n- type=assistant, string content: 1\n- type=assistant, array content: count of (thinking + text + tool_use) blocks\n- type=progress: 1\n- type=file-history-snapshot: 1\n- type=summary: 1\n- type=system: 0 (skipped)\n- type=queue-operation: 0 (skipped)\n- unknown/missing type: 0\n\nThen refactor extractMessages() to use classifyLine() for its initial type dispatch (the switch on raw.type around line 88). 
countMessagesForLine() can be validated against extractMessages() output.\n\n## Acceptance Criteria\n- [ ] classifyLine() and countMessagesForLine() exported from session-parser.ts\n- [ ] extractMessages() refactored to use classifyLine() internally\n- [ ] npm run test passes (no behavior change to existing parser)\n- [ ] countMessagesForLine() matches extractMessages(line).length for every message type\n\n## Files\n- MODIFY: src/server/services/session-parser.ts\n\n## TDD Loop\nRED: tests/unit/session-parser.test.ts — add tests:\n - \"countMessagesForLine matches extractMessages length for user string message\"\n - \"countMessagesForLine matches extractMessages length for user array with tool_result and text\"\n - \"countMessagesForLine matches extractMessages length for assistant array with thinking/text/tool_use\"\n - \"countMessagesForLine returns 1 for progress/file-history-snapshot/summary\"\n - \"countMessagesForLine returns 0 for system/queue-operation\"\n - \"classifyLine returns correct classification for each type\"\nGREEN: Implement classifyLine() + countMessagesForLine(), wire into extractMessages()\nVERIFY: npm run test -- session-parser\n\n## Edge Cases\n- User message with empty content array: returns 0 (no expandable blocks)\n- Assistant message with mixed block types (some unrecognized): only count known types\n- Missing type field: classify as unknown, count as 0\n- Null/undefined message.content: count as 0 (not 1)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T17:47:13.654314Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:20:11.248274Z","closed_at":"2026-02-04T18:20:11.248229Z","close_reason":"Implemented classifyLine() and countMessagesForLine() helpers, refactored extractMessages() to use classifyLine(), added 13 unit tests including parity checks","compaction_level":0,"original_size":0}
{"id":"bd-2bj","title":"CP2: Unit tests for MetadataCache","description":"## Background\nCache behavior must be verified to ensure correctness, performance, and graceful degradation. These tests validate MetadataCache in isolation using temp directories.\n\n## Approach\nCreate tests/unit/metadata-cache.test.ts. Use temp directories for cache file location. Test the MetadataCache class directly without involving the full discovery pipeline.\n\nTest cases:\n1. get() returns null for unknown file path\n2. get() returns entry when mtimeMs AND size match exactly\n3. get() returns null when mtimeMs differs (file modified)\n4. get() returns null when size differs (file modified)\n5. save() is no-op when cache is not dirty (verify file not written via stat check)\n6. save() writes to disk when dirty (verify file exists after save)\n7. save() prunes entries whose paths are not in existingPaths set\n8. load() handles missing cache file gracefully (no throw, empty state)\n9. load() handles corrupt JSON gracefully (no throw, empty state)\n10. Roundtrip: set entries, save, create new instance, load, get returns same entries\n11. 
Dirty flag reset after save (second save is no-op)\n\n## Acceptance Criteria\n- [ ] All 11 test cases pass\n- [ ] Tests use temp directories (not real ~/.cache)\n- [ ] Cleanup in afterEach\n- [ ] npm run test -- metadata-cache passes\n\n## Files\n- CREATE: tests/unit/metadata-cache.test.ts\n\n## TDD Loop\nRED: Write all test cases (fail until bd-18a MetadataCache is implemented)\nGREEN: Tests pass after MetadataCache implementation\nVERIFY: npm run test -- metadata-cache\n\n## Edge Cases\n- Temp dir cleanup must handle missing files (rm with force)\n- Tests should not depend on timing (no race-condition-sensitive assertions)","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T17:48:10.222765Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:25:41.487042Z","closed_at":"2026-02-04T18:25:41.486998Z","close_reason":"Tests already written as part of bd-18a implementation: 13 tests covering get/set, dirty flag, mtimeMs/size matching, prune, persistence, corrupt/missing file handling, flush.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-2bj","depends_on_id":"bd-18a","type":"blocks","created_at":"2026-02-04T17:49:30.610149Z","created_by":"tayloreernisse"}]}
{"id":"bd-2jf","title":"CP0: Unit tests for parser parity (forEachJsonlLine, countMessagesForLine)","description":"## Background\nThe parser parity contract is the biggest correctness risk in this feature. These tests prove that the shared helpers produce identical results to the full parser, ensuring list-view counts can never drift from detail-view counts.\n\n## Approach\nAdd a new describe block in the existing test file. Tests should exercise every expansion rule using inline JSONL lines, plus test with the existing fixture files.\n\nTest cases:\n1. forEachJsonlLine skips malformed/truncated JSON lines (missing closing brace)\n2. forEachJsonlLine reports parseErrors count accurately\n3. forEachJsonlLine handles empty content string\n4. countMessagesForLine matches extractMessages().length for user string content\n5. countMessagesForLine matches extractMessages().length for user array with tool_result + text blocks\n6. countMessagesForLine matches extractMessages().length for user text with system-reminder (reclassified)\n7. countMessagesForLine matches extractMessages().length for assistant string content\n8. countMessagesForLine matches extractMessages().length for assistant array (thinking + text + tool_use)\n9. countMessagesForLine returns 1 for progress, file-history-snapshot, summary\n10. countMessagesForLine returns 0 for system, queue-operation\n11. classifyLine returns correct classification for each known type\n12. Integration: extractSessionMetadata().messageCount === parseSessionContent().messages.length on tests/fixtures/sample-session.jsonl\n13. 
Integration: same check on tests/fixtures/edge-cases.jsonl (has malformed lines)\n\n## Acceptance Criteria\n- [ ] All 13+ test cases pass\n- [ ] Tests cover every message type expansion rule\n- [ ] At least one test uses a malformed/truncated JSONL line\n- [ ] At least one test uses real fixture files for integration verification\n- [ ] npm run test -- session-parser passes\n\n## Files\n- MODIFY: tests/unit/session-parser.test.ts (add describe block \"parser parity: shared helpers\")\n\n## TDD Loop\nRED: Write all test cases first (they will fail until CP0 implementation beads are done)\nGREEN: Tests pass after bd-2og and bd-1tm are implemented\nVERIFY: npm run test -- session-parser\n\n## Edge Cases\n- Fixture files may change over time; tests should assert count equality (meta.messageCount === parsed.messages.length) not hardcoded numbers\n- Truncated JSON at end of file (crash mid-write) must be handled identically by both paths","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T17:47:22.070948Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:20:50.740098Z","closed_at":"2026-02-04T18:20:50.740034Z","close_reason":"Added 4 additional parity tests: system-reminder reclassification, truncated JSON handling, and 2 fixture-based integration tests proving countMessagesForLine sum equals parseSessionContent length","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-2jf","depends_on_id":"bd-1tm","type":"blocks","created_at":"2026-02-04T17:49:30.466873Z","created_by":"tayloreernisse"},{"issue_id":"bd-2jf","depends_on_id":"bd-2og","type":"blocks","created_at":"2026-02-04T17:49:30.447953Z","created_by":"tayloreernisse"}]}
{"id":"bd-2og","title":"CP0: Extract forEachJsonlLine() shared line iterator from session-parser.ts","description":"## Background\nThe metadata extractor and full parser must iterate JSONL lines identically — same malformed-line handling, same error skipping. Currently this logic is inline in parseSessionContent() (session-parser.ts:59-70). Extracting it guarantees the parser parity contract from the PRD.\n\n## Approach\nExtract a shared function from the existing inline loop in parseSessionContent():\n\n```typescript\nexport interface RawLine {\n type?: string; uuid?: string; timestamp?: string;\n parentToolUseID?: string;\n message?: { role?: string; content?: string | ContentBlock[]; };\n data?: Record<string, unknown>;\n summary?: string; snapshot?: Record<string, unknown>;\n subtype?: string;\n}\n\nexport function forEachJsonlLine(\n content: string,\n onLine: (parsed: RawLine, lineIndex: number) => void\n): { parseErrors: number }\n```\n\nImplementation:\n1. Split content by newlines, filter empty/whitespace-only lines\n2. JSON.parse each line inside try/catch\n3. On parse failure: increment parseErrors counter, continue (skip line)\n4. On success: call onLine(parsed, lineIndex)\n5. Return { parseErrors }\n\nThen refactor parseSessionContent() to use forEachJsonlLine() internally — replacing its current inline loop (lines 59-70). 
No behavior change to parseSessionContent output.\n\n## Acceptance Criteria\n- [ ] forEachJsonlLine() exported from src/server/services/session-parser.ts\n- [ ] RawLine interface exported from src/server/services/session-parser.ts\n- [ ] parseSessionContent() refactored to use forEachJsonlLine() internally\n- [ ] npm run test passes (existing tests unchanged, proving no behavior change)\n- [ ] Malformed JSON lines skipped with parseErrors count incremented\n- [ ] Empty/whitespace-only lines skipped without incrementing parseErrors\n\n## Files\n- MODIFY: src/server/services/session-parser.ts (extract from lines 59-70, export new function + RawLine type)\n\n## TDD Loop\nRED: tests/unit/session-parser.test.ts — add tests:\n - \"forEachJsonlLine skips malformed JSON lines\"\n - \"forEachJsonlLine reports parseErrors count\"\n - \"forEachJsonlLine skips empty and whitespace-only lines\"\nGREEN: Extract forEachJsonlLine(), refactor parseSessionContent() to call it\nVERIFY: npm run test -- session-parser\n\n## Edge Cases\n- Truncated JSON from crash mid-write (common) — must skip, not throw\n- Lines with only whitespace or newlines — skip without error\n- Empty content string — returns { parseErrors: 0 }, onLine never called\n- Content with no trailing newline — last line still processed","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T17:47:00.597480Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:20:09.073002Z","closed_at":"2026-02-04T18:20:09.072954Z","close_reason":"Implemented forEachJsonlLine() with RawLine export, refactored parseSessionContent() to use it, added 5 unit tests","compaction_level":0,"original_size":0}
{"id":"bd-34v","title":"CP1: Rewrite discovery loop — filesystem-first with tiered metadata","description":"## Background\nCurrent discovery in session-discovery.ts (114 lines) relies exclusively on sessions-index.json. This misses 17% of sessions. The rewrite makes the filesystem the primary source — every .jsonl file is a session, regardless of index state.\n\n## Approach\nRewrite discoverSessions() in session-discovery.ts:\n\n```typescript\nexport async function discoverSessions(\n projectsDir: string = CLAUDE_PROJECTS_DIR\n): Promise<SessionEntry[]>\n```\n\nNew flow per project directory:\n1. fs.readdir() to list all *.jsonl files (filter by .jsonl extension)\n2. Batch fs.stat() all files (initially unbounded; CP4 adds concurrency limits)\n3. Silently skip files that fail stat (ENOENT from TOCTOU race, EACCES) with debug log\n4. For each successfully statted file, get metadata via tiered lookup:\n - Tier 3 only in this checkpoint (Tier 1 and 2 added in CP3 and CP2)\n - Read file content, call extractSessionMetadata() from session-metadata.ts\n - Silently skip files that fail read (TOCTOU between stat and read)\n5. Build SessionEntry:\n - id: path.basename(file, \".jsonl\")\n - project: decoded project directory name\n - path: absolute path to .jsonl file\n - created: new Date(stat.birthtimeMs).toISOString()\n - modified: new Date(stat.mtimeMs).toISOString()\n - messageCount, firstPrompt, summary: from metadata\n - duration: computed from (lastTimestamp - firstTimestamp) in ms, or undefined\n6. 
Sort all entries by modified descending (stat-derived, never index-derived)\n\nSecurity validations (preserved from current implementation):\n- Reject paths containing \"..\" (traversal)\n- Reject non-.jsonl extensions\n- Reject absolute paths outside projectsDir (containment check)\n\nThe existing 30s in-memory cache in routes/sessions.ts and ?refresh=1 are NOT modified — they wrap discoverSessions() and continue working.\n\n## Acceptance Criteria\n- [ ] All .jsonl sessions appear regardless of whether sessions-index.json exists\n- [ ] SessionEntry.created and .modified always come from fs.stat\n- [ ] List is sorted by modified descending\n- [ ] TOCTOU: files disappearing between readdir/stat silently skipped\n- [ ] TOCTOU: files disappearing between stat/read silently skipped\n- [ ] Path traversal protection applied to filesystem-discovered files\n- [ ] Duration computed from JSONL timestamps (not index)\n- [ ] Existing route-level caching unmodified and working\n- [ ] npm run test passes\n\n## Files\n- MODIFY: src/server/services/session-discovery.ts (rewrite discoverSessions)\n- USES: src/server/services/session-metadata.ts (extractSessionMetadata)\n\n## TDD Loop\nRED: tests/unit/session-discovery.test.ts — add/update tests:\n - \"discovers sessions from .jsonl files without index\"\n - \"timestamps come from stat, not index\"\n - \"silently skips files deleted between readdir and stat\"\n - \"rejects path traversal in filenames\"\n - \"duration computed from JSONL timestamps\"\nGREEN: Rewrite discoverSessions()\nVERIFY: npm run test -- session-discovery\n\n## Edge Cases\n- Project directory with no .jsonl files: returns empty array for that project\n- Project directory that disappears during scan: silently skipped\n- .jsonl file with 0 bytes: extractSessionMetadata returns messageCount 0, session still listed\n- Very long project directory names (URL-encoded paths): handled by existing decoding logic\n- Concurrent discoverSessions() calls: no shared mutable 
state in this checkpoint (cache added in CP2)","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T17:47:44.866319Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:23:23.803724Z","closed_at":"2026-02-04T18:23:23.803676Z","close_reason":"Rewrote discoverSessions() to be filesystem-first: scans .jsonl files directly, uses extractSessionMetadata() for parser parity, timestamps from stat(), TOCTOU-safe. 9 tests covering discovery, stat-based timestamps, TOCTOU, aggregation, sorting, duration, empty files, extension filtering.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-34v","depends_on_id":"bd-3pr","type":"blocks","created_at":"2026-02-04T17:49:30.523977Z","created_by":"tayloreernisse"}]}
{"id":"bd-3g5","title":"CP3: Implement Tier 1 index validation and fast path","description":"## Background\nsessions-index.json is unreliable but when valid, it saves parsing work. Tier 1 uses it as a fast-path optimization. The index format varies: modern ({ version: 1, entries: [...] }) or legacy (raw array). The existing parsing logic in session-discovery.ts already handles both formats.\n\n## Approach\nAdd Tier 1 lookup to discoverSessions() before Tier 2 (cache) and Tier 3 (parse):\n\n```typescript\ninterface IndexEntry {\n sessionId: string;\n summary?: string;\n firstPrompt?: string;\n created?: string;\n modified?: string;\n messageCount?: number;\n fullPath?: string;\n projectPath?: string;\n}\n```\n\nPer-project flow:\n1. Try to read and parse sessions-index.json into Map<sessionId, IndexEntry>\n - Handle both modern (version:1 wrapper) and legacy (raw array) formats\n - On missing file: continue silently (common case, 13 projects have none)\n - On corrupt JSON: log warning, continue with empty map\n2. For each .jsonl file, after stat:\n a. Derive sessionId: path.basename(file, \".jsonl\")\n b. Look up sessionId in index map\n c. If found AND entry.modified exists:\n - Compare new Date(entry.modified).getTime() vs stat.mtimeMs\n - If difference <= 1000ms: Tier 1 HIT — use entry.messageCount, entry.summary, entry.firstPrompt\n - If difference > 1000ms: Tier 1 MISS (stale) — fall through to Tier 2/3\n d. If found but no modified field: skip Tier 1, fall through\n e. If not found: skip Tier 1, fall through\n3. 
IMPORTANT: SessionEntry.created and .modified ALWAYS from stat, even on Tier 1 hit\n\n## Acceptance Criteria\n- [ ] Tier 1 used when index entry modified matches stat mtime within 1s tolerance\n- [ ] Tier 1 rejected when mtime mismatch > 1s\n- [ ] Tier 1 skipped when entry has no modified field\n- [ ] Missing sessions-index.json does not break discovery\n- [ ] Corrupt sessions-index.json does not break discovery (logged, skipped)\n- [ ] SessionEntry timestamps always from stat, never from index\n- [ ] Both modern and legacy index formats handled\n- [ ] npm run test passes\n\n## Files\n- MODIFY: src/server/services/session-discovery.ts (add Tier 1 logic before Tier 2/3)\n\n## TDD Loop\nRED: tests/unit/session-discovery.test.ts — add describe block \"Tier 1 index validation\":\n - \"uses index data when modified matches stat mtime within 1s\"\n - \"rejects index data when mtime mismatch > 1s\"\n - \"skips Tier 1 when entry has no modified field\"\n - \"handles missing sessions-index.json\"\n - \"handles corrupt sessions-index.json\"\n - \"timestamps always from stat even on Tier 1 hit\"\nGREEN: Implement Tier 1 logic\nVERIFY: npm run test -- session-discovery\n\n## Edge Cases\n- Index modified as ISO string vs numeric timestamp: normalize both via new Date().getTime()\n- Index with extra/unknown fields: ignored (only read known fields)\n- Multiple index entries with same sessionId: last one wins (Map.set overwrites)\n- Extremely old index (years stale): rejected by mtime check, no special handling needed","status":"closed","priority":2,"issue_type":"task","created_at":"2026-02-04T17:48:20.640825Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:27:21.126761Z","closed_at":"2026-02-04T18:27:21.126714Z","close_reason":"Implemented Tier 1 index validation fast path in discoverSessions(). Reads sessions-index.json per project, validates mtime within 1s tolerance, falls through to Tier 2/3 on miss. 
6 new tests for hit/miss/no-modified/missing/corrupt/stat-timestamps.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3g5","depends_on_id":"bd-34v","type":"blocks","created_at":"2026-02-04T17:49:30.633475Z","created_by":"tayloreernisse"}]}
{"id":"bd-3pr","title":"CP1: Implement extractSessionMetadata() using shared helpers","description":"## Background\nThe lightweight metadata extractor reads JSONL and extracts only what the list view needs, without building full message content strings. It must use the shared helpers from CP0 to guarantee parser parity (list counts match detail counts).\n\n## Approach\nCreate in a new file src/server/services/session-metadata.ts:\n\n```typescript\nexport interface SessionMetadata {\n messageCount: number;\n firstPrompt: string; // first non-system-reminder user message, truncated to 200 chars\n summary: string; // summary field from last type=summary line\n firstTimestamp: string; // ISO from first JSONL line with timestamp field\n lastTimestamp: string; // ISO from last JSONL line with timestamp field\n parseErrors: number; // from forEachJsonlLine\n}\n\nexport function extractSessionMetadata(content: string): SessionMetadata\n```\n\nImplementation:\n1. Call forEachJsonlLine(content, onLine) from session-parser.ts\n2. In onLine callback:\n a. Accumulate messageCount via countMessagesForLine(parsed)\n b. Track firstTimestamp (first parsed.timestamp seen) and lastTimestamp (latest)\n c. For firstPrompt: first user message where content is string and does not contain \"<system-reminder>\", truncated to 200 chars\n d. For summary: overwrite on each type=summary line (keeps last)\n3. Return SessionMetadata with all fields\n\nNo string building, no JSON.stringify, no markdown processing. 
Just counting + timestamp capture + first-match extraction.\n\n## Acceptance Criteria\n- [ ] extractSessionMetadata() exported from src/server/services/session-metadata.ts\n- [ ] SessionMetadata interface exported\n- [ ] extractSessionMetadata(content).messageCount === parseSessionContent(content, id).messages.length on fixture files\n- [ ] firstPrompt skips system-reminder user messages\n- [ ] firstPrompt truncated to 200 chars\n- [ ] summary captures the LAST summary line (not first)\n- [ ] firstTimestamp and lastTimestamp correctly captured\n- [ ] Empty JSONL content returns messageCount: 0, empty strings for text fields\n\n## Files\n- CREATE: src/server/services/session-metadata.ts\n- MODIFY: src/server/services/session-parser.ts (ensure forEachJsonlLine, countMessagesForLine, RawLine are exported)\n\n## TDD Loop\nRED: tests/unit/session-metadata.test.ts — tests:\n - \"messageCount matches parseSessionContent on sample-session.jsonl\"\n - \"messageCount matches parseSessionContent on edge-cases.jsonl\"\n - \"firstPrompt skips system-reminder messages\"\n - \"firstPrompt truncated to 200 chars\"\n - \"summary captures last summary line\"\n - \"timestamps captured from first and last lines\"\n - \"empty content returns zero counts\"\nGREEN: Implement extractSessionMetadata()\nVERIFY: npm run test -- session-metadata\n\n## Edge Cases\n- JSONL with no user messages: firstPrompt is empty string\n- JSONL with no summary lines: summary is empty string\n- JSONL with no timestamps: firstTimestamp and lastTimestamp are empty strings\n- All user messages are system-reminders: firstPrompt is empty string\n- Single-line JSONL: firstTimestamp === lastTimestamp","status":"closed","priority":1,"issue_type":"task","created_at":"2026-02-04T17:47:32.534319Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:21:48.880124Z","closed_at":"2026-02-04T18:21:48.880075Z","close_reason":"Implemented extractSessionMetadata() in new file session-metadata.ts with 12 unit tests. 
Uses forEachJsonlLine/countMessagesForLine/classifyLine for parser parity. Fixture integration tests confirm messageCount matches parseSessionContent.","compaction_level":0,"original_size":0,"dependencies":[{"issue_id":"bd-3pr","depends_on_id":"bd-1tm","type":"blocks","created_at":"2026-02-04T17:49:30.505649Z","created_by":"tayloreernisse"},{"issue_id":"bd-3pr","depends_on_id":"bd-2og","type":"blocks","created_at":"2026-02-04T17:49:30.484195Z","created_by":"tayloreernisse"}]}
{"id":"bd-sks","title":"Epic: JSONL-First Session Discovery","description":"## Background\nEpic tracking the JSONL-First Session Discovery feature. Claude Code sessions-index.json is unreliable (17% loss rate: 542 unindexed JSONL files). The .jsonl files are the source of truth; the index is an unreliable convenience cache.\n\n## Scope\n- CP0: Parser parity foundations (shared line iterator + counting helpers)\n- CP1: Filesystem-first discovery with tiered metadata lookup\n- CP2: Persistent metadata cache (~/.cache/session-viewer/metadata.json)\n- CP3: Tier 1 index validation fast path\n- CP4: Bounded concurrency for performance targets\n\n## Acceptance Criteria\n- [ ] All .jsonl sessions appear in session list regardless of index state\n- [ ] Message counts in list view match detail view exactly (parser parity)\n- [ ] Warm start < 1s, cold start < 5s\n- [ ] Existing 30s in-memory cache and ?refresh=1 continue working\n- [ ] Zero config, works out of the box\n\n## PRD Reference\ndocs/prd-jsonl-first-discovery.md","status":"closed","priority":1,"issue_type":"epic","created_at":"2026-02-04T17:46:50.724897Z","created_by":"tayloreernisse","updated_at":"2026-02-04T18:28:42.868428Z","closed_at":"2026-02-04T18:28:42.868380Z","close_reason":"All checkpoints complete: CP0 (parser parity helpers), CP1 (filesystem-first discovery + metadata extraction), CP2 (persistent MetadataCache), CP3 (Tier 1 index validation), CP4 (bounded concurrency). 297 tests passing.","compaction_level":0,"original_size":0}