I reviewed the full PRD end-to-end and avoided all items already listed in `## Rejected Recommendations`. These are the highest-impact revisions I'd make.

1. **Fix keybinding/state-machine correctness gaps (critical)**

The plan currently has an internal conflict: the doc says jump-forward is `Alt+o`, but the code sample uses `Ctrl+i` (which collides with `Tab` in many terminals). Also, the `g`-prefix timeout depends on `Tick`, but `Tick` isn't guaranteed when idle, so prefix mode can get stuck. This is a correctness bug, not polish.

```diff
@@ 8.1 Global (Available Everywhere)
 | `Ctrl+O` | Jump backward in jump list (entity hops) |
 | `Alt+o`  | Jump forward in jump list (entity hops)  |
+| `Backspace` | Go back (when no text input is focused) |
@@ 4.4 LoreApp::interpret_key
-(KeyCode::Char('i'), m) if m.contains(Modifiers::CTRL) => {
-    return Some(Msg::JumpForward);
-}
+(KeyCode::Char('o'), m) if m.contains(Modifiers::ALT) => {
+    return Some(Msg::JumpForward);
+}
+(KeyCode::Backspace, Modifiers::NONE) => {
+    return Some(Msg::GoBack);
+}
@@ 4.4 Model::subscriptions
+// Go-prefix timeout enforcement must tick even when nothing is loading.
+if matches!(self.input_mode, InputMode::GoPrefix { .. }) {
+    subs.push(Box::new(
+        Every::with_id(2, Duration::from_millis(50), || Msg::Tick),
+    ));
+}
```

2. **Make the `TaskSupervisor` API internally consistent and enforceable**

The plan uses `submit()`/`is_current()` in one place and `register()`/`next_generation()` in another. That inconsistency will cause implementation drift and stale-result bugs. Use one coherent API with a returned handle containing `{key, generation, cancel_token}`.
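As a sketch, the unified surface could look like the following. This is a minimal illustration under stated assumptions, not the PRD's implementation: `TaskKey` is shown as a `String` placeholder for the PRD's key type, and the cancel token is a plain `Arc<AtomicBool>`.

```rust
use std::collections::HashMap;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Placeholder: the PRD's real TaskKey enum would slot in here.
type TaskKey = String;

#[derive(Clone)]
pub struct TaskHandle {
    pub key: TaskKey,
    pub generation: u64,
    pub cancel: Arc<AtomicBool>,
}

#[derive(Default)]
pub struct TaskSupervisor {
    active: HashMap<TaskKey, TaskHandle>,
    next_gen: u64,
}

impl TaskSupervisor {
    /// Cancel any in-flight task for `key`, then hand out a fresh handle.
    pub fn submit(&mut self, key: TaskKey) -> TaskHandle {
        if let Some(old) = self.active.get(&key) {
            // Supersede stale work: the old task observes this flag and stops.
            old.cancel.store(true, Ordering::Relaxed);
        }
        self.next_gen += 1;
        let handle = TaskHandle {
            key: key.clone(),
            generation: self.next_gen,
            cancel: Arc::new(AtomicBool::new(false)),
        };
        self.active.insert(key, handle.clone());
        handle
    }

    /// Only results carrying the latest generation for a key are applied.
    pub fn is_current(&self, key: &TaskKey, generation: u64) -> bool {
        self.active.get(key).map_or(false, |h| h.generation == generation)
    }

    /// Clear the entry once the current task finishes; stale completions are ignored.
    pub fn complete(&mut self, key: &TaskKey, generation: u64) {
        if self.is_current(key, generation) {
            self.active.remove(key);
        }
    }
}
```

The point of the single `submit()` entry is that dedup, cancellation, and generation bumping cannot drift apart, because there is no second path that allocates generations.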
```diff
@@ 4.5.1 Task Supervisor (Dedup + Cancellation + Priority)
-pub struct TaskSupervisor {
-    active: HashMap<TaskKey, Arc<AtomicBool>>,
-    generation: AtomicU64,
-}
+pub struct TaskSupervisor {
+    active: HashMap<TaskKey, TaskHandle>,
+}
+
+pub struct TaskHandle {
+    pub key: TaskKey,
+    pub generation: u64,
+    pub cancel: Arc<AtomicBool>,
+}

-pub fn register(&mut self, key: TaskKey) -> Arc<AtomicBool>
-pub fn next_generation(&self) -> u64
+pub fn submit(&mut self, key: TaskKey) -> TaskHandle
+pub fn is_current(&self, key: &TaskKey, generation: u64) -> bool
+pub fn complete(&mut self, key: &TaskKey, generation: u64)
```

3. **Replace thread-sleep debounce with runtime timer messages**

`std::thread::sleep(200ms)` inside task closures wastes pool threads under fast typing and reduces responsiveness under contention. Use timer-driven debounce messages and only fire the latest generation. This improves latency stability on large datasets.

```diff
@@ 4.3 Core Types (Msg enum)
+SearchDebounceArmed { generation: u64, query: String },
+SearchDebounceFired { generation: u64 },
@@ 4.4 maybe_debounced_query
-Cmd::task(move || {
-    std::thread::sleep(std::time::Duration::from_millis(200));
-    ...
-})
+// Arm debounce only; the runtime timer emits SearchDebounceFired.
+Cmd::msg(Msg::SearchDebounceArmed { generation, query })
@@ 4.4 subscriptions()
+if self.state.search.debounce_pending() {
+    subs.push(Box::new(
+        Every::with_id(3, Duration::from_millis(200), || Msg::SearchDebounceFired { generation: ... }),
+    ));
+}
```

4. **Harden the `DbManager` API to avoid lock-poison panics and accidentally long-held guards**

Returning a raw `MutexGuard` invites accidental lock-scope expansion and `expect("lock poisoned")` panics. Move to closure-based access (`with_reader`, `with_writer`) returning `Result`, and use cached statements. This reduces deadlock risk and tail latency.
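A minimal sketch of the closure-based shape, assuming a std `Mutex`. The `Connection` here is an empty placeholder standing in for `rusqlite::Connection` so the snippet is self-contained; the poison-recovery idiom is the part that matters.

```rust
use std::sync::{Mutex, PoisonError};

// Placeholder standing in for rusqlite::Connection.
pub struct Connection;

pub struct DbManager {
    reader: Mutex<Connection>,
}

impl DbManager {
    /// Closure-based access: the guard cannot outlive `f`, so callers cannot
    /// accidentally hold the lock across awaits or long computations, and a
    /// poisoned lock is recovered instead of panicking.
    pub fn with_reader<T, E>(
        &self,
        f: impl FnOnce(&Connection) -> Result<T, E>,
    ) -> Result<T, E> {
        let guard = self.reader.lock().unwrap_or_else(PoisonError::into_inner);
        f(&guard)
    }
}
```

`unwrap_or_else(PoisonError::into_inner)` deliberately treats a poisoned mutex as usable: for a read-mostly SQLite connection, a panic in some earlier closure doesn't make the connection itself invalid, so degrading gracefully beats crashing the TUI.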
```diff
@@ 4.4 DbManager
-pub fn reader(&self) -> MutexGuard<'_, Connection> { ...expect("reader lock poisoned") }
-pub fn writer(&self) -> MutexGuard<'_, Connection> { ...expect("writer lock poisoned") }
+pub fn with_reader<T>(&self, f: impl FnOnce(&Connection) -> Result<T>) -> Result<T>
+pub fn with_writer<T>(&self, f: impl FnOnce(&Connection) -> Result<T>) -> Result<T>
@@ 10.11 action.rs
-let conn = db.reader();
-match fetch_issues(&conn, &filter) { ... }
+match db.with_reader(|conn| fetch_issues(conn, &filter)) { ... }
+// Query hot paths use prepare_cached() to reduce parse overhead.
```

5. **Add a read-path entity cache (LRU) for repeated drill-in/out workflows**

Your core daily flow is Enter/Esc bouncing between list and detail. Without caching, identical detail payloads are re-queried repeatedly. A bounded LRU keyed by `EntityKey`, invalidated on sync completion, gives near-instant reopen behavior and reduces DB pressure.

```diff
@@ 4.1 Module Structure
+entity_cache.rs    # Bounded LRU cache for detail payloads
@@ app.rs LoreApp fields
+entity_cache: EntityCache,
@@ load_screen(Screen::IssueDetail / MrDetail)
+if let Some(cached) = self.entity_cache.get_issue(&key) {
+    return Cmd::msg(Msg::IssueDetailLoaded { key, detail: cached.clone() });
+}
@@ Msg::IssueDetailLoaded / Msg::MrDetailLoaded handlers
+self.entity_cache.put_issue(key.clone(), detail.clone());
@@ Msg::SyncCompleted
+self.entity_cache.invalidate_all();
```

6. **Tighten sync-stream observability and drop semantics without adding heavy architecture**

You already handle backpressure, but operators need visibility when it happens. Track the dropped-progress count and max queue depth in state and surface them in the running/summary views. This keeps the current simple design while making reliability measurable.
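One low-overhead way to track both stats, sketched with a plain `VecDeque` standing in for the sync progress queue. Everything here (`ProgressQueue`, `push_progress`) is illustrative naming, not from the PRD; the point is that both counters cost one integer update per event.

```rust
use std::collections::VecDeque;

/// Bounded progress queue that counts drops instead of hiding them.
pub struct ProgressQueue<T> {
    buf: VecDeque<T>,
    capacity: usize,
    pub dropped_progress: u64,
    pub max_queue_depth: usize,
}

impl<T> ProgressQueue<T> {
    pub fn new(capacity: usize) -> Self {
        Self { buf: VecDeque::new(), capacity, dropped_progress: 0, max_queue_depth: 0 }
    }

    /// Progress events are droppable under backpressure, but every drop is
    /// counted so it can be surfaced in the running/summary views.
    pub fn push_progress(&mut self, item: T) {
        if self.buf.len() >= self.capacity {
            self.dropped_progress += 1;
            return;
        }
        self.buf.push_back(item);
        self.max_queue_depth = self.max_queue_depth.max(self.buf.len());
    }

    pub fn pop(&mut self) -> Option<T> {
        self.buf.pop_front()
    }
}
```

Terminal events (completed/failed/cancelled) would bypass `push_progress` entirely and use a must-deliver path, which is what the proposed success criterion measures.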
```diff
@@ 4.3 Msg
+SyncStreamStats { dropped_progress: u64, max_queue_depth: usize },
@@ 5.9 Sync (Running mode footer)
-| Esc cancel  f full sync  e embed after  d dry-run  l log level |
+| Esc cancel  f full sync  e embed after  d dry-run  l log level  stats: drop=12 qmax=1847 |
@@ 9.3 Success criteria
+24. Sync stream stats are emitted and rendered; terminal events (completed/failed/cancelled) are delivered 100% of the time under induced backpressure.
```

7. **Make crash reporting match the promised diagnostic value**

The PRD promises event-replay context, but the sample hook writes only the panic text. Add explicit crash-context capture (last events, current screen, task handles, build id, db fingerprint) and a retention policy. This materially improves post-mortem debugging.

```diff
@@ 4.1 Module Structure
+crash_context.rs    # ring buffer of normalized events + task/screen snapshot
@@ 10.3 install_panic_hook_for_tui()
-let report = crate::redact::redact_sensitive(&format!("{panic_info:#?}"));
+let ctx = crate::crash_context::snapshot();
+let report = crate::redact::redact_sensitive(&format!("{panic_info:#?}\n{ctx:#?}"));
+// Retention: keep the latest 20 crash files; delete only the oldest metadata entries.
```

8. **Add a Search Facets panel for faster triage (high-value feature, low risk)**

Search is central, but right now filtering requires manual field edits. Add facet counts (issues, MRs, discussions, top labels/projects/authors) with one-key apply. This makes search more compelling and actionable without introducing schema changes.
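Facet counts can be computed in a single pass over the in-memory result set, which is why no schema change is needed. A sketch, where the `SearchHit` fields are assumptions for illustration rather than the PRD's result schema:

```rust
use std::collections::HashMap;

// Illustrative result row; field names are assumptions, not the PRD schema.
pub struct SearchHit {
    pub entity_type: &'static str, // "issue" | "mr" | "discussion"
    pub labels: Vec<String>,
}

/// One pass over the current results yields both the type facet and the
/// label facet; project/author facets would be two more HashMaps in the
/// same loop.
pub fn facet_counts(
    hits: &[SearchHit],
) -> (HashMap<&'static str, usize>, HashMap<String, usize>) {
    let mut by_type = HashMap::new();
    let mut by_label = HashMap::new();
    for hit in hits {
        *by_type.entry(hit.entity_type).or_insert(0) += 1;
        for label in &hit.labels {
            *by_label.entry(label.clone()).or_insert(0) += 1;
        }
    }
    (by_type, by_label)
}
```

Since the counts derive from results already in memory, the facets panel refreshes for free on every debounced query, and "apply facet" is just a filter on the same vector.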
```diff
@@ 5.6 Search
-- Layout: Split pane — results list (left) + preview (right)
+- Layout: Three-pane on wide terminals — results (left) + preview (center) + facets (right)
+
+**Facets panel:**
+- Entity type counts (issue/MR/discussion)
+- Top labels/projects/authors for the current query
+- `1/2/3` quick-apply type facet; `l` cycles the top label facet
@@ 8.2 List/Search keybindings
+| `1` `2` `3` | Apply facet: Issue / MR / Discussion |
+| `l` | Apply next top-label facet |
```

9. **Strengthen text sanitization for terminal edge cases**

The current sanitizer is strong but still misses some control-character edge cases (the C1 control block, directional marks beyond the listed bidi set). Add those and test them. This closes spoofing/render-confusion gaps with minimal complexity.

```diff
@@ 10.4.1 sanitize_for_terminal()
+// Strip the C1 control block (U+0080..U+009F) and additional directional marks
+c if ('\u{0080}'..='\u{009F}').contains(&c) => {}
+'\u{200E}' | '\u{200F}' | '\u{061C}' => {} // LRM, RLM, ALM
@@ tests
+#[test] fn strips_c1_controls() { ... }
+#[test] fn strips_lrm_rlm_alm() { ... }
```

10. **Add an explicit vertical-slice gate before broad screen expansion**

The plan is comprehensive, but risk is still front-loaded on framework and runtime behavior. Insert a strict vertical-slice gate (Dashboard + IssueList + IssueDetail + Sync running) with performance and stability thresholds before the Phase 3 features. This reduces rework if foundational assumptions break.

```diff
@@ 9.2 Phases
+section Phase 2.5 — Vertical Slice Gate
+Dashboard + IssueList + IssueDetail + Sync (running) integrated :p25a, after p2c, 3d
+Gate: p95 nav latency < 75ms on M tier; zero stuck-input-state bugs; cancel p95 < 2s :p25b, after p25a, 1d
+Only then proceed to Search/Timeline/Who/Palette expansion.
```

If you want, I can produce a full consolidated `diff` block against the entire PRD text (single patch), but the above is the set I'd prioritize first.
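For item 9, the proposed stripping is easy to verify in isolation. A self-contained sketch; the `strip_risky_chars` helper name is illustrative, and the real logic would live inside the existing `sanitize_for_terminal()` match:

```rust
/// Strip C0/C1 control characters and invisible directional marks before
/// rendering. Minimal sketch; tab and newline are deliberately preserved.
pub fn strip_risky_chars(input: &str) -> String {
    input
        .chars()
        .filter(|&c| match c {
            // C0 controls (except tab U+0009 and newline U+000A) and DEL
            '\u{0000}'..='\u{0008}' | '\u{000B}'..='\u{001F}' | '\u{007F}' => false,
            // C1 control block, reachable via raw UTF-8 even when escape
            // sequences are otherwise filtered
            '\u{0080}'..='\u{009F}' => false,
            // Invisible directional marks: LRM, RLM, ALM
            '\u{200E}' | '\u{200F}' | '\u{061C}' => false,
            _ => true,
        })
        .collect()
}
```

U+009B in the C1 block is a one-byte CSI, so dropping the whole block closes the same escape-injection hole as filtering `ESC [` sequences, which is why it belongs in the sanitizer rather than only in tests.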