perf: force partial index for DiffNote queries (26-75x), batch stats counts (1.7x)
who.rs: Add `INDEXED BY idx_notes_diffnote_path_created` to all DiffNote query paths (expert, expert_details, reviews, path probes, suffix_probe). The SQLite planner was choosing idx_notes_system (106K rows, 38%) over the partial index (26K rows, 9.3%) when LIKE predicates are present. Measured: expert 1561ms -> 59ms (26x), reviews ~1200ms -> 16ms (75x).

stats.rs: Replace 12+ sequential COUNT(*) queries with conditional aggregates (`SUM(CASE WHEN ...)`) and count via the FTS5 shadow table (documents_fts_docsize) instead of the virtual table. Measured: warm 109ms -> 65ms (1.68x).
New file: plans/tui-prd-v2-frankentui.feedback-10.md (214 lines)
I found 9 high-impact revisions that materially improve correctness, robustness, and usability without reintroducing anything in `## Rejected Recommendations`.

### 1. Prevent stale async overwrites on **all** screens (not just search)

Right now, only `SearchExecuted` is generation-guarded. `IssueListLoaded`, `MrListLoaded`, `IssueDetailLoaded`, etc. can still race and overwrite newer state after rapid navigation/filtering. This is the biggest correctness risk in the current design.

```diff
diff --git a/PRD.md b/PRD.md
@@ message.rs
-    IssueListLoaded(Vec<IssueRow>),
+    IssueListLoaded { generation: u64, rows: Vec<IssueRow> },
@@
-    MrListLoaded(Vec<MrRow>),
+    MrListLoaded { generation: u64, rows: Vec<MrRow> },
@@
-    IssueDetailLoaded { key: EntityKey, detail: IssueDetail },
-    MrDetailLoaded { key: EntityKey, detail: MrDetail },
+    IssueDetailLoaded { generation: u64, key: EntityKey, detail: IssueDetail },
+    MrDetailLoaded { generation: u64, key: EntityKey, detail: MrDetail },

@@ update()
-            Msg::IssueListLoaded(result) => {
+            Msg::IssueListLoaded { generation, rows } => {
+                if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::IssueList), generation) {
+                    return Cmd::none();
+                }
                 self.state.set_loading(false);
-                self.state.issue_list.set_result(result);
+                self.state.issue_list.set_result(rows);
                 Cmd::none()
             }
```
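The guard above can be sketched with std only. `TaskKey` and `TaskSupervisor` here are simplified stand-ins for the PRD's types, not the real API:

```rust
use std::collections::HashMap;

// Simplified stand-in for the PRD's TaskKey (assumed shape).
#[derive(Hash, PartialEq, Eq, Clone, Debug)]
enum TaskKey {
    LoadScreen(&'static str),
}

// Tracks the newest generation issued per task key. A completion message
// produced by an older dispatch carries a smaller generation and is
// discarded when it arrives.
#[derive(Default)]
struct TaskSupervisor {
    current: HashMap<TaskKey, u64>,
}

impl TaskSupervisor {
    // Bump and return the generation a new dispatch should carry.
    fn dispatch(&mut self, key: TaskKey) -> u64 {
        let entry = self.current.entry(key).or_insert(0);
        *entry += 1;
        *entry
    }

    // A loaded-message is only applied if it is still the newest dispatch.
    fn is_current(&self, key: &TaskKey, generation: u64) -> bool {
        self.current.get(key).copied() == Some(generation)
    }
}

fn main() {
    let mut sup = TaskSupervisor::default();
    let first = sup.dispatch(TaskKey::LoadScreen("issue_list"));
    let second = sup.dispatch(TaskKey::LoadScreen("issue_list")); // user re-filtered
    assert!(!sup.is_current(&TaskKey::LoadScreen("issue_list"), first)); // stale: dropped
    assert!(sup.is_current(&TaskKey::LoadScreen("issue_list"), second)); // newest: applied
}
```

The same `is_current` check works for every `*Loaded` message, so one supervisor covers list, detail, and search screens uniformly.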

### 2. Make cancellation safe with task-owned SQLite interrupt handles

The plan mentions `sqlite3_interrupt()` but uses pooled shared reader connections. Interrupting a shared connection can cancel unrelated work. Use per-task reader leases and store the `InterruptHandle` in `TaskHandle`.

```diff
diff --git a/PRD.md b/PRD.md
@@ DbManager
     readers: Vec<Mutex<Connection>>,
+    // task-scoped interrupt handles prevent cross-task cancellation bleed:
+    // each dispatched query receives an owned ReaderLease

+pub struct ReaderLease {
+    conn: Connection,
+    interrupt: rusqlite::InterruptHandle,
+}
+
+impl DbManager {
+    pub fn lease_reader(&self) -> Result<ReaderLease, LoreError> { ... }
+}

@@ TaskHandle
 pub struct TaskHandle {
     pub key: TaskKey,
     pub generation: u64,
     pub cancel: Arc<CancelToken>,
+    pub interrupt: Option<rusqlite::InterruptHandle>,
 }

@@ cancellation
-Query interruption: ... fires sqlite3_interrupt() on the connection.
+Query interruption: cancel triggers the task's owned InterruptHandle only.
+No shared-connection interrupt is permitted.
```
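A self-contained sketch of the ownership property: an `AtomicBool` stands in for rusqlite's `InterruptHandle` (which the real code would obtain via `Connection::get_interrupt_handle()`), and the `Connection` field is omitted. The point is that each lease carries its *own* interrupt, so cancelling one task cannot abort another task's query:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Stand-in for rusqlite::InterruptHandle in this sketch.
struct InterruptFlag(Arc<AtomicBool>);

impl InterruptFlag {
    fn interrupt(&self) {
        self.0.store(true, Ordering::SeqCst);
    }
}

struct ReaderLease {
    // conn: Connection,  // omitted: sketch has no real database
    interrupted: Arc<AtomicBool>,
}

impl ReaderLease {
    fn new() -> (Self, InterruptFlag) {
        let flag = Arc::new(AtomicBool::new(false));
        (
            ReaderLease { interrupted: Arc::clone(&flag) },
            InterruptFlag(flag),
        )
    }

    // A long-running query observes this; SQLite performs the equivalent
    // check internally after sqlite3_interrupt() is called.
    fn is_interrupted(&self) -> bool {
        self.interrupted.load(Ordering::SeqCst)
    }
}

fn main() {
    let (lease_a, handle_a) = ReaderLease::new();
    let (lease_b, _handle_b) = ReaderLease::new();
    handle_a.interrupt(); // cancel task A only
    assert!(lease_a.is_interrupted());
    assert!(!lease_b.is_interrupted()); // task B is unaffected: no bleed
}
```

With a shared pooled connection there is a single interrupt target, so this isolation is impossible; that is the whole argument for per-task leases.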

### 3. Harden keyset pagination for multi-project and sort changes

An `updated_at + iid` cursor is not enough when rows share timestamps across projects or the sort mode changes. This can duplicate or skip rows.

```diff
diff --git a/PRD.md b/PRD.md
@@ issue_list.rs
-pub struct IssueCursor {
-    pub updated_at: i64,
-    pub iid: i64,
-}
+pub struct IssueCursor {
+    pub sort_field: SortField,
+    pub sort_order: SortOrder,
+    pub updated_at: Option<i64>,
+    pub created_at: Option<i64>,
+    pub iid: i64,
+    pub project_id: i64,  // deterministic tie-breaker
+    pub filter_hash: u64, // invalidates stale cursors on filter mutation
+}

@@ pagination section
-Windowed keyset pagination ...
+Windowed keyset pagination uses deterministic tuple ordering:
+`ORDER BY <primary_sort>, project_id, iid`.
+The cursor is rejected if its `filter_hash` or sort tuple mismatches the current query.
```
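The tuple ordering and cursor validation can be sketched as follows. Field names follow the diff above; `Row` and the single `updated_at DESC` sort are illustrative reductions:

```rust
use std::cmp::Ordering as CmpOrdering;
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Row {
    updated_at: i64,
    project_id: i64,
    iid: i64,
}

// ORDER BY updated_at DESC, project_id, iid: a total order even when
// timestamps collide across projects, so pages never duplicate or skip rows.
fn cmp_rows(a: &Row, b: &Row) -> CmpOrdering {
    b.updated_at
        .cmp(&a.updated_at)
        .then(a.project_id.cmp(&b.project_id))
        .then(a.iid.cmp(&b.iid))
}

fn filter_hash(filter: &str) -> u64 {
    let mut h = DefaultHasher::new();
    filter.hash(&mut h);
    h.finish()
}

// A cursor is only honored when it was minted against the same filter text.
fn cursor_valid(cursor_hash: u64, current_filter: &str) -> bool {
    cursor_hash == filter_hash(current_filter)
}

fn main() {
    // Two rows share updated_at; project_id breaks the tie deterministically.
    let a = Row { updated_at: 100, project_id: 1, iid: 9 };
    let b = Row { updated_at: 100, project_id: 2, iid: 3 };
    assert_eq!(cmp_rows(&a, &b), CmpOrdering::Less);

    let h = filter_hash("state:opened");
    assert!(cursor_valid(h, "state:opened"));
    assert!(!cursor_valid(h, "state:closed")); // stale cursor rejected
}
```

In SQL the same tuple appears in both the `ORDER BY` clause and the `WHERE (updated_at, project_id, iid) < (?, ?, ?)` continuation predicate, which is what makes the pagination window stable.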

### 4. Replace ad-hoc filter parsing with a small typed DSL

The current `split_whitespace()` parser is brittle and silently lossy. Add quoted values, negation, and strict parse errors.

```diff
diff --git a/PRD.md b/PRD.md
@@ filter_bar.rs
-    fn parse_tokens(&mut self) {
-        let text = self.input.value().to_string();
-        self.tokens = text.split_whitespace().map(|chunk| { ... }).collect();
-    }
+    fn parse_tokens(&mut self) {
+        // grammar (v1):
+        //   term  := [ "-" ] (field ":" value | quoted_text | bare_text)
+        //   value := quoted | unquoted
+        // examples:
+        //   state:opened label:"P1 blocker" -author:bot since:14d
+        self.tokens = filter_dsl::parse(self.input.value())?;
+    }

@@ section 8 / keybindings-help
+The filter parser surfaces actionable inline diagnostics with cursor position,
+and never silently drops unknown fields.
```
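A minimal sketch of the v1 grammar above. The `Token` shape is an assumption (the PRD's `filter_dsl` module would own the real types); it handles negation, `field:value`, and double-quoted values containing spaces, and fails loudly instead of dropping input:

```rust
#[derive(Debug, PartialEq, Eq)]
struct Token {
    negated: bool,
    field: Option<String>,
    value: String,
}

fn parse(input: &str) -> Result<Vec<Token>, String> {
    let mut tokens = Vec::new();
    let mut chars = input.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            chars.next();
            continue;
        }
        // leading '-' negates the whole term
        let negated = if c == '-' { chars.next(); true } else { false };
        let mut head = String::new();
        let mut field = None;
        let mut in_quotes = false;
        while let Some(&c) = chars.peek() {
            match c {
                '"' => { chars.next(); in_quotes = !in_quotes; }
                // first unquoted ':' splits field from value
                ':' if field.is_none() && !in_quotes => {
                    chars.next();
                    field = Some(std::mem::take(&mut head));
                }
                c if c.is_whitespace() && !in_quotes => break,
                c => { chars.next(); head.push(c); }
            }
        }
        if in_quotes {
            // strict error instead of silent loss
            return Err("unterminated quote".to_string());
        }
        tokens.push(Token { negated, field, value: head });
    }
    Ok(tokens)
}

fn main() {
    let toks = parse(r#"state:opened label:"P1 blocker" -author:bot"#).unwrap();
    assert_eq!(toks.len(), 3);
    assert_eq!(toks[1].field.as_deref(), Some("label"));
    assert_eq!(toks[1].value, "P1 blocker");
    assert!(toks[2].negated);
    assert!(parse(r#"label:"oops"#).is_err());
}
```

A production version would also track byte offsets per token so the inline diagnostics can point at the exact cursor position mentioned in the diff.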

### 5. Add render caches for markdown/tree shaping

Markdown and tree shaping are currently recomputed on every frame in several snippets. Cache render artifacts by `(entity, width, theme, content_hash)` to protect frame time.

```diff
diff --git a/PRD.md b/PRD.md
@@ module structure
+render_cache.rs    # width/theme/content-hash keyed cache for markdown + tree layouts

@@ Assumptions / Performance
+Detail and search preview rendering uses memoized render artifacts.
+Cache invalidation triggers: content hash change, terminal width change, theme change.
```
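The memoization can be sketched like this; `Layout` stands in for the real rendered artifact (styled lines, tree rows), and the key fields mirror the invalidation triggers listed above:

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

#[derive(Hash, PartialEq, Eq, Clone, Debug)]
struct CacheKey {
    entity_id: u64,
    width: u16,            // terminal width change -> new key
    theme: &'static str,   // theme change -> new key
    content_hash: u64,     // content change -> new key
}

type Layout = Vec<String>;

#[derive(Default)]
struct RenderCache {
    map: HashMap<CacheKey, Layout>,
    hits: usize,
    misses: usize,
}

impl RenderCache {
    // Runs the expensive render closure only on a cache miss.
    fn get_or_render(&mut self, key: CacheKey, render: impl FnOnce() -> Layout) -> &Layout {
        if self.map.contains_key(&key) {
            self.hits += 1;
        } else {
            self.misses += 1;
            let layout = render();
            self.map.insert(key.clone(), layout);
        }
        &self.map[&key]
    }
}

fn content_hash(text: &str) -> u64 {
    let mut h = DefaultHasher::new();
    text.hash(&mut h);
    h.finish()
}

fn main() {
    let mut cache = RenderCache::default();
    let key = CacheKey { entity_id: 1, width: 80, theme: "dark", content_hash: content_hash("# title") };
    cache.get_or_render(key.clone(), || vec!["title".to_string()]);
    cache.get_or_render(key.clone(), || unreachable!("served from cache"));
    // A width change is a different key: exactly one more render.
    let resized = CacheKey { width: 120, ..key };
    cache.get_or_render(resized, || vec!["title".to_string()]);
    assert_eq!((cache.hits, cache.misses), (1, 2));
}
```

A real cache would also bound its size (LRU or generation sweep) so long sessions do not accumulate layouts for every width the terminal has ever had.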

### 6. Use one-shot timers for debounce/prefix timeout

`Every` is periodic; it wakes repeatedly and can produce edge-case repeated firings. One-shot subscriptions are cleaner and cheaper.

```diff
diff --git a/PRD.md b/PRD.md
@@ subscriptions()
-        if self.state.search.debounce_pending() {
-            subs.push(Box::new(
-                Every::with_id(3, Duration::from_millis(200), move || {
-                    Msg::SearchDebounceFired { generation }
-                })
-            ));
-        }
+        if self.state.search.debounce_pending() {
+            subs.push(Box::new(
+                After::with_id(3, Duration::from_millis(200), move || {
+                    Msg::SearchDebounceFired { generation }
+                })
+            ));
+        }

@@ InputMode GoPrefix timeout
-The tick subscription compares clock instant...
+The GoPrefix timeout is a one-shot `After(500ms)` tied to the prefix generation.
```
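The one-shot semantics can be sketched with a thread and a channel; `arm_one_shot` is a hypothetical stand-in for the framework's `After` subscription, carrying the generation captured at arm time so stale firings can be discarded:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Fires exactly once after `delay`, then the sender is dropped: no periodic
// wakeups, no repeated firings to filter out.
fn arm_one_shot(delay: Duration, generation: u64) -> mpsc::Receiver<u64> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        thread::sleep(delay);
        let _ = tx.send(generation); // single firing
    });
    rx
}

fn main() {
    let rx = arm_one_shot(Duration::from_millis(20), 7);
    assert_eq!(rx.recv().unwrap(), 7);
    // Channel closes after the single firing.
    assert!(rx.recv().is_err());
}
```

Contrast with `Every`: a periodic subscription must be explicitly unsubscribed after the debounce fires, and any tick that slips in before teardown is a spurious `SearchDebounceFired`.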

### 7. New feature: list “Quick Peek” panel (`Space`) for triage speed

This adds immediate value without v2-level scope. Users can inspect the selected issue/MR's metadata and snippet without entering the detail screen and coming back.

```diff
diff --git a/PRD.md b/PRD.md
@@ 5.2 Issue List
-Interaction: Enter detail
+Interaction: Enter detail, Space quick-peek (toggle right preview pane)

@@ 5.4 MR List
+Quick Peek mode mirrors Issue List: metadata + first discussion snippet + cross-refs.

@@ 8.2 List Screens
 | `Enter` | Open selected item |
+| `Space` | Toggle Quick Peek panel for selected row |
```

### 8. Upgrade compatibility handshake from integer to machine-readable contract

A single integer compat version is too coarse for real drift detection. Keep it simple but structured.

```diff
diff --git a/PRD.md b/PRD.md
@@ Nightly Rust Strategy / Compatibility contract
-1. Binary compat version (`lore-tui --compat-version`) — integer check ...
+1. Binary compat contract (`lore-tui --compat-json`) — JSON:
+   `{ "protocol": 1, "compat_version": 2, "min_schema": 14, "max_schema": 16, "build": "..." }`
+   `lore` validates protocol + compat + schema range before spawn.

@@ CLI integration
-fn validate_tui_compat(...) { ... --compat-version ... }
+fn validate_tui_compat(...) { ... --compat-json ... }
```
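The validation the `lore` side would run after reading the contract can be sketched like this. JSON deserialization is omitted; the field names follow the diff above, while `EXPECTED_PROTOCOL` and `MIN_COMPAT` are assumed host-side constants:

```rust
#[derive(Debug)]
struct CompatContract {
    protocol: u32,
    compat_version: u32,
    min_schema: u32,
    max_schema: u32,
}

#[derive(Debug, PartialEq)]
enum CompatError {
    ProtocolMismatch,
    CompatTooOld,
    SchemaOutOfRange,
}

// Host-side expectations (assumptions for this sketch).
const EXPECTED_PROTOCOL: u32 = 1;
const MIN_COMPAT: u32 = 2;

// Checked in order: wire protocol, feature compat floor, then whether the
// live database schema falls inside the binary's supported range.
fn validate(contract: &CompatContract, db_schema: u32) -> Result<(), CompatError> {
    if contract.protocol != EXPECTED_PROTOCOL {
        return Err(CompatError::ProtocolMismatch);
    }
    if contract.compat_version < MIN_COMPAT {
        return Err(CompatError::CompatTooOld);
    }
    if db_schema < contract.min_schema || db_schema > contract.max_schema {
        return Err(CompatError::SchemaOutOfRange);
    }
    Ok(())
}

fn main() {
    let c = CompatContract { protocol: 1, compat_version: 2, min_schema: 14, max_schema: 16 };
    assert_eq!(validate(&c, 15), Ok(()));
    assert_eq!(validate(&c, 17), Err(CompatError::SchemaOutOfRange));
}
```

The schema *range* is the part the single integer could not express: it lets an older binary refuse a too-new database (and vice versa) with a precise error instead of a generic mismatch.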

### 9. Fix sync stream bug and formalize progress coalescing

The current snippet calls `try_send` for the same progress event twice in one callback path, and the queue-depth arithmetic (`2048 - …`) is wrong. Progress spam should also be coalesced by lane.

```diff
diff --git a/PRD.md b/PRD.md
@@ start_sync_task()
-        let current_depth = 2048 - tx.try_send(Msg::SyncProgress(event.clone()))
-            .err().map_or(0, |_| 1);
-        max_queue_depth = max_queue_depth.max(current_depth);
-        if tx.try_send(Msg::SyncProgress(event.clone())).is_err() {
+        // coalesce by lane key at <=30Hz; one send attempt per flush
+        coalescer.update(event.clone());
+        if let Some(batch) = coalescer.flush_ready() {
+            if tx.try_send(Msg::SyncProgressBatch(batch)).is_err() {
                 dropped_count += 1;
                 let _ = tx.try_send(Msg::SyncBackpressureDrop);
+            } else {
+                max_queue_depth = max_queue_depth.max(observed_queue_depth());
+            }
         }
```
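A lane-keyed coalescer can be sketched as below. For testability, flush readiness takes the elapsed time explicitly rather than reading a wall clock; the real task would pass `Instant::now() - last_flush`. Names follow the diff above where they exist; the rest are illustrative:

```rust
use std::collections::HashMap;
use std::time::Duration;

#[derive(Clone, Debug, PartialEq)]
struct ProgressEvent {
    lane: &'static str,
    done: u64,
    total: u64,
}

#[derive(Default)]
struct Coalescer {
    latest: HashMap<&'static str, ProgressEvent>,
}

impl Coalescer {
    // Later events for the same lane overwrite earlier ones, so a burst of
    // per-row callbacks collapses to one entry per lane.
    fn update(&mut self, event: ProgressEvent) {
        self.latest.insert(event.lane, event);
    }

    // Drain at most once per interval (~33ms for <=30Hz).
    fn flush_ready(&mut self, since_last_flush: Duration) -> Option<Vec<ProgressEvent>> {
        if since_last_flush < Duration::from_millis(33) || self.latest.is_empty() {
            return None;
        }
        Some(self.latest.drain().map(|(_, e)| e).collect())
    }
}

fn main() {
    let mut c = Coalescer::default();
    for done in 0..100 {
        c.update(ProgressEvent { lane: "issues", done, total: 100 });
    }
    c.update(ProgressEvent { lane: "mrs", done: 5, total: 50 });

    // Too soon: nothing sent, events keep coalescing.
    assert!(c.flush_ready(Duration::from_millis(10)).is_none());

    // Interval elapsed: one batch, one entry per lane, latest value wins.
    let batch = c.flush_ready(Duration::from_millis(40)).unwrap();
    assert_eq!(batch.len(), 2);
    assert!(batch.iter().any(|e| e.lane == "issues" && e.done == 99));
}
```

This also makes the backpressure story precise: at most one `try_send` per flush window, so a drop counts one batch, not one of a hundred redundant events.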

---
If you want, I can produce a single consolidated patch-style rewrite of Sections `4.x`, `5.2/5.4`, `8.2`, `9.3`, and `10.x` so you can drop it directly into iteration 10.