perf: force partial index for DiffNote queries (26-75x), batch stats counts (1.7x)
who.rs: Add INDEXED BY idx_notes_diffnote_path_created to all DiffNote query paths (expert, expert_details, reviews, path probes, suffix_probe). The SQLite planner was choosing idx_notes_system (106K rows, 38%) over the partial index (26K rows, 9.3%) when LIKE predicates are present. Measured: expert 1561ms -> 59ms (26x), reviews ~1200ms -> 16ms (75x).

stats.rs: Replace 12+ sequential COUNT(*) queries with conditional aggregates (SUM(CASE WHEN ...)) and use the FTS5 shadow table (documents_fts_docsize) instead of the virtual table for counting. Measured: warm 109ms -> 65ms (1.68x).
plans/tui-prd-v2-frankentui.feedback-10.md (new file, 214 lines)
I found 9 high-impact revisions that materially improve correctness, robustness, and usability without reintroducing anything in `## Rejected Recommendations`.

### 1. Prevent stale async overwrites on **all** screens (not just search)
Right now, only `SearchExecuted` is generation-guarded. `IssueListLoaded`, `MrListLoaded`, `IssueDetailLoaded`, etc. can still race and overwrite newer state after rapid navigation/filtering. This is the biggest correctness risk in the current design.

```diff
diff --git a/PRD.md b/PRD.md
@@ message.rs
- IssueListLoaded(Vec<IssueRow>),
+ IssueListLoaded { generation: u64, rows: Vec<IssueRow> },
@@
- MrListLoaded(Vec<MrRow>),
+ MrListLoaded { generation: u64, rows: Vec<MrRow> },
@@
- IssueDetailLoaded { key: EntityKey, detail: IssueDetail },
- MrDetailLoaded { key: EntityKey, detail: MrDetail },
+ IssueDetailLoaded { generation: u64, key: EntityKey, detail: IssueDetail },
+ MrDetailLoaded { generation: u64, key: EntityKey, detail: MrDetail },

@@ update()
- Msg::IssueListLoaded(result) => {
+ Msg::IssueListLoaded { generation, rows } => {
+ if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::IssueList), generation) {
+ return Cmd::none();
+ }
self.state.set_loading(false);
- self.state.issue_list.set_result(result);
+ self.state.issue_list.set_result(rows);
Cmd::none()
}
```
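The guard logic above can be sketched with plain std types. This is a minimal illustration of the generation bookkeeping, not the PRD's actual `TaskSupervisor`; the names and the two-variant `TaskKey` are simplifications:

```rust
use std::collections::HashMap;

// Simplified stand-in for the PRD's richer TaskKey.
#[derive(Clone, PartialEq, Eq, Hash)]
pub enum TaskKey {
    IssueList,
    MrList,
}

#[derive(Default)]
pub struct TaskSupervisor {
    latest: HashMap<TaskKey, u64>,
}

impl TaskSupervisor {
    /// Registers a new attempt for this key and returns its generation.
    /// Any previously issued generation for the same key becomes stale.
    pub fn submit(&mut self, key: TaskKey) -> u64 {
        let counter = self.latest.entry(key).or_insert(0);
        *counter += 1;
        *counter
    }

    /// A completion message is applied only if it carries the latest
    /// generation for its key; otherwise update() drops it.
    pub fn is_current(&self, key: &TaskKey, generation: u64) -> bool {
        self.latest.get(key) == Some(&generation)
    }
}
```

The same check works for every `*Loaded` message, which is what makes guarding all screens cheap once the supervisor exists.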

### 2. Make cancellation safe with task-owned SQLite interrupt handles
The plan mentions `sqlite3_interrupt()` but uses pooled shared reader connections. Interrupting a shared connection can cancel unrelated work. Use per-task reader leases and store `InterruptHandle` in `TaskHandle`.

```diff
diff --git a/PRD.md b/PRD.md
@@ DbManager
- readers: Vec<Mutex<Connection>>,
+ readers: Vec<Mutex<Connection>>,
+ // task-scoped interrupt handles prevent cross-task cancellation bleed
+ // each dispatched query receives an owned ReaderLease

+pub struct ReaderLease {
+ conn: Connection,
+ interrupt: rusqlite::InterruptHandle,
+}
+
+impl DbManager {
+ pub fn lease_reader(&self) -> Result<ReaderLease, LoreError> { ... }
+}

@@ TaskHandle
pub struct TaskHandle {
pub key: TaskKey,
pub generation: u64,
pub cancel: Arc<CancelToken>,
+ pub interrupt: Option<rusqlite::InterruptHandle>,
}

@@ cancellation
-Query interruption: ... fires sqlite3_interrupt() on the connection.
+Query interruption: cancel triggers the task's owned InterruptHandle only.
+No shared-connection interrupt is permitted.
```

### 3. Harden keyset pagination for multi-project and sort changes
`updated_at + iid` cursor is not enough when rows share timestamps across projects or sort mode changes. This can duplicate/skip rows.

```diff
diff --git a/PRD.md b/PRD.md
@@ issue_list.rs
-pub struct IssueCursor {
- pub updated_at: i64,
- pub iid: i64,
-}
+pub struct IssueCursor {
+ pub sort_field: SortField,
+ pub sort_order: SortOrder,
+ pub updated_at: Option<i64>,
+ pub created_at: Option<i64>,
+ pub iid: i64,
+ pub project_id: i64, // deterministic tie-breaker
+ pub filter_hash: u64, // invalidates stale cursors on filter mutation
+}

@@ pagination section
-Windowed keyset pagination ...
+Windowed keyset pagination uses deterministic tuple ordering:
+`ORDER BY <primary_sort>, project_id, iid`.
+Cursor is rejected if `filter_hash` or sort tuple mismatches current query.
```
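The rejection rule can be sketched as a pure predicate. This is illustrative only — the real `SortField`/`SortOrder` live in the PRD's list module, and the hash inputs are whatever the filter state serializes to:

```rust
// Simplified mirrors of the proposed cursor fields.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SortField { UpdatedAt, CreatedAt }

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SortOrder { Asc, Desc }

pub struct IssueCursor {
    pub sort_field: SortField,
    pub sort_order: SortOrder,
    pub updated_at: Option<i64>,
    pub created_at: Option<i64>,
    pub iid: i64,
    pub project_id: i64,
    pub filter_hash: u64,
}

/// A cursor is honored only if it was minted under the same sort tuple
/// and filter hash as the query about to run. On mismatch the caller
/// restarts from page one rather than risk duplicates or skips.
pub fn cursor_valid(
    c: &IssueCursor,
    field: SortField,
    order: SortOrder,
    filter_hash: u64,
) -> bool {
    c.sort_field == field && c.sort_order == order && c.filter_hash == filter_hash
}
```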

### 4. Replace ad-hoc filter parsing with a small typed DSL
Current `split_whitespace()` parser is brittle and silently lossy. Add quoted values, negation, and strict parse errors.

```diff
diff --git a/PRD.md b/PRD.md
@@ filter_bar.rs
- fn parse_tokens(&mut self) {
- let text = self.input.value().to_string();
- self.tokens = text.split_whitespace().map(|chunk| { ... }).collect();
- }
+ fn parse_tokens(&mut self) {
+ // grammar (v1):
+ // term := [ "-" ] (field ":" value | quoted_text | bare_text)
+ // value := quoted | unquoted
+ // examples:
+ // state:opened label:"P1 blocker" -author:bot since:14d
+ self.tokens = filter_dsl::parse(self.input.value())?;
+ }

@@ section 8 / keybindings-help
+Filter parser surfaces actionable inline diagnostics with cursor position,
+and never silently drops unknown fields.
```
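A tokenizer for that grammar fits in one screen. This is a sketch of the v1 grammar under the stated rules (quoted values, `-` negation, hard errors with a byte offset instead of silent drops); `filter_dsl` itself and its error type are the PRD's to define:

```rust
#[derive(Debug, PartialEq)]
pub enum Token {
    Field { name: String, value: String, negated: bool },
    Text { value: String, negated: bool },
}

/// Parses `[-]field:value`, `[-]"quoted text"`, and bare words.
/// Returns Err with the byte offset of the first malformed construct.
pub fn parse(input: &str) -> Result<Vec<Token>, usize> {
    let mut tokens = Vec::new();
    let mut chars = input.char_indices().peekable();
    while let Some(&(start, c)) = chars.peek() {
        if c.is_whitespace() {
            chars.next();
            continue;
        }
        let negated = if c == '-' { chars.next(); true } else { false };
        // Read a bare word, which may turn out to be a field name.
        let mut word = String::new();
        while let Some(&(_, c)) = chars.peek() {
            if c.is_whitespace() || c == ':' || c == '"' { break; }
            word.push(c);
            chars.next();
        }
        match chars.peek() {
            Some(&(_, ':')) => {
                chars.next(); // consume ':'
                let value = read_value(&mut chars, start)?;
                tokens.push(Token::Field { name: word, value, negated });
            }
            _ if !word.is_empty() => tokens.push(Token::Text { value: word, negated }),
            Some(&(_, '"')) => {
                let value = read_value(&mut chars, start)?;
                tokens.push(Token::Text { value, negated });
            }
            _ => return Err(start), // lone '-' or stray punctuation
        }
    }
    Ok(tokens)
}

fn read_value(
    chars: &mut std::iter::Peekable<std::str::CharIndices>,
    start: usize,
) -> Result<String, usize> {
    let mut value = String::new();
    if let Some(&(_, '"')) = chars.peek() {
        chars.next();
        loop {
            match chars.next() {
                Some((_, '"')) => return Ok(value),
                Some((_, c)) => value.push(c),
                None => return Err(start), // unterminated quote is a hard error
            }
        }
    }
    while let Some(&(_, c)) = chars.peek() {
        if c.is_whitespace() { break; }
        value.push(c);
        chars.next();
    }
    Ok(value)
}
```

The offset in `Err` is what lets the filter bar render an inline diagnostic at the cursor position instead of swallowing input.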

### 5. Add render caches for markdown/tree shaping
Markdown and tree shaping are currently recomputed on every frame in several snippets. Cache render artifacts by `(entity, width, theme, content_hash)` to protect frame time.

```diff
diff --git a/PRD.md b/PRD.md
@@ module structure
+ render_cache.rs # Width/theme/content-hash keyed cache for markdown + tree layouts

@@ Assumptions / Performance
+Detail and search preview rendering uses memoized render artifacts.
+Cache invalidation triggers: content hash change, terminal width change, theme change.
```

### 6. Use one-shot timers for debounce/prefix timeout
`Every` is periodic; it wakes repeatedly and can produce edge-case repeated firings. One-shot subscriptions are cleaner and cheaper.

```diff
diff --git a/PRD.md b/PRD.md
@@ subscriptions()
- if self.state.search.debounce_pending() {
- subs.push(Box::new(
- Every::with_id(3, Duration::from_millis(200), move || {
- Msg::SearchDebounceFired { generation }
- })
- ));
- }
+ if self.state.search.debounce_pending() {
+ subs.push(Box::new(
+ After::with_id(3, Duration::from_millis(200), move || {
+ Msg::SearchDebounceFired { generation }
+ })
+ ));
+ }

@@ InputMode GoPrefix timeout
-The tick subscription compares clock instant...
+GoPrefix timeout is a one-shot `After(500ms)` tied to prefix generation.
```
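The one-shot-plus-generation pattern can be shown without the framework. This std-only sketch stands in for `After`: a timer fires exactly once carrying the generation captured at schedule time, and the receiver drops it if a newer generation has since been issued. The names here are illustrative, not the ftui API:

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

/// One-shot debounce sketch: the sleeper thread fires once and exits,
/// so there are no periodic wakeups to suppress. Staleness is decided
/// at receive time by comparing against the latest generation.
pub fn schedule_once(tx: mpsc::Sender<u64>, delay: Duration, generation: u64) {
    thread::spawn(move || {
        thread::sleep(delay);
        let _ = tx.send(generation); // receiver may be gone; that's fine
    });
}
```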

### 7. New feature: list “Quick Peek” panel (`Space`) for triage speed
This adds immediate value without v2-level scope. Users can inspect selected issue/MR metadata/snippet without entering detail and coming back.

```diff
diff --git a/PRD.md b/PRD.md
@@ 5.2 Issue List
-Interaction: Enter detail
+Interaction: Enter detail, Space quick-peek (toggle right preview pane)

@@ 5.4 MR List
+Quick Peek mode mirrors Issue List: metadata + first discussion snippet + cross-refs.

@@ 8.2 List Screens
| `Enter` | Open selected item |
+| `Space` | Toggle Quick Peek panel for selected row |
```

### 8. Upgrade compatibility handshake from integer to machine-readable contract
Single integer compat is too coarse for real drift detection. Keep it simple but structured.

```diff
diff --git a/PRD.md b/PRD.md
@@ Nightly Rust Strategy / Compatibility contract
- 1. Binary compat version (`lore-tui --compat-version`) — integer check ...
+ 1. Binary compat contract (`lore-tui --compat-json`) — JSON:
+ `{ "protocol": 1, "compat_version": 2, "min_schema": 14, "max_schema": 16, "build": "..." }`
+ `lore` validates protocol + compat + schema range before spawn.

@@ CLI integration
-fn validate_tui_compat(...) { ... --compat-version ... }
+fn validate_tui_compat(...) { ... --compat-json ... }
```

### 9. Fix sync stream bug and formalize progress coalescing
The current snippet calls `try_send` for progress twice in one callback path, and the queue-depth math is wrong. Progress spam should also be coalesced by lane.

```diff
diff --git a/PRD.md b/PRD.md
@@ start_sync_task()
- let current_depth = 2048 - tx.try_send(Msg::SyncProgress(event.clone()))
- .err().map_or(0, |_| 1);
- max_queue_depth = max_queue_depth.max(current_depth);
- if tx.try_send(Msg::SyncProgress(event.clone())).is_err() {
+ // coalesce by lane key at <=30Hz; one send attempt per flush
+ coalescer.update(event.clone());
+ if let Some(batch) = coalescer.flush_ready() {
+ if tx.try_send(Msg::SyncProgressBatch(batch)).is_err() {
dropped_count += 1;
let _ = tx.try_send(Msg::SyncBackpressureDrop);
+ } else {
+ max_queue_depth = max_queue_depth.max(observed_queue_depth());
+ }
}
```
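The coalescer's contract is small enough to sketch directly. This is an illustration, not the real type: `ProgressEvent` is reduced to a lane key plus a counter, and the flush clock takes elapsed time as a parameter so the policy is testable without real timers:

```rust
use std::collections::HashMap;
use std::time::Duration;

/// Simplified progress event; lane key = (project id, resource kind).
#[derive(Clone, Debug, PartialEq)]
pub struct ProgressEvent {
    pub lane: (u64, &'static str),
    pub done: u64,
}

pub struct Coalescer {
    pending: HashMap<(u64, &'static str), ProgressEvent>,
    interval: Duration,
    since_flush: Duration,
}

impl Coalescer {
    pub fn new(interval: Duration) -> Self {
        Coalescer { pending: HashMap::new(), interval, since_flush: Duration::ZERO }
    }

    /// Later events for the same lane overwrite earlier ones, so each
    /// lane contributes at most one entry per batch.
    pub fn update(&mut self, ev: ProgressEvent) {
        self.pending.insert(ev.lane, ev);
    }

    /// `elapsed` is time since the previous call. A batch is released
    /// only when the flush interval has passed and something is pending,
    /// which bounds sends to one attempt per interval.
    pub fn flush_ready(&mut self, elapsed: Duration) -> Option<Vec<ProgressEvent>> {
        self.since_flush += elapsed;
        if self.since_flush < self.interval || self.pending.is_empty() {
            return None;
        }
        self.since_flush = Duration::ZERO;
        Some(self.pending.drain().map(|(_, ev)| ev).collect())
    }
}
```

A 33ms interval gives the <=30Hz per-lane rate the recommendation names.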

---

If you want, I can produce a single consolidated patch-style rewrite of Sections `4.x`, `5.2/5.4`, `8.2`, `9.3`, and `10.x` so you can drop it directly into iteration 10.
plans/tui-prd-v2-frankentui.feedback-11.md (new file, 177 lines)
I reviewed the full PRD and avoided everything listed under `## Rejected Recommendations`.
These are the highest-impact revisions I’d make.

1. Stable list pagination via snapshot fences
Why this improves the plan: your keyset cursor is deterministic for sort/filter, but still vulnerable to duplicates/skips if sync writes land between page fetches. Add a per-browse snapshot fence so one browse session sees a stable dataset.
Tradeoff: newest rows are hidden until refresh, which is correct for deterministic triage.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 5.2 Issue List
- **Pagination:** Windowed keyset pagination with explicit cursor state.
+ **Pagination:** Windowed keyset pagination with explicit cursor state.
+ **Snapshot fence:** On list entry, capture `snapshot_upper_updated_at` (ms) and pin all
+ list-page queries to `updated_at <= snapshot_upper_updated_at`. This guarantees no duplicate
+ or skipped rows during scrolling even if sync writes occur concurrently.
+ A "new data available" badge appears when a newer sync completes; `r` refreshes the fence.

@@ 5.4 MR List
- **Pagination:** Same windowed keyset pagination strategy as Issue List.
+ **Pagination:** Same strategy plus snapshot fence (`updated_at <= snapshot_upper_updated_at`)
+ for deterministic cross-page traversal under concurrent sync writes.

@@ 4.7 Navigation Stack Implementation
+ Browsing sessions carry a per-screen `BrowseSnapshot` token to preserve stable ordering
+ until explicit refresh or screen re-entry.
```
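The fence invariant is simple enough to state as code. A minimal sketch, with rows reduced to `(updated_at ms, iid)` pairs; the real predicate lives in the SQL `WHERE` clause:

```rust
/// Snapshot-fence sketch: a page query only sees rows whose `updated_at`
/// does not exceed the fence captured at list entry. Rows written by a
/// concurrent sync carry a newer `updated_at` and stay hidden until the
/// user refreshes the fence.
pub fn page_rows(rows: &[(i64, i64)], fence_ms: i64) -> Vec<(i64, i64)> {
    rows.iter()
        .copied()
        .filter(|(updated_at, _)| *updated_at <= fence_ms)
        .collect()
}
```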

2. Query budgets and soft deadlines
Why this improves the plan: currently “slow query” is handled mostly by cancellation and stale-drop. Add explicit latency budgets so UI responsiveness stays predictable under worst-case filters.
Tradeoff: sometimes user gets partial/truncated results first, followed by full results on retry/refine.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 4.5 Async Action System
+ #### 4.5.2 Query Budgets and Soft Deadlines
+ Each query type gets a budget:
+ - list window fetch: 120ms target, 250ms hard deadline
+ - detail phase-1 metadata: 75ms target, 150ms hard deadline
+ - search lexical/hybrid: 250ms hard deadline
+ On hard deadline breach, return `QueryDegraded { truncated: true }` and show inline badge:
+ "results truncated; refine filter or press r to retry full".
+ Implementation uses SQLite progress handler + per-task interrupt deadline.

@@ 9.3 Phase 0 — Toolchain Gate
+ 26. Query deadline behavior validated: hard deadline cancels query and renders degraded badge
+ without blocking input loop.
```
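The degraded-result policy can be modeled apart from SQLite. The PRD's route is a progress handler plus interrupt; this std-only sketch models just the policy: stop collecting when the deadline passes and flag the result as truncated instead of blocking:

```rust
use std::time::{Duration, Instant};

pub struct QueryResult {
    pub rows: Vec<u64>,
    pub truncated: bool,
}

/// Deadline sketch: the row loop checks elapsed time between rows and
/// returns early with `truncated: true`, so the caller can render the
/// "results truncated" badge rather than stall the input loop.
pub fn collect_with_deadline(
    source: impl Iterator<Item = u64>,
    deadline: Duration,
) -> QueryResult {
    let start = Instant::now();
    let mut rows = Vec::new();
    for row in source {
        if start.elapsed() > deadline {
            return QueryResult { rows, truncated: true };
        }
        rows.push(row);
    }
    QueryResult { rows, truncated: false }
}
```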

3. Targeted cache invalidation and prewarm after sync
Why this improves the plan: `invalidate_all()` after sync throws away hot detail cache and hurts the exact post-sync workflow you optimized for. Invalidate only changed keys and prewarm likely-next entities.
Tradeoff: slightly more bookkeeping in sync result handling.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 4.1 Module Structure
- entity_cache.rs # Bounded LRU cache ... Invalidated on sync completion.
+ entity_cache.rs # Bounded LRU cache with selective invalidation by changed EntityKey
+ # and optional post-sync prewarm of top changed entities.

@@ 4.4 App — Implementing the Model Trait (Msg::SyncCompleted)
- // Invalidate entity cache — synced data may have changed.
- self.entity_cache.invalidate_all();
+ // Selective invalidation: evict only changed entities from sync delta.
+ self.entity_cache.invalidate_keys(&result.changed_entity_keys);
+ // Prewarm top N changed/new entities for immediate post-sync triage.
+ self.enqueue_cache_prewarm(&result.changed_entity_keys);
```
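Selective eviction is a one-method change over a keyed map. A sketch with the payload simplified to `String` and `EntityKey` to a `(project_id, iid)` pair; the real cache is a bounded LRU:

```rust
use std::collections::HashMap;

pub type EntityKey = (i64, i64); // (project_id, iid), simplified

/// Sketch of selective invalidation: only keys named in the sync delta
/// are evicted, so unrelated hot detail entries survive the sync.
#[derive(Default)]
pub struct EntityCache {
    map: HashMap<EntityKey, String>,
}

impl EntityCache {
    pub fn put(&mut self, key: EntityKey, detail: String) {
        self.map.insert(key, detail);
    }

    pub fn get(&self, key: &EntityKey) -> Option<&String> {
        self.map.get(key)
    }

    /// Evicts exactly the changed keys; everything else stays warm.
    pub fn invalidate_keys(&mut self, changed: &[EntityKey]) {
        for key in changed {
            self.map.remove(key);
        }
    }
}
```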

4. Exact “what changed” navigation without new DB tables
Why this improves the plan: your summary currently uses timestamp filter; this can include unrelated updates and miss edge cases. Keep an in-memory delta ledger per sync run and navigate by exact IDs.
Tradeoff: small memory overhead per run; no schema migration required.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 5.9 Sync (Summary mode)
-- `i` navigates to Issue List pre-filtered to "since last sync" (using `sync_status.last_completed_at` timestamp comparison)
-- `m` navigates to MR List pre-filtered to "since last sync" (using `sync_status.last_completed_at` timestamp comparison)
+- `i` navigates to Issue List filtered by exact issue IDs changed in this sync run
+- `m` navigates to MR List filtered by exact MR IDs changed in this sync run
+ (fallback to timestamp filter only if run delta not available)

@@ 10.1 New Files
+crates/lore-tui/src/sync_delta_ledger.rs # In-memory per-run exact changed/new IDs (issues/MRs/discussions)
```

5. Adaptive render governor (runtime performance safety)
Why this improves the plan: capability detection is static; you also need dynamic adaptation when frame time/backpressure worsens (SSH, tmux nesting, huge logs).
Tradeoff: visual richness may step down automatically under load.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 3.4.1 Capability-Adaptive Rendering
+#### 3.4.2 Adaptive Render Governor
+Runtime monitors frame time and stream pressure:
+- if frame p95 > 40ms or sync drops spike, switch to lighter profile:
+ plain markdown, reduced tree guides, slower spinner tick, less frequent repaint.
+- when stable for N seconds, restore previous profile.
+CLI override:
+`lore tui --render-profile=auto|quality|balanced|speed`

@@ 9.3 Phase 0 — Toolchain Gate
+27. Frame-time governor validated: under induced load, UI remains responsive and input latency
+stays within p95 < 75ms while auto-downgrading render profile.
```
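The step-down/step-up behavior is a small state machine with hysteresis. A sketch: the 40ms budget matches the recommendation, while the 120-frame stability streak is an illustrative stand-in for "stable for N seconds":

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum RenderProfile { Quality, Balanced, Speed }

/// Governor sketch: step down one profile when p95 frame time breaches
/// the budget; step back up only after a sustained stability streak,
/// so the profile does not oscillate around the threshold.
pub struct RenderGovernor {
    pub profile: RenderProfile,
    stable_frames: u32,
}

impl RenderGovernor {
    pub fn new() -> Self {
        RenderGovernor { profile: RenderProfile::Quality, stable_frames: 0 }
    }

    pub fn observe_p95(&mut self, frame_p95_ms: f64) {
        if frame_p95_ms > 40.0 {
            self.stable_frames = 0;
            self.profile = match self.profile {
                RenderProfile::Quality => RenderProfile::Balanced,
                _ => RenderProfile::Speed,
            };
        } else {
            self.stable_frames += 1;
            if self.stable_frames >= 120 {
                self.stable_frames = 0;
                self.profile = match self.profile {
                    RenderProfile::Speed => RenderProfile::Balanced,
                    _ => RenderProfile::Quality,
                };
            }
        }
    }
}
```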

6. First-run/data-not-ready screen (not an init wizard)
Why this improves the plan: empty DB or missing indexes will otherwise feel broken. A dedicated read-only readiness screen improves first impression and self-recovery.
Tradeoff: one extra lightweight screen/state.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 4.3 Core Types (Screen enum)
Sync,
Stats,
Doctor,
+ Bootstrap,

@@ 5.11 Doctor / Stats (Info Screens)
+### 5.12 Bootstrap (Data Readiness)
+Shown when no synced projects/documents are present or required indexes are missing.
+Displays concise readiness checks and exact CLI commands to recover:
+`lore sync`, `lore migrate`, `lore --robot doctor`.
+Read-only; no auto-execution.
```

7. Global project scope pinning across screens
Why this improves the plan: users repeatedly apply the same project filter across dashboard/list/search/timeline/who. Add a global scope pin to reduce repetitive filtering and speed triage.
Tradeoff: must show clear “scope active” indicator to avoid confusion.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 4.1 Module Structure
+ scope.rs # Global project scope context (all-projects or pinned project set)

@@ 8.1 Global (Available Everywhere)
+| `P` | Open project scope picker / toggle global scope pin |

@@ 4.10 State Module — Complete
+pub global_scope: ScopeContext, // Applies to dashboard/list/search/timeline/who queries

@@ 10.11 Action Module — Query Bridge
- pub fn fetch_issues(conn: &Connection, filter: &IssueFilter) -> Result<Vec<IssueListRow>, LoreError>
+ pub fn fetch_issues(conn: &Connection, scope: &ScopeContext, filter: &IssueFilter) -> Result<Vec<IssueListRow>, LoreError>
```

8. Concurrency correctness tests for pagination and cancellation races
Why this improves the plan: current reliability tests are good, but missing a direct test for duplicate/skip behavior under concurrent sync writes while paginating.
Tradeoff: additional integration test complexity.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 9.2 Phases (Phase 5.5 — Reliability Test Pack)
+ Concurrent pagination/write race tests :p55j, after p55h, 1d
+ Query deadline cancellation race tests :p55k, after p55j, 0.5d

@@ 9.3 Phase 0 — Toolchain Gate
+28. Concurrent pagination/write test proves no duplicates/skips within a pinned browse snapshot.
+29. Cancellation race test proves no cross-task interrupt bleed and no stuck loading state.
```

9. URL opening policy v2: allowlisted GitLab entity paths
Why this improves the plan: host validation is necessary but not always sufficient. Restrict default browser opens to known GitLab entity paths and require confirmation for unusual paths on same host.
Tradeoff: occasional extra prompt for uncommon but valid URLs.

```diff
diff --git a/docs/plans/gitlore-tui-prd-v2.md b/docs/plans/gitlore-tui-prd-v2.md
@@ 3.1 Risk Matrix
-| Malicious URL in entity data opened in browser | Medium | Low | URL host validated against configured GitLab instance before `open`/`xdg-open` |
+| Malicious URL in entity data opened in browser | Medium | Low | Validate scheme+host+port and path pattern allowlist (`/-/issues/`, `/-/merge_requests/`, project issue/MR routes). Unknown same-host paths require explicit confirm modal. |

@@ 10.4.1 Terminal Safety — Untrusted Text Sanitization
- pub fn is_safe_url(url: &str, allowed_origins: &[AllowedOrigin]) -> bool
+ pub fn classify_safe_url(url: &str, policy: &UrlPolicy) -> UrlSafety
+ // UrlSafety::{AllowedEntityPath, AllowedButUnrecognizedPath, Blocked}
```
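The three-way classification can be sketched with string matching alone. This is illustrative: the `UrlPolicy` is collapsed to a single allowed host, only two entity route patterns are checked, and real GitLab routes (ports, subgroups, legacy paths) are richer than this:

```rust
#[derive(Debug, PartialEq)]
pub enum UrlSafety { AllowedEntityPath, AllowedButUnrecognizedPath, Blocked }

/// Sketch of the path-allowlist policy: scheme and host gate first,
/// then a pattern check for known GitLab entity routes. Same-host URLs
/// with unrecognized paths are surfaced for an explicit confirm modal.
pub fn classify_safe_url(url: &str, allowed_host: &str) -> UrlSafety {
    let rest = match url.strip_prefix("https://") {
        Some(r) => r,
        None => return UrlSafety::Blocked, // only https is accepted
    };
    let (host, path) = match rest.split_once('/') {
        Some((h, p)) => (h, format!("/{p}")),
        None => (rest, String::from("/")),
    };
    if host != allowed_host {
        return UrlSafety::Blocked;
    }
    // Known entity routes: project-scoped issues and merge requests.
    if path.contains("/-/issues/") || path.contains("/-/merge_requests/") {
        UrlSafety::AllowedEntityPath
    } else {
        UrlSafety::AllowedButUnrecognizedPath
    }
}
```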

These 9 changes are additive, avoid previously rejected ideas, and materially improve determinism, responsiveness, post-sync usefulness, and safety without forcing a big architecture reset.
@@ -2,12 +2,12 @@
 plan: true
 title: "Gitlore TUI PRD v2 - FrankenTUI"
 status: iterating
-iteration: 9
+iteration: 10
 target_iterations: 10
 beads_revision: 0
 related_plans: []
 created: 2026-02-11
-updated: 2026-02-11
+updated: 2026-02-12
 ---

 # Gitlore TUI — Product Requirements Document
@@ -288,7 +2,9 @@ crates/lore-tui/src/
 safety.rs # sanitize_for_terminal(), safe_url_policy()
 redact.rs # redact_sensitive(): strip tokens, Authorization headers, and credential patterns from logs and crash reports before persisting to disk
 session.rs # Versioned session state persistence + corruption quarantine
-entity_cache.rs # Bounded LRU cache for detail payloads (IssueDetail, MrDetail). Keyed by EntityKey. Invalidated on sync completion. Enables near-instant reopen during Enter/Esc drill-in/out workflows without re-querying.
+scope.rs # Global project scope context: all-projects or pinned project set. Applied to dashboard/list/search/timeline/who queries. Persisted in session state.
+entity_cache.rs # Bounded LRU cache for detail payloads (IssueDetail, MrDetail). Keyed by EntityKey. Selective invalidation by changed EntityKey set on sync completion (not blanket invalidate_all). Optional post-sync prewarm of top changed entities for immediate triage. Enables near-instant reopen during Enter/Esc drill-in/out workflows without re-querying.
+render_cache.rs # Width/theme/content-hash keyed cache for expensive render artifacts (markdown → styled text, discussion tree shaping). Invalidation triggers: content hash change, terminal width change, theme change. Prevents per-frame recomputation of markdown parsing and tree layout.
 crash_context.rs # Ring buffer of last 2000 normalized events + current screen/task snapshot for crash diagnostics. Captured by panic hook for post-mortem debugging.
 ```

@@ -359,20 +361,24 @@ pub enum Msg {
 CommandPaletteSelect(usize),

 // Issue list
-IssueListLoaded(Vec<IssueRow>),
+/// Generation-guarded: stale results from superseded filter/nav are dropped.
+IssueListLoaded { generation: u64, rows: Vec<IssueRow> },
 IssueListFilterChanged(IssueFilter),
 IssueListSortChanged(SortField, SortOrder),
 IssueSelected(EntityKey),

 // MR list
-MrListLoaded(Vec<MrRow>),
+/// Generation-guarded: stale results from superseded filter/nav are dropped.
+MrListLoaded { generation: u64, rows: Vec<MrRow> },
 MrListFilterChanged(MrFilter),
 MrSelected(EntityKey),

 // Detail views
-IssueDetailLoaded { key: EntityKey, detail: IssueDetail },
-MrDetailLoaded { key: EntityKey, detail: MrDetail },
-DiscussionsLoaded(Vec<Discussion>),
+/// Generation-guarded: prevents stale detail overwrites after rapid navigation.
+IssueDetailLoaded { generation: u64, key: EntityKey, detail: IssueDetail },
+/// Generation-guarded: prevents stale detail overwrites after rapid navigation.
+MrDetailLoaded { generation: u64, key: EntityKey, detail: MrDetail },
+DiscussionsLoaded { generation: u64, discussions: Vec<Discussion> },

 // Search
 SearchQueryChanged(String),
@@ -395,6 +401,9 @@ pub enum Msg {
 // Sync
 SyncStarted,
 SyncProgress(ProgressEvent),
+/// Coalesced batch of progress events (one per lane key).
+/// Reduces render pressure by batching at <=30Hz per lane.
+SyncProgressBatch(Vec<ProgressEvent>),
 SyncLogLine(String),
 SyncBackpressureDrop,
 SyncCompleted(SyncResult),
@@ -454,6 +463,7 @@ pub enum Screen {
 Sync,
 Stats,
 Doctor,
+Bootstrap,
 }

 /// Composite key for entity identity across multi-project datasets.
@@ -553,7 +563,7 @@ impl Default for InputMode {
 // crates/lore-tui/src/app.rs

 use ftui_runtime::program::{Model, Cmd, TaskSpec};
-use ftui_runtime::subscription::{Subscription, Every};
+use ftui_runtime::subscription::{Subscription, Every, After};
 use ftui_core::event::{Event, KeyEvent, KeyCode, KeyEventKind, Modifiers};
 use ftui_render::frame::Frame;
 use rusqlite::Connection;
@@ -626,6 +636,20 @@ pub struct DbManager {
 next_reader: AtomicUsize,
 }

+/// A task-scoped reader lease that owns an interrupt handle for safe cancellation.
+/// Unlike interrupting a shared pooled connection (which can cancel unrelated work),
+/// each dispatched query receives its own ReaderLease. The InterruptHandle stored in
+/// TaskHandle targets only this lease's connection, preventing cross-task cancellation bleed.
+pub struct ReaderLease<'a> {
+conn: std::sync::MutexGuard<'a, Connection>,
+/// Owned interrupt handle — safe to fire without affecting other tasks.
+pub interrupt: rusqlite::InterruptHandle,
+}
+
+impl<'a> ReaderLease<'a> {
+pub fn conn(&self) -> &Connection { &self.conn }
+}
+
 impl DbManager {
 pub fn new(db_path: &Path, reader_count: usize) -> Result<Self, LoreError> {
 let mut readers = Vec::with_capacity(reader_count);
@@ -663,6 +687,19 @@ impl DbManager {
 .map_err(|e| LoreError::Internal(format!("writer lock poisoned: {e}")))?;
 f(&conn)
 }
+
+/// Lease a reader connection with a task-owned interrupt handle.
+/// The returned `ReaderLease` holds the mutex guard and provides
+/// an `InterruptHandle` that can be stored in `TaskHandle` for
+/// safe per-task cancellation. This prevents cross-task interrupt bleed
+/// that would occur with shared-connection `sqlite3_interrupt()`.
+pub fn lease_reader(&self) -> Result<ReaderLease<'_>, LoreError> {
+let idx = self.next_reader.fetch_add(1, Ordering::Relaxed) % self.readers.len();
+let conn = self.readers[idx].lock()
+.map_err(|e| LoreError::Internal(format!("reader lock poisoned: {e}")))?;
+let interrupt = conn.get_interrupt_handle();
+Ok(ReaderLease { conn, interrupt })
+}
 }

 impl LoreApp {
@@ -786,9 +823,11 @@ impl LoreApp {
 }),
 Screen::IssueList => {
 let filter = self.state.issue_list.current_filter();
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::IssueList));
+let generation = handle.generation;
 Cmd::task(move || {
 match db.with_reader(|conn| crate::tui::action::fetch_issues(conn, &filter)) {
-Ok(result) => Msg::IssueListLoaded(result),
+Ok(rows) => Msg::IssueListLoaded { generation, rows },
 Err(e) => Msg::Error(AppError::Internal(e.to_string())),
 }
 })
@@ -797,21 +836,26 @@ impl LoreApp {
 // Check entity cache first — enables near-instant reopen
 // during Enter/Esc drill-in/out workflows.
 if let Some(cached) = self.entity_cache.get_issue(key) {
-return Cmd::msg(Msg::IssueDetailLoaded { key: key.clone(), detail: cached.clone() });
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::IssueDetail(key.clone())));
+return Cmd::msg(Msg::IssueDetailLoaded { generation: handle.generation, key: key.clone(), detail: cached.clone() });
 }
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::IssueDetail(key.clone())));
+let generation = handle.generation;
 let key = key.clone();
 Cmd::task(move || {
 match db.with_reader(|conn| crate::tui::action::fetch_issue_detail(conn, &key)) {
-Ok(detail) => Msg::IssueDetailLoaded { key, detail },
+Ok(detail) => Msg::IssueDetailLoaded { generation, key, detail },
 Err(e) => Msg::Error(AppError::Internal(e.to_string())),
 }
 })
 }
 Screen::MrList => {
 let filter = self.state.mr_list.current_filter();
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::MrList));
+let generation = handle.generation;
 Cmd::task(move || {
 match db.with_reader(|conn| crate::tui::action::fetch_mrs(conn, &filter)) {
-Ok(result) => Msg::MrListLoaded(result),
+Ok(rows) => Msg::MrListLoaded { generation, rows },
 Err(e) => Msg::Error(AppError::Internal(e.to_string())),
 }
 })
@@ -819,12 +863,15 @@ impl LoreApp {
 Screen::MrDetail(key) => {
 // Check entity cache first
 if let Some(cached) = self.entity_cache.get_mr(key) {
-return Cmd::msg(Msg::MrDetailLoaded { key: key.clone(), detail: cached.clone() });
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::MrDetail(key.clone())));
+return Cmd::msg(Msg::MrDetailLoaded { generation: handle.generation, key: key.clone(), detail: cached.clone() });
 }
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::MrDetail(key.clone())));
+let generation = handle.generation;
 let key = key.clone();
 Cmd::task(move || {
 match db.with_reader(|conn| crate::tui::action::fetch_mr_detail(conn, &key)) {
-Ok(detail) => Msg::MrDetailLoaded { key, detail },
+Ok(detail) => Msg::MrDetailLoaded { generation, key, detail },
 Err(e) => Msg::Error(AppError::Internal(e.to_string())),
 }
 })
@@ -895,9 +942,11 @@ impl LoreApp {
 Screen::IssueList => {
 let filter = self.state.issue_list.current_filter();
 let db = Arc::clone(&self.db);
+let handle = self.task_supervisor.submit(TaskKey::FilterRequery(Screen::IssueList));
+let generation = handle.generation;
 Cmd::task(move || {
 match db.with_reader(|conn| crate::tui::action::fetch_issues(conn, &filter)) {
-Ok(result) => Msg::IssueListLoaded(result),
+Ok(rows) => Msg::IssueListLoaded { generation, rows },
 Err(e) => Msg::Error(AppError::Internal(e.to_string())),
 }
 })
@@ -905,9 +954,11 @@ impl LoreApp {
 Screen::MrList => {
 let filter = self.state.mr_list.current_filter();
 let db = Arc::clone(&self.db);
+let handle = self.task_supervisor.submit(TaskKey::FilterRequery(Screen::MrList));
+let generation = handle.generation;
 Cmd::task(move || {
 match db.with_reader(|conn| crate::tui::action::fetch_mrs(conn, &filter)) {
-Ok(result) => Msg::MrListLoaded(result),
+Ok(rows) => Msg::MrListLoaded { generation, rows },
 Err(e) => Msg::Error(AppError::Internal(e.to_string())),
 }
 })
@@ -961,15 +1012,18 @@ impl LoreApp {
if cancel_token.load(std::sync::atomic::Ordering::Relaxed) {
return; // Early exit — orchestrator handles partial state
}
- // Track queue depth for stream stats
- let current_depth = 2048 - tx.try_send(Msg::SyncProgress(event.clone()))
- .err().map_or(0, |_| 1);
- max_queue_depth = max_queue_depth.max(current_depth);
- if tx.try_send(Msg::SyncProgress(event.clone())).is_err() {
- // Channel full — drop this progress update rather than
- // blocking the sync thread. Track for stats.
- dropped_count += 1;
- let _ = tx.try_send(Msg::SyncBackpressureDrop);
+ // Coalesce progress events by lane key at <=30Hz to reduce
+ // render pressure. Each lane (project x resource_type) keeps
+ // only its latest progress snapshot. The coalescer flushes
+ // a batch when 33ms have elapsed since last flush.
+ coalescer.update(event.clone());
+ if let Some(batch) = coalescer.flush_ready() {
+ if tx.try_send(Msg::SyncProgressBatch(batch)).is_err() {
+ // Channel full — drop this batch rather than
+ // blocking the sync thread. Track for stats.
+ dropped_count += 1;
+ let _ = tx.try_send(Msg::SyncBackpressureDrop);
+ }
}
let _ = tx.try_send(Msg::SyncLogLine(format!("{event:?}")));
},
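For reference, the per-lane coalescing described in the hunk above can be sketched standalone. This is an illustrative sketch only: the `ProgressEvent` shape, the `pending_lanes` helper, and the lane key type are assumptions, not the PRD's actual types — only the `update`/`flush_ready` API and the 33ms flush interval follow the diff.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

/// Hypothetical event shape; the real sync event type is richer.
#[derive(Clone, Debug, PartialEq)]
pub struct ProgressEvent {
    pub project: String,
    pub resource_type: String,
    pub done: u64,
    pub total: u64,
}

/// One slot per (project, resource_type) lane.
type LaneKey = (String, String);

pub struct Coalescer {
    latest: HashMap<LaneKey, ProgressEvent>,
    last_flush: Instant,
    min_interval: Duration,
}

impl Coalescer {
    pub fn new() -> Self {
        Self {
            latest: HashMap::new(),
            last_flush: Instant::now(),
            min_interval: Duration::from_millis(33), // <=30Hz flush rate
        }
    }

    /// Keep only the newest snapshot per lane; older ones are superseded.
    pub fn update(&mut self, ev: ProgressEvent) {
        self.latest
            .insert((ev.project.clone(), ev.resource_type.clone()), ev);
    }

    /// Number of lanes with a pending (unflushed) snapshot.
    pub fn pending_lanes(&self) -> usize {
        self.latest.len()
    }

    /// Returns a batch only when data is pending and the interval elapsed.
    pub fn flush_ready(&mut self) -> Option<Vec<ProgressEvent>> {
        if self.latest.is_empty() || self.last_flush.elapsed() < self.min_interval {
            return None;
        }
        self.last_flush = Instant::now();
        Some(self.latest.drain().map(|(_, v)| v).collect())
    }
}
```

Two rapid events on the same lane collapse to one snapshot, which is exactly what keeps render pressure bounded during a fast sync.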
@@ -1143,23 +1197,35 @@ impl Model for LoreApp {
self.state.dashboard.update(data);
Cmd::none()
}
- Msg::IssueListLoaded(result) => {
+ Msg::IssueListLoaded { generation, rows } => {
+ if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::IssueList), generation) {
+ return Cmd::none(); // Stale — superseded by newer nav/filter
+ }
self.state.set_loading(false);
- self.state.issue_list.set_result(result);
+ self.state.issue_list.set_result(rows);
Cmd::none()
}
- Msg::IssueDetailLoaded { key, detail } => {
+ Msg::IssueDetailLoaded { generation, key, detail } => {
+ if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::IssueDetail(key.clone())), generation) {
+ return Cmd::none(); // Stale — user navigated away
+ }
self.state.set_loading(false);
self.entity_cache.put_issue(key, detail.clone());
self.state.issue_detail.set(detail);
Cmd::none()
}
- Msg::MrListLoaded(result) => {
+ Msg::MrListLoaded { generation, rows } => {
+ if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::MrList), generation) {
+ return Cmd::none(); // Stale — superseded by newer nav/filter
+ }
self.state.set_loading(false);
- self.state.mr_list.set_result(result);
+ self.state.mr_list.set_result(rows);
Cmd::none()
}
- Msg::MrDetailLoaded { key, detail } => {
+ Msg::MrDetailLoaded { generation, key, detail } => {
+ if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::MrDetail(key.clone())), generation) {
+ return Cmd::none(); // Stale — user navigated away
+ }
self.state.set_loading(false);
self.entity_cache.put_mr(key, detail.clone());
self.state.mr_detail.set(detail);
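The generation guard above reduces to a small invariant: submitting a key bumps its generation, and a result is applied only if its generation is still current. A minimal sketch (illustrative only; the real `TaskSupervisor` also carries cancel tokens, interrupt handles, and priority lanes, and `TaskKey` is the PRD's richer enum):

```rust
use std::collections::HashMap;

/// Simplified stand-in for the PRD's TaskKey enum.
#[derive(Hash, PartialEq, Eq, Clone, Debug)]
pub enum TaskKey {
    LoadScreen(&'static str),
}

#[derive(Default)]
pub struct TaskSupervisor {
    generations: HashMap<TaskKey, u64>,
    next: u64,
}

impl TaskSupervisor {
    /// Submitting a key bumps its generation; any in-flight result
    /// captured under an older generation becomes stale.
    pub fn submit(&mut self, key: TaskKey) -> u64 {
        self.next += 1;
        self.generations.insert(key, self.next);
        self.next
    }

    /// Apply a completion message only if it is still the newest request.
    pub fn is_current(&self, key: &TaskKey, generation: u64) -> bool {
        self.generations.get(key) == Some(&generation)
    }
}
```

The guard is per-key, so a stale MR-list result cannot clobber a fresh issue-list result and vice versa.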
@@ -1219,6 +1285,12 @@ impl Model for LoreApp {
self.state.sync.update_progress(event);
Cmd::none()
}
+ Msg::SyncProgressBatch(events) => {
+ for event in events {
+ self.state.sync.update_progress(event);
+ }
+ Cmd::none()
+ }
Msg::SyncLogLine(line) => {
self.state.sync.push_log(line);
Cmd::none()
@@ -1234,10 +1306,15 @@ impl Model for LoreApp {
Cmd::none()
}
Msg::SyncCompleted(result) => {
- self.state.sync.complete(result);
- // Invalidate entity cache — synced data may have changed.
- self.entity_cache.invalidate_all();
- Cmd::none()
+ self.state.sync.complete(&result);
+ // Selective invalidation: evict only changed entities from sync delta.
+ self.entity_cache.invalidate_keys(&result.changed_entity_keys);
+ // Prewarm top N changed/new entities for immediate post-sync triage.
+ // This is lazy — enqueues Cmd::task fetches, doesn't block the event loop.
+ let prewarm_cmds = self.enqueue_cache_prewarm(&result.changed_entity_keys);
+ // Notify list screens that new data is available (snapshot fence refresh badge).
+ self.state.notify_data_changed();
+ prewarm_cmds
}
Msg::SyncFailed(err) => {
self.state.sync.fail(err);
@@ -1416,21 +1493,23 @@ impl Model for LoreApp {
));
}

- // Go-prefix timeout enforcement: tick even when nothing is loading.
- // Without this, GoPrefix mode can get "stuck" when idle (no other
- // events to drive the Tick that checks the 500ms timeout).
+ // Go-prefix timeout: one-shot After(500ms) tied to the prefix start.
+ // Uses After (one-shot) instead of Every (periodic) — the prefix
+ // either completes with a valid key or times out exactly once.
if matches!(self.input_mode, InputMode::GoPrefix { .. }) {
subs.push(Box::new(
- Every::with_id(2, Duration::from_millis(50), || Msg::Tick)
+ After::with_id(2, Duration::from_millis(500), || Msg::Tick)
));
}

- // Search debounce timer: fires SearchDebounceFired after 200ms.
+ // Search debounce timer: one-shot fires SearchDebounceFired after 200ms.
// Only active when a debounce is pending (armed by keystroke).
+ // Uses After (one-shot) instead of Every (periodic) to avoid repeated
+ // firings from a periodic timer — one debounce = one fire.
if self.state.search.debounce_pending() {
let generation = self.state.search.debounce_generation();
subs.push(Box::new(
- Every::with_id(3, Duration::from_millis(200), move || {
+ After::with_id(3, Duration::from_millis(200), move || {
Msg::SearchDebounceFired { generation }
})
));
@@ -1485,7 +1564,7 @@ pub fn with_read_snapshot<T>(
}
```

- **Query interruption:** Long-running queries register interrupt checks tied to `CancelToken` to avoid >1s uninterruptible stalls during rapid navigation/filtering. When the user navigates away from a detail screen before queries complete, the cancel token fires `sqlite3_interrupt()` on the connection.
+ **Query interruption:** Long-running queries use task-owned `ReaderLease` interrupt handles (from `DbManager::lease_reader()`) to avoid >1s uninterruptible stalls during rapid navigation/filtering. When the user navigates away from a detail screen before queries complete, the `TaskHandle`'s owned `InterruptHandle` fires `sqlite3_interrupt()` on that specific leased connection — never on a shared pool connection. This prevents cross-task cancellation bleed where interrupting one query accidentally cancels an unrelated query on the same pooled connection.

#### 4.5.1 Task Supervisor (Dedup + Cancellation + Priority)

@@ -1549,6 +1628,10 @@ pub struct TaskHandle {
pub key: TaskKey,
pub generation: u64,
pub cancel: Arc<CancelToken>,
+ /// Per-task SQLite interrupt handle. When set, cancellation fires
+ /// this handle instead of interrupting shared pool connections.
+ /// Prevents cross-task cancellation bleed.
+ pub interrupt: Option<rusqlite::InterruptHandle>,
}

/// The TaskSupervisor manages active tasks, deduplicates by key, and tracks
@@ -1756,6 +1839,11 @@ pub struct NavigationStack {
/// This mirrors vim's jump list behavior.
jump_list: Vec<Screen>,
jump_index: usize,
+ /// Browse snapshot token: each list/search screen carries a per-screen
+ /// `BrowseSnapshot` that preserves stable ordering until explicit refresh
+ /// or screen re-entry. This works with the snapshot fence to ensure
+ /// deterministic pagination during concurrent sync writes.
+ browse_snapshots: HashMap<ScreenKind, BrowseSnapshot>,
}

impl NavigationStack {
@@ -1979,9 +2067,21 @@ Insights are computed from local data during dashboard load. Each insight row is
**Data source:** `lore issues` query against SQLite
**Columns:** Configurable — iid, title, state, author, labels, milestone, updated_at
**Sorting:** Click column header or Tab to cycle (iid, updated, created)
- **Filtering:** Interactive filter bar with field:value syntax
+ **Filtering:** Interactive filter bar with typed DSL parser. Grammar (v1):
+ - `term := [ "-" ] (field ":" value | quoted_text | bare_text)`
+ - `value := quoted | unquoted`
+ - Examples: `state:opened label:"P1 blocker" -author:bot since:14d`
+ - Negation prefix (`-`) excludes matches for that term
+ - Quoted values allow spaces in filter values
+ - Parser surfaces inline diagnostics with cursor position for parse errors — never silently drops unknown fields

+ **Pagination:** Windowed keyset pagination with explicit cursor state. The list state maintains `window` (current visible rows), `next_cursor` / `prev_cursor` (keyset boundary values for forward/back navigation), `prefetching` flag (background fetch of next window in progress), and a fixed `window_size` (default 200 rows). First paint uses current window only; no full-result materialization. Virtual scrolling within the window for smooth UX. When the user scrolls past ~80% of the window, the next window is prefetched in the background.

+ **Snapshot fence:** On list entry, capture `snapshot_upper_updated_at` (current max `updated_at` in the result set) and pin all list-page queries to `updated_at <= snapshot_upper_updated_at`. This guarantees no duplicate or skipped rows during scrolling even if sync writes occur concurrently. A "new data available" badge appears when a newer sync completes; `r` refreshes the fence and re-queries from the top.

+ **Quick Peek (`Space`):** Toggle a right-side preview pane showing the selected item's metadata, first discussion snippet, and cross-references without entering the full detail view. This enables rapid triage scanning — the user can evaluate issues at a glance without the Enter/Esc cycle. The peek pane uses the same progressive hydration as detail views (metadata first, discussions lazy). The pane width adapts to terminal breakpoints (hidden at Xs/Sm, 40% width at Md+).

+ **Cursor determinism:** Keyset pagination uses deterministic tuple ordering: `ORDER BY <primary_sort>, project_id, iid`. The cursor struct includes the current `sort_field`, `sort_order`, `project_id` (tie-breaker for multi-project datasets where rows share timestamps), and a `filter_hash: u64` (hash of the active filter state). On cursor resume, the cursor is rejected if `filter_hash` or sort tuple mismatches the current query — this prevents stale cursors from producing duplicate/skipped rows after the user changes sort mode or filters mid-browse.

### 5.3 Issue Detail

```
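The v1 grammar above is small enough to sketch a term tokenizer for. This is an illustrative sketch under stated assumptions: the `Term` struct and `parse_terms` function are hypothetical names, and a real implementation would also emit the inline diagnostics with cursor positions that the PRD requires; only negation, `field:value`, and quoted values are shown.

```rust
/// One parsed filter term: optional negation, optional field, value.
#[derive(Debug, PartialEq)]
pub struct Term {
    pub negated: bool,
    pub field: Option<String>,
    pub value: String,
}

/// Tokenize a filter-bar string into terms per the v1 grammar.
pub fn parse_terms(input: &str) -> Vec<Term> {
    let mut terms = Vec::new();
    let mut chars = input.chars().peekable();
    while let Some(&c) = chars.peek() {
        if c.is_whitespace() {
            chars.next();
            continue;
        }
        // Leading "-" negates the whole term.
        let negated = if c == '-' {
            chars.next();
            true
        } else {
            false
        };
        let mut token = String::new();
        let mut field = None;
        let mut in_quotes = false;
        while let Some(&c) = chars.peek() {
            match c {
                // Quotes toggle whitespace-protection; they are not kept.
                '"' => {
                    in_quotes = !in_quotes;
                    chars.next();
                }
                // First unquoted ':' splits field from value.
                ':' if field.is_none() && !in_quotes => {
                    field = Some(token.clone());
                    token.clear();
                    chars.next();
                }
                c if c.is_whitespace() && !in_quotes => break,
                c => {
                    token.push(c);
                    chars.next();
                }
            }
        }
        terms.push(Term { negated, field, value: token });
    }
    terms
}
```

Note how `label:"P1 blocker"` survives as a single term because the `:` split and the whitespace break are both suppressed inside quotes.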
@@ -2052,7 +2152,9 @@ Identical structure to Issue List with MR-specific columns:
| Author | MR author |
| Updated | Relative time |

- **Pagination:** Same windowed keyset pagination strategy as Issue List (window=200, background prefetch).
+ **Pagination:** Same windowed keyset pagination strategy as Issue List (window=200, background prefetch, deterministic cursor with `project_id` tie-breaker and `filter_hash` invalidation). Same snapshot fence (`updated_at <= snapshot_upper_updated_at`) for deterministic cross-page traversal under concurrent sync writes.

+ **Quick Peek (`Space`):** Same as Issue List — toggle right preview pane showing MR metadata, first discussion snippet, and cross-references for rapid triage without entering detail view.

**Additional filters:** `--draft`, `--no-draft`, `--target-branch`, `--source-branch`, `--reviewer`
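The cursor-invalidation rule shared by both list screens (reject a resumed cursor when sort or `filter_hash` no longer matches) can be sketched as a pure check. Illustrative only: the field names follow the PRD's cursor description, but the types are simplified (single sort enum, a plain tuple for the keyset boundary) and `cursor_is_valid` is a hypothetical helper name.

```rust
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
pub enum SortField {
    Updated,
    Created,
    Iid,
}

#[derive(Clone, Debug, PartialEq)]
pub struct Cursor {
    pub sort_field: SortField,
    pub descending: bool,
    /// Hash of the active filter state when the cursor was issued.
    pub filter_hash: u64,
    /// Keyset boundary: (primary sort value, project_id, iid).
    pub last_key: (i64, i64, i64),
}

/// A resumed cursor is valid only if the query it was issued against is
/// still the query being run; otherwise resuming would duplicate or skip rows.
pub fn cursor_is_valid(
    cursor: &Cursor,
    sort_field: SortField,
    descending: bool,
    filter_hash: u64,
) -> bool {
    cursor.sort_field == sort_field
        && cursor.descending == descending
        && cursor.filter_hash == filter_hash
}
```

On a mismatch the caller simply discards the cursor and re-queries from the top, which is the cheap, always-correct fallback.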
@@ -2294,8 +2396,8 @@ The Sync screen has two modes: **running** (progress + log) and **summary** (pos

**Summary mode:**
- Shows delta counts (new, updated) for each entity type
- - `i` navigates to Issue List pre-filtered to "since last sync" (using `sync_status.last_completed_at` timestamp comparison)
- - `m` navigates to MR List pre-filtered to "since last sync" (using `sync_status.last_completed_at` timestamp comparison)
+ - `i` navigates to Issue List filtered by exact issue IDs changed in this sync run (from in-memory `SyncDeltaLedger`). Falls back to timestamp filter via `sync_status.last_completed_at` only if run delta is not available (e.g., after app restart).
+ - `m` navigates to MR List filtered by exact MR IDs changed in this sync run (from in-memory `SyncDeltaLedger`). Falls back to timestamp filter only if run delta is not available.
- `r` restarts sync

### 5.10 Command Palette (Overlay)
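The ledger-with-fallback behavior above can be sketched in a few lines. This is an illustrative sketch: the struct name follows the PRD's `SyncDeltaLedger`, but the field and method names (`record_issue`, `issue_filter`, `start_run`) are assumptions about its shape, and discussions are omitted.

```rust
use std::collections::BTreeSet;

/// In-memory, per-run set of exactly which entities changed.
/// Cleared when the next sync run starts; never persisted.
#[derive(Default, Debug)]
pub struct SyncDeltaLedger {
    changed_issues: BTreeSet<u64>,
    changed_mrs: BTreeSet<u64>,
}

impl SyncDeltaLedger {
    /// A new run invalidates the previous run's delta.
    pub fn start_run(&mut self) {
        self.changed_issues.clear();
        self.changed_mrs.clear();
    }

    pub fn record_issue(&mut self, iid: u64) {
        self.changed_issues.insert(iid);
    }

    pub fn record_mr(&mut self, iid: u64) {
        self.changed_mrs.insert(iid);
    }

    /// Exact IDs when available; None tells the caller to fall back to the
    /// `sync_status.last_completed_at` timestamp filter.
    pub fn issue_filter(&self) -> Option<Vec<u64>> {
        if self.changed_issues.is_empty() {
            None
        } else {
            Some(self.changed_issues.iter().copied().collect())
        }
    }
}
```

Using a set makes duplicate change events (one entity touched by several sync pages) idempotent, and `None` cleanly encodes the after-restart case where the run delta no longer exists.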
@@ -2349,6 +2451,21 @@ The Sync screen has two modes: **running** (progress + log) and **summary** (pos
- Does NOT auto-execute commands — the user always runs them manually for safety
- Scrollable with j/k, Esc to go back

+ ### 5.12 Bootstrap (Data Readiness)
+
+ Shown automatically when the TUI detects no synced projects/documents or required indexes are missing. This is a read-only screen — it never auto-executes commands.
+
+ Displays concise readiness checks with pass/fail indicators:
+ - Synced projects present?
+ - Issues/MRs populated?
+ - FTS index built?
+ - Embedding index built? (optional — warns but doesn't block)
+ - Required migration version met?
+
+ For each failing check, shows the exact CLI command to recover (e.g., `lore sync`, `lore migrate`, `lore --robot doctor`). The user exits the TUI and runs the commands manually.
+
+ This prevents the "blank screen" first-run experience where a user launches `lore tui` before syncing data and sees an empty dashboard with no indication of what to do next.

---

## 6. User Flows
@@ -2483,8 +2600,8 @@ graph TD
style F fill:#51cf66,stroke:#333,color:#fff
```

- **Keystrokes:** `i` → `j/k` to scan → `Enter` to peek → `Esc` to return → continue scanning
- **State preservation:** After pressing Esc from Issue Detail, the cursor returns to exactly the same row in the list. Filter state and scroll offset are preserved. This tight Enter/Esc loop is the most common daily workflow.
+ **Keystrokes:** `i` → `j/k` to scan → `Space` to Quick Peek (or `Enter` for full detail) → `Esc` to return → continue scanning
+ **State preservation:** After pressing Esc from Issue Detail, the cursor returns to exactly the same row in the list. Filter state and scroll offset are preserved. This tight Enter/Esc loop is the most common daily workflow. Quick Peek (`Space`) makes triage even faster — preview metadata and first discussion snippet without leaving the list.

### 6.8 Flow: "Jump between screens without returning to Dashboard"
@@ -2591,6 +2708,7 @@ graph TD
| `Ctrl+O` | Jump backward in jump list (entity hops) |
| `Alt+o` | Jump forward in jump list (entity hops) |
| `Ctrl+R` | Reset session state for current screen (clear filters, scroll to top) |
+ | `P` | Open project scope picker / toggle global scope pin. When a scope is pinned, all list/search/timeline/who queries are filtered to that project set. A visible `[scope: project/path]` indicator appears in the status bar. |
| `Ctrl+C` | Quit (force) |

### 8.2 List Screens (Issues, MRs, Search Results)
@@ -2600,6 +2718,7 @@ graph TD
| `j` / `↓` | Move selection down |
| `k` / `↑` | Move selection up |
| `Enter` | Open selected item |
+ | `Space` | Toggle Quick Peek panel for selected row |
| `G` | Jump to bottom |
| `g` `g` | Jump to top |
| `Tab` / `f` | Focus filter bar |
@@ -2614,7 +2733,7 @@ graph TD
3. Global shortcuts — `q`, `H`, `?`, `o`, `Ctrl+C`, `Ctrl+P`, `Esc`, `g` prefix
4. Screen-local shortcuts — per-screen key handlers (the table above)

- **Go-prefix timeout:** 500ms from first `g` press, enforced by `InputMode::GoPrefix { started_at }` state checked on each tick via `clock.now_instant()`. If no valid continuation key arrives within 500ms, the prefix cancels and a brief "g--" flash clears from the status bar. The tick subscription compares the injected Clock's current instant against `started_at` — no separate timer task needed. Using `InputMode` instead of ad-hoc boolean flags makes the state machine explicit and deterministic. Feedback is immediate — the status bar shows "g--" within the same frame as the keypress.
+ **Go-prefix timeout:** 500ms from first `g` press, enforced by a one-shot `After(500ms)` subscription tied to the prefix generation. If no valid continuation key arrives within 500ms, the timer fires a single `Msg::Tick` which checks `InputMode::GoPrefix { started_at }` via `clock.now_instant()` and cancels the prefix. A brief "g--" flash clears from the status bar. Using `After` (one-shot) instead of `Every` (periodic) avoids unnecessary repeated ticks. Using `InputMode` instead of ad-hoc boolean flags makes the state machine explicit and deterministic. Feedback is immediate — the status bar shows "g--" within the same frame as the keypress.

**Terminal keybinding safety notes:**
- `Ctrl+I` is NOT used — it is indistinguishable from `Tab` in most terminals (both send `\x09`). Jump-forward uses `Alt+o` instead.
@@ -2783,6 +2902,8 @@ gantt
Event fuzz tests (key/resize/paste, deterministic seed replay):p55g, after p55e, 1d
Deterministic clock/render tests:p55i, after p55g, 0.5d
30-minute soak test (no panic/leak):p55h, after p55i, 1d
+ Concurrent pagination/write race tests :p55j, after p55h, 1d
+ Query cancellation race tests :p55k, after p55j, 0.5d

section Phase 5.6 — CLI/TUI Parity Pack
Dashboard count parity tests :p56a, after p55h, 0.5d
@@ -2802,7 +2923,7 @@ Ensures the TUI displays the same data as the CLI robot mode, preventing drift b

**Success criterion:** Parity suite passes on CI fixtures (S and M tiers). Parity is asserted by field-level comparison, not string formatting comparison — the TUI and CLI may format differently but must present the same underlying data.

- **Total estimated scope:** ~47 implementation days across 9 phases (increased from ~43 to account for Phase 2.5 vertical slice gate, entity cache, crash context ring buffer, timer-based debounce, and expanded success criteria 24-25).
+ **Total estimated scope:** ~49 implementation days across 9 phases (increased from ~47 to account for filter DSL parser, render cache, progress coalescer, Quick Peek panel, ReaderLease interrupt handles, and generation-guarding all async Msg variants).

### 9.3 Phase 0 — Toolchain Gate
@@ -2912,7 +3033,12 @@ crates/lore-tui/src/theme.rs # ftui Theme config
crates/lore-tui/src/action.rs # Query bridge functions (uses lore core)
crates/lore-tui/src/db_manager.rs # DbManager: closure-based read pool (with_reader) + dedicated writer (with_writer). Prevents lock-poison panics and accidental long-held guards.
crates/lore-tui/src/task_supervisor.rs # TaskSupervisor: unified submit() → TaskHandle API with dedup, cancellation, generation IDs, and priority lanes
- crates/lore-tui/src/entity_cache.rs # Bounded LRU cache for IssueDetail/MrDetail payloads. Keyed by EntityKey. Invalidated on sync completion. Enables near-instant reopen during Enter/Esc drill-in/out workflows.
+ crates/lore-tui/src/entity_cache.rs # Bounded LRU cache for IssueDetail/MrDetail payloads. Keyed by EntityKey. Selective invalidation by changed EntityKey set (not blanket invalidate_all). Optional post-sync prewarm of top changed entities. Enables near-instant reopen during Enter/Esc drill-in/out workflows.
+ crates/lore-tui/src/render_cache.rs # Width/theme/content-hash keyed cache for expensive render artifacts (markdown → styled text, discussion tree shaping). Prevents per-frame recomputation.
+ crates/lore-tui/src/filter_dsl.rs # Typed filter bar DSL parser: quoted values, negation prefix, field:value syntax, inline diagnostics with cursor position. Replaces brittle split_whitespace() parsing.
+ crates/lore-tui/src/progress_coalescer.rs # Per-lane progress event coalescer. Batches progress updates at <=30Hz per lane key (project x resource_type) to reduce render pressure during sync.
+ crates/lore-tui/src/sync_delta_ledger.rs # In-memory per-run exact changed/new entity IDs (issues, MRs, discussions). Populated from SyncCompleted result. Used by Sync Summary mode for exact "what changed" navigation without new DB tables. Cleared on next sync run start.
+ crates/lore-tui/src/scope.rs # Global project scope context (AllProjects or pinned project set). Flows through all query bridge functions. Persisted in session state. `P` keybinding opens scope picker overlay.
crates/lore-tui/src/crash_context.rs # Ring buffer of last 2000 normalized events + current screen/task/build snapshot. Captured by panic hook for post-mortem crash diagnostics with retention policy (latest 20 files).
crates/lore-tui/src/safety.rs # sanitize_for_terminal(), safe_url_policy()
crates/lore-tui/src/redact.rs # redact_sensitive(): strip tokens, Authorization headers, and credential patterns from logs and crash reports before persisting
@@ -4285,6 +4411,7 @@ pub struct AppState {
pub command_palette: CommandPaletteState,

// Cross-cutting state
+ pub global_scope: ScopeContext, // Applies to dashboard/list/search/timeline/who queries. Default: AllProjects.
pub load_state: ScreenLoadStateMap,
pub error_toast: Option<String>,
pub show_help: bool,
@@ -5445,15 +5572,20 @@ pub fn fetch_dashboard(conn: &Connection) -> Result<DashboardData, LoreError> {
}

/// Fetch issues, converting TUI IssueFilter → CLI ListFilters.
+ /// The `scope` parameter applies global project pinning — when a scope is active,
+ /// it overrides any per-filter project selection, ensuring cross-screen consistency.
pub fn fetch_issues(
conn: &Connection,
+ scope: &ScopeContext,
filter: &IssueFilter,
) -> Result<Vec<IssueListRow>, LoreError> {
// Convert TUI filter to CLI filter format.
// The CLI already has query_issues() — we just need to bridge the types.
+ // Global scope overrides per-filter project when active.
+ let effective_project = scope.effective_project(filter.project.as_deref());
let cli_filter = ListFilters {
limit: filter.limit,
- project: filter.project.as_deref(),
+ project: effective_project.as_deref(),
state: filter.state.as_deref(),
author: filter.author.as_deref(),
assignee: filter.assignee.as_deref(),
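The scope-precedence rule in this hunk is a one-liner worth pinning down. A minimal sketch, simplified to a single pinned project where the PRD allows a pinned project set; the enum and method names follow the diff, the bodies are illustrative:

```rust
/// Global project scope. Pinned scope wins over any per-filter project,
/// so every screen answers queries against the same project set.
#[derive(Clone, Debug)]
pub enum ScopeContext {
    AllProjects,
    Pinned(String),
}

impl ScopeContext {
    /// Resolve the project a query should actually use.
    pub fn effective_project(&self, per_filter: Option<&str>) -> Option<String> {
        match self {
            // A pinned scope overrides whatever the screen-local filter says.
            ScopeContext::Pinned(p) => Some(p.clone()),
            // No pin: the per-filter selection (or none) passes through.
            ScopeContext::AllProjects => per_filter.map(str::to_owned),
        }
    }
}
```

Because every `fetch_*` bridge function funnels through this one method, a user who pins a scope with `P` cannot see mixed-scope results on any screen.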
@@ -7806,3 +7938,7 @@ Recommendations from external review (feedback-8, ChatGPT) that were evaluated a
Recommendations from external review (feedback-9, ChatGPT) that were evaluated and declined:

- **Search Facets panel (entity type counts, top labels/projects/authors with one-key apply)** — rejected as feature scope expansion for v1. The concept (three-pane layout with facet counts and quick-apply shortcuts like `1/2/3` for type facets, `l` for label cycling) is compelling and would make search more actionable for triage workflows. However, it requires: new aggregate queries for facet counting that must perform well across all three data tiers, a third layout pane that breaks the current two-pane split design, new keybinding slots (`1/2/3/l`) that could conflict with future list navigation, and per-query facet recalculation that adds latency. The existing search with explicit field-based filters is sufficient for v1. Facets are a strong v2 candidate — once search has production mileage and users report wanting faster triage filtering, the aggregate query patterns and UI layout can be designed with real usage data.
+
+ Recommendations from external review (feedback-10, ChatGPT) that were evaluated and declined:
+
+ - **Structured compat handshake (`--compat-json` replacing `--compat-version` integer)** — rejected because the current two-step contract (integer compat version + separate schema version check) is intentionally minimal and robust. Adding JSON parsing (`{ "protocol": 1, "compat_version": 2, "min_schema": 14, "max_schema": 16, "build": "..." }`) to a preflight binary validation introduces a new failure mode (malformed JSON, missing fields, version parsing) for zero user-visible benefit. The integer check detects "too old to work" — the only case that matters before spawning the TUI. Schema range is already validated separately via `--check-schema`. Combining both into a single JSON response couples concerns that are better kept independent (binary compat vs schema compat). The current approach is more resilient: if `--compat-version` is missing (old binary), we warn and proceed; JSON parsing failure would be a hard error. KISS principle applies.