perf: force partial index for DiffNote queries (26-75x), batch stats counts (1.7x)

who.rs: Add INDEXED BY idx_notes_diffnote_path_created to all DiffNote query paths (expert, expert_details, reviews, path probes, suffix_probe). The SQLite planner was choosing idx_notes_system (106K rows, 38%) over the partial index (26K rows, 9.3%) when LIKE predicates are present. Measured: expert 1561ms -> 59ms (26x), reviews ~1200ms -> 16ms (75x).

stats.rs: Replace 12+ sequential COUNT(*) queries with conditional aggregates (SUM(CASE WHEN ...)) and use the FTS5 shadow table (documents_fts_docsize) instead of the virtual table for counting. Measured: warm 109ms -> 65ms (1.68x).
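The stats.rs change collapses many sequential COUNT(*) scans into a single conditional-aggregate pass. A minimal std-only Rust sketch of the same single-pass idea — the `Note` struct and its fields are hypothetical stand-ins for illustration, not the real schema:

```rust
// One traversal accumulating many aggregates at once — the in-memory analogue
// of `SELECT COUNT(*), SUM(CASE WHEN system THEN 1 ELSE 0 END), ...`
// instead of issuing one COUNT(*) query (one scan) per statistic.
struct Note { system: bool, diff_note: bool, resolved: bool }

#[derive(Debug, Default, PartialEq)]
struct NoteStats { total: u64, system: u64, diff_notes: u64, resolved: u64 }

fn count_stats(notes: &[Note]) -> NoteStats {
    notes.iter().fold(NoteStats::default(), |mut acc, n| {
        acc.total += 1;
        if n.system { acc.system += 1; }
        if n.diff_note { acc.diff_notes += 1; }
        if n.resolved { acc.resolved += 1; }
        acc
    })
}

fn main() {
    let notes = vec![
        Note { system: true, diff_note: false, resolved: false },
        Note { system: false, diff_note: true, resolved: true },
        Note { system: false, diff_note: true, resolved: false },
    ];
    let s = count_stats(&notes);
    assert_eq!(s, NoteStats { total: 3, system: 1, diff_notes: 2, resolved: 1 });
}
```

The SQL version wins for the same reason: one pass over the rows computes every count, rather than N passes for N counts.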
@@ -2,12 +2,12 @@
plan: true
title: "Gitlore TUI PRD v2 - FrankenTUI"
status: iterating
-iteration: 9
+iteration: 10
target_iterations: 10
beads_revision: 0
related_plans: []
created: 2026-02-11
-updated: 2026-02-11
+updated: 2026-02-12
---

# Gitlore TUI — Product Requirements Document
@@ -288,7 +288,9 @@ crates/lore-tui/src/
safety.rs # sanitize_for_terminal(), safe_url_policy()
redact.rs # redact_sensitive(): strip tokens, Authorization headers, and credential patterns from logs and crash reports before persisting to disk
session.rs # Versioned session state persistence + corruption quarantine
-entity_cache.rs # Bounded LRU cache for detail payloads (IssueDetail, MrDetail). Keyed by EntityKey. Invalidated on sync completion. Enables near-instant reopen during Enter/Esc drill-in/out workflows without re-querying.
scope.rs # Global project scope context: all-projects or pinned project set. Applied to dashboard/list/search/timeline/who queries. Persisted in session state.
+entity_cache.rs # Bounded LRU cache for detail payloads (IssueDetail, MrDetail). Keyed by EntityKey. Selective invalidation by changed EntityKey set on sync completion (not blanket invalidate_all). Optional post-sync prewarm of top changed entities for immediate triage. Enables near-instant reopen during Enter/Esc drill-in/out workflows without re-querying.
+render_cache.rs # Width/theme/content-hash keyed cache for expensive render artifacts (markdown → styled text, discussion tree shaping). Invalidation triggers: content hash change, terminal width change, theme change. Prevents per-frame recomputation of markdown parsing and tree layout.
crash_context.rs # Ring buffer of last 2000 normalized events + current screen/task snapshot for crash diagnostics. Captured by panic hook for post-mortem debugging.
```
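The entity cache above is described as a bounded LRU with selective invalidation. A minimal std-only sketch of those two behaviors — a `String` key stands in for the real `EntityKey`, and the method names are illustrative, not the real API:

```rust
use std::collections::{HashMap, VecDeque};

// Bounded LRU: `order` tracks recency (front = least recently used),
// `invalidate_keys` evicts only the entities a sync run actually changed.
struct EntityCache<V> {
    cap: usize,
    map: HashMap<String, V>,
    order: VecDeque<String>,
}

impl<V> EntityCache<V> {
    fn new(cap: usize) -> Self {
        Self { cap, map: HashMap::new(), order: VecDeque::new() }
    }

    fn touch(&mut self, key: &str) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).unwrap();
            self.order.push_back(k); // now most recently used
        }
    }

    fn put(&mut self, key: String, value: V) {
        if self.map.insert(key.clone(), value).is_none() {
            self.order.push_back(key);
            if self.order.len() > self.cap {
                if let Some(evicted) = self.order.pop_front() {
                    self.map.remove(&evicted); // evict the LRU entry
                }
            }
        } else {
            self.touch(&key);
        }
    }

    fn get(&mut self, key: &str) -> Option<&V> {
        self.touch(key);
        self.map.get(key)
    }

    // Selective invalidation: evict only the keys changed by a sync run.
    fn invalidate_keys(&mut self, keys: &[String]) {
        for k in keys {
            self.map.remove(k);
            self.order.retain(|o| o != k);
        }
    }
}

fn main() {
    let mut cache = EntityCache::new(2);
    cache.put("issue:1".into(), 10);
    cache.put("issue:2".into(), 20);
    cache.get("issue:1");            // issue:1 becomes most recently used
    cache.put("issue:3".into(), 30); // evicts issue:2 (the LRU entry)
    assert!(cache.get("issue:2").is_none());
    assert!(cache.get("issue:1").is_some());
    cache.invalidate_keys(&["issue:1".into()]);
    assert!(cache.get("issue:1").is_none());
}
```

The design point: a blanket `invalidate_all` after sync would throw away every warm detail payload; evicting only the changed key set keeps unaffected entries hot for the Enter/Esc loop.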

@@ -359,20 +361,24 @@ pub enum Msg {
CommandPaletteSelect(usize),

// Issue list
-IssueListLoaded(Vec<IssueRow>),
+/// Generation-guarded: stale results from superseded filter/nav are dropped.
+IssueListLoaded { generation: u64, rows: Vec<IssueRow> },
IssueListFilterChanged(IssueFilter),
IssueListSortChanged(SortField, SortOrder),
IssueSelected(EntityKey),

// MR list
-MrListLoaded(Vec<MrRow>),
+/// Generation-guarded: stale results from superseded filter/nav are dropped.
+MrListLoaded { generation: u64, rows: Vec<MrRow> },
MrListFilterChanged(MrFilter),
MrSelected(EntityKey),

// Detail views
-IssueDetailLoaded { key: EntityKey, detail: IssueDetail },
-MrDetailLoaded { key: EntityKey, detail: MrDetail },
-DiscussionsLoaded(Vec<Discussion>),
+/// Generation-guarded: prevents stale detail overwrites after rapid navigation.
+IssueDetailLoaded { generation: u64, key: EntityKey, detail: IssueDetail },
+/// Generation-guarded: prevents stale detail overwrites after rapid navigation.
+MrDetailLoaded { generation: u64, key: EntityKey, detail: MrDetail },
+DiscussionsLoaded { generation: u64, discussions: Vec<Discussion> },

// Search
SearchQueryChanged(String),
@@ -395,6 +401,9 @@ pub enum Msg {
// Sync
SyncStarted,
SyncProgress(ProgressEvent),
+/// Coalesced batch of progress events (one per lane key).
+/// Reduces render pressure by batching at <=30Hz per lane.
+SyncProgressBatch(Vec<ProgressEvent>),
SyncLogLine(String),
SyncBackpressureDrop,
SyncCompleted(SyncResult),
@@ -454,6 +463,7 @@ pub enum Screen {
Sync,
Stats,
Doctor,
+Bootstrap,
}

/// Composite key for entity identity across multi-project datasets.
@@ -553,7 +563,7 @@ impl Default for InputMode {
// crates/lore-tui/src/app.rs

use ftui_runtime::program::{Model, Cmd, TaskSpec};
-use ftui_runtime::subscription::{Subscription, Every};
+use ftui_runtime::subscription::{Subscription, Every, After};
use ftui_core::event::{Event, KeyEvent, KeyCode, KeyEventKind, Modifiers};
use ftui_render::frame::Frame;
use rusqlite::Connection;

@@ -626,6 +636,20 @@ pub struct DbManager {
next_reader: AtomicUsize,
}

+/// A task-scoped reader lease that owns an interrupt handle for safe cancellation.
+/// Unlike interrupting a shared pooled connection (which can cancel unrelated work),
+/// each dispatched query receives its own ReaderLease. The InterruptHandle stored in
+/// TaskHandle targets only this lease's connection, preventing cross-task cancellation bleed.
+pub struct ReaderLease<'a> {
+conn: std::sync::MutexGuard<'a, Connection>,
+/// Owned interrupt handle — safe to fire without affecting other tasks.
+pub interrupt: rusqlite::InterruptHandle,
+}
+
+impl<'a> ReaderLease<'a> {
+pub fn conn(&self) -> &Connection { &self.conn }
+}
+
impl DbManager {
pub fn new(db_path: &Path, reader_count: usize) -> Result<Self, LoreError> {
let mut readers = Vec::with_capacity(reader_count);

@@ -663,6 +687,19 @@ impl DbManager {
.map_err(|e| LoreError::Internal(format!("writer lock poisoned: {e}")))?;
f(&conn)
}

+/// Lease a reader connection with a task-owned interrupt handle.
+/// The returned `ReaderLease` holds the mutex guard and provides
+/// an `InterruptHandle` that can be stored in `TaskHandle` for
+/// safe per-task cancellation. This prevents cross-task interrupt bleed
+/// that would occur with shared-connection `sqlite3_interrupt()`.
+pub fn lease_reader(&self) -> Result<ReaderLease<'_>, LoreError> {
+let idx = self.next_reader.fetch_add(1, Ordering::Relaxed) % self.readers.len();
+let conn = self.readers[idx].lock()
+.map_err(|e| LoreError::Internal(format!("reader lock poisoned: {e}")))?;
+let interrupt = conn.get_interrupt_handle();
+Ok(ReaderLease { conn, interrupt })
+}
}

impl LoreApp {

@@ -786,9 +823,11 @@ impl LoreApp {
}),
Screen::IssueList => {
let filter = self.state.issue_list.current_filter();
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::IssueList));
+let generation = handle.generation;
Cmd::task(move || {
match db.with_reader(|conn| crate::tui::action::fetch_issues(conn, &filter)) {
-Ok(result) => Msg::IssueListLoaded(result),
+Ok(rows) => Msg::IssueListLoaded { generation, rows },
Err(e) => Msg::Error(AppError::Internal(e.to_string())),
}
})

@@ -797,21 +836,26 @@ impl LoreApp {
// Check entity cache first — enables near-instant reopen
// during Enter/Esc drill-in/out workflows.
if let Some(cached) = self.entity_cache.get_issue(key) {
-return Cmd::msg(Msg::IssueDetailLoaded { key: key.clone(), detail: cached.clone() });
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::IssueDetail(key.clone())));
+return Cmd::msg(Msg::IssueDetailLoaded { generation: handle.generation, key: key.clone(), detail: cached.clone() });
}
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::IssueDetail(key.clone())));
+let generation = handle.generation;
let key = key.clone();
Cmd::task(move || {
match db.with_reader(|conn| crate::tui::action::fetch_issue_detail(conn, &key)) {
-Ok(detail) => Msg::IssueDetailLoaded { key, detail },
+Ok(detail) => Msg::IssueDetailLoaded { generation, key, detail },
Err(e) => Msg::Error(AppError::Internal(e.to_string())),
}
})
}
Screen::MrList => {
let filter = self.state.mr_list.current_filter();
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::MrList));
+let generation = handle.generation;
Cmd::task(move || {
match db.with_reader(|conn| crate::tui::action::fetch_mrs(conn, &filter)) {
-Ok(result) => Msg::MrListLoaded(result),
+Ok(rows) => Msg::MrListLoaded { generation, rows },
Err(e) => Msg::Error(AppError::Internal(e.to_string())),
}
})

@@ -819,12 +863,15 @@ impl LoreApp {
Screen::MrDetail(key) => {
// Check entity cache first
if let Some(cached) = self.entity_cache.get_mr(key) {
-return Cmd::msg(Msg::MrDetailLoaded { key: key.clone(), detail: cached.clone() });
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::MrDetail(key.clone())));
+return Cmd::msg(Msg::MrDetailLoaded { generation: handle.generation, key: key.clone(), detail: cached.clone() });
}
+let handle = self.task_supervisor.submit(TaskKey::LoadScreen(Screen::MrDetail(key.clone())));
+let generation = handle.generation;
let key = key.clone();
Cmd::task(move || {
match db.with_reader(|conn| crate::tui::action::fetch_mr_detail(conn, &key)) {
-Ok(detail) => Msg::MrDetailLoaded { key, detail },
+Ok(detail) => Msg::MrDetailLoaded { generation, key, detail },
Err(e) => Msg::Error(AppError::Internal(e.to_string())),
}
})

@@ -895,9 +942,11 @@ impl LoreApp {
Screen::IssueList => {
let filter = self.state.issue_list.current_filter();
let db = Arc::clone(&self.db);
+let handle = self.task_supervisor.submit(TaskKey::FilterRequery(Screen::IssueList));
+let generation = handle.generation;
Cmd::task(move || {
match db.with_reader(|conn| crate::tui::action::fetch_issues(conn, &filter)) {
-Ok(result) => Msg::IssueListLoaded(result),
+Ok(rows) => Msg::IssueListLoaded { generation, rows },
Err(e) => Msg::Error(AppError::Internal(e.to_string())),
}
})

@@ -905,9 +954,11 @@ impl LoreApp {
Screen::MrList => {
let filter = self.state.mr_list.current_filter();
let db = Arc::clone(&self.db);
+let handle = self.task_supervisor.submit(TaskKey::FilterRequery(Screen::MrList));
+let generation = handle.generation;
Cmd::task(move || {
match db.with_reader(|conn| crate::tui::action::fetch_mrs(conn, &filter)) {
-Ok(result) => Msg::MrListLoaded(result),
+Ok(rows) => Msg::MrListLoaded { generation, rows },
Err(e) => Msg::Error(AppError::Internal(e.to_string())),
}
})

@@ -961,15 +1012,18 @@ impl LoreApp {
if cancel_token.load(std::sync::atomic::Ordering::Relaxed) {
return; // Early exit — orchestrator handles partial state
}
-// Track queue depth for stream stats
-let current_depth = 2048 - tx.try_send(Msg::SyncProgress(event.clone()))
-.err().map_or(0, |_| 1);
-max_queue_depth = max_queue_depth.max(current_depth);
-if tx.try_send(Msg::SyncProgress(event.clone())).is_err() {
-// Channel full — drop this progress update rather than
-// blocking the sync thread. Track for stats.
-dropped_count += 1;
-let _ = tx.try_send(Msg::SyncBackpressureDrop);
+// Coalesce progress events by lane key at <=30Hz to reduce
+// render pressure. Each lane (project x resource_type) keeps
+// only its latest progress snapshot. The coalescer flushes
+// a batch when 33ms have elapsed since last flush.
+coalescer.update(event.clone());
+if let Some(batch) = coalescer.flush_ready() {
+if tx.try_send(Msg::SyncProgressBatch(batch)).is_err() {
+// Channel full — drop this batch rather than
+// blocking the sync thread. Track for stats.
+dropped_count += 1;
+let _ = tx.try_send(Msg::SyncBackpressureDrop);
+}
+}
let _ = tx.try_send(Msg::SyncLogLine(format!("{event:?}")));
},
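The coalescer referenced above keeps only the latest snapshot per lane and flushes one batch with one event per lane. A std-only sketch of that per-lane latest-wins behavior — lane/event shapes are assumed for illustration, and the 33ms cadence is left to the caller rather than modeled with a clock:

```rust
use std::collections::HashMap;

// Per-lane coalescing: rapid events on the same lane overwrite each other,
// so a flush emits at most one event per lane regardless of input rate.
#[derive(Clone, Debug)]
struct ProgressEvent { lane: String, done: u32, total: u32 }

#[derive(Default)]
struct ProgressCoalescer {
    latest: HashMap<String, ProgressEvent>, // lane key -> newest snapshot
}

impl ProgressCoalescer {
    fn update(&mut self, ev: ProgressEvent) {
        // Later events for the same lane replace earlier ones.
        self.latest.insert(ev.lane.clone(), ev);
    }

    fn flush(&mut self) -> Option<Vec<ProgressEvent>> {
        if self.latest.is_empty() {
            return None;
        }
        Some(self.latest.drain().map(|(_, ev)| ev).collect())
    }
}

fn main() {
    let mut c = ProgressCoalescer::default();
    // Three rapid events on one lane plus one on another: flush yields 2, not 4.
    c.update(ProgressEvent { lane: "proj-a/issues".into(), done: 1, total: 10 });
    c.update(ProgressEvent { lane: "proj-a/issues".into(), done: 5, total: 10 });
    c.update(ProgressEvent { lane: "proj-b/mrs".into(), done: 2, total: 4 });
    c.update(ProgressEvent { lane: "proj-a/issues".into(), done: 9, total: 10 });
    let batch = c.flush().unwrap();
    assert_eq!(batch.len(), 2);
    assert!(batch.iter().any(|e| e.lane == "proj-a/issues" && e.done == 9));
    assert!(c.flush().is_none()); // nothing pending after drain
}
```

This is why the UI only ever renders the freshest progress per lane: intermediate snapshots are intentionally discarded before they reach the channel.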
@@ -1143,23 +1197,35 @@ impl Model for LoreApp {
self.state.dashboard.update(data);
Cmd::none()
}
-Msg::IssueListLoaded(result) => {
+Msg::IssueListLoaded { generation, rows } => {
+if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::IssueList), generation) {
+return Cmd::none(); // Stale — superseded by newer nav/filter
+}
self.state.set_loading(false);
-self.state.issue_list.set_result(result);
+self.state.issue_list.set_result(rows);
Cmd::none()
}
-Msg::IssueDetailLoaded { key, detail } => {
+Msg::IssueDetailLoaded { generation, key, detail } => {
+if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::IssueDetail(key.clone())), generation) {
+return Cmd::none(); // Stale — user navigated away
+}
self.state.set_loading(false);
self.entity_cache.put_issue(key, detail.clone());
self.state.issue_detail.set(detail);
Cmd::none()
}
-Msg::MrListLoaded(result) => {
+Msg::MrListLoaded { generation, rows } => {
+if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::MrList), generation) {
+return Cmd::none(); // Stale — superseded by newer nav/filter
+}
self.state.set_loading(false);
-self.state.mr_list.set_result(result);
+self.state.mr_list.set_result(rows);
Cmd::none()
}
-Msg::MrDetailLoaded { key, detail } => {
+Msg::MrDetailLoaded { generation, key, detail } => {
+if !self.task_supervisor.is_current(&TaskKey::LoadScreen(Screen::MrDetail(key.clone())), generation) {
+return Cmd::none(); // Stale — user navigated away
+}
self.state.set_loading(false);
self.entity_cache.put_mr(key, detail.clone());
self.state.mr_detail.set(detail);
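The guarded arms above depend on a submit()/is_current() generation contract from the task supervisor. A minimal std-only sketch of just that contract — `TaskKey` is simplified to a `String`, and the real supervisor also tracks cancellation and priority lanes:

```rust
use std::collections::HashMap;

// Each submit for a key supersedes prior in-flight work by bumping the
// generation; a result is applied only if its generation is still current.
#[derive(Default)]
struct TaskSupervisor {
    generations: HashMap<String, u64>,
}

impl TaskSupervisor {
    fn submit(&mut self, key: &str) -> u64 {
        let g = self.generations.entry(key.to_string()).or_insert(0);
        *g += 1;
        *g
    }

    fn is_current(&self, key: &str, generation: u64) -> bool {
        self.generations.get(key) == Some(&generation)
    }
}

fn main() {
    let mut sup = TaskSupervisor::default();
    let g1 = sup.submit("load:IssueList"); // first load in flight
    let g2 = sup.submit("load:IssueList"); // filter changed: supersedes g1
    assert!(!sup.is_current("load:IssueList", g1)); // stale result dropped
    assert!(sup.is_current("load:IssueList", g2));  // latest result applied
    assert!(g2 > g1);
}
```

The point of the pattern: results carry the generation they were minted with, so an out-of-order completion from a superseded query can never overwrite newer state.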
@@ -1219,6 +1285,12 @@ impl Model for LoreApp {
self.state.sync.update_progress(event);
Cmd::none()
}
+Msg::SyncProgressBatch(events) => {
+for event in events {
+self.state.sync.update_progress(event);
+}
+Cmd::none()
+}
Msg::SyncLogLine(line) => {
self.state.sync.push_log(line);
Cmd::none()

@@ -1234,10 +1306,15 @@ impl Model for LoreApp {
Cmd::none()
}
Msg::SyncCompleted(result) => {
-self.state.sync.complete(result);
-// Invalidate entity cache — synced data may have changed.
-self.entity_cache.invalidate_all();
-Cmd::none()
+self.state.sync.complete(&result);
+// Selective invalidation: evict only changed entities from sync delta.
+self.entity_cache.invalidate_keys(&result.changed_entity_keys);
+// Prewarm top N changed/new entities for immediate post-sync triage.
+// This is lazy — enqueues Cmd::task fetches, doesn't block the event loop.
+let prewarm_cmds = self.enqueue_cache_prewarm(&result.changed_entity_keys);
+// Notify list screens that new data is available (snapshot fence refresh badge).
+self.state.notify_data_changed();
+prewarm_cmds
}
Msg::SyncFailed(err) => {
self.state.sync.fail(err);

@@ -1416,21 +1493,23 @@ impl Model for LoreApp {
));
}

-// Go-prefix timeout enforcement: tick even when nothing is loading.
-// Without this, GoPrefix mode can get "stuck" when idle (no other
-// events to drive the Tick that checks the 500ms timeout).
+// Go-prefix timeout: one-shot After(500ms) tied to the prefix start.
+// Uses After (one-shot) instead of Every (periodic) — the prefix
+// either completes with a valid key or times out exactly once.
if matches!(self.input_mode, InputMode::GoPrefix { .. }) {
subs.push(Box::new(
-Every::with_id(2, Duration::from_millis(50), || Msg::Tick)
+After::with_id(2, Duration::from_millis(500), || Msg::Tick)
));
}

-// Search debounce timer: fires SearchDebounceFired after 200ms.
+// Search debounce timer: one-shot fires SearchDebounceFired after 200ms.
// Only active when a debounce is pending (armed by keystroke).
+// Uses After (one-shot) instead of Every (periodic) to avoid repeated
+// firings from a periodic timer — one debounce = one fire.
if self.state.search.debounce_pending() {
let generation = self.state.search.debounce_generation();
subs.push(Box::new(
-Every::with_id(3, Duration::from_millis(200), move || {
+After::with_id(3, Duration::from_millis(200), move || {
Msg::SearchDebounceFired { generation }
})
));

@@ -1485,7 +1564,7 @@ pub fn with_read_snapshot<T>(
}
```

-**Query interruption:** Long-running queries register interrupt checks tied to `CancelToken` to avoid >1s uninterruptible stalls during rapid navigation/filtering. When the user navigates away from a detail screen before queries complete, the cancel token fires `sqlite3_interrupt()` on the connection.
+**Query interruption:** Long-running queries use task-owned `ReaderLease` interrupt handles (from `DbManager::lease_reader()`) to avoid >1s uninterruptible stalls during rapid navigation/filtering. When the user navigates away from a detail screen before queries complete, the `TaskHandle`'s owned `InterruptHandle` fires `sqlite3_interrupt()` on that specific leased connection — never on a shared pool connection. This prevents cross-task cancellation bleed where interrupting one query accidentally cancels an unrelated query on the same pooled connection.

#### 4.5.1 Task Supervisor (Dedup + Cancellation + Priority)

@@ -1549,6 +1628,10 @@ pub struct TaskHandle {
pub key: TaskKey,
pub generation: u64,
pub cancel: Arc<CancelToken>,
+/// Per-task SQLite interrupt handle. When set, cancellation fires
+/// this handle instead of interrupting shared pool connections.
+/// Prevents cross-task cancellation bleed.
+pub interrupt: Option<rusqlite::InterruptHandle>,
}

/// The TaskSupervisor manages active tasks, deduplicates by key, and tracks

@@ -1756,6 +1839,11 @@ pub struct NavigationStack {
/// This mirrors vim's jump list behavior.
jump_list: Vec<Screen>,
jump_index: usize,
+/// Browse snapshot token: each list/search screen carries a per-screen
+/// `BrowseSnapshot` that preserves stable ordering until explicit refresh
+/// or screen re-entry. This works with the snapshot fence to ensure
+/// deterministic pagination during concurrent sync writes.
+browse_snapshots: HashMap<ScreenKind, BrowseSnapshot>,
}

impl NavigationStack {

@@ -1979,9 +2067,21 @@ Insights are computed from local data during dashboard load. Each insight row is
**Data source:** `lore issues` query against SQLite
**Columns:** Configurable — iid, title, state, author, labels, milestone, updated_at
**Sorting:** Click column header or Tab to cycle (iid, updated, created)
-**Filtering:** Interactive filter bar with field:value syntax
+**Filtering:** Interactive filter bar with typed DSL parser. Grammar (v1):
+- `term := [ "-" ] (field ":" value | quoted_text | bare_text)`
+- `value := quoted | unquoted`
+- Examples: `state:opened label:"P1 blocker" -author:bot since:14d`
+- Negation prefix (`-`) excludes matches for that term
+- Quoted values allow spaces in filter values
+- Parser surfaces inline diagnostics with cursor position for parse errors — never silently drops unknown fields
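The grammar above can be sketched for a single already-tokenized term (negation prefix, optional `field:`, quote stripping). Quote-aware whitespace splitting and the cursor-positioned diagnostics are out of scope here, and all names are illustrative rather than the real `filter_dsl.rs` API:

```rust
// Tokenize one filter term: `[-]field:value`, with optional quoted value.
#[derive(Debug, PartialEq)]
struct Term {
    negated: bool,
    field: Option<String>,
    value: String,
}

fn parse_term(input: &str) -> Term {
    // Leading '-' marks an exclusion term.
    let (negated, rest) = match input.strip_prefix('-') {
        Some(r) => (true, r),
        None => (false, input),
    };
    // Split on the first ':' into field and value; bare text has no field.
    let (field, raw) = match rest.split_once(':') {
        Some((f, v)) => (Some(f.to_string()), v),
        None => (None, rest),
    };
    // Quoted values allow spaces: label:"P1 blocker"
    let value = raw.trim_matches('"').to_string();
    Term { negated, field, value }
}

fn main() {
    assert_eq!(
        parse_term("state:opened"),
        Term { negated: false, field: Some("state".into()), value: "opened".into() }
    );
    assert_eq!(
        parse_term("-author:bot"),
        Term { negated: true, field: Some("author".into()), value: "bot".into() }
    );
    assert_eq!(
        parse_term("label:\"P1 blocker\""),
        Term { negated: false, field: Some("label".into()), value: "P1 blocker".into() }
    );
}
```

A typed term list like this is what enables the "never silently drop unknown fields" guarantee: an unrecognized `field` becomes a diagnostic instead of disappearing in a `split_whitespace()` pass.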
+**Pagination:** Windowed keyset pagination with explicit cursor state. The list state maintains `window` (current visible rows), `next_cursor` / `prev_cursor` (keyset boundary values for forward/back navigation), `prefetching` flag (background fetch of next window in progress), and a fixed `window_size` (default 200 rows). First paint uses current window only; no full-result materialization. Virtual scrolling within the window for smooth UX. When the user scrolls past ~80% of the window, the next window is prefetched in the background.

+**Snapshot fence:** On list entry, capture `snapshot_upper_updated_at` (current max `updated_at` in the result set) and pin all list-page queries to `updated_at <= snapshot_upper_updated_at`. This guarantees no duplicate or skipped rows during scrolling even if sync writes occur concurrently. A "new data available" badge appears when a newer sync completes; `r` refreshes the fence and re-queries from the top.

+**Quick Peek (`Space`):** Toggle a right-side preview pane showing the selected item's metadata, first discussion snippet, and cross-references without entering the full detail view. This enables rapid triage scanning — the user can evaluate issues at a glance without the Enter/Esc cycle. The peek pane uses the same progressive hydration as detail views (metadata first, discussions lazy). The pane width adapts to terminal breakpoints (hidden at Xs/Sm, 40% width at Md+).

+**Cursor determinism:** Keyset pagination uses deterministic tuple ordering: `ORDER BY <primary_sort>, project_id, iid`. The cursor struct includes the current `sort_field`, `sort_order`, `project_id` (tie-breaker for multi-project datasets where rows share timestamps), and a `filter_hash: u64` (hash of the active filter state). On cursor resume, the cursor is rejected if `filter_hash` or sort tuple mismatches the current query — this prevents stale cursors from producing duplicate/skipped rows after the user changes sort mode or filters mid-browse.
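The cursor-rejection rule can be sketched directly: a resumed cursor is valid only if its filter hash and sort tuple still match the active query. Types here are simplified stand-ins for the real cursor struct, and `DefaultHasher` stands in for whatever hash the implementation actually uses:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Keyset cursor carrying the sort tuple and a hash of the filter it was
// minted under, plus the (updated_at, project_id, iid) boundary tuple.
#[derive(Clone, Debug)]
struct Cursor {
    sort_field: String,
    sort_order: String,
    filter_hash: u64,
    last_seen: (i64, u64, u64), // (updated_at, project_id, iid)
}

fn filter_hash(filter: &str) -> u64 {
    let mut h = DefaultHasher::new();
    filter.hash(&mut h);
    h.finish()
}

// Reject cursors minted under a different filter or sort mode: resuming them
// would produce duplicate or skipped rows.
fn cursor_valid(c: &Cursor, sort_field: &str, sort_order: &str, filter: &str) -> bool {
    c.sort_field == sort_field
        && c.sort_order == sort_order
        && c.filter_hash == filter_hash(filter)
}

fn main() {
    let c = Cursor {
        sort_field: "updated".into(),
        sort_order: "desc".into(),
        filter_hash: filter_hash("state:opened"),
        last_seen: (1_700_000_000, 42, 1337),
    };
    assert!(cursor_valid(&c, "updated", "desc", "state:opened"));
    // Filter changed mid-browse: stale cursor must be rejected.
    assert!(!cursor_valid(&c, "updated", "desc", "state:opened label:bug"));
    // Sort mode changed: rejected too.
    assert!(!cursor_valid(&c, "created", "desc", "state:opened"));
}
```

On rejection the list simply re-queries from the top under the new filter/sort, which is cheap with windowed pagination.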

### 5.3 Issue Detail

```

@@ -2052,7 +2152,9 @@ Identical structure to Issue List with MR-specific columns:
| Author | MR author |
| Updated | Relative time |

-**Pagination:** Same windowed keyset pagination strategy as Issue List (window=200, background prefetch).
+**Pagination:** Same windowed keyset pagination strategy as Issue List (window=200, background prefetch, deterministic cursor with `project_id` tie-breaker and `filter_hash` invalidation). Same snapshot fence (`updated_at <= snapshot_upper_updated_at`) for deterministic cross-page traversal under concurrent sync writes.

+**Quick Peek (`Space`):** Same as Issue List — toggle right preview pane showing MR metadata, first discussion snippet, and cross-references for rapid triage without entering detail view.

**Additional filters:** `--draft`, `--no-draft`, `--target-branch`, `--source-branch`, `--reviewer`

@@ -2294,8 +2396,8 @@ The Sync screen has two modes: **running** (progress + log) and **summary** (pos

**Summary mode:**
- Shows delta counts (new, updated) for each entity type
-- `i` navigates to Issue List pre-filtered to "since last sync" (using `sync_status.last_completed_at` timestamp comparison)
-- `m` navigates to MR List pre-filtered to "since last sync" (using `sync_status.last_completed_at` timestamp comparison)
+- `i` navigates to Issue List filtered by exact issue IDs changed in this sync run (from in-memory `SyncDeltaLedger`). Falls back to timestamp filter via `sync_status.last_completed_at` only if run delta is not available (e.g., after app restart).
+- `m` navigates to MR List filtered by exact MR IDs changed in this sync run (from in-memory `SyncDeltaLedger`). Falls back to timestamp filter only if run delta is not available.
- `r` restarts sync

### 5.10 Command Palette (Overlay)

@@ -2349,6 +2451,21 @@ The Sync screen has two modes: **running** (progress + log) and **summary** (pos
- Does NOT auto-execute commands — the user always runs them manually for safety
- Scrollable with j/k, Esc to go back

+### 5.12 Bootstrap (Data Readiness)
+
+Shown automatically when the TUI detects no synced projects/documents or required indexes are missing. This is a read-only screen — it never auto-executes commands.
+
+Displays concise readiness checks with pass/fail indicators:
+- Synced projects present?
+- Issues/MRs populated?
+- FTS index built?
+- Embedding index built? (optional — warns but doesn't block)
+- Required migration version met?
+
+For each failing check, shows the exact CLI command to recover (e.g., `lore sync`, `lore migrate`, `lore --robot doctor`). The user exits the TUI and runs the commands manually.
+
+This prevents the "blank screen" first-run experience where a user launches `lore tui` before syncing data and sees an empty dashboard with no indication of what to do next.

---

## 6. User Flows

@@ -2483,8 +2600,8 @@ graph TD
style F fill:#51cf66,stroke:#333,color:#fff
```

-**Keystrokes:** `i` → `j/k` to scan → `Enter` to peek → `Esc` to return → continue scanning
-**State preservation:** After pressing Esc from Issue Detail, the cursor returns to exactly the same row in the list. Filter state and scroll offset are preserved. This tight Enter/Esc loop is the most common daily workflow.
+**Keystrokes:** `i` → `j/k` to scan → `Space` to Quick Peek (or `Enter` for full detail) → `Esc` to return → continue scanning
+**State preservation:** After pressing Esc from Issue Detail, the cursor returns to exactly the same row in the list. Filter state and scroll offset are preserved. This tight Enter/Esc loop is the most common daily workflow. Quick Peek (`Space`) makes triage even faster — preview metadata and first discussion snippet without leaving the list.

### 6.8 Flow: "Jump between screens without returning to Dashboard"

@@ -2591,6 +2708,7 @@ graph TD
| `Ctrl+O` | Jump backward in jump list (entity hops) |
| `Alt+o` | Jump forward in jump list (entity hops) |
| `Ctrl+R` | Reset session state for current screen (clear filters, scroll to top) |
+| `P` | Open project scope picker / toggle global scope pin. When a scope is pinned, all list/search/timeline/who queries are filtered to that project set. A visible `[scope: project/path]` indicator appears in the status bar. |
| `Ctrl+C` | Quit (force) |

### 8.2 List Screens (Issues, MRs, Search Results)

@@ -2600,6 +2718,7 @@ graph TD
| `j` / `↓` | Move selection down |
| `k` / `↑` | Move selection up |
| `Enter` | Open selected item |
+| `Space` | Toggle Quick Peek panel for selected row |
| `G` | Jump to bottom |
| `g` `g` | Jump to top |
| `Tab` / `f` | Focus filter bar |

@@ -2614,7 +2733,7 @@ graph TD
3. Global shortcuts — `q`, `H`, `?`, `o`, `Ctrl+C`, `Ctrl+P`, `Esc`, `g` prefix
4. Screen-local shortcuts — per-screen key handlers (the table above)

-**Go-prefix timeout:** 500ms from first `g` press, enforced by `InputMode::GoPrefix { started_at }` state checked on each tick via `clock.now_instant()`. If no valid continuation key arrives within 500ms, the prefix cancels and a brief "g--" flash clears from the status bar. The tick subscription compares the injected Clock's current instant against `started_at` — no separate timer task needed. Using `InputMode` instead of ad-hoc boolean flags makes the state machine explicit and deterministic. Feedback is immediate — the status bar shows "g--" within the same frame as the keypress.
+**Go-prefix timeout:** 500ms from first `g` press, enforced by a one-shot `After(500ms)` subscription tied to the prefix generation. If no valid continuation key arrives within 500ms, the timer fires a single `Msg::Tick` which checks `InputMode::GoPrefix { started_at }` via `clock.now_instant()` and cancels the prefix. A brief "g--" flash clears from the status bar. Using `After` (one-shot) instead of `Every` (periodic) avoids unnecessary repeated ticks. Using `InputMode` instead of ad-hoc boolean flags makes the state machine explicit and deterministic. Feedback is immediate — the status bar shows "g--" within the same frame as the keypress.
|
||||
|
||||
**Terminal keybinding safety notes:**
|
||||
- `Ctrl+I` is NOT used — it is indistinguishable from `Tab` in most terminals (both send `\x09`). Jump-forward uses `Alt+o` instead.
|
||||
@@ -2783,6 +2902,8 @@ gantt
|
||||
Event fuzz tests (key/resize/paste, deterministic seed replay):p55g, after p55e, 1d
|
||||
Deterministic clock/render tests:p55i, after p55g, 0.5d
|
||||
30-minute soak test (no panic/leak):p55h, after p55i, 1d
|
||||
Concurrent pagination/write race tests :p55j, after p55h, 1d
|
||||
Query cancellation race tests :p55k, after p55j, 0.5d
|
||||
|
||||
section Phase 5.6 — CLI/TUI Parity Pack
|
||||
Dashboard count parity tests :p56a, after p55h, 0.5d
|
||||
@@ -2802,7 +2923,7 @@ Ensures the TUI displays the same data as the CLI robot mode, preventing drift b
|
||||
|
||||
**Success criterion:** Parity suite passes on CI fixtures (S and M tiers). Parity is asserted by field-level comparison, not string formatting comparison — the TUI and CLI may format differently but must present the same underlying data.
|
||||
|
||||
**Total estimated scope:** ~47 implementation days across 9 phases (increased from ~43 to account for Phase 2.5 vertical slice gate, entity cache, crash context ring buffer, timer-based debounce, and expanded success criteria 24-25).
|
||||
**Total estimated scope:** ~49 implementation days across 9 phases (increased from ~47 to account for filter DSL parser, render cache, progress coalescer, Quick Peek panel, ReaderLease interrupt handles, and generation-guarding all async Msg variants).
|
||||
|
||||
### 9.3 Phase 0 — Toolchain Gate
|
||||
|
||||
@@ -2912,7 +3033,12 @@ crates/lore-tui/src/theme.rs # ftui Theme config
crates/lore-tui/src/action.rs # Query bridge functions (uses lore core)
crates/lore-tui/src/db_manager.rs # DbManager: closure-based read pool (with_reader) + dedicated writer (with_writer). Prevents lock-poison panics and accidental long-held guards.
crates/lore-tui/src/task_supervisor.rs # TaskSupervisor: unified submit() → TaskHandle API with dedup, cancellation, generation IDs, and priority lanes
-crates/lore-tui/src/entity_cache.rs # Bounded LRU cache for IssueDetail/MrDetail payloads. Keyed by EntityKey. Invalidated on sync completion. Enables near-instant reopen during Enter/Esc drill-in/out workflows.
+crates/lore-tui/src/entity_cache.rs # Bounded LRU cache for IssueDetail/MrDetail payloads. Keyed by EntityKey. Selective invalidation by changed EntityKey set (not blanket invalidate_all). Optional post-sync prewarm of top changed entities. Enables near-instant reopen during Enter/Esc drill-in/out workflows.
crates/lore-tui/src/render_cache.rs # Width/theme/content-hash keyed cache for expensive render artifacts (markdown → styled text, discussion tree shaping). Prevents per-frame recomputation.
crates/lore-tui/src/filter_dsl.rs # Typed filter bar DSL parser: quoted values, negation prefix, field:value syntax, inline diagnostics with cursor position. Replaces brittle split_whitespace() parsing.
crates/lore-tui/src/progress_coalescer.rs # Per-lane progress event coalescer. Batches progress updates at <=30Hz per lane key (project x resource_type) to reduce render pressure during sync.
crates/lore-tui/src/sync_delta_ledger.rs # In-memory per-run exact changed/new entity IDs (issues, MRs, discussions). Populated from SyncCompleted result. Used by Sync Summary mode for exact "what changed" navigation without new DB tables. Cleared on next sync run start.
crates/lore-tui/src/scope.rs # Global project scope context (AllProjects or pinned project set). Flows through all query bridge functions. Persisted in session state. `P` keybinding opens scope picker overlay.
crates/lore-tui/src/crash_context.rs # Ring buffer of last 2000 normalized events + current screen/task/build snapshot. Captured by panic hook for post-mortem crash diagnostics with retention policy (latest 20 files).
crates/lore-tui/src/safety.rs # sanitize_for_terminal(), safe_url_policy()
crates/lore-tui/src/redact.rs # redact_sensitive(): strip tokens, Authorization headers, and credential patterns from logs and crash reports before persisting
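The entity cache's selective-invalidation behavior can be sketched roughly as follows — a minimal std-only bounded LRU keyed by a hypothetical `EntityKey`, where sync completion evicts only the keys reported as changed instead of clearing the whole cache. Names here are illustrative, not the actual `entity_cache.rs` API:

```rust
use std::collections::{HashMap, VecDeque};

/// Illustrative key type; the real EntityKey is defined elsewhere in the crate.
#[derive(Clone, Hash, PartialEq, Eq, Debug)]
pub enum EntityKey {
    Issue(u64),
    Mr(u64),
}

/// Minimal bounded LRU: map for O(1) lookup, deque for recency order.
pub struct EntityCache<V> {
    map: HashMap<EntityKey, V>,
    order: VecDeque<EntityKey>, // front = least recently used
    capacity: usize,
}

impl<V> EntityCache<V> {
    pub fn new(capacity: usize) -> Self {
        Self { map: HashMap::new(), order: VecDeque::new(), capacity }
    }

    /// Move an existing key to the most-recently-used position.
    fn touch(&mut self, key: &EntityKey) {
        if let Some(pos) = self.order.iter().position(|k| k == key) {
            let k = self.order.remove(pos).unwrap();
            self.order.push_back(k);
        }
    }

    pub fn get(&mut self, key: &EntityKey) -> Option<&V> {
        if self.map.contains_key(key) {
            self.touch(key);
            self.map.get(key)
        } else {
            None
        }
    }

    pub fn put(&mut self, key: EntityKey, value: V) {
        if self.map.insert(key.clone(), value).is_some() {
            self.touch(&key);
        } else {
            self.order.push_back(key);
            if self.order.len() > self.capacity {
                // Evict the least-recently-used entry.
                if let Some(evicted) = self.order.pop_front() {
                    self.map.remove(&evicted);
                }
            }
        }
    }

    /// Selective invalidation: drop only the entries sync reported as changed,
    /// rather than a blanket invalidate_all.
    pub fn invalidate_keys(&mut self, changed: &[EntityKey]) {
        for key in changed {
            if self.map.remove(key).is_some() {
                if let Some(pos) = self.order.iter().position(|k| k == key) {
                    self.order.remove(pos);
                }
            }
        }
    }

    pub fn len(&self) -> usize {
        self.map.len()
    }
}
```

The `position` scan makes `touch` O(n); for the small detail-payload capacities described here that is fine, and a production version could swap in an intrusive list or a counter-based recency scheme.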

@@ -4285,6 +4411,7 @@ pub struct AppState {
    pub command_palette: CommandPaletteState,

    // Cross-cutting state
    pub global_scope: ScopeContext, // Applies to dashboard/list/search/timeline/who queries. Default: AllProjects.
    pub load_state: ScreenLoadStateMap,
    pub error_toast: Option<String>,
    pub show_help: bool,

@@ -5445,15 +5572,20 @@ pub fn fetch_dashboard(conn: &Connection) -> Result<DashboardData, LoreError> {
}

/// Fetch issues, converting TUI IssueFilter → CLI ListFilters.
/// The `scope` parameter applies global project pinning — when a scope is active,
/// it overrides any per-filter project selection, ensuring cross-screen consistency.
pub fn fetch_issues(
    conn: &Connection,
    scope: &ScopeContext,
    filter: &IssueFilter,
) -> Result<Vec<IssueListRow>, LoreError> {
    // Convert TUI filter to CLI filter format.
    // The CLI already has query_issues() — we just need to bridge the types.
    // Global scope overrides per-filter project when active.
    let effective_project = scope.effective_project(filter.project.as_deref());
    let cli_filter = ListFilters {
        limit: filter.limit,
-        project: filter.project.as_deref(),
+        project: effective_project.as_deref(),
        state: filter.state.as_deref(),
        author: filter.author.as_deref(),
        assignee: filter.assignee.as_deref(),
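The scope-override rule used by the bridge above can be sketched as follows — a hypothetical `ScopeContext` in which a pinned scope wins over the per-filter project and `AllProjects` defers to it. This is a sketch of the described semantics (pinning a single project for brevity, where the real type pins a set), not the crate's actual `scope.rs` definition:

```rust
/// Illustrative scope type; the real ScopeContext lives in scope.rs
/// and is persisted in session state.
#[derive(Clone, Debug)]
pub enum ScopeContext {
    /// No pinning: the per-filter project selection applies unchanged.
    AllProjects,
    /// A pinned project: overrides any per-filter selection.
    Pinned(String),
}

impl ScopeContext {
    /// When a scope is active it overrides the per-filter project,
    /// keeping every screen consistent with the global pin.
    pub fn effective_project(&self, filter_project: Option<&str>) -> Option<String> {
        match self {
            ScopeContext::AllProjects => filter_project.map(str::to_owned),
            ScopeContext::Pinned(p) => Some(p.clone()),
        }
    }
}
```

Returning an owned `Option<String>` (rather than borrowing) is what lets the caller write `effective_project.as_deref()` when building the borrowed `ListFilters`.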

@@ -7806,3 +7938,7 @@ Recommendations from external review (feedback-8, ChatGPT) that were evaluated a
Recommendations from external review (feedback-9, ChatGPT) that were evaluated and declined:
- **Search Facets panel (entity type counts, top labels/projects/authors with one-key apply)** — rejected as feature scope expansion for v1. The concept (three-pane layout with facet counts and quick-apply shortcuts like `1/2/3` for type facets, `l` for label cycling) is compelling and would make search more actionable for triage workflows. However, it requires: new aggregate queries for facet counting that must perform well across all three data tiers, a third layout pane that breaks the current two-pane split design, new keybinding slots (`1/2/3/l`) that could conflict with future list navigation, and per-query facet recalculation that adds latency. The existing search with explicit field-based filters is sufficient for v1. Facets are a strong v2 candidate — once search has production mileage and users report wanting faster triage filtering, the aggregate query patterns and UI layout can be designed with real usage data.
Recommendations from external review (feedback-10, ChatGPT) that were evaluated and declined:

- **Structured compat handshake (`--compat-json` replacing `--compat-version` integer)** — rejected because the current two-step contract (integer compat version + separate schema version check) is intentionally minimal and robust. Adding JSON parsing (`{ "protocol": 1, "compat_version": 2, "min_schema": 14, "max_schema": 16, "build": "..." }`) to the preflight binary check introduces new failure modes (malformed JSON, missing fields, version parsing) for zero user-visible benefit. The integer check detects "too old to work" — the only case that matters before spawning the TUI. Schema range is already validated separately via `--check-schema`. Combining both into a single JSON response couples concerns that are better kept independent (binary compat vs schema compat). The current approach is more resilient: if `--compat-version` is missing (old binary), we warn and proceed; a JSON parsing failure would be a hard error. KISS principle applies.
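The resilience property argued for above — parse failure degrades to warn-and-proceed rather than a hard error — can be sketched as a small pure function over the preflight output. The names (`CompatCheck`, `check_compat`) are hypothetical, and this is a sketch of the described contract, not the actual preflight implementation:

```rust
/// Outcome of the lenient integer preflight check described above.
#[derive(Debug, PartialEq)]
pub enum CompatCheck {
    /// Binary reports a compat version >= what this TUI requires.
    Ok,
    /// Binary is too old to work: the one case worth blocking on.
    TooOld { found: u32, required: u32 },
    /// Flag missing or output unparseable (old binary): warn and proceed.
    UnknownProceed,
}

/// Parse the stdout of a `--compat-version` invocation (a bare integer)
/// and compare it against the minimum this TUI requires. Any parse
/// failure degrades to UnknownProceed rather than a hard error — the
/// resilience a mandatory JSON handshake would lose.
pub fn check_compat(stdout: &str, required: u32) -> CompatCheck {
    match stdout.trim().parse::<u32>() {
        Ok(v) if v >= required => CompatCheck::Ok,
        Ok(v) => CompatCheck::TooOld { found: v, required },
        Err(_) => CompatCheck::UnknownProceed,
    }
}
```

Schema-range validation stays in its own `--check-schema` step, so this function only answers the binary-compat half of the two-step contract.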