29 Commits

Author SHA1 Message Date
teernisse
386dd884ec test(ingestion): add MR + nonzero_summary tests, close bd-1au9 2026-02-19 10:02:03 -05:00
teernisse
61fbc234d8 test(ingestion): add 25 unit tests for issues.rs (bd-1au9 partial)
Cover parse_timestamp, cursor_filter_with_ts, sync cursor DB operations,
process_single_issue (upsert, labels, assignees, milestone, dirty tracking,
payload storage), and discussion sync queue population.
2026-02-19 09:53:50 -05:00
teernisse
5aca644da6 feat: implement lore brief command (bd-1n5q)
Composable capstone: replaces 5+ separate lore calls with a single
situational awareness command. Three modes:
- Topic: lore brief 'authentication'
- Path: lore brief --path src/auth/
- Person: lore brief --person username

Seven sections: open_issues, active_mrs, experts, recent_activity,
unresolved_threads, related (semantic), warnings.

Each section degrades gracefully if data is unavailable.
7 unit tests, robot-docs, autocorrect registry.
2026-02-19 09:51:37 -05:00
teernisse
20db46a514 refactor: split who.rs into who/ module (bd-2cbw)
Split 2447-line who.rs into focused submodules:
- who/scoring.rs: half_life_decay (20 lines)
- who/queries.rs: 5 query functions + helpers (~1400 lines)
- who/format.rs: human + JSON formatters (~570 lines)
- who.rs: slim module root with mode dispatch + re-exports (~260 lines)

All 1052 tests pass. No public API changes.
2026-02-19 09:45:12 -05:00
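The `half_life_decay` helper split into who/scoring.rs is small enough to sketch. This is an illustrative reconstruction based only on the function's name, not the project's exact code; the parameter names are assumptions:

```rust
/// Exponential decay: a contribution loses half its weight every
/// `half_life_days`. Illustrative sketch; parameter names are assumed.
fn half_life_decay(age_days: f64, half_life_days: f64) -> f64 {
    0.5_f64.powf(age_days / half_life_days)
}

fn main() {
    // A 90-day-old contribution with a 30-day half-life keeps 1/8 weight.
    println!("{}", half_life_decay(90.0, 30.0));
}
```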
teernisse
e8ecb561cf feat: implement lore explain command (bd-9lbr)
Auto-generates structured narratives for issues and MRs from local DB:
- EntitySummary with title, state, author, labels, status
- Key decisions heuristic (correlates state/label changes with nearby notes)
- Activity summary with event counts and time span
- Open threads detection (unresolved discussions)
- Related entities (closing MRs, related issues)
- Timeline of all events in chronological order

7 unit tests, robot-docs entry, autocorrect registry, CLI dispatch wired.
2026-02-19 09:38:50 -05:00
teernisse
1e679a6d72 feat(sync): fetch and store GitLab issue links (bd-343o)
Add end-to-end support for GitLab issue link fetching:
- New GitLabIssueLink type + fetch_issue_links API client method
- Migration 029: add issue_links job type and watermark column
- issue_links.rs: bidirectional entity_reference storage with
  self-link skip, cross-project fallback, idempotent upsert
- Drain pipeline in orchestrator following mr_closes_issues pattern
- Display related issues in 'lore show issues' output
- --no-issue-links CLI flag with config, autocorrect, robot-docs
- 6 unit tests for storage logic
2026-02-19 09:26:47 -05:00
teernisse
9a1dbda522 docs: update AGENTS.md and CLAUDE.md with Phase B commands (bd-2fc)
Add temporal intelligence command examples: timeline, file-history,
trace, related, drift, who, count references, surgical sync.
Both AGENTS.md (project) and ~/.claude/CLAUDE.md (global) updated.
2026-02-19 09:05:19 -05:00
teernisse
a55f47161b docs(robot): update robot-docs manifest with Phase B commands (bd-1v8)
- Add 'related' command with entity/query modes and response schema
- Update 'count' to document 'references' entity type and its schema
- Expand temporal_intelligence workflow with file-history and trace steps
- Update lore_exclusive list with file-history, trace, related, drift
2026-02-19 09:03:24 -05:00
teernisse
2bbd1e3426 feat(cli): close bd-3jqx, add related to autocorrect registry, robot-docs updates
- Register related subcommand flags (--limit, --project) in COMMAND_FLAGS
- Robot-docs: add related command schema, count references schema
- Robot-docs: add file-history, trace, related, drift to capabilities
- Close bd-3jqx: all 4 integration tests passing (903 total, 0 failures)
- Beads sync
2026-02-19 09:03:16 -05:00
teernisse
574cd55eff feat(cli): add 'lore count references' command (bd-2ez)
Adds 'references' entity type to the count command with breakdowns
by reference_type (closes/mentioned/related), source_method
(api/note_parse/description_parse), and unresolved count.

Includes human and robot output formatters, 2 unit tests.
2026-02-19 09:01:05 -05:00
teernisse
c8dece8c60 feat(cli): add 'lore related' semantic similarity command (bd-8con)
Adds 'lore related' / 'lore similar' command for discovering semantically
related issues and MRs using vector embeddings.

Two modes:
- Entity mode: find entities similar to a specific issue/MR
- Query mode: embed free text and find matching entities

Includes distance-to-similarity conversion, label intersection,
human and robot output formatters, and 11 unit tests.
2026-02-19 08:56:16 -05:00
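The commit mentions a distance-to-similarity conversion without giving the formula. One common mapping (an assumption here, not necessarily what lore uses) turns a non-negative vector distance into a 0..1 score:

```rust
/// One plausible way to turn a vector distance into a 0..1 similarity
/// score. Sketch only; the exact formula lore uses is not shown in the
/// commit message.
fn distance_to_similarity(distance: f64) -> f64 {
    1.0 / (1.0 + distance.max(0.0))
}

fn main() {
    println!("{}", distance_to_similarity(0.0)); // identical vectors
    println!("{}", distance_to_similarity(1.0));
}
```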
teernisse
3e96f19a11 feat(tui): add CLI/TUI parity tests (bd-wrw1)
10 parity tests verifying TUI and CLI query paths return consistent
results from the same SQLite database:
- Dashboard count parity (issues, MRs, discussions, notes)
- Issue list parity (IID ordering, state/author filters, ascending sort)
- MR list parity (IID ordering, state filter)
- Shared field parity (title, state, author, project_path)
- Empty database handling
- Terminal safety sanitization (dangerous sequences stripped)

Uses full-schema in-memory DB via create_connection + run_migrations.
Closes bd-wrw1, bd-2o49 (Phase 5.6 epic).
2026-02-19 08:01:55 -05:00
teernisse
8d24138655 chore: close Phase 5.5 epic (bd-1b6k) — 63 reliability tests 2026-02-19 07:49:59 -05:00
teernisse
01491b4180 feat(tui): add soak + pagination race tests (bd-14hv)
7 soak tests: 50k-event sustained load, watchdog timeout, render
interleaving, screen cycling, mode oscillation, depth bounds, multi-seed.
7 pagination race tests: concurrent read/write with snapshot fence,
multi-reader, within-fence writes, stress 1000 iterations.
2026-02-19 07:49:22 -05:00
teernisse
5143befe46 feat(tui): add 14 performance benchmark tests (bd-wnuo)
S/M/L tiered benchmarks measuring TUI update+render cycles with
synthetic data fixtures. SLO gates: S-tier <10ms update/<20ms render,
M-tier <50ms each. L-tier advisory only. All pass with generous margins.
2026-02-19 07:42:51 -05:00
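An SLO gate of this kind can be sketched with std timing alone. The budget values (10 ms S-tier update, 50 ms M-tier) come from the commit message; the harness itself is hypothetical:

```rust
use std::time::{Duration, Instant};

/// Time a closure and check it against an SLO budget, mirroring the
/// "update under 10 ms" style of gate described above.
fn within_slo<F: FnOnce()>(budget: Duration, work: F) -> bool {
    let start = Instant::now();
    work();
    start.elapsed() <= budget
}

fn main() {
    // S-tier update budget: 10 ms.
    let ok = within_slo(Duration::from_millis(10), || {
        let _sum: u64 = (0..10_000u64).sum();
    });
    println!("{ok}");
}
```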
teernisse
1e06cec3df feat(tui): add 10 navigation property tests (bd-3eis)
Deterministic seeded PRNG verifies NavigationStack invariants across
200k+ operations: depth >= 1, push/pop identity, forward cleared,
jump list only tracks detail screens, reset clears all, breadcrumbs
match depth, no panic under arbitrary sequences.
2026-02-19 01:13:20 -05:00
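The property-test approach above can be sketched with a tiny seeded PRNG driving random operations while an invariant is checked. This is a simplified stand-in (xorshift PRNG, a bare `Vec` instead of the real NavigationStack) that only exercises the "depth >= 1" invariant named in the commit:

```rust
/// Minimal xorshift64 PRNG so property tests replay deterministically
/// from a seed.
struct Rng(u64);
impl Rng {
    fn next(&mut self) -> u64 {
        self.0 ^= self.0 << 13;
        self.0 ^= self.0 >> 7;
        self.0 ^= self.0 << 17;
        self.0
    }
}

/// Run `ops` random push/pop operations and verify depth never drops
/// below 1 (the root screen is never popped).
fn depth_stays_positive(seed: u64, ops: usize) -> bool {
    let mut rng = Rng(seed.max(1)); // xorshift must not start at 0
    let mut stack = vec!["root"];
    for _ in 0..ops {
        if rng.next() % 2 == 0 {
            stack.push("screen");
        } else if stack.len() > 1 {
            stack.pop(); // guard preserves the invariant
        }
        if stack.is_empty() {
            return false;
        }
    }
    true
}

fn main() {
    println!("{}", depth_stays_positive(42, 200_000));
}
```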
teernisse
9d6352a6af feat(tui): add 9 stress/fuzz tests for resize storm, rapid keys, event fuzz (bd-nu0d) 2026-02-19 01:09:02 -05:00
teernisse
656db00c04 feat(tui): add 16 race condition reliability tests (bd-3fjk)
- 4 stale response tests: issue list, dashboard, MR list, cross-screen isolation
- 4 SQLITE_BUSY error handling tests: toast display, nav preservation, idempotent toasts, error-then-success
- 7 cancel race tests: cancel/resubmit, rapid 5-submit sequence, key isolation, complete removes handle, stale completion no-op, stuck loading prevention, cancel_all
- 1 issue detail stale guard test
- Added active_cancel_token() method to TaskSupervisor for test observability
2026-02-19 01:03:25 -05:00
teernisse
9bcc512639 feat(tui): add 9 user flow integration tests (bd-2ygk)
Implements end-to-end flow tests covering all PRD Section 6 journeys:
- Morning triage (dashboard -> issue list -> detail -> back)
- Direct screen jumps (g-prefix chain: gt -> gw -> gi -> gh)
- Quick search (g/ -> results -> drill-in -> back with preserved state)
- Sync and browse (gs -> sync lifecycle -> complete -> browse)
- Expert navigation (gw -> Who -> verify expert mode default)
- Command palette (Ctrl+P -> verify open/filtered -> Esc close)
- Timeline navigation (gt -> events -> drill-in -> back)
- Bootstrap sync flow (Bootstrap -> gs -> SyncCompleted -> Dashboard)
- MR drill-in and back (gm -> detail -> Esc -> cursor preserved)

Key testing patterns:
- State generation alignment for dual-guard stale detection
- Key event injection via send_key/send_go helpers
- Data injection via supervisor.submit() + Msg handlers
- Cross-screen state preservation assertions
2026-02-19 00:52:58 -05:00
teernisse
403800be22 feat(tui): add snapshot test infrastructure + terminal compat matrix (bd-2nfs)
- 6 deterministic snapshot tests at 120x40 with FakeClock frozen at 2026-01-15T12:00:00Z
- Buffer-to-plaintext serializer resolving chars, graphemes, and wide-char continuations
- Golden file management with UPDATE_SNAPSHOTS=1 env var for regeneration
- Snapshot diff output on mismatch for easy debugging
- Tests: dashboard, issue list, issue detail, MR list, search results, empty state
- TERMINAL_COMPAT.md template for manual QA across iTerm2/tmux/Alacritty/kitty/WezTerm
2026-02-19 00:38:11 -05:00
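The golden-file mechanism described above follows a well-known pattern: compare rendered output against a stored snapshot, and regenerate the snapshot when an env var is set. A minimal sketch (the project's serializer, paths, and diff output will differ):

```rust
use std::{env, fs, path::Path};

/// Compare `rendered` against the golden file at `path`. With
/// UPDATE_SNAPSHOTS=1, rewrite the golden file instead of comparing.
fn check_snapshot(path: &Path, rendered: &str) -> Result<(), String> {
    if env::var("UPDATE_SNAPSHOTS").map(|v| v == "1").unwrap_or(false) {
        fs::write(path, rendered).map_err(|e| e.to_string())?;
        return Ok(());
    }
    let golden = fs::read_to_string(path).map_err(|e| e.to_string())?;
    if golden == rendered {
        Ok(())
    } else {
        Err(format!("snapshot mismatch for {}", path.display()))
    }
}

fn main() {
    let p = env::temp_dir().join("demo.snap");
    fs::write(&p, "dashboard").unwrap();
    println!("{}", check_snapshot(&p, "dashboard").is_ok());
}
```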
teernisse
04ea1f7673 feat(tui): wire entity cache for near-instant detail view reopens (bd-3rjw)
- Add get_mut() and clear() methods to EntityCache<V>
- Add CachedIssuePayload / CachedMrPayload types to state
- Wire cache check in navigate_to for instant cache hits
- Populate cache on IssueDetailLoaded / MrDetailLoaded
- Update cache on DiscussionsLoaded
- Add 6 new entity_cache tests (get_mut, clear)
2026-02-19 00:25:28 -05:00
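The shape of `EntityCache<V>` with the newly added `get_mut()` and `clear()` can be sketched as a thin map wrapper. This is a hypothetical reconstruction; the real cache likely carries capacity and eviction logic not shown here:

```rust
use std::collections::HashMap;

/// Minimal entity cache keyed by entity id, with the get_mut/clear
/// methods the commit adds. Hypothetical sketch only.
struct EntityCache<V> {
    entries: HashMap<i64, V>,
}

impl<V> EntityCache<V> {
    fn new() -> Self { Self { entries: HashMap::new() } }
    fn insert(&mut self, id: i64, value: V) { self.entries.insert(id, value); }
    fn get_mut(&mut self, id: i64) -> Option<&mut V> { self.entries.get_mut(&id) }
    fn clear(&mut self) { self.entries.clear(); }
}

fn main() {
    let mut cache = EntityCache::new();
    cache.insert(42, "issue payload".to_string());
    if let Some(v) = cache.get_mut(42) {
        v.push_str(" (updated)"); // e.g. updating after DiscussionsLoaded
    }
    cache.clear();
}
```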
teernisse
026b3f0754 feat(tui): responsive breakpoints for detail views (bd-a6yb)
Apply breakpoint-aware layout to issue_detail and mr_detail views:
- Issue detail: hide labels on Xs, hide assignees on Xs/Sm, skip milestone row on Xs
- MR detail: hide branch names and merge status on Xs/Sm
- Issue detail allocate_sections gives description 60% on wide (Lg+) vs 40% narrow
- Add responsive tests for both detail views
- Close bd-a6yb: all TUI screens now adapt to terminal width

760 lib tests pass, clippy clean.
2026-02-19 00:10:43 -05:00
teernisse
ae1c3e3b05 chore: update beads tracking
Sync beads issue database to reflect current project state.
2026-02-18 23:59:40 -05:00
teernisse
bbfcfa2082 fix(tui): bounds-check scope picker selected index
Replace direct indexing (self.projects[self.selected_index - 1]) with
.get() to prevent panic if selected_index is somehow out of bounds.
Falls back to "All Projects" scope when the index is invalid instead
of panicking.
2026-02-18 23:59:11 -05:00
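The fix pattern above (panicking index replaced with `.get()` plus a fallback) looks roughly like this. Names and the index convention are illustrative, not the project's exact code:

```rust
/// Resolve the scope for a 1-based selection; index 0 means "All
/// Projects". `.checked_sub` + `.get()` make every out-of-bounds index
/// fall back instead of panicking.
fn scope_for(projects: &[String], selected_index: usize) -> String {
    selected_index
        .checked_sub(1)
        .and_then(|i| projects.get(i))
        .cloned()
        .unwrap_or_else(|| "All Projects".to_string())
}

fn main() {
    let projects = vec!["group/repo".to_string()];
    println!("{}", scope_for(&projects, 1));
    println!("{}", scope_for(&projects, 99)); // invalid index: falls back
}
```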
teernisse
45a989637c feat(tui): add per-screen responsive layout helpers
Introduce breakpoint-aware helper functions in layout.rs that
centralize per-screen responsive decisions. Each function maps a
Breakpoint to a screen-specific value, replacing scattered
hardcoded checks across view modules:

- detail_side_panel: show discussions side panel at Lg+
- info_screen_columns: 1 column on Xs/Sm, 2 on Md+
- search_show_project: hide project path column on narrow terminals
- timeline_time_width: compact time on Xs (5), full on Md+ (12)
- who_abbreviated_tabs: shorten tab labels on Xs/Sm
- sync_progress_bar_width: scale progress bar 15→50 with width

All functions are const fn with exhaustive match arms.
Includes 6 unit tests covering every breakpoint variant.
2026-02-18 23:59:04 -05:00
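One of the helpers above can be sketched to show the "const fn with exhaustive match" shape. The Xs width (5) and Md+ width (12) come from the commit message; the Sm value and the exact Breakpoint variants are assumptions:

```rust
/// Breakpoint variants are assumed from the commit text (Xs/Sm/Md/Lg).
enum Breakpoint { Xs, Sm, Md, Lg }

/// Compact timestamp column on narrow terminals, full width on Md+.
const fn timeline_time_width(bp: Breakpoint) -> u16 {
    match bp {
        Breakpoint::Xs | Breakpoint::Sm => 5, // Sm value is an assumption
        Breakpoint::Md | Breakpoint::Lg => 12,
    }
}

fn main() {
    println!("{}", timeline_time_width(Breakpoint::Xs));
    println!("{}", timeline_time_width(Breakpoint::Lg));
}
```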
teernisse
1b66b80ac4 style(tui): apply rustfmt and clippy formatting across crate
Mechanical formatting pass to satisfy rustfmt line-width limits and
clippy pedantic/nursery lints. No behavioral changes.

Formatting (rustfmt line wrapping):
- action/sync.rs: multiline tuple destructure, function call args in tests
- state/sync.rs: if-let chain formatting, remove unnecessary Vec collect
- view/sync.rs: multiline array entries, format!(), vec! literals
- view/doctor.rs: multiline floor_char_boundary chain
- view/scope_picker.rs: multiline format!() with floor_char_boundary
- view/stats.rs: multiline render_stat_row call
- view/mod.rs: multiline assert!() in test
- app/update.rs: multiline enum variant destructure
- entity_cache.rs: multiline assert_eq!() with messages
- render_cache.rs: multiline retain() closure
- session.rs: multiline serde_json/File::create/parent() chains

Clippy:
- action/sync.rs: #[allow(clippy::too_many_arguments)] on test helper

Import/module ordering (alphabetical):
- state/mod.rs: move scope_picker mod + pub use to sorted position
- view/mod.rs: move scope_picker, stats, sync mod + use to sorted position
- view/scope_picker.rs: sort use imports (ScopeContext before ScopePickerState)
2026-02-18 23:58:29 -05:00
teernisse
09ffcfcf0f refactor(tui): deduplicate cursor_cell_offset into text_width module
Four view modules (search, command_palette, file_history, trace) each had
their own copy of cursor_cell_offset / text_cell_width for converting a
byte-offset cursor position to a display-column offset. Phase 5 introduced
a proper text_width module with these functions; this commit removes the
duplicates and rewires all call sites to use crate::text_width.

- search.rs: removed local text_cell_width + cursor_cell_offset definitions
- command_palette.rs: removed local cursor_cell_offset definition
- file_history.rs: replaced inline chars().count() cursor calc with import
- trace.rs: replaced inline chars().count() cursor calc with import
2026-02-18 23:58:13 -05:00
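The core idea behind `cursor_cell_offset` (a byte offset into the input string is not a display column) can be shown with a simplified version. The real text_width module measures display cells with unicode-width; this sketch counts chars as width 1, which is wrong for CJK and emoji but demonstrates why the conversion exists:

```rust
/// Simplified cursor_cell_offset: convert a byte offset into a column
/// offset by counting chars. `byte_offset` must fall on a char boundary.
fn cursor_cell_offset(text: &str, byte_offset: usize) -> usize {
    text[..byte_offset].chars().count()
}

fn main() {
    let s = "héllo"; // 'é' is 2 bytes but occupies 1 column
    println!("{}", cursor_cell_offset(s, 3)); // byte 3 -> column 2
}
```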
teernisse
146eb61623 feat(tui): Phase 4 completion + Phase 5 session/lock/text-width
Phase 4 (bd-1df9) — all 5 acceptance criteria met:
- Sync screen with delta ledger (bd-2x2h, bd-y095)
- Doctor screen with health checks (bd-2iqk)
- Stats screen with document counts (bd-2iqk)
- CLI integration: lore tui subcommand (bd-26lp)
- CLI integration: lore sync --tui flag (bd-3l56)

Phase 5 (bd-3h00) — session persistence + instance lock + text width:
- text_width.rs: Unicode-aware measurement, truncation, padding (16 tests)
- instance_lock.rs: Advisory PID lock with stale recovery (6 tests)
- session.rs: Atomic write + CRC32 checksum + quarantine (9 tests)

Closes: bd-26lp, bd-3h00, bd-3l56, bd-1df9, bd-y095
2026-02-18 23:51:54 -05:00
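The atomic-write half of the session persistence above is a standard write-temp-then-rename pattern, sketchable with std alone. The real session.rs also embeds a CRC32 checksum (via crc32fast) and quarantines corrupt files, both omitted here:

```rust
use std::{fs, io, path::Path};

/// Write to a sibling temp file, then rename over the target so readers
/// never observe a torn file. A crash can leave a stray .tmp behind but
/// never a half-written target; rename is atomic on POSIX filesystems.
fn atomic_write(path: &Path, bytes: &[u8]) -> io::Result<()> {
    let tmp = path.with_extension("tmp");
    fs::write(&tmp, bytes)?;
    fs::rename(&tmp, path)
}

fn main() -> io::Result<()> {
    let target = std::env::temp_dir().join("session.json");
    atomic_write(&target, b"{\"screen\":\"dashboard\"}")?;
    println!("{}", fs::read_to_string(&target)?);
    Ok(())
}
```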
teernisse
418417b0f4 fix(tui): correct column names in file_history action queries + update beads
- file_history.rs: Fix SQL column references to match actual schema
  (position_new_path/position_old_path naming).

- beads: Update issue tracker state.
2026-02-18 22:58:14 -05:00
108 changed files with 23289 additions and 2624 deletions

File diff suppressed because one or more lines are too long



@@ -1 +1 @@
-bd-2og9
+bd-1au9


@@ -655,9 +655,40 @@ lore --robot sync
# Run sync without resource events
lore --robot sync --no-events
# Run sync without MR file change fetching
lore --robot sync --no-file-changes
# Surgical sync: specific entities by IID
lore --robot sync --issue 42 -p group/repo
lore --robot sync --mr 99 --mr 100 -p group/repo
# Run ingestion only
lore --robot ingest issues
# Trace why code was introduced
lore --robot trace src/main.rs -p group/repo
# File-level MR history
lore --robot file-history src/auth/ -p group/repo
# Chronological timeline of events
lore --robot timeline "authentication" --since 30d
lore --robot timeline issue:42
# Find semantically related entities
lore --robot related issues 42 -n 5
lore --robot related "authentication bug"
# Detect discussion divergence from original intent
lore --robot drift issues 42 --threshold 0.4
# People intelligence
lore --robot who src/features/auth/
lore --robot who @username --reviews
# Count references (cross-reference statistics)
lore --robot count references
# Check environment health
lore --robot doctor

Cargo.lock generated

@@ -485,6 +485,12 @@ dependencies = [
"litrs",
]
[[package]]
name = "either"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719"
[[package]]
name = "encode_unicode"
version = "1.0.0"
@@ -500,6 +506,12 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "env_home"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7f84e12ccf0a7ddc17a6c41c93326024c42920d7ee630d04950e6926645c0fe"
[[package]]
name = "equivalent"
version = "1.0.2"
@@ -1191,6 +1203,7 @@ dependencies = [
"url",
"urlencoding",
"uuid",
"which",
"wiremock",
]
@@ -2507,6 +2520,18 @@ dependencies = [
"wasm-bindgen",
]
[[package]]
name = "which"
version = "7.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d643ce3fd3e5b54854602a080f34fb10ab75e0b813ee32d00ca2b44fa74762"
dependencies = [
"either",
"env_home",
"rustix",
"winsafe",
]
[[package]]
name = "winapi"
version = "0.3.9"
@@ -2764,6 +2789,12 @@ dependencies = [
"memchr",
]
[[package]]
name = "winsafe"
version = "0.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d135d17ab770252ad95e9a872d365cf3090e3be864a34ab46f48555993efc904"
[[package]]
name = "wiremock"
version = "0.6.5"


@@ -49,6 +49,7 @@ httpdate = "1"
uuid = { version = "1", features = ["v4"] }
regex = "1"
strsim = "0.11"
which = "7"
[target.'cfg(unix)'.dependencies]
libc = "0.2"


@@ -485,6 +485,12 @@ dependencies = [
"litrs",
]
[[package]]
name = "either"
version = "1.15.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "48c757948c5ede0e46177b7add2e67155f70e33c07fea8284df6576da70b3719"
[[package]]
name = "encode_unicode"
version = "1.0.0"
@@ -500,6 +506,12 @@ dependencies = [
"cfg-if",
]
[[package]]
name = "env_home"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c7f84e12ccf0a7ddc17a6c41c93326024c42920d7ee630d04950e6926645c0fe"
[[package]]
name = "equivalent"
version = "1.0.2"
@@ -1336,6 +1348,7 @@ dependencies = [
"url",
"urlencoding",
"uuid",
"which",
]
[[package]]
@@ -1345,6 +1358,7 @@ dependencies = [
"anyhow",
"chrono",
"clap",
"crc32fast",
"crossterm 0.28.1",
"dirs",
"ftui",
@@ -1354,6 +1368,8 @@ dependencies = [
"serde",
"serde_json",
"tempfile",
"unicode-segmentation",
"unicode-width 0.2.2",
]
[[package]]
@@ -2782,6 +2798,18 @@ dependencies = [
"wasm-bindgen",
]
[[package]]
name = "which"
version = "7.0.3"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "24d643ce3fd3e5b54854602a080f34fb10ab75e0b813ee32d00ca2b44fa74762"
dependencies = [
"either",
"env_home",
"rustix 1.1.3",
"winsafe",
]
[[package]]
name = "winapi"
version = "0.3.9"
@@ -3048,6 +3076,12 @@ dependencies = [
"memchr",
]
[[package]]
name = "winsafe"
version = "0.0.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d135d17ab770252ad95e9a872d365cf3090e3be864a34ab46f48555993efc904"
[[package]]
name = "wit-bindgen"
version = "0.51.0"


@@ -42,5 +42,12 @@ serde_json = "1"
# Regex (used by safety module for PII/secret redaction)
regex = "1"
# Unicode text measurement
unicode-width = "0.2"
unicode-segmentation = "1"
# Session persistence (CRC32 checksum)
crc32fast = "1"
[dev-dependencies]
tempfile = "3"


@@ -0,0 +1,61 @@
# Terminal Compatibility Matrix
Manual verification checklist for lore-tui rendering across terminal emulators.
**How to use:** Run `cargo run -p lore-tui` in each terminal, navigate through
all screens, and mark each cell with one of:
- OK — works correctly
- PARTIAL — works with minor visual glitches (describe in Notes)
- FAIL — broken or unusable (describe in Notes)
- N/T — not tested
Last verified: _not yet_
## Rendering Features
| Feature | iTerm2 | tmux | Alacritty | kitty | WezTerm |
|----------------------|--------|------|-----------|-------|---------|
| True color (RGB) | | | | | |
| Unicode box-drawing | | | | | |
| CJK wide characters | | | | | |
| Bold text | | | | | |
| Italic text | | | | | |
| Underline | | | | | |
| Dim / faint | | | | | |
| Strikethrough | | | | | |
## Interaction Features
| Feature | iTerm2 | tmux | Alacritty | kitty | WezTerm |
|----------------------|--------|------|-----------|-------|---------|
| Keyboard input | | | | | |
| Mouse click | | | | | |
| Mouse scroll | | | | | |
| Resize handling | | | | | |
| Alt screen toggle | | | | | |
| Bracketed paste | | | | | |
## Screen-Specific Checks
| Screen | iTerm2 | tmux | Alacritty | kitty | WezTerm |
|----------------------|--------|------|-----------|-------|---------|
| Dashboard | | | | | |
| Issue list | | | | | |
| Issue detail | | | | | |
| MR list | | | | | |
| MR detail | | | | | |
| Search | | | | | |
| Command palette | | | | | |
| Help overlay | | | | | |
## Minimum Sizes
| Terminal size | Renders correctly? | Notes |
|---------------|-------------------|-------|
| 80x24 | | |
| 120x40 | | |
| 200x60 | | |
## Notes
_Record any issues, workarounds, or version-specific quirks here._


@@ -28,8 +28,8 @@ pub fn check_schema_version(conn: &Connection, minimum: i32) -> SchemaCheck {
return SchemaCheck::NoDB;
}
-// Read the current version.
-match conn.query_row("SELECT version FROM schema_version LIMIT 1", [], |r| {
+// Read the highest version (one row per migration).
+match conn.query_row("SELECT MAX(version) FROM schema_version", [], |r| {
r.get::<_, i32>(0)
}) {
Ok(version) if version >= minimum => SchemaCheck::Compatible { version },
@@ -65,7 +65,7 @@ pub fn check_data_readiness(conn: &Connection) -> Result<DataReadiness> {
.unwrap_or(false);
let schema_version = conn
-.query_row("SELECT version FROM schema_version LIMIT 1", [], |r| {
+.query_row("SELECT MAX(version) FROM schema_version", [], |r| {
r.get::<_, i32>(0)
})
.unwrap_or(0);
@@ -247,6 +247,24 @@ mod tests {
assert!(matches!(result, SchemaCheck::NoDB));
}
#[test]
fn test_schema_preflight_multiple_migration_rows() {
let conn = Connection::open_in_memory().unwrap();
conn.execute_batch(
"CREATE TABLE schema_version (version INTEGER, applied_at INTEGER, description TEXT);
INSERT INTO schema_version VALUES (1, 0, 'Initial');
INSERT INTO schema_version VALUES (2, 0, 'Second');
INSERT INTO schema_version VALUES (27, 0, 'Latest');",
)
.unwrap();
let result = check_schema_version(&conn, 20);
assert!(
matches!(result, SchemaCheck::Compatible { version: 27 }),
"should use MAX(version), not first row: {result:?}"
);
}
#[test]
fn test_check_data_readiness_empty() {
let conn = Connection::open_in_memory().unwrap();


@@ -96,8 +96,7 @@ pub fn fetch_file_history(
merge_commit_sha: row.get(7)?,
})
})?
-.filter_map(std::result::Result::ok)
-.collect();
+.collect::<std::result::Result<Vec<_>, _>>()?;
let total_mrs = merge_requests.len();
@@ -135,10 +134,10 @@ fn fetch_file_discussions(
};
let sql = format!(
-"SELECT d.gitlab_discussion_id, n.author_username, n.body, n.new_path, n.created_at \
+"SELECT d.gitlab_discussion_id, n.author_username, n.body, n.position_new_path, n.created_at \
FROM notes n \
JOIN discussions d ON d.id = n.discussion_id \
-WHERE n.new_path IN ({in_clause}) {project_filter} \
+WHERE n.position_new_path IN ({in_clause}) {project_filter} \
AND n.is_system = 0 \
ORDER BY n.created_at DESC \
LIMIT 50"
@@ -170,8 +169,7 @@ fn fetch_file_discussions(
created_at_ms: row.get(4)?,
})
})?
-.filter_map(std::result::Result::ok)
-.collect();
+.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(discussions)
}
@@ -187,12 +185,10 @@ pub fn fetch_file_history_paths(conn: &Connection, project_id: Option<i64>) -> R
let mut stmt = conn.prepare(sql)?;
let paths: Vec<String> = if let Some(pid) = project_id {
stmt.query_map([pid], |row| row.get(0))?
-.filter_map(std::result::Result::ok)
-.collect()
+.collect::<std::result::Result<Vec<_>, _>>()?
} else {
stmt.query_map([], |row| row.get(0))?
-.filter_map(std::result::Result::ok)
-.collect()
+.collect::<std::result::Result<Vec<_>, _>>()?
};
Ok(paths)
@@ -258,8 +254,8 @@ mod tests {
author_username TEXT,
body TEXT,
note_type TEXT,
-new_path TEXT,
-old_path TEXT,
+position_new_path TEXT,
+position_old_path TEXT,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL,
last_seen_at INTEGER NOT NULL


@@ -12,6 +12,7 @@ mod issue_list;
mod mr_detail;
mod mr_list;
mod search;
mod sync;
mod timeline;
mod trace;
mod who;
@@ -24,6 +25,7 @@ pub use issue_list::*;
pub use mr_detail::*;
pub use mr_list::*;
pub use search::*;
pub use sync::*;
pub use timeline::*;
pub use trace::*;
pub use who::*;


@@ -0,0 +1,668 @@
#![allow(dead_code)]
//! Sync screen actions — query sync run history and detect running syncs.
//!
//! With cron-driven syncs as the primary mechanism, the TUI's sync screen
//! acts as a status dashboard. These pure query functions read `sync_runs`
//! and `projects` to populate the screen.
use anyhow::{Context, Result};
use rusqlite::Connection;
use crate::clock::Clock;
/// How many recent runs to display in the sync history.
const HISTORY_LIMIT: usize = 10;
/// If a "running" sync hasn't heartbeated in this many milliseconds,
/// consider it stale (likely crashed).
const STALE_HEARTBEAT_MS: i64 = 120_000; // 2 minutes
// ---------------------------------------------------------------------------
// Data types
// ---------------------------------------------------------------------------
/// Overview data for the sync screen.
#[derive(Debug, Default)]
pub struct SyncOverview {
/// Info about a currently running sync, if any.
pub running: Option<RunningSyncInfo>,
/// Most recent completed (succeeded or failed) run.
pub last_completed: Option<SyncRunInfo>,
/// Recent sync run history (newest first).
pub recent_runs: Vec<SyncRunInfo>,
/// Configured project paths.
pub projects: Vec<String>,
}
/// A sync that is currently in progress.
#[derive(Debug, Clone)]
pub struct RunningSyncInfo {
/// Row ID in sync_runs.
pub id: i64,
/// When this sync started (ms epoch).
pub started_at: i64,
/// Last heartbeat (ms epoch).
pub heartbeat_at: i64,
/// How long it's been running (ms).
pub elapsed_ms: u64,
/// Whether the heartbeat is stale (sync may have crashed).
pub stale: bool,
/// Items processed so far.
pub items_processed: u64,
}
/// Summary of a single sync run.
#[derive(Debug, Clone)]
pub struct SyncRunInfo {
/// Row ID in sync_runs.
pub id: i64,
/// 'succeeded', 'failed', or 'running'.
pub status: String,
/// The command that was run (e.g., 'sync', 'ingest issues').
pub command: String,
/// When this sync started (ms epoch).
pub started_at: i64,
/// When this sync finished (ms epoch), if completed.
pub finished_at: Option<i64>,
/// Duration in ms (computed from started_at/finished_at).
pub duration_ms: Option<u64>,
/// Total items processed.
pub items_processed: u64,
/// Total errors encountered.
pub errors: u64,
/// Error message if the run failed.
pub error: Option<String>,
/// Correlation ID for log matching.
pub run_id: Option<String>,
}
// ---------------------------------------------------------------------------
// Public API
// ---------------------------------------------------------------------------
/// Fetch the complete sync overview for the sync screen.
///
/// Combines running sync detection, last completed run, recent history,
/// and configured projects into a single struct.
pub fn fetch_sync_overview(conn: &Connection, clock: &dyn Clock) -> Result<SyncOverview> {
let running = detect_running_sync(conn, clock)?;
let recent_runs = fetch_recent_runs(conn, HISTORY_LIMIT)?;
let last_completed = recent_runs
.iter()
.find(|r| r.status == "succeeded" || r.status == "failed")
.cloned();
let projects = fetch_configured_projects(conn)?;
Ok(SyncOverview {
running,
last_completed,
recent_runs,
projects,
})
}
/// Detect a currently running sync from the `sync_runs` table.
///
/// A sync is considered "running" if `status = 'running'`. It's marked
/// stale if the heartbeat is older than [`STALE_HEARTBEAT_MS`].
pub fn detect_running_sync(
conn: &Connection,
clock: &dyn Clock,
) -> Result<Option<RunningSyncInfo>> {
let result = conn.query_row(
"SELECT id, started_at, heartbeat_at, total_items_processed
FROM sync_runs
WHERE status = 'running'
ORDER BY id DESC
LIMIT 1",
[],
|row| {
let id: i64 = row.get(0)?;
let started_at: i64 = row.get(1)?;
let heartbeat_at: i64 = row.get(2)?;
let items: Option<i64> = row.get(3)?;
Ok((id, started_at, heartbeat_at, items.unwrap_or(0)))
},
);
match result {
Ok((id, started_at, heartbeat_at, items)) => {
let now = clock.now_ms();
let elapsed_ms = now.saturating_sub(started_at);
let stale = (now - heartbeat_at) > STALE_HEARTBEAT_MS;
#[allow(clippy::cast_sign_loss)]
Ok(Some(RunningSyncInfo {
id,
started_at,
heartbeat_at,
elapsed_ms: elapsed_ms as u64,
stale,
items_processed: items as u64,
}))
}
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
Err(e) => Err(e).context("detecting running sync"),
}
}
/// Fetch recent sync runs (newest first).
pub fn fetch_recent_runs(conn: &Connection, limit: usize) -> Result<Vec<SyncRunInfo>> {
let mut stmt = conn
.prepare(
"SELECT id, status, command, started_at, finished_at,
total_items_processed, total_errors, error, run_id
FROM sync_runs
ORDER BY id DESC
LIMIT ?1",
)
.context("preparing sync runs query")?;
let rows = stmt
.query_map([limit as i64], |row| {
let id: i64 = row.get(0)?;
let status: String = row.get(1)?;
let command: String = row.get(2)?;
let started_at: i64 = row.get(3)?;
let finished_at: Option<i64> = row.get(4)?;
let items: Option<i64> = row.get(5)?;
let errors: Option<i64> = row.get(6)?;
let error: Option<String> = row.get(7)?;
let run_id: Option<String> = row.get(8)?;
Ok((
id,
status,
command,
started_at,
finished_at,
items,
errors,
error,
run_id,
))
})
.context("querying sync runs")?;
let mut result = Vec::new();
for row in rows {
let (id, status, command, started_at, finished_at, items, errors, error, run_id) =
row.context("reading sync run row")?;
#[allow(clippy::cast_sign_loss)]
let duration_ms = finished_at.map(|f| (f - started_at) as u64);
#[allow(clippy::cast_sign_loss)]
result.push(SyncRunInfo {
id,
status,
command,
started_at,
finished_at,
duration_ms,
items_processed: items.unwrap_or(0) as u64,
errors: errors.unwrap_or(0) as u64,
error,
run_id,
});
}
Ok(result)
}
/// Fetch configured project paths from the `projects` table.
pub fn fetch_configured_projects(conn: &Connection) -> Result<Vec<String>> {
let mut stmt = conn
.prepare("SELECT path_with_namespace FROM projects ORDER BY path_with_namespace")
.context("preparing projects query")?;
let rows = stmt
.query_map([], |row| row.get::<_, String>(0))
.context("querying projects")?;
let mut result = Vec::new();
for row in rows {
result.push(row.context("reading project row")?);
}
Ok(result)
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
use crate::clock::FakeClock;
/// Create the minimal schema needed for sync queries.
fn create_sync_schema(conn: &Connection) {
conn.execute_batch(
"
CREATE TABLE projects (
id INTEGER PRIMARY KEY,
gitlab_project_id INTEGER UNIQUE NOT NULL,
path_with_namespace TEXT NOT NULL
);
CREATE TABLE sync_runs (
id INTEGER PRIMARY KEY,
started_at INTEGER NOT NULL,
heartbeat_at INTEGER NOT NULL,
finished_at INTEGER,
status TEXT NOT NULL,
command TEXT NOT NULL,
error TEXT,
metrics_json TEXT,
run_id TEXT,
total_items_processed INTEGER DEFAULT 0,
total_errors INTEGER DEFAULT 0
);
",
)
.expect("create sync schema");
}
fn insert_project(conn: &Connection, id: i64, path: &str) {
conn.execute(
"INSERT INTO projects (id, gitlab_project_id, path_with_namespace)
VALUES (?1, ?2, ?3)",
rusqlite::params![id, id * 100, path],
)
.expect("insert project");
}
#[allow(clippy::too_many_arguments)]
fn insert_sync_run(
conn: &Connection,
started_at: i64,
finished_at: Option<i64>,
status: &str,
command: &str,
items: i64,
errors: i64,
error: Option<&str>,
) -> i64 {
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, finished_at, status, command,
total_items_processed, total_errors, error)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)",
rusqlite::params![
started_at,
finished_at.unwrap_or(started_at),
finished_at,
status,
command,
items,
errors,
error,
],
)
.expect("insert sync run");
conn.last_insert_rowid()
}
// -----------------------------------------------------------------------
// detect_running_sync
// -----------------------------------------------------------------------
#[test]
fn test_detect_running_sync_none_when_empty() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let clock = FakeClock::from_ms(1_700_000_000_000);
let result = detect_running_sync(&conn, &clock).unwrap();
assert!(result.is_none());
}
#[test]
fn test_detect_running_sync_none_when_all_completed() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
insert_sync_run(
&conn,
now - 60_000,
Some(now - 30_000),
"succeeded",
"sync",
100,
0,
None,
);
insert_sync_run(
&conn,
now - 120_000,
Some(now - 90_000),
"failed",
"sync",
50,
2,
Some("timeout"),
);
let clock = FakeClock::from_ms(now);
let result = detect_running_sync(&conn, &clock).unwrap();
assert!(result.is_none());
}
#[test]
fn test_detect_running_sync_found() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
let started = now - 30_000; // 30 seconds ago
// Heartbeat is recent (5 seconds ago), so the run must not read as stale.
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, status, command, total_items_processed)
VALUES (?1, ?2, 'running', 'sync', 42)",
[started, now - 5_000], // heartbeat 5 seconds ago
)
.unwrap();
let clock = FakeClock::from_ms(now);
let running = detect_running_sync(&conn, &clock).unwrap().unwrap();
assert_eq!(running.elapsed_ms, 30_000);
assert_eq!(running.items_processed, 42);
assert!(!running.stale);
}
#[test]
fn test_detect_running_sync_stale_heartbeat() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
let started = now - 300_000; // 5 minutes ago
// Heartbeat 3 minutes ago — stale
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, status, command)
VALUES (?1, ?2, 'running', 'sync')",
[started, now - 180_000],
)
.unwrap();
let clock = FakeClock::from_ms(now);
let running = detect_running_sync(&conn, &clock).unwrap().unwrap();
assert!(running.stale);
assert_eq!(running.elapsed_ms, 300_000);
}
// -----------------------------------------------------------------------
// fetch_recent_runs
// -----------------------------------------------------------------------
#[test]
fn test_fetch_recent_runs_empty() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let runs = fetch_recent_runs(&conn, 10).unwrap();
assert!(runs.is_empty());
}
#[test]
fn test_fetch_recent_runs_ordered_newest_first() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
insert_sync_run(
&conn,
now - 120_000,
Some(now - 90_000),
"succeeded",
"sync",
100,
0,
None,
);
insert_sync_run(
&conn,
now - 60_000,
Some(now - 30_000),
"succeeded",
"sync",
200,
0,
None,
);
let runs = fetch_recent_runs(&conn, 10).unwrap();
assert_eq!(runs.len(), 2);
// Newest first (higher id)
assert_eq!(runs[0].items_processed, 200);
assert_eq!(runs[1].items_processed, 100);
}
#[test]
fn test_fetch_recent_runs_respects_limit() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
for i in 0..5 {
insert_sync_run(
&conn,
now - (5 - i) * 60_000,
Some(now - (5 - i) * 60_000 + 30_000),
"succeeded",
"sync",
i * 10,
0,
None,
);
}
let runs = fetch_recent_runs(&conn, 3).unwrap();
assert_eq!(runs.len(), 3);
}
#[test]
fn test_fetch_recent_runs_duration_computed() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
insert_sync_run(
&conn,
now - 60_000,
Some(now - 15_000),
"succeeded",
"sync",
0,
0,
None,
);
let runs = fetch_recent_runs(&conn, 10).unwrap();
assert_eq!(runs[0].duration_ms, Some(45_000));
}
#[test]
fn test_fetch_recent_runs_running_no_duration() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
insert_sync_run(&conn, now - 60_000, None, "running", "sync", 0, 0, None);
let runs = fetch_recent_runs(&conn, 10).unwrap();
assert_eq!(runs[0].status, "running");
assert!(runs[0].duration_ms.is_none());
}
#[test]
fn test_fetch_recent_runs_failed_with_error() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
insert_sync_run(
&conn,
now - 60_000,
Some(now - 30_000),
"failed",
"sync",
50,
3,
Some("network timeout"),
);
let runs = fetch_recent_runs(&conn, 10).unwrap();
assert_eq!(runs[0].status, "failed");
assert_eq!(runs[0].errors, 3);
assert_eq!(runs[0].error.as_deref(), Some("network timeout"));
}
// -----------------------------------------------------------------------
// fetch_configured_projects
// -----------------------------------------------------------------------
#[test]
fn test_fetch_configured_projects_empty() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let projects = fetch_configured_projects(&conn).unwrap();
assert!(projects.is_empty());
}
#[test]
fn test_fetch_configured_projects_sorted() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
insert_project(&conn, 1, "group/beta");
insert_project(&conn, 2, "group/alpha");
insert_project(&conn, 3, "other/gamma");
let projects = fetch_configured_projects(&conn).unwrap();
assert_eq!(projects, vec!["group/alpha", "group/beta", "other/gamma"]);
}
// -----------------------------------------------------------------------
// fetch_sync_overview (integration)
// -----------------------------------------------------------------------
#[test]
fn test_fetch_sync_overview_empty_db() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let clock = FakeClock::from_ms(1_700_000_000_000);
let overview = fetch_sync_overview(&conn, &clock).unwrap();
assert!(overview.running.is_none());
assert!(overview.last_completed.is_none());
assert!(overview.recent_runs.is_empty());
assert!(overview.projects.is_empty());
}
#[test]
fn test_fetch_sync_overview_with_history() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
insert_project(&conn, 1, "group/repo");
insert_sync_run(
&conn,
now - 120_000,
Some(now - 90_000),
"succeeded",
"sync",
150,
0,
None,
);
insert_sync_run(
&conn,
now - 60_000,
Some(now - 30_000),
"failed",
"sync",
50,
2,
Some("db locked"),
);
let clock = FakeClock::from_ms(now);
let overview = fetch_sync_overview(&conn, &clock).unwrap();
assert!(overview.running.is_none());
assert_eq!(overview.recent_runs.len(), 2);
assert_eq!(overview.projects, vec!["group/repo"]);
// last_completed should be the newest completed run (failed, id=2)
let last = overview.last_completed.unwrap();
assert_eq!(last.status, "failed");
assert_eq!(last.errors, 2);
}
#[test]
fn test_fetch_sync_overview_with_running_sync() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
insert_project(&conn, 1, "group/repo");
// A completed run.
insert_sync_run(
&conn,
now - 600_000,
Some(now - 570_000),
"succeeded",
"sync",
200,
0,
None,
);
// A currently running sync.
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, status, command, total_items_processed)
VALUES (?1, ?2, 'running', 'sync', 75)",
[now - 20_000, now - 2_000],
)
.unwrap();
let clock = FakeClock::from_ms(now);
let overview = fetch_sync_overview(&conn, &clock).unwrap();
assert!(overview.running.is_some());
let running = overview.running.unwrap();
assert_eq!(running.elapsed_ms, 20_000);
assert_eq!(running.items_processed, 75);
assert!(!running.stale);
// last_completed should find the succeeded run, not the running one.
let last = overview.last_completed.unwrap();
assert_eq!(last.status, "succeeded");
assert_eq!(last.items_processed, 200);
}
#[test]
fn test_sync_run_info_with_run_id() {
let conn = Connection::open_in_memory().unwrap();
create_sync_schema(&conn);
let now = 1_700_000_000_000_i64;
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, finished_at, status, command,
total_items_processed, total_errors, run_id)
VALUES (?1, ?1, ?2, 'succeeded', 'sync', 100, 0, 'abc-123')",
[now - 60_000, now - 30_000],
)
.unwrap();
let runs = fetch_recent_runs(&conn, 10).unwrap();
assert_eq!(runs[0].run_id.as_deref(), Some("abc-123"));
}
}
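The staleness rule these tests pin down can be sketched standalone. The threshold constant below is an assumption — the tests only constrain it to lie between 5 seconds (fresh) and 3 minutes (stale) — and `elapsed_and_stale` is a hypothetical helper, not the real `detect_running_sync`:

```rust
/// Assumed staleness threshold; the tests above only pin it between
/// 5_000 ms (not stale) and 180_000 ms (stale).
const STALE_THRESHOLD_MS: i64 = 120_000;

/// Compute elapsed time since start and whether the heartbeat is stale.
fn elapsed_and_stale(now_ms: i64, started_at: i64, heartbeat_at: i64) -> (i64, bool) {
    let elapsed_ms = now_ms - started_at;
    let stale = now_ms - heartbeat_at > STALE_THRESHOLD_MS;
    (elapsed_ms, stale)
}

fn main() {
    let now = 1_700_000_000_000_i64;
    // Mirrors test_detect_running_sync_found: heartbeat 5s ago, fresh.
    assert_eq!(elapsed_and_stale(now, now - 30_000, now - 5_000), (30_000, false));
    // Mirrors test_detect_running_sync_stale_heartbeat: heartbeat 3min ago, stale.
    assert_eq!(elapsed_and_stale(now, now - 300_000, now - 180_000), (300_000, true));
}
```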


@@ -64,10 +64,12 @@ pub fn fetch_timeline_events(
let filter = resolve_timeline_scope(conn, scope)?;
let mut events = Vec::new();
collect_tl_created_events(conn, &filter, &mut events)?;
collect_tl_state_events(conn, &filter, &mut events)?;
collect_tl_label_events(conn, &filter, &mut events)?;
collect_tl_milestone_events(conn, &filter, &mut events)?;
// Each collector is given the full limit. After merge-sorting, we truncate
// to `limit`. Worst case we hold 4*limit events in memory (bounded).
collect_tl_created_events(conn, &filter, limit, &mut events)?;
collect_tl_state_events(conn, &filter, limit, &mut events)?;
collect_tl_label_events(conn, &filter, limit, &mut events)?;
collect_tl_milestone_events(conn, &filter, limit, &mut events)?;
// Sort by timestamp descending (most recent first), with stable tiebreak.
events.sort_by(|a, b| {
@@ -85,11 +87,12 @@ pub fn fetch_timeline_events(
fn collect_tl_created_events(
conn: &Connection,
filter: &TimelineFilter,
limit: usize,
events: &mut Vec<TimelineEvent>,
) -> Result<()> {
// Issue created events.
if !matches!(filter, TimelineFilter::MergeRequest(_)) {
let (where_clause, params) = match filter {
let (where_clause, mut params) = match filter {
TimelineFilter::All => (
"1=1".to_string(),
Vec::<Box<dyn rusqlite::types::ToSql>>::new(),
@@ -105,12 +108,16 @@ fn collect_tl_created_events(
TimelineFilter::MergeRequest(_) => unreachable!(),
};
let limit_param = params.len() + 1;
let sql = format!(
"SELECT i.created_at, i.iid, i.title, i.author_username, i.project_id, p.path_with_namespace
FROM issues i
JOIN projects p ON p.id = i.project_id
WHERE {where_clause}"
WHERE {where_clause}
ORDER BY i.created_at DESC
LIMIT ?{limit_param}"
);
params.push(Box::new(limit as i64));
let mut stmt = conn
.prepare(&sql)
@@ -148,7 +155,7 @@ fn collect_tl_created_events(
// MR created events.
if !matches!(filter, TimelineFilter::Issue(_)) {
let (where_clause, params) = match filter {
let (where_clause, mut params) = match filter {
TimelineFilter::All => (
"1=1".to_string(),
Vec::<Box<dyn rusqlite::types::ToSql>>::new(),
@@ -164,12 +171,16 @@ fn collect_tl_created_events(
TimelineFilter::Issue(_) => unreachable!(),
};
let limit_param = params.len() + 1;
let sql = format!(
"SELECT mr.created_at, mr.iid, mr.title, mr.author_username, mr.project_id, p.path_with_namespace
FROM merge_requests mr
JOIN projects p ON p.id = mr.project_id
WHERE {where_clause}"
WHERE {where_clause}
ORDER BY mr.created_at DESC
LIMIT ?{limit_param}"
);
params.push(Box::new(limit as i64));
let mut stmt = conn.prepare(&sql).context("preparing MR created query")?;
let param_refs: Vec<&dyn rusqlite::types::ToSql> =
@@ -252,9 +263,11 @@ fn resolve_event_entity(
fn collect_tl_state_events(
conn: &Connection,
filter: &TimelineFilter,
limit: usize,
events: &mut Vec<TimelineEvent>,
) -> Result<()> {
let (where_clause, params) = resource_event_where(filter);
let (where_clause, mut params) = resource_event_where(filter);
let limit_param = params.len() + 1;
let sql = format!(
"SELECT e.created_at, e.state, e.actor_username,
@@ -266,8 +279,11 @@ fn collect_tl_state_events(
LEFT JOIN merge_requests mr ON mr.id = e.merge_request_id
LEFT JOIN projects pi ON pi.id = i.project_id
LEFT JOIN projects pm ON pm.id = mr.project_id
WHERE {where_clause}"
WHERE {where_clause}
ORDER BY e.created_at DESC
LIMIT ?{limit_param}"
);
params.push(Box::new(limit as i64));
let mut stmt = conn.prepare(&sql).context("preparing state events query")?;
let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(AsRef::as_ref).collect();
@@ -338,9 +354,11 @@ fn collect_tl_state_events(
fn collect_tl_label_events(
conn: &Connection,
filter: &TimelineFilter,
limit: usize,
events: &mut Vec<TimelineEvent>,
) -> Result<()> {
let (where_clause, params) = resource_event_where(filter);
let (where_clause, mut params) = resource_event_where(filter);
let limit_param = params.len() + 1;
let sql = format!(
"SELECT e.created_at, e.action, e.label_name, e.actor_username,
@@ -352,8 +370,11 @@ fn collect_tl_label_events(
LEFT JOIN merge_requests mr ON mr.id = e.merge_request_id
LEFT JOIN projects pi ON pi.id = i.project_id
LEFT JOIN projects pm ON pm.id = mr.project_id
WHERE {where_clause}"
WHERE {where_clause}
ORDER BY e.created_at DESC
LIMIT ?{limit_param}"
);
params.push(Box::new(limit as i64));
let mut stmt = conn.prepare(&sql).context("preparing label events query")?;
let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(AsRef::as_ref).collect();
@@ -426,9 +447,11 @@ fn collect_tl_label_events(
fn collect_tl_milestone_events(
conn: &Connection,
filter: &TimelineFilter,
limit: usize,
events: &mut Vec<TimelineEvent>,
) -> Result<()> {
let (where_clause, params) = resource_event_where(filter);
let (where_clause, mut params) = resource_event_where(filter);
let limit_param = params.len() + 1;
let sql = format!(
"SELECT e.created_at, e.action, e.milestone_title, e.actor_username,
@@ -440,8 +463,11 @@ fn collect_tl_milestone_events(
LEFT JOIN merge_requests mr ON mr.id = e.merge_request_id
LEFT JOIN projects pi ON pi.id = i.project_id
LEFT JOIN projects pm ON pm.id = mr.project_id
WHERE {where_clause}"
WHERE {where_clause}
ORDER BY e.created_at DESC
LIMIT ?{limit_param}"
);
params.push(Box::new(limit as i64));
let mut stmt = conn
.prepare(&sql)


@@ -38,20 +38,20 @@ pub fn fetch_trace(
/// Returns distinct `new_path` values scoped to the given project (or all
/// projects if `None`), sorted alphabetically.
pub fn fetch_known_paths(conn: &Connection, project_id: Option<i64>) -> Result<Vec<String>> {
let mut paths = if let Some(pid) = project_id {
let paths = if let Some(pid) = project_id {
let mut stmt = conn.prepare(
"SELECT DISTINCT new_path FROM mr_file_changes WHERE project_id = ?1 ORDER BY new_path",
"SELECT DISTINCT new_path FROM mr_file_changes \
WHERE project_id = ?1 ORDER BY new_path LIMIT 5000",
)?;
let rows = stmt.query_map([pid], |row| row.get::<_, String>(0))?;
rows.filter_map(Result::ok).collect::<Vec<_>>()
rows.collect::<std::result::Result<Vec<_>, _>>()?
} else {
let mut stmt =
conn.prepare("SELECT DISTINCT new_path FROM mr_file_changes ORDER BY new_path")?;
let mut stmt = conn.prepare(
"SELECT DISTINCT new_path FROM mr_file_changes ORDER BY new_path LIMIT 5000",
)?;
let rows = stmt.query_map([], |row| row.get::<_, String>(0))?;
rows.filter_map(Result::ok).collect::<Vec<_>>()
rows.collect::<std::result::Result<Vec<_>, _>>()?
};
    // No sort/dedup needed: both queries return DISTINCT rows already ordered,
    // and the binding is no longer mutable.
Ok(paths)
}
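The row-collection change above is behavioral, not cosmetic: `filter_map(Result::ok)` silently discards rows that fail to decode, while collecting into a `Result` propagates the first error to the caller. A minimal sketch with plain iterators (no rusqlite involved):

```rust
fn main() {
    // Stand-in for a rusqlite row iterator where one row fails to decode.
    let rows: Vec<Result<String, String>> = vec![
        Ok("src/a.rs".into()),
        Err("row decode failed".into()),
        Ok("src/b.rs".into()),
    ];

    // Old behavior: the error vanishes and we get a silently short list.
    let lossy: Vec<String> = rows.iter().cloned().filter_map(Result::ok).collect();
    assert_eq!(lossy.len(), 2);

    // New behavior: collecting into Result aborts on the first Err.
    let strict: Result<Vec<String>, String> = rows.into_iter().collect();
    assert!(strict.is_err());
}
```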


@@ -377,3 +377,120 @@ fn test_sync_completed_from_bootstrap_resets_navigation_and_state() {
assert_eq!(app.navigation.depth(), 1);
assert!(!app.state.bootstrap.sync_started);
}
#[test]
fn test_sync_completed_flushes_entity_caches() {
use crate::message::EntityKey;
use crate::state::issue_detail::{IssueDetailData, IssueMetadata};
use crate::state::mr_detail::{MrDetailData, MrMetadata};
use crate::state::{CachedIssuePayload, CachedMrPayload};
use crate::view::common::cross_ref::CrossRef;
let mut app = test_app();
// Populate caches with dummy data.
let issue_key = EntityKey::issue(1, 42);
app.state.issue_cache.put(
issue_key,
CachedIssuePayload {
data: IssueDetailData {
metadata: IssueMetadata {
iid: 42,
project_path: "g/p".into(),
title: "Test".into(),
description: String::new(),
state: "opened".into(),
author: "alice".into(),
assignees: vec![],
labels: vec![],
milestone: None,
due_date: None,
created_at: 0,
updated_at: 0,
web_url: String::new(),
discussion_count: 0,
},
cross_refs: Vec::<CrossRef>::new(),
},
discussions: vec![],
},
);
let mr_key = EntityKey::mr(1, 99);
app.state.mr_cache.put(
mr_key,
CachedMrPayload {
data: MrDetailData {
metadata: MrMetadata {
iid: 99,
project_path: "g/p".into(),
title: "MR".into(),
description: String::new(),
state: "opened".into(),
draft: false,
author: "bob".into(),
assignees: vec![],
reviewers: vec![],
labels: vec![],
source_branch: "feat".into(),
target_branch: "main".into(),
merge_status: String::new(),
created_at: 0,
updated_at: 0,
merged_at: None,
web_url: String::new(),
discussion_count: 0,
file_change_count: 0,
},
cross_refs: Vec::<CrossRef>::new(),
file_changes: vec![],
},
discussions: vec![],
},
);
assert_eq!(app.state.issue_cache.len(), 1);
assert_eq!(app.state.mr_cache.len(), 1);
// Sync completes — caches should be flushed.
app.update(Msg::SyncCompleted { elapsed_ms: 500 });
assert!(
app.state.issue_cache.is_empty(),
"issue cache should be flushed after sync"
);
assert!(
app.state.mr_cache.is_empty(),
"MR cache should be flushed after sync"
);
}
#[test]
fn test_sync_completed_refreshes_current_detail_view() {
use crate::message::EntityKey;
use crate::state::LoadState;
let mut app = test_app();
// Navigate to an issue detail screen.
let key = EntityKey::issue(1, 42);
app.update(Msg::NavigateTo(Screen::IssueDetail(key)));
// Simulate load completion so LoadState goes to Idle.
app.state.set_loading(
Screen::IssueDetail(EntityKey::issue(1, 42)),
LoadState::Idle,
);
// Sync completes while viewing the detail.
app.update(Msg::SyncCompleted { elapsed_ms: 300 });
// The detail screen should have been set to Refreshing.
assert_eq!(
*app.state
.load_state
.get(&Screen::IssueDetail(EntityKey::issue(1, 42))),
LoadState::Refreshing,
"detail view should refresh after sync"
);
}


@@ -4,8 +4,8 @@ use chrono::TimeDelta;
use ftui::{Cmd, Event, Frame, KeyCode, KeyEvent, Model, Modifiers};
use crate::crash_context::CrashEvent;
use crate::message::{InputMode, Msg, Screen};
use crate::state::LoadState;
use crate::message::{EntityKind, InputMode, Msg, Screen};
use crate::state::{CachedIssuePayload, CachedMrPayload, LoadState};
use crate::task_supervisor::TaskKey;
use super::LoreApp;
@@ -219,6 +219,8 @@ impl LoreApp {
"go_who" => self.navigate_to(Screen::Who),
"go_file_history" => self.navigate_to(Screen::FileHistory),
"go_trace" => self.navigate_to(Screen::Trace),
"go_doctor" => self.navigate_to(Screen::Doctor),
"go_stats" => self.navigate_to(Screen::Stats),
"go_sync" => {
if screen == &Screen::Bootstrap {
self.state.bootstrap.sync_started = true;
@@ -235,6 +237,19 @@ impl LoreApp {
self.navigation.jump_forward();
Cmd::none()
}
"toggle_scope" => {
if self.state.scope_picker.visible {
self.state.scope_picker.close();
Cmd::none()
} else {
// Fetch projects and open picker asynchronously.
Cmd::task(move || {
// Placeholder: a later phase will run the real DB query in
// this task; for now the picker opens with an empty list.
Msg::ScopeProjectsLoaded { projects: vec![] }
})
}
}
"move_down" | "move_up" | "select_item" | "focus_filter" | "scroll_to_top" => {
// Screen-specific actions — delegated in future phases.
Cmd::none()
@@ -248,6 +263,10 @@ impl LoreApp {
// -----------------------------------------------------------------------
/// Navigate to a screen, pushing the nav stack and starting a data load.
///
/// For detail views (issue/MR), checks the entity cache first. On a
/// cache hit, applies cached data immediately and uses `Refreshing`
/// (background re-fetch) instead of `LoadingInitial` (full spinner).
fn navigate_to(&mut self, screen: Screen) -> Cmd<Msg> {
let screen_label = screen.label().to_string();
let current_label = self.navigation.current().label().to_string();
@@ -259,21 +278,56 @@ impl LoreApp {
self.navigation.push(screen.clone());
// First visit → full-screen spinner; revisit → corner spinner over stale data.
let load_state = if self.state.load_state.was_visited(&screen) {
// Check entity cache for detail views — apply cached data instantly.
let cache_hit = self.try_apply_detail_cache(&screen);
// Cache hit → background refresh; first visit → full spinner; revisit → stale refresh.
let load_state = if cache_hit || self.state.load_state.was_visited(&screen) {
LoadState::Refreshing
} else {
LoadState::LoadingInitial
};
self.state.set_loading(screen.clone(), load_state);
// Spawn supervised task for data loading (placeholder — actual DB
// query dispatch comes in Phase 2 screen implementations).
// Always spawn a data load (even on cache hit, to ensure freshness).
let _handle = self.supervisor.submit(TaskKey::LoadScreen(screen));
Cmd::none()
}
/// Try to populate a detail view from the entity cache. Returns true on hit.
fn try_apply_detail_cache(&mut self, screen: &Screen) -> bool {
match screen {
Screen::IssueDetail(key) => {
if let Some(payload) = self.state.issue_cache.get(key).cloned() {
self.state.issue_detail.load_new(key.clone());
self.state.issue_detail.apply_metadata(payload.data);
if !payload.discussions.is_empty() {
self.state
.issue_detail
.apply_discussions(payload.discussions);
}
true
} else {
false
}
}
Screen::MrDetail(key) => {
if let Some(payload) = self.state.mr_cache.get(key).cloned() {
self.state.mr_detail.load_new(key.clone());
self.state.mr_detail.apply_metadata(payload.data);
if !payload.discussions.is_empty() {
self.state.mr_detail.apply_discussions(payload.discussions);
}
true
} else {
false
}
}
_ => false,
}
}
// -----------------------------------------------------------------------
// Message dispatch (non-key)
// -----------------------------------------------------------------------
@@ -382,6 +436,14 @@ impl LoreApp {
.supervisor
.is_current(&TaskKey::LoadScreen(screen.clone()), generation)
{
// Populate entity cache (metadata only; discussions added later).
self.state.issue_cache.put(
key,
CachedIssuePayload {
data: (*data).clone(),
discussions: Vec::new(),
},
);
self.state.issue_detail.apply_metadata(*data);
self.state.set_loading(screen.clone(), LoadState::Idle);
self.supervisor
@@ -398,14 +460,24 @@ impl LoreApp {
// supervisor.complete(), so is_current() would return false.
// Instead, check that the detail state still expects this key.
match key.kind {
crate::message::EntityKind::Issue => {
EntityKind::Issue => {
if self.state.issue_detail.current_key.as_ref() == Some(&key) {
self.state.issue_detail.apply_discussions(discussions);
self.state
.issue_detail
.apply_discussions(discussions.clone());
// Update cache with discussions.
if let Some(cached) = self.state.issue_cache.get_mut(&key) {
cached.discussions = discussions;
}
}
}
crate::message::EntityKind::MergeRequest => {
EntityKind::MergeRequest => {
if self.state.mr_detail.current_key.as_ref() == Some(&key) {
self.state.mr_detail.apply_discussions(discussions);
self.state.mr_detail.apply_discussions(discussions.clone());
// Update cache with discussions.
if let Some(cached) = self.state.mr_cache.get_mut(&key) {
cached.discussions = discussions;
}
}
}
}
@@ -423,6 +495,14 @@ impl LoreApp {
.supervisor
.is_current(&TaskKey::LoadScreen(screen.clone()), generation)
{
// Populate entity cache (metadata only; discussions added later).
self.state.mr_cache.put(
key,
CachedMrPayload {
data: (*data).clone(),
discussions: Vec::new(),
},
);
self.state.mr_detail.apply_metadata(*data);
self.state.set_loading(screen.clone(), LoadState::Idle);
self.supervisor
@@ -431,14 +511,42 @@ impl LoreApp {
Cmd::none()
}
// --- Sync lifecycle (Bootstrap auto-transition) ---
// --- Sync lifecycle ---
Msg::SyncStarted => {
self.state.sync.start();
if *self.navigation.current() == Screen::Bootstrap {
self.state.bootstrap.sync_started = true;
}
Cmd::none()
}
Msg::SyncCompleted { .. } => {
Msg::SyncProgress {
stage,
current,
total,
} => {
self.state.sync.update_progress(&stage, current, total);
Cmd::none()
}
Msg::SyncProgressBatch { stage, batch_size } => {
self.state.sync.update_batch(&stage, batch_size);
Cmd::none()
}
Msg::SyncLogLine(line) => {
self.state.sync.add_log_line(line);
Cmd::none()
}
Msg::SyncBackpressureDrop => {
// Silently drop — the coalescer already handles throttling.
Cmd::none()
}
Msg::SyncCompleted { elapsed_ms } => {
self.state.sync.complete(elapsed_ms);
// Flush entity caches — sync may have updated any entity's
// metadata, discussions, or cross-refs in the DB.
self.state.issue_cache.clear();
self.state.mr_cache.clear();
// If we came from Bootstrap, replace nav history with Dashboard.
if *self.navigation.current() == Screen::Bootstrap {
self.state.bootstrap.sync_started = false;
@@ -454,6 +562,31 @@ impl LoreApp {
self.state.set_loading(dashboard.clone(), load_state);
let _handle = self.supervisor.submit(TaskKey::LoadScreen(dashboard));
}
// If currently on a detail view, refresh it so the user sees
// updated data without navigating away and back.
let current = self.navigation.current().clone();
match &current {
Screen::IssueDetail(_) | Screen::MrDetail(_) => {
self.state
.set_loading(current.clone(), LoadState::Refreshing);
let _handle = self.supervisor.submit(TaskKey::LoadScreen(current));
}
_ => {}
}
Cmd::none()
}
Msg::SyncCancelled => {
self.state.sync.cancel();
Cmd::none()
}
Msg::SyncFailed(err) => {
self.state.sync.fail(err);
Cmd::none()
}
Msg::SyncStreamStats { bytes, items } => {
self.state.sync.update_stream_stats(bytes, items);
Cmd::none()
}
@@ -511,6 +644,59 @@ impl LoreApp {
Cmd::none()
}
// --- Doctor ---
Msg::DoctorLoaded { checks } => {
self.state.doctor.apply_checks(checks);
self.state.set_loading(Screen::Doctor, LoadState::Idle);
Cmd::none()
}
// --- Stats ---
Msg::StatsLoaded { data } => {
self.state.stats.apply_data(data);
self.state.set_loading(Screen::Stats, LoadState::Idle);
Cmd::none()
}
// --- Timeline ---
Msg::TimelineLoaded { generation, events } => {
if self
.supervisor
.is_current(&TaskKey::LoadScreen(Screen::Timeline), generation)
{
self.state.timeline.apply_results(generation, events);
self.state.set_loading(Screen::Timeline, LoadState::Idle);
self.supervisor
.complete(&TaskKey::LoadScreen(Screen::Timeline), generation);
}
Cmd::none()
}
// --- Search ---
Msg::SearchExecuted {
generation,
results,
} => {
if self
.supervisor
.is_current(&TaskKey::LoadScreen(Screen::Search), generation)
{
self.state.search.apply_results(generation, results);
self.state.set_loading(Screen::Search, LoadState::Idle);
self.supervisor
.complete(&TaskKey::LoadScreen(Screen::Search), generation);
}
Cmd::none()
}
// --- Scope ---
Msg::ScopeProjectsLoaded { projects } => {
self.state
.scope_picker
.open(projects, &self.state.global_scope);
Cmd::none()
}
// All other message variants: no-op for now.
// Future phases will fill these in as screens are implemented.
_ => Cmd::none(),


@@ -112,7 +112,7 @@ mod tests {
let cmd = reg.complete_sequence(
&KeyCode::Char('g'),
&Modifiers::NONE,
&KeyCode::Char('x'),
&KeyCode::Char('z'),
&Modifiers::NONE,
&Screen::Dashboard,
);


@@ -213,6 +213,16 @@ pub fn build_registry() -> CommandRegistry {
available_in: ScreenFilter::Global,
available_in_text_mode: false,
},
CommandDef {
id: "toggle_scope",
label: "Project Scope",
keybinding: Some(KeyCombo::key(KeyCode::Char('P'))),
cli_equivalent: None,
help_text: "Toggle project scope filter",
status_hint: "P:scope",
available_in: ScreenFilter::Global,
available_in_text_mode: false,
},
// --- Navigation: g-prefix sequences ---
CommandDef {
id: "go_home",
@@ -284,6 +294,46 @@ pub fn build_registry() -> CommandRegistry {
available_in: ScreenFilter::Global,
available_in_text_mode: false,
},
CommandDef {
id: "go_file_history",
label: "Go to File History",
keybinding: Some(KeyCombo::g_then('f')),
cli_equivalent: Some("lore file-history"),
help_text: "Jump to file history",
status_hint: "gf:files",
available_in: ScreenFilter::Global,
available_in_text_mode: false,
},
CommandDef {
id: "go_trace",
label: "Go to Trace",
keybinding: Some(KeyCombo::g_then('r')),
cli_equivalent: Some("lore trace"),
help_text: "Jump to trace",
status_hint: "gr:trace",
available_in: ScreenFilter::Global,
available_in_text_mode: false,
},
CommandDef {
id: "go_doctor",
label: "Go to Doctor",
keybinding: Some(KeyCombo::g_then('d')),
cli_equivalent: Some("lore doctor"),
help_text: "Jump to environment health checks",
status_hint: "gd:doctor",
available_in: ScreenFilter::Global,
available_in_text_mode: false,
},
CommandDef {
id: "go_stats",
label: "Go to Stats",
keybinding: Some(KeyCombo::g_then('x')),
cli_equivalent: Some("lore stats"),
help_text: "Jump to database statistics",
status_hint: "gx:stats",
available_in: ScreenFilter::Global,
available_in_text_mode: false,
},
// --- Vim-style jump list ---
CommandDef {
id: "jump_back",


@@ -25,6 +25,15 @@ pub struct EntityCache<V> {
tick: u64,
}
impl<V> std::fmt::Debug for EntityCache<V> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("EntityCache")
.field("len", &self.entries.len())
.field("capacity", &self.capacity)
.finish()
}
}
impl<V> EntityCache<V> {
/// Create a new cache with the default capacity (64).
#[must_use]
@@ -60,6 +69,16 @@ impl<V> EntityCache<V> {
})
}
/// Look up an entry mutably, bumping its access tick to keep it alive.
pub fn get_mut(&mut self, key: &EntityKey) -> Option<&mut V> {
self.tick += 1;
let tick = self.tick;
self.entries.get_mut(key).map(|(val, t)| {
*t = tick;
val
})
}
/// Insert an entry, evicting the least-recently-accessed entry if at capacity.
pub fn put(&mut self, key: EntityKey, value: V) {
self.tick += 1;
@@ -72,15 +91,14 @@ impl<V> EntityCache<V> {
}
// Evict LRU if at capacity.
if self.entries.len() >= self.capacity {
if let Some(lru_key) = self
if self.entries.len() >= self.capacity
&& let Some(lru_key) = self
.entries
.iter()
.min_by_key(|(_, (_, t))| *t)
.map(|(k, _)| k.clone())
{
self.entries.remove(&lru_key);
}
{
self.entries.remove(&lru_key);
}
self.entries.insert(key, (value, tick));
@@ -104,6 +122,11 @@ impl<V> EntityCache<V> {
pub fn is_empty(&self) -> bool {
self.entries.is_empty()
}
/// Remove all entries from the cache.
pub fn clear(&mut self) {
self.entries.clear();
}
}
impl<V> Default for EntityCache<V> {
@@ -155,8 +178,16 @@ mod tests {
// Insert a 4th item: should evict issue(2) (tick 2, lowest).
cache.put(issue(4), "d"); // tick 5
assert_eq!(cache.get(&issue(1)), Some(&"a"), "issue(1) should survive (recently accessed)");
assert_eq!(cache.get(&issue(2)), None, "issue(2) should be evicted (LRU)");
assert_eq!(
cache.get(&issue(1)),
Some(&"a"),
"issue(1) should survive (recently accessed)"
);
assert_eq!(
cache.get(&issue(2)),
None,
"issue(2) should be evicted (LRU)"
);
assert_eq!(cache.get(&issue(3)), Some(&"c"), "issue(3) should survive");
assert_eq!(cache.get(&issue(4)), Some(&"d"), "issue(4) just inserted");
}
@@ -229,4 +260,79 @@ mod tests {
assert_eq!(cache.get(&mr(42)), Some(&"mr-42"));
assert_eq!(cache.len(), 2);
}
#[test]
fn test_get_mut_modifies_in_place() {
let mut cache = EntityCache::with_capacity(4);
cache.put(issue(1), String::from("original"));
if let Some(val) = cache.get_mut(&issue(1)) {
val.push_str("-modified");
}
assert_eq!(
cache.get(&issue(1)),
Some(&String::from("original-modified"))
);
}
#[test]
fn test_get_mut_returns_none_for_missing() {
let mut cache: EntityCache<String> = EntityCache::with_capacity(4);
assert!(cache.get_mut(&issue(99)).is_none());
}
#[test]
fn test_get_mut_bumps_tick_keeps_alive() {
let mut cache = EntityCache::with_capacity(2);
cache.put(issue(1), "a"); // tick 1
cache.put(issue(2), "b"); // tick 2
// Bump issue(1) via get_mut so it survives eviction.
let _ = cache.get_mut(&issue(1)); // tick 3
// Insert a 3rd — should evict issue(2) (tick 2, LRU).
cache.put(issue(3), "c"); // tick 4
assert!(cache.get(&issue(1)).is_some(), "issue(1) should survive");
assert!(cache.get(&issue(2)).is_none(), "issue(2) should be evicted");
assert!(cache.get(&issue(3)).is_some(), "issue(3) just inserted");
}
#[test]
fn test_clear_removes_all_entries() {
let mut cache = EntityCache::with_capacity(8);
cache.put(issue(1), "a");
cache.put(issue(2), "b");
cache.put(mr(3), "c");
assert_eq!(cache.len(), 3);
cache.clear();
assert!(cache.is_empty());
assert_eq!(cache.len(), 0);
assert_eq!(cache.get(&issue(1)), None);
assert_eq!(cache.get(&issue(2)), None);
assert_eq!(cache.get(&mr(3)), None);
}
#[test]
fn test_clear_on_empty_cache_is_noop() {
let mut cache: EntityCache<&str> = EntityCache::with_capacity(4);
cache.clear();
assert!(cache.is_empty());
}
#[test]
fn test_clear_allows_reuse() {
let mut cache = EntityCache::with_capacity(4);
cache.put(issue(1), "v1");
cache.put(issue(2), "v2");
cache.clear();
// Cache should work normally after clear.
cache.put(issue(3), "v3");
assert_eq!(cache.get(&issue(3)), Some(&"v3"));
assert_eq!(cache.len(), 1);
}
}
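The tests above exercise a tick-based LRU: every `get`/`put` bumps a monotonically increasing tick, and eviction removes the entry with the smallest tick. A minimal, self-contained sketch of that policy (the real `EntityCache` lives elsewhere in the crate; `TickLru` and its plain `i64` key are illustration-only stand-ins):

```rust
use std::collections::HashMap;

/// Minimal tick-based LRU sketch. Each access records the current tick;
/// eviction removes the entry with the smallest (least recently used) tick.
struct TickLru<V> {
    capacity: usize,
    tick: u64,
    entries: HashMap<i64, (V, u64)>,
}

impl<V> TickLru<V> {
    fn with_capacity(capacity: usize) -> Self {
        Self { capacity, tick: 0, entries: HashMap::new() }
    }

    /// Look up a value, bumping its tick so it survives eviction longer.
    fn get(&mut self, key: i64) -> Option<&V> {
        self.tick += 1;
        let tick = self.tick;
        self.entries.get_mut(&key).map(|(v, t)| {
            *t = tick;
            &*v
        })
    }

    /// Insert a value, evicting the least-recently-used entry at capacity.
    fn put(&mut self, key: i64, value: V) {
        self.tick += 1;
        if !self.entries.contains_key(&key) && self.entries.len() >= self.capacity {
            if let Some(oldest) = self
                .entries
                .iter()
                .min_by_key(|(_, (_, t))| *t)
                .map(|(k, _)| *k)
            {
                self.entries.remove(&oldest);
            }
        }
        self.entries.insert(key, (value, self.tick));
    }
}

fn main() {
    let mut cache = TickLru::with_capacity(2);
    cache.put(1, "a");
    cache.put(2, "b");
    cache.get(1); // bump 1 so 2 becomes the LRU entry
    cache.put(3, "c"); // evicts 2
    assert!(cache.get(1).is_some());
    assert!(cache.get(2).is_none());
    assert!(cache.get(3).is_some());
}
```

The `O(n)` scan in `min_by_key` is fine at small capacities; a real implementation at larger sizes would pair the map with an ordered structure.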

View File

@@ -0,0 +1,202 @@
//! Single-instance advisory lock for the TUI.
//!
//! Prevents concurrent `lore-tui` launches from corrupting state.
//! Uses an advisory lock file containing the owner's PID. Stale locks
//! (dead PID) are recovered automatically.
use std::fs;
use std::io::Write;
use std::path::{Path, PathBuf};
/// Advisory lock preventing concurrent TUI launches.
///
/// On `acquire()`, writes the current PID to the lock file.
/// On `Drop`, removes the lock file (best-effort).
#[derive(Debug)]
pub struct InstanceLock {
path: PathBuf,
}
/// Error returned when another instance is already running.
#[derive(Debug)]
pub struct LockConflict {
pub pid: u32,
pub path: PathBuf,
}
impl std::fmt::Display for LockConflict {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"Another lore-tui instance is running (PID {}). Lock file: {}",
self.pid,
self.path.display()
)
}
}
impl std::error::Error for LockConflict {}
impl InstanceLock {
/// Try to acquire the instance lock.
///
/// - If the lock file doesn't exist, creates it with our PID.
/// - If the lock file exists with a live PID, returns `LockConflict`.
/// - If the lock file exists with a dead PID, removes the stale lock and acquires.
pub fn acquire(lock_dir: &Path) -> Result<Self, Box<dyn std::error::Error>> {
// Ensure lock directory exists.
fs::create_dir_all(lock_dir)?;
let path = lock_dir.join("tui.lock");
// Check for existing lock.
if path.exists() {
let contents = fs::read_to_string(&path).unwrap_or_default();
if let Ok(pid) = contents.trim().parse::<u32>()
&& is_process_alive(pid)
{
return Err(Box::new(LockConflict {
pid,
path: path.clone(),
}));
}
// Stale lock — PID is dead, or corrupt file. Remove and re-acquire.
fs::remove_file(&path)?;
}
// Write our PID.
let mut file = fs::File::create(&path)?;
write!(file, "{}", std::process::id())?;
file.sync_all()?;
Ok(Self { path })
}
/// Path to the lock file.
#[must_use]
pub fn path(&self) -> &Path {
&self.path
}
}
impl Drop for InstanceLock {
fn drop(&mut self) {
// Best-effort cleanup. If it fails, the stale lock will be
// recovered on next launch via the dead-PID check.
let _ = fs::remove_file(&self.path);
}
}
/// Check whether a process with the given PID is alive.
///
/// Uses `kill -0 <pid>` on Unix (exit 0 = alive, non-zero = dead).
/// On non-Unix, conservatively assumes alive.
#[cfg(unix)]
fn is_process_alive(pid: u32) -> bool {
std::process::Command::new("kill")
.args(["-0", &pid.to_string()])
.stdout(std::process::Stdio::null())
.stderr(std::process::Stdio::null())
.status()
.is_ok_and(|s| s.success())
}
#[cfg(not(unix))]
fn is_process_alive(_pid: u32) -> bool {
// Conservative fallback: assume alive.
true
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_acquire_and_release() {
let dir = tempfile::tempdir().unwrap();
let lock_path = dir.path().join("tui.lock");
{
let _lock = InstanceLock::acquire(dir.path()).unwrap();
assert!(lock_path.exists());
// Lock file should contain our PID.
let contents = fs::read_to_string(&lock_path).unwrap();
assert_eq!(contents, format!("{}", std::process::id()));
}
// After drop, lock file should be removed.
assert!(!lock_path.exists());
}
#[test]
fn test_double_acquire_fails() {
let dir = tempfile::tempdir().unwrap();
let _lock = InstanceLock::acquire(dir.path()).unwrap();
// Second acquire should fail because our PID is still alive.
let result = InstanceLock::acquire(dir.path());
assert!(result.is_err());
let err = result.unwrap_err();
let conflict = err.downcast_ref::<LockConflict>().unwrap();
assert_eq!(conflict.pid, std::process::id());
}
#[test]
fn test_stale_lock_recovery() {
let dir = tempfile::tempdir().unwrap();
let lock_path = dir.path().join("tui.lock");
// Write a lock file containing a PID that almost certainly doesn't
// exist (99,999,999 is far beyond typical PID ranges).
let dead_pid = 99_999_999u32;
fs::write(&lock_path, dead_pid.to_string()).unwrap();
// Should succeed — stale lock is recovered.
let _lock = InstanceLock::acquire(dir.path()).unwrap();
assert!(lock_path.exists());
// Lock file now contains our PID, not the dead one.
let contents = fs::read_to_string(&lock_path).unwrap();
assert_eq!(contents, format!("{}", std::process::id()));
}
#[test]
fn test_corrupt_lock_file_recovered() {
let dir = tempfile::tempdir().unwrap();
let lock_path = dir.path().join("tui.lock");
// Write garbage to the lock file.
fs::write(&lock_path, "not-a-pid").unwrap();
// Should succeed — corrupt lock is treated as stale.
let lock = InstanceLock::acquire(dir.path()).unwrap();
let contents = fs::read_to_string(lock.path()).unwrap();
assert_eq!(contents, format!("{}", std::process::id()));
}
#[test]
fn test_creates_lock_directory() {
let dir = tempfile::tempdir().unwrap();
let nested = dir.path().join("a").join("b").join("c");
let lock = InstanceLock::acquire(&nested).unwrap();
assert!(nested.join("tui.lock").exists());
drop(lock);
}
#[test]
fn test_lock_conflict_display() {
let conflict = LockConflict {
pid: 12345,
path: PathBuf::from("/tmp/tui.lock"),
};
let msg = format!("{conflict}");
assert!(msg.contains("12345"));
assert!(msg.contains("/tmp/tui.lock"));
}
}

View File

@@ -44,6 +44,84 @@ pub const fn show_preview_pane(bp: Breakpoint) -> bool {
}
}
// ---------------------------------------------------------------------------
// Per-screen responsive helpers
// ---------------------------------------------------------------------------
/// Whether detail views (issue/MR) should show a side panel for discussions.
///
/// At `Lg`+ widths, enough room exists for a 60/40 or 50/50 split with
/// description on the left and discussions/cross-refs on the right.
#[inline]
pub const fn detail_side_panel(bp: Breakpoint) -> bool {
match bp {
Breakpoint::Lg | Breakpoint::Xl => true,
Breakpoint::Xs | Breakpoint::Sm | Breakpoint::Md => false,
}
}
/// Number of stat columns for the Stats/Doctor screens.
///
/// - `Xs` / `Sm`: 1 column (full-width stacked)
/// - `Md`+: 2 columns (side-by-side sections)
#[inline]
pub const fn info_screen_columns(bp: Breakpoint) -> u16 {
match bp {
Breakpoint::Xs | Breakpoint::Sm => 1,
Breakpoint::Md | Breakpoint::Lg | Breakpoint::Xl => 2,
}
}
/// Whether to show the project path column in search results.
///
/// On narrow terminals, the project path is dropped to give the title
/// more room.
#[inline]
pub const fn search_show_project(bp: Breakpoint) -> bool {
match bp {
Breakpoint::Xs | Breakpoint::Sm => false,
Breakpoint::Md | Breakpoint::Lg | Breakpoint::Xl => true,
}
}
/// Width allocated for the relative-time column in timeline events.
///
/// Narrow terminals get a compact time (e.g., "2h"), wider terminals
/// get the full relative time (e.g., "2 hours ago").
#[inline]
pub const fn timeline_time_width(bp: Breakpoint) -> u16 {
match bp {
Breakpoint::Xs => 5,
Breakpoint::Sm => 8,
Breakpoint::Md | Breakpoint::Lg | Breakpoint::Xl => 12,
}
}
/// Whether to use abbreviated mode-tab labels in the Who screen.
///
/// On narrow terminals, tabs are shortened to 3-char abbreviations
/// (e.g., "Exp" instead of "Expert") to fit all 5 modes.
#[inline]
pub const fn who_abbreviated_tabs(bp: Breakpoint) -> bool {
match bp {
Breakpoint::Xs | Breakpoint::Sm => true,
Breakpoint::Md | Breakpoint::Lg | Breakpoint::Xl => false,
}
}
/// Width of the progress bar in the Sync screen.
///
/// Scales with terminal width to use available space effectively.
#[inline]
pub const fn sync_progress_bar_width(bp: Breakpoint) -> u16 {
match bp {
Breakpoint::Xs => 15,
Breakpoint::Sm => 25,
Breakpoint::Md => 35,
Breakpoint::Lg | Breakpoint::Xl => 50,
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
@@ -99,4 +177,60 @@ mod tests {
fn test_lore_breakpoints_matches_defaults() {
assert_eq!(LORE_BREAKPOINTS, Breakpoints::DEFAULT);
}
// -- Per-screen responsive helpers ----------------------------------------
#[test]
fn test_detail_side_panel() {
assert!(!detail_side_panel(Breakpoint::Xs));
assert!(!detail_side_panel(Breakpoint::Sm));
assert!(!detail_side_panel(Breakpoint::Md));
assert!(detail_side_panel(Breakpoint::Lg));
assert!(detail_side_panel(Breakpoint::Xl));
}
#[test]
fn test_info_screen_columns() {
assert_eq!(info_screen_columns(Breakpoint::Xs), 1);
assert_eq!(info_screen_columns(Breakpoint::Sm), 1);
assert_eq!(info_screen_columns(Breakpoint::Md), 2);
assert_eq!(info_screen_columns(Breakpoint::Lg), 2);
assert_eq!(info_screen_columns(Breakpoint::Xl), 2);
}
#[test]
fn test_search_show_project() {
assert!(!search_show_project(Breakpoint::Xs));
assert!(!search_show_project(Breakpoint::Sm));
assert!(search_show_project(Breakpoint::Md));
assert!(search_show_project(Breakpoint::Lg));
assert!(search_show_project(Breakpoint::Xl));
}
#[test]
fn test_timeline_time_width() {
assert_eq!(timeline_time_width(Breakpoint::Xs), 5);
assert_eq!(timeline_time_width(Breakpoint::Sm), 8);
assert_eq!(timeline_time_width(Breakpoint::Md), 12);
assert_eq!(timeline_time_width(Breakpoint::Lg), 12);
assert_eq!(timeline_time_width(Breakpoint::Xl), 12);
}
#[test]
fn test_who_abbreviated_tabs() {
assert!(who_abbreviated_tabs(Breakpoint::Xs));
assert!(who_abbreviated_tabs(Breakpoint::Sm));
assert!(!who_abbreviated_tabs(Breakpoint::Md));
assert!(!who_abbreviated_tabs(Breakpoint::Lg));
assert!(!who_abbreviated_tabs(Breakpoint::Xl));
}
#[test]
fn test_sync_progress_bar_width() {
assert_eq!(sync_progress_bar_width(Breakpoint::Xs), 15);
assert_eq!(sync_progress_bar_width(Breakpoint::Sm), 25);
assert_eq!(sync_progress_bar_width(Breakpoint::Md), 35);
assert_eq!(sync_progress_bar_width(Breakpoint::Lg), 50);
assert_eq!(sync_progress_bar_width(Breakpoint::Xl), 50);
}
}
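Each helper above is an exhaustive `const fn` match, so adding a breakpoint variant forces every per-screen decision to be revisited at compile time. A self-contained sketch of the pattern (the enum is redefined minimally here, and the width thresholds in `from_width` are hypothetical, not the crate's actual `Breakpoints::DEFAULT`):

```rust
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum Breakpoint { Xs, Sm, Md, Lg, Xl }

// Hypothetical column thresholds, for illustration only.
const fn from_width(cols: u16) -> Breakpoint {
    match cols {
        0..=59 => Breakpoint::Xs,
        60..=89 => Breakpoint::Sm,
        90..=119 => Breakpoint::Md,
        120..=159 => Breakpoint::Lg,
        _ => Breakpoint::Xl,
    }
}

// Exhaustive match: a new Breakpoint variant is a compile error here
// until the helper decides what that width class should get.
const fn progress_bar_width(bp: Breakpoint) -> u16 {
    match bp {
        Breakpoint::Xs => 15,
        Breakpoint::Sm => 25,
        Breakpoint::Md => 35,
        Breakpoint::Lg | Breakpoint::Xl => 50,
    }
}

fn main() {
    assert_eq!(progress_bar_width(from_width(80)), 25);
    assert_eq!(progress_bar_width(from_width(200)), 50);
}
```

Avoiding a `_ => ...` arm is the point: wildcard matches would silently give new breakpoints whatever the fallback happens to be.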

View File

@@ -5,7 +5,7 @@
//! Built on FrankenTUI (Elm architecture): Model, update, view.
//! The `lore` CLI spawns `lore-tui` via PATH lookup at runtime.
use anyhow::Result;
use anyhow::{Context, Result};
// Phase 0 modules.
pub mod clock; // Clock trait: SystemClock + FakeClock (bd-2lg6)
@@ -34,6 +34,12 @@ pub mod filter_dsl; // Filter DSL tokenizer for list screen filter bars (bd-18qs
// Phase 4 modules.
pub mod entity_cache; // Bounded LRU entity cache for detail view reopens (bd-2og9)
pub mod render_cache; // Bounded render cache for expensive per-frame computations (bd-2og9)
pub mod scope; // Global scope context: SQL helpers + project listing (bd-1ser)
// Phase 5 modules.
pub mod instance_lock; // Single-instance advisory lock for TUI (bd-3h00)
pub mod session; // Session state persistence: save/load/quarantine (bd-3h00)
pub mod text_width; // Unicode-aware text width measurement + truncation (bd-3h00)
/// Options controlling how the TUI launches.
#[derive(Debug, Clone)]
@@ -65,9 +71,42 @@ pub struct LaunchOptions {
/// 2. **Data readiness** — check whether the database has any entity data.
/// If empty, start on the Bootstrap screen; otherwise start on Dashboard.
pub fn launch_tui(options: LaunchOptions) -> Result<()> {
let _options = options;
// Phase 1 will wire this to LoreApp + App::fullscreen().run()
eprintln!("lore-tui: browse mode not yet implemented (Phase 1)");
let _options = options; // remaining fields (fresh, ascii, etc.) consumed in later phases
// 1. Resolve database path.
let db_path = lore::core::paths::get_db_path(None);
if !db_path.exists() {
anyhow::bail!(
"No lore database found at {}.\n\
Run 'lore init' to create a config, then 'lore sync' to fetch data.",
db_path.display()
);
}
// 2. Open DB and run schema preflight.
let db = db::DbManager::open(&db_path)
.with_context(|| format!("opening database at {}", db_path.display()))?;
db.with_reader(schema_preflight)?;
// 3. Check data readiness — bootstrap screen if empty.
let start_on_bootstrap = db.with_reader(|conn| {
let readiness = action::check_data_readiness(conn)?;
Ok(!readiness.has_any_data())
})?;
// 4. Build the app model.
let mut app = app::LoreApp::new();
app.db = Some(db);
if start_on_bootstrap {
app.navigation.reset_to(message::Screen::Bootstrap);
}
// 5. Enter the FrankenTUI event loop.
ftui::App::fullscreen(app)
.with_mouse()
.run()
.context("running TUI event loop")?;
Ok(())
}

View File

@@ -307,6 +307,22 @@ pub enum Msg {
paths: Vec<String>,
},
// --- Scope ---
/// Projects loaded for the scope picker.
ScopeProjectsLoaded {
projects: Vec<crate::scope::ProjectInfo>,
},
// --- Doctor ---
DoctorLoaded {
checks: Vec<crate::state::doctor::HealthCheck>,
},
// --- Stats ---
StatsLoaded {
data: crate::state::stats::StatsData,
},
// --- Sync ---
SyncStarted,
SyncProgress {
@@ -397,6 +413,9 @@ impl Msg {
Self::TraceKnownPathsLoaded { .. } => "TraceKnownPathsLoaded",
Self::FileHistoryLoaded { .. } => "FileHistoryLoaded",
Self::FileHistoryKnownPathsLoaded { .. } => "FileHistoryKnownPathsLoaded",
Self::ScopeProjectsLoaded { .. } => "ScopeProjectsLoaded",
Self::DoctorLoaded { .. } => "DoctorLoaded",
Self::StatsLoaded { .. } => "StatsLoaded",
Self::SyncStarted => "SyncStarted",
Self::SyncProgress { .. } => "SyncProgress",
Self::SyncProgressBatch { .. } => "SyncProgressBatch",

View File

@@ -87,15 +87,14 @@ impl<V> RenderCache<V> {
return;
}
if self.entries.len() >= self.capacity {
if let Some(oldest_key) = self
if self.entries.len() >= self.capacity
&& let Some(oldest_key) = self
.entries
.iter()
.min_by_key(|(_, (_, t))| *t)
.map(|(k, _)| *k)
{
self.entries.remove(&oldest_key);
}
{
self.entries.remove(&oldest_key);
}
self.entries.insert(key, (value, tick));
@@ -105,8 +104,7 @@ impl<V> RenderCache<V> {
///
/// After a resize, only entries rendered at the new width are still valid.
pub fn invalidate_width(&mut self, keep_width: u16) {
self.entries
.retain(|k, _| k.terminal_width == keep_width);
self.entries.retain(|k, _| k.terminal_width == keep_width);
}
/// Clear the entire cache (theme change — all colors invalidated).

View File

@@ -0,0 +1,162 @@
//! Global scope context helpers: SQL fragment generation and project listing.
//!
//! The [`ScopeContext`] struct lives in the `state` module root and holds
//! the active project filter. This module provides:
//!
//! - [`scope_filter_sql`] — generates a SQL WHERE clause fragment
//! - [`fetch_projects`] — lists available projects for the scope picker
//!
//! Action functions already accept `project_id: Option<i64>` — callers pass
//! `scope.project_id` directly. The helpers here are for screens that build
//! custom SQL or need the project list for UI.
use anyhow::{Context, Result};
use rusqlite::Connection;
/// Project metadata for the scope picker overlay.
#[derive(Debug, Clone)]
pub struct ProjectInfo {
/// Internal database ID (projects.id).
pub id: i64,
/// GitLab path (e.g., "group/repo").
pub path: String,
}
/// Generate a SQL WHERE clause fragment that filters by project_id.
///
/// Returns an empty string for `None` (all projects), or
/// `" AND {table_alias}.project_id = {id}"` for `Some(id)`.
///
/// The leading `AND` makes it safe to append to an existing WHERE clause.
///
/// # Examples
///
/// ```ignore
/// let filter = scope_filter_sql(Some(42), "mr");
/// assert_eq!(filter, " AND mr.project_id = 42");
///
/// let filter = scope_filter_sql(None, "mr");
/// assert_eq!(filter, "");
/// ```
#[must_use]
pub fn scope_filter_sql(project_id: Option<i64>, table_alias: &str) -> String {
debug_assert!(
!table_alias.is_empty()
&& table_alias
.bytes()
.all(|b| b.is_ascii_alphanumeric() || b == b'_'),
"table_alias must be a valid SQL identifier, got: {table_alias:?}"
);
match project_id {
Some(id) => format!(" AND {table_alias}.project_id = {id}"),
None => String::new(),
}
}
/// Fetch all projects from the database for the scope picker.
///
/// Returns projects sorted by path. Used to populate the scope picker
/// overlay when the user presses `P`.
pub fn fetch_projects(conn: &Connection) -> Result<Vec<ProjectInfo>> {
let mut stmt = conn
.prepare("SELECT id, path_with_namespace FROM projects ORDER BY path_with_namespace")
.context("preparing projects query")?;
let projects = stmt
.query_map([], |row| {
Ok(ProjectInfo {
id: row.get(0)?,
path: row.get(1)?,
})
})
.context("querying projects")?
.filter_map(std::result::Result::ok)
.collect();
Ok(projects)
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_scope_filter_sql_none_returns_empty() {
let sql = scope_filter_sql(None, "mr");
assert_eq!(sql, "");
}
#[test]
fn test_scope_filter_sql_some_returns_and_clause() {
let sql = scope_filter_sql(Some(42), "mr");
assert_eq!(sql, " AND mr.project_id = 42");
}
#[test]
fn test_scope_filter_sql_different_alias() {
let sql = scope_filter_sql(Some(7), "mfc");
assert_eq!(sql, " AND mfc.project_id = 7");
}
#[test]
fn test_fetch_projects_empty_db() {
let conn = Connection::open_in_memory().unwrap();
conn.execute_batch(
"CREATE TABLE projects (
id INTEGER PRIMARY KEY,
gitlab_project_id INTEGER UNIQUE NOT NULL,
path_with_namespace TEXT NOT NULL
)",
)
.unwrap();
let projects = fetch_projects(&conn).unwrap();
assert!(projects.is_empty());
}
#[test]
fn test_fetch_projects_returns_sorted() {
let conn = Connection::open_in_memory().unwrap();
conn.execute_batch(
"CREATE TABLE projects (
id INTEGER PRIMARY KEY,
gitlab_project_id INTEGER UNIQUE NOT NULL,
path_with_namespace TEXT NOT NULL
)",
)
.unwrap();
conn.execute(
"INSERT INTO projects (id, gitlab_project_id, path_with_namespace) VALUES (1, 100, 'z-group/repo')",
[],
)
.unwrap();
conn.execute(
"INSERT INTO projects (id, gitlab_project_id, path_with_namespace) VALUES (2, 200, 'a-group/repo')",
[],
)
.unwrap();
let projects = fetch_projects(&conn).unwrap();
assert_eq!(projects.len(), 2);
assert_eq!(projects[0].path, "a-group/repo");
assert_eq!(projects[0].id, 2);
assert_eq!(projects[1].path, "z-group/repo");
assert_eq!(projects[1].id, 1);
}
#[test]
fn test_scope_filter_sql_composable_in_query() {
// Verify the fragment works when embedded in a full SQL statement.
let project_id = Some(5);
let filter = scope_filter_sql(project_id, "mr");
let sql = format!(
"SELECT * FROM merge_requests mr WHERE mr.state = 'merged'{filter} ORDER BY mr.updated_at"
);
assert!(sql.contains("AND mr.project_id = 5"));
}
}

View File

@@ -0,0 +1,402 @@
//! Session state persistence — save on quit, restore on launch.
//!
//! Enables the TUI to resume where the user left off: current screen,
//! navigation history, filter state, scroll positions.
//!
//! ## File format
//!
//! `session.json` is a versioned JSON blob with a CRC32 checksum appended
//! as the last 8 hex characters. Writes are atomic (tmp → fsync → rename).
//! Corrupt files are quarantined, not deleted.
use std::fs;
use std::io::Write;
use std::path::Path;
use serde::{Deserialize, Serialize};
/// Maximum session file size (1 MB). Files larger than this are rejected.
const MAX_SESSION_SIZE: u64 = 1_024 * 1_024;
/// Current session format version. Bump when the schema changes.
const SESSION_VERSION: u32 = 1;
// ---------------------------------------------------------------------------
// Persisted screen (decoupled from message::Screen)
// ---------------------------------------------------------------------------
/// Lightweight screen identifier for serialization.
///
/// Decoupled from `message::Screen` so session persistence doesn't require
/// `Serialize`/`Deserialize` on core types.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(tag = "kind")]
pub enum PersistedScreen {
Dashboard,
IssueList,
IssueDetail { project_id: i64, iid: i64 },
MrList,
MrDetail { project_id: i64, iid: i64 },
Search,
Timeline,
Who,
Trace,
FileHistory,
Sync,
Stats,
Doctor,
}
// ---------------------------------------------------------------------------
// Session state
// ---------------------------------------------------------------------------
/// Versioned session state persisted to disk.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct SessionState {
/// Format version for migration.
pub version: u32,
/// Screen to restore on launch.
pub current_screen: PersistedScreen,
/// Navigation history (back stack).
pub nav_history: Vec<PersistedScreen>,
/// Per-screen filter text (screen name -> filter string).
pub filters: Vec<(String, String)>,
/// Per-screen scroll offset (screen name -> offset).
pub scroll_offsets: Vec<(String, u16)>,
/// Global scope project path filter (if set).
pub global_scope: Option<String>,
}
impl Default for SessionState {
fn default() -> Self {
Self {
version: SESSION_VERSION,
current_screen: PersistedScreen::Dashboard,
nav_history: Vec::new(),
filters: Vec::new(),
scroll_offsets: Vec::new(),
global_scope: None,
}
}
}
// ---------------------------------------------------------------------------
// Save / Load
// ---------------------------------------------------------------------------
/// Save session state atomically.
///
/// Writes to a temp file, fsyncs, appends CRC32 checksum, then renames
/// over the target path. This prevents partial writes on crash.
pub fn save_session(state: &SessionState, path: &Path) -> Result<(), SessionError> {
// Ensure parent directory exists.
if let Some(parent) = path.parent() {
fs::create_dir_all(parent).map_err(|e| SessionError::Io(e.to_string()))?;
}
let json =
serde_json::to_string_pretty(state).map_err(|e| SessionError::Serialize(e.to_string()))?;
// Check size before writing.
if json.len() as u64 > MAX_SESSION_SIZE {
return Err(SessionError::TooLarge {
size: json.len() as u64,
max: MAX_SESSION_SIZE,
});
}
// Compute CRC32 over the JSON payload.
let checksum = crc32fast::hash(json.as_bytes());
let payload = format!("{json}\n{checksum:08x}");
// Write to temp file, fsync, rename.
let tmp_path = path.with_extension("tmp");
let mut file = fs::File::create(&tmp_path).map_err(|e| SessionError::Io(e.to_string()))?;
file.write_all(payload.as_bytes())
.map_err(|e| SessionError::Io(e.to_string()))?;
file.sync_all()
.map_err(|e| SessionError::Io(e.to_string()))?;
drop(file);
fs::rename(&tmp_path, path).map_err(|e| SessionError::Io(e.to_string()))?;
Ok(())
}
/// Load session state from disk.
///
/// Validates CRC32 checksum. On corruption, quarantines the file and
/// returns `SessionError::Corrupt`.
pub fn load_session(path: &Path) -> Result<SessionState, SessionError> {
if !path.exists() {
return Err(SessionError::NotFound);
}
// Check file size before reading.
let metadata = fs::metadata(path).map_err(|e| SessionError::Io(e.to_string()))?;
if metadata.len() > MAX_SESSION_SIZE {
quarantine(path)?;
return Err(SessionError::TooLarge {
size: metadata.len(),
max: MAX_SESSION_SIZE,
});
}
let raw = fs::read_to_string(path).map_err(|e| SessionError::Io(e.to_string()))?;
// Split: everything before the last newline is the JSON payload; the rest is the checksum.
let (json, checksum_hex) = raw
.rsplit_once('\n')
.ok_or_else(|| SessionError::Corrupt("no checksum separator".into()))?;
// Validate checksum.
let expected = u32::from_str_radix(checksum_hex.trim(), 16)
.map_err(|_| SessionError::Corrupt("invalid checksum hex".into()))?;
let actual = crc32fast::hash(json.as_bytes());
if actual != expected {
quarantine(path)?;
return Err(SessionError::Corrupt(format!(
"CRC32 mismatch: expected {expected:08x}, got {actual:08x}"
)));
}
// Deserialize.
let state: SessionState = serde_json::from_str(json)
.map_err(|e| SessionError::Corrupt(format!("JSON parse error: {e}")))?;
// Version check — future-proof: reject newer versions, accept current.
if state.version > SESSION_VERSION {
return Err(SessionError::Corrupt(format!(
"session version {} is newer than supported ({})",
state.version, SESSION_VERSION
)));
}
Ok(state)
}
/// Move a corrupt session file to `.quarantine/` instead of deleting it.
fn quarantine(path: &Path) -> Result<(), SessionError> {
let quarantine_dir = path.parent().unwrap_or(Path::new(".")).join(".quarantine");
fs::create_dir_all(&quarantine_dir).map_err(|e| SessionError::Io(e.to_string()))?;
let filename = path
.file_name()
.unwrap_or_default()
.to_string_lossy()
.to_string();
let ts = chrono::Utc::now().format("%Y%m%d_%H%M%S");
let quarantine_path = quarantine_dir.join(format!("{filename}.{ts}"));
fs::rename(path, &quarantine_path).map_err(|e| SessionError::Io(e.to_string()))?;
Ok(())
}
// ---------------------------------------------------------------------------
// Errors
// ---------------------------------------------------------------------------
/// Session persistence errors.
#[derive(Debug, Clone, PartialEq)]
pub enum SessionError {
/// Session file not found (first launch).
NotFound,
/// File is corrupt (bad checksum, invalid JSON, etc.).
Corrupt(String),
/// File exceeds size limit.
TooLarge { size: u64, max: u64 },
/// I/O error.
Io(String),
/// Serialization error.
Serialize(String),
}
impl std::fmt::Display for SessionError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
Self::NotFound => write!(f, "session file not found"),
Self::Corrupt(msg) => write!(f, "corrupt session: {msg}"),
Self::TooLarge { size, max } => {
write!(f, "session file too large ({size} bytes, max {max})")
}
Self::Io(msg) => write!(f, "session I/O error: {msg}"),
Self::Serialize(msg) => write!(f, "session serialization error: {msg}"),
}
}
}
impl std::error::Error for SessionError {}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
fn sample_state() -> SessionState {
SessionState {
version: SESSION_VERSION,
current_screen: PersistedScreen::IssueList,
nav_history: vec![PersistedScreen::Dashboard],
filters: vec![("IssueList".into(), "bug".into())],
scroll_offsets: vec![("IssueList".into(), 5)],
global_scope: Some("group/project".into()),
}
}
#[test]
fn test_session_roundtrip() {
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("session.json");
let state = sample_state();
save_session(&state, &path).unwrap();
let loaded = load_session(&path).unwrap();
assert_eq!(state, loaded);
}
#[test]
fn test_session_default_roundtrip() {
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("session.json");
let state = SessionState::default();
save_session(&state, &path).unwrap();
let loaded = load_session(&path).unwrap();
assert_eq!(state, loaded);
}
#[test]
fn test_session_not_found() {
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("nonexistent.json");
let result = load_session(&path);
assert_eq!(result.unwrap_err(), SessionError::NotFound);
}
#[test]
fn test_session_corruption_detected() {
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("session.json");
let state = sample_state();
save_session(&state, &path).unwrap();
// Tamper with the file — modify a byte in the JSON section.
let raw = fs::read_to_string(&path).unwrap();
let tampered = raw.replacen("IssueList", "MrList___", 1);
fs::write(&path, tampered).unwrap();
let result = load_session(&path);
assert!(matches!(result, Err(SessionError::Corrupt(_))));
}
#[test]
fn test_session_corruption_quarantines_file() {
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("session.json");
let state = sample_state();
save_session(&state, &path).unwrap();
// Tamper with the checksum line.
let raw = fs::read_to_string(&path).unwrap();
let tampered = format!("{}\ndeadbeef", raw.rsplit_once('\n').unwrap().0);
fs::write(&path, tampered).unwrap();
let _ = load_session(&path);
// Original file should be gone.
assert!(!path.exists());
// Quarantine directory should contain the file.
let quarantine_dir = dir.path().join(".quarantine");
assert!(quarantine_dir.exists());
let entries: Vec<_> = fs::read_dir(&quarantine_dir).unwrap().collect();
assert_eq!(entries.len(), 1);
}
#[test]
fn test_session_creates_parent_directory() {
let dir = tempfile::tempdir().unwrap();
let nested = dir.path().join("a").join("b").join("session.json");
let state = SessionState::default();
save_session(&state, &nested).unwrap();
assert!(nested.exists());
}
#[test]
fn test_session_persisted_screen_variants() {
let screens = vec![
PersistedScreen::Dashboard,
PersistedScreen::IssueList,
PersistedScreen::IssueDetail {
project_id: 1,
iid: 42,
},
PersistedScreen::MrList,
PersistedScreen::MrDetail {
project_id: 2,
iid: 99,
},
PersistedScreen::Search,
PersistedScreen::Timeline,
PersistedScreen::Who,
PersistedScreen::Trace,
PersistedScreen::FileHistory,
PersistedScreen::Sync,
PersistedScreen::Stats,
PersistedScreen::Doctor,
];
for screen in screens {
let state = SessionState {
current_screen: screen.clone(),
..SessionState::default()
};
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("session.json");
save_session(&state, &path).unwrap();
let loaded = load_session(&path).unwrap();
assert_eq!(state.current_screen, loaded.current_screen);
}
}
#[test]
fn test_session_max_size_enforced() {
let state = SessionState {
filters: (0..100_000)
.map(|i| (format!("key_{i}"), "x".repeat(100)))
.collect(),
..SessionState::default()
};
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("session.json");
let result = save_session(&state, &path);
assert!(matches!(result, Err(SessionError::TooLarge { .. })));
}
#[test]
fn test_session_atomic_write_no_partial() {
let dir = tempfile::tempdir().unwrap();
let path = dir.path().join("session.json");
let tmp_path = path.with_extension("tmp");
let state = sample_state();
save_session(&state, &path).unwrap();
// After save, no tmp file should remain.
assert!(!tmp_path.exists());
assert!(path.exists());
}
}
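The file format described above is "JSON payload, newline, 8-hex-digit checksum", verified on load by recomputing the hash over the payload. A std-only sketch of that framing (the real code uses `crc32fast`; `hash32` below is a stand-in multiplicative hash so the sketch needs no crates, and `frame`/`unframe` are hypothetical names):

```rust
// Stand-in 32-bit hash; the real code hashes with crc32fast::hash.
fn hash32(data: &[u8]) -> u32 {
    data.iter()
        .fold(0u32, |acc, &b| acc.wrapping_mul(31).wrapping_add(u32::from(b)))
}

/// Append the checksum line: payload, newline, 8 lowercase hex digits.
fn frame(json: &str) -> String {
    format!("{json}\n{:08x}", hash32(json.as_bytes()))
}

/// Split on the last newline, parse the hex checksum, and verify it
/// matches the payload. Returns None on any framing or checksum failure.
fn unframe(raw: &str) -> Option<&str> {
    let (json, checksum_hex) = raw.rsplit_once('\n')?;
    let expected = u32::from_str_radix(checksum_hex.trim(), 16).ok()?;
    (hash32(json.as_bytes()) == expected).then_some(json)
}

fn main() {
    let framed = frame("{\"version\":1}");
    assert_eq!(unframe(&framed), Some("{\"version\":1}"));
    // Flipping a byte in the payload breaks verification.
    let tampered = framed.replacen('1', "2", 1);
    assert_eq!(unframe(&tampered), None);
}
```

Splitting on the *last* newline is what lets the JSON payload itself contain newlines (as pretty-printed serde output does).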

View File

@@ -0,0 +1,199 @@
#![allow(dead_code)]
//! Doctor screen state — health check results.
//!
//! Displays a list of environment health checks with pass/warn/fail
//! indicators. Checks are synchronous (config, DB, projects, FTS) —
//! network checks (GitLab auth, Ollama) are not run from the TUI.
// ---------------------------------------------------------------------------
// HealthStatus
// ---------------------------------------------------------------------------
/// Status of a single health check.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum HealthStatus {
Pass,
Warn,
Fail,
}
impl HealthStatus {
/// Human-readable label for display.
#[must_use]
pub fn label(self) -> &'static str {
match self {
Self::Pass => "PASS",
Self::Warn => "WARN",
Self::Fail => "FAIL",
}
}
}
// ---------------------------------------------------------------------------
// HealthCheck
// ---------------------------------------------------------------------------
/// A single health check result for display.
#[derive(Debug, Clone)]
pub struct HealthCheck {
/// Check category name (e.g., "Config", "Database").
pub name: String,
/// Pass/warn/fail status.
pub status: HealthStatus,
/// Human-readable detail (e.g., path, version, count).
pub detail: String,
}
// ---------------------------------------------------------------------------
// DoctorState
// ---------------------------------------------------------------------------
/// State for the Doctor screen.
#[derive(Debug, Default)]
pub struct DoctorState {
/// Health check results (empty until loaded).
pub checks: Vec<HealthCheck>,
/// Whether checks have been loaded at least once.
pub loaded: bool,
}
impl DoctorState {
/// Apply loaded health check results.
pub fn apply_checks(&mut self, checks: Vec<HealthCheck>) {
self.checks = checks;
self.loaded = true;
}
/// Overall status — worst status across all checks.
#[must_use]
pub fn overall_status(&self) -> HealthStatus {
if self.checks.iter().any(|c| c.status == HealthStatus::Fail) {
HealthStatus::Fail
} else if self.checks.iter().any(|c| c.status == HealthStatus::Warn) {
HealthStatus::Warn
} else {
HealthStatus::Pass
}
}
/// Count of checks by status.
#[must_use]
pub fn count_by_status(&self, status: HealthStatus) -> usize {
self.checks.iter().filter(|c| c.status == status).count()
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
fn sample_checks() -> Vec<HealthCheck> {
vec![
HealthCheck {
name: "Config".into(),
status: HealthStatus::Pass,
detail: "/home/user/.config/lore/config.json".into(),
},
HealthCheck {
name: "Database".into(),
status: HealthStatus::Pass,
detail: "schema v12".into(),
},
HealthCheck {
name: "Projects".into(),
status: HealthStatus::Warn,
detail: "0 projects configured".into(),
},
HealthCheck {
name: "FTS Index".into(),
status: HealthStatus::Fail,
detail: "No documents indexed".into(),
},
]
}
#[test]
fn test_default_state() {
let state = DoctorState::default();
assert!(state.checks.is_empty());
assert!(!state.loaded);
}
#[test]
fn test_apply_checks() {
let mut state = DoctorState::default();
state.apply_checks(sample_checks());
assert!(state.loaded);
assert_eq!(state.checks.len(), 4);
}
#[test]
fn test_overall_status_fail_wins() {
let mut state = DoctorState::default();
state.apply_checks(sample_checks());
assert_eq!(state.overall_status(), HealthStatus::Fail);
}
#[test]
fn test_overall_status_all_pass() {
let mut state = DoctorState::default();
state.apply_checks(vec![
HealthCheck {
name: "Config".into(),
status: HealthStatus::Pass,
detail: "ok".into(),
},
HealthCheck {
name: "Database".into(),
status: HealthStatus::Pass,
detail: "ok".into(),
},
]);
assert_eq!(state.overall_status(), HealthStatus::Pass);
}
#[test]
fn test_overall_status_warn_without_fail() {
let mut state = DoctorState::default();
state.apply_checks(vec![
HealthCheck {
name: "Config".into(),
status: HealthStatus::Pass,
detail: "ok".into(),
},
HealthCheck {
name: "Ollama".into(),
status: HealthStatus::Warn,
detail: "not running".into(),
},
]);
assert_eq!(state.overall_status(), HealthStatus::Warn);
}
#[test]
fn test_overall_status_empty_is_pass() {
let state = DoctorState::default();
assert_eq!(state.overall_status(), HealthStatus::Pass);
}
#[test]
fn test_count_by_status() {
let mut state = DoctorState::default();
state.apply_checks(sample_checks());
assert_eq!(state.count_by_status(HealthStatus::Pass), 2);
assert_eq!(state.count_by_status(HealthStatus::Warn), 1);
assert_eq!(state.count_by_status(HealthStatus::Fail), 1);
}
#[test]
fn test_health_status_labels() {
assert_eq!(HealthStatus::Pass.label(), "PASS");
assert_eq!(HealthStatus::Warn.label(), "WARN");
assert_eq!(HealthStatus::Fail.label(), "FAIL");
}
}

View File

@@ -4,6 +4,8 @@
//! Users enter a file path, toggle options (follow renames, merged only,
//! show discussions), and browse a chronological MR list.
use crate::text_width::{next_char_boundary, prev_char_boundary};
// ---------------------------------------------------------------------------
// FileHistoryState
// ---------------------------------------------------------------------------
@@ -225,24 +227,6 @@ impl FileHistoryState {
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

View File

@@ -16,31 +16,40 @@
pub mod bootstrap;
pub mod command_palette;
pub mod dashboard;
pub mod doctor;
pub mod file_history;
pub mod issue_detail;
pub mod issue_list;
pub mod mr_detail;
pub mod mr_list;
pub mod scope_picker;
pub mod search;
pub mod stats;
pub mod sync;
pub mod sync_delta_ledger;
pub mod timeline;
pub mod trace;
pub mod who;
use std::collections::{HashMap, HashSet};
use crate::entity_cache::EntityCache;
use crate::message::Screen;
use crate::view::common::discussion_tree::DiscussionNode;
// Re-export screen states for convenience.
pub use bootstrap::BootstrapState;
pub use command_palette::CommandPaletteState;
pub use dashboard::DashboardState;
pub use doctor::DoctorState;
pub use file_history::FileHistoryState;
pub use issue_detail::IssueDetailState;
pub use issue_list::IssueListState;
pub use mr_detail::MrDetailState;
pub use mr_list::MrListState;
pub use scope_picker::ScopePickerState;
pub use search::SearchState;
pub use stats::StatsState;
pub use sync::SyncState;
pub use timeline::TimelineState;
pub use trace::TraceState;
@@ -158,6 +167,24 @@ pub struct ScopeContext {
pub project_name: Option<String>,
}
// ---------------------------------------------------------------------------
// Cached detail payloads
// ---------------------------------------------------------------------------
/// Cached issue detail payload (metadata + discussions).
#[derive(Debug, Clone)]
pub struct CachedIssuePayload {
pub data: issue_detail::IssueDetailData,
pub discussions: Vec<DiscussionNode>,
}
/// Cached MR detail payload (metadata + discussions).
#[derive(Debug, Clone)]
pub struct CachedMrPayload {
pub data: mr_detail::MrDetailData,
pub discussions: Vec<DiscussionNode>,
}
// ---------------------------------------------------------------------------
// AppState
// ---------------------------------------------------------------------------
@@ -171,17 +198,20 @@ pub struct AppState {
// Per-screen states.
pub bootstrap: BootstrapState,
pub dashboard: DashboardState,
pub doctor: DoctorState,
pub issue_list: IssueListState,
pub issue_detail: IssueDetailState,
pub mr_list: MrListState,
pub mr_detail: MrDetailState,
pub search: SearchState,
pub stats: StatsState,
pub timeline: TimelineState,
pub who: WhoState,
pub trace: TraceState,
pub file_history: FileHistoryState,
pub sync: SyncState,
pub command_palette: CommandPaletteState,
pub scope_picker: ScopePickerState,
// Cross-cutting state.
pub global_scope: ScopeContext,
@@ -189,6 +219,10 @@ pub struct AppState {
pub error_toast: Option<String>,
pub show_help: bool,
pub terminal_size: (u16, u16),
// Entity caches for near-instant detail view reopens.
pub issue_cache: EntityCache<CachedIssuePayload>,
pub mr_cache: EntityCache<CachedMrPayload>,
}
impl AppState {

View File

@@ -0,0 +1,239 @@
//! Scope picker overlay state.
//!
//! The scope picker lets users filter all screens to a specific project.
//! It appears as a modal overlay when the user presses `P`.
use crate::scope::ProjectInfo;
use crate::state::ScopeContext;
/// State for the scope picker overlay.
#[derive(Debug, Default)]
pub struct ScopePickerState {
/// Available projects (populated on open).
pub projects: Vec<ProjectInfo>,
/// Currently highlighted index (0 = "All Projects", 1..N = specific projects).
pub selected_index: usize,
/// Whether the picker overlay is visible.
pub visible: bool,
/// Scroll offset for long project lists.
pub scroll_offset: usize,
}
/// Max visible rows in the picker before scrolling kicks in.
const MAX_VISIBLE_ROWS: usize = 15;
impl ScopePickerState {
/// Open the picker with the given project list.
///
/// Pre-selects the row matching the current scope, or "All Projects" (index 0)
/// if no project filter is active.
pub fn open(&mut self, projects: Vec<ProjectInfo>, current_scope: &ScopeContext) {
self.projects = projects;
self.visible = true;
self.scroll_offset = 0;
// Pre-select the currently active scope.
self.selected_index = match current_scope.project_id {
None => 0, // "All Projects" row
Some(id) => self
.projects
.iter()
.position(|p| p.id == id)
.map_or(0, |i| i + 1), // +1 because index 0 is "All Projects"
};
self.ensure_visible();
}
/// Close the picker without changing scope.
pub fn close(&mut self) {
self.visible = false;
}
/// Move selection up.
pub fn select_prev(&mut self) {
if self.selected_index > 0 {
self.selected_index -= 1;
self.ensure_visible();
}
}
/// Move selection down.
pub fn select_next(&mut self) {
let max_index = self.projects.len(); // 0="All" + N projects
if self.selected_index < max_index {
self.selected_index += 1;
self.ensure_visible();
}
}
/// Confirm the current selection and return the new scope.
#[must_use]
pub fn confirm(&self) -> ScopeContext {
if self.selected_index == 0 {
// "All Projects"
ScopeContext {
project_id: None,
project_name: None,
}
} else if let Some(project) = self.projects.get(self.selected_index - 1) {
ScopeContext {
project_id: Some(project.id),
project_name: Some(project.path.clone()),
}
} else {
// Out-of-bounds — fall back to "All Projects".
ScopeContext {
project_id: None,
project_name: None,
}
}
}
/// Total number of rows (1 for "All" + project count).
#[must_use]
pub fn row_count(&self) -> usize {
1 + self.projects.len()
}
/// Ensure the selected index is within the visible scroll window.
fn ensure_visible(&mut self) {
if self.selected_index < self.scroll_offset {
self.scroll_offset = self.selected_index;
} else if self.selected_index >= self.scroll_offset + MAX_VISIBLE_ROWS {
self.scroll_offset = self.selected_index.saturating_sub(MAX_VISIBLE_ROWS - 1);
}
}
}
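The `ensure_visible` clamp above can be exercised in isolation. A minimal standalone sketch of the same windowing rule (the helper name `clamp_window` is illustrative, not part of the module's API):

```rust
/// Standalone mirror of the scroll-window rule in `ensure_visible`:
/// keep `selected` inside the half-open window [offset, offset + max_rows).
fn clamp_window(selected: usize, mut offset: usize, max_rows: usize) -> usize {
    if selected < offset {
        // Selection moved above the window: snap the window up to it.
        offset = selected;
    } else if selected >= offset + max_rows {
        // Selection moved below the window: scroll just enough to show it.
        offset = selected - (max_rows - 1);
    }
    offset
}

fn main() {
    // Moving down past a 15-row window scrolls minimally: row 20 yields offset 6.
    assert_eq!(clamp_window(20, 0, 15), 6);
    // Moving up above the window snaps the offset to the selection.
    assert_eq!(clamp_window(3, 5, 15), 3);
    // Inside the window: offset is unchanged.
    assert_eq!(clamp_window(7, 5, 15), 5);
    println!("ok");
}
```

Scrolling only when the selection leaves the window keeps the list visually stable during small cursor moves.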
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
fn sample_projects() -> Vec<ProjectInfo> {
vec![
ProjectInfo {
id: 1,
path: "alpha/repo".into(),
},
ProjectInfo {
id: 2,
path: "beta/repo".into(),
},
ProjectInfo {
id: 3,
path: "gamma/repo".into(),
},
]
}
#[test]
fn test_open_no_scope_selects_all() {
let mut picker = ScopePickerState::default();
let scope = ScopeContext::default();
picker.open(sample_projects(), &scope);
assert!(picker.visible);
assert_eq!(picker.selected_index, 0); // "All Projects"
assert_eq!(picker.projects.len(), 3);
}
#[test]
fn test_open_with_scope_preselects_project() {
let mut picker = ScopePickerState::default();
let scope = ScopeContext {
project_id: Some(2),
project_name: Some("beta/repo".into()),
};
picker.open(sample_projects(), &scope);
assert_eq!(picker.selected_index, 2); // index 1 in projects = index 2 in picker
}
#[test]
fn test_select_prev_and_next() {
let mut picker = ScopePickerState::default();
picker.open(sample_projects(), &ScopeContext::default());
picker.select_next();
assert_eq!(picker.selected_index, 1);
picker.select_next();
assert_eq!(picker.selected_index, 2);
picker.select_prev();
assert_eq!(picker.selected_index, 1);
}
#[test]
fn test_select_prev_at_zero_stays() {
let mut picker = ScopePickerState::default();
picker.open(sample_projects(), &ScopeContext::default());
picker.select_prev();
assert_eq!(picker.selected_index, 0);
}
#[test]
fn test_select_next_at_max_stays() {
let mut picker = ScopePickerState::default();
picker.open(sample_projects(), &ScopeContext::default());
// 4 total rows (All + 3 projects), max index = 3
for _ in 0..10 {
picker.select_next();
}
assert_eq!(picker.selected_index, 3);
}
#[test]
fn test_confirm_all_projects() {
let mut picker = ScopePickerState::default();
picker.open(sample_projects(), &ScopeContext::default());
let scope = picker.confirm();
assert!(scope.project_id.is_none());
assert!(scope.project_name.is_none());
}
#[test]
fn test_confirm_specific_project() {
let mut picker = ScopePickerState::default();
picker.open(sample_projects(), &ScopeContext::default());
picker.select_next(); // index 1 = first project (alpha/repo, id=1)
let scope = picker.confirm();
assert_eq!(scope.project_id, Some(1));
assert_eq!(scope.project_name.as_deref(), Some("alpha/repo"));
}
#[test]
fn test_close_hides_picker() {
let mut picker = ScopePickerState::default();
picker.open(sample_projects(), &ScopeContext::default());
assert!(picker.visible);
picker.close();
assert!(!picker.visible);
}
#[test]
fn test_row_count() {
let mut picker = ScopePickerState::default();
picker.open(sample_projects(), &ScopeContext::default());
assert_eq!(picker.row_count(), 4); // "All" + 3 projects
}
#[test]
fn test_open_with_unknown_project_selects_all() {
let mut picker = ScopePickerState::default();
let scope = ScopeContext {
project_id: Some(999), // Not in list
project_name: Some("unknown".into()),
};
picker.open(sample_projects(), &scope);
assert_eq!(picker.selected_index, 0); // Falls back to "All"
}
}

View File

@@ -0,0 +1,153 @@
#![allow(dead_code)]
//! Stats screen state — database and index statistics.
//!
//! Shows entity counts, FTS coverage, embedding coverage, and queue
//! health. Data is produced by synchronous DB queries.
// ---------------------------------------------------------------------------
// StatsData
// ---------------------------------------------------------------------------
/// Database statistics for TUI display.
#[derive(Debug, Clone, Default)]
pub struct StatsData {
/// Total documents in the database.
pub total_documents: i64,
/// Issues stored.
pub issues: i64,
/// Merge requests stored.
pub merge_requests: i64,
/// Discussions stored.
pub discussions: i64,
/// Notes stored.
pub notes: i64,
/// Documents indexed in FTS.
pub fts_indexed: i64,
/// Documents with embeddings.
pub embedded_documents: i64,
/// Total embedding chunks.
pub total_chunks: i64,
/// Embedding coverage percentage (0.0 to 100.0).
pub coverage_pct: f64,
/// Pending queue items (dirty sources).
pub queue_pending: i64,
/// Failed queue items.
pub queue_failed: i64,
}
impl StatsData {
/// FTS coverage percentage relative to total documents.
#[must_use]
pub fn fts_coverage_pct(&self) -> f64 {
if self.total_documents == 0 {
0.0
} else {
(self.fts_indexed as f64 / self.total_documents as f64) * 100.0
}
}
/// Whether there are pending queue items that need processing.
#[must_use]
pub fn has_queue_work(&self) -> bool {
self.queue_pending > 0 || self.queue_failed > 0
}
}
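The zero-guard in `fts_coverage_pct` matters: the integer counts are cast to `f64` only after the empty-database case is handled, so an unsynced database reports 0% rather than NaN. A free-function sketch of the same formula (illustrative, not the module's API):

```rust
/// Mirror of `StatsData::fts_coverage_pct`: indexed / total as a percentage,
/// with 0.0 for an empty database to avoid dividing by zero (which would
/// yield NaN after the float cast).
fn coverage_pct(indexed: i64, total: i64) -> f64 {
    if total == 0 {
        0.0
    } else {
        (indexed as f64 / total as f64) * 100.0
    }
}

fn main() {
    // 450 of 500 documents indexed: 90% coverage.
    assert!((coverage_pct(450, 500) - 90.0).abs() < 1e-9);
    // Empty database: 0%, not NaN.
    assert_eq!(coverage_pct(0, 0), 0.0);
    println!("ok");
}
```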
// ---------------------------------------------------------------------------
// StatsState
// ---------------------------------------------------------------------------
/// State for the Stats screen.
#[derive(Debug, Default)]
pub struct StatsState {
/// Statistics data (None until loaded).
pub data: Option<StatsData>,
/// Whether data has been loaded at least once.
pub loaded: bool,
}
impl StatsState {
/// Apply loaded stats data.
pub fn apply_data(&mut self, data: StatsData) {
self.data = Some(data);
self.loaded = true;
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
fn sample_stats() -> StatsData {
StatsData {
total_documents: 500,
issues: 200,
merge_requests: 150,
discussions: 100,
notes: 50,
fts_indexed: 450,
embedded_documents: 300,
total_chunks: 1200,
coverage_pct: 60.0,
queue_pending: 5,
queue_failed: 1,
}
}
#[test]
fn test_default_state() {
let state = StatsState::default();
assert!(state.data.is_none());
assert!(!state.loaded);
}
#[test]
fn test_apply_data() {
let mut state = StatsState::default();
state.apply_data(sample_stats());
assert!(state.loaded);
assert!(state.data.is_some());
}
#[test]
fn test_fts_coverage_pct() {
let stats = sample_stats();
let pct = stats.fts_coverage_pct();
assert!((pct - 90.0).abs() < 0.01); // 450/500 = 90%
}
#[test]
fn test_fts_coverage_pct_zero_documents() {
let stats = StatsData::default();
assert_eq!(stats.fts_coverage_pct(), 0.0);
}
#[test]
fn test_has_queue_work() {
let stats = sample_stats();
assert!(stats.has_queue_work());
}
#[test]
fn test_no_queue_work() {
let stats = StatsData {
queue_pending: 0,
queue_failed: 0,
..sample_stats()
};
assert!(!stats.has_queue_work());
}
#[test]
fn test_stats_data_default() {
let stats = StatsData::default();
assert_eq!(stats.total_documents, 0);
assert_eq!(stats.issues, 0);
assert_eq!(stats.coverage_pct, 0.0);
}
}

View File

@@ -1,15 +1,593 @@
#![allow(dead_code)]
//! Sync screen state: progress tracking, coalescing, and summary.
//!
//! The sync screen shows real-time progress during data synchronization
//! and transitions to a summary view when complete. A progress coalescer
//! prevents render thrashing from rapid progress updates.
use std::time::Instant;
// ---------------------------------------------------------------------------
// Sync lanes (entity types being synced)
// ---------------------------------------------------------------------------
/// Sync entity types that progress is tracked for.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
pub enum SyncLane {
Issues,
MergeRequests,
Discussions,
Notes,
Events,
Statuses,
}
impl SyncLane {
/// Human-readable label for this lane.
#[must_use]
pub fn label(self) -> &'static str {
match self {
Self::Issues => "Issues",
Self::MergeRequests => "MRs",
Self::Discussions => "Discussions",
Self::Notes => "Notes",
Self::Events => "Events",
Self::Statuses => "Statuses",
}
}
/// All lanes in display order.
pub const ALL: &'static [SyncLane] = &[
Self::Issues,
Self::MergeRequests,
Self::Discussions,
Self::Notes,
Self::Events,
Self::Statuses,
];
}
// ---------------------------------------------------------------------------
// Per-lane progress
// ---------------------------------------------------------------------------
/// Progress for a single sync lane.
#[derive(Debug, Clone, Default)]
pub struct LaneProgress {
/// Current items processed.
pub current: u64,
/// Total items expected (0 = unknown).
pub total: u64,
/// Whether this lane has completed.
pub done: bool,
}
impl LaneProgress {
/// Fraction complete (0.0..=1.0). Returns 0.0 if total is unknown.
#[must_use]
pub fn fraction(&self) -> f64 {
if self.total == 0 {
return 0.0;
}
(self.current as f64 / self.total as f64).clamp(0.0, 1.0)
}
}
// ---------------------------------------------------------------------------
// Sync summary
// ---------------------------------------------------------------------------
/// Per-entity-type change counts after sync completes.
#[derive(Debug, Clone, Default)]
pub struct EntityChangeCounts {
pub new: u64,
pub updated: u64,
}
/// Summary of a completed sync run.
#[derive(Debug, Clone, Default)]
pub struct SyncSummary {
pub issues: EntityChangeCounts,
pub merge_requests: EntityChangeCounts,
pub discussions: EntityChangeCounts,
pub notes: EntityChangeCounts,
pub elapsed_ms: u64,
/// Per-project errors (project path -> error message).
pub project_errors: Vec<(String, String)>,
}
impl SyncSummary {
/// Total number of changes across all entity types.
#[must_use]
pub fn total_changes(&self) -> u64 {
self.issues.new
+ self.issues.updated
+ self.merge_requests.new
+ self.merge_requests.updated
+ self.discussions.new
+ self.discussions.updated
+ self.notes.new
+ self.notes.updated
}
/// Whether any errors occurred during sync.
#[must_use]
pub fn has_errors(&self) -> bool {
!self.project_errors.is_empty()
}
}
// ---------------------------------------------------------------------------
// Sync screen mode
// ---------------------------------------------------------------------------
/// Display mode for the sync screen.
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum SyncScreenMode {
/// Full-screen sync progress with per-lane bars.
#[default]
FullScreen,
/// Compact single-line progress for embedding in Bootstrap screen.
Inline,
}
// ---------------------------------------------------------------------------
// Sync phase
// ---------------------------------------------------------------------------
/// Current phase of the sync operation.
#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub enum SyncPhase {
/// Sync hasn't started yet.
#[default]
Idle,
/// Sync is running.
Running,
/// Sync completed successfully.
Complete,
/// Sync was cancelled by user.
Cancelled,
/// Sync failed with an error.
Failed(String),
}
// ---------------------------------------------------------------------------
// Progress coalescer
// ---------------------------------------------------------------------------
/// Batches rapid progress updates to prevent render thrashing.
///
/// At most one update is emitted per `floor_ms`. Updates arriving faster
/// are coalesced — only the latest value survives.
#[derive(Debug)]
pub struct ProgressCoalescer {
/// Minimum interval between emitted updates.
floor_ms: u64,
/// Timestamp of the last emitted update.
last_emit: Option<Instant>,
/// Number of updates coalesced (dropped) since last emit.
coalesced_count: u64,
}
impl ProgressCoalescer {
/// Create a new coalescer with the given floor interval in milliseconds.
#[must_use]
pub fn new(floor_ms: u64) -> Self {
Self {
floor_ms,
last_emit: None,
coalesced_count: 0,
}
}
/// Default coalescer with 100ms floor (10 updates/second max).
#[must_use]
pub fn default_floor() -> Self {
Self::new(100)
}
/// Should this update be emitted?
///
/// Returns `true` if enough time has elapsed since the last emit.
/// The caller should only render/process the update when this returns true.
pub fn should_emit(&mut self) -> bool {
let now = Instant::now();
match self.last_emit {
None => {
self.last_emit = Some(now);
self.coalesced_count = 0;
true
}
Some(last) => {
let elapsed_ms = now.duration_since(last).as_millis() as u64;
if elapsed_ms >= self.floor_ms {
self.last_emit = Some(now);
self.coalesced_count = 0;
true
} else {
self.coalesced_count += 1;
false
}
}
}
}
/// Number of updates that have been coalesced since the last emit.
#[must_use]
pub fn coalesced_count(&self) -> u64 {
self.coalesced_count
}
/// Reset the coalescer (e.g., when sync restarts).
pub fn reset(&mut self) {
self.last_emit = None;
self.coalesced_count = 0;
}
}
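Typical call-site usage: gate every render on `should_emit` so a burst of progress events costs at most one redraw per floor interval. A self-contained sketch with a simplified stand-in for the coalescer (the real type also tracks a coalesced-update counter):

```rust
use std::time::Instant;

/// Simplified stand-in for `ProgressCoalescer` (floor interval only).
struct Coalescer {
    floor_ms: u64,
    last_emit: Option<Instant>,
}

impl Coalescer {
    fn new(floor_ms: u64) -> Self {
        Self { floor_ms, last_emit: None }
    }

    /// True when at least `floor_ms` have elapsed since the last emit
    /// (or on the very first call); everything in between is dropped.
    fn should_emit(&mut self) -> bool {
        let now = Instant::now();
        let due = self
            .last_emit
            .map_or(true, |last| now.duration_since(last).as_millis() as u64 >= self.floor_ms);
        if due {
            self.last_emit = Some(now);
        }
        due
    }
}

/// Fire `events` back-to-back updates and count how many pass the gate.
fn emitted_in_burst(events: usize, floor_ms: u64) -> usize {
    let mut c = Coalescer::new(floor_ms);
    (0..events).filter(|_| c.should_emit()).count()
}

fn main() {
    // A burst of 1000 near-instantaneous events against a 100ms floor
    // collapses to a handful of emits: the first one, plus any stragglers
    // that happen to straddle the floor boundary.
    let n = emitted_in_burst(1000, 100);
    assert!(n >= 1 && n <= 3);
    println!("emitted {n} of 1000");
}
```

The caller keeps the latest progress values regardless; only the render is skipped, so no data is lost when an update is coalesced.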
// ---------------------------------------------------------------------------
// SyncState
// ---------------------------------------------------------------------------
/// State for the sync progress/summary screen.
#[derive(Debug)]
pub struct SyncState {
/// Current sync phase.
pub phase: SyncPhase,
/// Display mode (full screen vs inline).
pub mode: SyncScreenMode,
/// Per-lane progress (updated during Running phase).
pub lanes: [LaneProgress; 6],
/// Current stage label (e.g., "Fetching issues...").
pub stage: String,
/// Log lines from the sync process.
pub log_lines: Vec<String>,
/// Stream throughput stats (items per second).
pub items_per_sec: f64,
/// Bytes synced.
pub bytes_synced: u64,
/// Total items synced.
pub items_synced: u64,
/// When the current sync run started (for throughput calculation).
pub started_at: Option<Instant>,
/// Progress coalescer for render throttling.
pub coalescer: ProgressCoalescer,
/// Summary (populated after sync completes).
pub summary: Option<SyncSummary>,
/// Scroll offset for log lines view.
pub log_scroll_offset: usize,
}
impl Default for SyncState {
fn default() -> Self {
Self {
phase: SyncPhase::Idle,
mode: SyncScreenMode::FullScreen,
lanes: Default::default(),
stage: String::new(),
log_lines: Vec::new(),
items_per_sec: 0.0,
bytes_synced: 0,
items_synced: 0,
started_at: None,
coalescer: ProgressCoalescer::default_floor(),
summary: None,
log_scroll_offset: 0,
}
}
}
impl SyncState {
/// Reset state for a new sync run.
pub fn start(&mut self) {
self.phase = SyncPhase::Running;
self.lanes = Default::default();
self.stage.clear();
self.log_lines.clear();
self.items_per_sec = 0.0;
self.bytes_synced = 0;
self.items_synced = 0;
self.started_at = Some(Instant::now());
self.coalescer.reset();
self.summary = None;
self.log_scroll_offset = 0;
}
/// Apply a progress update for a specific lane.
pub fn update_progress(&mut self, stage: &str, current: u64, total: u64) {
self.stage = stage.to_string();
// Map stage name to lane index.
if let Some(lane) = self.lane_for_stage(stage) {
lane.current = current;
lane.total = total;
}
}
/// Apply a batch progress increment.
pub fn update_batch(&mut self, stage: &str, batch_size: u64) {
self.stage = stage.to_string();
if let Some(lane) = self.lane_for_stage(stage) {
lane.current += batch_size;
}
}
/// Mark sync as completed with summary.
pub fn complete(&mut self, elapsed_ms: u64) {
self.phase = SyncPhase::Complete;
// Mark all lanes as done.
for lane in &mut self.lanes {
lane.done = true;
}
// Ensure a summary exists and record the elapsed time on it.
if self.summary.is_none() {
self.summary = Some(SyncSummary {
elapsed_ms,
..Default::default()
});
} else if let Some(ref mut summary) = self.summary {
summary.elapsed_ms = elapsed_ms;
}
}
/// Mark sync as cancelled.
pub fn cancel(&mut self) {
self.phase = SyncPhase::Cancelled;
}
/// Mark sync as failed.
pub fn fail(&mut self, error: String) {
self.phase = SyncPhase::Failed(error);
}
/// Add a log line.
pub fn add_log_line(&mut self, line: String) {
self.log_lines.push(line);
// Auto-scroll so the tail of the log (last ~20 lines) stays visible.
if self.log_lines.len() > 1 {
self.log_scroll_offset = self.log_lines.len().saturating_sub(20);
}
}
/// Update stream stats.
pub fn update_stream_stats(&mut self, bytes: u64, items: u64) {
self.bytes_synced = bytes;
self.items_synced = items;
// Compute actual throughput from elapsed time since sync start.
if items > 0
&& let Some(started) = self.started_at
{
let elapsed_secs = started.elapsed().as_secs_f64();
if elapsed_secs > 0.0 {
self.items_per_sec = items as f64 / elapsed_secs;
}
}
}
/// Whether sync is currently running.
#[must_use]
pub fn is_running(&self) -> bool {
self.phase == SyncPhase::Running
}
/// Overall progress fraction (average of all lanes).
#[must_use]
pub fn overall_progress(&self) -> f64 {
let active_lanes: Vec<&LaneProgress> = self.lanes.iter().filter(|l| l.total > 0).collect();
if active_lanes.is_empty() {
return 0.0;
}
let sum: f64 = active_lanes.iter().map(|l| l.fraction()).sum();
sum / active_lanes.len() as f64
}
/// Map a stage name to the corresponding lane.
fn lane_for_stage(&mut self, stage: &str) -> Option<&mut LaneProgress> {
let lower = stage.to_lowercase();
let idx = if lower.contains("issue") {
Some(0)
} else if lower.contains("merge") || lower.contains("mr") {
Some(1)
} else if lower.contains("discussion") {
Some(2)
} else if lower.contains("note") {
Some(3)
} else if lower.contains("event") {
Some(4)
} else if lower.contains("status") {
Some(5)
} else {
None
};
idx.map(|i| &mut self.lanes[i])
}
}
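`overall_progress` averages only lanes with a known total, so a lane that has not yet reported its total (total = 0) does not drag the overall bar toward zero. A standalone sketch of the same averaging rule over `(current, total)` pairs (names illustrative):

```rust
/// Mirror of `SyncState::overall_progress`: average the completion fraction
/// of active lanes; lanes with an unknown total (0) are excluded entirely.
fn overall_progress(lanes: &[(u64, u64)]) -> f64 {
    let fractions: Vec<f64> = lanes
        .iter()
        .filter(|(_, total)| *total > 0)
        .map(|(current, total)| (*current as f64 / *total as f64).clamp(0.0, 1.0))
        .collect();
    if fractions.is_empty() {
        return 0.0;
    }
    fractions.iter().sum::<f64>() / fractions.len() as f64
}

fn main() {
    // Two active lanes at 50% and 25%; the idle (0, 0) lane is ignored: 0.375.
    assert!((overall_progress(&[(50, 100), (25, 100), (0, 0)]) - 0.375).abs() < 1e-9);
    // No active lanes at all: progress is 0.
    assert_eq!(overall_progress(&[]), 0.0);
    println!("ok");
}
```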
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
use std::thread;
use std::time::Duration;
#[test]
fn test_lane_progress_fraction() {
let lane = LaneProgress {
current: 50,
total: 100,
done: false,
};
assert!((lane.fraction() - 0.5).abs() < f64::EPSILON);
}
#[test]
fn test_lane_progress_fraction_zero_total() {
let lane = LaneProgress::default();
assert!((lane.fraction()).abs() < f64::EPSILON);
}
#[test]
fn test_sync_summary_total_changes() {
let summary = SyncSummary {
issues: EntityChangeCounts { new: 5, updated: 3 },
merge_requests: EntityChangeCounts { new: 2, updated: 1 },
..Default::default()
};
assert_eq!(summary.total_changes(), 11);
}
#[test]
fn test_sync_summary_has_errors() {
let mut summary = SyncSummary::default();
assert!(!summary.has_errors());
summary
.project_errors
.push(("grp/repo".into(), "timeout".into()));
assert!(summary.has_errors());
}
#[test]
fn test_sync_state_start_resets() {
let mut state = SyncState {
stage: "old".into(),
phase: SyncPhase::Complete,
..SyncState::default()
};
state.log_lines.push("old log".into());
state.start();
assert_eq!(state.phase, SyncPhase::Running);
assert!(state.stage.is_empty());
assert!(state.log_lines.is_empty());
}
#[test]
fn test_sync_state_update_progress() {
let mut state = SyncState::default();
state.start();
state.update_progress("Fetching issues", 10, 50);
assert_eq!(state.lanes[0].current, 10);
assert_eq!(state.lanes[0].total, 50);
assert_eq!(state.stage, "Fetching issues");
}
#[test]
fn test_sync_state_update_batch() {
let mut state = SyncState::default();
state.start();
state.update_batch("MR processing", 5);
state.update_batch("MR processing", 3);
assert_eq!(state.lanes[1].current, 8); // MR lane
}
#[test]
fn test_sync_state_complete() {
let mut state = SyncState::default();
state.start();
state.complete(5000);
assert_eq!(state.phase, SyncPhase::Complete);
assert!(state.summary.is_some());
assert_eq!(state.summary.as_ref().unwrap().elapsed_ms, 5000);
}
#[test]
fn test_sync_state_overall_progress() {
let mut state = SyncState::default();
state.start();
state.update_progress("issues", 50, 100);
state.update_progress("merge requests", 25, 100);
// Two active lanes: 0.5 and 0.25, average = 0.375
assert!((state.overall_progress() - 0.375).abs() < 0.01);
}
#[test]
fn test_sync_state_overall_progress_no_active_lanes() {
let state = SyncState::default();
assert!((state.overall_progress()).abs() < f64::EPSILON);
}
#[test]
fn test_progress_coalescer_first_always_emits() {
let mut coalescer = ProgressCoalescer::new(100);
assert!(coalescer.should_emit());
}
#[test]
fn test_progress_coalescer_rapid_updates_coalesced() {
let mut coalescer = ProgressCoalescer::new(100);
assert!(coalescer.should_emit()); // First always emits.
// Rapid-fire updates within 100ms should be coalesced.
let mut emitted = 0;
for _ in 0..50 {
if coalescer.should_emit() {
emitted += 1;
}
}
// With ~0ms between calls, at most 0-1 additional emits expected.
assert!(emitted <= 1, "Expected at most 1 emit, got {emitted}");
}
#[test]
fn test_progress_coalescer_emits_after_floor() {
let mut coalescer = ProgressCoalescer::new(50);
assert!(coalescer.should_emit());
// Wait longer than floor.
thread::sleep(Duration::from_millis(60));
assert!(coalescer.should_emit());
}
#[test]
fn test_progress_coalescer_reset() {
let mut coalescer = ProgressCoalescer::new(100);
coalescer.should_emit();
coalescer.should_emit(); // Coalesced.
coalescer.reset();
assert!(coalescer.should_emit()); // Fresh start.
}
#[test]
fn test_sync_lane_labels() {
assert_eq!(SyncLane::Issues.label(), "Issues");
assert_eq!(SyncLane::MergeRequests.label(), "MRs");
assert_eq!(SyncLane::Notes.label(), "Notes");
}
#[test]
fn test_sync_state_add_log_line() {
let mut state = SyncState::default();
state.add_log_line("line 1".into());
state.add_log_line("line 2".into());
assert_eq!(state.log_lines.len(), 2);
assert_eq!(state.log_lines[0], "line 1");
}
#[test]
fn test_sync_state_cancel() {
let mut state = SyncState::default();
state.start();
state.cancel();
assert_eq!(state.phase, SyncPhase::Cancelled);
}
#[test]
fn test_sync_state_fail() {
let mut state = SyncState::default();
state.start();
state.fail("network timeout".into());
assert!(matches!(state.phase, SyncPhase::Failed(_)));
}
}

View File

@@ -0,0 +1,222 @@
#![allow(dead_code)]
//! Sync delta ledger — records entity changes during a sync run.
//!
//! After sync completes, the dashboard and list screens can query the
//! ledger to highlight "new since last sync" items. The ledger is
//! ephemeral (per-run, not persisted to disk).
use std::collections::HashSet;
/// Kind of change that occurred to an entity during sync.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum ChangeKind {
New,
Updated,
}
/// Entity type for the ledger.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum LedgerEntityType {
Issue,
MergeRequest,
Discussion,
Note,
}
/// Per-run record of changed entity IDs during sync.
///
/// Used to highlight newly synced items in list/dashboard views.
#[derive(Debug, Default)]
pub struct SyncDeltaLedger {
pub new_issue_iids: HashSet<i64>,
pub updated_issue_iids: HashSet<i64>,
pub new_mr_iids: HashSet<i64>,
pub updated_mr_iids: HashSet<i64>,
pub new_discussion_count: u64,
pub updated_discussion_count: u64,
pub new_note_count: u64,
}
impl SyncDeltaLedger {
/// Record a change to an entity.
pub fn record_change(&mut self, entity_type: LedgerEntityType, iid: i64, kind: ChangeKind) {
match (entity_type, kind) {
(LedgerEntityType::Issue, ChangeKind::New) => {
self.new_issue_iids.insert(iid);
}
(LedgerEntityType::Issue, ChangeKind::Updated) => {
self.updated_issue_iids.insert(iid);
}
(LedgerEntityType::MergeRequest, ChangeKind::New) => {
self.new_mr_iids.insert(iid);
}
(LedgerEntityType::MergeRequest, ChangeKind::Updated) => {
self.updated_mr_iids.insert(iid);
}
(LedgerEntityType::Discussion, ChangeKind::New) => {
self.new_discussion_count += 1;
}
(LedgerEntityType::Discussion, ChangeKind::Updated) => {
self.updated_discussion_count += 1;
}
(LedgerEntityType::Note, ChangeKind::New) => {
self.new_note_count += 1;
}
(LedgerEntityType::Note, ChangeKind::Updated) => {
// Notes don't have a meaningful "updated" count.
}
}
}
/// Produce a summary of changes from this sync run.
#[must_use]
pub fn summary(&self) -> super::sync::SyncSummary {
use super::sync::{EntityChangeCounts, SyncSummary};
SyncSummary {
issues: EntityChangeCounts {
new: self.new_issue_iids.len() as u64,
updated: self.updated_issue_iids.len() as u64,
},
merge_requests: EntityChangeCounts {
new: self.new_mr_iids.len() as u64,
updated: self.updated_mr_iids.len() as u64,
},
discussions: EntityChangeCounts {
new: self.new_discussion_count,
updated: self.updated_discussion_count,
},
notes: EntityChangeCounts {
new: self.new_note_count,
updated: 0,
},
..Default::default()
}
}
/// Whether the given issue IID was newly added in this sync run.
#[must_use]
pub fn is_new_issue(&self, iid: i64) -> bool {
self.new_issue_iids.contains(&iid)
}
/// Whether the given MR IID was newly added in this sync run.
#[must_use]
pub fn is_new_mr(&self, iid: i64) -> bool {
self.new_mr_iids.contains(&iid)
}
/// Total changes recorded.
#[must_use]
pub fn total_changes(&self) -> u64 {
self.new_issue_iids.len() as u64
+ self.updated_issue_iids.len() as u64
+ self.new_mr_iids.len() as u64
+ self.updated_mr_iids.len() as u64
+ self.new_discussion_count
+ self.updated_discussion_count
+ self.new_note_count
}
/// Clear the ledger for a new sync run.
pub fn clear(&mut self) {
self.new_issue_iids.clear();
self.updated_issue_iids.clear();
self.new_mr_iids.clear();
self.updated_mr_iids.clear();
self.new_discussion_count = 0;
self.updated_discussion_count = 0;
self.new_note_count = 0;
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_record_new_issues() {
let mut ledger = SyncDeltaLedger::default();
ledger.record_change(LedgerEntityType::Issue, 1, ChangeKind::New);
ledger.record_change(LedgerEntityType::Issue, 2, ChangeKind::New);
ledger.record_change(LedgerEntityType::Issue, 3, ChangeKind::Updated);
assert_eq!(ledger.new_issue_iids.len(), 2);
assert_eq!(ledger.updated_issue_iids.len(), 1);
assert!(ledger.is_new_issue(1));
assert!(ledger.is_new_issue(2));
assert!(!ledger.is_new_issue(3));
}
#[test]
fn test_record_new_mrs() {
let mut ledger = SyncDeltaLedger::default();
ledger.record_change(LedgerEntityType::MergeRequest, 10, ChangeKind::New);
ledger.record_change(LedgerEntityType::MergeRequest, 20, ChangeKind::Updated);
assert!(ledger.is_new_mr(10));
assert!(!ledger.is_new_mr(20));
}
#[test]
fn test_summary_counts() {
let mut ledger = SyncDeltaLedger::default();
ledger.record_change(LedgerEntityType::Issue, 1, ChangeKind::New);
ledger.record_change(LedgerEntityType::Issue, 2, ChangeKind::New);
ledger.record_change(LedgerEntityType::Issue, 3, ChangeKind::Updated);
ledger.record_change(LedgerEntityType::MergeRequest, 10, ChangeKind::New);
ledger.record_change(LedgerEntityType::Discussion, 0, ChangeKind::New);
ledger.record_change(LedgerEntityType::Note, 0, ChangeKind::New);
let summary = ledger.summary();
assert_eq!(summary.issues.new, 2);
assert_eq!(summary.issues.updated, 1);
assert_eq!(summary.merge_requests.new, 1);
assert_eq!(summary.discussions.new, 1);
assert_eq!(summary.notes.new, 1);
}
#[test]
fn test_total_changes() {
let mut ledger = SyncDeltaLedger::default();
ledger.record_change(LedgerEntityType::Issue, 1, ChangeKind::New);
ledger.record_change(LedgerEntityType::MergeRequest, 10, ChangeKind::Updated);
ledger.record_change(LedgerEntityType::Note, 0, ChangeKind::New);
assert_eq!(ledger.total_changes(), 3);
}
#[test]
fn test_dedup_same_iid() {
let mut ledger = SyncDeltaLedger::default();
// Recording same IID twice should deduplicate.
ledger.record_change(LedgerEntityType::Issue, 1, ChangeKind::New);
ledger.record_change(LedgerEntityType::Issue, 1, ChangeKind::New);
assert_eq!(ledger.new_issue_iids.len(), 1);
}
#[test]
fn test_clear() {
let mut ledger = SyncDeltaLedger::default();
ledger.record_change(LedgerEntityType::Issue, 1, ChangeKind::New);
ledger.record_change(LedgerEntityType::Note, 0, ChangeKind::New);
ledger.clear();
assert_eq!(ledger.total_changes(), 0);
assert!(ledger.new_issue_iids.is_empty());
}
#[test]
fn test_empty_ledger_summary() {
let ledger = SyncDeltaLedger::default();
let summary = ledger.summary();
assert_eq!(summary.total_changes(), 0);
assert!(!summary.has_errors());
}
}
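
As a standalone illustration of the dedup behavior above (plain std; `MiniLedger` is a hypothetical mirror of `SyncDeltaLedger`, not the real type): IID-keyed entities dedup via `HashSet`, while count-based entities simply increment.

```rust
use std::collections::HashSet;

/// Hypothetical mini-mirror of the ledger: issues dedup by IID,
/// notes are only counted.
#[derive(Default)]
struct MiniLedger {
    new_issue_iids: HashSet<i64>,
    new_note_count: u64,
}

impl MiniLedger {
    fn record_new_issue(&mut self, iid: i64) {
        // Re-inserting the same IID is a no-op, so repeat sync pages
        // don't inflate the count.
        self.new_issue_iids.insert(iid);
    }
    fn record_new_note(&mut self) {
        self.new_note_count += 1;
    }
    fn total_changes(&self) -> u64 {
        self.new_issue_iids.len() as u64 + self.new_note_count
    }
}
```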

View File

@@ -9,6 +9,8 @@ use std::collections::HashSet;
use lore::core::trace::TraceResult;
use crate::text_width::{next_char_boundary, prev_char_boundary};
// ---------------------------------------------------------------------------
// TraceState
// ---------------------------------------------------------------------------
@@ -18,7 +20,7 @@ use lore::core::trace::TraceResult;
pub struct TraceState {
/// User-entered file path (with optional :line suffix).
pub path_input: String,
/// Cursor position within `path_input`.
/// Cursor position within `path_input` (byte offset).
pub path_cursor: usize,
/// Whether the path input field has keyboard focus.
pub path_focused: bool,
@@ -188,48 +190,35 @@ impl TraceState {
// --- Text editing helpers ---
/// Insert a character at the cursor position.
/// Insert a character at the cursor position (byte offset).
pub fn insert_char(&mut self, ch: char) {
let byte_pos = self
.path_input
.char_indices()
.nth(self.path_cursor)
.map_or(self.path_input.len(), |(i, _)| i);
self.path_input.insert(byte_pos, ch);
self.path_cursor += 1;
self.path_input.insert(self.path_cursor, ch);
self.path_cursor += ch.len_utf8();
self.update_autocomplete();
}
/// Delete the character before the cursor.
/// Delete the character before the cursor (byte offset).
pub fn delete_char_before_cursor(&mut self) {
if self.path_cursor == 0 {
return;
}
self.path_cursor -= 1;
let byte_pos = self
.path_input
.char_indices()
.nth(self.path_cursor)
.map_or(self.path_input.len(), |(i, _)| i);
let end = self
.path_input
.char_indices()
.nth(self.path_cursor + 1)
.map_or(self.path_input.len(), |(i, _)| i);
self.path_input.drain(byte_pos..end);
let prev = prev_char_boundary(&self.path_input, self.path_cursor);
self.path_input.drain(prev..self.path_cursor);
self.path_cursor = prev;
self.update_autocomplete();
}
/// Move cursor left.
/// Move cursor left (byte offset).
pub fn cursor_left(&mut self) {
self.path_cursor = self.path_cursor.saturating_sub(1);
if self.path_cursor > 0 {
self.path_cursor = prev_char_boundary(&self.path_input, self.path_cursor);
}
}
/// Move cursor right.
/// Move cursor right (byte offset).
pub fn cursor_right(&mut self) {
let max = self.path_input.chars().count();
if self.path_cursor < max {
self.path_cursor += 1;
if self.path_cursor < self.path_input.len() {
self.path_cursor = next_char_boundary(&self.path_input, self.path_cursor);
}
}
@@ -266,7 +255,7 @@ impl TraceState {
pub fn accept_autocomplete(&mut self) {
if let Some(match_) = self.autocomplete_matches.get(self.autocomplete_index) {
self.path_input = match_.clone();
self.path_cursor = self.path_input.chars().count();
self.path_cursor = self.path_input.len();
self.autocomplete_matches.clear();
}
}

View File

@@ -5,6 +5,8 @@
use lore::core::who_types::WhoResult;
use crate::text_width::{next_char_boundary, prev_char_boundary};
// ---------------------------------------------------------------------------
// WhoMode
// ---------------------------------------------------------------------------
@@ -38,6 +40,18 @@ impl WhoMode {
}
}
/// Abbreviated 3-char label for narrow terminals.
#[must_use]
pub fn short_label(self) -> &'static str {
match self {
Self::Expert => "Exp",
Self::Workload => "Wkl",
Self::Reviews => "Rev",
Self::Active => "Act",
Self::Overlap => "Ovl",
}
}
/// Whether this mode requires a path input.
#[must_use]
pub fn needs_path(self) -> bool {
@@ -291,24 +305,6 @@ impl WhoState {
}
}
/// Find the byte offset of the previous char boundary.
fn prev_char_boundary(s: &str, pos: usize) -> usize {
let mut i = pos.saturating_sub(1);
while i > 0 && !s.is_char_boundary(i) {
i -= 1;
}
i
}
/// Find the byte offset of the next char boundary.
fn next_char_boundary(s: &str, pos: usize) -> usize {
let mut i = pos + 1;
while i < s.len() && !s.is_char_boundary(i) {
i += 1;
}
i
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

View File

@@ -232,6 +232,14 @@ impl TaskSupervisor {
}
}
/// Get the cancel token for an active task, if any.
///
/// Used in tests to verify cooperative cancellation behavior.
#[must_use]
pub fn active_cancel_token(&self, key: &TaskKey) -> Option<Arc<CancelToken>> {
self.active.get(key).map(|h| h.cancel.clone())
}
/// Number of currently active tasks.
#[must_use]
pub fn active_count(&self) -> usize {

View File

@@ -0,0 +1,300 @@
//! Unicode-aware text width measurement and truncation.
//!
//! Terminal cells aren't 1:1 with bytes or even chars. CJK characters
//! occupy 2 cells, emoji ZWJ sequences are single grapheme clusters,
//! and combining marks have zero width. This module provides correct
//! measurement and truncation that never splits a grapheme cluster.
use unicode_segmentation::UnicodeSegmentation;
use unicode_width::UnicodeWidthStr;
/// Measure the display width of a string in terminal cells.
///
/// - ASCII characters: 1 cell each
/// - CJK characters: 2 cells each
/// - Emoji: varies (ZWJ sequences treated as grapheme clusters)
/// - Combining marks: 0 cells
#[must_use]
pub fn measure_display_width(s: &str) -> usize {
UnicodeWidthStr::width(s)
}
/// Truncate a string to fit within `max_width` terminal cells.
///
/// Appends an ellipsis character if truncation occurs. Never splits
/// a grapheme cluster — if appending the next cluster would exceed
/// the limit, it stops before that cluster.
///
/// The ellipsis itself occupies 1 cell of the budget.
#[must_use]
pub fn truncate_display_width(s: &str, max_width: usize) -> String {
let full_width = measure_display_width(s);
if full_width <= max_width {
return s.to_string();
}
if max_width == 0 {
return String::new();
}
// Reserve 1 cell for the ellipsis.
let budget = max_width.saturating_sub(1);
let mut result = String::new();
let mut used = 0;
for grapheme in s.graphemes(true) {
let gw = UnicodeWidthStr::width(grapheme);
if used + gw > budget {
break;
}
result.push_str(grapheme);
used += gw;
}
result.push('\u{2026}'); // ellipsis
result
}
/// Pad a string with trailing spaces to reach `width` terminal cells.
///
/// If the string is already wider than `width`, returns it unchanged.
#[must_use]
pub fn pad_display_width(s: &str, width: usize) -> String {
let current = measure_display_width(s);
if current >= width {
return s.to_string();
}
let padding = width - current;
let mut result = s.to_string();
for _ in 0..padding {
result.push(' ');
}
result
}
// ---------------------------------------------------------------------------
// Cursor / char-boundary helpers
// ---------------------------------------------------------------------------
/// Find the byte offset of the previous char boundary before `pos`.
///
/// Walks backwards from `pos - 1` until a valid char boundary is found.
/// Returns 0 if `pos` is 0 or 1.
pub(crate) fn prev_char_boundary(s: &str, pos: usize) -> usize {
let mut i = pos.saturating_sub(1);
while i > 0 && !s.is_char_boundary(i) {
i -= 1;
}
i
}
/// Find the byte offset of the next char boundary after `pos`.
///
/// Walks forward from `pos + 1` until a valid char boundary is found.
/// Callers should ensure `pos < s.len()`; at or past the end this
/// returns `pos + 1`, which may exceed `s.len()`.
pub(crate) fn next_char_boundary(s: &str, pos: usize) -> usize {
let mut i = pos + 1;
while i < s.len() && !s.is_char_boundary(i) {
i += 1;
}
i
}
/// Convert a byte-offset cursor position to a display-column offset.
///
/// Snaps to the nearest char boundary at or before `cursor`, then counts
/// the characters from the start of the string to that point. This is
/// correct for 1-cell-wide characters; wide (CJK) input would need
/// width-aware counting.
pub(crate) fn cursor_cell_offset(text: &str, cursor: usize) -> u16 {
let mut idx = cursor.min(text.len());
while idx > 0 && !text.is_char_boundary(idx) {
idx -= 1;
}
text[..idx].chars().count().min(u16::MAX as usize) as u16
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
// --- measure_display_width ---
#[test]
fn test_measure_ascii() {
assert_eq!(measure_display_width("Hello"), 5);
}
#[test]
fn test_measure_empty() {
assert_eq!(measure_display_width(""), 0);
}
#[test]
fn test_measure_cjk_width() {
// TDD anchor from the bead spec
assert_eq!(measure_display_width("Hello"), 5);
assert_eq!(measure_display_width("\u{65E5}\u{672C}\u{8A9E}"), 6); // 日本語 = 3 chars * 2 cells
}
#[test]
fn test_measure_mixed_ascii_cjk() {
// "Hi日本" = 2 + 2 + 2 = 6
assert_eq!(measure_display_width("Hi\u{65E5}\u{672C}"), 6);
}
#[test]
fn test_measure_combining_marks() {
// e + combining acute accent = 1 cell (combining mark is 0-width)
assert_eq!(measure_display_width("e\u{0301}"), 1);
}
// --- truncate_display_width ---
#[test]
fn test_truncate_no_truncation_needed() {
assert_eq!(truncate_display_width("Hello", 10), "Hello");
}
#[test]
fn test_truncate_exact_fit() {
assert_eq!(truncate_display_width("Hello", 5), "Hello");
}
#[test]
fn test_truncate_ascii() {
// "Hello World" is 11 cells. Truncate to 8: budget=7 for text + 1 for ellipsis
let result = truncate_display_width("Hello World", 8);
assert_eq!(measure_display_width(&result), 8); // 7 chars + ellipsis
assert!(result.ends_with('\u{2026}'));
}
#[test]
fn test_truncate_cjk_no_split() {
// 日本語テスト = 6 chars * 2 cells = 12 cells
// Truncate to 5: budget=4 for text + 1 for ellipsis
// Can fit 2 CJK chars (4 cells), then ellipsis
let result = truncate_display_width("\u{65E5}\u{672C}\u{8A9E}\u{30C6}\u{30B9}\u{30C8}", 5);
assert!(result.ends_with('\u{2026}'));
assert!(measure_display_width(&result) <= 5);
}
#[test]
fn test_truncate_zero_width() {
assert_eq!(truncate_display_width("Hello", 0), "");
}
#[test]
fn test_truncate_width_one() {
// Only room for the ellipsis itself
let result = truncate_display_width("Hello", 1);
assert_eq!(result, "\u{2026}");
}
#[test]
fn test_truncate_emoji() {
// Family emoji (ZWJ sequence) — should not be split
let family = "\u{1F468}\u{200D}\u{1F469}\u{200D}\u{1F467}"; // 👨‍👩‍👧
let result = truncate_display_width(&format!("{family}Hello"), 3);
// The emoji grapheme cluster is > 1 cell; if it doesn't fit in budget,
// it should be skipped entirely, leaving just the ellipsis or less.
assert!(measure_display_width(&result) <= 3);
}
// --- pad_display_width ---
#[test]
fn test_pad_basic() {
let result = pad_display_width("Hi", 5);
assert_eq!(result, "Hi ");
assert_eq!(measure_display_width(&result), 5);
}
#[test]
fn test_pad_already_wide_enough() {
assert_eq!(pad_display_width("Hello", 3), "Hello");
}
#[test]
fn test_pad_exact_width() {
assert_eq!(pad_display_width("Hello", 5), "Hello");
}
#[test]
fn test_pad_cjk() {
// 日本 = 4 cells, pad to 6 = 2 spaces
let result = pad_display_width("\u{65E5}\u{672C}", 6);
assert_eq!(measure_display_width(&result), 6);
assert!(result.ends_with(" "));
}
// --- prev_char_boundary / next_char_boundary ---
#[test]
fn test_prev_char_boundary_ascii() {
assert_eq!(prev_char_boundary("hello", 3), 2);
assert_eq!(prev_char_boundary("hello", 1), 0);
}
#[test]
fn test_prev_char_boundary_at_zero() {
assert_eq!(prev_char_boundary("hello", 0), 0);
}
#[test]
fn test_prev_char_boundary_multibyte() {
// "aéb" = 'a' (1 byte) + 'é' (2 bytes) + 'b' (1 byte) = 4 bytes total
let s = "a\u{00E9}b";
// Position 3 = start of 'b', prev boundary = 1 (start of 'é')
assert_eq!(prev_char_boundary(s, 3), 1);
// Position 2 = mid-'é' byte, should snap to 1
assert_eq!(prev_char_boundary(s, 2), 1);
}
#[test]
fn test_next_char_boundary_ascii() {
assert_eq!(next_char_boundary("hello", 0), 1);
assert_eq!(next_char_boundary("hello", 3), 4);
}
#[test]
fn test_next_char_boundary_multibyte() {
// "aéb" = 'a' (1 byte) + 'é' (2 bytes) + 'b' (1 byte)
let s = "a\u{00E9}b";
// Position 1 = start of 'é', next boundary = 3 (start of 'b')
assert_eq!(next_char_boundary(s, 1), 3);
}
#[test]
fn test_next_char_boundary_at_end() {
assert_eq!(next_char_boundary("hi", 2), 3);
}
// --- cursor_cell_offset ---
#[test]
fn test_cursor_cell_offset_ascii() {
assert_eq!(cursor_cell_offset("hello", 0), 0);
assert_eq!(cursor_cell_offset("hello", 3), 3);
assert_eq!(cursor_cell_offset("hello", 5), 5);
}
#[test]
fn test_cursor_cell_offset_multibyte() {
// "aéb" = byte offsets: a=0, é=1..3, b=3
let s = "a\u{00E9}b";
assert_eq!(cursor_cell_offset(s, 0), 0); // before 'a'
assert_eq!(cursor_cell_offset(s, 1), 1); // after 'a', before 'é'
assert_eq!(cursor_cell_offset(s, 2), 1); // mid-'é', snaps back to 1
assert_eq!(cursor_cell_offset(s, 3), 2); // after 'é', before 'b'
assert_eq!(cursor_cell_offset(s, 4), 3); // after 'b'
}
#[test]
fn test_cursor_cell_offset_beyond_end() {
assert_eq!(cursor_cell_offset("hi", 99), 2);
}
}
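
To see why byte-offset cursors need boundary snapping, a standalone sketch (std only; `cursor_stops` is a hypothetical helper) walks a cursor left across a multibyte string; a naive `cursor -= 1` would land mid-'é' and panic on the next slice.

```rust
// Same walk-back logic as `prev_char_boundary` above, std only.
fn prev_boundary(s: &str, pos: usize) -> usize {
    let mut i = pos.saturating_sub(1);
    while i > 0 && !s.is_char_boundary(i) {
        i -= 1;
    }
    i
}

fn cursor_stops(s: &str) -> Vec<usize> {
    // Walk a cursor from the end of the string back to 0, recording
    // each byte offset it stops at. Every stop is a valid char start.
    let mut cursor = s.len();
    let mut stops = Vec::new();
    while cursor > 0 {
        cursor = prev_boundary(s, cursor);
        stops.push(cursor);
    }
    stops
}
```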

View File

@@ -9,6 +9,7 @@ use ftui::render::drawing::{BorderChars, Draw};
use ftui::render::frame::Frame;
use crate::state::command_palette::CommandPaletteState;
use crate::text_width::cursor_cell_offset;
use super::{ACCENT, BG_SURFACE, BORDER, TEXT, TEXT_MUTED};
@@ -16,14 +17,6 @@ fn text_cell_width(text: &str) -> u16 {
text.chars().count().min(u16::MAX as usize) as u16
}
fn cursor_cell_offset(query: &str, cursor: usize) -> u16 {
let mut idx = cursor.min(query.len());
while idx > 0 && !query.is_char_boundary(idx) {
idx -= 1;
}
text_cell_width(&query[..idx])
}
// ---------------------------------------------------------------------------
// render_command_palette
// ---------------------------------------------------------------------------

View File

@@ -26,3 +26,18 @@ pub use filter_bar::{FilterBarColors, FilterBarState, render_filter_bar};
pub use help_overlay::render_help_overlay;
pub use loading::render_loading;
pub use status_bar::render_status_bar;
/// Truncate a string to at most `max_chars` characters.
///
/// Counts `char`s, not terminal cells (see `text_width` for width-aware
/// truncation). Appends a Unicode ellipsis `…` when truncating; if
/// `max_chars` is too small for one (<= 1), truncates without it.
#[must_use]
pub fn truncate_str(s: &str, max_chars: usize) -> String {
if s.chars().count() <= max_chars {
s.to_string()
} else if max_chars <= 1 {
s.chars().take(max_chars).collect()
} else {
let truncated: String = s.chars().take(max_chars.saturating_sub(1)).collect();
format!("{truncated}\u{2026}")
}
}
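
The char-vs-cell distinction in the doc comment matters in practice; a quick std-only sketch (`char_truncate` is a hypothetical helper, not part of this module) shows a per-`char` cut stripping a combining accent that a grapheme-aware cut would keep:

```rust
// "e\u{0301}" (e + combining acute) renders as one cell "é" but is
// two `char`s, so a char-count cut can separate the base letter
// from its combining mark.
fn char_truncate(s: &str, max_chars: usize) -> String {
    s.chars().take(max_chars).collect()
}
```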

View File

@@ -0,0 +1,297 @@
//! Doctor screen view — health check results.
//!
//! Renders a vertical list of health checks with colored status
//! indicators (green PASS, yellow WARN, red FAIL).
use ftui::core::geometry::Rect;
use ftui::render::cell::{Cell, PackedRgba};
use ftui::render::drawing::Draw;
use ftui::render::frame::Frame;
use crate::layout::classify_width;
use crate::state::doctor::{DoctorState, HealthStatus};
use super::{TEXT, TEXT_MUTED};
/// Pass green.
const PASS_FG: PackedRgba = PackedRgba::rgb(0x87, 0x9A, 0x39);
/// Warning yellow.
const WARN_FG: PackedRgba = PackedRgba::rgb(0xD0, 0xA2, 0x15);
/// Fail red.
const FAIL_FG: PackedRgba = PackedRgba::rgb(0xD1, 0x4D, 0x41);
// ---------------------------------------------------------------------------
// Public entry point
// ---------------------------------------------------------------------------
/// Render the doctor screen.
pub fn render_doctor(frame: &mut Frame<'_>, state: &DoctorState, area: Rect) {
if area.width < 10 || area.height < 3 {
return;
}
let max_x = area.right();
if !state.loaded {
// Not yet loaded — show centered prompt.
let msg = "Loading health checks...";
let x = area.x + area.width.saturating_sub(msg.len() as u16) / 2;
let y = area.y + area.height / 2;
frame.print_text_clipped(
x,
y,
msg,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
return;
}
// Title.
let overall = state.overall_status();
let title_fg = status_color(overall);
let title = format!("Doctor — {}", overall.label());
frame.print_text_clipped(
area.x + 2,
area.y + 1,
&title,
Cell {
fg: title_fg,
..Cell::default()
},
max_x,
);
// Summary line.
let pass_count = state.count_by_status(HealthStatus::Pass);
let warn_count = state.count_by_status(HealthStatus::Warn);
let fail_count = state.count_by_status(HealthStatus::Fail);
let summary = format!(
"{} passed, {} warnings, {} failed",
pass_count, warn_count, fail_count
);
frame.print_text_clipped(
area.x + 2,
area.y + 2,
&summary,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
// Health check rows — name column adapts to breakpoint.
let bp = classify_width(area.width);
let rows_start_y = area.y + 4;
let name_width = match bp {
ftui::layout::Breakpoint::Xs => 10u16,
ftui::layout::Breakpoint::Sm => 13,
_ => 16,
};
for (i, check) in state.checks.iter().enumerate() {
let y = rows_start_y + i as u16;
if y >= area.bottom().saturating_sub(2) {
break;
}
// Status badge.
let badge = format!("[{}]", check.status.label());
let badge_fg = status_color(check.status);
frame.print_text_clipped(
area.x + 2,
y,
&badge,
Cell {
fg: badge_fg,
..Cell::default()
},
max_x,
);
// Check name.
let name_x = area.x + 2 + 7; // "[PASS] " = 7 chars
let name = format!("{:<width$}", check.name, width = name_width as usize);
frame.print_text_clipped(
name_x,
y,
&name,
Cell {
fg: TEXT,
..Cell::default()
},
max_x,
);
// Detail text.
let detail_x = name_x + name_width;
let max_detail = area.right().saturating_sub(detail_x + 1) as usize;
let detail = if check.detail.len() > max_detail {
// Snap to a char boundary on stable Rust
// (`str::floor_char_boundary` is nightly-only).
let mut end = max_detail.saturating_sub(3).min(check.detail.len());
while end > 0 && !check.detail.is_char_boundary(end) {
end -= 1;
}
format!("{}...", &check.detail[..end])
} else {
check.detail.clone()
};
frame.print_text_clipped(
detail_x,
y,
&detail,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
// Hint at bottom.
let hint_y = area.bottom().saturating_sub(1);
frame.print_text_clipped(
area.x + 2,
hint_y,
"Esc: back | lore doctor (full check)",
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
/// Map health status to a display color.
fn status_color(status: HealthStatus) -> PackedRgba {
match status {
HealthStatus::Pass => PASS_FG,
HealthStatus::Warn => WARN_FG,
HealthStatus::Fail => FAIL_FG,
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
use crate::state::doctor::HealthCheck;
use ftui::render::grapheme_pool::GraphemePool;
macro_rules! with_frame {
($width:expr, $height:expr, |$frame:ident| $body:block) => {{
let mut pool = GraphemePool::new();
let mut $frame = Frame::new($width, $height, &mut pool);
$body
}};
}
fn sample_checks() -> Vec<HealthCheck> {
vec![
HealthCheck {
name: "Config".into(),
status: HealthStatus::Pass,
detail: "/home/user/.config/lore/config.json".into(),
},
HealthCheck {
name: "Database".into(),
status: HealthStatus::Pass,
detail: "schema v12".into(),
},
HealthCheck {
name: "Projects".into(),
status: HealthStatus::Warn,
detail: "0 projects configured".into(),
},
HealthCheck {
name: "FTS Index".into(),
status: HealthStatus::Fail,
detail: "No documents indexed".into(),
},
]
}
#[test]
fn test_render_not_loaded() {
with_frame!(80, 24, |frame| {
let state = DoctorState::default();
let area = frame.bounds();
render_doctor(&mut frame, &state, area);
});
}
#[test]
fn test_render_with_checks() {
with_frame!(80, 24, |frame| {
let mut state = DoctorState::default();
state.apply_checks(sample_checks());
let area = frame.bounds();
render_doctor(&mut frame, &state, area);
});
}
#[test]
fn test_render_all_pass() {
with_frame!(80, 24, |frame| {
let mut state = DoctorState::default();
state.apply_checks(vec![HealthCheck {
name: "Config".into(),
status: HealthStatus::Pass,
detail: "ok".into(),
}]);
let area = frame.bounds();
render_doctor(&mut frame, &state, area);
});
}
#[test]
fn test_render_tiny_terminal() {
with_frame!(8, 2, |frame| {
let mut state = DoctorState::default();
state.apply_checks(sample_checks());
let area = frame.bounds();
render_doctor(&mut frame, &state, area);
// Should not panic.
});
}
#[test]
fn test_render_narrow_terminal_truncates() {
with_frame!(40, 20, |frame| {
let mut state = DoctorState::default();
state.apply_checks(vec![HealthCheck {
name: "Database".into(),
status: HealthStatus::Pass,
detail: "This is a very long detail string that should be truncated".into(),
}]);
let area = frame.bounds();
render_doctor(&mut frame, &state, area);
});
}
#[test]
fn test_render_many_checks_clips() {
with_frame!(80, 10, |frame| {
let mut state = DoctorState::default();
let mut checks = Vec::new();
for i in 0..20 {
checks.push(HealthCheck {
name: format!("Check {i}"),
status: HealthStatus::Pass,
detail: "ok".into(),
});
}
state.apply_checks(checks);
let area = frame.bounds();
render_doctor(&mut frame, &state, area);
// Should clip without panicking.
});
}
}
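
The detail column above must cut long strings at a char boundary; since `str::floor_char_boundary` is nightly-only, a stable stand-in (std only; `truncate_detail` is a hypothetical helper) looks like this:

```rust
/// Truncate `detail` to at most `max_bytes` bytes, snapping down to a
/// char boundary and reserving 3 bytes for the "..." suffix.
fn truncate_detail(detail: &str, max_bytes: usize) -> String {
    if detail.len() <= max_bytes {
        return detail.to_string();
    }
    let mut end = max_bytes.saturating_sub(3).min(detail.len());
    while end > 0 && !detail.is_char_boundary(end) {
        end -= 1; // walk back off a multibyte char's interior
    }
    format!("{}...", &detail[..end])
}
```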

View File

@@ -17,20 +17,21 @@
//! +-----------------------------------+
//! ```
use ftui::layout::Breakpoint;
use ftui::render::cell::{Cell, PackedRgba};
use ftui::render::drawing::Draw;
use ftui::render::frame::Frame;
use super::common::truncate_str;
use super::{ACCENT, BG_SURFACE, TEXT, TEXT_MUTED};
use crate::layout::classify_width;
use crate::state::file_history::{FileHistoryResult, FileHistoryState};
use crate::text_width::cursor_cell_offset;
// ---------------------------------------------------------------------------
// Colors (Flexoki palette)
// Colors (Flexoki palette — screen-specific)
// ---------------------------------------------------------------------------
const TEXT: PackedRgba = PackedRgba::rgb(0xCE, 0xCD, 0xC3); // tx
const TEXT_MUTED: PackedRgba = PackedRgba::rgb(0x87, 0x87, 0x80); // tx-2
const BG_SURFACE: PackedRgba = PackedRgba::rgb(0x28, 0x28, 0x24); // bg-2
const ACCENT: PackedRgba = PackedRgba::rgb(0xDA, 0x70, 0x2C); // orange
const GREEN: PackedRgba = PackedRgba::rgb(0x87, 0x9A, 0x39); // green
const CYAN: PackedRgba = PackedRgba::rgb(0x3A, 0xA9, 0x9F); // cyan
const YELLOW: PackedRgba = PackedRgba::rgb(0xD0, 0xA2, 0x15); // yellow
@@ -51,6 +52,7 @@ pub fn render_file_history(
return; // Terminal too small.
}
let bp = classify_width(area.width);
let x = area.x;
let max_x = area.right();
let width = area.width;
@@ -103,7 +105,7 @@ pub fn render_file_history(
}
// --- MR list ---
render_mr_list(frame, result, state, x, y, width, list_height);
render_mr_list(frame, result, state, x, y, width, list_height, bp);
// --- Hint bar ---
render_hint_bar(frame, x, hint_y, max_x);
@@ -136,7 +138,7 @@ fn render_path_input(frame: &mut Frame<'_>, state: &FileHistoryState, x: u16, y:
// Cursor indicator.
if state.path_focused {
let cursor_x = after_label + state.path_cursor as u16;
let cursor_x = after_label + cursor_cell_offset(&state.path_input, state.path_cursor);
if cursor_x < max_x {
let cursor_cell = Cell {
fg: PackedRgba::rgb(0x10, 0x0F, 0x0F), // dark bg
@@ -246,6 +248,33 @@ fn render_summary(frame: &mut Frame<'_>, result: &FileHistoryResult, x: u16, y:
frame.print_text_clipped(x + 1, y, &summary, style, max_x);
}
/// Responsive truncation widths for file history MR rows.
const fn fh_title_max(bp: Breakpoint) -> usize {
match bp {
Breakpoint::Xs => 15,
Breakpoint::Sm => 25,
Breakpoint::Md => 35,
Breakpoint::Lg | Breakpoint::Xl => 55,
}
}
const fn fh_author_max(bp: Breakpoint) -> usize {
match bp {
Breakpoint::Xs | Breakpoint::Sm => 8,
Breakpoint::Md | Breakpoint::Lg | Breakpoint::Xl => 12,
}
}
const fn fh_disc_max(bp: Breakpoint) -> usize {
match bp {
Breakpoint::Xs => 25,
Breakpoint::Sm => 40,
Breakpoint::Md => 60,
Breakpoint::Lg | Breakpoint::Xl => 80,
}
}
#[allow(clippy::too_many_arguments)]
fn render_mr_list(
frame: &mut Frame<'_>,
result: &FileHistoryResult,
@@ -254,10 +283,14 @@ fn render_mr_list(
start_y: u16,
width: u16,
height: usize,
bp: Breakpoint,
) {
let max_x = x + width;
let offset = state.scroll_offset as usize;
let title_max = fh_title_max(bp);
let author_max = fh_author_max(bp);
for (i, mr) in result
.merge_requests
.iter()
@@ -313,8 +346,8 @@ fn render_mr_list(
};
let after_iid = frame.print_text_clipped(after_icon, y, &iid_str, ref_style, max_x);
// Title (truncated).
let title = truncate_str(&mr.title, 35);
// Title (responsive truncation).
let title = truncate_str(&mr.title, title_max);
let title_style = Cell {
fg: TEXT,
bg: sel_bg,
@@ -322,10 +355,10 @@ fn render_mr_list(
};
let after_title = frame.print_text_clipped(after_iid + 1, y, &title, title_style, max_x);
// @author + change_type
// @author + change_type (responsive author width).
let meta = format!(
"@{} {}",
truncate_str(&mr.author_username, 12),
truncate_str(&mr.author_username, author_max),
mr.change_type
);
let meta_style = Cell {
@@ -337,13 +370,15 @@ fn render_mr_list(
}
// Inline discussion snippets (rendered beneath MRs when toggled on).
// For simplicity, discussions are shown as a separate block after the MR list
// in this initial implementation. Full inline rendering (grouped by MR) is
// a follow-up enhancement.
if state.show_discussions && !result.discussions.is_empty() {
let disc_start_y = start_y + result.merge_requests.len().min(height) as u16;
let remaining = height.saturating_sub(result.merge_requests.len().min(height));
render_discussions(frame, result, x, disc_start_y, max_x, remaining);
let visible_mrs = result
.merge_requests
.len()
.saturating_sub(offset)
.min(height);
let disc_start_y = start_y + visible_mrs as u16;
let remaining = height.saturating_sub(visible_mrs);
render_discussions(frame, result, x, disc_start_y, max_x, remaining, bp);
}
}
@@ -354,11 +389,14 @@ fn render_discussions(
start_y: u16,
max_x: u16,
max_rows: usize,
bp: Breakpoint,
) {
if max_rows == 0 {
return;
}
let disc_max = fh_disc_max(bp);
let sep_style = Cell {
fg: TEXT_MUTED,
..Cell::default()
@@ -388,7 +426,7 @@ fn render_discussions(
author_style,
max_x,
);
let snippet = truncate_str(&disc.body_snippet, 60);
let snippet = truncate_str(&disc.body_snippet, disc_max);
frame.print_text_clipped(after_author, y, &snippet, disc_style, max_x);
}
}
@@ -446,16 +484,6 @@ fn render_hint_bar(frame: &mut Frame<'_>, x: u16, y: u16, max_x: u16) {
frame.print_text_clipped(x + 1, y, hints, style, max_x);
}
/// Truncate a string to at most `max_chars` display characters.
fn truncate_str(s: &str, max_chars: usize) -> String {
if s.chars().count() <= max_chars {
s.to_string()
} else {
let truncated: String = s.chars().take(max_chars.saturating_sub(1)).collect();
format!("{truncated}")
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------

View File

@@ -13,6 +13,9 @@ use ftui::render::drawing::Draw;
use ftui::render::frame::Frame;
use crate::clock::Clock;
use ftui::layout::Breakpoint;
use crate::layout::{classify_width, detail_side_panel};
use crate::safety::{UrlPolicy, sanitize_for_terminal};
use crate::state::issue_detail::{DetailSection, IssueDetailState, IssueMetadata};
use crate::view::common::cross_ref::{CrossRefColors, render_cross_refs};
@@ -99,6 +102,7 @@ pub fn render_issue_detail(
return;
};
let bp = classify_width(area.width);
let max_x = area.x.saturating_add(area.width);
let mut y = area.y;
@@ -106,10 +110,10 @@ pub fn render_issue_detail(
y = render_title_bar(frame, meta, area.x, y, max_x);
// --- Metadata row ---
y = render_metadata_row(frame, meta, area.x, y, max_x);
y = render_metadata_row(frame, meta, bp, area.x, y, max_x);
// --- Optional milestone / due date row ---
if meta.milestone.is_some() || meta.due_date.is_some() {
// --- Optional milestone / due date row (skip on Xs — too narrow) ---
if !matches!(bp, Breakpoint::Xs) && (meta.milestone.is_some() || meta.due_date.is_some()) {
y = render_milestone_row(frame, meta, area.x, y, max_x);
}
@@ -129,7 +133,9 @@ pub fn render_issue_detail(
let disc_count = state.discussions.len();
let xref_count = state.cross_refs.len();
let (desc_h, disc_h, xref_h) = allocate_sections(remaining, desc_lines, disc_count, xref_count);
let wide = detail_side_panel(bp);
let (desc_h, disc_h, xref_h) =
allocate_sections(remaining, desc_lines, disc_count, xref_count, wide);
// --- Description section ---
if desc_h > 0 {
@@ -263,9 +269,12 @@ fn render_title_bar(
}
/// Render the metadata row: `opened | alice | backend, security`
///
/// Responsive: Xs shows state + author only; Sm adds labels; Md+ adds assignees.
fn render_metadata_row(
frame: &mut Frame<'_>,
meta: &IssueMetadata,
bp: Breakpoint,
x: u16,
y: u16,
max_x: u16,
@@ -292,13 +301,15 @@ fn render_metadata_row(
cx = frame.print_text_clipped(cx, y, " | ", muted_style, max_x);
cx = frame.print_text_clipped(cx, y, &meta.author, author_style, max_x);
if !meta.labels.is_empty() {
// Labels: shown on Sm+
if !matches!(bp, Breakpoint::Xs) && !meta.labels.is_empty() {
cx = frame.print_text_clipped(cx, y, " | ", muted_style, max_x);
let labels_text = meta.labels.join(", ");
cx = frame.print_text_clipped(cx, y, &labels_text, muted_style, max_x);
}
if !meta.assignees.is_empty() {
// Assignees: shown on Md+
if !matches!(bp, Breakpoint::Xs | Breakpoint::Sm) && !meta.assignees.is_empty() {
cx = frame.print_text_clipped(cx, y, " | ", muted_style, max_x);
let assignees_text = format!("-> {}", meta.assignees.join(", "));
let _ = frame.print_text_clipped(cx, y, &assignees_text, muted_style, max_x);
@@ -424,11 +435,13 @@ fn count_description_lines(meta: &IssueMetadata, _width: u16) -> usize {
///
/// Priority: description gets min(content, 40%), discussions get most of the
/// remaining space, cross-refs get a fixed portion at the bottom.
/// On wide terminals (`wide = true`), description gets up to 60%.
fn allocate_sections(
available: u16,
desc_lines: usize,
_disc_count: usize,
xref_count: usize,
wide: bool,
) -> (u16, u16, u16) {
if available == 0 {
return (0, 0, 0);
@@ -445,8 +458,9 @@ fn allocate_sections(
let after_xref = total.saturating_sub(xref_need);
// Description: up to 40% of remaining, but at least the content lines.
let desc_max = after_xref * 2 / 5;
// Description: up to 40% on narrow, 60% on wide terminals.
let desc_pct = if wide { 3 } else { 2 }; // numerator over 5
let desc_max = after_xref * desc_pct / 5;
let desc_alloc = desc_lines.min(desc_max).min(after_xref);
// Discussions: everything else.
@@ -584,12 +598,12 @@ mod tests {
#[test]
fn test_allocate_sections_empty() {
assert_eq!(allocate_sections(0, 5, 3, 2), (0, 0, 0));
assert_eq!(allocate_sections(0, 5, 3, 2, false), (0, 0, 0));
}
#[test]
fn test_allocate_sections_balanced() {
let (d, disc, x) = allocate_sections(20, 5, 3, 2);
let (d, disc, x) = allocate_sections(20, 5, 3, 2, false);
assert!(d > 0);
assert!(disc > 0);
assert!(x > 0);
@@ -598,18 +612,28 @@ mod tests {
#[test]
fn test_allocate_sections_no_xrefs() {
let (d, disc, x) = allocate_sections(20, 5, 3, 0);
let (d, disc, x) = allocate_sections(20, 5, 3, 0, false);
assert_eq!(x, 0);
assert_eq!(d + disc, 20);
}
#[test]
fn test_allocate_sections_no_discussions() {
let (d, disc, x) = allocate_sections(20, 5, 0, 2);
let (d, disc, x) = allocate_sections(20, 5, 0, 2, false);
assert!(d > 0);
assert_eq!(d + disc + x, 20);
}
#[test]
fn test_allocate_sections_wide_gives_more_description() {
let (d_narrow, _, _) = allocate_sections(20, 10, 3, 2, false);
let (d_wide, _, _) = allocate_sections(20, 10, 3, 2, true);
assert!(
d_wide >= d_narrow,
"wide should give desc at least as much space"
);
}
#[test]
fn test_count_description_lines() {
let meta = sample_metadata();
@@ -623,4 +647,27 @@ mod tests {
meta.description = String::new();
assert_eq!(count_description_lines(&meta, 80), 0);
}
#[test]
fn test_render_issue_detail_responsive_breakpoints() {
let clock = FakeClock::from_ms(1_700_000_060_000);
// Narrow (Xs=50): milestone row hidden, labels/assignees hidden.
with_frame!(50, 24, |frame| {
let state = sample_state_with_metadata();
render_issue_detail(&mut frame, &state, Rect::new(0, 0, 50, 24), &clock);
});
// Medium (Sm=70): milestone shown, labels shown, assignees hidden.
with_frame!(70, 24, |frame| {
let state = sample_state_with_metadata();
render_issue_detail(&mut frame, &state, Rect::new(0, 0, 70, 24), &clock);
});
// Wide (Lg=130): all metadata, description gets more space.
with_frame!(130, 40, |frame| {
let state = sample_state_with_metadata();
render_issue_detail(&mut frame, &state, Rect::new(0, 0, 130, 40), &clock);
});
}
}
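The allocation priority described in the doc comment (cross-refs take a fixed slice, description gets up to 40% narrow or 60% wide of the remainder, discussions absorb the rest) can be sketched in simplified form. The cross-ref cap of a quarter of the space here is an illustrative assumption; only the 2/5 vs 3/5 description split is taken from the diff:

```rust
/// Simplified sketch of `allocate_sections`: cross-refs are carved out
/// first, description is capped at a breakpoint-dependent fraction of
/// what remains, and discussions get everything left over.
fn allocate(available: u16, desc_lines: usize, xref_count: usize, wide: bool) -> (u16, u16, u16) {
    if available == 0 {
        return (0, 0, 0);
    }
    let total = available as usize;
    // Cross-refs: content plus a header line, capped (assumed) at 25%.
    let xref = if xref_count > 0 { (xref_count + 1).min(total / 4) } else { 0 };
    let after_xref = total - xref;
    // Description: up to 2/5 on narrow terminals, 3/5 on wide ones.
    let desc_pct = if wide { 3 } else { 2 };
    let desc = desc_lines.min(after_xref * desc_pct / 5);
    let disc = after_xref - desc;
    (desc as u16, disc as u16, xref as u16)
}

fn main() {
    // Wide terminals give the description at least as much space.
    let (d_narrow, _, _) = allocate(20, 10, 2, false);
    let (d_wide, _, _) = allocate(20, 10, 2, true);
    assert!(d_wide >= d_narrow);
    // Every row is accounted for.
    let (d, disc, x) = allocate(20, 5, 2, false);
    assert_eq!(d + disc + x, 20);
}
```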

View File

@@ -10,19 +10,22 @@ pub mod bootstrap;
pub mod command_palette;
pub mod common;
pub mod dashboard;
pub mod doctor;
pub mod file_history;
pub mod issue_detail;
pub mod issue_list;
pub mod mr_detail;
pub mod mr_list;
pub mod scope_picker;
pub mod search;
pub mod stats;
pub mod sync;
pub mod timeline;
pub mod trace;
pub mod who;
use ftui::layout::{Constraint, Flex};
use ftui::render::cell::{Cell, PackedRgba};
use ftui::render::drawing::Draw;
use ftui::render::cell::PackedRgba;
use ftui::render::frame::Frame;
use crate::app::LoreApp;
@@ -34,12 +37,16 @@ use common::{
render_breadcrumb, render_error_toast, render_help_overlay, render_loading, render_status_bar,
};
use dashboard::render_dashboard;
use doctor::render_doctor;
use file_history::render_file_history;
use issue_detail::render_issue_detail;
use issue_list::render_issue_list;
use mr_detail::render_mr_detail;
use mr_list::render_mr_list;
use scope_picker::render_scope_picker;
use search::render_search;
use stats::render_stats;
use sync::render_sync;
use timeline::render_timeline;
use trace::render_trace;
use who::render_who;
@@ -56,41 +63,6 @@ const ERROR_BG: PackedRgba = PackedRgba::rgb(0xAF, 0x3A, 0x29); // red
const ERROR_FG: PackedRgba = PackedRgba::rgb(0xCE, 0xCD, 0xC3); // tx
const BORDER: PackedRgba = PackedRgba::rgb(0x87, 0x87, 0x80); // tx-2
fn render_sync_placeholder(frame: &mut Frame<'_>, area: ftui::core::geometry::Rect) {
if area.width < 10 || area.height < 5 {
return;
}
let max_x = area.right();
let center_y = area.y + area.height / 2;
let title = "Sync";
let title_x = area.x + area.width.saturating_sub(title.len() as u16) / 2;
frame.print_text_clipped(
title_x,
center_y.saturating_sub(1),
title,
Cell {
fg: ACCENT,
..Cell::default()
},
max_x,
);
let body = "Run `lore sync` in another terminal.";
let body_x = area.x + area.width.saturating_sub(body.len() as u16) / 2;
frame.print_text_clipped(
body_x,
center_y + 1,
body,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
// ---------------------------------------------------------------------------
// render_screen
// ---------------------------------------------------------------------------
@@ -144,7 +116,7 @@ pub fn render_screen(frame: &mut Frame<'_>, app: &LoreApp) {
if screen == &Screen::Bootstrap {
render_bootstrap(frame, &app.state.bootstrap, content_area);
} else if screen == &Screen::Sync {
render_sync_placeholder(frame, content_area);
render_sync(frame, &app.state.sync, content_area);
} else if screen == &Screen::Dashboard {
render_dashboard(frame, &app.state.dashboard, content_area);
} else if screen == &Screen::IssueList {
@@ -165,6 +137,10 @@ pub fn render_screen(frame: &mut Frame<'_>, app: &LoreApp) {
render_file_history(frame, &app.state.file_history, content_area);
} else if screen == &Screen::Trace {
render_trace(frame, &app.state.trace, content_area);
} else if screen == &Screen::Doctor {
render_doctor(frame, &app.state.doctor, content_area);
} else if screen == &Screen::Stats {
render_stats(frame, &app.state.stats, content_area);
}
// --- Status bar ---
@@ -189,6 +165,14 @@ pub fn render_screen(frame: &mut Frame<'_>, app: &LoreApp) {
// Command palette overlay.
render_command_palette(frame, &app.state.command_palette, bounds);
// Scope picker overlay.
render_scope_picker(
frame,
&app.state.scope_picker,
&app.state.global_scope,
bounds,
);
// Help overlay.
if app.state.show_help {
render_help_overlay(
@@ -277,10 +261,7 @@ mod tests {
let has_content = (20..60u16).any(|x| {
(8..16u16).any(|y| frame.buffer.get(x, y).is_some_and(|cell| !cell.is_empty()))
});
assert!(
has_content,
"Expected sync placeholder content in center area"
);
assert!(has_content, "Expected sync idle content in center area");
});
}
}

View File

@@ -7,11 +7,13 @@
//! changes render immediately while discussions load async.
use ftui::core::geometry::Rect;
use ftui::layout::Breakpoint;
use ftui::render::cell::{Cell, PackedRgba};
use ftui::render::drawing::Draw;
use ftui::render::frame::Frame;
use crate::clock::Clock;
use crate::layout::classify_width;
use crate::safety::{UrlPolicy, sanitize_for_terminal};
use crate::state::mr_detail::{FileChangeType, MrDetailState, MrMetadata, MrTab};
use crate::view::common::cross_ref::{CrossRefColors, render_cross_refs};
@@ -85,6 +87,7 @@ pub fn render_mr_detail(
return;
}
let bp = classify_width(area.width);
let Some(ref meta) = state.metadata else {
return;
};
@@ -96,7 +99,7 @@ pub fn render_mr_detail(
y = render_title_bar(frame, meta, area.x, y, max_x);
// --- Metadata row ---
y = render_metadata_row(frame, meta, area.x, y, max_x);
y = render_metadata_row(frame, meta, area.x, y, max_x, bp);
// --- Tab bar ---
y = render_tab_bar(frame, state, area.x, y, max_x);
@@ -150,12 +153,16 @@ fn render_title_bar(frame: &mut Frame<'_>, meta: &MrMetadata, x: u16, y: u16, ma
}
/// Render `opened | alice | fix-auth -> main | mergeable`.
///
/// On narrow terminals (Xs/Sm), branch names and merge status are hidden
/// to avoid truncating more critical information.
fn render_metadata_row(
frame: &mut Frame<'_>,
meta: &MrMetadata,
x: u16,
y: u16,
max_x: u16,
bp: Breakpoint,
) -> u16 {
let state_fg = match meta.state.as_str() {
"opened" => GREEN,
@@ -179,12 +186,16 @@ fn render_metadata_row(
let mut cx = frame.print_text_clipped(x, y, &meta.state, state_style, max_x);
cx = frame.print_text_clipped(cx, y, " | ", muted, max_x);
cx = frame.print_text_clipped(cx, y, &meta.author, author_style, max_x);
cx = frame.print_text_clipped(cx, y, " | ", muted, max_x);
let branch_text = format!("{} -> {}", meta.source_branch, meta.target_branch);
cx = frame.print_text_clipped(cx, y, &branch_text, muted, max_x);
// Branch names: hidden on Xs/Sm to save horizontal space.
if !matches!(bp, Breakpoint::Xs | Breakpoint::Sm) {
cx = frame.print_text_clipped(cx, y, " | ", muted, max_x);
let branch_text = format!("{} -> {}", meta.source_branch, meta.target_branch);
cx = frame.print_text_clipped(cx, y, &branch_text, muted, max_x);
}
if !meta.merge_status.is_empty() {
// Merge status: hidden on Xs/Sm.
if !matches!(bp, Breakpoint::Xs | Breakpoint::Sm) && !meta.merge_status.is_empty() {
cx = frame.print_text_clipped(cx, y, " | ", muted, max_x);
let status_fg = if meta.merge_status == "mergeable" {
GREEN
@@ -636,4 +647,21 @@ mod tests {
render_mr_detail(&mut frame, &state, Rect::new(0, 0, 80, 24), &clock);
});
}
#[test]
fn test_render_mr_detail_responsive_breakpoints() {
let clock = FakeClock::from_ms(1_700_000_060_000);
// Narrow (Xs=50): branches and merge status hidden.
with_frame!(50, 24, |frame| {
let state = sample_mr_state();
render_mr_detail(&mut frame, &state, Rect::new(0, 0, 50, 24), &clock);
});
// Medium (Md=100): all metadata shown.
with_frame!(100, 24, |frame| {
let state = sample_mr_state();
render_mr_detail(&mut frame, &state, Rect::new(0, 0, 100, 24), &clock);
});
}
}
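The progressive-disclosure pattern used by both metadata rows (Xs shows only state and author; larger breakpoints append more segments) reduces to gating each optional segment on a `matches!` check. A minimal sketch, with the enum and `metadata_row` name assumed for illustration:

```rust
/// Breakpoint tiers, mirroring the names used in the diffs above.
#[derive(Clone, Copy)]
#[allow(dead_code)]
enum Breakpoint {
    Xs,
    Sm,
    Md,
    Lg,
}

/// Sketch of a responsive metadata row: critical fields always render,
/// and wider tiers append extra segments instead of truncating them.
fn metadata_row(bp: Breakpoint, state: &str, author: &str, branches: &str) -> String {
    let mut row = format!("{state} | {author}");
    // Branch names: hidden on Xs/Sm to save horizontal space.
    if !matches!(bp, Breakpoint::Xs | Breakpoint::Sm) {
        row.push_str(&format!(" | {branches}"));
    }
    row
}

fn main() {
    let b = "fix-auth -> main";
    assert_eq!(metadata_row(Breakpoint::Xs, "opened", "alice", b), "opened | alice");
    assert_eq!(
        metadata_row(Breakpoint::Md, "opened", "alice", b),
        "opened | alice | fix-auth -> main"
    );
}
```

Gating by breakpoint rather than measuring remaining columns keeps the layout deterministic per tier, which is what the `test_render_*_responsive_breakpoints` tests exercise.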

View File

@@ -0,0 +1,279 @@
//! Scope picker overlay — modal project filter selector.
//!
//! Renders a centered modal listing all available projects. The user
//! selects "All Projects" or a specific project to filter all screens.
use ftui::core::geometry::Rect;
use ftui::render::cell::{Cell, PackedRgba};
use ftui::render::drawing::{BorderChars, Draw};
use ftui::render::frame::Frame;
use crate::state::ScopeContext;
use crate::state::scope_picker::ScopePickerState;
use super::{ACCENT, BG_SURFACE, BORDER, TEXT, TEXT_MUTED};
/// Selection highlight background.
const SELECTION_BG: PackedRgba = PackedRgba::rgb(0x3A, 0x3A, 0x34);
// ---------------------------------------------------------------------------
// render_scope_picker
// ---------------------------------------------------------------------------
/// Render the scope picker overlay centered on the screen.
///
/// Only renders if `state.visible`. The modal is 50% width, up to 40x20.
pub fn render_scope_picker(
frame: &mut Frame<'_>,
state: &ScopePickerState,
current_scope: &ScopeContext,
area: Rect,
) {
if !state.visible {
return;
}
if area.height < 5 || area.width < 20 {
return;
}
// Modal dimensions.
let modal_width = (area.width / 2).clamp(25, 40);
let row_count = state.row_count();
// +3 for border top, title gap, border bottom.
let modal_height = ((row_count + 3) as u16).clamp(5, 20).min(area.height - 2);
let modal_x = area.x + (area.width.saturating_sub(modal_width)) / 2;
let modal_y = area.y + (area.height.saturating_sub(modal_height)) / 2;
let modal_rect = Rect::new(modal_x, modal_y, modal_width, modal_height);
// Clear background.
let bg_cell = Cell {
fg: TEXT,
bg: BG_SURFACE,
..Cell::default()
};
for y in modal_rect.y..modal_rect.bottom() {
for x in modal_rect.x..modal_rect.right() {
frame.buffer.set(x, y, bg_cell);
}
}
// Border.
let border_cell = Cell {
fg: BORDER,
bg: BG_SURFACE,
..Cell::default()
};
frame.draw_border(modal_rect, BorderChars::ROUNDED, border_cell);
// Title.
let title = " Project Scope ";
let title_x = modal_x + (modal_width.saturating_sub(title.len() as u16)) / 2;
let title_cell = Cell {
fg: ACCENT,
bg: BG_SURFACE,
..Cell::default()
};
frame.print_text_clipped(title_x, modal_y, title, title_cell, modal_rect.right());
// Content area (inside border).
let content_x = modal_x + 1;
let content_max_x = modal_rect.right().saturating_sub(1);
let content_width = content_max_x.saturating_sub(content_x);
let first_row_y = modal_y + 1;
let max_rows = (modal_height.saturating_sub(2)) as usize; // Inside borders.
// Render rows.
let visible_end = (state.scroll_offset + max_rows).min(row_count);
for vis_idx in 0..max_rows {
let row_idx = state.scroll_offset + vis_idx;
if row_idx >= row_count {
break;
}
let y = first_row_y + vis_idx as u16;
let selected = row_idx == state.selected_index;
let bg = if selected { SELECTION_BG } else { BG_SURFACE };
// Fill row background.
if selected {
let sel_cell = Cell {
fg: TEXT,
bg,
..Cell::default()
};
for x in content_x..content_max_x {
frame.buffer.set(x, y, sel_cell);
}
}
// Row content.
let (label, is_active) = if row_idx == 0 {
let active = current_scope.project_id.is_none();
("All Projects".to_string(), active)
} else {
let project = &state.projects[row_idx - 1];
let active = current_scope.project_id == Some(project.id);
(project.path.clone(), active)
};
// Active indicator.
let prefix = if is_active { "> " } else { "  " };
let fg = if is_active { ACCENT } else { TEXT };
let cell = Cell {
fg,
bg,
..Cell::default()
};
// Truncate label to fit.
let max_label_len = content_width.saturating_sub(2) as usize; // 2 for prefix
let display = if label.len() > max_label_len {
format!(
"{prefix}{}...",
&label[..label.floor_char_boundary(max_label_len.saturating_sub(3))]
)
} else {
format!("{prefix}{label}")
};
frame.print_text_clipped(content_x, y, &display, cell, content_max_x);
}
// Scroll indicators.
if state.scroll_offset > 0 {
let arrow_cell = Cell {
fg: TEXT_MUTED,
bg: BG_SURFACE,
..Cell::default()
};
frame.print_text_clipped(
content_max_x.saturating_sub(1),
first_row_y,
"^",
arrow_cell,
modal_rect.right(),
);
}
if visible_end < row_count {
let arrow_cell = Cell {
fg: TEXT_MUTED,
bg: BG_SURFACE,
..Cell::default()
};
let bottom_y = first_row_y + (max_rows as u16).saturating_sub(1);
frame.print_text_clipped(
content_max_x.saturating_sub(1),
bottom_y,
"v",
arrow_cell,
modal_rect.right(),
);
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
use crate::scope::ProjectInfo;
use ftui::render::grapheme_pool::GraphemePool;
macro_rules! with_frame {
($width:expr, $height:expr, |$frame:ident| $body:block) => {{
let mut pool = GraphemePool::new();
let mut $frame = Frame::new($width, $height, &mut pool);
$body
}};
}
fn sample_projects() -> Vec<ProjectInfo> {
vec![
ProjectInfo {
id: 1,
path: "alpha/repo".into(),
},
ProjectInfo {
id: 2,
path: "beta/repo".into(),
},
]
}
#[test]
fn test_render_hidden_noop() {
with_frame!(80, 24, |frame| {
let state = ScopePickerState::default();
let scope = ScopeContext::default();
let area = frame.bounds();
render_scope_picker(&mut frame, &state, &scope, area);
// Should not panic.
});
}
#[test]
fn test_render_visible_no_panic() {
with_frame!(80, 24, |frame| {
let mut state = ScopePickerState::default();
let scope = ScopeContext::default();
state.open(sample_projects(), &scope);
let area = frame.bounds();
render_scope_picker(&mut frame, &state, &scope, area);
});
}
#[test]
fn test_render_with_selection() {
with_frame!(80, 24, |frame| {
let mut state = ScopePickerState::default();
let scope = ScopeContext::default();
state.open(sample_projects(), &scope);
state.select_next(); // Move to first project
let area = frame.bounds();
render_scope_picker(&mut frame, &state, &scope, area);
});
}
#[test]
fn test_render_tiny_terminal_noop() {
with_frame!(15, 4, |frame| {
let mut state = ScopePickerState::default();
let scope = ScopeContext::default();
state.open(sample_projects(), &scope);
let area = frame.bounds();
render_scope_picker(&mut frame, &state, &scope, area);
// Should not panic on tiny terminals.
});
}
#[test]
fn test_render_active_scope_highlighted() {
with_frame!(80, 24, |frame| {
let mut state = ScopePickerState::default();
let scope = ScopeContext {
project_id: Some(2),
project_name: Some("beta/repo".into()),
};
state.open(sample_projects(), &scope);
let area = frame.bounds();
render_scope_picker(&mut frame, &state, &scope, area);
});
}
#[test]
fn test_render_empty_project_list() {
with_frame!(80, 24, |frame| {
let mut state = ScopePickerState::default();
let scope = ScopeContext::default();
state.open(vec![], &scope);
let area = frame.bounds();
render_scope_picker(&mut frame, &state, &scope, area);
// Only "All Projects" row, should not panic.
});
}
}
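The modal geometry in `render_scope_picker` (half the screen width clamped to [25, 40], height sized to the row count plus three lines of chrome, both centered) can be sketched as a pure function. The `modal_rect` name and the fully saturating arithmetic are assumptions; the clamp bounds come from the diff:

```rust
/// Sketch of the scope-picker modal geometry: width is half the area
/// clamped to [25, 40]; height is rows + 3 (top border, title gap,
/// bottom border) clamped to [5, 20] and kept inside the area.
fn modal_rect(area_w: u16, area_h: u16, row_count: usize) -> (u16, u16, u16, u16) {
    let w = (area_w / 2).clamp(25, 40);
    let h = ((row_count + 3) as u16)
        .clamp(5, 20)
        .min(area_h.saturating_sub(2));
    // Centered, with saturating subtraction so tiny areas never underflow.
    let x = area_w.saturating_sub(w) / 2;
    let y = area_h.saturating_sub(h) / 2;
    (x, y, w, h)
}

fn main() {
    let (x, y, w, h) = modal_rect(80, 24, 5);
    assert_eq!(w, 40); // 80 / 2, at the upper clamp
    assert_eq!(h, 8); // 5 rows + 3 lines of chrome
    assert_eq!(x, 20); // (80 - 40) / 2
    assert_eq!(y, 8); // (24 - 8) / 2
}
```

The actual function additionally returns early when the area is under 20x5, so the clamp lower bounds can never exceed the available space.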

View File

@@ -18,24 +18,12 @@
use ftui::core::geometry::Rect;
use ftui::render::cell::Cell;
use ftui::render::drawing::Draw;
/// Count display-width columns for a string (char count, not byte count).
fn text_cell_width(text: &str) -> u16 {
text.chars().count().min(u16::MAX as usize) as u16
}
/// Convert a byte-offset cursor position to a display-column offset.
fn cursor_cell_offset(query: &str, cursor: usize) -> u16 {
let mut idx = cursor.min(query.len());
while idx > 0 && !query.is_char_boundary(idx) {
idx -= 1;
}
text_cell_width(&query[..idx])
}
use ftui::render::frame::Frame;
use crate::layout::{classify_width, search_show_project};
use crate::message::EntityKind;
use crate::state::search::SearchState;
use crate::text_width::cursor_cell_offset;
use super::{ACCENT, BG_SURFACE, BORDER, TEXT, TEXT_MUTED};
@@ -52,6 +40,8 @@ pub fn render_search(frame: &mut Frame<'_>, state: &SearchState, area: Rect) {
return;
}
let bp = classify_width(area.width);
let show_project = search_show_project(bp);
let mut y = area.y;
let max_x = area.right();
@@ -112,7 +102,15 @@ pub fn render_search(frame: &mut Frame<'_>, state: &SearchState, area: Rect) {
if state.results.is_empty() {
render_empty_state(frame, state, area.x + 1, y, max_x);
} else {
render_result_list(frame, state, area.x, y, area.width, list_height);
render_result_list(
frame,
state,
area.x,
y,
area.width,
list_height,
show_project,
);
}
// -- Bottom hint bar -----------------------------------------------------
@@ -241,6 +239,7 @@ fn render_result_list(
start_y: u16,
width: u16,
list_height: usize,
show_project: bool,
) {
let max_x = x + width;
@@ -307,11 +306,13 @@ fn render_result_list(
let after_title =
frame.print_text_clipped(after_iid + 1, y, &result.title, label_style, max_x);
// Project path (right-aligned).
let path_width = result.project_path.len() as u16 + 2;
let path_x = max_x.saturating_sub(path_width);
if path_x > after_title + 1 {
frame.print_text_clipped(path_x, y, &result.project_path, detail_style, max_x);
// Project path (right-aligned, hidden on narrow terminals).
if show_project {
let path_width = result.project_path.len() as u16 + 2;
let path_x = max_x.saturating_sub(path_width);
if path_x > after_title + 1 {
frame.print_text_clipped(path_x, y, &result.project_path, detail_style, max_x);
}
}
}
@@ -489,4 +490,23 @@ mod tests {
render_search(&mut frame, &state, Rect::new(0, 0, 80, 10));
});
}
#[test]
fn test_render_search_responsive_breakpoints() {
// Narrow (Xs=50): project path hidden.
with_frame!(50, 24, |frame| {
let mut state = SearchState::default();
state.enter(fts_caps());
state.results = sample_results(3);
render_search(&mut frame, &state, Rect::new(0, 0, 50, 24));
});
// Standard (Md=100): project path shown.
with_frame!(100, 24, |frame| {
let mut state = SearchState::default();
state.enter(fts_caps());
state.results = sample_results(3);
render_search(&mut frame, &state, Rect::new(0, 0, 100, 24));
});
}
}

View File

@@ -0,0 +1,457 @@
//! Stats screen view — database and index statistics.
//!
//! Renders entity counts, FTS/embedding coverage, and queue health
//! as a simple table layout.
use ftui::core::geometry::Rect;
use ftui::render::cell::{Cell, PackedRgba};
use ftui::render::drawing::Draw;
use ftui::render::frame::Frame;
use crate::layout::classify_width;
use crate::state::stats::StatsState;
use super::{ACCENT, TEXT, TEXT_MUTED};
/// Success green (for good coverage).
const GOOD_FG: PackedRgba = PackedRgba::rgb(0x87, 0x9A, 0x39);
/// Warning yellow (for partial coverage).
const WARN_FG: PackedRgba = PackedRgba::rgb(0xD0, 0xA2, 0x15);
// ---------------------------------------------------------------------------
// Public entry point
// ---------------------------------------------------------------------------
/// Render the stats screen.
pub fn render_stats(frame: &mut Frame<'_>, state: &StatsState, area: Rect) {
if area.width < 10 || area.height < 3 {
return;
}
let max_x = area.right();
if !state.loaded {
let msg = "Loading statistics...";
let x = area.x + area.width.saturating_sub(msg.len() as u16) / 2;
let y = area.y + area.height / 2;
frame.print_text_clipped(
x,
y,
msg,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
return;
}
let data = match &state.data {
Some(d) => d,
None => return,
};
// Title.
frame.print_text_clipped(
area.x + 2,
area.y + 1,
"Database Statistics",
Cell {
fg: ACCENT,
..Cell::default()
},
max_x,
);
let bp = classify_width(area.width);
let mut y = area.y + 3;
let label_width = match bp {
ftui::layout::Breakpoint::Xs => 16u16,
ftui::layout::Breakpoint::Sm => 18,
_ => 22,
};
let value_x = area.x + 2 + label_width;
// --- Entity Counts section ---
if y < area.bottom().saturating_sub(2) {
frame.print_text_clipped(
area.x + 2,
y,
"Entities",
Cell {
fg: TEXT,
..Cell::default()
},
max_x,
);
y += 1;
}
let entity_rows: [(&str, i64); 4] = [
(" Issues", data.issues),
(" Merge Requests", data.merge_requests),
(" Discussions", data.discussions),
(" Notes", data.notes),
];
for (label, count) in &entity_rows {
if y >= area.bottom().saturating_sub(2) {
break;
}
render_stat_row(
frame,
area.x + 2,
y,
label,
&format_count(*count),
label_width,
max_x,
);
y += 1;
}
// Total.
if y < area.bottom().saturating_sub(2) {
let total = data.issues + data.merge_requests + data.discussions + data.notes;
render_stat_row(
frame,
area.x + 2,
y,
" Total",
&format_count(total),
label_width,
max_x,
);
y += 1;
}
y += 1; // Blank line.
// --- Index Coverage section ---
if y < area.bottom().saturating_sub(2) {
frame.print_text_clipped(
area.x + 2,
y,
"Index Coverage",
Cell {
fg: TEXT,
..Cell::default()
},
max_x,
);
y += 1;
}
// FTS.
if y < area.bottom().saturating_sub(2) {
let fts_pct = data.fts_coverage_pct();
let fts_text = format!("{} ({:.0}%)", format_count(data.fts_indexed), fts_pct);
let fg = coverage_color(fts_pct);
frame.print_text_clipped(
area.x + 2,
y,
&format!("{:<width$}", " FTS Indexed", width = label_width as usize),
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
value_x,
);
frame.print_text_clipped(
value_x,
y,
&fts_text,
Cell {
fg,
..Cell::default()
},
max_x,
);
y += 1;
}
// Embeddings.
if y < area.bottom().saturating_sub(2) {
let embed_text = format!(
"{} ({:.0}%)",
format_count(data.embedded_documents),
data.coverage_pct
);
let fg = coverage_color(data.coverage_pct);
frame.print_text_clipped(
area.x + 2,
y,
&format!("{:<width$}", " Embeddings", width = label_width as usize),
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
value_x,
);
frame.print_text_clipped(
value_x,
y,
&embed_text,
Cell {
fg,
..Cell::default()
},
max_x,
);
y += 1;
}
// Chunks.
if y < area.bottom().saturating_sub(2) {
render_stat_row(
frame,
area.x + 2,
y,
" Chunks",
&format_count(data.total_chunks),
label_width,
max_x,
);
y += 1;
}
y += 1; // Blank line.
// --- Queue section ---
if data.has_queue_work() && y < area.bottom().saturating_sub(2) {
frame.print_text_clipped(
area.x + 2,
y,
"Queue",
Cell {
fg: TEXT,
..Cell::default()
},
max_x,
);
y += 1;
if y < area.bottom().saturating_sub(2) {
render_stat_row(
frame,
area.x + 2,
y,
" Pending",
&format_count(data.queue_pending),
label_width,
max_x,
);
y += 1;
}
if data.queue_failed > 0 && y < area.bottom().saturating_sub(2) {
let failed_cell = Cell {
fg: WARN_FG,
..Cell::default()
};
frame.print_text_clipped(
area.x + 2,
y,
&format!("{:<width$}", " Failed", width = label_width as usize),
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
value_x,
);
frame.print_text_clipped(
value_x,
y,
&format_count(data.queue_failed),
failed_cell,
max_x,
);
}
}
// Hint at bottom.
let hint_y = area.bottom().saturating_sub(1);
frame.print_text_clipped(
area.x + 2,
hint_y,
"Esc: back | lore stats (full report)",
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
/// Render a label + value row.
fn render_stat_row(
frame: &mut Frame<'_>,
x: u16,
y: u16,
label: &str,
value: &str,
label_width: u16,
max_x: u16,
) {
let value_x = x + label_width;
frame.print_text_clipped(
x,
y,
&format!("{label:<width$}", width = label_width as usize),
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
value_x,
);
frame.print_text_clipped(
value_x,
y,
value,
Cell {
fg: TEXT,
..Cell::default()
},
max_x,
);
}
/// Color based on coverage percentage.
fn coverage_color(pct: f64) -> PackedRgba {
if pct >= 90.0 {
GOOD_FG
} else if pct >= 50.0 {
WARN_FG
} else {
TEXT
}
}
/// Format a count with comma separators for readability.
fn format_count(n: i64) -> String {
if n < 1_000 {
return n.to_string();
}
let s = n.to_string();
let mut result = String::with_capacity(s.len() + s.len() / 3);
for (i, c) in s.chars().enumerate() {
if i > 0 && (s.len() - i).is_multiple_of(3) {
result.push(',');
}
result.push(c);
}
result
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
use crate::state::stats::StatsData;
use ftui::render::grapheme_pool::GraphemePool;
macro_rules! with_frame {
($width:expr, $height:expr, |$frame:ident| $body:block) => {{
let mut pool = GraphemePool::new();
let mut $frame = Frame::new($width, $height, &mut pool);
$body
}};
}
fn sample_data() -> StatsData {
StatsData {
total_documents: 500,
issues: 200,
merge_requests: 150,
discussions: 100,
notes: 50,
fts_indexed: 450,
embedded_documents: 300,
total_chunks: 1200,
coverage_pct: 60.0,
queue_pending: 5,
queue_failed: 1,
}
}
#[test]
fn test_render_not_loaded() {
with_frame!(80, 24, |frame| {
let state = StatsState::default();
let area = frame.bounds();
render_stats(&mut frame, &state, area);
});
}
#[test]
fn test_render_with_data() {
with_frame!(80, 24, |frame| {
let mut state = StatsState::default();
state.apply_data(sample_data());
let area = frame.bounds();
render_stats(&mut frame, &state, area);
});
}
#[test]
fn test_render_no_queue_work() {
with_frame!(80, 24, |frame| {
let mut state = StatsState::default();
state.apply_data(StatsData {
queue_pending: 0,
queue_failed: 0,
..sample_data()
});
let area = frame.bounds();
render_stats(&mut frame, &state, area);
});
}
#[test]
fn test_render_tiny_terminal() {
with_frame!(8, 2, |frame| {
let mut state = StatsState::default();
state.apply_data(sample_data());
let area = frame.bounds();
render_stats(&mut frame, &state, area);
});
}
#[test]
fn test_render_short_terminal() {
with_frame!(80, 8, |frame| {
let mut state = StatsState::default();
state.apply_data(sample_data());
let area = frame.bounds();
render_stats(&mut frame, &state, area);
// Should clip without panicking.
});
}
#[test]
fn test_format_count_small() {
assert_eq!(format_count(0), "0");
assert_eq!(format_count(42), "42");
assert_eq!(format_count(999), "999");
}
#[test]
fn test_format_count_thousands() {
assert_eq!(format_count(1_000), "1,000");
assert_eq!(format_count(12_345), "12,345");
assert_eq!(format_count(1_234_567), "1,234,567");
}
#[test]
fn test_coverage_color_thresholds() {
assert_eq!(coverage_color(100.0), GOOD_FG);
assert_eq!(coverage_color(90.0), GOOD_FG);
assert_eq!(coverage_color(89.9), WARN_FG);
assert_eq!(coverage_color(50.0), WARN_FG);
assert_eq!(coverage_color(49.9), TEXT);
}
}

View File

@@ -0,0 +1,587 @@
//! Sync screen view — progress bars, summary table, and log.
//!
//! Renders the sync screen in different phases:
//! - **Idle**: prompt to start sync
//! - **Running**: per-lane progress bars with throughput stats
//! - **Complete**: summary table with change counts
//! - **Cancelled/Failed**: status message with retry hint
use ftui::core::geometry::Rect;
use ftui::render::cell::{Cell, PackedRgba};
use ftui::render::drawing::Draw;
use ftui::render::frame::Frame;
use crate::layout::{classify_width, sync_progress_bar_width};
use crate::state::sync::{SyncLane, SyncPhase, SyncState};
use super::{ACCENT, TEXT, TEXT_MUTED};
/// Progress bar fill color.
const PROGRESS_FG: PackedRgba = PackedRgba::rgb(0xDA, 0x70, 0x2C); // orange
/// Progress bar background.
const PROGRESS_BG: PackedRgba = PackedRgba::rgb(0x34, 0x34, 0x30);
/// Success green.
const SUCCESS_FG: PackedRgba = PackedRgba::rgb(0x87, 0x9A, 0x39);
/// Error red.
const ERROR_FG: PackedRgba = PackedRgba::rgb(0xD1, 0x4D, 0x41);
// ---------------------------------------------------------------------------
// Public entry point
// ---------------------------------------------------------------------------
/// Render the sync screen.
pub fn render_sync(frame: &mut Frame<'_>, state: &SyncState, area: Rect) {
if area.width < 10 || area.height < 3 {
return;
}
match &state.phase {
SyncPhase::Idle => render_idle(frame, area),
SyncPhase::Running => render_running(frame, state, area),
SyncPhase::Complete => render_summary(frame, state, area),
SyncPhase::Cancelled => render_cancelled(frame, area),
SyncPhase::Failed(err) => render_failed(frame, area, err),
}
}
// ---------------------------------------------------------------------------
// Idle view
// ---------------------------------------------------------------------------
fn render_idle(frame: &mut Frame<'_>, area: Rect) {
let max_x = area.right();
let center_y = area.y + area.height / 2;
let title = "Sync";
let title_x = area.x + area.width.saturating_sub(title.len() as u16) / 2;
frame.print_text_clipped(
title_x,
center_y.saturating_sub(1),
title,
Cell {
fg: ACCENT,
..Cell::default()
},
max_x,
);
let hint = "Press Enter to start sync, or run `lore sync` externally.";
let hint_x = area.x + area.width.saturating_sub(hint.len() as u16) / 2;
frame.print_text_clipped(
hint_x,
center_y + 1,
hint,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
// ---------------------------------------------------------------------------
// Running view — per-lane progress bars
// ---------------------------------------------------------------------------
fn render_running(frame: &mut Frame<'_>, state: &SyncState, area: Rect) {
let max_x = area.right();
// Title.
let title = "Syncing...";
let title_x = area.x + 2;
frame.print_text_clipped(
title_x,
area.y + 1,
title,
Cell {
fg: ACCENT,
..Cell::default()
},
max_x,
);
// Stage label.
if !state.stage.is_empty() {
let stage_cell = Cell {
fg: TEXT_MUTED,
..Cell::default()
};
frame.print_text_clipped(title_x, area.y + 2, &state.stage, stage_cell, max_x);
}
// Per-lane progress bars.
let bp = classify_width(area.width);
let max_bar = sync_progress_bar_width(bp);
let bar_start_y = area.y + 4;
let label_width = 14u16; // longest label ("Discussions", 11 chars) padded to 12, plus a 2-column gap
let bar_x = area.x + 2 + label_width;
let bar_width = area.width.saturating_sub(4 + label_width + 12).min(max_bar); // Cap bar width for very wide terminals
for (i, lane) in SyncLane::ALL.iter().enumerate() {
let y = bar_start_y + i as u16;
if y >= area.bottom().saturating_sub(3) {
break;
}
let lane_progress = &state.lanes[i];
// Lane label.
let label = format!("{:<12}", lane.label());
frame.print_text_clipped(
area.x + 2,
y,
&label,
Cell {
fg: TEXT,
..Cell::default()
},
bar_x,
);
// Progress bar.
if bar_width > 2 {
render_progress_bar(frame, bar_x, y, bar_width, lane_progress.fraction());
}
// Count text (e.g., "50/100").
let count_x = bar_x + bar_width + 1;
let count_text = if lane_progress.total > 0 {
format!("{}/{}", lane_progress.current, lane_progress.total)
} else if lane_progress.current > 0 {
format!("{}", lane_progress.current)
} else {
"--".to_string()
};
frame.print_text_clipped(
count_x,
y,
&count_text,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
// Throughput stats.
let stats_y = bar_start_y + SyncLane::ALL.len() as u16 + 1;
if stats_y < area.bottom().saturating_sub(2) && state.items_synced > 0 {
let stats = format!(
"{} items synced ({:.0} items/sec)",
state.items_synced, state.items_per_sec
);
frame.print_text_clipped(
area.x + 2,
stats_y,
&stats,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
// Cancel hint at bottom.
let hint_y = area.bottom().saturating_sub(1);
frame.print_text_clipped(
area.x + 2,
hint_y,
"Esc: cancel sync",
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
/// Render a horizontal progress bar.
fn render_progress_bar(frame: &mut Frame<'_>, x: u16, y: u16, width: u16, fraction: f64) {
let filled = ((width as f64) * fraction.clamp(0.0, 1.0)).round() as u16; // clamp so an over-reporting lane can't paint past the bar
let max_x = x + width;
for col in x..max_x {
let is_filled = col < x + filled;
let cell = Cell {
fg: if is_filled { PROGRESS_FG } else { PROGRESS_BG },
bg: if is_filled { PROGRESS_FG } else { PROGRESS_BG },
..Cell::default()
};
frame.buffer.set(col, y, cell);
}
}
// ---------------------------------------------------------------------------
// Summary view
// ---------------------------------------------------------------------------
fn render_summary(frame: &mut Frame<'_>, state: &SyncState, area: Rect) {
let max_x = area.right();
// Title.
let title = "Sync Complete";
let title_x = area.x + 2;
frame.print_text_clipped(
title_x,
area.y + 1,
title,
Cell {
fg: SUCCESS_FG,
..Cell::default()
},
max_x,
);
if let Some(ref summary) = state.summary {
// Duration.
let duration = format_duration(summary.elapsed_ms);
frame.print_text_clipped(
title_x,
area.y + 2,
&format!("Duration: {duration}"),
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
// Summary table header.
let table_y = area.y + 4;
let header = format!("{:<16} {:>6} {:>8}", "Entity", "New", "Updated");
frame.print_text_clipped(
area.x + 2,
table_y,
&header,
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
// Summary rows.
let rows = [
("Issues", summary.issues.new, summary.issues.updated),
(
"MRs",
summary.merge_requests.new,
summary.merge_requests.updated,
),
(
"Discussions",
summary.discussions.new,
summary.discussions.updated,
),
("Notes", summary.notes.new, summary.notes.updated),
];
for (i, (label, new, updated)) in rows.iter().enumerate() {
let row_y = table_y + 1 + i as u16;
if row_y >= area.bottom().saturating_sub(3) {
break;
}
let row = format!("{label:<16} {new:>6} {updated:>8}");
let fg = if *new > 0 || *updated > 0 {
TEXT
} else {
TEXT_MUTED
};
frame.print_text_clipped(
area.x + 2,
row_y,
&row,
Cell {
fg,
..Cell::default()
},
max_x,
);
}
// Total.
let total_y = table_y + 1 + rows.len() as u16;
if total_y < area.bottom().saturating_sub(2) {
let total = format!("Total changes: {}", summary.total_changes());
frame.print_text_clipped(
area.x + 2,
total_y,
&total,
Cell {
fg: ACCENT,
..Cell::default()
},
max_x,
);
}
// Per-project errors.
if summary.has_errors() {
let err_y = total_y + 2;
if err_y < area.bottom().saturating_sub(1) {
frame.print_text_clipped(
area.x + 2,
err_y,
"Errors:",
Cell {
fg: ERROR_FG,
..Cell::default()
},
max_x,
);
for (i, (project, err)) in summary.project_errors.iter().enumerate() {
let y = err_y + 1 + i as u16;
if y >= area.bottom().saturating_sub(1) {
break;
}
let line = format!(" {project}: {err}");
frame.print_text_clipped(
area.x + 2,
y,
&line,
Cell {
fg: ERROR_FG,
..Cell::default()
},
max_x,
);
}
}
}
}
// Navigation hint at bottom.
let hint_y = area.bottom().saturating_sub(1);
frame.print_text_clipped(
area.x + 2,
hint_y,
"Esc: back | Enter: sync again",
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
// ---------------------------------------------------------------------------
// Cancelled / Failed views
// ---------------------------------------------------------------------------
fn render_cancelled(frame: &mut Frame<'_>, area: Rect) {
let max_x = area.right();
let center_y = area.y + area.height / 2;
frame.print_text_clipped(
area.x + 2,
center_y.saturating_sub(1),
"Sync Cancelled",
Cell {
fg: ACCENT,
..Cell::default()
},
max_x,
);
frame.print_text_clipped(
area.x + 2,
center_y + 1,
"Press Enter to retry, or Esc to go back.",
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
fn render_failed(frame: &mut Frame<'_>, area: Rect, error: &str) {
let max_x = area.right();
let center_y = area.y + area.height / 2;
frame.print_text_clipped(
area.x + 2,
center_y.saturating_sub(2),
"Sync Failed",
Cell {
fg: ERROR_FG,
..Cell::default()
},
max_x,
);
// Truncate error to fit screen.
let max_len = area.width.saturating_sub(4) as usize;
let display_err = if error.len() > max_len {
format!(
"{}...",
&error[..error.floor_char_boundary(max_len.saturating_sub(3))]
)
} else {
error.to_string()
};
frame.print_text_clipped(
area.x + 2,
center_y,
&display_err,
Cell {
fg: TEXT,
..Cell::default()
},
max_x,
);
frame.print_text_clipped(
area.x + 2,
center_y + 2,
"Press Enter to retry, or Esc to go back.",
Cell {
fg: TEXT_MUTED,
..Cell::default()
},
max_x,
);
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
fn format_duration(ms: u64) -> String {
let secs = ms / 1000;
let mins = secs / 60;
let remaining_secs = secs % 60;
if mins > 0 {
format!("{mins}m {remaining_secs}s")
} else {
format!("{secs}s")
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
use crate::state::sync::{EntityChangeCounts, SyncSummary};
use ftui::render::grapheme_pool::GraphemePool;
macro_rules! with_frame {
($width:expr, $height:expr, |$frame:ident| $body:block) => {{
let mut pool = GraphemePool::new();
let mut $frame = Frame::new($width, $height, &mut pool);
$body
}};
}
#[test]
fn test_render_idle_no_panic() {
with_frame!(80, 24, |frame| {
let state = SyncState::default();
let area = frame.bounds();
render_sync(&mut frame, &state, area);
});
}
#[test]
fn test_render_running_no_panic() {
with_frame!(80, 24, |frame| {
let mut state = SyncState::default();
state.start();
state.update_progress("issues", 25, 100);
let area = frame.bounds();
render_sync(&mut frame, &state, area);
});
}
#[test]
fn test_render_complete_no_panic() {
with_frame!(80, 24, |frame| {
let mut state = SyncState::default();
state.start();
state.complete(5000);
state.summary = Some(SyncSummary {
issues: EntityChangeCounts { new: 5, updated: 3 },
merge_requests: EntityChangeCounts { new: 2, updated: 1 },
elapsed_ms: 5000,
..Default::default()
});
let area = frame.bounds();
render_sync(&mut frame, &state, area);
});
}
#[test]
fn test_render_cancelled_no_panic() {
with_frame!(80, 24, |frame| {
let mut state = SyncState::default();
state.start();
state.cancel();
let area = frame.bounds();
render_sync(&mut frame, &state, area);
});
}
#[test]
fn test_render_failed_no_panic() {
with_frame!(80, 24, |frame| {
let mut state = SyncState::default();
state.start();
state.fail("network timeout".into());
let area = frame.bounds();
render_sync(&mut frame, &state, area);
});
}
#[test]
fn test_render_tiny_terminal() {
with_frame!(8, 2, |frame| {
let state = SyncState::default();
let area = frame.bounds();
render_sync(&mut frame, &state, area);
// Should not panic.
});
}
#[test]
fn test_render_complete_with_errors() {
with_frame!(80, 24, |frame| {
let mut state = SyncState::default();
state.start();
state.complete(3000);
state.summary = Some(SyncSummary {
elapsed_ms: 3000,
project_errors: vec![("grp/repo".into(), "timeout".into())],
..Default::default()
});
let area = frame.bounds();
render_sync(&mut frame, &state, area);
});
}
#[test]
fn test_format_duration_seconds() {
assert_eq!(format_duration(3500), "3s");
}
#[test]
fn test_format_duration_minutes() {
assert_eq!(format_duration(125_000), "2m 5s");
}
#[test]
fn test_render_running_with_stats() {
with_frame!(80, 24, |frame| {
let mut state = SyncState::default();
state.start();
state.update_progress("issues", 50, 200);
state.update_stream_stats(1024, 50);
let area = frame.bounds();
render_sync(&mut frame, &state, area);
});
}
}
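The bar rendering above reduces to a small pure computation: how many of `width` cells are filled at a given fraction. A minimal standalone sketch of that fill math (assuming only std; the fraction is clamped defensively, and the names here are illustrative, not part of the module):

```rust
/// Number of filled cells for a bar of `width` cells at `fraction` complete.
/// Clamping means a lane that over-reports progress can never overflow the bar.
fn filled_cells(width: u16, fraction: f64) -> u16 {
    let f = fraction.clamp(0.0, 1.0);
    ((width as f64) * f).round() as u16
}

fn main() {
    assert_eq!(filled_cells(20, 0.0), 0);
    assert_eq!(filled_cells(20, 0.5), 10);
    assert_eq!(filled_cells(20, 1.0), 20);
    // current > total in a lane cannot paint past the end of the bar.
    assert_eq!(filled_cells(20, 1.7), 20);
    println!("ok");
}
```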


@@ -20,6 +20,7 @@ use ftui::render::drawing::Draw;
use ftui::render::frame::Frame;
use crate::clock::Clock;
use crate::layout::{classify_width, timeline_time_width};
use crate::message::TimelineEventKind;
use crate::state::timeline::TimelineState;
use crate::view::common::discussion_tree::format_relative_time;
@@ -121,7 +122,18 @@ pub fn render_timeline(
if state.events.is_empty() {
render_empty_state(frame, state, area.x + 1, y, max_x);
} else {
render_event_list(frame, state, area.x, y, area.width, list_height, clock);
let bp = classify_width(area.width);
let time_col_width = timeline_time_width(bp);
render_event_list(
frame,
state,
area.x,
y,
area.width,
list_height,
clock,
time_col_width,
);
}
// -- Hint bar --
@@ -153,6 +165,7 @@ fn render_empty_state(frame: &mut Frame<'_>, state: &TimelineState, x: u16, y: u
// ---------------------------------------------------------------------------
/// Render the scrollable list of timeline events.
#[allow(clippy::too_many_arguments)]
fn render_event_list(
frame: &mut Frame<'_>,
state: &TimelineState,
@@ -161,6 +174,7 @@ fn render_event_list(
width: u16,
list_height: usize,
clock: &dyn Clock,
time_col_width: u16,
) {
let max_x = x + width;
@@ -198,10 +212,9 @@ fn render_event_list(
let mut cx = x + 1;
// Timestamp gutter (right-aligned in ~10 chars).
// Timestamp gutter (right-aligned, width varies by breakpoint).
let time_str = format_relative_time(event.timestamp_ms, clock);
let time_width = 10u16;
let time_x = cx + time_width.saturating_sub(time_str.len() as u16);
let time_x = cx + time_col_width.saturating_sub(time_str.len() as u16);
let time_cell = if is_selected {
selected_cell
} else {
@@ -211,8 +224,8 @@ fn render_event_list(
..Cell::default()
}
};
frame.print_text_clipped(time_x, y, &time_str, time_cell, cx + time_width);
cx += time_width + 1;
frame.print_text_clipped(time_x, y, &time_str, time_cell, cx + time_col_width);
cx += time_col_width + 1;
// Entity prefix: #42 or !99
let prefix = match event.entity_key.kind {


@@ -23,17 +23,20 @@ use ftui::render::cell::{Cell, PackedRgba};
use ftui::render::drawing::Draw;
use ftui::render::frame::Frame;
use ftui::layout::Breakpoint;
use crate::layout::classify_width;
use crate::state::trace::TraceState;
use crate::text_width::cursor_cell_offset;
use lore::core::trace::TraceResult;
use super::common::truncate_str;
use super::{ACCENT, BG_SURFACE, TEXT, TEXT_MUTED};
// ---------------------------------------------------------------------------
// Colors (Flexoki palette)
// Colors (Flexoki palette — extras not in parent module)
// ---------------------------------------------------------------------------
const TEXT: PackedRgba = PackedRgba::rgb(0xCE, 0xCD, 0xC3); // tx
const TEXT_MUTED: PackedRgba = PackedRgba::rgb(0x87, 0x87, 0x80); // tx-2
const BG_SURFACE: PackedRgba = PackedRgba::rgb(0x28, 0x28, 0x24); // bg-2
const ACCENT: PackedRgba = PackedRgba::rgb(0xDA, 0x70, 0x2C); // orange
const GREEN: PackedRgba = PackedRgba::rgb(0x87, 0x9A, 0x39); // green
const CYAN: PackedRgba = PackedRgba::rgb(0x3A, 0xA9, 0x9F); // cyan
const YELLOW: PackedRgba = PackedRgba::rgb(0xD0, 0xA2, 0x15); // yellow
@@ -51,6 +54,7 @@ pub fn render_trace(frame: &mut Frame<'_>, state: &TraceState, area: ftui::core:
return;
}
let bp = classify_width(area.width);
let x = area.x;
let max_x = area.right();
let width = area.width;
@@ -103,7 +107,7 @@ pub fn render_trace(frame: &mut Frame<'_>, state: &TraceState, area: ftui::core:
}
// --- Chain list ---
render_chain_list(frame, result, state, x, y, width, list_height);
render_chain_list(frame, result, state, x, y, width, list_height, bp);
// --- Hint bar ---
render_hint_bar(frame, x, hint_y, max_x);
@@ -135,7 +139,7 @@ fn render_path_input(frame: &mut Frame<'_>, state: &TraceState, x: u16, y: u16,
// Cursor.
if state.path_focused {
let cursor_x = after_label + state.path_cursor as u16;
let cursor_x = after_label + cursor_cell_offset(&state.path_input, state.path_cursor);
if cursor_x < max_x {
let cursor_cell = Cell {
fg: PackedRgba::rgb(0x10, 0x0F, 0x0F),
@@ -144,8 +148,8 @@ fn render_path_input(frame: &mut Frame<'_>, state: &TraceState, x: u16, y: u16,
};
let ch = state
.path_input
.chars()
.nth(state.path_cursor)
.get(state.path_cursor..)
.and_then(|s| s.chars().next())
.unwrap_or(' ');
frame.print_text_clipped(cursor_x, y, &ch.to_string(), cursor_cell, max_x);
}
@@ -227,6 +231,42 @@ fn render_summary(frame: &mut Frame<'_>, result: &TraceResult, x: u16, y: u16, m
frame.print_text_clipped(x + 1, y, &summary, style, max_x);
}
/// Responsive truncation widths for trace chain rows.
const fn chain_title_max(bp: Breakpoint) -> usize {
match bp {
Breakpoint::Xs => 15,
Breakpoint::Sm => 22,
Breakpoint::Md => 30,
Breakpoint::Lg | Breakpoint::Xl => 50,
}
}
const fn chain_author_max(bp: Breakpoint) -> usize {
match bp {
Breakpoint::Xs | Breakpoint::Sm => 8,
Breakpoint::Md | Breakpoint::Lg | Breakpoint::Xl => 12,
}
}
const fn expanded_issue_title_max(bp: Breakpoint) -> usize {
match bp {
Breakpoint::Xs => 20,
Breakpoint::Sm => 30,
Breakpoint::Md => 40,
Breakpoint::Lg | Breakpoint::Xl => 60,
}
}
const fn expanded_disc_snippet_max(bp: Breakpoint) -> usize {
match bp {
Breakpoint::Xs => 25,
Breakpoint::Sm => 40,
Breakpoint::Md => 60,
Breakpoint::Lg | Breakpoint::Xl => 80,
}
}
#[allow(clippy::too_many_arguments)]
fn render_chain_list(
frame: &mut Frame<'_>,
result: &TraceResult,
@@ -235,10 +275,16 @@ fn render_chain_list(
start_y: u16,
width: u16,
height: usize,
bp: Breakpoint,
) {
let max_x = x + width;
let mut row = 0;
let title_max = chain_title_max(bp);
let author_max = chain_author_max(bp);
let issue_title_max = expanded_issue_title_max(bp);
let disc_max = expanded_disc_snippet_max(bp);
for (chain_idx, chain) in result.trace_chains.iter().enumerate() {
if row >= height {
break;
@@ -294,8 +340,8 @@ fn render_chain_list(
};
let after_iid = frame.print_text_clipped(after_icon, y, &iid_str, ref_style, max_x);
// Title.
let title = truncate_str(&chain.mr_title, 30);
// Title (responsive).
let title = truncate_str(&chain.mr_title, title_max);
let title_style = Cell {
fg: TEXT,
bg: sel_bg,
@@ -303,10 +349,10 @@ fn render_chain_list(
};
let after_title = frame.print_text_clipped(after_iid + 1, y, &title, title_style, max_x);
// @author + change_type
// @author + change_type (responsive author width).
let meta = format!(
"@{} {}",
truncate_str(&chain.mr_author, 12),
truncate_str(&chain.mr_author, author_max),
chain.change_type
);
let meta_style = Cell {
@@ -338,10 +384,6 @@ fn render_chain_list(
_ => TEXT_MUTED,
};
let indent_style = Cell {
fg: TEXT_MUTED,
..Cell::default()
};
let after_indent = frame.print_text_clipped(
x + 4,
iy,
@@ -361,8 +403,7 @@ fn render_chain_list(
let after_ref =
frame.print_text_clipped(after_indent, iy, &issue_ref, issue_ref_style, max_x);
let issue_title = truncate_str(&issue.title, 40);
let _ = indent_style; // suppress unused
let issue_title = truncate_str(&issue.title, issue_title_max);
frame.print_text_clipped(
after_ref,
iy,
@@ -384,7 +425,7 @@ fn render_chain_list(
}
let dy = start_y + row as u16;
let author = format!("@{}: ", truncate_str(&disc.author_username, 12));
let author = format!("@{}: ", truncate_str(&disc.author_username, author_max));
let author_style = Cell {
fg: CYAN,
..Cell::default()
@@ -392,7 +433,7 @@ fn render_chain_list(
let after_author =
frame.print_text_clipped(x + 4, dy, &author, author_style, max_x);
let snippet = truncate_str(&disc.body, 60);
let snippet = truncate_str(&disc.body, disc_max);
let snippet_style = Cell {
fg: TEXT_MUTED,
..Cell::default()
@@ -457,16 +498,6 @@ fn render_hint_bar(frame: &mut Frame<'_>, x: u16, y: u16, max_x: u16) {
frame.print_text_clipped(x + 1, y, hints, style, max_x);
}
/// Truncate a string to at most `max_chars` display characters.
fn truncate_str(s: &str, max_chars: usize) -> String {
if s.chars().count() <= max_chars {
s.to_string()
} else {
let truncated: String = s.chars().take(max_chars.saturating_sub(1)).collect();
format!("{truncated}")
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------


@@ -23,8 +23,11 @@ use lore::core::who_types::{
ActiveResult, ExpertResult, OverlapResult, ReviewsResult, WhoResult, WorkloadResult,
};
use crate::layout::{classify_width, who_abbreviated_tabs};
use crate::state::who::{WhoMode, WhoState};
use crate::text_width::cursor_cell_offset;
use super::common::truncate_str;
use super::{ACCENT, BG_SURFACE, BORDER, TEXT, TEXT_MUTED};
/// Muted accent for inactive mode tabs.
@@ -50,7 +53,9 @@ pub fn render_who(frame: &mut Frame<'_>, state: &WhoState, area: Rect) {
let max_x = area.right();
// -- Mode tabs -----------------------------------------------------------
y = render_mode_tabs(frame, state.mode, area.x, y, area.width, max_x);
let bp = classify_width(area.width);
let abbreviated = who_abbreviated_tabs(bp);
y = render_mode_tabs(frame, state.mode, area.x, y, area.width, max_x, abbreviated);
// -- Input bar -----------------------------------------------------------
if state.mode.needs_path() || state.mode.needs_username() {
@@ -115,15 +120,21 @@ fn render_mode_tabs(
y: u16,
_width: u16,
max_x: u16,
abbreviated: bool,
) -> u16 {
let mut cursor_x = x;
for mode in WhoMode::ALL {
let is_active = mode == current;
let label = if is_active {
format!("[ {} ]", mode.label())
let name = if abbreviated {
mode.short_label()
} else {
format!(" {} ", mode.label())
mode.label()
};
let label = if is_active {
format!("[ {name} ]")
} else {
format!(" {name} ")
};
let cell = Cell {
@@ -192,28 +203,31 @@ fn render_input_bar(
frame.print_text_clipped(after_prompt, y, display_text, text_cell, max_x);
// Cursor rendering when focused.
if focused && !text.is_empty() {
let cursor_pos = if state.mode.needs_path() {
state.path_cursor
} else {
state.username_cursor
if focused {
let cursor_cell = Cell {
fg: BG_SURFACE,
bg: TEXT,
..Cell::default()
};
let cursor_col = text[..cursor_pos.min(text.len())]
.chars()
.count()
.min(u16::MAX as usize) as u16;
let cursor_x = after_prompt + cursor_col;
if cursor_x < max_x {
let cursor_cell = Cell {
fg: BG_SURFACE,
bg: TEXT,
..Cell::default()
if text.is_empty() {
// Show cursor at input start when empty.
if after_prompt < max_x {
frame.print_text_clipped(after_prompt, y, " ", cursor_cell, max_x);
}
} else {
let cursor_pos = if state.mode.needs_path() {
state.path_cursor
} else {
state.username_cursor
};
let cursor_char = text
.get(cursor_pos..)
.and_then(|s| s.chars().next())
.unwrap_or(' ');
frame.print_text_clipped(cursor_x, y, &cursor_char.to_string(), cursor_cell, max_x);
let cursor_x = after_prompt + cursor_cell_offset(text, cursor_pos);
if cursor_x < max_x {
let cursor_char = text
.get(cursor_pos..)
.and_then(|s| s.chars().next())
.unwrap_or(' ');
frame.print_text_clipped(cursor_x, y, &cursor_char.to_string(), cursor_cell, max_x);
}
}
}
@@ -915,20 +929,6 @@ fn render_truncation_footer(
frame.print_text_clipped(footer_x, footer_y, &footer, cell, max_x);
}
/// Truncate a string to at most `max_chars` display characters.
fn truncate_str(s: &str, max_chars: usize) -> String {
let chars: Vec<char> = s.chars().collect();
if chars.len() <= max_chars {
s.to_string()
} else if max_chars <= 3 {
chars[..max_chars].iter().collect()
} else {
let mut result: String = chars[..max_chars - 3].iter().collect();
result.push_str("...");
result
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
@@ -1029,7 +1029,7 @@ mod tests {
#[test]
fn test_truncate_str() {
assert_eq!(truncate_str("hello", 10), "hello");
assert_eq!(truncate_str("hello world", 8), "hello...");
assert_eq!(truncate_str("hello world", 8), "hello w\u{2026}");
assert_eq!(truncate_str("hi", 2), "hi");
assert_eq!(truncate_str("abc", 3), "abc");
}
@@ -1046,4 +1046,25 @@ mod tests {
});
}
}
#[test]
fn test_render_who_responsive_breakpoints() {
// Narrow (Xs=50): abbreviated tabs should fit.
with_frame!(50, 24, |frame| {
let state = WhoState::default();
render_who(&mut frame, &state, Rect::new(0, 0, 50, 24));
});
// Medium (Md=90): full tab labels.
with_frame!(90, 24, |frame| {
let state = WhoState::default();
render_who(&mut frame, &state, Rect::new(0, 0, 90, 24));
});
// Wide (Lg=130): full tab labels, more room.
with_frame!(130, 24, |frame| {
let state = WhoState::default();
render_who(&mut frame, &state, Rect::new(0, 0, 130, 24));
});
}
}
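This diff removes the per-module `truncate_str` helpers in favor of a shared `common::truncate_str`, whose body is not shown here. The updated test expectations (e.g. `"hello world"` at width 8 becoming `"hello w…"`) imply char-based truncation with a one-character ellipsis suffix. One plausible implementation consistent with those tests (an assumption, not the actual `common` code):

```rust
/// Truncate to at most `max_chars` characters, ending in '…' when cut.
/// Operating on chars (not bytes) means multi-byte input never splits
/// a code point.
fn truncate_str(s: &str, max_chars: usize) -> String {
    if s.chars().count() <= max_chars {
        return s.to_string();
    }
    // Keep max_chars - 1 characters so the ellipsis fits within the budget.
    let kept: String = s.chars().take(max_chars.saturating_sub(1)).collect();
    format!("{kept}\u{2026}")
}

fn main() {
    assert_eq!(truncate_str("hello", 10), "hello");
    assert_eq!(truncate_str("hello world", 8), "hello w\u{2026}");
    assert_eq!(truncate_str("hi", 2), "hi");
    println!("ok");
}
```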


@@ -0,0 +1,392 @@
//! Property tests for NavigationStack invariants (bd-3eis).
//!
//! Verifies that NavigationStack maintains its invariants under arbitrary
//! sequences of push/pop/forward/jump/reset operations, using deterministic
//! seeded random generation for reproducibility.
//!
//! All properties are tested across 10,000+ generated operation sequences.
use lore_tui::message::{EntityKey, Screen};
use lore_tui::navigation::NavigationStack;
// ---------------------------------------------------------------------------
// Seeded PRNG (xorshift64) — same as stress_tests.rs
// ---------------------------------------------------------------------------
struct Rng(u64);
impl Rng {
fn new(seed: u64) -> Self {
Self(seed.wrapping_add(1))
}
fn next(&mut self) -> u64 {
let mut x = self.0;
x ^= x << 13;
x ^= x >> 7;
x ^= x << 17;
self.0 = x;
x
}
fn range(&mut self, max: u64) -> u64 {
self.next() % max
}
}
// ---------------------------------------------------------------------------
// Random Screen and Operation generators
// ---------------------------------------------------------------------------
fn random_screen(rng: &mut Rng) -> Screen {
match rng.range(12) {
0 => Screen::Dashboard,
1 => Screen::IssueList,
2 => Screen::IssueDetail(EntityKey::issue(1, rng.range(100) as i64)),
3 => Screen::MrList,
4 => Screen::MrDetail(EntityKey::mr(1, rng.range(100) as i64)),
5 => Screen::Search,
6 => Screen::Timeline,
7 => Screen::Who,
8 => Screen::Trace,
9 => Screen::FileHistory,
10 => Screen::Sync,
_ => Screen::Stats,
}
}
#[derive(Debug)]
enum NavOp {
Push(Screen),
Pop,
GoForward,
JumpBack,
JumpForward,
ResetTo(Screen),
}
fn random_op(rng: &mut Rng) -> NavOp {
match rng.range(11) { // 0..=10, so the ResetTo wildcard arm is actually reachable
// Push is the most common operation.
0..=4 => NavOp::Push(random_screen(rng)),
5 | 6 => NavOp::Pop,
7 => NavOp::GoForward,
8 => NavOp::JumpBack,
9 => NavOp::JumpForward,
_ => NavOp::ResetTo(random_screen(rng)),
}
}
fn apply_op(nav: &mut NavigationStack, op: &NavOp) {
match op {
NavOp::Push(screen) => {
nav.push(screen.clone());
}
NavOp::Pop => {
nav.pop();
}
NavOp::GoForward => {
nav.go_forward();
}
NavOp::JumpBack => {
nav.jump_back();
}
NavOp::JumpForward => {
nav.jump_forward();
}
NavOp::ResetTo(screen) => {
nav.reset_to(screen.clone());
}
}
}
// ---------------------------------------------------------------------------
// Properties
// ---------------------------------------------------------------------------
/// Property 1: Stack depth is always >= 1.
///
/// The NavigationStack always has at least one screen (the root).
/// No sequence of operations can empty it.
#[test]
fn prop_depth_always_at_least_one() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
for step in 0..100 {
let op = random_op(&mut rng);
apply_op(&mut nav, &op);
assert!(
nav.depth() >= 1,
"depth < 1 at seed={seed}, step={step}, op={op:?}"
);
}
}
}
/// Property 2: After push(X), current() == X.
///
/// Pushing a screen always makes it the current screen.
#[test]
fn prop_push_sets_current() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
for step in 0..100 {
let screen = random_screen(&mut rng);
nav.push(screen.clone());
assert_eq!(
nav.current(),
&screen,
"push didn't set current at seed={seed}, step={step}"
);
// Also do some random ops to make sequences varied.
if rng.range(3) == 0 {
let op = random_op(&mut rng);
apply_op(&mut nav, &op);
}
}
}
}
/// Property 3: After push(X) then pop(), current returns to previous.
///
/// Push-then-pop is identity on current() when no intermediate ops occur.
#[test]
fn prop_push_pop_returns_to_previous() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
// Do some random setup ops.
for _ in 0..rng.range(20) {
let op = random_op(&mut rng);
apply_op(&mut nav, &op);
}
let before = nav.current().clone();
let screen = random_screen(&mut rng);
nav.push(screen);
let pop_result = nav.pop();
assert!(pop_result.is_some(), "pop after push should succeed");
assert_eq!(
nav.current(),
&before,
"push-pop should restore previous at seed={seed}"
);
}
}
/// Property 4: Forward stack is cleared after any push.
///
/// Browser semantics: navigating to a new page clears the forward history.
#[test]
fn prop_forward_cleared_after_push() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
// Build up some forward stack via push-pop sequences.
for _ in 0..rng.range(10) + 2 {
nav.push(random_screen(&mut rng));
}
for _ in 0..rng.range(5) + 1 {
nav.pop();
}
// Now we might have forward entries.
// Push clears forward.
nav.push(random_screen(&mut rng));
assert!(
!nav.can_go_forward(),
"forward stack should be empty after push at seed={seed}"
);
}
}
/// Property 5: Jump list only records detail/entity screens.
///
/// The jump list (vim Ctrl+O/Ctrl+I) only tracks IssueDetail and MrDetail.
/// After many operations, every jump_back/jump_forward target should be
/// a detail screen.
#[test]
fn prop_jump_list_only_details() {
for seed in 0..500 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
// Do many operations to build up jump list.
for _ in 0..200 {
let op = random_op(&mut rng);
apply_op(&mut nav, &op);
}
// Walk the jump list backward and forward — every screen reached
// should be a detail screen.
let saved_current = nav.current().clone();
let mut visited = Vec::new();
while let Some(screen) = nav.jump_back() {
visited.push(screen.clone());
}
while let Some(screen) = nav.jump_forward() {
visited.push(screen.clone());
}
for screen in &visited {
assert!(
screen.is_detail_or_entity(),
"jump list contained non-detail screen {screen:?} at seed={seed}"
);
}
// saved_current is intentionally unused: this property only inspects
// the jump-list contents; it does not try to restore the position.
let _ = saved_current;
}
}
/// Property 6: reset_to(X) clears all history, current() == X.
///
/// After reset, depth == 1, no back, no forward, empty jump list.
#[test]
fn prop_reset_clears_all() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
// Build up complex state.
for _ in 0..rng.range(50) + 10 {
let op = random_op(&mut rng);
apply_op(&mut nav, &op);
}
let target = random_screen(&mut rng);
nav.reset_to(target.clone());
assert_eq!(
nav.current(),
&target,
"reset didn't set current at seed={seed}"
);
assert_eq!(
nav.depth(),
1,
"reset didn't clear back stack at seed={seed}"
);
assert!(!nav.can_go_back(), "reset didn't clear back at seed={seed}");
assert!(
!nav.can_go_forward(),
"reset didn't clear forward at seed={seed}"
);
}
}
/// Property 7: Breadcrumbs length == depth.
///
/// breadcrumbs() always returns exactly as many entries as the navigation
/// depth (back_stack + 1 for current).
#[test]
fn prop_breadcrumbs_match_depth() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
for step in 0..100 {
let op = random_op(&mut rng);
apply_op(&mut nav, &op);
assert_eq!(
nav.breadcrumbs().len(),
nav.depth(),
"breadcrumbs length != depth at seed={seed}, step={step}, op={op:?}"
);
}
}
}
/// Property 8: No panic for any sequence of operations.
///
/// This is the "chaos monkey" property — the most important invariant.
#[test]
fn prop_no_panic_any_sequence() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
for _ in 0..200 {
let op = random_op(&mut rng);
apply_op(&mut nav, &op);
}
// If we got here, no panic occurred.
assert!(nav.depth() >= 1);
}
}
/// Property 9: Pop at root is always safe and returns None.
///
/// Repeated pops from any state eventually reach root and stop.
#[test]
fn prop_repeated_pop_reaches_root() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
// Push random depth.
let pushes = rng.range(20) + 1;
for _ in 0..pushes {
nav.push(random_screen(&mut rng));
}
// Pop until we can't.
let mut pops = 0;
while nav.pop().is_some() {
pops += 1;
assert!(pops <= pushes, "more pops than pushes at seed={seed}");
}
assert_eq!(
nav.depth(),
1,
"should be at root after exhaustive pop at seed={seed}"
);
// One more pop should be None.
assert!(nav.pop().is_none());
}
}
/// Property 10: Go forward after go back restores screen.
///
/// Pop-then-forward is identity (when no intermediate push).
#[test]
fn prop_pop_forward_identity() {
for seed in 0..1000 {
let mut rng = Rng::new(seed);
let mut nav = NavigationStack::new();
// Build some state.
for _ in 0..rng.range(10) + 2 {
nav.push(random_screen(&mut rng));
}
let before_pop = nav.current().clone();
if nav.pop().is_some() {
let result = nav.go_forward();
assert!(
result.is_some(),
"forward should succeed after pop at seed={seed}"
);
assert_eq!(
nav.current(),
&before_pop,
"pop-forward should restore screen at seed={seed}"
);
}
}
}
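The browser-history invariants these properties assert (push clears forward; pop then go_forward is identity) can be illustrated with a toy model over plain strings. This is a hypothetical sketch of the semantics, not the crate's `NavigationStack`:

```rust
/// Minimal back/forward history (illustrative only).
struct History {
    back: Vec<String>,    // older entries, most recent last
    current: String,
    forward: Vec<String>, // entries left via pop(), re-entered via go_forward()
}

impl History {
    fn new(root: &str) -> Self {
        Self { back: Vec::new(), current: root.into(), forward: Vec::new() }
    }
    /// Navigate to a new entry; browser semantics clear the forward stack.
    fn push(&mut self, s: &str) {
        self.back.push(std::mem::replace(&mut self.current, s.into()));
        self.forward.clear();
    }
    /// Go back one entry; returns false at the root.
    fn pop(&mut self) -> bool {
        match self.back.pop() {
            Some(prev) => {
                self.forward.push(std::mem::replace(&mut self.current, prev));
                true
            }
            None => false,
        }
    }
    /// Undo the last pop; returns false when nothing is ahead.
    fn go_forward(&mut self) -> bool {
        match self.forward.pop() {
            Some(next) => {
                self.back.push(std::mem::replace(&mut self.current, next));
                true
            }
            None => false,
        }
    }
}

fn main() {
    let mut h = History::new("dashboard");
    h.push("issues");
    h.push("issue-42");
    assert!(h.pop());                  // back to "issues"
    assert_eq!(h.current, "issues");
    assert!(h.go_forward());           // forward restores "issue-42"
    assert_eq!(h.current, "issue-42");
    h.pop();
    h.push("search");                  // push clears forward history
    assert!(!h.go_forward());
    assert!(h.pop() && h.pop());       // exhaustive pop reaches the root
    assert!(!h.pop());
    assert_eq!(h.current, "dashboard");
    println!("ok");
}
```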


@@ -0,0 +1,671 @@
//! Concurrent pagination/write race tests (bd-14hv).
//!
//! Proves that the keyset pagination + snapshot fence mechanism prevents
//! duplicate or skipped rows when a writer inserts new issues concurrently
//! with a reader paginating through the issue list.
//!
//! Architecture:
//! - DbManager (3 readers + 1 writer) with WAL mode
//! - Reader threads: paginate using `fetch_issue_list()` with keyset cursor
//! - Writer thread: INSERT new issues concurrently
//! - Assertions: no duplicate IIDs, snapshot fence excludes new writes
use std::collections::HashSet;
use std::path::PathBuf;
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Barrier};
use rusqlite::Connection;
use lore_tui::action::fetch_issue_list;
use lore_tui::db::DbManager;
use lore_tui::state::issue_list::{IssueFilter, IssueListState, SortField, SortOrder};
// ---------------------------------------------------------------------------
// Test infrastructure
// ---------------------------------------------------------------------------
static DB_COUNTER: AtomicU64 = AtomicU64::new(0);
fn test_db_path() -> PathBuf {
let n = DB_COUNTER.fetch_add(1, Ordering::Relaxed);
let dir = std::env::temp_dir().join("lore-tui-pagination-tests");
std::fs::create_dir_all(&dir).expect("create test dir");
dir.join(format!(
"race-{}-{:?}-{n}.db",
std::process::id(),
std::thread::current().id(),
))
}
/// Create the schema needed for issue list queries.
fn create_schema(conn: &Connection) {
conn.execute_batch(
"
CREATE TABLE projects (
id INTEGER PRIMARY KEY,
gitlab_project_id INTEGER UNIQUE NOT NULL,
path_with_namespace TEXT NOT NULL
);
CREATE TABLE issues (
id INTEGER PRIMARY KEY,
gitlab_id INTEGER UNIQUE NOT NULL,
project_id INTEGER NOT NULL,
iid INTEGER NOT NULL,
title TEXT,
state TEXT NOT NULL,
author_username TEXT,
created_at INTEGER NOT NULL,
updated_at INTEGER NOT NULL,
last_seen_at INTEGER NOT NULL
);
CREATE TABLE labels (
id INTEGER PRIMARY KEY,
gitlab_id INTEGER,
project_id INTEGER NOT NULL,
name TEXT NOT NULL,
color TEXT,
description TEXT
);
CREATE TABLE issue_labels (
issue_id INTEGER NOT NULL,
label_id INTEGER NOT NULL,
PRIMARY KEY(issue_id, label_id)
);
INSERT INTO projects (gitlab_project_id, path_with_namespace)
VALUES (1, 'group/project');
",
)
.expect("create schema");
}
/// Insert N issues with sequential IIDs starting from `start_iid`.
///
/// Each issue gets `updated_at = base_ts - (offset * 1000)` to create
/// a deterministic ordering for keyset pagination (newest first).
fn seed_issues(conn: &Connection, start_iid: i64, count: i64, base_ts: i64) {
let mut stmt = conn
.prepare(
"INSERT INTO issues (gitlab_id, project_id, iid, title, state,
author_username, created_at, updated_at, last_seen_at)
VALUES (?1, 1, ?2, ?3, 'opened', 'alice', ?4, ?4, ?4)",
)
.expect("prepare insert");
for i in 0..count {
let iid = start_iid + i;
let ts = base_ts - (i * 1000);
stmt.execute(rusqlite::params![
iid * 100, // gitlab_id
iid,
format!("Issue {iid}"),
ts,
])
.expect("insert issue");
}
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
/// Paginate through all issues without concurrent writes.
///
/// Baseline: keyset pagination yields every IID exactly once.
#[test]
fn test_pagination_no_duplicates_baseline() {
let path = test_db_path();
let db = DbManager::open(&path).expect("open db");
let base_ts = 1_700_000_000_000_i64;
db.with_writer(|conn| {
create_schema(conn);
seed_issues(conn, 1, 200, base_ts);
Ok(())
})
.unwrap();
// Paginate through all issues collecting IIDs.
let mut all_iids = Vec::new();
let mut state = IssueListState::default();
let filter = IssueFilter::default();
loop {
let page = db
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
state.next_cursor.as_ref(),
state.snapshot_fence,
)
})
.expect("fetch page");
if page.rows.is_empty() {
break;
}
for row in &page.rows {
all_iids.push(row.iid);
}
state.apply_page(page);
if state.next_cursor.is_none() {
break;
}
}
// Every IID 1..=200 should appear exactly once.
let unique: HashSet<i64> = all_iids.iter().copied().collect();
assert_eq!(
unique.len(),
200,
"Expected 200 unique IIDs, got {}",
unique.len()
);
assert_eq!(
all_iids.len(),
200,
"Expected 200 total IIDs, got {} (duplicates present)",
all_iids.len()
);
}
/// Concurrent writer inserts NEW issues (with future timestamps) while
/// reader paginates. Snapshot fence should exclude the new rows.
#[test]
fn test_pagination_no_duplicates_with_concurrent_writes() {
let path = test_db_path();
let db = Arc::new(DbManager::open(&path).expect("open db"));
let base_ts = 1_700_000_000_000_i64;
// Seed 200 issues.
db.with_writer(|conn| {
create_schema(conn);
seed_issues(conn, 1, 200, base_ts);
Ok(())
})
.unwrap();
// Barrier to synchronize reader and writer start.
let barrier = Arc::new(Barrier::new(2));
// Writer thread: inserts issues with NEWER timestamps (above the fence).
let db_w = Arc::clone(&db);
let barrier_w = Arc::clone(&barrier);
let writer = std::thread::spawn(move || {
barrier_w.wait();
for batch in 0..10 {
db_w.with_writer(|conn| {
for i in 0..10 {
let iid = 1000 + batch * 10 + i;
// Future timestamp: above the snapshot fence.
let ts = base_ts + 100_000 + (batch * 10 + i) * 1000;
conn.execute(
"INSERT INTO issues (gitlab_id, project_id, iid, title, state,
author_username, created_at, updated_at, last_seen_at)
VALUES (?1, 1, ?2, ?3, 'opened', 'writer', ?4, ?4, ?4)",
rusqlite::params![iid * 100, iid, format!("New {iid}"), ts],
)?;
}
Ok(())
})
.expect("writer batch");
// Small yield to interleave with reader.
std::thread::yield_now();
}
});
// Reader thread: paginate with snapshot fence.
let db_r = Arc::clone(&db);
let barrier_r = Arc::clone(&barrier);
let reader = std::thread::spawn(move || {
barrier_r.wait();
let mut all_iids = Vec::new();
let mut state = IssueListState::default();
let filter = IssueFilter::default();
loop {
let page = db_r
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
state.next_cursor.as_ref(),
state.snapshot_fence,
)
})
.expect("fetch page");
if page.rows.is_empty() {
break;
}
for row in &page.rows {
all_iids.push(row.iid);
}
state.apply_page(page);
// Yield to let writer interleave.
std::thread::yield_now();
if state.next_cursor.is_none() {
break;
}
}
all_iids
});
writer.join().expect("writer thread");
let all_iids = reader.join().expect("reader thread");
// The critical invariant: NO DUPLICATES.
let unique: HashSet<i64> = all_iids.iter().copied().collect();
assert_eq!(
all_iids.len(),
unique.len(),
"Duplicate IIDs found in pagination results"
);
// All original issues present.
for iid in 1..=200 {
assert!(
unique.contains(&iid),
"Original issue {iid} missing from pagination"
);
}
// Writer issues may appear on the first page (before the fence is
// established), but should NOT cause duplicates. Count them as a
// diagnostic.
let writer_count = all_iids.iter().filter(|&&iid| iid >= 1000).count();
eprintln!("Writer issues visible through fence: {writer_count} (expected: few or zero)");
}
/// Multiple concurrent readers paginating simultaneously — no interference.
#[test]
fn test_multiple_concurrent_readers() {
let path = test_db_path();
let db = Arc::new(DbManager::open(&path).expect("open db"));
let base_ts = 1_700_000_000_000_i64;
db.with_writer(|conn| {
create_schema(conn);
seed_issues(conn, 1, 100, base_ts);
Ok(())
})
.unwrap();
let barrier = Arc::new(Barrier::new(4));
let mut handles = Vec::new();
for reader_id in 0..4 {
let db_r = Arc::clone(&db);
let barrier_r = Arc::clone(&barrier);
handles.push(std::thread::spawn(move || {
barrier_r.wait();
let mut all_iids = Vec::new();
let mut state = IssueListState::default();
let filter = IssueFilter::default();
loop {
let page = db_r
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
state.next_cursor.as_ref(),
state.snapshot_fence,
)
})
.unwrap_or_else(|e| panic!("reader {reader_id} fetch failed: {e}"));
if page.rows.is_empty() {
break;
}
for row in &page.rows {
all_iids.push(row.iid);
}
state.apply_page(page);
if state.next_cursor.is_none() {
break;
}
}
all_iids
}));
}
for (i, h) in handles.into_iter().enumerate() {
let iids = h.join().unwrap_or_else(|_| panic!("reader {i} panicked"));
let unique: HashSet<i64> = iids.iter().copied().collect();
assert_eq!(iids.len(), unique.len(), "Reader {i} got duplicates");
assert_eq!(
unique.len(),
100,
"Reader {i} missed issues: got {}",
unique.len()
);
}
}
/// Snapshot fence invalidation: after `reset_pagination()`, the fence is
/// cleared and a new read picks up newly written rows.
#[test]
fn test_snapshot_fence_invalidated_on_refresh() {
let path = test_db_path();
let db = DbManager::open(&path).expect("open db");
let base_ts = 1_700_000_000_000_i64;
db.with_writer(|conn| {
create_schema(conn);
seed_issues(conn, 1, 10, base_ts);
Ok(())
})
.unwrap();
// First pagination: snapshot fence set.
let mut state = IssueListState::default();
let filter = IssueFilter::default();
let page = db
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
})
.unwrap();
state.apply_page(page);
assert_eq!(state.rows.len(), 10);
assert!(state.snapshot_fence.is_some());
// Writer adds new issues with FUTURE timestamps.
db.with_writer(|conn| {
seed_issues(conn, 100, 5, base_ts + 500_000);
Ok(())
})
.unwrap();
// WITH fence: new issues should NOT appear.
let fenced_page = db
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
None,
state.snapshot_fence,
)
})
.unwrap();
assert_eq!(
fenced_page.total_count, 10,
"Fence should exclude new issues"
);
// Manual refresh: reset_pagination clears the fence.
state.reset_pagination();
assert!(state.snapshot_fence.is_none());
// WITHOUT fence: new issues should appear.
let refreshed_page = db
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
None,
state.snapshot_fence,
)
})
.unwrap();
assert_eq!(
refreshed_page.total_count, 15,
"After refresh, should see all 15 issues"
);
}
/// Concurrent writer inserts issues with timestamps WITHIN the fence range.
///
/// This is the edge case: snapshot fence is timestamp-based, not
/// transaction-based, so writes with `updated_at <= fence` CAN appear.
/// The keyset cursor still prevents duplicates (no row appears twice),
/// but newly inserted rows with old timestamps might appear in later pages.
///
/// This test documents the known behavior.
#[test]
fn test_concurrent_write_within_fence_range() {
let path = test_db_path();
let db = Arc::new(DbManager::open(&path).expect("open db"));
let base_ts = 1_700_000_000_000_i64;
// Seed 100 issues spanning base_ts down to base_ts - 99000.
db.with_writer(|conn| {
create_schema(conn);
seed_issues(conn, 1, 100, base_ts);
Ok(())
})
.unwrap();
let barrier = Arc::new(Barrier::new(2));
// Writer: insert issues with timestamps WITHIN the existing range.
let db_w = Arc::clone(&db);
let barrier_w = Arc::clone(&barrier);
let writer = std::thread::spawn(move || {
barrier_w.wait();
for i in 0..20 {
db_w.with_writer(|conn| {
let iid = 500 + i;
// Timestamp within the range of existing issues.
let ts = base_ts - 50_000 - i * 100;
conn.execute(
"INSERT INTO issues (gitlab_id, project_id, iid, title, state,
author_username, created_at, updated_at, last_seen_at)
VALUES (?1, 1, ?2, ?3, 'opened', 'writer', ?4, ?4, ?4)",
rusqlite::params![iid * 100, iid, format!("Mid {iid}"), ts],
)?;
Ok(())
})
.expect("writer insert");
std::thread::yield_now();
}
});
// Reader: paginate with fence.
let db_r = Arc::clone(&db);
let barrier_r = Arc::clone(&barrier);
let reader = std::thread::spawn(move || {
barrier_r.wait();
let mut all_iids = Vec::new();
let mut state = IssueListState::default();
let filter = IssueFilter::default();
loop {
let page = db_r
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
state.next_cursor.as_ref(),
state.snapshot_fence,
)
})
.expect("fetch");
if page.rows.is_empty() {
break;
}
for row in &page.rows {
all_iids.push(row.iid);
}
state.apply_page(page);
std::thread::yield_now();
if state.next_cursor.is_none() {
break;
}
}
all_iids
});
writer.join().expect("writer");
let all_iids = reader.join().expect("reader");
// The critical invariant: NO DUPLICATES regardless of timing.
let unique: HashSet<i64> = all_iids.iter().copied().collect();
assert_eq!(
all_iids.len(),
unique.len(),
"No duplicate IIDs should appear even with concurrent in-range writes"
);
// All original issues must still be present.
for iid in 1..=100 {
assert!(unique.contains(&iid), "Original issue {iid} missing");
}
}
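The behavior this test documents can be modeled without a database. A pure in-memory sketch (hypothetical `Row` and `next_page`, not the real `fetch_issue_list`) shows why an in-range insert that lands behind the cursor is skipped but can never be duplicated:

```rust
// Simplified model: rows ordered by (updated_at DESC, id ASC), keyset
// cursor = key of the last row already delivered. Hypothetical sketch.
#[derive(Clone, Copy, PartialEq, Debug)]
struct Row {
    id: i64,
    updated_at: i64,
}

/// Next page strictly past `cursor` in (updated_at DESC, id ASC) order.
fn next_page(rows: &[Row], cursor: Option<(i64, i64)>, limit: usize) -> Vec<Row> {
    let mut sorted: Vec<Row> = rows.to_vec();
    sorted.sort_by(|a, b| b.updated_at.cmp(&a.updated_at).then(a.id.cmp(&b.id)));
    sorted
        .into_iter()
        .filter(|r| match cursor {
            // Keyset predicate: the row's key is strictly after the cursor.
            Some((ts, id)) => r.updated_at < ts || (r.updated_at == ts && r.id > id),
            None => true,
        })
        .take(limit)
        .collect()
}
```

A row inserted with a timestamp *above* the cursor position fails the keyset predicate on every later page, so it is silently skipped; a row inserted *below* the cursor shows up in a later page. Neither case can produce a duplicate, which is exactly the invariant the test asserts.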
/// Stress test: 1000 iterations of concurrent read+write with verification.
#[test]
fn test_pagination_stress_1000_iterations() {
let path = test_db_path();
let db = Arc::new(DbManager::open(&path).expect("open db"));
let base_ts = 1_700_000_000_000_i64;
db.with_writer(|conn| {
create_schema(conn);
seed_issues(conn, 1, 100, base_ts);
Ok(())
})
.unwrap();
// Run 1000 pagination cycles with concurrent writes.
let writer_iid = Arc::new(AtomicU64::new(1000));
for iteration in 0..1000 {
// Writer: insert one issue per iteration.
let next_iid = writer_iid.fetch_add(1, Ordering::Relaxed) as i64;
db.with_writer(|conn| {
let ts = base_ts + 100_000 + next_iid * 100;
conn.execute(
"INSERT INTO issues (gitlab_id, project_id, iid, title, state,
author_username, created_at, updated_at, last_seen_at)
VALUES (?1, 1, ?2, ?3, 'opened', 'stress', ?4, ?4, ?4)",
rusqlite::params![next_iid * 100, next_iid, format!("Stress {next_iid}"), ts],
)?;
Ok(())
})
.expect("stress write");
// Reader: paginate first page, verify no duplicates within that page.
let page = db
.with_reader(|conn| {
fetch_issue_list(
conn,
&IssueFilter::default(),
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
})
.unwrap_or_else(|e| panic!("iteration {iteration}: fetch failed: {e}"));
let iids: Vec<i64> = page.rows.iter().map(|r| r.iid).collect();
let unique: HashSet<i64> = iids.iter().copied().collect();
assert_eq!(
iids.len(),
unique.len(),
"Iteration {iteration}: duplicates within a single page"
);
}
}
/// Background writes do NOT invalidate an active snapshot fence.
#[test]
fn test_background_writes_dont_invalidate_fence() {
let path = test_db_path();
let db = DbManager::open(&path).expect("open db");
let base_ts = 1_700_000_000_000_i64;
db.with_writer(|conn| {
create_schema(conn);
seed_issues(conn, 1, 50, base_ts);
Ok(())
})
.unwrap();
// Initial pagination sets the fence.
let mut state = IssueListState::default();
let filter = IssueFilter::default();
let page = db
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
})
.unwrap();
state.apply_page(page);
let original_fence = state.snapshot_fence;
// Simulate background sync writing 20 new issues.
db.with_writer(|conn| {
seed_issues(conn, 200, 20, base_ts + 1_000_000);
Ok(())
})
.unwrap();
// The state's fence should be unchanged — background writes are invisible.
assert_eq!(state.snapshot_fence, original_fence);
assert_eq!(state.rows.len(), 50);
// Re-fetch with the existing fence: still sees only original 50.
let fenced = db
.with_reader(|conn| {
fetch_issue_list(
conn,
&filter,
SortField::UpdatedAt,
SortOrder::Desc,
None,
state.snapshot_fence,
)
})
.unwrap();
assert_eq!(fenced.total_count, 50);
}


@@ -0,0 +1,710 @@
//! CLI/TUI parity tests (bd-wrw1).
//!
//! Verifies that the TUI action layer and CLI query layer return consistent
//! results when querying the same SQLite database. Both paths read the same
//! tables with different query strategies (TUI uses keyset pagination, CLI
//! uses LIMIT-based pagination), so given identical data and filters they
//! must agree on entity IIDs, ordering, and counts.
//!
//! Uses `lore::core::db::{create_connection, run_migrations}` for a
//! full-schema in-memory database with deterministic seed data.
use std::path::Path;
use rusqlite::Connection;
use lore::cli::commands::{ListFilters, MrListFilters, query_issues, query_mrs};
use lore::core::db::{create_connection, run_migrations};
use lore_tui::action::{fetch_dashboard, fetch_issue_list, fetch_mr_list};
use lore_tui::clock::FakeClock;
use lore_tui::state::issue_list::{IssueFilter, SortField, SortOrder};
use lore_tui::state::mr_list::{MrFilter, MrSortField, MrSortOrder};
use chrono::{TimeZone, Utc};
// ---------------------------------------------------------------------------
// Setup: in-memory database with full schema and seed data
// ---------------------------------------------------------------------------
fn test_conn() -> Connection {
let conn = create_connection(Path::new(":memory:")).expect("create in-memory connection");
run_migrations(&conn).expect("run migrations");
conn
}
fn frozen_clock() -> FakeClock {
FakeClock::new(Utc.with_ymd_and_hms(2026, 1, 15, 12, 0, 0).unwrap())
}
/// Insert a project and return its id.
fn insert_project(conn: &Connection, id: i64, path: &str) -> i64 {
conn.execute(
"INSERT INTO projects (id, gitlab_project_id, path_with_namespace, web_url)
VALUES (?1, ?1, ?2, ?3)",
rusqlite::params![id, path, format!("https://gitlab.com/{path}")],
)
.expect("insert project");
id
}
/// Test issue row for insertion.
struct TestIssue<'a> {
id: i64,
project_id: i64,
iid: i64,
title: &'a str,
state: &'a str,
author: &'a str,
created_at: i64,
updated_at: i64,
}
/// Insert a test issue.
fn insert_issue(conn: &Connection, issue: &TestIssue<'_>) {
conn.execute(
"INSERT INTO issues (id, gitlab_id, project_id, iid, title, state,
author_username, created_at, updated_at, last_seen_at, web_url)
VALUES (?1, ?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?8, NULL)",
rusqlite::params![
issue.id,
issue.project_id,
issue.iid,
issue.title,
issue.state,
issue.author,
issue.created_at,
issue.updated_at
],
)
.expect("insert issue");
}
/// Test MR row for insertion.
struct TestMr<'a> {
id: i64,
project_id: i64,
iid: i64,
title: &'a str,
state: &'a str,
draft: bool,
author: &'a str,
source_branch: &'a str,
target_branch: &'a str,
created_at: i64,
updated_at: i64,
}
/// Insert a test merge request.
fn insert_mr(conn: &Connection, mr: &TestMr<'_>) {
conn.execute(
"INSERT INTO merge_requests (id, gitlab_id, project_id, iid, title, state,
draft, author_username, source_branch, target_branch,
created_at, updated_at, last_seen_at, web_url)
VALUES (?1, ?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?11, NULL)",
rusqlite::params![
mr.id,
mr.project_id,
mr.iid,
mr.title,
mr.state,
i64::from(mr.draft),
mr.author,
mr.source_branch,
mr.target_branch,
mr.created_at,
mr.updated_at
],
)
.expect("insert mr");
}
/// Insert a test discussion.
fn insert_discussion(conn: &Connection, id: i64, project_id: i64, issue_id: i64) {
conn.execute(
"INSERT INTO discussions (id, gitlab_discussion_id, project_id, issue_id,
noteable_type, last_seen_at)
VALUES (?1, ?1, ?2, ?3, 'Issue', 1000)",
rusqlite::params![id, project_id, issue_id],
)
.expect("insert discussion");
}
/// Insert a test note.
fn insert_note(conn: &Connection, id: i64, discussion_id: i64, project_id: i64, is_system: bool) {
conn.execute(
"INSERT INTO notes (id, gitlab_id, discussion_id, project_id, is_system,
author_username, body, created_at, updated_at, last_seen_at)
VALUES (?1, ?1, ?2, ?3, ?4, 'author', 'body', 1000, 1000, 1000)",
rusqlite::params![id, discussion_id, project_id, i64::from(is_system)],
)
.expect("insert note");
}
/// Seed the database with a deterministic fixture set.
///
/// Creates:
/// - 1 project
/// - 10 issues (5 opened, 5 closed, various authors/timestamps)
/// - 5 merge requests (3 opened, 1 merged, 1 closed)
/// - 3 discussions + 6 notes (2 system)
fn seed_fixture(conn: &Connection) {
let pid = insert_project(conn, 1, "group/repo");
// Issues: iid 1..=10, alternating state, varying timestamps.
let base_ts: i64 = 1_700_000_000_000; // ~Nov 2023
for i in 1..=10 {
let state = if i % 2 == 0 { "closed" } else { "opened" };
let author = if i <= 5 { "alice" } else { "bob" };
let created = base_ts + i * 60_000;
let updated = base_ts + i * 120_000; // Strictly increasing for deterministic sort.
let title = format!("Issue {i}");
insert_issue(
conn,
&TestIssue {
id: i,
project_id: pid,
iid: i,
title: &title,
state,
author,
created_at: created,
updated_at: updated,
},
);
}
// Merge requests: iid 1..=5.
for i in 1..=5 {
let (state, draft) = match i {
1..=3 => ("opened", i == 2),
4 => ("merged", false),
_ => ("closed", false),
};
let created = base_ts + i * 60_000;
let updated = base_ts + i * 120_000;
let title = format!("MR {i}");
let source = format!("feature-{i}");
insert_mr(
conn,
&TestMr {
id: 100 + i,
project_id: pid,
iid: i,
title: &title,
state,
draft,
author: "alice",
source_branch: &source,
target_branch: "main",
created_at: created,
updated_at: updated,
},
);
}
// Discussions + notes (for count parity).
for d in 1..=3 {
insert_discussion(conn, d, pid, d); // discussions on issues 1..3
// 2 notes per discussion.
let n1 = d * 10;
let n2 = d * 10 + 1;
insert_note(conn, n1, d, pid, false);
insert_note(conn, n2, d, pid, d == 1); // discussion 1 gets a system note
}
}
// ---------------------------------------------------------------------------
// Default CLI filters (no filtering, default sort)
// ---------------------------------------------------------------------------
fn default_issue_filters() -> ListFilters<'static> {
ListFilters {
limit: 100,
project: None,
state: None,
author: None,
assignee: None,
labels: None,
milestone: None,
since: None,
due_before: None,
has_due_date: false,
statuses: &[],
sort: "updated",
order: "desc",
}
}
fn default_mr_filters() -> MrListFilters<'static> {
MrListFilters {
limit: 100,
project: None,
state: None,
author: None,
assignee: None,
reviewer: None,
labels: None,
since: None,
draft: false,
no_draft: false,
target_branch: None,
source_branch: None,
sort: "updated",
order: "desc",
}
}
// ---------------------------------------------------------------------------
// Parity Tests
// ---------------------------------------------------------------------------
/// Count parity: TUI dashboard entity counts match direct SQL (CLI logic).
///
/// The TUI fetches counts via `fetch_dashboard().counts`, while the CLI uses
/// private `count_issues`/`count_mrs` with simple COUNT(*) queries. Since both
/// query the same tables, counts must agree.
#[test]
fn test_count_parity_dashboard_vs_sql() {
let conn = test_conn();
seed_fixture(&conn);
// TUI path: fetch_dashboard returns EntityCounts.
let clock = frozen_clock();
let dashboard = fetch_dashboard(&conn, &clock).expect("fetch_dashboard");
let counts = &dashboard.counts;
// CLI-equivalent: direct SQL matching the CLI's count logic.
let issues_total: i64 = conn
.query_row("SELECT COUNT(*) FROM issues", [], |r| r.get(0))
.unwrap();
let issues_open: i64 = conn
.query_row(
"SELECT COUNT(*) FROM issues WHERE state = 'opened'",
[],
|r| r.get(0),
)
.unwrap();
let mrs_total: i64 = conn
.query_row("SELECT COUNT(*) FROM merge_requests", [], |r| r.get(0))
.unwrap();
let mrs_open: i64 = conn
.query_row(
"SELECT COUNT(*) FROM merge_requests WHERE state = 'opened'",
[],
|r| r.get(0),
)
.unwrap();
let discussions: i64 = conn
.query_row("SELECT COUNT(*) FROM discussions", [], |r| r.get(0))
.unwrap();
let notes_total: i64 = conn
.query_row("SELECT COUNT(*) FROM notes", [], |r| r.get(0))
.unwrap();
assert_eq!(counts.issues_total, issues_total as u64, "issues_total");
assert_eq!(counts.issues_open, issues_open as u64, "issues_open");
assert_eq!(counts.mrs_total, mrs_total as u64, "mrs_total");
assert_eq!(counts.mrs_open, mrs_open as u64, "mrs_open");
assert_eq!(counts.discussions, discussions as u64, "discussions");
assert_eq!(counts.notes_total, notes_total as u64, "notes_total");
// Verify known fixture counts.
assert_eq!(counts.issues_total, 10);
assert_eq!(counts.issues_open, 5); // odd IIDs are opened
assert_eq!(counts.mrs_total, 5);
assert_eq!(counts.mrs_open, 3); // iid 1,2,3 opened
assert_eq!(counts.discussions, 3);
assert_eq!(counts.notes_total, 6); // 2 per discussion
}
/// Issue list parity: TUI and CLI return the same IIDs in the same order.
///
/// TUI uses keyset pagination (`fetch_issue_list`), CLI uses LIMIT-based
/// (`query_issues`). Both sorted by updated_at DESC with no filters.
#[test]
fn test_issue_list_parity_iids_and_order() {
let conn = test_conn();
seed_fixture(&conn);
// CLI path.
let cli_result = query_issues(&conn, &default_issue_filters()).expect("CLI query_issues");
let cli_iids: Vec<i64> = cli_result.issues.iter().map(|r| r.iid).collect();
// TUI path: first page (no cursor, no fence — equivalent to CLI's initial query).
let tui_page = fetch_issue_list(
&conn,
&IssueFilter::default(),
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
.expect("TUI fetch_issue_list");
let tui_iids: Vec<i64> = tui_page.rows.iter().map(|r| r.iid).collect();
// Both should see all 10 issues in the same descending updated_at order.
assert_eq!(cli_result.total_count, 10);
assert_eq!(tui_page.total_count, 10);
assert_eq!(
cli_iids, tui_iids,
"CLI and TUI issue IID order must match.\nCLI: {cli_iids:?}\nTUI: {tui_iids:?}"
);
// Verify descending order (iid 10 has highest updated_at).
assert_eq!(cli_iids[0], 10, "most recently updated should be iid 10");
assert_eq!(
*cli_iids.last().unwrap(),
1,
"oldest updated should be iid 1"
);
}
/// Issue list parity with state filter: both paths agree on filtered results.
#[test]
fn test_issue_list_parity_state_filter() {
let conn = test_conn();
seed_fixture(&conn);
// CLI: filter state = "opened".
let mut cli_filters = default_issue_filters();
cli_filters.state = Some("opened");
let cli_result = query_issues(&conn, &cli_filters).expect("CLI opened issues");
let cli_iids: Vec<i64> = cli_result.issues.iter().map(|r| r.iid).collect();
// TUI: filter state = "opened".
let tui_filter = IssueFilter {
state: Some("opened".into()),
..Default::default()
};
let tui_page = fetch_issue_list(
&conn,
&tui_filter,
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
.expect("TUI opened issues");
let tui_iids: Vec<i64> = tui_page.rows.iter().map(|r| r.iid).collect();
assert_eq!(cli_result.total_count, 5, "CLI count for opened");
assert_eq!(tui_page.total_count, 5, "TUI count for opened");
assert_eq!(
cli_iids, tui_iids,
"Filtered IIDs must match.\nCLI: {cli_iids:?}\nTUI: {tui_iids:?}"
);
// All returned IIDs should be odd (our fixture alternates).
for iid in &cli_iids {
assert!(
iid % 2 == 1,
"opened issues should have odd IIDs, got {iid}"
);
}
}
/// Issue list parity with author filter.
#[test]
fn test_issue_list_parity_author_filter() {
let conn = test_conn();
seed_fixture(&conn);
// CLI: filter author = "alice" (issues 1..=5).
let mut cli_filters = default_issue_filters();
cli_filters.author = Some("alice");
let cli_result = query_issues(&conn, &cli_filters).expect("CLI alice issues");
let cli_iids: Vec<i64> = cli_result.issues.iter().map(|r| r.iid).collect();
// TUI: filter author = "alice".
let tui_filter = IssueFilter {
author: Some("alice".into()),
..Default::default()
};
let tui_page = fetch_issue_list(
&conn,
&tui_filter,
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
.expect("TUI alice issues");
let tui_iids: Vec<i64> = tui_page.rows.iter().map(|r| r.iid).collect();
assert_eq!(cli_result.total_count, 5, "CLI count for alice");
assert_eq!(tui_page.total_count, 5, "TUI count for alice");
assert_eq!(cli_iids, tui_iids, "Author-filtered IIDs must match");
// All returned IIDs should be <= 5 (alice authors issues 1-5).
for iid in &cli_iids {
assert!(*iid <= 5, "alice issues should have IID <= 5, got {iid}");
}
}
/// MR list parity: TUI and CLI return the same IIDs in the same order.
#[test]
fn test_mr_list_parity_iids_and_order() {
let conn = test_conn();
seed_fixture(&conn);
// CLI path.
let cli_result = query_mrs(&conn, &default_mr_filters()).expect("CLI query_mrs");
let cli_iids: Vec<i64> = cli_result.mrs.iter().map(|r| r.iid).collect();
// TUI path.
let tui_page = fetch_mr_list(
&conn,
&MrFilter::default(),
MrSortField::UpdatedAt,
MrSortOrder::Desc,
None,
None,
)
.expect("TUI fetch_mr_list");
let tui_iids: Vec<i64> = tui_page.rows.iter().map(|r| r.iid).collect();
assert_eq!(cli_result.total_count, 5, "CLI MR count");
assert_eq!(tui_page.total_count, 5, "TUI MR count");
assert_eq!(
cli_iids, tui_iids,
"CLI and TUI MR IID order must match.\nCLI: {cli_iids:?}\nTUI: {tui_iids:?}"
);
// Verify descending order.
assert_eq!(cli_iids[0], 5, "most recently updated MR should be iid 5");
}
/// MR list parity with state filter.
#[test]
fn test_mr_list_parity_state_filter() {
let conn = test_conn();
seed_fixture(&conn);
// CLI: filter state = "opened".
let mut cli_filters = default_mr_filters();
cli_filters.state = Some("opened");
let cli_result = query_mrs(&conn, &cli_filters).expect("CLI opened MRs");
let cli_iids: Vec<i64> = cli_result.mrs.iter().map(|r| r.iid).collect();
// TUI: filter state = "opened".
let tui_filter = MrFilter {
state: Some("opened".into()),
..Default::default()
};
let tui_page = fetch_mr_list(
&conn,
&tui_filter,
MrSortField::UpdatedAt,
MrSortOrder::Desc,
None,
None,
)
.expect("TUI opened MRs");
let tui_iids: Vec<i64> = tui_page.rows.iter().map(|r| r.iid).collect();
assert_eq!(cli_result.total_count, 3, "CLI opened MR count");
assert_eq!(tui_page.total_count, 3, "TUI opened MR count");
assert_eq!(cli_iids, tui_iids, "Opened MR IIDs must match");
}
/// Shared field parity: verify overlapping fields agree between CLI and TUI.
///
/// CLI IssueListRow has more fields (discussion_count, assignees, web_url),
/// but the shared fields (iid, title, state, author) must be identical.
#[test]
fn test_issue_shared_fields_parity() {
let conn = test_conn();
seed_fixture(&conn);
let cli_result = query_issues(&conn, &default_issue_filters()).expect("CLI issues");
let tui_page = fetch_issue_list(
&conn,
&IssueFilter::default(),
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
.expect("TUI issues");
assert_eq!(
cli_result.issues.len(),
tui_page.rows.len(),
"row count must match"
);
for (cli_row, tui_row) in cli_result.issues.iter().zip(tui_page.rows.iter()) {
assert_eq!(cli_row.iid, tui_row.iid, "IID mismatch");
assert_eq!(
cli_row.title, tui_row.title,
"title mismatch for iid {}",
cli_row.iid
);
assert_eq!(
cli_row.state, tui_row.state,
"state mismatch for iid {}",
cli_row.iid
);
assert_eq!(
cli_row.author_username, tui_row.author,
"author mismatch for iid {}",
cli_row.iid
);
assert_eq!(
cli_row.project_path, tui_row.project_path,
"project_path mismatch for iid {}",
cli_row.iid
);
}
}
/// Sort order parity: ascending sort returns the same order in both paths.
#[test]
fn test_issue_list_parity_ascending_sort() {
let conn = test_conn();
seed_fixture(&conn);
// CLI: ascending by updated_at.
let mut cli_filters = default_issue_filters();
cli_filters.order = "asc";
let cli_result = query_issues(&conn, &cli_filters).expect("CLI asc issues");
let cli_iids: Vec<i64> = cli_result.issues.iter().map(|r| r.iid).collect();
// TUI: ascending by updated_at.
let tui_page = fetch_issue_list(
&conn,
&IssueFilter::default(),
SortField::UpdatedAt,
SortOrder::Asc,
None,
None,
)
.expect("TUI asc issues");
let tui_iids: Vec<i64> = tui_page.rows.iter().map(|r| r.iid).collect();
assert_eq!(
cli_iids, tui_iids,
"Ascending sort order must match.\nCLI: {cli_iids:?}\nTUI: {tui_iids:?}"
);
// Ascending: iid 1 has lowest updated_at.
assert_eq!(cli_iids[0], 1);
assert_eq!(*cli_iids.last().unwrap(), 10);
}
/// Empty database parity: both paths handle zero rows gracefully.
#[test]
fn test_empty_database_parity() {
let conn = test_conn();
// No seed — empty DB.
// Dashboard counts should all be zero.
let clock = frozen_clock();
let dashboard = fetch_dashboard(&conn, &clock).expect("fetch_dashboard empty");
assert_eq!(dashboard.counts.issues_total, 0);
assert_eq!(dashboard.counts.mrs_total, 0);
assert_eq!(dashboard.counts.discussions, 0);
assert_eq!(dashboard.counts.notes_total, 0);
// Issue list: both empty.
let cli_result = query_issues(&conn, &default_issue_filters()).expect("CLI empty");
let tui_page = fetch_issue_list(
&conn,
&IssueFilter::default(),
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
.expect("TUI empty");
assert_eq!(cli_result.total_count, 0);
assert_eq!(tui_page.total_count, 0);
assert!(cli_result.issues.is_empty());
assert!(tui_page.rows.is_empty());
// MR list: both empty.
let cli_mrs = query_mrs(&conn, &default_mr_filters()).expect("CLI empty MRs");
let tui_mrs = fetch_mr_list(
&conn,
&MrFilter::default(),
MrSortField::UpdatedAt,
MrSortOrder::Desc,
None,
None,
)
.expect("TUI empty MRs");
assert_eq!(cli_mrs.total_count, 0);
assert_eq!(tui_mrs.total_count, 0);
}
/// Sanitization: TUI safety module strips dangerous escape sequences
/// while preserving safe SGR. Both paths return raw data from the DB,
/// and the safety module is applied at the view layer.
#[test]
fn test_sanitization_dangerous_sequences_stripped() {
let conn = test_conn();
insert_project(&conn, 1, "group/repo");
// Dangerous title: cursor movement (CSI 2A = move up 2) + bidi override.
let dangerous_title = "normal\x1b[2Ahidden\u{202E}reversed";
insert_issue(
&conn,
&TestIssue {
id: 1,
project_id: 1,
iid: 1,
title: dangerous_title,
state: "opened",
author: "alice",
created_at: 1000,
updated_at: 2000,
},
);
// Both CLI and TUI data layers return raw titles.
let cli_result = query_issues(&conn, &default_issue_filters()).expect("CLI dangerous issue");
let tui_page = fetch_issue_list(
&conn,
&IssueFilter::default(),
SortField::UpdatedAt,
SortOrder::Desc,
None,
None,
)
.expect("TUI dangerous issue");
// Data layer parity: both return the raw title.
assert_eq!(cli_result.issues[0].title, dangerous_title);
assert_eq!(tui_page.rows[0].title, dangerous_title);
// Safety module strips dangerous sequences but preserves text.
use lore_tui::safety::{UrlPolicy, sanitize_for_terminal};
let sanitized = sanitize_for_terminal(&tui_page.rows[0].title, UrlPolicy::Strip);
// Cursor movement sequence (ESC[2A) should be stripped.
assert!(
!sanitized.contains('\x1b'),
"sanitized should have no ESC: {sanitized:?}"
);
// Bidi override (U+202E) should be stripped.
assert!(
!sanitized.contains('\u{202E}'),
"sanitized should have no bidi overrides: {sanitized:?}"
);
// Safe text should be preserved.
assert!(
sanitized.contains("normal"),
"should preserve 'normal': {sanitized:?}"
);
assert!(
sanitized.contains("hidden"),
"should preserve 'hidden': {sanitized:?}"
);
assert!(
sanitized.contains("reversed"),
"should preserve 'reversed': {sanitized:?}"
);
}


@@ -0,0 +1,572 @@
//! Performance benchmark fixtures with S/M/L tiered SLOs (bd-wnuo).
//!
//! Measures TUI update+render cycle time with synthetic data at three scales:
//! - **S-tier** (small): ~100 issues, 50 MRs — CI gate, strict SLOs
//! - **M-tier** (medium): ~1,000 issues, 500 MRs — CI gate, relaxed SLOs
//! - **L-tier** (large): ~5,000 issues, 2,000 MRs — advisory, no CI gate
//!
//! SLOs are measured in wall-clock time per operation (update or render).
//! Tests run 20 iterations and assert on the median to avoid flaky p95.
//!
//! These benchmarks measure TUI state/render performance, NOT database query time.
//! DB benchmarks belong in the root `lore` crate.
use std::time::{Duration, Instant};
use chrono::{TimeZone, Utc};
use ftui::Model;
use ftui::render::frame::Frame;
use ftui::render::grapheme_pool::GraphemePool;
use lore_tui::app::LoreApp;
use lore_tui::clock::FakeClock;
use lore_tui::message::{Msg, Screen};
use lore_tui::state::dashboard::{DashboardData, EntityCounts, LastSyncInfo, ProjectSyncInfo};
use lore_tui::state::issue_list::{IssueListPage, IssueListRow};
use lore_tui::state::mr_list::{MrListPage, MrListRow};
use lore_tui::task_supervisor::TaskKey;
// ---------------------------------------------------------------------------
// Constants
// ---------------------------------------------------------------------------
const RENDER_WIDTH: u16 = 120;
const RENDER_HEIGHT: u16 = 40;
const ITERATIONS: usize = 20;
// SLOs (median per operation).
// These are generous to avoid CI flakiness.
const SLO_UPDATE_S: Duration = Duration::from_millis(10);
const SLO_UPDATE_M: Duration = Duration::from_millis(50);
const SLO_RENDER_S: Duration = Duration::from_millis(20);
const SLO_RENDER_M: Duration = Duration::from_millis(50);
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
fn frozen_clock() -> FakeClock {
FakeClock::new(Utc.with_ymd_and_hms(2026, 1, 15, 12, 0, 0).unwrap())
}
fn test_app() -> LoreApp {
let mut app = LoreApp::new();
app.clock = Box::new(frozen_clock());
app
}
fn render_app(app: &LoreApp) {
let mut pool = GraphemePool::new();
let mut frame = Frame::new(RENDER_WIDTH, RENDER_HEIGHT, &mut pool);
app.view(&mut frame);
}
fn median(durations: &mut [Duration]) -> Duration {
durations.sort();
durations[durations.len() / 2]
}
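Every SLO assertion funnels through `median`; this added sanity test (not part of the original suite) pins down the indexing behaviour it relies on, including the upper-median tie-break on even-length input:

```rust
// Added sanity check (not in the original suite): after sorting, indexing at
// `len / 2` yields the middle element for odd lengths and the *upper* middle
// for even lengths; this is the behaviour `median` above relies on.
#[test]
fn median_indexing_behaviour() {
    let ms = std::time::Duration::from_millis;
    let mut xs = vec![ms(3), ms(1), ms(2), ms(4)];
    xs.sort();
    assert_eq!(xs[xs.len() / 2], ms(3)); // upper median of four values
    xs.push(ms(5));
    assert_eq!(xs[xs.len() / 2], ms(3)); // middle of five sorted values
}
```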
// ---------------------------------------------------------------------------
// Seeded fixture generators
// ---------------------------------------------------------------------------
/// Simple xorshift64 PRNG for deterministic fixtures.
struct Rng(u64);
impl Rng {
fn new(seed: u64) -> Self {
Self(seed.wrapping_add(1))
}
fn next(&mut self) -> u64 {
let mut x = self.0;
x ^= x << 13;
x ^= x >> 7;
x ^= x << 17;
self.0 = x;
x
}
fn range(&mut self, max: u64) -> u64 {
self.next() % max
}
}
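The xorshift step is a pure state transformation, so fixtures are fully reproducible; this added test (self-contained, mirroring `Rng::new`/`Rng::next` above rather than calling them) checks determinism per seed:

```rust
// Added determinism check (not in the original suite). The closure mirrors
// the xorshift64 step in `Rng::next` and the zero-guard in `Rng::new`.
#[test]
fn xorshift_fixture_seed_is_deterministic() {
    let step = |x: &mut u64| {
        *x ^= *x << 13;
        *x ^= *x >> 7;
        *x ^= *x << 17;
        *x
    };
    let seq = |seed: u64| {
        let mut s = seed.wrapping_add(1); // zero-guard: seed 0 must not stick
        (0..32).map(|_| step(&mut s)).collect::<Vec<u64>>()
    };
    assert_eq!(seq(7), seq(7), "same seed must give the same sequence");
    assert_ne!(seq(7), seq(8), "different seeds should diverge");
}
```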
const AUTHORS: &[&str] = &[
"alice", "bob", "carol", "dave", "eve", "frank", "grace", "heidi", "ivan", "judy", "karl",
"lucy", "mike", "nancy", "oscar", "peggy", "quinn", "ruth", "steve", "tina",
];
const LABELS: &[&str] = &[
"backend",
"frontend",
"infra",
"bug",
"feature",
"refactor",
"docs",
"ci",
"security",
"performance",
"ui",
"api",
"testing",
"devops",
"database",
];
const PROJECTS: &[&str] = &[
"infra/platform",
"web/frontend",
"api/backend",
"tools/scripts",
"data/pipeline",
];
fn random_author(rng: &mut Rng) -> String {
AUTHORS[rng.range(AUTHORS.len() as u64) as usize].to_string()
}
fn random_labels(rng: &mut Rng, max: usize) -> Vec<String> {
let count = rng.range(max as u64 + 1) as usize;
(0..count)
.map(|_| LABELS[rng.range(LABELS.len() as u64) as usize].to_string())
.collect()
}
fn random_project(rng: &mut Rng) -> String {
PROJECTS[rng.range(PROJECTS.len() as u64) as usize].to_string()
}
fn random_state(rng: &mut Rng) -> String {
match rng.range(10) {
0..=5 => "closed".to_string(),
6..=8 => "opened".to_string(),
_ => "merged".to_string(),
}
}
fn generate_issue_list(count: usize, seed: u64) -> IssueListPage {
let mut rng = Rng::new(seed);
let rows = (0..count)
.map(|i| IssueListRow {
project_path: random_project(&mut rng),
iid: (i + 1) as i64,
title: format!(
"{} {} for {} component",
if rng.range(2) == 0 { "Fix" } else { "Add" },
[
"retry logic",
"caching",
"validation",
"error handling",
"rate limiting"
][rng.range(5) as usize],
["auth", "payments", "search", "notifications", "dashboard"][rng.range(5) as usize]
),
state: random_state(&mut rng),
author: random_author(&mut rng),
labels: random_labels(&mut rng, 3),
updated_at: 1_736_900_000_000 + rng.range(100_000_000) as i64,
})
.collect();
IssueListPage {
rows,
next_cursor: None,
total_count: count as u64,
}
}
fn generate_mr_list(count: usize, seed: u64) -> MrListPage {
let mut rng = Rng::new(seed);
let rows = (0..count)
.map(|i| MrListRow {
project_path: random_project(&mut rng),
iid: (i + 1) as i64,
title: format!(
"{}: {} {} implementation",
if rng.range(3) == 0 { "WIP" } else { "feat" },
["Implement", "Refactor", "Update", "Fix", "Add"][rng.range(5) as usize],
["middleware", "service", "handler", "model", "view"][rng.range(5) as usize]
),
state: random_state(&mut rng),
author: random_author(&mut rng),
labels: random_labels(&mut rng, 2),
updated_at: 1_736_900_000_000 + rng.range(100_000_000) as i64,
draft: rng.range(5) == 0,
target_branch: "main".to_string(),
})
.collect();
MrListPage {
rows,
next_cursor: None,
total_count: count as u64,
}
}
fn generate_dashboard_data(
issues_total: u64,
mrs_total: u64,
project_count: usize,
) -> DashboardData {
let mut rng = Rng::new(42);
DashboardData {
counts: EntityCounts {
issues_total,
issues_open: issues_total * 3 / 10,
mrs_total,
mrs_open: mrs_total / 5,
discussions: issues_total * 3,
notes_total: issues_total * 8,
notes_system_pct: 18,
documents: issues_total * 2,
embeddings: issues_total,
},
projects: (0..project_count)
.map(|_| ProjectSyncInfo {
path: random_project(&mut rng),
minutes_since_sync: rng.range(60),
})
.collect(),
recent: vec![],
last_sync: Some(LastSyncInfo {
status: "succeeded".into(),
finished_at: Some(1_736_942_100_000),
command: "sync".into(),
error: None,
}),
}
}
// ---------------------------------------------------------------------------
// Benchmark runner
// ---------------------------------------------------------------------------
fn bench_update(app: &mut LoreApp, msg_factory: impl Fn() -> Msg) -> Duration {
let mut durations = Vec::with_capacity(ITERATIONS);
for _ in 0..ITERATIONS {
let msg = msg_factory();
let start = Instant::now();
app.update(msg);
durations.push(start.elapsed());
}
median(&mut durations)
}
fn bench_render(app: &LoreApp) -> Duration {
let mut durations = Vec::with_capacity(ITERATIONS);
for _ in 0..ITERATIONS {
let start = Instant::now();
render_app(app);
durations.push(start.elapsed());
}
median(&mut durations)
}
// ---------------------------------------------------------------------------
// S-Tier Benchmarks (100 issues, 50 MRs)
// ---------------------------------------------------------------------------
#[test]
fn bench_s_tier_dashboard_update() {
let mut app = test_app();
let data = generate_dashboard_data(100, 50, 5);
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
let med = bench_update(&mut app, || Msg::DashboardLoaded {
generation,
data: Box::new(data.clone()),
});
eprintln!("S-tier dashboard update median: {med:?}");
assert!(
med < SLO_UPDATE_S,
"S-tier dashboard update {med:?} exceeds SLO {SLO_UPDATE_S:?}"
);
}
#[test]
fn bench_s_tier_issue_list_update() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let med = bench_update(&mut app, || Msg::IssueListLoaded {
generation,
page: generate_issue_list(100, 1),
});
eprintln!("S-tier issue list update median: {med:?}");
assert!(
med < SLO_UPDATE_S,
"S-tier issue list update {med:?} exceeds SLO {SLO_UPDATE_S:?}"
);
}
#[test]
fn bench_s_tier_mr_list_update() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::MrList))
.generation;
let med = bench_update(&mut app, || Msg::MrListLoaded {
generation,
page: generate_mr_list(50, 2),
});
eprintln!("S-tier MR list update median: {med:?}");
assert!(
med < SLO_UPDATE_S,
"S-tier MR list update {med:?} exceeds SLO {SLO_UPDATE_S:?}"
);
}
#[test]
fn bench_s_tier_dashboard_render() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
app.update(Msg::DashboardLoaded {
generation,
data: Box::new(generate_dashboard_data(100, 50, 5)),
});
let med = bench_render(&app);
eprintln!("S-tier dashboard render median: {med:?}");
assert!(
med < SLO_RENDER_S,
"S-tier dashboard render {med:?} exceeds SLO {SLO_RENDER_S:?}"
);
}
#[test]
fn bench_s_tier_issue_list_render() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
app.update(Msg::NavigateTo(Screen::IssueList));
app.update(Msg::IssueListLoaded {
generation,
page: generate_issue_list(100, 1),
});
let med = bench_render(&app);
eprintln!("S-tier issue list render median: {med:?}");
assert!(
med < SLO_RENDER_S,
"S-tier issue list render {med:?} exceeds SLO {SLO_RENDER_S:?}"
);
}
// ---------------------------------------------------------------------------
// M-Tier Benchmarks (1,000 issues, 500 MRs)
// ---------------------------------------------------------------------------
#[test]
fn bench_m_tier_dashboard_update() {
let mut app = test_app();
let data = generate_dashboard_data(1_000, 500, 10);
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
let med = bench_update(&mut app, || Msg::DashboardLoaded {
generation,
data: Box::new(data.clone()),
});
eprintln!("M-tier dashboard update median: {med:?}");
assert!(
med < SLO_UPDATE_M,
"M-tier dashboard update {med:?} exceeds SLO {SLO_UPDATE_M:?}"
);
}
#[test]
fn bench_m_tier_issue_list_update() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let med = bench_update(&mut app, || Msg::IssueListLoaded {
generation,
page: generate_issue_list(1_000, 10),
});
eprintln!("M-tier issue list update median: {med:?}");
assert!(
med < SLO_UPDATE_M,
"M-tier issue list update {med:?} exceeds SLO {SLO_UPDATE_M:?}"
);
}
#[test]
fn bench_m_tier_mr_list_update() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::MrList))
.generation;
let med = bench_update(&mut app, || Msg::MrListLoaded {
generation,
page: generate_mr_list(500, 20),
});
eprintln!("M-tier MR list update median: {med:?}");
assert!(
med < SLO_UPDATE_M,
"M-tier MR list update {med:?} exceeds SLO {SLO_UPDATE_M:?}"
);
}
#[test]
fn bench_m_tier_dashboard_render() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
app.update(Msg::DashboardLoaded {
generation,
data: Box::new(generate_dashboard_data(1_000, 500, 10)),
});
let med = bench_render(&app);
eprintln!("M-tier dashboard render median: {med:?}");
assert!(
med < SLO_RENDER_M,
"M-tier dashboard render {med:?} exceeds SLO {SLO_RENDER_M:?}"
);
}
#[test]
fn bench_m_tier_issue_list_render() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
app.update(Msg::NavigateTo(Screen::IssueList));
app.update(Msg::IssueListLoaded {
generation,
page: generate_issue_list(1_000, 10),
});
let med = bench_render(&app);
eprintln!("M-tier issue list render median: {med:?}");
assert!(
med < SLO_RENDER_M,
"M-tier issue list render {med:?} exceeds SLO {SLO_RENDER_M:?}"
);
}
// ---------------------------------------------------------------------------
// L-Tier Benchmarks (5,000 issues, 2,000 MRs) — advisory, not CI gate
// ---------------------------------------------------------------------------
#[test]
fn bench_l_tier_issue_list_update() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let med = bench_update(&mut app, || Msg::IssueListLoaded {
generation,
page: generate_issue_list(5_000, 100),
});
// Advisory — log but don't fail CI.
eprintln!("L-tier issue list update median: {med:?} (advisory, no SLO gate)");
}
#[test]
fn bench_l_tier_issue_list_render() {
let mut app = test_app();
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
app.update(Msg::NavigateTo(Screen::IssueList));
app.update(Msg::IssueListLoaded {
generation,
page: generate_issue_list(5_000, 100),
});
let med = bench_render(&app);
eprintln!("L-tier issue list render median: {med:?} (advisory, no SLO gate)");
}
// ---------------------------------------------------------------------------
// Combined update+render cycle benchmarks
// ---------------------------------------------------------------------------
#[test]
fn bench_full_cycle_s_tier() {
let mut app = test_app();
let mut durations = Vec::with_capacity(ITERATIONS);
for i in 0..ITERATIONS {
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let page = generate_issue_list(100, i as u64 + 500);
let start = Instant::now();
app.update(Msg::IssueListLoaded { generation, page });
render_app(&app);
durations.push(start.elapsed());
}
let med = median(&mut durations);
eprintln!("S-tier full cycle (update+render) median: {med:?}");
assert!(
med < SLO_UPDATE_S + SLO_RENDER_S,
"S-tier full cycle {med:?} exceeds combined SLO {:?}",
SLO_UPDATE_S + SLO_RENDER_S
);
}
#[test]
fn bench_full_cycle_m_tier() {
let mut app = test_app();
let mut durations = Vec::with_capacity(ITERATIONS);
for i in 0..ITERATIONS {
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let page = generate_issue_list(1_000, i as u64 + 500);
let start = Instant::now();
app.update(Msg::IssueListLoaded { generation, page });
render_app(&app);
durations.push(start.elapsed());
}
let med = median(&mut durations);
eprintln!("M-tier full cycle (update+render) median: {med:?}");
assert!(
med < SLO_UPDATE_M + SLO_RENDER_M,
"M-tier full cycle {med:?} exceeds combined SLO {:?}",
SLO_UPDATE_M + SLO_RENDER_M
);
}


@@ -0,0 +1,668 @@
//! Race condition and reliability tests (bd-3fjk).
//!
//! Verifies the TUI handles async race conditions correctly:
//! - Stale responses from superseded tasks are silently dropped
//! - SQLITE_BUSY errors surface a user-friendly toast
//! - Cancel/resubmit sequences don't leave stuck loading states
//! - InterruptHandle only cancels its owning task's connection
//! - Rapid submit/cancel sequences (5 in quick succession) converge correctly
use std::sync::Arc;
use chrono::{TimeZone, Utc};
use ftui::Model;
use lore_tui::app::LoreApp;
use lore_tui::clock::FakeClock;
use lore_tui::message::{AppError, EntityKey, Msg, Screen};
use lore_tui::state::dashboard::{DashboardData, EntityCounts, LastSyncInfo, ProjectSyncInfo};
use lore_tui::state::issue_list::{IssueListPage, IssueListRow};
use lore_tui::state::mr_list::{MrListPage, MrListRow};
use lore_tui::task_supervisor::{CancelToken, TaskKey};
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
fn frozen_clock() -> FakeClock {
FakeClock::new(Utc.with_ymd_and_hms(2026, 1, 15, 12, 0, 0).unwrap())
}
fn test_app() -> LoreApp {
let mut app = LoreApp::new();
app.clock = Box::new(frozen_clock());
app
}
fn fixture_dashboard_data() -> DashboardData {
DashboardData {
counts: EntityCounts {
issues_total: 42,
issues_open: 15,
mrs_total: 28,
mrs_open: 7,
discussions: 120,
notes_total: 350,
notes_system_pct: 18,
documents: 85,
embeddings: 200,
},
projects: vec![ProjectSyncInfo {
path: "infra/platform".into(),
minutes_since_sync: 5,
}],
recent: vec![],
last_sync: Some(LastSyncInfo {
status: "succeeded".into(),
finished_at: Some(1_736_942_100_000),
command: "sync".into(),
error: None,
}),
}
}
fn fixture_issue_list(count: usize) -> IssueListPage {
let rows: Vec<IssueListRow> = (0..count)
.map(|i| IssueListRow {
project_path: "infra/platform".into(),
iid: (100 + i) as i64,
title: format!("Issue {i}"),
state: "opened".into(),
author: "alice".into(),
labels: vec![],
updated_at: 1_736_942_000_000,
})
.collect();
IssueListPage {
total_count: count as u64,
next_cursor: None,
rows,
}
}
fn fixture_mr_list(count: usize) -> MrListPage {
let rows: Vec<MrListRow> = (0..count)
.map(|i| MrListRow {
project_path: "infra/platform".into(),
iid: (200 + i) as i64,
title: format!("MR {i}"),
state: "opened".into(),
author: "bob".into(),
labels: vec![],
updated_at: 1_736_942_000_000,
draft: false,
target_branch: "main".into(),
})
.collect();
MrListPage {
total_count: count as u64,
next_cursor: None,
rows,
}
}
fn load_dashboard(app: &mut LoreApp) {
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
app.update(Msg::DashboardLoaded {
generation,
data: Box::new(fixture_dashboard_data()),
});
}
// ---------------------------------------------------------------------------
// Stale Response Tests
// ---------------------------------------------------------------------------
/// Stale response with old generation is silently dropped.
///
/// Submit task A (gen N), then task B (gen M > N) with the same key.
/// Delivering a result with generation N should be a no-op.
#[test]
fn test_stale_issue_list_response_dropped() {
let mut app = test_app();
load_dashboard(&mut app);
// Submit first task — get generation A.
let gen_a = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
// Submit second task (same key) — get generation B, cancels A.
let gen_b = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
assert!(gen_b > gen_a, "Generation B should be newer than A");
// Deliver stale result with gen_a — should be silently dropped.
app.update(Msg::IssueListLoaded {
generation: gen_a,
page: fixture_issue_list(5),
});
assert_eq!(
app.state.issue_list.rows.len(),
0,
"Stale result should not populate state"
);
// Deliver fresh result with gen_b — should be applied.
app.update(Msg::IssueListLoaded {
generation: gen_b,
page: fixture_issue_list(3),
});
assert_eq!(
app.state.issue_list.rows.len(),
3,
"Current-generation result should be applied"
);
}
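The pattern these tests exercise can be reduced to a few lines; this illustrative sketch (an assumption about the mechanism, not LoreApp's actual code) shows the monotonic-generation guard in isolation:

```rust
// Illustrative sketch (assumed shape, not LoreApp's actual implementation):
// each submit bumps a counter, and a delivered result is applied only when
// it carries the latest generation for its key.
#[test]
fn generation_guard_sketch() {
    struct Guard {
        latest: u64,
        applied: Option<u64>,
    }
    impl Guard {
        fn submit(&mut self) -> u64 {
            self.latest += 1;
            self.latest
        }
        fn deliver(&mut self, generation: u64, value: u64) {
            if generation == self.latest {
                self.applied = Some(value); // current generation: apply
            } // stale generation: silently dropped
        }
    }
    let mut g = Guard { latest: 0, applied: None };
    let gen_a = g.submit();
    let gen_b = g.submit(); // supersedes gen_a
    g.deliver(gen_a, 5);
    assert_eq!(g.applied, None, "stale result must be dropped");
    g.deliver(gen_b, 3);
    assert_eq!(g.applied, Some(3), "current result must be applied");
}
```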
/// Stale dashboard response dropped after navigation triggers re-load.
#[test]
fn test_stale_dashboard_response_dropped() {
let mut app = test_app();
// First load.
let gen_old = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
// Simulate re-navigation (new load).
let gen_new = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
// Deliver old generation — should not apply.
let mut old_data = fixture_dashboard_data();
old_data.counts.issues_total = 999;
app.update(Msg::DashboardLoaded {
generation: gen_old,
data: Box::new(old_data),
});
assert_eq!(
app.state.dashboard.counts.issues_total, 0,
"Stale dashboard data should not be applied"
);
// Deliver current generation — should apply.
app.update(Msg::DashboardLoaded {
generation: gen_new,
data: Box::new(fixture_dashboard_data()),
});
assert_eq!(
app.state.dashboard.counts.issues_total, 42,
"Current dashboard data should be applied"
);
}
/// MR list stale response dropped correctly.
#[test]
fn test_stale_mr_list_response_dropped() {
let mut app = test_app();
load_dashboard(&mut app);
let gen_a = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::MrList))
.generation;
let gen_b = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::MrList))
.generation;
// Stale.
app.update(Msg::MrListLoaded {
generation: gen_a,
page: fixture_mr_list(10),
});
assert_eq!(app.state.mr_list.rows.len(), 0);
// Current.
app.update(Msg::MrListLoaded {
generation: gen_b,
page: fixture_mr_list(2),
});
assert_eq!(app.state.mr_list.rows.len(), 2);
}
/// Stale result for one screen does not interfere with another screen's data.
#[test]
fn test_stale_response_cross_screen_isolation() {
let mut app = test_app();
// Submit tasks for two different screens.
let gen_issues = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let gen_mrs = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::MrList))
.generation;
// Deliver issue list results.
app.update(Msg::IssueListLoaded {
generation: gen_issues,
page: fixture_issue_list(5),
});
assert_eq!(app.state.issue_list.rows.len(), 5);
// MR list should still be empty — different key.
assert_eq!(app.state.mr_list.rows.len(), 0);
// Deliver MR list results.
app.update(Msg::MrListLoaded {
generation: gen_mrs,
page: fixture_mr_list(3),
});
assert_eq!(app.state.mr_list.rows.len(), 3);
// Issue list should be unchanged.
assert_eq!(app.state.issue_list.rows.len(), 5);
}
// ---------------------------------------------------------------------------
// SQLITE_BUSY Error Handling Tests
// ---------------------------------------------------------------------------
/// DbBusy error shows user-friendly toast with "busy" in message.
#[test]
fn test_db_busy_shows_toast() {
let mut app = test_app();
load_dashboard(&mut app);
app.update(Msg::Error(AppError::DbBusy));
assert!(
app.state.error_toast.is_some(),
"DbBusy should produce an error toast"
);
assert!(
app.state.error_toast.as_ref().unwrap().contains("busy"),
"Toast should mention 'busy'"
);
}
/// DbBusy error does not crash or alter navigation state.
#[test]
fn test_db_busy_preserves_navigation() {
let mut app = test_app();
load_dashboard(&mut app);
app.update(Msg::NavigateTo(Screen::IssueList));
assert!(app.navigation.is_at(&Screen::IssueList));
// DbBusy should not change screen.
app.update(Msg::Error(AppError::DbBusy));
assert!(
app.navigation.is_at(&Screen::IssueList),
"DbBusy error should not alter navigation"
);
}
/// Multiple consecutive DbBusy errors don't stack — last message wins.
#[test]
fn test_db_busy_toast_idempotent() {
let mut app = test_app();
load_dashboard(&mut app);
app.update(Msg::Error(AppError::DbBusy));
app.update(Msg::Error(AppError::DbBusy));
app.update(Msg::Error(AppError::DbBusy));
// error_toast is an Option, so repeated errors overwrite rather than stack.
assert!(app.state.error_toast.is_some());
assert!(app.state.error_toast.as_ref().unwrap().contains("busy"));
}
/// DbBusy followed by successful load clears the error.
#[test]
fn test_db_busy_then_success_clears_error() {
let mut app = test_app();
load_dashboard(&mut app);
app.update(Msg::Error(AppError::DbBusy));
assert!(app.state.error_toast.is_some());
// Successful load comes in.
let gen_ok = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
app.update(Msg::IssueListLoaded {
generation: gen_ok,
page: fixture_issue_list(3),
});
// Error toast should still be set (it's not auto-cleared by data loads).
// The user explicitly dismisses it via key press.
// What matters is the data was applied despite the prior error.
assert_eq!(
app.state.issue_list.rows.len(),
3,
"Data load should succeed after DbBusy error"
);
}
// ---------------------------------------------------------------------------
// Cancel Race Tests
// ---------------------------------------------------------------------------
/// Submit, cancel via token, resubmit: new task proceeds normally.
#[test]
fn test_cancel_then_resubmit_works() {
let mut app = test_app();
// Submit first task and capture its cancel token.
let gen1 = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let token1 = app
.supervisor
.active_cancel_token(&TaskKey::LoadScreen(Screen::IssueList))
.expect("Should have active handle");
assert!(!token1.is_cancelled());
// Resubmit with same key — old token should be cancelled.
let gen2 = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
assert!(
token1.is_cancelled(),
"Old token should be cancelled on resubmit"
);
assert!(gen2 > gen1);
// Deliver result for new task.
app.update(Msg::IssueListLoaded {
generation: gen2,
page: fixture_issue_list(4),
});
assert_eq!(app.state.issue_list.rows.len(), 4);
}
/// Rapid sequence: 5 submit cycles for the same key.
/// Only the last generation should be accepted.
#[test]
fn test_rapid_submit_sequence_only_last_wins() {
let mut app = test_app();
let mut tokens: Vec<Arc<CancelToken>> = Vec::new();
let mut generations: Vec<u64> = Vec::new();
// Rapidly submit 5 tasks with the same key.
for _ in 0..5 {
let g = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let token = app
.supervisor
.active_cancel_token(&TaskKey::LoadScreen(Screen::IssueList))
.expect("Should have active handle");
generations.push(g);
tokens.push(token);
}
// All tokens except the last should be cancelled.
for (i, token) in tokens.iter().enumerate() {
if i < 4 {
assert!(token.is_cancelled(), "Token {i} should be cancelled");
} else {
assert!(!token.is_cancelled(), "Last token should still be active");
}
}
// Deliver results for each generation — only the last should apply.
for (i, g) in generations.iter().enumerate() {
let count = (i + 1) * 10;
app.update(Msg::IssueListLoaded {
generation: *g,
page: fixture_issue_list(count),
});
}
// Only the last (50 rows) should have been applied.
assert_eq!(
app.state.issue_list.rows.len(),
50,
"Only the last generation's data should be applied"
);
}
/// Cancel token from one key does not affect tasks with different keys.
#[test]
fn test_cancel_token_key_isolation() {
let mut app = test_app();
// Submit tasks for two different keys.
app.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList));
let issue_token = app
.supervisor
.active_cancel_token(&TaskKey::LoadScreen(Screen::IssueList))
.expect("issue handle");
app.supervisor.submit(TaskKey::LoadScreen(Screen::MrList));
let mr_token = app
.supervisor
.active_cancel_token(&TaskKey::LoadScreen(Screen::MrList))
.expect("mr handle");
// Resubmit only the issue task — should cancel issue token but not MR.
app.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList));
assert!(
issue_token.is_cancelled(),
"Issue token should be cancelled"
);
assert!(!mr_token.is_cancelled(), "MR token should NOT be cancelled");
}
/// After completing a task, the handle is removed and is_current returns false.
#[test]
fn test_complete_removes_handle() {
let mut app = test_app();
let gen_c = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
assert!(
app.supervisor
.is_current(&TaskKey::LoadScreen(Screen::IssueList), gen_c)
);
// Complete the task.
app.supervisor
.complete(&TaskKey::LoadScreen(Screen::IssueList), gen_c);
assert!(
!app.supervisor
.is_current(&TaskKey::LoadScreen(Screen::IssueList), gen_c),
"Handle should be removed after completion"
);
assert_eq!(app.supervisor.active_count(), 0);
}
/// Completing with a stale generation does not remove the newer handle.
#[test]
fn test_complete_stale_does_not_remove_newer() {
let mut app = test_app();
let gen1 = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
let gen2 = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
// Completing with old generation should be a no-op.
app.supervisor
.complete(&TaskKey::LoadScreen(Screen::IssueList), gen1);
assert!(
app.supervisor
.is_current(&TaskKey::LoadScreen(Screen::IssueList), gen2),
"Newer handle should survive stale completion"
);
}
/// No stuck loading state after cancel-then-resubmit through the full app.
#[test]
fn test_no_stuck_loading_after_cancel_resubmit() {
let mut app = test_app();
load_dashboard(&mut app);
// Navigate to issue list — sets LoadingInitial.
app.update(Msg::NavigateTo(Screen::IssueList));
assert!(app.navigation.is_at(&Screen::IssueList));
// Re-navigate (resubmit) — cancels old, creates new.
app.update(Msg::NavigateTo(Screen::IssueList));
// Submit once more to obtain the current generation, then deliver its result.
let gen_cur = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
app.update(Msg::IssueListLoaded {
generation: gen_cur,
page: fixture_issue_list(3),
});
// Data should be applied and loading should be idle.
assert_eq!(app.state.issue_list.rows.len(), 3);
}
/// cancel_all cancels all active tasks.
#[test]
fn test_cancel_all_cancels_everything() {
let mut app = test_app();
app.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList));
let t1 = app
.supervisor
.active_cancel_token(&TaskKey::LoadScreen(Screen::IssueList))
.expect("handle");
app.supervisor.submit(TaskKey::LoadScreen(Screen::MrList));
let t2 = app
.supervisor
.active_cancel_token(&TaskKey::LoadScreen(Screen::MrList))
.expect("handle");
app.supervisor.submit(TaskKey::SyncStream);
let t3 = app
.supervisor
.active_cancel_token(&TaskKey::SyncStream)
.expect("handle");
app.supervisor.cancel_all();
assert!(t1.is_cancelled());
assert!(t2.is_cancelled());
assert!(t3.is_cancelled());
assert_eq!(app.supervisor.active_count(), 0);
}
// ---------------------------------------------------------------------------
// Issue Detail Stale Guard (entity-keyed screens)
// ---------------------------------------------------------------------------
/// Stale issue detail response is dropped when a newer load supersedes it.
#[test]
fn test_stale_issue_detail_response_dropped() {
let mut app = test_app();
load_dashboard(&mut app);
let key = EntityKey::issue(1, 101);
let screen = Screen::IssueDetail(key.clone());
let gen_old = app
.supervisor
.submit(TaskKey::LoadScreen(screen.clone()))
.generation;
let gen_new = app
.supervisor
.submit(TaskKey::LoadScreen(screen))
.generation;
// Deliver stale response.
app.update(Msg::IssueDetailLoaded {
generation: gen_old,
key: key.clone(),
data: Box::new(lore_tui::state::issue_detail::IssueDetailData {
metadata: lore_tui::state::issue_detail::IssueMetadata {
iid: 101,
project_path: "infra/platform".into(),
title: "STALE TITLE".into(),
description: String::new(),
state: "opened".into(),
author: "alice".into(),
assignees: vec![],
labels: vec![],
milestone: None,
due_date: None,
created_at: 0,
updated_at: 0,
web_url: String::new(),
discussion_count: 0,
},
cross_refs: vec![],
}),
});
// Stale — metadata should NOT be populated with "STALE TITLE".
assert_ne!(
app.state
.issue_detail
.metadata
.as_ref()
.map(|m| m.title.as_str()),
Some("STALE TITLE"),
"Stale issue detail should be dropped"
);
// Deliver current response.
app.update(Msg::IssueDetailLoaded {
generation: gen_new,
key,
data: Box::new(lore_tui::state::issue_detail::IssueDetailData {
metadata: lore_tui::state::issue_detail::IssueMetadata {
iid: 101,
project_path: "infra/platform".into(),
title: "CURRENT TITLE".into(),
description: String::new(),
state: "opened".into(),
author: "alice".into(),
assignees: vec![],
labels: vec![],
milestone: None,
due_date: None,
created_at: 0,
updated_at: 0,
web_url: String::new(),
discussion_count: 0,
},
cross_refs: vec![],
}),
});
assert_eq!(
app.state
.issue_detail
.metadata
.as_ref()
.map(|m| m.title.as_str()),
Some("CURRENT TITLE"),
"Current generation detail should be applied"
);
}


@@ -0,0 +1,453 @@
//! Snapshot tests for deterministic TUI rendering.
//!
//! Each test renders a screen at a fixed terminal size (120x40) with
//! FakeClock frozen at 2026-01-15T12:00:00Z, then compares the plain-text
//! output against a golden file in `tests/snapshots/`.
//!
//! To update golden files after intentional changes:
//! UPDATE_SNAPSHOTS=1 cargo test -p lore-tui snapshot
//!
//! Golden files are UTF-8 plain text with LF line endings, diffable in VCS.
use std::path::PathBuf;
use chrono::{TimeZone, Utc};
use ftui::Model;
use ftui::render::frame::Frame;
use ftui::render::grapheme_pool::GraphemePool;
use lore_tui::app::LoreApp;
use lore_tui::clock::FakeClock;
use lore_tui::message::{EntityKey, Msg, Screen, SearchResult};
use lore_tui::state::dashboard::{DashboardData, EntityCounts, LastSyncInfo, ProjectSyncInfo};
use lore_tui::state::issue_detail::{IssueDetailData, IssueMetadata};
use lore_tui::state::issue_list::{IssueListPage, IssueListRow};
use lore_tui::state::mr_list::{MrListPage, MrListRow};
use lore_tui::task_supervisor::TaskKey;
// ---------------------------------------------------------------------------
// Constants
// ---------------------------------------------------------------------------
/// Fixed terminal size for all snapshot tests.
const WIDTH: u16 = 120;
const HEIGHT: u16 = 40;
/// Frozen clock epoch: 2026-01-15T12:00:00Z.
fn frozen_clock() -> FakeClock {
FakeClock::new(Utc.with_ymd_and_hms(2026, 1, 15, 12, 0, 0).unwrap())
}
/// Path to the snapshots directory (relative to crate root).
fn snapshots_dir() -> PathBuf {
PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("tests/snapshots")
}
// ---------------------------------------------------------------------------
// Buffer serializer
// ---------------------------------------------------------------------------
/// Serialize a Frame's buffer to plain text.
///
/// - Direct chars are rendered as-is.
/// - Grapheme references are resolved via the pool.
/// - Continuation cells (wide char trailing cells) are skipped.
/// - Empty cells become spaces.
/// - Each row is right-trimmed and joined with '\n'.
fn serialize_frame(frame: &Frame<'_>) -> String {
let w = frame.buffer.width();
let h = frame.buffer.height();
let mut lines = Vec::with_capacity(h as usize);
for y in 0..h {
let mut row = String::with_capacity(w as usize);
for x in 0..w {
if let Some(cell) = frame.buffer.get(x, y) {
let content = cell.content;
if content.is_continuation() {
// Skip — part of a wide character already rendered.
continue;
} else if content.is_empty() {
row.push(' ');
} else if let Some(ch) = content.as_char() {
row.push(ch);
} else if let Some(gid) = content.grapheme_id() {
if let Some(grapheme) = frame.pool.get(gid) {
row.push_str(grapheme);
} else {
row.push('?'); // Fallback for unresolved grapheme.
}
} else {
row.push(' ');
}
} else {
row.push(' ');
}
}
lines.push(row.trim_end().to_string());
}
// Trim trailing empty lines.
while lines.last().is_some_and(|l| l.is_empty()) {
lines.pop();
}
let mut result = lines.join("\n");
result.push('\n'); // Trailing newline for VCS friendliness.
result
}
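The row-finalization rules above (right-trim each row, drop trailing blank rows, terminate with a single newline) can be sketched standalone, independent of the ftui buffer types; `finalize_rows` is a hypothetical helper for illustration, not part of the crate:

```rust
/// Hypothetical standalone sketch of serialize_frame's trimming rules:
/// right-trim each row, drop trailing empty rows, end with one '\n'.
fn finalize_rows(lines: Vec<String>) -> String {
    let mut lines: Vec<String> = lines
        .into_iter()
        .map(|l| l.trim_end().to_string())
        .collect();
    // Trim trailing empty lines so golden files stay stable across runs.
    while lines.last().is_some_and(|l| l.is_empty()) {
        lines.pop();
    }
    let mut out = lines.join("\n");
    out.push('\n'); // Trailing newline for VCS friendliness.
    out
}

fn main() {
    let rows = vec![
        "Dashboard   ".to_string(),
        "  Issues: 15".to_string(),
        "   ".to_string(),
        String::new(),
    ];
    assert_eq!(finalize_rows(rows), "Dashboard\n  Issues: 15\n");
    println!("ok");
}
```

Interior blank rows survive; only the trailing run of empty rows is dropped, which keeps vertical layout intact while preventing spurious diffs from terminal-height padding.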
// ---------------------------------------------------------------------------
// Snapshot assertion
// ---------------------------------------------------------------------------
/// Compare rendered output against a golden file.
///
/// If `UPDATE_SNAPSHOTS=1` is set, overwrites the golden file instead.
/// On mismatch, prints a clear diff showing expected vs actual.
fn assert_snapshot(name: &str, actual: &str) {
let path = snapshots_dir().join(format!("{name}.snap"));
if std::env::var("UPDATE_SNAPSHOTS").is_ok() {
std::fs::write(&path, actual).unwrap_or_else(|e| {
panic!("Failed to write snapshot {}: {e}", path.display());
});
eprintln!("Updated snapshot: {}", path.display());
return;
}
if !path.exists() {
panic!(
"Golden file missing: {}\n\
Run with UPDATE_SNAPSHOTS=1 to create it.\n\
Actual output:\n{}",
path.display(),
actual
);
}
let expected = std::fs::read_to_string(&path).unwrap_or_else(|e| {
panic!("Failed to read snapshot {}: {e}", path.display());
});
if actual != expected {
// Print a useful diff.
let actual_lines: Vec<&str> = actual.lines().collect();
let expected_lines: Vec<&str> = expected.lines().collect();
let max = actual_lines.len().max(expected_lines.len());
let mut diff = String::new();
for i in 0..max {
let a = actual_lines.get(i).copied().unwrap_or("");
let e = expected_lines.get(i).copied().unwrap_or("");
if a != e {
diff.push_str(&format!(" line {i:3}: expected: {e:?}\n"));
diff.push_str(&format!(" line {i:3}: actual: {a:?}\n"));
}
}
panic!(
"Snapshot mismatch: {}\n\
Run with UPDATE_SNAPSHOTS=1 to update.\n\n\
Differences:\n{diff}",
path.display()
);
}
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
fn test_app() -> LoreApp {
let mut app = LoreApp::new();
app.clock = Box::new(frozen_clock());
app
}
fn render_app(app: &LoreApp) -> String {
let mut pool = GraphemePool::new();
let mut frame = Frame::new(WIDTH, HEIGHT, &mut pool);
app.view(&mut frame);
serialize_frame(&frame)
}
// -- Synthetic data fixtures ------------------------------------------------
fn fixture_dashboard_data() -> DashboardData {
DashboardData {
counts: EntityCounts {
issues_total: 42,
issues_open: 15,
mrs_total: 28,
mrs_open: 7,
discussions: 120,
notes_total: 350,
notes_system_pct: 18,
documents: 85,
embeddings: 200,
},
projects: vec![
ProjectSyncInfo {
path: "infra/platform".into(),
minutes_since_sync: 5,
},
ProjectSyncInfo {
path: "web/frontend".into(),
minutes_since_sync: 12,
},
ProjectSyncInfo {
path: "api/backend".into(),
minutes_since_sync: 8,
},
ProjectSyncInfo {
path: "tools/scripts".into(),
minutes_since_sync: 4,
},
],
recent: vec![],
last_sync: Some(LastSyncInfo {
status: "succeeded".into(),
// 2026-01-15T11:55:00Z — 5 min before frozen clock.
finished_at: Some(1_768_478_100_000),
command: "sync".into(),
error: None,
}),
}
}
fn fixture_issue_list() -> IssueListPage {
IssueListPage {
rows: vec![
IssueListRow {
project_path: "infra/platform".into(),
iid: 101,
title: "Add retry logic for transient failures".into(),
state: "opened".into(),
author: "alice".into(),
labels: vec!["backend".into(), "reliability".into()],
updated_at: 1_768_478_100_000, // 5 min before frozen
},
IssueListRow {
project_path: "web/frontend".into(),
iid: 55,
title: "Dark mode toggle not persisting across sessions".into(),
state: "opened".into(),
author: "bob".into(),
labels: vec!["ui".into(), "bug".into()],
updated_at: 1_768_474_800_000, // 1 hr before frozen
},
IssueListRow {
project_path: "api/backend".into(),
iid: 203,
title: "Migrate user service to async runtime".into(),
state: "closed".into(),
author: "carol".into(),
labels: vec!["backend".into(), "refactor".into()],
updated_at: 1_768_392_000_000, // 1 day before frozen
},
],
next_cursor: None,
total_count: 3,
}
}
fn fixture_issue_detail() -> IssueDetailData {
IssueDetailData {
metadata: IssueMetadata {
iid: 101,
project_path: "infra/platform".into(),
title: "Add retry logic for transient failures".into(),
description: "## Problem\n\nTransient network failures cause cascading \
errors in the ingestion pipeline. We need exponential \
backoff with jitter.\n\n## Approach\n\n1. Wrap HTTP calls \
in a retry decorator\n2. Use exponential backoff (base 1s, \
max 30s)\n3. Add jitter to prevent thundering herd"
.into(),
state: "opened".into(),
author: "alice".into(),
assignees: vec!["bob".into(), "carol".into()],
labels: vec!["backend".into(), "reliability".into()],
milestone: Some("v2.0".into()),
due_date: Some("2026-02-01".into()),
created_at: 1_768_392_000_000, // 1 day before frozen
updated_at: 1_768_478_100_000,
web_url: "https://gitlab.com/infra/platform/-/issues/101".into(),
discussion_count: 3,
},
cross_refs: vec![],
}
}
fn fixture_mr_list() -> MrListPage {
MrListPage {
rows: vec![
MrListRow {
project_path: "infra/platform".into(),
iid: 42,
title: "Implement exponential backoff for HTTP client".into(),
state: "opened".into(),
author: "bob".into(),
labels: vec!["backend".into()],
updated_at: 1_768_478_100_000,
draft: false,
target_branch: "main".into(),
},
MrListRow {
project_path: "web/frontend".into(),
iid: 88,
title: "WIP: Redesign settings page".into(),
state: "opened".into(),
author: "alice".into(),
labels: vec!["ui".into()],
updated_at: 1_768_474_800_000,
draft: true,
target_branch: "main".into(),
},
],
next_cursor: None,
total_count: 2,
}
}
fn fixture_search_results() -> Vec<SearchResult> {
vec![
SearchResult {
key: EntityKey::issue(1, 101),
title: "Add retry logic for transient failures".into(),
snippet: "...exponential backoff with jitter for transient network...".into(),
score: 0.95,
project_path: "infra/platform".into(),
},
SearchResult {
key: EntityKey::mr(1, 42),
title: "Implement exponential backoff for HTTP client".into(),
snippet: "...wraps reqwest calls in retry decorator with backoff...".into(),
score: 0.82,
project_path: "infra/platform".into(),
},
]
}
// -- Data injection helpers -------------------------------------------------
fn load_dashboard(app: &mut LoreApp) {
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
app.update(Msg::DashboardLoaded {
generation,
data: Box::new(fixture_dashboard_data()),
});
}
fn load_issue_list(app: &mut LoreApp) {
app.update(Msg::NavigateTo(Screen::IssueList));
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
app.update(Msg::IssueListLoaded {
generation,
page: fixture_issue_list(),
});
}
fn load_issue_detail(app: &mut LoreApp) {
let key = EntityKey::issue(1, 101);
let screen = Screen::IssueDetail(key.clone());
app.update(Msg::NavigateTo(screen.clone()));
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(screen))
.generation;
app.update(Msg::IssueDetailLoaded {
generation,
key,
data: Box::new(fixture_issue_detail()),
});
}
fn load_mr_list(app: &mut LoreApp) {
app.update(Msg::NavigateTo(Screen::MrList));
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::MrList))
.generation;
app.update(Msg::MrListLoaded {
generation,
page: fixture_mr_list(),
});
}
fn load_search_results(app: &mut LoreApp) {
app.update(Msg::NavigateTo(Screen::Search));
// Set the query text first so the search state has context.
app.update(Msg::SearchQueryChanged("retry backoff".into()));
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Search))
.generation;
app.update(Msg::SearchExecuted {
generation,
results: fixture_search_results(),
});
}
// ---------------------------------------------------------------------------
// Snapshot tests
// ---------------------------------------------------------------------------
#[test]
fn test_dashboard_snapshot() {
let mut app = test_app();
load_dashboard(&mut app);
let output = render_app(&app);
assert_snapshot("dashboard_default", &output);
}
#[test]
fn test_issue_list_snapshot() {
let mut app = test_app();
load_dashboard(&mut app); // Load dashboard first for realistic nav.
load_issue_list(&mut app);
let output = render_app(&app);
assert_snapshot("issue_list_default", &output);
}
#[test]
fn test_issue_detail_snapshot() {
let mut app = test_app();
load_dashboard(&mut app);
load_issue_list(&mut app);
load_issue_detail(&mut app);
let output = render_app(&app);
assert_snapshot("issue_detail", &output);
}
#[test]
fn test_mr_list_snapshot() {
let mut app = test_app();
load_dashboard(&mut app);
load_mr_list(&mut app);
let output = render_app(&app);
assert_snapshot("mr_list_default", &output);
}
#[test]
fn test_search_results_snapshot() {
let mut app = test_app();
load_dashboard(&mut app);
load_search_results(&mut app);
let output = render_app(&app);
assert_snapshot("search_results", &output);
}
#[test]
fn test_empty_state_snapshot() {
let app = test_app();
// No data loaded — Dashboard with initial/empty state.
let output = render_app(&app);
assert_snapshot("empty_state", &output);
}


@@ -0,0 +1,40 @@
Dashboard
Entity Counts Projects Recent Activity
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Issues: 15 open / 42 ● 5m ago infra/platform No recent activity
MRs: 7 open / 28 ● 12m ago web/frontend
Discussions: 120 ● 8m ago api/backend
Notes: 350 (18% system) ● 4m ago tools/scripts
Documents: 85
Embeddings: 200 Last sync: succeeded
NORMAL q:quit esc:back ?:help C-p:palette o:browser P:scope gh:home gi:issues gm:mrs g/:search gt:timeline


@@ -0,0 +1,40 @@
Dashboard
Entity Counts Projects Recent Activity
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Issues: 0 open / 0 No projects synced No recent activity
MRs: 0 open / 0
Discussions: 0
Notes: 0 (0% system)
Documents: 0
Embeddings: 0
NORMAL q:quit esc:back ?:help C-p:palette o:browser P:scope gh:home gi:issues gm:mrs g/:search gt:timeline


@@ -0,0 +1,40 @@
Dashboard > Issues > Issue
#101 Add retry logic for transient failures
opened | alice | backend, reliability | -> bob, carol
Milestone: v2.0 | Due: 2026-02-01
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
## Problem
Transient network failures cause cascading errors in the ingestion pipeline. We need exponential backoff with jitter.
## Approach
1. Wrap HTTP calls in a retry decorator
2. Use exponential backoff (base 1s, max 30s)
3. Add jitter to prevent thundering herd
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
Discussions (0)
Loading discussions...
NORMAL q:quit esc:back ?:help C-p:palette o:browser P:scope gh:home gi:issues gm:mrs g/:search gt:timeline


@@ -0,0 +1,40 @@
Dashboard > Issues
/ type / to filter
IID v Title State Author Labels Project
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
#101 Add retry logic for transient failures opened alice backend, reliability infra/platform
#55 Dark mode toggle not persisting across sessi opened bob ui, bug web/frontend
#203 Migrate user service to async runtime closed carol backend, refactor api/backend
Showing 3 of 3 issues
NORMAL q:quit esc:back ?:help C-p:palette o:browser P:scope gh:home gi:issues gm:mrs g/:search gt:timeline


@@ -0,0 +1,40 @@
Dashboard > Merge Requests
/ type / to filter
IID v Title State Author Target Labels Project
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
!42 Implement exponential backoff for HT opened bob main backend infra/platform
!88 [W WIP: Redesign settings page opened alice main ui web/frontend
Showing 2 of 2 merge requests
NORMAL q:quit esc:back ?:help C-p:palette o:browser P:scope gh:home gi:issues gm:mrs g/:search gt:timeline


@@ -0,0 +1,40 @@
Dashboard > Search
[ FTS ] > Type to search...
────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
No search indexes found.
Run: lore generate-docs && lore embed
NORMAL q:quit esc:back ?:help C-p:palette o:browser P:scope gh:home gi:issues gm:mrs g/:search gt:timeline


@@ -0,0 +1,410 @@
//! Soak test for sustained TUI robustness (bd-14hv).
//!
//! Drives the TUI through 50,000+ events (navigation, filter, mode switches,
//! resize, tick) with FakeClock time acceleration. Verifies:
//! - No panic under sustained load
//! - No deadlock (watchdog timeout)
//! - Navigation stack depth stays bounded (no unbounded memory growth)
//! - Input mode stays valid after every event
//!
//! The soak simulates ~30 minutes of accelerated usage in <5s wall clock.
use std::sync::mpsc;
use std::time::Duration;
use chrono::{TimeZone, Utc};
use ftui::render::frame::Frame;
use ftui::render::grapheme_pool::GraphemePool;
use ftui::{Cmd, Event, KeyCode, KeyEvent, Model};
use lore_tui::app::LoreApp;
use lore_tui::clock::FakeClock;
use lore_tui::message::{InputMode, Msg, Screen};
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
fn frozen_clock() -> FakeClock {
FakeClock::new(Utc.with_ymd_and_hms(2026, 1, 15, 12, 0, 0).unwrap())
}
fn test_app() -> LoreApp {
let mut app = LoreApp::new();
app.clock = Box::new(frozen_clock());
app
}
fn key(code: KeyCode) -> Msg {
Msg::RawEvent(Event::Key(KeyEvent::new(code)))
}
fn key_char(c: char) -> Msg {
key(KeyCode::Char(c))
}
fn resize(w: u16, h: u16) -> Msg {
Msg::Resize {
width: w,
height: h,
}
}
fn render_at(app: &LoreApp, width: u16, height: u16) {
let w = width.max(1);
let h = height.max(1);
let mut pool = GraphemePool::new();
let mut frame = Frame::new(w, h, &mut pool);
app.view(&mut frame);
}
// ---------------------------------------------------------------------------
// Seeded PRNG (xorshift64)
// ---------------------------------------------------------------------------
struct Rng(u64);
impl Rng {
fn new(seed: u64) -> Self {
// Offset zero seeds; clamp the wrap at u64::MAX so the state can
// never be the degenerate all-zero xorshift state (which is a fixed point).
Self(seed.wrapping_add(1).max(1))
}
fn next(&mut self) -> u64 {
let mut x = self.0;
x ^= x << 13;
x ^= x >> 7;
x ^= x << 17;
self.0 = x;
x
}
fn range(&mut self, max: u64) -> u64 {
self.next() % max
}
}
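As a standalone sanity sketch (a close copy of the soak test's xorshift64, duplicated here only for self-containment), the point of a seeded PRNG is that a logged seed replays a fuzz trace exactly:

```rust
// Minimal copy of the soak test's xorshift64 PRNG, for illustration only.
struct Rng(u64);

impl Rng {
    fn new(seed: u64) -> Self {
        Self(seed.wrapping_add(1)) // Nonzero state for typical seeds.
    }
    fn next(&mut self) -> u64 {
        let mut x = self.0;
        x ^= x << 13;
        x ^= x >> 7;
        x ^= x << 17;
        self.0 = x;
        x
    }
}

fn main() {
    // Same seed => bit-identical sequence: a failing trace replays exactly.
    let (mut a, mut b) = (Rng::new(0xDEAD_BEEF), Rng::new(0xDEAD_BEEF));
    for _ in 0..1_000 {
        assert_eq!(a.next(), b.next());
    }
    // Different seeds diverge within a few draws.
    let (mut c, mut d) = (Rng::new(1), Rng::new(2));
    assert!((0..8).any(|_| c.next() != d.next()));
    println!("deterministic");
}
```

Determinism is what makes the "seeds are logged at test start for reproduction" promise work: a failure at seed S and event N can be replayed by re-running the same event generator from `Rng::new(S)`.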
/// Generate a random TUI event from a realistic distribution.
///
/// Distribution:
/// - 50% navigation keys (j/k/up/down/enter/escape/tab)
/// - 15% filter/search keys (/, letters, backspace)
/// - 10% "go" prefix (g + second key)
/// - 10% resize events
/// - 10% tick events
/// - 5% special keys (ctrl+c excluded to avoid quit)
fn random_event(rng: &mut Rng) -> Msg {
match rng.range(20) {
// Navigation keys (50%)
0 | 1 => key(KeyCode::Down),
2 | 3 => key(KeyCode::Up),
4 => key(KeyCode::Enter),
5 => key(KeyCode::Escape),
6 => key(KeyCode::Tab),
7 => key_char('j'),
8 => key_char('k'),
9 => key(KeyCode::BackTab),
// Filter/search keys (15%)
10 => key_char('/'),
11 => key_char('a'),
12 => key(KeyCode::Backspace),
// Go prefix (10%)
13 => key_char('g'),
14 => key_char('d'),
// Resize (10%)
15 => {
let w = (rng.range(260) + 40) as u16;
let h = (rng.range(50) + 10) as u16;
resize(w, h)
}
16 => resize(80, 24),
// Tick (10%)
17 | 18 => Msg::Tick,
// Special keys (5%)
_ => match rng.range(6) {
0 => key(KeyCode::Home),
1 => key(KeyCode::End),
2 => key(KeyCode::PageUp),
3 => key(KeyCode::PageDown),
4 => key_char('G'),
_ => key_char('?'),
},
}
}
/// Check invariants that must hold after every event.
fn check_soak_invariants(app: &LoreApp, event_idx: usize) {
// Navigation stack depth >= 1 (always has root).
assert!(
app.navigation.depth() >= 1,
"Soak invariant: nav depth < 1 at event {event_idx}"
);
// Navigation depth bounded (soak shouldn't grow stack unboundedly).
// With random escape/pop interspersed, depth should stay reasonable.
// We use 500 as a generous upper bound.
assert!(
app.navigation.depth() <= 500,
"Soak invariant: nav depth {} exceeds 500 at event {event_idx}",
app.navigation.depth()
);
// Input mode is a valid variant.
match &app.input_mode {
InputMode::Normal | InputMode::Text | InputMode::Palette | InputMode::GoPrefix { .. } => {}
}
// Breadcrumbs match depth.
assert_eq!(
app.navigation.breadcrumbs().len(),
app.navigation.depth(),
"Soak invariant: breadcrumbs != depth at event {event_idx}"
);
}
// ---------------------------------------------------------------------------
// Soak Tests
// ---------------------------------------------------------------------------
/// 50,000 random events with invariant checks — no panic, no unbounded growth.
///
/// Simulates ~30 minutes of sustained TUI usage at accelerated speed.
/// Ctrl+C is excluded from the event alphabet; if a quit command fires anyway, the app is restarted as a safety net.
#[test]
fn test_soak_50k_events_no_panic() {
let seed = 0xDEAD_BEEF_u64;
let mut rng = Rng::new(seed);
let mut app = test_app();
for event_idx in 0..50_000 {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
// If quit fires (shouldn't with our alphabet, but be safe), restart.
if matches!(cmd, Cmd::Quit) {
app = test_app();
continue;
}
// Check invariants every 100 events (full check is expensive at 50k).
if event_idx % 100 == 0 {
check_soak_invariants(&app, event_idx);
}
}
// Final invariant check.
check_soak_invariants(&app, 50_000);
}
/// Soak with interleaved renders — verifies view() never panics.
#[test]
fn test_soak_with_renders_no_panic() {
let seed = 0xCAFE_BABE_u64;
let mut rng = Rng::new(seed);
let mut app = test_app();
for event_idx in 0..10_000 {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
if matches!(cmd, Cmd::Quit) {
app = test_app();
continue;
}
// Render every 50th event.
if event_idx % 50 == 0 {
let (w, h) = app.state.terminal_size;
if w > 0 && h > 0 {
render_at(&app, w, h);
}
}
}
}
/// Watchdog: run the soak in a thread with a timeout.
///
/// If the soak takes longer than 30 seconds, it's likely deadlocked.
#[test]
fn test_soak_watchdog_no_deadlock() {
let (tx, rx) = mpsc::channel();
let handle = std::thread::spawn(move || {
let seed = 0xBAAD_F00D_u64;
let mut rng = Rng::new(seed);
let mut app = test_app();
for _ in 0..20_000 {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
if matches!(cmd, Cmd::Quit) {
app = test_app();
}
}
tx.send(()).expect("send completion signal");
});
// Wait up to 30 seconds.
let result = rx.recv_timeout(Duration::from_secs(30));
assert!(result.is_ok(), "Soak test timed out — possible deadlock");
handle.join().expect("soak thread panicked");
}
/// Multi-screen navigation soak: cycle through all screens.
///
/// Verifies the TUI handles rapid screen switching under sustained load.
#[test]
fn test_soak_screen_cycling() {
let mut app = test_app();
let screens_to_visit = [
Screen::Dashboard,
Screen::IssueList,
Screen::MrList,
Screen::Search,
Screen::Timeline,
Screen::Who,
Screen::Trace,
Screen::FileHistory,
Screen::Sync,
Screen::Stats,
];
// Cycle through screens 500 times, doing random ops at each.
let mut rng = Rng::new(42);
for cycle in 0..500 {
for screen in &screens_to_visit {
app.update(Msg::NavigateTo(screen.clone()));
// Do 5 random events per screen.
for _ in 0..5 {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
if matches!(cmd, Cmd::Quit) {
app = test_app();
}
}
}
// Periodic invariant check (skip depth bound — this test pushes 10 screens/cycle).
if cycle % 50 == 0 {
assert!(
app.navigation.depth() >= 1,
"Nav depth < 1 at cycle {cycle}"
);
match &app.input_mode {
InputMode::Normal
| InputMode::Text
| InputMode::Palette
| InputMode::GoPrefix { .. } => {}
}
}
}
}
/// Navigation depth tracking: verify depth stays bounded under random pushes.
///
/// The soak includes both push (Enter, navigation) and pop (Escape, Backspace)
/// operations. Depth should fluctuate but remain bounded.
#[test]
fn test_soak_nav_depth_bounded() {
let mut rng = Rng::new(777);
let mut app = test_app();
let mut max_depth = 0_usize;
for _ in 0..30_000 {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
if matches!(cmd, Cmd::Quit) {
app = test_app();
continue;
}
let depth = app.navigation.depth();
if depth > max_depth {
max_depth = depth;
}
}
// With ~50% navigation keys including Escape/pop, depth shouldn't
// grow unboundedly. 200 is a very generous upper bound.
assert!(
max_depth < 200,
"Navigation depth grew to {max_depth} — potential unbounded growth"
);
}
/// Rapid mode oscillation soak: rapidly switch between input modes.
#[test]
fn test_soak_mode_oscillation() {
let mut app = test_app();
// Rapidly switch modes 10,000 times.
for i in 0..10_000 {
match i % 6 {
0 => {
app.update(key_char('g'));
} // Enter GoPrefix
1 => {
app.update(key(KeyCode::Escape));
} // Back to Normal
2 => {
app.update(key_char('/'));
} // Enter Text/Search
3 => {
app.update(key(KeyCode::Escape));
} // Back to Normal
4 => {
app.update(key_char('g'));
app.update(key_char('d'));
} // Go to Dashboard
_ => {
app.update(key(KeyCode::Escape));
} // Ensure Normal
}
// InputMode should always be valid.
match &app.input_mode {
InputMode::Normal
| InputMode::Text
| InputMode::Palette
| InputMode::GoPrefix { .. } => {}
}
}
// After final Escape, should be in Normal.
app.update(key(KeyCode::Escape));
assert!(
matches!(app.input_mode, InputMode::Normal),
"Should be Normal after final Escape"
);
}
/// Full soak: events + renders + multiple seeds for coverage.
#[test]
fn test_soak_multi_seed_comprehensive() {
for seed in [1, 42, 999, 0xFFFF, 0xDEAD_CAFE, 31337] {
let mut rng = Rng::new(seed);
let mut app = test_app();
for event_idx in 0..5_000 {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
if matches!(cmd, Cmd::Quit) {
app = test_app();
continue;
}
if event_idx % 200 == 0 {
let (w, h) = app.state.terminal_size;
if w > 0 && h > 0 {
render_at(&app, w, h);
}
check_soak_invariants(&app, event_idx);
}
}
}
}


@@ -0,0 +1,414 @@
//! Stress and fuzz tests for TUI robustness (bd-nu0d).
//!
//! Verifies the TUI handles adverse conditions without panic:
//! - Resize storms: 100 rapid resizes including degenerate sizes
//! - Rapid keypresses: 50 keys in fast succession across modes
//! - Event fuzz: 10k seeded deterministic events across multiple traces, with invariant checks
//!
//! Fuzz seeds are logged at test start for reproduction.
use chrono::{TimeZone, Utc};
use ftui::render::frame::Frame;
use ftui::render::grapheme_pool::GraphemePool;
use ftui::{Cmd, Event, KeyCode, KeyEvent, Model, Modifiers};
use lore_tui::app::LoreApp;
use lore_tui::clock::FakeClock;
use lore_tui::message::{InputMode, Msg, Screen};
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
fn frozen_clock() -> FakeClock {
FakeClock::new(Utc.with_ymd_and_hms(2026, 1, 15, 12, 0, 0).unwrap())
}
fn test_app() -> LoreApp {
let mut app = LoreApp::new();
app.clock = Box::new(frozen_clock());
app
}
fn key(code: KeyCode) -> Msg {
Msg::RawEvent(Event::Key(KeyEvent::new(code)))
}
fn key_char(c: char) -> Msg {
key(KeyCode::Char(c))
}
fn ctrl_c() -> Msg {
Msg::RawEvent(Event::Key(
KeyEvent::new(KeyCode::Char('c')).with_modifiers(Modifiers::CTRL),
))
}
fn resize(w: u16, h: u16) -> Msg {
Msg::Resize {
width: w,
height: h,
}
}
/// Render the app at a given size — panics if view() panics.
fn render_at(app: &LoreApp, width: u16, height: u16) {
// Clamp to at least 1x1 to avoid zero-size frame allocation.
let w = width.max(1);
let h = height.max(1);
let mut pool = GraphemePool::new();
let mut frame = Frame::new(w, h, &mut pool);
app.view(&mut frame);
}
// ---------------------------------------------------------------------------
// Resize Storm Tests
// ---------------------------------------------------------------------------
/// 100 rapid resize events with varying sizes — no panic, valid final state.
#[test]
fn test_resize_storm_no_panic() {
let mut app = test_app();
let sizes: Vec<(u16, u16)> = (0..100)
.map(|i| {
// Vary between small and large sizes, including edge cases.
let w = ((i * 7 + 13) % 281 + 20) as u16; // 20..=300
let h = ((i * 11 + 3) % 71 + 10) as u16; // 10..=80
(w, h)
})
.collect();
for &(w, h) in &sizes {
app.update(resize(w, h));
}
// Final state should reflect last resize.
let (last_w, last_h) = sizes[99];
assert_eq!(app.state.terminal_size, (last_w, last_h));
// Render at final size — must not panic.
render_at(&app, last_w, last_h);
}
/// Resize to degenerate sizes (very small, zero-like) — no panic.
#[test]
fn test_resize_degenerate_sizes_no_panic() {
let mut app = test_app();
let degenerate_sizes = [
(1, 1),
(0, 0),
(1, 0),
(0, 1),
(2, 2),
(10, 1),
(1, 10),
(u16::MAX, 1),
(1, u16::MAX),
(80, 24), // Reset to normal.
];
for &(w, h) in &degenerate_sizes {
app.update(resize(w, h));
// Render must not panic even at degenerate sizes.
render_at(&app, w, h);
}
}
/// Resize storm interleaved with key events — no panic.
#[test]
fn test_resize_interleaved_with_keys() {
let mut app = test_app();
for i in 0..50 {
let w = (40 + i * 3) as u16;
let h = (15 + i) as u16;
app.update(resize(w, h));
// Send a navigation key between resizes.
let cmd = app.update(key(KeyCode::Down));
assert!(!matches!(cmd, Cmd::Quit));
}
// Final render at last size.
render_at(&app, 40 + 49 * 3, 15 + 49);
}
// ---------------------------------------------------------------------------
// Rapid Keypress Tests
// ---------------------------------------------------------------------------
/// 50 rapid key events mixing navigation, filter, and mode switches — no panic.
#[test]
fn test_rapid_keypress_no_panic() {
let mut app = test_app();
let mut quit_seen = false;
let keys = [
KeyCode::Down,
KeyCode::Up,
KeyCode::Enter,
KeyCode::Escape,
KeyCode::Tab,
KeyCode::Char('j'),
KeyCode::Char('k'),
KeyCode::Char('/'),
KeyCode::Char('g'),
KeyCode::Char('i'),
KeyCode::Char('g'),
KeyCode::Char('m'),
KeyCode::Escape,
KeyCode::Char('?'),
KeyCode::Escape,
KeyCode::Char('g'),
KeyCode::Char('d'),
KeyCode::Down,
KeyCode::Down,
KeyCode::Enter,
KeyCode::Escape,
KeyCode::Char('g'),
KeyCode::Char('s'),
KeyCode::Char('r'),
KeyCode::Char('e'),
KeyCode::Char('t'),
KeyCode::Char('r'),
KeyCode::Char('y'),
KeyCode::Enter,
KeyCode::Escape,
KeyCode::Backspace,
KeyCode::Char('g'),
KeyCode::Char('d'),
KeyCode::Up,
KeyCode::Up,
KeyCode::Down,
KeyCode::Home,
KeyCode::End,
KeyCode::PageDown,
KeyCode::PageUp,
KeyCode::Left,
KeyCode::Right,
KeyCode::Tab,
KeyCode::BackTab,
KeyCode::Char('G'),
KeyCode::Char('1'),
KeyCode::Char('2'),
KeyCode::Char('3'),
KeyCode::Delete,
KeyCode::F(1),
];
for k in keys {
let cmd = app.update(key(k));
if matches!(cmd, Cmd::Quit) {
quit_seen = true;
break;
}
}
// Test that we didn't panic. If we quit early (via 'q' equivalent), that's fine.
// The point is no panic.
let _ = quit_seen;
}
/// Ctrl+C always exits regardless of input mode state.
#[test]
fn test_ctrl_c_exits_from_any_mode() {
// Normal mode.
let mut app = test_app();
assert!(matches!(app.update(ctrl_c()), Cmd::Quit));
// Text mode.
let mut app = test_app();
app.input_mode = InputMode::Text;
assert!(matches!(app.update(ctrl_c()), Cmd::Quit));
// Palette mode.
let mut app = test_app();
app.input_mode = InputMode::Palette;
assert!(matches!(app.update(ctrl_c()), Cmd::Quit));
// GoPrefix mode.
let mut app = test_app();
app.update(key_char('g'));
assert!(matches!(app.update(ctrl_c()), Cmd::Quit));
}
/// After rapid mode switches, input mode settles to a valid state.
#[test]
fn test_rapid_mode_switches_consistent() {
let mut app = test_app();
// Rapid mode toggles: Normal -> GoPrefix -> back -> Text -> back -> Palette -> back
for _ in 0..10 {
app.update(key_char('g')); // Enter GoPrefix
app.update(key(KeyCode::Escape)); // Back to Normal
app.update(key_char('/')); // Might enter Text (search)
app.update(key(KeyCode::Escape)); // Back to Normal
}
// After all that, mode should be Normal (Escape always returns to Normal).
assert!(
matches!(app.input_mode, InputMode::Normal),
"Input mode should settle to Normal after Escape"
);
}
// ---------------------------------------------------------------------------
// Event Fuzz Tests (Deterministic)
// ---------------------------------------------------------------------------
/// Simple seeded PRNG for deterministic fuzz (xorshift64).
struct Rng(u64);
impl Rng {
    fn new(seed: u64) -> Self {
        // Avoid the zero state, a fixed point of xorshift. wrapping_add(1)
        // alone would map a seed of u64::MAX back to zero.
        Self(seed.wrapping_add(1).max(1))
    }
fn next(&mut self) -> u64 {
let mut x = self.0;
x ^= x << 13;
x ^= x >> 7;
x ^= x << 17;
self.0 = x;
x
}
    fn next_range(&mut self, max: u64) -> u64 {
        // Slight modulo bias toward smaller values; acceptable for fuzzing.
        self.next() % max
    }
}
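The xorshift64 step above has one degenerate state: zero maps to zero, which is why the constructor nudges the seed away from it. A standalone sketch (plain Rust, no harness) of the two properties the fuzz tests lean on:

```rust
// Re-statement of the xorshift64 step for a quick property check:
// (1) the same state always produces the same stream (determinism),
// (2) zero is a fixed point, so the generator must never enter it.
fn xorshift64(mut x: u64) -> u64 {
    x ^= x << 13;
    x ^= x >> 7;
    x ^= x << 17;
    x
}

fn main() {
    // Determinism: two walks from the same state stay in lockstep.
    let (mut a, mut b) = (43u64, 43u64); // seed 42 after the constructor's nudge
    for _ in 0..1_000 {
        a = xorshift64(a);
        b = xorshift64(b);
        assert_eq!(a, b);
    }
    // Zero is a fixed point: 0 ^ (0 << k) == 0 for every shift.
    assert_eq!(xorshift64(0), 0);
    println!("ok");
}
```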
/// Generate a random Msg from the fuzz alphabet.
fn random_event(rng: &mut Rng) -> Msg {
match rng.next_range(10) {
// Key events (60% of events).
0..=5 => {
let key_code = match rng.next_range(20) {
0 => KeyCode::Up,
1 => KeyCode::Down,
2 => KeyCode::Left,
3 => KeyCode::Right,
4 => KeyCode::Enter,
5 => KeyCode::Escape,
6 => KeyCode::Tab,
7 => KeyCode::BackTab,
8 => KeyCode::Backspace,
9 => KeyCode::Home,
10 => KeyCode::End,
11 => KeyCode::PageUp,
12 => KeyCode::PageDown,
13 => KeyCode::Char('g'),
14 => KeyCode::Char('j'),
15 => KeyCode::Char('k'),
16 => KeyCode::Char('/'),
17 => KeyCode::Char('?'),
18 => KeyCode::Char('a'),
_ => KeyCode::Char('x'),
};
key(key_code)
}
// Resize events (20% of events).
6 | 7 => {
let w = (rng.next_range(300) + 1) as u16;
let h = (rng.next_range(100) + 1) as u16;
resize(w, h)
}
// Tick events (20% of events).
_ => Msg::Tick,
}
}
/// Check invariants after each event in the fuzz loop.
fn check_invariants(app: &LoreApp, seed: u64, event_idx: usize) {
// Navigation stack depth >= 1.
assert!(
app.navigation.depth() >= 1,
"Invariant violation at seed={seed}, event={event_idx}: nav stack empty"
);
// InputMode is one of the valid variants.
match &app.input_mode {
InputMode::Normal | InputMode::Text | InputMode::Palette | InputMode::GoPrefix { .. } => {}
}
}
/// 10k deterministic fuzz events (100 traces of 100 events each) with invariant checks.
#[test]
fn test_event_fuzz_10k_traces() {
const NUM_TRACES: usize = 100;
const EVENTS_PER_TRACE: usize = 100;
// Total: 100 * 100 = 10k events.
for trace in 0..NUM_TRACES {
let seed = 42_u64.wrapping_mul(trace as u64 + 1);
let mut rng = Rng::new(seed);
let mut app = test_app();
for event_idx in 0..EVENTS_PER_TRACE {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
// If we get Quit, that's valid — restart the app for this trace.
if matches!(cmd, Cmd::Quit) {
app = test_app();
continue;
}
check_invariants(&app, seed, event_idx);
}
}
}
/// Verify fuzz is deterministic — same seed produces same final state.
#[test]
fn test_fuzz_deterministic_replay() {
let seed = 12345_u64;
let run = |s: u64| -> (Screen, (u16, u16)) {
let mut rng = Rng::new(s);
let mut app = test_app();
for _ in 0..200 {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
if matches!(cmd, Cmd::Quit) {
app = test_app();
}
}
(app.navigation.current().clone(), app.state.terminal_size)
};
let (screen1, size1) = run(seed);
let (screen2, size2) = run(seed);
assert_eq!(screen1, screen2, "Same seed should produce same screen");
assert_eq!(size1, size2, "Same seed should produce same terminal size");
}
/// Extended fuzz: interleave renders with events — no panic during view().
#[test]
fn test_fuzz_with_render_no_panic() {
let seed = 99999_u64;
let mut rng = Rng::new(seed);
let mut app = test_app();
for _ in 0..500 {
let msg = random_event(&mut rng);
let cmd = app.update(msg);
if matches!(cmd, Cmd::Quit) {
app = test_app();
continue;
}
// Render every 10th event to catch view panics.
let (w, h) = app.state.terminal_size;
if w > 0 && h > 0 {
render_at(&app, w, h);
}
}
}


@@ -0,0 +1,677 @@
//! User flow integration tests — PRD Section 6 end-to-end journeys.
//!
//! Each test simulates a realistic user workflow through multiple screens,
//! using key events for navigation and message injection for data loading.
//! All tests use `FakeClock` and synthetic data for determinism.
//!
//! These tests complement the vertical slice tests (bd-1mju) which cover
//! a single flow in depth. These focus on breadth — 9 distinct user
//! journeys that exercise cross-screen navigation, state preservation,
//! and the command dispatch pipeline.
use chrono::{TimeZone, Utc};
use ftui::{Cmd, Event, KeyCode, KeyEvent, Model, Modifiers};
use lore_tui::app::LoreApp;
use lore_tui::clock::FakeClock;
use lore_tui::message::{
EntityKey, InputMode, Msg, Screen, SearchResult, TimelineEvent, TimelineEventKind,
};
use lore_tui::state::dashboard::{DashboardData, EntityCounts, LastSyncInfo, ProjectSyncInfo};
use lore_tui::state::issue_detail::{IssueDetailData, IssueMetadata};
use lore_tui::state::issue_list::{IssueListPage, IssueListRow};
use lore_tui::state::mr_list::{MrListPage, MrListRow};
use lore_tui::task_supervisor::TaskKey;
// ---------------------------------------------------------------------------
// Constants & clock
// ---------------------------------------------------------------------------
/// Frozen clock epoch: 2026-01-15T12:00:00Z.
fn frozen_clock() -> FakeClock {
FakeClock::new(Utc.with_ymd_and_hms(2026, 1, 15, 12, 0, 0).unwrap())
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
fn test_app() -> LoreApp {
let mut app = LoreApp::new();
app.clock = Box::new(frozen_clock());
app
}
/// Send a key event and return the Cmd.
fn send_key(app: &mut LoreApp, code: KeyCode) -> Cmd<Msg> {
app.update(Msg::RawEvent(Event::Key(KeyEvent::new(code))))
}
/// Send a key event with modifiers.
fn send_key_mod(app: &mut LoreApp, code: KeyCode, mods: Modifiers) -> Cmd<Msg> {
app.update(Msg::RawEvent(Event::Key(
KeyEvent::new(code).with_modifiers(mods),
)))
}
/// Send a g-prefix navigation sequence (e.g., 'g' then 'i' for issues).
fn send_go(app: &mut LoreApp, second: char) {
send_key(app, KeyCode::Char('g'));
send_key(app, KeyCode::Char(second));
}
// -- Synthetic data fixtures ------------------------------------------------
fn fixture_dashboard_data() -> DashboardData {
DashboardData {
counts: EntityCounts {
issues_total: 42,
issues_open: 15,
mrs_total: 28,
mrs_open: 7,
discussions: 120,
notes_total: 350,
notes_system_pct: 18,
documents: 85,
embeddings: 200,
},
projects: vec![
ProjectSyncInfo {
path: "infra/platform".into(),
minutes_since_sync: 5,
},
ProjectSyncInfo {
path: "web/frontend".into(),
minutes_since_sync: 12,
},
],
recent: vec![],
last_sync: Some(LastSyncInfo {
status: "succeeded".into(),
finished_at: Some(1_736_942_100_000),
command: "sync".into(),
error: None,
}),
}
}
fn fixture_issue_list() -> IssueListPage {
IssueListPage {
rows: vec![
IssueListRow {
project_path: "infra/platform".into(),
iid: 101,
title: "Add retry logic for transient failures".into(),
state: "opened".into(),
author: "alice".into(),
labels: vec!["backend".into(), "reliability".into()],
updated_at: 1_736_942_000_000,
},
IssueListRow {
project_path: "web/frontend".into(),
iid: 55,
title: "Dark mode toggle not persisting".into(),
state: "opened".into(),
author: "bob".into(),
labels: vec!["ui".into(), "bug".into()],
updated_at: 1_736_938_400_000,
},
IssueListRow {
project_path: "api/backend".into(),
iid: 203,
title: "Migrate user service to async runtime".into(),
state: "closed".into(),
author: "carol".into(),
labels: vec!["backend".into(), "refactor".into()],
updated_at: 1_736_856_000_000,
},
],
next_cursor: None,
total_count: 3,
}
}
fn fixture_issue_detail() -> IssueDetailData {
IssueDetailData {
metadata: IssueMetadata {
iid: 101,
project_path: "infra/platform".into(),
title: "Add retry logic for transient failures".into(),
description: "## Problem\n\nTransient network failures cause errors.".into(),
state: "opened".into(),
author: "alice".into(),
assignees: vec!["bob".into()],
labels: vec!["backend".into(), "reliability".into()],
milestone: Some("v2.0".into()),
due_date: Some("2026-02-01".into()),
created_at: 1_736_856_000_000,
updated_at: 1_736_942_000_000,
web_url: "https://gitlab.com/infra/platform/-/issues/101".into(),
discussion_count: 3,
},
cross_refs: vec![],
}
}
fn fixture_mr_list() -> MrListPage {
MrListPage {
rows: vec![
MrListRow {
project_path: "infra/platform".into(),
iid: 42,
title: "Implement exponential backoff for HTTP client".into(),
state: "opened".into(),
author: "bob".into(),
labels: vec!["backend".into()],
updated_at: 1_736_942_000_000,
draft: false,
target_branch: "main".into(),
},
MrListRow {
project_path: "web/frontend".into(),
iid: 88,
title: "WIP: Redesign settings page".into(),
state: "opened".into(),
author: "alice".into(),
labels: vec!["ui".into()],
updated_at: 1_736_938_400_000,
draft: true,
target_branch: "main".into(),
},
],
next_cursor: None,
total_count: 2,
}
}
fn fixture_search_results() -> Vec<SearchResult> {
vec![
SearchResult {
key: EntityKey::issue(1, 101),
title: "Add retry logic for transient failures".into(),
snippet: "...exponential backoff with jitter...".into(),
score: 0.95,
project_path: "infra/platform".into(),
},
SearchResult {
key: EntityKey::mr(1, 42),
title: "Implement exponential backoff for HTTP client".into(),
snippet: "...wraps reqwest calls in retry decorator...".into(),
score: 0.82,
project_path: "infra/platform".into(),
},
]
}
fn fixture_timeline_events() -> Vec<TimelineEvent> {
vec![
TimelineEvent {
timestamp_ms: 1_736_942_000_000,
entity_key: EntityKey::issue(1, 101),
event_kind: TimelineEventKind::Created,
summary: "Issue #101 created".into(),
detail: None,
actor: Some("alice".into()),
project_path: "infra/platform".into(),
},
TimelineEvent {
timestamp_ms: 1_736_938_400_000,
entity_key: EntityKey::mr(1, 42),
event_kind: TimelineEventKind::Created,
summary: "MR !42 created".into(),
detail: None,
actor: Some("bob".into()),
project_path: "infra/platform".into(),
},
]
}
// -- Data injection helpers -------------------------------------------------
fn load_dashboard(app: &mut LoreApp) {
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Dashboard))
.generation;
app.update(Msg::DashboardLoaded {
generation,
data: Box::new(fixture_dashboard_data()),
});
}
fn load_issue_list(app: &mut LoreApp) {
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::IssueList))
.generation;
app.update(Msg::IssueListLoaded {
generation,
page: fixture_issue_list(),
});
}
fn load_issue_detail(app: &mut LoreApp, key: EntityKey) {
let screen = Screen::IssueDetail(key.clone());
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(screen))
.generation;
app.update(Msg::IssueDetailLoaded {
generation,
key,
data: Box::new(fixture_issue_detail()),
});
}
fn load_mr_list(app: &mut LoreApp) {
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::MrList))
.generation;
app.update(Msg::MrListLoaded {
generation,
page: fixture_mr_list(),
});
}
fn load_search_results(app: &mut LoreApp) {
app.update(Msg::SearchQueryChanged("retry backoff".into()));
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Search))
.generation;
// Align state generation with supervisor generation so both guards pass.
app.state.search.generation = generation;
app.update(Msg::SearchExecuted {
generation,
results: fixture_search_results(),
});
}
fn load_timeline(app: &mut LoreApp) {
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(Screen::Timeline))
.generation;
// Align state generation with supervisor generation so both guards pass.
app.state.timeline.generation = generation;
app.update(Msg::TimelineLoaded {
generation,
events: fixture_timeline_events(),
});
}
// ---------------------------------------------------------------------------
// Flow 1: Morning Triage
// ---------------------------------------------------------------------------
// Dashboard -> gi -> Issue List (with data) -> detail (via Msg) -> Esc back
// Verifies cursor preservation and state on back-navigation.
#[test]
fn test_flow_morning_triage() {
let mut app = test_app();
load_dashboard(&mut app);
assert!(app.navigation.is_at(&Screen::Dashboard));
// Navigate to issue list via g-prefix.
send_go(&mut app, 'i');
assert!(app.navigation.is_at(&Screen::IssueList));
// Inject issue list data.
load_issue_list(&mut app);
assert_eq!(app.state.issue_list.rows.len(), 3);
// Simulate selecting the second item (cursor state).
app.state.issue_list.selected_index = 1;
// Navigate to issue detail for the second row (iid=55).
let issue_key = EntityKey::issue(1, 55);
app.update(Msg::NavigateTo(Screen::IssueDetail(issue_key.clone())));
load_issue_detail(&mut app, issue_key);
assert!(matches!(app.navigation.current(), Screen::IssueDetail(_)));
// Go back via Esc.
send_key(&mut app, KeyCode::Escape);
assert!(
app.navigation.is_at(&Screen::IssueList),
"Esc should return to issue list"
);
// Cursor position should be preserved.
assert_eq!(
app.state.issue_list.selected_index, 1,
"Cursor should be preserved on the second row after back-navigation"
);
// Data should still be there.
assert_eq!(app.state.issue_list.rows.len(), 3);
}
// ---------------------------------------------------------------------------
// Flow 2: Direct Screen Jumps (g-prefix chain)
// ---------------------------------------------------------------------------
// Issue Detail -> gt (Timeline) -> gw (Who) -> gi (Issues) -> gh (Dashboard)
// Verifies the g-prefix navigation chain works across screens.
#[test]
fn test_flow_direct_screen_jumps() {
let mut app = test_app();
load_dashboard(&mut app);
// Start on issue detail.
let key = EntityKey::issue(1, 101);
app.update(Msg::NavigateTo(Screen::IssueDetail(key.clone())));
load_issue_detail(&mut app, key);
assert!(matches!(app.navigation.current(), Screen::IssueDetail(_)));
// Jump to Timeline.
send_go(&mut app, 't');
assert!(
app.navigation.is_at(&Screen::Timeline),
"gt should jump to Timeline"
);
// Jump to Who.
send_go(&mut app, 'w');
assert!(app.navigation.is_at(&Screen::Who), "gw should jump to Who");
// Jump to Issues.
send_go(&mut app, 'i');
assert!(
app.navigation.is_at(&Screen::IssueList),
"gi should jump to Issue List"
);
// Jump Home (Dashboard).
send_go(&mut app, 'h');
assert!(
app.navigation.is_at(&Screen::Dashboard),
"gh should jump to Dashboard"
);
}
// ---------------------------------------------------------------------------
// Flow 3: Quick Search
// ---------------------------------------------------------------------------
// Any screen -> g/ -> Search -> inject query and results -> verify results
#[test]
fn test_flow_quick_search() {
let mut app = test_app();
load_dashboard(&mut app);
// Navigate to search via g-prefix.
send_go(&mut app, '/');
assert!(
app.navigation.is_at(&Screen::Search),
"g/ should navigate to Search"
);
// Inject search query and results.
load_search_results(&mut app);
assert_eq!(app.state.search.results.len(), 2);
assert_eq!(
app.state.search.results[0].title,
"Add retry logic for transient failures"
);
// Navigate to a result via Msg (simulating Enter on first result).
let result_key = app.state.search.results[0].key.clone();
app.update(Msg::NavigateTo(Screen::IssueDetail(result_key.clone())));
load_issue_detail(&mut app, result_key);
assert!(matches!(app.navigation.current(), Screen::IssueDetail(_)));
// Go back to search — results should be preserved.
send_key(&mut app, KeyCode::Escape);
assert!(app.navigation.is_at(&Screen::Search));
assert_eq!(app.state.search.results.len(), 2);
}
// ---------------------------------------------------------------------------
// Flow 4: Sync and Browse
// ---------------------------------------------------------------------------
// Dashboard -> gs -> Sync -> sync lifecycle -> complete -> verify summary
#[test]
fn test_flow_sync_and_browse() {
let mut app = test_app();
load_dashboard(&mut app);
// Navigate to Sync via g-prefix.
send_go(&mut app, 's');
assert!(
app.navigation.is_at(&Screen::Sync),
"gs should navigate to Sync"
);
// Start sync.
app.update(Msg::SyncStarted);
assert!(app.state.sync.is_running());
// Progress updates.
app.update(Msg::SyncProgress {
stage: "Fetching issues".into(),
current: 10,
total: 42,
});
assert_eq!(app.state.sync.lanes[0].current, 10);
assert_eq!(app.state.sync.lanes[0].total, 42);
app.update(Msg::SyncProgress {
stage: "Fetching merge requests".into(),
current: 5,
total: 28,
});
assert_eq!(app.state.sync.lanes[1].current, 5);
// Complete sync.
app.update(Msg::SyncCompleted { elapsed_ms: 5000 });
assert!(app.state.sync.summary.is_some());
assert_eq!(app.state.sync.summary.as_ref().unwrap().elapsed_ms, 5000);
// Navigate to issue list to browse updated data.
send_go(&mut app, 'i');
assert!(app.navigation.is_at(&Screen::IssueList));
load_issue_list(&mut app);
assert_eq!(app.state.issue_list.rows.len(), 3);
}
// ---------------------------------------------------------------------------
// Flow 5: Who / Expert Navigation
// ---------------------------------------------------------------------------
// Dashboard -> gw -> Who screen -> verify expert mode default -> inject data
#[test]
fn test_flow_find_expert() {
let mut app = test_app();
load_dashboard(&mut app);
// Navigate to Who via g-prefix.
send_go(&mut app, 'w');
assert!(
app.navigation.is_at(&Screen::Who),
"gw should navigate to Who"
);
// Default mode should be Expert.
assert_eq!(
app.state.who.mode,
lore_tui::state::who::WhoMode::Expert,
"Who should start in Expert mode"
);
// Navigate back and verify dashboard is preserved.
send_key(&mut app, KeyCode::Escape);
assert!(app.navigation.is_at(&Screen::Dashboard));
assert_eq!(app.state.dashboard.counts.issues_total, 42);
}
// ---------------------------------------------------------------------------
// Flow 6: Command Palette
// ---------------------------------------------------------------------------
// Any screen -> Ctrl+P -> type -> select -> verify navigation
#[test]
fn test_flow_command_palette() {
let mut app = test_app();
load_dashboard(&mut app);
// Open command palette.
send_key_mod(&mut app, KeyCode::Char('p'), Modifiers::CTRL);
assert!(
matches!(app.input_mode, InputMode::Palette),
"Ctrl+P should open command palette"
);
assert!(app.state.command_palette.query_focused);
// Type a filter — palette should have entries.
assert!(
!app.state.command_palette.filtered.is_empty(),
"Palette should have entries when opened"
);
// Close palette with Esc.
send_key(&mut app, KeyCode::Escape);
assert!(
matches!(app.input_mode, InputMode::Normal),
"Esc should close palette and return to Normal mode"
);
}
// ---------------------------------------------------------------------------
// Flow 7: Timeline Navigation
// ---------------------------------------------------------------------------
// Dashboard -> gt -> Timeline -> inject events -> verify events -> Esc back
#[test]
fn test_flow_timeline_navigate() {
let mut app = test_app();
load_dashboard(&mut app);
// Navigate to Timeline via g-prefix.
send_go(&mut app, 't');
assert!(
app.navigation.is_at(&Screen::Timeline),
"gt should navigate to Timeline"
);
// Inject timeline events.
load_timeline(&mut app);
assert_eq!(app.state.timeline.events.len(), 2);
assert_eq!(app.state.timeline.events[0].summary, "Issue #101 created");
// Navigate to the entity from the first event via Msg.
let event_key = app.state.timeline.events[0].entity_key.clone();
app.update(Msg::NavigateTo(Screen::IssueDetail(event_key.clone())));
load_issue_detail(&mut app, event_key);
assert!(matches!(app.navigation.current(), Screen::IssueDetail(_)));
// Esc back to Timeline — events should be preserved.
send_key(&mut app, KeyCode::Escape);
assert!(app.navigation.is_at(&Screen::Timeline));
assert_eq!(app.state.timeline.events.len(), 2);
}
// ---------------------------------------------------------------------------
// Flow 8: Bootstrap → Sync → Dashboard
// ---------------------------------------------------------------------------
// Bootstrap -> gs (triggers sync) -> SyncCompleted -> auto-navigate Dashboard
#[test]
fn test_flow_bootstrap_sync_to_dashboard() {
let mut app = test_app();
// Start on Bootstrap screen.
app.update(Msg::NavigateTo(Screen::Bootstrap));
assert!(app.navigation.is_at(&Screen::Bootstrap));
assert!(!app.state.bootstrap.sync_started);
// User triggers sync via g-prefix.
send_go(&mut app, 's');
assert!(
app.state.bootstrap.sync_started,
"gs on Bootstrap should set sync_started"
);
// Sync completes — should auto-transition to Dashboard.
app.update(Msg::SyncCompleted { elapsed_ms: 3000 });
assert!(
app.navigation.is_at(&Screen::Dashboard),
"SyncCompleted on Bootstrap should auto-navigate to Dashboard"
);
}
// ---------------------------------------------------------------------------
// Flow 9: MR List → MR Detail → Back with State
// ---------------------------------------------------------------------------
// Dashboard -> gm -> MR List -> detail (via Msg) -> Esc -> verify state
#[test]
fn test_flow_mr_drill_in_and_back() {
let mut app = test_app();
load_dashboard(&mut app);
// Navigate to MR list.
send_go(&mut app, 'm');
assert!(
app.navigation.is_at(&Screen::MrList),
"gm should navigate to MR List"
);
// Inject MR list data.
load_mr_list(&mut app);
assert_eq!(app.state.mr_list.rows.len(), 2);
// Set cursor to second row (draft MR).
app.state.mr_list.selected_index = 1;
// Navigate to MR detail via Msg.
let mr_key = EntityKey::mr(1, 88);
app.update(Msg::NavigateTo(Screen::MrDetail(mr_key.clone())));
let screen = Screen::MrDetail(mr_key.clone());
let generation = app
.supervisor
.submit(TaskKey::LoadScreen(screen))
.generation;
app.update(Msg::MrDetailLoaded {
generation,
key: mr_key,
data: Box::new(lore_tui::state::mr_detail::MrDetailData {
metadata: lore_tui::state::mr_detail::MrMetadata {
iid: 88,
project_path: "web/frontend".into(),
title: "WIP: Redesign settings page".into(),
description: "Settings page redesign".into(),
state: "opened".into(),
draft: true,
author: "alice".into(),
assignees: vec![],
reviewers: vec![],
labels: vec!["ui".into()],
source_branch: "redesign-settings".into(),
target_branch: "main".into(),
merge_status: "checking".into(),
created_at: 1_736_938_400_000,
updated_at: 1_736_938_400_000,
merged_at: None,
web_url: "https://gitlab.com/web/frontend/-/merge_requests/88".into(),
discussion_count: 0,
file_change_count: 5,
},
cross_refs: vec![],
file_changes: vec![],
}),
});
assert!(matches!(app.navigation.current(), Screen::MrDetail(_)));
// Go back.
send_key(&mut app, KeyCode::Escape);
assert!(app.navigation.is_at(&Screen::MrList));
// Cursor and data preserved.
assert_eq!(
app.state.mr_list.selected_index, 1,
"MR list cursor should be preserved after back-navigation"
);
assert_eq!(app.state.mr_list.rows.len(), 2);
}

View File

@@ -0,0 +1,20 @@
-- Migration 028: Extend sync_runs for surgical sync observability
-- Adds mode/phase tracking and surgical-specific counters.
ALTER TABLE sync_runs ADD COLUMN mode TEXT;
ALTER TABLE sync_runs ADD COLUMN phase TEXT;
ALTER TABLE sync_runs ADD COLUMN surgical_iids_json TEXT;
ALTER TABLE sync_runs ADD COLUMN issues_fetched INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN mrs_fetched INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN issues_ingested INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN mrs_ingested INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN skipped_stale INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN docs_regenerated INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN docs_embedded INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN warnings_count INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN cancelled_at INTEGER;
CREATE INDEX IF NOT EXISTS idx_sync_runs_mode_started
ON sync_runs(mode, started_at DESC);
CREATE INDEX IF NOT EXISTS idx_sync_runs_status_phase_started
ON sync_runs(status, phase, started_at DESC);
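The two composite indexes above are shaped for latest-run lookups by mode and by status/phase. For example (a sketch; `id`, `status`, and `started_at` are assumed from the base `sync_runs` schema in earlier migrations):

```sql
-- Most recent surgical run, served by idx_sync_runs_mode_started.
SELECT id, phase, issues_ingested, mrs_ingested, skipped_stale
FROM sync_runs
WHERE mode = 'surgical'
ORDER BY started_at DESC
LIMIT 1;
```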


@@ -0,0 +1,43 @@
-- Migration 029: Expand pending_dependent_fetches CHECK to include 'issue_links' job type.
-- Also adds issue_links_synced_for_updated_at watermark to issues table.
-- SQLite cannot ALTER CHECK constraints, so we recreate the table.
-- Step 1: Recreate pending_dependent_fetches with expanded CHECK
CREATE TABLE pending_dependent_fetches_new (
id INTEGER PRIMARY KEY,
project_id INTEGER NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
entity_type TEXT NOT NULL CHECK (entity_type IN ('issue', 'merge_request')),
entity_iid INTEGER NOT NULL,
entity_local_id INTEGER NOT NULL,
job_type TEXT NOT NULL CHECK (job_type IN (
'resource_events', 'mr_closes_issues', 'mr_diffs', 'issue_links'
)),
payload_json TEXT,
enqueued_at INTEGER NOT NULL,
locked_at INTEGER,
attempts INTEGER NOT NULL DEFAULT 0,
next_retry_at INTEGER,
last_error TEXT
);
INSERT INTO pending_dependent_fetches_new
SELECT * FROM pending_dependent_fetches;
DROP TABLE pending_dependent_fetches;
ALTER TABLE pending_dependent_fetches_new RENAME TO pending_dependent_fetches;
-- Recreate indexes from migration 011
CREATE UNIQUE INDEX uq_pending_fetches
ON pending_dependent_fetches(project_id, entity_type, entity_iid, job_type);
CREATE INDEX idx_pending_fetches_claimable
ON pending_dependent_fetches(job_type, locked_at) WHERE locked_at IS NULL;
CREATE INDEX idx_pending_fetches_retryable
ON pending_dependent_fetches(next_retry_at) WHERE locked_at IS NULL AND next_retry_at IS NOT NULL;
-- Step 2: Add watermark column for issue link sync tracking
ALTER TABLE issues ADD COLUMN issue_links_synced_for_updated_at INTEGER;
-- Update schema version
INSERT INTO schema_version (version, applied_at, description)
VALUES (29, strftime('%s', 'now') * 1000, 'Expand dependent fetch queue for issue links');
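The recreate-and-swap above can be smoke-tested against the expanded CHECK; a hedged sketch (assumes a `projects` row with id 1 exists, and note SQLite enforces the REFERENCES clause only when `PRAGMA foreign_keys = ON`):

```sql
-- Accepted after migration 029: 'issue_links' is now in the CHECK list.
INSERT INTO pending_dependent_fetches
    (project_id, entity_type, entity_iid, entity_local_id, job_type, enqueued_at)
VALUES (1, 'issue', 101, 1, 'issue_links', strftime('%s', 'now') * 1000);

-- Still rejected: any job_type outside the list fails the CHECK constraint.
-- e.g. VALUES (1, 'issue', 101, 1, 'not_a_job', ...) aborts with a CHECK error.
```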


@@ -125,9 +125,15 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
"--no-events",
"--no-file-changes",
"--no-status",
"--no-issue-links",
"--dry-run",
"--no-dry-run",
"--timings",
"--tui",
"--issue",
"--mr",
"--project",
"--preflight-only",
],
),
(
@@ -256,6 +262,7 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
("generate-docs", &["--full", "--project"]),
("completions", &[]),
("robot-docs", &["--brief"]),
("tui", &["--config"]),
(
"list",
&[
@@ -281,6 +288,12 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
),
("show", &["--project"]),
("reset", &["--yes"]),
("related", &["--limit", "--project"]),
("explain", &["--project"]),
(
"brief",
&["--path", "--person", "--project", "--section-limit"],
),
];
/// Valid values for enum-like flags, used for post-clap error enhancement.

src/cli/commands/brief.rs Normal file

@@ -0,0 +1,838 @@
use serde::Serialize;
use crate::cli::WhoArgs;
use crate::cli::commands::list::{IssueListRow, ListFilters, MrListFilters, MrListRow};
use crate::cli::commands::related::RelatedResult;
use crate::cli::commands::who::WhoRun;
use crate::core::config::Config;
use crate::core::db::create_connection;
use crate::core::error::Result;
use crate::core::paths::get_db_path;
use crate::core::time::ms_to_iso;
// ─── Public Types ──────────────────────────────────────────────────────────
#[derive(Debug, Serialize)]
pub struct BriefResponse {
pub mode: String,
pub query: Option<String>,
pub summary: String,
pub open_issues: Vec<BriefIssue>,
pub active_mrs: Vec<BriefMr>,
pub experts: Vec<BriefExpert>,
pub recent_activity: Vec<BriefActivity>,
pub unresolved_threads: Vec<BriefThread>,
#[serde(skip_serializing_if = "Vec::is_empty")]
pub related: Vec<BriefRelated>,
pub warnings: Vec<String>,
pub sections_computed: Vec<String>,
}
#[derive(Debug, Serialize)]
pub struct BriefIssue {
pub iid: i64,
pub title: String,
pub state: String,
pub assignees: Vec<String>,
pub labels: Vec<String>,
pub updated_at: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub status_name: Option<String>,
pub unresolved_count: i64,
}
#[derive(Debug, Serialize)]
pub struct BriefMr {
pub iid: i64,
pub title: String,
pub state: String,
pub author: String,
pub draft: bool,
pub labels: Vec<String>,
pub updated_at: String,
pub unresolved_count: i64,
}
#[derive(Debug, Serialize)]
pub struct BriefExpert {
pub username: String,
pub score: f64,
#[serde(skip_serializing_if = "Option::is_none")]
pub last_activity: Option<String>,
}
#[derive(Debug, Serialize)]
pub struct BriefActivity {
pub timestamp: String,
pub event_type: String,
pub entity_ref: String,
pub summary: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub actor: Option<String>,
}
#[derive(Debug, Serialize)]
pub struct BriefThread {
pub discussion_id: String,
pub entity_type: String,
pub entity_iid: i64,
pub started_by: String,
pub note_count: i64,
pub last_note_at: String,
pub first_note_body: String,
}
#[derive(Debug, Serialize)]
pub struct BriefRelated {
pub source_type: String,
pub iid: i64,
pub title: String,
pub similarity_score: f64,
}
// ─── Input ─────────────────────────────────────────────────────────────────
pub struct BriefArgs {
pub query: Option<String>,
pub path: Option<String>,
pub person: Option<String>,
pub project: Option<String>,
pub section_limit: usize,
}
// ─── Conversion helpers ────────────────────────────────────────────────────
fn issue_to_brief(row: &IssueListRow) -> BriefIssue {
BriefIssue {
iid: row.iid,
title: row.title.clone(),
state: row.state.clone(),
assignees: row.assignees.clone(),
labels: row.labels.clone(),
updated_at: ms_to_iso(row.updated_at),
status_name: row.status_name.clone(),
unresolved_count: row.unresolved_count,
}
}
fn mr_to_brief(row: &MrListRow) -> BriefMr {
BriefMr {
iid: row.iid,
title: row.title.clone(),
state: row.state.clone(),
author: row.author_username.clone(),
draft: row.draft,
labels: row.labels.clone(),
updated_at: ms_to_iso(row.updated_at),
unresolved_count: row.unresolved_count,
}
}
fn related_to_brief(r: &RelatedResult) -> BriefRelated {
BriefRelated {
source_type: r.source_type.clone(),
iid: r.iid,
title: r.title.clone(),
similarity_score: r.similarity_score,
}
}
fn experts_from_who_run(run: &WhoRun) -> Vec<BriefExpert> {
use crate::core::who_types::WhoResult;
match &run.result {
WhoResult::Expert(er) => er
.experts
.iter()
.map(|e| BriefExpert {
username: e.username.clone(),
score: e.score as f64,
last_activity: Some(ms_to_iso(e.last_seen_ms)),
})
.collect(),
WhoResult::Workload(wr) => {
vec![BriefExpert {
username: wr.username.clone(),
score: 0.0,
last_activity: None,
}]
}
_ => vec![],
}
}
// ─── Warning heuristics ────────────────────────────────────────────────────
const STALE_THRESHOLD_MS: i64 = 30 * 24 * 60 * 60 * 1000; // 30 days
fn compute_warnings(issues: &[IssueListRow], mrs: &[MrListRow]) -> Vec<String> {
let now = chrono::Utc::now().timestamp_millis();
let mut warnings = Vec::new();
for i in issues {
let age_ms = now - i.updated_at;
if age_ms > STALE_THRESHOLD_MS {
let days = age_ms / (24 * 60 * 60 * 1000);
warnings.push(format!(
"Issue #{} has no activity for {} days",
i.iid, days
));
}
if i.assignees.is_empty() && i.state == "opened" {
warnings.push(format!("Issue #{} is unassigned", i.iid));
}
}
for m in mrs {
let age_ms = now - m.updated_at;
if age_ms > STALE_THRESHOLD_MS {
let days = age_ms / (24 * 60 * 60 * 1000);
warnings.push(format!("MR !{} has no activity for {} days", m.iid, days));
}
if m.unresolved_count > 0 && m.state == "opened" {
warnings.push(format!(
"MR !{} has {} unresolved threads",
m.iid, m.unresolved_count
));
}
}
warnings
}
fn build_summary(response: &BriefResponse) -> String {
let parts: Vec<String> = [
(!response.open_issues.is_empty())
.then(|| format!("{} open issues", response.open_issues.len())),
(!response.active_mrs.is_empty())
.then(|| format!("{} active MRs", response.active_mrs.len())),
(!response.experts.is_empty()).then(|| {
format!(
"top expert: {}",
response.experts.first().map_or("none", |e| &e.username)
)
}),
(!response.warnings.is_empty()).then(|| format!("{} warnings", response.warnings.len())),
]
.into_iter()
.flatten()
.collect();
if parts.is_empty() {
"No data found".to_string()
} else {
parts.join(", ")
}
}
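`build_summary` leans on `bool::then` to produce `Some(part)` only when a section is non-empty, then `flatten()` drops the `None`s. The idiom in isolation (hypothetical counts, not the real `BriefResponse` type):

```rust
// bool::then + flatten: each candidate part is emitted only when its
// condition holds; absent parts vanish instead of leaving empty strings.
fn summarize(open_issues: usize, warnings: usize) -> String {
    let parts: Vec<String> = [
        (open_issues > 0).then(|| format!("{open_issues} open issues")),
        (warnings > 0).then(|| format!("{warnings} warnings")),
    ]
    .into_iter()
    .flatten()
    .collect();
    if parts.is_empty() {
        "No data found".to_string()
    } else {
        parts.join(", ")
    }
}

fn main() {
    // Mirrors build_summary's behavior, including its unpluralized counts.
    assert_eq!(summarize(3, 1), "3 open issues, 1 warnings");
    assert_eq!(summarize(0, 0), "No data found");
    println!("ok");
}
```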
// ─── Unresolved threads (direct SQL) ───────────────────────────────────────
fn query_unresolved_threads(
config: &Config,
project: Option<&str>,
limit: usize,
) -> Result<Vec<BriefThread>> {
let db_path = get_db_path(config.storage.db_path.as_deref());
let conn = create_connection(&db_path)?;
let project_id: Option<i64> = project
.map(|p| crate::core::project::resolve_project(&conn, p))
.transpose()?;
let (sql, params): (String, Vec<Box<dyn rusqlite::ToSql>>) = if let Some(pid) = project_id {
(
format!(
"SELECT d.gitlab_discussion_id, d.noteable_type, d.noteable_id,
n.author_username, COUNT(n.id) as note_count,
MAX(n.created_at_ms) as last_note_at,
MIN(CASE WHEN n.system = 0 THEN n.body END) as first_body
FROM discussions d
JOIN notes n ON n.discussion_id = d.id
WHERE d.resolved = 0
AND d.project_id = ?
GROUP BY d.id
ORDER BY last_note_at DESC
LIMIT {limit}"
),
vec![Box::new(pid)],
)
} else {
(
format!(
"SELECT d.gitlab_discussion_id, d.noteable_type, d.noteable_id,
n.author_username, COUNT(n.id) as note_count,
MAX(n.created_at_ms) as last_note_at,
MIN(CASE WHEN n.system = 0 THEN n.body END) as first_body
FROM discussions d
JOIN notes n ON n.discussion_id = d.id
WHERE d.resolved = 0
GROUP BY d.id
ORDER BY last_note_at DESC
LIMIT {limit}"
),
vec![],
)
};
let mut stmt = conn.prepare(&sql)?;
let params_refs: Vec<&dyn rusqlite::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let rows = stmt
.query_map(params_refs.as_slice(), |row| {
let noteable_id: i64 = row.get(2)?;
let noteable_type: String = row.get(1)?;
let last_note_ms: i64 = row.get(5)?;
let body: Option<String> = row.get(6)?;
// entity_iid temporarily holds the raw noteable_id; resolved to the real IID below
Ok(BriefThread {
discussion_id: row.get(0)?,
entity_type: noteable_type,
entity_iid: noteable_id, // We'll resolve IID below
started_by: row.get(3)?,
note_count: row.get(4)?,
last_note_at: ms_to_iso(last_note_ms),
first_note_body: truncate_body(&body.unwrap_or_default(), 120),
})
})?
.filter_map(|r| r.ok())
.collect::<Vec<_>>();
// Resolve noteable_id -> IID. The discussions table stores noteable_id as the
// row PK of the issues/merge_requests tables, not the user-facing IID, so look
// up each IID; if the lookup fails, the raw noteable_id is kept as a fallback.
let mut resolved = Vec::with_capacity(rows.len());
for mut t in rows {
let iid_result: rusqlite::Result<i64> = if t.entity_type == "Issue" {
conn.query_row(
"SELECT iid FROM issues WHERE id = ?",
[t.entity_iid],
|row| row.get(0),
)
} else {
conn.query_row(
"SELECT iid FROM merge_requests WHERE id = ?",
[t.entity_iid],
|row| row.get(0),
)
};
if let Ok(iid) = iid_result {
t.entity_iid = iid;
}
resolved.push(t);
}
Ok(resolved)
}
fn truncate_body(s: &str, max_len: usize) -> String {
let first_line = s.lines().next().unwrap_or("");
if first_line.len() <= max_len {
first_line.to_string()
} else {
let mut end = max_len;
while !first_line.is_char_boundary(end) {
end -= 1;
}
format!("{}...", &first_line[..end])
}
}
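The walk-back loop in `truncate_body` exists because `max_len` is a byte index, and slicing a `&str` mid-codepoint panics. A standalone sketch demonstrating why (the function body mirrors the one above):

```rust
// Standalone sketch mirroring truncate_body above: byte-index truncation
// must land on a UTF-8 char boundary or the slice would panic.
fn truncate_body(s: &str, max_len: usize) -> String {
    let first_line = s.lines().next().unwrap_or("");
    if first_line.len() <= max_len {
        first_line.to_string()
    } else {
        let mut end = max_len;
        // Walk back until `end` is a valid char boundary.
        while !first_line.is_char_boundary(end) {
            end -= 1;
        }
        format!("{}...", &first_line[..end])
    }
}

fn main() {
    // 'é' is 2 bytes: a naive &s[..3] would split the codepoint and panic.
    assert_eq!(truncate_body("ééé", 3), "é...");
    assert_eq!(truncate_body("short", 20), "short");
}
```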
// ─── Recent activity (direct SQL, lightweight) ─────────────────────────────
fn query_recent_activity(
config: &Config,
project: Option<&str>,
limit: usize,
) -> Result<Vec<BriefActivity>> {
let db_path = get_db_path(config.storage.db_path.as_deref());
let conn = create_connection(&db_path)?;
let project_id: Option<i64> = project
.map(|p| crate::core::project::resolve_project(&conn, p))
.transpose()?;
// Build a lightweight timeline. Currently only state events are included;
// non-system notes could be merged into the same Vec later.
let mut events: Vec<BriefActivity> = Vec::new();
// State events
{
let (sql, params): (String, Vec<Box<dyn rusqlite::ToSql>>) = if let Some(pid) = project_id {
(
format!(
"SELECT rse.created_at, rse.state, rse.actor_username,
COALESCE(i.iid, mr.iid) as entity_iid,
CASE WHEN rse.issue_id IS NOT NULL THEN 'issue' ELSE 'mr' END as etype
FROM resource_state_events rse
LEFT JOIN issues i ON i.id = rse.issue_id
LEFT JOIN merge_requests mr ON mr.id = rse.merge_request_id
WHERE (i.project_id = ? OR mr.project_id = ?)
ORDER BY rse.created_at DESC
LIMIT {limit}"
),
vec![Box::new(pid) as Box<dyn rusqlite::ToSql>, Box::new(pid)],
)
} else {
(
format!(
"SELECT rse.created_at, rse.state, rse.actor_username,
COALESCE(i.iid, mr.iid) as entity_iid,
CASE WHEN rse.issue_id IS NOT NULL THEN 'issue' ELSE 'mr' END as etype
FROM resource_state_events rse
LEFT JOIN issues i ON i.id = rse.issue_id
LEFT JOIN merge_requests mr ON mr.id = rse.merge_request_id
ORDER BY rse.created_at DESC
LIMIT {limit}"
),
vec![],
)
};
let mut stmt = conn.prepare(&sql)?;
let params_refs: Vec<&dyn rusqlite::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let rows = stmt.query_map(params_refs.as_slice(), |row| {
let ts: i64 = row.get(0)?;
let state: String = row.get(1)?;
let actor: Option<String> = row.get(2)?;
let iid: Option<i64> = row.get(3)?;
let etype: String = row.get(4)?;
Ok(BriefActivity {
timestamp: ms_to_iso(ts),
event_type: "state_change".to_string(),
entity_ref: format!(
"{}#{}",
if etype == "issue" { "issues" } else { "mrs" },
iid.unwrap_or(0)
),
summary: format!("State changed to {state}"),
actor,
})
})?;
for row in rows.flatten() {
events.push(row);
}
}
// Sort by timestamp descending and truncate
events.sort_by(|a, b| b.timestamp.cmp(&a.timestamp));
events.truncate(limit);
Ok(events)
}
// ─── Main entry point ──────────────────────────────────────────────────────
pub async fn run_brief(config: &Config, args: &BriefArgs) -> Result<BriefResponse> {
use crate::cli::commands::list::{run_list_issues, run_list_mrs};
use crate::cli::commands::related::run_related;
use crate::cli::commands::who::run_who;
let limit = args.section_limit;
let mut sections = Vec::new();
let mode = if args.path.is_some() {
"path"
} else if args.person.is_some() {
"person"
} else {
"topic"
};
// ── 1. Open issues ─────────────────────────────────────────────────────
let empty_statuses: Vec<String> = vec![];
let assignee_filter = args.person.as_deref();
let issue_result = run_list_issues(
config,
ListFilters {
limit,
project: args.project.as_deref(),
state: Some("opened"),
author: None,
assignee: assignee_filter,
labels: None,
milestone: None,
since: None,
due_before: None,
has_due_date: false,
statuses: &empty_statuses,
sort: "updated_at",
order: "desc",
},
);
let (open_issues, raw_issue_list): (Vec<BriefIssue>, Vec<IssueListRow>) = match issue_result {
Ok(r) => {
sections.push("open_issues".to_string());
let brief: Vec<BriefIssue> = r.issues.iter().map(issue_to_brief).collect();
(brief, r.issues)
}
Err(_) => (vec![], vec![]),
};
// ── 2. Active MRs ──────────────────────────────────────────────────────
let mr_result = run_list_mrs(
config,
MrListFilters {
limit,
project: args.project.as_deref(),
state: Some("opened"),
author: args.person.as_deref(),
assignee: None,
reviewer: None,
labels: None,
since: None,
draft: false,
no_draft: false,
target_branch: None,
source_branch: None,
sort: "updated_at",
order: "desc",
},
);
let (active_mrs, raw_mr_list): (Vec<BriefMr>, Vec<MrListRow>) = match mr_result {
Ok(r) => {
sections.push("active_mrs".to_string());
let brief: Vec<BriefMr> = r.mrs.iter().map(mr_to_brief).collect();
(brief, r.mrs)
}
Err(_) => (vec![], vec![]),
};
// ── 3. Experts (only for path mode or if query looks like a path) ──────
let experts: Vec<BriefExpert> = if args.path.is_some() {
let who_args = WhoArgs {
target: None,
path: args.path.clone(),
active: false,
overlap: None,
reviews: false,
since: None,
project: args.project.clone(),
limit: 3,
fields: None,
detail: false,
no_detail: false,
as_of: None,
explain_score: false,
include_bots: false,
include_closed: false,
all_history: false,
};
match run_who(config, &who_args) {
Ok(run) => {
sections.push("experts".to_string());
experts_from_who_run(&run)
}
Err(_) => vec![],
}
} else if let Some(person) = &args.person {
let who_args = WhoArgs {
target: Some(person.clone()),
path: None,
active: false,
overlap: None,
reviews: false,
since: None,
project: args.project.clone(),
limit: 3,
fields: None,
detail: false,
no_detail: false,
as_of: None,
explain_score: false,
include_bots: false,
include_closed: false,
all_history: false,
};
match run_who(config, &who_args) {
Ok(run) => {
sections.push("experts".to_string());
experts_from_who_run(&run)
}
Err(_) => vec![],
}
} else {
vec![]
};
// ── 4. Recent activity ─────────────────────────────────────────────────
let recent_activity =
query_recent_activity(config, args.project.as_deref(), limit).unwrap_or_default();
if !recent_activity.is_empty() {
sections.push("recent_activity".to_string());
}
// ── 5. Unresolved threads ──────────────────────────────────────────────
let unresolved_threads =
query_unresolved_threads(config, args.project.as_deref(), limit).unwrap_or_default();
if !unresolved_threads.is_empty() {
sections.push("unresolved_threads".to_string());
}
// ── 6. Related (only for topic mode with a query) ──────────────────────
let related: Vec<BriefRelated> = if let Some(q) = &args.query {
match run_related(config, None, None, Some(q), args.project.as_deref(), limit).await {
Ok(resp) => {
if !resp.results.is_empty() {
sections.push("related".to_string());
}
resp.results.iter().map(related_to_brief).collect()
}
Err(_) => vec![], // Graceful degradation: no embeddings = no related
}
} else {
vec![]
};
// ── 7. Warnings ────────────────────────────────────────────────────────
let warnings = compute_warnings(&raw_issue_list, &raw_mr_list);
// ── Build response ─────────────────────────────────────────────────────
let mut response = BriefResponse {
mode: mode.to_string(),
query: args.query.clone(),
summary: String::new(), // Computed below
open_issues,
active_mrs,
experts,
recent_activity,
unresolved_threads,
related,
warnings,
sections_computed: sections,
};
response.summary = build_summary(&response);
Ok(response)
}
// ─── Output formatters ─────────────────────────────────────────────────────
pub fn print_brief_json(response: &BriefResponse, elapsed_ms: u64) {
let output = serde_json::json!({
"ok": true,
"data": response,
"meta": {
"elapsed_ms": elapsed_ms,
"sections_computed": response.sections_computed,
}
});
println!("{}", serde_json::to_string(&output).unwrap_or_default());
}
pub fn print_brief_human(response: &BriefResponse) {
println!("=== Brief: {} ===", response.summary);
println!();
if !response.open_issues.is_empty() {
println!("--- Open Issues ({}) ---", response.open_issues.len());
for i in &response.open_issues {
let status = i
.status_name
.as_deref()
.map_or(String::new(), |s| format!(" [{s}]"));
println!(" #{} {}{}", i.iid, i.title, status);
if !i.assignees.is_empty() {
println!(" assignees: {}", i.assignees.join(", "));
}
}
println!();
}
if !response.active_mrs.is_empty() {
println!("--- Active MRs ({}) ---", response.active_mrs.len());
for m in &response.active_mrs {
let draft = if m.draft { " [DRAFT]" } else { "" };
println!(" !{} {}{} by {}", m.iid, m.title, draft, m.author);
}
println!();
}
if !response.experts.is_empty() {
println!("--- Experts ({}) ---", response.experts.len());
for e in &response.experts {
println!(" {} (score: {:.1})", e.username, e.score);
}
println!();
}
if !response.recent_activity.is_empty() {
println!(
"--- Recent Activity ({}) ---",
response.recent_activity.len()
);
for a in &response.recent_activity {
let actor = a.actor.as_deref().unwrap_or("system");
println!(
" {} {} | {} | {}",
a.timestamp, actor, a.entity_ref, a.summary
);
}
println!();
}
if !response.unresolved_threads.is_empty() {
println!(
"--- Unresolved Threads ({}) ---",
response.unresolved_threads.len()
);
for t in &response.unresolved_threads {
println!(
" {}#{} by {} ({} notes): {}",
t.entity_type, t.entity_iid, t.started_by, t.note_count, t.first_note_body
);
}
println!();
}
if !response.related.is_empty() {
println!("--- Related ({}) ---", response.related.len());
for r in &response.related {
println!(
" {}#{} {} (sim: {:.2})",
r.source_type, r.iid, r.title, r.similarity_score
);
}
println!();
}
if !response.warnings.is_empty() {
println!("--- Warnings ({}) ---", response.warnings.len());
for w in &response.warnings {
println!(" {w}");
}
println!();
}
}
// ─── Tests ─────────────────────────────────────────────────────────────────
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_truncate_body_short() {
assert_eq!(truncate_body("hello world", 20), "hello world");
}
#[test]
fn test_truncate_body_long() {
let long = "a".repeat(200);
let result = truncate_body(&long, 50);
assert!(result.ends_with("..."));
// 50 chars + "..."
assert_eq!(result.len(), 53);
}
#[test]
fn test_truncate_body_multiline() {
let text = "first line\nsecond line\nthird line";
assert_eq!(truncate_body(text, 100), "first line");
}
#[test]
fn test_build_summary_empty() {
let response = BriefResponse {
mode: "topic".to_string(),
query: Some("auth".to_string()),
summary: String::new(),
open_issues: vec![],
active_mrs: vec![],
experts: vec![],
recent_activity: vec![],
unresolved_threads: vec![],
related: vec![],
warnings: vec![],
sections_computed: vec![],
};
assert_eq!(build_summary(&response), "No data found");
}
#[test]
fn test_build_summary_with_data() {
let response = BriefResponse {
mode: "topic".to_string(),
query: Some("auth".to_string()),
summary: String::new(),
open_issues: vec![BriefIssue {
iid: 1,
title: "test".to_string(),
state: "opened".to_string(),
assignees: vec![],
labels: vec![],
updated_at: "2024-01-01".to_string(),
status_name: None,
unresolved_count: 0,
}],
active_mrs: vec![],
experts: vec![BriefExpert {
username: "alice".to_string(),
score: 42.0,
last_activity: None,
}],
recent_activity: vec![],
unresolved_threads: vec![],
related: vec![],
warnings: vec!["stale".to_string()],
sections_computed: vec![],
};
let summary = build_summary(&response);
assert!(summary.contains("1 open issues"));
assert!(summary.contains("top expert: alice"));
assert!(summary.contains("1 warnings"));
}
#[test]
fn test_compute_warnings_stale_issue() {
let now = chrono::Utc::now().timestamp_millis();
let old = now - (45 * 24 * 60 * 60 * 1000); // 45 days ago
let issues = vec![IssueListRow {
iid: 42,
title: "Old issue".to_string(),
state: "opened".to_string(),
author_username: "alice".to_string(),
created_at: old,
updated_at: old,
web_url: None,
project_path: "group/repo".to_string(),
labels: vec![],
assignees: vec![],
discussion_count: 0,
unresolved_count: 0,
status_name: None,
status_category: None,
status_color: None,
status_icon_name: None,
status_synced_at: None,
}];
let warnings = compute_warnings(&issues, &[]);
assert!(warnings.iter().any(|w| w.contains("Issue #42")));
assert!(warnings.iter().any(|w| w.contains("unassigned")));
}
#[test]
fn test_compute_warnings_unresolved_mr() {
let now = chrono::Utc::now().timestamp_millis();
let mrs = vec![MrListRow {
iid: 99,
title: "WIP MR".to_string(),
state: "opened".to_string(),
draft: false,
author_username: "bob".to_string(),
source_branch: "feat".to_string(),
target_branch: "main".to_string(),
created_at: now,
updated_at: now,
web_url: None,
project_path: "group/repo".to_string(),
labels: vec![],
assignees: vec![],
reviewers: vec![],
discussion_count: 3,
unresolved_count: 2,
}];
let warnings = compute_warnings(&[], &mrs);
assert!(warnings.iter().any(|w| w.contains("MR !99")));
assert!(warnings.iter().any(|w| w.contains("2 unresolved")));
}
}


@@ -1,3 +1,5 @@
use std::collections::HashMap;
use crate::cli::render::{self, Theme};
use rusqlite::Connection;
use serde::Serialize;
@@ -211,6 +213,78 @@ pub fn run_count_events(config: &Config) -> Result<EventCounts> {
events_db::count_events(&conn)
}
// ---------------------------------------------------------------------------
// References count
// ---------------------------------------------------------------------------
#[derive(Debug, Serialize)]
pub struct ReferenceCountResult {
pub total: i64,
pub by_type: HashMap<String, i64>,
pub by_method: HashMap<String, i64>,
pub unresolved: i64,
}
pub fn run_count_references(config: &Config) -> Result<ReferenceCountResult> {
let db_path = get_db_path(config.storage.db_path.as_deref());
let conn = create_connection(&db_path)?;
count_references(&conn)
}
fn count_references(conn: &Connection) -> Result<ReferenceCountResult> {
let (total, closes, mentioned, related, api, note_parse, desc_parse, unresolved): (
i64,
i64,
i64,
i64,
i64,
i64,
i64,
i64,
) = conn.query_row(
"SELECT
COUNT(*) AS total,
COALESCE(SUM(CASE WHEN reference_type = 'closes' THEN 1 ELSE 0 END), 0),
COALESCE(SUM(CASE WHEN reference_type = 'mentioned' THEN 1 ELSE 0 END), 0),
COALESCE(SUM(CASE WHEN reference_type = 'related' THEN 1 ELSE 0 END), 0),
COALESCE(SUM(CASE WHEN source_method = 'api' THEN 1 ELSE 0 END), 0),
COALESCE(SUM(CASE WHEN source_method = 'note_parse' THEN 1 ELSE 0 END), 0),
COALESCE(SUM(CASE WHEN source_method = 'description_parse' THEN 1 ELSE 0 END), 0),
COALESCE(SUM(CASE WHEN target_entity_id IS NULL THEN 1 ELSE 0 END), 0)
FROM entity_references",
[],
|row| {
Ok((
row.get(0)?,
row.get(1)?,
row.get(2)?,
row.get(3)?,
row.get(4)?,
row.get(5)?,
row.get(6)?,
row.get(7)?,
))
},
)?;
let mut by_type = HashMap::new();
by_type.insert("closes".to_string(), closes);
by_type.insert("mentioned".to_string(), mentioned);
by_type.insert("related".to_string(), related);
let mut by_method = HashMap::new();
by_method.insert("api".to_string(), api);
by_method.insert("note_parse".to_string(), note_parse);
by_method.insert("description_parse".to_string(), desc_parse);
Ok(ReferenceCountResult {
total,
by_type,
by_method,
unresolved,
})
}
#[derive(Serialize)]
struct EventCountJsonOutput {
ok: bool,
@@ -363,6 +437,77 @@ pub fn print_count(result: &CountResult) {
}
}
// ---------------------------------------------------------------------------
// References output
// ---------------------------------------------------------------------------
pub fn print_reference_count(result: &ReferenceCountResult) {
println!(
"{}: {:>10}",
Theme::info().render("References"),
Theme::bold().render(&render::format_number(result.total))
);
println!(" By type:");
for key in &["closes", "mentioned", "related"] {
let val = result.by_type.get(*key).copied().unwrap_or(0);
println!(" {:<20} {:>10}", key, render::format_number(val));
}
println!(" By source:");
for key in &["api", "note_parse", "description_parse"] {
let val = result.by_method.get(*key).copied().unwrap_or(0);
println!(" {:<20} {:>10}", key, render::format_number(val));
}
let pct = if result.total > 0 {
format!(
" ({:.1}%)",
result.unresolved as f64 / result.total as f64 * 100.0
)
} else {
String::new()
};
println!(
" Unresolved: {:>10}{}",
render::format_number(result.unresolved),
Theme::dim().render(&pct)
);
}
#[derive(Serialize)]
struct RefCountJsonOutput {
ok: bool,
data: RefCountJsonData,
meta: RobotMeta,
}
#[derive(Serialize)]
struct RefCountJsonData {
entity: String,
total: i64,
by_type: HashMap<String, i64>,
by_method: HashMap<String, i64>,
unresolved: i64,
}
pub fn print_reference_count_json(result: &ReferenceCountResult, elapsed_ms: u64) {
let output = RefCountJsonOutput {
ok: true,
data: RefCountJsonData {
entity: "references".to_string(),
total: result.total,
by_type: result.by_type.clone(),
by_method: result.by_method.clone(),
unresolved: result.unresolved,
},
meta: RobotMeta { elapsed_ms },
};
println!("{}", serde_json::to_string(&output).unwrap_or_default());
}
#[cfg(test)]
mod tests {
use crate::cli::render;
@@ -381,4 +526,99 @@ mod tests {
assert_eq!(render::format_number(12345), "12,345");
assert_eq!(render::format_number(1234567), "1,234,567");
}
#[test]
fn test_count_references_query() {
use std::path::Path;
use crate::core::db::{create_connection, run_migrations};
use super::count_references;
let conn = create_connection(Path::new(":memory:")).unwrap();
run_migrations(&conn).unwrap();
// Insert 3 entity_references rows with varied types/methods.
// First need a project row to satisfy FK.
conn.execute(
"INSERT INTO projects (id, gitlab_project_id, path_with_namespace, web_url)
VALUES (1, 100, 'g/test', 'https://git.example.com/g/test')",
[],
)
.unwrap();
// Need source entities for the FK.
conn.execute(
"INSERT INTO issues (id, gitlab_id, iid, project_id, title, state, created_at, updated_at, last_seen_at)
VALUES (1, 200, 1, 1, 'Issue 1', 'opened', 0, 0, 0)",
[],
)
.unwrap();
// Row 1: closes / api / resolved (target_entity_id = 1)
conn.execute(
"INSERT INTO entity_references
(project_id, source_entity_type, source_entity_id, target_entity_type, target_entity_id,
reference_type, source_method, created_at)
VALUES (1, 'issue', 1, 'issue', 1, 'closes', 'api', 1000)",
[],
)
.unwrap();
// Row 2: mentioned / note_parse / unresolved (target_entity_id = NULL)
conn.execute(
"INSERT INTO entity_references
(project_id, source_entity_type, source_entity_id, target_entity_type, target_entity_id,
target_project_path, target_entity_iid,
reference_type, source_method, created_at)
VALUES (1, 'issue', 1, 'merge_request', NULL, 'other/proj', 42, 'mentioned', 'note_parse', 2000)",
[],
)
.unwrap();
// Row 3: related / api / unresolved (target_entity_id = NULL)
conn.execute(
"INSERT INTO entity_references
(project_id, source_entity_type, source_entity_id, target_entity_type, target_entity_id,
target_project_path, target_entity_iid,
reference_type, source_method, created_at)
VALUES (1, 'issue', 1, 'issue', NULL, 'other/proj2', 99, 'related', 'api', 3000)",
[],
)
.unwrap();
let result = count_references(&conn).unwrap();
assert_eq!(result.total, 3);
assert_eq!(*result.by_type.get("closes").unwrap(), 1);
assert_eq!(*result.by_type.get("mentioned").unwrap(), 1);
assert_eq!(*result.by_type.get("related").unwrap(), 1);
assert_eq!(*result.by_method.get("api").unwrap(), 2);
assert_eq!(*result.by_method.get("note_parse").unwrap(), 1);
assert_eq!(*result.by_method.get("description_parse").unwrap(), 0);
assert_eq!(result.unresolved, 2);
}
#[test]
fn test_count_references_empty_table() {
use std::path::Path;
use crate::core::db::{create_connection, run_migrations};
use super::count_references;
let conn = create_connection(Path::new(":memory:")).unwrap();
run_migrations(&conn).unwrap();
let result = count_references(&conn).unwrap();
assert_eq!(result.total, 0);
assert_eq!(*result.by_type.get("closes").unwrap(), 0);
assert_eq!(*result.by_type.get("mentioned").unwrap(), 0);
assert_eq!(*result.by_type.get("related").unwrap(), 0);
assert_eq!(*result.by_method.get("api").unwrap(), 0);
assert_eq!(*result.by_method.get("note_parse").unwrap(), 0);
assert_eq!(*result.by_method.get("description_parse").unwrap(), 0);
assert_eq!(result.unresolved, 0);
}
}

src/cli/commands/explain.rs (new file, 1223 lines)

File diff suppressed because it is too large.

@@ -590,6 +590,9 @@ async fn run_ingest_inner(
}
}
ProgressEvent::StatusEnrichmentSkipped => {}
ProgressEvent::IssueLinksFetchStarted { .. }
| ProgressEvent::IssueLinkFetched { .. }
| ProgressEvent::IssueLinksFetchComplete { .. } => {}
})
};


@@ -1,30 +1,38 @@
pub mod auth_test;
pub mod brief;
pub mod count;
pub mod doctor;
pub mod drift;
pub mod embed;
pub mod explain;
pub mod file_history;
pub mod generate_docs;
pub mod ingest;
pub mod init;
pub mod list;
pub mod related;
pub mod search;
pub mod show;
pub mod stats;
pub mod sync;
pub mod sync_status;
pub mod sync_surgical;
pub mod timeline;
pub mod trace;
pub mod tui;
pub mod who;
pub use auth_test::run_auth_test;
pub use brief::{BriefArgs, BriefResponse, print_brief_human, print_brief_json, run_brief};
pub use count::{
print_count, print_count_json, print_event_count, print_event_count_json, run_count,
run_count_events,
print_count, print_count_json, print_event_count, print_event_count_json,
print_reference_count, print_reference_count_json, run_count, run_count_events,
run_count_references,
};
pub use doctor::{DoctorChecks, print_doctor_results, run_doctor};
pub use drift::{DriftResponse, print_drift_human, print_drift_json, run_drift};
pub use embed::{print_embed, print_embed_json, run_embed};
pub use explain::{ExplainResponse, print_explain_human, print_explain_json, run_explain};
pub use file_history::{print_file_history, print_file_history_json, run_file_history};
pub use generate_docs::{print_generate_docs, print_generate_docs_json, run_generate_docs};
pub use ingest::{
@@ -38,6 +46,7 @@ pub use list::{
print_list_notes, print_list_notes_csv, print_list_notes_json, print_list_notes_jsonl,
query_issues, query_mrs, query_notes, run_list_issues, run_list_mrs,
};
pub use related::{print_related, print_related_json, run_related};
pub use search::{
SearchCliFilters, SearchResponse, print_search_results, print_search_results_json, run_search,
};
@@ -48,8 +57,10 @@ pub use show::{
pub use stats::{print_stats, print_stats_json, run_stats};
pub use sync::{SyncOptions, SyncResult, print_sync, print_sync_json, run_sync};
pub use sync_status::{print_sync_status, print_sync_status_json, run_sync_status};
pub use sync_surgical::run_sync_surgical;
pub use timeline::{TimelineParams, print_timeline, print_timeline_json_with_meta, run_timeline};
pub use trace::{parse_trace_path, print_trace, print_trace_json};
pub use tui::{TuiArgs, find_lore_tui, run_tui};
pub use who::{
WhoRun, half_life_decay, print_who_human, print_who_json, query_active, query_expert,
query_overlap, query_reviews, query_workload, run_who,

src/cli/commands/related.rs (new file, 692 lines)

@@ -0,0 +1,692 @@
use std::collections::HashSet;
use serde::Serialize;
use crate::cli::render::{Icons, Theme};
use crate::core::config::Config;
use crate::core::db::create_connection;
use crate::core::error::{LoreError, Result};
use crate::core::paths::get_db_path;
use crate::core::project::resolve_project;
use crate::embedding::ollama::{OllamaClient, OllamaConfig};
use crate::search::search_vector;
// ---------------------------------------------------------------------------
// Public types
// ---------------------------------------------------------------------------
#[derive(Debug, Serialize)]
pub struct RelatedSource {
pub source_type: String,
pub iid: Option<i64>,
pub title: Option<String>,
}
#[derive(Debug, Serialize)]
pub struct RelatedResult {
pub source_type: String,
pub iid: i64,
pub title: String,
pub url: Option<String>,
pub similarity_score: f64,
pub shared_labels: Vec<String>,
pub project_path: Option<String>,
}
#[derive(Debug, Serialize)]
pub struct RelatedResponse {
pub source: RelatedSource,
pub query: Option<String>,
pub results: Vec<RelatedResult>,
pub mode: String,
}
// ---------------------------------------------------------------------------
// Pure helpers (unit-testable)
// ---------------------------------------------------------------------------
/// Convert L2 distance to a 0-1 similarity score.
///
/// Inverse relationship: closer (lower distance) = higher similarity.
/// The +1 prevents division by zero and ensures score is in (0, 1].
fn distance_to_similarity(distance: f64) -> f64 {
1.0 / (1.0 + distance)
}
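A few worked values for the mapping above (the function body is copied verbatim; this sketch only exercises it):

```rust
// Worked values for the L2-distance -> similarity mapping above.
fn distance_to_similarity(distance: f64) -> f64 {
    1.0 / (1.0 + distance)
}

fn main() {
    assert_eq!(distance_to_similarity(0.0), 1.0); // identical vectors
    assert_eq!(distance_to_similarity(1.0), 0.5);
    assert_eq!(distance_to_similarity(3.0), 0.25); // farther apart, lower score
}
```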
/// Parse the JSON `label_names` column into a set of labels.
fn parse_label_names(label_names_json: &Option<String>) -> HashSet<String> {
label_names_json
.as_deref()
.and_then(|s| serde_json::from_str::<Vec<String>>(s).ok())
.unwrap_or_default()
.into_iter()
.collect()
}
// ---------------------------------------------------------------------------
// Internal row types
// ---------------------------------------------------------------------------
struct DocRow {
id: i64,
content_text: String,
label_names: Option<String>,
title: Option<String>,
}
struct HydratedDoc {
source_type: String,
iid: i64,
title: String,
url: Option<String>,
label_names: Option<String>,
project_path: Option<String>,
}
/// (source_type, source_id, label_names, url, project_id)
type DocMetaRow = (String, i64, Option<String>, Option<String>, i64);
// ---------------------------------------------------------------------------
// Main entry point
// ---------------------------------------------------------------------------
pub async fn run_related(
config: &Config,
entity_type: Option<&str>,
entity_iid: Option<i64>,
query_text: Option<&str>,
project: Option<&str>,
limit: usize,
) -> Result<RelatedResponse> {
let db_path = get_db_path(config.storage.db_path.as_deref());
let conn = create_connection(&db_path)?;
// Check that embeddings exist at all.
let embedding_count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM embedding_metadata WHERE last_error IS NULL",
[],
|row| row.get(0),
)
.unwrap_or(0);
if embedding_count == 0 {
return Err(LoreError::EmbeddingsNotBuilt);
}
match (entity_type, entity_iid) {
(Some(etype), Some(iid)) => {
run_entity_mode(config, &conn, etype, iid, project, limit).await
}
_ => {
let text = query_text.unwrap_or("");
if text.is_empty() {
return Err(LoreError::Other(
"Provide either an entity type + IID or a free-text query.".into(),
));
}
run_query_mode(config, &conn, text, project, limit).await
}
}
}
// ---------------------------------------------------------------------------
// Entity mode: find entities similar to a specific issue/MR
// ---------------------------------------------------------------------------
async fn run_entity_mode(
config: &Config,
conn: &rusqlite::Connection,
entity_type: &str,
iid: i64,
project: Option<&str>,
limit: usize,
) -> Result<RelatedResponse> {
let source_type = match entity_type {
"issues" | "issue" => "issue",
"mrs" | "mr" | "merge-requests" | "merge_request" => "merge_request",
other => {
return Err(LoreError::Other(format!(
"Unknown entity type '{other}'. Use 'issues' or 'mrs'."
)));
}
};
// Resolve project (optional but needed for multi-project setups).
let project_id = match project {
Some(p) => Some(resolve_project(conn, p)?),
None => None,
};
// Find the source document.
let doc = find_entity_document(conn, source_type, iid, project_id)?;
// Get or compute the embedding.
let embedding = get_or_compute_embedding(config, conn, &doc).await?;
// KNN search (request extra to filter self).
let vector_results = search_vector(conn, &embedding, limit + 5)?;
// Hydrate and filter.
let source_labels = parse_label_names(&doc.label_names);
let mut results = Vec::new();
for vr in vector_results {
// Exclude self.
if vr.document_id == doc.id {
continue;
}
if let Some(hydrated) = hydrate_document(conn, vr.document_id)? {
let result_labels = parse_label_names(&hydrated.label_names);
let shared: Vec<String> = source_labels
.intersection(&result_labels)
.cloned()
.collect();
results.push(RelatedResult {
source_type: hydrated.source_type,
iid: hydrated.iid,
title: hydrated.title,
url: hydrated.url,
similarity_score: distance_to_similarity(vr.distance),
shared_labels: shared,
project_path: hydrated.project_path,
});
}
if results.len() >= limit {
break;
}
}
Ok(RelatedResponse {
source: RelatedSource {
source_type: source_type.to_string(),
iid: Some(iid),
title: doc.title,
},
query: None,
results,
mode: "entity".to_string(),
})
}
// ---------------------------------------------------------------------------
// Query mode: embed free text and find similar entities
// ---------------------------------------------------------------------------
async fn run_query_mode(
config: &Config,
conn: &rusqlite::Connection,
text: &str,
project: Option<&str>,
limit: usize,
) -> Result<RelatedResponse> {
let ollama = OllamaClient::new(OllamaConfig {
base_url: config.embedding.base_url.clone(),
model: config.embedding.model.clone(),
timeout_secs: 60,
});
let embeddings = ollama.embed_batch(&[text]).await?;
let embedding = embeddings
.into_iter()
.next()
.ok_or_else(|| LoreError::Other("Ollama returned empty embedding result.".to_string()))?;
let vector_results = search_vector(conn, &embedding, limit)?;
// Resolve the project to validate the name early; query mode does not yet
// filter results by project, hence the leading underscore.
let _project_id = match project {
Some(p) => Some(resolve_project(conn, p)?),
None => None,
};
let mut results = Vec::new();
for vr in vector_results {
if let Some(hydrated) = hydrate_document(conn, vr.document_id)? {
results.push(RelatedResult {
source_type: hydrated.source_type,
iid: hydrated.iid,
title: hydrated.title,
url: hydrated.url,
similarity_score: distance_to_similarity(vr.distance),
shared_labels: Vec::new(), // No source labels in query mode.
project_path: hydrated.project_path,
});
}
if results.len() >= limit {
break;
}
}
Ok(RelatedResponse {
source: RelatedSource {
source_type: "query".to_string(),
iid: None,
title: None,
},
query: Some(text.to_string()),
results,
mode: "query".to_string(),
})
}
// ---------------------------------------------------------------------------
// DB helpers
// ---------------------------------------------------------------------------
fn find_entity_document(
conn: &rusqlite::Connection,
source_type: &str,
iid: i64,
project_id: Option<i64>,
) -> Result<DocRow> {
let (table, iid_col) = match source_type {
"issue" => ("issues", "iid"),
"merge_request" => ("merge_requests", "iid"),
_ => {
return Err(LoreError::Other(format!(
"Unknown source type: {source_type}"
)));
}
};
// Build the query dynamically: the table name and IID column vary per entity
// type, and SQLite cannot bind identifiers as parameters.
let project_filter = if project_id.is_some() {
"AND e.project_id = ?3".to_string()
} else {
String::new()
};
let sql = format!(
"SELECT d.id, d.content_text, d.label_names, d.title \
FROM documents d \
JOIN {table} e ON d.source_type = ?1 AND d.source_id = e.id \
WHERE e.{iid_col} = ?2 {project_filter} \
LIMIT 1"
);
let mut stmt = conn.prepare(&sql)?;
let params: Vec<Box<dyn rusqlite::types::ToSql>> = if let Some(pid) = project_id {
vec![
Box::new(source_type.to_string()),
Box::new(iid),
Box::new(pid),
]
} else {
vec![Box::new(source_type.to_string()), Box::new(iid)]
};
let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let doc = stmt
.query_row(param_refs.as_slice(), |row| {
Ok(DocRow {
id: row.get(0)?,
content_text: row.get(1)?,
label_names: row.get(2)?,
title: row.get(3)?,
})
})
.map_err(|_| {
LoreError::NotFound(format!(
"{source_type} #{iid} not found. Run 'lore sync' to fetch the latest data."
))
})?;
Ok(doc)
}
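The dynamic-SQL assembly in `find_entity_document` interpolates only trusted identifiers (table and column names chosen from a fixed `match`) while keeping user-supplied values behind bound placeholders. A minimal standalone sketch of that split (the helper name and reduced query shape are illustrative, not the actual function):

```rust
// Only identifiers from a fixed allowlist are interpolated into the SQL text;
// the source type, IID, and project id stay as ?1/?2/?3 bind parameters.
fn build_sql(table: &str, iid_col: &str, with_project: bool) -> String {
    let project_filter = if with_project { "AND e.project_id = ?3" } else { "" };
    format!(
        "SELECT d.id FROM documents d \
         JOIN {table} e ON d.source_type = ?1 AND d.source_id = e.id \
         WHERE e.{iid_col} = ?2 {project_filter} LIMIT 1"
    )
}

fn main() {
    // Project-scoped lookups carry a third placeholder; unscoped ones do not.
    assert!(build_sql("issues", "iid", true).contains("?3"));
    assert!(!build_sql("merge_requests", "iid", false).contains("?3"));
}
```

Keeping values as placeholders (rather than formatting them in) is what makes the identifier interpolation safe here.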
/// Get the embedding for a document from the DB, or compute it on-the-fly via Ollama.
async fn get_or_compute_embedding(
config: &Config,
conn: &rusqlite::Connection,
doc: &DocRow,
) -> Result<Vec<f32>> {
// Try to find an existing embedding in the vec0 table.
use crate::embedding::chunk_ids::encode_rowid;
let rowid = encode_rowid(doc.id, 0);
let result: Option<Vec<u8>> = conn
.query_row(
"SELECT embedding FROM embeddings WHERE rowid = ?1",
rusqlite::params![rowid],
|row| row.get(0),
)
.ok();
if let Some(bytes) = result {
// Decode f32 vec from raw bytes.
let floats: Vec<f32> = bytes
.chunks_exact(4)
.map(|chunk| f32::from_le_bytes([chunk[0], chunk[1], chunk[2], chunk[3]]))
.collect();
if !floats.is_empty() {
return Ok(floats);
}
}
// Fallback: embed the content on-the-fly via Ollama.
let ollama = OllamaClient::new(OllamaConfig {
base_url: config.embedding.base_url.clone(),
model: config.embedding.model.clone(),
timeout_secs: 60,
});
let embeddings = ollama.embed_batch(&[&doc.content_text]).await?;
embeddings
.into_iter()
.next()
.ok_or_else(|| LoreError::Other("Ollama returned empty embedding result.".to_string()))
}
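The cache-hit path above decodes the stored blob as contiguous little-endian `f32`s. A self-contained sketch of that byte layout and its round trip (the `encode` half is an assumption for illustration; the actual write path is not shown in this chunk):

```rust
// Embeddings are read back as raw bytes: 4 bytes per f32, little-endian,
// packed back-to-back with no header.
fn encode(floats: &[f32]) -> Vec<u8> {
    floats.iter().flat_map(|f| f.to_le_bytes()).collect()
}

fn decode(bytes: &[u8]) -> Vec<f32> {
    bytes
        .chunks_exact(4)
        .map(|c| f32::from_le_bytes([c[0], c[1], c[2], c[3]]))
        .collect()
}

fn main() {
    let original = vec![0.25_f32, -1.5, 3.0];
    // to_le_bytes / from_le_bytes is an exact bit-level round trip for finite values.
    assert_eq!(decode(&encode(&original)), original);
}
```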
/// Hydrate a document_id into a displayable result by joining back to the source entity.
fn hydrate_document(conn: &rusqlite::Connection, document_id: i64) -> Result<Option<HydratedDoc>> {
// First get the document metadata.
let doc_row: Option<DocMetaRow> = conn
.query_row(
"SELECT d.source_type, d.source_id, d.label_names, d.url, d.project_id \
FROM documents d WHERE d.id = ?1",
rusqlite::params![document_id],
|row| {
Ok((
row.get(0)?,
row.get(1)?,
row.get(2)?,
row.get(3)?,
row.get(4)?,
))
},
)
.ok();
let Some((source_type, source_id, label_names, url, project_id)) = doc_row else {
return Ok(None);
};
// Get the project path.
let project_path: Option<String> = conn
.query_row(
"SELECT path_with_namespace FROM projects WHERE id = ?1",
rusqlite::params![project_id],
|row| row.get(0),
)
.ok();
// Get the entity IID and title from the source table.
let (iid, title) = match source_type.as_str() {
"issue" => {
let row: Option<(i64, String)> = conn
.query_row(
"SELECT iid, title FROM issues WHERE id = ?1",
rusqlite::params![source_id],
|row| Ok((row.get(0)?, row.get(1)?)),
)
.ok();
match row {
Some((iid, title)) => (iid, title),
None => return Ok(None),
}
}
"merge_request" => {
let row: Option<(i64, String)> = conn
.query_row(
"SELECT iid, title FROM merge_requests WHERE id = ?1",
rusqlite::params![source_id],
|row| Ok((row.get(0)?, row.get(1)?)),
)
.ok();
match row {
Some((iid, title)) => (iid, title),
None => return Ok(None),
}
}
// Discussion/note documents have no entity IID to display, so they are
// skipped rather than shown as standalone results.
_ => return Ok(None),
};
Ok(Some(HydratedDoc {
source_type,
iid,
title,
url,
label_names,
project_path,
}))
}
// ---------------------------------------------------------------------------
// Human output
// ---------------------------------------------------------------------------
pub fn print_related(response: &RelatedResponse) {
println!();
match response.source.source_type.as_str() {
"query" => {
println!(
"{}",
Theme::bold().render(&format!(
"Related to: \"{}\"",
response.query.as_deref().unwrap_or("")
))
);
}
_ => {
let entity_label = if response.source.source_type == "issue" {
format!("#{}", response.source.iid.unwrap_or(0))
} else {
format!("!{}", response.source.iid.unwrap_or(0))
};
println!(
"{}",
Theme::bold().render(&format!(
"Related to {} {} {}",
response.source.source_type,
entity_label,
response
.source
.title
.as_deref()
.map(|t| format!("\"{}\"", t))
.unwrap_or_default()
))
);
}
}
if response.results.is_empty() {
println!(
"\n {} {}",
Icons::info(),
Theme::dim().render("No related entities found.")
);
println!();
return;
}
println!();
for (i, r) in response.results.iter().enumerate() {
let icon = if r.source_type == "issue" {
Icons::issue_opened()
} else {
Icons::mr_opened()
};
let prefix = if r.source_type == "issue" { "#" } else { "!" };
let score_pct = (r.similarity_score * 100.0) as u8;
let score_str = format!("{score_pct}%");
let labels_str = if r.shared_labels.is_empty() {
String::new()
} else {
format!(" [{}]", r.shared_labels.join(", "))
};
let project_str = r
.project_path
.as_deref()
.map(|p| format!(" ({})", p))
.unwrap_or_default();
println!(
" {:>2}. {} {}{:<5} {} {}{}{}",
i + 1,
icon,
prefix,
r.iid,
Theme::accent().render(&score_str),
r.title,
Theme::dim().render(&labels_str),
Theme::dim().render(&project_str),
);
}
println!();
}
// ---------------------------------------------------------------------------
// Robot (JSON) output
// ---------------------------------------------------------------------------
pub fn print_related_json(response: &RelatedResponse, elapsed_ms: u64) {
let output = serde_json::json!({
"ok": true,
"data": {
"source": response.source,
"query": response.query,
"mode": response.mode,
"results": response.results,
},
"meta": {
"elapsed_ms": elapsed_ms,
"mode": response.mode,
"embedding_dims": 768,
"distance_metric": "l2",
}
});
println!("{}", serde_json::to_string(&output).unwrap_or_default());
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_distance_to_similarity_identical() {
let sim = distance_to_similarity(0.0);
assert!(
(sim - 1.0).abs() < f64::EPSILON,
"distance 0 should give similarity 1.0"
);
}
#[test]
fn test_distance_to_similarity_one() {
let sim = distance_to_similarity(1.0);
assert!(
(sim - 0.5).abs() < f64::EPSILON,
"distance 1 should give similarity 0.5"
);
}
#[test]
fn test_distance_to_similarity_large() {
let sim = distance_to_similarity(100.0);
assert!(
sim > 0.0 && sim < 0.02,
"large distance should give near-zero similarity"
);
}
#[test]
fn test_distance_to_similarity_range() {
for d in [0.0, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0, 100.0] {
let sim = distance_to_similarity(d);
assert!(
(0.0..=1.0).contains(&sim),
"similarity {sim} out of [0, 1] range for distance {d}"
);
}
}
#[test]
fn test_distance_to_similarity_monotonic() {
let distances = [0.0, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0];
for w in distances.windows(2) {
let s1 = distance_to_similarity(w[0]);
let s2 = distance_to_similarity(w[1]);
assert!(
s1 >= s2,
"similarity should decrease with distance: d={} s={} vs d={} s={}",
w[0],
s1,
w[1],
s2
);
}
}
#[test]
fn test_parse_label_names_valid_json() {
let json = Some(r#"["bug","frontend","urgent"]"#.to_string());
let labels = parse_label_names(&json);
assert_eq!(labels.len(), 3);
assert!(labels.contains("bug"));
assert!(labels.contains("frontend"));
assert!(labels.contains("urgent"));
}
#[test]
fn test_parse_label_names_null() {
let labels = parse_label_names(&None);
assert!(labels.is_empty());
}
#[test]
fn test_parse_label_names_invalid_json() {
let json = Some("not valid json".to_string());
let labels = parse_label_names(&json);
assert!(labels.is_empty());
}
#[test]
fn test_parse_label_names_empty_array() {
let json = Some("[]".to_string());
let labels = parse_label_names(&json);
assert!(labels.is_empty());
}
#[test]
fn test_shared_labels_intersection() {
let a = Some(r#"["bug","frontend","urgent"]"#.to_string());
let b = Some(r#"["bug","backend","urgent","perf"]"#.to_string());
let labels_a = parse_label_names(&a);
let labels_b = parse_label_names(&b);
let shared: HashSet<String> = labels_a.intersection(&labels_b).cloned().collect();
assert_eq!(shared.len(), 2);
assert!(shared.contains("bug"));
assert!(shared.contains("urgent"));
}
#[test]
fn test_shared_labels_no_overlap() {
let a = Some(r#"["bug"]"#.to_string());
let b = Some(r#"["feature"]"#.to_string());
let labels_a = parse_label_names(&a);
let labels_b = parse_label_names(&b);
let shared: HashSet<String> = labels_a.intersection(&labels_b).cloned().collect();
assert!(shared.is_empty());
}
}
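The tests above pin `distance_to_similarity` at several points: distance 0 maps to 1.0, distance 1 to 0.5, large distances approach zero, and the output decreases monotonically within [0, 1]. One minimal function consistent with all of those constraints is `1 / (1 + d)` — a sketch only, since the actual implementation lives outside this excerpt:

```rust
/// Hypothetical reconstruction consistent with the unit tests above:
/// d=0 -> 1.0, d=1 -> 0.5, monotonically decreasing, bounded in (0, 1].
fn distance_to_similarity(distance: f64) -> f64 {
    1.0 / (1.0 + distance)
}

fn main() {
    assert!((distance_to_similarity(0.0) - 1.0).abs() < f64::EPSILON);
    assert!((distance_to_similarity(1.0) - 0.5).abs() < f64::EPSILON);
    assert!(distance_to_similarity(100.0) < 0.02);
}
```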


@@ -65,6 +65,16 @@ pub struct ClosingMrRef {
pub web_url: Option<String>,
}
#[derive(Debug, Clone, Serialize)]
pub struct RelatedIssueRef {
pub iid: i64,
pub title: String,
pub state: String,
pub web_url: Option<String>,
/// Set only for unresolved cross-project references.
pub project_path: Option<String>,
}
#[derive(Debug, Serialize)]
pub struct IssueDetail {
pub id: i64,
@@ -87,6 +97,7 @@ pub struct IssueDetail {
pub user_notes_count: i64,
pub merge_requests_count: usize,
pub closing_merge_requests: Vec<ClosingMrRef>,
pub related_issues: Vec<RelatedIssueRef>,
pub discussions: Vec<DiscussionDetail>,
pub status_name: Option<String>,
pub status_category: Option<String>,
@@ -125,6 +136,8 @@ pub fn run_show_issue(
let closing_mrs = get_closing_mrs(&conn, issue.id)?;
let related_issues = get_related_issues(&conn, issue.id)?;
let discussions = get_issue_discussions(&conn, issue.id)?;
let references_full = format!("{}#{}", issue.project_path, issue.iid);
@@ -151,6 +164,7 @@ pub fn run_show_issue(
user_notes_count: issue.user_notes_count,
merge_requests_count,
closing_merge_requests: closing_mrs,
related_issues,
discussions,
status_name: issue.status_name,
status_category: issue.status_category,
@@ -321,6 +335,54 @@ fn get_closing_mrs(conn: &Connection, issue_id: i64) -> Result<Vec<ClosingMrRef>
Ok(mrs)
}
fn get_related_issues(conn: &Connection, issue_id: i64) -> Result<Vec<RelatedIssueRef>> {
// Resolved local references: source or target side
let mut stmt = conn.prepare(
"SELECT DISTINCT i.iid, i.title, i.state, i.web_url, NULL AS project_path
FROM entity_references er
JOIN issues i ON i.id = er.target_entity_id
WHERE er.source_entity_type = 'issue'
AND er.source_entity_id = ?1
AND er.target_entity_type = 'issue'
AND er.reference_type = 'related'
AND er.target_entity_id IS NOT NULL
UNION
SELECT DISTINCT i.iid, i.title, i.state, i.web_url, NULL AS project_path
FROM entity_references er
JOIN issues i ON i.id = er.source_entity_id
WHERE er.target_entity_type = 'issue'
AND er.target_entity_id = ?1
AND er.source_entity_type = 'issue'
AND er.reference_type = 'related'
UNION
SELECT er.target_entity_iid AS iid, NULL AS title, NULL AS state, NULL AS web_url,
er.target_project_path AS project_path
FROM entity_references er
WHERE er.source_entity_type = 'issue'
AND er.source_entity_id = ?1
AND er.target_entity_type = 'issue'
AND er.reference_type = 'related'
AND er.target_entity_id IS NULL
ORDER BY iid",
)?;
let related: Vec<RelatedIssueRef> = stmt
.query_map([issue_id], |row| {
Ok(RelatedIssueRef {
iid: row.get(0)?,
title: row.get::<_, Option<String>>(1)?.unwrap_or_default(),
state: row
.get::<_, Option<String>>(2)?
.unwrap_or_else(|| "unknown".to_string()),
web_url: row.get(3)?,
project_path: row.get(4)?,
})
})?
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(related)
}
fn get_issue_discussions(conn: &Connection, issue_id: i64) -> Result<Vec<DiscussionDetail>> {
let mut disc_stmt = conn.prepare(
"SELECT id, individual_note FROM discussions
@@ -729,6 +791,38 @@ pub fn print_show_issue(issue: &IssueDetail) {
}
}
// Related Issues section
if !issue.related_issues.is_empty() {
println!(
"{}",
render::section_divider(&format!("Related Issues ({})", issue.related_issues.len()))
);
for rel in &issue.related_issues {
let (icon, style) = match rel.state.as_str() {
"opened" => (Icons::issue_opened(), Theme::success()),
"closed" => (Icons::issue_closed(), Theme::dim()),
_ => (Icons::issue_opened(), Theme::muted()),
};
if let Some(project_path) = &rel.project_path {
println!(
" {} {}#{} {}",
Theme::muted().render(icon),
project_path,
rel.iid,
Theme::muted().render("(cross-project, unresolved)"),
);
} else {
println!(
" {} #{} {} {}",
style.render(icon),
rel.iid,
rel.title,
style.render(&rel.state),
);
}
}
}
// Description section
println!("{}", render::section_divider("Description"));
if let Some(desc) = &issue.description {


@@ -26,6 +26,35 @@ pub struct SyncOptions {
pub no_events: bool,
pub robot_mode: bool,
pub dry_run: bool,
pub issue_iids: Vec<u64>,
pub mr_iids: Vec<u64>,
pub project: Option<String>,
pub preflight_only: bool,
}
impl SyncOptions {
pub const MAX_SURGICAL_TARGETS: usize = 100;
pub fn is_surgical(&self) -> bool {
!self.issue_iids.is_empty() || !self.mr_iids.is_empty()
}
}
#[derive(Debug, Default, Serialize)]
pub struct SurgicalIids {
pub issues: Vec<u64>,
pub merge_requests: Vec<u64>,
}
#[derive(Debug, Serialize)]
pub struct EntitySyncResult {
pub entity_type: String,
pub iid: u64,
pub outcome: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub error: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
pub toctou_reason: Option<String>,
}
#[derive(Debug, Default, Serialize)]
@@ -49,6 +78,14 @@ pub struct SyncResult {
pub issue_projects: Vec<ProjectSummary>,
#[serde(skip)]
pub mr_projects: Vec<ProjectSummary>,
#[serde(skip_serializing_if = "Option::is_none")]
pub surgical_mode: Option<bool>,
#[serde(skip_serializing_if = "Option::is_none")]
pub surgical_iids: Option<SurgicalIids>,
#[serde(skip_serializing_if = "Option::is_none")]
pub entity_results: Option<Vec<EntitySyncResult>>,
#[serde(skip_serializing_if = "Option::is_none")]
pub preflight_only: Option<bool>,
}
/// Apply semantic color to a stage-completion icon glyph.
@@ -66,6 +103,11 @@ pub async fn run_sync(
run_id: Option<&str>,
signal: &ShutdownSignal,
) -> Result<SyncResult> {
// Surgical dispatch: if any IIDs specified, route to the surgical pipeline.
if options.is_surgical() {
return super::sync_surgical::run_sync_surgical(config, options, run_id, signal).await;
}
let generated_id;
let run_id = match run_id {
Some(id) => id,
@@ -1029,4 +1071,93 @@ mod tests {
assert!(rows[0].contains("0 statuses updated"));
assert!(rows[0].contains("skipped (disabled)"));
}
#[test]
fn sync_result_default_omits_surgical_fields() {
let result = SyncResult::default();
let json = serde_json::to_value(&result).unwrap();
assert!(json.get("surgical_mode").is_none());
assert!(json.get("surgical_iids").is_none());
assert!(json.get("entity_results").is_none());
assert!(json.get("preflight_only").is_none());
}
#[test]
fn sync_result_with_surgical_fields_serializes_correctly() {
let result = SyncResult {
surgical_mode: Some(true),
surgical_iids: Some(SurgicalIids {
issues: vec![7, 42],
merge_requests: vec![10],
}),
entity_results: Some(vec![
EntitySyncResult {
entity_type: "issue".to_string(),
iid: 7,
outcome: "synced".to_string(),
error: None,
toctou_reason: None,
},
EntitySyncResult {
entity_type: "issue".to_string(),
iid: 42,
outcome: "skipped_toctou".to_string(),
error: None,
toctou_reason: Some("updated_at changed".to_string()),
},
]),
preflight_only: Some(false),
..SyncResult::default()
};
let json = serde_json::to_value(&result).unwrap();
assert_eq!(json["surgical_mode"], true);
assert_eq!(json["surgical_iids"]["issues"], serde_json::json!([7, 42]));
assert_eq!(json["entity_results"].as_array().unwrap().len(), 2);
assert_eq!(json["entity_results"][1]["outcome"], "skipped_toctou");
assert_eq!(json["preflight_only"], false);
}
#[test]
fn entity_sync_result_omits_none_fields() {
let entity = EntitySyncResult {
entity_type: "merge_request".to_string(),
iid: 10,
outcome: "synced".to_string(),
error: None,
toctou_reason: None,
};
let json = serde_json::to_value(&entity).unwrap();
assert!(json.get("error").is_none());
assert!(json.get("toctou_reason").is_none());
assert!(json.get("entity_type").is_some());
}
#[test]
fn is_surgical_with_issues() {
let opts = SyncOptions {
issue_iids: vec![1],
..SyncOptions::default()
};
assert!(opts.is_surgical());
}
#[test]
fn is_surgical_with_mrs() {
let opts = SyncOptions {
mr_iids: vec![10],
..SyncOptions::default()
};
assert!(opts.is_surgical());
}
#[test]
fn is_surgical_empty() {
let opts = SyncOptions::default();
assert!(!opts.is_surgical());
}
#[test]
fn max_surgical_targets_is_100() {
assert_eq!(SyncOptions::MAX_SURGICAL_TARGETS, 100);
}
}


@@ -0,0 +1,462 @@
//! Surgical (by-IID) sync orchestration.
//!
//! Coordinates the full pipeline for syncing specific issues/MRs by IID:
//! resolve project → preflight fetch → ingest with TOCTOU → enrichment →
//! scoped doc regeneration → embedding.
use std::time::Instant;
use tracing::{debug, warn};
use crate::Config;
use crate::cli::commands::embed::run_embed;
use crate::core::db::create_connection;
use crate::core::error::{LoreError, Result};
use crate::core::lock::{AppLock, LockOptions};
use crate::core::metrics::StageTiming;
use crate::core::paths::get_db_path;
use crate::core::project::resolve_project;
use crate::core::shutdown::ShutdownSignal;
use crate::core::sync_run::SyncRunRecorder;
use crate::documents::{SourceType, regenerate_documents_for_sources};
use crate::gitlab::GitLabClient;
use crate::ingestion::surgical::{
SurgicalTarget, enrich_entity_resource_events, enrich_mr_closes_issues, enrich_mr_file_changes,
ingest_issue_by_iid, ingest_mr_by_iid, preflight_fetch,
};
use super::sync::{EntitySyncResult, SurgicalIids, SyncOptions, SyncResult};
fn timing(name: &str, elapsed_ms: u64, items: usize, errors: usize) -> StageTiming {
StageTiming {
name: name.to_string(),
project: None,
elapsed_ms,
items_processed: items,
items_skipped: 0,
errors,
rate_limit_hits: 0,
retries: 0,
sub_stages: vec![],
}
}
/// Run the surgical sync pipeline for specific IIDs within a single project.
///
/// Unlike [`super::sync::run_sync`], this targets specific issues/MRs by IID
/// rather than paginating all entities across all projects.
pub async fn run_sync_surgical(
config: &Config,
options: SyncOptions,
run_id: Option<&str>,
signal: &ShutdownSignal,
) -> Result<SyncResult> {
// ── Validate inputs ──
if !options.is_surgical() {
return Ok(SyncResult::default());
}
let project_str = options.project.as_deref().ok_or_else(|| {
LoreError::Other("Surgical sync requires --project (-p) to identify the target".into())
})?;
// ── Run ID ──
let generated_id;
let run_id = match run_id {
Some(id) => id,
None => {
generated_id = uuid::Uuid::new_v4().simple().to_string();
&generated_id[..8]
}
};
// ── DB connections ──
let db_path = get_db_path(config.storage.db_path.as_deref());
let conn = create_connection(&db_path)?;
let recorder_conn = create_connection(&db_path)?;
let lock_conn = create_connection(&db_path)?;
// ── Resolve project ──
let project_id = resolve_project(&conn, project_str)?;
let (gitlab_project_id, project_path): (i64, String) = conn.query_row(
"SELECT gitlab_project_id, path_with_namespace FROM projects WHERE id = ?1",
[project_id],
|row| Ok((row.get(0)?, row.get(1)?)),
)?;
// ── Build surgical targets ──
let mut targets = Vec::new();
for &iid in &options.issue_iids {
targets.push(SurgicalTarget::Issue { iid });
}
for &iid in &options.mr_iids {
targets.push(SurgicalTarget::MergeRequest { iid });
}
// ── Prepare result ──
let mut result = SyncResult {
run_id: run_id.to_string(),
surgical_mode: Some(true),
surgical_iids: Some(SurgicalIids {
issues: options.issue_iids.clone(),
merge_requests: options.mr_iids.clone(),
}),
..SyncResult::default()
};
let mut entity_results: Vec<EntitySyncResult> = Vec::new();
let mut stage_timings: Vec<StageTiming> = Vec::new();
// ── Start recorder ──
let recorder = SyncRunRecorder::start(&recorder_conn, "surgical-sync", run_id)?;
let iids_json = serde_json::to_string(&result.surgical_iids).unwrap_or_default();
recorder.set_surgical_metadata(&recorder_conn, "surgical", "preflight", &iids_json)?;
// ── GitLab client ──
let token =
std::env::var(&config.gitlab.token_env_var).map_err(|_| LoreError::TokenNotSet {
env_var: config.gitlab.token_env_var.clone(),
})?;
let client = GitLabClient::new(
&config.gitlab.base_url,
&token,
Some(config.sync.requests_per_second),
);
// ── Stage: Preflight fetch ──
let preflight_start = Instant::now();
debug!(%run_id, "Surgical sync: preflight fetch");
recorder.update_phase(&recorder_conn, "preflight")?;
let preflight = preflight_fetch(&client, gitlab_project_id, &project_path, &targets).await?;
for failure in &preflight.failures {
entity_results.push(EntitySyncResult {
entity_type: failure.target.entity_type().to_string(),
iid: failure.target.iid(),
outcome: "not_found".to_string(),
error: Some(failure.error.to_string()),
toctou_reason: None,
});
}
stage_timings.push(timing(
"preflight",
preflight_start.elapsed().as_millis() as u64,
preflight.issues.len() + preflight.merge_requests.len(),
preflight.failures.len(),
));
// ── Preflight-only mode ──
if options.preflight_only {
result.preflight_only = Some(true);
result.entity_results = Some(entity_results);
recorder.succeed(&recorder_conn, &stage_timings, 0, preflight.failures.len())?;
return Ok(result);
}
// ── Cancellation check ──
if signal.is_cancelled() {
result.entity_results = Some(entity_results);
recorder.cancel(&recorder_conn, "Cancelled before ingest")?;
return Ok(result);
}
// ── Acquire lock ──
let mut lock = AppLock::new(
lock_conn,
LockOptions {
name: "sync".to_string(),
stale_lock_minutes: config.sync.stale_lock_minutes,
heartbeat_interval_seconds: config.sync.heartbeat_interval_seconds,
},
);
lock.acquire(options.force)?;
// ── Stage: Ingest ──
let ingest_start = Instant::now();
debug!(%run_id, "Surgical sync: ingesting entities");
recorder.update_phase(&recorder_conn, "ingest")?;
let mut dirty_sources: Vec<(SourceType, i64)> = Vec::new();
// Ingest issues
for issue in &preflight.issues {
match ingest_issue_by_iid(&conn, config, project_id, issue) {
Ok(ir) => {
if ir.skipped_stale {
entity_results.push(EntitySyncResult {
entity_type: "issue".to_string(),
iid: issue.iid as u64,
outcome: "skipped_stale".to_string(),
error: None,
toctou_reason: Some("DB has same or newer updated_at".to_string()),
});
recorder.record_entity_result(&recorder_conn, "issue", "skipped_stale")?;
} else {
dirty_sources.extend(ir.dirty_source_keys);
result.issues_updated += 1;
entity_results.push(EntitySyncResult {
entity_type: "issue".to_string(),
iid: issue.iid as u64,
outcome: "ingested".to_string(),
error: None,
toctou_reason: None,
});
recorder.record_entity_result(&recorder_conn, "issue", "ingested")?;
}
}
Err(e) => {
warn!(iid = issue.iid, error = %e, "Surgical issue ingest failed");
entity_results.push(EntitySyncResult {
entity_type: "issue".to_string(),
iid: issue.iid as u64,
outcome: "error".to_string(),
error: Some(e.to_string()),
toctou_reason: None,
});
}
}
}
// Ingest MRs
for mr in &preflight.merge_requests {
match ingest_mr_by_iid(&conn, config, project_id, mr) {
Ok(mr_result) => {
if mr_result.skipped_stale {
entity_results.push(EntitySyncResult {
entity_type: "merge_request".to_string(),
iid: mr.iid as u64,
outcome: "skipped_stale".to_string(),
error: None,
toctou_reason: Some("DB has same or newer updated_at".to_string()),
});
recorder.record_entity_result(&recorder_conn, "mr", "skipped_stale")?;
} else {
dirty_sources.extend(mr_result.dirty_source_keys);
result.mrs_updated += 1;
entity_results.push(EntitySyncResult {
entity_type: "merge_request".to_string(),
iid: mr.iid as u64,
outcome: "ingested".to_string(),
error: None,
toctou_reason: None,
});
recorder.record_entity_result(&recorder_conn, "mr", "ingested")?;
}
}
Err(e) => {
warn!(iid = mr.iid, error = %e, "Surgical MR ingest failed");
entity_results.push(EntitySyncResult {
entity_type: "merge_request".to_string(),
iid: mr.iid as u64,
outcome: "error".to_string(),
error: Some(e.to_string()),
toctou_reason: None,
});
}
}
}
stage_timings.push(timing(
"ingest",
ingest_start.elapsed().as_millis() as u64,
result.issues_updated + result.mrs_updated,
// Count per-entity ingest failures instead of hardcoding zero.
entity_results.iter().filter(|r| r.outcome == "error").count(),
));
// ── Stage: Enrichment ──
if signal.is_cancelled() {
result.entity_results = Some(entity_results);
lock.release();
recorder.cancel(&recorder_conn, "Cancelled before enrichment")?;
return Ok(result);
}
let enrich_start = Instant::now();
debug!(%run_id, "Surgical sync: enriching dependents");
recorder.update_phase(&recorder_conn, "enrichment")?;
// Enrich issues: resource events
if !options.no_events {
for issue in &preflight.issues {
let local_id = match conn.query_row(
"SELECT id FROM issues WHERE project_id = ? AND iid = ?",
(project_id, issue.iid),
|row| row.get::<_, i64>(0),
) {
Ok(id) => id,
Err(_) => continue,
};
if let Err(e) = enrich_entity_resource_events(
&client,
&conn,
project_id,
gitlab_project_id,
"issue",
issue.iid,
local_id,
)
.await
{
warn!(iid = issue.iid, error = %e, "Failed to enrich issue resource events");
result.resource_events_failed += 1;
} else {
result.resource_events_fetched += 1;
}
}
}
// Enrich MRs: resource events, closes_issues, file changes
for mr in &preflight.merge_requests {
let local_mr_id = match conn.query_row(
"SELECT id FROM merge_requests WHERE project_id = ? AND iid = ?",
(project_id, mr.iid),
|row| row.get::<_, i64>(0),
) {
Ok(id) => id,
Err(_) => continue,
};
if !options.no_events {
if let Err(e) = enrich_entity_resource_events(
&client,
&conn,
project_id,
gitlab_project_id,
"merge_request",
mr.iid,
local_mr_id,
)
.await
{
warn!(iid = mr.iid, error = %e, "Failed to enrich MR resource events");
result.resource_events_failed += 1;
} else {
result.resource_events_fetched += 1;
}
}
if let Err(e) = enrich_mr_closes_issues(
&client,
&conn,
project_id,
gitlab_project_id,
mr.iid,
local_mr_id,
)
.await
{
warn!(iid = mr.iid, error = %e, "Failed to enrich MR closes_issues");
}
if let Err(e) = enrich_mr_file_changes(
&client,
&conn,
project_id,
gitlab_project_id,
mr.iid,
local_mr_id,
)
.await
{
warn!(iid = mr.iid, error = %e, "Failed to enrich MR file changes");
result.mr_diffs_failed += 1;
} else {
result.mr_diffs_fetched += 1;
}
}
stage_timings.push(timing(
"enrichment",
enrich_start.elapsed().as_millis() as u64,
result.resource_events_fetched + result.mr_diffs_fetched,
result.resource_events_failed + result.mr_diffs_failed,
));
// ── Stage: Scoped doc regeneration ──
if !options.no_docs && !dirty_sources.is_empty() {
if signal.is_cancelled() {
result.entity_results = Some(entity_results);
lock.release();
recorder.cancel(&recorder_conn, "Cancelled before doc generation")?;
return Ok(result);
}
let docs_start = Instant::now();
debug!(%run_id, count = dirty_sources.len(), "Surgical sync: regenerating docs");
recorder.update_phase(&recorder_conn, "docs")?;
match regenerate_documents_for_sources(&conn, &dirty_sources) {
Ok(docs_result) => {
result.documents_regenerated = docs_result.regenerated;
result.documents_errored = docs_result.errored;
}
Err(e) => {
warn!(error = %e, "Surgical doc regeneration failed");
}
}
stage_timings.push(timing(
"docs",
docs_start.elapsed().as_millis() as u64,
result.documents_regenerated,
result.documents_errored,
));
}
// ── Stage: Embedding ──
if !options.no_embed {
if signal.is_cancelled() {
result.entity_results = Some(entity_results);
lock.release();
recorder.cancel(&recorder_conn, "Cancelled before embedding")?;
return Ok(result);
}
let embed_start = Instant::now();
debug!(%run_id, "Surgical sync: embedding");
recorder.update_phase(&recorder_conn, "embed")?;
match run_embed(config, false, false, None, signal).await {
Ok(embed_result) => {
result.documents_embedded = embed_result.docs_embedded;
result.embedding_failed = embed_result.failed;
}
Err(e) => {
// Embedding failure is non-fatal (Ollama may be unavailable)
warn!(error = %e, "Surgical embedding failed (non-fatal)");
}
}
stage_timings.push(timing(
"embed",
embed_start.elapsed().as_millis() as u64,
result.documents_embedded,
result.embedding_failed,
));
}
// ── Finalize ──
lock.release();
result.entity_results = Some(entity_results);
let total_items = result.issues_updated + result.mrs_updated;
let total_errors =
result.resource_events_failed + result.mr_diffs_failed + result.documents_errored;
recorder.succeed(&recorder_conn, &stage_timings, total_items, total_errors)?;
debug!(
%run_id,
issues = result.issues_updated,
mrs = result.mrs_updated,
docs = result.documents_regenerated,
"Surgical sync complete"
);
Ok(result)
}
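`run_sync_surgical` repeats one pattern per stage: check the shutdown signal between stages, run the stage, then record a `StageTiming`. A stripped-down sketch of that loop shape (names and the flat loop are hypothetical — the real pipeline interleaves stage-specific cleanup and conditional stages):

```rust
use std::time::Instant;

struct Timing {
    name: String,
    elapsed_ms: u64,
}

// Cancellation is only honored at stage boundaries, never mid-stage, so each
// completed stage still gets a timing entry.
fn run_stages(cancelled: impl Fn() -> bool) -> Vec<Timing> {
    let mut timings = Vec::new();
    for stage in ["preflight", "ingest", "enrichment", "docs", "embed"] {
        if cancelled() {
            break;
        }
        let start = Instant::now();
        // ... stage work would go here ...
        timings.push(Timing {
            name: stage.to_string(),
            elapsed_ms: start.elapsed().as_millis() as u64,
        });
    }
    timings
}

fn main() {
    assert_eq!(run_stages(|| false).len(), 5);
    assert_eq!(run_stages(|| true).len(), 0);
}
```

Checking cancellation only between stages keeps each stage's DB writes and its recorded timing consistent with one another.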
#[cfg(test)]
#[path = "sync_surgical_tests.rs"]
mod tests;


@@ -0,0 +1,323 @@
//! Tests for `sync_surgical.rs` — surgical sync orchestration.
use std::path::Path;
use wiremock::matchers::{method, path, path_regex};
use wiremock::{Mock, MockServer, ResponseTemplate};
use crate::cli::commands::sync::SyncOptions;
use crate::cli::commands::sync_surgical::run_sync_surgical;
use crate::core::config::{Config, GitLabConfig, ProjectConfig};
use crate::core::db::{create_connection, run_migrations};
use crate::core::shutdown::ShutdownSignal;
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------
fn setup_temp_db() -> (tempfile::NamedTempFile, rusqlite::Connection) {
let tmp = tempfile::NamedTempFile::new().unwrap();
let conn = create_connection(tmp.path()).unwrap();
run_migrations(&conn).unwrap();
conn.execute(
"INSERT INTO projects (id, gitlab_project_id, path_with_namespace, web_url)
VALUES (1, 42, 'group/repo', 'https://gitlab.example.com/group/repo')",
[],
)
.unwrap();
(tmp, conn)
}
fn test_config(base_url: &str, db_path: &Path) -> Config {
Config {
gitlab: GitLabConfig {
base_url: base_url.to_string(),
token_env_var: "LORE_TEST_TOKEN".to_string(),
},
projects: vec![ProjectConfig {
path: "group/repo".to_string(),
}],
default_project: None,
sync: crate::core::config::SyncConfig {
requests_per_second: 1000.0,
stale_lock_minutes: 30,
heartbeat_interval_seconds: 10,
..Default::default()
},
storage: crate::core::config::StorageConfig {
db_path: Some(db_path.to_string_lossy().to_string()),
backup_dir: None,
compress_raw_payloads: false,
},
embedding: Default::default(),
logging: Default::default(),
scoring: Default::default(),
}
}
fn issue_json(iid: i64) -> serde_json::Value {
serde_json::json!({
"id": 1000 + iid,
"iid": iid,
"project_id": 42,
"title": format!("Test issue #{iid}"),
"description": "desc",
"state": "opened",
"created_at": "2026-02-17T10:00:00.000+00:00",
"updated_at": "2026-02-17T12:00:00.000+00:00",
"closed_at": null,
"author": { "id": 1, "username": "alice", "name": "Alice" },
"assignees": [],
"labels": ["bug"],
"milestone": null,
"due_date": null,
"web_url": format!("https://gitlab.example.com/group/repo/-/issues/{iid}")
})
}
#[allow(dead_code)] // Used by MR integration tests added later
fn mr_json(iid: i64) -> serde_json::Value {
serde_json::json!({
"id": 2000 + iid,
"iid": iid,
"project_id": 42,
"title": format!("Test MR !{iid}"),
"description": "desc",
"state": "opened",
"draft": false,
"work_in_progress": false,
"source_branch": "feat",
"target_branch": "main",
"sha": "abc123",
"references": { "short": format!("!{iid}"), "full": format!("group/repo!{iid}") },
"detailed_merge_status": "mergeable",
"created_at": "2026-02-17T10:00:00.000+00:00",
"updated_at": "2026-02-17T12:00:00.000+00:00",
"merged_at": null,
"closed_at": null,
"author": { "id": 2, "username": "bob", "name": "Bob" },
"merge_user": null,
"merged_by": null,
"labels": [],
"assignees": [],
"reviewers": [],
"web_url": format!("https://gitlab.example.com/group/repo/-/merge_requests/{iid}"),
"merge_commit_sha": null,
"squash_commit_sha": null
})
}
/// Mount all enrichment endpoint mocks (resource events, closes_issues, diffs) as empty.
async fn mount_empty_enrichment_mocks(server: &MockServer) {
    // Every enrichment endpoint responds 200 with an empty JSON array:
    // resource events for issues and MRs, closes_issues, and diffs.
    let endpoints = [
        // Resource events for issues
        r"/api/v4/projects/\d+/issues/\d+/resource_state_events",
        r"/api/v4/projects/\d+/issues/\d+/resource_label_events",
        r"/api/v4/projects/\d+/issues/\d+/resource_milestone_events",
        // Resource events for MRs
        r"/api/v4/projects/\d+/merge_requests/\d+/resource_state_events",
        r"/api/v4/projects/\d+/merge_requests/\d+/resource_label_events",
        r"/api/v4/projects/\d+/merge_requests/\d+/resource_milestone_events",
        // Closes issues
        r"/api/v4/projects/\d+/merge_requests/\d+/closes_issues",
        // Diffs
        r"/api/v4/projects/\d+/merge_requests/\d+/diffs",
    ];
    for pattern in endpoints {
        Mock::given(method("GET"))
            .and(path_regex(pattern))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
            .mount(server)
            .await;
    }
}
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[tokio::test]
async fn ingest_one_issue_updates_result() {
let server = MockServer::start().await;
let (tmp, _conn) = setup_temp_db();
// Set token env var
// SAFETY: Tests are single-threaded within each test function.
unsafe { std::env::set_var("LORE_TEST_TOKEN", "test-token") };
// Mock preflight issue fetch
Mock::given(method("GET"))
.and(path("/api/v4/projects/42/issues/7"))
.respond_with(ResponseTemplate::new(200).set_body_json(issue_json(7)))
.mount(&server)
.await;
mount_empty_enrichment_mocks(&server).await;
let config = test_config(&server.uri(), tmp.path());
let options = SyncOptions {
robot_mode: true,
issue_iids: vec![7],
project: Some("group/repo".to_string()),
no_embed: true, // skip embed (no Ollama in tests)
..SyncOptions::default()
};
let signal = ShutdownSignal::new();
let result = run_sync_surgical(&config, options, Some("test01"), &signal)
.await
.unwrap();
assert_eq!(result.surgical_mode, Some(true));
assert_eq!(result.issues_updated, 1);
assert!(result.entity_results.is_some());
let entities = result.entity_results.unwrap();
assert_eq!(entities.len(), 1);
assert_eq!(entities[0].outcome, "ingested");
}
#[tokio::test]
async fn preflight_only_returns_early() {
let server = MockServer::start().await;
let (tmp, _conn) = setup_temp_db();
// SAFETY: Tests are single-threaded within each test function.
unsafe { std::env::set_var("LORE_TEST_TOKEN", "test-token") };
Mock::given(method("GET"))
.and(path("/api/v4/projects/42/issues/7"))
.respond_with(ResponseTemplate::new(200).set_body_json(issue_json(7)))
.mount(&server)
.await;
let config = test_config(&server.uri(), tmp.path());
let options = SyncOptions {
robot_mode: true,
issue_iids: vec![7],
project: Some("group/repo".to_string()),
preflight_only: true,
..SyncOptions::default()
};
let signal = ShutdownSignal::new();
let result = run_sync_surgical(&config, options, Some("test02"), &signal)
.await
.unwrap();
assert_eq!(result.preflight_only, Some(true));
assert_eq!(result.issues_updated, 0); // No actual ingest
}
#[tokio::test]
async fn cancellation_before_ingest_cancels_recorder() {
let server = MockServer::start().await;
let (tmp, _conn) = setup_temp_db();
// SAFETY: Tests are single-threaded within each test function.
unsafe { std::env::set_var("LORE_TEST_TOKEN", "test-token") };
Mock::given(method("GET"))
.and(path("/api/v4/projects/42/issues/7"))
.respond_with(ResponseTemplate::new(200).set_body_json(issue_json(7)))
.mount(&server)
.await;
let config = test_config(&server.uri(), tmp.path());
let options = SyncOptions {
robot_mode: true,
issue_iids: vec![7],
project: Some("group/repo".to_string()),
..SyncOptions::default()
};
let signal = ShutdownSignal::new();
signal.cancel(); // Cancel before we start
let result = run_sync_surgical(&config, options, Some("test03"), &signal)
.await
.unwrap();
assert_eq!(result.issues_updated, 0);
}
fn dummy_config() -> Config {
Config {
gitlab: GitLabConfig {
base_url: "https://unused.example.com".to_string(),
token_env_var: "LORE_TEST_TOKEN".to_string(),
},
projects: vec![],
default_project: None,
sync: Default::default(),
storage: Default::default(),
embedding: Default::default(),
logging: Default::default(),
scoring: Default::default(),
}
}
#[tokio::test]
async fn missing_project_returns_error() {
let options = SyncOptions {
issue_iids: vec![7],
project: None, // Missing!
..SyncOptions::default()
};
let config = dummy_config();
let signal = ShutdownSignal::new();
let err = run_sync_surgical(&config, options, Some("test04"), &signal)
.await
.unwrap_err();
assert!(err.to_string().contains("--project"));
}
#[tokio::test]
async fn empty_iids_returns_default_result() {
let config = dummy_config();
let options = SyncOptions::default(); // No IIDs
let signal = ShutdownSignal::new();
let result = run_sync_surgical(&config, options, None, &signal)
.await
.unwrap();
assert_eq!(result.issues_updated, 0);
assert_eq!(result.mrs_updated, 0);
assert!(result.surgical_mode.is_none()); // Not surgical mode
}

src/cli/commands/tui.rs (new file, +121 lines)

@@ -0,0 +1,121 @@
//! `lore tui` subcommand — delegates to the `lore-tui` binary.
//!
//! Resolves `lore-tui` via PATH and execs it, replacing the current process.
//! In robot mode, returns a structured JSON error (TUI is human-only).
use std::path::PathBuf;
use clap::Parser;
/// Launch the interactive TUI dashboard
#[derive(Parser, Debug)]
pub struct TuiArgs {
/// Path to config file (forwarded to lore-tui)
#[arg(long)]
pub config: Option<String>,
}
/// Resolve the `lore-tui` binary via PATH lookup.
pub fn find_lore_tui() -> Option<PathBuf> {
which::which("lore-tui").ok()
}
/// Run the TUI subcommand.
///
/// In robot mode this returns an error (TUI requires a terminal).
/// Otherwise it execs `lore-tui`, replacing the current process.
pub fn run_tui(args: &TuiArgs, robot_mode: bool) -> Result<(), Box<dyn std::error::Error>> {
if robot_mode {
let err = serde_json::json!({
"error": {
"code": "TUI_NOT_AVAILABLE",
"message": "The TUI requires an interactive terminal and cannot run in robot mode.",
"suggestion": "Use `lore --robot <command>` for programmatic access.",
"actions": []
}
});
eprintln!("{err}");
std::process::exit(2);
}
let binary = find_lore_tui().ok_or_else(|| {
"Could not find `lore-tui` on PATH.\n\n\
Install it with:\n \
cargo install --path crates/lore-tui\n\n\
Or build the workspace:\n \
cargo build --release -p lore-tui"
.to_string()
})?;
// Build the command with explicit arguments (no shell interpolation).
let mut cmd = std::process::Command::new(&binary);
if let Some(ref config) = args.config {
cmd.arg("--config").arg(config);
}
// On Unix, exec() replaces the current process entirely.
// This gives lore-tui direct terminal control (stdin/stdout/stderr).
#[cfg(unix)]
{
use std::os::unix::process::CommandExt;
let err = cmd.exec();
// exec() only returns on error
Err(format!("Failed to exec lore-tui at {}: {err}", binary.display()).into())
}
// On non-Unix, spawn and wait.
#[cfg(not(unix))]
{
let status = cmd.status()?;
if status.success() {
Ok(())
} else {
std::process::exit(status.code().unwrap_or(1));
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_find_lore_tui_does_not_panic() {
// Just verify the lookup doesn't panic; it may or may not find the binary.
let _ = find_lore_tui();
}
#[test]
fn test_robot_mode_error_json_structure() {
let err = serde_json::json!({
"error": {
"code": "TUI_NOT_AVAILABLE",
"message": "The TUI requires an interactive terminal and cannot run in robot mode.",
"suggestion": "Use `lore --robot <command>` for programmatic access.",
"actions": []
}
});
let parsed: serde_json::Value = serde_json::from_str(&err.to_string()).unwrap();
assert_eq!(parsed["error"]["code"], "TUI_NOT_AVAILABLE");
}
#[test]
fn test_tui_args_default() {
let args = TuiArgs { config: None };
assert!(args.config.is_none());
}
#[test]
fn test_tui_args_with_config() {
let args = TuiArgs {
config: Some("/tmp/test.json".into()),
};
assert_eq!(args.config.as_deref(), Some("/tmp/test.json"));
}
#[test]
fn test_binary_not_found_error_message() {
let msg = "Could not find `lore-tui` on PATH.";
assert!(msg.contains("lore-tui"));
}
}

File diff suppressed because it is too large


@@ -0,0 +1,696 @@
use serde::Serialize;
use crate::cli::WhoArgs;
use crate::cli::render::{self, Icons, Theme};
use crate::cli::robot::RobotMeta;
use crate::core::time::ms_to_iso;
use crate::core::who_types::{
ActiveResult, ExpertResult, OverlapResult, ReviewsResult, WhoResult, WorkloadResult,
};
use super::WhoRun;
use super::queries::format_overlap_role;
// ─── Human Output ────────────────────────────────────────────────────────────
pub fn print_who_human(result: &WhoResult, project_path: Option<&str>) {
match result {
WhoResult::Expert(r) => print_expert_human(r, project_path),
WhoResult::Workload(r) => print_workload_human(r),
WhoResult::Reviews(r) => print_reviews_human(r),
WhoResult::Active(r) => print_active_human(r, project_path),
WhoResult::Overlap(r) => print_overlap_human(r, project_path),
}
}
/// Print a dim hint when results aggregate across all projects.
fn print_scope_hint(project_path: Option<&str>) {
if project_path.is_none() {
println!(
" {}",
Theme::dim().render("(aggregated across all projects; use -p to scope)")
);
}
}
fn print_expert_human(r: &ExpertResult, project_path: Option<&str>) {
println!();
println!(
"{}",
Theme::bold().render(&format!("Experts for {}", r.path_query))
);
println!("{}", "\u{2500}".repeat(60));
println!(
" {}",
Theme::dim().render(&format!(
"(matching {} {})",
r.path_match,
if r.path_match == "exact" {
"file"
} else {
"directory prefix"
}
))
);
print_scope_hint(project_path);
println!();
if r.experts.is_empty() {
println!(
" {}",
Theme::dim().render("No experts found for this path.")
);
println!();
return;
}
println!(
" {:<16} {:>6} {:>12} {:>6} {:>12} {} {}",
Theme::bold().render("Username"),
Theme::bold().render("Score"),
Theme::bold().render("Reviewed(MRs)"),
Theme::bold().render("Notes"),
Theme::bold().render("Authored(MRs)"),
Theme::bold().render("Last Seen"),
Theme::bold().render("MR Refs"),
);
for expert in &r.experts {
let reviews = if expert.review_mr_count > 0 {
expert.review_mr_count.to_string()
} else {
"-".to_string()
};
let notes = if expert.review_note_count > 0 {
expert.review_note_count.to_string()
} else {
"-".to_string()
};
let authored = if expert.author_mr_count > 0 {
expert.author_mr_count.to_string()
} else {
"-".to_string()
};
let mr_str = expert
.mr_refs
.iter()
.take(5)
.cloned()
.collect::<Vec<_>>()
.join(", ");
let overflow = if expert.mr_refs_total > 5 {
format!(" +{}", expert.mr_refs_total - 5)
} else {
String::new()
};
println!(
" {:<16} {:>6} {:>12} {:>6} {:>12} {:<12}{}{}",
Theme::info().render(&format!("{} {}", Icons::user(), expert.username)),
expert.score,
reviews,
notes,
authored,
render::format_relative_time(expert.last_seen_ms),
if mr_str.is_empty() {
String::new()
} else {
format!(" {mr_str}")
},
overflow,
);
// Print detail sub-rows when populated
if let Some(details) = &expert.details {
const MAX_DETAIL_DISPLAY: usize = 10;
for d in details.iter().take(MAX_DETAIL_DISPLAY) {
let notes_str = if d.note_count > 0 {
format!("{} notes", d.note_count)
} else {
String::new()
};
println!(
" {:<3} {:<30} {:>30} {:>10} {}",
Theme::dim().render(&d.role),
d.mr_ref,
render::truncate(&format!("\"{}\"", d.title), 30),
notes_str,
Theme::dim().render(&render::format_relative_time(d.last_activity_ms)),
);
}
if details.len() > MAX_DETAIL_DISPLAY {
println!(
" {}",
Theme::dim().render(&format!("+{} more", details.len() - MAX_DETAIL_DISPLAY))
);
}
}
}
if r.truncated {
println!(
" {}",
Theme::dim().render("(showing first -n; rerun with a higher --limit)")
);
}
println!();
}
fn print_workload_human(r: &WorkloadResult) {
println!();
println!(
"{}",
Theme::bold().render(&format!(
"{} {} -- Workload Summary",
Icons::user(),
r.username
))
);
println!("{}", "\u{2500}".repeat(60));
if !r.assigned_issues.is_empty() {
println!(
"{}",
render::section_divider(&format!("Assigned Issues ({})", r.assigned_issues.len()))
);
for item in &r.assigned_issues {
println!(
" {} {} {}",
Theme::info().render(&item.ref_),
render::truncate(&item.title, 40),
Theme::dim().render(&render::format_relative_time(item.updated_at)),
);
}
if r.assigned_issues_truncated {
println!(
" {}",
Theme::dim().render("(truncated; rerun with a higher --limit)")
);
}
}
if !r.authored_mrs.is_empty() {
println!(
"{}",
render::section_divider(&format!("Authored MRs ({})", r.authored_mrs.len()))
);
for mr in &r.authored_mrs {
let draft = if mr.draft { " [draft]" } else { "" };
println!(
" {} {}{} {}",
Theme::info().render(&mr.ref_),
render::truncate(&mr.title, 35),
Theme::dim().render(draft),
Theme::dim().render(&render::format_relative_time(mr.updated_at)),
);
}
if r.authored_mrs_truncated {
println!(
" {}",
Theme::dim().render("(truncated; rerun with a higher --limit)")
);
}
}
if !r.reviewing_mrs.is_empty() {
println!(
"{}",
render::section_divider(&format!("Reviewing MRs ({})", r.reviewing_mrs.len()))
);
for mr in &r.reviewing_mrs {
let author = mr
.author_username
.as_deref()
.map(|a| format!(" by @{a}"))
.unwrap_or_default();
println!(
" {} {}{} {}",
Theme::info().render(&mr.ref_),
render::truncate(&mr.title, 30),
Theme::dim().render(&author),
Theme::dim().render(&render::format_relative_time(mr.updated_at)),
);
}
if r.reviewing_mrs_truncated {
println!(
" {}",
Theme::dim().render("(truncated; rerun with a higher --limit)")
);
}
}
if !r.unresolved_discussions.is_empty() {
println!(
"{}",
render::section_divider(&format!(
"Unresolved Discussions ({})",
r.unresolved_discussions.len()
))
);
for disc in &r.unresolved_discussions {
println!(
" {} {} {} {}",
Theme::dim().render(&disc.entity_type),
Theme::info().render(&disc.ref_),
render::truncate(&disc.entity_title, 35),
Theme::dim().render(&render::format_relative_time(disc.last_note_at)),
);
}
if r.unresolved_discussions_truncated {
println!(
" {}",
Theme::dim().render("(truncated; rerun with a higher --limit)")
);
}
}
if r.assigned_issues.is_empty()
&& r.authored_mrs.is_empty()
&& r.reviewing_mrs.is_empty()
&& r.unresolved_discussions.is_empty()
{
println!();
println!(
" {}",
Theme::dim().render("No open work items found for this user.")
);
}
println!();
}
fn print_reviews_human(r: &ReviewsResult) {
println!();
println!(
"{}",
Theme::bold().render(&format!(
"{} {} -- Review Patterns",
Icons::user(),
r.username
))
);
println!("{}", "\u{2500}".repeat(60));
println!();
if r.total_diffnotes == 0 {
println!(
" {}",
Theme::dim().render("No review comments found for this user.")
);
println!();
return;
}
println!(
" {} DiffNotes across {} MRs ({} categorized)",
Theme::bold().render(&r.total_diffnotes.to_string()),
Theme::bold().render(&r.mrs_reviewed.to_string()),
Theme::bold().render(&r.categorized_count.to_string()),
);
println!();
if !r.categories.is_empty() {
println!(
" {:<16} {:>6} {:>6}",
Theme::bold().render("Category"),
Theme::bold().render("Count"),
Theme::bold().render("%"),
);
for cat in &r.categories {
println!(
" {:<16} {:>6} {:>5.1}%",
Theme::info().render(&cat.name),
cat.count,
cat.percentage,
);
}
}
let uncategorized = r.total_diffnotes - r.categorized_count;
if uncategorized > 0 {
println!();
println!(
" {} {} uncategorized (no **prefix** convention)",
Theme::dim().render("Note:"),
uncategorized,
);
}
println!();
}
fn print_active_human(r: &ActiveResult, project_path: Option<&str>) {
println!();
println!(
"{}",
Theme::bold().render(&format!(
"Active Discussions ({} unresolved in window)",
r.total_unresolved_in_window
))
);
println!("{}", "\u{2500}".repeat(60));
print_scope_hint(project_path);
println!();
if r.discussions.is_empty() {
println!(
" {}",
Theme::dim().render("No active unresolved discussions in this time window.")
);
println!();
return;
}
for disc in &r.discussions {
let prefix = if disc.entity_type == "MR" { "!" } else { "#" };
let participants_str = disc
.participants
.iter()
.map(|p| format!("@{p}"))
.collect::<Vec<_>>()
.join(", ");
println!(
" {} {} {} {} notes {}",
Theme::info().render(&format!("{prefix}{}", disc.entity_iid)),
render::truncate(&disc.entity_title, 40),
Theme::dim().render(&render::format_relative_time(disc.last_note_at)),
disc.note_count,
Theme::dim().render(&disc.project_path),
);
if !participants_str.is_empty() {
println!(" {}", Theme::dim().render(&participants_str));
}
}
if r.truncated {
println!(
" {}",
Theme::dim().render("(showing first -n; rerun with a higher --limit)")
);
}
println!();
}
fn print_overlap_human(r: &OverlapResult, project_path: Option<&str>) {
println!();
println!(
"{}",
Theme::bold().render(&format!("Overlap for {}", r.path_query))
);
println!("{}", "\u{2500}".repeat(60));
println!(
" {}",
Theme::dim().render(&format!(
"(matching {} {})",
r.path_match,
if r.path_match == "exact" {
"file"
} else {
"directory prefix"
}
))
);
print_scope_hint(project_path);
println!();
if r.users.is_empty() {
println!(
" {}",
Theme::dim().render("No overlapping users found for this path.")
);
println!();
return;
}
println!(
" {:<16} {:<6} {:>7} {:<12} {}",
Theme::bold().render("Username"),
Theme::bold().render("Role"),
Theme::bold().render("MRs"),
Theme::bold().render("Last Seen"),
Theme::bold().render("MR Refs"),
);
for user in &r.users {
let mr_str = user
.mr_refs
.iter()
.take(5)
.cloned()
.collect::<Vec<_>>()
.join(", ");
let overflow = if user.mr_refs.len() > 5 {
format!(" +{}", user.mr_refs.len() - 5)
} else {
String::new()
};
println!(
" {:<16} {:<6} {:>7} {:<12} {}{}",
Theme::info().render(&format!("{} {}", Icons::user(), user.username)),
format_overlap_role(user),
user.touch_count,
render::format_relative_time(user.last_seen_at),
mr_str,
overflow,
);
}
if r.truncated {
println!(
" {}",
Theme::dim().render("(showing first -n; rerun with a higher --limit)")
);
}
println!();
}
// ─── Robot JSON Output ───────────────────────────────────────────────────────
pub fn print_who_json(run: &WhoRun, args: &WhoArgs, elapsed_ms: u64) {
let (mode, data) = match &run.result {
WhoResult::Expert(r) => ("expert", expert_to_json(r)),
WhoResult::Workload(r) => ("workload", workload_to_json(r)),
WhoResult::Reviews(r) => ("reviews", reviews_to_json(r)),
WhoResult::Active(r) => ("active", active_to_json(r)),
WhoResult::Overlap(r) => ("overlap", overlap_to_json(r)),
};
// Raw CLI args -- what the user typed
let input = serde_json::json!({
"target": args.target,
"path": args.path,
"project": args.project,
"since": args.since,
"limit": args.limit,
"detail": args.detail,
"as_of": args.as_of,
"explain_score": args.explain_score,
"include_bots": args.include_bots,
"all_history": args.all_history,
});
// Resolved/computed values -- what actually ran
let resolved_input = serde_json::json!({
"mode": run.resolved_input.mode,
"project_id": run.resolved_input.project_id,
"project_path": run.resolved_input.project_path,
"since_ms": run.resolved_input.since_ms,
"since_iso": run.resolved_input.since_iso,
"since_mode": run.resolved_input.since_mode,
"limit": run.resolved_input.limit,
});
let output = WhoJsonEnvelope {
ok: true,
data: WhoJsonData {
mode: mode.to_string(),
input,
resolved_input,
result: data,
},
meta: RobotMeta { elapsed_ms },
};
let mut value = serde_json::to_value(&output).unwrap_or_else(|e| {
serde_json::json!({"ok":false,"error":{"code":"INTERNAL_ERROR","message":format!("JSON serialization failed: {e}")}})
});
if let Some(f) = &args.fields {
let preset_key = format!("who_{mode}");
let expanded = crate::cli::robot::expand_fields_preset(f, &preset_key);
// Each who mode uses a different array key; try all possible keys
for key in &[
"experts",
"assigned_issues",
"authored_mrs",
"review_mrs",
"categories",
"discussions",
"users",
] {
crate::cli::robot::filter_fields(&mut value, key, &expanded);
}
}
println!("{}", serde_json::to_string(&value).unwrap());
}
#[derive(Serialize)]
struct WhoJsonEnvelope {
ok: bool,
data: WhoJsonData,
meta: RobotMeta,
}
#[derive(Serialize)]
struct WhoJsonData {
mode: String,
input: serde_json::Value,
resolved_input: serde_json::Value,
#[serde(flatten)]
result: serde_json::Value,
}
fn expert_to_json(r: &ExpertResult) -> serde_json::Value {
serde_json::json!({
"path_query": r.path_query,
"path_match": r.path_match,
"scoring_model_version": 2,
"truncated": r.truncated,
"experts": r.experts.iter().map(|e| {
let mut obj = serde_json::json!({
"username": e.username,
"score": e.score,
"review_mr_count": e.review_mr_count,
"review_note_count": e.review_note_count,
"author_mr_count": e.author_mr_count,
"last_seen_at": ms_to_iso(e.last_seen_ms),
"mr_refs": e.mr_refs,
"mr_refs_total": e.mr_refs_total,
"mr_refs_truncated": e.mr_refs_truncated,
});
if let Some(raw) = e.score_raw {
obj["score_raw"] = serde_json::json!(raw);
}
if let Some(comp) = &e.components {
obj["components"] = serde_json::json!({
"author": comp.author,
"reviewer_participated": comp.reviewer_participated,
"reviewer_assigned": comp.reviewer_assigned,
"notes": comp.notes,
});
}
if let Some(details) = &e.details {
obj["details"] = serde_json::json!(details.iter().map(|d| serde_json::json!({
"mr_ref": d.mr_ref,
"title": d.title,
"role": d.role,
"note_count": d.note_count,
"last_activity_at": ms_to_iso(d.last_activity_ms),
})).collect::<Vec<_>>());
}
obj
}).collect::<Vec<_>>(),
})
}
fn workload_to_json(r: &WorkloadResult) -> serde_json::Value {
serde_json::json!({
"username": r.username,
"assigned_issues": r.assigned_issues.iter().map(|i| serde_json::json!({
"iid": i.iid,
"ref": i.ref_,
"title": i.title,
"project_path": i.project_path,
"updated_at": ms_to_iso(i.updated_at),
})).collect::<Vec<_>>(),
"authored_mrs": r.authored_mrs.iter().map(|m| serde_json::json!({
"iid": m.iid,
"ref": m.ref_,
"title": m.title,
"draft": m.draft,
"project_path": m.project_path,
"updated_at": ms_to_iso(m.updated_at),
})).collect::<Vec<_>>(),
"reviewing_mrs": r.reviewing_mrs.iter().map(|m| serde_json::json!({
"iid": m.iid,
"ref": m.ref_,
"title": m.title,
"draft": m.draft,
"project_path": m.project_path,
"author_username": m.author_username,
"updated_at": ms_to_iso(m.updated_at),
})).collect::<Vec<_>>(),
"unresolved_discussions": r.unresolved_discussions.iter().map(|d| serde_json::json!({
"entity_type": d.entity_type,
"entity_iid": d.entity_iid,
"ref": d.ref_,
"entity_title": d.entity_title,
"project_path": d.project_path,
"last_note_at": ms_to_iso(d.last_note_at),
})).collect::<Vec<_>>(),
"summary": {
"assigned_issue_count": r.assigned_issues.len(),
"authored_mr_count": r.authored_mrs.len(),
"reviewing_mr_count": r.reviewing_mrs.len(),
"unresolved_discussion_count": r.unresolved_discussions.len(),
},
"truncation": {
"assigned_issues_truncated": r.assigned_issues_truncated,
"authored_mrs_truncated": r.authored_mrs_truncated,
"reviewing_mrs_truncated": r.reviewing_mrs_truncated,
"unresolved_discussions_truncated": r.unresolved_discussions_truncated,
}
})
}
fn reviews_to_json(r: &ReviewsResult) -> serde_json::Value {
serde_json::json!({
"username": r.username,
"total_diffnotes": r.total_diffnotes,
"categorized_count": r.categorized_count,
"mrs_reviewed": r.mrs_reviewed,
"categories": r.categories.iter().map(|c| serde_json::json!({
"name": c.name,
"count": c.count,
"percentage": (c.percentage * 10.0).round() / 10.0,
})).collect::<Vec<_>>(),
})
}
fn active_to_json(r: &ActiveResult) -> serde_json::Value {
serde_json::json!({
"total_unresolved_in_window": r.total_unresolved_in_window,
"truncated": r.truncated,
"discussions": r.discussions.iter().map(|d| serde_json::json!({
"discussion_id": d.discussion_id,
"entity_type": d.entity_type,
"entity_iid": d.entity_iid,
"entity_title": d.entity_title,
"project_path": d.project_path,
"last_note_at": ms_to_iso(d.last_note_at),
"note_count": d.note_count,
"participants": d.participants,
"participants_total": d.participants_total,
"participants_truncated": d.participants_truncated,
})).collect::<Vec<_>>(),
})
}
fn overlap_to_json(r: &OverlapResult) -> serde_json::Value {
serde_json::json!({
"path_query": r.path_query,
"path_match": r.path_match,
"truncated": r.truncated,
"users": r.users.iter().map(|u| serde_json::json!({
"username": u.username,
"role": format_overlap_role(u),
"author_touch_count": u.author_touch_count,
"review_touch_count": u.review_touch_count,
"touch_count": u.touch_count,
"last_seen_at": ms_to_iso(u.last_seen_at),
"mr_refs": u.mr_refs,
"mr_refs_total": u.mr_refs_total,
"mr_refs_truncated": u.mr_refs_truncated,
})).collect::<Vec<_>>(),
})
}
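Because `WhoJsonData.result` is `#[serde(flatten)]`ed, the per-mode keys from the builders above are merged directly into `data`, alongside `mode`, `input`, and `resolved_input`. A hypothetical, abridged expert-mode envelope (values illustrative, not captured output; `input`/`resolved_input` show only a subset of their fields):

```json
{
  "ok": true,
  "data": {
    "mode": "expert",
    "input": { "target": "expert", "path": "src/auth/", "limit": 10 },
    "resolved_input": { "mode": "expert", "limit": 10 },
    "path_query": "src/auth/",
    "path_match": "prefix",
    "scoring_model_version": 2,
    "truncated": false,
    "experts": []
  },
  "meta": { "elapsed_ms": 12 }
}
```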

File diff suppressed because it is too large


@@ -0,0 +1,20 @@
// ─── Scoring Helpers ─────────────────────────────────────────────────────────
/// Exponential half-life decay: `2^(-days / half_life)`.
///
/// Returns a value in `[0.0, 1.0]` representing how much of an original signal
/// is retained after `elapsed_ms` milliseconds, given a `half_life_days` period.
/// At `elapsed=0` the signal is fully retained (1.0); at `elapsed=half_life`
/// exactly half remains (0.5); the signal halves again for each additional
/// half-life period.
///
/// Returns `0.0` when `half_life_days` is zero (prevents division by zero).
/// Negative elapsed values are clamped to zero (future events retain full weight).
pub fn half_life_decay(elapsed_ms: i64, half_life_days: u32) -> f64 {
let days = (elapsed_ms as f64 / 86_400_000.0).max(0.0);
let hl = f64::from(half_life_days);
if hl <= 0.0 {
return 0.0;
}
2.0_f64.powf(-days / hl)
}
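The doc comment pins down the boundary behavior; a standalone sketch checking those points (self-contained copy of the function above, std only):

```rust
// Copy of half_life_decay for a standalone check; matches the module version.
fn half_life_decay(elapsed_ms: i64, half_life_days: u32) -> f64 {
    let days = (elapsed_ms as f64 / 86_400_000.0).max(0.0);
    let hl = f64::from(half_life_days);
    if hl <= 0.0 {
        return 0.0;
    }
    2.0_f64.powf(-days / hl)
}

fn main() {
    const DAY_MS: i64 = 86_400_000;
    // Fresh signal: full weight.
    assert!((half_life_decay(0, 7) - 1.0).abs() < 1e-12);
    // Exactly one half-life elapsed: half the weight remains.
    assert!((half_life_decay(7 * DAY_MS, 7) - 0.5).abs() < 1e-12);
    // Two half-lives: a quarter.
    assert!((half_life_decay(14 * DAY_MS, 7) - 0.25).abs() < 1e-12);
    // Zero half-life: guarded to 0.0.
    assert_eq!(half_life_decay(0, 0), 0.0);
    // Negative elapsed (future event) is clamped to full weight.
    assert!((half_life_decay(-DAY_MS, 7) - 1.0).abs() < 1e-12);
}
```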


@@ -7,6 +7,8 @@ pub mod robot;
use clap::{Parser, Subcommand};
use std::io::IsTerminal;
use commands::tui::TuiArgs;
#[derive(Parser)]
#[command(name = "lore")]
#[command(version = env!("LORE_VERSION"), about = "Local GitLab data management with semantic search", long_about = None)]
@@ -241,6 +243,49 @@ pub enum Commands {
/// Trace why code was introduced: file -> MR -> issue -> discussion
Trace(TraceArgs),
/// Launch the interactive TUI dashboard
Tui(TuiArgs),
/// Find semantically related entities via vector similarity
#[command(visible_alias = "similar")]
Related(RelatedArgs),
/// Situational awareness: open issues, active MRs, experts, activity, threads
Brief {
/// Free-text topic, entity type, or omit for project-wide brief
query: Option<String>,
/// Focus on a file path (who expert mode)
#[arg(long)]
path: Option<String>,
/// Focus on a person (who workload mode)
#[arg(long)]
person: Option<String>,
/// Scope to project (fuzzy match)
#[arg(short, long)]
project: Option<String>,
/// Maximum items per section
#[arg(long, default_value = "5")]
section_limit: usize,
},
/// Auto-generate a structured narrative for an issue or MR
Explain {
/// Entity type: "issues" or "mrs"
#[arg(value_parser = ["issues", "issue", "mrs", "mr"])]
entity_type: String,
/// Entity IID
iid: i64,
/// Scope to project (fuzzy match)
#[arg(short, long)]
project: Option<String>,
},
/// Detect discussion divergence from original intent
Drift {
/// Entity type (currently only "issues" supported)
@@ -795,6 +840,10 @@ pub struct SyncArgs {
#[arg(long = "no-status")]
pub no_status: bool,
/// Skip issue link fetching (overrides config)
#[arg(long = "no-issue-links")]
pub no_issue_links: bool,
/// Preview what would be synced without making changes
#[arg(long, overrides_with = "no_dry_run")]
pub dry_run: bool,
@@ -805,6 +854,26 @@ pub struct SyncArgs {
/// Show detailed timing breakdown for sync stages
#[arg(short = 't', long = "timings")]
pub timings: bool,
/// Show sync progress in interactive TUI
#[arg(long)]
pub tui: bool,
/// Surgically sync specific issues by IID (repeatable)
#[arg(long, value_parser = clap::value_parser!(u64).range(1..))]
pub issue: Vec<u64>,
/// Surgically sync specific merge requests by IID (repeatable)
#[arg(long, value_parser = clap::value_parser!(u64).range(1..))]
pub mr: Vec<u64>,
/// Scope to a single project (required for surgical sync if no defaultProject)
#[arg(short = 'p', long)]
pub project: Option<String>,
/// Run preflight validation only (no DB writes). Requires --issue or --mr.
#[arg(long)]
pub preflight_only: bool,
}
#[derive(Parser)]
@@ -1045,10 +1114,36 @@ pub struct TraceArgs {
pub limit: usize,
}
#[derive(Parser)]
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore related issues 42 # Find issues similar to #42
lore related mrs 99 -p group/repo # MRs similar to !99
lore related 'authentication timeout' # Concept search")]
pub struct RelatedArgs {
/// Entity type ('issues' or 'mrs') OR free-text query
pub query_or_type: String,
/// Entity IID (when first arg is entity type)
pub iid: Option<i64>,
/// Maximum results
#[arg(
short = 'n',
long = "limit",
default_value = "10",
help_heading = "Output"
)]
pub limit: usize,
/// Scope to project (fuzzy match)
#[arg(short = 'p', long, help_heading = "Filters")]
pub project: Option<String>,
}
#[derive(Parser)]
pub struct CountArgs {
/// Entity type to count (issues, mrs, discussions, notes, events)
#[arg(value_parser = ["issues", "mrs", "discussions", "notes", "events"])]
/// Entity type to count (issues, mrs, discussions, notes, events, references)
#[arg(value_parser = ["issues", "mrs", "discussions", "notes", "events", "references"])]
pub entity: String,
/// Parent type filter: issue or mr (for discussions/notes)


@@ -55,6 +55,9 @@ pub struct SyncConfig {
#[serde(rename = "fetchWorkItemStatus", default = "default_true")]
pub fetch_work_item_status: bool,
#[serde(rename = "fetchIssueLinks", default = "default_true")]
pub fetch_issue_links: bool,
}
fn default_true() -> bool {
@@ -74,6 +77,7 @@ impl Default for SyncConfig {
fetch_resource_events: true,
fetch_mr_file_changes: true,
fetch_work_item_status: true,
fetch_issue_links: true,
}
}
}


@@ -93,6 +93,14 @@ const MIGRATIONS: &[(&str, &str)] = &[
"027",
include_str!("../../migrations/027_tui_list_indexes.sql"),
),
(
"028",
include_str!("../../migrations/028_surgical_sync_runs.sql"),
),
(
"029",
include_str!("../../migrations/029_issue_links_job_type.sql"),
),
];
pub fn create_connection(db_path: &Path) -> Result<Connection> {


@@ -21,6 +21,7 @@ pub enum ErrorCode {
EmbeddingFailed,
NotFound,
Ambiguous,
SurgicalPreflightFailed,
}
impl std::fmt::Display for ErrorCode {
@@ -44,6 +45,7 @@ impl std::fmt::Display for ErrorCode {
Self::EmbeddingFailed => "EMBEDDING_FAILED",
Self::NotFound => "NOT_FOUND",
Self::Ambiguous => "AMBIGUOUS",
Self::SurgicalPreflightFailed => "SURGICAL_PREFLIGHT_FAILED",
};
write!(f, "{code}")
}
@@ -70,6 +72,7 @@ impl ErrorCode {
Self::EmbeddingFailed => 16,
Self::NotFound => 17,
Self::Ambiguous => 18,
Self::SurgicalPreflightFailed => 6,
}
}
}
@@ -153,6 +156,14 @@ pub enum LoreError {
#[error("No embeddings found. Run: lore embed")]
EmbeddingsNotBuilt,
#[error("Surgical preflight failed for {entity_type} !{iid} in {project}: {reason}")]
SurgicalPreflightFailed {
entity_type: String,
iid: u64,
project: String,
reason: String,
},
}
impl LoreError {
@@ -179,6 +190,7 @@ impl LoreError {
Self::OllamaModelNotFound { .. } => ErrorCode::OllamaModelNotFound,
Self::EmbeddingFailed { .. } => ErrorCode::EmbeddingFailed,
Self::EmbeddingsNotBuilt => ErrorCode::EmbeddingFailed,
Self::SurgicalPreflightFailed { .. } => ErrorCode::SurgicalPreflightFailed,
}
}
@@ -227,6 +239,9 @@ impl LoreError {
Some("Check Ollama logs or retry with 'lore embed --retry-failed'")
}
Self::EmbeddingsNotBuilt => Some("Generate embeddings first: lore embed"),
Self::SurgicalPreflightFailed { .. } => Some(
"Verify the IID exists and you have access to the project.\n\n Example:\n lore issues -p <project>\n lore mrs -p <project>",
),
Self::Json(_) | Self::Io(_) | Self::Transform(_) | Self::Other(_) => None,
}
}
@@ -254,6 +269,9 @@ impl LoreError {
Self::EmbeddingFailed { .. } => vec!["lore embed --retry-failed"],
Self::MigrationFailed { .. } => vec!["lore migrate"],
Self::GitLabNetworkError { .. } => vec!["lore doctor"],
Self::SurgicalPreflightFailed { .. } => {
vec!["lore issues -p <project>", "lore mrs -p <project>"]
}
_ => vec![],
}
}
@@ -293,3 +311,72 @@ impl From<&LoreError> for RobotErrorOutput {
}
pub type Result<T> = std::result::Result<T, LoreError>;
// ---------------------------------------------------------------------------
// Tests
// ---------------------------------------------------------------------------
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn surgical_preflight_failed_display() {
let err = LoreError::SurgicalPreflightFailed {
entity_type: "issue".to_string(),
iid: 42,
project: "group/repo".to_string(),
reason: "not found on GitLab".to_string(),
};
let msg = err.to_string();
assert!(msg.contains("issue"), "missing entity_type: {msg}");
assert!(msg.contains("42"), "missing iid: {msg}");
assert!(msg.contains("group/repo"), "missing project: {msg}");
assert!(msg.contains("not found on GitLab"), "missing reason: {msg}");
}
#[test]
fn surgical_preflight_failed_error_code() {
let code = ErrorCode::SurgicalPreflightFailed;
assert_eq!(code.exit_code(), 6);
}
#[test]
fn surgical_preflight_failed_code_mapping() {
let err = LoreError::SurgicalPreflightFailed {
entity_type: "merge_request".to_string(),
iid: 99,
project: "ns/proj".to_string(),
reason: "404".to_string(),
};
assert_eq!(err.code(), ErrorCode::SurgicalPreflightFailed);
}
#[test]
fn surgical_preflight_failed_has_suggestion() {
let err = LoreError::SurgicalPreflightFailed {
entity_type: "issue".to_string(),
iid: 7,
project: "g/p".to_string(),
reason: "not found".to_string(),
};
assert!(err.suggestion().is_some());
}
#[test]
fn surgical_preflight_failed_has_actions() {
let err = LoreError::SurgicalPreflightFailed {
entity_type: "issue".to_string(),
iid: 7,
project: "g/p".to_string(),
reason: "not found".to_string(),
};
assert!(!err.actions().is_empty());
}
#[test]
fn surgical_preflight_failed_display_code_string() {
let code = ErrorCode::SurgicalPreflightFailed;
assert_eq!(code.to_string(), "SURGICAL_PREFLIGHT_FAILED");
}
}


@@ -20,6 +20,67 @@ impl SyncRunRecorder {
Ok(Self { row_id })
}
/// Returns the database row ID for this sync run.
pub fn row_id(&self) -> i64 {
self.row_id
}
/// Set surgical-specific metadata after `start()`.
///
/// Takes `&self` so the recorder can continue to be used for phase
/// updates and entity result recording before finalization.
pub fn set_surgical_metadata(
&self,
conn: &Connection,
mode: &str,
phase: &str,
iids_json: &str,
) -> Result<()> {
conn.execute(
"UPDATE sync_runs SET mode = ?1, phase = ?2, surgical_iids_json = ?3
WHERE id = ?4",
rusqlite::params![mode, phase, iids_json, self.row_id],
)?;
Ok(())
}
/// Update the pipeline phase and refresh the heartbeat timestamp.
pub fn update_phase(&self, conn: &Connection, phase: &str) -> Result<()> {
conn.execute(
"UPDATE sync_runs SET phase = ?1, heartbeat_at = ?2 WHERE id = ?3",
rusqlite::params![phase, now_ms(), self.row_id],
)?;
Ok(())
}
/// Increment a surgical counter column for the given entity type and stage.
///
/// Unknown `(entity_type, stage)` combinations are silently ignored.
/// Column names are derived from a hardcoded match — no SQL injection risk.
pub fn record_entity_result(
&self,
conn: &Connection,
entity_type: &str,
stage: &str,
) -> Result<()> {
let column = match (entity_type, stage) {
("issue", "fetched") => "issues_fetched",
("issue", "ingested") => "issues_ingested",
("mr", "fetched") => "mrs_fetched",
("mr", "ingested") => "mrs_ingested",
("issue" | "mr", "skipped_stale") => "skipped_stale",
("doc", "regenerated") => "docs_regenerated",
("doc", "embedded") => "docs_embedded",
(_, "warning") => "warnings_count",
_ => return Ok(()),
};
conn.execute(
&format!("UPDATE sync_runs SET {column} = {column} + 1 WHERE id = ?1"),
rusqlite::params![self.row_id],
)?;
Ok(())
}
pub fn succeed(
self,
conn: &Connection,
@@ -63,6 +124,18 @@ impl SyncRunRecorder {
)?;
Ok(())
}
/// Finalize the run as cancelled. Consumes self to prevent further use.
pub fn cancel(self, conn: &Connection, reason: &str) -> Result<()> {
let now = now_ms();
conn.execute(
"UPDATE sync_runs SET finished_at = ?1, cancelled_at = ?2,
status = 'cancelled', error = ?3
WHERE id = ?4",
rusqlite::params![now, now, reason, self.row_id],
)?;
Ok(())
}
}
#[cfg(test)]


@@ -146,3 +146,247 @@ fn test_sync_run_recorder_fail_with_partial_metrics() {
assert_eq!(parsed.len(), 1);
assert_eq!(parsed[0].name, "ingest_issues");
}
// ---------------------------------------------------------------------------
// Migration 028: Surgical sync columns
// ---------------------------------------------------------------------------
#[test]
fn sync_run_surgical_columns_exist() {
let conn = setup_test_db();
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, status, command, mode, phase, surgical_iids_json)
VALUES (1000, 1000, 'running', 'sync', 'surgical', 'preflight', '{\"issues\":[7],\"mrs\":[101]}')",
[],
)
.unwrap();
let (mode, phase, iids_json): (String, String, String) = conn
.query_row(
"SELECT mode, phase, surgical_iids_json FROM sync_runs WHERE mode = 'surgical'",
[],
|r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
)
.unwrap();
assert_eq!(mode, "surgical");
assert_eq!(phase, "preflight");
assert!(iids_json.contains("7"));
}
#[test]
fn sync_run_counter_defaults_are_zero() {
let conn = setup_test_db();
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, status, command)
VALUES (2000, 2000, 'running', 'sync')",
[],
)
.unwrap();
let row_id = conn.last_insert_rowid();
let (issues_fetched, mrs_fetched, docs_regenerated, warnings_count): (i64, i64, i64, i64) =
conn.query_row(
"SELECT issues_fetched, mrs_fetched, docs_regenerated, warnings_count FROM sync_runs WHERE id = ?1",
[row_id],
|r| Ok((r.get(0)?, r.get(1)?, r.get(2)?, r.get(3)?)),
)
.unwrap();
assert_eq!(issues_fetched, 0);
assert_eq!(mrs_fetched, 0);
assert_eq!(docs_regenerated, 0);
assert_eq!(warnings_count, 0);
}
#[test]
fn sync_run_nullable_columns_default_to_null() {
let conn = setup_test_db();
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, status, command)
VALUES (3000, 3000, 'running', 'sync')",
[],
)
.unwrap();
let row_id = conn.last_insert_rowid();
let (mode, phase, cancelled_at): (Option<String>, Option<String>, Option<i64>) = conn
.query_row(
"SELECT mode, phase, cancelled_at FROM sync_runs WHERE id = ?1",
[row_id],
|r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
)
.unwrap();
assert!(mode.is_none());
assert!(phase.is_none());
assert!(cancelled_at.is_none());
}
#[test]
fn sync_run_counter_round_trip() {
let conn = setup_test_db();
conn.execute(
"INSERT INTO sync_runs (started_at, heartbeat_at, status, command, mode, issues_fetched, mrs_ingested, docs_embedded)
VALUES (4000, 4000, 'succeeded', 'sync', 'surgical', 3, 2, 5)",
[],
)
.unwrap();
let row_id = conn.last_insert_rowid();
let (issues_fetched, mrs_ingested, docs_embedded): (i64, i64, i64) = conn
.query_row(
"SELECT issues_fetched, mrs_ingested, docs_embedded FROM sync_runs WHERE id = ?1",
[row_id],
|r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
)
.unwrap();
assert_eq!(issues_fetched, 3);
assert_eq!(mrs_ingested, 2);
assert_eq!(docs_embedded, 5);
}
// ---------------------------------------------------------------------------
// bd-arka: SyncRunRecorder surgical lifecycle methods
// ---------------------------------------------------------------------------
#[test]
fn surgical_lifecycle_start_metadata_succeed() {
let conn = setup_test_db();
let recorder = SyncRunRecorder::start(&conn, "sync", "surg001").unwrap();
let row_id = recorder.row_id();
recorder
.set_surgical_metadata(
&conn,
"surgical",
"preflight",
r#"{"issues":[7,8],"mrs":[101]}"#,
)
.unwrap();
recorder.update_phase(&conn, "ingest").unwrap();
recorder
.record_entity_result(&conn, "issue", "fetched")
.unwrap();
recorder
.record_entity_result(&conn, "issue", "fetched")
.unwrap();
recorder
.record_entity_result(&conn, "issue", "ingested")
.unwrap();
recorder
.record_entity_result(&conn, "mr", "fetched")
.unwrap();
recorder
.record_entity_result(&conn, "mr", "ingested")
.unwrap();
recorder.succeed(&conn, &[], 3, 0).unwrap();
let (mode, phase, iids, issues_fetched, mrs_fetched, issues_ingested, mrs_ingested, status): (
String,
String,
String,
i64,
i64,
i64,
i64,
String,
) = conn
.query_row(
"SELECT mode, phase, surgical_iids_json, issues_fetched, mrs_fetched,
issues_ingested, mrs_ingested, status
FROM sync_runs WHERE id = ?1",
[row_id],
|r| {
Ok((
r.get(0)?,
r.get(1)?,
r.get(2)?,
r.get(3)?,
r.get(4)?,
r.get(5)?,
r.get(6)?,
r.get(7)?,
))
},
)
.unwrap();
assert_eq!(mode, "surgical");
assert_eq!(phase, "ingest"); // Last phase set before succeed
assert!(iids.contains("101"));
assert_eq!(issues_fetched, 2);
assert_eq!(mrs_fetched, 1);
assert_eq!(issues_ingested, 1);
assert_eq!(mrs_ingested, 1);
assert_eq!(status, "succeeded");
}
#[test]
fn surgical_lifecycle_cancel() {
let conn = setup_test_db();
let recorder = SyncRunRecorder::start(&conn, "sync", "cancel01").unwrap();
let row_id = recorder.row_id();
recorder
.set_surgical_metadata(&conn, "surgical", "preflight", "{}")
.unwrap();
recorder
.cancel(&conn, "User requested cancellation")
.unwrap();
let (status, error, cancelled_at, finished_at): (
String,
Option<String>,
Option<i64>,
Option<i64>,
) = conn
.query_row(
"SELECT status, error, cancelled_at, finished_at FROM sync_runs WHERE id = ?1",
[row_id],
|r| Ok((r.get(0)?, r.get(1)?, r.get(2)?, r.get(3)?)),
)
.unwrap();
assert_eq!(status, "cancelled");
assert_eq!(error.as_deref(), Some("User requested cancellation"));
assert!(cancelled_at.is_some());
assert!(finished_at.is_some());
}
#[test]
fn record_entity_result_ignores_unknown() {
let conn = setup_test_db();
let recorder = SyncRunRecorder::start(&conn, "sync", "unk001").unwrap();
// Should not panic or error on unknown combinations
recorder
.record_entity_result(&conn, "widget", "exploded")
.unwrap();
}
#[test]
fn record_entity_result_doc_counters() {
let conn = setup_test_db();
let recorder = SyncRunRecorder::start(&conn, "sync", "cnt001").unwrap();
let row_id = recorder.row_id();
recorder
.record_entity_result(&conn, "doc", "regenerated")
.unwrap();
recorder
.record_entity_result(&conn, "doc", "regenerated")
.unwrap();
recorder
.record_entity_result(&conn, "doc", "embedded")
.unwrap();
recorder
.record_entity_result(&conn, "issue", "skipped_stale")
.unwrap();
let (docs_regen, docs_embed, skipped): (i64, i64, i64) = conn
.query_row(
"SELECT docs_regenerated, docs_embedded, skipped_stale FROM sync_runs WHERE id = ?1",
[row_id],
|r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
)
.unwrap();
assert_eq!(docs_regen, 2);
assert_eq!(docs_embed, 1);
assert_eq!(skipped, 1);
}
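The `(entity_type, stage)` dispatch these tests exercise can be isolated as a pure lookup, which makes the silent-ignore behavior easy to see. A minimal standalone sketch (hypothetical helper name; not part of the commit):

```rust
/// Hypothetical standalone version of the column dispatch inside
/// `record_entity_result`: known combinations map to a counter column,
/// unknown combinations map to `None` and are silently ignored.
fn surgical_counter_column(entity_type: &str, stage: &str) -> Option<&'static str> {
    Some(match (entity_type, stage) {
        ("issue", "fetched") => "issues_fetched",
        ("issue", "ingested") => "issues_ingested",
        ("mr", "fetched") => "mrs_fetched",
        ("mr", "ingested") => "mrs_ingested",
        ("issue" | "mr", "skipped_stale") => "skipped_stale",
        ("doc", "regenerated") => "docs_regenerated",
        ("doc", "embedded") => "docs_embedded",
        (_, "warning") => "warnings_count",
        _ => return None,
    })
}

fn main() {
    assert_eq!(surgical_counter_column("issue", "fetched"), Some("issues_fetched"));
    assert_eq!(surgical_counter_column("mr", "skipped_stale"), Some("skipped_stale"));
    // Unknown combinations produce no column, mirroring record_entity_result's Ok(()) no-op.
    assert_eq!(surgical_counter_column("widget", "exploded"), None);
}
```

Because only these fixed strings can reach the `format!`-built UPDATE statement, the dynamic column name carries no injection risk.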


@@ -7,7 +7,10 @@ pub use extractor::{
extract_discussion_document, extract_issue_document, extract_mr_document,
extract_note_document, extract_note_document_cached,
};
pub use regenerator::{RegenerateResult, regenerate_dirty_documents};
pub use regenerator::{
RegenerateForSourcesResult, RegenerateResult, regenerate_dirty_documents,
regenerate_documents_for_sources,
};
pub use truncation::{
MAX_DISCUSSION_BYTES, MAX_DOCUMENT_BYTES_HARD, NoteContent, TruncationReason, TruncationResult,
truncate_discussion, truncate_hard_cap, truncate_utf8,


@@ -268,6 +268,75 @@ fn get_document_id(conn: &Connection, source_type: SourceType, source_id: i64) -
Ok(id)
}
// ---------------------------------------------------------------------------
// Scoped regeneration for surgical sync
// ---------------------------------------------------------------------------
/// Result of regenerating documents for specific source keys.
#[derive(Debug, Default)]
pub struct RegenerateForSourcesResult {
pub regenerated: usize,
pub unchanged: usize,
pub errored: usize,
/// IDs of documents that were regenerated or confirmed unchanged,
/// for downstream scoped embedding.
pub document_ids: Vec<i64>,
}
/// Regenerate documents for specific source keys only.
///
/// Unlike [`regenerate_dirty_documents`], this does NOT read from the
/// `dirty_sources` table. It processes exactly the provided keys and
/// returns the document IDs for scoped embedding.
pub fn regenerate_documents_for_sources(
conn: &Connection,
source_keys: &[(SourceType, i64)],
) -> Result<RegenerateForSourcesResult> {
let mut result = RegenerateForSourcesResult::default();
let mut cache = ParentMetadataCache::new();
for (source_type, source_id) in source_keys {
match regenerate_one(conn, *source_type, *source_id, &mut cache) {
Ok(changed) => {
if changed {
result.regenerated += 1;
} else {
result.unchanged += 1;
}
clear_dirty(conn, *source_type, *source_id)?;
// Collect document_id for scoped embedding
match get_document_id(conn, *source_type, *source_id) {
Ok(doc_id) => result.document_ids.push(doc_id),
Err(_) => {
// Document was deleted (source no longer exists) — no ID to return
}
}
}
Err(e) => {
warn!(
source_type = %source_type,
source_id,
error = %e,
"Scoped regeneration failed"
);
record_dirty_error(conn, *source_type, *source_id, &e.to_string())?;
result.errored += 1;
}
}
}
debug!(
regenerated = result.regenerated,
unchanged = result.unchanged,
errored = result.errored,
document_ids = result.document_ids.len(),
"Scoped document regeneration complete"
);
Ok(result)
}
#[cfg(test)]
#[path = "regenerator_tests.rs"]
mod tests;
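The counter bookkeeping in `regenerate_documents_for_sources` can be sketched without the database: each source produces one of three outcomes, and document IDs are only collected when the document still exists. A hypothetical pure sketch (names are illustrative, not the crate's API):

```rust
/// Mirror of RegenerateForSourcesResult's counters, detached from SQLite.
#[derive(Debug, Default, PartialEq)]
struct Counters {
    regenerated: usize,
    unchanged: usize,
    errored: usize,
    document_ids: Vec<i64>,
}

enum Outcome {
    Changed(Option<i64>),   // regenerated; Some(id) if the document still exists
    Unchanged(Option<i64>), // confirmed unchanged; id collected for scoped embedding
    Error,                  // recorded in dirty_sources, no id
}

fn fold(outcomes: Vec<Outcome>) -> Counters {
    let mut c = Counters::default();
    for o in outcomes {
        match o {
            Outcome::Changed(id) => {
                c.regenerated += 1;
                c.document_ids.extend(id); // Option<i64> yields zero or one id
            }
            Outcome::Unchanged(id) => {
                c.unchanged += 1;
                c.document_ids.extend(id);
            }
            Outcome::Error => c.errored += 1,
        }
    }
    c
}

fn main() {
    let c = fold(vec![
        Outcome::Changed(Some(1)),
        Outcome::Unchanged(Some(2)),
        Outcome::Changed(None), // source deleted: counted, but no id to embed
        Outcome::Error,
    ]);
    assert_eq!((c.regenerated, c.unchanged, c.errored), (2, 1, 1));
    assert_eq!(c.document_ids, vec![1, 2]);
}
```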


@@ -518,3 +518,65 @@ fn test_note_regeneration_cache_invalidates_across_parents() {
assert!(beta_content.contains("parent_iid: 99"));
assert!(beta_content.contains("parent_title: Issue Beta"));
}
// ---------------------------------------------------------------------------
// Scoped regeneration (bd-hs6j)
// ---------------------------------------------------------------------------
#[test]
fn scoped_regen_only_processes_specified_sources() {
let conn = setup_db();
conn.execute(
"INSERT INTO issues (id, gitlab_id, project_id, iid, title, state, created_at, updated_at, last_seen_at) VALUES (1, 10, 1, 42, 'Issue A', 'opened', 1000, 2000, 3000)",
[],
).unwrap();
conn.execute(
"INSERT INTO issues (id, gitlab_id, project_id, iid, title, state, created_at, updated_at, last_seen_at) VALUES (2, 20, 1, 43, 'Issue B', 'opened', 1000, 2000, 3000)",
[],
).unwrap();
mark_dirty(&conn, SourceType::Issue, 1).unwrap();
mark_dirty(&conn, SourceType::Issue, 2).unwrap();
// Regenerate only issue 1
let result = regenerate_documents_for_sources(&conn, &[(SourceType::Issue, 1)]).unwrap();
assert_eq!(result.regenerated, 1);
assert_eq!(result.document_ids.len(), 1);
// Issue 1 dirty cleared, issue 2 still dirty
let remaining = get_dirty_sources(&conn).unwrap();
assert_eq!(remaining.len(), 1);
assert_eq!(remaining[0], (SourceType::Issue, 2));
}
#[test]
fn scoped_regen_returns_document_ids() {
let conn = setup_db();
conn.execute(
"INSERT INTO issues (id, gitlab_id, project_id, iid, title, state, created_at, updated_at, last_seen_at) VALUES (1, 10, 1, 42, 'Test', 'opened', 1000, 2000, 3000)",
[],
).unwrap();
mark_dirty(&conn, SourceType::Issue, 1).unwrap();
let result = regenerate_documents_for_sources(&conn, &[(SourceType::Issue, 1)]).unwrap();
assert!(!result.document_ids.is_empty());
let exists: bool = conn
.query_row(
"SELECT EXISTS(SELECT 1 FROM documents WHERE id = ?1)",
[result.document_ids[0]],
|r| r.get(0),
)
.unwrap();
assert!(exists);
}
#[test]
fn scoped_regen_handles_missing_source() {
let conn = setup_db();
// Source key 9999 doesn't exist in issues table
let result = regenerate_documents_for_sources(&conn, &[(SourceType::Issue, 9999)]).unwrap();
// regenerate_one returns Ok(true) for deletions, but no doc_id to return
assert_eq!(result.document_ids.len(), 0);
}


@@ -112,6 +112,20 @@ impl GitLabClient {
self.request("/api/v4/version").await
}
pub async fn get_issue_by_iid(&self, gitlab_project_id: i64, iid: i64) -> Result<GitLabIssue> {
let path = format!("/api/v4/projects/{gitlab_project_id}/issues/{iid}");
self.request(&path).await
}
pub async fn get_mr_by_iid(
&self,
gitlab_project_id: i64,
iid: i64,
) -> Result<GitLabMergeRequest> {
let path = format!("/api/v4/projects/{gitlab_project_id}/merge_requests/{iid}");
self.request(&path).await
}
const MAX_RETRIES: u32 = 3;
async fn request<T: serde::de::DeserializeOwned>(&self, path: &str) -> Result<T> {
@@ -613,6 +627,15 @@ impl GitLabClient {
self.fetch_all_pages(&path).await
}
pub async fn fetch_issue_links(
&self,
gitlab_project_id: i64,
issue_iid: i64,
) -> Result<Vec<crate::gitlab::types::GitLabIssueLink>> {
let path = format!("/api/v4/projects/{gitlab_project_id}/issues/{issue_iid}/links");
coalesce_not_found(self.fetch_all_pages(&path).await)
}
pub async fn fetch_mr_diffs(
&self,
gitlab_project_id: i64,
@@ -848,4 +871,143 @@ mod tests {
let result = parse_link_header_next(&headers);
assert!(result.is_none());
}
// ─────────────────────────────────────────────────────────────────
// get_issue_by_iid / get_mr_by_iid
// ─────────────────────────────────────────────────────────────────
use wiremock::matchers::{header, method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};
fn mock_issue_json(iid: i64) -> serde_json::Value {
serde_json::json!({
"id": 1000 + iid,
"iid": iid,
"project_id": 42,
"title": format!("Issue #{iid}"),
"description": null,
"state": "opened",
"created_at": "2024-01-15T10:00:00.000Z",
"updated_at": "2024-01-16T12:00:00.000Z",
"closed_at": null,
"author": { "id": 1, "username": "alice", "name": "Alice", "avatar_url": null },
"assignees": [],
"labels": ["bug"],
"milestone": null,
"due_date": null,
"web_url": format!("https://gitlab.example.com/g/p/-/issues/{iid}")
})
}
fn mock_mr_json(iid: i64) -> serde_json::Value {
serde_json::json!({
"id": 2000 + iid,
"iid": iid,
"project_id": 42,
"title": format!("MR !{iid}"),
"description": null,
"state": "opened",
"draft": false,
"work_in_progress": false,
"source_branch": "feat",
"target_branch": "main",
"sha": "abc123",
"references": { "short": format!("!{iid}"), "full": format!("g/p!{iid}") },
"detailed_merge_status": "mergeable",
"created_at": "2024-02-01T08:00:00.000Z",
"updated_at": "2024-02-02T09:00:00.000Z",
"merged_at": null,
"closed_at": null,
"author": { "id": 2, "username": "bob", "name": "Bob", "avatar_url": null },
"merge_user": null,
"merged_by": null,
"labels": [],
"assignees": [],
"reviewers": [],
"web_url": format!("https://gitlab.example.com/g/p/-/merge_requests/{iid}"),
"merge_commit_sha": null,
"squash_commit_sha": null
})
}
fn test_client(base_url: &str) -> GitLabClient {
GitLabClient::new(base_url, "test-token", Some(1000.0))
}
#[tokio::test]
async fn get_issue_by_iid_success() {
let server = MockServer::start().await;
Mock::given(method("GET"))
.and(path("/api/v4/projects/42/issues/7"))
.and(header("PRIVATE-TOKEN", "test-token"))
.respond_with(ResponseTemplate::new(200).set_body_json(mock_issue_json(7)))
.mount(&server)
.await;
let client = test_client(&server.uri());
let issue = client.get_issue_by_iid(42, 7).await.unwrap();
assert_eq!(issue.iid, 7);
assert_eq!(issue.title, "Issue #7");
assert_eq!(issue.state, "opened");
}
#[tokio::test]
async fn get_issue_by_iid_not_found() {
let server = MockServer::start().await;
Mock::given(method("GET"))
.and(path("/api/v4/projects/42/issues/999"))
.respond_with(ResponseTemplate::new(404))
.mount(&server)
.await;
let client = test_client(&server.uri());
let err = client.get_issue_by_iid(42, 999).await.unwrap_err();
assert!(
matches!(err, LoreError::GitLabNotFound { .. }),
"Expected GitLabNotFound, got: {err:?}"
);
}
#[tokio::test]
async fn get_mr_by_iid_success() {
let server = MockServer::start().await;
Mock::given(method("GET"))
.and(path("/api/v4/projects/42/merge_requests/99"))
.and(header("PRIVATE-TOKEN", "test-token"))
.respond_with(ResponseTemplate::new(200).set_body_json(mock_mr_json(99)))
.mount(&server)
.await;
let client = test_client(&server.uri());
let mr = client.get_mr_by_iid(42, 99).await.unwrap();
assert_eq!(mr.iid, 99);
assert_eq!(mr.title, "MR !99");
assert_eq!(mr.source_branch, "feat");
assert_eq!(mr.target_branch, "main");
}
#[tokio::test]
async fn get_mr_by_iid_not_found() {
let server = MockServer::start().await;
Mock::given(method("GET"))
.and(path("/api/v4/projects/42/merge_requests/999"))
.respond_with(ResponseTemplate::new(404))
.mount(&server)
.await;
let client = test_client(&server.uri());
let err = client.get_mr_by_iid(42, 999).await.unwrap_err();
assert!(
matches!(err, LoreError::GitLabNotFound { .. }),
"Expected GitLabNotFound, got: {err:?}"
);
}
}


@@ -263,6 +263,21 @@ pub struct GitLabMergeRequest {
pub squash_commit_sha: Option<String>,
}
/// Linked issue returned by GitLab's issue links API.
/// GET /projects/:id/issues/:iid/links
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GitLabIssueLink {
pub id: i64,
pub iid: i64,
pub project_id: i64,
pub title: String,
pub state: String,
pub web_url: String,
/// "relates_to", "blocks", or "is_blocked_by"
pub link_type: String,
pub link_created_at: Option<String>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct WorkItemStatus {
pub name: String,


@@ -0,0 +1,397 @@
use rusqlite::Connection;
use tracing::debug;
use crate::core::error::Result;
use crate::core::references::{
EntityReference, insert_entity_reference, resolve_issue_local_id, resolve_project_path,
};
use crate::gitlab::types::GitLabIssueLink;
/// Store issue links as bidirectional entity_references.
///
/// For each linked issue:
/// - Creates A -> B reference (source -> target)
/// - Creates B -> A reference (target -> source)
/// - Skips self-links
/// - Stores unresolved cross-project links (target_entity_id = NULL)
pub fn store_issue_links(
conn: &Connection,
project_id: i64,
source_issue_local_id: i64,
source_issue_iid: i64,
links: &[GitLabIssueLink],
) -> Result<usize> {
let mut stored = 0;
// Resolve the source project's gitlab_project_id once rather than per link
let source_gitlab_project_id = resolve_gitlab_project_id(conn, project_id)?.unwrap_or(-1);
for link in links {
// Skip self-links
if link.iid == source_issue_iid && link.project_id == source_gitlab_project_id {
debug!(source_iid = source_issue_iid, "Skipping self-link");
continue;
}
let target_local_id = if link.project_id == source_gitlab_project_id {
resolve_issue_local_id(conn, project_id, link.iid)?
} else {
// Cross-project link: try to find in our DB
resolve_issue_by_gitlab_project(conn, link.project_id, link.iid)?
};
let (target_id, target_path, target_iid) = if let Some(local_id) = target_local_id {
(Some(local_id), None, None)
} else {
let path = resolve_project_path(conn, link.project_id)?;
let fallback = path.unwrap_or_else(|| format!("gitlab_project:{}", link.project_id));
(None, Some(fallback), Some(link.iid))
};
// Forward reference: source -> target
let forward = EntityReference {
project_id,
source_entity_type: "issue",
source_entity_id: source_issue_local_id,
target_entity_type: "issue",
target_entity_id: target_id,
target_project_path: target_path.as_deref(),
target_entity_iid: target_iid,
reference_type: "related",
source_method: "api",
};
if insert_entity_reference(conn, &forward)? {
stored += 1;
}
// Reverse reference: target -> source (only if target is resolved locally)
if let Some(target_local) = target_id {
let reverse = EntityReference {
project_id,
source_entity_type: "issue",
source_entity_id: target_local,
target_entity_type: "issue",
target_entity_id: Some(source_issue_local_id),
target_project_path: None,
target_entity_iid: None,
reference_type: "related",
source_method: "api",
};
if insert_entity_reference(conn, &reverse)? {
stored += 1;
}
}
}
Ok(stored)
}
/// Resolve the gitlab_project_id for a local project_id.
fn resolve_gitlab_project_id(conn: &Connection, project_id: i64) -> Result<Option<i64>> {
use rusqlite::OptionalExtension;
let result = conn
.query_row(
"SELECT gitlab_project_id FROM projects WHERE id = ?1",
[project_id],
|row| row.get(0),
)
.optional()?;
Ok(result)
}
/// Resolve an issue local ID by gitlab_project_id and iid (cross-project).
fn resolve_issue_by_gitlab_project(
conn: &Connection,
gitlab_project_id: i64,
issue_iid: i64,
) -> Result<Option<i64>> {
use rusqlite::OptionalExtension;
let result = conn
.query_row(
"SELECT i.id FROM issues i
JOIN projects p ON p.id = i.project_id
WHERE p.gitlab_project_id = ?1 AND i.iid = ?2",
rusqlite::params![gitlab_project_id, issue_iid],
|row| row.get(0),
)
.optional()?;
Ok(result)
}
/// Update the issue_links watermark after successful sync.
pub fn update_issue_links_watermark(conn: &Connection, issue_local_id: i64) -> Result<()> {
conn.execute(
"UPDATE issues SET issue_links_synced_for_updated_at = updated_at WHERE id = ?",
[issue_local_id],
)?;
Ok(())
}
/// Update the issue_links watermark within a transaction.
pub fn update_issue_links_watermark_tx(
tx: &rusqlite::Transaction<'_>,
issue_local_id: i64,
) -> Result<()> {
tx.execute(
"UPDATE issues SET issue_links_synced_for_updated_at = updated_at WHERE id = ?",
[issue_local_id],
)?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
use crate::core::db::{create_connection, run_migrations};
use std::path::Path;
fn setup_test_db() -> Connection {
let conn = create_connection(Path::new(":memory:")).unwrap();
run_migrations(&conn).unwrap();
// Insert a project
conn.execute(
"INSERT INTO projects (id, gitlab_project_id, path_with_namespace, web_url)
VALUES (1, 100, 'group/project', 'https://gitlab.example.com/group/project')",
[],
)
.unwrap();
// Insert two issues
conn.execute(
"INSERT INTO issues (id, gitlab_id, iid, project_id, title, state, author_username, created_at, updated_at, last_seen_at)
VALUES (10, 1001, 1, 1, 'Issue One', 'opened', 'alice', 1000, 2000, 3000)",
[],
)
.unwrap();
conn.execute(
"INSERT INTO issues (id, gitlab_id, iid, project_id, title, state, author_username, created_at, updated_at, last_seen_at)
VALUES (20, 1002, 2, 1, 'Issue Two', 'opened', 'bob', 1000, 2000, 3000)",
[],
)
.unwrap();
conn
}
#[test]
fn test_store_issue_links_creates_bidirectional_references() {
let conn = setup_test_db();
let links = vec![GitLabIssueLink {
id: 999,
iid: 2,
project_id: 100, // same project
title: "Issue Two".to_string(),
state: "opened".to_string(),
web_url: "https://gitlab.example.com/group/project/-/issues/2".to_string(),
link_type: "relates_to".to_string(),
link_created_at: None,
}];
let stored = store_issue_links(&conn, 1, 10, 1, &links).unwrap();
assert_eq!(stored, 2, "Should create 2 references (forward + reverse)");
// Verify forward reference: issue 10 (iid 1) -> issue 20 (iid 2)
let forward_count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM entity_references
WHERE source_entity_type = 'issue' AND source_entity_id = 10
AND target_entity_type = 'issue' AND target_entity_id = 20
AND reference_type = 'related' AND source_method = 'api'",
[],
|row| row.get(0),
)
.unwrap();
assert_eq!(forward_count, 1);
// Verify reverse reference: issue 20 (iid 2) -> issue 10 (iid 1)
let reverse_count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM entity_references
WHERE source_entity_type = 'issue' AND source_entity_id = 20
AND target_entity_type = 'issue' AND target_entity_id = 10
AND reference_type = 'related' AND source_method = 'api'",
[],
|row| row.get(0),
)
.unwrap();
assert_eq!(reverse_count, 1);
}
#[test]
fn test_self_link_skipped() {
let conn = setup_test_db();
let links = vec![GitLabIssueLink {
id: 999,
iid: 1, // same iid as source
project_id: 100,
title: "Issue One".to_string(),
state: "opened".to_string(),
web_url: "https://gitlab.example.com/group/project/-/issues/1".to_string(),
link_type: "relates_to".to_string(),
link_created_at: None,
}];
let stored = store_issue_links(&conn, 1, 10, 1, &links).unwrap();
assert_eq!(stored, 0, "Self-link should be skipped");
let count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM entity_references WHERE project_id = 1",
[],
|row| row.get(0),
)
.unwrap();
assert_eq!(count, 0);
}
#[test]
fn test_cross_project_link_unresolved() {
let conn = setup_test_db();
// Link to an issue in a different project (not in our DB)
let links = vec![GitLabIssueLink {
id: 999,
iid: 42,
project_id: 200, // different project, not in DB
title: "External Issue".to_string(),
state: "opened".to_string(),
web_url: "https://gitlab.example.com/other/project/-/issues/42".to_string(),
link_type: "relates_to".to_string(),
link_created_at: None,
}];
let stored = store_issue_links(&conn, 1, 10, 1, &links).unwrap();
assert_eq!(
stored, 1,
"Should create 1 forward reference (no reverse for unresolved)"
);
// Verify unresolved reference
let (target_id, target_path, target_iid): (Option<i64>, Option<String>, Option<i64>) = conn
.query_row(
"SELECT target_entity_id, target_project_path, target_entity_iid
FROM entity_references
WHERE source_entity_type = 'issue' AND source_entity_id = 10",
[],
|row| Ok((row.get(0)?, row.get(1)?, row.get(2)?)),
)
.unwrap();
assert!(target_id.is_none(), "Target should be unresolved");
assert_eq!(
target_path.as_deref(),
Some("gitlab_project:200"),
"Should store gitlab_project fallback"
);
assert_eq!(target_iid, Some(42));
}
#[test]
fn test_duplicate_links_idempotent() {
let conn = setup_test_db();
let links = vec![GitLabIssueLink {
id: 999,
iid: 2,
project_id: 100,
title: "Issue Two".to_string(),
state: "opened".to_string(),
web_url: "https://gitlab.example.com/group/project/-/issues/2".to_string(),
link_type: "relates_to".to_string(),
link_created_at: None,
}];
// Store twice
let stored1 = store_issue_links(&conn, 1, 10, 1, &links).unwrap();
let stored2 = store_issue_links(&conn, 1, 10, 1, &links).unwrap();
assert_eq!(stored1, 2);
assert_eq!(
stored2, 0,
"Second insert should be idempotent (INSERT OR IGNORE)"
);
let count: i64 = conn
.query_row(
"SELECT COUNT(*) FROM entity_references WHERE project_id = 1 AND reference_type = 'related'",
[],
|row| row.get(0),
)
.unwrap();
assert_eq!(count, 2, "Should still have exactly 2 references");
}
#[test]
fn test_issue_link_deserialization() {
let json = r#"[
{
"id": 123,
"iid": 42,
"project_id": 100,
"title": "Linked Issue",
"state": "opened",
"web_url": "https://gitlab.example.com/group/project/-/issues/42",
"link_type": "relates_to",
"link_created_at": "2026-01-15T10:30:00.000Z"
},
{
"id": 456,
"iid": 99,
"project_id": 200,
"title": "Blocking Issue",
"state": "closed",
"web_url": "https://gitlab.example.com/other/project/-/issues/99",
"link_type": "blocks",
"link_created_at": null
}
]"#;
let links: Vec<GitLabIssueLink> = serde_json::from_str(json).unwrap();
assert_eq!(links.len(), 2);
assert_eq!(links[0].iid, 42);
assert_eq!(links[0].link_type, "relates_to");
assert_eq!(
links[0].link_created_at.as_deref(),
Some("2026-01-15T10:30:00.000Z")
);
assert_eq!(links[1].link_type, "blocks");
assert!(links[1].link_created_at.is_none());
}
#[test]
fn test_update_issue_links_watermark() {
let conn = setup_test_db();
// Initially NULL
let wm: Option<i64> = conn
.query_row(
"SELECT issue_links_synced_for_updated_at FROM issues WHERE id = 10",
[],
|row| row.get(0),
)
.unwrap();
assert!(wm.is_none());
// Update watermark
update_issue_links_watermark(&conn, 10).unwrap();
// Should now equal updated_at (2000)
let wm: Option<i64> = conn
.query_row(
"SELECT issue_links_synced_for_updated_at FROM issues WHERE id = 10",
[],
|row| row.get(0),
)
.unwrap();
assert_eq!(wm, Some(2000));
}
}
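The branching these tests cover (self-link skipped, same-project target gets forward plus reverse rows, unresolved cross-project target gets forward only) can be sketched as a pure decision function, separate from the SQL. A hypothetical sketch with illustrative names:

```rust
/// Hypothetical pure sketch of the reference-planning step in
/// `store_issue_links`: decide how many entity_references rows a link yields.
#[derive(Debug, PartialEq)]
enum LinkPlan {
    SkipSelfLink,      // 0 rows
    ForwardOnly,       // 1 row: cross-project target not in the local DB
    ForwardAndReverse, // 2 rows: target resolved to a local row id
}

fn plan_link(
    source_iid: i64,
    source_gitlab_project_id: i64,
    link_iid: i64,
    link_project_id: i64,
    target_local_id: Option<i64>,
) -> LinkPlan {
    if link_iid == source_iid && link_project_id == source_gitlab_project_id {
        return LinkPlan::SkipSelfLink;
    }
    match target_local_id {
        Some(_) => LinkPlan::ForwardAndReverse,
        None => LinkPlan::ForwardOnly,
    }
}

fn main() {
    // Same-project link resolved locally: forward + reverse, stored == 2.
    assert_eq!(plan_link(1, 100, 2, 100, Some(20)), LinkPlan::ForwardAndReverse);
    // Cross-project link absent from the DB: forward only, target fields unresolved.
    assert_eq!(plan_link(1, 100, 42, 200, None), LinkPlan::ForwardOnly);
    // Self-link: skipped entirely.
    assert_eq!(plan_link(1, 100, 1, 100, None), LinkPlan::SkipSelfLink);
}
```

Keeping this decision separate from the INSERT OR IGNORE writes is what makes the idempotency test above hold: replanning the same links yields the same rows, which the unique index then deduplicates.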


@@ -140,7 +140,7 @@ fn passes_cursor_filter_with_ts(gitlab_id: i64, issue_ts: i64, cursor: &SyncCurs
true
}
fn process_single_issue(
pub(crate) fn process_single_issue(
conn: &Connection,
config: &Config,
project_id: i64,

Some files were not shown because too many files have changed in this diff.