7 Commits

Author SHA1 Message Date
teernisse
53ce20595b feat(cron): add lore cron command for automated sync scheduling
Add lore cron {install,uninstall,status} to manage a crontab entry that
runs lore sync on a configurable interval. Supports both human and robot
output modes.

Core implementation (src/core/cron.rs):
  - install_cron: appends a tagged crontab entry, detects existing entries
  - uninstall_cron: removes the tagged entry
  - cron_status: reads crontab + checks last-sync time from the database
  - Unix-only (#[cfg(unix)]) — compiles out on Windows

CLI wiring:
  - CronAction enum and CronArgs in cli/mod.rs with after_help examples
  - Robot JSON envelope with RobotMeta timing for all 3 sub-actions
  - Dispatch in main.rs

Also in this commit:
  - Add after_help example blocks to Status, Auth, Doctor, Init, Migrate,
    Health commands for better discoverability
  - Add LORE_ICONS env var documentation to CLI help text
  - Simplify notes format dispatch in main.rs (removed csv/jsonl paths)
  - Update commands/mod.rs re-exports for cron + notes cleanup

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:29:20 -05:00
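The install/detect behavior described above can be sketched at the string level. This is a hypothetical reduction — `TAG`, the entry shape, and the `install` signature are assumptions for illustration, not lore's actual code — showing how a marker comment makes a crontab entry detectable, replaceable, and removable:

```rust
// Hypothetical sketch of tagged-crontab-entry management (TAG, the entry
// shape, and `install` are assumptions, not lore's real code).
const TAG: &str = "# lore-sync";

/// Returns the new crontab text and whether an existing tagged entry was replaced.
fn install(crontab: &str, interval_minutes: u32) -> (String, bool) {
    let replaced = crontab.lines().any(|l| l.contains(TAG));
    // Drop any previous tagged entry, keep everything else untouched.
    let kept: Vec<&str> = crontab.lines().filter(|l| !l.contains(TAG)).collect();
    let entry = format!("*/{interval_minutes} * * * * lore sync {TAG}");
    let mut out = kept.join("\n");
    if !out.is_empty() {
        out.push('\n');
    }
    out.push_str(&entry);
    out.push('\n');
    (out, replaced)
}

fn main() {
    let (tab, replaced) = install("0 * * * * backup.sh\n", 30);
    assert!(!replaced);
    assert!(tab.contains("*/30 * * * * lore sync # lore-sync"));
    // A second install detects and replaces the tagged entry.
    let (tab2, replaced2) = install(&tab, 60);
    assert!(replaced2);
    assert!(tab2.contains("*/60"));
    assert!(!tab2.contains("*/30"));
}
```

Note that `*/N` cron syntax only expresses intervals up to 59 minutes; a real implementation would need a different entry shape for longer intervals.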
teernisse
1808a4da8e refactor(notes): remove csv and jsonl output formats
Remove print_list_notes_csv, print_list_notes_jsonl, and csv_escape from
the notes list command. The --format flag's csv and jsonl variants added
complexity without meaningful adoption — robot mode already provides
structured JSON output. Notes now have two output paths: human (default)
and JSON (--robot).

Also removes the corresponding test coverage (csv_escape, csv_output).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:29:07 -05:00
teernisse
7d032833a2 feat(cli): improve autocorrect with --no-color expansion and --lock flag
Add NoColorExpansion correction rule that rewrites --no-color into the
two-arg form --color never, matching clap's expected syntax. The caller
detects the rule variant and inserts the second arg.

Also: add --lock to the sync command's known flags, and remove --format
from the notes command's known flags (format selection was removed).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:29:00 -05:00
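Elsewhere in this changeset, `best_fuzzy_match` gains a length guard (the input must be within 1.4x the candidate's length). Its arithmetic can be sanity-checked in isolation; `passes_guard` here is a hypothetical stand-in for the inline filter, not a function from the codebase:

```rust
// Sanity check of the 1.4x length-guard arithmetic used in fuzzy matching.
// Lengths include the leading dashes.
fn passes_guard(input: &str, candidate: &str) -> bool {
    let max_input_len = (candidate.len() as f64 * 1.4) as usize;
    input.len() <= max_input_len
}

fn main() {
    // "--for" is 5 chars, so the max input length is (5.0 * 1.4) as usize = 7.
    assert!(!passes_guard("--foobar", "--for")); // 8 > 7: excluded from fuzzy
    assert!(passes_guard("--fro", "--for")); // 5 <= 7: still considered
}
```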
teernisse
097249f4e6 fix(robot): replace JSON serialization unwrap with graceful error handling
Replace serde_json::to_string(&output).unwrap() with match-based error
handling across all robot-mode JSON printers. On serialization failure,
the error is now written to stderr instead of panicking. This hardens
the CLI against unexpected Serialize failures in production.

Affected commands: count (2), embed, generate-docs, ingest (2), search,
stats, sync (2), sync-status, timeline.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:28:53 -05:00
teernisse
8442bcf367 feat(trace,file-history): add tracing instrumentation and diagnostic hints
Add structured tracing spans to trace and file-history pipelines so debug
logging (-vv) shows path resolution counts, MR match counts, and discussion
counts at each stage. This makes empty-result debugging straightforward.

Add a hints field to TraceResult and FileHistoryResult that carries
machine-readable diagnostic strings explaining *why* results may be empty
(e.g., "Run 'lore sync' to fetch MR file changes"). The CLI renders these
as info lines; robot mode includes them in JSON when non-empty.

Also: fix filter_map(Result::ok) → collect::<Result> in trace.rs (same
pattern fixed in prior commit for file_history/path_resolver), and switch
conn.prepare → conn.prepare_cached for the MR query.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:28:47 -05:00
teernisse
c0ca501662 fix: replace silent error swallowing with proper error propagation
Replace .filter_map(Result::ok).collect() with .collect::<Result<Vec<_>,_>>()?
in rename chain resolution and suffix probe queries. The old pattern silently
discarded database errors, making failures invisible. Now any rusqlite error
propagates to the caller immediately.

Affected: resolve_rename_chain (2 queries) and resolve_ambiguity (1 query).

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:28:37 -05:00
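The error-swallowing pattern this commit removes can be demonstrated in miniature. A sketch with assumed row values standing in for the rusqlite query results:

```rust
// Miniature of the bug this commit fixes.
fn main() {
    let rows: Vec<Result<i64, String>> =
        vec![Ok(1), Err("database disk image is malformed".to_string()), Ok(3)];

    // Old pattern: the Err row is silently dropped; the caller sees [1, 3]
    // and has no idea a database error occurred.
    let swallowed: Vec<i64> = rows.clone().into_iter().filter_map(Result::ok).collect();
    assert_eq!(swallowed, vec![1, 3]);

    // New pattern: collecting into Result stops at the first Err and
    // returns it, so `?` surfaces the failure immediately.
    let propagated: Result<Vec<i64>, String> = rows.into_iter().collect();
    assert!(propagated.is_err());
}
```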
teernisse
c953d8e519 refactor(who): split 2598-line who.rs into per-mode modules
Split the monolithic who.rs into a who/ directory module with 7 focused
files. The 5 query modes (expert, workload, reviews, active, overlap) share
no query-level code — only types and a few small helpers — making this a
clean mechanical extraction.

New structure:
  who/types.rs     — all pub result structs/enums (~185 lines)
  who/mod.rs       — dispatch, shared helpers, JSON envelope (~428 lines)
  who/expert.rs    — query + render + json for expert mode (~839 lines)
  who/workload.rs  — query + render + json for workload mode (~370 lines)
  who/reviews.rs   — query + render + json for reviews mode (~214 lines)
  who/active.rs    — query + render + json for active mode (~299 lines)
  who/overlap.rs   — query + render + json for overlap mode (~323 lines)

Token savings: an agent working on any single mode now loads ~400-960 lines
instead of 2,598 (63-85% reduction). Public API unchanged — parent mod.rs
re-exports are identical.

Test re-exports use #[cfg(test)] use (not pub use) to avoid visibility
conflicts with pub(super) items in submodules. All 79 who tests pass.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-18 13:28:30 -05:00
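The `#[cfg(test)] use` note above can be illustrated in a single file. The module and item names (`who`, `expert`, `top_expert`, "alice") are invented for this sketch:

```rust
// Single-file illustration of test-only re-exports over pub(super) items.
mod who {
    mod expert {
        // pub(super): visible to `who`, but not outside it.
        pub(super) fn top_expert() -> &'static str {
            "alice"
        }
    }

    // A `pub use expert::top_expert;` here would be rejected: a re-export
    // cannot make an item more visible than its own `pub(super)` allows.
    // A `#[cfg(test)] use` keeps the short path available to tests only
    // and compiles away entirely in non-test builds.
    #[cfg(test)]
    use expert::top_expert;

    pub fn dispatch() -> &'static str {
        expert::top_expert()
    }
}

fn main() {
    assert_eq!(who::dispatch(), "alice");
}
```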
31 changed files with 3839 additions and 2797 deletions

View File

@@ -25,6 +25,7 @@ pub enum CorrectionRule {
ValueNormalization,
ValueFuzzy,
FlagPrefix,
+NoColorExpansion,
}
/// Result of the correction pass over raw args.
@@ -128,6 +129,7 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
"--dry-run",
"--no-dry-run",
"--timings",
+"--lock",
],
),
(
@@ -203,7 +205,6 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
&[
"--limit",
"--fields",
-"--format",
"--author",
"--note-type",
"--contains",
@@ -424,9 +425,21 @@ pub fn correct_args(raw: Vec<String>, strict: bool) -> CorrectionResult {
}
if let Some(fixed) = try_correct(&arg, &valid, strict) {
if fixed.rule == CorrectionRule::NoColorExpansion {
// Expand --no-color → --color never
corrections.push(Correction {
original: fixed.original,
corrected: "--color never".to_string(),
rule: CorrectionRule::NoColorExpansion,
confidence: 1.0,
});
corrected.push("--color".to_string());
corrected.push("never".to_string());
} else {
let s = fixed.corrected.clone();
corrections.push(fixed);
corrected.push(s);
}
} else {
corrected.push(arg);
}
@@ -611,12 +624,27 @@ const CLAP_BUILTINS: &[&str] = &["--help", "--version"];
///
/// When `strict` is true, fuzzy matching is disabled — only deterministic
/// corrections (single-dash fix, case normalization) are applied.
///
/// Special case: `--no-color` is recognized here and tagged with the
/// `NoColorExpansion` rule. Because args are corrected one at a time, the
/// returned correction carries `--no-color` as a sentinel; `correct_args`
/// detects the rule and emits the two-arg form `--color never`.
fn try_correct(arg: &str, valid_flags: &[&str], strict: bool) -> Option<Correction> {
// Only attempt correction on flag-like args (starts with `-`)
if !arg.starts_with('-') {
return None;
}
// Special case: --no-color → --color never (common agent/user expectation)
if arg.eq_ignore_ascii_case("--no-color") {
return Some(Correction {
original: arg.to_string(),
corrected: "--no-color".to_string(), // sentinel; expanded in correct_args
rule: CorrectionRule::NoColorExpansion,
confidence: 1.0,
});
}
// B2: Never correct clap built-in flags (--help, --version)
let flag_part_for_builtin = if let Some(eq_pos) = arg.find('=') {
&arg[..eq_pos]
@@ -766,9 +794,21 @@ fn try_correct(arg: &str, valid_flags: &[&str], strict: bool) -> Option<Correcti
}
/// Find the best fuzzy match among valid flags for a given (lowercased) input.
///
/// Applies a length guard to prevent short candidates (e.g. `--for`, 5 chars
/// including dashes) from inflating Jaro-Winkler scores against long inputs.
/// When the input is more than 40% longer than a candidate, that candidate is
/// excluded from fuzzy consideration (it can still match via prefix rule).
fn best_fuzzy_match<'a>(input: &str, valid_flags: &[&'a str]) -> Option<(&'a str, f64)> {
valid_flags
.iter()
.filter(|&&flag| {
// Guard: skip short candidates when input is much longer.
// e.g. "--foobar" (8 chars) should not fuzzy-match "--for" (5 chars)
// Ratio: input must be within 1.4x the candidate length.
let max_input_len = (flag.len() as f64 * 1.4) as usize;
input.len() <= max_input_len
})
.map(|&flag| (flag, jaro_winkler(input, flag)))
.max_by(|a, b| a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal))
}
@@ -846,6 +886,9 @@ pub fn format_teaching_note(correction: &Correction) -> String {
correction.corrected, correction.original
)
}
CorrectionRule::NoColorExpansion => {
"Use `--color never` instead of `--no-color`".to_string()
}
}
}
@@ -1286,6 +1329,53 @@ mod tests {
assert!(note.contains("full flag name"));
}
// ---- --no-color expansion ----
#[test]
fn no_color_expands_to_color_never() {
let result = correct_args(args("lore --no-color health"), false);
assert_eq!(result.corrections.len(), 1);
assert_eq!(result.corrections[0].rule, CorrectionRule::NoColorExpansion);
assert_eq!(result.args, args("lore --color never health"));
}
#[test]
fn no_color_case_insensitive() {
let result = correct_args(args("lore --No-Color issues"), false);
assert_eq!(result.corrections.len(), 1);
assert_eq!(result.args, args("lore --color never issues"));
}
#[test]
fn no_color_with_robot_mode() {
let result = correct_args(args("lore --robot --no-color health"), true);
assert_eq!(result.corrections.len(), 1);
assert_eq!(result.args, args("lore --robot --color never health"));
}
// ---- Fuzzy matching length guard ----
#[test]
fn foobar_does_not_match_for() {
// --foobar (8 chars) should NOT fuzzy-match --for (5 chars)
let result = correct_args(args("lore count --foobar issues"), false);
assert!(
!result.corrections.iter().any(|c| c.corrected == "--for"),
"expected --foobar not to match --for"
);
}
#[test]
fn fro_still_matches_for() {
// --fro (5 chars) is short enough to fuzzy-match --for (5 chars)
// and also qualifies as a prefix match
let result = correct_args(args("lore count --fro issues"), false);
assert!(
result.corrections.iter().any(|c| c.corrected == "--for"),
"expected --fro to match --for"
);
}
// ---- Post-clap suggestion helpers ----
#[test]

View File

@@ -257,7 +257,10 @@ pub fn print_event_count_json(counts: &EventCounts, elapsed_ms: u64) {
meta: RobotMeta { elapsed_ms },
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}
pub fn print_event_count(counts: &EventCounts) {
@@ -325,7 +328,10 @@ pub fn print_count_json(result: &CountResult, elapsed_ms: u64) {
meta: RobotMeta { elapsed_ms },
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}
pub fn print_count(result: &CountResult) {

src/cli/commands/cron.rs (new file, 292 lines)
View File

@@ -0,0 +1,292 @@
use serde::Serialize;
use crate::Config;
use crate::cli::render::Theme;
use crate::cli::robot::RobotMeta;
use crate::core::cron::{
CronInstallResult, CronStatusResult, CronUninstallResult, cron_status, install_cron,
uninstall_cron,
};
use crate::core::db::create_connection;
use crate::core::error::Result;
use crate::core::paths::get_db_path;
use crate::core::time::ms_to_iso;
// ── install ──
pub fn run_cron_install(interval_minutes: u32) -> Result<CronInstallResult> {
install_cron(interval_minutes)
}
pub fn print_cron_install(result: &CronInstallResult) {
if result.replaced {
println!(
" {} cron entry updated (was already installed)",
Theme::success().render("Updated")
);
} else {
println!(
" {} cron entry installed",
Theme::success().render("Installed")
);
}
println!();
println!(" {} {}", Theme::dim().render("entry:"), result.entry);
println!(
" {} every {} minutes",
Theme::dim().render("interval:"),
result.interval_minutes
);
println!(
" {} {}",
Theme::dim().render("log:"),
result.log_path.display()
);
if cfg!(target_os = "macos") {
println!();
println!(
" {} On macOS, the terminal running cron may need",
Theme::warning().render("Note:")
);
println!(" Full Disk Access in System Settings > Privacy & Security.");
}
println!();
}
#[derive(Serialize)]
struct CronInstallJson {
ok: bool,
data: CronInstallData,
meta: RobotMeta,
}
#[derive(Serialize)]
struct CronInstallData {
action: &'static str,
entry: String,
interval_minutes: u32,
log_path: String,
replaced: bool,
}
pub fn print_cron_install_json(result: &CronInstallResult, elapsed_ms: u64) {
let output = CronInstallJson {
ok: true,
data: CronInstallData {
action: "install",
entry: result.entry.clone(),
interval_minutes: result.interval_minutes,
log_path: result.log_path.display().to_string(),
replaced: result.replaced,
},
meta: RobotMeta { elapsed_ms },
};
if let Ok(json) = serde_json::to_string(&output) {
println!("{json}");
}
}
// ── uninstall ──
pub fn run_cron_uninstall() -> Result<CronUninstallResult> {
uninstall_cron()
}
pub fn print_cron_uninstall(result: &CronUninstallResult) {
if result.was_installed {
println!(
" {} cron entry removed",
Theme::success().render("Removed")
);
} else {
println!(
" {} no lore-sync cron entry found",
Theme::dim().render("Nothing to remove:")
);
}
println!();
}
#[derive(Serialize)]
struct CronUninstallJson {
ok: bool,
data: CronUninstallData,
meta: RobotMeta,
}
#[derive(Serialize)]
struct CronUninstallData {
action: &'static str,
was_installed: bool,
}
pub fn print_cron_uninstall_json(result: &CronUninstallResult, elapsed_ms: u64) {
let output = CronUninstallJson {
ok: true,
data: CronUninstallData {
action: "uninstall",
was_installed: result.was_installed,
},
meta: RobotMeta { elapsed_ms },
};
if let Ok(json) = serde_json::to_string(&output) {
println!("{json}");
}
}
// ── status ──
pub fn run_cron_status(config: &Config) -> Result<CronStatusInfo> {
let status = cron_status()?;
// Query last sync run from DB
let last_sync = get_last_sync_time(config).unwrap_or_default();
Ok(CronStatusInfo { status, last_sync })
}
pub struct CronStatusInfo {
pub status: CronStatusResult,
pub last_sync: Option<LastSyncInfo>,
}
pub struct LastSyncInfo {
pub started_at_iso: String,
pub status: String,
}
fn get_last_sync_time(config: &Config) -> Result<Option<LastSyncInfo>> {
let db_path = get_db_path(config.storage.db_path.as_deref());
if !db_path.exists() {
return Ok(None);
}
let conn = create_connection(&db_path)?;
let result = conn.query_row(
"SELECT started_at, status FROM sync_runs ORDER BY started_at DESC LIMIT 1",
[],
|row| {
let started_at: i64 = row.get(0)?;
let status: String = row.get(1)?;
Ok(LastSyncInfo {
started_at_iso: ms_to_iso(started_at),
status,
})
},
);
match result {
Ok(info) => Ok(Some(info)),
Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
// Table may not exist if migrations haven't run yet
Err(rusqlite::Error::SqliteFailure(_, Some(ref msg))) if msg.contains("no such table") => {
Ok(None)
}
Err(e) => Err(e.into()),
}
}
pub fn print_cron_status(info: &CronStatusInfo) {
if info.status.installed {
println!(
" {} lore-sync is installed in crontab",
Theme::success().render("Installed")
);
if let Some(interval) = info.status.interval_minutes {
println!(
" {} every {} minutes",
Theme::dim().render("interval:"),
interval
);
}
if let Some(ref binary) = info.status.binary_path {
let label = if info.status.binary_mismatch {
Theme::warning().render("binary:")
} else {
Theme::dim().render("binary:")
};
println!(" {label} {binary}");
if info.status.binary_mismatch
&& let Some(ref current) = info.status.current_binary
{
println!(
" {}",
Theme::warning().render(&format!(" current binary is {current} (mismatch!)"))
);
}
}
if let Some(ref log) = info.status.log_path {
println!(" {} {}", Theme::dim().render("log:"), log.display());
}
} else {
println!(
" {} lore-sync is not installed in crontab",
Theme::dim().render("Not installed:")
);
println!(
" {} lore cron install",
Theme::dim().render("install with:")
);
}
if let Some(ref last) = info.last_sync {
println!(
" {} {} ({})",
Theme::dim().render("last sync:"),
last.started_at_iso,
last.status
);
}
println!();
}
#[derive(Serialize)]
struct CronStatusJson {
ok: bool,
data: CronStatusData,
meta: RobotMeta,
}
#[derive(Serialize)]
struct CronStatusData {
installed: bool,
#[serde(skip_serializing_if = "Option::is_none")]
interval_minutes: Option<u32>,
#[serde(skip_serializing_if = "Option::is_none")]
binary_path: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
current_binary: Option<String>,
binary_mismatch: bool,
#[serde(skip_serializing_if = "Option::is_none")]
log_path: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
cron_entry: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
last_sync_at: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
last_sync_status: Option<String>,
}
pub fn print_cron_status_json(info: &CronStatusInfo, elapsed_ms: u64) {
let output = CronStatusJson {
ok: true,
data: CronStatusData {
installed: info.status.installed,
interval_minutes: info.status.interval_minutes,
binary_path: info.status.binary_path.clone(),
current_binary: info.status.current_binary.clone(),
binary_mismatch: info.status.binary_mismatch,
log_path: info
.status
.log_path
.as_ref()
.map(|p| p.display().to_string()),
cron_entry: info.status.cron_entry.clone(),
last_sync_at: info.last_sync.as_ref().map(|s| s.started_at_iso.clone()),
last_sync_status: info.last_sync.as_ref().map(|s| s.status.clone()),
},
meta: RobotMeta { elapsed_ms },
};
if let Ok(json) = serde_json::to_string(&output) {
println!("{json}");
}
}

View File

@@ -137,5 +137,8 @@ pub fn print_embed_json(result: &EmbedCommandResult, elapsed_ms: u64) {
data: result,
meta: RobotMeta { elapsed_ms },
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}

View File

@@ -1,5 +1,7 @@
use serde::Serialize;
use tracing::info;
use crate::Config;
use crate::cli::render::{self, Icons, Theme};
use crate::core::db::create_connection;
@@ -46,6 +48,9 @@ pub struct FileHistoryResult {
pub discussions: Vec<FileDiscussion>,
pub total_mrs: usize,
pub paths_searched: usize,
/// Diagnostic hints explaining why results may be empty.
#[serde(skip_serializing_if = "Vec::is_empty")]
pub hints: Vec<String>,
}
/// Run the file-history query.
@@ -77,6 +82,11 @@ pub fn run_file_history(
let paths_searched = all_paths.len();
info!(
paths = paths_searched,
renames_followed, "file-history: resolved {} path(s) for '{}'", paths_searched, path
);
// Build placeholders for IN clause
let placeholders: Vec<String> = (0..all_paths.len())
.map(|i| format!("?{}", i + 2))
@@ -135,14 +145,31 @@ pub fn run_file_history(
web_url: row.get(8)?,
})
})?
-.filter_map(std::result::Result::ok)
-.collect();
+.collect::<std::result::Result<Vec<_>, _>>()?;
let total_mrs = merge_requests.len();
info!(
mr_count = total_mrs,
"file-history: found {} MR(s) touching '{}'", total_mrs, path
);
// Optionally fetch DiffNote discussions on this file
let discussions = if include_discussions && !merge_requests.is_empty() {
-fetch_file_discussions(&conn, &all_paths, project_id)?
+let discs = fetch_file_discussions(&conn, &all_paths, project_id)?;
+info!(
+    discussion_count = discs.len(),
+    "file-history: found {} discussion(s)",
+    discs.len()
+);
+discs
} else {
Vec::new()
};
// Build diagnostic hints when no results found
let hints = if total_mrs == 0 {
build_file_history_hints(&conn, project_id, &all_paths)?
} else {
Vec::new()
};
@@ -155,6 +182,7 @@ pub fn run_file_history(
discussions,
total_mrs,
paths_searched,
hints,
})
}
@@ -179,8 +207,7 @@ fn fetch_file_discussions(
JOIN discussions d ON d.id = n.discussion_id \
WHERE n.position_new_path IN ({in_clause}) {project_filter} \
AND n.is_system = 0 \
-ORDER BY n.created_at DESC \
-LIMIT 50"
+ORDER BY n.created_at DESC"
);
let mut stmt = conn.prepare(&sql)?;
@@ -210,12 +237,57 @@ fn fetch_file_discussions(
created_at_iso: ms_to_iso(created_at),
})
})?
-.filter_map(std::result::Result::ok)
-.collect();
+.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(discussions)
}
/// Build diagnostic hints explaining why a file-history query returned no results.
fn build_file_history_hints(
conn: &rusqlite::Connection,
project_id: Option<i64>,
paths: &[String],
) -> Result<Vec<String>> {
let mut hints = Vec::new();
// Check if mr_file_changes has ANY rows for this project
let has_file_changes: bool = if let Some(pid) = project_id {
conn.query_row(
"SELECT EXISTS(SELECT 1 FROM mr_file_changes WHERE project_id = ?1 LIMIT 1)",
rusqlite::params![pid],
|row| row.get(0),
)?
} else {
conn.query_row(
"SELECT EXISTS(SELECT 1 FROM mr_file_changes LIMIT 1)",
[],
|row| row.get(0),
)?
};
if !has_file_changes {
hints.push(
"No MR file changes have been synced yet. Run 'lore sync' to fetch file change data."
.to_string(),
);
return Ok(hints);
}
// File changes exist but none match these paths
let path_list = paths
.iter()
.map(|p| format!("'{p}'"))
.collect::<Vec<_>>()
.join(", ");
hints.push(format!(
"Searched paths [{}] were not found in MR file changes. \
The file may predate the sync window or use a different path.",
path_list
));
Ok(hints)
}
// ── Human output ────────────────────────────────────────────────────────────
pub fn print_file_history(result: &FileHistoryResult) {
@@ -250,10 +322,16 @@ pub fn print_file_history(result: &FileHistoryResult) {
Icons::info(),
Theme::dim().render("No merge requests found touching this file.")
);
+if !result.renames_followed && result.rename_chain.len() == 1 {
println!(
-" {}",
-Theme::dim().render("Hint: Run 'lore sync' to fetch MR file changes.")
+" {} Searched: {}",
+Icons::info(),
+Theme::dim().render(&result.rename_chain[0])
);
+}
+for hint in &result.hints {
+println!(" {} {}", Icons::info(), Theme::dim().render(hint));
+}
println!();
return;
}
@@ -327,6 +405,7 @@ pub fn print_file_history_json(result: &FileHistoryResult, elapsed_ms: u64) {
"total_mrs": result.total_mrs,
"renames_followed": result.renames_followed,
"paths_searched": result.paths_searched,
"hints": if result.hints.is_empty() { None } else { Some(&result.hints) },
}
});

View File

@@ -259,7 +259,10 @@ pub fn print_generate_docs_json(result: &GenerateDocsResult, elapsed_ms: u64) {
},
meta: RobotMeta { elapsed_ms },
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}
#[cfg(test)]

View File

@@ -982,7 +982,10 @@ pub fn print_ingest_summary_json(result: &IngestResult, elapsed_ms: u64) {
meta: RobotMeta { elapsed_ms },
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}
pub fn print_ingest_summary(result: &IngestResult) {
@@ -1109,5 +1112,8 @@ pub fn print_dry_run_preview_json(preview: &DryRunPreview) {
data: preview.clone(),
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}

View File

@@ -980,59 +980,6 @@ pub fn print_list_notes_json(result: &NoteListResult, elapsed_ms: u64, fields: O
}
}
-pub fn print_list_notes_jsonl(result: &NoteListResult) {
-for note in &result.notes {
-let json_row = NoteListRowJson::from(note);
-match serde_json::to_string(&json_row) {
-Ok(json) => println!("{json}"),
-Err(e) => eprintln!("Error serializing to JSON: {e}"),
-}
-}
-}
-/// Escape a field for RFC 4180 CSV: quote fields containing commas, quotes, or newlines.
-fn csv_escape(field: &str) -> String {
-if field.contains(',') || field.contains('"') || field.contains('\n') || field.contains('\r') {
-let escaped = field.replace('"', "\"\"");
-format!("\"{escaped}\"")
-} else {
-field.to_string()
-}
-}
-pub fn print_list_notes_csv(result: &NoteListResult) {
-println!(
-"id,gitlab_id,author_username,body,note_type,is_system,created_at,updated_at,position_new_path,position_new_line,noteable_type,parent_iid,project_path"
-);
-for note in &result.notes {
-let body = note.body.as_deref().unwrap_or("");
-let note_type = note.note_type.as_deref().unwrap_or("");
-let path = note.position_new_path.as_deref().unwrap_or("");
-let line = note
-.position_new_line
-.map_or(String::new(), |l| l.to_string());
-let noteable = note.noteable_type.as_deref().unwrap_or("");
-let parent_iid = note.parent_iid.map_or(String::new(), |i| i.to_string());
-println!(
-"{},{},{},{},{},{},{},{},{},{},{},{},{}",
-note.id,
-note.gitlab_id,
-csv_escape(&note.author_username),
-csv_escape(body),
-csv_escape(note_type),
-note.is_system,
-note.created_at,
-note.updated_at,
-csv_escape(path),
-line,
-csv_escape(noteable),
-parent_iid,
-csv_escape(&note.project_path),
-);
-}
-}
// ---------------------------------------------------------------------------
// Note query layer
// ---------------------------------------------------------------------------

View File

@@ -1269,60 +1269,6 @@ fn test_truncate_note_body() {
assert!(result.ends_with("..."));
}
-#[test]
-fn test_csv_escape_basic() {
-assert_eq!(csv_escape("simple"), "simple");
-assert_eq!(csv_escape("has,comma"), "\"has,comma\"");
-assert_eq!(csv_escape("has\"quote"), "\"has\"\"quote\"");
-assert_eq!(csv_escape("has\nnewline"), "\"has\nnewline\"");
-}
-#[test]
-fn test_csv_output_basic() {
-let result = NoteListResult {
-notes: vec![NoteListRow {
-id: 1,
-gitlab_id: 100,
-author_username: "alice".to_string(),
-body: Some("Hello, world".to_string()),
-note_type: Some("DiffNote".to_string()),
-is_system: false,
-created_at: 1_000_000,
-updated_at: 2_000_000,
-position_new_path: Some("src/main.rs".to_string()),
-position_new_line: Some(42),
-position_old_path: None,
-position_old_line: None,
-resolvable: true,
-resolved: false,
-resolved_by: None,
-noteable_type: Some("Issue".to_string()),
-parent_iid: Some(7),
-parent_title: Some("Test issue".to_string()),
-project_path: "group/project".to_string(),
-}],
-total_count: 1,
-};
-// Verify csv_escape handles the comma in body correctly
-let body = result.notes[0].body.as_deref().unwrap();
-let escaped = csv_escape(body);
-assert_eq!(escaped, "\"Hello, world\"");
-// Verify the formatting helpers
-assert_eq!(
-format_note_type(result.notes[0].note_type.as_deref()),
-"Diff"
-);
-assert_eq!(
-format_note_parent(
-result.notes[0].noteable_type.as_deref(),
-result.notes[0].parent_iid,
-),
-"Issue #7"
-);
-}
#[test]
fn test_jsonl_output_one_per_line() {
let result = NoteListResult {

View File

@@ -1,5 +1,7 @@
pub mod auth_test;
pub mod count;
#[cfg(unix)]
pub mod cron;
pub mod doctor;
pub mod drift;
pub mod embed;
@@ -22,6 +24,12 @@ pub use count::{
print_count, print_count_json, print_event_count, print_event_count_json, run_count,
run_count_events,
};
#[cfg(unix)]
pub use cron::{
print_cron_install, print_cron_install_json, print_cron_status, print_cron_status_json,
print_cron_uninstall, print_cron_uninstall_json, run_cron_install, run_cron_status,
run_cron_uninstall,
};
pub use doctor::{DoctorChecks, print_doctor_results, run_doctor};
pub use drift::{DriftResponse, print_drift_human, print_drift_json, run_drift};
pub use embed::{print_embed, print_embed_json, run_embed};
@@ -35,8 +43,7 @@ pub use init::{InitInputs, InitOptions, InitResult, run_init};
pub use list::{
ListFilters, MrListFilters, NoteListFilters, open_issue_in_browser, open_mr_in_browser,
print_list_issues, print_list_issues_json, print_list_mrs, print_list_mrs_json,
-print_list_notes, print_list_notes_csv, print_list_notes_json, print_list_notes_jsonl,
-query_notes, run_list_issues, run_list_mrs,
+print_list_notes, print_list_notes_json, query_notes, run_list_issues, run_list_mrs,
};
pub use search::{
SearchCliFilters, SearchResponse, print_search_results, print_search_results_json, run_search,

View File

@@ -439,5 +439,8 @@ pub fn print_search_results_json(
let expanded = crate::cli::robot::expand_fields_preset(f, "search");
crate::cli::robot::filter_fields(&mut value, "results", &expanded);
}
-println!("{}", serde_json::to_string(&value).unwrap());
+match serde_json::to_string(&value) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}

View File

@@ -585,5 +585,8 @@ pub fn print_stats_json(result: &StatsResult, elapsed_ms: u64) {
},
meta: RobotMeta { elapsed_ms },
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}

View File

@@ -746,7 +746,10 @@ pub fn print_sync_json(result: &SyncResult, elapsed_ms: u64, metrics: Option<&Me
stages,
},
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}
#[derive(Debug, Default, Serialize)]
@@ -880,7 +883,10 @@ pub fn print_sync_dry_run_json(result: &SyncDryRunResult) {
},
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}
#[cfg(test)]

View File

@@ -268,7 +268,10 @@ pub fn print_sync_status_json(result: &SyncStatusResult, elapsed_ms: u64) {
meta: RobotMeta { elapsed_ms },
};
-println!("{}", serde_json::to_string(&output).unwrap());
+match serde_json::to_string(&output) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}
pub fn print_sync_status(result: &SyncStatusResult) {

View File

@@ -374,7 +374,10 @@ pub fn print_timeline_json_with_meta(
let expanded = crate::cli::robot::expand_fields_preset(f, "timeline");
crate::cli::robot::filter_fields(&mut value, "events", &expanded);
}
-println!("{}", serde_json::to_string(&value).unwrap());
+match serde_json::to_string(&value) {
+    Ok(json) => println!("{json}"),
+    Err(e) => eprintln!("Error serializing to JSON: {e}"),
+}
}
#[derive(Serialize)]

View File

@@ -50,17 +50,23 @@ pub fn print_trace(result: &TraceResult) {
);
}
+// Show searched paths when there are renames but no chains
if result.trace_chains.is_empty() {
println!(
"\n {} {}",
Icons::info(),
Theme::dim().render("No trace chains found for this file.")
);
+if !result.renames_followed && result.resolved_paths.len() == 1 {
println!(
-" {}",
-Theme::dim()
-.render("Hint: Run 'lore sync' to fetch MR file changes and cross-references.")
+" {} Searched: {}",
+Icons::info(),
+Theme::dim().render(&result.resolved_paths[0])
);
+}
+for hint in &result.hints {
+println!(" {} {}", Icons::info(), Theme::dim().render(hint));
+}
println!();
return;
}
@@ -195,6 +201,7 @@ pub fn print_trace_json(result: &TraceResult, elapsed_ms: u64, line_requested: O
"elapsed_ms": elapsed_ms,
"total_chains": result.total_chains,
"renames_followed": result.renames_followed,
"hints": if result.hints.is_empty() { None } else { Some(&result.hints) },
}
});

File diff suppressed because it is too large

View File

@@ -0,0 +1,299 @@
use rusqlite::Connection;
use crate::cli::render::{self, Theme};
use crate::core::error::Result;
use crate::core::time::ms_to_iso;
use super::types::*;
pub(super) fn query_active(
conn: &Connection,
project_id: Option<i64>,
since_ms: i64,
limit: usize,
include_closed: bool,
) -> Result<ActiveResult> {
let limit_plus_one = (limit + 1) as i64;
// State filter for open-entities-only (default behavior)
let state_joins = if include_closed {
""
} else {
" LEFT JOIN issues i ON d.issue_id = i.id
LEFT JOIN merge_requests m ON d.merge_request_id = m.id"
};
let state_filter = if include_closed {
""
} else {
" AND (i.id IS NULL OR i.state = 'opened')
AND (m.id IS NULL OR m.state = 'opened')"
};
// Total unresolved count -- conditionally built
let total_sql_global = format!(
"SELECT COUNT(*) FROM discussions d
{state_joins}
WHERE d.resolvable = 1 AND d.resolved = 0
AND d.last_note_at >= ?1
{state_filter}"
);
let total_sql_scoped = format!(
"SELECT COUNT(*) FROM discussions d
{state_joins}
WHERE d.resolvable = 1 AND d.resolved = 0
AND d.last_note_at >= ?1
AND d.project_id = ?2
{state_filter}"
);
let total_unresolved_in_window: u32 = match project_id {
None => conn.query_row(&total_sql_global, rusqlite::params![since_ms], |row| {
row.get(0)
})?,
Some(pid) => {
conn.query_row(&total_sql_scoped, rusqlite::params![since_ms, pid], |row| {
row.get(0)
})?
}
};
// Active discussions with context -- conditionally built SQL
let sql_global = format!(
"
WITH picked AS (
SELECT d.id, d.noteable_type, d.issue_id, d.merge_request_id,
d.project_id, d.last_note_at
FROM discussions d
{state_joins}
WHERE d.resolvable = 1 AND d.resolved = 0
AND d.last_note_at >= ?1
{state_filter}
ORDER BY d.last_note_at DESC
LIMIT ?2
),
note_counts AS (
SELECT
n.discussion_id,
COUNT(*) AS note_count
FROM notes n
JOIN picked p ON p.id = n.discussion_id
WHERE n.is_system = 0
GROUP BY n.discussion_id
),
participants AS (
SELECT
x.discussion_id,
GROUP_CONCAT(x.author_username, X'1F') AS participants
FROM (
SELECT DISTINCT n.discussion_id, n.author_username
FROM notes n
JOIN picked p ON p.id = n.discussion_id
WHERE n.is_system = 0 AND n.author_username IS NOT NULL
) x
GROUP BY x.discussion_id
)
SELECT
p.id AS discussion_id,
p.noteable_type,
COALESCE(i.iid, m.iid) AS entity_iid,
COALESCE(i.title, m.title) AS entity_title,
proj.path_with_namespace,
p.last_note_at,
COALESCE(nc.note_count, 0) AS note_count,
COALESCE(pa.participants, '') AS participants
FROM picked p
JOIN projects proj ON p.project_id = proj.id
LEFT JOIN issues i ON p.issue_id = i.id
LEFT JOIN merge_requests m ON p.merge_request_id = m.id
LEFT JOIN note_counts nc ON nc.discussion_id = p.id
LEFT JOIN participants pa ON pa.discussion_id = p.id
ORDER BY p.last_note_at DESC
"
);
let sql_scoped = format!(
"
WITH picked AS (
SELECT d.id, d.noteable_type, d.issue_id, d.merge_request_id,
d.project_id, d.last_note_at
FROM discussions d
{state_joins}
WHERE d.resolvable = 1 AND d.resolved = 0
AND d.last_note_at >= ?1
AND d.project_id = ?2
{state_filter}
ORDER BY d.last_note_at DESC
LIMIT ?3
),
note_counts AS (
SELECT
n.discussion_id,
COUNT(*) AS note_count
FROM notes n
JOIN picked p ON p.id = n.discussion_id
WHERE n.is_system = 0
GROUP BY n.discussion_id
),
participants AS (
SELECT
x.discussion_id,
GROUP_CONCAT(x.author_username, X'1F') AS participants
FROM (
SELECT DISTINCT n.discussion_id, n.author_username
FROM notes n
JOIN picked p ON p.id = n.discussion_id
WHERE n.is_system = 0 AND n.author_username IS NOT NULL
) x
GROUP BY x.discussion_id
)
SELECT
p.id AS discussion_id,
p.noteable_type,
COALESCE(i.iid, m.iid) AS entity_iid,
COALESCE(i.title, m.title) AS entity_title,
proj.path_with_namespace,
p.last_note_at,
COALESCE(nc.note_count, 0) AS note_count,
COALESCE(pa.participants, '') AS participants
FROM picked p
JOIN projects proj ON p.project_id = proj.id
LEFT JOIN issues i ON p.issue_id = i.id
LEFT JOIN merge_requests m ON p.merge_request_id = m.id
LEFT JOIN note_counts nc ON nc.discussion_id = p.id
LEFT JOIN participants pa ON pa.discussion_id = p.id
ORDER BY p.last_note_at DESC
"
);
// Row-mapping closure shared between both variants
let map_row = |row: &rusqlite::Row| -> rusqlite::Result<ActiveDiscussion> {
let noteable_type: String = row.get(1)?;
let entity_type = if noteable_type == "MergeRequest" {
"MR"
} else {
"Issue"
};
let participants_csv: Option<String> = row.get(7)?;
// Sort participants for deterministic output -- GROUP_CONCAT order is undefined
let mut participants: Vec<String> = participants_csv
.as_deref()
.filter(|s| !s.is_empty())
.map(|csv| csv.split('\x1F').map(String::from).collect())
.unwrap_or_default();
participants.sort();
const MAX_PARTICIPANTS: usize = 50;
let participants_total = participants.len() as u32;
let participants_truncated = participants.len() > MAX_PARTICIPANTS;
if participants_truncated {
participants.truncate(MAX_PARTICIPANTS);
}
Ok(ActiveDiscussion {
discussion_id: row.get(0)?,
entity_type: entity_type.to_string(),
entity_iid: row.get(2)?,
entity_title: row.get(3)?,
project_path: row.get(4)?,
last_note_at: row.get(5)?,
note_count: row.get(6)?,
participants,
participants_total,
participants_truncated,
})
};
// Select variant first, then prepare exactly one statement
let discussions: Vec<ActiveDiscussion> = match project_id {
None => {
let mut stmt = conn.prepare_cached(&sql_global)?;
stmt.query_map(rusqlite::params![since_ms, limit_plus_one], &map_row)?
.collect::<std::result::Result<Vec<_>, _>>()?
}
Some(pid) => {
let mut stmt = conn.prepare_cached(&sql_scoped)?;
stmt.query_map(rusqlite::params![since_ms, pid, limit_plus_one], &map_row)?
.collect::<std::result::Result<Vec<_>, _>>()?
}
};
let truncated = discussions.len() > limit;
let discussions: Vec<ActiveDiscussion> = discussions.into_iter().take(limit).collect();
Ok(ActiveResult {
discussions,
total_unresolved_in_window,
truncated,
})
}
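query_active requests `limit + 1` rows and uses the presence of the extra row as its truncation flag, avoiding a second COUNT query. The pattern in isolation (a minimal sketch; the helper name is hypothetical):

```rust
/// Fetch-one-extra pattern: request limit + 1 rows, then treat the extra
/// row's presence as the truncation flag and drop it from the output.
fn take_with_truncation<T>(rows: Vec<T>, limit: usize) -> (Vec<T>, bool) {
    let truncated = rows.len() > limit;
    let kept = rows.into_iter().take(limit).collect();
    (kept, truncated)
}

fn main() {
    // 4 rows fetched with limit 3 => truncated, extra row dropped.
    let (kept, truncated) = take_with_truncation(vec![1, 2, 3, 4], 3);
    assert_eq!(kept, vec![1, 2, 3]);
    assert!(truncated);
    // Exactly at the limit => not truncated.
    let (kept, truncated) = take_with_truncation(vec![1, 2, 3], 3);
    assert_eq!(kept.len(), 3);
    assert!(!truncated);
}
```

The SQL `LIMIT` is bound to `limit_plus_one` for the same reason, so the database never returns more than one surplus row.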
pub(super) fn print_active_human(r: &ActiveResult, project_path: Option<&str>) {
println!();
println!(
"{}",
Theme::bold().render(&format!(
"Active Discussions ({} unresolved in window)",
r.total_unresolved_in_window
))
);
println!("{}", "\u{2500}".repeat(60));
super::print_scope_hint(project_path);
println!();
if r.discussions.is_empty() {
println!(
" {}",
Theme::dim().render("No active unresolved discussions in this time window.")
);
println!();
return;
}
for disc in &r.discussions {
let prefix = if disc.entity_type == "MR" { "!" } else { "#" };
let participants_str = disc
.participants
.iter()
.map(|p| format!("@{p}"))
.collect::<Vec<_>>()
.join(", ");
println!(
" {} {} {} {} notes {}",
Theme::info().render(&format!("{prefix}{}", disc.entity_iid)),
render::truncate(&disc.entity_title, 40),
Theme::dim().render(&render::format_relative_time(disc.last_note_at)),
disc.note_count,
Theme::dim().render(&disc.project_path),
);
if !participants_str.is_empty() {
println!(" {}", Theme::dim().render(&participants_str));
}
}
if r.truncated {
println!(
" {}",
Theme::dim().render("(results truncated; rerun with a higher --limit)")
);
}
println!();
}
pub(super) fn active_to_json(r: &ActiveResult) -> serde_json::Value {
serde_json::json!({
"total_unresolved_in_window": r.total_unresolved_in_window,
"truncated": r.truncated,
"discussions": r.discussions.iter().map(|d| serde_json::json!({
"discussion_id": d.discussion_id,
"entity_type": d.entity_type,
"entity_iid": d.entity_iid,
"entity_title": d.entity_title,
"project_path": d.project_path,
"last_note_at": ms_to_iso(d.last_note_at),
"note_count": d.note_count,
"participants": d.participants,
"participants_total": d.participants_total,
"participants_truncated": d.participants_truncated,
})).collect::<Vec<_>>(),
})
}
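The participants column is aggregated with `GROUP_CONCAT(..., X'1F')` (the ASCII unit separator) and then split and sorted on the Rust side, since SQLite leaves GROUP_CONCAT order unspecified. A standalone sketch of that split-and-sort step (helper name hypothetical):

```rust
/// Split a GROUP_CONCAT'd participant list on the ASCII unit separator
/// (0x1F) and sort for deterministic output.
fn split_participants(raw: Option<&str>) -> Vec<String> {
    let mut participants: Vec<String> = raw
        .filter(|s| !s.is_empty())
        .map(|s| s.split('\x1F').map(String::from).collect())
        .unwrap_or_default();
    participants.sort();
    participants
}

fn main() {
    let raw = Some("zoe\u{1F}alice\u{1F}bob");
    assert_eq!(split_participants(raw), vec!["alice", "bob", "zoe"]);
    // Empty string and NULL both collapse to an empty list.
    assert!(split_participants(Some("")).is_empty());
    assert!(split_participants(None).is_empty());
}
```

Using 0x1F rather than a comma avoids ambiguity if a separator-like character ever appears in a username.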


@@ -0,0 +1,839 @@
use std::collections::{HashMap, HashSet};
use rusqlite::Connection;
use crate::cli::render::{self, Icons, Theme};
use crate::core::config::ScoringConfig;
use crate::core::error::Result;
use crate::core::path_resolver::{PathQuery, build_path_query};
use crate::core::time::ms_to_iso;
use super::types::*;
pub(super) fn half_life_decay(elapsed_ms: i64, half_life_days: u32) -> f64 {
let days = (elapsed_ms as f64 / 86_400_000.0).max(0.0);
let hl = f64::from(half_life_days);
if hl <= 0.0 {
return 0.0;
}
2.0_f64.powf(-days / hl)
}
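The decay helper maps elapsed time to a weight in (0, 1] that halves every `half_life_days`. A self-contained sketch of the same formula with its boundary behavior:

```rust
/// Exponential half-life decay: full weight at t = 0, halved after each
/// half-life; a non-positive half-life zeroes the signal entirely.
fn half_life_decay(elapsed_ms: i64, half_life_days: u32) -> f64 {
    let days = (elapsed_ms as f64 / 86_400_000.0).max(0.0);
    let hl = f64::from(half_life_days);
    if hl <= 0.0 {
        return 0.0;
    }
    2.0_f64.powf(-days / hl)
}

fn main() {
    // Fresh activity keeps full weight.
    assert!((half_life_decay(0, 30) - 1.0).abs() < 1e-9);
    // After exactly one half-life (30 days), weight drops to 0.5.
    let one_hl_ms: i64 = 30 * 86_400_000;
    assert!((half_life_decay(one_hl_ms, 30) - 0.5).abs() < 1e-9);
    // After two half-lives, 0.25.
    assert!((half_life_decay(2 * one_hl_ms, 30) - 0.25).abs() < 1e-9);
    // Negative elapsed time (clock skew) clamps to full weight, not >1.
    assert!((half_life_decay(-1_000, 30) - 1.0).abs() < 1e-9);
}
```

The clamp on `days` means future-dated timestamps never inflate a score above the undecayed weight.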
// ─── Query: Expert Mode ─────────────────────────────────────────────────────
#[allow(clippy::too_many_arguments)]
pub(super) fn query_expert(
conn: &Connection,
path: &str,
project_id: Option<i64>,
since_ms: i64,
as_of_ms: i64,
limit: usize,
scoring: &ScoringConfig,
detail: bool,
explain_score: bool,
include_bots: bool,
) -> Result<ExpertResult> {
let pq = build_path_query(conn, path, project_id)?;
let sql = build_expert_sql_v2(pq.is_prefix);
let mut stmt = conn.prepare_cached(&sql)?;
// Params: ?1=path, ?2=since_ms, ?3=project_id, ?4=as_of_ms,
// ?5=closed_mr_multiplier, ?6=reviewer_min_note_chars
let rows = stmt.query_map(
rusqlite::params![
pq.value,
since_ms,
project_id,
as_of_ms,
scoring.closed_mr_multiplier,
scoring.reviewer_min_note_chars,
],
|row| {
Ok(SignalRow {
username: row.get(0)?,
signal: row.get(1)?,
mr_id: row.get(2)?,
qty: row.get(3)?,
ts: row.get(4)?,
state_mult: row.get(5)?,
})
},
)?;
// Per-user accumulator keyed by username.
let mut accum: HashMap<String, UserAccum> = HashMap::new();
for row_result in rows {
let r = row_result?;
let entry = accum
.entry(r.username.clone())
.or_insert_with(|| UserAccum {
contributions: Vec::new(),
last_seen_ms: 0,
mr_ids_author: HashSet::new(),
mr_ids_reviewer: HashSet::new(),
note_count: 0,
});
if r.ts > entry.last_seen_ms {
entry.last_seen_ms = r.ts;
}
match r.signal.as_str() {
"diffnote_author" | "file_author" => {
entry.mr_ids_author.insert(r.mr_id);
}
"file_reviewer_participated" | "file_reviewer_assigned" => {
entry.mr_ids_reviewer.insert(r.mr_id);
}
"note_group" => {
entry.note_count += r.qty as u32;
// DiffNote reviewers are also reviewer activity.
entry.mr_ids_reviewer.insert(r.mr_id);
}
_ => {}
}
entry.contributions.push(Contribution {
signal: r.signal,
mr_id: r.mr_id,
qty: r.qty,
ts: r.ts,
state_mult: r.state_mult,
});
}
// Bot filtering: exclude configured bot usernames (case-insensitive).
if !include_bots && !scoring.excluded_usernames.is_empty() {
let excluded: HashSet<String> = scoring
.excluded_usernames
.iter()
.map(|u| u.to_lowercase())
.collect();
accum.retain(|username, _| !excluded.contains(&username.to_lowercase()));
}
// Compute decayed scores with deterministic ordering.
let mut scored: Vec<ScoredUser> = accum
.into_iter()
.map(|(username, mut ua)| {
// Sort contributions by mr_id ASC for deterministic f64 summation.
ua.contributions.sort_by_key(|c| c.mr_id);
let mut comp_author = 0.0_f64;
let mut comp_reviewer_participated = 0.0_f64;
let mut comp_reviewer_assigned = 0.0_f64;
let mut comp_notes = 0.0_f64;
for c in &ua.contributions {
let elapsed = as_of_ms - c.ts;
match c.signal.as_str() {
"diffnote_author" | "file_author" => {
let decay = half_life_decay(elapsed, scoring.author_half_life_days);
comp_author += scoring.author_weight as f64 * decay * c.state_mult;
}
"file_reviewer_participated" => {
let decay = half_life_decay(elapsed, scoring.reviewer_half_life_days);
comp_reviewer_participated +=
scoring.reviewer_weight as f64 * decay * c.state_mult;
}
"file_reviewer_assigned" => {
let decay =
half_life_decay(elapsed, scoring.reviewer_assignment_half_life_days);
comp_reviewer_assigned +=
scoring.reviewer_assignment_weight as f64 * decay * c.state_mult;
}
"note_group" => {
let decay = half_life_decay(elapsed, scoring.note_half_life_days);
// Diminishing returns: log2(1 + count) per MR.
let note_value = (1.0 + c.qty as f64).log2();
comp_notes += scoring.note_bonus as f64 * note_value * decay * c.state_mult;
}
_ => {}
}
}
let raw_score =
comp_author + comp_reviewer_participated + comp_reviewer_assigned + comp_notes;
ScoredUser {
username,
raw_score,
components: ScoreComponents {
author: comp_author,
reviewer_participated: comp_reviewer_participated,
reviewer_assigned: comp_reviewer_assigned,
notes: comp_notes,
},
accum: ua,
}
})
.collect();
// Sort: raw_score DESC, last_seen DESC, username ASC (deterministic tiebreaker).
scored.sort_by(|a, b| {
b.raw_score
.partial_cmp(&a.raw_score)
.unwrap_or(std::cmp::Ordering::Equal)
.then_with(|| b.accum.last_seen_ms.cmp(&a.accum.last_seen_ms))
.then_with(|| a.username.cmp(&b.username))
});
let truncated = scored.len() > limit;
scored.truncate(limit);
// Build Expert structs with MR refs.
let mut experts: Vec<Expert> = scored
.into_iter()
.map(|su| {
let mut mr_refs = build_mr_refs_for_user(conn, &su.accum);
mr_refs.sort();
let mr_refs_total = mr_refs.len() as u32;
let mr_refs_truncated = mr_refs.len() > MAX_MR_REFS_PER_USER;
if mr_refs_truncated {
mr_refs.truncate(MAX_MR_REFS_PER_USER);
}
Expert {
username: su.username,
score: su.raw_score.round() as i64,
score_raw: if explain_score {
Some(su.raw_score)
} else {
None
},
components: if explain_score {
Some(su.components)
} else {
None
},
review_mr_count: su.accum.mr_ids_reviewer.len() as u32,
review_note_count: su.accum.note_count,
author_mr_count: su.accum.mr_ids_author.len() as u32,
last_seen_ms: su.accum.last_seen_ms,
mr_refs,
mr_refs_total,
mr_refs_truncated,
details: None,
}
})
.collect();
// Populate per-MR detail when --detail is requested
if detail && !experts.is_empty() {
let details_map = query_expert_details(conn, &pq, &experts, since_ms, project_id)?;
for expert in &mut experts {
expert.details = details_map.get(&expert.username).cloned();
}
}
Ok(ExpertResult {
path_query: if pq.is_prefix {
// Use raw input (unescaped) for display — pq.value has LIKE escaping.
path.trim_end_matches('/').to_string()
} else {
// For exact matches (including suffix-resolved), show the resolved path.
pq.value.clone()
},
path_match: if pq.is_prefix { "prefix" } else { "exact" }.to_string(),
experts,
truncated,
})
}
struct SignalRow {
username: String,
signal: String,
mr_id: i64,
qty: i64,
ts: i64,
state_mult: f64,
}
/// Per-user signal accumulator used during Rust-side scoring.
struct UserAccum {
contributions: Vec<Contribution>,
last_seen_ms: i64,
mr_ids_author: HashSet<i64>,
mr_ids_reviewer: HashSet<i64>,
note_count: u32,
}
/// A single contribution to a user's score (one signal row).
struct Contribution {
signal: String,
mr_id: i64,
qty: i64,
ts: i64,
state_mult: f64,
}
/// Intermediate scored user before building Expert structs.
struct ScoredUser {
username: String,
raw_score: f64,
components: ScoreComponents,
accum: UserAccum,
}
/// Build MR refs (e.g. "group/project!123") for a user from their accumulated MR IDs.
fn build_mr_refs_for_user(conn: &Connection, ua: &UserAccum) -> Vec<String> {
let all_mr_ids: HashSet<i64> = ua
.mr_ids_author
.iter()
.chain(ua.mr_ids_reviewer.iter())
.copied()
.chain(ua.contributions.iter().map(|c| c.mr_id))
.collect();
if all_mr_ids.is_empty() {
return Vec::new();
}
let placeholders: Vec<String> = (1..=all_mr_ids.len()).map(|i| format!("?{i}")).collect();
let sql = format!(
"SELECT p.path_with_namespace || '!' || CAST(m.iid AS TEXT)
FROM merge_requests m
JOIN projects p ON m.project_id = p.id
WHERE m.id IN ({})",
placeholders.join(",")
);
let mut stmt = match conn.prepare(&sql) {
Ok(s) => s,
Err(_) => return Vec::new(),
};
let mut mr_ids_vec: Vec<i64> = all_mr_ids.into_iter().collect();
mr_ids_vec.sort_unstable();
let params: Vec<&dyn rusqlite::types::ToSql> = mr_ids_vec
.iter()
.map(|id| id as &dyn rusqlite::types::ToSql)
.collect();
stmt.query_map(&*params, |row| row.get::<_, String>(0))
.map(|rows| rows.filter_map(|r| r.ok()).collect())
.unwrap_or_default()
}
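build_mr_refs_for_user builds its `IN (...)` clause from numbered placeholders because the number of MR ids is only known at runtime. The placeholder construction on its own (a sketch; the helper name is hypothetical):

```rust
/// Build "?1,?2,...,?n" for a dynamically sized SQL IN clause.
fn numbered_placeholders(n: usize) -> String {
    (1..=n)
        .map(|i| format!("?{i}"))
        .collect::<Vec<_>>()
        .join(",")
}

fn main() {
    assert_eq!(numbered_placeholders(3), "?1,?2,?3");
    assert_eq!(numbered_placeholders(1), "?1");
    // Zero ids yields an empty clause, which is why the caller
    // short-circuits with an early return before building SQL.
    assert_eq!(numbered_placeholders(0), "");
    let sql = format!(
        "SELECT id FROM merge_requests WHERE id IN ({})",
        numbered_placeholders(2)
    );
    assert!(sql.ends_with("IN (?1,?2)"));
}
```

Binding values positionally against the generated placeholders keeps the query parameterized, so ids never get interpolated into the SQL text.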
/// Build the CTE-based expert SQL for time-decay scoring (v2).
///
/// Returns raw signal rows `(username, signal, mr_id, qty, ts, state_mult)` that
/// Rust aggregates with per-signal decay and `log2(1+count)` for note groups.
///
/// Parameters: `?1` = path, `?2` = since_ms, `?3` = project_id (nullable),
/// `?4` = as_of_ms, `?5` = closed_mr_multiplier, `?6` = reviewer_min_note_chars
pub(super) fn build_expert_sql_v2(is_prefix: bool) -> String {
let path_op = if is_prefix {
"LIKE ?1 ESCAPE '\\'"
} else {
"= ?1"
};
// INDEXED BY hints for each branch:
// - new_path branch: idx_notes_diffnote_path_created (existing)
// - old_path branch: idx_notes_old_path_author (migration 026)
format!(
"
WITH matched_notes_raw AS (
-- Branch 1: match on position_new_path
SELECT n.id, n.discussion_id, n.author_username, n.created_at, n.project_id
FROM notes n INDEXED BY idx_notes_diffnote_path_created
WHERE n.note_type = 'DiffNote'
AND n.is_system = 0
AND n.author_username IS NOT NULL
AND n.created_at >= ?2
AND n.created_at < ?4
AND (?3 IS NULL OR n.project_id = ?3)
AND n.position_new_path {path_op}
UNION ALL
-- Branch 2: match on position_old_path
SELECT n.id, n.discussion_id, n.author_username, n.created_at, n.project_id
FROM notes n INDEXED BY idx_notes_old_path_author
WHERE n.note_type = 'DiffNote'
AND n.is_system = 0
AND n.author_username IS NOT NULL
AND n.created_at >= ?2
AND n.created_at < ?4
AND (?3 IS NULL OR n.project_id = ?3)
AND n.position_old_path IS NOT NULL
AND n.position_old_path {path_op}
),
matched_notes AS (
-- Dedup: prevent double-counting when old_path = new_path (no rename)
SELECT DISTINCT id, discussion_id, author_username, created_at, project_id
FROM matched_notes_raw
),
matched_file_changes_raw AS (
-- Branch 1: match on new_path
SELECT fc.merge_request_id, fc.project_id
FROM mr_file_changes fc INDEXED BY idx_mfc_new_path_project_mr
WHERE (?3 IS NULL OR fc.project_id = ?3)
AND fc.new_path {path_op}
UNION ALL
-- Branch 2: match on old_path
SELECT fc.merge_request_id, fc.project_id
FROM mr_file_changes fc INDEXED BY idx_mfc_old_path_project_mr
WHERE (?3 IS NULL OR fc.project_id = ?3)
AND fc.old_path IS NOT NULL
AND fc.old_path {path_op}
),
matched_file_changes AS (
-- Dedup: prevent double-counting when old_path = new_path (no rename)
SELECT DISTINCT merge_request_id, project_id
FROM matched_file_changes_raw
),
mr_activity AS (
-- Centralized state-aware timestamps and state multiplier.
-- Scoped to MRs matched by file changes to avoid materializing the full MR table.
SELECT DISTINCT
m.id AS mr_id,
m.author_username,
m.state,
CASE
WHEN m.state = 'merged' THEN COALESCE(m.merged_at, m.created_at)
WHEN m.state = 'closed' THEN COALESCE(m.closed_at, m.created_at)
ELSE COALESCE(m.updated_at, m.created_at)
END AS activity_ts,
CASE WHEN m.state = 'closed' THEN ?5 ELSE 1.0 END AS state_mult
FROM merge_requests m
JOIN matched_file_changes mfc ON mfc.merge_request_id = m.id
WHERE m.state IN ('opened','merged','closed')
),
reviewer_participation AS (
-- Precompute which (mr_id, username) pairs have substantive DiffNote participation.
SELECT DISTINCT d.merge_request_id AS mr_id, mn.author_username AS username
FROM matched_notes mn
JOIN discussions d ON mn.discussion_id = d.id
JOIN notes n_body ON mn.id = n_body.id
WHERE d.merge_request_id IS NOT NULL
AND LENGTH(TRIM(COALESCE(n_body.body, ''))) >= ?6
),
raw AS (
-- Signal 1: DiffNote reviewer (individual notes for note_cnt)
SELECT mn.author_username AS username, 'diffnote_reviewer' AS signal,
m.id AS mr_id, mn.id AS note_id, mn.created_at AS seen_at,
CASE WHEN m.state = 'closed' THEN ?5 ELSE 1.0 END AS state_mult
FROM matched_notes mn
JOIN discussions d ON mn.discussion_id = d.id
JOIN merge_requests m ON d.merge_request_id = m.id
WHERE (m.author_username IS NULL OR mn.author_username != m.author_username)
AND m.state IN ('opened','merged','closed')
UNION ALL
-- Signal 2: DiffNote MR author
SELECT m.author_username AS username, 'diffnote_author' AS signal,
m.id AS mr_id, NULL AS note_id, MAX(mn.created_at) AS seen_at,
CASE WHEN m.state = 'closed' THEN ?5 ELSE 1.0 END AS state_mult
FROM merge_requests m
JOIN discussions d ON d.merge_request_id = m.id
JOIN matched_notes mn ON mn.discussion_id = d.id
WHERE m.author_username IS NOT NULL
AND m.state IN ('opened','merged','closed')
GROUP BY m.author_username, m.id
UNION ALL
-- Signal 3: MR author via file changes (uses mr_activity CTE)
SELECT a.author_username AS username, 'file_author' AS signal,
a.mr_id, NULL AS note_id,
a.activity_ts AS seen_at, a.state_mult
FROM mr_activity a
WHERE a.author_username IS NOT NULL
AND a.activity_ts >= ?2
AND a.activity_ts < ?4
UNION ALL
-- Signal 4a: Reviewer participated (in mr_reviewers AND left DiffNotes on path)
SELECT r.username AS username, 'file_reviewer_participated' AS signal,
a.mr_id, NULL AS note_id,
a.activity_ts AS seen_at, a.state_mult
FROM mr_activity a
JOIN mr_reviewers r ON r.merge_request_id = a.mr_id
JOIN reviewer_participation rp ON rp.mr_id = a.mr_id AND rp.username = r.username
WHERE r.username IS NOT NULL
AND (a.author_username IS NULL OR r.username != a.author_username)
AND a.activity_ts >= ?2
AND a.activity_ts < ?4
UNION ALL
-- Signal 4b: Reviewer assigned-only (in mr_reviewers, NO DiffNotes on path)
SELECT r.username AS username, 'file_reviewer_assigned' AS signal,
a.mr_id, NULL AS note_id,
a.activity_ts AS seen_at, a.state_mult
FROM mr_activity a
JOIN mr_reviewers r ON r.merge_request_id = a.mr_id
LEFT JOIN reviewer_participation rp ON rp.mr_id = a.mr_id AND rp.username = r.username
WHERE rp.username IS NULL
AND r.username IS NOT NULL
AND (a.author_username IS NULL OR r.username != a.author_username)
AND a.activity_ts >= ?2
AND a.activity_ts < ?4
),
aggregated AS (
-- MR-level signals: 1 row per (username, signal_class, mr_id) with MAX(ts)
SELECT username, signal, mr_id, 1 AS qty, MAX(seen_at) AS ts, MAX(state_mult) AS state_mult
FROM raw WHERE signal != 'diffnote_reviewer'
GROUP BY username, signal, mr_id
UNION ALL
-- Note signals: 1 row per (username, mr_id) with note_count and max_ts
SELECT username, 'note_group' AS signal, mr_id, COUNT(*) AS qty, MAX(seen_at) AS ts,
MAX(state_mult) AS state_mult
FROM raw WHERE signal = 'diffnote_reviewer' AND note_id IS NOT NULL
GROUP BY username, mr_id
)
SELECT username, signal, mr_id, qty, ts, state_mult FROM aggregated WHERE username IS NOT NULL
"
)
}
/// Query per-MR detail for a set of experts. Returns a map of username -> Vec<ExpertMrDetail>.
pub(super) fn query_expert_details(
conn: &Connection,
pq: &PathQuery,
experts: &[Expert],
since_ms: i64,
project_id: Option<i64>,
) -> Result<HashMap<String, Vec<ExpertMrDetail>>> {
let path_op = if pq.is_prefix {
"LIKE ?1 ESCAPE '\\'"
} else {
"= ?1"
};
// Build IN clause for usernames
let placeholders: Vec<String> = experts
.iter()
.enumerate()
.map(|(i, _)| format!("?{}", i + 4))
.collect();
let in_clause = placeholders.join(",");
let sql = format!(
"
WITH signals AS (
-- 1. DiffNote reviewer (matches both new_path and old_path for renamed files)
SELECT
n.author_username AS username,
'reviewer' AS role,
m.id AS mr_id,
(p.path_with_namespace || '!' || CAST(m.iid AS TEXT)) AS mr_ref,
m.title AS title,
COUNT(*) AS note_count,
MAX(n.created_at) AS last_activity
FROM notes n
JOIN discussions d ON n.discussion_id = d.id
JOIN merge_requests m ON d.merge_request_id = m.id
JOIN projects p ON m.project_id = p.id
WHERE n.note_type = 'DiffNote'
AND n.is_system = 0
AND n.author_username IS NOT NULL
AND (m.author_username IS NULL OR n.author_username != m.author_username)
AND m.state IN ('opened','merged','closed')
AND (n.position_new_path {path_op}
OR (n.position_old_path IS NOT NULL AND n.position_old_path {path_op}))
AND n.created_at >= ?2
AND (?3 IS NULL OR n.project_id = ?3)
AND n.author_username IN ({in_clause})
GROUP BY n.author_username, m.id
UNION ALL
-- 2. DiffNote MR author (matches both new_path and old_path for renamed files)
SELECT
m.author_username AS username,
'author' AS role,
m.id AS mr_id,
(p.path_with_namespace || '!' || CAST(m.iid AS TEXT)) AS mr_ref,
m.title AS title,
0 AS note_count,
MAX(n.created_at) AS last_activity
FROM merge_requests m
JOIN discussions d ON d.merge_request_id = m.id
JOIN notes n ON n.discussion_id = d.id
JOIN projects p ON m.project_id = p.id
WHERE n.note_type = 'DiffNote'
AND n.is_system = 0
AND m.author_username IS NOT NULL
AND m.state IN ('opened','merged','closed')
AND (n.position_new_path {path_op}
OR (n.position_old_path IS NOT NULL AND n.position_old_path {path_op}))
AND n.created_at >= ?2
AND (?3 IS NULL OR n.project_id = ?3)
AND m.author_username IN ({in_clause})
GROUP BY m.author_username, m.id
UNION ALL
-- 3. MR author via file changes (matches both new_path and old_path)
SELECT
m.author_username AS username,
'author' AS role,
m.id AS mr_id,
(p.path_with_namespace || '!' || CAST(m.iid AS TEXT)) AS mr_ref,
m.title AS title,
0 AS note_count,
m.updated_at AS last_activity
FROM mr_file_changes fc
JOIN merge_requests m ON fc.merge_request_id = m.id
JOIN projects p ON m.project_id = p.id
WHERE m.author_username IS NOT NULL
AND m.state IN ('opened','merged','closed')
AND (fc.new_path {path_op}
OR (fc.old_path IS NOT NULL AND fc.old_path {path_op}))
AND m.updated_at >= ?2
AND (?3 IS NULL OR fc.project_id = ?3)
AND m.author_username IN ({in_clause})
UNION ALL
-- 4. MR reviewer via file changes + mr_reviewers (matches both new_path and old_path)
SELECT
r.username AS username,
'reviewer' AS role,
m.id AS mr_id,
(p.path_with_namespace || '!' || CAST(m.iid AS TEXT)) AS mr_ref,
m.title AS title,
0 AS note_count,
m.updated_at AS last_activity
FROM mr_file_changes fc
JOIN merge_requests m ON fc.merge_request_id = m.id
JOIN projects p ON m.project_id = p.id
JOIN mr_reviewers r ON r.merge_request_id = m.id
WHERE r.username IS NOT NULL
AND (m.author_username IS NULL OR r.username != m.author_username)
AND m.state IN ('opened','merged','closed')
AND (fc.new_path {path_op}
OR (fc.old_path IS NOT NULL AND fc.old_path {path_op}))
AND m.updated_at >= ?2
AND (?3 IS NULL OR fc.project_id = ?3)
AND r.username IN ({in_clause})
)
SELECT
username,
mr_ref,
title,
GROUP_CONCAT(DISTINCT role) AS roles,
SUM(note_count) AS total_notes,
MAX(last_activity) AS last_activity
FROM signals
GROUP BY username, mr_ref
ORDER BY username ASC, last_activity DESC
"
);
// prepare() not prepare_cached(): the IN clause varies by expert count,
// so the SQL shape changes per invocation and caching wastes memory.
let mut stmt = conn.prepare(&sql)?;
// Build params: ?1=path, ?2=since_ms, ?3=project_id, ?4..=usernames
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
params.push(Box::new(pq.value.clone()));
params.push(Box::new(since_ms));
params.push(Box::new(project_id));
for expert in experts {
params.push(Box::new(expert.username.clone()));
}
let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let rows: Vec<(String, String, String, String, u32, i64)> = stmt
.query_map(param_refs.as_slice(), |row| {
Ok((
row.get(0)?,
row.get(1)?,
row.get(2)?,
row.get::<_, String>(3)?,
row.get(4)?,
row.get(5)?,
))
})?
.collect::<std::result::Result<Vec<_>, _>>()?;
let mut map: HashMap<String, Vec<ExpertMrDetail>> = HashMap::new();
for (username, mr_ref, title, roles_csv, note_count, last_activity) in rows {
let has_author = roles_csv.contains("author");
let has_reviewer = roles_csv.contains("reviewer");
let role = match (has_author, has_reviewer) {
(true, true) => "A+R",
(true, false) => "A",
(false, true) => "R",
_ => "?",
}
.to_string();
map.entry(username).or_default().push(ExpertMrDetail {
mr_ref,
title,
role,
note_count,
last_activity_ms: last_activity,
});
}
Ok(map)
}
pub(super) fn print_expert_human(r: &ExpertResult, project_path: Option<&str>) {
println!();
println!(
"{}",
Theme::bold().render(&format!("Experts for {}", r.path_query))
);
println!("{}", "\u{2500}".repeat(60));
println!(
" {}",
Theme::dim().render(&format!(
"(matching {} {})",
r.path_match,
if r.path_match == "exact" {
"file"
} else {
"directory prefix"
}
))
);
super::print_scope_hint(project_path);
println!();
if r.experts.is_empty() {
println!(
" {}",
Theme::dim().render("No experts found for this path.")
);
println!();
return;
}
println!(
" {:<16} {:>6} {:>12} {:>6} {:>12} {} {}",
Theme::bold().render("Username"),
Theme::bold().render("Score"),
Theme::bold().render("Reviewed(MRs)"),
Theme::bold().render("Notes"),
Theme::bold().render("Authored(MRs)"),
Theme::bold().render("Last Seen"),
Theme::bold().render("MR Refs"),
);
for expert in &r.experts {
let reviews = if expert.review_mr_count > 0 {
expert.review_mr_count.to_string()
} else {
"-".to_string()
};
let notes = if expert.review_note_count > 0 {
expert.review_note_count.to_string()
} else {
"-".to_string()
};
let authored = if expert.author_mr_count > 0 {
expert.author_mr_count.to_string()
} else {
"-".to_string()
};
let mr_str = expert
.mr_refs
.iter()
.take(5)
.cloned()
.collect::<Vec<_>>()
.join(", ");
let overflow = if expert.mr_refs_total > 5 {
format!(" +{}", expert.mr_refs_total - 5)
} else {
String::new()
};
println!(
" {:<16} {:>6} {:>12} {:>6} {:>12} {:<12}{}{}",
Theme::info().render(&format!("{} {}", Icons::user(), expert.username)),
expert.score,
reviews,
notes,
authored,
render::format_relative_time(expert.last_seen_ms),
if mr_str.is_empty() {
String::new()
} else {
format!(" {mr_str}")
},
overflow,
);
// Print detail sub-rows when populated
if let Some(details) = &expert.details {
const MAX_DETAIL_DISPLAY: usize = 10;
for d in details.iter().take(MAX_DETAIL_DISPLAY) {
let notes_str = if d.note_count > 0 {
format!("{} notes", d.note_count)
} else {
String::new()
};
println!(
" {:<3} {:<30} {:>30} {:>10} {}",
Theme::dim().render(&d.role),
d.mr_ref,
render::truncate(&format!("\"{}\"", d.title), 30),
notes_str,
Theme::dim().render(&render::format_relative_time(d.last_activity_ms)),
);
}
if details.len() > MAX_DETAIL_DISPLAY {
println!(
" {}",
Theme::dim().render(&format!("+{} more", details.len() - MAX_DETAIL_DISPLAY))
);
}
}
}
if r.truncated {
println!(
" {}",
Theme::dim().render("(results truncated; rerun with a higher --limit)")
);
}
println!();
}
pub(super) fn expert_to_json(r: &ExpertResult) -> serde_json::Value {
serde_json::json!({
"path_query": r.path_query,
"path_match": r.path_match,
"scoring_model_version": 2,
"truncated": r.truncated,
"experts": r.experts.iter().map(|e| {
let mut obj = serde_json::json!({
"username": e.username,
"score": e.score,
"review_mr_count": e.review_mr_count,
"review_note_count": e.review_note_count,
"author_mr_count": e.author_mr_count,
"last_seen_at": ms_to_iso(e.last_seen_ms),
"mr_refs": e.mr_refs,
"mr_refs_total": e.mr_refs_total,
"mr_refs_truncated": e.mr_refs_truncated,
});
if let Some(raw) = e.score_raw {
obj["score_raw"] = serde_json::json!(raw);
}
if let Some(comp) = &e.components {
obj["components"] = serde_json::json!({
"author": comp.author,
"reviewer_participated": comp.reviewer_participated,
"reviewer_assigned": comp.reviewer_assigned,
"notes": comp.notes,
});
}
if let Some(details) = &e.details {
obj["details"] = serde_json::json!(details.iter().map(|d| serde_json::json!({
"mr_ref": d.mr_ref,
"title": d.title,
"role": d.role,
"note_count": d.note_count,
"last_activity_at": ms_to_iso(d.last_activity_ms),
})).collect::<Vec<_>>());
}
obj
}).collect::<Vec<_>>(),
})
}

src/cli/commands/who/mod.rs

@@ -0,0 +1,428 @@
mod active;
mod expert;
mod overlap;
mod reviews;
pub mod types;
mod workload;
pub use types::*;
// Re-export submodule functions for tests (tests use `use super::*`).
#[cfg(test)]
use active::query_active;
#[cfg(test)]
use expert::{build_expert_sql_v2, half_life_decay, query_expert};
#[cfg(test)]
use overlap::{format_overlap_role, query_overlap};
#[cfg(test)]
use reviews::{normalize_review_prefix, query_reviews};
#[cfg(test)]
use workload::query_workload;
use rusqlite::Connection;
use serde::Serialize;
use crate::Config;
use crate::cli::WhoArgs;
use crate::cli::render::Theme;
use crate::cli::robot::RobotMeta;
use crate::core::db::create_connection;
use crate::core::error::{LoreError, Result};
use crate::core::path_resolver::normalize_repo_path;
use crate::core::paths::get_db_path;
use crate::core::project::resolve_project;
use crate::core::time::{ms_to_iso, now_ms, parse_since, parse_since_from};
#[cfg(test)]
use crate::core::config::ScoringConfig;
#[cfg(test)]
use crate::core::path_resolver::{SuffixResult, build_path_query, escape_like, suffix_probe};
// ─── Mode Discrimination ────────────────────────────────────────────────────
/// Determines which query mode to run based on args.
/// Path variants own their strings because path normalization produces new `String`s.
/// Username variants borrow from args since no normalization is needed.
enum WhoMode<'a> {
/// lore who <file-path> OR lore who --path <path>
Expert { path: String },
/// lore who <username>
Workload { username: &'a str },
/// lore who <username> --reviews
Reviews { username: &'a str },
/// lore who --active
Active,
/// lore who --overlap <path>
Overlap { path: String },
}
fn resolve_mode<'a>(args: &'a WhoArgs) -> Result<WhoMode<'a>> {
// Explicit --path flag always wins (handles root files like README.md,
// LICENSE, Makefile -- anything without a / that can't be auto-detected)
if let Some(p) = &args.path {
return Ok(WhoMode::Expert {
path: normalize_repo_path(p),
});
}
if args.active {
return Ok(WhoMode::Active);
}
if let Some(path) = &args.overlap {
return Ok(WhoMode::Overlap {
path: normalize_repo_path(path),
});
}
if let Some(target) = &args.target {
let clean = target.strip_prefix('@').unwrap_or(target);
if args.reviews {
return Ok(WhoMode::Reviews { username: clean });
}
// Disambiguation: if target contains '/', it's a file path.
// GitLab usernames never contain '/'.
// Root files (no '/') require --path.
if clean.contains('/') {
return Ok(WhoMode::Expert {
path: normalize_repo_path(clean),
});
}
return Ok(WhoMode::Workload { username: clean });
}
Err(LoreError::Other(
"Provide a username, file path, --active, or --overlap <path>.\n\n\
Examples:\n \
lore who src/features/auth/\n \
lore who @username\n \
lore who --active\n \
lore who --overlap src/features/\n \
lore who --path README.md\n \
lore who --path Makefile"
.to_string(),
))
}
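The `/`-based disambiguation above can be exercised in isolation. This standalone sketch (a hypothetical `classify_target` helper, not part of the module) mirrors the rule: strip a leading `@`, then treat any target containing `/` as a file path, since GitLab usernames never contain `/`.

```rust
// Hypothetical mirror of resolve_mode's target classification.
// Returns (cleaned target, is_file_path).
fn classify_target(target: &str) -> (&str, bool) {
    let clean = target.strip_prefix('@').unwrap_or(target);
    (clean, clean.contains('/'))
}

fn main() {
    assert_eq!(classify_target("@alice"), ("alice", false));
    assert_eq!(classify_target("src/core/db.rs"), ("src/core/db.rs", true));
    // Root files have no '/' and look like usernames; they require --path.
    assert_eq!(classify_target("README.md"), ("README.md", false));
    println!("ok");
}
```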
fn validate_mode_flags(mode: &WhoMode<'_>, args: &WhoArgs) -> Result<()> {
if args.detail && !matches!(mode, WhoMode::Expert { .. }) {
return Err(LoreError::Other(
"--detail is only supported in expert mode (`lore who --path <path>` or `lore who <path/with/slash>`).".to_string(),
));
}
Ok(())
}
// ─── Entry Point ─────────────────────────────────────────────────────────────
/// Main entry point. Resolves mode + resolved inputs once, then dispatches.
pub fn run_who(config: &Config, args: &WhoArgs) -> Result<WhoRun> {
let db_path = get_db_path(config.storage.db_path.as_deref());
let conn = create_connection(&db_path)?;
let project_id = args
.project
.as_deref()
.map(|p| resolve_project(&conn, p))
.transpose()?;
let project_path = project_id
.map(|id| lookup_project_path(&conn, id))
.transpose()?;
let mode = resolve_mode(args)?;
validate_mode_flags(&mode, args)?;
// since_mode semantics:
// - expert/reviews/active/overlap: default window applies if args.since is None -> "default"
// - workload: no default window; args.since None => "none"
let since_mode_for_defaulted = if args.since.is_some() {
"explicit"
} else {
"default"
};
let since_mode_for_workload = if args.since.is_some() {
"explicit"
} else {
"none"
};
match mode {
WhoMode::Expert { path } => {
// Compute as_of first so --since durations are relative to it.
let as_of_ms = match &args.as_of {
Some(v) => parse_since(v).ok_or_else(|| {
LoreError::Other(format!(
"Invalid --as-of value: '{v}'. Use a duration (30d, 6m) or date (2024-01-15)"
))
})?,
None => now_ms(),
};
let since_ms = if args.all_history {
0
} else {
resolve_since_from(args.since.as_deref(), "24m", as_of_ms)?
};
let limit = usize::from(args.limit);
let result = expert::query_expert(
&conn,
&path,
project_id,
since_ms,
as_of_ms,
limit,
&config.scoring,
args.detail,
args.explain_score,
args.include_bots,
)?;
Ok(WhoRun {
resolved_input: WhoResolvedInput {
mode: "expert".to_string(),
project_id,
project_path,
since_ms: Some(since_ms),
since_iso: Some(ms_to_iso(since_ms)),
since_mode: since_mode_for_defaulted.to_string(),
limit: args.limit,
},
result: WhoResult::Expert(result),
})
}
WhoMode::Workload { username } => {
let since_ms = args
.since
.as_deref()
.map(resolve_since_required)
.transpose()?;
let limit = usize::from(args.limit);
let result = workload::query_workload(
&conn,
username,
project_id,
since_ms,
limit,
args.include_closed,
)?;
Ok(WhoRun {
resolved_input: WhoResolvedInput {
mode: "workload".to_string(),
project_id,
project_path,
since_ms,
since_iso: since_ms.map(ms_to_iso),
since_mode: since_mode_for_workload.to_string(),
limit: args.limit,
},
result: WhoResult::Workload(result),
})
}
WhoMode::Reviews { username } => {
let since_ms = resolve_since(args.since.as_deref(), "6m")?;
let result = reviews::query_reviews(&conn, username, project_id, since_ms)?;
Ok(WhoRun {
resolved_input: WhoResolvedInput {
mode: "reviews".to_string(),
project_id,
project_path,
since_ms: Some(since_ms),
since_iso: Some(ms_to_iso(since_ms)),
since_mode: since_mode_for_defaulted.to_string(),
limit: args.limit,
},
result: WhoResult::Reviews(result),
})
}
WhoMode::Active => {
let since_ms = resolve_since(args.since.as_deref(), "7d")?;
let limit = usize::from(args.limit);
let result =
active::query_active(&conn, project_id, since_ms, limit, args.include_closed)?;
Ok(WhoRun {
resolved_input: WhoResolvedInput {
mode: "active".to_string(),
project_id,
project_path,
since_ms: Some(since_ms),
since_iso: Some(ms_to_iso(since_ms)),
since_mode: since_mode_for_defaulted.to_string(),
limit: args.limit,
},
result: WhoResult::Active(result),
})
}
WhoMode::Overlap { path } => {
let since_ms = resolve_since(args.since.as_deref(), "30d")?;
let limit = usize::from(args.limit);
let result = overlap::query_overlap(&conn, &path, project_id, since_ms, limit)?;
Ok(WhoRun {
resolved_input: WhoResolvedInput {
mode: "overlap".to_string(),
project_id,
project_path,
since_ms: Some(since_ms),
since_iso: Some(ms_to_iso(since_ms)),
since_mode: since_mode_for_defaulted.to_string(),
limit: args.limit,
},
result: WhoResult::Overlap(result),
})
}
}
}
// ─── Helpers ─────────────────────────────────────────────────────────────────
/// Look up the project path for a resolved project ID.
fn lookup_project_path(conn: &Connection, project_id: i64) -> Result<String> {
conn.query_row(
"SELECT path_with_namespace FROM projects WHERE id = ?1",
rusqlite::params![project_id],
|row| row.get(0),
)
.map_err(|e| LoreError::Other(format!("Failed to look up project path: {e}")))
}
/// Parse --since with a default fallback.
fn resolve_since(input: Option<&str>, default: &str) -> Result<i64> {
let s = input.unwrap_or(default);
parse_since(s).ok_or_else(|| {
LoreError::Other(format!(
"Invalid --since value: '{s}'. Use a duration (7d, 2w, 6m) or date (2024-01-15)"
))
})
}
/// Parse --since with a default fallback, relative to a reference timestamp.
/// Durations (7d, 2w, 6m) are computed from `reference_ms` instead of now.
fn resolve_since_from(input: Option<&str>, default: &str, reference_ms: i64) -> Result<i64> {
let s = input.unwrap_or(default);
parse_since_from(s, reference_ms).ok_or_else(|| {
LoreError::Other(format!(
"Invalid --since value: '{s}'. Use a duration (7d, 2w, 6m) or date (2024-01-15)"
))
})
}
/// Parse --since without a default (returns error if invalid).
fn resolve_since_required(input: &str) -> Result<i64> {
parse_since(input).ok_or_else(|| {
LoreError::Other(format!(
"Invalid --since value: '{input}'. Use a duration (7d, 2w, 6m) or date (2024-01-15)"
))
})
}
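The `--since` helpers accept durations like `7d`, `2w`, `6m` or a date. The real grammar lives in `core::time::parse_since`; this standalone sketch illustrates only the duration half, assuming 30-day months purely for illustration.

```rust
// Illustrative duration parser (assumption: not the real parse_since).
// Suffix selects the unit, the prefix is the count; returns milliseconds.
fn duration_ms(s: &str) -> Option<i64> {
    let (num, unit) = s.split_at(s.len().checked_sub(1)?);
    let n: i64 = num.parse().ok()?;
    let per_unit = match unit {
        "d" => 86_400_000,
        "w" => 7 * 86_400_000,
        "m" => 30 * 86_400_000, // months approximated as 30 days (assumption)
        _ => return None,
    };
    Some(n * per_unit)
}

fn main() {
    assert_eq!(duration_ms("7d"), Some(7 * 86_400_000));
    assert_eq!(duration_ms("2w"), Some(14 * 86_400_000));
    // Dates like 2024-01-15 fail here; the real parser handles them separately.
    assert!(duration_ms("2024-01-15").is_none());
    println!("ok");
}
```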
// ─── Human Output ────────────────────────────────────────────────────────────
pub fn print_who_human(result: &WhoResult, project_path: Option<&str>) {
match result {
WhoResult::Expert(r) => expert::print_expert_human(r, project_path),
WhoResult::Workload(r) => workload::print_workload_human(r),
WhoResult::Reviews(r) => reviews::print_reviews_human(r),
WhoResult::Active(r) => active::print_active_human(r, project_path),
WhoResult::Overlap(r) => overlap::print_overlap_human(r, project_path),
}
}
/// Print a dim hint when results aggregate across all projects.
pub(super) fn print_scope_hint(project_path: Option<&str>) {
if project_path.is_none() {
println!(
" {}",
Theme::dim().render("(aggregated across all projects; use -p to scope)")
);
}
}
// ─── Robot JSON Output ───────────────────────────────────────────────────────
pub fn print_who_json(run: &WhoRun, args: &WhoArgs, elapsed_ms: u64) {
let (mode, data) = match &run.result {
WhoResult::Expert(r) => ("expert", expert::expert_to_json(r)),
WhoResult::Workload(r) => ("workload", workload::workload_to_json(r)),
WhoResult::Reviews(r) => ("reviews", reviews::reviews_to_json(r)),
WhoResult::Active(r) => ("active", active::active_to_json(r)),
WhoResult::Overlap(r) => ("overlap", overlap::overlap_to_json(r)),
};
// Raw CLI args -- what the user typed
let input = serde_json::json!({
"target": args.target,
"path": args.path,
"project": args.project,
"since": args.since,
"limit": args.limit,
"detail": args.detail,
"as_of": args.as_of,
"explain_score": args.explain_score,
"include_bots": args.include_bots,
"all_history": args.all_history,
});
// Resolved/computed values -- what actually ran
let resolved_input = serde_json::json!({
"mode": run.resolved_input.mode,
"project_id": run.resolved_input.project_id,
"project_path": run.resolved_input.project_path,
"since_ms": run.resolved_input.since_ms,
"since_iso": run.resolved_input.since_iso,
"since_mode": run.resolved_input.since_mode,
"limit": run.resolved_input.limit,
});
let output = WhoJsonEnvelope {
ok: true,
data: WhoJsonData {
mode: mode.to_string(),
input,
resolved_input,
result: data,
},
meta: RobotMeta { elapsed_ms },
};
let mut value = serde_json::to_value(&output).unwrap_or_else(|e| {
serde_json::json!({"ok":false,"error":{"code":"INTERNAL_ERROR","message":format!("JSON serialization failed: {e}")}})
});
if let Some(f) = &args.fields {
let preset_key = format!("who_{mode}");
let expanded = crate::cli::robot::expand_fields_preset(f, &preset_key);
// Each who mode uses a different array key; try all possible keys
for key in &[
"experts",
"assigned_issues",
"authored_mrs",
"review_mrs",
"categories",
"discussions",
"users",
] {
crate::cli::robot::filter_fields(&mut value, key, &expanded);
}
}
match serde_json::to_string(&value) {
Ok(json) => println!("{json}"),
Err(e) => eprintln!("Error serializing to JSON: {e}"),
}
}
#[derive(Serialize)]
struct WhoJsonEnvelope {
ok: bool,
data: WhoJsonData,
meta: RobotMeta,
}
#[derive(Serialize)]
struct WhoJsonData {
mode: String,
input: serde_json::Value,
resolved_input: serde_json::Value,
#[serde(flatten)]
result: serde_json::Value,
}
// ─── Tests ───────────────────────────────────────────────────────────────────
#[cfg(test)]
#[path = "../who_tests.rs"]
mod tests;


@@ -0,0 +1,323 @@
use std::collections::{HashMap, HashSet};
use rusqlite::Connection;
use crate::cli::render::{self, Icons, Theme};
use crate::core::error::Result;
use crate::core::path_resolver::build_path_query;
use crate::core::time::ms_to_iso;
use super::types::*;
pub(super) fn query_overlap(
conn: &Connection,
path: &str,
project_id: Option<i64>,
since_ms: i64,
limit: usize,
) -> Result<OverlapResult> {
let pq = build_path_query(conn, path, project_id)?;
// Build SQL with 4 signal sources, matching the expert query expansion.
// Each row produces (username, role, mr_id, mr_ref, seen_at) for Rust-side accumulation.
let path_op = if pq.is_prefix {
"LIKE ?1 ESCAPE '\\'"
} else {
"= ?1"
};
// Match both new_path and old_path to capture activity on renamed files.
// INDEXED BY removed to allow OR across path columns; overlap runs once
// per command so the minor plan difference is acceptable.
let sql = format!(
"SELECT username, role, touch_count, last_seen_at, mr_refs FROM (
-- 1. DiffNote reviewer (matches both new_path and old_path)
SELECT
n.author_username AS username,
'reviewer' AS role,
COUNT(DISTINCT m.id) AS touch_count,
MAX(n.created_at) AS last_seen_at,
GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
FROM notes n
JOIN discussions d ON n.discussion_id = d.id
JOIN merge_requests m ON d.merge_request_id = m.id
JOIN projects p ON m.project_id = p.id
WHERE n.note_type = 'DiffNote'
AND (n.position_new_path {path_op}
OR (n.position_old_path IS NOT NULL AND n.position_old_path {path_op}))
AND n.is_system = 0
AND n.author_username IS NOT NULL
AND (m.author_username IS NULL OR n.author_username != m.author_username)
AND m.state IN ('opened','merged','closed')
AND n.created_at >= ?2
AND (?3 IS NULL OR n.project_id = ?3)
GROUP BY n.author_username
UNION ALL
-- 2. DiffNote MR author (matches both new_path and old_path)
SELECT
m.author_username AS username,
'author' AS role,
COUNT(DISTINCT m.id) AS touch_count,
MAX(n.created_at) AS last_seen_at,
GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
FROM notes n
JOIN discussions d ON n.discussion_id = d.id
JOIN merge_requests m ON d.merge_request_id = m.id
JOIN projects p ON m.project_id = p.id
WHERE n.note_type = 'DiffNote'
AND (n.position_new_path {path_op}
OR (n.position_old_path IS NOT NULL AND n.position_old_path {path_op}))
AND n.is_system = 0
AND m.state IN ('opened','merged','closed')
AND m.author_username IS NOT NULL
AND n.created_at >= ?2
AND (?3 IS NULL OR n.project_id = ?3)
GROUP BY m.author_username
UNION ALL
-- 3. MR author via file changes (matches both new_path and old_path)
SELECT
m.author_username AS username,
'author' AS role,
COUNT(DISTINCT m.id) AS touch_count,
MAX(m.updated_at) AS last_seen_at,
GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
FROM mr_file_changes fc
JOIN merge_requests m ON fc.merge_request_id = m.id
JOIN projects p ON m.project_id = p.id
WHERE m.author_username IS NOT NULL
AND m.state IN ('opened','merged','closed')
AND (fc.new_path {path_op}
OR (fc.old_path IS NOT NULL AND fc.old_path {path_op}))
AND m.updated_at >= ?2
AND (?3 IS NULL OR fc.project_id = ?3)
GROUP BY m.author_username
UNION ALL
-- 4. MR reviewer via file changes + mr_reviewers (matches both new_path and old_path)
SELECT
r.username AS username,
'reviewer' AS role,
COUNT(DISTINCT m.id) AS touch_count,
MAX(m.updated_at) AS last_seen_at,
GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
FROM mr_file_changes fc
JOIN merge_requests m ON fc.merge_request_id = m.id
JOIN projects p ON m.project_id = p.id
JOIN mr_reviewers r ON r.merge_request_id = m.id
WHERE r.username IS NOT NULL
AND (m.author_username IS NULL OR r.username != m.author_username)
AND m.state IN ('opened','merged','closed')
AND (fc.new_path {path_op}
OR (fc.old_path IS NOT NULL AND fc.old_path {path_op}))
AND m.updated_at >= ?2
AND (?3 IS NULL OR fc.project_id = ?3)
GROUP BY r.username
)"
);
let mut stmt = conn.prepare_cached(&sql)?;
let rows: Vec<(String, String, u32, i64, Option<String>)> = stmt
.query_map(rusqlite::params![pq.value, since_ms, project_id], |row| {
Ok((
row.get(0)?,
row.get(1)?,
row.get(2)?,
row.get(3)?,
row.get(4)?,
))
})?
.collect::<std::result::Result<Vec<_>, _>>()?;
// Internal accumulator uses HashSet for MR refs from the start
struct OverlapAcc {
username: String,
author_touch_count: u32,
review_touch_count: u32,
touch_count: u32,
last_seen_at: i64,
mr_refs: HashSet<String>,
}
let mut user_map: HashMap<String, OverlapAcc> = HashMap::new();
for (username, role, count, last_seen, mr_refs_csv) in &rows {
let mr_refs: Vec<String> = mr_refs_csv
.as_deref()
.map(|csv| csv.split(',').map(|s| s.trim().to_string()).collect())
.unwrap_or_default();
let entry = user_map
.entry(username.clone())
.or_insert_with(|| OverlapAcc {
username: username.clone(),
author_touch_count: 0,
review_touch_count: 0,
touch_count: 0,
last_seen_at: 0,
mr_refs: HashSet::new(),
});
entry.touch_count += count;
if role == "author" {
entry.author_touch_count += count;
} else {
entry.review_touch_count += count;
}
if *last_seen > entry.last_seen_at {
entry.last_seen_at = *last_seen;
}
for r in mr_refs {
entry.mr_refs.insert(r);
}
}
// Convert accumulators to output structs
let mut users: Vec<OverlapUser> = user_map
.into_values()
.map(|a| {
let mut mr_refs: Vec<String> = a.mr_refs.into_iter().collect();
mr_refs.sort();
let mr_refs_total = mr_refs.len() as u32;
let mr_refs_truncated = mr_refs.len() > MAX_MR_REFS_PER_USER;
if mr_refs_truncated {
mr_refs.truncate(MAX_MR_REFS_PER_USER);
}
OverlapUser {
username: a.username,
author_touch_count: a.author_touch_count,
review_touch_count: a.review_touch_count,
touch_count: a.touch_count,
last_seen_at: a.last_seen_at,
mr_refs,
mr_refs_total,
mr_refs_truncated,
}
})
.collect();
// Stable sort with full tie-breakers for deterministic output
users.sort_by(|a, b| {
b.touch_count
.cmp(&a.touch_count)
.then_with(|| b.last_seen_at.cmp(&a.last_seen_at))
.then_with(|| a.username.cmp(&b.username))
});
let truncated = users.len() > limit;
users.truncate(limit);
Ok(OverlapResult {
path_query: if pq.is_prefix {
path.trim_end_matches('/').to_string()
} else {
pq.value.clone()
},
path_match: if pq.is_prefix { "prefix" } else { "exact" }.to_string(),
users,
truncated,
})
}
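The UNION ALL rows above feed a per-user accumulator. A minimal standalone sketch of that merge step (a simplified tuple accumulator standing in for `OverlapAcc`, with a hypothetical `merge_rows` helper): author and reviewer counts accumulate separately, and a `HashSet` deduplicates MR refs across the four signal sources.

```rust
use std::collections::{HashMap, HashSet};

// Sketch of the accumulation step: rows arrive as (username, role,
// touch_count, mr_ref); values are (author_touches, review_touches, refs).
fn merge_rows(rows: &[(&str, &str, u32, &str)]) -> HashMap<String, (u32, u32, HashSet<String>)> {
    let mut acc: HashMap<String, (u32, u32, HashSet<String>)> = HashMap::new();
    for (user, role, count, mr_ref) in rows {
        let e = acc.entry((*user).to_string()).or_default();
        if *role == "author" { e.0 += count } else { e.1 += count }
        e.2.insert((*mr_ref).to_string());
    }
    acc
}

fn main() {
    let rows = [
        ("alice", "author", 2, "g/p!1"),
        ("alice", "reviewer", 1, "g/p!1"), // same MR: deduped in the set
    ];
    let acc = merge_rows(&rows);
    let alice = &acc["alice"];
    assert_eq!((alice.0, alice.1, alice.2.len()), (2, 1, 1));
    println!("ok");
}
```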
/// Format overlap role for display: "A", "R", or "A+R".
pub(super) fn format_overlap_role(user: &OverlapUser) -> &'static str {
match (user.author_touch_count > 0, user.review_touch_count > 0) {
(true, true) => "A+R",
(true, false) => "A",
(false, true) => "R",
(false, false) => "-",
}
}
pub(super) fn print_overlap_human(r: &OverlapResult, project_path: Option<&str>) {
println!();
println!(
"{}",
Theme::bold().render(&format!("Overlap for {}", r.path_query))
);
println!("{}", "\u{2500}".repeat(60));
println!(
" {}",
Theme::dim().render(&format!(
"(matching {} {})",
r.path_match,
if r.path_match == "exact" {
"file"
} else {
"directory prefix"
}
))
);
super::print_scope_hint(project_path);
println!();
if r.users.is_empty() {
println!(
" {}",
Theme::dim().render("No overlapping users found for this path.")
);
println!();
return;
}
println!(
" {:<16} {:<6} {:>7} {:<12} {}",
Theme::bold().render("Username"),
Theme::bold().render("Role"),
Theme::bold().render("MRs"),
Theme::bold().render("Last Seen"),
Theme::bold().render("MR Refs"),
);
for user in &r.users {
let mr_str = user
.mr_refs
.iter()
.take(5)
.cloned()
.collect::<Vec<_>>()
.join(", ");
let overflow = if user.mr_refs.len() > 5 {
format!(" +{}", user.mr_refs.len() - 5)
} else {
String::new()
};
println!(
" {:<16} {:<6} {:>7} {:<12} {}{}",
Theme::info().render(&format!("{} {}", Icons::user(), user.username)),
format_overlap_role(user),
user.touch_count,
render::format_relative_time(user.last_seen_at),
mr_str,
overflow,
);
}
if r.truncated {
println!(
" {}",
Theme::dim().render("(showing first --limit results; rerun with a higher --limit)")
);
}
println!();
}
pub(super) fn overlap_to_json(r: &OverlapResult) -> serde_json::Value {
serde_json::json!({
"path_query": r.path_query,
"path_match": r.path_match,
"truncated": r.truncated,
"users": r.users.iter().map(|u| serde_json::json!({
"username": u.username,
"role": format_overlap_role(u),
"author_touch_count": u.author_touch_count,
"review_touch_count": u.review_touch_count,
"touch_count": u.touch_count,
"last_seen_at": ms_to_iso(u.last_seen_at),
"mr_refs": u.mr_refs,
"mr_refs_total": u.mr_refs_total,
"mr_refs_truncated": u.mr_refs_truncated,
})).collect::<Vec<_>>(),
})
}


@@ -0,0 +1,214 @@
use std::collections::HashMap;
use rusqlite::Connection;
use crate::cli::render::{Icons, Theme};
use crate::core::error::Result;
use super::types::*;
// ─── Query: Reviews Mode ────────────────────────────────────────────────────
pub(super) fn query_reviews(
conn: &Connection,
username: &str,
project_id: Option<i64>,
since_ms: i64,
) -> Result<ReviewsResult> {
// Force the partial index on DiffNote queries (same rationale as expert mode).
// COUNT + COUNT(DISTINCT) + category extraction all benefit from 26K DiffNote
// scan vs 282K notes full scan: measured 25x speedup.
let total_sql = "SELECT COUNT(*) FROM notes n
INDEXED BY idx_notes_diffnote_path_created
JOIN discussions d ON n.discussion_id = d.id
JOIN merge_requests m ON d.merge_request_id = m.id
WHERE n.author_username = ?1
AND n.note_type = 'DiffNote'
AND n.is_system = 0
AND (m.author_username IS NULL OR m.author_username != ?1)
AND n.created_at >= ?2
AND (?3 IS NULL OR n.project_id = ?3)";
let total_diffnotes: u32 = conn.query_row(
total_sql,
rusqlite::params![username, since_ms, project_id],
|row| row.get(0),
)?;
// Count distinct MRs reviewed
let mrs_sql = "SELECT COUNT(DISTINCT m.id) FROM notes n
INDEXED BY idx_notes_diffnote_path_created
JOIN discussions d ON n.discussion_id = d.id
JOIN merge_requests m ON d.merge_request_id = m.id
WHERE n.author_username = ?1
AND n.note_type = 'DiffNote'
AND n.is_system = 0
AND (m.author_username IS NULL OR m.author_username != ?1)
AND n.created_at >= ?2
AND (?3 IS NULL OR n.project_id = ?3)";
let mrs_reviewed: u32 = conn.query_row(
mrs_sql,
rusqlite::params![username, since_ms, project_id],
|row| row.get(0),
)?;
// Extract prefixed categories: body starts with **prefix**
let cat_sql = "SELECT
SUBSTR(ltrim(n.body), 3, INSTR(SUBSTR(ltrim(n.body), 3), '**') - 1) AS raw_prefix,
COUNT(*) AS cnt
FROM notes n INDEXED BY idx_notes_diffnote_path_created
JOIN discussions d ON n.discussion_id = d.id
JOIN merge_requests m ON d.merge_request_id = m.id
WHERE n.author_username = ?1
AND n.note_type = 'DiffNote'
AND n.is_system = 0
AND (m.author_username IS NULL OR m.author_username != ?1)
AND ltrim(n.body) LIKE '**%**%'
AND n.created_at >= ?2
AND (?3 IS NULL OR n.project_id = ?3)
GROUP BY raw_prefix
ORDER BY cnt DESC";
let mut stmt = conn.prepare_cached(cat_sql)?;
let raw_categories: Vec<(String, u32)> = stmt
.query_map(rusqlite::params![username, since_ms, project_id], |row| {
Ok((row.get::<_, String>(0)?, row.get(1)?))
})?
.collect::<std::result::Result<Vec<_>, _>>()?;
// Normalize categories: lowercase, strip trailing colon/space,
// merge nit/nitpick variants, merge (non-blocking) variants
let mut merged: HashMap<String, u32> = HashMap::new();
for (raw, count) in &raw_categories {
let normalized = normalize_review_prefix(raw);
if !normalized.is_empty() {
*merged.entry(normalized).or_insert(0) += count;
}
}
let categorized_count: u32 = merged.values().sum();
let mut categories: Vec<ReviewCategory> = merged
.into_iter()
.map(|(name, count)| {
let percentage = if categorized_count > 0 {
f64::from(count) / f64::from(categorized_count) * 100.0
} else {
0.0
};
ReviewCategory {
name,
count,
percentage,
}
})
.collect();
categories.sort_by(|a, b| b.count.cmp(&a.count));
Ok(ReviewsResult {
username: username.to_string(),
total_diffnotes,
categorized_count,
mrs_reviewed,
categories,
})
}
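The SUBSTR/INSTR expression in `cat_sql` pulls out the text between the first `**` pair of a `**prefix**`-style comment. A Rust mirror of that extraction (hypothetical `extract_prefix`, assuming the same convention):

```rust
// Illustrative Rust equivalent of the SQL prefix extraction (assumption:
// not part of the module). Returns the text between the first "**" pair.
fn extract_prefix(body: &str) -> Option<&str> {
    let rest = body.trim_start().strip_prefix("**")?;
    let end = rest.find("**")?;
    Some(&rest[..end])
}

fn main() {
    assert_eq!(extract_prefix("**Suggestion:** use a slice"), Some("Suggestion:"));
    assert_eq!(extract_prefix("plain comment"), None);
    println!("ok");
}
```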
/// Normalize a raw review prefix like "Suggestion (non-blocking):" into "suggestion".
pub(super) fn normalize_review_prefix(raw: &str) -> String {
let s = raw.trim().trim_end_matches(':').trim().to_lowercase();
// Strip "(non-blocking)" and similar parentheticals
let s = if let Some(idx) = s.find('(') {
s[..idx].trim().to_string()
} else {
s
};
// Merge nit/nitpick variants
match s.as_str() {
"nitpick" | "nit" => "nit".to_string(),
other => other.to_string(),
}
}
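A standalone check of the normalization rules (the function body is reproduced verbatim so the snippet compiles on its own):

```rust
// Copied verbatim from normalize_review_prefix above for a standalone run.
fn normalize_review_prefix(raw: &str) -> String {
    let s = raw.trim().trim_end_matches(':').trim().to_lowercase();
    let s = if let Some(idx) = s.find('(') {
        s[..idx].trim().to_string()
    } else {
        s
    };
    match s.as_str() {
        "nitpick" | "nit" => "nit".to_string(),
        other => other.to_string(),
    }
}

fn main() {
    assert_eq!(normalize_review_prefix("Suggestion (non-blocking):"), "suggestion");
    assert_eq!(normalize_review_prefix("Nitpick:"), "nit");
    assert_eq!(normalize_review_prefix("QUESTION: "), "question");
    println!("ok");
}
```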
// ─── Human Renderer ─────────────────────────────────────────────────────────
pub(super) fn print_reviews_human(r: &ReviewsResult) {
println!();
println!(
"{}",
Theme::bold().render(&format!(
"{} {} -- Review Patterns",
Icons::user(),
r.username
))
);
println!("{}", "\u{2500}".repeat(60));
println!();
if r.total_diffnotes == 0 {
println!(
" {}",
Theme::dim().render("No review comments found for this user.")
);
println!();
return;
}
println!(
" {} DiffNotes across {} MRs ({} categorized)",
Theme::bold().render(&r.total_diffnotes.to_string()),
Theme::bold().render(&r.mrs_reviewed.to_string()),
Theme::bold().render(&r.categorized_count.to_string()),
);
println!();
if !r.categories.is_empty() {
println!(
" {:<16} {:>6} {:>6}",
Theme::bold().render("Category"),
Theme::bold().render("Count"),
Theme::bold().render("%"),
);
for cat in &r.categories {
println!(
" {:<16} {:>6} {:>5.1}%",
Theme::info().render(&cat.name),
cat.count,
cat.percentage,
);
}
}
let uncategorized = r.total_diffnotes - r.categorized_count;
if uncategorized > 0 {
println!();
println!(
" {} {} uncategorized (no **prefix** convention)",
Theme::dim().render("Note:"),
uncategorized,
);
}
println!();
}
// ─── Robot Renderer ─────────────────────────────────────────────────────────
pub(super) fn reviews_to_json(r: &ReviewsResult) -> serde_json::Value {
serde_json::json!({
"username": r.username,
"total_diffnotes": r.total_diffnotes,
"categorized_count": r.categorized_count,
"mrs_reviewed": r.mrs_reviewed,
"categories": r.categories.iter().map(|c| serde_json::json!({
"name": c.name,
"count": c.count,
"percentage": (c.percentage * 10.0).round() / 10.0,
})).collect::<Vec<_>>(),
})
}


@@ -0,0 +1,185 @@
// ─── Result Types ────────────────────────────────────────────────────────────
//
// All pub result structs and enums for the `who` command family.
// Zero logic — pure data definitions.
/// Top-level run result: carries resolved inputs + the mode-specific result.
pub struct WhoRun {
pub resolved_input: WhoResolvedInput,
pub result: WhoResult,
}
/// Resolved query parameters -- computed once, used for robot JSON reproducibility.
pub struct WhoResolvedInput {
pub mode: String,
pub project_id: Option<i64>,
pub project_path: Option<String>,
pub since_ms: Option<i64>,
pub since_iso: Option<String>,
/// "default" (mode default applied), "explicit" (user provided --since), "none" (no window)
pub since_mode: String,
pub limit: u16,
}
/// Top-level result enum -- one variant per mode.
pub enum WhoResult {
Expert(ExpertResult),
Workload(WorkloadResult),
Reviews(ReviewsResult),
Active(ActiveResult),
Overlap(OverlapResult),
}
// --- Expert ---
pub struct ExpertResult {
pub path_query: String,
/// "exact" or "prefix" -- how the path was matched in SQL.
pub path_match: String,
pub experts: Vec<Expert>,
pub truncated: bool,
}
pub struct Expert {
pub username: String,
pub score: i64,
/// Unrounded f64 score (only populated when explain_score is set).
pub score_raw: Option<f64>,
/// Per-component score breakdown (only populated when explain_score is set).
pub components: Option<ScoreComponents>,
pub review_mr_count: u32,
pub review_note_count: u32,
pub author_mr_count: u32,
pub last_seen_ms: i64,
/// Stable MR references like "group/project!123"
pub mr_refs: Vec<String>,
pub mr_refs_total: u32,
pub mr_refs_truncated: bool,
/// Per-MR detail breakdown (only populated when --detail is set)
pub details: Option<Vec<ExpertMrDetail>>,
}
/// Per-component score breakdown for explain mode.
pub struct ScoreComponents {
pub author: f64,
pub reviewer_participated: f64,
pub reviewer_assigned: f64,
pub notes: f64,
}
#[derive(Clone)]
pub struct ExpertMrDetail {
pub mr_ref: String,
pub title: String,
/// "R", "A", or "A+R"
pub role: String,
pub note_count: u32,
pub last_activity_ms: i64,
}
// --- Workload ---
pub struct WorkloadResult {
pub username: String,
pub assigned_issues: Vec<WorkloadIssue>,
pub authored_mrs: Vec<WorkloadMr>,
pub reviewing_mrs: Vec<WorkloadMr>,
pub unresolved_discussions: Vec<WorkloadDiscussion>,
pub assigned_issues_truncated: bool,
pub authored_mrs_truncated: bool,
pub reviewing_mrs_truncated: bool,
pub unresolved_discussions_truncated: bool,
}
pub struct WorkloadIssue {
pub iid: i64,
/// Canonical reference: `group/project#iid`
pub ref_: String,
pub title: String,
pub project_path: String,
pub updated_at: i64,
}
pub struct WorkloadMr {
pub iid: i64,
/// Canonical reference: `group/project!iid`
pub ref_: String,
pub title: String,
pub draft: bool,
pub project_path: String,
pub author_username: Option<String>,
pub updated_at: i64,
}
pub struct WorkloadDiscussion {
pub entity_type: String,
pub entity_iid: i64,
/// Canonical reference: `group/project!iid` or `group/project#iid`
pub ref_: String,
pub entity_title: String,
pub project_path: String,
pub last_note_at: i64,
}
// --- Reviews ---
pub struct ReviewsResult {
pub username: String,
pub total_diffnotes: u32,
pub categorized_count: u32,
pub mrs_reviewed: u32,
pub categories: Vec<ReviewCategory>,
}
pub struct ReviewCategory {
pub name: String,
pub count: u32,
pub percentage: f64,
}
// --- Active ---
pub struct ActiveResult {
pub discussions: Vec<ActiveDiscussion>,
/// Count of unresolved discussions *within the time window*, not total across all time.
pub total_unresolved_in_window: u32,
pub truncated: bool,
}
pub struct ActiveDiscussion {
pub discussion_id: i64,
pub entity_type: String,
pub entity_iid: i64,
pub entity_title: String,
pub project_path: String,
pub last_note_at: i64,
pub note_count: u32,
pub participants: Vec<String>,
pub participants_total: u32,
pub participants_truncated: bool,
}
// --- Overlap ---
pub struct OverlapResult {
pub path_query: String,
/// "exact" or "prefix" -- how the path was matched in SQL.
pub path_match: String,
pub users: Vec<OverlapUser>,
pub truncated: bool,
}
pub struct OverlapUser {
pub username: String,
pub author_touch_count: u32,
pub review_touch_count: u32,
pub touch_count: u32,
pub last_seen_at: i64,
/// Stable MR references like "group/project!123"
pub mr_refs: Vec<String>,
pub mr_refs_total: u32,
pub mr_refs_truncated: bool,
}
/// Maximum MR references to retain per user in output (shared across modes).
pub const MAX_MR_REFS_PER_USER: usize = 50;


@@ -0,0 +1,370 @@
use rusqlite::Connection;
use crate::cli::render::{self, Icons, Theme};
use crate::core::error::Result;
use crate::core::time::ms_to_iso;
use super::types::*;
// ─── Query: Workload Mode ───────────────────────────────────────────────────
pub(super) fn query_workload(
conn: &Connection,
username: &str,
project_id: Option<i64>,
since_ms: Option<i64>,
limit: usize,
include_closed: bool,
) -> Result<WorkloadResult> {
let limit_plus_one = (limit + 1) as i64;
// Query 1: Open issues assigned to user
let issues_sql = "SELECT i.iid,
(p.path_with_namespace || '#' || i.iid) AS ref,
i.title, p.path_with_namespace, i.updated_at
FROM issues i
JOIN issue_assignees ia ON ia.issue_id = i.id
JOIN projects p ON i.project_id = p.id
WHERE ia.username = ?1
AND i.state = 'opened'
AND (?2 IS NULL OR i.project_id = ?2)
AND (?3 IS NULL OR i.updated_at >= ?3)
ORDER BY i.updated_at DESC
LIMIT ?4";
let mut stmt = conn.prepare_cached(issues_sql)?;
let assigned_issues: Vec<WorkloadIssue> = stmt
.query_map(
rusqlite::params![username, project_id, since_ms, limit_plus_one],
|row| {
Ok(WorkloadIssue {
iid: row.get(0)?,
ref_: row.get(1)?,
title: row.get(2)?,
project_path: row.get(3)?,
updated_at: row.get(4)?,
})
},
)?
.collect::<std::result::Result<Vec<_>, _>>()?;
// Query 2: Open MRs authored
let authored_sql = "SELECT m.iid,
(p.path_with_namespace || '!' || m.iid) AS ref,
m.title, m.draft, p.path_with_namespace, m.updated_at
FROM merge_requests m
JOIN projects p ON m.project_id = p.id
WHERE m.author_username = ?1
AND m.state = 'opened'
AND (?2 IS NULL OR m.project_id = ?2)
AND (?3 IS NULL OR m.updated_at >= ?3)
ORDER BY m.updated_at DESC
LIMIT ?4";
let mut stmt = conn.prepare_cached(authored_sql)?;
let authored_mrs: Vec<WorkloadMr> = stmt
.query_map(
rusqlite::params![username, project_id, since_ms, limit_plus_one],
|row| {
Ok(WorkloadMr {
iid: row.get(0)?,
ref_: row.get(1)?,
title: row.get(2)?,
draft: row.get::<_, i32>(3)? != 0,
project_path: row.get(4)?,
author_username: None,
updated_at: row.get(5)?,
})
},
)?
.collect::<std::result::Result<Vec<_>, _>>()?;
// Query 3: Open MRs where user is reviewer
let reviewing_sql = "SELECT m.iid,
(p.path_with_namespace || '!' || m.iid) AS ref,
m.title, m.draft, p.path_with_namespace,
m.author_username, m.updated_at
FROM merge_requests m
JOIN mr_reviewers r ON r.merge_request_id = m.id
JOIN projects p ON m.project_id = p.id
WHERE r.username = ?1
AND m.state = 'opened'
AND (?2 IS NULL OR m.project_id = ?2)
AND (?3 IS NULL OR m.updated_at >= ?3)
ORDER BY m.updated_at DESC
LIMIT ?4";
let mut stmt = conn.prepare_cached(reviewing_sql)?;
let reviewing_mrs: Vec<WorkloadMr> = stmt
.query_map(
rusqlite::params![username, project_id, since_ms, limit_plus_one],
|row| {
Ok(WorkloadMr {
iid: row.get(0)?,
ref_: row.get(1)?,
title: row.get(2)?,
draft: row.get::<_, i32>(3)? != 0,
project_path: row.get(4)?,
author_username: row.get(5)?,
updated_at: row.get(6)?,
})
},
)?
.collect::<std::result::Result<Vec<_>, _>>()?;
// Query 4: Unresolved discussions where user participated
let state_filter = if include_closed {
""
} else {
" AND (i.id IS NULL OR i.state = 'opened')
AND (m.id IS NULL OR m.state = 'opened')"
};
let disc_sql = format!(
"SELECT d.noteable_type,
COALESCE(i.iid, m.iid) AS entity_iid,
(p.path_with_namespace ||
CASE WHEN d.noteable_type = 'MergeRequest' THEN '!' ELSE '#' END ||
COALESCE(i.iid, m.iid)) AS ref,
COALESCE(i.title, m.title) AS entity_title,
p.path_with_namespace,
d.last_note_at
FROM discussions d
JOIN projects p ON d.project_id = p.id
LEFT JOIN issues i ON d.issue_id = i.id
LEFT JOIN merge_requests m ON d.merge_request_id = m.id
WHERE d.resolvable = 1 AND d.resolved = 0
AND EXISTS (
SELECT 1 FROM notes n
WHERE n.discussion_id = d.id
AND n.author_username = ?1
AND n.is_system = 0
)
AND (?2 IS NULL OR d.project_id = ?2)
AND (?3 IS NULL OR d.last_note_at >= ?3)
{state_filter}
ORDER BY d.last_note_at DESC
LIMIT ?4"
);
let mut stmt = conn.prepare_cached(&disc_sql)?;
let unresolved_discussions: Vec<WorkloadDiscussion> = stmt
.query_map(
rusqlite::params![username, project_id, since_ms, limit_plus_one],
|row| {
let noteable_type: String = row.get(0)?;
let entity_type = if noteable_type == "MergeRequest" {
"MR"
} else {
"Issue"
};
Ok(WorkloadDiscussion {
entity_type: entity_type.to_string(),
entity_iid: row.get(1)?,
ref_: row.get(2)?,
entity_title: row.get(3)?,
project_path: row.get(4)?,
last_note_at: row.get(5)?,
})
},
)?
.collect::<std::result::Result<Vec<_>, _>>()?;
// Truncation detection
let assigned_issues_truncated = assigned_issues.len() > limit;
let authored_mrs_truncated = authored_mrs.len() > limit;
let reviewing_mrs_truncated = reviewing_mrs.len() > limit;
let unresolved_discussions_truncated = unresolved_discussions.len() > limit;
let assigned_issues: Vec<WorkloadIssue> = assigned_issues.into_iter().take(limit).collect();
let authored_mrs: Vec<WorkloadMr> = authored_mrs.into_iter().take(limit).collect();
let reviewing_mrs: Vec<WorkloadMr> = reviewing_mrs.into_iter().take(limit).collect();
let unresolved_discussions: Vec<WorkloadDiscussion> =
unresolved_discussions.into_iter().take(limit).collect();
Ok(WorkloadResult {
username: username.to_string(),
assigned_issues,
authored_mrs,
reviewing_mrs,
unresolved_discussions,
assigned_issues_truncated,
authored_mrs_truncated,
reviewing_mrs_truncated,
unresolved_discussions_truncated,
})
}
// ─── Human Renderer: Workload ───────────────────────────────────────────────
pub(super) fn print_workload_human(r: &WorkloadResult) {
println!();
println!(
"{}",
Theme::bold().render(&format!(
"{} {} -- Workload Summary",
Icons::user(),
r.username
))
);
println!("{}", "\u{2500}".repeat(60));
if !r.assigned_issues.is_empty() {
println!(
"{}",
render::section_divider(&format!("Assigned Issues ({})", r.assigned_issues.len()))
);
for item in &r.assigned_issues {
println!(
" {} {} {}",
Theme::info().render(&item.ref_),
render::truncate(&item.title, 40),
Theme::dim().render(&render::format_relative_time(item.updated_at)),
);
}
if r.assigned_issues_truncated {
println!(
" {}",
Theme::dim().render("(truncated; rerun with a higher --limit)")
);
}
}
if !r.authored_mrs.is_empty() {
println!(
"{}",
render::section_divider(&format!("Authored MRs ({})", r.authored_mrs.len()))
);
for mr in &r.authored_mrs {
let draft = if mr.draft { " [draft]" } else { "" };
println!(
" {} {}{} {}",
Theme::info().render(&mr.ref_),
render::truncate(&mr.title, 35),
Theme::dim().render(draft),
Theme::dim().render(&render::format_relative_time(mr.updated_at)),
);
}
if r.authored_mrs_truncated {
println!(
" {}",
Theme::dim().render("(truncated; rerun with a higher --limit)")
);
}
}
if !r.reviewing_mrs.is_empty() {
println!(
"{}",
render::section_divider(&format!("Reviewing MRs ({})", r.reviewing_mrs.len()))
);
for mr in &r.reviewing_mrs {
let author = mr
.author_username
.as_deref()
.map(|a| format!(" by @{a}"))
.unwrap_or_default();
println!(
" {} {}{} {}",
Theme::info().render(&mr.ref_),
render::truncate(&mr.title, 30),
Theme::dim().render(&author),
Theme::dim().render(&render::format_relative_time(mr.updated_at)),
);
}
if r.reviewing_mrs_truncated {
println!(
" {}",
Theme::dim().render("(truncated; rerun with a higher --limit)")
);
}
}
if !r.unresolved_discussions.is_empty() {
println!(
"{}",
render::section_divider(&format!(
"Unresolved Discussions ({})",
r.unresolved_discussions.len()
))
);
for disc in &r.unresolved_discussions {
println!(
" {} {} {} {}",
Theme::dim().render(&disc.entity_type),
Theme::info().render(&disc.ref_),
render::truncate(&disc.entity_title, 35),
Theme::dim().render(&render::format_relative_time(disc.last_note_at)),
);
}
if r.unresolved_discussions_truncated {
println!(
" {}",
Theme::dim().render("(truncated; rerun with a higher --limit)")
);
}
}
if r.assigned_issues.is_empty()
&& r.authored_mrs.is_empty()
&& r.reviewing_mrs.is_empty()
&& r.unresolved_discussions.is_empty()
{
println!();
println!(
" {}",
Theme::dim().render("No open work items found for this user.")
);
}
println!();
}
// ─── JSON Renderer: Workload ────────────────────────────────────────────────
pub(super) fn workload_to_json(r: &WorkloadResult) -> serde_json::Value {
serde_json::json!({
"username": r.username,
"assigned_issues": r.assigned_issues.iter().map(|i| serde_json::json!({
"iid": i.iid,
"ref": i.ref_,
"title": i.title,
"project_path": i.project_path,
"updated_at": ms_to_iso(i.updated_at),
})).collect::<Vec<_>>(),
"authored_mrs": r.authored_mrs.iter().map(|m| serde_json::json!({
"iid": m.iid,
"ref": m.ref_,
"title": m.title,
"draft": m.draft,
"project_path": m.project_path,
"updated_at": ms_to_iso(m.updated_at),
})).collect::<Vec<_>>(),
"reviewing_mrs": r.reviewing_mrs.iter().map(|m| serde_json::json!({
"iid": m.iid,
"ref": m.ref_,
"title": m.title,
"draft": m.draft,
"project_path": m.project_path,
"author_username": m.author_username,
"updated_at": ms_to_iso(m.updated_at),
})).collect::<Vec<_>>(),
"unresolved_discussions": r.unresolved_discussions.iter().map(|d| serde_json::json!({
"entity_type": d.entity_type,
"entity_iid": d.entity_iid,
"ref": d.ref_,
"entity_title": d.entity_title,
"project_path": d.project_path,
"last_note_at": ms_to_iso(d.last_note_at),
})).collect::<Vec<_>>(),
"summary": {
"assigned_issue_count": r.assigned_issues.len(),
"authored_mr_count": r.authored_mrs.len(),
"reviewing_mr_count": r.reviewing_mrs.len(),
"unresolved_discussion_count": r.unresolved_discussions.len(),
},
"truncation": {
"assigned_issues_truncated": r.assigned_issues_truncated,
"authored_mrs_truncated": r.authored_mrs_truncated,
"reviewing_mrs_truncated": r.reviewing_mrs_truncated,
"unresolved_discussions_truncated": r.unresolved_discussions_truncated,
}
})
}
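The four queries above all use the same limit-plus-one trick: fetch `limit + 1` rows, flag truncation if the sentinel row came back, then keep only `limit`. A standalone sketch of that pattern (the helper name is illustrative, not part of the codebase):

```rust
// Limit-plus-one truncation: the caller queried `limit + 1` rows; if the
// extra sentinel row is present, the result was truncated. Keep `limit` rows.
fn take_with_truncation<T>(mut rows: Vec<T>, limit: usize) -> (Vec<T>, bool) {
    let truncated = rows.len() > limit;
    rows.truncate(limit);
    (rows, truncated)
}
```

This avoids a separate `COUNT(*)` round trip per section at the cost of one extra fetched row.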


@@ -16,7 +16,9 @@ use std::io::IsTerminal;
GITLAB_TOKEN GitLab personal access token (or name set in config)
LORE_ROBOT Enable robot/JSON mode (non-empty, non-zero value)
LORE_CONFIG_PATH Override config file location
NO_COLOR Disable color output (any non-empty value)")]
NO_COLOR Disable color output (any non-empty value)
LORE_ICONS Override icon set: nerd, unicode, or ascii
NERD_FONTS Enable Nerd Font icons when set to a non-empty value")]
pub struct Cli {
/// Path to config file
#[arg(short = 'c', long, global = true, help = "Path to config file")]
@@ -135,19 +137,35 @@ pub enum Commands {
Count(CountArgs),
/// Show sync state
#[command(visible_alias = "st")]
#[command(
visible_alias = "st",
after_help = "\x1b[1mExamples:\x1b[0m
lore status # Show last sync times per project
lore --robot status # JSON output for automation"
)]
Status,
/// Verify GitLab authentication
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore auth # Verify token and show user info
lore --robot auth # JSON output for automation")]
Auth,
/// Check environment health
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore doctor # Check config, token, database, Ollama
lore --robot doctor # JSON output for automation")]
Doctor,
/// Show version information
Version,
/// Initialize configuration and database
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore init # Interactive setup
lore init --force # Overwrite existing config
lore --robot init --gitlab-url https://gitlab.com \\
--token-env-var GITLAB_TOKEN --projects group/repo # Non-interactive setup")]
Init {
/// Skip overwrite confirmation
#[arg(short = 'f', long)]
@@ -174,11 +192,14 @@ pub enum Commands {
default_project: Option<String>,
},
/// Back up local database (not yet implemented)
#[command(hide = true)]
Backup,
/// Reset local database (not yet implemented)
#[command(hide = true)]
Reset {
/// Skip confirmation prompt
#[arg(short = 'y', long)]
yes: bool,
},
@@ -202,9 +223,15 @@ pub enum Commands {
Sync(SyncArgs),
/// Run pending database migrations
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore migrate # Apply pending migrations
lore --robot migrate # JSON output for automation")]
Migrate,
/// Quick health check: config, database, schema version
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore health # Quick pre-flight check (exit 0 = healthy)
lore --robot health # JSON output for automation")]
Health,
/// Machine-readable command manifest for agent self-discovery
@@ -242,6 +269,10 @@ pub enum Commands {
Trace(TraceArgs),
/// Detect discussion divergence from original intent
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore drift issues 42 # Check drift on issue #42
lore drift issues 42 --threshold 0.3 # Custom similarity threshold
lore --robot drift issues 42 -p group/repo # JSON output, scoped to project")]
Drift {
/// Entity type (currently only "issues" supported)
#[arg(value_parser = ["issues"])]
@@ -259,6 +290,14 @@ pub enum Commands {
project: Option<String>,
},
/// Manage cron-based automatic syncing
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore cron install # Install cron job (every 8 minutes)
lore cron install --interval 15 # Custom interval
lore cron status # Check if cron is installed
lore cron uninstall # Remove cron job")]
Cron(CronArgs),
#[command(hide = true)]
List {
#[arg(value_parser = ["issues", "mrs"])]
@@ -344,7 +383,7 @@ pub struct IssuesArgs {
pub fields: Option<Vec<String>>,
/// Filter by state (opened, closed, all)
#[arg(short = 's', long, help_heading = "Filters")]
#[arg(short = 's', long, help_heading = "Filters", value_parser = ["opened", "closed", "all"])]
pub state: Option<String>,
/// Filter by project path
@@ -438,7 +477,7 @@ pub struct MrsArgs {
pub fields: Option<Vec<String>>,
/// Filter by state (opened, merged, closed, locked, all)
#[arg(short = 's', long, help_heading = "Filters")]
#[arg(short = 's', long, help_heading = "Filters", value_parser = ["opened", "merged", "closed", "locked", "all"])]
pub state: Option<String>,
/// Filter by project path
@@ -535,15 +574,6 @@ pub struct NotesArgs {
#[arg(long, help_heading = "Output", value_delimiter = ',')]
pub fields: Option<Vec<String>>,
/// Output format (table, json, jsonl, csv)
#[arg(
long,
default_value = "table",
value_parser = ["table", "json", "jsonl", "csv"],
help_heading = "Output"
)]
pub format: String,
/// Filter by author username
#[arg(short = 'a', long, help_heading = "Filters")]
pub author: Option<String>,
@@ -655,6 +685,11 @@ pub struct IngestArgs {
}
#[derive(Parser)]
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore stats # Show document and index statistics
lore stats --check # Run integrity checks
lore stats --repair --dry-run # Preview what repair would fix
lore --robot stats # JSON output for automation")]
pub struct StatsArgs {
/// Run integrity checks
#[arg(long, overrides_with = "no_check")]
@@ -743,6 +778,10 @@ pub struct SearchArgs {
}
#[derive(Parser)]
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore generate-docs # Generate docs for dirty entities
lore generate-docs --full # Full rebuild of all documents
lore generate-docs --full -p group/repo # Full rebuild for one project")]
pub struct GenerateDocsArgs {
/// Full rebuild: seed all entities into dirty queue, then drain
#[arg(long)]
@@ -805,9 +844,17 @@ pub struct SyncArgs {
/// Show detailed timing breakdown for sync stages
#[arg(short = 't', long = "timings")]
pub timings: bool,
/// Acquire file lock before syncing (skip if another sync is running)
#[arg(long)]
pub lock: bool,
}
#[derive(Parser)]
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore embed # Embed new/changed documents
lore embed --full # Re-embed all documents from scratch
lore embed --retry-failed # Retry previously failed embeddings")]
pub struct EmbedArgs {
/// Re-embed all documents (clears existing embeddings first)
#[arg(long, overrides_with = "no_full")]
@@ -1046,6 +1093,10 @@ pub struct TraceArgs {
}
#[derive(Parser)]
#[command(after_help = "\x1b[1mExamples:\x1b[0m
lore count issues # Total issues in local database
lore count notes --for mr # Notes on merge requests only
lore count discussions --for issue # Discussions on issues only")]
pub struct CountArgs {
/// Entity type to count (issues, mrs, discussions, notes, events)
#[arg(value_parser = ["issues", "mrs", "discussions", "notes", "events"])]
@@ -1055,3 +1106,25 @@ pub struct CountArgs {
#[arg(short = 'f', long = "for", value_parser = ["issue", "mr"])]
pub for_entity: Option<String>,
}
#[derive(Parser)]
pub struct CronArgs {
#[command(subcommand)]
pub action: CronAction,
}
#[derive(Subcommand)]
pub enum CronAction {
/// Install cron job for automatic syncing
Install {
/// Sync interval in minutes (default: 8)
#[arg(long, default_value = "8")]
interval: u32,
},
/// Remove cron job
Uninstall,
/// Show current cron configuration
Status,
}

src/core/cron.rs Normal file

@@ -0,0 +1,369 @@
use std::fs::{self, File};
use std::io::{self, Write};
use std::path::{Path, PathBuf};
use std::process::Command;
use serde::Serialize;
use super::error::{LoreError, Result};
use super::paths::get_data_dir;
const CRON_TAG: &str = "# lore-sync";
// ── File-based sync lock (fcntl F_SETLK) ──
/// RAII guard that holds an `fcntl` write lock on a file.
/// The lock is released when the guard is dropped.
pub struct SyncLockGuard {
_file: File,
}
/// Try to acquire an exclusive file lock (non-blocking).
///
/// Returns `Ok(Some(guard))` if the lock was acquired, `Ok(None)` if another
/// process holds it, or `Err` on I/O failure.
#[cfg(unix)]
pub fn acquire_sync_lock() -> Result<Option<SyncLockGuard>> {
acquire_sync_lock_at(&lock_path())
}
fn lock_path() -> PathBuf {
get_data_dir().join("sync.lock")
}
#[cfg(unix)]
fn acquire_sync_lock_at(path: &Path) -> Result<Option<SyncLockGuard>> {
use std::os::unix::io::AsRawFd;
if let Some(parent) = path.parent() {
fs::create_dir_all(parent)?;
}
let file = File::options()
.create(true)
.truncate(false)
.write(true)
.open(path)?;
let fd = file.as_raw_fd();
// SAFETY: zeroed memory is valid for libc::flock (all-zero is a valid
// representation on every Unix platform). We then set only the fields we need.
let mut flock = unsafe { std::mem::zeroed::<libc::flock>() };
flock.l_type = libc::F_WRLCK as libc::c_short;
flock.l_whence = libc::SEEK_SET as libc::c_short;
// SAFETY: fd is a valid open file descriptor; flock is stack-allocated.
let rc = unsafe { libc::fcntl(fd, libc::F_SETLK, &mut flock) };
if rc == -1 {
let err = io::Error::last_os_error();
if err.kind() == io::ErrorKind::WouldBlock
|| err.raw_os_error() == Some(libc::EAGAIN)
|| err.raw_os_error() == Some(libc::EACCES)
{
return Ok(None);
}
return Err(LoreError::Io(err));
}
Ok(Some(SyncLockGuard { _file: file }))
}
// ── Crontab management ──
/// The crontab entry that `lore cron install` writes.
///
/// Paths are single-quoted so spaces in binary or log paths don't break
/// the cron expression.
pub fn build_cron_entry(interval_minutes: u32) -> String {
let binary = std::env::current_exe()
.unwrap_or_else(|_| PathBuf::from("lore"))
.display()
.to_string();
let log_path = sync_log_path();
format!(
"*/{interval_minutes} * * * * '{binary}' sync -q --lock >> '{log}' 2>&1 {CRON_TAG}",
log = log_path.display(),
)
}
/// Path where cron-triggered sync output is appended.
pub fn sync_log_path() -> PathBuf {
get_data_dir().join("sync.log")
}
/// Read the current user crontab. Returns empty string when no crontab exists.
fn read_crontab() -> Result<String> {
let output = Command::new("crontab").arg("-l").output()?;
if output.status.success() {
Ok(String::from_utf8_lossy(&output.stdout).into_owned())
} else {
// exit 1 with "no crontab for <user>" is normal — treat as empty
Ok(String::new())
}
}
/// Write a full crontab string. Replaces the current crontab entirely.
fn write_crontab(content: &str) -> Result<()> {
let mut child = Command::new("crontab")
.arg("-")
.stdin(std::process::Stdio::piped())
.spawn()?;
if let Some(ref mut stdin) = child.stdin {
stdin.write_all(content.as_bytes())?;
}
let status = child.wait()?;
if !status.success() {
return Err(LoreError::Other(format!(
"crontab exited with status {status}"
)));
}
Ok(())
}
/// Install (or update) the lore-sync crontab entry.
pub fn install_cron(interval_minutes: u32) -> Result<CronInstallResult> {
let entry = build_cron_entry(interval_minutes);
let existing = read_crontab()?;
let replaced = existing.contains(CRON_TAG);
// Strip ALL old lore-sync lines first, then append one new entry.
// This is idempotent even if the crontab somehow has duplicate tagged lines.
let mut filtered: String = existing
.lines()
.filter(|line| !line.contains(CRON_TAG))
.collect::<Vec<_>>()
.join("\n");
if !filtered.is_empty() && !filtered.ends_with('\n') {
filtered.push('\n');
}
filtered.push_str(&entry);
filtered.push('\n');
write_crontab(&filtered)?;
Ok(CronInstallResult {
entry,
interval_minutes,
log_path: sync_log_path(),
replaced,
})
}
/// Remove the lore-sync crontab entry.
pub fn uninstall_cron() -> Result<CronUninstallResult> {
let existing = read_crontab()?;
if !existing.contains(CRON_TAG) {
return Ok(CronUninstallResult {
was_installed: false,
});
}
let new_crontab: String = existing
.lines()
.filter(|line| !line.contains(CRON_TAG))
.collect::<Vec<_>>()
.join("\n")
+ "\n";
// If the crontab would be empty (only whitespace), remove it entirely
if new_crontab.trim().is_empty() {
let status = Command::new("crontab").arg("-r").status()?;
if !status.success() {
return Err(LoreError::Other("crontab -r failed".to_string()));
}
} else {
write_crontab(&new_crontab)?;
}
Ok(CronUninstallResult {
was_installed: true,
})
}
/// Inspect the current crontab for a lore-sync entry.
pub fn cron_status() -> Result<CronStatusResult> {
let existing = read_crontab()?;
let lore_line = existing.lines().find(|l| l.contains(CRON_TAG));
match lore_line {
Some(line) => {
let interval = parse_interval(line);
let binary_path = parse_binary_path(line);
let current_exe = std::env::current_exe()
.ok()
.map(|p| p.display().to_string());
let binary_mismatch = current_exe
.as_ref()
.zip(binary_path.as_ref())
.is_some_and(|(current, cron)| current != cron);
Ok(CronStatusResult {
installed: true,
interval_minutes: interval,
binary_path,
current_binary: current_exe,
binary_mismatch,
log_path: Some(sync_log_path()),
cron_entry: Some(line.to_string()),
})
}
None => Ok(CronStatusResult {
installed: false,
interval_minutes: None,
binary_path: None,
current_binary: std::env::current_exe()
.ok()
.map(|p| p.display().to_string()),
binary_mismatch: false,
log_path: None,
cron_entry: None,
}),
}
}
/// Parse the interval from a cron expression like `*/8 * * * * ...`
fn parse_interval(line: &str) -> Option<u32> {
let first_field = line.split_whitespace().next()?;
if let Some(n) = first_field.strip_prefix("*/") {
n.parse().ok()
} else {
None
}
}
/// Parse the binary path from the cron entry after the 5 time fields.
///
/// Handles both quoted (`'/path with spaces/lore'`) and unquoted paths.
/// We skip the time fields manually to avoid `split_whitespace` breaking
/// on spaces inside single-quoted paths.
fn parse_binary_path(line: &str) -> Option<String> {
// Skip the 5 cron time fields (min hour dom month dow).
// These never contain spaces, so whitespace-splitting is safe here.
let mut rest = line;
for _ in 0..5 {
rest = rest.trim_start();
let end = rest.find(char::is_whitespace)?;
rest = &rest[end..];
}
rest = rest.trim_start();
// The command starts here — it may be single-quoted.
if let Some(after_quote) = rest.strip_prefix('\'') {
let end = after_quote.find('\'')?;
Some(after_quote[..end].to_string())
} else {
let end = rest.find(char::is_whitespace).unwrap_or(rest.len());
Some(rest[..end].to_string())
}
}
// ── Result types ──
#[derive(Serialize)]
pub struct CronInstallResult {
pub entry: String,
pub interval_minutes: u32,
pub log_path: PathBuf,
pub replaced: bool,
}
#[derive(Serialize)]
pub struct CronUninstallResult {
pub was_installed: bool,
}
#[derive(Serialize)]
pub struct CronStatusResult {
pub installed: bool,
pub interval_minutes: Option<u32>,
pub binary_path: Option<String>,
pub current_binary: Option<String>,
pub binary_mismatch: bool,
pub log_path: Option<PathBuf>,
pub cron_entry: Option<String>,
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn build_cron_entry_formats_correctly() {
let entry = build_cron_entry(8);
assert!(entry.starts_with("*/8 * * * * "));
assert!(entry.contains("sync -q --lock"));
assert!(entry.ends_with(CRON_TAG));
}
#[test]
fn parse_interval_extracts_number() {
assert_eq!(parse_interval("*/8 * * * * /usr/bin/lore sync"), Some(8));
assert_eq!(parse_interval("*/15 * * * * /usr/bin/lore sync"), Some(15));
assert_eq!(parse_interval("0 * * * * /usr/bin/lore sync"), None);
}
#[test]
fn parse_binary_path_extracts_sixth_field() {
// Unquoted path
assert_eq!(
parse_binary_path(
"*/8 * * * * /usr/local/bin/lore sync -q --lock >> /tmp/log 2>&1 # lore-sync"
),
Some("/usr/local/bin/lore".to_string())
);
// Single-quoted path without spaces
assert_eq!(
parse_binary_path(
"*/8 * * * * '/usr/local/bin/lore' sync -q --lock >> '/tmp/log' 2>&1 # lore-sync"
),
Some("/usr/local/bin/lore".to_string())
);
// Single-quoted path WITH spaces (common on macOS)
assert_eq!(
parse_binary_path(
"*/8 * * * * '/Users/Taylor Eernisse/.cargo/bin/lore' sync -q --lock >> '/tmp/log' 2>&1 # lore-sync"
),
Some("/Users/Taylor Eernisse/.cargo/bin/lore".to_string())
);
}
#[test]
fn sync_lock_at_nonexistent_dir_creates_parents() {
let dir = tempfile::tempdir().unwrap();
let lock_file = dir.path().join("nested").join("deep").join("sync.lock");
let guard = acquire_sync_lock_at(&lock_file).unwrap();
assert!(guard.is_some());
assert!(lock_file.exists());
}
#[test]
fn sync_lock_is_exclusive_across_processes() {
// POSIX fcntl locks are per-process, so same-process re-lock always
// succeeds. We verify cross-process exclusion using a Python child
// that attempts the same fcntl F_SETLK.
let dir = tempfile::tempdir().unwrap();
let lock_file = dir.path().join("sync.lock");
let _guard = acquire_sync_lock_at(&lock_file).unwrap().unwrap();
let script = r#"
import fcntl, struct, sys
fd = open(sys.argv[1], "w")
try:
fcntl.fcntl(fd, fcntl.F_SETLK, struct.pack("hhllhh", fcntl.F_WRLCK, 0, 0, 0, 0, 0))
sys.exit(0)
except (IOError, OSError):
sys.exit(1)
"#;
let status = std::process::Command::new("python3")
.args(["-c", script, &lock_file.display().to_string()])
.status()
.unwrap();
assert!(
!status.success(),
"child process should fail to acquire fcntl lock held by parent"
);
}
}
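The strip-then-append rewrite in `install_cron` can be sketched standalone; `upsert_entry` is an illustrative name, not part of the crate:

```rust
// Idempotent crontab upsert: drop every previously tagged line, then append
// exactly one fresh entry, so repeated installs never accumulate duplicates.
const CRON_TAG: &str = "# lore-sync";

fn upsert_entry(existing: &str, entry: &str) -> String {
    let mut filtered: String = existing
        .lines()
        .filter(|line| !line.contains(CRON_TAG))
        .collect::<Vec<_>>()
        .join("\n");
    // Re-add the trailing newline that `lines()` + `join` dropped.
    if !filtered.is_empty() && !filtered.ends_with('\n') {
        filtered.push('\n');
    }
    filtered.push_str(entry);
    filtered.push('\n');
    filtered
}
```

Filtering on the tag rather than tracking a stored entry means the rewrite self-heals even if the crontab somehow ends up with several tagged lines.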


@@ -44,15 +44,13 @@ pub fn resolve_rename_chain(
let mut fwd_stmt = conn.prepare_cached(forward_sql)?;
let forward: Vec<String> = fwd_stmt
.query_map(rusqlite::params![project_id, &current], |row| row.get(0))?
.filter_map(std::result::Result::ok)
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
// Backward: current was the new name -> discover old names
let mut bwd_stmt = conn.prepare_cached(backward_sql)?;
let backward: Vec<String> = bwd_stmt
.query_map(rusqlite::params![project_id, &current], |row| row.get(0))?
.filter_map(std::result::Result::ok)
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
for discovered in forward.into_iter().chain(backward) {
if visited.insert(discovered.clone()) {


@@ -1,5 +1,7 @@
pub mod backoff;
pub mod config;
#[cfg(unix)]
pub mod cron;
pub mod db;
pub mod dependent_queue;
pub mod error;


@@ -294,8 +294,7 @@ fn try_resolve_rename_ambiguity(
let old_paths: Vec<String> = stmt
.query_map(param_refs.as_slice(), |row| row.get(0))?
.filter_map(std::result::Result::ok)
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
// The newest path is a candidate that is NOT an old_path in any intra-chain rename.
let newest = candidates.iter().find(|c| !old_paths.contains(c));


@@ -1,4 +1,5 @@
use serde::Serialize;
use tracing::info;
use super::error::Result;
use super::file_history::resolve_rename_chain;
@@ -51,6 +52,9 @@ pub struct TraceResult {
pub renames_followed: bool,
pub trace_chains: Vec<TraceChain>,
pub total_chains: usize,
/// Diagnostic hints explaining why results may be empty.
#[serde(skip_serializing_if = "Vec::is_empty")]
pub hints: Vec<String>,
}
/// Run the trace query: file -> MR -> issue chain.
@@ -75,6 +79,14 @@ pub fn run_trace(
(vec![path.to_string()], false)
};
info!(
paths = all_paths.len(),
renames_followed,
"trace: resolved {} path(s) for '{}'",
all_paths.len(),
path
);
// Build placeholders for IN clause
let placeholders: Vec<String> = (0..all_paths.len())
.map(|i| format!("?{}", i + 2))
@@ -100,7 +112,7 @@ pub fn run_trace(
all_paths.len() + 2
);
let mut stmt = conn.prepare(&mr_sql)?;
let mut stmt = conn.prepare_cached(&mr_sql)?;
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
params.push(Box::new(project_id.unwrap_or(0)));
@@ -137,8 +149,14 @@ pub fn run_trace(
web_url: row.get(8)?,
})
})?
.filter_map(std::result::Result::ok)
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
info!(
mr_count = mr_rows.len(),
"trace: found {} MR(s) touching '{}'",
mr_rows.len(),
path
);
// Step 2: For each MR, find linked issues + optional discussions
let mut trace_chains = Vec::with_capacity(mr_rows.len());
@@ -152,6 +170,16 @@ pub fn run_trace(
Vec::new()
};
info!(
mr_iid = mr.iid,
issues = issues.len(),
discussions = discussions.len(),
"trace: MR !{}: {} issue(s), {} discussion(s)",
mr.iid,
issues.len(),
discussions.len()
);
trace_chains.push(TraceChain {
mr_iid: mr.iid,
mr_title: mr.title.clone(),
@@ -168,12 +196,20 @@ pub fn run_trace(
let total_chains = trace_chains.len();
// Build diagnostic hints when no results found
let hints = if total_chains == 0 {
build_trace_hints(conn, project_id, &all_paths)?
} else {
Vec::new()
};
Ok(TraceResult {
path: path.to_string(),
resolved_paths: all_paths,
renames_followed,
trace_chains,
total_chains,
hints,
})
}
@@ -191,7 +227,7 @@ fn fetch_linked_issues(conn: &rusqlite::Connection, mr_id: i64) -> Result<Vec<Tr
CASE er.reference_type WHEN 'closes' THEN 0 WHEN 'related' THEN 1 ELSE 2 END, \
i.iid";
let mut stmt = conn.prepare(sql)?;
let mut stmt = conn.prepare_cached(sql)?;
let issues: Vec<TraceIssue> = stmt
.query_map(rusqlite::params![mr_id], |row| {
Ok(TraceIssue {
@@ -202,8 +238,7 @@ fn fetch_linked_issues(conn: &rusqlite::Connection, mr_id: i64) -> Result<Vec<Tr
web_url: row.get(4)?,
})
})?
.filter_map(std::result::Result::ok)
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(issues)
}
@@ -225,11 +260,10 @@ fn fetch_trace_discussions(
WHERE d.merge_request_id = ?1 \
AND n.position_new_path IN ({in_clause}) \
AND n.is_system = 0 \
ORDER BY n.created_at DESC \
LIMIT 20"
ORDER BY n.created_at DESC"
);
let mut stmt = conn.prepare(&sql)?;
let mut stmt = conn.prepare_cached(&sql)?;
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
params.push(Box::new(mr_id));
@@ -251,12 +285,57 @@ fn fetch_trace_discussions(
created_at_iso: ms_to_iso(created_at),
})
})?
.filter_map(std::result::Result::ok)
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(discussions)
}
/// Build diagnostic hints explaining why a trace query returned no results.
fn build_trace_hints(
conn: &rusqlite::Connection,
project_id: Option<i64>,
paths: &[String],
) -> Result<Vec<String>> {
let mut hints = Vec::new();
// Check if mr_file_changes has ANY rows for this project
let has_file_changes: bool = if let Some(pid) = project_id {
conn.query_row(
"SELECT EXISTS(SELECT 1 FROM mr_file_changes WHERE project_id = ?1 LIMIT 1)",
rusqlite::params![pid],
|row| row.get(0),
)?
} else {
conn.query_row(
"SELECT EXISTS(SELECT 1 FROM mr_file_changes LIMIT 1)",
[],
|row| row.get(0),
)?
};
if !has_file_changes {
hints.push(
"No MR file changes have been synced yet. Run 'lore sync' to fetch file change data."
.to_string(),
);
return Ok(hints);
}
// File changes exist but none match these paths
let path_list = paths
.iter()
.map(|p| format!("'{p}'"))
.collect::<Vec<_>>()
.join(", ");
hints.push(format!(
"Searched paths [{}] were not found in MR file changes. \
The file may predate the sync window or use a different path.",
path_list
));
Ok(hints)
}
#[cfg(test)]
#[path = "trace_tests.rs"]
mod tests;


@@ -11,26 +11,29 @@ use lore::cli::autocorrect::{self, CorrectionResult};
use lore::cli::commands::{
IngestDisplay, InitInputs, InitOptions, InitResult, ListFilters, MrListFilters,
NoteListFilters, SearchCliFilters, SyncOptions, TimelineParams, open_issue_in_browser,
open_mr_in_browser, parse_trace_path, print_count, print_count_json, print_doctor_results,
print_drift_human, print_drift_json, print_dry_run_preview, print_dry_run_preview_json,
print_embed, print_embed_json, print_event_count, print_event_count_json, print_file_history,
print_file_history_json, print_generate_docs, print_generate_docs_json, print_ingest_summary,
print_ingest_summary_json, print_list_issues, print_list_issues_json, print_list_mrs,
print_list_mrs_json, print_list_notes, print_list_notes_csv, print_list_notes_json,
print_list_notes_jsonl, print_search_results, print_search_results_json, print_show_issue,
print_show_issue_json, print_show_mr, print_show_mr_json, print_stats, print_stats_json,
print_sync, print_sync_json, print_sync_status, print_sync_status_json, print_timeline,
print_timeline_json_with_meta, print_trace, print_trace_json, print_who_human, print_who_json,
query_notes, run_auth_test, run_count, run_count_events, run_doctor, run_drift, run_embed,
run_file_history, run_generate_docs, run_ingest, run_ingest_dry_run, run_init, run_list_issues,
run_list_mrs, run_search, run_show_issue, run_show_mr, run_stats, run_sync, run_sync_status,
run_timeline, run_who,
open_mr_in_browser, parse_trace_path, print_count, print_count_json, print_cron_install,
print_cron_install_json, print_cron_status, print_cron_status_json, print_cron_uninstall,
print_cron_uninstall_json, print_doctor_results, print_drift_human, print_drift_json,
print_dry_run_preview, print_dry_run_preview_json, print_embed, print_embed_json,
print_event_count, print_event_count_json, print_file_history, print_file_history_json,
print_generate_docs, print_generate_docs_json, print_ingest_summary, print_ingest_summary_json,
print_list_issues, print_list_issues_json, print_list_mrs, print_list_mrs_json,
print_list_notes, print_list_notes_json, print_search_results, print_search_results_json,
print_show_issue, print_show_issue_json, print_show_mr, print_show_mr_json, print_stats,
print_stats_json, print_sync, print_sync_json, print_sync_status, print_sync_status_json,
print_timeline, print_timeline_json_with_meta, print_trace, print_trace_json, print_who_human,
print_who_json, query_notes, run_auth_test, run_count, run_count_events, run_cron_install,
run_cron_status, run_cron_uninstall, run_doctor, run_drift, run_embed, run_file_history,
run_generate_docs, run_ingest, run_ingest_dry_run, run_init, run_list_issues, run_list_mrs,
run_search, run_show_issue, run_show_mr, run_stats, run_sync, run_sync_status, run_timeline,
run_who,
};
use lore::cli::render::{ColorMode, GlyphMode, Icons, LoreRenderer, Theme};
use lore::cli::robot::{RobotMeta, strip_schemas};
use lore::cli::{
Cli, Commands, CountArgs, EmbedArgs, FileHistoryArgs, GenerateDocsArgs, IngestArgs, IssuesArgs,
MrsArgs, NotesArgs, SearchArgs, StatsArgs, SyncArgs, TimelineArgs, TraceArgs, WhoArgs,
Cli, Commands, CountArgs, CronAction, CronArgs, EmbedArgs, FileHistoryArgs, GenerateDocsArgs,
IngestArgs, IssuesArgs, MrsArgs, NotesArgs, SearchArgs, StatsArgs, SyncArgs, TimelineArgs,
TraceArgs, WhoArgs,
};
use lore::core::db::{
LATEST_SCHEMA_VERSION, create_connection, get_schema_version, run_migrations,
@@ -203,6 +206,7 @@ async fn main() {
handle_file_history(cli.config.as_deref(), args, robot_mode)
}
Some(Commands::Trace(args)) => handle_trace(cli.config.as_deref(), args, robot_mode),
Some(Commands::Cron(args)) => handle_cron(cli.config.as_deref(), args, robot_mode),
Some(Commands::Drift {
entity_type,
iid,
@@ -922,21 +926,14 @@ fn handle_notes(
let result = query_notes(&conn, &filters, &config)?;
if robot_mode {
print_list_notes_json(
&result,
start.elapsed().as_millis() as u64,
args.fields.as_deref(),
);
} else {
print_list_notes(&result);
}
Ok(())
@@ -1642,6 +1639,7 @@ struct VersionOutput {
#[derive(Serialize)]
struct VersionData {
name: &'static str,
version: String,
#[serde(skip_serializing_if = "Option::is_none")]
git_hash: Option<String>,
@@ -1655,6 +1653,7 @@ fn handle_version(robot_mode: bool) -> Result<(), Box<dyn std::error::Error>> {
let output = VersionOutput {
ok: true,
data: VersionData {
name: "lore",
version,
git_hash: if git_hash.is_empty() {
None
@@ -2182,6 +2181,24 @@ async fn handle_sync_cmd(
return Ok(());
}
// Acquire file lock if --lock was passed (used by cron to skip overlapping runs)
let _sync_lock = if args.lock {
match lore::core::cron::acquire_sync_lock() {
Ok(Some(guard)) => Some(guard),
Ok(None) => {
// Another sync is running — silently exit (expected for cron)
tracing::debug!("--lock: another sync is running, skipping");
return Ok(());
}
Err(e) => {
tracing::warn!(error = %e, "--lock: failed to acquire file lock, skipping sync");
return Ok(());
}
}
} else {
None
};
let db_path = get_db_path(config.storage.db_path.as_deref());
let recorder_conn = create_connection(&db_path)?;
let run_id = uuid::Uuid::new_v4().simple().to_string();
@@ -2254,6 +2271,47 @@ async fn handle_sync_cmd(
}
}
/// Dispatch for `lore cron {install,uninstall,status}`: install/uninstall act
/// on the crontab directly; status additionally loads the config to check the
/// last-sync time. All three arms emit the robot JSON envelope in robot mode.
fn handle_cron(
config_override: Option<&str>,
args: CronArgs,
robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
let start = std::time::Instant::now();
match args.action {
CronAction::Install { interval } => {
let result = run_cron_install(interval)?;
let elapsed_ms = start.elapsed().as_millis() as u64;
if robot_mode {
print_cron_install_json(&result, elapsed_ms);
} else {
print_cron_install(&result);
}
}
CronAction::Uninstall => {
let result = run_cron_uninstall()?;
let elapsed_ms = start.elapsed().as_millis() as u64;
if robot_mode {
print_cron_uninstall_json(&result, elapsed_ms);
} else {
print_cron_uninstall(&result);
}
}
CronAction::Status => {
let config = Config::load(config_override)?;
let info = run_cron_status(&config)?;
let elapsed_ms = start.elapsed().as_millis() as u64;
if robot_mode {
print_cron_status_json(&info, elapsed_ms);
} else {
print_cron_status(&info);
}
}
}
Ok(())
}
#[derive(Serialize)]
struct HealthOutput {
ok: bool,