5 Commits

Author SHA1 Message Date
Taylor Eernisse
06229ce98b feat(cli): expose available_statuses in robot mode and hide status_category
(Supersedes empty commit f3788eb — jj auto-snapshot race.)

Four related refinements to how work item status is presented:

1. available_statuses in meta (list.rs, main.rs):
   Robot-mode issue list responses now include meta.available_statuses —
   a sorted array of all distinct status_name values in the database.
   Agents can use this to validate --status filter values or display
   valid options without a separate query.

2. Hide status_category from JSON (list.rs, show.rs):
   status_category is a GitLab internal classification that duplicates
   the state field. Switched to skip_serializing so it never appears
   in JSON output while remaining available internally.

3. Simplify human-readable status display (show.rs):
   Removed the "(category)" parenthetical from the Status line.

4. robot-docs schema updates (main.rs):
   Documented --status filter semantics and meta.available_statuses.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:24:41 -05:00
Taylor Eernisse
8d18552298 docs: add jj-first VCS policy to AGENTS.md
Establishes Jujutsu (jj) as the preferred VCS tool for this colocated
repo, matching the global Claude Code rules. Agents should use jj
equivalents for all git operations and only fall back to raw git for
hooks, LFS, submodules, or gh CLI interop.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:23:01 -05:00
Taylor Eernisse
f3788eb687 feat(cli): expose available_statuses in robot mode and hide status_category
Four related refinements to how work item status is presented:

1. available_statuses in meta (list.rs, main.rs):
   Robot-mode issue list responses now include meta.available_statuses —
   a sorted array of all distinct status_name values in the database.
   Agents can use this to validate --status filter values, offer
   autocomplete, or display valid options without a separate query.

2. Hide status_category from JSON (list.rs, show.rs):
   status_category (e.g. "open", "closed") is a GitLab internal
   classification that duplicates the state field and adds no actionable
   signal for consumers. Switched from skip_serializing_if to
   skip_serializing so it never appears in JSON output while remaining
   available internally for future use.

3. Simplify human-readable status display (show.rs):
   Removed the "(category)" parenthetical from the Status line in
   lore show issue output. The category was noise — users care about
   the board column label, not GitLab's internal taxonomy.

4. robot-docs schema updates (main.rs):
   Documented the --status filter semantics and the new
   meta.available_statuses field in the self-discovery manifest.
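As a consumer-side illustration (not part of lore itself; `validate_status` is a hypothetical helper name), an agent could check a `--status` value against `meta.available_statuses` before issuing a filtered query, using the case-insensitive match the docs describe:

```rust
// Return the canonical status_name matching a user-supplied filter value,
// mirroring the case-insensitive semantics documented for --status.
fn validate_status<'a>(input: &str, available: &'a [String]) -> Option<&'a str> {
    available
        .iter()
        .map(String::as_str)
        .find(|name| name.eq_ignore_ascii_case(input))
}

fn main() {
    // As if parsed from meta.available_statuses in a robot-mode response.
    let available = vec!["Blocked".to_string(), "In review".to_string()];

    assert_eq!(validate_status("in REVIEW", &available), Some("In review"));
    assert_eq!(validate_status("Done", &available), None); // not a known status
}
```

Returning the canonical `status_name` (rather than a bool) lets the caller echo the exact board column label back to the user.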

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:22:39 -05:00
Taylor Eernisse
e9af529f6e feat(ingestion): add progress reporting for status enrichment pipeline
Previously the status enrichment phase (GraphQL work item status fetch)
ran silently — users saw no feedback between "syncing issues" and the
final enrichment summary. For projects with hundreds of issues and
adaptive page-size retries, this felt like a hang.

Changes across three layers:

GraphQL (graphql.rs):
  - Extract fetch_issue_statuses_with_progress() accepting an optional
    on_page callback invoked after each paginated fetch with the
    running count of fetched IIDs
  - Original fetch_issue_statuses() preserved as a zero-cost
    delegation wrapper (no callback overhead)

Orchestrator (orchestrator.rs):
  - Three new ProgressEvent variants: StatusEnrichmentStarted,
    StatusEnrichmentPageFetched, StatusEnrichmentWriting
  - Wire the page callback through to the new _with_progress fn

CLI (ingest.rs):
  - Handle all three new events in the progress callback, updating
    both the per-project spinner and the stage bar with live counts

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:22:20 -05:00
Taylor Eernisse
70271c14d6 fix(core): ensure migration framework records schema version automatically
The migration runner now inserts (OR REPLACE) the schema_version row
after each successful migration batch, regardless of whether the
migration SQL itself contains a self-registering INSERT. This prevents
version tracking gaps when a .sql migration omits the bookkeeping
statement, which would leave the schema at an unrecorded version and
cause re-execution attempts on next startup.

Legacy migrations that already self-register are unaffected thanks to
the OR REPLACE conflict resolution.
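The idempotency argument can be sketched without a database (the `schema_version` table is modeled here as a map whose `insert` replaces on key conflict, standing in for `INSERT OR REPLACE`; `record_version` is an illustrative name, not the crate's API):

```rust
use std::collections::BTreeMap;

// schema_version modeled as a map keyed by version: insert() replaces on
// conflict, mirroring INSERT OR REPLACE on the version primary key.
fn record_version(table: &mut BTreeMap<u32, String>, version: u32, desc: &str) {
    table.insert(version, desc.to_string());
}

fn main() {
    let mut schema_version = BTreeMap::new();

    // Migration 1 self-registers (legacy behavior)...
    record_version(&mut schema_version, 1, "initial schema");
    // ...then the framework records it again: a harmless replace, no error.
    record_version(&mut schema_version, 1, "initial schema");

    // Migration 2 omits the bookkeeping INSERT entirely; the framework
    // still records it, so the next startup never sees an unrecorded
    // version and never re-executes the migration.
    record_version(&mut schema_version, 2, "add work item status columns");

    assert_eq!(schema_version.len(), 2);
}
```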

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-11 10:21:49 -05:00
8 changed files with 107 additions and 13 deletions

View File

@@ -14,6 +14,14 @@ If I tell you to do something, even if it goes against what follows below, YOU M
 ---
+## Version Control: jj-First (CRITICAL)
+**ALWAYS prefer jj (Jujutsu) over git for all VCS operations.** This is a colocated repo with both `.jj/` and `.git/`. When instructed to use git by anything — even later in this file — use the best jj replacement commands instead. Only fall back to raw `git` for things jj cannot do (hooks, LFS, submodules, `gh` CLI interop).
+See `~/.claude/rules/jj-vcs/` for the full command reference, translation table, revsets, patterns, and recovery recipes.
+---
 ## Irreversible Git & Filesystem Actions — DO NOT EVER BREAK GLASS
 > **Note:** Treat destructive commands as break-glass. If there's any doubt, stop and ask.

View File

@@ -532,8 +532,35 @@ async fn run_ingest_inner(
             ProgressEvent::MrDiffsFetchComplete { .. } => {
                 disc_bar_clone.finish_and_clear();
             }
+            ProgressEvent::StatusEnrichmentStarted => {
+                spinner_clone.set_message(format!(
+                    "{path_for_cb}: Enriching work item statuses..."
+                ));
+                stage_bar_clone.set_message(
+                    "Enriching work item statuses...".to_string()
+                );
+            }
+            ProgressEvent::StatusEnrichmentPageFetched { items_so_far } => {
+                spinner_clone.set_message(format!(
+                    "{path_for_cb}: Fetching statuses... ({items_so_far} work items)"
+                ));
+                stage_bar_clone.set_message(format!(
+                    "Enriching work item statuses... ({items_so_far} fetched)"
+                ));
+            }
+            ProgressEvent::StatusEnrichmentWriting { total } => {
+                spinner_clone.set_message(format!(
+                    "{path_for_cb}: Writing {total} statuses..."
+                ));
+                stage_bar_clone.set_message(format!(
+                    "Writing {total} work item statuses..."
+                ));
+            }
             ProgressEvent::StatusEnrichmentComplete { enriched, cleared } => {
                 if enriched > 0 || cleared > 0 {
+                    spinner_clone.set_message(format!(
+                        "{path_for_cb}: {enriched} statuses enriched, {cleared} cleared"
+                    ));
                     stage_bar_clone.set_message(format!(
                         "Status enrichment: {enriched} enriched, {cleared} cleared"
                     ));

View File

@@ -59,7 +59,7 @@ pub struct IssueListRow {
     pub unresolved_count: i64,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub status_name: Option<String>,
-    #[serde(skip_serializing_if = "Option::is_none")]
+    #[serde(skip_serializing)]
     pub status_category: Option<String>,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub status_color: Option<String>,
@@ -86,7 +86,7 @@ pub struct IssueListRowJson {
     pub project_path: String,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub status_name: Option<String>,
-    #[serde(skip_serializing_if = "Option::is_none")]
+    #[serde(skip_serializing)]
     pub status_category: Option<String>,
     #[serde(skip_serializing_if = "Option::is_none")]
     pub status_color: Option<String>,
@@ -124,6 +124,7 @@ impl From<&IssueListRow> for IssueListRowJson {
 pub struct ListResult {
     pub issues: Vec<IssueListRow>,
     pub total_count: usize,
+    pub available_statuses: Vec<String>,
 }

 #[derive(Serialize)]
@@ -268,10 +269,21 @@ pub fn run_list_issues(config: &Config, filters: ListFilters) -> Result<ListResu
     let db_path = get_db_path(config.storage.db_path.as_deref());
     let conn = create_connection(&db_path)?;
-    let result = query_issues(&conn, &filters)?;
+    let mut result = query_issues(&conn, &filters)?;
+    result.available_statuses = query_available_statuses(&conn)?;
     Ok(result)
 }

+fn query_available_statuses(conn: &Connection) -> Result<Vec<String>> {
+    let mut stmt = conn.prepare(
+        "SELECT DISTINCT status_name FROM issues WHERE status_name IS NOT NULL ORDER BY status_name",
+    )?;
+    let statuses = stmt
+        .query_map([], |row| row.get::<_, String>(0))?
+        .collect::<std::result::Result<Vec<_>, _>>()?;
+    Ok(statuses)
+}
+
 fn query_issues(conn: &Connection, filters: &ListFilters) -> Result<ListResult> {
     let mut where_clauses = Vec::new();
     let mut params: Vec<Box<dyn rusqlite::ToSql>> = Vec::new();
@@ -457,6 +469,7 @@ fn query_issues(conn: &Connection, filters: &ListFilters) -> Result<ListResult>
     Ok(ListResult {
         issues,
         total_count,
+        available_statuses: Vec::new(),
     })
 }
@@ -822,11 +835,13 @@ pub fn print_list_issues(result: &ListResult) {
 pub fn print_list_issues_json(result: &ListResult, elapsed_ms: u64, fields: Option<&[String]>) {
     let json_result = ListResultJson::from(result);
-    let meta = RobotMeta { elapsed_ms };
     let output = serde_json::json!({
         "ok": true,
         "data": json_result,
-        "meta": meta,
+        "meta": {
+            "elapsed_ms": elapsed_ms,
+            "available_statuses": result.available_statuses,
+        },
     });
     let mut output = output;
     if let Some(f) = fields {

View File

@@ -628,13 +628,9 @@ pub fn print_show_issue(issue: &IssueDetail) {
     println!("State: {}", state_styled);

     if let Some(status) = &issue.status_name {
-        let display = match &issue.status_category {
-            Some(cat) => format!("{status} ({})", cat.to_ascii_lowercase()),
-            None => status.clone(),
-        };
         println!(
             "Status: {}",
-            style_with_hex(&display, issue.status_color.as_deref())
+            style_with_hex(status, issue.status_color.as_deref())
         );
     }
@@ -944,6 +940,7 @@ pub struct IssueDetailJson {
     pub closing_merge_requests: Vec<ClosingMrRefJson>,
     pub discussions: Vec<DiscussionDetailJson>,
     pub status_name: Option<String>,
+    #[serde(skip_serializing)]
     pub status_category: Option<String>,
     pub status_color: Option<String>,
     pub status_icon_name: Option<String>,

View File

@@ -143,6 +143,20 @@ pub fn run_migrations(conn: &Connection) -> Result<()> {
         match conn.execute_batch(sql) {
             Ok(()) => {
+                // Framework-managed version bookkeeping: ensures the version is
+                // always recorded even if a migration .sql omits the INSERT.
+                // Uses OR REPLACE so legacy migrations that self-register are harmless.
+                conn.execute(
+                    "INSERT OR REPLACE INTO schema_version (version, applied_at, description) \
+                     VALUES (?1, strftime('%s', 'now') * 1000, ?2)",
+                    rusqlite::params![version, version_str],
+                )
+                .map_err(|e| LoreError::MigrationFailed {
+                    version,
+                    message: format!("Failed to record schema version: {e}"),
+                    source: Some(e),
+                })?;
+
                 conn.execute_batch(&format!("RELEASE {}", savepoint_name))
                     .map_err(|e| LoreError::MigrationFailed {
                         version,

View File

@@ -233,6 +233,14 @@ fn is_complexity_or_timeout_error(msg: &str) -> bool {
 pub async fn fetch_issue_statuses(
     client: &GraphqlClient,
     project_path: &str,
+) -> crate::core::error::Result<FetchStatusResult> {
+    fetch_issue_statuses_with_progress(client, project_path, None).await
+}
+
+pub async fn fetch_issue_statuses_with_progress(
+    client: &GraphqlClient,
+    project_path: &str,
+    on_page: Option<&dyn Fn(usize)>,
 ) -> crate::core::error::Result<FetchStatusResult> {
     let mut statuses = std::collections::HashMap::new();
     let mut all_fetched_iids = std::collections::HashSet::new();
@@ -327,6 +335,10 @@ pub async fn fetch_issue_statuses(
             }
         }

+        if let Some(cb) = &on_page {
+            cb(all_fetched_iids.len());
+        }
+
         // Pagination
         if !connection.page_info.has_next_page {
             break;

View File

@@ -45,6 +45,9 @@ pub enum ProgressEvent {
     MrDiffsFetchStarted { total: usize },
     MrDiffFetched { current: usize, total: usize },
     MrDiffsFetchComplete { fetched: usize, failed: usize },
+    StatusEnrichmentStarted,
+    StatusEnrichmentPageFetched { items_so_far: usize },
+    StatusEnrichmentWriting { total: usize },
     StatusEnrichmentComplete { enriched: usize, cleared: usize },
     StatusEnrichmentSkipped,
 }
@@ -150,6 +153,8 @@ pub async fn ingest_project_issues_with_progress(
     if config.sync.fetch_work_item_status && !signal.is_cancelled() {
         use rusqlite::OptionalExtension;

+        emit(ProgressEvent::StatusEnrichmentStarted);
+
         let project_path: Option<String> = conn
             .query_row(
                 "SELECT path_with_namespace FROM projects WHERE id = ?1",
@@ -170,7 +175,16 @@ pub async fn ingest_project_issues_with_progress(
             }
             Some(path) => {
                 let graphql_client = client.graphql_client();
-                match crate::gitlab::graphql::fetch_issue_statuses(&graphql_client, &path).await {
+                let page_cb = |items_so_far: usize| {
+                    emit(ProgressEvent::StatusEnrichmentPageFetched { items_so_far });
+                };
+                match crate::gitlab::graphql::fetch_issue_statuses_with_progress(
+                    &graphql_client,
+                    &path,
+                    Some(&page_cb),
+                )
+                .await
+                {
                     Ok(fetch_result) => {
                         if let Some(ref reason) = fetch_result.unsupported_reason {
                             result.status_enrichment_mode = "unsupported".into();
@@ -199,6 +213,9 @@ pub async fn ingest_project_issues_with_progress(
                                 cleared: 0,
                             });
                         } else {
+                            emit(ProgressEvent::StatusEnrichmentWriting {
+                                total: fetch_result.all_fetched_iids.len(),
+                            });
                             match enrich_issue_statuses_txn(
                                 conn,
                                 project_id,

View File

@@ -2109,11 +2109,15 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box<dyn std::e
       "description": "List or show issues",
       "flags": ["<IID>", "-n/--limit", "--fields <list>", "-s/--state", "--status <name>", "-p/--project", "-a/--author", "-A/--assignee", "-l/--label", "-m/--milestone", "--since", "--due-before", "--has-due", "--no-has-due", "--sort", "--asc", "--no-asc", "-o/--open", "--no-open"],
       "example": "lore --robot issues --state opened --limit 10",
+      "notes": {
+        "status_filter": "--status filters by work item status NAME (case-insensitive). Valid values are in meta.available_statuses of any issues list response.",
+        "status_name": "status_name is the board column label (e.g. 'In review', 'Blocked'). This is the canonical status identifier for filtering."
+      },
       "response_schema": {
         "list": {
           "ok": "bool",
-          "data": {"issues": "[{iid:int, title:string, state:string, author_username:string, labels:[string], assignees:[string], discussion_count:int, unresolved_count:int, created_at_iso:string, updated_at_iso:string, web_url:string?, project_path:string}]", "total_count": "int", "showing": "int"},
-          "meta": {"elapsed_ms": "int"}
+          "data": {"issues": "[{iid:int, title:string, state:string, author_username:string, labels:[string], assignees:[string], discussion_count:int, unresolved_count:int, created_at_iso:string, updated_at_iso:string, web_url:string?, project_path:string, status_name:string?}]", "total_count": "int", "showing": "int"},
+          "meta": {"elapsed_ms": "int", "available_statuses": "[string] — all distinct status names in the database, for use with --status filter"}
         },
         "show": {
           "ok": "bool",