gitlore/specs/SPEC_explain.md
2026-03-10 13:27:39 -04:00

34 KiB

Spec: lore explain — Auto-Generated Issue/MR Narratives

Bead: bd-9lbr Created: 2026-03-10

Spec Status

| Section | Status | Notes |
|---|---|---|
| Objective | complete | |
| Tech Stack | complete | |
| Project Structure | complete | |
| Commands | complete | |
| Code Style | complete | UX-audited: after_help, --sections, --since, --no-timeline, --max-decisions, singular types |
| Boundaries | complete | |
| Testing Strategy | complete | 13 test cases (7 original + 5 UX flags + 1 singular type) |
| Git Workflow | complete | jj-first |
| User Journeys | complete | 3 journeys covering agent, human, pipeline use |
| Architecture | complete | ExplainParams + section filtering + time scoping |
| Success Criteria | complete | 15 criteria (10 original + 5 UX flags) |
| Non-Goals | complete | |
| Tasks | complete | 5 tasks across 3 phases, all updated for UX flags |

Definition of Complete: All sections complete, Open Questions empty, every user journey has tasks, every task has TDD workflow and acceptance criteria.


Quick Reference

  • [Entity Detail] (Architecture): reuse show/ query patterns (private — copy, don't import)
  • [Timeline] (Architecture): import crate::timeline::seed::seed_timeline_direct + collect_events
  • [Events] (Architecture): new inline queries against resource_state_events/resource_label_events
  • [References] (Architecture): new query against entity_references table
  • [Discussions] (Architecture): adapted from show/ patterns, add resolved/resolvable filter

Open Questions (Resolve Before Implementation)

(none)

Objective

Goal: Add lore explain issues N / lore explain mrs N to auto-generate structured narratives of what happened on an issue or MR.

Problem: Understanding the full story of an issue/MR requires reading dozens of notes, cross-referencing state changes, checking related entities, and piecing together a timeline. This is time-consuming for humans and nearly impossible for AI agents without custom orchestration.

Success metrics:

  • Produces a complete narrative in <500ms for an issue with 50 notes
  • All 7 sections populated (entity, description_excerpt, key_decisions, activity, open_threads, related, timeline_excerpt)
  • Works fully offline (no API calls, no LLM)
  • Deterministic and reproducible (same input = same output)

Tech Stack & Constraints

| Layer | Technology | Version |
|---|---|---|
| Language | Rust | nightly-2026-03-01 (rust-toolchain.toml) |
| Framework | clap (derive) | As in Cargo.toml |
| Database | SQLite via rusqlite | Bundled |
| Testing | cargo test | Inline #[cfg(test)] |
| Async | asupersync | 0.2 |

Constraints:

  • No LLM dependency — template-based, deterministic
  • No network calls — all data from local SQLite
  • Performance: <500ms for 50-note entity
  • Unsafe code forbidden (#![forbid(unsafe_code)])

Project Structure

src/cli/commands/
  explain.rs            # NEW: command module (queries, heuristic, result types)
src/cli/
  mod.rs                # EDIT: add Explain variant to Commands enum
src/app/
  handlers.rs           # EDIT: add handle_explain dispatch
  robot_docs.rs         # EDIT: register explain in robot-docs manifest
src/main.rs             # EDIT: add Explain match arm

Commands

# Build
cargo check --all-targets

# Test
cargo test explain

# Lint
cargo clippy --all-targets -- -D warnings

# Format
cargo fmt --check

Code Style

Command registration (from cli/mod.rs):

/// Auto-generate a structured narrative of an issue or MR
#[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore explain issues 42                  # Narrative for issue #42
  lore explain mrs 99 -p group/repo       # Narrative for MR !99 in specific project
  lore -J explain issues 42               # JSON output for automation
  lore explain issues 42 --sections key_decisions,open_threads  # Specific sections only
  lore explain issues 42 --since 30d      # Narrative scoped to last 30 days
  lore explain issues 42 --no-timeline    # Skip timeline (faster)")]
Explain {
    /// Entity type: "issues" or "mrs" (singular forms also accepted)
    #[arg(value_parser = ["issues", "mrs", "issue", "mr"])]
    entity_type: String,

    /// Entity IID
    iid: i64,

    /// Scope to project (fuzzy match)
    #[arg(short, long)]
    project: Option<String>,

    /// Select specific sections (comma-separated)
    /// Valid: entity, description, key_decisions, activity, open_threads, related, timeline
    #[arg(long, value_delimiter = ',', help_heading = "Output")]
    sections: Option<Vec<String>>,

    /// Skip timeline excerpt (faster execution)
    #[arg(long, help_heading = "Output")]
    no_timeline: bool,

    /// Maximum key decisions to include
    #[arg(long, default_value = "10", help_heading = "Output")]
    max_decisions: usize,

    /// Time scope for events/notes (e.g. 7d, 2w, 1m, or YYYY-MM-DD)
    #[arg(long, help_heading = "Filters")]
    since: Option<String>,
},

Entity type normalization: The handler must normalize singular forms: "issue" -> "issues", "mr" -> "mrs". This prevents common typos from causing errors.
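A minimal sketch of that normalization (the helper name is illustrative; clap's value_parser already restricts input to the four accepted strings):

```rust
/// Map accepted singular forms onto the canonical plural entity type.
/// Hypothetical helper; clap's value_parser guarantees `raw` is one of
/// the four accepted strings before this runs.
fn normalize_entity_type(raw: &str) -> &'static str {
    match raw {
        "issue" | "issues" => "issues",
        "mr" | "mrs" => "mrs",
        other => unreachable!("value_parser rejects {other:?}"),
    }
}
```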

Query pattern (from show/issue.rs):

fn find_issue(conn: &Connection, iid: i64, project_filter: Option<&str>) -> Result<IssueRow> {
    let project_id = resolve_project(conn, project_filter)?;
    let mut stmt = conn.prepare_cached("SELECT ... FROM issues WHERE iid = ?1 AND project_id = ?2")?;
    // ...
}

Robot mode output (from cli/robot.rs):

let response = serde_json::json!({
    "ok": true,
    "data": result,
    "meta": { "elapsed_ms": elapsed.as_millis() }
});
println!("{}", serde_json::to_string(&response)?);

Boundaries

Always (autonomous)

  • Run cargo test explain and cargo clippy after every code change
  • Follow existing query patterns from show/issue.rs and show/mr.rs
  • Use resolve_project() for project resolution (fuzzy match)
  • Cap key_decisions at --max-decisions (default 10), timeline_excerpt at 20 events
  • Normalize singular entity types (issue -> issues, mr -> mrs)
  • Respect --sections filter: omit unselected sections from output (both robot and human)
  • Respect --since filter: scope events/notes queries with created_at >= ? threshold

Ask First (needs approval)

  • Adding new dependencies to Cargo.toml
  • Modifying existing query functions in show/ or timeline/
  • Changing the entity_references table schema

Never (hard stops)

  • No LLM calls — explain must be deterministic
  • No API/network calls — fully offline
  • No new database migrations — use existing schema only
  • Do not modify show/ or timeline/ modules (copy patterns instead)

Testing Strategy (TDD — Red-Green)

Methodology: Test-Driven Development. Write tests first, confirm red, implement, confirm green.

Framework: cargo test, inline #[cfg(test)] Location: src/cli/commands/explain.rs (inline test module)

Test categories:

  • Unit tests: key-decisions heuristic, activity counting, description truncation
  • Integration tests: full explain pipeline with in-memory DB

User journey test mapping:

| Journey | Test | Scenarios |
|---|---|---|
| UJ-1: Agent explains issue | test_explain_issue_basic | All 7 sections present, robot JSON valid |
| UJ-1: Agent explains MR | test_explain_mr | entity.type = "merge_request", merged_at included |
| UJ-1: Singular entity type | test_explain_singular_entity_type | "issue" normalizes to "issues" |
| UJ-1: Section filtering | test_explain_sections_filter_robot | Only selected sections in output |
| UJ-1: No-timeline flag | test_explain_no_timeline_flag | timeline_excerpt is None |
| UJ-2: Human reads narrative | (human render tested manually) | Headers, indentation, color |
| UJ-3: Key decisions | test_explain_key_decision_heuristic | Note within 60min of state change by same actor |
| UJ-3: No false decisions | test_explain_key_decision_ignores_unrelated_notes | Different author's note excluded |
| UJ-3: Max decisions cap | test_explain_max_decisions | Respects --max-decisions parameter |
| UJ-3: Since scopes events | test_explain_since_scopes_events | Only recent events included |
| UJ-3: Open threads | test_explain_open_threads | Only unresolved discussions in output |
| UJ-3: Edge case | test_explain_no_notes | Empty sections, no panic |
| UJ-3: Activity counts | test_explain_activity_counts | Correct state/label/note counts |

Git Workflow

  • jj-first — all VCS via jj, not git
  • Commit format: feat(explain): <description>
  • No branches — commit in place, use jj bookmarks to push

User Journeys (Prioritized)

P1 — Critical

  • UJ-1: Agent queries issue/MR narrative
    • Actor: AI agent (via robot mode)
    • Flow: lore -J explain issues 42 → JSON with 7 sections → agent parses and acts
    • Error paths: Issue not found (exit 17), ambiguous project (exit 18)
    • Implemented by: Task 1, 2, 3, 4

P2 — Important

  • UJ-2: Human reads explain output
    • Actor: Developer at terminal
    • Flow: lore explain issues 42 → formatted narrative with headers, colors, indentation
    • Error paths: Same as UJ-1 but with human-readable error messages
    • Implemented by: Task 5

P3 — Nice to Have

  • UJ-3: Agent uses key-decisions to understand context
    • Actor: AI agent making decisions
    • Flow: Parse key_decisions array → understand who decided what and when → inform action
    • Error paths: No key decisions found (empty array, not error)
    • Implemented by: Task 3

Architecture / Data Model

Data Assembly Pipeline (sync, no async needed)

1. RESOLVE    → resolve_project() + find entity by IID
2. PARSE      → normalize entity_type, parse --since, validate --sections
3. DETAIL     → entity metadata (title, state, author, labels, assignees, status)
4. EVENTS     → resource_state_events + resource_label_events (optionally --since scoped)
5. NOTES      → non-system notes via discussions join (optionally --since scoped)
6. HEURISTIC  → key_decisions = events correlated with notes by same actor within 60min
7. THREADS    → discussions WHERE resolvable=1 AND resolved=0
8. REFERENCES → entity_references (both directions: source and target)
9. TIMELINE   → seed_timeline_direct + collect_events (capped at 20, skip if --no-timeline)
10. FILTER    → apply --sections filter: drop unselected sections before serialization
11. ASSEMBLE  → combine into ExplainResult

Section filtering: When --sections is provided, only the listed sections are populated. Unselected sections are set to their zero-value (None, empty vec, etc.) and omitted from robot JSON via #[serde(skip_serializing_if = "...")]. The entity section is always included (needed for identification). Human mode skips rendering unselected sections.
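The selection check described above can be sketched as follows (helper name hypothetical; section names follow the spec's allowed set):

```rust
/// Decide whether a section should be populated under an optional
/// --sections filter. Hypothetical helper: "entity" is unconditionally
/// included, and no filter means every section is populated.
fn section_selected(filter: Option<&[String]>, name: &str) -> bool {
    if name == "entity" {
        return true; // always needed for identification
    }
    match filter {
        None => true, // no --sections flag: populate everything
        Some(names) => names.iter().any(|n| n == name),
    }
}
```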

Time scoping: When --since is provided, parse it using crate::core::time::parse_since() (same function used by timeline, me, file-history). Add AND created_at >= ? to events and notes queries. The entity header, references, and open threads are NOT time-scoped (they represent current state, not historical events).
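For illustration only, relative-suffix parsing like "7d"/"2w"/"1m" might look like the sketch below; the real implementation is crate::core::time::parse_since(), which also accepts YYYY-MM-DD dates and returns an absolute ms-epoch threshold. Treating a month as 30 days is an assumption of this sketch.

```rust
/// Illustrative sketch of relative-duration parsing into a millisecond
/// span. Not the real parse_since(); it only shows the suffix handling.
fn relative_span_ms(s: &str) -> Option<i64> {
    const DAY_MS: i64 = 24 * 60 * 60 * 1000;
    let (num, mult) = if let Some(n) = s.strip_suffix('d') {
        (n, DAY_MS)
    } else if let Some(n) = s.strip_suffix('w') {
        (n, 7 * DAY_MS)
    } else if let Some(n) = s.strip_suffix('m') {
        (n, 30 * DAY_MS) // assumption: months approximated as 30 days
    } else {
        return None;
    };
    num.parse::<i64>().ok().map(|n| n * mult)
}
```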

Key Types

/// Parameters controlling explain behavior.
pub struct ExplainParams {
    pub entity_type: String,       // "issues" or "mrs" (already normalized)
    pub iid: i64,
    pub project: Option<String>,
    pub sections: Option<Vec<String>>,  // None = all sections
    pub no_timeline: bool,
    pub max_decisions: usize,      // default 10
    pub since: Option<i64>,        // ms epoch threshold from --since parsing
}

#[derive(Debug, Serialize)]
pub struct ExplainResult {
    pub entity: EntitySummary,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub description_excerpt: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub key_decisions: Option<Vec<KeyDecision>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub activity: Option<ActivitySummary>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub open_threads: Option<Vec<OpenThread>>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub related: Option<RelatedEntities>,
    #[serde(skip_serializing_if = "Option::is_none")]
    pub timeline_excerpt: Option<Vec<TimelineEventSummary>>,
}

#[derive(Debug, Serialize)]
pub struct EntitySummary {
    #[serde(rename = "type")]
    pub entity_type: String,    // "issue" or "merge_request"
    pub iid: i64,
    pub title: String,
    pub state: String,
    pub author: String,
    pub assignees: Vec<String>,
    pub labels: Vec<String>,
    pub created_at: String,     // ISO 8601
    pub updated_at: String,     // ISO 8601
    pub url: Option<String>,
    pub status_name: Option<String>,
}

#[derive(Debug, Serialize)]
pub struct KeyDecision {
    pub timestamp: String,      // ISO 8601
    pub actor: String,
    pub action: String,         // "state: opened -> closed" or "label: +bug"
    pub context_note: String,   // truncated to 500 chars
}

#[derive(Debug, Serialize)]
pub struct ActivitySummary {
    pub state_changes: usize,
    pub label_changes: usize,
    pub notes: usize,           // non-system only
    pub first_event: Option<String>,   // ISO 8601
    pub last_event: Option<String>,    // ISO 8601
}

#[derive(Debug, Serialize)]
pub struct OpenThread {
    pub discussion_id: String,
    pub started_by: String,
    pub started_at: String,     // ISO 8601
    pub note_count: usize,
    pub last_note_at: String,   // ISO 8601
}

#[derive(Debug, Serialize)]
pub struct RelatedEntities {
    pub closing_mrs: Vec<ClosingMrInfo>,
    pub related_issues: Vec<RelatedEntityInfo>,
}

#[derive(Debug, Serialize)]
pub struct TimelineEventSummary {
    pub timestamp: String,      // ISO 8601
    pub event_type: String,
    pub actor: Option<String>,
    pub summary: String,
}
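The context_note field in KeyDecision above is truncated to 500 chars; a byte-boundary-safe sketch, assuming UTF-8 note bodies (the spec also allows reusing crate::cli::render::truncate() instead of a local helper):

```rust
/// Truncate a note body to at most `max` bytes without splitting a
/// UTF-8 character, appending an ellipsis when cut. Local sketch only;
/// crate::cli::render::truncate() is the suggested alternative.
fn truncate_note(body: &str, max: usize) -> String {
    if body.len() <= max {
        return body.to_string();
    }
    let mut end = max;
    while !body.is_char_boundary(end) {
        end -= 1; // back up to the nearest valid boundary
    }
    format!("{}…", &body[..end])
}
```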

Key Decisions Heuristic

The heuristic identifies notes that explain WHY state/label changes were made:

  1. Collect all resource_state_events and resource_label_events for the entity
  2. Merge into unified chronological list with (timestamp, actor, description)
  3. For each event, find the FIRST non-system note by the SAME actor within 60 minutes AFTER the event
  4. Pair them as a KeyDecision
  5. Cap at params.max_decisions (default 10)
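The correlation step above can be sketched over already-fetched, chronologically sorted rows (struct and function names are illustrative, not the spec's final types):

```rust
/// One merged state/label event and one candidate note, both carrying
/// ms-epoch timestamps. Illustrative types for the sketch only.
#[derive(Clone, Debug)]
struct Ev { ts: i64, actor: String, action: String }
#[derive(Clone, Debug)]
struct Note { ts: i64, author: String, body: String }

const WINDOW_MS: i64 = 60 * 60 * 1000; // 60 minutes

/// Pair each event with the first note by the same actor posted within
/// 60 minutes AFTER the event, capped at `max` pairs.
fn correlate(events: &[Ev], notes: &[Note], max: usize) -> Vec<(Ev, Note)> {
    let mut out = Vec::new();
    for ev in events {
        let hit = notes.iter().find(|n| {
            n.author == ev.actor && n.ts >= ev.ts && n.ts - ev.ts <= WINDOW_MS
        });
        if let Some(n) = hit {
            out.push((ev.clone(), n.clone()));
            if out.len() == max {
                break;
            }
        }
    }
    out
}
```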

SQL for state events:

SELECT state, actor_username, created_at
FROM resource_state_events
WHERE issue_id = ?1   -- or merge_request_id = ?1
  AND (?2 IS NULL OR created_at >= ?2)   -- --since filter
ORDER BY created_at ASC

SQL for label events:

SELECT action, label_name, actor_username, created_at
FROM resource_label_events
WHERE issue_id = ?1   -- or merge_request_id = ?1
  AND (?2 IS NULL OR created_at >= ?2)   -- --since filter
ORDER BY created_at ASC

SQL for non-system notes (for correlation):

SELECT n.body, n.author_username, n.created_at
FROM notes n
JOIN discussions d ON n.discussion_id = d.id
WHERE d.noteable_type = ?1 AND d.issue_id = ?2   -- or d.merge_request_id
  AND n.is_system = 0
  AND (?3 IS NULL OR n.created_at >= ?3)   -- --since filter
ORDER BY n.created_at ASC

Entity ID resolution: The discussions table uses issue_id / merge_request_id columns (CHECK constraint: exactly one non-NULL). The resource_state_events and resource_label_events tables use the same pattern.

Cross-References Query

-- Outgoing references (this entity references others)
SELECT target_entity_type, target_entity_id, target_project_path,
       target_entity_iid, reference_type, source_method
FROM entity_references
WHERE source_entity_type = ?1 AND source_entity_id = ?2

-- Incoming references (others reference this entity)
SELECT source_entity_type, source_entity_id,
       reference_type, source_method
FROM entity_references
WHERE target_entity_type = ?1 AND target_entity_id = ?2

Note: For closing MRs, reuse the pattern from show/issue.rs get_closing_mrs() which queries entity_references with reference_type = 'closes'.

Open Threads Query

SELECT d.gitlab_discussion_id, d.first_note_at, d.last_note_at
FROM discussions d
WHERE d.issue_id = ?1   -- or d.merge_request_id
  AND d.resolvable = 1
  AND d.resolved = 0
ORDER BY d.last_note_at DESC

Then for each discussion, fetch the first note's author:

SELECT author_username, created_at
FROM notes
WHERE discussion_id = ?1
ORDER BY created_at ASC
LIMIT 1

And count notes per discussion:

SELECT COUNT(*) FROM notes WHERE discussion_id = ?1 AND is_system = 0

Robot Mode Output Schema

{
  "ok": true,
  "data": {
    "entity": {
      "type": "issue", "iid": 3864, "title": "...", "state": "opened",
      "author": "teernisse", "assignees": ["teernisse"],
      "labels": ["customer:BNSF"], "created_at": "2026-01-10T...",
      "updated_at": "2026-02-12T...", "url": "...", "status_name": "In progress"
    },
    "description_excerpt": "First 500 chars...",
    "key_decisions": [{
      "timestamp": "2026-01-15T...",
      "actor": "teernisse",
      "action": "state: opened -> closed",
      "context_note": "Starting work on the integration..."
    }],
    "activity": {
      "state_changes": 3, "label_changes": 5, "notes": 42,
      "first_event": "2026-01-10T...", "last_event": "2026-02-12T..."
    },
    "open_threads": [{
      "discussion_id": "abc123",
      "started_by": "cseiber",
      "started_at": "2026-02-01T...",
      "note_count": 5,
      "last_note_at": "2026-02-10T..."
    }],
    "related": {
      "closing_mrs": [{ "iid": 200, "title": "...", "state": "merged" }],
      "related_issues": [{ "iid": 3800, "title": "Rail Break Card", "type": "related" }]
    },
    "timeline_excerpt": [
      { "timestamp": "...", "event_type": "state_changed", "actor": "teernisse", "summary": "State changed to closed" }
    ]
  },
  "meta": { "elapsed_ms": 350 }
}

Success Criteria

| # | Criterion | Input | Expected Output |
|---|---|---|---|
| 1 | Issue explain produces all 7 sections | lore -J explain issues N | JSON with entity, description_excerpt, key_decisions, activity, open_threads, related, timeline_excerpt |
| 2 | MR explain produces all 7 sections | lore -J explain mrs N | Same shape, entity.type = "merge_request" |
| 3 | Key decisions captures correlated notes | State change + note by same actor within 60min | KeyDecision with action + context_note |
| 4 | Key decisions ignores unrelated notes | Note by different author near state change | Not in key_decisions array |
| 5 | Open threads filters correctly | 2 discussions: 1 resolved, 1 unresolved | Only unresolved in open_threads |
| 6 | Activity counts are accurate | 3 state events, 2 label events, 10 notes | Matching counts in activity section |
| 7 | Performance | Issue with 50 notes | <500ms |
| 8 | Entity not found | Non-existent IID | Exit code 17, suggestion to sync |
| 9 | Ambiguous project | IID exists in multiple projects, no -p | Exit code 18, suggestion to use -p |
| 10 | Human render | lore explain issues N (no -J) | Formatted narrative with headers |
| 11 | Singular entity type accepted | lore explain issue 42 | Same as lore explain issues 42 |
| 12 | Section filtering works | --sections key_decisions,activity | Only those 2 sections + entity in JSON |
| 13 | No-timeline skips timeline | --no-timeline | timeline_excerpt absent, faster execution |
| 14 | Max-decisions caps output | --max-decisions 3 | At most 3 key_decisions |
| 15 | Since scopes events/notes | --since 30d | Only events/notes from last 30 days in activity, key_decisions |

Non-Goals

  • No LLM summarization — This is template-based v1. LLM enhancement is a separate future feature.
  • No new database migrations — Uses existing schema (resource_state_events, resource_label_events, discussions, notes, entity_references tables all exist).
  • No modification of show/ or timeline/ modules — Copy patterns, don't refactor existing code. If we later want to share code, that's a separate refactoring bead.
  • No interactive mode — Output only, no prompts or follow-up questions.
  • No MR diff analysis — No file-level change summaries. Use file-history or trace for that.
  • No assignee/reviewer history — Activity summary counts events but doesn't track assignment changes over time.

Tasks

Phase 1: Setup & Registration

  • Task 1: Register explain command in CLI and wire dispatch
    • Implements: Infrastructure (UJ-1, UJ-2 prerequisite)
    • Files: src/cli/mod.rs, src/cli/commands/mod.rs, src/main.rs, src/app/handlers.rs, NEW src/cli/commands/explain.rs
    • Depends on: Nothing
    • Test-first:
      1. Write test_explain_issue_basic in explain.rs: insert a minimal issue + project + 1 discussion + 1 note + 1 state event into in-memory DB, call run_explain() with default ExplainParams, assert all 7 top-level sections present in result
      2. Write test_explain_mr in explain.rs: insert MR with merged_at, call run_explain(), assert entity.type == "merge_request" and merged_at is populated
      3. Write test_explain_singular_entity_type: call with entity_type: "issue", assert it resolves same as "issues"
      4. Run tests — all must FAIL (red)
      5. Implement: Explain variant in Commands enum (with all flags: --sections, --no-timeline, --max-decisions, --since, singular entity type acceptance), handle_explain in handlers.rs (normalize entity_type, parse --since, build ExplainParams), skeleton run_explain() in explain.rs
      6. Run tests — all must PASS (green)
    • Acceptance: cargo test explain::tests::test_explain_issue_basic, test_explain_mr, and test_explain_singular_entity_type pass. Command registered in CLI help with after_help examples block.
    • Implementation notes:
      • Use inline args pattern (like Drift) with all flags from Code Style section
      • entity_type validated by #[arg(value_parser = ["issues", "mrs", "issue", "mr"])]
      • Normalize in handler: "issue" -> "issues", "mr" -> "mrs"
      • Parse --since using crate::core::time::parse_since() — returns ms epoch threshold
      • Validate --sections values against allowed set: ["entity", "description", "key_decisions", "activity", "open_threads", "related", "timeline"]
      • Copy the find_issue/find_mr and get_* query patterns from show/issue.rs and show/mr.rs — they're private functions so can't be imported
      • Use resolve_project() from crate::core::project for project resolution
      • Use ms_to_iso() from crate::core::time for timestamp conversion

Phase 2: Core Logic

  • Task 2: Implement key-decisions heuristic

    • Implements: UJ-3
    • Files: src/cli/commands/explain.rs
    • Depends on: Task 1
    • Test-first:
      1. Write test_explain_key_decision_heuristic: insert state change event at T, insert note by SAME author at T+30min, call extract_key_decisions(), assert 1 decision with correct action + context_note
      2. Write test_explain_key_decision_ignores_unrelated_notes: insert state change by alice, insert note by bob at T+30min, assert 0 decisions
      3. Write test_explain_key_decision_label_event: insert label add event + correlated note, assert decision.action starts with "label: +"
      4. Run tests — all must FAIL (red)
      5. Write test_explain_max_decisions: insert 5 correlated event+note pairs, call with max_decisions: 3, assert exactly 3 decisions returned
      6. Write test_explain_since_scopes_events: insert event at T-60d and event at T-10d, call with since: Some(T-30d), assert only recent event appears
      7. Run tests — all must FAIL (red)
      8. Implement extract_key_decisions() function:
        • Query resource_state_events and resource_label_events for entity (with optional --since filter)
        • Merge into unified chronological list
        • For each event, find first non-system note by same actor within 60min (notes also --since filtered)
        • Cap at params.max_decisions
      9. Run tests — all must PASS (green)
    • Acceptance: All 5 tests pass. Heuristic correctly correlates events with explanatory notes. --max-decisions and --since respected.
    • Implementation notes:
      • State events query: SELECT state, actor_username, created_at FROM resource_state_events WHERE {id_col} = ?1 AND (?2 IS NULL OR created_at >= ?2) ORDER BY created_at
      • Label events query: SELECT action, label_name, actor_username, created_at FROM resource_label_events WHERE {id_col} = ?1 AND (?2 IS NULL OR created_at >= ?2) ORDER BY created_at
      • Notes query: SELECT n.body, n.author_username, n.created_at FROM notes n JOIN discussions d ON n.discussion_id = d.id WHERE d.{id_col} = ?1 AND n.is_system = 0 AND (?2 IS NULL OR n.created_at >= ?2) ORDER BY n.created_at
      • The {id_col} is either issue_id or merge_request_id based on entity_type
      • Pass params.since (Option) as the ?2 parameter — NULL means no filter
      • Use crate::core::time::ms_to_iso() for timestamp conversion in output
      • Truncate context_note to 500 chars using crate::cli::render::truncate() or a local helper
  • Task 3: Implement open threads, activity summary, and cross-references

    • Implements: UJ-1
    • Files: src/cli/commands/explain.rs
    • Depends on: Task 1
    • Test-first:
      1. Write test_explain_open_threads: insert 2 discussions (1 with resolved=0 resolvable=1, 1 with resolved=1 resolvable=1), assert only unresolved appears in open_threads
      2. Write test_explain_activity_counts: insert 3 state events + 2 label events + 10 non-system notes, assert activity.state_changes=3, label_changes=2, notes=10
      3. Write test_explain_no_notes: insert issue with zero notes and zero events, assert empty key_decisions, empty open_threads, activity all zeros, description_excerpt = "(no description)" if description is NULL
      4. Run tests — all must FAIL (red)
      5. Implement:
        • fetch_open_threads(): query discussions WHERE resolvable=1 AND resolved=0, fetch first note author + note count per thread
        • build_activity_summary(): count state events, label events, non-system notes, find min/max timestamps
        • fetch_related_entities(): query entity_references in both directions (source and target)
        • Description excerpt: first 500 chars of description, or "(no description)" if NULL
      6. Run tests — all must PASS (green)
    • Acceptance: All 3 tests pass. Open threads correctly filtered. Activity counts accurate. Empty entity handled gracefully.
    • Implementation notes:
      • Open threads query: SELECT d.gitlab_discussion_id, d.first_note_at, d.last_note_at FROM discussions d WHERE d.{id_col} = ?1 AND d.resolvable = 1 AND d.resolved = 0 ORDER BY d.last_note_at DESC
      • For first note author: SELECT author_username FROM notes WHERE discussion_id = ?1 ORDER BY created_at ASC LIMIT 1
      • For note count: SELECT COUNT(*) FROM notes WHERE discussion_id = ?1 AND is_system = 0
      • Cross-references: both outgoing and incoming from entity_references table
      • For closing MRs, reuse the query pattern from show/issue.rs get_closing_mrs()
  • Task 4: Wire timeline excerpt using existing pipeline

    • Implements: UJ-1
    • Files: src/cli/commands/explain.rs
    • Depends on: Task 1
    • Test-first:
      1. Write test_explain_timeline_excerpt: insert issue + state events + notes, call run_explain() with no_timeline: false, assert timeline_excerpt is Some and non-empty and capped at 20 events
      2. Write test_explain_no_timeline_flag: call run_explain() with no_timeline: true, assert timeline_excerpt is None
      3. Run tests — both must FAIL (red)
      4. Implement: when !params.no_timeline and --sections includes "timeline" (or is None), call seed_timeline_direct() with entity type + IID, then collect_events(), convert first 20 TimelineEvents into TimelineEventSummary structs. Otherwise set timeline_excerpt to None.
      5. Run tests — both must PASS (green)
    • Acceptance: Timeline excerpt present with max 20 events when enabled. Skipped entirely when --no-timeline. Uses existing timeline pipeline (no reimplementation).
    • Implementation notes:
      • Import: use crate::timeline::seed::seed_timeline_direct; and use crate::timeline::collect::collect_events;
      • seed_timeline_direct() takes (conn, entity_type, iid, project_id) — verify exact signature before implementing
      • collect_events() returns Vec<TimelineEvent> — map to simplified TimelineEventSummary (timestamp, event_type string, actor, summary)
      • Timeline pipeline uses EntityRef struct from crate::timeline::types — needs entity's local DB id and project_path
      • Cap at 20 events: events.truncate(20) after collection
      • --no-timeline takes precedence over --sections timeline (if both specified, skip timeline)

Phase 3: Output Rendering

  • Task 5: Robot mode JSON output and human-readable rendering
    • Implements: UJ-1, UJ-2
    • Files: src/cli/commands/explain.rs, src/app/robot_docs.rs
    • Depends on: Task 1, 2, 3, 4
    • Test-first:
      1. Write test_explain_robot_output_shape: call run_explain() with all sections, serialize to JSON, assert all 7 top-level keys present
      2. Write test_explain_sections_filter_robot: call run_explain() with sections: Some(vec!["key_decisions", "activity"]), serialize, assert only entity + key_decisions + activity keys present (entity always included), assert description_excerpt, open_threads, related, timeline_excerpt are absent
      3. Run tests — both must FAIL (red)
      4. Implement:
        • Robot mode: print_explain_json() wrapping ExplainResult in {"ok": true, "data": ..., "meta": {...}} envelope. #[serde(skip_serializing_if = "Option::is_none")] on optional sections handles filtering automatically.
        • Human mode: print_explain() with section headers, colored output, indented key decisions, truncated descriptions. Check params.sections before rendering each section.
        • Register in robot-docs manifest (include --sections, --no-timeline, --max-decisions, --since flags)
      5. Run tests — both must PASS (green)
    • Acceptance: Robot JSON matches schema. Section filtering works in both robot and human mode. Command appears in lore robot-docs.
    • Implementation notes:
      • Robot envelope: use serde_json::json!() with RobotMeta from crate::cli::robot
      • Human rendering: use Theme::bold(), Icons, render::truncate() from crate::cli::render
      • Follow timeline.rs rendering pattern: header with entity info -> separator line -> sections
      • Register in robot_docs.rs following the existing pattern for other commands
      • Section filtering: the run_explain() function should already return None for unselected sections. The serializer skips them. Human renderer checks is_some() before rendering.

Corrections from Original Bead

The bead (bd-9lbr) was created before a codebase rearchitecture. Key corrections:

  1. src/core/events_db.rs does not exist — Event storage is in src/ingestion/storage/events.rs (insert only). Event queries are inline in timeline/collect.rs. Explain needs its own inline queries.

  2. ResourceStateEvent / ResourceLabelEvent structs don't exist — The timeline queries raw rows directly. Explain should define lightweight local structs or use tuples.

  3. run_show_issue() / run_show_mr() are private — They live in include!() files inside show/mod.rs. Cannot be imported. Copy the query patterns instead.

  4. bd-2g50 blocker is CLOSEDIssueDetail already has closed_at, references_full, user_notes_count, confidential. No blocker.

  5. Clap registration pattern — The bead shows args directly on the enum variant, which is correct for explain's simple args (matches Drift, Related pattern). No need for a separate ExplainArgs struct.

  6. entity_references has no fetch query — Only insert_entity_reference() and count_references_for_source() exist. Explain needs a new SELECT query (inline in explain.rs).


Session Log

Session 1 — 2026-03-10

  • Read bead bd-9lbr thoroughly — exceptionally detailed but written before rearchitecture
  • Verified infrastructure: show/ (private functions, copy patterns), timeline/ (importable pipeline), events (inline SQL, no typed structs), xref (no fetch query), discussions (resolvable/resolved confirmed in migration 028)
  • Discovered bd-2g50 blocker is CLOSED — no dependency
  • Decided: two positional args (lore explain issues N) over single query syntax
  • Decided: formalize + gap-fill approach (bead is thorough, just needs updating)
  • Documented 6 corrections from original bead to current codebase state
  • Drafted complete spec with 5 tasks across 3 phases

Session 1b — 2026-03-10 (CLI UX Audit)

  • Audited full CLI surface (30+ commands) against explain's proposed UX
  • Identified 8 improvements, user selected 6 to incorporate:
    1. after_help examples block — every other lore command has this, explain was missing it
    2. --sections flag — robot token efficiency, skip unselected sections entirely
    3. Singular entity type tolerance — accept issue/mr alongside issues/mrs
    4. --no-timeline flag — skip heaviest section for faster execution
    5. --max-decisions N flag — user control over key_decisions cap (default 10)
    6. --since flag — time-scope events/notes for long-lived entities
  • Skipped: #3 (command aliases ex/narrative), #6 (#42/!99 shorthand)
  • Updated: Code Style, Boundaries, Architecture (ExplainParams + ExplainResult types, section filtering, time scoping, SQL queries), Success Criteria (+5 new), Testing Strategy (+5 new tests), all 5 Tasks
  • ExplainResult sections now Option<T> with skip_serializing_if for section filtering
  • All sections remain complete — spec is ready for implementation