Compare commits

24 commits: trace...c5843bd823

Commit list (Author and Date columns were not recoverable; SHA1 only):

| SHA1 |
|---|
| c5843bd823 |
| f9e7913232 |
| 6e487532aa |
| 7e9a23cc0f |
| 71d07c28d8 |
| f4de6feaa2 |
| ec0aaaf77c |
| 9c1a9bfe5d |
| a5c2589c7d |
| 8fdb366b6d |
| 53b093586b |
| 9ec1344945 |
| ea6e45e43f |
| 30ed02c694 |
| a4df8e5444 |
| 53ce20595b |
| 1808a4da8e |
| 7d032833a2 |
| 097249f4e6 |
| 8442bcf367 |
| c0ca501662 |
| c953d8e519 |
| 63bd58c9b4 |
| 714c8c2623 |
File diff suppressed because one or more lines are too long

@@ -1 +1 @@
-bd-1sc6
+bd-1tv8

.gitignore (vendored): 1 line changed
@@ -31,6 +31,7 @@ yarn-error.log*
# Local config files
lore.config.json
.liquid-mail.toml

# beads
.bv/
AGENTS.md: 102 lines changed
@@ -127,66 +127,17 @@ Prefer deterministic lab-runtime tests for concurrency-sensitive behavior.
 
 ---
 
-## MCP Agent Mail — Multi-Agent Coordination
-
-A mail-like layer that lets coding agents coordinate asynchronously via MCP tools and resources. Provides identities, inbox/outbox, searchable threads, and advisory file reservations with human-auditable artifacts in Git.
-
-### Why It's Useful
-
-- **Prevents conflicts:** Explicit file reservations (leases) for files/globs
-- **Token-efficient:** Messages stored in per-project archive, not in context
-- **Quick reads:** `resource://inbox/...`, `resource://thread/...`
-
-### Same Repository Workflow
-
-1. **Register identity:**
-   ```
-   ensure_project(project_key=<abs-path>)
-   register_agent(project_key, program, model)
-   ```
-
-2. **Reserve files before editing:**
-   ```
-   file_reservation_paths(project_key, agent_name, ["src/**"], ttl_seconds=3600, exclusive=true)
-   ```
-
-3. **Communicate with threads:**
-   ```
-   send_message(..., thread_id="FEAT-123")
-   fetch_inbox(project_key, agent_name)
-   acknowledge_message(project_key, agent_name, message_id)
-   ```
-
-4. **Quick reads:**
-   ```
-   resource://inbox/{Agent}?project=<abs-path>&limit=20
-   resource://thread/{id}?project=<abs-path>&include_bodies=true
-   ```
-
-### Macros vs Granular Tools
-
-- **Prefer macros for speed:** `macro_start_session`, `macro_prepare_thread`, `macro_file_reservation_cycle`, `macro_contact_handshake`
-- **Use granular tools for control:** `register_agent`, `file_reservation_paths`, `send_message`, `fetch_inbox`, `acknowledge_message`
-
-### Common Pitfalls
-
-- `"from_agent not registered"`: Always `register_agent` in the correct `project_key` first
-- `"FILE_RESERVATION_CONFLICT"`: Adjust patterns, wait for expiry, or use non-exclusive reservation
-- **Auth errors:** If JWT+JWKS enabled, include bearer token with matching `kid`
-
----
-
 ## Beads (br) — Dependency-Aware Issue Tracking
 
-Beads provides a lightweight, dependency-aware issue database and CLI (`br` / beads_rust) for selecting "ready work," setting priorities, and tracking status. It complements MCP Agent Mail's messaging and file reservations.
+Beads provides a lightweight, dependency-aware issue database and CLI (`br` / beads_rust) for selecting "ready work," setting priorities, and tracking status. It complements Liquid Mail's shared log for progress, decisions, and cross-session context.
 
 **Note:** `br` is non-invasive—it never executes git commands directly. You must run git commands manually after `br sync --flush-only`.
 
 ### Conventions
 
-- **Single source of truth:** Beads for task status/priority/dependencies; Agent Mail for conversation and audit
-- **Shared identifiers:** Use Beads issue ID (e.g., `br-123`) as Mail `thread_id` and prefix subjects with `[br-123]`
-- **Reservations:** When starting a task, call `file_reservation_paths()` with the issue ID in `reason`
+- **Single source of truth:** Beads for task status/priority/dependencies; Liquid Mail for conversation/decisions
+- **Shared identifiers:** Include the Beads issue ID in posts (e.g., `[br-123] Topic validation rules`)
+- **Decisions before action:** Post `DECISION:` messages before risky changes, not after
 
 ### Typical Agent Flow
 
@@ -195,35 +146,34 @@ Beads provides a lightweight, dependency-aware issue database and CLI (`br` / be
    br ready --json   # Choose highest priority, no blockers
    ```
 
-2. **Reserve edit surface (Mail):**
-   ```
-   file_reservation_paths(project_key, agent_name, ["src/**"], ttl_seconds=3600, exclusive=true, reason="br-123")
-   ```
+2. **Check context (Liquid Mail):**
+   ```bash
+   liquid-mail notify           # See what changed since last session
+   liquid-mail query "br-123"   # Find prior discussion on this issue
+   ```
 
-3. **Announce start (Mail):**
-   ```
-   send_message(..., thread_id="br-123", subject="[br-123] Start: <title>", ack_required=true)
-   ```
+3. **Work and log progress:**
+   ```bash
+   liquid-mail post --topic <workstream> "[br-123] START: <description>"
+   liquid-mail post "[br-123] FINDING: <what you discovered>"
+   liquid-mail post --decision "[br-123] DECISION: <what you decided and why>"
+   ```
 
-4. **Work and update:** Reply in-thread with progress
-
-5. **Complete and release:**
-   ```
-   release_file_reservations(project_key, agent_name, paths=["src/**"])
-   ```
-   Final Mail reply: `[br-123] Completed` with summary
+4. **Complete (Beads is authority):**
+   ```bash
+   br close br-123 --reason "Completed"
+   liquid-mail post "[br-123] Completed: <summary with commit ref>"
+   ```
 
 ### Mapping Cheat Sheet
 
-| Concept | Value |
-|---------|-------|
-| Mail `thread_id` | `br-###` |
-| Mail subject | `[br-###] ...` |
-| File reservation `reason` | `br-###` |
-| Commit messages | Include `br-###` for traceability |
+| Concept | In Beads | In Liquid Mail |
+|---------|----------|----------------|
+| Work item | `br-###` (issue ID) | Include `[br-###]` in posts |
+| Workstream | — | `--topic auth-system` |
+| Subject prefix | — | `[br-###] ...` |
+| Commit message | Include `br-###` | — |
+| Status | `br update --status` | Post progress messages |
 
 ---
 
@@ -231,7 +181,7 @@ Beads provides a lightweight, dependency-aware issue database and CLI (`br` / be
 
 bv is a graph-aware triage engine for Beads projects (`.beads/beads.jsonl`). It computes PageRank, betweenness, critical path, cycles, HITS, eigenvector, and k-core metrics deterministically.
 
-**Scope boundary:** bv handles *what to work on* (triage, priority, planning). For agent-to-agent coordination (messaging, work claiming, file reservations), use MCP Agent Mail.
+**Scope boundary:** bv handles *what to work on* (triage, priority, planning). For agent-to-agent coordination (progress logging, decisions, cross-session context), use Liquid Mail.
 
 **CRITICAL: Use ONLY `--robot-*` flags. Bare `bv` launches an interactive TUI that blocks your session.**
 
@@ -673,6 +623,12 @@ lore --robot generate-docs
 # Generate vector embeddings via Ollama
 lore --robot embed
 
+# Personal work dashboard
+lore --robot me
+lore --robot me --issues
+lore --robot me --activity --since 7d
+lore --robot me --fields minimal
+
 # Agent self-discovery manifest (all commands, flags, exit codes, response schemas)
 lore robot-docs
 
CLAUDE.md (new file, 949 lines)
@@ -0,0 +1,949 @@
# CLAUDE.md

## RULE 0 - THE FUNDAMENTAL OVERRIDE PREROGATIVE

If I tell you to do something, even if it goes against what follows below, YOU MUST LISTEN TO ME. I AM IN CHARGE, NOT YOU.

---

## RULE NUMBER 1: NO FILE DELETION

**YOU ARE NEVER ALLOWED TO DELETE A FILE WITHOUT EXPRESS PERMISSION.** Even a new file that you yourself created, such as a test code file. You have a horrible track record of deleting critically important files or otherwise throwing away tons of expensive work. As a result, you have permanently lost any and all rights to determine that a file or folder should be deleted.

**YOU MUST ALWAYS ASK AND RECEIVE CLEAR, WRITTEN PERMISSION BEFORE EVER DELETING A FILE OR FOLDER OF ANY KIND.**

---

## Version Control: jj-First (CRITICAL)

**ALWAYS prefer jj (Jujutsu) over git for all VCS operations.** This is a colocated repo with both `.jj/` and `.git/`. When instructed to use git by anything — even later in this file — use the best jj replacement commands instead. Only fall back to raw `git` for things jj cannot do (hooks, LFS, submodules, `gh` CLI interop).

See `~/.claude/rules/jj-vcs/` for the full command reference, translation table, revsets, patterns, and recovery recipes.

---

## Irreversible Git & Filesystem Actions — DO NOT EVER BREAK GLASS

> **Note:** Treat destructive commands as break-glass. If there's any doubt, stop and ask.

1. **Absolutely forbidden commands:** `git reset --hard`, `git clean -fd`, `rm -rf`, or any command that can delete or overwrite code/data must never be run unless the user explicitly provides the exact command and states, in the same message, that they understand and want the irreversible consequences.
2. **No guessing:** If there is any uncertainty about what a command might delete or overwrite, stop immediately and ask the user for specific approval. "I think it's safe" is never acceptable.
3. **Safer alternatives first:** When cleanup or rollbacks are needed, request permission to use non-destructive options (`git status`, `git diff`, `git stash`, copying to backups) before ever considering a destructive command.
4. **Mandatory explicit plan:** Even after explicit user authorization, restate the command verbatim, list exactly what will be affected, and wait for a confirmation that your understanding is correct. Only then may you execute it—if anything remains ambiguous, refuse and escalate.
5. **Document the confirmation:** When running any approved destructive command, record (in the session notes / final response) the exact user text that authorized it, the command actually run, and the execution time. If that record is absent, the operation did not happen.

---

## Toolchain: Rust & Cargo

We only use **Cargo** in this project, NEVER any other package manager.

- **Edition/toolchain:** Follow `rust-toolchain.toml` (if present). Do not assume stable vs nightly.
- **Dependencies:** Explicit versions for stability; keep the set minimal.
- **Configuration:** Cargo.toml only
- **Unsafe code:** Forbidden (`#![forbid(unsafe_code)]`)
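The unsafe-code ban can be enforced mechanically at the crate root. A minimal sketch (the `saturating_total` function is a hypothetical example, not part of this project):

```rust
// Crate-level attribute: compilation fails if any `unsafe` block appears.
#![forbid(unsafe_code)]

/// Hypothetical example function: sums counters without overflow panics.
pub fn saturating_total(a: u32, b: u32) -> u32 {
    a.saturating_add(b)
}

fn main() {
    assert_eq!(saturating_total(2, 3), 5);
    assert_eq!(saturating_total(u32::MAX, 1), u32::MAX);
}
```

With the attribute in place, any `unsafe` anywhere in the crate is a hard compile error rather than a review finding.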
When writing Rust code, reference RUST_CLI_TOOLS_BEST_PRACTICES.md.

### Release Profile

Use the release profile defined in `Cargo.toml`. If you need to change it, justify the performance/size tradeoff and how it impacts determinism and cancellation behavior.

---

## Code Editing Discipline

### No Script-Based Changes

**NEVER** run a script that processes/changes code files in this repo. Brittle regex-based transformations create far more problems than they solve.

- **Always make code changes manually**, even when there are many instances
- For many simple changes: use parallel subagents
- For subtle/complex changes: do them methodically yourself

### No File Proliferation

If you want to change something or add a feature, **revise existing code files in place**.

**NEVER** create variations like:

- `mainV2.rs`
- `main_improved.rs`
- `main_enhanced.rs`

New files are reserved for **genuinely new functionality** that makes zero sense to include in any existing file. The bar for creating new files is **incredibly high**.

---

## Backwards Compatibility

We do not care about backwards compatibility—we're in early development with no users. We want to do things the **RIGHT** way with **NO TECH DEBT**.

- Never create "compatibility shims"
- Never create wrapper functions for deprecated APIs
- Just fix the code directly

---

## Compiler Checks (CRITICAL)

**After any substantive code changes, you MUST verify no errors were introduced:**

```bash
# Check for compiler errors and warnings
cargo check --all-targets

# Check for clippy lints (pedantic + nursery are enabled)
cargo clippy --all-targets -- -D warnings

# Verify formatting
cargo fmt --check
```

If you see errors, **carefully understand and resolve each issue**. Read sufficient context to fix them the RIGHT way.

---

## Testing

### Unit & Property Tests

```bash
# Run all tests
cargo test

# Run with output
cargo test -- --nocapture
```

When adding or changing primitives, add tests that assert the core invariants:

- no task leaks
- no obligation leaks
- losers are drained after races
- region close implies quiescence

Prefer deterministic lab-runtime tests for concurrency-sensitive behavior.
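Those invariants lend themselves to small deterministic checks. A sketch, assuming a hypothetical `Region` type that merely counts spawns and completions (not this project's real API):

```rust
/// Hypothetical stand-in for a structured-concurrency region:
/// tracks how many tasks were spawned vs. completed.
#[derive(Default)]
struct Region {
    spawned: usize,
    completed: usize,
}

impl Region {
    fn spawn(&mut self) {
        self.spawned += 1;
    }

    fn complete(&mut self) {
        self.completed += 1;
    }

    /// "Region close implies quiescence": every spawned task completed.
    fn assert_quiescent(&self) {
        assert_eq!(self.spawned, self.completed, "task leak detected");
    }
}

fn main() {
    let mut r = Region::default();
    r.spawn();
    r.spawn();
    r.complete();
    r.complete();
    r.assert_quiescent(); // passes: no leaked tasks
}
```

A real lab-runtime test would drive actual tasks; the point is that quiescence reduces to an equality that can be asserted deterministically at region close.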
---
## Beads (br) — Dependency-Aware Issue Tracking

Beads provides a lightweight, dependency-aware issue database and CLI (`br` / beads_rust) for selecting "ready work," setting priorities, and tracking status. It complements Liquid Mail's shared log for progress, decisions, and cross-session context.

**Note:** `br` is non-invasive—it never executes git commands directly. You must run git commands manually after `br sync --flush-only`.

### Conventions

- **Single source of truth:** Beads for task status/priority/dependencies; Liquid Mail for conversation/decisions
- **Shared identifiers:** Include the Beads issue ID in posts (e.g., `[br-123] Topic validation rules`)
- **Decisions before action:** Post `DECISION:` messages before risky changes, not after

### Typical Agent Flow

1. **Pick ready work (Beads):**
   ```bash
   br ready --json   # Choose highest priority, no blockers
   ```

2. **Check context (Liquid Mail):**
   ```bash
   liquid-mail notify           # See what changed since last session
   liquid-mail query "br-123"   # Find prior discussion on this issue
   ```

3. **Work and log progress:**
   ```bash
   liquid-mail post --topic <workstream> "[br-123] START: <description>"
   liquid-mail post "[br-123] FINDING: <what you discovered>"
   liquid-mail post --decision "[br-123] DECISION: <what you decided and why>"
   ```

4. **Complete (Beads is authority):**
   ```bash
   br close br-123 --reason "Completed"
   liquid-mail post "[br-123] Completed: <summary with commit ref>"
   ```

### Mapping Cheat Sheet

| Concept | In Beads | In Liquid Mail |
|---------|----------|----------------|
| Work item | `br-###` (issue ID) | Include `[br-###]` in posts |
| Workstream | — | `--topic auth-system` |
| Subject prefix | — | `[br-###] ...` |
| Commit message | Include `br-###` | — |
| Status | `br update --status` | Post progress messages |
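One way to honor the commit-message row above is to pull the issue ID out of a message programmatically. A sketch (the `extract_issue_id` helper is hypothetical, not part of `br`):

```rust
/// Hypothetical helper: find the first `br-<digits>` token in a commit message.
fn extract_issue_id(msg: &str) -> Option<String> {
    let start = msg.find("br-")?;
    let digits: String = msg[start + 3..]
        .chars()
        .take_while(|c| c.is_ascii_digit())
        .collect();
    if digits.is_empty() {
        None
    } else {
        Some(format!("br-{digits}"))
    }
}

fn main() {
    assert_eq!(
        extract_issue_id("br-123: tighten topic validation"),
        Some("br-123".to_string())
    );
    assert_eq!(extract_issue_id("no issue reference here"), None);
}
```

The same scan works on Liquid Mail post subjects, since both sides share the `br-###` identifier.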
---

## bv — Graph-Aware Triage Engine

bv is a graph-aware triage engine for Beads projects (`.beads/beads.jsonl`). It computes PageRank, betweenness, critical path, cycles, HITS, eigenvector, and k-core metrics deterministically.

**Scope boundary:** bv handles *what to work on* (triage, priority, planning). For agent-to-agent coordination (progress logging, decisions, cross-session context), use Liquid Mail.

**CRITICAL: Use ONLY `--robot-*` flags. Bare `bv` launches an interactive TUI that blocks your session.**

### The Workflow: Start With Triage

**`bv --robot-triage` is your single entry point.** It returns:

- `quick_ref`: at-a-glance counts + top 3 picks
- `recommendations`: ranked actionable items with scores, reasons, unblock info
- `quick_wins`: low-effort high-impact items
- `blockers_to_clear`: items that unblock the most downstream work
- `project_health`: status/type/priority distributions, graph metrics
- `commands`: copy-paste shell commands for next steps

```bash
bv --robot-triage   # THE MEGA-COMMAND: start here
bv --robot-next     # Minimal: just the single top pick + claim command
```

### Command Reference

**Planning:**

| Command | Returns |
|---------|---------|
| `--robot-plan` | Parallel execution tracks with `unblocks` lists |
| `--robot-priority` | Priority misalignment detection with confidence |

**Graph Analysis:**

| Command | Returns |
|---------|---------|
| `--robot-insights` | Full metrics: PageRank, betweenness, HITS, eigenvector, critical path, cycles, k-core, articulation points, slack |
| `--robot-label-health` | Per-label health: `health_level`, `velocity_score`, `staleness`, `blocked_count` |
| `--robot-label-flow` | Cross-label dependency: `flow_matrix`, `dependencies`, `bottleneck_labels` |
| `--robot-label-attention [--attention-limit=N]` | Attention-ranked labels |

**History & Change Tracking:**

| Command | Returns |
|---------|---------|
| `--robot-history` | Bead-to-commit correlations |
| `--robot-diff --diff-since <ref>` | Changes since ref: new/closed/modified issues, cycles |

**Other:**

| Command | Returns |
|---------|---------|
| `--robot-burndown <sprint>` | Sprint burndown, scope changes, at-risk items |
| `--robot-forecast <id\|all>` | ETA predictions with dependency-aware scheduling |
| `--robot-alerts` | Stale issues, blocking cascades, priority mismatches |
| `--robot-suggest` | Hygiene: duplicates, missing deps, label suggestions |
| `--robot-graph [--graph-format=json\|dot\|mermaid]` | Dependency graph export |
| `--export-graph <file.html>` | Interactive HTML visualization |

### Scoping & Filtering

```bash
bv --robot-plan --label backend             # Scope to label's subgraph
bv --robot-insights --as-of HEAD~30         # Historical point-in-time
bv --recipe actionable --robot-plan         # Pre-filter: ready to work
bv --recipe high-impact --robot-triage      # Pre-filter: top PageRank
bv --robot-triage --robot-triage-by-track   # Group by parallel work streams
bv --robot-triage --robot-triage-by-label   # Group by domain
```

### Understanding Robot Output

**All robot JSON includes:**

- `data_hash` — Fingerprint of source beads.jsonl
- `status` — Per-metric state: `computed|approx|timeout|skipped` + elapsed ms
- `as_of` / `as_of_commit` — Present when using `--as-of`

**Two-phase analysis:**

- **Phase 1 (instant):** degree, topo sort, density
- **Phase 2 (async, 500ms timeout):** PageRank, betweenness, HITS, eigenvector, cycles

### jq Quick Reference

```bash
bv --robot-triage | jq '.quick_ref'                  # At-a-glance summary
bv --robot-triage | jq '.recommendations[0]'         # Top recommendation
bv --robot-plan | jq '.plan.summary.highest_impact'  # Best unblock target
bv --robot-insights | jq '.status'                   # Check metric readiness
bv --robot-insights | jq '.Cycles'                   # Circular deps (must fix!)
```

---

## UBS — Ultimate Bug Scanner

**Golden Rule:** `ubs <changed-files>` before every commit. Exit 0 = safe. Exit >0 = fix & re-run.

### Commands

```bash
ubs file.rs file2.rs           # Specific files (< 1s) — USE THIS
ubs $(jj diff --name-only)     # Changed files — before commit
ubs --only=rust,toml src/      # Language filter (3-5x faster)
ubs --ci --fail-on-warning .   # CI mode — before PR
ubs .                          # Whole project (ignores target/, Cargo.lock)
```

### Output Format

```
⚠️ Category (N errors)
  file.rs:42:5 – Issue description
  💡 Suggested fix
Exit code: 1
```

Parse: `file:line:col` → location | 💡 → how to fix | Exit 0/1 → pass/fail
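That `file:line:col` convention is straightforward to parse when post-processing scanner output. A sketch (the `parse_location` helper is hypothetical, not part of UBS):

```rust
/// Hypothetical parser for scanner findings like "file.rs:42:5".
fn parse_location(s: &str) -> Option<(&str, u32, u32)> {
    let mut parts = s.splitn(3, ':');
    let file = parts.next()?;
    let line = parts.next()?.parse().ok()?;
    let col = parts.next()?.parse().ok()?;
    Some((file, line, col))
}

fn main() {
    assert_eq!(parse_location("file.rs:42:5"), Some(("file.rs", 42, 5)));
    assert_eq!(parse_location("not a location"), None);
}
```

`splitn(3, ':')` keeps any later colons inside the final field, and non-numeric line/column fields simply yield `None`.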
### Fix Workflow

1. Read finding → category + fix suggestion
2. Navigate `file:line:col` → view context
3. Verify real issue (not false positive)
4. Fix root cause (not symptom)
5. Re-run `ubs <file>` → exit 0
6. Commit

### Bug Severity

- **Critical (always fix):** Memory safety, use-after-free, data races, SQL injection
- **Important (production):** Unwrap panics, resource leaks, overflow checks
- **Contextual (judgment):** TODO/FIXME, println! debugging
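For the "Important" class, the typical fix is replacing `unwrap()` with error propagation. A sketch (the `parse_port` function is hypothetical):

```rust
use std::num::ParseIntError;

// Instead of `text.trim().parse::<u16>().unwrap()`, which panics on bad
// input, return the error so the caller decides how to handle it.
fn parse_port(text: &str) -> Result<u16, ParseIntError> {
    text.trim().parse()
}

fn main() {
    assert_eq!(parse_port(" 8080 "), Ok(8080));
    assert!(parse_port("not-a-port").is_err());
}
```

Where a panic is genuinely intended, `expect("reason")` at least documents the invariant the scanner flagged.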
---

## ast-grep vs ripgrep

**Use `ast-grep` when structure matters.** It parses code and matches AST nodes, ignoring comments/strings, and can **safely rewrite** code.

- Refactors/codemods: rename APIs, change import forms
- Policy checks: enforce patterns across a repo
- Editor/automation: LSP mode, `--json` output

**Use `ripgrep` when text is enough.** Fastest way to grep literals/regex.

- Recon: find strings, TODOs, log lines, config values
- Pre-filter: narrow candidate files before ast-grep

### Rule of Thumb

- Need correctness or **applying changes** → `ast-grep`
- Need raw speed or **hunting text** → `rg`
- Often combine: `rg` to shortlist files, then `ast-grep` to match/modify

### Rust Examples

```bash
# Find structured code (ignores comments)
ast-grep run -l Rust -p 'fn $NAME($$$ARGS) -> $RET { $$$BODY }'

# Find all unwrap() calls
ast-grep run -l Rust -p '$EXPR.unwrap()'

# Quick textual hunt
rg -n 'println!' -t rust

# Combine speed + precision
rg -l -t rust 'unwrap\(' | xargs ast-grep run -l Rust -p '$X.unwrap()' --json
```

---

## Morph Warp Grep — AI-Powered Code Search

**Use `mcp__morph-mcp__warp_grep` for exploratory "how does X work?" questions.** An AI agent expands your query, greps the codebase, reads relevant files, and returns precise line ranges with full context.

**Use `ripgrep` for targeted searches.** When you know exactly what you're looking for.

**Use `ast-grep` for structural patterns.** When you need AST precision for matching/rewriting.

### When to Use What

| Scenario | Tool | Why |
|----------|------|-----|
| "How is pattern matching implemented?" | `warp_grep` | Exploratory; don't know where to start |
| "Where is the quick reject filter?" | `warp_grep` | Need to understand architecture |
| "Find all uses of `Regex::new`" | `ripgrep` | Targeted literal search |
| "Find files with `println!`" | `ripgrep` | Simple pattern |
| "Replace all `unwrap()` with `expect()`" | `ast-grep` | Structural refactor |

### warp_grep Usage

```
mcp__morph-mcp__warp_grep(
  repoPath: "/path/to/dcg",
  query: "How does the safe pattern whitelist work?"
)
```

Returns structured results with file paths, line ranges, and extracted code snippets.

### Anti-Patterns

- **Don't** use `warp_grep` to find a specific function name → use `ripgrep`
- **Don't** use `ripgrep` to understand "how does X work" → wastes time with manual reads
- **Don't** use `ripgrep` for codemods → risks collateral edits

<!-- bv-agent-instructions-v1 -->

---

## Beads Workflow Integration

This project uses [beads_viewer](https://github.com/Dicklesworthstone/beads_viewer) for issue tracking. Issues are stored in `.beads/` and tracked in version control.

**Note:** `br` is non-invasive—it never executes VCS commands directly. You must commit manually after `br sync --flush-only`.

### Essential Commands

```bash
# View issues (launches TUI - avoid in automated sessions)
bv

# CLI commands for agents (use these instead)
br ready                # Show issues ready to work (no blockers)
br list --status=open   # All open issues
br show <id>            # Full issue details with dependencies
br create --title="..." --type=task --priority=2
br update <id> --status=in_progress
br close <id> --reason="Completed"
br close <id1> <id2>    # Close multiple issues at once
br sync --flush-only    # Export to JSONL (then: jj commit -m "Update beads")
```

### Workflow Pattern

1. **Start**: Run `br ready` to find actionable work
2. **Claim**: Use `br update <id> --status=in_progress`
3. **Work**: Implement the task
4. **Complete**: Use `br close <id>`
5. **Sync**: Run `br sync --flush-only`, then `git add .beads/ && git commit -m "Update beads"`

### Key Concepts

- **Dependencies**: Issues can block other issues. `br ready` shows only unblocked work.
- **Priority**: P0=critical, P1=high, P2=medium, P3=low, P4=backlog (use numbers, not words)
- **Types**: task, bug, feature, epic, question, docs
- **Blocking**: `br dep add <issue> <depends-on>` to add dependencies
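The ready-work rule can be stated precisely: an issue is ready when it is open and none of its blockers are still open. A sketch of that semantics (an illustration only, not `br`'s real implementation):

```rust
use std::collections::{HashMap, HashSet};

/// Illustration of `br ready` semantics: return the open issues
/// whose blockers (if any) are all closed.
fn ready_issues(open: &[&str], deps: &HashMap<&str, Vec<&str>>) -> Vec<String> {
    let open_set: HashSet<&str> = open.iter().copied().collect();
    open.iter()
        .filter(|id| {
            deps.get(**id)
                .map_or(true, |blockers| blockers.iter().all(|b| !open_set.contains(b)))
        })
        .map(|id| id.to_string())
        .collect()
}

fn main() {
    let mut deps = HashMap::new();
    deps.insert("br-2", vec!["br-1"]); // br-2 is blocked by br-1
    // Only br-1 is ready while br-1 remains open.
    assert_eq!(ready_issues(&["br-1", "br-2"], &deps), vec!["br-1".to_string()]);
}
```

Once `br-1` closes, it drops out of the open set and `br-2` becomes ready, which is exactly the unblocking behavior `br ready` surfaces.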
### Session Protocol

**Before ending any session, run this checklist (solo/lead only — workers skip VCS):**

```bash
jj status                      # Check what changed
br sync --flush-only           # Export beads to JSONL
jj commit -m "..."             # Commit code and beads (jj auto-tracks all changes)
jj bookmark set <name> -r @-   # Point bookmark at committed work
jj git push -b <name>          # Push to remote
```

### Best Practices

- Check `br ready` at session start to find available work
- Update status as you work (in_progress → closed)
- Create new issues with `br create` when you discover tasks
- Use descriptive titles and set appropriate priority/type
- Always run `br sync --flush-only` then commit before ending session (jj auto-tracks .beads/)

<!-- end-bv-agent-instructions -->

## Landing the Plane (Session Completion)

**When ending a work session**, you MUST complete ALL steps below. Work is NOT complete until push succeeds.

**WHO RUNS THIS:** Solo agents run it themselves. In multi-agent sessions, ONLY the team lead runs this. Workers skip VCS entirely.

**MANDATORY WORKFLOW:**

1. **File issues for remaining work** - Create issues for anything that needs follow-up
2. **Run quality gates** (if code changed) - Tests, linters, builds
3. **Update issue status** - Close finished work, update in-progress items
4. **PUSH TO REMOTE** - This is MANDATORY:
   ```bash
   jj git fetch                   # Get latest remote state
   jj rebase -d trunk()           # Rebase onto latest trunk if needed
   br sync --flush-only           # Export beads to JSONL
   jj commit -m "Update beads"    # Commit (jj auto-tracks .beads/ changes)
   jj bookmark set <name> -r @-   # Point bookmark at committed work
   jj git push -b <name>          # Push to remote
   jj log -r '<name>'             # Verify bookmark position
   ```
5. **Clean up** - Abandon empty orphan changes if any (`jj abandon <rev>`)
6. **Verify** - All changes committed AND pushed
7. **Hand off** - Provide context for next session

**CRITICAL RULES:**

- Work is NOT complete until `jj git push` succeeds
- NEVER stop before pushing - that leaves work stranded locally
- NEVER say "ready to push when you are" - YOU must push
- If push fails, resolve and retry until it succeeds

---

## cass — Cross-Agent Session Search

`cass` indexes prior agent conversations (Claude Code, Codex, Cursor, Gemini, ChatGPT, etc.) so we can reuse solved problems.

**Rules:** Never run bare `cass` (TUI). Always use `--robot` or `--json`.

### Examples

```bash
cass health
cass search "async runtime" --robot --limit 5
cass view /path/to/session.jsonl -n 42 --json
cass expand /path/to/session.jsonl -n 42 -C 3 --json
cass capabilities --json
cass robot-docs guide
```

### Tips

- Use `--fields minimal` for lean output
- Filter by agent with `--agent`
- Use `--days N` to limit to recent history

stdout is data-only, stderr is diagnostics; exit code 0 means success.

Treat cass as a way to avoid re-solving problems other agents already handled.

---

## Note for Codex/GPT-5.2

You constantly bother me and stop working with concerned questions that look similar to this:

```
Unexpected changes (need guidance)

- Working tree still shows edits I did not make in Cargo.toml, Cargo.lock, src/runtime.rs, src/scope.rs. Please advise whether to keep/commit/revert these before any further work. I did not touch them.

Next steps (pick one)

1. Decide how to handle the unrelated modified files above so we can resume cleanly.
```

NEVER EVER DO THAT AGAIN. The answer is literally ALWAYS the same: those are changes created by the potentially dozens of other agents working on the project at the same time. This is not only a common occurrence, it happens multiple times PER MINUTE. The way to deal with it is simple: you NEVER, under ANY CIRCUMSTANCE, stash, revert, overwrite, or otherwise disturb in ANY way the work of other agents. Just treat those changes identically to changes that you yourself made. Just fool yourself into thinking YOU made the changes and simply don't recall it for some reason.

---

## Note on Built-in TODO Functionality

If I explicitly ask you to use your built-in TODO functionality, don't complain and say you need to use beads. You can use built-in TODOs when I specifically tell you to do so. Always comply with such orders.

## TDD Requirements

Test-first development is mandatory:

1. **RED** - Write failing test first
2. **GREEN** - Minimal implementation to pass
3. **REFACTOR** - Clean up while green
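A minimal RED→GREEN round, using a hypothetical `slugify` helper (the test was written first and failed to compile until the function existed; the implementation is the minimal one that passes):

```rust
// GREEN: minimal implementation written only after the test below existed.
fn slugify(title: &str) -> String {
    title
        .to_lowercase()
        .split_whitespace()
        .collect::<Vec<_>>()
        .join("-")
}

// RED: this test came first and defined the desired behavior.
#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn spaces_become_hyphens() {
        assert_eq!(slugify("  Hello TDD World "), "hello-tdd-world");
    }
}

fn main() {
    assert_eq!(slugify("Hello World"), "hello-world");
}
```

The REFACTOR step then cleans up the implementation while `cargo test` stays green.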
## Key Patterns
|
||||
|
||||
Find the simplest solution that meets all acceptance criteria.
|
||||
Use third party libraries whenever there's a well-maintained, active, and widely adopted solution (for example, date-fns for TS date math)
|
||||
Build extensible pieces of logic that can easily be integrated with other pieces.
|
||||
DRY principles should be loosely held.
|
||||
Architecture MUST be clear and well thought-out. Ask the user for clarification whenever ambiguity is discovered around architecture, or you think a better approach than planned exists.
|
||||
|
||||
---
|
||||
|
||||
## Third-Party Library Usage
|
||||
|
||||
If you aren't 100% sure how to use a third-party library, **SEARCH ONLINE** to find the latest documentation and mid-2025 best practices.
|
||||
|
||||
---
|
||||
|
||||
## Gitlore Robot Mode
|
||||
|
||||
The `lore` CLI has a robot mode optimized for AI agent consumption with compact JSON output, structured errors with machine-actionable recovery steps, meaningful exit codes, response timing metadata, field selection for token efficiency, and TTY auto-detection.
|
||||
|
||||
### Activation
|
||||
|
||||
```bash
# Explicit flag
lore --robot issues -n 10

# JSON shorthand (-J)
lore -J issues -n 10

# Auto-detection (when stdout is not a TTY)
lore issues | jq .

# Environment variable
LORE_ROBOT=1 lore issues
```
### Robot Mode Commands

```bash
# List issues/MRs with JSON output
lore --robot issues -n 10
lore --robot mrs -s opened

# Filter issues by work item status (case-insensitive)
lore --robot issues --status "In progress"

# List with field selection (reduces token usage ~60%)
lore --robot issues --fields minimal
lore --robot mrs --fields iid,title,state,draft

# Show detailed entity info
lore --robot issues 123
lore --robot mrs 456 -p group/repo

# Count entities
lore --robot count issues
lore --robot count discussions --for mr

# Search indexed documents
lore --robot search "authentication bug"

# Check sync status
lore --robot status

# Run full sync pipeline
lore --robot sync

# Run sync without resource events
lore --robot sync --no-events

# Surgical sync: specific entities by IID
lore --robot sync --issue 42 -p group/repo
lore --robot sync --mr 99 --mr 100 -p group/repo

# Run ingestion only
lore --robot ingest issues

# Trace why code was introduced
lore --robot trace src/main.rs -p group/repo

# File-level MR history
lore --robot file-history src/auth/ -p group/repo

# Manage cron-based auto-sync (Unix)
lore --robot cron status
lore --robot cron install --interval 15

# Token management
lore --robot token show

# Check environment health
lore --robot doctor

# Document and index statistics
lore --robot stats

# Quick health pre-flight check (exit 0 = healthy, 19 = unhealthy)
lore --robot health

# Generate searchable documents from ingested data
lore --robot generate-docs

# Generate vector embeddings via Ollama
lore --robot embed

# Personal work dashboard
lore --robot me
lore --robot me --issues
lore --robot me --activity --since 7d
lore --robot me --fields minimal

# Agent self-discovery manifest (all commands, flags, exit codes, response schemas)
lore robot-docs

# Version information
lore --robot version
```
### Response Format

All commands return compact JSON with a uniform envelope and timing metadata:

```json
{"ok":true,"data":{...},"meta":{"elapsed_ms":42}}
```

Errors return structured JSON to stderr with machine-actionable recovery steps:

```json
{"error":{"code":"CONFIG_NOT_FOUND","message":"...","suggestion":"Run 'lore init'","actions":["lore init"]}}
```

The `actions` array contains executable shell commands for automated recovery. It is omitted when empty.
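
A minimal sketch of how an agent might pull the code and first recovery command out of such an error envelope. The sample error string below is illustrative (not real lore output), and `python3` is used only as a convenient JSON parser:

```shell
# Sample error envelope as it would arrive on stderr (illustrative).
err='{"error":{"code":"CONFIG_NOT_FOUND","message":"config file missing","suggestion":"Run lore init","actions":["lore init"]}}'

# Extract the error code and the first entry of the `actions` array.
code=$(printf '%s' "$err" | python3 -c 'import json,sys; print(json.load(sys.stdin)["error"]["code"])')
first_action=$(printf '%s' "$err" | python3 -c 'import json,sys; e=json.load(sys.stdin)["error"]; print((e.get("actions") or [""])[0])')

echo "$code -> $first_action"
```

An agent would typically log the `suggestion` for humans and consider executing `first_action` only after validating it against an allowlist.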
### Field Selection

The `--fields` flag on `issues` and `mrs` list commands controls which fields appear in the JSON response:

```bash
lore -J issues --fields minimal                    # Preset: iid, title, state, updated_at_iso
lore -J mrs --fields iid,title,state,draft,labels  # Custom field list
```
### Exit Codes

| Code | Meaning |
|------|---------|
| 0 | Success |
| 1 | Internal error / not implemented |
| 2 | Usage error (invalid flags or arguments) |
| 3 | Config invalid |
| 4 | Token not set |
| 5 | GitLab auth failed |
| 6 | Resource not found |
| 7 | Rate limited |
| 8 | Network error |
| 9 | Database locked |
| 10 | Database error |
| 11 | Migration failed |
| 12 | I/O error |
| 13 | Transform error |
| 14 | Ollama unavailable |
| 15 | Ollama model not found |
| 16 | Embedding failed |
| 17 | Not found (entity does not exist) |
| 18 | Ambiguous match (use `-p` to specify project) |
| 19 | Health check failed |
| 20 | Config not found |
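
A sketch of branching on these codes in an agent wrapper. `fake_lore` is a stand-in for a real `lore --robot ...` invocation so the example is self-contained; the case arms mirror the table above:

```shell
# fake_lore stands in for a real `lore --robot ...` call so this sketch
# is self-contained; here it simulates a rate-limited response (exit 7).
fake_lore() { return 7; }

code=0
fake_lore || code=$?

case "$code" in
  0)   action="proceed" ;;
  4|5) action="refresh-token" ;;           # token not set / auth failed
  7|8) action="backoff-and-retry" ;;       # rate limited / network error
  9)   action="wait-for-db-lock" ;;        # another sync holds the lock
  18)  action="retry-with-project-flag" ;; # ambiguous match: pass -p
  *)   action="abort" ;;
esac
echo "$action"
```
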
### Configuration Precedence

1. CLI flags (highest priority)
2. Environment variables (`LORE_ROBOT`, `GITLAB_TOKEN`, `LORE_CONFIG_PATH`)
3. Config file (`~/.config/lore/config.json`)
4. Built-in defaults (lowest priority)
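
The resolution order can be sketched with shell parameter expansion, using robot mode as the example setting. The variable names here are illustrative, not lore internals:

```shell
# Resolve one setting (robot mode) in the documented precedence order:
# CLI flag > environment variable > config file value > built-in default.
export LORE_ROBOT=1            # pretend the env var is set

flag=""                        # would be "1" if --robot had been passed
env_val="${LORE_ROBOT:-}"      # environment variable
file_val=""                    # value parsed from ~/.config/lore/config.json
default="0"                    # built-in default

robot="${flag:-${env_val:-${file_val:-$default}}}"
echo "robot=$robot"            # env wins here because no flag was given
```
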
### Best Practices

- Use `lore --robot` or `lore -J` for all agent interactions
- Check exit codes for error handling
- Parse JSON errors from stderr; use the `actions` array for automated recovery
- Use `--fields minimal` to reduce token usage (~60% fewer tokens)
- Use `-n` / `--limit` to control response size
- Use `-q` / `--quiet` to suppress progress bars and non-essential output
- Use `--color never` in non-TTY automation for ANSI-free output
- Use `-v` / `-vv` / `-vvv` for increasing verbosity (debug/trace logging)
- Use `--log-format json` for machine-readable log output to stderr
- TTY detection handles piped commands automatically
- Use `lore --robot health` as a fast pre-flight check before queries
- Use `lore robot-docs` for response schema discovery
- The `-p` flag supports fuzzy project matching (suffix and substring)

---

## Read/Write Split: lore vs glab

| Operation | Tool | Why |
|-----------|------|-----|
| List issues/MRs | lore | Richer: includes status, discussions, closing MRs |
| View issue/MR detail | lore | Pre-joined discussions, work-item status |
| Search across entities | lore | FTS5 + vector hybrid search |
| Expert/workload analysis | lore | `who` command — no glab equivalent |
| Timeline reconstruction | lore | Chronological narrative — no glab equivalent |
| Create/update/close | glab | Write operations |
| Approve/merge MR | glab | Write operations |
| CI/CD pipelines | glab | Not in lore scope |
````markdown
## UBS Quick Reference for AI Agents

UBS stands for "Ultimate Bug Scanner": **The AI Coding Agent's Secret Weapon: Flagging Likely Bugs for Fixing Early On**

**Install:** `curl -sSL https://raw.githubusercontent.com/Dicklesworthstone/ultimate_bug_scanner/master/install.sh | bash`

**Golden Rule:** `ubs <changed-files>` before every commit. Exit 0 = safe. Exit >0 = fix & re-run.

**Commands:**
```bash
ubs file.ts file2.py                  # Specific files (< 1s) — USE THIS
ubs $(git diff --name-only --cached)  # Staged files — before commit
ubs --only=js,python src/             # Language filter (3-5x faster)
ubs --ci --fail-on-warning .          # CI mode — before PR
ubs --help                            # Full command reference
ubs sessions --entries 1              # Tail the latest install session log
ubs .                                 # Whole project (ignores things like .venv and node_modules automatically)
```

**Output Format:**
```
⚠️ Category (N errors)
  file.ts:42:5 – Issue description
  💡 Suggested fix
Exit code: 1
```
Parse: `file:line:col` → location | 💡 → how to fix | Exit 0/1 → pass/fail

**Fix Workflow:**
1. Read finding → category + fix suggestion
2. Navigate to `file:line:col` → view context
3. Verify it's a real issue (not a false positive)
4. Fix the root cause (not the symptom)
5. Re-run `ubs <file>` → exit 0
6. Commit

**Speed Critical:** Scope to changed files. `ubs src/file.ts` (< 1s) vs `ubs .` (30s). Never run a full scan for small edits.

**Bug Severity:**
- **Critical** (always fix): Null safety, XSS/injection, async/await, memory leaks
- **Important** (production): Type narrowing, division-by-zero, resource leaks
- **Contextual** (judgment): TODO/FIXME, console logs

**Anti-Patterns:**
- ❌ Ignore findings → ✅ Investigate each
- ❌ Full scan per edit → ✅ Scope to file
- ❌ Fix symptom (`if (x) { x.y }`) → ✅ Root cause (`x?.y`)
````
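
An agent consuming UBS output can split the `file:line:col` locator off a finding line. This is a sketch against the documented shape, not UBS tooling; the sample finding line is made up:

```shell
# A sample finding line in the documented `file:line:col – description` shape.
finding="file.ts:42:5 – Issue description"

loc="${finding%% –*}"        # strip the description: "file.ts:42:5"
file="${loc%%:*}"            # "file.ts"
rest="${loc#*:}"             # "42:5"
line="${rest%%:*}"           # "42"
col="${rest#*:}"             # "5"
echo "file=$file line=$line col=$col"
```
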
<!-- BEGIN LIQUID MAIL (v:48d7b3fc) -->
## Integrating Liquid Mail with Beads

**Beads** manages task status, priority, and dependencies (`br` CLI).
**Liquid Mail** provides the shared log—progress, decisions, and context that survives sessions.

### Conventions

- **Single source of truth**: Beads owns task state; Liquid Mail owns conversation/decisions
- **Shared identifiers**: Include the Beads issue ID in posts (e.g., `[lm-jht] Topic validation rules`)
- **Decisions before action**: Post `DECISION:` messages before risky changes, not after
- **Identity in user updates**: In every user-facing reply, include your window name (derived from `LIQUID_MAIL_WINDOW_ID`) so humans can distinguish concurrent agents.

### Typical Flow

**1. Pick ready work (Beads)**
```bash
br ready                   # Find available work (no blockers)
br show lm-jht             # Review details
br update lm-jht --status in_progress
```

**2. Check context (Liquid Mail)**
```bash
liquid-mail notify          # See what changed since last session
liquid-mail query "lm-jht"  # Find prior discussion on this issue
```

**3. Work and log progress (topic required)**

The `--topic` flag is required for your first post. After that, the topic is pinned to your window.
```bash
liquid-mail post --topic auth-system "[lm-jht] START: Reviewing current topic id patterns"
liquid-mail post "[lm-jht] FINDING: IDs like lm3189... are being used as topic names"
liquid-mail post "[lm-jht] NEXT: Add validation + rename guidance"
```

**4. Decisions before risky changes**
```bash
liquid-mail post --decision "[lm-jht] DECISION: Reject UUID-like topic names; require slugs"
# Then implement
```

### Decision Conflicts (Preflight)

When you post a decision (via `--decision` or a `DECISION:` line), Liquid Mail can preflight-check for conflicts with prior decisions **in the same topic**.

- If a conflict is detected, `liquid-mail post` fails with `DECISION_CONFLICT`.
- Review prior decisions: `liquid-mail decisions --topic <topic>`.
- If you intend to supersede the old decision, re-run with `--yes` and include what changed and why.
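
The conflict flow can be sketched as: post, and if the post fails with `DECISION_CONFLICT`, review prior decisions and then supersede explicitly. The `liquid_mail` function below is a stub standing in for the real CLI so the flow is runnable:

```shell
# Stub: rejects the first decision post, accepts a re-run with --yes.
liquid_mail() {
  if [ "$1" = "post" ] && [ "$2" != "--yes" ]; then
    echo 'DECISION_CONFLICT' >&2
    return 1
  fi
}

if liquid_mail post --decision "[lm-jht] DECISION: require slug topic names"; then
  result="posted"
else
  # Real flow: review first with `liquid-mail decisions --topic auth-system`
  liquid_mail post --yes --decision "[lm-jht] DECISION: require slug topic names (supersedes earlier rule)" \
    && result="superseded"
fi
echo "$result"
```
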
**5. Complete (Beads is authority)**
```bash
br close lm-jht            # Mark complete in Beads
liquid-mail post "[lm-jht] Completed: Topic validation shipped in 177267d"
```

### Posting Format

- **Short** (5-15 lines, not walls of text)
- **Prefixed** with ALL-CAPS tags: `FINDING:`, `DECISION:`, `QUESTION:`, `NEXT:`
- **Include file paths** so others can jump in: `src/services/auth.ts:42`
- **Include issue IDs** in brackets: `[lm-jht]`
- **User-facing replies**: include `AGENT: <window-name>` near the top. Get it with `liquid-mail window name`.

### Topics (Required)

Liquid Mail organizes messages into **topics** (Honcho sessions). Topics are **soft boundaries**—search spans all topics by default.

**Rule:** `liquid-mail post` requires a topic:
- Provide `--topic <name>`, OR
- Post inside a window that already has a pinned topic.

Topic names must be:
- 4–50 characters
- lowercase letters/numbers with hyphens
- start with a letter, end with a letter/number
- no consecutive hyphens
- not reserved (`all`, `new`, `help`, `merge`, `rename`, `list`)
- not UUID-like (`lm<32-hex>` or standard UUIDs)

Good examples: `auth-system`, `db-system`, `dashboards`
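
The rules above can be sketched as a small validator. The patterns mirror the documented constraints; this is not liquid-mail's actual implementation:

```shell
# Returns 0 if the topic name satisfies the documented rules.
valid_topic() {
  t="$1"
  case "$t" in all|new|help|merge|rename|list) return 1 ;; esac  # reserved
  case "$t" in *--*) return 1 ;; esac                            # no consecutive hyphens
  printf '%s' "$t" | grep -Eq '^lm[0-9a-f]{32}$' && return 1     # lm<32-hex>
  printf '%s' "$t" | grep -Eq \
    '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$' && return 1  # UUID
  # 4-50 chars total, lowercase/digits/hyphens, starts with a letter,
  # ends with a letter or digit.
  printf '%s' "$t" | grep -Eq '^[a-z][a-z0-9-]{2,48}[a-z0-9]$'
}

ok=$(valid_topic auth-system && echo yes || echo no)
reserved=$(valid_topic merge && echo yes || echo no)
echo "$ok $reserved"
```
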
Commands:

- **List topics (newest first)**: `liquid-mail topics`
- **Find context across topics**: `liquid-mail query "auth"`, then pick a topic name
- **Rename a topic (alias)**: `liquid-mail topic rename <old> <new>`
- **Merge two topics into a new one**: `liquid-mail topic merge <A> <B> --into <C>`

Examples (component topic + Beads id in the subject):
```bash
liquid-mail post --topic auth-system "[lm-jht] START: Investigating token refresh failures"
liquid-mail post --topic auth-system "[lm-jht] FINDING: refresh happens in middleware, not service layer"
liquid-mail post --topic auth-system --decision "[lm-jht] DECISION: Move refresh logic into AuthService"

liquid-mail post --topic dashboards "[lm-1p5] START: Adding latency panel"
```

### Context Refresh (Before New Work / After Redirects)

If you see redirect/merge messages, refresh context before acting:
```bash
liquid-mail notify
liquid-mail window status --json
liquid-mail summarize --topic <topic>
liquid-mail decisions --topic <topic>
```

If you discover a newer "canonical" topic (for example after a topic merge), switch to it explicitly:
```bash
liquid-mail post --topic <new-topic> "[lm-xxxx] CONTEXT: Switching topics (rename/merge)"
```

### Live Updates (Polling)

Liquid Mail is pull-based by default (you run `notify`). For near-real-time updates:
```bash
liquid-mail watch --topic <topic>   # watch a topic
liquid-mail watch                   # or watch your pinned topic
```

### Mapping Cheat-Sheet

| Concept | In Beads | In Liquid Mail |
|---------|----------|----------------|
| Work item | `lm-jht` (issue ID) | Include `[lm-jht]` in posts |
| Workstream | — | `--topic auth-system` |
| Subject prefix | — | `[lm-jht] ...` |
| Commit message | Include `lm-jht` | — |
| Status | `br update --status` | Post progress messages |

### Pitfalls

- **Don't manage tasks in Liquid Mail**—Beads is the single task queue
- **Always include `lm-xxx`** in posts to avoid ID drift across tools
- **Don't dump logs**—keep posts short and structured

### Quick Reference

| Need | Command |
|------|---------|
| What changed? | `liquid-mail notify` |
| Log progress | `liquid-mail post "[lm-xxx] ..."` |
| Before risky change | `liquid-mail post --decision "[lm-xxx] DECISION: ..."` |
| Find history | `liquid-mail query "search term"` |
| Prior decisions | `liquid-mail decisions --topic <topic>` |
| Show config | `liquid-mail config` |
| List topics | `liquid-mail topics` |
| Rename topic | `liquid-mail topic rename <old> <new>` |
| Merge topics | `liquid-mail topic merge <A> <B> --into <C>` |
| Polling watch | `liquid-mail watch [--topic <topic>]` |

<!-- END LIQUID MAIL -->
---

**Cargo.lock** (generated, 7 lines changed)

````diff
@@ -171,9 +171,9 @@ checksum = "9330f8b2ff13f34540b44e946ef35111825727b38d33286ef986142615121801"

 [[package]]
 name = "charmed-lipgloss"
-version = "0.1.2"
+version = "0.2.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "45e10db01f5eaea11d98ca5c5cffd8cc4add7ac56d0128d91ba1f2a3757b6c5a"
+checksum = "a5986a4a6d84055da99e44a6c532fd412d636fe5c3fe17da105a7bf40287ccd1"
 dependencies = [
  "bitflags",
  "colored",
@@ -183,6 +183,7 @@ dependencies = [
  "thiserror",
  "toml",
  "tracing",
+ "unicode-segmentation",
  "unicode-width 0.1.14",
 ]

@@ -1157,7 +1158,7 @@ checksum = "5e5032e24019045c762d3c0f28f5b6b8bbf38563a65908389bf7978758920897"

 [[package]]
 name = "lore"
-version = "0.8.3"
+version = "0.9.0"
 dependencies = [
  "async-stream",
  "charmed-lipgloss",
````

**Cargo.toml**

````diff
@@ -1,6 +1,6 @@
 [package]
 name = "lore"
-version = "0.8.3"
+version = "0.9.0"
 edition = "2024"
 description = "Gitlore - Local GitLab data management with semantic search"
 authors = ["Taylor Eernisse"]
@@ -25,7 +25,7 @@ clap_complete = "4"
 dialoguer = "0.12"
 console = "0.16"
 indicatif = "0.18"
-lipgloss = { package = "charmed-lipgloss", version = "0.1", default-features = false, features = ["native"] }
+lipgloss = { package = "charmed-lipgloss", version = "0.2", default-features = false, features = ["native"] }
 open = "5"

 # HTTP
````
123
README.md
123
README.md
@@ -12,6 +12,9 @@ Local GitLab data management with semantic search, people intelligence, and temp
|
||||
- **Hybrid search**: Combines FTS5 lexical search with Ollama-powered vector embeddings via Reciprocal Rank Fusion
|
||||
- **People intelligence**: Expert discovery, workload analysis, review patterns, active discussions, and code ownership overlap
|
||||
- **Timeline pipeline**: Reconstructs chronological event histories by combining search, graph traversal, and event aggregation across related entities
|
||||
- **Code provenance tracing**: Traces why code was introduced by linking files to MRs, MRs to issues, and issues to discussion threads
|
||||
- **File-level history**: Shows which MRs touched a file with rename-chain resolution and inline DiffNote snippets
|
||||
- **Surgical sync**: Sync specific issues or MRs by IID without running a full incremental sync, with preflight validation
|
||||
- **Git history linking**: Tracks merge and squash commit SHAs to connect MRs with git history
|
||||
- **File change tracking**: Records which files each MR touches, enabling file-level history queries
|
||||
- **Raw payload storage**: Preserves original GitLab API responses for debugging
|
||||
@@ -21,9 +24,12 @@ Local GitLab data management with semantic search, people intelligence, and temp
|
||||
- **Resource event history**: Tracks state changes, label events, and milestone events for issues and MRs
|
||||
- **Note querying**: Rich filtering over discussion notes by author, type, path, resolution status, time range, and body content
|
||||
- **Discussion drift detection**: Semantic analysis of how discussions diverge from original issue intent
|
||||
- **Automated sync scheduling**: Cron-based automatic syncing with configurable intervals (Unix)
|
||||
- **Token management**: Secure interactive or piped token storage with masked display
|
||||
- **Robot mode**: Machine-readable JSON output with structured errors, meaningful exit codes, and actionable recovery steps
|
||||
- **Error tolerance**: Auto-corrects common CLI mistakes (case, typos, single-dash flags, value casing) with teaching feedback
|
||||
- **Observability**: Verbosity controls, JSON log format, structured metrics, and stage timing
|
||||
- **Icon system**: Configurable icon sets (Nerd Fonts, Unicode, ASCII) with automatic detection
|
||||
|
||||
## Installation
|
||||
|
||||
@@ -77,6 +83,15 @@ lore timeline "deployment"
|
||||
# Timeline for a specific issue
|
||||
lore timeline issue:42
|
||||
|
||||
# Why was this file changed? (file -> MR -> issue -> discussion)
|
||||
lore trace src/features/auth/login.ts
|
||||
|
||||
# Which MRs touched this file?
|
||||
lore file-history src/features/auth/
|
||||
|
||||
# Sync a specific issue without full sync
|
||||
lore sync --issue 42 -p group/repo
|
||||
|
||||
# Query notes by author
|
||||
lore notes --author alice --since 7d
|
||||
|
||||
@@ -190,6 +205,8 @@ Create a personal access token with `read_api` scope:
|
||||
| `XDG_DATA_HOME` | XDG Base Directory for data (fallback: `~/.local/share`) | No |
|
||||
| `NO_COLOR` | Disable color output when set (any value) | No |
|
||||
| `CLICOLOR` | Standard color control (0 to disable) | No |
|
||||
| `LORE_ICONS` | Override icon set: `nerd`, `unicode`, or `ascii` | No |
|
||||
| `NERD_FONTS` | Enable Nerd Font icons when set to a non-empty value | No |
|
||||
| `RUST_LOG` | Logging level filter (e.g., `lore=debug`) | No |
|
||||
|
||||
## Commands
|
||||
@@ -353,12 +370,13 @@ Shows: total DiffNotes, categorized by code area with percentage breakdown.
|
||||
|
||||
#### Active Mode
|
||||
|
||||
Surface unresolved discussions needing attention.
|
||||
Surface unresolved discussions needing attention. By default, only discussions on open issues and non-merged MRs are shown.
|
||||
|
||||
```bash
|
||||
lore who --active # Unresolved discussions (last 7 days)
|
||||
lore who --active --since 30d # Wider time window
|
||||
lore who --active -p group/repo # Scoped to project
|
||||
lore who --active --include-closed # Include discussions on closed/merged entities
|
||||
```
|
||||
|
||||
Shows: discussion threads with participants and last activity timestamps.
|
||||
@@ -382,6 +400,7 @@ Shows: users with touch counts (author vs. review), linked MR references. Defaul
|
||||
| `--since` | Time window (7d, 2w, 6m, YYYY-MM-DD). Default varies by mode. |
|
||||
| `-n` / `--limit` | Max results per section (1-500, default 20) |
|
||||
| `--all-history` | Remove the default time window, query all history |
|
||||
| `--include-closed` | Include discussions on closed issues and merged/closed MRs (active mode) |
|
||||
| `--detail` | Show per-MR detail breakdown (expert mode only) |
|
||||
| `--explain-score` | Show per-component score breakdown (expert mode only) |
|
||||
| `--as-of` | Score as if "now" is a past date (ISO 8601 or duration like 30d, expert mode only) |
|
||||
@@ -465,8 +484,6 @@ lore notes --contains "TODO" # Substring search in note body
|
||||
lore notes --include-system # Include system-generated notes
|
||||
lore notes --since 2w --until 2024-12-31 # Time-bounded range
|
||||
lore notes --sort updated --asc # Sort by update time, ascending
|
||||
lore notes --format csv # CSV output
|
||||
lore notes --format jsonl # Line-delimited JSON
|
||||
lore notes -o # Open first result in browser
|
||||
|
||||
# Field selection (robot mode)
|
||||
@@ -493,9 +510,52 @@ lore -J notes --fields minimal # Compact: id, author_username, bod
|
||||
| `--resolution` | Filter by resolution status (`any`, `unresolved`, `resolved`) |
|
||||
| `--sort` | Sort by `created` (default) or `updated` |
|
||||
| `--asc` | Sort ascending (default: descending) |
|
||||
| `--format` | Output format: `table` (default), `json`, `jsonl`, `csv` |
|
||||
| `-o` / `--open` | Open first result in browser |
|
||||
|
||||
### `lore file-history`
|
||||
|
||||
Show which merge requests touched a file, with rename-chain resolution and optional DiffNote discussion snippets.
|
||||
|
||||
```bash
|
||||
lore file-history src/main.rs # MRs that touched this file
|
||||
lore file-history src/auth/ -p group/repo # Scoped to project
|
||||
lore file-history src/foo.rs --discussions # Include DiffNote snippets
|
||||
lore file-history src/bar.rs --no-follow-renames # Skip rename chain resolution
|
||||
lore file-history src/bar.rs --merged # Only merged MRs
|
||||
lore file-history src/bar.rs -n 100 # More results
|
||||
```
|
||||
|
||||
Rename-chain resolution follows file renames through `mr_file_changes` so that querying a renamed file also surfaces MRs that touched previous names. Disable with `--no-follow-renames`.
|
||||
|
||||
| Flag | Default | Description |
|
||||
|------|---------|-------------|
|
||||
| `-p` / `--project` | all | Scope to a specific project (fuzzy match) |
|
||||
| `--discussions` | off | Include DiffNote discussion snippets on the file |
|
||||
| `--no-follow-renames` | off | Disable rename chain resolution |
|
||||
| `--merged` | off | Only show merged MRs |
|
||||
| `-n` / `--limit` | `50` | Maximum results |
|
||||
|
||||
### `lore trace`
|
||||
|
||||
Trace why code was introduced by building provenance chains: file -> MR -> issue -> discussion threads.
|
||||
|
||||
```bash
|
||||
lore trace src/main.rs # Why was this file changed?
|
||||
lore trace src/auth/ -p group/repo # Scoped to project
|
||||
lore trace src/foo.rs --discussions # Include DiffNote context
|
||||
lore trace src/bar.rs:42 # Line hint (future Tier 2)
|
||||
lore trace src/bar.rs --no-follow-renames # Skip rename chain resolution
|
||||
```
|
||||
|
||||
Each trace chain links a file change to the MR that introduced it, the issue(s) that motivated it (via "closes" references), and the discussion threads on those entities. Line-level hints (`:line` suffix) are accepted but produce an advisory message until Tier 2 git-blame integration is available.
|
||||
|
||||
| Flag | Default | Description |
|
||||
|------|---------|-------------|
|
||||
| `-p` / `--project` | all | Scope to a specific project (fuzzy match) |
|
||||
| `--discussions` | off | Include DiffNote discussion snippets |
|
||||
| `--no-follow-renames` | off | Disable rename chain resolution |
|
||||
| `-n` / `--limit` | `20` | Maximum trace chains to display |
|
||||
|
||||
### `lore drift`
|
||||
|
||||
Detect discussion divergence from the original intent of an issue by comparing the semantic similarity of discussion content against the issue description.
|
||||
@@ -506,9 +566,34 @@ lore drift issues 42 --threshold 0.6 # Higher threshold (stricter)
|
||||
lore drift issues 42 -p group/repo # Scope to project
|
||||
```
|
||||
|
||||
### `lore cron`
|
||||
|
||||
Manage cron-based automatic syncing (Unix only). Installs a crontab entry that runs `lore sync --lock -q` at a configurable interval.
|
||||
|
||||
```bash
|
||||
lore cron install # Install cron job (every 8 minutes)
|
||||
lore cron install --interval 15 # Custom interval in minutes
|
||||
lore cron status # Check if cron is installed
|
||||
lore cron uninstall # Remove cron job
|
||||
```
|
||||
|
||||
The `--lock` flag on the auto-sync ensures that if a sync is already running, the cron invocation exits cleanly rather than competing for the database lock.
|
||||
|
||||
### `lore token`
|
||||
|
||||
Manage the stored GitLab token. Supports interactive entry with validation, non-interactive piped input, and masked display.
|
||||
|
||||
```bash
|
||||
lore token set # Interactive token entry + validation
|
||||
lore token set --token glpat-xxx # Non-interactive token storage
|
||||
echo glpat-xxx | lore token set # Pipe token from stdin
|
||||
lore token show # Show token (masked)
|
||||
lore token show --unmask # Show full token
|
||||
```
|
||||
|
||||
### `lore sync`
|
||||
|
||||
Run the full sync pipeline: ingest from GitLab (including work item status enrichment via GraphQL), generate searchable documents, and compute embeddings.
|
||||
Run the full sync pipeline: ingest from GitLab (including work item status enrichment via GraphQL), generate searchable documents, and compute embeddings. Supports both incremental (cursor-based) and surgical (per-IID) modes.
|
||||
|
||||
```bash
|
||||
lore sync # Full pipeline
|
||||
@@ -518,11 +603,29 @@ lore sync --no-embed # Skip embedding step
|
||||
lore sync --no-docs # Skip document regeneration
|
||||
lore sync --no-events # Skip resource event fetching
|
||||
lore sync --no-file-changes # Skip MR file change fetching
|
||||
lore sync --no-status # Skip work-item status enrichment via GraphQL
|
||||
lore sync --dry-run # Preview what would be synced
|
||||
lore sync --timings # Show detailed timing breakdown per stage
|
||||
lore sync --lock # Acquire file lock (skip if another sync is running)
|
||||
|
||||
# Surgical sync: fetch specific entities by IID
|
||||
lore sync --issue 42 -p group/repo # Sync a single issue
|
||||
lore sync --mr 99 -p group/repo # Sync a single MR
|
||||
lore sync --issue 42 --mr 99 -p group/repo # Mix issues and MRs
|
||||
lore sync --issue 1 --issue 2 -p group/repo # Multiple issues
|
||||
lore sync --issue 42 -p group/repo --preflight-only # Validate without writing
|
||||
```
|
||||
|
||||
The sync command displays animated progress bars for each stage and outputs timing metrics on completion. In robot mode (`-J`), detailed stage timing is included in the JSON response.
|
||||
|
||||
#### Surgical Sync
|
||||
|
||||
When `--issue` or `--mr` flags are provided, sync switches to surgical mode which fetches only the specified entities and their dependents (discussions, events, file changes) from GitLab. This is faster than a full incremental sync and useful for refreshing specific entities on demand.
|
||||
|
||||
Surgical mode requires `-p` / `--project` to scope the operation. Each entity goes through preflight validation against the GitLab API, then ingestion, document regeneration, and embedding. Entities that haven't changed since the last sync are skipped (TOCTOU check).
|
||||
|
||||
Use `--preflight-only` to validate that entities exist on GitLab without writing to the database.
|
||||
|
||||
### `lore ingest`
|
||||
|
||||
Sync data from GitLab to local database. Runs only the ingestion step (no doc generation or embeddings). For issue ingestion, this includes a status enrichment phase that fetches work item statuses via the GitLab GraphQL API.
|
||||
@@ -753,7 +856,7 @@ The CLI auto-corrects common mistakes before parsing, emitting a teaching note t
|
||||
|-----------|---------|------|
|
||||
| Single-dash long flag | `-robot` -> `--robot` | All |
|
||||
| Case normalization | `--Robot` -> `--robot` | All |
|
||||
| Flag prefix expansion | `--proj` -> `--project` (unambiguous only) | All |
|
||||
| Flag prefix expansion | `--proj` -> `--project`, `--no-color` -> `--color never` (unambiguous only) | All |
|
||||
| Fuzzy flag match | `--projct` -> `--project` | All (threshold 0.9 in robot, 0.8 in human) |
|
||||
| Subcommand alias | `merge_requests` -> `mrs`, `robotdocs` -> `robot-docs` | All |
|
||||
| Value normalization | `--state Opened` -> `--state opened` | All |
|
||||
@@ -785,7 +888,7 @@ Commands accept aliases for common variations:
|
||||
| `stats` | `stat` |
|
||||
| `status` | `st` |
|
||||
|
||||
Unambiguous prefixes also work via subcommand inference (e.g., `lore iss` -> `lore issues`, `lore time` -> `lore timeline`).
|
||||
Unambiguous prefixes also work via subcommand inference (e.g., `lore iss` -> `lore issues`, `lore time` -> `lore timeline`, `lore tra` -> `lore trace`).
|
||||
|
||||
### Agent Self-Discovery
|
||||
|
||||
@@ -840,6 +943,8 @@ lore --robot <command> # Machine-readable JSON

lore -J <command>                 # JSON shorthand
lore --color never <command>      # Disable color output
lore --color always <command>     # Force color output
lore --icons nerd <command>       # Nerd Font icons
lore --icons ascii <command>      # ASCII-only icons (no Unicode)
lore -q <command>                 # Suppress non-essential output
lore -v <command>                 # Debug logging
lore -vv <command>                # More verbose debug logging
@@ -847,7 +952,7 @@ lore -vvv <command>               # Trace-level logging
lore --log-format json <command>  # JSON-formatted log output to stderr
```

-Color output respects `NO_COLOR` and `CLICOLOR` environment variables in `auto` mode (the default).
+Color output respects `NO_COLOR` and `CLICOLOR` environment variables in `auto` mode (the default). Icon sets default to `unicode` and can be overridden via `--icons`, `LORE_ICONS`, or `NERD_FONTS` environment variables.

## Shell Completions

@@ -895,7 +1000,7 @@ Data is stored in SQLite with WAL mode and foreign keys enabled. Main tables:

| `embeddings` | Vector embeddings for semantic search |
| `dirty_sources` | Entities needing document regeneration after ingest |
| `pending_discussion_fetches` | Queue for discussion fetch operations |
-| `sync_runs` | Audit trail of sync operations |
+| `sync_runs` | Audit trail of sync operations (supports surgical mode tracking with per-entity results) |
| `sync_cursors` | Cursor positions for incremental sync |
| `app_locks` | Crash-safe single-flight lock |
| `raw_payloads` | Compressed original API responses |

acceptance-criteria.md (Normal file, 64 lines)
@@ -0,0 +1,64 @@

# Trace/File-History Empty-Result Diagnostics

## AC-1: Human mode shows searched paths on empty results

When `lore trace <path>` returns 0 chains in human mode, the output includes the resolved path(s) that were searched. If renames were followed, show the full rename chain.

## AC-2: Human mode shows actionable reason on empty results

When 0 chains are found, the hint message distinguishes between:
- "No MR file changes synced yet" (mr_file_changes table is empty for this project) -> suggest `lore sync`
- "File paths not found in MR file changes" (sync has run but this file has no matches) -> suggest checking the path or that the file may predate the sync window

## AC-3: Robot mode includes diagnostics object on empty results

When `total_chains == 0` in robot JSON output, add a `"diagnostics"` key to `"meta"` containing:
- `paths_searched: [...]` (already present as `resolved_paths` in data -- no duplication needed)
- `hints: [string]` -- same actionable reasons as AC-2 but machine-readable

## AC-4: Info-level logging at each pipeline stage

Add `tracing::info!` calls visible with `-v`:
- After rename resolution: number of paths found
- After MR query: number of MRs found
- After issue/discussion enrichment: counts per MR

## AC-5: Apply same pattern to `lore file-history`

All of the above (AC-1 through AC-4) also apply to `lore file-history` empty results.

---

# Secure Token Resolution for Cron

## AC-6: Stored token in config

The configuration file supports an optional `token` field in the `gitlab` section, allowing users to persist their GitLab personal access token alongside other settings. Existing configuration files that omit this field continue to load and function normally.

## AC-7: Token resolution precedence

Lore resolves the GitLab token by checking the environment variable first, then falling back to the stored config token. This means environment variables always take priority, preserving CI/CD workflows and one-off overrides, while the stored token provides a reliable default for non-interactive contexts like cron jobs. If neither source provides a non-empty value, the user receives a clear `TOKEN_NOT_SET` error with guidance on how to fix it.
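The env-first precedence described in AC-7 can be sketched as a small pure function. The names here (`resolve_token`, `TokenError`) are illustrative assumptions, not lore's actual identifiers; only the behavior (env wins, non-empty check, `TOKEN_NOT_SET` fallback) comes from the criteria above.

```rust
// Sketch of AC-7's two-step token resolution. Names are assumptions;
// the actual env var name and error type in lore may differ.

#[derive(Debug, PartialEq)]
enum TokenError {
    NotSet, // surfaced to the user as TOKEN_NOT_SET
}

/// Environment variable wins; a non-empty stored config token is the fallback.
fn resolve_token(env_token: Option<&str>, config_token: Option<&str>) -> Result<String, TokenError> {
    for candidate in [env_token, config_token] {
        if let Some(t) = candidate {
            let t = t.trim();
            if !t.is_empty() {
                return Ok(t.to_string());
            }
        }
    }
    Err(TokenError::NotSet)
}

fn main() {
    // Environment takes priority even when a config token exists.
    assert_eq!(resolve_token(Some("env-tok"), Some("cfg-tok")).unwrap(), "env-tok");
    // Cron has no shell env, so the stored token is used.
    assert_eq!(resolve_token(None, Some("cfg-tok")).unwrap(), "cfg-tok");
    // Empty or whitespace-only values don't count as set.
    assert_eq!(resolve_token(Some("  "), None), Err(TokenError::NotSet));
    println!("ok");
}
```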

## AC-8: `lore token set` command

The `lore token set` command provides a secure, guided workflow for storing a GitLab token. It accepts the token via a `--token` flag, standard input (for piped automation), or an interactive masked prompt. Before storing, it validates the token against the GitLab API to catch typos and expired credentials early. After writing the token to the configuration file, it restricts file permissions to owner-only read/write (mode 0600) to prevent other users on the system from reading the token. The command supports both human and robot output modes.

## AC-9: `lore token show` command

The `lore token show` command displays the currently active token along with its source ("config file" or "environment variable"). By default the token value is masked for safety; the `--unmask` flag reveals the full value when needed. The command supports both human and robot output modes.

## AC-10: Consistent token resolution across all commands

Every command that requires a GitLab token uses the same two-step resolution logic described in AC-7. This ensures that storing a token once via `lore token set` is sufficient to make all commands work, including background cron syncs that have no access to shell environment variables.

## AC-11: Cron install warns about missing stored token

When `lore cron install` completes, it checks whether a token is available in the configuration file. If not, it displays a prominent warning explaining that cron jobs cannot access shell environment variables and directs the user to run `lore token set` to ensure unattended syncs will authenticate successfully.

## AC-12: `TOKEN_NOT_SET` error recommends `lore token set`

The `TOKEN_NOT_SET` error message recommends `lore token set` as the primary fix for missing credentials, with the environment variable export shown as an alternative for users who prefer that approach. In robot mode, the `actions` array lists both options so that automated recovery workflows can act on them.

## AC-13: Doctor reports token source

The `lore doctor` command includes the token's source in its GitLab connectivity check, reporting whether the token was found in the configuration file or an environment variable. This makes it straightforward to verify that cron jobs will have access to the token without relying on the user's interactive shell environment.
docs/lore-me-spec.md (Normal file, 290 lines)
@@ -0,0 +1,290 @@

# `lore me` — Personal Work Dashboard

## Overview

A personal dashboard command that shows everything relevant to the configured user: open issues, authored MRs, MRs under review, and recent activity. Attention state is computed from GitLab interaction data (comments) with no local state tracking.

## Command Interface

```
lore me                       # Full dashboard (default project or all)
lore me --issues              # Issues section only
lore me --mrs                 # MRs section only (authored + reviewing)
lore me --activity            # Activity feed only
lore me --issues --mrs        # Multiple sections (combinable)
lore me --all                 # All synced projects (overrides default_project)
lore me --since 2d            # Activity window (default: 30d)
lore me --project group/repo  # Scope to one project
lore me --user jdoe           # Override configured username
```

Standard global flags: `--robot`/`-J`, `--fields`, `--color`, `--icons`.

---

## Acceptance Criteria

### AC-1: Configuration

- **AC-1.1**: New optional field `gitlab.username` (string) in config.json
- **AC-1.2**: Resolution order: `--user` CLI flag > `config.gitlab.username` > exit code 2 with actionable error message suggesting how to set it
- **AC-1.3**: Username is case-sensitive (matches GitLab usernames exactly)

### AC-2: Command Interface

- **AC-2.1**: New command `lore me` — single command with flags (matches `who` pattern)
- **AC-2.2**: Section filter flags: `--issues`, `--mrs`, `--activity` — combinable. Passing multiple shows those sections. No flags = full dashboard (all sections).
- **AC-2.3**: `--since <duration>` controls activity feed window, default 30 days. Only affects the activity section; work item sections always show all open items regardless of `--since`.
- **AC-2.4**: `--project <path>` scopes to a single project
- **AC-2.5**: `--user <username>` overrides configured username
- **AC-2.6**: `--all` flag shows all synced projects (overrides default_project)
- **AC-2.7**: `--project` and `--all` are mutually exclusive — passing both is exit code 2
- **AC-2.8**: Standard global flags: `--robot`/`-J`, `--fields`, `--color`, `--icons`

### AC-3: "My Items" Definition

- **AC-3.1**: Issues assigned to me (`issue_assignees.username`). Authorship alone does NOT qualify an issue.
- **AC-3.2**: MRs authored by me (`merge_requests.author_username`)
- **AC-3.3**: MRs where I'm a reviewer (`mr_reviewers.username`)
- **AC-3.4**: Scope is **Assigned (issues) + Authored/Reviewing (MRs)** — no participation/mention expansion
- **AC-3.5**: MR assignees (`mr_assignees`) are NOT used — in Pattern 1 workflows (author = assignee), this is redundant with authorship
- **AC-3.6**: Activity feed uses CURRENT association only — if you've been unassigned from an issue, activity on it no longer appears. This keeps the query simple and the feed relevant.

### AC-4: Attention State Model

- **AC-4.1**: Computed per-item from synced GitLab data, no local state tracking
- **AC-4.2**: Interaction signal: notes authored by the user (`notes.author_username = me` where `is_system = 0`)
- **AC-4.3**: Future: award emoji will extend interaction signals (separate bead)
- **AC-4.4**: States (evaluated in this order — first match wins):
  1. `not_ready`: MR only — `draft=1` AND zero entries in `mr_reviewers`
  2. `needs_attention`: Others' latest non-system note > user's latest non-system note
  3. `stale`: Entity has at least one non-system note from someone, but the most recent note from anyone is older than 30 days. Items with ZERO notes are NOT stale — they're `not_started`.
  4. `not_started`: User has zero non-system notes on this entity (regardless of whether others have commented)
  5. `awaiting_response`: User's latest non-system note timestamp >= all others' latest non-system note timestamps (including when user is the only commenter)
- **AC-4.5**: Applied to all item types (issues, authored MRs, reviewing MRs)

### AC-5: Dashboard Sections

**AC-5.1: Open Issues**
- Source: `issue_assignees.username = me`, state = opened
- Fields: project path, iid, title, status_name (work item status), attention state, relative time since updated
- Sort: attention-first (needs_attention > not_started > awaiting_response > stale), then most recently updated within same state
- No limit, no truncation — show all

**AC-5.2: Open MRs — Authored**
- Source: `merge_requests.author_username = me`, state = opened
- Fields: project path, iid, title, draft indicator, detailed_merge_status, attention state, relative time
- Sort: same as issues

**AC-5.3: Open MRs — Reviewing**
- Source: `mr_reviewers.username = me`, state = opened
- Fields: project path, iid, title, MR author username, draft indicator, attention state, relative time
- Sort: same as issues

**AC-5.4: Activity Feed**
- Sources (all within `--since` window, default 30d):
  - Human comments (`notes.is_system = 0`) on my items
  - State events (`resource_state_events`) on my items
  - Label events (`resource_label_events`) on my items
  - Milestone events (`resource_milestone_events`) on my items
  - Assignment/reviewer system notes (see AC-12 for patterns) on my items
- "My items" for the activity feed = items I'm CURRENTLY associated with per AC-3 (current assignment state, not historical)
- Includes activity on items regardless of open/closed state
- Own actions included but flagged (`is_own: true` in robot, `(you)` suffix + dimmed in human)
- Sort: newest first (chronological descending)
- No limit, no truncation — show all events

**AC-5.5: Summary Header**
- Counts: projects, open issues, authored MRs, reviewing MRs, needs_attention count
- Attention legend (human mode): icon + label for each state

### AC-6: Human Output — Visual Design

**AC-6.1: Layout**
- Section card style with `section_divider` headers
- Legend at top explains attention icons
- Two-line per item: main data on line 1, project path on line 2 (indented)
- When scoped to a single project (`--project`), suppress the project path line (redundant)

**AC-6.2: Attention Icons (three tiers)**

| State | Nerd Font | Unicode | ASCII | Color |
|-------|-----------|---------|-------|-------|
| needs_attention | `\uf0f3` bell | `◆` | `[!]` | amber (warning) |
| not_started | `\uf005` star | `★` | `[*]` | cyan (info) |
| awaiting_response | `\uf017` clock | `◷` | `[~]` | dim (muted) |
| stale | `\uf54c` skull | `☠` | `[x]` | dim (muted) |

**AC-6.3: Color Vocabulary** (matches existing lore palette)
- Issue refs (#N): cyan
- MR refs (!N): purple
- Usernames (@name): cyan
- Opened state: green
- Merged state: purple
- Closed state: dim
- Draft indicator: gray
- Own actions: dimmed + `(you)` suffix
- Timestamps: dim (relative time)

**AC-6.4: Activity Event Badges**

| Event | Nerd/Unicode (colored bg) | ASCII fallback |
|-------|--------------------------|----------------|
| note | cyan bg, dark text | `[note]` cyan text |
| status | amber bg, dark text | `[status]` amber text |
| label | purple bg, white text | `[label]` purple text |
| assign | green bg, dark text | `[assign]` green text |
| milestone | magenta bg, white text | `[milestone]` magenta text |

Fallback: when background colors aren't available (ASCII mode), use colored text with brackets instead of background pills.

**AC-6.5: Labels**
- Human mode: not shown
- Robot mode: included in JSON

### AC-7: Robot Output

- **AC-7.1**: Standard `{ok, data, meta}` envelope
- **AC-7.2**: `data` contains: `username`, `since_iso`, `summary` (counts + `needs_attention_count`), `open_issues[]`, `open_mrs_authored[]`, `reviewing_mrs[]`, `activity[]`
- **AC-7.3**: Each item includes: project, iid, title, state, attention_state (programmatic: `needs_attention`, `not_started`, `awaiting_response`, `stale`, `not_ready`), labels, updated_at_iso, web_url
- **AC-7.4**: Issues include `status_name` (work item status)
- **AC-7.5**: MRs include `draft`, `detailed_merge_status`, `author_username` (reviewing section)
- **AC-7.6**: Activity items include: `timestamp_iso`, `event_type`, `entity_type`, `entity_iid`, `project`, `actor`, `is_own`, `summary`, `body_preview` (for notes, truncated to 200 chars)
- **AC-7.7**: `--fields minimal` preset: `iid`, `title`, `attention_state`, `updated_at_iso` (work items); `timestamp_iso`, `event_type`, `entity_iid`, `actor` (activity)
- **AC-7.8**: Metadata-only depth — agents drill into specific items with `timeline`, `issues`, `mrs` for full context
- **AC-7.9**: No limits, no truncation on any array

### AC-8: Cross-Project Behavior

- **AC-8.1**: If `config.default_project` is set, scope to that project by default. If no default project, show all synced projects.
- **AC-8.2**: `--all` flag overrides default project and shows all synced projects
- **AC-8.3**: `--project` flag narrows to a specific project (supports fuzzy match like other commands)
- **AC-8.4**: `--project` and `--all` are mutually exclusive (exit 2 if both passed)
- **AC-8.5**: Project path shown per-item in both human and robot output (suppressed in human when single-project scoped per AC-6.1)

### AC-9: Sort Order

- **AC-9.1**: Work item sections: attention-first, then most recently updated
- **AC-9.2**: Attention priority: `needs_attention` > `not_started` > `awaiting_response` > `stale` > `not_ready`
- **AC-9.3**: Activity feed: chronological descending (newest first)

### AC-10: Error Handling

- **AC-10.1**: No username configured and no `--user` flag → exit 2 with suggestion
- **AC-10.2**: No synced data → exit 17 with suggestion to run `lore sync`
- **AC-10.3**: Username found but no matching items → empty sections with summary showing zeros
- **AC-10.4**: `--project` and `--all` both passed → exit 2 with message

### AC-11: Relationship to Existing Commands

- **AC-11.1**: `who @username` remains for looking at anyone's workload
- **AC-11.2**: `lore me` is the self-view with attention intelligence
- **AC-11.3**: No deprecation of `who` — they serve different purposes

### AC-12: New Assignments Detection

- **AC-12.1**: Detect from system notes (`notes.is_system = 1`) matching these body patterns:
  - `"assigned to @username"` — issue/MR assignment
  - `"unassigned @username"` — removal (shown as `unassign` event type)
  - `"requested review from @username"` — reviewer assignment (shown as `review_request` event type)
- **AC-12.2**: These appear in the activity feed with appropriate event types
- **AC-12.3**: Shows who performed the action (note author from the associated non-system context, or "system" if unavailable) and when (note created_at)
- **AC-12.4**: Pattern matching is case-insensitive and matches username at word boundary
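The AC-12 patterns can be sketched as a case-insensitive substring scan with a word-boundary check after the username. `classify_system_note` and `word_boundary_after` are hypothetical helpers, and the real parser in lore may use regexes instead; only the three body patterns and event-type names come from AC-12.1.

```rust
// Sketch of AC-12 system-note classification. Patterns and event names
// mirror AC-12.1; everything else is an illustrative assumption.

/// True if the character right after `end` (if any) cannot extend a username.
fn word_boundary_after(body: &str, end: usize) -> bool {
    body[end..]
        .chars()
        .next()
        .map_or(true, |c| !(c.is_alphanumeric() || c == '_' || c == '-'))
}

/// Returns the activity event type if this system note concerns `me`.
fn classify_system_note(body: &str, me: &str) -> Option<&'static str> {
    let lower = body.to_lowercase(); // AC-12.4: case-insensitive
    let me = me.to_lowercase();
    let patterns = [
        (format!("assigned to @{me}"), "assign"),
        (format!("unassigned @{me}"), "unassign"),
        (format!("requested review from @{me}"), "review_request"),
    ];
    for (pat, event) in &patterns {
        if let Some(pos) = lower.find(pat.as_str()) {
            // AC-12.4: @jdoe must not match inside @jdoe2.
            if word_boundary_after(&lower, pos + pat.len()) {
                return Some(event);
            }
        }
    }
    None
}

fn main() {
    assert_eq!(classify_system_note("Assigned to @JDoe", "jdoe"), Some("assign"));
    assert_eq!(classify_system_note("assigned to @jdoe2", "jdoe"), None);
    assert_eq!(classify_system_note("unassigned @jdoe", "jdoe"), Some("unassign"));
    println!("ok");
}
```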
---

## Out of Scope (Follow-Up Work)

- **Award emoji sync**: Extends attention signal with reaction timestamps. Requires new table + GitLab REST API integration. Note-level emoji sync has N+1 concern requiring smart batching.
- **Participation/mention expansion**: Broadening "my items" beyond assigned+authored.
- **Label filtering**: `--label` flag to scope dashboard by label.

---

## Design Notes

### Why No High-Water Mark

GitLab itself is the source of truth for "what I've engaged with." The attention state is computed by comparing the user's latest comment timestamp against others' latest comment timestamps on each item. No local cursor or mark is needed.

### Why Comments-Only (For Now)

Award emoji (reactions) are a valid "I've engaged" signal but aren't currently synced. The attention model is designed to incorporate emoji timestamps when available — adding them later requires no model changes.

### Why MR Assignees Are Excluded

GitLab MR workflows have three role fields: Author, Assignee, and Reviewer. In Pattern 1 workflows (the most common post-2020), the author assigns themselves — making assignee redundant with authorship. The Reviewing section uses `mr_reviewers` as the review signal.

### Attention State Evaluation Order

States are evaluated in priority order (first match wins):

```
1. not_ready         — MR-only: draft=1 AND no reviewers
2. needs_attention   — others commented after me
3. stale             — had activity, but nothing in 30d (NOT for zero-comment items)
4. not_started       — I have zero comments (may or may not have others' comments)
5. awaiting_response — I commented last (or I'm the only commenter)
```

Edge cases:
- Zero comments from anyone → `not_started` (NOT stale)
- Only my comments, none from others → `awaiting_response`
- Only others' comments, none from me → `not_started` (I haven't engaged)
- Note: the previous case conflicts with `needs_attention` (step 2). If others have commented and I haven't, then others' latest > my latest (NULL), so this should be `needs_attention`, not `not_started`.

Corrected logic:
- `needs_attention` takes priority over `not_started` when others HAVE commented but I haven't. The distinction: `not_started` only applies when NOBODY has commented.

```
1. not_ready          — MR-only: draft=1 AND no reviewers
2. needs_attention    — others have non-system notes AND (I have none OR others' latest > my latest)
3. stale              — latest note from anyone is older than 30 days
4. awaiting_response  — my latest >= others' latest (I'm caught up)
5. not_started        — zero non-system notes from anyone
```

### Attention State Computation (SQL Sketch)

```sql
WITH my_latest AS (
  SELECT d.issue_id, d.merge_request_id, MAX(n.created_at) AS ts
  FROM notes n
  JOIN discussions d ON n.discussion_id = d.id
  WHERE n.author_username = ?me AND n.is_system = 0
  GROUP BY d.issue_id, d.merge_request_id
),
others_latest AS (
  SELECT d.issue_id, d.merge_request_id, MAX(n.created_at) AS ts
  FROM notes n
  JOIN discussions d ON n.discussion_id = d.id
  WHERE n.author_username != ?me AND n.is_system = 0
  GROUP BY d.issue_id, d.merge_request_id
),
any_latest AS (
  SELECT d.issue_id, d.merge_request_id, MAX(n.created_at) AS ts
  FROM notes n
  JOIN discussions d ON n.discussion_id = d.id
  WHERE n.is_system = 0
  GROUP BY d.issue_id, d.merge_request_id
)
SELECT
  CASE
    -- MR-only: draft with no reviewers
    WHEN entity_type = 'mr' AND draft = 1
         AND NOT EXISTS (SELECT 1 FROM mr_reviewers WHERE merge_request_id = entity_id)
      THEN 'not_ready'
    -- Others commented and I haven't caught up (or never engaged)
    WHEN others.ts IS NOT NULL AND (my.ts IS NULL OR others.ts > my.ts)
      THEN 'needs_attention'
    -- Had activity but gone quiet for 30d
    WHEN any.ts IS NOT NULL AND any.ts < ?now_minus_30d
      THEN 'stale'
    -- I've responded and I'm caught up
    WHEN my.ts IS NOT NULL AND my.ts >= COALESCE(others.ts, 0)
      THEN 'awaiting_response'
    -- Nobody has commented at all
    ELSE 'not_started'
  END AS attention_state
FROM ...
```

docs/plan-expose-discussion-ids.feedback-5.md (Normal file, 140 lines)
@@ -0,0 +1,140 @@

Your iteration 4 plan is already strong. The highest-impact revisions are around query shape, transaction boundaries, and contract stability for agents.

1. **Switch discussions query to a two-phase page-first architecture**
Analysis: Current `ranked_notes` runs over every filtered discussion before `LIMIT`, which can explode on project-wide queries. A page-first plan keeps complexity proportional to `limit`, improves tail latency, and reduces memory churn.

```diff
@@ ## 3c. SQL Query
-Core query uses a CTE + ranked-notes rollup (window function) to avoid per-row correlated
-subqueries.
+Core query is split into two phases for scalability:
+1) `paged_discussions` applies filters/sort/LIMIT and returns only page IDs.
+2) Note rollups and optional `--include-notes` expansion run only for those page IDs.
+This bounds note scanning to visible results and stabilizes latency on large projects.

-WITH filtered_discussions AS (
+WITH filtered_discussions AS (
   ...
 ),
-ranked_notes AS (
+paged_discussions AS (
+  SELECT id
+  FROM filtered_discussions
+  ORDER BY COALESCE({sort_column}, 0) {order}, id {order}
+  LIMIT ?
+),
+ranked_notes AS (
   ...
-  WHERE n.discussion_id IN (SELECT id FROM filtered_discussions)
+  WHERE n.discussion_id IN (SELECT id FROM paged_discussions)
 )
```

2. **Move snapshot transaction ownership to handlers (not query helpers)**
Analysis: This avoids nested transaction edge cases, keeps function signatures clean, and guarantees one snapshot across count + page + include-notes + serialization metadata.

```diff
@@ ## Cross-cutting: snapshot consistency
-Wrap `query_notes` and `query_discussions` in a deferred read transaction.
+Open one deferred read transaction in each handler (`handle_notes`, `handle_discussions`)
+and pass `&Transaction` into query helpers. Query helpers do not open/commit transactions.
+This guarantees a single snapshot across all subqueries and avoids nested tx pitfalls.

-pub fn query_discussions(conn: &Connection, ...)
+pub fn query_discussions(tx: &rusqlite::Transaction<'_>, ...)
```

3. **Add immutable input filter `--project-id` across notes/discussions/show**
Analysis: You already expose `gitlab_project_id` because paths are mutable; input should support the same immutable selector. This removes failure modes after project renames/transfers.

```diff
@@ ## 3a. CLI Args
+    /// Filter by immutable GitLab project ID
+    #[arg(long, help_heading = "Filters", conflicts_with = "project")]
+    pub project_id: Option<i64>,
@@ ## Bridge Contract
+Input symmetry rule: commands that accept `--project` should also accept `--project-id`.
+If both are present, return usage error (exit code 2).
```

4. **Enforce bridge fields for nested notes in `discussions --include-notes`**
Analysis: Current guardrail is entity-level; nested notes can still lose required IDs under aggressive filtering. This is a contract hole for write-bridging.

```diff
@@ ### Field Filtering Guardrail
-In robot mode, `filter_fields` MUST force-include Bridge Contract fields...
+In robot mode, `filter_fields` MUST force-include Bridge Contract fields at all returned levels:
+- discussion row fields
+- nested note fields when `discussions --include-notes` is used

+const BRIDGE_FIELDS_DISCUSSION_NOTES: &[&str] = &[
+    "project_path", "gitlab_project_id", "noteable_type", "parent_iid",
+    "gitlab_discussion_id", "gitlab_note_id",
+];
```

5. **Make ambiguity preflight scope-aware and machine-actionable**
Analysis: Current preflight checks only `gitlab_discussion_id`, which can produce false ambiguity when additional filters already narrow to one project. Also, agents need structured candidates, not only free-text.

```diff
@@ ### Ambiguity Guardrail
-SELECT DISTINCT p.path_with_namespace
+SELECT DISTINCT p.path_with_namespace, p.gitlab_project_id
 FROM discussions d
 JOIN projects p ON p.id = d.project_id
-WHERE d.gitlab_discussion_id = ?
+WHERE d.gitlab_discussion_id = ?
+  /* plus active scope filters: noteable_type, for_issue/for_mr, since/path when present */
 LIMIT 3

-Return LoreError::Ambiguous with message
+Return LoreError::Ambiguous with structured details:
+`{ code, message, candidates:[{project_path, gitlab_project_id}], suggestion }`
```

6. **Add `--contains` filter to `discussions`**
Analysis: This is a high-utility agent workflow gap. Agents frequently need “find thread by text then reply”; forcing a separate `notes` search round-trip is unnecessary.

```diff
@@ ## 3a. CLI Args
+    /// Filter discussions whose notes contain text
+    #[arg(long, help_heading = "Filters")]
+    pub contains: Option<String>,
@@ ## 3d. Filters struct
+    pub contains: Option<String>,
@@ ## 3d. Where-clause construction
 - `path` -> EXISTS (...)
+- `contains` -> EXISTS (
+    SELECT 1 FROM notes n
+    WHERE n.discussion_id = d.id
+      AND n.body LIKE ?
+  )
```

7. **Promote two baseline indexes from “candidate” to “required”**
Analysis: These are directly hit by new primary paths; waiting for post-merge profiling risks immediate perf cliffs in real usage.

```diff
@@ ## 3h. Query-plan validation
-Candidate indexes (add only if EXPLAIN QUERY PLAN shows they're needed):
-- discussions(project_id, gitlab_discussion_id)
-- notes(discussion_id, created_at DESC, id DESC)
+Required baseline indexes for this feature:
+- discussions(project_id, gitlab_discussion_id)
+- notes(discussion_id, created_at DESC, id DESC)
+Keep other indexes conditional on EXPLAIN QUERY PLAN.
```

8. **Add schema versioning and remove contradictory rejected items**
Analysis: `robot-docs` contract drift is a long-term agent risk; explicit schema versions let clients fail safely. Also, rejected items currently contradict active sections, which creates implementation ambiguity.

```diff
@@ ## 4. Fix Robot-Docs Response Schemas
 "meta": {"elapsed_ms": "int", ...}
+"meta": {"elapsed_ms":"int", ..., "schema_version":"string"}
+
+Schema version policy:
+- bump minor on additive fields
+- bump major on removals/renames
+- expose per-command versions in `robot-docs`
@@ ## Rejected Recommendations
-- Add `gitlab_note_id` to show-command note detail structs ... rejected ...
-- Add `gitlab_discussion_id` to show-command discussion detail structs ... rejected ...
-- Add `gitlab_project_id` to show-command discussion detail structs ... rejected ...
+Remove stale rejected entries that conflict with accepted workstreams in this plan iteration.
```

If you want, I can produce a fully rewritten iteration 5 plan document that applies all of the above edits cleanly end-to-end.


@@ -2,7 +2,7 @@
 plan: true
 title: ""
 status: iterating
-iteration: 4
+iteration: 5
 target_iterations: 8
 beads_revision: 0
 related_plans: []
@@ -52,8 +52,9 @@ output.

### Field Filtering Guardrail

In robot mode, `filter_fields` **MUST** force-include Bridge Contract fields even when the
-caller passes a narrower `--fields` list. This prevents agents from accidentally stripping
-the identifiers they need for write operations.
+caller passes a narrower `--fields` list. This applies at **all nesting levels**: both the
+top-level entity fields and nested sub-entities (e.g., notes inside `discussions --include-notes`).
+This prevents agents from accidentally stripping the identifiers they need for write operations.

**Implementation**: Add a `BRIDGE_FIELDS` constant map per entity type. In `filter_fields()`,
when operating in robot mode, union the caller's requested fields with the bridge set before
@@ -69,70 +70,127 @@ const BRIDGE_FIELDS_DISCUSSIONS: &[&str] = &[
     "project_path", "gitlab_project_id", "noteable_type", "parent_iid",
     "gitlab_discussion_id",
 ];
+// Applied to nested notes within discussions --include-notes
+const BRIDGE_FIELDS_DISCUSSION_NOTES: &[&str] = &[
+    "project_path", "gitlab_project_id", "noteable_type", "parent_iid",
+    "gitlab_discussion_id", "gitlab_note_id",
+];
```

In `filter_fields`, when entity is `"notes"` or `"discussions"`, merge the bridge set into the
-requested fields before filtering the JSON value. This is a ~5-line change to the existing
-function.
+requested fields before filtering the JSON value. For `"discussions"`, also apply
+`BRIDGE_FIELDS_DISCUSSION_NOTES` to each element of the nested `notes` array. This is a ~10-line
+change to the existing function.
|
||||
|
||||
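The union step described above can be sketched with stdlib sets; a minimal sketch, assuming flat string field names (`union_with_bridge_fields` is a hypothetical helper — the real `filter_fields` walks a JSON value):

```rust
use std::collections::HashSet;

// Subset of the bridge set shown above, for illustration.
const BRIDGE_FIELDS_DISCUSSIONS: &[&str] = &[
    "project_path", "gitlab_project_id", "noteable_type", "parent_iid",
    "gitlab_discussion_id",
];

// Hypothetical helper: in robot mode, the caller's --fields list is
// unioned with the bridge set before any key is dropped.
fn union_with_bridge_fields(requested: &[&str], bridge: &[&str]) -> HashSet<String> {
    requested.iter().chain(bridge).map(|s| (*s).to_string()).collect()
}

fn main() {
    // Agent narrows to note_count only; the write identifiers still survive.
    let effective = union_with_bridge_fields(&["note_count"], BRIDGE_FIELDS_DISCUSSIONS);
    assert!(effective.contains("note_count"));
    assert!(effective.contains("gitlab_discussion_id"));
    assert_eq!(effective.len(), 6);
    println!("ok");
}
```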
### Snapshot Consistency (Cross-Cutting)

Multi-query commands (`handle_notes`, `handle_discussions`) **MUST** execute all their queries
within a single deferred read transaction. This guarantees snapshot consistency when a concurrent
sync/ingest is modifying the database.

**Transaction ownership lives in handlers, not query helpers.** Each handler opens one deferred
read transaction and passes it to query helpers. Query helpers accept `&Connection` (which
`Transaction` derefs to via `std::ops::Deref`) so they remain testable with plain connections
in unit tests. This avoids nested transaction edge cases and guarantees a single snapshot across
count + page + include-notes + serialization.

```rust
// In handle_notes / handle_discussions:
let tx = conn.transaction_with_behavior(rusqlite::TransactionBehavior::Deferred)?;
let result = query_notes(&tx, &filters, &config)?;
// ... serialize ...
tx.commit()?; // read-only, but closes cleanly
```

Query helpers keep their `conn: &Connection` signature — `Transaction<'_>` implements
`Deref<Target = Connection>`, so `&tx` coerces to `&Connection` at call sites.
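The coercion relied on above can be demonstrated without rusqlite; a minimal sketch with stand-in types (`Connection`, `Transaction`, and `query_notes` here are illustrative stubs, not lore's or rusqlite's real signatures):

```rust
use std::ops::Deref;

// Stand-ins for rusqlite's types, for illustration only.
struct Connection;
struct Transaction<'a>(&'a Connection);

impl<'a> Deref for Transaction<'a> {
    type Target = Connection;
    fn deref(&self) -> &Connection {
        self.0
    }
}

// Query helper keeps a &Connection signature, so unit tests can pass
// a plain connection while handlers pass a transaction.
fn query_notes(_conn: &Connection) -> usize {
    0
}

fn main() {
    let conn = Connection;
    let tx = Transaction(&conn);
    // &tx coerces to &Connection via Deref at the call site.
    assert_eq!(query_notes(&tx), 0);
    assert_eq!(query_notes(&conn), 0);
    println!("ok");
}
```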
### Ambiguity Guardrail

When filtering by `gitlab_discussion_id` (on either `notes` or `discussions` commands) without
`--project`, if the query matches discussions in multiple projects:
- Return an `Ambiguous` error (exit code 18, matching existing convention)
-- Include matching project paths in the error message
+- Include matching project paths **and `gitlab_project_id`s** in a structured candidates list
- Suggest retry with `--project <path>`

-**Implementation**: Run a **preflight distinct-project check** before the main list query
-executes its `LIMIT`. This is critical because a post-query check on the paginated result set
-can silently miss cross-project ambiguity when `LIMIT` truncates results to rows from a single
-project. The preflight query is cheap (hits the `gitlab_discussion_id` index, returns at most
-a few rows) and eliminates non-deterministic write-targeting risk.
+**Implementation**: Run a **scope-aware preflight distinct-project check** before the main list
+query executes its `LIMIT`. The preflight applies active scope filters (noteable_type, since,
+for_issue/for_mr) alongside the discussion ID check, so it won't produce false ambiguity when
+other filters already narrow to one project. This is critical because a post-query check on the
+paginated result set can silently miss cross-project ambiguity when `LIMIT` truncates results to
+rows from a single project. The preflight query is cheap (hits the `gitlab_discussion_id` index,
+returns at most a few rows) and eliminates non-deterministic write-targeting risk.
```sql
--- Preflight ambiguity check (runs before main query)
-SELECT DISTINCT p.path_with_namespace
+-- Preflight ambiguity check (runs before main query, includes active scope filters)
+SELECT DISTINCT p.path_with_namespace, p.gitlab_project_id
FROM discussions d
JOIN projects p ON p.id = d.project_id
WHERE d.gitlab_discussion_id = ?
+-- scope filters applied dynamically:
+--   AND d.noteable_type = ?               (when --noteable-type present)
+--   AND d.merge_request_id = (SELECT ...)  (when --for-mr present)
+--   AND d.issue_id = (SELECT ...)          (when --for-issue present)
LIMIT 3
```

-If more than one project is found, return `LoreError::Ambiguous` (exit code 18) with the
-distinct project paths and suggestion to retry with `--project <path>`.
+If more than one project is found, return `LoreError::Ambiguous` (exit code 18) with structured
+candidates for machine consumption:
```rust
// In query_notes / query_discussions, before executing the main query:
if let Some(ref disc_id) = filters.gitlab_discussion_id {
    if filters.project.is_none() {
-        let distinct_projects: Vec<String> = conn
+        let candidates: Vec<(String, i64)> = conn
            .prepare(
-                "SELECT DISTINCT p.path_with_namespace \
+                "SELECT DISTINCT p.path_with_namespace, p.gitlab_project_id \
                 FROM discussions d \
                 JOIN projects p ON p.id = d.project_id \
                 WHERE d.gitlab_discussion_id = ? \
                 LIMIT 3"
+                // Note: add scope filter clauses dynamically
            )?
-            .query_map([disc_id], |row| row.get(0))?
+            .query_map([disc_id], |row| Ok((row.get(0)?, row.get(1)?)))?
            .collect::<std::result::Result<Vec<_>, _>>()?;

-        if distinct_projects.len() > 1 {
+        if candidates.len() > 1 {
            return Err(LoreError::Ambiguous {
                message: format!(
-                    "Discussion ID matches {} projects: {}. Use --project to disambiguate.",
-                    distinct_projects.len(),
-                    distinct_projects.join(", ")
+                    "Discussion ID matches {} projects. Use --project to disambiguate.",
+                    candidates.len(),
                ),
+                candidates: candidates.into_iter()
+                    .map(|(path, id)| AmbiguousCandidate { project_path: path, gitlab_project_id: id })
+                    .collect(),
            });
        }
    }
}
```
In robot mode, the error serializes as:
```json
{
  "error": {
    "code": "AMBIGUOUS",
    "message": "Discussion ID matches 2 projects. Use --project to disambiguate.",
    "candidates": [
      {"project_path": "group/repo-a", "gitlab_project_id": 42},
      {"project_path": "group/repo-b", "gitlab_project_id": 99}
    ],
    "suggestion": "lore -J discussions --gitlab-discussion-id <id> --project <path>",
    "actions": ["lore -J discussions --gitlab-discussion-id <id> --project group/repo-a"]
  }
}
```

This gives agents machine-actionable candidates: they can pick a project and retry immediately
without parsing free-text error messages.
#### 1h. Wrap `query_notes` in a read transaction

-Wrap the count query and page query in a deferred read transaction per the Snapshot Consistency
-cross-cutting requirement. See the Bridge Contract section for the pattern.
+Per the Snapshot Consistency cross-cutting requirement, `handle_notes` opens a deferred read
+transaction and passes it to `query_notes`. See the Snapshot Consistency section for the pattern.
### Tests

@@ -337,6 +395,7 @@ fn notes_ambiguous_gitlab_discussion_id_across_projects() {
    // (this can happen since IDs are per-project)
    // Filter by gitlab_discussion_id without --project
    // Assert LoreError::Ambiguous is returned with both project paths
+    // Assert candidates include gitlab_project_id for machine consumption
}
```

@@ -352,6 +411,19 @@ fn notes_ambiguity_preflight_not_defeated_by_limit() {
}
```
#### Test 8: Ambiguity preflight respects scope filters (no false positives)

```rust
#[test]
fn notes_ambiguity_preflight_respects_scope_filters() {
    let conn = create_test_db();
    // Insert 2 projects, each with a discussion sharing the same gitlab_discussion_id
    // But one is Issue-type and the other MergeRequest-type
    // Filter by gitlab_discussion_id + --noteable-type MergeRequest (narrows to 1 project)
    // Assert NO ambiguity error — scope filters disambiguate
}
```

---
## 2. Add `gitlab_discussion_id` to Show Command Discussion Groups

@@ -644,6 +716,9 @@ lore -J discussions --gitlab-discussion-id 6a9c1750b37d

# List unresolved threads with latest 2 notes inline (fewer round-trips)
lore -J discussions --for-mr 99 --resolution unresolved --include-notes 2

+# Find discussions containing specific text
+lore -J discussions --for-mr 99 --contains "prefer the approach"
```

### Response Schema
@@ -801,6 +876,10 @@ pub struct DiscussionsArgs {
    #[arg(long, value_enum, help_heading = "Filters")]
    pub noteable_type: Option<NoteableTypeFilter>,

+    /// Filter discussions whose notes contain text (case-insensitive LIKE match)
+    #[arg(long, help_heading = "Filters")]
+    pub contains: Option<String>,
+
    /// Include up to N latest notes per discussion (0 = none, default; clamped to 20)
    #[arg(long, default_value = "0", help_heading = "Output")]
    pub include_notes: usize,
@@ -925,7 +1004,7 @@ The `included_note_count` is set to `notes.len()` and `has_more_notes` is set to
`note_count > included_note_count` during the JSON conversion, providing per-discussion
truncation signals.
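The conversion rule above reduces to two assignments; a minimal sketch, assuming per-discussion counts are already known (the struct and function names are illustrative, not lore's types):

```rust
// Illustrative mirror of the per-discussion truncation signals.
#[derive(Debug, PartialEq)]
struct TruncationSignals {
    included_note_count: usize,
    has_more_notes: bool,
}

fn truncation_signals(note_count: usize, included_notes: usize) -> TruncationSignals {
    TruncationSignals {
        included_note_count: included_notes,
        has_more_notes: note_count > included_notes,
    }
}

fn main() {
    // 5 notes total, --include-notes 2 returned 2 inline: truncated.
    assert_eq!(
        truncation_signals(5, 2),
        TruncationSignals { included_note_count: 2, has_more_notes: true }
    );
    // All notes included: no truncation flag.
    assert!(!truncation_signals(2, 2).has_more_notes);
    println!("ok");
}
```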
-#### 3c. SQL Query
+#### 3c. SQL Query — Two-Phase Page-First Architecture

**File**: `src/cli/commands/list.rs`
@@ -935,21 +1014,29 @@ pub fn query_discussions(
    filters: &DiscussionListFilters,
    config: &Config,
) -> Result<DiscussionListResult> {
-    // Wrap all queries in a deferred read transaction for snapshot consistency
-    let tx = conn.transaction_with_behavior(rusqlite::TransactionBehavior::Deferred)?;
+    // NOTE: Transaction is managed by the handler (handle_discussions).
+    // This function receives &Connection (which Transaction derefs to via `std::ops::Deref`).

    // Preflight ambiguity check (if gitlab_discussion_id without project)
    // ... see Ambiguity Guardrail section ...

-    // Main query + count query ...
-    // ... note expansion query (if include_notes > 0) ...
-
-    tx.commit()?;
+    // Phase 1: Filter + sort + LIMIT to get page IDs
+    // Phase 2: Note rollups only for paged results
+    // Phase 3: Optional --include-notes expansion (separate query)
}
```

-Core query uses a CTE + ranked-notes rollup (window function) to avoid per-row correlated
-subqueries. The `ROW_NUMBER()` approach produces a single scan over the notes table, which
-is more predictable than repeated LIMIT 1 sub-selects at scale (200K+ discussions):
+The query uses a **two-phase page-first architecture** for scalability:

1. **Phase 1** (`paged_discussions`): Apply all filters, sort, and LIMIT to produce just the
   discussion IDs for the current page. This bounds the result set before any note scanning.
2. **Phase 2** (`ranked_notes` + `note_rollup`): Run note aggregation only for the paged
   discussion IDs. This ensures note scanning is proportional to `--limit`, not to the total
   filtered discussion count.

This architecture prevents the performance cliff that occurs on project-wide queries with
thousands of discussions: instead of scanning notes for all filtered discussions (potentially
200K+), we scan only for the 50 (or whatever `--limit` is) that will actually be returned.
```sql
WITH filtered_discussions AS (
@@ -961,6 +1048,14 @@ WITH filtered_discussions AS (
    JOIN projects p ON d.project_id = p.id
    {where_sql}
),
+-- Phase 1: Page-first — apply sort + LIMIT before note scanning
+paged_discussions AS (
+    SELECT id
+    FROM filtered_discussions
+    ORDER BY COALESCE({sort_column}, 0) {order}, id {order}
+    LIMIT ?
+),
+-- Phase 2: Note rollups only for paged results
ranked_notes AS (
    SELECT
        n.discussion_id,
@@ -980,7 +1075,7 @@ ranked_notes AS (
            n.created_at, n.id
        ) AS rn_first_position
    FROM notes n
-    WHERE n.discussion_id IN (SELECT id FROM filtered_discussions)
+    WHERE n.discussion_id IN (SELECT id FROM paged_discussions)
),
note_rollup AS (
    SELECT
@@ -1012,12 +1107,12 @@ SELECT
    nr.position_new_path,
    nr.position_new_line
FROM filtered_discussions fd
+JOIN paged_discussions pd ON fd.id = pd.id
JOIN projects p ON fd.project_id = p.id
LEFT JOIN issues i ON fd.issue_id = i.id
LEFT JOIN merge_requests m ON fd.merge_request_id = m.id
LEFT JOIN note_rollup nr ON nr.discussion_id = fd.id
ORDER BY COALESCE({sort_column}, 0) {order}, fd.id {order}
LIMIT ?
```
**Dual window function rationale**: The `ranked_notes` CTE uses two separate `ROW_NUMBER()`
@@ -1028,12 +1123,11 @@ displacing the first human author/body, and prevents a non-positioned note from
the file location. The `MAX(CASE WHEN rn_xxx = 1 ...)` pattern extracts the correct value
from each independently-ranked sequence.

-**Performance rationale**: The CTE pre-filters discussions before joining notes. The
-`ranked_notes` CTE uses `ROW_NUMBER()` (a single pass over the notes index) instead of
-correlated `(SELECT ... LIMIT 1)` sub-selects per discussion. For MR-scoped queries
-(50-200 discussions) the performance is equivalent. For project-wide scans with thousands
-of discussions, the window function approach avoids repeated index probes and produces a
-more predictable query plan.
+**Page-first scalability rationale**: The `paged_discussions` CTE applies LIMIT before note
+scanning. For MR-scoped queries (50-200 discussions) the performance is equivalent to the
+non-paged approach. For project-wide scans with thousands of discussions, the page-first
+architecture avoids scanning notes for discussions that won't appear in the result, keeping
+latency proportional to `--limit` rather than to the total filtered count.

**Note on ordering**: The `COALESCE({sort_column}, 0)` with tiebreaker `fd.id` ensures
deterministic ordering even when timestamps are NULL (partial sync states). The `id`
@@ -1042,6 +1136,10 @@ tiebreaker is cheap (primary key) and prevents unstable sort output.
**Note on SQLite FILTER syntax**: SQLite supports `COUNT(*) FILTER (WHERE ...)` only from
version 3.30. For compatibility with older linked versions, use
`SUM(CASE WHEN ... THEN 1 ELSE 0 END)` instead (as shown above).
**Count query**: The total_count query runs separately against `filtered_discussions` (without
the LIMIT) using `SELECT COUNT(*) FROM filtered_discussions`. This is needed for `has_more`
metadata. The count uses the same filter CTEs but omits notes entirely.
#### 3c-ii. Note expansion query (--include-notes)

When `include_notes > 0`, after the main discussion query, run a **single batched query**
@@ -1103,6 +1201,7 @@ pub struct DiscussionListFilters {
    pub since: Option<String>,
    pub path: Option<String>,
    pub noteable_type: Option<NoteableTypeFilter>,
+    pub contains: Option<String>,
    pub sort: DiscussionSortField,
    pub order: SortDirection,
    pub include_notes: usize,
@@ -1117,6 +1216,7 @@ Where-clause construction uses `match` on typed enums — never raw string interpolation
- `since` → `d.first_note_at >= ?` (using `parse_since()`)
- `path` → `EXISTS (SELECT 1 FROM notes n WHERE n.discussion_id = d.id AND n.position_new_path LIKE ?)`
- `noteable_type` → match: `Issue` → `d.noteable_type = 'Issue'`, `MergeRequest` → `d.noteable_type = 'MergeRequest'`
+- `contains` → `EXISTS (SELECT 1 FROM notes n WHERE n.discussion_id = d.id AND n.body LIKE '%' || ? || '%')`
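The mapping above can be sketched as a pure match on the typed enum; a minimal sketch (the enum variants follow the plan, but `noteable_type_clause` is a hypothetical helper name):

```rust
// Typed filter mirroring the plan's NoteableTypeFilter; the match returns
// a fixed SQL fragment, so no caller-supplied string reaches the query text.
enum NoteableTypeFilter {
    Issue,
    MergeRequest,
}

fn noteable_type_clause(filter: &NoteableTypeFilter) -> &'static str {
    match filter {
        NoteableTypeFilter::Issue => "d.noteable_type = 'Issue'",
        NoteableTypeFilter::MergeRequest => "d.noteable_type = 'MergeRequest'",
    }
}

fn main() {
    assert_eq!(
        noteable_type_clause(&NoteableTypeFilter::MergeRequest),
        "d.noteable_type = 'MergeRequest'"
    );
    println!("ok");
}
```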
#### 3e. Handler wiring

@@ -1128,7 +1228,7 @@ Add match arm:
Some(Commands::Discussions(args)) => handle_discussions(cli.config.as_deref(), args, robot_mode),
```

-Handler function:
+Handler function (with transaction ownership):
```rust
fn handle_discussions(
@@ -1143,6 +1243,10 @@ fn handle_discussions(

    let effective_limit = args.limit.min(500);
    let effective_include_notes = args.include_notes.min(20);

+    // Snapshot consistency: one transaction across all queries
+    let tx = conn.transaction_with_behavior(rusqlite::TransactionBehavior::Deferred)?;
+
    let filters = DiscussionListFilters {
        limit: effective_limit,
        project: args.project,
@@ -1153,12 +1257,15 @@ fn handle_discussions(
        since: args.since,
        path: args.path,
        noteable_type: args.noteable_type,
+        contains: args.contains,
        sort: args.sort,
        order: args.order,
        include_notes: effective_include_notes,
    };

-    let result = query_discussions(&conn, &filters, &config)?;
+    let result = query_discussions(&tx, &filters, &config)?;

+    tx.commit()?; // read-only, but closes cleanly
+
    let format = if robot_mode && args.format == "table" {
        "json"
@@ -1247,7 +1354,7 @@ CSV view: all fields, following same pattern as `print_list_notes_csv`.
        .collect(),
```

-#### 3h. Query-plan validation
+#### 3h. Query-plan validation and indexes
Before merging the discussions command, capture `EXPLAIN QUERY PLAN` output for the three
primary query patterns:
@@ -1255,17 +1362,26 @@ primary query patterns:
- `--project <path> --since 7d --sort last-note`
- `--gitlab-discussion-id <id>`

-If plans show table scans on `notes` or `discussions` for these patterns, add targeted indexes
-to the `MIGRATIONS` array in `src/core/db.rs`:
+**Required baseline index** (directly hit by `--include-notes` expansion, which runs a
+`ROW_NUMBER() OVER (PARTITION BY discussion_id ORDER BY created_at DESC, id DESC)` window
+on the notes table):

-**Candidate indexes** (add only if EXPLAIN QUERY PLAN shows they're needed):
```sql
CREATE INDEX IF NOT EXISTS idx_notes_discussion_created_desc
    ON notes(discussion_id, created_at DESC, id DESC);
```

This index is non-negotiable because the include-notes expansion query's performance is
directly proportional to how efficiently it can scan notes per discussion. Without it, SQLite
falls back to a full table scan of the 282K-row notes table for each batch.

+**Conditional indexes** (add only if EXPLAIN QUERY PLAN shows they're needed):
- `discussions(project_id, gitlab_discussion_id)` — for ambiguity preflight + direct ID lookup
- `discussions(merge_request_id, last_note_at, id)` — for MR-scoped + sorted queries
-- `notes(discussion_id, created_at DESC, id DESC)` — for `--include-notes` expansion
- `notes(discussion_id, is_system, created_at, id)` — for ranked_notes CTE ordering

-This is a measured approach: profile first, add indexes only where the query plan demands them.
-No speculative index creation.
+This is a measured approach: one required index for the critical new path, remaining indexes
+added only where the query plan demands them.
### Tests

@@ -1500,7 +1616,7 @@ fn discussions_ambiguous_gitlab_discussion_id_across_projects() {
    };
    let result = query_discussions(&conn, &filters, &Config::default());
    assert!(result.is_err());
-    // Error should be Ambiguous with both project paths
+    // Error should be Ambiguous with both project paths and gitlab_project_ids
}
```

@@ -1579,6 +1695,99 @@ fn discussions_first_note_rollup_skips_system_notes() {
}
```
#### Test 15: --contains filter returns matching discussions

```rust
#[test]
fn query_discussions_contains_filter() {
    let conn = create_test_db();
    insert_project(&conn, 1);
    insert_mr(&conn, 1, 1, 99, "Test MR");
    insert_discussion(&conn, 1, "disc-match", 1, None, Some(1), "MergeRequest");
    insert_discussion(&conn, 2, "disc-nomatch", 1, None, Some(1), "MergeRequest");
    insert_note_in_discussion(&conn, 1, 500, 1, 1, "alice", "I really do prefer this approach");
    insert_note_in_discussion(&conn, 2, 501, 2, 1, "bob", "Looks good to me");

    let filters = DiscussionListFilters {
        contains: Some("really do prefer".to_string()),
        ..DiscussionListFilters::default_for_mr(99)
    };
    let result = query_discussions(&conn, &filters, &Config::default()).unwrap();

    assert_eq!(result.discussions.len(), 1);
    assert_eq!(result.discussions[0].gitlab_discussion_id, "disc-match");
}
```
#### Test 16: Nested note bridge fields survive --fields filtering in robot mode

```rust
#[test]
fn discussions_nested_note_bridge_fields_forced_in_robot_mode() {
    // When discussions --include-notes returns nested notes,
    // bridge fields on nested notes must survive --fields filtering
    let mut value = serde_json::json!({
        "data": {
            "discussions": [{
                "gitlab_discussion_id": "abc",
                "noteable_type": "MergeRequest",
                "parent_iid": 99,
                "project_path": "group/repo",
                "gitlab_project_id": 42,
                "note_count": 1,
                "notes": [{
                    "body": "test note",
                    "project_path": "group/repo",
                    "gitlab_project_id": 42,
                    "noteable_type": "MergeRequest",
                    "parent_iid": 99,
                    "gitlab_discussion_id": "abc",
                    "gitlab_note_id": 500
                }]
            }]
        }
    });

    // Agent requests only "note_count" — bridge fields must still appear
    filter_fields_robot(
        &mut value,
        "discussions",
        &["note_count".to_string()],
    );

    let note = &value["data"]["discussions"][0]["notes"][0];
    assert!(note.get("gitlab_discussion_id").is_some());
    assert!(note.get("gitlab_note_id").is_some());
    assert!(note.get("gitlab_project_id").is_some());
}
```
#### Test 17: Ambiguity preflight respects scope filters (no false positives)

```rust
#[test]
fn discussions_ambiguity_preflight_respects_scope_filters() {
    let conn = create_test_db();
    insert_project(&conn, 1); // "group/repo-a"
    insert_project(&conn, 2); // "group/repo-b"
    // Same gitlab_discussion_id in both projects
    // But different noteable_types
    insert_discussion(&conn, 1, "shared-id", 1, Some(1), None, "Issue");
    insert_discussion(&conn, 2, "shared-id", 2, None, Some(1), "MergeRequest");

    // Filter by noteable_type narrows to one project — should NOT fire ambiguity
    let filters = DiscussionListFilters {
        gitlab_discussion_id: Some("shared-id".to_string()),
        noteable_type: Some(NoteableTypeFilter::MergeRequest),
        project: None,
        ..DiscussionListFilters::default()
    };
    let result = query_discussions(&conn, &filters, &Config::default());
    assert!(result.is_ok());
    assert_eq!(result.unwrap().discussions.len(), 1);
}
```

---
## 4. Fix Robot-Docs Response Schemas

@@ -1629,6 +1838,7 @@ With:
    "--since <period>",
    "--path <filepath>",
    "--noteable-type <Issue|MergeRequest>",
+    "--contains <text>",
    "--include-notes <N>",
    "--sort <first-note|last-note>",
    "--order <asc|desc>",
@@ -1831,14 +2041,13 @@ Changes 1 and 2 can be done in parallel. Change 4 must come last since it documents the
final schema of all preceding changes.

**Cross-cutting**: The Bridge Contract field guardrail (force-including bridge fields in robot
-mode) should be implemented as part of Change 1, since it modifies `filter_fields` in
-`robot.rs` which all subsequent changes depend on. The `BRIDGE_FIELDS_*` constants are defined
-once and reused by Changes 3 and 4.
+mode, including nested notes) should be implemented as part of Change 1, since it modifies
+`filter_fields` in `robot.rs` which all subsequent changes depend on. The `BRIDGE_FIELDS_*`
+constants are defined once and reused by Changes 3 and 4.

-**Cross-cutting**: The snapshot consistency pattern (deferred read transaction) should be
-implemented in Change 1 for `query_notes` and carried forward to Change 3 for
-`query_discussions`. This is a one-line wrapper that provides correctness guarantees with
-zero performance cost.
+**Cross-cutting**: The snapshot consistency pattern (deferred read transaction in handlers)
+should be implemented in Change 1 for `handle_notes` and carried forward to Change 3 for
+`handle_discussions`. Transaction ownership lives in handlers; query helpers accept `&Connection`.

---
@@ -1850,40 +2059,52 @@ After all changes:
   `gitlab_discussion_id`, `gitlab_note_id`, and `gitlab_project_id` in the response
2. An agent can run `lore -J discussions --for-mr 3929 --resolution unresolved` to see all
   open threads with their IDs
-3. An agent can run `lore -J mrs 3929` and see `gitlab_discussion_id`, `resolvable`,
+3. An agent can run `lore -J discussions --for-mr 3929 --contains "prefer the approach"` to
+   find threads by text content without a separate `notes` round-trip
+4. An agent can run `lore -J mrs 3929` and see `gitlab_discussion_id`, `resolvable`,
   `resolved`, and `last_note_at_iso` on each discussion group, plus `gitlab_note_id` on
   each note within
-4. `lore robot-docs` lists actual field names for all commands
-5. All existing tests still pass
-6. No clippy warnings (pedantic + nursery)
-7. Robot-docs contract tests pass with field-set parity (not just string-contains), preventing
+5. `lore robot-docs` lists actual field names for all commands
+6. All existing tests still pass
+7. No clippy warnings (pedantic + nursery)
+8. Robot-docs contract tests pass with field-set parity (not just string-contains), preventing
   future schema drift in both directions
-8. Bridge Contract fields (`project_path`, `gitlab_project_id`, `noteable_type`, `parent_iid`,
+9. Bridge Contract fields (`project_path`, `gitlab_project_id`, `noteable_type`, `parent_iid`,
   `gitlab_discussion_id`, `gitlab_note_id`) are present in every applicable read payload
-9. Bridge Contract fields survive `--fields` filtering in robot mode (guardrail enforced)
-10. `--gitlab-discussion-id` filter works on both `notes` and `discussions` commands
-11. `--include-notes N` populates inline notes on `discussions` output via single batched query
-12. CLI-level contract integration tests verify bridge fields through the full handler path
-13. `gitlab_note_id` is available in notes list output (alongside `gitlab_id` for back-compat)
+10. Bridge Contract fields survive `--fields` filtering in robot mode (guardrail enforced),
+    including nested notes within `discussions --include-notes`
+11. `--gitlab-discussion-id` filter works on both `notes` and `discussions` commands
+12. `--include-notes N` populates inline notes on `discussions` output via single batched query
+13. CLI-level contract integration tests verify bridge fields through the full handler path
+14. `gitlab_note_id` is available in notes list output (alongside `gitlab_id` for back-compat)
    and in show detail notes, providing a uniform field name across all commands
-14. Ambiguity guardrail fires when `--gitlab-discussion-id` matches multiple projects without
+15. Ambiguity guardrail fires when `--gitlab-discussion-id` matches multiple projects without
    `--project` specified — **including when LIMIT would have hidden the ambiguity** (preflight
-    query runs before LIMIT)
-15. Output guardrails clamp `--limit` to 500 and `--include-notes` to 20; `meta` reports
+    query runs before LIMIT). Error includes structured candidates with `gitlab_project_id`
+    for machine consumption
+16. Ambiguity preflight is scope-aware: active filters (noteable_type, for_issue/for_mr) are
+    applied alongside the discussion ID check, preventing false ambiguity when scope already
+    narrows to one project
+17. Output guardrails clamp `--limit` to 500 and `--include-notes` to 20; `meta` reports
    effective values and `has_more` truncation flag
-16. Discussion and show queries use deterministic ordering (COALESCE + id tiebreaker) to
+18. Discussion and show queries use deterministic ordering (COALESCE + id tiebreaker) to
    prevent unstable output during partial sync states
-17. Per-discussion truncation signals (`included_note_count`, `has_more_notes`) are accurate
+19. Per-discussion truncation signals (`included_note_count`, `has_more_notes`) are accurate
    for `--include-notes` output
-18. Multi-query commands (`query_notes`, `query_discussions`) use deferred read transactions
-    for snapshot consistency during concurrent ingest
-19. Discussion filters (`resolution`, `noteable_type`, `sort`, `order`) use typed enums
+20. Multi-query handlers (`handle_notes`, `handle_discussions`) open deferred read transactions;
+    query helpers accept `&Connection` for snapshot consistency and testability
+21. Discussion filters (`resolution`, `noteable_type`, `sort`, `order`) use typed enums
    with match-to-SQL mapping — no raw string interpolation in query construction
-20. First-note rollup correctly handles discussions with leading system notes — `first_author`
+22. First-note rollup correctly handles discussions with leading system notes — `first_author`
    and `first_note_body_snippet` always reflect the first non-system note
-21. Query plans for primary discussion query patterns (`--for-mr`, `--project --since`,
+23. Query plans for primary discussion query patterns (`--for-mr`, `--project --since`,
    `--gitlab-discussion-id`) have been validated via EXPLAIN QUERY PLAN; targeted indexes
    added only where scans were observed
+24. The `notes(discussion_id, created_at DESC, id DESC)` index is present for `--include-notes`
+    expansion performance
+25. Discussion query uses page-first CTE architecture: note rollups scan only the paged result
+    set, not all filtered discussions, keeping latency proportional to `--limit`
+26. `--contains` filter on `discussions` returns only discussions with matching note text

---
@@ -1902,6 +2123,6 @@ After all changes:

- **`--with-write-hints` flag for inline glab endpoint templates** — rejected because this couples lore's read surface to glab's API surface, violating the read/write split principle. The Bridge Contract gives agents the raw identifiers; constructing glab commands is the agent's responsibility. Adding endpoint templates would require lore to track glab API changes, creating an unnecessary maintenance burden.
- **Show-command note ordering change (`ORDER BY COALESCE(position, ...), created_at, id`)** — rejected because show-command note ordering within a discussion thread is out of scope for this plan. The existing ordering works correctly for present data; the defensive COALESCE pattern is applied to discussion-level ordering where it matters for agent workflows.
- **Query-plan validation as a separate numbered workstream** — rejected because it adds delivery overhead without proportional benefit. Query-plan validation is integrated into workstream 3 as a pre-merge validation step (section 3h), with candidate indexes listed but only added when EXPLAIN QUERY PLAN shows they're needed. This keeps the measured approach without inflating the workstream count.
- **Add `gitlab_note_id` to show-command note detail structs** — rejected because show-command note detail structs already have `gitlab_id` (same value as `id`). The field is unambiguous and consistent with the Bridge Contract. Adding `gitlab_note_id` would create a duplicate and increase payload size without benefit.
- **Add `gitlab_discussion_id` to show-command discussion detail structs** — rejected because show-command discussion detail structs already have `gitlab_discussion_id`. The field is unambiguous and consistent with the Bridge Contract. Adding `gitlab_discussion_id` would create a duplicate and increase payload size without benefit.
- **Add `gitlab_project_id` to show-command discussion detail structs** — rejected because show-command discussion detail structs already have `gitlab_project_id`. The field is unambiguous and consistent with the Bridge Contract. Adding `gitlab_project_id` would create a duplicate and increase payload size without benefit.
- **`--project-id` immutable input filter across notes/discussions/show** — rejected because this is scope creep touching every command and changing CLI ergonomics. Agents already get `gitlab_project_id` in output to construct API calls; the input-side concern (project renames breaking `--project`) is theoretical and hasn't been observed in practice. The `--project` flag already supports fuzzy matching which handles most rename scenarios. If real-world evidence surfaces, this can be added later without breaking changes.
- **Schema versioning in robot-docs (`schema_version` field + semver policy)** — rejected because this tool has zero external consumers beyond our own agents, and the contract tests (field-set parity assertions) catch drift at compile time. Schema versioning adds bureaucratic overhead (version bumps, compatibility matrices, deprecation policies) without proportional benefit for an internal tool in early development. If lore gains external consumers, this can be reconsidered.
- **Remove "stale" rejected items that "conflict" with active sections** — rejected because the prior entries about show-command structs were stale from iteration 2 and have been cleaned up independently. The rejected section is cumulative by design — it prevents future reviewers from re-proposing changes that have already been evaluated.

@@ -19,3 +19,6 @@ CREATE INDEX IF NOT EXISTS idx_discussions_mr_id ON discussions(merge_request_id
-- Immutable author identity column (GitLab numeric user ID)
ALTER TABLE notes ADD COLUMN author_id INTEGER;
CREATE INDEX IF NOT EXISTS idx_notes_author_id ON notes(author_id) WHERE author_id IS NOT NULL;

INSERT INTO schema_version (version, applied_at, description)
VALUES (22, strftime('%s', 'now') * 1000, '022_notes_query_index');

@@ -151,3 +151,6 @@ END;

DROP TABLE IF EXISTS _doc_labels_backup;
DROP TABLE IF EXISTS _doc_paths_backup;

INSERT INTO schema_version (version, applied_at, description)
VALUES (24, strftime('%s', 'now') * 1000, '024_note_documents');

@@ -6,3 +6,6 @@ FROM notes n
LEFT JOIN documents d ON d.source_type = 'note' AND d.source_id = n.id
WHERE n.is_system = 0 AND d.id IS NULL
ON CONFLICT(source_type, source_id) DO NOTHING;

INSERT INTO schema_version (version, applied_at, description)
VALUES (25, strftime('%s', 'now') * 1000, '025_note_dirty_backfill');

@@ -18,3 +18,6 @@ CREATE INDEX IF NOT EXISTS idx_notes_diffnote_discussion_author
CREATE INDEX IF NOT EXISTS idx_notes_old_path_project_created
ON notes(position_old_path, project_id, created_at)
WHERE note_type = 'DiffNote' AND is_system = 0 AND position_old_path IS NOT NULL;

INSERT INTO schema_version (version, applied_at, description)
VALUES (26, strftime('%s', 'now') * 1000, '026_scoring_indexes');

23  migrations/027_surgical_sync_runs.sql  Normal file
@@ -0,0 +1,23 @@
-- Migration 027: Extend sync_runs for surgical sync observability
-- Adds mode/phase tracking and surgical-specific counters.

ALTER TABLE sync_runs ADD COLUMN mode TEXT;
ALTER TABLE sync_runs ADD COLUMN phase TEXT;
ALTER TABLE sync_runs ADD COLUMN surgical_iids_json TEXT;
ALTER TABLE sync_runs ADD COLUMN issues_fetched INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN mrs_fetched INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN issues_ingested INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN mrs_ingested INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN skipped_stale INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN docs_regenerated INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN docs_embedded INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN warnings_count INTEGER NOT NULL DEFAULT 0;
ALTER TABLE sync_runs ADD COLUMN cancelled_at INTEGER;

CREATE INDEX IF NOT EXISTS idx_sync_runs_mode_started
ON sync_runs(mode, started_at DESC);
CREATE INDEX IF NOT EXISTS idx_sync_runs_status_phase_started
ON sync_runs(status, phase, started_at DESC);

INSERT INTO schema_version (version, applied_at, description)
VALUES (27, strftime('%s', 'now') * 1000, '027_surgical_sync_runs');
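The `idx_sync_runs_mode_started` index above exists to make "latest run of a given mode" lookups an ordered index walk rather than a sort over the whole history. A sketch in Python's stdlib `sqlite3` using the migration's real column names but a reduced version of the `sync_runs` table (only the columns needed for the query; the other counters are omitted):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sync_runs (id INTEGER PRIMARY KEY, started_at INTEGER, status TEXT,
                        mode TEXT, phase TEXT);
CREATE INDEX idx_sync_runs_mode_started ON sync_runs(mode, started_at DESC);
""")
runs = [
    (1, 100, "ok", "full", None),
    (2, 200, "ok", "surgical", "ingest"),
    (3, 300, "running", "surgical", "embed"),
]
conn.executemany("INSERT INTO sync_runs VALUES (?, ?, ?, ?, ?)", runs)

# Latest surgical run: the (mode, started_at DESC) index serves both the
# equality filter and the ordering, so no sort pass is needed.
row = conn.execute("""
SELECT id, phase, status FROM sync_runs
WHERE mode = 'surgical'
ORDER BY started_at DESC LIMIT 1
""").fetchone()
print(row)  # (3, 'embed', 'running')
```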
@@ -25,6 +25,7 @@ pub enum CorrectionRule {
    ValueNormalization,
    ValueFuzzy,
    FlagPrefix,
    NoColorExpansion,
}

/// Result of the correction pass over raw args.
@@ -128,6 +129,11 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
            "--dry-run",
            "--no-dry-run",
            "--timings",
            "--lock",
            "--issue",
            "--mr",
            "--project",
            "--preflight-only",
        ],
    ),
    (
@@ -193,6 +199,7 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
            "--as-of",
            "--explain-score",
            "--include-bots",
            "--include-closed",
            "--all-history",
        ],
    ),
@@ -202,7 +209,6 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
        &[
            "--limit",
            "--fields",
            "--format",
            "--author",
            "--note-type",
            "--contains",
@@ -280,6 +286,19 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
    ),
    ("show", &["--project"]),
    ("reset", &["--yes"]),
    (
        "me",
        &[
            "--issues",
            "--mrs",
            "--activity",
            "--since",
            "--project",
            "--all",
            "--user",
            "--fields",
        ],
    ),
];

/// Valid values for enum-like flags, used for post-clap error enhancement.
@@ -423,9 +442,21 @@ pub fn correct_args(raw: Vec<String>, strict: bool) -> CorrectionResult {
        }

        if let Some(fixed) = try_correct(&arg, &valid, strict) {
            if fixed.rule == CorrectionRule::NoColorExpansion {
                // Expand --no-color → --color never
                corrections.push(Correction {
                    original: fixed.original,
                    corrected: "--color never".to_string(),
                    rule: CorrectionRule::NoColorExpansion,
                    confidence: 1.0,
                });
                corrected.push("--color".to_string());
                corrected.push("never".to_string());
            } else {
                let s = fixed.corrected.clone();
                corrections.push(fixed);
                corrected.push(s);
            }
        } else {
            corrected.push(arg);
        }
@@ -610,12 +641,27 @@ const CLAP_BUILTINS: &[&str] = &["--help", "--version"];
///
/// When `strict` is true, fuzzy matching is disabled — only deterministic
/// corrections (single-dash fix, case normalization) are applied.
///
/// Special case: `--no-color` is rewritten to `--color never` by returning
/// the `--color` correction and letting the caller handle arg insertion.
/// However, since we correct one arg at a time, we use `NoColorExpansion`
/// to signal that the next phase should insert `never` after this arg.
fn try_correct(arg: &str, valid_flags: &[&str], strict: bool) -> Option<Correction> {
    // Only attempt correction on flag-like args (starts with `-`)
    if !arg.starts_with('-') {
        return None;
    }

    // Special case: --no-color → --color never (common agent/user expectation)
    if arg.eq_ignore_ascii_case("--no-color") {
        return Some(Correction {
            original: arg.to_string(),
            corrected: "--no-color".to_string(), // sentinel; expanded in correct_args
            rule: CorrectionRule::NoColorExpansion,
            confidence: 1.0,
        });
    }

    // B2: Never correct clap built-in flags (--help, --version)
    let flag_part_for_builtin = if let Some(eq_pos) = arg.find('=') {
        &arg[..eq_pos]
@@ -765,9 +811,21 @@ fn try_correct(arg: &str, valid_flags: &[&str], strict: bool) -> Option<Correcti
}

/// Find the best fuzzy match among valid flags for a given (lowercased) input.
///
/// Applies a length guard to prevent short candidates (e.g. `--for`, 5 chars
/// including dashes) from inflating Jaro-Winkler scores against long inputs.
/// When the input is more than 40% longer than a candidate, that candidate is
/// excluded from fuzzy consideration (it can still match via prefix rule).
fn best_fuzzy_match<'a>(input: &str, valid_flags: &[&'a str]) -> Option<(&'a str, f64)> {
    valid_flags
        .iter()
        .filter(|&&flag| {
            // Guard: skip short candidates when input is much longer.
            // e.g. "--foobar" (8 chars) should not fuzzy-match "--for" (5 chars)
            // Ratio: input must be within 1.4x the candidate length.
            let max_input_len = (flag.len() as f64 * 1.4) as usize;
            input.len() <= max_input_len
        })
        .map(|&flag| (flag, jaro_winkler(input, flag)))
        .max_by(|a, b| a.1.partial_cmp(&b.1).unwrap_or(std::cmp::Ordering::Equal))
}
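The guard in `best_fuzzy_match` is plain length arithmetic, independent of the Jaro-Winkler scoring itself. A sketch of just the guard logic (the helper name `passes_length_guard` is hypothetical, not part of the codebase):

```python
def passes_length_guard(input_flag: str, candidate: str) -> bool:
    # Mirror of the Rust guard: a candidate is excluded from fuzzy
    # consideration when the input is more than 40% longer than it.
    max_input_len = int(len(candidate) * 1.4)
    return len(input_flag) <= max_input_len

# "--foobar" (8 chars) vs "--for" (5 chars): 8 > 7, so excluded
print(passes_length_guard("--foobar", "--for"))  # False
# "--fro" (5 chars) vs "--for" (5 chars): 5 <= 7, still a candidate
print(passes_length_guard("--fro", "--for"))     # True
```

This is exactly the behavior the two tests below (`foobar_does_not_match_for`, `fro_still_matches_for`) pin down.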
@@ -845,6 +903,9 @@ pub fn format_teaching_note(correction: &Correction) -> String {
                correction.corrected, correction.original
            )
        }
        CorrectionRule::NoColorExpansion => {
            "Use `--color never` instead of `--no-color`".to_string()
        }
    }
}

@@ -1285,6 +1346,53 @@ mod tests {
        assert!(note.contains("full flag name"));
    }

    // ---- --no-color expansion ----

    #[test]
    fn no_color_expands_to_color_never() {
        let result = correct_args(args("lore --no-color health"), false);
        assert_eq!(result.corrections.len(), 1);
        assert_eq!(result.corrections[0].rule, CorrectionRule::NoColorExpansion);
        assert_eq!(result.args, args("lore --color never health"));
    }

    #[test]
    fn no_color_case_insensitive() {
        let result = correct_args(args("lore --No-Color issues"), false);
        assert_eq!(result.corrections.len(), 1);
        assert_eq!(result.args, args("lore --color never issues"));
    }

    #[test]
    fn no_color_with_robot_mode() {
        let result = correct_args(args("lore --robot --no-color health"), true);
        assert_eq!(result.corrections.len(), 1);
        assert_eq!(result.args, args("lore --robot --color never health"));
    }

    // ---- Fuzzy matching length guard ----

    #[test]
    fn foobar_does_not_match_for() {
        // --foobar (8 chars) should NOT fuzzy-match --for (5 chars)
        let result = correct_args(args("lore count --foobar issues"), false);
        assert!(
            !result.corrections.iter().any(|c| c.corrected == "--for"),
            "expected --foobar not to match --for"
        );
    }

    #[test]
    fn fro_still_matches_for() {
        // --fro (5 chars) is short enough to fuzzy-match --for (5 chars)
        // and also qualifies as a prefix match
        let result = correct_args(args("lore count --fro issues"), false);
        assert!(
            result.corrections.iter().any(|c| c.corrected == "--for"),
            "expected --fro to match --for"
        );
    }

    // ---- Post-clap suggestion helpers ----

    #[test]
@@ -1,5 +1,5 @@
use crate::core::config::Config;
use crate::core::error::{LoreError, Result};
use crate::core::error::Result;
use crate::gitlab::GitLabClient;

pub struct AuthTestResult {
@@ -11,17 +11,7 @@ pub struct AuthTestResult {
pub async fn run_auth_test(config_path: Option<&str>) -> Result<AuthTestResult> {
    let config = Config::load(config_path)?;

    let token = std::env::var(&config.gitlab.token_env_var)
        .map(|t| t.trim().to_string())
        .map_err(|_| LoreError::TokenNotSet {
            env_var: config.gitlab.token_env_var.clone(),
        })?;

    if token.is_empty() {
        return Err(LoreError::TokenNotSet {
            env_var: config.gitlab.token_env_var.clone(),
        });
    }
    let token = config.gitlab.resolve_token()?;

    let client = GitLabClient::new(&config.gitlab.base_url, &token, None);

@@ -257,7 +257,10 @@ pub fn print_event_count_json(counts: &EventCounts, elapsed_ms: u64) {
        meta: RobotMeta { elapsed_ms },
    };

    println!("{}", serde_json::to_string(&output).unwrap());
    match serde_json::to_string(&output) {
        Ok(json) => println!("{json}"),
        Err(e) => eprintln!("Error serializing to JSON: {e}"),
    }
}

pub fn print_event_count(counts: &EventCounts) {
@@ -325,7 +328,10 @@ pub fn print_count_json(result: &CountResult, elapsed_ms: u64) {
        meta: RobotMeta { elapsed_ms },
    };

    println!("{}", serde_json::to_string(&output).unwrap());
    match serde_json::to_string(&output) {
        Ok(json) => println!("{json}"),
        Err(e) => eprintln!("Error serializing to JSON: {e}"),
    }
}

pub fn print_count(result: &CountResult) {

292  src/cli/commands/cron.rs  Normal file
@@ -0,0 +1,292 @@
use serde::Serialize;

use crate::Config;
use crate::cli::render::Theme;
use crate::cli::robot::RobotMeta;
use crate::core::cron::{
    CronInstallResult, CronStatusResult, CronUninstallResult, cron_status, install_cron,
    uninstall_cron,
};
use crate::core::db::create_connection;
use crate::core::error::Result;
use crate::core::paths::get_db_path;
use crate::core::time::ms_to_iso;

// ── install ──

pub fn run_cron_install(interval_minutes: u32) -> Result<CronInstallResult> {
    install_cron(interval_minutes)
}

pub fn print_cron_install(result: &CronInstallResult) {
    if result.replaced {
        println!(
            "  {} cron entry updated (was already installed)",
            Theme::success().render("Updated")
        );
    } else {
        println!(
            "  {} cron entry installed",
            Theme::success().render("Installed")
        );
    }
    println!();
    println!("  {} {}", Theme::dim().render("entry:"), result.entry);
    println!(
        "  {} every {} minutes",
        Theme::dim().render("interval:"),
        result.interval_minutes
    );
    println!(
        "  {} {}",
        Theme::dim().render("log:"),
        result.log_path.display()
    );

    if cfg!(target_os = "macos") {
        println!();
        println!(
            "  {} On macOS, the terminal running cron may need",
            Theme::warning().render("Note:")
        );
        println!("  Full Disk Access in System Settings > Privacy & Security.");
    }
    println!();
}

#[derive(Serialize)]
struct CronInstallJson {
    ok: bool,
    data: CronInstallData,
    meta: RobotMeta,
}

#[derive(Serialize)]
struct CronInstallData {
    action: &'static str,
    entry: String,
    interval_minutes: u32,
    log_path: String,
    replaced: bool,
}

pub fn print_cron_install_json(result: &CronInstallResult, elapsed_ms: u64) {
    let output = CronInstallJson {
        ok: true,
        data: CronInstallData {
            action: "install",
            entry: result.entry.clone(),
            interval_minutes: result.interval_minutes,
            log_path: result.log_path.display().to_string(),
            replaced: result.replaced,
        },
        meta: RobotMeta { elapsed_ms },
    };
    if let Ok(json) = serde_json::to_string(&output) {
        println!("{json}");
    }
}

// ── uninstall ──

pub fn run_cron_uninstall() -> Result<CronUninstallResult> {
    uninstall_cron()
}

pub fn print_cron_uninstall(result: &CronUninstallResult) {
    if result.was_installed {
        println!(
            "  {} cron entry removed",
            Theme::success().render("Removed")
        );
    } else {
        println!(
            "  {} no lore-sync cron entry found",
            Theme::dim().render("Nothing to remove:")
        );
    }
    println!();
}

#[derive(Serialize)]
struct CronUninstallJson {
    ok: bool,
    data: CronUninstallData,
    meta: RobotMeta,
}

#[derive(Serialize)]
struct CronUninstallData {
    action: &'static str,
    was_installed: bool,
}

pub fn print_cron_uninstall_json(result: &CronUninstallResult, elapsed_ms: u64) {
    let output = CronUninstallJson {
        ok: true,
        data: CronUninstallData {
            action: "uninstall",
            was_installed: result.was_installed,
        },
        meta: RobotMeta { elapsed_ms },
    };
    if let Ok(json) = serde_json::to_string(&output) {
        println!("{json}");
    }
}

// ── status ──

pub fn run_cron_status(config: &Config) -> Result<CronStatusInfo> {
    let status = cron_status()?;

    // Query last sync run from DB
    let last_sync = get_last_sync_time(config).unwrap_or_default();

    Ok(CronStatusInfo { status, last_sync })
}

pub struct CronStatusInfo {
    pub status: CronStatusResult,
    pub last_sync: Option<LastSyncInfo>,
}

pub struct LastSyncInfo {
    pub started_at_iso: String,
    pub status: String,
}

fn get_last_sync_time(config: &Config) -> Result<Option<LastSyncInfo>> {
    let db_path = get_db_path(config.storage.db_path.as_deref());
    if !db_path.exists() {
        return Ok(None);
    }
    let conn = create_connection(&db_path)?;
    let result = conn.query_row(
        "SELECT started_at, status FROM sync_runs ORDER BY started_at DESC LIMIT 1",
        [],
        |row| {
            let started_at: i64 = row.get(0)?;
            let status: String = row.get(1)?;
            Ok(LastSyncInfo {
                started_at_iso: ms_to_iso(started_at),
                status,
            })
        },
    );
    match result {
        Ok(info) => Ok(Some(info)),
        Err(rusqlite::Error::QueryReturnedNoRows) => Ok(None),
        // Table may not exist if migrations haven't run yet
        Err(rusqlite::Error::SqliteFailure(_, Some(ref msg))) if msg.contains("no such table") => {
            Ok(None)
        }
        Err(e) => Err(e.into()),
    }
}

pub fn print_cron_status(info: &CronStatusInfo) {
    if info.status.installed {
        println!(
            "  {} lore-sync is installed in crontab",
            Theme::success().render("Installed")
        );
        if let Some(interval) = info.status.interval_minutes {
            println!(
                "  {} every {} minutes",
                Theme::dim().render("interval:"),
                interval
            );
        }
        if let Some(ref binary) = info.status.binary_path {
            let label = if info.status.binary_mismatch {
                Theme::warning().render("binary:")
            } else {
                Theme::dim().render("binary:")
            };
            println!("  {label} {binary}");
            if info.status.binary_mismatch
                && let Some(ref current) = info.status.current_binary
            {
                println!(
                    "  {}",
                    Theme::warning().render(&format!("  current binary is {current} (mismatch!)"))
                );
            }
        }
        if let Some(ref log) = info.status.log_path {
            println!("  {} {}", Theme::dim().render("log:"), log.display());
        }
    } else {
        println!(
            "  {} lore-sync is not installed in crontab",
            Theme::dim().render("Not installed:")
        );
        println!(
            "  {} lore cron install",
            Theme::dim().render("install with:")
        );
    }

    if let Some(ref last) = info.last_sync {
        println!(
            "  {} {} ({})",
            Theme::dim().render("last sync:"),
            last.started_at_iso,
            last.status
        );
    }
    println!();
}

#[derive(Serialize)]
struct CronStatusJson {
    ok: bool,
    data: CronStatusData,
    meta: RobotMeta,
}

#[derive(Serialize)]
struct CronStatusData {
    installed: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    interval_minutes: Option<u32>,
    #[serde(skip_serializing_if = "Option::is_none")]
    binary_path: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    current_binary: Option<String>,
    binary_mismatch: bool,
    #[serde(skip_serializing_if = "Option::is_none")]
    log_path: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    cron_entry: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    last_sync_at: Option<String>,
    #[serde(skip_serializing_if = "Option::is_none")]
    last_sync_status: Option<String>,
}

pub fn print_cron_status_json(info: &CronStatusInfo, elapsed_ms: u64) {
    let output = CronStatusJson {
        ok: true,
        data: CronStatusData {
            installed: info.status.installed,
            interval_minutes: info.status.interval_minutes,
            binary_path: info.status.binary_path.clone(),
            current_binary: info.status.current_binary.clone(),
            binary_mismatch: info.status.binary_mismatch,
            log_path: info
                .status
                .log_path
                .as_ref()
                .map(|p| p.display().to_string()),
            cron_entry: info.status.cron_entry.clone(),
            last_sync_at: info.last_sync.as_ref().map(|s| s.started_at_iso.clone()),
            last_sync_status: info.last_sync.as_ref().map(|s| s.status.clone()),
        },
        meta: RobotMeta { elapsed_ms },
    };
    if let Ok(json) = serde_json::to_string(&output) {
        println!("{json}");
    }
}
@@ -240,14 +240,14 @@ async fn check_gitlab(config: Option<&Config>) -> GitLabCheck {
        };
    };

    let token = match std::env::var(&config.gitlab.token_env_var) {
        Ok(t) if !t.trim().is_empty() => t.trim().to_string(),
        _ => {
    let token = match config.gitlab.resolve_token() {
        Ok(t) => t,
        Err(_) => {
            return GitLabCheck {
                result: CheckResult {
                    status: CheckStatus::Error,
                    message: Some(format!(
                        "{} not set in environment",
                        "Token not set. Run 'lore token set' or export {}.",
                        config.gitlab.token_env_var
                    )),
                },
@@ -257,6 +257,8 @@ async fn check_gitlab(config: Option<&Config>) -> GitLabCheck {
        }
    };

    let source = config.gitlab.token_source().unwrap_or("unknown");

    let client = GitLabClient::new(&config.gitlab.base_url, &token, None);

    match client.get_current_user().await {
@@ -264,7 +266,7 @@ async fn check_gitlab(config: Option<&Config>) -> GitLabCheck {
            result: CheckResult {
                status: CheckStatus::Ok,
                message: Some(format!(
                    "{} (authenticated as @{})",
                    "{} (authenticated as @{}, token from {source})",
                    config.gitlab.base_url, user.username
                )),
            },

@@ -382,7 +382,7 @@ fn extract_drift_topics(description: &str, notes: &[NoteRow], drift_idx: usize)
    }

    let mut sorted: Vec<(String, usize)> = freq.into_iter().collect();
    sorted.sort_by(|a, b| b.1.cmp(&a.1));
    sorted.sort_by_key(|b| std::cmp::Reverse(b.1));

    sorted
        .into_iter()

@@ -137,5 +137,8 @@ pub fn print_embed_json(result: &EmbedCommandResult, elapsed_ms: u64) {
        data: result,
        meta: RobotMeta { elapsed_ms },
    };
    println!("{}", serde_json::to_string(&output).unwrap());
    match serde_json::to_string(&output) {
        Ok(json) => println!("{json}"),
        Err(e) => eprintln!("Error serializing to JSON: {e}"),
    }
}

@@ -1,5 +1,7 @@
use serde::Serialize;

use tracing::info;

use crate::Config;
use crate::cli::render::{self, Icons, Theme};
use crate::core::db::create_connection;
@@ -46,6 +48,9 @@ pub struct FileHistoryResult {
    pub discussions: Vec<FileDiscussion>,
    pub total_mrs: usize,
    pub paths_searched: usize,
    /// Diagnostic hints explaining why results may be empty.
    #[serde(skip_serializing_if = "Vec::is_empty")]
    pub hints: Vec<String>,
}

/// Run the file-history query.
@@ -77,6 +82,11 @@ pub fn run_file_history(

    let paths_searched = all_paths.len();

    info!(
        paths = paths_searched,
        renames_followed, "file-history: resolved {} path(s) for '{}'", paths_searched, path
    );

    // Build placeholders for IN clause
    let placeholders: Vec<String> = (0..all_paths.len())
        .map(|i| format!("?{}", i + 2))
@@ -135,14 +145,31 @@ pub fn run_file_history(
            web_url: row.get(8)?,
        })
    })?
    .filter_map(std::result::Result::ok)
    .collect();
    .collect::<std::result::Result<Vec<_>, _>>()?;

    let total_mrs = merge_requests.len();

    info!(
        mr_count = total_mrs,
        "file-history: found {} MR(s) touching '{}'", total_mrs, path
    );

    // Optionally fetch DiffNote discussions on this file
    let discussions = if include_discussions && !merge_requests.is_empty() {
        fetch_file_discussions(&conn, &all_paths, project_id)?
        let discs = fetch_file_discussions(&conn, &all_paths, project_id)?;
        info!(
            discussion_count = discs.len(),
            "file-history: found {} discussion(s)",
            discs.len()
        );
        discs
    } else {
        Vec::new()
    };

    // Build diagnostic hints when no results found
    let hints = if total_mrs == 0 {
        build_file_history_hints(&conn, project_id, &all_paths)?
    } else {
        Vec::new()
    };
@@ -155,6 +182,7 @@ pub fn run_file_history(
        discussions,
        total_mrs,
        paths_searched,
        hints,
    })
}

@@ -179,8 +207,7 @@ fn fetch_file_discussions(
        JOIN discussions d ON d.id = n.discussion_id \
        WHERE n.position_new_path IN ({in_clause}) {project_filter} \
        AND n.is_system = 0 \
        ORDER BY n.created_at DESC \
        LIMIT 50"
        ORDER BY n.created_at DESC"
    );

    let mut stmt = conn.prepare(&sql)?;
@@ -210,12 +237,57 @@ fn fetch_file_discussions(
            created_at_iso: ms_to_iso(created_at),
        })
    })?
    .filter_map(std::result::Result::ok)
    .collect();
    .collect::<std::result::Result<Vec<_>, _>>()?;

    Ok(discussions)
}

/// Build diagnostic hints explaining why a file-history query returned no results.
fn build_file_history_hints(
    conn: &rusqlite::Connection,
    project_id: Option<i64>,
    paths: &[String],
) -> Result<Vec<String>> {
    let mut hints = Vec::new();

    // Check if mr_file_changes has ANY rows for this project
    let has_file_changes: bool = if let Some(pid) = project_id {
        conn.query_row(
            "SELECT EXISTS(SELECT 1 FROM mr_file_changes WHERE project_id = ?1 LIMIT 1)",
            rusqlite::params![pid],
            |row| row.get(0),
        )?
    } else {
        conn.query_row(
            "SELECT EXISTS(SELECT 1 FROM mr_file_changes LIMIT 1)",
            [],
            |row| row.get(0),
        )?
    };

    if !has_file_changes {
        hints.push(
            "No MR file changes have been synced yet. Run 'lore sync' to fetch file change data."
                .to_string(),
        );
        return Ok(hints);
    }

    // File changes exist but none match these paths
    let path_list = paths
        .iter()
        .map(|p| format!("'{p}'"))
        .collect::<Vec<_>>()
        .join(", ");
    hints.push(format!(
        "Searched paths [{}] were not found in MR file changes. \
         The file may predate the sync window or use a different path.",
        path_list
    ));

    Ok(hints)
}

// ── Human output ────────────────────────────────────────────────────────────

pub fn print_file_history(result: &FileHistoryResult) {
@@ -250,10 +322,16 @@ pub fn print_file_history(result: &FileHistoryResult) {
            Icons::info(),
            Theme::dim().render("No merge requests found touching this file.")
        );
        if !result.renames_followed && result.rename_chain.len() == 1 {
            println!(
                "  {}",
                Theme::dim().render("Hint: Run 'lore sync' to fetch MR file changes.")
                "  {} Searched: {}",
                Icons::info(),
                Theme::dim().render(&result.rename_chain[0])
            );
        }
        for hint in &result.hints {
            println!("  {} {}", Icons::info(), Theme::dim().render(hint));
        }
        println!();
        return;
    }
@@ -327,6 +405,7 @@ pub fn print_file_history_json(result: &FileHistoryResult, elapsed_ms: u64) {
            "total_mrs": result.total_mrs,
            "renames_followed": result.renames_followed,
            "paths_searched": result.paths_searched,
            "hints": if result.hints.is_empty() { None } else { Some(&result.hints) },
        }
    });

@@ -259,7 +259,10 @@ pub fn print_generate_docs_json(result: &GenerateDocsResult, elapsed_ms: u64) {
        },
        meta: RobotMeta { elapsed_ms },
    };
    println!("{}", serde_json::to_string(&output).unwrap());
    match serde_json::to_string(&output) {
        Ok(json) => println!("{json}"),
        Err(e) => eprintln!("Error serializing to JSON: {e}"),
    }
}

#[cfg(test)]

@@ -293,10 +293,7 @@ async fn run_ingest_inner(
    );
    lock.acquire(force)?;

    let token =
        std::env::var(&config.gitlab.token_env_var).map_err(|_| LoreError::TokenNotSet {
            env_var: config.gitlab.token_env_var.clone(),
        })?;
    let token = config.gitlab.resolve_token()?;

    let client = GitLabClient::new(
        &config.gitlab.base_url,

@@ -982,7 +979,10 @@ pub fn print_ingest_summary_json(result: &IngestResult, elapsed_ms: u64) {
        meta: RobotMeta { elapsed_ms },
    };

    println!("{}", serde_json::to_string(&output).unwrap());
    match serde_json::to_string(&output) {
        Ok(json) => println!("{json}"),
        Err(e) => eprintln!("Error serializing to JSON: {e}"),
    }
}

pub fn print_ingest_summary(result: &IngestResult) {
@@ -1109,5 +1109,8 @@ pub fn print_dry_run_preview_json(preview: &DryRunPreview) {
        data: preview.clone(),
    };

    println!("{}", serde_json::to_string(&output).unwrap());
    match serde_json::to_string(&output) {
        Ok(json) => println!("{json}"),
        Err(e) => eprintln!("Error serializing to JSON: {e}"),
    }
}
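Several hunks in this compare apply the same change: the panicking `serde_json::to_string(&output).unwrap()` becomes a `match` that reports failures on stderr and keeps going. A std-only sketch of that shape; the `to_json_like` helper is a hypothetical stand-in for a fallible serializer, not part of the codebase:

```rust
// Hypothetical stand-in for a fallible serializer such as
// serde_json::to_string (illustration only).
fn to_json_like(value: Option<u64>) -> Result<String, String> {
    value
        .map(|v| format!("{{\"elapsed_ms\":{v}}}"))
        .ok_or_else(|| "missing value".to_string())
}

// The pattern the diff introduces: report the error instead of panicking.
fn print_output(value: Option<u64>) {
    match to_json_like(value) {
        Ok(json) => println!("{json}"),
        Err(e) => eprintln!("Error serializing to JSON: {e}"),
    }
}

fn main() {
    print_output(Some(42)); // prints {"elapsed_ms":42}
    print_output(None);     // error goes to stderr; the process keeps running
}
```

For a CLI that emits machine-readable output, writing the failure to stderr keeps stdout clean for downstream parsers.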
@@ -1,9 +1,10 @@
use std::fs;
use std::io::{IsTerminal, Read};

use crate::core::config::{MinimalConfig, MinimalGitLabConfig, ProjectConfig};
use crate::core::config::{Config, MinimalConfig, MinimalGitLabConfig, ProjectConfig};
use crate::core::db::{create_connection, run_migrations};
use crate::core::error::{LoreError, Result};
use crate::core::paths::{get_config_path, get_data_dir};
use crate::core::paths::{ensure_config_permissions, get_config_path, get_data_dir};
use crate::gitlab::{GitLabClient, GitLabProject};

pub struct InitInputs {

@@ -172,3 +173,141 @@ pub async fn run_init(inputs: InitInputs, options: InitOptions) -> Result<InitRe
        default_project: inputs.default_project,
    })
}

// ── token set / show ──

pub struct TokenSetResult {
    pub username: String,
    pub config_path: String,
}

pub struct TokenShowResult {
    pub token: String,
    pub source: &'static str,
}

/// Read token from --token flag or stdin, validate against GitLab, store in config.
pub async fn run_token_set(
    config_path_override: Option<&str>,
    token_arg: Option<String>,
) -> Result<TokenSetResult> {
    let config_path = get_config_path(config_path_override);

    if !config_path.exists() {
        return Err(LoreError::ConfigNotFound {
            path: config_path.display().to_string(),
        });
    }

    // Resolve token value: flag > stdin > error
    let token = if let Some(t) = token_arg {
        t.trim().to_string()
    } else if !std::io::stdin().is_terminal() {
        let mut buf = String::new();
        std::io::stdin()
            .read_to_string(&mut buf)
            .map_err(|e| LoreError::Other(format!("Failed to read token from stdin: {e}")))?;
        buf.trim().to_string()
    } else {
        return Err(LoreError::Other(
            "No token provided. Use --token or pipe to stdin.".to_string(),
        ));
    };

    if token.is_empty() {
        return Err(LoreError::Other("Token cannot be empty.".to_string()));
    }

    // Load config to get the base URL for validation
    let config = Config::load(config_path_override)?;

    // Validate token against GitLab
    let client = GitLabClient::new(&config.gitlab.base_url, &token, None);
    let user = client.get_current_user().await.map_err(|e| {
        if matches!(e, LoreError::GitLabAuthFailed) {
            LoreError::Other("Token validation failed: authentication rejected by GitLab.".into())
        } else {
            e
        }
    })?;

    // Read config as raw JSON, insert token, write back
    let content = fs::read_to_string(&config_path)
        .map_err(|e| LoreError::Other(format!("Failed to read config file: {e}")))?;

    let mut json: serde_json::Value =
        serde_json::from_str(&content).map_err(|e| LoreError::ConfigInvalid {
            details: format!("Invalid JSON in config file: {e}"),
        })?;

    json["gitlab"]["token"] = serde_json::Value::String(token);

    let output = serde_json::to_string_pretty(&json)
        .map_err(|e| LoreError::Other(format!("Failed to serialize config: {e}")))?;
    fs::write(&config_path, format!("{output}\n"))?;

    // Enforce permissions
    ensure_config_permissions(&config_path);

    Ok(TokenSetResult {
        username: user.username,
        config_path: config_path.display().to_string(),
    })
}

/// Show the current token (masked or unmasked) and its source.
pub fn run_token_show(config_path_override: Option<&str>, unmask: bool) -> Result<TokenShowResult> {
    let config = Config::load(config_path_override)?;

    let source = config
        .gitlab
        .token_source()
        .ok_or_else(|| LoreError::TokenNotSet {
            env_var: config.gitlab.token_env_var.clone(),
        })?;

    let token = config.gitlab.resolve_token()?;

    let display_token = if unmask { token } else { mask_token(&token) };

    Ok(TokenShowResult {
        token: display_token,
        source,
    })
}

fn mask_token(token: &str) -> String {
    let len = token.len();
    if len <= 8 {
        "*".repeat(len)
    } else {
        let visible = &token[..4];
        format!("{visible}{}", "*".repeat(len - 4))
    }
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn mask_token_hides_short_tokens_completely() {
        assert_eq!(mask_token(""), "");
        assert_eq!(mask_token("a"), "*");
        assert_eq!(mask_token("abcd"), "****");
        assert_eq!(mask_token("abcdefgh"), "********");
    }

    #[test]
    fn mask_token_reveals_first_four_chars_for_long_tokens() {
        assert_eq!(mask_token("abcdefghi"), "abcd*****");
        assert_eq!(mask_token("glpat-xyzABC123456"), "glpa**************");
    }

    #[test]
    fn mask_token_boundary_at_nine_chars() {
        // 8 chars → fully masked, 9 chars → first 4 visible
        assert_eq!(mask_token("12345678"), "********");
        assert_eq!(mask_token("123456789"), "1234*****");
    }
}
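One caveat worth noting about the masking rule in `run_token_show`: `&token[..4]` is a byte slice, so it assumes ASCII tokens (true for GitLab `glpat-` tokens) and would panic on a multi-byte UTF-8 boundary. A standalone sketch of the same rule:

```rust
// Sketch of the masking rule above: tokens of 8 bytes or fewer are
// fully masked; longer tokens keep their first four characters.
// Assumes ASCII input, since `&token[..4]` slices by byte offset.
fn mask_token(token: &str) -> String {
    let len = token.len();
    if len <= 8 {
        "*".repeat(len)
    } else {
        format!("{}{}", &token[..4], "*".repeat(len - 4))
    }
}

fn main() {
    // 12-char token: first 4 visible, remaining 8 masked
    assert_eq!(mask_token("glpat-secret"), "glpa********");
    println!("{}", mask_token("glpat-secret"));
}
```

A Unicode-safe variant would use `token.char_indices().nth(4)` to find the slice boundary instead.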
@@ -980,59 +980,6 @@ pub fn print_list_notes_json(result: &NoteListResult, elapsed_ms: u64, fields: O
    }
}

pub fn print_list_notes_jsonl(result: &NoteListResult) {
    for note in &result.notes {
        let json_row = NoteListRowJson::from(note);
        match serde_json::to_string(&json_row) {
            Ok(json) => println!("{json}"),
            Err(e) => eprintln!("Error serializing to JSON: {e}"),
        }
    }
}

/// Escape a field for RFC 4180 CSV: quote fields containing commas, quotes, or newlines.
fn csv_escape(field: &str) -> String {
    if field.contains(',') || field.contains('"') || field.contains('\n') || field.contains('\r') {
        let escaped = field.replace('"', "\"\"");
        format!("\"{escaped}\"")
    } else {
        field.to_string()
    }
}

pub fn print_list_notes_csv(result: &NoteListResult) {
    println!(
        "id,gitlab_id,author_username,body,note_type,is_system,created_at,updated_at,position_new_path,position_new_line,noteable_type,parent_iid,project_path"
    );
    for note in &result.notes {
        let body = note.body.as_deref().unwrap_or("");
        let note_type = note.note_type.as_deref().unwrap_or("");
        let path = note.position_new_path.as_deref().unwrap_or("");
        let line = note
            .position_new_line
            .map_or(String::new(), |l| l.to_string());
        let noteable = note.noteable_type.as_deref().unwrap_or("");
        let parent_iid = note.parent_iid.map_or(String::new(), |i| i.to_string());

        println!(
            "{},{},{},{},{},{},{},{},{},{},{},{},{}",
            note.id,
            note.gitlab_id,
            csv_escape(&note.author_username),
            csv_escape(body),
            csv_escape(note_type),
            note.is_system,
            note.created_at,
            note.updated_at,
            csv_escape(path),
            line,
            csv_escape(noteable),
            parent_iid,
            csv_escape(&note.project_path),
        );
    }
}

// ---------------------------------------------------------------------------
// Note query layer
// ---------------------------------------------------------------------------
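The `csv_escape` helper in the hunk above implements the RFC 4180 quoting rule: fields containing a comma, double quote, or line break get wrapped in double quotes, with embedded quotes doubled. A standalone sketch of the same rule:

```rust
// RFC 4180 field escaping, as used by `csv_escape` above: quote when the
// field contains a delimiter, quote, or line break; double embedded quotes.
fn csv_escape(field: &str) -> String {
    if field.contains(',') || field.contains('"') || field.contains('\n') || field.contains('\r') {
        format!("\"{}\"", field.replace('"', "\"\""))
    } else {
        field.to_string()
    }
}

fn main() {
    // Plain fields pass through untouched; special fields are quoted.
    assert_eq!(csv_escape("plain"), "plain");
    assert_eq!(csv_escape("a,b"), "\"a,b\"");
    assert_eq!(csv_escape("say \"hi\""), "\"say \"\"hi\"\"\"");
    println!("ok");
}
```

Quoting only when required keeps the common case readable while still round-tripping bodies that contain commas or newlines.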
@@ -95,6 +95,8 @@ fn test_config(default_project: Option<&str>) -> Config {
        gitlab: GitLabConfig {
            base_url: "https://gitlab.example.com".to_string(),
            token_env_var: "GITLAB_TOKEN".to_string(),
            token: None,
            username: None,
        },
        projects: vec![ProjectConfig {
            path: "group/project".to_string(),

@@ -1269,60 +1271,6 @@ fn test_truncate_note_body() {
    assert!(result.ends_with("..."));
}

#[test]
fn test_csv_escape_basic() {
    assert_eq!(csv_escape("simple"), "simple");
    assert_eq!(csv_escape("has,comma"), "\"has,comma\"");
    assert_eq!(csv_escape("has\"quote"), "\"has\"\"quote\"");
    assert_eq!(csv_escape("has\nnewline"), "\"has\nnewline\"");
}

#[test]
fn test_csv_output_basic() {
    let result = NoteListResult {
        notes: vec![NoteListRow {
            id: 1,
            gitlab_id: 100,
            author_username: "alice".to_string(),
            body: Some("Hello, world".to_string()),
            note_type: Some("DiffNote".to_string()),
            is_system: false,
            created_at: 1_000_000,
            updated_at: 2_000_000,
            position_new_path: Some("src/main.rs".to_string()),
            position_new_line: Some(42),
            position_old_path: None,
            position_old_line: None,
            resolvable: true,
            resolved: false,
            resolved_by: None,
            noteable_type: Some("Issue".to_string()),
            parent_iid: Some(7),
            parent_title: Some("Test issue".to_string()),
            project_path: "group/project".to_string(),
        }],
        total_count: 1,
    };

    // Verify csv_escape handles the comma in body correctly
    let body = result.notes[0].body.as_deref().unwrap();
    let escaped = csv_escape(body);
    assert_eq!(escaped, "\"Hello, world\"");

    // Verify the formatting helpers
    assert_eq!(
        format_note_type(result.notes[0].note_type.as_deref()),
        "Diff"
    );
    assert_eq!(
        format_note_parent(
            result.notes[0].noteable_type.as_deref(),
            result.notes[0].parent_iid,
        ),
        "Issue #7"
    );
}

#[test]
fn test_jsonl_output_one_per_line() {
    let result = NoteListResult {
src/cli/commands/me/me_tests.rs (new file, 779 lines)
@@ -0,0 +1,779 @@
use super::*;
use crate::cli::commands::me::types::{ActivityEventType, AttentionState};
use crate::core::db::{create_connection, run_migrations};
use crate::core::time::now_ms;
use rusqlite::Connection;
use std::path::Path;

// ─── Helpers ────────────────────────────────────────────────────────────────

fn setup_test_db() -> Connection {
    let conn = create_connection(Path::new(":memory:")).unwrap();
    run_migrations(&conn).unwrap();
    conn
}

fn insert_project(conn: &Connection, id: i64, path: &str) {
    conn.execute(
        "INSERT INTO projects (id, gitlab_project_id, path_with_namespace, web_url)
         VALUES (?1, ?2, ?3, ?4)",
        rusqlite::params![
            id,
            id * 100,
            path,
            format!("https://git.example.com/{path}")
        ],
    )
    .unwrap();
}

fn insert_issue(conn: &Connection, id: i64, project_id: i64, iid: i64, author: &str) {
    insert_issue_with_status(
        conn,
        id,
        project_id,
        iid,
        author,
        "opened",
        Some("In Progress"),
    );
}

fn insert_issue_with_state(
    conn: &Connection,
    id: i64,
    project_id: i64,
    iid: i64,
    author: &str,
    state: &str,
) {
    // For closed issues, don't set status_name (they won't appear in dashboard anyway)
    let status_name = if state == "opened" {
        Some("In Progress")
    } else {
        None
    };
    insert_issue_with_status(conn, id, project_id, iid, author, state, status_name);
}

#[allow(clippy::too_many_arguments)]
fn insert_issue_with_status(
    conn: &Connection,
    id: i64,
    project_id: i64,
    iid: i64,
    author: &str,
    state: &str,
    status_name: Option<&str>,
) {
    let ts = now_ms();
    conn.execute(
        "INSERT INTO issues (id, gitlab_id, project_id, iid, title, state, status_name, author_username, created_at, updated_at, last_seen_at)
         VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11)",
        rusqlite::params![
            id,
            id * 10,
            project_id,
            iid,
            format!("Issue {iid}"),
            state,
            status_name,
            author,
            ts,
            ts,
            ts
        ],
    )
    .unwrap();
}

fn insert_assignee(conn: &Connection, issue_id: i64, username: &str) {
    conn.execute(
        "INSERT INTO issue_assignees (issue_id, username) VALUES (?1, ?2)",
        rusqlite::params![issue_id, username],
    )
    .unwrap();
}

#[allow(clippy::too_many_arguments)]
fn insert_mr(
    conn: &Connection,
    id: i64,
    project_id: i64,
    iid: i64,
    author: &str,
    state: &str,
    draft: bool,
) {
    let ts = now_ms();
    conn.execute(
        "INSERT INTO merge_requests (id, gitlab_id, project_id, iid, title, author_username, state, draft, last_seen_at, updated_at, created_at, merged_at)
         VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12)",
        rusqlite::params![
            id,
            id * 10,
            project_id,
            iid,
            format!("MR {iid}"),
            author,
            state,
            i32::from(draft),
            ts,
            ts,
            ts,
            if state == "merged" { Some(ts) } else { None::<i64> }
        ],
    )
    .unwrap();
}

fn insert_reviewer(conn: &Connection, mr_id: i64, username: &str) {
    conn.execute(
        "INSERT INTO mr_reviewers (merge_request_id, username) VALUES (?1, ?2)",
        rusqlite::params![mr_id, username],
    )
    .unwrap();
}

fn insert_discussion(
    conn: &Connection,
    id: i64,
    project_id: i64,
    mr_id: Option<i64>,
    issue_id: Option<i64>,
) {
    let noteable_type = if mr_id.is_some() {
        "MergeRequest"
    } else {
        "Issue"
    };
    let ts = now_ms();
    conn.execute(
        "INSERT INTO discussions (id, gitlab_discussion_id, project_id, merge_request_id, issue_id, noteable_type, resolvable, resolved, last_seen_at, last_note_at)
         VALUES (?1, ?2, ?3, ?4, ?5, ?6, 0, 0, ?7, ?8)",
        rusqlite::params![
            id,
            format!("disc-{id}"),
            project_id,
            mr_id,
            issue_id,
            noteable_type,
            ts,
            ts
        ],
    )
    .unwrap();
}

#[allow(clippy::too_many_arguments)]
fn insert_note_at(
    conn: &Connection,
    id: i64,
    discussion_id: i64,
    project_id: i64,
    author: &str,
    is_system: bool,
    body: &str,
    created_at: i64,
) {
    conn.execute(
        "INSERT INTO notes (id, gitlab_id, discussion_id, project_id, note_type, is_system, author_username, body, created_at, updated_at, last_seen_at)
         VALUES (?1, ?2, ?3, ?4, 'DiscussionNote', ?5, ?6, ?7, ?8, ?9, ?10)",
        rusqlite::params![
            id,
            id * 10,
            discussion_id,
            project_id,
            i32::from(is_system),
            author,
            body,
            created_at,
            created_at,
            now_ms()
        ],
    )
    .unwrap();
}

#[allow(clippy::too_many_arguments)]
fn insert_state_event(
    conn: &Connection,
    id: i64,
    project_id: i64,
    issue_id: Option<i64>,
    mr_id: Option<i64>,
    state: &str,
    actor: &str,
    created_at: i64,
) {
    conn.execute(
        "INSERT INTO resource_state_events (id, gitlab_id, project_id, issue_id, merge_request_id, state, actor_username, created_at)
         VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)",
        rusqlite::params![id, id * 10, project_id, issue_id, mr_id, state, actor, created_at],
    )
    .unwrap();
}

#[allow(clippy::too_many_arguments)]
fn insert_label_event(
    conn: &Connection,
    id: i64,
    project_id: i64,
    issue_id: Option<i64>,
    mr_id: Option<i64>,
    action: &str,
    label_name: &str,
    actor: &str,
    created_at: i64,
) {
    conn.execute(
        "INSERT INTO resource_label_events (id, gitlab_id, project_id, issue_id, merge_request_id, action, label_name, actor_username, created_at)
         VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9)",
        rusqlite::params![
            id,
            id * 10,
            project_id,
            issue_id,
            mr_id,
            action,
            label_name,
            actor,
            created_at
        ],
    )
    .unwrap();
}
// ─── Open Issues Tests (Task #7) ───────────────────────────────────────────

#[test]
fn open_issues_returns_assigned_only() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_issue(&conn, 11, 1, 43, "someone");
    // Only assign issue 42 to alice
    insert_assignee(&conn, 10, "alice");

    let results = query_open_issues(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].iid, 42);
}

#[test]
fn open_issues_excludes_closed() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_issue_with_state(&conn, 11, 1, 43, "someone", "closed");
    insert_assignee(&conn, 10, "alice");
    insert_assignee(&conn, 11, "alice");

    let results = query_open_issues(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].iid, 42);
}

#[test]
fn open_issues_project_filter() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo-a");
    insert_project(&conn, 2, "group/repo-b");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_issue(&conn, 11, 2, 43, "someone");
    insert_assignee(&conn, 10, "alice");
    insert_assignee(&conn, 11, "alice");

    // Filter to project 1 only
    let results = query_open_issues(&conn, "alice", &[1]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].project_path, "group/repo-a");
}

#[test]
fn open_issues_empty_when_unassigned() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "alice");
    // alice authored but is NOT assigned
    let results = query_open_issues(&conn, "alice", &[]).unwrap();
    assert!(results.is_empty());
}

// ─── Attention State Tests (Task #10) ──────────────────────────────────────

#[test]
fn attention_state_not_started_no_notes() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_assignee(&conn, 10, "alice");

    let results = query_open_issues(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].attention_state, AttentionState::NotStarted);
}

#[test]
fn attention_state_needs_attention_others_replied() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_assignee(&conn, 10, "alice");

    // alice comments first, then bob replies after
    let disc_id = 100;
    insert_discussion(&conn, disc_id, 1, None, Some(10));
    let t1 = now_ms() - 5000;
    let t2 = now_ms() - 1000;
    insert_note_at(&conn, 200, disc_id, 1, "alice", false, "my comment", t1);
    insert_note_at(&conn, 201, disc_id, 1, "bob", false, "reply", t2);

    let results = query_open_issues(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].attention_state, AttentionState::NeedsAttention);
}

#[test]
fn attention_state_awaiting_response() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_assignee(&conn, 10, "alice");

    let disc_id = 100;
    insert_discussion(&conn, disc_id, 1, None, Some(10));
    let t1 = now_ms() - 5000;
    let t2 = now_ms() - 1000;
    // bob first, then alice replies (alice's latest >= others' latest)
    insert_note_at(&conn, 200, disc_id, 1, "bob", false, "question", t1);
    insert_note_at(&conn, 201, disc_id, 1, "alice", false, "my reply", t2);

    let results = query_open_issues(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].attention_state, AttentionState::AwaitingResponse);
}

// ─── Authored MRs Tests (Task #8) ─────────────────────────────────────────

#[test]
fn authored_mrs_returns_own_only() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_mr(&conn, 10, 1, 99, "alice", "opened", false);
    insert_mr(&conn, 11, 1, 100, "bob", "opened", false);

    let results = query_authored_mrs(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].iid, 99);
}

#[test]
fn authored_mrs_excludes_merged() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_mr(&conn, 10, 1, 99, "alice", "opened", false);
    insert_mr(&conn, 11, 1, 100, "alice", "merged", false);

    let results = query_authored_mrs(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].iid, 99);
}

#[test]
fn authored_mrs_project_filter() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo-a");
    insert_project(&conn, 2, "group/repo-b");
    insert_mr(&conn, 10, 1, 99, "alice", "opened", false);
    insert_mr(&conn, 11, 2, 100, "alice", "opened", false);

    let results = query_authored_mrs(&conn, "alice", &[2]).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].project_path, "group/repo-b");
}

#[test]
fn authored_mr_not_ready_when_draft_no_reviewers() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_mr(&conn, 10, 1, 99, "alice", "opened", true);
    // No reviewers added

    let results = query_authored_mrs(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    assert!(results[0].draft);
    assert_eq!(results[0].attention_state, AttentionState::NotReady);
}

#[test]
fn authored_mr_not_ready_overridden_when_has_reviewers() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_mr(&conn, 10, 1, 99, "alice", "opened", true);
    insert_reviewer(&conn, 10, "bob");

    let results = query_authored_mrs(&conn, "alice", &[]).unwrap();
    assert_eq!(results.len(), 1);
    // Draft with reviewers -> not_started (not not_ready), since no one has commented
    assert_eq!(results[0].attention_state, AttentionState::NotStarted);
}
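The attention-state tests above pin down a comparison of "my latest note" versus "others' latest note". A pure-function sketch of that rule, reconstructed only from the test expectations (the real implementation is not shown in this diff, and this sketch covers just the issue-side states, not the MR-only `NotReady`):

```rust
// Hypothetical reconstruction of the attention-state rule implied by the
// tests: no notes at all -> NotStarted; someone else's note is newer than
// the user's latest -> NeedsAttention; the user's latest note is at least
// as recent as anyone else's -> AwaitingResponse.
#[derive(Debug, PartialEq)]
enum AttentionState {
    NotStarted,
    NeedsAttention,
    AwaitingResponse,
}

fn attention_state(own_latest: Option<i64>, others_latest: Option<i64>) -> AttentionState {
    match (own_latest, others_latest) {
        (None, None) => AttentionState::NotStarted,
        (Some(mine), Some(theirs)) if mine >= theirs => AttentionState::AwaitingResponse,
        (Some(_), None) => AttentionState::AwaitingResponse,
        _ => AttentionState::NeedsAttention,
    }
}

fn main() {
    assert_eq!(attention_state(None, None), AttentionState::NotStarted);
    assert_eq!(attention_state(Some(5), Some(9)), AttentionState::NeedsAttention);
    assert_eq!(attention_state(Some(9), Some(5)), AttentionState::AwaitingResponse);
    println!("ok");
}
```

Modeling the rule over two `Option<i64>` timestamps keeps it trivially unit-testable without a database.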
// ─── Reviewing MRs Tests (Task #9) ────────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn reviewing_mrs_returns_reviewer_items() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo");
|
||||
insert_mr(&conn, 10, 1, 99, "bob", "opened", false);
|
||||
insert_mr(&conn, 11, 1, 100, "charlie", "opened", false);
|
||||
insert_reviewer(&conn, 10, "alice");
|
||||
// alice is NOT a reviewer of MR 100
|
||||
|
||||
let results = query_reviewing_mrs(&conn, "alice", &[]).unwrap();
|
||||
assert_eq!(results.len(), 1);
|
||||
assert_eq!(results[0].iid, 99);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn reviewing_mrs_includes_author_username() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo");
|
||||
insert_mr(&conn, 10, 1, 99, "bob", "opened", false);
|
||||
insert_reviewer(&conn, 10, "alice");
|
||||
|
||||
let results = query_reviewing_mrs(&conn, "alice", &[]).unwrap();
|
||||
assert_eq!(results.len(), 1);
|
||||
assert_eq!(results[0].author_username, Some("bob".to_string()));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn reviewing_mrs_project_filter() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo-a");
|
||||
insert_project(&conn, 2, "group/repo-b");
|
||||
insert_mr(&conn, 10, 1, 99, "bob", "opened", false);
|
||||
insert_mr(&conn, 11, 2, 100, "bob", "opened", false);
|
||||
insert_reviewer(&conn, 10, "alice");
|
||||
insert_reviewer(&conn, 11, "alice");
|
||||
|
||||
let results = query_reviewing_mrs(&conn, "alice", &[1]).unwrap();
|
||||
assert_eq!(results.len(), 1);
|
||||
assert_eq!(results[0].project_path, "group/repo-a");
|
||||
}
|
||||
|
||||
// ─── Activity Feed Tests (Tasks #11-13) ────────────────────────────────────
|
||||
|
||||
#[test]
|
||||
fn activity_note_on_assigned_issue() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo");
|
||||
insert_issue(&conn, 10, 1, 42, "someone");
|
||||
insert_assignee(&conn, 10, "alice");
|
||||
|
||||
let disc_id = 100;
|
||||
insert_discussion(&conn, disc_id, 1, None, Some(10));
|
||||
let t = now_ms() - 1000;
|
||||
insert_note_at(&conn, 200, disc_id, 1, "bob", false, "a comment", t);
|
||||
|
||||
let results = query_activity(&conn, "alice", &[], 0).unwrap();
|
||||
assert_eq!(results.len(), 1);
|
||||
assert_eq!(results[0].event_type, ActivityEventType::Note);
|
||||
assert_eq!(results[0].entity_iid, 42);
|
||||
assert_eq!(results[0].entity_type, "issue");
|
||||
assert!(!results[0].is_own);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn activity_note_on_authored_mr() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo");
|
||||
insert_mr(&conn, 10, 1, 99, "alice", "opened", false);
|
||||
|
||||
let disc_id = 100;
|
||||
insert_discussion(&conn, disc_id, 1, Some(10), None);
|
||||
let t = now_ms() - 1000;
|
||||
insert_note_at(&conn, 200, disc_id, 1, "bob", false, "nice work", t);
|
||||
|
||||
let results = query_activity(&conn, "alice", &[], 0).unwrap();
|
||||
assert_eq!(results.len(), 1);
|
||||
assert_eq!(results[0].event_type, ActivityEventType::Note);
|
||||
assert_eq!(results[0].entity_type, "mr");
|
||||
assert_eq!(results[0].entity_iid, 99);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn activity_state_event_on_my_issue() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo");
|
||||
insert_issue(&conn, 10, 1, 42, "someone");
|
||||
insert_assignee(&conn, 10, "alice");
|
||||
|
||||
let t = now_ms() - 1000;
|
||||
insert_state_event(&conn, 300, 1, Some(10), None, "closed", "bob", t);
|
||||
|
||||
let results = query_activity(&conn, "alice", &[], 0).unwrap();
|
||||
assert_eq!(results.len(), 1);
|
||||
assert_eq!(results[0].event_type, ActivityEventType::StatusChange);
|
||||
assert_eq!(results[0].summary, "closed");
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn activity_label_event_on_my_issue() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo");
|
||||
insert_issue(&conn, 10, 1, 42, "someone");
|
||||
insert_assignee(&conn, 10, "alice");
|
||||
|
||||
let t = now_ms() - 1000;
|
||||
insert_label_event(&conn, 400, 1, Some(10), None, "add", "bug", "bob", t);
|
||||
|
||||
let results = query_activity(&conn, "alice", &[], 0).unwrap();
|
||||
assert_eq!(results.len(), 1);
|
||||
assert_eq!(results[0].event_type, ActivityEventType::LabelChange);
|
||||
assert!(results[0].summary.contains("bug"));
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn activity_excludes_unassociated_items() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo");
|
||||
// Issue NOT assigned to alice
|
||||
insert_issue(&conn, 10, 1, 42, "someone");
|
||||
|
||||
let disc_id = 100;
|
||||
insert_discussion(&conn, disc_id, 1, None, Some(10));
|
||||
let t = now_ms() - 1000;
|
||||
insert_note_at(&conn, 200, disc_id, 1, "bob", false, "a comment", t);
|
||||
|
||||
let results = query_activity(&conn, "alice", &[], 0).unwrap();
|
||||
assert!(
|
||||
results.is_empty(),
|
||||
"should not see activity on unassigned issues"
|
||||
);
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn activity_since_filter() {
|
||||
let conn = setup_test_db();
|
||||
insert_project(&conn, 1, "group/repo");
|
||||
insert_issue(&conn, 10, 1, 42, "someone");
|
||||
insert_assignee(&conn, 10, "alice");
|
||||
|
||||
let disc_id = 100;
|
||||
insert_discussion(&conn, disc_id, 1, None, Some(10));
|
||||
let old_t = now_ms() - 100_000_000; // ~1 day ago
|
||||
let recent_t = now_ms() - 1000;
|
||||
insert_note_at(&conn, 200, disc_id, 1, "bob", false, "old comment", old_t);
|
||||
insert_note_at(
|
||||
&conn,
|
||||
201,
|
||||
disc_id,
|
||||
1,
|
||||
"bob",
|
||||
false,
|
        "new comment",
        recent_t,
    );

    // since = 50 seconds ago, should only get the recent note
    let since = now_ms() - 50_000;
    let results = query_activity(&conn, "alice", &[], since).unwrap();
    assert_eq!(results.len(), 1);
    // Notes no longer duplicate body into body_preview (summary carries the content)
    assert_eq!(results[0].body_preview, None);
    assert_eq!(results[0].summary, "new comment");
}

#[test]
fn activity_project_filter() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo-a");
    insert_project(&conn, 2, "group/repo-b");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_issue(&conn, 11, 2, 43, "someone");
    insert_assignee(&conn, 10, "alice");
    insert_assignee(&conn, 11, "alice");

    let disc_a = 100;
    let disc_b = 101;
    insert_discussion(&conn, disc_a, 1, None, Some(10));
    insert_discussion(&conn, disc_b, 2, None, Some(11));
    let t = now_ms() - 1000;
    insert_note_at(&conn, 200, disc_a, 1, "bob", false, "comment a", t);
    insert_note_at(&conn, 201, disc_b, 2, "bob", false, "comment b", t);

    // Filter to project 1 only
    let results = query_activity(&conn, "alice", &[1], 0).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].project_path, "group/repo-a");
}

#[test]
fn activity_sorted_newest_first() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_assignee(&conn, 10, "alice");

    let disc_id = 100;
    insert_discussion(&conn, disc_id, 1, None, Some(10));
    let t1 = now_ms() - 5000;
    let t2 = now_ms() - 1000;
    insert_note_at(&conn, 200, disc_id, 1, "bob", false, "first", t1);
    insert_note_at(&conn, 201, disc_id, 1, "charlie", false, "second", t2);

    let results = query_activity(&conn, "alice", &[], 0).unwrap();
    assert_eq!(results.len(), 2);
    assert!(
        results[0].timestamp >= results[1].timestamp,
        "should be sorted newest first"
    );
}

#[test]
fn activity_is_own_flag() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_assignee(&conn, 10, "alice");

    let disc_id = 100;
    insert_discussion(&conn, disc_id, 1, None, Some(10));
    let t = now_ms() - 1000;
    insert_note_at(&conn, 200, disc_id, 1, "alice", false, "my comment", t);

    let results = query_activity(&conn, "alice", &[], 0).unwrap();
    assert_eq!(results.len(), 1);
    assert!(results[0].is_own);
}

// ─── Assignment Detection Tests (Task #12) ─────────────────────────────────

#[test]
fn activity_assignment_system_note() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_assignee(&conn, 10, "alice");

    let disc_id = 100;
    insert_discussion(&conn, disc_id, 1, None, Some(10));
    let t = now_ms() - 1000;
    insert_note_at(&conn, 200, disc_id, 1, "bob", true, "assigned to @alice", t);

    let results = query_activity(&conn, "alice", &[], 0).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].event_type, ActivityEventType::Assign);
}

#[test]
fn activity_unassignment_system_note() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_issue(&conn, 10, 1, 42, "someone");
    insert_assignee(&conn, 10, "alice");

    let disc_id = 100;
    insert_discussion(&conn, disc_id, 1, None, Some(10));
    let t = now_ms() - 1000;
    insert_note_at(&conn, 200, disc_id, 1, "bob", true, "unassigned @alice", t);

    let results = query_activity(&conn, "alice", &[], 0).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].event_type, ActivityEventType::Unassign);
}

#[test]
fn activity_review_request_system_note() {
    let conn = setup_test_db();
    insert_project(&conn, 1, "group/repo");
    insert_mr(&conn, 10, 1, 99, "bob", "opened", false);
    insert_reviewer(&conn, 10, "alice");

    let disc_id = 100;
    insert_discussion(&conn, disc_id, 1, Some(10), None);
    let t = now_ms() - 1000;
    insert_note_at(
        &conn,
        200,
        disc_id,
        1,
        "bob",
        true,
        "requested review from @alice",
        t,
    );

    let results = query_activity(&conn, "alice", &[], 0).unwrap();
    assert_eq!(results.len(), 1);
    assert_eq!(results[0].event_type, ActivityEventType::ReviewRequest);
}

// ─── Helper Tests ──────────────────────────────────────────────────────────

#[test]
fn parse_attention_state_all_variants() {
    assert_eq!(
        parse_attention_state("needs_attention"),
        AttentionState::NeedsAttention
    );
    assert_eq!(
        parse_attention_state("not_started"),
        AttentionState::NotStarted
    );
    assert_eq!(
        parse_attention_state("awaiting_response"),
        AttentionState::AwaitingResponse
    );
    assert_eq!(parse_attention_state("stale"), AttentionState::Stale);
    assert_eq!(parse_attention_state("not_ready"), AttentionState::NotReady);
    assert_eq!(parse_attention_state("unknown"), AttentionState::NotStarted);
}

#[test]
fn parse_event_type_all_variants() {
    assert_eq!(parse_event_type("note"), ActivityEventType::Note);
    assert_eq!(
        parse_event_type("status_change"),
        ActivityEventType::StatusChange
    );
    assert_eq!(
        parse_event_type("label_change"),
        ActivityEventType::LabelChange
    );
    assert_eq!(parse_event_type("assign"), ActivityEventType::Assign);
    assert_eq!(parse_event_type("unassign"), ActivityEventType::Unassign);
    assert_eq!(
        parse_event_type("review_request"),
        ActivityEventType::ReviewRequest
    );
    assert_eq!(
        parse_event_type("milestone_change"),
        ActivityEventType::MilestoneChange
    );
    assert_eq!(parse_event_type("unknown"), ActivityEventType::Note);
}

#[test]
fn build_project_clause_empty() {
    assert_eq!(build_project_clause("i.project_id", &[]), "");
}

#[test]
fn build_project_clause_single() {
    let clause = build_project_clause("i.project_id", &[1]);
    assert_eq!(clause, "AND i.project_id = ?2");
}

#[test]
fn build_project_clause_multiple() {
    let clause = build_project_clause("i.project_id", &[1, 2, 3]);
    assert_eq!(clause, "AND i.project_id IN (?2,?3,?4)");
}

#[test]
fn build_project_clause_at_custom_start() {
    let clause = build_project_clause_at("p.id", &[1, 2], 3);
    assert_eq!(clause, "AND p.id IN (?3,?4)");
}
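The four clause tests above fully pin down the placeholder numbering: an empty scope yields no clause, a single ID binds as `= ?start`, and multiple IDs become an `IN (?start,…)` list. The bodies of `build_project_clause`/`build_project_clause_at` are not shown in this diff, so the following is only a minimal sketch consistent with those tests, not the project's actual implementation:

```rust
// Sketch: build an SQL filter fragment whose positional placeholders
// (?2, ?3, ...) line up after the fixed ?1 username parameter.
fn build_project_clause(column: &str, ids: &[i64]) -> String {
    // Default queries bind username as ?1, so project IDs start at ?2.
    build_project_clause_at(column, ids, 2)
}

fn build_project_clause_at(column: &str, ids: &[i64], start: usize) -> String {
    match ids.len() {
        0 => String::new(), // empty scope = no filter ("all projects")
        1 => format!("AND {column} = ?{start}"),
        n => {
            let placeholders: Vec<String> =
                (start..start + n).map(|i| format!("?{i}")).collect();
            format!("AND {column} IN ({})", placeholders.join(","))
        }
    }
}

fn main() {
    assert_eq!(build_project_clause("i.project_id", &[]), "");
    assert_eq!(build_project_clause("i.project_id", &[1]), "AND i.project_id = ?2");
    assert_eq!(
        build_project_clause("i.project_id", &[1, 2, 3]),
        "AND i.project_id IN (?2,?3,?4)"
    );
    assert_eq!(build_project_clause_at("p.id", &[1, 2], 3), "AND p.id IN (?3,?4)");
    println!("ok");
}
```

The `start` offset is what lets `query_activity` reuse the same helper with `?2` already taken by `since_ms`.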
424 src/cli/commands/me/mod.rs (new file)
@@ -0,0 +1,424 @@
pub mod queries;
pub mod render_human;
pub mod render_robot;
pub mod types;

use std::collections::HashSet;

use rusqlite::Connection;

use crate::Config;
use crate::cli::MeArgs;
use crate::core::db::create_connection;
use crate::core::error::{LoreError, Result};
use crate::core::paths::get_db_path;
use crate::core::project::resolve_project;
use crate::core::time::parse_since;

use self::queries::{query_activity, query_authored_mrs, query_open_issues, query_reviewing_mrs};
use self::types::{AttentionState, MeDashboard, MeSummary};

/// Default activity lookback: 1 day in milliseconds.
const DEFAULT_ACTIVITY_SINCE_DAYS: i64 = 1;
const MS_PER_DAY: i64 = 24 * 60 * 60 * 1000;

/// Resolve the effective username from CLI flag or config.
///
/// Precedence: `--user` flag > `config.gitlab.username` > error (AC-1.2).
pub fn resolve_username<'a>(args: &'a MeArgs, config: &'a Config) -> Result<&'a str> {
    if let Some(ref user) = args.user {
        return Ok(user.as_str());
    }
    if let Some(ref username) = config.gitlab.username {
        return Ok(username.as_str());
    }
    Err(LoreError::ConfigInvalid {
        details: "No GitLab username configured. Set gitlab.username in config.json or pass --user <username>.".to_string(),
    })
}

/// Resolve the project scope for the dashboard.
///
/// Returns a list of project IDs to filter by. An empty vec means "all projects".
///
/// Precedence (AC-8):
/// - `--project` and `--all` both set → error (AC-8.4, clap also enforces this)
/// - `--all` → empty vec (all projects)
/// - `--project` → resolve to single project ID via fuzzy match
/// - config.default_project → resolve that
/// - no default → empty vec (all projects)
pub fn resolve_project_scope(
    conn: &Connection,
    args: &MeArgs,
    config: &Config,
) -> Result<Vec<i64>> {
    if args.all {
        return Ok(Vec::new());
    }
    if let Some(ref project) = args.project {
        let id = resolve_project(conn, project)?;
        return Ok(vec![id]);
    }
    if let Some(ref dp) = config.default_project {
        let id = resolve_project(conn, dp)?;
        return Ok(vec![id]);
    }
    Ok(Vec::new())
}

/// Run the `lore me` personal dashboard command.
///
/// Orchestrates: username resolution → project scope → query execution →
/// summary computation → dashboard assembly → rendering.
pub fn run_me(config: &Config, args: &MeArgs, robot_mode: bool) -> Result<()> {
    let start = std::time::Instant::now();

    // 1. Open DB
    let db_path = get_db_path(config.storage.db_path.as_deref());
    let conn = create_connection(&db_path)?;

    // 2. Check for synced data (AC-10.2)
    let has_data: bool = conn
        .query_row("SELECT EXISTS(SELECT 1 FROM projects LIMIT 1)", [], |row| {
            row.get(0)
        })
        .unwrap_or(false);
    if !has_data {
        return Err(LoreError::NotFound(
            "No synced data found. Run `lore sync` first to fetch your GitLab data.".to_string(),
        ));
    }

    // 3. Resolve username
    let username = resolve_username(args, config)?;

    // 4. Resolve project scope
    let project_ids = resolve_project_scope(&conn, args, config)?;
    let single_project = project_ids.len() == 1;

    // 5. Parse --since (default 1d for activity feed)
    let since_ms = match args.since.as_deref() {
        Some(raw) => parse_since(raw).ok_or_else(|| {
            LoreError::Other(format!(
                "Invalid --since value '{raw}'. Expected: 7d, 2w, 3m, YYYY-MM-DD, or Unix-ms timestamp."
            ))
        })?,
        None => crate::core::time::now_ms() - DEFAULT_ACTIVITY_SINCE_DAYS * MS_PER_DAY,
    };

    // 6. Determine which sections to query
    let show_all = args.show_all_sections();
    let want_issues = show_all || args.issues;
    let want_mrs = show_all || args.mrs;
    let want_activity = show_all || args.activity;

    // 7. Run queries for requested sections
    let open_issues = if want_issues {
        query_open_issues(&conn, username, &project_ids)?
    } else {
        Vec::new()
    };

    let open_mrs_authored = if want_mrs {
        query_authored_mrs(&conn, username, &project_ids)?
    } else {
        Vec::new()
    };

    let reviewing_mrs = if want_mrs {
        query_reviewing_mrs(&conn, username, &project_ids)?
    } else {
        Vec::new()
    };

    let activity = if want_activity {
        query_activity(&conn, username, &project_ids, since_ms)?
    } else {
        Vec::new()
    };

    // 8. Compute summary
    let needs_attention_count = open_issues
        .iter()
        .filter(|i| i.attention_state == AttentionState::NeedsAttention)
        .count()
        + open_mrs_authored
            .iter()
            .filter(|m| m.attention_state == AttentionState::NeedsAttention)
            .count()
        + reviewing_mrs
            .iter()
            .filter(|m| m.attention_state == AttentionState::NeedsAttention)
            .count();

    // Count distinct projects across all items
    let mut project_paths: HashSet<&str> = HashSet::new();
    for i in &open_issues {
        project_paths.insert(&i.project_path);
    }
    for m in &open_mrs_authored {
        project_paths.insert(&m.project_path);
    }
    for m in &reviewing_mrs {
        project_paths.insert(&m.project_path);
    }

    let summary = MeSummary {
        project_count: project_paths.len(),
        open_issue_count: open_issues.len(),
        authored_mr_count: open_mrs_authored.len(),
        reviewing_mr_count: reviewing_mrs.len(),
        needs_attention_count,
    };

    // 9. Assemble dashboard
    let dashboard = MeDashboard {
        username: username.to_string(),
        since_ms: Some(since_ms),
        summary,
        open_issues,
        open_mrs_authored,
        reviewing_mrs,
        activity,
    };

    // 10. Render
    let elapsed_ms = start.elapsed().as_millis() as u64;

    if robot_mode {
        let fields = args.fields.as_deref();
        render_robot::print_me_json(&dashboard, elapsed_ms, fields)?;
    } else if show_all {
        render_human::print_me_dashboard(&dashboard, single_project);
    } else {
        render_human::print_me_dashboard_filtered(
            &dashboard,
            single_project,
            want_issues,
            want_mrs,
            want_activity,
        );
    }

    Ok(())
}

#[cfg(test)]
mod tests {
    use super::*;
    use crate::core::config::{
        EmbeddingConfig, GitLabConfig, LoggingConfig, ProjectConfig, ScoringConfig, StorageConfig,
        SyncConfig,
    };
    use crate::core::db::{create_connection, run_migrations};
    use std::path::Path;

    fn test_config(username: Option<&str>) -> Config {
        Config {
            gitlab: GitLabConfig {
                base_url: "https://gitlab.example.com".to_string(),
                token_env_var: "GITLAB_TOKEN".to_string(),
                token: None,
                username: username.map(String::from),
            },
            projects: vec![ProjectConfig {
                path: "group/project".to_string(),
            }],
            default_project: None,
            sync: SyncConfig::default(),
            storage: StorageConfig::default(),
            embedding: EmbeddingConfig::default(),
            logging: LoggingConfig::default(),
            scoring: ScoringConfig::default(),
        }
    }

    fn test_args(user: Option<&str>) -> MeArgs {
        MeArgs {
            issues: false,
            mrs: false,
            activity: false,
            since: None,
            project: None,
            all: false,
            user: user.map(String::from),
            fields: None,
        }
    }

    #[test]
    fn resolve_username_cli_flag_wins() {
        let config = test_config(Some("config-user"));
        let args = test_args(Some("cli-user"));
        let result = resolve_username(&args, &config).unwrap();
        assert_eq!(result, "cli-user");
    }

    #[test]
    fn resolve_username_falls_back_to_config() {
        let config = test_config(Some("config-user"));
        let args = test_args(None);
        let result = resolve_username(&args, &config).unwrap();
        assert_eq!(result, "config-user");
    }

    #[test]
    fn resolve_username_errors_when_both_absent() {
        let config = test_config(None);
        let args = test_args(None);
        let err = resolve_username(&args, &config).unwrap_err();
        let msg = err.to_string();
        assert!(msg.contains("username"), "unexpected error: {msg}");
        assert!(msg.contains("--user"), "should suggest --user flag: {msg}");
    }

    fn test_config_with_default_project(
        username: Option<&str>,
        default_project: Option<&str>,
    ) -> Config {
        Config {
            gitlab: GitLabConfig {
                base_url: "https://gitlab.example.com".to_string(),
                token_env_var: "GITLAB_TOKEN".to_string(),
                token: None,
                username: username.map(String::from),
            },
            projects: vec![
                ProjectConfig {
                    path: "group/project".to_string(),
                },
                ProjectConfig {
                    path: "other/repo".to_string(),
                },
            ],
            default_project: default_project.map(String::from),
            sync: SyncConfig::default(),
            storage: StorageConfig::default(),
            embedding: EmbeddingConfig::default(),
            logging: LoggingConfig::default(),
            scoring: ScoringConfig::default(),
        }
    }

    fn setup_test_db() -> Connection {
        let conn = create_connection(Path::new(":memory:")).unwrap();
        run_migrations(&conn).unwrap();
        conn.execute(
            "INSERT INTO projects (gitlab_project_id, path_with_namespace, web_url)
             VALUES (1, 'group/project', 'https://gitlab.example.com/group/project')",
            [],
        )
        .unwrap();
        conn.execute(
            "INSERT INTO projects (gitlab_project_id, path_with_namespace, web_url)
             VALUES (2, 'other/repo', 'https://gitlab.example.com/other/repo')",
            [],
        )
        .unwrap();
        conn
    }

    #[test]
    fn resolve_project_scope_all_flag_returns_empty() {
        let conn = setup_test_db();
        let config = test_config(Some("jdoe"));
        let mut args = test_args(None);
        args.all = true;
        let ids = resolve_project_scope(&conn, &args, &config).unwrap();
        assert!(ids.is_empty(), "expected empty for --all, got {ids:?}");
    }

    #[test]
    fn resolve_project_scope_project_flag_resolves() {
        let conn = setup_test_db();
        let config = test_config(Some("jdoe"));
        let mut args = test_args(None);
        args.project = Some("group/project".to_string());
        let ids = resolve_project_scope(&conn, &args, &config).unwrap();
        assert_eq!(ids.len(), 1);
    }

    #[test]
    fn resolve_project_scope_default_project() {
        let conn = setup_test_db();
        let config = test_config_with_default_project(Some("jdoe"), Some("other/repo"));
        let args = test_args(None);
        let ids = resolve_project_scope(&conn, &args, &config).unwrap();
        assert_eq!(ids.len(), 1);
    }

    #[test]
    fn resolve_project_scope_no_default_returns_empty() {
        let conn = setup_test_db();
        let config = test_config(Some("jdoe"));
        let args = test_args(None);
        let ids = resolve_project_scope(&conn, &args, &config).unwrap();
        assert!(ids.is_empty(), "expected empty, got {ids:?}");
    }

    #[test]
    fn resolve_project_scope_project_flag_fuzzy_match() {
        let conn = setup_test_db();
        let config = test_config(Some("jdoe"));
        let mut args = test_args(None);
        args.project = Some("project".to_string());
        let ids = resolve_project_scope(&conn, &args, &config).unwrap();
        assert_eq!(ids.len(), 1);
    }

    #[test]
    fn resolve_project_scope_all_overrides_default_project() {
        let conn = setup_test_db();
        let config = test_config_with_default_project(Some("jdoe"), Some("group/project"));
        let mut args = test_args(None);
        args.all = true;
        let ids = resolve_project_scope(&conn, &args, &config).unwrap();
        assert!(
            ids.is_empty(),
            "expected --all to override default_project, got {ids:?}"
        );
    }

    #[test]
    fn resolve_project_scope_project_flag_overrides_default() {
        let conn = setup_test_db();
        let config = test_config_with_default_project(Some("jdoe"), Some("group/project"));
        let mut args = test_args(None);
        args.project = Some("other/repo".to_string());
        let ids = resolve_project_scope(&conn, &args, &config).unwrap();
        assert_eq!(ids.len(), 1, "expected --project to override default");
        // Verify it resolved the explicit project, not the default
        let resolved_path: String = conn
            .query_row(
                "SELECT path_with_namespace FROM projects WHERE id = ?1",
                rusqlite::params![ids[0]],
                |row| row.get(0),
            )
            .unwrap();
        assert_eq!(resolved_path, "other/repo");
    }

    #[test]
    fn resolve_project_scope_unknown_project_errors() {
        let conn = setup_test_db();
        let config = test_config(Some("jdoe"));
        let mut args = test_args(None);
        args.project = Some("nonexistent/project".to_string());
        let err = resolve_project_scope(&conn, &args, &config).unwrap_err();
        let msg = err.to_string();
        assert!(msg.contains("not found"), "expected not found error: {msg}");
    }

    #[test]
    fn show_all_sections_true_when_no_flags() {
        let args = test_args(None);
        assert!(args.show_all_sections());
    }

    #[test]
    fn show_all_sections_false_with_issues_flag() {
        let mut args = test_args(None);
        args.issues = true;
        assert!(!args.show_all_sections());
    }
}
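The queries in the next file encode the attention-state decision as a SQL CASE over per-item note timestamps (`my_ts`, `others_ts`, `any_ts`). As a readability aid, here is a hypothetical Rust mirror of that CASE; the `classify` helper and this local `AttentionState` enum are illustrative only and do not exist in the diff, which keeps the logic in SQL:

```rust
// Illustrative mirror of the SQL CASE in queries.rs: stale wins first,
// then needs_attention, then awaiting_response, else not_started.
// Timestamps are Unix ms; the 30-day stale threshold matches STALE_THRESHOLD_MS.
const STALE_THRESHOLD_MS: i64 = 30 * 24 * 3600 * 1000;

#[derive(Debug, PartialEq)]
enum AttentionState {
    NeedsAttention,
    NotStarted,
    AwaitingResponse,
    Stale,
}

fn classify(my_ts: Option<i64>, others_ts: Option<i64>, now_ms: i64) -> AttentionState {
    // any_ts = latest human note by anyone (None < Some under Option's Ord)
    let any_ts = my_ts.max(others_ts);
    match any_ts {
        // any_ts IS NOT NULL AND any_ts < now - 30d → stale
        Some(t) if t < now_ms - STALE_THRESHOLD_MS => AttentionState::Stale,
        _ => match (my_ts, others_ts) {
            // others commented and my last note is older (or absent)
            (_, Some(o)) if my_ts.map_or(true, |m| o > m) => AttentionState::NeedsAttention,
            // my note is the latest
            (Some(m), _) if m >= others_ts.unwrap_or(0) => AttentionState::AwaitingResponse,
            _ => AttentionState::NotStarted,
        },
    }
}

fn main() {
    let now = 1_700_000_000_000i64;
    // Others commented after me → needs_attention
    assert_eq!(classify(Some(now - 5_000), Some(now - 1_000), now), AttentionState::NeedsAttention);
    // I commented last → awaiting_response
    assert_eq!(classify(Some(now - 1_000), Some(now - 5_000), now), AttentionState::AwaitingResponse);
    // No human comments at all → not_started
    assert_eq!(classify(None, None, now), AttentionState::NotStarted);
    // Last activity older than 30 days → stale
    assert_eq!(classify(None, Some(now - 40 * 24 * 3600 * 1000), now), AttentionState::Stale);
    println!("ok");
}
```

Doing this in SQL instead keeps the classification and the ORDER BY priority in one round trip, at the cost of repeating the predicates in both clauses.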
574 src/cli/commands/me/queries.rs (new file)
@@ -0,0 +1,574 @@
// ─── Query Functions ────────────────────────────────────────────────────────
//
// SQL queries powering the `lore me` dashboard.
// Each function takes &Connection, username, optional project scope,
// and returns Result<Vec<StructType>>.

use rusqlite::Connection;

use crate::core::error::Result;

use super::types::{ActivityEventType, AttentionState, MeActivityEvent, MeIssue, MeMr};

/// Stale threshold: items with no activity for 30 days are marked "stale".
const STALE_THRESHOLD_MS: i64 = 30 * 24 * 3600 * 1000;

// ─── Open Issues (AC-5.1, Task #7) ─────────────────────────────────────────

/// Query open issues assigned to the user via issue_assignees.
/// Returns issues sorted by attention state priority, then by most recently updated.
/// Attention state is computed inline using CTE-based note timestamp comparison.
pub fn query_open_issues(
    conn: &Connection,
    username: &str,
    project_ids: &[i64],
) -> Result<Vec<MeIssue>> {
    let project_clause = build_project_clause("i.project_id", project_ids);

    let sql = format!(
        "WITH note_ts AS (
            SELECT d.issue_id,
                   MAX(CASE WHEN n.author_username = ?1 THEN n.created_at END) AS my_ts,
                   MAX(CASE WHEN n.author_username != ?1 THEN n.created_at END) AS others_ts,
                   MAX(n.created_at) AS any_ts
            FROM notes n
            JOIN discussions d ON n.discussion_id = d.id
            WHERE n.is_system = 0 AND d.issue_id IS NOT NULL
            GROUP BY d.issue_id
        )
        SELECT i.iid, i.title, p.path_with_namespace, i.status_name, i.updated_at, i.web_url,
               CASE
                   WHEN nt.any_ts IS NOT NULL AND nt.any_ts < (strftime('%s', 'now') * 1000 - {stale_ms})
                       THEN 'stale'
                   WHEN nt.others_ts IS NOT NULL AND (nt.my_ts IS NULL OR nt.others_ts > nt.my_ts)
                       THEN 'needs_attention'
                   WHEN nt.my_ts IS NOT NULL AND nt.my_ts >= COALESCE(nt.others_ts, 0)
                       THEN 'awaiting_response'
                   ELSE 'not_started'
               END AS attention_state
        FROM issues i
        JOIN issue_assignees ia ON ia.issue_id = i.id
        JOIN projects p ON i.project_id = p.id
        LEFT JOIN note_ts nt ON nt.issue_id = i.id
        WHERE ia.username = ?1
          AND i.state = 'opened'
          AND (i.status_name COLLATE NOCASE IN ('In Progress', 'In Review') OR i.status_name IS NULL)
          {project_clause}
        ORDER BY
            CASE
                WHEN nt.others_ts IS NOT NULL AND (nt.my_ts IS NULL OR nt.others_ts > nt.my_ts)
                     AND (nt.any_ts IS NULL OR nt.any_ts >= (strftime('%s', 'now') * 1000 - {stale_ms}))
                    THEN 0
                WHEN nt.any_ts IS NULL AND nt.my_ts IS NULL
                    THEN 1
                WHEN nt.my_ts IS NOT NULL AND nt.my_ts >= COALESCE(nt.others_ts, 0)
                     AND (nt.any_ts IS NULL OR nt.any_ts >= (strftime('%s', 'now') * 1000 - {stale_ms}))
                    THEN 2
                WHEN nt.any_ts IS NOT NULL AND nt.any_ts < (strftime('%s', 'now') * 1000 - {stale_ms})
                    THEN 3
                ELSE 1
            END,
            i.updated_at DESC",
        stale_ms = STALE_THRESHOLD_MS,
    );

    let params = build_params(username, project_ids);
    let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();

    let mut stmt = conn.prepare(&sql)?;
    let rows = stmt.query_map(param_refs.as_slice(), |row| {
        let attention_str: String = row.get(6)?;
        Ok(MeIssue {
            iid: row.get(0)?,
            title: row.get::<_, Option<String>>(1)?.unwrap_or_default(),
            project_path: row.get(2)?,
            status_name: row.get(3)?,
            updated_at: row.get(4)?,
            web_url: row.get(5)?,
            attention_state: parse_attention_state(&attention_str),
            labels: Vec::new(),
        })
    })?;

    let mut issues: Vec<MeIssue> = rows.collect::<std::result::Result<Vec<_>, _>>()?;
    populate_issue_labels(conn, &mut issues)?;
    Ok(issues)
}

// ─── Authored MRs (AC-5.2, Task #8) ────────────────────────────────────────

/// Query open MRs authored by the user.
pub fn query_authored_mrs(
    conn: &Connection,
    username: &str,
    project_ids: &[i64],
) -> Result<Vec<MeMr>> {
    let project_clause = build_project_clause("m.project_id", project_ids);

    let sql = format!(
        "WITH note_ts AS (
            SELECT d.merge_request_id,
                   MAX(CASE WHEN n.author_username = ?1 THEN n.created_at END) AS my_ts,
                   MAX(CASE WHEN n.author_username != ?1 THEN n.created_at END) AS others_ts,
                   MAX(n.created_at) AS any_ts
            FROM notes n
            JOIN discussions d ON n.discussion_id = d.id
            WHERE n.is_system = 0 AND d.merge_request_id IS NOT NULL
            GROUP BY d.merge_request_id
        )
        SELECT m.iid, m.title, p.path_with_namespace, m.draft, m.detailed_merge_status,
               m.updated_at, m.web_url,
               CASE
                   WHEN m.draft = 1 AND NOT EXISTS (
                       SELECT 1 FROM mr_reviewers WHERE merge_request_id = m.id
                   ) THEN 'not_ready'
                   WHEN nt.any_ts IS NOT NULL AND nt.any_ts < (strftime('%s', 'now') * 1000 - {stale_ms})
                       THEN 'stale'
                   WHEN nt.others_ts IS NOT NULL AND (nt.my_ts IS NULL OR nt.others_ts > nt.my_ts)
                       THEN 'needs_attention'
                   WHEN nt.my_ts IS NOT NULL AND nt.my_ts >= COALESCE(nt.others_ts, 0)
                       THEN 'awaiting_response'
                   ELSE 'not_started'
               END AS attention_state
        FROM merge_requests m
        JOIN projects p ON m.project_id = p.id
        LEFT JOIN note_ts nt ON nt.merge_request_id = m.id
        WHERE m.author_username = ?1
          AND m.state = 'opened'
          {project_clause}
        ORDER BY
            CASE
                WHEN m.draft = 1 AND NOT EXISTS (SELECT 1 FROM mr_reviewers WHERE merge_request_id = m.id) THEN 4
                WHEN nt.others_ts IS NOT NULL AND (nt.my_ts IS NULL OR nt.others_ts > nt.my_ts)
                     AND (nt.any_ts IS NULL OR nt.any_ts >= (strftime('%s', 'now') * 1000 - {stale_ms})) THEN 0
                WHEN nt.any_ts IS NULL AND nt.my_ts IS NULL THEN 1
                WHEN nt.my_ts IS NOT NULL AND nt.my_ts >= COALESCE(nt.others_ts, 0)
                     AND (nt.any_ts IS NULL OR nt.any_ts >= (strftime('%s', 'now') * 1000 - {stale_ms})) THEN 2
                WHEN nt.any_ts IS NOT NULL AND nt.any_ts < (strftime('%s', 'now') * 1000 - {stale_ms}) THEN 3
                ELSE 1
            END,
            m.updated_at DESC",
        stale_ms = STALE_THRESHOLD_MS,
    );

    let params = build_params(username, project_ids);
    let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();

    let mut stmt = conn.prepare(&sql)?;
    let rows = stmt.query_map(param_refs.as_slice(), |row| {
        let attention_str: String = row.get(7)?;
        Ok(MeMr {
            iid: row.get(0)?,
            title: row.get::<_, Option<String>>(1)?.unwrap_or_default(),
            project_path: row.get(2)?,
            draft: row.get::<_, i32>(3)? != 0,
            detailed_merge_status: row.get(4)?,
            updated_at: row.get(5)?,
            web_url: row.get(6)?,
            attention_state: parse_attention_state(&attention_str),
            author_username: None,
            labels: Vec::new(),
        })
    })?;

    let mut mrs: Vec<MeMr> = rows.collect::<std::result::Result<Vec<_>, _>>()?;
    populate_mr_labels(conn, &mut mrs)?;
    Ok(mrs)
}

// ─── Reviewing MRs (AC-5.3, Task #9) ───────────────────────────────────────

/// Query open MRs where user is a reviewer.
pub fn query_reviewing_mrs(
    conn: &Connection,
    username: &str,
    project_ids: &[i64],
) -> Result<Vec<MeMr>> {
    let project_clause = build_project_clause("m.project_id", project_ids);

    let sql = format!(
        "WITH note_ts AS (
            SELECT d.merge_request_id,
                   MAX(CASE WHEN n.author_username = ?1 THEN n.created_at END) AS my_ts,
                   MAX(CASE WHEN n.author_username != ?1 THEN n.created_at END) AS others_ts,
                   MAX(n.created_at) AS any_ts
            FROM notes n
            JOIN discussions d ON n.discussion_id = d.id
            WHERE n.is_system = 0 AND d.merge_request_id IS NOT NULL
            GROUP BY d.merge_request_id
        )
        SELECT m.iid, m.title, p.path_with_namespace, m.draft, m.detailed_merge_status,
               m.author_username, m.updated_at, m.web_url,
               CASE
                   -- not_ready is impossible here: JOIN mr_reviewers guarantees a reviewer exists
                   WHEN nt.any_ts IS NOT NULL AND nt.any_ts < (strftime('%s', 'now') * 1000 - {stale_ms})
                       THEN 'stale'
                   WHEN nt.others_ts IS NOT NULL AND (nt.my_ts IS NULL OR nt.others_ts > nt.my_ts)
                       THEN 'needs_attention'
                   WHEN nt.my_ts IS NOT NULL AND nt.my_ts >= COALESCE(nt.others_ts, 0)
                       THEN 'awaiting_response'
                   ELSE 'not_started'
               END AS attention_state
        FROM merge_requests m
        JOIN mr_reviewers r ON r.merge_request_id = m.id
        JOIN projects p ON m.project_id = p.id
        LEFT JOIN note_ts nt ON nt.merge_request_id = m.id
        WHERE r.username = ?1
          AND m.state = 'opened'
          {project_clause}
        ORDER BY
            CASE
                WHEN nt.others_ts IS NOT NULL AND (nt.my_ts IS NULL OR nt.others_ts > nt.my_ts)
                     AND (nt.any_ts IS NULL OR nt.any_ts >= (strftime('%s', 'now') * 1000 - {stale_ms})) THEN 0
                WHEN nt.any_ts IS NULL AND nt.my_ts IS NULL THEN 1
                WHEN nt.my_ts IS NOT NULL AND nt.my_ts >= COALESCE(nt.others_ts, 0)
                     AND (nt.any_ts IS NULL OR nt.any_ts >= (strftime('%s', 'now') * 1000 - {stale_ms})) THEN 2
                WHEN nt.any_ts IS NOT NULL AND nt.any_ts < (strftime('%s', 'now') * 1000 - {stale_ms}) THEN 3
                ELSE 1
            END,
            m.updated_at DESC",
        stale_ms = STALE_THRESHOLD_MS,
    );

    let params = build_params(username, project_ids);
    let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();

    let mut stmt = conn.prepare(&sql)?;
    let rows = stmt.query_map(param_refs.as_slice(), |row| {
        let attention_str: String = row.get(8)?;
        Ok(MeMr {
            iid: row.get(0)?,
            title: row.get::<_, Option<String>>(1)?.unwrap_or_default(),
            project_path: row.get(2)?,
            draft: row.get::<_, i32>(3)? != 0,
            detailed_merge_status: row.get(4)?,
            author_username: row.get(5)?,
            updated_at: row.get(6)?,
            web_url: row.get(7)?,
            attention_state: parse_attention_state(&attention_str),
            labels: Vec::new(),
        })
    })?;

    let mut mrs: Vec<MeMr> = rows.collect::<std::result::Result<Vec<_>, _>>()?;
    populate_mr_labels(conn, &mut mrs)?;
    Ok(mrs)
}

// ─── Activity Feed (AC-5.4, Tasks #11-13) ──────────────────────────────────

/// Query activity events on items currently associated with the user.
/// Combines notes, state events, label events, milestone events, and
/// assignment/reviewer system notes into a unified feed sorted newest-first.
pub fn query_activity(
    conn: &Connection,
    username: &str,
    project_ids: &[i64],
    since_ms: i64,
) -> Result<Vec<MeActivityEvent>> {
    // Build project filter for activity sources.
    // Activity params: ?1=username, ?2=since_ms, ?3+=project_ids
    let project_clause = build_project_clause_at("p.id", project_ids, 3);

    // Build the "my items" subquery fragments for issue/MR association checks.
    // These ensure we only see activity on items CURRENTLY associated with the user
    // AND currently open (AC-3.6). Without the state filter, activity would include
    // events on closed/merged items that don't appear in the dashboard lists.
    let my_issue_check = "EXISTS (
        SELECT 1 FROM issue_assignees ia
        JOIN issues i2 ON ia.issue_id = i2.id
        WHERE ia.issue_id = {entity_issue_id} AND ia.username = ?1 AND i2.state = 'opened'
    )";
    let my_mr_check = "(
        EXISTS (SELECT 1 FROM merge_requests mr2 WHERE mr2.id = {entity_mr_id} AND mr2.author_username = ?1 AND mr2.state = 'opened')
        OR EXISTS (SELECT 1 FROM mr_reviewers rv
                   JOIN merge_requests mr3 ON rv.merge_request_id = mr3.id
                   WHERE rv.merge_request_id = {entity_mr_id} AND rv.username = ?1 AND mr3.state = 'opened')
    )";

    // Source 1: Human comments on my items
    let notes_sql = format!(
        "SELECT n.created_at, 'note',
                CASE WHEN d.issue_id IS NOT NULL THEN 'issue' ELSE 'mr' END,
                COALESCE(i.iid, m.iid),
                p.path_with_namespace,
                n.author_username,
                CASE WHEN n.author_username = ?1 THEN 1 ELSE 0 END,
                SUBSTR(n.body, 1, 200),
                NULL
         FROM notes n
         JOIN discussions d ON n.discussion_id = d.id
         JOIN projects p ON d.project_id = p.id
         LEFT JOIN issues i ON d.issue_id = i.id
         LEFT JOIN merge_requests m ON d.merge_request_id = m.id
         WHERE n.is_system = 0
           AND n.created_at >= ?2
           {project_clause}
           AND (
               (d.issue_id IS NOT NULL AND {issue_check})
               OR (d.merge_request_id IS NOT NULL AND {mr_check})
           )",
        project_clause = project_clause,
        issue_check = my_issue_check.replace("{entity_issue_id}", "d.issue_id"),
        mr_check = my_mr_check.replace("{entity_mr_id}", "d.merge_request_id"),
    );

    // Source 2: State events
    let state_sql = format!(
        "SELECT e.created_at, 'status_change',
                CASE WHEN e.issue_id IS NOT NULL THEN 'issue' ELSE 'mr' END,
                COALESCE(i.iid, m.iid),
                p.path_with_namespace,
                e.actor_username,
                CASE WHEN e.actor_username = ?1 THEN 1 ELSE 0 END,
                e.state,
                NULL
         FROM resource_state_events e
         JOIN projects p ON e.project_id = p.id
         LEFT JOIN issues i ON e.issue_id = i.id
         LEFT JOIN merge_requests m ON e.merge_request_id = m.id
         WHERE e.created_at >= ?2
           {project_clause}
           AND (
               (e.issue_id IS NOT NULL AND {issue_check})
               OR (e.merge_request_id IS NOT NULL AND {mr_check})
           )",
        project_clause = project_clause,
|
||||
issue_check = my_issue_check.replace("{entity_issue_id}", "e.issue_id"),
|
||||
mr_check = my_mr_check.replace("{entity_mr_id}", "e.merge_request_id"),
|
||||
);
|
||||
|
||||
// Source 3: Label events
|
||||
let label_sql = format!(
|
||||
"SELECT e.created_at, 'label_change',
|
||||
CASE WHEN e.issue_id IS NOT NULL THEN 'issue' ELSE 'mr' END,
|
||||
COALESCE(i.iid, m.iid),
|
||||
p.path_with_namespace,
|
||||
e.actor_username,
|
||||
CASE WHEN e.actor_username = ?1 THEN 1 ELSE 0 END,
|
||||
(e.action || ' ' || COALESCE(e.label_name, '(deleted)')),
|
||||
NULL
|
||||
FROM resource_label_events e
|
||||
JOIN projects p ON e.project_id = p.id
|
||||
LEFT JOIN issues i ON e.issue_id = i.id
|
||||
LEFT JOIN merge_requests m ON e.merge_request_id = m.id
|
||||
WHERE e.created_at >= ?2
|
||||
{project_clause}
|
||||
AND (
|
||||
(e.issue_id IS NOT NULL AND {issue_check})
|
||||
OR (e.merge_request_id IS NOT NULL AND {mr_check})
|
||||
)",
|
||||
project_clause = project_clause,
|
||||
issue_check = my_issue_check.replace("{entity_issue_id}", "e.issue_id"),
|
||||
mr_check = my_mr_check.replace("{entity_mr_id}", "e.merge_request_id"),
|
||||
);
|
||||
|
||||
// Source 4: Milestone events
|
||||
let milestone_sql = format!(
|
||||
"SELECT e.created_at, 'milestone_change',
|
||||
CASE WHEN e.issue_id IS NOT NULL THEN 'issue' ELSE 'mr' END,
|
||||
COALESCE(i.iid, m.iid),
|
||||
p.path_with_namespace,
|
||||
e.actor_username,
|
||||
CASE WHEN e.actor_username = ?1 THEN 1 ELSE 0 END,
|
||||
(e.action || ' ' || COALESCE(e.milestone_title, '(deleted)')),
|
||||
NULL
|
||||
FROM resource_milestone_events e
|
||||
JOIN projects p ON e.project_id = p.id
|
||||
LEFT JOIN issues i ON e.issue_id = i.id
|
||||
LEFT JOIN merge_requests m ON e.merge_request_id = m.id
|
||||
WHERE e.created_at >= ?2
|
||||
{project_clause}
|
||||
AND (
|
||||
(e.issue_id IS NOT NULL AND {issue_check})
|
||||
OR (e.merge_request_id IS NOT NULL AND {mr_check})
|
||||
)",
|
||||
project_clause = project_clause,
|
||||
issue_check = my_issue_check.replace("{entity_issue_id}", "e.issue_id"),
|
||||
mr_check = my_mr_check.replace("{entity_mr_id}", "e.merge_request_id"),
|
||||
);
|
||||
|
||||
// Source 5: Assignment/reviewer system notes (AC-12)
|
||||
let assign_sql = format!(
|
||||
"SELECT n.created_at,
|
||||
CASE
|
||||
WHEN LOWER(n.body) LIKE '%assigned to @%' THEN 'assign'
|
||||
WHEN LOWER(n.body) LIKE '%unassigned @%' THEN 'unassign'
|
||||
WHEN LOWER(n.body) LIKE '%requested review from @%' THEN 'review_request'
|
||||
ELSE 'assign'
|
||||
END,
|
||||
CASE WHEN d.issue_id IS NOT NULL THEN 'issue' ELSE 'mr' END,
|
||||
COALESCE(i.iid, m.iid),
|
||||
p.path_with_namespace,
|
||||
n.author_username,
|
||||
CASE WHEN n.author_username = ?1 THEN 1 ELSE 0 END,
|
||||
n.body,
|
||||
NULL
|
||||
FROM notes n
|
||||
JOIN discussions d ON n.discussion_id = d.id
|
||||
JOIN projects p ON d.project_id = p.id
|
||||
LEFT JOIN issues i ON d.issue_id = i.id
|
||||
LEFT JOIN merge_requests m ON d.merge_request_id = m.id
|
||||
WHERE n.is_system = 1
|
||||
AND n.created_at >= ?2
|
||||
{project_clause}
|
||||
AND (
|
||||
LOWER(n.body) LIKE '%assigned to @' || LOWER(?1) || '%'
|
||||
OR LOWER(n.body) LIKE '%unassigned @' || LOWER(?1) || '%'
|
||||
OR LOWER(n.body) LIKE '%requested review from @' || LOWER(?1) || '%'
|
||||
)
|
||||
AND (
|
||||
(d.issue_id IS NOT NULL AND {issue_check})
|
||||
OR (d.merge_request_id IS NOT NULL AND {mr_check})
|
||||
)",
|
||||
project_clause = project_clause,
|
||||
issue_check = my_issue_check.replace("{entity_issue_id}", "d.issue_id"),
|
||||
mr_check = my_mr_check.replace("{entity_mr_id}", "d.merge_request_id"),
|
||||
);
|
||||
|
||||
let full_sql = format!(
|
||||
"{notes_sql}
|
||||
UNION ALL {state_sql}
|
||||
UNION ALL {label_sql}
|
||||
UNION ALL {milestone_sql}
|
||||
UNION ALL {assign_sql}
|
||||
ORDER BY 1 DESC
|
||||
LIMIT 100"
|
||||
);
|
||||
|
||||
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
|
||||
params.push(Box::new(username.to_string()));
|
||||
params.push(Box::new(since_ms));
|
||||
for &pid in project_ids {
|
||||
params.push(Box::new(pid));
|
||||
}
|
||||
let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();
|
||||
|
||||
let mut stmt = conn.prepare(&full_sql)?;
|
||||
let rows = stmt.query_map(param_refs.as_slice(), |row| {
|
||||
let event_type_str: String = row.get(1)?;
|
||||
Ok(MeActivityEvent {
|
||||
timestamp: row.get(0)?,
|
||||
event_type: parse_event_type(&event_type_str),
|
||||
entity_type: row.get(2)?,
|
||||
entity_iid: row.get(3)?,
|
||||
project_path: row.get(4)?,
|
||||
actor: row.get(5)?,
|
||||
is_own: row.get::<_, i32>(6)? != 0,
|
||||
summary: row.get::<_, Option<String>>(7)?.unwrap_or_default(),
|
||||
body_preview: row.get(8)?,
|
||||
})
|
||||
})?;
|
||||
|
||||
let events: Vec<MeActivityEvent> = rows.collect::<std::result::Result<Vec<_>, _>>()?;
|
||||
Ok(events)
|
||||
}
|
||||
|
||||
// ─── Helpers ────────────────────────────────────────────────────────────────
|
||||
|
||||
/// Parse attention state string from SQL CASE result.
|
||||
fn parse_attention_state(s: &str) -> AttentionState {
|
||||
match s {
|
||||
"needs_attention" => AttentionState::NeedsAttention,
|
||||
"not_started" => AttentionState::NotStarted,
|
||||
"awaiting_response" => AttentionState::AwaitingResponse,
|
||||
"stale" => AttentionState::Stale,
|
||||
"not_ready" => AttentionState::NotReady,
|
||||
_ => AttentionState::NotStarted,
|
||||
}
|
||||
}
|
||||
|
||||
/// Parse activity event type string from SQL.
|
||||
fn parse_event_type(s: &str) -> ActivityEventType {
|
||||
match s {
|
||||
"note" => ActivityEventType::Note,
|
||||
"status_change" => ActivityEventType::StatusChange,
|
||||
"label_change" => ActivityEventType::LabelChange,
|
||||
"assign" => ActivityEventType::Assign,
|
||||
"unassign" => ActivityEventType::Unassign,
|
||||
"review_request" => ActivityEventType::ReviewRequest,
|
||||
"milestone_change" => ActivityEventType::MilestoneChange,
|
||||
_ => ActivityEventType::Note,
|
||||
}
|
||||
}
|
||||
|
||||
/// Build a SQL clause for project ID filtering.
|
||||
/// `start_idx` is the 1-based parameter index for the first project ID.
|
||||
/// Returns empty string when no filter is needed (all projects).
|
||||
fn build_project_clause_at(column: &str, project_ids: &[i64], start_idx: usize) -> String {
|
||||
match project_ids.len() {
|
||||
0 => String::new(),
|
||||
1 => format!("AND {column} = ?{start_idx}"),
|
||||
n => {
|
||||
let placeholders: Vec<String> = (0..n).map(|i| format!("?{}", start_idx + i)).collect();
|
||||
format!("AND {column} IN ({})", placeholders.join(","))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Convenience: project clause starting at param index 2 (after username at ?1).
|
||||
fn build_project_clause(column: &str, project_ids: &[i64]) -> String {
|
||||
build_project_clause_at(column, project_ids, 2)
|
||||
}
|
||||
|
||||
/// Build the parameter vector: username first, then project IDs.
|
||||
fn build_params(username: &str, project_ids: &[i64]) -> Vec<Box<dyn rusqlite::types::ToSql>> {
|
||||
let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
|
||||
params.push(Box::new(username.to_string()));
|
||||
for &pid in project_ids {
|
||||
params.push(Box::new(pid));
|
||||
}
|
||||
params
|
||||
}
|
||||
|
||||
/// Populate labels for issues via cached per-item queries.
|
||||
fn populate_issue_labels(conn: &Connection, issues: &mut [MeIssue]) -> Result<()> {
|
||||
if issues.is_empty() {
|
||||
return Ok(());
|
||||
}
|
||||
for issue in issues.iter_mut() {
|
||||
let mut stmt = conn.prepare_cached(
|
||||
"SELECT l.name FROM labels l
|
||||
JOIN issue_labels il ON l.id = il.label_id
|
||||
JOIN issues i ON il.issue_id = i.id
|
||||
JOIN projects p ON i.project_id = p.id
|
||||
WHERE i.iid = ?1 AND p.path_with_namespace = ?2
|
||||
ORDER BY l.name",
|
||||
)?;
|
||||
let labels: Vec<String> = stmt
|
||||
.query_map(rusqlite::params![issue.iid, issue.project_path], |row| {
|
||||
row.get(0)
|
||||
})?
|
||||
.collect::<std::result::Result<Vec<_>, _>>()?;
|
||||
issue.labels = labels;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Populate labels for MRs via cached per-item queries.
|
||||
fn populate_mr_labels(conn: &Connection, mrs: &mut [MeMr]) -> Result<()> {
|
||||
if mrs.is_empty() {
|
||||
return Ok(());
|
||||
}
|
||||
for mr in mrs.iter_mut() {
|
||||
let mut stmt = conn.prepare_cached(
|
||||
"SELECT l.name FROM labels l
|
||||
JOIN mr_labels ml ON l.id = ml.label_id
|
||||
JOIN merge_requests m ON ml.merge_request_id = m.id
|
||||
JOIN projects p ON m.project_id = p.id
|
||||
WHERE m.iid = ?1 AND p.path_with_namespace = ?2
|
||||
ORDER BY l.name",
|
||||
)?;
|
||||
let labels: Vec<String> = stmt
|
||||
.query_map(rusqlite::params![mr.iid, mr.project_path], |row| row.get(0))?
|
||||
.collect::<std::result::Result<Vec<_>, _>>()?;
|
||||
mr.labels = labels;
|
||||
}
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// ─── Tests ──────────────────────────────────────────────────────────────────
|
||||
|
||||
#[cfg(test)]
|
||||
#[path = "me_tests.rs"]
|
||||
mod tests;
|
||||
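The positional-placeholder helper above is self-contained enough to exercise in isolation. A minimal sketch (a standalone copy of `build_project_clause_at`, not an import of the module itself) showing how the 1-based `start_idx` lets the fragment slot in after the `?1` (username) and `?2` (since_ms) bindings:

```rust
// Standalone sketch: emits an "AND <col> = ?N" / "AND <col> IN (?N,...)"
// fragment whose numbered placeholders start at a caller-chosen 1-based index.
fn build_project_clause_at(column: &str, project_ids: &[i64], start_idx: usize) -> String {
    match project_ids.len() {
        0 => String::new(), // no filter: query sees all projects
        1 => format!("AND {column} = ?{start_idx}"),
        n => {
            let placeholders: Vec<String> =
                (0..n).map(|i| format!("?{}", start_idx + i)).collect();
            format!("AND {column} IN ({})", placeholders.join(","))
        }
    }
}

fn main() {
    assert_eq!(build_project_clause_at("p.id", &[], 3), "");
    assert_eq!(build_project_clause_at("p.id", &[7], 3), "AND p.id = ?3");
    assert_eq!(
        build_project_clause_at("p.id", &[7, 8, 9], 3),
        "AND p.id IN (?3,?4,?5)"
    );
}
```

Numbered `?N` placeholders (rather than bare `?`) are what make the fragment safe to append to a query that already binds earlier parameters.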
560 src/cli/commands/me/render_human.rs (new file)
@@ -0,0 +1,560 @@
use crate::cli::render::{self, Align, GlyphMode, Icons, LoreRenderer, StyledCell, Table, Theme};

use super::types::{
    ActivityEventType, AttentionState, MeActivityEvent, MeDashboard, MeIssue, MeMr, MeSummary,
};

// ─── Layout Helpers ─────────────────────────────────────────────────────────

/// Compute the title/summary column width for a section given its fixed overhead.
/// Returns a width clamped to [20, 80].
fn title_width(overhead: usize) -> usize {
    render::terminal_width()
        .saturating_sub(overhead)
        .clamp(20, 80)
}

// ─── Glyph Mode Helper ──────────────────────────────────────────────────────

/// Get the current glyph mode, defaulting to Unicode if renderer not initialized.
fn glyph_mode() -> GlyphMode {
    LoreRenderer::try_get().map_or(GlyphMode::Unicode, LoreRenderer::glyph_mode)
}

// ─── Attention Icons ────────────────────────────────────────────────────────

/// Return the attention icon for the current glyph mode.
fn attention_icon(state: &AttentionState) -> &'static str {
    let mode = glyph_mode();
    match state {
        AttentionState::NeedsAttention => match mode {
            GlyphMode::Nerd => "\u{f0f3}",    // bell
            GlyphMode::Unicode => "\u{25c6}", // diamond
            GlyphMode::Ascii => "[!]",
        },
        AttentionState::NotStarted => match mode {
            GlyphMode::Nerd => "\u{f005}",    // star
            GlyphMode::Unicode => "\u{2605}", // black star
            GlyphMode::Ascii => "[*]",
        },
        AttentionState::AwaitingResponse => match mode {
            GlyphMode::Nerd => "\u{f017}",    // clock
            GlyphMode::Unicode => "\u{25f7}", // white circle with upper right quadrant
            GlyphMode::Ascii => "[~]",
        },
        AttentionState::Stale => match mode {
            GlyphMode::Nerd => "\u{f54c}",    // skull
            GlyphMode::Unicode => "\u{2620}", // skull and crossbones
            GlyphMode::Ascii => "[x]",
        },
        AttentionState::NotReady => match mode {
            GlyphMode::Nerd => "\u{f040}",    // pencil
            GlyphMode::Unicode => "\u{270e}", // lower right pencil
            GlyphMode::Ascii => "[D]",
        },
    }
}

/// Style for an attention state.
fn attention_style(state: &AttentionState) -> lipgloss::Style {
    match state {
        AttentionState::NeedsAttention => Theme::warning(),
        AttentionState::NotStarted => Theme::info(),
        AttentionState::AwaitingResponse | AttentionState::Stale => Theme::dim(),
        AttentionState::NotReady => Theme::state_draft(),
    }
}

/// Render the styled attention icon for an item.
fn styled_attention(state: &AttentionState) -> String {
    let icon = attention_icon(state);
    attention_style(state).render(icon)
}

// ─── Merge Status Labels ────────────────────────────────────────────────────

/// Convert GitLab's `detailed_merge_status` API values to human-friendly labels.
fn humanize_merge_status(status: &str) -> &str {
    match status {
        "not_approved" => "needs approval",
        "requested_changes" => "changes requested",
        "mergeable" => "ready to merge",
        "not_open" => "not open",
        "checking" => "checking",
        "ci_must_pass" => "CI pending",
        "ci_still_running" => "CI running",
        "discussions_not_resolved" => "unresolved threads",
        "draft_status" => "draft",
        "need_rebase" => "needs rebase",
        "conflict" | "has_conflicts" => "has conflicts",
        "blocked_status" => "blocked",
        "approvals_syncing" => "syncing approvals",
        "jira_association_missing" => "missing Jira link",
        "unchecked" => "unchecked",
        other => other,
    }
}

// ─── Event Badges ───────────────────────────────────────────────────────────

/// Return the badge label text for an activity event type.
fn activity_badge_label(event_type: &ActivityEventType) -> String {
    match event_type {
        ActivityEventType::Note => "note",
        ActivityEventType::StatusChange => "status",
        ActivityEventType::LabelChange => "label",
        ActivityEventType::Assign | ActivityEventType::Unassign => "assign",
        ActivityEventType::ReviewRequest => "review",
        ActivityEventType::MilestoneChange => "milestone",
    }
    .to_string()
}

/// Return the style for an activity event badge.
fn activity_badge_style(event_type: &ActivityEventType) -> lipgloss::Style {
    match event_type {
        ActivityEventType::Note => Theme::info(),
        ActivityEventType::StatusChange => Theme::warning(),
        ActivityEventType::LabelChange => Theme::accent(),
        ActivityEventType::Assign
        | ActivityEventType::Unassign
        | ActivityEventType::ReviewRequest => Theme::success(),
        ActivityEventType::MilestoneChange => accent_magenta(),
    }
}

/// Magenta accent for milestone badges.
fn accent_magenta() -> lipgloss::Style {
    if LoreRenderer::try_get().is_some_and(LoreRenderer::colors_enabled) {
        lipgloss::Style::new().foreground("#d946ef")
    } else {
        lipgloss::Style::new()
    }
}

/// Very dark gray for system events (label, assign, status, milestone, review).
fn system_event_style() -> lipgloss::Style {
    if LoreRenderer::try_get().is_some_and(LoreRenderer::colors_enabled) {
        lipgloss::Style::new().foreground("#555555")
    } else {
        lipgloss::Style::new().faint()
    }
}

// ─── Summary Header ─────────────────────────────────────────────────────────

/// Print the summary header with counts and attention legend (Task #14).
pub fn print_summary_header(summary: &MeSummary, username: &str) {
    println!();
    println!(
        "{}",
        Theme::bold().render(&format!(
            "{} {} -- Personal Dashboard",
            Icons::user(),
            username,
        ))
    );
    println!("{}", "\u{2500}".repeat(render::terminal_width()));

    // Counts line
    let needs = if summary.needs_attention_count > 0 {
        Theme::warning().render(&format!("{} need attention", summary.needs_attention_count))
    } else {
        Theme::dim().render("0 need attention")
    };

    println!(
        " {} projects {} issues {} authored MRs {} reviewing MRs {}",
        summary.project_count,
        summary.open_issue_count,
        summary.authored_mr_count,
        summary.reviewing_mr_count,
        needs,
    );

    // Attention legend
    print_attention_legend();
}

/// Print the attention icon legend.
fn print_attention_legend() {
    println!();
    let states = [
        (AttentionState::NeedsAttention, "needs attention"),
        (AttentionState::NotStarted, "not started"),
        (AttentionState::AwaitingResponse, "awaiting response"),
        (AttentionState::Stale, "stale (30d+)"),
        (AttentionState::NotReady, "draft (not ready)"),
    ];

    let legend: Vec<String> = states
        .iter()
        .map(|(state, label)| format!("{} {}", styled_attention(state), Theme::dim().render(label)))
        .collect();

    println!(" {}", legend.join(" "));
}

// ─── Open Issues Section ────────────────────────────────────────────────────

/// Print the open issues section (Task #15).
pub fn print_issues_section(issues: &[MeIssue], single_project: bool) {
    if issues.is_empty() {
        println!("{}", render::section_divider("Open Issues (0)"));
        println!(
            " {}",
            Theme::dim().render("No open issues assigned to you.")
        );
        return;
    }

    println!(
        "{}",
        render::section_divider(&format!("Open Issues ({})", issues.len()))
    );

    for issue in issues {
        let attn = styled_attention(&issue.attention_state);
        let ref_str = format!("#{}", issue.iid);
        let status = issue
            .status_name
            .as_deref()
            .map(|s| format!(" [{s}]"))
            .unwrap_or_default();
        let time = render::format_relative_time(issue.updated_at);

        // Line 1: attention icon, issue ref, title, status, relative time
        println!(
            " {} {} {}{} {}",
            attn,
            Theme::issue_ref().render(&ref_str),
            render::truncate(&issue.title, title_width(43)),
            Theme::dim().render(&status),
            Theme::dim().render(&time),
        );

        // Line 2: project path (suppressed in single-project mode)
        if !single_project {
            println!(" {}", Theme::dim().render(&issue.project_path));
        }
    }
}

// ─── MR Sections ────────────────────────────────────────────────────────────

/// Print the authored MRs section (Task #16).
pub fn print_authored_mrs_section(mrs: &[MeMr], single_project: bool) {
    if mrs.is_empty() {
        println!("{}", render::section_divider("Authored MRs (0)"));
        println!(
            " {}",
            Theme::dim().render("No open MRs authored by you.")
        );
        return;
    }

    println!(
        "{}",
        render::section_divider(&format!("Authored MRs ({})", mrs.len()))
    );

    for mr in mrs {
        let attn = styled_attention(&mr.attention_state);
        let ref_str = format!("!{}", mr.iid);
        let draft = if mr.draft {
            Theme::state_draft().render(" [draft]")
        } else {
            String::new()
        };
        let merge_status = mr
            .detailed_merge_status
            .as_deref()
            .filter(|s| !s.is_empty() && *s != "not_open")
            .map(|s| format!(" ({})", humanize_merge_status(s)))
            .unwrap_or_default();
        let time = render::format_relative_time(mr.updated_at);

        // Line 1: attention, MR ref, title, draft, merge status, time
        println!(
            " {} {} {}{}{} {}",
            attn,
            Theme::mr_ref().render(&ref_str),
            render::truncate(&mr.title, title_width(48)),
            draft,
            Theme::dim().render(&merge_status),
            Theme::dim().render(&time),
        );

        // Line 2: project path
        if !single_project {
            println!(" {}", Theme::dim().render(&mr.project_path));
        }
    }
}

/// Print the reviewing MRs section (Task #16).
pub fn print_reviewing_mrs_section(mrs: &[MeMr], single_project: bool) {
    if mrs.is_empty() {
        println!("{}", render::section_divider("Reviewing MRs (0)"));
        println!(
            " {}",
            Theme::dim().render("No open MRs awaiting your review.")
        );
        return;
    }

    println!(
        "{}",
        render::section_divider(&format!("Reviewing MRs ({})", mrs.len()))
    );

    for mr in mrs {
        let attn = styled_attention(&mr.attention_state);
        let ref_str = format!("!{}", mr.iid);
        let author = mr
            .author_username
            .as_deref()
            .map(|a| format!(" by {}", Theme::username().render(&format!("@{a}"))))
            .unwrap_or_default();
        let draft = if mr.draft {
            Theme::state_draft().render(" [draft]")
        } else {
            String::new()
        };
        let time = render::format_relative_time(mr.updated_at);

        // Line 1: attention, MR ref, title, author, draft, time
        println!(
            " {} {} {}{}{} {}",
            attn,
            Theme::mr_ref().render(&ref_str),
            render::truncate(&mr.title, title_width(50)),
            author,
            draft,
            Theme::dim().render(&time),
        );

        // Line 2: project path
        if !single_project {
            println!(" {}", Theme::dim().render(&mr.project_path));
        }
    }
}

// ─── Activity Feed ──────────────────────────────────────────────────────────

/// Print the activity feed section (Task #17).
pub fn print_activity_section(events: &[MeActivityEvent], single_project: bool) {
    if events.is_empty() {
        println!("{}", render::section_divider("Activity (0)"));
        println!(
            " {}",
            Theme::dim().render("No recent activity on your items.")
        );
        return;
    }

    println!(
        "{}",
        render::section_divider(&format!("Activity ({})", events.len()))
    );

    // Columns: badge | ref | summary | actor | time
    // Table handles alignment, padding, and truncation automatically.
    let summary_max = title_width(46);
    let mut table = Table::new()
        .columns(5)
        .indent(4)
        .align(1, Align::Right)
        .align(4, Align::Right)
        .max_width(2, summary_max);

    for event in events {
        let badge_label = activity_badge_label(&event.event_type);
        let badge_style = activity_badge_style(&event.event_type);

        let ref_text = match event.entity_type.as_str() {
            "issue" => format!("#{}", event.entity_iid),
            "mr" => format!("!{}", event.entity_iid),
            _ => format!("{}:{}", event.entity_type, event.entity_iid),
        };
        let is_system = !matches!(event.event_type, ActivityEventType::Note);
        // System events → very dark gray; own notes → standard dim; else → full color.
        let subdued = is_system || event.is_own;
        let subdued_style = || {
            if is_system {
                system_event_style()
            } else {
                Theme::dim()
            }
        };

        let badge_style_final = if subdued {
            subdued_style()
        } else {
            badge_style
        };

        let ref_style = if subdued {
            Some(subdued_style())
        } else {
            match event.entity_type.as_str() {
                "issue" => Some(Theme::issue_ref()),
                "mr" => Some(Theme::mr_ref()),
                _ => None,
            }
        };

        let clean_summary = event.summary.replace('\n', " ");
        let summary_style: Option<lipgloss::Style> =
            if subdued { Some(subdued_style()) } else { None };

        let actor_text = if event.is_own {
            event
                .actor
                .as_deref()
                .map_or("(you)".to_string(), |a| format!("@{a} (you)"))
        } else {
            event
                .actor
                .as_deref()
                .map_or(String::new(), |a| format!("@{a}"))
        };
        let actor_style = if subdued {
            subdued_style()
        } else {
            Theme::username()
        };

        let time = render::format_relative_time_compact(event.timestamp);

        table.add_row(vec![
            StyledCell::styled(badge_label, badge_style_final),
            match ref_style {
                Some(s) => StyledCell::styled(ref_text, s),
                None => StyledCell::plain(ref_text),
            },
            match summary_style {
                Some(s) => StyledCell::styled(clean_summary, s),
                None => StyledCell::plain(clean_summary),
            },
            StyledCell::styled(actor_text, actor_style),
            StyledCell::styled(time, Theme::dim()),
        ]);
    }

    // Render table rows and interleave per-event detail lines
    let rendered = table.render();
    for (line, event) in rendered.lines().zip(events.iter()) {
        println!("{line}");
        if !single_project {
            println!(" {}", Theme::dim().render(&event.project_path));
        }
        if let Some(preview) = &event.body_preview
            && !preview.is_empty()
        {
            let truncated = render::truncate(preview, 60);
            println!(" {}", Theme::dim().render(&format!("\"{truncated}\"")));
        }
    }
}

/// Format an entity reference (#N for issues, !N for MRs), right-aligned to 6 chars.
#[cfg(test)]
fn format_entity_ref(entity_type: &str, iid: i64) -> String {
    match entity_type {
        "issue" => {
            let s = format!("{:>6}", format!("#{iid}"));
            Theme::issue_ref().render(&s)
        }
        "mr" => {
            let s = format!("{:>6}", format!("!{iid}"));
            Theme::mr_ref().render(&s)
        }
        _ => format!("{:>6}", format!("{entity_type}:{iid}")),
    }
}

// ─── Full Dashboard ─────────────────────────────────────────────────────────

/// Render the complete human-mode dashboard.
pub fn print_me_dashboard(dashboard: &MeDashboard, single_project: bool) {
    print_summary_header(&dashboard.summary, &dashboard.username);
    print_issues_section(&dashboard.open_issues, single_project);
    print_authored_mrs_section(&dashboard.open_mrs_authored, single_project);
    print_reviewing_mrs_section(&dashboard.reviewing_mrs, single_project);
    print_activity_section(&dashboard.activity, single_project);
    println!();
}

/// Render a filtered dashboard (only requested sections).
pub fn print_me_dashboard_filtered(
    dashboard: &MeDashboard,
    single_project: bool,
    show_issues: bool,
    show_mrs: bool,
    show_activity: bool,
) {
    print_summary_header(&dashboard.summary, &dashboard.username);

    if show_issues {
        print_issues_section(&dashboard.open_issues, single_project);
    }
    if show_mrs {
        print_authored_mrs_section(&dashboard.open_mrs_authored, single_project);
        print_reviewing_mrs_section(&dashboard.reviewing_mrs, single_project);
    }
    if show_activity {
        print_activity_section(&dashboard.activity, single_project);
    }
    println!();
}

// ─── Tests ──────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn attention_icon_returns_nonempty_for_all_states() {
        let states = [
            AttentionState::NeedsAttention,
            AttentionState::NotStarted,
            AttentionState::AwaitingResponse,
            AttentionState::Stale,
            AttentionState::NotReady,
        ];
        for state in &states {
            assert!(!attention_icon(state).is_empty(), "empty for {state:?}");
        }
    }

    #[test]
    fn format_entity_ref_issue() {
        let result = format_entity_ref("issue", 42);
        assert!(result.contains("42"), "got: {result}");
    }

    #[test]
    fn format_entity_ref_mr() {
        let result = format_entity_ref("mr", 99);
        assert!(result.contains("99"), "got: {result}");
    }

    #[test]
    fn activity_badge_label_returns_nonempty_for_all_types() {
        let types = [
            ActivityEventType::Note,
            ActivityEventType::StatusChange,
            ActivityEventType::LabelChange,
            ActivityEventType::Assign,
            ActivityEventType::Unassign,
            ActivityEventType::ReviewRequest,
            ActivityEventType::MilestoneChange,
        ];
        for t in &types {
            assert!(!activity_badge_label(t).is_empty(), "empty for {t:?}");
        }
    }
}
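The `humanize_merge_status` mapping above is a pure string-to-string table, so its behavior is easy to demonstrate standalone. A minimal sketch (a reduced copy of the function, covering only a few of the listed GitLab `detailed_merge_status` values plus the pass-through arm):

```rust
// Reduced sketch of humanize_merge_status: known API values map to short
// human labels; anything unrecognized is passed through unchanged so new
// GitLab statuses degrade gracefully instead of being hidden.
fn humanize_merge_status(status: &str) -> &str {
    match status {
        "not_approved" => "needs approval",
        "requested_changes" => "changes requested",
        "mergeable" => "ready to merge",
        "need_rebase" => "needs rebase",
        "conflict" | "has_conflicts" => "has conflicts",
        other => other, // pass-through for unknown values
    }
}

fn main() {
    assert_eq!(humanize_merge_status("mergeable"), "ready to merge");
    assert_eq!(humanize_merge_status("need_rebase"), "needs rebase");
    // Unknown statuses survive verbatim rather than becoming an error:
    assert_eq!(humanize_merge_status("some_future_status"), "some_future_status");
}
```

Returning `&str` borrowed from either the static table or the input avoids any allocation in the hot rendering path.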
334 src/cli/commands/me/render_robot.rs (new file)
@@ -0,0 +1,334 @@
use serde::Serialize;
|
||||
|
||||
use crate::cli::robot::RobotMeta;
|
||||
use crate::core::time::ms_to_iso;
|
||||
|
||||
use super::types::{
|
||||
ActivityEventType, AttentionState, MeActivityEvent, MeDashboard, MeIssue, MeMr, MeSummary,
|
||||
};
|
||||
|
||||
// ─── Robot JSON Output (Task #18) ────────────────────────────────────────────
|
||||
|
||||
/// Print the full me dashboard as robot-mode JSON.
|
||||
pub fn print_me_json(
|
||||
dashboard: &MeDashboard,
|
||||
elapsed_ms: u64,
|
||||
fields: Option<&[String]>,
|
||||
) -> crate::core::error::Result<()> {
|
||||
let envelope = MeJsonEnvelope {
|
||||
ok: true,
|
||||
data: MeDataJson::from_dashboard(dashboard),
|
||||
meta: RobotMeta { elapsed_ms },
|
||||
};
|
||||
|
||||
let mut value = serde_json::to_value(&envelope)
|
||||
.map_err(|e| crate::core::error::LoreError::Other(format!("JSON serialization: {e}")))?;
|
||||
|
||||
// Apply --fields filtering (Task #19)
|
||||
if let Some(f) = fields {
|
||||
let expanded = crate::cli::robot::expand_fields_preset(f, "me_items");
|
||||
// Filter all item arrays
|
||||
for key in &["open_issues", "open_mrs_authored", "reviewing_mrs"] {
|
||||
crate::cli::robot::filter_fields(&mut value, key, &expanded);
|
||||
}
|
||||
|
||||
// Activity gets its own minimal preset
|
||||
let activity_expanded = crate::cli::robot::expand_fields_preset(f, "me_activity");
|
||||
crate::cli::robot::filter_fields(&mut value, "activity", &activity_expanded);
|
||||
}
|
||||
|
||||
let json = serde_json::to_string(&value)
|
||||
.map_err(|e| crate::core::error::LoreError::Other(format!("JSON serialization: {e}")))?;
|
||||
println!("{json}");
|
||||
Ok(())
|
||||
}
|
||||
|
||||
// ─── JSON Envelope ───────────────────────────────────────────────────────────
|
||||
|
||||
#[derive(Serialize)]
|
||||
struct MeJsonEnvelope {
|
||||
ok: bool,
|
||||
data: MeDataJson,
|
||||
meta: RobotMeta,
|
||||
}
|
||||
|
||||
#[derive(Serialize)]
|
||||
struct MeDataJson {
|
||||
username: String,
|
||||
since_iso: Option<String>,
|
||||
summary: SummaryJson,
|
||||
open_issues: Vec<IssueJson>,
|
||||
open_mrs_authored: Vec<MrJson>,
|
||||
reviewing_mrs: Vec<MrJson>,
|
||||
activity: Vec<ActivityJson>,
|
||||
}
|
||||
|
||||
impl MeDataJson {
|
||||
fn from_dashboard(d: &MeDashboard) -> Self {
|
||||
Self {
|
||||
username: d.username.clone(),
|
||||
since_iso: d.since_ms.map(ms_to_iso),
|
||||
summary: SummaryJson::from(&d.summary),
|
||||
open_issues: d.open_issues.iter().map(IssueJson::from).collect(),
|
||||
open_mrs_authored: d.open_mrs_authored.iter().map(MrJson::from).collect(),
|
||||
reviewing_mrs: d.reviewing_mrs.iter().map(MrJson::from).collect(),
|
||||
activity: d.activity.iter().map(ActivityJson::from).collect(),
|
||||
}
|
||||
}
|
||||
}
|
||||

// ─── Summary ─────────────────────────────────────────────────────────────────

#[derive(Serialize)]
struct SummaryJson {
    project_count: usize,
    open_issue_count: usize,
    authored_mr_count: usize,
    reviewing_mr_count: usize,
    needs_attention_count: usize,
}

impl From<&MeSummary> for SummaryJson {
    fn from(s: &MeSummary) -> Self {
        Self {
            project_count: s.project_count,
            open_issue_count: s.open_issue_count,
            authored_mr_count: s.authored_mr_count,
            reviewing_mr_count: s.reviewing_mr_count,
            needs_attention_count: s.needs_attention_count,
        }
    }
}

// ─── Issue ───────────────────────────────────────────────────────────────────

#[derive(Serialize)]
struct IssueJson {
    project: String,
    iid: i64,
    title: String,
    state: String,
    attention_state: String,
    status_name: Option<String>,
    labels: Vec<String>,
    updated_at_iso: String,
    web_url: Option<String>,
}

impl From<&MeIssue> for IssueJson {
    fn from(i: &MeIssue) -> Self {
        Self {
            project: i.project_path.clone(),
            iid: i.iid,
            title: i.title.clone(),
            state: "opened".to_string(),
            attention_state: attention_state_str(&i.attention_state),
            status_name: i.status_name.clone(),
            labels: i.labels.clone(),
            updated_at_iso: ms_to_iso(i.updated_at),
            web_url: i.web_url.clone(),
        }
    }
}

// ─── MR ──────────────────────────────────────────────────────────────────────

#[derive(Serialize)]
struct MrJson {
    project: String,
    iid: i64,
    title: String,
    state: String,
    attention_state: String,
    draft: bool,
    detailed_merge_status: Option<String>,
    author_username: Option<String>,
    labels: Vec<String>,
    updated_at_iso: String,
    web_url: Option<String>,
}

impl From<&MeMr> for MrJson {
    fn from(m: &MeMr) -> Self {
        Self {
            project: m.project_path.clone(),
            iid: m.iid,
            title: m.title.clone(),
            state: "opened".to_string(),
            attention_state: attention_state_str(&m.attention_state),
            draft: m.draft,
            detailed_merge_status: m.detailed_merge_status.clone(),
            author_username: m.author_username.clone(),
            labels: m.labels.clone(),
            updated_at_iso: ms_to_iso(m.updated_at),
            web_url: m.web_url.clone(),
        }
    }
}

// ─── Activity ────────────────────────────────────────────────────────────────

#[derive(Serialize)]
struct ActivityJson {
    timestamp_iso: String,
    event_type: String,
    entity_type: String,
    entity_iid: i64,
    project: String,
    actor: Option<String>,
    is_own: bool,
    summary: String,
    body_preview: Option<String>,
}

impl From<&MeActivityEvent> for ActivityJson {
    fn from(e: &MeActivityEvent) -> Self {
        Self {
            timestamp_iso: ms_to_iso(e.timestamp),
            event_type: event_type_str(&e.event_type),
            entity_type: e.entity_type.clone(),
            entity_iid: e.entity_iid,
            project: e.project_path.clone(),
            actor: e.actor.clone(),
            is_own: e.is_own,
            summary: e.summary.clone(),
            body_preview: e.body_preview.clone(),
        }
    }
}

// ─── Helpers ─────────────────────────────────────────────────────────────────

/// Convert `AttentionState` to its programmatic string representation.
fn attention_state_str(state: &AttentionState) -> String {
    match state {
        AttentionState::NeedsAttention => "needs_attention",
        AttentionState::NotStarted => "not_started",
        AttentionState::AwaitingResponse => "awaiting_response",
        AttentionState::Stale => "stale",
        AttentionState::NotReady => "not_ready",
    }
    .to_string()
}

/// Convert `ActivityEventType` to its programmatic string representation.
fn event_type_str(event_type: &ActivityEventType) -> String {
    match event_type {
        ActivityEventType::Note => "note",
        ActivityEventType::StatusChange => "status_change",
        ActivityEventType::LabelChange => "label_change",
        ActivityEventType::Assign => "assign",
        ActivityEventType::Unassign => "unassign",
        ActivityEventType::ReviewRequest => "review_request",
        ActivityEventType::MilestoneChange => "milestone_change",
    }
    .to_string()
}

// ─── Tests ───────────────────────────────────────────────────────────────────

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn attention_state_str_all_variants() {
        assert_eq!(
            attention_state_str(&AttentionState::NeedsAttention),
            "needs_attention"
        );
        assert_eq!(
            attention_state_str(&AttentionState::NotStarted),
            "not_started"
        );
        assert_eq!(
            attention_state_str(&AttentionState::AwaitingResponse),
            "awaiting_response"
        );
        assert_eq!(attention_state_str(&AttentionState::Stale), "stale");
        assert_eq!(attention_state_str(&AttentionState::NotReady), "not_ready");
    }

    #[test]
    fn event_type_str_all_variants() {
        assert_eq!(event_type_str(&ActivityEventType::Note), "note");
        assert_eq!(
            event_type_str(&ActivityEventType::StatusChange),
            "status_change"
        );
        assert_eq!(
            event_type_str(&ActivityEventType::LabelChange),
            "label_change"
        );
        assert_eq!(event_type_str(&ActivityEventType::Assign), "assign");
        assert_eq!(event_type_str(&ActivityEventType::Unassign), "unassign");
        assert_eq!(
            event_type_str(&ActivityEventType::ReviewRequest),
            "review_request"
        );
        assert_eq!(
            event_type_str(&ActivityEventType::MilestoneChange),
            "milestone_change"
        );
    }

    #[test]
    fn issue_json_from_me_issue() {
        let issue = MeIssue {
            iid: 42,
            title: "Fix auth bug".to_string(),
            project_path: "group/repo".to_string(),
            attention_state: AttentionState::NeedsAttention,
            status_name: Some("In progress".to_string()),
            labels: vec!["bug".to_string()],
            updated_at: 1_700_000_000_000,
            web_url: Some("https://gitlab.com/group/repo/-/issues/42".to_string()),
        };
        let json = IssueJson::from(&issue);
        assert_eq!(json.iid, 42);
        assert_eq!(json.attention_state, "needs_attention");
        assert_eq!(json.state, "opened");
        assert_eq!(json.status_name, Some("In progress".to_string()));
    }

    #[test]
    fn mr_json_from_me_mr() {
        let mr = MeMr {
            iid: 99,
            title: "Add feature".to_string(),
            project_path: "group/repo".to_string(),
            attention_state: AttentionState::AwaitingResponse,
            draft: true,
            detailed_merge_status: Some("mergeable".to_string()),
            author_username: Some("alice".to_string()),
            labels: vec![],
            updated_at: 1_700_000_000_000,
            web_url: None,
        };
        let json = MrJson::from(&mr);
        assert_eq!(json.iid, 99);
        assert_eq!(json.attention_state, "awaiting_response");
        assert!(json.draft);
        assert_eq!(json.author_username, Some("alice".to_string()));
    }

    #[test]
    fn activity_json_from_event() {
        let event = MeActivityEvent {
            timestamp: 1_700_000_000_000,
            event_type: ActivityEventType::Note,
            entity_type: "issue".to_string(),
            entity_iid: 42,
            project_path: "group/repo".to_string(),
            actor: Some("bob".to_string()),
            is_own: false,
            summary: "Added a comment".to_string(),
            body_preview: Some("This looks good".to_string()),
        };
        let json = ActivityJson::from(&event);
        assert_eq!(json.event_type, "note");
        assert_eq!(json.entity_iid, 42);
        assert!(!json.is_own);
        assert_eq!(json.body_preview, Some("This looks good".to_string()));
    }
}
98 src/cli/commands/me/types.rs (new file)
@@ -0,0 +1,98 @@
// ─── Dashboard Types ─────────────────────────────────────────────────────────
//
// Data structs for the `lore me` personal dashboard.
// These are populated by query functions and consumed by renderers.

/// Attention state for a work item (AC-4.4).
/// Ordered by display priority (first = most urgent).
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
pub enum AttentionState {
    /// Others commented after me (or I never engaged but others have)
    NeedsAttention = 0,
    /// Zero non-system notes from anyone
    NotStarted = 1,
    /// My latest note >= all others' latest notes
    AwaitingResponse = 2,
    /// Latest note from anyone is older than 30 days
    Stale = 3,
    /// MR-only: draft with no reviewers
    NotReady = 4,
}

/// Activity event type for the feed (AC-5.4, AC-6.4).
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum ActivityEventType {
    /// Human comment (non-system note)
    Note,
    /// State change (opened/closed/reopened/merged)
    StatusChange,
    /// Label added or removed
    LabelChange,
    /// Assignment event
    Assign,
    /// Unassignment event
    Unassign,
    /// Review request
    ReviewRequest,
    /// Milestone change
    MilestoneChange,
}

/// Summary counts for the dashboard header (AC-5.5).
pub struct MeSummary {
    pub project_count: usize,
    pub open_issue_count: usize,
    pub authored_mr_count: usize,
    pub reviewing_mr_count: usize,
    pub needs_attention_count: usize,
}

/// An open issue assigned to the user (AC-5.1).
pub struct MeIssue {
    pub iid: i64,
    pub title: String,
    pub project_path: String,
    pub attention_state: AttentionState,
    pub status_name: Option<String>,
    pub labels: Vec<String>,
    pub updated_at: i64,
    pub web_url: Option<String>,
}

/// An open MR authored by or reviewing for the user (AC-5.2, AC-5.3).
pub struct MeMr {
    pub iid: i64,
    pub title: String,
    pub project_path: String,
    pub attention_state: AttentionState,
    pub draft: bool,
    pub detailed_merge_status: Option<String>,
    pub author_username: Option<String>,
    pub labels: Vec<String>,
    pub updated_at: i64,
    pub web_url: Option<String>,
}

/// An activity event in the feed (AC-5.4).
pub struct MeActivityEvent {
    pub timestamp: i64,
    pub event_type: ActivityEventType,
    pub entity_type: String,
    pub entity_iid: i64,
    pub project_path: String,
    pub actor: Option<String>,
    pub is_own: bool,
    pub summary: String,
    pub body_preview: Option<String>,
}

/// The complete dashboard result.
pub struct MeDashboard {
    pub username: String,
    pub since_ms: Option<i64>,
    pub summary: MeSummary,
    pub open_issues: Vec<MeIssue>,
    pub open_mrs_authored: Vec<MeMr>,
    pub reviewing_mrs: Vec<MeMr>,
    pub activity: Vec<MeActivityEvent>,
}
@@ -1,5 +1,7 @@
 pub mod auth_test;
 pub mod count;
+#[cfg(unix)]
+pub mod cron;
 pub mod doctor;
 pub mod drift;
 pub mod embed;
@@ -8,11 +10,13 @@ pub mod generate_docs;
 pub mod ingest;
 pub mod init;
 pub mod list;
+pub mod me;
 pub mod search;
 pub mod show;
 pub mod stats;
 pub mod sync;
 pub mod sync_status;
+pub mod sync_surgical;
 pub mod timeline;
 pub mod trace;
 pub mod who;
@@ -22,6 +26,12 @@ pub use count::{
     print_count, print_count_json, print_event_count, print_event_count_json, run_count,
     run_count_events,
 };
+#[cfg(unix)]
+pub use cron::{
+    print_cron_install, print_cron_install_json, print_cron_status, print_cron_status_json,
+    print_cron_uninstall, print_cron_uninstall_json, run_cron_install, run_cron_status,
+    run_cron_uninstall,
+};
 pub use doctor::{DoctorChecks, print_doctor_results, run_doctor};
 pub use drift::{DriftResponse, print_drift_human, print_drift_json, run_drift};
 pub use embed::{print_embed, print_embed_json, run_embed};
@@ -31,13 +41,13 @@ pub use ingest::{
     DryRunPreview, IngestDisplay, print_dry_run_preview, print_dry_run_preview_json,
     print_ingest_summary, print_ingest_summary_json, run_ingest, run_ingest_dry_run,
 };
-pub use init::{InitInputs, InitOptions, InitResult, run_init};
+pub use init::{InitInputs, InitOptions, InitResult, run_init, run_token_set, run_token_show};
 pub use list::{
     ListFilters, MrListFilters, NoteListFilters, open_issue_in_browser, open_mr_in_browser,
     print_list_issues, print_list_issues_json, print_list_mrs, print_list_mrs_json,
-    print_list_notes, print_list_notes_csv, print_list_notes_json, print_list_notes_jsonl,
-    query_notes, run_list_issues, run_list_mrs,
+    print_list_notes, print_list_notes_json, query_notes, run_list_issues, run_list_mrs,
 };
+pub use me::run_me;
 pub use search::{
     SearchCliFilters, SearchResponse, print_search_results, print_search_results_json, run_search,
 };
@@ -48,6 +58,7 @@ pub use show::{
 pub use stats::{print_stats, print_stats_json, run_stats};
 pub use sync::{SyncOptions, SyncResult, print_sync, print_sync_json, run_sync};
 pub use sync_status::{print_sync_status, print_sync_status_json, run_sync_status};
+pub use sync_surgical::run_sync_surgical;
 pub use timeline::{TimelineParams, print_timeline, print_timeline_json_with_meta, run_timeline};
 pub use trace::{parse_trace_path, print_trace, print_trace_json};
 pub use who::{WhoRun, print_who_human, print_who_json, run_who};
@@ -439,5 +439,8 @@ pub fn print_search_results_json(
         let expanded = crate::cli::robot::expand_fields_preset(f, "search");
         crate::cli::robot::filter_fields(&mut value, "results", &expanded);
     }
-    println!("{}", serde_json::to_string(&value).unwrap());
+    match serde_json::to_string(&value) {
+        Ok(json) => println!("{json}"),
+        Err(e) => eprintln!("Error serializing to JSON: {e}"),
+    }
 }
@@ -585,5 +585,8 @@ pub fn print_stats_json(result: &StatsResult, elapsed_ms: u64) {
         },
         meta: RobotMeta { elapsed_ms },
     };
-    println!("{}", serde_json::to_string(&output).unwrap());
+    match serde_json::to_string(&output) {
+        Ok(json) => println!("{json}"),
+        Err(e) => eprintln!("Error serializing to JSON: {e}"),
+    }
 }
@@ -16,6 +16,7 @@ use super::ingest::{
     DryRunPreview, IngestDisplay, ProjectStatusEnrichment, ProjectSummary, run_ingest,
     run_ingest_dry_run,
 };
+use super::sync_surgical::run_sync_surgical;

 #[derive(Debug, Default)]
 pub struct SyncOptions {
@@ -26,6 +27,35 @@ pub struct SyncOptions {
     pub no_events: bool,
     pub robot_mode: bool,
     pub dry_run: bool,
+    pub issue_iids: Vec<u64>,
+    pub mr_iids: Vec<u64>,
+    pub project: Option<String>,
+    pub preflight_only: bool,
 }
+
+impl SyncOptions {
+    pub const MAX_SURGICAL_TARGETS: usize = 100;
+
+    pub fn is_surgical(&self) -> bool {
+        !self.issue_iids.is_empty() || !self.mr_iids.is_empty()
+    }
+}
+
+#[derive(Debug, Default, Serialize)]
+pub struct SurgicalIids {
+    pub issues: Vec<u64>,
+    pub merge_requests: Vec<u64>,
+}
+
+#[derive(Debug, Serialize)]
+pub struct EntitySyncResult {
+    pub entity_type: String,
+    pub iid: u64,
+    pub outcome: String,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub error: Option<String>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub toctou_reason: Option<String>,
+}

 #[derive(Debug, Default, Serialize)]
@@ -45,19 +75,23 @@ pub struct SyncResult {
     pub embedding_failed: usize,
     pub status_enrichment_errors: usize,
     pub statuses_enriched: usize,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub surgical_mode: Option<bool>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub surgical_iids: Option<SurgicalIids>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub entity_results: Option<Vec<EntitySyncResult>>,
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub preflight_only: Option<bool>,
     #[serde(skip)]
     pub issue_projects: Vec<ProjectSummary>,
     #[serde(skip)]
     pub mr_projects: Vec<ProjectSummary>,
 }

 /// Apply semantic color to a stage-completion icon glyph.
+/// Alias for [`Theme::color_icon`] to keep call sites concise.
 fn color_icon(icon: &str, has_errors: bool) -> String {
-    if has_errors {
-        Theme::warning().render(icon)
-    } else {
-        Theme::success().render(icon)
-    }
+    Theme::color_icon(icon, has_errors)
 }

 pub async fn run_sync(
@@ -66,6 +100,11 @@ pub async fn run_sync(
     run_id: Option<&str>,
     signal: &ShutdownSignal,
 ) -> Result<SyncResult> {
+    // Surgical dispatch: if any IIDs specified, route to surgical pipeline
+    if options.is_surgical() {
+        return run_sync_surgical(config, options, run_id, signal).await;
+    }
+
     let generated_id;
     let run_id = match run_id {
         Some(id) => id,
@@ -746,7 +785,10 @@ pub fn print_sync_json(result: &SyncResult, elapsed_ms: u64, metrics: Option<&Me
             stages,
         },
     };
-    println!("{}", serde_json::to_string(&output).unwrap());
+    match serde_json::to_string(&output) {
+        Ok(json) => println!("{json}"),
+        Err(e) => eprintln!("Error serializing to JSON: {e}"),
+    }
 }

 #[derive(Debug, Default, Serialize)]
@@ -880,13 +922,32 @@ pub fn print_sync_dry_run_json(result: &SyncDryRunResult) {
         },
     };

-    println!("{}", serde_json::to_string(&output).unwrap());
+    match serde_json::to_string(&output) {
+        Ok(json) => println!("{json}"),
+        Err(e) => eprintln!("Error serializing to JSON: {e}"),
+    }
 }

 #[cfg(test)]
 mod tests {
     use super::*;

+    fn default_options() -> SyncOptions {
+        SyncOptions {
+            full: false,
+            force: false,
+            no_embed: false,
+            no_docs: false,
+            no_events: false,
+            robot_mode: false,
+            dry_run: false,
+            issue_iids: vec![],
+            mr_iids: vec![],
+            project: None,
+            preflight_only: false,
+        }
+    }
+
     #[test]
     fn append_failures_skips_zeroes() {
         let mut summary = "base".to_string();
@@ -1029,4 +1090,112 @@ mod tests {
         assert!(rows[0].contains("0 statuses updated"));
         assert!(rows[0].contains("skipped (disabled)"));
     }
+
+    #[test]
+    fn is_surgical_with_issues() {
+        let opts = SyncOptions {
+            issue_iids: vec![1],
+            ..default_options()
+        };
+        assert!(opts.is_surgical());
+    }
+
+    #[test]
+    fn is_surgical_with_mrs() {
+        let opts = SyncOptions {
+            mr_iids: vec![10],
+            ..default_options()
+        };
+        assert!(opts.is_surgical());
+    }
+
+    #[test]
+    fn is_surgical_empty() {
+        let opts = default_options();
+        assert!(!opts.is_surgical());
+    }
+
+    #[test]
+    fn max_surgical_targets_is_100() {
+        assert_eq!(SyncOptions::MAX_SURGICAL_TARGETS, 100);
+    }
+
+    #[test]
+    fn sync_result_default_omits_surgical_fields() {
+        let result = SyncResult::default();
+        let json = serde_json::to_value(&result).unwrap();
+        assert!(json.get("surgical_mode").is_none());
+        assert!(json.get("surgical_iids").is_none());
+        assert!(json.get("entity_results").is_none());
+        assert!(json.get("preflight_only").is_none());
+    }
+
+    #[test]
+    fn sync_result_with_surgical_fields_serializes_correctly() {
+        let result = SyncResult {
+            surgical_mode: Some(true),
+            surgical_iids: Some(SurgicalIids {
+                issues: vec![7, 42],
+                merge_requests: vec![10],
+            }),
+            entity_results: Some(vec![
+                EntitySyncResult {
+                    entity_type: "issue".to_string(),
+                    iid: 7,
+                    outcome: "synced".to_string(),
+                    error: None,
+                    toctou_reason: None,
+                },
+                EntitySyncResult {
+                    entity_type: "issue".to_string(),
+                    iid: 42,
+                    outcome: "skipped_toctou".to_string(),
+                    error: None,
+                    toctou_reason: Some("updated_at changed".to_string()),
+                },
+            ]),
+            preflight_only: Some(false),
+            ..SyncResult::default()
+        };
+        let json = serde_json::to_value(&result).unwrap();
+        assert_eq!(json["surgical_mode"], true);
+        assert_eq!(json["surgical_iids"]["issues"], serde_json::json!([7, 42]));
+        assert_eq!(json["entity_results"].as_array().unwrap().len(), 2);
+        assert_eq!(json["entity_results"][1]["outcome"], "skipped_toctou");
+        assert_eq!(json["preflight_only"], false);
+    }
+
+    #[test]
+    fn entity_sync_result_omits_none_fields() {
+        let entity = EntitySyncResult {
+            entity_type: "merge_request".to_string(),
+            iid: 10,
+            outcome: "synced".to_string(),
+            error: None,
+            toctou_reason: None,
+        };
+        let json = serde_json::to_value(&entity).unwrap();
+        assert!(json.get("error").is_none());
+        assert!(json.get("toctou_reason").is_none());
+        assert!(json.get("entity_type").is_some());
+    }
+
+    #[test]
+    fn is_surgical_with_both_issues_and_mrs() {
+        let opts = SyncOptions {
+            issue_iids: vec![1, 2],
+            mr_iids: vec![10],
+            ..default_options()
+        };
+        assert!(opts.is_surgical());
+    }
+
+    #[test]
+    fn is_not_surgical_with_only_project() {
+        let opts = SyncOptions {
+            project: Some("group/repo".to_string()),
+            ..default_options()
+        };
+        assert!(!opts.is_surgical());
+    }
 }
@@ -268,7 +268,10 @@ pub fn print_sync_status_json(result: &SyncStatusResult, elapsed_ms: u64) {
         meta: RobotMeta { elapsed_ms },
     };

-    println!("{}", serde_json::to_string(&output).unwrap());
+    match serde_json::to_string(&output) {
+        Ok(json) => println!("{json}"),
+        Err(e) => eprintln!("Error serializing to JSON: {e}"),
+    }
 }

 pub fn print_sync_status(result: &SyncStatusResult) {
711 src/cli/commands/sync_surgical.rs (new file)
@@ -0,0 +1,711 @@
use std::time::Instant;

use tracing::{Instrument, debug, info, warn};

use crate::Config;
use crate::cli::commands::sync::{EntitySyncResult, SurgicalIids, SyncOptions, SyncResult};
use crate::cli::progress::{format_stage_line, stage_spinner_v2};
use crate::cli::render::{Icons, Theme};
use crate::core::db::{LATEST_SCHEMA_VERSION, create_connection, get_schema_version};
use crate::core::error::{LoreError, Result};
use crate::core::lock::{AppLock, LockOptions};
use crate::core::paths::get_db_path;
use crate::core::project::resolve_project;
use crate::core::shutdown::ShutdownSignal;
use crate::core::sync_run::SyncRunRecorder;
use crate::documents::{SourceType, regenerate_dirty_documents_for_sources};
use crate::embedding::ollama::{OllamaClient, OllamaConfig};
use crate::embedding::pipeline::{DEFAULT_EMBED_CONCURRENCY, embed_documents_by_ids};
use crate::gitlab::GitLabClient;
use crate::ingestion::surgical::{
    fetch_dependents_for_issue, fetch_dependents_for_mr, ingest_issue_by_iid, ingest_mr_by_iid,
    preflight_fetch,
};

pub async fn run_sync_surgical(
    config: &Config,
    options: SyncOptions,
    run_id: Option<&str>,
    signal: &ShutdownSignal,
) -> Result<SyncResult> {
    // ── Generate run_id ──
    let generated_id;
    let run_id = match run_id {
        Some(id) => id,
        None => {
            generated_id = uuid::Uuid::new_v4().simple().to_string();
            &generated_id[..8]
        }
    };
    let span = tracing::info_span!("surgical_sync", %run_id);

    async move {
        let pipeline_start = Instant::now();
        let mut result = SyncResult {
            run_id: run_id.to_string(),
            surgical_mode: Some(true),
            surgical_iids: Some(SurgicalIids {
                issues: options.issue_iids.clone(),
                merge_requests: options.mr_iids.clone(),
            }),
            ..SyncResult::default()
        };
        let mut entity_results: Vec<EntitySyncResult> = Vec::new();

        // ── Resolve project ──
        let project_str = options.project.as_deref().ok_or_else(|| {
            LoreError::Other(
                "Surgical sync requires --project. Specify the project path.".to_string(),
            )
        })?;

        let db_path = get_db_path(config.storage.db_path.as_deref());
        let conn = create_connection(&db_path)?;

        let schema_version = get_schema_version(&conn);
        if schema_version < LATEST_SCHEMA_VERSION {
            return Err(LoreError::MigrationFailed {
                version: schema_version,
                message: format!(
                    "Database is at schema version {schema_version} but {LATEST_SCHEMA_VERSION} is required. \
                     Run 'lore sync' first to apply migrations."
                ),
                source: None,
            });
        }

        let project_id = resolve_project(&conn, project_str)?;

        let gitlab_project_id: i64 = conn.query_row(
            "SELECT gitlab_project_id FROM projects WHERE id = ?1",
            [project_id],
            |row| row.get(0),
        )?;

        debug!(
            project_str,
            project_id,
            gitlab_project_id,
            "Resolved project for surgical sync"
        );

        // ── Start recorder ──
        let recorder_conn = create_connection(&db_path)?;
        let recorder = SyncRunRecorder::start(&recorder_conn, "surgical-sync", run_id)?;

        let iids_json = serde_json::to_string(&SurgicalIids {
            issues: options.issue_iids.clone(),
            merge_requests: options.mr_iids.clone(),
        })
        .unwrap_or_else(|_| "{}".to_string());

        recorder.set_surgical_metadata(&recorder_conn, "surgical", "preflight", &iids_json)?;

        // Wrap recorder in Option for consuming terminal methods
        let mut recorder = Some(recorder);

        // ── Build GitLab client ──
        let token = config.gitlab.resolve_token()?;
        let client = GitLabClient::new(
            &config.gitlab.base_url,
            &token,
            Some(config.sync.requests_per_second),
        );

        // ── Build targets list ──
        let mut targets: Vec<(String, i64)> = Vec::new();
        for iid in &options.issue_iids {
            targets.push(("issue".to_string(), *iid as i64));
        }
        for iid in &options.mr_iids {
            targets.push(("merge_request".to_string(), *iid as i64));
        }

        // ── Stage: Preflight ──
        let stage_start = Instant::now();
        let spinner =
            stage_spinner_v2(Icons::sync(), "Preflight", "fetching...", options.robot_mode);

        info!(targets = targets.len(), "Preflight: fetching entities from GitLab");
        let preflight = preflight_fetch(&client, gitlab_project_id, &targets).await;

        // Record preflight failures
        for failure in &preflight.failures {
            let is_not_found = matches!(&failure.error, LoreError::GitLabNotFound { .. });
            entity_results.push(EntitySyncResult {
                entity_type: failure.entity_type.clone(),
                iid: failure.iid as u64,
                outcome: if is_not_found {
                    "not_found".to_string()
                } else {
                    "preflight_failed".to_string()
                },
                error: Some(failure.error.to_string()),
                toctou_reason: None,
            });
            if let Some(ref rec) = recorder {
                let _ = rec.record_entity_result(&recorder_conn, &failure.entity_type, "warning");
            }
        }

        let preflight_summary = format!(
            "{} issues, {} MRs fetched ({} failed)",
            preflight.issues.len(),
            preflight.merge_requests.len(),
            preflight.failures.len()
        );
        let preflight_icon = color_icon(
            if preflight.failures.is_empty() {
                Icons::success()
            } else {
                Icons::warning()
            },
            !preflight.failures.is_empty(),
        );
        emit_stage_line(
            &spinner,
            &preflight_icon,
            "Preflight",
            &preflight_summary,
            stage_start.elapsed(),
            options.robot_mode,
        );

        // ── Preflight-only early return ──
        if options.preflight_only {
            result.preflight_only = Some(true);
            result.entity_results = Some(entity_results);
            if let Some(rec) = recorder.take() {
                rec.succeed(&recorder_conn, &[], 0, preflight.failures.len())?;
            }
            return Ok(result);
        }

        // ── Check cancellation ──
        if signal.is_cancelled() {
            if let Some(rec) = recorder.take() {
                rec.cancel(&recorder_conn, "cancelled before ingest")?;
            }
            result.entity_results = Some(entity_results);
            return Ok(result);
        }

        // ── Acquire lock ──
        let lock_conn = create_connection(&db_path)?;
        let mut lock = AppLock::new(
            lock_conn,
            LockOptions {
                name: "sync".to_string(),
                stale_lock_minutes: config.sync.stale_lock_minutes,
                heartbeat_interval_seconds: config.sync.heartbeat_interval_seconds,
            },
        );
        lock.acquire(options.force)?;

        // Wrap the rest in a closure-like block to ensure lock release on error
        let pipeline_result = run_pipeline_stages(
            &conn,
            &recorder_conn,
            config,
            &client,
            &options,
            &preflight,
            project_id,
            gitlab_project_id,
            &mut entity_results,
            &mut result,
            recorder.as_ref(),
            signal,
        )
        .await;

        match pipeline_result {
            Ok(()) => {
                // ── Finalize: succeed ──
                if let Some(ref rec) = recorder {
                    let _ = rec.update_phase(&recorder_conn, "finalize");
                }
                let total_items = result.issues_updated
                    + result.mrs_updated
                    + result.documents_regenerated
                    + result.documents_embedded;
                let total_errors = result.documents_errored
                    + result.embedding_failed
                    + entity_results
                        .iter()
                        .filter(|e| e.outcome != "synced" && e.outcome != "skipped_stale")
                        .count();
                if let Some(rec) = recorder.take() {
                    rec.succeed(&recorder_conn, &[], total_items, total_errors)?;
                }
            }
            Err(ref e) => {
                if let Some(rec) = recorder.take() {
                    let _ = rec.fail(&recorder_conn, &e.to_string(), None);
                }
            }
        }

        lock.release();

        // Propagate error after cleanup
        pipeline_result?;

        result.entity_results = Some(entity_results);

        let elapsed = pipeline_start.elapsed();
        debug!(
            elapsed_ms = elapsed.as_millis(),
            issues = result.issues_updated,
            mrs = result.mrs_updated,
            docs = result.documents_regenerated,
            embedded = result.documents_embedded,
            "Surgical sync pipeline complete"
        );

        Ok(result)
    }
    .instrument(span)
    .await
}

#[allow(clippy::too_many_arguments)]
async fn run_pipeline_stages(
    conn: &rusqlite::Connection,
    recorder_conn: &rusqlite::Connection,
    config: &Config,
    client: &GitLabClient,
    options: &SyncOptions,
    preflight: &crate::ingestion::surgical::PreflightResult,
||||
project_id: i64,
|
||||
gitlab_project_id: i64,
|
||||
entity_results: &mut Vec<EntitySyncResult>,
|
||||
result: &mut SyncResult,
|
||||
recorder: Option<&SyncRunRecorder>,
|
||||
signal: &ShutdownSignal,
|
||||
) -> Result<()> {
|
||||
let mut all_dirty_source_keys: Vec<(SourceType, i64)> = Vec::new();
|
||||
|
||||
// ── Stage: Ingest ──
|
||||
if let Some(rec) = recorder {
|
||||
rec.update_phase(recorder_conn, "ingest")?;
|
||||
}
|
||||
|
||||
let stage_start = Instant::now();
|
||||
let spinner = stage_spinner_v2(Icons::sync(), "Ingest", "processing...", options.robot_mode);
|
||||
|
||||
// Ingest issues
|
||||
for issue in &preflight.issues {
|
||||
match ingest_issue_by_iid(conn, config, project_id, issue) {
|
||||
Ok(ingest_result) => {
|
||||
if ingest_result.skipped_stale {
|
||||
entity_results.push(EntitySyncResult {
|
||||
entity_type: "issue".to_string(),
|
||||
iid: issue.iid as u64,
|
||||
outcome: "skipped_stale".to_string(),
|
||||
error: None,
|
||||
toctou_reason: Some("updated_at not newer than DB".to_string()),
|
||||
});
|
||||
if let Some(rec) = recorder {
|
||||
let _ = rec.record_entity_result(recorder_conn, "issue", "skipped_stale");
|
||||
}
|
||||
} else {
|
||||
result.issues_updated += 1;
|
||||
all_dirty_source_keys.extend(ingest_result.dirty_source_keys);
|
||||
entity_results.push(EntitySyncResult {
|
||||
entity_type: "issue".to_string(),
|
||||
iid: issue.iid as u64,
|
||||
outcome: "synced".to_string(),
|
||||
error: None,
|
||||
toctou_reason: None,
|
||||
});
|
||||
if let Some(rec) = recorder {
|
||||
let _ = rec.record_entity_result(recorder_conn, "issue", "ingested");
|
||||
}
|
||||
}
|
||||
}
|
||||
Err(e) => {
|
||||
warn!(iid = issue.iid, error = %e, "Failed to ingest issue");
|
||||
entity_results.push(EntitySyncResult {
|
||||
entity_type: "issue".to_string(),
|
||||
iid: issue.iid as u64,
|
||||
outcome: "error".to_string(),
|
||||
error: Some(e.to_string()),
|
||||
toctou_reason: None,
|
||||
});
|
||||
if let Some(rec) = recorder {
|
||||
let _ = rec.record_entity_result(recorder_conn, "issue", "warning");
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Ingest MRs
|
||||
for mr in &preflight.merge_requests {
|
||||
match ingest_mr_by_iid(conn, config, project_id, mr) {
|
||||
Ok(ingest_result) => {
|
||||
if ingest_result.skipped_stale {
|
||||
entity_results.push(EntitySyncResult {
|
||||
entity_type: "merge_request".to_string(),
|
||||
iid: mr.iid as u64,
|
||||
outcome: "skipped_stale".to_string(),
|
||||
error: None,
|
||||
toctou_reason: Some("updated_at not newer than DB".to_string()),
|
||||
});
|
||||
if let Some(rec) = recorder {
|
||||
let _ = rec.record_entity_result(recorder_conn, "mr", "skipped_stale");
|
||||
}
|
||||
} else {
|
||||
result.mrs_updated += 1;
|
||||
all_dirty_source_keys.extend(ingest_result.dirty_source_keys);
|
||||
entity_results.push(EntitySyncResult {
|
||||
entity_type: "merge_request".to_string(),
|
||||
iid: mr.iid as u64,
|
||||
outcome: "synced".to_string(),
|
||||
error: None,
|
||||
toctou_reason: None,
|
||||
});
|
||||
if let Some(rec) = recorder {
|
||||
let _ = rec.record_entity_result(recorder_conn, "mr", "ingested");
|
||||
}
|
||||
}
|
||||
}
|
||||
Err(e) => {
|
||||
warn!(iid = mr.iid, error = %e, "Failed to ingest MR");
|
||||
entity_results.push(EntitySyncResult {
|
||||
entity_type: "merge_request".to_string(),
|
||||
iid: mr.iid as u64,
|
||||
outcome: "error".to_string(),
|
||||
error: Some(e.to_string()),
|
||||
toctou_reason: None,
|
||||
});
|
||||
if let Some(rec) = recorder {
|
||||
let _ = rec.record_entity_result(recorder_conn, "mr", "warning");
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let ingest_summary = format!(
|
||||
"{} issues, {} MRs ingested",
|
||||
result.issues_updated, result.mrs_updated
|
||||
);
|
||||
let ingest_icon = color_icon(Icons::success(), false);
|
||||
emit_stage_line(
|
||||
&spinner,
|
||||
&ingest_icon,
|
||||
"Ingest",
|
||||
&ingest_summary,
|
||||
stage_start.elapsed(),
|
||||
options.robot_mode,
|
||||
);
|
||||
|
||||
// ── Check cancellation ──
|
||||
if signal.is_cancelled() {
|
||||
debug!("Shutdown requested after ingest stage");
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
// ── Stage: Dependents ──
|
||||
if let Some(rec) = recorder {
|
||||
rec.update_phase(recorder_conn, "dependents")?;
|
||||
}
|
||||
|
||||
let stage_start = Instant::now();
|
||||
let spinner = stage_spinner_v2(
|
||||
Icons::sync(),
|
||||
"Dependents",
|
||||
"fetching...",
|
||||
options.robot_mode,
|
||||
);
|
||||
|
||||
let mut total_discussions: usize = 0;
|
||||
let mut total_events: usize = 0;
|
||||
|
||||
// Fetch dependents for successfully ingested issues
|
||||
for issue in &preflight.issues {
|
||||
// Only fetch dependents for entities that were actually ingested
|
||||
let was_ingested = entity_results.iter().any(|e| {
|
||||
e.entity_type == "issue" && e.iid == issue.iid as u64 && e.outcome == "synced"
|
||||
});
|
||||
if !was_ingested {
|
||||
continue;
|
||||
}
|
||||
|
||||
let local_id: i64 = match conn.query_row(
|
||||
"SELECT id FROM issues WHERE project_id = ?1 AND iid = ?2",
|
||||
(project_id, issue.iid),
|
||||
|row| row.get(0),
|
||||
) {
|
||||
Ok(id) => id,
|
||||
Err(e) => {
|
||||
warn!(iid = issue.iid, error = %e, "Could not find local issue ID for dependents");
|
||||
continue;
|
||||
}
|
||||
};
|
||||
|
||||
match fetch_dependents_for_issue(
|
||||
client,
|
||||
conn,
|
||||
project_id,
|
||||
gitlab_project_id,
|
||||
issue.iid,
|
||||
local_id,
|
||||
config,
|
||||
)
|
||||
.await
|
||||
{
|
||||
Ok(dep_result) => {
|
||||
total_discussions += dep_result.discussions_fetched;
|
||||
total_events += dep_result.resource_events_fetched;
|
||||
result.discussions_fetched += dep_result.discussions_fetched;
|
||||
result.resource_events_fetched += dep_result.resource_events_fetched;
|
||||
}
|
||||
Err(e) => {
|
||||
warn!(iid = issue.iid, error = %e, "Failed to fetch dependents for issue");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Fetch dependents for successfully ingested MRs
|
||||
for mr in &preflight.merge_requests {
|
||||
let was_ingested = entity_results.iter().any(|e| {
|
||||
e.entity_type == "merge_request" && e.iid == mr.iid as u64 && e.outcome == "synced"
|
||||
});
|
||||
if !was_ingested {
|
||||
continue;
|
||||
}
|
||||
|
||||
let local_id: i64 = match conn.query_row(
|
||||
"SELECT id FROM merge_requests WHERE project_id = ?1 AND iid = ?2",
|
||||
(project_id, mr.iid),
|
||||
|row| row.get(0),
|
||||
) {
|
||||
Ok(id) => id,
|
||||
Err(e) => {
|
||||
warn!(iid = mr.iid, error = %e, "Could not find local MR ID for dependents");
|
||||
continue;
|
||||
}
|
||||
};
|
||||
|
||||
match fetch_dependents_for_mr(
|
||||
client,
|
||||
conn,
|
||||
project_id,
|
||||
gitlab_project_id,
|
||||
mr.iid,
|
||||
local_id,
|
||||
config,
|
||||
)
|
||||
.await
|
||||
{
|
||||
Ok(dep_result) => {
|
||||
total_discussions += dep_result.discussions_fetched;
|
||||
total_events += dep_result.resource_events_fetched;
|
||||
result.discussions_fetched += dep_result.discussions_fetched;
|
||||
result.resource_events_fetched += dep_result.resource_events_fetched;
|
||||
result.mr_diffs_fetched += dep_result.file_changes_stored;
|
||||
}
|
||||
Err(e) => {
|
||||
warn!(iid = mr.iid, error = %e, "Failed to fetch dependents for MR");
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
let dep_summary = format!("{} discussions, {} events", total_discussions, total_events);
|
||||
let dep_icon = color_icon(Icons::success(), false);
|
||||
emit_stage_line(
|
||||
&spinner,
|
||||
&dep_icon,
|
||||
"Dependents",
|
||||
&dep_summary,
|
||||
stage_start.elapsed(),
|
||||
options.robot_mode,
|
||||
);
|
||||
|
||||
// ── Check cancellation ──
|
||||
if signal.is_cancelled() {
|
||||
debug!("Shutdown requested after dependents stage");
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
// ── Stage: Docs ──
|
||||
if !options.no_docs && !all_dirty_source_keys.is_empty() {
|
||||
if let Some(rec) = recorder {
|
||||
rec.update_phase(recorder_conn, "docs")?;
|
||||
}
|
||||
|
||||
let stage_start = Instant::now();
|
||||
let spinner =
|
||||
stage_spinner_v2(Icons::sync(), "Docs", "regenerating...", options.robot_mode);
|
||||
|
||||
let docs_result = regenerate_dirty_documents_for_sources(conn, &all_dirty_source_keys)?;
|
||||
result.documents_regenerated = docs_result.regenerated;
|
||||
result.documents_errored = docs_result.errored;
|
||||
|
||||
for _ in 0..docs_result.regenerated {
|
||||
if let Some(rec) = recorder {
|
||||
let _ = rec.record_entity_result(recorder_conn, "doc", "regenerated");
|
||||
}
|
||||
}
|
||||
|
||||
let docs_summary = format!("{} documents regenerated", result.documents_regenerated);
|
||||
let docs_icon = color_icon(
|
||||
if docs_result.errored > 0 {
|
||||
Icons::warning()
|
||||
} else {
|
||||
Icons::success()
|
||||
},
|
||||
docs_result.errored > 0,
|
||||
);
|
||||
emit_stage_line(
|
||||
&spinner,
|
||||
&docs_icon,
|
||||
"Docs",
|
||||
&docs_summary,
|
||||
stage_start.elapsed(),
|
||||
options.robot_mode,
|
||||
);
|
||||
|
||||
// ── Check cancellation ──
|
||||
if signal.is_cancelled() {
|
||||
debug!("Shutdown requested after docs stage");
|
||||
return Ok(());
|
||||
}
|
||||
|
||||
// ── Stage: Embed ──
|
||||
if !options.no_embed && !docs_result.document_ids.is_empty() {
|
||||
if let Some(rec) = recorder {
|
||||
rec.update_phase(recorder_conn, "embed")?;
|
||||
}
|
||||
|
||||
let stage_start = Instant::now();
|
||||
let spinner =
|
||||
stage_spinner_v2(Icons::sync(), "Embed", "embedding...", options.robot_mode);
|
||||
|
||||
let ollama_config = OllamaConfig {
|
||||
base_url: config.embedding.base_url.clone(),
|
||||
model: config.embedding.model.clone(),
|
||||
..OllamaConfig::default()
|
||||
};
|
||||
let ollama_client = OllamaClient::new(ollama_config);
|
||||
|
||||
let model_name = &config.embedding.model;
|
||||
let concurrency = if config.embedding.concurrency > 0 {
|
||||
config.embedding.concurrency as usize
|
||||
} else {
|
||||
DEFAULT_EMBED_CONCURRENCY
|
||||
};
|
||||
|
||||
match embed_documents_by_ids(
|
||||
conn,
|
||||
&ollama_client,
|
||||
model_name,
|
||||
concurrency,
|
||||
&docs_result.document_ids,
|
||||
signal,
|
||||
)
|
||||
.await
|
||||
{
|
||||
Ok(embed_result) => {
|
||||
result.documents_embedded = embed_result.docs_embedded;
|
||||
result.embedding_failed = embed_result.failed;
|
||||
|
||||
for _ in 0..embed_result.docs_embedded {
|
||||
if let Some(rec) = recorder {
|
||||
let _ = rec.record_entity_result(recorder_conn, "doc", "embedded");
|
||||
}
|
||||
}
|
||||
|
||||
let embed_summary = format!("{} chunks embedded", embed_result.chunks_embedded);
|
||||
let embed_icon = color_icon(
|
||||
if embed_result.failed > 0 {
|
||||
Icons::warning()
|
||||
} else {
|
||||
Icons::success()
|
||||
},
|
||||
embed_result.failed > 0,
|
||||
);
|
||||
emit_stage_line(
|
||||
&spinner,
|
||||
&embed_icon,
|
||||
"Embed",
|
||||
&embed_summary,
|
||||
stage_start.elapsed(),
|
||||
options.robot_mode,
|
||||
);
|
||||
}
|
||||
Err(e) => {
|
||||
let warn_summary = format!("skipped ({})", e);
|
||||
let warn_icon = color_icon(Icons::warning(), true);
|
||||
emit_stage_line(
|
||||
&spinner,
|
||||
&warn_icon,
|
||||
"Embed",
|
||||
&warn_summary,
|
||||
stage_start.elapsed(),
|
||||
options.robot_mode,
|
||||
);
|
||||
warn!(error = %e, "Embedding stage failed (Ollama may be unavailable), continuing");
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Ok(())
|
||||
}
|
||||
|
||||
/// Alias for [`Theme::color_icon`] to keep call sites concise.
|
||||
fn color_icon(icon: &str, has_errors: bool) -> String {
|
||||
Theme::color_icon(icon, has_errors)
|
||||
}
|
||||
|
||||
fn emit_stage_line(
|
||||
pb: &indicatif::ProgressBar,
|
||||
icon: &str,
|
||||
label: &str,
|
||||
summary: &str,
|
||||
elapsed: std::time::Duration,
|
||||
robot_mode: bool,
|
||||
) {
|
||||
pb.finish_and_clear();
|
||||
if !robot_mode {
|
||||
crate::cli::progress::multi().suspend(|| {
|
||||
println!("{}", format_stage_line(icon, label, summary, elapsed));
|
||||
});
|
||||
}
|
||||
}
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use crate::cli::commands::sync::SyncOptions;
|
||||
|
||||
#[test]
|
||||
fn sync_options_is_surgical_required() {
|
||||
let opts = SyncOptions {
|
||||
issue_iids: vec![1],
|
||||
project: Some("group/repo".to_string()),
|
||||
..SyncOptions::default()
|
||||
};
|
||||
assert!(opts.is_surgical());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn sync_options_surgical_with_mrs() {
|
||||
let opts = SyncOptions {
|
||||
mr_iids: vec![10, 20],
|
||||
project: Some("group/repo".to_string()),
|
||||
..SyncOptions::default()
|
||||
};
|
||||
assert!(opts.is_surgical());
|
||||
}
|
||||
|
||||
#[test]
|
||||
fn sync_options_not_surgical_without_iids() {
|
||||
let opts = SyncOptions {
|
||||
project: Some("group/repo".to_string()),
|
||||
..SyncOptions::default()
|
||||
};
|
||||
assert!(!opts.is_surgical());
|
||||
}
|
||||
}
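The lock lifecycle in the sync function above is the subtle part: `run_pipeline_stages` returns into a variable rather than via `?`, so `lock.release()` runs on both the success and error paths before the error is re-raised. A minimal standalone sketch of that pattern, using a hypothetical `Lock` stand-in rather than the real `AppLock`:

```rust
// Hypothetical stand-in for AppLock; tracks whether release() ran.
#[derive(Default)]
struct Lock {
    released: bool,
}

impl Lock {
    fn acquire() -> Self {
        Lock::default()
    }
    fn release(&mut self) {
        self.released = true;
    }
}

// Mirrors the shape of the sync function: hold the stage result in a
// variable, clean up unconditionally, then propagate with `?`.
fn run_with_lock(fail: bool) -> Result<(), String> {
    let mut lock = Lock::acquire();

    // Stand-in for run_pipeline_stages(...).await
    let pipeline_result: Result<(), String> =
        if fail { Err("stage failed".to_string()) } else { Ok(()) };

    lock.release(); // always runs, even when pipeline_result is Err
    debug_assert!(lock.released);

    pipeline_result?; // the error propagates only after cleanup
    Ok(())
}

fn main() {
    assert!(run_with_lock(false).is_ok());
    assert_eq!(run_with_lock(true), Err("stage failed".to_string()));
}
```

An RAII guard (`Drop` impl) would achieve the same guarantee implicitly; the explicit variant above keeps the release point visible in the pipeline.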
|
||||
@@ -374,7 +374,10 @@ pub fn print_timeline_json_with_meta(
         let expanded = crate::cli::robot::expand_fields_preset(f, "timeline");
         crate::cli::robot::filter_fields(&mut value, "events", &expanded);
     }
-    println!("{}", serde_json::to_string(&value).unwrap());
+    match serde_json::to_string(&value) {
+        Ok(json) => println!("{json}"),
+        Err(e) => eprintln!("Error serializing to JSON: {e}"),
+    }
 }
 
 #[derive(Serialize)]
@@ -50,17 +50,23 @@ pub fn print_trace(result: &TraceResult) {
         );
     }
 
+    // Show searched paths when there are renames but no chains
     if result.trace_chains.is_empty() {
         println!(
             "\n {} {}",
             Icons::info(),
             Theme::dim().render("No trace chains found for this file.")
         );
-        println!(
-            " {}",
-            Theme::dim()
-                .render("Hint: Run 'lore sync' to fetch MR file changes and cross-references.")
-        );
+        if !result.renames_followed && result.resolved_paths.len() == 1 {
+            println!(
+                " {} Searched: {}",
+                Icons::info(),
+                Theme::dim().render(&result.resolved_paths[0])
+            );
+        }
+        for hint in &result.hints {
+            println!(" {} {}", Icons::info(), Theme::dim().render(hint));
+        }
         println!();
         return;
     }
@@ -195,6 +201,7 @@ pub fn print_trace_json(result: &TraceResult, elapsed_ms: u64, line_requested: O
         "elapsed_ms": elapsed_ms,
         "total_chains": result.total_chains,
         "renames_followed": result.renames_followed,
+        "hints": if result.hints.is_empty() { None } else { Some(&result.hints) },
     }
 });
 
File diff suppressed because it is too large

src/cli/commands/who/active.rs (new file, 299 lines)
@@ -0,0 +1,299 @@
use rusqlite::Connection;

use crate::cli::render::{self, Theme};
use crate::core::error::Result;
use crate::core::time::ms_to_iso;

use super::types::*;

pub(super) fn query_active(
    conn: &Connection,
    project_id: Option<i64>,
    since_ms: i64,
    limit: usize,
    include_closed: bool,
) -> Result<ActiveResult> {
    let limit_plus_one = (limit + 1) as i64;

    // State filter for open-entities-only (default behavior)
    let state_joins = if include_closed {
        ""
    } else {
        " LEFT JOIN issues i ON d.issue_id = i.id
          LEFT JOIN merge_requests m ON d.merge_request_id = m.id"
    };
    let state_filter = if include_closed {
        ""
    } else {
        " AND (i.id IS NULL OR i.state = 'opened')
          AND (m.id IS NULL OR m.state = 'opened')"
    };

    // Total unresolved count -- conditionally built
    let total_sql_global = format!(
        "SELECT COUNT(*) FROM discussions d
         {state_joins}
         WHERE d.resolvable = 1 AND d.resolved = 0
           AND d.last_note_at >= ?1
           {state_filter}"
    );
    let total_sql_scoped = format!(
        "SELECT COUNT(*) FROM discussions d
         {state_joins}
         WHERE d.resolvable = 1 AND d.resolved = 0
           AND d.last_note_at >= ?1
           AND d.project_id = ?2
           {state_filter}"
    );

    let total_unresolved_in_window: u32 = match project_id {
        None => conn.query_row(&total_sql_global, rusqlite::params![since_ms], |row| {
            row.get(0)
        })?,
        Some(pid) => {
            conn.query_row(&total_sql_scoped, rusqlite::params![since_ms, pid], |row| {
                row.get(0)
            })?
        }
    };

    // Active discussions with context -- conditionally built SQL
    let sql_global = format!(
        "
        WITH picked AS (
            SELECT d.id, d.noteable_type, d.issue_id, d.merge_request_id,
                   d.project_id, d.last_note_at
            FROM discussions d
            {state_joins}
            WHERE d.resolvable = 1 AND d.resolved = 0
              AND d.last_note_at >= ?1
              {state_filter}
            ORDER BY d.last_note_at DESC
            LIMIT ?2
        ),
        note_counts AS (
            SELECT
                n.discussion_id,
                COUNT(*) AS note_count
            FROM notes n
            JOIN picked p ON p.id = n.discussion_id
            WHERE n.is_system = 0
            GROUP BY n.discussion_id
        ),
        participants AS (
            SELECT
                x.discussion_id,
                GROUP_CONCAT(x.author_username, X'1F') AS participants
            FROM (
                SELECT DISTINCT n.discussion_id, n.author_username
                FROM notes n
                JOIN picked p ON p.id = n.discussion_id
                WHERE n.is_system = 0 AND n.author_username IS NOT NULL
            ) x
            GROUP BY x.discussion_id
        )
        SELECT
            p.id AS discussion_id,
            p.noteable_type,
            COALESCE(i.iid, m.iid) AS entity_iid,
            COALESCE(i.title, m.title) AS entity_title,
            proj.path_with_namespace,
            p.last_note_at,
            COALESCE(nc.note_count, 0) AS note_count,
            COALESCE(pa.participants, '') AS participants
        FROM picked p
        JOIN projects proj ON p.project_id = proj.id
        LEFT JOIN issues i ON p.issue_id = i.id
        LEFT JOIN merge_requests m ON p.merge_request_id = m.id
        LEFT JOIN note_counts nc ON nc.discussion_id = p.id
        LEFT JOIN participants pa ON pa.discussion_id = p.id
        ORDER BY p.last_note_at DESC
        "
    );

    let sql_scoped = format!(
        "
        WITH picked AS (
            SELECT d.id, d.noteable_type, d.issue_id, d.merge_request_id,
                   d.project_id, d.last_note_at
            FROM discussions d
            {state_joins}
            WHERE d.resolvable = 1 AND d.resolved = 0
              AND d.last_note_at >= ?1
              AND d.project_id = ?2
              {state_filter}
            ORDER BY d.last_note_at DESC
            LIMIT ?3
        ),
        note_counts AS (
            SELECT
                n.discussion_id,
                COUNT(*) AS note_count
            FROM notes n
            JOIN picked p ON p.id = n.discussion_id
            WHERE n.is_system = 0
            GROUP BY n.discussion_id
        ),
        participants AS (
            SELECT
                x.discussion_id,
                GROUP_CONCAT(x.author_username, X'1F') AS participants
            FROM (
                SELECT DISTINCT n.discussion_id, n.author_username
                FROM notes n
                JOIN picked p ON p.id = n.discussion_id
                WHERE n.is_system = 0 AND n.author_username IS NOT NULL
            ) x
            GROUP BY x.discussion_id
        )
        SELECT
            p.id AS discussion_id,
            p.noteable_type,
            COALESCE(i.iid, m.iid) AS entity_iid,
            COALESCE(i.title, m.title) AS entity_title,
            proj.path_with_namespace,
            p.last_note_at,
            COALESCE(nc.note_count, 0) AS note_count,
            COALESCE(pa.participants, '') AS participants
        FROM picked p
        JOIN projects proj ON p.project_id = proj.id
        LEFT JOIN issues i ON p.issue_id = i.id
        LEFT JOIN merge_requests m ON p.merge_request_id = m.id
        LEFT JOIN note_counts nc ON nc.discussion_id = p.id
        LEFT JOIN participants pa ON pa.discussion_id = p.id
        ORDER BY p.last_note_at DESC
        "
    );

    // Row-mapping closure shared between both variants
    let map_row = |row: &rusqlite::Row| -> rusqlite::Result<ActiveDiscussion> {
        let noteable_type: String = row.get(1)?;
        let entity_type = if noteable_type == "MergeRequest" {
            "MR"
        } else {
            "Issue"
        };
        let participants_csv: Option<String> = row.get(7)?;
        // Sort participants for deterministic output -- GROUP_CONCAT order is undefined
        let mut participants: Vec<String> = participants_csv
            .as_deref()
            .filter(|s| !s.is_empty())
            .map(|csv| csv.split('\x1F').map(String::from).collect())
            .unwrap_or_default();
        participants.sort();

        const MAX_PARTICIPANTS: usize = 50;
        let participants_total = participants.len() as u32;
        let participants_truncated = participants.len() > MAX_PARTICIPANTS;
        if participants_truncated {
            participants.truncate(MAX_PARTICIPANTS);
        }

        Ok(ActiveDiscussion {
            discussion_id: row.get(0)?,
            entity_type: entity_type.to_string(),
            entity_iid: row.get(2)?,
            entity_title: row.get(3)?,
            project_path: row.get(4)?,
            last_note_at: row.get(5)?,
            note_count: row.get(6)?,
            participants,
            participants_total,
            participants_truncated,
        })
    };

    // Select variant first, then prepare exactly one statement
    let discussions: Vec<ActiveDiscussion> = match project_id {
        None => {
            let mut stmt = conn.prepare_cached(&sql_global)?;
            stmt.query_map(rusqlite::params![since_ms, limit_plus_one], &map_row)?
                .collect::<std::result::Result<Vec<_>, _>>()?
        }
        Some(pid) => {
            let mut stmt = conn.prepare_cached(&sql_scoped)?;
            stmt.query_map(rusqlite::params![since_ms, pid, limit_plus_one], &map_row)?
                .collect::<std::result::Result<Vec<_>, _>>()?
        }
    };

    let truncated = discussions.len() > limit;
    let discussions: Vec<ActiveDiscussion> = discussions.into_iter().take(limit).collect();

    Ok(ActiveResult {
        discussions,
        total_unresolved_in_window,
        truncated,
    })
}

pub(super) fn print_active_human(r: &ActiveResult, project_path: Option<&str>) {
    println!();
    println!(
        "{}",
        Theme::bold().render(&format!(
            "Active Discussions ({} unresolved in window)",
            r.total_unresolved_in_window
        ))
    );
    println!("{}", "\u{2500}".repeat(60));
    super::print_scope_hint(project_path);
    println!();

    if r.discussions.is_empty() {
        println!(
            " {}",
            Theme::dim().render("No active unresolved discussions in this time window.")
        );
        println!();
        return;
    }

    for disc in &r.discussions {
        let prefix = if disc.entity_type == "MR" { "!" } else { "#" };
        let participants_str = disc
            .participants
            .iter()
            .map(|p| format!("@{p}"))
            .collect::<Vec<_>>()
            .join(", ");

        println!(
            " {} {} {} {} notes {}",
            Theme::info().render(&format!("{prefix}{}", disc.entity_iid)),
            render::truncate(&disc.entity_title, 40),
            Theme::dim().render(&render::format_relative_time(disc.last_note_at)),
            disc.note_count,
            Theme::dim().render(&disc.project_path),
        );
        if !participants_str.is_empty() {
            println!(" {}", Theme::dim().render(&participants_str));
        }
    }
    if r.truncated {
        println!(
            " {}",
            Theme::dim().render("(showing first -n; rerun with a higher --limit)")
        );
    }
    println!();
}

pub(super) fn active_to_json(r: &ActiveResult) -> serde_json::Value {
    serde_json::json!({
        "total_unresolved_in_window": r.total_unresolved_in_window,
        "truncated": r.truncated,
        "discussions": r.discussions.iter().map(|d| serde_json::json!({
            "discussion_id": d.discussion_id,
            "entity_type": d.entity_type,
            "entity_iid": d.entity_iid,
            "entity_title": d.entity_title,
            "project_path": d.project_path,
            "last_note_at": ms_to_iso(d.last_note_at),
            "note_count": d.note_count,
            "participants": d.participants,
            "participants_total": d.participants_total,
            "participants_truncated": d.participants_truncated,
        })).collect::<Vec<_>>(),
    })
}
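The participant handling in `query_active` above can be sketched standalone: SQL joins usernames with the ASCII unit separator (`X'1F'`, a byte that cannot appear in usernames), and the Rust side splits, sorts for determinism (GROUP_CONCAT order is undefined), and truncates with an explicit flag. `parse_participants` below is a hypothetical extraction of the `map_row` logic, not a function in the codebase:

```rust
// Hypothetical extraction of the participant-parsing logic from map_row.
const MAX_PARTICIPANTS: usize = 50;

fn parse_participants(csv: Option<&str>) -> (Vec<String>, u32, bool) {
    // Split on the unit separator used by GROUP_CONCAT(..., X'1F').
    let mut participants: Vec<String> = csv
        .filter(|s| !s.is_empty())
        .map(|s| s.split('\x1F').map(String::from).collect())
        .unwrap_or_default();
    // Sort for deterministic output; SQL concatenation order is undefined.
    participants.sort();

    let total = participants.len() as u32;
    let truncated = participants.len() > MAX_PARTICIPANTS;
    if truncated {
        participants.truncate(MAX_PARTICIPANTS);
    }
    (participants, total, truncated)
}

fn main() {
    let (p, total, truncated) = parse_participants(Some("zoe\x1Falice\x1Fbob"));
    assert_eq!(p, vec!["alice", "bob", "zoe"]);
    assert_eq!(total, 3);
    assert!(!truncated);

    // NULL and empty concatenations both yield an empty list.
    assert_eq!(parse_participants(None).0, Vec::<String>::new());
    assert_eq!(parse_participants(Some("")).0, Vec::<String>::new());
}
```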
src/cli/commands/who/expert.rs (new file, 839 lines)
@@ -0,0 +1,839 @@
use std::collections::{HashMap, HashSet};

use rusqlite::Connection;

use crate::cli::render::{self, Icons, Theme};
use crate::core::config::ScoringConfig;
use crate::core::error::Result;
use crate::core::path_resolver::{PathQuery, build_path_query};
use crate::core::time::ms_to_iso;

use super::types::*;

pub(super) fn half_life_decay(elapsed_ms: i64, half_life_days: u32) -> f64 {
    let days = (elapsed_ms as f64 / 86_400_000.0).max(0.0);
    let hl = f64::from(half_life_days);
    if hl <= 0.0 {
        return 0.0;
    }
    2.0_f64.powf(-days / hl)
}
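A quick standalone check of the decay curve above (function body copied here for illustration): the weight halves every `half_life_days`, negative elapsed time clamps to no decay, and a zero half-life disables the signal entirely:

```rust
// Copy of half_life_decay from expert.rs, reproduced for a standalone check.
fn half_life_decay(elapsed_ms: i64, half_life_days: u32) -> f64 {
    let days = (elapsed_ms as f64 / 86_400_000.0).max(0.0);
    let hl = f64::from(half_life_days);
    if hl <= 0.0 {
        return 0.0;
    }
    2.0_f64.powf(-days / hl)
}

fn main() {
    const DAY_MS: i64 = 86_400_000;
    // 0 days elapsed: full weight; one half-life: 0.5; two: 0.25.
    assert!((half_life_decay(0, 30) - 1.0).abs() < 1e-12);
    assert!((half_life_decay(30 * DAY_MS, 30) - 0.5).abs() < 1e-12);
    assert!((half_life_decay(60 * DAY_MS, 30) - 0.25).abs() < 1e-12);
    // Negative elapsed (clock skew) clamps to no decay.
    assert_eq!(half_life_decay(-DAY_MS, 30), 1.0);
    // Zero half-life short-circuits to 0.0.
    assert_eq!(half_life_decay(DAY_MS, 0), 0.0);
}
```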
|
||||
|
||||
// ─── Query: Expert Mode ─────────────────────────────────────────────────────
|
||||
|
||||
#[allow(clippy::too_many_arguments)]
|
||||
pub(super) fn query_expert(
|
||||
conn: &Connection,
|
||||
path: &str,
|
||||
project_id: Option<i64>,
|
||||
since_ms: i64,
|
||||
as_of_ms: i64,
|
||||
limit: usize,
|
||||
scoring: &ScoringConfig,
|
||||
detail: bool,
|
||||
explain_score: bool,
|
||||
include_bots: bool,
|
||||
) -> Result<ExpertResult> {
|
||||
let pq = build_path_query(conn, path, project_id)?;
|
||||
|
||||
let sql = build_expert_sql_v2(pq.is_prefix);
|
||||
let mut stmt = conn.prepare_cached(&sql)?;
|
||||
|
||||
// Params: ?1=path, ?2=since_ms, ?3=project_id, ?4=as_of_ms,
|
||||
// ?5=closed_mr_multiplier, ?6=reviewer_min_note_chars
|
||||
let rows = stmt.query_map(
|
||||
rusqlite::params![
|
||||
pq.value,
|
||||
since_ms,
|
||||
project_id,
|
||||
as_of_ms,
|
||||
scoring.closed_mr_multiplier,
|
||||
scoring.reviewer_min_note_chars,
|
||||
],
|
||||
|row| {
|
||||
Ok(SignalRow {
|
||||
username: row.get(0)?,
|
||||
signal: row.get(1)?,
|
||||
mr_id: row.get(2)?,
|
||||
qty: row.get(3)?,
|
||||
ts: row.get(4)?,
|
||||
state_mult: row.get(5)?,
|
||||
})
|
||||
},
|
||||
)?;
|
||||
|
||||
// Per-user accumulator keyed by username.
|
||||
let mut accum: HashMap<String, UserAccum> = HashMap::new();
|
||||
|
||||
for row_result in rows {
|
||||
let r = row_result?;
|
||||
let entry = accum
|
||||
.entry(r.username.clone())
|
||||
.or_insert_with(|| UserAccum {
|
||||
contributions: Vec::new(),
|
||||
last_seen_ms: 0,
|
||||
mr_ids_author: HashSet::new(),
|
||||
mr_ids_reviewer: HashSet::new(),
|
||||
note_count: 0,
|
||||
});
|
||||
|
||||
if r.ts > entry.last_seen_ms {
|
||||
entry.last_seen_ms = r.ts;
|
||||
}
|
||||
|
||||
match r.signal.as_str() {
|
||||
"diffnote_author" | "file_author" => {
|
||||
entry.mr_ids_author.insert(r.mr_id);
|
||||
}
|
||||
"file_reviewer_participated" | "file_reviewer_assigned" => {
|
||||
entry.mr_ids_reviewer.insert(r.mr_id);
|
||||
}
|
||||
"note_group" => {
|
||||
entry.note_count += r.qty as u32;
|
||||
// DiffNote reviewers are also reviewer activity.
|
||||
entry.mr_ids_reviewer.insert(r.mr_id);
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
|
||||
entry.contributions.push(Contribution {
|
||||
signal: r.signal,
|
||||
mr_id: r.mr_id,
|
||||
qty: r.qty,
|
||||
ts: r.ts,
|
||||
state_mult: r.state_mult,
|
||||
});
|
||||
}
|
||||
|
||||
// Bot filtering: exclude configured bot usernames (case-insensitive).
|
||||
if !include_bots && !scoring.excluded_usernames.is_empty() {
|
||||
let excluded: HashSet<String> = scoring
|
||||
.excluded_usernames
|
||||
.iter()
|
||||
.map(|u| u.to_lowercase())
|
||||
.collect();
|
||||
accum.retain(|username, _| !excluded.contains(&username.to_lowercase()));
|
||||
}
|
||||
|
||||
// Compute decayed scores with deterministic ordering.
|
||||
let mut scored: Vec<ScoredUser> = accum
|
||||
.into_iter()
|
||||
.map(|(username, mut ua)| {
|
||||
// Sort contributions by mr_id ASC for deterministic f64 summation.
|
||||
ua.contributions.sort_by_key(|c| c.mr_id);
|
||||
|
||||
let mut comp_author = 0.0_f64;
|
||||
let mut comp_reviewer_participated = 0.0_f64;
|
||||
let mut comp_reviewer_assigned = 0.0_f64;
|
||||
let mut comp_notes = 0.0_f64;
|
||||
|
||||
for c in &ua.contributions {
|
||||
let elapsed = as_of_ms - c.ts;
|
||||
match c.signal.as_str() {
|
||||
"diffnote_author" | "file_author" => {
|
||||
let decay = half_life_decay(elapsed, scoring.author_half_life_days);
|
||||
comp_author += scoring.author_weight as f64 * decay * c.state_mult;
|
||||
}
|
||||
"file_reviewer_participated" => {
|
||||
let decay = half_life_decay(elapsed, scoring.reviewer_half_life_days);
|
||||
comp_reviewer_participated +=
|
||||
scoring.reviewer_weight as f64 * decay * c.state_mult;
|
||||
}
|
||||
"file_reviewer_assigned" => {
|
||||
let decay =
|
||||
half_life_decay(elapsed, scoring.reviewer_assignment_half_life_days);
|
||||
comp_reviewer_assigned +=
|
||||
scoring.reviewer_assignment_weight as f64 * decay * c.state_mult;
|
||||
}
|
||||
"note_group" => {
|
||||
let decay = half_life_decay(elapsed, scoring.note_half_life_days);
|
||||
// Diminishing returns: log2(1 + count) per MR.
|
||||
let note_value = (1.0 + c.qty as f64).log2();
|
||||
comp_notes += scoring.note_bonus as f64 * note_value * decay * c.state_mult;
|
||||
}
|
||||
_ => {}
|
||||
}
|
||||
}

            let raw_score =
                comp_author + comp_reviewer_participated + comp_reviewer_assigned + comp_notes;
            ScoredUser {
                username,
                raw_score,
                components: ScoreComponents {
                    author: comp_author,
                    reviewer_participated: comp_reviewer_participated,
                    reviewer_assigned: comp_reviewer_assigned,
                    notes: comp_notes,
                },
                accum: ua,
            }
        })
        .collect();

    // Sort: raw_score DESC, last_seen DESC, username ASC (deterministic tiebreaker).
    scored.sort_by(|a, b| {
        b.raw_score
            .partial_cmp(&a.raw_score)
            .unwrap_or(std::cmp::Ordering::Equal)
            .then_with(|| b.accum.last_seen_ms.cmp(&a.accum.last_seen_ms))
            .then_with(|| a.username.cmp(&b.username))
    });
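The same three-key comparator can be exercised standalone on hypothetical `(score, last_seen_ms, name)` tuples rather than the real `ScoredUser` type, to confirm the tie-break order is deterministic:

```rust
type Row = (f64, i64, &'static str);

// Same ordering as the real comparator: score DESC, last_seen DESC, name ASC.
fn sort_rows(mut rows: Vec<Row>) -> Vec<Row> {
    rows.sort_by(|a, b| {
        b.0.partial_cmp(&a.0)
            .unwrap_or(std::cmp::Ordering::Equal)
            .then_with(|| b.1.cmp(&a.1))
            .then_with(|| a.2.cmp(&b.2))
    });
    rows
}

fn main() {
    let sorted = sort_rows(vec![
        (2.0, 100, "bob"),
        (2.0, 100, "alice"),
        (2.0, 200, "carol"),
        (5.0, 50, "dave"),
    ]);
    let names: Vec<&str> = sorted.iter().map(|r| r.2).collect();
    // Highest score first, then most recent activity, then alphabetical.
    assert_eq!(names, ["dave", "carol", "alice", "bob"]);
}
```

The username fallback matters because f64 scores can tie exactly; without it, output order would depend on hash-map iteration order.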

    let truncated = scored.len() > limit;
    scored.truncate(limit);

    // Build Expert structs with MR refs.
    let mut experts: Vec<Expert> = scored
        .into_iter()
        .map(|su| {
            let mut mr_refs = build_mr_refs_for_user(conn, &su.accum);
            mr_refs.sort();
            let mr_refs_total = mr_refs.len() as u32;
            let mr_refs_truncated = mr_refs.len() > MAX_MR_REFS_PER_USER;
            if mr_refs_truncated {
                mr_refs.truncate(MAX_MR_REFS_PER_USER);
            }
            Expert {
                username: su.username,
                score: su.raw_score.round() as i64,
                score_raw: if explain_score {
                    Some(su.raw_score)
                } else {
                    None
                },
                components: if explain_score {
                    Some(su.components)
                } else {
                    None
                },
                review_mr_count: su.accum.mr_ids_reviewer.len() as u32,
                review_note_count: su.accum.note_count,
                author_mr_count: su.accum.mr_ids_author.len() as u32,
                last_seen_ms: su.accum.last_seen_ms,
                mr_refs,
                mr_refs_total,
                mr_refs_truncated,
                details: None,
            }
        })
        .collect();

    // Populate per-MR detail when --detail is requested.
    if detail && !experts.is_empty() {
        let details_map = query_expert_details(conn, &pq, &experts, since_ms, project_id)?;
        for expert in &mut experts {
            expert.details = details_map.get(&expert.username).cloned();
        }
    }

    Ok(ExpertResult {
        path_query: if pq.is_prefix {
            // Use raw input (unescaped) for display — pq.value has LIKE escaping.
            path.trim_end_matches('/').to_string()
        } else {
            // For exact matches (including suffix-resolved), show the resolved path.
            pq.value.clone()
        },
        path_match: if pq.is_prefix { "prefix" } else { "exact" }.to_string(),
        experts,
        truncated,
    })
}

struct SignalRow {
    username: String,
    signal: String,
    mr_id: i64,
    qty: i64,
    ts: i64,
    state_mult: f64,
}

/// Per-user signal accumulator used during Rust-side scoring.
struct UserAccum {
    contributions: Vec<Contribution>,
    last_seen_ms: i64,
    mr_ids_author: HashSet<i64>,
    mr_ids_reviewer: HashSet<i64>,
    note_count: u32,
}

/// A single contribution to a user's score (one signal row).
struct Contribution {
    signal: String,
    mr_id: i64,
    qty: i64,
    ts: i64,
    state_mult: f64,
}

/// Intermediate scored user before building Expert structs.
struct ScoredUser {
    username: String,
    raw_score: f64,
    components: ScoreComponents,
    accum: UserAccum,
}

/// Build MR refs (e.g. "group/project!123") for a user from their accumulated MR IDs.
fn build_mr_refs_for_user(conn: &Connection, ua: &UserAccum) -> Vec<String> {
    let all_mr_ids: HashSet<i64> = ua
        .mr_ids_author
        .iter()
        .chain(ua.mr_ids_reviewer.iter())
        .copied()
        .chain(ua.contributions.iter().map(|c| c.mr_id))
        .collect();

    if all_mr_ids.is_empty() {
        return Vec::new();
    }

    let placeholders: Vec<String> = (1..=all_mr_ids.len()).map(|i| format!("?{i}")).collect();
    let sql = format!(
        "SELECT p.path_with_namespace || '!' || CAST(m.iid AS TEXT)
         FROM merge_requests m
         JOIN projects p ON m.project_id = p.id
         WHERE m.id IN ({})",
        placeholders.join(",")
    );

    let mut stmt = match conn.prepare(&sql) {
        Ok(s) => s,
        Err(_) => return Vec::new(),
    };

    let mut mr_ids_vec: Vec<i64> = all_mr_ids.into_iter().collect();
    mr_ids_vec.sort_unstable();
    let params: Vec<&dyn rusqlite::types::ToSql> = mr_ids_vec
        .iter()
        .map(|id| id as &dyn rusqlite::types::ToSql)
        .collect();

    stmt.query_map(&*params, |row| row.get::<_, String>(0))
        .map(|rows| rows.filter_map(|r| r.ok()).collect())
        .unwrap_or_default()
}
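The function above builds its `IN (...)` clause dynamically, one `?N` positional marker per MR ID. A tiny standalone sketch of just that string construction (no database involved):

```rust
// One `?N` placeholder per ID, joined into a single IN(...) clause,
// mirroring the pattern in build_mr_refs_for_user.
fn in_clause(n: usize) -> String {
    let placeholders: Vec<String> = (1..=n).map(|i| format!("?{i}")).collect();
    format!("WHERE m.id IN ({})", placeholders.join(","))
}

fn main() {
    assert_eq!(in_clause(3), "WHERE m.id IN (?1,?2,?3)");
    assert_eq!(in_clause(1), "WHERE m.id IN (?1)");
}
```

Binding values positionally this way, rather than splicing IDs into the SQL text, keeps the query safe regardless of how many IDs are passed.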

/// Build the CTE-based expert SQL for time-decay scoring (v2).
///
/// Returns raw signal rows `(username, signal, mr_id, qty, ts, state_mult)` that
/// Rust aggregates with per-signal decay and `log2(1+count)` for note groups.
///
/// Parameters: `?1` = path, `?2` = since_ms, `?3` = project_id (nullable),
/// `?4` = as_of_ms, `?5` = closed_mr_multiplier, `?6` = reviewer_min_note_chars
pub(super) fn build_expert_sql_v2(is_prefix: bool) -> String {
    let path_op = if is_prefix {
        "LIKE ?1 ESCAPE '\\'"
    } else {
        "= ?1"
    };
    // INDEXED BY hints for each branch:
    // - new_path branch: idx_notes_diffnote_path_created (existing)
    // - old_path branch: idx_notes_old_path_author (migration 026)
    format!(
        "
        WITH matched_notes_raw AS (
            -- Branch 1: match on position_new_path
            SELECT n.id, n.discussion_id, n.author_username, n.created_at, n.project_id
            FROM notes n INDEXED BY idx_notes_diffnote_path_created
            WHERE n.note_type = 'DiffNote'
              AND n.is_system = 0
              AND n.author_username IS NOT NULL
              AND n.created_at >= ?2
              AND n.created_at < ?4
              AND (?3 IS NULL OR n.project_id = ?3)
              AND n.position_new_path {path_op}
            UNION ALL
            -- Branch 2: match on position_old_path
            SELECT n.id, n.discussion_id, n.author_username, n.created_at, n.project_id
            FROM notes n INDEXED BY idx_notes_old_path_author
            WHERE n.note_type = 'DiffNote'
              AND n.is_system = 0
              AND n.author_username IS NOT NULL
              AND n.created_at >= ?2
              AND n.created_at < ?4
              AND (?3 IS NULL OR n.project_id = ?3)
              AND n.position_old_path IS NOT NULL
              AND n.position_old_path {path_op}
        ),
        matched_notes AS (
            -- Dedup: prevent double-counting when old_path = new_path (no rename)
            SELECT DISTINCT id, discussion_id, author_username, created_at, project_id
            FROM matched_notes_raw
        ),
        matched_file_changes_raw AS (
            -- Branch 1: match on new_path
            SELECT fc.merge_request_id, fc.project_id
            FROM mr_file_changes fc INDEXED BY idx_mfc_new_path_project_mr
            WHERE (?3 IS NULL OR fc.project_id = ?3)
              AND fc.new_path {path_op}
            UNION ALL
            -- Branch 2: match on old_path
            SELECT fc.merge_request_id, fc.project_id
            FROM mr_file_changes fc INDEXED BY idx_mfc_old_path_project_mr
            WHERE (?3 IS NULL OR fc.project_id = ?3)
              AND fc.old_path IS NOT NULL
              AND fc.old_path {path_op}
        ),
        matched_file_changes AS (
            -- Dedup: prevent double-counting when old_path = new_path (no rename)
            SELECT DISTINCT merge_request_id, project_id
            FROM matched_file_changes_raw
        ),
        mr_activity AS (
            -- Centralized state-aware timestamps and state multiplier.
            -- Scoped to MRs matched by file changes to avoid materializing the full MR table.
            SELECT DISTINCT
                m.id AS mr_id,
                m.author_username,
                m.state,
                CASE
                    WHEN m.state = 'merged' THEN COALESCE(m.merged_at, m.created_at)
                    WHEN m.state = 'closed' THEN COALESCE(m.closed_at, m.created_at)
                    ELSE COALESCE(m.updated_at, m.created_at)
                END AS activity_ts,
                CASE WHEN m.state = 'closed' THEN ?5 ELSE 1.0 END AS state_mult
            FROM merge_requests m
            JOIN matched_file_changes mfc ON mfc.merge_request_id = m.id
            WHERE m.state IN ('opened','merged','closed')
        ),
        reviewer_participation AS (
            -- Precompute which (mr_id, username) pairs have substantive DiffNote participation.
            SELECT DISTINCT d.merge_request_id AS mr_id, mn.author_username AS username
            FROM matched_notes mn
            JOIN discussions d ON mn.discussion_id = d.id
            JOIN notes n_body ON mn.id = n_body.id
            WHERE d.merge_request_id IS NOT NULL
              AND LENGTH(TRIM(COALESCE(n_body.body, ''))) >= ?6
        ),
        raw AS (
            -- Signal 1: DiffNote reviewer (individual notes for note_cnt)
            SELECT mn.author_username AS username, 'diffnote_reviewer' AS signal,
                   m.id AS mr_id, mn.id AS note_id, mn.created_at AS seen_at,
                   CASE WHEN m.state = 'closed' THEN ?5 ELSE 1.0 END AS state_mult
            FROM matched_notes mn
            JOIN discussions d ON mn.discussion_id = d.id
            JOIN merge_requests m ON d.merge_request_id = m.id
            WHERE (m.author_username IS NULL OR mn.author_username != m.author_username)
              AND m.state IN ('opened','merged','closed')

            UNION ALL

            -- Signal 2: DiffNote MR author
            SELECT m.author_username AS username, 'diffnote_author' AS signal,
                   m.id AS mr_id, NULL AS note_id, MAX(mn.created_at) AS seen_at,
                   CASE WHEN m.state = 'closed' THEN ?5 ELSE 1.0 END AS state_mult
            FROM merge_requests m
            JOIN discussions d ON d.merge_request_id = m.id
            JOIN matched_notes mn ON mn.discussion_id = d.id
            WHERE m.author_username IS NOT NULL
              AND m.state IN ('opened','merged','closed')
            GROUP BY m.author_username, m.id

            UNION ALL

            -- Signal 3: MR author via file changes (uses mr_activity CTE)
            SELECT a.author_username AS username, 'file_author' AS signal,
                   a.mr_id, NULL AS note_id,
                   a.activity_ts AS seen_at, a.state_mult
            FROM mr_activity a
            WHERE a.author_username IS NOT NULL
              AND a.activity_ts >= ?2
              AND a.activity_ts < ?4

            UNION ALL

            -- Signal 4a: Reviewer participated (in mr_reviewers AND left DiffNotes on path)
            SELECT r.username AS username, 'file_reviewer_participated' AS signal,
                   a.mr_id, NULL AS note_id,
                   a.activity_ts AS seen_at, a.state_mult
            FROM mr_activity a
            JOIN mr_reviewers r ON r.merge_request_id = a.mr_id
            JOIN reviewer_participation rp ON rp.mr_id = a.mr_id AND rp.username = r.username
            WHERE r.username IS NOT NULL
              AND (a.author_username IS NULL OR r.username != a.author_username)
              AND a.activity_ts >= ?2
              AND a.activity_ts < ?4

            UNION ALL

            -- Signal 4b: Reviewer assigned-only (in mr_reviewers, NO DiffNotes on path)
            SELECT r.username AS username, 'file_reviewer_assigned' AS signal,
                   a.mr_id, NULL AS note_id,
                   a.activity_ts AS seen_at, a.state_mult
            FROM mr_activity a
            JOIN mr_reviewers r ON r.merge_request_id = a.mr_id
            LEFT JOIN reviewer_participation rp ON rp.mr_id = a.mr_id AND rp.username = r.username
            WHERE rp.username IS NULL
              AND r.username IS NOT NULL
              AND (a.author_username IS NULL OR r.username != a.author_username)
              AND a.activity_ts >= ?2
              AND a.activity_ts < ?4
        ),
        aggregated AS (
            -- MR-level signals: 1 row per (username, signal_class, mr_id) with MAX(ts)
            SELECT username, signal, mr_id, 1 AS qty, MAX(seen_at) AS ts, MAX(state_mult) AS state_mult
            FROM raw WHERE signal != 'diffnote_reviewer'
            GROUP BY username, signal, mr_id
            UNION ALL
            -- Note signals: 1 row per (username, mr_id) with note_count and max_ts
            SELECT username, 'note_group' AS signal, mr_id, COUNT(*) AS qty, MAX(seen_at) AS ts,
                   MAX(state_mult) AS state_mult
            FROM raw WHERE signal = 'diffnote_reviewer' AND note_id IS NOT NULL
            GROUP BY username, mr_id
        )
        SELECT username, signal, mr_id, qty, ts, state_mult FROM aggregated WHERE username IS NOT NULL
        "
    )
}

/// Query per-MR detail for a set of experts. Returns a map of username -> Vec<ExpertMrDetail>.
pub(super) fn query_expert_details(
    conn: &Connection,
    pq: &PathQuery,
    experts: &[Expert],
    since_ms: i64,
    project_id: Option<i64>,
) -> Result<HashMap<String, Vec<ExpertMrDetail>>> {
    let path_op = if pq.is_prefix {
        "LIKE ?1 ESCAPE '\\'"
    } else {
        "= ?1"
    };

    // Build IN clause for usernames.
    let placeholders: Vec<String> = experts
        .iter()
        .enumerate()
        .map(|(i, _)| format!("?{}", i + 4))
        .collect();
    let in_clause = placeholders.join(",");

    let sql = format!(
        "
        WITH signals AS (
            -- 1. DiffNote reviewer (matches both new_path and old_path for renamed files)
            SELECT
                n.author_username AS username,
                'reviewer' AS role,
                m.id AS mr_id,
                (p.path_with_namespace || '!' || CAST(m.iid AS TEXT)) AS mr_ref,
                m.title AS title,
                COUNT(*) AS note_count,
                MAX(n.created_at) AS last_activity
            FROM notes n
            JOIN discussions d ON n.discussion_id = d.id
            JOIN merge_requests m ON d.merge_request_id = m.id
            JOIN projects p ON m.project_id = p.id
            WHERE n.note_type = 'DiffNote'
              AND n.is_system = 0
              AND n.author_username IS NOT NULL
              AND (m.author_username IS NULL OR n.author_username != m.author_username)
              AND m.state IN ('opened','merged','closed')
              AND (n.position_new_path {path_op}
                   OR (n.position_old_path IS NOT NULL AND n.position_old_path {path_op}))
              AND n.created_at >= ?2
              AND (?3 IS NULL OR n.project_id = ?3)
              AND n.author_username IN ({in_clause})
            GROUP BY n.author_username, m.id

            UNION ALL

            -- 2. DiffNote MR author (matches both new_path and old_path for renamed files)
            SELECT
                m.author_username AS username,
                'author' AS role,
                m.id AS mr_id,
                (p.path_with_namespace || '!' || CAST(m.iid AS TEXT)) AS mr_ref,
                m.title AS title,
                0 AS note_count,
                MAX(n.created_at) AS last_activity
            FROM merge_requests m
            JOIN discussions d ON d.merge_request_id = m.id
            JOIN notes n ON n.discussion_id = d.id
            JOIN projects p ON m.project_id = p.id
            WHERE n.note_type = 'DiffNote'
              AND n.is_system = 0
              AND m.author_username IS NOT NULL
              AND m.state IN ('opened','merged','closed')
              AND (n.position_new_path {path_op}
                   OR (n.position_old_path IS NOT NULL AND n.position_old_path {path_op}))
              AND n.created_at >= ?2
              AND (?3 IS NULL OR n.project_id = ?3)
              AND m.author_username IN ({in_clause})
            GROUP BY m.author_username, m.id

            UNION ALL

            -- 3. MR author via file changes (matches both new_path and old_path)
            SELECT
                m.author_username AS username,
                'author' AS role,
                m.id AS mr_id,
                (p.path_with_namespace || '!' || CAST(m.iid AS TEXT)) AS mr_ref,
                m.title AS title,
                0 AS note_count,
                m.updated_at AS last_activity
            FROM mr_file_changes fc
            JOIN merge_requests m ON fc.merge_request_id = m.id
            JOIN projects p ON m.project_id = p.id
            WHERE m.author_username IS NOT NULL
              AND m.state IN ('opened','merged','closed')
              AND (fc.new_path {path_op}
                   OR (fc.old_path IS NOT NULL AND fc.old_path {path_op}))
              AND m.updated_at >= ?2
              AND (?3 IS NULL OR fc.project_id = ?3)
              AND m.author_username IN ({in_clause})

            UNION ALL

            -- 4. MR reviewer via file changes + mr_reviewers (matches both new_path and old_path)
            SELECT
                r.username AS username,
                'reviewer' AS role,
                m.id AS mr_id,
                (p.path_with_namespace || '!' || CAST(m.iid AS TEXT)) AS mr_ref,
                m.title AS title,
                0 AS note_count,
                m.updated_at AS last_activity
            FROM mr_file_changes fc
            JOIN merge_requests m ON fc.merge_request_id = m.id
            JOIN projects p ON m.project_id = p.id
            JOIN mr_reviewers r ON r.merge_request_id = m.id
            WHERE r.username IS NOT NULL
              AND (m.author_username IS NULL OR r.username != m.author_username)
              AND m.state IN ('opened','merged','closed')
              AND (fc.new_path {path_op}
                   OR (fc.old_path IS NOT NULL AND fc.old_path {path_op}))
              AND m.updated_at >= ?2
              AND (?3 IS NULL OR fc.project_id = ?3)
              AND r.username IN ({in_clause})
        )
        SELECT
            username,
            mr_ref,
            title,
            GROUP_CONCAT(DISTINCT role) AS roles,
            SUM(note_count) AS total_notes,
            MAX(last_activity) AS last_activity
        FROM signals
        GROUP BY username, mr_ref
        ORDER BY username ASC, last_activity DESC
        "
    );

    // prepare() not prepare_cached(): the IN clause varies by expert count,
    // so the SQL shape changes per invocation and caching wastes memory.
    let mut stmt = conn.prepare(&sql)?;

    // Build params: ?1=path, ?2=since_ms, ?3=project_id, ?4..=usernames
    let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
    params.push(Box::new(pq.value.clone()));
    params.push(Box::new(since_ms));
    params.push(Box::new(project_id));
    for expert in experts {
        params.push(Box::new(expert.username.clone()));
    }
    let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();

    let rows: Vec<(String, String, String, String, u32, i64)> = stmt
        .query_map(param_refs.as_slice(), |row| {
            Ok((
                row.get(0)?,
                row.get(1)?,
                row.get(2)?,
                row.get::<_, String>(3)?,
                row.get(4)?,
                row.get(5)?,
            ))
        })?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    let mut map: HashMap<String, Vec<ExpertMrDetail>> = HashMap::new();
    for (username, mr_ref, title, roles_csv, note_count, last_activity) in rows {
        let has_author = roles_csv.contains("author");
        let has_reviewer = roles_csv.contains("reviewer");
        let role = match (has_author, has_reviewer) {
            (true, true) => "A+R",
            (true, false) => "A",
            (false, true) => "R",
            _ => "?",
        }
        .to_string();
        map.entry(username).or_default().push(ExpertMrDetail {
            mr_ref,
            title,
            role,
            note_count,
            last_activity_ms: last_activity,
        });
    }

    Ok(map)
}
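The role-merge step above collapses the SQL's `GROUP_CONCAT(DISTINCT role)` output into a single display label. A tiny standalone check of that mapping, with the logic extracted into a hypothetical helper for illustration:

```rust
// Collapse a GROUP_CONCAT roles string (e.g. "author,reviewer") into the
// display label used by the detail rows: "A", "R", "A+R", or "?" fallback.
fn merge_role(roles_csv: &str) -> &'static str {
    match (roles_csv.contains("author"), roles_csv.contains("reviewer")) {
        (true, true) => "A+R",
        (true, false) => "A",
        (false, true) => "R",
        _ => "?",
    }
}

fn main() {
    assert_eq!(merge_role("author,reviewer"), "A+R");
    assert_eq!(merge_role("reviewer"), "R");
    assert_eq!(merge_role("author"), "A");
    assert_eq!(merge_role(""), "?");
}
```

Substring matching is sufficient here because the SQL only ever emits the two fixed role literals 'author' and 'reviewer'.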

pub(super) fn print_expert_human(r: &ExpertResult, project_path: Option<&str>) {
    println!();
    println!(
        "{}",
        Theme::bold().render(&format!("Experts for {}", r.path_query))
    );
    println!("{}", "\u{2500}".repeat(60));
    println!(
        "  {}",
        Theme::dim().render(&format!(
            "(matching {} {})",
            r.path_match,
            if r.path_match == "exact" {
                "file"
            } else {
                "directory prefix"
            }
        ))
    );
    super::print_scope_hint(project_path);
    println!();

    if r.experts.is_empty() {
        println!(
            "  {}",
            Theme::dim().render("No experts found for this path.")
        );
        println!();
        return;
    }

    println!(
        "  {:<16} {:>6} {:>12} {:>6} {:>12} {} {}",
        Theme::bold().render("Username"),
        Theme::bold().render("Score"),
        Theme::bold().render("Reviewed(MRs)"),
        Theme::bold().render("Notes"),
        Theme::bold().render("Authored(MRs)"),
        Theme::bold().render("Last Seen"),
        Theme::bold().render("MR Refs"),
    );

    for expert in &r.experts {
        let reviews = if expert.review_mr_count > 0 {
            expert.review_mr_count.to_string()
        } else {
            "-".to_string()
        };
        let notes = if expert.review_note_count > 0 {
            expert.review_note_count.to_string()
        } else {
            "-".to_string()
        };
        let authored = if expert.author_mr_count > 0 {
            expert.author_mr_count.to_string()
        } else {
            "-".to_string()
        };
        let mr_str = expert
            .mr_refs
            .iter()
            .take(5)
            .cloned()
            .collect::<Vec<_>>()
            .join(", ");
        let overflow = if expert.mr_refs_total > 5 {
            format!(" +{}", expert.mr_refs_total - 5)
        } else {
            String::new()
        };
        println!(
            "  {:<16} {:>6} {:>12} {:>6} {:>12} {:<12}{}{}",
            Theme::info().render(&format!("{} {}", Icons::user(), expert.username)),
            expert.score,
            reviews,
            notes,
            authored,
            render::format_relative_time(expert.last_seen_ms),
            if mr_str.is_empty() {
                String::new()
            } else {
                format!(" {mr_str}")
            },
            overflow,
        );

        // Print detail sub-rows when populated.
        if let Some(details) = &expert.details {
            const MAX_DETAIL_DISPLAY: usize = 10;
            for d in details.iter().take(MAX_DETAIL_DISPLAY) {
                let notes_str = if d.note_count > 0 {
                    format!("{} notes", d.note_count)
                } else {
                    String::new()
                };
                println!(
                    "    {:<3} {:<30} {:>30} {:>10} {}",
                    Theme::dim().render(&d.role),
                    d.mr_ref,
                    render::truncate(&format!("\"{}\"", d.title), 30),
                    notes_str,
                    Theme::dim().render(&render::format_relative_time(d.last_activity_ms)),
                );
            }
            if details.len() > MAX_DETAIL_DISPLAY {
                println!(
                    "    {}",
                    Theme::dim().render(&format!("+{} more", details.len() - MAX_DETAIL_DISPLAY))
                );
            }
        }
    }
    if r.truncated {
        println!(
            "  {}",
            Theme::dim().render("(showing first -n; rerun with a higher --limit)")
        );
    }
    println!();
}

pub(super) fn expert_to_json(r: &ExpertResult) -> serde_json::Value {
    serde_json::json!({
        "path_query": r.path_query,
        "path_match": r.path_match,
        "scoring_model_version": 2,
        "truncated": r.truncated,
        "experts": r.experts.iter().map(|e| {
            let mut obj = serde_json::json!({
                "username": e.username,
                "score": e.score,
                "review_mr_count": e.review_mr_count,
                "review_note_count": e.review_note_count,
                "author_mr_count": e.author_mr_count,
                "last_seen_at": ms_to_iso(e.last_seen_ms),
                "mr_refs": e.mr_refs,
                "mr_refs_total": e.mr_refs_total,
                "mr_refs_truncated": e.mr_refs_truncated,
            });
            if let Some(raw) = e.score_raw {
                obj["score_raw"] = serde_json::json!(raw);
            }
            if let Some(comp) = &e.components {
                obj["components"] = serde_json::json!({
                    "author": comp.author,
                    "reviewer_participated": comp.reviewer_participated,
                    "reviewer_assigned": comp.reviewer_assigned,
                    "notes": comp.notes,
                });
            }
            if let Some(details) = &e.details {
                obj["details"] = serde_json::json!(details.iter().map(|d| serde_json::json!({
                    "mr_ref": d.mr_ref,
                    "title": d.title,
                    "role": d.role,
                    "note_count": d.note_count,
                    "last_activity_at": ms_to_iso(d.last_activity_ms),
                })).collect::<Vec<_>>());
            }
            obj
        }).collect::<Vec<_>>(),
    })
}
429
src/cli/commands/who/mod.rs
Normal file
@@ -0,0 +1,429 @@
mod active;
mod expert;
mod overlap;
mod reviews;
pub mod types;
mod workload;

pub use types::*;

// Re-export submodule functions for tests (tests use `use super::*`).
#[cfg(test)]
use active::query_active;
#[cfg(test)]
use expert::{build_expert_sql_v2, half_life_decay, query_expert};
#[cfg(test)]
use overlap::{format_overlap_role, query_overlap};
#[cfg(test)]
use reviews::{normalize_review_prefix, query_reviews};
#[cfg(test)]
use workload::query_workload;

use rusqlite::Connection;
use serde::Serialize;

use crate::Config;
use crate::cli::WhoArgs;
use crate::cli::render::Theme;
use crate::cli::robot::RobotMeta;
use crate::core::db::create_connection;
use crate::core::error::{LoreError, Result};
use crate::core::path_resolver::normalize_repo_path;
use crate::core::paths::get_db_path;
use crate::core::project::resolve_project;
use crate::core::time::{ms_to_iso, now_ms, parse_since, parse_since_from};

#[cfg(test)]
use crate::core::config::ScoringConfig;
#[cfg(test)]
use crate::core::path_resolver::{SuffixResult, build_path_query, escape_like, suffix_probe};

// ─── Mode Discrimination ────────────────────────────────────────────────────

/// Determines which query mode to run based on args.
/// Path variants own their strings because path normalization produces new `String`s.
/// Username variants borrow from args since no normalization is needed.
enum WhoMode<'a> {
    /// lore who <file-path> OR lore who --path <path>
    Expert { path: String },
    /// lore who <username>
    Workload { username: &'a str },
    /// lore who <username> --reviews
    Reviews { username: &'a str },
    /// lore who --active
    Active,
    /// lore who --overlap <path>
    Overlap { path: String },
}

fn resolve_mode<'a>(args: &'a WhoArgs) -> Result<WhoMode<'a>> {
    // Explicit --path flag always wins (handles root files like README.md,
    // LICENSE, Makefile -- anything without a / that can't be auto-detected)
    if let Some(p) = &args.path {
        return Ok(WhoMode::Expert {
            path: normalize_repo_path(p),
        });
    }
    if args.active {
        return Ok(WhoMode::Active);
    }
    if let Some(path) = &args.overlap {
        return Ok(WhoMode::Overlap {
            path: normalize_repo_path(path),
        });
    }
    if let Some(target) = &args.target {
        let clean = target.strip_prefix('@').unwrap_or(target);
        if args.reviews {
            return Ok(WhoMode::Reviews { username: clean });
        }
        // Disambiguation: if target contains '/', it's a file path.
        // GitLab usernames never contain '/'.
        // Root files (no '/') require --path.
        if clean.contains('/') {
            return Ok(WhoMode::Expert {
                path: normalize_repo_path(clean),
            });
        }
        return Ok(WhoMode::Workload { username: clean });
    }
    Err(LoreError::Other(
        "Provide a username, file path, --active, or --overlap <path>.\n\n\
         Examples:\n  \
         lore who src/features/auth/\n  \
         lore who @username\n  \
         lore who --active\n  \
         lore who --overlap src/features/\n  \
         lore who --path README.md\n  \
         lore who --path Makefile"
            .to_string(),
    ))
}

fn validate_mode_flags(mode: &WhoMode<'_>, args: &WhoArgs) -> Result<()> {
    if args.detail && !matches!(mode, WhoMode::Expert { .. }) {
        return Err(LoreError::Other(
            "--detail is only supported in expert mode (`lore who --path <path>` or `lore who <path/with/slash>`).".to_string(),
        ));
    }
    Ok(())
}

// ─── Entry Point ─────────────────────────────────────────────────────────────

/// Main entry point. Resolves mode + resolved inputs once, then dispatches.
pub fn run_who(config: &Config, args: &WhoArgs) -> Result<WhoRun> {
    let db_path = get_db_path(config.storage.db_path.as_deref());
    let conn = create_connection(&db_path)?;

    let project_id = args
        .project
        .as_deref()
        .map(|p| resolve_project(&conn, p))
        .transpose()?;

    let project_path = project_id
        .map(|id| lookup_project_path(&conn, id))
        .transpose()?;

    let mode = resolve_mode(args)?;
    validate_mode_flags(&mode, args)?;

    // since_mode semantics:
    // - expert/reviews/active/overlap: default window applies if args.since is None -> "default"
    // - workload: no default window; args.since None => "none"
    let since_mode_for_defaulted = if args.since.is_some() {
        "explicit"
    } else {
        "default"
    };
    let since_mode_for_workload = if args.since.is_some() {
        "explicit"
    } else {
        "none"
    };

    let limit = args.limit.map_or(usize::MAX, usize::from);

    match mode {
        WhoMode::Expert { path } => {
            // Compute as_of first so --since durations are relative to it.
            let as_of_ms = match &args.as_of {
                Some(v) => parse_since(v).ok_or_else(|| {
                    LoreError::Other(format!(
                        "Invalid --as-of value: '{v}'. Use a duration (30d, 6m) or date (2024-01-15)"
                    ))
                })?,
                None => now_ms(),
            };
            let since_ms = if args.all_history {
                0
            } else {
                resolve_since_from(args.since.as_deref(), "24m", as_of_ms)?
            };
            let result = expert::query_expert(
                &conn,
                &path,
                project_id,
                since_ms,
                as_of_ms,
                limit,
                &config.scoring,
                args.detail,
                args.explain_score,
                args.include_bots,
            )?;
            Ok(WhoRun {
                resolved_input: WhoResolvedInput {
                    mode: "expert".to_string(),
                    project_id,
                    project_path,
                    since_ms: Some(since_ms),
                    since_iso: Some(ms_to_iso(since_ms)),
                    since_mode: since_mode_for_defaulted.to_string(),
                    limit: args.limit,
                },
                result: WhoResult::Expert(result),
            })
        }
        WhoMode::Workload { username } => {
            let since_ms = args
                .since
                .as_deref()
                .map(resolve_since_required)
                .transpose()?;

            let result = workload::query_workload(
                &conn,
                username,
                project_id,
                since_ms,
                limit,
                args.include_closed,
            )?;
            Ok(WhoRun {
                resolved_input: WhoResolvedInput {
                    mode: "workload".to_string(),
                    project_id,
                    project_path,
                    since_ms,
                    since_iso: since_ms.map(ms_to_iso),
                    since_mode: since_mode_for_workload.to_string(),
                    limit: args.limit,
                },
                result: WhoResult::Workload(result),
            })
        }
        WhoMode::Reviews { username } => {
            let since_ms = resolve_since(args.since.as_deref(), "6m")?;
            let result = reviews::query_reviews(&conn, username, project_id, since_ms)?;
            Ok(WhoRun {
                resolved_input: WhoResolvedInput {
                    mode: "reviews".to_string(),
                    project_id,
                    project_path,
                    since_ms: Some(since_ms),
                    since_iso: Some(ms_to_iso(since_ms)),
                    since_mode: since_mode_for_defaulted.to_string(),
                    limit: args.limit,
                },
                result: WhoResult::Reviews(result),
            })
        }
        WhoMode::Active => {
            let since_ms = resolve_since(args.since.as_deref(), "7d")?;

            let result =
                active::query_active(&conn, project_id, since_ms, limit, args.include_closed)?;
            Ok(WhoRun {
                resolved_input: WhoResolvedInput {
                    mode: "active".to_string(),
                    project_id,
                    project_path,
                    since_ms: Some(since_ms),
                    since_iso: Some(ms_to_iso(since_ms)),
                    since_mode: since_mode_for_defaulted.to_string(),
                    limit: args.limit,
                },
                result: WhoResult::Active(result),
            })
        }
        WhoMode::Overlap { path } => {
            let since_ms = resolve_since(args.since.as_deref(), "30d")?;

            let result = overlap::query_overlap(&conn, &path, project_id, since_ms, limit)?;
            Ok(WhoRun {
                resolved_input: WhoResolvedInput {
                    mode: "overlap".to_string(),
                    project_id,
                    project_path,
                    since_ms: Some(since_ms),
                    since_iso: Some(ms_to_iso(since_ms)),
                    since_mode: since_mode_for_defaulted.to_string(),
                    limit: args.limit,
                },
                result: WhoResult::Overlap(result),
            })
        }
    }
}
|
||||
|
||||
// ─── Helpers ─────────────────────────────────────────────────────────────────
|
||||
|
||||
/// Look up the project path for a resolved project ID.
|
||||
fn lookup_project_path(conn: &Connection, project_id: i64) -> Result<String> {
|
||||
conn.query_row(
|
||||
"SELECT path_with_namespace FROM projects WHERE id = ?1",
|
||||
rusqlite::params![project_id],
|
||||
|row| row.get(0),
|
||||
)
|
||||
.map_err(|e| LoreError::Other(format!("Failed to look up project path: {e}")))
|
||||
}
|
||||
|
||||
/// Parse --since with a default fallback.
|
||||
fn resolve_since(input: Option<&str>, default: &str) -> Result<i64> {
|
||||
let s = input.unwrap_or(default);
|
||||
parse_since(s).ok_or_else(|| {
|
||||
LoreError::Other(format!(
|
||||
"Invalid --since value: '{s}'. Use a duration (7d, 2w, 6m) or date (2024-01-15)"
|
||||
))
|
||||
})
|
||||
}
|
||||
|
||||
/// Parse --since with a default fallback, relative to a reference timestamp.
|
||||
/// Durations (7d, 2w, 6m) are computed from `reference_ms` instead of now.
|
||||
fn resolve_since_from(input: Option<&str>, default: &str, reference_ms: i64) -> Result<i64> {
|
||||
let s = input.unwrap_or(default);
|
||||
parse_since_from(s, reference_ms).ok_or_else(|| {
|
||||
LoreError::Other(format!(
|
||||
"Invalid --since value: '{s}'. Use a duration (7d, 2w, 6m) or date (2024-01-15)"
|
||||
))
|
||||
})
|
||||
}
|
||||
|
||||
/// Parse --since without a default (returns error if invalid).
|
||||
fn resolve_since_required(input: &str) -> Result<i64> {
|
||||
parse_since(input).ok_or_else(|| {
|
||||
LoreError::Other(format!(
|
||||
"Invalid --since value: '{input}'. Use a duration (7d, 2w, 6m) or date (2024-01-15)"
|
||||
))
|
||||
})
|
||||
}
|
||||
|
||||
// ─── Human Output ────────────────────────────────────────────────────────────
|
||||
|
||||
pub fn print_who_human(result: &WhoResult, project_path: Option<&str>) {
|
||||
match result {
|
||||
WhoResult::Expert(r) => expert::print_expert_human(r, project_path),
|
||||
WhoResult::Workload(r) => workload::print_workload_human(r),
|
||||
WhoResult::Reviews(r) => reviews::print_reviews_human(r),
|
||||
WhoResult::Active(r) => active::print_active_human(r, project_path),
|
||||
WhoResult::Overlap(r) => overlap::print_overlap_human(r, project_path),
|
||||
}
|
||||
}
|
||||
|
||||
/// Print a dim hint when results aggregate across all projects.
|
||||
pub(super) fn print_scope_hint(project_path: Option<&str>) {
|
||||
if project_path.is_none() {
|
||||
println!(
|
||||
" {}",
|
||||
Theme::dim().render("(aggregated across all projects; use -p to scope)")
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
// ─── Robot JSON Output ───────────────────────────────────────────────────────
|
||||
|
||||
pub fn print_who_json(run: &WhoRun, args: &WhoArgs, elapsed_ms: u64) {
|
||||
let (mode, data) = match &run.result {
|
||||
WhoResult::Expert(r) => ("expert", expert::expert_to_json(r)),
|
||||
WhoResult::Workload(r) => ("workload", workload::workload_to_json(r)),
|
||||
WhoResult::Reviews(r) => ("reviews", reviews::reviews_to_json(r)),
|
||||
WhoResult::Active(r) => ("active", active::active_to_json(r)),
|
||||
WhoResult::Overlap(r) => ("overlap", overlap::overlap_to_json(r)),
|
||||
};
|
||||
|
||||
// Raw CLI args -- what the user typed
|
||||
let input = serde_json::json!({
|
||||
"target": args.target,
|
||||
"path": args.path,
|
||||
"project": args.project,
|
||||
"since": args.since,
|
||||
"limit": args.limit,
|
||||
"detail": args.detail,
|
||||
"as_of": args.as_of,
|
||||
"explain_score": args.explain_score,
|
||||
"include_bots": args.include_bots,
|
||||
"all_history": args.all_history,
|
||||
});
|
||||
|
||||
// Resolved/computed values -- what actually ran
|
||||
let resolved_input = serde_json::json!({
|
||||
"mode": run.resolved_input.mode,
|
||||
"project_id": run.resolved_input.project_id,
|
||||
"project_path": run.resolved_input.project_path,
|
||||
"since_ms": run.resolved_input.since_ms,
|
||||
"since_iso": run.resolved_input.since_iso,
|
||||
"since_mode": run.resolved_input.since_mode,
|
||||
"limit": run.resolved_input.limit,
|
||||
});
|
||||
|
||||
let output = WhoJsonEnvelope {
|
||||
ok: true,
|
||||
data: WhoJsonData {
|
||||
mode: mode.to_string(),
|
||||
input,
|
||||
resolved_input,
|
||||
result: data,
|
||||
},
|
||||
meta: RobotMeta { elapsed_ms },
|
||||
};
|
||||
|
||||
let mut value = serde_json::to_value(&output).unwrap_or_else(|e| {
|
||||
serde_json::json!({"ok":false,"error":{"code":"INTERNAL_ERROR","message":format!("JSON serialization failed: {e}")}})
|
||||
});
|
||||
|
||||
if let Some(f) = &args.fields {
|
||||
let preset_key = format!("who_{mode}");
|
||||
let expanded = crate::cli::robot::expand_fields_preset(f, &preset_key);
|
||||
// Each who mode uses a different array key; try all possible keys
|
||||
for key in &[
|
||||
"experts",
|
||||
"assigned_issues",
|
||||
"authored_mrs",
|
||||
"review_mrs",
|
||||
"categories",
|
||||
"discussions",
|
||||
"users",
|
||||
] {
|
||||
crate::cli::robot::filter_fields(&mut value, key, &expanded);
|
||||
}
|
||||
}
|
||||
|
||||
match serde_json::to_string(&value) {
|
||||
Ok(json) => println!("{json}"),
|
||||
Err(e) => eprintln!("Error serializing to JSON: {e}"),
|
||||
}
|
||||
}
|
||||
|
||||
#[derive(Serialize)]
|
||||
struct WhoJsonEnvelope {
|
||||
ok: bool,
|
||||
data: WhoJsonData,
|
||||
meta: RobotMeta,
|
||||
}
|
||||
|
||||
#[derive(Serialize)]
|
||||
struct WhoJsonData {
|
||||
mode: String,
|
||||
input: serde_json::Value,
|
||||
resolved_input: serde_json::Value,
|
||||
#[serde(flatten)]
|
||||
result: serde_json::Value,
|
||||
}
|
||||
|
||||
// ─── Tests ───────────────────────────────────────────────────────────────────
|
||||
|
||||
#[cfg(test)]
|
||||
#[path = "../who_tests.rs"]
|
||||
mod tests;
|
||||
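The `resolve_since` helpers above all delegate to `parse_since`, whose implementation is outside this diff. A minimal standalone sketch of the duration grammar it appears to accept (`7d`, `2w`, `6m`, inferred from the error messages; the name `parse_duration_ms` and the month-as-30-days approximation are assumptions):

```rust
// Hypothetical sketch of the duration grammar suggested by the error
// messages in this diff ("7d, 2w, 6m"). Returns a window size in ms.
fn parse_duration_ms(s: &str) -> Option<i64> {
    // Split off the single-character unit suffix; empty input yields None.
    let (num, unit) = s.split_at(s.len().checked_sub(1)?);
    let n: i64 = num.parse().ok()?;
    let unit_ms: i64 = match unit {
        "d" => 86_400_000,     // days
        "w" => 7 * 86_400_000, // weeks
        "m" => 30 * 86_400_000, // months, approximated as 30 days
        _ => return None,
    };
    Some(n * unit_ms)
}
```

A caller would subtract the result from the current (or reference) timestamp to get `since_ms`, which matches the `resolve_since_from` doc comment above.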
323
src/cli/commands/who/overlap.rs
Normal file
@@ -0,0 +1,323 @@
use std::collections::{HashMap, HashSet};

use rusqlite::Connection;

use crate::cli::render::{self, Icons, Theme};
use crate::core::error::Result;
use crate::core::path_resolver::build_path_query;
use crate::core::time::ms_to_iso;

use super::types::*;

pub(super) fn query_overlap(
    conn: &Connection,
    path: &str,
    project_id: Option<i64>,
    since_ms: i64,
    limit: usize,
) -> Result<OverlapResult> {
    let pq = build_path_query(conn, path, project_id)?;

    // Build SQL with 4 signal sources, matching the expert query expansion.
    // Each row produces (username, role, touch_count, last_seen_at, mr_refs)
    // for Rust-side accumulation.
    let path_op = if pq.is_prefix {
        "LIKE ?1 ESCAPE '\\'"
    } else {
        "= ?1"
    };
    // Match both new_path and old_path to capture activity on renamed files.
    // INDEXED BY removed to allow OR across path columns; overlap runs once
    // per command so the minor plan difference is acceptable.
    let sql = format!(
        "SELECT username, role, touch_count, last_seen_at, mr_refs FROM (
            -- 1. DiffNote reviewer (matches both new_path and old_path)
            SELECT
                n.author_username AS username,
                'reviewer' AS role,
                COUNT(DISTINCT m.id) AS touch_count,
                MAX(n.created_at) AS last_seen_at,
                GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
            FROM notes n
            JOIN discussions d ON n.discussion_id = d.id
            JOIN merge_requests m ON d.merge_request_id = m.id
            JOIN projects p ON m.project_id = p.id
            WHERE n.note_type = 'DiffNote'
              AND (n.position_new_path {path_op}
                   OR (n.position_old_path IS NOT NULL AND n.position_old_path {path_op}))
              AND n.is_system = 0
              AND n.author_username IS NOT NULL
              AND (m.author_username IS NULL OR n.author_username != m.author_username)
              AND m.state IN ('opened','merged','closed')
              AND n.created_at >= ?2
              AND (?3 IS NULL OR n.project_id = ?3)
            GROUP BY n.author_username

            UNION ALL

            -- 2. DiffNote MR author (matches both new_path and old_path)
            SELECT
                m.author_username AS username,
                'author' AS role,
                COUNT(DISTINCT m.id) AS touch_count,
                MAX(n.created_at) AS last_seen_at,
                GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
            FROM notes n
            JOIN discussions d ON n.discussion_id = d.id
            JOIN merge_requests m ON d.merge_request_id = m.id
            JOIN projects p ON m.project_id = p.id
            WHERE n.note_type = 'DiffNote'
              AND (n.position_new_path {path_op}
                   OR (n.position_old_path IS NOT NULL AND n.position_old_path {path_op}))
              AND n.is_system = 0
              AND m.state IN ('opened','merged','closed')
              AND m.author_username IS NOT NULL
              AND n.created_at >= ?2
              AND (?3 IS NULL OR n.project_id = ?3)
            GROUP BY m.author_username

            UNION ALL

            -- 3. MR author via file changes (matches both new_path and old_path)
            SELECT
                m.author_username AS username,
                'author' AS role,
                COUNT(DISTINCT m.id) AS touch_count,
                MAX(m.updated_at) AS last_seen_at,
                GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
            FROM mr_file_changes fc
            JOIN merge_requests m ON fc.merge_request_id = m.id
            JOIN projects p ON m.project_id = p.id
            WHERE m.author_username IS NOT NULL
              AND m.state IN ('opened','merged','closed')
              AND (fc.new_path {path_op}
                   OR (fc.old_path IS NOT NULL AND fc.old_path {path_op}))
              AND m.updated_at >= ?2
              AND (?3 IS NULL OR fc.project_id = ?3)
            GROUP BY m.author_username

            UNION ALL

            -- 4. MR reviewer via file changes + mr_reviewers (matches both new_path and old_path)
            SELECT
                r.username AS username,
                'reviewer' AS role,
                COUNT(DISTINCT m.id) AS touch_count,
                MAX(m.updated_at) AS last_seen_at,
                GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
            FROM mr_file_changes fc
            JOIN merge_requests m ON fc.merge_request_id = m.id
            JOIN projects p ON m.project_id = p.id
            JOIN mr_reviewers r ON r.merge_request_id = m.id
            WHERE r.username IS NOT NULL
              AND (m.author_username IS NULL OR r.username != m.author_username)
              AND m.state IN ('opened','merged','closed')
              AND (fc.new_path {path_op}
                   OR (fc.old_path IS NOT NULL AND fc.old_path {path_op}))
              AND m.updated_at >= ?2
              AND (?3 IS NULL OR fc.project_id = ?3)
            GROUP BY r.username
        )"
    );

    let mut stmt = conn.prepare_cached(&sql)?;
    let rows: Vec<(String, String, u32, i64, Option<String>)> = stmt
        .query_map(rusqlite::params![pq.value, since_ms, project_id], |row| {
            Ok((
                row.get(0)?,
                row.get(1)?,
                row.get(2)?,
                row.get(3)?,
                row.get(4)?,
            ))
        })?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    // Internal accumulator uses HashSet for MR refs from the start
    struct OverlapAcc {
        username: String,
        author_touch_count: u32,
        review_touch_count: u32,
        touch_count: u32,
        last_seen_at: i64,
        mr_refs: HashSet<String>,
    }

    let mut user_map: HashMap<String, OverlapAcc> = HashMap::new();
    for (username, role, count, last_seen, mr_refs_csv) in &rows {
        let mr_refs: Vec<String> = mr_refs_csv
            .as_deref()
            .map(|csv| csv.split(',').map(|s| s.trim().to_string()).collect())
            .unwrap_or_default();

        let entry = user_map
            .entry(username.clone())
            .or_insert_with(|| OverlapAcc {
                username: username.clone(),
                author_touch_count: 0,
                review_touch_count: 0,
                touch_count: 0,
                last_seen_at: 0,
                mr_refs: HashSet::new(),
            });
        entry.touch_count += count;
        if role == "author" {
            entry.author_touch_count += count;
        } else {
            entry.review_touch_count += count;
        }
        if *last_seen > entry.last_seen_at {
            entry.last_seen_at = *last_seen;
        }
        for r in mr_refs {
            entry.mr_refs.insert(r);
        }
    }

    // Convert accumulators to output structs
    let mut users: Vec<OverlapUser> = user_map
        .into_values()
        .map(|a| {
            let mut mr_refs: Vec<String> = a.mr_refs.into_iter().collect();
            mr_refs.sort();
            let mr_refs_total = mr_refs.len() as u32;
            let mr_refs_truncated = mr_refs.len() > MAX_MR_REFS_PER_USER;
            if mr_refs_truncated {
                mr_refs.truncate(MAX_MR_REFS_PER_USER);
            }
            OverlapUser {
                username: a.username,
                author_touch_count: a.author_touch_count,
                review_touch_count: a.review_touch_count,
                touch_count: a.touch_count,
                last_seen_at: a.last_seen_at,
                mr_refs,
                mr_refs_total,
                mr_refs_truncated,
            }
        })
        .collect();

    // Stable sort with full tie-breakers for deterministic output
    users.sort_by(|a, b| {
        b.touch_count
            .cmp(&a.touch_count)
            .then_with(|| b.last_seen_at.cmp(&a.last_seen_at))
            .then_with(|| a.username.cmp(&b.username))
    });

    let truncated = users.len() > limit;
    users.truncate(limit);

    Ok(OverlapResult {
        path_query: if pq.is_prefix {
            path.trim_end_matches('/').to_string()
        } else {
            pq.value.clone()
        },
        path_match: if pq.is_prefix { "prefix" } else { "exact" }.to_string(),
        users,
        truncated,
    })
}

/// Format overlap role for display: "A", "R", or "A+R".
pub(super) fn format_overlap_role(user: &OverlapUser) -> &'static str {
    match (user.author_touch_count > 0, user.review_touch_count > 0) {
        (true, true) => "A+R",
        (true, false) => "A",
        (false, true) => "R",
        (false, false) => "-",
    }
}

pub(super) fn print_overlap_human(r: &OverlapResult, project_path: Option<&str>) {
    println!();
    println!(
        "{}",
        Theme::bold().render(&format!("Overlap for {}", r.path_query))
    );
    println!("{}", "\u{2500}".repeat(60));
    println!(
        "  {}",
        Theme::dim().render(&format!(
            "(matching {} {})",
            r.path_match,
            if r.path_match == "exact" {
                "file"
            } else {
                "directory prefix"
            }
        ))
    );
    super::print_scope_hint(project_path);
    println!();

    if r.users.is_empty() {
        println!(
            "  {}",
            Theme::dim().render("No overlapping users found for this path.")
        );
        println!();
        return;
    }

    println!(
        "  {:<16} {:<6} {:>7} {:<12} {}",
        Theme::bold().render("Username"),
        Theme::bold().render("Role"),
        Theme::bold().render("MRs"),
        Theme::bold().render("Last Seen"),
        Theme::bold().render("MR Refs"),
    );

    for user in &r.users {
        let mr_str = user
            .mr_refs
            .iter()
            .take(5)
            .cloned()
            .collect::<Vec<_>>()
            .join(", ");
        let overflow = if user.mr_refs.len() > 5 {
            format!(" +{}", user.mr_refs.len() - 5)
        } else {
            String::new()
        };

        println!(
            "  {:<16} {:<6} {:>7} {:<12} {}{}",
            Theme::info().render(&format!("{} {}", Icons::user(), user.username)),
            format_overlap_role(user),
            user.touch_count,
            render::format_relative_time(user.last_seen_at),
            mr_str,
            overflow,
        );
    }
    if r.truncated {
        println!(
            "  {}",
            Theme::dim().render("(showing first -n; rerun with a higher --limit)")
        );
    }
    println!();
}

pub(super) fn overlap_to_json(r: &OverlapResult) -> serde_json::Value {
    serde_json::json!({
        "path_query": r.path_query,
        "path_match": r.path_match,
        "truncated": r.truncated,
        "users": r.users.iter().map(|u| serde_json::json!({
            "username": u.username,
            "role": format_overlap_role(u),
            "author_touch_count": u.author_touch_count,
            "review_touch_count": u.review_touch_count,
            "touch_count": u.touch_count,
            "last_seen_at": ms_to_iso(u.last_seen_at),
            "mr_refs": u.mr_refs,
            "mr_refs_total": u.mr_refs_total,
            "mr_refs_truncated": u.mr_refs_truncated,
        })).collect::<Vec<_>>(),
    })
}
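The role tag produced by `format_overlap_role` above collapses the two counters into a compact label. The same logic, extracted as a minimal free function over plain integers for illustration (`role_tag` is a hypothetical name; the real function takes an `&OverlapUser`):

```rust
// Standalone copy of the role-labelling logic from format_overlap_role:
// author/reviewer touch counters collapse to "A", "R", "A+R", or "-".
fn role_tag(author_touch_count: u32, review_touch_count: u32) -> &'static str {
    match (author_touch_count > 0, review_touch_count > 0) {
        (true, true) => "A+R",
        (true, false) => "A",
        (false, true) => "R",
        (false, false) => "-",
    }
}
```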
214
src/cli/commands/who/reviews.rs
Normal file
@@ -0,0 +1,214 @@
use std::collections::HashMap;

use rusqlite::Connection;

use crate::cli::render::{Icons, Theme};
use crate::core::error::Result;

use super::types::*;

// ─── Query: Reviews Mode ────────────────────────────────────────────────────

pub(super) fn query_reviews(
    conn: &Connection,
    username: &str,
    project_id: Option<i64>,
    since_ms: i64,
) -> Result<ReviewsResult> {
    // Force the partial index on DiffNote queries (same rationale as expert mode).
    // COUNT + COUNT(DISTINCT) + category extraction all benefit from 26K DiffNote
    // scan vs 282K notes full scan: measured 25x speedup.
    let total_sql = "SELECT COUNT(*) FROM notes n
        INDEXED BY idx_notes_diffnote_path_created
        JOIN discussions d ON n.discussion_id = d.id
        JOIN merge_requests m ON d.merge_request_id = m.id
        WHERE n.author_username = ?1
          AND n.note_type = 'DiffNote'
          AND n.is_system = 0
          AND (m.author_username IS NULL OR m.author_username != ?1)
          AND n.created_at >= ?2
          AND (?3 IS NULL OR n.project_id = ?3)";

    let total_diffnotes: u32 = conn.query_row(
        total_sql,
        rusqlite::params![username, since_ms, project_id],
        |row| row.get(0),
    )?;

    // Count distinct MRs reviewed
    let mrs_sql = "SELECT COUNT(DISTINCT m.id) FROM notes n
        INDEXED BY idx_notes_diffnote_path_created
        JOIN discussions d ON n.discussion_id = d.id
        JOIN merge_requests m ON d.merge_request_id = m.id
        WHERE n.author_username = ?1
          AND n.note_type = 'DiffNote'
          AND n.is_system = 0
          AND (m.author_username IS NULL OR m.author_username != ?1)
          AND n.created_at >= ?2
          AND (?3 IS NULL OR n.project_id = ?3)";

    let mrs_reviewed: u32 = conn.query_row(
        mrs_sql,
        rusqlite::params![username, since_ms, project_id],
        |row| row.get(0),
    )?;

    // Extract prefixed categories: body starts with **prefix**
    let cat_sql = "SELECT
            SUBSTR(ltrim(n.body), 3, INSTR(SUBSTR(ltrim(n.body), 3), '**') - 1) AS raw_prefix,
            COUNT(*) AS cnt
        FROM notes n INDEXED BY idx_notes_diffnote_path_created
        JOIN discussions d ON n.discussion_id = d.id
        JOIN merge_requests m ON d.merge_request_id = m.id
        WHERE n.author_username = ?1
          AND n.note_type = 'DiffNote'
          AND n.is_system = 0
          AND (m.author_username IS NULL OR m.author_username != ?1)
          AND ltrim(n.body) LIKE '**%**%'
          AND n.created_at >= ?2
          AND (?3 IS NULL OR n.project_id = ?3)
        GROUP BY raw_prefix
        ORDER BY cnt DESC";

    let mut stmt = conn.prepare_cached(cat_sql)?;
    let raw_categories: Vec<(String, u32)> = stmt
        .query_map(rusqlite::params![username, since_ms, project_id], |row| {
            Ok((row.get::<_, String>(0)?, row.get(1)?))
        })?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    // Normalize categories: lowercase, strip trailing colon/space,
    // merge nit/nitpick variants, merge (non-blocking) variants
    let mut merged: HashMap<String, u32> = HashMap::new();
    for (raw, count) in &raw_categories {
        let normalized = normalize_review_prefix(raw);
        if !normalized.is_empty() {
            *merged.entry(normalized).or_insert(0) += count;
        }
    }

    let categorized_count: u32 = merged.values().sum();

    let mut categories: Vec<ReviewCategory> = merged
        .into_iter()
        .map(|(name, count)| {
            let percentage = if categorized_count > 0 {
                f64::from(count) / f64::from(categorized_count) * 100.0
            } else {
                0.0
            };
            ReviewCategory {
                name,
                count,
                percentage,
            }
        })
        .collect();

    categories.sort_by_key(|b| std::cmp::Reverse(b.count));

    Ok(ReviewsResult {
        username: username.to_string(),
        total_diffnotes,
        categorized_count,
        mrs_reviewed,
        categories,
    })
}

/// Normalize a raw review prefix like "Suggestion (non-blocking):" into "suggestion".
pub(super) fn normalize_review_prefix(raw: &str) -> String {
    let s = raw.trim().trim_end_matches(':').trim().to_lowercase();

    // Strip "(non-blocking)" and similar parentheticals
    let s = if let Some(idx) = s.find('(') {
        s[..idx].trim().to_string()
    } else {
        s
    };

    // Merge nit/nitpick variants
    match s.as_str() {
        "nitpick" | "nit" => "nit".to_string(),
        other => other.to_string(),
    }
}

// ─── Human Renderer ─────────────────────────────────────────────────────────

pub(super) fn print_reviews_human(r: &ReviewsResult) {
    println!();
    println!(
        "{}",
        Theme::bold().render(&format!(
            "{} {} -- Review Patterns",
            Icons::user(),
            r.username
        ))
    );
    println!("{}", "\u{2500}".repeat(60));
    println!();

    if r.total_diffnotes == 0 {
        println!(
            "  {}",
            Theme::dim().render("No review comments found for this user.")
        );
        println!();
        return;
    }

    println!(
        "  {} DiffNotes across {} MRs ({} categorized)",
        Theme::bold().render(&r.total_diffnotes.to_string()),
        Theme::bold().render(&r.mrs_reviewed.to_string()),
        Theme::bold().render(&r.categorized_count.to_string()),
    );
    println!();

    if !r.categories.is_empty() {
        println!(
            "  {:<16} {:>6} {:>6}",
            Theme::bold().render("Category"),
            Theme::bold().render("Count"),
            Theme::bold().render("%"),
        );

        for cat in &r.categories {
            println!(
                "  {:<16} {:>6} {:>5.1}%",
                Theme::info().render(&cat.name),
                cat.count,
                cat.percentage,
            );
        }
    }

    let uncategorized = r.total_diffnotes - r.categorized_count;
    if uncategorized > 0 {
        println!();
        println!(
            "  {} {} uncategorized (no **prefix** convention)",
            Theme::dim().render("Note:"),
            uncategorized,
        );
    }

    println!();
}

// ─── Robot Renderer ─────────────────────────────────────────────────────────

pub(super) fn reviews_to_json(r: &ReviewsResult) -> serde_json::Value {
    serde_json::json!({
        "username": r.username,
        "total_diffnotes": r.total_diffnotes,
        "categorized_count": r.categorized_count,
        "mrs_reviewed": r.mrs_reviewed,
        "categories": r.categories.iter().map(|c| serde_json::json!({
            "name": c.name,
            "count": c.count,
            "percentage": (c.percentage * 10.0).round() / 10.0,
        })).collect::<Vec<_>>(),
    })
}
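The prefix normalization in reviews.rs is self-contained and easy to exercise on its own. This is a copy of `normalize_review_prefix` as a free function, taken directly from the diff above, so its behavior on typical `**Prefix**` conventions can be checked:

```rust
/// Normalize a raw review prefix like "Suggestion (non-blocking):" into "suggestion".
/// (Copied from src/cli/commands/who/reviews.rs in this diff.)
fn normalize_review_prefix(raw: &str) -> String {
    // Lowercase and strip a trailing colon plus surrounding whitespace.
    let s = raw.trim().trim_end_matches(':').trim().to_lowercase();

    // Strip "(non-blocking)" and similar parentheticals.
    let s = if let Some(idx) = s.find('(') {
        s[..idx].trim().to_string()
    } else {
        s
    };

    // Merge nit/nitpick variants.
    match s.as_str() {
        "nitpick" | "nit" => "nit".to_string(),
        other => other.to_string(),
    }
}
```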
185
src/cli/commands/who/types.rs
Normal file
@@ -0,0 +1,185 @@
// ─── Result Types ────────────────────────────────────────────────────────────
//
// All pub result structs and enums for the `who` command family.
// Zero logic — pure data definitions.

/// Top-level run result: carries resolved inputs + the mode-specific result.
pub struct WhoRun {
    pub resolved_input: WhoResolvedInput,
    pub result: WhoResult,
}

/// Resolved query parameters -- computed once, used for robot JSON reproducibility.
pub struct WhoResolvedInput {
    pub mode: String,
    pub project_id: Option<i64>,
    pub project_path: Option<String>,
    pub since_ms: Option<i64>,
    pub since_iso: Option<String>,
    /// "default" (mode default applied), "explicit" (user provided --since), "none" (no window)
    pub since_mode: String,
    pub limit: Option<u16>,
}

/// Top-level result enum -- one variant per mode.
pub enum WhoResult {
    Expert(ExpertResult),
    Workload(WorkloadResult),
    Reviews(ReviewsResult),
    Active(ActiveResult),
    Overlap(OverlapResult),
}

// --- Expert ---

pub struct ExpertResult {
    pub path_query: String,
    /// "exact" or "prefix" -- how the path was matched in SQL.
    pub path_match: String,
    pub experts: Vec<Expert>,
    pub truncated: bool,
}

pub struct Expert {
    pub username: String,
    pub score: i64,
    /// Unrounded f64 score (only populated when explain_score is set).
    pub score_raw: Option<f64>,
    /// Per-component score breakdown (only populated when explain_score is set).
    pub components: Option<ScoreComponents>,
    pub review_mr_count: u32,
    pub review_note_count: u32,
    pub author_mr_count: u32,
    pub last_seen_ms: i64,
    /// Stable MR references like "group/project!123"
    pub mr_refs: Vec<String>,
    pub mr_refs_total: u32,
    pub mr_refs_truncated: bool,
    /// Per-MR detail breakdown (only populated when --detail is set)
    pub details: Option<Vec<ExpertMrDetail>>,
}

/// Per-component score breakdown for explain mode.
pub struct ScoreComponents {
    pub author: f64,
    pub reviewer_participated: f64,
    pub reviewer_assigned: f64,
    pub notes: f64,
}

#[derive(Clone)]
pub struct ExpertMrDetail {
    pub mr_ref: String,
    pub title: String,
    /// "R", "A", or "A+R"
    pub role: String,
    pub note_count: u32,
    pub last_activity_ms: i64,
}

// --- Workload ---

pub struct WorkloadResult {
    pub username: String,
    pub assigned_issues: Vec<WorkloadIssue>,
    pub authored_mrs: Vec<WorkloadMr>,
    pub reviewing_mrs: Vec<WorkloadMr>,
    pub unresolved_discussions: Vec<WorkloadDiscussion>,
    pub assigned_issues_truncated: bool,
    pub authored_mrs_truncated: bool,
    pub reviewing_mrs_truncated: bool,
    pub unresolved_discussions_truncated: bool,
}

pub struct WorkloadIssue {
    pub iid: i64,
    /// Canonical reference: `group/project#iid`
    pub ref_: String,
    pub title: String,
    pub project_path: String,
    pub updated_at: i64,
}

pub struct WorkloadMr {
    pub iid: i64,
    /// Canonical reference: `group/project!iid`
    pub ref_: String,
    pub title: String,
    pub draft: bool,
    pub project_path: String,
    pub author_username: Option<String>,
    pub updated_at: i64,
}

pub struct WorkloadDiscussion {
    pub entity_type: String,
    pub entity_iid: i64,
    /// Canonical reference: `group/project!iid` or `group/project#iid`
    pub ref_: String,
    pub entity_title: String,
    pub project_path: String,
    pub last_note_at: i64,
}

// --- Reviews ---

pub struct ReviewsResult {
    pub username: String,
    pub total_diffnotes: u32,
    pub categorized_count: u32,
    pub mrs_reviewed: u32,
    pub categories: Vec<ReviewCategory>,
}

pub struct ReviewCategory {
    pub name: String,
    pub count: u32,
    pub percentage: f64,
}

// --- Active ---

pub struct ActiveResult {
    pub discussions: Vec<ActiveDiscussion>,
    /// Count of unresolved discussions *within the time window*, not total across all time.
    pub total_unresolved_in_window: u32,
    pub truncated: bool,
}

pub struct ActiveDiscussion {
    pub discussion_id: i64,
    pub entity_type: String,
    pub entity_iid: i64,
    pub entity_title: String,
    pub project_path: String,
    pub last_note_at: i64,
    pub note_count: u32,
    pub participants: Vec<String>,
    pub participants_total: u32,
    pub participants_truncated: bool,
}

// --- Overlap ---

pub struct OverlapResult {
    pub path_query: String,
    /// "exact" or "prefix" -- how the path was matched in SQL.
    pub path_match: String,
    pub users: Vec<OverlapUser>,
    pub truncated: bool,
}

pub struct OverlapUser {
    pub username: String,
    pub author_touch_count: u32,
    pub review_touch_count: u32,
    pub touch_count: u32,
    pub last_seen_at: i64,
    /// Stable MR references like "group/project!123"
    pub mr_refs: Vec<String>,
    pub mr_refs_total: u32,
    pub mr_refs_truncated: bool,
}

/// Maximum MR references to retain per user in output (shared across modes).
pub const MAX_MR_REFS_PER_USER: usize = 50;
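Several modes cap per-user MR references with `MAX_MR_REFS_PER_USER`, recording the pre-truncation total and a truncation flag before cutting the list (see the accumulator-to-struct conversion in `query_overlap`). A minimal sketch of that shared pattern (`truncate_refs` is a hypothetical helper name, not part of this diff):

```rust
// Sketch of the shared truncation pattern: sort for determinism,
// remember the total and whether anything was dropped, then truncate.
const MAX_MR_REFS_PER_USER: usize = 50;

fn truncate_refs(mut refs: Vec<String>) -> (Vec<String>, u32, bool) {
    refs.sort();
    let total = refs.len() as u32;
    let truncated = refs.len() > MAX_MR_REFS_PER_USER;
    if truncated {
        refs.truncate(MAX_MR_REFS_PER_USER);
    }
    (refs, total, truncated)
}
```

Keeping `total` and `truncated` alongside the capped list lets the JSON renderers report `mr_refs_total` and `mr_refs_truncated` without re-querying.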
370
src/cli/commands/who/workload.rs
Normal file
@@ -0,0 +1,370 @@
use rusqlite::Connection;

use crate::cli::render::{self, Icons, Theme};
use crate::core::error::Result;
use crate::core::time::ms_to_iso;

use super::types::*;

// ─── Query: Workload Mode ───────────────────────────────────────────────────

pub(super) fn query_workload(
    conn: &Connection,
    username: &str,
    project_id: Option<i64>,
    since_ms: Option<i64>,
    limit: usize,
    include_closed: bool,
) -> Result<WorkloadResult> {
    let limit_plus_one = (limit + 1) as i64;

    // Query 1: Open issues assigned to user
    let issues_sql = "SELECT i.iid,
                (p.path_with_namespace || '#' || i.iid) AS ref,
                i.title, p.path_with_namespace, i.updated_at
         FROM issues i
         JOIN issue_assignees ia ON ia.issue_id = i.id
         JOIN projects p ON i.project_id = p.id
         WHERE ia.username = ?1
           AND i.state = 'opened'
           AND (?2 IS NULL OR i.project_id = ?2)
           AND (?3 IS NULL OR i.updated_at >= ?3)
         ORDER BY i.updated_at DESC
         LIMIT ?4";

    let mut stmt = conn.prepare_cached(issues_sql)?;
    let assigned_issues: Vec<WorkloadIssue> = stmt
        .query_map(
            rusqlite::params![username, project_id, since_ms, limit_plus_one],
            |row| {
                Ok(WorkloadIssue {
                    iid: row.get(0)?,
                    ref_: row.get(1)?,
                    title: row.get(2)?,
                    project_path: row.get(3)?,
                    updated_at: row.get(4)?,
                })
            },
        )?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    // Query 2: Open MRs authored
    let authored_sql = "SELECT m.iid,
                (p.path_with_namespace || '!' || m.iid) AS ref,
                m.title, m.draft, p.path_with_namespace, m.updated_at
         FROM merge_requests m
         JOIN projects p ON m.project_id = p.id
         WHERE m.author_username = ?1
           AND m.state = 'opened'
           AND (?2 IS NULL OR m.project_id = ?2)
           AND (?3 IS NULL OR m.updated_at >= ?3)
         ORDER BY m.updated_at DESC
         LIMIT ?4";
    let mut stmt = conn.prepare_cached(authored_sql)?;
    let authored_mrs: Vec<WorkloadMr> = stmt
        .query_map(
            rusqlite::params![username, project_id, since_ms, limit_plus_one],
            |row| {
                Ok(WorkloadMr {
                    iid: row.get(0)?,
                    ref_: row.get(1)?,
                    title: row.get(2)?,
                    draft: row.get::<_, i32>(3)? != 0,
                    project_path: row.get(4)?,
                    author_username: None,
                    updated_at: row.get(5)?,
                })
            },
        )?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    // Query 3: Open MRs where user is reviewer
    let reviewing_sql = "SELECT m.iid,
                (p.path_with_namespace || '!' || m.iid) AS ref,
                m.title, m.draft, p.path_with_namespace,
                m.author_username, m.updated_at
         FROM merge_requests m
         JOIN mr_reviewers r ON r.merge_request_id = m.id
         JOIN projects p ON m.project_id = p.id
         WHERE r.username = ?1
           AND m.state = 'opened'
           AND (?2 IS NULL OR m.project_id = ?2)
           AND (?3 IS NULL OR m.updated_at >= ?3)
         ORDER BY m.updated_at DESC
         LIMIT ?4";
    let mut stmt = conn.prepare_cached(reviewing_sql)?;
    let reviewing_mrs: Vec<WorkloadMr> = stmt
        .query_map(
            rusqlite::params![username, project_id, since_ms, limit_plus_one],
            |row| {
                Ok(WorkloadMr {
                    iid: row.get(0)?,
                    ref_: row.get(1)?,
                    title: row.get(2)?,
                    draft: row.get::<_, i32>(3)? != 0,
                    project_path: row.get(4)?,
                    author_username: row.get(5)?,
                    updated_at: row.get(6)?,
                })
            },
        )?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    // Query 4: Unresolved discussions where user participated
    let state_filter = if include_closed {
        ""
    } else {
        " AND (i.id IS NULL OR i.state = 'opened')
          AND (m.id IS NULL OR m.state = 'opened')"
    };
    let disc_sql = format!(
        "SELECT d.noteable_type,
                COALESCE(i.iid, m.iid) AS entity_iid,
                (p.path_with_namespace ||
                 CASE WHEN d.noteable_type = 'MergeRequest' THEN '!' ELSE '#' END ||
                 COALESCE(i.iid, m.iid)) AS ref,
                COALESCE(i.title, m.title) AS entity_title,
                p.path_with_namespace,
                d.last_note_at
         FROM discussions d
         JOIN projects p ON d.project_id = p.id
         LEFT JOIN issues i ON d.issue_id = i.id
         LEFT JOIN merge_requests m ON d.merge_request_id = m.id
         WHERE d.resolvable = 1 AND d.resolved = 0
           AND EXISTS (
               SELECT 1 FROM notes n
               WHERE n.discussion_id = d.id
                 AND n.author_username = ?1
                 AND n.is_system = 0
           )
           AND (?2 IS NULL OR d.project_id = ?2)
           AND (?3 IS NULL OR d.last_note_at >= ?3)
           {state_filter}
         ORDER BY d.last_note_at DESC
         LIMIT ?4"
    );

    let mut stmt = conn.prepare_cached(&disc_sql)?;
    let unresolved_discussions: Vec<WorkloadDiscussion> = stmt
        .query_map(
            rusqlite::params![username, project_id, since_ms, limit_plus_one],
            |row| {
                let noteable_type: String = row.get(0)?;
                let entity_type = if noteable_type == "MergeRequest" {
                    "MR"
                } else {
                    "Issue"
                };
                Ok(WorkloadDiscussion {
                    entity_type: entity_type.to_string(),
                    entity_iid: row.get(1)?,
                    ref_: row.get(2)?,
                    entity_title: row.get(3)?,
                    project_path: row.get(4)?,
                    last_note_at: row.get(5)?,
                })
            },
        )?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    // Truncation detection
    let assigned_issues_truncated = assigned_issues.len() > limit;
    let authored_mrs_truncated = authored_mrs.len() > limit;
    let reviewing_mrs_truncated = reviewing_mrs.len() > limit;
    let unresolved_discussions_truncated = unresolved_discussions.len() > limit;

    let assigned_issues: Vec<WorkloadIssue> = assigned_issues.into_iter().take(limit).collect();
    let authored_mrs: Vec<WorkloadMr> = authored_mrs.into_iter().take(limit).collect();
    let reviewing_mrs: Vec<WorkloadMr> = reviewing_mrs.into_iter().take(limit).collect();
    let unresolved_discussions: Vec<WorkloadDiscussion> =
        unresolved_discussions.into_iter().take(limit).collect();

    Ok(WorkloadResult {
        username: username.to_string(),
        assigned_issues,
        authored_mrs,
        reviewing_mrs,
        unresolved_discussions,
        assigned_issues_truncated,
        authored_mrs_truncated,
        reviewing_mrs_truncated,
        unresolved_discussions_truncated,
    })
}

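Each query above fetches `limit + 1` rows, compares the result length against `limit` to set a truncation flag, and then trims to `limit`. The pattern can be sketched in plain Rust; the helper name `take_with_truncation` is ours for illustration, not from the codebase:

```rust
// Trim a result set to `limit` and report whether rows were cut off.
// Callers fetch LIMIT limit+1 so one extra row acts as the signal.
fn take_with_truncation<T>(mut items: Vec<T>, limit: usize) -> (Vec<T>, bool) {
    let truncated = items.len() > limit;
    items.truncate(limit);
    (items, truncated)
}

fn main() {
    // Pretend the SQL LIMIT was 20 + 1 and all 21 rows came back.
    let rows: Vec<i32> = (0..21).collect();
    let (kept, truncated) = take_with_truncation(rows, 20);
    assert_eq!(kept.len(), 20);
    assert!(truncated);
    println!("kept={} truncated={}", kept.len(), truncated);
}
```

This avoids a second `COUNT(*)` query: the caller learns "there is more" without ever materializing the full result set.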
// ─── Human Renderer: Workload ───────────────────────────────────────────────

pub(super) fn print_workload_human(r: &WorkloadResult) {
    println!();
    println!(
        "{}",
        Theme::bold().render(&format!(
            "{} {} -- Workload Summary",
            Icons::user(),
            r.username
        ))
    );
    println!("{}", "\u{2500}".repeat(60));

    if !r.assigned_issues.is_empty() {
        println!(
            "{}",
            render::section_divider(&format!("Assigned Issues ({})", r.assigned_issues.len()))
        );
        for item in &r.assigned_issues {
            println!(
                "  {} {} {}",
                Theme::info().render(&item.ref_),
                render::truncate(&item.title, 40),
                Theme::dim().render(&render::format_relative_time(item.updated_at)),
            );
        }
        if r.assigned_issues_truncated {
            println!(
                "  {}",
                Theme::dim().render("(truncated; rerun with a higher --limit)")
            );
        }
    }

    if !r.authored_mrs.is_empty() {
        println!(
            "{}",
            render::section_divider(&format!("Authored MRs ({})", r.authored_mrs.len()))
        );
        for mr in &r.authored_mrs {
            let draft = if mr.draft { " [draft]" } else { "" };
            println!(
                "  {} {}{} {}",
                Theme::info().render(&mr.ref_),
                render::truncate(&mr.title, 35),
                Theme::dim().render(draft),
                Theme::dim().render(&render::format_relative_time(mr.updated_at)),
            );
        }
        if r.authored_mrs_truncated {
            println!(
                "  {}",
                Theme::dim().render("(truncated; rerun with a higher --limit)")
            );
        }
    }

    if !r.reviewing_mrs.is_empty() {
        println!(
            "{}",
            render::section_divider(&format!("Reviewing MRs ({})", r.reviewing_mrs.len()))
        );
        for mr in &r.reviewing_mrs {
            let author = mr
                .author_username
                .as_deref()
                .map(|a| format!(" by @{a}"))
                .unwrap_or_default();
            println!(
                "  {} {}{} {}",
                Theme::info().render(&mr.ref_),
                render::truncate(&mr.title, 30),
                Theme::dim().render(&author),
                Theme::dim().render(&render::format_relative_time(mr.updated_at)),
            );
        }
        if r.reviewing_mrs_truncated {
            println!(
                "  {}",
                Theme::dim().render("(truncated; rerun with a higher --limit)")
            );
        }
    }

    if !r.unresolved_discussions.is_empty() {
        println!(
            "{}",
            render::section_divider(&format!(
                "Unresolved Discussions ({})",
                r.unresolved_discussions.len()
            ))
        );
        for disc in &r.unresolved_discussions {
            println!(
                "  {} {} {} {}",
                Theme::dim().render(&disc.entity_type),
                Theme::info().render(&disc.ref_),
                render::truncate(&disc.entity_title, 35),
                Theme::dim().render(&render::format_relative_time(disc.last_note_at)),
            );
        }
        if r.unresolved_discussions_truncated {
            println!(
                "  {}",
                Theme::dim().render("(truncated; rerun with a higher --limit)")
            );
        }
    }

    if r.assigned_issues.is_empty()
        && r.authored_mrs.is_empty()
        && r.reviewing_mrs.is_empty()
        && r.unresolved_discussions.is_empty()
    {
        println!();
        println!(
            "  {}",
            Theme::dim().render("No open work items found for this user.")
        );
    }

    println!();
}

// ─── JSON Renderer: Workload ────────────────────────────────────────────────

pub(super) fn workload_to_json(r: &WorkloadResult) -> serde_json::Value {
    serde_json::json!({
        "username": r.username,
        "assigned_issues": r.assigned_issues.iter().map(|i| serde_json::json!({
            "iid": i.iid,
            "ref": i.ref_,
            "title": i.title,
            "project_path": i.project_path,
            "updated_at": ms_to_iso(i.updated_at),
        })).collect::<Vec<_>>(),
        "authored_mrs": r.authored_mrs.iter().map(|m| serde_json::json!({
            "iid": m.iid,
            "ref": m.ref_,
            "title": m.title,
            "draft": m.draft,
            "project_path": m.project_path,
            "updated_at": ms_to_iso(m.updated_at),
        })).collect::<Vec<_>>(),
        "reviewing_mrs": r.reviewing_mrs.iter().map(|m| serde_json::json!({
            "iid": m.iid,
            "ref": m.ref_,
            "title": m.title,
            "draft": m.draft,
            "project_path": m.project_path,
            "author_username": m.author_username,
            "updated_at": ms_to_iso(m.updated_at),
        })).collect::<Vec<_>>(),
        "unresolved_discussions": r.unresolved_discussions.iter().map(|d| serde_json::json!({
            "entity_type": d.entity_type,
            "entity_iid": d.entity_iid,
            "ref": d.ref_,
            "entity_title": d.entity_title,
            "project_path": d.project_path,
            "last_note_at": ms_to_iso(d.last_note_at),
        })).collect::<Vec<_>>(),
        "summary": {
            "assigned_issue_count": r.assigned_issues.len(),
            "authored_mr_count": r.authored_mrs.len(),
            "reviewing_mr_count": r.reviewing_mrs.len(),
            "unresolved_discussion_count": r.unresolved_discussions.len(),
        },
        "truncation": {
            "assigned_issues_truncated": r.assigned_issues_truncated,
            "authored_mrs_truncated": r.authored_mrs_truncated,
            "reviewing_mrs_truncated": r.reviewing_mrs_truncated,
            "unresolved_discussions_truncated": r.unresolved_discussions_truncated,
        }
    })
}

@@ -54,15 +54,27 @@ fn insert_mr(conn: &Connection, id: i64, project_id: i64, iid: i64, author: &str
 }

 fn insert_issue(conn: &Connection, id: i64, project_id: i64, iid: i64, author: &str) {
     insert_issue_with_state(conn, id, project_id, iid, author, "opened");
 }

 fn insert_issue_with_state(
     conn: &Connection,
     id: i64,
     project_id: i64,
     iid: i64,
     author: &str,
     state: &str,
 ) {
     conn.execute(
         "INSERT INTO issues (id, gitlab_id, project_id, iid, title, state, author_username, created_at, updated_at, last_seen_at)
-         VALUES (?1, ?2, ?3, ?4, ?5, 'opened', ?6, ?7, ?8, ?9)",
+         VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10)",
         rusqlite::params![
             id,
             id * 10,
             project_id,
             iid,
             format!("Issue {iid}"),
             state,
             author,
             now_ms(),
             now_ms(),
@@ -134,6 +146,24 @@ fn insert_diffnote(
     .unwrap();
 }

 fn insert_note(conn: &Connection, id: i64, discussion_id: i64, project_id: i64, author: &str) {
     conn.execute(
         "INSERT INTO notes (id, gitlab_id, discussion_id, project_id, note_type, is_system, author_username, body, created_at, updated_at, last_seen_at)
          VALUES (?1, ?2, ?3, ?4, 'DiscussionNote', 0, ?5, 'comment', ?6, ?7, ?8)",
         rusqlite::params![
             id,
             id * 10,
             discussion_id,
             project_id,
             author,
             now_ms(),
             now_ms(),
             now_ms()
         ],
     )
     .unwrap();
 }

 fn insert_assignee(conn: &Connection, issue_id: i64, username: &str) {
     conn.execute(
         "INSERT INTO issue_assignees (issue_id, username) VALUES (?1, ?2)",
@@ -256,13 +286,14 @@ fn test_is_file_path_discrimination() {
         reviews: false,
         since: None,
         project: None,
-        limit: 20,
+        limit: None,
         detail: false,
         no_detail: false,
         fields: None,
         as_of: None,
         explain_score: false,
         include_bots: false,
+        include_closed: false,
         all_history: false,
     })
     .unwrap(),
@@ -279,13 +310,14 @@ fn test_is_file_path_discrimination() {
         reviews: false,
         since: None,
         project: None,
-        limit: 20,
+        limit: None,
         detail: false,
         no_detail: false,
         fields: None,
         as_of: None,
         explain_score: false,
         include_bots: false,
+        include_closed: false,
         all_history: false,
     })
     .unwrap(),
@@ -302,13 +334,14 @@ fn test_is_file_path_discrimination() {
         reviews: false,
         since: None,
         project: None,
-        limit: 20,
+        limit: None,
         detail: false,
         no_detail: false,
         fields: None,
         as_of: None,
         explain_score: false,
         include_bots: false,
+        include_closed: false,
         all_history: false,
     })
     .unwrap(),
@@ -325,13 +358,14 @@ fn test_is_file_path_discrimination() {
         reviews: true,
         since: None,
         project: None,
-        limit: 20,
+        limit: None,
         detail: false,
         no_detail: false,
         fields: None,
         as_of: None,
         explain_score: false,
         include_bots: false,
+        include_closed: false,
         all_history: false,
     })
     .unwrap(),
@@ -348,13 +382,14 @@ fn test_is_file_path_discrimination() {
         reviews: false,
         since: None,
         project: None,
-        limit: 20,
+        limit: None,
         detail: false,
         no_detail: false,
         fields: None,
         as_of: None,
         explain_score: false,
         include_bots: false,
+        include_closed: false,
         all_history: false,
     })
     .unwrap(),
@@ -371,13 +406,14 @@ fn test_is_file_path_discrimination() {
         reviews: false,
         since: None,
         project: None,
-        limit: 20,
+        limit: None,
         detail: false,
         no_detail: false,
         fields: None,
         as_of: None,
         explain_score: false,
         include_bots: false,
+        include_closed: false,
         all_history: false,
     })
     .unwrap(),
@@ -395,13 +431,14 @@ fn test_detail_rejected_outside_expert_mode() {
         reviews: false,
         since: None,
         project: None,
-        limit: 20,
+        limit: None,
         detail: true,
         no_detail: false,
         fields: None,
         as_of: None,
         explain_score: false,
         include_bots: false,
+        include_closed: false,
         all_history: false,
     };
     let mode = resolve_mode(&args).unwrap();
@@ -423,13 +460,14 @@ fn test_detail_allowed_in_expert_mode() {
         reviews: false,
         since: None,
         project: None,
-        limit: 20,
+        limit: None,
         detail: true,
         no_detail: false,
         fields: None,
         as_of: None,
         explain_score: false,
         include_bots: false,
+        include_closed: false,
         all_history: false,
     };
     let mode = resolve_mode(&args).unwrap();
@@ -579,7 +617,7 @@ fn test_workload_query() {
     insert_assignee(&conn, 1, "dev_a");
     insert_mr(&conn, 1, 1, 100, "dev_a", "opened");

-    let result = query_workload(&conn, "dev_a", None, None, 20).unwrap();
+    let result = query_workload(&conn, "dev_a", None, None, 20, true).unwrap();
     assert_eq!(result.assigned_issues.len(), 1);
     assert_eq!(result.authored_mrs.len(), 1);
 }
@@ -626,7 +664,7 @@ fn test_active_query() {
     // Second note by same participant -- note_count should be 2, participants still ["reviewer_b"]
     insert_diffnote(&conn, 2, 1, 1, "reviewer_b", "src/foo.rs", "follow-up");

-    let result = query_active(&conn, None, 0, 20).unwrap();
+    let result = query_active(&conn, None, 0, 20, true).unwrap();
     assert_eq!(result.total_unresolved_in_window, 1);
     assert_eq!(result.discussions.len(), 1);
     assert_eq!(result.discussions[0].participants, vec!["reviewer_b"]);
@@ -878,7 +916,7 @@ fn test_active_participants_sorted() {
     insert_diffnote(&conn, 1, 1, 1, "zebra_user", "src/foo.rs", "note 1");
     insert_diffnote(&conn, 2, 1, 1, "alpha_user", "src/foo.rs", "note 2");

-    let result = query_active(&conn, None, 0, 20).unwrap();
+    let result = query_active(&conn, None, 0, 20, true).unwrap();
     assert_eq!(
         result.discussions[0].participants,
         vec!["alpha_user", "zebra_user"]
@@ -3265,3 +3303,94 @@ fn test_deterministic_accumulation_order() {
         );
     }
 }

 // ─── Tests: include_closed filter ────────────────────────────────────────────

 #[test]
 fn workload_excludes_closed_entity_discussions() {
     let conn = setup_test_db();
     insert_project(&conn, 1, "group/repo");

     // Open issue with unresolved discussion
     insert_issue_with_state(&conn, 10, 1, 10, "someone", "opened");
     insert_discussion(&conn, 100, 1, None, Some(10), true, false);
     insert_note(&conn, 1000, 100, 1, "alice");

     // Closed issue with unresolved discussion
     insert_issue_with_state(&conn, 20, 1, 20, "someone", "closed");
     insert_discussion(&conn, 200, 1, None, Some(20), true, false);
     insert_note(&conn, 2000, 200, 1, "alice");

     // Default: exclude closed
     let result = query_workload(&conn, "alice", None, None, 50, false).unwrap();
     assert_eq!(result.unresolved_discussions.len(), 1);
     assert_eq!(result.unresolved_discussions[0].entity_iid, 10);
 }

 #[test]
 fn workload_include_closed_flag_shows_all() {
     let conn = setup_test_db();
     insert_project(&conn, 1, "group/repo");

     insert_issue_with_state(&conn, 10, 1, 10, "someone", "opened");
     insert_discussion(&conn, 100, 1, None, Some(10), true, false);
     insert_note(&conn, 1000, 100, 1, "alice");

     insert_issue_with_state(&conn, 20, 1, 20, "someone", "closed");
     insert_discussion(&conn, 200, 1, None, Some(20), true, false);
     insert_note(&conn, 2000, 200, 1, "alice");

     let result = query_workload(&conn, "alice", None, None, 50, true).unwrap();
     assert_eq!(result.unresolved_discussions.len(), 2);
 }

 #[test]
 fn workload_excludes_merged_mr_discussions() {
     let conn = setup_test_db();
     insert_project(&conn, 1, "group/repo");

     // Open MR with unresolved discussion
     insert_mr(&conn, 10, 1, 10, "someone", "opened");
     insert_discussion(&conn, 100, 1, Some(10), None, true, false);
     insert_note(&conn, 1000, 100, 1, "alice");

     // Merged MR with unresolved discussion
     insert_mr(&conn, 20, 1, 20, "someone", "merged");
     insert_discussion(&conn, 200, 1, Some(20), None, true, false);
     insert_note(&conn, 2000, 200, 1, "alice");

     let result = query_workload(&conn, "alice", None, None, 50, false).unwrap();
     assert_eq!(result.unresolved_discussions.len(), 1);
     assert_eq!(result.unresolved_discussions[0].entity_iid, 10);

     // include_closed shows both
     let result = query_workload(&conn, "alice", None, None, 50, true).unwrap();
     assert_eq!(result.unresolved_discussions.len(), 2);
 }

 #[test]
 fn active_excludes_closed_entity_discussions() {
     let conn = setup_test_db();
     insert_project(&conn, 1, "group/repo");

     // Open issue with unresolved discussion
     insert_issue_with_state(&conn, 10, 1, 10, "someone", "opened");
     insert_discussion(&conn, 100, 1, None, Some(10), true, false);
     insert_note(&conn, 1000, 100, 1, "alice");

     // Closed issue with unresolved discussion
     insert_issue_with_state(&conn, 20, 1, 20, "someone", "closed");
     insert_discussion(&conn, 200, 1, None, Some(20), true, false);
     insert_note(&conn, 2000, 200, 1, "alice");

     // Default: exclude closed
     let result = query_active(&conn, None, 0, 50, false).unwrap();
     assert_eq!(result.discussions.len(), 1);
     assert_eq!(result.discussions[0].entity_iid, 10);
     assert_eq!(result.total_unresolved_in_window, 1);

     // include_closed shows both
     let result = query_active(&conn, None, 0, 50, true).unwrap();
     assert_eq!(result.discussions.len(), 2);
     assert_eq!(result.total_unresolved_in_window, 2);
 }

||||
216
src/cli/mod.rs
216
src/cli/mod.rs
@@ -4,7 +4,7 @@ pub mod progress;
 pub mod render;
 pub mod robot;

-use clap::{Parser, Subcommand};
+use clap::{Args, Parser, Subcommand};
 use std::io::IsTerminal;

 #[derive(Parser)]
@@ -16,7 +16,9 @@ use std::io::IsTerminal;
     GITLAB_TOKEN      GitLab personal access token (or name set in config)
     LORE_ROBOT        Enable robot/JSON mode (non-empty, non-zero value)
     LORE_CONFIG_PATH  Override config file location
-    NO_COLOR          Disable color output (any non-empty value)")]
+    NO_COLOR          Disable color output (any non-empty value)
+    LORE_ICONS        Override icon set: nerd, unicode, or ascii
+    NERD_FONTS        Enable Nerd Font icons when set to a non-empty value")]
 pub struct Cli {
     /// Path to config file
     #[arg(short = 'c', long, global = true, help = "Path to config file")]
@@ -135,19 +137,35 @@ pub enum Commands {
     Count(CountArgs),

     /// Show sync state
-    #[command(visible_alias = "st")]
+    #[command(
+        visible_alias = "st",
+        after_help = "\x1b[1mExamples:\x1b[0m
+  lore status          # Show last sync times per project
+  lore --robot status  # JSON output for automation"
+    )]
     Status,

     /// Verify GitLab authentication
     #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore auth          # Verify token and show user info
  lore --robot auth  # JSON output for automation")]
     Auth,

     /// Check environment health
     #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore doctor          # Check config, token, database, Ollama
  lore --robot doctor  # JSON output for automation")]
     Doctor,

     /// Show version information
     Version,

     /// Initialize configuration and database
     #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore init          # Interactive setup
  lore init --force  # Overwrite existing config
  lore --robot init --gitlab-url https://gitlab.com \\
    --token-env-var GITLAB_TOKEN --projects group/repo  # Non-interactive setup")]
     Init {
         /// Skip overwrite confirmation
         #[arg(short = 'f', long)]
@@ -174,11 +192,14 @@ pub enum Commands {
         default_project: Option<String>,
     },

     /// Back up local database (not yet implemented)
     #[command(hide = true)]
     Backup,

     /// Reset local database (not yet implemented)
     #[command(hide = true)]
     Reset {
         /// Skip confirmation prompt
         #[arg(short = 'y', long)]
         yes: bool,
     },
@@ -202,9 +223,15 @@ pub enum Commands {
     Sync(SyncArgs),

     /// Run pending database migrations
     #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore migrate          # Apply pending migrations
  lore --robot migrate  # JSON output for automation")]
     Migrate,

     /// Quick health check: config, database, schema version
     #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore health          # Quick pre-flight check (exit 0 = healthy)
  lore --robot health  # JSON output for automation")]
     Health,

     /// Machine-readable command manifest for agent self-discovery
@@ -234,6 +261,9 @@ pub enum Commands {
     /// People intelligence: experts, workload, active discussions, overlap
     Who(WhoArgs),

     /// Personal work dashboard: open issues, authored/reviewing MRs, activity
     Me(MeArgs),

     /// Show MRs that touched a file, with linked discussions
     #[command(name = "file-history")]
     FileHistory(FileHistoryArgs),
@@ -242,6 +272,10 @@ pub enum Commands {
     Trace(TraceArgs),

     /// Detect discussion divergence from original intent
     #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore drift issues 42                        # Check drift on issue #42
  lore drift issues 42 --threshold 0.3        # Custom similarity threshold
  lore --robot drift issues 42 -p group/repo  # JSON output, scoped to project")]
     Drift {
         /// Entity type (currently only "issues" supported)
         #[arg(value_parser = ["issues"])]
@@ -259,6 +293,23 @@ pub enum Commands {
         project: Option<String>,
     },

     /// Manage cron-based automatic syncing
     #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore cron install                # Install cron job (every 8 minutes)
  lore cron install --interval 15  # Custom interval
  lore cron status                 # Check if cron is installed
  lore cron uninstall              # Remove cron job")]
     Cron(CronArgs),

     /// Manage stored GitLab token
     #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore token set                    # Interactive token entry + validation
  lore token set --token glpat-xxx  # Non-interactive token storage
  echo glpat-xxx | lore token set   # Pipe token from stdin
  lore token show                   # Show token (masked)
  lore token show --unmask          # Show full token")]
     Token(TokenArgs),

     #[command(hide = true)]
     List {
         #[arg(value_parser = ["issues", "mrs"])]
@@ -344,7 +395,7 @@ pub struct IssuesArgs {
     pub fields: Option<Vec<String>>,

     /// Filter by state (opened, closed, all)
-    #[arg(short = 's', long, help_heading = "Filters")]
+    #[arg(short = 's', long, help_heading = "Filters", value_parser = ["opened", "closed", "all"])]
     pub state: Option<String>,

     /// Filter by project path
@@ -438,7 +489,7 @@ pub struct MrsArgs {
     pub fields: Option<Vec<String>>,

     /// Filter by state (opened, merged, closed, locked, all)
-    #[arg(short = 's', long, help_heading = "Filters")]
+    #[arg(short = 's', long, help_heading = "Filters", value_parser = ["opened", "merged", "closed", "locked", "all"])]
     pub state: Option<String>,

     /// Filter by project path
@@ -535,15 +586,6 @@ pub struct NotesArgs {
     #[arg(long, help_heading = "Output", value_delimiter = ',')]
     pub fields: Option<Vec<String>>,

-    /// Output format (table, json, jsonl, csv)
-    #[arg(
-        long,
-        default_value = "table",
-        value_parser = ["table", "json", "jsonl", "csv"],
-        help_heading = "Output"
-    )]
-    pub format: String,
-
     /// Filter by author username
     #[arg(short = 'a', long, help_heading = "Filters")]
     pub author: Option<String>,
@@ -655,6 +697,11 @@ pub struct IngestArgs {
 }

 #[derive(Parser)]
 #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore stats                     # Show document and index statistics
  lore stats --check             # Run integrity checks
  lore stats --repair --dry-run  # Preview what repair would fix
  lore --robot stats             # JSON output for automation")]
 pub struct StatsArgs {
     /// Run integrity checks
     #[arg(long, overrides_with = "no_check")]
@@ -743,6 +790,10 @@ pub struct SearchArgs {
 }

 #[derive(Parser)]
 #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore generate-docs                       # Generate docs for dirty entities
  lore generate-docs --full                # Full rebuild of all documents
  lore generate-docs --full -p group/repo  # Full rebuild for one project")]
 pub struct GenerateDocsArgs {
     /// Full rebuild: seed all entities into dirty queue, then drain
     #[arg(long)]
@@ -759,7 +810,9 @@ pub struct GenerateDocsArgs {
  lore sync --no-embed   # Skip embedding step
  lore sync --no-status  # Skip work-item status enrichment
  lore sync --full --force  # Full re-sync, override stale lock
-  lore sync --dry-run    # Preview what would change")]
+  lore sync --dry-run    # Preview what would change
+  lore sync --issue 42 -p group/repo  # Surgically sync one issue
+  lore sync --mr 10 --mr 20 -p g/r    # Surgically sync two MRs")]
 pub struct SyncArgs {
     /// Reset cursors, fetch everything
     #[arg(long, overrides_with = "no_full")]
@@ -805,9 +858,33 @@ pub struct SyncArgs {
     /// Show detailed timing breakdown for sync stages
     #[arg(short = 't', long = "timings")]
     pub timings: bool,

     /// Acquire file lock before syncing (skip if another sync is running)
     #[arg(long)]
     pub lock: bool,

     /// Surgically sync specific issues by IID (repeatable, must be positive)
     #[arg(long, value_parser = clap::value_parser!(u64).range(1..), action = clap::ArgAction::Append)]
     pub issue: Vec<u64>,

     /// Surgically sync specific merge requests by IID (repeatable, must be positive)
     #[arg(long, value_parser = clap::value_parser!(u64).range(1..), action = clap::ArgAction::Append)]
     pub mr: Vec<u64>,

     /// Scope to a single project (required when --issue or --mr is used)
     #[arg(short = 'p', long)]
     pub project: Option<String>,

     /// Validate remote entities exist without DB writes (preflight only)
     #[arg(long)]
     pub preflight_only: bool,
 }

 #[derive(Parser)]
 #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore embed                 # Embed new/changed documents
  lore embed --full          # Re-embed all documents from scratch
  lore embed --retry-failed  # Retry previously failed embeddings")]
 pub struct EmbedArgs {
     /// Re-embed all documents (clears existing embeddings first)
     #[arg(long, overrides_with = "no_full")]
@@ -926,15 +1003,14 @@ pub struct WhoArgs {
     #[arg(short = 'p', long, help_heading = "Filters")]
     pub project: Option<String>,

-    /// Maximum results per section (1..=500, bounded for output safety)
+    /// Maximum results per section (1..=500); omit for unlimited
     #[arg(
         short = 'n',
         long = "limit",
-        default_value = "20",
         value_parser = clap::value_parser!(u16).range(1..=500),
         help_heading = "Output"
     )]
-    pub limit: u16,
+    pub limit: Option<u16>,

     /// Select output fields (comma-separated, or 'minimal' preset; varies by mode)
     #[arg(long, help_heading = "Output", value_delimiter = ',')]
@@ -964,6 +1040,10 @@ pub struct WhoArgs {
     #[arg(long = "include-bots", help_heading = "Scoring")]
     pub include_bots: bool,

     /// Include discussions on closed issues and merged/closed MRs
     #[arg(long, help_heading = "Filters")]
     pub include_closed: bool,

     /// Remove the default time window (query all history). Conflicts with --since.
     #[arg(
         long = "all-history",
@@ -973,6 +1053,57 @@ pub struct WhoArgs {
     pub all_history: bool,
 }

 #[derive(Parser)]
 #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore me                       # Full dashboard (default project or all)
  lore me --issues              # Issues section only
  lore me --mrs                 # MRs section only
  lore me --activity            # Activity feed only
  lore me --all                 # All synced projects
  lore me --since 2d            # Activity window (default: 30d)
  lore me --project group/repo  # Scope to one project
  lore me --user jdoe           # Override configured username")]
 pub struct MeArgs {
     /// Show open issues section
     #[arg(long, help_heading = "Sections")]
     pub issues: bool,

     /// Show authored + reviewing MRs section
     #[arg(long, help_heading = "Sections")]
     pub mrs: bool,

     /// Show activity feed section
     #[arg(long, help_heading = "Sections")]
     pub activity: bool,

     /// Activity window (e.g. 7d, 2w, 30d). Default: 30d. Only affects activity section.
     #[arg(long, help_heading = "Filters")]
     pub since: Option<String>,

     /// Scope to a project (supports fuzzy matching)
     #[arg(short = 'p', long, help_heading = "Filters", conflicts_with = "all")]
     pub project: Option<String>,

     /// Show all synced projects (overrides default_project)
     #[arg(long, help_heading = "Filters", conflicts_with = "project")]
     pub all: bool,

     /// Override configured username
     #[arg(long = "user", help_heading = "Filters")]
     pub user: Option<String>,

     /// Select output fields (comma-separated, or 'minimal' preset)
     #[arg(long, help_heading = "Output", value_delimiter = ',')]
     pub fields: Option<Vec<String>>,
 }

 impl MeArgs {
     /// Returns true if no section flags were passed (show all sections).
     pub fn show_all_sections(&self) -> bool {
         !self.issues && !self.mrs && !self.activity
     }
 }

 #[derive(Parser)]
 #[command(after_help = "\x1b[1mExamples:\x1b[0m
  lore file-history src/main.rs   # MRs that touched this file
@@ -1042,6 +1173,10 @@ pub struct TraceArgs {
|
||||
}
|
||||
|
||||
#[derive(Parser)]
|
||||
#[command(after_help = "\x1b[1mExamples:\x1b[0m
|
||||
lore count issues # Total issues in local database
|
||||
lore count notes --for mr # Notes on merge requests only
|
||||
lore count discussions --for issue # Discussions on issues only")]
|
||||
pub struct CountArgs {
|
||||
/// Entity type to count (issues, mrs, discussions, notes, events)
|
||||
#[arg(value_parser = ["issues", "mrs", "discussions", "notes", "events"])]
|
||||
@@ -1051,3 +1186,48 @@ pub struct CountArgs {
|
||||
#[arg(short = 'f', long = "for", value_parser = ["issue", "mr"])]
|
||||
pub for_entity: Option<String>,
|
||||
}
|
||||
|
||||
#[derive(Parser)]
|
||||
pub struct CronArgs {
|
||||
#[command(subcommand)]
|
||||
pub action: CronAction,
|
||||
}
|
||||
|
||||
#[derive(Subcommand)]
|
||||
pub enum CronAction {
|
||||
/// Install cron job for automatic syncing
|
||||
Install {
|
||||
/// Sync interval in minutes (default: 8)
|
||||
#[arg(long, default_value = "8")]
|
||||
interval: u32,
|
||||
},
|
||||
|
||||
/// Remove cron job
|
||||
Uninstall,
|
||||
|
||||
/// Show current cron configuration
|
||||
Status,
|
||||
}
|
||||
|
||||
#[derive(Args)]
|
||||
pub struct TokenArgs {
|
||||
#[command(subcommand)]
|
||||
pub action: TokenAction,
|
||||
}
|
||||
|
||||
#[derive(Subcommand)]
|
||||
pub enum TokenAction {
|
||||
/// Store a GitLab token in the config file
|
||||
Set {
|
||||
/// Token value (reads from stdin if omitted in non-interactive mode)
|
||||
#[arg(long)]
|
||||
token: Option<String>,
|
||||
},
|
||||
|
||||
/// Show the current token (masked by default)
|
||||
Show {
|
||||
/// Show the full unmasked token
|
||||
#[arg(long)]
|
||||
unmask: bool,
|
||||
},
|
||||
}
|
||||
|
||||
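The `show_all_sections` rule above (`lore me` with no section flags shows everything) can be exercised as a standalone sketch. This `MeArgs` is a hand-rolled stand-in for the clap-derived struct, not the real CLI type:

```rust
// Stand-in for the clap-derived MeArgs: only the three section flags matter here.
struct MeArgs {
    issues: bool,
    mrs: bool,
    activity: bool,
}

impl MeArgs {
    // No section flag passed => show every section (the default dashboard).
    fn show_all_sections(&self) -> bool {
        !self.issues && !self.mrs && !self.activity
    }
}

fn main() {
    // `lore me` with no flags: full dashboard
    let default = MeArgs { issues: false, mrs: false, activity: false };
    assert!(default.show_all_sections());

    // `lore me --mrs`: MRs section only
    let only_mrs = MeArgs { issues: false, mrs: true, activity: false };
    assert!(!only_mrs.show_all_sections());

    println!("ok");
}
```

Any single flag disables the "show everything" default, which is why the three booleans are checked together rather than individually.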
@@ -263,6 +263,11 @@ impl LoreRenderer {
            .expect("LoreRenderer::init must be called before get")
    }

    /// Try to get the global renderer. Returns `None` if `init` hasn't been called.
    pub fn try_get() -> Option<&'static LoreRenderer> {
        RENDERER.get()
    }

    /// Whether color output is enabled.
    pub fn colors_enabled(&self) -> bool {
        self.colors
@@ -448,6 +453,15 @@ impl Theme {
            Style::new()
        }
    }

    /// Apply semantic color to a stage-completion icon glyph.
    pub fn color_icon(icon: &str, has_errors: bool) -> String {
        if has_errors {
            Self::warning().render(icon)
        } else {
            Self::success().render(icon)
        }
    }
}

// ─── Shared Formatters ───────────────────────────────────────────────────────
@@ -518,6 +532,43 @@ pub fn format_datetime(ms: i64) -> String {
        .unwrap_or_else(|| "unknown".to_string())
}

/// Detect terminal width. Checks `COLUMNS` env, then stderr ioctl, falls back to 80.
pub fn terminal_width() -> usize {
    // 1. Explicit COLUMNS env (set by some shells, resized terminals)
    if let Ok(val) = std::env::var("COLUMNS")
        && let Ok(w) = val.parse::<usize>()
        && w > 0
    {
        return w;
    }

    // 2. ioctl on stderr (works even when stdout is piped)
    #[cfg(unix)]
    {
        use std::mem::MaybeUninit;
        #[allow(non_camel_case_types)]
        #[repr(C)]
        struct winsize {
            ws_row: libc::c_ushort,
            ws_col: libc::c_ushort,
            ws_xpixel: libc::c_ushort,
            ws_ypixel: libc::c_ushort,
        }
        let mut ws = MaybeUninit::<winsize>::uninit();
        // SAFETY: ioctl with TIOCGWINSZ writes into the winsize struct.
        // stderr (fd 2) is used because stdout may be piped.
        if unsafe { libc::ioctl(2, libc::TIOCGWINSZ, ws.as_mut_ptr()) } == 0 {
            let ws = unsafe { ws.assume_init() };
            let w = ws.ws_col as usize;
            if w > 0 {
                return w;
            }
        }
    }

    80
}

/// Truncate a string to `max` characters, appending "..." if truncated.
pub fn truncate(s: &str, max: usize) -> String {
    if max < 4 {
@@ -531,6 +582,17 @@ pub fn truncate(s: &str, max: usize) -> String {
    }
}

/// Truncate and right-pad to exactly `width` visible characters.
pub fn truncate_pad(s: &str, width: usize) -> String {
    let t = truncate(s, width);
    let count = t.chars().count();
    if count < width {
        format!("{t}{}", " ".repeat(width - count))
    } else {
        t
    }
}

/// Word-wrap text to `width`, prepending `indent` to continuation lines.
/// Returns a single string with embedded newlines.
pub fn wrap_indent(text: &str, width: usize, indent: &str) -> String {
@@ -589,7 +651,10 @@ pub fn wrap_lines(text: &str, width: usize) -> Vec<String> {

/// Render a section divider: `── Title ──────────────────────`
pub fn section_divider(title: &str) -> String {
    let rule_len = 40_usize.saturating_sub(title.len() + 4);
    // prefix: 2 indent + 2 box-drawing + 1 space = 5
    // suffix: 1 space + trailing box-drawing
    let used = 5 + title.len() + 1;
    let rule_len = terminal_width().saturating_sub(used);
    format!(
        "\n  {} {} {}",
        Theme::dim().render("\u{2500}\u{2500}"),
@@ -720,6 +785,8 @@ pub struct Table {
    rows: Vec<Vec<StyledCell>>,
    alignments: Vec<Align>,
    max_widths: Vec<Option<usize>>,
    col_count: usize,
    indent: usize,
}

impl Table {
@@ -730,9 +797,23 @@ impl Table {
    /// Set column headers.
    pub fn headers(mut self, h: &[&str]) -> Self {
        self.headers = h.iter().map(|s| (*s).to_string()).collect();
        // Initialize alignments and max_widths to match column count
        self.alignments.resize(self.headers.len(), Align::Left);
        self.max_widths.resize(self.headers.len(), None);
        self.col_count = self.headers.len();
        self.alignments.resize(self.col_count, Align::Left);
        self.max_widths.resize(self.col_count, None);
        self
    }

    /// Set column count without headers (headerless table).
    pub fn columns(mut self, n: usize) -> Self {
        self.col_count = n;
        self.alignments.resize(n, Align::Left);
        self.max_widths.resize(n, None);
        self
    }

    /// Set indent (number of spaces) prepended to each row.
    pub fn indent(mut self, spaces: usize) -> Self {
        self.indent = spaces;
        self
    }

@@ -759,15 +840,20 @@ impl Table {

    /// Render the table to a string.
    pub fn render(&self) -> String {
        if self.headers.is_empty() {
        let col_count = self.col_count;
        if col_count == 0 {
            return String::new();
        }

        let col_count = self.headers.len();
        let gap = "  "; // 2-space gap between columns
        let indent_str = " ".repeat(self.indent);

        // Compute column widths from content
        let mut widths: Vec<usize> = self.headers.iter().map(|h| h.chars().count()).collect();
        // Compute column widths from headers (if any) and all row cells
        let mut widths: Vec<usize> = if self.headers.is_empty() {
            vec![0; col_count]
        } else {
            self.headers.iter().map(|h| h.chars().count()).collect()
        };

        for row in &self.rows {
            for (i, cell) in row.iter().enumerate() {
@@ -788,7 +874,8 @@ impl Table {

        let mut out = String::new();

        // Header row (bold)
        // Header row + separator (only when headers are set)
        if !self.headers.is_empty() {
            let header_parts: Vec<String> = self
                .headers
                .iter()
@@ -803,14 +890,16 @@ impl Table {
                    )
                })
                .collect();
            out.push_str(&indent_str);
            out.push_str(&Theme::header().render(&header_parts.join(gap)));
            out.push('\n');

            // Separator
            let total_width: usize =
                widths.iter().sum::<usize>() + gap.len() * col_count.saturating_sub(1);
            out.push_str(&indent_str);
            out.push_str(&Theme::dim().render(&"\u{2500}".repeat(total_width)));
            out.push('\n');
        }

        // Data rows
        for row in &self.rows {
@@ -842,6 +931,7 @@ impl Table {
                    parts.push(" ".repeat(w));
                }
            }
            out.push_str(&indent_str);
            out.push_str(&parts.join(gap));
            out.push('\n');
        }

@@ -68,6 +68,14 @@ pub fn expand_fields_preset(fields: &[String], entity: &str) -> Vec<String> {
                .iter()
                .map(|s| (*s).to_string())
                .collect(),
            "me_items" => ["iid", "title", "attention_state", "updated_at_iso"]
                .iter()
                .map(|s| (*s).to_string())
                .collect(),
            "me_activity" => ["timestamp_iso", "event_type", "entity_iid", "actor"]
                .iter()
                .map(|s| (*s).to_string())
                .collect(),
            _ => fields.to_vec(),
        }
    } else {
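The `truncate`/`truncate_pad` pair from the renderer hunks above can be sketched standalone. Note the body of `truncate` for widths under 4 is elided by the diff, so the `saturating_sub` handling here is an assumption; the char-count (not byte) semantics and the `"..."` suffix are taken from the shown code:

```rust
// Sketch of the renderer's truncation helpers, using char counts so
// multi-byte text is measured by visible characters, not bytes.
fn truncate(s: &str, max: usize) -> String {
    if s.chars().count() <= max {
        return s.to_string();
    }
    // Keep max-3 chars and append "..." (assumed behavior for small `max`).
    let kept: String = s.chars().take(max.saturating_sub(3)).collect();
    format!("{kept}...")
}

// Truncate, then right-pad with spaces to exactly `width` visible chars,
// which is what keeps table columns aligned.
fn truncate_pad(s: &str, width: usize) -> String {
    let t = truncate(s, width);
    let count = t.chars().count();
    if count < width {
        format!("{t}{}", " ".repeat(width - count))
    } else {
        t
    }
}

fn main() {
    assert_eq!(truncate("hello", 10), "hello");
    assert_eq!(truncate("hello world", 8), "hello...");
    assert_eq!(truncate_pad("hi", 5), "hi   ");
    assert_eq!(truncate_pad("hi", 5).chars().count(), 5);
    println!("ok");
}
```

`truncate_pad` always returns exactly `width` visible characters for short input, which is the invariant the `Table::render` column layout relies on.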
@@ -12,6 +12,48 @@ pub struct GitLabConfig {

    #[serde(rename = "tokenEnvVar", default = "default_token_env_var")]
    pub token_env_var: String,

    /// Optional stored token (env var takes priority when both are set).
    #[serde(default)]
    pub token: Option<String>,

    /// Optional GitLab username for `lore me` personal dashboard.
    #[serde(default)]
    pub username: Option<String>,
}

impl GitLabConfig {
    /// Resolve token with priority: env var > config file.
    pub fn resolve_token(&self) -> Result<String> {
        if let Ok(val) = std::env::var(&self.token_env_var)
            && !val.trim().is_empty()
        {
            return Ok(val.trim().to_string());
        }
        if let Some(ref t) = self.token
            && !t.trim().is_empty()
        {
            return Ok(t.trim().to_string());
        }
        Err(LoreError::TokenNotSet {
            env_var: self.token_env_var.clone(),
        })
    }

    /// Returns a human-readable label for where the token was found, or `None`.
    pub fn token_source(&self) -> Option<&'static str> {
        if let Ok(val) = std::env::var(&self.token_env_var)
            && !val.trim().is_empty()
        {
            return Some("environment variable");
        }
        if let Some(ref t) = self.token
            && !t.trim().is_empty()
        {
            return Some("config file");
        }
        None
    }
}

fn default_token_env_var() -> String {
@@ -531,6 +573,8 @@ mod tests {
            gitlab: GitLabConfig {
                base_url: "https://gitlab.example.com".to_string(),
                token_env_var: "GITLAB_TOKEN".to_string(),
                token: None,
                username: None,
            },
            projects: vec![ProjectConfig {
                path: "group/project".to_string(),
@@ -554,6 +598,8 @@ mod tests {
            gitlab: GitLabConfig {
                base_url: "https://gitlab.example.com".to_string(),
                token_env_var: "GITLAB_TOKEN".to_string(),
                token: None,
                username: None,
            },
            projects: vec![ProjectConfig {
                path: "group/project".to_string(),
@@ -574,6 +620,8 @@ mod tests {
            gitlab: GitLabConfig {
                base_url: "https://gitlab.example.com".to_string(),
                token_env_var: "GITLAB_TOKEN".to_string(),
                token: None,
                username: None,
            },
            projects: vec![ProjectConfig {
                path: "group/project".to_string(),
@@ -786,4 +834,120 @@ mod tests {
        };
        validate_scoring(&scoring).unwrap();
    }

    // ── token_source / resolve_token ────────────────────────────────

    /// Build a `GitLabConfig` that reads from the given unique env var name
    /// so parallel tests never collide.
    fn gitlab_cfg_with_env(env_var: &str, token: Option<&str>) -> GitLabConfig {
        GitLabConfig {
            base_url: "https://gitlab.example.com".to_string(),
            token_env_var: env_var.to_string(),
            token: token.map(ToString::to_string),
            username: None,
        }
    }

    #[test]
    fn test_token_source_env_wins_over_config() {
        const VAR: &str = "LORE_TEST_TS_ENV_WINS";
        // SAFETY: unique var name, no other code reads it.
        unsafe { std::env::set_var(VAR, "env-tok") };
        let cfg = gitlab_cfg_with_env(VAR, Some("config-tok"));
        assert_eq!(cfg.token_source(), Some("environment variable"));
        unsafe { std::env::remove_var(VAR) };
    }

    #[test]
    fn test_token_source_falls_back_to_config() {
        const VAR: &str = "LORE_TEST_TS_FALLBACK";
        unsafe { std::env::remove_var(VAR) };
        let cfg = gitlab_cfg_with_env(VAR, Some("config-tok"));
        assert_eq!(cfg.token_source(), Some("config file"));
    }

    #[test]
    fn test_token_source_none_when_both_absent() {
        const VAR: &str = "LORE_TEST_TS_NONE";
        unsafe { std::env::remove_var(VAR) };
        let cfg = gitlab_cfg_with_env(VAR, None);
        assert_eq!(cfg.token_source(), None);
    }

    #[test]
    fn test_token_source_ignores_whitespace_only_env() {
        const VAR: &str = "LORE_TEST_TS_WS_ENV";
        unsafe { std::env::set_var(VAR, " ") };
        let cfg = gitlab_cfg_with_env(VAR, Some("real"));
        assert_eq!(cfg.token_source(), Some("config file"));
        unsafe { std::env::remove_var(VAR) };
    }

    #[test]
    fn test_token_source_ignores_whitespace_only_config() {
        const VAR: &str = "LORE_TEST_TS_WS_CFG";
        unsafe { std::env::remove_var(VAR) };
        let cfg = gitlab_cfg_with_env(VAR, Some(" \t "));
        assert_eq!(cfg.token_source(), None);
    }

    #[test]
    fn test_resolve_token_env_wins_over_config() {
        const VAR: &str = "LORE_TEST_RT_ENV_WINS";
        unsafe { std::env::set_var(VAR, " env-tok ") };
        let cfg = gitlab_cfg_with_env(VAR, Some("config-tok"));
        assert_eq!(cfg.resolve_token().unwrap(), "env-tok");
        unsafe { std::env::remove_var(VAR) };
    }

    #[test]
    fn test_resolve_token_config_fallback() {
        const VAR: &str = "LORE_TEST_RT_FALLBACK";
        unsafe { std::env::remove_var(VAR) };
        let cfg = gitlab_cfg_with_env(VAR, Some(" config-tok "));
        assert_eq!(cfg.resolve_token().unwrap(), "config-tok");
    }

    #[test]
    fn test_resolve_token_err_when_both_absent() {
        const VAR: &str = "LORE_TEST_RT_NONE";
        unsafe { std::env::remove_var(VAR) };
        let cfg = gitlab_cfg_with_env(VAR, None);
        assert!(cfg.resolve_token().is_err());
    }

    // ── gitlab.username ─────────────────────────────────────────────

    #[test]
    fn test_config_loads_with_username() {
        let dir = TempDir::new().unwrap();
        let path = dir.path().join("config.json");
        let config = r#"{
            "gitlab": {
                "baseUrl": "https://gitlab.example.com",
                "tokenEnvVar": "GITLAB_TOKEN",
                "username": "jdoe"
            },
            "projects": [{ "path": "group/project" }]
        }"#;
        fs::write(&path, config).unwrap();
        let cfg = Config::load_from_path(&path).unwrap();
        assert_eq!(cfg.gitlab.username.as_deref(), Some("jdoe"));
    }

    #[test]
    fn test_config_loads_without_username() {
        let dir = TempDir::new().unwrap();
        let path = dir.path().join("config.json");
        let config = r#"{
            "gitlab": {
                "baseUrl": "https://gitlab.example.com",
                "tokenEnvVar": "GITLAB_TOKEN"
            },
            "projects": [{ "path": "group/project" }]
        }"#;
        fs::write(&path, config).unwrap();
        let cfg = Config::load_from_path(&path).unwrap();
        assert_eq!(cfg.gitlab.username, None);
    }
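The env-over-config priority that `resolve_token` implements can be exercised with a standalone sketch. The `DEMO_GITLAB_TOKEN` variable name and the free function here are illustrative stand-ins for the real `GitLabConfig` method, but the ordering, trimming, and blank-value handling mirror the code above:

```rust
// Sketch of the token-resolution priority: a non-blank environment variable
// beats a token stored in the config file; blank values are ignored entirely.
fn resolve_token(env_var: &str, config_token: Option<&str>) -> Option<String> {
    // 1. Environment variable wins when set and non-blank.
    if let Ok(val) = std::env::var(env_var) {
        if !val.trim().is_empty() {
            return Some(val.trim().to_string());
        }
    }
    // 2. Fall back to the config-file token, again ignoring blanks.
    config_token
        .filter(|t| !t.trim().is_empty())
        .map(|t| t.trim().to_string())
}

fn main() {
    const VAR: &str = "DEMO_GITLAB_TOKEN"; // hypothetical, demo-only name

    unsafe { std::env::remove_var(VAR) };
    // No env var: config token (trimmed) is used.
    assert_eq!(
        resolve_token(VAR, Some(" config-tok ")).as_deref(),
        Some("config-tok")
    );

    unsafe { std::env::set_var(VAR, "env-tok") };
    // Env var set: it wins even though a config token exists.
    assert_eq!(
        resolve_token(VAR, Some("config-tok")).as_deref(),
        Some("env-tok")
    );

    unsafe { std::env::remove_var(VAR) };
    // Neither source: nothing to resolve (the real code returns TokenNotSet).
    assert_eq!(resolve_token(VAR, None), None);
    println!("ok");
}
```

Returning `Option` instead of the crate's `Result<String, LoreError>` keeps the sketch self-contained; the decision logic is the part being demonstrated.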
src/core/cron.rs (new file, 369 lines)
@@ -0,0 +1,369 @@
use std::fs::{self, File};
use std::io::{self, Write};
use std::path::{Path, PathBuf};
use std::process::Command;

use serde::Serialize;

use super::error::{LoreError, Result};
use super::paths::get_data_dir;

const CRON_TAG: &str = "# lore-sync";

// ── File-based sync lock (fcntl F_SETLK) ──

/// RAII guard that holds an `fcntl` write lock on a file.
/// The lock is released when the guard is dropped.
pub struct SyncLockGuard {
    _file: File,
}

/// Try to acquire an exclusive file lock (non-blocking).
///
/// Returns `Ok(Some(guard))` if the lock was acquired, `Ok(None)` if another
/// process holds it, or `Err` on I/O failure.
#[cfg(unix)]
pub fn acquire_sync_lock() -> Result<Option<SyncLockGuard>> {
    acquire_sync_lock_at(&lock_path())
}

fn lock_path() -> PathBuf {
    get_data_dir().join("sync.lock")
}

#[cfg(unix)]
fn acquire_sync_lock_at(path: &Path) -> Result<Option<SyncLockGuard>> {
    use std::os::unix::io::AsRawFd;

    if let Some(parent) = path.parent() {
        fs::create_dir_all(parent)?;
    }

    let file = File::options()
        .create(true)
        .truncate(false)
        .write(true)
        .open(path)?;

    let fd = file.as_raw_fd();

    // SAFETY: zeroed memory is valid for libc::flock (all-zero is a valid
    // representation on every Unix platform). We then set only the fields we need.
    let mut flock = unsafe { std::mem::zeroed::<libc::flock>() };
    flock.l_type = libc::F_WRLCK as libc::c_short;
    flock.l_whence = libc::SEEK_SET as libc::c_short;

    // SAFETY: fd is a valid open file descriptor; flock is stack-allocated.
    let rc = unsafe { libc::fcntl(fd, libc::F_SETLK, &mut flock) };
    if rc == -1 {
        let err = io::Error::last_os_error();
        if err.kind() == io::ErrorKind::WouldBlock
            || err.raw_os_error() == Some(libc::EAGAIN)
            || err.raw_os_error() == Some(libc::EACCES)
        {
            return Ok(None);
        }
        return Err(LoreError::Io(err));
    }

    Ok(Some(SyncLockGuard { _file: file }))
}

// ── Crontab management ──

/// The crontab entry that `lore cron install` writes.
///
/// Paths are single-quoted so spaces in binary or log paths don't break
/// the cron expression.
pub fn build_cron_entry(interval_minutes: u32) -> String {
    let binary = std::env::current_exe()
        .unwrap_or_else(|_| PathBuf::from("lore"))
        .display()
        .to_string();
    let log_path = sync_log_path();
    format!(
        "*/{interval_minutes} * * * * '{binary}' sync -q --lock >> '{log}' 2>&1 {CRON_TAG}",
        log = log_path.display(),
    )
}

/// Path where cron-triggered sync output is appended.
pub fn sync_log_path() -> PathBuf {
    get_data_dir().join("sync.log")
}

/// Read the current user crontab. Returns empty string when no crontab exists.
fn read_crontab() -> Result<String> {
    let output = Command::new("crontab").arg("-l").output()?;
    if output.status.success() {
        Ok(String::from_utf8_lossy(&output.stdout).into_owned())
    } else {
        // exit 1 with "no crontab for <user>" is normal — treat as empty
        Ok(String::new())
    }
}

/// Write a full crontab string. Replaces the current crontab entirely.
fn write_crontab(content: &str) -> Result<()> {
    let mut child = Command::new("crontab")
        .arg("-")
        .stdin(std::process::Stdio::piped())
        .spawn()?;
    if let Some(ref mut stdin) = child.stdin {
        stdin.write_all(content.as_bytes())?;
    }
    let status = child.wait()?;
    if !status.success() {
        return Err(LoreError::Other(format!(
            "crontab exited with status {status}"
        )));
    }
    Ok(())
}

/// Install (or update) the lore-sync crontab entry.
pub fn install_cron(interval_minutes: u32) -> Result<CronInstallResult> {
    let entry = build_cron_entry(interval_minutes);

    let existing = read_crontab()?;
    let replaced = existing.contains(CRON_TAG);

    // Strip ALL old lore-sync lines first, then append one new entry.
    // This is idempotent even if the crontab somehow has duplicate tagged lines.
    let mut filtered: String = existing
        .lines()
        .filter(|line| !line.contains(CRON_TAG))
        .collect::<Vec<_>>()
        .join("\n");
    if !filtered.is_empty() && !filtered.ends_with('\n') {
        filtered.push('\n');
    }
    filtered.push_str(&entry);
    filtered.push('\n');

    write_crontab(&filtered)?;

    Ok(CronInstallResult {
        entry,
        interval_minutes,
        log_path: sync_log_path(),
        replaced,
    })
}

/// Remove the lore-sync crontab entry.
pub fn uninstall_cron() -> Result<CronUninstallResult> {
    let existing = read_crontab()?;
    if !existing.contains(CRON_TAG) {
        return Ok(CronUninstallResult {
            was_installed: false,
        });
    }

    let new_crontab: String = existing
        .lines()
        .filter(|line| !line.contains(CRON_TAG))
        .collect::<Vec<_>>()
        .join("\n")
        + "\n";

    // If the crontab would be empty (only whitespace), remove it entirely
    if new_crontab.trim().is_empty() {
        let status = Command::new("crontab").arg("-r").status()?;
        if !status.success() {
            return Err(LoreError::Other("crontab -r failed".to_string()));
        }
    } else {
        write_crontab(&new_crontab)?;
    }

    Ok(CronUninstallResult {
        was_installed: true,
    })
}

/// Inspect the current crontab for a lore-sync entry.
pub fn cron_status() -> Result<CronStatusResult> {
    let existing = read_crontab()?;
    let lore_line = existing.lines().find(|l| l.contains(CRON_TAG));

    match lore_line {
        Some(line) => {
            let interval = parse_interval(line);
            let binary_path = parse_binary_path(line);

            let current_exe = std::env::current_exe()
                .ok()
                .map(|p| p.display().to_string());
            let binary_mismatch = current_exe
                .as_ref()
                .zip(binary_path.as_ref())
                .is_some_and(|(current, cron)| current != cron);

            Ok(CronStatusResult {
                installed: true,
                interval_minutes: interval,
                binary_path,
                current_binary: current_exe,
                binary_mismatch,
                log_path: Some(sync_log_path()),
                cron_entry: Some(line.to_string()),
            })
        }
        None => Ok(CronStatusResult {
            installed: false,
            interval_minutes: None,
            binary_path: None,
            current_binary: std::env::current_exe()
                .ok()
                .map(|p| p.display().to_string()),
            binary_mismatch: false,
            log_path: None,
            cron_entry: None,
        }),
    }
}

/// Parse the interval from a cron expression like `*/8 * * * * ...`
fn parse_interval(line: &str) -> Option<u32> {
    let first_field = line.split_whitespace().next()?;
    if let Some(n) = first_field.strip_prefix("*/") {
        n.parse().ok()
    } else {
        None
    }
}

/// Parse the binary path from the cron entry after the 5 time fields.
///
/// Handles both quoted (`'/path with spaces/lore'`) and unquoted paths.
/// We skip the time fields manually to avoid `split_whitespace` breaking
/// on spaces inside single-quoted paths.
fn parse_binary_path(line: &str) -> Option<String> {
    // Skip the 5 cron time fields (min hour dom month dow).
    // These never contain spaces, so whitespace-splitting is safe here.
    let mut rest = line;
    for _ in 0..5 {
        rest = rest.trim_start();
        let end = rest.find(char::is_whitespace)?;
        rest = &rest[end..];
    }
    rest = rest.trim_start();

    // The command starts here — it may be single-quoted.
    if let Some(after_quote) = rest.strip_prefix('\'') {
        let end = after_quote.find('\'')?;
        Some(after_quote[..end].to_string())
    } else {
        let end = rest.find(char::is_whitespace).unwrap_or(rest.len());
        Some(rest[..end].to_string())
    }
}

// ── Result types ──

#[derive(Serialize)]
pub struct CronInstallResult {
    pub entry: String,
    pub interval_minutes: u32,
    pub log_path: PathBuf,
    pub replaced: bool,
}

#[derive(Serialize)]
pub struct CronUninstallResult {
    pub was_installed: bool,
}

#[derive(Serialize)]
pub struct CronStatusResult {
    pub installed: bool,
    pub interval_minutes: Option<u32>,
    pub binary_path: Option<String>,
    pub current_binary: Option<String>,
    pub binary_mismatch: bool,
    pub log_path: Option<PathBuf>,
    pub cron_entry: Option<String>,
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn build_cron_entry_formats_correctly() {
        let entry = build_cron_entry(8);
        assert!(entry.starts_with("*/8 * * * * "));
        assert!(entry.contains("sync -q --lock"));
        assert!(entry.ends_with(CRON_TAG));
    }

    #[test]
    fn parse_interval_extracts_number() {
        assert_eq!(parse_interval("*/8 * * * * /usr/bin/lore sync"), Some(8));
        assert_eq!(parse_interval("*/15 * * * * /usr/bin/lore sync"), Some(15));
        assert_eq!(parse_interval("0 * * * * /usr/bin/lore sync"), None);
    }

    #[test]
    fn parse_binary_path_extracts_sixth_field() {
        // Unquoted path
        assert_eq!(
            parse_binary_path(
                "*/8 * * * * /usr/local/bin/lore sync -q --lock >> /tmp/log 2>&1 # lore-sync"
            ),
            Some("/usr/local/bin/lore".to_string())
        );
        // Single-quoted path without spaces
        assert_eq!(
            parse_binary_path(
                "*/8 * * * * '/usr/local/bin/lore' sync -q --lock >> '/tmp/log' 2>&1 # lore-sync"
            ),
            Some("/usr/local/bin/lore".to_string())
        );
        // Single-quoted path WITH spaces (common on macOS)
        assert_eq!(
            parse_binary_path(
                "*/8 * * * * '/Users/Taylor Eernisse/.cargo/bin/lore' sync -q --lock >> '/tmp/log' 2>&1 # lore-sync"
            ),
            Some("/Users/Taylor Eernisse/.cargo/bin/lore".to_string())
        );
    }

    #[test]
    fn sync_lock_at_nonexistent_dir_creates_parents() {
        let dir = tempfile::tempdir().unwrap();
        let lock_file = dir.path().join("nested").join("deep").join("sync.lock");
        let guard = acquire_sync_lock_at(&lock_file).unwrap();
        assert!(guard.is_some());
        assert!(lock_file.exists());
    }

    #[test]
    fn sync_lock_is_exclusive_across_processes() {
        // POSIX fcntl locks are per-process, so same-process re-lock always
        // succeeds. We verify cross-process exclusion using a Python child
        // that attempts the same fcntl F_SETLK.
        let dir = tempfile::tempdir().unwrap();
        let lock_file = dir.path().join("sync.lock");
        let _guard = acquire_sync_lock_at(&lock_file).unwrap().unwrap();

        let script = r#"
import fcntl, struct, sys
fd = open(sys.argv[1], "w")
try:
    fcntl.fcntl(fd, fcntl.F_SETLK, struct.pack("hhllhh", fcntl.F_WRLCK, 0, 0, 0, 0, 0))
    sys.exit(0)
except (IOError, OSError):
    sys.exit(1)
"#;
        let status = std::process::Command::new("python3")
            .args(["-c", script, &lock_file.display().to_string()])
            .status()
            .unwrap();
        assert!(
            !status.success(),
            "child process should fail to acquire fcntl lock held by parent"
        );
    }
}
@@ -89,6 +89,10 @@ const MIGRATIONS: &[(&str, &str)] = &[
|
||||
"026",
|
||||
include_str!("../../migrations/026_scoring_indexes.sql"),
|
||||
),
|
||||
(
|
||||
"027",
|
||||
include_str!("../../migrations/027_surgical_sync_runs.sql"),
|
||||
),
|
||||
];
|
||||
|
||||
pub fn create_connection(db_path: &Path) -> Result<Connection> {
|
||||
|
||||
@@ -21,6 +21,7 @@ pub enum ErrorCode {
|
||||
EmbeddingFailed,
|
||||
NotFound,
|
||||
Ambiguous,
|
||||
SurgicalPreflightFailed,
|
||||
}
|
||||
|
||||
impl std::fmt::Display for ErrorCode {
|
||||
@@ -44,6 +45,7 @@ impl std::fmt::Display for ErrorCode {
|
||||
Self::EmbeddingFailed => "EMBEDDING_FAILED",
|
||||
Self::NotFound => "NOT_FOUND",
|
||||
Self::Ambiguous => "AMBIGUOUS",
|
||||
Self::SurgicalPreflightFailed => "SURGICAL_PREFLIGHT_FAILED",
|
||||
};
|
||||
write!(f, "{code}")
|
||||
}
|
||||
@@ -70,6 +72,9 @@ impl ErrorCode {
|
||||
Self::EmbeddingFailed => 16,
|
||||
Self::NotFound => 17,
|
||||
Self::Ambiguous => 18,
|
||||
// Shares exit code 6 with GitLabNotFound — same semantic category (resource not found).
|
||||
// Robot consumers distinguish via ErrorCode string, not exit code.
|
||||
Self::SurgicalPreflightFailed => 6,
|
||||
}
|
||||
}
|
||||
}
|
||||
@@ -111,7 +116,7 @@ pub enum LoreError {
         source: Option<rusqlite::Error>,
     },

-    #[error("GitLab token not set. Export {env_var} environment variable.")]
+    #[error("GitLab token not set. Run 'lore token set' or export {env_var}.")]
     TokenNotSet { env_var: String },

     #[error("Database error: {0}")]

@@ -153,6 +158,14 @@ pub enum LoreError {

     #[error("No embeddings found. Run: lore embed")]
     EmbeddingsNotBuilt,
+
+    #[error("Surgical preflight failed for {entity_type} !{iid} in {project}: {reason}")]
+    SurgicalPreflightFailed {
+        entity_type: String,
+        iid: u64,
+        project: String,
+        reason: String,
+    },
 }

 impl LoreError {

@@ -167,7 +180,13 @@ impl LoreError {
             Self::DatabaseLocked { .. } => ErrorCode::DatabaseLocked,
             Self::MigrationFailed { .. } => ErrorCode::MigrationFailed,
             Self::TokenNotSet { .. } => ErrorCode::TokenNotSet,
-            Self::Database(_) => ErrorCode::DatabaseError,
+            Self::Database(e) => {
+                if e.sqlite_error_code() == Some(rusqlite::ErrorCode::DatabaseBusy) {
+                    ErrorCode::DatabaseLocked
+                } else {
+                    ErrorCode::DatabaseError
+                }
+            }
             Self::Http(_) => ErrorCode::GitLabNetworkError,
             Self::Json(_) => ErrorCode::InternalError,
             Self::Io(_) => ErrorCode::IoError,

@@ -179,6 +198,7 @@ impl LoreError {
             Self::OllamaModelNotFound { .. } => ErrorCode::OllamaModelNotFound,
             Self::EmbeddingFailed { .. } => ErrorCode::EmbeddingFailed,
             Self::EmbeddingsNotBuilt => ErrorCode::EmbeddingFailed,
+            Self::SurgicalPreflightFailed { .. } => ErrorCode::SurgicalPreflightFailed,
         }
     }
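The new `Database(e)` arm above routes SQLite's busy condition to `DatabaseLocked` instead of the generic `DatabaseError`. A std-only sketch of that classification, assuming SQLite's documented primary result code 5 (`SQLITE_BUSY`); the enum names are illustrative:

```rust
// Illustrative busy-vs-generic split mirroring the Database(e) arm above.
// SQLITE_BUSY is SQLite primary result code 5.
#[derive(Debug, PartialEq)]
enum DbErrorKind {
    Locked,
    Other,
}

fn classify_sqlite_code(primary_code: i32) -> DbErrorKind {
    const SQLITE_BUSY: i32 = 5;
    if primary_code == SQLITE_BUSY {
        DbErrorKind::Locked
    } else {
        DbErrorKind::Other
    }
}
```

In the real code the check goes through `rusqlite::Error::sqlite_error_code()`, which exposes the same primary result code as an enum rather than a raw integer.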
@@ -204,14 +224,20 @@ impl LoreError {
                "Wait for other sync to complete or use --force.\n\n Example:\n lore ingest --force\n lore ingest issues --force",
            ),
            Self::MigrationFailed { .. } => Some(
-               "Check database file permissions or reset with 'lore reset'.\n\n Example:\n lore migrate\n lore reset --yes",
+               "Check database file permissions and try again.\n\n Example:\n lore migrate\n lore doctor",
            ),
            Self::TokenNotSet { .. } => Some(
-               "Export the token to your shell:\n\n export GITLAB_TOKEN=glpat-xxxxxxxxxxxx\n\n Your token needs the read_api scope.",
+               "Set your token:\n\n lore token set\n\n Or export to your shell:\n\n export GITLAB_TOKEN=glpat-xxxxxxxxxxxx\n\n Your token needs the read_api scope.",
            ),
-           Self::Database(_) => Some(
-               "Check database file permissions or reset with 'lore reset'.\n\n Example:\n lore doctor\n lore reset --yes",
-           ),
+           Self::Database(e) => {
+               if e.sqlite_error_code() == Some(rusqlite::ErrorCode::DatabaseBusy) {
+                   Some(
+                       "Another process has the database locked. Wait a moment and retry.\n\n Common causes:\n - A cron sync is running (lore cron status)\n - Another lore command is active",
+                   )
+               } else {
+                   Some("Check database file permissions.\n\n Example:\n lore doctor")
+               }
+           }
            Self::Http(_) => Some("Check network connection"),
            Self::NotFound(_) => {
                Some("Verify the entity exists.\n\n Example:\n lore issues\n lore mrs")

@@ -227,6 +253,9 @@ impl LoreError {
                Some("Check Ollama logs or retry with 'lore embed --retry-failed'")
            }
            Self::EmbeddingsNotBuilt => Some("Generate embeddings first: lore embed"),
+           Self::SurgicalPreflightFailed { .. } => Some(
+               "Verify the IID exists in the project and you have access.\n\n Example:\n lore issues -p <project>\n lore mrs -p <project>",
+           ),
            Self::Json(_) | Self::Io(_) | Self::Transform(_) | Self::Other(_) => None,
        }
    }

@@ -246,14 +275,22 @@ impl LoreError {
            Self::GitLabAuthFailed => {
                vec!["export GITLAB_TOKEN=glpat-xxx", "lore auth"]
            }
-           Self::TokenNotSet { .. } => vec!["export GITLAB_TOKEN=glpat-xxx"],
+           Self::TokenNotSet { .. } => vec!["lore token set", "export GITLAB_TOKEN=glpat-xxx"],
            Self::OllamaUnavailable { .. } => vec!["ollama serve"],
            Self::OllamaModelNotFound { .. } => vec!["ollama pull nomic-embed-text"],
            Self::DatabaseLocked { .. } => vec!["lore ingest --force"],
+           Self::Database(e)
+               if e.sqlite_error_code() == Some(rusqlite::ErrorCode::DatabaseBusy) =>
+           {
+               vec!["lore cron status"]
+           }
            Self::EmbeddingsNotBuilt => vec!["lore embed"],
            Self::EmbeddingFailed { .. } => vec!["lore embed --retry-failed"],
            Self::MigrationFailed { .. } => vec!["lore migrate"],
            Self::GitLabNetworkError { .. } => vec!["lore doctor"],
+           Self::SurgicalPreflightFailed { .. } => {
+               vec!["lore issues -p <project>", "lore mrs -p <project>"]
+           }
            _ => vec![],
        }
    }

@@ -293,3 +330,40 @@ impl From<&LoreError> for RobotErrorOutput {
 }

 pub type Result<T> = std::result::Result<T, LoreError>;
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+
+    #[test]
+    fn surgical_preflight_failed_display() {
+        let err = LoreError::SurgicalPreflightFailed {
+            entity_type: "issue".to_string(),
+            iid: 42,
+            project: "group/repo".to_string(),
+            reason: "not found on GitLab".to_string(),
+        };
+        let msg = err.to_string();
+        assert!(msg.contains("issue"), "missing entity_type: {msg}");
+        assert!(msg.contains("42"), "missing iid: {msg}");
+        assert!(msg.contains("group/repo"), "missing project: {msg}");
+        assert!(msg.contains("not found on GitLab"), "missing reason: {msg}");
+    }
+
+    #[test]
+    fn surgical_preflight_failed_error_code() {
+        let code = ErrorCode::SurgicalPreflightFailed;
+        assert_eq!(code.exit_code(), 6);
+    }
+
+    #[test]
+    fn surgical_preflight_failed_code_mapping() {
+        let err = LoreError::SurgicalPreflightFailed {
+            entity_type: "merge_request".to_string(),
+            iid: 99,
+            project: "ns/proj".to_string(),
+            reason: "404".to_string(),
+        };
+        assert_eq!(err.code(), ErrorCode::SurgicalPreflightFailed);
+    }
+}
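The display test above checks the `#[error(...)]` format string on `SurgicalPreflightFailed`. Roughly, that thiserror attribute expands to a hand-written `Display` impl like this sketch (written without the thiserror dependency):

```rust
use std::fmt;

// Standalone stand-in for the LoreError::SurgicalPreflightFailed variant,
// with the Display impl that thiserror's #[error(...)] attribute derives.
struct SurgicalPreflightFailed {
    entity_type: String,
    iid: u64,
    project: String,
    reason: String,
}

impl fmt::Display for SurgicalPreflightFailed {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(
            f,
            "Surgical preflight failed for {} !{} in {}: {}",
            self.entity_type, self.iid, self.project, self.reason
        )
    }
}
```

Because `Display` is implemented, `err.to_string()` in the tests works through the blanket `ToString` impl.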
@@ -44,15 +44,13 @@ pub fn resolve_rename_chain(
     let mut fwd_stmt = conn.prepare_cached(forward_sql)?;
     let forward: Vec<String> = fwd_stmt
         .query_map(rusqlite::params![project_id, &current], |row| row.get(0))?
-        .filter_map(std::result::Result::ok)
-        .collect();
+        .collect::<std::result::Result<Vec<_>, _>>()?;

     // Backward: current was the new name -> discover old names
     let mut bwd_stmt = conn.prepare_cached(backward_sql)?;
     let backward: Vec<String> = bwd_stmt
         .query_map(rusqlite::params![project_id, &current], |row| row.get(0))?
-        .filter_map(std::result::Result::ok)
-        .collect();
+        .collect::<std::result::Result<Vec<_>, _>>()?;

     for discovered in forward.into_iter().chain(backward) {
         if visited.insert(discovered.clone()) {

@@ -1,5 +1,7 @@
 pub mod backoff;
 pub mod config;
+#[cfg(unix)]
+pub mod cron;
 pub mod db;
 pub mod dependent_queue;
 pub mod error;

@@ -1,6 +1,7 @@
 use rusqlite::Connection;

 use super::error::{LoreError, Result};
+use super::file_history::resolve_rename_chain;

 // ─── SQL Helpers ─────────────────────────────────────────────────────────────

@@ -149,6 +150,16 @@ pub fn build_path_query(
             is_prefix: false,
         }),
         SuffixResult::Ambiguous(candidates) => {
+            // Check if all candidates are the same file connected by renames.
+            // resolve_rename_chain requires a concrete project_id.
+            if let Some(pid) = project_id
+                && let Some(resolved) = try_resolve_rename_ambiguity(conn, pid, &candidates)?
+            {
+                return Ok(PathQuery {
+                    value: resolved,
+                    is_prefix: false,
+                });
+            }
            let list = candidates
                .iter()
                .map(|p| format!(" {p}"))

@@ -239,6 +250,58 @@ pub fn suffix_probe(
     }
 }

+/// Maximum rename hops when resolving ambiguity.
+const AMBIGUITY_MAX_RENAME_HOPS: usize = 10;
+
+/// When suffix probe returns multiple paths, check if they are all the same file
+/// connected by renames. If so, return the "newest" path (the leaf of the chain
+/// that is never renamed away from). Returns `None` if truly ambiguous.
+fn try_resolve_rename_ambiguity(
+    conn: &Connection,
+    project_id: i64,
+    candidates: &[String],
+) -> Result<Option<String>> {
+    // BFS from the first candidate to discover the full rename chain.
+    let chain = resolve_rename_chain(conn, project_id, &candidates[0], AMBIGUITY_MAX_RENAME_HOPS)?;
+
+    // If any candidate is NOT in the chain, these are genuinely different files.
+    if !candidates.iter().all(|c| chain.contains(c)) {
+        return Ok(None);
+    }
+
+    // All candidates are the same file. Find the "newest" path: the one that
+    // appears as new_path in a rename but is never old_path in a subsequent rename
+    // (within the chain). This is the leaf of the rename DAG.
+    let placeholders: Vec<String> = (0..chain.len()).map(|i| format!("?{}", i + 2)).collect();
+    let in_clause = placeholders.join(", ");
+
+    // Find paths that are old_path in a rename where new_path is also in the chain.
+    let sql = format!(
+        "SELECT DISTINCT old_path FROM mr_file_changes \
+         WHERE project_id = ?1 \
+         AND change_type = 'renamed' \
+         AND old_path IN ({in_clause}) \
+         AND new_path IN ({in_clause})"
+    );
+
+    let mut stmt = conn.prepare(&sql)?;
+    let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
+    params.push(Box::new(project_id));
+    for p in &chain {
+        params.push(Box::new(p.clone()));
+    }
+    let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();
+
+    let old_paths: Vec<String> = stmt
+        .query_map(param_refs.as_slice(), |row| row.get(0))?
+        .collect::<std::result::Result<Vec<_>, _>>()?;
+
+    // The newest path is a candidate that is NOT an old_path in any intra-chain rename.
+    let newest = candidates.iter().find(|c| !old_paths.contains(c));
+
+    Ok(newest.cloned().or_else(|| Some(candidates[0].clone())))
+}
+
 #[cfg(test)]
 #[path = "path_resolver_tests.rs"]
 mod tests;
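The "newest path" rule in `try_resolve_rename_ambiguity` can be shown without a database: among candidates connected by rename edges, the newest path is the one that never appears as `old_path` in an intra-chain rename. A std-only sketch of just that rule, with hypothetical helper and variable names:

```rust
// Given rename edges (old_path, new_path) and a set of candidate paths that are
// believed to be the same file, pick the leaf of the rename chain: the candidate
// never renamed away from. Returns None if every candidate is an old_path.
fn newest_path(renames: &[(&str, &str)], candidates: &[&str]) -> Option<String> {
    // Only renames whose both endpoints are candidates count as intra-chain.
    let old_paths: Vec<&str> = renames
        .iter()
        .filter(|(old, new)| candidates.contains(old) && candidates.contains(new))
        .map(|(old, _)| *old)
        .collect();
    candidates
        .iter()
        .find(|c| !old_paths.contains(c))
        .map(|c| (*c).to_string())
}
```

This mirrors the SQL in the diff, which selects `DISTINCT old_path` restricted to renames whose `old_path` and `new_path` both lie inside the discovered chain.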
@@ -288,3 +288,80 @@ fn test_exact_match_preferred_over_suffix()
     assert_eq!(pq.value, "README.md");
     assert!(!pq.is_prefix);
 }
+
+fn seed_rename(conn: &Connection, mr_id: i64, project_id: i64, old_path: &str, new_path: &str) {
+    conn.execute(
+        "INSERT INTO mr_file_changes (merge_request_id, project_id, old_path, new_path, change_type)
+         VALUES (?1, ?2, ?3, ?4, 'renamed')",
+        rusqlite::params![mr_id, project_id, old_path, new_path],
+    )
+    .unwrap();
+}
+
+// ─── rename-aware ambiguity resolution ──────────────────────────────────────
+
+#[test]
+fn test_ambiguity_resolved_by_rename_chain() {
+    let conn = setup_test_db();
+    seed_project(&conn, 1);
+    seed_mr(&conn, 1, 1);
+    seed_mr(&conn, 2, 1);
+
+    // File was at src/old/operators.ts, then renamed to src/new/operators.ts
+    seed_file_change(&conn, 1, 1, "src/old/operators.ts");
+    seed_rename(&conn, 2, 1, "src/old/operators.ts", "src/new/operators.ts");
+
+    // Bare "operators.ts" matches both paths via suffix probe, but they're
+    // connected by a rename — should auto-resolve to the newest path.
+    let pq = build_path_query(&conn, "operators.ts", Some(1)).unwrap();
+    assert_eq!(pq.value, "src/new/operators.ts");
+    assert!(!pq.is_prefix);
+}
+
+#[test]
+fn test_ambiguity_not_resolved_when_genuinely_different_files() {
+    let conn = setup_test_db();
+    seed_project(&conn, 1);
+    seed_mr(&conn, 1, 1);
+
+    // Two genuinely different files with the same name (no rename connecting them)
+    seed_file_change(&conn, 1, 1, "src/utils/helpers.ts");
+    seed_file_change(&conn, 1, 1, "tests/utils/helpers.ts");
+
+    let err = build_path_query(&conn, "helpers.ts", Some(1)).unwrap_err();
+    assert!(err.to_string().contains("matches multiple paths"));
+}
+
+#[test]
+fn test_ambiguity_rename_chain_with_three_hops() {
+    let conn = setup_test_db();
+    seed_project(&conn, 1);
+    seed_mr(&conn, 1, 1);
+    seed_mr(&conn, 2, 1);
+    seed_mr(&conn, 3, 1);
+
+    // File named "config.ts" moved twice: lib/ -> src/ -> src/core/
+    seed_file_change(&conn, 1, 1, "lib/config.ts");
+    seed_rename(&conn, 2, 1, "lib/config.ts", "src/config.ts");
+    seed_rename(&conn, 3, 1, "src/config.ts", "src/core/config.ts");
+
+    // "config.ts" matches lib/config.ts, src/config.ts, src/core/config.ts via suffix
+    let pq = build_path_query(&conn, "config.ts", Some(1)).unwrap();
+    assert_eq!(pq.value, "src/core/config.ts");
+    assert!(!pq.is_prefix);
+}
+
+#[test]
+fn test_ambiguity_rename_without_project_id_stays_ambiguous() {
+    let conn = setup_test_db();
+    seed_project(&conn, 1);
+    seed_mr(&conn, 1, 1);
+    seed_mr(&conn, 2, 1);
+
+    seed_file_change(&conn, 1, 1, "src/old/utils.ts");
+    seed_rename(&conn, 2, 1, "src/old/utils.ts", "src/new/utils.ts");
+
+    // Without project_id, rename resolution is skipped → stays ambiguous
+    let err = build_path_query(&conn, "utils.ts", None).unwrap_err();
+    assert!(err.to_string().contains("matches multiple paths"));
+}
@@ -68,6 +68,36 @@ fn get_xdg_data_dir() -> PathBuf {
     })
 }

+/// Enforce restrictive permissions (0600) on the config file.
+/// Warns to stderr if permissions were too open, then tightens them.
+#[cfg(unix)]
+pub fn ensure_config_permissions(path: &std::path::Path) {
+    use std::os::unix::fs::MetadataExt;
+
+    let Ok(meta) = std::fs::metadata(path) else {
+        return;
+    };
+    let mode = meta.mode() & 0o777;
+    if mode != 0o600 {
+        eprintln!(
+            "Warning: config file permissions were {mode:04o}, tightening to 0600: {}",
+            path.display()
+        );
+        let _ = set_permissions_600(path);
+    }
+}
+
+#[cfg(unix)]
+fn set_permissions_600(path: &std::path::Path) -> std::io::Result<()> {
+    use std::os::unix::fs::PermissionsExt;
+    let perms = std::fs::Permissions::from_mode(0o600);
+    std::fs::set_permissions(path, perms)
+}
+
+/// No-op on non-Unix platforms.
+#[cfg(not(unix))]
+pub fn ensure_config_permissions(_path: &std::path::Path) {}
+
 #[cfg(test)]
 mod tests {
     use super::*;
@@ -20,6 +20,75 @@ impl SyncRunRecorder {
         Ok(Self { row_id })
     }

+    /// Returns the database row ID of this sync run.
+    pub fn row_id(&self) -> i64 {
+        self.row_id
+    }
+
+    /// Sets surgical-mode metadata on the run (mode, phase, IID manifest).
+    pub fn set_surgical_metadata(
+        &self,
+        conn: &Connection,
+        mode: &str,
+        phase: &str,
+        surgical_iids_json: &str,
+    ) -> Result<()> {
+        conn.execute(
+            "UPDATE sync_runs
+             SET mode = ?1, phase = ?2, surgical_iids_json = ?3
+             WHERE id = ?4",
+            rusqlite::params![mode, phase, surgical_iids_json, self.row_id],
+        )?;
+        Ok(())
+    }
+
+    /// Updates the current phase and refreshes the heartbeat timestamp.
+    pub fn update_phase(&self, conn: &Connection, phase: &str) -> Result<()> {
+        let now = now_ms();
+        conn.execute(
+            "UPDATE sync_runs SET phase = ?1, heartbeat_at = ?2 WHERE id = ?3",
+            rusqlite::params![phase, now, self.row_id],
+        )?;
+        Ok(())
+    }
+
+    /// Increments a counter column by 1 based on entity type and stage.
+    /// Unknown (entity_type, stage) combinations are silently ignored.
+    pub fn record_entity_result(
+        &self,
+        conn: &Connection,
+        entity_type: &str,
+        stage: &str,
+    ) -> Result<()> {
+        let column = match (entity_type, stage) {
+            ("issue", "fetched") => "issues_fetched",
+            ("issue", "ingested") => "issues_ingested",
+            ("mr", "fetched") => "mrs_fetched",
+            ("mr", "ingested") => "mrs_ingested",
+            ("issue" | "mr", "skipped_stale") => "skipped_stale",
+            ("doc", "regenerated") => "docs_regenerated",
+            ("doc", "embedded") => "docs_embedded",
+            (_, "warning") => "warnings_count",
+            _ => return Ok(()),
+        };
+        // Column name is from a hardcoded match, not user input — safe to interpolate.
+        let sql = format!("UPDATE sync_runs SET {column} = {column} + 1 WHERE id = ?1");
+        conn.execute(&sql, rusqlite::params![self.row_id])?;
+        Ok(())
+    }
+
+    /// Marks the run as cancelled with a reason. Consumes self (terminal state).
+    pub fn cancel(self, conn: &Connection, reason: &str) -> Result<()> {
+        let now = now_ms();
+        conn.execute(
+            "UPDATE sync_runs
+             SET status = 'cancelled', error = ?1, cancelled_at = ?2, finished_at = ?3
+             WHERE id = ?4",
+            rusqlite::params![reason, now, now, self.row_id],
+        )?;
+        Ok(())
+    }
+
     pub fn succeed(
         self,
         conn: &Connection,
@@ -146,3 +146,239 @@ fn test_sync_run_recorder_fail_with_partial_metrics()
     assert_eq!(parsed.len(), 1);
     assert_eq!(parsed[0].name, "ingest_issues");
 }
+
+#[test]
+fn sync_run_surgical_columns_exist() {
+    let conn = setup_test_db();
+    conn.execute(
+        "INSERT INTO sync_runs (started_at, heartbeat_at, status, command, mode, phase, surgical_iids_json)
+         VALUES (1000, 1000, 'running', 'sync', 'surgical', 'preflight', '{\"issues\":[7],\"mrs\":[]}')",
+        [],
+    )
+    .unwrap();
+    let (mode, phase, iids_json): (String, String, String) = conn
+        .query_row(
+            "SELECT mode, phase, surgical_iids_json FROM sync_runs WHERE mode = 'surgical'",
+            [],
+            |r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
+        )
+        .unwrap();
+    assert_eq!(mode, "surgical");
+    assert_eq!(phase, "preflight");
+    assert!(iids_json.contains("7"));
+}
+
+#[test]
+fn sync_run_counter_defaults_are_zero() {
+    let conn = setup_test_db();
+    conn.execute(
+        "INSERT INTO sync_runs (started_at, heartbeat_at, status, command)
+         VALUES (2000, 2000, 'running', 'sync')",
+        [],
+    )
+    .unwrap();
+    let row_id = conn.last_insert_rowid();
+    let (issues_fetched, mrs_fetched, docs_regenerated, warnings_count): (i64, i64, i64, i64) =
+        conn.query_row(
+            "SELECT issues_fetched, mrs_fetched, docs_regenerated, warnings_count FROM sync_runs WHERE id = ?1",
+            [row_id],
+            |r| Ok((r.get(0)?, r.get(1)?, r.get(2)?, r.get(3)?)),
+        )
+        .unwrap();
+    assert_eq!(issues_fetched, 0);
+    assert_eq!(mrs_fetched, 0);
+    assert_eq!(docs_regenerated, 0);
+    assert_eq!(warnings_count, 0);
+}
+
+#[test]
+fn sync_run_nullable_columns_default_to_null() {
+    let conn = setup_test_db();
+    conn.execute(
+        "INSERT INTO sync_runs (started_at, heartbeat_at, status, command)
+         VALUES (3000, 3000, 'running', 'sync')",
+        [],
+    )
+    .unwrap();
+    let row_id = conn.last_insert_rowid();
+    let (mode, phase, cancelled_at): (Option<String>, Option<String>, Option<i64>) = conn
+        .query_row(
+            "SELECT mode, phase, cancelled_at FROM sync_runs WHERE id = ?1",
+            [row_id],
+            |r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
+        )
+        .unwrap();
+    assert!(mode.is_none());
+    assert!(phase.is_none());
+    assert!(cancelled_at.is_none());
+}
+
+#[test]
+fn sync_run_counter_round_trip() {
+    let conn = setup_test_db();
+    conn.execute(
+        "INSERT INTO sync_runs (started_at, heartbeat_at, status, command, mode, issues_fetched, mrs_ingested, docs_embedded)
+         VALUES (4000, 4000, 'succeeded', 'sync', 'surgical', 3, 2, 5)",
+        [],
+    )
+    .unwrap();
+    let row_id = conn.last_insert_rowid();
+    let (issues_fetched, mrs_ingested, docs_embedded): (i64, i64, i64) = conn
+        .query_row(
+            "SELECT issues_fetched, mrs_ingested, docs_embedded FROM sync_runs WHERE id = ?1",
+            [row_id],
+            |r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
+        )
+        .unwrap();
+    assert_eq!(issues_fetched, 3);
+    assert_eq!(mrs_ingested, 2);
+    assert_eq!(docs_embedded, 5);
+}
+
+#[test]
+fn surgical_lifecycle_start_metadata_succeed() {
+    let conn = setup_test_db();
+    let recorder = SyncRunRecorder::start(&conn, "sync", "surg001").unwrap();
+    let row_id = recorder.row_id();
+
+    recorder
+        .set_surgical_metadata(
+            &conn,
+            "surgical",
+            "preflight",
+            r#"{"issues":[7,8],"mrs":[101]}"#,
+        )
+        .unwrap();
+
+    recorder.update_phase(&conn, "ingest").unwrap();
+    recorder
+        .record_entity_result(&conn, "issue", "fetched")
+        .unwrap();
+    recorder
+        .record_entity_result(&conn, "issue", "fetched")
+        .unwrap();
+    recorder
+        .record_entity_result(&conn, "issue", "ingested")
+        .unwrap();
+    recorder
+        .record_entity_result(&conn, "mr", "fetched")
+        .unwrap();
+    recorder
+        .record_entity_result(&conn, "mr", "ingested")
+        .unwrap();
+
+    recorder.succeed(&conn, &[], 3, 0).unwrap();
+
+    #[allow(clippy::type_complexity)]
+    let (mode, phase, iids, issues_fetched, mrs_fetched, issues_ingested, mrs_ingested, status): (
+        String,
+        String,
+        String,
+        i64,
+        i64,
+        i64,
+        i64,
+        String,
+    ) = conn
+        .query_row(
+            "SELECT mode, phase, surgical_iids_json, issues_fetched, mrs_fetched, \
+             issues_ingested, mrs_ingested, status \
+             FROM sync_runs WHERE id = ?1",
+            [row_id],
+            |r| {
+                Ok((
+                    r.get(0)?,
+                    r.get(1)?,
+                    r.get(2)?,
+                    r.get(3)?,
+                    r.get(4)?,
+                    r.get(5)?,
+                    r.get(6)?,
+                    r.get(7)?,
+                ))
+            },
+        )
+        .unwrap();
+
+    assert_eq!(mode, "surgical");
+    assert_eq!(phase, "ingest");
+    assert!(iids.contains("101"));
+    assert_eq!(issues_fetched, 2);
+    assert_eq!(mrs_fetched, 1);
+    assert_eq!(issues_ingested, 1);
+    assert_eq!(mrs_ingested, 1);
+    assert_eq!(status, "succeeded");
+}
+
+#[test]
+fn surgical_lifecycle_cancel() {
+    let conn = setup_test_db();
+    let recorder = SyncRunRecorder::start(&conn, "sync", "cancel01").unwrap();
+    let row_id = recorder.row_id();
+
+    recorder
+        .set_surgical_metadata(&conn, "surgical", "preflight", "{}")
+        .unwrap();
+    recorder
+        .cancel(&conn, "User requested cancellation")
+        .unwrap();
+
+    let (status, error, cancelled_at, finished_at): (
+        String,
+        Option<String>,
+        Option<i64>,
+        Option<i64>,
+    ) = conn
+        .query_row(
+            "SELECT status, error, cancelled_at, finished_at FROM sync_runs WHERE id = ?1",
+            [row_id],
+            |r| Ok((r.get(0)?, r.get(1)?, r.get(2)?, r.get(3)?)),
+        )
+        .unwrap();
+
+    assert_eq!(status, "cancelled");
+    assert_eq!(error.as_deref(), Some("User requested cancellation"));
+    assert!(cancelled_at.is_some());
+    assert!(finished_at.is_some());
+}
+
+#[test]
+fn record_entity_result_ignores_unknown() {
+    let conn = setup_test_db();
+    let recorder = SyncRunRecorder::start(&conn, "sync", "unk001").unwrap();
+    recorder
+        .record_entity_result(&conn, "widget", "exploded")
+        .unwrap();
+}
+
+#[test]
+fn record_entity_result_doc_counters() {
+    let conn = setup_test_db();
+    let recorder = SyncRunRecorder::start(&conn, "sync", "cnt001").unwrap();
+    let row_id = recorder.row_id();
+
+    recorder
+        .record_entity_result(&conn, "doc", "regenerated")
+        .unwrap();
+    recorder
+        .record_entity_result(&conn, "doc", "regenerated")
+        .unwrap();
+    recorder
+        .record_entity_result(&conn, "doc", "embedded")
+        .unwrap();
+    recorder
+        .record_entity_result(&conn, "issue", "skipped_stale")
+        .unwrap();
+
+    let (docs_regen, docs_embed, skipped): (i64, i64, i64) = conn
+        .query_row(
+            "SELECT docs_regenerated, docs_embedded, skipped_stale FROM sync_runs WHERE id = ?1",
+            [row_id],
+            |r| Ok((r.get(0)?, r.get(1)?, r.get(2)?)),
+        )
+        .unwrap();
+
+    assert_eq!(docs_regen, 2);
+    assert_eq!(docs_embed, 1);
+    assert_eq!(skipped, 1);
+}
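The counter tests above exercise `record_entity_result`, whose key trick is that the column name comes from a closed `match`, so interpolating it into the SQL string is safe. The dispatch can be isolated as a pure function; `counter_sql` is a hypothetical name for this sketch:

```rust
// Mirror of record_entity_result's column dispatch: a closed match picks the
// counter column, so the name is never user input and can be interpolated.
fn counter_sql(entity_type: &str, stage: &str) -> Option<String> {
    let column = match (entity_type, stage) {
        ("issue", "fetched") => "issues_fetched",
        ("issue", "ingested") => "issues_ingested",
        ("mr", "fetched") => "mrs_fetched",
        ("mr", "ingested") => "mrs_ingested",
        ("issue" | "mr", "skipped_stale") => "skipped_stale",
        ("doc", "regenerated") => "docs_regenerated",
        ("doc", "embedded") => "docs_embedded",
        (_, "warning") => "warnings_count",
        // Unknown combinations are silently ignored, as in the diff.
        _ => return None,
    };
    Some(format!(
        "UPDATE sync_runs SET {column} = {column} + 1 WHERE id = ?1"
    ))
}
```

The row id still goes through a bound `?1` parameter; only the statically chosen column name is formatted into the statement.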
@@ -1,4 +1,5 @@
 use serde::Serialize;
+use tracing::info;

 use super::error::Result;
 use super::file_history::resolve_rename_chain;

@@ -51,6 +52,9 @@ pub struct TraceResult {
     pub renames_followed: bool,
     pub trace_chains: Vec<TraceChain>,
     pub total_chains: usize,
+    /// Diagnostic hints explaining why results may be empty.
+    #[serde(skip_serializing_if = "Vec::is_empty")]
+    pub hints: Vec<String>,
 }

 /// Run the trace query: file -> MR -> issue chain.

@@ -75,6 +79,14 @@ pub fn run_trace(
         (vec![path.to_string()], false)
     };

+    info!(
+        paths = all_paths.len(),
+        renames_followed,
+        "trace: resolved {} path(s) for '{}'",
+        all_paths.len(),
+        path
+    );
+
     // Build placeholders for IN clause
     let placeholders: Vec<String> = (0..all_paths.len())
         .map(|i| format!("?{}", i + 2))

@@ -100,7 +112,7 @@ pub fn run_trace(
         all_paths.len() + 2
     );

-    let mut stmt = conn.prepare(&mr_sql)?;
+    let mut stmt = conn.prepare_cached(&mr_sql)?;

     let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
     params.push(Box::new(project_id.unwrap_or(0)));

@@ -137,8 +149,14 @@ pub fn run_trace(
                 web_url: row.get(8)?,
             })
         })?
-        .filter_map(std::result::Result::ok)
-        .collect();
+        .collect::<std::result::Result<Vec<_>, _>>()?;
+
+    info!(
+        mr_count = mr_rows.len(),
+        "trace: found {} MR(s) touching '{}'",
+        mr_rows.len(),
+        path
+    );

     // Step 2: For each MR, find linked issues + optional discussions
     let mut trace_chains = Vec::with_capacity(mr_rows.len());

@@ -152,6 +170,16 @@ pub fn run_trace(
             Vec::new()
         };

+        info!(
+            mr_iid = mr.iid,
+            issues = issues.len(),
+            discussions = discussions.len(),
+            "trace: MR !{}: {} issue(s), {} discussion(s)",
+            mr.iid,
+            issues.len(),
+            discussions.len()
+        );
+
         trace_chains.push(TraceChain {
             mr_iid: mr.iid,
             mr_title: mr.title.clone(),

@@ -168,12 +196,20 @@ pub fn run_trace(

     let total_chains = trace_chains.len();

+    // Build diagnostic hints when no results found
+    let hints = if total_chains == 0 {
+        build_trace_hints(conn, project_id, &all_paths)?
+    } else {
+        Vec::new()
+    };
+
     Ok(TraceResult {
         path: path.to_string(),
         resolved_paths: all_paths,
         renames_followed,
         trace_chains,
         total_chains,
+        hints,
     })
 }

@@ -191,7 +227,7 @@ fn fetch_linked_issues(conn: &rusqlite::Connection, mr_id: i64) -> Result<Vec<TraceIssue>> {
         CASE er.reference_type WHEN 'closes' THEN 0 WHEN 'related' THEN 1 ELSE 2 END, \
         i.iid";

-    let mut stmt = conn.prepare(sql)?;
+    let mut stmt = conn.prepare_cached(sql)?;
     let issues: Vec<TraceIssue> = stmt
         .query_map(rusqlite::params![mr_id], |row| {
             Ok(TraceIssue {

@@ -202,8 +238,7 @@ fn fetch_linked_issues(conn: &rusqlite::Connection, mr_id: i64) -> Result<Vec<TraceIssue>> {
                 web_url: row.get(4)?,
             })
         })?
-        .filter_map(std::result::Result::ok)
-        .collect();
+        .collect::<std::result::Result<Vec<_>, _>>()?;

     Ok(issues)
 }

@@ -225,11 +260,10 @@ fn fetch_trace_discussions(
         WHERE d.merge_request_id = ?1 \
         AND n.position_new_path IN ({in_clause}) \
         AND n.is_system = 0 \
-        ORDER BY n.created_at DESC \
-        LIMIT 20"
+        ORDER BY n.created_at DESC"
     );

-    let mut stmt = conn.prepare(&sql)?;
+    let mut stmt = conn.prepare_cached(&sql)?;

     let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
     params.push(Box::new(mr_id));

@@ -251,12 +285,57 @@ fn fetch_trace_discussions(
             created_at_iso: ms_to_iso(created_at),
             })
         })?
-        .filter_map(std::result::Result::ok)
-        .collect();
+        .collect::<std::result::Result<Vec<_>, _>>()?;

     Ok(discussions)
 }
+
+/// Build diagnostic hints explaining why a trace query returned no results.
+fn build_trace_hints(
+    conn: &rusqlite::Connection,
+    project_id: Option<i64>,
+    paths: &[String],
+) -> Result<Vec<String>> {
+    let mut hints = Vec::new();
+
+    // Check if mr_file_changes has ANY rows for this project
+    let has_file_changes: bool = if let Some(pid) = project_id {
+        conn.query_row(
+            "SELECT EXISTS(SELECT 1 FROM mr_file_changes WHERE project_id = ?1 LIMIT 1)",
+            rusqlite::params![pid],
+            |row| row.get(0),
+        )?
+    } else {
+        conn.query_row(
+            "SELECT EXISTS(SELECT 1 FROM mr_file_changes LIMIT 1)",
+            [],
+            |row| row.get(0),
+        )?
+    };
+
+    if !has_file_changes {
+        hints.push(
+            "No MR file changes have been synced yet. Run 'lore sync' to fetch file change data."
+                .to_string(),
+        );
+        return Ok(hints);
+    }
+
+    // File changes exist but none match these paths
+    let path_list = paths
+        .iter()
+        .map(|p| format!("'{p}'"))
+        .collect::<Vec<_>>()
+        .join(", ");
+    hints.push(format!(
+        "Searched paths [{}] were not found in MR file changes. \
+         The file may predate the sync window or use a different path.",
+        path_list
+    ));
+
+    Ok(hints)
+}

 #[cfg(test)]
 #[path = "trace_tests.rs"]
 mod tests;
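The decision logic in `build_trace_hints` separates cleanly from the database: one boolean decides between "nothing synced" and "paths not matched". A std-only sketch of that branch, with the SQL probe replaced by a plain parameter (`trace_hints` is a hypothetical name):

```rust
// Database-free mirror of build_trace_hints: the EXISTS probe is abstracted
// into the has_file_changes flag.
fn trace_hints(has_file_changes: bool, paths: &[&str]) -> Vec<String> {
    if !has_file_changes {
        return vec![
            "No MR file changes have been synced yet. Run 'lore sync' to fetch file change data."
                .to_string(),
        ];
    }
    // File changes exist but none matched: list the searched paths.
    let path_list = paths
        .iter()
        .map(|p| format!("'{p}'"))
        .collect::<Vec<_>>()
        .join(", ");
    vec![format!(
        "Searched paths [{path_list}] were not found in MR file changes. \
         The file may predate the sync window or use a different path."
    )]
}
```

Keeping the hints behind `total_chains == 0` (as the diff does) means successful traces pay no extra query cost, and `#[serde(skip_serializing_if = "Vec::is_empty")]` keeps them out of JSON output entirely.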
@@ -7,7 +7,10 @@ pub use extractor::{
|
||||
extract_discussion_document, extract_issue_document, extract_mr_document,
|
||||
extract_note_document, extract_note_document_cached,
|
||||
};
|
||||
pub use regenerator::{RegenerateResult, regenerate_dirty_documents};
|
||||
pub use regenerator::{
|
||||
RegenerateForSourcesResult, RegenerateResult, regenerate_dirty_documents,
|
||||
regenerate_dirty_documents_for_sources,
|
||||
};
|
||||
pub use truncation::{
|
||||
MAX_DISCUSSION_BYTES, MAX_DOCUMENT_BYTES_HARD, NoteContent, TruncationReason, TruncationResult,
|
||||
truncate_discussion, truncate_hard_cap, truncate_utf8,
|
||||
|
||||
@@ -84,6 +84,60 @@ pub fn regenerate_dirty_documents(
    Ok(result)
}

#[derive(Debug, Default)]
pub struct RegenerateForSourcesResult {
    pub regenerated: usize,
    pub unchanged: usize,
    pub errored: usize,
    pub document_ids: Vec<i64>,
}

pub fn regenerate_dirty_documents_for_sources(
    conn: &Connection,
    source_keys: &[(SourceType, i64)],
) -> Result<RegenerateForSourcesResult> {
    let mut result = RegenerateForSourcesResult::default();
    let mut cache = ParentMetadataCache::new();

    for &(source_type, source_id) in source_keys {
        match regenerate_one(conn, source_type, source_id, &mut cache) {
            Ok(changed) => {
                if changed {
                    result.regenerated += 1;
                } else {
                    result.unchanged += 1;
                }
                clear_dirty(conn, source_type, source_id)?;

                // Try to collect the document_id if a document exists
                if let Ok(doc_id) = get_document_id(conn, source_type, source_id) {
                    result.document_ids.push(doc_id);
                }
            }
            Err(e) => {
                warn!(
                    source_type = %source_type,
                    source_id,
                    error = %e,
                    "Failed to regenerate document for source"
                );
                record_dirty_error(conn, source_type, source_id, &e.to_string())?;
                result.errored += 1;
            }
        }
    }

    debug!(
        regenerated = result.regenerated,
        unchanged = result.unchanged,
        errored = result.errored,
        document_ids = result.document_ids.len(),
        "Scoped document regeneration complete"
    );

    Ok(result)
}

fn regenerate_one(
    conn: &Connection,
    source_type: SourceType,

@@ -518,3 +518,88 @@ fn test_note_regeneration_cache_invalidates_across_parents() {
    assert!(beta_content.contains("parent_iid: 99"));
    assert!(beta_content.contains("parent_title: Issue Beta"));
}

#[test]
fn test_scoped_regen_only_processes_specified_sources() {
    let conn = setup_db();
    // Insert two issues
    conn.execute(
        "INSERT INTO issues (id, gitlab_id, project_id, iid, title, state, created_at, updated_at, last_seen_at) VALUES (1, 10, 1, 42, 'First Issue', 'opened', 1000, 2000, 3000)",
        [],
    ).unwrap();
    conn.execute(
        "INSERT INTO issues (id, gitlab_id, project_id, iid, title, state, created_at, updated_at, last_seen_at) VALUES (2, 20, 1, 43, 'Second Issue', 'opened', 1000, 2000, 3000)",
        [],
    ).unwrap();

    // Mark both dirty
    mark_dirty(&conn, SourceType::Issue, 1).unwrap();
    mark_dirty(&conn, SourceType::Issue, 2).unwrap();

    // Regenerate only issue 1
    let result = regenerate_dirty_documents_for_sources(&conn, &[(SourceType::Issue, 1)]).unwrap();

    assert_eq!(result.regenerated, 1);
    assert_eq!(result.errored, 0);

    // Issue 1 should be regenerated and cleared from dirty
    let doc_count: i64 = conn
        .query_row(
            "SELECT COUNT(*) FROM documents WHERE source_type = 'issue' AND source_id = 1",
            [],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(doc_count, 1);

    // Issue 2 should still be dirty
    let dirty_count: i64 = conn
        .query_row(
            "SELECT COUNT(*) FROM dirty_sources WHERE source_type = 'issue' AND source_id = 2",
            [],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(dirty_count, 1);
}

#[test]
fn test_scoped_regen_returns_document_ids() {
    let conn = setup_db();
    conn.execute(
        "INSERT INTO issues (id, gitlab_id, project_id, iid, title, state, created_at, updated_at, last_seen_at) VALUES (1, 10, 1, 42, 'Test Issue', 'opened', 1000, 2000, 3000)",
        [],
    ).unwrap();
    mark_dirty(&conn, SourceType::Issue, 1).unwrap();

    let result = regenerate_dirty_documents_for_sources(&conn, &[(SourceType::Issue, 1)]).unwrap();

    assert_eq!(result.document_ids.len(), 1);

    // Verify returned ID matches the actual document
    let actual_id: i64 = conn
        .query_row(
            "SELECT id FROM documents WHERE source_type = 'issue' AND source_id = 1",
            [],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(result.document_ids[0], actual_id);
}

#[test]
fn test_scoped_regen_handles_missing_source() {
    let conn = setup_db();
    // Don't insert any issues — source_id 999 doesn't exist
    // But mark it dirty so the function tries to process it
    mark_dirty(&conn, SourceType::Issue, 999).unwrap();

    let result =
        regenerate_dirty_documents_for_sources(&conn, &[(SourceType::Issue, 999)]).unwrap();

    // Source doesn't exist, so regenerate_one returns Ok(true) deleting the doc.
    // No document_id to collect since there's nothing in the documents table.
    assert_eq!(result.regenerated, 1);
    assert_eq!(result.errored, 0);
    assert!(result.document_ids.is_empty());
}

@@ -7,5 +7,5 @@ pub mod similarity;

pub use change_detector::{PendingDocument, count_pending_documents, find_pending_documents};
pub use chunking::{CHUNK_MAX_BYTES, CHUNK_OVERLAP_CHARS, split_into_chunks};
-pub use pipeline::{EmbedResult, embed_documents};
+pub use pipeline::{EmbedForIdsResult, EmbedResult, embed_documents, embed_documents_by_ids};
pub use similarity::cosine_similarity;

@@ -578,3 +578,207 @@ fn sha256_hash(input: &str) -> String {
    hasher.update(input.as_bytes());
    format!("{:x}", hasher.finalize())
}

#[derive(Debug, Default)]
pub struct EmbedForIdsResult {
    pub chunks_embedded: usize,
    pub docs_embedded: usize,
    pub failed: usize,
    pub skipped: usize,
}

/// Embed only the documents with the given IDs, skipping any that are
/// already embedded with matching config (model, dims, chunk size, hash).
pub async fn embed_documents_by_ids(
    conn: &Connection,
    client: &OllamaClient,
    model_name: &str,
    concurrency: usize,
    document_ids: &[i64],
    signal: &ShutdownSignal,
) -> Result<EmbedForIdsResult> {
    let mut result = EmbedForIdsResult::default();

    if document_ids.is_empty() {
        return Ok(result);
    }

    if signal.is_cancelled() {
        return Ok(result);
    }

    // Load documents for the specified IDs, filtering out already-embedded
    let pending = find_documents_by_ids(conn, document_ids, model_name)?;

    if pending.is_empty() {
        result.skipped = document_ids.len();
        return Ok(result);
    }

    let skipped_count = document_ids.len() - pending.len();
    result.skipped = skipped_count;

    info!(
        requested = document_ids.len(),
        pending = pending.len(),
        skipped = skipped_count,
        "Scoped embedding: processing documents by ID"
    );

    // Use the same SAVEPOINT + embed_page pattern as the main pipeline
    let mut last_id: i64 = 0;
    let mut processed: usize = 0;
    let total = pending.len();
    let mut page_stats = EmbedResult::default();

    conn.execute_batch("SAVEPOINT embed_by_ids")?;
    let page_result = embed_page(
        conn,
        client,
        model_name,
        concurrency,
        &pending,
        &mut page_stats,
        &mut last_id,
        &mut processed,
        total,
        &None,
        signal,
    )
    .await;

    match page_result {
        Ok(()) if signal.is_cancelled() => {
            let _ = conn.execute_batch("ROLLBACK TO embed_by_ids; RELEASE embed_by_ids");
            info!("Rolled back scoped embed page due to cancellation");
        }
        Ok(()) => {
            conn.execute_batch("RELEASE embed_by_ids")?;

            // Count actual results from DB
            let (chunks, docs) = count_embedded_results(conn, &pending)?;
            result.chunks_embedded = chunks;
            result.docs_embedded = docs;
            result.failed = page_stats.failed;
        }
        Err(e) => {
            let _ = conn.execute_batch("ROLLBACK TO embed_by_ids; RELEASE embed_by_ids");
            return Err(e);
        }
    }

    info!(
        chunks_embedded = result.chunks_embedded,
        docs_embedded = result.docs_embedded,
        failed = result.failed,
        skipped = result.skipped,
        "Scoped embedding complete"
    );

    Ok(result)
}

/// Load documents by specific IDs, filtering out those already embedded
/// with matching config (same logic as `find_pending_documents` but scoped).
fn find_documents_by_ids(
    conn: &Connection,
    document_ids: &[i64],
    model_name: &str,
) -> Result<Vec<crate::embedding::change_detector::PendingDocument>> {
    use crate::embedding::chunking::{CHUNK_MAX_BYTES, EXPECTED_DIMS};

    if document_ids.is_empty() {
        return Ok(Vec::new());
    }

    // Build IN clause with placeholders
    let placeholders: Vec<String> = (0..document_ids.len())
        .map(|i| format!("?{}", i + 1))
        .collect();
    let in_clause = placeholders.join(", ");

    let sql = format!(
        r#"
        SELECT d.id, d.content_text, d.content_hash
        FROM documents d
        LEFT JOIN embedding_metadata em
            ON em.document_id = d.id AND em.chunk_index = 0
        WHERE d.id IN ({in_clause})
          AND (
            em.document_id IS NULL
            OR em.document_hash != d.content_hash
            OR em.chunk_max_bytes IS NULL
            OR em.chunk_max_bytes != ?{chunk_bytes_idx}
            OR em.model != ?{model_idx}
            OR em.dims != ?{dims_idx}
          )
        ORDER BY d.id
        "#,
        in_clause = in_clause,
        chunk_bytes_idx = document_ids.len() + 1,
        model_idx = document_ids.len() + 2,
        dims_idx = document_ids.len() + 3,
    );

    let mut stmt = conn.prepare(&sql)?;

    // Build params: document_ids... then chunk_max_bytes, model, dims
    let mut params: Vec<Box<dyn rusqlite::types::ToSql>> = Vec::new();
    for id in document_ids {
        params.push(Box::new(*id));
    }
    params.push(Box::new(CHUNK_MAX_BYTES as i64));
    params.push(Box::new(model_name.to_string()));
    params.push(Box::new(EXPECTED_DIMS as i64));

    let param_refs: Vec<&dyn rusqlite::types::ToSql> = params.iter().map(|p| p.as_ref()).collect();

    let rows = stmt
        .query_map(param_refs.as_slice(), |row| {
            Ok(crate::embedding::change_detector::PendingDocument {
                document_id: row.get(0)?,
                content_text: row.get(1)?,
                content_hash: row.get(2)?,
            })
        })?
        .collect::<std::result::Result<Vec<_>, _>>()?;

    Ok(rows)
}

/// Count how many chunks and complete docs were embedded for the given pending docs.
fn count_embedded_results(
    conn: &Connection,
    pending: &[crate::embedding::change_detector::PendingDocument],
) -> Result<(usize, usize)> {
    let mut total_chunks: usize = 0;
    let mut total_docs: usize = 0;

    for doc in pending {
        let chunk_count: i64 = conn.query_row(
            "SELECT COUNT(*) FROM embedding_metadata WHERE document_id = ?1 AND last_error IS NULL",
            [doc.document_id],
            |row| row.get(0),
        )?;
        if chunk_count > 0 {
            total_chunks += chunk_count as usize;
            // Check if all expected chunks are present (chunk_count metadata on chunk_index=0)
            let expected: Option<i64> = conn.query_row(
                "SELECT chunk_count FROM embedding_metadata WHERE document_id = ?1 AND chunk_index = 0",
                [doc.document_id],
                |row| row.get(0),
            )?;
            if let Some(expected_count) = expected
                && chunk_count >= expected_count
            {
                total_docs += 1;
            }
        }
    }

    Ok((total_chunks, total_docs))
}

#[cfg(test)]
#[path = "pipeline_tests.rs"]
mod tests;

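The scoped lookup above builds its `IN` clause from numbered SQLite placeholders, with the trailing config parameters continuing the numbering after the ID slots. A minimal standalone sketch of that placeholder scheme (the function name `in_clause` is illustrative, not part of the diff):

```rust
// Sketch: numbered SQL placeholders for a dynamic IN clause, mirroring
// the pattern used by find_documents_by_ids above. For n IDs this yields
// "?1, ?2, ..., ?n"; extra parameters then start at ?(n + 1).
fn in_clause(n_ids: usize) -> String {
    (0..n_ids)
        .map(|i| format!("?{}", i + 1))
        .collect::<Vec<_>>()
        .join(", ")
}

fn main() {
    let clause = in_clause(3);
    assert_eq!(clause, "?1, ?2, ?3");
    // In the diff, the parameters after the IDs would bind as
    // ?4 = chunk_max_bytes, ?5 = model, ?6 = dims.
    println!("{clause}");
}
```

This is why the `chunk_bytes_idx`/`model_idx`/`dims_idx` format arguments are computed as `document_ids.len() + 1..3`: the bind positions must not collide with the ID slots.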
184
src/embedding/pipeline_tests.rs
Normal file
@@ -0,0 +1,184 @@
use std::path::Path;

use rusqlite::Connection;
use wiremock::matchers::{method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

use crate::core::db::{create_connection, run_migrations};
use crate::core::shutdown::ShutdownSignal;
use crate::embedding::chunking::EXPECTED_DIMS;
use crate::embedding::ollama::{OllamaClient, OllamaConfig};
use crate::embedding::pipeline::embed_documents_by_ids;

const MODEL: &str = "nomic-embed-text";

fn setup_db() -> Connection {
    let conn = create_connection(Path::new(":memory:")).unwrap();
    run_migrations(&conn).unwrap();
    conn
}

fn insert_test_project(conn: &Connection) -> i64 {
    conn.execute(
        "INSERT INTO projects (gitlab_project_id, path_with_namespace, web_url)
         VALUES (1, 'group/test', 'https://gitlab.example.com/group/test')",
        [],
    )
    .unwrap();
    conn.last_insert_rowid()
}

fn insert_test_document(
    conn: &Connection,
    project_id: i64,
    source_id: i64,
    content: &str,
    hash: &str,
) -> i64 {
    conn.execute(
        "INSERT INTO documents (source_type, source_id, project_id, content_text, content_hash)
         VALUES ('issue', ?1, ?2, ?3, ?4)",
        rusqlite::params![source_id, project_id, content, hash],
    )
    .unwrap();
    conn.last_insert_rowid()
}

fn make_fake_embedding() -> Vec<f32> {
    vec![0.1_f32; EXPECTED_DIMS]
}

fn make_ollama_response(count: usize) -> serde_json::Value {
    let embedding = make_fake_embedding();
    let embeddings: Vec<_> = (0..count).map(|_| embedding.clone()).collect();
    serde_json::json!({
        "model": MODEL,
        "embeddings": embeddings
    })
}

fn count_embeddings_for_doc(conn: &Connection, doc_id: i64) -> i64 {
    conn.query_row(
        "SELECT COUNT(*) FROM embedding_metadata WHERE document_id = ?1",
        [doc_id],
        |row| row.get(0),
    )
    .unwrap()
}

fn make_client(base_url: &str) -> OllamaClient {
    OllamaClient::new(OllamaConfig {
        base_url: base_url.to_string(),
        model: MODEL.to_string(),
        timeout_secs: 10,
    })
}

#[tokio::test]
async fn test_embed_by_ids_only_embeds_specified_docs() {
    let mock_server = MockServer::start().await;

    Mock::given(method("POST"))
        .and(path("/api/embed"))
        .respond_with(ResponseTemplate::new(200).set_body_json(make_ollama_response(1)))
        .mount(&mock_server)
        .await;

    let conn = setup_db();
    let proj_id = insert_test_project(&conn);
    let doc1 = insert_test_document(&conn, proj_id, 1, "Hello world content for doc 1", "hash_a");
    let doc2 = insert_test_document(&conn, proj_id, 2, "Hello world content for doc 2", "hash_b");

    let signal = ShutdownSignal::new();
    let client = make_client(&mock_server.uri());

    // Only embed doc1
    let result = embed_documents_by_ids(&conn, &client, MODEL, 1, &[doc1], &signal)
        .await
        .unwrap();

    assert_eq!(result.docs_embedded, 1, "Should embed exactly 1 doc");
    assert!(result.chunks_embedded > 0, "Should have embedded chunks");

    // doc1 should have embeddings
    assert!(
        count_embeddings_for_doc(&conn, doc1) > 0,
        "doc1 should have embeddings"
    );

    // doc2 should have NO embeddings
    assert_eq!(
        count_embeddings_for_doc(&conn, doc2),
        0,
        "doc2 should have no embeddings"
    );
}

#[tokio::test]
async fn test_embed_by_ids_skips_already_embedded() {
    let mock_server = MockServer::start().await;

    Mock::given(method("POST"))
        .and(path("/api/embed"))
        .respond_with(ResponseTemplate::new(200).set_body_json(make_ollama_response(1)))
        .expect(1) // Should only be called once
        .mount(&mock_server)
        .await;

    let conn = setup_db();
    let proj_id = insert_test_project(&conn);
    let doc1 = insert_test_document(&conn, proj_id, 1, "Hello world content for doc 1", "hash_a");

    let signal = ShutdownSignal::new();
    let client = make_client(&mock_server.uri());

    // First embed
    let result1 = embed_documents_by_ids(&conn, &client, MODEL, 1, &[doc1], &signal)
        .await
        .unwrap();
    assert_eq!(result1.docs_embedded, 1);

    // Second embed with same doc — should skip
    let result2 = embed_documents_by_ids(&conn, &client, MODEL, 1, &[doc1], &signal)
        .await
        .unwrap();
    assert_eq!(result2.docs_embedded, 0, "Should embed 0 on second call");
    assert_eq!(result2.skipped, 1, "Should report 1 skipped");
    assert_eq!(result2.chunks_embedded, 0, "No new chunks");
}

#[tokio::test]
async fn test_embed_by_ids_empty_input() {
    let conn = setup_db();
    let signal = ShutdownSignal::new();
    // Client URL doesn't matter — should never be called
    let client = make_client("http://localhost:99999");

    let result = embed_documents_by_ids(&conn, &client, MODEL, 1, &[], &signal)
        .await
        .unwrap();

    assert_eq!(result.docs_embedded, 0);
    assert_eq!(result.chunks_embedded, 0);
    assert_eq!(result.failed, 0);
    assert_eq!(result.skipped, 0);
}

#[tokio::test]
async fn test_embed_by_ids_respects_cancellation() {
    let conn = setup_db();
    let proj_id = insert_test_project(&conn);
    let doc1 = insert_test_document(&conn, proj_id, 1, "Hello world content for doc 1", "hash_a");

    let signal = ShutdownSignal::new();
    signal.cancel(); // Pre-cancel

    let client = make_client("http://localhost:99999");

    let result = embed_documents_by_ids(&conn, &client, MODEL, 1, &[doc1], &signal)
        .await
        .unwrap();

    assert_eq!(result.docs_embedded, 0, "Should embed 0 when cancelled");
    assert_eq!(result.chunks_embedded, 0, "No chunks when cancelled");
}

@@ -112,6 +112,18 @@ impl GitLabClient {
        self.request("/api/v4/version").await
    }

    pub async fn get_issue_by_iid(&self, project_id: i64, iid: i64) -> Result<GitLabIssue> {
        self.request(&format!("/api/v4/projects/{project_id}/issues/{iid}"))
            .await
    }

    pub async fn get_mr_by_iid(&self, project_id: i64, iid: i64) -> Result<GitLabMergeRequest> {
        self.request(&format!(
            "/api/v4/projects/{project_id}/merge_requests/{iid}"
        ))
        .await
    }

    const MAX_RETRIES: u32 = 3;

    async fn request<T: serde::de::DeserializeOwned>(&self, path: &str) -> Result<T> {
@@ -763,6 +775,10 @@ fn ms_to_iso8601(ms: i64) -> Option<String> {
        .map(|dt| dt.format("%Y-%m-%dT%H:%M:%S%.3fZ").to_string())
}

#[cfg(test)]
#[path = "client_tests.rs"]
mod client_tests;

#[cfg(test)]
mod tests {
    use super::*;

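The two helpers added above only differ in the endpoint path they interpolate. A small sketch of just that path construction, pulled out of the client for illustration (the free functions `issue_path`/`mr_path` are hypothetical names, not part of the diff):

```rust
// Sketch: the GitLab REST v4 paths requested by get_issue_by_iid and
// get_mr_by_iid above, built with the same format! interpolation.
fn issue_path(project_id: i64, iid: i64) -> String {
    format!("/api/v4/projects/{project_id}/issues/{iid}")
}

fn mr_path(project_id: i64, iid: i64) -> String {
    format!("/api/v4/projects/{project_id}/merge_requests/{iid}")
}

fn main() {
    // These match the paths the wiremock tests below expect.
    assert_eq!(issue_path(5, 42), "/api/v4/projects/5/issues/42");
    assert_eq!(mr_path(5, 101), "/api/v4/projects/5/merge_requests/101");
    println!("ok");
}
```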
113
src/gitlab/client_tests.rs
Normal file
@@ -0,0 +1,113 @@
use super::*;
use crate::core::error::LoreError;
use wiremock::matchers::{header, method, path};
use wiremock::{Mock, MockServer, ResponseTemplate};

#[tokio::test]
async fn get_issue_by_iid_success() {
    let server = MockServer::start().await;
    let issue_json = serde_json::json!({
        "id": 1001,
        "iid": 42,
        "project_id": 5,
        "title": "Fix login bug",
        "state": "opened",
        "created_at": "2026-01-15T10:00:00Z",
        "updated_at": "2026-02-01T14:30:00Z",
        "author": { "id": 1, "username": "dev1", "name": "Developer One" },
        "web_url": "https://gitlab.example.com/group/repo/-/issues/42",
        "labels": [],
        "milestone": null,
        "assignees": [],
        "closed_at": null,
        "description": "Login fails on mobile"
    });

    Mock::given(method("GET"))
        .and(path("/api/v4/projects/5/issues/42"))
        .and(header("PRIVATE-TOKEN", "test-token"))
        .respond_with(ResponseTemplate::new(200).set_body_json(&issue_json))
        .mount(&server)
        .await;

    let client = GitLabClient::new(&server.uri(), "test-token", Some(100.0));
    let issue = client.get_issue_by_iid(5, 42).await.unwrap();
    assert_eq!(issue.iid, 42);
    assert_eq!(issue.title, "Fix login bug");
}

#[tokio::test]
async fn get_issue_by_iid_not_found() {
    let server = MockServer::start().await;

    Mock::given(method("GET"))
        .and(path("/api/v4/projects/5/issues/999"))
        .respond_with(
            ResponseTemplate::new(404)
                .set_body_json(serde_json::json!({"message": "404 Not Found"})),
        )
        .mount(&server)
        .await;

    let client = GitLabClient::new(&server.uri(), "test-token", Some(100.0));
    let err = client.get_issue_by_iid(5, 999).await.unwrap_err();
    assert!(matches!(err, LoreError::GitLabNotFound { .. }));
}

#[tokio::test]
async fn get_mr_by_iid_success() {
    let server = MockServer::start().await;
    let mr_json = serde_json::json!({
        "id": 2001,
        "iid": 101,
        "project_id": 5,
        "title": "Add caching layer",
        "state": "merged",
        "created_at": "2026-01-20T09:00:00Z",
        "updated_at": "2026-02-10T16:00:00Z",
        "author": { "id": 2, "username": "dev2", "name": "Developer Two" },
        "web_url": "https://gitlab.example.com/group/repo/-/merge_requests/101",
        "source_branch": "feature/caching",
        "target_branch": "main",
        "draft": false,
        "labels": [],
        "milestone": null,
        "assignees": [],
        "reviewers": [],
        "merged_by": null,
        "merged_at": null,
        "closed_at": null,
        "description": "Adds Redis caching"
    });

    Mock::given(method("GET"))
        .and(path("/api/v4/projects/5/merge_requests/101"))
        .and(header("PRIVATE-TOKEN", "test-token"))
        .respond_with(ResponseTemplate::new(200).set_body_json(&mr_json))
        .mount(&server)
        .await;

    let client = GitLabClient::new(&server.uri(), "test-token", Some(100.0));
    let mr = client.get_mr_by_iid(5, 101).await.unwrap();
    assert_eq!(mr.iid, 101);
    assert_eq!(mr.title, "Add caching layer");
    assert_eq!(mr.source_branch, "feature/caching");
}

#[tokio::test]
async fn get_mr_by_iid_not_found() {
    let server = MockServer::start().await;

    Mock::given(method("GET"))
        .and(path("/api/v4/projects/5/merge_requests/999"))
        .respond_with(
            ResponseTemplate::new(404)
                .set_body_json(serde_json::json!({"message": "404 Not Found"})),
        )
        .mount(&server)
        .await;

    let client = GitLabClient::new(&server.uri(), "test-token", Some(100.0));
    let err = client.get_mr_by_iid(5, 999).await.unwrap_err();
    assert!(matches!(err, LoreError::GitLabNotFound { .. }));
}

@@ -140,7 +140,7 @@ fn passes_cursor_filter_with_ts(gitlab_id: i64, issue_ts: i64, cursor: &SyncCurs
    true
}

-fn process_single_issue(
+pub(crate) fn process_single_issue(
    conn: &Connection,
    config: &Config,
    project_id: i64,

@@ -135,13 +135,13 @@ pub async fn ingest_merge_requests(
    Ok(result)
}

-struct ProcessMrResult {
-    labels_created: usize,
-    assignees_linked: usize,
-    reviewers_linked: usize,
+pub(crate) struct ProcessMrResult {
+    pub(crate) labels_created: usize,
+    pub(crate) assignees_linked: usize,
+    pub(crate) reviewers_linked: usize,
}

-fn process_single_mr(
+pub(crate) fn process_single_mr(
    conn: &Connection,
    config: &Config,
    project_id: i64,

@@ -6,6 +6,7 @@ pub mod merge_requests;
pub mod mr_diffs;
pub mod mr_discussions;
pub mod orchestrator;
+pub(crate) mod surgical;

pub use discussions::{IngestDiscussionsResult, ingest_issue_discussions};
pub use issues::{IngestIssuesResult, IssueForDiscussionSync, ingest_issues};

@@ -1097,7 +1097,7 @@ async fn drain_resource_events(
}

/// Store resource events using the provided connection (caller manages the transaction).
-fn store_resource_events(
+pub(crate) fn store_resource_events(
    conn: &Connection,
    project_id: i64,
    entity_type: &str,
@@ -1406,7 +1406,7 @@ async fn drain_mr_closes_issues(
    Ok(result)
}

-fn store_closes_issues_refs(
+pub(crate) fn store_closes_issues_refs(
    conn: &Connection,
    project_id: i64,
    mr_local_id: i64,

469
src/ingestion/surgical.rs
Normal file
@@ -0,0 +1,469 @@
use futures::stream::StreamExt;
use rusqlite::Connection;
use rusqlite::OptionalExtension;
use tracing::{debug, warn};

use crate::Config;
use crate::core::error::{LoreError, Result};
use crate::documents::SourceType;
use crate::gitlab::GitLabClient;
use crate::gitlab::types::{GitLabIssue, GitLabMergeRequest};
use crate::ingestion::dirty_tracker;
use crate::ingestion::discussions::ingest_issue_discussions;
use crate::ingestion::issues::{IssueForDiscussionSync, process_single_issue};
use crate::ingestion::merge_requests::{MrForDiscussionSync, process_single_mr};
use crate::ingestion::mr_diffs::upsert_mr_file_changes;
use crate::ingestion::mr_discussions::ingest_mr_discussions;
use crate::ingestion::orchestrator::{store_closes_issues_refs, store_resource_events};

// ---------------------------------------------------------------------------
// Result types
// ---------------------------------------------------------------------------

#[derive(Debug)]
pub(crate) struct IngestIssueResult {
    pub skipped_stale: bool,
    pub dirty_source_keys: Vec<(SourceType, i64)>,
}

#[derive(Debug)]
pub(crate) struct IngestMrResult {
    pub skipped_stale: bool,
    pub dirty_source_keys: Vec<(SourceType, i64)>,
}

#[derive(Debug)]
pub(crate) struct PreflightResult {
    pub issues: Vec<GitLabIssue>,
    pub merge_requests: Vec<GitLabMergeRequest>,
    pub failures: Vec<PreflightFailure>,
}

#[derive(Debug)]
pub(crate) struct PreflightFailure {
    pub entity_type: String,
    pub iid: i64,
    pub error: LoreError,
}

// ---------------------------------------------------------------------------
// TOCTOU guard
// ---------------------------------------------------------------------------

/// Returns `true` if the payload is stale (same age or older than what the DB
/// already has). Returns `false` when the entity is new (no DB row) or when
/// the payload is strictly newer.
pub(crate) fn is_stale(payload_updated_at: &str, db_updated_at_ms: Option<i64>) -> Result<bool> {
    let Some(db_ms) = db_updated_at_ms else {
        return Ok(false);
    };

    let payload_ms = chrono::DateTime::parse_from_rfc3339(payload_updated_at)
        .map(|dt| dt.timestamp_millis())
        .map_err(|e| {
            LoreError::Other(format!(
                "Failed to parse timestamp '{}': {}",
                payload_updated_at, e
            ))
        })?;

    Ok(payload_ms <= db_ms)
}

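The staleness rule above is easy to state independently of the RFC 3339 parsing: a payload is stale only when the DB already has a row and the payload is not strictly newer. A minimal sketch of just that comparison, with the chrono parsing replaced by plain millisecond inputs (simplified signature, not the one in the diff):

```rust
// Sketch of the TOCTOU staleness rule used by ingest_issue_by_iid /
// ingest_mr_by_iid: skip when the payload timestamp is <= what the DB
// already holds; never skip a brand-new entity.
fn is_stale(payload_ms: i64, db_ms: Option<i64>) -> bool {
    match db_ms {
        None => false,            // no DB row yet: always ingest
        Some(db) => payload_ms <= db, // same age or older: stale
    }
}

fn main() {
    assert!(!is_stale(2_000, None));        // new entity
    assert!(!is_stale(2_000, Some(1_000))); // strictly newer payload
    assert!(is_stale(1_000, Some(1_000)));  // same age counts as stale
    assert!(is_stale(500, Some(1_000)));    // older payload
    println!("ok");
}
```

Treating "same age" as stale is what makes the guard safe against a fetch that raced with a concurrent full sync: re-ingesting an identical snapshot is wasted work, so only a strictly newer `updated_at` wins.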
// ---------------------------------------------------------------------------
|
||||
// Ingestion wrappers
|
||||
// ---------------------------------------------------------------------------
|
||||
|
||||
/// Ingest a single issue by IID with TOCTOU guard and dirty marking.
|
||||
pub(crate) fn ingest_issue_by_iid(
|
||||
conn: &Connection,
|
||||
config: &Config,
|
||||
project_id: i64,
|
||||
issue: &GitLabIssue,
|
||||
) -> Result<IngestIssueResult> {
|
||||
let db_updated_at = get_db_updated_at(conn, "issues", issue.iid, project_id)?;
|
||||
|
||||
if is_stale(&issue.updated_at, db_updated_at)? {
|
||||
debug!(iid = issue.iid, "Skipping stale issue (TOCTOU guard)");
|
||||
return Ok(IngestIssueResult {
|
||||
skipped_stale: true,
|
||||
dirty_source_keys: vec![],
|
||||
});
|
||||
}
|
||||
|
||||
process_single_issue(conn, config, project_id, issue)?;
|
||||
|
||||
let local_id: i64 = conn.query_row(
|
||||
"SELECT id FROM issues WHERE project_id = ? AND iid = ?",
|
||||
(project_id, issue.iid),
|
||||
|row| row.get(0),
|
||||
)?;
|
||||
|
||||
dirty_tracker::mark_dirty(conn, SourceType::Issue, local_id)?;
|
||||
|
||||
Ok(IngestIssueResult {
|
||||
skipped_stale: false,
|
||||
dirty_source_keys: vec![(SourceType::Issue, local_id)],
|
||||
})
|
||||
}
|
||||
|
||||
/// Ingest a single merge request by IID with TOCTOU guard and dirty marking.
|
||||
pub(crate) fn ingest_mr_by_iid(
|
||||
conn: &Connection,
|
||||
config: &Config,
|
||||
project_id: i64,
|
||||
mr: &GitLabMergeRequest,
|
||||
) -> Result<IngestMrResult> {
|
||||
let db_updated_at = get_db_updated_at(conn, "merge_requests", mr.iid, project_id)?;
|
||||
|
||||
if is_stale(&mr.updated_at, db_updated_at)? {
|
||||
debug!(iid = mr.iid, "Skipping stale MR (TOCTOU guard)");
|
||||
return Ok(IngestMrResult {
|
||||
skipped_stale: true,
|
||||
dirty_source_keys: vec![],
|
||||
});
|
||||
}
|
||||
|
||||
process_single_mr(conn, config, project_id, mr)?;
|
||||
|
||||
let local_id: i64 = conn.query_row(
|
||||
"SELECT id FROM merge_requests WHERE project_id = ? AND iid = ?",
|
||||
(project_id, mr.iid),
|
||||
|row| row.get(0),
|
||||
)?;
|
||||
|
||||
dirty_tracker::mark_dirty(conn, SourceType::MergeRequest, local_id)?;
|
||||
|
||||
Ok(IngestMrResult {
|
||||
skipped_stale: false,
|
||||
dirty_source_keys: vec![(SourceType::MergeRequest, local_id)],
|
||||
})
|
||||
}
|
||||
|
||||
// ---------------------------------------------------------------------------
|
||||
// Preflight fetch
|
||||
// ---------------------------------------------------------------------------

/// Fetch specific issues and MRs by IID from GitLab. Collects successes and
/// failures without aborting on individual 404s.
///
/// Requests are dispatched concurrently (up to 10 in-flight at once) to avoid
/// sequential round-trip latency when syncing many IIDs.
pub(crate) async fn preflight_fetch(
    client: &GitLabClient,
    gitlab_project_id: i64,
    targets: &[(String, i64)],
) -> PreflightResult {
    /// Max concurrent HTTP requests during preflight.
    const PREFLIGHT_CONCURRENCY: usize = 10;

    #[allow(clippy::large_enum_variant)]
    enum FetchOutcome {
        Issue(std::result::Result<GitLabIssue, (String, i64, LoreError)>),
        MergeRequest(std::result::Result<GitLabMergeRequest, (String, i64, LoreError)>),
        UnknownType(String, i64),
    }

    let mut result = PreflightResult {
        issues: Vec::new(),
        merge_requests: Vec::new(),
        failures: Vec::new(),
    };

    let mut stream = futures::stream::iter(targets.iter().map(|(entity_type, iid)| {
        let entity_type = entity_type.clone();
        let iid = *iid;
        async move {
            match entity_type.as_str() {
                "issue" => FetchOutcome::Issue(
                    client
                        .get_issue_by_iid(gitlab_project_id, iid)
                        .await
                        .map_err(|e| (entity_type, iid, e)),
                ),
                "merge_request" => FetchOutcome::MergeRequest(
                    client
                        .get_mr_by_iid(gitlab_project_id, iid)
                        .await
                        .map_err(|e| (entity_type, iid, e)),
                ),
                _ => FetchOutcome::UnknownType(entity_type, iid),
            }
        }
    }))
    .buffer_unordered(PREFLIGHT_CONCURRENCY);

    while let Some(outcome) = stream.next().await {
        match outcome {
            FetchOutcome::Issue(Ok(issue)) => result.issues.push(issue),
            FetchOutcome::Issue(Err((et, iid, e))) => {
                result.failures.push(PreflightFailure {
                    entity_type: et,
                    iid,
                    error: e,
                });
            }
            FetchOutcome::MergeRequest(Ok(mr)) => result.merge_requests.push(mr),
            FetchOutcome::MergeRequest(Err((et, iid, e))) => {
                result.failures.push(PreflightFailure {
                    entity_type: et,
                    iid,
                    error: e,
                });
            }
            FetchOutcome::UnknownType(et, iid) => {
                result.failures.push(PreflightFailure {
                    entity_type: et.clone(),
                    iid,
                    error: LoreError::Other(format!("Unknown entity type: {et}")),
                });
            }
        }
    }

    result
}
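The dispatch-and-collect pattern above (bounded fan-out, failures recorded instead of aborting the batch) can be sketched without the async stack. This is a hypothetical std-only illustration using threads and a channel; `fetch` and the numeric IDs stand in for the GitLab calls and are not part of the crate:

```rust
use std::sync::mpsc;
use std::thread;

/// Cap on in-flight work, analogous to PREFLIGHT_CONCURRENCY.
const CONCURRENCY: usize = 10;

// Hypothetical stand-in for one network fetch: even IIDs "succeed", odd ones "fail".
fn fetch(iid: i64) -> Result<i64, String> {
    if iid % 2 == 0 {
        Ok(iid * 1000)
    } else {
        Err(format!("404 for iid {iid}"))
    }
}

/// Process all targets with at most CONCURRENCY worker threads, collecting
/// successes and failures separately rather than stopping at the first error.
fn fetch_all(targets: Vec<i64>) -> (Vec<i64>, Vec<String>) {
    let (tx, rx) = mpsc::channel();
    // Split targets into at most CONCURRENCY chunks, one worker per chunk.
    let chunk_size = targets.len().div_ceil(CONCURRENCY).max(1);
    let mut handles = Vec::new();
    for chunk in targets.chunks(chunk_size).map(|c| c.to_vec()) {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            for iid in chunk {
                tx.send(fetch(iid)).unwrap();
            }
        }));
    }
    drop(tx); // close the channel once all workers hold their own sender

    let (mut oks, mut errs) = (Vec::new(), Vec::new());
    for outcome in rx {
        match outcome {
            Ok(v) => oks.push(v),
            Err(e) => errs.push(e),
        }
    }
    for h in handles {
        h.join().unwrap();
    }
    (oks, errs)
}

fn main() {
    let (oks, errs) = fetch_all((1..=6).collect());
    println!("{} {}", oks.len(), errs.len()); // prints "3 3"
}
```

The real code achieves the same bound with `buffer_unordered` on a stream of futures, which avoids spawning threads and preserves a single-threaded async runtime.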
// ---------------------------------------------------------------------------
// Dependent fetch helpers (surgical mode)
// ---------------------------------------------------------------------------

/// Counts returned from fetching dependents for a single entity.
#[derive(Debug, Default)]
pub(crate) struct DependentFetchResult {
    pub resource_events_fetched: usize,
    pub discussions_fetched: usize,
    pub closes_issues_stored: usize,
    pub file_changes_stored: usize,
}

/// Fetch and store all dependents for a single issue:
/// resource events (state, label, milestone) and discussions.
pub(crate) async fn fetch_dependents_for_issue(
    client: &GitLabClient,
    conn: &Connection,
    project_id: i64,
    gitlab_project_id: i64,
    iid: i64,
    local_id: i64,
    config: &Config,
) -> Result<DependentFetchResult> {
    let mut result = DependentFetchResult::default();

    // --- Resource events ---
    match client
        .fetch_all_resource_events(gitlab_project_id, "issue", iid)
        .await
    {
        Ok((state_events, label_events, milestone_events)) => {
            let count = state_events.len() + label_events.len() + milestone_events.len();
            let tx = conn.unchecked_transaction()?;
            store_resource_events(
                &tx,
                project_id,
                "issue",
                local_id,
                &state_events,
                &label_events,
                &milestone_events,
            )?;
            tx.execute(
                "UPDATE issues SET resource_events_synced_for_updated_at = updated_at WHERE id = ?",
                [local_id],
            )?;
            tx.commit()?;
            result.resource_events_fetched = count;
        }
        Err(e) => {
            warn!(
                iid,
                error = %e,
                "Failed to fetch resource events for issue, continuing"
            );
        }
    }

    // --- Discussions ---
    let sync_item = IssueForDiscussionSync {
        local_issue_id: local_id,
        iid,
        updated_at: 0, // not used for filtering in surgical mode
    };
    match ingest_issue_discussions(
        conn,
        client,
        config,
        gitlab_project_id,
        project_id,
        &[sync_item],
    )
    .await
    {
        Ok(disc_result) => {
            result.discussions_fetched = disc_result.discussions_fetched;
        }
        Err(e) => {
            warn!(
                iid,
                error = %e,
                "Failed to ingest discussions for issue, continuing"
            );
        }
    }

    Ok(result)
}
/// Fetch and store all dependents for a single merge request:
/// resource events, discussions, closes-issues references, and file changes (diffs).
pub(crate) async fn fetch_dependents_for_mr(
    client: &GitLabClient,
    conn: &Connection,
    project_id: i64,
    gitlab_project_id: i64,
    iid: i64,
    local_id: i64,
    config: &Config,
) -> Result<DependentFetchResult> {
    let mut result = DependentFetchResult::default();

    // --- Resource events ---
    match client
        .fetch_all_resource_events(gitlab_project_id, "merge_request", iid)
        .await
    {
        Ok((state_events, label_events, milestone_events)) => {
            let count = state_events.len() + label_events.len() + milestone_events.len();
            let tx = conn.unchecked_transaction()?;
            store_resource_events(
                &tx,
                project_id,
                "merge_request",
                local_id,
                &state_events,
                &label_events,
                &milestone_events,
            )?;
            tx.execute(
                "UPDATE merge_requests SET resource_events_synced_for_updated_at = updated_at WHERE id = ?",
                [local_id],
            )?;
            tx.commit()?;
            result.resource_events_fetched = count;
        }
        Err(e) => {
            warn!(
                iid,
                error = %e,
                "Failed to fetch resource events for MR, continuing"
            );
        }
    }

    // --- Discussions ---
    let sync_item = MrForDiscussionSync {
        local_mr_id: local_id,
        iid,
        updated_at: 0,
    };
    match ingest_mr_discussions(
        conn,
        client,
        config,
        gitlab_project_id,
        project_id,
        &[sync_item],
    )
    .await
    {
        Ok(disc_result) => {
            result.discussions_fetched = disc_result.discussions_fetched;
        }
        Err(e) => {
            warn!(
                iid,
                error = %e,
                "Failed to ingest discussions for MR, continuing"
            );
        }
    }

    // --- Closes issues ---
    match client.fetch_mr_closes_issues(gitlab_project_id, iid).await {
        Ok(closes_issues) => {
            let count = closes_issues.len();
            let tx = conn.unchecked_transaction()?;
            store_closes_issues_refs(&tx, project_id, local_id, &closes_issues)?;
            tx.execute(
                "UPDATE merge_requests SET closes_issues_synced_for_updated_at = updated_at WHERE id = ?",
                [local_id],
            )?;
            tx.commit()?;
            result.closes_issues_stored = count;
        }
        Err(e) => {
            warn!(
                iid,
                error = %e,
                "Failed to fetch closes_issues for MR, continuing"
            );
        }
    }

    // --- File changes (diffs) ---
    match client.fetch_mr_diffs(gitlab_project_id, iid).await {
        Ok(diffs) => {
            let tx = conn.unchecked_transaction()?;
            let stored = upsert_mr_file_changes(&tx, local_id, project_id, &diffs)?;
            tx.execute(
                "UPDATE merge_requests SET diffs_synced_for_updated_at = updated_at WHERE id = ?",
                [local_id],
            )?;
            tx.commit()?;
            result.file_changes_stored = stored;
        }
        Err(e) => {
            warn!(
                iid,
                error = %e,
                "Failed to fetch diffs for MR, continuing"
            );
        }
    }

    Ok(result)
}
// ---------------------------------------------------------------------------
// Helpers
// ---------------------------------------------------------------------------

fn get_db_updated_at(
    conn: &Connection,
    table: &str,
    iid: i64,
    project_id: i64,
) -> Result<Option<i64>> {
    // Using a match on known table names avoids SQL injection from the table parameter.
    let sql = match table {
        "issues" => "SELECT updated_at FROM issues WHERE project_id = ?1 AND iid = ?2",
        "merge_requests" => {
            "SELECT updated_at FROM merge_requests WHERE project_id = ?1 AND iid = ?2"
        }
        _ => {
            return Err(LoreError::Other(format!(
                "Unknown table for updated_at lookup: {table}"
            )));
        }
    };

    let result: Option<i64> = conn
        .query_row(sql, (project_id, iid), |row| row.get(0))
        .optional()?;

    Ok(result)
}

#[cfg(test)]
#[path = "surgical_tests.rs"]
mod tests;
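The whitelist-by-match idea in `get_db_updated_at` stands on its own as a pattern: the table name is never interpolated into SQL, only mapped onto one of a fixed set of query strings. A minimal std-only sketch (the function name and error type here are illustrative, not the crate's):

```rust
/// Map a caller-supplied table name onto a fixed, pre-written query string.
/// Anything outside the whitelist is rejected, so the `table` parameter can
/// never smuggle SQL into the statement.
fn updated_at_query(table: &str) -> Result<&'static str, String> {
    match table {
        "issues" => Ok("SELECT updated_at FROM issues WHERE project_id = ?1 AND iid = ?2"),
        "merge_requests" => {
            Ok("SELECT updated_at FROM merge_requests WHERE project_id = ?1 AND iid = ?2")
        }
        other => Err(format!("Unknown table for updated_at lookup: {other}")),
    }
}

fn main() {
    // Known names resolve; anything else, including injection attempts, is an error.
    assert!(updated_at_query("issues").is_ok());
    assert!(updated_at_query("merge_requests").is_ok());
    assert!(updated_at_query("issues; DROP TABLE issues;--").is_err());
    println!("ok"); // prints "ok"
}
```

Parameter *values* still go through `?1`/`?2` placeholders; the match only closes the hole that placeholders cannot cover, the identifier position.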
640 src/ingestion/surgical_tests.rs (new file)
@@ -0,0 +1,640 @@
use std::path::Path;

use super::*;
use crate::core::config::{
    Config, EmbeddingConfig, GitLabConfig, LoggingConfig, ProjectConfig, ScoringConfig,
    StorageConfig, SyncConfig,
};
use crate::core::db::{create_connection, run_migrations};
use crate::gitlab::types::{GitLabAuthor, GitLabMergeRequest};

// ---------------------------------------------------------------------------
// Test helpers
// ---------------------------------------------------------------------------

fn setup_db() -> rusqlite::Connection {
    let conn = create_connection(Path::new(":memory:")).expect("in-memory DB");
    run_migrations(&conn).expect("migrations");
    conn.execute(
        "INSERT INTO projects (gitlab_project_id, path_with_namespace, web_url)
         VALUES (100, 'group/repo', 'https://example.com/group/repo')",
        [],
    )
    .expect("insert project");
    conn
}

fn test_config() -> Config {
    Config {
        gitlab: GitLabConfig {
            base_url: "https://gitlab.example.com".to_string(),
            token_env_var: "GITLAB_TOKEN".to_string(),
            token: None,
            username: None,
        },
        projects: vec![ProjectConfig {
            path: "group/repo".to_string(),
        }],
        default_project: None,
        sync: SyncConfig::default(),
        storage: StorageConfig::default(),
        embedding: EmbeddingConfig::default(),
        logging: LoggingConfig::default(),
        scoring: ScoringConfig::default(),
    }
}

fn make_test_issue(iid: i64, updated_at: &str) -> GitLabIssue {
    GitLabIssue {
        id: iid * 1000, // unique gitlab_id
        iid,
        project_id: 100,
        title: format!("Test issue {iid}"),
        description: Some("Description".to_string()),
        state: "opened".to_string(),
        created_at: "2026-01-01T00:00:00.000+00:00".to_string(),
        updated_at: updated_at.to_string(),
        closed_at: None,
        author: GitLabAuthor {
            id: 1,
            username: "testuser".to_string(),
            name: "Test User".to_string(),
        },
        assignees: vec![],
        labels: vec![],
        milestone: None,
        due_date: None,
        web_url: format!("https://example.com/group/repo/-/issues/{iid}"),
    }
}

fn make_test_mr(iid: i64, updated_at: &str) -> GitLabMergeRequest {
    GitLabMergeRequest {
        id: iid * 1000,
        iid,
        project_id: 100,
        title: format!("Test MR {iid}"),
        description: Some("MR description".to_string()),
        state: "opened".to_string(),
        draft: false,
        work_in_progress: false,
        source_branch: "feature".to_string(),
        target_branch: "main".to_string(),
        sha: Some("abc123".to_string()),
        references: None,
        detailed_merge_status: None,
        merge_status_legacy: None,
        created_at: "2026-01-01T00:00:00.000+00:00".to_string(),
        updated_at: updated_at.to_string(),
        merged_at: None,
        closed_at: None,
        author: GitLabAuthor {
            id: 1,
            username: "testuser".to_string(),
            name: "Test User".to_string(),
        },
        merge_user: None,
        merged_by: None,
        labels: vec![],
        assignees: vec![],
        reviewers: vec![],
        web_url: format!("https://example.com/group/repo/-/merge_requests/{iid}"),
        merge_commit_sha: None,
        squash_commit_sha: None,
    }
}

fn get_db_updated_at_helper(conn: &rusqlite::Connection, table: &str, iid: i64) -> Option<i64> {
    let sql = match table {
        "issues" => "SELECT updated_at FROM issues WHERE project_id = 1 AND iid = ?1",
        "merge_requests" => {
            "SELECT updated_at FROM merge_requests WHERE project_id = 1 AND iid = ?1"
        }
        _ => return None,
    };
    conn.query_row(sql, [iid], |row| row.get(0)).ok()
}

fn get_dirty_keys(conn: &rusqlite::Connection) -> Vec<(String, i64)> {
    let mut stmt = conn
        .prepare("SELECT source_type, source_id FROM dirty_sources ORDER BY source_type, source_id")
        .expect("prepare dirty_sources query");
    stmt.query_map([], |row| {
        let st: String = row.get(0)?;
        let id: i64 = row.get(1)?;
        Ok((st, id))
    })
    .expect("query dirty_sources")
    .collect::<std::result::Result<Vec<_>, _>>()
    .expect("collect dirty_sources")
}
// ---------------------------------------------------------------------------
// is_stale unit tests
// ---------------------------------------------------------------------------

#[test]
fn test_is_stale_parses_iso8601() {
    // 2026-02-17T12:00:00.000+00:00 -> 1771329600000 ms
    let result = is_stale("2026-02-17T12:00:00.000+00:00", Some(1_771_329_600_000));
    assert!(result.is_ok());
    // Same timestamp => stale
    assert!(result.unwrap());
}

#[test]
fn test_is_stale_handles_none_db_value() {
    let result = is_stale("2026-02-17T12:00:00.000+00:00", None);
    assert!(result.is_ok());
    assert!(!result.unwrap(), "no DB row means not stale");
}

#[test]
fn test_is_stale_with_z_suffix() {
    let result = is_stale("2026-02-17T12:00:00Z", Some(1_771_329_600_000));
    assert!(result.is_ok());
    assert!(result.unwrap(), "Z suffix should parse same as +00:00");
}

// ---------------------------------------------------------------------------
// Issue ingestion tests
// ---------------------------------------------------------------------------

#[test]
fn test_ingest_issue_by_iid_upserts_and_marks_dirty() {
    let conn = setup_db();
    let config = test_config();
    let issue = make_test_issue(42, "2026-02-17T12:00:00.000+00:00");

    let result = ingest_issue_by_iid(&conn, &config, 1, &issue).unwrap();

    assert!(!result.skipped_stale);
    assert!(!result.dirty_source_keys.is_empty());

    // Verify DB row exists
    let db_ts = get_db_updated_at_helper(&conn, "issues", 42);
    assert!(db_ts.is_some(), "issue should exist in DB");

    // Verify dirty marking
    let dirty = get_dirty_keys(&conn);
    assert!(
        dirty.iter().any(|(t, _)| t == "issue"),
        "dirty_sources should contain an issue entry"
    );
}

#[test]
fn test_toctou_skips_stale_issue() {
    let conn = setup_db();
    let config = test_config();
    let issue = make_test_issue(42, "2026-02-17T12:00:00.000+00:00");

    // First ingest succeeds
    let r1 = ingest_issue_by_iid(&conn, &config, 1, &issue).unwrap();
    assert!(!r1.skipped_stale);

    // Clear dirty to check second ingest doesn't re-mark
    conn.execute("DELETE FROM dirty_sources", []).unwrap();

    // Second ingest with same timestamp should be skipped
    let r2 = ingest_issue_by_iid(&conn, &config, 1, &issue).unwrap();
    assert!(r2.skipped_stale);
    assert!(r2.dirty_source_keys.is_empty());

    // No new dirty mark
    let dirty = get_dirty_keys(&conn);
    assert!(dirty.is_empty(), "stale skip should not create dirty marks");
}

#[test]
fn test_toctou_allows_newer_issue() {
    let conn = setup_db();
    let config = test_config();

    // Ingest at T1
    let issue_t1 = make_test_issue(42, "2026-02-17T12:00:00.000+00:00");
    ingest_issue_by_iid(&conn, &config, 1, &issue_t1).unwrap();

    conn.execute("DELETE FROM dirty_sources", []).unwrap();

    // Ingest at T2 (newer): should succeed
    let issue_t2 = make_test_issue(42, "2026-02-17T13:00:00.000+00:00");
    let result = ingest_issue_by_iid(&conn, &config, 1, &issue_t2).unwrap();

    assert!(!result.skipped_stale);
}

#[test]
fn test_ingest_issue_returns_dirty_source_keys() {
    let conn = setup_db();
    let config = test_config();
    let issue = make_test_issue(42, "2026-02-17T12:00:00.000+00:00");

    let result = ingest_issue_by_iid(&conn, &config, 1, &issue).unwrap();

    assert_eq!(result.dirty_source_keys.len(), 1);
    let (source_type, local_id) = &result.dirty_source_keys[0];
    assert_eq!(source_type.as_str(), "issue");
    assert!(*local_id > 0, "local_id should be positive");
}

#[test]
fn test_ingest_issue_updates_existing() {
    let conn = setup_db();
    let config = test_config();

    let issue_v1 = make_test_issue(42, "2026-02-17T12:00:00.000+00:00");
    ingest_issue_by_iid(&conn, &config, 1, &issue_v1).unwrap();

    let ts1 = get_db_updated_at_helper(&conn, "issues", 42).unwrap();

    // Newer version
    let issue_v2 = make_test_issue(42, "2026-02-17T14:00:00.000+00:00");
    let result = ingest_issue_by_iid(&conn, &config, 1, &issue_v2).unwrap();

    assert!(!result.skipped_stale);
    let ts2 = get_db_updated_at_helper(&conn, "issues", 42).unwrap();
    assert!(ts2 > ts1, "DB timestamp should increase after update");
}
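The TOCTOU guard these tests exercise reduces to one comparison rule: skip re-ingest when the stored row is at least as new as the API payload, and never skip when no row exists yet. A minimal sketch, assuming both timestamps are already in epoch milliseconds (the real `is_stale` parses ISO-8601 strings first, which is why it returns a `Result`; `is_stale_ms` here is a hypothetical name):

```rust
/// Staleness rule behind the TOCTOU tests, on pre-parsed epoch milliseconds.
/// A payload is stale when the DB row is at least as new as the API copy;
/// a missing row can never be stale.
fn is_stale_ms(api_updated_ms: i64, db_updated_ms: Option<i64>) -> bool {
    match db_updated_ms {
        None => false,                      // nothing stored yet: always ingest
        Some(db) => api_updated_ms <= db,   // equal or older than DB: skip
    }
}

fn main() {
    assert!(is_stale_ms(1_000, Some(1_000)));  // same timestamp: skip re-ingest
    assert!(is_stale_ms(1_000, Some(2_000)));  // DB newer: skip
    assert!(!is_stale_ms(2_000, Some(1_000))); // API newer: ingest
    assert!(!is_stale_ms(1_000, None));        // no row yet: ingest
    println!("ok"); // prints "ok"
}
```

Using `<=` rather than `<` is what makes the same-timestamp case a skip, matching `test_toctou_skips_stale_issue` above.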
// ---------------------------------------------------------------------------
// MR ingestion tests
// ---------------------------------------------------------------------------

#[test]
fn test_ingest_mr_by_iid_upserts_and_marks_dirty() {
    let conn = setup_db();
    let config = test_config();
    let mr = make_test_mr(101, "2026-02-17T12:00:00.000+00:00");

    let result = ingest_mr_by_iid(&conn, &config, 1, &mr).unwrap();

    assert!(!result.skipped_stale);
    assert!(!result.dirty_source_keys.is_empty());

    let db_ts = get_db_updated_at_helper(&conn, "merge_requests", 101);
    assert!(db_ts.is_some(), "MR should exist in DB");

    let dirty = get_dirty_keys(&conn);
    assert!(
        dirty.iter().any(|(t, _)| t == "merge_request"),
        "dirty_sources should contain a merge_request entry"
    );
}

#[test]
fn test_toctou_skips_stale_mr() {
    let conn = setup_db();
    let config = test_config();
    let mr = make_test_mr(101, "2026-02-17T12:00:00.000+00:00");

    let r1 = ingest_mr_by_iid(&conn, &config, 1, &mr).unwrap();
    assert!(!r1.skipped_stale);

    conn.execute("DELETE FROM dirty_sources", []).unwrap();

    let r2 = ingest_mr_by_iid(&conn, &config, 1, &mr).unwrap();
    assert!(r2.skipped_stale);
    assert!(r2.dirty_source_keys.is_empty());
}

#[test]
fn test_toctou_allows_newer_mr() {
    let conn = setup_db();
    let config = test_config();

    let mr_t1 = make_test_mr(101, "2026-02-17T12:00:00.000+00:00");
    ingest_mr_by_iid(&conn, &config, 1, &mr_t1).unwrap();

    conn.execute("DELETE FROM dirty_sources", []).unwrap();

    let mr_t2 = make_test_mr(101, "2026-02-17T13:00:00.000+00:00");
    let result = ingest_mr_by_iid(&conn, &config, 1, &mr_t2).unwrap();

    assert!(!result.skipped_stale);
}

#[test]
fn test_ingest_mr_returns_dirty_source_keys() {
    let conn = setup_db();
    let config = test_config();
    let mr = make_test_mr(101, "2026-02-17T12:00:00.000+00:00");

    let result = ingest_mr_by_iid(&conn, &config, 1, &mr).unwrap();

    assert_eq!(result.dirty_source_keys.len(), 1);
    let (source_type, local_id) = &result.dirty_source_keys[0];
    assert_eq!(source_type.as_str(), "merge_request");
    assert!(*local_id > 0);
}

#[test]
fn test_ingest_mr_updates_existing() {
    let conn = setup_db();
    let config = test_config();

    let mr_v1 = make_test_mr(101, "2026-02-17T12:00:00.000+00:00");
    ingest_mr_by_iid(&conn, &config, 1, &mr_v1).unwrap();

    let ts1 = get_db_updated_at_helper(&conn, "merge_requests", 101).unwrap();

    let mr_v2 = make_test_mr(101, "2026-02-17T14:00:00.000+00:00");
    let result = ingest_mr_by_iid(&conn, &config, 1, &mr_v2).unwrap();

    assert!(!result.skipped_stale);
    let ts2 = get_db_updated_at_helper(&conn, "merge_requests", 101).unwrap();
    assert!(ts2 > ts1, "DB timestamp should increase after update");
}
// ---------------------------------------------------------------------------
// Preflight fetch test (wiremock)
// ---------------------------------------------------------------------------

#[tokio::test]
async fn test_preflight_fetch_returns_issues_and_mrs() {
    use wiremock::matchers::{method, path};
    use wiremock::{Mock, MockServer, ResponseTemplate};

    let mock_server = MockServer::start().await;

    // Issue fixture
    let issue_json = serde_json::json!({
        "id": 42000,
        "iid": 42,
        "project_id": 100,
        "title": "Test issue 42",
        "description": "desc",
        "state": "opened",
        "created_at": "2026-01-01T00:00:00.000+00:00",
        "updated_at": "2026-02-17T12:00:00.000+00:00",
        "author": {"id": 1, "username": "testuser", "name": "Test User"},
        "assignees": [],
        "labels": [],
        "web_url": "https://example.com/group/repo/-/issues/42"
    });

    // MR fixture
    let mr_json = serde_json::json!({
        "id": 101000,
        "iid": 101,
        "project_id": 100,
        "title": "Test MR 101",
        "description": "mr desc",
        "state": "opened",
        "draft": false,
        "work_in_progress": false,
        "source_branch": "feature",
        "target_branch": "main",
        "sha": "abc123",
        "created_at": "2026-01-01T00:00:00.000+00:00",
        "updated_at": "2026-02-17T12:00:00.000+00:00",
        "author": {"id": 1, "username": "testuser", "name": "Test User"},
        "labels": [],
        "assignees": [],
        "reviewers": [],
        "web_url": "https://example.com/group/repo/-/merge_requests/101"
    });

    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/issues/42"))
        .respond_with(ResponseTemplate::new(200).set_body_json(&issue_json))
        .mount(&mock_server)
        .await;

    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/merge_requests/101"))
        .respond_with(ResponseTemplate::new(200).set_body_json(&mr_json))
        .mount(&mock_server)
        .await;

    let client = GitLabClient::new(&mock_server.uri(), "test-token", None);
    let targets = vec![
        ("issue".to_string(), 42i64),
        ("merge_request".to_string(), 101i64),
    ];

    let result = preflight_fetch(&client, 100, &targets).await;

    assert_eq!(result.issues.len(), 1);
    assert_eq!(result.issues[0].iid, 42);
    assert_eq!(result.merge_requests.len(), 1);
    assert_eq!(result.merge_requests[0].iid, 101);
    assert!(result.failures.is_empty());
}
// ---------------------------------------------------------------------------
// Dependent helper tests (bd-kanh)
// ---------------------------------------------------------------------------

#[tokio::test]
async fn test_fetch_dependents_for_issue_empty_events() {
    use wiremock::matchers::{method, path};
    use wiremock::{Mock, MockServer, ResponseTemplate};

    let mock_server = MockServer::start().await;
    let conn = setup_db();
    let config = test_config();

    // Insert an issue so we have a local_id
    let issue = make_test_issue(42, "2026-02-17T12:00:00.000+00:00");
    ingest_issue_by_iid(&conn, &config, 1, &issue).unwrap();
    let local_id: i64 = conn
        .query_row(
            "SELECT id FROM issues WHERE project_id = 1 AND iid = 42",
            [],
            |row| row.get(0),
        )
        .unwrap();

    // Mock empty resource event endpoints
    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/issues/42/resource_state_events"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;
    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/issues/42/resource_label_events"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;
    Mock::given(method("GET"))
        .and(path(
            "/api/v4/projects/100/issues/42/resource_milestone_events",
        ))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;

    // Mock empty discussions endpoint
    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/issues/42/discussions"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;

    let client = GitLabClient::new(&mock_server.uri(), "test-token", None);

    let result = fetch_dependents_for_issue(&client, &conn, 1, 100, 42, local_id, &config)
        .await
        .unwrap();

    assert_eq!(result.resource_events_fetched, 0);
    assert_eq!(result.discussions_fetched, 0);
}

#[tokio::test]
async fn test_fetch_dependents_for_mr_empty_events() {
    use wiremock::matchers::{method, path};
    use wiremock::{Mock, MockServer, ResponseTemplate};

    let mock_server = MockServer::start().await;
    let conn = setup_db();
    let config = test_config();

    // Insert an MR so we have a local_id
    let mr = make_test_mr(101, "2026-02-17T12:00:00.000+00:00");
    ingest_mr_by_iid(&conn, &config, 1, &mr).unwrap();
    let local_id: i64 = conn
        .query_row(
            "SELECT id FROM merge_requests WHERE project_id = 1 AND iid = 101",
            [],
            |row| row.get(0),
        )
        .unwrap();

    // Mock empty resource event endpoints for MR
    Mock::given(method("GET"))
        .and(path(
            "/api/v4/projects/100/merge_requests/101/resource_state_events",
        ))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;
    Mock::given(method("GET"))
        .and(path(
            "/api/v4/projects/100/merge_requests/101/resource_label_events",
        ))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;
    Mock::given(method("GET"))
        .and(path(
            "/api/v4/projects/100/merge_requests/101/resource_milestone_events",
        ))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;

    // Mock empty discussions endpoint for MR
    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/merge_requests/101/discussions"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;

    // Mock empty closes_issues endpoint
    Mock::given(method("GET"))
        .and(path(
            "/api/v4/projects/100/merge_requests/101/closes_issues",
        ))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;

    // Mock empty diffs endpoint
    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/merge_requests/101/diffs"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;

    let client = GitLabClient::new(&mock_server.uri(), "test-token", None);

    let result = fetch_dependents_for_mr(&client, &conn, 1, 100, 101, local_id, &config)
        .await
        .unwrap();

    assert_eq!(result.resource_events_fetched, 0);
    assert_eq!(result.discussions_fetched, 0);
    assert_eq!(result.closes_issues_stored, 0);
    assert_eq!(result.file_changes_stored, 0);
}

#[tokio::test]
async fn test_fetch_dependents_for_mr_with_closes_issues() {
    use wiremock::matchers::{method, path};
    use wiremock::{Mock, MockServer, ResponseTemplate};

    let mock_server = MockServer::start().await;
    let conn = setup_db();
    let config = test_config();

    // Insert issue and MR so references can resolve
    let issue = make_test_issue(42, "2026-02-17T12:00:00.000+00:00");
    ingest_issue_by_iid(&conn, &config, 1, &issue).unwrap();

    let mr = make_test_mr(101, "2026-02-17T12:00:00.000+00:00");
    ingest_mr_by_iid(&conn, &config, 1, &mr).unwrap();
    let mr_local_id: i64 = conn
        .query_row(
            "SELECT id FROM merge_requests WHERE project_id = 1 AND iid = 101",
            [],
            |row| row.get(0),
        )
        .unwrap();

    // Mock empty resource events
    for endpoint in [
        "resource_state_events",
        "resource_label_events",
        "resource_milestone_events",
    ] {
        Mock::given(method("GET"))
            .and(path(format!(
                "/api/v4/projects/100/merge_requests/101/{endpoint}"
            )))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
            .mount(&mock_server)
            .await;
    }

    // Mock empty discussions
    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/merge_requests/101/discussions"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;

    // Mock closes_issues with one reference
    Mock::given(method("GET"))
        .and(path(
            "/api/v4/projects/100/merge_requests/101/closes_issues",
        ))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([
            {
                "id": 42000,
                "iid": 42,
                "project_id": 100,
                "title": "Test issue 42",
                "state": "opened",
                "web_url": "https://example.com/group/repo/-/issues/42"
            }
        ])))
        .mount(&mock_server)
        .await;

    // Mock empty diffs
    Mock::given(method("GET"))
        .and(path("/api/v4/projects/100/merge_requests/101/diffs"))
        .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!([])))
        .mount(&mock_server)
        .await;

    let client = GitLabClient::new(&mock_server.uri(), "test-token", None);

    let result = fetch_dependents_for_mr(&client, &conn, 1, 100, 101, mr_local_id, &config)
        .await
        .unwrap();

    assert_eq!(result.closes_issues_stored, 1);
}
431 src/main.rs
@@ -11,26 +11,29 @@ use lore::cli::autocorrect::{self, CorrectionResult};
 use lore::cli::commands::{
     IngestDisplay, InitInputs, InitOptions, InitResult, ListFilters, MrListFilters,
     NoteListFilters, SearchCliFilters, SyncOptions, TimelineParams, open_issue_in_browser,
-    open_mr_in_browser, parse_trace_path, print_count, print_count_json, print_doctor_results,
-    print_drift_human, print_drift_json, print_dry_run_preview, print_dry_run_preview_json,
-    print_embed, print_embed_json, print_event_count, print_event_count_json, print_file_history,
-    print_file_history_json, print_generate_docs, print_generate_docs_json, print_ingest_summary,
-    print_ingest_summary_json, print_list_issues, print_list_issues_json, print_list_mrs,
-    print_list_mrs_json, print_list_notes, print_list_notes_csv, print_list_notes_json,
-    print_list_notes_jsonl, print_search_results, print_search_results_json, print_show_issue,
-    print_show_issue_json, print_show_mr, print_show_mr_json, print_stats, print_stats_json,
-    print_sync, print_sync_json, print_sync_status, print_sync_status_json, print_timeline,
-    print_timeline_json_with_meta, print_trace, print_trace_json, print_who_human, print_who_json,
-    query_notes, run_auth_test, run_count, run_count_events, run_doctor, run_drift, run_embed,
-    run_file_history, run_generate_docs, run_ingest, run_ingest_dry_run, run_init, run_list_issues,
-    run_list_mrs, run_search, run_show_issue, run_show_mr, run_stats, run_sync, run_sync_status,
-    run_timeline, run_who,
+    open_mr_in_browser, parse_trace_path, print_count, print_count_json, print_cron_install,
+    print_cron_install_json, print_cron_status, print_cron_status_json, print_cron_uninstall,
+    print_cron_uninstall_json, print_doctor_results, print_drift_human, print_drift_json,
+    print_dry_run_preview, print_dry_run_preview_json, print_embed, print_embed_json,
+    print_event_count, print_event_count_json, print_file_history, print_file_history_json,
+    print_generate_docs, print_generate_docs_json, print_ingest_summary, print_ingest_summary_json,
+    print_list_issues, print_list_issues_json, print_list_mrs, print_list_mrs_json,
+    print_list_notes, print_list_notes_json, print_search_results, print_search_results_json,
+    print_show_issue, print_show_issue_json, print_show_mr, print_show_mr_json, print_stats,
+    print_stats_json, print_sync, print_sync_json, print_sync_status, print_sync_status_json,
+    print_timeline, print_timeline_json_with_meta, print_trace, print_trace_json, print_who_human,
+    print_who_json, query_notes, run_auth_test, run_count, run_count_events, run_cron_install,
+    run_cron_status, run_cron_uninstall, run_doctor, run_drift, run_embed, run_file_history,
+    run_generate_docs, run_ingest, run_ingest_dry_run, run_init, run_list_issues, run_list_mrs,
+    run_me, run_search, run_show_issue, run_show_mr, run_stats, run_sync, run_sync_status,
+    run_timeline, run_token_set, run_token_show, run_who,
 };
 use lore::cli::render::{ColorMode, GlyphMode, Icons, LoreRenderer, Theme};
|
||||
use lore::cli::robot::{RobotMeta, strip_schemas};
|
||||
use lore::cli::{
|
||||
Cli, Commands, CountArgs, EmbedArgs, FileHistoryArgs, GenerateDocsArgs, IngestArgs, IssuesArgs,
|
||||
MrsArgs, NotesArgs, SearchArgs, StatsArgs, SyncArgs, TimelineArgs, TraceArgs, WhoArgs,
|
||||
Cli, Commands, CountArgs, CronAction, CronArgs, EmbedArgs, FileHistoryArgs, GenerateDocsArgs,
|
||||
IngestArgs, IssuesArgs, MeArgs, MrsArgs, NotesArgs, SearchArgs, StatsArgs, SyncArgs,
|
||||
TimelineArgs, TokenAction, TokenArgs, TraceArgs, WhoArgs,
|
||||
};
|
||||
use lore::core::db::{
|
||||
LATEST_SCHEMA_VERSION, create_connection, get_schema_version, run_migrations,
|
||||
@@ -39,6 +42,7 @@ use lore::core::dependent_queue::release_all_locked_jobs;
|
||||
use lore::core::error::{LoreError, RobotErrorOutput};
|
||||
use lore::core::logging;
|
||||
use lore::core::metrics::MetricsLayer;
|
||||
use lore::core::path_resolver::{build_path_query, normalize_repo_path};
|
||||
use lore::core::paths::{get_config_path, get_db_path, get_log_dir};
|
||||
use lore::core::project::resolve_project;
|
||||
use lore::core::shutdown::ShutdownSignal;
|
||||
@@ -198,10 +202,13 @@ async fn main() {
|
||||
handle_timeline(cli.config.as_deref(), args, robot_mode).await
|
||||
}
|
||||
Some(Commands::Who(args)) => handle_who(cli.config.as_deref(), args, robot_mode),
|
||||
Some(Commands::Me(args)) => handle_me(cli.config.as_deref(), args, robot_mode),
|
||||
Some(Commands::FileHistory(args)) => {
|
||||
handle_file_history(cli.config.as_deref(), args, robot_mode)
|
||||
}
|
||||
Some(Commands::Trace(args)) => handle_trace(cli.config.as_deref(), args, robot_mode),
|
||||
Some(Commands::Cron(args)) => handle_cron(cli.config.as_deref(), args, robot_mode),
|
||||
Some(Commands::Token(args)) => handle_token(cli.config.as_deref(), args, robot_mode).await,
|
||||
Some(Commands::Drift {
|
||||
entity_type,
|
||||
iid,
|
||||
@@ -921,21 +928,14 @@ fn handle_notes(
|
||||
|
||||
let result = query_notes(&conn, &filters, &config)?;
|
||||
|
||||
let format = if robot_mode && args.format == "table" {
|
||||
"json"
|
||||
} else {
|
||||
&args.format
|
||||
};
|
||||
|
||||
match format {
|
||||
"json" => print_list_notes_json(
|
||||
if robot_mode {
|
||||
print_list_notes_json(
|
||||
&result,
|
||||
start.elapsed().as_millis() as u64,
|
||||
args.fields.as_deref(),
|
||||
),
|
||||
"jsonl" => print_list_notes_jsonl(&result),
|
||||
"csv" => print_list_notes_csv(&result),
|
||||
_ => print_list_notes(&result),
|
||||
);
|
||||
} else {
|
||||
print_list_notes(&result);
|
||||
}
|
||||
|
||||
Ok(())
|
||||
@@ -1641,6 +1641,7 @@ struct VersionOutput {
|
||||
|
||||
#[derive(Serialize)]
|
||||
struct VersionData {
|
||||
name: &'static str,
|
||||
version: String,
|
||||
#[serde(skip_serializing_if = "Option::is_none")]
|
||||
git_hash: Option<String>,
|
||||
@@ -1654,6 +1655,7 @@ fn handle_version(robot_mode: bool) -> Result<(), Box<dyn std::error::Error>> {
|
||||
let output = VersionOutput {
|
||||
ok: true,
|
||||
data: VersionData {
|
||||
name: "lore",
|
||||
version,
|
||||
git_hash: if git_hash.is_empty() {
|
||||
None
|
||||
@@ -1874,9 +1876,27 @@ fn handle_file_history(
|
||||
.effective_project(args.project.as_deref())
|
||||
.map(String::from);
|
||||
|
||||
let normalized = normalize_repo_path(&args.path);
|
||||
|
||||
// Resolve bare filenames before querying (same path resolution as trace/who)
|
||||
let db_path_tmp = get_db_path(config.storage.db_path.as_deref());
|
||||
let conn_tmp = create_connection(&db_path_tmp)?;
|
||||
let project_id_tmp = project
|
||||
.as_deref()
|
||||
.map(|p| resolve_project(&conn_tmp, p))
|
||||
.transpose()?;
|
||||
let pq = build_path_query(&conn_tmp, &normalized, project_id_tmp)?;
|
||||
let resolved_path = if pq.is_prefix {
|
||||
// Directory prefix — file-history is file-oriented, pass the raw path.
|
||||
// Don't use pq.value which contains LIKE-escaped metacharacters.
|
||||
normalized.trim_end_matches('/').to_string()
|
||||
} else {
|
||||
pq.value
|
||||
};
|
||||
|
||||
let result = run_file_history(
|
||||
&config,
|
||||
&args.path,
|
||||
&resolved_path,
|
||||
project.as_deref(),
|
||||
args.no_follow_renames,
|
||||
args.merged,
|
||||
@@ -1901,7 +1921,8 @@ fn handle_trace(
|
||||
let start = std::time::Instant::now();
|
||||
let config = Config::load(config_override)?;
|
||||
|
||||
let (path, line_requested) = parse_trace_path(&args.path);
|
||||
let (raw_path, line_requested) = parse_trace_path(&args.path);
|
||||
let normalized = normalize_repo_path(&raw_path);
|
||||
|
||||
if line_requested.is_some() && !robot_mode {
|
||||
eprintln!(
|
||||
@@ -1920,6 +1941,16 @@ fn handle_trace(
|
||||
.map(|p| resolve_project(&conn, p))
|
||||
.transpose()?;
|
||||
|
||||
// Resolve bare filenames (e.g. "operators.ts" -> "src/utils/operators.ts")
|
||||
let pq = build_path_query(&conn, &normalized, project_id)?;
|
||||
let path = if pq.is_prefix {
|
||||
// Directory prefix — trace is file-oriented, pass the raw path.
|
||||
// Don't use pq.value which contains LIKE-escaped metacharacters.
|
||||
normalized.trim_end_matches('/').to_string()
|
||||
} else {
|
||||
pq.value
|
||||
};
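A minimal sketch of the prefix-vs-file decision made in the trace and file-history handlers above. The `PathQuery` struct here is an illustrative stand-in for whatever `build_path_query` actually returns in the crate; only the `is_prefix`/`value` choice is taken from the diff.

```rust
// Illustrative stand-in for the result of build_path_query: `is_prefix` is
// true when the input matched a directory prefix, and `value` holds the
// resolved path, LIKE-escaped for SQL matching.
struct PathQuery {
    is_prefix: bool,
    value: String,
}

fn effective_trace_path(normalized: &str, pq: &PathQuery) -> String {
    if pq.is_prefix {
        // Directory prefix: keep the raw normalized path (minus trailing '/'),
        // because pq.value contains LIKE-escaped metacharacters.
        normalized.trim_end_matches('/').to_string()
    } else {
        pq.value.clone()
    }
}
```

The same branch appears twice in the diff (file-history and trace), which is why both handlers carry the identical comment about LIKE-escaped metacharacters.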
    let result = run_trace(
        &conn,
        project_id,
@@ -2125,6 +2156,14 @@ async fn handle_sync_cmd(
) -> Result<(), Box<dyn std::error::Error>> {
    let dry_run = args.dry_run && !args.no_dry_run;

    // Dedup and sort IIDs
    let mut issue_iids = args.issue;
    let mut mr_iids = args.mr;
    issue_iids.sort_unstable();
    issue_iids.dedup();
    mr_iids.sort_unstable();
    mr_iids.dedup();
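The sort-then-dedup step above can be sketched as a small helper; the function name is hypothetical, but the ordering matters because `Vec::dedup` only removes *adjacent* duplicates:

```rust
// Normalize a list of surgical-sync IIDs: sort first so that dedup (which
// only collapses adjacent equal elements) removes every repeat.
fn normalize_iids(mut iids: Vec<u64>) -> Vec<u64> {
    iids.sort_unstable();
    iids.dedup();
    iids
}
```

This guarantees each target IID is fetched at most once no matter how many times `--issue`/`--mr` repeated it on the command line.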

    let mut config = Config::load(config_override)?;
    if args.no_events {
        config.sync.fetch_resource_events = false;
@@ -2143,15 +2182,107 @@ async fn handle_sync_cmd(
        no_events: args.no_events,
        robot_mode,
        dry_run,
        issue_iids,
        mr_iids,
        project: args.project,
        preflight_only: args.preflight_only,
    };

    // For dry run, skip recording and just show the preview
    if dry_run {
    // Validation: preflight_only requires surgical mode
    if options.preflight_only && !options.is_surgical() {
        return Err("--preflight-only requires --issue or --mr".into());
    }

    // Validation: full + surgical are incompatible
    if options.full && options.is_surgical() {
        return Err("--full and --issue/--mr are incompatible".into());
    }

    // Validation: surgical mode requires a project (via -p or config defaultProject)
    if options.is_surgical()
        && config
            .effective_project(options.project.as_deref())
            .is_none()
    {
        return Err("--issue/--mr requires -p/--project (or set defaultProject in config)".into());
    }

    // Validation: hard cap on total surgical targets
    let total_targets = options.issue_iids.len() + options.mr_iids.len();
    if total_targets > SyncOptions::MAX_SURGICAL_TARGETS {
        return Err(format!(
            "Too many surgical targets ({total_targets}); maximum is {}",
            SyncOptions::MAX_SURGICAL_TARGETS
        )
        .into());
    }
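The four validation rules above can be condensed into a standalone sketch. The `SurgicalArgs` struct is a hypothetical mirror of `SyncOptions` for illustration; the cap of 100 matches the "Max 100 total targets" constraint documented later in robot-docs, but the real constant lives on `SyncOptions::MAX_SURGICAL_TARGETS`.

```rust
// Hypothetical stand-in for the surgical-sync option validation; field names
// and the constant value are assumptions taken from the surrounding diff.
const MAX_SURGICAL_TARGETS: usize = 100;

struct SurgicalArgs {
    full: bool,
    preflight_only: bool,
    issue_iids: Vec<u64>,
    mr_iids: Vec<u64>,
    project: Option<String>,
}

impl SurgicalArgs {
    // Surgical mode is implied by passing at least one --issue or --mr.
    fn is_surgical(&self) -> bool {
        !self.issue_iids.is_empty() || !self.mr_iids.is_empty()
    }
}

fn validate(args: &SurgicalArgs) -> Result<(), String> {
    if args.preflight_only && !args.is_surgical() {
        return Err("--preflight-only requires --issue or --mr".into());
    }
    if args.full && args.is_surgical() {
        return Err("--full and --issue/--mr are incompatible".into());
    }
    if args.is_surgical() && args.project.is_none() {
        return Err("--issue/--mr requires -p/--project".into());
    }
    if args.issue_iids.len() + args.mr_iids.len() > MAX_SURGICAL_TARGETS {
        return Err("too many surgical targets".into());
    }
    Ok(())
}
```

Fail-fast validation before any network or DB work keeps the error messages identical between dry runs and real runs.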

    // Surgical + dry-run → treat as preflight-only
    let mut options = options;
    if dry_run && options.is_surgical() {
        options.preflight_only = true;
    }

    // Resolve effective project for surgical mode: when -p is not passed but
    // defaultProject is set in config, populate options.project so the surgical
    // orchestrator receives the resolved project path.
    if options.is_surgical() && options.project.is_none() {
        options.project = config.default_project.clone();
    }

    // For non-surgical dry run, skip recording and just show the preview
    if dry_run && !options.is_surgical() {
        let signal = ShutdownSignal::new();
        run_sync(&config, options, None, &signal).await?;
        return Ok(());
    }

    // Acquire file lock if --lock was passed (used by cron to skip overlapping runs)
    let _sync_lock = if args.lock {
        match lore::core::cron::acquire_sync_lock() {
            Ok(Some(guard)) => Some(guard),
            Ok(None) => {
                // Another sync is running — silently exit (expected for cron)
                tracing::debug!("--lock: another sync is running, skipping");
                return Ok(());
            }
            Err(e) => {
                tracing::warn!(error = %e, "--lock: failed to acquire file lock, skipping sync");
                return Ok(());
            }
        }
    } else {
        None
    };
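The `--lock` path above relies on `acquire_sync_lock` returning `Ok(None)` when another sync holds the lock. One common way to get that tri-state shape with only the standard library is an exclusive-create lock file; this is a sketch under that assumption, not the crate's actual mechanism (which may use flock or PID checks, and which would also need stale-lock cleanup):

```rust
use std::fs::{File, OpenOptions};
use std::io::ErrorKind;
use std::path::Path;

// Advisory single-instance lock: Ok(Some(_)) = we own the lock,
// Ok(None) = another instance holds it, Err(_) = unexpected I/O failure.
fn try_acquire_lock(path: &Path) -> std::io::Result<Option<File>> {
    match OpenOptions::new().write(true).create_new(true).open(path) {
        Ok(f) => Ok(Some(f)),
        Err(e) if e.kind() == ErrorKind::AlreadyExists => Ok(None),
        Err(e) => Err(e),
    }
}
```

Note the caller above deliberately treats both "lock held" and "lock error" as a silent skip, since overlapping cron runs are expected and should not spam stderr.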

    // Surgical mode: run_sync_surgical manages its own recorder, signal, and recording.
    // Skip the normal recorder setup and let the dispatch handle everything.
    if options.is_surgical() {
        let signal = ShutdownSignal::new();
        let signal_for_handler = signal.clone();
        tokio::spawn(async move {
            let _ = tokio::signal::ctrl_c().await;
            eprintln!("\nInterrupted, finishing current batch... (Ctrl+C again to force quit)");
            signal_for_handler.cancel();
            let _ = tokio::signal::ctrl_c().await;
            std::process::exit(130);
        });

        let start = std::time::Instant::now();
        match run_sync(&config, options, None, &signal).await {
            Ok(result) => {
                let elapsed = start.elapsed();
                if robot_mode {
                    print_sync_json(&result, elapsed.as_millis() as u64, Some(metrics));
                } else {
                    print_sync(&result, elapsed, Some(metrics), args.timings);
                }
                return Ok(());
            }
            Err(e) => return Err(e.into()),
        }
    }

    let db_path = get_db_path(config.storage.db_path.as_deref());
    let recorder_conn = create_connection(&db_path)?;
    let run_id = uuid::Uuid::new_v4().simple().to_string();
@@ -2224,6 +2355,138 @@ async fn handle_sync_cmd(
    }
}

fn handle_cron(
    config_override: Option<&str>,
    args: CronArgs,
    robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
    let start = std::time::Instant::now();

    match args.action {
        CronAction::Install { interval } => {
            let result = run_cron_install(interval)?;
            let elapsed_ms = start.elapsed().as_millis() as u64;
            if robot_mode {
                print_cron_install_json(&result, elapsed_ms);
            } else {
                print_cron_install(&result);
            }
            // Warn if no stored token — cron runs in a minimal shell with no env vars
            if let Ok(config) = Config::load(config_override)
                && config
                    .gitlab
                    .token
                    .as_ref()
                    .is_none_or(|t| t.trim().is_empty())
            {
                if robot_mode {
                    eprintln!(
                        "{{\"warning\":\"No stored token found. Cron sync requires a stored token. Run: lore token set\"}}"
                    );
                } else {
                    eprintln!();
                    eprintln!(
                        " {} No stored token found. Cron sync requires a stored token.",
                        lore::cli::render::Theme::warning()
                            .render(lore::cli::render::Icons::warning()),
                    );
                    eprintln!(" Run: lore token set");
                    eprintln!();
                }
            }
        }
        CronAction::Uninstall => {
            let result = run_cron_uninstall()?;
            let elapsed_ms = start.elapsed().as_millis() as u64;
            if robot_mode {
                print_cron_uninstall_json(&result, elapsed_ms);
            } else {
                print_cron_uninstall(&result);
            }
        }
        CronAction::Status => {
            let config = Config::load(config_override)?;
            let info = run_cron_status(&config)?;
            let elapsed_ms = start.elapsed().as_millis() as u64;
            if robot_mode {
                print_cron_status_json(&info, elapsed_ms);
            } else {
                print_cron_status(&info);
            }
        }
    }

    Ok(())
}

async fn handle_token(
    config_override: Option<&str>,
    args: TokenArgs,
    robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
    let start = std::time::Instant::now();

    match args.action {
        TokenAction::Set { token } => {
            let result = run_token_set(config_override, token).await?;
            let elapsed_ms = start.elapsed().as_millis() as u64;
            if robot_mode {
                let output = serde_json::json!({
                    "ok": true,
                    "data": {
                        "action": "set",
                        "username": result.username,
                        "config_path": result.config_path,
                    },
                    "meta": { "elapsed_ms": elapsed_ms },
                });
                println!("{}", serde_json::to_string(&output)?);
            } else {
                println!(
                    " {} Token stored and validated (authenticated as @{})",
                    lore::cli::render::Theme::success().render(lore::cli::render::Icons::success()),
                    result.username
                );
                println!(
                    " {} {}",
                    lore::cli::render::Theme::dim().render("config:"),
                    result.config_path
                );
                println!();
            }
        }
        TokenAction::Show { unmask } => {
            let result = run_token_show(config_override, unmask)?;
            let elapsed_ms = start.elapsed().as_millis() as u64;
            if robot_mode {
                let output = serde_json::json!({
                    "ok": true,
                    "data": {
                        "token": result.token,
                        "source": result.source,
                    },
                    "meta": { "elapsed_ms": elapsed_ms },
                });
                println!("{}", serde_json::to_string(&output)?);
            } else {
                println!(
                    " {} {}",
                    lore::cli::render::Theme::dim().render("token:"),
                    result.token
                );
                println!(
                    " {} {}",
                    lore::cli::render::Theme::dim().render("source:"),
                    result.source
                );
                println!();
            }
        }
    }

    Ok(())
}
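`token show` above prints `result.token`, which the robot-docs schema describes as masked unless `--unmask` is passed (`token_masked?`). A plausible masking helper, purely as a sketch: the name, visible-prefix length, and format are assumptions, not the crate's actual implementation.

```rust
// Hypothetical masking helper: keep the first few characters so the user can
// recognize which token is stored, replace the rest with '*'. Iterates by
// chars (not bytes) so non-ASCII input cannot split a UTF-8 boundary.
fn mask_token(token: &str) -> String {
    let visible: String = token.chars().take(4).collect();
    let hidden = token.chars().count().saturating_sub(visible.chars().count());
    format!("{}{}", visible, "*".repeat(hidden))
}
```

Masked-by-default output is why `--unmask` exists as an explicit opt-in: it keeps the secret out of scrollback and logs unless the user asks for it.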

#[derive(Serialize)]
struct HealthOutput {
    ok: bool,
@@ -2425,13 +2688,31 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box<dyn std::e
            }
        },
        "sync": {
            "description": "Full sync pipeline: ingest -> generate-docs -> embed",
            "flags": ["--full", "--no-full", "--force", "--no-force", "--no-embed", "--no-docs", "--no-events", "--no-file-changes", "--no-status", "--dry-run", "--no-dry-run"],
            "description": "Full sync pipeline: ingest -> generate-docs -> embed. Supports surgical per-IID mode.",
            "flags": ["--full", "--no-full", "--force", "--no-force", "--no-embed", "--no-docs", "--no-events", "--no-file-changes", "--no-status", "--dry-run", "--no-dry-run", "-t/--timings", "--lock", "--issue <IID>", "--mr <IID>", "-p/--project <path>", "--preflight-only"],
            "example": "lore --robot sync",
            "surgical_mode": {
                "description": "Sync specific issues or MRs by IID. Runs a scoped pipeline: preflight -> TOCTOU check -> ingest -> dependents -> docs -> embed.",
                "flags": ["--issue <IID> (repeatable)", "--mr <IID> (repeatable)", "-p/--project <path> (required)", "--preflight-only"],
                "examples": [
                    "lore --robot sync --issue 7 -p group/project",
                    "lore --robot sync --issue 7 --issue 42 --mr 10 -p group/project",
                    "lore --robot sync --issue 7 -p group/project --preflight-only"
                ],
                "constraints": ["--issue/--mr requires -p/--project (or defaultProject in config)", "--full and --issue/--mr are incompatible", "--preflight-only requires --issue or --mr", "Max 100 total targets"],
                "entity_result_outcomes": ["synced", "skipped_stale", "not_found", "preflight_failed", "error"]
            },
            "response_schema": {
                "normal": {
                    "ok": "bool",
                    "data": {"issues_updated": "int", "mrs_updated": "int", "documents_regenerated": "int", "documents_embedded": "int", "resource_events_synced": "int", "resource_events_failed": "int"},
                    "meta": {"elapsed_ms": "int", "stages?": "[{name:string, elapsed_ms:int, items_processed:int}]"}
                },
                "surgical": {
                    "ok": "bool",
                    "data": {"surgical_mode": "true", "surgical_iids": "{issues:[int], merge_requests:[int]}", "entity_results": "[{entity_type:string, iid:int, outcome:string, error?:string, toctou_reason?:string}]", "preflight_only?": "bool", "issues_updated": "int", "mrs_updated": "int", "documents_regenerated": "int", "documents_embedded": "int", "discussions_fetched": "int"},
                    "meta": {"elapsed_ms": "int"}
                }
            }
        },
        "issues": {
@@ -2580,7 +2861,7 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box<dyn std::e
        },
        "who": {
            "description": "People intelligence: experts, workload, active discussions, overlap, review patterns",
            "flags": ["<target>", "--path <path>", "--active", "--overlap <path>", "--reviews", "--since <duration>", "-p/--project", "-n/--limit", "--fields <list>", "--detail", "--no-detail", "--as-of <date>", "--explain-score", "--include-bots", "--all-history"],
            "flags": ["<target>", "--path <path>", "--active", "--overlap <path>", "--reviews", "--since <duration>", "-p/--project", "-n/--limit", "--fields <list>", "--detail", "--no-detail", "--as-of <date>", "--explain-score", "--include-bots", "--include-closed", "--all-history"],
            "modes": {
                "expert": "lore who <file-path> -- Who knows about this area? (also: --path for root files)",
                "workload": "lore who <username> -- What is someone working on?",
@@ -2638,7 +2919,7 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box<dyn std::e
        },
        "notes": {
            "description": "List notes from discussions with rich filtering",
            "flags": ["--limit/-n <N>", "--author/-a <username>", "--note-type <type>", "--contains <text>", "--for-issue <iid>", "--for-mr <iid>", "-p/--project <path>", "--since <period>", "--until <period>", "--path <filepath>", "--resolution <any|unresolved|resolved>", "--sort <created|updated>", "--asc", "--include-system", "--note-id <id>", "--gitlab-note-id <id>", "--discussion-id <id>", "--format <table|json|jsonl|csv>", "--fields <list|minimal>", "--open"],
            "flags": ["--limit/-n <N>", "--author/-a <username>", "--note-type <type>", "--contains <text>", "--for-issue <iid>", "--for-mr <iid>", "-p/--project <path>", "--since <period>", "--until <period>", "--path <filepath>", "--resolution <any|unresolved|resolved>", "--sort <created|updated>", "--asc", "--include-system", "--note-id <id>", "--gitlab-note-id <id>", "--discussion-id <id>", "--fields <list|minimal>", "--open"],
            "robot_flags": ["--format json", "--fields minimal"],
            "example": "lore --robot notes --author jdefting --since 1y --format json --fields minimal",
            "response_schema": {
@@ -2647,6 +2928,62 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box<dyn std::e
                "meta": {"elapsed_ms": "int"}
            }
        },
        "cron": {
            "description": "Manage cron-based automatic syncing (Unix only)",
            "subcommands": {
                "install": {"flags": ["--interval <minutes>"], "default_interval": 8},
                "uninstall": {"flags": []},
                "status": {"flags": []}
            },
            "example": "lore --robot cron status",
            "response_schema": {
                "ok": "bool",
                "data": {"action": "string (install|uninstall|status)", "installed?": "bool", "interval_minutes?": "int", "entry?": "string", "log_path?": "string", "replaced?": "bool", "was_installed?": "bool", "last_run_iso?": "string"},
                "meta": {"elapsed_ms": "int"}
            }
        },
        "token": {
            "description": "Manage stored GitLab token",
            "subcommands": {
                "set": {"flags": ["--token <value>"], "note": "Reads from stdin if --token omitted in non-interactive mode"},
                "show": {"flags": ["--unmask"]}
            },
            "example": "lore --robot token show",
            "response_schema": {
                "ok": "bool",
                "data": {"action": "string (set|show)", "token_masked?": "string", "token?": "string", "valid?": "bool", "username?": "string"},
                "meta": {"elapsed_ms": "int"}
            }
        },
        "me": {
            "description": "Personal work dashboard: open issues, authored/reviewing MRs, activity feed with computed attention states",
            "flags": ["--issues", "--mrs", "--activity", "--since <period>", "-p/--project <path>", "--all", "--user <username>", "--fields <list|minimal>"],
            "example": "lore --robot me",
            "response_schema": {
                "ok": "bool",
                "data": {
                    "username": "string",
                    "since_iso": "string?",
                    "summary": {"project_count": "int", "open_issue_count": "int", "authored_mr_count": "int", "reviewing_mr_count": "int", "needs_attention_count": "int"},
                    "open_issues": "[{project:string, iid:int, title:string, state:string, attention_state:string, status_name:string?, labels:[string], updated_at_iso:string, web_url:string?}]",
                    "open_mrs_authored": "[{project:string, iid:int, title:string, state:string, attention_state:string, draft:bool, detailed_merge_status:string?, author_username:string?, labels:[string], updated_at_iso:string, web_url:string?}]",
                    "reviewing_mrs": "[same as open_mrs_authored]",
                    "activity": "[{timestamp_iso:string, event_type:string, entity_type:string, entity_iid:int, project:string, actor:string?, is_own:bool, summary:string, body_preview:string?}]"
                },
                "meta": {"elapsed_ms": "int"}
            },
            "fields_presets": {
                "me_items_minimal": ["iid", "title", "attention_state", "updated_at_iso"],
                "me_activity_minimal": ["timestamp_iso", "event_type", "entity_iid", "actor"]
            },
            "notes": {
                "attention_states": "needs_attention | not_started | awaiting_response | stale | not_ready",
                "event_types": "note | status_change | label_change | assign | unassign | review_request | milestone_change",
                "section_flags": "If none of --issues/--mrs/--activity specified, all sections returned",
                "since_default": "1d for activity feed",
                "issue_filter": "Only In Progress / In Review status issues shown"
            }
        },
        "robot-docs": {
            "description": "This command (agent self-discovery manifest)",
            "flags": ["--brief"],
@@ -2668,10 +3005,15 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box<dyn std::e
            "search: FTS5 + vector hybrid search across all entities",
            "who: Expert/workload/reviews analysis per file path or person",
            "timeline: Chronological event reconstruction across entities",
            "trace: Code provenance chains (file -> MR -> issue -> discussion)",
            "file-history: MR history per file with rename resolution",
            "notes: Rich note listing with author, type, resolution, path, and discussion filters",
            "stats: Database statistics with document/note/discussion counts",
            "count: Entity counts with state breakdowns",
            "embed: Generate vector embeddings for semantic search via Ollama"
            "embed: Generate vector embeddings for semantic search via Ollama",
            "cron: Automated sync scheduling (Unix)",
            "token: Secure token management with masked display",
            "me: Personal work dashboard with attention states, activity feed, and needs-attention triage"
        ],
        "read_write_split": "lore = ALL reads (issues, MRs, search, who, timeline, intelligence). glab = ALL writes (create, update, approve, merge, CI/CD)."
    });
@@ -2733,6 +3075,11 @@ fn handle_robot_docs(robot_mode: bool, brief: bool) -> Result<(), Box<dyn std::e
            "lore --robot who --active --since 7d",
            "lore --robot who --overlap src/path/",
            "lore --robot who --path README.md"
        ],
        "surgical_sync": [
            "lore --robot sync --issue 7 -p group/project",
            "lore --robot sync --issue 7 --mr 10 -p group/project",
            "lore --robot sync --issue 7 -p group/project --preflight-only"
        ]
    });

@@ -2866,6 +3213,16 @@ fn handle_who(
    Ok(())
}

fn handle_me(
    config_override: Option<&str>,
    args: MeArgs,
    robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
    let config = Config::load(config_override)?;
    run_me(&config, &args, robot_mode)?;
    Ok(())
}

async fn handle_drift(
    config_override: Option<&str>,
    entity_type: &str,