6 Commits

Author SHA1 Message Date
Taylor Eernisse
4185abe05d docs: add feature ideas catalog, time-decay scoring plan, and timeline issue doc
Ideas catalog (docs/ideas/): 25 feature concept documents covering future
lore capabilities including bottleneck detection, churn analysis, expert
scoring, collaboration patterns, milestone risk, knowledge silos, and more.
Each doc includes motivation, implementation sketch, data requirements, and
dependencies on existing infrastructure. README.md provides an overview and
SYSTEM-PROPOSAL.md presents the unified analytics vision.

Plans (plans/): Time-decay expert scoring design with four rounds of review
feedback exploring decay functions, scoring algebra, and integration points
with the existing who-expert pipeline.

Issue doc (docs/issues/001): Documents the timeline pipeline bug where
EntityRef was missing project context, causing ambiguous cross-project
references during the EXPAND stage.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 10:16:48 -05:00
Taylor Eernisse
d54f669c5e chore: add multi-agent editor config and UBS file-write hook
Add rule/config files for Cursor, Cline, Codex, Gemini, Continue, and
OpenCode editors pointing them to project conventions, UBS usage, and
AGENTS.md. Add a Claude Code on-file-write hook that runs UBS on
supported source files after every save.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 10:16:28 -05:00
Taylor Eernisse
45126f04a6 fix: document upsert project_id, truncation budget, and Ollama model matching
- regenerator: Include project_id in the ON CONFLICT UPDATE clause for
  document upserts. Previously, if a document moved between projects
  (e.g., during re-ingestion), the project_id would remain stale.

- truncation: Compute the omission marker ("N notes omitted") before
  checking whether first+last notes fit in the budget. The old order
  computed the marker after the budget check, meaning the marker's byte
  cost was unaccounted for and could cause over-budget output.

- ollama: Tighten model name matching to require either an exact match
  or a colon-delimited tag prefix (model == name or name starts with
  "model:"). The prior starts_with check would false-positive on
  "nomic-embed-text-v2" when looking for "nomic-embed-text". Tests
  updated to cover exact match, tagged, wrong model, and prefix
  false-positive cases.
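A minimal sketch of the tightened matching rule described in this bullet (the function name and signature here are assumed, not taken from the actual ollama module):

```rust
// A configured model matches a server-reported name only on an exact match
// or a colon-delimited tag prefix, so "nomic-embed-text" no longer
// false-positives on "nomic-embed-text-v2".
fn model_matches(wanted: &str, reported: &str) -> bool {
    reported == wanted || reported.starts_with(&format!("{wanted}:"))
}

fn main() {
    assert!(model_matches("nomic-embed-text", "nomic-embed-text")); // exact
    assert!(model_matches("nomic-embed-text", "nomic-embed-text:latest")); // tagged
    assert!(!model_matches("nomic-embed-text", "nomic-embed-text-v2")); // prefix false-positive
    assert!(!model_matches("nomic-embed-text", "all-minilm")); // wrong model
}
```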

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 10:16:14 -05:00
Taylor Eernisse
dfa44e5bcd fix(ingestion): label upsert reliability, init idempotency, and sync health
Label upsert (issues + merge_requests): Replace INSERT ... ON CONFLICT DO
UPDATE RETURNING with INSERT OR IGNORE + SELECT. The prior RETURNING-based
approach relied on last_insert_rowid() matching the returned id, which is
not guaranteed when ON CONFLICT triggers an update (SQLite may return 0).
The new two-step approach is unambiguous and correctly tracks created_count.
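The two-step pattern looks roughly like this (table and column names are illustrative, not the project's real schema):

```sql
-- Step 1: create the row if it does not exist; a conflict is a no-op.
-- changes() > 0 afterwards means a row was created (increment created_count).
INSERT OR IGNORE INTO labels (project_id, name) VALUES (?1, ?2);

-- Step 2: fetch the id unambiguously, whether the row was just inserted
-- or already existed.
SELECT id FROM labels WHERE project_id = ?1 AND name = ?2;
```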

Init: Add ON CONFLICT(gitlab_project_id) DO UPDATE to the project insert
so re-running `lore init` updates path/branch/url instead of failing with
a unique constraint violation.

MR discussions sync: Reset discussions_sync_attempts to 0 when clearing a
sync health error, so previously-failed MRs get a fresh retry budget after
successful sync.

Count: format_number now handles negative numbers correctly by extracting
the sign before inserting thousand-separators.
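A standalone sketch of the sign-extraction fix (the commit names the helper `format_number`; this version assumes nothing else about the original implementation):

```rust
// Extract the sign first, group the absolute value's digits in threes,
// then re-attach the sign, so "-" never ends up inside the grouping.
fn format_number(n: i64) -> String {
    let (sign, digits) = if n < 0 {
        ("-", n.unsigned_abs().to_string())
    } else {
        ("", n.to_string())
    };
    let mut out = String::new();
    for (i, ch) in digits.chars().enumerate() {
        if i > 0 && (digits.len() - i) % 3 == 0 {
            out.push(',');
        }
        out.push(ch);
    }
    format!("{sign}{out}")
}

fn main() {
    assert_eq!(format_number(1234567), "1,234,567");
    assert_eq!(format_number(-1234567), "-1,234,567"); // sign survives grouping
    assert_eq!(format_number(-42), "-42");
}
```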

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 10:15:53 -05:00
Taylor Eernisse
53ef21d653 fix: propagate DB errors instead of silently swallowing them
Replace .unwrap_or(), .ok(), and .filter_map(|r| r.ok()) patterns with
proper error propagation using ? and rusqlite::OptionalExtension where
the query may legitimately return no rows.

Affected areas:
- events_db::count_events: three count queries now propagate errors
  instead of defaulting to (0, 0) on failure
- note_parser::extract_refs_from_system_notes: row iteration errors
  are now propagated instead of silently dropped via filter_map
- note_parser::noteable_type_to_entity_type: unknown types now log a
  debug warning before defaulting to "issue"
- payloads::store_payload/read_payload: use .optional()? instead of
  .ok() to distinguish "no row" from "query failed"
- backoff::compute_next_attempt_at: use .clamp(0, 30) to guard against
  negative attempt_count, not just .min(30)
- search::vector::max_chunks_per_document: returns Result<i64> with
  proper error propagation through .optional()?.flatten()
- embedding::chunk_ids::decode_rowid: promote debug_assert to assert
  since negative rowids indicate data corruption worth failing fast on
- ingestion::dirty_tracker::record_dirty_error: use .optional()? to
  handle missing dirty_sources row gracefully instead of hard error
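The `.clamp` change in the backoff bullet is easiest to see on its own; `.min(30)` only bounds the top, so a negative `attempt_count` leaks through:

```rust
fn main() {
    let attempt_count: i64 = -3;
    // .min(30) alone passes the negative value through unchanged:
    assert_eq!(attempt_count.min(30), -3);
    // .clamp(0, 30) bounds both ends:
    assert_eq!(attempt_count.clamp(0, 30), 0);
    assert_eq!(45i64.clamp(0, 30), 30);
}
```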

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 10:15:36 -05:00
Taylor Eernisse
41504b4941 feat(who): configurable scoring weights, MR refs, detail mode, and suffix path resolution
Expert mode now surfaces the specific MR references (project/path!iid) that
contributed to each expert's score, capped at 50 per user. A new --detail flag
adds per-MR breakdowns showing role (Author/Reviewer/both), note count, and
last activity timestamp.

Scoring weights (author_weight, reviewer_weight, note_bonus) are now
configurable via the config file's `scoring` section with validation that
rejects negative values. Defaults shift to author_weight=25, reviewer_weight=10,
note_bonus=1 — better reflecting that code authorship is a stronger expertise
signal than review assignment alone.
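The field names and defaults below are stated in the commit; the on-disk format is assumed here to be TOML:

```toml
# Hypothetical sketch of the `scoring` section with the new defaults.
# Negative values are rejected by validation.
[scoring]
author_weight = 25
reviewer_weight = 10
note_bonus = 1
```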

Path resolution gains suffix matching: typing "login.rs" auto-resolves to
"src/auth/login.rs" when unambiguous, with clear disambiguation errors when
multiple paths match. Project-scoping (-p) narrows the candidate set.
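A sketch of the suffix-resolution rule under the stated behavior (the candidate list, function name, and error shape are illustrative): a query resolves only when exactly one known path ends with it at a path-component boundary.

```rust
// Ok(path) when unambiguous; Err(candidates) when the query matches zero
// or multiple paths, so the caller can print a disambiguation error.
fn resolve_suffix<'a>(query: &str, paths: &[&'a str]) -> Result<&'a str, Vec<&'a str>> {
    let matches: Vec<&str> = paths
        .iter()
        .copied()
        .filter(|p| *p == query || p.ends_with(&format!("/{query}")))
        .collect();
    match matches.as_slice() {
        [one] => Ok(one),
        _ => Err(matches),
    }
}

fn main() {
    let paths = ["src/auth/login.rs", "src/ui/form.rs", "tests/login.rs"];
    assert_eq!(resolve_suffix("form.rs", &paths), Ok("src/ui/form.rs"));
    assert!(resolve_suffix("login.rs", &paths).is_err()); // ambiguous: two matches
}
```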

The MAX_MR_REFS_PER_USER constant is promoted to module scope for reuse
across expert and overlap modes. Human output shows MR refs inline and detail
sub-rows when requested. Robot JSON includes mr_refs, mr_refs_total,
mr_refs_truncated, and optional details array.
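The field names below come from the commit; the surrounding JSON shape and values are an assumed illustration, not the tool's documented schema:

```json
{
  "experts": [
    {
      "user": "…",
      "score": 61,
      "mr_refs": ["group/project!123"],
      "mr_refs_total": 73,
      "mr_refs_truncated": true,
      "details": [
        { "ref": "group/project!123", "role": "both", "notes": 4, "last_activity": "…" }
      ]
    }
  ]
}
```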

Includes comprehensive tests for suffix resolution, scoring weight
configurability, MR ref aggregation across projects, and detail mode.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 10:15:15 -05:00
59 changed files with 5558 additions and 128 deletions

.claude/hooks/on-file-write.sh Executable file

@@ -0,0 +1,12 @@
#!/bin/bash
# Ultimate Bug Scanner - Claude Code Hook
# Runs on every file save for UBS-supported languages (JS/TS, Python, C/C++, Rust, Go, Java, Ruby)
if [[ "$FILE_PATH" =~ \.(js|jsx|ts|tsx|mjs|cjs|py|pyw|pyi|c|cc|cpp|cxx|h|hh|hpp|hxx|rs|go|java|rb)$ ]]; then
  echo "🔬 Running bug scanner..."
  if ! command -v ubs >/dev/null 2>&1; then
    echo "⚠️ 'ubs' not found in PATH; install it before using this hook." >&2
    exit 0
  fi
  ubs "${PROJECT_DIR}" --ci 2>&1 | head -50
fi

.cline/rules Normal file

@@ -0,0 +1,50 @@
````markdown
## UBS Quick Reference for AI Agents
UBS stands for "Ultimate Bug Scanner": **The AI Coding Agent's Secret Weapon: Flagging Likely Bugs for Fixing Early On**
**Install:** `curl -sSL https://raw.githubusercontent.com/Dicklesworthstone/ultimate_bug_scanner/master/install.sh | bash`
**Golden Rule:** `ubs <changed-files>` before every commit. Exit 0 = safe. Exit >0 = fix & re-run.
**Commands:**
```bash
ubs file.ts file2.py # Specific files (< 1s) — USE THIS
ubs $(git diff --name-only --cached) # Staged files — before commit
ubs --only=js,python src/ # Language filter (3-5x faster)
ubs --ci --fail-on-warning . # CI mode — before PR
ubs --help # Full command reference
ubs sessions --entries 1 # Tail the latest install session log
ubs . # Whole project (ignores things like .venv and node_modules automatically)
```
**Output Format:**
```
⚠️ Category (N errors)
file.ts:42:5 Issue description
💡 Suggested fix
Exit code: 1
```
Parse: `file:line:col` → location | 💡 → how to fix | Exit 0/1 → pass/fail
**Fix Workflow:**
1. Read finding → category + fix suggestion
2. Navigate `file:line:col` → view context
3. Verify real issue (not false positive)
4. Fix root cause (not symptom)
5. Re-run `ubs <file>` → exit 0
6. Commit
**Speed Critical:** Scope to changed files. `ubs src/file.ts` (< 1s) vs `ubs .` (30s). Never full scan for small edits.
**Bug Severity:**
- **Critical** (always fix): Null safety, XSS/injection, async/await, memory leaks
- **Important** (production): Type narrowing, division-by-zero, resource leaks
- **Contextual** (judgment): TODO/FIXME, console logs
**Anti-Patterns:**
- ❌ Ignore findings → ✅ Investigate each
- ❌ Full scan per edit → ✅ Scope to file
- ❌ Fix symptom (`if (x) { x.y }`) → ✅ Root cause (`x?.y`)
````

.codex/rules/ubs.md Normal file (50 lines, contents identical to .cline/rules above)

.continue/config.json Normal file

@@ -0,0 +1,16 @@
{
"customCommands": [
{
"name": "scan-bugs",
"description": "Run Ultimate Bug Scanner on current project",
"prompt": "Run 'ubs --fail-on-warning .' and fix any critical issues found before proceeding"
}
],
"slashCommands": [
{
"name": "quality",
"description": "Check code quality with UBS",
"run": "ubs ."
}
]
}

.cursor/rules Normal file (50 lines, contents identical to .cline/rules above)

.gemini/rules Normal file (50 lines, contents identical to .cline/rules above)

.opencode/rules Normal file (50 lines, contents identical to .cline/rules above)

View File

@@ -740,3 +740,53 @@ lore -J mrs --fields iid,title,state,draft,labels # Custom field list
- Use `lore --robot health` as a fast pre-flight check before queries
- Use `lore robot-docs` for response schema discovery
- The `-p` flag supports fuzzy project matching (suffix and substring)
(The 50 appended lines are the same UBS Quick Reference block as .cline/rules above.)

docs/ideas/README.md Normal file

@@ -0,0 +1,66 @@
# Gitlore Feature Ideas
Central registry of potential features. Each idea leverages data already ingested
into the local SQLite database (issues, MRs, discussions, notes, resource events,
entity references, embeddings, file changes).
## Priority Tiers
**Tier 1 — High confidence, low effort, immediate value:**
| # | Idea | File | Confidence |
|---|------|------|------------|
| 9 | Similar Issues Finder | [similar-issues.md](similar-issues.md) | 95% |
| 17 | "What Changed?" Digest | [digest.md](digest.md) | 93% |
| 5 | Who Knows About X? | [experts.md](experts.md) | 92% |
| -- | Multi-Project Ergonomics | [project-ergonomics.md](project-ergonomics.md) | 90% |
| 27 | Weekly Digest Generator | [weekly-digest.md](weekly-digest.md) | 90% |
| 4 | Stale Discussion Finder | [stale-discussions.md](stale-discussions.md) | 90% |
**Tier 2 — Strong ideas, moderate effort:**
| # | Idea | File | Confidence |
|---|------|------|------------|
| 19 | MR-to-Issue Closure Gap | [closure-gaps.md](closure-gaps.md) | 88% |
| 1 | Contributor Heatmap | [contributors.md](contributors.md) | 88% |
| 21 | Knowledge Silo Detection | [silos.md](silos.md) | 87% |
| 2 | Review Bottleneck Detector | [bottlenecks.md](bottlenecks.md) | 85% |
| 14 | File Hotspot Report | [hotspots.md](hotspots.md) | 85% |
| 26 | Unlinked MR Finder | [unlinked.md](unlinked.md) | 83% |
| 6 | Decision Archaeology | [decisions.md](decisions.md) | 82% |
| 18 | Label Hygiene Audit | [label-audit.md](label-audit.md) | 82% |
**Tier 3 — Promising, needs more design work:**
| # | Idea | File | Confidence |
|---|------|------|------------|
| 29 | Entity Relationship Explorer | [graph.md](graph.md) | 80% |
| 12 | Milestone Risk Report | [milestone-risk.md](milestone-risk.md) | 78% |
| 3 | Label Velocity | [label-flow.md](label-flow.md) | 78% |
| 25 | MR Pipeline Efficiency | [mr-pipeline.md](mr-pipeline.md) | 78% |
| 24 | Recurring Bug Patterns | [recurring-patterns.md](recurring-patterns.md) | 76% |
| 7 | Cross-Project Impact Graph | [impact-graph.md](impact-graph.md) | 75% |
| 28 | DiffNote Coverage Map | [review-coverage.md](review-coverage.md) | 75% |
| 16 | Idle Work Detector | [idle.md](idle.md) | 73% |
| 8 | MR Churn Analysis | [churn.md](churn.md) | 72% |
| 15 | Author Collaboration Network | [collaboration.md](collaboration.md) | 70% |
## Rejected Ideas (with reasons)
| # | Idea | Reason |
|---|------|--------|
| 10 | Sprint Burndown from Labels | Too opinionated about label semantics |
| 11 | Code Review Quality Score | Subjective "quality" scoring creates perverse incentives |
| 13 | Discussion Sentiment Drift | Unreliable heuristic sentiment on technical text |
| 20 | Response Time Leaderboard | Toxic "leaderboard" framing; metric folded into #2 |
| 22 | Timeline Diff | Niche use case; timeline already interleaves events |
| 23 | Discussion Thread Summarizer | Requires LLM inference; out of scope for local-first tool |
| 30 | NL Query Interface | Over-engineered; existing filters cover this |
## How to use this list
1. Pick an idea from Tier 1 or Tier 2
2. Read its detail file for implementation plan and SQL sketches
3. Create a bead (`br create`) referencing the idea file
4. Implement following TDD (test first, then minimal impl)
5. Update the idea file with `status: implemented` when done

View File

@@ -0,0 +1,555 @@
# Project Manager System — Design Proposal
## The Problem
We have a growing backlog of ideas and issues in markdown files. Agents can ship
features in under an hour. The constraint isn't execution speed — it's knowing
WHAT to execute NEXT, in what ORDER, and detecting when the plan needs to change.
We need a system that:
1. Automatically scores and sequences work items
2. Detects when scope changes during spec generation
3. Tracks the full lifecycle: idea → spec → beads → shipped
4. Re-triages instantly when the dependency graph changes
5. Runs in seconds, not minutes
## Architecture
```
┌─────────────────────────────────────────────────────────────┐
│ docs/ideas/*.md │
│ docs/issues/*.md │
│ (YAML frontmatter) │
└──────────────────────────┬──────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ IDEA TRIAGE SKILL │
│ │
│ Phase 1: INGEST — parse all frontmatter │
│ Phase 2: VALIDATE — check refs, detect staleness │
│ Phase 3: EVALUATE — detect scope changes since last run │
│ Phase 4: SCORE — compute priority with unlock graph │
│ Phase 5: SEQUENCE — topological sort by dependency + score │
│ Phase 6: RECOMMEND — top 3 + unlock advisories + warnings │
└──────────────────────────┬──────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ HUMAN DECIDES │
│ (picks from top 3, takes seconds) │
└──────────────────────────┬──────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ SPEC GENERATION (Claude/GPT) │
│ Takes the idea doc, generates detailed implementation spec │
│ ALSO: re-evaluates frontmatter fields based on deeper │
│ understanding. Updates effort, blocked-by, components. │
│ This is the SCOPE CHANGE DETECTION point. │
└──────────────────────────┬──────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ PLAN-TO-BEADS (existing skill) │
│ Spec → granular beads with dependencies via br CLI │
│ Links bead IDs back into the idea frontmatter │
└──────────────────────────┬──────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ AGENT IMPLEMENTATION │
│ Works beads via br/bv workflow │
│ bv --robot-triage handles execution-phase prioritization │
└──────────────────────────┬──────────────────────────────────┘
┌─────────────────────────────────────────────────────────────┐
│ COMPLETION & RE-TRIAGE │
│ Beads close → idea status updates to implemented │
│ Skill re-runs → newly unblocked ideas surface │
│ Loop back to top │
└─────────────────────────────────────────────────────────────┘
```
## The Two Systems and Their Boundary
| Concern | Ideas System (new) | Beads System (existing) |
|---------|-------------------|------------------------|
| Phase | Pre-commitment (what to build) | Execution (how to build) |
| Data | docs/ideas/*.md, docs/issues/*.md | .beads/issues.jsonl |
| Triage | Idea triage skill | bv --robot-triage |
| Tracking | YAML frontmatter | JSONL records |
| Granularity | Feature-level | Task-level |
| Lifecycle | proposed → specced → promoted | open → in_progress → closed |
**The handoff point is promotion.** An idea becomes one or more beads. After that,
the ideas system only tracks the idea's status (promoted/implemented). Beads owns
execution.
An idea file is NEVER deleted. It's a permanent design record. Even after
implementation, it documents WHY the feature was built and what tradeoffs were made.
---
## Data Model
### Frontmatter Schema
```yaml
---
# ── Identity ──
id: idea-009 # stable unique identifier
title: Similar Issues Finder
type: idea # idea | issue
status: proposed # see lifecycle below
# ── Timestamps ──
created: 2026-02-09
updated: 2026-02-09
eval-hash: null # SHA of scoring fields at last triage run
# ── Scoring Inputs ──
impact: high # high | medium | low
effort: small # small | medium | large | xlarge
severity: null # critical | high | medium | low (issues only)
autonomy: full # full | needs-design | needs-human
# ── Dependency Graph ──
blocked-by: [] # IDs of ideas/issues that must complete first
unlocks: # IDs that become possible/better after this ships
- idea-recurring-patterns
requires: [] # external prerequisites (gate names)
related: # soft links, not blocking
- issue-001
# ── Implementation Context ──
components: # source code paths this will touch
- src/search/
- src/embedding/
command: lore similar # proposed CLI command (null for issues)
has-spec: false # detailed spec has been generated
spec-path: null # path to spec doc if it exists
beads: [] # bead IDs after promotion
# ── Classification ──
tags:
- embeddings
- search
---
```
### Status Lifecycle
```
IDEA lifecycle:
proposed ──→ accepted ──→ specced ──→ promoted ──→ implemented
│ │
└──→ rejected └──→ (scope changed, back to accepted)
ISSUE lifecycle:
open ──→ accepted ──→ specced ──→ promoted ──→ resolved
└──→ wontfix
```
Transitions:
- `proposed → accepted`: Human confirms this is worth building
- `accepted → specced`: Detailed implementation spec has been generated
- `specced → promoted`: Beads created from the spec
- `promoted → implemented`: All beads closed
- Any → `rejected`/`wontfix`: Decided not to build (with reason in body)
- `specced → accepted`: Scope changed during spec, needs re-evaluation
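The transition rules above can be sketched as a validation helper the triage skill might use (names and the exact rejection rule are assumed; this covers the idea lifecycle only):

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Status { Proposed, Accepted, Specced, Promoted, Implemented, Rejected }

// True only for the transitions listed in the lifecycle above.
fn valid_transition(from: Status, to: Status) -> bool {
    use Status::*;
    matches!(
        (from, to),
        (Proposed, Accepted)
            | (Accepted, Specced)
            | (Specced, Promoted)
            | (Promoted, Implemented)
            | (Specced, Accepted) // scope changed during spec
            | (Proposed | Accepted | Specced | Promoted, Rejected)
    )
}

fn main() {
    assert!(valid_transition(Status::Specced, Status::Accepted));
    assert!(!valid_transition(Status::Proposed, Status::Promoted)); // must spec first
}
```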
### Effort Calibration (Agent-Executed)
| Level | Wall Clock | Autonomy | Example |
|-------|-----------|----------|---------|
| small | ~30 min | Agent ships end-to-end | stale-discussions, closure-gaps |
| medium | ~1 hour | Agent ships end-to-end | similar-issues, digest |
| large | 1-2 hours | May need one design decision | recurring-patterns, experts |
| xlarge | 2+ hours | Needs human architecture input | project groups |
### Gates Registry (docs/gates.yaml)
```yaml
gates:
gate-1:
title: Resource Events Ingestion
status: complete
completed: 2025-12-15
gate-2:
title: Cross-References & Entity Graph
status: complete
completed: 2026-01-10
gate-3:
title: Timeline Pipeline
status: complete
completed: 2026-01-25
gate-4:
title: MR File Changes Ingestion
status: partial
notes: Schema ready (migration 016), ingestion code exists but untested
tracks: mr_file_changes table population
gate-5:
title: Code Trace (file:line → commit → MR → issue)
status: not-started
blocked-by: gate-4
notes: Requires git log parsing + commit SHA matching
```
The skill reads this file to determine which `requires` entries are satisfied.
---
## Scoring Algorithm
### Priority Score
```
For ideas:
base = impact_weight # high=3, medium=2, low=1
unlock = 1 + (0.5 × count_of_unlocks) # items this directly enables
readiness = 0 if blocked, 1 if ready
priority = base × unlock × readiness
For issues:
base = severity_weight × 1.5 # critical=6, high=4.5, medium=3, low=1.5
unlock = 1 + (0.5 × count_of_unlocks) # (bugs rarely unlock, but can)
readiness = 0 if blocked, 1 if ready
priority = base × unlock × readiness
Tiebreak (among equal priority):
1. Prefer smaller effort (ships faster, starts next cycle sooner)
2. Prefer autonomy:full over needs-design over needs-human
3. Prefer older items (FIFO within same score)
```
### Why This Works
- High-impact items that unlock other items float to the top
- Blocked items score 0 regardless of impact (can't be worked)
- Effort is a tiebreaker, not a primary factor (since execution is fast)
- Issues with severity get a 1.5× multiplier (bugs degrade existing value)
- Unlock multiplier captures the "do Gate 4 first" insight automatically
### Example Rankings
| Item | Impact | Unlocks | Readiness | Score |
|------|--------|---------|-----------|-------|
| project-ergonomics | high(3) | 10 | ready(1) | 3 × 6.0 = 18.0 |
| gate-4-completion | med(2) | 5 | ready(1) | 2 × 3.5 = 7.0 |
| similar-issues | high(3) | 1 | ready(1) | 3 × 1.5 = 4.5 |
| stale-discussions | high(3) | 0 | ready(1) | 3 × 1.0 = 3.0 |
| hotspots | high(3) | 1 | blocked(0) | 0.0 |
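The idea-scoring formula transcribes directly; the asserts below reproduce the example rankings (function name is illustrative):

```rust
// priority = impact_weight × (1 + 0.5 × unlocks) × readiness
fn idea_priority(impact_weight: f64, unlock_count: usize, blocked: bool) -> f64 {
    let unlock = 1.0 + 0.5 * unlock_count as f64;
    let readiness = if blocked { 0.0 } else { 1.0 };
    impact_weight * unlock * readiness
}

fn main() {
    assert_eq!(idea_priority(3.0, 10, false), 18.0); // project-ergonomics
    assert_eq!(idea_priority(2.0, 5, false), 7.0);   // gate-4-completion
    assert_eq!(idea_priority(3.0, 1, false), 4.5);   // similar-issues
    assert_eq!(idea_priority(3.0, 1, true), 0.0);    // hotspots (blocked)
}
```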
Project-ergonomics dominates because it unlocks 10 downstream items. This is the
correct recommendation — it's the highest-leverage work even though "stale-discussions"
is simpler.
---
## Scope Change Detection
This is the hardest problem. An idea's scope can change in three ways:
### 1. During Spec Generation (Primary Detection Point)
When Claude/GPT generates a detailed implementation spec from an idea doc, it
understands the idea more deeply than the original sketch. The spec process should
be instructed to:
- Re-evaluate effort (now that implementation is understood in detail)
- Discover new dependencies (need to change schema first, need a new config option)
- Identify component changes (touches more modules than originally thought)
- Assess impact more accurately (this is actually higher/lower value than estimated)
**Mechanism:** The spec generation prompt includes an explicit "re-evaluate frontmatter"
step. The spec output includes an updated frontmatter block. If scoring-relevant
fields changed, the skill flags it:
```
SCOPE CHANGE DETECTED:
idea-009 (Similar Issues Finder)
- effort: small → medium (needs embedding aggregation strategy)
- blocked-by: [] → [gate-embeddings-populated]
- components: +src/cli/commands/similar.rs (new file)
Previous score: 4.5 → New score: 3.0
Recommendation: Still top-3, but sequencing may change.
```
### 2. During Implementation (Discovered Complexity)
An agent working on beads may discover the spec was wrong:
- "This requires a database migration I didn't anticipate"
- "This module doesn't expose the API I need"
**Mechanism:** When a bead is blocked or takes significantly longer than estimated,
the agent should update the idea's frontmatter. The skill detects the change on
next triage run via eval-hash comparison.
### 3. External Changes (Gate Completion, New Ideas)
When a gate completes or a new idea is added that changes the dependency graph:
- Gate 4 completes → 5 ideas become unblocked
- New idea added that's higher priority than current top-3
- Two ideas discovered to be duplicates
**Mechanism:** The skill detects these automatically by re-computing the full graph
on every run. The eval-hash tracks what the scoring fields looked like last time;
if they haven't changed but the SCORE changed (because a dependency was resolved),
the skill flags it as "newly unblocked."
### The eval-hash Field
```yaml
eval-hash: "a1b2c3d4" # SHA-256 of: impact + effort + blocked-by + unlocks + requires
```
Computed by hashing the concatenation of all scoring-relevant fields. When the skill
runs, it compares:
- If eval-hash matches AND score is same → no change, skip
- If eval-hash matches BUT score changed → external change (dependency resolved)
- If eval-hash differs → item was modified, re-evaluate
This avoids re-announcing unchanged items on every run.
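A sketch of the hash computation over the field set listed above, truncating to an 8-character prefix to match the example frontmatter (the exact field order and separator are assumptions):

```python
import hashlib

def eval_hash(item: dict) -> str:
    # Concatenate the scoring-relevant fields in a fixed order so the hash
    # is stable across runs; list fields are sorted for determinism.
    parts = [
        str(item.get("impact", "")),
        str(item.get("effort", "")),
        "|".join(sorted(item.get("blocked-by", []))),
        "|".join(sorted(item.get("unlocks", []))),
        "|".join(sorted(item.get("requires", []))),
    ]
    digest = hashlib.sha256("\n".join(parts).encode("utf-8")).hexdigest()
    return digest[:8]  # short prefix, as in the example frontmatter
```

A missing list field and an explicit empty list hash identically, so adding `blocked-by: []` to a file does not trigger a spurious scope-change flag.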
---
## Skill Design
### Location
`.claude/skills/idea-triage/SKILL.md` (project-local)
### Trigger Phrases
- "triage ideas" / "what should I build next?"
- "idea triage" / "prioritize ideas"
- "what's the highest value work?"
- `/idea-triage`
### Workflow Phases
**Phase 1: INGEST**
- Glob docs/ideas/*.md and docs/issues/*.md
- Parse YAML frontmatter from each file
- Read docs/gates.yaml for capability status
- Collect: id, title, type, status, impact, effort, severity, autonomy,
blocked-by, unlocks, requires, has-spec, beads, eval-hash
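The skill itself is prompt-driven, but if INGEST were backed by a helper script, the frontmatter extraction might look like this minimal sketch. It handles only flat `key: value` pairs and inline lists; anything beyond that (nested YAML, multi-line values) is out of scope here:

```python
def parse_frontmatter(text: str) -> dict:
    """Extract the YAML frontmatter block delimited by '---' lines.

    Minimal sketch: flat `key: value` pairs and inline lists like
    `blocked-by: [a, b]` only; a real implementation would use a YAML parser.
    """
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        value = value.split("#")[0].strip()  # drop trailing comments
        if value.startswith("[") and value.endswith("]"):
            inner = value[1:-1].strip()
            fields[key.strip()] = [v.strip() for v in inner.split(",")] if inner else []
        else:
            fields[key.strip()] = value.strip("'\"")
    return fields
```

Files without a leading `---` return an empty dict, which Phase 2 would then report as missing required fields.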
**Phase 2: VALIDATE**
- Required fields present (id, title, type, status, impact, effort)
- All blocked-by IDs reference existing files
- All unlocks IDs reference existing files
- All requires entries exist in gates.yaml
- No dependency cycles (blocked-by graph is a DAG)
- Status transitions are valid (no "proposed" with beads linked)
- Output: list of validation errors/warnings
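The DAG check can be sketched with a standard three-color DFS over the blocked-by edges; the function name and return shape are illustrative, not part of the spec:

```python
def find_cycles(blocked_by: dict[str, list[str]]) -> list[list[str]]:
    """Return cycles found in the blocked-by graph (empty list means DAG)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in blocked_by}
    cycles, stack = [], []

    def visit(node):
        color[node] = GRAY
        stack.append(node)
        for dep in blocked_by.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                # Back edge: the cycle is the stack suffix starting at dep.
                cycles.append(stack[stack.index(dep):] + [dep])
            elif color.get(dep, WHITE) == WHITE and dep in blocked_by:
                visit(dep)
        stack.pop()
        color[node] = BLACK

    for node in blocked_by:
        if color[node] == WHITE:
            visit(node)
    return cycles
```

References to IDs missing from the map are skipped here; those surface separately as the "blocked-by IDs reference existing files" check.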
**Phase 3: EVALUATE (Scope Change Detection)**
- For each item, compute current eval-hash from scoring fields
- Compare against stored eval-hash in frontmatter
- If different: flag as SCOPE_CHANGED with field-level diff
- If same but score changed (due to external dep resolution): flag as NEWLY_UNBLOCKED
- If status is specced but has-spec is false: flag as INCONSISTENT
**Phase 4: SCORE**
- Resolve requires against gates.yaml (is the gate complete?)
- Resolve blocked-by against other items (is the blocker done?)
- Compute readiness: 0 if any hard blocker is unresolved, 1 otherwise
- Compute unlock count: count items whose blocked-by includes this ID
- Apply scoring formula:
- Ideas: impact_weight × (1 + 0.5 × unlock_count) × readiness
- Issues: severity_weight × 1.5 × (1 + 0.5 × unlock_count) × readiness
- Apply tiebreak: effort_weight, autonomy, created date
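The formulas above can be sketched in Python. The numeric weight tables are assumptions, chosen so the sample report's scores (18.0 for a high-impact idea unlocking 10 items, 4.5 for one unlocking 1) come out; the doc fixes only the formula shape:

```python
# Assumed weight mappings -- not specified by the scoring design.
IMPACT_WEIGHT = {"low": 1.0, "medium": 2.0, "high": 3.0}
SEVERITY_WEIGHT = {"minor": 1.0, "major": 2.0, "critical": 3.0}

def score(item: dict, unlock_count: int, blocked: bool) -> float:
    # readiness: 0 if any hard blocker is unresolved, 1 otherwise
    readiness = 0.0 if blocked else 1.0
    if item["type"] == "issue":
        base = SEVERITY_WEIGHT[item["severity"]] * 1.5
    else:
        base = IMPACT_WEIGHT[item["impact"]]
    return base * (1 + 0.5 * unlock_count) * readiness
```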
**Phase 5: SEQUENCE**
- Separate into: actionable (score > 0) vs blocked (score = 0)
- Among actionable: sort by score descending with tiebreak
- Among blocked: sort by "what-if score" (score if blockers were resolved)
- Compute unlock advisories: "completing X unblocks Y items worth Z total score"
**Phase 6: RECOMMEND**
Output structured report:
```
== IDEA TRIAGE ==
Run: 2026-02-09T14:30:00Z
Items: 22 (18 proposed, 2 accepted, 1 specced, 1 implemented)
RECOMMENDED SEQUENCE:
1. [idea-project-ergonomics] Multi-Project Ergonomics
impact:high effort:medium autonomy:full score:18.0
WHY FIRST: Unlocks 10 downstream ideas. Highest leverage.
COMPONENTS: src/core/config.rs, src/core/project.rs, src/cli/
2. [idea-009] Similar Issues Finder
impact:high effort:small autonomy:full score:4.5
WHY NEXT: Highest standalone impact. Ships in ~30 min.
UNLOCKS: idea-recurring-patterns
3. [idea-004] Stale Discussion Finder
impact:high effort:small autonomy:full score:3.0
WHY NEXT: Quick win, no dependencies, immediate user value.
BLOCKED (would rank high if unblocked):
idea-014 File Hotspots score-if-unblocked:4.5 BLOCKED BY: gate-4
idea-021 Knowledge Silos score-if-unblocked:3.0 BLOCKED BY: gate-4
UNLOCK ADVISORY: Completing gate-4 unblocks 5 items (combined: 15.0)
SCOPE CHANGES DETECTED:
idea-009: effort changed small→medium (eval-hash mismatch)
idea-017: now has spec (has-spec flipped to true)
NEWLY UNBLOCKED:
(none this run)
WARNINGS:
idea-016: status=proposed, unchanged for 30+ days
idea-008: blocked-by references "idea-gate4" which doesn't exist (typo?)
HEALTH:
Proposed: 18 | Accepted: 2 | Specced: 1 | Promoted: 0 | Implemented: 1
Blocked: 6 | Actionable: 16
Backlog runway at ~5/day: ~3 days
```
### What the Skill Does NOT Do
- **Never modifies files.** Read-only triage. The agent or human updates frontmatter.
Exception: the skill CAN update eval-hash after a triage run (opt-in).
- **Never creates beads.** That's plan-to-beads skill territory.
- **Never replaces bv.** Once work is in beads, bv --robot-triage handles execution
prioritization. This skill owns pre-commitment only.
- **Never generates specs.** That's a separate step with Claude/GPT.
---
## Integration Points
### With Spec Generation
The spec generation prompt (separate from this skill) should include:
```
After generating the implementation spec, re-evaluate the idea's frontmatter:
1. Is the effort estimate still accurate? (small/medium/large/xlarge)
2. Did you discover new dependencies? (add to blocked-by)
3. Are there components not listed? (add to components)
4. Has the impact assessment changed?
5. Can an agent ship this autonomously? (autonomy: full/needs-design/needs-human)
Output an UPDATED frontmatter block at the end of the spec.
If any scoring field changed, explain what changed and why.
```
### With plan-to-beads
When promoting an idea to beads:
1. Run plan-to-beads on the spec
2. Capture the created bead IDs
3. Update the idea's frontmatter: status → promoted, beads → [bd-xxx, bd-yyy]
4. Run br sync --flush-only && git add .beads/
### With bv --robot-triage
These systems don't talk to each other directly. The boundary is:
- Idea triage skill → "build idea-009 next"
- Human/agent generates spec → plan-to-beads → beads created
- bv --robot-triage → "work on bd-xxx next"
- Beads close → human/agent updates idea frontmatter → idea triage re-runs
### With New Item Ingestion
When someone adds a new file to docs/ideas/ or docs/issues/:
- If it has valid frontmatter: picked up automatically on next triage run
- If it has no/invalid frontmatter: flagged in WARNINGS section
- Skill can suggest default frontmatter based on content analysis
---
## Failure Modes and Mitigations
### 1. Frontmatter Rot
**Risk:** Fields don't get updated. Status says "proposed" but it's actually shipped.
**Mitigation:** Cross-reference with beads. If an idea has beads and all beads are
closed, flag that the idea should be "implemented" even if frontmatter says otherwise.
The skill detects this inconsistency.
### 2. Score Gaming
**Risk:** Someone inflates impact or unlocks count to make their idea rank higher.
**Mitigation:** Unlocks are verified — the skill checks that the referenced items
actually have this idea in their blocked-by. Impact is subjective but reviewed during
spec generation (second opinion from a different model/session).
### 3. Stale Gates Registry
**Risk:** gate-4 is actually complete but gates.yaml wasn't updated.
**Mitigation:** Skill warns when a gate has been "partial" for a long time. Could
also probe the codebase (check if mr_file_changes ingestion code exists and has tests).
### 4. Circular Dependencies
**Risk:** A blocks B blocks A.
**Mitigation:** Phase 2 validation explicitly checks for cycles in the blocked-by
graph and reports them as errors.
### 5. Unlock Count Inflation
**Risk:** An item claims to unlock 20 things, making it score astronomically.
**Mitigation:** Unlock count is VERIFIED by checking reverse blocked-by references.
If idea-X says it unlocks idea-Y, but idea-Y's blocked-by doesn't include idea-X,
the claim is discounted. Both explicit unlocks and reverse blocked-by contribute to
the count, but unverified claims are flagged.
### 6. Scope Creep During Spec
**Risk:** Spec generation reveals the idea is actually 5× harder than estimated.
The score drops, but the human has already mentally committed.
**Mitigation:** The scope change detection makes this VISIBLE. The triage output
explicitly shows "effort changed small→xlarge, score dropped from 4.5 to 0.75."
Human can then decide: proceed anyway, or switch to a different top-3 pick.
### 7. Orphaned Ideas
**Risk:** Ideas get promoted to beads, beads get implemented, but the idea file
never gets updated. It sits in "promoted" forever.
**Mitigation:** Skill checks: for each idea with status=promoted, look up the
linked beads. If all beads are closed, flag: "idea-009 appears complete, update
status to implemented."
---
## Implementation Plan
### Step 1: Create the Frontmatter Schema (this doc → applied to all files)
- Define the exact YAML schema (above)
- Create docs/gates.yaml
- Apply frontmatter to all 22 existing files in docs/ideas/ and docs/issues/
### Step 2: Build the Skill
- Create .claude/skills/idea-triage/SKILL.md
- Implement all 6 phases in the skill prompt
- The skill uses Glob, Read, and text processing — no external scripts needed
  (the current file count is small enough for Claude to process directly)
### Step 3: Test the System
- Run the skill against current files
- Verify scoring matches manual expectations
- Check that project-ergonomics ranks #1 (it should, due to unlock count)
- Verify blocked items score 0
- Check validation catches intentional errors
### Step 4: Run One Full Cycle
- Pick the top recommendation
- Generate a spec (separate session)
- Verify scope change detection works (spec should update frontmatter)
- Promote to beads via plan-to-beads
- Implement
- Verify completion detection works
### Step 5: Iterate
- Run triage again after implementation
- Verify newly unblocked items surface
- Adjust scoring weights if rankings feel wrong
- Add new ideas as they emerge

`docs/ideas/bottlenecks.md`
# Review Bottleneck Detector
- **Command:** `lore bottlenecks [--since <date>]`
- **Confidence:** 85%
- **Tier:** 2
- **Status:** proposed
- **Effort:** medium — join MRs with first review note, compute percentiles
## What
For MRs in a given time window, compute:
1. **Time to first review** — created_at to first non-author DiffNote
2. **Review cycles** — count of discussion resolution rounds
3. **Time to merge** — created_at to merged_at
Flag MRs above P90 thresholds as bottlenecks.
## Why
Review bottlenecks are the #1 developer productivity killer. Making them visible
and measurable is the first step to fixing them. This provides data for process
retrospectives.
## Data Required
All exists today:
- `merge_requests` (created_at, merged_at, author_username)
- `notes` (note_type='DiffNote', author_username, created_at)
- `discussions` (resolved, resolvable)
## Implementation Sketch
```sql
-- Time to first review per MR
SELECT
mr.id,
mr.iid,
mr.title,
mr.author_username,
mr.created_at,
mr.merged_at,
p.path_with_namespace,
MIN(n.created_at) as first_review_at,
(MIN(n.created_at) - mr.created_at) / 3600000.0 as hours_to_first_review,
(mr.merged_at - mr.created_at) / 3600000.0 as hours_to_merge
FROM merge_requests mr
JOIN projects p ON mr.project_id = p.id
LEFT JOIN discussions d ON d.merge_request_id = mr.id
LEFT JOIN notes n ON n.discussion_id = d.id
AND n.note_type = 'DiffNote'
AND n.is_system = 0
AND n.author_username != mr.author_username
WHERE mr.created_at >= ?1
AND mr.state IN ('merged', 'opened')
GROUP BY mr.id
ORDER BY hours_to_first_review DESC NULLS FIRST;
```
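Core SQLite ships no percentile aggregate, so the P50/P90 thresholds would likely be computed client-side from the `hours_to_first_review` column. A nearest-rank sketch (the exact percentile method is an assumption):

```python
def percentile(values: list[float], p: int) -> float:
    """Nearest-rank percentile; MRs with no first review (None) are excluded."""
    xs = sorted(v for v in values if v is not None)
    if not xs:
        return float("nan")
    # Nearest rank: smallest value with at least p% of the data <= it,
    # i.e. index ceil(n * p / 100) - 1.
    k = max(0, -(-len(xs) * p // 100) - 1)
    return xs[int(k)]
```

Rows above `percentile(hours, 90)` would then be flagged as bottlenecks.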
## Human Output
```
Review Bottlenecks (last 30 days)
P50 time to first review: 4.2h
P90 time to first review: 28.1h
P50 time to merge: 2.1d
P90 time to merge: 8.3d
Slowest to review:
!234 Refactor auth 72h to first review (alice, still open)
!228 Database migration 48h to first review (bob, merged in 5d)
Most review cycles:
!234 Refactor auth 8 discussion threads, 4 resolved
!225 API versioning 6 discussion threads, 6 resolved
```
## Downsides
- Doesn't capture review done outside GitLab (Slack, in-person)
- DiffNote timestamp != when reviewer started reading
- Large MRs naturally take longer; no size normalization
## Extensions
- `lore bottlenecks --reviewer alice` — how fast does alice review?
- Per-project comparison: which project has the fastest review cycle?
- Trend line: is review speed improving or degrading over time?

`docs/ideas/churn.md`
# MR Churn Analysis
- **Command:** `lore churn [--since <date>]`
- **Confidence:** 72%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — multi-table aggregation with composite scoring
## What
For merged MRs, compute a "contentiousness score" based on: number of review
discussions, number of DiffNotes, resolution cycles, file count. Flag high-churn
MRs as candidates for architectural review.
## Why
High-churn MRs often indicate architectural disagreements, unclear requirements,
or code that's hard to review. Surfacing them post-merge enables retrospectives
and identifies areas that need better design upfront.
## Data Required
All exists today:
- `merge_requests` (state='merged')
- `discussions` (merge_request_id, resolved, resolvable)
- `notes` (note_type='DiffNote', discussion_id)
- `mr_file_changes` (file count per MR)
## Implementation Sketch
```sql
SELECT
mr.iid,
mr.title,
mr.author_username,
p.path_with_namespace,
COUNT(DISTINCT d.id) as discussion_count,
COUNT(DISTINCT CASE WHEN n.note_type = 'DiffNote' THEN n.id END) as diffnote_count,
COUNT(DISTINCT CASE WHEN d.resolvable = 1 AND d.resolved = 1 THEN d.id END) as resolved_threads,
COUNT(DISTINCT mfc.id) as files_changed,
-- Composite score: weighted sum of discussions, DiffNotes, and files changed
-- (weights are arbitrary; see Downsides)
(COUNT(DISTINCT d.id) * 2
 + COUNT(DISTINCT CASE WHEN n.note_type = 'DiffNote' THEN n.id END)
 + COUNT(DISTINCT mfc.id)) as churn_score
FROM merge_requests mr
JOIN projects p ON mr.project_id = p.id
LEFT JOIN discussions d ON d.merge_request_id = mr.id AND d.noteable_type = 'MergeRequest'
LEFT JOIN notes n ON n.discussion_id = d.id AND n.is_system = 0
LEFT JOIN mr_file_changes mfc ON mfc.merge_request_id = mr.id
WHERE mr.state = 'merged'
AND mr.merged_at >= ?1
GROUP BY mr.id
ORDER BY churn_score DESC
LIMIT ?2;
```
## Human Output
```
High-Churn MRs (last 90 days)
MR Discussions DiffNotes Files Score Title
!234 12 28 8 60 Refactor auth middleware
!225 8 19 5 40 API versioning v2
!218 6 15 12 39 Database schema migration
!210 5 8 3 21 Update logging framework
```
## Downsides
- High discussion count could mean thorough review, not contention
- Composite scoring weights are arbitrary; needs calibration per team
- Large MRs naturally score higher regardless of contention
## Extensions
- Normalize by file count (discussions per file changed)
- Compare against team averages (flag outliers, not absolute values)
- `lore churn --author alice` — which of alice's MRs generate the most discussion?

# MR-to-Issue Closure Gap
- **Command:** `lore closure-gaps`
- **Confidence:** 88%
- **Tier:** 2
- **Status:** proposed
- **Effort:** low — single join query
## What
Find entity_references where reference_type='closes' AND the target issue is still
open AND the source MR is merged. These represent broken auto-close links where a
merge should have closed an issue but didn't.
## Why
Simple, definitive, actionable. If a merged MR says "closes #42" but #42 is still
open, something is wrong. Either auto-close failed (wrong target branch), the
reference was incorrect, or the issue needs manual attention.
## Data Required
All exists today:
- `entity_references` (reference_type='closes')
- `merge_requests` (state='merged')
- `issues` (state='opened')
## Implementation Sketch
```sql
SELECT
mr.iid as mr_iid,
mr.title as mr_title,
mr.merged_at,
mr.target_branch,
i.iid as issue_iid,
i.title as issue_title,
i.state as issue_state,
p.path_with_namespace
FROM entity_references er
JOIN merge_requests mr ON er.source_entity_type = 'merge_request'
AND er.source_entity_id = mr.id
JOIN issues i ON er.target_entity_type = 'issue'
AND er.target_entity_id = i.id
JOIN projects p ON er.project_id = p.id
WHERE er.reference_type = 'closes'
AND mr.state = 'merged'
AND i.state = 'opened';
```
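The likely-cause flag mentioned under Extensions could be a small heuristic layered on this query's columns; the label strings here are illustrative:

```python
def likely_cause(target_branch: str, default_branch: str, cross_project: bool) -> str:
    """Heuristic label for why an auto-close may not have fired (labels assumed)."""
    if target_branch != default_branch:
        # GitLab auto-close only fires on merges to the default branch.
        return "branch-mismatch"
    if cross_project:
        return "cross-project-reference"
    return "needs-manual-attention"
```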
## Human Output
```
Closure Gaps — merged MRs that didn't close their referenced issues
group/backend !234 merged 3d ago → #42 still OPEN
"Refactor auth middleware" should have closed "Login timeout bug"
Target branch: develop (default: main) — possible branch mismatch
group/frontend !45 merged 1w ago → #38 still OPEN
"Update dashboard" should have closed "Dashboard layout broken"
```
## Downsides
- Could be intentional (MR merged to wrong branch, issue tracked across branches)
- Cross-project references may not be resolvable if target project not synced
- GitLab auto-close only works when merging to default branch
## Extensions
- Flag likely cause: branch mismatch (target_branch != project.default_branch)
- `lore closure-gaps --auto-close` — actually close the issues via API (dangerous, needs confirmation)

`docs/ideas/collaboration.md`
# Author Collaboration Network
- **Command:** `lore collaboration [--since <date>]`
- **Confidence:** 70%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — self-join on notes, graph construction
## What
Build a weighted graph of author pairs: (author_A, author_B, weight) where weight =
number of times A reviewed B's MR + B reviewed A's MR + they both commented on the
same entity.
## Why
Reveals team structure empirically. Shows who collaborates across team boundaries
and where knowledge transfer happens. Useful for re-orgs, onboarding planning,
and identifying isolated team members.
## Data Required
All exists today:
- `merge_requests` (author_username)
- `notes` (author_username, note_type='DiffNote')
- `discussions` (for co-participation)
## Implementation Sketch
```sql
-- Review relationships: who reviews whose MRs
SELECT
mr.author_username as author,
n.author_username as reviewer,
COUNT(*) as review_count
FROM merge_requests mr
JOIN discussions d ON d.merge_request_id = mr.id
JOIN notes n ON n.discussion_id = d.id
WHERE n.note_type = 'DiffNote'
AND n.is_system = 0
AND n.author_username != mr.author_username
AND mr.created_at >= ?1
GROUP BY mr.author_username, n.author_username;
-- Co-participation: who comments on the same entities
WITH entity_participants AS (
SELECT
COALESCE(d.issue_id, d.merge_request_id) as entity_id,
d.noteable_type,
n.author_username
FROM discussions d
JOIN notes n ON n.discussion_id = d.id
WHERE n.is_system = 0
AND n.created_at >= ?1
)
SELECT
a.author_username as person_a,
b.author_username as person_b,
COUNT(DISTINCT a.entity_id) as shared_entities
FROM entity_participants a
JOIN entity_participants b
ON a.entity_id = b.entity_id
AND a.noteable_type = b.noteable_type
AND a.author_username < b.author_username -- avoid duplicates
GROUP BY a.author_username, b.author_username;
```
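Stitching the two result sets into the node/edge shape shown below could look like this sketch; the tuple layouts mirror the SELECT column order, and merging review edges into undirected pairs is an assumption:

```python
from collections import defaultdict

def build_graph(review_rows, shared_rows):
    """Merge review and co-participation query results into a graph dict.

    review_rows: (author, reviewer, review_count)
    shared_rows: (person_a, person_b, shared_entities)
    """
    edges = defaultdict(lambda: {"reviews": 0, "co_participated": 0})
    for author, reviewer, n in review_rows:
        # Collapse A-reviews-B and B-reviews-A into one undirected edge.
        edges[tuple(sorted((author, reviewer)))]["reviews"] += n
    for a, b, n in shared_rows:
        edges[tuple(sorted((a, b)))]["co_participated"] += n
    nodes = sorted({u for pair in edges for u in pair})
    return {
        "nodes": nodes,
        "edges": [
            {"source": a, "target": b, **w} for (a, b), w in sorted(edges.items())
        ],
    }
```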
## Output Formats
### JSON (for further analysis)
```json
{
"nodes": ["alice", "bob", "charlie"],
"edges": [
{ "source": "alice", "target": "bob", "reviews": 15, "co_participated": 8 },
{ "source": "bob", "target": "charlie", "reviews": 3, "co_participated": 12 }
]
}
```
### Human
```
Collaboration Network (last 90 days)
alice <-> bob 15 reviews, 8 shared discussions [strong]
bob <-> charlie 3 reviews, 12 shared discussions [moderate]
alice <-> charlie 1 review, 2 shared discussions [weak]
dave <-> (none) 0 reviews, 0 shared discussions [isolated]
```
## Downsides
- Interpretation requires context; high collaboration might mean dependency
- Doesn't capture collaboration outside GitLab
- Self-join can be slow with many notes
## Extensions
- `lore collaboration --format dot` — GraphViz network diagram
- `lore collaboration --isolated` — find team members with no collaboration edges
- Team boundary detection via graph clustering algorithms

# Contributor Heatmap
- **Command:** `lore contributors [--since <date>]`
- **Confidence:** 88%
- **Tier:** 2
- **Status:** proposed
- **Effort:** medium — multiple aggregation queries
## What
Rank team members by activity across configurable time windows (7d, 30d, 90d). Shows
issues authored, MRs authored, MRs merged, review comments made, discussions
participated in.
## Why
Team leads constantly ask "who's been active?" or "who's contributing to reviews?"
This answers it from local data without GitLab Premium analytics. Also useful for
identifying team members who may be overloaded or disengaged.
## Data Required
All exists today:
- `issues` (author_username, created_at)
- `merge_requests` (author_username, created_at, merged_at)
- `notes` (author_username, created_at, note_type, is_system)
- `discussions` (for participation counting)
## Implementation Sketch
```sql
-- Combined activity per author
WITH activity AS (
SELECT author_username, 'issue_authored' as activity_type, created_at
FROM issues WHERE created_at >= ?1
UNION ALL
SELECT author_username, 'mr_authored', created_at
FROM merge_requests WHERE created_at >= ?1
UNION ALL
SELECT author_username, 'mr_merged', merged_at
FROM merge_requests WHERE merged_at >= ?1 AND state = 'merged'
UNION ALL
SELECT author_username, 'review_comment', created_at
FROM notes WHERE created_at >= ?1 AND note_type = 'DiffNote' AND is_system = 0
UNION ALL
SELECT author_username, 'discussion_comment', created_at
FROM notes WHERE created_at >= ?1 AND (note_type IS NULL OR note_type != 'DiffNote') AND is_system = 0
)
SELECT
author_username,
COUNT(*) FILTER (WHERE activity_type = 'issue_authored') as issues,
COUNT(*) FILTER (WHERE activity_type = 'mr_authored') as mrs_authored,
COUNT(*) FILTER (WHERE activity_type = 'mr_merged') as mrs_merged,
COUNT(*) FILTER (WHERE activity_type = 'review_comment') as reviews,
COUNT(*) FILTER (WHERE activity_type = 'discussion_comment') as comments,
COUNT(*) as total
FROM activity
GROUP BY author_username
ORDER BY total DESC;
```
Note: the FILTER clause on aggregates requires SQLite 3.30+; on older builds use SUM(CASE WHEN ... THEN 1 ELSE 0 END).
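For portability across SQLite versions, the FILTER clauses can be rewritten as conditional sums. A runnable sketch against an in-memory database, with the schema reduced to the columns used here:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE issues (author_username TEXT, created_at INTEGER);
    INSERT INTO issues VALUES ('alice', 200), ('alice', 300), ('bob', 100);
""")
# SUM(CASE ...) counts matching rows, equivalent to COUNT(*) FILTER (WHERE ...)
rows = conn.execute("""
    SELECT author_username,
           SUM(CASE WHEN created_at >= ?1 THEN 1 ELSE 0 END) AS recent
    FROM issues
    GROUP BY author_username
    ORDER BY recent DESC
""", (150,)).fetchall()
```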
## Human Output
```
Contributors (last 30 days)
Username Issues MRs Merged Reviews Comments Total
alice 3 8 7 23 12 53
bob 1 5 4 31 8 49
charlie 5 3 2 4 15 29
dave 0 1 0 2 3 6
```
## Downsides
- Could be used for surveillance; frame as team health, not individual tracking
- Activity volume != productivity (one thoughtful review > ten "LGTM"s)
- Doesn't capture work done outside GitLab
## Extensions
- `lore contributors --project group/backend` — scoped to project
- `lore contributors --type reviews` — focus on review activity only
- Trend comparison: `--compare 30d,90d` shows velocity changes

`docs/ideas/decisions.md`
# Decision Archaeology
- **Command:** `lore decisions <query>`
- **Confidence:** 82%
- **Tier:** 2
- **Status:** proposed
- **Effort:** medium — search pipeline + regex pattern matching on notes
## What
Search for discussion notes that contain decision-making language. Use the existing
search pipeline but boost notes containing patterns like "decided", "agreed",
"will go with", "tradeoff", "because we", "rationale", "the approach is", "we chose".
Return the surrounding discussion context.
## Why
This is gitlore's unique value proposition — "why was this decision made?" is the
question that no other tool answers well. Architecture Decision Records are rarely
maintained; the real decisions live in discussion threads. This mines them.
## Data Required
All exists today:
- `documents` + search pipeline (for finding relevant entities)
- `notes` (body text for pattern matching)
- `discussions` (for thread context)
## Implementation Sketch
```
1. Run existing hybrid search to find entities matching the query topic
2. For each result entity, query all discussion notes
3. Score each note against decision-language patterns:
- Strong signals (weight 3): "decided to", "agreed on", "the decision is",
"we will go with", "approved approach"
- Medium signals (weight 2): "tradeoff", "because", "rationale", "chosen",
"opted for", "rejected", "alternative"
- Weak signals (weight 1): "should we", "proposal", "option A", "option B",
"pros and cons"
4. Return notes scoring above threshold, with surrounding context (previous and
next note in discussion thread)
5. Sort by: search relevance * decision score
```
### Decision Patterns (regex)
```rust
const STRONG_PATTERNS: &[&str] = &[
r"(?i)\b(decided|agreed|approved)\s+(to|on|that)\b",
r"(?i)\bthe\s+(decision|approach|plan)\s+is\b",
r"(?i)\bwe('ll| will| are going to)\s+(go with|use|implement)\b",
r"(?i)\blet'?s\s+(go with|use|do)\b",
];
const MEDIUM_PATTERNS: &[&str] = &[
r"(?i)\b(tradeoff|trade-off|rationale|because we|opted for)\b",
r"(?i)\b(rejected|ruled out|won't work|not viable)\b",
r"(?i)\b(chosen|selected|picked)\b.{0,20}\b(over|instead of)\b",
];
```
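A scorer applying these weighted patterns might look like the following sketch; the pattern lists are abridged from the Rust constants above, and the per-match weights follow the implementation sketch (strong = 3, medium = 2):

```python
import re

# Abridged from the STRONG_PATTERNS / MEDIUM_PATTERNS constants above.
STRONG = [
    r"(?i)\b(decided|agreed|approved)\s+(to|on|that)\b",
    r"(?i)\bthe\s+(decision|approach|plan)\s+is\b",
]
MEDIUM = [
    r"(?i)\b(tradeoff|trade-off|rationale|because we|opted for)\b",
    r"(?i)\b(rejected|ruled out|won't work|not viable)\b",
]

def decision_score(body: str) -> int:
    # Each strong match contributes 3, each medium match 2.
    score = 0
    for pat in STRONG:
        score += 3 * len(re.findall(pat, body))
    for pat in MEDIUM:
        score += 2 * len(re.findall(pat, body))
    return score
```

Notes scoring above a threshold would be returned with their surrounding thread context, sorted by search relevance times this score.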
## Human Output
```
Decisions related to "authentication"
group/backend !234 — "Refactor auth middleware"
Discussion #a1b2c3 (alice, 3w ago):
"We decided to use JWT with short-lived tokens instead of session cookies.
The tradeoff is more complexity in the refresh flow, but we get stateless
auth which scales better."
Decision confidence: HIGH (3 strong pattern matches)
group/backend #42 — "Auth architecture review"
Discussion #d4e5f6 (bob, 2mo ago):
"After discussing with the security team, we'll go with bcrypt for password
hashing. Argon2 is theoretically better but bcrypt has wider library support."
Decision confidence: HIGH (2 strong pattern matches)
```
## Downsides
- Pattern matching is imperfect; may miss decisions phrased differently
- May surface "discussion about deciding" rather than actual decisions
- Non-English discussions won't match
- Requires good search results as input (garbage in, garbage out)
## Extensions
- `lore decisions --recent` — decisions made in last 30 days
- `lore decisions --author alice` — decisions made by specific person
- Export as ADR (Architecture Decision Record) format
- Combine with timeline for chronological decision history

`docs/ideas/digest.md`
# "What Changed?" Digest
- **Command:** `lore digest --since <date>`
- **Confidence:** 93%
- **Tier:** 1
- **Status:** proposed
- **Effort:** medium — multiple queries across event tables, formatting logic
## What
Generate a structured summary of all activity since a given date: issues
opened/closed, MRs merged, labels changed, milestones updated, key discussions.
Group by project and sort by significance (state changes > merges > label changes >
new comments).
Default `--since` is 1 day (last 24 hours). Supports `7d`, `2w`, `YYYY-MM-DD`.
## Why
"What happened while I was on PTO?" is the most universal developer question. This
is a killer feature that leverages ALL the event data gitlore has ingested. No other
local tool provides this.
## Data Required
All exists today:
- `resource_state_events` (opened/closed/merged/reopened)
- `resource_label_events` (label add/remove)
- `resource_milestone_events` (milestone add/remove)
- `merge_requests` (merged_at for merge events)
- `issues` (created_at for new issues)
- `discussions` (last_note_at for active discussions)
## Implementation Sketch
```
1. Parse --since into ms epoch timestamp
2. Query each event table WHERE created_at >= since
3. Query new issues WHERE created_at >= since
4. Query merged MRs WHERE merged_at >= since
5. Query active discussions WHERE last_note_at >= since
6. Group all events by project
7. Within each project, sort by: state changes first, then merges, then labels
8. Format as human-readable sections or robot JSON
```
### SQL Queries
```sql
-- State changes in window
SELECT rse.*, i.iid as issue_iid, mr.iid as mr_iid,
COALESCE(i.title, mr.title) as title,
p.path_with_namespace
FROM resource_state_events rse
LEFT JOIN issues i ON rse.issue_id = i.id
LEFT JOIN merge_requests mr ON rse.merge_request_id = mr.id
JOIN projects p ON rse.project_id = p.id
WHERE rse.created_at >= ?1
ORDER BY rse.created_at DESC;
-- Newly merged MRs
SELECT mr.iid, mr.title, mr.author_username, mr.merged_at,
p.path_with_namespace
FROM merge_requests mr
JOIN projects p ON mr.project_id = p.id
WHERE mr.merged_at >= ?1
ORDER BY mr.merged_at DESC;
-- New issues
SELECT i.iid, i.title, i.author_username, i.created_at,
p.path_with_namespace
FROM issues i
JOIN projects p ON i.project_id = p.id
WHERE i.created_at >= ?1
ORDER BY i.created_at DESC;
```
## Human Output Format
```
=== What Changed (last 7 days) ===
group/backend (12 events)
Merged:
!234 Refactor auth middleware (alice, 2d ago)
!231 Fix connection pool leak (bob, 5d ago)
Closed:
#89 Login timeout on slow networks (closed by alice, 3d ago)
Opened:
#95 Rate limiting returns 500 (charlie, 1d ago)
Labels:
#90 +priority::high (dave, 4d ago)
group/frontend (3 events)
Merged:
!45 Update dashboard layout (eve, 6d ago)
```
## Robot Mode Output
```json
{
"ok": true,
"data": {
"since": "2025-01-20T00:00:00Z",
"projects": [
{
"path": "group/backend",
"merged": [ { "iid": 234, "title": "...", "author": "alice" } ],
"closed": [ { "iid": 89, "title": "...", "actor": "alice" } ],
"opened": [ { "iid": 95, "title": "...", "author": "charlie" } ],
"label_changes": [ { "iid": 90, "label": "priority::high", "action": "add" } ]
}
],
"summary": { "total_events": 15, "projects_active": 2 }
}
}
```
## Downsides
- Can be overwhelming for very active repos; needs `--limit` per category
- Doesn't capture nuance (a 200-comment MR merge is more significant than a typo fix)
- Only shows what gitlore has synced; stale data = stale digest
## Extensions
- `lore digest --author alice` — personal activity digest
- `lore digest --project group/backend` — single project scope
- `lore digest --format markdown` — paste-ready for Slack/email
- Combine with weekly-digest for scheduled summaries

`docs/ideas/experts.md`
# Who Knows About X?
- **Command:** `lore experts <path-or-topic>`
- **Confidence:** 92%
- **Tier:** 1
- **Status:** proposed
- **Effort:** medium — two query paths (file-based, topic-based)
## What
Given a file path, find people who have authored MRs touching that file, left
DiffNotes on that file, or discussed issues referencing that file. Given a topic
string, use search to find relevant entities then extract the active participants.
## Why
"Who should I ask about the auth module?" is one of the most common questions in
large teams. This answers it empirically from actual contribution and review data.
No guessing, no out-of-date wiki pages.
## Data Required
All exists today:
- `mr_file_changes` (new_path, merge_request_id) — who changed the file
- `notes` (position_new_path, author_username) — who reviewed the file
- `merge_requests` (author_username) — MR authorship
- `documents` + search pipeline — for topic-based queries
- `discussions` + `notes` — for participant extraction
## Implementation Sketch
### Path Mode: `lore experts src/auth/`
```
1. Query mr_file_changes WHERE new_path LIKE 'src/auth/%'
2. Join merge_requests to get author_username for each MR
3. Query notes WHERE position_new_path LIKE 'src/auth/%'
4. Collect all usernames with activity counts
5. Rank by: MR authorship (weight 3) + DiffNote authorship (weight 2) + discussion participation (weight 1)
6. Apply recency decay (recent activity weighted higher)
```
### Topic Mode: `lore experts "authentication timeout"`
```
1. Run existing hybrid search for the topic
2. Collect top N document results
3. For each document, extract author_username
4. For each document's entity, query discussions and collect note authors
5. Rank by frequency and recency
```
### SQL (Path Mode)
```sql
-- Authors who changed files matching pattern
SELECT mr.author_username, COUNT(*) as changes, MAX(mr.merged_at) as last_active
FROM mr_file_changes mfc
JOIN merge_requests mr ON mfc.merge_request_id = mr.id
WHERE mfc.new_path LIKE ?1
AND mr.state = 'merged'
GROUP BY mr.author_username
ORDER BY changes DESC;
-- Reviewers who commented on files matching pattern
SELECT n.author_username, COUNT(*) as reviews, MAX(n.created_at) as last_active
FROM notes n
WHERE n.position_new_path LIKE ?1
AND n.note_type = 'DiffNote'
AND n.is_system = 0
GROUP BY n.author_username
ORDER BY reviews DESC;
```
## Human Output Format
```
Experts for: src/auth/
  alice     12 changes,  8 reviews   (last active 3d ago)    [top contributor]
  bob        3 changes, 15 reviews   (last active 1d ago)    [top reviewer]
  charlie    5 changes,  2 reviews   (last active 2w ago)
  dave       1 change,   0 reviews   (last active 3mo ago)   [stale]
```
## Robot Mode Output
```json
{
  "ok": true,
  "data": {
    "query": "src/auth/",
    "query_type": "path",
    "experts": [
      {
        "username": "alice",
        "changes": 12,
        "reviews": 8,
        "discussions": 3,
        "score": 62,
        "last_active": "2025-01-25T10:00:00Z",
        "role": "top_contributor"
      }
    ]
  }
}
```
## Downsides
- Historical data may be stale (people leave teams, change roles)
- Path mode requires `mr_file_changes` to be populated (Gate 4 ingestion)
- Topic mode quality depends on search quality
- Doesn't account for org chart / actual ownership
## Extensions
- `lore experts --since 90d` — recency filter
- `lore experts --min-activity 3` — noise filter
- Combine with `lore silos` to highlight when an expert is the ONLY expert

docs/ideas/graph.md
# Entity Relationship Explorer
- **Command:** `lore graph <entity-type> <iid>`
- **Confidence:** 80%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — BFS traversal (similar to timeline expand), output formatting
## What
Given an issue or MR, traverse `entity_references` and display all connected
entities with relationship types and depths. Output as tree, JSON, or Mermaid diagram.
## Why
The entity_references graph is already built (Gate 2) but has no dedicated
exploration command. Timeline shows events over time; this shows the relationship
structure. "What's connected to this issue?" is a different question from "what
happened to this issue?"
## Data Required
All exists today:
- `entity_references` (source/target entity, reference_type)
- `issues` / `merge_requests` (for entity context)
- Timeline expand stage already implements BFS over this graph
## Implementation Sketch
```
1. Resolve entity type + iid to local ID
2. BFS over entity_references:
   - Follow source→target AND target→source (bidirectional)
   - Track depth (--depth flag, default 2)
   - Track reference_type for edge labels
3. Hydrate each discovered entity with title, state, URL
4. Format as tree / JSON / Mermaid
```
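The BFS in step 2 can be sketched over an in-memory edge list. Types and field names here are illustrative; the real implementation would read `entity_references` rows and respect `reference_type` for edge labels:

```rust
use std::collections::{HashMap, VecDeque};

/// Bidirectional BFS over (source, target, reference_type) edges,
/// returning each reachable entity with the depth it was found at.
fn traverse(edges: &[(u64, u64, &str)], start: u64, max_depth: u32) -> HashMap<u64, u32> {
    // Undirected adjacency list: follow source→target and target→source.
    let mut adj: HashMap<u64, Vec<u64>> = HashMap::new();
    for &(src, tgt, _ty) in edges {
        adj.entry(src).or_default().push(tgt);
        adj.entry(tgt).or_default().push(src);
    }
    let mut depths: HashMap<u64, u32> = HashMap::from([(start, 0)]);
    let mut queue: VecDeque<u64> = VecDeque::from([start]);
    while let Some(node) = queue.pop_front() {
        let d = depths[&node];
        if d == max_depth {
            continue; // depth limit: don't expand further
        }
        for &next in adj.get(&node).into_iter().flatten() {
            if !depths.contains_key(&next) {
                depths.insert(next, d + 1);
                queue.push_back(next);
            }
        }
    }
    depths
}
```

The hydration and formatting steps would then look up title/state/URL for each key in the returned map.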
## Human Output (Tree)
```
#42 Login timeout bug (CLOSED)
├── closes ── !234 Refactor auth middleware (MERGED)
│   ├── mentioned ── #38 Connection timeout in auth flow (CLOSED)
│   └── mentioned ── #51 Token refresh improvements (OPEN)
├── related ── #45 Auth module documentation (OPEN)
└── mentioned ── !228 Database migration (MERGED)
    └── closes ── #35 Schema version drift (CLOSED)
```
## Mermaid Output
```mermaid
graph LR
  I42["#42 Login timeout"] -->|closes| MR234["!234 Refactor auth"]
  MR234 -->|mentioned| I38["#38 Connection timeout"]
  MR234 -->|mentioned| I51["#51 Token refresh"]
  I42 -->|related| I45["#45 Auth docs"]
  I42 -->|mentioned| MR228["!228 DB migration"]
  MR228 -->|closes| I35["#35 Schema drift"]
```
## Downsides
- Overlaps somewhat with timeline (but different focus: structure vs chronology)
- High fan-out for popular entities (need depth + limit controls)
- Unresolved cross-project references appear as dead ends
## Extensions
- `lore graph --format dot` — GraphViz DOT output
- `lore graph --format mermaid` — Mermaid diagram
- `lore graph --include-discussions` — show discussion threads as nodes
- Interactive HTML visualization (future web UI)

docs/ideas/hotspots.md
# File Hotspot Report
- **Command:** `lore hotspots [--since <date>]`
- **Confidence:** 85%
- **Tier:** 2
- **Status:** proposed
- **Effort:** low — single query on mr_file_changes (requires Gate 4 population)
## What
Rank files by frequency of appearance in merged MRs over a time window. Show
change_type breakdown (modified vs added vs deleted). Optionally filter by project.
## Why
Hot files are where bugs live. This is a proven engineering metric (see "Your Code
as a Crime Scene" by Adam Tornhill). High-churn files deserve extra test coverage,
better documentation, and architectural review.
## Data Required
- `mr_file_changes` (new_path, change_type, merge_request_id) — needs Gate 4 population
- `merge_requests` (merged_at, state='merged')
## Implementation Sketch
```sql
SELECT
  mfc.new_path,
  p.path_with_namespace,
  COUNT(*) as total_changes,
  SUM(CASE WHEN mfc.change_type = 'modified' THEN 1 ELSE 0 END) as modifications,
  SUM(CASE WHEN mfc.change_type = 'added' THEN 1 ELSE 0 END) as additions,
  SUM(CASE WHEN mfc.change_type = 'deleted' THEN 1 ELSE 0 END) as deletions,
  SUM(CASE WHEN mfc.change_type = 'renamed' THEN 1 ELSE 0 END) as renames,
  COUNT(DISTINCT mr.author_username) as unique_authors
FROM mr_file_changes mfc
JOIN merge_requests mr ON mfc.merge_request_id = mr.id
JOIN projects p ON mfc.project_id = p.id
WHERE mr.state = 'merged'
  AND mr.merged_at >= ?1
GROUP BY mfc.new_path, p.path_with_namespace
ORDER BY total_changes DESC
LIMIT ?2;
```
## Human Output
```
File Hotspots (last 90 days, top 20)
File                              Changes   Authors   Type Breakdown
src/auth/middleware.rs                 18         4   14 mod, 3 add, 1 del
src/api/routes.rs                      15         3   12 mod, 2 add, 1 rename
src/db/migrations.rs                   12         2   8 mod, 4 add
tests/integration/auth_test.rs         11         3   9 mod, 2 add
```
## Downsides
- Requires `mr_file_changes` to be populated (Gate 4 ingestion)
- Doesn't distinguish meaningful changes from trivial ones (formatting, imports)
- Configuration files (CI, Cargo.toml) will rank high but aren't risky
## Extensions
- `lore hotspots --exclude "*.toml,*.yml"` — filter out config files
- `lore hotspots --dir src/auth/` — scope to directory
- Combine with `lore silos` for risk scoring: high churn + bus factor 1 = critical
- Complexity trend: correlate with discussion count (churn + many discussions = problematic)

docs/ideas/idle.md
# Idle Work Detector
- **Command:** `lore idle [--days <N>] [--labels <pattern>]`
- **Confidence:** 73%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — label event querying with configurable patterns
## What
Find entities that received an "in progress" or similar label but have had no
discussion activity for N days. Cross-reference with assignee to show who might
have forgotten about something.
## Why
Forgotten WIP is invisible waste. Developers start work, get pulled to something
urgent, and the original task sits idle. This makes it visible before it becomes
a problem.
## Data Required
All exists today:
- `resource_label_events` (label_name, action='add', created_at)
- `discussions` (last_note_at for entity activity)
- `issues` / `merge_requests` (state, assignees)
- `issue_assignees` / `mr_assignees`
## Implementation Sketch
```
1. Query resource_label_events for labels matching "in progress" patterns
   Default patterns: "in-progress", "in_progress", "doing", "wip",
   "workflow::in-progress", "status::in-progress"
   Configurable via --labels flag
2. For each entity with an "in progress" label still applied:
   a. Check if the label was subsequently removed (if so, skip)
   b. Get last_note_at from discussions for that entity
   c. Flag if last_note_at is older than threshold
3. Join with assignees for attribution
```
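Step 1's pattern matching could be as simple as the following sketch. The default patterns are the ones listed above; the lowercase normalization is an assumption about how matching should behave:

```rust
/// True if a label name matches one of the default "in progress" patterns.
/// Comparison is case-insensitive; the pattern list would come from
/// config or the --labels flag in the real command.
fn is_wip_label(name: &str) -> bool {
    const PATTERNS: [&str; 6] = [
        "in-progress", "in_progress", "doing", "wip",
        "workflow::in-progress", "status::in-progress",
    ];
    let normalized = name.to_lowercase();
    PATTERNS.iter().any(|p| normalized == *p)
}
```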
## Human Output
```
Idle Work (labeled "in progress" but no activity for 14+ days)

group/backend
  #90 Rate limiting design       assigned to: charlie   idle 18 days
      Last activity: label +priority::high by dave
  #85 Cache invalidation fix     assigned to: alice     idle 21 days
      Last activity: discussion comment by bob

group/frontend
  !230 Dashboard redesign        assigned to: eve       idle 14 days
      Last activity: DiffNote by dave
```
## Downsides
- Requires label naming conventions; no universal standard
- Work may be happening outside GitLab (local branch, design doc)
- "Idle" threshold is subjective; 14 days may be normal for large features
## Extensions
- `lore idle --assignee alice` — personal idle work check
- `lore idle --notify` — generate message templates for nudging owners
- Configurable label patterns in config.json for team-specific workflows

# Cross-Project Impact Graph
- **Command:** `lore impact-graph [--format json|dot|mermaid]`
- **Confidence:** 75%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — aggregation over entity_references, graph output formatting
## What
Aggregate `entity_references` by project pair to produce a weighted adjacency matrix
showing how projects reference each other. Output as JSON, DOT, or Mermaid for
visualization.
## Why
Makes invisible architectural coupling visible. "Backend and frontend repos have
47 cross-references this quarter" tells you about tight coupling that may need
architectural attention.
## Data Required
All exists today:
- `entity_references` (source/target entity IDs)
- `issues` / `merge_requests` (project_id for source/target)
- `projects` (path_with_namespace)
## Implementation Sketch
```sql
-- Project-to-project reference counts
WITH ref_projects AS (
  SELECT
    CASE er.source_entity_type
      WHEN 'issue' THEN i_src.project_id
      WHEN 'merge_request' THEN mr_src.project_id
    END as source_project_id,
    CASE er.target_entity_type
      WHEN 'issue' THEN i_tgt.project_id
      WHEN 'merge_request' THEN mr_tgt.project_id
    END as target_project_id,
    er.reference_type
  FROM entity_references er
  LEFT JOIN issues i_src ON er.source_entity_type = 'issue' AND er.source_entity_id = i_src.id
  LEFT JOIN merge_requests mr_src ON er.source_entity_type = 'merge_request' AND er.source_entity_id = mr_src.id
  LEFT JOIN issues i_tgt ON er.target_entity_type = 'issue' AND er.target_entity_id = i_tgt.id
  LEFT JOIN merge_requests mr_tgt ON er.target_entity_type = 'merge_request' AND er.target_entity_id = mr_tgt.id
  WHERE er.target_entity_id IS NOT NULL -- resolved references only
)
SELECT
  p_src.path_with_namespace as source_project,
  p_tgt.path_with_namespace as target_project,
  rp.reference_type,
  COUNT(*) as weight
FROM ref_projects rp
JOIN projects p_src ON rp.source_project_id = p_src.id
JOIN projects p_tgt ON rp.target_project_id = p_tgt.id
WHERE rp.source_project_id != rp.target_project_id -- cross-project only
GROUP BY p_src.path_with_namespace, p_tgt.path_with_namespace, rp.reference_type
ORDER BY weight DESC;
```
## Output Formats
### Mermaid
```mermaid
graph LR
  Backend -->|closes 23| Frontend
  Backend -->|mentioned 47| Infrastructure
  Frontend -->|mentioned 12| Backend
```
### DOT
```dot
digraph impact {
  "group/backend" -> "group/frontend" [label="closes: 23"];
  "group/backend" -> "group/infra" [label="mentioned: 47"];
}
```
## Downsides
- Requires multiple projects synced; limited value for single-project users
- "Mentioned" references are noisy (high volume, low signal)
- Doesn't capture coupling through shared libraries or APIs (code-level coupling)
## Extensions
- `lore impact-graph --since 90d` — time-scoped coupling analysis
- `lore impact-graph --type closes` — only meaningful reference types
- Include unresolved references to show dependencies on un-synced projects
- Coupling trend: is cross-project coupling increasing over time?

docs/ideas/label-audit.md
# Label Hygiene Audit
- **Command:** `lore label-audit`
- **Confidence:** 82%
- **Tier:** 2
- **Status:** proposed
- **Effort:** low — straightforward aggregation queries
## What
Report on label health:
- Labels used only once (may be typos or abandoned experiments)
- Labels applied and removed within 1 hour (likely mistakes)
- Labels with no active issues/MRs (orphaned)
- Label name collisions across projects (same name, different meaning)
- Labels never used at all (defined but not applied)
## Why
Label sprawl is real and makes filtering useless over time. Teams create labels
ad-hoc and never clean them up. This simple audit surfaces maintenance tasks.
## Data Required
All exists today:
- `labels` (name, project_id)
- `issue_labels` / `mr_labels` (usage counts)
- `resource_label_events` (add/remove pairs for mistake detection)
- `issues` / `merge_requests` (state for "active" filtering)
## Implementation Sketch
```sql
-- Labels used only once
SELECT l.name, p.path_with_namespace, COUNT(*) as usage
FROM labels l
JOIN projects p ON l.project_id = p.id
LEFT JOIN issue_labels il ON il.label_id = l.id
LEFT JOIN mr_labels ml ON ml.label_id = l.id
GROUP BY l.id
HAVING COUNT(il.issue_id) + COUNT(ml.merge_request_id) = 1;
-- Flash labels (applied and removed within 1 hour)
SELECT
  rle1.label_name,
  rle1.created_at as added_at,
  rle2.created_at as removed_at,
  (rle2.created_at - rle1.created_at) / 60000 as minutes_active
FROM resource_label_events rle1
JOIN resource_label_events rle2
  -- IS (null-safe equality) pairs events for both issues and MRs
  ON rle1.issue_id IS rle2.issue_id
  AND rle1.merge_request_id IS rle2.merge_request_id
  AND rle1.label_name = rle2.label_name
  AND rle1.action = 'add'
  AND rle2.action = 'remove'
  AND rle2.created_at > rle1.created_at
  AND (rle2.created_at - rle1.created_at) < 3600000;
-- Unused labels (defined but never applied)
SELECT l.name, p.path_with_namespace
FROM labels l
JOIN projects p ON l.project_id = p.id
LEFT JOIN issue_labels il ON il.label_id = l.id
LEFT JOIN mr_labels ml ON ml.label_id = l.id
WHERE il.issue_id IS NULL AND ml.merge_request_id IS NULL;
```
## Human Output
```
Label Audit

Unused Labels (4):
  group/backend:  deprecated-v1, needs-triage, wontfix-maybe
  group/frontend: old-design

Single-Use Labels (3):
  group/backend:  perf-regression (1 issue)
  group/frontend: ux-debt (1 MR), mobile-only (1 issue)

Flash Labels (applied < 1hr, 2):
  group/backend #90: +priority::critical then -priority::critical (12 min)
  group/backend #85: +blocked then -blocked (5 min)

Cross-Project Collisions (1):
  "needs-review" used in group/backend (32 uses) AND group/frontend (8 uses)
```
## Downsides
- Low glamour; this is janitorial work
- Single-use labels may be legitimate (one-off categorization)
- Cross-project collisions may be intentional (shared vocabulary)
## Extensions
- `lore label-audit --fix` — suggest deletions for unused labels
- Trend: label count over time (is sprawl increasing?)

docs/ideas/label-flow.md
# Label Velocity
- **Command:** `lore label-flow <from-label> <to-label>`
- **Confidence:** 78%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — self-join on resource_label_events, percentile computation
## What
For a given label pair (e.g., "needs-review" to "approved"), compute median and P90
transition times using `resource_label_events`. Shows how fast work moves through
your process labels.
Also supports: single label dwell time (how long does "in-progress" stay applied?).
## Why
Process bottlenecks become quantifiable. "Our code review takes a median of 3 days"
is actionable data for retrospectives and process improvement.
## Data Required
All exists today:
- `resource_label_events` (label_name, action, created_at, issue_id, merge_request_id)
## Implementation Sketch
```sql
-- Label A → Label B transition time
WITH add_a AS (
  SELECT issue_id, merge_request_id, MIN(created_at) as added_at
  FROM resource_label_events
  WHERE label_name = ?1 AND action = 'add'
  GROUP BY issue_id, merge_request_id
),
add_b AS (
  SELECT issue_id, merge_request_id, MIN(created_at) as added_at
  FROM resource_label_events
  WHERE label_name = ?2 AND action = 'add'
  GROUP BY issue_id, merge_request_id
)
SELECT
  (b.added_at - a.added_at) / 3600000.0 as hours_transition
FROM add_a a
JOIN add_b b ON a.issue_id = b.issue_id OR a.merge_request_id = b.merge_request_id
WHERE b.added_at > a.added_at;
```
Then compute percentiles in Rust (median, P75, P90).
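That percentile step could look like this sketch, using the nearest-rank method (one reasonable choice; linear interpolation would also work):

```rust
/// Nearest-rank percentile: the smallest value with at least p% of
/// samples at or below it. `sorted` must be in ascending order.
fn percentile(sorted: &[f64], p: f64) -> f64 {
    let rank = ((p / 100.0) * sorted.len() as f64).ceil() as usize;
    sorted[rank.max(1) - 1]
}

/// Median, P75, P90 over transition times in hours.
fn summarize(mut hours: Vec<f64>) -> (f64, f64, f64) {
    hours.sort_by(|a, b| a.partial_cmp(b).unwrap());
    (
        percentile(&hours, 50.0),
        percentile(&hours, 75.0),
        percentile(&hours, 90.0),
    )
}
```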
## Human Output
```
Label Flow: "needs-review" → "approved"
  Transitions: 42 issues/MRs in last 90 days
  Median:   18.5 hours
  P75:      36.2 hours
  P90:      72.8 hours
  Slowest:  !234 Refactor auth (168 hours)
```
## Downsides
- Only works if teams use label-based workflows consistently
- Labels may be applied out of order or skipped
- Self-join performance could be slow with many events
## Extensions
- `lore label-flow --dwell "in-progress"` — how long does a label stay?
- `lore label-flow --all` — auto-discover common transitions from event data
- Visualization: label state machine with median transition times on edges

# Milestone Risk Report
- **Command:** `lore milestone-risk [title]`
- **Confidence:** 78%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — milestone + issue aggregation with scope change detection
## What
For each active milestone (or a specific one): show total issues, % closed, issues
added after milestone creation (scope creep), issues with no assignee, issues with
overdue due_date. Flag milestones where completion rate is below expected trajectory.
## Why
Milestone health is usually assessed by gut feel. This provides objective signals
from data already ingested. Project managers can spot risks early.
## Data Required
All exists today:
- `milestones` (title, state, due_date)
- `issues` (milestone_id, state, created_at, due_date, assignee)
- `issue_assignees` (for unassigned detection)
## Implementation Sketch
```sql
SELECT
  m.title,
  m.state,
  m.due_date,
  COUNT(*) as total_issues,
  SUM(CASE WHEN i.state = 'closed' THEN 1 ELSE 0 END) as closed,
  SUM(CASE WHEN i.state = 'opened' THEN 1 ELSE 0 END) as open,
  SUM(CASE WHEN i.created_at > m.created_at THEN 1 ELSE 0 END) as scope_creep,
  -- EXISTS avoids the row fan-out a LEFT JOIN on issue_assignees
  -- would cause for issues with multiple assignees
  SUM(CASE WHEN i.state = 'opened'
           AND NOT EXISTS (SELECT 1 FROM issue_assignees ia WHERE ia.issue_id = i.id)
           THEN 1 ELSE 0 END) as unassigned,
  SUM(CASE WHEN i.due_date < DATE('now') AND i.state = 'opened' THEN 1 ELSE 0 END) as overdue
FROM milestones m
JOIN issues i ON i.milestone_id = m.id
WHERE m.state = 'active'
GROUP BY m.id;
```
Note: comparing `i.created_at` to `m.created_at` only approximates when an issue
joined the milestone. For precise scope-change detection, use
`resource_milestone_events`, which records when issues were added to or removed
from milestones.
## Human Output
```
Milestone Risk Report

v2.0 (due Feb 15, 2025)
  Progress: 14/20 closed (70%)
  Scope:    +3 issues added after milestone start
  Risks:    2 issues overdue, 1 issue unassigned
  Status:   ON TRACK (70% complete, 60% time elapsed)

v2.1 (due Mar 30, 2025)
  Progress: 2/15 closed (13%)
  Scope:    +8 issues added after milestone start
  Risks:    5 issues unassigned
  Status:   AT RISK (13% complete, scope still growing)
```
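The ON TRACK / AT RISK status line compares the completion fraction against elapsed time. The exact rule is not fixed by this proposal; the `slack` tolerance below is an illustrative assumption:

```rust
/// Trajectory check behind the Status line: AT RISK when the closed
/// fraction trails the elapsed-time fraction by more than `slack`.
fn milestone_status(closed: u32, total: u32, time_elapsed: f64, slack: f64) -> &'static str {
    // Guard against empty milestones with max(1).
    let done = closed as f64 / total.max(1) as f64;
    if done + slack >= time_elapsed { "ON TRACK" } else { "AT RISK" }
}
```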
## Downsides
- Milestone semantics vary wildly between teams
- "Scope creep" detection is noisy if teams batch-add issues to milestones
- due_date comparison assumes consistent timezone handling
## Extensions
- `lore milestone-risk --history` — show scope changes over time
- Velocity estimation: at current closure rate, will the milestone finish on time?
- Combine with label-flow for "how fast are milestone issues moving through workflow"

docs/ideas/mr-pipeline.md
# MR Pipeline Efficiency
- **Command:** `lore mr-pipeline [--since <date>]`
- **Confidence:** 78%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — builds on bottleneck detector with more stages
## What
Track the full MR lifecycle: creation, first review, all reviews complete (threads
resolved), approval, merge. Compute time spent in each stage across all MRs.
Identify which stage is the bottleneck.
## Why
"Our merge process is slow" is vague. This breaks it into stages so teams can target
the actual bottleneck. Maybe creation-to-review is fast but review-to-merge is slow
(merge queue issues). Maybe first review is fast but resolution takes forever
(contentious code).
## Data Required
All exists today:
- `merge_requests` (created_at, merged_at)
- `notes` (note_type='DiffNote', created_at, author_username)
- `discussions` (resolved, resolvable, merge_request_id)
- `resource_state_events` (state changes with timestamps)
## Implementation Sketch
For each merged MR, compute:
1. **Created → First Review**: MIN(DiffNote.created_at) - mr.created_at
2. **First Review → All Resolved**: MAX(discussion.resolved_at) - MIN(DiffNote.created_at)
3. **All Resolved → Merged**: mr.merged_at - MAX(discussion.resolved_at)
Note: "resolved_at" isn't directly stored but can be approximated from the last
note in resolved discussions, or from state events.
## Human Output
```
MR Pipeline (last 30 days, 24 merged MRs)
Stage                        Median    P75     P90
Created → First Review         4.2h   12.1h   28.3h
First Review → Resolved        8.1h   24.5h   72.0h   <-- BOTTLENECK
Resolved → Merged              0.5h    1.2h    3.1h
Total (Created → Merged)      18.4h   48.2h   96.1h
Biggest bottleneck: Review resolution (median 8.1h)
Suggestion: Consider breaking large MRs into smaller reviewable chunks
```
## Downsides
- "Resolved" timestamp approximation may be inaccurate
- Pipeline assumes linear flow; real MRs have back-and-forth cycles
- Draft MRs skew metrics (created early, reviewed late intentionally)
## Extensions
- `lore mr-pipeline --exclude-drafts` — cleaner metrics
- Per-project comparison: which project has the fastest pipeline?
- Trend line: weekly pipeline speed over time
- Break down by MR size (files changed) to normalize

# Multi-Project Ergonomics
- **Confidence:** 90%
- **Tier:** 1
- **Status:** proposed
- **Effort:** medium (multiple small improvements that compound)
## The Problem
Every command that touches project-scoped data requires `-p group/subgroup/project`
to disambiguate. For users with 5+ projects synced, this is:
- Repetitive: typing `-p infra/platform/auth-service` on every query
- Error-prone: mistyping long paths
- Discoverable only by failure: you don't know you need `-p` until you hit an
ambiguous error
The fuzzy matching in `resolve_project` is already good (suffix, substring,
case-insensitive) but it only kicks in on the `-p` value itself. There's no way to
set a default, group projects, or scope a whole session.
## Proposed Improvements
### 1. Project Aliases in Config
Let users define short aliases for long project paths.
```json
{
  "projects": [
    { "path": "infra/platform/auth-service", "alias": "auth" },
    { "path": "infra/platform/billing-service", "alias": "billing" },
    { "path": "frontend/customer-portal", "alias": "portal" },
    { "path": "frontend/admin-dashboard", "alias": "admin" }
  ]
}
```
Then: `lore issues -p auth` resolves via alias before falling through to fuzzy match.
**Implementation:** Add optional `alias` field to `ProjectConfig`. In
`resolve_project`, check aliases before the existing exact/suffix/substring cascade.
```rust
#[derive(Debug, Clone, Deserialize)]
pub struct ProjectConfig {
    pub path: String,
    #[serde(default)]
    pub alias: Option<String>,
}
```
Resolution order becomes:
1. Exact alias match (new)
2. Exact path match
3. Case-insensitive path match
4. Suffix match
5. Substring match
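That resolution order reduces to a chain of `find` calls. A minimal sketch (the `ProjectConfig` shape follows the snippet above; a real `resolve_project` would also handle ambiguous matches):

```rust
pub struct ProjectConfig {
    pub path: String,
    pub alias: Option<String>,
}

/// Alias-first resolution, then the existing exact / case-insensitive /
/// suffix / substring cascade. Returns the first match in priority order.
fn resolve<'a>(query: &str, projects: &'a [ProjectConfig]) -> Option<&'a ProjectConfig> {
    projects
        .iter()
        .find(|p| p.alias.as_deref() == Some(query))
        .or_else(|| projects.iter().find(|p| p.path == query))
        .or_else(|| projects.iter().find(|p| p.path.eq_ignore_ascii_case(query)))
        .or_else(|| projects.iter().find(|p| p.path.ends_with(query)))
        .or_else(|| projects.iter().find(|p| p.path.contains(query)))
}
```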
### 2. Default Project (`LORE_PROJECT` env var)
Set a default project for your shell session so you don't need `-p` at all.
```bash
export LORE_PROJECT=auth
lore issues # scoped to auth-service
lore mrs --state opened # scoped to auth-service
lore search "timeout bug" # scoped to auth-service
lore issues -p billing # explicit -p overrides the env var
```
**Implementation:** In every command that accepts `-p`, fall back to
`std::env::var("LORE_PROJECT")` when the flag is absent. The `-p` flag always wins.
Could also support a config-level default:
```json
{
  "defaultProject": "auth"
}
```
Precedence: CLI flag > env var > config default > (no filter).
### 3. `lore use <project>` — Session Context Switcher
A command that sets `LORE_PROJECT` for the current shell by writing to a dotfile.
```bash
lore use auth
# writes ~/.local/state/lore/current-project containing "auth"
lore issues # reads current-project file, scopes to auth
lore use --clear # removes the file, back to all-project mode
lore use # shows current project context
```
This is similar to `kubectl config use-context`, `nvm use`, or `tfenv use`.
**Implementation:** Write a one-line file at a known state path. Each command reads
it as the lowest-priority default (below env var and CLI flag).
Precedence: CLI flag > env var > `lore use` state file > config default > (no filter).
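That precedence chain maps directly onto `Option::or`. A sketch (parameter names are illustrative):

```rust
/// Precedence: CLI flag > env var > `lore use` state file > config default.
/// Returns None when nothing is set (all-project mode).
fn effective_project(
    cli_flag: Option<&str>,
    env_var: Option<&str>,
    use_file: Option<&str>,
    config_default: Option<&str>,
) -> Option<String> {
    cli_flag
        .or(env_var)
        .or(use_file)
        .or(config_default)
        .map(str::to_string)
}
```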
### 4. `lore projects` — Project Listing and Discovery
A dedicated command to see what's synced, with aliases and activity stats.
```bash
$ lore projects
Alias     Path                              Issues   MRs   Last Sync
auth      infra/platform/auth-service          142    87   2h ago
billing   infra/platform/billing-service        56    34   2h ago
portal    frontend/customer-portal             203   112   2h ago
admin     frontend/admin-dashboard              28    15   3d ago
-         data/ml-pipeline                      89    45   2h ago
```
Robot mode returns the same as JSON with alias, path, counts, and last sync time.
**Implementation:** Query `projects` joined with `COUNT(issues)`, `COUNT(mrs)`,
and `MAX(sync_runs.finished_at)`. Overlay aliases from config.
### 5. Project Groups in Config
Let users define named groups of projects for batch scoping.
```json
{
  "projectGroups": {
    "backend": ["auth", "billing", "data/ml-pipeline"],
    "frontend": ["portal", "admin"],
    "all-infra": ["auth", "billing"]
  }
}
```
Then: `lore issues -p @backend` (or `--group backend`) queries across all projects
in the group.
**Implementation:** When `-p` value starts with `@`, look up the group and resolve
each member project. Pass as a `Vec<i64>` of project IDs to the query layer.
This is especially powerful for:
- `lore search "auth bug" -p @backend` — search across related repos
- `lore digest --since 7d -p @frontend` — team-scoped activity digest
- `lore timeline "deployment" -p @all-infra` — cross-repo timeline
### 6. Git-Aware Project Detection
When running `lore` from inside a git repo that matches a synced project, auto-scope
to that project without any flags.
```bash
cd ~/code/auth-service
lore issues # auto-detects this is infra/platform/auth-service
```
**Implementation:** Read `.git/config` for the remote URL, extract the project path,
check if it matches a synced project. Only activate when exactly one project matches.
Detection logic:
```
1. Check if cwd is inside a git repo (find .git)
2. Parse git remote origin URL
3. Extract path component (e.g., "infra/platform/auth-service.git" → "infra/platform/auth-service")
4. Match against synced projects
5. If exactly one match, use as implicit -p
6. If ambiguous or no match, do nothing (fall through to normal behavior)
```
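Step 3 of the detection logic could be sketched as follows. This only covers the two common remote shapes (scp-like SSH and HTTP(S)); hostnames are illustrative:

```rust
/// Extract "group/subgroup/project" from a git remote URL.
/// Unrecognized shapes yield None (detection then falls through).
fn project_path_from_remote(url: &str) -> Option<String> {
    let path = if let Some(rest) = url.strip_prefix("git@") {
        // git@gitlab.example.com:infra/platform/auth-service.git
        rest.split_once(':')?.1
    } else if let Some((_, rest)) = url.split_once("://") {
        // https://gitlab.example.com/infra/platform/auth-service.git
        rest.split_once('/')?.1
    } else {
        return None;
    };
    // Drop a trailing slash and the ".git" suffix, if present.
    Some(path.trim_end_matches('/').trim_end_matches(".git").to_string())
}
```

The result would then be matched against synced projects, activating only on exactly one hit.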
Precedence: CLI flag > env var > `lore use` > config default > git detection > (no filter).
This is similar to how `gh` (GitHub CLI) auto-detects the repo you're in.
### 7. Prompt Integration / Shell Function
Provide a shell function that shows the current project context in the prompt.
```bash
# In .bashrc / .zshrc
eval "$(lore completions zsh)"
PROMPT='$(lore-prompt)%~ %# '
```
Output: `[lore:auth] ~/code/auth-service %`
Shows which project `lore` commands will scope to, using the same precedence chain.
Helps users understand what context they're in before running a query.
### 8. Short Project References in Output
Once aliases exist, use them everywhere in output for brevity:
**Before:**
```
infra/platform/auth-service#42 Login timeout bug
infra/platform/auth-service!234 Refactor auth middleware
```
**After:**
```
auth#42 Login timeout bug
auth!234 Refactor auth middleware
```
With `--full-paths` flag to get the verbose form when needed.
## Combined UX Flow
With all improvements, a typical session looks like:
```bash
# One-time config
lore init # sets up aliases during interactive setup
# Daily use
lore use auth # set context
lore issues --state opened # no -p needed
lore search "timeout" # scoped to auth
lore timeline "login flow" # scoped to auth
lore issues -p @backend # cross-repo query via group
lore mrs -p billing # quick alias switch
lore use --clear # back to global
```
Or for the power user who never wants to type `lore use`:
```bash
cd ~/code/auth-service
lore issues # git-aware auto-detection
```
Or for the scripter:
```bash
LORE_PROJECT=auth lore --robot issues -n 50 # env var for automation
```
## Priority Order
Implement in this order for maximum incremental value:
1. **Project aliases** — smallest change, biggest daily friction reduction
2. **`LORE_PROJECT` env var** — trivial to implement, enables scripting
3. **`lore projects` command** — discoverability, completes the alias story
4. **`lore use` context** — nice-to-have for heavy users
5. **Project groups** — high value for multi-repo teams
6. **Git-aware detection** — polish, "it just works" feel
7. **Short refs in output** — ties into timeline issue #001
8. **Prompt integration** — extra polish
## Relationship to Issue #001
The timeline entity-ref ambiguity (issue #001) is solved naturally by items 7 and 8
here. Once aliases exist, `format_entity_ref` can use the alias as the short project
identifier in multi-project output:
```
auth#42 instead of infra/platform/auth-service#42
```
And in single-project timelines (detected via `lore use` or git-aware), the project
prefix is omitted entirely — matching the current behavior but now intentionally.

# Recurring Bug Pattern Detector
- **Command:** `lore recurring-patterns [--min-cluster <N>]`
- **Confidence:** 76%
- **Tier:** 3
- **Status:** proposed
- **Effort:** high — vector clustering, threshold tuning
## What
Cluster closed issues by embedding similarity. Identify clusters of 3+ issues that
are semantically similar — these represent recurring problems that need a systemic
fix rather than one-off patches.
## Why
Finding the same bug filed 5 different ways is one of the most impactful things you
can surface. This is a sophisticated use of the embedding pipeline that no competing
tool offers. It turns "we keep having auth issues" from a gut feeling into data.
## Data Required
All exists today:
- `documents` (source_type='issue', content_text)
- `embeddings` (768-dim vectors)
- `issues` (state='closed' for filtering)
## Implementation Sketch
```
1. Collect all embeddings for closed issue documents
2. For each issue, find K nearest neighbors (K=10)
3. Build adjacency graph: edge exists if similarity > threshold (e.g., 0.80)
4. Find connected components (simple DFS/BFS)
5. Filter to components with >= min-cluster members (default 3)
6. For each cluster:
   a. Extract common terms (TF-IDF or simple word frequency)
   b. Sort by recency (most recent issue first)
   c. Report cluster with: theme, member issues, time span
```
### Similarity Threshold Tuning
This is the critical parameter. Too low = noise, too high = misses.
- Start at 0.80 cosine similarity
- Expose as `--threshold` flag for user tuning
- Report cluster cohesion score for transparency
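Steps 3–5 reduce to connected components over a thresholded similarity graph. A naive sketch, collapsing the KNN step into direct pairwise comparison (acceptable for small N, and exactly the O(N²) cost flagged in the Downsides):

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Connected components of the "similarity > threshold" graph, keeping
/// only components with at least `min_cluster` members.
fn clusters(vecs: &[Vec<f32>], threshold: f32, min_cluster: usize) -> Vec<Vec<usize>> {
    let n = vecs.len();
    let mut seen = vec![false; n];
    let mut out = Vec::new();
    for start in 0..n {
        if seen[start] {
            continue;
        }
        // DFS from `start`, collecting every transitively similar issue.
        let mut comp = vec![start];
        seen[start] = true;
        let mut stack = vec![start];
        while let Some(i) = stack.pop() {
            for j in 0..n {
                if !seen[j] && cosine(&vecs[i], &vecs[j]) > threshold {
                    seen[j] = true;
                    comp.push(j);
                    stack.push(j);
                }
            }
        }
        if comp.len() >= min_cluster {
            out.push(comp);
        }
    }
    out
}
```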
## Human Output
```
Recurring Patterns (3+ similar closed issues)

Cluster 1: "Authentication timeout errors" (5 issues, spanning 6 months)
  #89 Login timeout on slow networks          (closed 3d ago)
  #72 Auth flow hangs on cellular             (closed 2mo ago)
  #58 Token refresh timeout                   (closed 3mo ago)
  #45 SSO login timeout for remote users      (closed 5mo ago)
  #31 Connection timeout in auth middleware   (closed 6mo ago)
  Avg similarity: 0.87 | Suggested: systemic fix for auth timeout handling

Cluster 2: "Cache invalidation issues" (3 issues, spanning 2 months)
  #85 Stale cache after deploy                       (closed 2w ago)
  #77 Cache headers not updated                      (closed 1mo ago)
  #69 Dashboard shows old data after settings change (closed 2mo ago)
  Avg similarity: 0.82 | Suggested: review cache invalidation strategy
```
## Downsides
- Clustering quality depends on embedding quality and threshold tuning
- May produce false clusters (issues that mention similar terms but are different problems)
- Computationally expensive for large issue counts (N^2 comparisons)
- Need to handle multi-chunk documents (aggregate embeddings)
## Extensions
- `lore recurring-patterns --open` — find clusters in open issues (duplicates to merge)
- `lore recurring-patterns --cross-project` — patterns across repos
- Trend detection: are cluster sizes growing? (escalating problem)
- Export as report for engineering retrospectives

# DiffNote Coverage Map
- **Command:** `lore review-coverage <mr-iid>`
- **Confidence:** 75%
- **Tier:** 3
- **Status:** proposed
- **Effort:** medium — join DiffNote positions with mr_file_changes
## What
For a specific MR, show which files received review comments (DiffNotes) vs. which
files were changed but received no review attention. Highlights blind spots in code
review.
## Why
Large MRs often have files that get reviewed thoroughly and files that slip through
with no comments. This makes the review coverage visible so teams can decide if
un-reviewed files need a second look.
## Data Required
All exists today:
- `mr_file_changes` (new_path per MR)
- `notes` (position_new_path, note_type='DiffNote', discussion_id)
- `discussions` (merge_request_id)
## Implementation Sketch
```sql
SELECT
mfc.new_path,
mfc.change_type,
COUNT(DISTINCT n.id) as review_comments,
COUNT(DISTINCT n.discussion_id) as review_threads,
CASE WHEN COUNT(n.id) = 0 THEN 'NOT REVIEWED' ELSE 'REVIEWED' END as status
FROM mr_file_changes mfc
-- Join discussions first, scoped to this MR, so DiffNotes from other MRs
-- that touch the same path are not counted
LEFT JOIN discussions d ON d.merge_request_id = mfc.merge_request_id
LEFT JOIN notes n ON n.discussion_id = d.id
AND n.position_new_path = mfc.new_path
AND n.note_type = 'DiffNote'
AND n.is_system = 0
WHERE mfc.merge_request_id = ?1
GROUP BY mfc.new_path, mfc.change_type
ORDER BY review_comments DESC;
```
## Human Output
```
Review Coverage for !234 — Refactor auth middleware
REVIEWED (5 files, 23 comments)
src/auth/middleware.rs 12 comments, 4 threads
src/auth/jwt.rs 6 comments, 2 threads
src/auth/session.rs 3 comments, 1 thread
tests/auth/middleware_test.rs 1 comment, 1 thread
src/auth/mod.rs 1 comment, 1 thread
NOT REVIEWED (3 files)
src/auth/types.rs modified [no review comments]
src/api/routes.rs modified [no review comments]
Cargo.toml modified [no review comments]
Coverage: 5/8 files (62.5%)
```
## Downsides
- Reviewers may have reviewed a file without leaving comments (approval by silence)
- position_new_path matching may not cover all DiffNote position formats
- Config files (Cargo.toml) not being reviewed is usually fine
## Extensions
- `lore review-coverage --all --since 30d` — aggregate coverage across all MRs
- Per-reviewer breakdown: which reviewers cover which files?
- Coverage heatmap: files that consistently escape review across multiple MRs

docs/ideas/silos.md Normal file

@@ -0,0 +1,90 @@
# Knowledge Silo Detection
- **Command:** `lore silos [--min-changes <N>]`
- **Confidence:** 87%
- **Tier:** 2
- **Status:** proposed
- **Effort:** medium — requires mr_file_changes population (Gate 4)
## What
For each file path (or directory), count unique MR authors. Flag paths where only
1 person has ever authored changes (bus factor = 1). Aggregate by directory to show
silo areas.
## Why
Bus factor analysis is critical for team resilience. If only one person has ever
touched the auth module, that's a risk. This uses data already ingested to surface
knowledge concentration that's otherwise invisible.
## Data Required
- `mr_file_changes` (new_path, merge_request_id) — needs Gate 4 ingestion
- `merge_requests` (author_username, state='merged')
- `projects` (path_with_namespace)
## Implementation Sketch
```sql
-- Find directories with bus factor = 1
WITH file_authors AS (
SELECT
mfc.new_path,
mr.author_username,
p.path_with_namespace,
mfc.project_id
FROM mr_file_changes mfc
JOIN merge_requests mr ON mfc.merge_request_id = mr.id
JOIN projects p ON mfc.project_id = p.id
WHERE mr.state = 'merged'
),
directory_authors AS (
SELECT
project_id,
path_with_namespace,
-- Extract directory: everything before last '/'
CASE
WHEN INSTR(new_path, '/') > 0
-- RTRIM strips the trailing run of non-'/' characters,
-- leaving the directory prefix (with its trailing slash)
THEN RTRIM(new_path, REPLACE(new_path, '/', ''))
ELSE '.'
END as directory,
COUNT(DISTINCT author_username) as unique_authors,
COUNT(*) as total_changes,
GROUP_CONCAT(DISTINCT author_username) as authors
FROM file_authors
GROUP BY project_id, directory
)
SELECT * FROM directory_authors
WHERE unique_authors = 1
AND total_changes >= ?1 -- min-changes threshold
ORDER BY total_changes DESC;
```
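If directory extraction ever moves out of SQL, the same heuristic is a one-liner in Rust (hypothetical helper, not in the codebase); note it returns the prefix without the trailing slash:

```rust
/// Directory component of a repo path: everything before the last '/',
/// or "." for top-level files, mirroring the SQL CASE in the sketch.
fn directory_of(path: &str) -> &str {
    match path.rfind('/') {
        Some(i) => &path[..i],
        None => ".",
    }
}
```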
## Human Output
```
Knowledge Silos (bus factor = 1, min 3 changes)
group/backend
src/auth/ alice (8 changes) HIGH RISK
src/billing/ bob (5 changes) HIGH RISK
src/utils/cache/ charlie (3 changes) MODERATE RISK
group/frontend
src/admin/ dave (12 changes) HIGH RISK
```
## Downsides
- Historical authors may have left the team; needs recency weighting
- Requires `mr_file_changes` to be populated (Gate 4)
- Single-author directories may be intentional (ownership model)
- Directory aggregation heuristic is imperfect for deep nesting
## Extensions
- `lore silos --since 180d` — only count recent activity
- `lore silos --depth 2` — aggregate at directory depth N
- Combine with `lore experts` to show both silos and experts in one view
- Risk scoring: weight by directory size, change frequency, recency


@@ -0,0 +1,95 @@
# Similar Issues Finder
- **Command:** `lore similar <iid>`
- **Confidence:** 95%
- **Tier:** 1
- **Status:** proposed
- **Effort:** low — infrastructure exists, needs one new query path
## What
Given an issue IID, find the N most semantically similar issues using the existing
vector embeddings. Show similarity score and overlapping keywords.
Can also work with MRs: `lore similar --mr <iid>`.
## Why
Duplicate detection is a constant problem on active projects. "Is this bug already
filed?" becomes a one-liner. This is the most natural use of the embedding pipeline
and the feature people expect when they hear "semantic search."
## Data Required
All exists today:
- `documents` table (source_type, source_id, content_text)
- `embeddings` virtual table (768-dim vectors via sqlite-vec)
- `embedding_metadata` (document_hash for staleness check)
## Implementation Sketch
```
1. Resolve IID → issue.id → document.id (via source_type='issue', source_id)
2. Look up embedding vector(s) for that document
3. Query sqlite-vec for K nearest neighbors (K = limit * 2 for headroom)
4. Filter to source_type='issue' (or 'merge_request' if --include-mrs)
5. Exclude self
6. Rank by cosine similarity
7. Return top N with: iid, title, project, similarity_score, url
```
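Steps 5-7 above can be sketched as a plain ranking pass, assuming the candidate embeddings have already been fetched as `Vec<f32>` (function names hypothetical):

```rust
/// Cosine similarity between two embedding vectors.
fn cosine(a: &[f32], b: &[f32]) -> f32 {
    let dot: f32 = a.iter().zip(b).map(|(x, y)| x * y).sum();
    let na: f32 = a.iter().map(|x| x * x).sum::<f32>().sqrt();
    let nb: f32 = b.iter().map(|x| x * x).sum::<f32>().sqrt();
    dot / (na * nb)
}

/// Rank candidate (id, embedding) pairs against the query embedding,
/// excluding the query document itself, and keep the top n.
fn top_similar(
    query_id: i64,
    query: &[f32],
    candidates: &[(i64, Vec<f32>)],
    n: usize,
) -> Vec<(i64, f32)> {
    let mut scored: Vec<(i64, f32)> = candidates
        .iter()
        .filter(|(id, _)| *id != query_id)       // step 5: exclude self
        .map(|(id, v)| (*id, cosine(query, v)))  // step 6: similarity
        .collect();
    scored.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
    scored.truncate(n);                          // step 7: top N
    scored
}
```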
### SQL Core
```sql
-- Get the embedding for the target document (chunk 0 = representative;
-- assumes the rowid scheme document_id * 1000 + chunk_index)
SELECT embedding FROM embeddings WHERE rowid = ?1 * 1000;
-- Find nearest neighbors
SELECT
rowid,
distance
FROM embeddings
WHERE embedding MATCH ?1
AND k = ?2
ORDER BY distance;
-- Resolve back to entities
SELECT d.source_type, d.source_id, d.title, d.url, i.iid, i.state
FROM documents d
JOIN issues i ON d.source_id = i.id AND d.source_type = 'issue'
WHERE d.id = ?;
```
## Robot Mode Output
```json
{
"ok": true,
"data": {
"query_issue": { "iid": 42, "title": "Login timeout on slow networks" },
"similar": [
{
"iid": 38,
"title": "Connection timeout in auth flow",
"project": "group/backend",
"similarity": 0.87,
"state": "closed",
"url": "https://gitlab.com/group/backend/-/issues/38"
}
]
},
"meta": { "elapsed_ms": 45, "candidates_scanned": 200 }
}
```
## Downsides
- Embedding quality depends on description quality; short issues may not match well
- Multi-chunk documents need aggregation strategy (use chunk 0 or average?)
- Requires embeddings to be generated first (`lore embed`)
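One plausible answer to the multi-chunk aggregation question is mean pooling with re-normalization, so document vectors stay on the unit sphere for cosine comparison (a sketch, not the decided strategy):

```rust
/// Mean-pool per-chunk embeddings into one document vector,
/// then re-normalize. Returns None for empty or zero input.
fn aggregate_chunks(chunks: &[Vec<f32>]) -> Option<Vec<f32>> {
    let dim = chunks.first()?.len();
    let mut mean = vec![0.0f32; dim];
    for c in chunks {
        for (m, x) in mean.iter_mut().zip(c) {
            *m += x / chunks.len() as f32;
        }
    }
    let norm = mean.iter().map(|x| x * x).sum::<f32>().sqrt();
    if norm == 0.0 {
        return None;
    }
    Some(mean.iter().map(|x| x / norm).collect())
}
```

The alternative (use chunk 0 only) is cheaper and often fine for issues, where the title and opening description carry most of the signal.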
## Extensions
- `lore similar --open-only` to filter to unresolved issues (duplicate triage)
- `lore similar --text "free text query"` to find issues similar to arbitrary text
- Batch mode: find all potential duplicate clusters across the entire database


@@ -0,0 +1,100 @@
# Stale Discussion Finder
- **Command:** `lore stale-discussions [--days <N>]`
- **Confidence:** 90%
- **Tier:** 1
- **Status:** proposed
- **Effort:** low — single query, minimal formatting
## What
List unresolved, resolvable discussions where `last_note_at` is older than a
threshold (default 14 days), grouped by parent entity. Prioritize by discussion
count per entity (more stale threads = more urgent).
## Why
Unresolved discussions are silent blockers. They prevent MR merges, stall
decision-making, and represent forgotten conversations. This surfaces them so teams
can take action: resolve, respond, or explicitly mark as won't-fix.
## Data Required
All exists today:
- `discussions` (resolved, resolvable, last_note_at)
- `issues` / `merge_requests` (for parent entity context)
## Implementation Sketch
```sql
SELECT
d.id,
d.noteable_type,
CASE WHEN d.issue_id IS NOT NULL THEN i.iid ELSE mr.iid END as entity_iid,
CASE WHEN d.issue_id IS NOT NULL THEN i.title ELSE mr.title END as entity_title,
p.path_with_namespace,
d.last_note_at,
((?1 - d.last_note_at) / 86400000) as days_stale,
COUNT(*) OVER (PARTITION BY COALESCE(d.issue_id, d.merge_request_id), d.noteable_type) as stale_count_for_entity
FROM discussions d
JOIN projects p ON d.project_id = p.id
LEFT JOIN issues i ON d.issue_id = i.id
LEFT JOIN merge_requests mr ON d.merge_request_id = mr.id
WHERE d.resolved = 0
AND d.resolvable = 1
AND d.last_note_at < ?1 - (?2 * 86400000) -- ?2 = threshold days (default 14)
ORDER BY days_stale DESC;
```
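The staleness arithmetic is simple, assuming millisecond epoch timestamps as in the query above (helper names hypothetical):

```rust
/// Whole days elapsed between a stored ms timestamp and "now" (also ms).
fn days_stale(last_note_at_ms: i64, now_ms: i64) -> i64 {
    (now_ms - last_note_at_ms) / 86_400_000
}

/// A discussion is reportable when resolvable, unresolved, and past threshold.
fn is_stale(resolvable: bool, resolved: bool, days: i64, threshold_days: i64) -> bool {
    resolvable && !resolved && days >= threshold_days
}
```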
## Human Output Format
```
Stale Discussions (14+ days without activity)
group/backend !234 — Refactor auth middleware (3 stale threads)
Discussion #a1b2c3 (28d stale) "Should we use JWT or session tokens?"
Discussion #d4e5f6 (21d stale) "Error handling for expired tokens"
Discussion #g7h8i9 (14d stale) "Performance implications of per-request validation"
group/backend #90 — Rate limiting design (1 stale thread)
Discussion #j0k1l2 (18d stale) "Redis vs in-memory rate counter"
```
## Robot Mode Output
```json
{
"ok": true,
"data": {
"threshold_days": 14,
"total_stale": 4,
"entities": [
{
"type": "merge_request",
"iid": 234,
"title": "Refactor auth middleware",
"project": "group/backend",
"stale_discussions": [
{
"discussion_id": "a1b2c3",
"days_stale": 28,
"first_note_preview": "Should we use JWT or session tokens?"
}
]
}
]
}
}
```
## Downsides
- Some discussions are intentionally left open (design docs, long-running threads)
- Could produce noise in repos with loose discussion hygiene
- Doesn't distinguish "stale and blocking" from "stale and irrelevant"
## Extensions
- `lore stale-discussions --mr-only` — focus on MR review threads (most actionable)
- `lore stale-discussions --author alice` — "threads I started that went quiet"
- `lore stale-discussions --assignee bob` — "threads on my MRs that need attention"

docs/ideas/unlinked.md Normal file

@@ -0,0 +1,82 @@
# Unlinked MR Finder
- **Command:** `lore unlinked [--since <date>]`
- **Confidence:** 83%
- **Tier:** 2
- **Status:** proposed
- **Effort:** low — LEFT JOIN queries
## What
Two reports:
1. Merged MRs with no entity_references at all (no "closes", no "mentioned",
no "related") — orphan MRs with no issue traceability
2. Closed issues with no MR reference — issues closed manually without code change
## Why
Process compliance metric. Unlinked MRs mean lost traceability — you can't trace
a code change back to a requirement. Manually closed issues might mean work was done
outside the tracked process, or issues were closed prematurely.
## Data Required
All exists today:
- `merge_requests` (state, merged_at)
- `issues` (state, closed/updated_at)
- `entity_references` (for join/anti-join)
## Implementation Sketch
```sql
-- Orphan merged MRs (no references at all)
SELECT mr.iid, mr.title, mr.author_username, mr.merged_at,
p.path_with_namespace
FROM merge_requests mr
JOIN projects p ON mr.project_id = p.id
LEFT JOIN entity_references er
ON er.source_entity_type = 'merge_request' AND er.source_entity_id = mr.id
WHERE mr.state = 'merged'
AND mr.merged_at >= ?1
AND er.id IS NULL
ORDER BY mr.merged_at DESC;
-- Closed issues with no MR reference
SELECT i.iid, i.title, i.author_username, i.updated_at,
p.path_with_namespace
FROM issues i
JOIN projects p ON i.project_id = p.id
LEFT JOIN entity_references er
ON er.target_entity_type = 'issue' AND er.target_entity_id = i.id
AND er.source_entity_type = 'merge_request'
WHERE i.state = 'closed'
AND i.updated_at >= ?1
AND er.id IS NULL
ORDER BY i.updated_at DESC;
```
## Human Output
```
Unlinked MRs (merged with no issue reference, last 30 days)
!245 Fix typo in README (alice, merged 2d ago)
!239 Update CI pipeline (bob, merged 1w ago)
!236 Bump dependency versions (charlie, merged 2w ago)
Orphan Closed Issues (closed without any MR, last 30 days)
#92 Update documentation for v2 (closed by dave, 3d ago)
#88 Investigate memory usage (closed by eve, 2w ago)
```
## Downsides
- Some MRs legitimately don't reference issues (chores, CI fixes, dependency bumps)
- Some issues are legitimately closed without code (questions, duplicates, won't-fix)
- Noise level depends on team discipline
## Extensions
- `lore unlinked --ignore-labels "chore,ci"` — filter out expected orphans
- Compliance score: % of MRs with issue links over time (trend metric)

docs/ideas/weekly-digest.md Normal file

@@ -0,0 +1,102 @@
# Weekly Digest Generator
- **Command:** `lore weekly [--since <date>]`
- **Confidence:** 90%
- **Tier:** 1
- **Status:** proposed
- **Effort:** medium — builds on digest infrastructure, adds markdown formatting
## What
Auto-generate a markdown document summarizing the week: MRs merged (grouped by
project), issues closed, new issues opened, ongoing discussions, milestone progress.
Formatted for pasting into Slack, email, or team standup notes.
Default window is 7 days. `--since` overrides.
## Why
Every team lead writes a weekly status update. This writes itself from the data.
Leverages everything gitlore has ingested. Saves 30-60 minutes of manual summarization
per week.
## Data Required
Same as digest (all exists today):
- `resource_state_events`, `merge_requests`, `issues`, `discussions`
- `milestones` for progress tracking
## Implementation Sketch
This is essentially `lore digest --since 7d --format markdown` with:
1. Section headers for each category
2. Milestone progress bars (X/Y issues closed)
3. "Highlights" section with the most-discussed items
4. "Risks" section with overdue issues and stale MRs
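The milestone progress bars from point 2 could render like this (a sketch; format and bar width are arbitrary choices, not decided):

```rust
/// Render a "14/20 issues closed (70%)" line with a fixed-width ASCII bar
/// for the Milestone Progress section.
fn progress_line(name: &str, closed: u32, total: u32, width: usize) -> String {
    let pct = if total == 0 { 0 } else { closed * 100 / total };
    let filled = if total == 0 { 0 } else { width * closed as usize / total as usize };
    let bar: String = "#".repeat(filled) + &"-".repeat(width - filled);
    format!("{name} [{bar}] {closed}/{total} issues closed ({pct}%)")
}
```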
### Markdown Template
```markdown
# Weekly Summary — Jan 20-27, 2025
## Highlights
- **!234** Refactor auth middleware merged (12 discussions, 4 reviewers)
- **#95** New critical bug: Rate limiting returns 500
## Merged (3)
| MR | Title | Author | Reviewers |
|----|-------|--------|-----------|
| !234 | Refactor auth middleware | alice | bob, charlie |
| !231 | Fix connection pool leak | bob | alice |
| !45 | Update dashboard layout | eve | dave |
## Closed Issues (2)
- **#89** Login timeout on slow networks (closed by alice)
- **#87** Stale cache headers (closed by bob)
## New Issues (3)
- **#95** Rate limiting returns 500 (priority::high, assigned to charlie)
- **#94** Add rate limit documentation (priority::low)
- **#93** Flaky test in CI pipeline (assigned to dave)
## Milestone Progress
- **v2.0** — 14/20 issues closed (70%) — due Feb 15
- **v1.9-hotfix** — 3/3 issues closed (100%) — COMPLETE
## Active Discussions
- **#90** 8 new comments this week (needs-review)
- **!230** 5 review threads unresolved
```
## Robot Mode Output
```json
{
"ok": true,
"data": {
"period": { "from": "2025-01-20", "to": "2025-01-27" },
"merged_count": 3,
"closed_count": 2,
"opened_count": 3,
"highlights": [...],
"merged": [...],
"closed": [...],
"opened": [...],
"milestones": [...],
"active_discussions": [...]
}
}
```
## Downsides
- Formatting preferences vary by team; hard to please everyone
- "Highlights" ranking is heuristic (discussion count as proxy for importance)
- Doesn't capture work done outside GitLab
## Extensions
- `lore weekly --project group/backend` — single project scope
- `lore weekly --author alice` — personal weekly summary
- `lore weekly --output weekly.md` — write to file
- Scheduled generation via cron + robot mode


@@ -0,0 +1,140 @@
# 001: Timeline human output omits project path from entity references
- **Severity:** medium
- **Component:** `src/cli/commands/timeline.rs`
- **Status:** open
## Problem
The `lore timeline` human-readable output renders entity references as bare `#42` or
`!234` without the project path. When multiple projects are synced, this makes the
output ambiguous — issue `#42` in `group/backend` and `#42` in `group/frontend` are
indistinguishable.
### Affected code
`format_entity_ref` at `src/cli/commands/timeline.rs:201-207`:
```rust
fn format_entity_ref(entity_type: &str, iid: i64) -> String {
    match entity_type {
        "issue" => format!("#{iid}"),
        "merge_request" => format!("!{iid}"),
        _ => format!("{entity_type}:{iid}"),
    }
}
```
This function is called in three places:
1. **Event lines** (`print_timeline_event`, line 130) — each event row shows `#42`
with no project context
2. **Footer seed list** (`print_timeline_footer`, line 161) — seed entities listed as
`#42, !234` with no project disambiguation
3. **Collect stage summaries** (`timeline_collect.rs:107`) — the `summary` field itself
bakes in `"Issue #42 created: ..."` without project
### Current output (ambiguous)
```
2025-01-20 CREATED #42 Issue #42 created: Login timeout bug @alice
2025-01-21 LABEL+ #42 Label added: priority::high @dave
2025-01-22 CREATED !234 MR !234 created: Refactor auth middleware @alice
2025-01-25 MERGED !234 MR !234 merged @bob
Seed entities: #42, !234
```
When multiple projects are synced, a reader cannot tell which project `#42` belongs to.
## Robot mode is partially affected
The robot JSON output (`EventJson`, lines 387-416) DOES include a `project` field per
event, so programmatic consumers can disambiguate. However, the `summary` string field
still bakes in bare `#42` without project context, which is misleading if an agent uses
the summary for display.
## Proposed fix
### 1. Add project to `format_entity_ref`
Pass `project_path` into `format_entity_ref` and use GitLab's full reference format:
```rust
fn format_entity_ref(entity_type: &str, iid: i64, project_path: &str) -> String {
    match entity_type {
        "issue" => format!("{project_path}#{iid}"),
        "merge_request" => format!("{project_path}!{iid}"),
        _ => format!("{project_path}/{entity_type}:{iid}"),
    }
}
```
### 2. Smart elision for single-project timelines
When all events belong to the same project, the full path is visual noise. Detect
this and fall back to bare `#42` / `!234`:
```rust
fn should_show_project(events: &[TimelineEvent]) -> bool {
    let projects: HashSet<_> = events.iter().map(|e| &e.project_path).collect();
    projects.len() > 1
}
```
Then conditionally format:
```rust
let entity_ref = if show_project {
format_entity_ref(&event.entity_type, event.entity_iid, &event.project_path)
} else {
format_entity_ref_short(&event.entity_type, event.entity_iid)
};
```
### 3. Fix summary strings in collect stage
`timeline_collect.rs:107` bakes the summary as `"Issue #42 created: title"`. This
should include the project when multi-project:
```rust
let prefix = if multi_project {
    format!("{type_label} {project_path}#{iid}")
} else {
    format!("{type_label} #{iid}")
};
summary = format!("{prefix} created: {title_str}");
```
Same pattern for the merge summary at lines 317 and 347.
### 4. Update footer seed list
`print_timeline_footer` (line 155-164) should also use the project-aware format:
```rust
result.seed_entities.iter()
    .map(|e| format_entity_ref(&e.entity_type, e.entity_iid, &e.project_path))
```
## Expected output after fix
### Single project (no change)
```
2025-01-20 CREATED #42 Issue #42 created: Login timeout bug @alice
```
### Multi-project (project path added)
```
2025-01-20 CREATED group/backend#42 Issue group/backend#42 created: Login timeout @alice
2025-01-22 CREATED group/frontend#42 Issue group/frontend#42 created: Broken layout @eve
```
## Impact
- Human output: ambiguous for multi-project users (the primary use case for gitlore)
- Robot output: summary field misleading, but `project` field provides workaround
- Timeline footer: seed entity list ambiguous
- Collect-stage summaries: baked-in bare references propagate to both renderers


@@ -0,0 +1,114 @@
Your plan is strong directionally, but I'd revise it in 8 key places to avoid regressions and make it significantly more useful in production.
1. **Split reviewer signals into “participated” vs “assigned-only”**
Reason: today's inflation problem is often assignment noise. Treating `mr_reviewers` as equal to real review activity still over-ranks passive reviewers.
```diff
@@ Per-signal contributions
-| Reviewer (reviewed MR touching path) | 10 | 90 days |
+| ReviewerParticipated (left DiffNote on MR/path) | 10 | 90 days |
+| ReviewerAssignedOnly (in mr_reviewers, no DiffNote by that user on MR/path) | 3 | 45 days |
```
```diff
@@ Scoring Formula
-score = reviewer_mr * reviewer_weight + ...
+score = reviewer_participated * reviewer_weight
+ + reviewer_assigned_only * reviewer_assignment_weight
+ + ...
```
2. **Cap/saturate note intensity per MR**
Reason: raw per-note addition can still reward “comment storms.” Use diminishing returns.
```diff
@@ Rust-Side Aggregation
-- Notes: Vec<i64> (timestamps) from diffnote_reviewer
+-- Notes grouped per (username, mr_id): note_count + max_ts
+-- Note contribution per MR uses diminishing returns:
+-- note_score_mr = note_bonus * ln(1 + note_count) * decay(now - ts, note_hl)
```
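The saturation proposed in the diff above is easy to pin down numerically (a sketch; `exp2`-based half-life decay as used throughout the plan):

```rust
/// Half-life decay factor: 2^(-days / half_life).
fn decay(days_elapsed: f64, half_life_days: f64) -> f64 {
    (-days_elapsed / half_life_days).exp2()
}

/// Per-MR note contribution with diminishing returns:
/// note_bonus * ln(1 + note_count) * decay.
fn note_score_mr(note_bonus: f64, note_count: u32, days_elapsed: f64, note_hl: f64) -> f64 {
    note_bonus * (1.0 + note_count as f64).ln() * decay(days_elapsed, note_hl)
}
```

A 10-note storm on one MR scores about ln(11) ≈ 2.4 units instead of 10, which is the point.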
3. **Use better event timestamps than `m.updated_at` for file-change signals**
Reason: `updated_at` is noisy (title edits, metadata touches) and creates false recency.
```diff
@@ SQL Restructure
- signal 3/4 seen_at = m.updated_at
+ signal 3/4 activity_ts = COALESCE(m.merged_at, m.closed_at, m.created_at, m.updated_at)
```
4. **Don't stream raw note rows to Rust; pre-aggregate in SQL**
Reason: current plan removes SQL grouping and can blow up memory/latency on large repos.
```diff
@@ SQL Restructure
-SELECT username, signal, mr_id, note_id, ts FROM signals
+WITH raw_signals AS (...),
+aggregated AS (
+ -- 1 row per (username, signal_class, mr_id) for MR-level signals
+ -- 1 row per (username, mr_id) for note_count + max_ts
+)
+SELECT username, signal_class, mr_id, qty, ts FROM aggregated
```
5. **Replace fixed `"24m"` with model-driven cutoff**
Reason: hardcoded 24m is arbitrary and tied to current weights/half-lives only.
```diff
@@ Default --since Change
-Expert mode: "6m" -> "24m"
+Expert mode default window derived from scoring.max_age_days (default 1095 days / 36m).
+Formula guidance: choose max_age where max possible single-event contribution < epsilon (e.g. 0.25 points).
+Add `--all-history` to disable cutoff for diagnostics.
```
6. **Validate scoring config explicitly**
Reason: silent bad configs (`half_life_days = 0`, negative weights) create undefined behavior.
```diff
@@ ScoringConfig (config.rs)
pub struct ScoringConfig {
pub author_weight: i64,
pub reviewer_weight: i64,
pub note_bonus: i64,
+ pub reviewer_assignment_weight: i64, // default: 3
pub author_half_life_days: u32,
pub reviewer_half_life_days: u32,
pub note_half_life_days: u32,
+ pub reviewer_assignment_half_life_days: u32, // default: 45
+ pub max_age_days: u32, // default: 1095
}
@@ Config::load_from_path
+validate_scoring(&config.scoring)?; // weights >= 0, half_life_days > 0, max_age_days >= 30
```
7. **Keep raw float score internally; round only for display**
Reason: rounding before sort causes avoidable ties/rank instability.
```diff
@@ Rust-Side Aggregation
-Round to i64 for Expert.score field
+Compute `raw_score: f64`, sort by raw_score DESC.
+Expose integer `score` for existing UX.
+Optionally expose `score_raw` and `score_components` in robot JSON when `--explain-score`.
```
8. **Add confidence + data-completeness metadata**
Reason: rankings are misleading if `mr_file_changes` coverage is poor.
```diff
@@ ExpertResult / Output
+confidence: "high" | "medium" | "low"
+coverage: { mrs_with_file_changes, total_mrs_in_window, percent }
+warning when coverage < threshold (e.g. 70%)
```
```diff
@@ Verification
4. cargo test
+5. ubs src/cli/commands/who.rs src/core/config.rs
+6. Benchmark query_expert on representative DB (latency + rows scanned before/after)
```
If you want, I can rewrite your full plan document into a clean “v2” version that already incorporates these diffs end-to-end.


@@ -0,0 +1,132 @@
The plan is strong, but I'd revise it in 10 places to improve correctness, scalability, and operator trust.
1. **Add rename/old-path awareness (correctness gap)**
Analysis: right now both existing code and your plan still center on `position_new_path` / `new_path` matches (`src/cli/commands/who.rs:643`, `src/cli/commands/who.rs:681`). That misses expertise on renamed/deleted paths and under-ranks long-time owners after refactors.
```diff
@@ ## Context
-This produces two compounding problems:
+This produces three compounding problems:
@@
2. **Reviewer inflation**: ...
+3. **Path-history blindness**: Renamed/moved files lose historical expertise because matching relies on current-path fields only.
@@ ### 3. SQL Restructure (who.rs)
-AND n.position_new_path {path_op}
+AND (n.position_new_path {path_op} OR n.position_old_path {path_op})
-AND fc.new_path {path_op}
+AND (fc.new_path {path_op} OR fc.old_path {path_op})
```
2. **Follow rename chains for queried paths**
Analysis: matching `old_path` helps, but true continuity needs alias expansion (A→B→C). Without this, expertise before multi-hop renames is fragmented.
```diff
@@ ### 3. SQL Restructure (who.rs)
+**Path alias expansion**: Before scoring, resolve a bounded rename alias set (default max depth: 20)
+from `mr_file_changes(change_type='renamed')`. Query signals against all aliases.
+Output includes `path_aliases_used` for transparency.
```
3. **Use hybrid SQL pre-aggregation instead of fully raw rows**
Analysis: the “raw row” design is simpler but will degrade on large repos with heavy DiffNote volume. Pre-aggregating to `(user, mr)` for MR signals and `(user, mr, note_count)` for note signals keeps memory/latency predictable.
```diff
@@ ### 3. SQL Restructure (who.rs)
-The SQL CTE ... removes the outer GROUP BY aggregation. Instead, it returns raw signal rows:
-SELECT username, signal, mr_id, note_id, ts FROM signals
+Use hybrid aggregation:
+- SQL returns MR-level rows for author/reviewer signals (1 row per user+MR+signal_class)
+- SQL returns note groups (1 row per user+MR with note_count, max_ts)
+- Rust applies decay + ln(1+count) + final ranking.
```
4. **Make timestamp policy state-aware (merged vs opened)**
Analysis: replacing `updated_at` with only `COALESCE(merged_at, created_at)` over-decays long-running open MRs. Open MRs need recency from active lifecycle; merged MRs should anchor to merge time.
```diff
@@ ### 3. SQL Restructure (who.rs)
-Replace m.updated_at with COALESCE(m.merged_at, m.created_at)
+Use state-aware timestamp:
+activity_ts =
+  CASE
+    WHEN m.state = 'merged' THEN COALESCE(m.merged_at, m.updated_at, m.created_at, m.last_seen_at)
+    WHEN m.state = 'opened' THEN COALESCE(m.updated_at, m.created_at, m.last_seen_at)
+    ELSE COALESCE(m.closed_at, m.updated_at, m.created_at, m.last_seen_at)
+  END
```
5. **Replace fixed `24m` with config-driven max age**
Analysis: `24m` is reasonable now, but brittle after tuning weights/half-lives. Tie cutoff to config so model behavior remains coherent as parameters evolve.
```diff
@@ ### 1. ScoringConfig (config.rs)
+pub max_age_days: u32, // default: 730 (or 1095)
@@ ### 5. Default --since Change
-Expert mode: "6m" -> "24m"
+Expert mode default window derives from `scoring.max_age_days`
+unless user passes `--since` or `--all-history`.
```
6. **Add reproducible scoring time via `--as-of`**
Analysis: decayed ranking is time-sensitive; debugging and tests become flaky without a fixed reference clock. This improves reliability and incident triage.
```diff
@@ ## Files to Modify
-2. src/cli/commands/who.rs
+2. src/cli/commands/who.rs
+3. src/cli/mod.rs
+4. src/main.rs
@@ ### 5. Default --since Change
+Add `--as-of <RFC3339|YYYY-MM-DD>` to score at a fixed timestamp.
+`resolved_input` includes `as_of_ms` and `as_of_iso`.
```
7. **Add explainability output (`--explain-score`)**
Analysis: decayed multi-signal ranking will be disputed without decomposition. Show components and top evidence MRs to make results actionable and debuggable.
```diff
@@ ## Rejected Ideas (with rationale)
-- **`--explain-score` flag with component breakdown**: ... deferred
+**Included in this iteration**: `--explain-score` adds per-user score components
+(`author`, `review_participated`, `review_assigned`, `notes`) plus top evidence MRs.
```
8. **Add confidence/coverage metadata**
Analysis: rankings can look precise while data is incomplete (`mr_file_changes` gaps, sparse DiffNotes). Confidence fields prevent false certainty.
```diff
@@ ### 4. Rust-Side Aggregation (who.rs)
+Compute and emit:
+- `coverage`: {mrs_considered, mrs_with_file_changes, mrs_with_diffnotes, percent}
+- `confidence`: high|medium|low (threshold-based)
```
9. **Add index migration for new query shapes**
Analysis: your new `EXISTS/NOT EXISTS` reviewer split and path dual-matching will need better indexes; current `who` indexes are not enough for author+path+time combinations.
```diff
@@ ## Files to Modify
+3. **`migrations/021_who_decay_indexes.sql`** — indexes for decay query patterns:
+ - notes(diffnote path + author + created_at + discussion_id) partial
+ - notes(old_path variant) partial
+ - mr_file_changes(project_id, new_path, merge_request_id)
+ - mr_file_changes(project_id, old_path, merge_request_id) partial
+ - merge_requests(state, merged_at, updated_at, created_at)
```
10. **Expand tests to invariants and determinism**
Analysis: example-based tests are good, but ranking systems need invariant tests to avoid subtle regressions.
```diff
@@ ### 7. New Tests (TDD)
+**`test_score_monotonicity_by_age`**: same signal, older timestamp never scores higher
+**`test_row_order_independence`**: shuffled SQL row order yields identical ranking
+**`test_as_of_reproducibility`**: same data + same `--as-of` => identical output
+**`test_rename_alias_chain_scoring`**: expertise carries across A->B->C rename chain
+**`test_overlap_participated_vs_assigned_counts`**: overlap reflects split reviewer semantics
```
If you want, I can produce a full consolidated `v2` plan doc patch (single unified diff against `plans/time-decay-expert-scoring.md`) rather than per-change snippets.


@@ -0,0 +1,167 @@
**Critical Plan Findings First**
1. The proposed index `idx_notes_mr_path_author ON notes(noteable_id, ...)` will fail: `notes.noteable_id` does not exist in schema (`migrations/002_issues.sql:74`).
2. Rename awareness is only applied in scoring queries, not in path resolution probes; today `build_path_query()` and `suffix_probe()` only inspect `position_new_path`/`new_path` (`src/cli/commands/who.rs:465`, `src/cli/commands/who.rs:591`), so old-path queries can still miss.
3. A fixed `"24m"` default window is brittle once half-lives become configurable; it can silently truncate meaningful history for larger half-lives.
Below are the revisions I'd make to your plan.
1. **Fix migration/index architecture (blocking correctness + perf)**
Rationale: prevents migration failure and aligns indexes to actual query shapes.
```diff
diff --git a/plan.md b/plan.md
@@ ### 6. Index Migration (db.rs)
- -- Support EXISTS subquery for reviewer participation check
- CREATE INDEX IF NOT EXISTS idx_notes_mr_path_author
- ON notes(noteable_id, position_new_path, author_username)
- WHERE note_type = 'DiffNote' AND is_system = 0;
+ -- Support reviewer participation joins (notes -> discussions -> MR)
+ CREATE INDEX IF NOT EXISTS idx_notes_diffnote_discussion_author_created
+ ON notes(discussion_id, author_username, created_at)
+ WHERE note_type = 'DiffNote' AND is_system = 0;
+
+ -- Path-first indexes for global and project-scoped path lookups
+ CREATE INDEX IF NOT EXISTS idx_mfc_new_path_project_mr
+ ON mr_file_changes(new_path, project_id, merge_request_id);
+ CREATE INDEX IF NOT EXISTS idx_mfc_old_path_project_mr
+ ON mr_file_changes(old_path, project_id, merge_request_id)
+ WHERE old_path IS NOT NULL;
@@
- -- Support state-aware timestamp selection
- CREATE INDEX IF NOT EXISTS idx_mr_state_timestamps
- ON merge_requests(state, merged_at, closed_at, updated_at, created_at);
+ -- Removed: low-selectivity timestamp composite index; joins are MR-id driven.
```
2. **Restructure SQL around `matched_mrs` CTE instead of repeating OR path clauses**
Rationale: better index use, less duplicated logic, cleaner maintenance.
```diff
diff --git a/plan.md b/plan.md
@@ ### 3. SQL Restructure (who.rs)
- WITH raw AS (
- -- 5 UNION ALL subqueries (signals 1, 2, 3, 4a, 4b)
- ),
+ WITH matched_notes AS (
+ -- DiffNotes matching new_path
+ ...
+ UNION ALL
+ -- DiffNotes matching old_path
+ ...
+ ),
+ matched_file_changes AS (
+ -- file changes matching new_path
+ ...
+ UNION ALL
+ -- file changes matching old_path
+ ...
+ ),
+ matched_mrs AS (
+ SELECT DISTINCT mr_id, project_id FROM matched_notes
+ UNION
+ SELECT DISTINCT mr_id, project_id FROM matched_file_changes
+ ),
+ raw AS (
+ -- signals sourced from matched_mrs + matched_notes
+ ),
```
3. **Replace correlated `EXISTS/NOT EXISTS` reviewer split with one precomputed participation set**
Rationale: same semantics, lower query cost, easier reasoning.
```diff
diff --git a/plan.md b/plan.md
@@ Signal 4 splits into two
- Signal 4a uses an EXISTS subquery ...
- Signal 4b uses NOT EXISTS ...
+ Build `reviewer_participation(mr_id, username)` once from matched DiffNotes.
+ Then classify `mr_reviewers` rows via LEFT JOIN:
+ - participated: `rp.username IS NOT NULL`
+ - assigned-only: `rp.username IS NULL`
+ This avoids correlated EXISTS scans per reviewer row.
```
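For concreteness, the same classification can be sketched in Rust terms (names are illustrative, not existing code in `who.rs`):

```rust
use std::collections::HashSet;

// Build the participation set once, then classify each mr_reviewers row
// with a single set lookup instead of a correlated EXISTS per row.
fn classify<'a>(
    participation: &HashSet<(i64, &'a str)>, // (mr_id, username) pairs with DiffNotes
    assigned: &[(i64, &'a str)],             // rows from mr_reviewers
) -> (Vec<(i64, &'a str)>, Vec<(i64, &'a str)>) {
    // .0 = participated, .1 = assigned-only
    assigned
        .iter()
        .copied()
        .partition(|row| participation.contains(row))
}
```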
4. **Make default `--since` derived from half-life + decay floor, not hardcoded 24m**
Rationale: remains mathematically consistent when config changes.
```diff
diff --git a/plan.md b/plan.md
@@ ### 1. ScoringConfig (config.rs)
+ pub decay_floor: f64, // default: 0.05
@@ ### 5. Default --since Change
- Expert mode: "6m" -> "24m"
+ Expert mode default window is computed:
+ default_since_days = ceil(max_half_life_days * log2(1.0 / decay_floor))
+ With defaults (max_half_life=180, floor=0.05), this is ~26 months.
+ CLI `--since` still overrides; `--all-history` still disables windowing.
```
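A sketch of the derivation, assuming `max_half_life_days` and `decay_floor` come from `ScoringConfig`:

```rust
// Derive the default window so every signal type has decayed below
// decay_floor by the window's edge: solve 2^(-d/h) = floor for d,
// giving d = h * log2(1 / floor).
fn default_since_days(max_half_life_days: u32, decay_floor: f64) -> u32 {
    let h = f64::from(max_half_life_days);
    (h * (1.0 / decay_floor).log2()).ceil() as u32
}
```

With the defaults above (180-day half-life, 0.05 floor) this yields 778 days, roughly 26 months, and grows automatically if half-lives are reconfigured.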
5. **Use `log2(1+count)` for notes instead of `ln(1+count)`**
Rationale: keeps 1 note ~= 1 unit (with `note_bonus=1`) while preserving diminishing returns.
```diff
diff --git a/plan.md b/plan.md
@@ Scoring Formula
- note_contribution(mr) = note_bonus * ln(1 + note_count_in_mr) * 2^(-days_elapsed / note_half_life)
+ note_contribution(mr) = note_bonus * log2(1 + note_count_in_mr) * 2^(-days_elapsed / note_half_life)
```
6. **Guarantee deterministic float aggregation and expose `score_raw`**
Rationale: avoids hash-order drift and explainability mismatch vs rounded integer score.
```diff
diff --git a/plan.md b/plan.md
@@ ### 4. Rust-Side Aggregation (who.rs)
- HashMap<i64, ...>
+ BTreeMap<i64, ...> (or sort keys before accumulation) for deterministic summation order
+ Use compensated summation (Kahan/Neumaier) for stable f64 totals
@@
- Sort on raw `f64` score ... round only for display
+ Keep `score_raw` internally and expose when `--explain-score` is active.
+ `score` remains integer for backward compatibility.
```
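A sketch of the deterministic-summation idea (a Neumaier variant, assuming per-MR contributions keyed by MR id):

```rust
use std::collections::BTreeMap;

// BTreeMap fixes iteration order by mr_id; compensated summation keeps the
// f64 total independent of where precision is lost along the way.
fn stable_sum(contributions: &BTreeMap<i64, f64>) -> f64 {
    let (mut sum, mut comp) = (0.0_f64, 0.0_f64);
    for &c in contributions.values() {
        let t = sum + c;
        // Accumulate the low-order bits lost by whichever operand was smaller.
        comp += if sum.abs() >= c.abs() { (sum - t) + c } else { (c - t) + sum };
        sum = t;
    }
    sum + comp
}
```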
7. **Extend rename awareness to query resolution (not only scoring)**
Rationale: fixes user-facing misses for old path input and suffix lookup.
```diff
diff --git a/plan.md b/plan.md
@@ Path rename awareness
- All signal subqueries match both old and new path columns
+ Also update `build_path_query()` probes and suffix probe:
+ - exact_exists: new_path OR old_path (notes + mr_file_changes)
+ - prefix_exists: new_path LIKE OR old_path LIKE
+ - suffix_probe: union of notes.position_new_path, notes.position_old_path,
+ mr_file_changes.new_path, mr_file_changes.old_path
```
8. **Tighten CLI/output contracts for new flags**
Rationale: avoids payload bloat/ambiguity and keeps robot clients stable.
```diff
diff --git a/plan.md b/plan.md
@@ ### 5b. Score Explainability via `--explain-score`
+ `--explain-score` conflicts with `--detail` (mutually exclusive)
+ `resolved_input` includes `as_of_ms`, `as_of_iso`, `scoring_model_version`
+ robot output includes `score_raw` and `components` only when explain is enabled
```
9. **Add confidence metadata (promote from rejected to accepted)**
Rationale: makes ranking more actionable and trustworthy with sparse evidence.
```diff
diff --git a/plan.md b/plan.md
@@ Rejected Ideas (with rationale)
- Confidence/coverage metadata: ... Deferred to avoid scope creep
+ Confidence/coverage metadata: ACCEPTED (minimal v1)
+ Add per-user `confidence: low|medium|high` based on evidence breadth + recency.
+ Keep implementation lightweight (no extra SQL pass).
```
10. **Upgrade test and verification scope to include query-plan and clock semantics**
Rationale: catches regressions your current tests won't.
```diff
diff --git a/plan.md b/plan.md
@@ 8. New Tests (TDD)
+ test_old_path_probe_exact_and_prefix
+ test_suffix_probe_uses_old_path_sources
+ test_since_relative_to_as_of_clock
+ test_explain_and_detail_are_mutually_exclusive
+ test_null_timestamp_fallback_to_created_at
+ test_query_plan_uses_path_indexes (EXPLAIN QUERY PLAN)
@@ Verification
+ 7. EXPLAIN QUERY PLAN snapshots for expert query (exact + prefix) confirm index usage
```
If you want, I can produce a single consolidated “revision 3” plan document that fully merges all of the above into your original structure.

Your plan is already strong. The biggest remaining gaps are temporal correctness, indexability at scale, and ranking reliability under sparse/noisy evidence. These are the revisions I'd make.
1. **Fix temporal correctness for `--as-of` (critical)**
Analysis: Right now the plan describes `--as-of`, but the SQL only enforces lower bounds (`>= since`). If `as_of` is in the past, “future” events can still enter and get full weight (because elapsed is clamped). This breaks reproducibility.
```diff
@@ 3. SQL Restructure
- AND n.created_at >= ?2
+ AND n.created_at BETWEEN ?2 AND ?4
@@ Signal 3/4a/4b
- AND {state_aware_ts} >= ?2
+ AND {state_aware_ts} BETWEEN ?2 AND ?4
@@ 5a. Reproducible Scoring via --as-of
- All decay computations use as_of_ms instead of SystemTime::now()
+ All event selection and decay computations are bounded by as_of_ms.
+ Query window is [since_ms, as_of_ms], never [since_ms, now_ms].
+ Add test: test_as_of_excludes_future_events.
```
2. **Resolve `closed`-state inconsistency**
Analysis: The CASE handles `closed`, but all signal queries filter to `('opened','merged')`, making the `closed_at` branch dead code. Either include closed MRs intentionally or remove that logic. I'd include closed with a reduced multiplier.
```diff
@@ ScoringConfig (config.rs)
+ pub closed_mr_multiplier: f64, // default: 0.5
@@ 3. SQL Restructure
- AND m.state IN ('opened','merged')
+ AND m.state IN ('opened','merged','closed')
@@ 4. Rust-Side Aggregation
+ if state == "closed" { contribution *= closed_mr_multiplier; }
```
3. **Replace `OR` path predicates with index-friendly `UNION ALL` branches**
Analysis: `(new_path ... OR old_path ...)` often degrades index usage in SQLite. Split into two indexed branches and dedupe once. This improves planner stability and latency on large datasets.
```diff
@@ 3. SQL Restructure
-WITH matched_notes AS (
- ... AND (n.position_new_path {path_op} OR n.position_old_path {path_op})
-),
+WITH matched_notes AS (
+ SELECT ... FROM notes n WHERE ... AND n.position_new_path {path_op}
+ UNION ALL
+ SELECT ... FROM notes n WHERE ... AND n.position_old_path {path_op}
+),
+matched_notes_dedup AS (
+ SELECT DISTINCT id, discussion_id, author_username, created_at, project_id
+ FROM matched_notes
+),
@@
- JOIN matched_notes mn ...
+ JOIN matched_notes_dedup mn ...
```
4. **Add canonical path identity (rename-chain support)**
Analysis: Direct `old_path/new_path` matching only handles one-hop rename scenarios. A small alias graph/table built at ingest time gives robust expertise continuity across A→B→C chains and avoids repeated SQL complexity.
```diff
@@ Files to Modify
- 3. src/core/db.rs — Add migration for indexes...
+ 3. src/core/db.rs — Add migration for indexes + path_identity table
+ 4. src/core/ingest/*.rs — populate path_identity on rename events
+ 5. src/cli/commands/who.rs — resolve query path to canonical path_id first
@@ Context
- The fix has three parts:
+ The fix has four parts:
+ - Introduce canonical path identity so multi-hop renames preserve expertise
```
5. **Split scoring engine into a versioned core module**
Analysis: `who.rs` is becoming a mixed CLI/query/math/output surface. Move scoring math and event normalization into `src/core/scoring/` with explicit model versions. This reduces regression risk and enables future model experiments.
```diff
@@ Files to Modify
+ 4. src/core/scoring/mod.rs — model interface + shared types
+ 5. src/core/scoring/model_v2_decay.rs — current implementation
+ 6. src/cli/commands/who.rs — orchestration only
@@ 5b. Score Explainability
+ resolved_input includes scoring_model_version and scoring_model_name
```
6. **Add evidence confidence to reduce sparse-data rank spikes**
Analysis: One recent MR can outrank broader, steadier expertise. Add a confidence factor derived from number of distinct evidence MRs and expose both `score_raw` and `score_adjusted`.
```diff
@@ Scoring Formula
+ confidence(user) = 1 - exp(-evidence_mr_count / 6.0)
+ score_adjusted = score_raw * confidence
@@ 4. Rust-Side Aggregation
+ compute evidence_mr_count from unique MR ids across all signals
+ sort by score_adjusted DESC, then score_raw DESC, then last_seen DESC
@@ 5b. --explain-score
+ include confidence and evidence_mr_count
```
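As a sketch (the `6.0` scale constant is this suggestion's default, not a tuned value):

```rust
// Confidence saturates toward 1.0 as distinct evidence MRs accumulate,
// so one recent MR cannot outrank broad, steady expertise on its own.
fn confidence(evidence_mr_count: u32) -> f64 {
    1.0 - (-f64::from(evidence_mr_count) / 6.0).exp()
}

fn score_adjusted(score_raw: f64, evidence_mr_count: u32) -> f64 {
    score_raw * confidence(evidence_mr_count)
}
```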
7. **Add first-class bot/service-account filtering**
Analysis: Reviewer inflation is not just assignment; bots and automation users can still pollute rankings. Make exclusion explicit and configurable.
```diff
@@ ScoringConfig (config.rs)
+ pub excluded_username_patterns: Vec<String>, // defaults include "*bot*", "renovate", "dependabot"
@@ 3. SQL Restructure
+ AND username NOT MATCHES excluded patterns (applied in Rust post-query or SQL where feasible)
@@ CLI
+ --include-bots (override exclusions)
```
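A dependency-free sketch of the pattern check (a real implementation might use a glob crate instead; names are illustrative):

```rust
// Minimal "*" glob matching: leading/trailing '*' map to ends_with /
// starts_with / contains; a pattern without '*' is an exact match.
fn matches_pattern(pattern: &str, name: &str) -> bool {
    let core = pattern.trim_matches('*');
    match (pattern.starts_with('*'), pattern.ends_with('*')) {
        (true, true) => name.contains(core),
        (true, false) => name.ends_with(core),
        (false, true) => name.starts_with(core),
        (false, false) => name == core,
    }
}

fn is_excluded(patterns: &[&str], username: &str) -> bool {
    patterns.iter().any(|p| matches_pattern(p, username))
}
```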
8. **Tighten reviewer “participated” with substantive-note threshold**
Analysis: A single "LGTM" note shouldn't classify someone as an engaged reviewer on par with real inline review. Use a minimum substantive threshold.
```diff
@@ ScoringConfig (config.rs)
+ pub reviewer_min_note_chars: u32, // default: 20
@@ reviewer_participation CTE
- SELECT DISTINCT ... FROM matched_notes
+ SELECT DISTINCT ... FROM matched_notes
+ WHERE LENGTH(TRIM(body)) >= ?reviewer_min_note_chars
```
9. **Add rollout safety: model compare mode + rank-delta diagnostics**
Analysis: This is a scoring-model migration. You need safe rollout mechanics, not just tests. Add a compare mode so you can inspect rank deltas before forcing v2.
```diff
@@ CLI (who)
+ --scoring-model v1|v2|compare (default: v2)
+ --max-rank-delta-report N (compare mode diagnostics)
@@ Robot output
+ include v1_score, v2_score, rank_delta when --scoring-model compare
```
If you want, I can produce a single consolidated “plan v4” document that applies all nine diffs cleanly into your original markdown.

---
plan: true
title: ""
status: iterating
iteration: 4
target_iterations: 8
beads_revision: 0
related_plans: []
created: 2026-02-08
updated: 2026-02-08
---
# Time-Decay Expert Scoring Model
## Context
The `lore who --path` command currently uses flat weights to score expertise: each authored MR counts as 25 points, each reviewed MR as 10, each inline note as 1 — regardless of when the activity happened. This produces three compounding problems:
1. **Temporal blindness**: Old activity counts the same as recent activity. Someone who authored a file 2 years ago ranks equivalently to someone who wrote it last week.
2. **Reviewer inflation**: Senior reviewers (jdefting, zhayes) who rubber-stamp every MR via assignment accumulate inflated scores indistinguishable from reviewers who actually left substantive inline feedback. The `mr_reviewers` table captures assignment, not engagement.
3. **Path-history blindness**: Renamed or moved files lose historical expertise because signal matching relies on `position_new_path` and `mr_file_changes.new_path` only. A developer who authored the file under its previous name gets zero credit after a rename.
The fix has three parts:
- Apply **exponential half-life decay** to each signal, grounded in cognitive science research
- **Split the reviewer signal** into "participated" (left DiffNotes) vs "assigned-only" (in `mr_reviewers` but no inline comments), with different weights and decay rates
- **Match both old and new paths** in all signal queries AND path resolution probes so expertise survives file renames
## Research Foundation
- **Ebbinghaus Forgetting Curve (1885)**: Memory retention follows exponential decay: `R = 2^(-t/h)` where h is the half-life
- **Generation Effect (Slamecka & Graf, 1978)**: Producing information (authoring code) creates ~2x more durable memory traces than reading it (reviewing)
- **Levels of Processing (Craik & Lockhart, 1972)**: Deeper cognitive engagement creates more durable memories — authoring > reviewing > commenting
- **Half-Life Regression (Settles & Meeder, 2016, Duolingo)**: Exponential decay with per-signal-type half-lives is practical and effective at scale. Chosen over power law for additivity, bounded behavior, and intuitive parameterization
- **Fritz et al. (2010, ICSE)**: "Degree-of-knowledge" model for code familiarity considers both authoring and interaction events with time-based decay
## Scoring Formula
```
score(user, path) = Sum_i( weight_i * 2^(-days_elapsed_i / half_life_i) )
```
For note signals grouped per MR, a diminishing-returns function caps comment storms:
```
note_contribution(mr) = note_bonus * log2(1 + note_count_in_mr) * 2^(-days_elapsed / note_half_life)
```
**Why `log2` instead of `ln`?** With `log2`, a single note contributes exactly `note_bonus * 1.0` (since `log2(2) = 1`), making the `note_bonus` weight directly interpretable as "points per note at count=1." With `ln`, one note contributes `note_bonus * 0.69`, which is unintuitive and means `note_bonus=1` doesn't actually mean "1 point per note." The diminishing-returns curve shape is identical — only the scale factor differs.
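The scale difference in isolation (ignoring decay, with `note_bonus = 1`):

```rust
// With log2, one note is worth exactly 1.0; with ln it is ~0.69.
// The curve shapes are identical up to a constant factor.
fn note_points_log2(count: u32) -> f64 { f64::from(1 + count).log2() }
fn note_points_ln(count: u32) -> f64 { f64::from(1 + count).ln() }
```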
Per-signal contributions (each signal is either per-MR or per-note-group):
| Signal Type | Base Weight | Half-Life | Rationale |
|-------------|-------------|-----------|-----------|
| **Author** (authored MR touching path) | 25 | 180 days | Deep generative engagement; ~50% retention at 6 months |
| **Reviewer Participated** (left DiffNote on MR/path) | 10 | 90 days | Active review engagement; ~50% at 3 months |
| **Reviewer Assigned-Only** (in `mr_reviewers`, no DiffNote on path) | 3 | 45 days | Passive assignment; minimal cognitive engagement, fades fast |
| **Note** (inline DiffNotes on path, grouped per MR) | 1 | 45 days | `log2(1+count)` per MR; diminishing returns prevent comment storms |
**Why split reviewers?** The `mr_reviewers` table records assignment, not engagement. A reviewer who left 5 inline comments on a file has demonstrably more expertise than one who was merely assigned and clicked "approve." The participated signal inherits the old reviewer weight (10) and decay (90 days); the assigned-only signal gets reduced weight (3) and faster decay (45 days) — enough to register but not enough to inflate past actual contributors.
**Why require substantive notes?** Participation is qualified by a minimum note body length (`reviewer_min_note_chars`, default 20). Without this, a single "LGTM" or "+1" comment would promote a reviewer from the 3-point assigned-only tier to the 10-point participated tier — a 3.3x weight increase for zero substantive engagement. The threshold is configurable to accommodate teams with different review conventions.
**Why cap notes per MR?** Without diminishing returns, a back-and-forth thread of 30 comments on a single MR would score 30 note points — disproportionate to the expertise gained. `log2(1 + 30) ≈ 4.95` vs `log2(1 + 1) = 1.0` preserves the signal that more comments = more engagement while preventing outlier MRs from dominating. The 30-note reviewer gets ~5x the credit of a 1-note reviewer, not 30x.
Author/reviewer signals are deduplicated per MR (one signal per distinct MR). Note signals are grouped per (user, MR) and use `log2(1 + count)` scaling.
**Why include closed MRs?** Closed-without-merge MRs represent real review effort and code familiarity even though the code was abandoned. All signals from closed MRs are multiplied by `closed_mr_multiplier` (default 0.5) to reflect this reduced but non-zero contribution. This applies uniformly to author, reviewer, and note signals on closed MRs.
## Files to Modify
1. **`src/core/config.rs`** — Add half-life fields + assigned-only reviewer config to `ScoringConfig`; add config validation
2. **`src/cli/commands/who.rs`** — Core changes:
- Add `half_life_decay()` pure function
- Restructure `query_expert()`: SQL returns hybrid-aggregated signal rows with timestamps (MR-level for author/reviewer, note-count-per-MR for notes), Rust applies decay + `log2(1+count)` + final ranking
- Match both `new_path` and `old_path` in all signal queries (rename awareness)
- Extend rename awareness to `build_path_query()` probes and `suffix_probe()` (not just scoring)
- Split reviewer signal into participated vs assigned-only
- Use state-aware timestamps (`merged_at` for merged MRs, `updated_at` for open MRs)
- Change default `--since` from `"6m"` to `"24m"` (2 years captures all meaningful decayed signals)
- Add `--as-of` flag for reproducible scoring at a fixed timestamp
- Add `--explain-score` flag for per-user score component breakdown
- Sort on raw f64 score, round only for display
- Update tests
3. **`src/core/db.rs`** — Add migration for indexes supporting the new query shapes (dual-path matching, reviewer participation CTE, path resolution probes)
## Implementation Details
### 1. ScoringConfig (config.rs)
Add half-life fields and the new assigned-only reviewer signal. All new fields use `#[serde(default)]` for backward compatibility:
```rust
pub struct ScoringConfig {
pub author_weight: i64, // default: 25
pub reviewer_weight: i64, // default: 10 (participated — left DiffNotes)
pub reviewer_assignment_weight: i64, // default: 3 (assigned-only — no DiffNotes on path)
pub note_bonus: i64, // default: 1
pub author_half_life_days: u32, // default: 180
pub reviewer_half_life_days: u32, // default: 90 (participated)
pub reviewer_assignment_half_life_days: u32, // default: 45 (assigned-only)
pub note_half_life_days: u32, // default: 45
pub closed_mr_multiplier: f64, // default: 0.5 (applied to closed-without-merge MRs)
pub reviewer_min_note_chars: u32, // default: 20 (minimum note body length to count as participation)
}
```
**Config validation**: Add a `validate_scoring()` call in `Config::load_from_path()` after deserialization:
- All `*_half_life_days` must be > 0 (prevents division by zero in decay function)
- All `*_weight` / `*_bonus` must be >= 0 (negative weights produce nonsensical scores)
- `closed_mr_multiplier` must be in `(0.0, 1.0]` (0 would discard closed MRs entirely; >1 would over-weight them)
- `reviewer_min_note_chars` must be >= 0 (0 disables the filter; typical useful values: 10-50)
- Return `LoreError::ConfigInvalid` with a clear message on failure
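A minimal sketch of `validate_scoring()` under these rules (the struct is repeated here so the example is self-contained; the `LoreError::ConfigInvalid` plumbing is simplified to a `String`):

```rust
#[derive(Clone)]
pub struct ScoringConfig {
    pub author_weight: i64,
    pub reviewer_weight: i64,
    pub reviewer_assignment_weight: i64,
    pub note_bonus: i64,
    pub author_half_life_days: u32,
    pub reviewer_half_life_days: u32,
    pub reviewer_assignment_half_life_days: u32,
    pub note_half_life_days: u32,
    pub closed_mr_multiplier: f64,
    pub reviewer_min_note_chars: u32, // u32, so the >= 0 rule holds by construction
}

fn validate_scoring(cfg: &ScoringConfig) -> Result<(), String> {
    let half_lives = [
        ("author_half_life_days", cfg.author_half_life_days),
        ("reviewer_half_life_days", cfg.reviewer_half_life_days),
        ("reviewer_assignment_half_life_days", cfg.reviewer_assignment_half_life_days),
        ("note_half_life_days", cfg.note_half_life_days),
    ];
    for (name, hl) in half_lives {
        if hl == 0 {
            return Err(format!("{name} must be > 0"));
        }
    }
    let weights = [cfg.author_weight, cfg.reviewer_weight,
                   cfg.reviewer_assignment_weight, cfg.note_bonus];
    if weights.iter().any(|w| *w < 0) {
        return Err("weights and bonuses must be >= 0".into());
    }
    if !(cfg.closed_mr_multiplier > 0.0 && cfg.closed_mr_multiplier <= 1.0) {
        return Err("closed_mr_multiplier must be in (0.0, 1.0]".into());
    }
    Ok(())
}
```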
### 2. Decay Function (who.rs)
```rust
fn half_life_decay(elapsed_ms: i64, half_life_days: u32) -> f64 {
let days = (elapsed_ms as f64 / 86_400_000.0).max(0.0);
let hl = f64::from(half_life_days);
if hl <= 0.0 { return 0.0; }
2.0_f64.powf(-days / hl)
}
```
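A quick sanity check of the defaults (the function is repeated so the snippet stands alone):

```rust
fn half_life_decay(elapsed_ms: i64, half_life_days: u32) -> f64 {
    let days = (elapsed_ms as f64 / 86_400_000.0).max(0.0);
    let hl = f64::from(half_life_days);
    if hl <= 0.0 { return 0.0; }
    2.0_f64.powf(-days / hl)
}

fn main() {
    const DAY_MS: i64 = 86_400_000;
    // One half-life halves the weight; two quarter it.
    assert!((half_life_decay(180 * DAY_MS, 180) - 0.5).abs() < 1e-12);
    assert!((half_life_decay(360 * DAY_MS, 180) - 0.25).abs() < 1e-12);
    // Fresh or future-dated activity is clamped to full weight.
    assert_eq!(half_life_decay(0, 180), 1.0);
    assert_eq!(half_life_decay(-DAY_MS, 180), 1.0);
}
```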
### 3. SQL Restructure (who.rs)
The SQL uses **CTE-based dual-path matching** and **hybrid aggregation**. Rather than repeating `OR old_path` in every signal subquery, two foundational CTEs (`matched_notes`, `matched_file_changes`) centralize path matching. A third CTE (`reviewer_participation`) precomputes which reviewers actually left DiffNotes, avoiding correlated `EXISTS`/`NOT EXISTS` subqueries.
MR-level signals return one row per (username, signal, mr_id) with a timestamp; note signals return one row per (username, mr_id) with `note_count` and `max_ts`. This keeps row counts bounded (dozens to low hundreds per path) while giving Rust the data it needs for decay and `log2(1+count)`.
```sql
WITH matched_notes AS (
-- Centralize dual-path matching for DiffNotes
SELECT n.id, n.discussion_id, n.author_username, n.created_at,
n.position_new_path, n.position_old_path, n.project_id
FROM notes n
WHERE n.note_type = 'DiffNote'
AND n.is_system = 0
AND n.author_username IS NOT NULL
AND n.created_at >= ?2
AND n.created_at <= ?4
AND (?3 IS NULL OR n.project_id = ?3)
AND (n.position_new_path {path_op} OR n.position_old_path {path_op})
),
matched_file_changes AS (
-- Centralize dual-path matching for file changes
SELECT fc.merge_request_id, fc.project_id
FROM mr_file_changes fc
WHERE (?3 IS NULL OR fc.project_id = ?3)
AND (fc.new_path {path_op} OR fc.old_path {path_op})
),
reviewer_participation AS (
-- Precompute which (mr_id, username) pairs have substantive DiffNote participation.
-- Materialized once, then joined against mr_reviewers to classify.
-- The LENGTH filter excludes trivial notes ("LGTM", "+1", emoji-only) from qualifying
-- a reviewer as "participated." Without this, a single "LGTM" would promote an assigned
-- reviewer from 3-point to 10-point weight, defeating the purpose of the split.
-- Note: mn.id refers back to notes.id, so we join notes to access the body column
-- (not carried in matched_notes to avoid bloating that CTE with body text).
SELECT DISTINCT d.merge_request_id AS mr_id, mn.author_username AS username
FROM matched_notes mn
JOIN discussions d ON mn.discussion_id = d.id
JOIN notes n_body ON mn.id = n_body.id
WHERE d.merge_request_id IS NOT NULL
AND LENGTH(TRIM(COALESCE(n_body.body, ''))) >= {reviewer_min_note_chars}
),
raw AS (
-- Signal 1: DiffNote reviewer (individual notes for note_cnt)
SELECT mn.author_username AS username, 'diffnote_reviewer' AS signal,
m.id AS mr_id, mn.id AS note_id, mn.created_at AS seen_at, m.state AS mr_state
FROM matched_notes mn
JOIN discussions d ON mn.discussion_id = d.id
JOIN merge_requests m ON d.merge_request_id = m.id
WHERE (m.author_username IS NULL OR mn.author_username != m.author_username)
AND m.state IN ('opened','merged','closed')
UNION ALL
-- Signal 2: DiffNote MR author
SELECT m.author_username AS username, 'diffnote_author' AS signal,
m.id AS mr_id, NULL AS note_id, MAX(mn.created_at) AS seen_at, m.state AS mr_state
FROM merge_requests m
JOIN discussions d ON d.merge_request_id = m.id
JOIN matched_notes mn ON mn.discussion_id = d.id
WHERE m.author_username IS NOT NULL
AND m.state IN ('opened','merged','closed')
GROUP BY m.author_username, m.id
UNION ALL
-- Signal 3: MR author via file changes (state-aware timestamp)
SELECT m.author_username AS username, 'file_author' AS signal,
m.id AS mr_id, NULL AS note_id,
{state_aware_ts} AS seen_at, m.state AS mr_state
FROM matched_file_changes mfc
JOIN merge_requests m ON mfc.merge_request_id = m.id
WHERE m.author_username IS NOT NULL
AND m.state IN ('opened','merged','closed')
AND {state_aware_ts} >= ?2
AND {state_aware_ts} <= ?4
UNION ALL
-- Signal 4a: Reviewer participated (in mr_reviewers AND left DiffNotes on path)
SELECT r.username AS username, 'file_reviewer_participated' AS signal,
m.id AS mr_id, NULL AS note_id,
{state_aware_ts} AS seen_at, m.state AS mr_state
FROM matched_file_changes mfc
JOIN merge_requests m ON mfc.merge_request_id = m.id
JOIN mr_reviewers r ON r.merge_request_id = m.id
JOIN reviewer_participation rp ON rp.mr_id = m.id AND rp.username = r.username
WHERE r.username IS NOT NULL
AND (m.author_username IS NULL OR r.username != m.author_username)
AND m.state IN ('opened','merged','closed')
AND {state_aware_ts} >= ?2
AND {state_aware_ts} <= ?4
UNION ALL
-- Signal 4b: Reviewer assigned-only (in mr_reviewers, NO DiffNotes on path)
SELECT r.username AS username, 'file_reviewer_assigned' AS signal,
m.id AS mr_id, NULL AS note_id,
{state_aware_ts} AS seen_at, m.state AS mr_state
FROM matched_file_changes mfc
JOIN merge_requests m ON mfc.merge_request_id = m.id
JOIN mr_reviewers r ON r.merge_request_id = m.id
LEFT JOIN reviewer_participation rp ON rp.mr_id = m.id AND rp.username = r.username
WHERE rp.username IS NULL -- NOT in participation set
AND r.username IS NOT NULL
AND (m.author_username IS NULL OR r.username != m.author_username)
AND m.state IN ('opened','merged','closed')
AND {state_aware_ts} >= ?2
AND {state_aware_ts} <= ?4
),
aggregated AS (
-- MR-level signals: 1 row per (username, signal_class, mr_id) with MAX(ts)
SELECT username, signal, mr_id, 1 AS qty, MAX(seen_at) AS ts, mr_state
FROM raw WHERE signal != 'diffnote_reviewer'
GROUP BY username, signal, mr_id
UNION ALL
-- Note signals: 1 row per (username, mr_id) with note_count and max_ts
SELECT username, 'note_group' AS signal, mr_id, COUNT(*) AS qty, MAX(seen_at) AS ts, mr_state
FROM raw WHERE signal = 'diffnote_reviewer' AND note_id IS NOT NULL
GROUP BY username, mr_id
)
SELECT username, signal, mr_id, qty, ts, mr_state FROM aggregated WHERE username IS NOT NULL
```
Where `{state_aware_ts}` is the state-aware timestamp expression (defined in the next section), `{path_op}` is either `= ?1` or `LIKE ?1 ESCAPE '\\'` depending on the path query type, `?4` is the `as_of_ms` upper bound (defaults to `now_ms` when `--as-of` is not specified), and `{reviewer_min_note_chars}` is the configured `reviewer_min_note_chars` value (default 20, inlined as a literal in the SQL string). The `BETWEEN ?2 AND ?4` pattern ensures that when `--as-of` is set to a past date, events after that date are excluded — without this, "future" events would leak in with full weight, breaking reproducibility.
**Rationale for CTE-based dual-path matching**: The previous approach (repeating `OR old_path` in every signal subquery) duplicated the path matching logic 5 times. Factoring it into `matched_notes` and `matched_file_changes` CTEs means path matching is defined once, the indexes are hit once, and adding future path resolution logic (e.g., alias chains) only requires changes in one place.
**Index optimization fallback (UNION ALL split)**: SQLite's query planner sometimes struggles with `OR` across two indexed columns, falling back to a full table scan instead of using either index. If EXPLAIN QUERY PLAN shows this during step 6 verification, replace the `OR`-based CTEs with a `UNION ALL` split + dedup pattern:
```sql
matched_notes AS (
SELECT ... FROM notes n WHERE ... AND n.position_new_path {path_op}
UNION ALL
SELECT ... FROM notes n WHERE ... AND n.position_old_path {path_op}
),
matched_notes_dedup AS (
SELECT DISTINCT id, discussion_id, author_username, created_at, project_id
FROM matched_notes
),
```
This ensures each branch can use its respective index independently. The dedup CTE prevents double-counting when `old_path = new_path` (no rename). Start with the simpler `OR` approach and only switch to `UNION ALL` if query plans confirm the degradation.
**Rationale for precomputed participation set**: The previous approach used correlated `EXISTS`/`NOT EXISTS` subqueries to classify reviewers. The `reviewer_participation` CTE materializes the set of `(mr_id, username)` pairs from matched DiffNotes once, then signal 4a JOINs against it (participated) and signal 4b LEFT JOINs with `IS NULL` (assigned-only). This avoids per-reviewer-row correlated scans, is easier to reason about, and produces the same exhaustive split — every `mr_reviewers` row falls into exactly one bucket.
**Rationale for hybrid over fully-raw**: Pre-aggregating note counts in SQL prevents row explosion from heavy DiffNote volume on frequently-discussed paths. MR-level signals are already 1-per-MR by nature (deduped via GROUP BY in each subquery). This keeps memory and latency predictable regardless of review activity density.
**Path rename awareness**: Both `matched_notes` and `matched_file_changes` CTEs match against both old and new path columns:
- Notes: `(n.position_new_path {path_op} OR n.position_old_path {path_op})`
- File changes: `(fc.new_path {path_op} OR fc.old_path {path_op})`
Both columns already exist in the schema (`notes.position_old_path` from migration 002, `mr_file_changes.old_path` from migration 016). The `OR` match ensures expertise is credited even when a file was renamed after the work was done. For prefix queries (`--path src/foo/`), the `LIKE` operator applies to both columns identically.
**Signal 4 splits into two**: The current signal 4 (`file_reviewer`) joins `mr_reviewers` but doesn't distinguish participation. In the new plan:
- **Signal 4a** (`file_reviewer_participated`): User is in `mr_reviewers` AND appears in the `reviewer_participation` CTE (left DiffNotes on the path for that MR). Gets `reviewer_weight` (10) and `reviewer_half_life_days` (90).
- **Signal 4b** (`file_reviewer_assigned`): User is in `mr_reviewers` but NOT in the `reviewer_participation` CTE. Gets `reviewer_assignment_weight` (3) and `reviewer_assignment_half_life_days` (45).
### 3a. Path Resolution Probes (who.rs)
Rename awareness must extend beyond scoring queries to the path resolution layer. Currently `build_path_query()` (line 457) and `suffix_probe()` (line 584) only check `position_new_path` and `new_path`. If a user queries an old path name, these probes return "not found" and the scoring query never runs.
**Changes to `build_path_query()`**:
- **Probe 1 (exact_exists)**: Add `OR position_old_path = ?1` to the notes query and `OR old_path = ?1` to the `mr_file_changes` query. This detects files that existed under the queried name even if they've since been renamed.
- **Probe 2 (prefix_exists)**: Add `OR position_old_path LIKE ?1 ESCAPE '\\'` and `OR old_path LIKE ?1 ESCAPE '\\'` to the respective queries.
**Changes to `suffix_probe()`**:
The UNION query inside `suffix_probe()` currently only selects `position_new_path` from notes and `new_path` from file changes. Add two additional UNION branches:
```sql
UNION
SELECT position_old_path AS full_path FROM notes
WHERE note_type = 'DiffNote' AND is_system = 0
AND position_old_path IS NOT NULL
AND (position_old_path LIKE ?1 ESCAPE '\\' OR position_old_path = ?2)
AND (?3 IS NULL OR project_id = ?3)
UNION
SELECT old_path AS full_path FROM mr_file_changes
WHERE old_path IS NOT NULL
AND (old_path LIKE ?1 ESCAPE '\\' OR old_path = ?2)
AND (?3 IS NULL OR project_id = ?3)
```
This ensures that querying by an old filename (e.g., `login.rs` after it was renamed to `auth.rs`) still resolves to a usable path for scoring. The UNION deduplicates so the same path appearing in both old and new columns doesn't cause false ambiguity.
**State-aware timestamps for file-change signals (signals 3, 4a, 4b)**: Replace `m.updated_at` with a state-aware expression:
```sql
CASE
WHEN m.state = 'merged' THEN COALESCE(m.merged_at, m.created_at)
WHEN m.state = 'closed' THEN COALESCE(m.closed_at, m.created_at)
ELSE COALESCE(m.updated_at, m.created_at) -- opened / other
END AS activity_ts
```
**Rationale**: `updated_at` is noisy for merged MRs — it changes on label edits, title changes, rebases, and metadata touches, creating false recency. `merged_at` is the best indicator of when code expertise was formed (the moment the code entered the branch). But for **open MRs**, `updated_at` is actually the right signal because it reflects ongoing active work. `closed_at` anchors closed-without-merge MRs to their closure time (these represent review effort even if the code was abandoned). Each state gets the timestamp that best represents when expertise was last exercised.
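The same selection, mirrored in Rust for clarity (a hypothetical helper, not existing code):

```rust
// Mirror of the SQL CASE: pick the timestamp that best represents when
// expertise was last exercised, falling back to created_at when NULL.
fn activity_ts(state: &str, merged_at: Option<i64>, closed_at: Option<i64>,
               updated_at: Option<i64>, created_at: i64) -> i64 {
    match state {
        "merged" => merged_at.unwrap_or(created_at),
        "closed" => closed_at.unwrap_or(created_at),
        _ => updated_at.unwrap_or(created_at), // opened / other
    }
}
```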
### 4. Rust-Side Aggregation (who.rs)
For each username, accumulate into a struct with:
- **Author MRs**: `HashMap<i64, (i64, String)>` (mr_id -> (max timestamp, mr_state)) from `diffnote_author` + `file_author` signals
- **Reviewer Participated MRs**: `HashMap<i64, (i64, String)>` from `diffnote_reviewer` + `file_reviewer_participated` signals
- **Reviewer Assigned-Only MRs**: `HashMap<i64, (i64, String)>` from `file_reviewer_assigned` signals (excluding any MR already in participated set)
- **Notes per MR**: `HashMap<i64, (u32, i64, String)>` (mr_id -> (count, max_ts, mr_state)) from `note_group` rows in the aggregated query (already grouped per user+MR with note_count in `qty`). Used for `log2(1 + count)` diminishing returns.
- **Last seen**: max of all timestamps
- **Components** (when `--explain-score`): Track per-component f64 subtotals for `author`, `reviewer_participated`, `reviewer_assigned`, `notes`
The `mr_state` field from each SQL row is stored alongside the timestamp so the Rust-side can apply `closed_mr_multiplier` when `mr_state == "closed"`.
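The accumulation shape described above can be sketched as follows; struct and method names are illustrative, not the actual `who.rs` code:

```rust
use std::collections::HashMap;

/// Per-user accumulator (illustrative field names).
#[derive(Default)]
struct UserAccum {
    /// mr_id -> (max activity timestamp ms, mr_state)
    author_mrs: HashMap<i64, (i64, String)>,
    reviewer_participated: HashMap<i64, (i64, String)>,
    reviewer_assigned: HashMap<i64, (i64, String)>,
    /// mr_id -> (note_count, max timestamp ms, mr_state)
    notes_per_mr: HashMap<i64, (u32, i64, String)>,
    last_seen: i64,
}

impl UserAccum {
    /// Record an author signal, keeping the max timestamp per MR
    /// and updating the user's overall last-seen timestamp.
    fn record_author(&mut self, mr_id: i64, ts: i64, state: &str) {
        let entry = self
            .author_mrs
            .entry(mr_id)
            .or_insert((ts, state.to_string()));
        entry.0 = entry.0.max(ts);
        self.last_seen = self.last_seen.max(ts);
    }
}
```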
Compute score as `f64`. Each MR-level contribution is multiplied by `closed_mr_multiplier` (default 0.5) when the MR's state is `"closed"`:
```
state_mult(mr) = if mr.state == "closed" { closed_mr_multiplier } else { 1.0 }
raw_score =
sum(author_weight * state_mult(mr) * decay(now - ts, author_hl) for (mr, ts) in author_mrs)
+ sum(reviewer_weight * state_mult(mr) * decay(now - ts, reviewer_hl) for (mr, ts) in reviewer_participated)
+ sum(reviewer_assignment_weight * state_mult(mr) * decay(now - ts, reviewer_assignment_hl) for (mr, ts) in reviewer_assigned)
+ sum(note_bonus * state_mult(mr) * log2(1 + count) * decay(now - ts, note_hl) for (mr, count, ts) in notes_per_mr)
```
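The decay function and one MR-level term of the formula above can be sketched in Rust. The weights here (author 25, closed multiplier 0.5) are the defaults mentioned in this plan; the function names are illustrative:

```rust
/// Exponential half-life decay: 1.0 at elapsed = 0, 0.5 at one half-life.
/// A zero or negative half-life yields 0.0 rather than dividing by zero.
fn decay(elapsed_days: f64, half_life_days: f64) -> f64 {
    if half_life_days <= 0.0 {
        return 0.0;
    }
    0.5_f64.powf(elapsed_days.max(0.0) / half_life_days)
}

/// One MR-level contribution: weight, state multiplier, then decay.
fn contribution(
    weight: f64,
    mr_state: &str,
    closed_mr_multiplier: f64,
    elapsed_days: f64,
    half_life_days: f64,
) -> f64 {
    let state_mult = if mr_state == "closed" { closed_mr_multiplier } else { 1.0 };
    weight * state_mult * decay(elapsed_days, half_life_days)
}

/// Diminishing-returns note term: log2(1 + count), so one note contributes
/// exactly 1.0 unit and twenty notes contribute ~4.4, not 20.
fn note_term(note_bonus: f64, note_count: u32) -> f64 {
    note_bonus * (1.0 + f64::from(note_count)).log2()
}
```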
**Why include closed MRs?** A closed-without-merge MR still represents review effort and code familiarity — the reviewer read the diff, left comments, and engaged with the code even though it was ultimately abandoned. Excluding closed MRs entirely (the previous plan's approach) discarded this signal. The `closed_mr_multiplier` (default 0.5) halves the contribution, reflecting that the code never landed but the reviewer's cognitive engagement was real. This also eliminates the dead-code inconsistency where the state-aware CASE expression handled `closed` but the WHERE clause excluded it.
**Sort on raw `f64` score**: `(raw_score DESC, last_seen DESC, username ASC)`. This prevents false ties from premature rounding. Only round to `i64` for the `Expert.score` display field after sorting and truncation. The robot JSON `score` field stays integer for backward compatibility. When `--explain-score` is active, also include `score_raw` (the unrounded f64) alongside `score` so the component totals can be verified without rounding noise.
Compute counts from the accumulated data:
- `review_mr_count = reviewer_participated.len() + reviewer_assigned.len()`
- `review_note_count = notes_per_mr.values().map(|(count, _, _)| count).sum()`
- `author_mr_count = author_mrs.len()`
Truncate to limit after sorting.
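The three-key sort can be expressed with `f64::total_cmp`, which avoids both NaN panics and premature rounding. A sketch with illustrative struct and field names:

```rust
/// One ranked expert (illustrative; the real struct lives in who.rs).
struct Ranked {
    username: String,
    raw_score: f64,
    last_seen: i64,
}

/// Sort on (raw_score DESC, last_seen DESC, username ASC), then truncate.
fn rank(mut experts: Vec<Ranked>, limit: usize) -> Vec<Ranked> {
    experts.sort_by(|a, b| {
        b.raw_score
            .total_cmp(&a.raw_score) // descending raw score
            .then(b.last_seen.cmp(&a.last_seen)) // descending recency
            .then(a.username.cmp(&b.username)) // ascending username
    });
    experts.truncate(limit);
    experts
}
```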
### 5. Default --since Change
Expert mode: `"6m"` -> `"24m"` (line 289 in who.rs).
At 2 years, author decay = 6%, reviewer decay = 0.4%, note decay = 0.006% — negligible, good cutoff.
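The cutoff arithmetic can be checked directly. The 180-day author and 90-day reviewer half-lives used here are inferred from the worked score expectations in the test section of this plan, not independently confirmed defaults:

```rust
/// Half-life decay, as used throughout this plan.
fn decay(elapsed_days: f64, half_life_days: f64) -> f64 {
    0.5_f64.powf(elapsed_days / half_life_days)
}

/// Residual weight remaining at the 24-month default window edge.
fn residual_at_two_years(half_life_days: f64) -> f64 {
    decay(730.0, half_life_days)
}
```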
**Diagnostic escape hatch**: Add `--all-history` flag (conflicts with `--since`) that sets `since_ms = 0`, capturing all data regardless of age. Useful for debugging scoring anomalies and validating the decay model against known experts. The `since_mode` field in robot JSON reports `"all"` when this flag is active.
### 5a. Reproducible Scoring via `--as-of`
Add `--as-of <RFC3339|YYYY-MM-DD>` flag that overrides the `now_ms` reference point used for decay calculations. When set:
- All event selection is bounded by `[since_ms, as_of_ms]` — events after `as_of_ms` are excluded from SQL results entirely (not just decayed)
- All decay computations use `as_of_ms` instead of `SystemTime::now()`
- The `--since` window is calculated relative to `as_of_ms` (not wall clock)
- Robot JSON `resolved_input` includes `as_of_ms` and `as_of_iso` fields
**Rationale**: Decayed scoring is time-sensitive by nature. Without a fixed reference point, the same query run minutes apart produces different rankings, making debugging and test reproducibility difficult. `--as-of` pins the clock so that results are deterministic for a given dataset. The upper-bound filter in SQL is critical — without it, events after the as-of date would enter with full weight (since `elapsed.max(0.0)` clamps negative elapsed time to zero), breaking the reproducibility guarantee.
Implementation: Parse the flag in `run_who()`, compute `as_of_ms: i64`, and thread it through to `query_expert()` where it replaces `now_ms()` and is bound as `?4` in all SQL queries. When the flag is absent, `?4` defaults to `now_ms()` (wall clock), which makes the upper bound transparent — all events are within the window by definition. The flag is compatible with all modes but primarily useful in expert mode.
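The window arithmetic can be sketched as pure functions (hypothetical names; the real plumbing threads these values through `query_expert()` as SQL bind parameters):

```rust
/// Resolve the event-selection window [since_ms, as_of_ms].
/// When --as-of is absent, the reference point is the wall clock,
/// and --since is interpreted relative to that reference point.
fn resolve_window(since_duration_ms: i64, as_of_ms: Option<i64>, wall_clock_ms: i64) -> (i64, i64) {
    let reference = as_of_ms.unwrap_or(wall_clock_ms);
    (reference - since_duration_ms, reference)
}

/// Mirrors the SQL bound `ts BETWEEN ?2 AND ?4`: events after as_of
/// are excluded entirely, not merely decayed.
fn in_window(ts: i64, window: (i64, i64)) -> bool {
    ts >= window.0 && ts <= window.1
}
```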
### 5b. Score Explainability via `--explain-score`
Add `--explain-score` flag that augments each expert result with a per-user component breakdown:
```json
{
"username": "jsmith",
"score": 42,
"score_raw": 42.0,
"components": {
"author": 28.5,
"reviewer_participated": 8.2,
"reviewer_assigned": 1.8,
"notes": 3.5
}
}
```
**Scope for this iteration**: Component breakdown only (4 floats per user). No top-evidence MRs, no decay curves, no per-MR drill-down. Those are v2 features if scoring disputes arise frequently.
**Flag conflicts**: `--explain-score` is mutually exclusive with `--detail`. Both augment per-user output in different ways; combining them would produce confusing overlapping output. Clap `conflicts_with` enforces this at parse time.
**Human output**: When `--explain-score` is active in human mode, append a parenthetical after each score: `42 (author:28.5 review:10.0 notes:3.5)`.
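A sketch of that parenthetical formatter, assuming (as the example above suggests) that the two reviewer components are summed into one `review` figure for display:

```rust
/// Format the --explain-score parenthetical for human mode.
/// (Hypothetical helper; real output code may differ.)
fn explain_suffix(author: f64, reviewer_participated: f64, reviewer_assigned: f64, notes: f64) -> String {
    format!(
        "(author:{:.1} review:{:.1} notes:{:.1})",
        author,
        reviewer_participated + reviewer_assigned, // collapse the reviewer split
        notes
    )
}
```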
**Robot output**: Add `score_raw` (unrounded f64) and `components` object to each expert entry. Only present when `--explain-score` is active (no payload bloat by default). The `resolved_input` section also includes `scoring_model_version: 2` to distinguish from the v1 flat-weight model, enabling robot clients to adapt parsing.
**Rationale**: Multi-signal decayed ranking will be disputed without decomposition. Showing which signal drives a user's score makes results actionable and builds trust in the model. Keeping scope minimal avoids the output format complexity that originally motivated deferral.
### 6. Index Migration (db.rs)
Add a new migration to support the restructured query patterns. The dual-path matching CTEs and `reviewer_participation` CTE introduce query shapes that need index coverage:
```sql
-- Support dual-path matching on DiffNotes (old_path leg of the OR in matched_notes CTE)
CREATE INDEX IF NOT EXISTS idx_notes_old_path_author
ON notes(position_old_path, author_username, created_at)
WHERE note_type = 'DiffNote' AND is_system = 0 AND position_old_path IS NOT NULL;
-- Support dual-path matching on file changes (old_path leg of the OR in matched_file_changes CTE)
CREATE INDEX IF NOT EXISTS idx_mfc_old_path_project_mr
ON mr_file_changes(old_path, project_id, merge_request_id)
WHERE old_path IS NOT NULL;
-- Support new_path matching on file changes (ensure index parity with old_path)
-- Existing indexes may not have optimal column order for the CTE pattern.
CREATE INDEX IF NOT EXISTS idx_mfc_new_path_project_mr
ON mr_file_changes(new_path, project_id, merge_request_id);
-- Support reviewer_participation CTE: joining matched_notes -> discussions -> mr_reviewers
-- notes.discussion_id (NOT noteable_id, which doesn't exist in the schema) is the FK to discussions
CREATE INDEX IF NOT EXISTS idx_notes_diffnote_discussion_author
ON notes(discussion_id, author_username, created_at)
WHERE note_type = 'DiffNote' AND is_system = 0;
```
**Rationale**: The existing indexes cover `position_new_path` and `new_path` but not their `old_path` counterparts. Without these, the `OR old_path` clauses would force table scans on renamed files. The `reviewer_participation` CTE joins `matched_notes` -> `discussions` -> `merge_requests`, so an index on `(discussion_id, author_username)` speeds up the CTE materialization.
**Schema note**: The `notes` table uses `discussion_id` as its FK to `discussions`, which in turn has `merge_request_id`. There is no `noteable_id` column on `notes`. The previous plan revision incorrectly referenced `noteable_id` — this is corrected.
**Removed**: The `idx_mr_state_timestamps` composite index on `merge_requests(state, merged_at, closed_at, updated_at, created_at)` was removed. MR lookups in the scoring query are always id-driven (joining from `matched_file_changes` or `discussions`), so the state-aware CASE expression operates on rows already fetched by primary key. A low-selectivity composite index on 5 columns would consume space without improving any query path.
Partial indexes (with `WHERE` clauses) keep the index size minimal — only DiffNote rows and non-null old_path rows are indexed.
### 7. Test Helpers
Add timestamp-aware variants:
- `insert_mr_at(conn, id, project_id, iid, author, state, updated_at_ms)`
- `insert_diffnote_at(conn, id, discussion_id, project_id, author, file_path, body, created_at_ms)`
### 8. New Tests (TDD)
#### Example-based tests
**`test_half_life_decay_math`**: Verify the pure function:
- elapsed=0 -> 1.0
- elapsed=half_life -> 0.5
- elapsed=2*half_life -> 0.25
- half_life_days=0 -> 0.0 (guard against div-by-zero)
**`test_expert_scores_decay_with_time`**: Two authors, one recent (10 days), one old (360 days). Recent author should score ~24, old author ~6.
**`test_expert_reviewer_decays_faster_than_author`**: Same MR, same age (90 days). Author retains ~18 points, reviewer retains ~5 points. Author dominates clearly.
**`test_reviewer_participated_vs_assigned_only`**: Two reviewers on the same MR at the same age. One left DiffNotes (participated), one didn't (assigned-only). Participated reviewer should score ~10 * decay, assigned-only should score ~3 * decay. Verifies the split works end-to-end.
**`test_note_diminishing_returns_per_mr`**: One reviewer with 1 note on MR-A and another with 20 notes on MR-B, both at same age. The 20-note reviewer should score `log2(21)/log2(2) ≈ 4.4x` the 1-note reviewer, NOT 20x. Validates the `log2(1+count)` cap.
**`test_config_validation_rejects_zero_half_life`**: `ScoringConfig` with `author_half_life_days = 0` should return `ConfigInvalid` error.
**`test_file_change_timestamp_uses_merged_at`**: An MR with `merged_at` set and `state = 'merged'` should use `merged_at` timestamp, not `updated_at`. Verify by setting `merged_at` to old date and `updated_at` to recent date — score should reflect the old date.
**`test_open_mr_uses_updated_at`**: An MR with `state = 'opened'` should use `updated_at` (not `created_at`). Verify that an open MR with recent `updated_at` scores higher than one with the same `created_at` but older `updated_at`.
**`test_old_path_match_credits_expertise`**: Insert a DiffNote with `position_old_path = "src/old.rs"` and `position_new_path = "src/new.rs"`. Query `--path src/old.rs` — the author should appear. Query `--path src/new.rs` — same author should also appear. Validates dual-path matching.
**`test_explain_score_components_sum_to_total`**: With `--explain-score`, verify that `components.author + components.reviewer_participated + components.reviewer_assigned + components.notes` equals the reported `score_raw` (within f64 rounding tolerance). Note: the closed_mr_multiplier is already folded into the per-component subtotals, not tracked as a separate component.
**`test_as_of_produces_deterministic_results`**: Insert data at known timestamps. Run `query_expert` twice with the same `--as-of` value — results must be identical. Then run with a later `--as-of` — scores should be lower (more decay).
**`test_old_path_probe_exact_and_prefix`**: Insert a DiffNote with `position_old_path = "src/old/foo.rs"` and `position_new_path = "src/new/foo.rs"`. Call `build_path_query(conn, "src/old/foo.rs")` — should resolve as exact file (not "not found"). Call `build_path_query(conn, "src/old/")` — should resolve as prefix. Validates that the path resolution probes now check old_path columns.
**`test_suffix_probe_uses_old_path_sources`**: Insert a file change with `old_path = "legacy/utils.rs"` and `new_path = "src/utils.rs"`. Call `build_path_query(conn, "legacy/utils.rs")` — should resolve via exact probe on old_path. Call `build_path_query(conn, "utils.rs")` — suffix probe should find both `legacy/utils.rs` and `src/utils.rs` and either resolve uniquely (if deduplicated) or report ambiguity.
**`test_since_relative_to_as_of_clock`**: Insert data at timestamps T1 and T2 (T2 > T1). With `--as-of T2` and `--since 30d`, the window is `[T2 - 30d, T2]`, not `[now - 30d, now]`. Verify that data at T1 is included or excluded based on the as-of-relative window, not the wall clock window.
**`test_explain_and_detail_are_mutually_exclusive`**: Parsing `--explain-score --detail` should fail with a conflict error from clap.
**`test_trivial_note_does_not_count_as_participation`**: A reviewer who left only a short note ("LGTM", 4 chars) on an MR should be classified as assigned-only, not participated, when `reviewer_min_note_chars = 20`. A reviewer who left a substantive note (>= 20 chars) should be classified as participated. Validates the LENGTH threshold in the `reviewer_participation` CTE.
**`test_closed_mr_multiplier`**: Two identical MRs (same author, same age, same path). One is `merged`, one is `closed`. The merged MR should contribute `author_weight * decay(...)`, the closed MR should contribute `author_weight * closed_mr_multiplier * decay(...)`. With default multiplier 0.5, the closed MR contributes half.
**`test_as_of_excludes_future_events`**: Insert events at timestamps T1 (past) and T2 (future relative to as-of). With `--as-of` set between T1 and T2, only T1 events should appear in results. T2 events must be excluded entirely, not just decayed. Validates the upper-bound filtering in SQL.
**`test_null_timestamp_fallback_to_created_at`**: Insert a merged MR with `merged_at = NULL` (edge case: old data before the column was populated). The state-aware timestamp should fall back to `created_at`. Verify the score reflects `created_at`, not 0 or a panic.
#### Invariant tests (regression safety for ranking systems)
**`test_score_monotonicity_by_age`**: For any single signal type, an older timestamp must never produce a higher score than a newer timestamp with the same weight and half-life. Generate N random (age, half_life) pairs and assert `decay(older) <= decay(newer)` for all.
**`test_row_order_independence`**: Insert the same set of signals in two different orders (e.g., reversed). Run `query_expert` on both — the resulting rankings (username order + scores) must be identical. Validates that neither SQL ordering nor HashMap iteration order affects final output.
**`test_reviewer_split_is_exhaustive`**: For a reviewer assigned to an MR, they must appear in exactly one of: participated (has substantive DiffNotes meeting `reviewer_min_note_chars`) or assigned-only (no DiffNotes, or only trivial ones below the threshold). Never both, never neither. Test three cases: (1) reviewer with substantive DiffNotes -> participated only, (2) reviewer with no DiffNotes -> assigned-only only, (3) reviewer with only trivial notes ("LGTM") -> assigned-only only.
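The monotonicity invariant can be sketched with a small deterministic generator, avoiding a `rand` dependency for illustration (the real test may use proper random pairs):

```rust
/// Half-life decay (same shape as elsewhere in this plan).
fn decay(elapsed_days: f64, half_life_days: f64) -> f64 {
    if half_life_days <= 0.0 {
        return 0.0;
    }
    0.5_f64.powf(elapsed_days.max(0.0) / half_life_days)
}

/// Property check: for pseudo-random (age, half_life) pairs, an older event
/// never out-scores a newer one at the same weight and half-life.
fn decay_is_monotonic_by_age(iterations: u32) -> bool {
    // Simple LCG for reproducible pseudo-random values in [0, 1).
    let mut seed: u64 = 42;
    let mut next = move || {
        seed = seed
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (seed >> 33) as f64 / (1u64 << 31) as f64
    };
    (0..iterations).all(|_| {
        let half_life = 1.0 + next() * 365.0;
        let newer = next() * 1000.0;
        let older = newer + next() * 1000.0; // older >= newer by construction
        decay(older, half_life) <= decay(newer, half_life)
    })
}
```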
### 9. Existing Test Compatibility
All existing tests insert data with `now_ms()`. With decay, elapsed ~0ms means decay ~1.0, so scores round to the same integers as before. No existing test assertions should break.
The `test_expert_scoring_weights_are_configurable` test needs `..Default::default()` added to fill the new fields: the half-lives, `reviewer_assignment_weight` / `reviewer_assignment_half_life_days`, `closed_mr_multiplier`, and `reviewer_min_note_chars`.
## Verification
1. `cargo check --all-targets` — no compiler errors
2. `cargo clippy --all-targets -- -D warnings` — no lints
3. `cargo fmt --check` — formatting clean
4. `cargo test` — all existing + new tests pass (including invariant tests)
5. `ubs src/cli/commands/who.rs src/core/config.rs src/core/db.rs` — no bug scanner findings
6. Manual query plan verification (not automated — SQLite planner varies across versions):
- Run `EXPLAIN QUERY PLAN` on the expert query (both exact and prefix modes) against a real database
- Confirm that `matched_notes` CTE uses `idx_notes_old_path_author` or the existing new_path index (not a full table scan)
- Confirm that `matched_file_changes` CTE uses `idx_mfc_old_path_project_mr` or `idx_mfc_new_path_project_mr`
- Confirm that `reviewer_participation` CTE uses `idx_notes_diffnote_discussion_author`
- Document the observed plan in a comment near the SQL for future regression reference
7. Real-world validation:
- `cargo run --release -- who --path MeasurementQualityDialog.tsx` — verify jdefting/zhayes old reviews are properly discounted relative to recent authors
- `cargo run --release -- who --path MeasurementQualityDialog.tsx --all-history` — compare full history vs 24m window to validate cutoff is reasonable
- `cargo run --release -- who --path MeasurementQualityDialog.tsx --explain-score` — verify component breakdown sums to total and authored signal dominates for known authors
- Spot-check that assigned-only reviewers (those who never left DiffNotes) rank below participated reviewers on the same MR
- Test a known renamed file path — verify expertise from the old name carries forward
- `cargo run --release -- who --path MeasurementQualityDialog.tsx --as-of 2025-06-01` — verify deterministic output across repeated runs
- Spot-check that reviewers who only left "LGTM"-style notes are classified as assigned-only (not participated)
- Verify closed MRs contribute at ~50% of equivalent merged MR scores via `--explain-score`
## Accepted from External Review
Ideas incorporated from ChatGPT review (feedback-1 through feedback-4) that genuinely improved the plan:
**From feedback-1 and feedback-2:**
- **Path rename awareness (old_path matching)**: Real correctness gap. Both `position_old_path` and `mr_file_changes.old_path` exist in the schema. Simple `OR` clause addition with high value — expertise now survives file renames.
- **Hybrid SQL pre-aggregation**: Revised from "fully raw rows" to pre-aggregate note counts per (user, MR) in SQL. MR-level signals were already 1-per-MR; the note rows were the actual scalability risk. Bounded row counts with predictable memory.
- **State-aware timestamps**: Improved from our overly-simple `COALESCE(merged_at, created_at)` to a state-aware CASE expression. Open MRs genuinely need `updated_at` to reflect ongoing work; merged MRs need `merged_at` to anchor expertise formation.
- **Index migration**: The dual-path matching and CTE patterns need index support. Added partial indexes to keep size minimal.
- **Invariant tests**: `test_score_monotonicity_by_age`, `test_row_order_independence`, `test_reviewer_split_is_exhaustive` catch subtle ranking regressions that example-based tests miss.
- **`--as-of` flag**: Simple clock-pinning for reproducible decay scoring. Essential for debugging and test determinism.
- **`--explain-score` flag**: Moved from rejected to included with minimal scope (component breakdown only, no per-MR drill-down). Multi-signal scoring needs decomposition to build trust.
**From feedback-3:**
- **Fix `noteable_id` index bug (critical)**: The `notes` table uses `discussion_id` as FK to `discussions`, not `noteable_id` (which doesn't exist). The proposed `idx_notes_mr_path_author` index would fail at migration time. Fixed to use `(discussion_id, author_username, created_at)`.
- **CTE-based dual-path matching (`matched_notes`, `matched_file_changes`)**: Rather than repeating `OR old_path` in every signal subquery, centralize path matching in foundational CTEs. Defined once, indexed once, maintained once. Cleaner and more extensible.
- **Precomputed `reviewer_participation` CTE**: Replaced correlated `EXISTS`/`NOT EXISTS` subqueries with a materialized set of `(mr_id, username)` pairs. Same semantics, lower query cost, simpler reasoning about the reviewer split.
- **`log2(1+count)` over `ln(1+count)` for notes**: With `log2`, one note contributes exactly 1.0 unit (since `log2(2) = 1`), making `note_bonus=1` directly interpretable. `ln` gives 0.69 per note, which is unintuitive.
- **Path resolution probe rename awareness**: The plan added `old_path` matching to scoring queries but missed the upstream path resolution layer (`build_path_query()` probes and `suffix_probe()`). Without this, querying an old path name fails at resolution and never reaches scoring. Now both probes check old_path columns.
- **Removed low-selectivity `idx_mr_state_timestamps`**: MR lookups in scoring are id-driven (from file_changes or discussions), so a 5-column composite on state/timestamps adds no query benefit.
- **Added `idx_mfc_new_path_project_mr`**: Ensures index parity between old and new path columns on `mr_file_changes`.
- **`--explain-score` conflicts with `--detail`**: Prevents confusing overlapping output from two per-user augmentation flags.
- **`scoring_model_version` in resolved_input**: Lets robot clients distinguish v1 (flat weights) from v2 (decayed) output schemas.
- **`score_raw` in explain mode**: Exposes the unrounded f64 so component totals can be verified without rounding noise.
- **New tests**: `test_old_path_probe_exact_and_prefix`, `test_suffix_probe_uses_old_path_sources`, `test_since_relative_to_as_of_clock`, `test_explain_and_detail_are_mutually_exclusive`, `test_null_timestamp_fallback_to_created_at` — cover the newly-identified gaps in path resolution, clock semantics, and edge cases.
- **EXPLAIN QUERY PLAN verification step**: Manual check that the restructured queries use the new indexes (not automated, since SQLite planner varies across versions).
**From feedback-4:**
- **`--as-of` temporal correctness (critical)**: The plan described `--as-of` but the SQL only enforced a lower bound (`>= ?2`). Events after the as-of date would leak in with full weight (because `elapsed.max(0.0)` clamps negative elapsed time to zero). Added `<= ?4` upper bound to all SQL timestamp filters, making the query window `[since_ms, as_of_ms]`. Without this, `--as-of` reproducibility was fundamentally broken.
- **Closed-state inconsistency resolution**: The state-aware CASE expression handled `closed` state but the WHERE clause filtered to `('opened','merged')` only — dead code. Resolved by including `'closed'` in state filters and adding a `closed_mr_multiplier` (default 0.5) applied in Rust to all signals from closed-without-merge MRs. This credits real review effort on abandoned MRs while appropriately discounting it.
- **Substantive note threshold for reviewer participation**: A single "LGTM" shouldn't promote a reviewer from 3-point (assigned-only) to 10-point (participated) weight. Added `reviewer_min_note_chars` (default 20) config field and `LENGTH(TRIM(body))` filter in the `reviewer_participation` CTE. This raises the bar for participation classification to actual substantive review comments.
- **UNION ALL optimization fallback for path predicates**: SQLite's planner can degrade `OR` across two indexed columns to a table scan. Added documentation of a `UNION ALL` split + dedup fallback pattern to use if EXPLAIN QUERY PLAN shows degradation during verification. Start with the simpler `OR` approach; switch only if needed.
- **New tests**: `test_trivial_note_does_not_count_as_participation`, `test_closed_mr_multiplier`, `test_as_of_excludes_future_events` — cover the three new features added from this review round.
## Rejected Ideas (with rationale)
These suggestions were considered during review but explicitly excluded from this iteration:
- **Rename alias chain expansion (A->B->C traversal)** (feedback-2 #2, feedback-4 #4): Over-engineered for v1. The old_path `OR` match covers the 80% case (direct renames). Building a canonical path identity table at ingest time adds schema, ingestion logic, and graph traversal complexity for rare multi-hop renames. If real-world usage shows fragmented expertise on multi-rename files, this becomes a v2 feature.
- **Config-driven `max_age_days`** (feedback-1 #5, feedback-2 #5): We already have `--since` (explicit window), `--all-history` (no window), and the 24m default (mathematically justified). Adding a config field that derives the default since window creates confusing interaction between config and CLI flags. If half-lives change, updating the default constant is trivial.
- **Config-driven `decay_floor` for derived `--since` default** (feedback-3 #4): Proposed computing the default since window as `ceil(max_half_life * log2(1/floor))` so it auto-adjusts when half-lives change. Rejected: the formula is non-obvious to users, adds a config param (`decay_floor`) with no intuitive meaning, and the benefit is negligible — half-life changes are rare, and updating a constant is trivial. The 24m default is already mathematically justified and easy to override with `--since` or `--all-history`.
- **BTreeMap + Kahan/Neumaier compensated summation** (feedback-3 #6): Proposed deterministic iteration order and numerically stable summation. Rejected for this scale: the accumulator processes dozens to low hundreds of entries per user, where HashMap iteration order doesn't measurably affect f64 sums. Compensated summation adds code complexity for zero practical benefit at this magnitude. If we eventually aggregate thousands of signals per user, revisit.
- **Confidence/coverage metadata** (feedback-1 #8, feedback-2 #8, feedback-3 #9, feedback-4 #6): Repeatedly proposed across reviews with variations (score_adjusted with confidence factor, low/medium/high labels, evidence_mr_count weighting). Still scope creep. The `--explain-score` component breakdown already tells users which signal drives the score. Defining "sparse evidence" thresholds (how many MRs is "low"? what's the right exponential saturation constant?) is domain-specific guesswork without user feedback data. A single recent MR "outranking broader expertise" is the *correct* behavior of time-decay — the model intentionally weights recency. If real-world usage shows this is a problem, confidence becomes a v2 feature informed by actual threshold data.
- **Automated EXPLAIN QUERY PLAN tests** (feedback-3 #10 partial): SQLite's query planner changes across versions and can use different plans on different data distributions. Automated assertions on plan output are brittle. Instead, we document EXPLAIN QUERY PLAN as a manual verification step during development and include the observed plan as a comment near the SQL.
- **Per-MR evidence drill-down in `--explain-score`** (feedback-2 #7 promoted this): The v1 `--explain-score` shows component totals only. Listing top-evidence MRs per user would require additional SQL queries and significant output format work. Deferred unless component breakdowns prove insufficient for debugging.
- **Split scoring engine into core module** (feedback-4 #5): Proposed extracting scoring math from `who.rs` into `src/core/scoring/model_v2_decay.rs`. Premature modularization — `who.rs` is the only consumer and is ~800 lines. Adding module plumbing and indirection for a single call site adds complexity without reducing it. If we add a second scoring consumer (e.g., automated triage), revisit.
- **Bot/service-account filtering** (feedback-4 #7): Real concern but orthogonal to time-decay scoring. This is a general data quality feature that belongs in its own issue — it affects all `who` modes, not just expert scoring. Adding `excluded_username_patterns` config and `--include-bots` flag is scope expansion that should be designed and tested independently.
- **Model compare mode / rank-delta diagnostics** (feedback-4 #9): Over-engineered rollout safety for an internal CLI tool with ~3 users. Maintaining two parallel scoring codepaths (v1 flat + v2 decayed) doubles test surface and code complexity. The `--explain-score` + `--as-of` combination already provides debugging capability. If a future model change is risky enough to warrant A/B comparison, build it then.

View File

@@ -177,6 +177,8 @@ const COMMAND_FLAGS: &[(&str, &[&str])] = &[
         "--since",
         "--project",
         "--limit",
+        "--detail",
+        "--no-detail",
     ],
 ),
 (

View File

@@ -179,9 +179,15 @@ fn count_notes(conn: &Connection, type_filter: Option<&str>) -> Result<CountResu
 }
 fn format_number(n: i64) -> String {
-    let s = n.to_string();
+    let (prefix, abs) = if n < 0 {
+        ("-", n.unsigned_abs())
+    } else {
+        ("", n.unsigned_abs())
+    };
+    let s = abs.to_string();
     let chars: Vec<char> = s.chars().collect();
-    let mut result = String::new();
+    let mut result = String::from(prefix);
     for (i, c) in chars.iter().enumerate() {
         if i > 0 && (chars.len() - i).is_multiple_of(3) {

View File

@@ -133,7 +133,11 @@ pub async fn run_init(inputs: InitInputs, options: InitOptions) -> Result<InitRe
 for (_, gitlab_project) in &validated_projects {
     conn.execute(
         "INSERT INTO projects (gitlab_project_id, path_with_namespace, default_branch, web_url)
-         VALUES (?, ?, ?, ?)",
+         VALUES (?, ?, ?, ?)
+         ON CONFLICT(gitlab_project_id) DO UPDATE SET
+             path_with_namespace = excluded.path_with_namespace,
+             default_branch = excluded.default_branch,
+             web_url = excluded.web_url",
         (
             gitlab_project.id,
             &gitlab_project.path_with_namespace,

File diff suppressed because it is too large

View File

@@ -751,6 +751,13 @@ pub struct WhoArgs {
         help_heading = "Output"
     )]
     pub limit: u16,
+    /// Show per-MR detail breakdown (expert mode only)
+    #[arg(long, help_heading = "Output", overrides_with = "no_detail")]
+    pub detail: bool,
+    #[arg(long = "no-detail", hide = true, overrides_with = "detail")]
+    pub no_detail: bool,
 }
 #[derive(Parser)]

View File

@@ -1,7 +1,7 @@
 use rand::Rng;
 pub fn compute_next_attempt_at(now: i64, attempt_count: i64) -> i64 {
-    let capped_attempts = attempt_count.min(30) as u32;
+    let capped_attempts = attempt_count.clamp(0, 30) as u32;
     let base_delay_ms = 1000_i64.saturating_mul(1 << capped_attempts);
     let capped_delay_ms = base_delay_ms.min(3_600_000);

View File

@@ -146,6 +146,32 @@ impl Default for LoggingConfig {
     }
 }
 
+#[derive(Debug, Clone, Deserialize)]
+#[serde(default)]
+pub struct ScoringConfig {
+    /// Points per MR where the user authored code touching the path.
+    #[serde(rename = "authorWeight")]
+    pub author_weight: i64,
+    /// Points per MR where the user reviewed code touching the path.
+    #[serde(rename = "reviewerWeight")]
+    pub reviewer_weight: i64,
+    /// Bonus points per individual inline review comment (DiffNote).
+    #[serde(rename = "noteBonus")]
+    pub note_bonus: i64,
+}
+
+impl Default for ScoringConfig {
+    fn default() -> Self {
+        Self {
+            author_weight: 25,
+            reviewer_weight: 10,
+            note_bonus: 1,
+        }
+    }
+}
+
 #[derive(Debug, Clone, Deserialize)]
 pub struct Config {
     pub gitlab: GitLabConfig,
@@ -162,6 +188,9 @@ pub struct Config {
     #[serde(default)]
     pub logging: LoggingConfig,
+
+    #[serde(default)]
+    pub scoring: ScoringConfig,
 }
 
 impl Config {
@@ -207,10 +236,31 @@ impl Config {
             });
         }
 
+        validate_scoring(&config.scoring)?;
+
         Ok(config)
     }
 }
 
+fn validate_scoring(scoring: &ScoringConfig) -> Result<()> {
+    if scoring.author_weight < 0 {
+        return Err(LoreError::ConfigInvalid {
+            details: "scoring.authorWeight must be >= 0".to_string(),
+        });
+    }
+    if scoring.reviewer_weight < 0 {
+        return Err(LoreError::ConfigInvalid {
+            details: "scoring.reviewerWeight must be >= 0".to_string(),
+        });
+    }
+    if scoring.note_bonus < 0 {
+        return Err(LoreError::ConfigInvalid {
+            details: "scoring.noteBonus must be >= 0".to_string(),
+        });
+    }
+    Ok(())
+}
+
 #[derive(Debug, serde::Serialize)]
 pub struct MinimalConfig {
     pub gitlab: MinimalGitLabConfig,
@@ -236,3 +286,81 @@ impl serde::Serialize for ProjectConfig {
         state.end()
     }
 }
+
+#[cfg(test)]
+mod tests {
+    use super::*;
+    use tempfile::TempDir;
+
+    fn write_config(dir: &TempDir, scoring_json: &str) -> std::path::PathBuf {
+        let path = dir.path().join("config.json");
+        let config = format!(
+            r#"{{
+                "gitlab": {{
+                    "baseUrl": "https://gitlab.example.com",
+                    "tokenEnvVar": "GITLAB_TOKEN"
+                }},
+                "projects": [
+                    {{ "path": "group/project" }}
+                ],
+                "scoring": {scoring_json}
+            }}"#
+        );
+        fs::write(&path, config).unwrap();
+        path
+    }
+
+    #[test]
+    fn test_load_rejects_negative_author_weight() {
+        let dir = TempDir::new().unwrap();
+        let path = write_config(
+            &dir,
+            r#"{
+                "authorWeight": -1,
+                "reviewerWeight": 10,
+                "noteBonus": 1
+            }"#,
+        );
+        let err = Config::load_from_path(&path).unwrap_err();
+        let msg = err.to_string();
+        assert!(
+            msg.contains("scoring.authorWeight"),
+            "unexpected error: {msg}"
+        );
+    }
+
+    #[test]
+    fn test_load_rejects_negative_reviewer_weight() {
+        let dir = TempDir::new().unwrap();
+        let path = write_config(
+            &dir,
+            r#"{
+                "authorWeight": 25,
+                "reviewerWeight": -1,
+                "noteBonus": 1
+            }"#,
+        );
+        let err = Config::load_from_path(&path).unwrap_err();
+        let msg = err.to_string();
+        assert!(
+            msg.contains("scoring.reviewerWeight"),
+            "unexpected error: {msg}"
+        );
+    }
+
+    #[test]
+    fn test_load_rejects_negative_note_bonus() {
+        let dir = TempDir::new().unwrap();
+        let path = write_config(
+            &dir,
+            r#"{
+                "authorWeight": 25,
+                "reviewerWeight": 10,
+                "noteBonus": -1
+            }"#,
+        );
+        let err = Config::load_from_path(&path).unwrap_err();
+        let msg = err.to_string();
+        assert!(msg.contains("scoring.noteBonus"), "unexpected error: {msg}");
+    }
+}
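The weights above combine linearly per (user, path); a hypothetical sketch of how they might be applied (the `score_contribution` helper is illustrative and not part of this diff):

```rust
// Mirror of the config fields from the diff above, with the same defaults.
struct ScoringConfig {
    author_weight: i64,
    reviewer_weight: i64,
    note_bonus: i64,
}

impl Default for ScoringConfig {
    fn default() -> Self {
        Self { author_weight: 25, reviewer_weight: 10, note_bonus: 1 }
    }
}

// Hypothetical helper: total points a user earns for one path.
fn score_contribution(authored_mrs: i64, reviewed_mrs: i64, diff_notes: i64, cfg: &ScoringConfig) -> i64 {
    authored_mrs * cfg.author_weight
        + reviewed_mrs * cfg.reviewer_weight
        + diff_notes * cfg.note_bonus
}
```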


@@ -141,42 +141,36 @@ fn resolve_entity_ids(
 pub fn count_events(conn: &Connection) -> Result<EventCounts> {
     let mut counts = EventCounts::default();
 
-    let row: (i64, i64) = conn
-        .query_row(
-            "SELECT
-                COUNT(CASE WHEN issue_id IS NOT NULL THEN 1 END),
-                COUNT(CASE WHEN merge_request_id IS NOT NULL THEN 1 END)
-             FROM resource_state_events",
-            [],
-            |row| Ok((row.get(0)?, row.get(1)?)),
-        )
-        .unwrap_or((0, 0));
+    let row: (i64, i64) = conn.query_row(
+        "SELECT
+            COUNT(CASE WHEN issue_id IS NOT NULL THEN 1 END),
+            COUNT(CASE WHEN merge_request_id IS NOT NULL THEN 1 END)
+         FROM resource_state_events",
+        [],
+        |row| Ok((row.get(0)?, row.get(1)?)),
+    )?;
     counts.state_issue = row.0 as usize;
     counts.state_mr = row.1 as usize;
 
-    let row: (i64, i64) = conn
-        .query_row(
-            "SELECT
-                COUNT(CASE WHEN issue_id IS NOT NULL THEN 1 END),
-                COUNT(CASE WHEN merge_request_id IS NOT NULL THEN 1 END)
-             FROM resource_label_events",
-            [],
-            |row| Ok((row.get(0)?, row.get(1)?)),
-        )
-        .unwrap_or((0, 0));
+    let row: (i64, i64) = conn.query_row(
+        "SELECT
+            COUNT(CASE WHEN issue_id IS NOT NULL THEN 1 END),
+            COUNT(CASE WHEN merge_request_id IS NOT NULL THEN 1 END)
+         FROM resource_label_events",
+        [],
+        |row| Ok((row.get(0)?, row.get(1)?)),
+    )?;
     counts.label_issue = row.0 as usize;
     counts.label_mr = row.1 as usize;
 
-    let row: (i64, i64) = conn
-        .query_row(
-            "SELECT
-                COUNT(CASE WHEN issue_id IS NOT NULL THEN 1 END),
-                COUNT(CASE WHEN merge_request_id IS NOT NULL THEN 1 END)
-             FROM resource_milestone_events",
-            [],
-            |row| Ok((row.get(0)?, row.get(1)?)),
-        )
-        .unwrap_or((0, 0));
+    let row: (i64, i64) = conn.query_row(
+        "SELECT
+            COUNT(CASE WHEN issue_id IS NOT NULL THEN 1 END),
+            COUNT(CASE WHEN merge_request_id IS NOT NULL THEN 1 END)
+         FROM resource_milestone_events",
+        [],
+        |row| Ok((row.get(0)?, row.get(1)?)),
+    )?;
     counts.milestone_issue = row.0 as usize;
     counts.milestone_mr = row.1 as usize;


@@ -106,8 +106,7 @@ pub fn extract_refs_from_system_notes(conn: &Connection, project_id: i64) -> Res
                 entity_id: row.get(3)?,
             })
         })?
-        .filter_map(|r| r.ok())
-        .collect();
+        .collect::<std::result::Result<Vec<_>, _>>()?;
 
     if notes.is_empty() {
         return Ok(result);
@@ -193,7 +192,10 @@ fn noteable_type_to_entity_type(noteable_type: &str) -> &str {
     match noteable_type {
         "Issue" => "issue",
         "MergeRequest" => "merge_request",
-        _ => "issue",
+        other => {
+            debug!(noteable_type = %other, "Unknown noteable_type, defaulting to issue");
+            "issue"
+        }
     }
 }


@@ -2,6 +2,7 @@ use flate2::Compression;
 use flate2::read::GzDecoder;
 use flate2::write::GzEncoder;
 use rusqlite::Connection;
+use rusqlite::OptionalExtension;
 use sha2::{Digest, Sha256};
 use std::io::{Read, Write};
@@ -35,7 +36,7 @@ pub fn store_payload(conn: &Connection, options: StorePayloadOptions) -> Result<
         ),
         |row| row.get(0),
     )
-    .ok();
+    .optional()?;
 
     if let Some(id) = existing {
         return Ok(id);
@@ -74,7 +75,7 @@ pub fn read_payload(conn: &Connection, id: i64) -> Result<Option<serde_json::Val
         [id],
         |row| Ok((row.get(0)?, row.get(1)?)),
     )
-    .ok();
+    .optional()?;
 
     let Some((encoding, payload_bytes)) = row else {
         return Ok(None);


@@ -145,6 +145,7 @@ fn upsert_document_inner(conn: &Connection, doc: &DocumentData) -> Result<bool>
              is_truncated, truncated_reason)
         VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14, ?15)
         ON CONFLICT(source_type, source_id) DO UPDATE SET
+            project_id = excluded.project_id,
             author_username = excluded.author_username,
             label_names = excluded.label_names,
             labels_hash = excluded.labels_hash,


@@ -110,7 +110,9 @@ pub fn truncate_discussion(notes: &[NoteContent], max_bytes: usize) -> Truncatio
     }
 
     let first_note = &formatted[0];
-    if first_note.len() + last_note.len() > max_bytes {
+    let omitted = formatted.len() - 2;
+    let marker = format!("\n\n[... {} notes omitted for length ...]\n\n", omitted);
+    if first_note.len() + marker.len() + last_note.len() > max_bytes {
         let truncated = truncate_utf8(first_note, max_bytes.saturating_sub(11));
         let content = format!("{}[truncated]", truncated);
         return TruncationResult {
@@ -120,8 +122,6 @@ pub fn truncate_discussion(notes: &[NoteContent], max_bytes: usize) -> Truncatio
         };
     }
 
-    let omitted = formatted.len() - 2;
-    let marker = format!("\n\n[... {} notes omitted for length ...]\n\n", omitted);
     let content = format!("{}{}{}", formatted[0], marker, last_note);
     TruncationResult {
         content,
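The fix reorders the function so the omission marker's byte cost is counted in the budget check; a standalone sketch of the corrected check (the `fits_with_marker` helper is assumed, not part of the diff):

```rust
// Returns true when first note + omission marker + last note fit in the budget.
// Computing the marker *before* the check is the point of the fix above; the
// old order left the marker's bytes unaccounted for.
fn fits_with_marker(first: &str, last: &str, omitted: usize, max_bytes: usize) -> bool {
    let marker = format!("\n\n[... {} notes omitted for length ...]\n\n", omitted);
    first.len() + marker.len() + last.len() <= max_bytes
}
```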


@@ -14,7 +14,7 @@ pub fn encode_rowid(document_id: i64, chunk_index: i64) -> i64 {
 }
 
 pub fn decode_rowid(rowid: i64) -> (i64, i64) {
-    debug_assert!(
+    assert!(
         rowid >= 0,
         "decode_rowid called with negative rowid: {rowid}"
     );
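`encode_rowid`/`decode_rowid` pack a document id and chunk index into one SQLite rowid; a sketch under the assumption of a 20-bit chunk field (the actual bit split is not visible in this hunk):

```rust
const CHUNK_BITS: i64 = 20; // assumption: low 20 bits hold the chunk index

fn encode_rowid(document_id: i64, chunk_index: i64) -> i64 {
    (document_id << CHUNK_BITS) | chunk_index
}

fn decode_rowid(rowid: i64) -> (i64, i64) {
    // Promoted from debug_assert! in the diff above so release builds fail too.
    assert!(rowid >= 0, "decode_rowid called with negative rowid: {rowid}");
    (rowid >> CHUNK_BITS, rowid & ((1 << CHUNK_BITS) - 1))
}
```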


@@ -87,10 +87,9 @@ impl OllamaClient {
                 source: Some(e),
             })?;
 
-        let model_found = tags
-            .models
-            .iter()
-            .any(|m| m.name.starts_with(&self.config.model));
+        let model_found = tags.models.iter().any(|m| {
+            m.name == self.config.model || m.name.starts_with(&format!("{}:", self.config.model))
+        });
 
         if !model_found {
             return Err(LoreError::OllamaModelNotFound {
@@ -169,13 +168,32 @@ mod tests {
     }
 
     #[test]
-    fn test_health_check_model_starts_with() {
+    fn test_health_check_model_matching() {
         let model = "nomic-embed-text";
-        let tag_name = "nomic-embed-text:latest";
-        assert!(tag_name.starts_with(model));
-        let wrong_model = "llama2";
-        assert!(!tag_name.starts_with(wrong_model));
+        let tag_name = "nomic-embed-text:latest";
+        assert!(
+            tag_name == model || tag_name.starts_with(&format!("{model}:")),
+            "should match model with tag"
+        );
+        let exact_name = "nomic-embed-text";
+        assert!(
+            exact_name == model || exact_name.starts_with(&format!("{model}:")),
+            "should match exact model name"
+        );
+        let wrong_model = "llama2:latest";
+        assert!(
+            !(wrong_model == model || wrong_model.starts_with(&format!("{model}:"))),
+            "should not match wrong model"
+        );
+        let similar_model = "nomic-embed-text-v2:latest";
+        assert!(
+            !(similar_model == model || similar_model.starts_with(&format!("{model}:"))),
+            "should not false-positive on model name prefix"
+        );
     }
 
     #[test]
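The corrected matching rule reads as a small predicate; a sketch (the `model_matches` helper is hypothetical — the real code inlines this in the `any` closure):

```rust
// A tag matches when it equals the configured model exactly, or is the
// configured model followed by a ":tag" suffix. Plain prefix matching
// would wrongly accept names like "nomic-embed-text-v2:latest".
fn model_matches(tag_name: &str, configured: &str) -> bool {
    tag_name == configured || tag_name.starts_with(&format!("{configured}:"))
}
```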


@@ -1,4 +1,5 @@
 use rusqlite::Connection;
+use rusqlite::OptionalExtension;
 
 use crate::core::backoff::compute_next_attempt_at;
 use crate::core::error::Result;
@@ -88,11 +89,17 @@ pub fn record_dirty_error(
     error: &str,
 ) -> Result<()> {
     let now = now_ms();
-    let attempt_count: i64 = conn.query_row(
-        "SELECT attempt_count FROM dirty_sources WHERE source_type = ?1 AND source_id = ?2",
-        rusqlite::params![source_type.as_str(), source_id],
-        |row| row.get(0),
-    )?;
+    let attempt_count: Option<i64> = conn
+        .query_row(
+            "SELECT attempt_count FROM dirty_sources WHERE source_type = ?1 AND source_id = ?2",
+            rusqlite::params![source_type.as_str(), source_id],
+            |row| row.get(0),
+        )
+        .optional()?;
+    let Some(attempt_count) = attempt_count else {
+        return Ok(());
+    };
 
     let new_attempt = attempt_count + 1;
     let next_at = compute_next_attempt_at(now, new_attempt);


@@ -299,18 +299,21 @@ fn upsert_label_tx(
     name: &str,
     created_count: &mut usize,
 ) -> Result<i64> {
+    tx.execute(
+        "INSERT OR IGNORE INTO labels (project_id, name) VALUES (?1, ?2)",
+        (project_id, name),
+    )?;
+    if tx.changes() > 0 {
+        *created_count += 1;
+    }
     let id: i64 = tx.query_row(
-        "INSERT INTO labels (project_id, name) VALUES (?1, ?2)
-         ON CONFLICT(project_id, name) DO UPDATE SET name = excluded.name
-         RETURNING id",
+        "SELECT id FROM labels WHERE project_id = ?1 AND name = ?2",
         (project_id, name),
         |row| row.get(0),
     )?;
-    if tx.last_insert_rowid() == id {
-        *created_count += 1;
-    }
 
     Ok(id)
 }


@@ -295,18 +295,21 @@ fn upsert_label_tx(
     name: &str,
     created_count: &mut usize,
 ) -> Result<i64> {
+    tx.execute(
+        "INSERT OR IGNORE INTO labels (project_id, name) VALUES (?1, ?2)",
+        (project_id, name),
+    )?;
+    if tx.changes() > 0 {
+        *created_count += 1;
+    }
     let id: i64 = tx.query_row(
-        "INSERT INTO labels (project_id, name) VALUES (?1, ?2)
-         ON CONFLICT(project_id, name) DO UPDATE SET name = excluded.name
-         RETURNING id",
+        "SELECT id FROM labels WHERE project_id = ?1 AND name = ?2",
        (project_id, name),
         |row| row.get(0),
     )?;
-    if tx.last_insert_rowid() == id {
-        *created_count += 1;
-    }
 
     Ok(id)
 }


@@ -593,6 +593,7 @@ fn clear_sync_health_error(conn: &Connection, local_mr_id: i64) -> Result<()> {
     conn.execute(
         "UPDATE merge_requests SET
             discussions_sync_last_attempt_at = ?,
+            discussions_sync_attempts = 0,
             discussions_sync_last_error = NULL
          WHERE id = ?",
         params![now_ms(), local_mr_id],


@@ -1,6 +1,7 @@
 use std::collections::HashMap;
 
 use rusqlite::Connection;
+use rusqlite::OptionalExtension;
 
 use crate::core::error::Result;
 use crate::embedding::chunk_ids::decode_rowid;
@@ -11,7 +12,7 @@ pub struct VectorResult {
     pub distance: f64,
 }
 
-fn max_chunks_per_document(conn: &Connection) -> i64 {
+fn max_chunks_per_document(conn: &Connection) -> Result<i64> {
     let stored: Option<i64> = conn
         .query_row(
             "SELECT MAX(chunk_count) FROM embedding_metadata
@@ -19,21 +20,24 @@ fn max_chunks_per_document(conn: &Connection) -> i64 {
             [],
             |row| row.get(0),
         )
-        .unwrap_or(None);
+        .optional()?
+        .flatten();
 
     if let Some(max) = stored {
-        return max;
+        return Ok(max);
     }
 
-    conn.query_row(
-        "SELECT COALESCE(MAX(cnt), 1) FROM (
-            SELECT COUNT(*) as cnt FROM embedding_metadata
-            WHERE last_error IS NULL GROUP BY document_id
-        )",
-        [],
-        |row| row.get(0),
-    )
-    .unwrap_or(1)
+    Ok(conn
+        .query_row(
+            "SELECT COALESCE(MAX(cnt), 1) FROM (
+                SELECT COUNT(*) as cnt FROM embedding_metadata
+                WHERE last_error IS NULL GROUP BY document_id
+            )",
+            [],
+            |row| row.get(0),
+        )
+        .optional()?
+        .unwrap_or(1))
 }
 
 pub fn search_vector(
@@ -50,7 +54,7 @@ pub fn search_vector(
         .flat_map(|f| f.to_le_bytes())
         .collect();
 
-    let max_chunks = max_chunks_per_document(conn).max(1);
+    let max_chunks = max_chunks_per_document(conn)?.max(1);
     let multiplier = ((max_chunks.unsigned_abs() as usize * 3 / 2) + 1).clamp(8, 200);
     let k = (limit * multiplier).min(10_000);
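The over-fetch arithmetic in the last hunk can be checked standalone; a sketch (the `select_k` wrapper is an assumption for illustration):

```rust
// Fetch k = limit * multiplier candidate chunks, where the multiplier scales
// with the largest per-document chunk count (1.5x + 1, clamped to [8, 200])
// and k itself is capped at 10_000, matching the hunk above.
fn select_k(max_chunks: i64, limit: usize) -> usize {
    let max_chunks = max_chunks.max(1);
    let multiplier = ((max_chunks.unsigned_abs() as usize * 3 / 2) + 1).clamp(8, 200);
    (limit * multiplier).min(10_000)
}
```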