Compare commits: 3e7fa607d3 ... 859923f86b

4 commits: 859923f86b, d701b1f977, 736d9c9a80, 8dc479e515

AGENTS.md: 38 lines changed

@@ -35,6 +35,8 @@ We only use **Cargo** in this project, NEVER any other package manager.
 - **Configuration:** Cargo.toml only
 - **Unsafe code:** Forbidden (`#![forbid(unsafe_code)]`)

+When writing Rust code, reference RUST_CLI_TOOLS_BEST_PRACTICES.md
+
 ### Release Profile

 Use the release profile defined in `Cargo.toml`. If you need to change it, justify the

@@ -591,7 +593,7 @@ If you aren't 100% sure how to use a third-party library, **SEARCH ONLINE** to f

 ## Gitlore Robot Mode

-The `lore` CLI has a robot mode optimized for AI agent consumption with structured JSON output, meaningful exit codes, and TTY auto-detection.
+The `lore` CLI has a robot mode optimized for AI agent consumption with compact JSON output, structured errors with machine-actionable recovery steps, meaningful exit codes, response timing metadata, field selection for token efficiency, and TTY auto-detection.

 ### Activation

@@ -616,6 +618,10 @@ LORE_ROBOT=1 lore issues

 lore --robot issues -n 10
 lore --robot mrs -s opened

+# List with field selection (reduces token usage ~60%)
+lore --robot issues --fields minimal
+lore --robot mrs --fields iid,title,state,draft
+
 # Show detailed entity info
 lore --robot issues 123
 lore --robot mrs 456 -p group/repo

@@ -645,7 +651,7 @@ lore --robot doctor

 # Document and index statistics
 lore --robot stats

-# Quick health pre-flight check (exit 0 = healthy, 1 = unhealthy)
+# Quick health pre-flight check (exit 0 = healthy, 19 = unhealthy)
 lore --robot health

 # Generate searchable documents from ingested data

@@ -654,7 +660,7 @@ lore --robot generate-docs

 # Generate vector embeddings via Ollama
 lore --robot embed

-# Agent self-discovery manifest (all commands, flags, exit codes)
+# Agent self-discovery manifest (all commands, flags, exit codes, response schemas)
 lore robot-docs

 # Version information

@@ -663,16 +669,27 @@ lore --robot version

 ### Response Format

-All commands return consistent JSON:
+All commands return compact JSON with a uniform envelope and timing metadata:

 ```json
-{"ok":true,"data":{...},"meta":{...}}
+{"ok":true,"data":{...},"meta":{"elapsed_ms":42}}
 ```

-Errors return structured JSON to stderr:
+Errors return structured JSON to stderr with machine-actionable recovery steps:

 ```json
-{"error":{"code":"CONFIG_NOT_FOUND","message":"...","suggestion":"Run 'lore init'"}}
+{"error":{"code":"CONFIG_NOT_FOUND","message":"...","suggestion":"Run 'lore init'","actions":["lore init"]}}
 ```

+The `actions` array contains executable shell commands for automated recovery. It is omitted when empty.
+
+### Field Selection
+
+The `--fields` flag on `issues` and `mrs` list commands controls which fields appear in the JSON response:
+
+```bash
+lore -J issues --fields minimal                    # Preset: iid, title, state, updated_at_iso
+lore -J mrs --fields iid,title,state,draft,labels  # Custom field list
+```
+
 ### Exit Codes

@@ -680,7 +697,7 @@ Errors return structured JSON to stderr:

 | Code | Meaning |
 |------|---------|
 | 0 | Success |
-| 1 | Internal error / health check failed / not implemented |
+| 1 | Internal error / not implemented |
 | 2 | Usage error (invalid flags or arguments) |
 | 3 | Config invalid |
 | 4 | Token not set |

@@ -698,6 +715,7 @@ Errors return structured JSON to stderr:

 | 16 | Embedding failed |
 | 17 | Not found (entity does not exist) |
 | 18 | Ambiguous match (use `-p` to specify project) |
+| 19 | Health check failed |
 | 20 | Config not found |

 ### Configuration Precedence

@@ -711,7 +729,8 @@ Errors return structured JSON to stderr:

 - Use `lore --robot` or `lore -J` for all agent interactions
 - Check exit codes for error handling
-- Parse JSON errors from stderr
+- Parse JSON errors from stderr; use `actions` array for automated recovery
+- Use `--fields minimal` to reduce token usage (~60% fewer tokens)
 - Use `-n` / `--limit` to control response size
 - Use `-q` / `--quiet` to suppress progress bars and non-essential output
 - Use `--color never` in non-TTY automation for ANSI-free output

@@ -719,4 +738,5 @@ Errors return structured JSON to stderr:

 - Use `--log-format json` for machine-readable log output to stderr
 - TTY detection handles piped commands automatically
 - Use `lore --robot health` as a fast pre-flight check before queries
+- Use `lore robot-docs` for response schema discovery
 - The `-p` flag supports fuzzy project matching (suffix and substring)

@@ -1,3 +1,15 @@
 ---
+plan: true
+title: "api-efficiency-findings"
+status: drafting
+iteration: 0
+target_iterations: 8
+beads_revision: 0
+related_plans: []
+created: 2026-02-07
+updated: 2026-02-07
 ---

 # API Efficiency & Observability Findings

 > **Status:** Draft - working through items

@@ -2,19 +2,22 @@

 ## Overview

-Robot mode optimizes the `lore` CLI for AI agent consumption with structured JSON output, meaningful exit codes, and token-efficient responses.
+Robot mode optimizes the `lore` CLI for AI agent consumption with compact JSON output, structured errors with machine-actionable recovery steps, meaningful exit codes, response timing metadata, field selection for token efficiency, and TTY auto-detection.

 ## Activation

 ```bash
 # Explicit flag
-lore --robot list issues
+lore --robot issues -n 5

-# Auto-detection (when stdout is not a TTY)
-lore list issues | jq .
+# JSON shorthand
+lore -J issues -n 5

 # Environment variable
-LORE_ROBOT=true lore list issues
+LORE_ROBOT=1 lore issues

+# Auto-detection (when stdout is not a TTY)
+lore issues | jq .
 ```

 ## Global Flags

@@ -22,218 +25,160 @@ LORE_ROBOT=true lore list issues

 | Flag | Description |
 |------|-------------|
 | `--robot` | Force JSON output, structured errors |
-| `--quiet` | Suppress progress/spinners (implied by --robot) |
+| `-J` / `--json` | Shorthand for `--robot` |
+| `--quiet` | Suppress progress/spinners (implied by `--robot`) |
+| `--fields <list>` | Select output fields for list commands |

-## Exit Codes
+## Response Envelope

-| Code | ErrorCode | Meaning |
-|------|-----------|---------|
-| 0 | - | Success |
-| 1 | INTERNAL_ERROR | Unknown/internal error |
-| 2 | CONFIG_NOT_FOUND | Config file missing |
-| 3 | CONFIG_INVALID | Config file malformed |
-| 4 | TOKEN_NOT_SET | GitLab token not configured |
-| 5 | GITLAB_AUTH_FAILED | Authentication failed |
-| 6 | GITLAB_NOT_FOUND | Resource not found |
-| 7 | GITLAB_RATE_LIMITED | Rate limited |
-| 8 | GITLAB_NETWORK_ERROR | Network/connection error |
-| 9 | DB_LOCKED | Database locked by another process |
-| 10 | DB_ERROR | Database error |
-| 11 | MIGRATION_FAILED | Migration failed |
-| 12 | IO_ERROR | File I/O error |
-| 13 | TRANSFORM_ERROR | Data transformation error |
+All commands return a consistent JSON envelope to stdout:
+
+```json
+{"ok":true,"data":{...},"meta":{"elapsed_ms":42}}
+```
+
+Key properties:
+
+- **Compact JSON**: Single-line output (no pretty-printing) for efficient parsing
+- **Uniform envelope**: Every command wraps its data in `{"ok":true,"data":{...},"meta":{...}}`
+- **Timing metadata**: `meta.elapsed_ms` is present on every response (wall-clock milliseconds)

 ## Error Output Format

-When `--robot` is active, errors are JSON on stderr:
+Errors are JSON on stderr with structured fields for programmatic handling:

 ```json
 {
   "error": {
     "code": "CONFIG_NOT_FOUND",
-    "message": "Config file not found at ~/.config/lore/config.toml",
-    "suggestion": "Run 'lore init' to create configuration"
+    "message": "Config file not found at ~/.config/lore/config.json. Run \"lore init\" first.",
+    "suggestion": "Run 'lore init' to set up your GitLab connection.",
+    "actions": ["lore init"]
   }
 }
 ```

-## Success Output Format
+| Field | Type | Description |
+|-------|------|-------------|
+| `code` | string | Machine-readable error code (e.g., `CONFIG_NOT_FOUND`) |
+| `message` | string | Human-readable error description |
+| `suggestion` | string? | Recovery guidance (omitted when not applicable) |
+| `actions` | string[]? | Executable shell commands for recovery (omitted when empty) |

-All commands return consistent JSON structure:
+### Error Actions by Code

+| Error Code | Actions |
+|------------|---------|
+| `CONFIG_NOT_FOUND` | `["lore init"]` |
+| `CONFIG_INVALID` | `["lore init --force"]` |
+| `GITLAB_AUTH_FAILED` | `["export GITLAB_TOKEN=glpat-xxx", "lore auth"]` |
+| `TOKEN_NOT_SET` | `["export GITLAB_TOKEN=glpat-xxx"]` |
+| `OLLAMA_UNAVAILABLE` | `["ollama serve"]` |
+| `OLLAMA_MODEL_NOT_FOUND` | `["ollama pull nomic-embed-text"]` |
+| `DB_LOCKED` | `["lore ingest --force"]` |
+| `EMBEDDING_FAILED` | `["lore embed --retry-failed"]` |
+| `MIGRATION_FAILED` | `["lore migrate"]` |
+| `GITLAB_NETWORK_ERROR` | `["lore doctor"]` |

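A minimal sketch of how an agent might consume this error envelope and pick up the recovery commands; the parsing code and function name below are illustrative, not part of `lore`:

```python
import json

def recovery_plan(stderr_line: str) -> list:
    """Parse a robot-mode error envelope from stderr and return the
    suggested recovery commands (empty when `actions` is omitted)."""
    err = json.loads(stderr_line)["error"]
    return err.get("actions", [])

# Hypothetical stderr captured from a failed invocation:
line = ('{"error":{"code":"CONFIG_NOT_FOUND","message":"...",'
        '"suggestion":"Run \'lore init\'","actions":["lore init"]}}')
assert recovery_plan(line) == ["lore init"]
# Envelope without `actions`: nothing to run automatically.
assert recovery_plan('{"error":{"code":"DB_ERROR","message":"..."}}') == []
```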
## Exit Codes

| Code | ErrorCode | Meaning |
|------|-----------|---------|
| 0 | -- | Success |
| 1 | `INTERNAL_ERROR` | Unknown/internal error |
| 2 | -- | Usage error (invalid flags or arguments) |
| 3 | `CONFIG_INVALID` | Config file malformed |
| 4 | `TOKEN_NOT_SET` | GitLab token not configured |
| 5 | `GITLAB_AUTH_FAILED` | Authentication failed |
| 6 | `GITLAB_NOT_FOUND` | Resource not found |
| 7 | `GITLAB_RATE_LIMITED` | Rate limited |
| 8 | `GITLAB_NETWORK_ERROR` | Network/connection error |
| 9 | `DB_LOCKED` | Database locked by another process |
| 10 | `DB_ERROR` | Database error |
| 11 | `MIGRATION_FAILED` | Migration failed |
| 12 | `IO_ERROR` | File I/O error |
| 13 | `TRANSFORM_ERROR` | Data transformation error |
| 14 | `OLLAMA_UNAVAILABLE` | Ollama not running |
| 15 | `OLLAMA_MODEL_NOT_FOUND` | Ollama model not installed |
| 16 | `EMBEDDING_FAILED` | Embedding generation failed |
| 17 | `NOT_FOUND` | Entity does not exist locally |
| 18 | `AMBIGUOUS` | Multiple projects match (use `-p`) |
| 19 | -- | Health check failed |
| 20 | `CONFIG_NOT_FOUND` | Config file missing |

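One way an agent loop might act on these exit codes; the retry/setup groupings below are an assumption layered on the table, not behavior defined by the CLI itself:

```python
# Coarse dispatch over the exit codes in the table above. Which codes
# are retryable vs. a setup problem is an assumption for illustration.
RETRYABLE = {7, 8, 9}          # rate limited, network error, db locked
SETUP_NEEDED = {3, 4, 5, 20}   # config invalid, no token, auth failed, config missing

def classify(exit_code: int) -> str:
    if exit_code == 0:
        return "ok"
    if exit_code in RETRYABLE:
        return "retry"
    if exit_code in SETUP_NEEDED:
        return "fix-setup"
    if exit_code == 2:
        return "fix-invocation"
    return "fail"

assert classify(0) == "ok"
assert classify(8) == "retry"
assert classify(4) == "fix-setup"
```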
## Field Selection

The `--fields` flag on `issues` and `mrs` list commands controls which fields appear in each item of the response array:

```bash
# Preset: ~60% fewer tokens
lore -J issues --fields minimal

# Custom field list
lore -J mrs --fields iid,title,state,draft,target_branch
```

### Presets

| Preset | Expands to |
|--------|------------|
| `minimal` | `iid`, `title`, `state`, `updated_at_iso` |

### Available Fields

**Issues**: `iid`, `title`, `state`, `author_username`, `labels`, `assignees`, `discussion_count`, `unresolved_count`, `created_at_iso`, `updated_at_iso`, `web_url`, `project_path`

**MRs**: `iid`, `title`, `state`, `author_username`, `labels`, `draft`, `target_branch`, `source_branch`, `discussion_count`, `unresolved_count`, `created_at_iso`, `updated_at_iso`, `web_url`, `project_path`, `reviewers`

Field selection applies only to list output, not to show (single-entity) output, which returns full detail.

## Command Response Schemas

Every command in `lore robot-docs` includes a `response_schema` field describing the shape of its JSON response. This enables agents to understand response structures without trial-and-error.

```bash
# Get schema for a specific command
lore robot-docs | jq '.data.commands.issues.response_schema'

# Get all schemas
lore robot-docs | jq '[.data.commands | to_entries[] | select(.value.response_schema) | {(.key): .value.response_schema}] | add'
```

## Clap Error Handling

Parse errors from the argument parser emit structured JSON to stderr with semantic error codes:

| Code | Meaning |
|------|---------|
| `UNKNOWN_COMMAND` | Unrecognized subcommand (includes fuzzy suggestion) |
| `UNKNOWN_FLAG` | Unrecognized command-line flag |
| `MISSING_REQUIRED` | Required argument not provided |
| `INVALID_VALUE` | Invalid value for argument |
| `TOO_MANY_VALUES` | Too many values provided |
| `TOO_FEW_VALUES` | Too few values provided |
| `ARGUMENT_CONFLICT` | Conflicting arguments |
| `MISSING_COMMAND` | No subcommand provided |
| `HELP_REQUESTED` | Help or version flag used |
| `PARSE_ERROR` | General parse error |

Unknown commands include a fuzzy suggestion when a close match exists:

```json
-{
-  "ok": true,
-  "data": { ... },
-  "meta": {
-    "count": 50,
-    "total": 1234,
-    "elapsed_ms": 45
-  }
-}
+{"error":{"code":"UNKNOWN_COMMAND","message":"...","suggestion":"Did you mean 'lore issues'? Run 'lore robot-docs' for all commands"}}
```

-## Command-Specific Output
+## Agent Self-Discovery

-### lore list issues --robot
+`lore robot-docs` provides a complete manifest for agent bootstrapping:

-```json
-{
-  "ok": true,
-  "data": {
-    "issues": [
-      {
-        "iid": 123,
-        "project": "group/project",
-        "title": "Bug in login",
-        "state": "opened",
-        "author": "username",
-        "assignees": ["user1"],
-        "labels": ["bug", "priority::high"],
-        "discussions": { "total": 5, "unresolved": 2 },
-        "updated_at": "2024-01-15T10:30:00Z",
-        "web_url": "https://..."
-      }
-    ]
-  },
-  "meta": { "showing": 50, "total": 234 }
-}
-```
+```bash
+lore robot-docs          # Pretty-printed (human-readable)
+lore --robot robot-docs  # Compact (for parsing)
+```

### lore show issue 123 --robot

```json
{
  "ok": true,
  "data": {
    "issue": {
      "iid": 123,
      "project": "group/project",
      "title": "Bug in login",
      "description": "Full markdown...",
      "state": "opened",
      "author": "username",
      "created_at": "2024-01-10T08:00:00Z",
      "updated_at": "2024-01-15T10:30:00Z",
      "discussions": [
        {
          "id": "abc123",
          "resolved": false,
          "notes": [
            {
              "author": "user1",
              "body": "Comment text...",
              "created_at": "2024-01-11T09:00:00Z",
              "system": false
            }
          ]
        }
      ]
    }
  }
}
```

### lore ingest --type issues --robot

```json
{
  "ok": true,
  "data": {
    "resource_type": "issues",
    "projects": [
      {
        "path": "group/project",
        "issues_synced": 45,
        "discussions_synced": 123
      }
    ],
    "totals": {
      "issues": 45,
      "discussions": 123
    }
  },
  "meta": { "elapsed_ms": 3400 }
}
```

### lore count issues --robot

```json
{
  "ok": true,
  "data": {
    "entity": "issues",
    "count": 1234,
    "breakdown": {
      "opened": 456,
      "closed": 778
    }
  }
}
```

### lore doctor --robot

```json
{
  "ok": true,
  "data": {
    "success": true,
    "checks": {
      "config": { "status": "ok", "path": "~/.config/lore/config.toml" },
      "database": { "status": "ok", "version": 6 },
      "gitlab": { "status": "ok", "user": "username" },
      "projects": [
        { "path": "group/project", "status": "ok" }
      ]
    }
  }
}
```

### lore sync-status --robot

```json
{
  "ok": true,
  "data": {
    "last_sync": {
      "status": "completed",
      "resource_type": "issues",
      "started_at": "2024-01-15T10:00:00Z",
      "completed_at": "2024-01-15T10:00:45Z",
      "duration_ms": 45000
    },
    "cursors": [
      {
        "project": "group/project",
        "resource_type": "issues",
        "cursor": "2024-01-15T10:00:00Z"
      }
    ]
  }
}
```

## Implementation Plan

### Phase 1: Core Infrastructure
1. Add `--robot` global flag to Cli struct
2. Create `RobotOutput` trait for consistent JSON serialization
3. Add exit code mapping from ErrorCode
4. Implement TTY detection with `atty` crate

### Phase 2: Command Updates
1. Update all commands to check robot mode
2. Add JSON output variants for commands missing them (count, ingest, sync-status)
3. Suppress progress bars in robot mode

### Phase 3: Error Handling
1. Update main.rs error handler for robot mode
2. Add suggestion field to GiError variants
3. Emit structured JSON errors to stderr

### Phase 4: Documentation
1. Update AGENTS.md with robot mode commands
2. Add --robot examples to help text

The manifest includes:

- All commands with flags, examples, and response schemas
- Deprecated command aliases (e.g., `list issues` -> `issues`)
- Exit codes with meanings
- Clap error codes
- Suggested workflows (first setup, daily sync, search, pre-flight)
- Activation methods (flags, env vars, TTY auto-detection)

docs/who-command-design.feedback-1.md (new file, 434 lines)

@@ -0,0 +1,434 @@
Below are the highest-leverage revisions I’d make to this plan. I’m focusing on correctness pitfalls, SQLite gotchas, query performance on 280K notes, and reducing “dynamic SQL + param juggling” complexity—without turning this into a new ingestion project.

Change 1 — Fix a hard SQLite bug in --active (GROUP_CONCAT DISTINCT + separator)

Why

SQLite does not allow GROUP_CONCAT(DISTINCT x, sep). With DISTINCT, SQLite only permits a single argument (GROUP_CONCAT(DISTINCT x)). Your current query will error at runtime in many SQLite versions.

Revision

Use a subquery that selects distinct participants, then GROUP_CONCAT with your separator.

```diff
diff --git a/Plan.md b/Plan.md
@@ fn query_active(...)
-    (SELECT GROUP_CONCAT(DISTINCT n.author_username, X'1F')
-       FROM notes n
-      WHERE n.discussion_id = d.id
-        AND n.is_system = 0
-        AND n.author_username IS NOT NULL) AS participants
+    (SELECT GROUP_CONCAT(username, X'1F') FROM (
+       SELECT DISTINCT n.author_username AS username
+         FROM notes n
+        WHERE n.discussion_id = d.id
+          AND n.is_system = 0
+          AND n.author_username IS NOT NULL
+        ORDER BY username
+    )) AS participants
```

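The corrected pattern can be sanity-checked against an in-memory SQLite database; the tiny schema below only mimics the relevant columns of the plan's `notes` table:

```python
import sqlite3

# Verify the revised pattern: DISTINCT inside a subquery, GROUP_CONCAT
# with a custom separator on the outside.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE notes (discussion_id TEXT, author_username TEXT, is_system INTEGER);
    INSERT INTO notes VALUES
        ('d1','alice',0), ('d1','bob',0), ('d1','alice',0),
        ('d1',NULL,0), ('d1','carol',1);
""")
row = con.execute("""
    SELECT (SELECT GROUP_CONCAT(username, X'1F') FROM (
              SELECT DISTINCT n.author_username AS username
                FROM notes n
               WHERE n.discussion_id = 'd1'
                 AND n.is_system = 0
                 AND n.author_username IS NOT NULL
               ORDER BY username))
""").fetchone()
# Duplicate 'alice' collapsed; NULL author and system note excluded.
val = row[0].decode() if isinstance(row[0], bytes) else row[0]
assert sorted(val.split("\x1f")) == ["alice", "bob"]
```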
Change 2 — Replace “contains('.') => exact file match” with segment-aware path classification

Why

path.contains('.') misclassifies directories like:

- .github/workflows/
- src/v1.2/auth/

It also fails the “root file” case (README.md) because your mode discriminator only treats paths as paths if they contain /.

Revision

Add explicit --path to force Expert mode (covers root files cleanly).

Classify file-vs-dir by checking the last path segment for a dot, and whether the input ends with /.

```diff
diff --git a/Plan.md b/Plan.md
@@ pub struct WhoArgs {
-    /// Username or file path (path if contains /)
-    pub target: Option<String>,
+    /// Username or file path shorthand (ambiguous for root files like README.md)
+    pub target: Option<String>,
+
+    /// Force expert mode for a file/directory path (supports root files like README.md)
+    #[arg(long, help_heading = "Mode", conflicts_with_all = ["active", "overlap", "reviews"])]
+    pub path: Option<String>,
@@ fn resolve_mode<'a>(args: &'a WhoArgs) -> Result<WhoMode<'a>> {
-    if let Some(target) = &args.target {
+    if let Some(p) = &args.path {
+        return Ok(WhoMode::Expert { path: p });
+    }
+    if let Some(target) = &args.target {
         let clean = target.strip_prefix('@').unwrap_or(target);
         if args.reviews {
             return Ok(WhoMode::Reviews { username: clean });
         }
-        // Disambiguation: if target contains '/', it's a file path.
-        // GitLab usernames never contain '/'.
-        if target.contains('/') {
+        // Disambiguation:
+        // - treat as path if it contains '/'
+        // - otherwise treat as username (root files require --path)
+        if target.contains('/') {
             return Ok(WhoMode::Expert { path: target });
         }
         return Ok(WhoMode::Workload { username: clean });
     }
```

And update the path pattern logic used by Expert/Overlap:

```diff
diff --git a/Plan.md b/Plan.md
@@ fn query_expert(...)
-    // Normalize path for LIKE matching: add trailing % if no extension
-    let path_pattern = if path.contains('.') {
-        path.to_string() // Exact file match
-    } else {
-        let trimmed = path.trim_end_matches('/');
-        format!("{trimmed}/%")
-    };
+    // Normalize:
+    // - if ends_with('/') => directory prefix
+    // - else if last segment contains '.' => file exact match
+    // - else => directory prefix
+    let trimmed = path.trim_end_matches('/');
+    let last = trimmed.rsplit('/').next().unwrap_or(trimmed);
+    let is_file = !path.ends_with('/') && last.contains('.');
+    let path_pattern = if is_file { trimmed.to_string() } else { format!("{trimmed}/%") };
```

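The same classification rules, restated in Python to make the edge cases checkable (the function name is illustrative, not the plan's API):

```python
def classify_path(path: str):
    """Mirror the revised logic: return (is_file, LIKE pattern)."""
    trimmed = path.rstrip("/")
    last = trimmed.rsplit("/", 1)[-1]                  # last path segment
    is_file = not path.endswith("/") and "." in last
    return is_file, (trimmed if is_file else trimmed + "/%")

# Cases the old `path.contains('.')` check got wrong:
assert classify_path(".github/workflows/") == (False, ".github/workflows/%")
assert classify_path("src/v1.2/auth/") == (False, "src/v1.2/auth/%")
# Still-correct cases:
assert classify_path("src/auth/login.rs") == (True, "src/auth/login.rs")
assert classify_path("src/auth") == (False, "src/auth/%")
```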
Change 3 — Stop building dynamic SQL strings for optional filters; always bind params

Why

Right now you’re mixing:

- dynamic project_clause string fragments
- ad-hoc param vectors
- placeholder renumbering by branch

That’s brittle and easy to regress (especially when you add more conditions later). SQLite/rusqlite can bind Option<T> to NULL, which enables a simple pattern:

```sql
AND (?3 IS NULL OR n.project_id = ?3)
```

Revision (representative; apply to all queries)

```diff
diff --git a/Plan.md b/Plan.md
@@ fn query_expert(...)
-    let project_clause = if project_id.is_some() {
-        "AND n.project_id = ?3"
-    } else {
-        ""
-    };
-
-    let sql = format!(
+    let sql = format!(
         "SELECT username, role, activity_count, last_active_at FROM (
@@
             FROM notes n
             WHERE n.position_new_path LIKE ?1
               AND n.is_system = 0
               AND n.author_username IS NOT NULL
               AND n.created_at >= ?2
-              {project_clause}
+              AND (?3 IS NULL OR n.project_id = ?3)
@@
             WHERE n.position_new_path LIKE ?1
               AND m.author_username IS NOT NULL
               AND m.updated_at >= ?2
-              {project_clause}
+              AND (?3 IS NULL OR n.project_id = ?3)
             GROUP BY m.author_username
-        )"
+        ) t"
     );
-
-    let mut params: Vec<Box<dyn rusqlite::ToSql>> = Vec::new();
-    params.push(Box::new(path_pattern.clone()));
-    params.push(Box::new(since_ms));
-    if let Some(pid) = project_id {
-        params.push(Box::new(pid));
-    }
-    let param_refs: Vec<&dyn rusqlite::ToSql> = params.iter().map(|p| p.as_ref()).collect();
+    let param_refs = rusqlite::params![path_pattern, since_ms, project_id];
```

Notes:

- Adds the required derived-table alias t (some SQLite configurations are stricter).
- Eliminates the dynamic param vector and placeholder gymnastics.

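The NULL-or-equal trick is easy to confirm with `sqlite3` from the standard library; the schema here is a stand-in for the plan's `notes` table:

```python
import sqlite3

# One SQL string serves both the filtered and unfiltered case: binding
# None sends SQL NULL, and "(? IS NULL OR col = ?)" collapses to a
# no-op filter when NULL is bound.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE notes (project_id INTEGER, author_username TEXT)")
con.executemany("INSERT INTO notes VALUES (?, ?)",
                [(1, "alice"), (1, "bob"), (2, "carol")])

SQL = """SELECT COUNT(*) FROM notes n
         WHERE (:pid IS NULL OR n.project_id = :pid)"""

assert con.execute(SQL, {"pid": None}).fetchone()[0] == 3  # no project filter
assert con.execute(SQL, {"pid": 1}).fetchone()[0] == 2     # project filter applied
```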
Change 4 — Filter “path touch” queries to DiffNotes and escape LIKE properly

Why

Only DiffNotes reliably have position_new_path; including other note types can skew counts and harm performance.

LIKE treats % and _ as wildcards—rare in file paths, but not impossible (generated files, templates). Escaping is a low-cost robustness win.

Revision

Add note_type='DiffNote' and LIKE ... ESCAPE '\' plus a tiny escape helper.

```diff
diff --git a/Plan.md b/Plan.md
@@ fn query_expert(...)
-    FROM notes n
-    WHERE n.position_new_path LIKE ?1
+    FROM notes n
+    WHERE n.note_type = 'DiffNote'
+      AND n.position_new_path LIKE ?1 ESCAPE '\'
       AND n.is_system = 0
@@
```

```diff
diff --git a/Plan.md b/Plan.md
@@ Helper Functions
+fn escape_like(input: &str) -> String {
+    input.replace('\\', "\\\\").replace('%', "\\%").replace('_', "\\_")
+}
```

And when building patterns:

```diff
-    let path_pattern = if is_file { trimmed.to_string() } else { format!("{trimmed}/%") };
+    let base = escape_like(trimmed);
+    let path_pattern = if is_file { base } else { format!("{base}/%") };
```

Apply the same changes to query_overlap and any other position_new_path LIKE ....

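A Python twin of the proposed escape helper, checked against SQLite's `LIKE ... ESCAPE '\'` semantics (table and column names are illustrative):

```python
import sqlite3

def escape_like(s: str) -> str:
    """Escape LIKE metacharacters, mirroring the proposed Rust helper."""
    return s.replace("\\", "\\\\").replace("%", "\\%").replace("_", "\\_")

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (path TEXT)")
con.executemany("INSERT INTO files VALUES (?)",
                [("src/a_b.rs",), ("src/axb.rs",)])

# Unescaped, '_' matches any single character and both rows come back;
# escaped, it matches only the literal underscore.
pat = escape_like("src/a_b.rs")
rows = con.execute("SELECT path FROM files WHERE path LIKE ? ESCAPE '\\'",
                   (pat,)).fetchall()
assert rows == [("src/a_b.rs",)]
```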
Change 5 — Use note timestamps for “touch since” semantics (Expert/Overlap author branch)

Why

In Expert/Overlap “author” branches you filter by m.updated_at >= since. That answers “MR updated recently” rather than “MR touched at this path recently”, which can surface stale ownership.

Revision

Filter by the note creation time (and use it for “last touch” where relevant). You can still compute author activity, but anchor it to note activity.

```diff
diff --git a/Plan.md b/Plan.md
@@ fn query_overlap(...)
-    WHERE n.position_new_path LIKE ?1
+    WHERE n.note_type = 'DiffNote'
+      AND n.position_new_path LIKE ?1 ESCAPE '\'
       AND m.state IN ('opened', 'merged')
       AND m.author_username IS NOT NULL
-      AND m.updated_at >= ?2
+      AND n.created_at >= ?2
       AND (?3 IS NULL OR m.project_id = ?3)
```

Same idea in Expert mode’s “MR authors” branch.

Change 6 — Workload mode: apply --since consistently to unresolved discussions

Why

Workload’s unresolved discussions ignore since_ms. That makes --since partially misleading and can dump very old threads.

Revision

Filter on d.last_note_at when since_ms is set.

```diff
diff --git a/Plan.md b/Plan.md
@@ fn query_workload(...)
-    let disc_sql = format!(
+    let disc_since = if since_ms.is_some() {
+        "AND d.last_note_at >= ?2"
+    } else { "" };
+    let disc_sql = format!(
         "SELECT d.noteable_type,
@@
         WHERE d.resolvable = 1 AND d.resolved = 0
           AND EXISTS (
@@
           )
           {disc_project_filter}
+          {disc_since}
         ORDER BY d.last_note_at DESC
         LIMIT {limit}"
     );
@@
-    // Rebuild params for discussion query (only username + optional project_id)
-    let mut disc_params: Vec<Box<dyn rusqlite::ToSql>> = Vec::new();
-    disc_params.push(Box::new(username.to_string()));
-    if let Some(pid) = project_id {
-        disc_params.push(Box::new(pid));
-    }
+    // Params: username, since_ms, project_id (NULLs ok)
+    let disc_param_refs = rusqlite::params![username, since_ms, project_id];
```

(If you adopt Change 3 fully, this becomes very clean.)

Change 7 — Make Overlap results represent “both roles” instead of collapsing to one

Why

Collapsing to a single role loses valuable info (“they authored and reviewed”). Also your current “prefer author” rule is arbitrary for the “who else is touching this” question.

Revision

Track role counts separately and render as A, R, or A+R.

```diff
diff --git a/Plan.md b/Plan.md
@@ pub struct OverlapUser {
     pub username: String,
-    pub role: String,
-    pub touch_count: u32,
+    pub author_touch_count: u32,
+    pub review_touch_count: u32,
+    pub touch_count: u32,
     pub last_touch_at: i64,
     pub mr_iids: Vec<i64>,
 }
@@ fn query_overlap(...)
-    let entry = user_map.entry(username.clone()).or_insert_with(|| OverlapUser {
+    let entry = user_map.entry(username.clone()).or_insert_with(|| OverlapUser {
         username: username.clone(),
-        role: role.clone(),
+        author_touch_count: 0,
+        review_touch_count: 0,
         touch_count: 0,
         last_touch_at: 0,
         mr_iids: Vec::new(),
     });
     entry.touch_count += count;
+    if role == "author" { entry.author_touch_count += count; }
+    if role == "reviewer" { entry.review_touch_count += count; }
@@ human output
-    println!(
-        "  {:<16} {:<8} {:>7} {:<12} {}",
+    println!(
+        "  {:<16} {:<6} {:>7} {:<12} {}",
         ...
     );
@@
-        user.role,
+        format_roles(user.author_touch_count, user.review_touch_count),
```

Change 8 — Add an "Index Audit + optional migration" step (big perf win, low blast radius)

Why

With 280K notes, the path/timestamp queries will degrade quickly without indexes. This isn't "scope creep"; it's making the feature usable.

Revision (plan-level)

Add a non-breaking migration that only creates indexes if missing.

Optionally add a runtime check: if `EXPLAIN QUERY PLAN` indicates a full table scan on `notes`, print a dim warning in human mode.

```diff
diff --git a/Plan.md b/Plan.md
@@ Implementation Order
-| Step | What | Files |
+| Step | What | Files |
 | 1 | CLI skeleton: `WhoArgs` + `Commands::Who` + dispatch + stub | `cli/mod.rs`, `commands/mod.rs`, `main.rs` |
+| 1.5 | Index audit + add `CREATE INDEX IF NOT EXISTS` migration for who hot paths | `migrations/0xx_who_indexes.sql` |
@@
```

Suggested indexes (tune names to your conventions):

- `notes(note_type, position_new_path, created_at)`
- `notes(discussion_id, is_system, author_username)`
- `discussions(resolvable, resolved, last_note_at, project_id)`
- `merge_requests(project_id, state, updated_at, author_username)`
- `issue_assignees(username, issue_id)`

Even if SQLite can't perfectly index LIKE, these still help with join and timestamp filters.
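The suggested indexes above could land in the migration as something like this (index names and the `0xx_who_indexes.sql` file name are placeholders, not the project's actual conventions):

```sql
-- Sketch of migrations/0xx_who_indexes.sql; safe to re-run thanks to IF NOT EXISTS.
CREATE INDEX IF NOT EXISTS idx_notes_type_path_created
    ON notes(note_type, position_new_path, created_at);
CREATE INDEX IF NOT EXISTS idx_notes_discussion_system_author
    ON notes(discussion_id, is_system, author_username);
CREATE INDEX IF NOT EXISTS idx_discussions_resolvable_recent
    ON discussions(resolvable, resolved, last_note_at, project_id);
CREATE INDEX IF NOT EXISTS idx_mrs_project_state_updated
    ON merge_requests(project_id, state, updated_at, author_username);
CREATE INDEX IF NOT EXISTS idx_issue_assignees_user
    ON issue_assignees(username, issue_id);
```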
Change 9 — Make robot JSON reproducible by echoing the effective query inputs

Why

Agent workflows benefit from a stable "query record": what mode ran, what path/user, resolved project, effective since, limit.

Revision

Include an `input` object in JSON output.

```diff
diff --git a/Plan.md b/Plan.md
@@ struct WhoJsonData {
     mode: String,
+    input: serde_json::Value,
     #[serde(flatten)]
     result: serde_json::Value,
 }
@@ pub fn print_who_json(...)
-    let output = WhoJsonEnvelope {
+    let input = serde_json::json!({
+        "project": /* resolved or raw args.project */,
+        "since": /* resolved since ISO */,
+        "limit": /* args.limit */,
+    });
+    let output = WhoJsonEnvelope {
         ok: true,
         data: WhoJsonData {
             mode: mode.to_string(),
+            input,
             result: data,
         },
         meta: RobotMeta { elapsed_ms },
     };
```
Change 10 — Tighten clap constraints so invalid combinations never reach resolve_mode

Why

Right now conflicts are enforced manually (or not at all). Reject the invalid combos at the CLI layer:

- `--active` should conflict with `target`, `--overlap`, `--reviews`, `--path`
- `--reviews` should require a username (and should conflict with Expert path modes)

```diff
diff --git a/Plan.md b/Plan.md
@@ pub struct WhoArgs {
-    pub active: bool,
+    #[arg(long, help_heading = "Mode", conflicts_with_all = ["target", "overlap", "reviews", "path"])]
+    pub active: bool,
@@
-    pub overlap: Option<String>,
+    #[arg(long, help_heading = "Mode", conflicts_with_all = ["target", "active", "reviews", "path"])]
+    pub overlap: Option<String>,
@@
-    pub reviews: bool,
+    #[arg(long, help_heading = "Mode", requires = "target", conflicts_with_all = ["active", "overlap", "path"])]
+    pub reviews: bool,
```
Summary of what I'd definitely change

If you do nothing else, do these first:

1. Fix `GROUP_CONCAT(DISTINCT ..., sep)` in Active mode (runtime error).
2. Path classification: add `--path`, and stop using `contains('.')` globally.
3. Remove dynamic SQL + param vectors: always bind `project_id` as nullable and use `(? IS NULL OR ...)`.
4. Filter to DiffNotes + LIKE escaping for correctness and fewer rows scanned.
5. Optional index migration: otherwise this will feel slow/non-deterministically slow depending on local DB state.

If you want, I can also provide a consolidated "v2 plan" as a single unified patch (one diff) rather than per-change snippets.
303
docs/who-command-design.feedback-2.md
Normal file
@@ -0,0 +1,303 @@
Below are the highest-leverage revisions I'd make to iteration 1 to tighten correctness, performance, and "agent usefulness" without blowing up scope. For each change: (1) rationale, (2) a focused unified diff against the plan you pasted.

Change 1 — Make robot "input echo" actually resolved (project_id, project_path, since_ms/iso, mode)

Why

Your Design Principle #5 says the robot envelope should echo resolved inputs ("effective since, resolved project"), but the current `input` object echoes only raw CLI strings. Agents can't reliably reproduce or compare runs (e.g., fuzzy project resolution may map differently over time).

This is also a reliability improvement: "what ran" should be computed once and propagated, not recomputed in output.

Plan diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@
-5. **Robot-first reproducibility.** Robot JSON output includes an `input` object echoing the resolved query parameters (effective since, resolved project, limit) so agents can trace exactly what ran.
+5. **Robot-first reproducibility.** Robot JSON output includes a `resolved_input` object (mode, since_ms + since_iso, resolved project_id + project_path, limit, db_path) so agents can trace exactly what ran.
@@
-/// Main entry point. Resolves mode from args and dispatches.
-pub fn run_who(config: &Config, args: &WhoArgs) -> Result<WhoResult> {
+/// Main entry point. Resolves mode + resolved inputs once, then dispatches.
+pub fn run_who(config: &Config, args: &WhoArgs) -> Result<WhoRun> {
     let db_path = get_db_path(config.storage.db_path.as_deref());
     let conn = create_connection(&db_path)?;

-    let project_id = args
+    let project_id = args
         .project
         .as_deref()
         .map(|p| resolve_project(&conn, p))
         .transpose()?;
+    let project_path = project_id
+        .map(|id| lookup_project_path(&conn, id))
+        .transpose()?;

     let mode = resolve_mode(args)?;

     match mode {
         WhoMode::Expert { path } => {
             let since_ms = resolve_since(args.since.as_deref(), "6m")?;
             let result = query_expert(&conn, path, project_id, since_ms, args.limit)?;
-            Ok(WhoResult::Expert(result))
+            Ok(WhoRun::new("expert", &db_path, project_id, project_path, since_ms, args.limit, WhoResult::Expert(result)))
         }
@@
     }
 }
+
+/// Wrapper that carries resolved inputs for reproducible output.
+pub struct WhoRun {
+    pub mode: String,
+    pub resolved_input: WhoResolvedInput,
+    pub result: WhoResult,
+}
+
+pub struct WhoResolvedInput {
+    pub db_path: String,
+    pub project_id: Option<i64>,
+    pub project_path: Option<String>,
+    pub since_ms: i64,
+    pub since_iso: String,
+    pub limit: usize,
+}
@@
-pub fn print_who_json(result: &WhoResult, args: &WhoArgs, elapsed_ms: u64) {
-    let (mode, data) = match result {
+pub fn print_who_json(run: &WhoRun, args: &WhoArgs, elapsed_ms: u64) {
+    let (mode, data) = match &run.result {
         WhoResult::Expert(r) => ("expert", expert_to_json(r)),
@@
-    let input = serde_json::json!({
+    let input = serde_json::json!({
         "target": args.target,
         "path": args.path,
         "project": args.project,
         "since": args.since,
         "limit": args.limit,
     });
+
+    let resolved_input = serde_json::json!({
+        "mode": run.mode,
+        "db_path": run.resolved_input.db_path,
+        "project_id": run.resolved_input.project_id,
+        "project_path": run.resolved_input.project_path,
+        "since_ms": run.resolved_input.since_ms,
+        "since_iso": run.resolved_input.since_iso,
+        "limit": run.resolved_input.limit,
+    });
@@
-        data: WhoJsonData {
-            mode: mode.to_string(),
-            input,
-            result: data,
-        },
+        data: WhoJsonData { mode: mode.to_string(), input, resolved_input, result: data },
         meta: RobotMeta { elapsed_ms },
     };
@@
 struct WhoJsonData {
     mode: String,
     input: serde_json::Value,
+    resolved_input: serde_json::Value,
     #[serde(flatten)]
     result: serde_json::Value,
 }
```
Change 2 — Remove dynamic SQL `format!(.. LIMIT {limit})` and parameterize LIMIT everywhere

Why

You explicitly prefer static SQL (`(?N IS NULL OR ...)`) to avoid subtle bugs, but Workload/Active use `format!` for LIMIT. Even though `limit` is typed, it's an inconsistency that complicates statement caching and encourages future string-assembly creep.

SQLite supports `LIMIT ?` with bound parameters; rusqlite can bind an `i64`.

Plan diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@
-    let issues_sql = format!(
-        "SELECT ...
-         ORDER BY i.updated_at DESC
-         LIMIT {limit}"
-    );
-    let mut stmt = conn.prepare(&issues_sql)?;
+    let issues_sql =
+        "SELECT ...
+         ORDER BY i.updated_at DESC
+         LIMIT ?4";
+    let mut stmt = conn.prepare(issues_sql)?;
     let assigned_issues: Vec<WorkloadIssue> = stmt
-        .query_map(rusqlite::params![username, project_id, since_ms], |row| {
+        .query_map(rusqlite::params![username, project_id, since_ms, limit as i64], |row| {
@@
-    let authored_sql = format!(
-        "SELECT ...
-         ORDER BY m.updated_at DESC
-         LIMIT {limit}"
-    );
-    let mut stmt = conn.prepare(&authored_sql)?;
+    let authored_sql =
+        "SELECT ...
+         ORDER BY m.updated_at DESC
+         LIMIT ?4";
+    let mut stmt = conn.prepare(authored_sql)?;
@@
-        .query_map(rusqlite::params![username, project_id, since_ms], |row| {
+        .query_map(rusqlite::params![username, project_id, since_ms, limit as i64], |row| {
@@
-    let reviewing_sql = format!(
-        "SELECT ...
-         ORDER BY m.updated_at DESC
-         LIMIT {limit}"
-    );
-    let mut stmt = conn.prepare(&reviewing_sql)?;
+    let reviewing_sql =
+        "SELECT ...
+         ORDER BY m.updated_at DESC
+         LIMIT ?4";
+    let mut stmt = conn.prepare(reviewing_sql)?;
@@
-        .query_map(rusqlite::params![username, project_id, since_ms], |row| {
+        .query_map(rusqlite::params![username, project_id, since_ms, limit as i64], |row| {
@@
-    let disc_sql = format!(
-        "SELECT ...
-         ORDER BY d.last_note_at DESC
-         LIMIT {limit}"
-    );
-    let mut stmt = conn.prepare(&disc_sql)?;
+    let disc_sql =
+        "SELECT ...
+         ORDER BY d.last_note_at DESC
+         LIMIT ?4";
+    let mut stmt = conn.prepare(disc_sql)?;
@@
-        .query_map(rusqlite::params![username, project_id, since_ms], |row| {
+        .query_map(rusqlite::params![username, project_id, since_ms, limit as i64], |row| {
@@
-    let sql = format!(
-        "SELECT ...
-         ORDER BY d.last_note_at DESC
-         LIMIT {limit}"
-    );
-    let mut stmt = conn.prepare(&sql)?;
+    let sql =
+        "SELECT ...
+         ORDER BY d.last_note_at DESC
+         LIMIT ?3";
+    let mut stmt = conn.prepare(sql)?;
@@
-        .query_map(rusqlite::params![since_ms, project_id], |row| {
+        .query_map(rusqlite::params![since_ms, project_id, limit as i64], |row| {
```
Change 3 — Fix path matching for dotless files (LICENSE/Makefile) via "exact OR prefix" (no new flags)

Why

Your improved "dot only in last segment" heuristic still fails on dotless files (LICENSE, Makefile, Dockerfile), which are common, especially at repo root. Right now they'll be treated as directories (`LICENSE/%`) and silently return nothing.

Best minimal UX: if the user provides a path that's ambiguous (no trailing slash), match either the exact file OR the directory prefix.

Plan diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@
-/// Build a LIKE pattern from a user-supplied path, with proper LIKE escaping.
-///
-/// Rules:
-/// - If the path ends with `/`, it's a directory prefix → `escaped_path%`
-/// - If the last path segment contains `.`, it's a file → exact match
-/// - Otherwise, it's a directory prefix → `escaped_path/%`
+/// Build an exact + prefix match from a user-supplied path, with proper LIKE escaping.
+///
+/// Rules:
+/// - If the path ends with `/`, treat as directory-only (prefix match)
+/// - Otherwise, treat as ambiguous: exact match OR directory prefix
+///   (fixes dotless files like LICENSE/Makefile without requiring new flags)
@@
-fn build_path_pattern(path: &str) -> String {
+struct PathMatch {
+    exact: String,
+    prefix: String,
+    dir_only: bool,
+}
+
+fn build_path_match(path: &str) -> PathMatch {
     let trimmed = path.trim_end_matches('/');
-    let last_segment = trimmed.rsplit('/').next().unwrap_or(trimmed);
-    let is_file = !path.ends_with('/') && last_segment.contains('.');
     let escaped = escape_like(trimmed);
-
-    if is_file {
-        escaped
-    } else {
-        format!("{escaped}/%")
-    }
+    PathMatch {
+        exact: escaped.clone(),
+        prefix: format!("{escaped}/%"),
+        dir_only: path.ends_with('/'),
+    }
 }
@@
-    let path_pattern = build_path_pattern(path);
+    let pm = build_path_match(path);
@@
-    AND n.position_new_path LIKE ?1 ESCAPE '\\'
+    AND (
+        (?4 = 1 AND n.position_new_path LIKE ?2 ESCAPE '\\')
+        OR (?4 = 0 AND (n.position_new_path = ?1 OR n.position_new_path LIKE ?2 ESCAPE '\\'))
+    )
@@
-    let rows: Vec<(String, String, u32, i64)> = stmt
-        .query_map(rusqlite::params![path_pattern, since_ms, project_id], |row| {
+    let rows: Vec<(String, String, u32, i64)> = stmt
+        .query_map(rusqlite::params![pm.exact, pm.prefix, since_ms, i32::from(pm.dir_only), project_id], |row| {
             Ok((row.get(0)?, row.get(1)?, row.get(2)?, row.get(3)?))
         })?
```

(Apply the same pattern to Overlap mode. Note that the new parameter order shifts the since/project placeholders: renumber them in the surrounding SQL — since becomes `?3`, project `?5` — to match the `params![...]` list.)
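The `escape_like` helper these snippets rely on is assumed by the plan; a minimal sketch (escaping `%`, `_`, and the escape character itself, to pair with `LIKE ? ESCAPE '\'`):

```rust
/// Escape SQLite LIKE metacharacters so a user-supplied path matches
/// literally under `LIKE ? ESCAPE '\'`. Minimal sketch; the real helper
/// in the plan may differ.
fn escape_like(s: &str) -> String {
    let mut out = String::with_capacity(s.len());
    for c in s.chars() {
        // '%' and '_' are LIKE wildcards; '\' is our chosen escape char.
        if matches!(c, '%' | '_' | '\\') {
            out.push('\\');
        }
        out.push(c);
    }
    out
}

fn main() {
    assert_eq!(escape_like("src/auth"), "src/auth");
    assert_eq!(escape_like("a_b%"), "a\\_b\\%");
}
```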
Change 4 — Consistently exclude system notes in all DiffNote-based branches (Expert/Overlap author branches currently don't)

Why

You filter `n.is_system = 0` for reviewer branches, but not in the author branches of Expert/Overlap. That can skew "author touch" via system-generated diff notes or bot activity.

Consistency here improves correctness and also enables more aggressive partial indexing.

Plan diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@
-    WHERE n.note_type = 'DiffNote'
+    WHERE n.note_type = 'DiffNote'
       AND n.position_new_path LIKE ?1 ESCAPE '\\'
+      AND n.is_system = 0
       AND m.author_username IS NOT NULL
       AND n.created_at >= ?2
       AND (?3 IS NULL OR m.project_id = ?3)
@@
-    WHERE n.note_type = 'DiffNote'
+    WHERE n.note_type = 'DiffNote'
       AND n.position_new_path LIKE ?1 ESCAPE '\\'
+      AND n.is_system = 0
       AND m.state IN ('opened', 'merged')
       AND m.author_username IS NOT NULL
       AND n.created_at >= ?2
       AND (?3 IS NULL OR m.project_id = ?3)
```
Change 5 — Rework Migration 017 indexes to match real predicates + add one critical notes index for discussion participation

Why

(a) `idx_notes_diffnote_path_created` currently leads with `note_type` even though it's constant via the partial index. You want the leading columns to match your most selective predicates: `position_new_path` prefix + `created_at` range, with optional `project_id`.

(b) Active + Workload discussion participation repeatedly hits `notes` by `(discussion_id, author_username)`; you only guarantee `notes(discussion_id)` is indexed. Adding a narrow partial composite index pays off immediately for both "participants" and "EXISTS user participated" checks.

(c) The discussions index should focus on `(project_id, last_note_at)` with a partial predicate; resolvable/resolved a_
471
docs/who-command-design.feedback-3.md
Normal file
@@ -0,0 +1,471 @@
Below are the revisions I'd make to iteration 2 to improve correctness, determinism, query-plan quality, and multi-project usability without turning this into a bigger product.

I'm treating your plan as the "source of truth" and showing git-diff style patches against the plan text/code blocks you included.

Change 1 — Fix project scoping to hit the right index (DiffNote branches)

Why

Your hot-path index is:

```sql
idx_notes_diffnote_path_created ON notes(position_new_path, created_at, project_id)
    WHERE note_type = 'DiffNote' AND is_system = 0
```

But in Expert/Overlap you sometimes scope by `m.project_id = ?3` (MR table), not `n.project_id = ?3` (notes table). That weakens the optimizer's ability to use the composite notes index (and can force broader joins before filtering).

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ Query: Expert Mode @@
-    AND (?3 IS NULL OR m.project_id = ?3)
+    -- IMPORTANT: scope on notes.project_id to maximize use of
+    -- idx_notes_diffnote_path_created (notes is the selective table)
+    AND (?3 IS NULL OR n.project_id = ?3)

@@ Query: Overlap Mode @@
-    AND (?3 IS NULL OR m.project_id = ?3)
+    AND (?3 IS NULL OR n.project_id = ?3)

@@ Query: Overlap Mode (author branch) @@
-    AND (?3 IS NULL OR m.project_id = ?3)
+    AND (?3 IS NULL OR n.project_id = ?3)
```
Change 2 — Introduce a "prefix vs exact" path query to avoid LIKE when you don't need it

Why

For exact file paths (e.g. `src/auth/login.rs`), you currently do `position_new_path LIKE ?1 ESCAPE '\'` where `?1` has no wildcard.

That's logically fine, but it's a worse signal to the planner than `=` and can degrade performance depending on collation/case settings.

This doesn't violate "static SQL" — you can pick between two static query strings.

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ Helper: Path Pattern Construction @@
-fn build_path_pattern(path: &str) -> String {
+struct PathQuery {
+    /// The parameter value to bind.
+    value: String,
+    /// If true: use LIKE value || '%'. If false: use '='.
+    is_prefix: bool,
+}
+
+fn build_path_query(path: &str) -> PathQuery {
     let trimmed = path.trim_end_matches('/');
     let last_segment = trimmed.rsplit('/').next().unwrap_or(trimmed);
     let is_file = !path.ends_with('/') && last_segment.contains('.');
     let escaped = escape_like(trimmed);

     if is_file {
-        escaped
+        PathQuery { value: escaped, is_prefix: false }
     } else {
-        format!("{escaped}/%")
+        PathQuery { value: format!("{escaped}/%"), is_prefix: true }
     }
 }
```

And then (example for DiffNote predicates):

```diff
@@ Query: Expert Mode @@
-    let path_pattern = build_path_pattern(path);
+    let pq = build_path_query(path);

-    let sql = " ... n.position_new_path LIKE ?1 ESCAPE '\\' ... ";
+    let sql_prefix = " ... n.position_new_path LIKE ?1 ESCAPE '\\' ... ";
+    let sql_exact  = " ... n.position_new_path = ?1 ... ";

-    let mut stmt = conn.prepare(sql)?;
+    let mut stmt = if pq.is_prefix { conn.prepare_cached(sql_prefix)? }
+                   else { conn.prepare_cached(sql_exact)? };
     let rows = stmt.query_map(params![... pq.value ...], ...);
```
Change 3 — Push Expert aggregation into SQL (less Rust, fewer rows, SQL-level LIMIT)

Why

Right now Expert does:

1. `UNION ALL`
2. return per-role rows
3. HashMap merge
4. score compute
5. sort/truncate

You can do all of that in SQL deterministically, and then `LIMIT ?N` actually works.

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ Query: Expert Mode @@
-    let sql = "SELECT username, role, activity_count, last_active_at FROM (
-        ...
-    )";
+    let sql = "
+      WITH activity AS (
+        SELECT
+          n.author_username AS username,
+          'reviewer' AS role,
+          COUNT(*) AS cnt,
+          MAX(n.created_at) AS last_active_at
+        FROM notes n
+        WHERE n.note_type = 'DiffNote'
+          AND n.is_system = 0
+          AND n.author_username IS NOT NULL
+          AND n.created_at >= ?2
+          AND (?3 IS NULL OR n.project_id = ?3)
+          AND (
+            (?4 = 1 AND n.position_new_path LIKE ?1 ESCAPE '\\') OR
+            (?4 = 0 AND n.position_new_path = ?1)
+          )
+        GROUP BY n.author_username
+
+        UNION ALL
+
+        SELECT
+          m.author_username AS username,
+          'author' AS role,
+          COUNT(DISTINCT m.id) AS cnt,
+          MAX(n.created_at) AS last_active_at
+        FROM merge_requests m
+        JOIN discussions d ON d.merge_request_id = m.id
+        JOIN notes n ON n.discussion_id = d.id
+        WHERE n.note_type = 'DiffNote'
+          AND n.is_system = 0
+          AND m.author_username IS NOT NULL
+          AND n.created_at >= ?2
+          AND (?3 IS NULL OR n.project_id = ?3)
+          AND (
+            (?4 = 1 AND n.position_new_path LIKE ?1 ESCAPE '\\') OR
+            (?4 = 0 AND n.position_new_path = ?1)
+          )
+        GROUP BY m.author_username
+      )
+      SELECT
+        username,
+        SUM(CASE WHEN role='reviewer' THEN cnt ELSE 0 END) AS review_count,
+        SUM(CASE WHEN role='author' THEN cnt ELSE 0 END) AS author_count,
+        MAX(last_active_at) AS last_active_at,
+        (SUM(CASE WHEN role='reviewer' THEN cnt ELSE 0 END) * 3.0) +
+        (SUM(CASE WHEN role='author' THEN cnt ELSE 0 END) * 2.0) AS score
+      FROM activity
+      GROUP BY username
+      ORDER BY score DESC, last_active_at DESC, username ASC
+      LIMIT ?5
+    ";

-    // Aggregate by username: combine reviewer + author counts
-    let mut user_map: HashMap<...> = HashMap::new();
-    ...
-    experts.sort_by(...); experts.truncate(limit);
+    // No Rust-side merge/sort needed; SQL already returns final rows.
```
Change 4 — Overlap output is ambiguous across projects: include stable MR refs (project_path!iid)

Why

`mr_iids: Vec<i64>` is ambiguous in a multi-project DB. `!123` only means something relative to a project.

Also: your MR IID dedup is currently `Vec::contains()` inside a loop (O(n²)). Use a HashSet.

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ OverlapResult @@
 pub struct OverlapUser {
     pub username: String,
@@
-    pub mr_iids: Vec<i64>,
+    /// Stable MR references like "group/project!123"
+    pub mr_refs: Vec<String>,
 }

@@ Query: Overlap Mode (SQL) @@
-    GROUP_CONCAT(DISTINCT m.iid) AS mr_iids
+    GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
 FROM notes n
 JOIN discussions d ON n.discussion_id = d.id
 JOIN merge_requests m ON d.merge_request_id = m.id
+JOIN projects p ON m.project_id = p.id
@@
-    GROUP_CONCAT(DISTINCT m.iid) AS mr_iids
+    GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
 FROM merge_requests m
 JOIN discussions d ON d.merge_request_id = m.id
 JOIN notes n ON n.discussion_id = d.id
+JOIN projects p ON m.project_id = p.id

@@ Query: Overlap Mode (Rust merge) @@
-    let mr_iids: Vec<i64> = mr_iids_csv ...
+    let mr_refs: Vec<String> = mr_refs_csv
+        .as_deref()
+        .map(|csv| csv.split(',').map(|s| s.trim().to_string()).collect())
+        .unwrap_or_default();
@@
-    // Merge MR IIDs, deduplicate
-    for iid in &mr_iids {
-        if !entry.mr_iids.contains(iid) {
-            entry.mr_iids.push(*iid);
-        }
-    }
+    // Merge MR refs, deduplicate
+    use std::collections::HashSet;
+    let mut set: HashSet<String> = entry.mr_refs.drain(..).collect();
+    for r in mr_refs { set.insert(r); }
+    entry.mr_refs = set.into_iter().collect();
```
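The HashSet-based merge can be sketched and tested in isolation (`merge_refs` is a hypothetical extraction of the loop body; sorting keeps the output deterministic, which matters for stable robot JSON):

```rust
use std::collections::HashSet;

/// Merge new MR refs into an existing list, deduplicating via HashSet
/// instead of O(n^2) Vec::contains. Sorted so repeated runs produce
/// identical output.
fn merge_refs(existing: &mut Vec<String>, incoming: Vec<String>) {
    let mut set: HashSet<String> = existing.drain(..).collect();
    set.extend(incoming);
    let mut merged: Vec<String> = set.into_iter().collect();
    merged.sort();
    *existing = merged;
}

fn main() {
    let mut refs = vec!["grp/app!12".to_string(), "grp/app!7".to_string()];
    merge_refs(&mut refs, vec!["grp/app!12".into(), "grp/lib!3".into()]);
    assert_eq!(refs, vec!["grp/app!12", "grp/app!7", "grp/lib!3"]);
}
```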
Change 5 — Active mode: avoid correlated subqueries by preselecting discussions, then aggregating notes once

Why

Your Active query does two correlated subqueries per discussion row:

- `note_count`
- `participants`

With LIMIT 20 it's not catastrophic, but it is still unnecessary work and creates "spiky" behavior if the planner chooses poorly.

Pattern to use:

- A CTE selects the limited set of discussions
- Join notes once and aggregate with GROUP BY (one pass for counts, one distinct pass for participants, so `note_count` still counts notes rather than distinct participants)

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ Query: Active Mode @@
-    let sql =
-        "SELECT
-            d.noteable_type,
-            ...
-            (SELECT COUNT(*) FROM notes n
-             WHERE n.discussion_id = d.id AND n.is_system = 0) AS note_count,
-            (SELECT GROUP_CONCAT(username, X'1F') FROM (
-                SELECT DISTINCT n.author_username AS username
-                FROM notes n
-                WHERE n.discussion_id = d.id
-                  AND n.is_system = 0
-                  AND n.author_username IS NOT NULL
-                ORDER BY username
-            )) AS participants
-         FROM discussions d
-         ...
-         LIMIT ?3";
+    let sql = "
+        WITH picked AS (
+            SELECT d.id, d.noteable_type, d.issue_id, d.merge_request_id, d.project_id, d.last_note_at
+            FROM discussions d
+            WHERE d.resolvable = 1 AND d.resolved = 0
+              AND d.last_note_at >= ?1
+              AND (?2 IS NULL OR d.project_id = ?2)
+            ORDER BY d.last_note_at DESC
+            LIMIT ?3
+        ),
+        note_counts AS (
+            SELECT n.discussion_id, COUNT(*) AS note_count
+            FROM notes n
+            JOIN picked p ON p.id = n.discussion_id
+            WHERE n.is_system = 0
+            GROUP BY n.discussion_id
+        ),
+        participant_agg AS (
+            SELECT n.discussion_id,
+                   GROUP_CONCAT(n.author_username, X'1F') AS participants
+            FROM (
+                SELECT DISTINCT discussion_id, author_username
+                FROM notes
+                WHERE is_system = 0 AND author_username IS NOT NULL
+            ) n
+            JOIN picked p ON p.id = n.discussion_id
+            GROUP BY n.discussion_id
+        )
+        SELECT
+            p.noteable_type,
+            COALESCE(i.iid, m.iid) AS entity_iid,
+            COALESCE(i.title, m.title) AS entity_title,
+            proj.path_with_namespace,
+            p.last_note_at,
+            COALESCE(nc.note_count, 0) AS note_count,
+            COALESCE(pa.participants, '') AS participants
+        FROM picked p
+        JOIN projects proj ON p.project_id = proj.id
+        LEFT JOIN issues i ON p.issue_id = i.id
+        LEFT JOIN merge_requests m ON p.merge_request_id = m.id
+        LEFT JOIN note_counts nc ON nc.discussion_id = p.id
+        LEFT JOIN participant_agg pa ON pa.discussion_id = p.id
+        ORDER BY p.last_note_at DESC
+        ";
```
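On the Rust side, the `X'1F'` (ASCII unit separator) participants string splits back out like this (`split_participants` is a hypothetical helper; the separator is chosen because usernames can contain commas but not control characters):

```rust
/// Split a GROUP_CONCAT(..., X'1F') participants string back into names.
/// Empty input (no participants) yields an empty list.
fn split_participants(raw: &str) -> Vec<String> {
    if raw.is_empty() {
        return Vec::new();
    }
    raw.split('\u{1F}').map(str::to_string).collect()
}

fn main() {
    let raw = format!("alice{sep}bob{sep}carol", sep = '\u{1F}');
    assert_eq!(split_participants(&raw), vec!["alice", "bob", "carol"]);
    assert!(split_participants("").is_empty());
}
```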
Change 6 — Use prepare_cached() everywhere (cheap perf win, no scope creep)

Why

You already worked hard to keep SQL static. Taking advantage of SQLite statement caching completes the loop.

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ Query functions @@
-    let mut stmt = conn.prepare(sql)?;
+    let mut stmt = conn.prepare_cached(sql)?;
```

Apply in all query fns (`query_workload`, `query_reviews`, `query_active`, `query_expert`, `query_overlap`, `lookup_project_path`).
Change 7 — Human output: show project_path where ambiguity exists (Workload + Overlap)

Why

When not project-scoped, `#42` and `!100` aren't unique. You already have project paths in the query results — you're just not printing them.

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ print_workload_human @@
-    println!(
-        " {} {} {}",
+    println!(
+        " {} {} {} {}",
         style(format!("#{:<5}", item.iid)).cyan(),
         truncate_str(&item.title, 45),
         style(format_relative_time(item.updated_at)).dim(),
+        style(&item.project_path).dim(),
     );

@@ print_workload_human (MRs) @@
-    println!(
-        " {} {}{} {}",
+    println!(
+        " {} {}{} {} {}",
         style(format!("!{:<5}", mr.iid)).cyan(),
         truncate_str(&mr.title, 40),
         style(draft).dim(),
         style(format_relative_time(mr.updated_at)).dim(),
+        style(&mr.project_path).dim(),
     );

@@ print_overlap_human @@
-    let mr_str = user.mr_iids.iter().take(5).map(|iid| format!("!{iid}")).collect::<Vec<_>>().join(", ");
+    let mr_str = user.mr_refs.iter().take(5).cloned().collect::<Vec<_>>().join(", ");
```
Change 8 — Robot JSON: add stable IDs + "defaulted" flags for reproducibility

Why

You already added `resolved_input` — good. Two more reproducibility gaps remain:

- Agents can't reliably "open" an entity without IDs (`discussion_id`, `mr_id`, `issue_id`).
- Agents can't tell whether `since` was user-provided vs defaulted (important when replaying intent).

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ WhoResolvedInput @@
 pub struct WhoResolvedInput {
@@
     pub since_ms: Option<i64>,
     pub since_iso: Option<String>,
+    pub since_was_default: bool,
     pub limit: usize,
 }

@@ run_who @@
-    let since_ms = resolve_since(args.since.as_deref(), "6m")?;
+    let since_was_default = args.since.is_none();
+    let since_ms = resolve_since(args.since.as_deref(), "6m")?;
     Ok(WhoRun {
         resolved_input: WhoResolvedInput {
@@
             since_ms: Some(since_ms),
             since_iso: Some(ms_to_iso(since_ms)),
+            since_was_default,
             limit: args.limit,
         },

@@ print_who_json resolved_input @@
     let resolved_input = serde_json::json!({
@@
         "since_ms": run.resolved_input.since_ms,
         "since_iso": run.resolved_input.since_iso,
+        "since_was_default": run.resolved_input.since_was_default,
         "limit": run.resolved_input.limit,
     });
```

And for Active/Workload discussion items, add IDs in SQL and JSON:

```diff
@@ ActiveDiscussion @@
 pub struct ActiveDiscussion {
+    pub discussion_id: i64,
@@
 }

@@ query_active SELECT @@
-    SELECT
-        p.noteable_type,
+    SELECT
+        p.id AS discussion_id,
+        p.noteable_type,

@@ active_to_json @@
-    "discussions": r.discussions.iter().map(|d| json!({
+    "discussions": r.discussions.iter().map(|d| json!({
+        "discussion_id": d.discussion_id,
         ...
     }))
```
Change 9 — Make performance verification explicit: require EXPLAIN QUERY PLAN checks for each mode

Why

You're adding indexes specifically for these queries. The only way to ensure the planner is doing what you think is to lock in a short perf checklist (especially after schema drift or SQLite version differences).

Diff

```diff
--- a/who-command-design.md
+++ b/who-command-design.md
@@ Verification @@
 # Manual verification against real data
 cargo run --release -- who src/features/global-search/
@@
 cargo run --release -- who src/features/global-search/ -p typescript  # project scoped
+
+# Perf verification (required before merge):
+# Confirm idx_notes_diffnote_path_created is used for Expert/Overlap and
+# idx_discussions_unresolved_recent is used for Active.
+sqlite3 path/to/db.sqlite "
+  EXPLAIN QUERY PLAN
+  SELECT ...  -- paste final Expert SQL with representative bindings
+";
```

(Keep it lightweight: one representative query per mode is enough.)
Net effect
|
||||
|
||||
Correctness: project scoping hits the notes index; IDs added for agent workflows.
|
||||
|
||||
Performance: fewer rows/materialization in Expert; statement caching everywhere; Active avoids correlated subqueries.
|
||||
|
||||
UX: human output no longer ambiguous across projects; Overlap MR references become actionable.
|
||||
|
||||
Reproducibility: agents can distinguish defaults vs explicit inputs; can dereference entities reliably.
|
||||
|
||||
If you want one “highest ROI” subset to implement first: Change 1 + Change 4 + Change 6 + Change 7. That’s where the real operational value lands.
|
||||
3
docs/who-command-design.feedback-4.md
Normal file
@@ -0,0 +1,3 @@
ChatGPT said:
Proposing code revisions for performance and determinism
Answer now
356
docs/who-command-design.feedback-5.md
Normal file
@@ -0,0 +1,356 @@
Below are the highest-leverage revisions I'd make. They're tightly scoped (no new tables or APIs), but they fix a few real correctness issues and make the outputs more actionable.
1) Fix a correctness bug in PathQuery: don't escape for =, and make --path Makefile actually work

Why

Bug: build_path_query() currently runs escape_like() even when is_prefix = false (exact match). That breaks exact matches for paths containing _, %, or \ because = does not treat those as metacharacters, so the escaped string won't equal the stored path.

UX mismatch: the plan says --path handles dotless root files (Makefile/LICENSE), but the current logic still treats them as directory prefixes (Makefile/%) → zero results.

Change

- Only escape for LIKE.
- Treat root paths (no /) passed via --path as exact matches by default (unless they end with /).

```diff
diff --git a/plan.md b/plan.md
@@
-/// Build a path query from a user-supplied path.
-///
-/// Rules:
-/// - If the path ends with `/`, it's a directory prefix -> `escaped_path%` (LIKE)
-/// - If the last path segment contains `.`, it's a file -> exact match (=)
-/// - Otherwise, it's a directory prefix -> `escaped_path/%` (LIKE)
+/// Build a path query from a user-supplied path.
+///
+/// Rules:
+/// - If the path ends with `/`, it's a directory prefix -> `escaped_path/%` (LIKE)
+/// - If the path is a root path (no `/`) and does NOT end with `/`, treat as exact (=)
+///   (this makes `--path Makefile` and `--path LICENSE` work as intended)
+/// - Else if the last path segment contains `.`, treat as exact (=)
+/// - Otherwise, treat as directory prefix -> `escaped_path/%` (LIKE)
@@
 fn build_path_query(path: &str) -> PathQuery {
     let trimmed = path.trim_end_matches('/');
     let last_segment = trimmed.rsplit('/').next().unwrap_or(trimmed);
-    let is_file = !path.ends_with('/') && last_segment.contains('.');
-    let escaped = escape_like(trimmed);
+    let is_root = !trimmed.contains('/');
+    let is_file = !path.ends_with('/') && (is_root || last_segment.contains('.'));

     if is_file {
         PathQuery {
-            value: escaped,
+            // IMPORTANT: do NOT escape for exact match (=)
+            value: trimmed.to_string(),
             is_prefix: false,
         }
     } else {
+        let escaped = escape_like(trimmed);
         PathQuery {
             value: format!("{escaped}/%"),
             is_prefix: true,
         }
     }
 }
@@
-/// **Known limitation:** Dotless root files (LICENSE, Makefile, Dockerfile)
-/// without a trailing `/` will be treated as directory prefixes. Use `--path`
-/// for these — the `--path` flag passes through to Expert mode directly,
-/// and the `build_path_query` output for "LICENSE" is a prefix `LICENSE/%`
-/// which will simply return zero results (a safe, obvious failure mode that the
-/// help text addresses).
+/// Note: Root file paths passed via `--path` (including dotless files like Makefile/LICENSE)
+/// are treated as exact matches unless they end with `/`.
```

Also update the --path help text to be explicit:

```diff
diff --git a/plan.md b/plan.md
@@
-    /// Force expert mode for a file/directory path (handles root files like
-    /// README.md, LICENSE, Makefile that lack a / and can't be auto-detected)
+    /// Force expert mode for a file/directory path.
+    /// Root files (README.md, LICENSE, Makefile) are treated as exact matches.
+    /// Use a trailing `/` to force directory-prefix matching.
```
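The revised rules can be sketched as a self-contained Rust snippet. This is a minimal sketch assuming the plan's `PathQuery` shape and an `escape_like` helper that escapes LIKE metacharacters with `\`; names mirror the plan but are illustrative, not the final implementation.

```rust
// Sketch of the revised path-classification rules (assumed shapes, not final code).
#[derive(Debug)]
struct PathQuery {
    value: String,
    is_prefix: bool, // true -> LIKE value ESCAPE '\', false -> = value
}

// Escape LIKE metacharacters with '\'; used ONLY for prefix (LIKE) queries.
fn escape_like(s: &str) -> String {
    s.chars()
        .flat_map(|c| match c {
            '%' | '_' | '\\' => vec!['\\', c],
            _ => vec![c],
        })
        .collect()
}

fn build_path_query(path: &str) -> PathQuery {
    let trimmed = path.trim_end_matches('/');
    let last_segment = trimmed.rsplit('/').next().unwrap_or(trimmed);
    let is_root = !trimmed.contains('/');
    let is_file = !path.ends_with('/') && (is_root || last_segment.contains('.'));
    if is_file {
        // Exact match: bind the raw path, since '=' has no metacharacters.
        PathQuery { value: trimmed.to_string(), is_prefix: false }
    } else {
        PathQuery { value: format!("{}/%", escape_like(trimmed)), is_prefix: true }
    }
}

fn main() {
    assert!(!build_path_query("Makefile").is_prefix);            // dotless root -> exact
    assert_eq!(build_path_query("README_v2.md").value, "README_v2.md"); // '_' kept for '='
    assert_eq!(build_path_query("src/a_b/").value, "src/a\\_b/%");      // '_' escaped for LIKE
    println!("ok");
}
```

The key invariant: escaping happens on exactly the branch whose operator interprets metacharacters.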
2) Fix Active mode: your note_count is currently counting participants, and the CTE scans too broadly

Why

- In note_agg, you do SELECT DISTINCT discussion_id, author_username and then COUNT(*) AS note_count. That's a participant count, not a note count.
- The current note_agg also builds the DISTINCT set from all notes and only then joins to picked. That's avoidable work.

Change

Split into two aggregations scoped to picked:

- note_counts: counts non-system notes per picked discussion.
- participants: distinct usernames per picked discussion, then GROUP_CONCAT.

```diff
diff --git a/plan.md b/plan.md
@@
-  note_agg AS (
-    SELECT
-      n.discussion_id,
-      COUNT(*) AS note_count,
-      GROUP_CONCAT(n.author_username, X'1F') AS participants
-    FROM (
-      SELECT DISTINCT discussion_id, author_username
-      FROM notes
-      WHERE is_system = 0 AND author_username IS NOT NULL
-    ) n
-    JOIN picked p ON p.id = n.discussion_id
-    GROUP BY n.discussion_id
-  )
+  note_counts AS (
+    SELECT
+      n.discussion_id,
+      COUNT(*) AS note_count
+    FROM notes n
+    JOIN picked p ON p.id = n.discussion_id
+    WHERE n.is_system = 0
+    GROUP BY n.discussion_id
+  ),
+  participants AS (
+    SELECT
+      x.discussion_id,
+      GROUP_CONCAT(x.author_username, X'1F') AS participants
+    FROM (
+      SELECT DISTINCT n.discussion_id, n.author_username
+      FROM notes n
+      JOIN picked p ON p.id = n.discussion_id
+      WHERE n.is_system = 0 AND n.author_username IS NOT NULL
+    ) x
+    GROUP BY x.discussion_id
+  )
@@
-  LEFT JOIN note_agg na ON na.discussion_id = p.id
+  LEFT JOIN note_counts nc ON nc.discussion_id = p.id
+  LEFT JOIN participants pa ON pa.discussion_id = p.id
@@
-  COALESCE(na.note_count, 0) AS note_count,
-  COALESCE(na.participants, '') AS participants
+  COALESCE(nc.note_count, 0) AS note_count,
+  COALESCE(pa.participants, '') AS participants
```

Net effect: correctness fix plus more predictable perf.

Add a test that would have failed before:

```diff
diff --git a/plan.md b/plan.md
@@
 #[test]
 fn test_active_query() {
@@
     insert_diffnote(&conn, 1, 1, 1, "reviewer_b", "src/foo.rs", "needs work");
+    insert_diffnote(&conn, 2, 1, 1, "reviewer_b", "src/foo.rs", "follow-up");
@@
     assert_eq!(result.discussions[0].participants, vec!["reviewer_b"]);
+    assert_eq!(result.discussions[0].note_count, 2);
```
3) Index fix: idx_discussions_unresolved_recent won't help global --active ordering

Why

- The index is (project_id, last_note_at) with WHERE resolvable = 1 AND resolved = 0.
- When --active is not project-scoped (the common default), SQLite can't use (project_id, last_note_at) to satisfy ORDER BY last_note_at DESC efficiently, because project_id isn't constrained.
- That can turn into a scan plus sort over potentially large unresolved sets.

Change

Keep the project-scoped index, but add a global ordering index (partial, still small):

```diff
diff --git a/plan.md b/plan.md
@@
 CREATE INDEX IF NOT EXISTS idx_discussions_unresolved_recent
   ON discussions(project_id, last_note_at)
   WHERE resolvable = 1 AND resolved = 0;
+
+-- Active (global): unresolved discussions by recency (no project scope).
+-- Supports ORDER BY last_note_at DESC LIMIT N when project_id is unconstrained.
+CREATE INDEX IF NOT EXISTS idx_discussions_unresolved_recent_global
+  ON discussions(last_note_at)
+  WHERE resolvable = 1 AND resolved = 0;
```
4) Make Overlap "touches" coherent: count MRs for reviewers, not DiffNotes

Why

Overlap's question is "Who else has MRs touching my files?", but:

- the reviewer branch uses COUNT(*) (DiffNotes)
- the author branch uses COUNT(DISTINCT m.id) (MRs)

Those are different units; summing them into touch_count is misleading.

Change

Count distinct MRs on the reviewer branch too:

```diff
diff --git a/plan.md b/plan.md
@@
-  COUNT(*) AS touch_count,
+  COUNT(DISTINCT m.id) AS touch_count,
   MAX(n.created_at) AS last_touch_at,
   GROUP_CONCAT(DISTINCT (p.path_with_namespace || '!' || m.iid)) AS mr_refs
```

Also update the human output labeling:

```diff
diff --git a/plan.md b/plan.md
@@
-  style("Touches").bold(),
+  style("MRs").bold(),
```

(You still preserve "strength" via mr_refs and last_touch_at.)
5) Make outputs more actionable: add a canonical ref field (group/project!iid, group/project#iid)

Why

You already do this for Overlap (mr_refs). Doing the same for Workload and Active reduces friction for both humans and agents:

- humans can copy/paste a single token
- robots don't need to stitch together project_path + iid + prefix

Change (Workload structs + SQL)

```diff
diff --git a/plan.md b/plan.md
@@
 pub struct WorkloadIssue {
     pub iid: i64,
+    pub ref_: String,
     pub title: String,
     pub project_path: String,
     pub updated_at: i64,
 }
@@
 pub struct WorkloadMr {
     pub iid: i64,
+    pub ref_: String,
     pub title: String,
     pub draft: bool,
     pub project_path: String,
@@
-    let issues_sql =
-        "SELECT i.iid, i.title, p.path_with_namespace, i.updated_at
+    let issues_sql =
+        "SELECT i.iid,
+                (p.path_with_namespace || '#' || i.iid) AS ref,
+                i.title, p.path_with_namespace, i.updated_at
@@
-        iid: row.get(0)?,
-        title: row.get(1)?,
-        project_path: row.get(2)?,
-        updated_at: row.get(3)?,
+        iid: row.get(0)?,
+        ref_: row.get(1)?,
+        title: row.get(2)?,
+        project_path: row.get(3)?,
+        updated_at: row.get(4)?,
     })
@@
-    let authored_sql =
-        "SELECT m.iid, m.title, m.draft, p.path_with_namespace, m.updated_at
+    let authored_sql =
+        "SELECT m.iid,
+                (p.path_with_namespace || '!' || m.iid) AS ref,
+                m.title, m.draft, p.path_with_namespace, m.updated_at
@@
-        iid: row.get(0)?,
-        title: row.get(1)?,
-        draft: row.get::<_, i32>(2)? != 0,
-        project_path: row.get(3)?,
+        iid: row.get(0)?,
+        ref_: row.get(1)?,
+        title: row.get(2)?,
+        draft: row.get::<_, i32>(3)? != 0,
+        project_path: row.get(4)?,
         author_username: None,
-        updated_at: row.get(4)?,
+        updated_at: row.get(5)?,
     })
```

Then use ref_ in human output and robot JSON.
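The canonical ref format above ('!' for MRs, '#' for issues) is simple enough to centralize in one place. A minimal sketch, with hypothetical helper names not taken from the plan:

```rust
// Hypothetical helpers for the canonical ref format proposed above:
// '!' separates project path and MR iid, '#' separates project path and issue iid.
fn mr_ref(project_path: &str, iid: i64) -> String {
    format!("{}!{}", project_path, iid)
}

fn issue_ref(project_path: &str, iid: i64) -> String {
    format!("{}#{}", project_path, iid)
}

fn main() {
    // Tokens agents can pass straight back to `lore --robot mrs/issues`.
    assert_eq!(mr_ref("group/repo", 456), "group/repo!456");
    assert_eq!(issue_ref("group/repo", 123), "group/repo#123");
    println!("ok");
}
```

Building the ref in SQL (as the diff does) or in Rust is equivalent; the important part is that one format is used everywhere.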
6) Reviews mode: tolerate leading whitespace before **prefix**

Why

Many people write " **suggestion**: ...". The current LIKE '**%**%' misses that.

Change

Use ltrim(n.body) consistently:

```diff
diff --git a/plan.md b/plan.md
@@
-  AND n.body LIKE '**%**%'
+  AND ltrim(n.body) LIKE '**%**%'
@@
-  SUBSTR(n.body, 3, INSTR(SUBSTR(n.body, 3), '**') - 1) AS raw_prefix,
+  SUBSTR(ltrim(n.body), 3, INSTR(SUBSTR(ltrim(n.body), 3), '**') - 1) AS raw_prefix,
```
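The SUBSTR/INSTR extraction above is easier to reason about with a Rust-side mirror: trim leading whitespace, then take the text between the leading `**` and the next `**`. This is an illustrative sketch (the function name is hypothetical), not code from the plan:

```rust
// Rust mirror of the SQL prefix extraction: ltrim, then the text between
// the leading "**" and the next "**". Returns None if the shape doesn't match.
fn extract_bold_prefix(body: &str) -> Option<String> {
    let trimmed = body.trim_start();           // ltrim(n.body)
    let rest = trimmed.strip_prefix("**")?;    // SUBSTR(..., 3)
    let end = rest.find("**")?;                // INSTR(..., '**')
    Some(rest[..end].to_string())
}

fn main() {
    assert_eq!(
        extract_bold_prefix("  **suggestion**: rename this"),
        Some("suggestion".to_string())
    );
    assert_eq!(extract_bold_prefix("plain comment"), None);
    println!("ok");
}
```

Keeping the SQL and any Rust-side parsing in lockstep avoids the two paths disagreeing on edge cases like leading whitespace.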
7) Add two small tests that catch the above regressions

Why

These are exactly the kind of issues that slip through without targeted tests.

```diff
diff --git a/plan.md b/plan.md
@@
 #[test]
 fn test_escape_like() {
@@
 }
+
+#[test]
+fn test_build_path_query_exact_does_not_escape() {
+    // '_' must not be escaped for '='
+    let pq = build_path_query("README_with_underscore.md");
+    assert_eq!(pq.value, "README_with_underscore.md");
+    assert!(!pq.is_prefix);
+}
+
+#[test]
+fn test_path_flag_dotless_root_file_is_exact() {
+    let pq = build_path_query("Makefile");
+    assert_eq!(pq.value, "Makefile");
+    assert!(!pq.is_prefix);
+}
```
Summary of net effect

- Correctness fixes: exact-path escaping bug; Active note_count bug.
- Perf fixes: global --active index; avoiding broad note scans in Active.
- Usefulness upgrades: coherent Overlap "touch" metric; canonical refs everywhere; more robust reviews prefix matching.

If you want one extra "stretch" that still isn't scope creep: add an unscoped warning line in human output when project_id == None (e.g., "Aggregated across projects; use -p to scope") for Expert/Overlap/Active. That's pure presentation, but it prevents misinterpretation in multi-project DBs.
471
docs/who-command-design.feedback-6.md
Normal file
@@ -0,0 +1,471 @@
Proposed revisions (Iteration 6)

Below are the highest-leverage changes I'd make on top of your current Iteration 5 plan, with rationale and git-diff-style edits to the plan text and snippets.
1) Fix a real edge case: dotless non-root files (src/Dockerfile, infra/Makefile, etc.)

Why

Your current build_path_query() treats dotless last segments as directories (prefix match) unless the path is root. That misclassifies legitimate dotless files inside directories and silently produces path/% (zero hits or wrong hits).

Best minimal fix: keep your static SQL approach, but add a DB existence probe (static SQL) for path queries:

- If the user didn't force a directory (trailing /) and the exact path exists in DiffNotes, treat it as exact =.
- Otherwise use prefix LIKE 'dir/%'.

This avoids new CLI flags, avoids heuristic lists, and uses your existing partial index (idx_notes_diffnote_path_created) efficiently.

Diff (note the return sites are wrapped in Ok(...) now that the function returns Result):

```diff
diff --git a/Plan.md b/Plan.md
@@
 struct PathQuery {
     /// The parameter value to bind.
     value: String,
     /// If true: use `LIKE value ESCAPE '\'`. If false: use `= value`.
     is_prefix: bool,
 }

-/// Build a path query from a user-supplied path.
+/// Build a path query from a user-supplied path, with a DB probe for dotless files.
@@
-fn build_path_query(path: &str) -> PathQuery {
+fn build_path_query(conn: &Connection, path: &str) -> Result<PathQuery> {
     let trimmed = path.trim_end_matches('/');
     let last_segment = trimmed.rsplit('/').next().unwrap_or(trimmed);
     let is_root = !trimmed.contains('/');
-    let is_file = !path.ends_with('/') && (is_root || last_segment.contains('.'));
+    let forced_dir = path.ends_with('/');
+    let looks_like_file = !forced_dir && (is_root || last_segment.contains('.'));
+
+    // If it doesn't "look like a file" but the exact path exists in DiffNotes,
+    // treat as exact (handles src/Dockerfile, infra/Makefile, etc.).
+    let exact_exists = if !looks_like_file && !forced_dir {
+        conn.query_row(
+            "SELECT 1
+             FROM notes
+             WHERE note_type = 'DiffNote'
+               AND is_system = 0
+               AND position_new_path = ?1
+             LIMIT 1",
+            rusqlite::params![trimmed],
+            |_| Ok(()),
+        ).is_ok()
+    } else {
+        false
+    };
+
+    let is_file = looks_like_file || exact_exists;

     if is_file {
-        PathQuery {
+        Ok(PathQuery {
             value: trimmed.to_string(),
             is_prefix: false,
-        }
+        })
     } else {
         let escaped = escape_like(trimmed);
-        PathQuery {
+        Ok(PathQuery {
             value: format!("{escaped}/%"),
             is_prefix: true,
-        }
+        })
     }
 }
```

Also update callers:

```diff
@@
-    let pq = build_path_query(path);
+    let pq = build_path_query(conn, path)?;
@@
-    let pq = build_path_query(path);
+    let pq = build_path_query(conn, path)?;
```

And tests:

```diff
@@
 fn test_build_path_query() {
@@
     // Dotless root file -> exact match (root path without '/')
     let pq = build_path_query("Makefile");
     assert_eq!(pq.value, "Makefile");
     assert!(!pq.is_prefix);
+
+    // Dotless file in a subdir should become exact if the DB contains it (probe)
+    // (set up: insert one DiffNote with position_new_path = "src/Dockerfile")
```
2) Make "reviewer" semantics correct: exclude MR authors commenting on their own diffs

Why

Right now, Overlap (and the Expert reviewer branch) will count MR authors as "reviewers" if they leave DiffNotes in their own MR (clarifications, replies), inflating A+R and contaminating "who reviewed here" signals.

You already enforce this in --reviews mode (m.author_username != ?1). Apply the same principle consistently:

- Reviewer branch: only count notes where n.author_username != m.author_username (when both are non-NULL).

Diff (Overlap reviewer branch)

```diff
@@
 WHERE n.note_type = 'DiffNote'
   AND n.position_new_path LIKE ?1 ESCAPE '\\'
   AND n.is_system = 0
   AND n.author_username IS NOT NULL
+  AND (m.author_username IS NULL OR n.author_username != m.author_username)
   AND n.created_at >= ?2
   AND (?3 IS NULL OR n.project_id = ?3)
```

Same change for sql_exact.
3) Expert mode scoring: align units + reduce single-MR "comment storms"

Why

Expert currently mixes units:

- reviewer side: DiffNote count
- author side: distinct MR count

That makes score noisy and can crown "someone who wrote 30 comments on one MR" as the top expert.

Fix: make both sides primarily MR-breadth:

- reviewer: COUNT(DISTINCT m.id) as review_mr_count
- author: COUNT(DISTINCT m.id) as author_mr_count

Optionally keep review_note_count as a secondary intensity signal (but not the main driver).

Diff (types + SQL)

```diff
@@
 pub struct Expert {
     pub username: String,
-    pub score: f64,
-    pub review_count: u32,
-    pub author_count: u32,
+    pub score: i64,
+    pub review_mr_count: u32,
+    pub review_note_count: u32,
+    pub author_mr_count: u32,
     pub last_active_ms: i64,
 }
```

The reviewer branch now joins to the MR so it can count distinct MRs and exclude self-comments:

```diff
@@
-  SELECT
-      n.author_username AS username,
-      'reviewer' AS role,
-      COUNT(*) AS cnt,
-      MAX(n.created_at) AS last_active_at
-  FROM notes n
+  SELECT
+      n.author_username AS username,
+      'reviewer' AS role,
+      COUNT(DISTINCT m.id) AS mr_cnt,
+      COUNT(*) AS note_cnt,
+      MAX(n.created_at) AS last_active_at
+  FROM notes n
+  JOIN discussions d ON n.discussion_id = d.id
+  JOIN merge_requests m ON d.merge_request_id = m.id
   WHERE n.note_type = 'DiffNote'
     AND n.is_system = 0
     AND n.author_username IS NOT NULL
+    AND (m.author_username IS NULL OR n.author_username != m.author_username)
     AND n.position_new_path LIKE ?1 ESCAPE '\\'
     AND n.created_at >= ?2
     AND (?3 IS NULL OR n.project_id = ?3)
   GROUP BY n.author_username
```

Update the author branch payload to match the shape:

```diff
@@
   SELECT
       m.author_username AS username,
       'author' AS role,
-      COUNT(DISTINCT m.id) AS cnt,
+      COUNT(DISTINCT m.id) AS mr_cnt,
+      0 AS note_cnt,
       MAX(n.created_at) AS last_active_at
```

Aggregate:

```diff
@@
   SELECT
       username,
-      SUM(CASE WHEN role = 'reviewer' THEN cnt ELSE 0 END) AS review_count,
-      SUM(CASE WHEN role = 'author' THEN cnt ELSE 0 END) AS author_count,
+      SUM(CASE WHEN role = 'reviewer' THEN mr_cnt ELSE 0 END) AS review_mr_count,
+      SUM(CASE WHEN role = 'reviewer' THEN note_cnt ELSE 0 END) AS review_note_count,
+      SUM(CASE WHEN role = 'author' THEN mr_cnt ELSE 0 END) AS author_mr_count,
       MAX(last_active_at) AS last_active_at,
-      (SUM(CASE WHEN role = 'reviewer' THEN cnt ELSE 0 END) * 3.0) +
-      (SUM(CASE WHEN role = 'author' THEN cnt ELSE 0 END) * 2.0) AS score
+      (
+        (SUM(CASE WHEN role = 'reviewer' THEN mr_cnt ELSE 0 END) * 20) +
+        (SUM(CASE WHEN role = 'author' THEN mr_cnt ELSE 0 END) * 12) +
+        (SUM(CASE WHEN role = 'reviewer' THEN note_cnt ELSE 0 END) * 1)
+      ) AS score
```

Human header:

```diff
@@
-  style("Reviews").bold(),
-  style("Authored").bold(),
+  style("Reviewed(MRs)").bold(),
+  style("Notes").bold(),
+  style("Authored(MRs)").bold(),
```
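The weighting above can be checked in isolation. A minimal sketch of the integer score (the 20/12/1 weights are this proposal's suggestion, not settled values), showing that MR breadth dominates note volume:

```rust
// Sketch of the proposed integer scoring: MR breadth dominates, note
// volume is only a weak tiebreaker. Weights (20/12/1) are illustrative.
fn expert_score(review_mr_count: u32, author_mr_count: u32, review_note_count: u32) -> i64 {
    (review_mr_count as i64) * 20
        + (author_mr_count as i64) * 12
        + (review_note_count as i64) // secondary intensity signal
}

fn main() {
    // Reviewing 3 MRs (with a few notes) outranks a 30-note storm on one MR.
    assert!(expert_score(3, 0, 3) > expert_score(1, 0, 30));
    println!("ok");
}
```

Integer scores also avoid the float-comparison and formatting quirks of the old `f64` score.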
4) Deterministic output: participants + MR refs + tie-breakers

Why

You've correctly focused on reproducibility (resolved_input), but you still have nondeterministic lists:

- participants: GROUP_CONCAT order is undefined → vector order changes run to run.
- mr_refs: you dedup via a HashSet and then iterate → undefined order.
- user sorting in Overlap is missing stable tie-breakers.

This is a real "robot mode flake" source.

Diff (Active participants sort)

```diff
@@
-    let participants: Vec<String> = participants_csv
+    let mut participants: Vec<String> = participants_csv
         .as_deref()
         .filter(|s| !s.is_empty())
         .map(|csv| csv.split('\x1F').map(String::from).collect())
         .unwrap_or_default();
+    participants.sort(); // stable, deterministic
```

Diff (Overlap MR refs sort + stable user sort)

```diff
@@
-    users.sort_by(|a, b| b.touch_count.cmp(&a.touch_count));
+    users.sort_by(|a, b| {
+        b.touch_count.cmp(&a.touch_count)
+            .then_with(|| b.last_touch_at.cmp(&a.last_touch_at))
+            .then_with(|| a.username.cmp(&b.username))
+    });
@@
-    entry.mr_refs = set.into_iter().collect();
+    let mut v: Vec<String> = set.into_iter().collect();
+    v.sort();
+    entry.mr_refs = v;
```
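The tie-breaker chain above is worth seeing end to end. A minimal sketch with a stripped-down `OverlapUser` (fields reduced to the three used for ordering), showing that equal counts and timestamps fall back to username order:

```rust
// Sketch of the stable ordering: touch_count desc, then recency desc,
// then username asc, so equal rows always come out in the same order.
struct OverlapUser {
    username: String,
    touch_count: u32,
    last_touch_at: i64,
}

fn sort_users(users: &mut [OverlapUser]) {
    users.sort_by(|a, b| {
        b.touch_count
            .cmp(&a.touch_count)
            .then_with(|| b.last_touch_at.cmp(&a.last_touch_at))
            .then_with(|| a.username.cmp(&b.username))
    });
}

fn main() {
    let mut users = vec![
        OverlapUser { username: "zed".into(), touch_count: 2, last_touch_at: 10 },
        OverlapUser { username: "amy".into(), touch_count: 2, last_touch_at: 10 },
        OverlapUser { username: "bob".into(), touch_count: 5, last_touch_at: 1 },
    ];
    sort_users(&mut users);
    let names: Vec<&str> = users.iter().map(|u| u.username.as_str()).collect();
    assert_eq!(names, ["bob", "amy", "zed"]); // ties broken by username
    println!("ok");
}
```

`sort_by` is a stable sort, but an explicit full ordering is still needed so the output doesn't depend on insertion order upstream.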
5) Make --limit actionable: surface truncation explicitly (human + robot)

Why

Agents (and humans) need to know if results were cut off so they can rerun with a bigger -n. Right now there's no signal.

Minimal pattern: query limit + 1, set truncated = true if you got more than limit rows, then truncate.

Diff (result types)

```diff
@@
 pub struct ExpertResult {
     pub path_query: String,
     pub experts: Vec<Expert>,
+    pub truncated: bool,
 }
@@
 pub struct ActiveResult {
     pub discussions: Vec<ActiveDiscussion>,
     pub total_unresolved: u32,
+    pub truncated: bool,
 }
@@
 pub struct OverlapResult {
     pub path_query: String,
     pub users: Vec<OverlapUser>,
+    pub truncated: bool,
 }
```

Diff (query pattern example)

```diff
@@
-    let limit_i64 = limit as i64;
+    let limit_plus_one = (limit + 1) as i64;
@@
     LIMIT ?4
@@
-    rusqlite::params![pq.value, since_ms, project_id, limit_i64],
+    rusqlite::params![pq.value, since_ms, project_id, limit_plus_one],
@@
-    Ok(ExpertResult {
+    let truncated = experts.len() > limit;
+    let experts = experts.into_iter().take(limit).collect();
+    Ok(ExpertResult {
         path_query: path.to_string(),
         experts,
+        truncated,
     })
```

Human output hint:

```diff
@@
     if r.experts.is_empty() { ... }
+    if r.truncated {
+        println!("  {}", style("(showing first -n; rerun with a higher --limit)").dim());
+    }
```

Robot output field:

```diff
@@
 fn expert_to_json(r: &ExpertResult) -> serde_json::Value {
     serde_json::json!({
         "path_query": r.path_query,
+        "truncated": r.truncated,
         "experts": ...
     })
 }
```
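The limit+1 pattern generalizes to all three result types, so it can live in one helper. A minimal sketch (helper name is illustrative):

```rust
// Sketch of the limit+1 pattern: the caller queries `LIMIT limit + 1`,
// then this flags truncation and cuts the extra row before returning.
fn apply_limit<T>(mut rows: Vec<T>, limit: usize) -> (Vec<T>, bool) {
    let truncated = rows.len() > limit;
    rows.truncate(limit);
    (rows, truncated)
}

fn main() {
    // 4 rows came back for `-n 3` (queried LIMIT 4) -> truncated.
    let (rows, truncated) = apply_limit(vec![1, 2, 3, 4], 3);
    assert_eq!(rows, vec![1, 2, 3]);
    assert!(truncated);

    // Fewer rows than the limit -> not truncated.
    let (_, t2) = apply_limit(vec![1, 2], 3);
    assert!(!t2);
    println!("ok");
}
```

This costs one extra row per query and avoids a separate `COUNT(*)` round-trip just to detect truncation.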
6) Overlap merge hot loop: avoid repeated HashSet rebuild per row

Why

This line is expensive in a UNION result with many rows:

```rust
let mut set: HashSet<String> = entry.mr_refs.drain(..).collect();
```

It reallocates and rehashes every time.

Fix: keep a HashSet in an accumulator during the merge, and convert once at the end.

Diff (internal accumulator)

```diff
@@
-    let mut user_map: HashMap<String, OverlapUser> = HashMap::new();
+    struct OverlapAcc {
+        username: String,
+        author_touch_count: u32,
+        review_touch_count: u32,
+        touch_count: u32,
+        last_touch_at: i64,
+        mr_refs: HashSet<String>,
+    }
+    let mut user_map: HashMap<String, OverlapAcc> = HashMap::new();
@@
-    let entry = user_map.entry(username.clone()).or_insert_with(|| OverlapUser {
+    let entry = user_map.entry(username.clone()).or_insert_with(|| OverlapAcc {
         username: username.clone(),
         author_touch_count: 0,
         review_touch_count: 0,
         touch_count: 0,
         last_touch_at: 0,
-        mr_refs: Vec::new(),
+        mr_refs: HashSet::new(),
     });
@@
-    let mut set: HashSet<String> = entry.mr_refs.drain(..).collect();
-    for r in mr_refs { set.insert(r); }
-    entry.mr_refs = set.into_iter().collect();
+    for r in mr_refs { entry.mr_refs.insert(r); }
@@
-    let mut users: Vec<OverlapUser> = user_map.into_values().collect();
+    let mut users: Vec<OverlapUser> = user_map.into_values().map(|a| {
+        let mut mr_refs: Vec<String> = a.mr_refs.into_iter().collect();
+        mr_refs.sort();
+        OverlapUser {
+            username: a.username,
+            author_touch_count: a.author_touch_count,
+            review_touch_count: a.review_touch_count,
+            touch_count: a.touch_count,
+            last_touch_at: a.last_touch_at,
+            mr_refs,
+        }
+    }).collect();
```
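The accumulate-then-convert shape can be shown compactly. A minimal sketch reduced to just the mr_refs dedup (the other counters follow the same pattern); the helper name and tuple input are illustrative:

```rust
use std::collections::{HashMap, HashSet};

// Sketch of the accumulator pattern: dedup with a HashSet while merging
// rows, then convert each set to a sorted Vec exactly once at the end.
fn merge_refs(rows: &[(&str, &str)]) -> HashMap<String, Vec<String>> {
    let mut acc: HashMap<String, HashSet<String>> = HashMap::new();
    for (user, mr_ref) in rows {
        // O(1) insert per row; no per-row rehash of the whole set.
        acc.entry((*user).to_string()).or_default().insert((*mr_ref).to_string());
    }
    acc.into_iter()
        .map(|(user, set)| {
            let mut refs: Vec<String> = set.into_iter().collect();
            refs.sort(); // deterministic output order
            (user, refs)
        })
        .collect()
}

fn main() {
    let merged = merge_refs(&[("amy", "g/r!2"), ("amy", "g/r!1"), ("amy", "g/r!2")]);
    assert_eq!(merged["amy"], vec!["g/r!1", "g/r!2"]); // deduped + sorted
    println!("ok");
}
```

This also bakes in the determinism fix from change 4: the sort happens once, at the conversion boundary.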
7) Tests to lock these behaviors

Add tests (high value):

- a dotless subdir file uses the DB probe → exact match
- self-review exclusion keeps the MR author from showing up as a reviewer
- deterministic ordering for participants and mr_refs (sort)

Diff (test additions outline)

```diff
@@
 #[test]
+fn test_build_path_query_dotless_subdir_file_uses_probe() {
+    let conn = setup_test_db();
+    insert_project(&conn, 1, "team/backend");
+    insert_mr(&conn, 1, 1, 100, "author_a", "opened");
+    insert_discussion(&conn, 1, 1, Some(1), None, true, false);
+    insert_diffnote(&conn, 1, 1, 1, "reviewer_b", "src/Dockerfile", "note");
+
+    let pq = build_path_query(&conn, "src/Dockerfile").unwrap();
+    assert_eq!(pq.value, "src/Dockerfile");
+    assert!(!pq.is_prefix);
+}
+
+#[test]
+fn test_overlap_excludes_self_review_notes() {
+    let conn = setup_test_db();
+    insert_project(&conn, 1, "team/backend");
+    insert_mr(&conn, 1, 1, 100, "author_a", "opened");
+    insert_discussion(&conn, 1, 1, Some(1), None, true, false);
+    // author_a comments on their own MR diff
+    insert_diffnote(&conn, 1, 1, 1, "author_a", "src/auth/login.rs", "clarification");
+
+    let result = query_overlap(&conn, "src/auth/", None, 0, 20).unwrap();
+    let u = result.users.iter().find(|u| u.username == "author_a");
+    // should not be credited as a reviewer touch
+    assert!(u.map(|x| x.review_touch_count).unwrap_or(0) == 0);
+}
```
Net effect

- Correctness: fixes dotless subdir files and self-review pollution.
- Signal quality: Expert ranking becomes harder to game via comment volume.
- Robot reproducibility: deterministic ordering plus explicit truncation.
- Performance: avoids rehash loops in Overlap merges; the path probe uses indexed equality.

If you want one "single best" change: #1 (DB probe exact-match) is the most likely to prevent confusing "why is this empty?" behavior without adding any user-facing complexity.
353
docs/who-command-design.feedback-7.md
Normal file
@@ -0,0 +1,353 @@
Below are the highest-leverage revisions I’d make to iteration 6 to improve correctness (multi-project edge cases), robot-mode reliability (bounded payloads + truncation), and signal quality—without changing the fundamental scope (still pure SQL over existing tables).

## 1) Make `build_path_query` project-aware, with a two-way probe (exact and prefix)

**Why**

Your DB probe currently answers: “does this exact file exist anywhere in DiffNotes?” That can misclassify in a project-scoped run:

- The path exists as a dotless file in Project A → the probe returns true.
- The user runs `-p` against Project B, where the path is a directory (or a different shape) → you switch to exact matching, return empty, and miss valid prefix hits.

There is also a minor heuristic fragility for dot directories when the user omits the trailing `/` (e.g., `.github/workflows`): the last segment contains a dot, so you treat it as a file unless a directory is forced.

**Revision**

- Thread `project_id` into `build_path_query(conn, path, project_id)`.
- Probe exact first (scoped), then probe prefix (scoped).
- Only fall back to heuristics if both probes fail.

This keeps “static SQL, no dynamic assembly,” and costs at most two indexed existence queries per invocation.
```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
-fn build_path_query(conn: &Connection, path: &str) -> Result<PathQuery> {
+fn build_path_query(conn: &Connection, path: &str, project_id: Option<i64>) -> Result<PathQuery> {
     let trimmed = path.trim_end_matches('/');
     let last_segment = trimmed.rsplit('/').next().unwrap_or(trimmed);
     let is_root = !trimmed.contains('/');
     let forced_dir = path.ends_with('/');
-    let looks_like_file = !forced_dir && (is_root || last_segment.contains('.'));
+    // Heuristic is now only a fallback; probes decide first.
+    let looks_like_file = !forced_dir && (is_root || last_segment.contains('.'));

-    let exact_exists = if !looks_like_file && !forced_dir {
-        conn.query_row(
-            "SELECT 1 FROM notes
-             WHERE note_type = 'DiffNote'
-               AND is_system = 0
-               AND position_new_path = ?1
-             LIMIT 1",
-            rusqlite::params![trimmed],
-            |_| Ok(()),
-        )
-        .is_ok()
-    } else {
-        false
-    };
+    // Probe 1: exact file exists (scoped)
+    let exact_exists = conn.query_row(
+        "SELECT 1 FROM notes
+         WHERE note_type = 'DiffNote'
+           AND is_system = 0
+           AND position_new_path = ?1
+           AND (?2 IS NULL OR project_id = ?2)
+         LIMIT 1",
+        rusqlite::params![trimmed, project_id],
+        |_| Ok(()),
+    ).is_ok();
+
+    // Probe 2: directory prefix exists (scoped)
+    let prefix_exists = if !forced_dir {
+        let escaped = escape_like(trimmed);
+        let pat = format!("{escaped}/%");
+        conn.query_row(
+            "SELECT 1 FROM notes
+             WHERE note_type = 'DiffNote'
+               AND is_system = 0
+               AND position_new_path LIKE ?1 ESCAPE '\\'
+               AND (?2 IS NULL OR project_id = ?2)
+             LIMIT 1",
+            rusqlite::params![pat, project_id],
+            |_| Ok(()),
+        ).is_ok()
+    } else { false };

-    let is_file = looks_like_file || exact_exists;
+    // Forced directory always wins; otherwise: exact > prefix > heuristic
+    let is_file = if forced_dir { false }
+        else if exact_exists { true }
+        else if prefix_exists { false }
+        else { looks_like_file };

     if is_file {
         Ok(PathQuery { value: trimmed.to_string(), is_prefix: false })
     } else {
         let escaped = escape_like(trimmed);
         Ok(PathQuery { value: format!("{escaped}/%"), is_prefix: true })
     }
 }
@@
-    let pq = build_path_query(conn, path)?;
+    let pq = build_path_query(conn, path, project_id)?;
```
Add test coverage for the multi-project misclassification case:

```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
 #[test]
 fn test_build_path_query_dotless_subdir_file_uses_db_probe() {
@@
-    let pq = build_path_query(&conn, "src/Dockerfile").unwrap();
+    let pq = build_path_query(&conn, "src/Dockerfile", None).unwrap();
@@
-    let pq2 = build_path_query(&conn2, "src/Dockerfile").unwrap();
+    let pq2 = build_path_query(&conn2, "src/Dockerfile", None).unwrap();
 }
+
+#[test]
+fn test_build_path_query_probe_is_project_scoped() {
+    // Path exists as a dotless file in project 1; project 2 should not
+    // treat it as an exact file unless it exists there too.
+    let conn = setup_test_db();
+    insert_project(&conn, 1, "team/a");
+    insert_project(&conn, 2, "team/b");
+    insert_mr(&conn, 1, 1, 10, "author_a", "opened");
+    insert_discussion(&conn, 1, 1, Some(1), None, true, false);
+    insert_diffnote(&conn, 1, 1, 1, "rev", "infra/Makefile", "note");
+
+    let pq_scoped = build_path_query(&conn, "infra/Makefile", Some(2)).unwrap();
+    assert!(pq_scoped.is_prefix); // should fall back to prefix in project 2
+}
```
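The prefix probe binds a pattern built with `escape_like`, a helper the plan references but never defines. A minimal sketch of what such a helper could look like, assuming `'\'` is the `ESCAPE` character (this body is an assumption, not taken from the plan):

```rust
/// Escape SQL LIKE metacharacters so user-supplied paths match literally.
/// Assumed implementation of the `escape_like` helper the plan references:
/// prefixes '\', '%', and '_' with the '\' escape character.
fn escape_like(input: &str) -> String {
    let mut out = String::with_capacity(input.len());
    for c in input.chars() {
        if matches!(c, '\\' | '%' | '_') {
            out.push('\\'); // escape the metacharacter
        }
        out.push(c);
    }
    out
}
```

Note that `'\'` must itself be escaped, otherwise a literal backslash in the input would be interpreted as an escape prefix by SQLite.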
## 2) Bound robot payload sizes for `participants` and `mr_refs` (with totals + truncation)

**Why**

`mr_refs` and `participants` can become unbounded arrays in robot mode, which is a real operational hazard:

- Huge JSON → slow output, noisy diffs, brittle downstream pipelines.
- Potential SQLite `group_concat` truncation becomes invisible (and you can’t distinguish “no refs” from “refs truncated”).

**Revision**

Introduce hard caps and explicit metadata:

- `participants_total`, `participants_truncated`
- `mr_refs_total`, `mr_refs_truncated`

This is not scope creep—it’s defensive output hygiene.
```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
 pub struct ActiveDiscussion {
@@
     pub participants: Vec<String>,
+    pub participants_total: u32,
+    pub participants_truncated: bool,
 }
@@
 pub struct OverlapUser {
@@
     pub mr_refs: Vec<String>,
+    pub mr_refs_total: u32,
+    pub mr_refs_truncated: bool,
 }
```
Implementation sketch (Rust-side, deterministic):

```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
 fn query_active(...) -> Result<ActiveResult> {
+    const MAX_PARTICIPANTS: usize = 50;
@@
-    participants.sort();
+    participants.sort();
+    let participants_total = participants.len() as u32;
+    let participants_truncated = participants.len() > MAX_PARTICIPANTS;
+    if participants_truncated {
+        participants.truncate(MAX_PARTICIPANTS);
+    }
@@
     Ok(ActiveDiscussion {
@@
         participants,
+        participants_total,
+        participants_truncated,
     })
@@
 fn query_overlap(...) -> Result<OverlapResult> {
+    const MAX_MR_REFS_PER_USER: usize = 50;
@@
     .map(|a| {
         let mut mr_refs: Vec<String> = a.mr_refs.into_iter().collect();
         mr_refs.sort();
+        let mr_refs_total = mr_refs.len() as u32;
+        let mr_refs_truncated = mr_refs.len() > MAX_MR_REFS_PER_USER;
+        if mr_refs_truncated {
+            mr_refs.truncate(MAX_MR_REFS_PER_USER);
+        }
         OverlapUser {
@@
             mr_refs,
+            mr_refs_total,
+            mr_refs_truncated,
         }
     })
```
Update robot JSON accordingly:

```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
 fn active_to_json(r: &ActiveResult) -> serde_json::Value {
@@
             "participants": d.participants,
+            "participants_total": d.participants_total,
+            "participants_truncated": d.participants_truncated,
         }))
@@
 fn overlap_to_json(r: &OverlapResult) -> serde_json::Value {
@@
             "mr_refs": u.mr_refs,
+            "mr_refs_total": u.mr_refs_total,
+            "mr_refs_truncated": u.mr_refs_truncated,
         }))
```

Also update the robot-docs manifest schema snippet for `who.active.discussions[]` and `who.overlap.users[]`.
## 3) Add truncation metadata to Workload sections (same LIMIT+1 pattern)

**Why**

Workload is the mode most likely to be consumed by agents, and right now it truncates silently (each section is `LIMIT N` with no signal). Your plan already treats truncation as a first-class contract elsewhere; Workload should match.

**Revision**

For each workload query:

- request `LIMIT + 1`
- set `*_truncated` booleans
- trim to the requested limit
```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
 pub struct WorkloadResult {
     pub username: String,
     pub assigned_issues: Vec<WorkloadIssue>,
     pub authored_mrs: Vec<WorkloadMr>,
     pub reviewing_mrs: Vec<WorkloadMr>,
     pub unresolved_discussions: Vec<WorkloadDiscussion>,
+    pub assigned_issues_truncated: bool,
+    pub authored_mrs_truncated: bool,
+    pub reviewing_mrs_truncated: bool,
+    pub unresolved_discussions_truncated: bool,
 }
```

And in the JSON, include the booleans (you already have `summary.counts`).

This is mechanically repetitive but extremely valuable for automation.
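The LIMIT+1 pattern can be factored into one tiny helper so each Workload section stays mechanical. A sketch, assuming each section query already fetches `limit + 1` rows (the helper name is illustrative, not from the plan):

```rust
/// LIMIT+1 truncation detection: the caller queries `limit + 1` rows;
/// this flags whether the extra row appeared and trims the result back
/// to the requested size. Works for any row type.
fn apply_limit_plus_one<T>(mut rows: Vec<T>, limit: usize) -> (Vec<T>, bool) {
    let truncated = rows.len() > limit;
    if truncated {
        rows.truncate(limit); // keep exactly `limit` rows for output
    }
    (rows, truncated)
}
```

Each `*_truncated` boolean is then just the second element of the returned tuple.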
## 4) Rename “Last Active” → “Last Seen” for Expert/Overlap

**Why**

For “author” rows, the timestamp is derived from review activity on their MR (via `MAX(n.created_at)`), not necessarily from that person’s own action. Calling that “active” is semantically misleading. “Last seen” is accurate across both the reviewer and author branches.
```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
 pub struct Expert {
@@
-    pub last_active_ms: i64,
+    pub last_seen_ms: i64,
 }
@@
 pub struct OverlapUser {
@@
-    pub last_touch_at: i64,
+    pub last_seen_at: i64,
@@
 fn print_expert_human(...) {
@@
-    style("Last Active").bold(),
+    style("Last Seen").bold(),
@@
-    style(format_relative_time(expert.last_active_ms)).dim(),
+    style(format_relative_time(expert.last_seen_ms)).dim(),
```

(Keep internal SQL aliases consistent: `last_seen_at` everywhere.)
## 5) Make MR state filtering consistent in Expert/Overlap reviewer branches

**Why**

You already restrict the Overlap author branch to `opened|merged`, but the reviewer branches can include closed/unmerged noise. Consistency improves signal quality and can reduce scan churn.

Low-risk revision: apply the same state filter to the reviewer branches (Expert + Overlap). You can keep “closed” excluded by default without adding new flags.
```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
 WHERE n.note_type = 'DiffNote'
   AND n.is_system = 0
@@
-  AND n.created_at >= ?2
+  AND m.state IN ('opened','merged')
+  AND n.created_at >= ?2
```

This is a semantic choice; if you later want archaeology across closed/unmerged MRs, that belongs in a separate mode/flag, but I would not add it now.
## 6) Add a design principle for bounded outputs (aligns with robot-first reproducibility)

```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
 10. **Truncation transparency.** Result types carry a `truncated: bool` flag...
+11. **Bounded payloads.** Robot JSON must never emit unbounded arrays (participants, refs).
+    Large list fields are capped with `*_total` + `*_truncated` so agents can page/retry.
```
## Consolidated plan metadata bump (Iteration 7)

```diff
diff --git a/who-command-design.md b/who-command-design.md
@@
-iteration: 6
+iteration: 7
 updated: 2026-02-07
```

**Net effect (what you get)**

- Correct path classification under `-p` scoping (no cross-project probe leakage).
- Deterministic and bounded robot payloads (no giant JSON surprises).
- A uniform truncation contract across all modes (Workload no longer truncates silently).
- Clearer semantics (“Last Seen” avoids misinterpretation).
- Cleaner signals (reviewer branches ignore closed/unmerged by default).

If you want, I can also produce a second diff that updates the robot-docs schema block and the Verification EXPLAIN expectations to reflect the new probe queries and the state filter.
`docs/who-command-design.feedback-8.md` (new file, +552 lines)
@@ -0,0 +1,552 @@
Below are the highest-leverage revisions I’d make for iteration 8, staying within your MVP constraints (static SQL, no scope creep into new data sources), but tightening correctness, index-utilization predictability, debuggability, and output safety.

## 1) Fix the semantic bug in `since_was_default` (Workload mode) by introducing `since_mode`

**Why this is better**

Right now `since_was_default = args.since.is_none()` is misleading for Workload, because Workload has no default window (it is “unbounded unless explicitly filtered”). In robot mode, this produces incorrect intent replay and ambiguity.

Replace the boolean with a tri-state:

`since_mode: "default" | "explicit" | "none"`

Keep `since_was_default` only if you want backward compatibility, but compute it as `since_mode == "default"`.

**Patch**
```diff
diff --git a/who-command-design.md b/who-command-design.md
--- a/who-command-design.md
+++ b/who-command-design.md
@@
-5. **Robot-first reproducibility.** Robot JSON output includes both a raw `input` object (echoing CLI args) and a `resolved_input` object (computed `since_ms`, `since_iso`, `since_was_default`, resolved `project_id` + `project_path`, effective `mode`, `limit`) so agents can trace exactly what ran and reproduce it precisely.
+5. **Robot-first reproducibility.** Robot JSON output includes both a raw `input` object (echoing CLI args) and a `resolved_input` object (computed `since_ms`, `since_iso`, `since_mode`, resolved `project_id` + `project_path`, effective `mode`, `limit`) so agents can trace exactly what ran and reproduce it precisely.
@@
 pub struct WhoResolvedInput {
     pub mode: String,
     pub project_id: Option<i64>,
     pub project_path: Option<String>,
     pub since_ms: Option<i64>,
     pub since_iso: Option<String>,
-    pub since_was_default: bool,
+    /// "default" (mode default applied), "explicit" (user provided --since), "none" (no window)
+    pub since_mode: String,
     pub limit: usize,
 }
@@
-    let since_was_default = args.since.is_none();
+    // since_mode semantics:
+    //   - expert/reviews/active/overlap: default window applies if args.since is None
+    //   - workload: no default window; args.since None => "none"
+    let since_mode_for_defaulted = if args.since.is_some() { "explicit" } else { "default" };
+    let since_mode_for_workload = if args.since.is_some() { "explicit" } else { "none" };
@@
 WhoMode::Expert { path } => {
     let since_ms = resolve_since(args.since.as_deref(), "6m")?;
     let result = query_expert(&conn, path, project_id, since_ms, args.limit)?;
     Ok(WhoRun {
         resolved_input: WhoResolvedInput {
             mode: "expert".to_string(),
             project_id,
             project_path,
             since_ms: Some(since_ms),
             since_iso: Some(ms_to_iso(since_ms)),
-            since_was_default,
+            since_mode: since_mode_for_defaulted.to_string(),
             limit: args.limit,
         },
         result: WhoResult::Expert(result),
     })
 }
@@
 WhoMode::Workload { username } => {
     let since_ms = args
         .since
         .as_deref()
         .map(|s| resolve_since_required(s))
         .transpose()?;
     let result = query_workload(&conn, username, project_id, since_ms, args.limit)?;
     Ok(WhoRun {
         resolved_input: WhoResolvedInput {
             mode: "workload".to_string(),
             project_id,
             project_path,
             since_ms,
             since_iso: since_ms.map(ms_to_iso),
-            since_was_default,
+            since_mode: since_mode_for_workload.to_string(),
             limit: args.limit,
         },
         result: WhoResult::Workload(result),
     })
 }
@@
 fn print_who_json(run: &WhoRun, args: &WhoArgs, elapsed_ms: u64) {
@@
     let resolved_input = serde_json::json!({
         "mode": run.resolved_input.mode,
         "project_id": run.resolved_input.project_id,
         "project_path": run.resolved_input.project_path,
         "since_ms": run.resolved_input.since_ms,
         "since_iso": run.resolved_input.since_iso,
-        "since_was_default": run.resolved_input.since_was_default,
+        "since_mode": run.resolved_input.since_mode,
         "limit": run.resolved_input.limit,
     });
 }
```
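The tri-state resolution collapses to a single match over two inputs. A sketch, where `mode_has_default_window` would be true for expert/reviews/active/overlap and false for workload (the function name and parameter are illustrative, not from the plan):

```rust
/// Resolve the tri-state since_mode:
///   "explicit" — the user passed --since
///   "default"  — no --since, but the mode applies a default window
///   "none"     — no --since and the mode is unbounded (workload)
fn resolve_since_mode(user_since: Option<&str>, mode_has_default_window: bool) -> &'static str {
    match (user_since, mode_has_default_window) {
        (Some(_), _) => "explicit",
        (None, true) => "default",
        (None, false) => "none",
    }
}
```

The exhaustive match makes it impossible to mislabel Workload's unbounded case as "default", which is exactly the bug the boolean invited.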
## 2) Stop using nullable-OR (`(? IS NULL OR col = ?)`) where it determines the “right” index (Active is the big one)

**Why this is better**

Your global vs project-scoped Active indexes are correct, but the nullable binding pattern undermines them because SQLite’s planner can’t assume whether `?2` is NULL at prepare time. The result: it can pick a “good enough for both” plan, which is often the wrong one for `-p`.

Fix: keep the SQL static, but use two static statements selected at runtime (as you already do for exact vs prefix path matching).

**Patch**
```diff
diff --git a/who-command-design.md b/who-command-design.md
--- a/who-command-design.md
+++ b/who-command-design.md
@@
 fn query_active(
     conn: &Connection,
     project_id: Option<i64>,
     since_ms: i64,
     limit: usize,
 ) -> Result<ActiveResult> {
     let limit_plus_one = (limit + 1) as i64;

-    // Total unresolved count
-    let total_sql =
-        "SELECT COUNT(*) FROM discussions d
-         WHERE d.resolvable = 1 AND d.resolved = 0
-           AND d.last_note_at >= ?1
-           AND (?2 IS NULL OR d.project_id = ?2)";
+    // Total unresolved count (two static variants to avoid nullable-OR planner ambiguity)
+    let total_sql_global =
+        "SELECT COUNT(*) FROM discussions d
+         WHERE d.resolvable = 1 AND d.resolved = 0
+           AND d.last_note_at >= ?1";
+    let total_sql_scoped =
+        "SELECT COUNT(*) FROM discussions d
+         WHERE d.resolvable = 1 AND d.resolved = 0
+           AND d.last_note_at >= ?1
+           AND d.project_id = ?2";

-    let total_unresolved: u32 =
-        conn.query_row(total_sql, rusqlite::params![since_ms, project_id], |row| row.get(0))?;
+    let total_unresolved: u32 = match project_id {
+        None => conn.query_row(total_sql_global, rusqlite::params![since_ms], |row| row.get(0))?,
+        Some(pid) => conn.query_row(total_sql_scoped, rusqlite::params![since_ms, pid], |row| row.get(0))?,
+    };

-    let sql = "
+    let sql_global = "
         WITH picked AS (
             SELECT d.id, d.noteable_type, d.issue_id, d.merge_request_id,
                    d.project_id, d.last_note_at
             FROM discussions d
             WHERE d.resolvable = 1 AND d.resolved = 0
               AND d.last_note_at >= ?1
-              AND (?2 IS NULL OR d.project_id = ?2)
             ORDER BY d.last_note_at DESC
             LIMIT ?2
         ),
@@
         ORDER BY p.last_note_at DESC
     ";

-    let mut stmt = conn.prepare_cached(sql)?;
-    let discussions: Vec<ActiveDiscussion> = stmt
-        .query_map(rusqlite::params![since_ms, project_id, limit_plus_one], |row| {
+    let sql_scoped = "
+        WITH picked AS (
+            SELECT d.id, d.noteable_type, d.issue_id, d.merge_request_id,
+                   d.project_id, d.last_note_at
+            FROM discussions d
+            WHERE d.resolvable = 1 AND d.resolved = 0
+              AND d.last_note_at >= ?1
+              AND d.project_id = ?2
+            ORDER BY d.last_note_at DESC
+            LIMIT ?3
+        ),
+        note_counts AS (
+            SELECT n.discussion_id, COUNT(*) AS note_count
+            FROM notes n
+            JOIN picked p ON p.id = n.discussion_id
+            WHERE n.is_system = 0
+            GROUP BY n.discussion_id
+        ),
+        participants AS (
+            SELECT x.discussion_id, GROUP_CONCAT(x.author_username, X'1F') AS participants
+            FROM (
+                SELECT DISTINCT n.discussion_id, n.author_username
+                FROM notes n
+                JOIN picked p ON p.id = n.discussion_id
+                WHERE n.is_system = 0 AND n.author_username IS NOT NULL
+            ) x
+            GROUP BY x.discussion_id
+        )
+        SELECT
+            p.id AS discussion_id,
+            p.noteable_type,
+            COALESCE(i.iid, m.iid) AS entity_iid,
+            COALESCE(i.title, m.title) AS entity_title,
+            proj.path_with_namespace,
+            p.last_note_at,
+            COALESCE(nc.note_count, 0) AS note_count,
+            COALESCE(pa.participants, '') AS participants
+        FROM picked p
+        JOIN projects proj ON p.project_id = proj.id
+        LEFT JOIN issues i ON p.issue_id = i.id
+        LEFT JOIN merge_requests m ON p.merge_request_id = m.id
+        LEFT JOIN note_counts nc ON nc.discussion_id = p.id
+        LEFT JOIN participants pa ON pa.discussion_id = p.id
+        ORDER BY p.last_note_at DESC
+    ";
+
+    let discussions: Vec<ActiveDiscussion> = match project_id {
+        None => {
+            let mut stmt = conn.prepare_cached(sql_global)?;
+            stmt.query_map(rusqlite::params![since_ms, limit_plus_one], |row| {
+                /* unchanged row mapping */
+            })?.collect::<std::result::Result<Vec<_>, _>>()?
+        }
+        Some(pid) => {
+            let mut stmt = conn.prepare_cached(sql_scoped)?;
+            stmt.query_map(rusqlite::params![since_ms, pid, limit_plus_one], |row| {
+                /* unchanged row mapping */
+            })?.collect::<std::result::Result<Vec<_>, _>>()?
+        }
+    };
```
Also update Verification to explicitly check both variants:

```diff
@@
 # Performance verification (required before merge):
@@
 sqlite3 path/to/db.sqlite "
   EXPLAIN QUERY PLAN
   SELECT d.id, d.last_note_at
   FROM discussions d
   WHERE d.resolvable = 1 AND d.resolved = 0
     AND d.last_note_at >= 0
   ORDER BY d.last_note_at DESC
   LIMIT 20;
 "
 # Expected: SEARCH discussions USING INDEX idx_discussions_unresolved_recent_global
+
+sqlite3 path/to/db.sqlite "
+  EXPLAIN QUERY PLAN
+  SELECT d.id, d.last_note_at
+  FROM discussions d
+  WHERE d.resolvable = 1 AND d.resolved = 0
+    AND d.project_id = 1
+    AND d.last_note_at >= 0
+  ORDER BY d.last_note_at DESC
+  LIMIT 20;
+"
+# Expected: SEARCH discussions USING INDEX idx_discussions_unresolved_recent
```
## 3) Add repo-path normalization (eliminate trivial “no results” footguns)

**Why this is better**

People paste:

- `./src/foo/`
- `/src/foo/`
- `src\foo\bar.rs` (Windows)

These currently lead to silent misses.

Normalize only user input (not DB content):

- trim whitespace
- strip leading `./` and `/`
- convert `\` → `/` when present
- collapse repeated `//`
**Patch**

```diff
diff --git a/who-command-design.md b/who-command-design.md
--- a/who-command-design.md
+++ b/who-command-design.md
@@
 fn resolve_mode<'a>(args: &'a WhoArgs) -> Result<WhoMode<'a>> {
@@
-    if let Some(p) = &args.path {
-        return Ok(WhoMode::Expert { path: p });
+    if let Some(p) = &args.path {
+        let norm = normalize_repo_path(p);
+        return Ok(WhoMode::Expert { path: Box::leak(norm.into_boxed_str()) });
     }
@@
-    if let Some(path) = &args.overlap {
-        return Ok(WhoMode::Overlap { path });
+    if let Some(path) = &args.overlap {
+        let norm = normalize_repo_path(path);
+        return Ok(WhoMode::Overlap { path: Box::leak(norm.into_boxed_str()) });
     }
@@
-    if target.contains('/') {
-        return Ok(WhoMode::Expert { path: target });
+    if target.contains('/') {
+        let norm = normalize_repo_path(target);
+        return Ok(WhoMode::Expert { path: Box::leak(norm.into_boxed_str()) });
     }
@@
 }
+
+/// Normalize user-supplied repo paths to match stored DiffNote paths.
+/// - trims whitespace
+/// - strips leading "./" and "/" (repo-relative)
+/// - converts '\' to '/' (Windows paste)
+/// - collapses repeated slashes
+fn normalize_repo_path(input: &str) -> String {
+    let mut s = input.trim().to_string();
+    if s.contains('\\') && !s.contains('/') {
+        s = s.replace('\\', "/");
+    }
+    while s.starts_with("./") {
+        s = s.trim_start_matches("./").to_string();
+    }
+    while s.starts_with('/') {
+        s = s.trim_start_matches('/').to_string();
+    }
+    while s.contains("//") {
+        s = s.replace("//", "/");
+    }
+    s
+}
```

(Add a small test block for normalization; even 2–3 asserts catch regressions.)
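A runnable version of that suggested test block, with the `normalize_repo_path` body reproduced from the patch so the asserts are self-contained:

```rust
/// normalize_repo_path as proposed in the patch, copied here so the
/// regression asserts below stand alone.
fn normalize_repo_path(input: &str) -> String {
    let mut s = input.trim().to_string();
    if s.contains('\\') && !s.contains('/') {
        s = s.replace('\\', "/"); // Windows-style paste
    }
    while s.starts_with("./") {
        s = s.trim_start_matches("./").to_string(); // repo-relative prefix
    }
    while s.starts_with('/') {
        s = s.trim_start_matches('/').to_string(); // absolute-looking prefix
    }
    while s.contains("//") {
        s = s.replace("//", "/"); // collapse doubled slashes
    }
    s
}
```

Note that the exact-match probe then sees the same shape the DB stores, so normalization must run before `build_path_query`, never on DB content.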
## 4) Make path matching observable: include `path_match` (exact vs prefix) in results/JSON

**Why this is better**

You’ve made path classification smarter (heuristics plus a two-way probe). That’s great, but without visibility you’ll get “why did it treat this as a directory?” confusion. Exposing match metadata is low cost and hugely helps debugging.
**Patch**

```diff
diff --git a/who-command-design.md b/who-command-design.md
--- a/who-command-design.md
+++ b/who-command-design.md
@@
-struct PathQuery {
-    /// The parameter value to bind.
-    value: String,
-    /// If true: use `LIKE value ESCAPE '\'`. If false: use `= value`.
-    is_prefix: bool,
-}
+struct PathQuery {
+    /// User input after normalization (no trailing-slash stripping yet).
+    input: String,
+    /// Trimmed path without trailing '/' used for exact/prefix construction.
+    normalized: String,
+    /// The SQL parameter bound to the statement (`foo/bar` or `foo/bar/%`).
+    sql_value: String,
+    /// If true: use `LIKE sql_value ESCAPE '\'`. If false: use `= normalized`.
+    is_prefix: bool,
+}
@@
-    let trimmed = path.trim_end_matches('/');
+    let input = normalize_repo_path(path);
+    let trimmed = input.trim_end_matches('/').to_string();
@@
-    Ok(PathQuery {
-        value: trimmed.to_string(),
-        is_prefix: false,
-    })
+    Ok(PathQuery { input, normalized: trimmed.clone(), sql_value: trimmed, is_prefix: false })
 } else {
-    Ok(PathQuery {
-        value: format!("{escaped}/%"),
-        is_prefix: true,
-    })
+    Ok(PathQuery { input, normalized: trimmed.clone(), sql_value: format!("{escaped}/%"), is_prefix: true })
 }
@@
 pub struct ExpertResult {
     pub path_query: String,
+    pub path_match: String, // "exact" or "prefix"
     pub experts: Vec<Expert>,
     pub truncated: bool,
 }
@@
 pub struct OverlapResult {
     pub path_query: String,
+    pub path_match: String, // "exact" or "prefix"
     pub users: Vec<OverlapUser>,
     pub truncated: bool,
 }
@@
 fn query_expert(...) -> Result<ExpertResult> {
     let pq = build_path_query(conn, path, project_id)?;
@@
     Ok(ExpertResult {
         path_query: path.to_string(),
+        path_match: if pq.is_prefix { "prefix".to_string() } else { "exact".to_string() },
         experts,
         truncated,
     })
 }
@@
 fn query_overlap(...) -> Result<OverlapResult> {
     let pq = build_path_query(conn, path, project_id)?;
@@
     Ok(OverlapResult {
         path_query: path.to_string(),
+        path_match: if pq.is_prefix { "prefix".to_string() } else { "exact".to_string() },
         users,
         truncated,
     })
 }
@@
 fn expert_to_json(r: &ExpertResult) -> serde_json::Value {
     serde_json::json!({
         "path_query": r.path_query,
+        "path_match": r.path_match,
         "truncated": r.truncated,
         "experts": ...
     })
 }
@@
 fn overlap_to_json(r: &OverlapResult) -> serde_json::Value {
     serde_json::json!({
         "path_query": r.path_query,
+        "path_match": r.path_match,
         "truncated": r.truncated,
         "users": ...
     })
 }
```

Human output can add a single dim hint line: `(matching exact file)` or `(matching directory prefix)`.
## 5) Put a hard upper bound on `--limit` at the CLI boundary

**Why this is better**

You already bounded the nested arrays (`participants`, `mr_refs`), but top-level lists are still user-unbounded. A single `--limit 50000` can:

- generate huge JSON payloads
- blow up downstream agent pipelines
- create slow queries and memory spikes

Clamp it before execution. A maximum of 500 is usually plenty; even 200 is fine.
**Patch**

```diff
diff --git a/who-command-design.md b/who-command-design.md
--- a/who-command-design.md
+++ b/who-command-design.md
@@
 pub struct WhoArgs {
@@
-    /// Maximum results per section
-    #[arg(short = 'n', long = "limit", default_value = "20", help_heading = "Output")]
+    /// Maximum results per section (bounded for output safety)
+    #[arg(
+        short = 'n',
+        long = "limit",
+        default_value = "20",
+        value_parser = clap::value_parser!(u16).range(1..=500),
+        help_heading = "Output"
+    )]
-    pub limit: usize,
+    pub limit: u16,
 }
@@
-11. **Bounded payloads.** Robot JSON must never emit unbounded arrays ...
+11. **Bounded payloads.** Robot JSON must never emit unbounded arrays ...
+    Top-level result set size is also bounded via `--limit` (1..=500) to prevent runaway payloads.
```

(Note: clap requires the parsed type and the field type to match, so `value_parser!(u16)` implies a `u16` field; convert with `usize::from(args.limit)` at call sites that expect `usize`.)
## 6) Clarify Active “unresolved count” semantics (window vs total)

**Why this is better**

`total_unresolved` currently means “unresolved within the time window”, but the human header prints “Active Discussions (X unresolved)”, which can easily be misread as “total unresolved overall”.

A small rename avoids the confusion with no new behavior.
Patch
|
||||
diff
|
||||
Copy code
|
||||
diff --git a/who-command-design.md b/who-command-design.md
|
||||
--- a/who-command-design.md
|
||||
+++ b/who-command-design.md
|
||||
@@
|
||||
pub struct ActiveResult {
|
||||
pub discussions: Vec<ActiveDiscussion>,
|
||||
- pub total_unresolved: u32,
|
||||
+ pub total_unresolved_in_window: u32,
|
||||
pub truncated: bool,
|
||||
}
|
||||
@@
|
||||
- println!(
|
||||
- "{}",
|
||||
- style(format!(
|
||||
- "Active Discussions ({} unresolved)",
|
||||
- r.total_unresolved
|
||||
- ))
|
||||
- .bold()
|
||||
- );
|
||||
+ println!("{}", style(format!(
|
||||
+ "Active Discussions ({} unresolved in window)",
|
||||
+ r.total_unresolved_in_window
|
||||
+ )).bold());
|
||||
|
||||
|
||||
(If you later want global total, add a second count query—but I’d keep MVP lean.)

7) Tighten statement cache behavior: avoid preparing both SQL variants when not needed

Why this is better

You already use prepare_cached(), but as you add more “two static variants” (exact/prefix; scoped/unscoped), it’s easy to accidentally prepare multiple statements per invocation.

Codify it: select the variant first, then prepare exactly one statement.

This is mostly a plan-hygiene change (it helps future you keep performance predictable).

Patch (plan-level emphasis)

```diff
diff --git a/who-command-design.md b/who-command-design.md
--- a/who-command-design.md
+++ b/who-command-design.md
@@
-1. **Lean on existing infrastructure.** ...
+1. **Lean on existing infrastructure.** ...
+   When multiple static SQL variants exist (exact/prefix; scoped/unscoped), always:
+   (a) resolve which variant applies, then (b) `prepare_cached()` exactly one statement.
```

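The select-then-prepare rule can be sketched as follows. This is a minimal illustration with hypothetical SQL constants and variant flags — the real plan’s statements and names will differ:

```rust
// Hypothetical static SQL variants; the actual queries live in the design doc.
const SQL_EXACT_SCOPED: &str = "SELECT author FROM notes WHERE path = ?1 AND project = ?2";
const SQL_EXACT_GLOBAL: &str = "SELECT author FROM notes WHERE path = ?1";
const SQL_PREFIX_SCOPED: &str = "SELECT author FROM notes WHERE path LIKE ?1 || '%' AND project = ?2";
const SQL_PREFIX_GLOBAL: &str = "SELECT author FROM notes WHERE path LIKE ?1 || '%'";

// Step (a): resolve exactly one variant up front, before touching the connection.
fn select_sql(prefix: bool, scoped: bool) -> &'static str {
    match (prefix, scoped) {
        (false, false) => SQL_EXACT_GLOBAL,
        (false, true) => SQL_EXACT_SCOPED,
        (true, false) => SQL_PREFIX_GLOBAL,
        (true, true) => SQL_PREFIX_SCOPED,
    }
}

fn main() {
    // Step (b) would then be: conn.prepare_cached(select_sql(prefix, scoped))
    // -- exactly one statement is ever prepared per invocation.
    assert_eq!(select_sql(false, true), SQL_EXACT_SCOPED);
    assert_eq!(select_sql(true, false), SQL_PREFIX_GLOBAL);
}
```

Because the match is exhaustive over the two flags, a new variant axis forces the compiler to flag every call site, which keeps the “prepare exactly one” invariant honest as the command grows.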

Net effect (what you gain)

- Correct robot semantics (since_mode) without breaking your static-SQL/agent-first contract.
- Guaranteed intended index usage for Active global vs scoped queries (the nullable-OR planner pitfall is real).
- Fewer “why no results?” surprises via path normalization.
- Better debugging (path-match introspection) with essentially no runtime cost.
- Output safety even when users/agents misconfigure --limit.
- Less ambiguous UX around “unresolved” counts.

If you want a single “most important” change to ship before iteration 8 locks: #2 (Active query variants) and #1 (since semantics) are the two that prevent the most painful, hard-to-diagnose failures.