10 Commits

Author SHA1 Message Date
Taylor Eernisse
96ef60fa05 docs: Update documentation for CP2 merge request support
Updates project documentation to reflect the complete CP2 feature set
with merge request ingestion and robot mode capabilities.

README.md:
- Add MR-related CLI examples (gi list mrs, gi show mr, gi ingest)
- Document robot mode (--robot flag, GI_ROBOT env, auto-detect)
- Update feature list with MR support and DiffNote positions
- Add configuration section with all config file options
- Expand CLI reference with new commands and flags

AGENTS.md:
- Add MR ingestion patterns for AI agent consumption
- Document robot mode JSON schemas for parsing
- Include error handling patterns with exit codes
- Add discussion/note querying examples for code review context

Cargo.toml:
- Bump version to 0.2.0 reflecting major feature addition

The documentation emphasizes the robot mode design which enables
AI agents like Claude Code to reliably parse gi output for automated
GitLab workflow integration.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:47:34 -05:00
Taylor Eernisse
d338d68191 test: Add comprehensive test suite for MR ingestion
Introduces thorough test coverage for merge request functionality,
following the established testing patterns from issue ingestion.

New test files:
- mr_transformer_tests.rs: NormalizedMergeRequest transformation tests
  covering full MR with all fields, minimal MR, draft detection via
  title prefix and work_in_progress field, label/assignee/reviewer
  extraction, and timestamp conversion

- mr_discussion_tests.rs: MR discussion normalization tests including
  polymorphic noteable binding, DiffNote position extraction with
  line ranges and SHA triplet, and resolvable note handling

- diffnote_position_tests.rs: Exhaustive DiffNote position scenarios
  covering text/image/file types, single-line vs multi-line comments,
  added/removed/modified lines, and missing position handling

New fixtures:
- fixtures/gitlab_merge_request.json: Representative MR API response
  with nested structures for integration testing

Updated tests:
- gitlab_types_tests.rs: Add MR type deserialization tests
- migration_tests.rs: Update expected schema version to 6

Test design follows property-based patterns where feasible, with
explicit edge case coverage for nullable fields and API variants
across different GitLab versions.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:47:17 -05:00
Taylor Eernisse
8ddc974b89 feat(cli): Add MR support to list/show/count/ingest commands
Extends all data commands to support merge requests alongside issues,
with consistent patterns and JSON output for robot mode.

List command (gi list mrs):
- MR-specific columns: branches, draft status, reviewers
- Filters: --state (opened|merged|closed|locked|all), --draft,
  --no-draft, --reviewer, --target-branch, --source-branch
- Discussion count with unresolved indicator (e.g., "5/2!")
- JSON output includes full MR metadata
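The unresolved indicator can be sketched as follows; the exact rendering rule is assumed from the "5/2!" example above, not taken from the real implementation:

```rust
/// Formats the discussion column: total thread count, plus
/// "/unresolved!" when any threads remain unresolved.
/// The zero-unresolved rendering is an assumption.
fn discussion_indicator(total: u64, unresolved: u64) -> String {
    if unresolved > 0 {
        format!("{}/{}!", total, unresolved)
    } else {
        total.to_string()
    }
}
```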

Show command (gi show mr <iid>):
- MR details with branches, assignees, reviewers, merge status
- DiffNote positions showing file:line for code review comments
- Full description and discussion bodies (no truncation in JSON)
- --json flag for structured output with ISO timestamps

Count command (gi count mrs):
- MR counting with optional --type filter for discussions/notes
- JSON output with breakdown by state

Ingest command (gi ingest --type mrs):
- Full MR sync with discussion prefetch
- Progress output shows MR-specific metrics (diffnotes count)
- JSON summary with comprehensive sync statistics

All commands respect global --robot mode for auto-JSON output.
The pattern "gi list mrs --json | jq '.mrs[] | .iid'" now works
for scripted MR processing.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:46:59 -05:00
Taylor Eernisse
7d0d586932 feat(cli): Add global robot mode for machine-readable output
Introduces a unified robot mode that enables JSON output across all
commands, designed for AI agent and script consumption.

Robot mode activation (any of):
- --robot flag: Explicit opt-in
- GI_ROBOT=1 env var: For persistent configuration
- Non-TTY stdout: Auto-detect when piped (e.g., gi list issues | jq)

Implementation:
- Cli::is_robot_mode(): Centralized detection logic
- All command handlers receive robot_mode boolean
- Errors emit structured JSON to stderr with exit codes
- Success responses emit JSON to stdout
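The three activation paths condense into a pure helper. This is a sketch: the real `Cli::is_robot_mode()` queries the actual TTY state, which is passed in here as a parameter for testability.

```rust
/// Robot mode is on if any activation path fires: the explicit --robot
/// flag, GI_ROBOT=1 in the environment, or stdout not being a TTY.
fn is_robot_mode(robot_flag: bool, gi_robot_env: Option<&str>, stdout_is_tty: bool) -> bool {
    robot_flag || gi_robot_env == Some("1") || !stdout_is_tty
}
```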

Behavior changes in robot mode:
- No color/emoji output (no ANSI escapes)
- No progress spinners or interactive prompts
- Timestamps as ISO 8601 strings (not relative "2 hours ago")
- Full content (no truncation of descriptions/notes)
- Structured error objects with code, message, suggestion

This enables reliable parsing by Claude Code, shell scripts, and
automation pipelines. The auto-detect on non-TTY means simple piping
"just works" without explicit flags.

Per-command --json flags remain for explicit control and override
robot mode when needed for human-friendly terminal + JSON file output.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:46:27 -05:00
Taylor Eernisse
5fe76e46a3 fix(core): Add structured error handling and responsive lock release
Improves core infrastructure with robot-friendly error output and
faster lock release for better sync behavior.

Error handling improvements (error.rs):
- ErrorCode::exit_code(): Unique exit codes per error type (1-13)
  for programmatic error handling in scripts/agents
- GiError::suggestion(): Helpful hints for common error recovery
- GiError::to_robot_error(): Structured JSON error conversion
- RobotError/RobotErrorOutput: Serializable error types with code,
  message, and optional suggestion fields

Lock improvements (lock.rs):
- Heartbeat thread now polls every 100ms for release flag, only
  updating database heartbeat at full interval (5s default)
- Eliminates 5-10s delay after sync completion when waiting for
  heartbeat thread to notice release
- Reduces lock hold time after operation completes
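The responsive release pattern can be sketched with std threads (the function name and return value are illustrative, not the actual lock.rs API):

```rust
use std::sync::Arc;
use std::sync::atomic::{AtomicBool, Ordering};
use std::thread;
use std::time::{Duration, Instant};

/// Polls the release flag every `poll` (100ms in lock.rs) but only
/// performs the expensive heartbeat write once per `interval` (5s
/// default). Returns the number of heartbeats written, standing in
/// for the real database heartbeat update.
fn spawn_heartbeat(
    release: Arc<AtomicBool>,
    poll: Duration,
    interval: Duration,
) -> thread::JoinHandle<u32> {
    thread::spawn(move || {
        let mut beats = 0u32;
        let mut last_beat = Instant::now();
        while !release.load(Ordering::Relaxed) {
            if last_beat.elapsed() >= interval {
                beats += 1; // stand-in for updating the DB heartbeat row
                last_beat = Instant::now();
            }
            thread::sleep(poll);
        }
        beats
    })
}
```

Because each sleep is only `poll` long, setting the release flag stops the thread within roughly one poll period instead of up to a full heartbeat interval.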

Database (db.rs):
- Bump expected schema version to 6 for MR migration

The exit code mapping enables shell scripts and CI/CD pipelines to
distinguish between configuration errors (2-4), GitLab API errors
(5-8), and database errors (9-11) for appropriate retry/alert logic.
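The mapping and the range-based classification can be sketched as follows; variant names are assumptions, but the numeric codes follow the documented table.

```rust
/// Illustrative subset of ErrorCode::exit_code().
#[derive(Clone, Copy)]
enum ErrorCode {
    Internal,         // 1
    ConfigNotFound,   // 2
    ConfigInvalid,    // 3
    TokenNotSet,      // 4
    GitlabAuthFailed, // 5
}

impl ErrorCode {
    fn exit_code(self) -> i32 {
        match self {
            ErrorCode::Internal => 1,
            ErrorCode::ConfigNotFound => 2,
            ErrorCode::ConfigInvalid => 3,
            ErrorCode::TokenNotSet => 4,
            ErrorCode::GitlabAuthFailed => 5,
        }
    }
}

/// Range-based classification a shell script or CI job could apply
/// for retry/alert decisions.
fn error_class(exit_code: i32) -> &'static str {
    match exit_code {
        0 => "success",
        2..=4 => "configuration",
        5..=8 => "gitlab_api",
        9..=11 => "database",
        _ => "other",
    }
}
```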

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:46:08 -05:00
Taylor Eernisse
cd44e516e3 feat(ingestion): Implement MR sync with parallel discussion prefetch
Adds a complete merge request ingestion pipeline with a two-phase
discussion sync strategy optimized for throughput.

New modules:
- merge_requests.rs: MR upsert with labels/assignees/reviewers handling,
  stale MR cleanup, and watermark-based incremental sync
- mr_discussions.rs: Parallel prefetch strategy for MR discussions

Two-phase MR discussion sync:
1. PREFETCH PHASE: Spawn concurrent tasks to fetch discussions for
   multiple MRs simultaneously (configurable concurrency, default 8).
   Transform and validate in parallel, storing results in memory.
2. WRITE PHASE: Serial database writes to avoid lock contention.
   Each MR's discussions written in a single transaction, with
   proper stale discussion cleanup.

This approach achieves ~4-8x throughput vs serial fetching while
maintaining database consistency. Transform errors are tracked per-MR
to prevent partial writes from corrupting watermarks.
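The two-phase shape can be sketched as below. The real code uses async tasks with bounded concurrency (default 8); this sketch uses unbounded std threads for brevity, and `fetch` stands in for the per-MR discussion API call.

```rust
use std::thread;

/// Phase 1: fetch all MRs' discussions concurrently into memory.
/// Phase 2: write serially, one batch per MR, so concurrent writers
/// never contend for the database lock.
fn two_phase_sync<T: Send + 'static>(
    mr_iids: Vec<u64>,
    fetch: fn(u64) -> T,
    mut write: impl FnMut(u64, T),
) {
    // Prefetch phase: one task per MR, results buffered in memory.
    let handles: Vec<_> = mr_iids
        .into_iter()
        .map(|iid| thread::spawn(move || (iid, fetch(iid))))
        .collect();
    let results: Vec<(u64, T)> = handles.into_iter().map(|h| h.join().unwrap()).collect();

    // Write phase: strictly serial database writes.
    for (iid, data) in results {
        write(iid, data);
    }
}
```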

Orchestrator updates:
- ingest_merge_requests(): Coordinates MR fetch -> discussion sync flow
- Progress callbacks emit MR-specific events for UI feedback
- Respects --full flag to reset discussion watermarks for full resync

The prefetch strategy is critical for MRs which typically have more
discussions than issues, and where API latency dominates sync time.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:45:48 -05:00
Taylor Eernisse
d33f24c91b feat(transformers): Add MR transformer and polymorphic discussion support
Introduces NormalizedMergeRequest transformer and updates discussion
normalization to handle both issue and MR discussions polymorphically.

New transformers:
- NormalizedMergeRequest: Transforms API MergeRequest to database row,
  extracting labels/assignees/reviewers into separate collections for
  junction table insertion. Handles draft detection, detailed_merge_status
  preference over deprecated merge_status, and merge_user over merged_by.
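Draft detection as described can be sketched as a small predicate; the exact prefix set is an assumption based on GitLab's "Draft:"/"WIP:" conventions.

```rust
/// An MR is a draft if the API's work_in_progress field says so, or
/// if the title carries a conventional draft prefix.
fn is_draft(title: &str, work_in_progress: bool) -> bool {
    work_in_progress || title.starts_with("Draft:") || title.starts_with("WIP:")
}
```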

Discussion transformer updates:
- NormalizedDiscussion now takes noteable_type ("Issue" | "MergeRequest")
  and noteable_id for polymorphic FK binding
- normalize_discussions_for_issue(): Convenience wrapper for issues
- normalize_discussions_for_mr(): Convenience wrapper for MRs
- DiffNote position fields (type, line_range, SHA triplet) now extracted
  from API position object for code review context

Design decisions:
- Transformer returns (normalized_item, labels, assignees, reviewers)
  tuple for efficient batch insertion without re-querying
- Timestamps converted to ms epoch for SQLite storage consistency
- Optional fields use map() chains for clean null handling

The polymorphic discussion approach allows reusing the same discussions
and notes tables for both issues and MRs, with noteable_type + FK
determining the parent relationship.
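The noteable binding can be sketched as a column selector. The `merge_request_id` column appears in the v6 migration; `issue_id` is an assumption about the CP1 schema.

```rust
/// Maps the API's noteable_type onto the FK column in the shared
/// discussions table; unknown types get no binding.
fn noteable_fk_column(noteable_type: &str) -> Option<&'static str> {
    match noteable_type {
        "Issue" => Some("issue_id"),
        "MergeRequest" => Some("merge_request_id"),
        _ => None,
    }
}
```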

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:45:29 -05:00
Taylor Eernisse
cc8c489fd2 feat(gitlab): Add MR and MR discussion API endpoints to client
Extends GitLabClient with endpoints for fetching merge requests and
their discussions, following the same patterns established for issues.

New methods:
- fetch_merge_requests(): Paginated MR listing with cursor support,
  using updated_after filter for incremental sync. Uses 'all' scope
  to include MRs where user is author/assignee/reviewer.
- fetch_merge_requests_single_page(): Single page variant for callers
  managing their own pagination (used by parallel prefetch)
- fetch_mr_discussions(): Paginated discussion listing for a single MR,
  returns full discussion trees with notes

API design notes:
- Uses keyset pagination (order_by=updated_at, keyset=true) for
  consistent results during sync operations
- MR endpoint uses /merge_requests (not /mrs) per GitLab API naming
- Discussion endpoint matches issue pattern for consistency
- `per_page` defaults to 100 (the GitLab maximum) for efficiency

The fetch_merge_requests_single_page method enables the parallel
prefetch strategy used in mr_discussions.rs, where multiple MRs'
discussions are fetched concurrently during the sweep phase.
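The query shape can be sketched as a string builder. Parameter names follow the GitLab REST API (`pagination=keyset` is the documented spelling of the "keyset=true" shorthand above); the builder function itself is illustrative, not the client's actual API.

```rust
/// Assembles the MR listing path with keyset pagination and an
/// optional incremental-sync filter.
fn mr_list_query(project_id: u64, updated_after: Option<&str>) -> String {
    let mut query = format!(
        "/projects/{}/merge_requests?scope=all&order_by=updated_at&sort=asc\
         &pagination=keyset&per_page=100",
        project_id
    );
    if let Some(ts) = updated_after {
        query.push_str("&updated_after=");
        query.push_str(ts);
    }
    query
}
```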

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:45:13 -05:00
Taylor Eernisse
a18908c377 feat(gitlab): Add MergeRequest and related types for API deserialization
Extends GitLab type definitions with comprehensive merge request support,
matching the API response structure for /projects/:id/merge_requests.

New types:
- MergeRequest: Full MR metadata including draft status, branch info,
  detailed_merge_status, merge_user (modern API fields replacing
  deprecated alternatives), and references for cross-project support
- MrReviewer: Reviewer user info (MR-specific, distinct from assignees)
- MrAssignee: Assignee user info with consistent structure
- MrDiscussion: MR discussion wrapper for polymorphic handling
- DiffNotePosition: Rich position data for code review comments with
  line ranges and SHA triplet for commit context

Design decisions:
- Use Option<T> for all nullable API fields to handle partial responses
- Include deprecated fields (merged_by, merge_status) alongside modern
  alternatives for backward compatibility with older GitLab instances
- DiffNotePosition uses Option for all fields since different position
  types (text/image/file) populate different subsets

These types enable type-safe deserialization of GitLab MR API responses
with full coverage of the fields needed for CP2 ingestion.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:44:58 -05:00
Taylor Eernisse
39a71d8b85 feat(db): Add schema migration v6 for merge request support
Introduces comprehensive database schema for merge request ingestion
(CP2), designed with forward compatibility for future features.

New tables:
- merge_requests: Core MR metadata with draft status, branch info,
  detailed_merge_status (modern API field), and sync health telemetry
  columns for debuggability
- mr_labels: Junction table linking MRs to shared labels table
- mr_assignees: MR assignee usernames (same pattern as issues)
- mr_reviewers: MR-specific reviewer tracking (not applicable to issues)

Additional indexes:
- discussions: Add merge_request_id and resolved status indexes
- notes: Add composite indexes for DiffNote file/line queries

DiffNote position enhancements:
- position_type: 'text' | 'image' | 'file' for diff comment semantics
- position_line_range_start/end: Multi-line comment range support
- position_base_sha/start_sha/head_sha: Commit context for diff notes

The schema captures CP3-ready fields (head_sha, references_short/full,
SHA triplet) at zero additional API cost, preparing for file-context
and cross-project reference features.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-26 22:44:37 -05:00
33 changed files with 5777 additions and 251 deletions

View File

@@ -164,3 +164,88 @@ git push # Push to remote
- Always `bd sync` before ending session
<!-- end-bv-agent-instructions -->
---
## GitLab Inbox Robot Mode
The `gi` CLI has a robot mode optimized for AI agent consumption with structured JSON output, meaningful exit codes, and TTY auto-detection.
### Activation
```bash
# Explicit flag
gi --robot list issues
# Auto-detection (when stdout is not a TTY)
gi list issues | jq .
# Environment variable
GI_ROBOT=1 gi list issues
```
### Robot Mode Commands
```bash
# List issues/MRs with JSON output
gi --robot list issues --limit=10
gi --robot list mrs --state=opened
# Count entities
gi --robot count issues
gi --robot count discussions --type=mr
# Show detailed entity info
gi --robot show issue 123
gi --robot show mr 456 --project=group/repo
# Check sync status
gi --robot sync-status
# Run ingestion (quiet, JSON summary)
gi --robot ingest --type=issues
# Check environment health
gi --robot doctor
```
### Response Format
All commands return consistent JSON:
```json
{"ok":true,"data":{...},"meta":{...}}
```
Errors return structured JSON to stderr:
```json
{"error":{"code":"CONFIG_NOT_FOUND","message":"...","suggestion":"Run 'gi init'"}}
```
### Exit Codes
| Code | Meaning |
|------|---------|
| 0 | Success |
| 1 | Internal error |
| 2 | Config not found |
| 3 | Config invalid |
| 4 | Token not set |
| 5 | GitLab auth failed |
| 6 | Resource not found |
| 7 | Rate limited |
| 8 | Network error |
| 9 | Database locked |
| 10 | Database error |
| 11 | Migration failed |
| 12 | I/O error |
| 13 | Transform error |
### Best Practices
- Use `gi --robot` for all agent interactions
- Check exit codes for error handling
- Parse JSON errors from stderr
- Use `--limit` to control response size
- TTY detection handles piped commands automatically

View File

@@ -20,7 +20,7 @@ serde = { version = "1", features = ["derive"] }
serde_json = "1"
# CLI
-clap = { version = "4", features = ["derive"] }
+clap = { version = "4", features = ["derive", "env"] }
dialoguer = "0.12"
console = "0.16"
indicatif = "0.18"

View File

@@ -1,15 +1,16 @@
# gi - GitLab Inbox
-A command-line tool for managing GitLab issues locally. Syncs issues, discussions, and notes from GitLab to a local SQLite database for fast, offline-capable querying and filtering.
+A command-line tool for managing GitLab issues and merge requests locally. Syncs issues, MRs, discussions, and notes from GitLab to a local SQLite database for fast, offline-capable querying and filtering.
## Features
- **Local-first**: All data stored in SQLite for instant queries
- **Incremental sync**: Cursor-based sync only fetches changes since last sync
- **Full re-sync**: Reset cursors and fetch all data from scratch when needed
-- **Multi-project**: Track issues across multiple GitLab projects
-- **Rich filtering**: Filter by state, author, assignee, labels, milestone, due date
+- **Multi-project**: Track issues and MRs across multiple GitLab projects
+- **Rich filtering**: Filter by state, author, assignee, labels, milestone, due date, draft status, reviewer, branches
- **Raw payload storage**: Preserves original GitLab API responses for debugging
- **Discussion threading**: Full support for issue and MR discussions including inline code review comments
## Installation
@@ -36,11 +37,20 @@ gi auth-test
# Sync issues from GitLab
gi ingest --type issues
# Sync merge requests from GitLab
gi ingest --type mrs
# List recent issues
gi list issues --limit 10
# List open merge requests
gi list mrs --state opened
# Show issue details
gi show issue 123 --project group/repo
# Show MR details with discussions
gi show mr 456 --project group/repo
```
## Configuration
@@ -163,13 +173,19 @@ Checks performed:
Sync data from GitLab to local database.
```bash
# Issues
gi ingest --type issues # Sync all projects
gi ingest --type issues --project group/repo # Single project
gi ingest --type issues --force # Override stale lock
gi ingest --type issues --full # Full re-sync (reset cursors)
# Merge Requests
gi ingest --type mrs # Sync all projects
gi ingest --type mrs --project group/repo # Single project
gi ingest --type mrs --full # Full re-sync (reset cursors)
```
-The `--full` flag resets sync cursors and fetches all data from scratch, useful when:
+The `--full` flag resets sync cursors and discussion watermarks, then fetches all data from scratch. Useful when:
- Assignee data or other fields were missing from earlier syncs
- You want to ensure complete data after schema changes
- Troubleshooting sync issues
@@ -201,6 +217,35 @@ gi list issues --json # JSON output
Output includes: IID, title, state, author, assignee, labels, and update time.
### `gi list mrs`
Query merge requests from local database.
```bash
gi list mrs # Recent MRs (default 50)
gi list mrs --limit 100 # More results
gi list mrs --state opened # Only open MRs
gi list mrs --state merged # Only merged MRs
gi list mrs --state closed # Only closed MRs
gi list mrs --state locked # Only locked MRs
gi list mrs --state all # All states
gi list mrs --author username # By author (@ prefix optional)
gi list mrs --assignee username # By assignee (@ prefix optional)
gi list mrs --reviewer username # By reviewer (@ prefix optional)
gi list mrs --draft # Only draft/WIP MRs
gi list mrs --no-draft # Exclude draft MRs
gi list mrs --target-branch main # By target branch
gi list mrs --source-branch feature/foo # By source branch
gi list mrs --label needs-review # By label (AND logic)
gi list mrs --since 7d # Updated in last 7 days
gi list mrs --project group/repo # Filter by project
gi list mrs --sort created --order asc # Sort options
gi list mrs --open # Open first result in browser
gi list mrs --json # JSON output
```
Output includes: IID, title (with [DRAFT] prefix if applicable), state, author, assignee, labels, and update time.
### `gi show issue`
Display detailed issue information.
@@ -212,14 +257,27 @@ gi show issue 123 --project group/repo # Disambiguate if needed
Shows: title, description, state, author, assignees, labels, milestone, due date, web URL, and threaded discussions.
### `gi show mr`
Display detailed merge request information.
```bash
gi show mr 456 # Show MR !456
gi show mr 456 --project group/repo # Disambiguate if needed
```
Shows: title, description, state, draft status, author, assignees, reviewers, labels, source/target branches, merge status, web URL, and threaded discussions. Inline code review comments (DiffNotes) display file context in the format `[src/file.ts:45]`.
### `gi count`
Count entities in local database.
```bash
gi count issues # Total issues
gi count mrs # Total MRs (with state breakdown)
gi count discussions # Total discussions
gi count discussions --type issue # Issue discussions only
gi count discussions --type mr # MR discussions only
gi count notes # Total notes (shows system vs user breakdown)
```
@@ -233,7 +291,7 @@ gi sync-status
Displays:
- Last sync run details (status, timing)
-- Cursor positions per project and resource type
+- Cursor positions per project and resource type (issues and MRs)
- Data summary counts
### `gi migrate`
@@ -282,12 +340,16 @@ Data is stored in SQLite with WAL mode and foreign keys enabled. Main tables:
|-------|---------|
| `projects` | Tracked GitLab projects with metadata |
| `issues` | Issue metadata (title, state, author, due date, milestone) |
| `merge_requests` | MR metadata (title, state, draft, branches, merge status) |
| `milestones` | Project milestones with state and due dates |
| `labels` | Project labels with colors |
| `issue_labels` | Many-to-many issue-label relationships |
| `issue_assignees` | Many-to-many issue-assignee relationships |
| `mr_labels` | Many-to-many MR-label relationships |
| `mr_assignees` | Many-to-many MR-assignee relationships |
| `mr_reviewers` | Many-to-many MR-reviewer relationships |
| `discussions` | Issue/MR discussion threads |
-| `notes` | Individual notes within discussions (with system note flag) |
+| `notes` | Individual notes within discussions (with system note flag and DiffNote position data) |
| `sync_runs` | Audit trail of sync operations |
| `sync_cursors` | Cursor positions for incremental sync |
| `app_locks` | Crash-safe single-flight lock |
@@ -334,15 +396,16 @@ cargo clippy
## Current Status
-This is Checkpoint 1 (CP1) of the GitLab Knowledge Engine project. Currently implemented:
+This is Checkpoint 2 (CP2) of the GitLab Knowledge Engine project. Currently implemented:
- Issue ingestion with cursor-based incremental sync
-- Discussion and note syncing for issues
-- Rich filtering and querying
-- Full re-sync capability
+- Merge request ingestion with cursor-based incremental sync
+- Discussion and note syncing for issues and MRs
+- DiffNote support for inline code review comments
+- Rich filtering and querying for both issues and MRs
+- Full re-sync capability with watermark reset
Not yet implemented:
-- Merge request support (CP2)
- Semantic search with embeddings (CP3+)
- Backup and reset commands

View File

@@ -0,0 +1,97 @@
-- Migration 006: Merge Requests, MR Labels, Assignees, Reviewers
-- Schema version: 6
-- Adds CP2 MR ingestion support
-- Merge requests table
CREATE TABLE merge_requests (
id INTEGER PRIMARY KEY,
gitlab_id INTEGER UNIQUE NOT NULL,
project_id INTEGER NOT NULL REFERENCES projects(id),
iid INTEGER NOT NULL,
title TEXT,
description TEXT,
state TEXT, -- 'opened' | 'merged' | 'closed' | 'locked'
draft INTEGER NOT NULL DEFAULT 0, -- 0/1 (SQLite boolean) - work-in-progress status
author_username TEXT,
source_branch TEXT,
target_branch TEXT,
head_sha TEXT, -- Current commit SHA at head of source branch (CP3-ready)
references_short TEXT, -- Short reference e.g. "!123" (CP3-ready for display)
references_full TEXT, -- Full reference e.g. "group/project!123" (CP3-ready for cross-project)
detailed_merge_status TEXT, -- preferred, non-deprecated (replaces merge_status)
merge_user_username TEXT, -- preferred over deprecated merged_by
created_at INTEGER, -- ms epoch UTC
updated_at INTEGER, -- ms epoch UTC
merged_at INTEGER, -- ms epoch UTC (NULL if not merged)
closed_at INTEGER, -- ms epoch UTC (NULL if not closed)
last_seen_at INTEGER NOT NULL, -- ms epoch UTC, updated on every upsert
-- Prevents re-fetching discussions on cursor rewind / reruns unless MR changed.
discussions_synced_for_updated_at INTEGER,
-- Sync health telemetry for debuggability
discussions_sync_last_attempt_at INTEGER, -- ms epoch UTC of last sync attempt
discussions_sync_attempts INTEGER DEFAULT 0, -- count of sync attempts for this MR version
discussions_sync_last_error TEXT, -- last error message if sync failed
web_url TEXT,
raw_payload_id INTEGER REFERENCES raw_payloads(id)
);
CREATE INDEX idx_mrs_project_updated ON merge_requests(project_id, updated_at);
CREATE INDEX idx_mrs_author ON merge_requests(author_username);
CREATE INDEX idx_mrs_target_branch ON merge_requests(project_id, target_branch);
CREATE INDEX idx_mrs_source_branch ON merge_requests(project_id, source_branch);
CREATE INDEX idx_mrs_state ON merge_requests(project_id, state);
CREATE INDEX idx_mrs_detailed_merge_status ON merge_requests(project_id, detailed_merge_status);
CREATE INDEX idx_mrs_draft ON merge_requests(project_id, draft);
CREATE INDEX idx_mrs_discussions_sync ON merge_requests(project_id, discussions_synced_for_updated_at);
CREATE UNIQUE INDEX uq_mrs_project_iid ON merge_requests(project_id, iid);
-- MR-Label junction (reuses labels table from CP1)
CREATE TABLE mr_labels (
merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,
label_id INTEGER REFERENCES labels(id) ON DELETE CASCADE,
PRIMARY KEY(merge_request_id, label_id)
);
CREATE INDEX idx_mr_labels_label ON mr_labels(label_id);
-- MR assignees (same pattern as issue_assignees)
CREATE TABLE mr_assignees (
merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,
username TEXT NOT NULL,
PRIMARY KEY(merge_request_id, username)
);
CREATE INDEX idx_mr_assignees_username ON mr_assignees(username);
-- MR reviewers (MR-specific, not applicable to issues)
CREATE TABLE mr_reviewers (
merge_request_id INTEGER REFERENCES merge_requests(id) ON DELETE CASCADE,
username TEXT NOT NULL,
PRIMARY KEY(merge_request_id, username)
);
CREATE INDEX idx_mr_reviewers_username ON mr_reviewers(username);
-- Add FK constraint to discussions table for merge_request_id
-- Note: SQLite doesn't support ADD CONSTRAINT, the FK was defined in CP1 but nullable
-- We just need to add an index if not already present
CREATE INDEX IF NOT EXISTS idx_discussions_mr_id ON discussions(merge_request_id);
CREATE INDEX IF NOT EXISTS idx_discussions_mr_resolved ON discussions(merge_request_id, resolved, resolvable);
-- Additional indexes for DiffNote queries (notes table from CP1)
-- These composite indexes enable efficient file-context queries for CP3
CREATE INDEX IF NOT EXISTS idx_notes_type ON notes(note_type);
CREATE INDEX IF NOT EXISTS idx_notes_new_path ON notes(position_new_path);
CREATE INDEX IF NOT EXISTS idx_notes_new_path_line ON notes(position_new_path, position_new_line);
CREATE INDEX IF NOT EXISTS idx_notes_old_path_line ON notes(position_old_path, position_old_line);
-- CP2: capture richer diff note position shapes (minimal, still MVP)
-- These fields support modern GitLab diff note semantics without full diff reconstruction
ALTER TABLE notes ADD COLUMN position_type TEXT; -- 'text' | 'image' | 'file'
ALTER TABLE notes ADD COLUMN position_line_range_start INTEGER; -- multi-line comment start
ALTER TABLE notes ADD COLUMN position_line_range_end INTEGER; -- multi-line comment end
-- DiffNote SHA triplet for commit context (CP3-ready, zero extra API cost)
ALTER TABLE notes ADD COLUMN position_base_sha TEXT; -- Base commit SHA for diff
ALTER TABLE notes ADD COLUMN position_start_sha TEXT; -- Start commit SHA for diff
ALTER TABLE notes ADD COLUMN position_head_sha TEXT; -- Head commit SHA for diff
-- Update schema version
INSERT INTO schema_version (version, applied_at, description)
VALUES (6, strftime('%s', 'now') * 1000, 'Merge requests, MR labels, assignees, reviewers');

View File

@@ -2,6 +2,7 @@
use console::style;
use rusqlite::Connection;
use serde::Serialize;
use crate::Config;
use crate::core::db::create_connection;
@@ -12,7 +13,16 @@ use crate::core::paths::get_db_path;
pub struct CountResult {
pub entity: String,
pub count: i64,
-pub system_count: Option<i64>, // For notes only
+pub system_count: Option<i64>,               // For notes only
+pub state_breakdown: Option<StateBreakdown>, // For issues/MRs
}
/// State breakdown for issues or MRs.
pub struct StateBreakdown {
pub opened: i64,
pub closed: i64,
pub merged: Option<i64>, // MRs only
pub locked: Option<i64>, // MRs only
}
/// Run the count command.
@@ -24,30 +34,83 @@ pub fn run_count(config: &Config, entity: &str, type_filter: Option<&str>) -> Re
"issues" => count_issues(&conn),
"discussions" => count_discussions(&conn, type_filter),
"notes" => count_notes(&conn, type_filter),
-"mrs" => {
-    // Placeholder for CP2
-    Ok(CountResult {
-        entity: "Merge Requests".to_string(),
-        count: 0,
-        system_count: None,
-    })
-}
+"mrs" => count_mrs(&conn),
_ => Ok(CountResult {
entity: entity.to_string(),
count: 0,
system_count: None,
state_breakdown: None,
}),
}
}
-/// Count issues.
+/// Count issues with state breakdown.
fn count_issues(conn: &Connection) -> Result<CountResult> {
let count: i64 = conn.query_row("SELECT COUNT(*) FROM issues", [], |row| row.get(0))?;
let opened: i64 = conn.query_row(
"SELECT COUNT(*) FROM issues WHERE state = 'opened'",
[],
|row| row.get(0),
)?;
let closed: i64 = conn.query_row(
"SELECT COUNT(*) FROM issues WHERE state = 'closed'",
[],
|row| row.get(0),
)?;
Ok(CountResult {
entity: "Issues".to_string(),
count,
system_count: None,
state_breakdown: Some(StateBreakdown {
opened,
closed,
merged: None,
locked: None,
}),
})
}
/// Count merge requests with state breakdown.
fn count_mrs(conn: &Connection) -> Result<CountResult> {
let count: i64 = conn.query_row("SELECT COUNT(*) FROM merge_requests", [], |row| row.get(0))?;
let opened: i64 = conn.query_row(
"SELECT COUNT(*) FROM merge_requests WHERE state = 'opened'",
[],
|row| row.get(0),
)?;
let merged: i64 = conn.query_row(
"SELECT COUNT(*) FROM merge_requests WHERE state = 'merged'",
[],
|row| row.get(0),
)?;
let closed: i64 = conn.query_row(
"SELECT COUNT(*) FROM merge_requests WHERE state = 'closed'",
[],
|row| row.get(0),
)?;
let locked: i64 = conn.query_row(
"SELECT COUNT(*) FROM merge_requests WHERE state = 'locked'",
[],
|row| row.get(0),
)?;
Ok(CountResult {
entity: "Merge Requests".to_string(),
count,
system_count: None,
state_breakdown: Some(StateBreakdown {
opened,
closed,
merged: Some(merged),
locked: Some(locked),
}),
})
}
@@ -81,6 +144,7 @@ fn count_discussions(conn: &Connection, type_filter: Option<&str>) -> Result<Cou
entity: entity_name.to_string(),
count,
system_count: None,
state_breakdown: None,
})
}
@@ -126,6 +190,7 @@ fn count_notes(conn: &Connection, type_filter: Option<&str>) -> Result<CountResu
entity: entity_name.to_string(),
count: non_system,
system_count: Some(system_count),
state_breakdown: None,
})
}
@@ -145,6 +210,55 @@ fn format_number(n: i64) -> String {
result
}
/// JSON output structure for count command.
#[derive(Serialize)]
struct CountJsonOutput {
ok: bool,
data: CountJsonData,
}
#[derive(Serialize)]
struct CountJsonData {
entity: String,
count: i64,
#[serde(skip_serializing_if = "Option::is_none")]
system_excluded: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
breakdown: Option<CountJsonBreakdown>,
}
#[derive(Serialize)]
struct CountJsonBreakdown {
opened: i64,
closed: i64,
#[serde(skip_serializing_if = "Option::is_none")]
merged: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
locked: Option<i64>,
}
/// Print count result as JSON (robot mode).
pub fn print_count_json(result: &CountResult) {
let breakdown = result.state_breakdown.as_ref().map(|b| CountJsonBreakdown {
opened: b.opened,
closed: b.closed,
merged: b.merged,
locked: b.locked.filter(|&l| l > 0),
});
let output = CountJsonOutput {
ok: true,
data: CountJsonData {
entity: result.entity.to_lowercase().replace(' ', "_"),
count: result.count,
system_excluded: result.system_count,
breakdown,
},
};
println!("{}", serde_json::to_string(&output).unwrap());
}
/// Print count result.
pub fn print_count(result: &CountResult) {
let count_str = format_number(result.count);
@@ -153,7 +267,7 @@ pub fn print_count(result: &CountResult) {
println!(
"{}: {} {}",
style(&result.entity).cyan(),
-style(count_str).bold(),
+style(&count_str).bold(),
style(format!(
"(excluding {} system)",
format_number(system_count)
@@ -164,9 +278,23 @@ pub fn print_count(result: &CountResult) {
println!(
"{}: {}",
style(&result.entity).cyan(),
style(count_str).bold()
style(&count_str).bold()
);
}
// Print state breakdown if available
if let Some(breakdown) = &result.state_breakdown {
println!(" opened: {}", format_number(breakdown.opened));
if let Some(merged) = breakdown.merged {
println!(" merged: {}", format_number(merged));
}
println!(" closed: {}", format_number(breakdown.closed));
if let Some(locked) = breakdown.locked
&& locked > 0
{
println!(" locked: {}", format_number(locked));
}
}
}
#[cfg(test)]

View File

@@ -3,6 +3,7 @@
use console::style;
use indicatif::{ProgressBar, ProgressStyle};
use rusqlite::Connection;
use serde::Serialize;
use crate::Config;
use crate::core::db::create_connection;
@@ -10,18 +11,32 @@ use crate::core::error::{GiError, Result};
use crate::core::lock::{AppLock, LockOptions};
use crate::core::paths::get_db_path;
use crate::gitlab::GitLabClient;
use crate::ingestion::{IngestProjectResult, ProgressEvent, ingest_project_issues_with_progress};
use crate::ingestion::{
IngestMrProjectResult, IngestProjectResult, ProgressEvent, ingest_project_issues_with_progress,
ingest_project_merge_requests_with_progress,
};
/// Result of ingest command for display.
pub struct IngestResult {
pub resource_type: String,
pub projects_synced: usize,
// Issue-specific fields
pub issues_fetched: usize,
pub issues_upserted: usize,
pub issues_synced_discussions: usize,
pub issues_skipped_discussion_sync: usize,
// MR-specific fields
pub mrs_fetched: usize,
pub mrs_upserted: usize,
pub mrs_synced_discussions: usize,
pub mrs_skipped_discussion_sync: usize,
pub assignees_linked: usize,
pub reviewers_linked: usize,
pub diffnotes_count: usize,
// Shared fields
pub labels_created: usize,
pub discussions_fetched: usize,
pub notes_upserted: usize,
pub issues_synced_discussions: usize,
pub issues_skipped_discussion_sync: usize,
}
/// Run the ingest command.
@@ -31,11 +46,12 @@ pub async fn run_ingest(
project_filter: Option<&str>,
force: bool,
full: bool,
robot_mode: bool,
) -> Result<IngestResult> {
// Only issues supported in CP1
if resource_type != "issues" {
// Validate resource type early
if resource_type != "issues" && resource_type != "mrs" {
return Err(GiError::Other(format!(
"Resource type '{}' not yet implemented. Only 'issues' is supported.",
"Invalid resource type '{}'. Valid types: issues, mrs",
resource_type
)));
}
@@ -69,16 +85,26 @@ pub async fn run_ingest(
// If --full flag is set, reset sync cursors and discussion watermarks for a complete re-fetch
if full {
println!(
"{}",
style("Full sync: resetting cursors to fetch all data...").yellow()
);
if !robot_mode {
println!(
"{}",
style("Full sync: resetting cursors to fetch all data...").yellow()
);
}
for (local_project_id, _, path) in &projects {
// Reset discussion watermarks first so discussions get re-synced
conn.execute(
"UPDATE issues SET discussions_synced_for_updated_at = NULL WHERE project_id = ?",
[*local_project_id],
)?;
if resource_type == "issues" {
// Reset issue discussion watermarks first so discussions get re-synced
conn.execute(
"UPDATE issues SET discussions_synced_for_updated_at = NULL WHERE project_id = ?",
[*local_project_id],
)?;
} else if resource_type == "mrs" {
// Reset MR discussion watermarks
conn.execute(
"UPDATE merge_requests SET discussions_synced_for_updated_at = NULL WHERE project_id = ?",
[*local_project_id],
)?;
}
// Then reset sync cursor
conn.execute(
@@ -86,7 +112,7 @@ pub async fn run_ingest(
(*local_project_id, resource_type),
)?;
tracing::info!(project = %path, "Reset sync cursor and discussion watermarks for full re-fetch");
tracing::info!(project = %path, resource_type, "Reset sync cursor and discussion watermarks for full re-fetch");
}
}
@@ -103,45 +129,76 @@ pub async fn run_ingest(
}
let mut total = IngestResult {
resource_type: resource_type.to_string(),
projects_synced: 0,
// Issue fields
issues_fetched: 0,
issues_upserted: 0,
issues_synced_discussions: 0,
issues_skipped_discussion_sync: 0,
// MR fields
mrs_fetched: 0,
mrs_upserted: 0,
mrs_synced_discussions: 0,
mrs_skipped_discussion_sync: 0,
assignees_linked: 0,
reviewers_linked: 0,
diffnotes_count: 0,
// Shared fields
labels_created: 0,
discussions_fetched: 0,
notes_upserted: 0,
issues_synced_discussions: 0,
issues_skipped_discussion_sync: 0,
};
println!("{}", style("Ingesting issues...").blue());
println!();
let type_label = if resource_type == "issues" {
"issues"
} else {
"merge requests"
};
if !robot_mode {
println!("{}", style(format!("Ingesting {type_label}...")).blue());
println!();
}
// Sync each project
for (local_project_id, gitlab_project_id, path) in &projects {
// Show spinner while fetching issues
let spinner = ProgressBar::new_spinner();
spinner.set_style(
ProgressStyle::default_spinner()
.template("{spinner:.blue} {msg}")
.unwrap(),
);
spinner.set_message(format!("Fetching issues from {path}..."));
spinner.enable_steady_tick(std::time::Duration::from_millis(100));
// Show spinner while fetching (only in interactive mode)
let spinner = if robot_mode {
ProgressBar::hidden()
} else {
let s = ProgressBar::new_spinner();
s.set_style(
ProgressStyle::default_spinner()
.template("{spinner:.blue} {msg}")
.unwrap(),
);
s.set_message(format!("Fetching {type_label} from {path}..."));
s.enable_steady_tick(std::time::Duration::from_millis(100));
s
};
// Progress bar for discussion sync (hidden until needed)
let disc_bar = ProgressBar::new(0);
disc_bar.set_style(
ProgressStyle::default_bar()
.template(" {spinner:.blue} Syncing discussions [{bar:30.cyan/dim}] {pos}/{len}")
.unwrap()
.progress_chars("=> "),
);
// Progress bar for discussion sync (hidden until needed, or always hidden in robot mode)
let disc_bar = if robot_mode {
ProgressBar::hidden()
} else {
let b = ProgressBar::new(0);
b.set_style(
ProgressStyle::default_bar()
.template(" {spinner:.blue} Syncing discussions [{bar:30.cyan/dim}] {pos}/{len}")
.unwrap()
.progress_chars("=> "),
);
b
};
// Create progress callback
// Create progress callback (no-op in robot mode)
let spinner_clone = spinner.clone();
let disc_bar_clone = disc_bar.clone();
let progress_callback: crate::ingestion::ProgressCallback =
let progress_callback: crate::ingestion::ProgressCallback = if robot_mode {
Box::new(|_| {})
} else {
Box::new(move |event: ProgressEvent| match event {
// Issue events
ProgressEvent::DiscussionSyncStarted { total } => {
spinner_clone.finish_and_clear();
disc_bar_clone.set_length(total as u64);
@@ -153,34 +210,83 @@ pub async fn run_ingest(
ProgressEvent::DiscussionSyncComplete => {
disc_bar_clone.finish_and_clear();
}
// MR events
ProgressEvent::MrDiscussionSyncStarted { total } => {
spinner_clone.finish_and_clear();
disc_bar_clone.set_length(total as u64);
disc_bar_clone.enable_steady_tick(std::time::Duration::from_millis(100));
}
ProgressEvent::MrDiscussionSynced { current, total: _ } => {
disc_bar_clone.set_position(current as u64);
}
ProgressEvent::MrDiscussionSyncComplete => {
disc_bar_clone.finish_and_clear();
}
_ => {}
});
})
};
let result = ingest_project_issues_with_progress(
&conn,
&client,
config,
*local_project_id,
*gitlab_project_id,
Some(progress_callback),
)
.await?;
if resource_type == "issues" {
let result = ingest_project_issues_with_progress(
&conn,
&client,
config,
*local_project_id,
*gitlab_project_id,
Some(progress_callback),
)
.await?;
spinner.finish_and_clear();
disc_bar.finish_and_clear();
spinner.finish_and_clear();
disc_bar.finish_and_clear();
// Print per-project summary
print_project_summary(path, &result);
// Print per-project summary (only in interactive mode)
if !robot_mode {
print_issue_project_summary(path, &result);
}
// Aggregate totals
total.projects_synced += 1;
total.issues_fetched += result.issues_fetched;
total.issues_upserted += result.issues_upserted;
total.labels_created += result.labels_created;
total.discussions_fetched += result.discussions_fetched;
total.notes_upserted += result.notes_upserted;
total.issues_synced_discussions += result.issues_synced_discussions;
total.issues_skipped_discussion_sync += result.issues_skipped_discussion_sync;
// Aggregate totals
total.projects_synced += 1;
total.issues_fetched += result.issues_fetched;
total.issues_upserted += result.issues_upserted;
total.labels_created += result.labels_created;
total.discussions_fetched += result.discussions_fetched;
total.notes_upserted += result.notes_upserted;
total.issues_synced_discussions += result.issues_synced_discussions;
total.issues_skipped_discussion_sync += result.issues_skipped_discussion_sync;
} else {
let result = ingest_project_merge_requests_with_progress(
&conn,
&client,
config,
*local_project_id,
*gitlab_project_id,
full,
Some(progress_callback),
)
.await?;
spinner.finish_and_clear();
disc_bar.finish_and_clear();
// Print per-project summary (only in interactive mode)
if !robot_mode {
print_mr_project_summary(path, &result);
}
// Aggregate totals
total.projects_synced += 1;
total.mrs_fetched += result.mrs_fetched;
total.mrs_upserted += result.mrs_upserted;
total.labels_created += result.labels_created;
total.assignees_linked += result.assignees_linked;
total.reviewers_linked += result.reviewers_linked;
total.discussions_fetched += result.discussions_fetched;
total.notes_upserted += result.notes_upserted;
total.diffnotes_count += result.diffnotes_count;
total.mrs_synced_discussions += result.mrs_synced_discussions;
total.mrs_skipped_discussion_sync += result.mrs_skipped_discussion_sync;
}
}
// Lock is released on drop
@@ -219,8 +325,8 @@ fn get_projects_to_sync(
Ok(projects)
}
/// Print summary for a single project.
fn print_project_summary(path: &str, result: &IngestProjectResult) {
/// Print summary for a single project (issues).
fn print_issue_project_summary(path: &str, result: &IngestProjectResult) {
let labels_str = if result.labels_created > 0 {
format!(", {} new labels", result.labels_created)
} else {
@@ -249,26 +355,188 @@ fn print_project_summary(path: &str, result: &IngestProjectResult) {
}
}
/// Print final summary.
pub fn print_ingest_summary(result: &IngestResult) {
println!();
/// Print summary for a single project (merge requests).
fn print_mr_project_summary(path: &str, result: &IngestMrProjectResult) {
let labels_str = if result.labels_created > 0 {
format!(", {} new labels", result.labels_created)
} else {
String::new()
};
let assignees_str = if result.assignees_linked > 0 || result.reviewers_linked > 0 {
format!(
", {} assignees, {} reviewers",
result.assignees_linked, result.reviewers_linked
)
} else {
String::new()
};
println!(
"{}",
style(format!(
"Total: {} issues, {} discussions, {} notes",
result.issues_upserted, result.discussions_fetched, result.notes_upserted
))
.green()
" {}: {} MRs fetched{}{}",
style(path).cyan(),
result.mrs_upserted,
labels_str,
assignees_str
);
if result.issues_skipped_discussion_sync > 0 {
if result.mrs_synced_discussions > 0 {
let diffnotes_str = if result.diffnotes_count > 0 {
format!(" ({} diff notes)", result.diffnotes_count)
} else {
String::new()
};
println!(
"{}",
style(format!(
"Skipped discussion sync for {} unchanged issues.",
result.issues_skipped_discussion_sync
))
.dim()
" {} MRs -> {} discussions, {} notes{}",
result.mrs_synced_discussions,
result.discussions_fetched,
result.notes_upserted,
diffnotes_str
);
}
if result.mrs_skipped_discussion_sync > 0 {
println!(
" {} unchanged MRs (discussion sync skipped)",
style(result.mrs_skipped_discussion_sync).dim()
);
}
}
/// JSON output structures for robot mode.
#[derive(Serialize)]
struct IngestJsonOutput {
ok: bool,
data: IngestJsonData,
}
#[derive(Serialize)]
struct IngestJsonData {
resource_type: String,
projects_synced: usize,
#[serde(skip_serializing_if = "Option::is_none")]
issues: Option<IngestIssueStats>,
#[serde(skip_serializing_if = "Option::is_none")]
merge_requests: Option<IngestMrStats>,
labels_created: usize,
discussions_fetched: usize,
notes_upserted: usize,
}
#[derive(Serialize)]
struct IngestIssueStats {
fetched: usize,
upserted: usize,
synced_discussions: usize,
skipped_discussion_sync: usize,
}
#[derive(Serialize)]
struct IngestMrStats {
fetched: usize,
upserted: usize,
synced_discussions: usize,
skipped_discussion_sync: usize,
assignees_linked: usize,
reviewers_linked: usize,
diffnotes_count: usize,
}
/// Print final summary as JSON (robot mode).
pub fn print_ingest_summary_json(result: &IngestResult) {
let (issues, merge_requests) = if result.resource_type == "issues" {
(
Some(IngestIssueStats {
fetched: result.issues_fetched,
upserted: result.issues_upserted,
synced_discussions: result.issues_synced_discussions,
skipped_discussion_sync: result.issues_skipped_discussion_sync,
}),
None,
)
} else {
(
None,
Some(IngestMrStats {
fetched: result.mrs_fetched,
upserted: result.mrs_upserted,
synced_discussions: result.mrs_synced_discussions,
skipped_discussion_sync: result.mrs_skipped_discussion_sync,
assignees_linked: result.assignees_linked,
reviewers_linked: result.reviewers_linked,
diffnotes_count: result.diffnotes_count,
}),
)
};
let output = IngestJsonOutput {
ok: true,
data: IngestJsonData {
resource_type: result.resource_type.clone(),
projects_synced: result.projects_synced,
issues,
merge_requests,
labels_created: result.labels_created,
discussions_fetched: result.discussions_fetched,
notes_upserted: result.notes_upserted,
},
};
println!("{}", serde_json::to_string(&output).unwrap());
}
/// Print final summary (interactive mode).
pub fn print_ingest_summary(result: &IngestResult) {
println!();
if result.resource_type == "issues" {
println!(
"{}",
style(format!(
"Total: {} issues, {} discussions, {} notes",
result.issues_upserted, result.discussions_fetched, result.notes_upserted
))
.green()
);
if result.issues_skipped_discussion_sync > 0 {
println!(
"{}",
style(format!(
"Skipped discussion sync for {} unchanged issues.",
result.issues_skipped_discussion_sync
))
.dim()
);
}
} else {
let diffnotes_str = if result.diffnotes_count > 0 {
format!(" ({} diff notes)", result.diffnotes_count)
} else {
String::new()
};
println!(
"{}",
style(format!(
"Total: {} MRs, {} discussions, {} notes{}",
result.mrs_upserted,
result.discussions_fetched,
result.notes_upserted,
diffnotes_str
))
.green()
);
if result.mrs_skipped_discussion_sync > 0 {
println!(
"{}",
style(format!(
"Skipped discussion sync for {} unchanged MRs.",
result.mrs_skipped_discussion_sync
))
.dim()
);
}
}
}
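The `robot_mode` flag threaded through `run_ingest` selects between these two summary printers. The commit message describes the flag as settable explicitly, via `GI_ROBOT`, or by auto-detection; a std-only sketch of one plausible precedence order (the exact rules are an assumption):

```rust
use std::io::IsTerminal;

/// Decide whether to emit robot-mode JSON. Assumed priority: explicit
/// --robot flag, then the GI_ROBOT environment variable, then falling
/// back to "stdout is not a terminal" (output is piped or redirected).
fn detect_robot_mode(flag: bool) -> bool {
    if flag {
        return true;
    }
    if std::env::var_os("GI_ROBOT").is_some() {
        return true;
    }
    !std::io::stdout().is_terminal()
}
```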

View File

@@ -90,7 +90,99 @@ impl From<&ListResult> for ListResultJson {
}
}
/// Filter options for list query.
/// MR row for display.
#[derive(Debug, Serialize)]
pub struct MrListRow {
pub iid: i64,
pub title: String,
pub state: String,
pub draft: bool,
pub author_username: String,
pub source_branch: String,
pub target_branch: String,
pub created_at: i64,
pub updated_at: i64,
#[serde(skip_serializing_if = "Option::is_none")]
pub web_url: Option<String>,
pub project_path: String,
pub labels: Vec<String>,
pub assignees: Vec<String>,
pub reviewers: Vec<String>,
pub discussion_count: i64,
pub unresolved_count: i64,
}
/// Serializable version for JSON output.
#[derive(Serialize)]
pub struct MrListRowJson {
pub iid: i64,
pub title: String,
pub state: String,
pub draft: bool,
pub author_username: String,
pub source_branch: String,
pub target_branch: String,
pub labels: Vec<String>,
pub assignees: Vec<String>,
pub reviewers: Vec<String>,
pub discussion_count: i64,
pub unresolved_count: i64,
pub created_at_iso: String,
pub updated_at_iso: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub web_url: Option<String>,
pub project_path: String,
}
impl From<&MrListRow> for MrListRowJson {
fn from(row: &MrListRow) -> Self {
Self {
iid: row.iid,
title: row.title.clone(),
state: row.state.clone(),
draft: row.draft,
author_username: row.author_username.clone(),
source_branch: row.source_branch.clone(),
target_branch: row.target_branch.clone(),
labels: row.labels.clone(),
assignees: row.assignees.clone(),
reviewers: row.reviewers.clone(),
discussion_count: row.discussion_count,
unresolved_count: row.unresolved_count,
created_at_iso: ms_to_iso(row.created_at),
updated_at_iso: ms_to_iso(row.updated_at),
web_url: row.web_url.clone(),
project_path: row.project_path.clone(),
}
}
}
/// Result of MR list query.
#[derive(Serialize)]
pub struct MrListResult {
pub mrs: Vec<MrListRow>,
pub total_count: usize,
}
/// JSON output structure for MRs.
#[derive(Serialize)]
pub struct MrListResultJson {
pub mrs: Vec<MrListRowJson>,
pub total_count: usize,
pub showing: usize,
}
impl From<&MrListResult> for MrListResultJson {
fn from(result: &MrListResult) -> Self {
Self {
mrs: result.mrs.iter().map(MrListRowJson::from).collect(),
total_count: result.total_count,
showing: result.mrs.len(),
}
}
}
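The `From` impl above converts the raw ms-epoch timestamps to ISO strings via `ms_to_iso` (not shown in this diff; it presumably wraps a date library). For reference, a std-only sketch of the same conversion using Howard Hinnant's civil-from-days algorithm:

```rust
/// Convert a millisecond Unix epoch to an ISO-8601 UTC string, std-only.
fn ms_to_iso(ms: i64) -> String {
    let secs = ms.div_euclid(1000);
    let days = secs.div_euclid(86_400);
    let rem = secs.rem_euclid(86_400);
    let (h, m, s) = (rem / 3600, (rem % 3600) / 60, rem % 60);
    // civil_from_days: days since 1970-01-01 -> (year, month, day)
    let z = days + 719_468;
    let era = z.div_euclid(146_097);
    let doe = z.rem_euclid(146_097);
    let yoe = (doe - doe / 1460 + doe / 36_524 - doe / 146_096) / 365;
    let y = yoe + era * 400;
    let doy = doe - (365 * yoe + yoe / 4 - yoe / 100);
    let mp = (5 * doy + 2) / 153;
    let d = doy - (153 * mp + 2) / 5 + 1;
    let month = if mp < 10 { mp + 3 } else { mp - 9 };
    let year = if month <= 2 { y + 1 } else { y };
    format!("{year:04}-{month:02}-{d:02}T{h:02}:{m:02}:{s:02}Z")
}
```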
/// Filter options for issue list query.
pub struct ListFilters<'a> {
pub limit: usize,
pub project: Option<&'a str>,
@@ -106,6 +198,24 @@ pub struct ListFilters<'a> {
pub order: &'a str,
}
/// Filter options for MR list query.
pub struct MrListFilters<'a> {
pub limit: usize,
pub project: Option<&'a str>,
pub state: Option<&'a str>,
pub author: Option<&'a str>,
pub assignee: Option<&'a str>,
pub reviewer: Option<&'a str>,
pub labels: Option<&'a [String]>,
pub since: Option<&'a str>,
pub draft: bool,
pub no_draft: bool,
pub target_branch: Option<&'a str>,
pub source_branch: Option<&'a str>,
pub sort: &'a str,
pub order: &'a str,
}
/// Run the list issues command.
pub fn run_list_issues(config: &Config, filters: ListFilters) -> Result<ListResult> {
let db_path = get_db_path(config.storage.db_path.as_deref());
@@ -126,11 +236,11 @@ fn query_issues(conn: &Connection, filters: &ListFilters) -> Result<ListResult>
params.push(Box::new(format!("%{project}%")));
}
if let Some(state) = filters.state {
if state != "all" {
where_clauses.push("i.state = ?");
params.push(Box::new(state.to_string()));
}
if let Some(state) = filters.state
&& state != "all"
{
where_clauses.push("i.state = ?");
params.push(Box::new(state.to_string()));
}
// Handle author filter (strip leading @ if present)
@@ -151,11 +261,11 @@ fn query_issues(conn: &Connection, filters: &ListFilters) -> Result<ListResult>
}
// Handle since filter
if let Some(since_str) = filters.since {
if let Some(cutoff_ms) = parse_since(since_str) {
where_clauses.push("i.updated_at >= ?");
params.push(Box::new(cutoff_ms));
}
if let Some(since_str) = filters.since
&& let Some(cutoff_ms) = parse_since(since_str)
{
where_clauses.push("i.updated_at >= ?");
params.push(Box::new(cutoff_ms));
}
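`parse_since` is called here but not shown. A hedged std-only sketch of the relative-duration parsing it likely performs; the suffix set ("m", "h", "d", "w") is an assumption:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Parse a relative duration like "7d", "24h", or "30m" into milliseconds.
/// Returns None for unrecognized input. (Suffix set is an assumption.)
fn parse_duration_ms(s: &str) -> Option<i64> {
    let (num, unit) = s.split_at(s.len().checked_sub(1)?);
    let n: i64 = num.parse().ok()?;
    let per_unit = match unit {
        "m" => 60_000,
        "h" => 3_600_000,
        "d" => 86_400_000,
        "w" => 604_800_000,
        _ => return None,
    };
    Some(n * per_unit)
}

/// Cutoff in ms epoch: now minus the parsed duration (mirrors how the
/// since filter compares against i.updated_at / m.updated_at).
fn cutoff_ms(s: &str) -> Option<i64> {
    let now = SystemTime::now().duration_since(UNIX_EPOCH).ok()?.as_millis() as i64;
    Some(now - parse_duration_ms(s)?)
}
```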
// Handle label filters (AND logic - all labels must be present)
@@ -210,7 +320,11 @@ fn query_issues(conn: &Connection, filters: &ListFilters) -> Result<ListResult>
"iid" => "i.iid",
_ => "i.updated_at", // default
};
let order = if filters.order == "asc" { "ASC" } else { "DESC" };
let order = if filters.order == "asc" {
"ASC"
} else {
"DESC"
};
// Get issues with enriched data
let query_sql = format!(
@@ -251,7 +365,7 @@ fn query_issues(conn: &Connection, filters: &ListFilters) -> Result<ListResult>
let param_refs: Vec<&dyn rusqlite::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let mut stmt = conn.prepare(&query_sql)?;
let issues = stmt
let issues: Vec<IssueListRow> = stmt
.query_map(param_refs.as_slice(), |row| {
let labels_csv: Option<String> = row.get(8)?;
let labels = labels_csv
@@ -278,8 +392,7 @@ fn query_issues(conn: &Connection, filters: &ListFilters) -> Result<ListResult>
unresolved_count: row.get(11)?,
})
})?
.filter_map(|r| r.ok())
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(ListResult {
issues,
@@ -287,6 +400,216 @@ fn query_issues(conn: &Connection, filters: &ListFilters) -> Result<ListResult>
})
}
/// Run the list MRs command.
pub fn run_list_mrs(config: &Config, filters: MrListFilters) -> Result<MrListResult> {
let db_path = get_db_path(config.storage.db_path.as_deref());
let conn = create_connection(&db_path)?;
let result = query_mrs(&conn, &filters)?;
Ok(result)
}
/// Query MRs from database with enriched data.
fn query_mrs(conn: &Connection, filters: &MrListFilters) -> Result<MrListResult> {
// Build WHERE clause
let mut where_clauses = Vec::new();
let mut params: Vec<Box<dyn rusqlite::ToSql>> = Vec::new();
if let Some(project) = filters.project {
where_clauses.push("p.path_with_namespace LIKE ?");
params.push(Box::new(format!("%{project}%")));
}
if let Some(state) = filters.state
&& state != "all"
{
where_clauses.push("m.state = ?");
params.push(Box::new(state.to_string()));
}
// Handle author filter (strip leading @ if present)
if let Some(author) = filters.author {
let username = author.strip_prefix('@').unwrap_or(author);
where_clauses.push("m.author_username = ?");
params.push(Box::new(username.to_string()));
}
// Handle assignee filter (strip leading @ if present)
if let Some(assignee) = filters.assignee {
let username = assignee.strip_prefix('@').unwrap_or(assignee);
where_clauses.push(
"EXISTS (SELECT 1 FROM mr_assignees ma
WHERE ma.merge_request_id = m.id AND ma.username = ?)",
);
params.push(Box::new(username.to_string()));
}
// Handle reviewer filter (strip leading @ if present)
if let Some(reviewer) = filters.reviewer {
let username = reviewer.strip_prefix('@').unwrap_or(reviewer);
where_clauses.push(
"EXISTS (SELECT 1 FROM mr_reviewers mr
WHERE mr.merge_request_id = m.id AND mr.username = ?)",
);
params.push(Box::new(username.to_string()));
}
// Handle since filter
if let Some(since_str) = filters.since
&& let Some(cutoff_ms) = parse_since(since_str)
{
where_clauses.push("m.updated_at >= ?");
params.push(Box::new(cutoff_ms));
}
// Handle label filters (AND logic - all labels must be present)
if let Some(labels) = filters.labels {
for label in labels {
where_clauses.push(
"EXISTS (SELECT 1 FROM mr_labels ml
JOIN labels l ON ml.label_id = l.id
WHERE ml.merge_request_id = m.id AND l.name = ?)",
);
params.push(Box::new(label.clone()));
}
}
// Handle draft filter
if filters.draft {
where_clauses.push("m.draft = 1");
} else if filters.no_draft {
where_clauses.push("m.draft = 0");
}
// Handle target branch filter
if let Some(target_branch) = filters.target_branch {
where_clauses.push("m.target_branch = ?");
params.push(Box::new(target_branch.to_string()));
}
// Handle source branch filter
if let Some(source_branch) = filters.source_branch {
where_clauses.push("m.source_branch = ?");
params.push(Box::new(source_branch.to_string()));
}
let where_sql = if where_clauses.is_empty() {
String::new()
} else {
format!("WHERE {}", where_clauses.join(" AND "))
};
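The filter handling above keeps SQL fragments and their boxed bind values in lockstep so `?` placeholders always line up with `params`. The pattern can be sketched without rusqlite, using two of the filters as examples:

```rust
/// Build a WHERE clause from optional filters, pushing each SQL fragment
/// and its bind value together (same pattern as query_mrs, minus rusqlite).
fn build_where(state: Option<&str>, author: Option<&str>) -> (String, Vec<String>) {
    let mut clauses: Vec<&str> = Vec::new();
    let mut params: Vec<String> = Vec::new();
    if let Some(state) = state {
        // "all" means no state filtering at all.
        if state != "all" {
            clauses.push("m.state = ?");
            params.push(state.to_string());
        }
    }
    if let Some(author) = author {
        // Strip a leading @ so "@alice" and "alice" match the same rows.
        clauses.push("m.author_username = ?");
        params.push(author.strip_prefix('@').unwrap_or(author).to_string());
    }
    let where_sql = if clauses.is_empty() {
        String::new()
    } else {
        format!("WHERE {}", clauses.join(" AND "))
    };
    (where_sql, params)
}
```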
// Get total count
let count_sql = format!(
"SELECT COUNT(*) FROM merge_requests m
JOIN projects p ON m.project_id = p.id
{where_sql}"
);
let param_refs: Vec<&dyn rusqlite::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let total_count: i64 = conn.query_row(&count_sql, param_refs.as_slice(), |row| row.get(0))?;
let total_count = total_count as usize;
// Build ORDER BY
let sort_column = match filters.sort {
"created" => "m.created_at",
"iid" => "m.iid",
_ => "m.updated_at", // default
};
let order = if filters.order == "asc" {
"ASC"
} else {
"DESC"
};
// Get MRs with enriched data
let query_sql = format!(
"SELECT
m.iid,
m.title,
m.state,
m.draft,
m.author_username,
m.source_branch,
m.target_branch,
m.created_at,
m.updated_at,
m.web_url,
p.path_with_namespace,
(SELECT GROUP_CONCAT(l.name, ',')
FROM mr_labels ml
JOIN labels l ON ml.label_id = l.id
WHERE ml.merge_request_id = m.id) AS labels_csv,
(SELECT GROUP_CONCAT(ma.username, ',')
FROM mr_assignees ma
WHERE ma.merge_request_id = m.id) AS assignees_csv,
(SELECT GROUP_CONCAT(mr.username, ',')
FROM mr_reviewers mr
WHERE mr.merge_request_id = m.id) AS reviewers_csv,
COALESCE(d.total, 0) AS discussion_count,
COALESCE(d.unresolved, 0) AS unresolved_count
FROM merge_requests m
JOIN projects p ON m.project_id = p.id
LEFT JOIN (
SELECT merge_request_id,
COUNT(*) as total,
SUM(CASE WHEN resolvable = 1 AND resolved = 0 THEN 1 ELSE 0 END) as unresolved
FROM discussions
WHERE merge_request_id IS NOT NULL
GROUP BY merge_request_id
) d ON d.merge_request_id = m.id
{where_sql}
ORDER BY {sort_column} {order}
LIMIT ?"
);
params.push(Box::new(filters.limit as i64));
let param_refs: Vec<&dyn rusqlite::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let mut stmt = conn.prepare(&query_sql)?;
let mrs: Vec<MrListRow> = stmt
.query_map(param_refs.as_slice(), |row| {
let labels_csv: Option<String> = row.get(11)?;
let labels = labels_csv
.map(|s| s.split(',').map(String::from).collect())
.unwrap_or_default();
let assignees_csv: Option<String> = row.get(12)?;
let assignees = assignees_csv
.map(|s| s.split(',').map(String::from).collect())
.unwrap_or_default();
let reviewers_csv: Option<String> = row.get(13)?;
let reviewers = reviewers_csv
.map(|s| s.split(',').map(String::from).collect())
.unwrap_or_default();
let draft_int: i64 = row.get(3)?;
Ok(MrListRow {
iid: row.get(0)?,
title: row.get(1)?,
state: row.get(2)?,
draft: draft_int == 1,
author_username: row.get(4)?,
source_branch: row.get(5)?,
target_branch: row.get(6)?,
created_at: row.get(7)?,
updated_at: row.get(8)?,
web_url: row.get(9)?,
project_path: row.get(10)?,
labels,
assignees,
reviewers,
discussion_count: row.get(14)?,
unresolved_count: row.get(15)?,
})
})?
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(MrListResult { mrs, total_count })
}
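`format_relative_time` (whose body is elided below) buckets a timestamp relative to `now_ms()`. A deterministic sketch that takes `now` as a parameter; the bucket boundaries are an assumption:

```rust
/// Render a ms-epoch timestamp relative to `now_ms`, e.g. "3h ago".
fn relative_time(ms_epoch: i64, now_ms: i64) -> String {
    let secs = (now_ms - ms_epoch).max(0) / 1000;
    match secs {
        0..=59 => "just now".to_string(),
        60..=3_599 => format!("{}m ago", secs / 60),
        3_600..=86_399 => format!("{}h ago", secs / 3_600),
        _ => format!("{}d ago", secs / 86_400),
    }
}
```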
/// Format relative time from ms epoch.
fn format_relative_time(ms_epoch: i64) -> String {
let now = now_ms();
@@ -362,6 +685,12 @@ fn format_discussions(total: i64, unresolved: i64) -> String {
}
}
/// Format branch info: target <- source
fn format_branches(target: &str, source: &str, max_width: usize) -> String {
let full = format!("{} <- {}", target, source);
truncate_with_ellipsis(&full, max_width)
}
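`format_branches` delegates to `truncate_with_ellipsis`, which is not shown in this diff. A guess at its contract (counting chars rather than terminal display width):

```rust
/// Truncate a string to at most max_width characters, ending with "..."
/// when anything is cut off.
fn truncate_with_ellipsis(s: &str, max_width: usize) -> String {
    if s.chars().count() <= max_width {
        return s.to_string();
    }
    let keep = max_width.saturating_sub(3);
    let truncated: String = s.chars().take(keep).collect();
    format!("{truncated}...")
}
```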
/// Print issues list as a formatted table.
pub fn print_list_issues(result: &ListResult) {
if result.issues.is_empty() {
@@ -441,6 +770,96 @@ pub fn open_issue_in_browser(result: &ListResult) -> Option<String> {
}
}
/// Print MRs list as a formatted table.
pub fn print_list_mrs(result: &MrListResult) {
if result.mrs.is_empty() {
println!("No merge requests found.");
return;
}
println!(
"Merge Requests (showing {} of {})\n",
result.mrs.len(),
result.total_count
);
let mut table = Table::new();
table
.set_content_arrangement(ContentArrangement::Dynamic)
.set_header(vec![
Cell::new("IID").add_attribute(Attribute::Bold),
Cell::new("Title").add_attribute(Attribute::Bold),
Cell::new("State").add_attribute(Attribute::Bold),
Cell::new("Author").add_attribute(Attribute::Bold),
Cell::new("Branches").add_attribute(Attribute::Bold),
Cell::new("Disc").add_attribute(Attribute::Bold),
Cell::new("Updated").add_attribute(Attribute::Bold),
]);
for mr in &result.mrs {
// Add [DRAFT] prefix for draft MRs
let title = if mr.draft {
format!("[DRAFT] {}", truncate_with_ellipsis(&mr.title, 38))
} else {
truncate_with_ellipsis(&mr.title, 45)
};
let relative_time = format_relative_time(mr.updated_at);
let branches = format_branches(&mr.target_branch, &mr.source_branch, 25);
let discussions = format_discussions(mr.discussion_count, mr.unresolved_count);
let state_cell = match mr.state.as_str() {
"opened" => Cell::new(&mr.state).fg(Color::Green),
"merged" => Cell::new(&mr.state).fg(Color::Magenta),
"closed" => Cell::new(&mr.state).fg(Color::Red),
"locked" => Cell::new(&mr.state).fg(Color::Yellow),
_ => Cell::new(&mr.state).fg(Color::DarkGrey),
};
table.add_row(vec![
Cell::new(format!("!{}", mr.iid)).fg(Color::Cyan),
Cell::new(title),
state_cell,
Cell::new(format!(
"@{}",
truncate_with_ellipsis(&mr.author_username, 12)
))
.fg(Color::Magenta),
Cell::new(branches).fg(Color::Blue),
Cell::new(discussions),
Cell::new(relative_time).fg(Color::DarkGrey),
]);
}
println!("{table}");
}
/// Print MRs list as JSON.
pub fn print_list_mrs_json(result: &MrListResult) {
let json_result = MrListResultJson::from(result);
match serde_json::to_string_pretty(&json_result) {
Ok(json) => println!("{json}"),
Err(e) => eprintln!("Error serializing to JSON: {e}"),
}
}
/// Open MR in browser. Returns the URL that was opened.
pub fn open_mr_in_browser(result: &MrListResult) -> Option<String> {
let first_mr = result.mrs.first()?;
let url = first_mr.web_url.as_ref()?;
match open::that(url) {
Ok(()) => {
println!("Opened: {url}");
Some(url.clone())
}
Err(e) => {
eprintln!("Failed to open browser: {e}");
None
}
}
}
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -10,12 +10,16 @@ pub mod show;
pub mod sync_status;
pub use auth_test::run_auth_test;
pub use count::{print_count, run_count};
pub use count::{print_count, print_count_json, run_count};
pub use doctor::{print_doctor_results, run_doctor};
pub use ingest::{print_ingest_summary, run_ingest};
pub use ingest::{print_ingest_summary, print_ingest_summary_json, run_ingest};
pub use init::{InitInputs, InitOptions, InitResult, run_init};
pub use list::{
ListFilters, open_issue_in_browser, print_list_issues, print_list_issues_json, run_list_issues,
ListFilters, MrListFilters, open_issue_in_browser, open_mr_in_browser, print_list_issues,
print_list_issues_json, print_list_mrs, print_list_mrs_json, run_list_issues, run_list_mrs,
};
pub use show::{print_show_issue, run_show_issue};
pub use sync_status::{print_sync_status, run_sync_status};
pub use show::{
print_show_issue, print_show_issue_json, print_show_mr, print_show_mr_json, run_show_issue,
run_show_mr,
};
pub use sync_status::{print_sync_status, print_sync_status_json, run_sync_status};

View File

@@ -2,6 +2,7 @@
use console::style;
use rusqlite::Connection;
use serde::Serialize;
use crate::Config;
use crate::core::db::create_connection;
@@ -9,8 +10,59 @@ use crate::core::error::{GiError, Result};
use crate::core::paths::get_db_path;
use crate::core::time::ms_to_iso;
/// Merge request metadata for display.
#[derive(Debug, Serialize)]
pub struct MrDetail {
pub id: i64,
pub iid: i64,
pub title: String,
pub description: Option<String>,
pub state: String,
pub draft: bool,
pub author_username: String,
pub source_branch: String,
pub target_branch: String,
pub created_at: i64,
pub updated_at: i64,
pub merged_at: Option<i64>,
pub closed_at: Option<i64>,
pub web_url: Option<String>,
pub project_path: String,
pub labels: Vec<String>,
pub assignees: Vec<String>,
pub reviewers: Vec<String>,
pub discussions: Vec<MrDiscussionDetail>,
}
/// MR discussion detail for display.
#[derive(Debug, Serialize)]
pub struct MrDiscussionDetail {
pub notes: Vec<MrNoteDetail>,
pub individual_note: bool,
}
/// MR note detail for display (includes DiffNote position).
#[derive(Debug, Serialize)]
pub struct MrNoteDetail {
pub author_username: String,
pub body: String,
pub created_at: i64,
pub is_system: bool,
pub position: Option<DiffNotePosition>,
}
/// DiffNote position context for display.
#[derive(Debug, Clone, Serialize)]
pub struct DiffNotePosition {
pub old_path: Option<String>,
pub new_path: Option<String>,
pub old_line: Option<i64>,
pub new_line: Option<i64>,
pub position_type: Option<String>,
}
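For the common single-line DiffNote cases, GitLab reports only the new side for a comment on an added line (`old_line` is null), and both sides for a comment on unchanged context. A standalone sketch of populating the struct for the added-line case (field usage inferred from GitLab's DiffNote positions; the struct is copied from the diff above to keep the example self-contained):

```rust
// Copy of DiffNotePosition so the example compiles on its own.
#[derive(Debug, Clone)]
pub struct DiffNotePosition {
    pub old_path: Option<String>,
    pub new_path: Option<String>,
    pub old_line: Option<i64>,
    pub new_line: Option<i64>,
    pub position_type: Option<String>,
}

/// Position for a comment on a line added in the MR diff: only the new
/// side is populated, since the line does not exist in the base revision.
fn added_line_position(path: &str, line: i64) -> DiffNotePosition {
    DiffNotePosition {
        old_path: Some(path.to_string()),
        new_path: Some(path.to_string()),
        old_line: None,
        new_line: Some(line),
        position_type: Some("text".to_string()),
    }
}
```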
/// Issue metadata for display.
#[derive(Debug)]
#[derive(Debug, Serialize)]
pub struct IssueDetail {
pub id: i64,
pub iid: i64,
@@ -27,14 +79,14 @@ pub struct IssueDetail {
}
/// Discussion detail for display.
#[derive(Debug)]
#[derive(Debug, Serialize)]
pub struct DiscussionDetail {
pub notes: Vec<NoteDetail>,
pub individual_note: bool,
}
/// Note detail for display.
#[derive(Debug)]
#[derive(Debug, Serialize)]
pub struct NoteDetail {
pub author_username: String,
pub body: String,
@@ -129,8 +181,7 @@ fn find_issue(conn: &Connection, iid: i64, project_filter: Option<&str>) -> Resu
project_path: row.get(9)?,
})
})?
.filter_map(|r| r.ok())
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
match issues.len() {
0 => Err(GiError::NotFound(format!("Issue #{} not found", iid))),
@@ -155,10 +206,9 @@ fn get_issue_labels(conn: &Connection, issue_id: i64) -> Result<Vec<String>> {
ORDER BY l.name",
)?;
let labels = stmt
let labels: Vec<String> = stmt
.query_map([issue_id], |row| row.get(0))?
.filter_map(|r| r.ok())
.collect();
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(labels)
}
@@ -177,8 +227,7 @@ fn get_issue_discussions(conn: &Connection, issue_id: i64) -> Result<Vec<Discuss
let individual: i64 = row.get(1)?;
Ok((row.get(0)?, individual == 1))
})?
-.filter_map(|r| r.ok())
-.collect();
+.collect::<std::result::Result<Vec<_>, _>>()?;
// Then get notes for each discussion
let mut note_stmt = conn.prepare(
@@ -200,8 +249,7 @@ fn get_issue_discussions(conn: &Connection, issue_id: i64) -> Result<Vec<Discuss
is_system: is_system == 1,
})
})?
-.filter_map(|r| r.ok())
-.collect();
+.collect::<std::result::Result<Vec<_>, _>>()?;
// Filter out discussions with only system notes
let has_user_notes = notes.iter().any(|n| !n.is_system);
@@ -216,6 +264,255 @@ fn get_issue_discussions(conn: &Connection, issue_id: i64) -> Result<Vec<Discuss
Ok(discussions)
}
/// Run the show MR command.
pub fn run_show_mr(config: &Config, iid: i64, project_filter: Option<&str>) -> Result<MrDetail> {
let db_path = get_db_path(config.storage.db_path.as_deref());
let conn = create_connection(&db_path)?;
// Find the MR
let mr = find_mr(&conn, iid, project_filter)?;
// Load labels
let labels = get_mr_labels(&conn, mr.id)?;
// Load assignees
let assignees = get_mr_assignees(&conn, mr.id)?;
// Load reviewers
let reviewers = get_mr_reviewers(&conn, mr.id)?;
// Load discussions with notes
let discussions = get_mr_discussions(&conn, mr.id)?;
Ok(MrDetail {
id: mr.id,
iid: mr.iid,
title: mr.title,
description: mr.description,
state: mr.state,
draft: mr.draft,
author_username: mr.author_username,
source_branch: mr.source_branch,
target_branch: mr.target_branch,
created_at: mr.created_at,
updated_at: mr.updated_at,
merged_at: mr.merged_at,
closed_at: mr.closed_at,
web_url: mr.web_url,
project_path: mr.project_path,
labels,
assignees,
reviewers,
discussions,
})
}
/// Internal MR row from query.
struct MrRow {
id: i64,
iid: i64,
title: String,
description: Option<String>,
state: String,
draft: bool,
author_username: String,
source_branch: String,
target_branch: String,
created_at: i64,
updated_at: i64,
merged_at: Option<i64>,
closed_at: Option<i64>,
web_url: Option<String>,
project_path: String,
}
/// Find MR by iid, optionally filtered by project.
fn find_mr(conn: &Connection, iid: i64, project_filter: Option<&str>) -> Result<MrRow> {
let (sql, params): (&str, Vec<Box<dyn rusqlite::ToSql>>) = match project_filter {
Some(project) => (
"SELECT m.id, m.iid, m.title, m.description, m.state, m.draft,
m.author_username, m.source_branch, m.target_branch,
m.created_at, m.updated_at, m.merged_at, m.closed_at,
m.web_url, p.path_with_namespace
FROM merge_requests m
JOIN projects p ON m.project_id = p.id
WHERE m.iid = ? AND p.path_with_namespace LIKE ?",
vec![Box::new(iid), Box::new(format!("%{}%", project))],
),
None => (
"SELECT m.id, m.iid, m.title, m.description, m.state, m.draft,
m.author_username, m.source_branch, m.target_branch,
m.created_at, m.updated_at, m.merged_at, m.closed_at,
m.web_url, p.path_with_namespace
FROM merge_requests m
JOIN projects p ON m.project_id = p.id
WHERE m.iid = ?",
vec![Box::new(iid)],
),
};
let param_refs: Vec<&dyn rusqlite::ToSql> = params.iter().map(|p| p.as_ref()).collect();
let mut stmt = conn.prepare(sql)?;
let mrs: Vec<MrRow> = stmt
.query_map(param_refs.as_slice(), |row| {
let draft_val: i64 = row.get(5)?;
Ok(MrRow {
id: row.get(0)?,
iid: row.get(1)?,
title: row.get(2)?,
description: row.get(3)?,
state: row.get(4)?,
draft: draft_val == 1,
author_username: row.get(6)?,
source_branch: row.get(7)?,
target_branch: row.get(8)?,
created_at: row.get(9)?,
updated_at: row.get(10)?,
merged_at: row.get(11)?,
closed_at: row.get(12)?,
web_url: row.get(13)?,
project_path: row.get(14)?,
})
})?
.collect::<std::result::Result<Vec<_>, _>>()?;
match mrs.len() {
0 => Err(GiError::NotFound(format!("MR !{} not found", iid))),
1 => Ok(mrs.into_iter().next().unwrap()),
_ => {
let projects: Vec<String> = mrs.iter().map(|m| m.project_path.clone()).collect();
Err(GiError::Ambiguous(format!(
"MR !{} exists in multiple projects: {}. Use --project to specify.",
iid,
projects.join(", ")
)))
}
}
}
/// Get labels for an MR.
fn get_mr_labels(conn: &Connection, mr_id: i64) -> Result<Vec<String>> {
let mut stmt = conn.prepare(
"SELECT l.name FROM labels l
JOIN mr_labels ml ON l.id = ml.label_id
WHERE ml.merge_request_id = ?
ORDER BY l.name",
)?;
let labels: Vec<String> = stmt
.query_map([mr_id], |row| row.get(0))?
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(labels)
}
/// Get assignees for an MR.
fn get_mr_assignees(conn: &Connection, mr_id: i64) -> Result<Vec<String>> {
let mut stmt = conn.prepare(
"SELECT username FROM mr_assignees
WHERE merge_request_id = ?
ORDER BY username",
)?;
let assignees: Vec<String> = stmt
.query_map([mr_id], |row| row.get(0))?
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(assignees)
}
/// Get reviewers for an MR.
fn get_mr_reviewers(conn: &Connection, mr_id: i64) -> Result<Vec<String>> {
let mut stmt = conn.prepare(
"SELECT username FROM mr_reviewers
WHERE merge_request_id = ?
ORDER BY username",
)?;
let reviewers: Vec<String> = stmt
.query_map([mr_id], |row| row.get(0))?
.collect::<std::result::Result<Vec<_>, _>>()?;
Ok(reviewers)
}
/// Get discussions with notes for an MR.
fn get_mr_discussions(conn: &Connection, mr_id: i64) -> Result<Vec<MrDiscussionDetail>> {
// First get all discussions
let mut disc_stmt = conn.prepare(
"SELECT id, individual_note FROM discussions
WHERE merge_request_id = ?
ORDER BY first_note_at",
)?;
let disc_rows: Vec<(i64, bool)> = disc_stmt
.query_map([mr_id], |row| {
let individual: i64 = row.get(1)?;
Ok((row.get(0)?, individual == 1))
})?
.collect::<std::result::Result<Vec<_>, _>>()?;
// Then get notes for each discussion (with DiffNote position fields)
let mut note_stmt = conn.prepare(
"SELECT author_username, body, created_at, is_system,
position_old_path, position_new_path, position_old_line,
position_new_line, position_type
FROM notes
WHERE discussion_id = ?
ORDER BY position",
)?;
let mut discussions = Vec::new();
for (disc_id, individual_note) in disc_rows {
let notes: Vec<MrNoteDetail> = note_stmt
.query_map([disc_id], |row| {
let is_system: i64 = row.get(3)?;
let old_path: Option<String> = row.get(4)?;
let new_path: Option<String> = row.get(5)?;
let old_line: Option<i64> = row.get(6)?;
let new_line: Option<i64> = row.get(7)?;
let position_type: Option<String> = row.get(8)?;
let position = if old_path.is_some()
|| new_path.is_some()
|| old_line.is_some()
|| new_line.is_some()
{
Some(DiffNotePosition {
old_path,
new_path,
old_line,
new_line,
position_type,
})
} else {
None
};
Ok(MrNoteDetail {
author_username: row.get(0)?,
body: row.get(1)?,
created_at: row.get(2)?,
is_system: is_system == 1,
position,
})
})?
.collect::<std::result::Result<Vec<_>, _>>()?;
// Filter out discussions with only system notes
let has_user_notes = notes.iter().any(|n| !n.is_system);
if has_user_notes || notes.is_empty() {
discussions.push(MrDiscussionDetail {
notes,
individual_note,
});
}
}
Ok(discussions)
}
/// Format date from ms epoch.
fn format_date(ms: i64) -> String {
let iso = ms_to_iso(ms);
@@ -223,12 +520,13 @@ fn format_date(ms: i64) -> String {
iso.split('T').next().unwrap_or(&iso).to_string()
}
-/// Truncate text with ellipsis.
+/// Truncate text with ellipsis (character-safe for UTF-8).
fn truncate(s: &str, max_len: usize) -> String {
-if s.len() <= max_len {
+if s.chars().count() <= max_len {
s.to_string()
} else {
-format!("{}...", &s[..max_len.saturating_sub(3)])
+let truncated: String = s.chars().take(max_len.saturating_sub(3)).collect();
+format!("{truncated}...")
}
}
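The character-based truncation above can be exercised standalone. Byte slicing such as `&s[..n]` panics when `n` falls inside a multi-byte UTF-8 sequence, which is exactly what the `chars()`-based version avoids. A minimal, std-only sketch mirroring the function in the diff:

```rust
/// Character-safe truncation: counts chars instead of bytes, so multi-byte
/// UTF-8 input never causes a slice panic.
fn truncate(s: &str, max_len: usize) -> String {
    if s.chars().count() <= max_len {
        s.to_string()
    } else {
        // Keep max_len - 3 chars, leaving room for the "..." suffix.
        let kept: String = s.chars().take(max_len.saturating_sub(3)).collect();
        format!("{kept}...")
    }
}
```

With accented input like `"héllo wörld"`, the old byte-based version could panic mid-codepoint; this one always cuts on a character boundary.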
@@ -357,6 +655,347 @@ pub fn print_show_issue(issue: &IssueDetail) {
}
}
/// Print MR detail.
pub fn print_show_mr(mr: &MrDetail) {
// Header with draft indicator
let draft_prefix = if mr.draft { "[Draft] " } else { "" };
let header = format!("MR !{}: {}{}", mr.iid, draft_prefix, mr.title);
println!("{}", style(&header).bold());
println!("{}", "─".repeat(header.len().min(80)));
println!();
// Metadata
println!("Project: {}", style(&mr.project_path).cyan());
let state_styled = match mr.state.as_str() {
"opened" => style(&mr.state).green(),
"merged" => style(&mr.state).magenta(),
"closed" => style(&mr.state).red(),
_ => style(&mr.state).dim(),
};
println!("State: {}", state_styled);
println!(
"Branches: {} -> {}",
style(&mr.source_branch).cyan(),
style(&mr.target_branch).yellow()
);
println!("Author: @{}", mr.author_username);
if !mr.assignees.is_empty() {
println!(
"Assignees: {}",
mr.assignees
.iter()
.map(|a| format!("@{}", a))
.collect::<Vec<_>>()
.join(", ")
);
}
if !mr.reviewers.is_empty() {
println!(
"Reviewers: {}",
mr.reviewers
.iter()
.map(|r| format!("@{}", r))
.collect::<Vec<_>>()
.join(", ")
);
}
println!("Created: {}", format_date(mr.created_at));
println!("Updated: {}", format_date(mr.updated_at));
if let Some(merged_at) = mr.merged_at {
println!("Merged: {}", format_date(merged_at));
}
if let Some(closed_at) = mr.closed_at {
println!("Closed: {}", format_date(closed_at));
}
if mr.labels.is_empty() {
println!("Labels: {}", style("(none)").dim());
} else {
println!("Labels: {}", mr.labels.join(", "));
}
if let Some(url) = &mr.web_url {
println!("URL: {}", style(url).dim());
}
println!();
// Description
println!("{}", style("Description:").bold());
if let Some(desc) = &mr.description {
let truncated = truncate(desc, 500);
let wrapped = wrap_text(&truncated, 76, " ");
println!(" {}", wrapped);
} else {
println!(" {}", style("(no description)").dim());
}
println!();
// Discussions
let user_discussions: Vec<&MrDiscussionDetail> = mr
.discussions
.iter()
.filter(|d| d.notes.iter().any(|n| !n.is_system))
.collect();
if user_discussions.is_empty() {
println!("{}", style("Discussions: (none)").dim());
} else {
println!(
"{}",
style(format!("Discussions ({}):", user_discussions.len())).bold()
);
println!();
for discussion in user_discussions {
let user_notes: Vec<&MrNoteDetail> =
discussion.notes.iter().filter(|n| !n.is_system).collect();
if let Some(first_note) = user_notes.first() {
// Print DiffNote position context if present
if let Some(pos) = &first_note.position {
print_diff_position(pos);
}
// First note of discussion (not indented)
println!(
" {} ({}):",
style(format!("@{}", first_note.author_username)).cyan(),
format_date(first_note.created_at)
);
let wrapped = wrap_text(&truncate(&first_note.body, 300), 72, " ");
println!(" {}", wrapped);
println!();
// Replies (indented)
for reply in user_notes.iter().skip(1) {
println!(
" {} ({}):",
style(format!("@{}", reply.author_username)).cyan(),
format_date(reply.created_at)
);
let wrapped = wrap_text(&truncate(&reply.body, 300), 68, " ");
println!(" {}", wrapped);
println!();
}
}
}
}
}
/// Print DiffNote position context.
fn print_diff_position(pos: &DiffNotePosition) {
let file = pos.new_path.as_ref().or(pos.old_path.as_ref());
if let Some(file_path) = file {
let line_str = match (pos.old_line, pos.new_line) {
(Some(old), Some(new)) if old == new => format!(":{}", new),
(Some(old), Some(new)) => format!(":{}→{}", old, new),
(None, Some(new)) => format!(":+{}", new),
(Some(old), None) => format!(":-{}", old),
(None, None) => String::new(),
};
println!(
" {} {}{}",
style("📍").dim(),
style(file_path).yellow(),
style(line_str).dim()
);
}
}
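The line-indicator logic inside `print_diff_position` can be isolated as a small pure function. This is an illustrative sketch (the `line_indicator` helper name is hypothetical, not part of the crate): added lines render as `+N`, removed lines as `-N`, unchanged lines as a bare number, and modified lines as an `old→new` pair.

```rust
/// Sketch of the diff-position suffix appended after the file path.
fn line_indicator(old_line: Option<i64>, new_line: Option<i64>) -> String {
    match (old_line, new_line) {
        // Same line on both sides: print once.
        (Some(old), Some(new)) if old == new => format!(":{new}"),
        // Line moved/modified: show both sides.
        (Some(old), Some(new)) => format!(":{old}→{new}"),
        // Only on the new side: an added line.
        (None, Some(new)) => format!(":+{new}"),
        // Only on the old side: a removed line.
        (Some(old), None) => format!(":-{old}"),
        (None, None) => String::new(),
    }
}
```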
// ============================================================================
// JSON Output Structs (with ISO timestamps for machine consumption)
// ============================================================================
/// JSON output for issue detail.
#[derive(Serialize)]
pub struct IssueDetailJson {
pub id: i64,
pub iid: i64,
pub title: String,
pub description: Option<String>,
pub state: String,
pub author_username: String,
pub created_at: String,
pub updated_at: String,
pub web_url: Option<String>,
pub project_path: String,
pub labels: Vec<String>,
pub discussions: Vec<DiscussionDetailJson>,
}
/// JSON output for discussion detail.
#[derive(Serialize)]
pub struct DiscussionDetailJson {
pub notes: Vec<NoteDetailJson>,
pub individual_note: bool,
}
/// JSON output for note detail.
#[derive(Serialize)]
pub struct NoteDetailJson {
pub author_username: String,
pub body: String,
pub created_at: String,
pub is_system: bool,
}
impl From<&IssueDetail> for IssueDetailJson {
fn from(issue: &IssueDetail) -> Self {
Self {
id: issue.id,
iid: issue.iid,
title: issue.title.clone(),
description: issue.description.clone(),
state: issue.state.clone(),
author_username: issue.author_username.clone(),
created_at: ms_to_iso(issue.created_at),
updated_at: ms_to_iso(issue.updated_at),
web_url: issue.web_url.clone(),
project_path: issue.project_path.clone(),
labels: issue.labels.clone(),
discussions: issue.discussions.iter().map(|d| d.into()).collect(),
}
}
}
impl From<&DiscussionDetail> for DiscussionDetailJson {
fn from(disc: &DiscussionDetail) -> Self {
Self {
notes: disc.notes.iter().map(|n| n.into()).collect(),
individual_note: disc.individual_note,
}
}
}
impl From<&NoteDetail> for NoteDetailJson {
fn from(note: &NoteDetail) -> Self {
Self {
author_username: note.author_username.clone(),
body: note.body.clone(),
created_at: ms_to_iso(note.created_at),
is_system: note.is_system,
}
}
}
/// JSON output for MR detail.
#[derive(Serialize)]
pub struct MrDetailJson {
pub id: i64,
pub iid: i64,
pub title: String,
pub description: Option<String>,
pub state: String,
pub draft: bool,
pub author_username: String,
pub source_branch: String,
pub target_branch: String,
pub created_at: String,
pub updated_at: String,
pub merged_at: Option<String>,
pub closed_at: Option<String>,
pub web_url: Option<String>,
pub project_path: String,
pub labels: Vec<String>,
pub assignees: Vec<String>,
pub reviewers: Vec<String>,
pub discussions: Vec<MrDiscussionDetailJson>,
}
/// JSON output for MR discussion detail.
#[derive(Serialize)]
pub struct MrDiscussionDetailJson {
pub notes: Vec<MrNoteDetailJson>,
pub individual_note: bool,
}
/// JSON output for MR note detail.
#[derive(Serialize)]
pub struct MrNoteDetailJson {
pub author_username: String,
pub body: String,
pub created_at: String,
pub is_system: bool,
pub position: Option<DiffNotePosition>,
}
impl From<&MrDetail> for MrDetailJson {
fn from(mr: &MrDetail) -> Self {
Self {
id: mr.id,
iid: mr.iid,
title: mr.title.clone(),
description: mr.description.clone(),
state: mr.state.clone(),
draft: mr.draft,
author_username: mr.author_username.clone(),
source_branch: mr.source_branch.clone(),
target_branch: mr.target_branch.clone(),
created_at: ms_to_iso(mr.created_at),
updated_at: ms_to_iso(mr.updated_at),
merged_at: mr.merged_at.map(ms_to_iso),
closed_at: mr.closed_at.map(ms_to_iso),
web_url: mr.web_url.clone(),
project_path: mr.project_path.clone(),
labels: mr.labels.clone(),
assignees: mr.assignees.clone(),
reviewers: mr.reviewers.clone(),
discussions: mr.discussions.iter().map(|d| d.into()).collect(),
}
}
}
impl From<&MrDiscussionDetail> for MrDiscussionDetailJson {
fn from(disc: &MrDiscussionDetail) -> Self {
Self {
notes: disc.notes.iter().map(|n| n.into()).collect(),
individual_note: disc.individual_note,
}
}
}
impl From<&MrNoteDetail> for MrNoteDetailJson {
fn from(note: &MrNoteDetail) -> Self {
Self {
author_username: note.author_username.clone(),
body: note.body.clone(),
created_at: ms_to_iso(note.created_at),
is_system: note.is_system,
position: note.position.clone(),
}
}
}
/// Print issue detail as JSON.
pub fn print_show_issue_json(issue: &IssueDetail) {
let json_result = IssueDetailJson::from(issue);
match serde_json::to_string_pretty(&json_result) {
Ok(json) => println!("{json}"),
Err(e) => eprintln!("Error serializing to JSON: {e}"),
}
}
/// Print MR detail as JSON.
pub fn print_show_mr_json(mr: &MrDetail) {
let json_result = MrDetailJson::from(mr);
match serde_json::to_string_pretty(&json_result) {
Ok(json) => println!("{json}"),
Err(e) => eprintln!("Error serializing to JSON: {e}"),
}
}
#[cfg(test)]
mod tests {
use super::*;


@@ -2,6 +2,7 @@
use console::style;
use rusqlite::Connection;
use serde::Serialize;
use crate::Config;
use crate::core::db::create_connection;
@@ -175,6 +176,95 @@ fn format_number(n: i64) -> String {
result
}
/// JSON output structures for robot mode.
#[derive(Serialize)]
struct SyncStatusJsonOutput {
ok: bool,
data: SyncStatusJsonData,
}
#[derive(Serialize)]
struct SyncStatusJsonData {
last_sync: Option<SyncRunJsonInfo>,
cursors: Vec<CursorJsonInfo>,
summary: SummaryJsonInfo,
}
#[derive(Serialize)]
struct SyncRunJsonInfo {
id: i64,
status: String,
command: String,
started_at: String,
#[serde(skip_serializing_if = "Option::is_none")]
completed_at: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
duration_ms: Option<i64>,
#[serde(skip_serializing_if = "Option::is_none")]
error: Option<String>,
}
#[derive(Serialize)]
struct CursorJsonInfo {
project: String,
resource_type: String,
#[serde(skip_serializing_if = "Option::is_none")]
updated_at_cursor: Option<String>,
#[serde(skip_serializing_if = "Option::is_none")]
tie_breaker_id: Option<i64>,
}
#[derive(Serialize)]
struct SummaryJsonInfo {
issues: i64,
discussions: i64,
notes: i64,
system_notes: i64,
}
/// Print sync status as JSON (robot mode).
pub fn print_sync_status_json(result: &SyncStatusResult) {
let last_sync = result.last_run.as_ref().map(|run| {
let duration_ms = run.finished_at.map(|f| f - run.started_at);
SyncRunJsonInfo {
id: run.id,
status: run.status.clone(),
command: run.command.clone(),
started_at: ms_to_iso(run.started_at),
completed_at: run.finished_at.map(ms_to_iso),
duration_ms,
error: run.error.clone(),
}
});
let cursors = result
.cursors
.iter()
.map(|c| CursorJsonInfo {
project: c.project_path.clone(),
resource_type: c.resource_type.clone(),
updated_at_cursor: c.updated_at_cursor.filter(|&ts| ts > 0).map(ms_to_iso),
tie_breaker_id: c.tie_breaker_id,
})
.collect();
let output = SyncStatusJsonOutput {
ok: true,
data: SyncStatusJsonData {
last_sync,
cursors,
summary: SummaryJsonInfo {
issues: result.summary.issue_count,
discussions: result.summary.discussion_count,
notes: result.summary.note_count - result.summary.system_note_count,
system_notes: result.summary.system_note_count,
},
},
};
println!("{}", serde_json::to_string(&output).unwrap());
}
/// Print sync status result.
pub fn print_sync_status(result: &SyncStatusResult) {
// Last Sync section


@@ -3,6 +3,7 @@
pub mod commands;
use clap::{Parser, Subcommand};
use std::io::IsTerminal;
/// GitLab Inbox - Unified notification management
#[derive(Parser)]
@@ -13,11 +14,23 @@ pub struct Cli {
#[arg(short, long, global = true)]
pub config: Option<String>,
/// Machine-readable JSON output (auto-enabled when piped)
#[arg(long, global = true, env = "GI_ROBOT")]
pub robot: bool,
#[command(subcommand)]
pub command: Commands,
}
impl Cli {
/// Check if robot mode is active (explicit flag, env var, or non-TTY stdout)
pub fn is_robot_mode(&self) -> bool {
self.robot || !std::io::stdout().is_terminal()
}
}
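The auto-detection rule above can be sketched with std's `IsTerminal` trait (stable since Rust 1.70). Here `is_robot_mode` is a free-function sketch of the same rule, not the crate's method: the explicit flag (or `GI_ROBOT` env, wired through clap) wins, and otherwise JSON output is enabled whenever stdout is piped rather than attached to a terminal.

```rust
use std::io::IsTerminal;

/// Robot mode is on when requested explicitly, or when stdout is not a TTY
/// (e.g. `gi list mrs | jq ...` gets JSON automatically).
fn is_robot_mode(robot_flag: bool) -> bool {
    robot_flag || !std::io::stdout().is_terminal()
}
```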
#[derive(Subcommand)]
#[allow(clippy::large_enum_variant)]
pub enum Commands {
/// Initialize configuration and database
Init {
@@ -62,7 +75,7 @@ pub enum Commands {
/// Ingest data from GitLab
Ingest {
/// Resource type to ingest
-#[arg(long, value_parser = ["issues", "merge_requests"])]
+#[arg(long, value_parser = ["issues", "mrs"])]
r#type: String,
/// Filter to single project
@@ -92,8 +105,8 @@ pub enum Commands {
#[arg(long)]
project: Option<String>,
-/// Filter by state
-#[arg(long, value_parser = ["opened", "closed", "all"])]
+/// Filter by state (opened|closed|all for issues; opened|merged|closed|locked|all for MRs)
+#[arg(long)]
state: Option<String>,
/// Filter by author username
@@ -108,7 +121,7 @@ pub enum Commands {
#[arg(long)]
label: Option<Vec<String>>,
-/// Filter by milestone title
+/// Filter by milestone title (issues only)
#[arg(long)]
milestone: Option<String>,
@@ -116,11 +129,11 @@ pub enum Commands {
#[arg(long)]
since: Option<String>,
-/// Filter by due date (before this date, YYYY-MM-DD)
+/// Filter by due date (before this date, YYYY-MM-DD) (issues only)
#[arg(long)]
due_before: Option<String>,
-/// Show only issues with a due date
+/// Show only issues with a due date (issues only)
#[arg(long)]
has_due_date: bool,
@@ -132,13 +145,33 @@ pub enum Commands {
#[arg(long, value_parser = ["desc", "asc"], default_value = "desc")]
order: String,
-/// Open first matching issue in browser
+/// Open first matching item in browser
#[arg(long)]
open: bool,
/// Output as JSON
#[arg(long)]
json: bool,
/// Show only draft MRs (MRs only)
#[arg(long, conflicts_with = "no_draft")]
draft: bool,
/// Exclude draft MRs (MRs only)
#[arg(long, conflicts_with = "draft")]
no_draft: bool,
/// Filter by reviewer username (MRs only)
#[arg(long)]
reviewer: Option<String>,
/// Filter by target branch (MRs only)
#[arg(long)]
target_branch: Option<String>,
/// Filter by source branch (MRs only)
#[arg(long)]
source_branch: Option<String>,
},
/// Count entities in local database
@@ -164,5 +197,9 @@ pub enum Commands {
/// Filter by project path (required if iid is ambiguous)
#[arg(long)]
project: Option<String>,
/// Output as JSON
#[arg(long)]
json: bool,
},
}


@@ -15,8 +15,18 @@ const MIGRATIONS: &[(&str, &str)] = &[
("001", include_str!("../../migrations/001_initial.sql")),
("002", include_str!("../../migrations/002_issues.sql")),
("003", include_str!("../../migrations/003_indexes.sql")),
-("004", include_str!("../../migrations/004_discussions_payload.sql")),
-("005", include_str!("../../migrations/005_assignees_milestone_duedate.sql")),
+(
+"004",
+include_str!("../../migrations/004_discussions_payload.sql"),
+),
+(
+"005",
+include_str!("../../migrations/005_assignees_milestone_duedate.sql"),
+),
+(
+"006",
+include_str!("../../migrations/006_merge_requests.sql"),
+),
];
/// Create a database connection with production-grade pragmas.


@@ -2,6 +2,7 @@
//!
//! Uses thiserror for ergonomic error definitions with structured error codes.
use serde::Serialize;
use thiserror::Error;
/// Error codes for programmatic error handling.
@@ -43,6 +44,27 @@ impl std::fmt::Display for ErrorCode {
}
}
impl ErrorCode {
/// Get the exit code for this error (for robot mode).
pub fn exit_code(&self) -> i32 {
match self {
Self::InternalError => 1,
Self::ConfigNotFound => 2,
Self::ConfigInvalid => 3,
Self::TokenNotSet => 4,
Self::GitLabAuthFailed => 5,
Self::GitLabNotFound => 6,
Self::GitLabRateLimited => 7,
Self::GitLabNetworkError => 8,
Self::DatabaseLocked => 9,
Self::DatabaseError => 10,
Self::MigrationFailed => 11,
Self::IoError => 12,
Self::TransformError => 13,
}
}
}
/// Main error type for gitlab-inbox.
#[derive(Error, Debug)]
pub enum GiError {
@@ -132,6 +154,63 @@ impl GiError {
Self::Other(_) => ErrorCode::InternalError,
}
}
/// Get a suggestion for how to fix this error.
pub fn suggestion(&self) -> Option<&'static str> {
match self {
Self::ConfigNotFound { .. } => Some("Run 'gi init' to create configuration"),
Self::ConfigInvalid { .. } => Some("Check config file syntax or run 'gi init' to recreate"),
Self::GitLabAuthFailed => Some("Verify token has read_api scope and is not expired"),
Self::GitLabNotFound { .. } => Some("Check the resource path exists and you have access"),
Self::GitLabRateLimited { .. } => Some("Wait and retry, or reduce request frequency"),
Self::GitLabNetworkError { .. } => Some("Check network connection and GitLab URL"),
Self::DatabaseLocked { .. } => Some("Wait for other sync to complete or use --force"),
Self::MigrationFailed { .. } => Some("Check database file permissions or reset with 'gi reset'"),
Self::TokenNotSet { .. } => Some("Export the token environment variable"),
Self::Database(_) => Some("Check database file permissions or reset with 'gi reset'"),
Self::Http(_) => Some("Check network connection"),
Self::NotFound(_) => Some("Verify the entity exists using 'gi list'"),
Self::Ambiguous(_) => Some("Use --project flag to disambiguate"),
_ => None,
}
}
/// Get the exit code for this error.
pub fn exit_code(&self) -> i32 {
self.code().exit_code()
}
/// Convert to robot-mode JSON error output.
pub fn to_robot_error(&self) -> RobotError {
RobotError {
code: self.code().to_string(),
message: self.to_string(),
suggestion: self.suggestion().map(String::from),
}
}
}
/// Structured error for robot mode JSON output.
#[derive(Debug, Serialize)]
pub struct RobotError {
pub code: String,
pub message: String,
#[serde(skip_serializing_if = "Option::is_none")]
pub suggestion: Option<String>,
}
/// Wrapper for robot mode error output.
#[derive(Debug, Serialize)]
pub struct RobotErrorOutput {
pub error: RobotError,
}
impl From<&GiError> for RobotErrorOutput {
fn from(e: &GiError) -> Self {
Self {
error: e.to_robot_error(),
}
}
}
pub type Result<T> = std::result::Result<T, GiError>;


@@ -42,10 +42,7 @@ pub struct AppLock {
impl AppLock {
/// Create a new app lock instance.
pub fn new(conn: Connection, options: LockOptions) -> Self {
-let db_path = conn
-.path()
-.map(PathBuf::from)
-.unwrap_or_default();
+let db_path = conn.path().map(PathBuf::from).unwrap_or_default();
Self {
conn,
@@ -73,7 +70,9 @@ impl AppLock {
let now = now_ms();
// Use IMMEDIATE transaction to prevent race conditions
-let tx = self.conn.transaction_with_behavior(TransactionBehavior::Immediate)?;
+let tx = self
+.conn
+.transaction_with_behavior(TransactionBehavior::Immediate)?;
// Check for existing lock within the transaction
let existing: Option<(String, i64, i64)> = tx
@@ -176,9 +175,21 @@ impl AppLock {
}
};
-loop {
-thread::sleep(interval);
+// Poll frequently for early exit, but only update heartbeat at full interval
+const POLL_INTERVAL: Duration = Duration::from_millis(100);
+loop {
// Sleep in small increments, checking released flag frequently
let mut elapsed = Duration::ZERO;
while elapsed < interval {
thread::sleep(POLL_INTERVAL);
elapsed += POLL_INTERVAL;
if released.load(Ordering::SeqCst) {
return;
}
}
// Check once more after full interval elapsed
if released.load(Ordering::SeqCst) {
break;
}


@@ -12,7 +12,9 @@ use tokio::sync::Mutex;
use tokio::time::sleep;
use tracing::debug;
-use super::types::{GitLabDiscussion, GitLabIssue, GitLabProject, GitLabUser, GitLabVersion};
+use super::types::{
+GitLabDiscussion, GitLabIssue, GitLabMergeRequest, GitLabProject, GitLabUser, GitLabVersion,
+};
use crate::core::error::{GiError, Result};
/// Simple rate limiter with jitter to prevent thundering herd.
@@ -53,10 +55,12 @@ fn rand_jitter() -> u64 {
let mut hasher = state.build_hasher();
// Hash the address of the state (random per call) + current time nanos for more entropy
hasher.write_usize(&state as *const _ as usize);
-hasher.write_u128(std::time::SystemTime::now()
-.duration_since(std::time::UNIX_EPOCH)
-.unwrap_or_default()
-.as_nanos());
+hasher.write_u128(
+std::time::SystemTime::now()
+.duration_since(std::time::UNIX_EPOCH)
+.unwrap_or_default()
+.as_nanos(),
+);
hasher.finish() % 50
}
@@ -305,6 +309,182 @@ impl GitLabClient {
})
}
/// Paginate through merge requests for a project.
///
/// Returns an async stream of merge requests, handling pagination automatically.
/// MRs are ordered by updated_at ascending to support cursor-based sync.
///
/// # Arguments
/// * `gitlab_project_id` - The GitLab project ID
/// * `updated_after` - Optional cursor (ms epoch) - only fetch MRs updated after this
/// * `cursor_rewind_seconds` - Rewind cursor by this many seconds to handle edge cases
pub fn paginate_merge_requests(
&self,
gitlab_project_id: i64,
updated_after: Option<i64>,
cursor_rewind_seconds: u32,
) -> Pin<Box<dyn Stream<Item = Result<GitLabMergeRequest>> + Send + '_>> {
Box::pin(stream! {
let mut page = 1u32;
let per_page = 100u32;
loop {
let page_result = self
.fetch_merge_requests_page(
gitlab_project_id,
updated_after,
cursor_rewind_seconds,
page,
per_page,
)
.await;
match page_result {
Ok(mr_page) => {
for mr in mr_page.items {
yield Ok(mr);
}
if mr_page.is_last_page {
break;
}
match mr_page.next_page {
Some(np) => page = np,
None => break,
}
}
Err(e) => {
yield Err(e);
break;
}
}
}
})
}
/// Fetch a single page of merge requests with pagination metadata.
pub async fn fetch_merge_requests_page(
&self,
gitlab_project_id: i64,
updated_after: Option<i64>,
cursor_rewind_seconds: u32,
page: u32,
per_page: u32,
) -> Result<MergeRequestPage> {
// Apply cursor rewind, clamping to 0
let rewound_cursor = updated_after.map(|ts| {
let rewind_ms = (cursor_rewind_seconds as i64) * 1000;
(ts - rewind_ms).max(0)
});
let mut params = vec![
("scope", "all".to_string()),
("state", "all".to_string()),
("order_by", "updated_at".to_string()),
("sort", "asc".to_string()),
("per_page", per_page.to_string()),
("page", page.to_string()),
];
// Add updated_after if we have a cursor
if let Some(ts_ms) = rewound_cursor
&& let Some(iso) = ms_to_iso8601(ts_ms)
{
params.push(("updated_after", iso));
}
let path = format!("/api/v4/projects/{}/merge_requests", gitlab_project_id);
let (items, headers) = self
.request_with_headers::<Vec<GitLabMergeRequest>>(&path, &params)
.await?;
// Pagination fallback chain: Link header > x-next-page > full-page heuristic
let link_next = parse_link_header_next(&headers);
let x_next_page = headers
.get("x-next-page")
.and_then(|v| v.to_str().ok())
.and_then(|s| s.parse::<u32>().ok());
let full_page = items.len() as u32 == per_page;
let (next_page, is_last_page) = match (link_next.is_some(), x_next_page, full_page) {
(true, _, _) => (Some(page + 1), false), // Link header present: continue
(false, Some(np), _) => (Some(np), false), // x-next-page present: use it
(false, None, true) => (Some(page + 1), false), // Full page, no headers: try next
(false, None, false) => (None, true), // Partial page: we're done
};
Ok(MergeRequestPage {
items,
next_page,
is_last_page,
})
}
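The fallback chain used here (RFC 8288 Link header first, then GitLab's `x-next-page` header, then the "full page implies more" heuristic) can be sketched as a pure decision function. `next_page` below is a hypothetical helper for illustration, not part of the crate:

```rust
/// Decide the next page number from pagination signals, in priority order:
/// Link header > x-next-page > full-page heuristic.
fn next_page(
    link_has_next: bool,     // Link header contained rel="next"
    x_next_page: Option<u32>, // parsed x-next-page header, if any
    page: u32,                // page just fetched
    items: usize,             // items returned on this page
    per_page: usize,          // requested page size
) -> Option<u32> {
    let full_page = items == per_page;
    match (link_has_next, x_next_page, full_page) {
        (true, _, _) => Some(page + 1),        // Link header says continue
        (false, Some(np), _) => Some(np),      // trust x-next-page directly
        (false, None, true) => Some(page + 1), // full page, no headers: try next
        (false, None, false) => None,          // partial page: done
    }
}
```

Note that when `x-next-page` is present it already names the next page, so the caller must not also increment.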
/// Paginate through discussions for a merge request.
///
/// Returns an async stream of discussions, handling pagination automatically.
pub fn paginate_mr_discussions(
&self,
gitlab_project_id: i64,
mr_iid: i64,
) -> Pin<Box<dyn Stream<Item = Result<GitLabDiscussion>> + Send + '_>> {
Box::pin(stream! {
let mut page = 1u32;
let per_page = 100u32;
loop {
let params = vec![
("per_page", per_page.to_string()),
("page", page.to_string()),
];
let path = format!(
"/api/v4/projects/{}/merge_requests/{}/discussions",
gitlab_project_id, mr_iid
);
let result = self.request_with_headers::<Vec<GitLabDiscussion>>(&path, &params).await;
match result {
Ok((discussions, headers)) => {
let is_empty = discussions.is_empty();
let full_page = discussions.len() as u32 == per_page;
for discussion in discussions {
yield Ok(discussion);
}
// Pagination fallback chain: Link header > x-next-page > full-page heuristic
let link_next = parse_link_header_next(&headers);
let x_next_page = headers
.get("x-next-page")
.and_then(|v| v.to_str().ok())
.and_then(|s| s.parse::<u32>().ok());
let next = match (link_next.is_some(), x_next_page, full_page) {
(true, _, _) => Some(page + 1), // Link header present: continue
(false, Some(np), _) if np > page => Some(np), // x-next-page names the next page directly
(false, None, true) => Some(page + 1), // Full page, no headers: try next
_ => None, // Otherwise we're done
};
match next {
Some(np) if !is_empty => page = np,
_ => break,
}
}
Err(e) => {
yield Err(e);
break;
}
}
}
})
}
/// Make an authenticated API request with query parameters, returning headers.
async fn request_with_headers<T: serde::de::DeserializeOwned>(
&self,
@@ -335,6 +515,55 @@ impl GitLabClient {
}
}
impl GitLabClient {
/// Fetch all discussions for an MR (collects paginated results).
/// This is useful for parallel prefetching where we want all data upfront.
pub async fn fetch_all_mr_discussions(
&self,
gitlab_project_id: i64,
mr_iid: i64,
) -> Result<Vec<GitLabDiscussion>> {
use futures::StreamExt;
let mut discussions = Vec::new();
let mut stream = self.paginate_mr_discussions(gitlab_project_id, mr_iid);
while let Some(result) = stream.next().await {
discussions.push(result?);
}
Ok(discussions)
}
}
/// Page result for merge request pagination.
#[derive(Debug)]
pub struct MergeRequestPage {
pub items: Vec<GitLabMergeRequest>,
pub next_page: Option<u32>,
pub is_last_page: bool,
}
/// Parse Link header to extract rel="next" URL (RFC 8288).
fn parse_link_header_next(headers: &HeaderMap) -> Option<String> {
headers
.get("link")
.and_then(|v| v.to_str().ok())
.and_then(|link_str| {
// Format: <url>; rel="next", <url>; rel="last"
for part in link_str.split(',') {
let part = part.trim();
if (part.contains("rel=\"next\"") || part.contains("rel=next"))
&& let Some(start) = part.find('<')
&& let Some(end) = part.find('>')
{
return Some(part[start + 1..end].to_string());
}
}
None
})
}
/// Convert milliseconds since epoch to ISO 8601 string.
fn ms_to_iso8601(ms: i64) -> Option<String> {
DateTime::<Utc>::from_timestamp_millis(ms)
@@ -381,4 +610,52 @@ mod tests {
// Should be 1 minute earlier
assert_eq!(rewound, 1705312740000);
}
#[test]
fn parse_link_header_extracts_next_url() {
let mut headers = HeaderMap::new();
headers.insert(
"link",
HeaderValue::from_static(
r#"<https://gitlab.example.com/api/v4/projects/1/merge_requests?page=2>; rel="next", <https://gitlab.example.com/api/v4/projects/1/merge_requests?page=5>; rel="last""#,
),
);
let result = parse_link_header_next(&headers);
assert_eq!(
result,
Some("https://gitlab.example.com/api/v4/projects/1/merge_requests?page=2".to_string())
);
}
#[test]
fn parse_link_header_handles_unquoted_rel() {
let mut headers = HeaderMap::new();
headers.insert(
"link",
HeaderValue::from_static(r#"<https://example.com/next>; rel=next"#),
);
let result = parse_link_header_next(&headers);
assert_eq!(result, Some("https://example.com/next".to_string()));
}
#[test]
fn parse_link_header_returns_none_when_no_next() {
let mut headers = HeaderMap::new();
headers.insert(
"link",
HeaderValue::from_static(r#"<https://example.com/last>; rel="last""#),
);
let result = parse_link_header_next(&headers);
assert!(result.is_none());
}
#[test]
fn parse_link_header_returns_none_when_missing() {
let headers = HeaderMap::new();
let result = parse_link_header_next(&headers);
assert!(result.is_none());
}
}

View File

@@ -46,6 +46,18 @@ pub struct NormalizedNote {
pub resolved: bool,
pub resolved_by: Option<String>,
pub resolved_at: Option<i64>,
// DiffNote position fields (CP1 - basic path/line)
pub position_old_path: Option<String>,
pub position_new_path: Option<String>,
pub position_old_line: Option<i32>,
pub position_new_line: Option<i32>,
// DiffNote extended position fields (CP2)
pub position_type: Option<String>, // "text" | "image" | "file"
pub position_line_range_start: Option<i32>, // multi-line comment start
pub position_line_range_end: Option<i32>, // multi-line comment end
pub position_base_sha: Option<String>, // Base commit SHA for diff
pub position_start_sha: Option<String>, // Start commit SHA for diff
pub position_head_sha: Option<String>, // Head commit SHA for diff
}
/// Parse ISO 8601 timestamp to milliseconds, returning None on failure.
@@ -113,6 +125,20 @@ pub fn transform_discussion(
}
}
/// Transform a GitLab discussion for MR context.
/// Convenience wrapper that uses NoteableRef::MergeRequest internally.
pub fn transform_mr_discussion(
gitlab_discussion: &GitLabDiscussion,
local_project_id: i64,
local_mr_id: i64,
) -> NormalizedDiscussion {
transform_discussion(
gitlab_discussion,
local_project_id,
NoteableRef::MergeRequest(local_mr_id),
)
}
/// Transform notes from a GitLab discussion into normalized schema.
pub fn transform_notes(
gitlab_discussion: &GitLabDiscussion,
@@ -134,6 +160,20 @@ fn transform_single_note(
position: i32,
now: i64,
) -> NormalizedNote {
// Extract DiffNote position fields if present
let (
position_old_path,
position_new_path,
position_old_line,
position_new_line,
position_type,
position_line_range_start,
position_line_range_end,
position_base_sha,
position_start_sha,
position_head_sha,
) = extract_position_fields(&note.position);
NormalizedNote {
gitlab_id: note.id,
project_id: local_project_id,
@@ -152,9 +192,138 @@ fn transform_single_note(
.resolved_at
.as_ref()
.and_then(|ts| parse_timestamp_opt(ts)),
position_old_path,
position_new_path,
position_old_line,
position_new_line,
position_type,
position_line_range_start,
position_line_range_end,
position_base_sha,
position_start_sha,
position_head_sha,
}
}
/// Extract DiffNote position fields from GitLabNotePosition.
/// Returns tuple of all position fields (all None if position is None).
#[allow(clippy::type_complexity)]
fn extract_position_fields(
position: &Option<crate::gitlab::types::GitLabNotePosition>,
) -> (
Option<String>,
Option<String>,
Option<i32>,
Option<i32>,
Option<String>,
Option<i32>,
Option<i32>,
Option<String>,
Option<String>,
Option<String>,
) {
match position {
Some(pos) => {
let line_range_start = pos.line_range.as_ref().and_then(|lr| lr.start_line());
let line_range_end = pos.line_range.as_ref().and_then(|lr| lr.end_line());
(
pos.old_path.clone(),
pos.new_path.clone(),
pos.old_line,
pos.new_line,
pos.position_type.clone(),
line_range_start,
line_range_end,
pos.base_sha.clone(),
pos.start_sha.clone(),
pos.head_sha.clone(),
)
}
None => (None, None, None, None, None, None, None, None, None, None),
}
}
/// Parse ISO 8601 timestamp to milliseconds with strict error handling.
/// Returns Err with the invalid timestamp in the error message.
fn parse_timestamp_strict(ts: &str) -> Result<i64, String> {
DateTime::parse_from_rfc3339(ts)
.map(|dt| dt.timestamp_millis())
.map_err(|_| format!("Invalid timestamp: {}", ts))
}
/// Transform notes from a GitLab discussion, extracting DiffNote positions,
/// with strict timestamp parsing: returns Err if any timestamp is invalid;
/// no silent fallback to 0.
pub fn transform_notes_with_diff_position(
gitlab_discussion: &GitLabDiscussion,
local_project_id: i64,
) -> Result<Vec<NormalizedNote>, String> {
let now = now_ms();
gitlab_discussion
.notes
.iter()
.enumerate()
.map(|(idx, note)| transform_single_note_strict(note, local_project_id, idx as i32, now))
.collect()
}
fn transform_single_note_strict(
note: &GitLabNote,
local_project_id: i64,
position: i32,
now: i64,
) -> Result<NormalizedNote, String> {
// Parse timestamps with strict error handling
let created_at = parse_timestamp_strict(&note.created_at)?;
let updated_at = parse_timestamp_strict(&note.updated_at)?;
let resolved_at = match &note.resolved_at {
Some(ts) => Some(parse_timestamp_strict(ts)?),
None => None,
};
// Extract DiffNote position fields if present
let (
position_old_path,
position_new_path,
position_old_line,
position_new_line,
position_type,
position_line_range_start,
position_line_range_end,
position_base_sha,
position_start_sha,
position_head_sha,
) = extract_position_fields(&note.position);
Ok(NormalizedNote {
gitlab_id: note.id,
project_id: local_project_id,
note_type: note.note_type.clone(),
is_system: note.system,
author_username: note.author.username.clone(),
body: note.body.clone(),
created_at,
updated_at,
last_seen_at: now,
position,
resolvable: note.resolvable,
resolved: note.resolved,
resolved_by: note.resolved_by.as_ref().map(|a| a.username.clone()),
resolved_at,
position_old_path,
position_new_path,
position_old_line,
position_new_line,
position_type,
position_line_range_start,
position_line_range_end,
position_base_sha,
position_start_sha,
position_head_sha,
})
}
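The `resolved_at` handling above (match on `Some`, apply the fallible parse, rewrap) is the hand-rolled form of `Option::map` followed by `transpose`. A std-only illustration with a stand-in parser; `parse` and `resolved_at_of` are hypothetical names, not crate functions:

```rust
/// Stand-in for a fallible timestamp parser.
fn parse(ts: &str) -> Result<i64, String> {
    ts.parse::<i64>()
        .map_err(|_| format!("Invalid timestamp: {}", ts))
}

/// Turn Option<&str> into Result<Option<i64>, String>:
/// equivalent to `match opt { Some(ts) => Some(parse(ts)?), None => None }`.
fn resolved_at_of(opt: Option<&str>) -> Result<Option<i64>, String> {
    opt.map(parse).transpose()
}
```

`transpose` flips `Option<Result<T, E>>` into `Result<Option<T>, E>`, so a missing value stays `Ok(None)` while a present-but-invalid value propagates the error.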
#[cfg(test)]
mod tests {
use super::*;

View File

@@ -24,8 +24,8 @@ pub struct IssueRow {
pub created_at: i64, // ms epoch UTC
pub updated_at: i64, // ms epoch UTC
pub web_url: String,
pub due_date: Option<String>, // YYYY-MM-DD
pub milestone_title: Option<String>, // Denormalized for quick display
}
/// Local schema representation of a milestone row.
@@ -62,11 +62,8 @@ pub fn transform_issue(issue: GitLabIssue) -> Result<IssueWithMetadata, Transfor
let created_at = parse_timestamp(&issue.created_at)?;
let updated_at = parse_timestamp(&issue.updated_at)?;
let assignee_usernames: Vec<String> =
issue.assignees.iter().map(|a| a.username.clone()).collect();
let milestone_title = issue.milestone.as_ref().map(|m| m.title.clone());
@@ -252,7 +249,10 @@ mod tests {
assert_eq!(milestone.description, Some("First release".to_string()));
assert_eq!(milestone.state, Some("active".to_string()));
assert_eq!(milestone.due_date, Some("2024-02-01".to_string()));
assert_eq!(
milestone.web_url,
Some("https://gitlab.example.com/-/milestones/5".to_string())
);
}
#[test]

View File

@@ -0,0 +1,155 @@
//! Merge request transformer: converts GitLabMergeRequest to local schema.
use chrono::DateTime;
use std::time::{SystemTime, UNIX_EPOCH};
use crate::gitlab::types::GitLabMergeRequest;
/// Get current time in milliseconds since Unix epoch.
fn now_ms() -> i64 {
SystemTime::now()
.duration_since(UNIX_EPOCH)
.expect("Time went backwards")
.as_millis() as i64
}
/// Parse ISO 8601 timestamp to milliseconds since Unix epoch.
fn iso_to_ms(ts: &str) -> Result<i64, String> {
DateTime::parse_from_rfc3339(ts)
.map(|dt| dt.timestamp_millis())
.map_err(|e| format!("Failed to parse timestamp '{}': {}", ts, e))
}
/// Parse optional ISO 8601 timestamp to optional milliseconds since Unix epoch.
fn iso_to_ms_opt(ts: &Option<String>) -> Result<Option<i64>, String> {
match ts {
Some(s) => iso_to_ms(s).map(Some),
None => Ok(None),
}
}
/// Local schema representation of a merge request row.
#[derive(Debug, Clone)]
pub struct NormalizedMergeRequest {
pub gitlab_id: i64,
pub project_id: i64,
pub iid: i64,
pub title: String,
pub description: Option<String>,
pub state: String,
pub draft: bool,
pub author_username: String,
pub source_branch: String,
pub target_branch: String,
pub head_sha: Option<String>,
pub references_short: Option<String>,
pub references_full: Option<String>,
pub detailed_merge_status: Option<String>,
pub merge_user_username: Option<String>,
pub created_at: i64, // ms epoch UTC
pub updated_at: i64, // ms epoch UTC
pub merged_at: Option<i64>, // ms epoch UTC
pub closed_at: Option<i64>, // ms epoch UTC
pub last_seen_at: i64, // ms epoch UTC
pub web_url: String,
}
/// Merge request bundled with extracted metadata.
#[derive(Debug, Clone)]
pub struct MergeRequestWithMetadata {
pub merge_request: NormalizedMergeRequest,
pub label_names: Vec<String>,
pub assignee_usernames: Vec<String>,
pub reviewer_usernames: Vec<String>,
}
/// Transform a GitLab merge request into local schema format.
///
/// # Arguments
/// * `gitlab_mr` - The GitLab MR API response
/// * `local_project_id` - The local database project ID (not GitLab's project_id)
///
/// # Returns
/// * `Ok(MergeRequestWithMetadata)` - Transformed MR with extracted metadata
/// * `Err(String)` - Error message if transformation fails (e.g., invalid timestamps)
pub fn transform_merge_request(
gitlab_mr: &GitLabMergeRequest,
local_project_id: i64,
) -> Result<MergeRequestWithMetadata, String> {
// Parse required timestamps
let created_at = iso_to_ms(&gitlab_mr.created_at)?;
let updated_at = iso_to_ms(&gitlab_mr.updated_at)?;
// Parse optional timestamps
let merged_at = iso_to_ms_opt(&gitlab_mr.merged_at)?;
let closed_at = iso_to_ms_opt(&gitlab_mr.closed_at)?;
// Draft: prefer draft, fallback to work_in_progress
let is_draft = gitlab_mr.draft || gitlab_mr.work_in_progress;
// Merge status: prefer detailed_merge_status over legacy
let detailed_merge_status = gitlab_mr
.detailed_merge_status
.clone()
.or_else(|| gitlab_mr.merge_status_legacy.clone());
// Merge user: prefer merge_user over merged_by
let merge_user_username = gitlab_mr
.merge_user
.as_ref()
.map(|u| u.username.clone())
.or_else(|| gitlab_mr.merged_by.as_ref().map(|u| u.username.clone()));
// References extraction
let (references_short, references_full) = gitlab_mr
.references
.as_ref()
.map(|r| (Some(r.short.clone()), Some(r.full.clone())))
.unwrap_or((None, None));
// Head SHA
let head_sha = gitlab_mr.sha.clone();
// Extract assignee usernames
let assignee_usernames: Vec<String> = gitlab_mr
.assignees
.iter()
.map(|a| a.username.clone())
.collect();
// Extract reviewer usernames
let reviewer_usernames: Vec<String> = gitlab_mr
.reviewers
.iter()
.map(|r| r.username.clone())
.collect();
Ok(MergeRequestWithMetadata {
merge_request: NormalizedMergeRequest {
gitlab_id: gitlab_mr.id,
project_id: local_project_id,
iid: gitlab_mr.iid,
title: gitlab_mr.title.clone(),
description: gitlab_mr.description.clone(),
state: gitlab_mr.state.clone(),
draft: is_draft,
author_username: gitlab_mr.author.username.clone(),
source_branch: gitlab_mr.source_branch.clone(),
target_branch: gitlab_mr.target_branch.clone(),
head_sha,
references_short,
references_full,
detailed_merge_status,
merge_user_username,
created_at,
updated_at,
merged_at,
closed_at,
last_seen_at: now_ms(),
web_url: gitlab_mr.web_url.clone(),
},
label_names: gitlab_mr.labels.clone(),
assignee_usernames,
reviewer_usernames,
})
}
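The transform leans on one recurring pattern: prefer the modern API field and fall back to its deprecated twin (`detailed_merge_status` vs. `merge_status`, `merge_user` vs. `merged_by`, `draft` vs. `work_in_progress`). A std-only sketch of that pattern; the function names are illustrative:

```rust
/// Prefer the modern field, fall back to the deprecated one
/// (mirrors detailed_merge_status vs. merge_status and
/// merge_user vs. merged_by).
fn prefer_modern(modern: Option<String>, legacy: Option<String>) -> Option<String> {
    modern.or(legacy)
}

/// An MR counts as a draft if either the modern `draft` flag or the
/// deprecated `work_in_progress` flag is set.
fn is_draft(draft: bool, work_in_progress: bool) -> bool {
    draft || work_in_progress
}
```

Keeping the fallback in the transformer means the local schema only ever stores the resolved value, so downstream queries never have to know which GitLab version produced the payload.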

View File

@@ -2,6 +2,13 @@
pub mod discussion;
pub mod issue;
pub mod merge_request;
pub use discussion::{
NormalizedDiscussion, NormalizedNote, NoteableRef, transform_discussion,
transform_mr_discussion, transform_notes, transform_notes_with_diff_position,
};
pub use issue::{IssueRow, IssueWithMetadata, MilestoneRow, transform_issue};
pub use merge_request::{
MergeRequestWithMetadata, NormalizedMergeRequest, transform_merge_request,
};

View File

@@ -140,4 +140,120 @@ pub struct GitLabNotePosition {
pub new_path: Option<String>,
pub old_line: Option<i32>,
pub new_line: Option<i32>,
/// Position type: "text", "image", or "file".
pub position_type: Option<String>,
/// Line range for multi-line comments (GitLab 13.6+).
pub line_range: Option<GitLabLineRange>,
/// Base commit SHA for the diff.
pub base_sha: Option<String>,
/// Start commit SHA for the diff.
pub start_sha: Option<String>,
/// Head commit SHA for the diff.
pub head_sha: Option<String>,
}
/// Line range for multi-line DiffNote comments.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GitLabLineRange {
pub start: GitLabLineRangePoint,
pub end: GitLabLineRangePoint,
}
/// A point in a line range (start or end).
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GitLabLineRangePoint {
pub line_code: Option<String>,
/// "old" or "new".
#[serde(rename = "type")]
pub line_type: Option<String>,
pub old_line: Option<i32>,
pub new_line: Option<i32>,
}
impl GitLabLineRange {
/// Get the start line number (new_line preferred, falls back to old_line).
pub fn start_line(&self) -> Option<i32> {
self.start.new_line.or(self.start.old_line)
}
/// Get the end line number (new_line preferred, falls back to old_line).
pub fn end_line(&self) -> Option<i32> {
self.end.new_line.or(self.end.old_line)
}
}
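The `start_line`/`end_line` fallback (prefer `new_line`, otherwise `old_line`) can be exercised with a minimal std-only mirror of the endpoint type; `Point` and `resolve` are illustrative stand-ins for `GitLabLineRangePoint` and the helpers above:

```rust
/// Minimal stand-in for a line-range endpoint: a comment anchored on the
/// new side of the diff has `new_line`; one on a removed line has only
/// `old_line`.
struct Point {
    old_line: Option<i32>,
    new_line: Option<i32>,
}

/// Resolve an endpoint to a line number, preferring the new-file line.
fn resolve(p: &Point) -> Option<i32> {
    p.new_line.or(p.old_line)
}
```

This matters for multi-line DiffNotes on mixed hunks, where one endpoint may sit on an added line (`new_line` only) and the other on a removed line (`old_line` only).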
// === Checkpoint 2: Merge Request types ===
/// GitLab MR references (short and full reference strings).
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GitLabReferences {
/// Short reference e.g. "!42".
pub short: String,
/// Full reference e.g. "group/project!42".
pub full: String,
}
/// GitLab reviewer; an approval state may be added in a future checkpoint.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GitLabReviewer {
pub id: i64,
pub username: String,
pub name: String,
}
/// GitLab Merge Request from /projects/:id/merge_requests endpoint.
/// Note: Uses non-deprecated field names where possible (detailed_merge_status, merge_user).
/// Falls back gracefully for older GitLab versions.
#[derive(Debug, Clone, Deserialize, Serialize)]
pub struct GitLabMergeRequest {
/// GitLab global ID (unique across all projects).
pub id: i64,
/// Project-scoped MR number (the number shown in the UI).
pub iid: i64,
/// The project this MR belongs to.
pub project_id: i64,
pub title: String,
pub description: Option<String>,
/// "opened" | "merged" | "closed" | "locked".
pub state: String,
/// Draft status (preferred over the deprecated work_in_progress).
#[serde(default)]
pub draft: bool,
/// Deprecated; fallback for older instances.
#[serde(default)]
pub work_in_progress: bool,
pub source_branch: String,
pub target_branch: String,
/// Current commit SHA at head of source branch (CP3-ready).
pub sha: Option<String>,
/// Short and full reference strings (CP3-ready).
pub references: Option<GitLabReferences>,
/// Non-deprecated merge status. Prefer over merge_status.
pub detailed_merge_status: Option<String>,
/// Deprecated merge_status field for fallback.
#[serde(alias = "merge_status")]
pub merge_status_legacy: Option<String>,
/// ISO 8601 timestamp.
pub created_at: String,
/// ISO 8601 timestamp.
pub updated_at: String,
/// ISO 8601 timestamp when merged (null if not merged).
pub merged_at: Option<String>,
/// ISO 8601 timestamp when closed (null if not closed).
pub closed_at: Option<String>,
pub author: GitLabAuthor,
/// Non-deprecated; who merged this MR.
pub merge_user: Option<GitLabAuthor>,
/// Deprecated; fallback for older instances.
pub merged_by: Option<GitLabAuthor>,
/// Array of label names.
#[serde(default)]
pub labels: Vec<String>,
/// Assignees (can be multiple).
#[serde(default)]
pub assignees: Vec<GitLabAuthor>,
/// Reviewers (MR-specific).
#[serde(default)]
pub reviewers: Vec<GitLabReviewer>,
pub web_url: String,
}

View File

@@ -192,21 +192,24 @@ async fn ingest_discussions_for_issue(
// Update discussions_synced_for_updated_at on the issue
update_issue_sync_timestamp(conn, issue.local_issue_id, issue.updated_at)?;
} else if pagination_error.is_none()
&& !received_first_response
&& seen_discussion_ids.is_empty()
{
// Stream was empty but no error - issue genuinely has no discussions
// This is safe to remove stale discussions (if any exist from before)
let removed = remove_stale_discussions(conn, issue.local_issue_id, &seen_discussion_ids)?;
result.stale_discussions_removed = removed;
update_issue_sync_timestamp(conn, issue.local_issue_id, issue.updated_at)?;
} else if let Some(err) = pagination_error {
warn!(
issue_iid = issue.iid,
discussions_seen = seen_discussion_ids.len(),
"Skipping stale removal due to pagination error"
);
// Return the error to signal incomplete sync
return Err(err);
}
Ok(result)
@@ -320,7 +323,8 @@ fn remove_stale_discussions(
placeholders.join(", ")
);
let params: Vec<&dyn rusqlite::ToSql> =
chunk.iter().map(|s| s as &dyn rusqlite::ToSql).collect();
conn.execute(&sql, params.as_slice())?;
}

View File

@@ -148,12 +148,11 @@ fn passes_cursor_filter_with_ts(gitlab_id: i64, issue_ts: i64, cursor: &SyncCurs
return false;
}
if issue_ts == cursor_ts
&& let Some(cursor_id) = cursor.tie_breaker_id
&& gitlab_id <= cursor_id
{
return false;
}
true
@@ -219,6 +218,7 @@ fn process_single_issue(
}
/// Inner function that performs all DB operations within a transaction.
#[allow(clippy::too_many_arguments)]
fn process_issue_in_transaction(
tx: &Transaction<'_>,
config: &Config,
@@ -366,7 +366,11 @@ fn link_issue_label_tx(tx: &Transaction<'_>, issue_id: i64, label_id: i64) -> Re
}
/// Upsert a milestone within a transaction, returning its local ID.
fn upsert_milestone_tx(tx: &Transaction<'_>, project_id: i64, milestone: &MilestoneRow) -> Result<i64> {
fn upsert_milestone_tx(
tx: &Transaction<'_>,
project_id: i64,
milestone: &MilestoneRow,
) -> Result<i64> {
tx.execute(
"INSERT INTO milestones (gitlab_id, project_id, iid, title, description, state, due_date, web_url)
VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8)

View File

@@ -0,0 +1,515 @@
//! Merge request ingestion with cursor-based incremental sync.
//!
//! Fetches merge requests from GitLab and stores them locally with:
//! - Cursor-based pagination for incremental sync
//! - Page-boundary cursor updates for crash recovery
//! - Raw payload storage with deduplication
//! - Label/assignee/reviewer extraction with clear-and-relink pattern
//! - Tracking of MRs needing discussion sync
use std::ops::Deref;
use rusqlite::{Connection, Transaction, params};
use tracing::{debug, info, warn};
use crate::Config;
use crate::core::error::{GiError, Result};
use crate::core::payloads::{StorePayloadOptions, store_payload};
use crate::core::time::now_ms;
use crate::gitlab::GitLabClient;
use crate::gitlab::transformers::merge_request::transform_merge_request;
use crate::gitlab::types::GitLabMergeRequest;
/// Result of merge request ingestion.
#[derive(Debug, Default)]
pub struct IngestMergeRequestsResult {
pub fetched: usize,
pub upserted: usize,
pub labels_created: usize,
pub assignees_linked: usize,
pub reviewers_linked: usize,
}
/// MR that needs discussion sync.
#[derive(Debug, Clone)]
pub struct MrForDiscussionSync {
pub local_mr_id: i64,
pub iid: i64,
pub updated_at: i64, // ms epoch
}
/// Cursor state for incremental sync.
#[derive(Debug, Default)]
struct SyncCursor {
updated_at_cursor: Option<i64>,
tie_breaker_id: Option<i64>,
}
/// Ingest merge requests for a project.
pub async fn ingest_merge_requests(
conn: &Connection,
client: &GitLabClient,
config: &Config,
project_id: i64, // Local DB project ID
gitlab_project_id: i64, // GitLab project ID
full_sync: bool, // Reset cursor if true
) -> Result<IngestMergeRequestsResult> {
let mut result = IngestMergeRequestsResult::default();
// Handle full sync - reset cursor and discussion watermarks
if full_sync {
reset_sync_cursor(conn, project_id)?;
reset_discussion_watermarks(conn, project_id)?;
info!("Full sync: cursor and discussion watermarks reset");
}
// 1. Get current cursor
let cursor = get_sync_cursor(conn, project_id)?;
debug!(?cursor, "Starting MR ingestion with cursor");
// 2. Fetch MRs page by page with cursor rewind
let mut page = 1u32;
let per_page = 100u32;
loop {
let page_result = client
.fetch_merge_requests_page(
gitlab_project_id,
cursor.updated_at_cursor,
config.sync.cursor_rewind_seconds,
page,
per_page,
)
.await?;
let mut last_updated_at: Option<i64> = None;
let mut last_gitlab_id: Option<i64> = None;
// 3. Process each MR
for mr in &page_result.items {
result.fetched += 1;
// Parse timestamp early
let mr_updated_at = match parse_timestamp(&mr.updated_at) {
Ok(ts) => ts,
Err(e) => {
warn!(
gitlab_id = mr.id,
error = %e,
"Skipping MR with invalid timestamp"
);
continue;
}
};
// Apply local cursor filter (skip already-processed due to rewind overlap)
if !passes_cursor_filter_with_ts(mr.id, mr_updated_at, &cursor) {
debug!(gitlab_id = mr.id, "Skipping already-processed MR");
continue;
}
// Transform and store
let mr_result = process_single_mr(conn, config, project_id, mr)?;
result.upserted += 1;
result.labels_created += mr_result.labels_created;
result.assignees_linked += mr_result.assignees_linked;
result.reviewers_linked += mr_result.reviewers_linked;
// Track cursor position
last_updated_at = Some(mr_updated_at);
last_gitlab_id = Some(mr.id);
}
// 4. Page-boundary cursor update
if let (Some(ts), Some(id)) = (last_updated_at, last_gitlab_id) {
update_sync_cursor(conn, project_id, ts, id)?;
debug!(page, "Page-boundary cursor update");
}
// 5. Check for more pages
if page_result.is_last_page {
break;
}
match page_result.next_page {
Some(np) => page = np,
None => break,
}
}
info!(
fetched = result.fetched,
upserted = result.upserted,
labels_created = result.labels_created,
assignees_linked = result.assignees_linked,
reviewers_linked = result.reviewers_linked,
"MR ingestion complete"
);
Ok(result)
}
/// Result of processing a single MR.
struct ProcessMrResult {
labels_created: usize,
assignees_linked: usize,
reviewers_linked: usize,
}
/// Process a single MR: store payload, upsert MR, handle labels/assignees/reviewers.
/// All operations are wrapped in a transaction for atomicity.
fn process_single_mr(
conn: &Connection,
config: &Config,
project_id: i64,
mr: &GitLabMergeRequest,
) -> Result<ProcessMrResult> {
// Transform MR first (outside transaction - no DB access)
let payload_json = serde_json::to_value(mr)?;
let transformed = transform_merge_request(mr, project_id)
.map_err(|e| GiError::Other(format!("MR transform failed: {}", e)))?;
// Wrap all DB operations in a transaction for atomicity
let tx = conn.unchecked_transaction()?;
let result =
process_mr_in_transaction(&tx, config, project_id, mr, &payload_json, &transformed)?;
tx.commit()?;
Ok(result)
}
/// Inner function that performs all DB operations within a transaction.
fn process_mr_in_transaction(
tx: &Transaction<'_>,
config: &Config,
project_id: i64,
mr: &GitLabMergeRequest,
payload_json: &serde_json::Value,
transformed: &crate::gitlab::transformers::merge_request::MergeRequestWithMetadata,
) -> Result<ProcessMrResult> {
let mut labels_created = 0;
let mr_row = &transformed.merge_request;
let now = now_ms();
// Store raw payload
let payload_id = store_payload(
tx.deref(),
StorePayloadOptions {
project_id: Some(project_id),
resource_type: "merge_request",
gitlab_id: &mr.id.to_string(),
payload: payload_json,
compress: config.storage.compress_raw_payloads,
},
)?;
// Upsert merge request
tx.execute(
"INSERT INTO merge_requests (
gitlab_id, project_id, iid, title, description, state, draft,
author_username, source_branch, target_branch, head_sha,
references_short, references_full, detailed_merge_status,
merge_user_username, created_at, updated_at, merged_at, closed_at,
last_seen_at, web_url, raw_payload_id
) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14, ?15, ?16, ?17, ?18, ?19, ?20, ?21, ?22)
ON CONFLICT(gitlab_id) DO UPDATE SET
title = excluded.title,
description = excluded.description,
state = excluded.state,
draft = excluded.draft,
author_username = excluded.author_username,
source_branch = excluded.source_branch,
target_branch = excluded.target_branch,
head_sha = excluded.head_sha,
references_short = excluded.references_short,
references_full = excluded.references_full,
detailed_merge_status = excluded.detailed_merge_status,
merge_user_username = excluded.merge_user_username,
updated_at = excluded.updated_at,
merged_at = excluded.merged_at,
closed_at = excluded.closed_at,
last_seen_at = excluded.last_seen_at,
web_url = excluded.web_url,
raw_payload_id = excluded.raw_payload_id",
params![
mr_row.gitlab_id,
project_id,
mr_row.iid,
&mr_row.title,
&mr_row.description,
&mr_row.state,
mr_row.draft,
&mr_row.author_username,
&mr_row.source_branch,
&mr_row.target_branch,
&mr_row.head_sha,
&mr_row.references_short,
&mr_row.references_full,
&mr_row.detailed_merge_status,
&mr_row.merge_user_username,
mr_row.created_at,
mr_row.updated_at,
mr_row.merged_at,
mr_row.closed_at,
now,
&mr_row.web_url,
payload_id,
],
)?;
// Get local MR ID
let local_mr_id: i64 = tx.query_row(
"SELECT id FROM merge_requests WHERE project_id = ? AND iid = ?",
(project_id, mr_row.iid),
|row| row.get(0),
)?;
// Clear-and-relink labels
tx.execute(
"DELETE FROM mr_labels WHERE merge_request_id = ?",
[local_mr_id],
)?;
for label_name in &transformed.label_names {
let label_id = upsert_label_tx(tx, project_id, label_name, &mut labels_created)?;
tx.execute(
"INSERT OR IGNORE INTO mr_labels (merge_request_id, label_id) VALUES (?, ?)",
(local_mr_id, label_id),
)?;
}
// Clear-and-relink assignees
tx.execute(
"DELETE FROM mr_assignees WHERE merge_request_id = ?",
[local_mr_id],
)?;
let assignees_linked = transformed.assignee_usernames.len();
for username in &transformed.assignee_usernames {
tx.execute(
"INSERT OR IGNORE INTO mr_assignees (merge_request_id, username) VALUES (?, ?)",
(local_mr_id, username),
)?;
}
// Clear-and-relink reviewers
tx.execute(
"DELETE FROM mr_reviewers WHERE merge_request_id = ?",
[local_mr_id],
)?;
let reviewers_linked = transformed.reviewer_usernames.len();
for username in &transformed.reviewer_usernames {
tx.execute(
"INSERT OR IGNORE INTO mr_reviewers (merge_request_id, username) VALUES (?, ?)",
(local_mr_id, username),
)?;
}
Ok(ProcessMrResult {
labels_created,
assignees_linked,
reviewers_linked,
})
}
/// Upsert a label within a transaction, returning its ID.
fn upsert_label_tx(
tx: &Transaction<'_>,
project_id: i64,
name: &str,
created_count: &mut usize,
) -> Result<i64> {
// Try to get existing
let existing: Option<i64> = tx
.query_row(
"SELECT id FROM labels WHERE project_id = ? AND name = ?",
(project_id, name),
|row| row.get(0),
)
.ok();
if let Some(id) = existing {
return Ok(id);
}
// Insert new
tx.execute(
"INSERT INTO labels (project_id, name) VALUES (?, ?)",
(project_id, name),
)?;
*created_count += 1;
Ok(tx.last_insert_rowid())
}
/// Check if an MR passes the cursor filter (not already processed).
/// Takes pre-parsed timestamp to avoid redundant parsing.
fn passes_cursor_filter_with_ts(gitlab_id: i64, mr_ts: i64, cursor: &SyncCursor) -> bool {
let Some(cursor_ts) = cursor.updated_at_cursor else {
return true; // No cursor = fetch all
};
if mr_ts < cursor_ts {
return false;
}
if mr_ts == cursor_ts
&& let Some(cursor_id) = cursor.tie_breaker_id
&& gitlab_id <= cursor_id
{
return false;
}
true
}
/// Get the current sync cursor for merge requests.
fn get_sync_cursor(conn: &Connection, project_id: i64) -> Result<SyncCursor> {
let row: Option<(Option<i64>, Option<i64>)> = conn
.query_row(
"SELECT updated_at_cursor, tie_breaker_id FROM sync_cursors
WHERE project_id = ? AND resource_type = 'merge_requests'",
[project_id],
|row| Ok((row.get(0)?, row.get(1)?)),
)
.ok();
Ok(match row {
Some((updated_at, tie_breaker)) => SyncCursor {
updated_at_cursor: updated_at,
tie_breaker_id: tie_breaker,
},
None => SyncCursor::default(),
})
}
/// Update the sync cursor.
fn update_sync_cursor(
conn: &Connection,
project_id: i64,
updated_at: i64,
gitlab_id: i64,
) -> Result<()> {
conn.execute(
"INSERT INTO sync_cursors (project_id, resource_type, updated_at_cursor, tie_breaker_id)
VALUES (?1, 'merge_requests', ?2, ?3)
ON CONFLICT(project_id, resource_type) DO UPDATE SET
updated_at_cursor = excluded.updated_at_cursor,
tie_breaker_id = excluded.tie_breaker_id",
(project_id, updated_at, gitlab_id),
)?;
Ok(())
}
/// Reset the sync cursor (for full sync).
fn reset_sync_cursor(conn: &Connection, project_id: i64) -> Result<()> {
conn.execute(
"DELETE FROM sync_cursors WHERE project_id = ? AND resource_type = 'merge_requests'",
[project_id],
)?;
Ok(())
}
/// Reset discussion watermarks for all MRs in project (for full sync).
fn reset_discussion_watermarks(conn: &Connection, project_id: i64) -> Result<()> {
conn.execute(
"UPDATE merge_requests
SET discussions_synced_for_updated_at = NULL,
discussions_sync_attempts = 0,
discussions_sync_last_error = NULL
WHERE project_id = ?",
[project_id],
)?;
Ok(())
}
/// Get MRs that need discussion sync (updated_at > discussions_synced_for_updated_at).
pub fn get_mrs_needing_discussion_sync(
conn: &Connection,
project_id: i64,
) -> Result<Vec<MrForDiscussionSync>> {
let mut stmt = conn.prepare(
"SELECT id, iid, updated_at FROM merge_requests
WHERE project_id = ?
AND updated_at > COALESCE(discussions_synced_for_updated_at, 0)",
)?;
let mrs: std::result::Result<Vec<_>, _> = stmt
.query_map([project_id], |row| {
Ok(MrForDiscussionSync {
local_mr_id: row.get(0)?,
iid: row.get(1)?,
updated_at: row.get(2)?,
})
})?
.collect();
Ok(mrs?)
}
/// Parse ISO 8601 timestamp to milliseconds.
fn parse_timestamp(ts: &str) -> Result<i64> {
chrono::DateTime::parse_from_rfc3339(ts)
.map(|dt| dt.timestamp_millis())
.map_err(|e| GiError::Other(format!("Failed to parse timestamp '{}': {}", ts, e)))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn result_default_has_zero_counts() {
let result = IngestMergeRequestsResult::default();
assert_eq!(result.fetched, 0);
assert_eq!(result.upserted, 0);
assert_eq!(result.labels_created, 0);
assert_eq!(result.assignees_linked, 0);
assert_eq!(result.reviewers_linked, 0);
}
#[test]
fn cursor_filter_allows_newer_mrs() {
let cursor = SyncCursor {
updated_at_cursor: Some(1705312800000), // 2024-01-15T10:00:00Z
tie_breaker_id: Some(100),
};
// MR with later timestamp passes
let later_ts = 1705399200000; // 2024-01-16T10:00:00Z
assert!(passes_cursor_filter_with_ts(101, later_ts, &cursor));
}
#[test]
fn cursor_filter_blocks_older_mrs() {
let cursor = SyncCursor {
updated_at_cursor: Some(1705312800000),
tie_breaker_id: Some(100),
};
// MR with earlier timestamp blocked
let earlier_ts = 1705226400000; // 2024-01-14T10:00:00Z
assert!(!passes_cursor_filter_with_ts(99, earlier_ts, &cursor));
}
#[test]
fn cursor_filter_uses_tie_breaker_for_same_timestamp() {
let cursor = SyncCursor {
updated_at_cursor: Some(1705312800000),
tie_breaker_id: Some(100),
};
// Same timestamp, higher ID passes
assert!(passes_cursor_filter_with_ts(101, 1705312800000, &cursor));
// Same timestamp, same ID blocked
assert!(!passes_cursor_filter_with_ts(100, 1705312800000, &cursor));
// Same timestamp, lower ID blocked
assert!(!passes_cursor_filter_with_ts(99, 1705312800000, &cursor));
}
#[test]
fn cursor_filter_allows_all_when_no_cursor() {
let cursor = SyncCursor::default();
let old_ts = 1577836800000; // 2020-01-01T00:00:00Z
assert!(passes_cursor_filter_with_ts(1, old_ts, &cursor));
}
}
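
The cursor filter exercised by the tests above is a strict `(updated_at, id)` tuple comparison. A std-only sketch of that comparison, using a hypothetical `passes_cursor` helper rather than the crate's actual `passes_cursor_filter_with_ts`:

```rust
// Hypothetical std-only sketch of the (updated_at, id) cursor comparison:
// an MR passes if its (timestamp, id) tuple is strictly greater than the
// cursor's, so equal timestamps fall back to the ID tie-breaker.
fn passes_cursor(id: i64, updated_at: i64, cursor: Option<(i64, i64)>) -> bool {
    match cursor {
        None => true, // no cursor yet: full sync, everything passes
        Some((cursor_ts, cursor_id)) => (updated_at, id) > (cursor_ts, cursor_id),
    }
}

fn main() {
    let cursor = Some((1_705_312_800_000, 100)); // 2024-01-15T10:00:00Z, id 100
    assert!(passes_cursor(101, 1_705_399_200_000, cursor)); // newer timestamp
    assert!(passes_cursor(101, 1_705_312_800_000, cursor)); // same ts, higher id
    assert!(!passes_cursor(100, 1_705_312_800_000, cursor)); // same ts, same id
    assert!(!passes_cursor(99, 1_705_226_400_000, cursor)); // older timestamp
    assert!(passes_cursor(1, 0, None)); // no cursor: everything passes
    println!("cursor filter ok");
}
```

Rust's lexicographic tuple ordering makes the tie-breaker fall out of a single comparison, which is why the tests cover the same-timestamp cases explicitly.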


@@ -5,11 +5,19 @@
pub mod discussions;
pub mod issues;
pub mod merge_requests;
pub mod mr_discussions;
pub mod orchestrator;
pub use discussions::{IngestDiscussionsResult, ingest_issue_discussions};
pub use issues::{IngestIssuesResult, IssueForDiscussionSync, ingest_issues};
pub use merge_requests::{
IngestMergeRequestsResult, MrForDiscussionSync, get_mrs_needing_discussion_sync,
ingest_merge_requests,
};
pub use mr_discussions::{IngestMrDiscussionsResult, ingest_mr_discussions};
pub use orchestrator::{
IngestMrProjectResult, IngestProjectResult, ProgressCallback, ProgressEvent,
ingest_project_issues, ingest_project_issues_with_progress, ingest_project_merge_requests,
ingest_project_merge_requests_with_progress,
};


@@ -0,0 +1,673 @@
//! MR Discussion ingestion with atomicity guarantees.
//!
//! Critical requirements:
//! - Parse notes BEFORE any destructive DB operations
//! - Watermark advanced ONLY on full pagination success
//! - Upsert + sweep pattern for data replacement
//! - Sync health telemetry for debugging failures
//!
//! Supports two modes:
//! - Streaming: fetch and write incrementally (memory efficient)
//! - Prefetch: fetch all upfront, then write (enables parallel API calls)
use futures::StreamExt;
use rusqlite::{Connection, params};
use tracing::{debug, info, warn};
use crate::Config;
use crate::core::error::Result;
use crate::core::payloads::{StorePayloadOptions, store_payload};
use crate::core::time::now_ms;
use crate::gitlab::GitLabClient;
use crate::gitlab::transformers::{
NormalizedDiscussion, NormalizedNote, transform_mr_discussion,
transform_notes_with_diff_position,
};
use crate::gitlab::types::GitLabDiscussion;
use super::merge_requests::MrForDiscussionSync;
/// Result of MR discussion ingestion for a single MR.
#[derive(Debug, Default)]
pub struct IngestMrDiscussionsResult {
pub discussions_fetched: usize,
pub discussions_upserted: usize,
pub notes_upserted: usize,
pub notes_skipped_bad_timestamp: usize,
pub diffnotes_count: usize,
pub pagination_succeeded: bool,
}
/// Prefetched discussions for an MR (ready for DB write).
/// This separates the API fetch phase from the DB write phase to enable parallelism.
#[derive(Debug)]
pub struct PrefetchedMrDiscussions {
pub mr: MrForDiscussionSync,
pub discussions: Vec<PrefetchedDiscussion>,
pub fetch_error: Option<String>,
/// True if any discussions failed to transform (skip sweep if true)
pub had_transform_errors: bool,
/// Count of notes skipped due to transform errors
pub notes_skipped_count: usize,
}
/// A single prefetched discussion with transformed data.
#[derive(Debug)]
pub struct PrefetchedDiscussion {
pub raw: GitLabDiscussion,
pub normalized: NormalizedDiscussion,
pub notes: Vec<NormalizedNote>,
}
/// Fetch discussions for an MR without writing to DB.
/// This can be called in parallel for multiple MRs.
pub async fn prefetch_mr_discussions(
client: &GitLabClient,
gitlab_project_id: i64,
local_project_id: i64,
mr: MrForDiscussionSync,
) -> PrefetchedMrDiscussions {
debug!(mr_iid = mr.iid, "Prefetching discussions for MR");
// Fetch all discussions from GitLab
let raw_discussions = match client.fetch_all_mr_discussions(gitlab_project_id, mr.iid).await {
Ok(d) => d,
Err(e) => {
return PrefetchedMrDiscussions {
mr,
discussions: Vec::new(),
fetch_error: Some(e.to_string()),
had_transform_errors: false,
notes_skipped_count: 0,
};
}
};
// Transform each discussion
let mut discussions = Vec::with_capacity(raw_discussions.len());
let mut had_transform_errors = false;
let mut notes_skipped_count = 0;
for raw in raw_discussions {
// Transform notes
let notes = match transform_notes_with_diff_position(&raw, local_project_id) {
Ok(n) => n,
Err(e) => {
warn!(
mr_iid = mr.iid,
discussion_id = %raw.id,
error = %e,
"Note transform failed during prefetch"
);
// Track the failure - don't sweep stale data if transforms failed
had_transform_errors = true;
notes_skipped_count += raw.notes.len();
continue;
}
};
// Transform discussion
let normalized = transform_mr_discussion(&raw, local_project_id, mr.local_mr_id);
discussions.push(PrefetchedDiscussion {
raw,
normalized,
notes,
});
}
PrefetchedMrDiscussions {
mr,
discussions,
fetch_error: None,
had_transform_errors,
notes_skipped_count,
}
}
/// Write prefetched discussions to DB.
/// This must be called serially (rusqlite Connection is not Send).
pub fn write_prefetched_mr_discussions(
conn: &Connection,
config: &Config,
local_project_id: i64,
prefetched: PrefetchedMrDiscussions,
) -> Result<IngestMrDiscussionsResult> {
// Sync succeeds only if no fetch errors AND no transform errors
let sync_succeeded = prefetched.fetch_error.is_none() && !prefetched.had_transform_errors;
let mut result = IngestMrDiscussionsResult {
pagination_succeeded: sync_succeeded,
notes_skipped_bad_timestamp: prefetched.notes_skipped_count,
..Default::default()
};
let mr = &prefetched.mr;
// Handle fetch errors
if let Some(error) = &prefetched.fetch_error {
warn!(mr_iid = mr.iid, error = %error, "Prefetch failed for MR");
record_sync_health_error(conn, mr.local_mr_id, error)?;
return Ok(result);
}
let run_seen_at = now_ms();
// Write each discussion
for disc in &prefetched.discussions {
result.discussions_fetched += 1;
// Count DiffNotes
result.diffnotes_count += disc
.notes
.iter()
.filter(|n| n.position_new_path.is_some() || n.position_old_path.is_some())
.count();
// Start transaction
let tx = conn.unchecked_transaction()?;
// Store raw payload
let payload_json = serde_json::to_value(&disc.raw)?;
let payload_id = Some(store_payload(
&tx,
StorePayloadOptions {
project_id: Some(local_project_id),
resource_type: "discussion",
gitlab_id: &disc.raw.id,
payload: &payload_json,
compress: config.storage.compress_raw_payloads,
},
)?);
// Upsert discussion
upsert_discussion(&tx, &disc.normalized, run_seen_at, payload_id)?;
result.discussions_upserted += 1;
// Get local discussion ID
let local_discussion_id: i64 = tx.query_row(
"SELECT id FROM discussions WHERE project_id = ? AND gitlab_discussion_id = ?",
params![local_project_id, &disc.normalized.gitlab_discussion_id],
|row| row.get(0),
)?;
// Upsert notes
for note in &disc.notes {
let should_store_payload = !note.is_system
|| note.position_new_path.is_some()
|| note.position_old_path.is_some();
let note_payload_id = if should_store_payload {
let note_data = disc.raw.notes.iter().find(|n| n.id == note.gitlab_id);
if let Some(note_data) = note_data {
let note_payload_json = serde_json::to_value(note_data)?;
Some(store_payload(
&tx,
StorePayloadOptions {
project_id: Some(local_project_id),
resource_type: "note",
gitlab_id: &note.gitlab_id.to_string(),
payload: &note_payload_json,
compress: config.storage.compress_raw_payloads,
},
)?)
} else {
None
}
} else {
None
};
upsert_note(&tx, local_discussion_id, note, run_seen_at, note_payload_id)?;
result.notes_upserted += 1;
}
tx.commit()?;
}
// Only sweep stale data and advance watermark on full success
// If any discussions failed to transform, preserve existing data
if sync_succeeded {
sweep_stale_discussions(conn, mr.local_mr_id, run_seen_at)?;
sweep_stale_notes(conn, local_project_id, mr.local_mr_id, run_seen_at)?;
mark_discussions_synced(conn, mr.local_mr_id, mr.updated_at)?;
clear_sync_health_error(conn, mr.local_mr_id)?;
debug!(mr_iid = mr.iid, "MR discussion sync complete, watermark advanced");
} else if prefetched.had_transform_errors {
warn!(
mr_iid = mr.iid,
notes_skipped = prefetched.notes_skipped_count,
"Transform errors occurred; watermark NOT advanced to preserve data"
);
}
Ok(result)
}
/// Ingest discussions for MRs that need sync.
pub async fn ingest_mr_discussions(
conn: &Connection,
client: &GitLabClient,
config: &Config,
gitlab_project_id: i64,
local_project_id: i64,
mrs: &[MrForDiscussionSync],
) -> Result<IngestMrDiscussionsResult> {
let mut total_result = IngestMrDiscussionsResult {
pagination_succeeded: true, // Start optimistic
..Default::default()
};
for mr in mrs {
let result = ingest_discussions_for_mr(
conn,
client,
config,
gitlab_project_id,
local_project_id,
mr,
)
.await?;
total_result.discussions_fetched += result.discussions_fetched;
total_result.discussions_upserted += result.discussions_upserted;
total_result.notes_upserted += result.notes_upserted;
total_result.notes_skipped_bad_timestamp += result.notes_skipped_bad_timestamp;
total_result.diffnotes_count += result.diffnotes_count;
// A pagination failure for any MR marks the overall run as failed
if !result.pagination_succeeded {
total_result.pagination_succeeded = false;
}
}
info!(
mrs_processed = mrs.len(),
discussions_fetched = total_result.discussions_fetched,
discussions_upserted = total_result.discussions_upserted,
notes_upserted = total_result.notes_upserted,
notes_skipped = total_result.notes_skipped_bad_timestamp,
diffnotes = total_result.diffnotes_count,
pagination_succeeded = total_result.pagination_succeeded,
"MR discussion ingestion complete"
);
Ok(total_result)
}
/// Ingest discussions for a single MR.
async fn ingest_discussions_for_mr(
conn: &Connection,
client: &GitLabClient,
config: &Config,
gitlab_project_id: i64,
local_project_id: i64,
mr: &MrForDiscussionSync,
) -> Result<IngestMrDiscussionsResult> {
let mut result = IngestMrDiscussionsResult {
pagination_succeeded: true,
..Default::default()
};
debug!(
mr_iid = mr.iid,
local_mr_id = mr.local_mr_id,
"Fetching discussions for MR"
);
// Record sync start time for sweep
let run_seen_at = now_ms();
// Stream discussions from GitLab
let mut discussions_stream = client.paginate_mr_discussions(gitlab_project_id, mr.iid);
// Track if we've received any response
let mut received_first_response = false;
while let Some(disc_result) = discussions_stream.next().await {
if !received_first_response {
received_first_response = true;
}
// Handle pagination errors - don't advance watermark
let gitlab_discussion = match disc_result {
Ok(d) => d,
Err(e) => {
warn!(
mr_iid = mr.iid,
error = %e,
"Error during MR discussion pagination"
);
result.pagination_succeeded = false;
record_sync_health_error(conn, mr.local_mr_id, &e.to_string())?;
break;
}
};
result.discussions_fetched += 1;
// CRITICAL: Parse notes BEFORE any destructive DB operations
let notes = match transform_notes_with_diff_position(&gitlab_discussion, local_project_id) {
Ok(notes) => notes,
Err(e) => {
warn!(
mr_iid = mr.iid,
discussion_id = %gitlab_discussion.id,
error = %e,
"Note transform failed; preserving existing notes"
);
result.notes_skipped_bad_timestamp += gitlab_discussion.notes.len();
result.pagination_succeeded = false;
continue; // Skip this discussion, preserve existing data
}
};
// Count DiffNotes
result.diffnotes_count += notes
.iter()
.filter(|n| n.position_new_path.is_some() || n.position_old_path.is_some())
.count();
// Transform discussion
let normalized_discussion =
transform_mr_discussion(&gitlab_discussion, local_project_id, mr.local_mr_id);
// Only NOW start transaction (after parse succeeded)
let tx = conn.unchecked_transaction()?;
// Store raw payload
let payload_json = serde_json::to_value(&gitlab_discussion)?;
let payload_id = Some(store_payload(
&tx,
StorePayloadOptions {
project_id: Some(local_project_id),
resource_type: "discussion",
gitlab_id: &gitlab_discussion.id,
payload: &payload_json,
compress: config.storage.compress_raw_payloads,
},
)?);
// Upsert discussion with run_seen_at
upsert_discussion(&tx, &normalized_discussion, run_seen_at, payload_id)?;
result.discussions_upserted += 1;
// Get local discussion ID
let local_discussion_id: i64 = tx.query_row(
"SELECT id FROM discussions WHERE project_id = ? AND gitlab_discussion_id = ?",
params![
local_project_id,
&normalized_discussion.gitlab_discussion_id
],
|row| row.get(0),
)?;
// Upsert notes (not delete-all-then-insert)
for note in &notes {
// Selective payload storage: skip system notes without position
let should_store_payload = !note.is_system
|| note.position_new_path.is_some()
|| note.position_old_path.is_some();
let note_payload_id = if should_store_payload {
let note_data = gitlab_discussion
.notes
.iter()
.find(|n| n.id == note.gitlab_id);
if let Some(note_data) = note_data {
let note_payload_json = serde_json::to_value(note_data)?;
Some(store_payload(
&tx,
StorePayloadOptions {
project_id: Some(local_project_id),
resource_type: "note",
gitlab_id: &note.gitlab_id.to_string(),
payload: &note_payload_json,
compress: config.storage.compress_raw_payloads,
},
)?)
} else {
None
}
} else {
None
};
upsert_note(&tx, local_discussion_id, note, run_seen_at, note_payload_id)?;
result.notes_upserted += 1;
}
tx.commit()?;
}
// Only sweep stale data and advance watermark on full success
if result.pagination_succeeded && received_first_response {
// Sweep stale discussions for this MR
sweep_stale_discussions(conn, mr.local_mr_id, run_seen_at)?;
// Sweep stale notes for this MR
sweep_stale_notes(conn, local_project_id, mr.local_mr_id, run_seen_at)?;
// Advance watermark
mark_discussions_synced(conn, mr.local_mr_id, mr.updated_at)?;
clear_sync_health_error(conn, mr.local_mr_id)?;
debug!(
mr_iid = mr.iid,
"MR discussion sync complete, watermark advanced"
);
} else if result.pagination_succeeded && !received_first_response {
// Empty response (no discussions) - still safe to sweep and advance
sweep_stale_discussions(conn, mr.local_mr_id, run_seen_at)?;
sweep_stale_notes(conn, local_project_id, mr.local_mr_id, run_seen_at)?;
mark_discussions_synced(conn, mr.local_mr_id, mr.updated_at)?;
clear_sync_health_error(conn, mr.local_mr_id)?;
} else {
warn!(
mr_iid = mr.iid,
discussions_seen = result.discussions_upserted,
notes_skipped = result.notes_skipped_bad_timestamp,
"Watermark NOT advanced; will retry on next sync"
);
}
Ok(result)
}
/// Upsert a discussion with last_seen_at for sweep.
fn upsert_discussion(
conn: &Connection,
discussion: &crate::gitlab::transformers::NormalizedDiscussion,
last_seen_at: i64,
payload_id: Option<i64>,
) -> Result<()> {
conn.execute(
"INSERT INTO discussions (
gitlab_discussion_id, project_id, issue_id, merge_request_id, noteable_type,
individual_note, first_note_at, last_note_at, last_seen_at,
resolvable, resolved, raw_payload_id
) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12)
ON CONFLICT(project_id, gitlab_discussion_id) DO UPDATE SET
first_note_at = excluded.first_note_at,
last_note_at = excluded.last_note_at,
last_seen_at = excluded.last_seen_at,
resolvable = excluded.resolvable,
resolved = excluded.resolved,
raw_payload_id = COALESCE(excluded.raw_payload_id, raw_payload_id)",
params![
&discussion.gitlab_discussion_id,
discussion.project_id,
discussion.issue_id,
discussion.merge_request_id,
&discussion.noteable_type,
discussion.individual_note,
discussion.first_note_at,
discussion.last_note_at,
last_seen_at,
discussion.resolvable,
discussion.resolved,
payload_id,
],
)?;
Ok(())
}
/// Upsert a note with last_seen_at for sweep.
fn upsert_note(
conn: &Connection,
discussion_id: i64,
note: &NormalizedNote,
last_seen_at: i64,
payload_id: Option<i64>,
) -> Result<()> {
conn.execute(
"INSERT INTO notes (
gitlab_id, discussion_id, project_id, note_type, is_system,
author_username, body, created_at, updated_at, last_seen_at,
position, resolvable, resolved, resolved_by, resolved_at,
position_old_path, position_new_path, position_old_line, position_new_line,
position_type, position_line_range_start, position_line_range_end,
position_base_sha, position_start_sha, position_head_sha,
raw_payload_id
) VALUES (?1, ?2, ?3, ?4, ?5, ?6, ?7, ?8, ?9, ?10, ?11, ?12, ?13, ?14, ?15, ?16, ?17, ?18, ?19, ?20, ?21, ?22, ?23, ?24, ?25, ?26)
ON CONFLICT(gitlab_id) DO UPDATE SET
note_type = excluded.note_type,
body = excluded.body,
updated_at = excluded.updated_at,
last_seen_at = excluded.last_seen_at,
resolvable = excluded.resolvable,
resolved = excluded.resolved,
resolved_by = excluded.resolved_by,
resolved_at = excluded.resolved_at,
position_old_path = excluded.position_old_path,
position_new_path = excluded.position_new_path,
position_old_line = excluded.position_old_line,
position_new_line = excluded.position_new_line,
position_type = excluded.position_type,
position_line_range_start = excluded.position_line_range_start,
position_line_range_end = excluded.position_line_range_end,
position_base_sha = excluded.position_base_sha,
position_start_sha = excluded.position_start_sha,
position_head_sha = excluded.position_head_sha,
raw_payload_id = COALESCE(excluded.raw_payload_id, raw_payload_id)",
params![
note.gitlab_id,
discussion_id,
note.project_id,
&note.note_type,
note.is_system,
&note.author_username,
&note.body,
note.created_at,
note.updated_at,
last_seen_at,
note.position,
note.resolvable,
note.resolved,
&note.resolved_by,
note.resolved_at,
&note.position_old_path,
&note.position_new_path,
note.position_old_line,
note.position_new_line,
&note.position_type,
note.position_line_range_start,
note.position_line_range_end,
&note.position_base_sha,
&note.position_start_sha,
&note.position_head_sha,
payload_id,
],
)?;
Ok(())
}
/// Sweep stale discussions (not seen in this run).
fn sweep_stale_discussions(conn: &Connection, local_mr_id: i64, run_seen_at: i64) -> Result<usize> {
let deleted = conn.execute(
"DELETE FROM discussions
WHERE merge_request_id = ? AND last_seen_at < ?",
params![local_mr_id, run_seen_at],
)?;
if deleted > 0 {
debug!(local_mr_id, deleted, "Swept stale discussions");
}
Ok(deleted)
}
/// Sweep stale notes for discussions belonging to this MR.
fn sweep_stale_notes(
conn: &Connection,
local_project_id: i64,
local_mr_id: i64,
run_seen_at: i64,
) -> Result<usize> {
let deleted = conn.execute(
"DELETE FROM notes
WHERE project_id = ?
AND discussion_id IN (
SELECT id FROM discussions WHERE merge_request_id = ?
)
AND last_seen_at < ?",
params![local_project_id, local_mr_id, run_seen_at],
)?;
if deleted > 0 {
debug!(local_mr_id, deleted, "Swept stale notes");
}
Ok(deleted)
}
/// Mark MR discussions as synced (advance watermark).
fn mark_discussions_synced(conn: &Connection, local_mr_id: i64, updated_at: i64) -> Result<()> {
conn.execute(
"UPDATE merge_requests SET discussions_synced_for_updated_at = ? WHERE id = ?",
params![updated_at, local_mr_id],
)?;
Ok(())
}
/// Record sync health error for debugging.
fn record_sync_health_error(conn: &Connection, local_mr_id: i64, error: &str) -> Result<()> {
conn.execute(
"UPDATE merge_requests SET
discussions_sync_last_attempt_at = ?,
discussions_sync_attempts = discussions_sync_attempts + 1,
discussions_sync_last_error = ?
WHERE id = ?",
params![now_ms(), error, local_mr_id],
)?;
Ok(())
}
/// Clear sync health error on success.
fn clear_sync_health_error(conn: &Connection, local_mr_id: i64) -> Result<()> {
conn.execute(
"UPDATE merge_requests SET
discussions_sync_last_attempt_at = ?,
discussions_sync_last_error = NULL
WHERE id = ?",
params![now_ms(), local_mr_id],
)?;
Ok(())
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn result_default_has_zero_counts() {
let result = IngestMrDiscussionsResult::default();
assert_eq!(result.discussions_fetched, 0);
assert_eq!(result.discussions_upserted, 0);
assert_eq!(result.notes_upserted, 0);
assert_eq!(result.notes_skipped_bad_timestamp, 0);
assert_eq!(result.diffnotes_count, 0);
assert!(!result.pagination_succeeded);
}
#[test]
fn result_pagination_succeeded_false_by_default() {
let result = IngestMrDiscussionsResult::default();
assert!(!result.pagination_succeeded);
}
}
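
The module's upsert + sweep pattern can be sketched without SQLite: every row touched in a run gets `last_seen_at = run_seen_at`, and afterwards any row with an older `last_seen_at` was not returned by the API and is deleted. A minimal std-only sketch, using a `HashMap` and hypothetical `Row`/`upsert`/`sweep_stale` names in place of the real tables and SQL:

```rust
use std::collections::HashMap;

// Hypothetical stand-ins for the notes table and its upsert/sweep SQL.
#[derive(Debug)]
struct Row {
    body: String,
    last_seen_at: i64,
}

// Upsert: insert or replace, stamping the row with this run's timestamp.
fn upsert(table: &mut HashMap<i64, Row>, id: i64, body: &str, run_seen_at: i64) {
    table.insert(id, Row { body: body.to_string(), last_seen_at: run_seen_at });
}

// Sweep: delete rows not re-seen in this run; returns the count removed.
fn sweep_stale(table: &mut HashMap<i64, Row>, run_seen_at: i64) -> usize {
    let before = table.len();
    table.retain(|_, row| row.last_seen_at >= run_seen_at);
    before - table.len()
}

fn main() {
    let mut notes = HashMap::new();
    upsert(&mut notes, 1, "old note", 100); // previous run
    upsert(&mut notes, 2, "deleted upstream", 100); // previous run
    let run_seen_at = 200;
    upsert(&mut notes, 1, "old note (edited)", run_seen_at); // re-seen this run
    let swept = sweep_stale(&mut notes, run_seen_at);
    assert_eq!(swept, 1); // note 2 was not re-seen, so it is removed
    assert!(notes.contains_key(&1));
    println!("swept {}", swept);
}
```

The key property, as in the real module, is that the sweep only runs after a fully successful fetch: skipping the sweep on failure preserves existing rows rather than deleting data the API never re-confirmed.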


@@ -1,10 +1,11 @@
//! Ingestion orchestrator: coordinates issue/MR and discussion sync.
//!
//! Implements the canonical pattern:
//! 1. Fetch resources (issues or MRs) with cursor-based sync
//! 2. Identify resources needing discussion sync
//! 3. Execute discussion sync with parallel prefetch (fetch in parallel, write serially)
use futures::future::join_all;
use rusqlite::Connection;
use tracing::info;
@@ -14,6 +15,10 @@ use crate::gitlab::GitLabClient;
use super::discussions::ingest_issue_discussions;
use super::issues::{IssueForDiscussionSync, ingest_issues};
use super::merge_requests::{
MrForDiscussionSync, get_mrs_needing_discussion_sync, ingest_merge_requests,
};
use super::mr_discussions::{prefetch_mr_discussions, write_prefetched_mr_discussions};
/// Progress callback for ingestion operations.
pub type ProgressCallback = Box<dyn Fn(ProgressEvent) + Send + Sync>;
@@ -33,9 +38,21 @@ pub enum ProgressEvent {
DiscussionSynced { current: usize, total: usize },
/// Discussion sync complete
DiscussionSyncComplete,
/// MR fetching started
MrsFetchStarted,
/// An MR was fetched (current count)
MrFetched { count: usize },
/// MR fetching complete
MrsFetchComplete { total: usize },
/// MR discussion sync started (total MRs to sync)
MrDiscussionSyncStarted { total: usize },
/// MR discussion synced (current/total)
MrDiscussionSynced { current: usize, total: usize },
/// MR discussion sync complete
MrDiscussionSyncComplete,
}
/// Result of full project ingestion (issues).
#[derive(Debug, Default)]
pub struct IngestProjectResult {
pub issues_fetched: usize,
@@ -48,6 +65,23 @@ pub struct IngestProjectResult {
pub issues_skipped_discussion_sync: usize,
}
/// Result of MR ingestion for a project.
#[derive(Debug, Default)]
pub struct IngestMrProjectResult {
pub mrs_fetched: usize,
pub mrs_upserted: usize,
pub labels_created: usize,
pub assignees_linked: usize,
pub reviewers_linked: usize,
pub discussions_fetched: usize,
pub discussions_upserted: usize,
pub notes_upserted: usize,
pub notes_skipped_bad_timestamp: usize,
pub diffnotes_count: usize,
pub mrs_synced_discussions: usize,
pub mrs_skipped_discussion_sync: usize,
}
/// Ingest all issues and their discussions for a project.
pub async fn ingest_project_issues(
conn: &Connection,
@@ -194,6 +228,183 @@ async fn sync_discussions_sequential(
Ok(results)
}
/// Ingest all merge requests and their discussions for a project.
pub async fn ingest_project_merge_requests(
conn: &Connection,
client: &GitLabClient,
config: &Config,
project_id: i64,
gitlab_project_id: i64,
full_sync: bool,
) -> Result<IngestMrProjectResult> {
ingest_project_merge_requests_with_progress(
conn,
client,
config,
project_id,
gitlab_project_id,
full_sync,
None,
)
.await
}
/// Ingest all merge requests and their discussions for a project with progress reporting.
pub async fn ingest_project_merge_requests_with_progress(
conn: &Connection,
client: &GitLabClient,
config: &Config,
project_id: i64,
gitlab_project_id: i64,
full_sync: bool,
progress: Option<ProgressCallback>,
) -> Result<IngestMrProjectResult> {
let mut result = IngestMrProjectResult::default();
let emit = |event: ProgressEvent| {
if let Some(ref cb) = progress {
cb(event);
}
};
// Step 1: Ingest MRs
emit(ProgressEvent::MrsFetchStarted);
let mr_result = ingest_merge_requests(
conn,
client,
config,
project_id,
gitlab_project_id,
full_sync,
)
.await?;
result.mrs_fetched = mr_result.fetched;
result.mrs_upserted = mr_result.upserted;
result.labels_created = mr_result.labels_created;
result.assignees_linked = mr_result.assignees_linked;
result.reviewers_linked = mr_result.reviewers_linked;
emit(ProgressEvent::MrsFetchComplete {
total: result.mrs_fetched,
});
// Step 2: Query DB for MRs needing discussion sync
// CRITICAL: Query AFTER ingestion to avoid memory growth during large ingests
let mrs_needing_sync = get_mrs_needing_discussion_sync(conn, project_id)?;
// Query total MRs for accurate skip count
let total_mrs: i64 = conn
.query_row(
"SELECT COUNT(*) FROM merge_requests WHERE project_id = ?",
[project_id],
|row| row.get(0),
)
.unwrap_or(0);
let total_mrs = total_mrs as usize;
result.mrs_skipped_discussion_sync = total_mrs.saturating_sub(mrs_needing_sync.len());
if mrs_needing_sync.is_empty() {
info!("No MRs need discussion sync");
return Ok(result);
}
info!(
count = mrs_needing_sync.len(),
"Starting discussion sync for MRs"
);
emit(ProgressEvent::MrDiscussionSyncStarted {
total: mrs_needing_sync.len(),
});
// Step 3: Execute sequential MR discussion sync
let discussion_results = sync_mr_discussions_sequential(
conn,
client,
config,
gitlab_project_id,
project_id,
&mrs_needing_sync,
&progress,
)
.await?;
emit(ProgressEvent::MrDiscussionSyncComplete);
// Aggregate discussion results
for disc_result in discussion_results {
result.discussions_fetched += disc_result.discussions_fetched;
result.discussions_upserted += disc_result.discussions_upserted;
result.notes_upserted += disc_result.notes_upserted;
result.notes_skipped_bad_timestamp += disc_result.notes_skipped_bad_timestamp;
result.diffnotes_count += disc_result.diffnotes_count;
if disc_result.pagination_succeeded {
result.mrs_synced_discussions += 1;
}
}
info!(
mrs_fetched = result.mrs_fetched,
mrs_upserted = result.mrs_upserted,
labels_created = result.labels_created,
discussions_fetched = result.discussions_fetched,
notes_upserted = result.notes_upserted,
diffnotes = result.diffnotes_count,
mrs_synced = result.mrs_synced_discussions,
mrs_skipped = result.mrs_skipped_discussion_sync,
"MR project ingestion complete"
);
Ok(result)
}
/// Sync discussions for MRs with parallel API prefetching.
///
/// Pattern: Fetch discussions for multiple MRs in parallel, then write serially.
/// This overlaps network I/O while respecting rusqlite's single-connection constraint.
async fn sync_mr_discussions_sequential(
conn: &Connection,
client: &GitLabClient,
config: &Config,
gitlab_project_id: i64,
local_project_id: i64,
mrs: &[MrForDiscussionSync],
progress: &Option<ProgressCallback>,
) -> Result<Vec<super::mr_discussions::IngestMrDiscussionsResult>> {
let batch_size = config.sync.dependent_concurrency as usize;
let total = mrs.len();
let mut results = Vec::with_capacity(mrs.len());
let mut processed = 0;
// Process in batches: parallel API fetch, serial DB write
for chunk in mrs.chunks(batch_size) {
// Step 1: Prefetch discussions for all MRs in this batch in parallel
let prefetch_futures = chunk.iter().map(|mr| {
prefetch_mr_discussions(client, gitlab_project_id, local_project_id, mr.clone())
});
let prefetched_batch = join_all(prefetch_futures).await;
// Step 2: Write each prefetched result serially
for prefetched in prefetched_batch {
let disc_result =
write_prefetched_mr_discussions(conn, config, local_project_id, prefetched)?;
results.push(disc_result);
processed += 1;
// Emit progress
if let Some(cb) = progress {
cb(ProgressEvent::MrDiscussionSynced {
current: processed,
total,
});
}
}
}
Ok(results)
}
#[cfg(test)]
mod tests {
use super::*;
@@ -209,4 +420,21 @@ mod tests {
assert_eq!(result.issues_synced_discussions, 0);
assert_eq!(result.issues_skipped_discussion_sync, 0);
}
#[test]
fn mr_result_default_has_zero_counts() {
let result = IngestMrProjectResult::default();
assert_eq!(result.mrs_fetched, 0);
assert_eq!(result.mrs_upserted, 0);
assert_eq!(result.labels_created, 0);
assert_eq!(result.assignees_linked, 0);
assert_eq!(result.reviewers_linked, 0);
assert_eq!(result.discussions_fetched, 0);
assert_eq!(result.discussions_upserted, 0);
assert_eq!(result.notes_upserted, 0);
assert_eq!(result.notes_skipped_bad_timestamp, 0);
assert_eq!(result.diffnotes_count, 0);
assert_eq!(result.mrs_synced_discussions, 0);
assert_eq!(result.mrs_skipped_discussion_sync, 0);
}
}
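
The batch pattern in `sync_mr_discussions_sequential` (fetch a chunk in parallel, then write serially) can be sketched without an async runtime. In this hypothetical std-only version, threads stand in for `join_all` and a `Vec` stands in for the single rusqlite connection:

```rust
use std::thread;

// Hypothetical stand-in for the per-MR API fetch.
fn fetch(id: i64) -> String {
    format!("discussions-for-mr-{}", id)
}

fn main() {
    let mrs: Vec<i64> = (1..=7).collect();
    let batch_size = 3; // corresponds to config.sync.dependent_concurrency
    let mut written = Vec::new();
    for chunk in mrs.chunks(batch_size) {
        // Step 1: parallel "API" fetch for the chunk (threads model join_all).
        let handles: Vec<_> = chunk
            .iter()
            .map(|&id| thread::spawn(move || fetch(id)))
            .collect();
        // Step 2: serial "DB" write, respecting the single-connection constraint.
        for handle in handles {
            written.push(handle.join().unwrap());
        }
    }
    assert_eq!(written.len(), 7);
    assert_eq!(written[0], "discussions-for-mr-1");
    println!("wrote {} MRs", written.len());
}
```

Joining handles in chunk order keeps writes deterministic even though the fetches complete in any order, which mirrors how the orchestrator awaits the whole prefetched batch before touching the database.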


@@ -3,21 +3,26 @@
use clap::Parser;
use console::style;
use dialoguer::{Confirm, Input};
use serde::Serialize;
use tracing_subscriber::EnvFilter;
use tracing_subscriber::layer::SubscriberExt;
use tracing_subscriber::util::SubscriberInitExt;
use gi::Config;
use gi::cli::commands::{
InitInputs, InitOptions, ListFilters, MrListFilters, open_issue_in_browser, open_mr_in_browser,
print_count, print_count_json, print_doctor_results, print_ingest_summary,
print_ingest_summary_json, print_list_issues, print_list_issues_json, print_list_mrs,
print_list_mrs_json, print_show_issue, print_show_issue_json, print_show_mr,
print_show_mr_json, print_sync_status, print_sync_status_json, run_auth_test, run_count,
run_doctor, run_ingest, run_init, run_list_issues, run_list_mrs, run_show_issue, run_show_mr,
run_sync_status,
};
use gi::cli::{Cli, Commands};
use gi::core::db::{create_connection, get_schema_version, run_migrations};
use gi::core::error::{GiError, RobotErrorOutput};
use gi::core::paths::get_config_path;
use gi::core::paths::get_db_path;
#[tokio::main]
async fn main() {
@@ -39,34 +44,36 @@ async fn main() {
.init();
let cli = Cli::parse();
let robot_mode = cli.is_robot_mode();
let result = match cli.command {
Commands::Init {
force,
non_interactive,
} => handle_init(cli.config.as_deref(), force, non_interactive, robot_mode).await,
Commands::AuthTest => handle_auth_test(cli.config.as_deref(), robot_mode).await,
Commands::Doctor { json } => handle_doctor(cli.config.as_deref(), json || robot_mode).await,
Commands::Version => handle_version(robot_mode),
Commands::Backup => handle_backup(robot_mode),
Commands::Reset { confirm: _ } => handle_reset(robot_mode),
Commands::Migrate => handle_migrate(cli.config.as_deref(), robot_mode).await,
Commands::SyncStatus => handle_sync_status(cli.config.as_deref(), robot_mode).await,
Commands::Ingest {
r#type,
project,
force,
full,
} => {
handle_ingest(
cli.config.as_deref(),
&r#type,
project.as_deref(),
force,
full,
robot_mode,
)
.await
}
Commands::List {
entity,
limit,
@@ -83,6 +90,11 @@ async fn main() {
order,
open,
json,
draft,
no_draft,
reviewer,
target_branch,
source_branch,
} => {
handle_list(
cli.config.as_deref(),
@@ -100,30 +112,106 @@ async fn main() {
&sort,
&order,
open,
json || robot_mode,
draft,
no_draft,
reviewer.as_deref(),
target_branch.as_deref(),
source_branch.as_deref(),
)
.await
}
Commands::Count { entity, r#type } => {
handle_count(cli.config.as_deref(), &entity, r#type.as_deref(), robot_mode).await
}
Commands::Show {
entity,
iid,
project,
json,
} => {
handle_show(
cli.config.as_deref(),
&entity,
iid,
project.as_deref(),
json || robot_mode,
)
.await
}
};
if let Err(e) = result {
handle_error(e, robot_mode);
}
}
/// Fallback error output for non-GiError errors in robot mode.
#[derive(Serialize)]
struct FallbackErrorOutput {
error: FallbackError,
}
#[derive(Serialize)]
struct FallbackError {
code: String,
message: String,
}
fn handle_error(e: Box<dyn std::error::Error>, robot_mode: bool) -> ! {
// Try to downcast to GiError for structured output
if let Some(gi_error) = e.downcast_ref::<GiError>() {
if robot_mode {
let output = RobotErrorOutput::from(gi_error);
// Use serde_json for safe serialization; fallback constructs JSON safely
eprintln!(
"{}",
serde_json::to_string(&output).unwrap_or_else(|_| {
// Fallback uses serde to ensure proper escaping
let fallback = FallbackErrorOutput {
error: FallbackError {
code: "INTERNAL_ERROR".to_string(),
message: gi_error.to_string(),
},
};
serde_json::to_string(&fallback)
.unwrap_or_else(|_| r#"{"error":{"code":"INTERNAL_ERROR","message":"Serialization failed"}}"#.to_string())
})
);
std::process::exit(gi_error.exit_code());
} else {
eprintln!("{} {}", style("Error:").red(), gi_error);
if let Some(suggestion) = gi_error.suggestion() {
eprintln!("{} {}", style("Hint:").yellow(), suggestion);
}
std::process::exit(gi_error.exit_code());
}
}
// Fallback for non-GiError errors - use serde for proper JSON escaping
if robot_mode {
let output = FallbackErrorOutput {
error: FallbackError {
code: "INTERNAL_ERROR".to_string(),
message: e.to_string(),
},
};
eprintln!(
"{}",
serde_json::to_string(&output)
.unwrap_or_else(|_| r#"{"error":{"code":"INTERNAL_ERROR","message":"Serialization failed"}}"#.to_string())
);
} else {
eprintln!("{} {}", style("Error:").red(), e);
}
std::process::exit(1);
}
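The error path above relies on per-variant process exit codes (1 for generic failures, 5 for AUTH_FAILED, 10 for DB_ERROR, per the comments in this diff). A minimal sketch of such a mapping, with a hypothetical `GiErrorKind` enum standing in for the real `GiError` variants:

```rust
// Hypothetical error kinds; names are illustrative, not the actual
// GiError variants. Exit codes mirror those used in the handlers above.
enum GiErrorKind {
    Auth,
    Db,
    Other,
}

fn exit_code(kind: &GiErrorKind) -> i32 {
    match kind {
        GiErrorKind::Auth => 5,  // AUTH_FAILED
        GiErrorKind::Db => 10,   // DB_ERROR
        GiErrorKind::Other => 1, // generic failure
    }
}

fn main() {
    assert_eq!(exit_code(&GiErrorKind::Auth), 5);
    assert_eq!(exit_code(&GiErrorKind::Db), 10);
    assert_eq!(exit_code(&GiErrorKind::Other), 1);
}
```

Keeping the mapping in one place lets robot-mode consumers branch on the exit code without parsing stderr.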
async fn handle_init(
config_override: Option<&str>,
force: bool,
non_interactive: bool,
_robot_mode: bool, // TODO: Add robot mode support for init (requires non-interactive implementation)
) -> Result<(), Box<dyn std::error::Error>> {
let config_path = get_config_path(config_override);
let mut confirmed_overwrite = force;
@@ -244,16 +332,57 @@ async fn handle_init(
Ok(())
}
/// JSON output for auth-test command.
#[derive(Serialize)]
struct AuthTestOutput {
ok: bool,
data: AuthTestData,
}
#[derive(Serialize)]
struct AuthTestData {
authenticated: bool,
username: String,
name: String,
gitlab_url: String,
}
async fn handle_auth_test(
config_override: Option<&str>,
robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
match run_auth_test(config_override).await {
Ok(result) => {
if robot_mode {
let output = AuthTestOutput {
ok: true,
data: AuthTestData {
authenticated: true,
username: result.username.clone(),
name: result.name.clone(),
gitlab_url: result.base_url.clone(),
},
};
println!("{}", serde_json::to_string(&output)?);
} else {
println!("Authenticated as @{} ({})", result.username, result.name);
println!("GitLab: {}", result.base_url);
}
Ok(())
}
Err(e) => {
if robot_mode {
let output = FallbackErrorOutput {
error: FallbackError {
code: "AUTH_FAILED".to_string(),
message: e.to_string(),
},
};
eprintln!("{}", serde_json::to_string(&output)?);
} else {
eprintln!("{}", style(format!("Error: {e}")).red());
}
std::process::exit(5); // AUTH_FAILED exit code
}
}
}
@@ -283,12 +412,17 @@ async fn handle_ingest(
project_filter: Option<&str>,
force: bool,
full: bool,
robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
let config = Config::load(config_override)?;
match run_ingest(&config, resource_type, project_filter, force, full, robot_mode).await {
Ok(result) => {
if robot_mode {
print_ingest_summary_json(&result);
} else {
print_ingest_summary(&result);
}
Ok(())
}
Err(e) => {
@@ -298,6 +432,7 @@ async fn handle_ingest(
}
}
#[allow(clippy::too_many_arguments)]
async fn handle_list(
config_override: Option<&str>,
entity: &str,
@@ -315,6 +450,11 @@ async fn handle_list(
order: &str,
open_browser: bool,
json_output: bool,
draft: bool,
no_draft: bool,
reviewer_filter: Option<&str>,
target_branch_filter: Option<&str>,
source_branch_filter: Option<&str>,
) -> Result<(), Box<dyn std::error::Error>> {
let config = Config::load(config_override)?;
@@ -348,7 +488,33 @@ async fn handle_list(
Ok(())
}
"mrs" => {
let filters = MrListFilters {
limit,
project: project_filter,
state: state_filter,
author: author_filter,
assignee: assignee_filter,
reviewer: reviewer_filter,
labels: label_filter,
since: since_filter,
draft,
no_draft,
target_branch: target_branch_filter,
source_branch: source_branch_filter,
sort,
order,
};
let result = run_list_mrs(&config, filters)?;
if open_browser {
open_mr_in_browser(&result);
} else if json_output {
print_list_mrs_json(&result);
} else {
print_list_mrs(&result);
}
Ok(())
}
_ => {
@@ -362,21 +528,31 @@ async fn handle_count(
config_override: Option<&str>,
entity: &str,
type_filter: Option<&str>,
robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
let config = Config::load(config_override)?;
let result = run_count(&config, entity, type_filter)?;
if robot_mode {
print_count_json(&result);
} else {
print_count(&result);
}
Ok(())
}
async fn handle_sync_status(
config_override: Option<&str>,
robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
let config = Config::load(config_override)?;
let result = run_sync_status(&config)?;
if robot_mode {
print_sync_status_json(&result);
} else {
print_sync_status(&result);
}
Ok(())
}
@@ -385,17 +561,27 @@ async fn handle_show(
entity: &str,
iid: i64,
project_filter: Option<&str>,
json: bool,
) -> Result<(), Box<dyn std::error::Error>> {
let config = Config::load(config_override)?;
match entity {
"issue" => {
let result = run_show_issue(&config, iid, project_filter)?;
if json {
print_show_issue_json(&result);
} else {
print_show_issue(&result);
}
Ok(())
}
"mr" => {
let result = run_show_mr(&config, iid, project_filter)?;
if json {
print_show_mr_json(&result);
} else {
print_show_mr(&result);
}
Ok(())
}
_ => {
@@ -405,32 +591,159 @@ async fn handle_show(
}
}
/// JSON output for version command.
#[derive(Serialize)]
struct VersionOutput {
ok: bool,
data: VersionData,
}
#[derive(Serialize)]
struct VersionData {
version: String,
}
fn handle_version(robot_mode: bool) -> Result<(), Box<dyn std::error::Error>> {
let version = env!("CARGO_PKG_VERSION").to_string();
if robot_mode {
let output = VersionOutput {
ok: true,
data: VersionData { version },
};
println!("{}", serde_json::to_string(&output)?);
} else {
println!("gi version {}", version);
}
Ok(())
}
/// JSON output for not-implemented commands.
#[derive(Serialize)]
struct NotImplementedOutput {
ok: bool,
data: NotImplementedData,
}
#[derive(Serialize)]
struct NotImplementedData {
status: String,
command: String,
}
fn handle_backup(robot_mode: bool) -> Result<(), Box<dyn std::error::Error>> {
if robot_mode {
let output = NotImplementedOutput {
ok: true,
data: NotImplementedData {
status: "not_implemented".to_string(),
command: "backup".to_string(),
},
};
println!("{}", serde_json::to_string(&output)?);
} else {
println!("gi backup - not yet implemented");
}
Ok(())
}
fn handle_reset(robot_mode: bool) -> Result<(), Box<dyn std::error::Error>> {
if robot_mode {
let output = NotImplementedOutput {
ok: true,
data: NotImplementedData {
status: "not_implemented".to_string(),
command: "reset".to_string(),
},
};
println!("{}", serde_json::to_string(&output)?);
} else {
println!("gi reset - not yet implemented");
}
Ok(())
}
/// JSON output for migrate command.
#[derive(Serialize)]
struct MigrateOutput {
ok: bool,
data: MigrateData,
}
#[derive(Serialize)]
struct MigrateData {
before_version: i32,
after_version: i32,
migrated: bool,
}
/// JSON error output with suggestion field.
#[derive(Serialize)]
struct RobotErrorWithSuggestion {
error: RobotErrorSuggestionData,
}
#[derive(Serialize)]
struct RobotErrorSuggestionData {
code: String,
message: String,
suggestion: String,
}
async fn handle_migrate(
config_override: Option<&str>,
robot_mode: bool,
) -> Result<(), Box<dyn std::error::Error>> {
let config = Config::load(config_override)?;
let db_path = get_db_path(config.storage.db_path.as_deref());
if !db_path.exists() {
if robot_mode {
let output = RobotErrorWithSuggestion {
error: RobotErrorSuggestionData {
code: "DB_ERROR".to_string(),
message: format!("Database not found at {}", db_path.display()),
suggestion: "Run 'gi init' first".to_string(),
},
};
eprintln!("{}", serde_json::to_string(&output)?);
} else {
eprintln!(
"{}",
style(format!("Database not found at {}", db_path.display())).red()
);
eprintln!(
"{}",
style("Run 'gi init' first to create the database.").yellow()
);
}
std::process::exit(10); // DB_ERROR exit code
}
let conn = create_connection(&db_path)?;
let before_version = get_schema_version(&conn);
if !robot_mode {
println!(
"{}",
style(format!("Current schema version: {}", before_version)).blue()
);
}
run_migrations(&conn)?;
let after_version = get_schema_version(&conn);
if robot_mode {
let output = MigrateOutput {
ok: true,
data: MigrateData {
before_version,
after_version,
migrated: after_version > before_version,
},
};
println!("{}", serde_json::to_string(&output)?);
} else if after_version > before_version {
println!(
"{}",
style(format!(


@@ -0,0 +1,381 @@
//! Tests for DiffNote position extraction in note transformer.
use gi::gitlab::transformers::discussion::transform_notes_with_diff_position;
use gi::gitlab::types::{
GitLabAuthor, GitLabDiscussion, GitLabLineRange, GitLabLineRangePoint, GitLabNote,
GitLabNotePosition,
};
fn make_author() -> GitLabAuthor {
GitLabAuthor {
id: 1,
username: "testuser".to_string(),
name: "Test User".to_string(),
}
}
fn make_basic_note(id: i64, created_at: &str) -> GitLabNote {
GitLabNote {
id,
note_type: Some("DiscussionNote".to_string()),
body: format!("Note {}", id),
author: make_author(),
created_at: created_at.to_string(),
updated_at: created_at.to_string(),
system: false,
resolvable: false,
resolved: false,
resolved_by: None,
resolved_at: None,
position: None,
}
}
fn make_diffnote_with_position(
id: i64,
created_at: &str,
position: GitLabNotePosition,
) -> GitLabNote {
GitLabNote {
id,
note_type: Some("DiffNote".to_string()),
body: format!("DiffNote {}", id),
author: make_author(),
created_at: created_at.to_string(),
updated_at: created_at.to_string(),
system: false,
resolvable: true,
resolved: false,
resolved_by: None,
resolved_at: None,
position: Some(position),
}
}
fn make_discussion(notes: Vec<GitLabNote>) -> GitLabDiscussion {
GitLabDiscussion {
id: "abc123".to_string(),
individual_note: false,
notes,
}
}
// === DiffNote Position Field Extraction ===
#[test]
fn extracts_position_paths_from_diffnote() {
let position = GitLabNotePosition {
old_path: Some("src/old.rs".to_string()),
new_path: Some("src/new.rs".to_string()),
old_line: Some(10),
new_line: Some(15),
position_type: Some("text".to_string()),
line_range: None,
base_sha: None,
start_sha: None,
head_sha: None,
};
let note = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", position);
let discussion = make_discussion(vec![note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes.len(), 1);
assert_eq!(notes[0].position_old_path, Some("src/old.rs".to_string()));
assert_eq!(notes[0].position_new_path, Some("src/new.rs".to_string()));
assert_eq!(notes[0].position_old_line, Some(10));
assert_eq!(notes[0].position_new_line, Some(15));
}
#[test]
fn extracts_position_type_from_diffnote() {
let position = GitLabNotePosition {
old_path: None,
new_path: Some("image.png".to_string()),
old_line: None,
new_line: None,
position_type: Some("image".to_string()),
line_range: None,
base_sha: None,
start_sha: None,
head_sha: None,
};
let note = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", position);
let discussion = make_discussion(vec![note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes[0].position_type, Some("image".to_string()));
}
#[test]
fn extracts_sha_triplet_from_diffnote() {
let position = GitLabNotePosition {
old_path: Some("file.rs".to_string()),
new_path: Some("file.rs".to_string()),
old_line: Some(5),
new_line: Some(5),
position_type: Some("text".to_string()),
line_range: None,
base_sha: Some("abc123base".to_string()),
start_sha: Some("def456start".to_string()),
head_sha: Some("ghi789head".to_string()),
};
let note = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", position);
let discussion = make_discussion(vec![note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes[0].position_base_sha, Some("abc123base".to_string()));
assert_eq!(notes[0].position_start_sha, Some("def456start".to_string()));
assert_eq!(notes[0].position_head_sha, Some("ghi789head".to_string()));
}
#[test]
fn extracts_line_range_from_multiline_diffnote() {
let line_range = GitLabLineRange {
start: GitLabLineRangePoint {
line_code: Some("abc123_10_10".to_string()),
line_type: Some("new".to_string()),
old_line: None,
new_line: Some(10),
},
end: GitLabLineRangePoint {
line_code: Some("abc123_15_15".to_string()),
line_type: Some("new".to_string()),
old_line: None,
new_line: Some(15),
},
};
let position = GitLabNotePosition {
old_path: None,
new_path: Some("file.rs".to_string()),
old_line: None,
new_line: Some(10),
position_type: Some("text".to_string()),
line_range: Some(line_range),
base_sha: None,
start_sha: None,
head_sha: None,
};
let note = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", position);
let discussion = make_discussion(vec![note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes[0].position_line_range_start, Some(10));
assert_eq!(notes[0].position_line_range_end, Some(15));
}
#[test]
fn line_range_uses_old_line_fallback_when_new_line_missing() {
let line_range = GitLabLineRange {
start: GitLabLineRangePoint {
line_code: None,
line_type: Some("old".to_string()),
old_line: Some(20),
new_line: None, // missing - should fall back to old_line
},
end: GitLabLineRangePoint {
line_code: None,
line_type: Some("old".to_string()),
old_line: Some(25),
new_line: None,
},
};
let position = GitLabNotePosition {
old_path: Some("deleted.rs".to_string()),
new_path: None,
old_line: Some(20),
new_line: None,
position_type: Some("text".to_string()),
line_range: Some(line_range),
base_sha: None,
start_sha: None,
head_sha: None,
};
let note = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", position);
let discussion = make_discussion(vec![note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes[0].position_line_range_start, Some(20));
assert_eq!(notes[0].position_line_range_end, Some(25));
}
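The fallback exercised by the test above can be sketched as a simple `Option::or`: prefer the point's `new_line`, fall back to `old_line` when the comment targets a removed line. This is a sketch of the assumed behavior, not the transformer's actual code:

```rust
// Sketch of the line-range endpoint fallback: new_line wins when
// present; old_line covers comments on removed lines.
fn effective_line(new_line: Option<i64>, old_line: Option<i64>) -> Option<i64> {
    new_line.or(old_line)
}

fn main() {
    assert_eq!(effective_line(Some(10), None), Some(10));
    assert_eq!(effective_line(None, Some(20)), Some(20));
    assert_eq!(effective_line(None, None), None);
}
```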
// === Regular Notes (non-DiffNote) ===
#[test]
fn regular_note_has_none_for_all_position_fields() {
let note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
let discussion = make_discussion(vec![note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes[0].position_old_path, None);
assert_eq!(notes[0].position_new_path, None);
assert_eq!(notes[0].position_old_line, None);
assert_eq!(notes[0].position_new_line, None);
assert_eq!(notes[0].position_type, None);
assert_eq!(notes[0].position_line_range_start, None);
assert_eq!(notes[0].position_line_range_end, None);
assert_eq!(notes[0].position_base_sha, None);
assert_eq!(notes[0].position_start_sha, None);
assert_eq!(notes[0].position_head_sha, None);
}
// === Strict Timestamp Parsing ===
#[test]
fn returns_error_for_invalid_created_at_timestamp() {
let mut note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
note.created_at = "not-a-timestamp".to_string();
let discussion = make_discussion(vec![note]);
let result = transform_notes_with_diff_position(&discussion, 100);
assert!(result.is_err());
let err = result.unwrap_err();
assert!(err.contains("not-a-timestamp"));
}
#[test]
fn returns_error_for_invalid_updated_at_timestamp() {
let mut note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
note.updated_at = "garbage".to_string();
let discussion = make_discussion(vec![note]);
let result = transform_notes_with_diff_position(&discussion, 100);
assert!(result.is_err());
}
#[test]
fn returns_error_for_invalid_resolved_at_timestamp() {
let mut note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
note.resolvable = true;
note.resolved = true;
note.resolved_by = Some(make_author());
note.resolved_at = Some("bad-timestamp".to_string());
let discussion = make_discussion(vec![note]);
let result = transform_notes_with_diff_position(&discussion, 100);
assert!(result.is_err());
}
// === Mixed Discussion (DiffNote + Regular Notes) ===
#[test]
fn handles_mixed_diffnote_and_regular_notes() {
let position = GitLabNotePosition {
old_path: None,
new_path: Some("file.rs".to_string()),
old_line: None,
new_line: Some(42),
position_type: Some("text".to_string()),
line_range: None,
base_sha: None,
start_sha: None,
head_sha: None,
};
let diffnote = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", position);
let regular_note = make_basic_note(2, "2024-01-16T10:00:00.000Z");
let discussion = make_discussion(vec![diffnote, regular_note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes.len(), 2);
// First note is DiffNote with position
assert_eq!(notes[0].position_new_path, Some("file.rs".to_string()));
assert_eq!(notes[0].position_new_line, Some(42));
// Second note is regular with None position fields
assert_eq!(notes[1].position_new_path, None);
assert_eq!(notes[1].position_new_line, None);
}
// === Position Preservation ===
#[test]
fn preserves_note_position_index() {
let pos1 = GitLabNotePosition {
old_path: None,
new_path: Some("file.rs".to_string()),
old_line: None,
new_line: Some(10),
position_type: Some("text".to_string()),
line_range: None,
base_sha: None,
start_sha: None,
head_sha: None,
};
let pos2 = GitLabNotePosition {
old_path: None,
new_path: Some("file.rs".to_string()),
old_line: None,
new_line: Some(20),
position_type: Some("text".to_string()),
line_range: None,
base_sha: None,
start_sha: None,
head_sha: None,
};
let note1 = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", pos1);
let note2 = make_diffnote_with_position(2, "2024-01-16T10:00:00.000Z", pos2);
let discussion = make_discussion(vec![note1, note2]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes[0].position, 0);
assert_eq!(notes[1].position, 1);
}
// === Edge Cases ===
#[test]
fn handles_diffnote_with_empty_position_fields() {
// DiffNote exists but all position fields are None
let position = GitLabNotePosition {
old_path: None,
new_path: None,
old_line: None,
new_line: None,
position_type: None,
line_range: None,
base_sha: None,
start_sha: None,
head_sha: None,
};
let note = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", position);
let discussion = make_discussion(vec![note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
// All position fields should be None, not cause an error
assert_eq!(notes[0].position_old_path, None);
assert_eq!(notes[0].position_new_path, None);
}
#[test]
fn handles_file_position_type() {
let position = GitLabNotePosition {
old_path: None,
new_path: Some("binary.bin".to_string()),
old_line: None,
new_line: None,
position_type: Some("file".to_string()),
line_range: None,
base_sha: None,
start_sha: None,
head_sha: None,
};
let note = make_diffnote_with_position(1, "2024-01-16T09:00:00.000Z", position);
let discussion = make_discussion(vec![note]);
let notes = transform_notes_with_diff_position(&discussion, 100).unwrap();
assert_eq!(notes[0].position_type, Some("file".to_string()));
assert_eq!(notes[0].position_new_path, Some("binary.bin".to_string()));
// File-level comments have no line numbers
assert_eq!(notes[0].position_new_line, None);
}


@@ -0,0 +1,27 @@
{
"id": 12345,
"iid": 42,
"project_id": 100,
"title": "Add user authentication",
"description": "Implements JWT auth flow",
"state": "merged",
"draft": false,
"work_in_progress": false,
"source_branch": "feature/auth",
"target_branch": "main",
"sha": "abc123def456",
"references": { "short": "!42", "full": "group/project!42" },
"detailed_merge_status": "mergeable",
"merge_status": "can_be_merged",
"created_at": "2024-01-15T10:00:00Z",
"updated_at": "2024-01-20T14:30:00Z",
"merged_at": "2024-01-20T14:30:00Z",
"closed_at": null,
"author": { "id": 1, "username": "johndoe", "name": "John Doe" },
"merge_user": { "id": 2, "username": "janedoe", "name": "Jane Doe" },
"merged_by": { "id": 2, "username": "janedoe", "name": "Jane Doe" },
"labels": ["enhancement", "auth"],
"assignees": [{ "id": 3, "username": "bob", "name": "Bob Smith" }],
"reviewers": [{ "id": 4, "username": "alice", "name": "Alice Wong" }],
"web_url": "https://gitlab.example.com/group/project/-/merge_requests/42"
}


@@ -1,7 +1,8 @@
//! Tests for GitLab API response type deserialization.
use gi::gitlab::types::{
GitLabAuthor, GitLabDiscussion, GitLabIssue, GitLabMergeRequest, GitLabMilestone, GitLabNote,
GitLabNotePosition, GitLabReferences, GitLabReviewer,
};
#[test]
@@ -399,3 +400,240 @@ fn deserializes_gitlab_milestone() {
assert_eq!(milestone.state, Some("active".to_string()));
assert_eq!(milestone.due_date, Some("2024-04-01".to_string()));
}
// === Checkpoint 2: Merge Request type tests ===
#[test]
fn deserializes_gitlab_merge_request_from_fixture() {
let json = include_str!("fixtures/gitlab_merge_request.json");
let mr: GitLabMergeRequest =
serde_json::from_str(json).expect("Failed to deserialize merge request");
assert_eq!(mr.id, 12345);
assert_eq!(mr.iid, 42);
assert_eq!(mr.project_id, 100);
assert_eq!(mr.title, "Add user authentication");
assert_eq!(mr.description, Some("Implements JWT auth flow".to_string()));
assert_eq!(mr.state, "merged");
assert!(!mr.draft);
assert!(!mr.work_in_progress);
assert_eq!(mr.source_branch, "feature/auth");
assert_eq!(mr.target_branch, "main");
assert_eq!(mr.sha, Some("abc123def456".to_string()));
assert_eq!(mr.detailed_merge_status, Some("mergeable".to_string()));
assert_eq!(mr.merge_status_legacy, Some("can_be_merged".to_string()));
assert_eq!(mr.author.username, "johndoe");
assert!(mr.merge_user.is_some());
assert_eq!(mr.merge_user.as_ref().unwrap().username, "janedoe");
assert!(mr.merged_by.is_some());
assert_eq!(mr.labels, vec!["enhancement", "auth"]);
assert_eq!(mr.assignees.len(), 1);
assert_eq!(mr.assignees[0].username, "bob");
assert_eq!(mr.reviewers.len(), 1);
assert_eq!(mr.reviewers[0].username, "alice");
}
#[test]
fn deserializes_gitlab_merge_request_with_references() {
let json = include_str!("fixtures/gitlab_merge_request.json");
let mr: GitLabMergeRequest =
serde_json::from_str(json).expect("Failed to deserialize merge request");
assert!(mr.references.is_some());
let refs = mr.references.unwrap();
assert_eq!(refs.short, "!42");
assert_eq!(refs.full, "group/project!42");
}
#[test]
fn deserializes_gitlab_merge_request_minimal() {
// Test with minimal fields (no optional ones)
let json = r#"{
"id": 1,
"iid": 1,
"project_id": 1,
"title": "Test MR",
"state": "opened",
"source_branch": "feature",
"target_branch": "main",
"created_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"author": { "id": 1, "username": "user", "name": "User" },
"web_url": "https://example.com/mr/1"
}"#;
let mr: GitLabMergeRequest =
serde_json::from_str(json).expect("Failed to deserialize minimal MR");
assert_eq!(mr.id, 1);
assert!(mr.description.is_none());
assert!(!mr.draft);
assert!(!mr.work_in_progress);
assert!(mr.sha.is_none());
assert!(mr.references.is_none());
assert!(mr.detailed_merge_status.is_none());
assert!(mr.merge_status_legacy.is_none());
assert!(mr.merged_at.is_none());
assert!(mr.closed_at.is_none());
assert!(mr.merge_user.is_none());
assert!(mr.merged_by.is_none());
assert!(mr.labels.is_empty());
assert!(mr.assignees.is_empty());
assert!(mr.reviewers.is_empty());
}
#[test]
fn deserializes_gitlab_merge_request_with_draft() {
let json = r#"{
"id": 1,
"iid": 1,
"project_id": 1,
"title": "Draft MR",
"state": "opened",
"draft": true,
"source_branch": "wip",
"target_branch": "main",
"created_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"author": { "id": 1, "username": "user", "name": "User" },
"web_url": "https://example.com/mr/1"
}"#;
let mr: GitLabMergeRequest =
serde_json::from_str(json).expect("Failed to deserialize draft MR");
assert!(mr.draft);
}
#[test]
fn deserializes_gitlab_merge_request_with_work_in_progress_fallback() {
// Older GitLab instances use work_in_progress instead of draft
let json = r#"{
"id": 1,
"iid": 1,
"project_id": 1,
"title": "WIP MR",
"state": "opened",
"work_in_progress": true,
"source_branch": "wip",
"target_branch": "main",
"created_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"author": { "id": 1, "username": "user", "name": "User" },
"web_url": "https://example.com/mr/1"
}"#;
let mr: GitLabMergeRequest = serde_json::from_str(json).expect("Failed to deserialize WIP MR");
assert!(mr.work_in_progress);
// draft defaults to false when not present
assert!(!mr.draft);
}
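Beyond the `draft` and `work_in_progress` flags tested above, the commit message mentions draft detection via title prefix. A hedged sketch of how the three signals might combine (the exact prefix set is an assumption; GitLab has historically used both `Draft:` and `WIP:`):

```rust
// Illustrative draft check: modern `draft` field, legacy
// `work_in_progress` field, or a conventional title prefix.
fn is_draft(draft: bool, work_in_progress: bool, title: &str) -> bool {
    draft
        || work_in_progress
        || title.starts_with("Draft:")
        || title.starts_with("WIP:")
}

fn main() {
    assert!(is_draft(true, false, "Add auth"));
    assert!(is_draft(false, true, "Add auth"));
    assert!(is_draft(false, false, "Draft: add auth"));
    assert!(!is_draft(false, false, "Add auth"));
}
```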
#[test]
fn deserializes_gitlab_merge_request_with_locked_state() {
// locked is a transitional state during merge
let json = r#"{
"id": 1,
"iid": 1,
"project_id": 1,
"title": "Merging MR",
"state": "locked",
"source_branch": "feature",
"target_branch": "main",
"created_at": "2024-01-01T00:00:00Z",
"updated_at": "2024-01-01T00:00:00Z",
"author": { "id": 1, "username": "user", "name": "User" },
"web_url": "https://example.com/mr/1"
}"#;
let mr: GitLabMergeRequest =
serde_json::from_str(json).expect("Failed to deserialize locked MR");
assert_eq!(mr.state, "locked");
}
#[test]
fn deserializes_gitlab_reviewer() {
let json = r#"{
"id": 42,
"username": "reviewer",
"name": "Code Reviewer"
}"#;
let reviewer: GitLabReviewer =
serde_json::from_str(json).expect("Failed to deserialize reviewer");
assert_eq!(reviewer.id, 42);
assert_eq!(reviewer.username, "reviewer");
assert_eq!(reviewer.name, "Code Reviewer");
}
#[test]
fn deserializes_gitlab_references() {
let json = r#"{
"short": "!123",
"full": "group/project!123"
}"#;
let refs: GitLabReferences =
serde_json::from_str(json).expect("Failed to deserialize references");
assert_eq!(refs.short, "!123");
assert_eq!(refs.full, "group/project!123");
}
#[test]
fn deserializes_diffnote_position_with_sha_triplet() {
let json = r#"{
"old_path": "src/auth.rs",
"new_path": "src/auth.rs",
"old_line": 42,
"new_line": 45,
"position_type": "text",
"base_sha": "abc123",
"start_sha": "def456",
"head_sha": "ghi789"
}"#;
let pos: GitLabNotePosition =
serde_json::from_str(json).expect("Failed to deserialize position with SHA triplet");
assert_eq!(pos.position_type, Some("text".to_string()));
assert_eq!(pos.base_sha, Some("abc123".to_string()));
assert_eq!(pos.start_sha, Some("def456".to_string()));
assert_eq!(pos.head_sha, Some("ghi789".to_string()));
}
#[test]
fn deserializes_diffnote_position_with_line_range() {
let json = r#"{
"old_path": null,
"new_path": "src/new.rs",
"old_line": null,
"new_line": 10,
"position_type": "text",
"line_range": {
"start": {
"line_code": "abc123_10_10",
"type": "new",
"old_line": null,
"new_line": 10
},
"end": {
"line_code": "abc123_15_15",
"type": "new",
"old_line": null,
"new_line": 15
}
}
}"#;
let pos: GitLabNotePosition =
serde_json::from_str(json).expect("Failed to deserialize position with line range");
assert!(pos.line_range.is_some());
let range = pos.line_range.unwrap();
assert_eq!(range.start_line(), Some(10));
assert_eq!(range.end_line(), Some(15));
}


@@ -342,7 +342,8 @@ fn migration_005_milestones_cascade_on_project_delete() {
).unwrap();
// Delete project
conn.execute("DELETE FROM projects WHERE id = 1", [])
.unwrap();
// Verify milestone is gone
let count: i64 = conn
@@ -369,7 +370,8 @@ fn migration_005_assignees_cascade_on_issue_delete() {
conn.execute(
"INSERT INTO issue_assignees (issue_id, username) VALUES (1, 'alice')",
[],
)
.unwrap();
// Delete issue
conn.execute("DELETE FROM issues WHERE id = 1", []).unwrap();


@@ -0,0 +1,105 @@
//! Tests for MR discussion transformer.
use gi::gitlab::transformers::discussion::transform_mr_discussion;
use gi::gitlab::types::{GitLabAuthor, GitLabDiscussion, GitLabNote};
fn make_author() -> GitLabAuthor {
GitLabAuthor {
id: 1,
username: "testuser".to_string(),
name: "Test User".to_string(),
}
}
fn make_basic_note(id: i64, created_at: &str) -> GitLabNote {
GitLabNote {
id,
note_type: Some("DiscussionNote".to_string()),
body: format!("Note {}", id),
author: make_author(),
created_at: created_at.to_string(),
updated_at: created_at.to_string(),
system: false,
resolvable: false,
resolved: false,
resolved_by: None,
resolved_at: None,
position: None,
}
}
fn make_discussion(notes: Vec<GitLabNote>) -> GitLabDiscussion {
GitLabDiscussion {
id: "abc123def456".to_string(),
individual_note: false,
notes,
}
}
#[test]
fn transform_mr_discussion_sets_merge_request_id() {
let note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
let discussion = make_discussion(vec![note]);
let result = transform_mr_discussion(&discussion, 100, 42);
assert_eq!(result.merge_request_id, Some(42));
assert_eq!(result.issue_id, None);
assert_eq!(result.noteable_type, "MergeRequest");
}
#[test]
fn transform_mr_discussion_preserves_project_id() {
let note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
let discussion = make_discussion(vec![note]);
let result = transform_mr_discussion(&discussion, 200, 42);
assert_eq!(result.project_id, 200);
}
#[test]
fn transform_mr_discussion_preserves_discussion_id() {
let note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
let discussion = make_discussion(vec![note]);
let result = transform_mr_discussion(&discussion, 100, 42);
assert_eq!(result.gitlab_discussion_id, "abc123def456");
}
#[test]
fn transform_mr_discussion_computes_resolvable_from_notes() {
let mut note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
note.resolvable = true;
let discussion = make_discussion(vec![note]);
let result = transform_mr_discussion(&discussion, 100, 42);
assert!(result.resolvable);
assert!(!result.resolved); // resolvable but not resolved
}
#[test]
fn transform_mr_discussion_computes_resolved_when_all_resolved() {
let mut note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
note.resolvable = true;
note.resolved = true;
let discussion = make_discussion(vec![note]);
let result = transform_mr_discussion(&discussion, 100, 42);
assert!(result.resolvable);
assert!(result.resolved);
}
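// Illustrative sketch (not part of the crate under test): one plausible way the
// transformer could fold per-note flags into the discussion-level flags the two
// tests above assert — resolvable if any note is resolvable, resolved only once
// every resolvable note has been resolved. `fold_resolution` and its
// (resolvable, resolved) tuple input are hypothetical names.
fn fold_resolution(notes: &[(bool, bool)]) -> (bool, bool) {
    // Each tuple is (resolvable, resolved) for one note.
    let resolvable = notes.iter().any(|&(r, _)| r);
    let resolved = resolvable && notes.iter().filter(|&&(r, _)| r).all(|&(_, d)| d);
    (resolvable, resolved)
}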
#[test]
fn transform_mr_discussion_handles_individual_note() {
let note = make_basic_note(1, "2024-01-16T09:00:00.000Z");
let mut discussion = make_discussion(vec![note]);
discussion.individual_note = true;
let result = transform_mr_discussion(&discussion, 100, 42);
assert!(result.individual_note);
}


@@ -0,0 +1,374 @@
//! Tests for MR transformer module.
use gi::gitlab::transformers::merge_request::transform_merge_request;
use gi::gitlab::types::{GitLabAuthor, GitLabMergeRequest, GitLabReferences, GitLabReviewer};
fn make_test_mr() -> GitLabMergeRequest {
GitLabMergeRequest {
id: 12345,
iid: 42,
project_id: 100,
title: "Add user authentication".to_string(),
description: Some("Implements JWT auth flow".to_string()),
state: "merged".to_string(),
draft: false,
work_in_progress: false,
source_branch: "feature/auth".to_string(),
target_branch: "main".to_string(),
sha: Some("abc123def456".to_string()),
references: Some(GitLabReferences {
short: "!42".to_string(),
full: "group/project!42".to_string(),
}),
detailed_merge_status: Some("mergeable".to_string()),
merge_status_legacy: Some("can_be_merged".to_string()),
created_at: "2024-01-15T10:00:00.000Z".to_string(),
updated_at: "2024-01-20T14:30:00.000Z".to_string(),
merged_at: Some("2024-01-20T14:30:00.000Z".to_string()),
closed_at: None,
author: GitLabAuthor {
id: 1,
username: "johndoe".to_string(),
name: "John Doe".to_string(),
},
merge_user: Some(GitLabAuthor {
id: 2,
username: "janedoe".to_string(),
name: "Jane Doe".to_string(),
}),
merged_by: Some(GitLabAuthor {
id: 2,
username: "janedoe".to_string(),
name: "Jane Doe".to_string(),
}),
labels: vec!["enhancement".to_string(), "auth".to_string()],
assignees: vec![GitLabAuthor {
id: 3,
username: "bob".to_string(),
name: "Bob Smith".to_string(),
}],
reviewers: vec![GitLabReviewer {
id: 4,
username: "alice".to_string(),
name: "Alice Wong".to_string(),
}],
web_url: "https://gitlab.example.com/group/project/-/merge_requests/42".to_string(),
}
}
#[test]
fn transforms_mr_with_all_fields() {
let mr = make_test_mr();
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(result.merge_request.gitlab_id, 12345);
assert_eq!(result.merge_request.iid, 42);
assert_eq!(result.merge_request.project_id, 200); // Local project ID, not GitLab's
assert_eq!(result.merge_request.title, "Add user authentication");
assert_eq!(
result.merge_request.description,
Some("Implements JWT auth flow".to_string())
);
assert_eq!(result.merge_request.state, "merged");
assert!(!result.merge_request.draft);
assert_eq!(result.merge_request.author_username, "johndoe");
assert_eq!(result.merge_request.source_branch, "feature/auth");
assert_eq!(result.merge_request.target_branch, "main");
assert_eq!(
result.merge_request.head_sha,
Some("abc123def456".to_string())
);
assert_eq!(
result.merge_request.references_short,
Some("!42".to_string())
);
assert_eq!(
result.merge_request.references_full,
Some("group/project!42".to_string())
);
assert_eq!(
result.merge_request.detailed_merge_status,
Some("mergeable".to_string())
);
assert_eq!(
result.merge_request.merge_user_username,
Some("janedoe".to_string())
);
assert_eq!(
result.merge_request.web_url,
"https://gitlab.example.com/group/project/-/merge_requests/42"
);
}
#[test]
fn parses_timestamps_to_ms_epoch() {
let mr = make_test_mr();
let result = transform_merge_request(&mr, 200).unwrap();
// 2024-01-15T10:00:00.000Z = 1705312800000 ms
assert_eq!(result.merge_request.created_at, 1705312800000);
// 2024-01-20T14:30:00.000Z = 1705761000000 ms
assert_eq!(result.merge_request.updated_at, 1705761000000);
// merged_at should also be parsed
assert_eq!(result.merge_request.merged_at, Some(1705761000000));
}
#[test]
fn handles_timezone_offset_timestamps() {
let mut mr = make_test_mr();
// GitLab may return timestamps with an explicit UTC offset instead of a Z suffix
mr.created_at = "2024-01-15T05:00:00-05:00".to_string();
let result = transform_merge_request(&mr, 200).unwrap();
// 05:00 at -05:00 (EST) = 10:00 UTC, the same instant as the fixture's created_at
assert_eq!(result.merge_request.created_at, 1705312800000);
}
#[test]
fn sets_last_seen_at_to_current_time() {
let mr = make_test_mr();
let before = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_millis() as i64;
let result = transform_merge_request(&mr, 200).unwrap();
let after = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.unwrap()
.as_millis() as i64;
assert!(result.merge_request.last_seen_at >= before);
assert!(result.merge_request.last_seen_at <= after);
}
#[test]
fn extracts_label_names() {
let mr = make_test_mr();
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(result.label_names.len(), 2);
assert_eq!(result.label_names[0], "enhancement");
assert_eq!(result.label_names[1], "auth");
}
#[test]
fn handles_empty_labels() {
let mut mr = make_test_mr();
mr.labels = vec![];
let result = transform_merge_request(&mr, 200).unwrap();
assert!(result.label_names.is_empty());
}
#[test]
fn extracts_assignee_usernames() {
let mr = make_test_mr();
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(result.assignee_usernames.len(), 1);
assert_eq!(result.assignee_usernames[0], "bob");
}
#[test]
fn extracts_reviewer_usernames() {
let mr = make_test_mr();
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(result.reviewer_usernames.len(), 1);
assert_eq!(result.reviewer_usernames[0], "alice");
}
#[test]
fn handles_empty_assignees_and_reviewers() {
let mut mr = make_test_mr();
mr.assignees = vec![];
mr.reviewers = vec![];
let result = transform_merge_request(&mr, 200).unwrap();
assert!(result.assignee_usernames.is_empty());
assert!(result.reviewer_usernames.is_empty());
}
#[test]
fn draft_prefers_draft_field() {
let mut mr = make_test_mr();
mr.draft = true;
mr.work_in_progress = false;
let result = transform_merge_request(&mr, 200).unwrap();
assert!(result.merge_request.draft);
}
#[test]
fn draft_falls_back_to_work_in_progress() {
let mut mr = make_test_mr();
mr.draft = false;
mr.work_in_progress = true;
let result = transform_merge_request(&mr, 200).unwrap();
assert!(result.merge_request.draft);
}
#[test]
fn draft_false_when_both_false() {
let mut mr = make_test_mr();
mr.draft = false;
mr.work_in_progress = false;
let result = transform_merge_request(&mr, 200).unwrap();
assert!(!result.merge_request.draft);
}
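// Sketch of the precedence the three tests above pin down: the modern `draft`
// field wins, with the deprecated `work_in_progress` flag kept as a fallback
// for responses from older GitLab versions. The helper name is hypothetical;
// the real transformer may simply inline this expression.
fn is_draft(draft: bool, work_in_progress: bool) -> bool {
    draft || work_in_progress
}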
#[test]
fn detailed_merge_status_prefers_non_legacy() {
let mut mr = make_test_mr();
mr.detailed_merge_status = Some("checking".to_string());
mr.merge_status_legacy = Some("can_be_merged".to_string());
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(
result.merge_request.detailed_merge_status,
Some("checking".to_string())
);
}
#[test]
fn detailed_merge_status_falls_back_to_legacy() {
let mut mr = make_test_mr();
mr.detailed_merge_status = None;
mr.merge_status_legacy = Some("can_be_merged".to_string());
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(
result.merge_request.detailed_merge_status,
Some("can_be_merged".to_string())
);
}
#[test]
fn merge_user_prefers_merge_user_field() {
let mut mr = make_test_mr();
mr.merge_user = Some(GitLabAuthor {
id: 10,
username: "merge_user_name".to_string(),
name: "Merge User".to_string(),
});
mr.merged_by = Some(GitLabAuthor {
id: 11,
username: "merged_by_name".to_string(),
name: "Merged By".to_string(),
});
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(
result.merge_request.merge_user_username,
Some("merge_user_name".to_string())
);
}
#[test]
fn merge_user_falls_back_to_merged_by() {
let mut mr = make_test_mr();
mr.merge_user = None;
mr.merged_by = Some(GitLabAuthor {
id: 11,
username: "merged_by_name".to_string(),
name: "Merged By".to_string(),
});
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(
result.merge_request.merge_user_username,
Some("merged_by_name".to_string())
);
}
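// Both fallback pairs tested above (`detailed_merge_status` over the legacy
// merge status, `merge_user` over `merged_by`) reduce to `Option::or`; a
// hypothetical sketch of that shared shape:
fn prefer<T>(primary: Option<T>, fallback: Option<T>) -> Option<T> {
    // `or` returns `primary` when it is `Some`, otherwise `fallback`.
    primary.or(fallback)
}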
#[test]
fn handles_missing_references() {
let mut mr = make_test_mr();
mr.references = None;
let result = transform_merge_request(&mr, 200).unwrap();
assert!(result.merge_request.references_short.is_none());
assert!(result.merge_request.references_full.is_none());
}
#[test]
fn handles_missing_sha() {
let mut mr = make_test_mr();
mr.sha = None;
let result = transform_merge_request(&mr, 200).unwrap();
assert!(result.merge_request.head_sha.is_none());
}
#[test]
fn handles_missing_description() {
let mut mr = make_test_mr();
mr.description = None;
let result = transform_merge_request(&mr, 200).unwrap();
assert!(result.merge_request.description.is_none());
}
#[test]
fn handles_closed_at_timestamp() {
let mut mr = make_test_mr();
mr.state = "closed".to_string();
mr.merged_at = None;
mr.closed_at = Some("2024-01-18T12:00:00.000Z".to_string());
let result = transform_merge_request(&mr, 200).unwrap();
assert!(result.merge_request.merged_at.is_none());
// 2024-01-18T12:00:00.000Z = 1705579200000 ms
assert_eq!(result.merge_request.closed_at, Some(1705579200000));
}
#[test]
fn passes_through_locked_state() {
let mut mr = make_test_mr();
mr.state = "locked".to_string();
let result = transform_merge_request(&mr, 200).unwrap();
assert_eq!(result.merge_request.state, "locked");
}
#[test]
fn returns_error_for_invalid_created_at() {
let mut mr = make_test_mr();
mr.created_at = "not-a-timestamp".to_string();
let result = transform_merge_request(&mr, 200);
assert!(result.is_err());
let err = result.unwrap_err();
assert!(err.contains("not-a-timestamp"));
}
#[test]
fn returns_error_for_invalid_updated_at() {
let mut mr = make_test_mr();
mr.updated_at = "invalid".to_string();
let result = transform_merge_request(&mr, 200);
assert!(result.is_err());
}
#[test]
fn returns_error_for_invalid_merged_at() {
let mut mr = make_test_mr();
mr.merged_at = Some("bad-timestamp".to_string());
let result = transform_merge_request(&mr, 200);
assert!(result.is_err());
}
#[test]
fn returns_error_for_invalid_closed_at() {
let mut mr = make_test_mr();
mr.closed_at = Some("garbage".to_string());
let result = transform_merge_request(&mr, 200);
assert!(result.is_err());
}