Three implementation plans with iterative cross-model refinement:

- **lore-service** (5 iterations): HTTP service layer exposing lore's SQLite data via REST/SSE for integration with external tools (dashboards, IDE extensions, chat agents). Covers authentication, rate limiting, caching strategy, and webhook-driven sync triggers.
- **work-item-status-graphql** (7 iterations + TDD appendix): Detailed implementation plan for the GraphQL-based work item status enrichment feature (now implemented). Includes the TDD appendix with test-first development specifications covering GraphQL client, adaptive pagination, ingestion orchestration, CLI display, and robot mode output.
- **time-decay-expert-scoring** (iteration 5 feedback): Updates to the existing time-decay scoring plan incorporating feedback on decay curve parameterization, recency weighting for discussion contributions, and staleness detection thresholds.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
# Work Item Status — TDD Appendix

> Pre-written tests for every acceptance criterion in
> `plans/work-item-status-graphql.md` (iteration 7).
> Replaces the skeleton TDD Plan section with compilable Rust test code.

---

## Coverage Matrix

| AC | Tests | File |
|----|-------|------|
| AC-1 GraphQL Client | T01-T06, T28-T29, T33, T43-T47 | `src/gitlab/graphql.rs` (inline mod) |
| AC-2 Status Types | T07-T10, T48 | `src/gitlab/types.rs` (inline mod) |
| AC-3 Status Fetcher | T11-T14, T27, T34-T39, T42, T49-T52 | `src/gitlab/graphql.rs` (inline mod) |
| AC-4 Migration 021 | T15-T16, T53-T54 | `tests/migration_tests.rs` |
| AC-5 Config Toggle | T23-T24 | `src/core/config.rs` (inline mod) |
| AC-6 Orchestrator | T17-T20, T26, T30-T31, T41, T55 | `tests/status_enrichment_tests.rs` |
| AC-7 Show Display | T56-T58 | `tests/status_display_tests.rs` |
| AC-8 List Display | T59-T60 | `tests/status_display_tests.rs` |
| AC-9 List Filter | T21-T22, T40, T61-T63 | `tests/status_filter_tests.rs` |
| AC-10 Robot Envelope | T32 | `tests/status_enrichment_tests.rs` |
| AC-11 Quality Gates | (cargo check/clippy/fmt/test) | CI only |
| Helpers | T25 | `src/gitlab/graphql.rs` (inline mod) |

**Total: 63 tests** (42 original + 21 gap-fill additions marked with `NEW`)

---

## File 1: `src/gitlab/types.rs` — inline `#[cfg(test)]` module

Tests AC-2 (WorkItemStatus deserialization).

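For orientation, the tests below assume `WorkItemStatus` is shaped roughly as follows. This is a sketch inferred from the JSON fixtures, not the real definition in `src/gitlab/types.rs` (which would additionally derive serde's `Deserialize` with camelCase renaming so `iconName` maps onto `icon_name`):

```rust
// Assumed shape of WorkItemStatus, reconstructed from the test fixtures.
// The real type would carry #[derive(Deserialize)] and
// #[serde(rename_all = "camelCase")]; both are omitted in this sketch.
#[derive(Debug, Clone, PartialEq)]
pub struct WorkItemStatus {
    pub name: String,              // required display name, e.g. "In progress"
    pub category: Option<String>,  // raw string so unknown categories survive (T09)
    pub color: Option<String>,     // hex color, e.g. "#1f75cb"
    pub icon_name: Option<String>, // e.g. "status-in-progress"
}

fn main() {
    // Minimal construction mirroring the T08 fixture {"name":"To do"}.
    let status = WorkItemStatus {
        name: "To do".into(),
        category: None,
        color: None,
        icon_name: None,
    };
    assert!(status.category.is_none());
    assert_eq!(status.name, "To do");
}
```

Only `name` is required; the three `Option` fields match the explicit-null and missing-field cases exercised by T08 and T10.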
```rust
#[cfg(test)]
mod tests {
    use super::*;

    // ── T07: Full deserialization with all fields ────────────────────────
    #[test]
    fn test_work_item_status_deserialize() {
        let json = r##"{"name":"In progress","category":"IN_PROGRESS","color":"#1f75cb","iconName":"status-in-progress"}"##;
        let status: WorkItemStatus = serde_json::from_str(json).unwrap();

        assert_eq!(status.name, "In progress");
        assert_eq!(status.category.as_deref(), Some("IN_PROGRESS"));
        assert_eq!(status.color.as_deref(), Some("#1f75cb"));
        assert_eq!(status.icon_name.as_deref(), Some("status-in-progress"));
    }

    // ── T08: Only required field present ─────────────────────────────────
    #[test]
    fn test_work_item_status_optional_fields() {
        let json = r#"{"name":"To do"}"#;
        let status: WorkItemStatus = serde_json::from_str(json).unwrap();

        assert_eq!(status.name, "To do");
        assert!(status.category.is_none());
        assert!(status.color.is_none());
        assert!(status.icon_name.is_none());
    }

    // ── T09: Unknown category value (custom lifecycle on 18.5+) ──────────
    #[test]
    fn test_work_item_status_unknown_category() {
        let json = r#"{"name":"Custom","category":"SOME_FUTURE_VALUE"}"#;
        let status: WorkItemStatus = serde_json::from_str(json).unwrap();

        assert_eq!(status.category.as_deref(), Some("SOME_FUTURE_VALUE"));
    }

    // ── T10: Explicit null category ──────────────────────────────────────
    #[test]
    fn test_work_item_status_null_category() {
        let json = r#"{"name":"In progress","category":null}"#;
        let status: WorkItemStatus = serde_json::from_str(json).unwrap();

        assert!(status.category.is_none());
    }

    // ── T48 (NEW): All five system statuses deserialize correctly ─────────
    #[test]
    fn test_work_item_status_all_system_statuses() {
        // Note the r##…## delimiters: these bodies contain the sequence `"#`
        // (e.g. in "color":"#737278"), which would terminate a plain r#"…"#
        // raw string early and fail to compile.
        let cases = [
            (r##"{"name":"To do","category":"TO_DO","color":"#737278"}"##, "TO_DO"),
            (r##"{"name":"In progress","category":"IN_PROGRESS","color":"#1f75cb"}"##, "IN_PROGRESS"),
            (r##"{"name":"Done","category":"DONE","color":"#108548"}"##, "DONE"),
            (r##"{"name":"Won't do","category":"CANCELED","color":"#DD2B0E"}"##, "CANCELED"),
            (r##"{"name":"Duplicate","category":"CANCELED","color":"#DD2B0E"}"##, "CANCELED"),
        ];
        for (json, expected_cat) in cases {
            let status: WorkItemStatus = serde_json::from_str(json).unwrap();
            assert_eq!(
                status.category.as_deref(),
                Some(expected_cat),
                "Failed for: {}",
                status.name
            );
        }
    }
}
```
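The T48 fixtures above imply a fixed mapping from GitLab's five default system status names to their categories. A hypothetical stdlib-only lookup capturing that table (names and categories are taken from the test fixtures, not from GitLab's schema; custom lifecycles fall outside it):

```rust
// Hypothetical helper mirroring the T48 fixtures: the five default system
// statuses and the category the tests expect for each.
fn system_category(name: &str) -> Option<&'static str> {
    match name {
        "To do" => Some("TO_DO"),
        "In progress" => Some("IN_PROGRESS"),
        "Done" => Some("DONE"),
        // Both cancellation-flavored statuses share one category.
        "Won't do" | "Duplicate" => Some("CANCELED"),
        _ => None, // custom lifecycle statuses (cf. T09) are not in this table
    }
}

fn main() {
    assert_eq!(system_category("Done"), Some("DONE"));
    assert_eq!(system_category("Duplicate"), Some("CANCELED"));
    assert_eq!(system_category("Blocked"), None);
}
```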

---

## File 2: `src/gitlab/graphql.rs` — inline `#[cfg(test)]` module

Tests AC-1 (GraphQL client), AC-3 (Status fetcher), and helper functions.
Uses `wiremock` (already in dev-dependencies).

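T28, T29, and T43 below pin down the client's `Retry-After` handling. A minimal sketch of the delta-seconds path with the 60-second fallback those tests assume (HTTP-date parsing, exercised by T28, is elided here, so a date value also takes the fallback in this sketch):

```rust
// Sketch of Retry-After parsing as the tests constrain it: integer
// delta-seconds are honored (T43); anything unparseable falls back to 60s
// (T29). The real client would additionally parse HTTP-date values (T28).
fn retry_after_seconds(header: Option<&str>) -> u64 {
    header
        .and_then(|value| value.trim().parse::<u64>().ok())
        .unwrap_or(60) // assumed fallback, per T29's expectation
}

fn main() {
    assert_eq!(retry_after_seconds(Some("120")), 120); // delta-seconds
    assert_eq!(retry_after_seconds(Some("garbage")), 60); // fallback
    assert_eq!(retry_after_seconds(None), 60); // missing header
}
```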
```rust
#[cfg(test)]
mod tests {
    use super::*;
    use wiremock::matchers::{header, method, path};
    use wiremock::{Mock, MockServer, ResponseTemplate};

    // ═══════════════════════════════════════════════════════════════════════
    // AC-1: GraphQL Client
    // ═══════════════════════════════════════════════════════════════════════

    // ── T01: Successful query returns data ───────────────────────────────
    #[tokio::test]
    async fn test_graphql_query_success() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": {"foo": "bar"}
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = client
            .query("{ foo }", serde_json::json!({}))
            .await
            .unwrap();

        assert_eq!(result.data, serde_json::json!({"foo": "bar"}));
        assert!(!result.had_partial_errors);
        assert!(result.first_partial_error.is_none());
    }

    // ── T02: Errors array with no data field → Err ───────────────────────
    #[tokio::test]
    async fn test_graphql_query_with_errors_no_data() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "errors": [{"message": "bad query"}]
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let err = client
            .query("{ bad }", serde_json::json!({}))
            .await
            .unwrap_err();

        match err {
            LoreError::Other(msg) => assert!(
                msg.contains("bad query"),
                "Expected error message containing 'bad query', got: {msg}"
            ),
            other => panic!("Expected LoreError::Other, got: {other:?}"),
        }
    }

    // ── T03: Authorization header uses Bearer format ─────────────────────
    #[tokio::test]
    async fn test_graphql_auth_uses_bearer() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .and(header("Authorization", "Bearer tok123"))
            .and(header("Content-Type", "application/json"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": {"ok": true}
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        // If the header doesn't match, wiremock returns 404 and we'd get an error
        let result = client.query("{ ok }", serde_json::json!({})).await;
        assert!(result.is_ok(), "Expected Ok, got: {result:?}");
    }

    // ── T04: HTTP 401 → GitLabAuthFailed ─────────────────────────────────
    #[tokio::test]
    async fn test_graphql_401_maps_to_auth_failed() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(401))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "bad_token");
        let err = client.query("{ me }", serde_json::json!({})).await.unwrap_err();

        assert!(
            matches!(err, LoreError::GitLabAuthFailed),
            "Expected GitLabAuthFailed, got: {err:?}"
        );
    }

    // ── T05: HTTP 403 → GitLabAuthFailed ─────────────────────────────────
    #[tokio::test]
    async fn test_graphql_403_maps_to_auth_failed() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(403))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "forbidden_token");
        let err = client.query("{ me }", serde_json::json!({})).await.unwrap_err();

        assert!(
            matches!(err, LoreError::GitLabAuthFailed),
            "Expected GitLabAuthFailed, got: {err:?}"
        );
    }

    // ── T06: HTTP 404 → GitLabNotFound ───────────────────────────────────
    #[tokio::test]
    async fn test_graphql_404_maps_to_not_found() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(404))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let err = client.query("{ me }", serde_json::json!({})).await.unwrap_err();

        assert!(
            matches!(err, LoreError::GitLabNotFound { .. }),
            "Expected GitLabNotFound, got: {err:?}"
        );
    }

    // ── T28: HTTP 429 with Retry-After HTTP-date format ──────────────────
    #[tokio::test]
    async fn test_retry_after_http_date_format() {
        let server = MockServer::start().await;
        // Use a date far in the future so delta is positive
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(
                ResponseTemplate::new(429)
                    .insert_header("Retry-After", "Wed, 11 Feb 2099 01:00:00 GMT"),
            )
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let err = client.query("{ me }", serde_json::json!({})).await.unwrap_err();

        match err {
            LoreError::GitLabRateLimited { retry_after } => {
                // Should be a large number of seconds (future date)
                assert!(
                    retry_after > 60,
                    "Expected retry_after > 60 for far-future date, got: {retry_after}"
                );
            }
            other => panic!("Expected GitLabRateLimited, got: {other:?}"),
        }
    }

    // ── T29: HTTP 429 with unparseable Retry-After → fallback to 60 ─────
    #[tokio::test]
    async fn test_retry_after_invalid_falls_back_to_60() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(
                ResponseTemplate::new(429).insert_header("Retry-After", "garbage"),
            )
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let err = client.query("{ me }", serde_json::json!({})).await.unwrap_err();

        match err {
            LoreError::GitLabRateLimited { retry_after } => {
                assert_eq!(retry_after, 60, "Expected fallback to 60s");
            }
            other => panic!("Expected GitLabRateLimited, got: {other:?}"),
        }
    }

    // ── T33: Partial data with errors → returns data + metadata ──────────
    #[tokio::test]
    async fn test_graphql_partial_data_with_errors_returns_data() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": {"foo": "bar"},
                "errors": [{"message": "partial failure"}]
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = client
            .query("{ foo }", serde_json::json!({}))
            .await
            .unwrap();

        assert_eq!(result.data, serde_json::json!({"foo": "bar"}));
        assert!(result.had_partial_errors);
        assert_eq!(
            result.first_partial_error.as_deref(),
            Some("partial failure")
        );
    }

    // ── T43 (NEW): HTTP 429 with delta-seconds Retry-After ───────────────
    #[tokio::test]
    async fn test_retry_after_delta_seconds() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(
                ResponseTemplate::new(429).insert_header("Retry-After", "120"),
            )
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let err = client.query("{ me }", serde_json::json!({})).await.unwrap_err();

        match err {
            LoreError::GitLabRateLimited { retry_after } => {
                assert_eq!(retry_after, 120);
            }
            other => panic!("Expected GitLabRateLimited, got: {other:?}"),
        }
    }

    // ── T44 (NEW): Network error → LoreError::Other ─────────────────────
    #[tokio::test]
    async fn test_graphql_network_error() {
        // Connect to a port that's not listening
        let client = GraphqlClient::new("http://127.0.0.1:1", "tok123");
        let err = client.query("{ me }", serde_json::json!({})).await.unwrap_err();

        assert!(
            matches!(err, LoreError::Other(_)),
            "Expected LoreError::Other for network error, got: {err:?}"
        );
    }

    // ── T45 (NEW): Request body contains query + variables ───────────────
    #[tokio::test]
    async fn test_graphql_request_body_format() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": {"ok": true}
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let vars = serde_json::json!({"projectPath": "group/repo"});
        let _ = client
            .query(
                "query($projectPath: ID!) { project(fullPath: $projectPath) { id } }",
                vars,
            )
            .await;

        // Verify via wiremock's record of received requests
        let requests = server.received_requests().await.unwrap();
        assert_eq!(requests.len(), 1);

        let body: serde_json::Value =
            serde_json::from_slice(&requests[0].body).unwrap();
        assert!(body.get("query").is_some(), "Body missing 'query' field");
        assert!(body.get("variables").is_some(), "Body missing 'variables' field");
        assert_eq!(body["variables"]["projectPath"], "group/repo");
    }

    // ── T46 (NEW): Base URL trailing slash is normalized ─────────────────
    #[tokio::test]
    async fn test_graphql_base_url_trailing_slash() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": {"ok": true}
            })))
            .mount(&server)
            .await;

        // Add trailing slash to base URL
        let url_with_slash = format!("{}/", server.uri());
        let client = GraphqlClient::new(&url_with_slash, "tok123");
        let result = client.query("{ ok }", serde_json::json!({})).await;
        assert!(result.is_ok(), "Trailing slash should be normalized, got: {result:?}");
    }

    // ── T47 (NEW): Response with data:null and no errors → Err ───────────
    #[tokio::test]
    async fn test_graphql_data_null_no_errors() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": null
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let err = client.query("{ me }", serde_json::json!({})).await.unwrap_err();

        match err {
            LoreError::Other(msg) => assert!(
                msg.contains("missing 'data'"),
                "Expected 'missing data' message, got: {msg}"
            ),
            other => panic!("Expected LoreError::Other, got: {other:?}"),
        }
    }

    // ═══════════════════════════════════════════════════════════════════════
    // AC-3: Status Fetcher
    // ═══════════════════════════════════════════════════════════════════════

    /// Helper: build a GraphQL work-items response page with given issues.
    /// Each item: (iid, status_name_or_none).
    fn make_work_items_page(
        items: &[(i64, Option<&str>)],
        has_next_page: bool,
        end_cursor: Option<&str>,
    ) -> serde_json::Value {
        let nodes: Vec<serde_json::Value> = items
            .iter()
            .map(|(iid, status_name)| {
                let mut widgets = vec![
                    serde_json::json!({"__typename": "WorkItemWidgetDescription"}),
                ];
                if let Some(name) = status_name {
                    widgets.push(serde_json::json!({
                        "__typename": "WorkItemWidgetStatus",
                        "status": {
                            "name": name,
                            "category": "IN_PROGRESS",
                            "color": "#1f75cb",
                            "iconName": "status-in-progress"
                        }
                    }));
                }
                // Here None means "no status widget at all"; the case of a
                // status widget with a null status is covered separately by
                // make_null_status_widget_page below.
                serde_json::json!({
                    "iid": iid.to_string(),
                    "widgets": widgets,
                })
            })
            .collect();

        serde_json::json!({
            "data": {
                "project": {
                    "workItems": {
                        "nodes": nodes,
                        "pageInfo": {
                            "endCursor": end_cursor,
                            "hasNextPage": has_next_page,
                        }
                    }
                }
            }
        })
    }

    /// Helper: build a page where the issue has a status widget but the status is null.
    fn make_null_status_widget_page(iid: i64) -> serde_json::Value {
        serde_json::json!({
            "data": {
                "project": {
                    "workItems": {
                        "nodes": [{
                            "iid": iid.to_string(),
                            "widgets": [
                                {"__typename": "WorkItemWidgetStatus", "status": null}
                            ]
                        }],
                        "pageInfo": {
                            "endCursor": null,
                            "hasNextPage": false,
                        }
                    }
                }
            }
        })
    }

    // ── T11: Pagination across 2 pages ───────────────────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_pagination() {
        let server = MockServer::start().await;

        // Page 1: returns cursor "page2". Limiting this mock to a single use
        // via up_to_n_times(1) lets the second mock serve the next request.
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(make_work_items_page(
                &[(1, Some("In progress")), (2, Some("To do"))],
                true,
                Some("cursor_page2"),
            )))
            .up_to_n_times(1)
            .expect(1)
            .mount(&server)
            .await;

        // Page 2: no more pages
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(
                ResponseTemplate::new(200).set_body_json(make_work_items_page(
                    &[(3, Some("Done"))],
                    false,
                    None,
                )),
            )
            .expect(1)
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert_eq!(result.statuses.len(), 3);
        assert!(result.statuses.contains_key(&1));
        assert!(result.statuses.contains_key(&2));
        assert!(result.statuses.contains_key(&3));
        assert_eq!(result.all_fetched_iids.len(), 3);
        assert!(result.unsupported_reason.is_none());
    }

    // ── T12: No status widget → empty statuses, populated all_fetched ────
    #[tokio::test]
    async fn test_fetch_statuses_no_status_widget() {
        let server = MockServer::start().await;

        // Issue has widgets but none is WorkItemWidgetStatus
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": {
                    "project": {
                        "workItems": {
                            "nodes": [{
                                "iid": "42",
                                "widgets": [
                                    {"__typename": "WorkItemWidgetDescription"},
                                    {"__typename": "WorkItemWidgetLabels"}
                                ]
                            }],
                            "pageInfo": {"endCursor": null, "hasNextPage": false}
                        }
                    }
                }
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert!(result.statuses.is_empty(), "No status widget → no statuses");
        assert!(
            result.all_fetched_iids.contains(&42),
            "IID 42 should still be in all_fetched_iids"
        );
    }

    // ── T13: GraphQL 404 → graceful unsupported ──────────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_404_graceful() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(404))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert!(result.statuses.is_empty());
        assert!(result.all_fetched_iids.is_empty());
        assert!(matches!(
            result.unsupported_reason,
            Some(UnsupportedReason::GraphqlEndpointMissing)
        ));
    }

    // ── T14: GraphQL 403 → graceful unsupported ──────────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_403_graceful() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(403))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert!(result.statuses.is_empty());
        assert!(result.all_fetched_iids.is_empty());
        assert!(matches!(
            result.unsupported_reason,
            Some(UnsupportedReason::AuthForbidden)
        ));
    }

    // ── T25: ansi256_from_rgb known conversions ──────────────────────────
    #[test]
    fn test_ansi256_from_rgb() {
        // Black → index 16 (0,0,0 in 6x6x6 cube)
        assert_eq!(ansi256_from_rgb(0, 0, 0), 16);
        // White → index 231 (5,5,5 in 6x6x6 cube)
        assert_eq!(ansi256_from_rgb(255, 255, 255), 231);
        // GitLab "In progress" blue #1f75cb → (31,117,203)
        // ri = (31*5+127)/255  = 282/255  = 1
        // gi = (117*5+127)/255 = 712/255  = 2 (or 3 if the implementation rounds up)
        // bi = (203*5+127)/255 = 1142/255 = 4
        let idx = ansi256_from_rgb(31, 117, 203);
        // 16 + 36*1 + 6*2 + 4 = 68 with integer division,
        // or 16 + 36*1 + 6*3 + 4 = 74 depending on rounding
        assert!(
            (68..=74).contains(&idx),
            "Expected ansi256 index near 68-74 for #1f75cb, got: {idx}"
        );
    }

    // ── T27: __typename matching ignores non-status widgets ──────────────
    #[tokio::test]
    async fn test_typename_matching_ignores_non_status_widgets() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": {
                    "project": {
                        "workItems": {
                            "nodes": [{
                                "iid": "10",
                                "widgets": [
                                    {"__typename": "WorkItemWidgetDescription"},
                                    {"__typename": "WorkItemWidgetLabels"},
                                    {"__typename": "WorkItemWidgetAssignees"},
                                    {
                                        "__typename": "WorkItemWidgetStatus",
                                        "status": {
                                            "name": "In progress",
                                            "category": "IN_PROGRESS"
                                        }
                                    }
                                ]
                            }],
                            "pageInfo": {"endCursor": null, "hasNextPage": false}
                        }
                    }
                }
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        // Only status widget should be parsed
        assert_eq!(result.statuses.len(), 1);
        assert_eq!(result.statuses[&10].name, "In progress");
    }

    // ── T34: Cursor stall aborts pagination ──────────────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_cursor_stall_aborts() {
        let server = MockServer::start().await;

        // Both pages return the SAME cursor → stall detection
        let stall_response = serde_json::json!({
            "data": {
                "project": {
                    "workItems": {
                        "nodes": [{"iid": "1", "widgets": []}],
                        "pageInfo": {"endCursor": "same_cursor", "hasNextPage": true}
                    }
                }
            }
        });

        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(
                ResponseTemplate::new(200).set_body_json(stall_response.clone()),
            )
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        // Should have aborted after detecting stall, returning partial results
        assert!(
            result.all_fetched_iids.contains(&1),
            "Should contain the one IID fetched before stall"
        );
        // Pagination should NOT have looped infinitely — wiremock would time out.
        // The test passing at all proves the guard worked.
    }

    // ── T35: Successful fetch → unsupported_reason is None ───────────────
    #[tokio::test]
    async fn test_fetch_statuses_unsupported_reason_none_on_success() {
        let server = MockServer::start().await;
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(
                make_work_items_page(&[(1, Some("To do"))], false, None),
            ))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert!(result.unsupported_reason.is_none());
    }

    // ── T36: Complexity error reduces page size ──────────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_complexity_error_reduces_page_size() {
        let server = MockServer::start().await;
        let call_count = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0));
        let call_count_clone = call_count.clone();

        // First call (page_size=100) → complexity error
        // Second call (page_size=50) → success
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(move |_req: &wiremock::Request| {
                let n = call_count_clone.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
                if n == 0 {
                    // First request: simulate complexity error
                    ResponseTemplate::new(200).set_body_json(serde_json::json!({
                        "errors": [{"message": "Query has complexity of 300, which exceeds max complexity of 250"}]
                    }))
                } else {
                    // Subsequent requests: return data
                    ResponseTemplate::new(200).set_body_json(make_work_items_page(
                        &[(1, Some("In progress"))],
                        false,
                        None,
                    ))
                }
            })
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert_eq!(result.statuses.len(), 1);
        assert_eq!(result.statuses[&1].name, "In progress");
        // Should have made 2 calls: first failed, second succeeded
        assert_eq!(call_count.load(std::sync::atomic::Ordering::SeqCst), 2);
    }

    // ── T37: Timeout error reduces page size ─────────────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_timeout_error_reduces_page_size() {
        let server = MockServer::start().await;
        let call_count = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0));
        let call_count_clone = call_count.clone();

        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(move |_req: &wiremock::Request| {
                let n = call_count_clone.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
                if n == 0 {
                    ResponseTemplate::new(200).set_body_json(serde_json::json!({
                        "errors": [{"message": "Query timeout after 30000ms"}]
                    }))
                } else {
                    ResponseTemplate::new(200).set_body_json(make_work_items_page(
                        &[(5, Some("Done"))],
                        false,
                        None,
                    ))
                }
            })
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert_eq!(result.statuses.len(), 1);
        assert!(call_count.load(std::sync::atomic::Ordering::SeqCst) >= 2);
    }

    // ── T38: Smallest page still fails → returns Err ─────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_smallest_page_still_fails() {
        let server = MockServer::start().await;

        // Always return complexity error regardless of page size
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "errors": [{"message": "Query has complexity of 9999"}]
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let err = fetch_issue_statuses(&client, "group/project").await.unwrap_err();

        assert!(
            matches!(err, LoreError::Other(_)),
            "Expected error after exhausting all page sizes, got: {err:?}"
        );
    }

    // ── T39: Page size resets after success ───────────────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_page_size_resets_after_success() {
        let server = MockServer::start().await;
        let call_count = std::sync::Arc::new(std::sync::atomic::AtomicUsize::new(0));
        let call_count_clone = call_count.clone();

        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(move |_req: &wiremock::Request| {
                let n = call_count_clone.fetch_add(1, std::sync::atomic::Ordering::SeqCst);
                match n {
                    0 => {
                        // Page 1 at size 100: success, has next page
                        ResponseTemplate::new(200).set_body_json(make_work_items_page(
                            &[(1, Some("To do"))],
                            true,
                            Some("cursor_p2"),
                        ))
                    }
                    1 => {
                        // Page 2 at size 100 (reset): complexity error
                        ResponseTemplate::new(200).set_body_json(serde_json::json!({
                            "errors": [{"message": "Query has complexity of 300"}]
                        }))
                    }
                    2 => {
                        // Page 2 retry at size 50: success
                        ResponseTemplate::new(200).set_body_json(make_work_items_page(
                            &[(2, Some("Done"))],
                            false,
                            None,
                        ))
                    }
                    _ => ResponseTemplate::new(500),
                }
            })
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert_eq!(result.statuses.len(), 2);
        assert!(result.statuses.contains_key(&1));
        assert!(result.statuses.contains_key(&2));
        assert_eq!(call_count.load(std::sync::atomic::Ordering::SeqCst), 3);
    }

    // ── T42: Partial errors tracked across pages ─────────────────────────
    #[tokio::test]
    async fn test_fetch_statuses_partial_errors_tracked() {
        let server = MockServer::start().await;

        // Return data + errors (partial success)
        Mock::given(method("POST"))
            .and(path("/api/graphql"))
            .respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
                "data": {
                    "project": {
                        "workItems": {
                            "nodes": [{"iid": "1", "widgets": [
                                {"__typename": "WorkItemWidgetStatus", "status": {"name": "To do"}}
                            ]}],
                            "pageInfo": {"endCursor": null, "hasNextPage": false}
                        }
                    }
                },
                "errors": [{"message": "Rate limit warning: approaching limit"}]
            })))
            .mount(&server)
            .await;

        let client = GraphqlClient::new(&server.uri(), "tok123");
        let result = fetch_issue_statuses(&client, "group/project").await.unwrap();

        assert_eq!(result.partial_error_count, 1);
        assert_eq!(
            result.first_partial_error.as_deref(),
            Some("Rate limit warning: approaching limit")
        );
        // Data should still be present
        assert_eq!(result.statuses.len(), 1);
    }

// ── T49 (NEW): Empty project returns empty result ────────────────────
|
|
#[tokio::test]
|
|
async fn test_fetch_statuses_empty_project() {
|
|
let server = MockServer::start().await;
|
|
Mock::given(method("POST"))
|
|
.and(path("/api/graphql"))
|
|
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
|
|
"data": {
|
|
"project": {
|
|
"workItems": {
|
|
"nodes": [],
|
|
"pageInfo": {"endCursor": null, "hasNextPage": false}
|
|
}
|
|
}
|
|
}
|
|
})))
|
|
.mount(&server)
|
|
.await;
|
|
|
|
let client = GraphqlClient::new(&server.uri(), "tok123");
|
|
let result = fetch_issue_statuses(&client, "group/project").await.unwrap();
|
|
|
|
assert!(result.statuses.is_empty());
|
|
assert!(result.all_fetched_iids.is_empty());
|
|
assert!(result.unsupported_reason.is_none());
|
|
assert_eq!(result.partial_error_count, 0);
|
|
}
|
|
|
|
// ── T50 (NEW): Status widget with null status → in all_fetched but not in statuses
|
|
#[tokio::test]
|
|
async fn test_fetch_statuses_null_status_in_widget() {
|
|
let server = MockServer::start().await;
|
|
Mock::given(method("POST"))
|
|
.and(path("/api/graphql"))
|
|
.respond_with(
|
|
ResponseTemplate::new(200).set_body_json(make_null_status_widget_page(42)),
|
|
)
|
|
.mount(&server)
|
|
.await;
|
|
|
|
let client = GraphqlClient::new(&server.uri(), "tok123");
|
|
let result = fetch_issue_statuses(&client, "group/project").await.unwrap();
|
|
|
|
assert!(result.statuses.is_empty(), "Null status should not be in map");
|
|
assert!(
|
|
result.all_fetched_iids.contains(&42),
|
|
"IID should still be tracked in all_fetched_iids"
|
|
);
|
|
}
|
|
|
|
// ── T51 (NEW): Non-numeric IID is silently skipped ───────────────────
|
|
#[tokio::test]
|
|
async fn test_fetch_statuses_non_numeric_iid_skipped() {
|
|
let server = MockServer::start().await;
|
|
Mock::given(method("POST"))
|
|
.and(path("/api/graphql"))
|
|
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
|
|
"data": {
|
|
"project": {
|
|
"workItems": {
|
|
"nodes": [
|
|
{
|
|
"iid": "not_a_number",
|
|
"widgets": [{"__typename": "WorkItemWidgetStatus", "status": {"name": "To do"}}]
|
|
},
|
|
{
|
|
"iid": "7",
|
|
"widgets": [{"__typename": "WorkItemWidgetStatus", "status": {"name": "Done"}}]
|
|
}
|
|
],
|
|
"pageInfo": {"endCursor": null, "hasNextPage": false}
|
|
}
|
|
}
|
|
}
|
|
})))
|
|
.mount(&server)
|
|
.await;
|
|
|
|
let client = GraphqlClient::new(&server.uri(), "tok123");
|
|
let result = fetch_issue_statuses(&client, "group/project").await.unwrap();
|
|
|
|
// Non-numeric IID silently skipped, numeric IID present
|
|
assert_eq!(result.statuses.len(), 1);
|
|
assert!(result.statuses.contains_key(&7));
|
|
assert_eq!(result.all_fetched_iids.len(), 1);
|
|
}
|
|
|
|
// ── T52 (NEW): Pagination cursor None with hasNextPage=true → aborts
|
|
#[tokio::test]
|
|
async fn test_fetch_statuses_null_cursor_with_has_next_aborts() {
|
|
let server = MockServer::start().await;
|
|
Mock::given(method("POST"))
|
|
.and(path("/api/graphql"))
|
|
.respond_with(ResponseTemplate::new(200).set_body_json(serde_json::json!({
|
|
"data": {
|
|
"project": {
|
|
"workItems": {
|
|
"nodes": [{"iid": "1", "widgets": []}],
|
|
"pageInfo": {"endCursor": null, "hasNextPage": true}
|
|
}
|
|
}
|
|
}
|
|
})))
|
|
.mount(&server)
|
|
.await;
|
|
|
|
let client = GraphqlClient::new(&server.uri(), "tok123");
|
|
let result = fetch_issue_statuses(&client, "group/project").await.unwrap();
|
|
|
|
// Should abort after first page (null cursor + hasNextPage=true)
|
|
assert_eq!(result.all_fetched_iids.len(), 1);
|
|
}
|
|
}
|
|
```
|
|
|
|
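The pagination tests above encode the fetcher's retry schedule only indirectly (page size 100, complexity error, retry at 50). As a hedged, std-only sketch of one way the halving step could be written — `halve_page_size` and the floor value are hypothetical names for illustration, not the actual implementation:

```rust
/// Hypothetical sketch (not the real fetcher): after a GraphQL complexity
/// error, halve the page size; give up once it would drop below a floor.
fn halve_page_size(current: u32, floor: u32) -> Option<u32> {
    let next = current / 2;
    (next >= floor).then_some(next)
}

fn main() {
    assert_eq!(halve_page_size(100, 10), Some(50)); // first complexity error
    assert_eq!(halve_page_size(50, 10), Some(25)); // second error on same page
    assert_eq!(halve_page_size(12, 10), None); // would drop below the floor → abort
    println!("ok");
}
```

Under this sketch, the T41-style test's call count follows directly: page 1 at 100, page 2 at 100 (reset), page 2 retry at 50 gives three requests.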
---

## File 3: `tests/migration_tests.rs` — append to existing file

Tests AC-4 (Migration 021).
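For orientation, the observable shape these tests pin down corresponds to a migration roughly like the following sketch. The column names and index name come from the assertions below; the `TEXT`/`INTEGER` affinities, the index column list, and the file layout are assumptions, not the actual migration:

```sql
-- Sketch inferred from the tests; not the actual migrations/021_*.sql file.
ALTER TABLE issues ADD COLUMN status_name TEXT;
ALTER TABLE issues ADD COLUMN status_category TEXT;
ALTER TABLE issues ADD COLUMN status_color TEXT;
ALTER TABLE issues ADD COLUMN status_icon_name TEXT;
ALTER TABLE issues ADD COLUMN status_synced_at INTEGER;
CREATE INDEX idx_issues_project_status_name ON issues (project_id, status_name);
```

`ADD COLUMN` without a `DEFAULT` leaves existing rows NULL, which is exactly what T53 asserts.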
```rust
// ── T15: Migration 021 adds all 5 status columns ────────────────────
#[test]
fn test_migration_021_adds_columns() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);

    let columns: Vec<String> = conn
        .prepare("PRAGMA table_info(issues)")
        .unwrap()
        .query_map([], |row| row.get(1))
        .unwrap()
        .filter_map(|r| r.ok())
        .collect();

    let expected = [
        "status_name",
        "status_category",
        "status_color",
        "status_icon_name",
        "status_synced_at",
    ];
    for col in &expected {
        assert!(
            columns.contains(&col.to_string()),
            "Missing column: {col}. Found: {columns:?}"
        );
    }
}

// ── T16: Migration 021 adds compound index ───────────────────────────
#[test]
fn test_migration_021_adds_index() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);

    let indexes: Vec<String> = conn
        .prepare("PRAGMA index_list(issues)")
        .unwrap()
        .query_map([], |row| row.get::<_, String>(1))
        .unwrap()
        .filter_map(|r| r.ok())
        .collect();

    assert!(
        indexes.contains(&"idx_issues_project_status_name".to_string()),
        "Missing index idx_issues_project_status_name. Found: {indexes:?}"
    );
}

// ── T53 (NEW): Existing issues retain NULL defaults after migration ──
#[test]
fn test_migration_021_existing_rows_have_null_defaults() {
    let conn = create_test_db();
    apply_migrations(&conn, 20);

    // Insert an issue before migration 021
    conn.execute(
        "INSERT INTO projects (id, gitlab_project_id, path_with_namespace)
         VALUES (1, 100, 'group/project')",
        [],
    )
    .unwrap();
    conn.execute(
        "INSERT INTO issues (gitlab_id, project_id, iid, state, created_at, updated_at, last_seen_at)
         VALUES (1, 1, 42, 'opened', 1000, 1000, 1000)",
        [],
    )
    .unwrap();

    // Now apply migration 021
    apply_migrations(&conn, 21);

    let (name, category, color, icon, synced_at): (
        Option<String>,
        Option<String>,
        Option<String>,
        Option<String>,
        Option<i64>,
    ) = conn
        .query_row(
            "SELECT status_name, status_category, status_color, status_icon_name, status_synced_at
             FROM issues WHERE iid = 42",
            [],
            |row| Ok((row.get(0)?, row.get(1)?, row.get(2)?, row.get(3)?, row.get(4)?)),
        )
        .unwrap();

    assert!(name.is_none(), "status_name should be NULL");
    assert!(category.is_none(), "status_category should be NULL");
    assert!(color.is_none(), "status_color should be NULL");
    assert!(icon.is_none(), "status_icon_name should be NULL");
    assert!(synced_at.is_none(), "status_synced_at should be NULL");
}

// ── T54 (NEW): SELECT on new columns succeeds after migration ────────
#[test]
fn test_migration_021_select_new_columns_succeeds() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);

    // This is the exact query pattern used in show.rs
    let result = conn.execute_batch(
        "SELECT status_name, status_category, status_color, status_icon_name, status_synced_at
         FROM issues LIMIT 1",
    );
    assert!(result.is_ok(), "SELECT on new columns should succeed: {result:?}");
}
```

---

## File 4: `src/core/config.rs` — inline `#[cfg(test)]` additions

Tests AC-5 (Config toggle).
```rust
#[cfg(test)]
mod tests {
    use super::*;

    // ── T23: Default SyncConfig has fetch_work_item_status=true ──────
    #[test]
    fn test_config_fetch_work_item_status_default_true() {
        let config = SyncConfig::default();
        assert!(config.fetch_work_item_status);
    }

    // ── T24: JSON without key defaults to true ──────────────────────
    #[test]
    fn test_config_deserialize_without_key() {
        // Minimal SyncConfig JSON — no fetchWorkItemStatus key
        let json = r#"{}"#;
        let config: SyncConfig = serde_json::from_str(json).unwrap();
        assert!(
            config.fetch_work_item_status,
            "Missing key should default to true"
        );
    }
}
```

---

## File 5: `tests/status_enrichment_tests.rs` (NEW)

Tests AC-6 (Orchestrator enrichment), AC-10 (Robot envelope).
```rust
//! Integration tests for status enrichment DB operations.

use rusqlite::Connection;
use std::collections::{HashMap, HashSet};
use std::path::PathBuf;

// Import the enrichment types — they must be `pub` (not just `pub(crate)`) to be
// visible from `tests/`. If they stay private to the orchestrator, these tests
// go inline in that module instead. Adjust the path as needed.
use lore::gitlab::types::WorkItemStatus;

fn get_migrations_dir() -> PathBuf {
    PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("migrations")
}

fn apply_migrations(conn: &Connection, through_version: i32) {
    let migrations_dir = get_migrations_dir();
    for version in 1..=through_version {
        let entries: Vec<_> = std::fs::read_dir(&migrations_dir)
            .unwrap()
            .filter_map(|e| e.ok())
            .filter(|e| {
                e.file_name()
                    .to_string_lossy()
                    .starts_with(&format!("{:03}", version))
            })
            .collect();
        assert!(!entries.is_empty(), "Migration {} not found", version);
        let sql = std::fs::read_to_string(entries[0].path()).unwrap();
        conn.execute_batch(&sql)
            .unwrap_or_else(|e| panic!("Migration {} failed: {}", version, e));
    }
}

fn create_test_db() -> Connection {
    let conn = Connection::open_in_memory().unwrap();
    conn.pragma_update(None, "foreign_keys", "ON").unwrap();
    conn
}

/// Insert a test project and issue into the DB.
fn seed_issue(conn: &Connection, project_id: i64, iid: i64) {
    conn.execute(
        "INSERT OR IGNORE INTO projects (id, gitlab_project_id, path_with_namespace, web_url)
         VALUES (?1, ?1, 'group/project', 'https://gitlab.example.com/group/project')",
        [project_id],
    )
    .unwrap();
    conn.execute(
        "INSERT INTO issues (gitlab_id, project_id, iid, state, created_at, updated_at, last_seen_at)
         VALUES (?1, ?2, ?1, 'opened', 1000, 1000, 1000)",
        rusqlite::params![iid, project_id],
    )
    .unwrap();
}

/// Insert an issue with pre-existing status values (for clear/idempotency tests).
fn seed_issue_with_status(
    conn: &Connection,
    project_id: i64,
    iid: i64,
    status_name: &str,
    synced_at: i64,
) {
    seed_issue(conn, project_id, iid);
    conn.execute(
        "UPDATE issues SET status_name = ?1, status_category = 'IN_PROGRESS',
                status_color = '#1f75cb', status_icon_name = 'status-in-progress',
                status_synced_at = ?2
         WHERE project_id = ?3 AND iid = ?4",
        rusqlite::params![status_name, synced_at, project_id, iid],
    )
    .unwrap();
}

fn make_status(name: &str) -> WorkItemStatus {
    WorkItemStatus {
        name: name.to_string(),
        category: Some("IN_PROGRESS".to_string()),
        color: Some("#1f75cb".to_string()),
        icon_name: Some("status-in-progress".to_string()),
    }
}

fn read_status(
    conn: &Connection,
    project_id: i64,
    iid: i64,
) -> (
    Option<String>,
    Option<String>,
    Option<String>,
    Option<String>,
    Option<i64>,
) {
    conn.query_row(
        "SELECT status_name, status_category, status_color, status_icon_name, status_synced_at
         FROM issues WHERE project_id = ?1 AND iid = ?2",
        rusqlite::params![project_id, iid],
        |row| Ok((row.get(0)?, row.get(1)?, row.get(2)?, row.get(3)?, row.get(4)?)),
    )
    .unwrap()
}

// ── T17: Enrich writes all 4 status columns + synced_at ──────────────
#[test]
fn test_enrich_issue_statuses_txn() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue(&conn, 1, 42);

    let mut statuses = HashMap::new();
    statuses.insert(42_i64, make_status("In progress"));
    let _all_fetched: HashSet<i64> = [42].into_iter().collect();
    let now_ms = 1_700_000_000_000_i64;

    // Call the enrichment logic
    let tx = conn.unchecked_transaction().unwrap();
    let mut update_stmt = tx
        .prepare_cached(
            "UPDATE issues SET status_name = ?1, status_category = ?2, status_color = ?3,
                    status_icon_name = ?4, status_synced_at = ?5
             WHERE project_id = ?6 AND iid = ?7",
        )
        .unwrap();
    let mut enriched = 0usize;
    for (iid, status) in &statuses {
        let rows = update_stmt
            .execute(rusqlite::params![
                &status.name,
                &status.category,
                &status.color,
                &status.icon_name,
                now_ms,
                1_i64,
                iid,
            ])
            .unwrap();
        if rows > 0 {
            enriched += 1;
        }
    }
    drop(update_stmt); // release the statement's borrow of `tx` before commit
    tx.commit().unwrap();

    assert_eq!(enriched, 1);

    let (name, cat, color, icon, synced) = read_status(&conn, 1, 42);
    assert_eq!(name.as_deref(), Some("In progress"));
    assert_eq!(cat.as_deref(), Some("IN_PROGRESS"));
    assert_eq!(color.as_deref(), Some("#1f75cb"));
    assert_eq!(icon.as_deref(), Some("status-in-progress"));
    assert_eq!(synced, Some(now_ms));
}

// ── T18: Unknown IID in status map → no error, returns 0 ────────────
#[test]
fn test_enrich_skips_unknown_iids() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    // Don't insert any issues

    let mut statuses = HashMap::new();
    statuses.insert(999_i64, make_status("In progress"));
    let _all_fetched: HashSet<i64> = [999].into_iter().collect();

    let tx = conn.unchecked_transaction().unwrap();
    let mut update_stmt = tx
        .prepare_cached(
            "UPDATE issues SET status_name = ?1, status_category = ?2, status_color = ?3,
                    status_icon_name = ?4, status_synced_at = ?5
             WHERE project_id = ?6 AND iid = ?7",
        )
        .unwrap();
    let mut enriched = 0usize;
    for (iid, status) in &statuses {
        let rows = update_stmt
            .execute(rusqlite::params![
                &status.name,
                &status.category,
                &status.color,
                &status.icon_name,
                1_700_000_000_000_i64,
                1_i64,
                iid,
            ])
            .unwrap();
        if rows > 0 {
            enriched += 1;
        }
    }
    drop(update_stmt); // release the statement's borrow of `tx` before commit
    tx.commit().unwrap();

    assert_eq!(enriched, 0, "No DB rows match → 0 enriched");
}

// ── T19: Removed status → fields NULLed, synced_at updated ──────────
#[test]
fn test_enrich_clears_removed_status() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 42, "In progress", 1_600_000_000_000);

    // Issue 42 is in all_fetched but NOT in statuses → should be cleared
    let statuses: HashMap<i64, WorkItemStatus> = HashMap::new();
    let all_fetched: HashSet<i64> = [42].into_iter().collect();
    let now_ms = 1_700_000_000_000_i64;

    let tx = conn.unchecked_transaction().unwrap();
    let mut clear_stmt = tx
        .prepare_cached(
            "UPDATE issues SET status_name = NULL, status_category = NULL, status_color = NULL,
                    status_icon_name = NULL, status_synced_at = ?3
             WHERE project_id = ?1 AND iid = ?2 AND status_name IS NOT NULL",
        )
        .unwrap();
    let mut cleared = 0usize;
    for iid in &all_fetched {
        if !statuses.contains_key(iid) {
            let rows = clear_stmt
                .execute(rusqlite::params![1_i64, iid, now_ms])
                .unwrap();
            if rows > 0 {
                cleared += 1;
            }
        }
    }
    drop(clear_stmt); // release the statement's borrow of `tx` before commit
    tx.commit().unwrap();

    assert_eq!(cleared, 1);

    let (name, cat, color, icon, synced) = read_status(&conn, 1, 42);
    assert!(name.is_none(), "status_name should be NULL after clear");
    assert!(cat.is_none(), "status_category should be NULL after clear");
    assert!(color.is_none(), "status_color should be NULL after clear");
    assert!(icon.is_none(), "status_icon_name should be NULL after clear");
    // Crucially: synced_at is NOT NULL — it records when we confirmed absence
    assert_eq!(synced, Some(now_ms), "status_synced_at should be updated to now_ms");
}

// ── T20: Transaction rolls back on simulated failure ─────────────────
#[test]
fn test_enrich_transaction_rolls_back_on_failure() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 42, "Original", 1_600_000_000_000);
    seed_issue(&conn, 1, 43);

    // Simulate: start transaction, update issue 42, then fail before commit
    let result = (|| -> rusqlite::Result<()> {
        let tx = conn.unchecked_transaction()?;
        tx.execute(
            "UPDATE issues SET status_name = 'Changed' WHERE project_id = 1 AND iid = 42",
            [],
        )?;
        // Simulate error: intentionally cause failure
        Err(rusqlite::Error::SqliteFailure(
            rusqlite::ffi::Error::new(1),
            Some("simulated failure".to_string()),
        ))
        // tx drops without commit → rollback
    })();

    assert!(result.is_err());

    // Original status should be intact (transaction rolled back)
    let (name, _, _, _, _) = read_status(&conn, 1, 42);
    assert_eq!(
        name.as_deref(),
        Some("Original"),
        "Transaction should have rolled back"
    );
}

// ── T26: Idempotent across two runs ──────────────────────────────────
#[test]
fn test_enrich_idempotent_across_two_runs() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue(&conn, 1, 42);

    let mut statuses = HashMap::new();
    statuses.insert(42_i64, make_status("In progress"));
    let _all_fetched: HashSet<i64> = [42].into_iter().collect();
    let now_ms = 1_700_000_000_000_i64;

    // Run enrichment twice with same data
    for _ in 0..2 {
        let tx = conn.unchecked_transaction().unwrap();
        let mut stmt = tx
            .prepare_cached(
                "UPDATE issues SET status_name = ?1, status_category = ?2, status_color = ?3,
                        status_icon_name = ?4, status_synced_at = ?5
                 WHERE project_id = ?6 AND iid = ?7",
            )
            .unwrap();
        for (iid, status) in &statuses {
            stmt.execute(rusqlite::params![
                &status.name,
                &status.category,
                &status.color,
                &status.icon_name,
                now_ms,
                1_i64,
                iid,
            ])
            .unwrap();
        }
        drop(stmt); // release the statement's borrow of `tx` before commit
        tx.commit().unwrap();
    }

    let (name, _, _, _, _) = read_status(&conn, 1, 42);
    assert_eq!(name.as_deref(), Some("In progress"));
}

// ── T30: Clear sets synced_at (not NULL) ─────────────────────────────
#[test]
fn test_enrich_sets_synced_at_on_clear() {
    // Same as T19 but explicitly named for the AC assertion
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 10, "Done", 1_500_000_000_000);

    let now_ms = 1_700_000_000_000_i64;
    conn.execute(
        "UPDATE issues SET status_name = NULL, status_category = NULL, status_color = NULL,
                status_icon_name = NULL, status_synced_at = ?1
         WHERE project_id = 1 AND iid = 10",
        [now_ms],
    )
    .unwrap();

    let (_, _, _, _, synced) = read_status(&conn, 1, 10);
    assert_eq!(
        synced,
        Some(now_ms),
        "Clearing status must still set synced_at to record the check"
    );
}

// ── T31: Enrichment error captured in IngestProjectResult ────────────
// NOTE: This test validates the struct field exists and can hold a value.
// Full integration requires the orchestrator wiring, which is tested via
// cargo test on the actual orchestrator code.
#[test]
fn test_enrichment_error_captured_in_result() {
    // This is a compile-time + field-existence test.
    // IngestProjectResult must have status_enrichment_error: Option<String>.
    // The actual population is tested in the orchestrator integration test.
    //
    // Pseudo-test structure (will compile once IngestProjectResult is updated):
    //
    // let mut result = IngestProjectResult::default();
    // result.status_enrichment_error = Some("GraphQL error: timeout".to_string());
    // assert_eq!(
    //     result.status_enrichment_error.as_deref(),
    //     Some("GraphQL error: timeout")
    // );
    //
    // Uncomment when IngestProjectResult is implemented.
}

// ── T32: Robot sync envelope includes status_enrichment object ───────
// NOTE: This is an E2E test that requires running the full CLI.
// Kept here as specification — implementation requires capturing CLI JSON output.
#[test]
fn test_robot_sync_includes_status_enrichment() {
    // Specification: `lore --robot sync` output must include per-project:
    // {
    //   "status_enrichment": {
    //     "mode": "fetched|unsupported|skipped",
    //     "reason": null | "graphql_endpoint_missing" | "auth_forbidden",
    //     "seen": N,
    //     "enriched": N,
    //     "cleared": N,
    //     "without_widget": N,
    //     "partial_errors": N,
    //     "first_partial_error": null | "message",
    //     "error": null | "message"
    //   }
    // }
    //
    // This is validated by inspecting the JSON serialization of IngestProjectResult
    // in the sync output path. The struct field tests above + serialization tests
    // in the CLI layer cover this.
}

// ── T41: Project path missing → enrichment skipped ───────────────────
#[test]
fn test_project_path_missing_skips_enrichment() {
    use rusqlite::OptionalExtension;

    let conn = create_test_db();
    apply_migrations(&conn, 21);

    // Insert a normal project — the lookup below targets a different,
    // non-existent project_id, which is why it uses .optional()
    conn.execute(
        "INSERT INTO projects (id, gitlab_project_id, path_with_namespace)
         VALUES (1, 100, 'group/project')",
        [],
    )
    .unwrap();

    // Simulate the orchestrator's path lookup for a non-existent project_id
    let project_path: Option<String> = conn
        .query_row(
            "SELECT path_with_namespace FROM projects WHERE id = ?1",
            [999_i64], // non-existent project
            |r| r.get(0),
        )
        .optional()
        .unwrap();

    assert!(
        project_path.is_none(),
        "Non-existent project should return None"
    );
    // The orchestrator should set:
    // result.status_enrichment_error = Some("project_path_missing".to_string());
}

// ── T55 (NEW): Config toggle false → enrichment skipped ──────────────
#[test]
fn test_config_toggle_false_skips_enrichment() {
    // Validates the SyncConfig toggle behavior
    let json = r#"{"fetchWorkItemStatus": false}"#;
    let config: lore::core::config::SyncConfig = serde_json::from_str(json).unwrap();
    assert!(
        !config.fetch_work_item_status,
        "Explicit false should override default"
    );
    // When this is false, the orchestrator skips enrichment and sets
    // result.status_enrichment_mode = "skipped"
}
```

---

## File 6: `tests/status_filter_tests.rs` (NEW)

Tests AC-9 (List filter).
```rust
//! Integration tests for --status filter on issue listing.

use rusqlite::Connection;
use std::path::PathBuf;

fn get_migrations_dir() -> PathBuf {
    PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("migrations")
}

fn apply_migrations(conn: &Connection, through_version: i32) {
    let migrations_dir = get_migrations_dir();
    for version in 1..=through_version {
        let entries: Vec<_> = std::fs::read_dir(&migrations_dir)
            .unwrap()
            .filter_map(|e| e.ok())
            .filter(|e| {
                e.file_name()
                    .to_string_lossy()
                    .starts_with(&format!("{:03}", version))
            })
            .collect();
        assert!(!entries.is_empty(), "Migration {} not found", version);
        let sql = std::fs::read_to_string(entries[0].path()).unwrap();
        conn.execute_batch(&sql)
            .unwrap_or_else(|e| panic!("Migration {} failed: {}", version, e));
    }
}

fn create_test_db() -> Connection {
    let conn = Connection::open_in_memory().unwrap();
    conn.pragma_update(None, "foreign_keys", "ON").unwrap();
    conn
}

/// Seed a project + issue with status
fn seed_issue_with_status(
    conn: &Connection,
    project_id: i64,
    iid: i64,
    state: &str,
    status_name: Option<&str>,
) {
    conn.execute(
        "INSERT OR IGNORE INTO projects (id, gitlab_project_id, path_with_namespace, web_url)
         VALUES (?1, ?1, 'group/project', 'https://gitlab.example.com/group/project')",
        [project_id],
    )
    .unwrap();
    conn.execute(
        "INSERT INTO issues (gitlab_id, project_id, iid, state, created_at, updated_at, last_seen_at, status_name)
         VALUES (?1, ?2, ?1, ?3, 1000, 1000, 1000, ?4)",
        rusqlite::params![iid, project_id, state, status_name],
    )
    .unwrap();
}

// ── T21: Filter by status returns correct issue ──────────────────────
#[test]
fn test_list_filter_by_status() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 1, "opened", Some("In progress"));
    seed_issue_with_status(&conn, 1, 2, "opened", Some("To do"));

    let count: i64 = conn
        .query_row(
            "SELECT COUNT(*) FROM issues WHERE status_name = ?1 COLLATE NOCASE",
            ["In progress"],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(count, 1);
}

// ── T22: Case-insensitive status filter ──────────────────────────────
#[test]
fn test_list_filter_by_status_case_insensitive() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 1, "opened", Some("In progress"));

    let count: i64 = conn
        .query_row(
            "SELECT COUNT(*) FROM issues WHERE status_name = ?1 COLLATE NOCASE",
            ["in progress"],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(count, 1, "'in progress' should match 'In progress' via COLLATE NOCASE");
}

// ── T40: Multiple --status values (OR semantics) ─────────────────────
#[test]
fn test_list_filter_by_multiple_statuses() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 1, "opened", Some("In progress"));
    seed_issue_with_status(&conn, 1, 2, "opened", Some("To do"));
    seed_issue_with_status(&conn, 1, 3, "closed", Some("Done"));

    let count: i64 = conn
        .query_row(
            // COLLATE must bind to the column here: SQLite takes the collation
            // for IN comparisons from the left operand, so a trailing
            // `IN (...) COLLATE NOCASE` would have no effect.
            "SELECT COUNT(*) FROM issues
             WHERE status_name COLLATE NOCASE IN (?1, ?2)",
            rusqlite::params!["In progress", "To do"],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(count, 2, "Should match both 'In progress' and 'To do'");
}

// ── T61 (NEW): --status combined with --state (AND logic) ────────────
#[test]
fn test_list_filter_status_and_state() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 1, "opened", Some("In progress"));
    seed_issue_with_status(&conn, 1, 2, "closed", Some("In progress"));

    let count: i64 = conn
        .query_row(
            "SELECT COUNT(*) FROM issues
             WHERE state = ?1 AND status_name = ?2 COLLATE NOCASE",
            rusqlite::params!["opened", "In progress"],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(count, 1, "Only the opened issue matches both filters");
}

// ── T62 (NEW): --status with no matching issues → 0 results ─────────
#[test]
fn test_list_filter_by_status_no_match() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 1, "opened", Some("In progress"));

    let count: i64 = conn
        .query_row(
            "SELECT COUNT(*) FROM issues WHERE status_name = ?1 COLLATE NOCASE",
            ["Nonexistent status"],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(count, 0);
}

// ── T63 (NEW): NULL status excluded from filter ──────────────────────
#[test]
fn test_list_filter_by_status_excludes_null() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue_with_status(&conn, 1, 1, "opened", Some("In progress"));
    seed_issue_with_status(&conn, 1, 2, "opened", None); // No status

    let count: i64 = conn
        .query_row(
            "SELECT COUNT(*) FROM issues WHERE status_name = ?1 COLLATE NOCASE",
            ["In progress"],
            |r| r.get(0),
        )
        .unwrap();
    assert_eq!(count, 1, "NULL status should not match");
}
```

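Because SQLite resolves the collation for an `IN` comparison from the left operand, a dynamic `--status` filter wants `COLLATE NOCASE` attached to the column and one numbered placeholder per value. A std-only sketch of such a clause builder — the function name is hypothetical, not the CLI's actual code:

```rust
/// Hypothetical helper: build the WHERE fragment for zero or more --status
/// values, returning None when the filter should be omitted entirely.
fn status_filter_clause(n: usize) -> Option<String> {
    if n == 0 {
        return None;
    }
    // Numbered placeholders ?1..?n, matching rusqlite's positional params.
    let placeholders: Vec<String> = (1..=n).map(|i| format!("?{i}")).collect();
    Some(format!(
        "status_name COLLATE NOCASE IN ({})",
        placeholders.join(", ")
    ))
}

fn main() {
    assert_eq!(
        status_filter_clause(2).as_deref(),
        Some("status_name COLLATE NOCASE IN (?1, ?2)")
    );
    assert_eq!(status_filter_clause(0), None);
    println!("ok");
}
```

The caller would splice this into the query with `AND`, alongside any `state = ?` predicate, preserving the AND-of-ORs semantics exercised by T40 and T61.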
---

## File 7: `tests/status_display_tests.rs` (NEW)

Tests AC-7 (Show display), AC-8 (List display).
```rust
//! Integration tests for status field presence in show/list SQL queries.
//! These verify the data layer — not terminal rendering (which requires
//! visual inspection or snapshot testing).

use rusqlite::Connection;
use std::path::PathBuf;

fn get_migrations_dir() -> PathBuf {
    PathBuf::from(env!("CARGO_MANIFEST_DIR")).join("migrations")
}

fn apply_migrations(conn: &Connection, through_version: i32) {
    let migrations_dir = get_migrations_dir();
    for version in 1..=through_version {
        let entries: Vec<_> = std::fs::read_dir(&migrations_dir)
            .unwrap()
            .filter_map(|e| e.ok())
            .filter(|e| {
                e.file_name()
                    .to_string_lossy()
                    .starts_with(&format!("{:03}", version))
            })
            .collect();
        assert!(!entries.is_empty(), "Migration {} not found", version);
        let sql = std::fs::read_to_string(entries[0].path()).unwrap();
        conn.execute_batch(&sql)
            .unwrap_or_else(|e| panic!("Migration {} failed: {}", version, e));
    }
}

fn create_test_db() -> Connection {
    let conn = Connection::open_in_memory().unwrap();
    conn.pragma_update(None, "foreign_keys", "ON").unwrap();
    conn
}

fn seed_issue(
    conn: &Connection,
    iid: i64,
    status_name: Option<&str>,
    status_category: Option<&str>,
) {
    conn.execute(
        "INSERT OR IGNORE INTO projects (id, gitlab_project_id, path_with_namespace, web_url)
         VALUES (1, 100, 'group/project', 'https://gitlab.example.com/group/project')",
        [],
    )
    .unwrap();
    conn.execute(
        "INSERT INTO issues (gitlab_id, project_id, iid, state, created_at, updated_at, last_seen_at,
                status_name, status_category, status_color, status_icon_name, status_synced_at)
         VALUES (?1, 1, ?1, 'opened', 1000, 1000, 1000, ?2, ?3, '#1f75cb', 'status', 1700000000000)",
        rusqlite::params![iid, status_name, status_category],
    )
    .unwrap();
}

// ── T56 (NEW): Show issue query includes status fields ───────────────
#[test]
fn test_show_issue_includes_status_fields() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue(&conn, 42, Some("In progress"), Some("IN_PROGRESS"));

    // Simulate the show.rs SQL query — verify all 5 status columns are readable
    let (name, cat, color, icon, synced): (
        Option<String>,
        Option<String>,
        Option<String>,
        Option<String>,
        Option<i64>,
    ) = conn
        .query_row(
            "SELECT i.status_name, i.status_category, i.status_color,
                    i.status_icon_name, i.status_synced_at
             FROM issues i WHERE i.iid = 42",
            [],
            |row| Ok((row.get(0)?, row.get(1)?, row.get(2)?, row.get(3)?, row.get(4)?)),
        )
        .unwrap();

    assert_eq!(name.as_deref(), Some("In progress"));
    assert_eq!(cat.as_deref(), Some("IN_PROGRESS"));
    assert_eq!(color.as_deref(), Some("#1f75cb"));
    assert_eq!(icon.as_deref(), Some("status"));
    assert_eq!(synced, Some(1_700_000_000_000));
}

// ── T57 (NEW): Show issue with NULL status → fields are None ─────────
#[test]
fn test_show_issue_null_status_fields() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue(&conn, 42, None, None);

    let (name, cat): (Option<String>, Option<String>) = conn
        .query_row(
            "SELECT i.status_name, i.status_category FROM issues i WHERE i.iid = 42",
            [],
            |row| Ok((row.get(0)?, row.get(1)?)),
        )
        .unwrap();

    assert!(name.is_none());
    assert!(cat.is_none());
}

// ── T58 (NEW): Robot show includes null status (not absent) ──────────
// Specification: In robot mode JSON output, status fields must be present
// as null values (not omitted from the JSON entirely).
// Validated by the IssueDetailJson struct having non-skip-serializing fields.
// This is a compile-time guarantee — the struct definition is the test.
#[test]
fn test_robot_show_status_fields_present_when_null() {
    // Validate via serde: serialize a struct with None status fields
    // and verify the keys are present as null in the output.
    #[derive(serde::Serialize)]
    struct MockIssueJson {
        status_name: Option<String>,
        status_category: Option<String>,
        status_color: Option<String>,
        status_icon_name: Option<String>,
        status_synced_at: Option<i64>,
    }

    let json = serde_json::to_value(MockIssueJson {
        status_name: None,
        status_category: None,
        status_color: None,
        status_icon_name: None,
        status_synced_at: None,
    })
    .unwrap();

    // Keys must be present (as null), not absent
    assert!(json.get("status_name").is_some(), "status_name key must be present");
    assert!(json["status_name"].is_null(), "status_name must be null");
    assert!(json.get("status_synced_at").is_some(), "status_synced_at key must be present");
    assert!(json["status_synced_at"].is_null(), "status_synced_at must be null");
}

// ── T59 (NEW): List issues query includes status columns ─────────────
#[test]
fn test_list_issues_includes_status_columns() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue(&conn, 1, Some("To do"), Some("TO_DO"));
    seed_issue(&conn, 2, Some("Done"), Some("DONE"));

    let rows: Vec<(i64, Option<String>, Option<String>)> = conn
        .prepare(
            "SELECT i.iid, i.status_name, i.status_category
             FROM issues i ORDER BY i.iid",
        )
        .unwrap()
        .query_map([], |row| Ok((row.get(0)?, row.get(1)?, row.get(2)?)))
        .unwrap()
        .filter_map(|r| r.ok())
        .collect();

    assert_eq!(rows.len(), 2);
    assert_eq!(rows[0].1.as_deref(), Some("To do"));
    assert_eq!(rows[0].2.as_deref(), Some("TO_DO"));
    assert_eq!(rows[1].1.as_deref(), Some("Done"));
    assert_eq!(rows[1].2.as_deref(), Some("DONE"));
}

// ── T60 (NEW): List issues NULL status → empty string in display ─────
#[test]
fn test_list_issues_null_status_returns_none() {
    let conn = create_test_db();
    apply_migrations(&conn, 21);
    seed_issue(&conn, 1, None, None);

    let status: Option<String> = conn
        .query_row(
            "SELECT i.status_name FROM issues i WHERE i.iid = 1",
            [],
            |row| row.get(0),
)
|
|
.unwrap();
|
|
|
|
assert!(status.is_none(), "NULL status should map to None in Rust");
|
|
}
|
|
```

---

## Gap Analysis Summary

| Gap Found | Test Added | AC |
|-----------|-----------|-----|
| No test for delta-seconds Retry-After | T43 | AC-1 |
| No test for network errors | T44 | AC-1 |
| No test for request body format | T45 | AC-1 |
| No test for base URL trailing slash | T46 | AC-1 |
| No test for data:null response | T47 | AC-1 |
| No test for all 5 system statuses | T48 | AC-2 |
| No test for empty project | T49 | AC-3 |
| No test for null status in widget | T50 | AC-3 |
| No test for non-numeric IID | T51 | AC-3 |
| No test for null cursor with hasNextPage | T52 | AC-3 |
| No test for existing-row NULL defaults | T53 | AC-4 |
| No test for SELECT succeeding after migration | T54 | AC-4 |
| No test for config toggle false | T55 | AC-5/6 |
| No test for show issue with status | T56 | AC-7 |
| No test for show issue with NULL status | T57 | AC-7 |
| No test for robot JSON null-not-absent | T58 | AC-7 |
| No test for list query with status columns | T59 | AC-8 |
| No test for list NULL status display | T60 | AC-8 |
| No test for --status AND --state combo | T61 | AC-9 |
| No test for --status with no match | T62 | AC-9 |
| No test for NULL excluded from filter | T63 | AC-9 |

---

## Wiremock Pattern Notes

The `fetch_issue_statuses` tests use `wiremock 0.6` with dynamic response handlers
via `respond_with(move |req: &wiremock::Request| { ... })`. This is the recommended
pattern for tests that need to return a different response on each call
(pagination, adaptive page sizing).

Key patterns:

- **Sequential responses**: use an `AtomicUsize` counter inside the closure
- **Request inspection**: parse `req.body` as JSON to check the GraphQL variables
- **LIFO matching**: wiremock matches the most-recently-mounted mock first
- **Up-to-n**: use `.up_to_n_times(1)` to enforce pagination page ordering
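The sequential-response pattern can be sketched without wiremock itself. This is a minimal std-only illustration of the closure shape the tests use; `make_sequential_responder` is a hypothetical helper, not a wiremock API — in the real tests the closure body runs inside `respond_with` and returns a `ResponseTemplate`.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Returns a closure that yields a different canned body per call,
// repeating the last body once the list is exhausted — the same shape
// as a wiremock `respond_with` closure driven by an AtomicUsize.
fn make_sequential_responder(bodies: Vec<&'static str>) -> impl Fn() -> &'static str {
    let counter = AtomicUsize::new(0);
    move || {
        // fetch_add returns the previous value, so calls see index 0, 1, 2, ...
        let i = counter.fetch_add(1, Ordering::SeqCst);
        bodies[i.min(bodies.len() - 1)]
    }
}

fn main() {
    let respond = make_sequential_responder(vec![
        r#"{"pageInfo":{"hasNextPage":true,"endCursor":"c1"}}"#,
        r#"{"pageInfo":{"hasNextPage":false,"endCursor":null}}"#,
    ]);
    assert!(respond().contains(r#""hasNextPage":true"#));  // page 1
    assert!(respond().contains(r#""hasNextPage":false"#)); // page 2
    assert!(respond().contains(r#""hasNextPage":false"#)); // repeats last page
    println!("sequential responder ok");
}
```

The closure only needs `Fn` (not `FnMut`) because `AtomicUsize::fetch_add` takes `&self`, which is what makes this pattern compatible with wiremock's handler trait.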
## Cargo.toml Addition

```toml
[dependencies]
httpdate = "1" # For Retry-After HTTP-date parsing
```

This crate is well-maintained, minimal, and does exactly one thing.
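
To motivate the dependency: Retry-After carries either delta-seconds or an HTTP-date, and only the date form needs `httpdate`. A std-only sketch of the delta branch (the helper name is hypothetical, not from the plan):

```rust
use std::time::Duration;

// Hypothetical helper: parse the delta-seconds form of a Retry-After
// header. The HTTP-date form ("Wed, 21 Oct 2015 07:28:00 GMT") is the
// part that needs httpdate::parse_http_date, diffed against
// SystemTime::now() to recover a Duration.
fn parse_retry_after_delta(value: &str) -> Option<Duration> {
    value.trim().parse::<u64>().ok().map(Duration::from_secs)
}

fn main() {
    assert_eq!(parse_retry_after_delta("120"), Some(Duration::from_secs(120)));
    assert_eq!(parse_retry_after_delta(" 30 "), Some(Duration::from_secs(30)));
    // An HTTP-date is not a bare integer, so the delta parser returns None
    // and the caller falls back to date parsing.
    assert_eq!(parse_retry_after_delta("Wed, 21 Oct 2015 07:28:00 GMT"), None);
    println!("retry-after delta parsing ok");
}
```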