Files
claude-statusline/src/trend.rs
Taylor Eernisse c03b0b1bd7 feat: add environment sections, visual enhancements, enhanced tools/beads, and /clear detection
The feature layer that builds on the new infrastructure modules. Adds
4 new environment-aware sections, rewrites the tools/beads/turns sections,
introduces gradient sparklines and block-style context bars, and wires
/clear detection into the main binary.

New sections (4):
  cloud_profile — Shows active cloud provider profile from env vars
    ($AWS_PROFILE, $CLOUDSDK_CORE_PROJECT, $AZURE_SUBSCRIPTION_ID).
    Provider-specific coloring (AWS orange, GCP blue, Azure blue).

  k8s_context — Parses kubeconfig for current-context and namespace.
    Minimal YAML scanning (no yaml dependency). 30s TTL cache.
    Shows "context/namespace" with split coloring.
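The "minimal YAML scanning" amounts to a line scan for the few keys the section needs, rather than pulling in a YAML parser. A rough sketch of the idea (hypothetical helper, not the shipped code):

```rust
/// Extract the `current-context:` value from kubeconfig text by scanning
/// lines. Illustrative sketch; the real section also resolves the
/// namespace from the matching context entry.
fn scan_current_context(kubeconfig: &str) -> Option<String> {
    for line in kubeconfig.lines() {
        let line = line.trim();
        if let Some(rest) = line.strip_prefix("current-context:") {
            let ctx = rest.trim().trim_matches('"');
            if !ctx.is_empty() {
                return Some(ctx.to_string());
            }
        }
    }
    None
}
```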

  python_env — Detects active virtualenv ($VIRTUAL_ENV) or conda
    ($CONDA_DEFAULT_ENV, excluding "base"). Shows just the env name.

  toolchain — Detects Rust (rust-toolchain.toml) and Node.js (.nvmrc,
    .node-version) versions. Compares expected vs actual ($RUSTUP_TOOLCHAIN,
    $NODE_VERSION) and highlights mismatches in yellow.
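The mismatch check reduces to a version-string comparison. A sketch under the assumption that a leading "v" is insignificant (helper name is illustrative):

```rust
/// True when an expected toolchain version disagrees with the active one
/// (the section would highlight this in yellow). Sketch only; the real
/// normalization rules may differ.
fn is_mismatch(expected: &str, actual: Option<&str>) -> bool {
    match actual {
        Some(a) => a.trim_start_matches('v') != expected.trim_start_matches('v'),
        None => false, // nothing active to compare against
    }
}
```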

Tools section rewrite:
  Progressive disclosure based on terminal width:
    - Narrow: just the count ("245")
    - Medium: count + last tool name ("245 tools (Bash)")
    - Wide: per-tool color-coded breakdown ("245 tools (Bash: 84/Read: 35/...)")
  Adaptive width budgeting: the breakdown drops tools until it fits
  within 1/3 of the terminal width. Color palette priority: config > terminal
  ANSI palette (via OSC 4) > built-in Dracula palette.
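The three tiers could be sketched like this (width thresholds here are made up for illustration; the shipped budgeting is adaptive rather than fixed):

```rust
/// Pick a tools rendering tier from terminal width.
/// Hypothetical thresholds, not the actual cutoffs.
fn tools_display(count: u32, last: &str, breakdown: &str, term_width: usize) -> String {
    if term_width < 60 {
        // Narrow: just the count
        format!("{count}")
    } else if term_width < 120 {
        // Medium: count + last tool name
        format!("{count} tools ({last})")
    } else {
        // Wide: full per-tool breakdown
        format!("{count} tools ({breakdown})")
    }
}
```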

Beads section rewrite:
  Switched from `br ready --json` to `br stats --json` to show all
  statuses. Now renders multi-status breakdown: "3 ready 1 wip 2 open"
  with per-status visibility toggles in config.

Turns section:
  Falls back to transcript-derived turn count when cost.total_turns is
  absent. Requires at least one data source to render (vanishes when
  no session data exists at all).

Visual enhancements:
  trend.rs:
    - append_delta(): tracks rate-of-change (delta between cumulative
      samples) so sparklines show burn intensity, not monotonic growth
    - sparkline(): now renders exactly `width` chars with left-padding
      for missing data. Baseline (space) vs flatline (lowest bar) chars.
    - sparkline_colored(): per-character gradient coloring via colorgrad,
      returns (raw, ansi) tuple for layout compatibility.

  context_bar.rs:
    - Block style: Unicode full-block fill + light-shade empty chars
    - Per-character green->yellow->red gradient for block style
    - Classic style preserved (= and - chars) with single threshold color
    - Configurable fill_char/empty_char overrides
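The block-style fill math, as a sketch (per-character gradient coloring omitted; the fill/empty chars shown are the Unicode defaults described above):

```rust
/// Render a block-style bar: full blocks for used capacity, light shade
/// for the remainder. Sketch only; the real bar also applies a
/// green->yellow->red gradient and honors fill_char/empty_char overrides.
fn block_bar(pct: f64, width: usize) -> String {
    let filled = ((pct / 100.0) * width as f64).round() as usize;
    let filled = filled.min(width);
    let mut bar = "\u{2588}".repeat(filled); // full block
    bar.push_str(&"\u{2591}".repeat(width - filled)); // light shade
    bar
}
```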

  context_trend + cost_trend:
    Switched to append_delta for rate-based sparklines. Gradient coloring
    with green->yellow->red via sparkline_colored().

  format.rs:
    Background color support via resolve_background(). Accepts named
    colors, hex, and palette refs. Applied as ANSI bg wrap around section
    output, preserving foreground colors.

  layout/mod.rs:
    - Separator styles: text (default), powerline (Nerd Font), arrow
      (Unicode), none (spaces). Powerline auto-falls-back to arrow when
      glyphs disabled.
    - Placeholder support: when an enabled section returns None (no data),
      substitutes a configurable placeholder character (default: box-draw)
      to maintain layout stability during justify/flex.
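The powerline-to-arrow fallback is a one-branch resolution step; a sketch with illustrative names (the enum mirrors the styles listed above):

```rust
#[derive(Debug, Clone, Copy, PartialEq)]
enum SeparatorStyle {
    Text,
    Powerline,
    Arrow,
    None,
}

/// Resolve the effective separator style: powerline needs Nerd Font
/// glyphs, so it degrades to the plain Unicode arrow when glyphs are
/// disabled. Function name and signature are a sketch.
fn effective_style(style: SeparatorStyle, glyphs_enabled: bool) -> SeparatorStyle {
    if style == SeparatorStyle::Powerline && !glyphs_enabled {
        SeparatorStyle::Arrow
    } else {
        style
    }
}
```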

Section refinements:
  cost, cost_velocity, token_velocity, duration, tokens_raw — now show
  zero/baseline values instead of hiding entirely. This prevents layout
  jumps when sessions start or after /clear.

  context_usage — uses current_usage fields (input_tokens +
  cache_creation + cache_read) for precise token counts instead of
  percentage-derived estimates. Shows one decimal place on percentage.

  metrics.rs — prefers total_api_duration_ms over total_duration_ms for
  velocity calculations (active processing vs wall clock with idle time).
  Cache efficiency now divides by total input (not just cache tokens).
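The corrected cache-efficiency ratio, sketched under the assumption that "total input" means the sum of fresh input, cache-creation, and cache-read tokens:

```rust
/// Cache efficiency as cache-read tokens over total input tokens.
/// The field grouping here is an assumption, not the exact shipped formula.
fn cache_efficiency(cache_read: u64, cache_creation: u64, input: u64) -> f64 {
    let total = cache_read + cache_creation + input;
    if total == 0 {
        return 0.0; // no input yet: report zero rather than dividing by zero
    }
    cache_read as f64 / total as f64
}
```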

Config additions (config.rs):
  SeparatorStyle enum (text/powerline/arrow/none), BarStyle enum
  (classic/block), gradient toggle on trends + context_bar, background
  and placeholder on SectionBase, tools breakdown config (show_breakdown,
  top_n, palette), 4 new section structs.

Main binary (/clear detection + wiring):
  detect_clear() — watches for significant context usage drops (>15%
  to <5%, >20pp drop) to identify /clear. On detection: saves transcript
  offset so derived stats only count post-clear entries, flushes trend
  caches for fresh sparklines.
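The drop heuristic is a three-condition predicate; a sketch using the thresholds from the description above (function name and signature are illustrative):

```rust
/// Heuristic /clear detection: context usage was substantial (>15%),
/// is now near-empty (<5%), and the drop exceeds 20 percentage points.
/// All three must hold to avoid flagging normal usage dips.
fn looks_like_clear(prev_pct: f64, cur_pct: f64) -> bool {
    prev_pct > 15.0 && cur_pct < 5.0 && (prev_pct - cur_pct) > 20.0
}
```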

  resolve_transcript_stats() — cached transcript parsing with 5s TTL,
  respects clear offset, skipped when cost already has tool counts.

  resolve_terminal_palette() — cached palette detection with 1h TTL.

  Debug: CLAUDE_STATUSLINE_DEBUG env var dumps raw input JSON to
  /tmp/claude-statusline-input.json. dump-state now includes input data.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
2026-02-09 23:42:34 -05:00

use crate::cache::Cache;
use crate::color;
use std::time::Duration;

const SPARKLINE_CHARS: &[char] = &[
    '\u{2581}', '\u{2582}', '\u{2583}', '\u{2584}', '\u{2585}', '\u{2586}', '\u{2587}', '\u{2588}',
];

/// Baseline sparkline placeholder (space — "no data yet").
const BASELINE_CHAR: &str = " ";
const BASELINE_CHAR_CH: char = ' ';

/// Flatline sparkline character (lowest bar — "data exists but is flat/zero").
const FLATLINE_CHAR: char = '\u{2581}';
const FLATLINE_STR: &str = "\u{2581}";
/// Append a value to a trend file. Throttled to at most once per `interval`.
/// Returns the full comma-separated series (for immediate sparkline rendering).
pub fn append(
    cache: &Cache,
    key: &str,
    value: i64,
    max_points: usize,
    interval: Duration,
) -> Option<String> {
    let trend_key = format!("trend_{key}");

    // Check throttle: skip if last write was within interval
    if let Some(existing) = cache.get(&trend_key, interval) {
        return Some(existing);
    }

    // Read current series (ignoring TTL)
    let mut series: Vec<i64> = cache
        .get_stale(&trend_key)
        .unwrap_or_default()
        .split(',')
        .filter_map(|s| s.trim().parse().ok())
        .collect();

    // Skip if value unchanged from last point
    if series.last() == Some(&value) {
        // Still update the file mtime so throttle window resets
        let csv = series
            .iter()
            .map(|v| v.to_string())
            .collect::<Vec<_>>()
            .join(",");
        cache.set(&trend_key, &csv);
        return Some(csv);
    }

    series.push(value);

    // Trim from left to max_points
    if series.len() > max_points {
        series = series[series.len() - max_points..].to_vec();
    }

    let csv = series
        .iter()
        .map(|v| v.to_string())
        .collect::<Vec<_>>()
        .join(",");
    cache.set(&trend_key, &csv);
    Some(csv)
}
/// Append a **delta** (difference from previous cumulative value) to a trend.
/// Useful for rate-of-change sparklines (e.g., cost burn rate).
/// Stores the previous cumulative value in a separate cache key for delta computation.
pub fn append_delta(
    cache: &Cache,
    key: &str,
    cumulative_value: i64,
    max_points: usize,
    interval: Duration,
) -> Option<String> {
    let trend_key = format!("trend_{key}");
    let prev_key = format!("trend_{key}_prev");

    // Check throttle: skip if last write was within interval
    if let Some(existing) = cache.get(&trend_key, interval) {
        return Some(existing);
    }

    // Get previous cumulative value for delta computation
    let prev = cache
        .get_stale(&prev_key)
        .and_then(|s| s.parse::<i64>().ok())
        .unwrap_or(cumulative_value);
    let delta = cumulative_value - prev;

    // Store current cumulative for next delta
    cache.set(&prev_key, &cumulative_value.to_string());

    // Read existing series (ignoring TTL)
    let mut series: Vec<i64> = cache
        .get_stale(&trend_key)
        .unwrap_or_default()
        .split(',')
        .filter_map(|s| s.trim().parse().ok())
        .collect();
    series.push(delta);

    // Trim from left to max_points
    if series.len() > max_points {
        series = series[series.len() - max_points..].to_vec();
    }

    let csv = series
        .iter()
        .map(|v| v.to_string())
        .collect::<Vec<_>>()
        .join(",");
    cache.set(&trend_key, &csv);
    Some(csv)
}
/// Render a sparkline from comma-separated values.
/// Always renders exactly `width` characters — pads with baseline chars
/// on the left when fewer data points exist.
pub fn sparkline(csv: &str, width: usize) -> String {
    let vals: Vec<i64> = csv
        .split(',')
        .filter_map(|s| s.trim().parse().ok())
        .collect();
    if vals.is_empty() {
        return BASELINE_CHAR.repeat(width);
    }
    let min = *vals.iter().min().unwrap();
    let max = *vals.iter().max().unwrap();
    let data_count = vals.len().min(width);
    let pad_count = width.saturating_sub(data_count);
    if max == min {
        // Data exists but is flat — show visible lowest bars (not invisible spaces)
        let mut result = String::with_capacity(width * 3);
        for _ in 0..pad_count {
            result.push(BASELINE_CHAR_CH);
        }
        for _ in 0..data_count {
            result.push(FLATLINE_CHAR);
        }
        return result;
    }
    let range = (max - min) as f64;
    let mut result = String::with_capacity(width * 3);
    // Left-pad with baseline chars
    for _ in 0..pad_count {
        result.push(BASELINE_CHAR_CH);
    }
    // Render data points (take the last `data_count` from available values,
    // capped to `width`)
    let skip = vals.len().saturating_sub(width);
    for &v in vals.iter().skip(skip) {
        let idx = (((v - min) as f64 / range) * 7.0) as usize;
        result.push(SPARKLINE_CHARS[idx.min(7)]);
    }
    result
}
/// Render a sparkline with per-character gradient coloring.
/// Each character is colored based on its normalized value (0.0=min, 1.0=max).
/// Always renders exactly `width` characters — pads with DIM baseline chars on the left.
/// Returns (raw, ansi) — raw is plain sparkline, ansi has gradient colors.
pub fn sparkline_colored(
    csv: &str,
    width: usize,
    grad: &colorgrad::LinearGradient,
) -> Option<(String, String)> {
    let vals: Vec<i64> = csv
        .split(',')
        .filter_map(|s| s.trim().parse().ok())
        .collect();
    if vals.is_empty() {
        let raw = BASELINE_CHAR.repeat(width);
        let ansi = format!("{}{raw}{}", color::DIM, color::RESET);
        return Some((raw, ansi));
    }
    let min = *vals.iter().min().unwrap();
    let max = *vals.iter().max().unwrap();
    let data_count = vals.len().min(width);
    let pad_count = width.saturating_sub(data_count);
    if max == min {
        // Data exists but is flat — show visible lowest bars with DIM styling
        let mut raw = String::with_capacity(width * 3);
        let mut ansi = String::with_capacity(width * 20);
        for _ in 0..pad_count {
            raw.push(BASELINE_CHAR_CH);
        }
        if pad_count > 0 {
            ansi.push_str(color::DIM);
            ansi.push_str(&BASELINE_CHAR.repeat(pad_count));
            ansi.push_str(color::RESET);
        }
        let flatline = FLATLINE_STR.repeat(data_count);
        raw.push_str(&flatline);
        ansi.push_str(color::DIM);
        ansi.push_str(&flatline);
        ansi.push_str(color::RESET);
        return Some((raw, ansi));
    }
    let range = (max - min) as f32;
    let mut raw = String::with_capacity(width * 3);
    let mut ansi = String::with_capacity(width * 20);
    // Left-pad with DIM baseline chars
    if pad_count > 0 {
        let pad_str = BASELINE_CHAR.repeat(pad_count);
        raw.push_str(&pad_str);
        ansi.push_str(color::DIM);
        ansi.push_str(&pad_str);
        ansi.push_str(color::RESET);
    }
    // Render data points (take the last `width` values)
    let skip = vals.len().saturating_sub(width);
    for &v in vals.iter().skip(skip) {
        let norm = (v - min) as f32 / range;
        let idx = (norm * 7.0) as usize;
        let ch = SPARKLINE_CHARS[idx.min(7)];
        raw.push(ch);
        ansi.push_str(&color::sample_fg(grad, norm));
        ansi.push(ch);
    }
    ansi.push_str(color::RESET);
    Some((raw, ansi))
}