Extract token-type and per-model cost calculations from cmd/costs.go
into a dedicated pipeline.AggregateCostBreakdown() function. This
eliminates duplicate cost calculation logic between CLI and TUI.
New types:
- TokenTypeCosts: aggregate costs by input/output/cache types
- ModelCostBreakdown: per-model cost components
Benefits:
- Single source of truth for cost calculations
- Uses LookupPricingAt() for historical accuracy
- Both CLI and TUI now share the same cost logic
- Easier to maintain and extend
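To make the shape of the change concrete, here is a minimal sketch of what the two new types might look like. Field and method names are illustrative assumptions, not taken from the actual commit:

```go
package main

import "fmt"

// TokenTypeCosts aggregates estimated cost by token type.
// Hypothetical field names; the real type may differ.
type TokenTypeCosts struct {
	Input, Output, CacheRead, CacheWrite float64
}

// Total sums the per-type components into one cost figure.
func (t TokenTypeCosts) Total() float64 {
	return t.Input + t.Output + t.CacheRead + t.CacheWrite
}

// ModelCostBreakdown pairs a model name with its cost components.
type ModelCostBreakdown struct {
	Model string
	Costs TokenTypeCosts
}

func main() {
	b := ModelCostBreakdown{
		Model: "claude-opus-4",
		Costs: TokenTypeCosts{Input: 1, Output: 2, CacheRead: 3, CacheWrite: 4},
	}
	fmt.Printf("%s: $%.2f\n", b.Model, b.Costs.Total())
}
```

With shared value types like these, both the CLI and TUI render from the same computed breakdown instead of re-deriving costs independently.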
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Add data layer support for real-time usage visualization:
- MinuteStats type: holds token counts for 5-minute buckets, enabling
granular recent-activity views (12 buckets covering the last hour).
- AggregateTodayHourly(): computes 24 hourly token buckets for the
current local day by filtering sessions to today's date boundary and
slotting each into the correct hour index. Tracks prompts, sessions,
and total tokens per hour.
- AggregateLastHour(): computes 12 five-minute token buckets for the
last 60 minutes using reverse-offset bucketing (bucket 11 = most
recent 5 minutes, bucket 0 = 55-60 minutes ago). Bounds-clamped to
prevent off-by-one at the edges.
Both functions bucket by the session's local StartTime and skip
zero-time sessions, consistent with existing aggregation patterns in
the pipeline package.
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Address golangci-lint findings and improve error handling throughout:
Package doc comments:
- Add canonical "// Package X ..." comments to source, model, config,
pipeline, cli, store, and main packages for godoc compliance.
Security & correctness:
- Fix directory permissions 0o755 -> 0o750 in store/cache.go Open()
(gosec G301: restrict group write on cache directory)
- Fix config.Save() to check encoder error before closing file, preventing
silent data loss on encode failure
- Add //nolint:gosec annotations with justifications on intentional
patterns (constructed file paths, manual bounds checking, config fields)
- Add //nolint:nilerr on intentional error-swallowing in scanner WalkDir
- Add //nolint:revive on stuttering type names (ModelStats, ModelUsage)
that would break too many call sites to rename
Performance (perfsprint):
- Replace fmt.Sprintf("%d", n) with strconv.FormatInt(n, 10) in format.go
FormatTokens() and FormatNumber() hot paths
- Clean up redundant fmt.Sprintf patterns in FormatCost and FormatDelta
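The perfsprint rewrite is mechanical: strconv.FormatInt produces the same decimal string as fmt.Sprintf("%d", n) without fmt's interface boxing and format parsing. Illustrative helper, not the actual FormatTokens/FormatNumber code:

```go
package main

import (
	"fmt"
	"strconv"
)

// formatCount renders an int64 in base 10.
// Equivalent to fmt.Sprintf("%d", n), but cheaper on hot paths.
func formatCount(n int64) string {
	return strconv.FormatInt(n, 10)
}

func main() {
	fmt.Println(formatCount(123456)) // "123456"
}
```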
Code cleanup:
- Convert if-else chain to switch in parser.go skipJSONString() for clarity
- Remove unused indexedResult struct from pipeline/loader.go
- Add deferred cache.Close() in pipeline/bench_test.go to prevent leaks
- Add deferred cache.Close() in cmd/root.go data loading path
- Fix doc comment alignment in scanner.go decodeProjectName
- Remove trailing blank line in cmd/costs.go
- Fix duplicate "/day" suffix in cmd/summary.go cost-per-day formatting
- Rename variable 'max' -> 'maxVal' in cli/render.go Sparkline to
stop shadowing the predeclared 'max' builtin
The daily aggregation now iterates from the since date through the
until date and inserts a zero-valued DailyStats entry for any day
not already present in the day map. This ensures sparklines and bar
charts render a continuous time axis with explicit zeros for idle
days, rather than connecting adjacent data points across gaps.
Also switch config file creation from os.Create to os.OpenFile with
explicit O_WRONLY|O_CREATE|O_TRUNC flags, preserving the original
create-and-truncate behavior while tightening the file mode from the
0666 default to an explicit 0600 for security.
Implement the pipeline layer that orchestrates discovery, parsing,
caching, and aggregation:
- pipeline/loader.go: Load() discovers session files via ScanDir,
optionally filters out subagent files, then parses all files in
parallel using a bounded worker pool sized to GOMAXPROCS. Workers
read from a pre-filled channel (no contention on dispatch) and
report progress via an atomic counter and callback. LoadResult
tracks total files, parsed files, parse errors, and file errors.
- pipeline/aggregator.go: Five aggregation functions, all operating
on time-filtered session slices:
* Aggregate: computes SummaryStats across all sessions — total
tokens (5 types), estimated cost, cache savings (summed per-model
via config.CalculateCacheSavings), cache hit rate, and per-active-day
rates (cost, tokens, sessions, prompts, minutes).
* AggregateDays: groups sessions by local calendar date, sorted
most-recent-first.
* AggregateModels: groups by normalized model name with share
percentages, sorted by cost descending.
* AggregateProjects: groups by project name, sorted by cost.
* AggregateHourly: distributes prompt/session/token counts across
24 hour buckets (attributed to session start hour).
Also provides FilterByTime, FilterByProject, FilterByModel with
case-insensitive substring matching.
- pipeline/incremental.go: LoadWithCache() implements the incremental
loading strategy — compares discovered files against the cache's
file_tracker (mtime_ns + size), loads unchanged sessions from
SQLite, and only reparses files that changed. Reparsed results
are immediately saved back to cache. CacheDir/CachePath follow
XDG_CACHE_HOME convention (~/.cache/cburn/metrics.db).
- pipeline/bench_test.go: Benchmarks for ScanDir, ParseFile on the
worst-case (largest) file, full Load, and LoadWithCache to measure
the incremental cache speedup.
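The loader's pre-filled-channel worker pool can be sketched as below: all jobs are queued and the channel closed before any worker starts, so dispatch needs no coordination, and an atomic counter drives the progress callback. Names are illustrative, not the actual loader API:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"sync/atomic"
)

// parseAll runs parse over paths with GOMAXPROCS workers. Workers
// drain a pre-filled, closed channel of indices (no send-side
// contention) and report completions through an atomic counter.
func parseAll(paths []string, parse func(string) int, progress func(done int64)) []int {
	jobs := make(chan int, len(paths))
	for i := range paths {
		jobs <- i
	}
	close(jobs)

	results := make([]int, len(paths)) // each worker writes distinct indices
	var done atomic.Int64
	var wg sync.WaitGroup
	for w := 0; w < runtime.GOMAXPROCS(0); w++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for i := range jobs {
				results[i] = parse(paths[i])
				progress(done.Add(1))
			}
		}()
	}
	wg.Wait()
	return results
}

func main() {
	paths := []string{"a.jsonl", "bb.jsonl", "ccc.jsonl"}
	out := parseAll(paths, func(p string) int { return len(p) }, func(int64) {})
	fmt.Println(out) // [7 8 9]
}
```

Sizing the pool to GOMAXPROCS is a reasonable default for parse-heavy work, since adding goroutines beyond the available CPUs mostly adds scheduling overhead.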
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>