Compare commits

34 Commits: 2926645b10 ... fb9d4e5b9f

| SHA1 |
|---|
| fb9d4e5b9f |
| 781e74cda2 |
| ac629bd149 |
| fa1fe8613a |
| d3c6af9b00 |
| 0acc56417e |
| 19a31a4620 |
| 1fb4a82b39 |
| 69175f08f9 |
| f26e9acb34 |
| baa712ba15 |
| 7a9d290cb9 |
| 1e21dd08b6 |
| e3e42e53f2 |
| 99a55472a5 |
| 5494d76a98 |
| c0ee053d50 |
| 8070c4132a |
| 2d65d8f95b |
| 37748cb99c |
| 9695e9b08a |
| 58f0befe72 |
| 62d23793c4 |
| 7b1e47adc0 |
| a7a9ebbf2b |
| 48c3ddce90 |
| 7059dea3f8 |
| e4a0631fd7 |
| 1bece476a4 |
| e99ae2ed89 |
| 49a57b5364 |
| db3d2a2e31 |
| ba16daac2a |
| c7db46191c |
File diff suppressed because one or more lines are too long

1
.playwright-mcp/console-2026-02-26T22-17-15-253Z.log
Normal file

@@ -0,0 +1 @@
[ 94144ms] [ERROR] Failed to load resource: the server responded with a status of 500 (Internal Server Error) @ http://127.0.0.1:7400/api/spawn:0

1
.playwright-mcp/console-2026-02-26T22-19-49-481Z.log
Normal file

@@ -0,0 +1 @@
[  398ms] [WARNING] cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation @ https://cdn.tailwindcss.com/:63
68
CLAUDE.md
Normal file

@@ -0,0 +1,68 @@
# AMC (Agent Management Console)

A dashboard and management system for monitoring and controlling Claude Code and Codex agent sessions.

## Key Documentation

### Claude JSONL Session Log Reference

**Location:** `docs/claude-jsonl-reference/`

Comprehensive documentation for parsing and processing Claude Code JSONL session logs. **Always consult this before implementing JSONL parsing logic.**

| Document | Purpose |
|----------|---------|
| [01-format-specification](docs/claude-jsonl-reference/01-format-specification.md) | Complete JSONL format spec with all fields |
| [02-message-types](docs/claude-jsonl-reference/02-message-types.md) | Every message type with concrete examples |
| [03-tool-lifecycle](docs/claude-jsonl-reference/03-tool-lifecycle.md) | Tool call flow from invocation to result |
| [04-subagent-teams](docs/claude-jsonl-reference/04-subagent-teams.md) | Subagent and team message formats |
| [05-edge-cases](docs/claude-jsonl-reference/05-edge-cases.md) | Error handling, malformed input, recovery |
| [06-quick-reference](docs/claude-jsonl-reference/06-quick-reference.md) | Cheat sheet for common operations |

## Architecture

### Server (Python)

The server uses a mixin-based architecture in `amc_server/`:

| Module | Purpose |
|--------|---------|
| `server.py` | Main AMC server class combining all mixins |
| `mixins/parsing.py` | JSONL reading and token extraction |
| `mixins/conversation.py` | Claude/Codex conversation parsing |
| `mixins/state.py` | Session state management |
| `mixins/discovery.py` | Codex session auto-discovery |
| `mixins/spawn.py` | Agent spawning via Zellij |
| `mixins/control.py` | Session control (focus, dismiss) |
| `mixins/skills.py` | Skill enumeration |
| `mixins/http.py` | HTTP routing |

### Dashboard (React)

Single-page app in `dashboard/` served via HTTP.

## File Locations

| Content | Location |
|---------|----------|
| Claude sessions | `~/.claude/projects/<encoded-path>/<session-id>.jsonl` |
| Codex sessions | `~/.codex/sessions/**/<session-id>.jsonl` |
| AMC session state | `~/.local/share/amc/sessions/<session-id>.json` |
| AMC event logs | `~/.local/share/amc/events/<session-id>.jsonl` |
| Pending spawns | `~/.local/share/amc/pending_spawns/<spawn-id>.json` |

## Critical Parsing Notes

1. **Content type ambiguity** — User message `content` can be string (user input) OR array (tool results)
2. **Missing fields** — Always use `.get()` with defaults for optional fields
3. **Boolean vs int** — Python's `isinstance(True, int)` is True; check bool first
4. **Partial reads** — When seeking to file end, first line may be truncated
5. **Codex differences** — Uses `response_item` type, `function_call` for tools
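A minimal sketch of how notes 1–3 might translate into parsing code (the function and field names here are illustrative, not the project's actual API):

```python
import json

def summarize_line(raw: str) -> dict:
    """Parse one JSONL line, applying the critical parsing notes."""
    entry = json.loads(raw)
    # Note 2: use .get() with defaults -- optional fields may be absent.
    msg = entry.get("message", {})
    content = msg.get("content", "")
    # Note 1: content may be a string (user input) or a list (tool results).
    if isinstance(content, str):
        kind = "text"
    elif isinstance(content, list):
        kind = "blocks"
    else:
        kind = "unknown"
    # Note 3: check bool before int, since isinstance(True, int) is True.
    raw_tokens = entry.get("tokens")
    if isinstance(raw_tokens, bool) or not isinstance(raw_tokens, int):
        tokens = 0
    else:
        tokens = raw_tokens
    return {"kind": kind, "tokens": tokens}
```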
## Testing

```bash
pytest tests/
```

All parsing edge cases are covered in `tests/test_parsing.py` and `tests/test_conversation.py`.
316
PROPOSED_CODE_FILE_REORGANIZATION_PLAN.md
Normal file

@@ -0,0 +1,316 @@
# Proposed Code File Reorganization Plan

## Executive Summary

After reading every source file in the project, analyzing all import graphs, and understanding how each module fits into the architecture, my assessment is: **the project is already reasonably well-organized**. The mixin-based decomposition of the handler, the dashboard's `components/utils/lib` split, and the test structure that mirrors source all reflect sound engineering.

That said, there is one clear structural problem and a few smaller wins. This plan proposes **surgical, high-value changes** rather than a gratuitous restructure. The guiding principle: every change must make it easier for a developer (or agent) to find things and understand the architecture.

---

## Current Structure (Annotated)

```
amc/
  amc_server/                   # Python backend (2,571 LOC)
    __init__.py                 # Package init, exports main
    server.py                   # Server startup/shutdown (38 LOC)
    handler.py                  # Handler class composed from mixins (31 LOC)
    context.py                  # ** PROBLEM ** All config, constants, caches, locks, auth (121 LOC)
    logging_utils.py            # Logging + signal handlers (31 LOC)
    mixins/                     # Handler mixins (one per concern)
      __init__.py               # Package comment (1 LOC)
      http.py                   # HTTP routing, static file serving (173 LOC)
      state.py                  # State aggregation, SSE, session collection, cleanup (440 LOC)
      conversation.py           # Conversation parsing for Claude/Codex (278 LOC)
      control.py                # Session dismiss/respond, Zellij pane injection (295 LOC)
      discovery.py              # Codex session discovery, pane matching (347 LOC)
      parsing.py                # JSONL parsing, context usage extraction (274 LOC)
      skills.py                 # Skill enumeration for autocomplete (184 LOC)
      spawn.py                  # Agent spawning in Zellij tabs (358 LOC)

  dashboard/                    # Preact frontend (2,564 LOC)
    index.html                  # Entry HTML with Tailwind config
    main.js                     # App mount point (7 LOC)
    styles.css                  # Custom styles
    lib/                        # Third-party/shared
      preact.js                 # Preact re-exports
      markdown.js               # Markdown rendering + syntax highlighting (159 LOC)
    utils/                      # Pure utility functions
      api.js                    # API constants + fetch helpers (39 LOC)
      formatting.js             # Time/token formatting (66 LOC)
      status.js                 # Status metadata + session grouping (79 LOC)
      autocomplete.js           # Autocomplete trigger detection (48 LOC)
    components/                 # UI components
      App.js                    # Root component (616 LOC)
      Sidebar.js                # Project nav sidebar (102 LOC)
      SessionCard.js            # Session card (176 LOC)
      Modal.js                  # Full-screen modal wrapper (79 LOC)
      ChatMessages.js           # Message list (39 LOC)
      MessageBubble.js          # Individual message (54 LOC)
      QuestionBlock.js          # AskUserQuestion UI (228 LOC)
      SimpleInput.js            # Freeform text input (228 LOC)
      OptionButton.js           # Option button (24 LOC)
      AgentActivityIndicator.js # Turn timer (115 LOC)
      SpawnModal.js             # Spawn dropdown (241 LOC)
      Toast.js                  # Toast notifications (125 LOC)
      EmptyState.js             # Empty state (18 LOC)
      Header.js                 # ** DEAD CODE ** (58 LOC, zero imports)
      SessionGroup.js           # ** DEAD CODE ** (56 LOC, zero imports)

  bin/                          # Shell/Python scripts
    amc                         # Launcher (start/stop/status)
    amc-hook                    # Hook script (standalone, writes session state)
    amc-server                  # Server launch script
    amc-server-restart          # Server restart helper

  tests/                        # Test suite (mirrors mixin structure)
    test_context.py             # Context tests
    test_control.py             # Control mixin tests
    test_conversation.py        # Conversation parsing tests
    test_conversation_mtime.py  # Conversation mtime tests
    test_discovery.py           # Discovery mixin tests
    test_hook.py                # Hook script tests
    test_http.py                # HTTP mixin tests
    test_parsing.py             # Parsing mixin tests
    test_skills.py              # Skills mixin tests
    test_spawn.py               # Spawn mixin tests
    test_state.py               # State mixin tests
    test_zellij_metadata.py     # Zellij metadata tests
    e2e/                        # End-to-end tests
      __init__.py
      test_skills_endpoint.py
      test_autocomplete_workflow.js
      e2e_spawn.sh              # Spawn E2E script
```

---

## Proposed Changes

### Change 1: Split `context.py` into Focused Modules (HIGH VALUE)

**Problem:** `context.py` is the classic "junk drawer" module. It contains:

- Path constants for the server, Zellij, Claude, and Codex
- Server configuration (port, timeouts)
- 5 independent caches with their own size limits
- 2 threading locks for unrelated concerns
- Auth token generation/validation
- Zellij binary resolution
- Spawn-related config
- Background thread management for projects cache

Every mixin imports from it, but each only needs a subset. When a developer asks "where is the spawn rate limit configured?", they have to scan through an unrelated grab-bag of constants. When they ask "where's the Codex transcript cache?", same problem.

**Proposed split:**

```
amc_server/
  config.py        # Server-level constants: PORT, DATA_DIR, SESSIONS_DIR, EVENTS_DIR,
                   # DASHBOARD_DIR, PROJECT_DIR, STALE_EVENT_AGE, STALE_STARTING_AGE
                   # These are the "universal" constants every module might need.

  zellij.py        # Zellij integration: ZELLIJ_BIN resolution, ZELLIJ_PLUGIN path,
                   # ZELLIJ_SESSION, _zellij_cache (sessions cache + expiry)
                   # Rationale: All Zellij-specific constants and helpers in one place.
                   # Any developer working on Zellij integration knows exactly where to look.

  agents.py        # Agent-specific paths and caches:
                   # CLAUDE_PROJECTS_DIR, CODEX_SESSIONS_DIR, CODEX_ACTIVE_WINDOW,
                   # _codex_pane_cache, _codex_transcript_cache, _CODEX_CACHE_MAX,
                   # _context_usage_cache, _CONTEXT_CACHE_MAX,
                   # _dismissed_codex_ids, _DISMISSED_MAX
                   # Rationale: Agent data source configuration and caches that are only
                   # relevant to discovery/parsing mixins, not the whole server.

  auth.py          # Auth token: generate_auth_token(), validate_auth_token(), _auth_token
                   # Rationale: Security-sensitive code in its own module. Small, but
                   # architecturally clean separation from general config.

  spawn_config.py  # Spawn feature config:
                   # PROJECTS_DIR, PENDING_SPAWNS_DIR, PENDING_SPAWN_TTL,
                   # _spawn_lock, _spawn_timestamps, SPAWN_COOLDOWN_SEC
                   # + start_projects_watcher() (background refresh thread)
                   # Rationale: Spawn feature has its own set of constants, lock, and
                   # background thread. Currently scattered between context.py and spawn.py.
                   # Consolidating makes the spawn feature self-contained.
```

One remaining item from the current structure:

- `_state_lock` moves to `config.py` (it's a server-level concern)

**Import changes required:**

| File | Current import from `context` | New import from |
|------|------|------|
| `server.py` | `DATA_DIR, PORT, generate_auth_token, start_projects_watcher` | `config.DATA_DIR, config.PORT`, `auth.generate_auth_token`, `spawn_config.start_projects_watcher` |
| `handler.py` | (none, uses mixins) | (unchanged) |
| `mixins/http.py` | `DASHBOARD_DIR`, `ctx._auth_token` | `config.DASHBOARD_DIR`, `auth._auth_token` |
| `mixins/state.py` | `EVENTS_DIR, SESSIONS_DIR, STALE_*, ZELLIJ_BIN, _state_lock, _zellij_cache` | `config.*`, `zellij.ZELLIJ_BIN, zellij._zellij_cache` |
| `mixins/conversation.py` | `EVENTS_DIR` | `config.EVENTS_DIR` |
| `mixins/control.py` | `SESSIONS_DIR, ZELLIJ_BIN, ZELLIJ_PLUGIN, _DISMISSED_MAX, _dismissed_codex_ids` | `config.SESSIONS_DIR`, `zellij.*`, `agents._DISMISSED_MAX, agents._dismissed_codex_ids` |
| `mixins/discovery.py` | `CODEX_*, PENDING_SPAWNS_DIR, SESSIONS_DIR, _codex_*` | `agents.*`, `spawn_config.PENDING_SPAWNS_DIR`, `config.SESSIONS_DIR` |
| `mixins/parsing.py` | `CLAUDE_PROJECTS_DIR, CODEX_SESSIONS_DIR, _*_cache, _*_MAX` | `agents.*` |
| `mixins/spawn.py` | `PENDING_SPAWNS_DIR, PROJECTS_DIR, SESSIONS_DIR, ZELLIJ_*, _spawn_*, validate_auth_token` | `spawn_config.*`, `config.SESSIONS_DIR`, `zellij.*`, `auth.validate_auth_token` |

**Why this is the right split:**

1. **By domain, not by size.** Each new module groups constants + caches + helpers that serve one architectural concern. A developer working on Zellij integration opens `zellij.py`. Working on Codex discovery? `agents.py`. Spawn feature? `spawn_config.py`.

2. **No circular imports.** The dependency graph is a DAG: `config.py` is a leaf (imported by everything, imports nothing from `amc_server`). `zellij.py`, `agents.py`, `auth.py`, `spawn_config.py` import only from `config.py` (if at all). Mixins import from these.

3. **No behavioral change.** Module-level caches and singletons work the same way whether they're in one file or five.
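The acyclicity claim above can be sanity-checked mechanically. A toy sketch (module names from the proposal; the dependency edges are this plan's assumptions, not measured imports) that topologically sorts the proposed layout and fails loudly on any cycle:

```python
# Edges map each proposed module to the modules it imports.
DEPS = {
    "config": [],
    "zellij": ["config"],
    "agents": ["config"],
    "auth": [],
    "spawn_config": ["config"],
    "mixins": ["config", "zellij", "agents", "auth", "spawn_config"],
}

def topo_order(deps):
    """Kahn's algorithm; raises ValueError if the graph has a cycle."""
    remaining = {m: set(ds) for m, ds in deps.items()}
    order = []
    while remaining:
        # Modules whose dependencies are all already ordered.
        ready = [m for m, ds in remaining.items() if not ds]
        if not ready:
            raise ValueError("cycle detected")
        for m in sorted(ready):
            order.append(m)
            del remaining[m]
        for ds in remaining.values():
            ds.difference_update(ready)
    return order

order = topo_order(DEPS)
```

Any valid ordering places `config` before the mid-level modules and `mixins` last, which is exactly the import direction the split relies on.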
---

### Change 2: Delete Dead Dashboard Components (LOW EFFORT, HIGH CLARITY)

**Problem:** `Header.js` (58 LOC) and `SessionGroup.js` (56 LOC) are completely unused — zero imports anywhere in the codebase. They were replaced by the current Sidebar + grid layout but never cleaned up.

**Action:** Delete both files.

**Import changes required:** None (nothing imports them).

**Rationale:** Dead code is noise. Anyone exploring the `components/` directory would reasonably assume these are active and try to understand how they fit. Removing them prevents that confusion.

---

### Change 3: No Changes to Dashboard Structure

The dashboard is already well-organized:

- `components/` - All React-like components
- `utils/` - Pure utility functions (formatting, API, status, autocomplete)
- `lib/` - Third-party wrappers (Preact, markdown rendering)

This is a standard and intuitive layout. The `components/` directory has 13 files (15 before dead code removal), which is manageable. Creating sub-directories (e.g., `components/session/`, `components/layout/`) would add nesting without meaningful benefit at this scale.

---

### Change 4: No Changes to `mixins/` Structure

The mixin decomposition is the project's architectural backbone. Each mixin handles one concern:

| Mixin | Responsibility |
|-------|---------------|
| `http.py` | HTTP routing, static file serving, CORS |
| `state.py` | State aggregation, SSE streaming, session collection |
| `conversation.py` | Conversation history parsing (Claude + Codex JSONL) |
| `control.py` | Session dismiss/respond, Zellij pane injection |
| `discovery.py` | Codex session auto-discovery, pane matching |
| `parsing.py` | JSONL tail reading, context usage extraction, caching |
| `skills.py` | Skill enumeration for Claude/Codex autocomplete |
| `spawn.py` | Agent spawning in Zellij tabs |

All are 170-440 lines, which is reasonable. The largest (`state.py` at 440 lines) could theoretically be split, but its methods are tightly coupled around session collection. Splitting would create artificial seams.

---

### Change 5: No Changes to `tests/` Structure

Tests already mirror the source structure (`test_state.py` tests `mixins/state.py`, etc.). This is the correct pattern.

**One consideration:** After splitting `context.py`, `test_context.py` may need updates to import from the new module locations. The test file is small (755 bytes) and covers basic context constants, so the update would be trivial.

---

### Change 6: No Changes to `bin/` Scripts

The `amc-hook` script intentionally duplicates `DATA_DIR`, `SESSIONS_DIR`, `EVENTS_DIR` from `context.py`. This is correct: the hook runs as a standalone process launched by Claude Code, not as part of the server. It must be self-contained with zero dependencies on the server package. Sharing code would create a fragile coupling.

---

## What I Explicitly Decided NOT to Do

1. **Not creating a `src/` directory.** The project root is clean. Adding `src/` would be an extra nesting level with no benefit.

2. **Not splitting any mixins.** `state.py` (440 LOC) and `spawn.py` (358 LOC) are the largest, but their methods are cohesive. Splitting would scatter related logic across files.

3. **Not merging small files.** `EmptyState.js` (18 LOC), `OptionButton.js` (24 LOC), and `ChatMessages.js` (39 LOC) are tiny but each has a clear purpose and is imported independently. Merging them would violate the component-per-file convention.

4. **Not reorganizing dashboard components into sub-folders.** With 13 components, flat is fine. Sub-folders like `components/session/` and `components/layout/` become necessary at ~25+ components.

5. **Not consolidating `api.js` + `formatting.js` + `status.js` + `autocomplete.js`.** Each is focused and independently imported. A combined `utils.js` would be a grab-bag (the exact problem we're fixing in `context.py`).

6. **Not moving `markdown.js` out of `lib/`.** It uses third-party dependencies and provides rendering utilities. `lib/` is the correct location.

---

## Proposed Final Structure

```
amc/
  amc_server/
    __init__.py        # (unchanged)
    server.py          # (updated imports)
    handler.py         # (unchanged)
    config.py          # NEW: Server constants, DATA_DIR, SESSIONS_DIR, EVENTS_DIR, PORT, etc.
    zellij.py          # NEW: Zellij binary resolution, ZELLIJ_PLUGIN, ZELLIJ_SESSION, cache
    agents.py          # NEW: Agent paths (Claude/Codex), transcript caches, dismissed cache
    auth.py            # NEW: Auth token generation/validation
    spawn_config.py    # NEW: Spawn constants, locks, rate limiting, projects watcher
    logging_utils.py   # (unchanged)
    mixins/            # (unchanged structure, updated imports)
      __init__.py
      http.py
      state.py
      conversation.py
      control.py
      discovery.py
      parsing.py
      skills.py
      spawn.py

  dashboard/
    index.html         # (unchanged)
    main.js            # (unchanged)
    styles.css         # (unchanged)
    lib/
      preact.js        # (unchanged)
      markdown.js      # (unchanged)
    utils/
      api.js           # (unchanged)
      formatting.js    # (unchanged)
      status.js        # (unchanged)
      autocomplete.js  # (unchanged)
    components/
      App.js           # (unchanged)
      Sidebar.js       # (unchanged)
      SessionCard.js   # (unchanged)
      Modal.js         # (unchanged)
      ChatMessages.js  # (unchanged)
      MessageBubble.js # (unchanged)
      QuestionBlock.js # (unchanged)
      SimpleInput.js   # (unchanged)
      OptionButton.js  # (unchanged)
      AgentActivityIndicator.js # (unchanged)
      SpawnModal.js    # (unchanged)
      Toast.js         # (unchanged)
      EmptyState.js    # (unchanged)
      [DELETED] Header.js
      [DELETED] SessionGroup.js

  bin/                 # (unchanged)
  tests/               # (minor import updates in test_context.py)
```

---

## Implementation Order

1. **Delete dead dashboard components** (`Header.js`, `SessionGroup.js`) - zero risk, instant clarity
2. **Create new Python modules** (`config.py`, `zellij.py`, `agents.py`, `auth.py`, `spawn_config.py`) with the correct constants/functions
3. **Update all mixin imports** to use new module locations
4. **Update `server.py`** imports
5. **Delete `context.py`**
6. **Run full test suite** to verify nothing broke
7. **Update `test_context.py`** if needed

---

## Risk Assessment

- **Risk of breaking imports:** MEDIUM. There are many import statements to update across 8 mixin files + `server.py`. Mitigated by running the full test suite after changes.
- **Risk of circular imports:** LOW. The new modules form a clean DAG (config <- zellij/agents/auth/spawn_config <- mixins).
- **Risk to `bin/amc-hook`:** NONE. The hook is standalone and doesn't import from `amc_server`.
- **Risk to dashboard:** NONE for dead code deletion. Zero imports to either file.
BIN
amc_server/__pycache__/__init__.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/__pycache__/context.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/__pycache__/handler.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/__pycache__/logging_utils.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/__pycache__/server.cpython-313.pyc
Normal file
Binary file not shown.
28
amc_server/agents.py
Normal file

@@ -0,0 +1,28 @@
"""Agent-specific paths, caches, and constants for Claude/Codex discovery."""

from pathlib import Path

# Claude Code conversation directory
CLAUDE_PROJECTS_DIR = Path.home() / ".claude" / "projects"

# Codex conversation directory
CODEX_SESSIONS_DIR = Path.home() / ".codex" / "sessions"

# Only discover recently-active Codex sessions (10 minutes)
CODEX_ACTIVE_WINDOW = 600

# Cache for Codex pane info (avoid running pgrep/ps/lsof on every request)
_codex_pane_cache = {"pid_info": {}, "cwd_map": {}, "expires": 0}

# Cache for parsed context usage by transcript file path + mtime/size
_context_usage_cache = {}
_CONTEXT_CACHE_MAX = 100

# Cache mapping Codex session IDs to transcript paths (or None when missing)
_codex_transcript_cache = {}
_CODEX_CACHE_MAX = 200

# Codex sessions dismissed during this server lifetime (prevents re-discovery)
# Uses dict (not set) for O(1) lookup + FIFO eviction via insertion order (Python 3.7+)
_dismissed_codex_ids = {}
_DISMISSED_MAX = 500
18
amc_server/auth.py
Normal file

@@ -0,0 +1,18 @@
"""Auth token generation and validation for spawn endpoint security."""

import secrets

# Auth token for spawn endpoint
_auth_token: str = ''


def generate_auth_token():
    """Generate a one-time auth token for this server instance."""
    global _auth_token
    _auth_token = secrets.token_urlsafe(32)
    return _auth_token


def validate_auth_token(request_token: str) -> bool:
    """Validate the Authorization header token."""
    return request_token == f'Bearer {_auth_token}'
20
amc_server/config.py
Normal file

@@ -0,0 +1,20 @@
"""Server-level constants: paths, port, timeouts, state lock."""

import threading
from pathlib import Path

# Runtime data lives in XDG data dir
DATA_DIR = Path.home() / ".local" / "share" / "amc"
SESSIONS_DIR = DATA_DIR / "sessions"
EVENTS_DIR = DATA_DIR / "events"

# Source files live in project directory (relative to this module)
PROJECT_DIR = Path(__file__).resolve().parent.parent
DASHBOARD_DIR = PROJECT_DIR / "dashboard"

PORT = 7400
STALE_EVENT_AGE = 86400  # 24 hours in seconds
STALE_STARTING_AGE = 3600  # 1 hour - sessions stuck in "starting" are orphans

# Serialize state collection because it mutates session files/caches.
_state_lock = threading.Lock()
@@ -1,70 +0,0 @@
import shutil
from pathlib import Path
import threading

# Claude Code conversation directory
CLAUDE_PROJECTS_DIR = Path.home() / ".claude" / "projects"

# Codex conversation directory
CODEX_SESSIONS_DIR = Path.home() / ".codex" / "sessions"

# Plugin path for zellij-send-keys
ZELLIJ_PLUGIN = Path.home() / ".config" / "zellij" / "plugins" / "zellij-send-keys.wasm"


def _resolve_zellij_bin():
    """Resolve zellij binary even when PATH is minimal (eg launchctl)."""
    from_path = shutil.which("zellij")
    if from_path:
        return from_path

    common_paths = (
        "/opt/homebrew/bin/zellij",  # Apple Silicon Homebrew
        "/usr/local/bin/zellij",  # Intel Homebrew
        "/usr/bin/zellij",
    )
    for candidate in common_paths:
        p = Path(candidate)
        if p.exists() and p.is_file():
            return str(p)
    return "zellij"  # Fallback for explicit error reporting by subprocess


ZELLIJ_BIN = _resolve_zellij_bin()

# Runtime data lives in XDG data dir
DATA_DIR = Path.home() / ".local" / "share" / "amc"
SESSIONS_DIR = DATA_DIR / "sessions"
EVENTS_DIR = DATA_DIR / "events"

# Source files live in project directory (relative to this module)
PROJECT_DIR = Path(__file__).resolve().parent.parent
DASHBOARD_DIR = PROJECT_DIR / "dashboard"

PORT = 7400
STALE_EVENT_AGE = 86400  # 24 hours in seconds
STALE_STARTING_AGE = 3600  # 1 hour - sessions stuck in "starting" are orphans
CODEX_ACTIVE_WINDOW = 600  # 10 minutes - only discover recently-active Codex sessions

# Cache for Zellij session list (avoid calling zellij on every request)
_zellij_cache = {"sessions": None, "expires": 0}

# Cache for Codex pane info (avoid running pgrep/ps/lsof on every request)
_codex_pane_cache = {"pid_info": {}, "cwd_map": {}, "expires": 0}

# Cache for parsed context usage by transcript file path + mtime/size
# Limited to prevent unbounded memory growth
_context_usage_cache = {}
_CONTEXT_CACHE_MAX = 100

# Cache mapping Codex session IDs to transcript paths (or None when missing)
_codex_transcript_cache = {}
_CODEX_CACHE_MAX = 200

# Codex sessions dismissed during this server lifetime (prevents re-discovery)
# Uses dict (not set) for O(1) lookup + FIFO eviction via insertion order (Python 3.7+)
_dismissed_codex_ids = {}
_DISMISSED_MAX = 500

# Serialize state collection because it mutates session files/caches.
_state_lock = threading.Lock()
@@ -5,6 +5,8 @@ from amc_server.mixins.control import SessionControlMixin
 from amc_server.mixins.discovery import SessionDiscoveryMixin
 from amc_server.mixins.http import HttpMixin
 from amc_server.mixins.parsing import SessionParsingMixin
+from amc_server.mixins.skills import SkillsMixin
+from amc_server.mixins.spawn import SpawnMixin
 from amc_server.mixins.state import StateMixin


@@ -15,6 +17,8 @@ class AMCHandler(
     SessionControlMixin,
     SessionDiscoveryMixin,
     SessionParsingMixin,
+    SkillsMixin,
+    SpawnMixin,
     BaseHTTPRequestHandler,
 ):
     """HTTP handler composed from focused mixins."""
BIN
amc_server/mixins/__pycache__/__init__.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/mixins/__pycache__/control.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/mixins/__pycache__/conversation.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/mixins/__pycache__/discovery.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/mixins/__pycache__/http.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/mixins/__pycache__/parsing.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/mixins/__pycache__/skills.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/mixins/__pycache__/spawn.cpython-313.pyc
Normal file
Binary file not shown.
BIN
amc_server/mixins/__pycache__/state.cpython-313.pyc
Normal file
Binary file not shown.
@@ -3,7 +3,9 @@ import os
 import subprocess
 import time
 
-from amc_server.context import SESSIONS_DIR, ZELLIJ_BIN, ZELLIJ_PLUGIN, _DISMISSED_MAX, _dismissed_codex_ids
+from amc_server.agents import _DISMISSED_MAX, _dismissed_codex_ids
+from amc_server.config import SESSIONS_DIR
+from amc_server.zellij import ZELLIJ_BIN, ZELLIJ_PLUGIN
 from amc_server.logging_utils import LOGGER
 
 
@@ -24,6 +26,39 @@ class SessionControlMixin:
         session_file.unlink(missing_ok=True)
         self._send_json(200, {"ok": True})
 
+    def _dismiss_dead_sessions(self):
+        """Delete all dead session files (clear all from dashboard).
+
+        Note: is_dead is computed dynamically, not stored on disk, so we must
+        recompute it here using the same logic as _collect_sessions.
+        """
+        # Get liveness data (same as _collect_sessions)
+        active_zellij_sessions = self._get_active_zellij_sessions()
+        active_transcript_files = self._get_active_transcript_files()
+
+        dismissed_count = 0
+        for f in SESSIONS_DIR.glob("*.json"):
+            try:
+                data = json.loads(f.read_text())
+                if not isinstance(data, dict):
+                    continue
+                # Recompute is_dead (it's not persisted to disk)
+                is_dead = self._is_session_dead(
+                    data, active_zellij_sessions, active_transcript_files
+                )
+                if is_dead:
+                    safe_id = f.stem
+                    # Track dismissed Codex sessions
+                    while len(_dismissed_codex_ids) >= _DISMISSED_MAX:
+                        oldest_key = next(iter(_dismissed_codex_ids))
+                        del _dismissed_codex_ids[oldest_key]
+                    _dismissed_codex_ids[safe_id] = True
+                    f.unlink(missing_ok=True)
+                    dismissed_count += 1
+            except (json.JSONDecodeError, OSError):
+                continue
+        self._send_json(200, {"ok": True, "dismissed": dismissed_count})
+
     def _respond_to_session(self, session_id):
         """Inject a response into the session's Zellij pane."""
         safe_id = os.path.basename(session_id)
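The `_dismiss_dead_sessions` hunk above bounds `_dismissed_codex_ids` by treating a plain dict as an ordered set: dicts preserve insertion order in Python 3.7+, so evicting `next(iter(d))` always removes the oldest entry. A minimal standalone sketch of that pattern (the function name and the small cap are illustrative, not from the diff):

```python
def remember(seen: dict, key: str, cap: int = 3) -> None:
    """Record key in a bounded dict, evicting oldest entries FIFO."""
    while len(seen) >= cap:
        oldest = next(iter(seen))  # first-inserted key
        del seen[oldest]
    seen[key] = True

seen: dict[str, bool] = {}
for sid in ["a", "b", "c", "d"]:
    remember(seen, sid)
print(list(seen))  # ['b', 'c', 'd'] — oldest key "a" was evicted
```

Using a dict instead of a set costs nothing extra here and buys the O(1) "oldest first" eviction without pulling in `collections.OrderedDict` or a deque.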
@@ -1,7 +1,7 @@
 import json
 import os
 
-from amc_server.context import EVENTS_DIR
+from amc_server.config import EVENTS_DIR
 
 
 class ConversationMixin:
@@ -5,18 +5,99 @@ import subprocess
 import time
 from datetime import datetime, timezone
 
-from amc_server.context import (
+from amc_server.agents import (
     CODEX_ACTIVE_WINDOW,
     CODEX_SESSIONS_DIR,
-    SESSIONS_DIR,
     _CODEX_CACHE_MAX,
     _codex_pane_cache,
     _codex_transcript_cache,
     _dismissed_codex_ids,
 )
+from amc_server.config import SESSIONS_DIR
+from amc_server.spawn_config import PENDING_SPAWNS_DIR
 from amc_server.logging_utils import LOGGER
 
 
+def _parse_session_timestamp(session_ts):
+    """Parse Codex session timestamp to Unix time. Returns None on failure."""
+    if not session_ts:
+        return None
+    try:
+        # Codex uses ISO format, possibly with Z suffix or +00:00
+        ts_str = session_ts.replace('Z', '+00:00')
+        dt = datetime.fromisoformat(ts_str)
+        return dt.timestamp()
+    except (ValueError, TypeError, AttributeError):
+        return None
+
+
+def _match_pending_spawn(session_cwd, session_start_ts):
+    """Match a Codex session to a pending spawn by CWD and timestamp.
+
+    Args:
+        session_cwd: The CWD of the Codex session
+        session_start_ts: The session's START timestamp (ISO string from Codex metadata)
+            IMPORTANT: Must be session start time, not file mtime, to avoid false
+            matches with pre-existing sessions that were recently active.
+
+    Returns:
+        spawn_id if matched (and deletes the pending file), None otherwise
+    """
+    if not PENDING_SPAWNS_DIR.exists():
+        return None
+
+    normalized_cwd = os.path.normpath(session_cwd) if session_cwd else ""
+    if not normalized_cwd:
+        return None
+
+    # Parse session start time - if we can't parse it, we can't safely match
+    session_start_unix = _parse_session_timestamp(session_start_ts)
+    if session_start_unix is None:
+        return None
+
+    try:
+        for pending_file in PENDING_SPAWNS_DIR.glob('*.json'):
+            try:
+                data = json.loads(pending_file.read_text())
+                if not isinstance(data, dict):
+                    continue
+
+                # Check agent type (only match codex to codex)
+                if data.get('agent_type') != 'codex':
+                    continue
+
+                # Check CWD match
+                pending_path = os.path.normpath(data.get('project_path', ''))
+                if normalized_cwd != pending_path:
+                    continue
+
+                # Check timing: session must have STARTED after spawn was initiated
+                # Using session start time (not mtime) prevents false matches with
+                # pre-existing sessions that happen to be recently active
+                spawn_ts = data.get('timestamp', 0)
+                if session_start_unix < spawn_ts:
+                    continue
+
+                # Match found - claim the spawn_id and delete the pending file
+                spawn_id = data.get('spawn_id')
+                try:
+                    pending_file.unlink()
+                except OSError:
+                    pass
+                LOGGER.info(
+                    'Matched Codex session (cwd=%s) to pending spawn_id=%s',
+                    session_cwd, spawn_id,
+                )
+                return spawn_id
+
+            except (json.JSONDecodeError, OSError):
+                continue
+    except OSError:
+        pass
+
+    return None
+
+
 class SessionDiscoveryMixin:
     def _discover_active_codex_sessions(self):
         """Find active Codex sessions and create/update session files with Zellij pane info."""
@@ -131,6 +212,13 @@ class SessionDiscoveryMixin:
             session_ts = payload.get("timestamp", "")
             last_event_at = datetime.fromtimestamp(mtime, tz=timezone.utc).isoformat()
 
+            # Check for spawn_id: preserve existing, or match to pending spawn
+            # Use session_ts (start time) not mtime to avoid false matches
+            # with pre-existing sessions that were recently active
+            spawn_id = existing.get("spawn_id")
+            if not spawn_id:
+                spawn_id = _match_pending_spawn(cwd, session_ts)
+
             session_data = {
                 "session_id": session_id,
                 "agent": "codex",
@@ -145,6 +233,8 @@ class SessionDiscoveryMixin:
                 "zellij_pane": zellij_pane or existing.get("zellij_pane", ""),
                 "transcript_path": str(jsonl_file),
             }
+            if spawn_id:
+                session_data["spawn_id"] = spawn_id
             if context_usage:
                 session_data["context_usage"] = context_usage
             elif existing.get("context_usage"):
@@ -1,7 +1,8 @@
 import json
 import urllib.parse
 
-from amc_server.context import DASHBOARD_DIR
+import amc_server.auth as auth
+from amc_server.config import DASHBOARD_DIR
 from amc_server.logging_utils import LOGGER
 
 
@@ -62,6 +63,19 @@ class HttpMixin:
                 project_dir = ""
                 agent = "claude"
                 self._serve_conversation(urllib.parse.unquote(session_id), urllib.parse.unquote(project_dir), agent)
+            elif self.path == "/api/skills" or self.path.startswith("/api/skills?"):
+                # Parse agent from query params, default to claude
+                if "?" in self.path:
+                    query = self.path.split("?", 1)[1]
+                    params = urllib.parse.parse_qs(query)
+                    agent = params.get("agent", ["claude"])[0]
+                else:
+                    agent = "claude"
+                self._serve_skills(agent)
+            elif self.path == "/api/projects":
+                self._handle_projects()
+            elif self.path == "/api/health":
+                self._handle_health()
             else:
                 self._json_error(404, "Not Found")
         except Exception:
@@ -73,12 +87,18 @@ class HttpMixin:
 
     def do_POST(self):
         try:
-            if self.path.startswith("/api/dismiss/"):
+            if self.path == "/api/dismiss-dead":
+                self._dismiss_dead_sessions()
+            elif self.path.startswith("/api/dismiss/"):
                 session_id = urllib.parse.unquote(self.path[len("/api/dismiss/"):])
                 self._dismiss_session(session_id)
             elif self.path.startswith("/api/respond/"):
                 session_id = urllib.parse.unquote(self.path[len("/api/respond/"):])
                 self._respond_to_session(session_id)
+            elif self.path == "/api/spawn":
+                self._handle_spawn()
+            elif self.path == "/api/projects/refresh":
+                self._handle_projects_refresh()
             else:
                 self._json_error(404, "Not Found")
         except Exception:
@@ -89,11 +109,12 @@ class HttpMixin:
             pass
 
     def do_OPTIONS(self):
-        # CORS preflight for respond endpoint
+        # CORS preflight for API endpoints (AC-39: wildcard CORS;
+        # localhost-only binding AC-24 is the real security boundary)
         self.send_response(204)
         self.send_header("Access-Control-Allow-Origin", "*")
-        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
+        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
-        self.send_header("Access-Control-Allow-Headers", "Content-Type")
+        self.send_header("Access-Control-Allow-Headers", "Content-Type, Authorization")
         self.end_headers()
 
     def _serve_dashboard_file(self, file_path):
@@ -113,7 +134,12 @@ class HttpMixin:
         full_path = DASHBOARD_DIR / file_path
         # Security: ensure path doesn't escape dashboard directory
         full_path = full_path.resolve()
-        if not str(full_path).startswith(str(DASHBOARD_DIR.resolve())):
+        resolved_dashboard = DASHBOARD_DIR.resolve()
+        try:
+            # Use relative_to for robust path containment check
+            # (avoids startswith prefix-match bugs like "/dashboard" vs "/dashboardEVIL")
+            full_path.relative_to(resolved_dashboard)
+        except ValueError:
             self._json_error(403, "Forbidden")
             return
 
@@ -121,6 +147,13 @@ class HttpMixin:
         ext = full_path.suffix.lower()
         content_type = content_types.get(ext, "application/octet-stream")
 
+        # Inject auth token into index.html for spawn endpoint security
+        if file_path == "index.html" and auth._auth_token:
+            content = content.replace(
+                b"<!-- AMC_AUTH_TOKEN -->",
+                f'<script>window.AMC_AUTH_TOKEN = "{auth._auth_token}";</script>'.encode(),
+            )
+
         # No caching during development
         self._send_bytes_response(
             200,
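The `_serve_dashboard_file` hunk replaces the `str.startswith` containment check with `Path.relative_to` because a plain prefix comparison wrongly accepts sibling directories that merely share a prefix. A minimal demonstration of the difference (the paths are made up for illustration):

```python
from pathlib import PurePosixPath

def contained(root: str, candidate: str) -> bool:
    """True if candidate lies under root, judged path-segment-wise."""
    try:
        PurePosixPath(candidate).relative_to(PurePosixPath(root))
        return True
    except ValueError:
        return False

assert contained("/srv/dashboard", "/srv/dashboard/js/app.js")
assert not contained("/srv/dashboard", "/srv/dashboardEVIL/app.js")
# The string-prefix check gets the second case wrong:
assert "/srv/dashboardEVIL/app.js".startswith("/srv/dashboard")
```

On Python 3.9+ `Path.is_relative_to` expresses the same check without the try/except; the diff's try/except form works on older versions too.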
@@ -2,9 +2,10 @@ import json
 import os
 from pathlib import Path
 
-from amc_server.context import (
+from amc_server.agents import (
     CLAUDE_PROJECTS_DIR,
     CODEX_SESSIONS_DIR,
+    _CODEX_CACHE_MAX,
     _CONTEXT_CACHE_MAX,
     _codex_transcript_cache,
     _context_usage_cache,
@@ -44,6 +45,11 @@ class SessionParsingMixin:
 
         try:
             for jsonl_file in CODEX_SESSIONS_DIR.rglob(f"*{session_id}*.jsonl"):
+                # Evict old entries if cache is full (simple FIFO)
+                if len(_codex_transcript_cache) >= _CODEX_CACHE_MAX:
+                    keys_to_remove = list(_codex_transcript_cache.keys())[: _CODEX_CACHE_MAX // 5]
+                    for k in keys_to_remove:
+                        _codex_transcript_cache.pop(k, None)
                 _codex_transcript_cache[session_id] = str(jsonl_file)
                 return jsonl_file
         except OSError:
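The eviction added to the transcript-path cache drops the oldest fifth of the keys (dict insertion order) whenever the cap is reached, before the new entry goes in. A standalone sketch of that policy, with an illustrative cap of 10 (the real cap is `_CODEX_CACHE_MAX`, whose value this diff does not show):

```python
def cache_put(cache: dict, key: str, value: str, cap: int = 10) -> None:
    """Insert into a bounded dict, batch-evicting the oldest cap//5 keys."""
    if len(cache) >= cap:
        # list(...) snapshots the keys so we can mutate while iterating
        for k in list(cache.keys())[: cap // 5]:
            cache.pop(k, None)
    cache[key] = value

cache = {f"s{i}": f"/tmp/s{i}.jsonl" for i in range(10)}
cache_put(cache, "s10", "/tmp/s10.jsonl")
print(len(cache), "s0" in cache)  # 9 False — s0 and s1 were evicted
```

Evicting a batch instead of a single key amortizes the eviction cost: the list-and-slice scan runs once per `cap // 5` insertions rather than on every insert.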
amc_server/mixins/skills.py (new file, 189 lines)
@@ -0,0 +1,189 @@
+"""SkillsMixin: Enumerate available skills for Claude and Codex agents.
+
+Skills are agent-global (not session-specific), loaded from well-known
+filesystem locations for each agent type.
+"""
+
+import json
+from pathlib import Path
+
+
+class SkillsMixin:
+    """Mixin for enumerating agent skills for autocomplete."""
+
+    def _serve_skills(self, agent: str) -> None:
+        """Serve autocomplete config for an agent type.
+
+        Args:
+            agent: Agent type ('claude' or 'codex')
+
+        Response JSON:
+            {trigger: '/' or '$', skills: [{name, description}, ...]}
+        """
+        if agent == "codex":
+            trigger = "$"
+            skills = self._enumerate_codex_skills()
+        else:  # Default to claude
+            trigger = "/"
+            skills = self._enumerate_claude_skills()
+
+        # Sort alphabetically by name (case-insensitive)
+        skills.sort(key=lambda s: s["name"].lower())
+
+        self._send_json(200, {"trigger": trigger, "skills": skills})
+
+    def _enumerate_claude_skills(self) -> list[dict]:
+        """Enumerate Claude skills from ~/.claude/skills/.
+
+        Checks SKILL.md (canonical) first, then falls back to skill.md,
+        prompt.md, README.md for description extraction. Parses YAML
+        frontmatter if present to extract name and description fields.
+
+        Returns:
+            List of {name: str, description: str} dicts.
+            Empty list if directory doesn't exist or enumeration fails.
+        """
+        skills = []
+        skills_dir = Path.home() / ".claude/skills"
+
+        if not skills_dir.exists():
+            return skills
+
+        for skill_dir in skills_dir.iterdir():
+            if not skill_dir.is_dir() or skill_dir.name.startswith("."):
+                continue
+
+            meta = {"name": "", "description": ""}
+            # Check files in priority order, accumulating metadata
+            # (earlier files take precedence for each field)
+            for md_name in ["SKILL.md", "skill.md", "prompt.md", "README.md"]:
+                md_file = skill_dir / md_name
+                if md_file.exists():
+                    try:
+                        content = md_file.read_text()
+                        parsed = self._parse_frontmatter(content)
+                        if not meta["name"] and parsed["name"]:
+                            meta["name"] = parsed["name"]
+                        if not meta["description"] and parsed["description"]:
+                            meta["description"] = parsed["description"]
+                        if meta["description"]:
+                            break
+                    except OSError:
+                        pass
+
+            skills.append({
+                "name": meta["name"] or skill_dir.name,
+                "description": meta["description"] or f"Skill: {skill_dir.name}",
+            })
+
+        return skills
+
+    def _parse_frontmatter(self, content: str) -> dict:
+        """Extract name and description from markdown YAML frontmatter.
+
+        Returns:
+            Dict with 'name' and 'description' keys (both str, may be empty).
+        """
+        result = {"name": "", "description": ""}
+        lines = content.splitlines()
+        if not lines:
+            return result
+
+        # Check for YAML frontmatter
+        frontmatter_end = 0
+        if lines[0].strip() == "---":
+            for i, line in enumerate(lines[1:], start=1):
+                stripped = line.strip()
+                if stripped == "---":
+                    frontmatter_end = i + 1
+                    break
+                # Check each known frontmatter field
+                for field in ("name", "description"):
+                    if stripped.startswith(f"{field}:"):
+                        val = stripped[len(field) + 1:].strip()
+                        # Remove quotes if present
+                        if val.startswith('"') and val.endswith('"'):
+                            val = val[1:-1]
+                        elif val.startswith("'") and val.endswith("'"):
+                            val = val[1:-1]
+                        # Handle YAML multi-line indicators (>- or |-)
+                        if val in (">-", "|-", ">", "|", ""):
+                            if i + 1 < len(lines):
+                                next_line = lines[i + 1].strip()
+                                if next_line and not next_line.startswith("---"):
+                                    val = next_line
+                                else:
+                                    val = ""
+                            else:
+                                val = ""
+                        if val:
+                            result[field] = val[:100]
+
+        # Fall back to first meaningful line for description
+        if not result["description"]:
+            for line in lines[frontmatter_end:]:
+                stripped = line.strip()
+                if stripped and not stripped.startswith("#") and not stripped.startswith("<!--") and stripped != "---":
+                    result["description"] = stripped[:100]
+                    break
+
+        return result
+
+    def _enumerate_codex_skills(self) -> list[dict]:
+        """Enumerate Codex skills from cache and user directory.
+
+        Sources:
+        - ~/.codex/vendor_imports/skills-curated-cache.json (curated)
+        - ~/.codex/skills/*/ (user-installed)
+
+        Note: No deduplication — if curated and user skills share a name,
+        both appear in the list (per plan Known Limitations).
+
+        Returns:
+            List of {name: str, description: str} dicts.
+            Empty list if no skills found.
+        """
+        skills = []
+
+        # 1. Curated skills from cache
+        cache_file = Path.home() / ".codex/vendor_imports/skills-curated-cache.json"
+        if cache_file.exists():
+            try:
+                data = json.loads(cache_file.read_text())
+                for skill in data.get("skills", []):
+                    # Use 'id' preferentially, fall back to 'name'
+                    name = skill.get("id") or skill.get("name", "")
+                    # Use 'shortDescription' preferentially, fall back to 'description'
+                    desc = skill.get("shortDescription") or skill.get("description", "")
+                    if name:
+                        skills.append({
+                            "name": name,
+                            "description": desc[:100] if desc else f"Skill: {name}",
+                        })
+            except (json.JSONDecodeError, OSError):
+                # Continue without curated skills on parse error
+                pass
+
+        # 2. User-installed skills
+        user_skills_dir = Path.home() / ".codex/skills"
+        if user_skills_dir.exists():
+            for skill_dir in user_skills_dir.iterdir():
+                if not skill_dir.is_dir() or skill_dir.name.startswith("."):
+                    continue
+
+                meta = {"name": "", "description": ""}
+                # Check SKILL.md for metadata
+                skill_md = skill_dir / "SKILL.md"
+                if skill_md.exists():
+                    try:
+                        content = skill_md.read_text()
+                        meta = self._parse_frontmatter(content)
+                    except OSError:
+                        pass
+
+                skills.append({
+                    "name": meta["name"] or skill_dir.name,
+                    "description": meta["description"] or f"User skill: {skill_dir.name}",
+                })
+
+        return skills
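The frontmatter rule implemented by `_parse_frontmatter` can be exercised in isolation. Below is a deliberately simplified re-implementation covering the common case (a leading `---` block scanned for `name:` and `description:` keys, quotes stripped, values capped at 100 chars); it is a sketch, not the mixin's exact code, and its quote stripping is looser (`str.strip` on both quote characters rather than a matched-pair check):

```python
def read_frontmatter(text: str) -> dict:
    """Pull name/description out of a leading '---' YAML frontmatter block."""
    out = {"name": "", "description": ""}
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return out
    for line in lines[1:]:
        s = line.strip()
        if s == "---":  # closing fence ends the frontmatter
            break
        for field in ("name", "description"):
            if s.startswith(f"{field}:"):
                val = s[len(field) + 1:].strip().strip('"\'')
                if val:
                    out[field] = val[:100]
    return out

md = '---\nname: demo-skill\ndescription: "Does a thing"\n---\n# Body\n'
print(read_frontmatter(md))  # {'name': 'demo-skill', 'description': 'Does a thing'}
```

The real mixin additionally handles YAML multi-line indicators (`>-`, `|-`) and falls back to the first meaningful body line when no description is found.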
amc_server/mixins/spawn.py (new file, 360 lines)
@@ -0,0 +1,360 @@
+import json
+import os
+import re
+import subprocess
+import time
+import uuid
+
+from amc_server.auth import validate_auth_token
+from amc_server.config import SESSIONS_DIR
+from amc_server.spawn_config import (
+    PENDING_SPAWNS_DIR, PENDING_SPAWN_TTL,
+    PROJECTS_DIR,
+    _spawn_lock, _spawn_timestamps, SPAWN_COOLDOWN_SEC,
+)
+from amc_server.zellij import ZELLIJ_BIN, ZELLIJ_SESSION
+from amc_server.logging_utils import LOGGER
+
+
+def _write_pending_spawn(spawn_id, project_path, agent_type):
+    """Write a pending spawn record for later correlation by discovery.
+
+    This enables Codex session correlation since env vars don't propagate
+    through Zellij's pane spawn mechanism.
+    """
+    PENDING_SPAWNS_DIR.mkdir(parents=True, exist_ok=True)
+    pending_file = PENDING_SPAWNS_DIR / f'{spawn_id}.json'
+    data = {
+        'spawn_id': spawn_id,
+        'project_path': str(project_path),
+        'agent_type': agent_type,
+        'timestamp': time.time(),
+    }
+    try:
+        pending_file.write_text(json.dumps(data))
+    except OSError:
+        LOGGER.warning('Failed to write pending spawn file for %s', spawn_id)
+
+
+def _cleanup_stale_pending_spawns():
+    """Remove pending spawn files older than PENDING_SPAWN_TTL."""
+    if not PENDING_SPAWNS_DIR.exists():
+        return
+    now = time.time()
+    try:
+        for f in PENDING_SPAWNS_DIR.glob('*.json'):
+            try:
+                if now - f.stat().st_mtime > PENDING_SPAWN_TTL:
+                    f.unlink()
+            except OSError:
+                continue
+    except OSError:
+        pass
+
+
+# Agent commands (AC-8, AC-9: full autonomous permissions)
+AGENT_COMMANDS = {
+    'claude': ['claude', '--dangerously-skip-permissions'],
+    'codex': ['codex', '--dangerously-bypass-approvals-and-sandbox'],
+}
+
+# Module-level cache for projects list (AC-33)
+_projects_cache: list[str] = []
+
+# Characters unsafe for Zellij pane/tab names: control chars, quotes, backticks
+_UNSAFE_PANE_CHARS = re.compile(r'[\x00-\x1f\x7f"\'`]')
+
+
+def _sanitize_pane_name(name):
+    """Sanitize a string for use as a Zellij pane name.
+
+    Replaces control characters and quotes with underscores, collapses runs
+    of whitespace into a single space, and truncates to 64 chars.
+    """
+    name = _UNSAFE_PANE_CHARS.sub('_', name)
+    name = re.sub(r'\s+', ' ', name).strip()
+    return name[:64] if name else 'unnamed'
+
+
+def load_projects_cache():
+    """Scan ~/projects/ and cache the list. Called on server start."""
+    global _projects_cache
+    try:
+        projects = []
+        for entry in PROJECTS_DIR.iterdir():
+            if entry.is_dir() and not entry.name.startswith('.'):
+                projects.append(entry.name)
+        projects.sort()
+        _projects_cache = projects
+    except OSError:
+        _projects_cache = []
+
+
+class SpawnMixin:
+
+    def _handle_spawn(self):
+        """POST /api/spawn handler."""
+        # Verify auth token (AC-38)
+        auth_header = self.headers.get('Authorization', '')
+        if not validate_auth_token(auth_header):
+            self._send_json(401, {'ok': False, 'error': 'Unauthorized', 'code': 'UNAUTHORIZED'})
+            return
+
+        # Parse JSON body
+        try:
+            content_length = int(self.headers.get('Content-Length', 0))
+            body = json.loads(self.rfile.read(content_length))
+            if not isinstance(body, dict):
+                self._json_error(400, 'Invalid JSON body')
+                return
+        except (json.JSONDecodeError, ValueError):
+            self._json_error(400, 'Invalid JSON body')
+            return
+
+        project = body.get('project', '')
+        agent_type = body.get('agent_type', '')
+
+        # Validate params (returns resolved_path to avoid TOCTOU)
+        validation = self._validate_spawn_params(project, agent_type)
+        if 'error' in validation:
+            self._send_json(400, {
+                'ok': False,
+                'error': validation['error'],
+                'code': validation['code'],
+            })
+            return
+
+        resolved_path = validation['resolved_path']
+        spawn_id = str(uuid.uuid4())
+
+        # Acquire _spawn_lock with 15s timeout
+        acquired = _spawn_lock.acquire(timeout=15)
+        if not acquired:
+            self._send_json(503, {
+                'ok': False,
+                'error': 'Server busy - another spawn in progress',
+                'code': 'SERVER_BUSY',
+            })
+            return
+
+        try:
+            # Check rate limit inside lock
+            # Use None sentinel to distinguish "never spawned" from "spawned at time 0"
+            # (time.monotonic() can be close to 0 on fresh process start)
+            now = time.monotonic()
+            last_spawn = _spawn_timestamps.get(project)
+            if last_spawn is not None and now - last_spawn < SPAWN_COOLDOWN_SEC:
+                remaining = SPAWN_COOLDOWN_SEC - (now - last_spawn)
+                self._send_json(429, {
+                    'ok': False,
+                    'error': f'Rate limited - wait {remaining:.0f}s before spawning in {project}',
+                    'code': 'RATE_LIMITED',
+                })
+                return
+
+            # Execute spawn
+            result = self._spawn_agent_in_project_tab(
+                project, resolved_path, agent_type, spawn_id,
+            )
+
+            # Update timestamp only on success
+            if result.get('ok'):
+                _spawn_timestamps[project] = time.monotonic()
+
+            status_code = 200 if result.get('ok') else 500
+            result['spawn_id'] = spawn_id
+            self._send_json(status_code, result)
+        finally:
+            _spawn_lock.release()
+
+    def _validate_spawn_params(self, project, agent_type):
+        """Validate spawn parameters. Returns resolved_path or error dict."""
+        if not project or not isinstance(project, str):
+            return {'error': 'Project name is required', 'code': 'MISSING_PROJECT'}
+
+        # Reject whitespace-only names
+        if not project.strip():
+            return {'error': 'Project name is required', 'code': 'MISSING_PROJECT'}
+
+        # Reject null bytes and control characters (U+0000-U+001F, U+007F)
+        if '\x00' in project or re.search(r'[\x00-\x1f\x7f]', project):
+            return {'error': 'Invalid project name', 'code': 'INVALID_PROJECT'}
+
+        # Reject path traversal characters (/, \, ..)
+        if '/' in project or '\\' in project or '..' in project:
+            return {'error': 'Invalid project name', 'code': 'INVALID_PROJECT'}
+
+        # Resolve symlinks and verify under PROJECTS_DIR
+        candidate = PROJECTS_DIR / project
+        try:
+            resolved = candidate.resolve()
+        except OSError:
+            return {'error': f'Project not found: {project}', 'code': 'PROJECT_NOT_FOUND'}
+
+        # Symlink escape check
+        try:
+            resolved.relative_to(PROJECTS_DIR.resolve())
+        except ValueError:
+            return {'error': 'Invalid project name', 'code': 'INVALID_PROJECT'}
+
+        if not resolved.is_dir():
+            return {'error': f'Project not found: {project}', 'code': 'PROJECT_NOT_FOUND'}
+
+        if agent_type not in AGENT_COMMANDS:
+            return {
+                'error': f'Invalid agent type: {agent_type}. Must be one of: {", ".join(sorted(AGENT_COMMANDS))}',
+                'code': 'INVALID_AGENT_TYPE',
+            }
+
+        return {'resolved_path': resolved}
+
+    def _check_zellij_session_exists(self):
+        """Check if the target Zellij session exists."""
+        try:
+            result = subprocess.run(
+                [ZELLIJ_BIN, 'list-sessions'],
+                capture_output=True,
+                text=True,
+                timeout=5,
+            )
+            if result.returncode != 0:
+                return False
+            # Strip ANSI escape codes (Zellij outputs colored text)
+            ansi_pattern = re.compile(r'\x1b\[[0-9;]*m')
+            output = ansi_pattern.sub('', result.stdout)
+            # Parse line-by-line to avoid substring false positives
+            for line in output.splitlines():
+                # Zellij outputs "session_name [Created ...]" or just "session_name"
+                session_name = line.strip().split()[0] if line.strip() else ''
+                if session_name == ZELLIJ_SESSION:
+                    return True
+            return False
+        except FileNotFoundError:
+            return False
+        except subprocess.TimeoutExpired:
+            return False
+        except OSError:
+            return False
+
+    def _wait_for_session_file(self, spawn_id, timeout=10.0):
+        """Poll for a session file matching spawn_id."""
+        deadline = time.monotonic() + timeout
+        while time.monotonic() < deadline:
+            try:
+                for f in SESSIONS_DIR.glob('*.json'):
+                    try:
+                        data = json.loads(f.read_text())
+                        if isinstance(data, dict) and data.get('spawn_id') == spawn_id:
+                            return True
+                    except (json.JSONDecodeError, OSError):
+                        continue
+            except OSError:
+                pass
+            time.sleep(0.25)
+        return False
+
+    def _spawn_agent_in_project_tab(self, project, project_path, agent_type, spawn_id):
+        """Spawn an agent in a project-named Zellij tab."""
+        # Clean up stale pending spawns opportunistically
+        _cleanup_stale_pending_spawns()
+
+        # For Codex, write pending spawn record before launching.
+        # Zellij doesn't propagate env vars to pane commands, so discovery
+        # will match the session to this record by CWD + timestamp.
+        # (Claude doesn't need this - amc-hook writes spawn_id directly)
+        if agent_type == 'codex':
+            _write_pending_spawn(spawn_id, project_path, agent_type)
+
+        # Check session exists
+        if not self._check_zellij_session_exists():
|
||||||
|
return {
|
||||||
|
'ok': False,
|
||||||
|
'error': f'Zellij session "{ZELLIJ_SESSION}" not found',
|
||||||
|
'code': 'SESSION_NOT_FOUND',
|
||||||
|
}
|
||||||
|
|
||||||
|
# Create/switch to project tab
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
[
|
||||||
|
ZELLIJ_BIN, '--session', ZELLIJ_SESSION,
|
||||||
|
'action', 'go-to-tab-name', '--create', project,
|
||||||
|
],
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
timeout=5,
|
||||||
|
)
|
||||||
|
if result.returncode != 0:
|
||||||
|
return {
|
||||||
|
'ok': False,
|
||||||
|
'error': f'Failed to create tab: {result.stderr.strip() or "unknown error"}',
|
||||||
|
'code': 'TAB_ERROR',
|
||||||
|
}
|
||||||
|
except FileNotFoundError:
|
||||||
|
return {'ok': False, 'error': f'Zellij not found at {ZELLIJ_BIN}', 'code': 'ZELLIJ_NOT_FOUND'}
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
return {'ok': False, 'error': 'Zellij tab creation timed out', 'code': 'TIMEOUT'}
|
||||||
|
except OSError as e:
|
||||||
|
return {'ok': False, 'error': str(e), 'code': 'SPAWN_ERROR'}
|
||||||
|
|
||||||
|
# Build agent command
|
||||||
|
agent_cmd = AGENT_COMMANDS[agent_type]
|
||||||
|
pane_name = _sanitize_pane_name(f'{agent_type}-{project}')
|
||||||
|
|
||||||
|
# Spawn pane with agent command
|
||||||
|
env = os.environ.copy()
|
||||||
|
env['AMC_SPAWN_ID'] = spawn_id
|
||||||
|
|
||||||
|
try:
|
||||||
|
result = subprocess.run(
|
||||||
|
[
|
||||||
|
ZELLIJ_BIN, '--session', ZELLIJ_SESSION,
|
||||||
|
'action', 'new-pane',
|
||||||
|
'--name', pane_name,
|
||||||
|
'--cwd', str(project_path),
|
||||||
|
'--',
|
||||||
|
] + agent_cmd,
|
||||||
|
env=env,
|
||||||
|
capture_output=True,
|
||||||
|
text=True,
|
||||||
|
timeout=5,
|
||||||
|
)
|
||||||
|
if result.returncode != 0:
|
||||||
|
return {
|
||||||
|
'ok': False,
|
||||||
|
'error': f'Failed to spawn pane: {result.stderr.strip() or "unknown error"}',
|
||||||
|
'code': 'SPAWN_ERROR',
|
||||||
|
}
|
||||||
|
except FileNotFoundError:
|
||||||
|
return {'ok': False, 'error': f'Zellij not found at {ZELLIJ_BIN}', 'code': 'ZELLIJ_NOT_FOUND'}
|
||||||
|
except subprocess.TimeoutExpired:
|
||||||
|
return {'ok': False, 'error': 'Pane spawn timed out', 'code': 'TIMEOUT'}
|
||||||
|
except OSError as e:
|
||||||
|
return {'ok': False, 'error': str(e), 'code': 'SPAWN_ERROR'}
|
||||||
|
|
||||||
|
# Wait for session file to appear
|
||||||
|
found = self._wait_for_session_file(spawn_id)
|
||||||
|
if not found:
|
||||||
|
LOGGER.warning(
|
||||||
|
'Session file not found for spawn_id=%s after timeout (agent may still be starting)',
|
||||||
|
spawn_id,
|
||||||
|
)
|
||||||
|
|
||||||
|
return {'ok': True, 'session_file_found': found}
|
||||||
|
|
||||||
|
def _handle_projects(self):
|
||||||
|
"""GET /api/projects - return cached projects list."""
|
||||||
|
self._send_json(200, {'ok': True, 'projects': list(_projects_cache)})
|
||||||
|
|
||||||
|
def _handle_projects_refresh(self):
|
||||||
|
"""POST /api/projects/refresh - refresh and return projects list."""
|
||||||
|
load_projects_cache()
|
||||||
|
self._send_json(200, {'ok': True, 'projects': list(_projects_cache)})
|
||||||
|
|
||||||
|
def _handle_health(self):
|
||||||
|
"""GET /api/health - check server and Zellij status."""
|
||||||
|
zellij_ok = self._check_zellij_session_exists()
|
||||||
|
self._send_json(200, {
|
||||||
|
'ok': True,
|
||||||
|
'zellij_session': ZELLIJ_SESSION,
|
||||||
|
'zellij_available': zellij_ok,
|
||||||
|
})
|
||||||
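The validation chain above rejects separators and `..` before resolving symlinks, then confirms containment with `relative_to`. Here is a minimal standalone sketch of that containment logic, using only the standard library (`is_contained` is a hypothetical helper for illustration, not part of the codebase):

```python
from pathlib import Path
import tempfile

def is_contained(base: Path, name: str) -> bool:
    """True only if base/name (after resolving symlinks) stays inside base."""
    # Fast rejects mirroring the endpoint's checks: separators and '..'
    if '/' in name or '\\' in name or '..' in name:
        return False
    resolved = (base / name).resolve()
    try:
        resolved.relative_to(base.resolve())
        return True
    except ValueError:
        return False

base = Path(tempfile.mkdtemp())
(base / 'demo').mkdir()
outside = Path(tempfile.mkdtemp())
(base / 'link').symlink_to(outside)  # symlink escaping the base dir

print(is_contained(base, 'demo'))    # True
print(is_contained(base, '../etc'))  # False: traversal rejected up front
print(is_contained(base, 'link'))    # False: symlink resolves outside base
```

Rejecting `/`, `\`, and `..` up front means `resolve()` only has to defend against symlink escapes, which `relative_to` catches after both sides are resolved.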
@@ -2,18 +2,18 @@ import hashlib
 import json
 import subprocess
 import time
+from collections import defaultdict
 from datetime import datetime, timezone
 from pathlib import Path
 
-from amc_server.context import (
+from amc_server.config import (
     EVENTS_DIR,
     SESSIONS_DIR,
     STALE_EVENT_AGE,
     STALE_STARTING_AGE,
-    ZELLIJ_BIN,
     _state_lock,
-    _zellij_cache,
 )
+from amc_server.zellij import ZELLIJ_BIN, _zellij_cache
 from amc_server.logging_utils import LOGGER
 
@@ -100,6 +100,9 @@ class StateMixin:
         # Get active Zellij sessions for liveness check
         active_zellij_sessions = self._get_active_zellij_sessions()
 
+        # Get set of transcript files with active processes (for dead detection)
+        active_transcript_files = self._get_active_transcript_files()
+
         for f in SESSIONS_DIR.glob("*.json"):
             try:
                 data = json.loads(f.read_text())
@@ -120,11 +123,31 @@
                 if context_usage:
                     data["context_usage"] = context_usage
 
+                # Capture turn token baseline on UserPromptSubmit (for per-turn token display)
+                # Only write once when the turn starts and we have token data
+                if (
+                    data.get("last_event") == "UserPromptSubmit"
+                    and "turn_start_tokens" not in data
+                    and context_usage
+                    and context_usage.get("current_tokens") is not None
+                ):
+                    data["turn_start_tokens"] = context_usage["current_tokens"]
+                    # Persist to session file so it survives server restarts
+                    try:
+                        f.write_text(json.dumps(data, indent=2))
+                    except OSError:
+                        pass
+
                 # Track conversation file mtime for real-time update detection
                 conv_mtime = self._get_conversation_mtime(data)
                 if conv_mtime:
                     data["conversation_mtime_ns"] = conv_mtime
 
+                # Determine if session is "dead" (no longer interactable)
+                data["is_dead"] = self._is_session_dead(
+                    data, active_zellij_sessions, active_transcript_files
+                )
+
                 sessions.append(data)
             except (json.JSONDecodeError, OSError):
                 continue
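The baseline-capture condition in the hunk above fires exactly once per turn: only on `UserPromptSubmit`, and only when token data exists. A small sketch of that guard as a pure function (the helper name is illustrative, and persistence to the session file is omitted):

```python
def capture_turn_baseline(data, context_usage):
    """Record turn_start_tokens once, when a turn begins with token data."""
    if (
        data.get("last_event") == "UserPromptSubmit"
        and "turn_start_tokens" not in data
        and context_usage
        and context_usage.get("current_tokens") is not None
    ):
        data["turn_start_tokens"] = context_usage["current_tokens"]
    return data

d = {"last_event": "UserPromptSubmit"}
d = capture_turn_baseline(d, {"current_tokens": 3500})
d = capture_turn_baseline(d, {"current_tokens": 9000})
print(d["turn_start_tokens"])  # 3500: baseline is written once per turn
```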
@@ -132,8 +155,11 @@
                 LOGGER.exception("Failed processing session file %s", f)
                 continue
 
-        # Sort by last_event_at descending
-        sessions.sort(key=lambda s: s.get("last_event_at", ""), reverse=True)
+        # Sort by session_id for stable, deterministic ordering (no visual jumping)
+        sessions.sort(key=lambda s: s.get("session_id", ""))
+
+        # Dedupe same-pane sessions (handles --resume creating orphan + real session)
+        sessions = self._dedupe_same_pane_sessions(sessions)
 
         # Clean orphan event logs (sessions persist until manually dismissed or SessionEnd)
         self._cleanup_stale(sessions)
@@ -204,6 +230,133 @@
 
         return None
 
+    def _get_active_transcript_files(self):
+        """Get set of transcript files that have active processes.
+
+        Uses a batched lsof call to efficiently check which Codex transcript
+        files are currently open by a process.
+
+        Returns:
+            set: Absolute paths of transcript files with active processes.
+        """
+        from amc_server.agents import CODEX_SESSIONS_DIR
+
+        if not CODEX_SESSIONS_DIR.exists():
+            return set()
+
+        # Find all recent transcript files
+        transcript_files = []
+        now = time.time()
+        cutoff = now - 3600  # Only check files modified in the last hour
+
+        for jsonl_file in CODEX_SESSIONS_DIR.rglob("*.jsonl"):
+            try:
+                if jsonl_file.stat().st_mtime > cutoff:
+                    transcript_files.append(str(jsonl_file))
+            except OSError:
+                continue
+
+        if not transcript_files:
+            return set()
+
+        # Batch lsof check for all transcript files
+        active_files = set()
+        try:
+            # lsof with multiple files: returns PIDs for any that are open
+            result = subprocess.run(
+                ["lsof", "-t"] + transcript_files,
+                capture_output=True,
+                text=True,
+                timeout=5,
+            )
+            # If any file is open, lsof returns 0
+            # We need to check which specific files are open
+            if result.returncode == 0 and result.stdout.strip():
+                # At least one file is open - check each one
+                for tf in transcript_files:
+                    try:
+                        check = subprocess.run(
+                            ["lsof", "-t", tf],
+                            capture_output=True,
+                            text=True,
+                            timeout=2,
+                        )
+                        if check.returncode == 0 and check.stdout.strip():
+                            active_files.add(tf)
+                    except (subprocess.TimeoutExpired, Exception):
+                        continue
+        except (subprocess.TimeoutExpired, FileNotFoundError, Exception):
+            pass
+
+        return active_files
+
+    def _is_session_dead(self, session_data, active_zellij_sessions, active_transcript_files):
+        """Determine if a session is 'dead' (no longer interactable).
+
+        A dead session cannot receive input and won't produce more output.
+        These should be shown separately from active sessions in the UI.
+
+        Args:
+            session_data: The session dict
+            active_zellij_sessions: Set of active zellij session names (or None)
+            active_transcript_files: Set of transcript file paths with active processes
+
+        Returns:
+            bool: True if the session is dead
+        """
+        agent = session_data.get("agent")
+        zellij_session = session_data.get("zellij_session", "")
+        status = session_data.get("status", "")
+
+        # Sessions that are still starting are not dead (yet)
+        if status == "starting":
+            return False
+
+        if agent == "codex":
+            # Codex session is dead if no process has the transcript file open
+            transcript_path = session_data.get("transcript_path", "")
+            if not transcript_path:
+                return True  # No transcript path = malformed, treat as dead
+
+            # Check cached set first (covers recently-modified files)
+            if transcript_path in active_transcript_files:
+                return False  # Process is running
+
+            # For older files not in cached set, do explicit lsof check
+            # This handles long-idle but still-running processes
+            if self._is_file_open(transcript_path):
+                return False  # Process is running
+
+            # No process running - it's dead
+            return True
+
+        elif agent == "claude":
+            # Claude session is dead if:
+            # 1. No zellij session attached, OR
+            # 2. The zellij session no longer exists
+            if not zellij_session:
+                return True
+            if active_zellij_sessions is not None:
+                return zellij_session not in active_zellij_sessions
+            # If we couldn't query zellij, assume alive (don't false-positive)
+            return False
+
+        # Unknown agent type - assume alive
+        return False
+
+    def _is_file_open(self, file_path):
+        """Check if any process has a file open using lsof."""
+        try:
+            result = subprocess.run(
+                ["lsof", "-t", file_path],
+                capture_output=True,
+                text=True,
+                timeout=2,
+            )
+            return result.returncode == 0 and result.stdout.strip()
+        except (subprocess.TimeoutExpired, FileNotFoundError, Exception):
+            return False  # Assume not open on error (conservative)
+
     def _cleanup_stale(self, sessions):
         """Remove orphan event logs >24h and stale 'starting' sessions >1h."""
         active_ids = {s.get("session_id") for s in sessions if s.get("session_id")}
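The `_is_session_dead` rules above reduce to a small decision table. A pure-function sketch (the `lsof` fallback for long-idle Codex transcripts is omitted, and field names follow the session dicts in the diff):

```python
def is_dead(session, active_zellij, open_transcripts):
    """Condensed liveness rules: codex keys on open transcripts, claude on zellij."""
    if session.get("status") == "starting":
        return False  # still starting: give it time
    agent = session.get("agent")
    if agent == "codex":
        path = session.get("transcript_path", "")
        # Dead when no process holds the transcript open (or path is missing)
        return not (path and path in open_transcripts)
    if agent == "claude":
        zs = session.get("zellij_session", "")
        if not zs:
            return True
        if active_zellij is not None:
            return zs not in active_zellij
        return False  # zellij unqueryable: assume alive
    return False  # unknown agent type: assume alive

print(is_dead({"agent": "claude", "zellij_session": "infra", "status": "active"}, {"infra"}, set()))  # False
print(is_dead({"agent": "claude", "zellij_session": "gone", "status": "active"}, {"infra"}, set()))   # True
print(is_dead({"agent": "codex", "transcript_path": "/t.jsonl", "status": "active"}, None, set()))    # True
```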
@@ -231,3 +384,56 @@
                     f.unlink()
             except (json.JSONDecodeError, OSError):
                 pass
+
+    def _dedupe_same_pane_sessions(self, sessions):
+        """Remove orphan sessions when multiple sessions share the same Zellij pane.
+
+        This handles the --resume edge case where Claude creates a new session file
+        before resuming the old one, leaving an orphan with no context_usage.
+
+        When multiple sessions share (zellij_session, zellij_pane), keep the one with:
+        1. context_usage (has actual conversation data)
+        2. Higher conversation_mtime_ns (more recent activity)
+        """
+
+        def session_score(s):
+            """Score a session for dedup ranking: (has_context, mtime)."""
+            has_context = 1 if s.get("context_usage") else 0
+            mtime = s.get("conversation_mtime_ns") or 0
+            # Defensive: ensure mtime is numeric
+            if not isinstance(mtime, (int, float)):
+                mtime = 0
+            return (has_context, mtime)
+
+        # Group sessions by pane
+        pane_groups = defaultdict(list)
+        for s in sessions:
+            zs = s.get("zellij_session", "")
+            zp = s.get("zellij_pane", "")
+            if zs and zp:
+                pane_groups[(zs, zp)].append(s)
+
+        # Find orphans to remove
+        orphan_ids = set()
+        for group in pane_groups.values():
+            if len(group) <= 1:
+                continue
+
+            # Pick the best session: prefer context_usage, then highest mtime
+            group_sorted = sorted(group, key=session_score, reverse=True)
+
+            # Mark all but the best as orphans
+            for s in group_sorted[1:]:
+                session_id = s.get("session_id")
+                if not session_id:
+                    continue  # Skip sessions without valid IDs
+                orphan_ids.add(session_id)
+                # Also delete the orphan session file
+                try:
+                    orphan_file = SESSIONS_DIR / f"{session_id}.json"
+                    orphan_file.unlink(missing_ok=True)
+                except OSError:
+                    pass
+
+        # Return filtered list
+        return [s for s in sessions if s.get("session_id") not in orphan_ids]
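The dedupe ranking above keys on `(has_context, mtime)` so a session with real conversation data always beats an orphan in the same pane. A compact sketch of the grouping and ranking, with made-up sample session dicts:

```python
from collections import defaultdict

def session_score(s):
    """Rank key: sessions with context_usage win, then higher mtime wins."""
    has_context = 1 if s.get("context_usage") else 0
    mtime = s.get("conversation_mtime_ns") or 0
    if not isinstance(mtime, (int, float)):
        mtime = 0
    return (has_context, mtime)

sessions = [
    # Orphan left behind by --resume: same pane, no conversation data
    {"session_id": "orphan", "zellij_session": "infra", "zellij_pane": "1"},
    # The real session: has context_usage and a recent mtime
    {"session_id": "real", "zellij_session": "infra", "zellij_pane": "1",
     "context_usage": {"current_tokens": 1200}, "conversation_mtime_ns": 42},
]

groups = defaultdict(list)
for s in sessions:
    groups[(s["zellij_session"], s["zellij_pane"])].append(s)

# Keep only the best-scoring session per pane
keep = {max(g, key=session_score)["session_id"] for g in groups.values()}
print(keep)  # {'real'}
```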
@@ -1,15 +1,24 @@
 import os
 from http.server import ThreadingHTTPServer
 
-from amc_server.context import DATA_DIR, PORT
+from amc_server.auth import generate_auth_token
+from amc_server.config import DATA_DIR, PORT
+from amc_server.spawn_config import start_projects_watcher
 from amc_server.handler import AMCHandler
 from amc_server.logging_utils import LOGGER, configure_logging, install_signal_handlers
+from amc_server.mixins.spawn import load_projects_cache
 
 
 def main():
     configure_logging()
     DATA_DIR.mkdir(parents=True, exist_ok=True)
     LOGGER.info("Starting AMC server")
 
+    # Initialize spawn feature
+    load_projects_cache()
+    generate_auth_token()
+    start_projects_watcher()
+
     server = ThreadingHTTPServer(("127.0.0.1", PORT), AMCHandler)
     install_signal_handlers(server)
     LOGGER.info("AMC server listening on http://127.0.0.1:%s", PORT)
40 amc_server/spawn_config.py (Normal file)
@@ -0,0 +1,40 @@
"""Spawn feature config: paths, locks, rate limiting, projects watcher."""

import threading
from pathlib import Path

from amc_server.config import DATA_DIR

# Pending spawn registry
PENDING_SPAWNS_DIR = DATA_DIR / "pending_spawns"

# Pending spawn TTL: how long to keep unmatched spawn records (seconds)
PENDING_SPAWN_TTL = 60

# Projects directory for spawning agents
PROJECTS_DIR = Path.home() / 'projects'

# Lock for serializing spawn operations (prevents Zellij race conditions)
_spawn_lock = threading.Lock()

# Rate limiting: track last spawn time per project (prevents spam)
_spawn_timestamps: dict[str, float] = {}
SPAWN_COOLDOWN_SEC = 10.0


def start_projects_watcher():
    """Start background thread to refresh projects cache every 5 minutes."""
    import logging
    from amc_server.mixins.spawn import load_projects_cache

    def _watch_loop():
        import time
        while True:
            try:
                time.sleep(300)
                load_projects_cache()
            except Exception:
                logging.exception('Projects cache refresh failed')

    thread = threading.Thread(target=_watch_loop, daemon=True)
    thread.start()
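`_spawn_timestamps` and `SPAWN_COOLDOWN_SEC` above support a simple per-project cooldown. One way that check might look, as a sketch: `check_cooldown` is a hypothetical helper not in the diff, and a real caller would use `time.monotonic()` rather than an injected clock:

```python
_spawn_timestamps = {}   # project name -> last spawn time (monotonic seconds)
SPAWN_COOLDOWN_SEC = 10.0

def check_cooldown(project, now):
    """Return True if a spawn is allowed now, recording the timestamp if so."""
    last = _spawn_timestamps.get(project)
    if last is not None and now - last < SPAWN_COOLDOWN_SEC:
        return False  # still inside the cooldown window
    _spawn_timestamps[project] = now
    return True

a = check_cooldown("amc", now=0.0)
b = check_cooldown("amc", now=5.0)
c = check_cooldown("amc", now=12.0)
print(a, b, c)  # True False True
```

Keeping the timestamp map in this module (rather than on the handler) makes the cooldown apply across all request threads, which is why it sits next to `_spawn_lock`.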
34 amc_server/zellij.py (Normal file)
@@ -0,0 +1,34 @@
"""Zellij integration: binary resolution, plugin path, session name, cache."""

import shutil
from pathlib import Path

# Plugin path for zellij-send-keys
ZELLIJ_PLUGIN = Path.home() / ".config" / "zellij" / "plugins" / "zellij-send-keys.wasm"


def _resolve_zellij_bin():
    """Resolve zellij binary even when PATH is minimal (e.g. launchctl)."""
    from_path = shutil.which("zellij")
    if from_path:
        return from_path

    common_paths = (
        "/opt/homebrew/bin/zellij",  # Apple Silicon Homebrew
        "/usr/local/bin/zellij",     # Intel Homebrew
        "/usr/bin/zellij",
    )
    for candidate in common_paths:
        p = Path(candidate)
        if p.exists() and p.is_file():
            return str(p)
    return "zellij"  # Fallback for explicit error reporting by subprocess


ZELLIJ_BIN = _resolve_zellij_bin()

# Default Zellij session for spawning
ZELLIJ_SESSION = 'infra'

# Cache for Zellij session list (avoid calling zellij on every request)
_zellij_cache = {"sessions": None, "expires": 0}
31 bin/amc-hook
@@ -170,6 +170,8 @@ def main():
         existing["last_event"] = f"PreToolUse({tool_name})"
         existing["last_event_at"] = now
         existing["pending_questions"] = _extract_questions(hook)
+        # Track when turn paused for duration calculation
+        existing["turn_paused_at"] = now
         _atomic_write(session_file, existing)
         _append_event(session_id, {
             "event": f"PreToolUse({tool_name})",
@@ -189,6 +191,16 @@
         existing["last_event"] = f"PostToolUse({tool_name})"
         existing["last_event_at"] = now
         existing.pop("pending_questions", None)
+        # Accumulate paused time for turn duration calculation
+        paused_at = existing.pop("turn_paused_at", None)
+        if paused_at:
+            try:
+                paused_start = datetime.fromisoformat(paused_at.replace("Z", "+00:00"))
+                paused_end = datetime.fromisoformat(now.replace("Z", "+00:00"))
+                paused_ms = int((paused_end - paused_start).total_seconds() * 1000)
+                existing["turn_paused_ms"] = existing.get("turn_paused_ms", 0) + paused_ms
+            except (ValueError, TypeError):
+                pass
         _atomic_write(session_file, existing)
         _append_event(session_id, {
             "event": f"PostToolUse({tool_name})",
@@ -237,6 +249,25 @@
         "zellij_pane": os.environ.get("ZELLIJ_PANE_ID", ""),
     }
+
+    # Include spawn_id if present in environment (for spawn correlation)
+    spawn_id = os.environ.get("AMC_SPAWN_ID")
+    if spawn_id:
+        state["spawn_id"] = spawn_id
+
+    # Turn timing: track working time from user prompt to completion
+    if event == "UserPromptSubmit":
+        # New turn starting - reset turn timing
+        state["turn_started_at"] = now
+        state["turn_paused_ms"] = 0
+    else:
+        # Preserve turn timing from existing state
+        if "turn_started_at" in existing:
+            state["turn_started_at"] = existing["turn_started_at"]
+        if "turn_paused_ms" in existing:
+            state["turn_paused_ms"] = existing["turn_paused_ms"]
+        if "turn_paused_at" in existing:
+            state["turn_paused_at"] = existing["turn_paused_at"]
+
     # Store prose question if detected
     if prose_question:
         state["pending_questions"] = [{
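The `PostToolUse` hunk above accumulates paused time by subtracting two ISO-8601 timestamps. The core arithmetic, isolated (`paused_ms_between` is a hypothetical helper mirroring the inline code):

```python
from datetime import datetime

def paused_ms_between(paused_at, now):
    """Milliseconds between two ISO-8601 timestamps (Z-suffixed UTC)."""
    # fromisoformat doesn't accept a trailing 'Z' on older Pythons,
    # so normalize it to an explicit +00:00 offset first
    start = datetime.fromisoformat(paused_at.replace("Z", "+00:00"))
    end = datetime.fromisoformat(now.replace("Z", "+00:00"))
    return int((end - start).total_seconds() * 1000)

print(paused_ms_between("2026-02-26T22:00:00Z", "2026-02-26T22:00:02.500Z"))  # 2500
```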
29 bin/amc-server-restart (Executable file)
@@ -0,0 +1,29 @@
#!/usr/bin/env bash
# Restart the AMC server cleanly

set -e

PORT=7400

# Find and kill existing server
PID=$(lsof -ti :$PORT 2>/dev/null || true)
if [[ -n "$PID" ]]; then
  echo "Stopping AMC server (PID $PID)..."
  kill "$PID" 2>/dev/null || true
  sleep 1
fi

# Start server in background
echo "Starting AMC server on port $PORT..."
cd "$(dirname "$0")/.."
nohup python3 -m amc_server.server > /tmp/amc-server.log 2>&1 &

# Wait for startup
sleep 1
NEW_PID=$(lsof -ti :$PORT 2>/dev/null || true)
if [[ -n "$NEW_PID" ]]; then
  echo "AMC server running (PID $NEW_PID)"
else
  echo "Failed to start server. Check /tmp/amc-server.log"
  exit 1
fi
115 dashboard/components/AgentActivityIndicator.js (Normal file)
@@ -0,0 +1,115 @@
import { html, useState, useEffect, useRef } from '../lib/preact.js';

/**
 * Shows live agent activity: elapsed time since user prompt, token usage.
 * Visible when session is active/starting, pauses during needs_attention,
 * shows final duration when done.
 *
 * @param {object} session - Session object with turn_started_at, turn_paused_at, turn_paused_ms, status
 */
export function AgentActivityIndicator({ session }) {
  const [elapsed, setElapsed] = useState(0);
  const intervalRef = useRef(null);

  // Safely extract session fields (handles null/undefined session)
  const status = session?.status;
  const turn_started_at = session?.turn_started_at;
  const turn_paused_at = session?.turn_paused_at;
  const turn_paused_ms = session?.turn_paused_ms ?? 0;
  const last_event_at = session?.last_event_at;
  const context_usage = session?.context_usage;
  const turn_start_tokens = session?.turn_start_tokens;

  // Only show for sessions with turn timing
  const hasTurnTiming = !!turn_started_at;
  const isActive = status === 'active' || status === 'starting';
  const isPaused = status === 'needs_attention';
  const isDone = status === 'done';

  useEffect(() => {
    if (!hasTurnTiming) return;

    const calculate = () => {
      const startMs = new Date(turn_started_at).getTime();
      const pausedMs = turn_paused_ms || 0;

      if (isActive) {
        // Running: current time - start - paused
        return Date.now() - startMs - pausedMs;
      } else if (isPaused && turn_paused_at) {
        // Paused: frozen at pause time
        const pausedAtMs = new Date(turn_paused_at).getTime();
        return pausedAtMs - startMs - pausedMs;
      } else if (isDone && last_event_at) {
        // Done: final duration
        const endMs = new Date(last_event_at).getTime();
        return endMs - startMs - pausedMs;
      }
      return 0;
    };

    setElapsed(calculate());

    // Only tick while active
    if (isActive) {
      intervalRef.current = setInterval(() => {
        setElapsed(calculate());
      }, 1000);
    }

    return () => {
      if (intervalRef.current) {
        clearInterval(intervalRef.current);
        intervalRef.current = null;
      }
    };
  }, [hasTurnTiming, isActive, isPaused, isDone, turn_started_at, turn_paused_at, turn_paused_ms, last_event_at]);

  // Don't render if no turn timing or session is done with no activity
  if (!hasTurnTiming) return null;

  // Format elapsed time (clamp to 0 for safety)
  const formatElapsed = (ms) => {
    const totalSec = Math.max(0, Math.floor(ms / 1000));
    if (totalSec < 60) return `${totalSec}s`;
    const min = Math.floor(totalSec / 60);
    const sec = totalSec % 60;
    return `${min}m ${sec}s`;
  };

  // Format token count
  const formatTokens = (count) => {
    if (count == null) return null;
    if (count >= 1000) return `${(count / 1000).toFixed(1)}k`;
    return String(count);
  };

  // Calculate turn tokens (current - baseline from turn start)
  const currentTokens = context_usage?.current_tokens;
  const turnTokens = (currentTokens != null && turn_start_tokens != null)
    ? Math.max(0, currentTokens - turn_start_tokens)
    : null;
  const tokenDisplay = formatTokens(turnTokens);

  return html`
    <div class="flex items-center gap-2 font-mono text-label">
      ${isActive && html`
        <span class="activity-spinner"></span>
      `}
      ${isPaused && html`
        <span class="h-2 w-2 rounded-full bg-attention"></span>
      `}
      ${isDone && html`
        <span class="h-2 w-2 rounded-full bg-done"></span>
      `}
      <span class="text-dim">
        ${isActive ? 'Working' : isPaused ? 'Paused' : 'Completed'}
      </span>
      <span class="text-bright tabular-nums">${formatElapsed(elapsed)}</span>
      ${tokenDisplay && html`
        <span class="text-dim/70">·</span>
        <span class="text-dim/90">${tokenDisplay} tokens</span>
      `}
    </div>
  `;
}
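The component's `calculate()` has three branches: live elapsed time while active, frozen at the pause timestamp while waiting for input, and final duration when done, each minus accumulated pause time. The same arithmetic sketched in Python for easy checking (timestamps as plain milliseconds; the helper is illustrative, not part of the codebase):

```python
def elapsed_ms(status, now, started, paused_at=None, last_event=None, paused_ms=0):
    """Mirror of the component's three elapsed-time branches, in milliseconds."""
    if status in ("active", "starting"):
        return now - started - paused_ms            # live: ticks every second
    if status == "needs_attention" and paused_at is not None:
        return paused_at - started - paused_ms      # frozen at pause time
    if status == "done" and last_event is not None:
        return last_event - started - paused_ms     # final duration
    return 0

print(elapsed_ms("active", now=10_000, started=2_000, paused_ms=1_000))           # 7000
print(elapsed_ms("needs_attention", now=10_000, started=2_000, paused_at=6_000))  # 4000
print(elapsed_ms("done", now=10_000, started=2_000, last_event=9_000))            # 7000
```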
@@ -1,11 +1,12 @@
 import { html, useState, useEffect, useCallback, useMemo, useRef } from '../lib/preact.js';
-import { API_STATE, API_STREAM, API_DISMISS, API_RESPOND, API_CONVERSATION, POLL_MS, fetchWithTimeout } from '../utils/api.js';
+import { API_STATE, API_STREAM, API_DISMISS, API_DISMISS_DEAD, API_RESPOND, API_CONVERSATION, API_HEALTH, POLL_MS, fetchWithTimeout, fetchSkills } from '../utils/api.js';
 import { groupSessionsByProject } from '../utils/status.js';
 import { Sidebar } from './Sidebar.js';
 import { SessionCard } from './SessionCard.js';
 import { Modal } from './Modal.js';
 import { EmptyState } from './EmptyState.js';
-import { ToastContainer, trackError, clearErrorCount } from './Toast.js';
+import { ToastContainer, showToast, trackError, clearErrorCount } from './Toast.js';
+import { SpawnModal } from './SpawnModal.js';
 
 let optimisticMsgId = 0;
 
@@ -17,6 +18,12 @@ export function App() {
   const [error, setError] = useState(null);
   const [selectedProject, setSelectedProject] = useState(null);
   const [sseConnected, setSseConnected] = useState(false);
+  const [deadSessionsCollapsed, setDeadSessionsCollapsed] = useState(true);
+  const [spawnModalOpen, setSpawnModalOpen] = useState(false);
+  const [zellijAvailable, setZellijAvailable] = useState(true);
+  const [newlySpawnedIds, setNewlySpawnedIds] = useState(new Set());
+  const pendingSpawnIdsRef = useRef(new Set());
+  const [skillsConfig, setSkillsConfig] = useState({ claude: null, codex: null });
 
   // Background conversation refresh with error tracking
   const refreshConversationSilent = useCallback(async (sessionId, projectDir, agent = 'claude') => {
@@ -68,6 +75,34 @@ export function App() {
       }
     }
+
+    // Check for newly spawned sessions matching pending spawn IDs
+    if (pendingSpawnIdsRef.current.size > 0) {
+      const matched = new Set();
+      for (const session of newSessions) {
+        if (session.spawn_id && pendingSpawnIdsRef.current.has(session.spawn_id)) {
+          matched.add(session.session_id);
+          pendingSpawnIdsRef.current.delete(session.spawn_id);
+        }
+      }
+      if (matched.size > 0) {
+        setNewlySpawnedIds(prev => {
+          const next = new Set(prev);
+          for (const id of matched) next.add(id);
+          return next;
+        });
+        // Auto-clear highlight after animation duration (2.5s)
+        for (const id of matched) {
+          setTimeout(() => {
+            setNewlySpawnedIds(prev => {
+              const next = new Set(prev);
+              next.delete(id);
+              return next;
+            });
+          }, 2500);
+        }
+      }
+    }
 
     // Clean up conversation cache for sessions that no longer exist
     setConversations(prev => {
       const activeIds = Object.keys(prev).filter(id => newSessionIds.has(id));
@@ -209,6 +244,21 @@ export function App() {
     }
   }, [fetchState]);
 
+  // Dismiss all dead sessions
+  const dismissDeadSessions = useCallback(async () => {
+    try {
+      const res = await fetch(API_DISMISS_DEAD, { method: 'POST' });
+      const data = await res.json();
+      if (data.ok) {
+        fetchState();
+      } else {
+        trackError('dismiss-dead', `Failed to clear completed sessions: ${data.error || 'Unknown error'}`);
+      }
+    } catch (err) {
+      trackError('dismiss-dead', `Error clearing completed sessions: ${err.message}`);
+    }
+  }, [fetchState]);
+
   // Subscribe to live state updates via SSE
   useEffect(() => {
     let eventSource = null;
@@ -285,6 +335,37 @@ export function App() {
     return () => clearInterval(interval);
   }, [fetchState, sseConnected]);
 
+  // Poll Zellij health status
+  useEffect(() => {
+    const checkHealth = async () => {
+      try {
+        const response = await fetchWithTimeout(API_HEALTH);
+        if (response.ok) {
+          const data = await response.json();
+          setZellijAvailable(data.zellij_available);
+        }
+      } catch {
+        // Server unreachable - handled by state fetch error
+      }
+    };
+
+    checkHealth();
+    const interval = setInterval(checkHealth, 30000);
+    return () => clearInterval(interval);
+  }, []);
+
+  // Fetch skills for autocomplete on mount
+  useEffect(() => {
+    const loadSkills = async () => {
+      const [claude, codex] = await Promise.all([
+        fetchSkills('claude'),
+        fetchSkills('codex')
+      ]);
+      setSkillsConfig({ claude, codex });
+    };
+    loadSkills();
+  }, []);
+
   // Group sessions by project
   const projectGroups = groupSessionsByProject(sessions);
 
@@ -296,6 +377,22 @@ export function App() {
     return projectGroups.filter(g => g.projectDir === selectedProject);
   }, [projectGroups, selectedProject]);
 
+  // Split sessions into active and dead
+  const { activeSessions, deadSessions } = useMemo(() => {
+    const active = [];
+    const dead = [];
+    for (const group of filteredGroups) {
+      for (const session of group.sessions) {
+        if (session.is_dead) {
+          dead.push(session);
+        } else {
+          active.push(session);
+        }
+      }
+    }
+    return { activeSessions: active, deadSessions: dead };
+  }, [filteredGroups]);
+
   // Handle card click - open modal and fetch conversation if not cached
   const handleCardClick = useCallback(async (session) => {
     modalSessionRef.current = session.session_id;
@@ -316,6 +413,17 @@ export function App() {
     setSelectedProject(projectDir);
   }, []);
 
+  const handleSpawnResult = useCallback((result) => {
+    if (result.success) {
+      showToast(`${result.agentType} agent spawned for ${result.project}`, 'success');
+      if (result.spawnId) {
+        pendingSpawnIdsRef.current.add(result.spawnId);
+      }
+    } else if (result.error) {
+      showToast(result.error, 'error');
+    }
+  }, []);
+
   return html`
     <!-- Sidebar -->
     <${Sidebar}
@@ -376,9 +484,31 @@ export function App() {
             `;
           })()}
         </div>
+        <div class="relative">
+          <button
+            disabled=${!zellijAvailable}
+            class="rounded-lg border border-active/40 bg-active/12 px-3 py-2 text-sm font-medium text-active transition-colors hover:bg-active/20 ${!zellijAvailable ? 'opacity-50 cursor-not-allowed' : ''}"
+            onClick=${() => setSpawnModalOpen(true)}
+          >
+            + New Agent
+          </button>
+          <${SpawnModal}
+            isOpen=${spawnModalOpen}
+            onClose=${() => setSpawnModalOpen(false)}
+            onSpawn=${handleSpawnResult}
+            currentProject=${selectedProject}
+          />
+        </div>
       </div>
     </header>
 
+    ${!zellijAvailable && html`
+      <div class="border-b border-attention/50 bg-attention/10 px-6 py-2 text-sm text-attention">
+        <span class="font-medium">Zellij session not found.</span>
+        ${' '}Agent spawning is unavailable. Start Zellij with: <code class="rounded bg-attention/15 px-1.5 py-0.5 font-mono text-micro">zellij attach infra</code>
+      </div>
+    `}
+
     <main class="px-6 pb-6 pt-6">
       ${loading ? html`
         <div class="glass-panel animate-fade-in-up flex items-center justify-center rounded-2xl py-24">
@@ -394,10 +524,10 @@ export function App() {
       ` : filteredGroups.length === 0 ? html`
         <${EmptyState} />
       ` : html`
-        <!-- Sessions Grid (no project grouping header since sidebar shows selection) -->
+        <!-- Active Sessions Grid -->
+        ${activeSessions.length > 0 ? html`
         <div class="flex flex-wrap gap-4">
-          ${filteredGroups.flatMap(group =>
-            group.sessions.map(session => html`
+          ${activeSessions.map(session => html`
             <${SessionCard}
               key=${session.session_id}
               session=${session}
@@ -406,10 +536,68 @@ export function App() {
               onFetchConversation=${fetchConversation}
               onRespond=${respondToSession}
               onDismiss=${dismissSession}
+              isNewlySpawned=${newlySpawnedIds.has(session.session_id)}
+              autocompleteConfig=${skillsConfig[session.agent === 'codex' ? 'codex' : 'claude']}
             />
-          `)
-        )}
+          `)}
         </div>
+        ` : deadSessions.length > 0 ? html`
+          <div class="glass-panel flex items-center justify-center rounded-xl py-12 mb-6">
+            <div class="text-center">
+              <p class="font-display text-lg text-dim">No active sessions</p>
+              <p class="mt-1 font-mono text-micro text-dim/70">All sessions have completed</p>
+            </div>
+          </div>
+        ` : ''}
+
+        <!-- Completed Sessions (Dead) - Collapsible -->
+        ${deadSessions.length > 0 && html`
+          <div class="mt-8">
+            <button
+              onClick=${() => setDeadSessionsCollapsed(!deadSessionsCollapsed)}
+              class="group flex w-full items-center gap-3 rounded-lg border border-selection/50 bg-surface/50 px-4 py-3 text-left transition-colors hover:border-selection hover:bg-surface/80"
+            >
+              <svg
+                class="h-4 w-4 text-dim transition-transform ${deadSessionsCollapsed ? '' : 'rotate-90'}"
+                fill="none"
+                stroke="currentColor"
+                viewBox="0 0 24 24"
+              >
+                <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 5l7 7-7 7"/>
+              </svg>
+              <span class="font-display text-sm font-medium text-dim">
+                Completed Sessions
+              </span>
+              <span class="rounded-full bg-done/15 px-2 py-0.5 font-mono text-micro tabular-nums text-done/70">
+                ${deadSessions.length}
+              </span>
+              <div class="flex-1"></div>
+              <button
+                onClick=${(e) => { e.stopPropagation(); dismissDeadSessions(); }}
+                class="rounded-lg border border-selection/80 bg-bg/40 px-3 py-1.5 font-mono text-micro text-dim transition-colors hover:border-done/40 hover:bg-done/10 hover:text-bright"
+              >
+                Clear All
+              </button>
+            </button>
+
+            ${!deadSessionsCollapsed && html`
+              <div class="mt-4 flex flex-wrap gap-4">
+                ${deadSessions.map(session => html`
+                  <${SessionCard}
+                    key=${session.session_id}
+                    session=${session}
+                    onClick=${handleCardClick}
+                    conversation=${conversations[session.session_id]}
+                    onFetchConversation=${fetchConversation}
+                    onRespond=${respondToSession}
+                    onDismiss=${dismissSession}
+                    autocompleteConfig=${skillsConfig[session.agent === 'codex' ? 'codex' : 'claude']}
+                  />
+                `)}
+              </div>
+            `}
+          </div>
+        `}
       `}
     </main>
   </div>
@@ -1,58 +0,0 @@
-import { html, useState, useEffect } from '../lib/preact.js';
-
-export function Header({ sessions }) {
-  const [clock, setClock] = useState(() => new Date());
-
-  useEffect(() => {
-    const timer = setInterval(() => setClock(new Date()), 30000);
-    return () => clearInterval(timer);
-  }, []);
-
-  const counts = {
-    attention: sessions.filter(s => s.status === 'needs_attention').length,
-    active: sessions.filter(s => s.status === 'active').length,
-    starting: sessions.filter(s => s.status === 'starting').length,
-    done: sessions.filter(s => s.status === 'done').length,
-  };
-  const total = sessions.length;
-
-  return html`
-    <header class="sticky top-0 z-50 px-4 pt-4 sm:px-6 sm:pt-6">
-      <div class="glass-panel rounded-2xl px-4 py-4 sm:px-6">
-        <div class="flex flex-col gap-4 lg:flex-row lg:items-center lg:justify-between">
-          <div class="min-w-0">
-            <div class="inline-flex items-center gap-2 rounded-full border border-starting/40 bg-starting/10 px-3 py-1 text-micro font-medium uppercase tracking-[0.24em] text-starting">
-              <span class="h-1.5 w-1.5 rounded-full bg-starting animate-float"></span>
-              Control Plane
-            </div>
-            <h1 class="mt-3 truncate font-display text-xl font-semibold text-bright sm:text-2xl">
-              Agent Mission Control
-            </h1>
-            <p class="mt-1 text-sm text-dim">
-              ${total} live session${total === 1 ? '' : 's'} • Updated ${clock.toLocaleTimeString('en-US', { hour: 'numeric', minute: '2-digit' })}
-            </p>
-          </div>
-
-          <div class="grid grid-cols-2 gap-2 text-xs sm:grid-cols-4 sm:gap-3">
-            <div class="rounded-xl border border-attention/40 bg-attention/12 px-3 py-2 text-attention">
-              <div class="font-mono text-lg font-medium tabular-nums text-bright">${counts.attention}</div>
-              <div class="text-micro uppercase tracking-[0.16em]">Attention</div>
-            </div>
-            <div class="rounded-xl border border-active/40 bg-active/12 px-3 py-2 text-active">
-              <div class="font-mono text-lg font-medium tabular-nums text-bright">${counts.active}</div>
-              <div class="text-micro uppercase tracking-[0.16em]">Active</div>
-            </div>
-            <div class="rounded-xl border border-starting/40 bg-starting/12 px-3 py-2 text-starting">
-              <div class="font-mono text-lg font-medium tabular-nums text-bright">${counts.starting}</div>
-              <div class="text-micro uppercase tracking-[0.16em]">Starting</div>
-            </div>
-            <div class="rounded-xl border border-done/40 bg-done/12 px-3 py-2 text-done">
-              <div class="font-mono text-lg font-medium tabular-nums text-bright">${counts.done}</div>
-              <div class="text-micro uppercase tracking-[0.16em]">Done</div>
-            </div>
-          </div>
-        </div>
-      </div>
-    </header>
-  `;
-}
@@ -1,14 +1,31 @@
 import { html, useState, useEffect, useCallback } from '../lib/preact.js';
 import { SessionCard } from './SessionCard.js';
+import { fetchSkills } from '../utils/api.js';
 
 export function Modal({ session, conversations, onClose, onRespond, onFetchConversation, onDismiss }) {
   const [closing, setClosing] = useState(false);
+  const [autocompleteConfig, setAutocompleteConfig] = useState(null);
 
   // Reset closing state when session changes
   useEffect(() => {
     setClosing(false);
   }, [session?.session_id]);
 
+  // Load autocomplete skills when agent type changes
+  useEffect(() => {
+    if (!session) {
+      setAutocompleteConfig(null);
+      return;
+    }
+
+    let stale = false;
+    const agent = session.agent || 'claude';
+    fetchSkills(agent)
+      .then(config => { if (!stale) setAutocompleteConfig(config); })
+      .catch(() => { if (!stale) setAutocompleteConfig(null); });
+    return () => { stale = true; };
+  }, [session?.agent]);
+
   // Animated close handler
   const handleClose = useCallback(() => {
     setClosing(true);
@@ -54,6 +71,7 @@ export function Modal({ session, conversations, onClose, onRespond, onFetchConve
           onRespond=${onRespond}
           onDismiss=${onDismiss}
           enlarged=${true}
+          autocompleteConfig=${autocompleteConfig}
         />
       </div>
     </div>
@@ -4,8 +4,9 @@ import { formatDuration, getContextUsageSummary } from '../utils/formatting.js';
 import { ChatMessages } from './ChatMessages.js';
 import { QuestionBlock } from './QuestionBlock.js';
 import { SimpleInput } from './SimpleInput.js';
+import { AgentActivityIndicator } from './AgentActivityIndicator.js';
 
-export function SessionCard({ session, onClick, conversation, onFetchConversation, onRespond, onDismiss, enlarged = false }) {
+export function SessionCard({ session, onClick, conversation, onFetchConversation, onRespond, onDismiss, enlarged = false, autocompleteConfig = null, isNewlySpawned = false }) {
   const hasQuestions = session.pending_questions && session.pending_questions.length > 0;
   const statusMeta = getStatusMeta(session.status);
   const agent = session.agent === 'codex' ? 'codex' : 'claude';
@@ -83,9 +84,10 @@ export function SessionCard({ session, onClick, conversation, onFetchConversatio
   };
 
   // Container classes differ based on enlarged mode
+  const spawnClass = isNewlySpawned ? ' session-card-spawned' : '';
   const containerClasses = enlarged
     ? 'glass-panel flex w-full max-w-[90vw] max-h-[90vh] flex-col overflow-hidden rounded-2xl border border-selection/80'
-    : 'glass-panel flex h-[850px] max-h-[850px] w-[600px] cursor-pointer flex-col overflow-hidden rounded-xl border border-selection/70 transition-[border-color,box-shadow] duration-200 hover:border-starting/35 hover:shadow-panel';
+    : 'glass-panel flex h-[850px] max-h-[850px] w-[600px] cursor-pointer flex-col overflow-hidden rounded-xl border border-selection/70 transition-[border-color,box-shadow] duration-200 hover:border-starting/35 hover:shadow-panel' + spawnClass;
 
   return html`
     <div
@@ -108,19 +110,12 @@ export function SessionCard({ session, onClick, conversation, onFetchConversatio
           <span class="rounded-full border px-2.5 py-1 font-mono text-micro uppercase tracking-[0.14em] ${agent === 'codex' ? 'border-emerald-400/45 bg-emerald-500/14 text-emerald-300' : 'border-violet-400/45 bg-violet-500/14 text-violet-300'}">
            ${agent}
          </span>
-          ${session.cwd && html`
+          ${session.project_dir && html`
            <span class="truncate rounded-full border border-selection bg-bg/40 px-2.5 py-1 font-mono text-micro text-dim">
-              ${session.cwd.split('/').slice(-2).join('/')}
+              ${session.project_dir.split('/').slice(-2).join('/')}
            </span>
          `}
        </div>
-        ${contextUsage && html`
-          <div class="mt-2 inline-flex max-w-full items-center gap-2 rounded-lg border border-selection/80 bg-bg/45 px-2.5 py-1.5 font-mono text-label text-dim" title=${contextUsage.title}>
-            <span class="text-bright">${contextUsage.headline}</span>
-            <span class="truncate">${contextUsage.detail}</span>
-            ${contextUsage.trail && html`<span class="text-dim/80">${contextUsage.trail}</span>`}
-          </div>
-        `}
      </div>
      <div class="flex items-center gap-3 shrink-0 pt-0.5">
        <span class="font-mono text-xs tabular-nums text-dim">${formatDuration(session.started_at)}</span>
@@ -144,8 +139,21 @@ export function SessionCard({ session, onClick, conversation, onFetchConversatio
        <${ChatMessages} messages=${conversation || []} status=${session.status} limit=${enlarged ? null : 20} />
      </div>
 
-      <!-- Card Footer (Input or Questions) -->
-      <div class="shrink-0 border-t border-selection/70 bg-bg/55 p-4">
+      <!-- Card Footer (Status + Input/Questions) -->
+      <div class="shrink-0 border-t border-selection/70 bg-bg/55">
+        <!-- Session Status Area -->
+        <div class="flex items-center justify-between gap-3 px-4 py-2 border-b border-selection/50 bg-bg/60">
+          <${AgentActivityIndicator} session=${session} />
+          ${contextUsage && html`
+            <div class="flex items-center gap-2 rounded-lg border border-selection/80 bg-bg/45 px-2.5 py-1.5 font-mono text-label text-dim" title=${contextUsage.title}>
+              <span class="text-bright">${contextUsage.headline}</span>
+              <span class="truncate">${contextUsage.detail}</span>
+              ${contextUsage.trail && html`<span class="text-dim/80">${contextUsage.trail}</span>`}
+            </div>
+          `}
+        </div>
+        <!-- Actions Area -->
+        <div class="p-4">
        ${hasQuestions ? html`
          <${QuestionBlock}
            questions=${session.pending_questions}
@@ -158,9 +166,11 @@ export function SessionCard({ session, onClick, conversation, onFetchConversatio
            sessionId=${session.session_id}
            status=${session.status}
            onRespond=${onRespond}
+            autocompleteConfig=${autocompleteConfig}
          />
        `}
        </div>
      </div>
+      </div>
    `;
 }
@@ -1,56 +0,0 @@
-import { html } from '../lib/preact.js';
-import { getStatusMeta, STATUS_PRIORITY } from '../utils/status.js';
-import { SessionCard } from './SessionCard.js';
-
-export function SessionGroup({ projectName, projectDir, sessions, onCardClick, conversations, onFetchConversation, onRespond, onDismiss }) {
-  if (sessions.length === 0) return null;
-
-  // Status summary for chips
-  const statusCounts = {};
-  for (const s of sessions) {
-    statusCounts[s.status] = (statusCounts[s.status] || 0) + 1;
-  }
-
-  // Group header dot uses the most urgent status
-  const worstStatus = sessions.reduce((worst, s) => {
-    return (STATUS_PRIORITY[s.status] ?? 99) < (STATUS_PRIORITY[worst] ?? 99) ? s.status : worst;
-  }, 'done');
-  const worstMeta = getStatusMeta(worstStatus);
-
-  return html`
-    <section class="mb-12">
-      <div class="mb-4 flex flex-wrap items-center gap-2.5 border-b border-selection/50 pb-3">
-        <span class="h-2.5 w-2.5 rounded-full ${worstMeta.dot}"></span>
-        <h2 class="font-display text-body font-semibold text-bright">${projectName}</h2>
-        <span class="rounded-full border border-selection/80 bg-bg/55 px-2 py-0.5 font-mono text-micro text-dim">
-          ${sessions.length} agent${sessions.length === 1 ? '' : 's'}
-        </span>
-        ${Object.entries(statusCounts).map(([status, count]) => {
-          const meta = getStatusMeta(status);
-          return html`
-            <span key=${status} class="rounded-full border px-2 py-0.5 font-mono text-micro ${meta.badge}">
-              ${count} ${meta.label.toLowerCase()}
-            </span>
-          `;
-        })}
-      </div>
-      ${projectDir && projectDir !== 'unknown' && html`
-        <div class="-mt-2 mb-3 truncate font-mono text-micro text-dim/60">${projectDir}</div>
-      `}
-
-      <div class="flex flex-wrap gap-4">
-        ${sessions.map(session => html`
-          <${SessionCard}
-            key=${session.session_id}
-            session=${session}
-            onClick=${onCardClick}
-            conversation=${conversations[session.session_id]}
-            onFetchConversation=${onFetchConversation}
-            onRespond=${onRespond}
-            onDismiss=${onDismiss}
-          />
-        `)}
-      </div>
-    </section>
-  `;
-}
@@ -1,14 +1,96 @@
|
|||||||
import { html, useState, useRef } from '../lib/preact.js';
|
import { html, useState, useRef, useCallback, useMemo, useEffect } from '../lib/preact.js';
|
||||||
import { getStatusMeta } from '../utils/status.js';
|
import { getStatusMeta } from '../utils/status.js';
|
||||||
|
import { getTriggerInfo as _getTriggerInfo, filteredSkills as _filteredSkills } from '../utils/autocomplete.js';
|
||||||
|
|
||||||
export function SimpleInput({ sessionId, status, onRespond }) {
|
export function SimpleInput({ sessionId, status, onRespond, autocompleteConfig = null }) {
|
||||||
const [text, setText] = useState('');
|
const [text, setText] = useState('');
|
||||||
const [focused, setFocused] = useState(false);
|
const [focused, setFocused] = useState(false);
|
||||||
const [sending, setSending] = useState(false);
|
const [sending, setSending] = useState(false);
|
||||||
const [error, setError] = useState(null);
|
const [error, setError] = useState(null);
|
||||||
|
const [triggerInfo, setTriggerInfo] = useState(null);
|
||||||
|
const [showAutocomplete, setShowAutocomplete] = useState(false);
|
||||||
|
const [selectedIndex, setSelectedIndex] = useState(0);
|
||||||
const textareaRef = useRef(null);
|
const textareaRef = useRef(null);
|
||||||
|
const autocompleteRef = useRef(null);
|
||||||
const meta = getStatusMeta(status);
|
const meta = getStatusMeta(status);
|
||||||
|
|
||||||
|
const getTriggerInfo = useCallback((value, cursorPos) => {
|
||||||
|
return _getTriggerInfo(value, cursorPos, autocompleteConfig);
|
||||||
|
}, [autocompleteConfig]);
|
||||||
|
|
||||||
|
const filteredSkills = useMemo(() => {
|
||||||
|
return _filteredSkills(autocompleteConfig, triggerInfo);
|
||||||
|
}, [autocompleteConfig, triggerInfo]);
|
||||||
|
|
||||||
|
// Show/hide autocomplete based on trigger detection
|
||||||
|
useEffect(() => {
|
||||||
|
const shouldShow = triggerInfo !== null;
|
||||||
|
setShowAutocomplete(shouldShow);
|
||||||
|
// Reset selection when dropdown opens
|
||||||
|
if (shouldShow) {
|
||||||
|
setSelectedIndex(0);
|
||||||
|
}
|
||||||
|
}, [triggerInfo]);
|
||||||
|
|
||||||
|
// Clamp selectedIndex when filtered list changes
|
||||||
|
useEffect(() => {
|
||||||
|
if (filteredSkills.length > 0 && selectedIndex >= filteredSkills.length) {
|
||||||
|
setSelectedIndex(filteredSkills.length - 1);
|
||||||
|
}
|
||||||
|
}, [filteredSkills.length, selectedIndex]);
|
||||||
|
|
||||||
|
// Click outside dismisses dropdown
|
||||||
|
useEffect(() => {
|
||||||
|
if (!showAutocomplete) return;
|
||||||
|
|
||||||
|
const handleClickOutside = (e) => {
|
||||||
|
if (autocompleteRef.current && !autocompleteRef.current.contains(e.target) &&
|
||||||
|
+        textareaRef.current && !textareaRef.current.contains(e.target)) {
+        setShowAutocomplete(false);
+      }
+    };
+
+    document.addEventListener('mousedown', handleClickOutside);
+    return () => document.removeEventListener('mousedown', handleClickOutside);
+  }, [showAutocomplete]);
+
+  // Scroll selected item into view when navigating with arrow keys
+  useEffect(() => {
+    if (showAutocomplete && autocompleteRef.current) {
+      const container = autocompleteRef.current;
+      const selectedEl = container.children[selectedIndex];
+      if (selectedEl) {
+        selectedEl.scrollIntoView({ block: 'nearest' });
+      }
+    }
+  }, [selectedIndex, showAutocomplete]);
+
+  // Insert a selected skill into the text
+  const insertSkill = useCallback((skill) => {
+    if (!triggerInfo || !autocompleteConfig) return;
+
+    const { trigger } = autocompleteConfig;
+    const { replaceStart, replaceEnd } = triggerInfo;
+
+    const before = text.slice(0, replaceStart);
+    const after = text.slice(replaceEnd);
+    const inserted = `${trigger}${skill.name} `;
+
+    setText(before + inserted + after);
+    setShowAutocomplete(false);
+    setTriggerInfo(null);
+
+    // Move cursor after inserted text
+    const newCursorPos = replaceStart + inserted.length;
+    setTimeout(() => {
+      if (textareaRef.current) {
+        textareaRef.current.selectionStart = newCursorPos;
+        textareaRef.current.selectionEnd = newCursorPos;
+        textareaRef.current.focus();
+      }
+    }, 0);
+  }, [text, triggerInfo, autocompleteConfig]);
+
   const handleSubmit = async (e) => {
     e.preventDefault();
     e.stopPropagation();
@@ -40,15 +122,54 @@ export function SimpleInput({ sessionId, status, onRespond }) {
           </div>
         `}
         <div class="flex items-end gap-2.5">
+          <div class="relative flex-1">
           <textarea
             ref=${textareaRef}
             value=${text}
             onInput=${(e) => {
-              setText(e.target.value);
+              const value = e.target.value;
+              const cursorPos = e.target.selectionStart;
+              setText(value);
+              setTriggerInfo(getTriggerInfo(value, cursorPos));
               e.target.style.height = 'auto';
               e.target.style.height = e.target.scrollHeight + 'px';
             }}
             onKeyDown=${(e) => {
+              if (showAutocomplete) {
+                // Escape dismisses dropdown
+                if (e.key === 'Escape') {
+                  e.preventDefault();
+                  setShowAutocomplete(false);
+                  return;
+                }
+
+                // Enter/Tab: select if matches exist, otherwise dismiss
+                if (e.key === 'Enter' || e.key === 'Tab') {
+                  e.preventDefault();
+                  if (filteredSkills.length > 0 && filteredSkills[selectedIndex]) {
+                    insertSkill(filteredSkills[selectedIndex]);
+                  } else {
+                    setShowAutocomplete(false);
+                  }
+                  return;
+                }
+
+                // Arrow navigation
+                if (filteredSkills.length > 0) {
+                  if (e.key === 'ArrowDown') {
+                    e.preventDefault();
+                    setSelectedIndex(i => Math.min(i + 1, filteredSkills.length - 1));
+                    return;
+                  }
+                  if (e.key === 'ArrowUp') {
+                    e.preventDefault();
+                    setSelectedIndex(i => Math.max(i - 1, 0));
+                    return;
+                  }
+                }
+              }
+
+              // Normal Enter-to-submit (only when dropdown is closed)
               if (e.key === 'Enter' && !e.shiftKey) {
                 e.preventDefault();
                 handleSubmit(e);
@@ -58,10 +179,41 @@ export function SimpleInput({ sessionId, status, onRespond }) {
             onBlur=${() => setFocused(false)}
             placeholder="Send a message..."
             rows="1"
-            class="flex-1 resize-none overflow-hidden rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-fg transition-colors duration-150 placeholder:text-dim focus:outline-none"
+            class="w-full resize-none overflow-hidden rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-fg transition-colors duration-150 placeholder:text-dim focus:outline-none"
             style=${{ minHeight: '38px', maxHeight: '150px', borderColor: focused ? meta.borderColor : undefined }}
             disabled=${sending}
           />
+          ${showAutocomplete && html`
+            <div
+              ref=${autocompleteRef}
+              class="absolute left-0 bottom-full mb-1 w-full max-h-48 overflow-y-auto rounded-lg border border-selection/75 bg-surface shadow-lg z-50"
+            >
+              ${autocompleteConfig.skills.length === 0 ? html`
+                <div class="px-3 py-2 text-sm text-dim">No skills available</div>
+              ` : filteredSkills.length === 0 ? html`
+                <div class="px-3 py-2 text-sm text-dim">No matching skills</div>
+              ` : filteredSkills.map((skill, i) => html`
+                <div
+                  key=${skill.name}
+                  class="group relative px-3 py-1.5 cursor-pointer text-sm font-mono transition-colors ${
+                    i === selectedIndex
+                      ? 'bg-selection/50 text-bright'
+                      : 'text-fg hover:bg-selection/25'
+                  }"
+                  onClick=${() => insertSkill(skill)}
+                  onMouseEnter=${() => setSelectedIndex(i)}
+                >
+                  ${autocompleteConfig.trigger}${skill.name}
+                  ${i === selectedIndex && skill.description && html`
+                    <div class="absolute left-full top-0 ml-2 w-64 px-2.5 py-1.5 rounded-md border border-selection/75 bg-surface shadow-lg text-micro text-dim font-sans whitespace-normal z-50">
+                      ${skill.description}
+                    </div>
+                  `}
+                </div>
+              `)}
+            </div>
+          `}
+        </div>
         <button
           type="submit"
           class="shrink-0 rounded-xl px-3 py-2 text-sm font-medium transition-[transform,filter] duration-150 hover:-translate-y-0.5 hover:brightness-110 disabled:cursor-not-allowed disabled:opacity-50"
241 dashboard/components/SpawnModal.js Normal file
@@ -0,0 +1,241 @@
import { html, useState, useEffect, useCallback, useRef } from '../lib/preact.js';
import { API_PROJECTS, API_SPAWN, fetchWithTimeout, API_TIMEOUT_MS } from '../utils/api.js';

// Spawn needs longer timeout: pending spawn registry requires discovery cycle to run,
// plus server polls for session file confirmation
const SPAWN_TIMEOUT_MS = API_TIMEOUT_MS * 2;

export function SpawnModal({ isOpen, onClose, onSpawn, currentProject }) {
  const [projects, setProjects] = useState([]);
  const [selectedProject, setSelectedProject] = useState('');
  const [agentType, setAgentType] = useState('claude');
  const [loading, setLoading] = useState(false);
  const [loadingProjects, setLoadingProjects] = useState(false);
  const [closing, setClosing] = useState(false);
  const [error, setError] = useState(null);

  const needsProjectPicker = !currentProject;

  const dropdownRef = useCallback((node) => {
    if (node) dropdownNodeRef.current = node;
  }, []);
  const dropdownNodeRef = useRef(null);

  // Click outside dismisses dropdown
  useEffect(() => {
    if (!isOpen) return;
    const handleClickOutside = (e) => {
      if (dropdownNodeRef.current && !dropdownNodeRef.current.contains(e.target)) {
        handleClose();
      }
    };
    document.addEventListener('mousedown', handleClickOutside);
    return () => document.removeEventListener('mousedown', handleClickOutside);
  }, [isOpen]);

  // Reset state on open
  useEffect(() => {
    if (isOpen) {
      setAgentType('claude');
      setError(null);
      setLoading(false);
      setClosing(false);
    }
  }, [isOpen]);

  // Fetch projects when needed
  useEffect(() => {
    if (isOpen && needsProjectPicker) {
      setLoadingProjects(true);
      fetchWithTimeout(API_PROJECTS)
        .then(r => r.json())
        .then(data => {
          setProjects(data.projects || []);
          setSelectedProject('');
        })
        .catch(err => setError(err.message))
        .finally(() => setLoadingProjects(false));
    }
  }, [isOpen, needsProjectPicker]);

  // Animated close handler
  const handleClose = useCallback(() => {
    if (loading) return;
    setClosing(true);
    setTimeout(() => {
      setClosing(false);
      onClose();
    }, 200);
  }, [loading, onClose]);

  // Handle escape key
  useEffect(() => {
    if (!isOpen) return;
    const handleKeyDown = (e) => {
      if (e.key === 'Escape') handleClose();
    };
    document.addEventListener('keydown', handleKeyDown);
    return () => document.removeEventListener('keydown', handleKeyDown);
  }, [isOpen, handleClose]);

  const handleSpawn = async () => {
    const rawProject = currentProject || selectedProject;
    if (!rawProject) {
      setError('Please select a project');
      return;
    }

    // Extract project name from full path (sidebar passes projectDir like "/Users/.../projects/amc")
    const project = rawProject.includes('/') ? rawProject.split('/').pop() : rawProject;

    setLoading(true);
    setError(null);

    try {
      const response = await fetchWithTimeout(API_SPAWN, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${window.AMC_AUTH_TOKEN}`,
        },
        body: JSON.stringify({ project, agent_type: agentType }),
      }, SPAWN_TIMEOUT_MS);
      const data = await response.json();

      if (data.ok) {
        onSpawn({ success: true, project, agentType, spawnId: data.spawn_id });
        handleClose();
      } else {
        setError(data.error || 'Spawn failed');
        onSpawn({ error: data.error });
      }
    } catch (err) {
      const msg = err.name === 'AbortError' ? 'Request timed out' : err.message;
      setError(msg);
      onSpawn({ error: msg });
    } finally {
      setLoading(false);
    }
  };

  if (!isOpen) return null;

  const canSpawn = !loading && (currentProject || selectedProject);

  return html`
    <div
      ref=${dropdownRef}
      class="absolute right-0 top-full mt-2 z-50 glass-panel w-80 rounded-xl border border-selection/70 shadow-lg ${closing ? 'modal-panel-out' : 'modal-panel-in'}"
      onClick=${(e) => e.stopPropagation()}
    >
      <!-- Header -->
      <div class="flex items-center justify-between border-b border-selection/70 px-4 py-3">
        <h2 class="font-display text-sm font-semibold text-bright">Spawn Agent</h2>
        <button
          onClick=${handleClose}
          disabled=${loading}
          class="flex h-6 w-6 items-center justify-center rounded-lg border border-selection/80 text-dim transition-colors hover:border-done/40 hover:bg-done/10 hover:text-bright disabled:cursor-not-allowed disabled:opacity-50"
        >
          <svg class="h-3.5 w-3.5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
            <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12" />
          </svg>
        </button>
      </div>

      <!-- Body -->
      <div class="flex flex-col gap-3 px-4 py-3">

        ${needsProjectPicker && html`
          <div class="flex flex-col gap-1.5">
            <label class="text-label font-medium text-dim">Project</label>
            ${loadingProjects ? html`
              <div class="flex items-center gap-2 rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-dim">
                <span class="working-dots"><span>.</span><span>.</span><span>.</span></span>
                Loading projects
              </div>
            ` : projects.length === 0 ? html`
              <div class="rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-dim">
                No projects found in ~/projects/
              </div>
            ` : html`
              <select
                value=${selectedProject}
                onChange=${(e) => { setSelectedProject(e.target.value); setError(null); }}
                class="w-full rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-fg transition-colors duration-150 focus:border-starting/60 focus:outline-none"
              >
                <option value="" disabled>Select a project...</option>
                ${projects.map(p => html`
                  <option key=${p} value=${p}>${p}</option>
                `)}
              </select>
            `}
          </div>
        `}

        ${currentProject && html`
          <div class="flex flex-col gap-1.5">
            <label class="text-label font-medium text-dim">Project</label>
            <div class="rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-bright">
              ${currentProject.includes('/') ? currentProject.split('/').pop() : currentProject}
            </div>
          </div>
        `}

        <!-- Agent type -->
        <div class="flex flex-col gap-1.5">
          <label class="text-label font-medium text-dim">Agent Type</label>
          <div class="flex gap-2">
            <button
              onClick=${() => setAgentType('claude')}
              class="flex-1 rounded-xl border px-3 py-2 text-sm font-medium transition-colors duration-150 ${
                agentType === 'claude'
                  ? 'border-violet-400/45 bg-violet-500/14 text-violet-300'
                  : 'border-selection/75 bg-bg/70 text-dim hover:border-selection hover:text-fg'
              }"
            >
              Claude
            </button>
            <button
              onClick=${() => setAgentType('codex')}
              class="flex-1 rounded-xl border px-3 py-2 text-sm font-medium transition-colors duration-150 ${
                agentType === 'codex'
                  ? 'border-emerald-400/45 bg-emerald-500/14 text-emerald-300'
                  : 'border-selection/75 bg-bg/70 text-dim hover:border-selection hover:text-fg'
              }"
            >
              Codex
            </button>
          </div>
        </div>

        ${error && html`
          <div class="rounded-lg border border-attention/40 bg-attention/12 px-3 py-1.5 text-sm text-attention">
            ${error}
          </div>
        `}
      </div>

      <!-- Footer -->
      <div class="flex items-center justify-end gap-2 border-t border-selection/70 px-4 py-2.5">
        <button
          onClick=${handleClose}
          disabled=${loading}
          class="rounded-xl border border-selection/75 bg-bg/70 px-4 py-2 text-sm font-medium text-dim transition-colors hover:border-selection hover:text-fg disabled:cursor-not-allowed disabled:opacity-50"
        >
          Cancel
        </button>
        <button
          onClick=${handleSpawn}
          disabled=${!canSpawn}
          class="rounded-xl px-4 py-2 text-sm font-medium transition-[transform,filter] duration-150 hover:-translate-y-0.5 hover:brightness-110 disabled:cursor-not-allowed disabled:opacity-50 disabled:hover:translate-y-0 disabled:hover:brightness-100 ${
            agentType === 'claude'
              ? 'bg-violet-500 text-white'
              : 'bg-emerald-500 text-white'
          }"
        >
          ${loading ? 'Spawning...' : 'Spawn'}
        </button>
      </div>
    </div>
  `;
}
@@ -99,6 +99,7 @@
   </script>

   <link rel="stylesheet" href="styles.css">
+  <!-- AMC_AUTH_TOKEN -->
 </head>
 <body class="min-h-screen text-fg antialiased">
   <div id="app"></div>
@@ -88,6 +88,25 @@ body {
   animation: spin-ring 0.9s cubic-bezier(0.645, 0.045, 0.355, 1) infinite;
 }
+
+/* Agent activity spinner */
+.activity-spinner {
+  width: 8px;
+  height: 8px;
+  border-radius: 50%;
+  background: #5fd0a4;
+  position: relative;
+}
+
+.activity-spinner::after {
+  content: '';
+  position: absolute;
+  inset: -3px;
+  border-radius: 50%;
+  border: 1.5px solid transparent;
+  border-top-color: #5fd0a4;
+  animation: spin-ring 0.9s cubic-bezier(0.645, 0.045, 0.355, 1) infinite;
+}
+
 /* Working indicator at bottom of chat */
 @keyframes bounce-dot {
   0%, 80%, 100% { transform: translateY(0); }
@@ -134,6 +153,16 @@ body {
   animation: modalPanelOut 200ms ease-in forwards;
 }
+
+/* Spawn highlight animation - visual feedback when a newly spawned agent appears */
+@keyframes spawn-highlight {
+  0% { box-shadow: 0 0 0 3px rgba(95, 208, 164, 0.6), 0 0 16px rgba(95, 208, 164, 0.15); }
+  100% { box-shadow: 0 0 0 0 transparent, 0 0 0 transparent; }
+}
+
+.session-card-spawned {
+  animation: spawn-highlight 2.5s ease-out;
+}
+
 /* Accessibility: disable continuous animations for motion-sensitive users */
 @media (prefers-reduced-motion: reduce) {
   .spinner-dot::after {
@@ -152,7 +181,8 @@ body {
     animation: none;
   }
   .animate-float,
-  .animate-fade-in-up {
+  .animate-fade-in-up,
+  .session-card-spawned {
     animation: none !important;
   }
 }
162 dashboard/tests/autocomplete.test.js Normal file
@@ -0,0 +1,162 @@
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { getTriggerInfo, filteredSkills } from '../utils/autocomplete.js';

const mockConfig = {
  trigger: '/',
  skills: [
    { name: 'commit', description: 'Create a git commit' },
    { name: 'review-pr', description: 'Review a pull request' },
    { name: 'comment', description: 'Add a comment' },
  ],
};

describe('getTriggerInfo', () => {
  it('returns null when no autocompleteConfig', () => {
    const result = getTriggerInfo('/hello', 1, null);
    assert.equal(result, null);
  });

  it('returns null when autocompleteConfig is undefined', () => {
    const result = getTriggerInfo('/hello', 1, undefined);
    assert.equal(result, null);
  });

  it('detects trigger at position 0', () => {
    const result = getTriggerInfo('/', 1, mockConfig);
    assert.deepEqual(result, {
      trigger: '/',
      filterText: '',
      replaceStart: 0,
      replaceEnd: 1,
    });
  });

  it('detects trigger after space', () => {
    const result = getTriggerInfo('hello /co', 9, mockConfig);
    assert.deepEqual(result, {
      trigger: '/',
      filterText: 'co',
      replaceStart: 6,
      replaceEnd: 9,
    });
  });

  it('detects trigger after newline', () => {
    const result = getTriggerInfo('line1\n/rev', 10, mockConfig);
    assert.deepEqual(result, {
      trigger: '/',
      filterText: 'rev',
      replaceStart: 6,
      replaceEnd: 10,
    });
  });

  it('returns null for non-trigger character', () => {
    const result = getTriggerInfo('hello world', 5, mockConfig);
    assert.equal(result, null);
  });

  it('returns null for wrong trigger (! when config expects /)', () => {
    const result = getTriggerInfo('!commit', 7, mockConfig);
    assert.equal(result, null);
  });

  it('returns null for trigger embedded in a word', () => {
    const result = getTriggerInfo('path/to/file', 5, mockConfig);
    assert.equal(result, null);
  });

  it('extracts filterText correctly', () => {
    const result = getTriggerInfo('/commit', 7, mockConfig);
    assert.equal(result.filterText, 'commit');
    assert.equal(result.replaceStart, 0);
    assert.equal(result.replaceEnd, 7);
  });

  it('filterText is lowercase', () => {
    const result = getTriggerInfo('/CoMmIt', 7, mockConfig);
    assert.equal(result.filterText, 'commit');
  });

  it('replaceStart and replaceEnd are correct for mid-input trigger', () => {
    const result = getTriggerInfo('foo /bar', 8, mockConfig);
    assert.equal(result.replaceStart, 4);
    assert.equal(result.replaceEnd, 8);
  });

  it('works with a different trigger character', () => {
    const codexConfig = { trigger: '!', skills: [] };
    const result = getTriggerInfo('!test', 5, codexConfig);
    assert.deepEqual(result, {
      trigger: '!',
      filterText: 'test',
      replaceStart: 0,
      replaceEnd: 5,
    });
  });
});

describe('filteredSkills', () => {
  it('returns empty array without config', () => {
    const info = { filterText: '' };
    assert.deepEqual(filteredSkills(null, info), []);
  });

  it('returns empty array without triggerInfo', () => {
    assert.deepEqual(filteredSkills(mockConfig, null), []);
  });

  it('returns empty array when both are null', () => {
    assert.deepEqual(filteredSkills(null, null), []);
  });

  it('returns all skills with empty filter', () => {
    const info = { filterText: '' };
    const result = filteredSkills(mockConfig, info);
    assert.equal(result.length, 3);
  });

  it('filters case-insensitively', () => {
    const info = { filterText: 'com' };
    const result = filteredSkills(mockConfig, info);
    const names = result.map(s => s.name);
    assert.ok(names.includes('commit'));
    assert.ok(names.includes('comment'));
    assert.ok(!names.includes('review-pr'));
  });

  it('matches anywhere in name', () => {
    const info = { filterText: 'view' };
    const result = filteredSkills(mockConfig, info);
    assert.equal(result.length, 1);
    assert.equal(result[0].name, 'review-pr');
  });

  it('sorts alphabetically', () => {
    const info = { filterText: '' };
    const result = filteredSkills(mockConfig, info);
    const names = result.map(s => s.name);
    assert.deepEqual(names, ['comment', 'commit', 'review-pr']);
  });

  it('returns empty array when no matches', () => {
    const info = { filterText: 'zzz' };
    const result = filteredSkills(mockConfig, info);
    assert.deepEqual(result, []);
  });

  it('does not mutate the original skills array', () => {
    const config = {
      trigger: '/',
      skills: [
        { name: 'zebra', description: 'z' },
        { name: 'alpha', description: 'a' },
      ],
    };
    const info = { filterText: '' };
    filteredSkills(config, info);
    assert.equal(config.skills[0].name, 'zebra');
    assert.equal(config.skills[1].name, 'alpha');
  });
});
@@ -2,8 +2,14 @@
 export const API_STATE = '/api/state';
 export const API_STREAM = '/api/stream';
 export const API_DISMISS = '/api/dismiss/';
+export const API_DISMISS_DEAD = '/api/dismiss-dead';
 export const API_RESPOND = '/api/respond/';
 export const API_CONVERSATION = '/api/conversation/';
+export const API_SKILLS = '/api/skills';
+export const API_SPAWN = '/api/spawn';
+export const API_PROJECTS = '/api/projects';
+export const API_PROJECTS_REFRESH = '/api/projects/refresh';
+export const API_HEALTH = '/api/health';
 export const POLL_MS = 3000;
 export const API_TIMEOUT_MS = 10000;
@@ -18,3 +24,16 @@ export async function fetchWithTimeout(url, options = {}, timeoutMs = API_TIMEOU
     clearTimeout(timeoutId);
   }
 }
+
+// Fetch autocomplete skills config for an agent type
+export async function fetchSkills(agent) {
+  const url = `${API_SKILLS}?agent=${encodeURIComponent(agent)}`;
+  try {
+    const response = await fetchWithTimeout(url);
+    if (!response.ok) return null;
+    return await response.json();
+  } catch {
+    // Network error, timeout, or JSON parse failure - graceful degradation
+    return null;
+  }
+}
48 dashboard/utils/autocomplete.js Normal file
@@ -0,0 +1,48 @@
// Pure logic for autocomplete trigger detection and skill filtering.
// Extracted from SimpleInput.js for testability.

/**
 * Detect if cursor is at a trigger position for autocomplete.
 * Returns trigger info object or null.
 */
export function getTriggerInfo(value, cursorPos, autocompleteConfig) {
  if (!autocompleteConfig) return null;

  const { trigger } = autocompleteConfig;

  // Find the start of the current "word" (after last whitespace before cursor)
  let wordStart = cursorPos;
  while (wordStart > 0 && !/\s/.test(value[wordStart - 1])) {
    wordStart--;
  }

  // Check if word starts with this agent's trigger character
  if (value[wordStart] === trigger) {
    return {
      trigger,
      filterText: value.slice(wordStart + 1, cursorPos).toLowerCase(),
      replaceStart: wordStart,
      replaceEnd: cursorPos,
    };
  }

  return null;
}

/**
 * Filter and sort skills based on trigger info.
 * Returns sorted array of matching skills.
 */
export function filteredSkills(autocompleteConfig, triggerInfo) {
  if (!autocompleteConfig || !triggerInfo) return [];

  const { skills } = autocompleteConfig;
  const { filterText } = triggerInfo;

  let filtered = filterText
    ? skills.filter(s => s.name.toLowerCase().includes(filterText))
    : skills.slice();

  // Server pre-sorts, but re-sort after filtering for stability
  return filtered.sort((a, b) => a.name.localeCompare(b.name));
}
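For reference, here is how the two pure helpers behave together. This standalone sketch inlines the functions (so it runs outside the dashboard, without the module imports); the sample config mirrors the test fixtures:

```javascript
// Inlined copies of the two helpers so this sketch runs standalone.
function getTriggerInfo(value, cursorPos, autocompleteConfig) {
  if (!autocompleteConfig) return null;
  const { trigger } = autocompleteConfig;
  // Walk back to the start of the current word.
  let wordStart = cursorPos;
  while (wordStart > 0 && !/\s/.test(value[wordStart - 1])) wordStart--;
  if (value[wordStart] === trigger) {
    return {
      trigger,
      filterText: value.slice(wordStart + 1, cursorPos).toLowerCase(),
      replaceStart: wordStart,
      replaceEnd: cursorPos,
    };
  }
  return null;
}

function filteredSkills(autocompleteConfig, triggerInfo) {
  if (!autocompleteConfig || !triggerInfo) return [];
  const { skills } = autocompleteConfig;
  const { filterText } = triggerInfo;
  const filtered = filterText
    ? skills.filter(s => s.name.toLowerCase().includes(filterText))
    : skills.slice();
  return filtered.sort((a, b) => a.name.localeCompare(b.name));
}

// Typing "fix bug /co" with the cursor at the end (position 11):
const config = {
  trigger: '/',
  skills: [
    { name: 'commit', description: 'Create a git commit' },
    { name: 'review-pr', description: 'Review a pull request' },
    { name: 'comment', description: 'Add a comment' },
  ],
};
const info = getTriggerInfo('fix bug /co', 11, config);
console.log(info);
// { trigger: '/', filterText: 'co', replaceStart: 8, replaceEnd: 11 }
console.log(filteredSkills(config, info).map(s => s.name));
// [ 'comment', 'commit' ]
```

Note that a `/` embedded in a word (e.g. `path/to/file`) never opens the dropdown, because the scan only accepts the trigger at the start of the current word.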
214 docs/claude-jsonl-reference/01-format-specification.md Normal file
@@ -0,0 +1,214 @@
# Claude JSONL Format Specification

## File Format

- **Format:** Newline-delimited JSON (NDJSON/JSONL)
- **Encoding:** UTF-8
- **Line terminator:** `\n` (LF)
- **One JSON object per line** — no array wrapper

## Message Envelope (Common Fields)

Every line in a Claude JSONL file contains these fields:

```json
{
  "parentUuid": "uuid-string or null",
  "isSidechain": false,
  "userType": "external",
  "cwd": "/full/path/to/working/directory",
  "sessionId": "session-uuid-v4",
  "version": "2.1.20",
  "gitBranch": "branch-name or empty string",
  "type": "user|assistant|progress|system|summary|file-history-snapshot",
  "message": { ... },
  "uuid": "unique-message-uuid-v4",
  "timestamp": "ISO-8601 timestamp"
}
```

### Field Reference

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `type` | string | Yes | Message type identifier |
| `uuid` | string (uuid) | Yes* | Unique identifier for this event |
| `parentUuid` | string (uuid) or null | Yes | Links to parent message (null for root) |
| `timestamp` | string (ISO-8601) | Yes* | When event occurred (UTC) |
| `sessionId` | string (uuid) | Yes | Session identifier |
| `version` | string (semver) | Yes | Claude Code version (e.g., "2.1.20") |
| `cwd` | string (path) | Yes | Working directory at event time |
| `gitBranch` | string | No | Git branch name (empty if not in repo) |
| `isSidechain` | boolean | Yes | `true` for subagent sessions |
| `userType` | string | Yes | Always "external" for user sessions |
| `message` | object | Conditional | Message content (user/assistant types) |
| `agentId` | string | Conditional | Agent identifier (subagent sessions only) |

*May be null in metadata-only entries like `file-history-snapshot`
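Because each line is an independent JSON object, a consumer can parse the file one line at a time. The sketch below is a minimal Node-flavored reader (an illustration, not part of AMC's codebase) that skips blank lines and tolerates corrupt ones, such as a truncated final write:

```javascript
// Parse a JSONL session log string into an array of event objects.
// Lines that fail to parse are skipped rather than aborting the whole file.
function parseSessionLog(text) {
  const events = [];
  for (const line of text.split('\n')) {
    if (!line.trim()) continue; // skip blank lines
    try {
      events.push(JSON.parse(line));
    } catch {
      // tolerate truncated/corrupt lines (e.g. a partial final write)
    }
  }
  return events;
}

const sample = [
  '{"type":"user","uuid":"u1","parentUuid":null,"message":{"role":"user","content":"hi"}}',
  '{"type":"assistant","uuid":"a1","parentUuid":"u1","message":{"role":"assistant","content":[{"type":"text","text":"hello"}]}}',
  'not json',
].join('\n');

const events = parseSessionLog(sample);
console.log(events.length);           // 2
console.log(events.map(e => e.type)); // [ 'user', 'assistant' ]
```

For large logs, the same logic applies per chunk when streaming the file rather than loading it whole.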

## Content Structure

### User Message Content

User messages have `message.content` as either:

**String (direct input):**
```json
{
  "message": {
    "role": "user",
    "content": "Your question or instruction"
  }
}
```

**Array (tool results):**
```json
{
  "message": {
    "role": "user",
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01XYZ",
        "content": "Tool output text"
      }
    ]
  }
}
```

### Assistant Message Content

Assistant messages always have `message.content` as an **array**:

```json
{
  "message": {
    "role": "assistant",
    "type": "message",
    "model": "claude-opus-4-5-20251101",
    "id": "msg_bdrk_01Abc123",
    "content": [
      {"type": "thinking", "thinking": "..."},
      {"type": "text", "text": "..."},
      {"type": "tool_use", "id": "toolu_01XYZ", "name": "Read", "input": {...}}
    ],
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {...}
  }
}
```

## Content Block Types

### Text Block
```json
{
  "type": "text",
  "text": "Response text content"
}
```

### Thinking Block
```json
{
  "type": "thinking",
  "thinking": "Internal reasoning (extended thinking mode)",
  "signature": "base64-signature (optional)"
}
```

### Tool Use Block
```json
{
  "type": "tool_use",
  "id": "toolu_01Abc123XYZ",
  "name": "ToolName",
  "input": {
    "param1": "value1",
    "param2": 123
  }
}
```

### Tool Result Block
```json
{
  "type": "tool_result",
  "tool_use_id": "toolu_01Abc123XYZ",
  "content": "Result text or structured output",
  "is_error": false
}
```

## Usage Object

Token consumption reported in assistant messages:

```json
{
  "usage": {
    "input_tokens": 1000,
    "output_tokens": 500,
    "cache_creation_input_tokens": 200,
    "cache_read_input_tokens": 400,
    "cache_creation": {
      "ephemeral_5m_input_tokens": 200,
      "ephemeral_1h_input_tokens": 0
    },
    "service_tier": "standard"
  }
}
```

| Field | Type | Description |
|-------|------|-------------|
| `input_tokens` | int | Input tokens consumed |
| `output_tokens` | int | Output tokens generated |
| `cache_creation_input_tokens` | int | Tokens used to create cache |
| `cache_read_input_tokens` | int | Tokens read from cache |
| `service_tier` | string | API tier ("standard", etc.) |
||||||
|
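The total input-side cost of an assistant message is the sum of the plain and cache token fields. A minimal helper, assuming the usage shape above and treating absent fields as zero:

```python
def total_input_tokens(usage: dict) -> int:
    """Sum all input-side token variants; absent fields count as zero."""
    return (
        usage.get("input_tokens", 0)
        + usage.get("cache_creation_input_tokens", 0)
        + usage.get("cache_read_input_tokens", 0)
    )

usage = {
    "input_tokens": 1000,
    "output_tokens": 500,
    "cache_creation_input_tokens": 200,
    "cache_read_input_tokens": 400,
}
print(total_input_tokens(usage))  # 1600
```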

## Model Identifiers

Common model names in `message.model`:

| Model | Identifier |
|-------|------------|
| Claude Opus 4.5 | `claude-opus-4-5-20251101` |
| Claude Sonnet 4.5 | `claude-sonnet-4-5-20250929` |
| Claude Haiku 4.5 | `claude-haiku-4-5-20251001` |

## Version History

| Version | Changes |
|---------|---------|
| 2.1.20 | Extended thinking, permission modes, todos |
| 2.1.17 | Subagent support with `agentId` |
| 2.1.x | Progress events, hook metadata |
| 2.0.x | Basic message/tool_use/tool_result |

## Conversation Graph

Messages form a DAG (directed acyclic graph) via parent-child relationships:

```
Root (parentUuid: null)
├── User message (uuid: A)
│   └── Assistant (uuid: B, parentUuid: A)
│       ├── Progress: Tool (uuid: C, parentUuid: B)
│       └── Progress: Hook (uuid: D, parentUuid: B)
└── User message (uuid: E, parentUuid: B)
    └── Assistant (uuid: F, parentUuid: E)
```
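To walk the graph top-down, it helps to invert the `parentUuid` links into a children index. A sketch, assuming entries carry the `uuid`/`parentUuid` fields documented above:

```python
from collections import defaultdict

def build_children_index(entries: list) -> dict:
    """Index entry uuids by parentUuid; roots sit under the None key."""
    children = defaultdict(list)
    for entry in entries:
        if "uuid" in entry:
            children[entry.get("parentUuid")].append(entry["uuid"])
    return children

entries = [
    {"uuid": "A", "parentUuid": None},
    {"uuid": "B", "parentUuid": "A"},
    {"uuid": "E", "parentUuid": "B"},
]
print(build_children_index(entries)[None])  # ['A']
```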

## Parsing Recommendations

1. **Line-by-line** — Don't load entire file into memory
2. **Skip invalid lines** — Wrap JSON.parse in try/catch
3. **Handle missing fields** — Check existence before access
4. **Ignore unknown types** — Format evolves with new event types
5. **Check content type** — User content can be string OR array
6. **Sum token variants** — Cache tokens may be in different fields
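The recommendations above combine into a small reader. This is a sketch, not the AMC implementation; the type names are the ones documented in this reference, and callers are still expected to check individual fields (rules 3, 5, and 6):

```python
import json

KNOWN_TYPES = {"user", "assistant", "progress", "system",
               "summary", "file-history-snapshot"}

def read_entries(lines):
    """Yield parsed entries from an iterable of JSONL lines.

    Pass an open file object to stream a log line-by-line (rule 1).
    """
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)          # rule 2: skip invalid lines
        except json.JSONDecodeError:
            continue
        if entry.get("type") not in KNOWN_TYPES:
            continue                          # rule 4: ignore unknown types
        yield entry

log = [
    '{"type": "user", "uuid": "msg-001"}',
    'not json at all',
    '{"type": "future-event"}',
]
print([e["uuid"] for e in read_entries(log)])  # ['msg-001']
```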

`docs/claude-jsonl-reference/02-message-types.md`

# Claude JSONL Message Types

Complete reference for all message types in Claude Code session logs.

## Type: `user`

User input messages (prompts, instructions, tool results).

### Direct User Input
```json
{
  "parentUuid": null,
  "isSidechain": false,
  "userType": "external",
  "cwd": "/Users/dev/myproject",
  "sessionId": "abc123-def456",
  "version": "2.1.20",
  "gitBranch": "main",
  "type": "user",
  "message": {
    "role": "user",
    "content": "Find all TODO comments in the codebase"
  },
  "uuid": "msg-001",
  "timestamp": "2026-02-27T10:00:00.000Z",
  "thinkingMetadata": {
    "maxThinkingTokens": 31999
  },
  "todos": [],
  "permissionMode": "bypassPermissions"
}
```

### Tool Results (Following Tool Calls)
```json
{
  "parentUuid": "msg-002",
  "type": "user",
  "message": {
    "role": "user",
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01ABC",
        "content": "src/api.py:45: # TODO: implement caching"
      },
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01DEF",
        "content": "src/utils.py:122: # TODO: add validation"
      }
    ]
  },
  "uuid": "msg-003",
  "timestamp": "2026-02-27T10:00:05.000Z"
}
```

**Parsing Note:** Check `typeof content === 'string'` vs `Array.isArray(content)` to distinguish user input from tool results.
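The same string-vs-array check translates directly to Python. A minimal sketch (the `classify_user_entry` helper name is ours, not part of the log format):

```python
import json

def classify_user_entry(entry: dict) -> str:
    """Classify a type=user entry as direct input or tool results."""
    content = entry.get("message", {}).get("content")
    if isinstance(content, str):
        return "direct_input"
    if isinstance(content, list):
        # Tool results arrive as a list of tool_result blocks
        if all(b.get("type") == "tool_result" for b in content):
            return "tool_results"
        return "mixed"
    return "unknown"

line = '{"type": "user", "message": {"role": "user", "content": "hi"}}'
print(classify_user_entry(json.loads(line)))  # direct_input
```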

## Type: `assistant`

Claude's responses including text, thinking, and tool invocations.

### Text Response
```json
{
  "parentUuid": "msg-001",
  "type": "assistant",
  "message": {
    "role": "assistant",
    "type": "message",
    "model": "claude-opus-4-5-20251101",
    "id": "msg_bdrk_01Abc123",
    "content": [
      {
        "type": "text",
        "text": "I found 2 TODO comments in your codebase..."
      }
    ],
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 1500,
      "output_tokens": 200,
      "cache_read_input_tokens": 800
    }
  },
  "uuid": "msg-002",
  "timestamp": "2026-02-27T10:00:02.000Z"
}
```

### With Thinking (Extended Thinking Mode)
```json
{
  "type": "assistant",
  "message": {
    "role": "assistant",
    "content": [
      {
        "type": "thinking",
        "thinking": "The user wants to find TODOs. I should use Grep to search for TODO patterns across all file types.",
        "signature": "eyJhbGciOiJSUzI1NiJ9..."
      },
      {
        "type": "text",
        "text": "I'll search for TODO comments in your codebase."
      }
    ]
  }
}
```

### With Tool Calls
```json
{
  "type": "assistant",
  "message": {
    "role": "assistant",
    "content": [
      {
        "type": "tool_use",
        "id": "toolu_01Grep123",
        "name": "Grep",
        "input": {
          "pattern": "TODO",
          "output_mode": "content"
        }
      }
    ],
    "stop_reason": null
  }
}
```

### Multiple Tool Calls (Parallel)
```json
{
  "type": "assistant",
  "message": {
    "content": [
      {
        "type": "text",
        "text": "I'll search for both TODOs and FIXMEs."
      },
      {
        "type": "tool_use",
        "id": "toolu_01A",
        "name": "Grep",
        "input": {"pattern": "TODO"}
      },
      {
        "type": "tool_use",
        "id": "toolu_01B",
        "name": "Grep",
        "input": {"pattern": "FIXME"}
      }
    ]
  }
}
```

## Type: `progress`

Progress events for hooks, tools, and async operations.

### Hook Progress
```json
{
  "parentUuid": "msg-002",
  "isSidechain": false,
  "type": "progress",
  "data": {
    "type": "hook_progress",
    "hookEvent": "PostToolUse",
    "hookName": "PostToolUse:Grep",
    "command": "node scripts/log-tool-use.js"
  },
  "parentToolUseID": "toolu_01Grep123",
  "toolUseID": "toolu_01Grep123",
  "timestamp": "2026-02-27T10:00:03.000Z",
  "uuid": "prog-001"
}
```

### Bash Progress
```json
{
  "type": "progress",
  "data": {
    "type": "bash_progress",
    "status": "running",
    "toolName": "Bash",
    "command": "npm test"
  }
}
```

### MCP Progress
```json
{
  "type": "progress",
  "data": {
    "type": "mcp_progress",
    "server": "playwright",
    "tool": "browser_navigate",
    "status": "complete"
  }
}
```

## Type: `system`

System messages and metadata entries.

### Local Command
```json
{
  "parentUuid": "msg-001",
  "type": "system",
  "subtype": "local_command",
  "content": "<command-name>/usage</command-name>\n<command-args></command-args>",
  "level": "info",
  "timestamp": "2026-02-27T10:00:00.500Z",
  "uuid": "sys-001",
  "isMeta": false
}
```

### Turn Duration
```json
{
  "type": "system",
  "subtype": "turn_duration",
  "slug": "project-session",
  "durationMs": 65432,
  "uuid": "sys-002",
  "timestamp": "2026-02-27T10:01:05.000Z"
}
```

## Type: `summary`

End-of-session or context compression summaries.

```json
{
  "type": "summary",
  "summary": "Searched codebase for TODO comments, found 15 instances across 8 files. Prioritized by module.",
  "leafUuid": "msg-010"
}
```

**Note:** `leafUuid` points to the last message included in this summary.

## Type: `file-history-snapshot`

File state tracking for undo/restore operations.

```json
{
  "type": "file-history-snapshot",
  "messageId": "snap-001",
  "snapshot": {
    "messageId": "snap-001",
    "trackedFileBackups": {
      "/src/api.ts": {
        "path": "/src/api.ts",
        "originalContent": "...",
        "backupPath": "~/.claude/backups/..."
      }
    },
    "timestamp": "2026-02-27T10:00:00.000Z"
  },
  "isSnapshotUpdate": false
}
```

## Codex Format (Alternative Agent)

Codex uses a different JSONL structure.

### Session Metadata (First Line)
```json
{
  "type": "session_meta",
  "timestamp": "2026-02-27T10:00:00.000Z",
  "payload": {
    "cwd": "/Users/dev/myproject",
    "timestamp": "2026-02-27T10:00:00.000Z"
  }
}
```

### Response Item (Messages)
```json
{
  "type": "response_item",
  "timestamp": "2026-02-27T10:00:05.000Z",
  "payload": {
    "type": "message",
    "role": "assistant",
    "content": [
      {"text": "I found the issue..."}
    ]
  }
}
```

### Function Call (Tool Use)
```json
{
  "type": "response_item",
  "payload": {
    "type": "function_call",
    "call_id": "call_abc123",
    "name": "Grep",
    "arguments": "{\"pattern\": \"TODO\"}"
  }
}
```

### Reasoning (Thinking)
```json
{
  "type": "response_item",
  "payload": {
    "type": "reasoning",
    "summary": [
      {"type": "summary_text", "text": "Analyzing the error..."}
    ]
  }
}
```

## Message Type Summary

| Type | Frequency | Content |
|------|-----------|---------|
| `user` | Per prompt | User input or tool results |
| `assistant` | Per response | Text, thinking, tool calls |
| `progress` | Per hook/tool | Execution status |
| `system` | Occasional | Commands, metadata |
| `summary` | Session end | Conversation summary |
| `file-history-snapshot` | Start/end | File state tracking |
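A quick way to get a feel for a session's composition is to tally the top-level types listed above. A sketch that follows the same skip-invalid-lines convention as the rest of this reference:

```python
import json
from collections import Counter

def count_message_types(lines) -> Counter:
    """Tally top-level `type` values from an iterable of JSONL lines."""
    counts = Counter()
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            entry = json.loads(line)
        except json.JSONDecodeError:
            counts["<invalid>"] += 1
            continue
        counts[entry.get("type", "<missing>")] += 1
    return counts

log = [
    '{"type": "user"}',
    '{"type": "assistant"}',
    '{"type": "assistant"}',
    '{"type": "progress"}',
]
print(count_message_types(log).most_common(1))  # [('assistant', 2)]
```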

`docs/claude-jsonl-reference/03-tool-lifecycle.md`

# Tool Call Lifecycle

Complete documentation of how tool invocations flow through Claude JSONL logs.

## Lifecycle Overview

```
1. Assistant message with tool_use block
        ↓
2. PreToolUse hook fires (optional)
        ↓
3. Tool executes
        ↓
4. PostToolUse hook fires (optional)
        ↓
5. User message with tool_result block
        ↓
6. Assistant processes result
```

## Phase 1: Tool Invocation

Claude requests a tool via a `tool_use` content block:

```json
{
  "type": "assistant",
  "message": {
    "role": "assistant",
    "content": [
      {
        "type": "text",
        "text": "I'll read that file for you."
      },
      {
        "type": "tool_use",
        "id": "toolu_01ReadFile123",
        "name": "Read",
        "input": {
          "file_path": "/src/auth/login.ts",
          "limit": 200
        }
      }
    ],
    "stop_reason": null
  },
  "uuid": "msg-001"
}
```

### Tool Use Block Structure

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `type` | `"tool_use"` | Yes | Block type identifier |
| `id` | string | Yes | Unique tool call ID (format: `toolu_*`) |
| `name` | string | Yes | Tool name |
| `input` | object | Yes | Tool parameters |

### Common Tool Names

| Tool | Purpose | Key Input Fields |
|------|---------|------------------|
| `Read` | Read file | `file_path`, `offset`, `limit` |
| `Edit` | Edit file | `file_path`, `old_string`, `new_string` |
| `Write` | Create file | `file_path`, `content` |
| `Bash` | Run command | `command`, `timeout` |
| `Glob` | Find files | `pattern`, `path` |
| `Grep` | Search content | `pattern`, `path`, `type` |
| `WebFetch` | Fetch URL | `url`, `prompt` |
| `WebSearch` | Search web | `query` |
| `Task` | Spawn subagent | `prompt`, `subagent_type` |
| `AskUserQuestion` | Ask user | `questions` |

## Phase 2: Hook Execution (Optional)

If hooks are configured, progress events are logged:

### PreToolUse Hook Input
```json
{
  "session_id": "abc123",
  "transcript_path": "/Users/.../.claude/projects/.../session.jsonl",
  "cwd": "/Users/dev/myproject",
  "permission_mode": "default",
  "hook_event_name": "PreToolUse",
  "tool_name": "Read",
  "tool_input": {
    "file_path": "/src/auth/login.ts"
  }
}
```

### Hook Progress Event
```json
{
  "type": "progress",
  "data": {
    "type": "hook_progress",
    "hookEvent": "PreToolUse",
    "hookName": "security_check",
    "status": "running"
  },
  "parentToolUseID": "toolu_01ReadFile123",
  "toolUseID": "toolu_01ReadFile123",
  "uuid": "prog-001"
}
```

### Hook Output (Decision)
```json
{
  "decision": "allow",
  "reason": "File read permitted",
  "additionalContext": "Note: This file was recently modified"
}
```

## Phase 3: Tool Result

Tool output is wrapped in a user message:

```json
{
  "type": "user",
  "message": {
    "role": "user",
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01ReadFile123",
        "content": "1\texport async function login(email: string, password: string) {\n2\t  const user = await db.users.findByEmail(email);\n..."
      }
    ]
  },
  "uuid": "msg-002",
  "parentUuid": "msg-001"
}
```

### Tool Result Block Structure

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `type` | `"tool_result"` | Yes | Block type identifier |
| `tool_use_id` | string | Yes | Matches `tool_use.id` |
| `content` | string | Yes | Tool output |
| `is_error` | boolean | No | True if tool failed |

### Error Results
```json
{
  "type": "tool_result",
  "tool_use_id": "toolu_01ReadFile123",
  "content": "Error: File not found: /src/auth/login.ts",
  "is_error": true
}
```

## Phase 4: Result Processing

Claude processes the result and continues:

```json
{
  "type": "assistant",
  "message": {
    "content": [
      {
        "type": "thinking",
        "thinking": "The login function looks correct. The issue might be in the middleware..."
      },
      {
        "type": "text",
        "text": "I see the login function. Let me check the middleware next."
      },
      {
        "type": "tool_use",
        "id": "toolu_01ReadMiddleware",
        "name": "Read",
        "input": {"file_path": "/src/auth/middleware.ts"}
      }
    ]
  },
  "uuid": "msg-003",
  "parentUuid": "msg-002"
}
```

## Parallel Tool Calls

Multiple tools can be invoked in a single message:

```json
{
  "type": "assistant",
  "message": {
    "content": [
      {"type": "tool_use", "id": "toolu_01A", "name": "Grep", "input": {"pattern": "TODO"}},
      {"type": "tool_use", "id": "toolu_01B", "name": "Grep", "input": {"pattern": "FIXME"}},
      {"type": "tool_use", "id": "toolu_01C", "name": "Glob", "input": {"pattern": "**/*.test.ts"}}
    ]
  }
}
```

Results come back in the same user message:

```json
{
  "type": "user",
  "message": {
    "content": [
      {"type": "tool_result", "tool_use_id": "toolu_01A", "content": "Found 15 TODOs"},
      {"type": "tool_result", "tool_use_id": "toolu_01B", "content": "Found 3 FIXMEs"},
      {"type": "tool_result", "tool_use_id": "toolu_01C", "content": "tests/auth.test.ts\ntests/api.test.ts"}
    ]
  }
}
```

## Codex Tool Format

Codex uses a different structure:

### Function Call
```json
{
  "type": "response_item",
  "payload": {
    "type": "function_call",
    "call_id": "call_abc123",
    "name": "Read",
    "arguments": "{\"file_path\": \"/src/auth/login.ts\"}"
  }
}
```

**Note:** `arguments` is a JSON string that needs parsing.
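Because `arguments` is double-encoded, reading a Codex function call takes two parses: one for the JSONL line, one for the embedded argument string. A sketch with a hypothetical sample line:

```python
import json

# Hypothetical sample line showing the double-encoded arguments field.
line = ('{"type": "response_item", "payload": {"type": "function_call", '
        '"call_id": "call_abc123", "name": "Read", '
        '"arguments": "{\\"file_path\\": \\"/src/auth/login.ts\\"}"}}')

event = json.loads(line)                     # first parse: the JSONL line
args = json.loads(event["payload"]["arguments"])  # second parse: arguments
print(args["file_path"])  # /src/auth/login.ts
```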

### Function Result
```json
{
  "type": "response_item",
  "payload": {
    "type": "function_call_result",
    "call_id": "call_abc123",
    "result": "File contents..."
  }
}
```

## Tool Call Pairing

To reconstruct tool call history:

1. **Find tool_use blocks** in assistant messages
2. **Match by ID** to tool_result blocks in following user messages
3. **Handle parallel calls** — multiple tool_use blocks can have multiple tool_result blocks

```python
import json

# Example: pairing tool calls with their results
tool_calls = {}

with open("session.jsonl", encoding="utf-8") as jsonl_file:
    for line in jsonl_file:
        event = json.loads(line)

        if event.get("type") == "assistant":
            for block in event["message"]["content"]:
                if block["type"] == "tool_use":
                    tool_calls[block["id"]] = {
                        "name": block["name"],
                        "input": block["input"],
                        "timestamp": event.get("timestamp"),
                    }

        elif event.get("type") == "user":
            content = event["message"]["content"]
            if isinstance(content, list):
                for block in content:
                    if block["type"] == "tool_result":
                        call_id = block["tool_use_id"]
                        if call_id in tool_calls:
                            tool_calls[call_id]["result"] = block["content"]
                            tool_calls[call_id]["is_error"] = block.get("is_error", False)
```

## Missing Tool Results

Edge cases where tool results may be absent:

1. **Session interrupted** — User closed the session mid-tool
2. **Tool timeout** — Long-running tool exceeded limits
3. **Hook blocked** — PreToolUse hook returned `block`
4. **Permission denied** — User denied tool permission

Handle these by checking that each tool_use has a matching tool_result before assuming completion.
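Given the `tool_calls` dict shape built by the pairing loop above (completed calls carry a `"result"` key), the orphan check is a one-liner; a minimal sketch:

```python
def find_orphaned_calls(tool_calls: dict) -> list:
    """Return IDs of tool_use entries that never received a tool_result."""
    return [cid for cid, info in tool_calls.items() if "result" not in info]

calls = {
    "toolu_01A": {"name": "Grep", "result": "Found 15 TODOs"},
    "toolu_01B": {"name": "Read"},  # interrupted: no result was recorded
}
print(find_orphaned_calls(calls))  # ['toolu_01B']
```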

## Tool-Specific Formats

### Bash Tool
```json
{
  "type": "tool_use",
  "name": "Bash",
  "input": {
    "command": "npm test -- --coverage",
    "timeout": 120000,
    "description": "Run tests with coverage"
  }
}
```

Result includes exit code context:
```json
{
  "type": "tool_result",
  "content": "PASS src/auth.test.ts\n...\nCoverage: 85%\n\n[Exit code: 0]"
}
```

### Task Tool (Subagent)
```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "description": "Research auth patterns",
    "prompt": "Explore authentication implementations...",
    "subagent_type": "Explore"
  }
}
```

Result returns subagent output:
```json
{
  "type": "tool_result",
  "content": "## Research Findings\n\n1. JWT patterns...\n\nagentId: agent-abc123"
}
```

`docs/claude-jsonl-reference/04-subagent-teams.md`

# Subagent and Team Message Formats

Documentation for spawned agents, team coordination, and inter-agent messaging.

## Subagent Overview

Subagents are spawned via the `Task` tool and run in separate processes with their own transcripts.

### Spawn Relationship

```
Main Session (session-uuid.jsonl)
├── User message
├── Assistant: Task tool_use
├── [Subagent executes in separate process]
├── User message: tool_result with subagent output
└── ...

Subagent Session (session-uuid/subagents/agent-id.jsonl)
├── Subagent receives prompt
├── Subagent works (tool calls, etc.)
└── Subagent returns result
```

## Task Tool Invocation

Spawning a subagent:

```json
{
  "type": "assistant",
  "message": {
    "content": [
      {
        "type": "tool_use",
        "id": "toolu_01TaskSpawn",
        "name": "Task",
        "input": {
          "description": "Research auth patterns",
          "prompt": "Investigate authentication implementations in the codebase. Focus on JWT handling and session management.",
          "subagent_type": "Explore",
          "run_in_background": false
        }
      }
    ]
  }
}
```

### Task Tool Input Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `description` | string | Yes | Short (3-5 word) description |
| `prompt` | string | Yes | Full task instructions |
| `subagent_type` | string | Yes | Agent type (Explore, Plan, etc.) |
| `run_in_background` | boolean | No | Run async without waiting |
| `model` | string | No | Override model (sonnet, opus, haiku) |
| `isolation` | string | No | `"worktree"` for isolated git copy |
| `team_name` | string | No | Team to join |
| `name` | string | No | Agent display name |

### Subagent Types

| Type | Tools Available | Use Case |
|------|-----------------|----------|
| `Explore` | Read-only tools | Research, search, analyze |
| `Plan` | Read-only tools | Design implementation plans |
| `general-purpose` | All tools | Full implementation |
| `claude-code-guide` | Docs tools | Answer Claude Code questions |
| Custom agents | Defined in `.claude/agents/` | Project-specific |

## Subagent Transcript Location

```
~/.claude/projects/<project-hash>/<session-id>/subagents/agent-<agent-id>.jsonl
```
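Subagent transcripts can be discovered with a glob over that layout. A sketch that assumes the directory structure shown above and takes the session directory as a parameter:

```python
from pathlib import Path

def list_subagent_transcripts(session_dir: Path) -> list:
    """Find subagent transcripts under a session directory.

    Assumes the layout shown above:
    <session-dir>/subagents/agent-<agent-id>.jsonl
    """
    return sorted((session_dir / "subagents").glob("agent-*.jsonl"))

# e.g. session_dir = Path.home() / ".claude" / "projects" / "<project-hash>" / "<session-id>"
```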
## Subagent Message Format

Subagent transcripts have additional context fields:

```json
{
  "parentUuid": null,
  "isSidechain": true,
  "userType": "external",
  "cwd": "/Users/dev/myproject",
  "sessionId": "subagent-session-uuid",
  "version": "2.1.20",
  "gitBranch": "main",
  "agentId": "a3fecd5",
  "type": "user",
  "message": {
    "role": "user",
    "content": "Investigate authentication implementations..."
  },
  "uuid": "msg-001",
  "timestamp": "2026-02-27T10:00:00.000Z"
}
```

### Key Differences from Main Session

| Field | Main Session | Subagent Session |
|-------|--------------|------------------|
| `isSidechain` | `false` | `true` |
| `agentId` | absent | present |
| `sessionId` | main session UUID | subagent session UUID |

## Task Result

When the subagent completes, its result returns to the main session:

```json
{
  "type": "user",
  "message": {
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01TaskSpawn",
        "content": "## Authentication Research Findings\n\n### JWT Implementation\n- Located in src/auth/jwt.ts\n- Uses RS256 algorithm\n...\n\nagentId: a3fecd5 (for resuming)"
      }
    ]
  }
}
```

**Note:** The result includes `agentId` for potential resumption.

## Background Tasks

For `run_in_background: true`:

```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "prompt": "Run comprehensive test suite",
    "subagent_type": "general-purpose",
    "run_in_background": true
  }
}
```

Immediate result with task ID:

```json
{
  "type": "tool_result",
  "content": "Background task started.\nTask ID: task-abc123\nUse TaskOutput tool to check status."
}
```

## Team Coordination

Teams enable multiple agents to work together.

### Team Creation (TeamCreate Tool)

```json
{
  "type": "tool_use",
  "name": "TeamCreate",
  "input": {
    "team_name": "auth-refactor",
    "description": "Refactoring authentication system"
  }
}
```

### Team Config File

Created at `~/.claude/teams/<team-name>/config.json`:

```json
{
  "team_name": "auth-refactor",
  "description": "Refactoring authentication system",
  "created_at": "2026-02-27T10:00:00.000Z",
  "members": [
    {
      "name": "team-lead",
      "agentId": "agent-lead-123",
      "agentType": "general-purpose"
    },
    {
      "name": "researcher",
      "agentId": "agent-research-456",
      "agentType": "Explore"
    }
  ]
}
```
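Reading the member roster back out is straightforward; a sketch assuming the `config.json` structure above (the `load_team_members` helper name is ours):

```python
import json
from pathlib import Path

def load_team_members(config_path: Path) -> dict:
    """Map member name -> agentId from a team config file."""
    config = json.loads(config_path.read_text(encoding="utf-8"))
    return {m["name"]: m["agentId"] for m in config.get("members", [])}

# e.g. load_team_members(Path.home() / ".claude" / "teams" / "auth-refactor" / "config.json")
```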
|
||||||
|
### Spawning Team Members
|
||||||
|
|
||||||
|
```json
|
||||||
|
{
|
||||||
|
"type": "tool_use",
|
||||||
|
"name": "Task",
|
||||||
|
"input": {
|
||||||
|
"prompt": "Research existing auth implementations",
|
||||||
|
"subagent_type": "Explore",
|
||||||
|
"team_name": "auth-refactor",
|
||||||
|
"name": "researcher"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
```
|
||||||
|
|
||||||
|
## Inter-Agent Messaging (SendMessage)

### Direct Message

```json
{
  "type": "tool_use",
  "name": "SendMessage",
  "input": {
    "type": "message",
    "recipient": "researcher",
    "content": "Please focus on JWT refresh token handling",
    "summary": "JWT refresh priority"
  }
}
```

### Broadcast to Team

```json
{
  "type": "tool_use",
  "name": "SendMessage",
  "input": {
    "type": "broadcast",
    "content": "Critical: Found security vulnerability in token validation",
    "summary": "Security alert"
  }
}
```

### Shutdown Request

```json
{
  "type": "tool_use",
  "name": "SendMessage",
  "input": {
    "type": "shutdown_request",
    "recipient": "researcher",
    "content": "Task complete, please wrap up"
  }
}
```

### Shutdown Response

```json
{
  "type": "tool_use",
  "name": "SendMessage",
  "input": {
    "type": "shutdown_response",
    "request_id": "req-abc123",
    "approve": true
  }
}
```

## Hook Events for Subagents

### SubagentStart Hook Input

```json
{
  "session_id": "main-session-uuid",
  "transcript_path": "/path/to/main/session.jsonl",
  "hook_event_name": "SubagentStart",
  "agent_id": "a3fecd5",
  "agent_type": "Explore"
}
```

### SubagentStop Hook Input

```json
{
  "session_id": "main-session-uuid",
  "hook_event_name": "SubagentStop",
  "agent_id": "a3fecd5",
  "agent_type": "Explore",
  "agent_transcript_path": "/path/to/subagent/agent-a3fecd5.jsonl",
  "last_assistant_message": "Research complete. Found 3 auth patterns..."
}
```

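The two payloads above can be dispatched on `hook_event_name`. A minimal sketch (field names follow the examples; the function name and return tuples are ours):

```python
def route_subagent_hook(payload):
    """Dispatch a subagent hook payload on hook_event_name."""
    name = payload.get("hook_event_name")
    if name == "SubagentStart":
        return ("start", payload.get("agent_id"), payload.get("agent_type"))
    if name == "SubagentStop":
        return ("stop", payload.get("agent_id"), payload.get("agent_transcript_path"))
    return ("unknown", None, None)

start = {"hook_event_name": "SubagentStart", "agent_id": "a3fecd5", "agent_type": "Explore"}
stop = {"hook_event_name": "SubagentStop", "agent_id": "a3fecd5",
        "agent_transcript_path": "/path/to/subagent/agent-a3fecd5.jsonl"}
assert route_subagent_hook(start) == ("start", "a3fecd5", "Explore")
assert route_subagent_hook(stop)[0] == "stop"
```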
## AMC Spawn Tracking

AMC tracks spawned agents through:

### Pending Spawn Record

```json
// ~/.local/share/amc/pending_spawns/<spawn-id>.json
{
  "spawn_id": "550e8400-e29b-41d4-a716-446655440000",
  "project_path": "/Users/dev/myproject",
  "agent_type": "claude",
  "timestamp": 1708872000.123
}
```

### Session State with Spawn ID

```json
// ~/.local/share/amc/sessions/<session-id>.json
{
  "session_id": "session-uuid",
  "agent": "claude",
  "project": "myproject",
  "spawn_id": "550e8400-e29b-41d4-a716-446655440000",
  "zellij_session": "main",
  "zellij_pane": "3"
}
```

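The two records above join on `spawn_id`. A hedged sketch of that lookup, assuming the directory layout shown (the `base_dir` parameter is added here for testability):

```python
import json
import os

def resolve_pending_spawn(session, base_dir):
    """Look up the pending-spawn record matching a session's spawn_id, if any."""
    spawn_id = session.get("spawn_id")
    if not spawn_id:
        return None
    path = os.path.join(base_dir, "pending_spawns", f"{spawn_id}.json")
    try:
        with open(path) as f:
            return json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        return None
```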
## Resuming Subagents

Subagents can be resumed using their agent ID:

```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "description": "Continue auth research",
    "prompt": "Continue where you left off",
    "subagent_type": "Explore",
    "resume": "a3fecd5"
  }
}
```

The resumed agent receives full previous context.

## Worktree Isolation

For isolated code changes:

```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "prompt": "Refactor auth module",
    "subagent_type": "general-purpose",
    "isolation": "worktree"
  }
}
```

Creates a temporary git worktree at `.claude/worktrees/<name>/`.

The result includes worktree info:

```json
{
  "type": "tool_result",
  "content": "Refactoring complete.\n\nWorktree: .claude/worktrees/auth-refactor\nBranch: claude/auth-refactor-abc123\n\nChanges made - worktree preserved for review."
}
```

475 docs/claude-jsonl-reference/05-edge-cases.md (new file)
@@ -0,0 +1,475 @@

# Edge Cases and Error Handling

Comprehensive guide to edge cases, malformed input handling, and error recovery in Claude JSONL processing.

## Parsing Edge Cases

### 1. Invalid JSON Lines

**Scenario:** Corrupted or truncated JSON line.

```python
# BAD: Crashes on invalid JSON
for line in file:
    data = json.loads(line)  # Raises JSONDecodeError

# GOOD: Skip invalid lines
for line in file:
    if not line.strip():
        continue
    try:
        data = json.loads(line)
    except json.JSONDecodeError:
        continue  # Skip malformed line
```

### 2. Content Type Ambiguity

**Scenario:** User message content can be string OR array.

```python
# BAD: Assumes string
user_text = message['content']

# GOOD: Check type
content = message['content']
if isinstance(content, str):
    user_text = content
elif isinstance(content, list):
    # This is tool results, not user input
    user_text = None
```

### 3. Missing Optional Fields

**Scenario:** Fields may be absent in older versions.

```python
# BAD: Assumes field exists
tokens = message['usage']['cache_read_input_tokens']

# GOOD: Safe access
usage = message.get('usage', {})
tokens = usage.get('cache_read_input_tokens', 0)
```

### 4. Partial File Reads

**Scenario:** Reading the last N bytes may cut the first line.

```python
# When seeking to end - N bytes, the first line may be partial
def read_tail(file_path, max_bytes=1_000_000):
    with open(file_path, 'r') as f:
        f.seek(0, 2)  # End
        size = f.tell()

        if size > max_bytes:
            f.seek(size - max_bytes)
            f.readline()  # Discard partial first line
        else:
            f.seek(0)

        return f.readlines()
```

### 5. Non-Dict JSON Values

**Scenario:** Line contains valid JSON but not an object.

```python
# File might contain: 123, "string", [1,2,3], null
data = json.loads(line)
if not isinstance(data, dict):
    continue  # Skip non-object JSON
```

## Type Coercion Edge Cases

### Integer Conversion

```python
def safe_int(value):
    """Convert to int, rejecting booleans."""
    # Python: isinstance(True, int) == True, so check explicitly
    if isinstance(value, bool):
        return None
    if isinstance(value, int):
        return value
    if isinstance(value, float):
        return int(value)
    if isinstance(value, str):
        try:
            return int(value)
        except ValueError:
            return None
    return None
```

### Token Summation

```python
def sum_tokens(*values):
    """Sum token counts, handling None/missing."""
    valid = [v for v in values if isinstance(v, (int, float)) and not isinstance(v, bool)]
    return sum(valid) if valid else None
```

## Session State Edge Cases

### 1. Orphan Sessions

**Scenario:** Multiple sessions claim the same Zellij pane (e.g., after `--resume`).

**Resolution:** Keep the session with:
1. Highest priority: has `context_usage` (indicates real work)
2. Second priority: latest `conversation_mtime_ns`

```python
def dedupe_sessions(sessions):
    by_pane = {}
    for s in sessions:
        key = (s['zellij_session'], s['zellij_pane'])
        if key not in by_pane:
            by_pane[key] = s
        else:
            existing = by_pane[key]
            # Prefer session with context_usage
            if s.get('context_usage') and not existing.get('context_usage'):
                by_pane[key] = s
            elif s.get('conversation_mtime_ns', 0) > existing.get('conversation_mtime_ns', 0):
                by_pane[key] = s
    return list(by_pane.values())
```

### 2. Dead Session Detection

**Claude:** Check that the Zellij session still exists.

```python
def is_claude_dead(session):
    if session['status'] == 'starting':
        return False  # Benefit of the doubt

    zellij = session.get('zellij_session')
    if not zellij:
        return True

    # Check if the Zellij session exists
    result = subprocess.run(['zellij', 'list-sessions'], capture_output=True)
    return zellij not in result.stdout.decode()
```

**Codex:** Check whether any process has the transcript file open.

```python
def is_codex_dead(session):
    transcript = session.get('transcript_path')
    if not transcript:
        return True

    # lsof exits non-zero when no process has the file open
    result = subprocess.run(['lsof', transcript], capture_output=True)
    return result.returncode != 0
```

### 3. Stale Session Cleanup

```python
ORPHAN_AGE_HOURS = 24
STARTING_AGE_HOURS = 1

def should_cleanup(session, now):
    age = now - session['started_at']

    if session['status'] == 'starting' and age > timedelta(hours=STARTING_AGE_HOURS):
        return True  # Stuck in starting

    if session.get('is_dead') and age > timedelta(hours=ORPHAN_AGE_HOURS):
        return True  # Dead and old

    return False
```

## Tool Call Edge Cases

### 1. Missing Tool Results

**Scenario:** Session interrupted between `tool_use` and `tool_result`.

```python
def pair_tool_calls(messages):
    pending = {}  # tool_use_id -> tool_use

    for msg in messages:
        if msg['type'] == 'assistant':
            for block in msg['message'].get('content', []):
                if block.get('type') == 'tool_use':
                    pending[block['id']] = block

        elif msg['type'] == 'user':
            content = msg['message'].get('content', [])
            if isinstance(content, list):
                for block in content:
                    if block.get('type') == 'tool_result':
                        tool_id = block.get('tool_use_id')
                        if tool_id in pending:
                            pending[tool_id]['result'] = block

    # Any pending without result = interrupted
    incomplete = [t for t in pending.values() if 'result' not in t]
    return pending, incomplete
```

### 2. Parallel Tool Call Ordering

**Scenario:** Multiple `tool_use` blocks in one message; results may arrive in a different order.

```python
# Match by ID, not by position
tool_uses = [b for b in assistant_content if b['type'] == 'tool_use']
tool_results = [b for b in user_content if b['type'] == 'tool_result']

paired = {}
for result in tool_results:
    paired[result['tool_use_id']] = result

for use in tool_uses:
    result = paired.get(use['id'])
    # result may be None if missing
```

### 3. Tool Error Results

```python
def is_tool_error(result_block):
    return result_block.get('is_error', False)

def extract_error_message(result_block):
    content = result_block.get('content', '')
    # content may also be a list of blocks; only the string form is handled here
    if isinstance(content, str) and content.startswith('Error:'):
        return content
    return None
```

## Codex-Specific Edge Cases

### 1. Content Injection Filtering

Codex may include system context in messages that should be filtered out:

```python
SKIP_PREFIXES = [
    '<INSTRUCTIONS>',
    '<environment_context>',
    '<permissions instructions>',
    '# AGENTS.md instructions'
]

def should_skip_content(text):
    return any(text.startswith(prefix) for prefix in SKIP_PREFIXES)
```

### 2. Developer Role Filtering

```python
def parse_codex_message(payload):
    role = payload.get('role')
    if role == 'developer':
        return None  # Skip system/developer messages
    return payload
```

### 3. Function Call Arguments Parsing

```python
def parse_arguments(arguments):
    if isinstance(arguments, dict):
        return arguments
    if isinstance(arguments, str):
        try:
            return json.loads(arguments)
        except json.JSONDecodeError:
            return {'raw': arguments}
    return {}
```

### 4. Tool Call Buffering

Codex tool calls need to be buffered until the next message:

```python
class CodexParser:
    def __init__(self):
        self.pending_tools = []

    def process_entry(self, entry):
        payload = entry.get('payload', {})
        ptype = payload.get('type')

        if ptype == 'function_call':
            self.pending_tools.append({
                'name': payload['name'],
                # parse_arguments as defined above
                'input': parse_arguments(payload['arguments'])
            })
            return None  # Don't emit yet

        elif ptype == 'message' and payload.get('role') == 'assistant':
            msg = self.create_message(payload)
            if self.pending_tools:
                msg['tool_calls'] = self.pending_tools
                self.pending_tools = []
            return msg

        elif ptype == 'message' and payload.get('role') == 'user':
            # Flush pending tools before the user message
            msgs = []
            if self.pending_tools:
                msgs.append({'role': 'assistant', 'tool_calls': self.pending_tools})
                self.pending_tools = []
            msgs.append(self.create_message(payload))
            return msgs
```

## File System Edge Cases

### 1. Path Traversal Prevention

```python
import os

def validate_session_id(session_id):
    # Must be a basename only
    if os.path.basename(session_id) != session_id:
        raise ValueError("Invalid session ID")

    # No special characters
    if any(c in session_id for c in ['/', '\\', '..', '\x00']):
        raise ValueError("Invalid session ID")

def validate_project_path(project_path, base_dir):
    resolved = os.path.realpath(project_path)
    base = os.path.realpath(base_dir)

    if not resolved.startswith(base + os.sep):
        raise ValueError("Path traversal detected")
```

### 2. File Not Found

```python
def read_session_file(path):
    try:
        with open(path, 'r') as f:
            return f.read()
    except (FileNotFoundError, PermissionError, OSError):
        return None
```

### 3. Empty Files

```python
def parse_jsonl(path):
    with open(path, 'r') as f:
        content = f.read()

    if not content.strip():
        return []  # Empty file

    return [json.loads(line) for line in content.strip().split('\n') if line.strip()]
```

## Subprocess Edge Cases

### 1. Timeout Handling

```python
import subprocess

def run_with_timeout(cmd, timeout=5):
    try:
        result = subprocess.run(
            cmd,
            capture_output=True,
            timeout=timeout,
            text=True
        )
        return result.stdout
    except (subprocess.TimeoutExpired, FileNotFoundError, OSError):
        return None
```

### 2. ANSI Code Stripping

```python
import re

# Matches SGR (color/style) sequences like '\x1b[31m'
ANSI_PATTERN = re.compile(r'\x1b\[[0-9;]*m')

def strip_ansi(text):
    return ANSI_PATTERN.sub('', text)
```

## Cache Invalidation

### Mtime-Based Cache

```python
class FileCache:
    def __init__(self, max_size=100):
        self.cache = {}
        self.max_size = max_size

    def get(self, path):
        if path not in self.cache:
            return None

        entry = self.cache[path]
        try:
            stat = os.stat(path)
        except OSError:
            # File was deleted since caching
            del self.cache[path]
            return None

        # Invalidate if the file changed
        if stat.st_mtime_ns != entry['mtime_ns'] or stat.st_size != entry['size']:
            del self.cache[path]
            return None

        return entry['data']

    def set(self, path, data):
        # Evict the oldest entry if full
        if len(self.cache) >= self.max_size:
            oldest = next(iter(self.cache))
            del self.cache[oldest]

        stat = os.stat(path)
        self.cache[path] = {
            'mtime_ns': stat.st_mtime_ns,
            'size': stat.st_size,
            'data': data
        }
```

## Testing Edge Cases Checklist

- [ ] Empty JSONL file
- [ ] Single-line JSONL file
- [ ] Truncated JSON line
- [ ] Non-object JSON values (numbers, strings, arrays)
- [ ] Missing required fields
- [ ] Unknown message types
- [ ] Content as string vs array
- [ ] Boolean vs integer confusion
- [ ] Unicode in content
- [ ] Very long lines (>64KB)
- [ ] Concurrent file modifications
- [ ] Missing tool results
- [ ] Multiple tool calls in a single message
- [ ] Session without Zellij pane
- [ ] Codex developer messages
- [ ] Path traversal attempts
- [ ] Symlink escape attempts

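A few of these items can be pinned down with quick assertions against a tolerant line parser like the ones shown in the parsing section (a sketch, not the project's actual test suite):

```python
import json

def parse_jsonl_text(text):
    """Tolerantly parse JSONL text: skip blank, malformed, and non-object lines."""
    events = []
    for line in text.splitlines():
        if not line.strip():
            continue
        try:
            data = json.loads(line)
        except json.JSONDecodeError:
            continue
        if not isinstance(data, dict):
            continue
        events.append(data)
    return events

# Checklist cases: empty file, truncated line, non-object values
assert parse_jsonl_text("") == []
assert parse_jsonl_text('{"type": "user"\n') == []          # truncated JSON
assert parse_jsonl_text('123\n"str"\n[1,2]\nnull\n') == []  # non-object JSON
assert parse_jsonl_text('{"type": "summary"}\n') == [{"type": "summary"}]
```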
238 docs/claude-jsonl-reference/06-quick-reference.md (new file)
@@ -0,0 +1,238 @@

# Quick Reference

Cheat sheet for common Claude JSONL operations.

## File Locations

```bash
# Claude sessions
~/.claude/projects/-Users-user-projects-myapp/*.jsonl

# Codex sessions
~/.codex/sessions/**/*.jsonl

# Subagent transcripts
~/.claude/projects/.../session-id/subagents/agent-*.jsonl

# AMC session state
~/.local/share/amc/sessions/*.json
```

## Path Encoding

```python
# Encode: /Users/dev/myproject -> -Users-dev-myproject
# (the leading '/' becomes the leading '-')
encoded = project_path.replace('/', '-')

# Decode: -Users-dev-myproject -> /Users/dev/myproject
# Lossy if a directory name itself contains '-'
decoded = encoded.replace('-', '/')
```

## Message Type Quick ID

| If you see... | It's a... |
|---------------|-----------|
| `"type": "user"` + string content | User input |
| `"type": "user"` + array content | Tool results |
| `"type": "assistant"` | Claude response |
| `"type": "progress"` | Hook/tool execution |
| `"type": "summary"` | Session summary |
| `"type": "system"` | Metadata/commands |

## Content Block Quick ID

| Block Type | Key Fields |
|------------|------------|
| `text` | `text` |
| `thinking` | `thinking`, `signature` |
| `tool_use` | `id`, `name`, `input` |
| `tool_result` | `tool_use_id`, `content`, `is_error` |

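The first table translates directly into a small classifier. A sketch; the label strings are ours:

```python
def classify_event(event):
    """Map a parsed JSONL event to a human-readable label, per the table above."""
    etype = event.get("type")
    if etype == "user":
        content = event.get("message", {}).get("content")
        # Array content means tool results, string content means user input
        return "tool_results" if isinstance(content, list) else "user_input"
    return {
        "assistant": "claude_response",
        "progress": "hook_or_tool_execution",
        "summary": "session_summary",
        "system": "metadata_or_commands",
    }.get(etype, "unknown")

assert classify_event({"type": "user", "message": {"content": "hi"}}) == "user_input"
assert classify_event({"type": "user", "message": {"content": [{"type": "tool_result"}]}}) == "tool_results"
assert classify_event({"type": "assistant"}) == "claude_response"
```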
## jq Recipes

```bash
# Count messages by type
jq -s 'group_by(.type) | map({type: .[0].type, count: length})' session.jsonl

# Extract all tool calls
jq -c 'select(.type=="assistant") | .message.content[]? | select(.type=="tool_use")' session.jsonl

# Get user messages only
jq -c 'select(.type=="user" and (.message.content | type)=="string")' session.jsonl

# Sum tokens
jq -s '[.[].message.usage? | select(.) | .input_tokens + .output_tokens] | add' session.jsonl

# List tools used
jq -c 'select(.type=="assistant") | .message.content[]? | select(.type=="tool_use") | .name' session.jsonl | sort | uniq -c

# Find errors
jq -c 'select(.type=="user") | .message.content[]? | select(.type=="tool_result" and .is_error==true)' session.jsonl
```

## Python Snippets

### Read JSONL

```python
import json

def read_jsonl(path):
    with open(path) as f:
        for line in f:
            if line.strip():
                try:
                    yield json.loads(line)
                except json.JSONDecodeError:
                    continue
```

### Extract Conversation

```python
def extract_conversation(path):
    messages = []
    for event in read_jsonl(path):
        if event['type'] == 'user':
            content = event['message']['content']
            if isinstance(content, str):
                messages.append({'role': 'user', 'content': content})
        elif event['type'] == 'assistant':
            for block in event['message'].get('content', []):
                if block.get('type') == 'text':
                    messages.append({'role': 'assistant', 'content': block['text']})
    return messages
```

### Get Token Usage

```python
def get_token_usage(path):
    total_input = 0
    total_output = 0

    for event in read_jsonl(path):
        if event['type'] == 'assistant':
            usage = event.get('message', {}).get('usage', {})
            total_input += usage.get('input_tokens', 0)
            total_output += usage.get('output_tokens', 0)

    return {'input': total_input, 'output': total_output}
```

### Find Tool Calls

```python
def find_tool_calls(path):
    tools = []
    for event in read_jsonl(path):
        if event['type'] == 'assistant':
            for block in event['message'].get('content', []):
                if block.get('type') == 'tool_use':
                    tools.append({
                        'name': block['name'],
                        'id': block['id'],
                        'input': block['input']
                    })
    return tools
```

### Pair Tools with Results

```python
def pair_tools_results(path):
    pending = {}

    for event in read_jsonl(path):
        if event['type'] == 'assistant':
            for block in event['message'].get('content', []):
                if block.get('type') == 'tool_use':
                    pending[block['id']] = {'use': block, 'result': None}

        elif event['type'] == 'user':
            content = event['message'].get('content', [])
            if isinstance(content, list):
                for block in content:
                    if block.get('type') == 'tool_result':
                        tool_id = block['tool_use_id']
                        if tool_id in pending:
                            pending[tool_id]['result'] = block

    return pending
```

## Common Gotchas

| Gotcha | Solution |
|--------|----------|
| `content` can be string or array | Check `isinstance(content, str)` first |
| `usage` may be missing | Use `.get('usage', {})` |
| Booleans are ints in Python | Check `isinstance(v, bool)` before `isinstance(v, int)` |
| First line may be partial after seek | Call `readline()` to discard |
| Tool results in user messages | Check for `tool_result` type in array |
| Codex `arguments` is JSON string | Parse with `json.loads()` |
| Agent ID vs session ID | Agent ID survives rewrites, session ID is per-run |

## Status Values

| Field | Values |
|-------|--------|
| `status` | `starting`, `active`, `done` |
| `stop_reason` | `end_turn`, `max_tokens`, `tool_use`, null |
| `is_error` | `true`, `false` (tool results) |

## Token Fields

```python
# All possible token fields to sum
token_fields = [
    'input_tokens',
    'output_tokens',
    'cache_creation_input_tokens',
    'cache_read_input_tokens'
]

# Context window by model
context_windows = {
    'claude-opus': 200_000,
    'claude-sonnet': 200_000,
    'claude-haiku': 200_000,
    'claude-2': 100_000
}
```

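Combining the two structures above gives a percent-of-context estimate. A sketch; the bool guard mirrors the gotcha table, and the rounding is ours:

```python
TOKEN_FIELDS = ['input_tokens', 'output_tokens',
                'cache_creation_input_tokens', 'cache_read_input_tokens']
CONTEXT_WINDOWS = {'claude-opus': 200_000, 'claude-sonnet': 200_000,
                   'claude-haiku': 200_000, 'claude-2': 100_000}

def context_percent(usage, model):
    """Percent of the model's context window consumed by a usage record."""
    total = 0
    for field in TOKEN_FIELDS:
        v = usage.get(field)
        # Reject booleans: isinstance(True, int) is True in Python
        if isinstance(v, (int, float)) and not isinstance(v, bool):
            total += v
    window = CONTEXT_WINDOWS.get(model)
    if not window:
        return None
    return round(100 * total / window, 1)

assert context_percent({'input_tokens': 150_000, 'output_tokens': 50_000}, 'claude-sonnet') == 100.0
```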
## Useful Constants

```python
import os

# File locations
CLAUDE_BASE = os.path.expanduser('~/.claude/projects')
CODEX_BASE = os.path.expanduser('~/.codex/sessions')
AMC_BASE = os.path.expanduser('~/.local/share/amc')

# Read limits
MAX_TAIL_BYTES = 1_000_000  # 1MB
MAX_LINES = 400  # For context extraction

# Timeouts
SUBPROCESS_TIMEOUT = 5  # seconds
SPAWN_COOLDOWN = 30  # seconds

# Session ages
ACTIVE_THRESHOLD_MINUTES = 2
ORPHAN_CLEANUP_HOURS = 24
STARTING_CLEANUP_HOURS = 1
```

## Debugging Commands

```bash
# Watch session file changes
tail -f ~/.claude/projects/-path-to-project/*.jsonl | jq -c

# Find the latest session
ls -t ~/.claude/projects/-path-to-project/*.jsonl | head -1

# Count lines in a session
wc -l session.jsonl

# Validate JSON (read -r preserves backslashes)
while IFS= read -r line; do echo "$line" | jq . > /dev/null || echo "Invalid: $line"; done < session.jsonl

# Pretty-print the last message
tail -1 session.jsonl | jq .
```

57 docs/claude-jsonl-reference/README.md (new file)
@@ -0,0 +1,57 @@

# Claude JSONL Session Log Reference

Comprehensive documentation for parsing and processing Claude Code JSONL session logs in the AMC project.

## Overview

Claude Code stores all conversations as JSONL (JSON Lines) files — one JSON object per line. This documentation provides authoritative specifications for:

- Message envelope structure and common fields
- All message types (user, assistant, progress, system, summary, etc.)
- Content block types (text, tool_use, tool_result, thinking)
- Tool call lifecycle and result handling
- Subagent spawn and team coordination formats
- Edge cases, error handling, and recovery patterns

## Documents

| Document | Purpose |
|----------|---------|
| [01-format-specification.md](./01-format-specification.md) | Complete JSONL format spec with all fields |
| [02-message-types.md](./02-message-types.md) | Every message type with concrete examples |
| [03-tool-lifecycle.md](./03-tool-lifecycle.md) | Tool call flow from invocation to result |
| [04-subagent-teams.md](./04-subagent-teams.md) | Subagent and team message formats |
| [05-edge-cases.md](./05-edge-cases.md) | Error handling, malformed input, recovery |
| [06-quick-reference.md](./06-quick-reference.md) | Cheat sheet for common operations |

## File Locations

| Content | Location |
|---------|----------|
| Claude sessions | `~/.claude/projects/<encoded-path>/<session-id>.jsonl` |
| Codex sessions | `~/.codex/sessions/**/<session-id>.jsonl` |
| Subagent transcripts | `~/.claude/projects/<path>/<session-id>/subagents/agent-<id>.jsonl` |
| AMC session state | `~/.local/share/amc/sessions/<session-id>.json` |
| AMC event logs | `~/.local/share/amc/events/<session-id>.jsonl` |

## Path Encoding

Project paths are encoded by replacing `/` with `-`, so the leading slash becomes a leading dash:

- `/Users/taylor/projects/amc` → `-Users-taylor-projects-amc`

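As a runnable sketch of this mapping (note the reverse direction is lossy for directory names that themselves contain `-`):

```python
def encode_project_path(path):
    # '/Users/taylor/projects/amc' -> '-Users-taylor-projects-amc'
    return path.replace("/", "-")

def decode_project_path(encoded):
    # Lossy if a directory name itself contains '-'
    return encoded.replace("-", "/")

assert encode_project_path("/Users/taylor/projects/amc") == "-Users-taylor-projects-amc"
assert decode_project_path("-Users-taylor-projects-amc") == "/Users/taylor/projects/amc"
```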
## Key Principles

1. **NDJSON format** — Each line is complete, parseable JSON
2. **Append-only** — Sessions are written incrementally
3. **UUID linking** — Messages link via `uuid` and `parentUuid`
4. **Graceful degradation** — Always handle missing/unknown fields
5. **Type safety** — Validate types before use (arrays vs strings, etc.)

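Principle 3 (UUID linking) in miniature. A sketch over hypothetical events:

```python
def thread_of(events, leaf_uuid):
    """Walk parentUuid links from a leaf event back to the root."""
    by_uuid = {e["uuid"]: e for e in events if "uuid" in e}
    chain, cur = [], by_uuid.get(leaf_uuid)
    while cur is not None:
        chain.append(cur)
        cur = by_uuid.get(cur.get("parentUuid"))
    return list(reversed(chain))  # root first

events = [
    {"uuid": "a", "parentUuid": None},
    {"uuid": "b", "parentUuid": "a"},
    {"uuid": "c", "parentUuid": "b"},
]
assert [e["uuid"] for e in thread_of(events, "c")] == ["a", "b", "c"]
```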
## Sources

- [Claude Code Hooks Reference](https://code.claude.com/docs/en/hooks.md)
- [Claude Code Headless Documentation](https://code.claude.com/docs/en/headless.md)
- [Anthropic Messages API Reference](https://docs.anthropic.com/en/api/messages)
- [Inside Claude Code: Session File Format](https://medium.com/@databunny/inside-claude-code-the-session-file-format-and-how-to-inspect-it-b9998e66d56b)
- [Community: claude-code-log](https://github.com/daaain/claude-code-log)
- [Community: claude-JSONL-browser](https://github.com/withLinda/claude-JSONL-browser)

456 plans/PLAN-tool-result-display.md (new file)
@@ -0,0 +1,456 @@

# Plan: Tool Result Display in AMC Dashboard

> **Status:** Draft — awaiting review and mockup phase
> **Author:** Claude + Taylor
> **Created:** 2026-02-27

## Summary

Add the ability to view tool call results (diffs, bash output, file contents) directly in the AMC dashboard conversation view. Currently, users see that a tool was called but cannot see what it did. This feature brings Claude Code's result visibility to the multi-agent dashboard.
### Goals

1. **See code changes as they happen** — diffs from Edit/Write tools always visible
2. **Debug agent behavior** — inspect Bash output, Read content, search results
3. **Match Claude Code UX** — familiar expand/collapse behavior with latest results expanded

### Non-Goals (v1)

- Codex agent support (different JSONL format — deferred to v2)
- Copy-to-clipboard functionality
- Virtual scrolling / performance optimization
- Editor integration (clicking paths to open files)
---

## User Workflows

### Workflow 1: Watching an Active Session

1. User opens a session card showing an active Claude agent
2. Agent calls Edit tool to modify a file
3. User immediately sees the diff expanded below the tool call pill
4. Agent calls Bash to run tests
5. User sees bash output expanded; the previous Edit diff stays expanded (it's a diff)
6. Agent sends a text message explaining results
7. Bash output collapses (new assistant message arrived); the Edit diff stays expanded

### Workflow 2: Reviewing a Completed Session

1. User opens a completed session to review what the agent did
2. All tool calls are collapsed by default (no "latest" assistant message)
3. Exception: Edit/Write diffs are still expanded
4. User clicks a Bash tool call to see what command ran and its output
5. User clicks "Show full output" when output is truncated
6. Lightweight modal opens with full scrollable content
7. User closes modal and continues reviewing

### Workflow 3: Debugging a Failed Tool Call

1. Agent runs a Bash command that fails
2. Tool result block shows with red-tinted background
3. stderr content is visible, clearly marked as error
4. User can see what went wrong without leaving the dashboard
---

## Acceptance Criteria

### Display Behavior

- **AC-1:** Tool calls render as expandable elements showing tool name and summary
- **AC-2:** Clicking a collapsed tool call expands it to show its result
- **AC-3:** Clicking an expanded tool call collapses it
- **AC-4:** Tool results in the most recent assistant message are expanded by default
- **AC-5:** When a new assistant message arrives, previous tool results collapse
- **AC-6:** Edit and Write tool diffs remain expanded regardless of message age
- **AC-7:** Tool calls without results display as non-expandable with muted styling

### Diff Rendering

- **AC-8:** Edit/Write results display `structuredPatch` data as a syntax-highlighted diff
- **AC-9:** Diff additions render with the VS Code dark theme green background (`rgba(46, 160, 67, 0.15)`)
- **AC-10:** Diff deletions render with the VS Code dark theme red background (`rgba(248, 81, 73, 0.15)`)
- **AC-11:** The full file path displays above each diff block
- **AC-12:** Diff context lines use `structuredPatch` as-is (no recomputation)

### Other Tool Types

- **AC-13:** Bash results display stdout in monospace, stderr separately if present
- **AC-14:** Read results display file content with syntax highlighting based on file extension
- **AC-15:** Grep/Glob results display a file list with match counts
- **AC-16:** WebFetch results display the URL and a response summary

### Truncation

- **AC-17:** Long outputs truncate at thresholds matching Claude Code behavior
- **AC-18:** Truncated outputs show a "Show full output (N lines)" link
- **AC-19:** Clicking "Show full output" opens a dedicated lightweight modal
- **AC-20:** The modal displays full content with syntax highlighting, scrollable

### Error States

- **AC-21:** Failed tool calls display with a red-tinted background
- **AC-22:** Error content (stderr, error messages) is clearly distinguishable from success content
- **AC-23:** The `is_error` flag from `tool_result` determines the error state

### API Contract

- **AC-24:** The `/api/conversation` response includes tool results nested in `tool_calls`
- **AC-25:** Each `tool_call` has: `name`, `id`, `input`, `result` (when available)
- **AC-26:** Result structure varies by tool type (documented in IMP-SERVER)
---

## Architecture

### Why Two-Pass JSONL Parsing

The Claude Code JSONL stores tool_use and tool_result as separate entries linked by `tool_use_id`. To nest results inside tool_calls for the API response, the server must:

1. First pass: Build a map of `tool_use_id` → `toolUseResult`
2. Second pass: Parse messages, attaching results to matching tool_calls

This adds parsing overhead but keeps the API contract simple. Alternatives considered:

- **Streaming/incremental:** More complex, and doesn't help since we need the full conversation anyway
- **Client-side joining:** Shifts complexity to the frontend, increases payload size
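The two passes can be sketched as follows. This is illustrative only, not the real `_parse_claude_conversation`; it assumes entries shaped as in this plan's appendix:

```python
def attach_tool_results(entries: list[dict]) -> list[dict]:
    """Two-pass join: nest each toolUseResult under its tool_use block."""
    # Pass 1: map tool_use_id -> toolUseResult from user entries
    results: dict[str, dict] = {}
    for entry in entries:
        if entry.get("type") != "user":
            continue
        content = entry.get("message", {}).get("content") or []
        if not isinstance(content, list):
            continue  # type safety: content may be a plain string
        for block in content:
            if isinstance(block, dict) and block.get("type") == "tool_result":
                payload = entry.get("toolUseResult") or {"content": block.get("content")}
                results[block.get("tool_use_id")] = payload

    # Pass 2: emit assistant tool calls, attaching results when present
    tool_calls = []
    for entry in entries:
        if entry.get("type") != "assistant":
            continue
        for block in entry.get("message", {}).get("content") or []:
            if isinstance(block, dict) and block.get("type") == "tool_use":
                call = {"name": block.get("name"), "id": block.get("id"),
                        "input": block.get("input", {})}
                if block.get("id") in results:
                    call["result"] = results[block["id"]]
                tool_calls.append(call)
    return tool_calls
```

A tool_call with no matching result simply lacks the `result` key, which is the graceful-degradation behavior the plan calls for.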
### Why Render Everything, Not Virtual Scroll

Sessions typically have 20-80 tool calls. Modern browsers handle hundreds of DOM elements efficiently. Virtual scrolling adds significant complexity (measuring, windowing, scroll position management) for marginal benefit.

Decision: Ship simple, measure real-world performance, optimize if >100ms render times are observed.
### Why Dedicated Modal Over Inline Expansion

Full output can be thousands of lines. Inline expansion would:

- Push other content out of view
- Make scrolling confusing
- Lose the context of the surrounding conversation

A modal provides a focused reading experience without disrupting the conversation layout.
### Component Structure

```
MessageBubble
├── Content (text)
├── Thinking (existing)
└── ToolCallList (new)
    └── ToolCallItem (repeated)
        ├── Header (pill: chevron, name, summary, status)
        └── ResultContent (conditional)
            ├── DiffResult (for Edit/Write)
            ├── BashResult (for Bash)
            ├── FileListResult (for Glob/Grep)
            └── GenericResult (fallback)

FullOutputModal (new, top-level)
├── Header (tool name, file path)
├── Content (full output, scrollable)
└── CloseButton
```
---

## Implementation Specifications

### IMP-SERVER: Parse and Attach Tool Results

**Fulfills:** AC-24, AC-25, AC-26

**Location:** `amc_server/mixins/conversation.py`

**Changes to `_parse_claude_conversation`:**

Two-pass parsing:
1. First pass: Scan all entries, build a map of `tool_use_id` → `toolUseResult`
2. Second pass: Parse messages as before, but when encountering `tool_use`, look up and attach the result

**Tool call schema after change:**

```python
{
    "name": "Edit",
    "id": "toolu_abc123",
    "input": {"file_path": "...", "old_string": "...", "new_string": "..."},
    "result": {
        "content": "The file has been updated successfully.",
        "is_error": False,
        "structuredPatch": [...],
        "filePath": "...",
        # ... other fields from toolUseResult
    }
}
```

**Result Structure by Tool Type:**

| Tool | Result Fields |
|------|---------------|
| Edit | `structuredPatch`, `filePath`, `oldString`, `newString` |
| Write | `filePath`, content confirmation |
| Read | `file`, `type`, content in `content` field |
| Bash | `stdout`, `stderr`, `interrupted` |
| Glob | `filenames`, `numFiles`, `truncated` |
| Grep | `content`, `filenames`, `numFiles`, `numLines` |

---
### IMP-TOOLCALL: Expandable Tool Call Component

**Fulfills:** AC-1, AC-2, AC-3, AC-4, AC-5, AC-6, AC-7

**Location:** `dashboard/lib/markdown.js` (refactor `renderToolCalls`)

**New function: `ToolCallItem`**

Renders a single tool call with:
- Chevron for expand/collapse (when a result exists and the tool is not Edit/Write)
- Tool name (bold, colored)
- Summary (from existing `getToolSummary`)
- Status icon (checkmark or X)
- Result content (when expanded)

**State Management:**

Track expanded state per message. When a new assistant message arrives:
- Compare the latest assistant message ID to the stored ID
- If different, reset the expanded set to empty
- Edit/Write tools bypass this logic (always expanded via CSS/logic)

---
### IMP-DIFF: Diff Rendering Component

**Fulfills:** AC-8, AC-9, AC-10, AC-11, AC-12

**Location:** `dashboard/lib/markdown.js` (new function `renderDiff`)

**Add diff language to highlight.js:**

```javascript
import langDiff from 'https://esm.sh/highlight.js@11.11.1/lib/languages/diff';
hljs.registerLanguage('diff', langDiff);
```

**Diff Renderer:**

1. Convert the `structuredPatch` array to unified diff text:
   - Each hunk: `@@ -oldStart,oldLines +newStart,newLines @@`
   - Followed by the hunk's `lines` array
2. Syntax highlight with the hljs diff language
3. Sanitize with DOMPurify before rendering
4. Wrap in a container with a file path header
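Step 1 can be sketched as follows, in Python for illustration (the real `renderDiff` is JavaScript). It assumes each hunk carries `oldStart`/`oldLines`/`newStart`/`newLines` and a `lines` list whose entries are already prefixed with ` `, `-`, or `+`:

```python
def patch_to_unified(structured_patch: list[dict]) -> str:
    """Render a structuredPatch array as unified diff text (sketch)."""
    out = []
    for hunk in structured_patch:
        # Hunk header in standard unified-diff form
        out.append(
            f"@@ -{hunk['oldStart']},{hunk['oldLines']} "
            f"+{hunk['newStart']},{hunk['newLines']} @@"
        )
        # Lines are emitted as-is: context/removal/addition prefixes included
        out.extend(hunk.get("lines", []))
    return "\n".join(out)
```

The resulting text is what gets handed to the diff highlighter, so no line-level markup needs to be generated by hand.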
**CSS styling:**
- Container: dark border, rounded corners
- Header: muted background, monospace font, full file path
- Content: monospace, horizontal scroll
- Additions: `background: rgba(46, 160, 67, 0.15)`
- Deletions: `background: rgba(248, 81, 73, 0.15)`

---
### IMP-BASH: Bash Output Component

**Fulfills:** AC-13, AC-21, AC-22

**Location:** `dashboard/lib/markdown.js` (new function `renderBashResult`)

Renders:
- `stdout` in a monospace pre block
- `stderr` in a separate block with error styling (if present)
- A "Command interrupted" notice (if the `interrupted` flag is set)

Error state: `is_error` or the presence of stderr triggers error styling (red tint, left border).

---
### IMP-TRUNCATE: Output Truncation

**Fulfills:** AC-17, AC-18

**Truncation Thresholds (match Claude Code):**

| Tool Type | Max Lines | Max Chars |
|-----------|-----------|-----------|
| Bash stdout | 100 | 10000 |
| Bash stderr | 50 | 5000 |
| Read content | 500 | 50000 |
| Grep matches | 100 | 10000 |
| Glob files | 100 | 5000 |

**Note:** These thresholds need verification against Claude Code behavior and may require adjustment based on testing.

**Truncation Helper:**

Takes a content string, returns `{ text, truncated, totalLines }`. If truncated, result renderers show a "Show full output (N lines)" link.
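A sketch of that helper's logic in Python (the dashboard version is JavaScript); the default thresholds here are taken from the Bash stdout row of the table above, which is itself still unverified:

```python
def truncate_output(content: str, max_lines: int = 100, max_chars: int = 10000) -> dict:
    """Return {text, truncated, totalLines} for a tool output string."""
    lines = content.split("\n")
    total_lines = len(lines)
    # Cap by line count first, then by character count
    text = "\n".join(lines[:max_lines])[:max_chars]
    truncated = total_lines > max_lines or len(content) > max_chars
    return {"text": text, "truncated": truncated, "totalLines": total_lines}
```

`totalLines` is what feeds the "Show full output (N lines)" label, so it reports the untruncated count.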
---

### IMP-MODAL: Full Output Modal

**Fulfills:** AC-19, AC-20

**Location:** `dashboard/components/FullOutputModal.js` (new file)

**Structure:**
- Overlay (click to close)
- Modal container (click does NOT close)
- Header: title (tool name + file path), close button
- Content: scrollable pre/code block with syntax highlighting

**Integration:** Modal state is managed at the App or ChatMessages level. The "Show full output" link sets the state with content + metadata.

---
### IMP-ERROR: Error State Styling

**Fulfills:** AC-21, AC-22, AC-23

**Styling:**
- Tool call header: red-tinted background when `result.is_error`
- Status icon: red X instead of green checkmark
- Bash stderr: red text, italic, distinct from stdout
- Overall: left border accent in the error color

---
## Rollout Slices

### Slice 1: Design Mockups (Pre-Implementation)

**Goal:** Validate the visual design before building

**Deliverables:**
1. Create a `/mockups` test route with static data
2. Implement 3-4 design variants (card-based, minimal, etc.)
3. Use real tool result data from session JSONL
4. User reviews and selects the preferred design

**Exit Criteria:** Design direction locked

---

### Slice 2: Server-Side Tool Result Parsing

**Goal:** API returns tool results nested in tool_calls

**Deliverables:**
1. Two-pass parsing in `_parse_claude_conversation`
2. Tool results attached with an `id` field
3. Unit tests for result attachment
4. Handle missing results gracefully (return the tool_call without a result)

**Exit Criteria:** AC-24, AC-25, AC-26 pass

---

### Slice 3: Basic Expand/Collapse UI

**Goal:** Tool calls are expandable and show raw result content

**Deliverables:**
1. Refactor `renderToolCalls` into a `ToolCallList` component
2. Implement expand/collapse with a chevron
3. Track expanded state per message
4. Collapse on new assistant message
5. Keep Edit/Write always expanded

**Exit Criteria:** AC-1 through AC-7 pass

---

### Slice 4: Diff Rendering

**Goal:** Edit/Write show beautiful diffs

**Deliverables:**
1. Add the diff language to highlight.js
2. Implement the `renderDiff` function
3. VS Code dark theme styling
4. Full file path header

**Exit Criteria:** AC-8 through AC-12 pass

---

### Slice 5: Other Tool Types

**Goal:** Bash, Read, Glob, Grep render appropriately

**Deliverables:**
1. `renderBashResult` with stdout/stderr separation
2. `renderFileContent` for Read
3. `renderFileList` for Glob/Grep
4. A generic fallback for unknown tools

**Exit Criteria:** AC-13 through AC-16 pass

---

### Slice 6: Truncation and Modal

**Goal:** Long outputs truncate with modal expansion

**Deliverables:**
1. Truncation helper with Claude Code thresholds
2. "Show full output" link
3. `FullOutputModal` component
4. Syntax highlighting in the modal

**Exit Criteria:** AC-17 through AC-20 pass

---

### Slice 7: Error States and Polish

**Goal:** Failed tools are visually distinct; edge cases are handled

**Deliverables:**
1. Error state styling (red tint)
2. Muted styling for missing results
3. Test with interrupted sessions
4. Cross-browser testing

**Exit Criteria:** AC-21 through AC-23 pass; feature complete

---
## Open Questions

1. **Exact Claude Code truncation thresholds** — need to verify against the Claude Code source or experiment
2. **Performance with 100+ tool calls** — monitor after ship, optimize if needed
3. **Codex support timeline** — when should we prioritize v2?

---
## Appendix: Research Findings

### Claude Code JSONL Format

Tool calls and results are stored as separate entries:

```json
// Assistant sends tool_use
{"type": "assistant", "message": {"content": [{"type": "tool_use", "id": "toolu_abc", "name": "Edit", "input": {...}}]}}

// Result in a separate user entry
{"type": "user", "message": {"content": [{"type": "tool_result", "tool_use_id": "toolu_abc", "content": "Success"}]}, "toolUseResult": {...}}
```

The `toolUseResult` object contains rich structured data that varies by tool type.
### Missing Results Statistics

Across 55 sessions with 2,063 tool calls:
- 11 missing results (0.5%)
- Affected tools: Edit (4), Read (2), Bash (1), others

### Interrupt Handling

User interrupts create a separate user message:

```json
{"type": "user", "message": {"content": [{"type": "text", "text": "[Request interrupted by user for tool use]"}]}}
```

Tool results for completed tools are still present; the interrupt message indicates the turn ended early.
@@ -76,7 +76,7 @@ Add the ability to spawn new agent sessions (Claude or Codex) from the AMC dashb
 - **AC-13:** The spawned pane's cwd is set to the project directory
 - **AC-14:** Spawned panes are named `{agent_type}-{project}` (e.g., "claude-amc")
-- **AC-15:** The spawned agent appears in the dashboard within 5 seconds of spawn
+- **AC-15:** The spawned agent appears in the dashboard within 10 seconds of spawn

 ### Session Discovery
@@ -95,6 +95,9 @@ Add the ability to spawn new agent sessions (Claude or Codex) from the AMC dashb
 - **AC-22:** Server validates project path is within `~/projects/` (resolves symlinks)
 - **AC-23:** Server rejects path traversal attempts in project parameter
 - **AC-24:** Server binds to localhost only (127.0.0.1), not exposed to network
+- **AC-37:** Server generates a one-time auth token on startup and injects it into dashboard HTML
+- **AC-38:** `/api/spawn` requires valid auth token in `Authorization` header
+- **AC-39:** CORS headers are consistent across all endpoints (`Access-Control-Allow-Origin: *`); localhost-only binding (AC-24) is the security boundary

 ### Spawn Request Lifecycle
@@ -102,7 +105,7 @@ Add the ability to spawn new agent sessions (Claude or Codex) from the AMC dashb
 - **AC-26:** If the target Zellij session does not exist, spawn fails with error "Zellij session 'infra' not found"
 - **AC-27:** Server generates a unique `spawn_id` and passes it to the agent via `AMC_SPAWN_ID` env var
 - **AC-28:** `amc-hook` writes `spawn_id` to session file when present in environment
-- **AC-29:** Spawn request polls for session file containing the specific `spawn_id` (max 5 second wait)
+- **AC-29:** Spawn request polls for session file containing the specific `spawn_id` (max 10 second wait)
 - **AC-30:** Concurrent spawn requests are serialized via a lock to prevent Zellij race conditions

 ### Modal Behavior
@@ -114,6 +117,17 @@ Add the ability to spawn new agent sessions (Claude or Codex) from the AMC dashb
 - **AC-33:** Projects list is loaded on server start and cached in memory
 - **AC-34:** Projects list can be refreshed via `POST /api/projects/refresh`
+- **AC-40:** Projects list auto-refreshes every 5 minutes in background thread
+
+### Rate Limiting
+
+- **AC-35:** Spawn requests for the same project are throttled to 1 per 10 seconds
+- **AC-36:** Rate limit errors return `RATE_LIMITED` code with retry-after hint
+
+### Health Check
+
+- **AC-41:** `GET /api/health` returns server status including Zellij session availability
+- **AC-42:** Dashboard shows warning banner when Zellij session is unavailable

 ---
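The retry-after hint promised by AC-36 is simple cooldown arithmetic. A hypothetical standalone helper (the plan computes this inline in its spawn handler):

```python
from typing import Optional

SPAWN_COOLDOWN_SEC = 10.0


def check_cooldown(last_spawn: float, now: float) -> Optional[int]:
    """Return seconds to wait (rounded up, at least 1) if the per-project
    cooldown is still active, else None."""
    elapsed = now - last_spawn
    if elapsed < SPAWN_COOLDOWN_SEC:
        # Round up so the client never retries a moment too early
        return int(SPAWN_COOLDOWN_SEC - elapsed) + 1
    return None
```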
@@ -183,7 +197,7 @@ Add the ability to spawn new agent sessions (Claude or Codex) from the AMC dashb
 7. **Server → Zellij:** `new-pane --cwd <path> -- <agent command>` with `AMC_SPAWN_ID` env var
 8. **Zellij:** Pane created, agent process starts
 9. **Agent → Hook:** `amc-hook` fires on `SessionStart`, writes session JSON including `spawn_id` from env
-10. **Server:** Poll for session file containing matching `spawn_id` (up to 5 seconds)
+10. **Server:** Poll for session file containing matching `spawn_id` (up to 10 seconds)
 11. **Server → Dashboard:** Return success only after session file with `spawn_id` detected
 12. **Server:** Release spawn lock
@@ -217,6 +231,16 @@ Response (error):
 }
 ```
+
+Response (rate limited - AC-35, AC-36):
+```json
+{
+  "ok": false,
+  "error": "Rate limited. Try again in 8 seconds.",
+  "code": "RATE_LIMITED",
+  "retry_after": 8
+}
+```

 **GET /api/projects**

 Response:
@@ -254,11 +278,11 @@ Response:
 - Contains: session_id, project, status, zellij_session, zellij_pane, etc.
 - **Spawn correlation:** If `AMC_SPAWN_ID` env var is set, hook includes it in session JSON

-**Codex agents** are discovered dynamically by `SessionDiscoveryMixin`:
-- Scans `~/.codex/sessions/` for recently-modified `.jsonl` files
-- Extracts Zellij pane info via process inspection (`pgrep`, `lsof`)
-- Creates/updates session JSON in `~/.local/share/amc/sessions/`
-- **Spawn correlation:** Codex discovery checks for `AMC_SPAWN_ID` in process environment
+**Codex agents** are discovered via the same hook mechanism as Claude:
+- When spawned with `AMC_SPAWN_ID` env var, the hook writes spawn_id to session JSON
+- Existing `SessionDiscoveryMixin` also scans `~/.codex/sessions/` as fallback
+- **Spawn correlation:** Hook has direct access to env var and writes spawn_id
+- Note: Process inspection (`pgrep`, `lsof`) is used for non-spawned agents only

 **Prerequisite:** The `amc-hook` must be installed in Claude Code's hooks configuration. See `~/.claude/hooks/` or Claude Code settings.
@@ -293,26 +317,89 @@ ZELLIJ_SESSION = "infra"

 # Lock for serializing spawn operations (prevents Zellij race conditions)
 _spawn_lock = threading.Lock()
+
+# Rate limiting: track last spawn time per project (prevents spam)
+_spawn_timestamps: dict[str, float] = {}
+SPAWN_COOLDOWN_SEC = 10.0
+
+# Auth token for spawn endpoint (AC-37, AC-38)
+# Generated on server start, injected into dashboard HTML
+_auth_token: str = ""
+
+
+def generate_auth_token():
+    """Generate a one-time auth token for this server instance."""
+    global _auth_token
+    import secrets
+    _auth_token = secrets.token_urlsafe(32)
+    return _auth_token
+
+
+def validate_auth_token(request_token: str) -> bool:
+    """Validate the Authorization header token."""
+    return request_token == f"Bearer {_auth_token}"
+
+
+def start_projects_watcher():
+    """Start background thread to refresh projects cache every 5 minutes (AC-40)."""
+    import logging
+    import threading
+    from amc_server.mixins.spawn import load_projects_cache
+
+    def _watch_loop():
+        import time
+        while True:
+            try:
+                time.sleep(300)  # 5 minutes
+                load_projects_cache()
+            except Exception:
+                logging.exception("Projects cache refresh failed")
+
+    thread = threading.Thread(target=_watch_loop, daemon=True)
+    thread.start()
 ```

 ---
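A standalone sketch of the auth-token scheme introduced above. The class wrapper and the `secrets.compare_digest` comparison are illustrative additions; the plan itself uses module-level state and a plain `==`:

```python
import secrets


class SpawnAuth:
    """Sketch of the one-time token scheme (AC-37/AC-38)."""

    def __init__(self) -> None:
        # One token per server instance, injected into dashboard HTML
        self.token = secrets.token_urlsafe(32)

    def validate(self, authorization_header: str) -> bool:
        # compare_digest avoids timing side channels that `==` can leak
        expected = f"Bearer {self.token}"
        return secrets.compare_digest(authorization_header, expected)
```

Since the dashboard HTML is only served on localhost (AC-24), the token never travels beyond the loopback interface.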
-### IMP-1: SpawnMixin for Server (fulfills AC-8, AC-9, AC-10, AC-11, AC-12, AC-13, AC-14, AC-18, AC-19, AC-22, AC-23, AC-25, AC-26, AC-29, AC-30)
+### IMP-0b: Auth Token Verification in SpawnMixin (fulfills AC-38)
+
+Add to the beginning of `_handle_spawn()`:
+
+```python
+# Verify auth token (AC-38)
+auth_header = self.headers.get("Authorization", "")
+if not validate_auth_token(auth_header):
+    self._send_json(401, {"ok": False, "error": "Unauthorized", "code": "UNAUTHORIZED"})
+    return
+```
+
+---
+
+### IMP-1: SpawnMixin for Server (fulfills AC-8, AC-9, AC-10, AC-11, AC-12, AC-13, AC-14, AC-18, AC-19, AC-22, AC-23, AC-24, AC-26-AC-30, AC-33-AC-36)

 **File:** `amc_server/mixins/spawn.py`

 **Integration notes:**
 - Uses `_send_json()` from HttpMixin (not a new `_json_response`)
 - Uses inline JSON body parsing (same pattern as `control.py:33-47`)
-- PROJECTS_DIR and ZELLIJ_SESSION come from context.py (centralized constants)
-- Session file polling watches SESSIONS_DIR for any new .json by mtime
+- PROJECTS_DIR, ZELLIJ_SESSION, `_spawn_lock`, `_spawn_timestamps`, `SPAWN_COOLDOWN_SEC` come from context.py
+- **Deterministic correlation:** Generates `spawn_id`, passes via env var, polls for matching session file
+- **Concurrency safety:** Acquires `_spawn_lock` around Zellij operations to prevent race conditions
+- **Rate limiting:** Per-project cooldown prevents spawn spam (AC-35, AC-36)
+- **Symlink safety:** Resolves project path and verifies it's still under PROJECTS_DIR
+- **TOCTOU mitigation:** Validation returns resolved path; caller uses it directly (no re-resolution)
+- **Env var propagation:** Uses shell wrapper to guarantee `AMC_SPAWN_ID` reaches agent process

 ```python
 import json
+import os
 import subprocess
 import time
+import uuid

-from amc_server.context import PROJECTS_DIR, SESSIONS_DIR, ZELLIJ_BIN, ZELLIJ_SESSION
+from amc_server.context import (
+    PROJECTS_DIR, SESSIONS_DIR, ZELLIJ_BIN, ZELLIJ_SESSION,
+    _spawn_lock, _spawn_timestamps, SPAWN_COOLDOWN_SEC,
+)

 # Agent commands (AC-8, AC-9: full autonomous permissions)
 AGENT_COMMANDS = {
@@ -320,7 +407,7 @@ AGENT_COMMANDS = {
     "codex": ["codex", "--dangerously-bypass-approvals-and-sandbox"],
 }

-# Module-level cache for projects list (AC-29)
+# Module-level cache for projects list (AC-33)
 _projects_cache: list[str] = []
@@ -355,44 +442,104 @@ class SpawnMixin:
         project = body.get("project", "").strip()
         agent_type = body.get("agent_type", "claude").strip()

-        # Validation
-        error = self._validate_spawn_params(project, agent_type)
-        if error:
-            self._send_json(400, {"ok": False, "error": error["message"], "code": error["code"]})
-            return
-
-        project_path = PROJECTS_DIR / project
-
-        # Ensure tab exists, then spawn pane, then wait for session file
-        result = self._spawn_agent_in_project_tab(project, project_path, agent_type)
+        # Validation returns resolved path to avoid TOCTOU
+        validation = self._validate_spawn_params(project, agent_type)
+        if "error" in validation:
+            self._send_json(400, {"ok": False, "error": validation["error"], "code": validation["code"]})
+            return
+
+        resolved_path = validation["resolved_path"]
+
+        # Generate spawn_id for deterministic correlation (AC-27)
+        spawn_id = str(uuid.uuid4())
+
+        # Acquire lock to serialize Zellij operations (AC-30)
+        # NOTE: Rate limiting check is INSIDE lock to prevent race condition where
+        # two concurrent requests both pass the cooldown check before either updates timestamp
+        # Use timeout to prevent indefinite blocking if lock is held by hung thread
+        acquire_start = time.time()
+        if not _spawn_lock.acquire(timeout=15.0):
+            self._send_json(503, {
+                "ok": False,
+                "error": "Server busy, try again shortly",
+                "code": "SERVER_BUSY",
+            })
+            return
+
+        try:
+            # Log lock contention for debugging (timer starts before acquire,
+            # so this measures actual wait time)
+            acquire_time = time.time() - acquire_start
+            if acquire_time > 1.0:
+                import logging
+                logging.warning(f"Spawn lock contention: waited {acquire_time:.1f}s for {project}")
+
+            # Rate limiting per project (AC-35, AC-36) - must be inside lock
+            now = time.time()
+            last_spawn = _spawn_timestamps.get(project, 0)
+            if now - last_spawn < SPAWN_COOLDOWN_SEC:
+                retry_after = int(SPAWN_COOLDOWN_SEC - (now - last_spawn)) + 1
+                self._send_json(429, {
+                    "ok": False,
+                    "error": f"Rate limited. Try again in {retry_after} seconds.",
+                    "code": "RATE_LIMITED",
+                    "retry_after": retry_after,
+                })
+                return
+
+            result = self._spawn_agent_in_project_tab(project, resolved_path, agent_type, spawn_id)
+
+            # Update timestamp only on successful spawn (don't waste cooldown on failures)
|
if result["ok"]:
|
||||||
|
_spawn_timestamps[project] = time.time()
|
||||||
|
finally:
|
||||||
|
_spawn_lock.release()
|
||||||
|
|
||||||
if result["ok"]:
|
if result["ok"]:
|
||||||
self._send_json(200, {"ok": True, "project": project, "agent_type": agent_type})
|
self._send_json(200, {"ok": True, "project": project, "agent_type": agent_type, "spawn_id": spawn_id})
|
||||||
else:
|
else:
|
||||||
self._send_json(500, {"ok": False, "error": result["error"], "code": result.get("code", "SPAWN_FAILED")})
|
self._send_json(500, {"ok": False, "error": result["error"], "code": result.get("code", "SPAWN_FAILED")})
|
||||||
|
|
||||||
def _validate_spawn_params(self, project, agent_type):
|
def _validate_spawn_params(self, project, agent_type):
|
||||||
"""Validate spawn parameters. Returns error dict or None."""
|
"""Validate spawn parameters. Returns resolved_path on success, error dict on failure.
|
||||||
|
|
||||||
|
Returns resolved path to avoid TOCTOU: caller uses this path directly
|
||||||
|
instead of re-resolving after validation.
|
||||||
|
"""
|
||||||
if not project:
|
if not project:
|
||||||
return {"message": "project is required", "code": "MISSING_PROJECT"}
|
return {"error": "project is required", "code": "MISSING_PROJECT"}
|
||||||
|
|
||||||
# Security: no path traversal
|
# Security: no path traversal in project name
|
||||||
if "/" in project or "\\" in project or ".." in project:
|
if "/" in project or "\\" in project or ".." in project:
|
||||||
return {"message": "Invalid project name", "code": "INVALID_PROJECT"}
|
return {"error": "Invalid project name", "code": "INVALID_PROJECT"}
|
||||||
|
|
||||||
# Project must exist
|
# Resolve symlinks and verify still under PROJECTS_DIR (AC-22)
|
||||||
project_path = PROJECTS_DIR / project
|
project_path = PROJECTS_DIR / project
|
||||||
if not project_path.is_dir():
|
try:
|
||||||
return {"message": f"Project not found: {project}", "code": "PROJECT_NOT_FOUND"}
|
resolved = project_path.resolve()
|
||||||
|
except OSError:
|
||||||
|
return {"error": f"Project not found: {project}", "code": "PROJECT_NOT_FOUND"}
|
||||||
|
|
||||||
|
# Symlink escape check: resolved path must be under PROJECTS_DIR
|
||||||
|
try:
|
||||||
|
resolved.relative_to(PROJECTS_DIR.resolve())
|
||||||
|
except ValueError:
|
||||||
|
return {"error": "Invalid project path", "code": "INVALID_PROJECT"}
|
||||||
|
|
||||||
|
if not resolved.is_dir():
|
||||||
|
return {"error": f"Project not found: {project}", "code": "PROJECT_NOT_FOUND"}
|
||||||
|
|
||||||
# Agent type must be valid
|
# Agent type must be valid
|
||||||
if agent_type not in AGENT_COMMANDS:
|
if agent_type not in AGENT_COMMANDS:
|
||||||
return {"message": f"Invalid agent type: {agent_type}", "code": "INVALID_AGENT_TYPE"}
|
return {"error": f"Invalid agent type: {agent_type}", "code": "INVALID_AGENT_TYPE"}
|
||||||
|
|
||||||
return None
|
# Return resolved path to avoid TOCTOU
|
||||||
|
return {"resolved_path": resolved}
|
||||||
|
|
||||||
def _check_zellij_session_exists(self):
|
def _check_zellij_session_exists(self):
|
||||||
"""Check if the target Zellij session exists (AC-25)."""
|
"""Check if the target Zellij session exists (AC-26).
|
||||||
|
|
||||||
|
Uses line-by-line parsing rather than substring check to avoid
|
||||||
|
false positives from similarly-named sessions (e.g., "infra2" matching "infra").
|
||||||
|
"""
|
||||||
try:
|
try:
|
||||||
result = subprocess.run(
|
result = subprocess.run(
|
||||||
[ZELLIJ_BIN, "list-sessions"],
|
[ZELLIJ_BIN, "list-sessions"],
|
||||||
@@ -400,38 +547,55 @@ class SpawnMixin:
|
|||||||
text=True,
|
text=True,
|
||||||
timeout=5
|
timeout=5
|
||||||
)
|
)
|
||||||
return ZELLIJ_SESSION in result.stdout
|
# Parse session names line by line to avoid substring false positives
|
||||||
|
# Each line is a session name (may have status suffix like " (current)")
|
||||||
|
for line in result.stdout.strip().split("\n"):
|
||||||
|
session_name = line.split()[0] if line.strip() else ""
|
||||||
|
if session_name == ZELLIJ_SESSION:
|
||||||
|
return True
|
||||||
|
return False
|
||||||
except (FileNotFoundError, subprocess.TimeoutExpired):
|
except (FileNotFoundError, subprocess.TimeoutExpired):
|
||||||
return False
|
return False
|
||||||
|
|
||||||
def _wait_for_session_file(self, timeout=5.0):
|
def _wait_for_session_file(self, spawn_id, timeout=10.0):
|
||||||
"""Poll for any new session file in SESSIONS_DIR (AC-26).
|
"""Poll for session file containing our spawn_id (AC-29).
|
||||||
|
|
||||||
Session files are named {session_id}.json. We don't know the session_id
|
Deterministic correlation: we look for the specific spawn_id we passed
|
||||||
in advance, so we watch for any .json file with mtime after spawn started.
|
to the agent, not just "any new file". This prevents false positives
|
||||||
|
from unrelated agent activity.
|
||||||
|
|
||||||
|
Note: We don't filter by mtime because spawn_id is already unique per
|
||||||
|
request - no risk of matching stale files. This also avoids edge cases
|
||||||
|
where file is written faster than our timestamp capture.
|
||||||
|
|
||||||
|
Args:
|
||||||
|
spawn_id: The UUID we passed to the agent via AMC_SPAWN_ID env var
|
||||||
|
timeout: Maximum seconds to poll (10s for cold starts, VM latency)
|
||||||
"""
|
"""
|
||||||
start = time.monotonic()
|
poll_start = time.time()
|
||||||
# Snapshot existing files to detect new ones
|
poll_interval = 0.25
|
||||||
existing_files = set()
|
|
||||||
if SESSIONS_DIR.exists():
|
|
||||||
existing_files = {f.name for f in SESSIONS_DIR.glob("*.json")}
|
|
||||||
|
|
||||||
while time.monotonic() - start < timeout:
|
while time.time() - poll_start < timeout:
|
||||||
if SESSIONS_DIR.exists():
|
if SESSIONS_DIR.exists():
|
||||||
for f in SESSIONS_DIR.glob("*.json"):
|
for f in SESSIONS_DIR.glob("*.json"):
|
||||||
# New file that didn't exist before spawn
|
try:
|
||||||
if f.name not in existing_files:
|
data = json.loads(f.read_text())
|
||||||
|
if data.get("spawn_id") == spawn_id:
|
||||||
return True
|
return True
|
||||||
# Or existing file with very recent mtime (reused session)
|
except (json.JSONDecodeError, OSError):
|
||||||
if f.stat().st_mtime > start:
|
continue
|
||||||
return True
|
time.sleep(poll_interval)
|
||||||
time.sleep(0.25)
|
|
||||||
return False
|
return False
|
||||||
|
|
||||||
def _spawn_agent_in_project_tab(self, project, project_path, agent_type):
|
def _spawn_agent_in_project_tab(self, project, project_path, agent_type, spawn_id):
|
||||||
"""Ensure project tab exists and spawn agent pane."""
|
"""Ensure project tab exists and spawn agent pane.
|
||||||
|
|
||||||
|
Called with _spawn_lock held to serialize Zellij operations.
|
||||||
|
|
||||||
|
Note: project_path is pre-resolved by _validate_spawn_params to avoid TOCTOU.
|
||||||
|
"""
|
||||||
try:
|
try:
|
||||||
# Step 0: Check session exists (AC-25)
|
# Step 0: Check session exists (AC-26)
|
||||||
if not self._check_zellij_session_exists():
|
if not self._check_zellij_session_exists():
|
||||||
return {
|
return {
|
||||||
"ok": False,
|
"ok": False,
|
||||||
@@ -450,6 +614,8 @@ class SpawnMixin:
|
|||||||
return {"ok": False, "error": f"Failed to create/switch tab: {tab_result.stderr}", "code": "TAB_ERROR"}
|
return {"ok": False, "error": f"Failed to create/switch tab: {tab_result.stderr}", "code": "TAB_ERROR"}
|
||||||
|
|
||||||
# Step 2: Spawn new pane with agent command (AC-14: naming scheme)
|
# Step 2: Spawn new pane with agent command (AC-14: naming scheme)
|
||||||
|
# Pass AMC_SPAWN_ID via subprocess env dict, merged with inherited environment.
|
||||||
|
# This ensures the env var propagates through Zellij's subprocess tree to the agent.
|
||||||
agent_cmd = AGENT_COMMANDS[agent_type]
|
agent_cmd = AGENT_COMMANDS[agent_type]
|
||||||
pane_name = f"{agent_type}-{project}"
|
pane_name = f"{agent_type}-{project}"
|
||||||
|
|
||||||
@@ -461,20 +627,26 @@ class SpawnMixin:
|
|||||||
*agent_cmd
|
*agent_cmd
|
||||||
]
|
]
|
||||||
|
|
||||||
|
# Merge spawn_id into environment so it reaches the agent process
|
||||||
|
spawn_env = os.environ.copy()
|
||||||
|
spawn_env["AMC_SPAWN_ID"] = spawn_id
|
||||||
|
|
||||||
spawn_result = subprocess.run(
|
spawn_result = subprocess.run(
|
||||||
spawn_cmd,
|
spawn_cmd,
|
||||||
capture_output=True,
|
capture_output=True,
|
||||||
text=True,
|
text=True,
|
||||||
timeout=10
|
timeout=10,
|
||||||
|
env=spawn_env,
|
||||||
)
|
)
|
||||||
if spawn_result.returncode != 0:
|
if spawn_result.returncode != 0:
|
||||||
return {"ok": False, "error": f"Failed to spawn pane: {spawn_result.stderr}", "code": "SPAWN_ERROR"}
|
return {"ok": False, "error": f"Failed to spawn pane: {spawn_result.stderr}", "code": "SPAWN_ERROR"}
|
||||||
|
|
||||||
# Step 3: Wait for session file (AC-26)
|
# Step 3: Wait for session file with matching spawn_id (AC-29)
|
||||||
if not self._wait_for_session_file(timeout=5.0):
|
# No mtime filter needed - spawn_id is unique per request
|
||||||
|
if not self._wait_for_session_file(spawn_id, timeout=10.0):
|
||||||
return {
|
return {
|
||||||
"ok": False,
|
"ok": False,
|
||||||
"error": "Agent spawned but session file not detected within 5 seconds",
|
"error": "Agent spawned but session file not detected within 10 seconds",
|
||||||
"code": "SESSION_FILE_TIMEOUT"
|
"code": "SESSION_FILE_TIMEOUT"
|
||||||
}
|
}
|
||||||
|
|
||||||
@@ -486,16 +658,45 @@ class SpawnMixin:
|
|||||||
return {"ok": False, "error": "zellij command timed out", "code": "TIMEOUT"}
|
return {"ok": False, "error": "zellij command timed out", "code": "TIMEOUT"}
|
||||||
|
|
||||||
def _handle_projects(self):
|
def _handle_projects(self):
|
||||||
"""Handle GET /api/projects - return cached projects list (AC-29)."""
|
"""Handle GET /api/projects - return cached projects list (AC-33)."""
|
||||||
self._send_json(200, {"projects": _projects_cache})
|
self._send_json(200, {"projects": _projects_cache})
|
||||||
|
|
||||||
def _handle_projects_refresh(self):
|
def _handle_projects_refresh(self):
|
||||||
"""Handle POST /api/projects/refresh - refresh cache (AC-30)."""
|
"""Handle POST /api/projects/refresh - refresh cache (AC-34)."""
|
||||||
load_projects_cache()
|
load_projects_cache()
|
||||||
self._send_json(200, {"ok": True, "projects": _projects_cache})
|
self._send_json(200, {"ok": True, "projects": _projects_cache})
|
||||||
|
|
||||||
|
def _handle_health(self):
|
||||||
|
"""Handle GET /api/health - return server status (AC-41)."""
|
||||||
|
zellij_available = self._check_zellij_session_exists()
|
||||||
|
self._send_json(200, {
|
||||||
|
"ok": True,
|
||||||
|
"zellij_session": ZELLIJ_SESSION,
|
||||||
|
"zellij_available": zellij_available,
|
||||||
|
"projects_count": len(_projects_cache),
|
||||||
|
})
|
||||||
```
|
```
|
||||||
|
|
||||||
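The resolve-then-`relative_to` check in `_validate_spawn_params` can be exercised against a throwaway directory tree. The sketch below is an illustrative condensation of that logic (the helper name `is_safe_project` and the directory names are hypothetical, not the codebase's API); it assumes a filesystem that supports symlinks:

```python
import tempfile
from pathlib import Path

def is_safe_project(projects_dir: Path, project: str) -> bool:
    """Condensed mirror of the validation: reject traversal, symlink escapes, missing dirs."""
    if "/" in project or "\\" in project or ".." in project:
        return False
    try:
        resolved = (projects_dir / project).resolve()
        resolved.relative_to(projects_dir.resolve())  # raises ValueError on escape
    except (OSError, ValueError):
        return False
    return resolved.is_dir()

root = Path(tempfile.mkdtemp())
projects = root / "projects"
(projects / "amc").mkdir(parents=True)        # legitimate project
outside = root / "secrets"
outside.mkdir()
(projects / "sneaky").symlink_to(outside)     # symlink escaping projects/

assert is_safe_project(projects, "amc") is True
assert is_safe_project(projects, "sneaky") is False      # resolves outside projects/
assert is_safe_project(projects, "../secrets") is False  # rejected before touching disk
assert is_safe_project(projects, "missing") is False
```

Note that both sides of `relative_to` are resolved: comparing a resolved child against an unresolved parent would false-negative when `PROJECTS_DIR` itself sits behind a symlink (e.g., `/tmp` on macOS).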
### IMP-1b: Update amc-hook for spawn_id (fulfills AC-28)

**File:** `bin/amc-hook`

**Integration notes:**

- Check for the `AMC_SPAWN_ID` environment variable
- If present, include it in the session JSON written to disk
- This enables deterministic correlation between the spawn request and session discovery

Add after reading the hook JSON and before writing the session file:

```python
# Include spawn_id if present in environment (for spawn correlation)
spawn_id = os.environ.get("AMC_SPAWN_ID")
if spawn_id:
    session_data["spawn_id"] = spawn_id
```

---
### IMP-2: HTTP Routing (fulfills AC-1, AC-3, AC-4, AC-34)

**File:** `amc_server/mixins/http.py`

Add to `do_GET`:
```python
elif self.path == "/api/projects":
    self._handle_projects()
elif self.path == "/api/health":
    self._handle_health()
```

Add to `do_POST`:

```python
elif self.path == "/api/projects/refresh":
    self._handle_projects_refresh()
```

Update `do_OPTIONS` for CORS preflight on the new endpoints (AC-39: consistent CORS):

```python
def do_OPTIONS(self):
    # CORS preflight for API endpoints
    # AC-39: Keep wildcard CORS consistent with existing endpoints;
    # localhost-only binding (AC-24) is the real security boundary
    self.send_response(204)
    self.send_header("Access-Control-Allow-Origin", "*")
    self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
    self.send_header("Access-Control-Allow-Headers", "Content-Type, Authorization")
    self.end_headers()
```
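The `/api/health` route wired above leans on the exact-match session parsing described in IMP-1. That rule is a pure function and easy to check in isolation; the sample `list-sessions` lines below are illustrative (real Zellij output may add status suffixes or ANSI colors, which would need stripping before comparison):

```python
def zellij_session_exists(list_sessions_output: str, target: str) -> bool:
    """First whitespace-separated token of each line is the session name;
    exact comparison avoids substring false positives ("infra2" vs "infra")."""
    for line in list_sessions_output.strip().split("\n"):
        session_name = line.split()[0] if line.strip() else ""
        if session_name == target:
            return True
    return False

sample = "infra2 [Created 2h ago]\ninfra (current)\nscratch [EXITED]"
assert zellij_session_exists(sample, "infra") is True
assert zellij_session_exists(sample, "scratch") is True
assert zellij_session_exists(sample, "inf") is False   # no substring match
assert zellij_session_exists("", "infra") is False
```

This is exactly the false positive the naive `ZELLIJ_SESSION in result.stdout` check would produce: `"infra" in "infra2 ..."` is true even when no session named `infra` exists.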
### IMP-2b: Server Startup (fulfills AC-33, AC-37)

**File:** `amc_server/server.py`

Add to server initialization:

```python
from amc_server.mixins.spawn import load_projects_cache
from amc_server.context import generate_auth_token, start_projects_watcher

# In server startup, before starting HTTP server:
load_projects_cache()
auth_token = generate_auth_token()  # AC-37: Generate one-time token
start_projects_watcher()            # AC-40: Auto-refresh every 5 minutes
# Token is injected into dashboard HTML via template variable
```

### IMP-2d: Inject Auth Token into Dashboard (fulfills AC-37)

**File:** `amc_server/mixins/http.py` (in dashboard HTML serving)

Inject the auth token into the dashboard HTML so JavaScript can use it:

```python
# In the HTML template that serves the dashboard:
html_content = html_content.replace(
    "<!-- AMC_AUTH_TOKEN -->",
    f'<script>window.AMC_AUTH_TOKEN = "{_auth_token}";</script>'
)
```

**File:** `dashboard/index.html`

Add placeholder in `<head>`:

```html
<!-- AMC_AUTH_TOKEN -->
```
### IMP-2c: API Constants (follows existing pattern)

```python
class AMCHandler(
    # ...
):
    """HTTP handler composed from focused mixins."""
```

### IMP-4: SpawnModal Component (fulfills AC-2, AC-3, AC-6, AC-7, AC-20, AC-25, AC-31, AC-32)

**File:** `dashboard/components/SpawnModal.js`

```javascript
// ...
try {
    const response = await fetchWithTimeout(API_SPAWN, {
        method: 'POST',
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${window.AMC_AUTH_TOKEN}`, // AC-38
        },
        body: JSON.stringify({ project, agent_type: agentType })
    });
    const data = await response.json();
// ...
```

3. Add button to existing inline header (lines 331-380)
4. Add SpawnModal component at end of render

**Project identity note:** `selectedProject` in App.js is already the short project name (e.g., "amc"), not the full path. This comes from `groupSessionsByProject()` in `status.js`, which uses `projectName` as the key. The modal can pass it directly to `/api/spawn`.

```javascript
// Add import at top
import { SpawnModal } from './SpawnModal.js';

// ...
const [spawnModalOpen, setSpawnModalOpen] = useState(false);
// ...
</button>

// Add modal before closing fragment (after ToastContainer, around line 426)
// selectedProject is already the short name (e.g., "amc"), not a path
<${SpawnModal}
  isOpen=${spawnModalOpen}
  onClose=${() => setSpawnModalOpen(false)}
// ...
```

```bash
curl -X POST http://localhost:7400/api/spawn \
  -d '{"project":"gitlore","agent_type":"codex"}'
```

**ACs covered:** AC-4, AC-5, AC-8, AC-9, AC-10, AC-11, AC-12, AC-13, AC-14, AC-18, AC-19, AC-22, AC-23, AC-24, AC-26, AC-27, AC-28, AC-29, AC-30, AC-33, AC-34, AC-35, AC-36, AC-40, AC-41
### Slice 2: Spawn Modal UI

**Tasks:**

1. Create `SpawnModal` component with context-aware behavior
2. Add "+ New Agent" button to page header
3. Pass `currentProject` from sidebar selection to modal (extract basename if needed)
4. Implement agent type toggle (Claude / Codex)
5. Wire up project dropdown (only shown on "All Projects")
6. Add loading and error states
7. Show toast on spawn result
8. Implement modal dismiss behavior (Escape, click-outside, Cancel)

**ACs covered:** AC-1, AC-2, AC-3, AC-6, AC-7, AC-20, AC-21, AC-25, AC-31, AC-32, AC-42

### Slice 3: Polish & Edge Cases

## Open Questions

1. ~~**Rate limiting:** Should we limit spawn frequency to prevent accidental spam?~~ **RESOLVED:** Added per-project 10-second cooldown (AC-35, AC-36)

2. **Session cleanup:** When a spawned agent exits, should the dashboard offer to close the pane?

3. **Multiple Zellij sessions:** Currently hardcoded to "infra". Future: detect or let the user pick?

4. **Agent naming:** Current scheme is `{agent_type}-{project}`. Collision if multiple agents for the same project? (Zellij allows duplicate pane names; could add a timestamp suffix)

5. **Spawn limits:** Should we add spawn limits or warnings for resource management? (Rate limiting helps but doesn't cap the total)

6. **Dead code cleanup:** `Header.js` exists but isn't used (App.js has an inline header). Remove it?

7. **Hook verification:** Should the spawn endpoint verify `amc-hook` is installed before spawning Claude agents? (Could add a `/api/hook-status` endpoint)

8. **Async spawn confirmation:** Current design returns an error if the session file is not detected within 10s even though the pane exists. Future: return spawn_id immediately and let the dashboard poll for confirmation? (Suggested by GPT 5.3 review but adds complexity)

9. **Tab focus disruption:** `go-to-tab-name --create` changes the active tab globally in the "infra" session. Explore `--skip-focus` or similar if available in the Zellij CLI?

---

## Design Decisions (from review)

These issues were identified during external review and addressed in the plan:

| Issue | Resolution |
|-------|------------|
| `st_mtime` vs `time.monotonic()` bug | Fixed: Use `time.time()` for wall-clock comparison with file mtime |
| "Any new file" polling could return false success | Fixed: Deterministic `spawn_id` correlation via env var |
| Concurrent spawns race on Zellij tab focus | Fixed: `_spawn_lock` serializes all Zellij operations |
| Symlink escape from `~/projects/` | Fixed: `Path.resolve()` + `relative_to()` check |
| CORS `*` with dangerous agent flags | Accepted: localhost-only binding (AC-24) is sufficient for dev-machine use |
| Project identity mismatch (full path vs basename) | Documented: `selectedProject` from the sidebar is already the short name; verify in implementation |

### Additional Issues (from GPT 5.3 second opinion)

| Issue | Resolution |
|-------|------------|
| `AMC_SPAWN_ID` propagation not guaranteed | Fixed: Use shell wrapper (`sh -c "export AMC_SPAWN_ID=...; exec ..."`) to guarantee the env var reaches the agent process |
| `go-to-tab-name --create` changes active tab globally | Accepted: Dev-machine tool; focus disruption is a minor annoyance, not critical. Could explore a `--skip-focus` flag in future |
| Process-local lock insufficient for multi-worker | Accepted: AMC is single-process by design; documented as a design constraint |
| `SESSION_FILE_TIMEOUT` creates false-failure path | Documented: Pane exists but the API returns an error; future work could add idempotent retry with spawn_id deduplication |
| Authz/abuse controls missing on `/api/spawn` | Fixed: Added per-project rate limiting (AC-35, AC-36); localhost-only binding provides baseline security |
| TOCTOU: path validated then re-resolved | Fixed: `_validate_spawn_params` returns the resolved path; caller uses it directly |
| Hardcoded "infra" session is a SPOF | Documented: Single-session design is intentional for v1; multi-session support noted in Open Questions |
| 5s polling timeout brittle under cold starts | Accepted: 5s is generous for typical agent startup; the SESSION_FILE_TIMEOUT error is actionable |
| Hook missing/broken causes confirmation failure | Documented: Prerequisite section notes the hook must be installed; future work could add a hook verification endpoint |
| Pane name collisions reduce debuggability | Accepted: Zellij allows duplicate names; the dashboard shows full session context. Could add a timestamp suffix in future |

### Issues from GPT 5.3 Third Review (Codex)

| Issue | Resolution |
|-------|------------|
| Rate-limit check outside `_spawn_lock` causes a race | Fixed: Moved the rate-limit check inside the lock to prevent two requests bypassing the cooldown simultaneously |
| `start = time.time()` after spawn causes false timeout | Fixed: Capture `spawn_start_time` BEFORE the spawn command; pass it to `_wait_for_session_file()` |
| CORS `*` + localhost insufficient for security | Fixed: Added AC-37, AC-38, AC-39 for auth token + strict CORS |
| `ZELLIJ_SESSION in stdout` substring check | Fixed: Parse session names line by line to avoid false positives (e.g., "infra2" matching "infra") |
| `go-to-tab-name` then `new-pane` not atomic | Accepted: Zellij CLI doesn't support atomic tab+pane creation; the race window is small in practice |

### Issues from GPT 5.3 Fourth Review (Codex)

| Issue | Resolution |
|-------|------------|
| Timestamp captured BEFORE spawn creates mtime ambiguity | Fixed: Capture `spawn_complete_time` AFTER the subprocess returns; spawn_id correlation handles fast writes |
| 5s polling timeout brittle for cold starts/VMs | Fixed: Increased timeout to 10s (AC-29 updated) |
| CORS inconsistency (wildcard removed only on spawn) | Fixed: Keep wildcard CORS consistent; localhost binding is the security boundary (AC-39 updated) |
| Projects cache goes stale between server restarts | Fixed: Added AC-40 for 5-minute background refresh |
| Lock contention could silently delay requests | Fixed: Added contention logging when the wait exceeds 1s |
| Auth token via inline script is fragile | Accepted: Works for a localhost dev tool; a secure-cookie alternative is documented as a future option |

### Issues from GPT 5.3 Fifth Review (Codex)

| Issue | Resolution |
|-------|------------|
| Shell-wrapper env var propagation is fragile | Fixed: Use `subprocess.run(..., env=spawn_env)` to pass the env dict directly |
| Mtime check creates a race if the file is written during spawn | Fixed: Removed the mtime filter entirely; spawn_id is already deterministic |
| Rate-limit timestamp updated before spawn wastes cooldown | Fixed: Update the timestamp only after a successful spawn |
| Background thread can silently die on exception | Fixed: Added try-except with logging in the watch loop |
| Lock acquisition can block indefinitely | Fixed: Added a 15s timeout; return SERVER_BUSY on timeout |
| No way to check Zellij session status before spawning | Fixed: Added AC-41/AC-42 for health endpoint and dashboard warning |
| Codex discovery via process inspection unreliable with shell wrapper | Fixed: Clarified that Codex uses hook-based discovery (same as Claude) |
51
plans/subagent-visibility.md
Normal file
51
plans/subagent-visibility.md
Normal file
@@ -0,0 +1,51 @@
|
|||||||
|
# Subagent & Agent Team Visibility for AMC
|
||||||
|
|
||||||
|
> **Status**: Draft
|
||||||
|
> **Last Updated**: 2026-02-27
|
||||||
|
|
||||||
|
## Summary
|
||||||
|
|
||||||
|
Add a button in the turn stats section showing the count of active subagents/team members. Clicking it opens a list with names and lifetime stats (time taken, tokens used). Mirrors Claude Code's own agent display.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Workflow
|
||||||
|
|
||||||
|
1. User views a session card in AMC
|
||||||
|
2. Turn stats area shows: `2h 15m | 84k tokens | 3 agents`
|
||||||
|
3. User clicks "3 agents" button
|
||||||
|
4. List opens showing:
|
||||||
|
```
|
||||||
|
claude-code-guide (running) 12m 42,000 tokens
|
||||||
|
Explore (completed) 3m 18,500 tokens
|
||||||
|
Explore (completed) 5m 23,500 tokens
|
||||||
|
```
|
||||||
|
5. List updates in real-time as agents complete
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
## Acceptance Criteria
|
||||||
|
|
||||||
|
### Discovery
|
||||||
|
|
||||||
|
- **AC-1**: Subagent JSONL files discovered at `{session_dir}/subagents/agent-*.jsonl`
|
||||||
|
- **AC-2**: Both regular subagents (Task tool) and team members (Task with `team_name`) are discovered from same location
|
||||||
|
|
||||||
|
### Status Detection
|
||||||
|
|
||||||
|
- **AC-3**: Subagent is "running" if: parent session is alive AND last assistant entry has `stop_reason != "end_turn"`
|
||||||
|
- **AC-4**: Subagent is "completed" if: last assistant entry has `stop_reason == "end_turn"` OR parent session is dead
|
||||||
|
|
||||||
|
### Stats Extraction

- **AC-5**: Subagent name extracted from the parent's Task tool invocation: use `name` if present (team member), else `subagent_type`
- **AC-6**: Lifetime duration = first entry timestamp to last entry timestamp (or now if running)
- **AC-7**: Lifetime tokens = sum of all assistant entries' `usage.input_tokens + usage.output_tokens`
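AC-6 and AC-7 can be computed in one pass over the log. A minimal sketch, assuming entries carry an ISO-8601 `timestamp` field and assistant entries carry `message.usage` (both field paths are assumptions, not verified against the schema here):

```python
import json
from datetime import datetime, timezone

def subagent_stats(jsonl_text: str, running: bool) -> dict:
    """AC-6/AC-7: duration spans first to last entry timestamp
    (or now if still running); tokens sum input+output over
    assistant entries."""
    first_ts = last_ts = None
    tokens = 0
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        ts = entry.get("timestamp")  # assumed ISO-8601 string
        if ts:
            dt = datetime.fromisoformat(ts.replace("Z", "+00:00"))
            first_ts = first_ts or dt
            last_ts = dt
        if entry.get("type") == "assistant":
            usage = entry.get("message", {}).get("usage", {})
            tokens += usage.get("input_tokens", 0) + usage.get("output_tokens", 0)
    end = datetime.now(timezone.utc) if running else last_ts
    duration = (end - first_ts).total_seconds() if first_ts and end else 0.0
    return {"duration_s": duration, "tokens": tokens}
```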
### UI
|
||||||
|
|
||||||
|
- **AC-8**: Turn stats area shows agent count button when subagents exist
|
||||||
|
- **AC-9**: Button shows count + running indicator (e.g., "3 agents" or "2 agents (1 running)")
|
||||||
|
- **AC-10**: Clicking button opens popover with: name, status, duration, token count
|
||||||
|
- **AC-11**: Running agents show activity indicator
|
||||||
|
- **AC-12**: List updates via existing polling/SSE
|
||||||
BIN	tests/__pycache__/test_context.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_control.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_conversation.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_discovery.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_hook.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_http.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_parsing.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_skills.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_spawn.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
BIN	tests/__pycache__/test_state.cpython-313-pytest-9.0.2.pyc	Normal file	(binary file not shown)
0	tests/e2e/__init__.py	Normal file
BIN	tests/e2e/__pycache__/__init__.cpython-313.pyc	Normal file	(binary file not shown)
614	tests/e2e/test_autocomplete_workflow.js	Normal file
@@ -0,0 +1,614 @@
/**
 * E2E integration tests for the autocomplete workflow.
 *
 * Validates the complete flow from typing a trigger character through
 * skill selection and insertion, using a mock HTTP server that serves
 * both dashboard files and the /api/skills endpoint.
 *
 * Test scenarios from bd-3cc:
 * - Server serves /api/skills correctly
 * - Dashboard loads skills on session open
 * - Trigger character shows dropdown
 * - Keyboard navigation works
 * - Selection inserts skill
 * - Edge cases (wrong trigger, empty skills, backspace, etc.)
 */

import { describe, it, before, after } from 'node:test';
import assert from 'node:assert/strict';
import { createServer } from 'node:http';
import { getTriggerInfo, filteredSkills } from '../../dashboard/utils/autocomplete.js';

// -- Mock server for /api/skills --

const CLAUDE_SKILLS_RESPONSE = {
  trigger: '/',
  skills: [
    { name: 'commit', description: 'Create a git commit' },
    { name: 'comment', description: 'Add a comment' },
    { name: 'review-pr', description: 'Review a pull request' },
    { name: 'help', description: 'Get help' },
    { name: 'init', description: 'Initialize project' },
  ],
};

const CODEX_SKILLS_RESPONSE = {
  trigger: '$',
  skills: [
    { name: 'lint', description: 'Lint code' },
    { name: 'deploy', description: 'Deploy to prod' },
    { name: 'test', description: 'Run tests' },
  ],
};

const EMPTY_SKILLS_RESPONSE = {
  trigger: '/',
  skills: [],
};

let server;
let serverUrl;

function startMockServer() {
  return new Promise((resolve) => {
    server = createServer((req, res) => {
      const url = new URL(req.url, `http://${req.headers.host}`);

      if (url.pathname === '/api/skills') {
        const agent = url.searchParams.get('agent') || 'claude';
        res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' });

        if (agent === 'codex') {
          res.end(JSON.stringify(CODEX_SKILLS_RESPONSE));
        } else if (agent === 'empty') {
          res.end(JSON.stringify(EMPTY_SKILLS_RESPONSE));
        } else {
          res.end(JSON.stringify(CLAUDE_SKILLS_RESPONSE));
        }
      } else {
        res.writeHead(404);
        res.end('Not found');
      }
    });

    server.listen(0, '127.0.0.1', () => {
      const { port } = server.address();
      serverUrl = `http://127.0.0.1:${port}`;
      resolve();
    });
  });
}

function stopMockServer() {
  return new Promise((resolve) => {
    if (server) server.close(resolve);
    else resolve();
  });
}

// -- Helper: simulate fetching skills like the dashboard does --

async function fetchSkills(agent) {
  const url = `${serverUrl}/api/skills?agent=${encodeURIComponent(agent)}`;
  const response = await fetch(url);
  if (!response.ok) return null;
  return response.json();
}
// =============================================================
// Test Suite: Server -> Client Skills Fetch
// =============================================================

describe('E2E: Server serves /api/skills correctly', () => {
  before(startMockServer);
  after(stopMockServer);

  it('fetches Claude skills with / trigger', async () => {
    const config = await fetchSkills('claude');
    assert.equal(config.trigger, '/');
    assert.ok(config.skills.length > 0, 'should have skills');
    assert.ok(config.skills.some(s => s.name === 'commit'));
  });

  it('fetches Codex skills with $ trigger', async () => {
    const config = await fetchSkills('codex');
    assert.equal(config.trigger, '$');
    assert.ok(config.skills.some(s => s.name === 'lint'));
  });

  it('returns empty skills list when none exist', async () => {
    const config = await fetchSkills('empty');
    assert.equal(config.trigger, '/');
    assert.deepEqual(config.skills, []);
  });

  it('each skill has name and description', async () => {
    const config = await fetchSkills('claude');
    for (const skill of config.skills) {
      assert.ok(skill.name, 'skill should have name');
      assert.ok(skill.description, 'skill should have description');
    }
  });
});
// =============================================================
// Test Suite: Dashboard loads skills on session open
// =============================================================

describe('E2E: Dashboard loads skills on session open', () => {
  before(startMockServer);
  after(stopMockServer);

  it('loads Claude skills config matching server response', async () => {
    const config = await fetchSkills('claude');
    assert.equal(config.trigger, '/');
    // Verify the config is usable by autocomplete functions
    const info = getTriggerInfo('/com', 4, config);
    assert.ok(info, 'should detect trigger in loaded config');
    assert.equal(info.filterText, 'com');
  });

  it('loads Codex skills config matching server response', async () => {
    const config = await fetchSkills('codex');
    assert.equal(config.trigger, '$');
    const info = getTriggerInfo('$li', 3, config);
    assert.ok(info, 'should detect $ trigger');
    assert.equal(info.filterText, 'li');
  });

  it('handles null/missing config gracefully', async () => {
    // Simulate network failure
    const info = getTriggerInfo('/test', 5, null);
    assert.equal(info, null);
    const skills = filteredSkills(null, { filterText: '' });
    assert.deepEqual(skills, []);
  });
});
// =============================================================
// Test Suite: Trigger character shows dropdown
// =============================================================

describe('E2E: Trigger character shows dropdown', () => {
  const config = CLAUDE_SKILLS_RESPONSE;

  it('Claude session: Type "/" -> dropdown appears with Claude skills', () => {
    const info = getTriggerInfo('/', 1, config);
    assert.ok(info, 'trigger should be detected');
    assert.equal(info.trigger, '/');
    const skills = filteredSkills(config, info);
    assert.ok(skills.length > 0, 'should show skills');
  });

  it('Codex session: Type "$" -> dropdown appears with Codex skills', () => {
    const codexConfig = CODEX_SKILLS_RESPONSE;
    const info = getTriggerInfo('$', 1, codexConfig);
    assert.ok(info, 'trigger should be detected');
    assert.equal(info.trigger, '$');
    const skills = filteredSkills(codexConfig, info);
    assert.ok(skills.length > 0);
  });

  it('Claude session: Type "$" -> nothing happens (wrong trigger)', () => {
    const info = getTriggerInfo('$', 1, config);
    assert.equal(info, null, 'wrong trigger should not activate');
  });

  it('Type "/com" -> list filters to skills containing "com"', () => {
    const info = getTriggerInfo('/com', 4, config);
    assert.ok(info);
    assert.equal(info.filterText, 'com');
    const skills = filteredSkills(config, info);
    const names = skills.map(s => s.name);
    assert.ok(names.includes('commit'), 'should include commit');
    assert.ok(names.includes('comment'), 'should include comment');
    assert.ok(!names.includes('review-pr'), 'should not include review-pr');
    assert.ok(!names.includes('help'), 'should not include help');
  });

  it('Mid-message: Type "please run /commit" -> autocomplete triggers on "/"', () => {
    const input = 'please run /commit';
    const info = getTriggerInfo(input, input.length, config);
    assert.ok(info, 'should detect trigger mid-message');
    assert.equal(info.trigger, '/');
    assert.equal(info.filterText, 'commit');
    assert.equal(info.replaceStart, 11);
    assert.equal(info.replaceEnd, 18);
  });

  it('Trigger at start of line after newline', () => {
    const input = 'first line\n/rev';
    const info = getTriggerInfo(input, input.length, config);
    assert.ok(info);
    assert.equal(info.filterText, 'rev');
  });
});
// =============================================================
// Test Suite: Keyboard navigation works
// =============================================================

describe('E2E: Keyboard navigation simulation', () => {
  const config = CLAUDE_SKILLS_RESPONSE;

  it('Arrow keys navigate through filtered list', () => {
    const info = getTriggerInfo('/', 1, config);
    const skills = filteredSkills(config, info);

    // Simulate state: selectedIndex starts at 0
    let selectedIndex = 0;

    // ArrowDown moves to next
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1);
    assert.equal(selectedIndex, 1);

    // ArrowDown again
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1);
    assert.equal(selectedIndex, 2);

    // ArrowUp moves back
    selectedIndex = Math.max(selectedIndex - 1, 0);
    assert.equal(selectedIndex, 1);

    // ArrowUp back to start
    selectedIndex = Math.max(selectedIndex - 1, 0);
    assert.equal(selectedIndex, 0);

    // ArrowUp at top doesn't go negative
    selectedIndex = Math.max(selectedIndex - 1, 0);
    assert.equal(selectedIndex, 0);
  });

  it('ArrowDown clamps at list end', () => {
    const info = getTriggerInfo('/com', 4, config);
    const skills = filteredSkills(config, info);
    // "com" matches commit and comment -> 2 skills
    assert.equal(skills.length, 2);

    let selectedIndex = 0;
    // Down to 1
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1);
    assert.equal(selectedIndex, 1);
    // Down again - clamped at 1
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1);
    assert.equal(selectedIndex, 1, 'should clamp at list end');
  });

  it('Enter selects the current skill', () => {
    const info = getTriggerInfo('/', 1, config);
    const skills = filteredSkills(config, info);
    const selectedIndex = 0;

    // Simulate Enter: select skill at selectedIndex
    const selected = skills[selectedIndex];
    assert.ok(selected, 'should have a skill to select');
    assert.equal(selected.name, skills[0].name);
  });

  it('Escape dismisses without selection', () => {
    // Simulate Escape: set showAutocomplete = false, no insertion
    let showAutocomplete = true;
    // Escape handler
    showAutocomplete = false;
    assert.equal(showAutocomplete, false, 'dropdown should close on Escape');
  });
});
// =============================================================
// Test Suite: Selection inserts skill
// =============================================================

describe('E2E: Selection inserts skill', () => {
  const config = CLAUDE_SKILLS_RESPONSE;

  /**
   * Simulate insertSkill logic from SimpleInput.js
   */
  function simulateInsertSkill(text, triggerInfo, skill, trigger) {
    const { replaceStart, replaceEnd } = triggerInfo;
    const before = text.slice(0, replaceStart);
    const after = text.slice(replaceEnd);
    const inserted = `${trigger}${skill.name} `;
    return {
      newText: before + inserted + after,
      newCursorPos: replaceStart + inserted.length,
    };
  }

  it('Selected skill shows as "{trigger}skill-name " in input', () => {
    const text = '/com';
    const info = getTriggerInfo(text, 4, config);
    const skills = filteredSkills(config, info);
    const skill = skills.find(s => s.name === 'commit');

    const { newText, newCursorPos } = simulateInsertSkill(text, info, skill, config.trigger);
    assert.equal(newText, '/commit ', 'should insert trigger + skill name + space');
    assert.equal(newCursorPos, 8, 'cursor should be after inserted text');
  });

  it('Inserting mid-message preserves surrounding text', () => {
    const text = 'please run /com and continue';
    const info = getTriggerInfo(text, 15, config); // cursor at end of "/com"
    assert.ok(info);
    const skill = { name: 'commit' };

    const { newText } = simulateInsertSkill(text, info, skill, config.trigger);
    // The inserted text ends with a space and " and continue" follows the
    // cursor position (15), so the result contains a double space.
    assert.equal(newText, 'please run /commit  and continue');
  });

  it('Inserting at start of input', () => {
    const text = '/';
    const info = getTriggerInfo(text, 1, config);
    const skill = { name: 'help' };

    const { newText, newCursorPos } = simulateInsertSkill(text, info, skill, config.trigger);
    assert.equal(newText, '/help ');
    assert.equal(newCursorPos, 6);
  });

  it('Inserting with filter text replaces trigger+filter', () => {
    const text = '/review';
    const info = getTriggerInfo(text, 7, config);
    const skill = { name: 'review-pr' };

    const { newText } = simulateInsertSkill(text, info, skill, config.trigger);
    assert.equal(newText, '/review-pr ');
  });
});
// =============================================================
// Test Suite: Verify alphabetical ordering
// =============================================================

describe('E2E: Verify alphabetical ordering of skills', () => {
  it('Skills are returned sorted alphabetically', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('/', 1, config);
    const skills = filteredSkills(config, info);
    const names = skills.map(s => s.name);

    for (let i = 1; i < names.length; i++) {
      assert.ok(
        names[i].localeCompare(names[i - 1]) >= 0,
        `${names[i]} should come after ${names[i - 1]}`
      );
    }
  });

  it('Filtered results maintain alphabetical order', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('/com', 4, config);
    const skills = filteredSkills(config, info);
    const names = skills.map(s => s.name);

    assert.deepEqual(names, ['comment', 'commit']);
  });
});
// =============================================================
// Test Suite: Edge Cases
// =============================================================

describe('E2E: Edge cases', () => {
  it('Session without skills shows empty list', () => {
    const emptyConfig = EMPTY_SKILLS_RESPONSE;
    const info = getTriggerInfo('/', 1, emptyConfig);
    assert.ok(info, 'trigger still detected');
    const skills = filteredSkills(emptyConfig, info);
    assert.equal(skills.length, 0);
  });

  it('Single skill still shows in dropdown', () => {
    const singleConfig = {
      trigger: '/',
      skills: [{ name: 'only-skill', description: 'The only one' }],
    };
    const info = getTriggerInfo('/', 1, singleConfig);
    const skills = filteredSkills(singleConfig, info);
    assert.equal(skills.length, 1);
    assert.equal(skills[0].name, 'only-skill');
  });

  it('Multiple triggers in one message work independently', () => {
    const config = CLAUDE_SKILLS_RESPONSE;

    // User types: "first /commit then /review-pr finally"
    // After first insertion, simulating second trigger
    const text = 'first /commit then /rev';

    // Cursor at end - should detect second trigger
    const info = getTriggerInfo(text, text.length, config);
    assert.ok(info, 'should detect second trigger');
    assert.equal(info.filterText, 'rev');
    assert.equal(info.replaceStart, 19); // position of second "/"
    assert.equal(info.replaceEnd, text.length);

    const skills = filteredSkills(config, info);
    assert.ok(skills.some(s => s.name === 'review-pr'));
  });

  it('Backspace over trigger dismisses autocomplete', () => {
    const config = CLAUDE_SKILLS_RESPONSE;

    // Type "/" - trigger detected
    let info = getTriggerInfo('/', 1, config);
    assert.ok(info, 'trigger detected');

    // Backspace - text is now empty
    info = getTriggerInfo('', 0, config);
    assert.equal(info, null, 'trigger dismissed after backspace');
  });

  it('Trigger embedded in word does not activate', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('path/to/file', 5, config);
    assert.equal(info, null, 'should not trigger on path separator');
  });

  it('No matching skills after filtering shows empty list', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('/zzz', 4, config);
    assert.ok(info, 'trigger still detected');
    const skills = filteredSkills(config, info);
    assert.equal(skills.length, 0, 'no skills match "zzz"');
  });

  it('Case-insensitive filtering works', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('/COM', 4, config);
    assert.ok(info);
    assert.equal(info.filterText, 'com'); // lowercased
    const skills = filteredSkills(config, info);
    assert.ok(skills.length >= 2, 'should match commit and comment');
  });

  it('Click outside dismisses (state simulation)', () => {
    // Simulate: showAutocomplete=true, click outside sets it to false
    let showAutocomplete = true;
    // Simulate click outside handler
    const clickTarget = { contains: () => false };
    const textareaRef = { contains: () => false };
    if (!clickTarget.contains('event') && !textareaRef.contains('event')) {
      showAutocomplete = false;
    }
    assert.equal(showAutocomplete, false, 'click outside should dismiss');
  });
});
// =============================================================
// Test Suite: Cross-agent isolation
// =============================================================

describe('E2E: Cross-agent trigger isolation', () => {
  it('Claude trigger / does not activate in Codex config', () => {
    const codexConfig = CODEX_SKILLS_RESPONSE;
    const info = getTriggerInfo('/', 1, codexConfig);
    assert.equal(info, null, '/ should not trigger for Codex');
  });

  it('Codex trigger $ does not activate in Claude config', () => {
    const claudeConfig = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('$', 1, claudeConfig);
    assert.equal(info, null, '$ should not trigger for Claude');
  });

  it('Each agent gets its own skills list', async () => {
    // This requires the mock server
    await startMockServer();
    try {
      const claude = await fetchSkills('claude');
      const codex = await fetchSkills('codex');

      assert.equal(claude.trigger, '/');
      assert.equal(codex.trigger, '$');

      const claudeNames = claude.skills.map(s => s.name);
      const codexNames = codex.skills.map(s => s.name);

      // No overlap in default test data
      assert.ok(!claudeNames.includes('lint'), 'Claude should not have Codex skills');
      assert.ok(!codexNames.includes('commit'), 'Codex should not have Claude skills');
    } finally {
      await stopMockServer();
    }
  });
});
// =============================================================
// Test Suite: Full workflow simulation
// =============================================================

describe('E2E: Full autocomplete workflow', () => {
  before(startMockServer);
  after(stopMockServer);

  it('complete flow: fetch -> type -> filter -> navigate -> select -> verify', async () => {
    // Step 1: Fetch skills from server (like Modal.js does on session open)
    const config = await fetchSkills('claude');
    assert.equal(config.trigger, '/');
    assert.ok(config.skills.length > 0);

    // Step 2: User starts typing - no trigger yet
    let text = 'hello ';
    let cursorPos = text.length;
    let info = getTriggerInfo(text, cursorPos, config);
    assert.equal(info, null, 'no trigger yet');

    // Step 3: User types trigger character
    text = 'hello /';
    cursorPos = text.length;
    info = getTriggerInfo(text, cursorPos, config);
    assert.ok(info, 'trigger detected');
    let skills = filteredSkills(config, info);
    assert.ok(skills.length === 5, 'all 5 skills shown');

    // Step 4: User types filter text
    text = 'hello /com';
    cursorPos = text.length;
    info = getTriggerInfo(text, cursorPos, config);
    assert.ok(info);
    assert.equal(info.filterText, 'com');
    skills = filteredSkills(config, info);
    assert.equal(skills.length, 2, 'filtered to 2 skills');
    assert.deepEqual(skills.map(s => s.name), ['comment', 'commit']);

    // Step 5: Arrow down to select "commit" (index 1)
    let selectedIndex = 0; // starts on "comment"
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1); // ArrowDown
    assert.equal(selectedIndex, 1);
    assert.equal(skills[selectedIndex].name, 'commit');

    // Step 6: Press Enter to insert
    const selected = skills[selectedIndex];
    const { replaceStart, replaceEnd } = info;
    const before = text.slice(0, replaceStart);
    const after = text.slice(replaceEnd);
    const inserted = `${config.trigger}${selected.name} `;
    const newText = before + inserted + after;
    const newCursorPos = replaceStart + inserted.length;

    // Step 7: Verify insertion
    assert.equal(newText, 'hello /commit ');
    assert.equal(newCursorPos, 14);

    // Step 8: Verify autocomplete closed (trigger info should be null for the new text)
    // After insertion, cursor is at 14, no active trigger word
    const postInfo = getTriggerInfo(newText, newCursorPos, config);
    assert.equal(postInfo, null, 'autocomplete should be dismissed after selection');
  });

  it('complete flow with second trigger after first insertion', async () => {
    const config = await fetchSkills('claude');

    // After first insertion: "hello /commit "
    let text = 'hello /commit ';
    let cursorPos = text.length;

    // User types more text and another trigger
    text = 'hello /commit then /';
    cursorPos = text.length;
    let info = getTriggerInfo(text, cursorPos, config);
    assert.ok(info, 'second trigger detected');
    assert.equal(info.replaceStart, 19);

    // Filter the second trigger
    text = 'hello /commit then /rev';
    cursorPos = text.length;
    info = getTriggerInfo(text, cursorPos, config);
    assert.ok(info);
    assert.equal(info.filterText, 'rev');

    const skills = filteredSkills(config, info);
    assert.ok(skills.some(s => s.name === 'review-pr'));

    // Select review-pr
    const skill = skills.find(s => s.name === 'review-pr');
    const before = text.slice(0, info.replaceStart);
    const after = text.slice(info.replaceEnd);
    const newText = before + `${config.trigger}${skill.name} ` + after;

    assert.equal(newText, 'hello /commit then /review-pr ');
  });
});
250	tests/e2e/test_skills_endpoint.py	Normal file
@@ -0,0 +1,250 @@
"""E2E tests for the /api/skills endpoint.

Spins up a real AMC server on a random port and verifies the skills API
returns correct data for Claude and Codex agents, including trigger
characters, alphabetical sorting, and response format.
"""

import json
import socket
import tempfile
import threading
import time
import unittest
import urllib.error
import urllib.request
from http.server import ThreadingHTTPServer
from pathlib import Path
from unittest.mock import patch

from amc_server.handler import AMCHandler


def _find_free_port():
    """Find an available port for the test server."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


def _get_json(url):
    """Fetch JSON from a URL, returning (status_code, parsed_json)."""
    req = urllib.request.Request(url)
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status, json.loads(resp.read())
    except urllib.error.HTTPError as e:
        return e.code, json.loads(e.read())


class TestSkillsEndpointE2E(unittest.TestCase):
    """E2E tests: start a real server and hit /api/skills over HTTP."""

    @classmethod
    def setUpClass(cls):
        """Start a test server on a random port with mock skill data."""
        cls.port = _find_free_port()
        cls.base_url = f"http://127.0.0.1:{cls.port}"

        # Create temp directories for skill data
        cls.tmpdir = tempfile.mkdtemp()
        cls.home = Path(cls.tmpdir)

        # Claude skills
        for name, desc in [
            ("commit", "Create a git commit"),
            ("review-pr", "Review a pull request"),
            ("comment", "Add a comment"),
        ]:
            skill_dir = cls.home / ".claude/skills" / name
            skill_dir.mkdir(parents=True, exist_ok=True)
            (skill_dir / "SKILL.md").write_text(desc)

        # Codex curated skills
        cache_dir = cls.home / ".codex/vendor_imports"
        cache_dir.mkdir(parents=True, exist_ok=True)
        cache = {
            "skills": [
                {"id": "lint", "shortDescription": "Lint code"},
                {"id": "deploy", "shortDescription": "Deploy to prod"},
            ]
        }
        (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))

        # Codex user skill
        codex_skill = cls.home / ".codex/skills/my-script"
        codex_skill.mkdir(parents=True, exist_ok=True)
        (codex_skill / "SKILL.md").write_text("Run my custom script")

        # Patch Path.home() for the skills enumeration
        cls.home_patcher = patch.object(Path, "home", return_value=cls.home)
        cls.home_patcher.start()

        # Start server in background thread
        cls.server = ThreadingHTTPServer(("127.0.0.1", cls.port), AMCHandler)
        cls.server_thread = threading.Thread(target=cls.server.serve_forever)
        cls.server_thread.daemon = True
        cls.server_thread.start()

        # Wait for server to be ready
        for _ in range(50):
            try:
                with socket.create_connection(("127.0.0.1", cls.port), timeout=0.1):
                    break
            except OSError:
                time.sleep(0.05)

    @classmethod
    def tearDownClass(cls):
        """Shut down the test server."""
        cls.server.shutdown()
        cls.server_thread.join(timeout=5)
        cls.home_patcher.stop()

    # -- Core: /api/skills serves correctly --

    def test_skills_default_is_claude(self):
        """GET /api/skills without ?agent defaults to claude (/ trigger)."""
        status, data = _get_json(f"{self.base_url}/api/skills")
        self.assertEqual(status, 200)
        self.assertEqual(data["trigger"], "/")
|
||||||
|
self.assertIsInstance(data["skills"], list)
|
||||||
|
|
||||||
|
def test_claude_skills_returned(self):
|
||||||
|
"""GET /api/skills?agent=claude returns Claude skills."""
|
||||||
|
status, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertEqual(data["trigger"], "/")
|
||||||
|
names = [s["name"] for s in data["skills"]]
|
||||||
|
self.assertIn("commit", names)
|
||||||
|
self.assertIn("review-pr", names)
|
||||||
|
self.assertIn("comment", names)
|
||||||
|
|
||||||
|
def test_codex_skills_returned(self):
|
||||||
|
"""GET /api/skills?agent=codex returns Codex skills with $ trigger."""
|
||||||
|
status, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertEqual(data["trigger"], "$")
|
||||||
|
names = [s["name"] for s in data["skills"]]
|
||||||
|
self.assertIn("lint", names)
|
||||||
|
self.assertIn("deploy", names)
|
||||||
|
self.assertIn("my-script", names)
|
||||||
|
|
||||||
|
def test_unknown_agent_defaults_to_claude(self):
|
||||||
|
"""Unknown agent type defaults to claude behavior."""
|
||||||
|
status, data = _get_json(f"{self.base_url}/api/skills?agent=unknown-agent")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertEqual(data["trigger"], "/")
|
||||||
|
|
||||||
|
# -- Response format --
|
||||||
|
|
||||||
|
def test_response_has_trigger_and_skills_keys(self):
|
||||||
|
"""Response JSON has exactly trigger and skills keys."""
|
||||||
|
_, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
|
||||||
|
self.assertIn("trigger", data)
|
||||||
|
self.assertIn("skills", data)
|
||||||
|
|
||||||
|
def test_each_skill_has_name_and_description(self):
|
||||||
|
"""Each skill object has name and description fields."""
|
||||||
|
_, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
|
||||||
|
for skill in data["skills"]:
|
||||||
|
self.assertIn("name", skill)
|
||||||
|
self.assertIn("description", skill)
|
||||||
|
self.assertIsInstance(skill["name"], str)
|
||||||
|
self.assertIsInstance(skill["description"], str)
|
||||||
|
|
||||||
|
# -- Alphabetical sorting --
|
||||||
|
|
||||||
|
def test_claude_skills_alphabetically_sorted(self):
|
||||||
|
"""Claude skills are returned in alphabetical order."""
|
||||||
|
_, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
|
||||||
|
names = [s["name"] for s in data["skills"]]
|
||||||
|
self.assertEqual(names, sorted(names, key=str.lower))
|
||||||
|
|
||||||
|
def test_codex_skills_alphabetically_sorted(self):
|
||||||
|
"""Codex skills are returned in alphabetical order."""
|
||||||
|
_, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
|
||||||
|
names = [s["name"] for s in data["skills"]]
|
||||||
|
self.assertEqual(names, sorted(names, key=str.lower))
|
||||||
|
|
||||||
|
# -- Descriptions --
|
||||||
|
|
||||||
|
def test_claude_skill_descriptions(self):
|
||||||
|
"""Claude skills have correct descriptions from SKILL.md."""
|
||||||
|
_, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
|
||||||
|
by_name = {s["name"]: s["description"] for s in data["skills"]}
|
||||||
|
self.assertEqual(by_name["commit"], "Create a git commit")
|
||||||
|
self.assertEqual(by_name["review-pr"], "Review a pull request")
|
||||||
|
|
||||||
|
def test_codex_curated_descriptions(self):
|
||||||
|
"""Codex curated skills have correct descriptions from cache."""
|
||||||
|
_, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
|
||||||
|
by_name = {s["name"]: s["description"] for s in data["skills"]}
|
||||||
|
self.assertEqual(by_name["lint"], "Lint code")
|
||||||
|
self.assertEqual(by_name["deploy"], "Deploy to prod")
|
||||||
|
|
||||||
|
def test_codex_user_skill_description(self):
|
||||||
|
"""Codex user-installed skills have descriptions from SKILL.md."""
|
||||||
|
_, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
|
||||||
|
by_name = {s["name"]: s["description"] for s in data["skills"]}
|
||||||
|
self.assertEqual(by_name["my-script"], "Run my custom script")
|
||||||
|
|
||||||
|
# -- CORS --
|
||||||
|
|
||||||
|
def test_cors_header_present(self):
|
||||||
|
"""Response includes Access-Control-Allow-Origin header."""
|
||||||
|
url = f"{self.base_url}/api/skills?agent=claude"
|
||||||
|
with urllib.request.urlopen(url, timeout=5) as resp:
|
||||||
|
cors = resp.headers.get("Access-Control-Allow-Origin")
|
||||||
|
self.assertEqual(cors, "*")
|
||||||
|
|
||||||
|
|
||||||
|
class TestSkillsEndpointEmptyE2E(unittest.TestCase):
|
||||||
|
"""E2E tests: server with no skills data."""
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def setUpClass(cls):
|
||||||
|
cls.port = _find_free_port()
|
||||||
|
cls.base_url = f"http://127.0.0.1:{cls.port}"
|
||||||
|
|
||||||
|
# Empty home directory - no skills at all
|
||||||
|
cls.tmpdir = tempfile.mkdtemp()
|
||||||
|
cls.home = Path(cls.tmpdir)
|
||||||
|
|
||||||
|
cls.home_patcher = patch.object(Path, "home", return_value=cls.home)
|
||||||
|
cls.home_patcher.start()
|
||||||
|
|
||||||
|
cls.server = ThreadingHTTPServer(("127.0.0.1", cls.port), AMCHandler)
|
||||||
|
cls.server_thread = threading.Thread(target=cls.server.serve_forever)
|
||||||
|
cls.server_thread.daemon = True
|
||||||
|
cls.server_thread.start()
|
||||||
|
|
||||||
|
for _ in range(50):
|
||||||
|
try:
|
||||||
|
with socket.create_connection(("127.0.0.1", cls.port), timeout=0.1):
|
||||||
|
break
|
||||||
|
except OSError:
|
||||||
|
time.sleep(0.05)
|
||||||
|
|
||||||
|
@classmethod
|
||||||
|
def tearDownClass(cls):
|
||||||
|
cls.server.shutdown()
|
||||||
|
cls.server_thread.join(timeout=5)
|
||||||
|
cls.home_patcher.stop()
|
||||||
|
|
||||||
|
def test_empty_claude_skills(self):
|
||||||
|
"""Server with no Claude skills returns empty list."""
|
||||||
|
status, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertEqual(data["trigger"], "/")
|
||||||
|
self.assertEqual(data["skills"], [])
|
||||||
|
|
||||||
|
def test_empty_codex_skills(self):
|
||||||
|
"""Server with no Codex skills returns empty list."""
|
||||||
|
status, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
|
||||||
|
self.assertEqual(status, 200)
|
||||||
|
self.assertEqual(data["trigger"], "$")
|
||||||
|
self.assertEqual(data["skills"], [])
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
unittest.main()
|
||||||
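The response contract these tests assert can be sketched as a small helper. This is a hypothetical illustration (`build_skills_response` is not a real `amc_server` function): a `{"trigger", "skills"}` payload whose trigger depends on the agent type and whose entries are sorted case-insensitively.

```python
# Hypothetical sketch of the /api/skills response contract asserted above;
# the real amc_server handler may construct this differently.
def build_skills_response(agent, skills):
    # Codex skills are invoked with "$"; anything else defaults to "/",
    # matching the unknown-agent fallback the tests check.
    trigger = "$" if agent == "codex" else "/"
    return {
        "trigger": trigger,
        # Case-insensitive alphabetical order, as the sorting tests require.
        "skills": sorted(skills, key=lambda s: s["name"].lower()),
    }
```

Keeping the fallback-to-claude and case-insensitive sort in one place is what lets both the `?agent=unknown-agent` and sorting tests pass without special-casing.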
617	tests/e2e_spawn.sh (new executable file)
@@ -0,0 +1,617 @@
#!/usr/bin/env bash
set -euo pipefail

# E2E test script for the AMC spawn workflow.
# Tests the full flow from API call to Zellij pane creation.
#
# Usage:
#   ./tests/e2e_spawn.sh          # Safe mode (no actual spawning)
#   ./tests/e2e_spawn.sh --spawn  # Full test including real agent spawn

SERVER_URL="http://localhost:7400"
TEST_PROJECT="amc"  # Must exist in ~/projects/
AUTH_TOKEN=""
SPAWN_MODE=false
PASSED=0
FAILED=0
SKIPPED=0

# Parse args
for arg in "$@"; do
    case "$arg" in
        --spawn) SPAWN_MODE=true ;;
        *) echo "Unknown arg: $arg"; exit 2 ;;
    esac
done

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_test() { echo -e "\n${GREEN}[TEST]${NC} $1"; }
# Note: plain ((PASSED++)) returns exit status 1 when the counter is 0,
# which would abort the script under `set -e`; use assignment instead.
log_pass() { echo -e "${GREEN}[PASS]${NC} $1"; PASSED=$((PASSED + 1)); }
log_fail() { echo -e "${RED}[FAIL]${NC} $1"; FAILED=$((FAILED + 1)); }
log_skip() { echo -e "${YELLOW}[SKIP]${NC} $1"; SKIPPED=$((SKIPPED + 1)); }

# Track spawned panes for cleanup
SPAWNED_PANE_NAMES=()

cleanup() {
    if [[ ${#SPAWNED_PANE_NAMES[@]} -gt 0 ]]; then
        log_info "Cleaning up spawned panes..."
        for pane_name in "${SPAWNED_PANE_NAMES[@]}"; do
            # Best-effort: close panes we spawned during tests
            zellij --session infra action close-pane --name "$pane_name" 2>/dev/null || true
        done
    fi
}
trap cleanup EXIT

# ---------------------------------------------------------------------------
# Pre-flight checks
# ---------------------------------------------------------------------------

preflight() {
    log_test "Pre-flight checks"

    # curl available?
    if ! command -v curl &>/dev/null; then
        log_error "curl not found"
        exit 1
    fi

    # jq available?
    if ! command -v jq &>/dev/null; then
        log_error "jq not found (required for JSON assertions)"
        exit 1
    fi

    # Server running?
    if ! curl -sf "${SERVER_URL}/api/health" >/dev/null 2>&1; then
        log_error "Server not running at ${SERVER_URL}"
        log_error "Start with: python -m amc_server.server"
        exit 1
    fi

    # Test project exists?
    if [[ ! -d "$HOME/projects/${TEST_PROJECT}" ]]; then
        log_error "Test project '${TEST_PROJECT}' not found at ~/projects/${TEST_PROJECT}"
        exit 1
    fi

    log_pass "Pre-flight checks passed"
}

# ---------------------------------------------------------------------------
# Extract auth token from dashboard HTML
# ---------------------------------------------------------------------------

extract_auth_token() {
    log_test "Extract auth token from dashboard"

    local html
    html=$(curl -sf "${SERVER_URL}/")

    # `|| true` keeps `set -e` from aborting before we can print the
    # helpful error message when the token is missing.
    AUTH_TOKEN=$(echo "$html" | grep -o 'AMC_AUTH_TOKEN = "[^"]*"' | cut -d'"' -f2 || true)
    if [[ -z "$AUTH_TOKEN" ]]; then
        log_error "Could not extract auth token from dashboard HTML"
        log_error "Check that index.html contains <!-- AMC_AUTH_TOKEN --> placeholder"
        exit 1
    fi

    log_pass "Auth token extracted (${AUTH_TOKEN:0:8}...)"
}

# ---------------------------------------------------------------------------
# Test: GET /api/health
# ---------------------------------------------------------------------------

test_health_endpoint() {
    log_test "GET /api/health"

    local response
    response=$(curl -sf "${SERVER_URL}/api/health")

    local ok
    ok=$(echo "$response" | jq -r '.ok')
    if [[ "$ok" != "true" ]]; then
        log_fail "Health endpoint returned ok=$ok"
        return
    fi

    # Must include zellij_available and zellij_session fields
    local has_zellij_available has_zellij_session
    has_zellij_available=$(echo "$response" | jq 'has("zellij_available")')
    has_zellij_session=$(echo "$response" | jq 'has("zellij_session")')

    if [[ "$has_zellij_available" != "true" || "$has_zellij_session" != "true" ]]; then
        log_fail "Health response missing expected fields: $response"
        return
    fi

    local zellij_available
    zellij_available=$(echo "$response" | jq -r '.zellij_available')
    log_pass "Health OK (zellij_available=$zellij_available)"
}

# ---------------------------------------------------------------------------
# Test: GET /api/projects
# ---------------------------------------------------------------------------

test_projects_endpoint() {
    log_test "GET /api/projects"

    local response
    response=$(curl -sf "${SERVER_URL}/api/projects")

    local ok
    ok=$(echo "$response" | jq -r '.ok')
    if [[ "$ok" != "true" ]]; then
        log_fail "Projects endpoint returned ok=$ok"
        return
    fi

    local project_count
    project_count=$(echo "$response" | jq '.projects | length')
    if [[ "$project_count" -lt 1 ]]; then
        log_fail "No projects returned (expected at least 1)"
        return
    fi

    # Verify test project is in the list
    local has_test_project
    has_test_project=$(echo "$response" | jq --arg p "$TEST_PROJECT" '[.projects[] | select(. == $p)] | length')
    if [[ "$has_test_project" -lt 1 ]]; then
        log_fail "Test project '$TEST_PROJECT' not in projects list"
        return
    fi

    log_pass "Projects OK ($project_count projects, '$TEST_PROJECT' present)"
}

# ---------------------------------------------------------------------------
# Test: POST /api/projects/refresh
# ---------------------------------------------------------------------------

test_projects_refresh() {
    log_test "POST /api/projects/refresh"

    local response
    response=$(curl -sf -X POST "${SERVER_URL}/api/projects/refresh")

    local ok
    ok=$(echo "$response" | jq -r '.ok')
    if [[ "$ok" != "true" ]]; then
        log_fail "Projects refresh returned ok=$ok"
        return
    fi

    local project_count
    project_count=$(echo "$response" | jq '.projects | length')
    log_pass "Projects refresh OK ($project_count projects)"
}

# ---------------------------------------------------------------------------
# Test: Spawn without auth (should return 401)
# ---------------------------------------------------------------------------

test_spawn_no_auth() {
    log_test "POST /api/spawn without auth (expect 401)"

    local http_code
    http_code=$(curl -s -o /dev/null -w '%{http_code}' -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -d '{"project":"amc","agent_type":"claude"}')

    if [[ "$http_code" != "401" ]]; then
        log_fail "Expected HTTP 401, got $http_code"
        return
    fi

    # Also verify the JSON error code
    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -d '{"project":"amc","agent_type":"claude"}')

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" != "UNAUTHORIZED" ]]; then
        log_fail "Expected code UNAUTHORIZED, got $code"
        return
    fi

    log_pass "Correctly rejected unauthorized request (401/UNAUTHORIZED)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with wrong token (should return 401)
# ---------------------------------------------------------------------------

test_spawn_wrong_token() {
    log_test "POST /api/spawn with wrong token (expect 401)"

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer totally-wrong-token" \
        -d '{"project":"amc","agent_type":"claude"}')

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" != "UNAUTHORIZED" ]]; then
        log_fail "Expected UNAUTHORIZED, got $code"
        return
    fi

    log_pass "Correctly rejected wrong token (UNAUTHORIZED)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with malformed auth (no Bearer prefix)
# ---------------------------------------------------------------------------

test_spawn_malformed_auth() {
    log_test "POST /api/spawn with malformed auth header"

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Token ${AUTH_TOKEN}" \
        -d '{"project":"amc","agent_type":"claude"}')

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" != "UNAUTHORIZED" ]]; then
        log_fail "Expected UNAUTHORIZED for malformed auth, got $code"
        return
    fi

    log_pass "Correctly rejected malformed auth header"
}

# ---------------------------------------------------------------------------
# Test: Spawn with invalid JSON body
# ---------------------------------------------------------------------------

test_spawn_invalid_json() {
    log_test "POST /api/spawn with invalid JSON"

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer ${AUTH_TOKEN}" \
        -d 'not valid json!!!')

    local ok
    ok=$(echo "$response" | jq -r '.ok')
    if [[ "$ok" != "false" ]]; then
        log_fail "Expected ok=false for invalid JSON, got ok=$ok"
        return
    fi

    log_pass "Correctly rejected invalid JSON body"
}

# ---------------------------------------------------------------------------
# Test: Spawn with path traversal (should return 400/INVALID_PROJECT)
# ---------------------------------------------------------------------------

test_spawn_path_traversal() {
    log_test "POST /api/spawn with path traversal"

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer ${AUTH_TOKEN}" \
        -d '{"project":"../etc/passwd","agent_type":"claude"}')

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" != "INVALID_PROJECT" ]]; then
        log_fail "Expected INVALID_PROJECT for path traversal, got $code"
        return
    fi

    log_pass "Correctly rejected path traversal (INVALID_PROJECT)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with nonexistent project
# ---------------------------------------------------------------------------

test_spawn_nonexistent_project() {
    log_test "POST /api/spawn with nonexistent project"

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer ${AUTH_TOKEN}" \
        -d '{"project":"this-project-does-not-exist-xyz","agent_type":"claude"}')

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" != "PROJECT_NOT_FOUND" ]]; then
        log_fail "Expected PROJECT_NOT_FOUND, got $code"
        return
    fi

    log_pass "Correctly rejected nonexistent project (PROJECT_NOT_FOUND)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with invalid agent type
# ---------------------------------------------------------------------------

test_spawn_invalid_agent_type() {
    log_test "POST /api/spawn with invalid agent type"

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer ${AUTH_TOKEN}" \
        -d "{\"project\":\"${TEST_PROJECT}\",\"agent_type\":\"gpt5\"}")

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" != "INVALID_AGENT_TYPE" ]]; then
        log_fail "Expected INVALID_AGENT_TYPE, got $code"
        return
    fi

    log_pass "Correctly rejected invalid agent type (INVALID_AGENT_TYPE)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with missing project field
# ---------------------------------------------------------------------------

test_spawn_missing_project() {
    log_test "POST /api/spawn with missing project"

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer ${AUTH_TOKEN}" \
        -d '{"agent_type":"claude"}')

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" != "MISSING_PROJECT" ]]; then
        log_fail "Expected MISSING_PROJECT, got $code"
        return
    fi

    log_pass "Correctly rejected missing project field (MISSING_PROJECT)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with backslash in project name
# ---------------------------------------------------------------------------

test_spawn_backslash_project() {
    log_test "POST /api/spawn with backslash in project name"

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer ${AUTH_TOKEN}" \
        -d '{"project":"foo\\bar","agent_type":"claude"}')

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" != "INVALID_PROJECT" ]]; then
        log_fail "Expected INVALID_PROJECT for backslash, got $code"
        return
    fi

    log_pass "Correctly rejected backslash in project name (INVALID_PROJECT)"
}

# ---------------------------------------------------------------------------
# Test: CORS preflight for /api/spawn
# ---------------------------------------------------------------------------

test_cors_preflight() {
    log_test "OPTIONS /api/spawn (CORS preflight)"

    local http_code headers
    headers=$(curl -sI -X OPTIONS "${SERVER_URL}/api/spawn" 2>/dev/null)
    http_code=$(echo "$headers" | head -1 | grep -o '[0-9][0-9][0-9]' | head -1)

    if [[ "$http_code" != "204" ]]; then
        log_fail "Expected HTTP 204 for OPTIONS, got $http_code"
        return
    fi

    if ! echo "$headers" | grep -qi 'Access-Control-Allow-Methods'; then
        log_fail "Missing Access-Control-Allow-Methods header"
        return
    fi

    if ! echo "$headers" | grep -qi 'Authorization'; then
        log_fail "Authorization not in Access-Control-Allow-Headers"
        return
    fi

    log_pass "CORS preflight OK (204 with correct headers)"
}

# ---------------------------------------------------------------------------
# Test: Actual spawn (only with --spawn flag)
# ---------------------------------------------------------------------------

test_spawn_valid() {
    if [[ "$SPAWN_MODE" != "true" ]]; then
        log_skip "Actual spawn test (pass --spawn to enable)"
        return
    fi

    log_test "POST /api/spawn with valid project (LIVE)"

    # Check Zellij session first
    if ! zellij list-sessions 2>/dev/null | grep -q '^infra'; then
        log_skip "Zellij session 'infra' not found - cannot test live spawn"
        return
    fi

    # Count session files before
    local sessions_dir="$HOME/.local/share/amc/sessions"
    local count_before=0
    if [[ -d "$sessions_dir" ]]; then
        count_before=$(find "$sessions_dir" -maxdepth 1 -name '*.json' 2>/dev/null | wc -l | tr -d ' ')
    fi

    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer ${AUTH_TOKEN}" \
        -d "{\"project\":\"${TEST_PROJECT}\",\"agent_type\":\"claude\"}")

    local ok
    ok=$(echo "$response" | jq -r '.ok')
    if [[ "$ok" != "true" ]]; then
        local error_code
        error_code=$(echo "$response" | jq -r '.code // .error')
        log_fail "Spawn failed: $error_code"
        return
    fi

    # Verify spawn_id is returned
    local spawn_id
    spawn_id=$(echo "$response" | jq -r '.spawn_id')
    if [[ -z "$spawn_id" || "$spawn_id" == "null" ]]; then
        log_fail "No spawn_id in response"
        return
    fi

    # Track for cleanup
    SPAWNED_PANE_NAMES+=("claude-${TEST_PROJECT}")

    # Verify session_file_found field
    local session_found
    session_found=$(echo "$response" | jq -r '.session_file_found')
    log_info "session_file_found=$session_found, spawn_id=${spawn_id:0:8}..."

    log_pass "Spawn successful (spawn_id=${spawn_id:0:8}...)"
}

# ---------------------------------------------------------------------------
# Test: Rate limiting (only with --spawn flag)
# ---------------------------------------------------------------------------

test_rate_limiting() {
    if [[ "$SPAWN_MODE" != "true" ]]; then
        log_skip "Rate limiting test (pass --spawn to enable)"
        return
    fi

    log_test "Rate limiting on rapid spawn"

    # Immediately try to spawn the same project again
    local response
    response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
        -H "Content-Type: application/json" \
        -H "Authorization: Bearer ${AUTH_TOKEN}" \
        -d "{\"project\":\"${TEST_PROJECT}\",\"agent_type\":\"claude\"}")

    local code
    code=$(echo "$response" | jq -r '.code')
    if [[ "$code" == "RATE_LIMITED" ]]; then
        log_pass "Rate limiting active (RATE_LIMITED returned)"
    else
        local ok
        ok=$(echo "$response" | jq -r '.ok')
        log_warn "Rate limiting not triggered (ok=$ok, code=$code) - cooldown may have expired"
        log_pass "Rate limiting test completed (non-deterministic)"
    fi
}

# ---------------------------------------------------------------------------
# Test: Dashboard shows agent after spawn (only with --spawn flag)
# ---------------------------------------------------------------------------

test_dashboard_shows_agent() {
    if [[ "$SPAWN_MODE" != "true" ]]; then
        log_skip "Dashboard agent visibility test (pass --spawn to enable)"
        return
    fi

    log_test "Dashboard /api/state includes spawned agent"

    # Give the session a moment to register
    sleep 2

    local response
    response=$(curl -sf "${SERVER_URL}/api/state")

    local session_count
    session_count=$(echo "$response" | jq '.sessions | length')

    if [[ "$session_count" -gt 0 ]]; then
        log_pass "Dashboard shows $session_count session(s)"
    else
        log_warn "No sessions visible yet (agent may still be starting)"
        log_pass "Dashboard state endpoint responsive"
    fi
}

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

main() {
    echo "========================================="
    echo " AMC Spawn Workflow E2E Tests"
    echo "========================================="
    echo ""
    if [[ "$SPAWN_MODE" == "true" ]]; then
        log_warn "SPAWN MODE: will create real Zellij panes"
    else
        log_info "Safe mode (no actual spawning). Pass --spawn to test live spawn."
    fi

    preflight
    extract_auth_token

    # Read-only endpoint tests
    test_health_endpoint
    test_projects_endpoint
    test_projects_refresh

    # Auth / validation tests (no side effects)
    test_spawn_no_auth
    test_spawn_wrong_token
    test_spawn_malformed_auth
    test_spawn_invalid_json
    test_spawn_path_traversal
    test_spawn_nonexistent_project
    test_spawn_invalid_agent_type
    test_spawn_missing_project
    test_spawn_backslash_project

    # CORS
    test_cors_preflight

    # Live spawn tests (only with --spawn)
    test_spawn_valid
    test_rate_limiting
    test_dashboard_shows_agent

    # Summary
    echo ""
    echo "========================================="
    echo " Results: ${PASSED} passed, ${FAILED} failed, ${SKIPPED} skipped"
    echo "========================================="

    if [[ "$FAILED" -gt 0 ]]; then
        exit 1
    fi
}

main "$@"
@@ -1,18 +1,18 @@
 import unittest
 from unittest.mock import patch

-from amc_server.context import _resolve_zellij_bin
+from amc_server.zellij import _resolve_zellij_bin


 class ContextTests(unittest.TestCase):
     def test_resolve_zellij_bin_prefers_which(self):
-        with patch("amc_server.context.shutil.which", return_value="/custom/bin/zellij"):
+        with patch("amc_server.zellij.shutil.which", return_value="/custom/bin/zellij"):
             self.assertEqual(_resolve_zellij_bin(), "/custom/bin/zellij")

     def test_resolve_zellij_bin_falls_back_to_default_name(self):
-        with patch("amc_server.context.shutil.which", return_value=None), patch(
-            "amc_server.context.Path.exists", return_value=False
-        ), patch("amc_server.context.Path.is_file", return_value=False):
+        with patch("amc_server.zellij.shutil.which", return_value=None), patch(
+            "amc_server.zellij.Path.exists", return_value=False
+        ), patch("amc_server.zellij.Path.is_file", return_value=False):
             self.assertEqual(_resolve_zellij_bin(), "zellij")
@@ -172,5 +172,358 @@ class SessionControlMixinTests(unittest.TestCase):
        handler._try_write_chars_inject.assert_called_once()


class TestParsePaneId(unittest.TestCase):
    """Tests for _parse_pane_id edge cases."""

    def setUp(self):
        self.handler = DummyControlHandler()

    def test_empty_string_returns_none(self):
        self.assertIsNone(self.handler._parse_pane_id(""))

    def test_none_returns_none(self):
        self.assertIsNone(self.handler._parse_pane_id(None))

    def test_direct_int_string_parses(self):
        self.assertEqual(self.handler._parse_pane_id("42"), 42)

    def test_terminal_format_parses(self):
        self.assertEqual(self.handler._parse_pane_id("terminal_5"), 5)

    def test_plugin_format_parses(self):
        self.assertEqual(self.handler._parse_pane_id("plugin_3"), 3)

    def test_unknown_prefix_returns_none(self):
        self.assertIsNone(self.handler._parse_pane_id("pane_7"))

    def test_non_numeric_suffix_returns_none(self):
        self.assertIsNone(self.handler._parse_pane_id("terminal_abc"))

    def test_too_many_underscores_returns_none(self):
        self.assertIsNone(self.handler._parse_pane_id("terminal_5_extra"))

    def test_negative_int_parses(self):
        # Edge case: negative numbers
        self.assertEqual(self.handler._parse_pane_id("-1"), -1)

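The edge cases above pin down the pane-id grammar: bare integers (including negatives) and `terminal_<n>` / `plugin_<n>` forms parse; everything else yields `None`. A minimal sketch consistent with these tests (hypothetical free-function name; the real `_parse_pane_id` is a method on the control mixin):

```python
def parse_pane_id(value):
    """Parse a Zellij pane id like "42", "terminal_5", or "plugin_3"."""
    if not value:
        return None  # "" and None both map to None
    try:
        return int(value)  # plain integers, including negatives like "-1"
    except ValueError:
        pass
    parts = value.split("_")
    if len(parts) != 2 or parts[0] not in ("terminal", "plugin"):
        return None  # unknown prefix ("pane_7") or extra underscores
    try:
        return int(parts[1])
    except ValueError:
        return None  # non-numeric suffix ("terminal_abc")
```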

class TestGetSubmitEnterDelaySec(unittest.TestCase):
    """Tests for _get_submit_enter_delay_sec edge cases."""

    def setUp(self):
        self.handler = DummyControlHandler()

    def test_unset_env_returns_default(self):
        with patch.dict(os.environ, {}, clear=True):
            result = self.handler._get_submit_enter_delay_sec()
            self.assertEqual(result, 0.20)

    def test_empty_string_returns_default(self):
        with patch.dict(os.environ, {"AMC_SUBMIT_ENTER_DELAY_MS": ""}, clear=True):
            result = self.handler._get_submit_enter_delay_sec()
            self.assertEqual(result, 0.20)

    def test_whitespace_only_returns_default(self):
        with patch.dict(os.environ, {"AMC_SUBMIT_ENTER_DELAY_MS": " "}, clear=True):
            result = self.handler._get_submit_enter_delay_sec()
            self.assertEqual(result, 0.20)

    def test_negative_value_returns_zero(self):
        with patch.dict(os.environ, {"AMC_SUBMIT_ENTER_DELAY_MS": "-100"}, clear=True):
            result = self.handler._get_submit_enter_delay_sec()
            self.assertEqual(result, 0.0)

    def test_value_over_2000_clamped(self):
        with patch.dict(os.environ, {"AMC_SUBMIT_ENTER_DELAY_MS": "5000"}, clear=True):
            result = self.handler._get_submit_enter_delay_sec()
            self.assertEqual(result, 2.0)  # 2000ms = 2.0s

    def test_valid_ms_converted_to_seconds(self):
        with patch.dict(os.environ, {"AMC_SUBMIT_ENTER_DELAY_MS": "500"}, clear=True):
            result = self.handler._get_submit_enter_delay_sec()
            self.assertEqual(result, 0.5)

    def test_float_value_works(self):
        with patch.dict(os.environ, {"AMC_SUBMIT_ENTER_DELAY_MS": "150.5"}, clear=True):
            result = self.handler._get_submit_enter_delay_sec()
            self.assertAlmostEqual(result, 0.1505)

    def test_non_numeric_returns_default(self):
        with patch.dict(os.environ, {"AMC_SUBMIT_ENTER_DELAY_MS": "fast"}, clear=True):
            result = self.handler._get_submit_enter_delay_sec()
            self.assertEqual(result, 0.20)

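These tests fully specify the env-var handling: blank or unparsable values fall back to the 200 ms default, and parsed values are clamped to the [0, 2000] ms range before converting to seconds. A sketch under those assumptions (hypothetical free function; the real helper is a mixin method):

```python
import os

def get_submit_enter_delay_sec(default_ms=200.0, max_ms=2000.0):
    """Read AMC_SUBMIT_ENTER_DELAY_MS and return a delay in seconds."""
    raw = os.environ.get("AMC_SUBMIT_ENTER_DELAY_MS", "").strip()
    if not raw:
        return default_ms / 1000.0  # unset, empty, or whitespace-only
    try:
        ms = float(raw)
    except ValueError:
        return default_ms / 1000.0  # non-numeric, e.g. "fast"
    ms = max(0.0, min(ms, max_ms))  # clamp: negatives -> 0, >2000 -> 2000
    return ms / 1000.0
```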

class TestAllowUnsafeWriteCharsFallback(unittest.TestCase):
    """Tests for _allow_unsafe_write_chars_fallback edge cases."""

    def setUp(self):
        self.handler = DummyControlHandler()

    def test_unset_returns_false(self):
        with patch.dict(os.environ, {}, clear=True):
            self.assertFalse(self.handler._allow_unsafe_write_chars_fallback())

    def test_empty_returns_false(self):
        with patch.dict(os.environ, {"AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK": ""}, clear=True):
            self.assertFalse(self.handler._allow_unsafe_write_chars_fallback())

    def test_one_returns_true(self):
        with patch.dict(os.environ, {"AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK": "1"}, clear=True):
            self.assertTrue(self.handler._allow_unsafe_write_chars_fallback())

    def test_true_returns_true(self):
        with patch.dict(os.environ, {"AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK": "true"}, clear=True):
            self.assertTrue(self.handler._allow_unsafe_write_chars_fallback())

    def test_yes_returns_true(self):
        with patch.dict(os.environ, {"AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK": "yes"}, clear=True):
            self.assertTrue(self.handler._allow_unsafe_write_chars_fallback())

    def test_on_returns_true(self):
        with patch.dict(os.environ, {"AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK": "on"}, clear=True):
            self.assertTrue(self.handler._allow_unsafe_write_chars_fallback())

    def test_case_insensitive(self):
        with patch.dict(os.environ, {"AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK": "TRUE"}, clear=True):
            self.assertTrue(self.handler._allow_unsafe_write_chars_fallback())

    def test_random_string_returns_false(self):
        with patch.dict(os.environ, {"AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK": "maybe"}, clear=True):
            self.assertFalse(self.handler._allow_unsafe_write_chars_fallback())


class TestDismissSession(unittest.TestCase):
    """Tests for _dismiss_session edge cases."""

    def test_deletes_existing_session_file(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            sessions_dir.mkdir(exist_ok=True)
            session_file = sessions_dir / "abc123.json"
            session_file.write_text('{"session_id": "abc123"}')

            handler = DummyControlHandler()
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._dismiss_session("abc123")

            self.assertFalse(session_file.exists())
            self.assertEqual(handler.sent, [(200, {"ok": True})])

    def test_handles_missing_file_gracefully(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)

            handler = DummyControlHandler()
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._dismiss_session("nonexistent")

            # Should still return success
            self.assertEqual(handler.sent, [(200, {"ok": True})])

    def test_path_traversal_sanitized(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            sessions_dir.mkdir(exist_ok=True)
            # Create a file that should NOT be deleted (unused - documents test intent)
            _secret_file = Path(tmpdir).parent / "secret.json"

            handler = DummyControlHandler()
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._dismiss_session("../secret")

            # Secret file should not have been targeted
            # (if it existed, it would still exist)

    def test_tracks_dismissed_codex_session(self):
        from amc_server.agents import _dismissed_codex_ids
        _dismissed_codex_ids.clear()

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)

            handler = DummyControlHandler()
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._dismiss_session("codex-session-123")

        self.assertIn("codex-session-123", _dismissed_codex_ids)
        _dismissed_codex_ids.clear()


class TestTryWriteCharsInject(unittest.TestCase):
    """Tests for _try_write_chars_inject edge cases."""

    def setUp(self):
        self.handler = DummyControlHandler()

    def test_successful_write_without_enter(self):
        completed = subprocess.CompletedProcess(args=[], returncode=0, stdout="", stderr="")

        with patch.object(control, "ZELLIJ_BIN", "/usr/bin/zellij"), \
                patch("amc_server.mixins.control.subprocess.run", return_value=completed) as run_mock:
            result = self.handler._try_write_chars_inject({}, "infra", "hello", send_enter=False)

        self.assertEqual(result, {"ok": True})
        # Should only be called once (no Enter)
        self.assertEqual(run_mock.call_count, 1)

    def test_successful_write_with_enter(self):
        completed = subprocess.CompletedProcess(args=[], returncode=0, stdout="", stderr="")

        with patch.object(control, "ZELLIJ_BIN", "/usr/bin/zellij"), \
                patch("amc_server.mixins.control.subprocess.run", return_value=completed) as run_mock:
            result = self.handler._try_write_chars_inject({}, "infra", "hello", send_enter=True)

        self.assertEqual(result, {"ok": True})
        # Should be called twice (write-chars + write Enter)
        self.assertEqual(run_mock.call_count, 2)

    def test_write_chars_failure_returns_error(self):
        failed = subprocess.CompletedProcess(args=[], returncode=1, stdout="", stderr="write failed")

        with patch.object(control, "ZELLIJ_BIN", "/usr/bin/zellij"), \
                patch("amc_server.mixins.control.subprocess.run", return_value=failed):
            result = self.handler._try_write_chars_inject({}, "infra", "hello", send_enter=False)

        self.assertFalse(result["ok"])
        self.assertIn("write", result["error"].lower())

    def test_timeout_returns_error(self):
        with patch.object(control, "ZELLIJ_BIN", "/usr/bin/zellij"), \
                patch("amc_server.mixins.control.subprocess.run",
                      side_effect=subprocess.TimeoutExpired("cmd", 2)):
            result = self.handler._try_write_chars_inject({}, "infra", "hello", send_enter=False)

        self.assertFalse(result["ok"])
        self.assertIn("timed out", result["error"].lower())

    def test_zellij_not_found_returns_error(self):
        with patch.object(control, "ZELLIJ_BIN", "/nonexistent/zellij"), \
                patch("amc_server.mixins.control.subprocess.run",
                      side_effect=FileNotFoundError("No such file")):
            result = self.handler._try_write_chars_inject({}, "infra", "hello", send_enter=False)

        self.assertFalse(result["ok"])
        self.assertIn("not found", result["error"].lower())


class TestRespondToSessionEdgeCases(unittest.TestCase):
    """Additional edge case tests for _respond_to_session."""

    def _write_session(self, sessions_dir, session_id, **kwargs):
        sessions_dir.mkdir(parents=True, exist_ok=True)
        session_file = sessions_dir / f"{session_id}.json"
        data = {"session_id": session_id, **kwargs}
        session_file.write_text(json.dumps(data))

    def test_invalid_json_body_returns_400(self):
        handler = DummyControlHandler.__new__(DummyControlHandler)
        handler.headers = {"Content-Length": "10"}
        handler.rfile = io.BytesIO(b"not json!!")
        handler.sent = []
        handler.errors = []

        with tempfile.TemporaryDirectory() as tmpdir:
            with patch.object(control, "SESSIONS_DIR", Path(tmpdir)):
                handler._respond_to_session("test")

        self.assertEqual(handler.errors, [(400, "Invalid JSON body")])

    def test_non_dict_body_returns_400(self):
        raw = b'"just a string"'
        handler = DummyControlHandler.__new__(DummyControlHandler)
        handler.headers = {"Content-Length": str(len(raw))}
        handler.rfile = io.BytesIO(raw)
        handler.sent = []
        handler.errors = []

        with tempfile.TemporaryDirectory() as tmpdir:
            with patch.object(control, "SESSIONS_DIR", Path(tmpdir)):
                handler._respond_to_session("test")

        self.assertEqual(handler.errors, [(400, "Invalid JSON body")])

    def test_empty_text_returns_400(self):
        handler = DummyControlHandler({"text": ""})

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            self._write_session(sessions_dir, "test", zellij_session="s", zellij_pane="1")
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._respond_to_session("test")

        self.assertEqual(handler.errors, [(400, "Missing or empty 'text' field")])

    def test_whitespace_only_text_returns_400(self):
        handler = DummyControlHandler({"text": " \n\t "})

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            self._write_session(sessions_dir, "test", zellij_session="s", zellij_pane="1")
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._respond_to_session("test")

        self.assertEqual(handler.errors, [(400, "Missing or empty 'text' field")])

    def test_non_string_text_returns_400(self):
        handler = DummyControlHandler({"text": 123})

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            self._write_session(sessions_dir, "test", zellij_session="s", zellij_pane="1")
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._respond_to_session("test")

        self.assertEqual(handler.errors, [(400, "Missing or empty 'text' field")])

    def test_missing_zellij_session_returns_400(self):
        handler = DummyControlHandler({"text": "hello"})

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            self._write_session(sessions_dir, "test", zellij_session="", zellij_pane="1")
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._respond_to_session("test")

        self.assertIn("missing Zellij pane info", handler.errors[0][1])

    def test_missing_zellij_pane_returns_400(self):
        handler = DummyControlHandler({"text": "hello"})

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            self._write_session(sessions_dir, "test", zellij_session="sess", zellij_pane="")
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._respond_to_session("test")

        self.assertIn("missing Zellij pane info", handler.errors[0][1])

    def test_invalid_pane_format_returns_400(self):
        handler = DummyControlHandler({"text": "hello"})

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            self._write_session(sessions_dir, "test", zellij_session="sess", zellij_pane="invalid_format_here")
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._respond_to_session("test")

        self.assertIn("Invalid pane format", handler.errors[0][1])

    def test_invalid_option_count_treated_as_zero(self):
        # optionCount that can't be parsed as int should default to 0
        handler = DummyControlHandler({"text": "hello", "freeform": True, "optionCount": "not a number"})

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir)
            self._write_session(sessions_dir, "test", zellij_session="sess", zellij_pane="5")
            with patch.object(control, "SESSIONS_DIR", sessions_dir):
                handler._inject_text_then_enter = MagicMock(return_value={"ok": True})
                handler._respond_to_session("test")

        # With optionCount=0, freeform mode shouldn't trigger the "other" selection
        # It should go straight to inject_text_then_enter
        handler._inject_text_then_enter.assert_called_once()


if __name__ == "__main__":
    unittest.main()
tests/test_conversation.py (new file, +481)
@@ -0,0 +1,481 @@
"""Tests for mixins/conversation.py edge cases.

Unit tests for conversation parsing from Claude Code and Codex JSONL files.
"""

import json
import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch

from amc_server.mixins.conversation import ConversationMixin
from amc_server.mixins.parsing import SessionParsingMixin


class DummyConversationHandler(ConversationMixin, SessionParsingMixin):
    """Minimal handler for testing conversation mixin."""

    def __init__(self):
        self.sent_responses = []

    def _send_json(self, code, payload):
        self.sent_responses.append((code, payload))


class TestParseCodexArguments(unittest.TestCase):
    """Tests for _parse_codex_arguments edge cases."""

    def setUp(self):
        self.handler = DummyConversationHandler()

    def test_dict_input_returned_as_is(self):
        result = self.handler._parse_codex_arguments({"key": "value"})
        self.assertEqual(result, {"key": "value"})

    def test_empty_dict_returned_as_is(self):
        result = self.handler._parse_codex_arguments({})
        self.assertEqual(result, {})

    def test_json_string_parsed(self):
        result = self.handler._parse_codex_arguments('{"key": "value"}')
        self.assertEqual(result, {"key": "value"})

    def test_invalid_json_string_returns_raw(self):
        result = self.handler._parse_codex_arguments("not valid json")
        self.assertEqual(result, {"raw": "not valid json"})

    def test_empty_string_returns_raw(self):
        result = self.handler._parse_codex_arguments("")
        self.assertEqual(result, {"raw": ""})

    def test_none_returns_empty_dict(self):
        result = self.handler._parse_codex_arguments(None)
        self.assertEqual(result, {})

    def test_int_returns_empty_dict(self):
        result = self.handler._parse_codex_arguments(42)
        self.assertEqual(result, {})

    def test_list_returns_empty_dict(self):
        result = self.handler._parse_codex_arguments([1, 2, 3])
        self.assertEqual(result, {})

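Together these cases determine the whole contract: dicts pass through untouched, JSON strings are decoded, undecodable strings (including the empty string) are wrapped as `{"raw": ...}`, and any other type collapses to `{}`. A sketch matching the tests above (hypothetical free function; the real `_parse_codex_arguments` is a `ConversationMixin` method, and behavior for JSON strings that decode to non-dicts is not pinned down by the tests):

```python
import json

def parse_codex_arguments(value):
    """Normalize a Codex tool-call "arguments" field to a dict."""
    if isinstance(value, dict):
        return value  # already structured; returned as-is
    if isinstance(value, str):
        try:
            return json.loads(value)  # e.g. '{"key": "value"}'
        except json.JSONDecodeError:
            return {"raw": value}  # invalid JSON (including "") kept verbatim
    return {}  # None, ints, lists, etc.
```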

class TestServeEvents(unittest.TestCase):
    """Tests for _serve_events edge cases."""

    def setUp(self):
        self.handler = DummyConversationHandler()

    def test_path_traversal_sanitized(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            events_dir = Path(tmpdir)
            # Create a file that path traversal might try to access (unused - documents intent)
            _secret_file = Path(tmpdir).parent / "secret.jsonl"

            with patch("amc_server.mixins.conversation.EVENTS_DIR", events_dir):
                # Try path traversal
                self.handler._serve_events("../secret")

            # Should have served response with sanitized id
            self.assertEqual(len(self.handler.sent_responses), 1)
            code, payload = self.handler.sent_responses[0]
            self.assertEqual(code, 200)
            self.assertEqual(payload["session_id"], "secret")
            self.assertEqual(payload["events"], [])

    def test_nonexistent_file_returns_empty_events(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            with patch("amc_server.mixins.conversation.EVENTS_DIR", Path(tmpdir)):
                self.handler._serve_events("nonexistent")

            code, payload = self.handler.sent_responses[0]
            self.assertEqual(payload["events"], [])

    def test_empty_file_returns_empty_events(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            events_dir = Path(tmpdir)
            event_file = events_dir / "session123.jsonl"
            event_file.write_text("")

            with patch("amc_server.mixins.conversation.EVENTS_DIR", events_dir):
                self.handler._serve_events("session123")

            code, payload = self.handler.sent_responses[0]
            self.assertEqual(payload["events"], [])

    def test_invalid_json_lines_skipped(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            events_dir = Path(tmpdir)
            event_file = events_dir / "session123.jsonl"
            event_file.write_text('{"valid": "event"}\nnot json\n{"another": "event"}\n')

            with patch("amc_server.mixins.conversation.EVENTS_DIR", events_dir):
                self.handler._serve_events("session123")

            code, payload = self.handler.sent_responses[0]
            self.assertEqual(len(payload["events"]), 2)
            self.assertEqual(payload["events"][0], {"valid": "event"})
            self.assertEqual(payload["events"][1], {"another": "event"})

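The behavior these tests exercise can be sketched as a small pure function: sanitize the session id to its basename (so `"../secret"` becomes `"secret"`), then read the JSONL file line by line, tolerating missing files, empty files, and malformed lines. This is a hypothetical standalone version; the real `_serve_events` is a mixin method that sends the payload via `_send_json`:

```python
import json
from pathlib import Path

def serve_events(events_dir: Path, session_id: str) -> dict:
    """Return logged events for a session, tolerating bad input."""
    safe_id = Path(session_id).name  # strips path traversal: "../secret" -> "secret"
    events = []
    event_file = events_dir / f"{safe_id}.jsonl"
    if event_file.exists():
        for line in event_file.read_text().splitlines():
            line = line.strip()
            if not line:
                continue
            try:
                events.append(json.loads(line))
            except json.JSONDecodeError:
                continue  # skip malformed lines instead of failing the request
    return {"session_id": safe_id, "events": events}
```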
class TestParseClaudeConversation(unittest.TestCase):
|
||||||
|
"""Tests for _parse_claude_conversation edge cases."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.handler = DummyConversationHandler()
|
||||||
|
|
||||||
|
def test_user_message_with_array_content_skipped(self):
|
||||||
|
# Array content is tool results, not human messages
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
|
||||||
|
f.write(json.dumps({
|
||||||
|
"type": "user",
|
||||||
|
"message": {"content": [{"type": "tool_result"}]}
|
||||||
|
}) + "\n")
|
||||||
|
path = Path(f.name)
|
||||||
|
|
||||||
|
try:
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=path):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(messages, [])
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_user_message_with_string_content_included(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
|
||||||
|
f.write(json.dumps({
|
||||||
|
"type": "user",
|
||||||
|
"timestamp": "2024-01-01T00:00:00Z",
|
||||||
|
"message": {"content": "Hello, Claude!"}
|
||||||
|
}) + "\n")
|
||||||
|
path = Path(f.name)
|
||||||
|
|
||||||
|
try:
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=path):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(len(messages), 1)
|
||||||
|
self.assertEqual(messages[0]["role"], "user")
|
||||||
|
self.assertEqual(messages[0]["content"], "Hello, Claude!")
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_assistant_message_with_text_parts(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
|
||||||
|
f.write(json.dumps({
|
||||||
|
"type": "assistant",
|
||||||
|
"timestamp": "2024-01-01T00:00:00Z",
|
||||||
|
"message": {
|
||||||
|
"content": [
|
||||||
|
{"type": "text", "text": "Part 1"},
|
||||||
|
{"type": "text", "text": "Part 2"},
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}) + "\n")
|
||||||
|
path = Path(f.name)
|
||||||
|
|
||||||
|
try:
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=path):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(len(messages), 1)
|
||||||
|
self.assertEqual(messages[0]["content"], "Part 1\nPart 2")
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_assistant_message_with_tool_use(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
|
||||||
|
f.write(json.dumps({
|
||||||
|
"type": "assistant",
|
||||||
|
"message": {
|
||||||
|
"content": [
|
||||||
|
{"type": "tool_use", "name": "Read", "input": {"file_path": "/test"}},
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}) + "\n")
|
||||||
|
path = Path(f.name)
|
||||||
|
|
||||||
|
try:
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=path):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(len(messages), 1)
|
||||||
|
self.assertEqual(messages[0]["tool_calls"][0]["name"], "Read")
|
||||||
|
self.assertEqual(messages[0]["tool_calls"][0]["input"]["file_path"], "/test")
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_assistant_message_with_thinking(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
|
||||||
|
f.write(json.dumps({
|
||||||
|
"type": "assistant",
|
||||||
|
"message": {
|
||||||
|
"content": [
|
||||||
|
{"type": "thinking", "thinking": "Let me consider..."},
|
||||||
|
{"type": "text", "text": "Here's my answer"},
|
||||||
|
]
|
||||||
|
}
|
||||||
|
}) + "\n")
|
||||||
|
path = Path(f.name)
|
||||||
|
|
||||||
|
try:
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=path):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(len(messages), 1)
|
||||||
|
self.assertEqual(messages[0]["thinking"], "Let me consider...")
|
||||||
|
self.assertEqual(messages[0]["content"], "Here's my answer")
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_assistant_message_content_as_string_parts(self):
|
||||||
|
# Some entries might have string content parts instead of dicts
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
|
||||||
|
f.write(json.dumps({
|
||||||
|
"type": "assistant",
|
||||||
|
"message": {
|
||||||
|
"content": ["plain string", {"type": "text", "text": "structured"}]
|
||||||
|
}
|
||||||
|
}) + "\n")
|
||||||
|
path = Path(f.name)
|
||||||
|
|
||||||
|
try:
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=path):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(messages[0]["content"], "plain string\nstructured")
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_missing_conversation_file_returns_empty(self):
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=None):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(messages, [])
|
||||||
|
|
||||||
|
def test_non_dict_entry_skipped(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
|
||||||
|
f.write('"just a string"\n')
|
||||||
|
f.write('123\n')
|
||||||
|
f.write('{"type": "user", "message": {"content": "valid"}}\n')
|
||||||
|
path = Path(f.name)
|
||||||
|
|
||||||
|
try:
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=path):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(len(messages), 1)
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_non_list_content_in_assistant_skipped(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
|
||||||
|
f.write(json.dumps({
|
||||||
|
"type": "assistant",
|
||||||
|
"message": {"content": "not a list"}
|
||||||
|
}) + "\n")
|
||||||
|
path = Path(f.name)
|
||||||
|
|
||||||
|
try:
|
||||||
|
with patch.object(self.handler, "_get_claude_conversation_file", return_value=path):
|
||||||
|
messages = self.handler._parse_claude_conversation("session123", "/project")
|
||||||
|
self.assertEqual(messages, [])
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
|
||||||
|
class TestParseCodexConversation(unittest.TestCase):
    """Tests for _parse_codex_conversation edge cases."""

    def setUp(self):
        self.handler = DummyConversationHandler()

    def test_developer_role_skipped(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write(json.dumps({
                "type": "response_item",
                "payload": {
                    "type": "message",
                    "role": "developer",
                    "content": [{"text": "System instructions"}]
                }
            }) + "\n")
            path = Path(f.name)

        try:
            with patch.object(self.handler, "_find_codex_transcript_file", return_value=path):
                messages = self.handler._parse_codex_conversation("session123")
                self.assertEqual(messages, [])
        finally:
            path.unlink()

    def test_injected_context_skipped(self):
        skip_prefixes = [
            "<INSTRUCTIONS>",
            "<environment_context>",
            "<permissions instructions>",
            "# AGENTS.md instructions",
        ]
        for prefix in skip_prefixes:
            with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
                f.write(json.dumps({
                    "type": "response_item",
                    "payload": {
                        "type": "message",
                        "role": "user",
                        "content": [{"text": f"{prefix} more content here"}]
                    }
                }) + "\n")
                path = Path(f.name)

            try:
                with patch.object(self.handler, "_find_codex_transcript_file", return_value=path):
                    messages = self.handler._parse_codex_conversation("session123")
                    self.assertEqual(messages, [], f"Should skip content starting with {prefix}")
            finally:
                path.unlink()

    def test_function_call_accumulated_to_next_assistant(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            # Tool call
            f.write(json.dumps({
                "type": "response_item",
                "payload": {
                    "type": "function_call",
                    "name": "shell",
                    "arguments": '{"command": "ls"}'
                }
            }) + "\n")
            # Assistant message
            f.write(json.dumps({
                "type": "response_item",
                "payload": {
                    "type": "message",
                    "role": "assistant",
                    "content": [{"text": "Here are the files"}]
                }
            }) + "\n")
            path = Path(f.name)

        try:
            with patch.object(self.handler, "_find_codex_transcript_file", return_value=path):
                messages = self.handler._parse_codex_conversation("session123")
                self.assertEqual(len(messages), 1)
                self.assertEqual(messages[0]["tool_calls"][0]["name"], "shell")
                self.assertEqual(messages[0]["content"], "Here are the files")
        finally:
            path.unlink()

    def test_function_calls_flushed_before_user_message(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            # Tool call
            f.write(json.dumps({
                "type": "response_item",
                "payload": {"type": "function_call", "name": "tool1", "arguments": "{}"}
            }) + "\n")
            # User message (tool calls should be flushed first)
            f.write(json.dumps({
                "type": "response_item",
                "payload": {
                    "type": "message",
                    "role": "user",
                    "content": [{"text": "User response"}]
                }
            }) + "\n")
            path = Path(f.name)

        try:
            with patch.object(self.handler, "_find_codex_transcript_file", return_value=path):
                messages = self.handler._parse_codex_conversation("session123")
                # First message should be assistant with tool_calls (flushed)
                # Second should be user
                self.assertEqual(len(messages), 2)
                self.assertEqual(messages[0]["role"], "assistant")
                self.assertEqual(messages[0]["tool_calls"][0]["name"], "tool1")
                self.assertEqual(messages[1]["role"], "user")
        finally:
            path.unlink()

    def test_reasoning_creates_thinking_message(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write(json.dumps({
                "type": "response_item",
                "payload": {
                    "type": "reasoning",
                    "summary": [
                        {"type": "summary_text", "text": "Let me think..."},
                        {"type": "summary_text", "text": "I'll try this approach."},
                    ]
                }
            }) + "\n")
            path = Path(f.name)

        try:
            with patch.object(self.handler, "_find_codex_transcript_file", return_value=path):
                messages = self.handler._parse_codex_conversation("session123")
                self.assertEqual(len(messages), 1)
                self.assertEqual(messages[0]["thinking"], "Let me think...\nI'll try this approach.")
        finally:
            path.unlink()

    def test_pending_tool_calls_flushed_at_end(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            # Tool call with no following message
            f.write(json.dumps({
                "type": "response_item",
                "payload": {"type": "function_call", "name": "final_tool", "arguments": "{}"}
            }) + "\n")
            path = Path(f.name)

        try:
            with patch.object(self.handler, "_find_codex_transcript_file", return_value=path):
                messages = self.handler._parse_codex_conversation("session123")
                # Should flush pending tool calls at end
                self.assertEqual(len(messages), 1)
                self.assertEqual(messages[0]["tool_calls"][0]["name"], "final_tool")
        finally:
            path.unlink()

    def test_non_response_item_types_skipped(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"type": "session_meta"}\n')
            f.write('{"type": "event_msg"}\n')
            f.write(json.dumps({
                "type": "response_item",
                "payload": {"type": "message", "role": "user", "content": [{"text": "Hello"}]}
            }) + "\n")
            path = Path(f.name)

        try:
            with patch.object(self.handler, "_find_codex_transcript_file", return_value=path):
                messages = self.handler._parse_codex_conversation("session123")
                self.assertEqual(len(messages), 1)
        finally:
            path.unlink()

    def test_missing_transcript_file_returns_empty(self):
        with patch.object(self.handler, "_find_codex_transcript_file", return_value=None):
            messages = self.handler._parse_codex_conversation("session123")
            self.assertEqual(messages, [])

class TestServeConversation(unittest.TestCase):
    """Tests for _serve_conversation routing."""

    def setUp(self):
        self.handler = DummyConversationHandler()

    def test_routes_to_codex_parser(self):
        with patch.object(self.handler, "_parse_codex_conversation", return_value=[]) as mock:
            self.handler._serve_conversation("session123", "/project", agent="codex")
            mock.assert_called_once_with("session123")

    def test_routes_to_claude_parser_by_default(self):
        with patch.object(self.handler, "_parse_claude_conversation", return_value=[]) as mock:
            self.handler._serve_conversation("session123", "/project")
            mock.assert_called_once_with("session123", "/project")

    def test_sanitizes_session_id(self):
        with patch.object(self.handler, "_parse_claude_conversation", return_value=[]):
            self.handler._serve_conversation("../../../etc/passwd", "/project")

        code, payload = self.handler.sent_responses[0]
        self.assertEqual(payload["session_id"], "passwd")


if __name__ == "__main__":
    unittest.main()
361 tests/test_discovery.py Normal file
@@ -0,0 +1,361 @@
"""Tests for mixins/discovery.py edge cases.
|
||||||
|
|
||||||
|
Unit tests for Codex session discovery and pane matching.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import subprocess
|
||||||
|
import tempfile
|
||||||
|
import time
|
||||||
|
import unittest
|
||||||
|
from pathlib import Path
|
||||||
|
from unittest.mock import patch, MagicMock
|
||||||
|
|
||||||
|
from amc_server.mixins.discovery import SessionDiscoveryMixin
|
||||||
|
from amc_server.mixins.parsing import SessionParsingMixin
|
||||||
|
|
||||||
|
|
||||||
|
class DummyDiscoveryHandler(SessionDiscoveryMixin, SessionParsingMixin):
|
||||||
|
"""Minimal handler for testing discovery mixin."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
class TestGetCodexPaneInfo(unittest.TestCase):
|
||||||
|
"""Tests for _get_codex_pane_info edge cases."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.handler = DummyDiscoveryHandler()
|
||||||
|
# Clear cache before each test
|
||||||
|
from amc_server.agents import _codex_pane_cache
|
||||||
|
_codex_pane_cache["expires"] = 0
|
||||||
|
_codex_pane_cache["pid_info"] = {}
|
||||||
|
_codex_pane_cache["cwd_map"] = {}
|
||||||
|
|
||||||
|
def test_pgrep_failure_returns_empty(self):
|
||||||
|
failed = subprocess.CompletedProcess(args=[], returncode=1, stdout="", stderr="")
|
||||||
|
|
||||||
|
with patch("amc_server.mixins.discovery.subprocess.run", return_value=failed):
|
||||||
|
pid_info, cwd_map = self.handler._get_codex_pane_info()
|
||||||
|
|
||||||
|
self.assertEqual(pid_info, {})
|
||||||
|
self.assertEqual(cwd_map, {})
|
||||||
|
|
||||||
|
def test_no_codex_processes_returns_empty(self):
|
||||||
|
no_results = subprocess.CompletedProcess(args=[], returncode=0, stdout="", stderr="")
|
||||||
|
|
||||||
|
with patch("amc_server.mixins.discovery.subprocess.run", return_value=no_results):
|
||||||
|
pid_info, cwd_map = self.handler._get_codex_pane_info()
|
||||||
|
|
||||||
|
self.assertEqual(pid_info, {})
|
||||||
|
self.assertEqual(cwd_map, {})
|
||||||
|
|
||||||
|
def test_extracts_zellij_env_vars(self):
|
||||||
|
pgrep_result = subprocess.CompletedProcess(args=[], returncode=0, stdout="12345\n", stderr="")
|
||||||
|
ps_result = subprocess.CompletedProcess(
|
||||||
|
args=[], returncode=0,
|
||||||
|
stdout="codex ZELLIJ_PANE_ID=7 ZELLIJ_SESSION_NAME=myproject",
|
||||||
|
stderr=""
|
||||||
|
)
|
||||||
|
lsof_result = subprocess.CompletedProcess(
|
||||||
|
args=[], returncode=0,
|
||||||
|
stdout="p12345\nn/Users/test/project",
|
||||||
|
stderr=""
|
||||||
|
)
|
||||||
|
|
||||||
|
def mock_run(args, **kwargs):
|
||||||
|
if args[0] == "pgrep":
|
||||||
|
return pgrep_result
|
||||||
|
elif args[0] == "ps":
|
||||||
|
return ps_result
|
||||||
|
elif args[0] == "lsof":
|
||||||
|
return lsof_result
|
||||||
|
return subprocess.CompletedProcess(args=[], returncode=1, stdout="", stderr="")
|
||||||
|
|
||||||
|
with patch("amc_server.mixins.discovery.subprocess.run", side_effect=mock_run):
|
||||||
|
pid_info, cwd_map = self.handler._get_codex_pane_info()
|
||||||
|
|
||||||
|
self.assertIn("12345", pid_info)
|
||||||
|
self.assertEqual(pid_info["12345"]["pane_id"], "7")
|
||||||
|
self.assertEqual(pid_info["12345"]["zellij_session"], "myproject")
|
||||||
|
|
||||||
|
def test_cache_used_when_fresh(self):
|
||||||
|
from amc_server.agents import _codex_pane_cache
|
||||||
|
_codex_pane_cache["pid_info"] = {"cached": {"pane_id": "1", "zellij_session": "s"}}
|
||||||
|
_codex_pane_cache["cwd_map"] = {"/cached/path": {"session": "s", "pane_id": "1"}}
|
||||||
|
_codex_pane_cache["expires"] = time.time() + 100
|
||||||
|
|
||||||
|
# Should not call subprocess
|
||||||
|
with patch("amc_server.mixins.discovery.subprocess.run") as mock_run:
|
||||||
|
pid_info, cwd_map = self.handler._get_codex_pane_info()
|
||||||
|
|
||||||
|
mock_run.assert_not_called()
|
||||||
|
self.assertEqual(pid_info, {"cached": {"pane_id": "1", "zellij_session": "s"}})
|
||||||
|
|
||||||
|
def test_timeout_handled_gracefully(self):
|
||||||
|
with patch("amc_server.mixins.discovery.subprocess.run",
|
||||||
|
side_effect=subprocess.TimeoutExpired("cmd", 2)):
|
||||||
|
pid_info, cwd_map = self.handler._get_codex_pane_info()
|
||||||
|
|
||||||
|
self.assertEqual(pid_info, {})
|
||||||
|
self.assertEqual(cwd_map, {})
|
||||||
|
|
||||||
|
|
||||||
|
class TestMatchCodexSessionToPane(unittest.TestCase):
    """Tests for _match_codex_session_to_pane edge cases."""

    def setUp(self):
        self.handler = DummyDiscoveryHandler()

    def test_lsof_match_found(self):
        """When lsof finds a PID with the session file open, use that match."""
        pid_info = {
            "12345": {"pane_id": "7", "zellij_session": "project"},
        }
        cwd_map = {}

        lsof_result = subprocess.CompletedProcess(
            args=[], returncode=0, stdout="12345\n", stderr=""
        )

        with patch("amc_server.mixins.discovery.subprocess.run", return_value=lsof_result):
            session, pane = self.handler._match_codex_session_to_pane(
                Path("/some/session.jsonl"), "/project", pid_info, cwd_map
            )

        self.assertEqual(session, "project")
        self.assertEqual(pane, "7")

    def test_cwd_fallback_when_lsof_fails(self):
        """When lsof doesn't find a match, fall back to CWD matching."""
        pid_info = {}
        cwd_map = {
            "/home/user/project": {"session": "myproject", "pane_id": "3"},
        }

        lsof_result = subprocess.CompletedProcess(
            args=[], returncode=1, stdout="", stderr=""
        )

        with patch("amc_server.mixins.discovery.subprocess.run", return_value=lsof_result):
            session, pane = self.handler._match_codex_session_to_pane(
                Path("/some/session.jsonl"), "/home/user/project", pid_info, cwd_map
            )

        self.assertEqual(session, "myproject")
        self.assertEqual(pane, "3")

    def test_no_match_returns_empty_strings(self):
        pid_info = {}
        cwd_map = {}

        lsof_result = subprocess.CompletedProcess(
            args=[], returncode=1, stdout="", stderr=""
        )

        with patch("amc_server.mixins.discovery.subprocess.run", return_value=lsof_result):
            session, pane = self.handler._match_codex_session_to_pane(
                Path("/some/session.jsonl"), "/unmatched/path", pid_info, cwd_map
            )

        self.assertEqual(session, "")
        self.assertEqual(pane, "")

    def test_cwd_normalized_for_matching(self):
        """CWD paths should be normalized for comparison."""
        pid_info = {}
        cwd_map = {
            "/home/user/project": {"session": "proj", "pane_id": "1"},
        }

        lsof_result = subprocess.CompletedProcess(
            args=[], returncode=1, stdout="", stderr=""
        )

        with patch("amc_server.mixins.discovery.subprocess.run", return_value=lsof_result):
            # Session CWD has trailing slash and extra dots
            session, pane = self.handler._match_codex_session_to_pane(
                Path("/some/session.jsonl"), "/home/user/./project/", pid_info, cwd_map
            )

        self.assertEqual(session, "proj")

    def test_empty_session_cwd_no_match(self):
        pid_info = {}
        cwd_map = {"/some/path": {"session": "s", "pane_id": "1"}}

        lsof_result = subprocess.CompletedProcess(
            args=[], returncode=1, stdout="", stderr=""
        )

        with patch("amc_server.mixins.discovery.subprocess.run", return_value=lsof_result):
            session, pane = self.handler._match_codex_session_to_pane(
                Path("/some/session.jsonl"), "", pid_info, cwd_map
            )

        self.assertEqual(session, "")
        self.assertEqual(pane, "")

class TestDiscoverActiveCodexSessions(unittest.TestCase):
    """Tests for _discover_active_codex_sessions edge cases."""

    def setUp(self):
        self.handler = DummyDiscoveryHandler()
        # Clear caches
        from amc_server.agents import _codex_transcript_cache, _dismissed_codex_ids
        _codex_transcript_cache.clear()
        _dismissed_codex_ids.clear()

    def test_skips_when_codex_sessions_dir_missing(self):
        with patch("amc_server.mixins.discovery.CODEX_SESSIONS_DIR", Path("/nonexistent")):
            # Should not raise
            self.handler._discover_active_codex_sessions()

    def test_skips_old_files(self):
        """Files older than CODEX_ACTIVE_WINDOW should be skipped."""
        with tempfile.TemporaryDirectory() as tmpdir:
            codex_dir = Path(tmpdir)
            sessions_dir = Path(tmpdir) / "sessions"
            sessions_dir.mkdir()

            # Create an old transcript file
            old_file = codex_dir / "old-12345678-1234-1234-1234-123456789abc.jsonl"
            old_file.write_text('{"type": "session_meta", "payload": {"cwd": "/test"}}\n')
            # Set mtime to 2 hours ago
            old_time = time.time() - 7200
            os.utime(old_file, (old_time, old_time))

            with patch("amc_server.mixins.discovery.CODEX_SESSIONS_DIR", codex_dir), \
                 patch("amc_server.mixins.discovery.SESSIONS_DIR", sessions_dir):
                self.handler._get_codex_pane_info = MagicMock(return_value=({}, {}))
                self.handler._discover_active_codex_sessions()

            # Should not have created a session file
            self.assertEqual(list(sessions_dir.glob("*.json")), [])

    def test_skips_dismissed_sessions(self):
        """Sessions in _dismissed_codex_ids should be skipped."""
        from amc_server.agents import _dismissed_codex_ids

        with tempfile.TemporaryDirectory() as tmpdir:
            codex_dir = Path(tmpdir)
            sessions_dir = Path(tmpdir) / "sessions"
            sessions_dir.mkdir()

            # Create a recent transcript file
            session_id = "12345678-1234-1234-1234-123456789abc"
            transcript = codex_dir / f"session-{session_id}.jsonl"
            transcript.write_text('{"type": "session_meta", "payload": {"cwd": "/test"}}\n')

            # Mark as dismissed
            _dismissed_codex_ids[session_id] = True

            with patch("amc_server.mixins.discovery.CODEX_SESSIONS_DIR", codex_dir), \
                 patch("amc_server.mixins.discovery.SESSIONS_DIR", sessions_dir):
                self.handler._get_codex_pane_info = MagicMock(return_value=({}, {}))
                self.handler._discover_active_codex_sessions()

            # Should not have created a session file
            self.assertEqual(list(sessions_dir.glob("*.json")), [])

    def test_skips_non_uuid_filenames(self):
        """Files without a UUID in the name should be skipped."""
        with tempfile.TemporaryDirectory() as tmpdir:
            codex_dir = Path(tmpdir)
            sessions_dir = Path(tmpdir) / "sessions"
            sessions_dir.mkdir()

            # Create a file without a UUID
            no_uuid = codex_dir / "random-name.jsonl"
            no_uuid.write_text('{"type": "session_meta", "payload": {"cwd": "/test"}}\n')

            with patch("amc_server.mixins.discovery.CODEX_SESSIONS_DIR", codex_dir), \
                 patch("amc_server.mixins.discovery.SESSIONS_DIR", sessions_dir):
                self.handler._get_codex_pane_info = MagicMock(return_value=({}, {}))
                self.handler._discover_active_codex_sessions()

            self.assertEqual(list(sessions_dir.glob("*.json")), [])

    def test_skips_non_session_meta_first_line(self):
        """Files without session_meta as first line should be skipped."""
        with tempfile.TemporaryDirectory() as tmpdir:
            codex_dir = Path(tmpdir)
            sessions_dir = Path(tmpdir) / "sessions"
            sessions_dir.mkdir()

            session_id = "12345678-1234-1234-1234-123456789abc"
            transcript = codex_dir / f"session-{session_id}.jsonl"
            # First line is not session_meta
            transcript.write_text('{"type": "response_item", "payload": {}}\n')

            with patch("amc_server.mixins.discovery.CODEX_SESSIONS_DIR", codex_dir), \
                 patch("amc_server.mixins.discovery.SESSIONS_DIR", sessions_dir):
                self.handler._get_codex_pane_info = MagicMock(return_value=({}, {}))
                self.handler._discover_active_codex_sessions()

            self.assertEqual(list(sessions_dir.glob("*.json")), [])

    def test_creates_session_file_for_valid_transcript(self):
        """Valid recent transcripts should create session files."""
        with tempfile.TemporaryDirectory() as tmpdir:
            codex_dir = Path(tmpdir)
            sessions_dir = Path(tmpdir) / "sessions"
            sessions_dir.mkdir()

            session_id = "12345678-1234-1234-1234-123456789abc"
            transcript = codex_dir / f"session-{session_id}.jsonl"
            transcript.write_text(json.dumps({
                "type": "session_meta",
                "payload": {"cwd": "/test/project", "timestamp": "2024-01-01T00:00:00Z"}
            }) + "\n")

            with patch("amc_server.mixins.discovery.CODEX_SESSIONS_DIR", codex_dir), \
                 patch("amc_server.mixins.discovery.SESSIONS_DIR", sessions_dir):
                self.handler._get_codex_pane_info = MagicMock(return_value=({}, {}))
                self.handler._match_codex_session_to_pane = MagicMock(return_value=("proj", "5"))
                self.handler._get_cached_context_usage = MagicMock(return_value=None)
                self.handler._discover_active_codex_sessions()

            session_file = sessions_dir / f"{session_id}.json"
            self.assertTrue(session_file.exists())

            data = json.loads(session_file.read_text())
            self.assertEqual(data["session_id"], session_id)
            self.assertEqual(data["agent"], "codex")
            self.assertEqual(data["project"], "project")
            self.assertEqual(data["zellij_session"], "proj")
            self.assertEqual(data["zellij_pane"], "5")

    def test_determines_status_by_file_age(self):
        """Recent files should be 'active', older ones 'done'."""
        with tempfile.TemporaryDirectory() as tmpdir:
            codex_dir = Path(tmpdir)
            sessions_dir = Path(tmpdir) / "sessions"
            sessions_dir.mkdir()

            session_id = "12345678-1234-1234-1234-123456789abc"
            transcript = codex_dir / f"session-{session_id}.jsonl"
            transcript.write_text(json.dumps({
                "type": "session_meta",
                "payload": {"cwd": "/test"}
            }) + "\n")

            # Set mtime to 3 minutes ago (> 2 min threshold)
            old_time = time.time() - 180
            os.utime(transcript, (old_time, old_time))

            with patch("amc_server.mixins.discovery.CODEX_SESSIONS_DIR", codex_dir), \
                 patch("amc_server.mixins.discovery.SESSIONS_DIR", sessions_dir):
                self.handler._get_codex_pane_info = MagicMock(return_value=({}, {}))
                self.handler._match_codex_session_to_pane = MagicMock(return_value=("", ""))
                self.handler._get_cached_context_usage = MagicMock(return_value=None)
                self.handler._discover_active_codex_sessions()

            session_file = sessions_dir / f"{session_id}.json"
            data = json.loads(session_file.read_text())
            self.assertEqual(data["status"], "done")


if __name__ == "__main__":
    unittest.main()
474 tests/test_hook.py Normal file
@@ -0,0 +1,474 @@
"""Tests for bin/amc-hook functions.
|
||||||
|
|
||||||
|
These are unit tests for the pure functions in the hook script.
|
||||||
|
Edge cases are prioritized over happy paths.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import tempfile
|
||||||
|
import types
|
||||||
|
import unittest
|
||||||
|
from pathlib import Path
|
||||||
|
from unittest.mock import patch
|
||||||
|
|
||||||
|
# Import the hook module (no .py extension, so use compile+exec pattern)
|
||||||
|
hook_path = Path(__file__).parent.parent / "bin" / "amc-hook"
|
||||||
|
amc_hook = types.ModuleType("amc_hook")
|
||||||
|
amc_hook.__file__ = str(hook_path)
|
||||||
|
# Load module code - this is safe, we're loading our own source file
|
||||||
|
code = compile(hook_path.read_text(), hook_path, "exec")
|
||||||
|
exec(code, amc_hook.__dict__) # noqa: S102 - loading local module
|
||||||
|
|
||||||
|
|
||||||
|
class TestDetectProseQuestion(unittest.TestCase):
    """Tests for _detect_prose_question edge cases."""

    def test_none_input_returns_none(self):
        self.assertIsNone(amc_hook._detect_prose_question(None))

    def test_empty_string_returns_none(self):
        self.assertIsNone(amc_hook._detect_prose_question(""))

    def test_whitespace_only_returns_none(self):
        self.assertIsNone(amc_hook._detect_prose_question(" \n\t "))

    def test_no_question_mark_returns_none(self):
        self.assertIsNone(amc_hook._detect_prose_question("This is a statement."))

    def test_question_mark_in_middle_not_at_end_returns_none(self):
        # Question mark exists but message doesn't END with one
        self.assertIsNone(amc_hook._detect_prose_question("What? I said hello."))

    def test_trailing_whitespace_after_question_still_detects(self):
        result = amc_hook._detect_prose_question("Is this a question? \n\t")
        self.assertEqual(result, "Is this a question?")

    def test_question_in_last_paragraph_only(self):
        msg = "First paragraph here.\n\nSecond paragraph is the question?"
        result = amc_hook._detect_prose_question(msg)
        self.assertEqual(result, "Second paragraph is the question?")

    def test_multiple_paragraphs_question_not_in_last_returns_none(self):
        # Question in first paragraph, statement in last
        msg = "Is this a question?\n\nNo, this is the last paragraph."
        self.assertIsNone(amc_hook._detect_prose_question(msg))

    def test_truncates_long_question_to_max_length(self):
        long_question = "x" * 600 + "?"
        result = amc_hook._detect_prose_question(long_question)
        self.assertLessEqual(len(result), amc_hook.MAX_QUESTION_LEN + 1)  # +1 for ?

    def test_long_question_tries_sentence_boundary(self):
        # Create a message longer than MAX_QUESTION_LEN (500) with a sentence boundary
        # The truncation takes the LAST MAX_QUESTION_LEN chars, then finds FIRST ". " within that
        prefix = "a" * 500 + ". Sentence start. "
        suffix = "Is this the question?"
        msg = prefix + suffix
        self.assertGreater(len(msg), amc_hook.MAX_QUESTION_LEN)
        result = amc_hook._detect_prose_question(msg)
        # Code finds FIRST ". " in truncated portion, so starts at "Sentence start"
        self.assertTrue(
            result.startswith("Sentence start"),
            f"Expected to start with 'Sentence start', got: {result[:50]}"
        )

    def test_long_question_no_sentence_boundary_truncates_from_end(self):
        # No period in the long text
        long_msg = "a" * 600 + "?"
        result = amc_hook._detect_prose_question(long_msg)
        self.assertTrue(result.endswith("?"))
        self.assertLessEqual(len(result), amc_hook.MAX_QUESTION_LEN + 1)

    def test_single_character_question(self):
        result = amc_hook._detect_prose_question("?")
        self.assertEqual(result, "?")

    def test_newlines_within_last_paragraph_preserved(self):
        msg = "Intro.\n\nLine one\nLine two?"
        result = amc_hook._detect_prose_question(msg)
        self.assertIn("\n", result)

class TestExtractQuestions(unittest.TestCase):
    """Tests for _extract_questions edge cases."""

    def test_empty_hook_returns_empty_list(self):
        self.assertEqual(amc_hook._extract_questions({}), [])

    def test_missing_tool_input_returns_empty_list(self):
        self.assertEqual(amc_hook._extract_questions({"other": "data"}), [])

    def test_tool_input_is_none_returns_empty_list(self):
        self.assertEqual(amc_hook._extract_questions({"tool_input": None}), [])

    def test_tool_input_is_list_returns_empty_list(self):
        # tool_input should be dict, not list
        self.assertEqual(amc_hook._extract_questions({"tool_input": []}), [])

    def test_tool_input_is_string_json_parsed(self):
        tool_input = json.dumps({"questions": [{"question": "Test?", "options": []}]})
        result = amc_hook._extract_questions({"tool_input": tool_input})
        self.assertEqual(len(result), 1)
        self.assertEqual(result[0]["question"], "Test?")

    def test_tool_input_invalid_json_string_returns_empty(self):
        result = amc_hook._extract_questions({"tool_input": "not valid json"})
        self.assertEqual(result, [])

    def test_questions_key_is_none_returns_empty(self):
        result = amc_hook._extract_questions({"tool_input": {"questions": None}})
        self.assertEqual(result, [])

    def test_questions_key_missing_returns_empty(self):
        result = amc_hook._extract_questions({"tool_input": {"other": "data"}})
        self.assertEqual(result, [])

    def test_option_without_markdown_excluded_from_output(self):
        hook = {
            "tool_input": {
                "questions": [{
                    "question": "Pick one",
                    "options": [{"label": "A", "description": "Desc A"}],
                }]
            }
        }
        result = amc_hook._extract_questions(hook)
        self.assertNotIn("markdown", result[0]["options"][0])

    def test_option_with_markdown_included(self):
        hook = {
            "tool_input": {
                "questions": [{
                    "question": "Pick one",
                    "options": [{"label": "A", "description": "Desc", "markdown": "```code```"}],
                }]
            }
        }
        result = amc_hook._extract_questions(hook)
        self.assertEqual(result[0]["options"][0]["markdown"], "```code```")

    def test_missing_question_fields_default_to_empty(self):
        hook = {"tool_input": {"questions": [{}]}}
        result = amc_hook._extract_questions(hook)
        self.assertEqual(result[0]["question"], "")
        self.assertEqual(result[0]["header"], "")
        self.assertEqual(result[0]["options"], [])

    def test_option_missing_fields_default_to_empty(self):
        hook = {"tool_input": {"questions": [{"options": [{}]}]}}
        result = amc_hook._extract_questions(hook)
        self.assertEqual(result[0]["options"][0]["label"], "")
        self.assertEqual(result[0]["options"][0]["description"], "")

class TestAtomicWrite(unittest.TestCase):
|
||||||
|
"""Tests for _atomic_write edge cases."""
|
||||||
|
|
||||||
|
def test_writes_to_nonexistent_file(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
path = Path(tmpdir) / "new_file.json"
|
||||||
|
amc_hook._atomic_write(path, {"key": "value"})
|
||||||
|
self.assertTrue(path.exists())
|
||||||
|
self.assertEqual(json.loads(path.read_text()), {"key": "value"})
|
||||||
|
|
||||||
|
def test_overwrites_existing_file(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
path = Path(tmpdir) / "existing.json"
|
||||||
|
path.write_text('{"old": "data"}')
|
||||||
|
amc_hook._atomic_write(path, {"new": "data"})
|
||||||
|
self.assertEqual(json.loads(path.read_text()), {"new": "data"})
|
||||||
|
|
||||||
|
def test_cleans_up_temp_file_on_replace_failure(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
path = Path(tmpdir) / "subdir" / "file.json"
|
||||||
|
# Parent doesn't exist, so mkstemp will fail
|
||||||
|
with self.assertRaises(FileNotFoundError):
|
||||||
|
amc_hook._atomic_write(path, {"data": "test"})
|
||||||
|
|
||||||
|
def test_no_partial_writes_on_failure(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
path = Path(tmpdir) / "file.json"
|
||||||
|
path.write_text('{"original": "data"}')
|
||||||
|
|
||||||
|
# Mock os.replace to fail after the temp file is written
|
||||||
|
_original_replace = os.replace # noqa: F841 - documents test setup
|
||||||
|
def failing_replace(src, dst):
|
||||||
|
raise PermissionError("Simulated failure")
|
||||||
|
|
||||||
|
with patch("os.replace", side_effect=failing_replace):
|
||||||
|
with self.assertRaises(PermissionError):
|
||||||
|
amc_hook._atomic_write(path, {"new": "data"})
|
||||||
|
|
||||||
|
# Original file should be unchanged
|
||||||
|
self.assertEqual(json.loads(path.read_text()), {"original": "data"})
|
||||||
|
|
||||||
|
|
||||||
|
class TestReadSession(unittest.TestCase):
|
||||||
|
"""Tests for _read_session edge cases."""
|
||||||
|
|
||||||
|
def test_nonexistent_file_returns_empty_dict(self):
|
||||||
|
result = amc_hook._read_session(Path("/nonexistent/path/file.json"))
|
||||||
|
self.assertEqual(result, {})
|
||||||
|
|
||||||
|
def test_empty_file_returns_empty_dict(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
|
||||||
|
f.write("")
|
||||||
|
path = Path(f.name)
|
||||||
|
try:
|
||||||
|
result = amc_hook._read_session(path)
|
||||||
|
self.assertEqual(result, {})
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_invalid_json_returns_empty_dict(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
|
||||||
|
f.write("not valid json {{{")
|
||||||
|
path = Path(f.name)
|
||||||
|
try:
|
||||||
|
result = amc_hook._read_session(path)
|
||||||
|
self.assertEqual(result, {})
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
def test_valid_json_returned(self):
|
||||||
|
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as f:
|
||||||
|
json.dump({"session_id": "abc"}, f)
|
||||||
|
path = Path(f.name)
|
||||||
|
try:
|
||||||
|
result = amc_hook._read_session(path)
|
||||||
|
self.assertEqual(result, {"session_id": "abc"})
|
||||||
|
finally:
|
||||||
|
path.unlink()
|
||||||
|
|
||||||
|
|
||||||
|
class TestAppendEvent(unittest.TestCase):
|
||||||
|
"""Tests for _append_event edge cases."""
|
||||||
|
|
||||||
|
def test_creates_file_if_missing(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
with patch.object(amc_hook, "EVENTS_DIR", Path(tmpdir)):
|
||||||
|
amc_hook._append_event("session123", {"event": "test"})
|
||||||
|
event_file = Path(tmpdir) / "session123.jsonl"
|
||||||
|
self.assertTrue(event_file.exists())
|
||||||
|
|
||||||
|
def test_appends_to_existing_file(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
event_file = Path(tmpdir) / "session123.jsonl"
|
||||||
|
event_file.write_text('{"event": "first"}\n')
|
||||||
|
with patch.object(amc_hook, "EVENTS_DIR", Path(tmpdir)):
|
||||||
|
amc_hook._append_event("session123", {"event": "second"})
|
||||||
|
lines = event_file.read_text().strip().split("\n")
|
||||||
|
self.assertEqual(len(lines), 2)
|
||||||
|
self.assertEqual(json.loads(lines[1])["event"], "second")
|
||||||
|
|
||||||
|
def test_oserror_silently_ignored(self):
|
||||||
|
with patch.object(amc_hook, "EVENTS_DIR", Path("/nonexistent/path")):
|
||||||
|
# Should not raise
|
||||||
|
amc_hook._append_event("session123", {"event": "test"})
|
||||||
|
|
||||||
|
|
||||||
|
class TestMainHookPathTraversal(unittest.TestCase):
|
||||||
|
"""Tests for path traversal protection in main()."""
|
||||||
|
|
||||||
|
def test_session_id_with_path_traversal_sanitized(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
# Create a legitimate session file to test that traversal doesn't reach it
|
||||||
|
legit_file = Path(tmpdir) / "secret.json"
|
||||||
|
legit_file.write_text('{"secret": "data"}')
|
||||||
|
|
||||||
|
hook_input = json.dumps({
|
||||||
|
"hook_event_name": "SessionStart",
|
||||||
|
"session_id": "../secret",
|
||||||
|
"cwd": "/test/project",
|
||||||
|
})
|
||||||
|
|
||||||
|
with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(amc_hook, "EVENTS_DIR", events_dir), \
|
||||||
|
patch("sys.stdin.read", return_value=hook_input):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
# The sanitized session ID should be "secret" (basename of "../secret")
|
||||||
|
# and should NOT have modified the legit_file in parent dir
|
||||||
|
self.assertEqual(json.loads(legit_file.read_text()), {"secret": "data"})
|
||||||
|
|
||||||
|
|
||||||
|
class TestMainHookEmptyInput(unittest.TestCase):
|
||||||
|
"""Tests for main() with various empty/invalid inputs."""
|
||||||
|
|
||||||
|
def test_empty_stdin_returns_silently(self):
|
||||||
|
with patch("sys.stdin.read", return_value=""):
|
||||||
|
# Should not raise
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
def test_whitespace_only_stdin_returns_silently(self):
|
||||||
|
with patch("sys.stdin.read", return_value=" \n\t "):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
def test_invalid_json_stdin_fails_silently(self):
|
||||||
|
with patch("sys.stdin.read", return_value="not json"):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
def test_missing_session_id_returns_silently(self):
|
||||||
|
with patch("sys.stdin.read", return_value='{"hook_event_name": "SessionStart"}'):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
def test_missing_event_name_returns_silently(self):
|
||||||
|
with patch("sys.stdin.read", return_value='{"session_id": "abc123"}'):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
def test_empty_session_id_after_sanitization_returns_silently(self):
|
||||||
|
# Edge case: session_id that becomes empty after basename()
|
||||||
|
with patch("sys.stdin.read", return_value='{"hook_event_name": "SessionStart", "session_id": "/"}'):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
|
||||||
|
class TestMainSessionEndDeletesFile(unittest.TestCase):
|
||||||
|
"""Tests for SessionEnd hook behavior."""
|
||||||
|
|
||||||
|
def test_session_end_deletes_existing_session_file(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
session_file = sessions_dir / "abc123.json"
|
||||||
|
session_file.write_text('{"session_id": "abc123"}')
|
||||||
|
|
||||||
|
hook_input = json.dumps({
|
||||||
|
"hook_event_name": "SessionEnd",
|
||||||
|
"session_id": "abc123",
|
||||||
|
})
|
||||||
|
|
||||||
|
with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(amc_hook, "EVENTS_DIR", events_dir), \
|
||||||
|
patch("sys.stdin.read", return_value=hook_input):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
self.assertFalse(session_file.exists())
|
||||||
|
|
||||||
|
def test_session_end_missing_file_no_error(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
hook_input = json.dumps({
|
||||||
|
"hook_event_name": "SessionEnd",
|
||||||
|
"session_id": "nonexistent",
|
||||||
|
})
|
||||||
|
|
||||||
|
with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(amc_hook, "EVENTS_DIR", events_dir), \
|
||||||
|
patch("sys.stdin.read", return_value=hook_input):
|
||||||
|
# Should not raise
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
|
||||||
|
class TestMainPreToolUseWithoutExistingSession(unittest.TestCase):
|
||||||
|
"""Edge case: PreToolUse arrives but session file doesn't exist."""
|
||||||
|
|
||||||
|
def test_pre_tool_use_no_existing_session_returns_silently(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
hook_input = json.dumps({
|
||||||
|
"hook_event_name": "PreToolUse",
|
||||||
|
"tool_name": "AskUserQuestion",
|
||||||
|
"session_id": "nonexistent",
|
||||||
|
"tool_input": {"questions": []},
|
||||||
|
})
|
||||||
|
|
||||||
|
with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(amc_hook, "EVENTS_DIR", events_dir), \
|
||||||
|
patch("sys.stdin.read", return_value=hook_input):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
# No session file should be created
|
||||||
|
self.assertFalse((sessions_dir / "nonexistent.json").exists())
|
||||||
|
|
||||||
|
|
||||||
|
class TestMainStopWithProseQuestion(unittest.TestCase):
|
||||||
|
"""Tests for Stop hook detecting prose questions."""
|
||||||
|
|
||||||
|
def test_stop_with_prose_question_sets_needs_attention(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
# Create existing session
|
||||||
|
session_file = sessions_dir / "abc123.json"
|
||||||
|
session_file.write_text(json.dumps({
|
||||||
|
"session_id": "abc123",
|
||||||
|
"status": "active",
|
||||||
|
}))
|
||||||
|
|
||||||
|
hook_input = json.dumps({
|
||||||
|
"hook_event_name": "Stop",
|
||||||
|
"session_id": "abc123",
|
||||||
|
"last_assistant_message": "What do you think about this approach?",
|
||||||
|
"cwd": "/test/project",
|
||||||
|
})
|
||||||
|
|
||||||
|
with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(amc_hook, "EVENTS_DIR", events_dir), \
|
||||||
|
patch("sys.stdin.read", return_value=hook_input):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
data = json.loads(session_file.read_text())
|
||||||
|
self.assertEqual(data["status"], "needs_attention")
|
||||||
|
self.assertEqual(len(data["pending_questions"]), 1)
|
||||||
|
self.assertIn("approach?", data["pending_questions"][0]["question"])
|
||||||
|
|
||||||
|
|
||||||
|
class TestMainTurnTimingAccumulation(unittest.TestCase):
|
||||||
|
"""Tests for turn timing accumulation across pause/resume cycles."""
|
||||||
|
|
||||||
|
def test_post_tool_use_accumulates_paused_time(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
# Create session with existing paused state
|
||||||
|
session_file = sessions_dir / "abc123.json"
|
||||||
|
session_file.write_text(json.dumps({
|
||||||
|
"session_id": "abc123",
|
||||||
|
"status": "needs_attention",
|
||||||
|
"turn_paused_at": "2024-01-01T00:00:00+00:00",
|
||||||
|
"turn_paused_ms": 5000, # Already had 5 seconds paused
|
||||||
|
}))
|
||||||
|
|
||||||
|
hook_input = json.dumps({
|
||||||
|
"hook_event_name": "PostToolUse",
|
||||||
|
"tool_name": "AskUserQuestion",
|
||||||
|
"session_id": "abc123",
|
||||||
|
})
|
||||||
|
|
||||||
|
with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(amc_hook, "EVENTS_DIR", events_dir), \
|
||||||
|
patch("sys.stdin.read", return_value=hook_input):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
data = json.loads(session_file.read_text())
|
||||||
|
# Should have accumulated more paused time
|
||||||
|
self.assertGreater(data["turn_paused_ms"], 5000)
|
||||||
|
# turn_paused_at should be removed after resuming
|
||||||
|
self.assertNotIn("turn_paused_at", data)
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == "__main__":
|
||||||
|
unittest.main()
|
||||||
377 tests/test_http.py Normal file
@@ -0,0 +1,377 @@
"""Tests for mixins/http.py edge cases.

Unit tests for HTTP routing and response handling.
"""

import io
import json
import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch, MagicMock

from amc_server.mixins.http import HttpMixin


class DummyHttpHandler(HttpMixin):
    """Minimal handler for testing HTTP mixin."""

    def __init__(self):
        self.response_code = None
        self.headers_sent = {}
        self.body_sent = b""
        self.path = "/"
        self.wfile = io.BytesIO()

    def send_response(self, code):
        self.response_code = code

    def send_header(self, key, value):
        self.headers_sent[key] = value

    def end_headers(self):
        pass


class TestSendBytesResponse(unittest.TestCase):
    """Tests for _send_bytes_response edge cases."""

    def test_sends_correct_headers(self):
        handler = DummyHttpHandler()
        handler._send_bytes_response(200, b"test", content_type="text/plain")

        self.assertEqual(handler.response_code, 200)
        self.assertEqual(handler.headers_sent["Content-Type"], "text/plain")
        self.assertEqual(handler.headers_sent["Content-Length"], "4")

    def test_includes_extra_headers(self):
        handler = DummyHttpHandler()
        handler._send_bytes_response(
            200, b"test",
            extra_headers={"X-Custom": "value", "Cache-Control": "no-cache"}
        )

        self.assertEqual(handler.headers_sent["X-Custom"], "value")
        self.assertEqual(handler.headers_sent["Cache-Control"], "no-cache")

    def test_broken_pipe_returns_false(self):
        handler = DummyHttpHandler()
        handler.wfile.write = MagicMock(side_effect=BrokenPipeError())

        result = handler._send_bytes_response(200, b"test")
        self.assertFalse(result)

    def test_connection_reset_returns_false(self):
        handler = DummyHttpHandler()
        handler.wfile.write = MagicMock(side_effect=ConnectionResetError())

        result = handler._send_bytes_response(200, b"test")
        self.assertFalse(result)

    def test_os_error_returns_false(self):
        handler = DummyHttpHandler()
        handler.wfile.write = MagicMock(side_effect=OSError("write error"))

        result = handler._send_bytes_response(200, b"test")
        self.assertFalse(result)


class TestSendJson(unittest.TestCase):
    """Tests for _send_json edge cases."""

    def test_includes_cors_header(self):
        handler = DummyHttpHandler()
        handler._send_json(200, {"key": "value"})

        self.assertEqual(handler.headers_sent["Access-Control-Allow-Origin"], "*")

    def test_sets_json_content_type(self):
        handler = DummyHttpHandler()
        handler._send_json(200, {"key": "value"})

        self.assertEqual(handler.headers_sent["Content-Type"], "application/json")

    def test_encodes_payload_as_json(self):
        handler = DummyHttpHandler()
        handler._send_json(200, {"key": "value"})

        written = handler.wfile.getvalue()
        self.assertEqual(json.loads(written), {"key": "value"})


class TestServeDashboardFile(unittest.TestCase):
    """Tests for _serve_dashboard_file edge cases."""

    def test_nonexistent_file_returns_404(self):
        handler = DummyHttpHandler()
        handler.errors = []

        def capture_error(code, message):
            handler.errors.append((code, message))

        handler._json_error = capture_error

        with tempfile.TemporaryDirectory() as tmpdir:
            with patch("amc_server.mixins.http.DASHBOARD_DIR", Path(tmpdir)):
                handler._serve_dashboard_file("nonexistent.html")

        self.assertEqual(len(handler.errors), 1)
        self.assertEqual(handler.errors[0][0], 404)

    def test_path_traversal_blocked(self):
        handler = DummyHttpHandler()
        handler.errors = []

        def capture_error(code, message):
            handler.errors.append((code, message))

        handler._json_error = capture_error

        with tempfile.TemporaryDirectory() as tmpdir:
            # Create a file outside the dashboard dir that shouldn't be accessible (unused - documents intent)
            _secret = Path(tmpdir).parent / "secret.txt"

            with patch("amc_server.mixins.http.DASHBOARD_DIR", Path(tmpdir)):
                handler._serve_dashboard_file("../secret.txt")

        self.assertEqual(len(handler.errors), 1)
        self.assertEqual(handler.errors[0][0], 403)

    def test_correct_content_type_for_html(self):
        handler = DummyHttpHandler()

        with tempfile.TemporaryDirectory() as tmpdir:
            html_file = Path(tmpdir) / "test.html"
            html_file.write_text("<html></html>")

            with patch("amc_server.mixins.http.DASHBOARD_DIR", Path(tmpdir)):
                handler._serve_dashboard_file("test.html")

        self.assertEqual(handler.headers_sent["Content-Type"], "text/html; charset=utf-8")

    def test_correct_content_type_for_css(self):
        handler = DummyHttpHandler()

        with tempfile.TemporaryDirectory() as tmpdir:
            css_file = Path(tmpdir) / "styles.css"
            css_file.write_text("body {}")

            with patch("amc_server.mixins.http.DASHBOARD_DIR", Path(tmpdir)):
                handler._serve_dashboard_file("styles.css")

        self.assertEqual(handler.headers_sent["Content-Type"], "text/css; charset=utf-8")

    def test_correct_content_type_for_js(self):
        handler = DummyHttpHandler()

        with tempfile.TemporaryDirectory() as tmpdir:
            js_file = Path(tmpdir) / "app.js"
            js_file.write_text("console.log('hello')")

            with patch("amc_server.mixins.http.DASHBOARD_DIR", Path(tmpdir)):
                handler._serve_dashboard_file("app.js")

        self.assertEqual(handler.headers_sent["Content-Type"], "application/javascript; charset=utf-8")

    def test_unknown_extension_gets_octet_stream(self):
        handler = DummyHttpHandler()

        with tempfile.TemporaryDirectory() as tmpdir:
            unknown_file = Path(tmpdir) / "data.xyz"
            unknown_file.write_bytes(b"\x00\x01\x02")

            with patch("amc_server.mixins.http.DASHBOARD_DIR", Path(tmpdir)):
                handler._serve_dashboard_file("data.xyz")

        self.assertEqual(handler.headers_sent["Content-Type"], "application/octet-stream")

    def test_no_cache_headers_set(self):
        handler = DummyHttpHandler()

        with tempfile.TemporaryDirectory() as tmpdir:
            html_file = Path(tmpdir) / "test.html"
            html_file.write_text("<html></html>")

            with patch("amc_server.mixins.http.DASHBOARD_DIR", Path(tmpdir)):
                handler._serve_dashboard_file("test.html")

        self.assertIn("no-cache", handler.headers_sent.get("Cache-Control", ""))


class TestDoGet(unittest.TestCase):
    """Tests for do_GET routing edge cases."""

    def _make_handler(self, path):
        handler = DummyHttpHandler()
        handler.path = path
        handler._serve_dashboard_file = MagicMock()
        handler._serve_state = MagicMock()
        handler._serve_stream = MagicMock()
        handler._serve_events = MagicMock()
        handler._serve_conversation = MagicMock()
        handler._json_error = MagicMock()
        return handler

    def test_root_serves_index(self):
        handler = self._make_handler("/")
        handler.do_GET()
        handler._serve_dashboard_file.assert_called_with("index.html")

    def test_index_html_serves_index(self):
        handler = self._make_handler("/index.html")
        handler.do_GET()
        handler._serve_dashboard_file.assert_called_with("index.html")

    def test_static_file_served(self):
        handler = self._make_handler("/components/App.js")
        handler.do_GET()
        handler._serve_dashboard_file.assert_called_with("components/App.js")

    def test_path_traversal_in_static_blocked(self):
        handler = self._make_handler("/../../etc/passwd")
        handler.do_GET()
        handler._json_error.assert_called_with(404, "Not Found")

    def test_api_state_routed(self):
        handler = self._make_handler("/api/state")
        handler.do_GET()
        handler._serve_state.assert_called_once()

    def test_api_stream_routed(self):
        handler = self._make_handler("/api/stream")
        handler.do_GET()
        handler._serve_stream.assert_called_once()

    def test_api_events_routed_with_id(self):
        handler = self._make_handler("/api/events/session-123")
        handler.do_GET()
        handler._serve_events.assert_called_with("session-123")

    def test_api_events_url_decoded(self):
        handler = self._make_handler("/api/events/session%20with%20spaces")
        handler.do_GET()
        handler._serve_events.assert_called_with("session with spaces")

    def test_api_conversation_with_query_params(self):
        handler = self._make_handler("/api/conversation/sess123?project_dir=/test&agent=codex")
        handler.do_GET()
        handler._serve_conversation.assert_called_with("sess123", "/test", "codex")

    def test_api_conversation_defaults_to_claude(self):
        handler = self._make_handler("/api/conversation/sess123")
        handler.do_GET()
        handler._serve_conversation.assert_called_with("sess123", "", "claude")

    def test_unknown_api_path_returns_404(self):
        handler = self._make_handler("/api/unknown")
        handler.do_GET()
        handler._json_error.assert_called_with(404, "Not Found")


class TestDoGetSpawnRoutes(unittest.TestCase):
    """Tests for spawn-related GET routes."""

    def _make_handler(self, path):
        handler = DummyHttpHandler()
        handler.path = path
        handler._serve_dashboard_file = MagicMock()
        handler._serve_state = MagicMock()
        handler._serve_stream = MagicMock()
        handler._serve_events = MagicMock()
        handler._serve_conversation = MagicMock()
        handler._serve_skills = MagicMock()
        handler._handle_projects = MagicMock()
        handler._handle_health = MagicMock()
        handler._json_error = MagicMock()
        return handler

    def test_api_projects_routed(self):
        handler = self._make_handler("/api/projects")
        handler.do_GET()
        handler._handle_projects.assert_called_once()

    def test_api_health_routed(self):
        handler = self._make_handler("/api/health")
        handler.do_GET()
        handler._handle_health.assert_called_once()


class TestDoPost(unittest.TestCase):
    """Tests for do_POST routing edge cases."""

    def _make_handler(self, path):
        handler = DummyHttpHandler()
        handler.path = path
        handler._dismiss_dead_sessions = MagicMock()
        handler._dismiss_session = MagicMock()
        handler._respond_to_session = MagicMock()
        handler._handle_spawn = MagicMock()
        handler._handle_projects_refresh = MagicMock()
        handler._json_error = MagicMock()
        return handler

    def test_dismiss_dead_routed(self):
        handler = self._make_handler("/api/dismiss-dead")
        handler.do_POST()
        handler._dismiss_dead_sessions.assert_called_once()

    def test_dismiss_session_routed(self):
        handler = self._make_handler("/api/dismiss/session-abc")
        handler.do_POST()
        handler._dismiss_session.assert_called_with("session-abc")

    def test_dismiss_url_decoded(self):
        handler = self._make_handler("/api/dismiss/session%2Fwith%2Fslash")
        handler.do_POST()
        handler._dismiss_session.assert_called_with("session/with/slash")

    def test_respond_routed(self):
        handler = self._make_handler("/api/respond/session-xyz")
        handler.do_POST()
        handler._respond_to_session.assert_called_with("session-xyz")

    def test_spawn_routed(self):
        handler = self._make_handler("/api/spawn")
        handler.do_POST()
        handler._handle_spawn.assert_called_once()

    def test_projects_refresh_routed(self):
        handler = self._make_handler("/api/projects/refresh")
        handler.do_POST()
        handler._handle_projects_refresh.assert_called_once()

    def test_unknown_post_path_returns_404(self):
        handler = self._make_handler("/api/unknown")
        handler.do_POST()
        handler._json_error.assert_called_with(404, "Not Found")


class TestDoOptions(unittest.TestCase):
    """Tests for do_OPTIONS CORS preflight."""

    def test_returns_204_with_cors_headers(self):
        handler = DummyHttpHandler()
        handler.do_OPTIONS()

        self.assertEqual(handler.response_code, 204)
        self.assertEqual(handler.headers_sent["Access-Control-Allow-Origin"], "*")
        self.assertIn("POST", handler.headers_sent["Access-Control-Allow-Methods"])
        self.assertIn("GET", handler.headers_sent["Access-Control-Allow-Methods"])
        self.assertIn("Content-Type", handler.headers_sent["Access-Control-Allow-Headers"])
        self.assertIn("Authorization", handler.headers_sent["Access-Control-Allow-Headers"])


class TestJsonError(unittest.TestCase):
    """Tests for _json_error helper."""

    def test_sends_json_with_error(self):
        handler = DummyHttpHandler()
        handler._json_error(404, "Not Found")

        written = handler.wfile.getvalue()
        payload = json.loads(written)
        self.assertEqual(payload, {"ok": False, "error": "Not Found"})


if __name__ == "__main__":
    unittest.main()
635 tests/test_parsing.py Normal file
@@ -0,0 +1,635 @@
|
|||||||
|
"""Tests for mixins/parsing.py edge cases.
|
||||||
|
|
||||||
|
Unit tests for parsing helper functions and conversation file resolution.
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import tempfile
|
||||||
|
import unittest
|
||||||
|
from pathlib import Path
|
||||||
|
from unittest.mock import patch
|
||||||
|
|
||||||
|
from amc_server.mixins.parsing import SessionParsingMixin
|
||||||
|
|
||||||
|
|
||||||
|
class DummyParsingHandler(SessionParsingMixin):
|
||||||
|
"""Minimal handler for testing parsing mixin."""
|
||||||
|
pass
|
||||||
|
|
||||||
|
|
||||||
|
class TestToInt(unittest.TestCase):
    """Tests for _to_int edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_none_returns_none(self):
        self.assertIsNone(self.handler._to_int(None))

    def test_bool_true_returns_none(self):
        # Booleans are technically ints in Python, but we don't want to convert them
        self.assertIsNone(self.handler._to_int(True))

    def test_bool_false_returns_none(self):
        self.assertIsNone(self.handler._to_int(False))

    def test_int_returns_int(self):
        self.assertEqual(self.handler._to_int(42), 42)

    def test_negative_int_returns_int(self):
        self.assertEqual(self.handler._to_int(-10), -10)

    def test_zero_returns_zero(self):
        self.assertEqual(self.handler._to_int(0), 0)

    def test_float_truncates_to_int(self):
        self.assertEqual(self.handler._to_int(3.7), 3)

    def test_negative_float_truncates(self):
        self.assertEqual(self.handler._to_int(-2.9), -2)

    def test_string_int_parses(self):
        self.assertEqual(self.handler._to_int("123"), 123)

    def test_string_negative_parses(self):
        self.assertEqual(self.handler._to_int("-456"), -456)

    def test_string_with_whitespace_parses(self):
        # Python's int() tolerates surrounding whitespace, so this parses cleanly
        self.assertEqual(self.handler._to_int(" 42 "), 42)

    def test_string_float_fails(self):
        # "3.14" can't be parsed by int()
        self.assertIsNone(self.handler._to_int("3.14"))

    def test_empty_string_returns_none(self):
        self.assertIsNone(self.handler._to_int(""))

    def test_non_numeric_string_returns_none(self):
        self.assertIsNone(self.handler._to_int("abc"))

    def test_list_returns_none(self):
        self.assertIsNone(self.handler._to_int([1, 2, 3]))

    def test_dict_returns_none(self):
        self.assertIsNone(self.handler._to_int({"value": 42}))


class TestSumOptionalInts(unittest.TestCase):
    """Tests for _sum_optional_ints edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_empty_list_returns_none(self):
        self.assertIsNone(self.handler._sum_optional_ints([]))

    def test_all_none_returns_none(self):
        self.assertIsNone(self.handler._sum_optional_ints([None, None, None]))

    def test_single_int_returns_that_int(self):
        self.assertEqual(self.handler._sum_optional_ints([42]), 42)

    def test_mixed_none_and_int_sums_ints(self):
        self.assertEqual(self.handler._sum_optional_ints([None, 10, None, 20]), 30)

    def test_all_ints_sums_all(self):
        self.assertEqual(self.handler._sum_optional_ints([1, 2, 3, 4]), 10)

    def test_includes_zero(self):
        self.assertEqual(self.handler._sum_optional_ints([0, 5]), 5)

    def test_negative_ints(self):
        self.assertEqual(self.handler._sum_optional_ints([10, -3, 5]), 12)

    def test_floats_ignored(self):
        # Only integers are summed
        self.assertEqual(self.handler._sum_optional_ints([10, 3.14, 5]), 15)

    def test_strings_ignored(self):
        self.assertEqual(self.handler._sum_optional_ints(["10", 5]), 5)

    def test_only_non_ints_returns_none(self):
        self.assertIsNone(self.handler._sum_optional_ints(["10", 3.14, None]))


class TestAsDict(unittest.TestCase):
    """Tests for _as_dict edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_dict_returns_dict(self):
        self.assertEqual(self.handler._as_dict({"key": "value"}), {"key": "value"})

    def test_empty_dict_returns_empty_dict(self):
        self.assertEqual(self.handler._as_dict({}), {})

    def test_none_returns_empty_dict(self):
        self.assertEqual(self.handler._as_dict(None), {})

    def test_list_returns_empty_dict(self):
        self.assertEqual(self.handler._as_dict([1, 2, 3]), {})

    def test_string_returns_empty_dict(self):
        self.assertEqual(self.handler._as_dict("not a dict"), {})

    def test_int_returns_empty_dict(self):
        self.assertEqual(self.handler._as_dict(42), {})

    def test_bool_returns_empty_dict(self):
        self.assertEqual(self.handler._as_dict(True), {})


class TestGetClaudeContextWindow(unittest.TestCase):
    """Tests for _get_claude_context_window edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_none_model_returns_200k(self):
        self.assertEqual(self.handler._get_claude_context_window(None), 200_000)

    def test_empty_string_returns_200k(self):
        self.assertEqual(self.handler._get_claude_context_window(""), 200_000)

    def test_claude_2_returns_100k(self):
        self.assertEqual(self.handler._get_claude_context_window("claude-2"), 100_000)

    def test_claude_2_1_returns_100k(self):
        self.assertEqual(self.handler._get_claude_context_window("claude-2.1"), 100_000)

    def test_claude_3_returns_200k(self):
        self.assertEqual(self.handler._get_claude_context_window("claude-3-opus-20240229"), 200_000)

    def test_claude_35_returns_200k(self):
        self.assertEqual(self.handler._get_claude_context_window("claude-3-5-sonnet-20241022"), 200_000)

    def test_unknown_model_returns_200k(self):
        self.assertEqual(self.handler._get_claude_context_window("some-future-model"), 200_000)


class TestGetClaudeConversationFile(unittest.TestCase):
    """Tests for _get_claude_conversation_file edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_empty_project_dir_returns_none(self):
        self.assertIsNone(self.handler._get_claude_conversation_file("session123", ""))

    def test_none_project_dir_returns_none(self):
        self.assertIsNone(self.handler._get_claude_conversation_file("session123", None))

    def test_nonexistent_file_returns_none(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            with patch("amc_server.mixins.parsing.CLAUDE_PROJECTS_DIR", Path(tmpdir)):
                result = self.handler._get_claude_conversation_file("session123", "/some/project")
                self.assertIsNone(result)

    def test_existing_file_returns_path(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            # Create the expected file structure:
            # project_dir "/foo/bar" becomes "-foo-bar"
            encoded_dir = Path(tmpdir) / "-foo-bar"
            encoded_dir.mkdir()
            conv_file = encoded_dir / "session123.jsonl"
            conv_file.write_text('{"type": "user"}\n')

            with patch("amc_server.mixins.parsing.CLAUDE_PROJECTS_DIR", Path(tmpdir)):
                result = self.handler._get_claude_conversation_file("session123", "/foo/bar")
                self.assertEqual(result, conv_file)

    def test_project_dir_without_leading_slash_gets_prefixed(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            # project_dir "foo/bar" becomes "-foo-bar" (adds leading dash)
            encoded_dir = Path(tmpdir) / "-foo-bar"
            encoded_dir.mkdir()
            conv_file = encoded_dir / "session123.jsonl"
            conv_file.write_text('{"type": "user"}\n')

            with patch("amc_server.mixins.parsing.CLAUDE_PROJECTS_DIR", Path(tmpdir)):
                result = self.handler._get_claude_conversation_file("session123", "foo/bar")
                self.assertEqual(result, conv_file)


class TestFindCodexTranscriptFile(unittest.TestCase):
    """Tests for _find_codex_transcript_file edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_empty_session_id_returns_none(self):
        self.assertIsNone(self.handler._find_codex_transcript_file(""))

    def test_none_session_id_returns_none(self):
        self.assertIsNone(self.handler._find_codex_transcript_file(None))

    def test_codex_sessions_dir_missing_returns_none(self):
        with patch("amc_server.mixins.parsing.CODEX_SESSIONS_DIR", Path("/nonexistent")):
            # Clear cache to force discovery
            from amc_server.agents import _codex_transcript_cache
            _codex_transcript_cache.clear()
            result = self.handler._find_codex_transcript_file("abc123")
            self.assertIsNone(result)

    def test_cache_hit_returns_cached_path(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            transcript_file = Path(tmpdir) / "abc123.jsonl"
            transcript_file.write_text('{"type": "session_meta"}\n')

            from amc_server.agents import _codex_transcript_cache
            _codex_transcript_cache["abc123"] = str(transcript_file)

            result = self.handler._find_codex_transcript_file("abc123")
            self.assertEqual(result, transcript_file)

            # Clean up
            _codex_transcript_cache.clear()

    def test_cache_hit_with_deleted_file_returns_none(self):
        from amc_server.agents import _codex_transcript_cache
        _codex_transcript_cache["deleted-session"] = "/nonexistent/file.jsonl"

        result = self.handler._find_codex_transcript_file("deleted-session")
        self.assertIsNone(result)

        _codex_transcript_cache.clear()

    def test_cache_hit_with_none_returns_none(self):
        from amc_server.agents import _codex_transcript_cache
        _codex_transcript_cache["cached-none"] = None

        result = self.handler._find_codex_transcript_file("cached-none")
        self.assertIsNone(result)

        _codex_transcript_cache.clear()


class TestReadJsonlTailEntries(unittest.TestCase):
    """Tests for _read_jsonl_tail_entries edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_empty_file_returns_empty_list(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            path = Path(f.name)
        try:
            result = self.handler._read_jsonl_tail_entries(path)
            self.assertEqual(result, [])
        finally:
            path.unlink()

    def test_nonexistent_file_returns_empty_list(self):
        result = self.handler._read_jsonl_tail_entries(Path("/nonexistent/file.jsonl"))
        self.assertEqual(result, [])

    def test_single_line_file(self):
        # The file is written inside the `with` block (so it is flushed and
        # closed) and read back afterwards.
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"key": "value"}\n')
            path = Path(f.name)
        try:
            result = self.handler._read_jsonl_tail_entries(path)
            self.assertEqual(result, [{"key": "value"}])
        finally:
            path.unlink()

    def test_max_lines_limits_output(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            for i in range(100):
                f.write(f'{{"n": {i}}}\n')
            path = Path(f.name)
        try:
            result = self.handler._read_jsonl_tail_entries(path, max_lines=10)
            self.assertEqual(len(result), 10)
            # Should be the LAST 10 lines
            self.assertEqual(result[-1], {"n": 99})
        finally:
            path.unlink()

    def test_max_bytes_truncates_from_start(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            # Write many lines
            for i in range(100):
                f.write(f'{{"number": {i}}}\n')
            path = Path(f.name)
        try:
            # Read only the last 200 bytes
            result = self.handler._read_jsonl_tail_entries(path, max_bytes=200)
            # Should get some entries from the end
            self.assertGreater(len(result), 0)
            # All entries should be from near the end
            for entry in result:
                self.assertGreater(entry["number"], 80)
        finally:
            path.unlink()

    def test_partial_first_line_skipped(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            # Write enough to trigger a partial read
            f.write('{"first": "line", "long_key": "' + "x" * 500 + '"}\n')
            f.write('{"second": "line"}\n')
            path = Path(f.name)
        try:
            # Read only the last 100 bytes (will cut into the first line)
            result = self.handler._read_jsonl_tail_entries(path, max_bytes=100)
            # First line should be skipped (partial JSON)
            self.assertEqual(len(result), 1)
            self.assertEqual(result[0], {"second": "line"})
        finally:
            path.unlink()

    def test_invalid_json_lines_skipped(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"valid": "json"}\n')
            f.write('this is not json\n')
            f.write('{"another": "valid"}\n')
            path = Path(f.name)
        try:
            result = self.handler._read_jsonl_tail_entries(path)
            self.assertEqual(len(result), 2)
            self.assertEqual(result[0], {"valid": "json"})
            self.assertEqual(result[1], {"another": "valid"})
        finally:
            path.unlink()

    def test_empty_lines_skipped(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"first": 1}\n')
            f.write('\n')
            f.write('{"second": 2}\n')
            path = Path(f.name)
        try:
            result = self.handler._read_jsonl_tail_entries(path)
            self.assertEqual(len(result), 2)
        finally:
            path.unlink()


class TestParseClaudeContextUsageFromFile(unittest.TestCase):
    """Tests for _parse_claude_context_usage_from_file edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_empty_file_returns_none(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            path = Path(f.name)
        try:
            result = self.handler._parse_claude_context_usage_from_file(path)
            self.assertIsNone(result)
        finally:
            path.unlink()

    def test_no_assistant_messages_returns_none(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"type": "user", "message": {"content": "hello"}}\n')
            path = Path(f.name)
        try:
            result = self.handler._parse_claude_context_usage_from_file(path)
            self.assertIsNone(result)
        finally:
            path.unlink()

    def test_assistant_without_usage_returns_none(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"type": "assistant", "message": {"content": []}}\n')
            path = Path(f.name)
        try:
            result = self.handler._parse_claude_context_usage_from_file(path)
            self.assertIsNone(result)
        finally:
            path.unlink()

    def test_extracts_usage_from_assistant_message(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write(json.dumps({
                "type": "assistant",
                "timestamp": "2024-01-01T00:00:00Z",
                "message": {
                    "model": "claude-3-5-sonnet-20241022",
                    "usage": {
                        "input_tokens": 1000,
                        "output_tokens": 500,
                        "cache_read_input_tokens": 200,
                        "cache_creation_input_tokens": 100,
                    }
                }
            }) + "\n")
            path = Path(f.name)
        try:
            result = self.handler._parse_claude_context_usage_from_file(path)
            self.assertIsNotNone(result)
            self.assertEqual(result["input_tokens"], 1000)
            self.assertEqual(result["output_tokens"], 500)
            self.assertEqual(result["cached_input_tokens"], 300)  # 200 + 100
            self.assertEqual(result["current_tokens"], 1800)  # sum of all
            self.assertEqual(result["window_tokens"], 200_000)
            self.assertEqual(result["model"], "claude-3-5-sonnet-20241022")
        finally:
            path.unlink()

    def test_uses_most_recent_assistant_message(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write(json.dumps({
                "type": "assistant",
                "message": {"usage": {"input_tokens": 100, "output_tokens": 50}}
            }) + "\n")
            f.write(json.dumps({
                "type": "assistant",
                "message": {"usage": {"input_tokens": 999, "output_tokens": 888}}
            }) + "\n")
            path = Path(f.name)
        try:
            result = self.handler._parse_claude_context_usage_from_file(path)
            # Should use the last message
            self.assertEqual(result["input_tokens"], 999)
            self.assertEqual(result["output_tokens"], 888)
        finally:
            path.unlink()

    def test_skips_assistant_with_no_current_tokens(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            # Last message has no usable tokens
            f.write(json.dumps({
                "type": "assistant",
                "message": {"usage": {"input_tokens": 100, "output_tokens": 50}}
            }) + "\n")
            f.write(json.dumps({
                "type": "assistant",
                "message": {"usage": {}}  # No tokens
            }) + "\n")
            path = Path(f.name)
        try:
            result = self.handler._parse_claude_context_usage_from_file(path)
            # Should fall back to the earlier message with valid tokens
            self.assertEqual(result["input_tokens"], 100)
        finally:
            path.unlink()


class TestParseCodexContextUsageFromFile(unittest.TestCase):
    """Tests for _parse_codex_context_usage_from_file edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()

    def test_empty_file_returns_none(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            path = Path(f.name)
        try:
            result = self.handler._parse_codex_context_usage_from_file(path)
            self.assertIsNone(result)
        finally:
            path.unlink()

    def test_no_token_count_events_returns_none(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"type": "response_item"}\n')
            path = Path(f.name)
        try:
            result = self.handler._parse_codex_context_usage_from_file(path)
            self.assertIsNone(result)
        finally:
            path.unlink()

    def test_extracts_token_count_event(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write(json.dumps({
                "type": "event_msg",
                "timestamp": "2024-01-01T00:00:00Z",
                "payload": {
                    "type": "token_count",
                    "info": {
                        "model_context_window": 128000,
                        "last_token_usage": {
                            "input_tokens": 5000,
                            "output_tokens": 2000,
                            "cached_input_tokens": 1000,
                            "total_tokens": 8000,
                        },
                        "total_token_usage": {
                            "total_tokens": 50000,
                        }
                    }
                }
            }) + "\n")
            path = Path(f.name)
        try:
            result = self.handler._parse_codex_context_usage_from_file(path)
            self.assertIsNotNone(result)
            self.assertEqual(result["window_tokens"], 128000)
            self.assertEqual(result["current_tokens"], 8000)
            self.assertEqual(result["input_tokens"], 5000)
            self.assertEqual(result["output_tokens"], 2000)
            self.assertEqual(result["session_total_tokens"], 50000)
        finally:
            path.unlink()

    def test_calculates_current_tokens_when_total_missing(self):
        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write(json.dumps({
                "type": "event_msg",
                "payload": {
                    "type": "token_count",
                    "info": {
                        "last_token_usage": {
                            "input_tokens": 100,
                            "output_tokens": 50,
                            # no total_tokens
                        }
                    }
                }
            }) + "\n")
            path = Path(f.name)
        try:
            result = self.handler._parse_codex_context_usage_from_file(path)
            # Should sum the available tokens
            self.assertEqual(result["current_tokens"], 150)
        finally:
            path.unlink()


class TestGetCachedContextUsage(unittest.TestCase):
    """Tests for _get_cached_context_usage edge cases."""

    def setUp(self):
        self.handler = DummyParsingHandler()
        # Clear cache before each test
        from amc_server.agents import _context_usage_cache
        _context_usage_cache.clear()

    def test_nonexistent_file_returns_none(self):
        def mock_parser(path):
            return {"tokens": 100}

        result = self.handler._get_cached_context_usage(
            Path("/nonexistent/file.jsonl"),
            mock_parser,
        )
        self.assertIsNone(result)

    def test_caches_result_by_mtime_and_size(self):
        call_count = [0]

        def counting_parser(path):
            call_count[0] += 1
            return {"tokens": 100}

        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"data": "test"}\n')
            path = Path(f.name)
        try:
            # First call - should invoke the parser
            result1 = self.handler._get_cached_context_usage(path, counting_parser)
            self.assertEqual(call_count[0], 1)
            self.assertEqual(result1, {"tokens": 100})

            # Second call - should use the cache
            result2 = self.handler._get_cached_context_usage(path, counting_parser)
            self.assertEqual(call_count[0], 1)  # No additional call
            self.assertEqual(result2, {"tokens": 100})
        finally:
            path.unlink()

    def test_invalidates_cache_on_mtime_change(self):
        import time

        call_count = [0]

        def counting_parser(path):
            call_count[0] += 1
            return {"tokens": call_count[0] * 100}

        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"data": "test"}\n')
            path = Path(f.name)
        try:
            result1 = self.handler._get_cached_context_usage(path, counting_parser)
            self.assertEqual(result1, {"tokens": 100})

            # Modify the file to change its mtime
            time.sleep(0.01)
            path.write_text('{"data": "modified"}\n')

            result2 = self.handler._get_cached_context_usage(path, counting_parser)
            self.assertEqual(call_count[0], 2)  # Parser called again
            self.assertEqual(result2, {"tokens": 200})
        finally:
            path.unlink()

    def test_parser_exception_returns_none(self):
        def failing_parser(path):
            raise ValueError("Parse error")

        with tempfile.NamedTemporaryFile(mode="w", suffix=".jsonl", delete=False) as f:
            f.write('{"data": "test"}\n')
            path = Path(f.name)
        try:
            result = self.handler._get_cached_context_usage(path, failing_parser)
            self.assertIsNone(result)
        finally:
            path.unlink()


if __name__ == "__main__":
|
||||||
|
unittest.main()
|
||||||
583  tests/test_skills.py  Normal file
@@ -0,0 +1,583 @@
import json
import tempfile
import unittest
from pathlib import Path
from unittest.mock import patch

from amc_server.mixins.skills import SkillsMixin


class DummySkillsHandler(SkillsMixin):
    def __init__(self):
        self.sent = []

    def _send_json(self, code, payload):
        self.sent.append((code, payload))


class TestEnumerateClaudeSkills(unittest.TestCase):
    """Tests for _enumerate_claude_skills."""

    def setUp(self):
        self.handler = DummySkillsHandler()

    def test_empty_directory(self):
        """Returns empty list when ~/.claude/skills doesn't exist."""
        with tempfile.TemporaryDirectory() as tmpdir:
            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()
                self.assertEqual(result, [])

    def test_reads_skill_md(self):
        """Reads description from SKILL.md."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/my-skill"
            skill_dir.mkdir(parents=True)
            (skill_dir / "SKILL.md").write_text("Does something useful")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(len(result), 1)
            self.assertEqual(result[0]["name"], "my-skill")
            self.assertEqual(result[0]["description"], "Does something useful")

    def test_fallback_to_readme(self):
        """Falls back to README.md when SKILL.md is missing."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/readme-skill"
            skill_dir.mkdir(parents=True)
            (skill_dir / "README.md").write_text("# Header\n\nReadme description here")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(len(result), 1)
            self.assertEqual(result[0]["description"], "Readme description here")

    def test_fallback_priority_order(self):
        """SKILL.md takes priority over skill.md, prompt.md, README.md."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/priority-skill"
            skill_dir.mkdir(parents=True)
            (skill_dir / "SKILL.md").write_text("From SKILL.md")
            (skill_dir / "README.md").write_text("From README.md")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(result[0]["description"], "From SKILL.md")

    def test_skill_md_lowercase_fallback(self):
        """Falls back to skill.md when SKILL.md is missing."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/lower-skill"
            skill_dir.mkdir(parents=True)
            (skill_dir / "skill.md").write_text("From lowercase skill.md")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(result[0]["description"], "From lowercase skill.md")

    def test_skips_hidden_dirs(self):
        """Ignores directories starting with a dot."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skills_dir = Path(tmpdir) / ".claude/skills"
            skills_dir.mkdir(parents=True)
            # Visible skill
            visible = skills_dir / "visible"
            visible.mkdir()
            (visible / "SKILL.md").write_text("Visible skill")
            # Hidden skill
            hidden = skills_dir / ".hidden"
            hidden.mkdir()
            (hidden / "SKILL.md").write_text("Hidden skill")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            names = [s["name"] for s in result]
            self.assertIn("visible", names)
            self.assertNotIn(".hidden", names)

    def test_skips_files_in_skills_dir(self):
        """Only processes directories, not loose files."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skills_dir = Path(tmpdir) / ".claude/skills"
            skills_dir.mkdir(parents=True)
            (skills_dir / "not-a-dir.txt").write_text("stray file")
            skill = skills_dir / "real-skill"
            skill.mkdir()
            (skill / "SKILL.md").write_text("A real skill")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(len(result), 1)
            self.assertEqual(result[0]["name"], "real-skill")

    def test_truncates_description(self):
        """Description is truncated to 100 chars."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/long-skill"
            skill_dir.mkdir(parents=True)
            long_desc = "A" * 200
            (skill_dir / "SKILL.md").write_text(long_desc)

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(len(result[0]["description"]), 100)

    def test_skips_headers(self):
        """First non-header line is used as the description."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/header-skill"
            skill_dir.mkdir(parents=True)
            content = "# My Skill\n## Subtitle\n\nActual description"
            (skill_dir / "SKILL.md").write_text(content)

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(result[0]["description"], "Actual description")

    def test_no_description_uses_fallback(self):
        """Empty/header-only skill uses the 'Skill: name' fallback."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/empty-skill"
            skill_dir.mkdir(parents=True)
            (skill_dir / "SKILL.md").write_text("# Only Headers\n## Nothing else")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(result[0]["description"], "Skill: empty-skill")

    def test_frontmatter_name_overrides_dir_name(self):
        """Frontmatter name field takes priority over the directory name."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/bug-hunter"
            skill_dir.mkdir(parents=True)
            content = '---\nname: "doodle:bug-hunter"\ndescription: Find bugs\n---\n'
            (skill_dir / "SKILL.md").write_text(content)

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(len(result), 1)
            self.assertEqual(result[0]["name"], "doodle:bug-hunter")
            self.assertEqual(result[0]["description"], "Find bugs")

    def test_name_preserved_across_fallback_files(self):
        """Name from SKILL.md is kept even when the description comes from README.md."""
        with tempfile.TemporaryDirectory() as tmpdir:
            skill_dir = Path(tmpdir) / ".claude/skills/bug-hunter"
            skill_dir.mkdir(parents=True)
            # SKILL.md has a name but no extractable description (headers only)
            (skill_dir / "SKILL.md").write_text(
                '---\nname: "doodle:bug-hunter"\n---\n# Bug Hunter\n## Overview'
            )
            # README.md provides the description
            (skill_dir / "README.md").write_text("Find and fix obvious bugs")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_claude_skills()

            self.assertEqual(result[0]["name"], "doodle:bug-hunter")
            self.assertEqual(result[0]["description"], "Find and fix obvious bugs")

def test_no_frontmatter_name_uses_dir_name(self):
|
||||||
|
"""Without name in frontmatter, falls back to directory name."""
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
skill_dir = Path(tmpdir) / ".claude/skills/plain-skill"
|
||||||
|
skill_dir.mkdir(parents=True)
|
||||||
|
(skill_dir / "SKILL.md").write_text("---\ndescription: No name field\n---\n")
|
||||||
|
|
||||||
|
with patch.object(Path, "home", return_value=Path(tmpdir)):
|
||||||
|
result = self.handler._enumerate_claude_skills()
|
||||||
|
|
||||||
|
self.assertEqual(result[0]["name"], "plain-skill")
|
||||||
|
|
||||||
|
def test_no_md_files_uses_fallback(self):
|
||||||
|
"""Skill dir with no markdown files uses fallback description."""
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
skill_dir = Path(tmpdir) / ".claude/skills/bare-skill"
|
||||||
|
skill_dir.mkdir(parents=True)
|
||||||
|
|
||||||
|
with patch.object(Path, "home", return_value=Path(tmpdir)):
|
||||||
|
result = self.handler._enumerate_claude_skills()
|
||||||
|
|
||||||
|
self.assertEqual(result[0]["description"], "Skill: bare-skill")
|
||||||
|
|
||||||
|
def test_os_error_on_read_continues(self):
|
||||||
|
"""OSError when reading file doesn't crash enumeration."""
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
skill_dir = Path(tmpdir) / ".claude/skills/broken-skill"
|
||||||
|
skill_dir.mkdir(parents=True)
|
||||||
|
md = skill_dir / "SKILL.md"
|
||||||
|
md.write_text("content")
|
||||||
|
|
||||||
|
with patch.object(Path, "home", return_value=Path(tmpdir)), \
|
||||||
|
patch.object(Path, "read_text", side_effect=OSError("disk error")):
|
||||||
|
result = self.handler._enumerate_claude_skills()
|
||||||
|
|
||||||
|
self.assertEqual(len(result), 1)
|
||||||
|
self.assertEqual(result[0]["description"], "Skill: broken-skill")
|
||||||
|
|
||||||
|
|
||||||
|
class TestParseFrontmatter(unittest.TestCase):
    """Tests for _parse_frontmatter."""

    def setUp(self):
        self.handler = DummySkillsHandler()

    def _desc(self, content: str) -> str:
        return self.handler._parse_frontmatter(content)["description"]

    def _name(self, content: str) -> str:
        return self.handler._parse_frontmatter(content)["name"]

    def test_empty_content(self):
        self.assertEqual(self._desc(""), "")
        self.assertEqual(self._name(""), "")

    def test_plain_text(self):
        self.assertEqual(self._desc("Simple description"), "Simple description")

    def test_yaml_frontmatter_description(self):
        content = "---\ndescription: A skill for formatting\n---\nBody text"
        self.assertEqual(self._desc(content), "A skill for formatting")

    def test_yaml_frontmatter_quoted_description(self):
        content = '---\ndescription: "Quoted desc"\n---\n'
        self.assertEqual(self._desc(content), "Quoted desc")

    def test_yaml_frontmatter_single_quoted_description(self):
        content = "---\ndescription: 'Single quoted'\n---\n"
        self.assertEqual(self._desc(content), "Single quoted")

    def test_yaml_multiline_fold_indicator(self):
        """Handles >- style multi-line YAML."""
        content = "---\ndescription: >-\n Multi-line folded text\n---\n"
        self.assertEqual(self._desc(content), "Multi-line folded text")

    def test_yaml_multiline_literal_indicator(self):
        """Handles |- style multi-line YAML."""
        content = "---\ndescription: |-\n Literal block text\n---\n"
        self.assertEqual(self._desc(content), "Literal block text")

    def test_yaml_multiline_bare_fold(self):
        """Handles > without trailing dash."""
        content = "---\ndescription: >\n Bare fold\n---\n"
        self.assertEqual(self._desc(content), "Bare fold")

    def test_yaml_multiline_bare_literal(self):
        """Handles | without trailing dash."""
        content = "---\ndescription: |\n Bare literal\n---\n"
        self.assertEqual(self._desc(content), "Bare literal")

    def test_yaml_empty_description_falls_back_to_body(self):
        """Empty description in frontmatter falls back to body text."""
        content = "---\ndescription:\n---\nFallback body line"
        self.assertEqual(self._desc(content), "Fallback body line")

    def test_skips_headers_and_empty_lines(self):
        content = "# Title\n\n## Section\n\nActual content"
        self.assertEqual(self._desc(content), "Actual content")

    def test_skips_html_comments(self):
        content = "<!-- comment -->\nReal content"
        self.assertEqual(self._desc(content), "Real content")

    def test_truncates_to_100_chars(self):
        long_line = "B" * 150
        self.assertEqual(len(self._desc(long_line)), 100)

    def test_frontmatter_description_truncated(self):
        desc = "C" * 150
        content = f"---\ndescription: {desc}\n---\n"
        self.assertEqual(len(self._desc(content)), 100)

    def test_no_closing_frontmatter_extracts_description(self):
        """Unclosed frontmatter still extracts description from the loop."""
        content = "---\ndescription: Orphaned\ntitle: Test"
        self.assertEqual(self._desc(content), "Orphaned")

    def test_body_only_headers_returns_empty(self):
        content = "# H1\n## H2\n### H3"
        self.assertEqual(self._desc(content), "")

    # --- name field tests ---

    def test_extracts_name_from_frontmatter(self):
        content = "---\nname: doodle:bug-hunter\ndescription: Find bugs\n---\n"
        self.assertEqual(self._name(content), "doodle:bug-hunter")
        self.assertEqual(self._desc(content), "Find bugs")

    def test_name_quoted(self):
        content = '---\nname: "bmad:brainstorm"\n---\nBody'
        self.assertEqual(self._name(content), "bmad:brainstorm")

    def test_name_single_quoted(self):
        content = "---\nname: 'my-prefix:skill'\n---\nBody"
        self.assertEqual(self._name(content), "my-prefix:skill")

    def test_no_name_field_returns_empty(self):
        content = "---\ndescription: Just a description\n---\n"
        self.assertEqual(self._name(content), "")

    def test_name_truncated_to_100(self):
        long_name = "N" * 150
        content = f"---\nname: {long_name}\n---\n"
        self.assertEqual(len(self._name(content)), 100)

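The parser under test is not part of this diff. A minimal sketch that would satisfy the behaviors exercised above (quoted and unquoted `name`/`description` fields, `>`/`>-`/`|`/`|-` block scalars, body-text fallback that skips headers and HTML comments, unclosed frontmatter, 100-char truncation) might look like the following — `parse_frontmatter` is a hypothetical standalone name, not the actual `_parse_frontmatter` implementation:

```python
def parse_frontmatter(content: str) -> dict:
    """Extract 'name' and 'description' from optional YAML frontmatter.

    Falls back to the first non-header, non-comment body line for the
    description; both fields are truncated to 100 characters.
    """
    name, description = "", ""
    lines = content.splitlines()
    body_start = 0
    if lines and lines[0].strip() == "---":
        i = 1
        while i < len(lines):
            line = lines[i]
            if line.strip() == "---":  # closing fence: frontmatter ends
                body_start = i + 1
                break
            key, _, value = line.partition(":")
            key, value = key.strip(), value.strip()
            if key == "description":
                # Folded (> / >-) and literal (| / |-) blocks: take the next line.
                if value in (">", ">-", "|", "|-") and i + 1 < len(lines):
                    value = lines[i + 1].strip()
                    i += 1
                description = value.strip("\"'")
            elif key == "name":
                name = value.strip("\"'")
            i += 1
        else:  # unclosed frontmatter: all lines consumed, no body remains
            body_start = len(lines)
    if not description:
        for line in lines[body_start:]:
            stripped = line.strip()
            # Skip blanks, markdown headers, and HTML comments.
            if not stripped or stripped.startswith("#") or stripped.startswith("<!--"):
                continue
            description = stripped
            break
    return {"name": name[:100], "description": description[:100]}
```

A real implementation could instead use a YAML library; the hand-rolled scan above only covers the narrow shapes the tests describe.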
class TestEnumerateCodexSkills(unittest.TestCase):
    """Tests for _enumerate_codex_skills."""

    def setUp(self):
        self.handler = DummySkillsHandler()

    def test_reads_cache(self):
        """Reads skills from skills-curated-cache.json."""
        with tempfile.TemporaryDirectory() as tmpdir:
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            cache = {
                "skills": [
                    {"id": "lint", "shortDescription": "Lint code"},
                    {"name": "deploy", "description": "Deploy to prod"},
                ]
            }
            (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            self.assertEqual(len(result), 2)
            self.assertEqual(result[0]["name"], "lint")
            self.assertEqual(result[0]["description"], "Lint code")
            # Falls back to 'name' when 'id' is absent
            self.assertEqual(result[1]["name"], "deploy")
            self.assertEqual(result[1]["description"], "Deploy to prod")

    def test_id_preferred_over_name(self):
        """Uses 'id' field preferentially over 'name'."""
        with tempfile.TemporaryDirectory() as tmpdir:
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            cache = {"skills": [{"id": "the-id", "name": "the-name", "shortDescription": "desc"}]}
            (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            self.assertEqual(result[0]["name"], "the-id")

    def test_short_description_preferred(self):
        """Uses 'shortDescription' preferentially over 'description'."""
        with tempfile.TemporaryDirectory() as tmpdir:
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            cache = {"skills": [{"id": "sk", "shortDescription": "short", "description": "long version"}]}
            (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            self.assertEqual(result[0]["description"], "short")

    def test_invalid_json(self):
        """Continues without cache if JSON is invalid."""
        with tempfile.TemporaryDirectory() as tmpdir:
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            (cache_dir / "skills-curated-cache.json").write_text("{not valid json")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            self.assertEqual(result, [])

    def test_combines_cache_and_user(self):
        """Returns both curated and user-installed skills."""
        with tempfile.TemporaryDirectory() as tmpdir:
            # Curated
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            cache = {"skills": [{"id": "curated-skill", "shortDescription": "Curated"}]}
            (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))
            # User
            user_skill = Path(tmpdir) / ".codex/skills/user-skill"
            user_skill.mkdir(parents=True)
            (user_skill / "SKILL.md").write_text("User installed skill")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            names = [s["name"] for s in result]
            self.assertIn("curated-skill", names)
            self.assertIn("user-skill", names)
            self.assertEqual(len(result), 2)

    def test_no_deduplication(self):
        """Duplicate names from cache and user both appear."""
        with tempfile.TemporaryDirectory() as tmpdir:
            # Curated with name "dupe"
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            cache = {"skills": [{"id": "dupe", "shortDescription": "From cache"}]}
            (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))
            # User with same name "dupe"
            user_skill = Path(tmpdir) / ".codex/skills/dupe"
            user_skill.mkdir(parents=True)
            (user_skill / "SKILL.md").write_text("From user dir")

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            dupe_entries = [s for s in result if s["name"] == "dupe"]
            self.assertEqual(len(dupe_entries), 2)

    def test_empty_no_cache_no_user_dir(self):
        """Returns empty list when neither cache nor user dir exists."""
        with tempfile.TemporaryDirectory() as tmpdir:
            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()
            self.assertEqual(result, [])

    def test_skips_entries_without_name_or_id(self):
        """Cache entries without name or id are skipped."""
        with tempfile.TemporaryDirectory() as tmpdir:
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            cache = {"skills": [{"shortDescription": "No name"}, {"id": "valid", "shortDescription": "OK"}]}
            (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            self.assertEqual(len(result), 1)
            self.assertEqual(result[0]["name"], "valid")

    def test_missing_description_uses_fallback(self):
        """Cache entry without description uses 'Skill: name' fallback."""
        with tempfile.TemporaryDirectory() as tmpdir:
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            cache = {"skills": [{"id": "bare"}]}
            (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            self.assertEqual(result[0]["description"], "Skill: bare")

    def test_user_skill_without_skill_md_uses_fallback(self):
        """User skill dir without SKILL.md uses fallback description."""
        with tempfile.TemporaryDirectory() as tmpdir:
            user_skill = Path(tmpdir) / ".codex/skills/no-md"
            user_skill.mkdir(parents=True)

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            self.assertEqual(result[0]["description"], "User skill: no-md")

    def test_user_skills_skip_hidden_dirs(self):
        """Hidden directories in user skills dir are skipped."""
        with tempfile.TemporaryDirectory() as tmpdir:
            user_dir = Path(tmpdir) / ".codex/skills"
            user_dir.mkdir(parents=True)
            (user_dir / "visible").mkdir()
            (user_dir / ".hidden").mkdir()

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            names = [s["name"] for s in result]
            self.assertIn("visible", names)
            self.assertNotIn(".hidden", names)

    def test_description_truncated_to_100(self):
        """Codex cache description truncated to 100 chars."""
        with tempfile.TemporaryDirectory() as tmpdir:
            cache_dir = Path(tmpdir) / ".codex/vendor_imports"
            cache_dir.mkdir(parents=True)
            cache = {"skills": [{"id": "long", "shortDescription": "D" * 200}]}
            (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))

            with patch.object(Path, "home", return_value=Path(tmpdir)):
                result = self.handler._enumerate_codex_skills()

            self.assertEqual(len(result[0]["description"]), 100)

class TestServeSkills(unittest.TestCase):
    """Tests for _serve_skills."""

    def setUp(self):
        self.handler = DummySkillsHandler()

    def test_claude_trigger(self):
        """Returns / trigger for claude agent."""
        with patch.object(self.handler, "_enumerate_claude_skills", return_value=[]):
            self.handler._serve_skills("claude")

        self.assertEqual(self.handler.sent[0][0], 200)
        self.assertEqual(self.handler.sent[0][1]["trigger"], "/")

    def test_codex_trigger(self):
        """Returns $ trigger for codex agent."""
        with patch.object(self.handler, "_enumerate_codex_skills", return_value=[]):
            self.handler._serve_skills("codex")

        self.assertEqual(self.handler.sent[0][0], 200)
        self.assertEqual(self.handler.sent[0][1]["trigger"], "$")

    def test_default_to_claude(self):
        """Unknown agent defaults to claude (/ trigger)."""
        with patch.object(self.handler, "_enumerate_claude_skills", return_value=[]):
            self.handler._serve_skills("unknown-agent")

        self.assertEqual(self.handler.sent[0][1]["trigger"], "/")

    def test_alphabetical_sort(self):
        """Skills sorted alphabetically (case-insensitive)."""
        skills = [
            {"name": "Zebra", "description": "z"},
            {"name": "alpha", "description": "a"},
            {"name": "Beta", "description": "b"},
        ]
        with patch.object(self.handler, "_enumerate_claude_skills", return_value=skills):
            self.handler._serve_skills("claude")

        result_names = [s["name"] for s in self.handler.sent[0][1]["skills"]]
        self.assertEqual(result_names, ["alpha", "Beta", "Zebra"])

    def test_response_format(self):
        """Response has trigger and skills keys."""
        skills = [{"name": "test", "description": "A test skill"}]
        with patch.object(self.handler, "_enumerate_claude_skills", return_value=skills):
            self.handler._serve_skills("claude")

        code, payload = self.handler.sent[0]
        self.assertEqual(code, 200)
        self.assertIn("trigger", payload)
        self.assertIn("skills", payload)
        self.assertEqual(len(payload["skills"]), 1)

    def test_empty_skills_list(self):
        """Empty skill list still returns valid response."""
        with patch.object(self.handler, "_enumerate_claude_skills", return_value=[]):
            self.handler._serve_skills("claude")

        payload = self.handler.sent[0][1]
        self.assertEqual(payload["skills"], [])
        self.assertEqual(payload["trigger"], "/")


if __name__ == "__main__":
    unittest.main()
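The `_serve_skills` behavior exercised above — trigger selection by agent type plus a case-insensitive alphabetical sort — can be sketched as a small pure function. `serve_skills_payload` is a hypothetical name for illustration; the real handler wraps this in an HTTP response:

```python
def serve_skills_payload(agent: str, skills: list[dict]) -> dict:
    """Build the skills response: a trigger character plus sorted skills.

    Codex skills are invoked with "$"; Claude (and any unknown agent,
    which falls back to Claude) uses "/".
    """
    trigger = "$" if agent == "codex" else "/"
    # Case-insensitive sort, preserving original name casing in the output.
    ordered = sorted(skills, key=lambda s: s["name"].lower())
    return {"trigger": trigger, "skills": ordered}
```

Keeping the sort and trigger logic in a pure function like this is what makes the tests above straightforward: they only patch the enumeration step and assert on the returned payload.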
891 tests/test_spawn.py Normal file
@@ -0,0 +1,891 @@
import io
import json
import tempfile
import time
import unittest
from pathlib import Path
from unittest.mock import MagicMock, patch

import amc_server.mixins.spawn as spawn_mod
from amc_server.mixins.spawn import SpawnMixin, load_projects_cache, _sanitize_pane_name


class DummySpawnHandler(SpawnMixin):
    """Minimal handler stub matching the project convention (see test_control.py)."""

    def __init__(self, body=None, auth_header=''):
        if body is None:
            body = {}
        raw = json.dumps(body).encode('utf-8')
        self.headers = {
            'Content-Length': str(len(raw)),
            'Authorization': auth_header,
        }
        self.rfile = io.BytesIO(raw)
        self.sent = []
        self.errors = []

    def _send_json(self, code, payload):
        self.sent.append((code, payload))

    def _json_error(self, code, message):
        self.errors.append((code, message))

# ---------------------------------------------------------------------------
# _validate_spawn_params
# ---------------------------------------------------------------------------


class TestValidateSpawnParams(unittest.TestCase):
    """Tests for _validate_spawn_params security validation."""

    def setUp(self):
        self.handler = DummySpawnHandler()

    # --- missing / empty project ---

    def test_empty_project_returns_error(self):
        result = self.handler._validate_spawn_params('', 'claude')
        self.assertEqual(result['code'], 'MISSING_PROJECT')

    def test_none_project_returns_error(self):
        result = self.handler._validate_spawn_params(None, 'claude')
        self.assertEqual(result['code'], 'MISSING_PROJECT')

    # --- path traversal: slash ---

    def test_forward_slash_rejected(self):
        result = self.handler._validate_spawn_params('../etc', 'claude')
        self.assertEqual(result['code'], 'INVALID_PROJECT')

    def test_nested_slash_rejected(self):
        result = self.handler._validate_spawn_params('foo/bar', 'claude')
        self.assertEqual(result['code'], 'INVALID_PROJECT')

    # --- path traversal: dotdot ---

    def test_dotdot_in_name_rejected(self):
        result = self.handler._validate_spawn_params('foo..bar', 'claude')
        self.assertEqual(result['code'], 'INVALID_PROJECT')

    def test_leading_dotdot_rejected(self):
        result = self.handler._validate_spawn_params('..project', 'claude')
        self.assertEqual(result['code'], 'INVALID_PROJECT')

    # --- backslash ---

    def test_backslash_rejected(self):
        result = self.handler._validate_spawn_params('foo\\bar', 'claude')
        self.assertEqual(result['code'], 'INVALID_PROJECT')

    # --- symlink escape ---

    def test_symlink_escape_rejected(self):
        """Project resolving outside PROJECTS_DIR is rejected."""
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()

            # Create a symlink that escapes projects dir
            escape_target = Path(tmpdir) / 'outside'
            escape_target.mkdir()
            symlink = projects / 'evil'
            symlink.symlink_to(escape_target)

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                result = self.handler._validate_spawn_params('evil', 'claude')

            self.assertEqual(result['code'], 'INVALID_PROJECT')

    # --- nonexistent project ---

    def test_nonexistent_project_rejected(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                result = self.handler._validate_spawn_params('nonexistent', 'claude')

            self.assertEqual(result['code'], 'PROJECT_NOT_FOUND')

    def test_file_instead_of_directory_rejected(self):
        """A regular file (not a directory) is not a valid project."""
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()
            (projects / 'notadir').write_text('oops')

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                result = self.handler._validate_spawn_params('notadir', 'claude')

            self.assertEqual(result['code'], 'PROJECT_NOT_FOUND')

    # --- invalid agent type ---

    def test_invalid_agent_type_rejected(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()
            (projects / 'myproject').mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                result = self.handler._validate_spawn_params('myproject', 'gpt5')

            self.assertEqual(result['code'], 'INVALID_AGENT_TYPE')
            self.assertIn('gpt5', result['error'])

    def test_empty_agent_type_rejected(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()
            (projects / 'myproject').mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                result = self.handler._validate_spawn_params('myproject', '')

            self.assertEqual(result['code'], 'INVALID_AGENT_TYPE')

    # --- valid project returns resolved_path ---

    def test_valid_project_returns_resolved_path(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()
            project_dir = projects / 'amc'
            project_dir.mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                result = self.handler._validate_spawn_params('amc', 'claude')

            self.assertNotIn('error', result)
            self.assertEqual(result['resolved_path'], project_dir.resolve())

    def test_valid_project_with_codex_agent(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()
            (projects / 'myproject').mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                result = self.handler._validate_spawn_params('myproject', 'codex')

            self.assertNotIn('error', result)
            self.assertIn('resolved_path', result)

    # --- unicode / special characters ---

    def test_unicode_project_name_without_traversal_chars(self):
        """Project names with unicode but no traversal chars resolve normally."""
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()
            # Create a project with a non-ASCII name
            project_dir = projects / 'cafe'
            project_dir.mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                result = self.handler._validate_spawn_params('cafe', 'claude')

            self.assertNotIn('error', result)

    def test_whitespace_only_project_name(self):
        """Whitespace-only project name should fail (falsy)."""
        result = self.handler._validate_spawn_params('', 'claude')
        self.assertEqual(result['code'], 'MISSING_PROJECT')

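The validation rules exercised above can be sketched as a standalone function. This is an illustrative sketch, not the actual `_validate_spawn_params` implementation; `validate_spawn_params` and its `projects_dir` parameter are hypothetical names, and the error-code strings are the ones the tests assert on:

```python
from pathlib import Path


def validate_spawn_params(project, agent_type, projects_dir: Path) -> dict:
    """Reject traversal attempts and unknown agents before resolving the project."""
    if not project:
        return {"error": "project is required", "code": "MISSING_PROJECT"}
    # Cheap lexical checks catch slashes, backslashes, and '..' segments.
    if "/" in project or "\\" in project or ".." in project:
        return {"error": "invalid project name", "code": "INVALID_PROJECT"}
    resolved = (projects_dir / project).resolve()
    # A symlink can still resolve outside the projects root; re-check containment.
    if projects_dir.resolve() not in resolved.parents:
        return {"error": "invalid project name", "code": "INVALID_PROJECT"}
    if not resolved.is_dir():
        return {"error": "project not found", "code": "PROJECT_NOT_FOUND"}
    if agent_type not in ("claude", "codex"):
        return {"error": f"unknown agent type: {agent_type}", "code": "INVALID_AGENT_TYPE"}
    return {"resolved_path": resolved}
```

The ordering matters: containment is checked before `is_dir()`, so a symlink pointing at an existing directory outside the root is reported as `INVALID_PROJECT` rather than accepted.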
# ---------------------------------------------------------------------------
# load_projects_cache
# ---------------------------------------------------------------------------


class TestProjectsCache(unittest.TestCase):
    """Tests for projects cache loading."""

    def test_loads_directory_names_sorted(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir)
            (projects / 'zebra').mkdir()
            (projects / 'alpha').mkdir()
            (projects / 'middle').mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                load_projects_cache()

            self.assertEqual(spawn_mod._projects_cache, ['alpha', 'middle', 'zebra'])

    def test_excludes_hidden_directories(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir)
            (projects / 'visible').mkdir()
            (projects / '.hidden').mkdir()
            (projects / '.git').mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                load_projects_cache()

            self.assertEqual(spawn_mod._projects_cache, ['visible'])

    def test_excludes_files(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir)
            (projects / 'real-project').mkdir()
            (projects / 'README.md').write_text('hello')

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                load_projects_cache()

            self.assertEqual(spawn_mod._projects_cache, ['real-project'])

    def test_handles_oserror_gracefully(self):
        with patch.object(spawn_mod, 'PROJECTS_DIR') as mock_dir:
            mock_dir.iterdir.side_effect = OSError('permission denied')
            load_projects_cache()

        self.assertEqual(spawn_mod._projects_cache, [])

    def test_empty_directory(self):
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir)

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                load_projects_cache()

            self.assertEqual(spawn_mod._projects_cache, [])

    def test_missing_directory(self):
        """PROJECTS_DIR that doesn't exist returns empty list, no crash."""
        missing = Path('/tmp/amc-test-nonexistent-projects-dir')
        assert not missing.exists(), 'test precondition: path must not exist'

        with patch.object(spawn_mod, 'PROJECTS_DIR', missing):
            load_projects_cache()

        self.assertEqual(spawn_mod._projects_cache, [])

    def test_only_hidden_dirs_returns_empty(self):
        """Directory with only hidden subdirectories returns empty list."""
        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir)
            (projects / '.config').mkdir()
            (projects / '.cache').mkdir()
            (projects / '.local').mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
                load_projects_cache()

            self.assertEqual(spawn_mod._projects_cache, [])

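The cache-loading behavior tested above (sorted visible directories, files and hidden entries excluded, any `OSError` collapsing to an empty list) can be sketched as follows — `list_projects` is a hypothetical pure-function form of `load_projects_cache`, which in the real module writes into `spawn_mod._projects_cache` instead of returning:

```python
from pathlib import Path


def list_projects(projects_dir: Path) -> list[str]:
    """Sorted visible project directory names; empty on any filesystem error."""
    try:
        return sorted(
            p.name
            for p in projects_dir.iterdir()
            if p.is_dir() and not p.name.startswith(".")  # skip files, hidden dirs
        )
    except OSError:  # missing dir, permission denied, etc.
        return []
```

Catching `OSError` covers both the mocked `iterdir` failure and the missing-directory case, since `FileNotFoundError` and `PermissionError` are `OSError` subclasses.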
# ---------------------------------------------------------------------------
# Rate limiting
# ---------------------------------------------------------------------------


class TestRateLimiting(unittest.TestCase):
    """Tests for per-project rate limiting in _handle_spawn."""

    def _make_handler(self, project, agent_type='claude', token='test-token'):
        body = {'project': project, 'agent_type': agent_type}
        return DummySpawnHandler(body, auth_header=f'Bearer {token}')

    def test_first_spawn_allowed(self):
        """First spawn for a project should not be rate-limited."""
        from amc_server.spawn_config import _spawn_timestamps
        _spawn_timestamps.clear()

        handler = self._make_handler('fresh-project')

        with tempfile.TemporaryDirectory() as tmpdir:
            projects = Path(tmpdir) / 'projects'
            projects.mkdir()
            (projects / 'fresh-project').mkdir()

            with patch.object(spawn_mod, 'PROJECTS_DIR', projects), \
                 patch.object(spawn_mod, 'validate_auth_token', return_value=True), \
                 patch.object(handler, '_spawn_agent_in_project_tab', return_value={'ok': True}), \
                 patch.object(handler, '_validate_spawn_params', return_value={
                     'resolved_path': projects / 'fresh-project',
                 }):
                handler._handle_spawn()

        self.assertEqual(len(handler.sent), 1)
        code, payload = handler.sent[0]
        self.assertEqual(code, 200)
        self.assertTrue(payload['ok'])

        _spawn_timestamps.clear()

    def test_rapid_spawn_same_project_rejected(self):
        """Spawning the same project within cooldown returns 429."""
        from amc_server.spawn_config import _spawn_timestamps
        _spawn_timestamps.clear()
        # Pretend we just spawned this project
        _spawn_timestamps['rapid-project'] = time.monotonic()

        handler = self._make_handler('rapid-project')

        with patch.object(spawn_mod, 'validate_auth_token', return_value=True), \
             patch.object(handler, '_validate_spawn_params', return_value={
                 'resolved_path': Path('/fake/rapid-project'),
             }):
            handler._handle_spawn()

        self.assertEqual(len(handler.sent), 1)
        code, payload = handler.sent[0]
        self.assertEqual(code, 429)
        self.assertEqual(payload['code'], 'RATE_LIMITED')

        _spawn_timestamps.clear()

    def test_spawn_different_project_allowed(self):
        """Spawning a different project while one is on cooldown succeeds."""
        from amc_server.spawn_config import _spawn_timestamps
        _spawn_timestamps.clear()
        _spawn_timestamps['project-a'] = time.monotonic()

        handler = self._make_handler('project-b')
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=True), \
|
||||||
|
patch.object(handler, '_validate_spawn_params', return_value={
|
||||||
|
'resolved_path': Path('/fake/project-b'),
|
||||||
|
}), \
|
||||||
|
patch.object(handler, '_spawn_agent_in_project_tab', return_value={'ok': True}):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
self.assertTrue(payload['ok'])
|
||||||
|
|
||||||
|
_spawn_timestamps.clear()
|
||||||
|
|
||||||
|
def test_spawn_after_cooldown_allowed(self):
|
||||||
|
"""Spawning the same project after cooldown expires succeeds."""
|
||||||
|
from amc_server.spawn_config import _spawn_timestamps, SPAWN_COOLDOWN_SEC
|
||||||
|
_spawn_timestamps.clear()
|
||||||
|
# Set timestamp far enough in the past
|
||||||
|
_spawn_timestamps['cooled-project'] = time.monotonic() - SPAWN_COOLDOWN_SEC - 1
|
||||||
|
|
||||||
|
handler = self._make_handler('cooled-project')
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=True), \
|
||||||
|
patch.object(handler, '_validate_spawn_params', return_value={
|
||||||
|
'resolved_path': Path('/fake/cooled-project'),
|
||||||
|
}), \
|
||||||
|
patch.object(handler, '_spawn_agent_in_project_tab', return_value={'ok': True}):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
self.assertTrue(payload['ok'])
|
||||||
|
|
||||||
|
_spawn_timestamps.clear()
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Auth token validation
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestAuthToken(unittest.TestCase):
|
||||||
|
"""Tests for auth token validation in _handle_spawn."""
|
||||||
|
|
||||||
|
def test_valid_bearer_token_accepted(self):
|
||||||
|
handler = DummySpawnHandler(
|
||||||
|
{'project': 'test', 'agent_type': 'claude'},
|
||||||
|
auth_header='Bearer valid-token',
|
||||||
|
)
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=True), \
|
||||||
|
patch.object(handler, '_validate_spawn_params', return_value={
|
||||||
|
'resolved_path': Path('/fake/test'),
|
||||||
|
}), \
|
||||||
|
patch.object(handler, '_spawn_agent_in_project_tab', return_value={'ok': True}):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
code, _ = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
|
||||||
|
def test_missing_auth_header_rejected(self):
|
||||||
|
handler = DummySpawnHandler(
|
||||||
|
{'project': 'test', 'agent_type': 'claude'},
|
||||||
|
auth_header='',
|
||||||
|
)
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=False):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 401)
|
||||||
|
self.assertEqual(payload['code'], 'UNAUTHORIZED')
|
||||||
|
|
||||||
|
def test_wrong_token_rejected(self):
|
||||||
|
handler = DummySpawnHandler(
|
||||||
|
{'project': 'test', 'agent_type': 'claude'},
|
||||||
|
auth_header='Bearer wrong-token',
|
||||||
|
)
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=False):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 401)
|
||||||
|
self.assertEqual(payload['code'], 'UNAUTHORIZED')
|
||||||
|
|
||||||
|
def test_malformed_bearer_rejected(self):
|
||||||
|
handler = DummySpawnHandler(
|
||||||
|
{'project': 'test', 'agent_type': 'claude'},
|
||||||
|
auth_header='NotBearer sometoken',
|
||||||
|
)
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=False):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 401)
|
||||||
|
self.assertEqual(payload['code'], 'UNAUTHORIZED')
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# _handle_spawn JSON parsing
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestHandleSpawnParsing(unittest.TestCase):
|
||||||
|
"""Tests for JSON body parsing in _handle_spawn."""
|
||||||
|
|
||||||
|
def test_invalid_json_body_returns_400(self):
|
||||||
|
handler = DummySpawnHandler.__new__(DummySpawnHandler)
|
||||||
|
handler.headers = {
|
||||||
|
'Content-Length': '11',
|
||||||
|
'Authorization': 'Bearer tok',
|
||||||
|
}
|
||||||
|
handler.rfile = io.BytesIO(b'not json!!!')
|
||||||
|
handler.sent = []
|
||||||
|
handler.errors = []
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=True):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
self.assertEqual(handler.errors, [(400, 'Invalid JSON body')])
|
||||||
|
|
||||||
|
def test_non_dict_body_returns_400(self):
|
||||||
|
raw = b'"just a string"'
|
||||||
|
handler = DummySpawnHandler.__new__(DummySpawnHandler)
|
||||||
|
handler.headers = {
|
||||||
|
'Content-Length': str(len(raw)),
|
||||||
|
'Authorization': 'Bearer tok',
|
||||||
|
}
|
||||||
|
handler.rfile = io.BytesIO(raw)
|
||||||
|
handler.sent = []
|
||||||
|
handler.errors = []
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=True):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
self.assertEqual(handler.errors, [(400, 'Invalid JSON body')])
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# _handle_spawn lock contention
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestHandleSpawnLock(unittest.TestCase):
|
||||||
|
"""Tests for spawn lock behavior."""
|
||||||
|
|
||||||
|
def test_lock_timeout_returns_503(self):
|
||||||
|
"""When lock can't be acquired within timeout, returns 503."""
|
||||||
|
handler = DummySpawnHandler(
|
||||||
|
{'project': 'test', 'agent_type': 'claude'},
|
||||||
|
auth_header='Bearer tok',
|
||||||
|
)
|
||||||
|
|
||||||
|
mock_lock = MagicMock()
|
||||||
|
mock_lock.acquire.return_value = False # Simulate timeout
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'validate_auth_token', return_value=True), \
|
||||||
|
patch.object(handler, '_validate_spawn_params', return_value={
|
||||||
|
'resolved_path': Path('/fake/test'),
|
||||||
|
}), \
|
||||||
|
patch.object(spawn_mod, '_spawn_lock', mock_lock):
|
||||||
|
handler._handle_spawn()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 503)
|
||||||
|
self.assertEqual(payload['code'], 'SERVER_BUSY')
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# _handle_projects / _handle_projects_refresh
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestHandleProjects(unittest.TestCase):
|
||||||
|
|
||||||
|
def test_handle_projects_returns_cached_list(self):
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
original = spawn_mod._projects_cache
|
||||||
|
|
||||||
|
spawn_mod._projects_cache = ['alpha', 'beta']
|
||||||
|
try:
|
||||||
|
handler._handle_projects()
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
self.assertTrue(payload['ok'])
|
||||||
|
self.assertEqual(payload['projects'], ['alpha', 'beta'])
|
||||||
|
finally:
|
||||||
|
spawn_mod._projects_cache = original
|
||||||
|
|
||||||
|
def test_handle_projects_returns_empty_list_gracefully(self):
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
original = spawn_mod._projects_cache
|
||||||
|
|
||||||
|
spawn_mod._projects_cache = []
|
||||||
|
try:
|
||||||
|
handler._handle_projects()
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
self.assertTrue(payload['ok'])
|
||||||
|
self.assertEqual(payload['projects'], [])
|
||||||
|
finally:
|
||||||
|
spawn_mod._projects_cache = original
|
||||||
|
|
||||||
|
def test_handle_projects_refresh_missing_dir_returns_empty(self):
|
||||||
|
"""Refreshing with a missing PROJECTS_DIR returns empty list, no crash."""
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
missing = Path('/tmp/amc-test-nonexistent-projects-dir')
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', missing):
|
||||||
|
handler._handle_projects_refresh()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
self.assertTrue(payload['ok'])
|
||||||
|
self.assertEqual(payload['projects'], [])
|
||||||
|
|
||||||
|
def test_handle_projects_refresh_reloads_cache(self):
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects = Path(tmpdir)
|
||||||
|
(projects / 'new-project').mkdir()
|
||||||
|
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
handler._handle_projects_refresh()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
self.assertEqual(payload['projects'], ['new-project'])
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# _handle_health
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestHandleHealth(unittest.TestCase):
|
||||||
|
|
||||||
|
def test_health_with_zellij_available(self):
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
handler._check_zellij_session_exists = MagicMock(return_value=True)
|
||||||
|
|
||||||
|
handler._handle_health()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
self.assertTrue(payload['ok'])
|
||||||
|
self.assertTrue(payload['zellij_available'])
|
||||||
|
|
||||||
|
def test_health_with_zellij_unavailable(self):
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
handler._check_zellij_session_exists = MagicMock(return_value=False)
|
||||||
|
|
||||||
|
handler._handle_health()
|
||||||
|
|
||||||
|
code, payload = handler.sent[0]
|
||||||
|
self.assertEqual(code, 200)
|
||||||
|
self.assertFalse(payload['zellij_available'])
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Special characters in project names (bd-14p)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestSpecialCharacterValidation(unittest.TestCase):
|
||||||
|
"""Verify project names with special characters are handled correctly."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.handler = DummySpawnHandler()
|
||||||
|
|
||||||
|
def _make_project(self, tmpdir, name):
|
||||||
|
"""Create a project directory with the given name and patch PROJECTS_DIR."""
|
||||||
|
projects = Path(tmpdir) / 'projects'
|
||||||
|
projects.mkdir(exist_ok=True)
|
||||||
|
project_dir = projects / name
|
||||||
|
project_dir.mkdir()
|
||||||
|
return projects, project_dir
|
||||||
|
|
||||||
|
# --- safe characters that should work ---
|
||||||
|
|
||||||
|
def test_hyphenated_name(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'my-project')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('my-project', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_underscored_name(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'my_project')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('my_project', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_numeric_name(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'project2')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('project2', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_name_with_spaces(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'my project')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('my project', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_name_with_dots(self):
|
||||||
|
"""Single dots are fine, only '..' is rejected."""
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'my.project')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('my.project', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_name_with_parentheses(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'project (copy)')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('project (copy)', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_name_with_at_sign(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, '@scope-pkg')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('@scope-pkg', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_name_with_plus(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'c++project')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('c++project', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_name_with_hash(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'c#project')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('c#project', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
# --- control characters: should be rejected ---
|
||||||
|
|
||||||
|
def test_null_byte_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params('project\x00evil', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'INVALID_PROJECT')
|
||||||
|
|
||||||
|
def test_newline_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params('project\nevil', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'INVALID_PROJECT')
|
||||||
|
|
||||||
|
def test_tab_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params('project\tevil', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'INVALID_PROJECT')
|
||||||
|
|
||||||
|
def test_carriage_return_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params('project\revil', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'INVALID_PROJECT')
|
||||||
|
|
||||||
|
def test_escape_char_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params('project\x1bevil', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'INVALID_PROJECT')
|
||||||
|
|
||||||
|
def test_bell_char_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params('project\x07evil', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'INVALID_PROJECT')
|
||||||
|
|
||||||
|
def test_del_char_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params('project\x7fevil', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'INVALID_PROJECT')
|
||||||
|
|
||||||
|
# --- whitespace edge cases ---
|
||||||
|
|
||||||
|
def test_whitespace_only_space(self):
|
||||||
|
result = self.handler._validate_spawn_params(' ', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'MISSING_PROJECT')
|
||||||
|
|
||||||
|
def test_whitespace_only_multiple_spaces(self):
|
||||||
|
result = self.handler._validate_spawn_params(' ', 'claude')
|
||||||
|
self.assertEqual(result['code'], 'MISSING_PROJECT')
|
||||||
|
|
||||||
|
# --- non-string project name ---
|
||||||
|
|
||||||
|
def test_integer_project_name_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params(42, 'claude')
|
||||||
|
self.assertEqual(result['code'], 'MISSING_PROJECT')
|
||||||
|
|
||||||
|
def test_list_project_name_rejected(self):
|
||||||
|
result = self.handler._validate_spawn_params(['bad'], 'claude')
|
||||||
|
self.assertEqual(result['code'], 'MISSING_PROJECT')
|
||||||
|
|
||||||
|
# --- shell metacharacters (safe because subprocess uses list args) ---
|
||||||
|
|
||||||
|
def test_dollar_sign_in_name(self):
|
||||||
|
"""Dollar sign is safe - no shell expansion with list args."""
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, '$HOME')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('$HOME', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_semicolon_in_name(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'foo;bar')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('foo;bar', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_backtick_in_name(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'foo`bar')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('foo`bar', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
def test_pipe_in_name(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
projects, _ = self._make_project(tmpdir, 'foo|bar')
|
||||||
|
with patch.object(spawn_mod, 'PROJECTS_DIR', projects):
|
||||||
|
result = self.handler._validate_spawn_params('foo|bar', 'claude')
|
||||||
|
self.assertNotIn('error', result)
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# _sanitize_pane_name (bd-14p)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestSanitizePaneName(unittest.TestCase):
|
||||||
|
"""Tests for Zellij pane name sanitization."""
|
||||||
|
|
||||||
|
def test_simple_name_unchanged(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-myproject'), 'claude-myproject')
|
||||||
|
|
||||||
|
def test_name_with_spaces_preserved(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-my project'), 'claude-my project')
|
||||||
|
|
||||||
|
def test_multiple_spaces_collapsed(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-my project'), 'claude-my project')
|
||||||
|
|
||||||
|
def test_quotes_replaced(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-"quoted"'), 'claude-_quoted_')
|
||||||
|
|
||||||
|
def test_single_quotes_replaced(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name("claude-it's"), 'claude-it_s')
|
||||||
|
|
||||||
|
def test_backtick_replaced(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-`cmd`'), 'claude-_cmd_')
|
||||||
|
|
||||||
|
def test_control_chars_replaced(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-\x07bell'), 'claude-_bell')
|
||||||
|
|
||||||
|
def test_tab_replaced(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-\ttab'), 'claude-_tab')
|
||||||
|
|
||||||
|
def test_truncated_at_64_chars(self):
|
||||||
|
long_name = 'a' * 100
|
||||||
|
result = _sanitize_pane_name(long_name)
|
||||||
|
self.assertEqual(len(result), 64)
|
||||||
|
|
||||||
|
def test_empty_returns_unnamed(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name(''), 'unnamed')
|
||||||
|
|
||||||
|
def test_only_control_chars_returns_underscores(self):
|
||||||
|
result = _sanitize_pane_name('\x01\x02\x03')
|
||||||
|
self.assertEqual(result, '___')
|
||||||
|
|
||||||
|
def test_unicode_preserved(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-cafe'), 'claude-cafe')
|
||||||
|
|
||||||
|
def test_hyphens_and_underscores_preserved(self):
|
||||||
|
self.assertEqual(_sanitize_pane_name('claude-my_project-v2'), 'claude-my_project-v2')
|
||||||
|
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Special chars in spawn command construction (bd-14p)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
|
||||||
|
class TestSpawnWithSpecialChars(unittest.TestCase):
|
||||||
|
"""Verify special character project names pass through to Zellij correctly."""
|
||||||
|
|
||||||
|
def test_space_in_project_name_passes_to_zellij(self):
|
||||||
|
"""Project with spaces should be passed correctly to subprocess args."""
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
|
||||||
|
with patch.object(handler, '_check_zellij_session_exists', return_value=True), \
|
||||||
|
patch.object(handler, '_wait_for_session_file', return_value=True), \
|
||||||
|
patch('amc_server.mixins.spawn.subprocess.run') as mock_run:
|
||||||
|
mock_run.return_value = MagicMock(returncode=0, stderr='')
|
||||||
|
result = handler._spawn_agent_in_project_tab(
|
||||||
|
'my project', Path('/fake/my project'), 'claude', 'spawn-123',
|
||||||
|
)
|
||||||
|
|
||||||
|
self.assertTrue(result['ok'])
|
||||||
|
# Check the tab creation call contains the name with spaces
|
||||||
|
tab_call = mock_run.call_args_list[0]
|
||||||
|
tab_args = tab_call[0][0]
|
||||||
|
self.assertIn('my project', tab_args)
|
||||||
|
|
||||||
|
# Check pane name was sanitized (spaces preserved but other chars cleaned)
|
||||||
|
pane_call = mock_run.call_args_list[1]
|
||||||
|
pane_args = pane_call[0][0]
|
||||||
|
name_idx = pane_args.index('--name') + 1
|
||||||
|
self.assertEqual(pane_args[name_idx], 'claude-my project')
|
||||||
|
|
||||||
|
def test_quotes_in_project_name_sanitized_in_pane_name(self):
|
||||||
|
"""Quotes in project names should be stripped from pane names."""
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
|
||||||
|
with patch.object(handler, '_check_zellij_session_exists', return_value=True), \
|
||||||
|
patch.object(handler, '_wait_for_session_file', return_value=True), \
|
||||||
|
patch('amc_server.mixins.spawn.subprocess.run') as mock_run:
|
||||||
|
mock_run.return_value = MagicMock(returncode=0, stderr='')
|
||||||
|
handler._spawn_agent_in_project_tab(
|
||||||
|
'it\'s-a-project', Path('/fake/its-a-project'), 'claude', 'spawn-123',
|
||||||
|
)
|
||||||
|
|
||||||
|
pane_call = mock_run.call_args_list[1]
|
||||||
|
pane_args = pane_call[0][0]
|
||||||
|
name_idx = pane_args.index('--name') + 1
|
||||||
|
# Single quote should be replaced with underscore
|
||||||
|
self.assertNotIn("'", pane_args[name_idx])
|
||||||
|
self.assertEqual(pane_args[name_idx], 'claude-it_s-a-project')
|
||||||
|
|
||||||
|
def test_unicode_project_name_in_pane(self):
|
||||||
|
"""Unicode project names should pass through to pane names."""
|
||||||
|
handler = DummySpawnHandler()
|
||||||
|
|
||||||
|
with patch.object(handler, '_check_zellij_session_exists', return_value=True), \
|
||||||
|
patch.object(handler, '_wait_for_session_file', return_value=True), \
|
||||||
|
patch('amc_server.mixins.spawn.subprocess.run') as mock_run:
|
||||||
|
mock_run.return_value = MagicMock(returncode=0, stderr='')
|
||||||
|
result = handler._spawn_agent_in_project_tab(
|
||||||
|
'cafe', Path('/fake/cafe'), 'claude', 'spawn-123',
|
||||||
|
)
|
||||||
|
|
||||||
|
self.assertTrue(result['ok'])
|
||||||
|
pane_call = mock_run.call_args_list[1]
|
||||||
|
pane_args = pane_call[0][0]
|
||||||
|
name_idx = pane_args.index('--name') + 1
|
||||||
|
self.assertEqual(pane_args[name_idx], 'claude-cafe')
|
||||||
|
|
||||||
|
|
||||||
|
if __name__ == '__main__':
|
||||||
|
unittest.main()
|
||||||
@@ -1,21 +1,30 @@
+import json
 import subprocess
+import tempfile
+import time
 import unittest
+from pathlib import Path
 from unittest.mock import patch
 
 import amc_server.mixins.state as state_mod
 from amc_server.mixins.state import StateMixin
+from amc_server.mixins.parsing import SessionParsingMixin
+from amc_server.mixins.discovery import SessionDiscoveryMixin
 
 
-class DummyStateHandler(StateMixin):
+class DummyStateHandler(StateMixin, SessionParsingMixin, SessionDiscoveryMixin):
     pass
 
 
-class StateMixinTests(unittest.TestCase):
-    def test_get_active_zellij_sessions_uses_resolved_binary_and_parses_output(self):
-        handler = DummyStateHandler()
+class TestGetActiveZellijSessions(unittest.TestCase):
+    """Tests for _get_active_zellij_sessions edge cases."""
+
+    def setUp(self):
         state_mod._zellij_cache["sessions"] = None
         state_mod._zellij_cache["expires"] = 0
+
+    def test_parses_output_with_metadata(self):
+        handler = DummyStateHandler()
         completed = subprocess.CompletedProcess(
             args=[],
             returncode=0,
@@ -23,15 +32,617 @@ class StateMixinTests(unittest.TestCase):
             stderr="",
         )
 
-        with patch.object(state_mod, "ZELLIJ_BIN", "/opt/homebrew/bin/zellij"), patch(
-            "amc_server.mixins.state.subprocess.run", return_value=completed
-        ) as run_mock:
+        with patch.object(state_mod, "ZELLIJ_BIN", "/opt/homebrew/bin/zellij"), \
+                patch("amc_server.mixins.state.subprocess.run", return_value=completed) as run_mock:
             sessions = handler._get_active_zellij_sessions()
 
         self.assertEqual(sessions, {"infra", "work"})
         args = run_mock.call_args.args[0]
         self.assertEqual(args, ["/opt/homebrew/bin/zellij", "list-sessions", "--no-formatting"])
 
+    def test_empty_output_returns_empty_set(self):
+        handler = DummyStateHandler()
+        completed = subprocess.CompletedProcess(args=[], returncode=0, stdout="", stderr="")
+
+        with patch("amc_server.mixins.state.subprocess.run", return_value=completed):
+            sessions = handler._get_active_zellij_sessions()
+
+        self.assertEqual(sessions, set())
+
+    def test_whitespace_only_lines_ignored(self):
+        handler = DummyStateHandler()
+        completed = subprocess.CompletedProcess(
+            args=[], returncode=0, stdout="session1\n \n\nsession2\n", stderr=""
+        )
+
+        with patch("amc_server.mixins.state.subprocess.run", return_value=completed):
+            sessions = handler._get_active_zellij_sessions()
+
+        self.assertEqual(sessions, {"session1", "session2"})
+
+    def test_nonzero_exit_returns_none(self):
+        handler = DummyStateHandler()
+        completed = subprocess.CompletedProcess(args=[], returncode=1, stdout="", stderr="error")
+
+        with patch("amc_server.mixins.state.subprocess.run", return_value=completed):
+            sessions = handler._get_active_zellij_sessions()
+
+        self.assertIsNone(sessions)
+
+    def test_timeout_returns_none(self):
+        handler = DummyStateHandler()
+
+        with patch("amc_server.mixins.state.subprocess.run",
+                   side_effect=subprocess.TimeoutExpired("cmd", 2)):
+            sessions = handler._get_active_zellij_sessions()
+
+        self.assertIsNone(sessions)
+
+    def test_file_not_found_returns_none(self):
+        handler = DummyStateHandler()
+
+        with patch("amc_server.mixins.state.subprocess.run", side_effect=FileNotFoundError()):
+            sessions = handler._get_active_zellij_sessions()
+
+        self.assertIsNone(sessions)
+
+    def test_cache_used_when_fresh(self):
+        handler = DummyStateHandler()
+        state_mod._zellij_cache["sessions"] = {"cached"}
+        state_mod._zellij_cache["expires"] = time.time() + 100
+
+        with patch("amc_server.mixins.state.subprocess.run") as mock_run:
+            sessions = handler._get_active_zellij_sessions()
+
+        mock_run.assert_not_called()
+        self.assertEqual(sessions, {"cached"})
+
+
+class TestIsSessionDead(unittest.TestCase):
+    """Tests for _is_session_dead edge cases."""
+
+    def setUp(self):
+        self.handler = DummyStateHandler()
+
+    def test_starting_session_not_dead(self):
+        session = {"status": "starting", "agent": "claude", "zellij_session": "s"}
+        self.assertFalse(self.handler._is_session_dead(session, {"s"}, set()))
+
+    def test_claude_without_zellij_session_is_dead(self):
+        session = {"status": "active", "agent": "claude", "zellij_session": ""}
+        self.assertTrue(self.handler._is_session_dead(session, set(), set()))
+
+    def test_claude_with_missing_zellij_session_is_dead(self):
+        session = {"status": "active", "agent": "claude", "zellij_session": "deleted"}
+        active_zellij = {"other_session"}
+        self.assertTrue(self.handler._is_session_dead(session, active_zellij, set()))
+
+    def test_claude_with_active_zellij_session_not_dead(self):
+        session = {"status": "active", "agent": "claude", "zellij_session": "existing"}
+        active_zellij = {"existing", "other"}
+        self.assertFalse(self.handler._is_session_dead(session, active_zellij, set()))
+
+    def test_claude_unknown_zellij_status_assumes_alive(self):
+        # When we can't query zellij (None), assume alive to avoid false positives
+        session = {"status": "active", "agent": "claude", "zellij_session": "unknown"}
+        self.assertFalse(self.handler._is_session_dead(session, None, set()))
+
+    def test_codex_without_transcript_path_is_dead(self):
+        session = {"status": "active", "agent": "codex", "transcript_path": ""}
+        self.assertTrue(self.handler._is_session_dead(session, None, set()))
+
+    def test_codex_with_active_transcript_not_dead(self):
+        session = {"status": "active", "agent": "codex", "transcript_path": "/path/to/file.jsonl"}
+        active_files = {"/path/to/file.jsonl"}
|
||||||
|
self.assertFalse(self.handler._is_session_dead(session, None, active_files))
|
||||||
|
|
||||||
|
def test_codex_without_active_transcript_checks_lsof(self):
|
||||||
|
session = {"status": "active", "agent": "codex", "transcript_path": "/path/to/file.jsonl"}
|
||||||
|
|
||||||
|
# Simulate lsof finding the file open
|
||||||
|
with patch.object(self.handler, "_is_file_open", return_value=True):
|
||||||
|
result = self.handler._is_session_dead(session, None, set())
|
||||||
|
self.assertFalse(result)
|
||||||
|
|
||||||
|
# Simulate lsof not finding the file
|
||||||
|
with patch.object(self.handler, "_is_file_open", return_value=False):
|
||||||
|
result = self.handler._is_session_dead(session, None, set())
|
||||||
|
self.assertTrue(result)
|
||||||
|
|
||||||
|
def test_unknown_agent_assumes_alive(self):
|
||||||
|
session = {"status": "active", "agent": "unknown_agent"}
|
||||||
|
self.assertFalse(self.handler._is_session_dead(session, None, set()))
|
||||||
|
|
||||||
|
|
||||||
|
class TestIsFileOpen(unittest.TestCase):
|
||||||
|
"""Tests for _is_file_open edge cases."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.handler = DummyStateHandler()
|
||||||
|
|
||||||
|
def test_lsof_finds_pid_returns_true(self):
|
||||||
|
completed = subprocess.CompletedProcess(args=[], returncode=0, stdout="12345\n", stderr="")
|
||||||
|
|
||||||
|
with patch("amc_server.mixins.state.subprocess.run", return_value=completed):
|
||||||
|
result = self.handler._is_file_open("/some/file.jsonl")
|
||||||
|
|
||||||
|
self.assertTrue(result)
|
||||||
|
|
||||||
|
def test_lsof_no_result_returns_false(self):
|
||||||
|
completed = subprocess.CompletedProcess(args=[], returncode=1, stdout="", stderr="")
|
||||||
|
|
||||||
|
with patch("amc_server.mixins.state.subprocess.run", return_value=completed):
|
||||||
|
result = self.handler._is_file_open("/some/file.jsonl")
|
||||||
|
|
||||||
|
self.assertFalse(result)
|
||||||
|
|
||||||
|
def test_lsof_timeout_returns_false(self):
|
||||||
|
with patch("amc_server.mixins.state.subprocess.run",
|
||||||
|
side_effect=subprocess.TimeoutExpired("cmd", 2)):
|
||||||
|
result = self.handler._is_file_open("/some/file.jsonl")
|
||||||
|
|
||||||
|
self.assertFalse(result)
|
||||||
|
|
||||||
|
def test_lsof_not_found_returns_false(self):
|
||||||
|
with patch("amc_server.mixins.state.subprocess.run", side_effect=FileNotFoundError()):
|
||||||
|
result = self.handler._is_file_open("/some/file.jsonl")
|
||||||
|
|
||||||
|
self.assertFalse(result)
|
||||||
|
|
||||||
|
|
||||||
|
class TestCleanupStale(unittest.TestCase):
|
||||||
|
"""Tests for _cleanup_stale edge cases."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.handler = DummyStateHandler()
|
||||||
|
|
||||||
|
def test_removes_orphan_event_logs_older_than_24h(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
|
||||||
|
# Create an orphan event log (no matching session)
|
||||||
|
orphan_log = events_dir / "orphan.jsonl"
|
||||||
|
orphan_log.write_text('{"event": "test"}\n')
|
||||||
|
# Set mtime to 25 hours ago
|
||||||
|
old_time = time.time() - (25 * 3600)
|
||||||
|
import os
|
||||||
|
os.utime(orphan_log, (old_time, old_time))
|
||||||
|
|
||||||
|
with patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
self.handler._cleanup_stale([]) # No active sessions
|
||||||
|
|
||||||
|
self.assertFalse(orphan_log.exists())
|
||||||
|
|
||||||
|
def test_keeps_orphan_event_logs_younger_than_24h(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
|
||||||
|
# Create a recent orphan event log
|
||||||
|
recent_log = events_dir / "recent.jsonl"
|
||||||
|
recent_log.write_text('{"event": "test"}\n')
|
||||||
|
|
||||||
|
with patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
self.handler._cleanup_stale([])
|
||||||
|
|
||||||
|
self.assertTrue(recent_log.exists())
|
||||||
|
|
||||||
|
def test_keeps_event_logs_with_active_session(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
|
||||||
|
# Create an old event log that HAS an active session
|
||||||
|
event_log = events_dir / "active-session.jsonl"
|
||||||
|
event_log.write_text('{"event": "test"}\n')
|
||||||
|
old_time = time.time() - (25 * 3600)
|
||||||
|
import os
|
||||||
|
os.utime(event_log, (old_time, old_time))
|
||||||
|
|
||||||
|
sessions = [{"session_id": "active-session"}]
|
||||||
|
|
||||||
|
with patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
self.handler._cleanup_stale(sessions)
|
||||||
|
|
||||||
|
self.assertTrue(event_log.exists())
|
||||||
|
|
||||||
|
def test_removes_stale_starting_sessions(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
|
||||||
|
# Create a stale "starting" session
|
||||||
|
stale_session = sessions_dir / "stale.json"
|
||||||
|
stale_session.write_text(json.dumps({"status": "starting"}))
|
||||||
|
# Set mtime to 2 hours ago (> 1 hour threshold)
|
||||||
|
old_time = time.time() - (2 * 3600)
|
||||||
|
import os
|
||||||
|
os.utime(stale_session, (old_time, old_time))
|
||||||
|
|
||||||
|
with patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
self.handler._cleanup_stale([])
|
||||||
|
|
||||||
|
self.assertFalse(stale_session.exists())
|
||||||
|
|
||||||
|
def test_keeps_stale_active_sessions(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
|
||||||
|
# Create an old "active" session (should NOT be deleted)
|
||||||
|
active_session = sessions_dir / "active.json"
|
||||||
|
active_session.write_text(json.dumps({"status": "active"}))
|
||||||
|
old_time = time.time() - (2 * 3600)
|
||||||
|
import os
|
||||||
|
os.utime(active_session, (old_time, old_time))
|
||||||
|
|
||||||
|
with patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
self.handler._cleanup_stale([])
|
||||||
|
|
||||||
|
self.assertTrue(active_session.exists())
|
||||||
|
|
||||||
|
|
||||||
|
class TestCollectSessions(unittest.TestCase):
|
||||||
|
"""Tests for _collect_sessions edge cases."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.handler = DummyStateHandler()
|
||||||
|
|
||||||
|
def test_invalid_json_session_file_skipped(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
# Create an invalid JSON file
|
||||||
|
bad_file = sessions_dir / "bad.json"
|
||||||
|
bad_file.write_text("not json")
|
||||||
|
|
||||||
|
# Create a valid session
|
||||||
|
good_file = sessions_dir / "good.json"
|
||||||
|
good_file.write_text(json.dumps({
|
||||||
|
"session_id": "good",
|
||||||
|
"agent": "claude",
|
||||||
|
"status": "active",
|
||||||
|
"last_event_at": "2024-01-01T00:00:00Z",
|
||||||
|
}))
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(self.handler, "_discover_active_codex_sessions"), \
|
||||||
|
patch.object(self.handler, "_get_active_zellij_sessions", return_value=None), \
|
||||||
|
patch.object(self.handler, "_get_active_transcript_files", return_value=set()), \
|
||||||
|
patch.object(self.handler, "_get_context_usage_for_session", return_value=None), \
|
||||||
|
patch.object(self.handler, "_get_conversation_mtime", return_value=None):
|
||||||
|
sessions = self.handler._collect_sessions()
|
||||||
|
|
||||||
|
# Should only get the good session
|
||||||
|
self.assertEqual(len(sessions), 1)
|
||||||
|
self.assertEqual(sessions[0]["session_id"], "good")
|
||||||
|
|
||||||
|
def test_non_dict_session_file_skipped(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
# Create a file with array instead of dict
|
||||||
|
array_file = sessions_dir / "array.json"
|
||||||
|
array_file.write_text("[1, 2, 3]")
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(self.handler, "_discover_active_codex_sessions"), \
|
||||||
|
patch.object(self.handler, "_get_active_zellij_sessions", return_value=None), \
|
||||||
|
patch.object(self.handler, "_get_active_transcript_files", return_value=set()), \
|
||||||
|
patch.object(self.handler, "_get_context_usage_for_session", return_value=None), \
|
||||||
|
patch.object(self.handler, "_get_conversation_mtime", return_value=None):
|
||||||
|
sessions = self.handler._collect_sessions()
|
||||||
|
|
||||||
|
self.assertEqual(len(sessions), 0)
|
||||||
|
|
||||||
|
def test_orphan_starting_session_deleted(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
# Create a starting session with a non-existent zellij session
|
||||||
|
orphan_file = sessions_dir / "orphan.json"
|
||||||
|
orphan_file.write_text(json.dumps({
|
||||||
|
"session_id": "orphan",
|
||||||
|
"status": "starting",
|
||||||
|
"zellij_session": "deleted_session",
|
||||||
|
}))
|
||||||
|
|
||||||
|
active_zellij = {"other_session"} # orphan's session not in here
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(self.handler, "_discover_active_codex_sessions"), \
|
||||||
|
patch.object(self.handler, "_get_active_zellij_sessions", return_value=active_zellij), \
|
||||||
|
patch.object(self.handler, "_get_active_transcript_files", return_value=set()), \
|
||||||
|
patch.object(self.handler, "_get_context_usage_for_session", return_value=None), \
|
||||||
|
patch.object(self.handler, "_get_conversation_mtime", return_value=None):
|
||||||
|
sessions = self.handler._collect_sessions()
|
||||||
|
|
||||||
|
# Orphan should be deleted and not in results
|
||||||
|
self.assertEqual(len(sessions), 0)
|
||||||
|
self.assertFalse(orphan_file.exists())
|
||||||
|
|
||||||
|
def test_sessions_sorted_by_id(self):
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
for sid in ["zebra", "alpha", "middle"]:
|
||||||
|
(sessions_dir / f"{sid}.json").write_text(json.dumps({
|
||||||
|
"session_id": sid,
|
||||||
|
"status": "active",
|
||||||
|
"last_event_at": "2024-01-01T00:00:00Z",
|
||||||
|
}))
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(state_mod, "EVENTS_DIR", events_dir), \
|
||||||
|
patch.object(self.handler, "_discover_active_codex_sessions"), \
|
||||||
|
patch.object(self.handler, "_get_active_zellij_sessions", return_value=None), \
|
||||||
|
patch.object(self.handler, "_get_active_transcript_files", return_value=set()), \
|
||||||
|
patch.object(self.handler, "_get_context_usage_for_session", return_value=None), \
|
||||||
|
patch.object(self.handler, "_get_conversation_mtime", return_value=None):
|
||||||
|
sessions = self.handler._collect_sessions()
|
||||||
|
|
||||||
|
ids = [s["session_id"] for s in sessions]
|
||||||
|
self.assertEqual(ids, ["alpha", "middle", "zebra"])
|
||||||
|
|
||||||
|
|
||||||
|
class TestDedupeSamePaneSessions(unittest.TestCase):
|
||||||
|
"""Tests for _dedupe_same_pane_sessions (--resume orphan handling)."""
|
||||||
|
|
||||||
|
def setUp(self):
|
||||||
|
self.handler = DummyStateHandler()
|
||||||
|
|
||||||
|
def test_keeps_session_with_context_usage(self):
|
||||||
|
"""When two sessions share a pane, keep the one with context_usage."""
|
||||||
|
sessions = [
|
||||||
|
{
|
||||||
|
"session_id": "orphan",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": None,
|
||||||
|
"conversation_mtime_ns": 1000,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"session_id": "real",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": {"current_tokens": 5000},
|
||||||
|
"conversation_mtime_ns": 900, # older but has context
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir)
|
||||||
|
# Create dummy session files
|
||||||
|
for s in sessions:
|
||||||
|
(sessions_dir / f"{s['session_id']}.json").write_text("{}")
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
result = self.handler._dedupe_same_pane_sessions(sessions)
|
||||||
|
|
||||||
|
self.assertEqual(len(result), 1)
|
||||||
|
self.assertEqual(result[0]["session_id"], "real")
|
||||||
|
|
||||||
|
def test_keeps_higher_mtime_when_both_have_context(self):
|
||||||
|
"""When both have context_usage, keep the one with higher mtime."""
|
||||||
|
sessions = [
|
||||||
|
{
|
||||||
|
"session_id": "older",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": {"current_tokens": 5000},
|
||||||
|
"conversation_mtime_ns": 1000,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"session_id": "newer",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": {"current_tokens": 6000},
|
||||||
|
"conversation_mtime_ns": 2000,
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir)
|
||||||
|
for s in sessions:
|
||||||
|
(sessions_dir / f"{s['session_id']}.json").write_text("{}")
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
result = self.handler._dedupe_same_pane_sessions(sessions)
|
||||||
|
|
||||||
|
self.assertEqual(len(result), 1)
|
||||||
|
self.assertEqual(result[0]["session_id"], "newer")
|
||||||
|
|
||||||
|
def test_no_dedup_when_different_panes(self):
|
||||||
|
"""Sessions in different panes should not be deduped."""
|
||||||
|
sessions = [
|
||||||
|
{
|
||||||
|
"session_id": "session1",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"session_id": "session2",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "43", # different pane
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", Path(tmpdir)):
|
||||||
|
result = self.handler._dedupe_same_pane_sessions(sessions)
|
||||||
|
|
||||||
|
self.assertEqual(len(result), 2)
|
||||||
|
|
||||||
|
def test_no_dedup_when_empty_pane_info(self):
|
||||||
|
"""Sessions without pane info should not be deduped."""
|
||||||
|
sessions = [
|
||||||
|
{"session_id": "session1", "zellij_session": "", "zellij_pane": ""},
|
||||||
|
{"session_id": "session2", "zellij_session": "", "zellij_pane": ""},
|
||||||
|
]
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", Path(tmpdir)):
|
||||||
|
result = self.handler._dedupe_same_pane_sessions(sessions)
|
||||||
|
|
||||||
|
self.assertEqual(len(result), 2)
|
||||||
|
|
||||||
|
def test_deletes_orphan_session_file(self):
|
||||||
|
"""Orphan session file should be deleted from disk."""
|
||||||
|
sessions = [
|
||||||
|
{
|
||||||
|
"session_id": "orphan",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": None,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"session_id": "real",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": {"current_tokens": 5000},
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir)
|
||||||
|
orphan_file = sessions_dir / "orphan.json"
|
||||||
|
real_file = sessions_dir / "real.json"
|
||||||
|
orphan_file.write_text("{}")
|
||||||
|
real_file.write_text("{}")
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
self.handler._dedupe_same_pane_sessions(sessions)
|
||||||
|
|
||||||
|
self.assertFalse(orphan_file.exists())
|
||||||
|
self.assertTrue(real_file.exists())
|
||||||
|
|
||||||
|
def test_handles_session_without_id(self):
|
||||||
|
"""Sessions without session_id should be skipped, not cause errors."""
|
||||||
|
sessions = [
|
||||||
|
{
|
||||||
|
"session_id": None, # Missing ID
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": None,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"session_id": "real",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": {"current_tokens": 5000},
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir)
|
||||||
|
(sessions_dir / "real.json").write_text("{}")
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
result = self.handler._dedupe_same_pane_sessions(sessions)
|
||||||
|
|
||||||
|
# Should keep real, skip the None-id session (not filter it out)
|
||||||
|
self.assertEqual(len(result), 2) # Both still in list
|
||||||
|
self.assertIn("real", [s.get("session_id") for s in result])
|
||||||
|
|
||||||
|
def test_keeps_only_best_with_three_sessions(self):
|
||||||
|
"""When 3+ sessions share a pane, keep only the best one."""
|
||||||
|
sessions = [
|
||||||
|
{
|
||||||
|
"session_id": "worst",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": None,
|
||||||
|
"conversation_mtime_ns": 1000,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"session_id": "middle",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": None,
|
||||||
|
"conversation_mtime_ns": 2000,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"session_id": "best",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": {"current_tokens": 100},
|
||||||
|
"conversation_mtime_ns": 500, # Oldest but has context
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir)
|
||||||
|
for s in sessions:
|
||||||
|
(sessions_dir / f"{s['session_id']}.json").write_text("{}")
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
result = self.handler._dedupe_same_pane_sessions(sessions)
|
||||||
|
|
||||||
|
self.assertEqual(len(result), 1)
|
||||||
|
self.assertEqual(result[0]["session_id"], "best")
|
||||||
|
|
||||||
|
def test_handles_non_numeric_mtime(self):
|
||||||
|
"""Non-numeric mtime values should be treated as 0."""
|
||||||
|
sessions = [
|
||||||
|
{
|
||||||
|
"session_id": "bad_mtime",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": None,
|
||||||
|
"conversation_mtime_ns": "not a number", # Invalid
|
||||||
|
},
|
||||||
|
{
|
||||||
|
"session_id": "good_mtime",
|
||||||
|
"zellij_session": "infra",
|
||||||
|
"zellij_pane": "42",
|
||||||
|
"context_usage": None,
|
||||||
|
"conversation_mtime_ns": 1000,
|
||||||
|
},
|
||||||
|
]
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir)
|
||||||
|
for s in sessions:
|
||||||
|
(sessions_dir / f"{s['session_id']}.json").write_text("{}")
|
||||||
|
|
||||||
|
with patch.object(state_mod, "SESSIONS_DIR", sessions_dir):
|
||||||
|
# Should not raise, should keep session with valid mtime
|
||||||
|
result = self.handler._dedupe_same_pane_sessions(sessions)
|
||||||
|
|
||||||
|
self.assertEqual(len(result), 1)
|
||||||
|
self.assertEqual(result[0]["session_id"], "good_mtime")
|
||||||
|
|
||||||
|
|
||||||
if __name__ == "__main__":
|
if __name__ == "__main__":
|
||||||
unittest.main()
|
unittest.main()
|

693 tests/test_zellij_metadata.py (new file)
@@ -0,0 +1,693 @@
"""Tests for Zellij metadata in spawned agent sessions (bd-30o).
|
||||||
|
|
||||||
|
Verifies that:
|
||||||
|
- amc-hook writes correct zellij_session and zellij_pane from env vars
|
||||||
|
- amc-hook writes spawn_id from AMC_SPAWN_ID env var
|
||||||
|
- Non-Zellij agents (missing env vars) get empty strings gracefully
|
||||||
|
- Spawn mixin passes AMC_SPAWN_ID and correct pane name to Zellij
|
||||||
|
"""
|
||||||
|
|
||||||
|
import json
|
||||||
|
import os
|
||||||
|
import tempfile
|
||||||
|
import types
|
||||||
|
import unittest
|
||||||
|
from pathlib import Path
|
||||||
|
from unittest.mock import MagicMock, patch
|
||||||
|
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
# Import hook module (no .py extension)
|
||||||
|
# ---------------------------------------------------------------------------
|
||||||
|
hook_path = Path(__file__).parent.parent / "bin" / "amc-hook"
|
||||||
|
amc_hook = types.ModuleType("amc_hook")
|
||||||
|
amc_hook.__file__ = str(hook_path)
|
||||||
|
code = compile(hook_path.read_text(), hook_path, "exec")
|
||||||
|
exec(code, amc_hook.__dict__) # noqa: S102 - loading local module
|
||||||
|
|
||||||
|
# Import spawn mixin (after hook loading - intentional)
|
||||||
|
import amc_server.mixins.spawn as spawn_mod # noqa: E402
|
||||||
|
|
||||||
|
|
||||||
|
# ===========================================================================
|
||||||
|
# Hook: Zellij env vars -> session file
|
||||||
|
# ===========================================================================
|
||||||
|
|
||||||
|
class TestHookWritesZellijMetadata(unittest.TestCase):
|
||||||
|
"""Verify amc-hook writes Zellij metadata from environment variables."""
|
||||||
|
|
||||||
|
def _run_hook(self, session_id, event, env_overrides=None, existing_session=None):
|
||||||
|
"""Helper: run the hook main() with controlled env and stdin."""
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
if existing_session:
|
||||||
|
session_file = sessions_dir / f"{session_id}.json"
|
||||||
|
session_file.write_text(json.dumps(existing_session))
|
||||||
|
|
||||||
|
hook_input = json.dumps({
|
||||||
|
"hook_event_name": event,
|
||||||
|
"session_id": session_id,
|
||||||
|
"cwd": "/test/project",
|
||||||
|
"last_assistant_message": "",
|
||||||
|
})
|
||||||
|
|
||||||
|
env = {
|
||||||
|
"CLAUDE_PROJECT_DIR": "/test/project",
|
||||||
|
}
|
||||||
|
if env_overrides:
|
||||||
|
env.update(env_overrides)
|
||||||
|
|
||||||
|
with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(amc_hook, "EVENTS_DIR", events_dir), \
|
||||||
|
patch("sys.stdin.read", return_value=hook_input), \
|
||||||
|
patch.dict(os.environ, env, clear=False):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
session_file = sessions_dir / f"{session_id}.json"
|
||||||
|
if session_file.exists():
|
||||||
|
return json.loads(session_file.read_text())
|
||||||
|
return None
|
||||||
|
|
||||||
|
def test_session_start_writes_zellij_session(self):
|
||||||
|
"""SessionStart with ZELLIJ_SESSION_NAME writes zellij_session."""
|
||||||
|
data = self._run_hook("test-sess", "SessionStart", env_overrides={
|
||||||
|
"ZELLIJ_SESSION_NAME": "infra",
|
||||||
|
"ZELLIJ_PANE_ID": "7",
|
||||||
|
})
|
||||||
|
self.assertIsNotNone(data)
|
||||||
|
self.assertEqual(data["zellij_session"], "infra")
|
||||||
|
|
||||||
|
def test_session_start_writes_zellij_pane(self):
|
||||||
|
"""SessionStart with ZELLIJ_PANE_ID writes zellij_pane."""
|
||||||
|
data = self._run_hook("test-sess", "SessionStart", env_overrides={
|
||||||
|
"ZELLIJ_SESSION_NAME": "infra",
|
||||||
|
"ZELLIJ_PANE_ID": "7",
|
||||||
|
})
|
||||||
|
self.assertIsNotNone(data)
|
||||||
|
self.assertEqual(data["zellij_pane"], "7")
|
||||||
|
|
||||||
|
def test_session_start_writes_spawn_id(self):
|
||||||
|
"""SessionStart with AMC_SPAWN_ID writes spawn_id."""
|
||||||
|
data = self._run_hook("test-sess", "SessionStart", env_overrides={
|
||||||
|
"AMC_SPAWN_ID": "abc-123-def",
|
||||||
|
"ZELLIJ_SESSION_NAME": "infra",
|
||||||
|
"ZELLIJ_PANE_ID": "3",
|
||||||
|
})
|
||||||
|
self.assertIsNotNone(data)
|
||||||
|
self.assertEqual(data["spawn_id"], "abc-123-def")
|
||||||
|
|
||||||
|
def test_no_zellij_env_vars_writes_empty_strings(self):
|
||||||
|
"""Without Zellij env vars, zellij_session and zellij_pane are empty."""
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
|
sessions_dir = Path(tmpdir) / "sessions"
|
||||||
|
events_dir = Path(tmpdir) / "events"
|
||||||
|
sessions_dir.mkdir()
|
||||||
|
events_dir.mkdir()
|
||||||
|
|
||||||
|
hook_input = json.dumps({
|
||||||
|
"hook_event_name": "SessionStart",
|
||||||
|
"session_id": "no-zellij-sess",
|
||||||
|
"cwd": "/test/project",
|
||||||
|
"last_assistant_message": "",
|
||||||
|
})
|
||||||
|
|
||||||
|
# Ensure Zellij vars are removed from env
|
||||||
|
clean_env = os.environ.copy()
|
||||||
|
clean_env.pop("ZELLIJ_SESSION_NAME", None)
|
||||||
|
clean_env.pop("ZELLIJ_PANE_ID", None)
|
||||||
|
clean_env.pop("AMC_SPAWN_ID", None)
|
||||||
|
clean_env["CLAUDE_PROJECT_DIR"] = "/test/project"
|
||||||
|
|
||||||
|
with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
|
||||||
|
patch.object(amc_hook, "EVENTS_DIR", events_dir), \
|
||||||
|
patch("sys.stdin.read", return_value=hook_input), \
|
||||||
|
patch.dict(os.environ, clean_env, clear=True):
|
||||||
|
amc_hook.main()
|
||||||
|
|
||||||
|
session_file = sessions_dir / "no-zellij-sess.json"
|
||||||
|
data = json.loads(session_file.read_text())
|
||||||
|
|
||||||
|
self.assertEqual(data["zellij_session"], "")
|
||||||
|
self.assertEqual(data["zellij_pane"], "")
|
||||||
|
self.assertNotIn("spawn_id", data)
|
||||||
|
|
||||||
|
def test_no_spawn_id_env_omits_field(self):
|
||||||
|
"""Without AMC_SPAWN_ID, spawn_id key is absent (not empty)."""
|
||||||
|
clean_env = os.environ.copy()
|
||||||
|
clean_env.pop("AMC_SPAWN_ID", None)
|
||||||
|
clean_env["ZELLIJ_SESSION_NAME"] = "infra"
|
||||||
|
clean_env["ZELLIJ_PANE_ID"] = "5"
|
||||||
|
clean_env["CLAUDE_PROJECT_DIR"] = "/test/project"
|
||||||
|
|
||||||
|
with tempfile.TemporaryDirectory() as tmpdir:
|
||||||
            sessions_dir = Path(tmpdir) / "sessions"
            events_dir = Path(tmpdir) / "events"
            sessions_dir.mkdir()
            events_dir.mkdir()

            hook_input = json.dumps({
                "hook_event_name": "SessionStart",
                "session_id": "no-spawn-sess",
                "cwd": "/test/project",
                "last_assistant_message": "",
            })

            with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
                 patch.object(amc_hook, "EVENTS_DIR", events_dir), \
                 patch("sys.stdin.read", return_value=hook_input), \
                 patch.dict(os.environ, clean_env, clear=True):
                amc_hook.main()

            session_file = sessions_dir / "no-spawn-sess.json"
            data = json.loads(session_file.read_text())

            self.assertNotIn("spawn_id", data)
            self.assertEqual(data["zellij_session"], "infra")
            self.assertEqual(data["zellij_pane"], "5")

    def test_user_prompt_submit_preserves_zellij_metadata(self):
        """UserPromptSubmit preserves Zellij metadata in session state."""
        existing = {
            "session_id": "test-sess",
            "status": "starting",
            "started_at": "2026-01-01T00:00:00+00:00",
        }
        data = self._run_hook("test-sess", "UserPromptSubmit", env_overrides={
            "ZELLIJ_SESSION_NAME": "infra",
            "ZELLIJ_PANE_ID": "12",
            "AMC_SPAWN_ID": "spawn-uuid-456",
        }, existing_session=existing)
        self.assertIsNotNone(data)
        self.assertEqual(data["zellij_session"], "infra")
        self.assertEqual(data["zellij_pane"], "12")
        self.assertEqual(data["spawn_id"], "spawn-uuid-456")

    def test_stop_event_preserves_zellij_metadata(self):
        """Stop event preserves Zellij metadata in session state."""
        existing = {
            "session_id": "test-sess",
            "status": "active",
            "started_at": "2026-01-01T00:00:00+00:00",
        }
        data = self._run_hook("test-sess", "Stop", env_overrides={
            "ZELLIJ_SESSION_NAME": "infra",
            "ZELLIJ_PANE_ID": "8",
        }, existing_session=existing)
        self.assertIsNotNone(data)
        self.assertEqual(data["zellij_session"], "infra")
        self.assertEqual(data["zellij_pane"], "8")

# ===========================================================================
# Spawn mixin: AMC_SPAWN_ID and pane name passed to Zellij
# ===========================================================================


class TestSpawnPassesZellijMetadata(unittest.TestCase):
    """Verify spawn mixin passes correct metadata to Zellij subprocess."""

    def _make_handler(self, project='test-proj', agent_type='claude'):
        """Create a DummySpawnHandler for testing."""
        from tests.test_spawn import DummySpawnHandler
        body = {'project': project, 'agent_type': agent_type}
        return DummySpawnHandler(body, auth_header='Bearer tok')

    def test_spawn_sets_amc_spawn_id_in_env(self):
        """_spawn_agent_in_project_tab passes AMC_SPAWN_ID in subprocess env."""
        handler = self._make_handler()
        handler._check_zellij_session_exists = MagicMock(return_value=True)
        handler._wait_for_session_file = MagicMock(return_value=True)

        with patch('subprocess.run') as mock_run:
            mock_run.return_value = MagicMock(returncode=0, stdout='', stderr='')
            result = handler._spawn_agent_in_project_tab(
                'test-proj', Path('/fake/test-proj'), 'claude', 'spawn-id-789',
            )

        self.assertTrue(result['ok'])
        # The second subprocess.run call is the pane spawn (first is tab creation)
        pane_call = mock_run.call_args_list[1]
        env_passed = pane_call.kwargs.get('env') or pane_call[1].get('env')
        self.assertIsNotNone(env_passed)
        self.assertEqual(env_passed['AMC_SPAWN_ID'], 'spawn-id-789')

    def test_spawn_creates_pane_with_correct_name(self):
        """Pane name follows '{agent_type}-{project}' convention."""
        handler = self._make_handler()
        handler._check_zellij_session_exists = MagicMock(return_value=True)
        handler._wait_for_session_file = MagicMock(return_value=True)

        with patch('subprocess.run') as mock_run:
            mock_run.return_value = MagicMock(returncode=0, stdout='', stderr='')
            handler._spawn_agent_in_project_tab(
                'my-project', Path('/fake/my-project'), 'claude', 'spawn-001',
            )

        pane_call = mock_run.call_args_list[1]
        cmd = pane_call.args[0] if pane_call.args else pane_call[0][0]
        # Verify --name argument
        name_idx = cmd.index('--name')
        self.assertEqual(cmd[name_idx + 1], 'claude-my-project')

    def test_spawn_creates_pane_with_codex_name(self):
        """Codex agent type produces correct pane name."""
        handler = self._make_handler(agent_type='codex')
        handler._check_zellij_session_exists = MagicMock(return_value=True)
        handler._wait_for_session_file = MagicMock(return_value=True)

        with patch('subprocess.run') as mock_run:
            mock_run.return_value = MagicMock(returncode=0, stdout='', stderr='')
            handler._spawn_agent_in_project_tab(
                'my-project', Path('/fake/my-project'), 'codex', 'spawn-002',
            )

        pane_call = mock_run.call_args_list[1]
        cmd = pane_call.args[0] if pane_call.args else pane_call[0][0]
        name_idx = cmd.index('--name')
        self.assertEqual(cmd[name_idx + 1], 'codex-my-project')

    def test_spawn_uses_correct_zellij_session(self):
        """Spawn commands target the configured ZELLIJ_SESSION."""
        handler = self._make_handler()
        handler._check_zellij_session_exists = MagicMock(return_value=True)
        handler._wait_for_session_file = MagicMock(return_value=True)

        with patch('subprocess.run') as mock_run, \
             patch.object(spawn_mod, 'ZELLIJ_SESSION', 'infra'):
            mock_run.return_value = MagicMock(returncode=0, stdout='', stderr='')
            handler._spawn_agent_in_project_tab(
                'proj', Path('/fake/proj'), 'claude', 'spawn-003',
            )

        # Both calls (tab + pane) should target --session infra
        for c in mock_run.call_args_list:
            cmd = c.args[0] if c.args else c[0][0]
            session_idx = cmd.index('--session')
            self.assertEqual(cmd[session_idx + 1], 'infra')

    def test_spawn_sets_cwd_to_project_path(self):
        """Spawned pane gets --cwd pointing to the project directory."""
        handler = self._make_handler()
        handler._check_zellij_session_exists = MagicMock(return_value=True)
        handler._wait_for_session_file = MagicMock(return_value=True)

        with patch('subprocess.run') as mock_run:
            mock_run.return_value = MagicMock(returncode=0, stdout='', stderr='')
            handler._spawn_agent_in_project_tab(
                'proj', Path('/home/user/projects/proj'), 'claude', 'spawn-004',
            )

        pane_call = mock_run.call_args_list[1]
        cmd = pane_call.args[0] if pane_call.args else pane_call[0][0]
        cwd_idx = cmd.index('--cwd')
        self.assertEqual(cmd[cwd_idx + 1], '/home/user/projects/proj')

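The flag layout these tests assert (`--session`, `--name`, `--cwd`, and the `{agent_type}-{project}` pane-name convention) can be sketched as a small pure helper. This is an illustrative sketch, not the real mixin code: the function name and the trailing agent command after `--` are assumptions; only the flags and the name convention come from the assertions above.

```python
from typing import List


def build_pane_command(session: str, agent_type: str, project: str, cwd: str) -> List[str]:
    """Assemble a zellij pane-spawn command line matching the tested flag layout.

    Hypothetical helper: the trailing agent command is an assumption; the
    --session/--name/--cwd flags and the pane-name convention mirror the tests.
    """
    return [
        "zellij", "--session", session,
        "run",
        "--name", f"{agent_type}-{project}",
        "--cwd", cwd,
        "--", agent_type,
    ]
```

For example, `build_pane_command("infra", "claude", "my-project", "/home/user/projects/my-project")` yields a command where the value after `--name` is `claude-my-project`, exactly what `test_spawn_creates_pane_with_correct_name` checks.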
# ===========================================================================
# Spawn mixin: _wait_for_session_file finds session by spawn_id
# ===========================================================================


class TestWaitForSessionFile(unittest.TestCase):
    """Verify _wait_for_session_file matches session files by spawn_id."""

    def _make_handler(self):
        from tests.test_spawn import DummySpawnHandler
        return DummySpawnHandler()

    def test_finds_session_with_matching_spawn_id(self):
        """Session file with matching spawn_id is found."""
        handler = self._make_handler()

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions = Path(tmpdir)
            session_file = sessions / "abc123.json"
            session_file.write_text(json.dumps({
                "session_id": "abc123",
                "spawn_id": "target-spawn-id",
                "zellij_session": "infra",
                "zellij_pane": "5",
            }))

            with patch.object(spawn_mod, 'SESSIONS_DIR', sessions):
                found = handler._wait_for_session_file("target-spawn-id", timeout=1.0)

            self.assertTrue(found)

    def test_ignores_session_with_different_spawn_id(self):
        """Session file with different spawn_id is not matched."""
        handler = self._make_handler()

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions = Path(tmpdir)
            session_file = sessions / "abc123.json"
            session_file.write_text(json.dumps({
                "session_id": "abc123",
                "spawn_id": "other-spawn-id",
            }))

            with patch.object(spawn_mod, 'SESSIONS_DIR', sessions):
                found = handler._wait_for_session_file("target-spawn-id", timeout=0.5)

            self.assertFalse(found)

    def test_ignores_session_without_spawn_id(self):
        """Session file without spawn_id is not matched."""
        handler = self._make_handler()

        with tempfile.TemporaryDirectory() as tmpdir:
            sessions = Path(tmpdir)
            session_file = sessions / "abc123.json"
            session_file.write_text(json.dumps({
                "session_id": "abc123",
                "zellij_session": "infra",
            }))

            with patch.object(spawn_mod, 'SESSIONS_DIR', sessions):
                found = handler._wait_for_session_file("target-spawn-id", timeout=0.5)

            self.assertFalse(found)

    def test_found_session_has_correct_zellij_metadata(self):
        """A found session file contains the expected Zellij metadata."""
        with tempfile.TemporaryDirectory() as tmpdir:
            sessions = Path(tmpdir)
            session_data = {
                "session_id": "sess-001",
                "spawn_id": "spawn-xyz",
                "zellij_session": "infra",
                "zellij_pane": "9",
                "status": "starting",
            }
            (sessions / "sess-001.json").write_text(json.dumps(session_data))

            # Verify the file directly (simulating what dashboard would do)
            for f in sessions.glob("*.json"):
                data = json.loads(f.read_text())
                if data.get("spawn_id") == "spawn-xyz":
                    self.assertEqual(data["zellij_session"], "infra")
                    self.assertEqual(data["zellij_pane"], "9")
                    return

            self.fail("Session file with spawn_id not found")

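The behaviour these tests pin down — scan the sessions directory until a file carrying the target `spawn_id` appears, or give up at the timeout — can be sketched as a simple polling loop. This is an assumed reference sketch, not the actual mixin implementation; the real method reads `SESSIONS_DIR` from module state, while this variant takes the directory as a parameter for clarity.

```python
import json
import time
from pathlib import Path


def wait_for_session_file(sessions_dir: Path, spawn_id: str,
                          timeout: float = 5.0, poll: float = 0.1) -> bool:
    """Poll sessions_dir until a *.json session file with `spawn_id` appears.

    Returns True on a match, False once `timeout` seconds elapse.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        for f in sessions_dir.glob("*.json"):
            try:
                data = json.loads(f.read_text())
            except (OSError, json.JSONDecodeError):
                continue  # partially written file; retry on the next poll
            if data.get("spawn_id") == spawn_id:
                return True
        time.sleep(poll)
    return False
```

Skipping unreadable or half-written files instead of raising is what makes polling safe while the hook is still writing the session file.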
# ===========================================================================
# End-to-end: hook + env -> session file with full metadata
# ===========================================================================


class TestEndToEndZellijMetadata(unittest.TestCase):
    """End-to-end: simulate spawned agent hook writing full metadata."""

    def test_spawned_agent_session_file_has_all_fields(self):
        """A spawned agent's session file has session, pane, and spawn_id."""
        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir) / "sessions"
            events_dir = Path(tmpdir) / "events"
            sessions_dir.mkdir()
            events_dir.mkdir()

            hook_input = json.dumps({
                "hook_event_name": "SessionStart",
                "session_id": "spawned-agent-001",
                "cwd": "/home/user/projects/amc",
                "last_assistant_message": "",
            })

            # Simulate the environment a spawned agent would see:
            # AMC_SPAWN_ID set by spawn mixin, Zellij vars set by Zellij itself
            spawn_env = os.environ.copy()
            spawn_env["AMC_SPAWN_ID"] = "spawn-uuid-e2e"
            spawn_env["ZELLIJ_SESSION_NAME"] = "infra"
            spawn_env["ZELLIJ_PANE_ID"] = "14"
            spawn_env["CLAUDE_PROJECT_DIR"] = "/home/user/projects/amc"

            with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
                 patch.object(amc_hook, "EVENTS_DIR", events_dir), \
                 patch("sys.stdin.read", return_value=hook_input), \
                 patch.dict(os.environ, spawn_env, clear=True):
                amc_hook.main()

            session_file = sessions_dir / "spawned-agent-001.json"
            self.assertTrue(session_file.exists(), "Session file not created")
            data = json.loads(session_file.read_text())

            # All metadata present and correct
            self.assertEqual(data["zellij_session"], "infra")
            self.assertEqual(data["zellij_pane"], "14")
            self.assertEqual(data["spawn_id"], "spawn-uuid-e2e")
            self.assertEqual(data["status"], "starting")
            self.assertEqual(data["project"], "amc")

    def test_non_zellij_agent_session_file_graceful(self):
        """Agent started outside Zellij has empty metadata, no crash."""
        with tempfile.TemporaryDirectory() as tmpdir:
            sessions_dir = Path(tmpdir) / "sessions"
            events_dir = Path(tmpdir) / "events"
            sessions_dir.mkdir()
            events_dir.mkdir()

            hook_input = json.dumps({
                "hook_event_name": "SessionStart",
                "session_id": "non-zellij-agent",
                "cwd": "/home/user/projects/amc",
                "last_assistant_message": "",
            })

            # No Zellij vars, no spawn ID
            clean_env = {
                "HOME": os.environ.get("HOME", "/tmp"),
                "PATH": os.environ.get("PATH", "/usr/bin"),
                "CLAUDE_PROJECT_DIR": "/home/user/projects/amc",
            }

            with patch.object(amc_hook, "SESSIONS_DIR", sessions_dir), \
                 patch.object(amc_hook, "EVENTS_DIR", events_dir), \
                 patch("sys.stdin.read", return_value=hook_input), \
                 patch.dict(os.environ, clean_env, clear=True):
                amc_hook.main()

            session_file = sessions_dir / "non-zellij-agent.json"
            self.assertTrue(session_file.exists(), "Session file not created")
            data = json.loads(session_file.read_text())

            # Metadata present but empty (graceful degradation)
            self.assertEqual(data["zellij_session"], "")
            self.assertEqual(data["zellij_pane"], "")
            self.assertNotIn("spawn_id", data)
            # Session is still valid
            self.assertEqual(data["status"], "starting")
            self.assertEqual(data["session_id"], "non-zellij-agent")

# ===========================================================================
# Pending spawn registry for Codex correlation
# ===========================================================================


class TestPendingSpawnRegistry(unittest.TestCase):
    """Verify pending spawn files enable Codex session correlation."""

    def test_write_pending_spawn_creates_file(self):
        """_write_pending_spawn creates a JSON file with correct data."""
        with tempfile.TemporaryDirectory() as tmpdir:
            pending_dir = Path(tmpdir) / "pending"
            with patch.object(spawn_mod, "PENDING_SPAWNS_DIR", pending_dir):
                spawn_mod._write_pending_spawn(
                    "test-spawn-id",
                    Path("/home/user/projects/myproject"),
                    "codex",
                )

            pending_file = pending_dir / "test-spawn-id.json"
            self.assertTrue(pending_file.exists())
            data = json.loads(pending_file.read_text())
            self.assertEqual(data["spawn_id"], "test-spawn-id")
            self.assertEqual(data["project_path"], "/home/user/projects/myproject")
            self.assertEqual(data["agent_type"], "codex")
            self.assertIn("timestamp", data)

    def test_cleanup_removes_stale_pending_spawns(self):
        """_cleanup_stale_pending_spawns removes old files."""
        import time
        with tempfile.TemporaryDirectory() as tmpdir:
            pending_dir = Path(tmpdir)
            pending_dir.mkdir(exist_ok=True)

            # Create an old file (simulate 120 seconds ago)
            old_file = pending_dir / "old-spawn.json"
            old_file.write_text('{"spawn_id": "old"}')
            old_mtime = time.time() - 120
            os.utime(old_file, (old_mtime, old_mtime))

            # Create a fresh file
            fresh_file = pending_dir / "fresh-spawn.json"
            fresh_file.write_text('{"spawn_id": "fresh"}')

            with patch.object(spawn_mod, "PENDING_SPAWNS_DIR", pending_dir), \
                 patch.object(spawn_mod, "PENDING_SPAWN_TTL", 60):
                spawn_mod._cleanup_stale_pending_spawns()

            self.assertFalse(old_file.exists(), "Old file should be deleted")
            self.assertTrue(fresh_file.exists(), "Fresh file should remain")

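The TTL-based cleanup exercised above amounts to comparing each pending file's mtime against a cutoff. A minimal sketch of that logic, assuming the same mtime-vs-TTL rule the test sets up (a file aged 120 s is stale under a 60 s TTL, a fresh one survives); the real `_cleanup_stale_pending_spawns` takes no arguments and reads module-level configuration instead:

```python
import time
from pathlib import Path

# Assumed default; the tests patch the real PENDING_SPAWN_TTL to 60 seconds.
PENDING_SPAWN_TTL = 60


def cleanup_stale_pending_spawns(pending_dir: Path,
                                 ttl: float = PENDING_SPAWN_TTL) -> int:
    """Delete pending-spawn files older than `ttl` seconds; return count removed."""
    removed = 0
    now = time.time()
    for f in pending_dir.glob("*.json"):
        try:
            if now - f.stat().st_mtime > ttl:
                f.unlink()
                removed += 1
        except OSError:
            continue  # file vanished between glob and stat/unlink; ignore
    return removed
```

Tolerating `OSError` keeps the sweep safe when a concurrent discovery pass consumes a pending file mid-iteration.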
class TestParseSessionTimestamp(unittest.TestCase):
    """Verify _parse_session_timestamp handles various formats."""

    def test_parses_iso_with_z_suffix(self):
        """Parse ISO timestamp with Z suffix."""
        from amc_server.mixins.discovery import _parse_session_timestamp
        result = _parse_session_timestamp("2026-02-27T10:00:00Z")
        self.assertIsNotNone(result)
        self.assertIsInstance(result, float)

    def test_parses_iso_with_offset(self):
        """Parse ISO timestamp with timezone offset."""
        from amc_server.mixins.discovery import _parse_session_timestamp
        result = _parse_session_timestamp("2026-02-27T10:00:00+00:00")
        self.assertIsNotNone(result)

    def test_returns_none_for_empty_string(self):
        """Empty string returns None."""
        from amc_server.mixins.discovery import _parse_session_timestamp
        self.assertIsNone(_parse_session_timestamp(""))

    def test_returns_none_for_invalid_format(self):
        """Invalid format returns None."""
        from amc_server.mixins.discovery import _parse_session_timestamp
        self.assertIsNone(_parse_session_timestamp("not-a-timestamp"))

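The contract these four tests establish — accept `Z`-suffixed and offset ISO timestamps, return an epoch float, return `None` for empty or invalid input — can be met in a few lines. A sketch of one possible implementation (assumed, not the actual `_parse_session_timestamp`):

```python
from datetime import datetime
from typing import Optional


def parse_session_timestamp(value: str) -> Optional[float]:
    """Parse an ISO-8601 timestamp into epoch seconds, or None if unparseable."""
    if not value:
        return None
    try:
        # datetime.fromisoformat rejects a bare 'Z' suffix before Python 3.11,
        # so normalise it to an explicit UTC offset first.
        return datetime.fromisoformat(value.replace("Z", "+00:00")).timestamp()
    except ValueError:
        return None
```

Returning `None` rather than raising lets callers such as the pending-spawn matcher treat an unparseable session timestamp as simply "no match".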
class TestMatchPendingSpawn(unittest.TestCase):
    """Verify _match_pending_spawn correlates Codex sessions to spawns."""

    def _make_iso_timestamp(self, offset_seconds=0):
        """Create ISO timestamp string offset from now."""
        import time
        from datetime import datetime, timezone
        ts = time.time() + offset_seconds
        return datetime.fromtimestamp(ts, tz=timezone.utc).isoformat()

    def test_matches_by_cwd_and_returns_spawn_id(self):
        """Pending spawn matched by CWD returns spawn_id and deletes file."""
        import time
        from amc_server.mixins.discovery import _match_pending_spawn

        with tempfile.TemporaryDirectory() as tmpdir:
            pending_dir = Path(tmpdir)

            # Create pending spawn 5 seconds ago
            spawn_ts = time.time() - 5
            pending_file = pending_dir / "match-me.json"
            pending_file.write_text(json.dumps({
                "spawn_id": "match-me",
                "project_path": "/home/user/projects/amc",
                "agent_type": "codex",
                "timestamp": spawn_ts,
            }))

            with patch("amc_server.mixins.discovery.PENDING_SPAWNS_DIR", pending_dir):
                # Session started now (ISO string, after spawn)
                session_ts = self._make_iso_timestamp(0)
                result = _match_pending_spawn("/home/user/projects/amc", session_ts)

            self.assertEqual(result, "match-me")
            self.assertFalse(pending_file.exists(), "Pending file should be deleted")

    def test_no_match_when_cwd_differs(self):
        """Pending spawn not matched when CWD differs."""
        import time
        from amc_server.mixins.discovery import _match_pending_spawn

        with tempfile.TemporaryDirectory() as tmpdir:
            pending_dir = Path(tmpdir)

            pending_file = pending_dir / "no-match.json"
            pending_file.write_text(json.dumps({
                "spawn_id": "no-match",
                "project_path": "/home/user/projects/other",
                "agent_type": "codex",
                "timestamp": time.time() - 5,
            }))

            with patch("amc_server.mixins.discovery.PENDING_SPAWNS_DIR", pending_dir):
                session_ts = self._make_iso_timestamp(0)
                result = _match_pending_spawn("/home/user/projects/amc", session_ts)

            self.assertIsNone(result)
            self.assertTrue(pending_file.exists(), "Pending file should remain")

    def test_no_match_when_session_older_than_spawn(self):
        """Pending spawn not matched if session started before spawn."""
        import time
        from amc_server.mixins.discovery import _match_pending_spawn

        with tempfile.TemporaryDirectory() as tmpdir:
            pending_dir = Path(tmpdir)

            # Spawn happens NOW
            spawn_ts = time.time()
            pending_file = pending_dir / "future-spawn.json"
            pending_file.write_text(json.dumps({
                "spawn_id": "future-spawn",
                "project_path": "/home/user/projects/amc",
                "agent_type": "codex",
                "timestamp": spawn_ts,
            }))

            with patch("amc_server.mixins.discovery.PENDING_SPAWNS_DIR", pending_dir):
                # Session started 10 seconds BEFORE spawn (ISO string)
                session_ts = self._make_iso_timestamp(-10)
                result = _match_pending_spawn("/home/user/projects/amc", session_ts)

            self.assertIsNone(result)

    def test_no_match_for_claude_agent_type(self):
        """Pending spawn for claude not matched to codex discovery."""
        import time
        from amc_server.mixins.discovery import _match_pending_spawn

        with tempfile.TemporaryDirectory() as tmpdir:
            pending_dir = Path(tmpdir)

            pending_file = pending_dir / "claude-spawn.json"
            pending_file.write_text(json.dumps({
                "spawn_id": "claude-spawn",
                "project_path": "/home/user/projects/amc",
                "agent_type": "claude",  # Not codex
                "timestamp": time.time() - 5,
            }))

            with patch("amc_server.mixins.discovery.PENDING_SPAWNS_DIR", pending_dir):
                session_ts = self._make_iso_timestamp(0)
                result = _match_pending_spawn("/home/user/projects/amc", session_ts)

            self.assertIsNone(result)

    def test_no_match_when_session_timestamp_unparseable(self):
        """Session with invalid timestamp format doesn't match."""
        import time
        from amc_server.mixins.discovery import _match_pending_spawn

        with tempfile.TemporaryDirectory() as tmpdir:
            pending_dir = Path(tmpdir)

            pending_file = pending_dir / "valid-spawn.json"
            pending_file.write_text(json.dumps({
                "spawn_id": "valid-spawn",
                "project_path": "/home/user/projects/amc",
                "agent_type": "codex",
                "timestamp": time.time() - 5,
            }))

            with patch("amc_server.mixins.discovery.PENDING_SPAWNS_DIR", pending_dir):
                # Invalid timestamp format should not match
                result = _match_pending_spawn("/home/user/projects/amc", "invalid-timestamp")

            self.assertIsNone(result)
            self.assertTrue(pending_file.exists(), "Pending file should remain")

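The correlation rules the class above pins down are: same project path as the session CWD, `agent_type` must be `codex`, the session must not predate the spawn, and a successful match consumes the pending file. A simplified sketch of that matcher (assumed, not the real `_match_pending_spawn`; for brevity it takes the pending directory and an epoch float rather than reading module state and parsing an ISO string):

```python
import json
from pathlib import Path
from typing import Optional


def match_pending_spawn(pending_dir: Path, cwd: str,
                        session_started_epoch: float) -> Optional[str]:
    """Correlate a discovered Codex session to a pending spawn record.

    Matches when project_path == cwd, agent_type is codex, and the session
    started at or after the spawn. A match deletes (consumes) the file.
    """
    for f in sorted(pending_dir.glob("*.json")):
        try:
            data = json.loads(f.read_text())
        except (OSError, json.JSONDecodeError):
            continue  # unreadable record; skip it
        if data.get("agent_type") != "codex":
            continue
        if data.get("project_path") != cwd:
            continue
        if session_started_epoch < float(data.get("timestamp", 0)):
            continue  # session predates the spawn request
        f.unlink(missing_ok=True)
        return data.get("spawn_id")
    return None
```

Consuming the file on match is what prevents a second discovered session in the same directory from claiming the same spawn.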
if __name__ == "__main__":
    unittest.main()