# Compare commits: 9cd91f6b4e...master (56 commits)

c5b1fb3a80, abbede923d, ef451cf20f, fb9d4e5b9f, 781e74cda2, ac629bd149,
fa1fe8613a, d3c6af9b00, 0acc56417e, 19a31a4620, 1fb4a82b39, 69175f08f9,
f26e9acb34, baa712ba15, 7a9d290cb9, 1e21dd08b6, e3e42e53f2, 99a55472a5,
5494d76a98, c0ee053d50, 8070c4132a, 2d65d8f95b, 37748cb99c, 9695e9b08a,
58f0befe72, 62d23793c4, 7b1e47adc0, a7a9ebbf2b, 48c3ddce90, 7059dea3f8,
e4a0631fd7, 1bece476a4, e99ae2ed89, 49a57b5364, db3d2a2e31, ba16daac2a,
c7db46191c, 2926645b10, 6e566cfe82, 117784f8ef, de994bb837, b9c1bd6ff1,
3dc10aa060, 0d15787c7a, dcbaf12f07, fa1ad4b22b, 31862f3a40, 4740922b8d,
8578a19330, 754e85445a, 183942fbaa, 2f80995f8d, 8224acbba7, 7cf51427b7,
be2dd6a4fb, da08d7a588
---

**.beads/.gitignore** (vendored, new file, +11)

```gitignore
# Database
*.db
*.db-shm
*.db-wal

# Lock files
*.lock

# Temporary
last-touched
*.tmp
```
---

**.beads/config.yaml** (new file, +4)

```yaml
# Beads Project Configuration
# issue_prefix: bd
# default_priority: 2
# default_type: task
```
---

**.beads/issues.jsonl** (new file, +40)

File diff suppressed because one or more lines are too long.
---

**.beads/metadata.json** (new file, +4)

```json
{
  "database": "beads.db",
  "jsonl_export": "issues.jsonl"
}
```
---

**.gitignore** (vendored, new file, +2)

```gitignore
# bv (beads viewer) local config and caches
.bv/
```
---

**.playwright-mcp/console-2026-02-26T22-17-15-253Z.log** (new file, +1)

```
[ 94144ms] [ERROR] Failed to load resource: the server responded with a status of 500 (Internal Server Error) @ http://127.0.0.1:7400/api/spawn:0
```
---

**.playwright-mcp/console-2026-02-26T22-19-49-481Z.log** (new file, +1)

```
[ 398ms] [WARNING] cdn.tailwindcss.com should not be used in production. To use Tailwind CSS in production, install it as a PostCSS plugin or use the Tailwind CLI: https://tailwindcss.com/docs/installation @ https://cdn.tailwindcss.com/:63
```
---

**CLAUDE.md** (new file, +68)

# AMC (Agent Management Console)

A dashboard and management system for monitoring and controlling Claude Code and Codex agent sessions.

## Key Documentation

### Claude JSONL Session Log Reference

**Location:** `docs/claude-jsonl-reference/`

Comprehensive documentation for parsing and processing Claude Code JSONL session logs. **Always consult this before implementing JSONL parsing logic.**

| Document | Purpose |
|----------|---------|
| [01-format-specification](docs/claude-jsonl-reference/01-format-specification.md) | Complete JSONL format spec with all fields |
| [02-message-types](docs/claude-jsonl-reference/02-message-types.md) | Every message type with concrete examples |
| [03-tool-lifecycle](docs/claude-jsonl-reference/03-tool-lifecycle.md) | Tool call flow from invocation to result |
| [04-subagent-teams](docs/claude-jsonl-reference/04-subagent-teams.md) | Subagent and team message formats |
| [05-edge-cases](docs/claude-jsonl-reference/05-edge-cases.md) | Error handling, malformed input, recovery |
| [06-quick-reference](docs/claude-jsonl-reference/06-quick-reference.md) | Cheat sheet for common operations |

## Architecture

### Server (Python)

The server uses a mixin-based architecture in `amc_server/`:

| Module | Purpose |
|--------|---------|
| `server.py` | Main AMC server class combining all mixins |
| `mixins/parsing.py` | JSONL reading and token extraction |
| `mixins/conversation.py` | Claude/Codex conversation parsing |
| `mixins/state.py` | Session state management |
| `mixins/discovery.py` | Codex session auto-discovery |
| `mixins/spawn.py` | Agent spawning via Zellij |
| `mixins/control.py` | Session control (focus, dismiss) |
| `mixins/skills.py` | Skill enumeration |
| `mixins/http.py` | HTTP routing |

### Dashboard (React)

Single-page app in `dashboard/` served via HTTP.

## File Locations

| Content | Location |
|---------|----------|
| Claude sessions | `~/.claude/projects/<encoded-path>/<session-id>.jsonl` |
| Codex sessions | `~/.codex/sessions/**/<session-id>.jsonl` |
| AMC session state | `~/.local/share/amc/sessions/<session-id>.json` |
| AMC event logs | `~/.local/share/amc/events/<session-id>.jsonl` |
| Pending spawns | `~/.local/share/amc/pending_spawns/<spawn-id>.json` |

## Critical Parsing Notes

1. **Content type ambiguity** — User message `content` can be string (user input) OR array (tool results)
2. **Missing fields** — Always use `.get()` with defaults for optional fields
3. **Boolean vs int** — Python's `isinstance(True, int)` is True; check bool first
4. **Partial reads** — When seeking to file end, first line may be truncated
5. **Codex differences** — Uses `response_item` type, `function_call` for tools
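The first three notes can be sketched as small defensive helpers. These are illustrative only (hypothetical functions, not AMC's actual parser), assuming records are already-decoded JSONL dicts:

```python
def classify_user_content(record):
    """Sketch of note 1: a user record's `content` may be a plain string
    (real user input) or a list of blocks (tool results). Uses .get()
    with defaults throughout, per note 2."""
    content = record.get("message", {}).get("content", "")
    if isinstance(content, str):
        return ("user_input", content)
    if isinstance(content, list):
        results = [b for b in content if b.get("type") == "tool_result"]
        return ("tool_results", results)
    return ("unknown", None)


def as_token_count(value):
    """Sketch of note 3: bool is a subclass of int in Python, so
    isinstance(True, int) is True -- reject bools before accepting ints."""
    if isinstance(value, bool) or not isinstance(value, int):
        return None
    return value
```

The bool check matters in practice because a field that is sometimes a flag and sometimes a count would otherwise be silently summed as 0 or 1.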
## Testing

```bash
pytest tests/
```

All parsing edge cases are covered in `tests/test_parsing.py` and `tests/test_conversation.py`.
---

**PLAN-slash-autocomplete.md** (new file, +509)

# Plan: Skill Autocomplete for Agent Sessions

## Summary

Add autocomplete functionality to the SimpleInput component that displays available skills when the user types the agent-specific trigger character (`/` for Claude, `$` for Codex). Autocomplete triggers at the start of input or after any whitespace, enabling quick skill discovery and selection mid-message.

## User Workflow

1. User opens a session modal or card with the input field
2. User types the trigger character (`/` for Claude, `$` for Codex):
   - At position 0, OR
   - After a space/newline (mid-message)
3. Autocomplete dropdown appears showing available skills (alphabetically sorted)
4. User can:
   - Continue typing to filter the list
   - Use arrow keys to navigate
   - Press Enter/Tab to select and insert the skill name
   - Press Escape or click outside to dismiss
5. Selected skill replaces the trigger with `{trigger}skill-name ` (e.g., `/commit ` or `$yeet `)
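The trigger rule in steps 2 and 5 can be captured as a small reference function. This is a sketch for clarity only (the shipped implementation is the JavaScript in IMP-5): the trigger is active exactly when the current word, i.e. the run of non-whitespace characters ending at the cursor, starts with the trigger character.

```python
def find_trigger(value, cursor, trigger):
    """Return filter text and replacement span if the word under the
    cursor starts with `trigger`, else None. Reference sketch only."""
    word_start = cursor
    while word_start > 0 and not value[word_start - 1].isspace():
        word_start -= 1
    if word_start < len(value) and value[word_start] == trigger:
        return {
            "filter": value[word_start + 1:cursor].lower(),
            "start": word_start,
            "end": cursor,
        }
    # Wrong trigger character, or trigger not at a word boundary
    return None
```

Note that `a/b` yields no match: the word starts at `a`, so a trigger embedded mid-word never fires.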
## Acceptance Criteria

### Core Functionality

- **AC-1**: Autocomplete triggers when trigger character is typed at position 0 or after whitespace
- **AC-2**: Claude sessions use `/` trigger; Codex sessions use `$` trigger
- **AC-3**: Wrong trigger character for agent type is ignored (no autocomplete)
- **AC-4**: Dropdown displays skill names with trigger prefix and descriptions
- **AC-5**: Skills are sorted alphabetically by name
- **AC-6**: Typing additional characters filters the skill list (case-insensitive match on name)
- **AC-7**: Arrow up/down navigates the highlighted option
- **AC-8**: Enter or Tab inserts the selected skill name (with trigger) followed by a space
- **AC-9**: Escape, clicking outside, or backspacing over the trigger character dismisses the dropdown without insertion
- **AC-10**: Cursor movement (arrow left/right) is ignored while autocomplete is open; dropdown position is locked to trigger location
- **AC-11**: If no skills match the filter, dropdown shows "No matching skills"

### Data Flow

- **AC-12**: On session open, an agent-specific config is loaded containing: (a) trigger character (`/` for Claude, `$` for Codex), (b) enumerated skills list
- **AC-13**: Claude skills are enumerated from `~/.claude/skills/`
- **AC-14**: Codex skills are loaded from `~/.codex/vendor_imports/skills-curated-cache.json` plus `~/.codex/skills/`
- **AC-15**: If session has no skills, dropdown shows "No skills available" when trigger is typed

### UX Polish

- **AC-16**: Dropdown positions above the input (bottom-anchored), aligned left
- **AC-17**: Dropdown has max height with vertical scroll for long lists
- **AC-18**: Currently highlighted item is visually distinct
- **AC-19**: Dropdown respects the existing color scheme
- **AC-20**: After skill insertion, cursor is positioned after the trailing space, ready to continue typing

### Known Limitations (Out of Scope)

- **Duplicate skill names**: If curated and user skills share a name, both appear (no deduplication)
- **Long skill names**: No truncation; names may overflow if extremely long
- **Accessibility**: ARIA roles, active-descendant, screen reader support deferred to future iteration
- **IME/composition**: Japanese/Korean input edge cases not handled in v1
- **Server-side caching**: Skills re-enumerated on each request; mtime-based cache could improve performance at scale
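The caching limitation above could later be addressed with an mtime gate. A sketch under the assumption that the skills directory's mtime changes whenever skills are added or removed (`enumerate_with_mtime_cache` and its arguments are hypothetical names, not existing AMC helpers):

```python
from pathlib import Path


def enumerate_with_mtime_cache(skills_dir, cache, enumerate_fn):
    """Re-run enumerate_fn only when skills_dir's mtime changes.
    `cache` maps directory -> (mtime, skills). Illustrative only."""
    try:
        mtime = skills_dir.stat().st_mtime
    except OSError:
        return []  # directory missing: behave like "no skills"
    hit = cache.get(skills_dir)
    if hit is not None and hit[0] == mtime:
        return hit[1]
    skills = enumerate_fn(skills_dir)
    cache[skills_dir] = (mtime, skills)
    return skills
```

One caveat: a directory's mtime changes when entries are added or removed, but not when a nested `SKILL.md` is edited in place, so descriptions could go stale until the next add/remove.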
## Architecture

### Autocomplete Config Per Agent

Each session gets an autocomplete config loaded at modal open:

```typescript
type AutocompleteConfig = {
  trigger: '/' | '$';
  skills: Array<{ name: string; description: string }>;
}
```

| Agent | Trigger | Skill Sources |
|-------|---------|---------------|
| Claude | `/` | Enumerate `~/.claude/skills/*/` |
| Codex | `$` | `~/.codex/vendor_imports/skills-curated-cache.json` + `~/.codex/skills/*/` |

### Server-Side: New Endpoint for Skills

**Endpoint**: `GET /api/skills?agent={claude|codex}`

**Response**:

```json
{
  "trigger": "/",
  "skills": [
    { "name": "commit", "description": "Create a git commit with a message" },
    { "name": "review-pr", "description": "Review a pull request" }
  ]
}
```

**Rationale**: Skills are agent-global, not session-specific. The client already knows `session.agent` from state, so no session_id is needed. The server enumerates skill directories directly.

### Component Structure

```
SimpleInput.js
├── Props: sessionId, status, onRespond, agent, autocompleteConfig
├── State: text, focused, sending, error
├── State: showAutocomplete, selectedIndex
├── Derived: triggerMatch (detects trigger at valid position)
├── Derived: filterText, filteredSkills (alphabetically sorted)
├── onInput: detect trigger character at pos 0 or after whitespace
├── onKeyDown: arrow/enter/escape handling for autocomplete
└── Render: textarea + autocomplete dropdown
```

### Data Flow

```
┌─────────────────┐     ┌──────────────────────┐     ┌─────────────────┐
│  Modal opens    │────▶│ GET /api/skills?agent│────▶│  SimpleInput    │
│  (session)      │     │      (server)        │     │  (dropdown)     │
└─────────────────┘     └──────────────────────┘     └─────────────────┘
```

Skills are agent-global, so the same response can be cached client-side per agent type.
## Implementation Specifications

### IMP-1: Server-side skill enumeration (fulfills AC-12, AC-13, AC-14, AC-15)

**Location**: `amc_server/mixins/skills.py` (new file)

```python
import json
from pathlib import Path


class SkillsMixin:
    def _serve_skills(self, agent):
        """Return the autocomplete config for an agent."""
        if agent == "codex":
            trigger = "$"
            skills = self._enumerate_codex_skills()
        else:  # claude
            trigger = "/"
            skills = self._enumerate_claude_skills()

        # Sort alphabetically (case-insensitive)
        skills.sort(key=lambda s: s["name"].lower())

        self._send_json(200, {"trigger": trigger, "skills": skills})

    def _enumerate_codex_skills(self):
        """Load Codex skills from cache + user directory."""
        skills = []

        # Curated skills from cache
        cache_file = Path.home() / ".codex/vendor_imports/skills-curated-cache.json"
        if cache_file.exists():
            try:
                data = json.loads(cache_file.read_text())
                for skill in data.get("skills", []):
                    skills.append({
                        "name": skill.get("id", skill.get("name", "")),
                        "description": skill.get("shortDescription", skill.get("description", ""))[:100]
                    })
            except (json.JSONDecodeError, OSError):
                pass

        # User-installed skills
        user_skills_dir = Path.home() / ".codex/skills"
        if user_skills_dir.exists():
            for skill_dir in user_skills_dir.iterdir():
                if skill_dir.is_dir() and not skill_dir.name.startswith("."):
                    skill_md = skill_dir / "SKILL.md"
                    description = ""
                    if skill_md.exists():
                        # Parse first non-empty, non-heading line as description
                        try:
                            for line in skill_md.read_text().splitlines():
                                line = line.strip()
                                if line and not line.startswith("#"):
                                    description = line[:100]
                                    break
                        except OSError:
                            pass
                    skills.append({
                        "name": skill_dir.name,
                        "description": description or f"User skill: {skill_dir.name}"
                    })

        return skills

    def _enumerate_claude_skills(self):
        """Load Claude skills from user directory.

        Note: Checks SKILL.md first (canonical casing used by Claude Code),
        then falls back to lowercase variants for compatibility.
        """
        skills = []
        skills_dir = Path.home() / ".claude/skills"

        if skills_dir.exists():
            for skill_dir in skills_dir.iterdir():
                if skill_dir.is_dir() and not skill_dir.name.startswith("."):
                    # Look for SKILL.md (canonical), then fallbacks
                    description = ""
                    for md_name in ["SKILL.md", "skill.md", "prompt.md", "README.md"]:
                        md_file = skill_dir / md_name
                        if md_file.exists():
                            try:
                                content = md_file.read_text()
                                # Find first meaningful line
                                for line in content.splitlines():
                                    line = line.strip()
                                    if line and not line.startswith("#") and not line.startswith("<!--"):
                                        description = line[:100]
                                        break
                                if description:
                                    break
                            except OSError:
                                pass

                    skills.append({
                        "name": skill_dir.name,
                        "description": description or f"Skill: {skill_dir.name}"
                    })

        return skills
```
### IMP-2: Add skills endpoint to HttpMixin (fulfills AC-12)

**Location**: `amc_server/mixins/http.py`

```python
# In HttpMixin.do_GET, add route handling:
elif path == "/api/skills":
    agent = query_params.get("agent", ["claude"])[0]
    self._serve_skills(agent)
```

**Note**: The route goes in `HttpMixin.do_GET` (where all GET routing lives), not `handler.py`. The handler just composes mixins.

### IMP-3: Client-side API call (fulfills AC-12)

**Location**: `dashboard/utils/api.js`

```javascript
export const API_SKILLS = '/api/skills';

export async function fetchSkills(agent) {
  const url = `${API_SKILLS}?agent=${encodeURIComponent(agent)}`;
  const response = await fetch(url);
  if (!response.ok) return null;
  return response.json();
}
```
### IMP-4: Autocomplete config loading in Modal (fulfills AC-12)

**Location**: `dashboard/components/Modal.js`

```javascript
const [autocompleteConfig, setAutocompleteConfig] = useState(null);

// Load skills when agent type changes
useEffect(() => {
  if (!session) {
    setAutocompleteConfig(null);
    return;
  }

  const agent = session.agent || 'claude';
  fetchSkills(agent)
    .then(config => setAutocompleteConfig(config))
    .catch(() => setAutocompleteConfig(null));
}, [session?.agent]);

// Pass to SimpleInput
<${SimpleInput}
  ...
  autocompleteConfig=${autocompleteConfig}
/>
```
### IMP-5: Trigger detection logic (fulfills AC-1, AC-2, AC-3)

**Location**: `dashboard/components/SimpleInput.js`

```javascript
// Detect if we should show autocomplete
const getTriggerInfo = useCallback((value, cursorPos) => {
  if (!autocompleteConfig) return null;

  const { trigger } = autocompleteConfig;

  // Find the start of the current "word" (after last whitespace before cursor)
  let wordStart = cursorPos;
  while (wordStart > 0 && !/\s/.test(value[wordStart - 1])) {
    wordStart--;
  }

  // Check if word starts with trigger
  if (value[wordStart] === trigger) {
    return {
      trigger,
      filterText: value.slice(wordStart + 1, cursorPos).toLowerCase(),
      replaceStart: wordStart,
      replaceEnd: cursorPos
    };
  }

  return null;
}, [autocompleteConfig]);
```
### IMP-6: Filtered and sorted skills (fulfills AC-5, AC-6)

**Location**: `dashboard/components/SimpleInput.js`

```javascript
const filteredSkills = useMemo(() => {
  if (!autocompleteConfig || !triggerInfo) return [];

  const { skills } = autocompleteConfig;
  const { filterText } = triggerInfo;

  let filtered = skills;
  if (filterText) {
    filtered = skills.filter(s =>
      s.name.toLowerCase().includes(filterText)
    );
  }

  // Already sorted by server, but ensure alphabetical. Copy before sorting:
  // with no filter, `filtered` aliases autocompleteConfig.skills, and
  // Array.prototype.sort would mutate it in place.
  return [...filtered].sort((a, b) => a.name.localeCompare(b.name));
}, [autocompleteConfig, triggerInfo]);
```
### IMP-7: Keyboard navigation (fulfills AC-7, AC-8, AC-9)

**Location**: `dashboard/components/SimpleInput.js`

**Note**: Enter with an empty match list dismisses the dropdown (it doesn't submit the message). This prevents accidental submissions when the user types a partial match that has no results.

```javascript
onKeyDown=${(e) => {
  if (showAutocomplete) {
    // Always handle Escape when dropdown is open
    if (e.key === 'Escape') {
      e.preventDefault();
      setShowAutocomplete(false);
      return;
    }

    // Handle Enter/Tab: select if matches exist, otherwise dismiss (don't submit)
    if (e.key === 'Enter' || e.key === 'Tab') {
      e.preventDefault();
      if (filteredSkills.length > 0 && filteredSkills[selectedIndex]) {
        insertSkill(filteredSkills[selectedIndex]);
      } else {
        setShowAutocomplete(false);
      }
      return;
    }

    // Arrow navigation only when there are matches
    if (filteredSkills.length > 0) {
      if (e.key === 'ArrowDown') {
        e.preventDefault();
        setSelectedIndex(i => Math.min(i + 1, filteredSkills.length - 1));
        return;
      }
      if (e.key === 'ArrowUp') {
        e.preventDefault();
        setSelectedIndex(i => Math.max(i - 1, 0));
        return;
      }
    }
  }

  // Existing Enter-to-submit logic (only when dropdown is closed)
  if (e.key === 'Enter' && !e.shiftKey) {
    e.preventDefault();
    handleSubmit(e);
  }
}}
```
### IMP-8: Skill insertion (fulfills AC-8)

**Location**: `dashboard/components/SimpleInput.js`

```javascript
const insertSkill = useCallback((skill) => {
  if (!triggerInfo || !autocompleteConfig) return;

  const { trigger } = autocompleteConfig;
  const { replaceStart, replaceEnd } = triggerInfo;

  const before = text.slice(0, replaceStart);
  const after = text.slice(replaceEnd);
  const inserted = `${trigger}${skill.name} `;

  setText(before + inserted + after);
  setShowAutocomplete(false);

  // Move cursor after inserted text
  const newCursorPos = replaceStart + inserted.length;
  setTimeout(() => {
    if (textareaRef.current) {
      textareaRef.current.selectionStart = newCursorPos;
      textareaRef.current.selectionEnd = newCursorPos;
      textareaRef.current.focus();
    }
  }, 0);
}, [text, triggerInfo, autocompleteConfig]);
```
### IMP-9: Autocomplete dropdown UI (fulfills AC-4, AC-10, AC-15, AC-16, AC-17, AC-18)

**Location**: `dashboard/components/SimpleInput.js`

**Note**: Uses the index as `key` instead of `skill.name` to handle potential duplicate skill names (curated + user skills with the same name).

```javascript
${showAutocomplete && html`
  <div
    ref=${autocompleteRef}
    class="absolute left-0 bottom-full mb-1 w-full max-h-48 overflow-y-auto rounded-lg border border-selection/75 bg-surface shadow-lg z-50"
  >
    ${filteredSkills.length === 0 ? html`
      <div class="px-3 py-2 text-sm text-dim">No matching skills</div>
    ` : filteredSkills.map((skill, i) => html`
      <div
        key=${i}
        class="px-3 py-2 cursor-pointer text-sm transition-colors ${
          i === selectedIndex
            ? 'bg-selection/50 text-bright'
            : 'text-fg hover:bg-selection/25'
        }"
        onClick=${() => insertSkill(skill)}
        onMouseEnter=${() => setSelectedIndex(i)}
      >
        <div class="font-medium font-mono text-bright">
          ${autocompleteConfig.trigger}${skill.name}
        </div>
        <div class="text-micro text-dim truncate">${skill.description}</div>
      </div>
    `)}
  </div>
`}
```
### IMP-10: Click-outside dismissal (fulfills AC-9)

**Location**: `dashboard/components/SimpleInput.js`

```javascript
useEffect(() => {
  if (!showAutocomplete) return;

  const handleClickOutside = (e) => {
    if (autocompleteRef.current && !autocompleteRef.current.contains(e.target) &&
        textareaRef.current && !textareaRef.current.contains(e.target)) {
      setShowAutocomplete(false);
    }
  };

  document.addEventListener('mousedown', handleClickOutside);
  return () => document.removeEventListener('mousedown', handleClickOutside);
}, [showAutocomplete]);
```
## Testing Considerations

### Manual Testing Checklist

1. Claude session: Type `/` - dropdown appears with Claude skills
2. Codex session: Type `$` - dropdown appears with Codex skills
3. Claude session: Type `$` - nothing happens (wrong trigger)
4. Type `/com` - list filters to skills containing "com"
5. Mid-message: Type "please run /commit" - autocomplete triggers on `/`
6. Arrow keys navigate, Enter selects
7. Escape dismisses without selection
8. Click outside dismisses
9. Selected skill shows as `{trigger}skill-name ` in input
10. Verify alphabetical ordering
11. Verify vertical scroll with many skills

### Edge Cases

- Session without skills (dropdown shows "No skills available", per AC-15)
- Single skill (still shows dropdown)
- Very long skill descriptions (truncated with ellipsis)
- Multiple triggers in one message (each can trigger independently)
- Backspace over trigger (dismisses autocomplete)
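Parts of the checklist lend themselves to automation. A minimal pytest-style sketch of the sorting contract (AC-5) and the no-skills edge case; `sort_skills` is an illustrative stand-in for the IMP-1 sort, not an existing helper:

```python
def sort_skills(skills):
    # Case-insensitive alphabetical order, mirroring IMP-1's
    # skills.sort(key=lambda s: s["name"].lower())
    return sorted(skills, key=lambda s: s["name"].lower())


def test_sorted_case_insensitively():
    out = sort_skills([{"name": "Zeta"}, {"name": "alpha"}, {"name": "Beta"}])
    assert [s["name"] for s in out] == ["alpha", "Beta", "Zeta"]


def test_no_skills_is_a_valid_state():
    # A session without skills yields an empty list; the dropdown
    # then shows "No skills available" (AC-15)
    assert sort_skills([]) == []
```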
## Rollout Slices

### Slice 1: Server-side skill enumeration

- Add `SkillsMixin` with `_enumerate_codex_skills()` and `_enumerate_claude_skills()`
- Add `/api/skills?agent=` endpoint in `HttpMixin.do_GET`
- Test endpoint returns correct data for each agent type

### Slice 2: Client-side skill loading

- Add `fetchSkills()` API helper
- Load skills in Modal.js on session open
- Pass `autocompleteConfig` to SimpleInput

### Slice 3: Basic autocomplete trigger

- Add trigger detection logic (position 0 + after whitespace)
- Show/hide dropdown based on trigger
- Basic filtered list display

### Slice 4: Keyboard navigation + selection

- Arrow key navigation
- Enter/Tab selection
- Escape dismissal
- Click-outside dismissal

### Slice 5: Polish

- Mouse hover to select
- Scroll into view for long lists
- Cursor positioning after insertion
---

**PROPOSED_CODE_FILE_REORGANIZATION_PLAN.md** (new file, +316)

# Proposed Code File Reorganization Plan

## Executive Summary

After reading every source file in the project, analyzing all import graphs, and understanding how each module fits into the architecture, my assessment is: **the project is already reasonably well-organized**. The mixin-based decomposition of the handler, the dashboard's `components/utils/lib` split, and the test structure that mirrors source all reflect sound engineering.

That said, there is one clear structural problem and a few smaller wins. This plan proposes **surgical, high-value changes** rather than a gratuitous restructure. The guiding principle: every change must make it easier for a developer (or agent) to find things and understand the architecture.

---

## Current Structure (Annotated)

```
amc/
  amc_server/                    # Python backend (2,571 LOC)
    __init__.py                  # Package init, exports main
    server.py                    # Server startup/shutdown (38 LOC)
    handler.py                   # Handler class composed from mixins (31 LOC)
    context.py                   # ** PROBLEM ** All config, constants, caches, locks, auth (121 LOC)
    logging_utils.py             # Logging + signal handlers (31 LOC)
    mixins/                      # Handler mixins (one per concern)
      __init__.py                # Package comment (1 LOC)
      http.py                    # HTTP routing, static file serving (173 LOC)
      state.py                   # State aggregation, SSE, session collection, cleanup (440 LOC)
      conversation.py            # Conversation parsing for Claude/Codex (278 LOC)
      control.py                 # Session dismiss/respond, Zellij pane injection (295 LOC)
      discovery.py               # Codex session discovery, pane matching (347 LOC)
      parsing.py                 # JSONL parsing, context usage extraction (274 LOC)
      skills.py                  # Skill enumeration for autocomplete (184 LOC)
      spawn.py                   # Agent spawning in Zellij tabs (358 LOC)

  dashboard/                     # Preact frontend (2,564 LOC)
    index.html                   # Entry HTML with Tailwind config
    main.js                      # App mount point (7 LOC)
    styles.css                   # Custom styles
    lib/                         # Third-party/shared
      preact.js                  # Preact re-exports
      markdown.js                # Markdown rendering + syntax highlighting (159 LOC)
    utils/                       # Pure utility functions
      api.js                     # API constants + fetch helpers (39 LOC)
      formatting.js              # Time/token formatting (66 LOC)
      status.js                  # Status metadata + session grouping (79 LOC)
      autocomplete.js            # Autocomplete trigger detection (48 LOC)
    components/                  # UI components
      App.js                     # Root component (616 LOC)
      Sidebar.js                 # Project nav sidebar (102 LOC)
      SessionCard.js             # Session card (176 LOC)
      Modal.js                   # Full-screen modal wrapper (79 LOC)
      ChatMessages.js            # Message list (39 LOC)
      MessageBubble.js           # Individual message (54 LOC)
      QuestionBlock.js           # AskUserQuestion UI (228 LOC)
      SimpleInput.js             # Freeform text input (228 LOC)
      OptionButton.js            # Option button (24 LOC)
      AgentActivityIndicator.js  # Turn timer (115 LOC)
      SpawnModal.js              # Spawn dropdown (241 LOC)
      Toast.js                   # Toast notifications (125 LOC)
      EmptyState.js              # Empty state (18 LOC)
      Header.js                  # ** DEAD CODE ** (58 LOC, zero imports)
      SessionGroup.js            # ** DEAD CODE ** (56 LOC, zero imports)

  bin/                           # Shell/Python scripts
    amc                          # Launcher (start/stop/status)
    amc-hook                     # Hook script (standalone, writes session state)
    amc-server                   # Server launch script
    amc-server-restart           # Server restart helper

  tests/                         # Test suite (mirrors mixin structure)
    test_context.py              # Context tests
    test_control.py              # Control mixin tests
    test_conversation.py         # Conversation parsing tests
    test_conversation_mtime.py   # Conversation mtime tests
    test_discovery.py            # Discovery mixin tests
    test_hook.py                 # Hook script tests
    test_http.py                 # HTTP mixin tests
    test_parsing.py              # Parsing mixin tests
    test_skills.py               # Skills mixin tests
    test_spawn.py                # Spawn mixin tests
    test_state.py                # State mixin tests
    test_zellij_metadata.py      # Zellij metadata tests
    e2e/                         # End-to-end tests
      __init__.py
      test_skills_endpoint.py
      test_autocomplete_workflow.js
    e2e_spawn.sh                 # Spawn E2E script
```

---
## Proposed Changes
|
||||
|
||||
### Change 1: Split `context.py` into Focused Modules (HIGH VALUE)
|
||||
|
||||
**Problem:** `context.py` is the classic "junk drawer" module. It contains:
|
||||
- Path constants for the server, Zellij, Claude, and Codex
|
||||
- Server configuration (port, timeouts)
|
||||
- 5 independent caches with their own size limits
|
||||
- 2 threading locks for unrelated concerns
|
||||
- Auth token generation/validation
|
||||
- Zellij binary resolution
|
||||
- Spawn-related config
|
||||
- Background thread management for projects cache
|
||||
|
||||
Every mixin imports from it, but each only needs a subset. When a developer asks "where is the spawn rate limit configured?", they have to scan through an unrelated grab-bag of constants. When they ask "where's the Codex transcript cache?", same problem.
|
||||
|
||||
**Proposed split:**
|
||||
|
||||
```
amc_server/
    config.py        # Server-level constants: PORT, DATA_DIR, SESSIONS_DIR, EVENTS_DIR,
                     # DASHBOARD_DIR, PROJECT_DIR, STALE_EVENT_AGE, STALE_STARTING_AGE
                     # These are the "universal" constants every module might need.

    zellij.py        # Zellij integration: ZELLIJ_BIN resolution, ZELLIJ_PLUGIN path,
                     # ZELLIJ_SESSION, _zellij_cache (sessions cache + expiry)
                     # Rationale: All Zellij-specific constants and helpers in one place.
                     # Any developer working on Zellij integration knows exactly where to look.

    agents.py        # Agent-specific paths and caches:
                     # CLAUDE_PROJECTS_DIR, CODEX_SESSIONS_DIR, CODEX_ACTIVE_WINDOW,
                     # _codex_pane_cache, _codex_transcript_cache, _CODEX_CACHE_MAX,
                     # _context_usage_cache, _CONTEXT_CACHE_MAX,
                     # _dismissed_codex_ids, _DISMISSED_MAX
                     # Rationale: Agent data-source configuration and caches that are only
                     # relevant to discovery/parsing mixins, not the whole server.

    auth.py          # Auth token: generate_auth_token(), validate_auth_token(), _auth_token
                     # Rationale: Security-sensitive code in its own module. Small, but
                     # an architecturally clean separation from general config.

    spawn_config.py  # Spawn feature config:
                     # PROJECTS_DIR, PENDING_SPAWNS_DIR, PENDING_SPAWN_TTL,
                     # _spawn_lock, _spawn_timestamps, SPAWN_COOLDOWN_SEC
                     # + start_projects_watcher() (background refresh thread)
                     # Rationale: The spawn feature has its own set of constants, lock, and
                     # background thread, currently scattered between context.py and spawn.py.
                     # Consolidating makes the spawn feature self-contained.
```
Kept from the current structure:

- `_state_lock` moves to `config.py` (it's a server-level concern)

**Import changes required:**
| File | Current import from `context` | New import from |
|------|-------------------------------|-----------------|
| `server.py` | `DATA_DIR, PORT, generate_auth_token, start_projects_watcher` | `config.DATA_DIR, config.PORT`, `auth.generate_auth_token`, `spawn_config.start_projects_watcher` |
| `handler.py` | (none, uses mixins) | (unchanged) |
| `mixins/http.py` | `DASHBOARD_DIR`, `ctx._auth_token` | `config.DASHBOARD_DIR`, `auth._auth_token` |
| `mixins/state.py` | `EVENTS_DIR, SESSIONS_DIR, STALE_*, ZELLIJ_BIN, _state_lock, _zellij_cache` | `config.*`, `zellij.ZELLIJ_BIN, zellij._zellij_cache` |
| `mixins/conversation.py` | `EVENTS_DIR` | `config.EVENTS_DIR` |
| `mixins/control.py` | `SESSIONS_DIR, ZELLIJ_BIN, ZELLIJ_PLUGIN, _DISMISSED_MAX, _dismissed_codex_ids` | `config.SESSIONS_DIR`, `zellij.*`, `agents._DISMISSED_MAX, agents._dismissed_codex_ids` |
| `mixins/discovery.py` | `CODEX_*, PENDING_SPAWNS_DIR, SESSIONS_DIR, _codex_*` | `agents.*`, `spawn_config.PENDING_SPAWNS_DIR`, `config.SESSIONS_DIR` |
| `mixins/parsing.py` | `CLAUDE_PROJECTS_DIR, CODEX_SESSIONS_DIR, _*_cache, _*_MAX` | `agents.*` |
| `mixins/spawn.py` | `PENDING_SPAWNS_DIR, PROJECTS_DIR, SESSIONS_DIR, ZELLIJ_*, _spawn_*, validate_auth_token` | `spawn_config.*`, `config.SESSIONS_DIR`, `zellij.*`, `auth.validate_auth_token` |
**Why this is the right split:**

1. **By domain, not by size.** Each new module groups the constants, caches, and helpers that serve one architectural concern. A developer working on Zellij integration opens `zellij.py`. Working on Codex discovery? `agents.py`. Spawn feature? `spawn_config.py`.

2. **No circular imports.** The dependency graph is a DAG: `config.py` is a leaf (imported by everything, imports nothing from `amc_server`). `zellij.py`, `agents.py`, `auth.py`, and `spawn_config.py` import only from `config.py` (if at all). Mixins import from these.

3. **No behavioral change.** Module-level caches and singletons work the same way whether they live in one file or five.
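The no-circular-imports claim can be checked mechanically. A minimal sketch, assuming the dependency map below (it mirrors the proposed layout by hand; it is not generated from the real code):

```python
# Detect cycles in a module dependency map via depth-first search.
# DEPS is a hypothetical map mirroring the proposed amc_server layout.

DEPS = {
    "config": [],
    "zellij": ["config"],
    "agents": ["config"],
    "auth": [],
    "spawn_config": ["config"],
    "mixins.state": ["config", "zellij"],
    "mixins.spawn": ["spawn_config", "config", "zellij", "auth"],
}


def find_cycle(deps):
    """Return a cyclic path if one exists, else None."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {m: WHITE for m in deps}

    def visit(mod, path):
        color[mod] = GRAY  # on the current DFS stack
        for dep in deps.get(mod, []):
            if color.get(dep) == GRAY:
                return path + [dep]  # back edge: cycle found
            if color.get(dep, BLACK) == WHITE:
                cycle = visit(dep, path + [dep])
                if cycle:
                    return cycle
        color[mod] = BLACK  # fully explored
        return None

    for mod in deps:
        if color[mod] == WHITE:
            cycle = visit(mod, [mod])
            if cycle:
                return cycle
    return None


print(find_cycle(DEPS))  # None: the proposed layout is acyclic
```

Running the same check on `{"a": ["b"], "b": ["a"]}` returns a cyclic path, so the check would catch a regression if a later change made, say, `zellij.py` import from a mixin.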
---

### Change 2: Delete Dead Dashboard Components (LOW EFFORT, HIGH CLARITY)
**Problem:** `Header.js` (58 LOC) and `SessionGroup.js` (56 LOC) are completely unused. Zero imports anywhere in the codebase. They were replaced by the current Sidebar + grid layout but never cleaned up.

**Action:** Delete both files.

**Import changes required:** None (nothing imports them).

**Rationale:** Dead code is noise. Anyone exploring the `components/` directory would reasonably assume these are active and try to understand how they fit. Removing them prevents that confusion.
---

### Change 3: No Changes to Dashboard Structure
The dashboard is already well-organized:

- `components/` - All Preact components
- `utils/` - Pure utility functions (formatting, API, status, autocomplete)
- `lib/` - Third-party wrappers (Preact, markdown rendering)

This is a standard, intuitive layout. The `components/` directory has 13 files (15 before dead-code removal), which is manageable. Creating sub-directories (e.g., `components/session/`, `components/layout/`) would add nesting without meaningful benefit at this scale.
---

### Change 4: No Changes to `mixins/` Structure
The mixin decomposition is the project's architectural backbone. Each mixin handles one concern:

| Mixin | Responsibility |
|-------|----------------|
| `http.py` | HTTP routing, static file serving, CORS |
| `state.py` | State aggregation, SSE streaming, session collection |
| `conversation.py` | Conversation history parsing (Claude + Codex JSONL) |
| `control.py` | Session dismiss/respond, Zellij pane injection |
| `discovery.py` | Codex session auto-discovery, pane matching |
| `parsing.py` | JSONL tail reading, context usage extraction, caching |
| `skills.py` | Skill enumeration for Claude/Codex autocomplete |
| `spawn.py` | Agent spawning in Zellij tabs |

All are 170-440 lines, which is reasonable. The largest (`state.py`, at 440 lines) could theoretically be split, but its methods are tightly coupled around session collection; splitting would create artificial seams.
---

### Change 5: No Changes to `tests/` Structure
Tests already mirror the source structure (`test_state.py` tests `mixins/state.py`, etc.). This is the correct pattern.

**One consideration:** After splitting `context.py`, `test_context.py` will need updates to import from the new module locations. The test file is small (755 bytes) and covers basic context constants, so the update is trivial.
---

### Change 6: No Changes to `bin/` Scripts
The `amc-hook` script intentionally duplicates `DATA_DIR`, `SESSIONS_DIR`, and `EVENTS_DIR` from `context.py`. This is correct: the hook runs as a standalone process launched by Claude Code, not as part of the server, so it must be self-contained with zero dependencies on the server package. Sharing code would create a fragile coupling.
---

## What I Explicitly Decided NOT to Do
1. **Not creating a `src/` directory.** The project root is clean. Adding `src/` would be an extra nesting level with no benefit.

2. **Not splitting any mixins.** `state.py` (440 LOC) and `spawn.py` (358 LOC) are the largest, but their methods are cohesive. Splitting would scatter related logic across files.

3. **Not merging small files.** `EmptyState.js` (18 LOC), `OptionButton.js` (24 LOC), and `ChatMessages.js` (39 LOC) are tiny, but each has a clear purpose and is imported independently. Merging them would violate the component-per-file convention.

4. **Not reorganizing dashboard components into sub-folders.** With 13 components, flat is fine. Sub-folders like `components/session/` and `components/layout/` become necessary at roughly 25+ components.

5. **Not consolidating `api.js` + `formatting.js` + `status.js` + `autocomplete.js`.** Each is focused and independently imported. A combined `utils.js` would be a grab-bag (the exact problem we're fixing in `context.py`).

6. **Not moving `markdown.js` out of `lib/`.** It wraps third-party dependencies and provides rendering utilities. `lib/` is the correct location.
---

## Proposed Final Structure
```
amc/
    amc_server/
        __init__.py       # (unchanged)
        server.py         # (updated imports)
        handler.py        # (unchanged)
        config.py         # NEW: server constants: DATA_DIR, SESSIONS_DIR, EVENTS_DIR, PORT, etc.
        zellij.py         # NEW: Zellij binary resolution, ZELLIJ_PLUGIN, ZELLIJ_SESSION, cache
        agents.py         # NEW: agent paths (Claude/Codex), transcript caches, dismissed cache
        auth.py           # NEW: auth token generation/validation
        spawn_config.py   # NEW: spawn constants, locks, rate limiting, projects watcher
        logging_utils.py  # (unchanged)
        mixins/           # (unchanged structure, updated imports)
            __init__.py
            http.py
            state.py
            conversation.py
            control.py
            discovery.py
            parsing.py
            skills.py
            spawn.py

    dashboard/
        index.html                 # (unchanged)
        main.js                    # (unchanged)
        styles.css                 # (unchanged)
        lib/
            preact.js              # (unchanged)
            markdown.js            # (unchanged)
        utils/
            api.js                 # (unchanged)
            formatting.js          # (unchanged)
            status.js              # (unchanged)
            autocomplete.js        # (unchanged)
        components/
            App.js                 # (unchanged)
            Sidebar.js             # (unchanged)
            SessionCard.js         # (unchanged)
            Modal.js               # (unchanged)
            ChatMessages.js        # (unchanged)
            MessageBubble.js       # (unchanged)
            QuestionBlock.js       # (unchanged)
            SimpleInput.js         # (unchanged)
            OptionButton.js        # (unchanged)
            AgentActivityIndicator.js  # (unchanged)
            SpawnModal.js          # (unchanged)
            Toast.js               # (unchanged)
            EmptyState.js          # (unchanged)
            [DELETED] Header.js
            [DELETED] SessionGroup.js

    bin/                  # (unchanged)
    tests/                # (minor import updates in test_context.py)
```
---

## Implementation Order
1. **Delete dead dashboard components** (`Header.js`, `SessionGroup.js`) - zero risk, instant clarity
2. **Create the new Python modules** (`config.py`, `zellij.py`, `agents.py`, `auth.py`, `spawn_config.py`) with the correct constants/functions
3. **Update all mixin imports** to use the new module locations
4. **Update `server.py` imports**
5. **Delete `context.py`**
6. **Run the full test suite** to verify nothing broke
7. **Update `test_context.py`** if needed
---

## Risk Assessment
- **Risk of breaking imports:** MEDIUM. There are many import statements to update across 8 mixin files plus `server.py`. Mitigated by running the full test suite after changes.
- **Risk of circular imports:** LOW. The new modules form a clean DAG (config <- zellij/agents/auth/spawn_config <- mixins).
- **Risk to `bin/amc-hook`:** NONE. The hook is standalone and doesn't import from `amc_server`.
- **Risk to dashboard:** NONE for the dead-code deletion; neither file is imported anywhere.
README.md (20 changed lines)
````diff
@@ -93,7 +93,7 @@ AMC requires Claude Code hooks to report session state. Add this to your `~/.cla
 | `bin/amc` | Launcher script — start/stop/status commands |
 | `bin/amc-server` | Python HTTP server serving the API and dashboard |
 | `bin/amc-hook` | Hook script called by Claude Code to write session state |
-| `dashboard-preact.html` | Single-file Preact dashboard |
+| `dashboard/` | Modular Preact dashboard (index.html, components/, lib/, utils/) |

 ### Data Storage

@@ -134,8 +134,10 @@ The `/api/respond/{id}` endpoint injects text into a session's Zellij pane. Requ
 - `optionCount` — Number of options in the current question (used for freeform)

 Response injection works via:
-1. **Zellij plugin** (`~/.config/zellij/plugins/zellij-send-keys.wasm`) — Preferred, no focus change
-2. **write-chars fallback** — Uses `zellij action write-chars`, changes focus
+1. **Zellij plugin** (`~/.config/zellij/plugins/zellij-send-keys.wasm`) — Required for pane-targeted sends and Enter submission
+2. **Optional unsafe fallback** (`AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK=1`) — Uses focused-pane `write-chars` only when explicitly enabled
+
+AMC resolves the Zellij binary from PATH plus common Homebrew locations (`/opt/homebrew/bin/zellij`, `/usr/local/bin/zellij`) so response injection still works when started via `launchctl`.

 ## Session Statuses

@@ -159,9 +161,17 @@ Response injection works via:
 - Zellij (for response injection)
 - Claude Code with hooks support

-## Optional: Zellij Plugin
+## Testing

-For seamless response injection without focus changes, install the `zellij-send-keys` plugin:
+Run the server test suite:
+
+```bash
+python3 -m unittest discover -s tests -v
+```
+
+## Zellij Plugin
+
+For pane-targeted response injection (including reliable Enter submission), install the `zellij-send-keys` plugin:

 ```bash
 # Build and install the plugin
````
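The unsafe fallback described above is gated behind an explicit opt-in environment variable. A minimal sketch of the gate; the truthy spellings mirror the check in `control.py`, while the standalone function name is illustrative:

```python
import os

# Accept the same truthy spellings the server checks for.
_TRUTHY = ("1", "true", "yes", "on")


def allow_unsafe_write_chars_fallback(environ=os.environ):
    """Return True only when the operator explicitly opted in to the
    focused-pane write-chars fallback."""
    value = environ.get("AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK", "").strip().lower()
    return value in _TRUTHY


print(allow_unsafe_write_chars_fallback({"AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK": " On "}))  # True
print(allow_unsafe_write_chars_fallback({}))  # False
```

Defaulting to off means a misconfigured or missing variable can never silently send keystrokes to whichever pane happens to be focused.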
BIN  amc_server/__pycache__/__init__.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/__pycache__/context.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/__pycache__/handler.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/__pycache__/logging_utils.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/__pycache__/server.cpython-313.pyc (new file; binary not shown)
amc_server/agents.py (new file, 28 lines):

```python
"""Agent-specific paths, caches, and constants for Claude/Codex discovery."""

from pathlib import Path

# Claude Code conversation directory
CLAUDE_PROJECTS_DIR = Path.home() / ".claude" / "projects"

# Codex conversation directory
CODEX_SESSIONS_DIR = Path.home() / ".codex" / "sessions"

# Only discover recently-active Codex sessions (10 minutes)
CODEX_ACTIVE_WINDOW = 600

# Cache for Codex pane info (avoid running pgrep/ps/lsof on every request)
_codex_pane_cache = {"pid_info": {}, "cwd_map": {}, "expires": 0}

# Cache for parsed context usage by transcript file path + mtime/size
_context_usage_cache = {}
_CONTEXT_CACHE_MAX = 100

# Cache mapping Codex session IDs to transcript paths (or None when missing)
_codex_transcript_cache = {}
_CODEX_CACHE_MAX = 200

# Codex sessions dismissed during this server lifetime (prevents re-discovery)
# Uses dict (not set) for O(1) lookup + FIFO eviction via insertion order (Python 3.7+)
_dismissed_codex_ids = {}
_DISMISSED_MAX = 500
```
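The dict-based FIFO eviction noted in the `_dismissed_codex_ids` comment can be exercised in isolation. A small sketch (the cache size of 3 here is illustrative; the real limit is `_DISMISSED_MAX = 500`, and `remember` is a hypothetical helper name):

```python
def remember(dismissed, session_id, max_size=3):
    """Record a dismissed session ID, evicting the oldest entries FIFO.

    Relies on dicts preserving insertion order (guaranteed since Python 3.7),
    which gives O(1) membership tests plus cheap oldest-first eviction.
    """
    while len(dismissed) >= max_size:
        oldest = next(iter(dismissed))  # first key inserted
        del dismissed[oldest]
    dismissed[session_id] = True


cache = {}
for sid in ["a", "b", "c", "d"]:
    remember(cache, sid)
print(list(cache))  # ['b', 'c', 'd'] - 'a' was evicted first
```

A set has no usable ordering, so `set.pop()` evicts an arbitrary element; switching to a dict is what makes eviction deterministic.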
amc_server/auth.py (new file, 18 lines):

```python
"""Auth token generation and validation for spawn endpoint security."""

import secrets

# Auth token for spawn endpoint
_auth_token: str = ''


def generate_auth_token():
    """Generate a one-time auth token for this server instance."""
    global _auth_token
    _auth_token = secrets.token_urlsafe(32)
    return _auth_token


def validate_auth_token(request_token: str) -> bool:
    """Validate the Authorization header token."""
    return request_token == f'Bearer {_auth_token}'
```
amc_server/config.py (new file, 20 lines):

```python
"""Server-level constants: paths, port, timeouts, state lock."""

import threading
from pathlib import Path

# Runtime data lives in XDG data dir
DATA_DIR = Path.home() / ".local" / "share" / "amc"
SESSIONS_DIR = DATA_DIR / "sessions"
EVENTS_DIR = DATA_DIR / "events"

# Source files live in project directory (relative to this module)
PROJECT_DIR = Path(__file__).resolve().parent.parent
DASHBOARD_DIR = PROJECT_DIR / "dashboard"

PORT = 7400
STALE_EVENT_AGE = 86400  # 24 hours in seconds
STALE_STARTING_AGE = 3600  # 1 hour - sessions stuck in "starting" are orphans

# Serialize state collection because it mutates session files/caches.
_state_lock = threading.Lock()
```
@@ -1,47 +0,0 @@
|
||||
from pathlib import Path
|
||||
import threading
|
||||
|
||||
# Claude Code conversation directory
|
||||
CLAUDE_PROJECTS_DIR = Path.home() / ".claude" / "projects"
|
||||
|
||||
# Codex conversation directory
|
||||
CODEX_SESSIONS_DIR = Path.home() / ".codex" / "sessions"
|
||||
|
||||
# Plugin path for zellij-send-keys
|
||||
ZELLIJ_PLUGIN = Path.home() / ".config" / "zellij" / "plugins" / "zellij-send-keys.wasm"
|
||||
|
||||
# Runtime data lives in XDG data dir
|
||||
DATA_DIR = Path.home() / ".local" / "share" / "amc"
|
||||
SESSIONS_DIR = DATA_DIR / "sessions"
|
||||
EVENTS_DIR = DATA_DIR / "events"
|
||||
|
||||
# Source files live in project directory (relative to this module)
|
||||
PROJECT_DIR = Path(__file__).resolve().parent.parent
|
||||
DASHBOARD_DIR = PROJECT_DIR / "dashboard"
|
||||
|
||||
PORT = 7400
|
||||
STALE_EVENT_AGE = 86400 # 24 hours in seconds
|
||||
STALE_STARTING_AGE = 3600 # 1 hour - sessions stuck in "starting" are orphans
|
||||
CODEX_ACTIVE_WINDOW = 600 # 10 minutes - only discover recently-active Codex sessions
|
||||
|
||||
# Cache for Zellij session list (avoid calling zellij on every request)
|
||||
_zellij_cache = {"sessions": None, "expires": 0}
|
||||
|
||||
# Cache for Codex pane info (avoid running pgrep/ps/lsof on every request)
|
||||
_codex_pane_cache = {"pid_info": {}, "cwd_map": {}, "expires": 0}
|
||||
|
||||
# Cache for parsed context usage by transcript file path + mtime/size
|
||||
# Limited to prevent unbounded memory growth
|
||||
_context_usage_cache = {}
|
||||
_CONTEXT_CACHE_MAX = 100
|
||||
|
||||
# Cache mapping Codex session IDs to transcript paths (or None when missing)
|
||||
_codex_transcript_cache = {}
|
||||
_CODEX_CACHE_MAX = 200
|
||||
|
||||
# Codex sessions dismissed during this server lifetime (prevents re-discovery)
|
||||
_dismissed_codex_ids = set()
|
||||
_DISMISSED_MAX = 500
|
||||
|
||||
# Serialize state collection because it mutates session files/caches.
|
||||
_state_lock = threading.Lock()
|
||||
amc_server/handler.py:

```diff
@@ -5,6 +5,8 @@ from amc_server.mixins.control import SessionControlMixin
 from amc_server.mixins.discovery import SessionDiscoveryMixin
 from amc_server.mixins.http import HttpMixin
 from amc_server.mixins.parsing import SessionParsingMixin
+from amc_server.mixins.skills import SkillsMixin
+from amc_server.mixins.spawn import SpawnMixin
 from amc_server.mixins.state import StateMixin


@@ -15,6 +17,8 @@ class AMCHandler(
     SessionControlMixin,
     SessionDiscoveryMixin,
     SessionParsingMixin,
+    SkillsMixin,
+    SpawnMixin,
     BaseHTTPRequestHandler,
 ):
     """HTTP handler composed from focused mixins."""
```
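The handler's behavior depends on Python's method resolution order: the left-to-right base-class list determines where each method is looked up. A toy sketch of the same composition pattern (the class names here are illustrative stand-ins, not the real mixins):

```python
class HttpMixin:
    def describe(self):
        return "http"


class StateMixin:
    def collect(self):
        return ["session"]


class Handler(HttpMixin, StateMixin):
    """Composed the same way AMCHandler stacks its mixins: each base
    contributes methods, and the MRO resolves any name collisions
    left-to-right."""


h = Handler()
print(h.describe(), h.collect())  # http ['session']
print([c.__name__ for c in Handler.__mro__])
# ['Handler', 'HttpMixin', 'StateMixin', 'object']
```

Because each mixin owns a disjoint set of method names, the order only matters if two mixins ever define the same method; keeping responsibilities disjoint (as the table in Change 4 shows) sidesteps that entirely.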
BIN  amc_server/mixins/__pycache__/__init__.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/mixins/__pycache__/control.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/mixins/__pycache__/conversation.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/mixins/__pycache__/discovery.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/mixins/__pycache__/http.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/mixins/__pycache__/parsing.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/mixins/__pycache__/skills.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/mixins/__pycache__/spawn.cpython-313.pyc (new file; binary not shown)
BIN  amc_server/mixins/__pycache__/state.cpython-313.pyc (new file; binary not shown)
amc_server/mixins/control.py:

```diff
@@ -3,23 +3,62 @@ import os
 import subprocess
 import time

-from amc_server.context import SESSIONS_DIR, ZELLIJ_PLUGIN, _DISMISSED_MAX, _dismissed_codex_ids
+from amc_server.agents import _DISMISSED_MAX, _dismissed_codex_ids
+from amc_server.config import SESSIONS_DIR
+from amc_server.zellij import ZELLIJ_BIN, ZELLIJ_PLUGIN
 from amc_server.logging_utils import LOGGER


 class SessionControlMixin:
+    _FREEFORM_MODE_SWITCH_DELAY_SEC = 0.30
+    _DEFAULT_SUBMIT_ENTER_DELAY_SEC = 0.20
+
     def _dismiss_session(self, session_id):
         """Delete a session file (manual dismiss from dashboard)."""
         safe_id = os.path.basename(session_id)
         session_file = SESSIONS_DIR / f"{safe_id}.json"
         # Track dismissed Codex sessions to prevent re-discovery
-        # Evict oldest entries if set is full (prevents unbounded growth)
+        # Evict oldest entries via FIFO (dict maintains insertion order in Python 3.7+)
         while len(_dismissed_codex_ids) >= _DISMISSED_MAX:
-            _dismissed_codex_ids.pop()
-        _dismissed_codex_ids.add(safe_id)
+            oldest_key = next(iter(_dismissed_codex_ids))
+            del _dismissed_codex_ids[oldest_key]
+        _dismissed_codex_ids[safe_id] = True
         session_file.unlink(missing_ok=True)
         self._send_json(200, {"ok": True})

+    def _dismiss_dead_sessions(self):
+        """Delete all dead session files (clear all from dashboard).
+
+        Note: is_dead is computed dynamically, not stored on disk, so we must
+        recompute it here using the same logic as _collect_sessions.
+        """
+        # Get liveness data (same as _collect_sessions)
+        active_zellij_sessions = self._get_active_zellij_sessions()
+        active_transcript_files = self._get_active_transcript_files()
+
+        dismissed_count = 0
+        for f in SESSIONS_DIR.glob("*.json"):
+            try:
+                data = json.loads(f.read_text())
+                if not isinstance(data, dict):
+                    continue
+                # Recompute is_dead (it's not persisted to disk)
+                is_dead = self._is_session_dead(
+                    data, active_zellij_sessions, active_transcript_files
+                )
+                if is_dead:
+                    safe_id = f.stem
+                    # Track dismissed Codex sessions
+                    while len(_dismissed_codex_ids) >= _DISMISSED_MAX:
+                        oldest_key = next(iter(_dismissed_codex_ids))
+                        del _dismissed_codex_ids[oldest_key]
+                    _dismissed_codex_ids[safe_id] = True
+                    f.unlink(missing_ok=True)
+                    dismissed_count += 1
+            except (json.JSONDecodeError, OSError):
+                continue
+        self._send_json(200, {"ok": True, "dismissed": dismissed_count})
+
     def _respond_to_session(self, session_id):
         """Inject a response into the session's Zellij pane."""
         safe_id = os.path.basename(session_id)
@@ -87,16 +126,40 @@ class SessionControlMixin:
                 self._send_json(500, {"ok": False, "error": f"Failed to activate freeform mode: {result['error']}"})
                 return
             # Delay for Claude Code to switch to text input mode
-            time.sleep(0.3)
+            time.sleep(self._FREEFORM_MODE_SWITCH_DELAY_SEC)

-        # Inject the actual text (with Enter)
-        result = self._inject_to_pane(zellij_session, pane_id, text, send_enter=True)
+        # Inject the actual text first, then submit with delayed Enter.
+        result = self._inject_text_then_enter(zellij_session, pane_id, text)

         if result["ok"]:
             self._send_json(200, {"ok": True})
         else:
             self._send_json(500, {"ok": False, "error": result["error"]})

+    def _inject_text_then_enter(self, zellij_session, pane_id, text):
+        """Send text and trigger Enter in two steps to avoid newline-only races."""
+        result = self._inject_to_pane(zellij_session, pane_id, text, send_enter=False)
+        if not result["ok"]:
+            return result
+
+        time.sleep(self._get_submit_enter_delay_sec())
+        # Send Enter as its own action after the text has landed.
+        return self._inject_to_pane(zellij_session, pane_id, "", send_enter=True)
+
+    def _get_submit_enter_delay_sec(self):
+        raw = os.environ.get("AMC_SUBMIT_ENTER_DELAY_MS", "").strip()
+        if not raw:
+            return self._DEFAULT_SUBMIT_ENTER_DELAY_SEC
+        try:
+            ms = float(raw)
+            if ms < 0:
+                return 0.0
+            if ms > 2000:
+                ms = 2000
+            return ms / 1000.0
+        except ValueError:
+            return self._DEFAULT_SUBMIT_ENTER_DELAY_SEC
+
     def _parse_pane_id(self, zellij_pane):
         """Extract numeric pane ID from various formats."""
         if not zellij_pane:
@@ -127,7 +190,7 @@ class SessionControlMixin:

         # Pane-accurate routing requires the plugin.
         if ZELLIJ_PLUGIN.exists():
-            result = self._try_plugin_inject(env, pane_id, text, send_enter)
+            result = self._try_plugin_inject(env, zellij_session, pane_id, text, send_enter)
             if result["ok"]:
                 return result
             LOGGER.warning(
@@ -142,7 +205,7 @@ class SessionControlMixin:
         # `write-chars` targets whichever pane is focused, which is unsafe for AMC.
         if self._allow_unsafe_write_chars_fallback():
             LOGGER.warning("Using unsafe write-chars fallback (focused pane only)")
-            return self._try_write_chars_inject(env, text, send_enter)
+            return self._try_write_chars_inject(env, zellij_session, text, send_enter)

         return {
             "ok": False,
@@ -156,7 +219,7 @@ class SessionControlMixin:
         value = os.environ.get("AMC_ALLOW_UNSAFE_WRITE_CHARS_FALLBACK", "").strip().lower()
         return value in ("1", "true", "yes", "on")

-    def _try_plugin_inject(self, env, pane_id, text, send_enter=True):
+    def _try_plugin_inject(self, env, zellij_session, pane_id, text, send_enter=True):
         """Try injecting via zellij-send-keys plugin (no focus change)."""
         payload = json.dumps({
             "pane_id": pane_id,
@@ -167,7 +230,9 @@ class SessionControlMixin:
         try:
             result = subprocess.run(
                 [
-                    "zellij",
+                    ZELLIJ_BIN,
+                    "--session",
+                    zellij_session,
                     "action",
                     "pipe",
                     "--plugin",
@@ -194,12 +259,12 @@ class SessionControlMixin:
         except Exception as e:
             return {"ok": False, "error": str(e)}

-    def _try_write_chars_inject(self, env, text, send_enter=True):
+    def _try_write_chars_inject(self, env, zellij_session, text, send_enter=True):
         """Inject via write-chars (UNSAFE: writes to focused pane)."""
         try:
             # Write the text
             result = subprocess.run(
-                ["zellij", "action", "write-chars", text],
+                [ZELLIJ_BIN, "--session", zellij_session, "action", "write-chars", text],
                 env=env,
                 capture_output=True,
                 text=True,
@@ -212,7 +277,7 @@ class SessionControlMixin:
             # Send Enter if requested
             if send_enter:
                 result = subprocess.run(
-                    ["zellij", "action", "write", "13"],  # 13 = Enter
+                    [ZELLIJ_BIN, "--session", zellij_session, "action", "write", "13"],  # 13 = Enter
                     env=env,
                     capture_output=True,
                     text=True,
@@ -227,6 +292,6 @@ class SessionControlMixin:
         except subprocess.TimeoutExpired:
             return {"ok": False, "error": "write-chars timed out"}
         except FileNotFoundError:
-            return {"ok": False, "error": "zellij not found in PATH"}
+            return {"ok": False, "error": f"zellij not found (resolved binary: {ZELLIJ_BIN})"}
         except Exception as e:
             return {"ok": False, "error": str(e)}
```
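The `AMC_SUBMIT_ENTER_DELAY_MS` parsing in the diff above clamps input to a safe range. Extracted as a standalone sketch (the default and the 2000 ms ceiling match the diff; the free function form is for illustration):

```python
DEFAULT_DELAY_SEC = 0.20  # mirrors _DEFAULT_SUBMIT_ENTER_DELAY_SEC


def submit_enter_delay_sec(raw):
    """Parse a millisecond delay string, clamping to the [0, 2000] ms range.

    Empty or unparseable input falls back to the default; negative values
    become zero so a bad config can never make the server sleep oddly.
    """
    raw = (raw or "").strip()
    if not raw:
        return DEFAULT_DELAY_SEC
    try:
        ms = float(raw)
    except ValueError:
        return DEFAULT_DELAY_SEC
    if ms < 0:
        return 0.0
    return min(ms, 2000.0) / 1000.0


print(submit_enter_delay_sec("250"))    # 0.25
print(submit_enter_delay_sec("-5"))     # 0.0
print(submit_enter_delay_sec("99999"))  # 2.0
print(submit_enter_delay_sec("oops"))   # 0.2
```

Clamping at parse time keeps the injection path simple: `_inject_text_then_enter` can call `time.sleep()` with whatever comes back, with no further validation.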
amc_server/mixins/conversation.py:

```diff
@@ -1,7 +1,22 @@
 import json
 import os

-from amc_server.context import EVENTS_DIR
+from amc_server.config import EVENTS_DIR
+
+# Prefixes for system-injected content that appears as user messages
+# but was not typed by the human (hook outputs, system reminders, etc.)
+_SYSTEM_INJECTED_PREFIXES = (
+    "<system-reminder>",
+    "<local-command-caveat>",
+    "<available-deferred-tools>",
+    "<teammate-message",
+)
+
+
+def _is_system_injected(content):
+    """Return True if user message content is system-injected, not human-typed."""
+    stripped = content.lstrip()
+    return stripped.startswith(_SYSTEM_INJECTED_PREFIXES)


 class ConversationMixin:
@@ -39,6 +54,7 @@ class ConversationMixin:
     def _parse_claude_conversation(self, session_id, project_dir):
         """Parse Claude Code JSONL conversation format."""
         messages = []
+        msg_id = 0

         conv_file = self._get_claude_conversation_file(session_id, project_dir)

@@ -56,12 +72,14 @@ class ConversationMixin:
                 if msg_type == "user":
                     content = entry.get("message", {}).get("content", "")
                     # Only include actual human messages (strings), not tool results (arrays)
-                    if content and isinstance(content, str):
+                    if content and isinstance(content, str) and not _is_system_injected(content):
                         messages.append({
+                            "id": f"claude-{session_id[:8]}-{msg_id}",
                             "role": "user",
                             "content": content,
                             "timestamp": entry.get("timestamp", ""),
                         })
+                        msg_id += 1

                 elif msg_type == "assistant":
                     # Assistant messages have structured content
@@ -90,6 +108,7 @@ class ConversationMixin:
                             text_parts.append(part)
                     if text_parts or tool_calls or thinking_parts:
                         msg = {
+                            "id": f"claude-{session_id[:8]}-{msg_id}",
                             "role": "assistant",
                             "content": "\n".join(text_parts) if text_parts else "",
                             "timestamp": entry.get("timestamp", ""),
@@ -99,6 +118,7 @@ class ConversationMixin:
                         if thinking_parts:
                             msg["thinking"] = "\n\n".join(thinking_parts)
                         messages.append(msg)
+                        msg_id += 1

             except json.JSONDecodeError:
                 continue
@@ -108,8 +128,16 @@ class ConversationMixin:
         return messages

     def _parse_codex_conversation(self, session_id):
-        """Parse Codex JSONL conversation format."""
+        """Parse Codex JSONL conversation format.
+
+        Codex uses separate response_items for different content types:
+        - message: user/assistant text messages
+        - function_call: tool invocations (name, arguments, call_id)
+        - reasoning: thinking summaries (encrypted content, visible summary)
+        """
         messages = []
+        pending_tool_calls = []  # Accumulate tool calls to attach to next assistant message
+        msg_id = 0

         conv_file = self._find_codex_transcript_file(session_id)

@@ -123,16 +151,58 @@ class ConversationMixin:
                 if not isinstance(entry, dict):
                     continue

-                # Codex format: type="response_item", payload.type="message"
                 if entry.get("type") != "response_item":
                     continue

                 payload = entry.get("payload", {})
                 if not isinstance(payload, dict):
                     continue
-                if payload.get("type") != "message":

+                payload_type = payload.get("type")
+                timestamp = entry.get("timestamp", "")
+
+                # Handle function_call (tool invocations)
+                if payload_type == "function_call":
+                    tool_call = {
+                        "name": payload.get("name", "unknown"),
+                        "input": self._parse_codex_arguments(payload.get("arguments", "{}")),
+                    }
+                    pending_tool_calls.append(tool_call)
+                    continue
+
+                # Handle reasoning (thinking summaries)
+                if payload_type == "reasoning":
+                    summary_parts = payload.get("summary", [])
+                    if summary_parts:
+                        thinking_text = []
+                        for part in summary_parts:
+                            if isinstance(part, dict) and part.get("type") == "summary_text":
+                                thinking_text.append(part.get("text", ""))
+                        if thinking_text:
+                            # Flush any pending tool calls first
+                            if pending_tool_calls:
+                                messages.append({
+                                    "id": f"codex-{session_id[:8]}-{msg_id}",
+                                    "role": "assistant",
+                                    "content": "",
+                                    "tool_calls": pending_tool_calls,
+                                    "timestamp": timestamp,
+                                })
+                                msg_id += 1
+                                pending_tool_calls = []
+                            # Add thinking as assistant message
+                            messages.append({
+                                "id": f"codex-{session_id[:8]}-{msg_id}",
+                                "role": "assistant",
+                                "content": "",
+                                "thinking": "\n".join(thinking_text),
+                                "timestamp": timestamp,
+                            })
+                            msg_id += 1
+                    continue
+
+                # Handle message (user/assistant text)
+                if payload_type == "message":
+                    role = payload.get("role", "")
+                    content_parts = payload.get("content", [])
+                    if not isinstance(content_parts, list):
@@ -146,7 +216,6 @@ class ConversationMixin:
                     text_parts = []
                     for part in content_parts:
                         if isinstance(part, dict):
-                            # Codex uses "input_text" for user, "output_text" for assistant
                            text = part.get("text", "")
                            if text:
                                # Skip injected context (AGENTS.md, environment, permissions)
@@ -160,16 +229,65 @@ class ConversationMixin:
                                    continue
                                text_parts.append(text)

-                if text_parts and role in ("user", "assistant"):
-                    messages.append({
-                        "role": role,
-                        "content": "\n".join(text_parts),
-                        "timestamp": entry.get("timestamp", ""),
-                    })
+                    if role == "user" and text_parts:
+                        # Flush any pending tool calls before user message
+                        if pending_tool_calls:
+                            messages.append({
+                                "id": f"codex-{session_id[:8]}-{msg_id}",
+                                "role": "assistant",
+                                "content": "",
+                                "tool_calls": pending_tool_calls,
+                                "timestamp": timestamp,
+                            })
+                            msg_id += 1
+                            pending_tool_calls = []
```
|
||||
messages.append({
|
||||
"id": f"codex-{session_id[:8]}-{msg_id}",
|
||||
"role": "user",
|
||||
"content": "\n".join(text_parts),
|
||||
"timestamp": timestamp,
|
||||
})
|
||||
msg_id += 1
|
||||
elif role == "assistant":
|
||||
msg = {
|
||||
"id": f"codex-{session_id[:8]}-{msg_id}",
|
||||
"role": "assistant",
|
||||
"content": "\n".join(text_parts) if text_parts else "",
|
||||
"timestamp": timestamp,
|
||||
}
|
||||
# Attach any pending tool calls to this assistant message
|
||||
if pending_tool_calls:
|
||||
msg["tool_calls"] = pending_tool_calls
|
||||
pending_tool_calls = []
|
||||
if text_parts or msg.get("tool_calls"):
|
||||
messages.append(msg)
|
||||
msg_id += 1
|
||||
|
||||
except json.JSONDecodeError:
|
||||
continue
|
||||
|
||||
# Flush any remaining pending tool calls
|
||||
if pending_tool_calls:
|
||||
messages.append({
|
||||
"id": f"codex-{session_id[:8]}-{msg_id}",
|
||||
"role": "assistant",
|
||||
"content": "",
|
||||
"tool_calls": pending_tool_calls,
|
||||
"timestamp": "",
|
||||
})
|
||||
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
return messages
|
||||
|
||||
def _parse_codex_arguments(self, arguments_str):
|
||||
"""Parse Codex function_call arguments (JSON string or dict)."""
|
||||
if isinstance(arguments_str, dict):
|
||||
return arguments_str
|
||||
if isinstance(arguments_str, str):
|
||||
try:
|
||||
return json.loads(arguments_str)
|
||||
except json.JSONDecodeError:
|
||||
return {"raw": arguments_str}
|
||||
return {}
|
||||
|
||||
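As a standalone sketch (not part of the diff), the argument-normalization behavior of `_parse_codex_arguments` can be exercised like this; the function name here is a hypothetical module-level copy of the method above:

```python
import json


def parse_codex_arguments(arguments):
    """Normalize Codex function_call arguments to a dict
    (standalone version of the _parse_codex_arguments method above)."""
    if isinstance(arguments, dict):
        return arguments
    if isinstance(arguments, str):
        try:
            return json.loads(arguments)
        except json.JSONDecodeError:
            # Preserve unparseable payloads instead of dropping them
            return {"raw": arguments}
    return {}
```

The `{"raw": ...}` fallback means a truncated or malformed JSON string still surfaces in the UI rather than being silently discarded.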
@@ -5,18 +5,99 @@ import subprocess
import time
from datetime import datetime, timezone

-from amc_server.context import (
+from amc_server.agents import (
    CODEX_ACTIVE_WINDOW,
    CODEX_SESSIONS_DIR,
    SESSIONS_DIR,
    _CODEX_CACHE_MAX,
    _codex_pane_cache,
    _codex_transcript_cache,
    _dismissed_codex_ids,
)
+from amc_server.config import SESSIONS_DIR
+from amc_server.spawn_config import PENDING_SPAWNS_DIR
from amc_server.logging_utils import LOGGER


def _parse_session_timestamp(session_ts):
    """Parse Codex session timestamp to Unix time. Returns None on failure."""
    if not session_ts:
        return None
    try:
        # Codex uses ISO format, possibly with Z suffix or +00:00
        ts_str = session_ts.replace('Z', '+00:00')
        dt = datetime.fromisoformat(ts_str)
        return dt.timestamp()
    except (ValueError, TypeError, AttributeError):
        return None

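A standalone sketch of the same Z-suffix handling: `datetime.fromisoformat` only accepts a trailing `Z` from Python 3.11 onward, so rewriting it to an explicit `+00:00` offset (as `_parse_session_timestamp` above does) keeps the parse working on older interpreters too:

```python
from datetime import datetime, timezone


def parse_session_timestamp(session_ts):
    """Parse an ISO-8601 timestamp (optionally Z-suffixed) to Unix time."""
    if not session_ts:
        return None
    try:
        # 'Z' is rewritten to '+00:00' for pre-3.11 fromisoformat compatibility
        return datetime.fromisoformat(session_ts.replace('Z', '+00:00')).timestamp()
    except (ValueError, TypeError, AttributeError):
        return None
```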
def _match_pending_spawn(session_cwd, session_start_ts):
    """Match a Codex session to a pending spawn by CWD and timestamp.

    Args:
        session_cwd: The CWD of the Codex session
        session_start_ts: The session's START timestamp (ISO string from Codex metadata)
            IMPORTANT: Must be session start time, not file mtime, to avoid false
            matches with pre-existing sessions that were recently active.

    Returns:
        spawn_id if matched (and deletes the pending file), None otherwise
    """
    if not PENDING_SPAWNS_DIR.exists():
        return None

    normalized_cwd = os.path.normpath(session_cwd) if session_cwd else ""
    if not normalized_cwd:
        return None

    # Parse session start time - if we can't parse it, we can't safely match
    session_start_unix = _parse_session_timestamp(session_start_ts)
    if session_start_unix is None:
        return None

    try:
        for pending_file in PENDING_SPAWNS_DIR.glob('*.json'):
            try:
                data = json.loads(pending_file.read_text())
                if not isinstance(data, dict):
                    continue

                # Check agent type (only match codex to codex)
                if data.get('agent_type') != 'codex':
                    continue

                # Check CWD match
                pending_path = os.path.normpath(data.get('project_path', ''))
                if normalized_cwd != pending_path:
                    continue

                # Check timing: session must have STARTED after spawn was initiated
                # Using session start time (not mtime) prevents false matches with
                # pre-existing sessions that happen to be recently active
                spawn_ts = data.get('timestamp', 0)
                if session_start_unix < spawn_ts:
                    continue

                # Match found - claim the spawn_id and delete the pending file
                spawn_id = data.get('spawn_id')
                try:
                    pending_file.unlink()
                except OSError:
                    pass
                LOGGER.info(
                    'Matched Codex session (cwd=%s) to pending spawn_id=%s',
                    session_cwd, spawn_id,
                )
                return spawn_id

            except (json.JSONDecodeError, OSError):
                continue
    except OSError:
        pass

    return None


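The matching rule of `_match_pending_spawn` can be distilled into a small predicate. The record shape and field names (`spawn_id`, `project_path`, `agent_type`, `timestamp`) are taken from `_write_pending_spawn` in this diff; the sample values are hypothetical:

```python
import os


def matches_pending_spawn(record, session_cwd, session_start_unix):
    """Return True when a pending-spawn record should be claimed
    by a discovered Codex session (sketch of _match_pending_spawn)."""
    if record.get('agent_type') != 'codex':
        return False
    if os.path.normpath(session_cwd) != os.path.normpath(record.get('project_path', '')):
        return False
    # The session must have STARTED at or after the spawn was initiated
    return session_start_unix >= record.get('timestamp', 0)


# Hypothetical pending-spawn record, as _write_pending_spawn would emit it
record = {
    'spawn_id': 'abc-123',
    'project_path': '/home/user/projects/demo',
    'agent_type': 'codex',
    'timestamp': 1000.0,
}
```

All three conditions are conjunctive, so a pre-existing session in the right directory still fails the timestamp check.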
class SessionDiscoveryMixin:
    def _discover_active_codex_sessions(self):
        """Find active Codex sessions and create/update session files with Zellij pane info."""
@@ -131,6 +212,13 @@ class SessionDiscoveryMixin:
                session_ts = payload.get("timestamp", "")
                last_event_at = datetime.fromtimestamp(mtime, tz=timezone.utc).isoformat()

                # Check for spawn_id: preserve existing, or match to pending spawn
                # Use session_ts (start time) not mtime to avoid false matches
                # with pre-existing sessions that were recently active
                spawn_id = existing.get("spawn_id")
                if not spawn_id:
                    spawn_id = _match_pending_spawn(cwd, session_ts)

                session_data = {
                    "session_id": session_id,
                    "agent": "codex",
@@ -145,6 +233,8 @@ class SessionDiscoveryMixin:
                    "zellij_pane": zellij_pane or existing.get("zellij_pane", ""),
                    "transcript_path": str(jsonl_file),
                }
                if spawn_id:
                    session_data["spawn_id"] = spawn_id
                if context_usage:
                    session_data["context_usage"] = context_usage
                elif existing.get("context_usage"):
@@ -1,7 +1,8 @@
import json
import urllib.parse

-from amc_server.context import DASHBOARD_DIR
+import amc_server.auth as auth
+from amc_server.config import DASHBOARD_DIR
from amc_server.logging_utils import LOGGER


@@ -62,6 +63,19 @@ class HttpMixin:
                    project_dir = ""
                    agent = "claude"
                self._serve_conversation(urllib.parse.unquote(session_id), urllib.parse.unquote(project_dir), agent)
            elif self.path == "/api/skills" or self.path.startswith("/api/skills?"):
                # Parse agent from query params, default to claude
                if "?" in self.path:
                    query = self.path.split("?", 1)[1]
                    params = urllib.parse.parse_qs(query)
                    agent = params.get("agent", ["claude"])[0]
                else:
                    agent = "claude"
                self._serve_skills(agent)
            elif self.path == "/api/projects":
                self._handle_projects()
            elif self.path == "/api/health":
                self._handle_health()
            else:
                self._json_error(404, "Not Found")
        except Exception:
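The `/api/skills` routing above extracts the `agent` query parameter by hand; as a standalone sketch (function name is hypothetical), the same logic looks like this:

```python
import urllib.parse


def agent_from_path(path, default="claude"):
    """Extract ?agent=... from a request path, mirroring the
    /api/skills routing in the hunk above."""
    if "?" not in path:
        return default
    query = path.split("?", 1)[1]
    # parse_qs returns lists of values; take the first occurrence
    params = urllib.parse.parse_qs(query)
    return params.get("agent", [default])[0]
```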
@@ -73,12 +87,18 @@ class HttpMixin:

    def do_POST(self):
        try:
-            if self.path.startswith("/api/dismiss/"):
+            if self.path == "/api/dismiss-dead":
+                self._dismiss_dead_sessions()
+            elif self.path.startswith("/api/dismiss/"):
                session_id = urllib.parse.unquote(self.path[len("/api/dismiss/"):])
                self._dismiss_session(session_id)
            elif self.path.startswith("/api/respond/"):
                session_id = urllib.parse.unquote(self.path[len("/api/respond/"):])
                self._respond_to_session(session_id)
            elif self.path == "/api/spawn":
                self._handle_spawn()
            elif self.path == "/api/projects/refresh":
                self._handle_projects_refresh()
            else:
                self._json_error(404, "Not Found")
        except Exception:
@@ -89,11 +109,12 @@ class HttpMixin:
            pass

    def do_OPTIONS(self):
-        # CORS preflight for respond endpoint
+        # CORS preflight for API endpoints (AC-39: wildcard CORS;
+        # localhost-only binding AC-24 is the real security boundary)
        self.send_response(204)
        self.send_header("Access-Control-Allow-Origin", "*")
-        self.send_header("Access-Control-Allow-Methods", "POST, OPTIONS")
-        self.send_header("Access-Control-Allow-Headers", "Content-Type")
+        self.send_header("Access-Control-Allow-Methods", "GET, POST, OPTIONS")
+        self.send_header("Access-Control-Allow-Headers", "Content-Type, Authorization")
        self.end_headers()

    def _serve_dashboard_file(self, file_path):
@@ -113,7 +134,12 @@ class HttpMixin:
        full_path = DASHBOARD_DIR / file_path
        # Security: ensure path doesn't escape dashboard directory
        full_path = full_path.resolve()
-        if not str(full_path).startswith(str(DASHBOARD_DIR.resolve())):
+        resolved_dashboard = DASHBOARD_DIR.resolve()
+        try:
+            # Use relative_to for robust path containment check
+            # (avoids startswith prefix-match bugs like "/dashboard" vs "/dashboardEVIL")
+            full_path.relative_to(resolved_dashboard)
+        except ValueError:
            self._json_error(403, "Forbidden")
            return

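The `relative_to` containment check introduced above is worth isolating: `str.startswith` on paths accepts siblings that merely share a prefix, while `Path.relative_to` raises `ValueError` unless the candidate is genuinely inside the base. A standalone sketch (function name hypothetical):

```python
from pathlib import Path


def is_contained(base: Path, candidate: Path) -> bool:
    """True iff candidate is lexically inside base.

    Unlike str(candidate).startswith(str(base)), this rejects
    sibling directories such as /srv/dashboardEVIL for /srv/dashboard.
    Both paths should already be resolve()d by the caller.
    """
    try:
        candidate.relative_to(base)
        return True
    except ValueError:
        return False
```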
@@ -121,6 +147,13 @@ class HttpMixin:
        ext = full_path.suffix.lower()
        content_type = content_types.get(ext, "application/octet-stream")

        # Inject auth token into index.html for spawn endpoint security
        if file_path == "index.html" and auth._auth_token:
            content = content.replace(
                b"<!-- AMC_AUTH_TOKEN -->",
                f'<script>window.AMC_AUTH_TOKEN = "{auth._auth_token}";</script>'.encode(),
            )

        # No caching during development
        self._send_bytes_response(
            200,

@@ -2,9 +2,10 @@ import json
import os
from pathlib import Path

-from amc_server.context import (
+from amc_server.agents import (
    CLAUDE_PROJECTS_DIR,
    CODEX_SESSIONS_DIR,
    _CODEX_CACHE_MAX,
    _CONTEXT_CACHE_MAX,
    _codex_transcript_cache,
    _context_usage_cache,
@@ -44,6 +45,11 @@ class SessionParsingMixin:

        try:
            for jsonl_file in CODEX_SESSIONS_DIR.rglob(f"*{session_id}*.jsonl"):
                # Evict old entries if cache is full (simple FIFO)
                if len(_codex_transcript_cache) >= _CODEX_CACHE_MAX:
                    keys_to_remove = list(_codex_transcript_cache.keys())[: _CODEX_CACHE_MAX // 5]
                    for k in keys_to_remove:
                        _codex_transcript_cache.pop(k, None)
                _codex_transcript_cache[session_id] = str(jsonl_file)
                return jsonl_file
        except OSError:

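The eviction scheme above relies on dicts preserving insertion order (guaranteed since Python 3.7), so dropping the first fifth of the keys is a cheap FIFO. A standalone sketch of the same policy with hypothetical names:

```python
CACHE_MAX = 10


def cache_put(cache: dict, key, value, cache_max=CACHE_MAX):
    """Insert into a dict-backed cache with simple FIFO eviction:
    when full, drop the oldest cache_max // 5 entries first."""
    if len(cache) >= cache_max:
        for k in list(cache.keys())[: cache_max // 5]:
            cache.pop(k, None)
    cache[key] = value


cache = {}
for i in range(12):
    cache_put(cache, i, str(i))
```

Eviction in batches (rather than one entry at a time) keeps the hot path from thrashing right at the capacity boundary.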
amc_server/mixins/skills.py (new file, 189 lines)
@@ -0,0 +1,189 @@
"""SkillsMixin: Enumerate available skills for Claude and Codex agents.
|
||||
|
||||
Skills are agent-global (not session-specific), loaded from well-known
|
||||
filesystem locations for each agent type.
|
||||
"""
|
||||
|
||||
import json
|
||||
from pathlib import Path
|
||||
|
||||
|
||||
class SkillsMixin:
|
||||
"""Mixin for enumerating agent skills for autocomplete."""
|
||||
|
||||
def _serve_skills(self, agent: str) -> None:
|
||||
"""Serve autocomplete config for an agent type.
|
||||
|
||||
Args:
|
||||
agent: Agent type ('claude' or 'codex')
|
||||
|
||||
Response JSON:
|
||||
{trigger: '/' or '$', skills: [{name, description}, ...]}
|
||||
"""
|
||||
if agent == "codex":
|
||||
trigger = "$"
|
||||
skills = self._enumerate_codex_skills()
|
||||
else: # Default to claude
|
||||
trigger = "/"
|
||||
skills = self._enumerate_claude_skills()
|
||||
|
||||
# Sort alphabetically by name (case-insensitive)
|
||||
skills.sort(key=lambda s: s["name"].lower())
|
||||
|
||||
self._send_json(200, {"trigger": trigger, "skills": skills})
|
||||
|
||||
def _enumerate_claude_skills(self) -> list[dict]:
|
||||
"""Enumerate Claude skills from ~/.claude/skills/.
|
||||
|
||||
Checks SKILL.md (canonical) first, then falls back to skill.md,
|
||||
prompt.md, README.md for description extraction. Parses YAML
|
||||
frontmatter if present to extract name and description fields.
|
||||
|
||||
Returns:
|
||||
List of {name: str, description: str} dicts.
|
||||
Empty list if directory doesn't exist or enumeration fails.
|
||||
"""
|
||||
skills = []
|
||||
skills_dir = Path.home() / ".claude/skills"
|
||||
|
||||
if not skills_dir.exists():
|
||||
return skills
|
||||
|
||||
for skill_dir in skills_dir.iterdir():
|
||||
if not skill_dir.is_dir() or skill_dir.name.startswith("."):
|
||||
continue
|
||||
|
||||
meta = {"name": "", "description": ""}
|
||||
# Check files in priority order, accumulating metadata
|
||||
# (earlier files take precedence for each field)
|
||||
for md_name in ["SKILL.md", "skill.md", "prompt.md", "README.md"]:
|
||||
md_file = skill_dir / md_name
|
||||
if md_file.exists():
|
||||
try:
|
||||
content = md_file.read_text()
|
||||
parsed = self._parse_frontmatter(content)
|
||||
if not meta["name"] and parsed["name"]:
|
||||
meta["name"] = parsed["name"]
|
||||
if not meta["description"] and parsed["description"]:
|
||||
meta["description"] = parsed["description"]
|
||||
if meta["description"]:
|
||||
break
|
||||
except OSError:
|
||||
pass
|
||||
|
||||
skills.append({
|
||||
"name": meta["name"] or skill_dir.name,
|
||||
"description": meta["description"] or f"Skill: {skill_dir.name}",
|
||||
})
|
||||
|
||||
return skills
|
||||
|
||||
def _parse_frontmatter(self, content: str) -> dict:
|
||||
"""Extract name and description from markdown YAML frontmatter.
|
||||
|
||||
Returns:
|
||||
Dict with 'name' and 'description' keys (both str, may be empty).
|
||||
"""
|
||||
result = {"name": "", "description": ""}
|
||||
lines = content.splitlines()
|
||||
if not lines:
|
||||
return result
|
||||
|
||||
# Check for YAML frontmatter
|
||||
frontmatter_end = 0
|
||||
if lines[0].strip() == "---":
|
||||
for i, line in enumerate(lines[1:], start=1):
|
||||
stripped = line.strip()
|
||||
if stripped == "---":
|
||||
frontmatter_end = i + 1
|
||||
break
|
||||
# Check each known frontmatter field
|
||||
for field in ("name", "description"):
|
||||
if stripped.startswith(f"{field}:"):
|
||||
val = stripped[len(field) + 1:].strip()
|
||||
# Remove quotes if present
|
||||
if val.startswith('"') and val.endswith('"'):
|
||||
val = val[1:-1]
|
||||
elif val.startswith("'") and val.endswith("'"):
|
||||
val = val[1:-1]
|
||||
# Handle YAML multi-line indicators (>- or |-)
|
||||
if val in (">-", "|-", ">", "|", ""):
|
||||
if i + 1 < len(lines):
|
||||
next_line = lines[i + 1].strip()
|
||||
if next_line and not next_line.startswith("---"):
|
||||
val = next_line
|
||||
else:
|
||||
val = ""
|
||||
else:
|
||||
val = ""
|
||||
if val:
|
||||
result[field] = val[:100]
|
||||
|
||||
# Fall back to first meaningful line for description
|
||||
if not result["description"]:
|
||||
for line in lines[frontmatter_end:]:
|
||||
stripped = line.strip()
|
||||
if stripped and not stripped.startswith("#") and not stripped.startswith("<!--") and stripped != "---":
|
||||
result["description"] = stripped[:100]
|
||||
break
|
||||
|
||||
return result
|
||||
|
||||
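The frontmatter handling in `_parse_frontmatter` can be illustrated with a stripped-down version covering only single-line `name:` / `description:` fields with optional quoting (the multi-line `>-`/`|-` handling above is omitted here for brevity); the sample document is hypothetical:

```python
sample = """---
name: code-review
description: "Review a diff for bugs"
---
Body text here.
"""


def parse_frontmatter(content):
    """Minimal sketch of _parse_frontmatter: single-line fields only."""
    result = {"name": "", "description": ""}
    lines = content.splitlines()
    if not lines or lines[0].strip() != "---":
        return result
    for line in lines[1:]:
        stripped = line.strip()
        if stripped == "---":
            break  # end of frontmatter block
        for field in ("name", "description"):
            if stripped.startswith(f"{field}:"):
                # Strip surrounding quotes and cap at 100 chars, as above
                val = stripped[len(field) + 1:].strip().strip('"\'')
                if val:
                    result[field] = val[:100]
    return result
```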
    def _enumerate_codex_skills(self) -> list[dict]:
        """Enumerate Codex skills from cache and user directory.

        Sources:
        - ~/.codex/vendor_imports/skills-curated-cache.json (curated)
        - ~/.codex/skills/*/ (user-installed)

        Note: No deduplication — if curated and user skills share a name,
        both appear in the list (per plan Known Limitations).

        Returns:
            List of {name: str, description: str} dicts.
            Empty list if no skills found.
        """
        skills = []

        # 1. Curated skills from cache
        cache_file = Path.home() / ".codex/vendor_imports/skills-curated-cache.json"
        if cache_file.exists():
            try:
                data = json.loads(cache_file.read_text())
                for skill in data.get("skills", []):
                    # Use 'id' preferentially, fall back to 'name'
                    name = skill.get("id") or skill.get("name", "")
                    # Use 'shortDescription' preferentially, fall back to 'description'
                    desc = skill.get("shortDescription") or skill.get("description", "")
                    if name:
                        skills.append({
                            "name": name,
                            "description": desc[:100] if desc else f"Skill: {name}",
                        })
            except (json.JSONDecodeError, OSError):
                # Continue without curated skills on parse error
                pass

        # 2. User-installed skills
        user_skills_dir = Path.home() / ".codex/skills"
        if user_skills_dir.exists():
            for skill_dir in user_skills_dir.iterdir():
                if not skill_dir.is_dir() or skill_dir.name.startswith("."):
                    continue

                meta = {"name": "", "description": ""}
                # Check SKILL.md for metadata
                skill_md = skill_dir / "SKILL.md"
                if skill_md.exists():
                    try:
                        content = skill_md.read_text()
                        meta = self._parse_frontmatter(content)
                    except OSError:
                        pass

                skills.append({
                    "name": meta["name"] or skill_dir.name,
                    "description": meta["description"] or f"User skill: {skill_dir.name}",
                })

        return skills

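The curated-cache parsing above reads `id`/`name` and `shortDescription`/`description` with fallbacks. A standalone sketch against a hypothetical payload (the field names come from the code above; the skill entries themselves are invented for illustration):

```python
import json

# Hypothetical curated-cache payload exercising both field fallbacks
cache_payload = json.dumps({
    "skills": [
        {"id": "web-search", "shortDescription": "Search the web"},
        {"name": "refactor", "description": "Refactor code safely"},
        {"shortDescription": "entry without a name is skipped"},
    ]
})


def curated_skills(raw):
    """Sketch of the curated-cache branch of _enumerate_codex_skills."""
    skills = []
    for skill in json.loads(raw).get("skills", []):
        name = skill.get("id") or skill.get("name", "")
        desc = skill.get("shortDescription") or skill.get("description", "")
        if name:
            skills.append({
                "name": name,
                "description": desc[:100] if desc else f"Skill: {name}",
            })
    return skills
```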
amc_server/mixins/spawn.py (new file, 360 lines)
@@ -0,0 +1,360 @@
import json
import os
import re
import subprocess
import time
import uuid

from amc_server.auth import validate_auth_token
from amc_server.config import SESSIONS_DIR
from amc_server.spawn_config import (
    PENDING_SPAWNS_DIR, PENDING_SPAWN_TTL,
    PROJECTS_DIR,
    _spawn_lock, _spawn_timestamps, SPAWN_COOLDOWN_SEC,
)
from amc_server.zellij import ZELLIJ_BIN, ZELLIJ_SESSION
from amc_server.logging_utils import LOGGER


def _write_pending_spawn(spawn_id, project_path, agent_type):
    """Write a pending spawn record for later correlation by discovery.

    This enables Codex session correlation since env vars don't propagate
    through Zellij's pane spawn mechanism.
    """
    PENDING_SPAWNS_DIR.mkdir(parents=True, exist_ok=True)
    pending_file = PENDING_SPAWNS_DIR / f'{spawn_id}.json'
    data = {
        'spawn_id': spawn_id,
        'project_path': str(project_path),
        'agent_type': agent_type,
        'timestamp': time.time(),
    }
    try:
        pending_file.write_text(json.dumps(data))
    except OSError:
        LOGGER.warning('Failed to write pending spawn file for %s', spawn_id)


def _cleanup_stale_pending_spawns():
    """Remove pending spawn files older than PENDING_SPAWN_TTL."""
    if not PENDING_SPAWNS_DIR.exists():
        return
    now = time.time()
    try:
        for f in PENDING_SPAWNS_DIR.glob('*.json'):
            try:
                if now - f.stat().st_mtime > PENDING_SPAWN_TTL:
                    f.unlink()
            except OSError:
                continue
    except OSError:
        pass


# Agent commands (AC-8, AC-9: full autonomous permissions)
AGENT_COMMANDS = {
    'claude': ['claude', '--dangerously-skip-permissions'],
    'codex': ['codex', '--dangerously-bypass-approvals-and-sandbox'],
}

# Module-level cache for projects list (AC-33)
_projects_cache: list[str] = []

# Characters unsafe for Zellij pane/tab names: control chars, quotes, backticks
_UNSAFE_PANE_CHARS = re.compile(r'[\x00-\x1f\x7f"\'`]')


def _sanitize_pane_name(name):
    """Sanitize a string for use as a Zellij pane name.

    Replaces control characters and quotes with underscores, collapses runs
    of whitespace into a single space, and truncates to 64 chars.
    """
    name = _UNSAFE_PANE_CHARS.sub('_', name)
    name = re.sub(r'\s+', ' ', name).strip()
    return name[:64] if name else 'unnamed'

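A standalone copy of `_sanitize_pane_name` shows the three passes in order — unsafe-character replacement, whitespace collapsing, length cap:

```python
import re

_UNSAFE = re.compile(r'[\x00-\x1f\x7f"\'`]')


def sanitize_pane_name(name):
    """Standalone sketch of _sanitize_pane_name above."""
    name = _UNSAFE.sub('_', name)          # control chars, quotes, backticks
    name = re.sub(r'\s+', ' ', name).strip()  # collapse whitespace runs
    return name[:64] if name else 'unnamed'
```

Note the order matters: tabs and newlines are control characters, so they become underscores in the first pass before the whitespace pass runs.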
def load_projects_cache():
    """Scan ~/projects/ and cache the list. Called on server start."""
    global _projects_cache
    try:
        projects = []
        for entry in PROJECTS_DIR.iterdir():
            if entry.is_dir() and not entry.name.startswith('.'):
                projects.append(entry.name)
        projects.sort()
        _projects_cache = projects
    except OSError:
        _projects_cache = []


class SpawnMixin:

    def _handle_spawn(self):
        """POST /api/spawn handler."""
        # Verify auth token (AC-38)
        auth_header = self.headers.get('Authorization', '')
        if not validate_auth_token(auth_header):
            self._send_json(401, {'ok': False, 'error': 'Unauthorized', 'code': 'UNAUTHORIZED'})
            return

        # Parse JSON body
        try:
            content_length = int(self.headers.get('Content-Length', 0))
            body = json.loads(self.rfile.read(content_length))
            if not isinstance(body, dict):
                self._json_error(400, 'Invalid JSON body')
                return
        except (json.JSONDecodeError, ValueError):
            self._json_error(400, 'Invalid JSON body')
            return

        project = body.get('project', '')
        agent_type = body.get('agent_type', '')

        # Validate params (returns resolved_path to avoid TOCTOU)
        validation = self._validate_spawn_params(project, agent_type)
        if 'error' in validation:
            self._send_json(400, {
                'ok': False,
                'error': validation['error'],
                'code': validation['code'],
            })
            return

        resolved_path = validation['resolved_path']
        spawn_id = str(uuid.uuid4())

        # Acquire _spawn_lock with 15s timeout
        acquired = _spawn_lock.acquire(timeout=15)
        if not acquired:
            self._send_json(503, {
                'ok': False,
                'error': 'Server busy - another spawn in progress',
                'code': 'SERVER_BUSY',
            })
            return

        try:
            # Check rate limit inside lock
            # Use None sentinel to distinguish "never spawned" from "spawned at time 0"
            # (time.monotonic() can be close to 0 on fresh process start)
            now = time.monotonic()
            last_spawn = _spawn_timestamps.get(project)
            if last_spawn is not None and now - last_spawn < SPAWN_COOLDOWN_SEC:
                remaining = SPAWN_COOLDOWN_SEC - (now - last_spawn)
                self._send_json(429, {
                    'ok': False,
                    'error': f'Rate limited - wait {remaining:.0f}s before spawning in {project}',
                    'code': 'RATE_LIMITED',
                })
                return

            # Execute spawn
            result = self._spawn_agent_in_project_tab(
                project, resolved_path, agent_type, spawn_id,
            )

            # Update timestamp only on success
            if result.get('ok'):
                _spawn_timestamps[project] = time.monotonic()

            status_code = 200 if result.get('ok') else 500
            result['spawn_id'] = spawn_id
            self._send_json(status_code, result)
        finally:
            _spawn_lock.release()

    def _validate_spawn_params(self, project, agent_type):
        """Validate spawn parameters. Returns resolved_path or error dict."""
        if not project or not isinstance(project, str):
            return {'error': 'Project name is required', 'code': 'MISSING_PROJECT'}

        # Reject whitespace-only names
        if not project.strip():
            return {'error': 'Project name is required', 'code': 'MISSING_PROJECT'}

        # Reject null bytes and control characters (U+0000-U+001F, U+007F)
        if '\x00' in project or re.search(r'[\x00-\x1f\x7f]', project):
            return {'error': 'Invalid project name', 'code': 'INVALID_PROJECT'}

        # Reject path traversal characters (/, \, ..)
        if '/' in project or '\\' in project or '..' in project:
            return {'error': 'Invalid project name', 'code': 'INVALID_PROJECT'}

        # Resolve symlinks and verify under PROJECTS_DIR
        candidate = PROJECTS_DIR / project
        try:
            resolved = candidate.resolve()
        except OSError:
            return {'error': f'Project not found: {project}', 'code': 'PROJECT_NOT_FOUND'}

        # Symlink escape check
        try:
            resolved.relative_to(PROJECTS_DIR.resolve())
        except ValueError:
            return {'error': 'Invalid project name', 'code': 'INVALID_PROJECT'}

        if not resolved.is_dir():
            return {'error': f'Project not found: {project}', 'code': 'PROJECT_NOT_FOUND'}

        if agent_type not in AGENT_COMMANDS:
            return {
                'error': f'Invalid agent type: {agent_type}. Must be one of: {", ".join(sorted(AGENT_COMMANDS))}',
                'code': 'INVALID_AGENT_TYPE',
            }

        return {'resolved_path': resolved}

    def _check_zellij_session_exists(self):
        """Check if the target Zellij session exists."""
        try:
            result = subprocess.run(
                [ZELLIJ_BIN, 'list-sessions'],
                capture_output=True,
                text=True,
                timeout=5,
            )
            if result.returncode != 0:
                return False
            # Strip ANSI escape codes (Zellij outputs colored text)
            ansi_pattern = re.compile(r'\x1b\[[0-9;]*m')
            output = ansi_pattern.sub('', result.stdout)
            # Parse line-by-line to avoid substring false positives
            for line in output.splitlines():
                # Zellij outputs "session_name [Created ...]" or just "session_name"
                session_name = line.strip().split()[0] if line.strip() else ''
                if session_name == ZELLIJ_SESSION:
                    return True
            return False
        except FileNotFoundError:
            return False
        except subprocess.TimeoutExpired:
            return False
        except OSError:
            return False

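The ANSI-stripping and first-token comparison in `_check_zellij_session_exists` can be tested without running Zellij; the sample listing below is a hypothetical `zellij list-sessions` output with SGR color codes:

```python
import re

ANSI = re.compile(r'\x1b\[[0-9;]*m')


def session_listed(output, session_name):
    """Sketch of the matching logic in _check_zellij_session_exists:
    strip SGR color codes, then compare the first token of each line
    exactly (avoids substring false positives like 'amc' vs 'amc-dev')."""
    plain = ANSI.sub('', output)
    for line in plain.splitlines():
        first = line.strip().split()[0] if line.strip() else ''
        if first == session_name:
            return True
    return False


# Hypothetical colored listing: "name [Created ...]" per line
listing = "\x1b[32mamc\x1b[0m [Created 2h ago]\namc-dev [Created 1d ago]\n"
```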
    def _wait_for_session_file(self, spawn_id, timeout=10.0):
        """Poll for a session file matching spawn_id."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                for f in SESSIONS_DIR.glob('*.json'):
                    try:
                        data = json.loads(f.read_text())
                        if isinstance(data, dict) and data.get('spawn_id') == spawn_id:
                            return True
                    except (json.JSONDecodeError, OSError):
                        continue
            except OSError:
                pass
            time.sleep(0.25)
        return False

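The polling pattern in `_wait_for_session_file` generalizes to any predicate; a standalone sketch with hypothetical names:

```python
import time


def wait_until(predicate, timeout=1.0, interval=0.05):
    """Poll `predicate` until it returns truthy or `timeout` elapses.

    The deadline is computed once from time.monotonic(), so the loop is
    immune to wall-clock adjustments — same pattern as
    _wait_for_session_file above.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return True
        time.sleep(interval)
    return False
```

One subtlety shared with the method above: the predicate is not re-checked after the deadline passes, so callers get at most `timeout` plus one `interval` of latency, never a late false positive.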
def _spawn_agent_in_project_tab(self, project, project_path, agent_type, spawn_id):
|
||||
"""Spawn an agent in a project-named Zellij tab."""
|
||||
# Clean up stale pending spawns opportunistically
|
||||
_cleanup_stale_pending_spawns()
|
||||
|
||||
# For Codex, write pending spawn record before launching.
|
||||
# Zellij doesn't propagate env vars to pane commands, so discovery
|
||||
# will match the session to this record by CWD + timestamp.
|
||||
# (Claude doesn't need this - amc-hook writes spawn_id directly)
|
||||
if agent_type == 'codex':
|
||||
_write_pending_spawn(spawn_id, project_path, agent_type)
|
||||
|
||||
# Check session exists
|
||||
if not self._check_zellij_session_exists():
|
||||
return {
|
||||
'ok': False,
|
||||
'error': f'Zellij session "{ZELLIJ_SESSION}" not found',
|
||||
'code': 'SESSION_NOT_FOUND',
|
||||
}
|
||||
|
||||
# Create/switch to project tab
|
||||
try:
|
||||
result = subprocess.run(
|
||||
[
|
||||
ZELLIJ_BIN, '--session', ZELLIJ_SESSION,
|
||||
'action', 'go-to-tab-name', '--create', project,
|
||||
],
|
||||
capture_output=True,
|
||||
text=True,
|
||||
timeout=5,
|
||||
)
|
||||
if result.returncode != 0:
|
||||
return {
|
||||
'ok': False,
|
||||
'error': f'Failed to create tab: {result.stderr.strip() or "unknown error"}',
|
||||
'code': 'TAB_ERROR',
|
||||
}
|
||||
except FileNotFoundError:
|
||||
return {'ok': False, 'error': f'Zellij not found at {ZELLIJ_BIN}', 'code': 'ZELLIJ_NOT_FOUND'}
|
||||
except subprocess.TimeoutExpired:
|
||||
return {'ok': False, 'error': 'Zellij tab creation timed out', 'code': 'TIMEOUT'}
|
||||
except OSError as e:
|
||||
return {'ok': False, 'error': str(e), 'code': 'SPAWN_ERROR'}
|
||||
|
||||
        # Build agent command
        agent_cmd = AGENT_COMMANDS[agent_type]
        pane_name = _sanitize_pane_name(f'{agent_type}-{project}')

        # Spawn pane with agent command
        env = os.environ.copy()
        env['AMC_SPAWN_ID'] = spawn_id

        try:
            result = subprocess.run(
                [
                    ZELLIJ_BIN, '--session', ZELLIJ_SESSION,
                    'action', 'new-pane',
                    '--name', pane_name,
                    '--cwd', str(project_path),
                    '--',
                ] + agent_cmd,
                env=env,
                capture_output=True,
                text=True,
                timeout=5,
            )
            if result.returncode != 0:
                return {
                    'ok': False,
                    'error': f'Failed to spawn pane: {result.stderr.strip() or "unknown error"}',
                    'code': 'SPAWN_ERROR',
                }
        except FileNotFoundError:
            return {'ok': False, 'error': f'Zellij not found at {ZELLIJ_BIN}', 'code': 'ZELLIJ_NOT_FOUND'}
        except subprocess.TimeoutExpired:
            return {'ok': False, 'error': 'Pane spawn timed out', 'code': 'TIMEOUT'}
        except OSError as e:
            return {'ok': False, 'error': str(e), 'code': 'SPAWN_ERROR'}

        # Wait for session file to appear
        found = self._wait_for_session_file(spawn_id)
        if not found:
            LOGGER.warning(
                'Session file not found for spawn_id=%s after timeout (agent may still be starting)',
                spawn_id,
            )

        return {'ok': True, 'session_file_found': found}

    def _handle_projects(self):
        """GET /api/projects - return cached projects list."""
        self._send_json(200, {'ok': True, 'projects': list(_projects_cache)})

    def _handle_projects_refresh(self):
        """POST /api/projects/refresh - refresh and return projects list."""
        load_projects_cache()
        self._send_json(200, {'ok': True, 'projects': list(_projects_cache)})

    def _handle_health(self):
        """GET /api/health - check server and Zellij status."""
        zellij_ok = self._check_zellij_session_exists()
        self._send_json(200, {
            'ok': True,
            'zellij_session': ZELLIJ_SESSION,
            'zellij_available': zellij_ok,
        })
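The pane name passed to `--name` comes from `_sanitize_pane_name`, whose body is not part of this diff. A minimal sketch of the kind of normalization such a helper typically performs — the regex, the length cap, and the `'agent'` fallback are all assumptions for illustration, not the project's actual rules:

```python
import re

def sanitize_pane_name(name: str, max_len: int = 32) -> str:
    # Keep letters, digits, dash, underscore; collapse everything else to '-'
    cleaned = re.sub(r'[^A-Za-z0-9_-]+', '-', name).strip('-')
    # Cap the length and never return an empty name
    return cleaned[:max_len] or 'agent'
```

For example, `sanitize_pane_name('claude-my project!')` yields `claude-my-project`, so the Zellij CLI never sees spaces or shell-hostile characters in the pane name.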
@@ -2,16 +2,18 @@ import hashlib
 import json
 import subprocess
 import time
 from collections import defaultdict
 from datetime import datetime, timezone
 from pathlib import Path

-from amc_server.context import (
+from amc_server.config import (
     EVENTS_DIR,
     SESSIONS_DIR,
     STALE_EVENT_AGE,
     STALE_STARTING_AGE,
     _state_lock,
     _zellij_cache,
 )
+from amc_server.zellij import ZELLIJ_BIN, _zellij_cache
 from amc_server.logging_utils import LOGGER
@@ -98,6 +100,9 @@ class StateMixin:
         # Get active Zellij sessions for liveness check
         active_zellij_sessions = self._get_active_zellij_sessions()

+        # Get set of transcript files with active processes (for dead detection)
+        active_transcript_files = self._get_active_transcript_files()

         for f in SESSIONS_DIR.glob("*.json"):
             try:
                 data = json.loads(f.read_text())
@@ -118,6 +123,31 @@ class StateMixin:
                 if context_usage:
                     data["context_usage"] = context_usage

+                # Capture turn token baseline on UserPromptSubmit (for per-turn token display)
+                # Only write once when the turn starts and we have token data
+                if (
+                    data.get("last_event") == "UserPromptSubmit"
+                    and "turn_start_tokens" not in data
+                    and context_usage
+                    and context_usage.get("current_tokens") is not None
+                ):
+                    data["turn_start_tokens"] = context_usage["current_tokens"]
+                    # Persist to session file so it survives server restarts
+                    try:
+                        f.write_text(json.dumps(data, indent=2))
+                    except OSError:
+                        pass

+                # Track conversation file mtime for real-time update detection
+                conv_mtime = self._get_conversation_mtime(data)
+                if conv_mtime:
+                    data["conversation_mtime_ns"] = conv_mtime

+                # Determine if session is "dead" (no longer interactable)
+                data["is_dead"] = self._is_session_dead(
+                    data, active_zellij_sessions, active_transcript_files
+                )

                 sessions.append(data)
             except (json.JSONDecodeError, OSError):
                 continue
||||
@@ -125,8 +155,11 @@ class StateMixin:
                 LOGGER.exception("Failed processing session file %s", f)
                 continue

-        # Sort by last_event_at descending
-        sessions.sort(key=lambda s: s.get("last_event_at", ""), reverse=True)
+        # Sort by session_id for stable, deterministic ordering (no visual jumping)
+        sessions.sort(key=lambda s: s.get("session_id", ""))

+        # Dedupe same-pane sessions (handles --resume creating orphan + real session)
+        sessions = self._dedupe_same_pane_sessions(sessions)

         # Clean orphan event logs (sessions persist until manually dismissed or SessionEnd)
         self._cleanup_stale(sessions)
@@ -143,7 +176,7 @@ class StateMixin:

         try:
             result = subprocess.run(
-                ["zellij", "list-sessions", "--no-formatting"],
+                [ZELLIJ_BIN, "list-sessions", "--no-formatting"],
                 capture_output=True,
                 text=True,
                 timeout=2,
@@ -165,6 +198,165 @@ class StateMixin:

        return None  # Return None on error (don't clean up if we can't verify)
    def _get_conversation_mtime(self, session_data):
        """Get the conversation file's mtime for real-time change detection."""
        agent = session_data.get("agent")

        if agent == "claude":
            conv_file = self._get_claude_conversation_file(
                session_data.get("session_id", ""),
                session_data.get("project_dir", ""),
            )
            if conv_file:
                try:
                    return conv_file.stat().st_mtime_ns
                except OSError:
                    pass

        elif agent == "codex":
            transcript_path = session_data.get("transcript_path", "")
            if transcript_path:
                try:
                    return Path(transcript_path).stat().st_mtime_ns
                except OSError:
                    pass
            # Fallback to discovery
            transcript_file = self._find_codex_transcript_file(session_data.get("session_id", ""))
            if transcript_file:
                try:
                    return transcript_file.stat().st_mtime_ns
                except OSError:
                    pass

        return None
    def _get_active_transcript_files(self):
        """Get set of transcript files that have active processes.

        Uses a batched lsof call to efficiently check which Codex transcript
        files are currently open by a process.

        Returns:
            set: Absolute paths of transcript files with active processes.
        """
        from amc_server.agents import CODEX_SESSIONS_DIR

        if not CODEX_SESSIONS_DIR.exists():
            return set()

        # Find all recent transcript files
        transcript_files = []
        now = time.time()
        cutoff = now - 3600  # Only check files modified in the last hour

        for jsonl_file in CODEX_SESSIONS_DIR.rglob("*.jsonl"):
            try:
                if jsonl_file.stat().st_mtime > cutoff:
                    transcript_files.append(str(jsonl_file))
            except OSError:
                continue

        if not transcript_files:
            return set()

        # Batch lsof check for all transcript files
        active_files = set()
        try:
            # lsof with multiple files: exits 0 if any of them is open,
            # but does not say which, so we re-check candidates individually
            result = subprocess.run(
                ["lsof", "-t"] + transcript_files,
                capture_output=True,
                text=True,
                timeout=5,
            )
            if result.returncode == 0 and result.stdout.strip():
                # At least one file is open - check each one
                for tf in transcript_files:
                    try:
                        check = subprocess.run(
                            ["lsof", "-t", tf],
                            capture_output=True,
                            text=True,
                            timeout=2,
                        )
                        if check.returncode == 0 and check.stdout.strip():
                            active_files.add(tf)
                    except Exception:  # TimeoutExpired included; a tuple listing it alongside Exception was redundant
                        continue
        except Exception:  # covers TimeoutExpired and FileNotFoundError
            pass

        return active_files
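The one-hour mtime cutoff above can be exercised in isolation. This is a self-contained sketch of the same filter; the `recent_files` helper name is invented for illustration, and the demo backdates a file with `os.utime` to simulate a stale transcript:

```python
import os
import tempfile
import time
from pathlib import Path


def recent_files(root: Path, max_age_sec: float = 3600) -> list[str]:
    # Mirror of the diff's filter: keep *.jsonl files modified within the window
    cutoff = time.time() - max_age_sec
    out = []
    for f in root.rglob('*.jsonl'):
        try:
            if f.stat().st_mtime > cutoff:
                out.append(str(f))
        except OSError:
            continue
    return out


with tempfile.TemporaryDirectory() as d:
    fresh = Path(d) / 'fresh.jsonl'
    stale = Path(d) / 'stale.jsonl'
    fresh.write_text('{}')
    stale.write_text('{}')
    # Backdate the stale file by two hours so it falls outside the window
    old = time.time() - 7200
    os.utime(stale, (old, old))
    found = recent_files(Path(d))
```

Only the freshly written file survives the cutoff, which is what keeps the subsequent `lsof` batch small.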
    def _is_session_dead(self, session_data, active_zellij_sessions, active_transcript_files):
        """Determine if a session is 'dead' (no longer interactable).

        A dead session cannot receive input and won't produce more output.
        These should be shown separately from active sessions in the UI.

        Args:
            session_data: The session dict
            active_zellij_sessions: Set of active zellij session names (or None)
            active_transcript_files: Set of transcript file paths with active processes

        Returns:
            bool: True if the session is dead
        """
        agent = session_data.get("agent")
        zellij_session = session_data.get("zellij_session", "")
        status = session_data.get("status", "")

        # Sessions that are still starting are not dead (yet)
        if status == "starting":
            return False

        if agent == "codex":
            # Codex session is dead if no process has the transcript file open
            transcript_path = session_data.get("transcript_path", "")
            if not transcript_path:
                return True  # No transcript path = malformed, treat as dead

            # Check cached set first (covers recently-modified files)
            if transcript_path in active_transcript_files:
                return False  # Process is running

            # For older files not in cached set, do explicit lsof check
            # This handles long-idle but still-running processes
            if self._is_file_open(transcript_path):
                return False  # Process is running

            # No process running - it's dead
            return True

        elif agent == "claude":
            # Claude session is dead if:
            # 1. No zellij session attached, OR
            # 2. The zellij session no longer exists
            if not zellij_session:
                return True
            if active_zellij_sessions is not None:
                return zellij_session not in active_zellij_sessions
            # If we couldn't query zellij, assume alive (don't false-positive)
            return False

        # Unknown agent type - assume alive
        return False
    def _is_file_open(self, file_path):
        """Check if any process has a file open, using lsof."""
        try:
            result = subprocess.run(
                ["lsof", "-t", file_path],
                capture_output=True,
                text=True,
                timeout=2,
            )
            return result.returncode == 0 and bool(result.stdout.strip())
        except Exception:  # covers TimeoutExpired and FileNotFoundError
            return False  # Assume not open on error (conservative)
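The liveness rules in `_is_session_dead` reduce to a small decision table, which is easiest to check as a pure function. This is a standalone restatement of the same rules; the injectable `file_is_open` parameter replaces the instance's `lsof` check so the logic can run without spawning processes, and is an assumption for testability, not part of the diff:

```python
def is_session_dead(session, active_zellij_sessions, active_transcript_files,
                    file_is_open=lambda path: False):
    """Pure-function restatement of the dead-session rules above."""
    # Sessions that are still starting are never dead
    if session.get('status') == 'starting':
        return False
    agent = session.get('agent')
    if agent == 'codex':
        path = session.get('transcript_path', '')
        if not path:
            return True  # malformed: no transcript path
        if path in active_transcript_files:
            return False  # cached lsof batch saw a live process
        return not file_is_open(path)  # fall back to a direct check
    if agent == 'claude':
        zs = session.get('zellij_session', '')
        if not zs:
            return True
        if active_zellij_sessions is not None:
            return zs not in active_zellij_sessions
        return False  # could not query zellij: assume alive
    return False  # unknown agent: assume alive
```

The conservative branches (assume alive when zellij could not be queried, assume dead only when no process holds the transcript) match the comments in the diff.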
    def _cleanup_stale(self, sessions):
        """Remove orphan event logs >24h and stale 'starting' sessions >1h."""
        active_ids = {s.get("session_id") for s in sessions if s.get("session_id")}
@@ -192,3 +384,56 @@ class StateMixin:
                f.unlink()
            except (json.JSONDecodeError, OSError):
                pass
    def _dedupe_same_pane_sessions(self, sessions):
        """Remove orphan sessions when multiple sessions share the same Zellij pane.

        This handles the --resume edge case where Claude creates a new session file
        before resuming the old one, leaving an orphan with no context_usage.

        When multiple sessions share (zellij_session, zellij_pane), keep the one with:
        1. context_usage (has actual conversation data)
        2. Higher conversation_mtime_ns (more recent activity)
        """

        def session_score(s):
            """Score a session for dedup ranking: (has_context, mtime)."""
            has_context = 1 if s.get("context_usage") else 0
            mtime = s.get("conversation_mtime_ns") or 0
            # Defensive: ensure mtime is numeric
            if not isinstance(mtime, (int, float)):
                mtime = 0
            return (has_context, mtime)

        # Group sessions by pane
        pane_groups = defaultdict(list)
        for s in sessions:
            zs = s.get("zellij_session", "")
            zp = s.get("zellij_pane", "")
            if zs and zp:
                pane_groups[(zs, zp)].append(s)

        # Find orphans to remove
        orphan_ids = set()
        for group in pane_groups.values():
            if len(group) <= 1:
                continue

            # Pick the best session: prefer context_usage, then highest mtime
            group_sorted = sorted(group, key=session_score, reverse=True)

            # Mark all but the best as orphans
            for s in group_sorted[1:]:
                session_id = s.get("session_id")
                if not session_id:
                    continue  # Skip sessions without valid IDs
                orphan_ids.add(session_id)
                # Also delete the orphan session file
                try:
                    orphan_file = SESSIONS_DIR / f"{session_id}.json"
                    orphan_file.unlink(missing_ok=True)
                except OSError:
                    pass

        # Return filtered list
        return [s for s in sessions if s.get("session_id") not in orphan_ids]
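The keep-the-best-per-pane logic can be demonstrated on plain dicts. A standalone sketch of the same scoring and grouping; `best_per_pane` is a hypothetical name, and unlike the real method it does not unlink orphan session files:

```python
from collections import defaultdict


def best_per_pane(sessions):
    # Keep one session per (zellij_session, zellij_pane):
    # prefer sessions with context_usage, then the newest conversation mtime
    def score(s):
        has_context = 1 if s.get('context_usage') else 0
        mtime = s.get('conversation_mtime_ns') or 0
        if not isinstance(mtime, (int, float)):
            mtime = 0
        return (has_context, mtime)

    groups = defaultdict(list)
    for s in sessions:
        key = (s.get('zellij_session', ''), s.get('zellij_pane', ''))
        if all(key):
            groups[key].append(s)

    orphans = set()
    for group in groups.values():
        # Everything after the top-ranked session in a pane is an orphan
        for s in sorted(group, key=score, reverse=True)[1:]:
            orphans.add(s.get('session_id'))
    return [s for s in sessions if s.get('session_id') not in orphans]


sessions = [
    {'session_id': 'orphan', 'zellij_session': 'infra', 'zellij_pane': '3'},
    {'session_id': 'real', 'zellij_session': 'infra', 'zellij_pane': '3',
     'context_usage': {'current_tokens': 10}, 'conversation_mtime_ns': 2},
]
kept = best_per_pane(sessions)
```

The `--resume` orphan (no `context_usage`, no mtime) scores `(0, 0)` and loses to the real session's `(1, 2)`, so only `real` survives.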
@@ -1,15 +1,24 @@
 import os
 from http.server import ThreadingHTTPServer

-from amc_server.context import DATA_DIR, PORT
+from amc_server.auth import generate_auth_token
+from amc_server.config import DATA_DIR, PORT
+from amc_server.spawn_config import start_projects_watcher
 from amc_server.handler import AMCHandler
 from amc_server.logging_utils import LOGGER, configure_logging, install_signal_handlers
+from amc_server.mixins.spawn import load_projects_cache


 def main():
     configure_logging()
     DATA_DIR.mkdir(parents=True, exist_ok=True)
     LOGGER.info("Starting AMC server")

+    # Initialize spawn feature
+    load_projects_cache()
+    generate_auth_token()
+    start_projects_watcher()

     server = ThreadingHTTPServer(("127.0.0.1", PORT), AMCHandler)
     install_signal_handlers(server)
     LOGGER.info("AMC server listening on http://127.0.0.1:%s", PORT)
amc_server/spawn_config.py (new file, 40 lines)
@@ -0,0 +1,40 @@
"""Spawn feature config: paths, locks, rate limiting, projects watcher."""

import threading
from pathlib import Path

from amc_server.config import DATA_DIR

# Pending spawn registry
PENDING_SPAWNS_DIR = DATA_DIR / "pending_spawns"

# Pending spawn TTL: how long to keep unmatched spawn records (seconds)
PENDING_SPAWN_TTL = 60

# Projects directory for spawning agents
PROJECTS_DIR = Path.home() / 'projects'

# Lock for serializing spawn operations (prevents Zellij race conditions)
_spawn_lock = threading.Lock()

# Rate limiting: track last spawn time per project (prevents spam)
_spawn_timestamps: dict[str, float] = {}
SPAWN_COOLDOWN_SEC = 10.0


def start_projects_watcher():
    """Start background thread to refresh projects cache every 5 minutes."""
    import logging
    from amc_server.mixins.spawn import load_projects_cache

    def _watch_loop():
        import time
        while True:
            try:
                time.sleep(300)
                load_projects_cache()
            except Exception:
                logging.exception('Projects cache refresh failed')

    thread = threading.Thread(target=_watch_loop, daemon=True)
    thread.start()
amc_server/zellij.py (new file, 34 lines)
@@ -0,0 +1,34 @@
"""Zellij integration: binary resolution, plugin path, session name, cache."""

import shutil
from pathlib import Path

# Plugin path for zellij-send-keys
ZELLIJ_PLUGIN = Path.home() / ".config" / "zellij" / "plugins" / "zellij-send-keys.wasm"


def _resolve_zellij_bin():
    """Resolve the zellij binary even when PATH is minimal (e.g. under launchctl)."""
    from_path = shutil.which("zellij")
    if from_path:
        return from_path

    common_paths = (
        "/opt/homebrew/bin/zellij",  # Apple Silicon Homebrew
        "/usr/local/bin/zellij",  # Intel Homebrew
        "/usr/bin/zellij",
    )
    for candidate in common_paths:
        p = Path(candidate)
        if p.exists() and p.is_file():
            return str(p)
    return "zellij"  # Fallback so subprocess reports an explicit error


ZELLIJ_BIN = _resolve_zellij_bin()

# Default Zellij session for spawning
ZELLIJ_SESSION = 'infra'

# Cache for Zellij session list (avoid calling zellij on every request)
_zellij_cache = {"sessions": None, "expires": 0}
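The resolution order above — PATH first, then well-known Homebrew locations, then the bare name — can be sketched with the lookups injected, so the fallback path is testable without Zellij installed. The `resolve_bin` helper and its parameters are illustrative, not part of the codebase:

```python
import shutil
from pathlib import Path


def resolve_bin(name, fallbacks, which=shutil.which,
                exists=lambda p: Path(p).is_file()):
    # Same shape as _resolve_zellij_bin, with PATH lookup and
    # filesystem checks injectable for testing
    found = which(name)
    if found:
        return found
    for candidate in fallbacks:
        if exists(candidate):
            return candidate
    return name  # let subprocess surface a clear FileNotFoundError


# Simulate a minimal PATH (as under launchctl) where which() finds nothing
path = resolve_bin('zellij', ['/opt/homebrew/bin/zellij'],
                   which=lambda n: None,
                   exists=lambda p: p == '/opt/homebrew/bin/zellij')
```

Returning the bare name as a last resort is a deliberate choice: a later `subprocess.run` raises `FileNotFoundError`, which the spawn handler maps to a `ZELLIJ_NOT_FOUND` error payload.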
bin/amc-hook (39 lines changed)
@@ -82,10 +82,14 @@ def _extract_questions(hook):
             "options": [],
         }
         for opt in q.get("options", []):
-            entry["options"].append({
+            opt_entry = {
                 "label": opt.get("label", ""),
                 "description": opt.get("description", ""),
-            })
+            }
+            # Include markdown preview if present
+            if opt.get("markdown"):
+                opt_entry["markdown"] = opt.get("markdown")
+            entry["options"].append(opt_entry)
         result.append(entry)
     return result
@@ -166,6 +170,8 @@ def main():
         existing["last_event"] = f"PreToolUse({tool_name})"
         existing["last_event_at"] = now
         existing["pending_questions"] = _extract_questions(hook)
+        # Track when turn paused for duration calculation
+        existing["turn_paused_at"] = now
         _atomic_write(session_file, existing)
         _append_event(session_id, {
             "event": f"PreToolUse({tool_name})",
@@ -185,6 +191,16 @@ def main():
         existing["last_event"] = f"PostToolUse({tool_name})"
         existing["last_event_at"] = now
         existing.pop("pending_questions", None)
+        # Accumulate paused time for turn duration calculation
+        paused_at = existing.pop("turn_paused_at", None)
+        if paused_at:
+            try:
+                paused_start = datetime.fromisoformat(paused_at.replace("Z", "+00:00"))
+                paused_end = datetime.fromisoformat(now.replace("Z", "+00:00"))
+                paused_ms = int((paused_end - paused_start).total_seconds() * 1000)
+                existing["turn_paused_ms"] = existing.get("turn_paused_ms", 0) + paused_ms
+            except (ValueError, TypeError):
+                pass
         _atomic_write(session_file, existing)
         _append_event(session_id, {
             "event": f"PostToolUse({tool_name})",
@@ -233,6 +249,25 @@ def main():
         "zellij_pane": os.environ.get("ZELLIJ_PANE_ID", ""),
     }

+    # Include spawn_id if present in environment (for spawn correlation)
+    spawn_id = os.environ.get("AMC_SPAWN_ID")
+    if spawn_id:
+        state["spawn_id"] = spawn_id

+    # Turn timing: track working time from user prompt to completion
+    if event == "UserPromptSubmit":
+        # New turn starting - reset turn timing
+        state["turn_started_at"] = now
+        state["turn_paused_ms"] = 0
+    else:
+        # Preserve turn timing from existing state
+        if "turn_started_at" in existing:
+            state["turn_started_at"] = existing["turn_started_at"]
+        if "turn_paused_ms" in existing:
+            state["turn_paused_ms"] = existing["turn_paused_ms"]
+        if "turn_paused_at" in existing:
+            state["turn_paused_at"] = existing["turn_paused_at"]

     # Store prose question if detected
     if prose_question:
         state["pending_questions"] = [{
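The paused-time arithmetic in the PostToolUse branch hinges on one detail: the hook stores UTC timestamps with a trailing `Z`, and `datetime.fromisoformat` only accepts that suffix from Python 3.11 on, hence the `replace("Z", "+00:00")`. A minimal worked example of the same computation:

```python
from datetime import datetime


def iso_to_dt(ts: str) -> datetime:
    # Normalize the trailing Z so fromisoformat parses it on older Pythons too
    return datetime.fromisoformat(ts.replace('Z', '+00:00'))


paused_at = '2024-05-01T12:00:00Z'       # set by PreToolUse
now = '2024-05-01T12:00:02.500000Z'      # PostToolUse fires 2.5s later
paused_ms = int((iso_to_dt(now) - iso_to_dt(paused_at)).total_seconds() * 1000)
# paused_ms == 2500
```

Each PreToolUse/PostToolUse pair adds one such interval to `turn_paused_ms`, which the dashboard later subtracts from wall-clock time to show pure working time.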
bin/amc-server-restart (new executable file, 29 lines)
@@ -0,0 +1,29 @@
#!/usr/bin/env bash
# Restart the AMC server cleanly

set -e

PORT=7400

# Find and kill existing server
PID=$(lsof -ti :$PORT 2>/dev/null || true)
if [[ -n "$PID" ]]; then
    echo "Stopping AMC server (PID $PID)..."
    kill "$PID" 2>/dev/null || true
    sleep 1
fi

# Start server in background
echo "Starting AMC server on port $PORT..."
cd "$(dirname "$0")/.."
nohup python3 -m amc_server.server > /tmp/amc-server.log 2>&1 &

# Wait for startup
sleep 1
NEW_PID=$(lsof -ti :$PORT 2>/dev/null || true)
if [[ -n "$NEW_PID" ]]; then
    echo "AMC server running (PID $NEW_PID)"
else
    echo "Failed to start server. Check /tmp/amc-server.log"
    exit 1
fi
File diff suppressed because it is too large

dashboard.html (1470 lines changed; diff suppressed because it is too large)

dashboard/components/AgentActivityIndicator.js (new file, 115 lines)
@@ -0,0 +1,115 @@
import { html, useState, useEffect, useRef } from '../lib/preact.js';

/**
 * Shows live agent activity: elapsed time since user prompt, token usage.
 * Visible when session is active/starting, pauses during needs_attention,
 * shows final duration when done.
 *
 * @param {object} session - Session object with turn_started_at, turn_paused_at, turn_paused_ms, status
 */
export function AgentActivityIndicator({ session }) {
  const [elapsed, setElapsed] = useState(0);
  const intervalRef = useRef(null);

  // Safely extract session fields (handles null/undefined session)
  const status = session?.status;
  const turn_started_at = session?.turn_started_at;
  const turn_paused_at = session?.turn_paused_at;
  const turn_paused_ms = session?.turn_paused_ms ?? 0;
  const last_event_at = session?.last_event_at;
  const context_usage = session?.context_usage;
  const turn_start_tokens = session?.turn_start_tokens;

  // Only show for sessions with turn timing
  const hasTurnTiming = !!turn_started_at;
  const isActive = status === 'active' || status === 'starting';
  const isPaused = status === 'needs_attention';
  const isDone = status === 'done';

  useEffect(() => {
    if (!hasTurnTiming) return;

    const calculate = () => {
      const startMs = new Date(turn_started_at).getTime();
      const pausedMs = turn_paused_ms || 0;

      if (isActive) {
        // Running: current time - start - paused
        return Date.now() - startMs - pausedMs;
      } else if (isPaused && turn_paused_at) {
        // Paused: frozen at pause time
        const pausedAtMs = new Date(turn_paused_at).getTime();
        return pausedAtMs - startMs - pausedMs;
      } else if (isDone && last_event_at) {
        // Done: final duration
        const endMs = new Date(last_event_at).getTime();
        return endMs - startMs - pausedMs;
      }
      return 0;
    };

    setElapsed(calculate());

    // Only tick while active
    if (isActive) {
      intervalRef.current = setInterval(() => {
        setElapsed(calculate());
      }, 1000);
    }

    return () => {
      if (intervalRef.current) {
        clearInterval(intervalRef.current);
        intervalRef.current = null;
      }
    };
  }, [hasTurnTiming, isActive, isPaused, isDone, turn_started_at, turn_paused_at, turn_paused_ms, last_event_at]);

  // Don't render if no turn timing or session is done with no activity
  if (!hasTurnTiming) return null;

  // Format elapsed time (clamp to 0 for safety)
  const formatElapsed = (ms) => {
    const totalSec = Math.max(0, Math.floor(ms / 1000));
    if (totalSec < 60) return `${totalSec}s`;
    const min = Math.floor(totalSec / 60);
    const sec = totalSec % 60;
    return `${min}m ${sec}s`;
  };

  // Format token count
  const formatTokens = (count) => {
    if (count == null) return null;
    if (count >= 1000) return `${(count / 1000).toFixed(1)}k`;
    return String(count);
  };

  // Calculate turn tokens (current - baseline from turn start)
  const currentTokens = context_usage?.current_tokens;
  const turnTokens = (currentTokens != null && turn_start_tokens != null)
    ? Math.max(0, currentTokens - turn_start_tokens)
    : null;
  const tokenDisplay = formatTokens(turnTokens);

  return html`
    <div class="flex items-center gap-2 font-mono text-label">
      ${isActive && html`
        <span class="activity-spinner"></span>
      `}
      ${isPaused && html`
        <span class="h-2 w-2 rounded-full bg-attention"></span>
      `}
      ${isDone && html`
        <span class="h-2 w-2 rounded-full bg-done"></span>
      `}
      <span class="text-dim">
        ${isActive ? 'Working' : isPaused ? 'Paused' : 'Completed'}
      </span>
      <span class="text-bright tabular-nums">${formatElapsed(elapsed)}</span>
      ${tokenDisplay && html`
        <span class="text-dim/70">·</span>
        <span class="text-dim/90">${tokenDisplay} tokens</span>
      `}
    </div>
  `;
}
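The component's `formatElapsed`/`formatTokens` helpers are plain arithmetic and translate directly. A Python restatement of the same formatting rules (names adapted to snake_case; behavior intended to match the JS, including the clamp to zero):

```python
def format_elapsed(ms: int) -> str:
    # Clamp to 0 for safety, then render as "Ns" or "Mm Ns"
    total_sec = max(0, ms // 1000)
    if total_sec < 60:
        return f'{total_sec}s'
    return f'{total_sec // 60}m {total_sec % 60}s'


def format_tokens(count):
    # None propagates; >= 1000 collapses to one decimal with a "k" suffix
    if count is None:
        return None
    if count >= 1000:
        return f'{count / 1000:.1f}k'
    return str(count)
```

So a 95-second turn renders as `1m 35s`, and a 12,340-token turn as `12.3k tokens`, matching what the indicator shows next to the spinner.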
dashboard/components/App.js (new file, 616 lines)
@@ -0,0 +1,616 @@
import { html, useState, useEffect, useCallback, useMemo, useRef } from '../lib/preact.js';
|
||||
import { API_STATE, API_STREAM, API_DISMISS, API_DISMISS_DEAD, API_RESPOND, API_CONVERSATION, API_HEALTH, POLL_MS, fetchWithTimeout, fetchSkills } from '../utils/api.js';
|
||||
import { groupSessionsByProject } from '../utils/status.js';
|
||||
import { Sidebar } from './Sidebar.js';
|
||||
import { SessionCard } from './SessionCard.js';
|
||||
import { Modal } from './Modal.js';
|
||||
import { EmptyState } from './EmptyState.js';
|
||||
import { ToastContainer, showToast, trackError, clearErrorCount } from './Toast.js';
|
||||
import { SpawnModal } from './SpawnModal.js';
|
||||
|
||||
let optimisticMsgId = 0;
|
||||
|
||||
export function App() {
|
||||
const [sessions, setSessions] = useState([]);
|
||||
const [modalSession, setModalSession] = useState(null);
|
||||
const [conversations, setConversations] = useState({});
|
||||
const [loading, setLoading] = useState(true);
|
||||
const [error, setError] = useState(null);
|
||||
const [selectedProject, setSelectedProject] = useState(null);
|
||||
const [sseConnected, setSseConnected] = useState(false);
|
||||
const [deadSessionsCollapsed, setDeadSessionsCollapsed] = useState(true);
|
||||
const [spawnModalOpen, setSpawnModalOpen] = useState(false);
|
||||
const [zellijAvailable, setZellijAvailable] = useState(true);
|
||||
const [newlySpawnedIds, setNewlySpawnedIds] = useState(new Set());
|
||||
const pendingSpawnIdsRef = useRef(new Set());
|
||||
const [skillsConfig, setSkillsConfig] = useState({ claude: null, codex: null });
|
||||
|
||||
// Background conversation refresh with error tracking
|
||||
const refreshConversationSilent = useCallback(async (sessionId, projectDir, agent = 'claude') => {
|
||||
try {
|
||||
let url = API_CONVERSATION + encodeURIComponent(sessionId);
|
||||
const params = new URLSearchParams();
|
||||
if (projectDir) params.set('project_dir', projectDir);
|
||||
if (agent) params.set('agent', agent);
|
||||
if (params.toString()) url += '?' + params.toString();
|
||||
const response = await fetch(url);
|
||||
if (!response.ok) {
|
||||
trackError(`conversation-${sessionId}`, `Failed to fetch conversation (HTTP ${response.status})`);
|
||||
return;
|
||||
}
|
||||
const data = await response.json();
|
||||
setConversations(prev => ({
|
||||
...prev,
|
||||
[sessionId]: data.messages || []
|
||||
}));
|
||||
clearErrorCount(`conversation-${sessionId}`); // Clear on success
|
||||
} catch (err) {
|
||||
trackError(`conversation-${sessionId}`, `Failed to fetch conversation: ${err.message}`);
|
||||
}
|
||||
}, []);
|
||||
|
||||
// Track last_event_at for each session to detect actual changes
|
||||
const lastEventAtRef = useRef({});
|
||||
|
||||
// Refs for stable callback access (avoids recreation on state changes)
|
||||
const sessionsRef = useRef(sessions);
|
||||
const conversationsRef = useRef(conversations);
|
||||
const modalSessionRef = useRef(null);
|
||||
sessionsRef.current = sessions;
|
||||
conversationsRef.current = conversations;
|
||||
|
||||
// Apply state payload from polling or SSE stream
|
||||
const applyStateData = useCallback((data) => {
|
||||
const newSessions = data.sessions || [];
|
||||
const newSessionIds = new Set(newSessions.map(s => s.session_id));
|
||||
setSessions(newSessions);
|
||||
setError(null);
|
||||
|
||||
// Update modalSession if it's still open (to get latest pending_questions, etc.)
|
||||
const modalId = modalSessionRef.current;
|
||||
if (modalId) {
|
||||
const updatedSession = newSessions.find(s => s.session_id === modalId);
|
||||
if (updatedSession) {
|
||||
setModalSession(updatedSession);
|
||||
}
|
||||
}
|
||||
|
||||
// Check for newly spawned sessions matching pending spawn IDs
|
||||
if (pendingSpawnIdsRef.current.size > 0) {
|
||||
const matched = new Set();
|
||||
for (const session of newSessions) {
|
||||
if (session.spawn_id && pendingSpawnIdsRef.current.has(session.spawn_id)) {
|
||||
matched.add(session.session_id);
|
||||
pendingSpawnIdsRef.current.delete(session.spawn_id);
|
||||
}
|
||||
}
|
||||
if (matched.size > 0) {
|
||||
setNewlySpawnedIds(prev => {
|
||||
const next = new Set(prev);
|
||||
for (const id of matched) next.add(id);
|
||||
return next;
|
||||
});
|
||||
// Auto-clear highlight after animation duration (2.5s)
|
||||
for (const id of matched) {
|
||||
setTimeout(() => {
|
||||
setNewlySpawnedIds(prev => {
|
||||
const next = new Set(prev);
|
||||
next.delete(id);
|
||||
return next;
|
||||
});
|
||||
}, 2500);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// Clean up conversation cache for sessions that no longer exist
|
||||
setConversations(prev => {
|
||||
const activeIds = Object.keys(prev).filter(id => newSessionIds.has(id));
|
||||
if (activeIds.length === Object.keys(prev).length) return prev; // No cleanup needed
|
||||
const cleaned = {};
|
||||
for (const id of activeIds) {
|
||||
cleaned[id] = prev[id];
|
||||
}
|
||||
return cleaned;
|
||||
});
|
||||
|
||||
// Refresh conversations for sessions that have actually changed
|
||||
// Use conversation_mtime_ns for real-time updates (changes on every file write),
|
||||
// falling back to last_event_at for sessions without mtime tracking
|
||||
const prevEventMap = lastEventAtRef.current;
|
||||
const nextEventMap = {};
|
||||
|
||||
for (const session of newSessions) {
|
||||
const id = session.session_id;
|
||||
// Prefer mtime (changes on every write) over last_event_at (only on hook events)
|
||||
const newKey = session.conversation_mtime_ns || session.last_event_at || '';
|
||||
nextEventMap[id] = newKey;
|
||||
|
||||
const oldKey = prevEventMap[id] || '';
|
||||
if (newKey !== oldKey) {
|
||||
refreshConversationSilent(session.session_id, session.project_dir, session.agent || 'claude');
|
||||
}
|
||||
}
|
||||
lastEventAtRef.current = nextEventMap;
|
||||
|
||||
setLoading(false);
|
||||
}, [refreshConversationSilent]);
|
||||
|
||||
// Fetch state from API
|
||||
const fetchState = useCallback(async () => {
|
||||
try {
|
||||
const response = await fetchWithTimeout(API_STATE);
|
||||
if (!response.ok) {
|
||||
throw new Error(`HTTP ${response.status}`);
|
||||
}
|
||||
const data = await response.json();
|
||||
applyStateData(data);
|
||||
clearErrorCount('state-fetch');
|
||||
} catch (err) {
|
||||
const msg = err.name === 'AbortError' ? 'Request timed out' : err.message;
|
||||
trackError('state-fetch', `Failed to fetch state: ${msg}`);
|
||||
setError(msg);
|
||||
setLoading(false);
|
||||
}
|
||||
}, [applyStateData]);
|
||||
|
||||
// Fetch conversation for a session (explicit fetch, e.g., on modal open)
|
||||
const fetchConversation = useCallback(async (sessionId, projectDir, agent = 'claude', force = false) => {
|
||||
// Skip if already fetched and not forcing refresh
|
||||
if (!force && conversationsRef.current[sessionId]) return;
|
||||
|
||||
try {
|
||||
let url = API_CONVERSATION + encodeURIComponent(sessionId);
|
||||
const params = new URLSearchParams();
|
||||
if (projectDir) params.set('project_dir', projectDir);
|
||||
if (agent) params.set('agent', agent);
|
||||
if (params.toString()) url += '?' + params.toString();
|
||||
const response = await fetch(url);
|
||||
if (!response.ok) {
|
||||
trackError(`conversation-${sessionId}`, `Failed to fetch conversation (HTTP ${response.status})`);
|
||||
return;
|
||||
}
|
||||
const data = await response.json();
|
||||
setConversations(prev => ({
|
||||
...prev,
|
||||
[sessionId]: data.messages || []
|
||||
}));
|
||||
clearErrorCount(`conversation-${sessionId}`);
|
||||
} catch (err) {
|
||||
trackError(`conversation-${sessionId}`, `Error fetching conversation: ${err.message}`);
|
||||
}
|
||||
}, []);

  // Respond to a session's pending question with optimistic update
  const respondToSession = useCallback(async (sessionId, text, isFreeform = false, optionCount = 0) => {
    const payload = { text };
    if (isFreeform) {
      payload.freeform = true;
      payload.optionCount = optionCount;
    }

    // Optimistic update: immediately show user's message
    const optimisticMsg = {
      id: `optimistic-${++optimisticMsgId}`,
      role: 'user',
      content: text,
      timestamp: new Date().toISOString(),
      _optimistic: true, // Flag for identification
    };
    setConversations(prev => ({
      ...prev,
      [sessionId]: [...(prev[sessionId] || []), optimisticMsg]
    }));

    try {
      const res = await fetch(API_RESPOND + encodeURIComponent(sessionId), {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(payload)
      });
      const data = await res.json();
      if (!data.ok) {
        throw new Error(data.error || 'Failed to send response');
      }
      clearErrorCount(`respond-${sessionId}`);
      // SSE will push state update when Claude processes the message,
      // which triggers conversation refresh via applyStateData
    } catch (err) {
      // Remove optimistic message on failure
      setConversations(prev => ({
        ...prev,
        [sessionId]: (prev[sessionId] || []).filter(m => m !== optimisticMsg)
      }));
      trackError(`respond-${sessionId}`, `Failed to send message: ${err.message}`);
      throw err; // Re-throw so SimpleInput/QuestionBlock can catch and show error
    }
  }, []);

  // Dismiss a session
  const dismissSession = useCallback(async (sessionId) => {
    try {
      const res = await fetch(API_DISMISS + encodeURIComponent(sessionId), {
        method: 'POST'
      });
      const data = await res.json();
      if (data.ok) {
        // Trigger refresh
        fetchState();
      } else {
        trackError(`dismiss-${sessionId}`, `Failed to dismiss session: ${data.error || 'Unknown error'}`);
      }
    } catch (err) {
      trackError(`dismiss-${sessionId}`, `Error dismissing session: ${err.message}`);
    }
  }, [fetchState]);

  // Dismiss all dead sessions
  const dismissDeadSessions = useCallback(async () => {
    try {
      const res = await fetch(API_DISMISS_DEAD, { method: 'POST' });
      const data = await res.json();
      if (data.ok) {
        fetchState();
      } else {
        trackError('dismiss-dead', `Failed to clear completed sessions: ${data.error || 'Unknown error'}`);
      }
    } catch (err) {
      trackError('dismiss-dead', `Error clearing completed sessions: ${err.message}`);
    }
  }, [fetchState]);

  // Subscribe to live state updates via SSE
  useEffect(() => {
    let eventSource = null;
    let reconnectTimer = null;
    let stopped = false;

    const connect = () => {
      if (stopped) return;

      try {
        eventSource = new EventSource(API_STREAM);
      } catch (err) {
        trackError('sse-init', `Failed to initialize EventSource: ${err.message}`);
        setSseConnected(false);
        reconnectTimer = setTimeout(connect, 2000);
        return;
      }

      eventSource.addEventListener('open', () => {
        if (stopped) return;
        setSseConnected(true);
        setError(null);
        // Clear event cache on reconnect to force refresh of all conversations
        // (handles updates missed during disconnect)
        lastEventAtRef.current = {};
      });

      eventSource.addEventListener('state', (event) => {
        if (stopped) return;
        try {
          const data = JSON.parse(event.data);
          applyStateData(data);
          clearErrorCount('sse-parse');
        } catch (err) {
          trackError('sse-parse', `Failed to parse SSE state payload: ${err.message}`);
        }
      });

      eventSource.addEventListener('error', () => {
        if (stopped) return;
        setSseConnected(false);
        if (eventSource) {
          eventSource.close();
          eventSource = null;
        }
        if (!reconnectTimer) {
          reconnectTimer = setTimeout(() => {
            reconnectTimer = null;
            connect();
          }, 2000);
        }
      });
    };

    connect();

    return () => {
      stopped = true;
      if (reconnectTimer) {
        clearTimeout(reconnectTimer);
      }
      if (eventSource) {
        eventSource.close();
      }
    };
  }, [applyStateData]);

  // Poll for updates only when SSE is disconnected (fallback mode)
  useEffect(() => {
    if (sseConnected) return;

    fetchState();
    const interval = setInterval(fetchState, POLL_MS);
    return () => clearInterval(interval);
  }, [fetchState, sseConnected]);

  // Poll Zellij health status
  useEffect(() => {
    const checkHealth = async () => {
      try {
        const response = await fetchWithTimeout(API_HEALTH);
        if (response.ok) {
          const data = await response.json();
          setZellijAvailable(data.zellij_available);
        }
      } catch {
        // Server unreachable - handled by state fetch error
      }
    };

    checkHealth();
    const interval = setInterval(checkHealth, 30000);
    return () => clearInterval(interval);
  }, []);

  // Fetch skills for autocomplete on mount
  useEffect(() => {
    const loadSkills = async () => {
      const [claude, codex] = await Promise.all([
        fetchSkills('claude'),
        fetchSkills('codex')
      ]);
      setSkillsConfig({ claude, codex });
    };
    loadSkills();
  }, []);

  // Group sessions by project (memoized so the downstream useMemo deps stay stable)
  const projectGroups = useMemo(() => groupSessionsByProject(sessions), [sessions]);

  // Filter sessions based on selected project
  const filteredGroups = useMemo(() => {
    if (selectedProject === null) {
      return projectGroups;
    }
    return projectGroups.filter(g => g.projectDir === selectedProject);
  }, [projectGroups, selectedProject]);

  // Split sessions into active and dead
  const { activeSessions, deadSessions } = useMemo(() => {
    const active = [];
    const dead = [];
    for (const group of filteredGroups) {
      for (const session of group.sessions) {
        if (session.is_dead) {
          dead.push(session);
        } else {
          active.push(session);
        }
      }
    }
    return { activeSessions: active, deadSessions: dead };
  }, [filteredGroups]);

  // Handle card click - open modal and fetch conversation if not cached
  const handleCardClick = useCallback(async (session) => {
    modalSessionRef.current = session.session_id;
    setModalSession(session);

    // Fetch conversation if not already cached
    if (!conversationsRef.current[session.session_id]) {
      await fetchConversation(session.session_id, session.project_dir, session.agent || 'claude');
    }
  }, [fetchConversation]);

  const handleCloseModal = useCallback(() => {
    modalSessionRef.current = null;
    setModalSession(null);
  }, []);

  const handleSelectProject = useCallback((projectDir) => {
    setSelectedProject(projectDir);
  }, []);

  const handleSpawnResult = useCallback((result) => {
    if (result.success) {
      showToast(`${result.agentType} agent spawned for ${result.project}`, 'success');
      if (result.spawnId) {
        pendingSpawnIdsRef.current.add(result.spawnId);
      }
    } else if (result.error) {
      showToast(result.error, 'error');
    }
  }, []);

  return html`
    <!-- Sidebar -->
    <${Sidebar}
      projectGroups=${projectGroups}
      selectedProject=${selectedProject}
      onSelectProject=${handleSelectProject}
      totalSessions=${sessions.length}
    />

    <!-- Main Content (offset for sidebar) -->
    <div class="ml-80 min-h-screen pb-10">
      <!-- Compact Header -->
      <header class="sticky top-0 z-30 border-b border-selection/50 bg-surface/95 px-6 py-4 backdrop-blur-sm">
        <div class="flex items-center justify-between">
          <div>
            <h2 class="font-display text-lg font-semibold text-bright">
              ${selectedProject === null ? 'All Projects' : filteredGroups[0]?.projectName || 'Project'}
            </h2>
            <p class="mt-0.5 font-mono text-micro text-dim">
              ${filteredGroups.reduce((sum, g) => sum + g.sessions.length, 0)} session${filteredGroups.reduce((sum, g) => sum + g.sessions.length, 0) === 1 ? '' : 's'}
              ${selectedProject !== null && filteredGroups[0]?.projectDir ? html` in <span class="text-dim/80">${filteredGroups[0].projectDir}</span>` : ''}
            </p>
          </div>
          <!-- Status summary chips -->
          <div class="flex items-center gap-2">
            ${(() => {
              const counts = { needs_attention: 0, active: 0, starting: 0, done: 0 };
              for (const g of filteredGroups) {
                for (const s of g.sessions) {
                  counts[s.status] = (counts[s.status] || 0) + 1;
                }
              }
              return html`
                ${counts.needs_attention > 0 && html`
                  <div class="rounded-lg border border-attention/40 bg-attention/12 px-2.5 py-1 text-attention">
                    <span class="font-mono text-sm font-medium tabular-nums">${counts.needs_attention}</span>
                    <span class="ml-1 text-micro uppercase tracking-wider">attention</span>
                  </div>
                `}
                ${counts.active > 0 && html`
                  <div class="rounded-lg border border-active/40 bg-active/12 px-2.5 py-1 text-active">
                    <span class="font-mono text-sm font-medium tabular-nums">${counts.active}</span>
                    <span class="ml-1 text-micro uppercase tracking-wider">active</span>
                  </div>
                `}
                ${counts.starting > 0 && html`
                  <div class="rounded-lg border border-starting/40 bg-starting/12 px-2.5 py-1 text-starting">
                    <span class="font-mono text-sm font-medium tabular-nums">${counts.starting}</span>
                    <span class="ml-1 text-micro uppercase tracking-wider">starting</span>
                  </div>
                `}
                ${counts.done > 0 && html`
                  <div class="rounded-lg border border-done/40 bg-done/12 px-2.5 py-1 text-done">
                    <span class="font-mono text-sm font-medium tabular-nums">${counts.done}</span>
                    <span class="ml-1 text-micro uppercase tracking-wider">done</span>
                  </div>
                `}
              `;
            })()}
          </div>
          <div class="relative">
            <button
              disabled=${!zellijAvailable}
              class="rounded-lg border border-active/40 bg-active/12 px-3 py-2 text-sm font-medium text-active transition-colors hover:bg-active/20 ${!zellijAvailable ? 'opacity-50 cursor-not-allowed' : ''}"
              onClick=${() => setSpawnModalOpen(true)}
            >
              + New Agent
            </button>
            <${SpawnModal}
              isOpen=${spawnModalOpen}
              onClose=${() => setSpawnModalOpen(false)}
              onSpawn=${handleSpawnResult}
              currentProject=${selectedProject}
            />
          </div>
        </div>
      </header>

      ${!zellijAvailable && html`
        <div class="border-b border-attention/50 bg-attention/10 px-6 py-2 text-sm text-attention">
          <span class="font-medium">Zellij session not found.</span>
          ${' '}Agent spawning is unavailable. Start Zellij with: <code class="rounded bg-attention/15 px-1.5 py-0.5 font-mono text-micro">zellij attach infra</code>
        </div>
      `}

      <main class="px-6 pb-6 pt-6">
        ${loading ? html`
          <div class="glass-panel animate-fade-in-up flex items-center justify-center rounded-2xl py-24">
            <div class="font-mono text-dim">Loading sessions...</div>
          </div>
        ` : error ? html`
          <div class="glass-panel animate-fade-in-up flex items-center justify-center rounded-2xl py-24">
            <div class="text-center">
              <p class="mb-2 font-display text-lg text-attention">Failed to connect to API</p>
              <p class="font-mono text-sm text-dim">${error}</p>
            </div>
          </div>
        ` : filteredGroups.length === 0 ? html`
          <${EmptyState} />
        ` : html`
          <!-- Active Sessions Grid -->
          ${activeSessions.length > 0 ? html`
            <div class="flex flex-wrap gap-4">
              ${activeSessions.map(session => html`
                <${SessionCard}
                  key=${session.session_id}
                  session=${session}
                  onClick=${handleCardClick}
                  conversation=${conversations[session.session_id]}
                  onFetchConversation=${fetchConversation}
                  onRespond=${respondToSession}
                  onDismiss=${dismissSession}
                  isNewlySpawned=${newlySpawnedIds.has(session.session_id)}
                  autocompleteConfig=${skillsConfig[session.agent === 'codex' ? 'codex' : 'claude']}
                />
              `)}
            </div>
          ` : deadSessions.length > 0 ? html`
            <div class="glass-panel flex items-center justify-center rounded-xl py-12 mb-6">
              <div class="text-center">
                <p class="font-display text-lg text-dim">No active sessions</p>
                <p class="mt-1 font-mono text-micro text-dim/70">All sessions have completed</p>
              </div>
            </div>
          ` : ''}

          <!-- Completed Sessions (Dead) - Collapsible -->
          ${deadSessions.length > 0 && html`
            <div class="mt-8">
              <button
                onClick=${() => setDeadSessionsCollapsed(!deadSessionsCollapsed)}
                class="group flex w-full items-center gap-3 rounded-lg border border-selection/50 bg-surface/50 px-4 py-3 text-left transition-colors hover:border-selection hover:bg-surface/80"
              >
                <svg
                  class="h-4 w-4 text-dim transition-transform ${deadSessionsCollapsed ? '' : 'rotate-90'}"
                  fill="none"
                  stroke="currentColor"
                  viewBox="0 0 24 24"
                >
                  <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M9 5l7 7-7 7"/>
                </svg>
                <span class="font-display text-sm font-medium text-dim">
                  Completed Sessions
                </span>
                <span class="rounded-full bg-done/15 px-2 py-0.5 font-mono text-micro tabular-nums text-done/70">
                  ${deadSessions.length}
                </span>
                <div class="flex-1"></div>
                <button
                  onClick=${(e) => { e.stopPropagation(); dismissDeadSessions(); }}
                  class="rounded-lg border border-selection/80 bg-bg/40 px-3 py-1.5 font-mono text-micro text-dim transition-colors hover:border-done/40 hover:bg-done/10 hover:text-bright"
                >
                  Clear All
                </button>
              </button>

              ${!deadSessionsCollapsed && html`
                <div class="mt-4 flex flex-wrap gap-4">
                  ${deadSessions.map(session => html`
                    <${SessionCard}
                      key=${session.session_id}
                      session=${session}
                      onClick=${handleCardClick}
                      conversation=${conversations[session.session_id]}
                      onFetchConversation=${fetchConversation}
                      onRespond=${respondToSession}
                      onDismiss=${dismissSession}
                      autocompleteConfig=${skillsConfig[session.agent === 'codex' ? 'codex' : 'claude']}
                    />
                  `)}
                </div>
              `}
            </div>
          `}
        `}
      </main>
    </div>

    <${Modal}
      session=${modalSession}
      conversations=${conversations}
      onClose=${handleCloseModal}
      onFetchConversation=${fetchConversation}
      onRespond=${respondToSession}
      onDismiss=${dismissSession}
    />

    <${ToastContainer} />
  `;
}
39
dashboard/components/ChatMessages.js
Normal file
@@ -0,0 +1,39 @@
import { html } from '../lib/preact.js';
import { getUserMessageBg } from '../utils/status.js';
import { MessageBubble, filterDisplayMessages } from './MessageBubble.js';

function getMessageKey(msg, index) {
  // Server-assigned ID (preferred)
  if (msg.id) return msg.id;
  // Fallback: role + timestamp + index (for legacy/edge cases)
  return `${msg.role}-${msg.timestamp || ''}-${index}`;
}

export function ChatMessages({ messages, status, limit = 20 }) {
  const userBgClass = getUserMessageBg(status);

  if (!messages || messages.length === 0) {
    return html`
      <div class="flex h-full items-center justify-center rounded-xl border border-dashed border-selection/70 bg-bg/30 px-4 text-center text-sm text-dim">
        No messages to show
      </div>
    `;
  }

  const allDisplayMessages = filterDisplayMessages(messages);
  const displayMessages = limit ? allDisplayMessages.slice(-limit) : allDisplayMessages;
  const offset = allDisplayMessages.length - displayMessages.length;

  return html`
    <div class="space-y-2.5">
      ${displayMessages.map((msg, i) => html`
        <${MessageBubble}
          key=${getMessageKey(msg, offset + i)}
          msg=${msg}
          userBg=${userBgClass}
          compact=${true}
        />
      `)}
    </div>
  `;
}
18
dashboard/components/EmptyState.js
Normal file
@@ -0,0 +1,18 @@
import { html } from '../lib/preact.js';

export function EmptyState() {
  return html`
    <div class="glass-panel mx-auto flex max-w-2xl flex-col items-center justify-center rounded-3xl px-8 py-20 text-center">
      <div class="mb-6 flex h-20 w-20 items-center justify-center rounded-2xl border border-selection/80 bg-bg/40 shadow-halo">
        <svg class="h-9 w-9 text-dim" fill="none" stroke="currentColor" viewBox="0 0 24 24">
          <path stroke-linecap="round" stroke-linejoin="round" stroke-width="1.5"
            d="M9.75 17L9 20l-1 1h8l-1-1-.75-3M3 13h18M5 17h14a2 2 0 002-2V5a2 2 0 00-2-2H5a2 2 0 00-2 2v10a2 2 0 002 2z"/>
        </svg>
      </div>
      <h2 class="mb-2 font-display text-2xl font-semibold text-bright">No Active Sessions</h2>
      <p class="max-w-lg text-dim">
        Agent sessions will appear here when they connect. Start a Claude Code session to see it in the dashboard.
      </p>
    </div>
  `;
}
54
dashboard/components/MessageBubble.js
Normal file
@@ -0,0 +1,54 @@
import { html } from '../lib/preact.js';
import { renderContent, renderToolCalls, renderThinking } from '../lib/markdown.js';

/**
 * Single message bubble used by both the card chat view and modal view.
 * All message rendering logic lives here — card and modal only differ in
 * container layout, not in how individual messages are rendered.
 *
 * @param {object} msg - Message object: { role, content, thinking, tool_calls, timestamp }
 * @param {string} userBg - Tailwind classes for user message background
 * @param {boolean} compact - true = card view (smaller), false = modal view (larger)
 * @param {function} formatTime - Optional timestamp formatter (modal only)
 */
export function MessageBubble({ msg, userBg, compact = false, formatTime }) {
  const isUser = msg.role === 'user';
  const pad = compact ? 'px-3 py-2.5' : 'px-4 py-3';
  const maxW = compact ? 'max-w-[92%]' : 'max-w-[86%]';

  return html`
    <div class="flex ${isUser ? 'justify-end' : 'justify-start'} ${compact ? '' : 'animate-fade-in-up'}">
      <div
        class="${maxW} rounded-2xl ${pad} ${
          isUser
            ? `${userBg} rounded-br-md shadow-[0_3px_8px_rgba(16,24,36,0.22)]`
            : 'border border-selection/75 bg-surface2/75 text-fg rounded-bl-md'
        }"
      >
        <div class="mb-1 font-mono text-micro uppercase tracking-[0.14em] text-dim">
          ${isUser ? 'Operator' : 'Agent'}
        </div>
        ${msg.thinking && renderThinking(msg.thinking)}
        <div class="whitespace-pre-wrap break-words text-ui font-chat">
          ${renderContent(msg.content)}
        </div>
        ${renderToolCalls(msg.tool_calls)}
        ${formatTime && msg.timestamp && html`
          <div class="mt-2 font-mono text-label text-dim">
            ${formatTime(msg.timestamp)}
          </div>
        `}
      </div>
    </div>
  `;
}

/**
 * Filter messages for display — removes empty assistant messages
 * (no content, thinking, or tool_calls) that would render as empty bubbles.
 */
export function filterDisplayMessages(messages) {
  return messages.filter(msg =>
    msg.content || msg.thinking || msg.tool_calls?.length || msg.role === 'user'
  );
}
79
dashboard/components/Modal.js
Normal file
@@ -0,0 +1,79 @@
import { html, useState, useEffect, useCallback } from '../lib/preact.js';
import { SessionCard } from './SessionCard.js';
import { fetchSkills } from '../utils/api.js';

export function Modal({ session, conversations, onClose, onRespond, onFetchConversation, onDismiss }) {
  const [closing, setClosing] = useState(false);
  const [autocompleteConfig, setAutocompleteConfig] = useState(null);

  // Reset closing state when session changes
  useEffect(() => {
    setClosing(false);
  }, [session?.session_id]);

  // Load autocomplete skills when agent type changes
  useEffect(() => {
    if (!session) {
      setAutocompleteConfig(null);
      return;
    }

    let stale = false;
    const agent = session.agent || 'claude';
    fetchSkills(agent)
      .then(config => { if (!stale) setAutocompleteConfig(config); })
      .catch(() => { if (!stale) setAutocompleteConfig(null); });
    return () => { stale = true; };
  }, [session?.agent]);

  // Animated close handler
  const handleClose = useCallback(() => {
    setClosing(true);
    setTimeout(() => {
      setClosing(false);
      onClose();
    }, 200);
  }, [onClose]);

  // Lock body scroll when modal is open
  useEffect(() => {
    if (!session) return;
    document.body.style.overflow = 'hidden';
    return () => {
      document.body.style.overflow = '';
    };
  }, [session?.session_id]);

  // Handle escape key
  useEffect(() => {
    if (!session) return;
    const handleKeyDown = (e) => {
      if (e.key === 'Escape') handleClose();
    };
    document.addEventListener('keydown', handleKeyDown);
    return () => document.removeEventListener('keydown', handleKeyDown);
  }, [session?.session_id, handleClose]);

  if (!session) return null;

  const conversation = conversations[session.session_id] || [];

  return html`
    <div
      class="fixed inset-0 z-50 flex items-center justify-center bg-[#02050d]/84 p-4 backdrop-blur-sm ${closing ? 'modal-backdrop-out' : 'modal-backdrop-in'}"
      onClick=${(e) => e.target === e.currentTarget && handleClose()}
    >
      <div class=${closing ? 'modal-panel-out' : 'modal-panel-in'} onClick=${(e) => e.stopPropagation()}>
        <${SessionCard}
          session=${session}
          conversation=${conversation}
          onFetchConversation=${onFetchConversation}
          onRespond=${onRespond}
          onDismiss=${onDismiss}
          enlarged=${true}
          autocompleteConfig=${autocompleteConfig}
        />
      </div>
    </div>
  `;
}
24
dashboard/components/OptionButton.js
Normal file
@@ -0,0 +1,24 @@
import { html } from '../lib/preact.js';

export function OptionButton({ number, label, description, selected, onClick, onMouseEnter, onFocus }) {
  const selectedStyles = selected
    ? 'border-starting/60 bg-starting/15 shadow-sm'
    : 'border-selection/70 bg-surface2/55';

  return html`
    <button
      onClick=${onClick}
      onMouseEnter=${onMouseEnter}
      onFocus=${onFocus}
      class="group w-full rounded-lg border px-3 py-2 text-left transition-[transform,border-color,background-color,box-shadow] duration-200 hover:-translate-y-0.5 hover:border-starting/55 hover:bg-surface2/90 hover:shadow-halo ${selectedStyles}"
    >
      <div class="flex items-baseline gap-2">
        <span class="font-mono text-sm text-starting">${number}.</span>
        <span class="text-sm font-medium text-bright">${label}</span>
      </div>
      ${description && html`
        <p class="mt-0.5 pl-4 text-xs text-dim">${description}</p>
      `}
    </button>
  `;
}
228
dashboard/components/QuestionBlock.js
Normal file
@@ -0,0 +1,228 @@
|
||||
import { html, useState, useRef } from '../lib/preact.js';
|
||||
import { getStatusMeta } from '../utils/status.js';
|
||||
import { OptionButton } from './OptionButton.js';
|
||||
import { renderContent } from '../lib/markdown.js';
|
||||
|
||||
export function QuestionBlock({ questions, sessionId, status, onRespond }) {
|
||||
const [freeformText, setFreeformText] = useState('');
|
||||
const [focused, setFocused] = useState(false);
|
||||
const [sending, setSending] = useState(false);
|
||||
const [error, setError] = useState(null);
|
||||
const [previewIndex, setPreviewIndex] = useState(0);
|
||||
const textareaRef = useRef(null);
|
||||
const meta = getStatusMeta(status);
|
||||
|
||||
if (!questions || questions.length === 0) return null;
|
||||
|
||||
// Only show the first question (sequential, not parallel)
|
||||
const question = questions[0];
|
||||
const remainingCount = questions.length - 1;
|
||||
const options = question.options || [];
|
||||
|
||||
// Check if any option has markdown preview content
|
||||
const hasMarkdownPreviews = options.some(opt => opt.markdown);
|
||||
|
||||
const handleOptionClick = async (optionLabel) => {
|
||||
if (sending) return;
|
||||
setSending(true);
|
||||
setError(null);
|
||||
try {
|
||||
await onRespond(sessionId, optionLabel, false, options.length);
|
||||
} catch (err) {
|
||||
setError('Failed to send response');
|
||||
console.error('QuestionBlock option error:', err);
|
||||
} finally {
|
||||
setSending(false);
|
||||
}
|
||||
};
|
||||
|
||||
const handleFreeformSubmit = async (e) => {
|
||||
e.preventDefault();
|
||||
e.stopPropagation();
|
||||
if (freeformText.trim() && !sending) {
|
||||
setSending(true);
|
||||
setError(null);
|
||||
try {
|
||||
await onRespond(sessionId, freeformText.trim(), true, options.length);
|
||||
setFreeformText('');
|
||||
} catch (err) {
|
||||
setError('Failed to send response');
|
||||
console.error('QuestionBlock freeform error:', err);
|
||||
} finally {
|
||||
setSending(false);
|
||||
// Refocus the textarea after submission
|
||||
// Use setTimeout to ensure React has re-rendered with disabled=false
|
||||
setTimeout(() => {
|
||||
textareaRef.current?.focus();
|
||||
}, 0);
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
// Side-by-side layout when options have markdown previews
|
||||
if (hasMarkdownPreviews) {
|
||||
const currentMarkdown = options[previewIndex]?.markdown || '';
|
||||
|
||||
return html`
|
||||
<div class="space-y-2.5" onClick=${(e) => e.stopPropagation()}>
|
||||
${error && html`
|
||||
<div class="rounded-lg border border-attention/40 bg-attention/12 px-3 py-1.5 text-sm text-attention">
|
||||
${error}
|
||||
</div>
|
||||
`}
|
||||
|
||||
<!-- Question Header Badge -->
|
||||
${question.header && html`
|
||||
<span class="inline-flex rounded-full border px-2 py-1 font-mono text-micro uppercase tracking-[0.15em] ${meta.badge}">
|
||||
${question.header}
|
||||
</span>
|
||||
`}
|
||||
|
||||
<!-- Question Text -->
|
||||
<p class="font-medium text-bright">${question.question || question.text}</p>
|
||||
|
||||
<!-- Side-by-side: Options | Preview -->
|
||||
<div class="flex gap-3">
|
||||
<!-- Options List (left side) -->
|
||||
<div class="w-2/5 space-y-1.5 shrink-0">
|
||||
${options.map((opt, i) => html`
|
||||
<${OptionButton}
|
||||
key=${i}
|
||||
number=${i + 1}
|
||||
label=${opt.label || opt}
|
||||
description=${opt.description}
|
||||
selected=${previewIndex === i}
|
||||
onMouseEnter=${() => setPreviewIndex(i)}
|
||||
onFocus=${() => setPreviewIndex(i)}
|
||||
onClick=${() => handleOptionClick(opt.label || opt)}
|
||||
/>
|
||||
`)}
|
||||
</div>
|
||||
|
||||
<!-- Preview Pane (right side) — fixed height prevents layout thrashing on hover -->
|
||||
<div class="flex-1 rounded-lg border border-selection/50 bg-bg/60 p-3 h-[400px] overflow-auto">
|
||||
${currentMarkdown
|
||||
? (currentMarkdown.trimStart().startsWith('```')
|
||||
? renderContent(currentMarkdown)
|
||||
: html`<pre class="font-mono text-sm text-fg/90 whitespace-pre leading-relaxed">${currentMarkdown}</pre>`)
|
||||
: html`<p class="text-dim text-sm italic">No preview for this option</p>`
|
||||
}
|
||||
</div>
|
||||
</div>
|
||||
|
||||
<!-- Freeform Input -->
|
||||
<form onSubmit=${handleFreeformSubmit} class="flex items-end gap-2.5">
|
||||
<textarea
|
||||
ref=${textareaRef}
|
||||
value=${freeformText}
|
||||
onInput=${(e) => {
|
||||
setFreeformText(e.target.value);
|
||||
e.target.style.height = 'auto';
|
||||
e.target.style.height = e.target.scrollHeight + 'px';
|
||||
}}
|
||||
onKeyDown=${(e) => {
|
||||
if (e.key === 'Enter' && !e.shiftKey) {
|
||||
e.preventDefault();
|
||||
handleFreeformSubmit(e);
|
||||
}
|
||||
}}
|
||||
onFocus=${() => setFocused(true)}
|
||||
onBlur=${() => setFocused(false)}
|
||||
placeholder="Type a response..."
|
||||
rows="1"
|
||||
class="flex-1 resize-none overflow-hidden rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-fg transition-colors duration-150 placeholder:text-dim focus:outline-none"
|
||||
style=${{ minHeight: '38px', maxHeight: '150px', borderColor: focused ? meta.borderColor : undefined }}
|
||||
disabled=${sending}
|
||||
/>
|
||||
<button
|
||||
type="submit"
|
||||
class="shrink-0 rounded-xl px-3 py-2 text-sm font-medium transition-[transform,filter] duration-150 hover:-translate-y-0.5 hover:brightness-110 disabled:cursor-not-allowed disabled:opacity-50"
|
||||
style=${{ backgroundColor: meta.borderColor, color: '#0a0f18' }}
|
||||
disabled=${sending || !freeformText.trim()}
|
||||
>
|
||||
${sending ? 'Sending...' : 'Send'}
|
||||
</button>
|
||||
</form>
|
||||
|
||||
<!-- More Questions Indicator -->
|
||||
${remainingCount > 0 && html`
|
||||
<p class="font-mono text-label text-dim">+ ${remainingCount} more question${remainingCount > 1 ? 's' : ''} after this</p>
|
||||
`}
|
||||
</div>
|
||||
`;
|
||||
}
|
||||
|
||||
// Standard layout (no markdown previews)
|
||||
return html`
|
||||
    <div class="space-y-2.5" onClick=${(e) => e.stopPropagation()}>
      ${error && html`
        <div class="rounded-lg border border-attention/40 bg-attention/12 px-3 py-1.5 text-sm text-attention">
          ${error}
        </div>
      `}
      <!-- Question Header Badge -->
      ${question.header && html`
        <span class="inline-flex rounded-full border px-2 py-1 font-mono text-micro uppercase tracking-[0.15em] ${meta.badge}">
          ${question.header}
        </span>
      `}

      <!-- Question Text -->
      <p class="font-medium text-bright">${question.question || question.text}</p>

      <!-- Options -->
      ${options.length > 0 && html`
        <div class="space-y-1.5">
          ${options.map((opt, i) => html`
            <${OptionButton}
              key=${i}
              number=${i + 1}
              label=${opt.label || opt}
              description=${opt.description}
              onClick=${() => handleOptionClick(opt.label || opt)}
            />
          `)}
        </div>
      `}

      <!-- Freeform Input -->
      <form onSubmit=${handleFreeformSubmit} class="flex items-end gap-2.5">
        <textarea
          ref=${textareaRef}
          value=${freeformText}
          onInput=${(e) => {
            setFreeformText(e.target.value);
            e.target.style.height = 'auto';
            e.target.style.height = e.target.scrollHeight + 'px';
          }}
          onKeyDown=${(e) => {
            if (e.key === 'Enter' && !e.shiftKey) {
              e.preventDefault();
              handleFreeformSubmit(e);
            }
          }}
          onFocus=${() => setFocused(true)}
          onBlur=${() => setFocused(false)}
          placeholder="Type a response..."
          rows="1"
          class="flex-1 resize-none overflow-hidden rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-fg transition-colors duration-150 placeholder:text-dim focus:outline-none"
          style=${{ minHeight: '38px', maxHeight: '150px', borderColor: focused ? meta.borderColor : undefined }}
          disabled=${sending}
        />
        <button
          type="submit"
          class="shrink-0 rounded-xl px-3 py-2 text-sm font-medium transition-[transform,filter] duration-150 hover:-translate-y-0.5 hover:brightness-110 disabled:cursor-not-allowed disabled:opacity-50"
          style=${{ backgroundColor: meta.borderColor, color: '#0a0f18' }}
          disabled=${sending || !freeformText.trim()}
        >
          ${sending ? 'Sending...' : 'Send'}
        </button>
      </form>

      <!-- More Questions Indicator -->
      ${remainingCount > 0 && html`
        <p class="font-mono text-label text-dim">+ ${remainingCount} more question${remainingCount > 1 ? 's' : ''} after this</p>
      `}
    </div>
  `;
}
177 dashboard/components/SessionCard.js Normal file
@@ -0,0 +1,177 @@
import { html, useEffect, useRef } from '../lib/preact.js';
import { getStatusMeta } from '../utils/status.js';
import { formatDuration, getContextUsageSummary } from '../utils/formatting.js';
import { ChatMessages } from './ChatMessages.js';
import { QuestionBlock } from './QuestionBlock.js';
import { SimpleInput } from './SimpleInput.js';
import { AgentActivityIndicator } from './AgentActivityIndicator.js';

export function SessionCard({ session, onClick, conversation, onFetchConversation, onRespond, onDismiss, enlarged = false, autocompleteConfig = null, isNewlySpawned = false }) {
  const hasQuestions = session.pending_questions && session.pending_questions.length > 0;
  const statusMeta = getStatusMeta(session.status);
  const agent = session.agent === 'codex' ? 'codex' : 'claude';
  const agentHeaderClass = agent === 'codex' ? 'agent-header-codex' : 'agent-header-claude';
  const contextUsage = getContextUsageSummary(session.context_usage);

  // Fetch conversation when card mounts
  useEffect(() => {
    if (!conversation && onFetchConversation) {
      onFetchConversation(session.session_id, session.project_dir, agent);
    }
  }, [session.session_id, session.project_dir, agent, conversation, onFetchConversation]);

  const chatPaneRef = useRef(null);
  const stickyToBottomRef = useRef(true); // Start in "sticky" mode
  const scrollUpAccumulatorRef = useRef(0); // Track cumulative scroll-up distance
  const prevConversationLenRef = useRef(0);

  // Track user intent via wheel events (only fires from actual user scrolling)
  useEffect(() => {
    const el = chatPaneRef.current;
    if (!el) return;

    const handleWheel = (e) => {
      // User scrolling up - accumulate distance before disabling sticky
      if (e.deltaY < 0) {
        scrollUpAccumulatorRef.current += Math.abs(e.deltaY);
        // Only disable sticky mode after scrolling up ~50px (meaningful intent)
        if (scrollUpAccumulatorRef.current > 50) {
          stickyToBottomRef.current = false;
        }
      }

      // User scrolling down - reset accumulator and check if near bottom
      if (e.deltaY > 0) {
        scrollUpAccumulatorRef.current = 0;
        const distanceFromBottom = el.scrollHeight - el.scrollTop - el.clientHeight;
        if (distanceFromBottom < 100) {
          stickyToBottomRef.current = true;
        }
      }
    };

    el.addEventListener('wheel', handleWheel, { passive: true });
    return () => el.removeEventListener('wheel', handleWheel);
  }, []);

  // Auto-scroll when conversation changes
  useEffect(() => {
    const el = chatPaneRef.current;
    if (!el || !conversation) return;

    const prevLen = prevConversationLenRef.current;
    const currLen = conversation.length;
    const hasNewMessages = currLen > prevLen;
    const isFirstLoad = prevLen === 0 && currLen > 0;

    // Check if user just submitted (always scroll for their own messages)
    const lastMsg = conversation[currLen - 1];
    const userJustSubmitted = hasNewMessages && lastMsg?.role === 'user';

    prevConversationLenRef.current = currLen;

    // Auto-scroll if in sticky mode, first load, or user just submitted
    if (isFirstLoad || userJustSubmitted || (hasNewMessages && stickyToBottomRef.current)) {
      requestAnimationFrame(() => {
        el.scrollTop = el.scrollHeight;
      });
    }
  }, [conversation]);

  const handleDismissClick = (e) => {
    e.stopPropagation();
    if (onDismiss) onDismiss(session.session_id);
  };

  // Container classes differ based on enlarged mode
  const spawnClass = isNewlySpawned ? ' session-card-spawned' : '';
  const containerClasses = enlarged
    ? 'glass-panel flex w-full max-w-[90vw] max-h-[90vh] flex-col overflow-hidden rounded-2xl border border-selection/80'
    : 'glass-panel flex h-[850px] max-h-[850px] w-[600px] cursor-pointer flex-col overflow-hidden rounded-xl border border-selection/70 transition-[border-color,box-shadow] duration-200 hover:border-starting/35 hover:shadow-panel' + spawnClass;

  return html`
    <div
      class=${containerClasses}
      style=${{ borderLeftWidth: '3px', borderLeftColor: statusMeta.borderColor }}
      onClick=${enlarged ? undefined : () => onClick && onClick(session)}
    >
      <!-- Card Header -->
      <div class="shrink-0 border-b px-4 py-3 ${agentHeaderClass}">
        <div class="flex items-start justify-between gap-2.5">
          <div class="min-w-0">
            <div class="flex items-center gap-2.5">
              <span class="h-2 w-2 shrink-0 rounded-full ${statusMeta.dot} ${statusMeta.spinning ? 'spinner-dot' : ''}" style=${{ color: statusMeta.borderColor }}></span>
              <span class="truncate font-display text-base font-medium text-bright">${session.project || session.name || 'Session'}</span>
            </div>
            <div class="mt-2 flex flex-wrap items-center gap-2">
              <span class="rounded-full border px-2.5 py-1 font-mono text-micro uppercase tracking-[0.14em] ${statusMeta.badge}">
                ${statusMeta.label}
              </span>
              <span class="rounded-full border px-2.5 py-1 font-mono text-micro uppercase tracking-[0.14em] ${agent === 'codex' ? 'border-emerald-400/45 bg-emerald-500/14 text-emerald-300' : 'border-violet-400/45 bg-violet-500/14 text-violet-300'}">
                ${agent}
              </span>
              ${session.project_dir && html`
                <span class="truncate rounded-full border border-selection bg-bg/40 px-2.5 py-1 font-mono text-micro text-dim">
                  ${session.project_dir.split('/').slice(-2).join('/')}
                </span>
              `}
            </div>
          </div>
          <div class="flex items-center gap-3 shrink-0 pt-0.5">
            <span class="font-mono text-xs tabular-nums text-dim">${formatDuration(session.started_at)}</span>
            ${session.status === 'done' && html`
              <button
                onClick=${handleDismissClick}
                class="flex h-7 w-7 items-center justify-center rounded-lg border border-selection/80 text-dim transition-colors hover:border-done/40 hover:bg-done/10 hover:text-bright"
                title="Dismiss"
              >
                <svg class="h-3.5 w-3.5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
                  <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12"/>
                </svg>
              </button>
            `}
          </div>
        </div>
      </div>

      <!-- Card Content Area (Chat) -->
      <div ref=${chatPaneRef} class="flex-1 min-h-0 overflow-y-auto bg-surface px-4 py-4">
        <${ChatMessages} messages=${conversation || []} status=${session.status} limit=${enlarged ? null : 20} />
      </div>

      <!-- Card Footer (Status + Input/Questions) -->
      <div class="shrink-0 border-t border-selection/70 bg-bg/55">
        <!-- Session Status Area -->
        <div class="flex items-center justify-between gap-3 px-4 py-2 border-b border-selection/50 bg-bg/60">
          <${AgentActivityIndicator} session=${session} />
          ${contextUsage && html`
            <div class="flex items-center gap-2 rounded-lg border border-selection/80 bg-bg/45 px-2.5 py-1.5 font-mono text-label text-dim" title=${contextUsage.title}>
              <span class="text-bright">${contextUsage.headline}</span>
              <span class="truncate">${contextUsage.detail}</span>
              ${contextUsage.trail && html`<span class="text-dim/80">${contextUsage.trail}</span>`}
            </div>
          `}
        </div>
        <!-- Actions Area -->
        <div class="p-4">
          ${hasQuestions ? html`
            <${QuestionBlock}
              questions=${session.pending_questions}
              sessionId=${session.session_id}
              status=${session.status}
              onRespond=${onRespond}
            />
          ` : html`
            <${SimpleInput}
              sessionId=${session.session_id}
              status=${session.status}
              onRespond=${onRespond}
              autocompleteConfig=${autocompleteConfig}
              conversation=${conversation}
            />
          `}
        </div>
      </div>
    </div>
  `;
}
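SessionCard's chat pane uses a "sticky scroll" heuristic: small upward wheel deltas are accumulated so an accidental nudge does not cancel auto-follow, and scrolling back near the bottom re-enables it. The decision logic can be sketched as a standalone, framework-free helper (names and thresholds here mirror the component but the helper itself is illustrative, not part of the codebase):

```javascript
// Pure-logic sketch of the sticky-scroll decision. The caller feeds it wheel
// deltas plus the current distance from the bottom of the scroll container,
// and checks isSticky() before auto-scrolling on new content.
function createStickyTracker(threshold = 50, bottomZone = 100) {
  let sticky = true;   // start following the bottom
  let upAccum = 0;     // cumulative upward scroll distance

  return {
    onWheel(deltaY, distanceFromBottom) {
      if (deltaY < 0) {
        // Scrolling up: only break away after ~threshold px of clear intent
        upAccum += Math.abs(deltaY);
        if (upAccum > threshold) sticky = false;
      } else if (deltaY > 0) {
        // Scrolling down: reset intent; re-stick when near the bottom
        upAccum = 0;
        if (distanceFromBottom < bottomZone) sticky = true;
      }
    },
    isSticky: () => sticky,
  };
}
```

The component keeps the same two pieces of state in refs (`stickyToBottomRef`, `scrollUpAccumulatorRef`) rather than `useState`, so wheel events never trigger re-renders.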
102 dashboard/components/Sidebar.js Normal file
@@ -0,0 +1,102 @@
import { html } from '../lib/preact.js';
import { getStatusMeta, STATUS_PRIORITY } from '../utils/status.js';

export function Sidebar({ projectGroups, selectedProject, onSelectProject, totalSessions }) {
  // Calculate totals for "All Projects"
  const allStatusCounts = { needs_attention: 0, active: 0, starting: 0, done: 0 };
  for (const group of projectGroups) {
    for (const s of group.sessions) {
      allStatusCounts[s.status] = (allStatusCounts[s.status] || 0) + 1;
    }
  }

  // Worst status across all projects
  const allWorstStatus = totalSessions > 0
    ? Object.keys(allStatusCounts).reduce((worst, status) =>
        allStatusCounts[status] > 0 && (STATUS_PRIORITY[status] ?? 99) < (STATUS_PRIORITY[worst] ?? 99) ? status : worst
      , 'done')
    : 'done';
  const allWorstMeta = getStatusMeta(allWorstStatus);

  // Tiny inline status indicator
  const StatusPips = ({ counts }) => html`
    <div class="flex items-center gap-1 shrink-0">
      ${counts.needs_attention > 0 && html`<span class="rounded-full bg-attention/20 px-1.5 py-0.5 font-mono text-micro tabular-nums text-attention">${counts.needs_attention}</span>`}
      ${counts.active > 0 && html`<span class="rounded-full bg-active/20 px-1.5 py-0.5 font-mono text-micro tabular-nums text-active">${counts.active}</span>`}
      ${counts.starting > 0 && html`<span class="rounded-full bg-starting/20 px-1.5 py-0.5 font-mono text-micro tabular-nums text-starting">${counts.starting}</span>`}
      ${counts.done > 0 && html`<span class="rounded-full bg-done/15 px-1.5 py-0.5 font-mono text-micro tabular-nums text-done/70">${counts.done}</span>`}
    </div>
  `;

  return html`
    <aside class="fixed left-0 top-0 z-40 flex h-screen w-80 flex-col border-r border-selection/50 bg-surface/95 backdrop-blur-sm">
      <!-- Sidebar Header -->
      <div class="shrink-0 border-b border-selection/50 px-5 py-4">
        <div class="inline-flex items-center gap-2 rounded-full border border-starting/40 bg-starting/10 px-2.5 py-0.5 text-micro font-medium uppercase tracking-[0.2em] text-starting">
          <span class="h-1.5 w-1.5 rounded-full bg-starting animate-float"></span>
          Control Plane
        </div>
        <h1 class="mt-2 font-display text-lg font-semibold text-bright">
          Agent Mission Control
        </h1>
      </div>

      <!-- Project List -->
      <nav class="flex-1 overflow-y-auto px-3 py-3">
        <!-- All Projects -->
        <button
          onClick=${() => onSelectProject(null)}
          class="group flex w-full items-center gap-2.5 rounded-lg px-3 py-2 text-left transition-colors duration-150 ${
            selectedProject === null
              ? 'bg-selection/50'
              : 'hover:bg-selection/25'
          }"
        >
          <span class="h-2 w-2 shrink-0 rounded-full ${allWorstMeta.dot}"></span>
          <span class="flex-1 truncate font-medium text-bright">All Projects</span>
          <${StatusPips} counts=${allStatusCounts} />
        </button>

        <!-- Divider -->
        <div class="my-2 border-t border-selection/30"></div>

        <!-- Individual Projects -->
        ${projectGroups.map(group => {
          const statusCounts = { needs_attention: 0, active: 0, starting: 0, done: 0 };
          for (const s of group.sessions) {
            statusCounts[s.status] = (statusCounts[s.status] || 0) + 1;
          }

          const worstStatus = group.sessions.reduce((worst, s) =>
            (STATUS_PRIORITY[s.status] ?? 99) < (STATUS_PRIORITY[worst] ?? 99) ? s.status : worst
          , 'done');
          const worstMeta = getStatusMeta(worstStatus);
          const isSelected = selectedProject === group.projectDir;

          return html`
            <button
              key=${group.projectDir}
              onClick=${() => onSelectProject(group.projectDir)}
              class="group flex w-full items-center gap-2.5 rounded-lg px-3 py-2 text-left transition-colors duration-150 ${
                isSelected
                  ? 'bg-selection/50'
                  : 'hover:bg-selection/25'
              }"
            >
              <span class="h-2 w-2 shrink-0 rounded-full ${worstMeta.dot}"></span>
              <span class="flex-1 truncate text-fg">${group.projectName}</span>
              <${StatusPips} counts=${statusCounts} />
            </button>
          `;
        })}
      </nav>

      <!-- Sidebar Footer -->
      <div class="shrink-0 border-t border-selection/50 px-5 py-3">
        <div class="font-mono text-micro text-dim">
          ${totalSessions} session${totalSessions === 1 ? '' : 's'} total
        </div>
      </div>
    </aside>
  `;
}
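The sidebar's rollup picks the most urgent status in a session group with a single `reduce` over a priority table. A minimal standalone sketch, under the assumption that lower priority numbers mean more urgent (the real table lives in `utils/status.js`; the values below are illustrative):

```javascript
// Hypothetical priority table: lower number = more urgent. Unknown statuses
// fall back to 99 so they never beat a known one.
const STATUS_PRIORITY = { needs_attention: 0, active: 1, starting: 2, done: 3 };

// Returns the most urgent status among sessions, defaulting to 'done' when empty.
function worstStatus(sessions) {
  return sessions.reduce(
    (worst, s) =>
      (STATUS_PRIORITY[s.status] ?? 99) < (STATUS_PRIORITY[worst] ?? 99) ? s.status : worst,
    'done'
  );
}
```

Seeding the reduce with `'done'` (the lowest urgency) means an empty group reads as quiet rather than undefined.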
288 dashboard/components/SimpleInput.js Normal file
@@ -0,0 +1,288 @@
import { html, useState, useRef, useCallback, useMemo, useEffect } from '../lib/preact.js';
import { getStatusMeta } from '../utils/status.js';
import { getTriggerInfo as _getTriggerInfo, filteredSkills as _filteredSkills } from '../utils/autocomplete.js';

export function SimpleInput({ sessionId, status, onRespond, autocompleteConfig = null, conversation }) {
  const [text, setText] = useState('');
  const [focused, setFocused] = useState(false);
  const [sending, setSending] = useState(false);
  const [error, setError] = useState(null);
  const [triggerInfo, setTriggerInfo] = useState(null);
  const [showAutocomplete, setShowAutocomplete] = useState(false);
  const [selectedIndex, setSelectedIndex] = useState(0);
  const textareaRef = useRef(null);
  const autocompleteRef = useRef(null);
  const historyIndexRef = useRef(-1);
  const draftRef = useRef('');
  const meta = getStatusMeta(status);

  const userHistory = useMemo(
    () => (conversation || []).filter(m => m.role === 'user').map(m => m.content),
    [conversation]
  );

  const getTriggerInfo = useCallback((value, cursorPos) => {
    return _getTriggerInfo(value, cursorPos, autocompleteConfig);
  }, [autocompleteConfig]);

  const filteredSkills = useMemo(() => {
    return _filteredSkills(autocompleteConfig, triggerInfo);
  }, [autocompleteConfig, triggerInfo]);

  // Show/hide autocomplete based on trigger detection
  useEffect(() => {
    const shouldShow = triggerInfo !== null;
    setShowAutocomplete(shouldShow);
    // Reset selection when dropdown opens
    if (shouldShow) {
      setSelectedIndex(0);
    }
  }, [triggerInfo]);

  // Clamp selectedIndex when filtered list changes
  useEffect(() => {
    if (filteredSkills.length > 0 && selectedIndex >= filteredSkills.length) {
      setSelectedIndex(filteredSkills.length - 1);
    }
  }, [filteredSkills.length, selectedIndex]);

  // Click outside dismisses dropdown
  useEffect(() => {
    if (!showAutocomplete) return;

    const handleClickOutside = (e) => {
      if (autocompleteRef.current && !autocompleteRef.current.contains(e.target) &&
          textareaRef.current && !textareaRef.current.contains(e.target)) {
        setShowAutocomplete(false);
      }
    };

    document.addEventListener('mousedown', handleClickOutside);
    return () => document.removeEventListener('mousedown', handleClickOutside);
  }, [showAutocomplete]);

  // Scroll selected item into view when navigating with arrow keys
  useEffect(() => {
    if (showAutocomplete && autocompleteRef.current) {
      const container = autocompleteRef.current;
      const selectedEl = container.children[selectedIndex];
      if (selectedEl) {
        selectedEl.scrollIntoView({ block: 'nearest' });
      }
    }
  }, [selectedIndex, showAutocomplete]);

  // Insert a selected skill into the text
  const insertSkill = useCallback((skill) => {
    if (!triggerInfo || !autocompleteConfig) return;

    const { trigger } = autocompleteConfig;
    const { replaceStart, replaceEnd } = triggerInfo;

    const before = text.slice(0, replaceStart);
    const after = text.slice(replaceEnd);
    const inserted = `${trigger}${skill.name} `;

    setText(before + inserted + after);
    setShowAutocomplete(false);
    setTriggerInfo(null);

    // Move cursor after inserted text
    const newCursorPos = replaceStart + inserted.length;
    setTimeout(() => {
      if (textareaRef.current) {
        textareaRef.current.selectionStart = newCursorPos;
        textareaRef.current.selectionEnd = newCursorPos;
        textareaRef.current.focus();
      }
    }, 0);
  }, [text, triggerInfo, autocompleteConfig]);

  const handleSubmit = async (e) => {
    e.preventDefault();
    e.stopPropagation();
    if (text.trim() && !sending) {
      setSending(true);
      setError(null);
      try {
        await onRespond(sessionId, text.trim(), true, 0);
        setText('');
        historyIndexRef.current = -1;
      } catch (err) {
        setError('Failed to send message');
        console.error('SimpleInput send error:', err);
      } finally {
        setSending(false);
        // Refocus the textarea after submission
        // Use setTimeout to ensure React has re-rendered with disabled=false
        setTimeout(() => {
          textareaRef.current?.focus();
        }, 0);
      }
    }
  };

  return html`
    <form onSubmit=${handleSubmit} class="flex flex-col gap-2" onClick=${(e) => e.stopPropagation()}>
      ${error && html`
        <div class="rounded-lg border border-attention/40 bg-attention/12 px-3 py-1.5 text-sm text-attention">
          ${error}
        </div>
      `}
      <div class="flex items-end gap-2.5">
        <div class="relative flex-1">
          <textarea
            ref=${textareaRef}
            value=${text}
            onInput=${(e) => {
              const value = e.target.value;
              const cursorPos = e.target.selectionStart;
              setText(value);
              historyIndexRef.current = -1;
              setTriggerInfo(getTriggerInfo(value, cursorPos));
              e.target.style.height = 'auto';
              e.target.style.height = e.target.scrollHeight + 'px';
            }}
            onKeyDown=${(e) => {
              if (showAutocomplete) {
                // Escape dismisses dropdown
                if (e.key === 'Escape') {
                  e.preventDefault();
                  setShowAutocomplete(false);
                  return;
                }

                // Enter/Tab: select if matches exist, otherwise dismiss
                if (e.key === 'Enter' || e.key === 'Tab') {
                  e.preventDefault();
                  if (filteredSkills.length > 0 && filteredSkills[selectedIndex]) {
                    insertSkill(filteredSkills[selectedIndex]);
                  } else {
                    setShowAutocomplete(false);
                  }
                  return;
                }

                // Arrow navigation
                if (filteredSkills.length > 0) {
                  if (e.key === 'ArrowDown') {
                    e.preventDefault();
                    setSelectedIndex(i => Math.min(i + 1, filteredSkills.length - 1));
                    return;
                  }
                  if (e.key === 'ArrowUp') {
                    e.preventDefault();
                    setSelectedIndex(i => Math.max(i - 1, 0));
                    return;
                  }
                }
              }

              // History navigation (only when autocomplete is closed)
              if (e.key === 'ArrowUp' && !showAutocomplete &&
                  e.target.selectionStart === 0 && e.target.selectionEnd === 0 &&
                  userHistory.length > 0) {
                e.preventDefault();
                if (historyIndexRef.current === -1) {
                  draftRef.current = text;
                  historyIndexRef.current = userHistory.length - 1;
                } else if (historyIndexRef.current > 0) {
                  historyIndexRef.current -= 1;
                }
                // Clamp if history shrank since last navigation
                if (historyIndexRef.current >= userHistory.length) {
                  historyIndexRef.current = userHistory.length - 1;
                }
                const historyText = userHistory[historyIndexRef.current];
                setText(historyText);
                setTimeout(() => {
                  if (textareaRef.current) {
                    textareaRef.current.selectionStart = historyText.length;
                    textareaRef.current.selectionEnd = historyText.length;
                    textareaRef.current.style.height = 'auto';
                    textareaRef.current.style.height = textareaRef.current.scrollHeight + 'px';
                  }
                }, 0);
                return;
              }

              if (e.key === 'ArrowDown' && !showAutocomplete &&
                  historyIndexRef.current !== -1) {
                e.preventDefault();
                historyIndexRef.current += 1;
                let newText;
                if (historyIndexRef.current >= userHistory.length) {
                  historyIndexRef.current = -1;
                  newText = draftRef.current;
                } else {
                  newText = userHistory[historyIndexRef.current];
                }
                setText(newText);
                setTimeout(() => {
                  if (textareaRef.current) {
                    textareaRef.current.selectionStart = newText.length;
                    textareaRef.current.selectionEnd = newText.length;
                    textareaRef.current.style.height = 'auto';
                    textareaRef.current.style.height = textareaRef.current.scrollHeight + 'px';
                  }
                }, 0);
                return;
              }

              // Normal Enter-to-submit (only when dropdown is closed)
              if (e.key === 'Enter' && !e.shiftKey) {
                e.preventDefault();
                handleSubmit(e);
              }
            }}
            onFocus=${() => setFocused(true)}
            onBlur=${() => setFocused(false)}
            placeholder="Send a message..."
            rows="1"
            class="w-full resize-none overflow-hidden rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-fg transition-colors duration-150 placeholder:text-dim focus:outline-none"
            style=${{ minHeight: '38px', maxHeight: '150px', borderColor: focused ? meta.borderColor : undefined }}
            disabled=${sending}
          />
          ${showAutocomplete && html`
            <div
              ref=${autocompleteRef}
              class="absolute left-0 bottom-full mb-1 w-full max-h-48 overflow-y-auto rounded-lg border border-selection/75 bg-surface shadow-lg z-50"
            >
              ${autocompleteConfig.skills.length === 0 ? html`
                <div class="px-3 py-2 text-sm text-dim">No skills available</div>
              ` : filteredSkills.length === 0 ? html`
                <div class="px-3 py-2 text-sm text-dim">No matching skills</div>
              ` : filteredSkills.map((skill, i) => html`
                <div
                  key=${skill.name}
                  class="group relative px-3 py-1.5 cursor-pointer text-sm font-mono transition-colors ${
                    i === selectedIndex
                      ? 'bg-selection/50 text-bright'
                      : 'text-fg hover:bg-selection/25'
                  }"
                  onClick=${() => insertSkill(skill)}
                  onMouseEnter=${() => setSelectedIndex(i)}
                >
                  ${autocompleteConfig.trigger}${skill.name}
                  ${i === selectedIndex && skill.description && html`
                    <div class="absolute left-full top-0 ml-2 w-64 px-2.5 py-1.5 rounded-md border border-selection/75 bg-surface shadow-lg text-micro text-dim font-sans whitespace-normal z-50">
                      ${skill.description}
                    </div>
                  `}
                </div>
              `)}
            </div>
          `}
        </div>
        <button
          type="submit"
          class="shrink-0 rounded-xl px-3 py-2 text-sm font-medium transition-[transform,filter] duration-150 hover:-translate-y-0.5 hover:brightness-110 disabled:cursor-not-allowed disabled:opacity-50"
          style=${{ backgroundColor: meta.borderColor, color: '#0a0f18' }}
          disabled=${sending || !text.trim()}
        >
          ${sending ? 'Sending...' : 'Send'}
        </button>
      </div>
    </form>
  `;
}
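SimpleInput's ArrowUp/ArrowDown handling is a shell-style prompt history: ArrowUp at the start of the textarea walks backward through prior user messages, ArrowDown walks forward and finally restores the unsent draft. The same state machine can be extracted as a pure helper (a hypothetical refactoring for illustration; the component keeps `index` and `draft` in `historyIndexRef`/`draftRef`):

```javascript
// index === -1 means "editing the live draft"; otherwise it points into history.
function createHistoryNav(history) {
  let index = -1;
  let draft = '';
  return {
    // ArrowUp: save the draft on first press, then step back; returns text to show.
    up(currentText) {
      if (history.length === 0) return currentText;
      if (index === -1) {
        draft = currentText;
        index = history.length - 1;
      } else if (index > 0) {
        index -= 1;
      }
      return history[index];
    },
    // ArrowDown: step forward; past the newest entry, restore the saved draft.
    // Returns null when not currently navigating history.
    down() {
      if (index === -1) return null;
      index += 1;
      if (index >= history.length) {
        index = -1;
        return draft;
      }
      return history[index];
    },
  };
}
```

Saving the draft before the first ArrowUp is the detail that makes the pattern feel non-destructive: the user can browse history and still get their half-typed message back.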
241 dashboard/components/SpawnModal.js Normal file
@@ -0,0 +1,241 @@
import { html, useState, useEffect, useCallback, useRef } from '../lib/preact.js';
import { API_PROJECTS, API_SPAWN, fetchWithTimeout, API_TIMEOUT_MS } from '../utils/api.js';

// Spawn needs a longer timeout: the pending-spawn registry requires a discovery
// cycle to run, and the server then polls for session-file confirmation
const SPAWN_TIMEOUT_MS = API_TIMEOUT_MS * 2;

export function SpawnModal({ isOpen, onClose, onSpawn, currentProject }) {
  const [projects, setProjects] = useState([]);
  const [selectedProject, setSelectedProject] = useState('');
  const [agentType, setAgentType] = useState('claude');
  const [loading, setLoading] = useState(false);
  const [loadingProjects, setLoadingProjects] = useState(false);
  const [closing, setClosing] = useState(false);
  const [error, setError] = useState(null);

  const needsProjectPicker = !currentProject;

  const dropdownNodeRef = useRef(null);
  const dropdownRef = useCallback((node) => {
    if (node) dropdownNodeRef.current = node;
  }, []);

  // Click outside dismisses dropdown
  useEffect(() => {
    if (!isOpen) return;
    const handleClickOutside = (e) => {
      if (dropdownNodeRef.current && !dropdownNodeRef.current.contains(e.target)) {
        handleClose();
      }
    };
    document.addEventListener('mousedown', handleClickOutside);
    return () => document.removeEventListener('mousedown', handleClickOutside);
  }, [isOpen]);

  // Reset state on open
  useEffect(() => {
    if (isOpen) {
      setAgentType('claude');
      setError(null);
      setLoading(false);
      setClosing(false);
    }
  }, [isOpen]);

  // Fetch projects when needed
  useEffect(() => {
    if (isOpen && needsProjectPicker) {
      setLoadingProjects(true);
      fetchWithTimeout(API_PROJECTS)
        .then(r => r.json())
        .then(data => {
          setProjects(data.projects || []);
          setSelectedProject('');
        })
        .catch(err => setError(err.message))
        .finally(() => setLoadingProjects(false));
    }
  }, [isOpen, needsProjectPicker]);

  // Animated close handler
  const handleClose = useCallback(() => {
    if (loading) return;
    setClosing(true);
    setTimeout(() => {
      setClosing(false);
      onClose();
    }, 200);
  }, [loading, onClose]);

  // Handle escape key
  useEffect(() => {
    if (!isOpen) return;
    const handleKeyDown = (e) => {
      if (e.key === 'Escape') handleClose();
    };
    document.addEventListener('keydown', handleKeyDown);
    return () => document.removeEventListener('keydown', handleKeyDown);
  }, [isOpen, handleClose]);

  const handleSpawn = async () => {
    const rawProject = currentProject || selectedProject;
    if (!rawProject) {
      setError('Please select a project');
      return;
    }

    // Extract project name from full path (sidebar passes projectDir like "/Users/.../projects/amc")
    const project = rawProject.includes('/') ? rawProject.split('/').pop() : rawProject;

    setLoading(true);
    setError(null);

    try {
      const response = await fetchWithTimeout(API_SPAWN, {
        method: 'POST',
        headers: {
          'Content-Type': 'application/json',
          'Authorization': `Bearer ${window.AMC_AUTH_TOKEN}`,
        },
        body: JSON.stringify({ project, agent_type: agentType }),
      }, SPAWN_TIMEOUT_MS);
      const data = await response.json();

      if (data.ok) {
        onSpawn({ success: true, project, agentType, spawnId: data.spawn_id });
        handleClose();
      } else {
        setError(data.error || 'Spawn failed');
        onSpawn({ error: data.error });
      }
    } catch (err) {
      const msg = err.name === 'AbortError' ? 'Request timed out' : err.message;
      setError(msg);
      onSpawn({ error: msg });
    } finally {
      setLoading(false);
    }
  };

  if (!isOpen) return null;

  const canSpawn = !loading && (currentProject || selectedProject);

  return html`
    <div
      ref=${dropdownRef}
      class="absolute right-0 top-full mt-2 z-50 glass-panel w-80 rounded-xl border border-selection/70 shadow-lg ${closing ? 'modal-panel-out' : 'modal-panel-in'}"
      onClick=${(e) => e.stopPropagation()}
    >
      <!-- Header -->
      <div class="flex items-center justify-between border-b border-selection/70 px-4 py-3">
        <h2 class="font-display text-sm font-semibold text-bright">Spawn Agent</h2>
        <button
          onClick=${handleClose}
          disabled=${loading}
          class="flex h-6 w-6 items-center justify-center rounded-lg border border-selection/80 text-dim transition-colors hover:border-done/40 hover:bg-done/10 hover:text-bright disabled:cursor-not-allowed disabled:opacity-50"
        >
          <svg class="h-3.5 w-3.5" fill="none" stroke="currentColor" viewBox="0 0 24 24">
            <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12" />
          </svg>
        </button>
      </div>

      <!-- Body -->
      <div class="flex flex-col gap-3 px-4 py-3">
        ${needsProjectPicker && html`
          <div class="flex flex-col gap-1.5">
            <label class="text-label font-medium text-dim">Project</label>
            ${loadingProjects ? html`
              <div class="flex items-center gap-2 rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-dim">
                <span class="working-dots"><span>.</span><span>.</span><span>.</span></span>
                Loading projects
              </div>
            ` : projects.length === 0 ? html`
              <div class="rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-dim">
                No projects found in ~/projects/
              </div>
            ` : html`
              <select
                value=${selectedProject}
                onChange=${(e) => { setSelectedProject(e.target.value); setError(null); }}
                class="w-full rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-fg transition-colors duration-150 focus:border-starting/60 focus:outline-none"
              >
                <option value="" disabled>Select a project...</option>
                ${projects.map(p => html`
                  <option key=${p} value=${p}>${p}</option>
                `)}
              </select>
            `}
          </div>
        `}

        ${currentProject && html`
          <div class="flex flex-col gap-1.5">
            <label class="text-label font-medium text-dim">Project</label>
            <div class="rounded-xl border border-selection/75 bg-bg/70 px-3 py-2 text-sm text-bright">
              ${currentProject.includes('/') ? currentProject.split('/').pop() : currentProject}
            </div>
          </div>
        `}

        <!-- Agent type -->
        <div class="flex flex-col gap-1.5">
          <label class="text-label font-medium text-dim">Agent Type</label>
          <div class="flex gap-2">
            <button
              onClick=${() => setAgentType('claude')}
              class="flex-1 rounded-xl border px-3 py-2 text-sm font-medium transition-colors duration-150 ${
                agentType === 'claude'
                  ? 'border-violet-400/45 bg-violet-500/14 text-violet-300'
                  : 'border-selection/75 bg-bg/70 text-dim hover:border-selection hover:text-fg'
              }"
            >
              Claude
            </button>
            <button
              onClick=${() => setAgentType('codex')}
              class="flex-1 rounded-xl border px-3 py-2 text-sm font-medium transition-colors duration-150 ${
                agentType === 'codex'
                  ? 'border-emerald-400/45 bg-emerald-500/14 text-emerald-300'
                  : 'border-selection/75 bg-bg/70 text-dim hover:border-selection hover:text-fg'
              }"
            >
              Codex
            </button>
          </div>
        </div>

        ${error && html`
          <div class="rounded-lg border border-attention/40 bg-attention/12 px-3 py-1.5 text-sm text-attention">
            ${error}
          </div>
        `}
      </div>

      <!-- Footer -->
      <div class="flex items-center justify-end gap-2 border-t border-selection/70 px-4 py-2.5">
        <button
          onClick=${handleClose}
          disabled=${loading}
          class="rounded-xl border border-selection/75 bg-bg/70 px-4 py-2 text-sm font-medium text-dim transition-colors hover:border-selection hover:text-fg disabled:cursor-not-allowed disabled:opacity-50"
        >
          Cancel
        </button>
        <button
          onClick=${handleSpawn}
          disabled=${!canSpawn}
          class="rounded-xl px-4 py-2 text-sm font-medium transition-[transform,filter] duration-150 hover:-translate-y-0.5 hover:brightness-110 disabled:cursor-not-allowed disabled:opacity-50 disabled:hover:translate-y-0 disabled:hover:brightness-100 ${
            agentType === 'claude'
              ? 'bg-violet-500 text-white'
              : 'bg-emerald-500 text-white'
          }"
        >
          ${loading ? 'Spawning...' : 'Spawn'}
        </button>
      </div>
    </div>
  `;
}
dashboard/components/Toast.js (new file, 125 lines)
@@ -0,0 +1,125 @@
import { html, useState, useEffect, useCallback, useRef } from '../lib/preact.js';

/**
 * Lightweight toast notification system.
 * Tracks error counts and surfaces persistent issues.
 */

// Singleton state for toast management (shared across components)
let toastListeners = [];
let toastIdCounter = 0;

export function showToast(message, type = 'error', duration = 5000) {
  const id = ++toastIdCounter;
  const toast = { id, message, type, duration };
  toastListeners.forEach(listener => listener(toast));
  return id;
}

export function ToastContainer() {
  const [toasts, setToasts] = useState([]);
  const timeoutIds = useRef(new Map());

  useEffect(() => {
    const listener = (toast) => {
      setToasts(prev => [...prev, toast]);
      if (toast.duration > 0) {
        const timeoutId = setTimeout(() => {
          timeoutIds.current.delete(toast.id);
          setToasts(prev => prev.filter(t => t.id !== toast.id));
        }, toast.duration);
        timeoutIds.current.set(toast.id, timeoutId);
      }
    };
    toastListeners.push(listener);
    return () => {
      toastListeners = toastListeners.filter(l => l !== listener);
      // Clear all pending timeouts on unmount
      timeoutIds.current.forEach(id => clearTimeout(id));
      timeoutIds.current.clear();
    };
  }, []);

  const dismiss = useCallback((id) => {
    // Clear auto-dismiss timeout if exists
    const timeoutId = timeoutIds.current.get(id);
    if (timeoutId) {
      clearTimeout(timeoutId);
      timeoutIds.current.delete(id);
    }
    setToasts(prev => prev.filter(t => t.id !== id));
  }, []);

  if (toasts.length === 0) return null;

  return html`
    <div class="fixed bottom-4 right-4 z-[100] flex flex-col gap-2 pointer-events-none">
      ${toasts.map(toast => html`
        <div
          key=${toast.id}
          class="pointer-events-auto flex items-start gap-3 rounded-xl border px-4 py-3 shadow-lg backdrop-blur-sm animate-fade-in-up ${
            toast.type === 'error'
              ? 'border-attention/50 bg-attention/15 text-attention'
              : toast.type === 'success'
              ? 'border-active/50 bg-active/15 text-active'
              : 'border-starting/50 bg-starting/15 text-starting'
          }"
          style=${{ maxWidth: '380px' }}
        >
          <div class="flex-1 text-sm font-medium">${toast.message}</div>
          <button
            onClick=${() => dismiss(toast.id)}
            class="shrink-0 rounded p-0.5 opacity-70 transition-opacity hover:opacity-100"
          >
            <svg class="h-4 w-4" fill="none" stroke="currentColor" viewBox="0 0 24 24">
              <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M6 18L18 6M6 6l12 12"/>
            </svg>
          </button>
        </div>
      `)}
    </div>
  `;
}

/**
 * Error tracker for surfacing repeated failures.
 * Tracks errors by key and shows toast after threshold.
 */
const errorCounts = {};
const ERROR_THRESHOLD = 3;
const ERROR_WINDOW_MS = 30000; // 30 second window

export function trackError(key, message, { log = true, threshold = ERROR_THRESHOLD } = {}) {
  const now = Date.now();

  // Always log
  if (log) {
    console.error(`[${key}]`, message);
  }

  // Track error count within window
  if (!errorCounts[key]) {
    errorCounts[key] = { count: 0, firstAt: now, lastToastAt: 0 };
  }

  const tracker = errorCounts[key];

  // Reset if outside window
  if (now - tracker.firstAt > ERROR_WINDOW_MS) {
    tracker.count = 0;
    tracker.firstAt = now;
  }

  tracker.count++;

  // Surface toast after threshold, but not too frequently
  if (tracker.count >= threshold && now - tracker.lastToastAt > ERROR_WINDOW_MS) {
    showToast(`${message} (repeated ${tracker.count}x)`, 'error');
    tracker.lastToastAt = now;
    tracker.count = 0; // Reset after showing toast
  }
}

export function clearErrorCount(key) {
  delete errorCounts[key];
}
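The windowed threshold logic in `trackError` can be exercised on its own. Below is a minimal standalone sketch of that same count/window/throttle behavior, re-implemented here for illustration (it is not imported from `Toast.js`, and timestamps are passed in explicitly instead of calling `Date.now()`):

```javascript
// Sketch of the error-threshold logic: count failures per key inside a
// rolling window, surface one message at the threshold, then throttle.
const ERROR_THRESHOLD = 3;
const ERROR_WINDOW_MS = 30000;
const errorCounts = {};
const surfaced = []; // stands in for showToast()

function trackError(key, message, now) {
  if (!errorCounts[key]) {
    errorCounts[key] = { count: 0, firstAt: now, lastToastAt: 0 };
  }
  const tracker = errorCounts[key];
  // Reset the counter if the first failure is outside the window
  if (now - tracker.firstAt > ERROR_WINDOW_MS) {
    tracker.count = 0;
    tracker.firstAt = now;
  }
  tracker.count++;
  // Surface at the threshold, but at most once per window
  if (tracker.count >= ERROR_THRESHOLD && now - tracker.lastToastAt > ERROR_WINDOW_MS) {
    surfaced.push(`${message} (repeated ${tracker.count}x)`);
    tracker.lastToastAt = now;
    tracker.count = 0;
  }
}

// Three failures inside the window surface one message; the fourth stays
// quiet because lastToastAt throttles further messages within the window.
trackError('poll', 'poll failed', 100000);
trackError('poll', 'poll failed', 101000);
trackError('poll', 'poll failed', 102000);
trackError('poll', 'poll failed', 103000);
console.log(surfaced); // [ 'poll failed (repeated 3x)' ]
```

The count reset after surfacing means a persistent failure produces at most one toast per window rather than one per occurrence.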
dashboard/index.html (new file, 108 lines)
@@ -0,0 +1,108 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>Agent Mission Control</title>
  <link rel="preconnect" href="https://fonts.googleapis.com">
  <link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
  <link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@400;500&family=Space+Grotesk:wght@500;600;700&family=IBM+Plex+Mono:wght@400;500&family=IBM+Plex+Sans:wght@400;500;600&display=swap" rel="stylesheet">

  <!-- Tailwind CDN -->
  <script src="https://cdn.tailwindcss.com"></script>
  <script>
    tailwind.config = {
      theme: {
        extend: {
          colors: {
            bg: '#01040b',
            surface: '#070d18',
            surface2: '#0d1830',
            selection: '#223454',
            fg: '#e0ebff',
            bright: '#fbfdff',
            dim: '#8ba3cc',
            active: '#5fd0a4',
            attention: '#e0b45e',
            starting: '#7cb2ff',
            done: '#e39a8c',
          },
          fontFamily: {
            display: ['Space Grotesk', 'IBM Plex Sans', 'sans-serif'],
            sans: ['IBM Plex Sans', 'system-ui', 'sans-serif'],
            mono: ['IBM Plex Mono', 'SFMono-Regular', 'Menlo', 'monospace'],
            chat: ['JetBrains Mono', 'IBM Plex Mono', 'monospace'],
          },
          fontSize: {
            micro: ['clamp(0.68rem, 0.05vw + 0.66rem, 0.78rem)', { lineHeight: '1.35' }],
            label: ['clamp(0.76rem, 0.07vw + 0.74rem, 0.86rem)', { lineHeight: '1.4' }],
            ui: ['clamp(0.84rem, 0.09vw + 0.81rem, 0.94rem)', { lineHeight: '1.45' }],
            body: ['clamp(0.88rem, 0.1vw + 0.85rem, 0.98rem)', { lineHeight: '1.55' }],
            chat: ['clamp(0.92rem, 0.12vw + 0.89rem, 1.02rem)', { lineHeight: '1.6' }],
          },
          boxShadow: {
            panel: '0 8px 18px rgba(10, 14, 20, 0.28)',
            halo: '0 0 0 1px rgba(117, 138, 166, 0.12), 0 6px 14px rgba(10, 14, 20, 0.24)',
          },
          keyframes: {
            float: {
              '0%, 100%': { transform: 'translateY(0)' },
              '50%': { transform: 'translateY(-4px)' },
            },
            fadeInUp: {
              '0%': { opacity: '0', transform: 'translateY(10px)' },
              '100%': { opacity: '1', transform: 'translateY(0)' },
            },
          },
          animation: {
            float: 'float 6s ease-in-out infinite',
            'fade-in-up': 'fadeInUp 0.35s ease-out',
          },
        }
      },
      safelist: [
        'bg-attention/30',
        'bg-active/30',
        'bg-starting/30',
        'bg-done/30',
        'bg-attention/18',
        'bg-active/18',
        'bg-starting/18',
        'bg-done/18',
        'bg-selection/80',
        'border-attention/40',
        'border-active/40',
        'border-starting/40',
        'border-done/40',
        'border-l-attention',
        'border-l-active',
        'border-l-starting',
        'border-l-done',
        'text-attention',
        'text-active',
        'text-starting',
        'text-done',
        'border-emerald-500/30',
        'bg-emerald-500/10',
        'text-emerald-400',
        'border-emerald-400/45',
        'bg-emerald-500/14',
        'text-emerald-300',
        'border-violet-500/30',
        'bg-violet-500/10',
        'text-violet-400',
        'border-violet-400/45',
        'bg-violet-500/14',
        'text-violet-300',
      ]
    }
  </script>

  <link rel="stylesheet" href="styles.css">
  <!-- AMC_AUTH_TOKEN -->
</head>
<body class="min-h-screen text-fg antialiased">
  <div id="app"></div>
  <script type="module" src="main.js"></script>
</body>
</html>
dashboard/lib/markdown.js (new file, 159 lines)
@@ -0,0 +1,159 @@
// Markdown rendering with syntax highlighting
import { h } from 'https://esm.sh/preact@10.19.3';
import { marked } from 'https://esm.sh/marked@15.0.7';
import DOMPurify from 'https://esm.sh/dompurify@3.2.4';
import hljs from 'https://esm.sh/highlight.js@11.11.1/lib/core';
import langJavascript from 'https://esm.sh/highlight.js@11.11.1/lib/languages/javascript';
import langTypescript from 'https://esm.sh/highlight.js@11.11.1/lib/languages/typescript';
import langBash from 'https://esm.sh/highlight.js@11.11.1/lib/languages/bash';
import langJson from 'https://esm.sh/highlight.js@11.11.1/lib/languages/json';
import langPython from 'https://esm.sh/highlight.js@11.11.1/lib/languages/python';
import langRust from 'https://esm.sh/highlight.js@11.11.1/lib/languages/rust';
import langCss from 'https://esm.sh/highlight.js@11.11.1/lib/languages/css';
import langXml from 'https://esm.sh/highlight.js@11.11.1/lib/languages/xml';
import langSql from 'https://esm.sh/highlight.js@11.11.1/lib/languages/sql';
import langYaml from 'https://esm.sh/highlight.js@11.11.1/lib/languages/yaml';
import htm from 'https://esm.sh/htm@3.1.1';

const html = htm.bind(h);

// Register highlight.js languages
hljs.registerLanguage('javascript', langJavascript);
hljs.registerLanguage('js', langJavascript);
hljs.registerLanguage('typescript', langTypescript);
hljs.registerLanguage('ts', langTypescript);
hljs.registerLanguage('bash', langBash);
hljs.registerLanguage('sh', langBash);
hljs.registerLanguage('shell', langBash);
hljs.registerLanguage('json', langJson);
hljs.registerLanguage('python', langPython);
hljs.registerLanguage('py', langPython);
hljs.registerLanguage('rust', langRust);
hljs.registerLanguage('css', langCss);
hljs.registerLanguage('html', langXml);
hljs.registerLanguage('xml', langXml);
hljs.registerLanguage('sql', langSql);
hljs.registerLanguage('yaml', langYaml);
hljs.registerLanguage('yml', langYaml);

// Configure marked with highlight.js using custom renderer (v15 API)
const renderer = {
  code(token) {
    const code = token.text;
    const lang = token.lang || '';
    let highlighted;
    if (lang && hljs.getLanguage(lang)) {
      highlighted = hljs.highlight(code, { language: lang }).value;
    } else {
      highlighted = hljs.highlightAuto(code).value;
    }
    return `<pre><code class="hljs language-${lang}">${highlighted}</code></pre>`;
  }
};
marked.use({ renderer, breaks: false, gfm: true });

// Render markdown content with syntax highlighting
// All HTML is sanitized with DOMPurify before rendering to prevent XSS
export function renderContent(content) {
  if (!content) return '';
  const rawHtml = marked.parse(content);
  const safeHtml = DOMPurify.sanitize(rawHtml);
  return html`<div class="md-content" dangerouslySetInnerHTML=${{ __html: safeHtml }} />`;
}

// Generate a short summary for a tool call based on name + input.
// Uses heuristics to extract meaningful info from common input patterns
// rather than hardcoding specific tool names.
function getToolSummary(name, input) {
  if (!input || typeof input !== 'object') return name;

  // Helper to safely get string value and slice it
  const str = (val, len) => typeof val === 'string' ? val.slice(0, len) : null;

  // Try to extract a meaningful summary from common input patterns
  // Priority order matters - more specific/useful fields first

  // 1. Explicit description or summary
  let s = str(input.description, 60) || str(input.summary, 60);
  if (s) return s;

  // 2. Command/shell execution
  s = str(input.command, 60) || str(input.cmd, 60);
  if (s) return s;

  // 3. File paths - show last 2 segments for context
  const pathKeys = ['file_path', 'path', 'file', 'filename', 'filepath'];
  for (const key of pathKeys) {
    if (typeof input[key] === 'string' && input[key]) {
      return input[key].split('/').slice(-2).join('/');
    }
  }

  // 4. Search patterns
  if (typeof input.pattern === 'string' && input.pattern) {
    const glob = typeof input.glob === 'string' ? ` ${input.glob}` : '';
    return `/${input.pattern.slice(0, 40)}/${glob}`.trim();
  }
  s = str(input.query, 50) || str(input.search, 50);
  if (s) return s;
  if (typeof input.regex === 'string' && input.regex) return `/${input.regex.slice(0, 40)}/`;

  // 5. URL/endpoint
  s = str(input.url, 60) || str(input.endpoint, 60);
  if (s) return s;

  // 6. Name/title fields
  if (typeof input.name === 'string' && input.name && input.name !== name) return input.name.slice(0, 50);
  s = str(input.title, 50);
  if (s) return s;

  // 7. Message/content (for chat/notification tools)
  s = str(input.message, 50) || str(input.content, 50);
  if (s) return s;

  // 8. First string value as fallback (skip very long values)
  for (const [key, value] of Object.entries(input)) {
    if (typeof value === 'string' && value.length > 0 && value.length < 100) {
      return value.slice(0, 50);
    }
  }

  // No useful summary found
  return name;
}

// Render tool call pills (summary mode)
export function renderToolCalls(toolCalls) {
  if (!toolCalls || toolCalls.length === 0) return '';
  return html`
    <div class="flex flex-wrap gap-1.5 mt-1.5">
      ${toolCalls.map(tc => {
        const summary = getToolSummary(tc.name, tc.input);
        return html`
          <span class="inline-flex items-center gap-1 rounded-md border border-starting/30 bg-starting/10 px-2 py-0.5 font-mono text-label text-starting">
            <span class="font-medium">${tc.name}</span>
            ${summary !== tc.name && html`<span class="text-starting/65 truncate max-w-[200px]">${summary}</span>`}
          </span>
        `;
      })}
    </div>
  `;
}

// Render thinking block (full content, open by default)
// Content is sanitized with DOMPurify before rendering
export function renderThinking(thinking) {
  if (!thinking) return '';
  const rawHtml = marked.parse(thinking);
  const safeHtml = DOMPurify.sanitize(rawHtml);
  return html`
    <details class="mt-2 rounded-lg border border-violet-400/25 bg-violet-500/8" open>
      <summary class="cursor-pointer select-none px-3 py-1.5 font-mono text-label uppercase tracking-[0.14em] text-violet-300/80 hover:text-violet-200">
        Thinking
      </summary>
      <div class="border-t border-violet-400/15 px-3 py-2 text-label text-dim/90 font-chat leading-relaxed">
        <div class="md-content" dangerouslySetInnerHTML=${{ __html: safeHtml }} />
      </div>
    </details>
  `;
}
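The tool-summary heuristics above tier their checks from most to least specific. A trimmed re-implementation of the first three tiers (description, command, file path), written here only to illustrate the priority ordering and the last-two-segments path shortening, behaves like this:

```javascript
// Trimmed sketch of getToolSummary's tiered heuristics (tiers 1-3 only).
// Not imported from markdown.js; the field names mirror the real priority order.
function toolSummary(name, input) {
  if (!input || typeof input !== 'object') return name;
  const str = (val, len) => typeof val === 'string' ? val.slice(0, len) : null;

  // Tier 1: explicit description/summary wins
  let s = str(input.description, 60) || str(input.summary, 60);
  if (s) return s;

  // Tier 2: shell command
  s = str(input.command, 60) || str(input.cmd, 60);
  if (s) return s;

  // Tier 3: file path, shortened to its last two segments
  for (const key of ['file_path', 'path', 'file', 'filename', 'filepath']) {
    if (typeof input[key] === 'string' && input[key]) {
      return input[key].split('/').slice(-2).join('/');
    }
  }
  return name; // nothing useful found
}

console.log(toolSummary('Bash', { command: 'npm test' }));         // npm test
console.log(toolSummary('Read', { file_path: '/a/b/c/util.js' })); // c/util.js
console.log(toolSummary('Ping', {}));                              // Ping
```

Falling back to the tool name when no field matches keeps the pill rendering simple: `renderToolCalls` only shows the summary span when it differs from the name.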
dashboard/lib/preact.js (new file, 7 lines)
@@ -0,0 +1,7 @@
// Re-export Preact and htm for consistent imports across components
export { h, render } from 'https://esm.sh/preact@10.19.3';
export { useState, useEffect, useRef, useCallback, useMemo } from 'https://esm.sh/preact@10.19.3/hooks';
import { h } from 'https://esm.sh/preact@10.19.3';
import htm from 'https://esm.sh/htm@3.1.1';

export const html = htm.bind(h);
dashboard/main.js (new file, 7 lines)
@@ -0,0 +1,7 @@
// Dashboard entry point
import { render } from './lib/preact.js';
import { html } from './lib/preact.js';
import { App } from './components/App.js';

// Mount the app
render(html`<${App} />`, document.getElementById('app'));
dashboard/styles.css (new file, 426 lines)
@@ -0,0 +1,426 @@
/* AMC Dashboard Styles */

html {
  font-size: 16px;
}

:root {
  --bg-flat: #01040b;
  --glass-border: rgba(116, 154, 214, 0.22);
}

* {
  scrollbar-width: thin;
  scrollbar-color: #445f8e #0a1222;
}

body {
  margin: 0;
  font-family: 'IBM Plex Sans', system-ui, sans-serif;
  background: var(--bg-flat);
  min-height: 100vh;
  color: #e0ebff;
  letter-spacing: 0.01em;
}

#app {
  position: relative;
  min-height: 100vh;
}

#app > *:not(.fixed) {
  position: relative;
  z-index: 1;
}

::-webkit-scrollbar {
  width: 8px;
  height: 8px;
}

::-webkit-scrollbar-track {
  background: #0a1222;
}

::-webkit-scrollbar-thumb {
  background: #445f8e;
  border-radius: 4px;
}

::-webkit-scrollbar-thumb:hover {
  background: #5574aa;
}

/* Animations */
@keyframes pulse-attention {
  0%, 100% {
    opacity: 1;
    transform: scale(1) translateY(0);
  }
  50% {
    opacity: 0.62;
    transform: scale(1.04) translateY(-1px);
  }
}

.pulse-attention {
  animation: pulse-attention 2s ease-in-out infinite;
}

/* Active session spinner */
@keyframes spin-ring {
  to { transform: rotate(360deg); }
}

.spinner-dot {
  position: relative;
  display: inline-block;
}

.spinner-dot::after {
  content: '';
  position: absolute;
  inset: -2px;
  border-radius: 50%;
  border: 1.5px solid transparent;
  border-top-color: currentColor;
  will-change: transform;
  animation: spin-ring 0.9s cubic-bezier(0.645, 0.045, 0.355, 1) infinite;
}

/* Agent activity spinner */
.activity-spinner {
  width: 8px;
  height: 8px;
  border-radius: 50%;
  background: #5fd0a4;
  position: relative;
}

.activity-spinner::after {
  content: '';
  position: absolute;
  inset: -3px;
  border-radius: 50%;
  border: 1.5px solid transparent;
  border-top-color: #5fd0a4;
  animation: spin-ring 0.9s cubic-bezier(0.645, 0.045, 0.355, 1) infinite;
}

/* Working indicator at bottom of chat */
@keyframes bounce-dot {
  0%, 80%, 100% { transform: translateY(0); }
  40% { transform: translateY(-4px); }
}

.working-dots span {
  display: inline-block;
  will-change: transform;
}

.working-dots span:nth-child(1) { animation: bounce-dot 1.2s ease-out infinite; }
.working-dots span:nth-child(2) { animation: bounce-dot 1.2s ease-out 0.15s infinite; }
.working-dots span:nth-child(3) { animation: bounce-dot 1.2s ease-out 0.3s infinite; }

/* Modal entrance/exit animations */
@keyframes modalBackdropIn {
  from { opacity: 0; }
  to { opacity: 1; }
}
@keyframes modalBackdropOut {
  from { opacity: 1; }
  to { opacity: 0; }
}
@keyframes modalPanelIn {
  from { opacity: 0; transform: scale(0.96) translateY(8px); }
  to { opacity: 1; transform: scale(1) translateY(0); }
}
@keyframes modalPanelOut {
  from { opacity: 1; transform: scale(1) translateY(0); }
  to { opacity: 0; transform: scale(0.96) translateY(8px); }
}

.modal-backdrop-in {
  animation: modalBackdropIn 200ms ease-out;
}
.modal-backdrop-out {
  animation: modalBackdropOut 200ms ease-in forwards;
}
.modal-panel-in {
  animation: modalPanelIn 200ms ease-out;
}
.modal-panel-out {
  animation: modalPanelOut 200ms ease-in forwards;
}

/* Spawn highlight animation - visual feedback when a newly spawned agent appears */
@keyframes spawn-highlight {
  0% { box-shadow: 0 0 0 3px rgba(95, 208, 164, 0.6), 0 0 16px rgba(95, 208, 164, 0.15); }
  100% { box-shadow: 0 0 0 0 transparent, 0 0 0 transparent; }
}

.session-card-spawned {
  animation: spawn-highlight 2.5s ease-out;
}

/* Accessibility: disable continuous animations for motion-sensitive users */
@media (prefers-reduced-motion: reduce) {
  .spinner-dot::after {
    animation: none;
  }
  .working-dots span {
    animation: none;
  }
  .pulse-attention {
    animation: none;
  }
  .modal-backdrop-in,
  .modal-backdrop-out,
  .modal-panel-in,
  .modal-panel-out {
    animation: none;
  }
  .animate-float,
  .animate-fade-in-up,
  .session-card-spawned {
    animation: none !important;
  }
}

/* Glass panel effect */
.glass-panel {
  backdrop-filter: blur(10px);
  border: 1px solid var(--glass-border);
  background: rgba(7, 13, 24, 0.95);
  box-shadow: 0 10px 24px rgba(0, 0, 0, 0.36), inset 0 1px 0 rgba(151, 185, 245, 0.05);
}

/* Agent header variants */
.agent-header-codex {
  background: rgba(20, 60, 54, 0.4);
  border-bottom-color: rgba(116, 227, 196, 0.34);
}

.agent-header-claude {
  background: rgba(45, 36, 78, 0.42);
  border-bottom-color: rgba(179, 154, 255, 0.36);
}

/* Markdown content styling */
.md-content {
  line-height: 1.45;
}

.md-content > *:first-child {
  margin-top: 0;
}

.md-content > *:last-child {
  margin-bottom: 0;
}

.md-content h1, .md-content h2, .md-content h3,
.md-content h4, .md-content h5, .md-content h6 {
  font-family: 'Space Grotesk', 'IBM Plex Sans', sans-serif;
  font-weight: 600;
  color: #fbfdff;
  margin: 0.6em 0 0.25em;
  line-height: 1.3;
}

.md-content h1 { font-size: 1.4em; }
.md-content h2 { font-size: 1.25em; }
.md-content h3 { font-size: 1.1em; }
.md-content h4, .md-content h5, .md-content h6 { font-size: 1em; }

.md-content p {
  margin: 0.25em 0;
}

.md-content p:empty {
  display: none;
}

.md-content strong {
  color: #fbfdff;
  font-weight: 600;
}

.md-content em {
  color: #c8d8f0;
}

.md-content a {
  color: #7cb2ff;
  text-decoration: underline;
  text-underline-offset: 2px;
}

.md-content a:hover {
  color: #a8ccff;
}

.md-content code {
  font-family: 'IBM Plex Mono', monospace;
  font-size: 0.9em;
  background: rgba(1, 4, 11, 0.55);
  border: 1px solid rgba(34, 52, 84, 0.8);
  border-radius: 4px;
  padding: 0.15em 0.4em;
}

.md-content pre {
  margin: 0.4em 0;
  padding: 0.6rem 0.8rem;
  background: rgba(1, 4, 11, 0.65);
  border: 1px solid rgba(34, 52, 84, 0.75);
  border-radius: 0.75rem;
  overflow-x: auto;
  font-size: 0.85em;
  line-height: 1.5;
}

.md-content pre code {
  background: none;
  border: none;
  padding: 0;
  font-size: inherit;
}

.md-content ul, .md-content ol {
  margin: 0.35em 0;
  padding-left: 1.5em;
  white-space: normal;
}

.md-content li {
  margin: 0;
}

.md-content li p {
  margin: 0;
  display: inline;
}

.md-content li > ul, .md-content li > ol {
  margin: 0.1em 0;
}

.md-content blockquote {
  margin: 0.4em 0;
  padding: 0.4em 0.8em;
  border-left: 3px solid rgba(124, 178, 255, 0.5);
  background: rgba(34, 52, 84, 0.25);
  border-radius: 0 0.5rem 0.5rem 0;
  color: #c8d8f0;
}

.md-content hr {
  border: none;
  border-top: 1px solid rgba(34, 52, 84, 0.6);
  margin: 0.6em 0;
}

.md-content table {
  width: 100%;
  border-collapse: collapse;
  margin: 0.75em 0;
  font-size: 0.9em;
}

.md-content th, .md-content td {
  border: 1px solid rgba(34, 52, 84, 0.6);
  padding: 0.5em 0.75em;
  text-align: left;
}

.md-content th {
  background: rgba(34, 52, 84, 0.35);
  font-weight: 600;
  color: #fbfdff;
}

.md-content tr:nth-child(even) {
  background: rgba(34, 52, 84, 0.15);
}

/* Highlight.js syntax theme (dark) */
.hljs {
  color: #e0ebff;
}

.hljs-keyword,
.hljs-selector-tag,
.hljs-built_in,
.hljs-name,
.hljs-tag {
  color: #c792ea;
}

.hljs-string,
.hljs-title,
.hljs-section,
.hljs-attribute,
.hljs-literal,
.hljs-template-tag,
.hljs-template-variable,
.hljs-type,
.hljs-addition {
  color: #c3e88d;
}

.hljs-comment,
.hljs-quote,
.hljs-deletion,
.hljs-meta {
  color: #697098;
}

.hljs-keyword,
.hljs-selector-tag,
.hljs-literal,
.hljs-title,
.hljs-section,
.hljs-doctag,
.hljs-type,
.hljs-name,
.hljs-strong {
  font-weight: 500;
}

.hljs-number,
.hljs-selector-id,
.hljs-selector-class,
.hljs-quote,
.hljs-template-tag,
.hljs-deletion {
  color: #f78c6c;
}

.hljs-title.function_,
.hljs-subst,
.hljs-symbol,
.hljs-bullet,
.hljs-link {
  color: #82aaff;
}

.hljs-selector-attr,
.hljs-selector-pseudo,
.hljs-variable,
.hljs-template-variable {
  color: #ffcb6b;
}

.hljs-attr {
  color: #89ddff;
}

.hljs-regexp,
.hljs-link {
  color: #89ddff;
}

.hljs-emphasis {
  font-style: italic;
}
dashboard/tests/autocomplete.test.js (new file, 162 lines)
@@ -0,0 +1,162 @@
import { describe, it } from 'node:test';
import assert from 'node:assert/strict';
import { getTriggerInfo, filteredSkills } from '../utils/autocomplete.js';

const mockConfig = {
  trigger: '/',
  skills: [
    { name: 'commit', description: 'Create a git commit' },
    { name: 'review-pr', description: 'Review a pull request' },
    { name: 'comment', description: 'Add a comment' },
  ],
};

describe('getTriggerInfo', () => {
  it('returns null when no autocompleteConfig', () => {
    const result = getTriggerInfo('/hello', 1, null);
    assert.equal(result, null);
  });

  it('returns null when autocompleteConfig is undefined', () => {
    const result = getTriggerInfo('/hello', 1, undefined);
    assert.equal(result, null);
  });

  it('detects trigger at position 0', () => {
    const result = getTriggerInfo('/', 1, mockConfig);
    assert.deepEqual(result, {
      trigger: '/',
      filterText: '',
      replaceStart: 0,
      replaceEnd: 1,
    });
  });

  it('detects trigger after space', () => {
    const result = getTriggerInfo('hello /co', 9, mockConfig);
    assert.deepEqual(result, {
      trigger: '/',
      filterText: 'co',
      replaceStart: 6,
      replaceEnd: 9,
    });
  });

  it('detects trigger after newline', () => {
    const result = getTriggerInfo('line1\n/rev', 10, mockConfig);
    assert.deepEqual(result, {
      trigger: '/',
      filterText: 'rev',
      replaceStart: 6,
      replaceEnd: 10,
    });
  });

  it('returns null for non-trigger character', () => {
    const result = getTriggerInfo('hello world', 5, mockConfig);
    assert.equal(result, null);
  });

  it('returns null for wrong trigger (! when config expects /)', () => {
    const result = getTriggerInfo('!commit', 7, mockConfig);
    assert.equal(result, null);
  });

  it('returns null for trigger embedded in a word', () => {
    const result = getTriggerInfo('path/to/file', 5, mockConfig);
    assert.equal(result, null);
  });

  it('extracts filterText correctly', () => {
    const result = getTriggerInfo('/commit', 7, mockConfig);
    assert.equal(result.filterText, 'commit');
    assert.equal(result.replaceStart, 0);
    assert.equal(result.replaceEnd, 7);
  });

  it('filterText is lowercase', () => {
    const result = getTriggerInfo('/CoMmIt', 7, mockConfig);
    assert.equal(result.filterText, 'commit');
  });

  it('replaceStart and replaceEnd are correct for mid-input trigger', () => {
    const result = getTriggerInfo('foo /bar', 8, mockConfig);
    assert.equal(result.replaceStart, 4);
    assert.equal(result.replaceEnd, 8);
  });

  it('works with a different trigger character', () => {
    const codexConfig = { trigger: '!', skills: [] };
    const result = getTriggerInfo('!test', 5, codexConfig);
    assert.deepEqual(result, {
      trigger: '!',
      filterText: 'test',
      replaceStart: 0,
      replaceEnd: 5,
    });
  });
});

describe('filteredSkills', () => {
  it('returns empty array without config', () => {
    const info = { filterText: '' };
    assert.deepEqual(filteredSkills(null, info), []);
  });

  it('returns empty array without triggerInfo', () => {
    assert.deepEqual(filteredSkills(mockConfig, null), []);
  });

  it('returns empty array when both are null', () => {
    assert.deepEqual(filteredSkills(null, null), []);
  });

  it('returns all skills with empty filter', () => {
    const info = { filterText: '' };
    const result = filteredSkills(mockConfig, info);
    assert.equal(result.length, 3);
  });

  it('filters case-insensitively', () => {
    const info = { filterText: 'com' };
    const result = filteredSkills(mockConfig, info);
    const names = result.map(s => s.name);
    assert.ok(names.includes('commit'));
    assert.ok(names.includes('comment'));
    assert.ok(!names.includes('review-pr'));
  });

  it('matches anywhere in name', () => {
    const info = { filterText: 'view' };
    const result = filteredSkills(mockConfig, info);
    assert.equal(result.length, 1);
    assert.equal(result[0].name, 'review-pr');
  });

  it('sorts alphabetically', () => {
    const info = { filterText: '' };
    const result = filteredSkills(mockConfig, info);
    const names = result.map(s => s.name);
    assert.deepEqual(names, ['comment', 'commit', 'review-pr']);
  });

  it('returns empty array when no matches', () => {
    const info = { filterText: 'zzz' };
    const result = filteredSkills(mockConfig, info);
    assert.deepEqual(result, []);
  });

  it('does not mutate the original skills array', () => {
|
||||
const config = {
|
||||
trigger: '/',
|
||||
skills: [
|
||||
{ name: 'zebra', description: 'z' },
|
||||
{ name: 'alpha', description: 'a' },
|
||||
],
|
||||
};
|
||||
const info = { filterText: '' };
|
||||
filteredSkills(config, info);
|
||||
assert.equal(config.skills[0].name, 'zebra');
|
||||
assert.equal(config.skills[1].name, 'alpha');
|
||||
});
|
||||
});
|
||||
39
dashboard/utils/api.js
Normal file
@@ -0,0 +1,39 @@
// API Constants
export const API_STATE = '/api/state';
export const API_STREAM = '/api/stream';
export const API_DISMISS = '/api/dismiss/';
export const API_DISMISS_DEAD = '/api/dismiss-dead';
export const API_RESPOND = '/api/respond/';
export const API_CONVERSATION = '/api/conversation/';
export const API_SKILLS = '/api/skills';
export const API_SPAWN = '/api/spawn';
export const API_PROJECTS = '/api/projects';
export const API_PROJECTS_REFRESH = '/api/projects/refresh';
export const API_HEALTH = '/api/health';
export const POLL_MS = 3000;
export const API_TIMEOUT_MS = 10000;

// Fetch with timeout to prevent hanging requests
export async function fetchWithTimeout(url, options = {}, timeoutMs = API_TIMEOUT_MS) {
  const controller = new AbortController();
  const timeoutId = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch(url, { ...options, signal: controller.signal });
    return response;
  } finally {
    clearTimeout(timeoutId);
  }
}

// Fetch autocomplete skills config for an agent type
export async function fetchSkills(agent) {
  const url = `${API_SKILLS}?agent=${encodeURIComponent(agent)}`;
  try {
    const response = await fetchWithTimeout(url);
    if (!response.ok) return null;
    return await response.json();
  } catch {
    // Network error, timeout, or JSON parse failure - graceful degradation
    return null;
  }
}
48
dashboard/utils/autocomplete.js
Normal file
@@ -0,0 +1,48 @@
// Pure logic for autocomplete trigger detection and skill filtering.
// Extracted from SimpleInput.js for testability.

/**
 * Detect if cursor is at a trigger position for autocomplete.
 * Returns trigger info object or null.
 */
export function getTriggerInfo(value, cursorPos, autocompleteConfig) {
  if (!autocompleteConfig) return null;

  const { trigger } = autocompleteConfig;

  // Find the start of the current "word" (after last whitespace before cursor)
  let wordStart = cursorPos;
  while (wordStart > 0 && !/\s/.test(value[wordStart - 1])) {
    wordStart--;
  }

  // Check if word starts with this agent's trigger character
  if (value[wordStart] === trigger) {
    return {
      trigger,
      filterText: value.slice(wordStart + 1, cursorPos).toLowerCase(),
      replaceStart: wordStart,
      replaceEnd: cursorPos,
    };
  }

  return null;
}

/**
 * Filter and sort skills based on trigger info.
 * Returns sorted array of matching skills.
 */
export function filteredSkills(autocompleteConfig, triggerInfo) {
  if (!autocompleteConfig || !triggerInfo) return [];

  const { skills } = autocompleteConfig;
  const { filterText } = triggerInfo;

  let filtered = filterText
    ? skills.filter(s => s.name.toLowerCase().includes(filterText))
    : skills.slice();

  // Server pre-sorts, but re-sort after filtering for stability
  return filtered.sort((a, b) => a.name.localeCompare(b.name));
}
66
dashboard/utils/formatting.js
Normal file
@@ -0,0 +1,66 @@
// Formatting utilities

export function formatDuration(isoStart) {
  if (!isoStart) return '';
  const start = new Date(isoStart);
  const now = new Date();
  const mins = Math.max(0, Math.floor((now - start) / 60000));
  if (mins < 60) return mins + 'm';
  const hrs = Math.floor(mins / 60);
  const remainMins = mins % 60;
  return hrs + 'h ' + remainMins + 'm';
}

export function formatTime(isoTime) {
  if (!isoTime) return '';
  const date = new Date(isoTime);
  return date.toLocaleTimeString('en-US', {
    hour: 'numeric',
    minute: '2-digit',
    hour12: true
  });
}

export function formatTokenCount(value) {
  if (!Number.isFinite(value)) return '';
  return Math.round(value).toLocaleString('en-US');
}

export function getContextUsageSummary(usage) {
  if (!usage || typeof usage !== 'object') return null;

  const current = Number(usage.current_tokens);
  const windowTokens = Number(usage.window_tokens);
  const sessionTotal = usage.session_total_tokens != null ? Number(usage.session_total_tokens) : null;
  const hasCurrent = Number.isFinite(current) && current > 0;
  const hasWindow = Number.isFinite(windowTokens) && windowTokens > 0;
  const hasSessionTotal = sessionTotal != null && Number.isFinite(sessionTotal) && sessionTotal > 0;

  if (hasCurrent && hasWindow) {
    const percent = (current / windowTokens) * 100;
    return {
      headline: `${percent >= 10 ? percent.toFixed(0) : percent.toFixed(1)}% ctx`,
      detail: `${formatTokenCount(current)} / ${formatTokenCount(windowTokens)}`,
      trail: hasSessionTotal ? `Σ ${formatTokenCount(sessionTotal)}` : '',
      title: `Context window usage: ${formatTokenCount(current)} / ${formatTokenCount(windowTokens)} tokens`,
    };
  }

  if (hasCurrent) {
    const inputTokens = Number(usage.input_tokens);
    const outputTokens = Number(usage.output_tokens);
    const hasInput = Number.isFinite(inputTokens);
    const hasOutput = Number.isFinite(outputTokens);
    const ioDetail = hasInput || hasOutput
      ? ` • in ${formatTokenCount(hasInput ? inputTokens : 0)} out ${formatTokenCount(hasOutput ? outputTokens : 0)}`
      : '';
    return {
      headline: 'Ctx usage',
      detail: `${formatTokenCount(current)} tok${ioDetail}`,
      trail: '',
      title: `Token usage: ${formatTokenCount(current)} tokens`,
    };
  }

  return null;
}
79
dashboard/utils/status.js
Normal file
@@ -0,0 +1,79 @@
// Status-related utilities

export const STATUS_PRIORITY = {
  needs_attention: 0,
  active: 1,
  starting: 2,
  done: 3
};

export function getStatusMeta(status) {
  switch (status) {
    case 'needs_attention':
      return {
        label: 'Needs attention',
        dot: 'bg-attention pulse-attention',
        badge: 'bg-attention/18 text-attention border-attention/40',
        borderColor: '#e0b45e',
      };
    case 'active':
      return {
        label: 'Active',
        dot: 'bg-active',
        badge: 'bg-active/18 text-active border-active/40',
        borderColor: '#5fd0a4',
        spinning: true,
      };
    case 'starting':
      return {
        label: 'Starting',
        dot: 'bg-starting',
        badge: 'bg-starting/18 text-starting border-starting/40',
        borderColor: '#7cb2ff',
        spinning: true,
      };
    case 'done':
      return {
        label: 'Done',
        dot: 'bg-done',
        badge: 'bg-done/18 text-done border-done/40',
        borderColor: '#e39a8c',
      };
    default:
      return {
        label: status || 'Unknown',
        dot: 'bg-dim',
        badge: 'bg-selection text-dim border-selection',
        borderColor: '#223454',
      };
  }
}

export function getUserMessageBg(status) {
  switch (status) {
    case 'needs_attention': return 'bg-attention/20 border border-attention/35 text-bright';
    case 'active': return 'bg-active/20 border border-active/30 text-bright';
    case 'starting': return 'bg-starting/20 border border-starting/30 text-bright';
    case 'done': return 'bg-done/20 border border-done/30 text-bright';
    default: return 'bg-selection/80 border border-selection text-bright';
  }
}

export function groupSessionsByProject(sessions) {
  const groups = new Map();

  for (const session of sessions) {
    const key = session.project_dir || session.cwd || 'unknown';
    if (!groups.has(key)) {
      groups.set(key, {
        projectDir: key,
        projectName: session.project || key.split('/').pop() || 'Unknown',
        sessions: [],
      });
    }
    groups.get(key).sessions.push(session);
  }

  // Return groups in API order (no status-based reordering)
  return Array.from(groups.values());
}
214
docs/claude-jsonl-reference/01-format-specification.md
Normal file
@@ -0,0 +1,214 @@
# Claude JSONL Format Specification

## File Format

- **Format:** Newline-delimited JSON (NDJSON/JSONL)
- **Encoding:** UTF-8
- **Line terminator:** `\n` (LF)
- **One JSON object per line** — no array wrapper

## Message Envelope (Common Fields)

Every line in a Claude JSONL file contains these fields:

```json
{
  "parentUuid": "uuid-string or null",
  "isSidechain": false,
  "userType": "external",
  "cwd": "/full/path/to/working/directory",
  "sessionId": "session-uuid-v4",
  "version": "2.1.20",
  "gitBranch": "branch-name or empty string",
  "type": "user|assistant|progress|system|summary|file-history-snapshot",
  "message": { ... },
  "uuid": "unique-message-uuid-v4",
  "timestamp": "ISO-8601 timestamp"
}
```

### Field Reference

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `type` | string | Yes | Message type identifier |
| `uuid` | string (uuid) | Yes* | Unique identifier for this event |
| `parentUuid` | string (uuid) or null | Yes | Links to parent message (null for root) |
| `timestamp` | string (ISO-8601) | Yes* | When event occurred (UTC) |
| `sessionId` | string (uuid) | Yes | Session identifier |
| `version` | string (semver) | Yes | Claude Code version (e.g., "2.1.20") |
| `cwd` | string (path) | Yes | Working directory at event time |
| `gitBranch` | string | No | Git branch name (empty if not in repo) |
| `isSidechain` | boolean | Yes | `true` for subagent sessions |
| `userType` | string | Yes | Always "external" for user sessions |
| `message` | object | Conditional | Message content (user/assistant types) |
| `agentId` | string | Conditional | Agent identifier (subagent sessions only) |

*May be null in metadata-only entries like `file-history-snapshot`

## Content Structure

### User Message Content

User messages have `message.content` as either:

**String (direct input):**
```json
{
  "message": {
    "role": "user",
    "content": "Your question or instruction"
  }
}
```

**Array (tool results):**
```json
{
  "message": {
    "role": "user",
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01XYZ",
        "content": "Tool output text"
      }
    ]
  }
}
```

### Assistant Message Content

Assistant messages always have `message.content` as an **array**:

```json
{
  "message": {
    "role": "assistant",
    "type": "message",
    "model": "claude-opus-4-5-20251101",
    "id": "msg_bdrk_01Abc123",
    "content": [
      {"type": "thinking", "thinking": "..."},
      {"type": "text", "text": "..."},
      {"type": "tool_use", "id": "toolu_01XYZ", "name": "Read", "input": {...}}
    ],
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {...}
  }
}
```

## Content Block Types

### Text Block
```json
{
  "type": "text",
  "text": "Response text content"
}
```

### Thinking Block
```json
{
  "type": "thinking",
  "thinking": "Internal reasoning (extended thinking mode)",
  "signature": "base64-signature (optional)"
}
```

### Tool Use Block
```json
{
  "type": "tool_use",
  "id": "toolu_01Abc123XYZ",
  "name": "ToolName",
  "input": {
    "param1": "value1",
    "param2": 123
  }
}
```

### Tool Result Block
```json
{
  "type": "tool_result",
  "tool_use_id": "toolu_01Abc123XYZ",
  "content": "Result text or structured output",
  "is_error": false
}
```

## Usage Object

Token consumption reported in assistant messages:

```json
{
  "usage": {
    "input_tokens": 1000,
    "output_tokens": 500,
    "cache_creation_input_tokens": 200,
    "cache_read_input_tokens": 400,
    "cache_creation": {
      "ephemeral_5m_input_tokens": 200,
      "ephemeral_1h_input_tokens": 0
    },
    "service_tier": "standard"
  }
}
```

| Field | Type | Description |
|-------|------|-------------|
| `input_tokens` | int | Input tokens consumed |
| `output_tokens` | int | Output tokens generated |
| `cache_creation_input_tokens` | int | Tokens used to create cache |
| `cache_read_input_tokens` | int | Tokens read from cache |
| `service_tier` | string | API tier ("standard", etc.) |
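Per the table above, input tokens can be split across the direct and cache fields; a minimal sketch of combining them (the helper name `total_input_tokens` is illustrative, and whether cache reads should count toward your metric is a policy choice):

```python
def total_input_tokens(usage: dict) -> int:
    """Sum direct and cache input token variants; missing fields count as 0."""
    return (
        usage.get("input_tokens", 0)
        + usage.get("cache_creation_input_tokens", 0)
        + usage.get("cache_read_input_tokens", 0)
    )

usage = {
    "input_tokens": 1000,
    "output_tokens": 500,
    "cache_creation_input_tokens": 200,
    "cache_read_input_tokens": 400,
}
assert total_input_tokens(usage) == 1600  # 1000 + 200 + 400
```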

## Model Identifiers

Common model names in `message.model`:

| Model | Identifier |
|-------|------------|
| Claude Opus 4.5 | `claude-opus-4-5-20251101` |
| Claude Sonnet 4.5 | `claude-sonnet-4-5-20250929` |
| Claude Haiku 4.5 | `claude-haiku-4-5-20251001` |

## Version History

| Version | Changes |
|---------|---------|
| 2.1.20 | Extended thinking, permission modes, todos |
| 2.1.17 | Subagent support with agentId |
| 2.1.x | Progress events, hook metadata |
| 2.0.x | Basic message/tool_use/tool_result |

## Conversation Graph

Messages form a DAG (directed acyclic graph) via parent-child relationships:

```
Root (parentUuid: null)
├── User message (uuid: A)
│   └── Assistant (uuid: B, parentUuid: A)
│       ├── Progress: Tool (uuid: C, parentUuid: A)
│       └── Progress: Hook (uuid: D, parentUuid: A)
└── User message (uuid: E, parentUuid: B)
    └── Assistant (uuid: F, parentUuid: E)
```

## Parsing Recommendations

1. **Line-by-line** — Don't load entire file into memory
2. **Skip invalid lines** — Wrap JSON.parse in try/catch
3. **Handle missing fields** — Check existence before access
4. **Ignore unknown types** — Format evolves with new event types
5. **Check content type** — User content can be string OR array
6. **Sum token variants** — Cache tokens may be in different fields
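The recommendations above can be sketched as a tolerant line-by-line reader (the function name is illustrative):

```python
import json

def iter_events(path):
    """Yield parsed events one line at a time, skipping blank or invalid lines."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            try:
                yield json.loads(line)
            except json.JSONDecodeError:
                # Skip invalid lines rather than aborting the whole file
                continue
```

Because it yields lazily, the reader never holds the whole file in memory, and unknown event types pass through untouched for the caller to ignore.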
346
docs/claude-jsonl-reference/02-message-types.md
Normal file
@@ -0,0 +1,346 @@
# Claude JSONL Message Types

Complete reference for all message types in Claude Code session logs.

## Type: `user`

User input messages (prompts, instructions, tool results).

### Direct User Input
```json
{
  "parentUuid": null,
  "isSidechain": false,
  "userType": "external",
  "cwd": "/Users/dev/myproject",
  "sessionId": "abc123-def456",
  "version": "2.1.20",
  "gitBranch": "main",
  "type": "user",
  "message": {
    "role": "user",
    "content": "Find all TODO comments in the codebase"
  },
  "uuid": "msg-001",
  "timestamp": "2026-02-27T10:00:00.000Z",
  "thinkingMetadata": {
    "maxThinkingTokens": 31999
  },
  "todos": [],
  "permissionMode": "bypassPermissions"
}
```

### Tool Results (Following Tool Calls)
```json
{
  "parentUuid": "msg-002",
  "type": "user",
  "message": {
    "role": "user",
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01ABC",
        "content": "src/api.py:45: # TODO: implement caching"
      },
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01DEF",
        "content": "src/utils.py:122: # TODO: add validation"
      }
    ]
  },
  "uuid": "msg-003",
  "timestamp": "2026-02-27T10:00:05.000Z"
}
```

**Parsing Note:** Check `typeof content === 'string'` vs `Array.isArray(content)` to distinguish user input from tool results.
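The same check from the parsing note, written in Python (the helper name is illustrative):

```python
def is_tool_result(event: dict) -> bool:
    """True when a user event carries tool results rather than typed input."""
    if event.get("type") != "user":
        return False
    content = event.get("message", {}).get("content")
    # String content = direct user input; list content = tool results
    return isinstance(content, list)

assert is_tool_result({"type": "user", "message": {"role": "user", "content": [{"type": "tool_result"}]}})
assert not is_tool_result({"type": "user", "message": {"role": "user", "content": "find TODOs"}})
```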

## Type: `assistant`

Claude's responses including text, thinking, and tool invocations.

### Text Response
```json
{
  "parentUuid": "msg-001",
  "type": "assistant",
  "message": {
    "role": "assistant",
    "type": "message",
    "model": "claude-opus-4-5-20251101",
    "id": "msg_bdrk_01Abc123",
    "content": [
      {
        "type": "text",
        "text": "I found 2 TODO comments in your codebase..."
      }
    ],
    "stop_reason": "end_turn",
    "stop_sequence": null,
    "usage": {
      "input_tokens": 1500,
      "output_tokens": 200,
      "cache_read_input_tokens": 800
    }
  },
  "uuid": "msg-002",
  "timestamp": "2026-02-27T10:00:02.000Z"
}
```

### With Thinking (Extended Thinking Mode)
```json
{
  "type": "assistant",
  "message": {
    "role": "assistant",
    "content": [
      {
        "type": "thinking",
        "thinking": "The user wants to find TODOs. I should use Grep to search for TODO patterns across all file types.",
        "signature": "eyJhbGciOiJSUzI1NiJ9..."
      },
      {
        "type": "text",
        "text": "I'll search for TODO comments in your codebase."
      }
    ]
  }
}
```

### With Tool Calls
```json
{
  "type": "assistant",
  "message": {
    "role": "assistant",
    "content": [
      {
        "type": "tool_use",
        "id": "toolu_01Grep123",
        "name": "Grep",
        "input": {
          "pattern": "TODO",
          "output_mode": "content"
        }
      }
    ],
    "stop_reason": null
  }
}
```

### Multiple Tool Calls (Parallel)
```json
{
  "type": "assistant",
  "message": {
    "content": [
      {
        "type": "text",
        "text": "I'll search for both TODOs and FIXMEs."
      },
      {
        "type": "tool_use",
        "id": "toolu_01A",
        "name": "Grep",
        "input": {"pattern": "TODO"}
      },
      {
        "type": "tool_use",
        "id": "toolu_01B",
        "name": "Grep",
        "input": {"pattern": "FIXME"}
      }
    ]
  }
}
```

## Type: `progress`

Progress events for hooks, tools, and async operations.

### Hook Progress
```json
{
  "parentUuid": "msg-002",
  "isSidechain": false,
  "type": "progress",
  "data": {
    "type": "hook_progress",
    "hookEvent": "PostToolUse",
    "hookName": "PostToolUse:Grep",
    "command": "node scripts/log-tool-use.js"
  },
  "parentToolUseID": "toolu_01Grep123",
  "toolUseID": "toolu_01Grep123",
  "timestamp": "2026-02-27T10:00:03.000Z",
  "uuid": "prog-001"
}
```

### Bash Progress
```json
{
  "type": "progress",
  "data": {
    "type": "bash_progress",
    "status": "running",
    "toolName": "Bash",
    "command": "npm test"
  }
}
```

### MCP Progress
```json
{
  "type": "progress",
  "data": {
    "type": "mcp_progress",
    "server": "playwright",
    "tool": "browser_navigate",
    "status": "complete"
  }
}
```

## Type: `system`

System messages and metadata entries.

### Local Command
```json
{
  "parentUuid": "msg-001",
  "type": "system",
  "subtype": "local_command",
  "content": "<command-name>/usage</command-name>\n<command-args></command-args>",
  "level": "info",
  "timestamp": "2026-02-27T10:00:00.500Z",
  "uuid": "sys-001",
  "isMeta": false
}
```

### Turn Duration
```json
{
  "type": "system",
  "subtype": "turn_duration",
  "slug": "project-session",
  "durationMs": 65432,
  "uuid": "sys-002",
  "timestamp": "2026-02-27T10:01:05.000Z"
}
```

## Type: `summary`

End-of-session or context compression summaries.

```json
{
  "type": "summary",
  "summary": "Searched codebase for TODO comments, found 15 instances across 8 files. Prioritized by module.",
  "leafUuid": "msg-010"
}
```

**Note:** `leafUuid` points to the last message included in this summary.

## Type: `file-history-snapshot`

File state tracking for undo/restore operations.

```json
{
  "type": "file-history-snapshot",
  "messageId": "snap-001",
  "snapshot": {
    "messageId": "snap-001",
    "trackedFileBackups": {
      "/src/api.ts": {
        "path": "/src/api.ts",
        "originalContent": "...",
        "backupPath": "~/.claude/backups/..."
      }
    },
    "timestamp": "2026-02-27T10:00:00.000Z"
  },
  "isSnapshotUpdate": false
}
```

## Codex Format (Alternative Agent)

Codex uses a different JSONL structure.

### Session Metadata (First Line)
```json
{
  "type": "session_meta",
  "timestamp": "2026-02-27T10:00:00.000Z",
  "payload": {
    "cwd": "/Users/dev/myproject",
    "timestamp": "2026-02-27T10:00:00.000Z"
  }
}
```

### Response Item (Messages)
```json
{
  "type": "response_item",
  "timestamp": "2026-02-27T10:00:05.000Z",
  "payload": {
    "type": "message",
    "role": "assistant",
    "content": [
      {"text": "I found the issue..."}
    ]
  }
}
```

### Function Call (Tool Use)
```json
{
  "type": "response_item",
  "payload": {
    "type": "function_call",
    "call_id": "call_abc123",
    "name": "Grep",
    "arguments": "{\"pattern\": \"TODO\"}"
  }
}
```

### Reasoning (Thinking)
```json
{
  "type": "response_item",
  "payload": {
    "type": "reasoning",
    "summary": [
      {"type": "summary_text", "text": "Analyzing the error..."}
    ]
  }
}
```

## Message Type Summary

| Type | Frequency | Content |
|------|-----------|---------|
| `user` | Per prompt | User input or tool results |
| `assistant` | Per response | Text, thinking, tool calls |
| `progress` | Per hook/tool | Execution status |
| `system` | Occasional | Commands, metadata |
| `summary` | Session end | Conversation summary |
| `file-history-snapshot` | Start/end | File state tracking |
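As a quick sanity check on a log, the types in the summary table above can be tallied; a sketch assuming one JSON object per line (the helper name is illustrative):

```python
import json
from collections import Counter

def count_types(lines):
    """Tally event types across JSONL lines; invalid lines are counted separately."""
    counts = Counter()
    for line in lines:
        try:
            counts[json.loads(line).get("type", "unknown")] += 1
        except json.JSONDecodeError:
            counts["invalid"] += 1
    return counts

log = ['{"type": "user"}', '{"type": "assistant"}', '{"type": "assistant"}', 'garbage']
assert count_types(log) == Counter({"assistant": 2, "user": 1, "invalid": 1})
```

A skewed distribution (for example, far more `tool_result` carriers than assistant turns) is often the fastest way to spot a truncated or corrupted session file.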
341
docs/claude-jsonl-reference/03-tool-lifecycle.md
Normal file
@@ -0,0 +1,341 @@
|
||||
# Tool Call Lifecycle
|
||||
|
||||
Complete documentation of how tool invocations flow through Claude JSONL logs.
|
||||
|
||||
## Lifecycle Overview
|
||||
|
||||
```
|
||||
1. Assistant message with tool_use block
|
||||
↓
|
||||
2. PreToolUse hook fires (optional)
|
||||
↓
|
||||
3. Tool executes
|
||||
↓
|
||||
4. PostToolUse hook fires (optional)
|
||||
↓
|
||||
5. User message with tool_result block
|
||||
↓
|
||||
6. Assistant processes result
|
||||
```
|
||||
|
||||
## Phase 1: Tool Invocation
|
||||
|
||||
Claude requests a tool via `tool_use` content block:
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "assistant",
|
||||
"message": {
|
||||
"role": "assistant",
|
||||
"content": [
|
||||
{
|
||||
"type": "text",
|
||||
"text": "I'll read that file for you."
|
||||
},
|
||||
{
|
||||
"type": "tool_use",
|
||||
"id": "toolu_01ReadFile123",
|
||||
"name": "Read",
|
||||
"input": {
|
||||
"file_path": "/src/auth/login.ts",
|
||||
"limit": 200
|
||||
}
|
||||
}
|
||||
],
|
||||
"stop_reason": null
|
||||
},
|
||||
"uuid": "msg-001"
|
||||
}
|
||||
```
|
||||
|
||||
### Tool Use Block Structure
|
||||
|
||||
| Field | Type | Required | Description |
|
||||
|-------|------|----------|-------------|
|
||||
| `type` | `"tool_use"` | Yes | Block type identifier |
|
||||
| `id` | string | Yes | Unique tool call ID (format: `toolu_*`) |
|
||||
| `name` | string | Yes | Tool name |
|
||||
| `input` | object | Yes | Tool parameters |
|
||||
|
||||
### Common Tool Names
|
||||
|
||||
| Tool | Purpose | Key Input Fields |
|
||||
|------|---------|------------------|
|
||||
| `Read` | Read file | `file_path`, `offset`, `limit` |
|
||||
| `Edit` | Edit file | `file_path`, `old_string`, `new_string` |
|
||||
| `Write` | Create file | `file_path`, `content` |
|
||||
| `Bash` | Run command | `command`, `timeout` |
|
||||
| `Glob` | Find files | `pattern`, `path` |
|
||||
| `Grep` | Search content | `pattern`, `path`, `type` |
|
||||
| `WebFetch` | Fetch URL | `url`, `prompt` |
|
||||
| `WebSearch` | Search web | `query` |
|
||||
| `Task` | Spawn subagent | `prompt`, `subagent_type` |
|
||||
| `AskUserQuestion` | Ask user | `questions` |
|
||||
|
||||
## Phase 2: Hook Execution (Optional)
|
||||
|
||||
If hooks are configured, progress events are logged:
|
||||
|
||||
### PreToolUse Hook Input
|
||||
```json
|
||||
{
|
||||
"session_id": "abc123",
|
||||
"transcript_path": "/Users/.../.claude/projects/.../session.jsonl",
|
||||
"cwd": "/Users/dev/myproject",
|
||||
"permission_mode": "default",
|
||||
"hook_event_name": "PreToolUse",
|
||||
"tool_name": "Read",
|
||||
"tool_input": {
|
||||
"file_path": "/src/auth/login.ts"
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Hook Progress Event
|
||||
```json
|
||||
{
|
||||
"type": "progress",
|
||||
"data": {
|
||||
"type": "hook_progress",
|
||||
"hookEvent": "PreToolUse",
|
||||
"hookName": "security_check",
|
||||
"status": "running"
|
||||
},
|
||||
"parentToolUseID": "toolu_01ReadFile123",
|
||||
"toolUseID": "toolu_01ReadFile123",
|
||||
"uuid": "prog-001"
|
||||
}
|
||||
```
|
||||
|
||||
### Hook Output (Decision)
|
||||
```json
|
||||
{
|
||||
"decision": "allow",
|
||||
"reason": "File read permitted",
|
||||
"additionalContext": "Note: This file was recently modified"
|
||||
}
|
||||
```
|
||||
|
||||
## Phase 3: Tool Result
|
||||
|
||||
Tool output is wrapped in a user message:
|
||||
|
||||
```json
|
||||
{
|
||||
"type": "user",
|
||||
"message": {
|
||||
"role": "user",
|
||||
"content": [
|
||||
{
|
||||
"type": "tool_result",
|
||||
"tool_use_id": "toolu_01ReadFile123",
|
||||
"content": "1\texport async function login(email: string, password: string) {\n2\t const user = await db.users.findByEmail(email);\n..."
|
||||
}
|
||||
]
|
||||
},
|
||||
"uuid": "msg-002",
|
||||
"parentUuid": "msg-001"
|
||||
}
|
||||
```
|
||||
|
||||
### Tool Result Block Structure
|
||||
|
||||
| Field | Type | Required | Description |
|
||||
|-------|------|----------|-------------|
|
||||
| `type` | `"tool_result"` | Yes | Block type identifier |
|
||||
| `tool_use_id` | string | Yes | Matches `tool_use.id` |
|
||||
| `content` | string | Yes | Tool output |
|
||||
| `is_error` | boolean | No | True if tool failed |

### Error Results

```json
{
  "type": "tool_result",
  "tool_use_id": "toolu_01ReadFile123",
  "content": "Error: File not found: /src/auth/login.ts",
  "is_error": true
}
```

## Phase 4: Result Processing

Claude processes the result and continues:

```json
{
  "type": "assistant",
  "message": {
    "content": [
      {
        "type": "thinking",
        "thinking": "The login function looks correct. The issue might be in the middleware..."
      },
      {
        "type": "text",
        "text": "I see the login function. Let me check the middleware next."
      },
      {
        "type": "tool_use",
        "id": "toolu_01ReadMiddleware",
        "name": "Read",
        "input": {"file_path": "/src/auth/middleware.ts"}
      }
    ]
  },
  "uuid": "msg-003",
  "parentUuid": "msg-002"
}
```

## Parallel Tool Calls

Multiple tools can be invoked in a single message:

```json
{
  "type": "assistant",
  "message": {
    "content": [
      {"type": "tool_use", "id": "toolu_01A", "name": "Grep", "input": {"pattern": "TODO"}},
      {"type": "tool_use", "id": "toolu_01B", "name": "Grep", "input": {"pattern": "FIXME"}},
      {"type": "tool_use", "id": "toolu_01C", "name": "Glob", "input": {"pattern": "**/*.test.ts"}}
    ]
  }
}
```

Results come back in the same user message:

```json
{
  "type": "user",
  "message": {
    "content": [
      {"type": "tool_result", "tool_use_id": "toolu_01A", "content": "Found 15 TODOs"},
      {"type": "tool_result", "tool_use_id": "toolu_01B", "content": "Found 3 FIXMEs"},
      {"type": "tool_result", "tool_use_id": "toolu_01C", "content": "tests/auth.test.ts\ntests/api.test.ts"}
    ]
  }
}
```

## Codex Tool Format

Codex uses a different structure:

### Function Call

```json
{
  "type": "response_item",
  "payload": {
    "type": "function_call",
    "call_id": "call_abc123",
    "name": "Read",
    "arguments": "{\"file_path\": \"/src/auth/login.ts\"}"
  }
}
```

**Note:** `arguments` is a JSON string that needs parsing.
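A minimal decode step for that string, falling back to the raw value on malformed input (variable names are illustrative):

```python
import json

raw = '{"file_path": "/src/auth/login.ts"}'  # value of payload["arguments"]

try:
    args = json.loads(raw)
except json.JSONDecodeError:
    args = {"raw": raw}  # keep the unparsed string on failure
```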

### Function Result

```json
{
  "type": "response_item",
  "payload": {
    "type": "function_call_result",
    "call_id": "call_abc123",
    "result": "File contents..."
  }
}
```

## Tool Call Pairing

To reconstruct tool call history:

1. **Find tool_use blocks** in assistant messages
2. **Match by ID** to tool_result blocks in following user messages
3. **Handle parallel calls** — multiple tool_use can have multiple tool_result

```python
import json

# Example: Pairing tool calls with results
tool_calls = {}

for line in jsonl_file:
    event = json.loads(line)

    if event['type'] == 'assistant':
        for block in event['message']['content']:
            if block['type'] == 'tool_use':
                tool_calls[block['id']] = {
                    'name': block['name'],
                    'input': block['input'],
                    'timestamp': event['timestamp']
                }

    elif event['type'] == 'user':
        content = event['message']['content']
        if isinstance(content, list):
            for block in content:
                if block['type'] == 'tool_result':
                    call_id = block['tool_use_id']
                    if call_id in tool_calls:
                        tool_calls[call_id]['result'] = block['content']
                        tool_calls[call_id]['is_error'] = block.get('is_error', False)
```

## Missing Tool Results

Edge cases where tool results may be absent:

1. **Session interrupted** — User closed the session mid-tool
2. **Tool timeout** — Long-running tool exceeded limits
3. **Hook blocked** — PreToolUse hook returned `block`
4. **Permission denied** — User denied tool permission

Handle these by checking that each tool_use has a matching tool_result before assuming completion.
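A single pass over parsed events can flag tool_use IDs that never received a result (a sketch using the field names documented above):

```python
def find_incomplete_tool_calls(events):
    """Return tool_use IDs that never received a matching tool_result."""
    pending = set()
    for event in events:
        if event.get("type") == "assistant":
            for block in event.get("message", {}).get("content", []):
                if block.get("type") == "tool_use":
                    pending.add(block["id"])
        elif event.get("type") == "user":
            content = event.get("message", {}).get("content", [])
            if isinstance(content, list):
                for block in content:
                    if block.get("type") == "tool_result":
                        pending.discard(block.get("tool_use_id"))
    return pending
```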

## Tool-Specific Formats

### Bash Tool

```json
{
  "type": "tool_use",
  "name": "Bash",
  "input": {
    "command": "npm test -- --coverage",
    "timeout": 120000,
    "description": "Run tests with coverage"
  }
}
```

Result includes exit code context:

```json
{
  "type": "tool_result",
  "content": "PASS src/auth.test.ts\n...\nCoverage: 85%\n\n[Exit code: 0]"
}
```

### Task Tool (Subagent)

```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "description": "Research auth patterns",
    "prompt": "Explore authentication implementations...",
    "subagent_type": "Explore"
  }
}
```

Result returns subagent output:

```json
{
  "type": "tool_result",
  "content": "## Research Findings\n\n1. JWT patterns...\n\nagentId: agent-abc123"
}
```

---

**docs/claude-jsonl-reference/04-subagent-teams.md** (363 lines, new file)

# Subagent and Team Message Formats

Documentation for spawned agents, team coordination, and inter-agent messaging.

## Subagent Overview

Subagents are spawned via the `Task` tool and run in separate processes with their own transcripts.

### Spawn Relationship

```
Main Session (session-uuid.jsonl)
├── User message
├── Assistant: Task tool_use
├── [Subagent executes in separate process]
├── User message: tool_result with subagent output
└── ...

Subagent Session (session-uuid/subagents/agent-id.jsonl)
├── Subagent receives prompt
├── Subagent works (tool calls, etc.)
└── Subagent returns result
```
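Given that layout, a small helper can enumerate a session's subagent transcripts (a sketch; `subagent_transcripts` is a hypothetical name, path structure as documented above):

```python
from pathlib import Path

def subagent_transcripts(session_dir):
    """List subagent transcripts under a main session directory.

    Assumes the layout shown above:
    <session-dir>/subagents/agent-<id>.jsonl
    """
    return sorted(Path(session_dir).glob("subagents/agent-*.jsonl"))
```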

## Task Tool Invocation

Spawning a subagent:

```json
{
  "type": "assistant",
  "message": {
    "content": [
      {
        "type": "tool_use",
        "id": "toolu_01TaskSpawn",
        "name": "Task",
        "input": {
          "description": "Research auth patterns",
          "prompt": "Investigate authentication implementations in the codebase. Focus on JWT handling and session management.",
          "subagent_type": "Explore",
          "run_in_background": false
        }
      }
    ]
  }
}
```

### Task Tool Input Fields

| Field | Type | Required | Description |
|-------|------|----------|-------------|
| `description` | string | Yes | Short (3-5 word) description |
| `prompt` | string | Yes | Full task instructions |
| `subagent_type` | string | Yes | Agent type (Explore, Plan, etc.) |
| `run_in_background` | boolean | No | Run async without waiting |
| `model` | string | No | Override model (sonnet, opus, haiku) |
| `isolation` | string | No | "worktree" for isolated git copy |
| `team_name` | string | No | Team to join |
| `name` | string | No | Agent display name |

### Subagent Types

| Type | Tools Available | Use Case |
|------|-----------------|----------|
| `Explore` | Read-only tools | Research, search, analyze |
| `Plan` | Read-only tools | Design implementation plans |
| `general-purpose` | All tools | Full implementation |
| `claude-code-guide` | Docs tools | Answer Claude Code questions |
| Custom agents | Defined in `.claude/agents/` | Project-specific |

## Subagent Transcript Location

```
~/.claude/projects/<project-hash>/<session-id>/subagents/agent-<agent-id>.jsonl
```

## Subagent Message Format

Subagent transcripts have additional context fields:

```json
{
  "parentUuid": null,
  "isSidechain": true,
  "userType": "external",
  "cwd": "/Users/dev/myproject",
  "sessionId": "subagent-session-uuid",
  "version": "2.1.20",
  "gitBranch": "main",
  "agentId": "a3fecd5",
  "type": "user",
  "message": {
    "role": "user",
    "content": "Investigate authentication implementations..."
  },
  "uuid": "msg-001",
  "timestamp": "2026-02-27T10:00:00.000Z"
}
```

### Key Differences from Main Session

| Field | Main Session | Subagent Session |
|-------|--------------|------------------|
| `isSidechain` | `false` | `true` |
| `agentId` | absent | present |
| `sessionId` | main session UUID | subagent session UUID |

## Task Result

When the subagent completes, the result returns to the main session:

```json
{
  "type": "user",
  "message": {
    "content": [
      {
        "type": "tool_result",
        "tool_use_id": "toolu_01TaskSpawn",
        "content": "## Authentication Research Findings\n\n### JWT Implementation\n- Located in src/auth/jwt.ts\n- Uses RS256 algorithm\n...\n\nagentId: a3fecd5 (for resuming)"
      }
    ]
  }
}
```

**Note:** The result includes `agentId` for potential resumption.

## Background Tasks

For `run_in_background: true`:

```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "prompt": "Run comprehensive test suite",
    "subagent_type": "general-purpose",
    "run_in_background": true
  }
}
```

An immediate result returns with a task ID:

```json
{
  "type": "tool_result",
  "content": "Background task started.\nTask ID: task-abc123\nUse TaskOutput tool to check status."
}
```

## Team Coordination

Teams enable multiple agents to work together.

### Team Creation (TeamCreate Tool)

```json
{
  "type": "tool_use",
  "name": "TeamCreate",
  "input": {
    "team_name": "auth-refactor",
    "description": "Refactoring authentication system"
  }
}
```

### Team Config File

Created at `~/.claude/teams/<team-name>/config.json`:

```json
{
  "team_name": "auth-refactor",
  "description": "Refactoring authentication system",
  "created_at": "2026-02-27T10:00:00.000Z",
  "members": [
    {
      "name": "team-lead",
      "agentId": "agent-lead-123",
      "agentType": "general-purpose"
    },
    {
      "name": "researcher",
      "agentId": "agent-research-456",
      "agentType": "Explore"
    }
  ]
}
```

### Spawning Team Members

```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "prompt": "Research existing auth implementations",
    "subagent_type": "Explore",
    "team_name": "auth-refactor",
    "name": "researcher"
  }
}
```

## Inter-Agent Messaging (SendMessage)

### Direct Message

```json
{
  "type": "tool_use",
  "name": "SendMessage",
  "input": {
    "type": "message",
    "recipient": "researcher",
    "content": "Please focus on JWT refresh token handling",
    "summary": "JWT refresh priority"
  }
}
```

### Broadcast to Team

```json
{
  "type": "tool_use",
  "name": "SendMessage",
  "input": {
    "type": "broadcast",
    "content": "Critical: Found security vulnerability in token validation",
    "summary": "Security alert"
  }
}
```

### Shutdown Request

```json
{
  "type": "tool_use",
  "name": "SendMessage",
  "input": {
    "type": "shutdown_request",
    "recipient": "researcher",
    "content": "Task complete, please wrap up"
  }
}
```

### Shutdown Response

```json
{
  "type": "tool_use",
  "name": "SendMessage",
  "input": {
    "type": "shutdown_response",
    "request_id": "req-abc123",
    "approve": true
  }
}
```

## Hook Events for Subagents

### SubagentStart Hook Input

```json
{
  "session_id": "main-session-uuid",
  "transcript_path": "/path/to/main/session.jsonl",
  "hook_event_name": "SubagentStart",
  "agent_id": "a3fecd5",
  "agent_type": "Explore"
}
```

### SubagentStop Hook Input

```json
{
  "session_id": "main-session-uuid",
  "hook_event_name": "SubagentStop",
  "agent_id": "a3fecd5",
  "agent_type": "Explore",
  "agent_transcript_path": "/path/to/subagent/agent-a3fecd5.jsonl",
  "last_assistant_message": "Research complete. Found 3 auth patterns..."
}
```

## AMC Spawn Tracking

AMC tracks spawned agents through:

### Pending Spawn Record

```json
// ~/.local/share/amc/pending_spawns/<spawn-id>.json
{
  "spawn_id": "550e8400-e29b-41d4-a716-446655440000",
  "project_path": "/Users/dev/myproject",
  "agent_type": "claude",
  "timestamp": 1708872000.123
}
```

### Session State with Spawn ID

```json
// ~/.local/share/amc/sessions/<session-id>.json
{
  "session_id": "session-uuid",
  "agent": "claude",
  "project": "myproject",
  "spawn_id": "550e8400-e29b-41d4-a716-446655440000",
  "zellij_session": "main",
  "zellij_pane": "3"
}
```

## Resuming Subagents

Subagents can be resumed using their agent ID:

```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "description": "Continue auth research",
    "prompt": "Continue where you left off",
    "subagent_type": "Explore",
    "resume": "a3fecd5"
  }
}
```

The resumed agent receives the full previous context.

## Worktree Isolation

For isolated code changes:

```json
{
  "type": "tool_use",
  "name": "Task",
  "input": {
    "prompt": "Refactor auth module",
    "subagent_type": "general-purpose",
    "isolation": "worktree"
  }
}
```

Creates a temporary git worktree at `.claude/worktrees/<name>/`.

Result includes worktree info:

```json
{
  "type": "tool_result",
  "content": "Refactoring complete.\n\nWorktree: .claude/worktrees/auth-refactor\nBranch: claude/auth-refactor-abc123\n\nChanges made - worktree preserved for review."
}
```

---

**docs/claude-jsonl-reference/05-edge-cases.md** (475 lines, new file)
# Edge Cases and Error Handling

Comprehensive guide to edge cases, malformed input handling, and error recovery in Claude JSONL processing.

## Parsing Edge Cases

### 1. Invalid JSON Lines

**Scenario:** Corrupted or truncated JSON line.

```python
# BAD: Crashes on invalid JSON
for line in file:
    data = json.loads(line)  # Raises JSONDecodeError

# GOOD: Skip invalid lines
for line in file:
    if not line.strip():
        continue
    try:
        data = json.loads(line)
    except json.JSONDecodeError:
        continue  # Skip malformed line
```

### 2. Content Type Ambiguity

**Scenario:** User message content can be a string OR an array.

```python
# BAD: Assumes string
user_text = message['content']

# GOOD: Check type
content = message['content']
if isinstance(content, str):
    user_text = content
elif isinstance(content, list):
    # This is tool results, not user input
    user_text = None
```

### 3. Missing Optional Fields

**Scenario:** Fields may be absent in older versions.

```python
# BAD: Assumes field exists
tokens = message['usage']['cache_read_input_tokens']

# GOOD: Safe access
usage = message.get('usage', {})
tokens = usage.get('cache_read_input_tokens', 0)
```

### 4. Partial File Reads

**Scenario:** Reading the last N bytes may cut the first line.

```python
# When seeking to end - N bytes, the first line may be partial.
# Open in binary mode: text-mode files only support seeking to
# offsets previously returned by tell().
def read_tail(file_path, max_bytes=1_000_000):
    with open(file_path, 'rb') as f:
        f.seek(0, 2)  # End
        size = f.tell()

        if size > max_bytes:
            f.seek(size - max_bytes)
            f.readline()  # Discard partial first line
        else:
            f.seek(0)

        return [line.decode('utf-8', errors='replace') for line in f]
```

### 5. Non-Dict JSON Values

**Scenario:** Line contains valid JSON but not an object.

```python
# File might contain: 123, "string", [1,2,3], null
data = json.loads(line)
if not isinstance(data, dict):
    continue  # Skip non-object JSON
```

## Type Coercion Edge Cases

### Integer Conversion

```python
def safe_int(value):
    """Convert to int, rejecting booleans."""
    # Python: isinstance(True, int) == True, so check explicitly
    if isinstance(value, bool):
        return None
    if isinstance(value, int):
        return value
    if isinstance(value, float):
        return int(value)
    if isinstance(value, str):
        try:
            return int(value)
        except ValueError:
            return None
    return None
```

### Token Summation

```python
def sum_tokens(*values):
    """Sum token counts, handling None/missing."""
    valid = [v for v in values if isinstance(v, (int, float)) and not isinstance(v, bool)]
    return sum(valid) if valid else None
```

## Session State Edge Cases

### 1. Orphan Sessions

**Scenario:** Multiple sessions claim the same Zellij pane (e.g., after `--resume`).

**Resolution:** Keep the session with:

1. Highest priority: has `context_usage` (indicates real work)
2. Second priority: latest `conversation_mtime_ns`

```python
def dedupe_sessions(sessions):
    by_pane = {}
    for s in sessions:
        key = (s['zellij_session'], s['zellij_pane'])
        if key not in by_pane:
            by_pane[key] = s
        else:
            existing = by_pane[key]
            # Prefer session with context_usage
            if s.get('context_usage') and not existing.get('context_usage'):
                by_pane[key] = s
            elif existing.get('context_usage') and not s.get('context_usage'):
                pass  # keep existing: context_usage outranks recency
            elif s.get('conversation_mtime_ns', 0) > existing.get('conversation_mtime_ns', 0):
                by_pane[key] = s
    return list(by_pane.values())
```

### 2. Dead Session Detection

**Claude:** Check that the Zellij session exists:

```python
import subprocess

def is_claude_dead(session):
    if session['status'] == 'starting':
        return False  # Benefit of the doubt

    zellij = session.get('zellij_session')
    if not zellij:
        return True

    # Check if the Zellij session exists
    result = subprocess.run(['zellij', 'list-sessions'], capture_output=True)
    return zellij not in result.stdout.decode()
```

**Codex:** Check whether a process has the transcript open:

```python
def is_codex_dead(session):
    transcript = session.get('transcript_path')
    if not transcript:
        return True

    # lsof exits non-zero when no process has the file open
    result = subprocess.run(['lsof', transcript], capture_output=True)
    return result.returncode != 0
```

### 3. Stale Session Cleanup

```python
from datetime import timedelta

ORPHAN_AGE_HOURS = 24
STARTING_AGE_HOURS = 1

def should_cleanup(session, now):
    age = now - session['started_at']

    if session['status'] == 'starting' and age > timedelta(hours=STARTING_AGE_HOURS):
        return True  # Stuck in starting

    if session.get('is_dead') and age > timedelta(hours=ORPHAN_AGE_HOURS):
        return True  # Dead and old

    return False
```

## Tool Call Edge Cases

### 1. Missing Tool Results

**Scenario:** Session interrupted between tool_use and tool_result.

```python
def pair_tool_calls(messages):
    pending = {}  # tool_use_id -> tool_use

    for msg in messages:
        if msg['type'] == 'assistant':
            for block in msg['message'].get('content', []):
                if block.get('type') == 'tool_use':
                    pending[block['id']] = block

        elif msg['type'] == 'user':
            content = msg['message'].get('content', [])
            if isinstance(content, list):
                for block in content:
                    if block.get('type') == 'tool_result':
                        tool_id = block.get('tool_use_id')
                        if tool_id in pending:
                            pending[tool_id]['result'] = block

    # Any pending without result = interrupted
    incomplete = [t for t in pending.values() if 'result' not in t]
    return pending, incomplete
```

### 2. Parallel Tool Call Ordering

**Scenario:** Multiple tool_use blocks in one message; results may come back in a different order.

```python
# Match by ID, not by position
tool_uses = [b for b in assistant_content if b['type'] == 'tool_use']
tool_results = [b for b in user_content if b['type'] == 'tool_result']

paired = {}
for result in tool_results:
    paired[result['tool_use_id']] = result

for use in tool_uses:
    result = paired.get(use['id'])
    # result may be None if missing
```

### 3. Tool Error Results

```python
def is_tool_error(result_block):
    return result_block.get('is_error', False)

def extract_error_message(result_block):
    content = result_block.get('content', '')
    # content can also be a list of blocks; only strings are inspected here
    if isinstance(content, str) and content.startswith('Error:'):
        return content
    return None
```

## Codex-Specific Edge Cases

### 1. Content Injection Filtering

Codex may include system context in messages that should be filtered:

```python
SKIP_PREFIXES = [
    '<INSTRUCTIONS>',
    '<environment_context>',
    '<permissions instructions>',
    '# AGENTS.md instructions'
]

def should_skip_content(text):
    return any(text.startswith(prefix) for prefix in SKIP_PREFIXES)
```

### 2. Developer Role Filtering

```python
def parse_codex_message(payload):
    role = payload.get('role')
    if role == 'developer':
        return None  # Skip system/developer messages
    return payload
```

### 3. Function Call Arguments Parsing

```python
def parse_arguments(arguments):
    if isinstance(arguments, dict):
        return arguments
    if isinstance(arguments, str):
        try:
            return json.loads(arguments)
        except json.JSONDecodeError:
            return {'raw': arguments}
    return {}
```

### 4. Tool Call Buffering

Codex tool calls need buffering until the next message:

```python
class CodexParser:
    def __init__(self):
        self.pending_tools = []

    def process_entry(self, entry):
        payload = entry.get('payload', {})
        ptype = payload.get('type')

        if ptype == 'function_call':
            self.pending_tools.append({
                'name': payload['name'],
                'input': self.parse_arguments(payload['arguments'])
            })
            return None  # Don't emit yet

        elif ptype == 'message' and payload.get('role') == 'assistant':
            msg = self.create_message(payload)
            if self.pending_tools:
                msg['tool_calls'] = self.pending_tools
                self.pending_tools = []
            return msg

        elif ptype == 'message' and payload.get('role') == 'user':
            # Flush pending tools before the user message
            msgs = []
            if self.pending_tools:
                msgs.append({'role': 'assistant', 'tool_calls': self.pending_tools})
                self.pending_tools = []
            msgs.append(self.create_message(payload))
            return msgs
```

## File System Edge Cases

### 1. Path Traversal Prevention

```python
import os

def validate_session_id(session_id):
    # Must be basename only
    if os.path.basename(session_id) != session_id:
        raise ValueError("Invalid session ID")

    # No special characters
    if any(c in session_id for c in ['/', '\\', '..', '\x00']):
        raise ValueError("Invalid session ID")

def validate_project_path(project_path, base_dir):
    resolved = os.path.realpath(project_path)
    base = os.path.realpath(base_dir)

    if not resolved.startswith(base + os.sep):
        raise ValueError("Path traversal detected")
```

### 2. File Not Found

```python
def read_session_file(path):
    try:
        with open(path, 'r') as f:
            return f.read()
    except FileNotFoundError:
        return None
    except PermissionError:
        return None
    except OSError:
        return None
```

### 3. Empty Files

```python
def parse_jsonl(path):
    with open(path, 'r') as f:
        content = f.read()

    if not content.strip():
        return []  # Empty file

    return [json.loads(line) for line in content.strip().split('\n') if line.strip()]
```

## Subprocess Edge Cases

### 1. Timeout Handling

```python
import subprocess

def run_with_timeout(cmd, timeout=5):
    try:
        result = subprocess.run(
            cmd,
            capture_output=True,
            timeout=timeout,
            text=True
        )
        return result.stdout
    except subprocess.TimeoutExpired:
        return None
    except FileNotFoundError:
        return None
    except OSError:
        return None
```

### 2. ANSI Code Stripping

```python
import re

ANSI_PATTERN = re.compile(r'\x1b\[[0-9;]*m')

def strip_ansi(text):
    return ANSI_PATTERN.sub('', text)
```

## Cache Invalidation

### Mtime-Based Cache

```python
class FileCache:
    def __init__(self, max_size=100):
        self.cache = {}
        self.max_size = max_size

    def get(self, path):
        if path not in self.cache:
            return None

        entry = self.cache[path]
        try:
            stat = os.stat(path)
        except OSError:
            # File deleted since it was cached
            del self.cache[path]
            return None

        # Invalidate if the file changed
        if stat.st_mtime_ns != entry['mtime_ns'] or stat.st_size != entry['size']:
            del self.cache[path]
            return None

        return entry['data']

    def set(self, path, data):
        # Evict the oldest entry if full
        if len(self.cache) >= self.max_size:
            oldest = next(iter(self.cache))
            del self.cache[oldest]

        stat = os.stat(path)
        self.cache[path] = {
            'mtime_ns': stat.st_mtime_ns,
            'size': stat.st_size,
            'data': data
        }
```

## Testing Edge Cases Checklist

- [ ] Empty JSONL file
- [ ] Single-line JSONL file
- [ ] Truncated JSON line
- [ ] Non-object JSON values (numbers, strings, arrays)
- [ ] Missing required fields
- [ ] Unknown message types
- [ ] Content as string vs array
- [ ] Boolean vs integer confusion
- [ ] Unicode in content
- [ ] Very long lines (>64KB)
- [ ] Concurrent file modifications
- [ ] Missing tool results
- [ ] Multiple tool calls in single message
- [ ] Session without Zellij pane
- [ ] Codex developer messages
- [ ] Path traversal attempts
- [ ] Symlink escape attempts

---

**docs/claude-jsonl-reference/06-quick-reference.md** (238 lines, new file)

# Quick Reference

Cheat sheet for common Claude JSONL operations.

## File Locations

```bash
# Claude sessions
~/.claude/projects/-Users-user-projects-myapp/*.jsonl

# Codex sessions
~/.codex/sessions/**/*.jsonl

# Subagent transcripts
~/.claude/projects/.../session-id/subagents/agent-*.jsonl

# AMC session state
~/.local/share/amc/sessions/*.json
```

## Path Encoding

```python
# Encode: /Users/dev/myproject -> -Users-dev-myproject
# (the leading '/' itself becomes the leading '-')
encoded = project_path.replace('/', '-')

# Decode: -Users-dev-myproject -> /Users/dev/myproject
decoded = encoded.replace('-', '/')
```
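The decode step is only safe when no directory name contains a hyphen; the encoding is not reversible in general, so prefer the `cwd` field recorded in the transcript when you need the real path. A quick demonstration of the collision:

```python
project = "/Users/dev/my-app"          # directory name contains a hyphen

encoded = project.replace("/", "-")    # '-Users-dev-my-app'
decoded = encoded.replace("-", "/")    # '/Users/dev/my/app', not the original
```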

## Message Type Quick ID

| If you see... | It's a... |
|---------------|-----------|
| `"type": "user"` + string content | User input |
| `"type": "user"` + array content | Tool results |
| `"type": "assistant"` | Claude response |
| `"type": "progress"` | Hook/tool execution |
| `"type": "summary"` | Session summary |
| `"type": "system"` | Metadata/commands |

## Content Block Quick ID

| Block Type | Key Fields |
|------------|------------|
| `text` | `text` |
| `thinking` | `thinking`, `signature` |
| `tool_use` | `id`, `name`, `input` |
| `tool_result` | `tool_use_id`, `content`, `is_error` |

## jq Recipes

```bash
# Count messages by type
jq -s 'group_by(.type) | map({type: .[0].type, count: length})' session.jsonl

# Extract all tool calls
jq -c 'select(.type=="assistant") | .message.content[]? | select(.type=="tool_use")' session.jsonl

# Get user messages only
jq -c 'select(.type=="user" and (.message.content | type)=="string")' session.jsonl

# Sum tokens
jq -s '[.[].message.usage? | select(.) | .input_tokens + .output_tokens] | add' session.jsonl

# List tools used
jq -c 'select(.type=="assistant") | .message.content[]? | select(.type=="tool_use") | .name' session.jsonl | sort | uniq -c

# Find errors
jq -c 'select(.type=="user") | .message.content[]? | select(.type=="tool_result" and .is_error==true)' session.jsonl
```

## Python Snippets

### Read JSONL

```python
import json

def read_jsonl(path):
    with open(path) as f:
        for line in f:
            if line.strip():
                try:
                    yield json.loads(line)
                except json.JSONDecodeError:
                    continue
```

### Extract Conversation

```python
def extract_conversation(path):
    messages = []
    for event in read_jsonl(path):
        if event['type'] == 'user':
            content = event['message']['content']
            if isinstance(content, str):
                messages.append({'role': 'user', 'content': content})
        elif event['type'] == 'assistant':
            for block in event['message'].get('content', []):
                if block.get('type') == 'text':
                    messages.append({'role': 'assistant', 'content': block['text']})
    return messages
```

### Get Token Usage

```python
def get_token_usage(path):
    total_input = 0
    total_output = 0

    for event in read_jsonl(path):
        if event['type'] == 'assistant':
            usage = event.get('message', {}).get('usage', {})
            total_input += usage.get('input_tokens', 0)
            total_output += usage.get('output_tokens', 0)

    return {'input': total_input, 'output': total_output}
```

### Find Tool Calls

```python
def find_tool_calls(path):
    tools = []
    for event in read_jsonl(path):
        if event['type'] == 'assistant':
            for block in event['message'].get('content', []):
                if block.get('type') == 'tool_use':
                    tools.append({
                        'name': block['name'],
                        'id': block['id'],
                        'input': block['input']
                    })
    return tools
```

### Pair Tools with Results

```python
def pair_tools_results(path):
    pending = {}

    for event in read_jsonl(path):
        if event['type'] == 'assistant':
            for block in event['message'].get('content', []):
                if block.get('type') == 'tool_use':
                    pending[block['id']] = {'use': block, 'result': None}

        elif event['type'] == 'user':
            content = event['message'].get('content', [])
            if isinstance(content, list):
                for block in content:
                    if block.get('type') == 'tool_result':
                        tool_id = block['tool_use_id']
                        if tool_id in pending:
                            pending[tool_id]['result'] = block

    return pending
```
|
||||
|
||||
## Common Gotchas
|
||||
|
||||
| Gotcha | Solution |
|
||||
|--------|----------|
|
||||
| `content` can be string or array | Check `isinstance(content, str)` first |
|
||||
| `usage` may be missing | Use `.get('usage', {})` |
|
||||
| Booleans are ints in Python | Check `isinstance(v, bool)` before `isinstance(v, int)` |
|
||||
| First line may be partial after seek | Call `readline()` to discard |
|
||||
| Tool results in user messages | Check for `tool_result` type in array |
|
||||
| Codex `arguments` is JSON string | Parse with `json.loads()` |
|
||||
| Agent ID vs session ID | Agent ID survives rewrites, session ID is per-run |
|
||||
|
||||
## Status Values
|
||||
|
||||
| Field | Values |
|
||||
|-------|--------|
|
||||
| `status` | `starting`, `active`, `done` |
|
||||
| `stop_reason` | `end_turn`, `max_tokens`, `tool_use`, null |
|
||||
| `is_error` | `true`, `false` (tool results) |
|
||||
|
||||
## Token Fields
|
||||
|
||||
```python
|
||||
# All possible token fields to sum
|
||||
token_fields = [
|
||||
'input_tokens',
|
||||
'output_tokens',
|
||||
'cache_creation_input_tokens',
|
||||
'cache_read_input_tokens'
|
||||
]
|
||||
|
||||
# Context window by model
|
||||
context_windows = {
|
||||
'claude-opus': 200_000,
|
||||
'claude-sonnet': 200_000,
|
||||
'claude-haiku': 200_000,
|
||||
'claude-2': 100_000
|
||||
}
|
||||
```
|
||||
|
||||
## Useful Constants
|
||||
|
||||
```python
|
||||
# File locations
|
||||
CLAUDE_BASE = os.path.expanduser('~/.claude/projects')
|
||||
CODEX_BASE = os.path.expanduser('~/.codex/sessions')
|
||||
AMC_BASE = os.path.expanduser('~/.local/share/amc')
|
||||
|
||||
# Read limits
|
||||
MAX_TAIL_BYTES = 1_000_000 # 1MB
|
||||
MAX_LINES = 400 # For context extraction
|
||||
|
||||
# Timeouts
|
||||
SUBPROCESS_TIMEOUT = 5 # seconds
|
||||
SPAWN_COOLDOWN = 30 # seconds
|
||||
|
||||
# Session ages
|
||||
ACTIVE_THRESHOLD_MINUTES = 2
|
||||
ORPHAN_CLEANUP_HOURS = 24
|
||||
STARTING_CLEANUP_HOURS = 1
|
||||
```
|
||||
|
||||
## Debugging Commands
|
||||
|
||||
```bash
|
||||
# Watch session file changes
|
||||
tail -f ~/.claude/projects/-path-to-project/*.jsonl | jq -c
|
||||
|
||||
# Find latest session
|
||||
ls -t ~/.claude/projects/-path-to-project/*.jsonl | head -1
|
||||
|
||||
# Count lines in session
|
||||
wc -l session.jsonl
|
||||
|
||||
# Validate JSON
|
||||
cat session.jsonl | while read line; do echo "$line" | jq . > /dev/null || echo "Invalid: $line"; done
|
||||
|
||||
# Pretty print last message
|
||||
tail -1 session.jsonl | jq .
|
||||
```
57
docs/claude-jsonl-reference/README.md
Normal file
@@ -0,0 +1,57 @@
# Claude JSONL Session Log Reference

Comprehensive documentation for parsing and processing Claude Code JSONL session logs in the AMC project.

## Overview

Claude Code stores all conversations as JSONL (JSON Lines) files — one JSON object per line. This documentation provides authoritative specifications for:

- Message envelope structure and common fields
- All message types (user, assistant, progress, system, summary, etc.)
- Content block types (text, tool_use, tool_result, thinking)
- Tool call lifecycle and result handling
- Subagent spawn and team coordination formats
- Edge cases, error handling, and recovery patterns

## Documents

| Document | Purpose |
|----------|---------|
| [01-format-specification.md](./01-format-specification.md) | Complete JSONL format spec with all fields |
| [02-message-types.md](./02-message-types.md) | Every message type with concrete examples |
| [03-tool-lifecycle.md](./03-tool-lifecycle.md) | Tool call flow from invocation to result |
| [04-subagent-teams.md](./04-subagent-teams.md) | Subagent and team message formats |
| [05-edge-cases.md](./05-edge-cases.md) | Error handling, malformed input, recovery |
| [06-quick-reference.md](./06-quick-reference.md) | Cheat sheet for common operations |

## File Locations

| Content | Location |
|---------|----------|
| Claude sessions | `~/.claude/projects/<encoded-path>/<session-id>.jsonl` |
| Codex sessions | `~/.codex/sessions/**/<session-id>.jsonl` |
| Subagent transcripts | `~/.claude/projects/<path>/<session-id>/subagents/agent-<id>.jsonl` |
| AMC session state | `~/.local/share/amc/sessions/<session-id>.json` |
| AMC event logs | `~/.local/share/amc/events/<session-id>.jsonl` |

## Path Encoding

Project paths are encoded by replacing `/` with `-`, which turns the leading slash of an absolute path into a leading dash:

- `/Users/taylor/projects/amc` → `-Users-taylor-projects-amc`
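As a minimal sketch, the mapping is a single string replacement (the function name is illustrative; the real encoder may also normalize other characters such as dots):

```python
def encode_project_path(path: str) -> str:
    """Encode an absolute project path into a Claude projects directory name.

    Replacing every '/' with '-' turns the leading slash into the leading dash.
    """
    return path.replace('/', '-')
```

For example, `encode_project_path('/Users/taylor/projects/amc')` yields `-Users-taylor-projects-amc`.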
## Key Principles

1. **NDJSON format** — Each line is complete, parseable JSON
2. **Append-only** — Sessions are written incrementally
3. **UUID linking** — Messages link via `uuid` and `parentUuid`
4. **Graceful degradation** — Always handle missing/unknown fields
5. **Type safety** — Validate types before use (arrays vs strings, etc.)

## Sources

- [Claude Code Hooks Reference](https://code.claude.com/docs/en/hooks.md)
- [Claude Code Headless Documentation](https://code.claude.com/docs/en/headless.md)
- [Anthropic Messages API Reference](https://docs.anthropic.com/en/api/messages)
- [Inside Claude Code: Session File Format](https://medium.com/@databunny/inside-claude-code-the-session-file-format-and-how-to-inspect-it-b9998e66d56b)
- [Community: claude-code-log](https://github.com/daaain/claude-code-log)
- [Community: claude-JSONL-browser](https://github.com/withLinda/claude-JSONL-browser)
503
plans/PLAN-tool-result-display.md
Normal file
@@ -0,0 +1,503 @@
# Plan: Tool Result Display in AMC Dashboard

> **Status:** Draft — awaiting review and mockup phase
> **Author:** Claude + Taylor
> **Created:** 2026-02-27

## Summary

Add the ability to view tool call results (diffs, bash output, file contents) directly in the AMC dashboard conversation view. Currently, users see that a tool was called but cannot see what it did. This feature brings Claude Code's result visibility to the multi-agent dashboard.

### Goals

1. **See code changes as they happen** — diffs from Edit/Write tools always visible
2. **Debug agent behavior** — inspect Bash output, Read content, search results
3. **Match Claude Code UX** — familiar expand/collapse behavior with latest results expanded

### Non-Goals (v1)

- Codex agent support (different JSONL format — deferred to v2)
- Copy-to-clipboard functionality
- Virtual scrolling / performance optimization
- Editor integration (clicking paths to open files)
- Accessibility (keyboard navigation, focus management, ARIA labels — deferred to v2)
- Lazy-fetch API for tool results (consider for v2 if payload size becomes an issue)

---
## User Workflows

### Workflow 1: Watching an Active Session

1. User opens a session card showing an active Claude agent
2. Agent calls Edit tool to modify a file
3. User immediately sees the diff expanded below the tool call pill
4. Agent calls Bash to run tests
5. User sees bash output expanded, previous Edit diff stays expanded (it's a diff)
6. Agent sends a text message explaining results
7. Bash output collapses (new assistant message arrived), Edit diff stays expanded

### Workflow 2: Reviewing a Completed Session

1. User opens a completed session to review what the agent did
2. All tool calls are collapsed by default (no "latest" assistant message)
3. Exception: Edit/Write diffs are still expanded
4. User clicks a Bash tool call to see what command ran and its output
5. User clicks "Show full output" when output is truncated
6. Lightweight modal opens with full scrollable content
7. User closes modal and continues reviewing

### Workflow 3: Debugging a Failed Tool Call

1. Agent runs a Bash command that fails
2. Tool result block shows with red-tinted background
3. stderr content is visible, clearly marked as error
4. User can see what went wrong without leaving the dashboard

---
## Acceptance Criteria

### Display Behavior

- **AC-1:** Tool calls render as expandable elements showing tool name and summary
- **AC-2:** Clicking a collapsed tool call expands to show its result
- **AC-3:** Clicking an expanded tool call collapses it
- **AC-4:** In active sessions, tool results in the most recent assistant message are expanded by default
- **AC-5:** When a new assistant message arrives, previous non-diff tool results collapse unless the user has manually toggled them in that message
- **AC-6:** Edit and Write results remain expanded regardless of message age or session status (even if Write only has confirmation text)
- **AC-7:** In completed sessions, all non-diff tool results start collapsed
- **AC-8:** Tool calls without results display as non-expandable with muted styling; in active sessions, pending tool calls show a spinner to distinguish in-progress from permanently missing

### Diff Rendering

- **AC-9:** Edit/Write results display structuredPatch data as syntax-highlighted diff; falls back to raw content text if structuredPatch is malformed or absent
- **AC-10:** Diff additions render with VS Code dark theme green background (rgba(46, 160, 67, 0.15))
- **AC-11:** Diff deletions render with VS Code dark theme red background (rgba(248, 81, 73, 0.15))
- **AC-12:** Full file path displays above each diff block
- **AC-13:** Diff context lines use structuredPatch as-is (no recomputation)

### Other Tool Types

- **AC-14:** Bash results display stdout in monospace, stderr separately if present
- **AC-15:** Bash output with ANSI escape codes renders as colored HTML (via ansi_up)
- **AC-16:** Read results display file content with syntax highlighting based on file extension
- **AC-17:** Grep/Glob results display file list with match counts
- **AC-18:** Unknown tools (WebFetch, Task, etc.) use GenericResult fallback showing raw content

### Truncation

- **AC-19:** Long outputs truncate at configurable line/character thresholds (defaults tuned to approximate Claude Code behavior)
- **AC-20:** Truncated outputs show "Show full output (N lines)" link
- **AC-21:** Clicking "Show full output" opens a dedicated lightweight modal
- **AC-22:** Modal displays full content with syntax highlighting, scrollable

### Error States

- **AC-23:** Failed tool calls display with red-tinted background
- **AC-24:** Error content (stderr, error messages) is clearly distinguishable from success content
- **AC-25:** is_error flag from tool_result determines error state

### API Contract

- **AC-26:** /api/conversation response includes tool results nested in tool_calls
- **AC-27:** Each tool_call has: name, id, input, result (when available)
- **AC-28:** All tool results conform to a normalized envelope: `{ kind, status, content, is_error }` with tool-specific fields nested in `content`

---
## Architecture

### Why Two-Pass JSONL Parsing

The Claude Code JSONL stores tool_use and tool_result as separate entries linked by tool_use_id. To nest results inside tool_calls for the API response, the server must:

1. First pass: Build a map of tool_use_id → toolUseResult
2. Second pass: Parse messages, attaching results to matching tool_calls

This adds parsing overhead but keeps the API contract simple. Alternatives considered:

- **Streaming/incremental:** More complex, doesn't help since we need full conversation anyway
- **Client-side joining:** Shifts complexity to frontend, increases payload size
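The two-pass approach can be sketched as follows (a minimal illustration, not the actual `_parse_claude_conversation` code; `parse_with_results` and its exact field handling are assumptions):

```python
import json

def parse_with_results(path):
    entries = []
    with open(path) as f:
        for line in f:
            if line.strip():
                entries.append(json.loads(line))

    # Pass 1: map tool_use_id -> toolUseResult from user entries
    results = {}
    for e in entries:
        if e.get('type') == 'user' and 'toolUseResult' in e:
            content = e.get('message', {}).get('content', [])
            if isinstance(content, list):
                for block in content:
                    if isinstance(block, dict) and block.get('type') == 'tool_result':
                        results[block['tool_use_id']] = e['toolUseResult']

    # Pass 2: build tool_calls, attaching each matching result
    tool_calls = []
    for e in entries:
        if e.get('type') == 'assistant':
            for block in e.get('message', {}).get('content', []):
                if block.get('type') == 'tool_use':
                    tool_calls.append({
                        'name': block['name'],
                        'id': block['id'],
                        'input': block.get('input', {}),
                        'result': results.get(block['id']),  # None when missing
                    })
    return tool_calls
```

A missing result simply leaves `result` as `None`, matching the "return tool_call without result" behavior.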
### Why Render Everything, Not Virtual Scroll

Sessions typically have 20-80 tool calls. Modern browsers handle hundreds of DOM elements efficiently. Virtual scrolling adds significant complexity (measuring, windowing, scroll position management) for marginal benefit.

Decision: Ship simple, measure real-world performance, optimize if >100ms render times observed.

### Why Dedicated Modal Over Inline Expansion

Full output can be thousands of lines. Inline expansion would:

- Push other content out of view
- Make scrolling confusing
- Lose context of surrounding conversation

A modal provides a focused reading experience without disrupting conversation layout.

### Why a Normalized Result Contract

Raw `toolUseResult` shapes vary wildly by tool type — Edit has `structuredPatch`, Bash has `stdout`/`stderr`, Glob has `filenames`. Passing these raw to the frontend means every renderer must know the exact JSONL format, and adding Codex support (v2) would require duplicating all that branching.

Instead, the server normalizes each result into a stable envelope:

```python
{
    "kind": "diff" | "bash" | "file_content" | "file_list" | "generic",
    "status": "success" | "error" | "pending",
    "is_error": bool,
    "content": { ... }  # tool-specific fields, documented per kind
}
```

The frontend switches on `kind` (5 cases) rather than tool name (unbounded). This also gives us a clean seam for the `result_mode` query parameter if payload size becomes an issue later.

### Component Structure

```
MessageBubble
├── Content (text)
├── Thinking (existing)
└── ToolCallList (new)
    └── ToolCallItem (repeated)
        ├── Header (pill: chevron, name, summary, status)
        └── ResultContent (conditional)
            ├── DiffResult (for Edit/Write)
            ├── BashResult (for Bash)
            ├── FileListResult (for Glob/Grep)
            └── GenericResult (fallback)

FullOutputModal (new, top-level)
├── Header (tool name, file path)
├── Content (full output, scrollable)
└── CloseButton
```
---

## Implementation Specifications

### IMP-SERVER: Parse and Attach Tool Results

**Fulfills:** AC-26, AC-27, AC-28

**Location:** `amc_server/mixins/conversation.py`

**Changes to `_parse_claude_conversation`:**

Two-pass parsing:

1. First pass: Scan all entries, build map of `tool_use_id` → `toolUseResult`
2. Second pass: Parse messages as before, but when encountering `tool_use`, lookup and attach result

**API query parameter:** `/api/conversation?result_mode=full` (default). Future option: `result_mode=preview` to return truncated previews and reduce payload size without an API-breaking change.

**Normalization step:** After looking up the raw `toolUseResult`, the server normalizes it into the stable envelope before attaching:

```python
{
    "name": "Edit",
    "id": "toolu_abc123",
    "input": {"file_path": "...", "old_string": "...", "new_string": "..."},
    "result": {
        "kind": "diff",
        "status": "success",
        "is_error": False,
        "content": {
            "structuredPatch": [...],
            "filePath": "...",
            "text": "The file has been updated successfully."
        }
    }
}
```

**Normalized `kind` mapping:**

| kind | Source Tools | `content` Fields |
|------|-------------|-----------------|
| `diff` | Edit, Write | `structuredPatch`, `filePath`, `text` |
| `bash` | Bash | `stdout`, `stderr`, `interrupted` |
| `file_content` | Read | `file`, `type`, `text` |
| `file_list` | Glob, Grep | `filenames`, `numFiles`, `truncated`, `numLines` |
| `generic` | All others | `text` (raw content string) |
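The normalization layer could look roughly like this (a sketch under the mapping above, not the actual conversation.py code; `normalize_result` and `KIND_BY_TOOL` are illustrative names):

```python
import json

KIND_BY_TOOL = {
    'Edit': 'diff', 'Write': 'diff',
    'Bash': 'bash',
    'Read': 'file_content',
    'Glob': 'file_list', 'Grep': 'file_list',
}

def normalize_result(tool_name, raw, is_error=False):
    """Wrap a raw toolUseResult in the stable {kind, status, is_error, content} envelope."""
    kind = KIND_BY_TOOL.get(tool_name, 'generic')
    if raw is None:
        # Tool call with no result yet (or permanently missing)
        return {'kind': kind, 'status': 'pending', 'is_error': False, 'content': {}}
    if kind == 'generic':
        # Unknown tools collapse to a raw text payload
        content = {'text': raw if isinstance(raw, str) else json.dumps(raw)}
    else:
        content = raw  # pass through the fields listed in the kind table
    return {
        'kind': kind,
        'status': 'error' if is_error else 'success',
        'is_error': bool(is_error),
        'content': content,
    }
```

Adding Codex support later means adding Codex-shaped normalizers here, leaving the frontend's five `kind` renderers untouched.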
---

### IMP-TOOLCALL: Expandable Tool Call Component

**Fulfills:** AC-1, AC-2, AC-3, AC-4, AC-5, AC-6, AC-7, AC-8

**Location:** `dashboard/lib/markdown.js` (refactor `renderToolCalls`)

**New function: `ToolCallItem`**

Renders a single tool call with:

- Chevron for expand/collapse (when result exists and not Edit/Write)
- Tool name (bold, colored)
- Summary (from existing `getToolSummary`)
- Status icon (checkmark or X)
- Result content (when expanded)

**State Management:**

Track two sets per message: `autoExpanded` (system-controlled) and `userToggled` (manual clicks).

When new assistant message arrives:

- Compare latest assistant message ID to stored ID
- If different, reset `autoExpanded` to empty for previous messages
- `userToggled` entries are never reset — user intent is preserved
- Edit/Write tools bypass this logic (always expanded via CSS/logic)

Expand/collapse logic: a tool call is expanded if it is in `userToggled` (explicit click) OR in `autoExpanded` (latest message) OR is Edit/Write kind.
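The predicate is small enough to state directly. A Python sketch of the rule (the real implementation lives in the dashboard's JavaScript; names here are illustrative):

```python
ALWAYS_EXPANDED_TOOLS = {'Edit', 'Write'}  # diff results never auto-collapse

def is_expanded(tool_name, tool_id, user_toggled, auto_expanded):
    """Expanded if the user clicked it open, it belongs to the latest
    assistant message, or it renders a diff (Edit/Write)."""
    return (tool_name in ALWAYS_EXPANDED_TOOLS
            or tool_id in user_toggled
            or tool_id in auto_expanded)
```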
---

### IMP-DIFF: Diff Rendering Component

**Fulfills:** AC-9, AC-10, AC-11, AC-12, AC-13

**Location:** `dashboard/lib/markdown.js` (new function `renderDiff`)

**Add diff language to highlight.js:**

```javascript
import langDiff from 'https://esm.sh/highlight.js@11.11.1/lib/languages/diff';
hljs.registerLanguage('diff', langDiff);
```

**Diff Renderer:**

1. If `structuredPatch` is present and valid, convert to unified diff text:
   - Each hunk: `@@ -oldStart,oldLines +newStart,newLines @@`
   - Followed by hunk.lines array
2. If `structuredPatch` is missing or malformed, fall back to raw `content.text` in a monospace block
3. Syntax highlight with hljs diff language
4. Sanitize with DOMPurify before rendering
5. Wrap in container with file path header
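Step 1 can be sketched as follows (Python for illustration; the real renderer is JavaScript in markdown.js, and the hunk field names assume the `structuredPatch` shape described above):

```python
def patch_to_diff_text(structured_patch):
    """Convert structuredPatch hunks into unified-diff text for highlighting."""
    out = []
    for hunk in structured_patch:
        out.append('@@ -{},{} +{},{} @@'.format(
            hunk['oldStart'], hunk['oldLines'],
            hunk['newStart'], hunk['newLines']))
        out.extend(hunk['lines'])  # lines already carry ' ', '-', '+' prefixes
    return '\n'.join(out)
```

The joined text is then fed to the hljs `diff` language, so no per-line markup is needed.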
**CSS styling:**

- Container: dark border, rounded corners
- Header: muted background, monospace font, full file path
- Content: monospace, horizontal scroll
- Additions: `background: rgba(46, 160, 67, 0.15)`
- Deletions: `background: rgba(248, 81, 73, 0.15)`

---

### IMP-BASH: Bash Output Component

**Fulfills:** AC-14, AC-15, AC-23, AC-24

**Location:** `dashboard/lib/markdown.js` (new function `renderBashResult`)

**ANSI-to-HTML conversion:**

```javascript
import { AnsiUp } from 'https://esm.sh/ansi_up';
const ansi = new AnsiUp();
const html = ansi.ansi_to_html(bashOutput);
```

The `ansi_up` library (zero dependencies, ~8KB) converts ANSI escape codes to styled HTML spans, preserving colored test output, progress indicators, and error highlighting from CLI tools.

**Renders:**

- `stdout` in monospace pre block with ANSI colors preserved
- `stderr` in separate block with error styling (if present)
- "Command interrupted" notice (if interrupted flag)

**Sanitization order (CRITICAL):** First convert ANSI to HTML via ansi_up, THEN sanitize with DOMPurify. Sanitizing before conversion would strip escape codes; sanitizing after preserves the styled spans while preventing XSS.

Error state: `is_error` or presence of stderr triggers error styling (red tint, left border).
---

### IMP-TRUNCATE: Output Truncation

**Fulfills:** AC-19, AC-20

**Truncation Thresholds (approximate Claude Code behavior):**

| Tool Type | Max Lines | Max Chars |
|-----------|-----------|-----------|
| Bash stdout | 100 | 10000 |
| Bash stderr | 50 | 5000 |
| Read content | 500 | 50000 |
| Grep matches | 100 | 10000 |
| Glob files | 100 | 5000 |

**Note:** These thresholds need verification against Claude Code behavior. May require adjustment based on testing.

**Truncation Helper:**

Takes content string, returns `{ text, truncated, totalLines }`. If truncated, result renderers show "Show full output (N lines)" link.
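A sketch of the helper's contract (Python for illustration; the real helper is JavaScript, and the defaults shown are the Bash-stdout thresholds from the table):

```python
def truncate_output(text, max_lines=100, max_chars=10_000):
    """Return {'text', 'truncated', 'totalLines'}, cutting at whichever limit hits first."""
    lines = text.split('\n')
    clipped = '\n'.join(lines[:max_lines])[:max_chars]
    return {
        'text': clipped,
        'truncated': clipped != text,
        'totalLines': len(lines),
    }
```

Renderers compare `truncated` to decide whether to show the "Show full output (N lines)" link, with N taken from `totalLines`.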
---

### IMP-MODAL: Full Output Modal

**Fulfills:** AC-21, AC-22

**Location:** `dashboard/components/FullOutputModal.js` (new file)

**Structure:**

- Overlay (click to close)
- Modal container (click does NOT close)
- Header: title (tool name + file path), close button
- Content: scrollable pre/code block with syntax highlighting

**Integration:** Modal state managed at App level or ChatMessages level. "Show full output" link sets state with content + metadata.

---

### IMP-ERROR: Error State Styling

**Fulfills:** AC-23, AC-24, AC-25

**Styling:**

- Tool call header: red-tinted background when `result.is_error`
- Status icon: red X instead of green checkmark
- Bash stderr: red text, italic, distinct from stdout
- Overall: left border accent in error color

---

## Rollout Slices
### Slice 1: Design Mockups (Pre-Implementation)

**Goal:** Validate visual design before building

**Deliverables:**

1. Create `/mockups` test route with static data
2. Implement 3-4 design variants (card-based, minimal, etc.)
3. Use real tool result data from session JSONL
4. User reviews and selects preferred design

**Exit Criteria:** Design direction locked

---

### Slice 2: Server-Side Tool Result Parsing and Normalization

**Goal:** API returns normalized tool results nested in tool_calls

**Deliverables:**

1. Two-pass parsing in `_parse_claude_conversation`
2. Normalization layer: raw `toolUseResult` → `{ kind, status, is_error, content }` envelope
3. Tool results attached with `id` field
4. Unit tests for result attachment and normalization per tool type
5. Handle missing results gracefully (return tool_call without result)
6. Support `result_mode=full` query parameter (only mode for now, but wired up for future `preview`)

**Exit Criteria:** AC-26, AC-27, AC-28 pass

---

### Slice 3: Basic Expand/Collapse UI

**Goal:** Tool calls are expandable, show raw result content

**Deliverables:**

1. Refactor `renderToolCalls` to `ToolCallList` component
2. Implement expand/collapse with chevron
3. Track expanded state per message
4. Collapse on new assistant message
5. Keep Edit/Write always expanded

**Exit Criteria:** AC-1 through AC-8 pass

---

### Slice 4: Diff Rendering

**Goal:** Edit/Write show beautiful diffs

**Deliverables:**

1. Add diff language to highlight.js
2. Implement `renderDiff` function
3. VS Code dark theme styling
4. Full file path header

**Exit Criteria:** AC-9 through AC-13 pass

---

### Slice 5: Other Tool Types

**Goal:** Bash, Read, Glob, Grep render appropriately

**Deliverables:**

1. Import and configure `ansi_up` for ANSI-to-HTML conversion
2. `renderBashResult` with stdout/stderr separation and ANSI color preservation
3. `renderFileContent` for Read
4. `renderFileList` for Glob/Grep
5. `GenericResult` fallback for unknown tools (WebFetch, Task, etc.)

**Exit Criteria:** AC-14 through AC-18 pass

---

### Slice 6: Truncation and Modal

**Goal:** Long outputs truncate with modal expansion

**Deliverables:**

1. Truncation helper with Claude Code thresholds
2. "Show full output" link
3. `FullOutputModal` component
4. Syntax highlighting in modal

**Exit Criteria:** AC-19 through AC-22 pass

---

### Slice 7: Error States and Polish

**Goal:** Failed tools visually distinct, edge cases handled

**Deliverables:**

1. Error state styling (red tint)
2. Muted styling for missing results
3. Test with interrupted sessions
4. Cross-browser testing

**Exit Criteria:** AC-23 through AC-25 pass, feature complete

---

## Open Questions

1. ~~**Exact Claude Code truncation thresholds**~~ — **Resolved:** using reasonable defaults with a note to tune via testing. AC-19 updated.
2. **Performance with 100+ tool calls** — monitor after ship, optimize if needed
3. **Codex support timeline** — when should we prioritize v2? The normalized `kind` contract makes this easier: add Codex normalizers without touching renderers.
4. ~~**Lazy-fetch for large payloads**~~ — **Resolved:** `result_mode` query parameter wired into API contract. Only `full` implemented in v1; `preview` deferred.

---
## Appendix: Research Findings

### Claude Code JSONL Format

Tool calls and results are stored as separate entries:

```json
// Assistant sends tool_use
{"type": "assistant", "message": {"content": [{"type": "tool_use", "id": "toolu_abc", "name": "Edit", "input": {...}}]}}

// Result in separate user entry
{"type": "user", "message": {"content": [{"type": "tool_result", "tool_use_id": "toolu_abc", "content": "Success"}]}, "toolUseResult": {...}}
```

The `toolUseResult` object contains rich structured data varying by tool type.

### Missing Results Statistics

Across 55 sessions with 2,063 tool calls:

- 11 missing results (0.5%)
- Affected tools: Edit (4), Read (2), Bash (1), others

### Interrupt Handling

User interrupts create a separate user message:

```json
{"type": "user", "message": {"content": [{"type": "text", "text": "[Request interrupted by user for tool use]"}]}}
```

Tool results for completed tools are still present; the interrupt message indicates the turn ended early.
1212
plans/agent-spawning.md
Normal file
File diff suppressed because it is too large
382
plans/card-modal-unification.md
Normal file
@@ -0,0 +1,382 @@
# Card/Modal Unification Plan

**Status:** Implemented
**Date:** 2026-02-26
**Author:** Claude + Taylor

---

## Executive Summary

Unify SessionCard and Modal into a single component with an `enlarged` prop, eliminating 165 lines of duplicated code and ensuring feature parity across both views.

---

## 1. Problem Statement

### 1.1 What's Broken

The AMC dashboard displays agent sessions as cards in a grid. Clicking a card opens a "modal" for a larger, focused view. These two views evolved independently, creating:

| Issue | Impact |
|-------|--------|
| **Duplicated rendering logic** | Modal.js reimplemented header, chat, input from scratch (227 lines) |
| **Feature drift** | Card had context usage display; modal didn't. Modal had timestamps; card didn't. |
| **Maintenance burden** | Every card change required parallel modal changes (often forgotten) |
| **Inconsistent UX** | Users see different information depending on view |

### 1.2 Why This Matters

The modal's purpose is simple: **show an enlarged view with more screen space for content**. It should not be a separate implementation with different features. Users clicking a card expect to see *the same thing, bigger* — not a different interface.

### 1.3 Root Cause

The modal was originally built as a separate component because it needed:

- Backdrop blur with click-outside-to-close
- Escape key handling
- Body scroll lock
- Entrance/exit animations

These concerns led developers to copy-paste card internals into the modal rather than compose them.

---
## 2. Goals and Non-Goals

### 2.1 Goals

1. **Zero duplicated rendering code** — Single source of truth for how sessions display
2. **Automatic feature parity** — Any card change propagates to modal without extra work
3. **Preserve modal behaviors** — Backdrop, escape key, animations, scroll lock
4. **Add missing features to both views** — Smart scroll, sending state feedback

### 2.2 Non-Goals

- Changing the visual design of either view
- Adding new features beyond parity + smart scroll + sending state
- Refactoring other components

---

## 3. User Workflows

### 3.1 Current User Journey

```
User sees session cards in grid
│
├─► Card shows: status, agent, cwd, context usage, last 20 messages, input
│
└─► User clicks card
    │
    └─► Modal opens with DIFFERENT layout:
        - Combined status badge (dot inside)
        - No context usage
        - All messages with timestamps
        - Different input implementation
        - Keyboard hints shown
```

### 3.2 Target User Journey

```
User sees session cards in grid
│
├─► Card shows: status, agent, cwd, context usage, last 20 messages, input
│
└─► User clicks card
    │
    └─► Modal opens with SAME card, just bigger:
        - Identical header layout
        - Context usage visible
        - All messages (not limited to 20)
        - Same input components
        - Same everything, more space
```

### 3.3 User Benefits

| Benefit | Rationale |
|---------|-----------|
| **Cognitive consistency** | Same information architecture in both views reduces learning curve |
| **Trust** | No features "hiding" in one view or the other |
| **Predictability** | Click = zoom, not "different interface" |

---
|
||||
|
||||
## 4. Design Decisions
|
||||
|
||||
### 4.1 Architecture: Shared Component with Prop
|
||||
|
||||
**Decision:** Add `enlarged` prop to SessionCard. Modal renders `<SessionCard enlarged={true} />`.
|
||||
|
||||
**Alternatives Considered:**
|
||||
|
||||
| Alternative | Rejected Because |
|
||||
|-------------|------------------|
|
||||
| Modal wraps Card with CSS transform | Breaks layout, accessibility issues, can't change message limit |
|
||||
| Higher-order component | Unnecessary complexity for single boolean difference |
|
||||
| Render props pattern | Overkill, harder to read |
|
||||
| Separate "CardContent" extracted | Still requires prop to control limit, might as well be on SessionCard |
|
||||
|
||||
**Rationale:** A single boolean prop is the simplest solution that achieves all goals. The `enlarged` prop controls exactly two things: container sizing and message limit. Everything else is identical.
|
||||
|
||||
---
|
||||
|
||||
### 4.2 Message Limit: Card 20, Enlarged All
|
||||
|
||||
**Decision:** Card shows last 20 messages. Enlarged view shows all.
|
||||
|
||||
**Rationale:**
|
||||
- Cards in a grid need bounded height for visual consistency
|
||||
- 20 messages is enough context without overwhelming the card
|
||||
- Enlarged view exists specifically to see more — no artificial limit makes sense
|
||||
- Implementation: `limit` prop on ChatMessages (20 default, null for unlimited)
|
||||
|
||||
---
|
||||
|
||||
### 4.3 Header Layout: Keep Card's Multi-Row Style

**Decision:** Use the card's multi-row header layout for both views.

**Modal had:** Single row with combined status badge (dot inside badge)
**Card had:** Multi-row with separate dot, status badge, agent badge, cwd badge, context usage

**Rationale:**

- Card layout shows more information (context usage was missing from modal)
- Multi-row handles overflow gracefully with `flex-wrap`
- Consistent with the "modal = bigger card" philosophy

---

### 4.4 Spacing: Keep Tighter (Card Style)

**Decision:** Use card's tighter spacing (`px-4 py-3`, `space-y-2.5`) for both views.

**Modal had:** Roomier spacing (`px-5 py-4`, `space-y-4`)

**Rationale:**

- Tighter spacing is more information-dense
- Enlarged view gains space from larger container, not wider margins
- Consistent visual rhythm between views

---

### 4.5 Empty State Text: "No messages to show"

**Decision:** Standardize on "No messages to show" (neither original).

**Card had:** "No messages yet"
**Modal had:** "No conversation messages"

**Rationale:** "No messages to show" is neutral and accurate — doesn't imply timing ("yet") or specific terminology ("conversation").

---
## 5. Implementation Details

### 5.1 SessionCard.js Changes

```
BEFORE: SessionCard({ session, onClick, conversation, onFetchConversation, onRespond, onDismiss })
AFTER:  SessionCard({ session, onClick, conversation, onFetchConversation, onRespond, onDismiss, enlarged = false })
```

**New behaviors controlled by `enlarged`:**

| Aspect | `enlarged=false` (card) | `enlarged=true` (modal) |
|--------|-------------------------|-------------------------|
| Container classes | `h-[850px] w-[600px] cursor-pointer hover:...` | `max-w-5xl max-h-[90vh]` |
| Click handler | `onClick(session)` | `undefined` (no-op) |
| Message limit | 20 | null (all) |

**New feature: Smart scroll tracking**

```js
// Track whether the user is at the bottom of the message list
const wasAtBottomRef = useRef(true);

// On scroll, update tracking (el is the scrollable messages container)
const handleScroll = () => {
  wasAtBottomRef.current = el.scrollHeight - el.scrollTop - el.clientHeight < 50;
};

// On new messages, only auto-scroll if the user was already at the bottom
if (hasNewMessages && wasAtBottomRef.current) {
  el.scrollTop = el.scrollHeight;
}
```

**Rationale:** Users reading history shouldn't be yanked to bottom when new messages arrive. Only auto-scroll if they were already at the bottom (watching live updates).

---
### 5.2 Modal.js Changes

**Before:** 227 lines reimplementing header, chat, input, scroll, state management

**After:** 62 lines — backdrop wrapper only

```js
export function Modal({ session, conversations, onClose, onRespond, onFetchConversation, onDismiss }) {
  // Closing animation state
  // Body scroll lock
  // Escape key handler

  return html`
    <div class="backdrop...">
      <${SessionCard}
        session=${session}
        conversation=${conversations[session.session_id] || []}
        onFetchConversation=${onFetchConversation}
        onRespond=${onRespond}
        onDismiss=${onDismiss}
        onClick=${() => {}}
        enlarged=${true}
      />
    </div>
  `;
}
```

**Preserved behaviors:**

- Backdrop blur (`bg-[#02050d]/84 backdrop-blur-sm`)
- Click outside to close
- Escape key handler
- Body scroll lock (`document.body.style.overflow = 'hidden'`)
- Entrance/exit animations (CSS classes)

---

### 5.3 ChatMessages.js Changes

```
BEFORE: ChatMessages({ messages, status })
AFTER:  ChatMessages({ messages, status, limit = 20 })
```

**Logic change:**

```js
// Before: always slice to 20
const displayMessages = allDisplayMessages.slice(-20);

// After: respect limit prop (null means unlimited)
const displayMessages = limit ? allDisplayMessages.slice(-limit) : allDisplayMessages;
```

---
### 5.4 SimpleInput.js / QuestionBlock.js Changes

**New feature: Sending state feedback**

```js
const [sending, setSending] = useState(false);

const handleSubmit = async (e) => {
  if (sending) return;      // guard against double-submits
  setSending(true);
  try {
    await onRespond(...);
  } finally {
    setSending(false);      // always re-enable, even on error
  }
};

// In render:
<button disabled=${sending}>
  ${sending ? 'Sending...' : 'Send'}
</button>
```

**Rationale:** Users need feedback that their message is being sent. Without this, they might click multiple times or think the UI is broken.

---

### 5.5 App.js Changes

**Removed (unused after refactor):**

- `conversationLoading` state — was only passed to Modal
- `refreshConversation` callback — was only used by Modal's custom send handler

**Modified:**

- `respondToSession` now refreshes the conversation immediately after a successful send
- Modal receives the same props as SessionCard (onRespond, onFetchConversation, onDismiss)

---
## 6. Dependency Graph

```
App.js
│
├─► SessionCard (in grid)
│     ├─► ChatMessages (limit=20)
│     │     └─► MessageBubble
│     ├─► QuestionBlock (with sending state)
│     │     └─► OptionButton
│     └─► SimpleInput (with sending state)
│
└─► Modal (backdrop wrapper)
      └─► SessionCard (enlarged=true)
            ├─► ChatMessages (limit=null)
            │     └─► MessageBubble
            ├─► QuestionBlock (with sending state)
            │     └─► OptionButton
            └─► SimpleInput (with sending state)
```

**Key insight:** Modal no longer has its own rendering tree. It delegates entirely to SessionCard.

---

## 7. Metrics

| Metric | Before | After | Change |
|--------|--------|-------|--------|
| Modal.js lines | 227 | 62 | -73% |
| Total duplicated code | ~180 lines | 0 | -100% |
| Features requiring dual maintenance | All | None | -100% |
| Prop surface area (Modal) | 6 custom | 6, same as card | Aligned |

---

## 8. Verification Checklist

- [x] Card displays: status dot, status badge, agent badge, cwd, context usage, messages, input
- [x] Modal displays: identical to card, just larger
- [x] Card limits to 20 messages
- [x] Modal shows all messages
- [x] Smart scroll works in both views
- [x] "Sending..." feedback works in both views
- [x] Escape closes modal
- [x] Click outside closes modal
- [x] Entrance/exit animations work
- [x] Body scroll locked when modal open

---

## 9. Future Considerations

### 9.1 Potential Enhancements

| Enhancement | Rationale | Blocked By |
|-------------|-----------|------------|
| Keyboard navigation in card grid | Accessibility | None |
| Resize modal dynamically | User preference | None |
| Pin modal to side (split view) | Power user workflow | Design decision needed |

### 9.2 Maintenance Notes

- **Any SessionCard change** automatically applies to modal view
- **To add modal-only behavior**: Check `enlarged` prop (but avoid this — keep views identical)
- **To change message limit**: Modify the `limit` prop value in SessionCard's ChatMessages call

---

## 10. Lessons Learned

1. **Composition > Duplication** — When two UIs show the same data, compose them from shared components
2. **Props for variations** — A single boolean prop is often sufficient for "same thing, different context"
3. **Identify the actual differences** — Modal needed only: backdrop, escape key, scroll lock, animations. Everything else was false complexity.
4. **Feature drift is inevitable** — Duplicated code guarantees divergence over time. Only shared code stays in sync.
96 plans/input-history.md Normal file
@@ -0,0 +1,96 @@
# Input History (Up/Down Arrow)

## Summary

Add shell-style up/down arrow navigation through past messages in SimpleInput. History is derived from the conversation data already parsed from session logs -- no new state management, no server changes.

## How It Works Today

1. Server parses JSONL session logs, extracts user messages with `role: "user"` (`conversation.py:57-66`)
2. App.js stores parsed conversations in `conversations` state, refreshed via SSE on `conversation_mtime_ns` change
3. SessionCard receives `conversation` as a prop but does **not** pass it to SimpleInput
4. SimpleInput has no awareness of past messages

## Step 1: Pipe Conversation to SimpleInput

Pass the conversation array from SessionCard into SimpleInput so it can derive history.

- `SessionCard.js:165-169` -- add `conversation` prop to SimpleInput
- Same for the QuestionBlock path if freeform input is used there (line 162) -- skip for now, QuestionBlock is option-based

**Files**: `dashboard/components/SessionCard.js`

## Step 2: Derive User Message History

Inside SimpleInput, filter conversation to user messages only.

```js
const userHistory = useMemo(
  () => (conversation || []).filter(m => m.role === 'user').map(m => m.content),
  [conversation]
);
```

This updates automatically whenever the session log changes (SSE triggers conversation refresh, new prop flows down).

**Files**: `dashboard/components/SimpleInput.js`

## Step 3: History Navigation State

Add refs for tracking position in history and preserving the draft.

```js
const historyIndexRef = useRef(-1);  // -1 = not browsing
const draftRef = useRef('');         // saves in-progress text before browsing
```

Use refs (not state) because index changes don't need re-renders -- only `setText` triggers the visual update.

**Files**: `dashboard/components/SimpleInput.js`

## Step 4: ArrowUp/ArrowDown Keybinding

In the `onKeyDown` handler (after the autocomplete block, before Enter-to-submit), add history navigation:

- **ArrowUp**: only when autocomplete is closed AND cursor is at position 0 (prevents hijacking multiline cursor movement). On first press, save current text to `draftRef`. Walk backward through `userHistory`. Call `setText()` with the history entry.
- **ArrowDown**: walk forward through history. If past the newest entry, restore `draftRef` and reset index to -1.
- **Reset on submit**: set `historyIndexRef.current = -1` in `handleSubmit` after successful send.
- **Reset on manual edit**: in `onInput`, reset `historyIndexRef.current = -1` so typing after browsing exits history mode.

### Cursor position check

```js
const atStart = e.target.selectionStart === 0 && e.target.selectionEnd === 0;
```

Only intercept ArrowUp when `atStart` is true. This lets multiline text cursor movement work normally. ArrowDown can use similar logic (check if cursor is at end of text) or always navigate history when `historyIndexRef.current !== -1` (already browsing).

**Files**: `dashboard/components/SimpleInput.js`
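The navigation rules above can be sketched as a pure helper. This is an illustrative sketch, not the plan's actual handler: `stepHistory` is a hypothetical name, and the real `onKeyDown` code would hold `index`/`draft` in the refs from Step 3 and call `setText` with the returned text.

```javascript
// Hypothetical pure helper for the ArrowUp/ArrowDown rules above.
// history: user messages oldest-to-newest; index: -1 = not browsing.
function stepHistory(history, index, direction, draft, currentText) {
  if (direction === 'up') {
    if (history.length === 0) return { index, text: currentText, draft };
    const savedDraft = index === -1 ? currentText : draft; // first press saves the draft
    const next = index === -1 ? history.length - 1 : Math.max(0, index - 1);
    return { index: next, text: history[next], draft: savedDraft };
  }
  // direction === 'down'
  if (index === -1) return { index, text: currentText, draft }; // not browsing
  if (index >= history.length - 1) {
    return { index: -1, text: draft, draft: '' };               // past newest: restore draft
  }
  return { index: index + 1, text: history[index + 1], draft };
}
```

This shape makes the Test Cases table below Step 5 directly checkable without a DOM.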
## Step 5: Modal Parity

The Modal (`Modal.js:71`) also renders SimpleInput with `onRespond`. Verify it passes `conversation` through. The same SessionCard is used in enlarged mode, so this should work automatically if Step 1 is done correctly.

**Files**: `dashboard/components/Modal.js` (verify, likely no change needed)

## Non-Goals

- No localStorage persistence -- history comes from session logs, which survive across page reloads
- No server changes -- conversation parsing already extracts what we need
- No new API endpoints
- No changes to QuestionBlock (option-based, not free-text history)

## Test Cases

| Scenario | Expected |
|----------|----------|
| Press up with empty input | Fills with most recent user message |
| Press up multiple times | Walks backward through user messages |
| Press down after browsing up | Walks forward; past newest restores draft |
| Press up with text in input | Saves text as draft, shows history |
| Press down past end | Restores saved draft |
| Type after browsing | Exits history mode (index resets) |
| Submit after browsing | Sends displayed text, resets index |
| Up arrow in multiline text (cursor not at pos 0) | Normal cursor movement, no history |
| New message arrives via SSE | userHistory updates, no index disruption |
| Session with no prior messages | Up arrow does nothing |
51 plans/model-selection.md Normal file
@@ -0,0 +1,51 @@
# Model Selection & Display

## Summary

Add model visibility and control to the AMC dashboard. Users can see which model each agent is running, pick a model when spawning, and switch models mid-session.

## Models

| Label | Value sent to Claude Code |
|-------|--------------------------|
| Opus 4.6 | `opus` |
| Opus 4.5 | `claude-opus-4-5-20251101` |
| Sonnet 4.6 | `sonnet` |
| Haiku | `haiku` |

## Step 1: Display Current Model

Surface `context_usage.model` in `SessionCard.js`.

- Data already extracted by `parsing.py` (line 202) from conversation JSONL
- Already available via `/api/state` in `context_usage.model`
- Add model name formatter: `claude-opus-4-5-20251101` -> `Opus 4.5`
- Show in SessionCard (near agent badge or context usage area)
- Shows `null` until first assistant message

**Files**: `dashboard/components/SessionCard.js`
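A minimal sketch of the formatter bullet above. `formatModelName` is a hypothetical name, and the regex assumes full model IDs follow the `claude-{family}-{major}-{minor}-{date}` pattern seen in the Models table; short aliases (`opus`, `sonnet`, `haiku`) are simply capitalized.

```javascript
// Hypothetical formatter: "claude-opus-4-5-20251101" -> "Opus 4.5".
// Short aliases ("opus", "sonnet", "haiku") are just capitalized.
function formatModelName(model) {
  if (!model) return null; // no assistant message yet
  const m = model.match(/^claude-([a-z]+)-(\d+)-(\d+)-\d+$/);
  if (m) {
    const family = m[1][0].toUpperCase() + m[1].slice(1);
    return `${family} ${m[2]}.${m[3]}`;
  }
  return model[0].toUpperCase() + model.slice(1);
}
```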
## Step 2: Model Picker at Spawn

Add model dropdown to `SpawnModal.js`. Pass to spawn API, which appends `--model <value>` to the claude command.

- Extend `/api/spawn` to accept optional `model` param
- Validate against allowed model list
- Add `--model {model}` to command in `AGENT_COMMANDS`
- Default: no flag (uses Claude Code's default)

**Files**: `dashboard/components/SpawnModal.js`, `amc_server/mixins/spawn.py`

## Step 3: Mid-Session Model Switch

Dropdown on SessionCard to change model for running sessions via Zellij.

- Send `/model <value>` to the agent's Zellij pane:
  ```bash
  zellij -s {session} action write-chars "/model {value}" --pane-id {pane}
  zellij -s {session} action write 10 --pane-id {pane}
  ```
- New endpoint: `POST /api/session/{id}/model` with `{"model": "opus"}`
- Only works when agent is idle (waiting for input). If mid-turn, command queues and applies after.

**Files**: `dashboard/components/SessionCard.js`, `amc_server/mixins/state.py` (or new mixin)
533 plans/subagent-visibility.md Normal file
@@ -0,0 +1,533 @@
# Subagent & Agent Team Visibility for AMC

> **Status**: Draft
> **Last Updated**: 2026-03-02

## Summary

Add visibility into Claude Code subagents (Task tool spawns and team members) within AMC session cards. A pill button shows active agent count; clicking opens a popover with names, status, and stats. Claude-only (Codex does not support subagents).

---

## User Workflow

1. User views a session card in AMC
2. Session status area shows: `[●] Working 2m 15s · 42k tokens 32% ctx [3 agents]`
3. User clicks "3 agents" button
4. Popover opens showing:
   ```
   Explore-a250de   ● running     12m   42,000 tokens
   code-reviewer    ○ completed    3m   18,500 tokens
   action-wirer     ○ completed    5m   23,500 tokens
   ```
5. Popover auto-updates every 2s while open
6. Button hidden when session has no subagents

---

## Acceptance Criteria

### Discovery

- **AC-1**: Subagent JSONL files discovered for Claude sessions at `{claude_projects}/{encoded_project_dir}/{session_id}/subagents/agent-*.jsonl`
- **AC-2**: Team members discovered from same location (team spawning uses Task tool, stores in subagents dir)
- **AC-3**: Codex sessions do not show subagent button (Codex does not support subagents)

### Status Detection

- **AC-4**: Subagent is "running" if parent session is not dead AND last assistant entry has `stop_reason != "end_turn"`
- **AC-5**: Subagent is "completed" if last assistant entry has `stop_reason == "end_turn"` OR parent session is dead

### Name Resolution

- **AC-6**: Team member names extracted from agentId format `{name}@{team_name}` (O(1) string split)
- **AC-7**: Non-team subagent names generated as `agent-{agentId_prefix}` (no parent session parsing required)

### Stats Extraction

- **AC-8**: Duration = first entry timestamp to last entry timestamp (or server time if running)
- **AC-9**: Tokens = sum of `input_tokens + output_tokens` from all assistant entries (excludes cache tokens)

### API

- **AC-10**: `/api/state` includes `subagent_count` and `subagent_running_count` for each Claude session
- **AC-11**: New endpoint `/api/sessions/{id}/subagents` returns full subagent list with name, status, duration_ms, tokens
- **AC-12**: Subagent endpoint supports session_id path param; returns 404 if session not found

### UI

- **AC-13**: Context usage displays as plain text (remove badge styling)
- **AC-14**: Agent count button appears as bordered pill to the right of context text
- **AC-15**: Button hidden when `subagent_count == 0`
- **AC-16**: Button shows running indicator: "3 agents" when none running, "3 agents (1 running)" when some running
- **AC-17**: Clicking button opens popover anchored to button
- **AC-18**: Popover shows list: name, status indicator, duration, token count per row
- **AC-19**: Running agents show filled indicator (●), completed show empty (○)
- **AC-20**: Popover polls `/api/sessions/{id}/subagents` every 2s while open
- **AC-21**: Popover closes on outside click or Escape key
- **AC-22**: Subagent rows are display-only (no click action in v1)

---

## Architecture

### Why This Structure

| Decision | Rationale | Fulfills |
|----------|-----------|----------|
| Aggregate counts in `/api/state` + detail endpoint | Minimizes payload size; hash stability (counts change less than durations) | AC-10, AC-11 |
| Claude-only | Codex lacks subagent infrastructure | AC-3 |
| Name from agentId pattern | Avoids expensive parent session parsing; team names encoded in agentId | AC-6, AC-7 |
| Input+output tokens only | Matches "work done" mental model; simpler than cache tracking | AC-9 |
| Auto-poll in popover | Real-time feel consistent with session card updates | AC-20 |
| Hide button when empty | Reduces visual noise for sessions without agents | AC-15 |

### Data Flow

```
Backend (Python)
────────────────
_collect_sessions()
 │
 ├── For each Claude session:
 │     └── _count_subagents(session_id, project_dir)
 │           ├── glob subagents/agent-*.jsonl
 │           ├── count files, check running status
 │           └── return (count, running_count)
 │
 └── Attach subagent_count, subagent_running_count

_serve_subagents(session_id)
 ├── _get_claude_session_dir(session_id, project_dir)
 ├── glob subagents/agent-*.jsonl
 ├── For each file:
 │     ├── Parse name from agentId
 │     ├── Determine status from stop_reason
 │     ├── Calculate duration from timestamps
 │     └── Sum tokens from assistant usage
 └── Return JSON list

Frontend (Preact)
─────────────────
SessionCard
 │
 ├── Session Status Area:
 │     ├── AgentActivityIndicator (left)
 │     ├── Context text (center-right, plain)
 │     └── SubagentButton (far right, if count > 0)
 │
 └── SubagentButton
       ├── Shows "{count} agents" or "{count} ({running})"
       ├── onClick: opens SubagentPopover
       └── SubagentPopover
             ├── Polls /api/sessions/{id}/subagents
             ├── Renders list with status indicators
             └── Closes on outside click or Escape
```

### File Changes

| File | Change | ACs |
|------|--------|-----|
| `amc_server/mixins/subagent.py` | New mixin for subagent discovery and stats | AC-1,2,4-9 |
| `amc_server/mixins/state.py` | Call subagent mixin, attach counts to session | AC-10 |
| `amc_server/mixins/http.py` | Add route `/api/sessions/{id}/subagents` | AC-11,12 |
| `amc_server/handler.py` | Add SubagentMixin to handler class | - |
| `dashboard/components/SessionCard.js` | Update status area layout | AC-13,14 |
| `dashboard/components/SubagentButton.js` | New component for button + popover | AC-15-22 |
| `dashboard/utils/api.js` | Add `fetchSubagents(sessionId)` function | AC-20 |

---
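The `fetchSubagents` helper listed in the File Changes table is small enough to sketch here. This is an assumed shape, not the final code: the URL builder is split out so it can be tested, the fetch implementation is injectable, and failures resolve to `null` so the popover can keep its last-known data. In `api.js` both functions would be `export`ed.

```javascript
// dashboard/utils/api.js (sketch; exported in practice)
// Build the endpoint URL separately so it is easy to test.
function subagentsUrl(sessionId) {
  return `/api/sessions/${encodeURIComponent(sessionId)}/subagents`;
}

// Fetch the subagent list for one session; resolves to null on any
// failure so callers can keep their last-known data.
async function fetchSubagents(sessionId, fetchImpl = globalThis.fetch) {
  try {
    const res = await fetchImpl(subagentsUrl(sessionId));
    if (!res.ok) return null;
    return await res.json();
  } catch {
    return null;
  }
}
```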
## Implementation Specs

### IMP-1: SubagentMixin (Python)

**Fulfills:** AC-1, AC-2, AC-4, AC-5, AC-6, AC-7, AC-8, AC-9

```python
# amc_server/mixins/subagent.py
# CLAUDE_PROJECTS_DIR is the existing constant for the Claude projects
# directory (import not shown); _read_jsonl_tail_entries is a shared helper.
from datetime import datetime, timezone
from pathlib import Path


class SubagentMixin:
    def _get_subagent_counts(self, session_id: str, project_dir: str) -> tuple[int, int]:
        """Return (total_count, running_count) for a Claude session."""
        subagents_dir = self._get_subagents_dir(session_id, project_dir)
        if not subagents_dir or not subagents_dir.exists():
            return (0, 0)

        total = 0
        running = 0
        for jsonl_file in subagents_dir.glob("agent-*.jsonl"):
            total += 1
            if self._is_subagent_running(jsonl_file):
                running += 1
        return (total, running)

    def _get_subagents_dir(self, session_id: str, project_dir: str) -> Path | None:
        """Construct path to subagents directory."""
        if not project_dir:
            return None
        encoded_dir = project_dir.replace("/", "-")
        if not encoded_dir.startswith("-"):
            encoded_dir = "-" + encoded_dir
        return CLAUDE_PROJECTS_DIR / encoded_dir / session_id / "subagents"

    def _is_subagent_running(self, jsonl_file: Path) -> bool:
        """Check if subagent is still running based on last assistant stop_reason."""
        try:
            # Read last few lines to find last assistant entry
            entries = self._read_jsonl_tail_entries(jsonl_file, max_lines=20)
            for entry in reversed(entries):
                if entry.get("type") == "assistant":
                    stop_reason = entry.get("message", {}).get("stop_reason")
                    return stop_reason != "end_turn"
            return True  # No assistant entries yet = still starting
        except Exception:
            return False

    def _get_subagent_list(self, session_id: str, project_dir: str, parent_is_dead: bool) -> list[dict]:
        """Return full subagent list with stats."""
        subagents_dir = self._get_subagents_dir(session_id, project_dir)
        if not subagents_dir or not subagents_dir.exists():
            return []

        result = []
        for jsonl_file in subagents_dir.glob("agent-*.jsonl"):
            subagent = self._parse_subagent(jsonl_file, parent_is_dead)
            if subagent:
                result.append(subagent)

        # Sort: running first, then by name
        result.sort(key=lambda s: (0 if s["status"] == "running" else 1, s["name"]))
        return result

    def _parse_subagent(self, jsonl_file: Path, parent_is_dead: bool) -> dict | None:
        """Parse a single subagent JSONL file."""
        try:
            entries = self._read_jsonl_tail_entries(jsonl_file, max_lines=500, max_bytes=512 * 1024)
            if not entries:
                return None

            # Get agentId from first entry
            first_entry = entries[0]
            agent_id = first_entry.get("agentId", "")

            # Resolve name
            name = self._resolve_subagent_name(agent_id, jsonl_file)

            # Determine status
            is_running = False
            if not parent_is_dead:
                for entry in reversed(entries):
                    if entry.get("type") == "assistant":
                        stop_reason = entry.get("message", {}).get("stop_reason")
                        is_running = stop_reason != "end_turn"
                        break
            status = "running" if is_running else "completed"

            # Calculate duration
            first_ts = first_entry.get("timestamp")
            last_ts = entries[-1].get("timestamp")
            duration_ms = self._calculate_duration_ms(first_ts, last_ts, is_running)

            # Sum tokens
            tokens = self._sum_assistant_tokens(entries)

            return {
                "name": name,
                "status": status,
                "duration_ms": duration_ms,
                "tokens": tokens,
            }
        except Exception:
            return None

    def _resolve_subagent_name(self, agent_id: str, jsonl_file: Path) -> str:
        """Extract display name from agentId or filename."""
        # Team members: "reviewer-wcja@surgical-sync" -> "reviewer-wcja"
        if "@" in agent_id:
            return agent_id.split("@")[0]

        # Regular subagents: use prefix from agentId
        # agent_id like "a250dec6325c589be" -> "a250de"
        prefix = agent_id[:6] if agent_id else "agent"

        # Try to get subagent_type from filename if it contains it
        # Filename: agent-acompact-b857538cac0d5172.jsonl -> might indicate "compact"
        # For now, use generic fallback
        return f"agent-{prefix}"

    def _calculate_duration_ms(self, first_ts: str | None, last_ts: str | None, is_running: bool) -> int:
        """Calculate duration in milliseconds."""
        if not first_ts:
            return 0
        try:
            first = datetime.fromisoformat(first_ts.replace("Z", "+00:00"))
            if is_running:
                end = datetime.now(timezone.utc)
            elif last_ts:
                end = datetime.fromisoformat(last_ts.replace("Z", "+00:00"))
            else:
                return 0
            return max(0, int((end - first).total_seconds() * 1000))
        except Exception:
            return 0

    def _sum_assistant_tokens(self, entries: list[dict]) -> int:
        """Sum input_tokens + output_tokens from all assistant entries."""
        total = 0
        for entry in entries:
            if entry.get("type") != "assistant":
                continue
            usage = entry.get("message", {}).get("usage", {})
            input_tok = usage.get("input_tokens", 0) or 0
            output_tok = usage.get("output_tokens", 0) or 0
            total += input_tok + output_tok
        return total
```
|
||||
|
||||
### IMP-2: State Integration (Python)
|
||||
|
||||
**Fulfills:** AC-10
|
||||
|
||||
```python
|
||||
# In amc_server/mixins/state.py, within _collect_sessions():
|
||||
|
||||
# After computing is_dead, add:
|
||||
if data.get("agent") == "claude":
|
||||
subagent_count, subagent_running = self._get_subagent_counts(
|
||||
data.get("session_id", ""),
|
||||
data.get("project_dir", "")
|
||||
)
|
||||
if subagent_count > 0:
|
||||
data["subagent_count"] = subagent_count
|
||||
data["subagent_running_count"] = subagent_running
|
||||
```
|
||||
|
||||
### IMP-3: Subagents Endpoint (Python)

**Fulfills:** AC-11, AC-12

```python
# In amc_server/mixins/http.py, add route handling:

def _route_request(self):
    # ... existing routes ...

    # /api/sessions/{id}/subagents
    subagent_match = re.match(r"^/api/sessions/([^/]+)/subagents$", self.path)
    if subagent_match:
        session_id = subagent_match.group(1)
        self._serve_subagents(session_id)
        return

def _serve_subagents(self, session_id):
    """Serve the subagent list for a specific session."""
    # Find the session to get project_dir and is_dead
    session_file = SESSIONS_DIR / f"{session_id}.json"
    if not session_file.exists():
        self._send_json(404, {"error": "Session not found"})
        return

    try:
        session_data = json.loads(session_file.read_text())
    except (json.JSONDecodeError, OSError):
        self._send_json(404, {"error": "Session not found"})
        return

    if session_data.get("agent") != "claude":
        self._send_json(200, {"subagents": []})
        return

    parent_is_dead = session_data.get("is_dead", False)
    subagents = self._get_subagent_list(
        session_id,
        session_data.get("project_dir", ""),
        parent_is_dead
    )
    self._send_json(200, {"subagents": subagents})
```

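The route regex is the load-bearing piece of the dispatch above; a quick standalone check of how it matches (sketch only):

```python
import re

PATTERN = r"^/api/sessions/([^/]+)/subagents$"

m = re.match(PATTERN, "/api/sessions/abc-123/subagents")
print(m.group(1))  # abc-123

# Extra path segments or an empty session ID do not match:
print(re.match(PATTERN, "/api/sessions/abc/123/subagents"))  # None
print(re.match(PATTERN, "/api/sessions//subagents"))         # None
```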
### IMP-4: SubagentButton Component (JavaScript)

**Fulfills:** AC-14, AC-15, AC-16, AC-17, AC-18, AC-19, AC-20, AC-21, AC-22

```javascript
// dashboard/components/SubagentButton.js

import { html, useState, useEffect, useRef } from '../lib/preact.js';
import { fetchSubagents } from '../utils/api.js';

export function SubagentButton({ sessionId, count, runningCount }) {
  const [isOpen, setIsOpen] = useState(false);
  const [subagents, setSubagents] = useState([]);
  const buttonRef = useRef(null);
  const popoverRef = useRef(null);

  // Format button label
  const label = runningCount > 0
    ? `${count} agents (${runningCount} running)`
    : `${count} agents`;

  // Poll while open
  useEffect(() => {
    if (!isOpen) return;

    const fetchData = async () => {
      const data = await fetchSubagents(sessionId);
      if (data?.subagents) {
        setSubagents(data.subagents);
      }
    };

    fetchData();
    const interval = setInterval(fetchData, 2000);
    return () => clearInterval(interval);
  }, [isOpen, sessionId]);

  // Close on outside click or Escape
  useEffect(() => {
    if (!isOpen) return;

    const handleClickOutside = (e) => {
      if (popoverRef.current && !popoverRef.current.contains(e.target) &&
          buttonRef.current && !buttonRef.current.contains(e.target)) {
        setIsOpen(false);
      }
    };

    const handleEscape = (e) => {
      if (e.key === 'Escape') setIsOpen(false);
    };

    document.addEventListener('mousedown', handleClickOutside);
    document.addEventListener('keydown', handleEscape);
    return () => {
      document.removeEventListener('mousedown', handleClickOutside);
      document.removeEventListener('keydown', handleEscape);
    };
  }, [isOpen]);

  const formatDuration = (ms) => {
    const sec = Math.floor(ms / 1000);
    if (sec < 60) return `${sec}s`;
    const min = Math.floor(sec / 60);
    return `${min}m`;
  };

  const formatTokens = (count) => {
    if (count >= 1000) return `${(count / 1000).toFixed(1)}k`;
    return String(count);
  };

  return html`
    <div class="relative">
      <button
        ref=${buttonRef}
        onClick=${() => setIsOpen(!isOpen)}
        class="rounded-lg border border-selection/80 bg-bg/45 px-2.5 py-1 font-mono text-label text-dim hover:border-starting/50 hover:text-bright transition-colors"
      >
        ${label}
      </button>

      ${isOpen && html`
        <div
          ref=${popoverRef}
          class="absolute right-0 top-full mt-2 z-50 min-w-[280px] rounded-lg border border-selection/80 bg-surface shadow-lg"
        >
          <div class="p-2">
            ${subagents.length === 0 ? html`
              <div class="text-center text-dim text-sm py-4">Loading...</div>
            ` : subagents.map(agent => html`
              <div class="flex items-center gap-3 px-3 py-2 rounded hover:bg-bg/40">
                <span class="w-2 h-2 rounded-full ${agent.status === 'running' ? 'bg-active' : 'border border-dim'}"></span>
                <span class="flex-1 font-mono text-sm text-bright truncate">${agent.name}</span>
                <span class="font-mono text-label text-dim">${formatDuration(agent.duration_ms)}</span>
                <span class="font-mono text-label text-dim">${formatTokens(agent.tokens)}</span>
              </div>
            `)}
          </div>
        </div>
      `}
    </div>
  `;
}
```

### IMP-5: SessionCard Status Area Update (JavaScript)

**Fulfills:** AC-13, AC-14, AC-15

```javascript
// In dashboard/components/SessionCard.js, update the Session Status Area:

// Replace the contextUsage badge with plain text + SubagentButton

<!-- Session Status Area -->
<div class="flex items-center justify-between gap-3 px-4 py-2 border-b border-selection/50 bg-bg/60">
  <${AgentActivityIndicator} session=${session} />
  <div class="flex items-center gap-3">
    ${contextUsage && html`
      <span class="font-mono text-label text-dim" title=${contextUsage.title}>
        ${contextUsage.headline}
      </span>
    `}
    ${session.subagent_count > 0 && session.agent === 'claude' && html`
      <${SubagentButton}
        sessionId=${session.session_id}
        count=${session.subagent_count}
        runningCount=${session.subagent_running_count || 0}
      />
    `}
  </div>
</div>
```

### IMP-6: API Function (JavaScript)

**Fulfills:** AC-20

```javascript
// In dashboard/utils/api.js, add:

export async function fetchSubagents(sessionId) {
  try {
    const response = await fetch(`/api/sessions/${sessionId}/subagents`);
    if (!response.ok) return null;
    return await response.json();
  } catch (e) {
    console.error('Failed to fetch subagents:', e);
    return null;
  }
}
```

---

## Rollout Slices

### Slice 1: Backend Discovery (AC-1, AC-2, AC-4, AC-5, AC-6, AC-7, AC-8, AC-9, AC-10)
- Create `amc_server/mixins/subagent.py` with discovery and stats logic
- Integrate into `state.py` to add counts to the session payload
- Unit tests for name resolution, status detection, token summing

### Slice 2: Backend Endpoint (AC-11, AC-12)
- Add `/api/sessions/{id}/subagents` route
- Return 404 for missing sessions, empty list for Codex
- Integration test with real session data

### Slice 3: Frontend Button (AC-13, AC-14, AC-15, AC-16)
- Update SessionCard status area layout
- Create SubagentButton component with label logic
- Test: button shows when count > 0, hidden when 0

### Slice 4: Frontend Popover (AC-17, AC-18, AC-19, AC-20, AC-21, AC-22)
- Add popover with polling
- Style running/completed indicators
- Test: popover opens, polls, closes on outside click/Escape
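Slice 2's endpoint contract can be pinned down with the response shapes sketched below. The subagent entries are illustrative; the field names `name`, `status`, `duration_ms`, and `tokens` are the ones the IMP-4 popover reads, but the example values are made up:

```python
import json

# 404 body when the session file is missing
not_found = {"error": "Session not found"}

# Codex sessions always return an empty list
codex = {"subagents": []}

# Claude sessions return one entry per discovered subagent (values hypothetical)
claude = {"subagents": [
    {"name": "explorer", "status": "running", "duration_ms": 83000, "tokens": 12400},
    {"name": "reviewer", "status": "completed", "duration_ms": 41000, "tokens": 3100},
]}

# Every payload must round-trip through JSON unchanged
for payload in (not_found, codex, claude):
    assert json.loads(json.dumps(payload)) == payload
```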
New binary files (contents not shown):
  tests/__pycache__/test_context.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_control.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_conversation.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_discovery.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_hook.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_http.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_parsing.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_skills.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_spawn.cpython-313-pytest-9.0.2.pyc
  tests/__pycache__/test_state.cpython-313-pytest-9.0.2.pyc
  tests/e2e/__pycache__/__init__.cpython-313.pyc

New empty file:
  tests/e2e/__init__.py (0 lines)

New file: tests/e2e/test_autocomplete_workflow.js (614 lines)
@@ -0,0 +1,614 @@
/**
 * E2E integration tests for the autocomplete workflow.
 *
 * Validates the complete flow from typing a trigger character through
 * skill selection and insertion, using a mock HTTP server that serves
 * both dashboard files and the /api/skills endpoint.
 *
 * Test scenarios from bd-3cc:
 * - Server serves /api/skills correctly
 * - Dashboard loads skills on session open
 * - Trigger character shows dropdown
 * - Keyboard navigation works
 * - Selection inserts skill
 * - Edge cases (wrong trigger, empty skills, backspace, etc.)
 */

import { describe, it, before, after } from 'node:test';
import assert from 'node:assert/strict';
import { createServer } from 'node:http';
import { getTriggerInfo, filteredSkills } from '../../dashboard/utils/autocomplete.js';

// -- Mock server for /api/skills --

const CLAUDE_SKILLS_RESPONSE = {
  trigger: '/',
  skills: [
    { name: 'commit', description: 'Create a git commit' },
    { name: 'comment', description: 'Add a comment' },
    { name: 'review-pr', description: 'Review a pull request' },
    { name: 'help', description: 'Get help' },
    { name: 'init', description: 'Initialize project' },
  ],
};

const CODEX_SKILLS_RESPONSE = {
  trigger: '$',
  skills: [
    { name: 'lint', description: 'Lint code' },
    { name: 'deploy', description: 'Deploy to prod' },
    { name: 'test', description: 'Run tests' },
  ],
};

const EMPTY_SKILLS_RESPONSE = {
  trigger: '/',
  skills: [],
};

let server;
let serverUrl;

function startMockServer() {
  return new Promise((resolve) => {
    server = createServer((req, res) => {
      const url = new URL(req.url, `http://${req.headers.host}`);

      if (url.pathname === '/api/skills') {
        const agent = url.searchParams.get('agent') || 'claude';
        res.writeHead(200, { 'Content-Type': 'application/json', 'Access-Control-Allow-Origin': '*' });

        if (agent === 'codex') {
          res.end(JSON.stringify(CODEX_SKILLS_RESPONSE));
        } else if (agent === 'empty') {
          res.end(JSON.stringify(EMPTY_SKILLS_RESPONSE));
        } else {
          res.end(JSON.stringify(CLAUDE_SKILLS_RESPONSE));
        }
      } else {
        res.writeHead(404);
        res.end('Not found');
      }
    });

    server.listen(0, '127.0.0.1', () => {
      const { port } = server.address();
      serverUrl = `http://127.0.0.1:${port}`;
      resolve();
    });
  });
}

function stopMockServer() {
  return new Promise((resolve) => {
    if (server) server.close(resolve);
    else resolve();
  });
}

// -- Helper: simulate fetching skills like the dashboard does --

async function fetchSkills(agent) {
  const url = `${serverUrl}/api/skills?agent=${encodeURIComponent(agent)}`;
  const response = await fetch(url);
  if (!response.ok) return null;
  return response.json();
}

// =============================================================
// Test Suite: Server -> Client Skills Fetch
// =============================================================

describe('E2E: Server serves /api/skills correctly', () => {
  before(startMockServer);
  after(stopMockServer);

  it('fetches Claude skills with / trigger', async () => {
    const config = await fetchSkills('claude');
    assert.equal(config.trigger, '/');
    assert.ok(config.skills.length > 0, 'should have skills');
    assert.ok(config.skills.some(s => s.name === 'commit'));
  });

  it('fetches Codex skills with $ trigger', async () => {
    const config = await fetchSkills('codex');
    assert.equal(config.trigger, '$');
    assert.ok(config.skills.some(s => s.name === 'lint'));
  });

  it('returns empty skills list when none exist', async () => {
    const config = await fetchSkills('empty');
    assert.equal(config.trigger, '/');
    assert.deepEqual(config.skills, []);
  });

  it('each skill has name and description', async () => {
    const config = await fetchSkills('claude');
    for (const skill of config.skills) {
      assert.ok(skill.name, 'skill should have name');
      assert.ok(skill.description, 'skill should have description');
    }
  });
});

// =============================================================
// Test Suite: Dashboard loads skills on session open
// =============================================================

describe('E2E: Dashboard loads skills on session open', () => {
  before(startMockServer);
  after(stopMockServer);

  it('loads Claude skills config matching server response', async () => {
    const config = await fetchSkills('claude');
    assert.equal(config.trigger, '/');
    // Verify the config is usable by autocomplete functions
    const info = getTriggerInfo('/com', 4, config);
    assert.ok(info, 'should detect trigger in loaded config');
    assert.equal(info.filterText, 'com');
  });

  it('loads Codex skills config matching server response', async () => {
    const config = await fetchSkills('codex');
    assert.equal(config.trigger, '$');
    const info = getTriggerInfo('$li', 3, config);
    assert.ok(info, 'should detect $ trigger');
    assert.equal(info.filterText, 'li');
  });

  it('handles null/missing config gracefully', async () => {
    // Simulate network failure
    const info = getTriggerInfo('/test', 5, null);
    assert.equal(info, null);
    const skills = filteredSkills(null, { filterText: '' });
    assert.deepEqual(skills, []);
  });
});

// =============================================================
// Test Suite: Trigger character shows dropdown
// =============================================================

describe('E2E: Trigger character shows dropdown', () => {
  const config = CLAUDE_SKILLS_RESPONSE;

  it('Claude session: Type "/" -> dropdown appears with Claude skills', () => {
    const info = getTriggerInfo('/', 1, config);
    assert.ok(info, 'trigger should be detected');
    assert.equal(info.trigger, '/');
    const skills = filteredSkills(config, info);
    assert.ok(skills.length > 0, 'should show skills');
  });

  it('Codex session: Type "$" -> dropdown appears with Codex skills', () => {
    const codexConfig = CODEX_SKILLS_RESPONSE;
    const info = getTriggerInfo('$', 1, codexConfig);
    assert.ok(info, 'trigger should be detected');
    assert.equal(info.trigger, '$');
    const skills = filteredSkills(codexConfig, info);
    assert.ok(skills.length > 0);
  });

  it('Claude session: Type "$" -> nothing happens (wrong trigger)', () => {
    const info = getTriggerInfo('$', 1, config);
    assert.equal(info, null, 'wrong trigger should not activate');
  });

  it('Type "/com" -> list filters to skills containing "com"', () => {
    const info = getTriggerInfo('/com', 4, config);
    assert.ok(info);
    assert.equal(info.filterText, 'com');
    const skills = filteredSkills(config, info);
    const names = skills.map(s => s.name);
    assert.ok(names.includes('commit'), 'should include commit');
    assert.ok(names.includes('comment'), 'should include comment');
    assert.ok(!names.includes('review-pr'), 'should not include review-pr');
    assert.ok(!names.includes('help'), 'should not include help');
  });

  it('Mid-message: Type "please run /commit" -> autocomplete triggers on "/"', () => {
    const input = 'please run /commit';
    const info = getTriggerInfo(input, input.length, config);
    assert.ok(info, 'should detect trigger mid-message');
    assert.equal(info.trigger, '/');
    assert.equal(info.filterText, 'commit');
    assert.equal(info.replaceStart, 11);
    assert.equal(info.replaceEnd, 18);
  });

  it('Trigger at start of line after newline', () => {
    const input = 'first line\n/rev';
    const info = getTriggerInfo(input, input.length, config);
    assert.ok(info);
    assert.equal(info.filterText, 'rev');
  });
});

// =============================================================
// Test Suite: Keyboard navigation works
// =============================================================

describe('E2E: Keyboard navigation simulation', () => {
  const config = CLAUDE_SKILLS_RESPONSE;

  it('Arrow keys navigate through filtered list', () => {
    const info = getTriggerInfo('/', 1, config);
    const skills = filteredSkills(config, info);

    // Simulate state: selectedIndex starts at 0
    let selectedIndex = 0;

    // ArrowDown moves to next
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1);
    assert.equal(selectedIndex, 1);

    // ArrowDown again
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1);
    assert.equal(selectedIndex, 2);

    // ArrowUp moves back
    selectedIndex = Math.max(selectedIndex - 1, 0);
    assert.equal(selectedIndex, 1);

    // ArrowUp back to start
    selectedIndex = Math.max(selectedIndex - 1, 0);
    assert.equal(selectedIndex, 0);

    // ArrowUp at top doesn't go negative
    selectedIndex = Math.max(selectedIndex - 1, 0);
    assert.equal(selectedIndex, 0);
  });

  it('ArrowDown clamps at list end', () => {
    const info = getTriggerInfo('/com', 4, config);
    const skills = filteredSkills(config, info);
    // "com" matches commit and comment -> 2 skills
    assert.equal(skills.length, 2);

    let selectedIndex = 0;
    // Down to 1
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1);
    assert.equal(selectedIndex, 1);
    // Down again - clamped at 1
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1);
    assert.equal(selectedIndex, 1, 'should clamp at list end');
  });

  it('Enter selects the current skill', () => {
    const info = getTriggerInfo('/', 1, config);
    const skills = filteredSkills(config, info);
    const selectedIndex = 0;

    // Simulate Enter: select skill at selectedIndex
    const selected = skills[selectedIndex];
    assert.ok(selected, 'should have a skill to select');
    assert.equal(selected.name, skills[0].name);
  });

  it('Escape dismisses without selection', () => {
    // Simulate Escape: set showAutocomplete = false, no insertion
    let showAutocomplete = true;
    // Escape handler
    showAutocomplete = false;
    assert.equal(showAutocomplete, false, 'dropdown should close on Escape');
  });
});

// =============================================================
// Test Suite: Selection inserts skill
// =============================================================

describe('E2E: Selection inserts skill', () => {
  const config = CLAUDE_SKILLS_RESPONSE;

  /**
   * Simulate insertSkill logic from SimpleInput.js
   */
  function simulateInsertSkill(text, triggerInfo, skill, trigger) {
    const { replaceStart, replaceEnd } = triggerInfo;
    const before = text.slice(0, replaceStart);
    const after = text.slice(replaceEnd);
    const inserted = `${trigger}${skill.name} `;
    return {
      newText: before + inserted + after,
      newCursorPos: replaceStart + inserted.length,
    };
  }

  it('Selected skill shows as "{trigger}skill-name " in input', () => {
    const text = '/com';
    const info = getTriggerInfo(text, 4, config);
    const skills = filteredSkills(config, info);
    const skill = skills.find(s => s.name === 'commit');

    const { newText, newCursorPos } = simulateInsertSkill(text, info, skill, config.trigger);
    assert.equal(newText, '/commit ', 'should insert trigger + skill name + space');
    assert.equal(newCursorPos, 8, 'cursor should be after inserted text');
  });

  it('Inserting mid-message preserves surrounding text', () => {
    const text = 'please run /com and continue';
    const info = getTriggerInfo(text, 15, config); // cursor at end of "/com"
    assert.ok(info);
    const skill = { name: 'commit' };

    const { newText } = simulateInsertSkill(text, info, skill, config.trigger);
    // Double space: the insertion always appends a trailing space, and the
    // text after the cursor (" and continue") already begins with one.
    assert.equal(newText, 'please run /commit  and continue');
  });

  it('Inserting at start of input', () => {
    const text = '/';
    const info = getTriggerInfo(text, 1, config);
    const skill = { name: 'help' };

    const { newText, newCursorPos } = simulateInsertSkill(text, info, skill, config.trigger);
    assert.equal(newText, '/help ');
    assert.equal(newCursorPos, 6);
  });

  it('Inserting with filter text replaces trigger+filter', () => {
    const text = '/review';
    const info = getTriggerInfo(text, 7, config);
    const skill = { name: 'review-pr' };

    const { newText } = simulateInsertSkill(text, info, skill, config.trigger);
    assert.equal(newText, '/review-pr ');
  });
});

// =============================================================
// Test Suite: Verify alphabetical ordering
// =============================================================

describe('E2E: Verify alphabetical ordering of skills', () => {
  it('Skills are returned sorted alphabetically', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('/', 1, config);
    const skills = filteredSkills(config, info);
    const names = skills.map(s => s.name);

    for (let i = 1; i < names.length; i++) {
      assert.ok(
        names[i].localeCompare(names[i - 1]) >= 0,
        `${names[i]} should come after ${names[i - 1]}`
      );
    }
  });

  it('Filtered results maintain alphabetical order', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('/com', 4, config);
    const skills = filteredSkills(config, info);
    const names = skills.map(s => s.name);

    assert.deepEqual(names, ['comment', 'commit']);
  });
});

// =============================================================
// Test Suite: Edge Cases
// =============================================================

describe('E2E: Edge cases', () => {
  it('Session without skills shows empty list', () => {
    const emptyConfig = EMPTY_SKILLS_RESPONSE;
    const info = getTriggerInfo('/', 1, emptyConfig);
    assert.ok(info, 'trigger still detected');
    const skills = filteredSkills(emptyConfig, info);
    assert.equal(skills.length, 0);
  });

  it('Single skill still shows in dropdown', () => {
    const singleConfig = {
      trigger: '/',
      skills: [{ name: 'only-skill', description: 'The only one' }],
    };
    const info = getTriggerInfo('/', 1, singleConfig);
    const skills = filteredSkills(singleConfig, info);
    assert.equal(skills.length, 1);
    assert.equal(skills[0].name, 'only-skill');
  });

  it('Multiple triggers in one message work independently', () => {
    const config = CLAUDE_SKILLS_RESPONSE;

    // User types: "first /commit then /review-pr finally"
    // After first insertion, simulating second trigger
    const text = 'first /commit then /rev';

    // Cursor at end - should detect second trigger
    const info = getTriggerInfo(text, text.length, config);
    assert.ok(info, 'should detect second trigger');
    assert.equal(info.filterText, 'rev');
    assert.equal(info.replaceStart, 19); // position of second "/"
    assert.equal(info.replaceEnd, text.length);

    const skills = filteredSkills(config, info);
    assert.ok(skills.some(s => s.name === 'review-pr'));
  });

  it('Backspace over trigger dismisses autocomplete', () => {
    const config = CLAUDE_SKILLS_RESPONSE;

    // Type "/" - trigger detected
    let info = getTriggerInfo('/', 1, config);
    assert.ok(info, 'trigger detected');

    // Backspace - text is now empty
    info = getTriggerInfo('', 0, config);
    assert.equal(info, null, 'trigger dismissed after backspace');
  });

  it('Trigger embedded in word does not activate', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('path/to/file', 5, config);
    assert.equal(info, null, 'should not trigger on path separator');
  });

  it('No matching skills after filtering shows empty list', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('/zzz', 4, config);
    assert.ok(info, 'trigger still detected');
    const skills = filteredSkills(config, info);
    assert.equal(skills.length, 0, 'no skills match "zzz"');
  });

  it('Case-insensitive filtering works', () => {
    const config = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('/COM', 4, config);
    assert.ok(info);
    assert.equal(info.filterText, 'com'); // lowercased
    const skills = filteredSkills(config, info);
    assert.ok(skills.length >= 2, 'should match commit and comment');
  });

  it('Click outside dismisses (state simulation)', () => {
    // Simulate: showAutocomplete=true, click outside sets it to false
    let showAutocomplete = true;
    // Simulate click outside handler
    const clickTarget = { contains: () => false };
    const textareaRef = { contains: () => false };
    if (!clickTarget.contains('event') && !textareaRef.contains('event')) {
      showAutocomplete = false;
    }
    assert.equal(showAutocomplete, false, 'click outside should dismiss');
  });
});

// =============================================================
// Test Suite: Cross-agent isolation
// =============================================================

describe('E2E: Cross-agent trigger isolation', () => {
  it('Claude trigger / does not activate in Codex config', () => {
    const codexConfig = CODEX_SKILLS_RESPONSE;
    const info = getTriggerInfo('/', 1, codexConfig);
    assert.equal(info, null, '/ should not trigger for Codex');
  });

  it('Codex trigger $ does not activate in Claude config', () => {
    const claudeConfig = CLAUDE_SKILLS_RESPONSE;
    const info = getTriggerInfo('$', 1, claudeConfig);
    assert.equal(info, null, '$ should not trigger for Claude');
  });

  it('Each agent gets its own skills list', async () => {
    // This requires the mock server
    await startMockServer();
    try {
      const claude = await fetchSkills('claude');
      const codex = await fetchSkills('codex');

      assert.equal(claude.trigger, '/');
      assert.equal(codex.trigger, '$');

      const claudeNames = claude.skills.map(s => s.name);
      const codexNames = codex.skills.map(s => s.name);

      // No overlap in default test data
      assert.ok(!claudeNames.includes('lint'), 'Claude should not have Codex skills');
      assert.ok(!codexNames.includes('commit'), 'Codex should not have Claude skills');
    } finally {
      await stopMockServer();
    }
  });
});

// =============================================================
|
||||
// Test Suite: Full workflow simulation
|
||||
// =============================================================
|
||||
|
||||
describe('E2E: Full autocomplete workflow', () => {
|
||||
before(startMockServer);
|
||||
after(stopMockServer);
|
||||
|
||||
it('complete flow: fetch -> type -> filter -> navigate -> select -> verify', async () => {
|
||||
// Step 1: Fetch skills from server (like Modal.js does on session open)
|
||||
const config = await fetchSkills('claude');
|
||||
assert.equal(config.trigger, '/');
|
||||
assert.ok(config.skills.length > 0);
|
||||
|
||||
// Step 2: User starts typing - no trigger yet
|
||||
let text = 'hello ';
|
||||
let cursorPos = text.length;
|
||||
let info = getTriggerInfo(text, cursorPos, config);
|
||||
assert.equal(info, null, 'no trigger yet');
|
||||
|
||||
// Step 3: User types trigger character
|
||||
text = 'hello /';
|
||||
cursorPos = text.length;
    info = getTriggerInfo(text, cursorPos, config);
    assert.ok(info, 'trigger detected');
    let skills = filteredSkills(config, info);
    assert.equal(skills.length, 5, 'all 5 skills shown');

    // Step 4: User types filter text
    text = 'hello /com';
    cursorPos = text.length;
    info = getTriggerInfo(text, cursorPos, config);
    assert.ok(info);
    assert.equal(info.filterText, 'com');
    skills = filteredSkills(config, info);
    assert.equal(skills.length, 2, 'filtered to 2 skills');
    assert.deepEqual(skills.map(s => s.name), ['comment', 'commit']);

    // Step 5: Arrow down to select "commit" (index 1)
    let selectedIndex = 0; // starts on "comment"
    selectedIndex = Math.min(selectedIndex + 1, skills.length - 1); // ArrowDown
    assert.equal(selectedIndex, 1);
    assert.equal(skills[selectedIndex].name, 'commit');

    // Step 6: Press Enter to insert
    const selected = skills[selectedIndex];
    const { replaceStart, replaceEnd } = info;
    const before = text.slice(0, replaceStart);
    const after = text.slice(replaceEnd);
    const inserted = `${config.trigger}${selected.name} `;
    const newText = before + inserted + after;
    const newCursorPos = replaceStart + inserted.length;

    // Step 7: Verify insertion
    assert.equal(newText, 'hello /commit ');
    assert.equal(newCursorPos, 14);

    // Step 8: Verify autocomplete closed (trigger info should be null for the new text)
    // After insertion, cursor is at 14, no active trigger word
    const postInfo = getTriggerInfo(newText, newCursorPos, config);
    assert.equal(postInfo, null, 'autocomplete should be dismissed after selection');
  });

  it('complete flow with second trigger after first insertion', async () => {
    const config = await fetchSkills('claude');

    // After first insertion: "hello /commit "
    let text = 'hello /commit ';
    let cursorPos = text.length;

    // User types more text and another trigger
    text = 'hello /commit then /';
    cursorPos = text.length;
    let info = getTriggerInfo(text, cursorPos, config);
    assert.ok(info, 'second trigger detected');
    assert.equal(info.replaceStart, 19);

    // Filter the second trigger
    text = 'hello /commit then /rev';
    cursorPos = text.length;
    info = getTriggerInfo(text, cursorPos, config);
    assert.ok(info);
    assert.equal(info.filterText, 'rev');

    const skills = filteredSkills(config, info);
    assert.ok(skills.some(s => s.name === 'review-pr'));

    // Select review-pr
    const skill = skills.find(s => s.name === 'review-pr');
    const before = text.slice(0, info.replaceStart);
    const after = text.slice(info.replaceEnd);
    const newText = before + `${config.trigger}${skill.name} ` + after;

    assert.equal(newText, 'hello /commit then /review-pr ');
  });
});
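The insertion arithmetic exercised in steps 6 and 7 above can be sketched standalone. This is a minimal sketch, not the app's actual code: `replace_start`/`replace_end` stand in for the bounds reported by the detected trigger info, and the function name is hypothetical.

```python
def insert_skill(text, replace_start, replace_end, trigger, name):
    """Replace the active trigger word with the chosen skill plus a trailing space.

    Returns the new text and the new cursor position (just after the space),
    mirroring the before/inserted/after splice done in the test.
    """
    inserted = f"{trigger}{name} "
    new_text = text[:replace_start] + inserted + text[replace_end:]
    return new_text, replace_start + len(inserted)

# 'hello /com' has its trigger word at [6, 10); selecting "commit" yields
# 'hello /commit ' with the cursor at 14, as the test asserts.
print(insert_skill("hello /com", 6, 10, "/", "commit"))  # → ('hello /commit ', 14)
```

The same splice handles the second-trigger case: only the `[replace_start, replace_end)` span changes, so earlier insertions are left untouched.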
250 tests/e2e/test_skills_endpoint.py Normal file
@@ -0,0 +1,250 @@
"""E2E tests for the /api/skills endpoint.

Spins up a real AMC server on a random port and verifies the skills API
returns correct data for Claude and Codex agents, including trigger
characters, alphabetical sorting, and response format.
"""

import json
import socket
import tempfile
import threading
import time
import unittest
import urllib.error
import urllib.request
from http.server import ThreadingHTTPServer
from pathlib import Path
from unittest.mock import patch

from amc_server.handler import AMCHandler


def _find_free_port():
    """Find an available port for the test server."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]


def _get_json(url):
    """Fetch JSON from a URL, returning (status_code, parsed_json)."""
    req = urllib.request.Request(url)
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status, json.loads(resp.read())
    except urllib.error.HTTPError as e:
        return e.code, json.loads(e.read())


class TestSkillsEndpointE2E(unittest.TestCase):
    """E2E tests: start a real server and hit /api/skills over HTTP."""

    @classmethod
    def setUpClass(cls):
        """Start a test server on a random port with mock skill data."""
        cls.port = _find_free_port()
        cls.base_url = f"http://127.0.0.1:{cls.port}"

        # Create temp directories for skill data
        cls.tmpdir = tempfile.mkdtemp()
        cls.home = Path(cls.tmpdir)

        # Claude skills
        for name, desc in [
            ("commit", "Create a git commit"),
            ("review-pr", "Review a pull request"),
            ("comment", "Add a comment"),
        ]:
            skill_dir = cls.home / ".claude/skills" / name
            skill_dir.mkdir(parents=True, exist_ok=True)
            (skill_dir / "SKILL.md").write_text(desc)

        # Codex curated skills
        cache_dir = cls.home / ".codex/vendor_imports"
        cache_dir.mkdir(parents=True, exist_ok=True)
        cache = {
            "skills": [
                {"id": "lint", "shortDescription": "Lint code"},
                {"id": "deploy", "shortDescription": "Deploy to prod"},
            ]
        }
        (cache_dir / "skills-curated-cache.json").write_text(json.dumps(cache))

        # Codex user skill
        codex_skill = cls.home / ".codex/skills/my-script"
        codex_skill.mkdir(parents=True, exist_ok=True)
        (codex_skill / "SKILL.md").write_text("Run my custom script")

        # Patch Path.home() for the skills enumeration
        cls.home_patcher = patch.object(Path, "home", return_value=cls.home)
        cls.home_patcher.start()

        # Start server in background thread
        cls.server = ThreadingHTTPServer(("127.0.0.1", cls.port), AMCHandler)
        cls.server_thread = threading.Thread(target=cls.server.serve_forever)
        cls.server_thread.daemon = True
        cls.server_thread.start()

        # Wait for server to be ready
        for _ in range(50):
            try:
                with socket.create_connection(("127.0.0.1", cls.port), timeout=0.1):
                    break
            except OSError:
                time.sleep(0.05)

    @classmethod
    def tearDownClass(cls):
        """Shut down the test server."""
        cls.server.shutdown()
        cls.server_thread.join(timeout=5)
        cls.home_patcher.stop()

    # -- Core: /api/skills serves correctly --

    def test_skills_default_is_claude(self):
        """GET /api/skills without ?agent defaults to claude (/ trigger)."""
        status, data = _get_json(f"{self.base_url}/api/skills")
        self.assertEqual(status, 200)
        self.assertEqual(data["trigger"], "/")
        self.assertIsInstance(data["skills"], list)

    def test_claude_skills_returned(self):
        """GET /api/skills?agent=claude returns Claude skills."""
        status, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
        self.assertEqual(status, 200)
        self.assertEqual(data["trigger"], "/")
        names = [s["name"] for s in data["skills"]]
        self.assertIn("commit", names)
        self.assertIn("review-pr", names)
        self.assertIn("comment", names)

    def test_codex_skills_returned(self):
        """GET /api/skills?agent=codex returns Codex skills with $ trigger."""
        status, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
        self.assertEqual(status, 200)
        self.assertEqual(data["trigger"], "$")
        names = [s["name"] for s in data["skills"]]
        self.assertIn("lint", names)
        self.assertIn("deploy", names)
        self.assertIn("my-script", names)

    def test_unknown_agent_defaults_to_claude(self):
        """Unknown agent type defaults to claude behavior."""
        status, data = _get_json(f"{self.base_url}/api/skills?agent=unknown-agent")
        self.assertEqual(status, 200)
        self.assertEqual(data["trigger"], "/")

    # -- Response format --

    def test_response_has_trigger_and_skills_keys(self):
        """Response JSON includes the trigger and skills keys."""
        _, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
        self.assertIn("trigger", data)
        self.assertIn("skills", data)

    def test_each_skill_has_name_and_description(self):
        """Each skill object has name and description fields."""
        _, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
        for skill in data["skills"]:
            self.assertIn("name", skill)
            self.assertIn("description", skill)
            self.assertIsInstance(skill["name"], str)
            self.assertIsInstance(skill["description"], str)

    # -- Alphabetical sorting --

    def test_claude_skills_alphabetically_sorted(self):
        """Claude skills are returned in alphabetical order."""
        _, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
        names = [s["name"] for s in data["skills"]]
        self.assertEqual(names, sorted(names, key=str.lower))

    def test_codex_skills_alphabetically_sorted(self):
        """Codex skills are returned in alphabetical order."""
        _, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
        names = [s["name"] for s in data["skills"]]
        self.assertEqual(names, sorted(names, key=str.lower))

    # -- Descriptions --

    def test_claude_skill_descriptions(self):
        """Claude skills have correct descriptions from SKILL.md."""
        _, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
        by_name = {s["name"]: s["description"] for s in data["skills"]}
        self.assertEqual(by_name["commit"], "Create a git commit")
        self.assertEqual(by_name["review-pr"], "Review a pull request")

    def test_codex_curated_descriptions(self):
        """Codex curated skills have correct descriptions from cache."""
        _, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
        by_name = {s["name"]: s["description"] for s in data["skills"]}
        self.assertEqual(by_name["lint"], "Lint code")
        self.assertEqual(by_name["deploy"], "Deploy to prod")

    def test_codex_user_skill_description(self):
        """Codex user-installed skills have descriptions from SKILL.md."""
        _, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
        by_name = {s["name"]: s["description"] for s in data["skills"]}
        self.assertEqual(by_name["my-script"], "Run my custom script")

    # -- CORS --

    def test_cors_header_present(self):
        """Response includes Access-Control-Allow-Origin header."""
        url = f"{self.base_url}/api/skills?agent=claude"
        with urllib.request.urlopen(url, timeout=5) as resp:
            cors = resp.headers.get("Access-Control-Allow-Origin")
            self.assertEqual(cors, "*")


class TestSkillsEndpointEmptyE2E(unittest.TestCase):
    """E2E tests: server with no skills data."""

    @classmethod
    def setUpClass(cls):
        cls.port = _find_free_port()
        cls.base_url = f"http://127.0.0.1:{cls.port}"

        # Empty home directory - no skills at all
        cls.tmpdir = tempfile.mkdtemp()
        cls.home = Path(cls.tmpdir)

        cls.home_patcher = patch.object(Path, "home", return_value=cls.home)
        cls.home_patcher.start()

        cls.server = ThreadingHTTPServer(("127.0.0.1", cls.port), AMCHandler)
        cls.server_thread = threading.Thread(target=cls.server.serve_forever)
        cls.server_thread.daemon = True
        cls.server_thread.start()

        for _ in range(50):
            try:
                with socket.create_connection(("127.0.0.1", cls.port), timeout=0.1):
                    break
            except OSError:
                time.sleep(0.05)

    @classmethod
    def tearDownClass(cls):
        cls.server.shutdown()
        cls.server_thread.join(timeout=5)
        cls.home_patcher.stop()

    def test_empty_claude_skills(self):
        """Server with no Claude skills returns empty list."""
        status, data = _get_json(f"{self.base_url}/api/skills?agent=claude")
        self.assertEqual(status, 200)
        self.assertEqual(data["trigger"], "/")
        self.assertEqual(data["skills"], [])

    def test_empty_codex_skills(self):
        """Server with no Codex skills returns empty list."""
        status, data = _get_json(f"{self.base_url}/api/skills?agent=codex")
        self.assertEqual(status, 200)
        self.assertEqual(data["trigger"], "$")
        self.assertEqual(data["skills"], [])


if __name__ == "__main__":
    unittest.main()
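The server bootstrap both test classes share (find a free port, serve from a daemon thread, poll until the socket accepts) works with any handler. A minimal sketch with a stub handler standing in for `AMCHandler`; here the server binds port 0 directly, which lets the OS pick the port and avoids the small race between `_find_free_port` releasing a port and the server rebinding it:

```python
import json
import socket
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class StubHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"ok": True}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Bind port 0: the OS assigns a free port, reported via server_address
server = ThreadingHTTPServer(("127.0.0.1", 0), StubHandler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Poll until the socket accepts connections, as the tests do
for _ in range(50):
    try:
        with socket.create_connection(("127.0.0.1", port), timeout=0.1):
            break
    except OSError:
        time.sleep(0.05)

with urllib.request.urlopen(f"http://127.0.0.1:{port}/", timeout=5) as resp:
    status = resp.status
print(status)  # → 200
server.shutdown()
```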
617 tests/e2e_spawn.sh Executable file
@@ -0,0 +1,617 @@
#!/usr/bin/env bash
set -euo pipefail

# E2E test script for the AMC spawn workflow.
# Tests the full flow from API call to Zellij pane creation.
#
# Usage:
#   ./tests/e2e_spawn.sh           # Safe mode (no actual spawning)
#   ./tests/e2e_spawn.sh --spawn   # Full test including real agent spawn

SERVER_URL="http://localhost:7400"
TEST_PROJECT="amc"   # Must exist in ~/projects/
AUTH_TOKEN=""
SPAWN_MODE=false
PASSED=0
FAILED=0
SKIPPED=0

# Parse args
for arg in "$@"; do
  case "$arg" in
    --spawn) SPAWN_MODE=true ;;
    *) echo "Unknown arg: $arg"; exit 2 ;;
  esac
done

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'

log_info() { echo -e "${GREEN}[INFO]${NC} $1"; }
log_warn() { echo -e "${YELLOW}[WARN]${NC} $1"; }
log_error() { echo -e "${RED}[ERROR]${NC} $1"; }
log_test() { echo -e "\n${GREEN}[TEST]${NC} $1"; }
# Note: ((PASSED++)) returns exit status 1 when the counter is 0, which would
# abort the script under `set -e`; use plain arithmetic expansion instead.
log_pass() { echo -e "${GREEN}[PASS]${NC} $1"; PASSED=$((PASSED + 1)); }
log_fail() { echo -e "${RED}[FAIL]${NC} $1"; FAILED=$((FAILED + 1)); }
log_skip() { echo -e "${YELLOW}[SKIP]${NC} $1"; SKIPPED=$((SKIPPED + 1)); }

# Track spawned panes for cleanup
SPAWNED_PANE_NAMES=()

cleanup() {
  if [[ ${#SPAWNED_PANE_NAMES[@]} -gt 0 ]]; then
    log_info "Cleaning up spawned panes..."
    for pane_name in "${SPAWNED_PANE_NAMES[@]}"; do
      # Best-effort: close panes we spawned during tests
      zellij --session infra action close-pane --name "$pane_name" 2>/dev/null || true
    done
  fi
}
trap cleanup EXIT

# ---------------------------------------------------------------------------
# Pre-flight checks
# ---------------------------------------------------------------------------

preflight() {
  log_test "Pre-flight checks"

  # curl available?
  if ! command -v curl &>/dev/null; then
    log_error "curl not found"
    exit 1
  fi

  # jq available?
  if ! command -v jq &>/dev/null; then
    log_error "jq not found (required for JSON assertions)"
    exit 1
  fi

  # Server running?
  if ! curl -sf "${SERVER_URL}/api/health" >/dev/null 2>&1; then
    log_error "Server not running at ${SERVER_URL}"
    log_error "Start with: python -m amc_server.server"
    exit 1
  fi

  # Test project exists?
  if [[ ! -d "$HOME/projects/${TEST_PROJECT}" ]]; then
    log_error "Test project '${TEST_PROJECT}' not found at ~/projects/${TEST_PROJECT}"
    exit 1
  fi

  log_pass "Pre-flight checks passed"
}

# ---------------------------------------------------------------------------
# Extract auth token from dashboard HTML
# ---------------------------------------------------------------------------

extract_auth_token() {
  log_test "Extract auth token from dashboard"

  local html
  html=$(curl -sf "${SERVER_URL}/")

  AUTH_TOKEN=$(echo "$html" | grep -o 'AMC_AUTH_TOKEN = "[^"]*"' | cut -d'"' -f2)
  if [[ -z "$AUTH_TOKEN" ]]; then
    log_error "Could not extract auth token from dashboard HTML"
    log_error "Check that index.html contains <!-- AMC_AUTH_TOKEN --> placeholder"
    exit 1
  fi

  log_pass "Auth token extracted (${AUTH_TOKEN:0:8}...)"
}

# ---------------------------------------------------------------------------
# Test: GET /api/health
# ---------------------------------------------------------------------------

test_health_endpoint() {
  log_test "GET /api/health"

  local response
  response=$(curl -sf "${SERVER_URL}/api/health")

  local ok
  ok=$(echo "$response" | jq -r '.ok')
  if [[ "$ok" != "true" ]]; then
    log_fail "Health endpoint returned ok=$ok"
    return
  fi

  # Must include zellij_available and zellij_session fields
  local has_zellij_available has_zellij_session
  has_zellij_available=$(echo "$response" | jq 'has("zellij_available")')
  has_zellij_session=$(echo "$response" | jq 'has("zellij_session")')

  if [[ "$has_zellij_available" != "true" || "$has_zellij_session" != "true" ]]; then
    log_fail "Health response missing expected fields: $response"
    return
  fi

  local zellij_available
  zellij_available=$(echo "$response" | jq -r '.zellij_available')
  log_pass "Health OK (zellij_available=$zellij_available)"
}

# ---------------------------------------------------------------------------
# Test: GET /api/projects
# ---------------------------------------------------------------------------

test_projects_endpoint() {
  log_test "GET /api/projects"

  local response
  response=$(curl -sf "${SERVER_URL}/api/projects")

  local ok
  ok=$(echo "$response" | jq -r '.ok')
  if [[ "$ok" != "true" ]]; then
    log_fail "Projects endpoint returned ok=$ok"
    return
  fi

  local project_count
  project_count=$(echo "$response" | jq '.projects | length')
  if [[ "$project_count" -lt 1 ]]; then
    log_fail "No projects returned (expected at least 1)"
    return
  fi

  # Verify test project is in the list
  local has_test_project
  has_test_project=$(echo "$response" | jq --arg p "$TEST_PROJECT" '[.projects[] | select(. == $p)] | length')
  if [[ "$has_test_project" -lt 1 ]]; then
    log_fail "Test project '$TEST_PROJECT' not in projects list"
    return
  fi

  log_pass "Projects OK ($project_count projects, '$TEST_PROJECT' present)"
}

# ---------------------------------------------------------------------------
# Test: POST /api/projects/refresh
# ---------------------------------------------------------------------------

test_projects_refresh() {
  log_test "POST /api/projects/refresh"

  local response
  response=$(curl -sf -X POST "${SERVER_URL}/api/projects/refresh")

  local ok
  ok=$(echo "$response" | jq -r '.ok')
  if [[ "$ok" != "true" ]]; then
    log_fail "Projects refresh returned ok=$ok"
    return
  fi

  local project_count
  project_count=$(echo "$response" | jq '.projects | length')
  log_pass "Projects refresh OK ($project_count projects)"
}

# ---------------------------------------------------------------------------
# Test: Spawn without auth (should return 401)
# ---------------------------------------------------------------------------

test_spawn_no_auth() {
  log_test "POST /api/spawn without auth (expect 401)"

  local http_code
  http_code=$(curl -s -o /dev/null -w '%{http_code}' -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -d '{"project":"amc","agent_type":"claude"}')

  if [[ "$http_code" != "401" ]]; then
    log_fail "Expected HTTP 401, got $http_code"
    return
  fi

  # Also verify the JSON error code
  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -d '{"project":"amc","agent_type":"claude"}')

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" != "UNAUTHORIZED" ]]; then
    log_fail "Expected code UNAUTHORIZED, got $code"
    return
  fi

  log_pass "Correctly rejected unauthorized request (401/UNAUTHORIZED)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with wrong token (should return 401)
# ---------------------------------------------------------------------------

test_spawn_wrong_token() {
  log_test "POST /api/spawn with wrong token (expect 401)"

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer totally-wrong-token" \
    -d '{"project":"amc","agent_type":"claude"}')

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" != "UNAUTHORIZED" ]]; then
    log_fail "Expected UNAUTHORIZED, got $code"
    return
  fi

  log_pass "Correctly rejected wrong token (UNAUTHORIZED)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with malformed auth (no Bearer prefix)
# ---------------------------------------------------------------------------

test_spawn_malformed_auth() {
  log_test "POST /api/spawn with malformed auth header"

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Token ${AUTH_TOKEN}" \
    -d '{"project":"amc","agent_type":"claude"}')

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" != "UNAUTHORIZED" ]]; then
    log_fail "Expected UNAUTHORIZED for malformed auth, got $code"
    return
  fi

  log_pass "Correctly rejected malformed auth header"
}

# ---------------------------------------------------------------------------
# Test: Spawn with invalid JSON body
# ---------------------------------------------------------------------------

test_spawn_invalid_json() {
  log_test "POST /api/spawn with invalid JSON"

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d 'not valid json!!!')

  local ok
  ok=$(echo "$response" | jq -r '.ok')
  if [[ "$ok" != "false" ]]; then
    log_fail "Expected ok=false for invalid JSON, got ok=$ok"
    return
  fi

  log_pass "Correctly rejected invalid JSON body"
}

# ---------------------------------------------------------------------------
# Test: Spawn with path traversal (should return 400/INVALID_PROJECT)
# ---------------------------------------------------------------------------

test_spawn_path_traversal() {
  log_test "POST /api/spawn with path traversal"

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d '{"project":"../etc/passwd","agent_type":"claude"}')

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" != "INVALID_PROJECT" ]]; then
    log_fail "Expected INVALID_PROJECT for path traversal, got $code"
    return
  fi

  log_pass "Correctly rejected path traversal (INVALID_PROJECT)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with nonexistent project
# ---------------------------------------------------------------------------

test_spawn_nonexistent_project() {
  log_test "POST /api/spawn with nonexistent project"

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d '{"project":"this-project-does-not-exist-xyz","agent_type":"claude"}')

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" != "PROJECT_NOT_FOUND" ]]; then
    log_fail "Expected PROJECT_NOT_FOUND, got $code"
    return
  fi

  log_pass "Correctly rejected nonexistent project (PROJECT_NOT_FOUND)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with invalid agent type
# ---------------------------------------------------------------------------

test_spawn_invalid_agent_type() {
  log_test "POST /api/spawn with invalid agent type"

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d "{\"project\":\"${TEST_PROJECT}\",\"agent_type\":\"gpt5\"}")

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" != "INVALID_AGENT_TYPE" ]]; then
    log_fail "Expected INVALID_AGENT_TYPE, got $code"
    return
  fi

  log_pass "Correctly rejected invalid agent type (INVALID_AGENT_TYPE)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with missing project field
# ---------------------------------------------------------------------------

test_spawn_missing_project() {
  log_test "POST /api/spawn with missing project"

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d '{"agent_type":"claude"}')

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" != "MISSING_PROJECT" ]]; then
    log_fail "Expected MISSING_PROJECT, got $code"
    return
  fi

  log_pass "Correctly rejected missing project field (MISSING_PROJECT)"
}

# ---------------------------------------------------------------------------
# Test: Spawn with backslash in project name
# ---------------------------------------------------------------------------

test_spawn_backslash_project() {
  log_test "POST /api/spawn with backslash in project name"

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d '{"project":"foo\\bar","agent_type":"claude"}')

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" != "INVALID_PROJECT" ]]; then
    log_fail "Expected INVALID_PROJECT for backslash, got $code"
    return
  fi

  log_pass "Correctly rejected backslash in project name (INVALID_PROJECT)"
}

# ---------------------------------------------------------------------------
# Test: CORS preflight for /api/spawn
# ---------------------------------------------------------------------------

test_cors_preflight() {
  log_test "OPTIONS /api/spawn (CORS preflight)"

  local http_code headers
  headers=$(curl -sI -X OPTIONS "${SERVER_URL}/api/spawn" 2>/dev/null)
  http_code=$(echo "$headers" | head -1 | grep -o '[0-9][0-9][0-9]' | head -1)

  if [[ "$http_code" != "204" ]]; then
    log_fail "Expected HTTP 204 for OPTIONS, got $http_code"
    return
  fi

  if ! echo "$headers" | grep -qi 'Access-Control-Allow-Methods'; then
    log_fail "Missing Access-Control-Allow-Methods header"
    return
  fi

  if ! echo "$headers" | grep -qi 'Authorization'; then
    log_fail "Authorization not in Access-Control-Allow-Headers"
    return
  fi

  log_pass "CORS preflight OK (204 with correct headers)"
}

# ---------------------------------------------------------------------------
# Test: Actual spawn (only with --spawn flag)
# ---------------------------------------------------------------------------

test_spawn_valid() {
  if [[ "$SPAWN_MODE" != "true" ]]; then
    log_skip "Actual spawn test (pass --spawn to enable)"
    return
  fi

  log_test "POST /api/spawn with valid project (LIVE)"

  # Check Zellij session first
  if ! zellij list-sessions 2>/dev/null | grep -q '^infra'; then
    log_skip "Zellij session 'infra' not found - cannot test live spawn"
    return
  fi

  # Count session files before
  local sessions_dir="$HOME/.local/share/amc/sessions"
  local count_before=0
  if [[ -d "$sessions_dir" ]]; then
    count_before=$(find "$sessions_dir" -maxdepth 1 -name '*.json' 2>/dev/null | wc -l | tr -d ' ')
  fi

  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d "{\"project\":\"${TEST_PROJECT}\",\"agent_type\":\"claude\"}")

  local ok
  ok=$(echo "$response" | jq -r '.ok')
  if [[ "$ok" != "true" ]]; then
    local error_code
    error_code=$(echo "$response" | jq -r '.code // .error')
    log_fail "Spawn failed: $error_code"
    return
  fi

  # Verify spawn_id is returned
  local spawn_id
  spawn_id=$(echo "$response" | jq -r '.spawn_id')
  if [[ -z "$spawn_id" || "$spawn_id" == "null" ]]; then
    log_fail "No spawn_id in response"
    return
  fi

  # Track for cleanup
  SPAWNED_PANE_NAMES+=("claude-${TEST_PROJECT}")

  # Verify session_file_found field
  local session_found
  session_found=$(echo "$response" | jq -r '.session_file_found')
  log_info "session_file_found=$session_found, spawn_id=${spawn_id:0:8}..."

  log_pass "Spawn successful (spawn_id=${spawn_id:0:8}...)"
}

# ---------------------------------------------------------------------------
# Test: Rate limiting (only with --spawn flag)
# ---------------------------------------------------------------------------

test_rate_limiting() {
  if [[ "$SPAWN_MODE" != "true" ]]; then
    log_skip "Rate limiting test (pass --spawn to enable)"
    return
  fi

  log_test "Rate limiting on rapid spawn"

  # Immediately try to spawn the same project again
  local response
  response=$(curl -s -X POST "${SERVER_URL}/api/spawn" \
    -H "Content-Type: application/json" \
    -H "Authorization: Bearer ${AUTH_TOKEN}" \
    -d "{\"project\":\"${TEST_PROJECT}\",\"agent_type\":\"claude\"}")

  local code
  code=$(echo "$response" | jq -r '.code')
  if [[ "$code" == "RATE_LIMITED" ]]; then
    log_pass "Rate limiting active (RATE_LIMITED returned)"
  else
    local ok
    ok=$(echo "$response" | jq -r '.ok')
    log_warn "Rate limiting not triggered (ok=$ok, code=$code) - cooldown may have expired"
    log_pass "Rate limiting test completed (non-deterministic)"
  fi
}

# ---------------------------------------------------------------------------
# Test: Dashboard shows agent after spawn (only with --spawn flag)
# ---------------------------------------------------------------------------

test_dashboard_shows_agent() {
  if [[ "$SPAWN_MODE" != "true" ]]; then
    log_skip "Dashboard agent visibility test (pass --spawn to enable)"
    return
  fi

  log_test "Dashboard /api/state includes spawned agent"

  # Give the session a moment to register
  sleep 2

  local response
  response=$(curl -sf "${SERVER_URL}/api/state")

  local session_count
  session_count=$(echo "$response" | jq '.sessions | length')

  if [[ "$session_count" -gt 0 ]]; then
    log_pass "Dashboard shows $session_count session(s)"
  else
    log_warn "No sessions visible yet (agent may still be starting)"
    log_pass "Dashboard state endpoint responsive"
  fi
}

# ---------------------------------------------------------------------------
# Main
# ---------------------------------------------------------------------------

main() {
  echo "========================================="
  echo " AMC Spawn Workflow E2E Tests"
  echo "========================================="
  echo ""
  if [[ "$SPAWN_MODE" == "true" ]]; then
    log_warn "SPAWN MODE: will create real Zellij panes"
  else
    log_info "Safe mode (no actual spawning). Pass --spawn to test live spawn."
  fi

  preflight
  extract_auth_token

  # Read-only endpoint tests
  test_health_endpoint
  test_projects_endpoint
  test_projects_refresh

  # Auth / validation tests (no side effects)
  test_spawn_no_auth
  test_spawn_wrong_token
  test_spawn_malformed_auth
  test_spawn_invalid_json
  test_spawn_path_traversal
  test_spawn_nonexistent_project
  test_spawn_invalid_agent_type
  test_spawn_missing_project
  test_spawn_backslash_project

  # CORS
  test_cors_preflight

  # Live spawn tests (only with --spawn)
  test_spawn_valid
  test_rate_limiting
  test_dashboard_shows_agent

  # Summary
  echo ""
  echo "========================================="
  echo " Results: ${PASSED} passed, ${FAILED} failed, ${SKIPPED} skipped"
  echo "========================================="

  if [[ "$FAILED" -gt 0 ]]; then
    exit 1
  fi
}

main "$@"
|
||||
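The script above calls logging helpers (`log_pass`, `log_warn`, `log_skip`, `log_test`, `log_info`) and reads the `PASSED`/`FAILED`/`SKIPPED` counters, none of which appear in this part of the diff. A minimal sketch of what such helpers might look like, assuming plain counters and echo-based output (the real definitions live earlier in the file and may differ):

```shell
# Hypothetical logging helpers in the style the test script assumes.
PASSED=0
FAILED=0
SKIPPED=0

log_test() { echo "TEST: $*"; }
log_info() { echo "INFO: $*"; }
log_warn() { echo "WARN: $*"; }
log_pass() { echo "PASS: $*"; PASSED=$((PASSED + 1)); }
log_fail() { echo "FAIL: $*"; FAILED=$((FAILED + 1)); }
log_skip() { echo "SKIP: $*"; SKIPPED=$((SKIPPED + 1)); }
```

Keeping the counters as shell integers lets the final summary and the non-zero exit on failure work without any external state.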
20
tests/test_context.py
Normal file
@@ -0,0 +1,20 @@
import unittest
from unittest.mock import patch

from amc_server.zellij import _resolve_zellij_bin


class ContextTests(unittest.TestCase):
    def test_resolve_zellij_bin_prefers_which(self):
        with patch("amc_server.zellij.shutil.which", return_value="/custom/bin/zellij"):
            self.assertEqual(_resolve_zellij_bin(), "/custom/bin/zellij")

    def test_resolve_zellij_bin_falls_back_to_default_name(self):
        with patch("amc_server.zellij.shutil.which", return_value=None), patch(
            "amc_server.zellij.Path.exists", return_value=False
        ), patch("amc_server.zellij.Path.is_file", return_value=False):
            self.assertEqual(_resolve_zellij_bin(), "zellij")


if __name__ == "__main__":
    unittest.main()
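The implementation of `_resolve_zellij_bin` is not part of this diff, but the two tests above imply its shape: prefer `shutil.which`, then check a filesystem fallback via `Path`, and finally return the bare name. A self-contained sketch of that pattern (the function name `resolve_bin` and the fallback path are illustrative assumptions, not the actual `amc_server.zellij` code):

```python
import shutil
from pathlib import Path

# Hypothetical fallback location; the real module may check different paths.
_FALLBACK = Path("/usr/local/bin/zellij")


def resolve_bin(name: str = "zellij") -> str:
    """Prefer a PATH lookup, then a known install location, then the bare name."""
    found = shutil.which(name)
    if found:
        return found
    if _FALLBACK.exists() and _FALLBACK.is_file():
        return str(_FALLBACK)
    # Last resort: return the bare name and let exec resolve it via PATH.
    return name
```

Because each branch is reached through module-level names (`shutil.which`, `Path` methods), the branches can be isolated in tests with `unittest.mock.patch`, exactly as the two test cases above do.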