# Cost Analyzer
# Analyzes runs for token usage and cost patterns
#
# Usage:
#   prose run @openprose/lib/cost-analyzer
#
# Inputs:
#   run_path: Path to run to analyze, or "recent" for latest runs
#   scope: single | compare | trend
#
# Outputs:
#   - Token usage breakdown by agent/phase
#   - Model tier efficiency analysis
#   - Cost hotspots
#   - Optimization recommendations

input run_path: "Path to run, or 'recent' for latest runs in .prose/runs/"
input scope: "Scope: single (one run) | compare (multiple runs) | trend (over time)"

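# Example (illustrative values): run_path="recent" with scope="trend" pulls the
# latest runs from .prose/runs/ and reports how their costs change over time.
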
# ============================================================
# Agents
# ============================================================

agent collector:
  model: sonnet
  prompt: """
    You collect and structure cost/token data from .prose runs.

    Extract from run artifacts:
    - Model used per session (haiku/sonnet/opus)
    - Approximate token counts (estimate from content length)
    - Session count per agent
    - Parallel vs sequential execution
  """

agent analyzer:
  model: opus
  prompt: """
    You analyze cost patterns and identify optimization opportunities.

    Consider:
    - Model tier appropriateness (is opus needed, or would sonnet suffice?)
    - Token efficiency (are contexts bloated?)
    - Parallelization (could sequential steps run in parallel?)
    - Caching opportunities (repeated computations?)
  """

agent tracker:
  model: haiku
  persist: user
  prompt: """
    You track cost metrics across runs for trend analysis.
    Store compactly: run_id, program, total_cost_estimate, breakdown.
  """

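# Note: tracker is persisted (persist: user), so its compact cost records are
# presumably retained between runs; scope=trend relies on that history.
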
# ============================================================
# Phase 1: Collect Run Data
# ============================================================

let runs_to_analyze = session: collector
  prompt: """
    Find runs to analyze.

    Input: {run_path}
    Scope: {scope}

    If run_path is a specific path, use that run.
    If run_path is "recent", find the latest 5-10 runs in .prose/runs/.

    For scope=compare, find runs of the same program.
    For scope=trend, find runs spread out over time.

    Return: list of run paths to analyze
  """

let run_data = runs_to_analyze | pmap:
  session: collector
  prompt: """
    Extract cost data from run: {item}

    Read state.md and bindings to determine:
    1. Program name
    2. Each session spawned:
       - Agent name (or "anonymous")
       - Model tier
       - Estimated input tokens (context size)
       - Estimated output tokens (binding size)
    3. Parallel blocks (how many concurrent sessions)
    4. Total session count

    Estimate costs using rough rates:
    - haiku: $0.25 / 1M input, $1.25 / 1M output
    - sonnet: $3 / 1M input, $15 / 1M output
    - opus: $15 / 1M input, $75 / 1M output

    Return structured JSON.
  """
  context: item

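# Rough arithmetic behind the estimates above (illustrative numbers, not taken
# from a real run): a sonnet session with ~20K input and ~2K output tokens
# costs about 0.02 * $3 + 0.002 * $15 = $0.06 + $0.03 = $0.09.
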
# ============================================================
# Phase 2: Analyze
# ============================================================

let analysis = session: analyzer
  prompt: """
    Analyze cost patterns across these runs.

    Data: {run_data}
    Scope: {scope}

    For single run:
    - Break down cost by agent and phase
    - Identify the most expensive operations
    - Flag potential inefficiencies

    For compare:
    - Show cost differences between runs
    - Identify which changes affected cost
    - Note if cost increased/decreased

    For trend:
    - Show cost over time
    - Identify if costs are stable, growing, or improving
    - Flag anomalies

    Always include:
    - Model tier efficiency (are expensive models used appropriately?)
    - Context efficiency (are contexts lean or bloated?)
    - Specific optimization recommendations

    Return structured JSON with:
    {
      "summary": { total_cost, session_count, by_model: {...} },
      "hotspots": [ { agent, cost, percent, issue } ],
      "recommendations": [ { priority, description, estimated_savings } ],
      "details": { ... }
    }
  """
  context: run_data

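# Illustrative hotspot entry (hypothetical values, shown only to make the shape
# concrete):
#   { "agent": "analyzer", "cost": 0.42, "percent": 55, "issue": "opus used where sonnet may suffice" }
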
# ============================================================
# Phase 3: Track for Trends
# ============================================================

resume: tracker
  prompt: """
    Record this cost analysis for future trend tracking.

    {analysis.summary}

    Add to your historical record.
  """
  context: analysis

# ============================================================
# Output
# ============================================================

output report = session "Format report"
  prompt: """
    Format the cost analysis as a readable report.

    Analysis: {analysis}

    Include:
    1. Executive summary (total cost, key finding)
    2. Cost breakdown table
    3. Hotspots (where the money goes)
    4. Recommendations (prioritized)
    5. If scope=trend, a trend chart (ASCII or a written description)

    Format as markdown.
  """
  context: analysis