feat: add OpenProse plugin skills

This commit is contained in:
Peter Steinberger
2026-01-23 00:49:32 +00:00
parent db0235a26a
commit 51a9053387
102 changed files with 23315 additions and 5 deletions


@@ -0,0 +1,356 @@
# Language Self-Improvement
# Analyzes .prose usage patterns to evolve the language itself
# Meta-level 2: while the crystallizer creates .prose files, this improves the .prose language itself
#
# BACKEND: Run with sqlite+ or postgres backend for corpus-scale analysis
# prose run 47-language-self-improvement.prose --backend sqlite+
#
# This program treats OpenProse programs as its corpus, looking for:
# - Workarounds (patterns that exist because the language lacks a cleaner way)
# - Friction (places where authors struggle or make errors)
# - Gaps (things people want to express but cannot)
input corpus_path: "Path to .prose files to analyze (default: examples/)"
input conversations: "Optional: conversation threads where people struggled with the language"
input focus: "Optional: specific area to focus on (e.g., 'error handling', 'parallelism')"
# ============================================================
# Agents
# ============================================================
agent archaeologist:
model: opus
prompt: """
You excavate patterns from code corpora.
Look for: repeated idioms, workarounds, boilerplate that could be abstracted.
Report patterns with frequency counts and concrete examples.
Distinguish between intentional patterns and compensating workarounds.
"""
permissions:
read: ["**/*.prose", "**/*.md"]
agent clinician:
model: opus
prompt: """
You diagnose pain points from conversations and code.
Look for: confusion, errors, questions that shouldn't need asking.
Identify gaps between what people want to express and what they can express.
Be specific about the symptom and hypothesize the underlying cause.
"""
permissions:
read: ["**/*.prose", "**/*.md", "**/*.jsonl"]
agent architect:
model: opus
persist: true
prompt: """
You design language features with these principles:
1. Self-evidence: syntax should be readable without documentation
2. Composability: features should combine without special cases
3. Minimalism: no feature without clear, repeated need
4. Consistency: follow existing patterns unless there's strong reason not to
For each proposal, specify: syntax, semantics, interaction with existing features.
"""
agent spec_writer:
model: opus
prompt: """
You write precise language specifications.
Follow the style of compiler.md: grammar rules, semantic descriptions, examples.
Be rigorous but readable. Include edge cases.
"""
permissions:
read: ["**/*.md"]
write: ["**/*.md"]
agent guardian:
model: sonnet
prompt: """
You assess backwards compatibility and risk.
Breaking levels:
0 - Fully compatible, new syntax only
1 - Soft deprecation, old syntax still works
2 - Hard deprecation, migration required
3 - Breaking change, existing programs may fail
Also assess: complexity cost, interaction risks, implementation effort.
"""
agent test_smith:
model: sonnet
prompt: """
You create test .prose files that exercise proposed features.
Include: happy path, edge cases, error conditions, interaction with existing features.
Tests should be runnable and self-documenting.
"""
permissions:
write: ["**/*.prose"]
# ============================================================
# Phase 1: Corpus Excavation
# ============================================================
parallel:
patterns = session: archaeologist
prompt: """
Analyze the .prose corpus for recurring patterns.
For each pattern found, report:
- Pattern name and description
- Frequency (how many files use it)
- Representative examples (quote actual code)
- Is this intentional idiom or compensating workaround?
Focus on patterns that appear 3+ times.
"""
context: corpus_path
pain_points = session: clinician
prompt: """
Analyze conversations and code for pain points.
Look for:
- Syntax errors that recur (what do people get wrong?)
- Questions about "how do I...?" (what's not obvious?)
- Workarounds or hacks (what's the language missing?)
- Frustrated comments or abandoned attempts
For each pain point, hypothesize what language change would help.
"""
context: { corpus_path, conversations }
current_spec = session: archaeologist
prompt: """
Summarize the current language capabilities from the spec.
List: all keywords, all constructs, all patterns explicitly supported.
Note any areas marked as "experimental" or "future".
Identify any inconsistencies or gaps in the spec itself.
"""
context: "compiler.md, prose.md"
# ============================================================
# Phase 2: Pattern Synthesis
# ============================================================
let synthesis = session: architect
prompt: """
Synthesize the excavation findings into a ranked list of potential improvements.
Categories:
1. ADDITIONS - new syntax/semantics the language lacks
2. REFINEMENTS - existing features that could be cleaner
3. CLARIFICATIONS - spec ambiguities that need resolution
4. DEPRECATIONS - features that add complexity without value
For each item:
- Problem statement (what pain does this solve?)
- Evidence (which patterns/pain points support this?)
- Rough sketch of solution
- Priority (critical / high / medium / low)
Rank by: (frequency of need) × (severity of pain) / (implementation complexity)
"""
context: { patterns, pain_points, current_spec, focus }
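# Worked scoring sketch (the numeric scales are assumptions for
# illustration only; the program leaves them to the architect):
#   a workaround seen in 12 files (frequency 12), causing severe pain
#   (severity 3 on an assumed 1-3 scale), with moderate implementation
#   complexity (2) scores 12 * 3 / 2 = 18; a rarer pattern seen 4 times
#   with the same severity and complexity scores 4 * 3 / 2 = 6, so the
#   first outranks it.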
# ============================================================
# Phase 3: Proposal Generation
# ============================================================
let top_candidates = session: architect
prompt: """
Select the top 3-5 candidates from the synthesis.
For each, produce a detailed proposal:
## Feature: [name]
### Problem
[What pain point does this solve? Include evidence.]
### Proposed Syntax
```prose
[Show the new syntax]
```
### Semantics
[Precisely describe what it means]
### Before/After
[Show how existing workarounds become cleaner]
### Interactions
[How does this interact with existing features?]
### Open Questions
[What needs further thought?]
"""
context: synthesis
# ============================================================
# Phase 4: User Checkpoint
# ============================================================
input user_review: """
## Proposed Language Improvements
{top_candidates}
---
For each proposal, indicate:
- PURSUE: Develop full spec and tests
- REFINE: Good direction but needs changes (explain)
- DEFER: Valid but not now
- REJECT: Don't want this (explain why)
You can also suggest entirely different directions.
"""
let approved = session: architect
prompt: """
Incorporate user feedback into final proposal set.
For PURSUE items: proceed as-is
For REFINE items: adjust based on feedback
For DEFER/REJECT items: note the reasoning for future reference
Output the final list of proposals to develop.
"""
context: { top_candidates, user_review }
if **there are no approved proposals**:
output result = {
status: "no-changes",
synthesis: synthesis,
proposals: top_candidates,
user_decision: user_review
}
throw "No proposals approved - halting gracefully"
# ============================================================
# Phase 5: Spec Drafting
# ============================================================
let spec_patches = approved | map:
session: spec_writer
prompt: """
Write the specification addition for this proposal.
Follow compiler.md style:
- Grammar rule (in the existing notation)
- Semantic description
- Examples
- Edge cases
- Error conditions
Output as a diff/patch that could be applied to compiler.md
"""
context: { item, current_spec }
# ============================================================
# Phase 6: Test Case Creation
# ============================================================
let test_files = approved | pmap:
session: test_smith
prompt: """
Create test .prose files for this proposal.
Include:
1. Basic usage (happy path)
2. Edge cases
3. Error conditions (should fail gracefully)
4. Interaction with existing features
Each test should be a complete, runnable .prose file.
Name format: test-{feature-name}-{N}.prose
"""
context: item
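# Example of the naming convention above, for a hypothetical feature
# called "retry": test-retry-1.prose, test-retry-2.prose, ...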
# ============================================================
# Phase 7: Risk Assessment
# ============================================================
let risks = session: guardian
prompt: """
Assess the full proposal set for risks.
For each proposal:
- Breaking level (0-3)
- Complexity cost (how much does this add to the language?)
- Interaction risks (could this combine badly with existing features?)
- Implementation effort (VM changes, spec changes, tooling)
Also assess aggregate risk:
- Are we adding too much at once?
- Is there a coherent theme or is this feature creep?
- What's the total complexity budget impact?
Recommend: PROCEED / REDUCE SCOPE / PHASE INCREMENTALLY / HALT
"""
context: { approved, spec_patches, current_spec }
if **the guardian recommends halting**:
input override: """
Guardian recommends halting:
{risks}
Override and proceed anyway? (yes/no/reduce scope)
"""
if **the user declined to override**:
output result = {
status: "halted-by-guardian",
proposals: approved,
risks: risks
}
throw "Halted by guardian recommendation"
# ============================================================
# Phase 8: Migration Guide
# ============================================================
let migration = session: spec_writer
prompt: """
Write a migration guide for existing .prose programs.
For each proposal:
- What existing code is affected?
- Before/after examples
- Deprecation timeline (if any)
- Automated migration possible?
Also:
- Version number recommendation (major/minor/patch)
- Release notes draft
"""
context: { approved, risks, corpus_path }
# ============================================================
# Output
# ============================================================
output evolution = {
status: "proposals-ready",
# What we found
patterns: patterns,
pain_points: pain_points,
synthesis: synthesis,
# What we propose
proposals: approved,
spec_patches: spec_patches,
test_files: test_files,
# Risk and migration
risks: risks,
migration: migration,
# Meta
corpus_analyzed: corpus_path,
focus_area: focus
}