* fix(bluebubbles): prefer DM resolution + hide routing markers
* fix(bluebubbles): prevent message routing to group chats when targeting phone numbers
When sending a message to a phone number like +12622102921, the
resolveChatGuidForTarget function was finding and returning a GROUP
CHAT containing that phone number instead of a direct DM chat.
The bug was in the participantMatch fallback logic, which matched ANY
chat containing the phone number as a participant, including groups.
This fix adds a check to ensure participantMatch only considers DM
chats (identified by ';-;' separator in the chat GUID). Group chats
(identified by ';+;' separator) are now explicitly excluded from
handle-based matching.
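A minimal TypeScript sketch of the tightened matching, assuming a flat
list of chats with their participant handles (shapes and names here are
illustrative, not the plugin's actual internals):

```ts
interface ChatSummary {
  guid: string; // e.g. 'iMessage;-;+12622102921' (DM) or 'iMessage;+;chat123456' (group)
  participants: string[];
}

// Only DM GUIDs (';-;') are eligible for handle-based matching.
function findDirectChatGuid(chats: ChatSummary[], handle: string): string | null {
  const dm = chats.find(
    (chat) => chat.guid.includes(";-;") && chat.participants.includes(handle),
  );
  return dm?.guid ?? null; // null when only group chats contain the handle
}
```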
If a phone number only exists in a group chat (no direct DM exists),
the function now correctly returns null, which causes the send to
fail with a clear error rather than accidentally messaging a group.
Added test case to verify this behavior.
* feat(bluebubbles): auto-create new DM chats when sending to unknown phone numbers
When sending to a phone number that doesn't have an existing chat,
the plugin now automatically creates a new chat via the
/api/v1/chat/new endpoint instead of failing with 'chatGuid not found'.
- Added createNewChatWithMessage() helper function (sketched below)
- When resolveChatGuidForTarget returns null for a handle target,
uses the new chat endpoint with addresses array and message
- Includes helpful error message if Private API isn't enabled
- Only applies to handle targets (phone numbers), not group chats
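A hedged sketch of the helper; the endpoint and request body follow the
description above, while the base URL and password-query auth are
assumptions about the BlueBubbles server setup:

```ts
async function createNewChatWithMessage(
  baseUrl: string,
  password: string, // BlueBubbles server password (assumed query-param auth)
  address: string,  // handle target, e.g. '+12622102921'
  message: string,
): Promise<unknown> {
  const url = `${baseUrl}/api/v1/chat/new?password=${encodeURIComponent(password)}`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ addresses: [address], message }),
  });
  if (!res.ok) {
    // Creating chats requires the Private API to be enabled server-side.
    throw new Error(`chat/new failed (${res.status}); is the Private API enabled?`);
  }
  return res.json();
}
```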
* fix(bluebubbles): hide internal routing metadata in cross-context markers
When sending cross-context messages via BlueBubbles, the origin marker was
exposing internal chat_guid routing info like '[from bluebubbles:chat_guid:any;-;+19257864429]'.
This adds a formatTargetDisplay() function to the BlueBubbles plugin that:
- Extracts phone numbers from chat_guid formats (iMessage;-;+1234567890 -> +1234567890)
- Normalizes handles for clean display
- Avoids returning raw chat_guid formats containing internal routing metadata
Now cross-context markers show clean identifiers like '[from +19257864429]' instead
of exposing internal routing details to recipients.
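A minimal sketch of the extraction, assuming GUIDs of the form
'service;-;handle'; the real formatTargetDisplay() also normalizes
handles and decides what to show for group GUIDs:

```ts
function formatTargetDisplay(target: string): string {
  const dm = target.match(/;-;(.+)$/); // 'iMessage;-;+19257864429' -> '+19257864429'
  return dm ? dm[1] : target;
}
```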
* fix: prevent cross-context decoration on direct message tool sends
Two fixes (both sketched below):
1. Cross-context decoration (e.g., '[from +19257864429]' prefix) was being
added to ALL messages sent to a different target, even when the agent
was just composing a new message via the message tool. This decoration
should only be applied when forwarding/relaying messages between chats.
Fix: Added skipCrossContextDecoration flag to ChannelThreadingToolContext.
The message tool now sets this flag to true, so direct sends don't get
decorated. The buildCrossContextDecoration function checks this flag
and returns null when set.
2. Aborted requests were still completing because the abort signal wasn't
being passed through the message tool execution chain.
Fix: Added abortSignal propagation from message tool → runMessageAction →
executeSendAction → sendMessage → deliverOutboundPayloads. Added abort
checks at key points in the chain to fail fast when aborted.
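A sketch of fix 1, with the flag name from this commit and illustrative
surrounding shapes:

```ts
interface ChannelThreadingToolContext {
  skipCrossContextDecoration?: boolean; // set to true by the message tool
}

function buildCrossContextDecoration(
  ctx: ChannelThreadingToolContext,
  fromTarget: string,
): string | null {
  if (ctx.skipCrossContextDecoration) return null; // direct sends stay undecorated
  return `[from ${fromTarget}]`; // forwarded/relayed messages keep the marker
}
```

And a sketch of fix 2's fail-fast propagation (function names from the
commit, bodies illustrative):

```ts
declare function deliverOutboundPayloads(
  payload: unknown,
  abortSignal?: AbortSignal,
): Promise<void>;

async function sendMessage(payload: unknown, abortSignal?: AbortSignal) {
  abortSignal?.throwIfAborted(); // fail fast if already aborted
  await deliverOutboundPayloads(payload, abortSignal);
}

async function executeSendAction(payload: unknown, abortSignal?: AbortSignal) {
  abortSignal?.throwIfAborted();
  await sendMessage(payload, abortSignal);
}
```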
Files changed:
- src/channels/plugins/types.core.ts: Added skipCrossContextDecoration field
- src/infra/outbound/outbound-policy.ts: Check skip flag before decorating
- src/agents/tools/message-tool.ts: Set skip flag, accept and pass abort signal
- src/infra/outbound/message-action-runner.ts: Pass abort signal through
- src/infra/outbound/outbound-send-service.ts: Check and pass abort signal
- src/infra/outbound/message.ts: Pass abort signal to delivery
* fix(bluebubbles): preserve friendly display names in formatTargetDisplay
When the primary model supports vision natively (e.g., Claude Opus 4.5),
skip the image understanding call entirely. The image will be injected
directly into the model context instead, saving an API call and avoiding
redundant descriptions.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Adds support for Brave Search API's freshness parameter to filter results
by discovery time:
- 'pd' - Past 24 hours
- 'pw' - Past week
- 'pm' - Past month
- 'py' - Past year
- 'YYYY-MM-DDtoYYYY-MM-DD' - Custom date range
Useful for cron jobs and monitoring tasks that need recent results.
Note: Perplexity provider ignores this parameter (Brave only).
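A hedged usage sketch against Brave's web-search endpoint (the endpoint
and X-Subscription-Token header are Brave's documented interface; the
query wiring is illustrative):

```ts
const params = new URLSearchParams({
  q: "example query",
  freshness: "pw", // past week; also 'pd', 'pm', 'py', or 'YYYY-MM-DDtoYYYY-MM-DD'
});
const res = await fetch(
  `https://api.search.brave.com/res/v1/web/search?${params}`,
  { headers: { "X-Subscription-Token": process.env.BRAVE_API_KEY ?? "" } },
);
const data = await res.json();
```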
---
🤖 AI-assisted: This PR was created with Claude (Opus). Lightly tested via
build script. The change follows existing patterns for optional parameters
(country, search_lang, ui_lang).
Venice AI is a privacy-focused AI inference provider with support for
uncensored models and access to major proprietary models via its
anonymized proxy.
This integration adds:
- Complete model catalog with 25 models:
- 15 private models (Llama, Qwen, DeepSeek, Venice Uncensored, etc.)
- 10 anonymized models (Claude, GPT-5.2, Gemini, Grok, Kimi, MiniMax)
- Auto-discovery from Venice API with fallback to static catalog
- VENICE_API_KEY environment variable support
- Interactive onboarding via 'venice-api-key' auth choice
- Model selection prompt showing all available Venice models
- Provider auto-registration when API key is detected
- Comprehensive documentation covering:
- Privacy modes (private vs anonymized)
- All 25 models with context windows and features
- Streaming, function calling, and vision support
- Model selection recommendations
Privacy modes:
- Private: Fully private, no logging (open-source models)
- Anonymized: Proxied through Venice (proprietary models)
Default model: venice/llama-3.3-70b (good balance of capability + privacy)
Venice API: https://api.venice.ai/api/v1 (OpenAI-compatible)
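Since the endpoint is OpenAI-compatible, a minimal smoke test can go
straight at it (request wiring illustrative; assumes VENICE_API_KEY is
set):

```ts
const res = await fetch("https://api.venice.ai/api/v1/chat/completions", {
  method: "POST",
  headers: {
    Authorization: `Bearer ${process.env.VENICE_API_KEY}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({
    model: "llama-3.3-70b", // surfaced as venice/llama-3.3-70b in the catalog
    messages: [{ role: "user", content: "Hello" }],
  }),
});
console.log(await res.json());
```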
* fix(ui): enable save button only when config has changes
The save button in the Control UI config editor was not properly gated
on whether actual changes had been made. This adds:
- `configRawOriginal` state to track the original raw config for comparison
- Change detection for both form mode (via computeDiff) and raw mode
- `hasChanges` check in canSave/canApply logic (sketched below)
- Set `configFormDirty` when raw mode edits occur
- Handle raw mode UI correctly (badge shows "Unsaved changes", no diff panel)
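A sketch of the gating, with state names from this commit and
illustrative shapes:

```ts
function hasUnsavedChanges(state: {
  mode: "form" | "raw";
  configRaw: string;
  configRawOriginal: string;
  formDiff: unknown[]; // computeDiff() output in form mode
}): boolean {
  return state.mode === "raw"
    ? state.configRaw !== state.configRawOriginal
    : state.formDiff.length > 0;
}
// canSave/canApply additionally require hasUnsavedChanges(...) to be true.
```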
Fixes #1609
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
* feat(gateway-tool): add config.patch action for safe partial config updates
Exposes the existing config.patch server method to agents, allowing safe
partial config updates that merge with the existing config instead of
replacing it (sketched below).
- Add config.patch to GATEWAY_ACTIONS in gateway tool
- Add restart + sentinel logic to config.patch server method
- Extend ConfigPatchParamsSchema with sessionKey, note, restartDelayMs
- Add unit test for config.patch gateway tool action
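A hedged sketch of an agent invoking the action; the params mirror this
commit's schema extension, while the call shape and patch contents are
hypothetical:

```ts
declare function runGatewayAction(req: {
  action: string;
  params: Record<string, unknown>;
}): Promise<unknown>;

await runGatewayAction({
  action: "config.patch",
  params: {
    patch: { logging: { level: "debug" } }, // hypothetical patch; merged, not replaced
    sessionKey: "example-session",          // hypothetical value
    note: "turn on debug logging",
    restartDelayMs: 2000,
  },
});
```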
Closes #1617
---------
Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
* feat: Add Ollama provider with automatic model discovery
- Add Ollama provider builder with automatic model detection
- Discover available models from the local Ollama instance via the /api/tags API (sketched below)
- Make resolveImplicitProviders async to support dynamic model discovery
- Add comprehensive Ollama documentation with setup and usage guide
- Add tests for Ollama provider integration
- Update provider index and model providers documentation
Closes #1531
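A minimal sketch of the discovery call; /api/tags and the models[].name
shape are Ollama's documented API, the rest is illustrative:

```ts
async function discoverOllamaModels(
  baseUrl = "http://localhost:11434",
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) return []; // no local Ollama: contribute no models
  const data = (await res.json()) as { models?: Array<{ name: string }> };
  return (data.models ?? []).map((m) => m.name);
}
```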
* fix: Correct Ollama provider type definitions and error handling
- Fix input property type to match ModelDefinitionConfig
- Import ModelDefinitionConfig type properly
- Fix error template literal to use String() for type safety
- Simplify return type signature of discoverOllamaModels
* fix: Suppress unhandled promise warnings from ensureClawdbotModelsJson in tests
- Cast unused promise returns to 'unknown' to suppress TypeScript warnings
- Tests that don't await the promise do so intentionally
- This fixes the failing test suite caused by unawaited async calls
* fix: Skip Ollama model discovery during tests
- Check for VITEST or NODE_ENV=test before making HTTP requests
- Prevents test timeouts and hangs from network calls
- Ollama discovery will still work in production/normal usage
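The guard itself is small; a sketch wrapping the discovery helper from
the earlier sketch (placement illustrative):

```ts
declare function discoverOllamaModels(): Promise<string[]>;

async function discoverOllamaModelsGuarded(): Promise<string[]> {
  if (process.env.VITEST || process.env.NODE_ENV === "test") {
    return []; // never touch the network from a test run
  }
  return discoverOllamaModels();
}
```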
* fix: Set VITEST environment variable in test setup
- Ensures Ollama discovery is skipped in all test runs
- Prevents network calls during tests that could cause timeouts
* test: Temporarily skip Ollama provider tests to diagnose CI failures
* fix: Make Ollama provider opt-in to avoid breaking existing tests
**Root Cause:**
The Ollama provider was being added to ALL configurations by default
(with a fallback API key of 'ollama-local'), which broke tests that
expected NO providers when no API keys were configured.
**Solution:**
- Removed the default fallback API key for Ollama
- Ollama provider now requires explicit configuration via one of the following (sketched below):
- OLLAMA_API_KEY environment variable, OR
- Ollama profile in auth store
- Updated documentation to reflect the explicit configuration requirement
- Added a test to verify Ollama is not added by default
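A sketch of the opt-in resolution; OLLAMA_API_KEY is the variable named
above, the auth-store lookup is illustrative:

```ts
function resolveOllamaApiKey(authStoreProfileKey?: string): string | undefined {
  // No 'ollama-local' fallback anymore: with nothing configured,
  // the provider is simply not registered.
  return process.env.OLLAMA_API_KEY ?? authStoreProfileKey;
}
```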
This fixes all 4 failing test suites:
- checks (node, test, pnpm test)
- checks (bun, test, bunx vitest run)
- checks-windows (node, test, pnpm test)
- checks-macos (test, pnpm test)
Closes #1531
* fix: detect Anthropic 'Request size exceeds model context window' as context overflow
Anthropic now returns 'Request size exceeds model context window' instead of
the previously detected 'prompt is too long' format. This new error message
was not recognized by isContextOverflowError(), causing auto-compaction to
NOT trigger. Users would see the raw error twice without any recovery attempt.
Changes:
- Add 'exceeds model context window' and 'request size exceeds' to
isContextOverflowError() detection patterns
- Add tests that fail without the fix, verifying both the raw error
string and the JSON-wrapped format from Anthropic's API
- Add test for formatAssistantErrorText to ensure the friendly
'Context overflow' message is shown instead of the raw error
Note: The upstream pi-ai package (@mariozechner/pi-ai) also needs a fix
in its OVERFLOW_PATTERNS regex: /exceeds the context window/i should be
changed to /exceeds.*context window/i to match both 'the' and 'model'
variants for triggering auto-compaction retry.
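A sketch of the widened detection, assuming simple substring matching
over the lowercased error text (the real isContextOverflowError() may
structure this differently):

```ts
const OVERFLOW_MARKERS = [
  "prompt is too long",           // previously detected Anthropic wording
  "exceeds model context window", // new wording covered by this fix
  "request size exceeds",
];

function isContextOverflowError(message: string): boolean {
  const text = message.toLowerCase();
  return OVERFLOW_MARKERS.some((marker) => text.includes(marker));
}
```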
* fix(tests): remove unused imports and helper from test files
Remove WorkspaceBootstrapFile references and _makeFile helper that were
incorrectly copied from another test file. These caused type errors and
were unrelated to the context overflow detection tests.
* fix: trigger auto-compaction on context overflow promptError
When the LLM rejects a request with a context overflow error that surfaces
as a promptError (thrown exception rather than streamed error), the existing
auto-compaction in pi-coding-agent never triggers. This happens because the
error bypasses the agent's message_end → agent_end → _checkCompaction path.
This fix adds a fallback compaction attempt directly in the run loop
(sketched after this list):
- Detects context overflow in promptError (excluding compaction_failure)
- Calls compactEmbeddedPiSessionDirect (bypassing lane queues since already in-lane)
- Retries the prompt after successful compaction
- Limits to one compaction attempt per run to prevent infinite loops
Fixes: context overflow errors shown to user without auto-compaction attempt
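A sketch of the run-loop fallback; the function names match the commit,
the control flow is illustrative:

```ts
declare function runPrompt(): Promise<unknown>;
declare function compactEmbeddedPiSessionDirect(): Promise<void>;
declare function isContextOverflowError(message: string): boolean;

async function runWithCompactionFallback(): Promise<unknown> {
  let compacted = false;
  while (true) {
    try {
      return await runPrompt();
    } catch (err) {
      const msg = String(err);
      const overflow =
        isContextOverflowError(msg) && !msg.includes("compaction_failure");
      if (!overflow || compacted) throw err;
      compacted = true; // at most one compaction attempt per run
      await compactEmbeddedPiSessionDirect(); // direct call: already in-lane
      // loop around to retry the prompt after successful compaction
    }
  }
}
```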
* style: format compact.ts and run.ts with oxfmt
* fix: tighten context overflow match (#1627) (thanks @rodrigouroz)
---------
Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
Added a dedicated Anthropic payload logger that writes exact request
JSON (as sent) plus per-run usage stats (input/output/cache read/write)
to a standalone JSONL file, gated by an env flag.
Changes
- New logger: src/agents/anthropic-payload-log.ts (writes
logs/anthropic-payload.jsonl under the state dir, optional override via
env).
- Hooked into embedded runs to wrap the stream function and record
usage: src/agents/pi-embedded-runner/run/attempt.ts.
How to enable
- CLAWDBOT_ANTHROPIC_PAYLOAD_LOG=1
- Optional:
CLAWDBOT_ANTHROPIC_PAYLOAD_LOG_FILE=/path/to/anthropic-payload.jsonl
What you’ll get (JSONL)
- stage: "request" with payload (exact Anthropic params) +
payloadDigest
- stage: "usage" with usage
(input/output/cacheRead/cacheWrite/totalTokens/etc.)
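For illustration, a usage record might look like the line below (field
names follow the description above; the numbers are made up):

```ts
const exampleUsageLine = JSON.stringify({
  stage: "usage",
  usage: { input: 1200, output: 350, cacheRead: 800, cacheWrite: 0, totalTokens: 2350 },
});
// One such line is appended to logs/anthropic-payload.jsonl per run.
```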
Notes
- Usage is taken from the last assistant message in the run; if the
run fails before usage is present, you’ll only see an error field.
Files touched
- src/agents/anthropic-payload-log.ts
- src/agents/pi-embedded-runner/run/attempt.ts
Tests not run.
Add automatic discovery of AWS Bedrock models using ListFoundationModels API.
When AWS credentials are detected, models that support streaming and text output
are automatically discovered and made available.
- Add @aws-sdk/client-bedrock dependency
- Add discoverBedrockModels() with caching (default 1 hour)
- Add resolveImplicitBedrockProvider() for auto-registration
- Add BedrockDiscoveryConfig for optional filtering by provider/region
- Filter to active, streaming, text-output models only
- Update docs/bedrock.md with auto-discovery documentation
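A minimal sketch of the discovery filter using the real
@aws-sdk/client-bedrock API; the caching and config-driven filtering
described above are omitted:

```ts
import {
  BedrockClient,
  ListFoundationModelsCommand,
} from "@aws-sdk/client-bedrock";

async function discoverBedrockModels(region = "us-east-1"): Promise<string[]> {
  const client = new BedrockClient({ region });
  const { modelSummaries = [] } = await client.send(
    new ListFoundationModelsCommand({}),
  );
  return modelSummaries
    .filter(
      (m) =>
        m.modelLifecycle?.status === "ACTIVE" && // active models only
        m.responseStreamingSupported === true && // must support streaming
        m.outputModalities?.includes("TEXT"),    // text output only
    )
    .map((m) => m.modelId ?? "")
    .filter((id) => id.length > 0);
}
```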