Commit Graph

1301 Commits

Author SHA1 Message Date
Peter Steinberger
bfa57aae44 fix: log env opts and collapse duplicate blocks 2026-01-25 10:22:53 +00:00
Tyler Yust
0f662c2935 fix(bluebubbles): route phone-number targets to direct chats; prevent internal IDs leaking in cross-context prefix (#1751)
* fix(bluebubbles): prefer DM resolution + hide routing markers

* fix(bluebubbles): prevent message routing to group chats when targeting phone numbers

When sending a message to a phone number like +12622102921, the
resolveChatGuidForTarget function was finding and returning a GROUP
CHAT containing that phone number instead of a direct DM chat.

The bug was in the participantMatch fallback logic, which matched ANY
chat containing the phone number as a participant, including groups.

This fix adds a check so that participantMatch only considers DM
chats (identified by the ';-;' separator in the chat GUID). Group chats
(identified by the ';+;' separator) are now explicitly excluded from
handle-based matching.

If a phone number only exists in a group chat (no direct DM exists),
the function now correctly returns null, which causes the send to
fail with a clear error rather than accidentally messaging a group.

Added test case to verify this behavior.
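
A minimal sketch of the DM-only matching described above; the separator convention comes from this commit, while the types and helper names are illustrative:

```ts
// Illustrative shape; the plugin's real chat type carries more fields.
interface ChatSummary {
  guid: string;           // e.g. "iMessage;-;+12622102921" or "iMessage;+;chat123"
  participants: string[]; // normalized handles
}

// BlueBubbles chat GUIDs encode the chat style in the separator:
// ";-;" marks a direct (1:1) chat, ";+;" marks a group chat.
function isDirectChatGuid(guid: string): boolean {
  return guid.includes(";-;");
}

function resolveDirectChat(chats: ChatSummary[], handle: string): ChatSummary | null {
  // A group that merely contains the handle must not match; a handle with
  // no DM resolves to null so the send fails with a clear error.
  return chats.find((c) => isDirectChatGuid(c.guid) && c.participants.includes(handle)) ?? null;
}
```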

* feat(bluebubbles): auto-create new DM chats when sending to unknown phone numbers

When sending to a phone number that doesn't have an existing chat, the
plugin now automatically creates a new chat via the /api/v1/chat/new
endpoint instead of failing with 'chatGuid not found'.

- Added createNewChatWithMessage() helper function
- When resolveChatGuidForTarget returns null for a handle target,
  uses the new chat endpoint with addresses array and message
- Includes helpful error message if Private API isn't enabled
- Only applies to handle targets (phone numbers), not group chats
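
A hedged sketch of that auto-create path; the /api/v1/chat/new endpoint and the addresses-plus-message body come from this commit, while the base URL handling and error text are placeholders:

```ts
// Sketch only: auth handling and response parsing are elided.
async function createNewChatWithMessage(
  baseUrl: string,
  address: string,
  message: string,
): Promise<unknown> {
  const res = await fetch(`${baseUrl}/api/v1/chat/new`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ addresses: [address], message }),
  });
  if (!res.ok) {
    // Chat creation needs the BlueBubbles Private API, per the commit.
    throw new Error(`chat/new failed (${res.status}); is the Private API enabled?`);
  }
  return res.json();
}
```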

* fix(bluebubbles): hide internal routing metadata in cross-context markers

When sending cross-context messages via BlueBubbles, the origin marker was
exposing internal chat_guid routing info like '[from bluebubbles:chat_guid:any;-;+19257864429]'.

This adds a formatTargetDisplay() function to the BlueBubbles plugin that:
- Extracts phone numbers from chat_guid formats (iMessage;-;+1234567890 -> +1234567890)
- Normalizes handles for clean display
- Avoids returning raw chat_guid formats containing internal routing metadata

Now cross-context markers show clean identifiers like '[from +19257864429]' instead
of exposing internal routing details to recipients.
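
An illustrative sketch of that behavior (the plugin's actual extraction rules may differ):

```ts
function formatTargetDisplay(target: string): string {
  // "iMessage;-;+19257864429" or "chat_guid:any;-;+19257864429" -> "+19257864429"
  const dm = target.match(/;-;(.+)$/);
  if (dm) return dm[1];
  // Group GUIDs (";+;") carry internal routing ids; show a neutral label
  // instead of leaking them.
  if (target.includes(";+;")) return "group chat";
  return target; // plain handles and friendly display names pass through
}
```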

* fix: prevent cross-context decoration on direct message tool sends

Two fixes:

1. Cross-context decoration (e.g., '[from +19257864429]' prefix) was being
   added to ALL messages sent to a different target, even when the agent
   was just composing a new message via the message tool. This decoration
   should only be applied when forwarding/relaying messages between chats.

   Fix: Added skipCrossContextDecoration flag to ChannelThreadingToolContext.
   The message tool now sets this flag to true, so direct sends don't get
   decorated. The buildCrossContextDecoration function checks this flag
   and returns null when set.

2. Aborted requests were still completing because the abort signal wasn't
   being passed through the message tool execution chain.

   Fix: Added abortSignal propagation from message tool → runMessageAction →
   executeSendAction → sendMessage → deliverOutboundPayloads. Added abort
   checks at key points in the chain to fail fast when aborted.
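
A compressed sketch of both fixes with simplified signatures (the real send chain has more hops, per the file list below):

```ts
// The real context type carries much more than this flag.
interface ChannelThreadingToolContext {
  skipCrossContextDecoration?: boolean; // set to true by the message tool
}

function buildCrossContextDecoration(
  ctx: ChannelThreadingToolContext,
  origin: string,
): string | null {
  // Direct sends composed via the message tool are not prefixed;
  // decoration is only for forwarded/relayed messages.
  if (ctx.skipCrossContextDecoration) return null;
  return `[from ${origin}]`;
}

declare function deliverOutboundPayloads(payload: string, signal?: AbortSignal): Promise<void>;

async function sendMessage(payload: string, signal?: AbortSignal): Promise<void> {
  // Fail fast at each hop instead of letting an aborted request complete.
  signal?.throwIfAborted();
  await deliverOutboundPayloads(payload, signal);
}
```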

Files changed:
- src/channels/plugins/types.core.ts: Added skipCrossContextDecoration field
- src/infra/outbound/outbound-policy.ts: Check skip flag before decorating
- src/agents/tools/message-tool.ts: Set skip flag, accept and pass abort signal
- src/infra/outbound/message-action-runner.ts: Pass abort signal through
- src/infra/outbound/outbound-send-service.ts: Check and pass abort signal
- src/infra/outbound/message.ts: Pass abort signal to delivery

* fix(bluebubbles): preserve friendly display names in formatTargetDisplay
2026-01-25 10:03:08 +00:00
Tyler Yust
fdecf5c59a fix: skip image understanding when primary model has vision
When the primary model supports vision natively (e.g., Claude Opus 4.5),
skip the image understanding call entirely. The image will be injected
directly into the model context instead, saving an API call and avoiding
redundant descriptions.
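
A minimal sketch of that gating, assuming a model descriptor that lists input modalities (the helper names are stand-ins):

```ts
interface ModelInfo {
  id: string;
  input: string[]; // e.g. ["text", "image"]
}

declare function describeImage(image: Buffer): Promise<string>;

async function maybeDescribeImage(model: ModelInfo, image: Buffer): Promise<string | null> {
  // Vision-capable primary models get the image injected directly into
  // context, so the separate image-understanding call is skipped.
  if (model.input.includes("image")) return null;
  return describeImage(image); // extra API call only for text-only models
}
```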

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 09:57:19 +00:00
Peter Steinberger
c6cdbb630c fix: harden exec spawn fallback 2026-01-25 06:37:39 +00:00
Rohan Nagpal
06a7e1e8ce Telegram: threaded conversation support (#1597)
* Telegram: isolate dm topic sessions

* Tests: cap vitest workers

* Tests: cap Vitest workers on CI macOS

* Tests: avoid timer-based pi-ai stream mock

* Tests: increase embedded runner timeout

* fix: harden telegram dm thread handling (#1597) (thanks @rohannagpal)

---------

Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-01-25 04:48:51 +00:00
Peter Steinberger
9afde64e26 fix: validate web_search freshness (#1688) (thanks @JonUleis) 2026-01-25 04:23:25 +00:00
Jon Uleis
8682524da3 feat(web_search): add freshness parameter for Brave time filtering
Adds support for Brave Search API's freshness parameter to filter results
by discovery time:
- 'pd' - Past 24 hours
- 'pw' - Past week
- 'pm' - Past month
- 'py' - Past year
- 'YYYY-MM-DDtoYYYY-MM-DD' - Custom date range

Useful for cron jobs and monitoring tasks that need recent results.

Note: Perplexity provider ignores this parameter (Brave only).
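
A sketch of threading the freshness value into a Brave query string; buildBraveSearchUrl is a hypothetical helper, not this repo's code:

```ts
type Freshness = "pd" | "pw" | "pm" | "py" | `${string}to${string}`;

function buildBraveSearchUrl(query: string, freshness?: Freshness): string {
  const params = new URLSearchParams({ q: query });
  // Brave accepts pd/pw/pm/py or a YYYY-MM-DDtoYYYY-MM-DD range;
  // the Perplexity provider simply never reads this option.
  if (freshness) params.set("freshness", freshness);
  return `https://api.search.brave.com/res/v1/web/search?${params}`;
}

// buildBraveSearchUrl("release notes", "pw") // past-week results only
```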

---
🤖 AI-assisted: This PR was created with Claude (Opus). Lightly tested via
build script. The change follows existing patterns for optional parameters
(country, search_lang, ui_lang).
2026-01-25 04:19:58 +00:00
Peter Steinberger
3a35d313d9 fix: signal reactions 2026-01-25 03:24:44 +00:00
Peter Steinberger
b1a555da13 fix: skip tailscale dns probe when off 2026-01-25 02:51:20 +00:00
Peter Steinberger
c3e777e3e1 fix: keep raw config edits scoped to config view (#1673) (thanks @Glucksberg) 2026-01-25 02:48:07 +00:00
Peter Steinberger
2e3b14187b fix: stabilize venice model discovery 2026-01-25 02:43:08 +00:00
jonisjongithub
7540d1e8c1 feat: add Venice AI provider integration
Venice AI is a privacy-focused AI inference provider with support for
uncensored models and access to major proprietary models via its
anonymized proxy.

This integration adds:

- Complete model catalog with 25 models:
  - 15 private models (Llama, Qwen, DeepSeek, Venice Uncensored, etc.)
  - 10 anonymized models (Claude, GPT-5.2, Gemini, Grok, Kimi, MiniMax)
- Auto-discovery from Venice API with fallback to static catalog
- VENICE_API_KEY environment variable support
- Interactive onboarding via 'venice-api-key' auth choice
- Model selection prompt showing all available Venice models
- Provider auto-registration when API key is detected
- Comprehensive documentation covering:
  - Privacy modes (private vs anonymized)
  - All 25 models with context windows and features
  - Streaming, function calling, and vision support
  - Model selection recommendations

Privacy modes:
- Private: Fully private, no logging (open-source models)
- Anonymized: Proxied through Venice (proprietary models)

Default model: venice/llama-3.3-70b (good balance of capability + privacy)
Venice API: https://api.venice.ai/api/v1 (OpenAI-compatible)
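
Because the endpoint is OpenAI-compatible, a minimal direct call looks like the sketch below; it bypasses the repo's provider plumbing and assumes the raw API model id drops the venice/ catalog prefix:

```ts
async function veniceChat(prompt: string): Promise<string> {
  const res = await fetch("https://api.venice.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.VENICE_API_KEY}`, // env var from this commit
    },
    body: JSON.stringify({
      model: "llama-3.3-70b", // the default model named above
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Venice API error: ${res.status}`);
  const data = (await res.json()) as { choices: { message: { content: string } }[] };
  return data.choices[0].message.content;
}
```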
2026-01-25 01:11:57 +00:00
Peter Steinberger
8e159ab0b7 fix: follow up config.patch restarts/docs/tests (#1653)
* fix: land config.patch restarts/docs/tests (#1624) (thanks @Glucksberg)

* docs: update changelog entry for config.patch follow-up (#1653) (thanks @Glucksberg)
2026-01-24 23:33:13 +00:00
Glucksberg
60661441b1 feat(gateway-tool): add config.patch action for safe partial config updates (#1624)
* fix(ui): enable save button only when config has changes

The save button in the Control UI config editor was not properly gating
on whether actual changes were made. This adds:
- `configRawOriginal` state to track the original raw config for comparison
- Change detection for both form mode (via computeDiff) and raw mode
- `hasChanges` check in canSave/canApply logic
- Set `configFormDirty` when raw mode edits occur
- Handle raw mode UI correctly (badge shows "Unsaved changes", no diff panel)

Fixes #1609

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* feat(gateway-tool): add config.patch action for safe partial config updates

Exposes the existing config.patch server method to agents, allowing safe
partial config updates that merge with existing config instead of replacing it.

- Add config.patch to GATEWAY_ACTIONS in gateway tool
- Add restart + sentinel logic to config.patch server method
- Extend ConfigPatchParamsSchema with sessionKey, note, restartDelayMs
- Add unit test for config.patch gateway tool action

Closes #1617
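
A hedged sketch of the merge-instead-of-replace semantics; deepMerge here is illustrative, not the server's implementation:

```ts
type Json = { [key: string]: unknown };

function deepMerge(base: Json, patch: Json): Json {
  const out: Json = { ...base };
  for (const [key, value] of Object.entries(patch)) {
    const prev = out[key];
    if (
      value && typeof value === "object" && !Array.isArray(value) &&
      prev && typeof prev === "object" && !Array.isArray(prev)
    ) {
      out[key] = deepMerge(prev as Json, value as Json); // recurse into objects
    } else {
      out[key] = value; // scalars and arrays overwrite
    }
  }
  return out;
}

// A partial patch touches only the keys it names:
// deepMerge({ a: { x: 1, y: 2 }, b: 3 }, { a: { y: 9 } })
//   -> { a: { x: 1, y: 9 }, b: 3 }
```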

---------

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-24 23:30:32 +00:00
Abhay
51e3d16be9 feat: Add Ollama provider with automatic model discovery (#1606)
* feat: Add Ollama provider with automatic model discovery

- Add Ollama provider builder with automatic model detection
- Discover available models from local Ollama instance via /api/tags API
- Make resolveImplicitProviders async to support dynamic model discovery
- Add comprehensive Ollama documentation with setup and usage guide
- Add tests for Ollama provider integration
- Update provider index and model providers documentation

Closes #1531
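
A sketch of discovery against Ollama's /api/tags endpoint, with the response shape trimmed to what the example needs:

```ts
interface OllamaTagsResponse {
  models: { name: string }[];
}

async function discoverOllamaModels(
  baseUrl = "http://localhost:11434",
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) return []; // no reachable Ollama instance -> no models
  const data = (await res.json()) as OllamaTagsResponse;
  return data.models.map((m) => m.name); // e.g. ["llama3.2:latest"]
}
```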

* fix: Correct Ollama provider type definitions and error handling

- Fix input property type to match ModelDefinitionConfig
- Import ModelDefinitionConfig type properly
- Fix error template literal to use String() for type safety
- Simplify return type signature of discoverOllamaModels

* fix: Suppress unhandled promise warnings from ensureClawdbotModelsJson in tests

- Cast unused promise returns to 'unknown' to suppress TypeScript warnings
- Tests that don't await the promise do so intentionally
- This fixes the failing test suite caused by unawaited async calls

* fix: Skip Ollama model discovery during tests

- Check for VITEST or NODE_ENV=test before making HTTP requests
- Prevents test timeouts and hangs from network calls
- Ollama discovery will still work in production/normal usage
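
A minimal sketch of that guard, assuming Vitest exposes the VITEST env variable in its workers:

```ts
function shouldSkipDiscovery(): boolean {
  // Skip network-bound model discovery under any test runner.
  return process.env.VITEST !== undefined || process.env.NODE_ENV === "test";
}
```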

* fix: Set VITEST environment variable in test setup

- Ensures Ollama discovery is skipped in all test runs
- Prevents network calls during tests that could cause timeouts

* test: Temporarily skip Ollama provider tests to diagnose CI failures

* fix: Make Ollama provider opt-in to avoid breaking existing tests

**Root Cause:**
The Ollama provider was being added to ALL configurations by default
(with a fallback API key of 'ollama-local'), which broke tests that
expected NO providers when no API keys were configured.

**Solution:**
- Removed the default fallback API key for Ollama
- Ollama provider now requires explicit configuration via:
  - OLLAMA_API_KEY environment variable, OR
  - Ollama profile in auth store
- Updated documentation to reflect the explicit configuration requirement
- Added a test to verify Ollama is not added by default

This fixes all 4 failing test suites:
- checks (node, test, pnpm test)
- checks (bun, test, bunx vitest run)
- checks-windows (node, test, pnpm test)
- checks-macos (test, pnpm test)

Closes #1531
2026-01-24 22:38:52 +00:00
Peter Steinberger
dd150d69c6 fix: use active auth profile for auto-compaction 2026-01-24 22:23:49 +00:00
Rodrigo Uroz
9ceac415c5 fix: auto-compact on context overflow promptError before returning error (#1627)
* fix: detect Anthropic 'Request size exceeds model context window' as context overflow

Anthropic now returns 'Request size exceeds model context window' instead of
the previously detected 'prompt is too long' format. This new error message
was not recognized by isContextOverflowError(), causing auto-compaction to
NOT trigger. Users would see the raw error twice without any recovery attempt.

Changes:
- Add 'exceeds model context window' and 'request size exceeds' to
  isContextOverflowError() detection patterns
- Add tests that fail without the fix, verifying both the raw error
  string and the JSON-wrapped format from Anthropic's API
- Add test for formatAssistantErrorText to ensure the friendly
  'Context overflow' message is shown instead of the raw error

Note: The upstream pi-ai package (@mariozechner/pi-ai) also needs a fix
in its OVERFLOW_PATTERNS regex: /exceeds the context window/i should be
changed to /exceeds.*context window/i to match both 'the' and 'model'
variants for triggering auto-compaction retry.
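
A sketch of the widened detection; the pattern list mirrors the strings named in this commit, not the repo's full isContextOverflowError:

```ts
const OVERFLOW_PATTERNS = [
  /prompt is too long/i,           // older Anthropic wording
  /exceeds model context window/i, // current Anthropic wording
  /request size exceeds/i,
];

function isContextOverflowError(message: string): boolean {
  return OVERFLOW_PATTERNS.some((re) => re.test(message));
}

// Matches both the raw string and the JSON-wrapped API form:
// isContextOverflowError("Request size exceeds model context window") // true
```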

* fix(tests): remove unused imports and helper from test files

Remove WorkspaceBootstrapFile references and _makeFile helper that were
incorrectly copied from another test file. These caused type errors and
were unrelated to the context overflow detection tests.

* fix: trigger auto-compaction on context overflow promptError

When the LLM rejects a request with a context overflow error that surfaces
as a promptError (thrown exception rather than streamed error), the existing
auto-compaction in pi-coding-agent never triggers. This happens because the
error bypasses the agent's message_end → agent_end → _checkCompaction path.

This fix adds a fallback compaction attempt directly in the run loop:
- Detects context overflow in promptError (excluding compaction_failure)
- Calls compactEmbeddedPiSessionDirect (bypassing lane queues since already in-lane)
- Retries the prompt after successful compaction
- Limits to one compaction attempt per run to prevent infinite loops

Fixes: context overflow errors shown to user without auto-compaction attempt
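
A simplified sketch of the fallback under the constraints above (one attempt per run, compaction failures excluded); the function shapes are stand-ins:

```ts
declare function isContextOverflowError(message: string): boolean;

async function runWithOverflowFallback(
  prompt: () => Promise<void>,
  compact: () => Promise<boolean>, // compactEmbeddedPiSessionDirect stand-in
): Promise<void> {
  let compacted = false;
  for (;;) {
    try {
      await prompt();
      return;
    } catch (err) {
      const msg = err instanceof Error ? err.message : String(err);
      // One compaction attempt per run, and never for errors that came
      // from a failed compaction itself.
      if (compacted || !isContextOverflowError(msg) || msg.includes("compaction_failure")) {
        throw err;
      }
      compacted = true;
      if (!(await compact())) throw err; // compaction failed -> surface original error
    }
  }
}
```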

* style: format compact.ts and run.ts with oxfmt

* fix: tighten context overflow match (#1627) (thanks @rodrigouroz)

---------

Co-authored-by: Claude <claude@anthropic.com>
Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-01-24 22:09:24 +00:00
Peter Steinberger
a4f6b3528a fix: cover elevated ask approvals (#1636) 2026-01-24 21:12:46 +00:00
Ivan Casco
fe7436a1f6 fix(exec): only set security=full when elevated mode is full (#1616) 2026-01-24 20:55:21 +00:00
Peter Steinberger
390b730b37 fix: unify reasoning tags + agent ids (#1613) (thanks @kyleok) (#1629) 2026-01-24 19:56:02 +00:00
Peter Steinberger
876bbb742a test: skip opencode alpha GLM in live suite 2026-01-24 14:08:16 +00:00
Peter Steinberger
386d21b6d1 fix: sync tests with config normalization 2026-01-24 13:32:26 +00:00
Peter Steinberger
d0e21f05a6 fix: drop opencode alpha GLM model 2026-01-24 13:19:55 +00:00
Peter Steinberger
4b6cdd1d3c fix: normalize session keys and outbound mirroring 2026-01-24 11:57:11 +00:00
Luke
be1cdc9370 fix(agents): treat provider request-aborted as timeout for fallback (#1576)
* fix(agents): treat request-aborted as timeout for fallback

* test(e2e): add provider timeout fallback
2026-01-24 11:27:24 +00:00
Peter Steinberger
ab000398be fix: resolve session ids in session tools 2026-01-24 11:09:11 +00:00
Peter Steinberger
a6ddd82a14 feat: add TTS hint to system prompt 2026-01-24 10:25:42 +00:00
Peter Steinberger
8ea8801d06 fix: show tool error fallback for tool-only replies 2026-01-24 08:17:50 +00:00
Peter Steinberger
c97bf23a4a fix: gate openai reasoning downgrade on model switches (#1562) (thanks @roshanasingh4) 2026-01-24 08:16:42 +00:00
Peter Steinberger
d9a467fe3b feat: move TTS into core (#1559) (thanks @Glucksberg) 2026-01-24 08:00:44 +00:00
Roshan Singh
202d7af855 Fix OpenAI Responses transcript after model switch 2026-01-24 07:58:25 +00:00
Tak hoffman
ff52aec38e Agents: drop bash tool alias 2026-01-24 07:44:04 +00:00
Peter Steinberger
15620b1092 fix: guard tool allowlists with warnings 2026-01-24 07:38:42 +00:00
Peter Steinberger
6a60d47c53 fix: cover slack open policy gating (#1563) (thanks @itsjaydesu) 2026-01-24 07:09:26 +00:00
Andrii
7e498ab94a anthropic-payload-log mvp
Added a dedicated Anthropic payload logger that writes exact request
JSON (as sent) plus per-run usage stats (input/output/cache read/write)
to a standalone JSONL file, gated by an env flag.

Changes

- New logger: src/agents/anthropic-payload-log.ts (writes
  logs/anthropic-payload.jsonl under the state dir, optional override via env).
- Hooked into embedded runs to wrap the stream function and record usage:
  src/agents/pi-embedded-runner/run/attempt.ts.

How to enable

- CLAWDBOT_ANTHROPIC_PAYLOAD_LOG=1
- Optional: CLAWDBOT_ANTHROPIC_PAYLOAD_LOG_FILE=/path/to/anthropic-payload.jsonl

What you'll get (JSONL)

- stage: "request" with payload (exact Anthropic params) + payloadDigest
- stage: "usage" with usage (input/output/cacheRead/cacheWrite/totalTokens/etc.)

Notes

- Usage is taken from the last assistant message in the run; if the run
  fails before usage is present, you'll only see an error field.

Files touched

- src/agents/anthropic-payload-log.ts
- src/agents/pi-embedded-runner/run/attempt.ts

Tests not run.
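
A minimal sketch of an env-gated JSONL logger like the one described; the env names and default path follow the commit, the entry shape is abridged:

```ts
import { appendFileSync } from "node:fs";

const LOG_FILE =
  process.env.CLAWDBOT_ANTHROPIC_PAYLOAD_LOG_FILE ?? "logs/anthropic-payload.jsonl";

function logPayload(stage: "request" | "usage", body: Record<string, unknown>): void {
  if (process.env.CLAWDBOT_ANTHROPIC_PAYLOAD_LOG !== "1") return; // env-gated
  const entry = { stage, ts: new Date().toISOString(), ...body };
  appendFileSync(LOG_FILE, JSON.stringify(entry) + "\n"); // one JSON object per line
}

// Per the stages above:
// logPayload("request", { payload, payloadDigest }); // exact Anthropic params
// logPayload("usage", { usage });                    // token counts from the run
```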
2026-01-24 06:43:51 +00:00
Peter Steinberger
66eec295b8 perf: stabilize system prompt time 2026-01-24 06:24:04 +00:00
Peter Steinberger
675019cb6f fix: trigger fallback on auth profile exhaustion 2026-01-24 06:14:23 +00:00
Peter Steinberger
9d98e55ed5 fix: enforce group tool policy inheritance for subagents (#1557) (thanks @adam91holt) 2026-01-24 05:49:39 +00:00
Adam Holt
c07949a99c Channels: add per-group tool policies 2026-01-24 05:49:39 +00:00
Peter Steinberger
eba0625a70 fix: ignore identity template placeholders 2026-01-24 05:35:50 +00:00
Peter Steinberger
886752217d fix: gate diagnostic logs behind verbose 2026-01-24 05:06:42 +00:00
Peter Steinberger
5662a9cdfc fix: honor tools.exec ask/security in approvals 2026-01-24 04:53:44 +00:00
Peter Steinberger
c3cb26f7ca feat: add node browser proxy routing 2026-01-24 04:21:47 +00:00
Peter Steinberger
a4e57d3ac4 fix: align service path tests with platform delimiters 2026-01-24 02:34:54 +00:00
Peter Steinberger
1d862cf5c2 fix: add readability fallback extraction 2026-01-24 02:15:13 +00:00
Peter Steinberger
0840029982 fix: stabilize embedded runner queueing 2026-01-24 02:05:41 +00:00
Peter Steinberger
309fcc5321 fix: publish llm-task docs and harden tool 2026-01-24 01:44:51 +00:00
Peter Steinberger
00fd57b8f5 fix: honor wildcard tool allowlists 2026-01-24 01:30:44 +00:00
Peter Steinberger
4e77483051 fix: refine bedrock discovery defaults (#1543) (thanks @fal3) 2026-01-24 01:18:33 +00:00
Alex Fallah
8effb557d5 feat: add dynamic Bedrock model discovery
Add automatic discovery of AWS Bedrock models using ListFoundationModels API.
When AWS credentials are detected, models that support streaming and text output
are automatically discovered and made available.

- Add @aws-sdk/client-bedrock dependency
- Add discoverBedrockModels() with caching (default 1 hour)
- Add resolveImplicitBedrockProvider() for auto-registration
- Add BedrockDiscoveryConfig for optional filtering by provider/region
- Filter to active, streaming, text-output models only
- Update docs/bedrock.md with auto-discovery documentation
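
A sketch of that discovery and filtering via @aws-sdk/client-bedrock's ListFoundationModels API; the caching and config-driven filters are omitted:

```ts
import { BedrockClient, ListFoundationModelsCommand } from "@aws-sdk/client-bedrock";

async function discoverBedrockModels(region = "us-east-1"): Promise<string[]> {
  const client = new BedrockClient({ region });
  const { modelSummaries = [] } = await client.send(new ListFoundationModelsCommand({}));
  return modelSummaries
    .filter(
      (m) =>
        m.modelLifecycle?.status === "ACTIVE" &&  // active models only
        m.responseStreamingSupported === true &&  // must support streaming
        m.outputModalities?.includes("TEXT"),     // text output required
    )
    .map((m) => m.modelId ?? "")
    .filter(Boolean);
}
```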
2026-01-24 01:15:06 +00:00