Commit Graph

7893 Commits

Author SHA1 Message Date
Peter Steinberger
ef078fec70 docs: add Windows install troubleshooting 2026-01-25 05:48:24 +00:00
Glucksberg
8e3ac01db6 fix(ui): improve config save UX (#1678)
Follow-up to #1609 fix:
- Remove formUnsafe check from canSave (was blocking save even with valid changes)
- Suppress disconnect message for code 1012 (service restart is expected during config save)
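For context on the second bullet: WebSocket close code 1012 is the registered "Service Restart" code, so a client can treat it as an expected disconnect rather than an error. A minimal sketch of that idea, assuming a browser `WebSocket` and a hypothetical `showDisconnectBanner` helper (not the project's actual UI code):

```typescript
// Sketch only: suppress the disconnect banner when the server closes with
// code 1012 ("Service Restart"), which is expected while a saved config is applied.
const SERVICE_RESTART = 1012;

function attachCloseHandler(
  socket: WebSocket,
  showDisconnectBanner: (reason: string) => void, // hypothetical UI helper
) {
  socket.addEventListener("close", (event: CloseEvent) => {
    if (event.code === SERVICE_RESTART) {
      // Expected during config save; reconnect quietly instead of alarming the user.
      return;
    }
    showDisconnectBanner(`Connection closed (code ${event.code})`);
  });
}
```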

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
2026-01-25 05:46:55 +00:00
Peter Steinberger
c3f90dd4e2 docs: add macOS VM link 2026-01-25 05:32:04 +00:00
Peter Steinberger
00c4556d7b docs: add Claude Max FAQ 2026-01-25 05:28:01 +00:00
Peter Steinberger
a4bc69dbec docs: add first steps FAQ 2026-01-25 05:24:47 +00:00
Peter Steinberger
3f1457de2a docs: add new FAQ entries 2026-01-25 05:20:16 +00:00
Peter Steinberger
8507ea08bd docs: expand macOS VM guide (#1693) (thanks @f-trycua) 2026-01-25 05:16:41 +00:00
f-trycua
7ae2548fc6 docs: add macOS VM (Lume) platform guide
Add documentation for running Clawdbot in a sandboxed macOS VM
using Lume. This provides an alternative to buying dedicated
hardware or using cloud instances.

The guide covers:
- Installing Lume on Apple Silicon Macs
- Creating and configuring a macOS VM
- Installing Clawdbot inside the VM
- Running headlessly for 24/7 operation
- iMessage integration via BlueBubbles
- Saving golden images for easy reset
2026-01-25 05:14:13 +00:00
Peter Steinberger
f06f83ddd0 docs: add CC comparison faq 2026-01-25 05:00:47 +00:00
Peter Steinberger
c78297d80f docs: add account isolation faq 2026-01-25 04:56:41 +00:00
Peter Steinberger
69f6e1a20b docs: add multi-agent team faq 2026-01-25 04:55:22 +00:00
Peter Steinberger
5f6409a73d fix: configurable signal startup timeout 2026-01-25 04:51:35 +00:00
Rohan Nagpal
06a7e1e8ce Telegram: threaded conversation support (#1597)
* Telegram: isolate dm topic sessions

* Tests: cap vitest workers

* Tests: cap Vitest workers on CI macOS

* Tests: avoid timer-based pi-ai stream mock

* Tests: increase embedded runner timeout

* fix: harden telegram dm thread handling (#1597) (thanks @rohannagpal)

---------

Co-authored-by: Peter Steinberger <steipete@gmail.com>
2026-01-25 04:48:51 +00:00
Peter Steinberger
9eaaadf8ee fix: clarify control ui auth hints (fixes #1690) 2026-01-25 04:46:42 +00:00
Seb Slight
d4f60bf16a TTS: gate auto audio on inbound voice notes (#1667)
Co-authored-by: Sebastian <sebslight@gmail.com>
2026-01-25 04:35:20 +00:00
Peter Steinberger
ede5145191 docs: sweep support troubleshooting updates 2026-01-25 04:33:14 +00:00
Peter Steinberger
26d3fbb09f docs: add faq answers from support stream 2026-01-25 04:30:37 +00:00
Peter Steinberger
f7c89ba796 docs: fix faq wording + add heading guardrail 2026-01-25 04:25:17 +00:00
Peter Steinberger
9afde64e26 fix: validate web_search freshness (#1688) (thanks @JonUleis) 2026-01-25 04:23:25 +00:00
Jon Uleis
8682524da3 feat(web_search): add freshness parameter for Brave time filtering
Adds support for Brave Search API's freshness parameter to filter results
by discovery time:
- 'pd' - Past 24 hours
- 'pw' - Past week
- 'pm' - Past month
- 'py' - Past year
- 'YYYY-MM-DDtoYYYY-MM-DD' - Custom date range

Useful for cron jobs and monitoring tasks that need recent results.

Note: Perplexity provider ignores this parameter (Brave only).
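For reference, a rough sketch of how a freshness value could be forwarded to Brave, assuming the documented web-search endpoint and `X-Subscription-Token` header (illustrative only, not the tool's actual implementation):

```typescript
// Illustrative only: forwarding an optional freshness value to Brave Search.
type Freshness = "pd" | "pw" | "pm" | "py" | `${string}to${string}`;

async function braveSearch(query: string, freshness?: Freshness) {
  const url = new URL("https://api.search.brave.com/res/v1/web/search");
  url.searchParams.set("q", query);
  // e.g. "pw" for past week, or "2026-01-01to2026-01-25" for a custom range
  if (freshness) url.searchParams.set("freshness", freshness);
  const res = await fetch(url, {
    headers: { "X-Subscription-Token": process.env.BRAVE_API_KEY ?? "" },
  });
  if (!res.ok) throw new Error(`Brave search failed: ${res.status}`);
  return res.json();
}
```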

---
🤖 AI-assisted: This PR was created with Claude (Opus). Lightly tested via
build script. The change follows existing patterns for optional parameters
(country, search_lang, ui_lang).
2026-01-25 04:19:58 +00:00
Peter Steinberger
5956dde459 docs: expand faq on heavy work + self-hosted models 2026-01-25 04:17:12 +00:00
Peter Steinberger
50bb418fe7 fix: guard discord thread channel 2026-01-25 04:11:55 +00:00
Peter Steinberger
458e731f8b fix: newline chunking across channels 2026-01-25 04:11:36 +00:00
Peter Steinberger
ca78ccf74c docs: add dedicated host faq 2026-01-25 04:11:08 +00:00
Peter Steinberger
580fd7abbd fix: guard discord forum thread access 2026-01-25 04:11:04 +00:00
Peter Steinberger
4a82c258c7 docs: add windows restart faq 2026-01-25 04:09:51 +00:00
Peter Steinberger
58c7c61e62 fix: add duplex for fetch uploads 2026-01-25 04:05:30 +00:00
Peter Steinberger
629ce4454d docs: add tips + clawd-to-clawd faq 2026-01-25 04:04:18 +00:00
Peter Steinberger
617d8a12d7 chore: update clawtributors
Co-authored-by: vilkasdev <vilkasdev@users.noreply.github.com>
2026-01-25 04:01:10 +00:00
Shadow
cdceff2284 Discord: add forum parent context 2026-01-24 21:57:48 -06:00
Peter Steinberger
cb52ffb842 fix: drop unused cli import 2026-01-25 03:42:32 +00:00
Peter Steinberger
3a35d313d9 fix: signal reactions 2026-01-25 03:24:44 +00:00
Tom McKenzie
116fbb747f CLI: fix subcommand registration to work without --help/--version flags (#1683)
## Problem

The clawdbot-gateway systemd service was crash-looping on Linux (Fedora 42,
aarch64) with the error:

    error: unknown command '/usr/bin/node-22'

After ~20 seconds of runtime, the gateway would exit with status 1/FAILURE
and systemd would restart it, repeating the cycle indefinitely (80+ restarts
observed).

## Root Cause Analysis

### Investigation Steps

1. Examined systemd service logs via `journalctl --user -u clawdbot-gateway.service`
2. Found the error appeared consistently after the service had been running
   for 20-30 seconds
3. Added debug logging to trace argv at parseAsync() call
4. Discovered that argv was being passed to Commander.js with the node binary
   and script paths still present: `["/usr/bin/node-22", "/path/to/entry.js", "gateway", "--port", "18789"]`
5. Traced the issue to the lazy subcommand registration logic in runCli()

### The Bug

The lazy-loading logic for subcommands was gated behind `hasHelpOrVersion(parseArgv)`:

```typescript
if (hasHelpOrVersion(parseArgv)) {
  const primary = getPrimaryCommand(parseArgv);
  if (primary) {
    const { registerSubCliByName } = await import("./program/register.subclis.js");
    await registerSubCliByName(program, primary);
  }
}
```

This meant that when running `clawdbot gateway --port 18789` (without --help
or --version), the `gateway` subcommand was never registered before
`program.parseAsync(parseArgv)` was called. Commander.js would then try to
parse the arguments without knowing about the gateway command, leading to
parse errors.

The error message "unknown command '/usr/bin/node-22'" appeared because
Commander treated the first positional argument as a command name: on
non-Windows platforms, some code paths were not stripping the node binary and
script path from argv before parsing.
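
A standalone Commander.js sketch of that argv behavior (illustrative, not the project's code): with the default `from: "node"` hint Commander skips the binary and script entries, while parsing the same array as user-supplied arguments makes the binary path look like a command name.

```typescript
// Illustration of how Commander interprets an unstripped argv (not project code).
import { Command } from "commander";

const program = new Command().name("clawdbot");
program
  .command("gateway")
  .option("--port <port>")
  .action((opts) => console.log("gateway", opts));

const argv = ["/usr/bin/node-22", "/path/to/entry.js", "gateway", "--port", "18789"];

// Default `from: "node"`: the first two entries are skipped and "gateway" resolves.
await program.parseAsync(argv);

// Parsed as user-supplied args instead, "/usr/bin/node-22" is the first positional
// token and Commander reports: error: unknown command '/usr/bin/node-22'
// await program.parseAsync(argv, { from: "user" });
```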

## The Fix

Remove the `hasHelpOrVersion()` gate and always register the primary
subcommand when one is detected:

```typescript
// Register the primary subcommand if one exists (for lazy-loading)
const primary = getPrimaryCommand(parseArgv);
if (primary) {
  const { registerSubCliByName } = await import("./program/register.subclis.js");
  await registerSubCliByName(program, primary);
}
```

This ensures that subcommands like `gateway` are properly registered before
parsing begins, regardless of what flags are present.
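
For readers unfamiliar with the surrounding helpers: `getPrimaryCommand` presumably returns the first non-flag token from the already-stripped argv. A hypothetical sketch of that shape (the real helper in the repo may differ):

```typescript
// Hypothetical sketch; the actual getPrimaryCommand implementation may differ.
function getPrimaryCommand(parseArgv: string[]): string | undefined {
  // First token that is not an option flag is treated as the subcommand name.
  return parseArgv.find((arg) => !arg.startsWith("-"));
}

// getPrimaryCommand(["gateway", "--port", "18789"]) -> "gateway"
// getPrimaryCommand(["--help"]) -> undefined
```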

## Environment

- OS: Fedora 42 (Linux 6.15.9-201.fc42.aarch64)
- Arch: aarch64
- Node: /usr/bin/node-22 (symlink to node-22)
- Deployment: systemd user service
- Runtime: Gateway started via `clawdbot gateway --port 18789`

## Why This Should Be Merged

1. **Critical Bug**: The gateway service cannot run reliably on Linux without
   this fix, making it a blocking issue for production deployments via systemd.

2. **Affects All Non-Help Invocations**: Any direct subcommand invocation
   (gateway, channels, etc.) without --help/--version is broken.

3. **Simple & Safe Fix**: The change removes an unnecessary condition that was
   preventing lazy-loading from working correctly. Subcommands should always be
   registered when detected, not just for help/version requests.

4. **No Regression Risk**: The fix maintains the lazy-loading behavior (only
   loads the requested subcommand), just ensures it works in all cases instead
   of only help/version scenarios.

5. **Tested**: Verified that the gateway service now runs stably for extended
   periods (45+ seconds continuous runtime with no crashes) after applying this
   fix.

Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2026-01-25 03:17:02 +00:00
Peter Steinberger
b1a555da13 fix: skip tailscale dns probe when off 2026-01-25 02:51:20 +00:00
Peter Steinberger
c92aaca8b0 docs: answer local data storage faq 2026-01-25 02:48:28 +00:00
Peter Steinberger
c3e777e3e1 fix: keep raw config edits scoped to config view (#1673) (thanks @Glucksberg) 2026-01-25 02:48:07 +00:00
Peter Steinberger
2e3b14187b fix: stabilize venice model discovery 2026-01-25 02:43:08 +00:00
Peter Steinberger
f8a22521bd docs: clarify WSL2 recommendation 2026-01-25 02:30:09 +00:00
Peter Steinberger
8477394414 docs: explain unstuck commands 2026-01-25 02:04:32 +00:00
Peter Steinberger
e6e71457e0 fix: honor trusted proxy client IPs (PR #1654)
Thanks @ndbroadbent.

Co-authored-by: Nathan Broadbent <git@ndbroadbent.com>
2026-01-25 01:52:19 +00:00
Peter Steinberger
2684a364c6 docs: add basic debug commands to unstuck faq 2026-01-25 01:51:38 +00:00
Peter Steinberger
b9dc117309 docs: refine venice highlight 2026-01-25 01:49:53 +00:00
Peter Steinberger
9205ee55de docs: add fastest-unstuck guidance 2026-01-25 01:22:22 +00:00
Peter Steinberger
6e23e81678 docs: clarify lobster DSL rationale 2026-01-25 01:13:55 +00:00
Peter Steinberger
0163f53f5d fix: regenerate protocol models 2026-01-25 01:12:49 +00:00
jonisjongithub
25f2d2adb3 docs: remove rate limits claim from Venice docs 2026-01-25 01:11:57 +00:00
jonisjongithub
7540d1e8c1 feat: add Venice AI provider integration
Venice AI is a privacy-focused AI inference provider with support for
uncensored models and access to major proprietary models via their
anonymized proxy.

This integration adds:

- Complete model catalog with 25 models:
  - 15 private models (Llama, Qwen, DeepSeek, Venice Uncensored, etc.)
  - 10 anonymized models (Claude, GPT-5.2, Gemini, Grok, Kimi, MiniMax)
- Auto-discovery from Venice API with fallback to static catalog
- VENICE_API_KEY environment variable support
- Interactive onboarding via 'venice-api-key' auth choice
- Model selection prompt showing all available Venice models
- Provider auto-registration when API key is detected
- Comprehensive documentation covering:
  - Privacy modes (private vs anonymized)
  - All 25 models with context windows and features
  - Streaming, function calling, and vision support
  - Model selection recommendations

Privacy modes:
- Private: Fully private, no logging (open-source models)
- Anonymized: Proxied through Venice (proprietary models)

Default model: venice/llama-3.3-70b (good balance of capability + privacy)
Venice API: https://api.venice.ai/api/v1 (OpenAI-compatible)
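
Because the endpoint is OpenAI-compatible, a minimal chat-completion call can be sketched with plain fetch (illustrative only; assumes the standard /chat/completions route and the VENICE_API_KEY variable above, not Clawdbot's integration code):

```typescript
// Minimal illustrative call against Venice's OpenAI-compatible API.
async function veniceChat(prompt: string): Promise<string | undefined> {
  const res = await fetch("https://api.venice.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.VENICE_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      // Model id assumed; the catalog name above is venice/llama-3.3-70b.
      model: "llama-3.3-70b",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Venice request failed: ${res.status}`);
  const data = await res.json();
  return data.choices?.[0]?.message?.content;
}
```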
2026-01-25 01:11:57 +00:00
Peter Steinberger
fc0e303e05 feat: add edge tts fallback provider 2026-01-25 01:05:43 +00:00
Peter Steinberger
6a7a1d7085 fix: add chat stop button
Co-authored-by: Nathan Broadbent <ndbroadbent@users.noreply.github.com>
2026-01-25 01:00:23 +00:00
Tyler Yust
92e794dc18 feat: add chunking mode option for BlueBubbles (#1645)
* feat: add chunking mode for outbound messages

- Introduced `chunkMode` option in various account configurations to allow splitting messages by "length" or "newline".
- Updated message processing to handle chunking based on the selected mode.
- Added tests for new chunking functionality, ensuring correct behavior for both modes.

* feat: enhance chunking mode documentation and configuration

- Added `chunkMode` option to the BlueBubbles account configuration, allowing users to choose between "length" and "newline" for message chunking.
- Updated documentation to clarify the behavior of the `chunkMode` setting.
- Adjusted account merging logic to incorporate the new `chunkMode` configuration.

* refactor: simplify chunk mode handling for BlueBubbles

- Removed `chunkMode` configuration from various account schemas and types, centralizing chunk-mode logic in the BlueBubbles channel only.
- Updated `processMessage` to default to "newline" for BlueBubbles chunking.
- Adjusted tests to reflect changes in chunk mode handling for BlueBubbles, ensuring proper functionality.

* fix: update default chunk mode to 'length' for BlueBubbles

- Changed the default value of `chunkMode` from 'newline' to 'length' in the BlueBubbles configuration and related processing functions.
- Updated documentation to reflect the new default behavior for chunking messages.
- Adjusted tests to ensure the correct default value is returned for BlueBubbles chunk mode.
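
To make the two modes concrete, a simplified sketch of the chunking decision described above (assumed shape and limits; not the actual BlueBubbles channel code):

```typescript
// Simplified sketch of "length" vs "newline" chunking (limits are assumptions).
type ChunkMode = "length" | "newline";

function chunkOutbound(text: string, mode: ChunkMode = "length", maxLen = 2000): string[] {
  if (mode === "newline") {
    // Split on newline boundaries and drop empty pieces.
    return text
      .split(/\n+/)
      .map((part) => part.trim())
      .filter(Boolean);
  }
  // "length": slice into fixed-size pieces.
  const chunks: string[] = [];
  for (let i = 0; i < text.length; i += maxLen) {
    chunks.push(text.slice(i, i + maxLen));
  }
  return chunks;
}
```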
2026-01-25 00:47:10 +00:00