cybernati1776/openclaw-koboldcpp-tutorial

⚠️ Attention: This tutorial may undergo changes as both projects evolve.

KoboldCPP + OpenClaw Local Run 🔥

Welcome! This step-by-step guide is designed to be simple and straightforward, allowing anyone to run OpenClaw integrated with KoboldCPP locally on their own machine.

The goal is to teach you how to install both programs and, most importantly, connect them using the Custom Provider feature.

Recommendations: Install Python, npm (Node.js), and Homebrew before starting.

Step 1: Installing KoboldCPP 🧠

KoboldCPP is the program responsible for running the base Artificial Intelligence model on your computer. It is highly portable, running on a wide range of CPUs and GPUs, and it supports multiple model types: LLMs, embeddings, vision, TTS (text-to-speech), and image generation.

Download the program: Go to the official KoboldCPP releases page: https://github.com/LostRuins/koboldcpp/releases/tag/v1.109.2

If you use Windows, download the latest koboldcpp.exe file.

If you use Linux, download the corresponding binary (e.g., koboldcpp-linux-x64).

Download a model: For the AI to work, you need a "brain". Go to the Hugging Face website and download an AI model in the .GGUF format.

Execute: Double-click koboldcpp.exe (on Linux, make the binary executable with chmod +x, then run it). The graphical interface will open.

Load the model: Click "Browse", select the .GGUF file you just downloaded, and then click Launch.

A terminal will open and the model will load. Once it finishes, the server will be running at http://localhost:5001.
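Once it is up, you can sanity-check the server from outside the UI: KoboldCPP exposes an OpenAI-compatible API on this port. A minimal Python sketch (the model name and prompt are placeholders, and the network call is left commented out so nothing is sent until you are ready):

```python
import json
import urllib.request

# KoboldCPP serves an OpenAI-compatible API on the port it was
# launched with (5001 by default).
BASE_URL = "http://localhost:5001"

def build_chat_request(model: str, prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-style chat completion payload for KoboldCPP."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("my-model", "Say hello in one word.")
req = urllib.request.Request(
    f"{BASE_URL}/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once KoboldCPP is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

This is the same request shape OpenClaw will send later through the Custom Provider, so it is a quick way to confirm the endpoint works before wiring anything else up.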

Step 2: Configure KoboldCPP Hardware Settings

Remember: KoboldCPP must always be running; it is the gateway endpoint for the bot.

Configure your settings based on your hardware before starting OpenClaw.

Refer to the official step-by-step guides: KoboldCPP Wiki https://github.com/LostRuins/koboldcpp/wiki#quick-start

Use the GET HELP button in the app for assistance, and you can even download models directly using the HF Search button.

Recommended models: Remember that larger models require more hardware. I advise choosing by parameter count (B): 4B, 8B, 9B, 12B... 27B... 32B... 42B... https://github.com/LostRuins/koboldcpp/wiki#getting-an-ai-model-file

Download locations:

  • KoboldCPP Models: https://huggingface.co/koboldcpp/models
  • MyGGUF: https://mygguf.com/models
  • Huihui-ai: https://huggingface.co/huihui-ai
  • DavidAU: https://huggingface.co/DavidAU
  • mradermacher: https://huggingface.co/mradermacher
  • unsloth: https://huggingface.co/unsloth
  • TeichAI: https://huggingface.co/TeichAI

Important: Make sure to uncheck the "Launch Browser" option.

My setup (for reference): Ryzen 5500 (16x dual channel), 32GB DDR4 3200 CL 22 + RTX 3060 12GB VRAM + NVMe M.2 500 GB SSD + HDDs + SSD running Arch Linux KDE optimized for LLMs. However, it runs on any machine if configured well.

Tip: For the best experience analyzing your own files, use models with vision capabilities (mmproj-f16).

Example Configuration (9B.kcpps): You can save your configuration with the "Save Config" button in KoboldCPP and reuse it without reconfiguring every time; just select the .kcpps file via the "Load Config" button. Note that the # comments below are annotations only; remove them before loading, since .kcpps files are strict JSON.

{
  "model": [],
  "model_param": "/home/USER/Model/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS.gguf", # llm model
  "port": 5001,
  "port_param": 5001,
  "host": "",
  "launch": false,
  "threads": 11,
  "usecuda": [
    "normal",
    "0",
    "mmq"
  ],
  "usevulkan": null,
  "usecpu": false,
  "contextsize": 32768,
  "gpulayers": 21,
  "tensor_split": null,
  "autofit": false,
  "maingpu": -1,
  "batchsize": 2048,
  "blasthreads": null,
  "lora": null,
  "loramult": 1.0,
  "noshift": false,
  "nofastforward": false,
  "useswa": false,
  "smartcache": 0,
  "ropeconfig": [
    0.0,
    10000.0  # (for a big KV memory, e.g. 128k/256k context, use 1000000.0)
  ],
  "overridenativecontext": 0,
  "usemmap": true,
  "usemlock": false,
  "noavx2": false,
  "failsafe": false,
  "debugmode": 0,
  "onready": "",
  "benchmark": null,
  "prompt": "",
  "cli": false,
  "genlimit": 4096,
  "multiuser": 1,
  "multiplayer": false,
  "websearch": true,
  "remotetunnel": false,
  "highpriority": false,
  "foreground": false,
  "preloadstory": null,
  "savedatafile": null,
  "quiet": false,
  "ssl": null,
  "nocertify": false,
  "mmproj": "/home/USER/Model/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.mmproj-f16.gguf", # for vision/multimodal
  "mmprojcpu": false,
  "visionmaxres": 1024,
  "draftmodel": null,
  "draftamount": 8,
  "draftgpulayers": 999,
  "draftgpusplit": null,
  "password": null,
  "ratelimit": 0,
  "ignoremissing": false,
  "chatcompletionsadapter": "AutoGuess",
  "jinja": false,
  "jinja_tools": false,
  "noflashattention": false,
  "lowvram": true,
  "quantkv": 2,
  "smartcontext": false,
  "nomodel": false,
  "moeexperts": -1,
  "moecpu": 0,
  "defaultgenamt": 2048,
  "nobostoken": false,
  "enableguidance": false,
  "maxrequestsize": 32,
  "overridekv": null,
  "overridetensors": null,
  "showgui": false,
  "skiplauncher": false,
  "singleinstance": false,
  "nopipelineparallel": true,
  "gendefaults": "",
  "gendefaultsoverwrite": false,
  "mcpfile": null,
  "device": "",
  "downloaddir": "",
  "autofitpadding": 1024,
  "hordemodelname": "",
  "hordeworkername": "",
  "hordekey": "",
  "hordemaxctx": 0,
  "hordegenlen": 0,
  "sdmodel": "",
  "sdthreads": 5,
  "sdclamped": 0,
  "sdclampedsoft": 0,
  "sdt5xxl": "",
  "sdclip1": "",
  "sdclip2": "",
  "sdphotomaker": "",
  "sdupscaler": "",
  "sdflashattention": false,
  "sdoffloadcpu": false,
  "sdvaecpu": false,
  "sdclipgpu": false,
  "sdconvdirect": "off",
  "sdvae": "",
  "sdvaeauto": false,
  "sdquant": 0,
  "sdlora": null,
  "sdloramult": 1.0,
  "sdtiledvae": 768,
  "whispermodel": "",
  "ttsmodel": "",
  "ttswavtokenizer": "",
  "ttsgpu": false,
  "ttsmaxlen": 4096,
  "ttsthreads": 0,
  "ttsdir": "",
  "musicllm": "",
  "musicembeddings": "",
  "musicdiffusion": "",
  "musicvae": "",
  "musiclowvram": false,
  "embeddingsmodel": "/home/USER/Model/qwen3-vl-embedding-2b-q4_k_m.gguf", # for embedding 
  "embeddingsmaxctx": 2048,
  "embeddingsgpu": true,
  "admin": false,
  "adminpassword": "",
  "admindir": "",
  "hordeconfig": null,
  "sdconfig": null,
  "noblas": false,
  "nommap": false,
  "pipelineparallel": false,
  "sdnotile": false,
  "forceversion": false,
  "sdgendefaults": false,
  "flashattention": false,
  "istemplate": false
}
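Since a .kcpps file is plain JSON once those annotation comments are stripped, you can also inspect or tweak a saved config from a script. A minimal sketch using a trimmed fragment of the example above:

```python
import json

# A trimmed .kcpps fragment (strict JSON, comments removed).
# In practice you would read the whole file with open(path).read().
KCPPS_FRAGMENT = """
{
  "port_param": 5001,
  "contextsize": 32768,
  "gpulayers": 21,
  "lowvram": true
}
"""

cfg = json.loads(KCPPS_FRAGMENT)
print(f"KoboldCPP will listen on port {cfg['port_param']}")
print(f"Context: {cfg['contextsize']} tokens, {cfg['gpulayers']} GPU layers")
```

This is handy when you keep several .kcpps profiles and want to confirm which port and context size a given one will launch with.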

Step 3: Installing OpenClaw 🤖

OpenClaw will be the agent/assistant interface you will interact with.

Go to the official project page: https://openclaw.ai/ (repository: https://github.com/openclaw/openclaw).

Install the software according to the developer's instructions on their homepage (in many cases, this involves downloading the main files or using a guided package installation, depending on your OS).

After the initial installation and setup, open the terminal.

Install

npm install -g openclaw@latest
# or: pnpm add -g openclaw@latest

Now let's run the onboarding command:

openclaw onboard --install-daemon

OpenClaw Setup Flow

Follow the terminal prompts exactly like this:

openclaw onboard --install-daemon

🦞 OpenClaw 2026.3.12 (6472949) — I'm the middleware between your ambition and your attention span.

│
◇  Doctor warnings ──────────────────────────────────────────────────────────────────────────╮
│                                                                                            │
│  - channels.telegram.groupPolicy is "allowlist" but groupAllowFrom (and allowFrom) is      │
│    empty — all group messages will be silently dropped. Add sender IDs to                  │
│    channels.telegram.groupAllowFrom or channels.telegram.allowFrom, or set groupPolicy to  │
│    "open".                                                                                 │
│                                                                                            │
├────────────────────────────────────────────────────────────────────────────────────────────╯
▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄
██░▄▄▄░██░▄▄░██░▄▄▄██░▀██░██░▄▄▀██░████░▄▄▀██░███░██
██░███░██░▀▀░██░▄▄▄██░█░█░██░█████░████░▀▀░██░█░█░██
██░▀▀▀░██░█████░▀▀▀██░██▄░██░▀▀▄██░▀▀░█░██░██▄▀▄▀▄██
▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀▀
                  🦞 OPENCLAW 🦞                    
 
┌  OpenClaw onboarding
│
◇  Security ─────────────────────────────────────────────────────────────────────────────────╮
│                                                                                            │
│  Security warning — please read.                                                           │
│                                                                                            │
│  OpenClaw is a hobby project and still in beta. Expect sharp edges.                        │
│  By default, OpenClaw is a personal agent: one trusted operator boundary.                  │
│  This bot can read files and run actions if tools are enabled.                             │
│  A bad prompt can trick it into doing unsafe things.                                       │
│                                                                                            │
│  OpenClaw is not a hostile multi-tenant boundary by default.                               │
│  If multiple users can message one tool-enabled agent, they share that delegated tool      │
│  authority.                                                                                │
│                                                                                            │
│  If you’re not comfortable with security hardening and access control, don’t run           │
│  OpenClaw.                                                                                 │
│  Ask someone experienced to help before enabling tools or exposing it to the internet.     │
│                                                                                            │
│  Recommended baseline:                                                                     │
│  - Pairing/allowlists + mention gating.                                                    │
│  - Multi-user/shared inbox: split trust boundaries (separate gateway/credentials, ideally  │
│    separate OS users/hosts).                                                               │
│  - Sandbox + least-privilege tools.                                                        │
│  - Shared inboxes: isolate DM sessions (`session.dmScope: per-channel-peer`) and keep      │
│    tool access minimal.                                                                    │
│  - Keep secrets out of the agent’s reachable filesystem.                                   │
│  - Use the strongest available model for any bot with tools or untrusted inboxes.          │
│                                                                                            │
│  Run regularly:                                                                            │
│  openclaw security audit --deep                                                            │
│  openclaw security audit --fix                                                             │
│                                                                                            │
│  Must read: [https://docs.openclaw.ai/gateway/security](https://docs.openclaw.ai/gateway/security)                                      │
│                                                                                            │
├────────────────────────────────────────────────────────────────────────────────────────────╯
│
◆  I understand this is personal-by-default and shared/multi-user use requires lock-down. Continue?
│  ● Yes / ○ No
◆  Onboarding mode
│  ● QuickStart (Configure details later via openclaw configure.)
│  ○ Manual
└
◇  QuickStart ─────────────────────────╮
│                                      │
│  Gateway port: 18789                 │
│  Gateway bind: Loopback (127.0.0.1)  │
│  Gateway auth: Token (default)       │
│  Tailscale exposure: Off             │
│  Direct to chat channels.            │
│                                      │
├──────────────────────────────────────╯
│
◆  Model/auth provider
│  ○ OpenAI
│  ○ Anthropic
│  ○ Chutes
│  ○ MiniMax
│  ○ Moonshot AI (Kimi K2.5)
│  ○ Google
│  ○ xAI (Grok)
│  ○ Mistral AI
│  ○ Volcano Engine
│  ○ BytePlus
│  ○ OpenRouter
│  ○ Kilo Gateway
│  ○ Qwen
│  ○ Z.AI
│  ○ Qianfan
│  ○ Alibaba Cloud Model Studio
│  ○ Copilot
│  ○ Vercel AI Gateway
│  ○ OpenCode
│  ○ Xiaomi
│  ○ Synthetic
│  ○ Together AI
│  ○ Hugging Face
│  ○ Venice AI
│  ○ LiteLLM
│  ○ Cloudflare AI Gateway
│  ● Custom Provider (Any OpenAI or Anthropic compatible endpoint) -----> ENTER 
│  ○ Ollama
│  ○ SGLang
│  ○ vLLM
│  ○ Skip for now
└│
◇  QuickStart ─────────────────────────╮
│                                      │
│  Gateway port: 18789                 │
│  Gateway bind: Loopback (127.0.0.1)  │
│  Gateway auth: Token (default)       │
│  Tailscale exposure: Off             │
│  Direct to chat channels.            │
│                                      │
├──────────────────────────────────────╯
│
◇  Model/auth provider
│  Custom Provider
│
◆  API Base URL
│   http://127.0.0.1:5001

◇  How do you want to provide this API key?
│  Paste API key now
│
◆  API Key (leave blank if not required)
│  _ 
└
◆  Endpoint compatibility
│  ○ OpenAI-compatible
│  ○ Anthropic-compatible
│  ● Unknown (detect automatically) (Probes OpenAI then Anthropic endpoints)
└
◆  Model ID
│  e.g. llama3, claude-3-7-sonnet
└ 

(Place the exact name of your model here. Do not add the .gguf or .safetensors extensions!)
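In other words, the Model ID is just the GGUF filename with its extension stripped. A tiny illustration in Python, using the model file from the earlier example:

```python
# The Model ID is the GGUF filename minus its extension.
filename = "Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS.gguf"
model_id = filename.removesuffix(".gguf")  # requires Python 3.9+
print(model_id)
```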

│
◆  Model ID
│  Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS█
└
◇  Detected OpenAI-compatible endpoint.
│
◆  Endpoint ID
│  custom-127-0-0-1-5001█
└
◆  Model alias (optional)
│  Quen3.5-9b-C-4.6-AVHUTi1-IQ4█
└

(Optional: Choose your preferred channel. Each option will guide you on how to get access)

Select channel (QuickStart) 
│  ● Telegram (Bot API) (recommended · newcomer-friendly)
│  ○ WhatsApp (QR link)
│  ○ Discord (Bot API)
│  ○ IRC (Server + Nick)
│  ○ Google Chat (Chat API)
│  ○ Slack (Socket Mode)
│  ○ Signal (signal-cli)
│  ○ iMessage (imsg)
│  ○ LINE (Messaging API)
│  ○ Feishu/Lark (飞书)
│  ○ Nostr (NIP-04 DMs)
│  ○ Microsoft Teams (Bot Framework)
│  ○ Mattermost (plugin)
│  ○ Nextcloud Talk (self-hosted)
│  ○ Matrix (plugin)
│  ○ BlueBubbles (macOS app)
│  ○ Zalo (Bot API)
│  ○ Zalo (Personal Account)
│  ○ Synology Chat (Webhook)
│  ○ Tlon (Urbit)
│  ○ Skip for now
└

(Optional: You will need an API key for the chosen search provider. Tip: We suggest using Gemini via Google AI Studio because the API key is currently free)

◆  Search provider 
│  ○ Brave Search
│  ● Gemini (Google Search) (Google Search grounding · AI-synthesized) 
│  ○ Grok (xAI)
│  ○ Kimi (Moonshot)
│  ○ Perplexity Search
│  ○ Skip for now
└
 Configure skills now? (recommended)
│  ● Yes / ○ No
└

(Optional: You can choose your skills now, or install more later using the OpenClaw CLI. See https://docs.openclaw.ai/tools/skills and https://clawhub.ai/)

◇  Configure skills now? (recommended)
│  Yes
│
◆  Install missing skill dependencies
│  ◻ Skip for now
│  ◻ 📰 blogwatcher (Monitor blogs and RSS/Atom feeds for updates using the blogwatcher CLI. — Install blogwat…)
│  ◻ 📸 camsnap
│  ◻ 🛌 eightctl
│  ◻ 🎮 gog
│  ◻ 📍 goplaces
│  ◻ 📧 himalaya
│  ◻ 💡 openhue
│  ◻ 🛵 ordercli
│  ◻ 🧾 summarize
│  ◻ 📱 wacli
│  ◻ 🐦 xurl
└

(If you have the API keys for the skills you selected, enter them here)

◇  Configure skills now? (recommended)
│  Yes
│
◇  Install missing skill dependencies
│  Skip for now
│
◇  Set GOOGLE_PLACES_API_KEY for goplaces?
│  No
│
◇  Set GEMINI_API_KEY for nano-banana-pro?
│  No
│
◇  Set NOTION_API_KEY for notion?
│  No
│
◇  Set OPENAI_API_KEY for openai-image-gen?
│  No
│
◇  Set OPENAI_API_KEY for openai-whisper-api?
│  No
│
◇  Set ELEVENLABS_API_KEY for sag?  
│  No


◇  Hooks ──────────────────────────────────────────────────────────────────╮
│                                                                          │
│  Hooks let you automate actions when agent commands are issued.          │
│  Example: Save session context to memory when you issue /new or /reset.  │
│                                                                          │
│  Learn more: [https://docs.openclaw.ai/automation/hooks](https://docs.openclaw.ai/automation/hooks)                   │
│                                                                          │
├──────────────────────────────────────────────────────────────────────────╯
│
◆  Enable hooks?  # Select all of them on a first install; otherwise, skip
│   Skip for now
│  ◼ 🚀 boot-md (Run BOOT.md on gateway startup)
│  ◼ 📎 bootstrap-extra-files (Inject additional workspace bootstrap files via glob/path patterns)
│  ◼ 📝 command-logger (Log all command events to a centralized audit file)
│  ◼ 💾 session-memory (Save session context to memory when /new or /reset command is issued)
└
◆  How do you want to hatch your bot?
│  ● Hatch in TUI (recommended) ----> ENTER (opens the TUI; you can use it right away)
│  ○ Open the Web UI
│  ○ Do this later
└

Remember: KoboldCPP must always be running; it is the gateway endpoint for the bot.

JSON Configurations

Here is an example of the generated file at /home/USER/.openclaw/openclaw.json (remember, this is just an example; the # annotations are explanatory only and must not appear in the real file, since JSON does not allow comments):

{
  "wizard": {
    "lastRunAt": "2026-03-13T23:15:18.172Z",
    "lastRunVersion": "2026.3.12",
    "lastRunCommand": "onboard",
    "lastRunMode": "local"
  },
  "models": {
    "mode": "merge",
    "providers": {
      "custom-127-0-0-1-5001": {
        "baseUrl": "http://127.0.0.1:5001",
        "api": "openai-completions",
        "models": [
          {
            "id": "Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS",
            "name": "Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS (Custom Provider)",
            "reasoning": false,
            "input": [
              "text"
            ],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 16000,  # (Optional: increase the context window here to match the context window you configured in KoboldCPP earlier)
            "maxTokens": 4096 # (Optional: increase maxTokens according to your needs)
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "custom-127-0-0-1-5001/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS"
      },
      "models": {
        "custom-127-0-0-1-5001/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS": {
          "alias": "Quen3.5-9b-C-4.6-AVHUTi1-IQ4"
        }
      },
      "workspace": "/home/USER/.openclaw/workspace"
    }
  },
  "tools": {
    "profile": "coding"
  },
  "commands": {
    "native": "auto",
    "nativeSkills": "auto",
    "restart": true,
    "ownerDisplay": "raw"
  },
  "session": {
    "dmScope": "per-channel-peer" # (Optional: use "main" to mirror conversations on Telegram, otherwise you will have separate sessions)
  },
  "hooks": {
    "internal": {
      "enabled": true,
      "entries": {
        "boot-md": {
          "enabled": true
        },
        "bootstrap-extra-files": {
          "enabled": true
        },
        "command-logger": {
          "enabled": true
        },
        "session-memory": {
          "enabled": true
        }
      }
    }
  },
  "gateway": {
    "port": 18789,
    "mode": "local",
    "bind": "loopback",
    "auth": {
      "mode": "token",
      "token": "**********************"
    },
    "tailscale": {
      "mode": "off",
      "resetOnExit": false
    },
    "nodes": {
      "denyCommands": [
        "camera.snap",
        "camera.clip",
        "screen.record",
        "contacts.add",
        "calendar.add",
        "reminders.add",
        "sms.send"
      ]
    }
  },
  "meta": {
    "lastTouchedVersion": "2026.3.12",
    "lastTouchedAt": "2026-03-13T23:15:18.176Z"
  }
}
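One easy mistake is letting contextWindow in openclaw.json drift above the contextsize you launched KoboldCPP with, in which case long sessions get truncated server-side. A small hypothetical sanity check (the check_context helper and the inline fragments are illustrative only, not part of either tool):

```python
def check_context(openclaw_cfg: dict, kcpps_cfg: dict, provider: str) -> list:
    """Warn when a model's contextWindow exceeds KoboldCPP's contextsize."""
    warnings = []
    limit = kcpps_cfg.get("contextsize", 0)
    models = openclaw_cfg["models"]["providers"][provider]["models"]
    for m in models:
        win = m.get("contextWindow", 0)
        if win > limit:
            warnings.append(
                f"{m['id']}: contextWindow {win} > KoboldCPP contextsize {limit}"
            )
    return warnings

# Inline fragments mirroring the examples above.
openclaw_cfg = {"models": {"providers": {"custom-127-0-0-1-5001": {"models": [
    {"id": "demo-model", "contextWindow": 32768}]}}}}
kcpps_cfg = {"contextsize": 16384}
print(check_context(openclaw_cfg, kcpps_cfg, "custom-127-0-0-1-5001"))
```

In a real setup you would json.load() both files from disk instead of using inline dictionaries.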

Optional: Local Embeddings

I cannot guarantee this will work perfectly (this section was drafted with Gemini 3.1 from the official documents), but for local embeddings, follow the official tutorial for constructing the JSON structure: Local Embedding Auto-Download https://docs.openclaw.ai/concepts/memory#local-embedding-auto-download

Example:

{
  "wizard": {
    "lastRunAt": "2026-03-13T23:15:18.172Z",
    "lastRunVersion": "2026.3.12",
    "lastRunCommand": "onboard",
    "lastRunMode": "local"
  },
  "models": {
    "mode": "merge",
    "providers": {
      "custom-127-0-0-1-5001": {
        "baseUrl": "http://127.0.0.1:5001",
        "api": "openai-completions",
        "models": [
          {
            "id": "Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS",
            "name": "Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS (Custom Provider)",
            "reasoning": false,
            "input": ["text"],
            "cost": {
              "input": 0,
              "output": 0,
              "cacheRead": 0,
              "cacheWrite": 0
            },
            "contextWindow": 32768,
            "maxTokens": 4096
          },
          {
            "id": "qwen3-vl-embedding-2b-q4_k_m.gguf",
            "name": "Qwen3 VL Embedding 2B (Kobold)",
            "type": "embedding",
            "contextWindow": 2048
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {
        "primary": "custom-127-0-0-1-5001/Qwen3.5-9B-Claude-4.6-OS-Auto-Variable-HERETIC-UNCENSORED-THINKING.i1-IQ4_XS",
        "embedding": "custom-127-0-0-1-5001/qwen3-vl-embedding-2b-q4_k_m"
      },
      "workspace": "/home/USER/.openclaw/workspace",
      "memorySearch": {
        "query": {
          "hybrid": {
            "enabled": true,
            "vectorWeight": 0.7,
            "textWeight": 0.3,
            "candidateMultiplier": 4,
            "mmr": {
              "enabled": true,
              "lambda": 0.7
            },
            "temporalDecay": {
              "enabled": true,
              "halfLifeDays": 30
            }
          }
        },
        "cache": {
          "enabled": true,
          "maxEntries": 50000
        },
        "experimental": {
          "sessionMemory": true
        },
        "sources": [
          "memory",
          "sessions"
        ],
        "sync": {
          "sessions": {
            "deltaBytes": 100000,
            "deltaMessages": 50
          }
        },
        "provider": "openai",
        "model": "qwen3-vl-embedding-2b-q4_k_m",
        "remote": {
          "baseUrl": "http://127.0.0.1:5001/v1/"
        }
      }
    }
  },
  "tools": {
    "profile": "coding"
  },
  "commands": {
    "native": "auto",
    "nativeSkills": "auto",
    "restart": true,
    "ownerDisplay": "raw"
  },
  "session": {
    "dmScope": "per-channel-peer"
  },
  "hooks": {
    "internal": {
      "enabled": true,
      "entries": {
        "boot-md": {
          "enabled": true
        },
        "bootstrap-extra-files": {
          "enabled": true
        },
        "command-logger": {
          "enabled": true
        },
        "session-memory": {
          "enabled": true
        }
      }
    }
  },
  "gateway": {
    "port": 18789,
    "mode": "local",
    "bind": "loopback",
    "auth": {
      "mode": "token",
      "token": "**********************"
    },
    "tailscale": {
      "mode": "off",
      "resetOnExit": false
    },
    "nodes": {
      "denyCommands": [
        "camera.snap",
        "camera.clip",
        "screen.record",
        "contacts.add",
        "calendar.add",
        "reminders.add",
        "sms.send"
      ]
    }
  },
  "meta": {
    "lastTouchedVersion": "2026.3.12",
    "lastTouchedAt": "2026-03-13T23:15:18.176Z"
  }
}
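You can also test the embeddings endpoint independently of OpenClaw: with the embeddings model loaded, KoboldCPP serves an OpenAI-compatible /v1/embeddings route. A hedged sketch (the request itself is left commented out; run it only once KoboldCPP is up with the embeddings model configured):

```python
import json
import urllib.request

# Payload for KoboldCPP's OpenAI-compatible embeddings endpoint.
payload = {
    "model": "qwen3-vl-embedding-2b-q4_k_m",
    "input": ["hello world"],
}
req = urllib.request.Request(
    "http://127.0.0.1:5001/v1/embeddings",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
# Uncomment once KoboldCPP is running with the embeddings model loaded:
# with urllib.request.urlopen(req) as resp:
#     vector = json.load(resp)["data"][0]["embedding"]
#     print(f"Got an embedding of dimension {len(vector)}")
```

If this returns a vector, OpenClaw's memorySearch remote.baseUrl setting above is pointing at a working endpoint.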

If you run into problems, use the command: openclaw doctor --fix (https://docs.openclaw.ai/gateway/doctor)

Restarting the gateway also often resolves gateway-related issues: openclaw gateway restart (https://docs.openclaw.ai/cli/gateway)

☕ Support the Author

If this was useful and helped you, please consider buying me a coffee! :)

BTC: bc1q2k4yey4tj36hur5flnankc5knd8sdm2q3dfn0w

ETH: 0x59Cdc44178d3a081a4cbE4Fd5C75B43D7769691d

LTC: ltc1q0f0nrnj0g4lrfj78a0uu9de9vm6uav2ld8xnh4

Docs:

  • KoboldCPP quick start: https://github.com/LostRuins/koboldcpp/wiki#quick-start
  • OpenClaw getting started: https://docs.openclaw.ai/start/getting-started
