Configuring an Ollama Local Model for openclaw
1. Install and start the Ollama local model
The first step, naturally, is to get the model "engine" running. If you are on Ubuntu, a straightforward path is to follow the guide 《ubuntu安装ollama并启动服务》 ("Installing Ollama on Ubuntu and Starting the Service"). Work through its steps until the service is running; this provides the backbone for everything that follows.
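Before moving on to the configuration, it is worth confirming that the service actually answers. The sketch below (an assumption-laden convenience, not part of openclaw itself) queries Ollama's `/api/tags` endpoint, which lists locally pulled models, on the default port 11434:

```python
import json
import urllib.request
import urllib.error

def ollama_reachable(base_url: str = "http://localhost:11434") -> bool:
    """Return True if the Ollama HTTP API answers on base_url."""
    try:
        # /api/tags lists locally pulled models; any successful
        # response means the service is up.
        with urllib.request.urlopen(f"{base_url}/api/tags", timeout=3) as resp:
            models = json.load(resp).get("models", [])
            print("Ollama is up, local models:", [m["name"] for m in models])
            return True
    except (urllib.error.URLError, OSError):
        print("Ollama service not reachable on", base_url)
        return False

if __name__ == "__main__":
    ollama_reachable()
```

If the model you plan to use does not appear in the list, pull it first (e.g. `ollama pull qwen3:8b`) before continuing.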
2. Configure openclaw.json
With the base environment in place, the main work is the configuration file. This takes a little patience, but once it is right, everything downstream goes smoothly. The structure below can be used as-is; the key parameters are called out afterwards:
{
  "meta": {
    "lastTouchedVersion": "2026.2.13",
    "lastTouchedAt": "2026-02-16T04:32:40.223Z"
  },
  "wizard": {
    "lastRunAt": "2026-02-16T03:32:01.664Z",
    "lastRunVersion": "2026.2.13",
    "lastRunCommand": "onboard",
    "lastRunMode": "local"
  },
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434",
        "apiKey": "ollama-local",
        "api": "ollama",
        "models": [
          {
            "id": "qwen3:8b",
            "name": "Qwen3 8b",
            "reasoning": false,
            "input": ["text"],
            "cost": {"input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0},
            "contextWindow": 1280000,
            "maxTokens": 128000
          }
        ]
      }
    }
  },
  "agents": {
    "defaults": {
      "workspace": "/home/stone/.openclaw/workspace",
      "compaction": {"mode": "safeguard"},
      "maxConcurrent": 4,
      "subagents": {"maxConcurrent": 8},
      "model": {
        "primary": "ollama/qwen3:8b",
        "fallbacks": ["ollama/qwen3:8b"]
      },
      "models": {"ollama": {}}
    }
  },
  "messages": {"ackReactionScope": "group-mentions"},
  "commands": {"native": "auto", "nativeSkills": "auto"},
  "gateway": {
    "port": 18789,
    "mode": "local",
    "bind": "loopback",
    "auth": {"mode": "token", "token": "__OPENCLAW_REDACTED__"},
    "tailscale": {"mode": "off", "resetOnExit": false},
    "nodes": {
      "denyCommands": [
        "camera.snap",
        "camera.clip",
        "screen.record",
        "calendar.add",
        "contacts.add",
        "reminders.add"
      ]
    }
  },
  "plugins": {
    "entries": {"qwen-portal-auth": {"enabled": true}}
  }
}
A few points worth noting: `baseUrl` must point at the locally running Ollama service; the model id `qwen3:8b` has to match the name of the model you actually pulled locally; and settings such as the workspace path and the gateway port can be adjusted to fit your own environment.
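The most common pitfall here is a mismatch between the provider's model `id` and the `primary`/`fallbacks` references under the agent defaults. A small check like the following can catch it early. This is only an illustrative sketch: the `config` literal is trimmed to the fields that matter, and the field paths simply mirror the openclaw.json above.

```python
import json

# Trimmed to the relevant fields; paths mirror the openclaw.json above.
config = json.loads("""
{
  "models": {
    "providers": {
      "ollama": {
        "baseUrl": "http://localhost:11434",
        "api": "ollama",
        "models": [{"id": "qwen3:8b", "name": "Qwen3 8b"}]
      }
    }
  },
  "agents": {
    "defaults": {
      "model": {"primary": "ollama/qwen3:8b", "fallbacks": ["ollama/qwen3:8b"]}
    }
  }
}
""")

def check_model_refs(cfg: dict) -> bool:
    """Verify that every model reference in the agent defaults resolves
    to a model declared under models.providers."""
    providers = cfg["models"]["providers"]
    model_cfg = cfg["agents"]["defaults"]["model"]
    refs = [model_cfg["primary"], *model_cfg["fallbacks"]]
    for ref in refs:
        provider, model_id = ref.split("/", 1)   # e.g. "ollama/qwen3:8b"
        ids = {m["id"] for m in providers[provider]["models"]}
        if model_id not in ids:
            print(f"unknown model reference: {ref}")
            return False
    print("all model references resolve")
    return True

check_model_refs(config)
```

If this prints an unknown reference, either fix the reference string or pull/declare the missing model before going further.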
3. Configure the openclaw agent
With the configuration file in place, the final step is to add and activate the agent. Run the following command:
openclaw agents add qwen3:8b
During this step, the tool will walk you through a few choices. The key point: when asked for the model type, select vllm; the remaining fields can be filled in from the corresponding parameters in openclaw.json above. Once this completes, the whole local-model pipeline is wired up end to end.
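As a final sanity check that the model behind the agent actually responds, you can call Ollama's `/api/chat` endpoint directly, bypassing openclaw. The sketch below assumes the default port and a locally pulled `qwen3:8b`, and degrades gracefully when the service is not running:

```python
import json
import urllib.request
import urllib.error

def build_chat_payload(model: str, prompt: str) -> dict:
    """Request body for Ollama's /api/chat endpoint (non-streaming)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }

def smoke_test(model: str = "qwen3:8b",
               base_url: str = "http://localhost:11434") -> None:
    payload = build_chat_payload(model, "Reply with the single word: ok")
    req = urllib.request.Request(
        f"{base_url}/api/chat",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=60) as resp:
            answer = json.load(resp)["message"]["content"]
            print("model replied:", answer.strip())
    except (urllib.error.URLError, OSError) as exc:
        print("could not reach Ollama:", exc)

if __name__ == "__main__":
    smoke_test()
```

A non-empty reply here confirms the Ollama side of the chain; any remaining issues then lie in the openclaw configuration rather than the model service.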