So, I need to use NixOS. At this point I've had too many years of installing random projects off of GitHub, adding random sources, and having no sanity about dependencies, and I never learn my lesson about containing a project's dependencies in symlinks, so I need a build system that does that stuff for me.
So after hacking away all day, here's my output. I'm hoping the average user looking to get into OpenClaw, who like a sane person with a budget doesn't want to spend money on a remote LLM and instead wants to run locally, and who is also running NixOS instead of just running the installer on Ubuntu or something... um, might find the output of my labor today helpful.
I have an NVIDIA card, so some of this is NVIDIA specific. You will need to make modifications if you're using AMD or, society forbid, an Intel graphics card.
Hopefully it doesn't matter, but I'm running NixOS version 24.11
There is a nix-openclaw repo, but it requires installing Determinate Nix, and if you ever wanted to follow normal NixOS guides again you'd be screwed, as it would completely take over your build system in favor of what is basically a fork of NixOS.
I use devbox heavily in this guide, so first order of business, in your /etc/nixos/configuration.nix:
# I don't know how much of this nvidia configuration is needed, but I use local GPU accelerated stuff enough that I have all of this.
services.xserver.videoDrivers = [ "nvidia" ];
hardware.nvidia = {
modesetting.enable = true;
open = false;
nvidiaSettings = true;
powerManagement.enable = false;
};
hardware.graphics.enable = true;
boot.blacklistedKernelModules = [ "nouveau" ];
hardware.enableRedistributableFirmware = true;
users.users.nerd2ninja = {
isNormalUser = true;
description = "nerd2ninja";
extraGroups = ["networkmanager" "wheel"];
packages = with pkgs; [
tor-browser
devbox
];
};

Why is tor-browser here? Oh, just in case your ISP is blocking docs.openclaw.ai/. Don't know why they'd do that. Not like I'm speaking from experience or anything. Just covering a weird edge case, I guess....
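One step the config above leaves implicit: none of it takes effect until you rebuild. This is the standard rebuild step, assuming a plain configuration.nix rather than a flake; the reboot suggestion is my hedge, since swapping nouveau out for the nvidia module doesn't always take cleanly on a live system.

```shell
# Apply the configuration change (needs root). A reboot afterwards is the
# safe bet so the nvidia module actually loads in place of nouveau.
sudo nixos-rebuild switch
# If the driver is live, this should list your card:
nvidia-smi
```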
anyway
Find yourself a nice little folder on a drive with some space. For reference, here's what my finished folder ended up at:
du -sh ollama_0_15_2/
7.2G	ollama_0_15_2/
So budget at least 7.2G.
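If you want to check before you start downloading, a quick df one-liner does it (GNU coreutils df, which NixOS has; the 8G threshold is just my 7.2G plus a little headroom):

```shell
# Sanity-check free space (in GiB) on the drive holding the current directory.
avail_kb=$(df --output=avail -k . | tail -n 1)
avail_gb=$((avail_kb / 1024 / 1024))
if [ "$avail_gb" -lt 8 ]; then
  echo "only ${avail_gb}G free here, pick a roomier drive"
else
  echo "${avail_gb}G free, plenty of room"
fi
```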
Go ahead and go to https://github.com/ollama/ollama/releases and download version 0.15.2 (the version this guide was written against).
Now I know what you're thinking. I thought it too. Hey, the binary is right there: ollama-linux-amd64.tar.zst. That should be it, right? No sir or ma'am. You're on NixOS. That binary expects the standard Linux filesystem hierarchy, and NixOS disrespects the Linux filesystem hierarchy. So you gotta build from source, buddy. Go ahead and download that source code archive.
unzip ollama-0.15.2.zip, or use file-roller, doesn't matter.
Now in this folder that you created to download the source files in, you're going to create a file called "devbox.json"
devbox.json =>
{
"packages": [
"go",
"git",
"cmake",
"gcc",
"pkg-config",
"nodejs_24@latest",
"signal-cli@latest",
"pnpm@latest",
"cudaPackages.cudatoolkit",
"cudaPackages.cudnn",
"google-chrome@latest"
],
"env": {
"CUDA_HOME": "/run/opengl-driver",
"LD_LIBRARY_PATH": "/run/opengl-driver/lib"
},
"shell": {
"init_hook": [
"echo 'Clawdbot Activate' > /dev/null",
"export HOME=$PWD/.home",
"export NPM_CONFIG_PREFIX=$PWD/.npm-global",
"export NODE_PATH=$PWD/.node_modules",
"export PATH=$NPM_CONFIG_PREFIX/bin:$PATH"
],
"scripts": {
"test": [
"echo \"Error: no test specified\" && exit 1"
]
}
}
}

Then, you're gonna do:
devbox shell

Ah, devbox shell doing most of the work. So nice. I didn't include a bunch of other stuff I could have just put in the init_hook... I don't know, I guess I just didn't want it running every time I loaded the shell. The rest of this we do inside the devbox shell.
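It's worth spelling out what that init_hook buys you, because it explains a path that shows up later: HOME gets pointed into the project folder, so npm globals and openclaw's config all stay local to it. A minimal sketch of the same trick, outside devbox:

```shell
# What the init_hook is doing, in isolation: everything that would normally
# land in ~/.something now lands inside the project folder instead.
export HOME="$PWD/.home"
export NPM_CONFIG_PREFIX="$PWD/.npm-global"
export PATH="$NPM_CONFIG_PREFIX/bin:$PATH"
mkdir -p "$HOME" "$NPM_CONFIG_PREFIX/bin"
echo "$HOME"   # prints <project>/.home, which is why the config later
               # lives at .home/.openclaw/openclaw.json
```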
cd ollama-0.15.2
npm install -g openshell@latest
# Was this necessary for GPU compatibility, or did I solve this in my devbox.json? Maybe I just forgot to run devbox shell when I was doing this part. IDK, but I include it just in case.
mkdir -p build
cd build
cmake .. -DLLAMA_VULKAN=ON
make -j$(nproc)
export CGO_ENABLED=1
cd ..
# $(pwd) must be the source root here; run from inside build/ it would
# point at build/build/lib/ollama, which doesn't exist.
export OLLAMA_LIBRARY_PATH="$(pwd)/build/lib/ollama"
# end instruction in question
export CGO_ENABLED=1
export OLLAMA_VULKAN=1
go build \
-ldflags="-X github.com/ollama/ollama/version.Version=0.15.2" \
-o ollama .
# Now we have to make a small compatibility change. See, ollama does this neat thing where you can "ollama launch clawdbot" and that will run the OpenClaw gateway, but it's not called clawdbot anymore, it's called openclaw now. So it's just a simple symlink:
ln -s openclaw .npm-global/bin/clawdbot
# Alias the ollama command to the location of our binary:
alias ollama="$PWD/ollama"

Now remember, ollama won't use your GPU if you try to run it outside of the devbox shell. However, as long as you're in the devbox shell, you should be able to run
ollama serve
This will run the ollama server over localhost. You now need to open a new terminal window in the same directory, not forgetting to devbox shell in the new window so you keep your environment.
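A quick way to confirm the server from the other terminal is actually listening (ollama serves an HTTP API on port 11434 by default, and /api/version is its version endpoint):

```shell
# Quick check that ollama serve in the other terminal is actually up.
if curl -sf http://127.0.0.1:11434/api/version >/dev/null; then
  echo "ollama is up"
else
  echo "ollama is not reachable on 11434"
fi
```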
So I ran
ollama pull qwen2.5-coder:7b

But you can obviously use any LLM that ollama can pull for this.
To do some basic initial setup, we need to run
openclaw configure

Don't get too fancy. Just select local for the first option, then run a health check in case I missed stuff in this guide, then hit continue on the last option to exit.
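One thing to grab before editing the config in the next step: the executablePath field in it wants the full store path of google-chrome. The note in my config says to use `whereis google-chrome`; `command -v` also works and prints just the path, which is what you want to paste. Run it inside the devbox shell, since that's where google-chrome is installed:

```shell
# Resolve the chrome binary's path for the "executablePath" field.
chrome_path=$(command -v google-chrome || true)
echo "${chrome_path:-google-chrome not on PATH; are you inside the devbox shell?}"
```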
Let's set some openclaw configuration.
xed .home/.openclaw/openclaw.json

{
"meta": {
"lastTouchedVersion": "2026.1.30",
"lastTouchedAt": "2026-01-31T22:28:49.012Z"
},
"wizard": {
"lastRunAt": "2026-01-31T22:28:49.007Z",
"lastRunVersion": "2026.1.30",
"lastRunCommand": "onboard",
"lastRunMode": "local"
},
"models": {
"providers": {
"ollama": {
"baseUrl": "http://127.0.0.1:11434/v1",
"apiKey": "ollama-local",
"api": "openai-completions",
"models": [
{
"id": "qwen2.5-coder:7b",
"name": "qwen2.5-coder:7b",
"reasoning": false,
"input": [
"text"
],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 131072,
"maxTokens": 16384
}
]
}
}
},
"agents": {
"defaults": {
"model": {
"primary": "ollama/qwen2.5-coder:7b"
},
"workspace": "$PWD/.home/.openclaw/workspace",
"maxConcurrent": 4,
"subagents": {
"maxConcurrent": 8
}
}
},
"messages": {
"ackReactionScope": "group-mentions"
},
"commands": {
"native": "auto",
"nativeSkills": "auto"
},
"gateway": {
"port": 18789,
"mode": "local",
"bind": "loopback",
"auth": {
"mode": "token",
"token": "Put what is in your config file already in this spot. If none exists, we missed something"
},
"tailscale": {
"mode": "off",
"resetOnExit": false
},
"remote": {
"token": "local-dev-token"
}
},
"skills": {
"install": {
"nodeManager": "pnpm"
}
},
"browser": {
"enabled": true,
"cdpUrl": "http://127.0.0.1:18792",
"remoteCdpTimeoutMs": 1500,
"remoteCdpHandshakeTimeoutMs": 3000,
"defaultProfile": "chrome",
"color": "#FF4500",
"headless": false,
"noSandbox": false,
"attachOnly": false,
"executablePath": "/nix/store/z4awqzi2hdkm8cdr75dhq80nqz6m3dqf-profile/bin/google-chrome", //Delete this note, but you need to paste the output of `whereis google-chrome` here.
"profiles": {
"openclaw": { "cdpPort": 18800, "color": "#FF4500" },
"work": { "cdpPort": 18801, "color": "#0066CC" },
"remote": { "cdpUrl": "http://10.0.0.42:9222", "color": "#00AA00" }
}
},
"hooks": {
"internal": {
"enabled": true,
"entries": {
"boot-md": {
"enabled": true
},
"command-logger": {
"enabled": true
},
"session-memory": {
"enabled": true
}
}
}
}
}

Now, because we solved an important compatibility issue earlier, you should be able to run:
ollama launch clawdbot

This will run the gateway. But wait, because now you need to open a new terminal window to interact with the two localhost servers you just launched, and devbox shell into it as well when you do.
The two ways I have been able to usefully use this thing so far have been:

openclaw agent --agent main --session-id main --message "Hey LLM, how are you today?"

and this next part will be a little freaky if you're not used to bots opening web browsers on you:
openclaw browser --browser-profile chrome tabs
openclaw browser --browser-profile openclaw start
openclaw browser --browser-profile openclaw open https://stacker.news
openclaw browser --browser-profile openclaw snapshot

That's everything I've done so far. I know there's a lot more, but this took a while to do, so hopefully someone other than myself can benefit from all of this.