
How I Got OpenClaw Running on My ChatGPT Subscription After Hours of Bad Advice

A practical setup guide for self-hosting OpenClaw on a Hostinger VPS using your ChatGPT subscription via Codex OAuth, plus what I wish I'd known an hour sooner.

This past Saturday, I sat down to set up OpenClaw on a Hostinger VPS. The plan was simple. Spin up a small server, point OpenClaw at my existing ChatGPT subscription via OAuth, and call it a day. No API keys. No per-token billing. Just my own agent, running on a small box, talking to GPT-5 through credentials I already had.

It took two hours.

About 10 minutes of that was the actual setup. The other hour and fifty minutes was an AI assistant in a different chat tab confidently sending me in circles.

I want to walk through what actually works, because almost nothing online about this exact configuration is current, and the bad advice I got was the kind that sounds correct until you read the docs. If you’re trying to do the same thing, this post should save you the detour.

What this is and why I wanted it

OpenClaw is an open source AI agent you self-host. It can run on any surface — your local machine, a VPS, Docker on a remote server — and exposes a chat UI, supports messaging channels like Slack and Telegram, and can talk to a long list of model providers. One of those providers is OpenAI’s Codex OAuth path, which lets you use your ChatGPT subscription quota directly, without a separate API key.

I already pay for ChatGPT. I’d rather route my agent’s traffic through that quota than spin up a parallel API account just to run my own bot. Hostinger has a 1-click OpenClaw template that drops the whole stack on a VPS in about 60 seconds. So in theory, between the template and the OAuth flow, this should be a five-minute job.

The catch is that the Hostinger template ships with a .env file that pre-bakes some defaults, including a Gemini API key I had added during the initial setup flow, which, little did I know, would later get in the way of using my ChatGPT subscription. The Hostinger UI doesn't surface the ChatGPT subscription option by default, so switching to it means undoing some of that defaulting, and that's where I ran into trouble.

The setup I ended up with

Here’s the working version, distilled. If you just want the answer, this is it.

Strip your .env down to the bare minimum. Mine looks like this:

PORT=51329
TZ=America/New_York
TRAEFIK_HOST=your-vps-host.example.cloud

No GEMINI_API_KEY. No OPENCLAW_GATEWAY_TOKEN. Just port, timezone, and the Traefik host so the reverse proxy keeps working. Save it.

Then wipe the data directory, recreate it with correct ownership, bring the container up clean, and run the onboarding wizard with the Codex OAuth flag.

cd /docker/openclaw-yourname
docker compose down
rm -rf data
mkdir -p data
chown -R 1000:1000 data
docker compose up -d --force-recreate
sleep 10
docker compose exec -it openclaw openclaw onboard --auth-choice openai-codex

The wizard walks you through OAuth (you authorize OpenClaw against your ChatGPT account in your laptop browser), suggests a default model, and writes a clean config file. After that, openclaw models status should show your OAuth profile authenticated, an openai-codex/gpt-X model as the default, and a healthy catalog. Open the chat UI, paste the new gateway token from the freshly written config, send a message, done.

That sequence takes about ten minutes if you've never done it before. It took me two hours.

Why it took two hours

The other AI told me my model name should be openai-codex/gpt-5.5. The model existed in OpenClaw’s static catalog but was not actually exposed to my account through OAuth. The AI didn’t know that, but it sounded sure. It told me to set the primary model with openclaw config set agents.defaults.model.primary openai-codex/gpt-5.5. The command saved fine, the file changed on disk, and then on every restart OpenClaw silently fell back to Gemini. The AI’s diagnosis: “the cache hasn’t picked it up yet, run another restart.” It said that three times.

Eventually I asked it to actually look at the docs. It then told me there was a model called openai-codex/gpt-4o, which it confidently described as “the current best Codex subscription model.” That model name does not exist in OpenClaw’s actual provider catalog. The agent had hallucinated it.

I should have checked the docs myself the first time the same fix didn’t work twice. That’s the lesson.

There were two real problems behind all of this, and once I understood them, the fix took about ten minutes.

Problem one: the .env file regenerates your config on every boot

The Hostinger OpenClaw image ships with an entrypoint script that reads environment variables and writes the config file from them when the container starts. Even when you delete the entire data/ directory, the next container start regenerates a fresh openclaw.json with whatever your .env says.
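I haven't read the template's actual entrypoint, but its observable behavior is easy to model. Here's a sketch in my own words; the function body and the JSON shape are my reconstruction, and only the environment variable names come from the real .env:

```shell
# Illustrative model of the entrypoint's config-seeding behavior.
# The variable names mirror the .env keys; the function itself is my
# reconstruction, not the actual script shipped in the image.
seed_config() {
  config_dir="${CONFIG_DIR:-/data/.openclaw}"
  mkdir -p "$config_dir"
  # Runs on every container start: anything present in the environment
  # wins, so deleting data/ alone never gives you a clean slate.
  token="${OPENCLAW_GATEWAY_TOKEN:-$(head -c 16 /dev/urandom | od -An -tx1 | tr -d ' \n')}"
  printf '{"gateway":{"auth":{"token":"%s"}},"geminiApiKey":"%s"}\n' \
    "$token" "${GEMINI_API_KEY:-}" > "$config_dir/openclaw.json"
}
```

The point of the sketch is the fallback logic: only when OPENCLAW_GATEWAY_TOKEN is absent does the token default to something random, which is why stripping the .env is what makes a reset actually fresh.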

In my case, the project’s .env had something like:

OPENCLAW_GATEWAY_TOKEN=z9JvgEcw...
GEMINI_API_KEY=AIzaSyCo...

So every “fresh install” came up with the same gateway token (not actually fresh) and a Gemini configuration sitting in the model allowlist (not actually clean). I wasted an hour believing I had a clean slate when I didn’t.

The fix is straightforward once you see it. Strip the .env down before resetting the data directory. With nothing to seed from, the entrypoint writes a true blank config, the gateway picks a fresh random token, and the model allowlist starts empty. Onboarding then fills in only what you ask for.
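A one-line grep confirms the strip actually took before you touch the data directory. The demo below runs against a throwaway copy; in practice point it at /docker/openclaw-yourname/.env. Anything it prints is a leftover that will keep seeding your "fresh" installs:

```shell
# Demo on a throwaway copy; in practice run the grep against your real
# .env. It prints every line that is not one of the three keepers
# (or a comment or blank line).
cat > /tmp/env-strip-demo <<'EOF'
PORT=51329
TZ=America/New_York
TRAEFIK_HOST=your-vps-host.example.cloud
GEMINI_API_KEY=AIzaSyCo-leftover
EOF
grep -vE '^(#|$|PORT=|TZ=|TRAEFIK_HOST=)' /tmp/env-strip-demo
```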

Problem two: OpenClaw protects itself from broken configs, and that protection looks like a bug

OpenClaw has a feature called “last-known-good” config. After every successful boot, it snapshots your config to openclaw.json.last-good. If a later edit fails validation, or the file shrinks dramatically, or critical fields like gateway.mode go missing, OpenClaw moves the broken file aside as openclaw.json.clobbered.<timestamp> and restores the last-good version. It then sends a warning to the next agent turn so the agent doesn’t blindly rewrite the restored config.

This is genuinely good engineering. It saved me from breaking my OAuth profile when I accidentally truncated the file in a bad nano session. But if you don’t know it’s there, it looks like the gateway is mysteriously rejecting your edits.

The diagnostic command that revealed this:

docker compose logs openclaw 2>&1 | grep -iE "clobber|last-known|restore"

It returned exactly one line: Config auto-restored from backup: /data/.openclaw/openclaw.json (size-drop-vs-last-good:3883->1813). My edit had shrunk the file by 53%, OpenClaw saw that as destructive, rolled back, and warned the agent. The agent then dutifully reported “I’m running on Gemini instead of your default model today” because that is literally what the restored config said.

If your edits keep evaporating, check that log filter first. The clobber message tells you the exact reason, which is more than the chat error messages do.
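For intuition, the size-drop trigger in that log line behaves like a simple ratio check. This is my own model with a guessed one-half threshold; OpenClaw's real cutoff isn't documented in the message:

```shell
# Hypothetical reconstruction of the size-drop guard. The one-half
# threshold is an illustration, not OpenClaw's documented cutoff.
size_drop_suspicious() {
  old_size=$1
  new_size=$2
  # exit 0 (suspicious) when the new config lost more than half its bytes
  [ $(( new_size * 2 )) -lt "$old_size" ]
}
```

With the numbers from my log line, size_drop_suspicious 3883 1813 comes back true, which matches the rollback I saw.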

The actual working flow, with the gotchas baked in

Putting it together, here’s the full path that worked end to end.

Step 1. Edit .env to the minimum. From the Hostinger Docker Manager UI, or by editing /docker/openclaw-yourname/.env directly, strip everything except PORT, TZ, and TRAEFIK_HOST. Save.

Step 2. Back up your existing data. The wipe in the next step is irreversible, so take this snapshot first:

cd /docker/openclaw-yourname
tar -czf ~/openclaw-backup-$(date +%Y%m%d).tar.gz data/

Step 3. Wipe and recreate the data directory.

docker compose down
rm -rf data
mkdir -p data
chown -R 1000:1000 data
docker compose up -d --force-recreate

The chown step matters. The container runs as UID 1000, and a directory owned by root will fail to write the config on first boot.

Step 4. Verify the new config is actually clean.

docker compose exec openclaw cat /data/.openclaw/openclaw.json

You should see a small file (around 1,500 bytes) with agents.defaults.models: {} empty, plugins: {} empty, no auth.profiles section, and a fresh random token under gateway.auth.token. If you see any Gemini entries, or the same gateway token you had before, the .env strip didn't take; go back and recheck the .env before continuing.
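A grep can automate that eyeball check. The patterns are heuristics from what leftover state looked like in my own configs, not a documented schema, and the demo writes a sample file; on the VPS, point it at data/.openclaw/openclaw.json:

```shell
# Demo against a sample file; on the VPS check data/.openclaw/openclaw.json.
# The patterns are heuristics from my own leftover configs, not a schema.
CONFIG=/tmp/openclaw-clean-demo.json
printf '{"gateway":{"auth":{"token":"fresh-random"}},"plugins":{},"agents":{"defaults":{"models":{}}}}\n' > "$CONFIG"
if grep -qiE 'gemini|"profiles"' "$CONFIG"; then
  echo "leftover state detected -- recheck .env"
else
  echo "config looks clean"
fi
```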

Step 5. Run the wizard with the Codex OAuth choice baked in.

docker compose exec -it openclaw openclaw onboard --auth-choice openai-codex

Walk through the prompts. The OAuth flow gives you either a long URL or a device code. Open the URL in your laptop browser, log in with your ChatGPT account, authorize, and return to the terminal. When it asks about plugins, search providers, or messaging channels, pick "Skip for now" for everything. Get the basic chat working first. You can add Brave search, Slack, Telegram, and whatever else later, as separate steps. Each one you enable during the wizard is one more thing that can fail and confuse your debugging.

Step 6. Verify OAuth saved.

docker compose exec openclaw openclaw models status
docker compose exec openclaw openclaw models list --provider openai-codex

In my case, the wizard picked openai-codex/gpt-5.4 as the default. The status output showed my email tied to an OAuth profile with a 10-day expiry and 100% quota available. The catalog list showed gpt-5.4 as default,configured with auth yes. That’s healthy.

If your catalog only shows one model tagged missing, your OAuth completed but the catalog refresh didn’t, and you’re hitting the same dead end I did. Try a full container recreate (docker compose down && docker compose up -d --force-recreate) and rerun onboarding. Don’t try to patch around it with config edits.

Step 7. Use the new gateway token to log in to the Control UI.

The token is in data/.openclaw/openclaw.json under gateway.auth.token. Paste it into the Hostinger-provided URL in your browser. Start a new chat. Send a one-word message (“hi” works). The model header should show gpt-5.4 (or whichever Codex model the wizard picked).
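Rather than scrolling through the JSON by hand, grep and cut can pull the token out. The demo writes a sample file; on the VPS, run the last line against data/.openclaw/openclaw.json, and note the assumption that gateway.auth.token is the first "token" key in the file (verify by eye the first time):

```shell
# Demo with a sample config; on the VPS point CONFIG at
# data/.openclaw/openclaw.json. Assumes gateway.auth.token is the
# first "token" key in the file.
CONFIG=/tmp/openclaw-token-demo.json
printf '{\n  "gateway": { "auth": { "token": "z9JvgEcw-example" } }\n}\n' > "$CONFIG"
grep -o '"token": *"[^"]*"' "$CONFIG" | head -n1 | cut -d'"' -f4
```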

That’s it. You’re running on your ChatGPT subscription with no per-token billing.

What I’d do differently next time

Three things stand out.

First, I’d ignore the Hostinger template’s default .env entirely on day one. The Gemini key and pinned gateway token make sense if you actually want Gemini. They silently fight you if you want anything else. Strip the file before the first container boot, not after.

Second, I’d run openclaw doctor once before starting. It tells you the config path, the agent state, and any validation warnings in one command. I didn’t run it until an hour in, and it would have flagged half my problems immediately.

Third, and this is the meta-lesson: I’d treat AI-generated terminal commands the way I treat AI-generated code. If the same fix doesn’t work twice, the diagnosis is wrong, not the cache. An AI that’s confidently wrong sounds identical to an AI that’s confidently right. The only thing that distinguishes them is the actual outcome. When the outcome doesn’t match the prediction, that’s the signal to drop the chat and read the docs.

The flip side is worth noting too. When I switched to a different AI tool that actually opened the docs, ran diagnostics, and said “I don’t know yet, let me look at the actual log message,” I had a working setup in under twenty minutes. Same model class, different prompt, completely different behavior. The tool wasn’t the problem. The way I was using it was.

A small security note

If you go through this exercise and any of your tokens end up in a chat window, treat them as compromised. I had to rotate a Slack bot token, an app-level token, a Brave Search key, a Gemini API key, and two gateway tokens at the end of all this, because various AI debugging sessions had pasted them back at me. Twenty minutes of revoking and regenerating in the respective dashboards. Cheap insurance.

OpenClaw stores secrets in ~/.openclaw/openclaw.json and in ~/.openclaw/agents/main/agent/auth-profiles.json. Don’t paste either file into a chat anywhere. If you need to share for debugging, redact the tokens first. The model names and the structure are the diagnostic value, not the credentials.

Try it

If you have a Hostinger VPS and a ChatGPT subscription, you can be running OpenClaw on your own subscription in under 15 minutes following the steps above. The hardest part is convincing yourself that the wizard’s defaults are actually fine and you don’t need to optimize anything before testing it.

Source code for OpenClaw is on GitHub. Hostinger’s template lives in their Docker manager catalog. Everything I described is current as of OpenClaw versions 2026.4.21 and 2026.4.24. Both worked end to end once the underlying config was clean.

If you hit a wall and an AI is telling you “you’re one fix away” for the third time in a row, close the chat, run openclaw doctor, and start with the actual error messages. They almost always contain the answer.

Tags: AI, Self-Hosting, Docker

