OpenClaw

Connect OpenClaw to the Bankr LLM Gateway.

Install the Bankr Skill

Install in OpenClaw

Tell your OpenClaw agent:

install the bankr skill from https://github.com/BankrBot/openclaw-skills

The Bankr skill gives your agent crypto trading, portfolio management, and DeFi capabilities — trade tokens, check balances, set automations (DCA, limit orders, stop-losses), query market data, and more.

Your agent can also help you set up the LLM gateway, configure access controls, and manage API keys — just ask it.

See Installing the Bankr Skill for full details and Available Skills for other skills you can add.

Quick Setup with Bankr CLI

The fastest way to configure OpenClaw is with the Bankr CLI:

# Auto-install Bankr provider into ~/.openclaw/openclaw.json
bankr llm setup openclaw --install

This writes the full provider config (base URL, API key, all models) into your OpenClaw config. If you're not logged in yet, run bankr login first.

To preview the config without writing it:

bankr llm setup openclaw
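
Putting it together, a typical first-time setup looks like this (the preview step is optional):

# Log in if you haven't already
bankr login

# Preview the generated provider config (optional)
bankr llm setup openclaw

# Write it into ~/.openclaw/openclaw.json
bankr llm setup openclaw --install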

Let your agent handle it

After installing the Bankr skill, your agent can walk you through login, LLM gateway configuration, and access controls. Just ask:

help me set up the Bankr LLM gateway

Your API key is tied to a wallet. Use a dedicated agent account rather than your personal wallet, and consider enabling read-only mode or IP allowlisting if the agent does not need to execute transactions.

Manual Configuration

Add the Bankr provider to your openclaw.json:

{
  models: {
    mode: "merge",
    providers: {
      bankr: {
        baseUrl: "https://llm.bankr.bot",
        apiKey: "${BANKR_LLM_KEY}",
        api: "openai-completions",
        models: [
          // Gemini (cost: USD per million tokens)
          { id: "gemini-2.5-flash", name: "Gemini 2.5 Flash", input: ["text","image"], contextWindow: 1048576, maxTokens: 65535, cost: { input: 0.15, output: 0.6, cacheRead: 0.0375, cacheWrite: 0.15 } },
          { id: "gemini-2.5-pro", name: "Gemini 2.5 Pro", input: ["text","image"], contextWindow: 1048576, maxTokens: 65536, cost: { input: 1.25, output: 10.0, cacheRead: 0.3125, cacheWrite: 1.25 } },
          { id: "gemini-3-flash", name: "Gemini 3 Flash", input: ["text","image"], contextWindow: 1048576, maxTokens: 65535, cost: { input: 0.15, output: 0.6, cacheRead: 0.0375, cacheWrite: 0.15 } },
          { id: "gemini-3-pro", name: "Gemini 3 Pro", input: ["text","image"], contextWindow: 1048576, maxTokens: 65536, cost: { input: 1.25, output: 10.0, cacheRead: 0.3125, cacheWrite: 1.25 } },
          // Claude (api override: uses Anthropic Messages format)
          { id: "claude-opus-4.6", name: "Claude Opus 4.6", input: ["text","image"], contextWindow: 1000000, maxTokens: 128000, api: "anthropic-messages", cost: { input: 15.0, output: 75.0, cacheRead: 1.5, cacheWrite: 18.75 } },
          { id: "claude-opus-4.5", name: "Claude Opus 4.5", input: ["text","image"], contextWindow: 200000, maxTokens: 64000, api: "anthropic-messages", cost: { input: 15.0, output: 75.0, cacheRead: 1.5, cacheWrite: 18.75 } },
          { id: "claude-sonnet-4.5", name: "Claude Sonnet 4.5", input: ["text","image"], contextWindow: 1000000, maxTokens: 64000, api: "anthropic-messages", cost: { input: 3.0, output: 15.0, cacheRead: 0.3, cacheWrite: 3.75 } },
          { id: "claude-haiku-4.5", name: "Claude Haiku 4.5", input: ["text","image"], contextWindow: 200000, maxTokens: 64000, api: "anthropic-messages", cost: { input: 0.8, output: 4.0, cacheRead: 0.08, cacheWrite: 1.0 } },
          // OpenAI
          { id: "gpt-5.2", name: "GPT-5.2", input: ["text"], contextWindow: 400000, maxTokens: 128000, cost: { input: 2.5, output: 10.0, cacheRead: 1.25, cacheWrite: 2.5 } },
          { id: "gpt-5.2-codex", name: "GPT-5.2 Codex", input: ["text"], contextWindow: 400000, maxTokens: 128000, cost: { input: 2.5, output: 10.0, cacheRead: 1.25, cacheWrite: 2.5 } },
          { id: "gpt-5-mini", name: "GPT-5 Mini", input: ["text"], contextWindow: 400000, maxTokens: 128000, cost: { input: 0.4, output: 1.6, cacheRead: 0.2, cacheWrite: 0.4 } },
          { id: "gpt-5-nano", name: "GPT-5 Nano", input: ["text"], contextWindow: 400000, maxTokens: 128000, cost: { input: 0.1, output: 0.4, cacheRead: 0.05, cacheWrite: 0.1 } },
          // Other
          { id: "kimi-k2.5", name: "Kimi K2.5", input: ["text"], contextWindow: 262144, maxTokens: 65535, cost: { input: 0.6, output: 2.4, cacheRead: 0.09, cacheWrite: 0.6 } },
          { id: "qwen3-coder", name: "Qwen3 Coder", input: ["text"], contextWindow: 262144, maxTokens: 65536, cost: { input: 0.3, output: 1.2, cacheRead: 0.15, cacheWrite: 0.3 } }
        ]
      }
    }
  }
}
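
The apiKey field above references the BANKR_LLM_KEY environment variable. Assuming OpenClaw expands ${...} placeholders from the environment (as the config implies), export the key in the shell that launches OpenClaw; the value below is a placeholder:

# Placeholder value; use the API key from your Bankr account
export BANKR_LLM_KEY="<your-bankr-llm-api-key>"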

Set as Default Model

{
  agents: {
    defaults: {
      model: {
        primary: "bankr/claude-opus-4.6"
      }
    }
  }
}
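
Model references use the bankr/<model-id> form, so any id from the models list above can be set as the primary. For example, a cheaper default for routine work might look like this (the model choice is just an illustration):

{
  agents: {
    defaults: {
      model: {
        primary: "bankr/gemini-3-flash"
      }
    }
  }
}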

Per-Model API Format

The provider-level api: "openai-completions" is the default for all models. Claude models override this with api: "anthropic-messages" at the model level, so OpenClaw automatically uses the right API format for each model.

This is handled automatically by bankr llm setup openclaw. If configuring manually, add api: "anthropic-messages" to each Claude model entry (see the manual config above).
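
For reference, here is the relevant shape in isolation, trimmed down from the full config above: the provider sets the default API format, and each Claude entry carries its own override.

{
  models: {
    providers: {
      bankr: {
        api: "openai-completions",   // default for every model in this provider
        models: [
          { id: "claude-opus-4.6", api: "anthropic-messages" /* ...other properties as above... */ }
        ]
      }
    }
  }
}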

Model Properties Reference

Property         Description
id               Model identifier used in requests
name             Display name in the OpenClaw UI
input            Supported input types: ["text"] or ["text", "image"]
contextWindow    Maximum input tokens
maxTokens        Maximum output tokens
cost.input       USD per million input tokens
cost.output      USD per million output tokens
cost.cacheRead   USD per million cache-read tokens
cost.cacheWrite  USD per million cache-write tokens
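
As a concrete reading of the table, here is the gemini-3-flash entry from the config above with each property annotated (comments assume the JSON5-style syntax used throughout this page):

{
  id: "gemini-3-flash",       // model identifier used in requests
  name: "Gemini 3 Flash",     // display name in the OpenClaw UI
  input: ["text", "image"],   // accepts text and image input
  contextWindow: 1048576,     // max input tokens
  maxTokens: 65535,           // max output tokens
  cost: {
    input: 0.15,              // USD per million input tokens
    output: 0.6,              // USD per million output tokens
    cacheRead: 0.0375,        // USD per million cache-read tokens
    cacheWrite: 0.15          // USD per million cache-write tokens
  }
}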

Testing

curl https://llm.bankr.bot/v1/chat/completions \
-H "Content-Type: application/json" \
-H "X-API-Key: $BANKR_LLM_KEY" \
-d '{"model": "gemini-3-flash", "messages": [{"role": "user", "content": "Hello!"}]}'
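
Assuming the gateway returns the standard OpenAI chat-completions response shape (which the openai-completions setting above suggests), you can extract just the reply text with jq:

curl -s https://llm.bankr.bot/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $BANKR_LLM_KEY" \
  -d '{"model": "gemini-3-flash", "messages": [{"role": "user", "content": "Hello!"}]}' \
  | jq -r '.choices[0].message.content'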

Troubleshooting

Model not found: Make sure the model id in your request exactly matches one of the ids the gateway expects (the ids listed in the provider config above).

429 Rate Limited: You've exceeded 60 requests/minute. Wait and retry.
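
For scripted use, recent curl versions treat 429 as a transient error, so curl's built-in retry flags give you a simple backoff (the retry counts here are arbitrary):

# Retry up to 3 times, waiting 5 seconds between attempts
curl -s --retry 3 --retry-delay 5 https://llm.bankr.bot/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "X-API-Key: $BANKR_LLM_KEY" \
  -d '{"model": "gemini-3-flash", "messages": [{"role": "user", "content": "Hello!"}]}'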