Max Mode
Override the default agent LLM with any model from the gateway.
By default, the Bankr agent uses Gemini 3 Flash — a fast, cost-effective model. Max Mode lets you swap it for any model available in the LLM Gateway (Claude Opus, GPT-5.4, Gemini 3.1 Pro, etc.) so the agent uses a more capable model for complex tasks.
Usage is billed per-token from your LLM credit balance.
Max Mode requires LLM credits. Top up at bankr.bot/llm?tab=credits before enabling; if your balance runs out, the agent falls back to the default model.
How It Works
- You enable Max Mode and pick a model
- Every agent message uses that model instead of the default
- Token usage (input + output) is tracked and billed at the model's rate
- A usage badge appears below each response showing the model, token count, and cost
Your Max Mode setting is saved to your wallet and applies across all platforms — web terminal, Farcaster, Twitter/X, Telegram, XMTP, and automations.
Using Max Mode in the Web Terminal
- Go to bankr.bot and open the chat
- Click the Max button above the input box to enable it (turns purple when active)
- Click the model name next to it to open the model picker
- Search or filter by provider, then select a model
- Start chatting — the agent will use your chosen model
To disable, click the Max button again. Your model selection is remembered for next time.
Hover the ⓘ icon next to the Max button for a quick explainer.
Using Max Mode in the CLI
Pass the --model (or -m) flag to any prompt:
bankr "analyze my portfolio" --model claude-opus-4.6
bankr "what are the top memecoins today?" -m gemini-3.1-pro
You can combine it with thread continuation:
bankr "tell me more" --continue --model claude-sonnet-4.6
Available CLI Models
bankr llm models # List all available models with pricing
Currently supported --model values for the CLI:
| Model | Provider | Input/M | Output/M |
|---|---|---|---|
| claude-opus-4.7 | Anthropic | $5.00 | $25.00 |
| claude-opus-4.6 | Anthropic | $5.00 | $25.00 |
| claude-sonnet-4.6 | Anthropic | $3.00 | $15.00 |
| gemini-3.1-pro | Google | $2.00 | $12.00 |
The web terminal supports all gateway models. CLI model support is expanding.
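To compare what these rates mean in practice, the table can be applied to a sample request. This is a sketch: the per-million rates are copied from the table above, while the 10,000-input / 2,000-output token counts are an arbitrary example, not a typical request size.

```python
# Per-million-token rates, copied from the pricing table above.
RATES = {
    "claude-opus-4.7":   {"input": 5.00, "output": 25.00},
    "claude-opus-4.6":   {"input": 5.00, "output": 25.00},
    "claude-sonnet-4.6": {"input": 3.00, "output": 15.00},
    "gemini-3.1-pro":    {"input": 2.00, "output": 12.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request at a model's listed rates."""
    r = RATES[model]
    return (input_tokens / 1_000_000) * r["input"] \
         + (output_tokens / 1_000_000) * r["output"]

# Example: a 10,000-token prompt with a 2,000-token response.
for model in RATES:
    print(f"{model}: ${estimate_cost(model, 10_000, 2_000):.4f}")
```

Opus-class models come out at roughly double the Sonnet cost and more than double the Gemini cost for the same request.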
Pricing
Each model has its own per-million-token rate for input and output. You can see pricing in:
- The model picker modal in the web terminal (shown next to each model)
- The Models tab at bankr.bot/llm
- The CLI, via bankr llm models
Cost is calculated as:
cost = (input_tokens / 1,000,000) × input_rate
+ (output_tokens / 1,000,000) × output_rate
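As a worked example (the token counts are arbitrary; the rates are claude-sonnet-4.6's from the table above), a message with 10,000 input tokens and 2,000 output tokens costs:
cost = (10,000 / 1,000,000) × $3.00 + (2,000 / 1,000,000) × $15.00
     = $0.03 + $0.03
     = $0.06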
After each Max Mode response, a usage badge shows the exact token count and cost.
Managing Credits
bankr llm credits # Check your balance
bankr llm credits add 25 # Add $25 (USDC)
bankr llm credits auto --enable # Enable auto top-up
Or manage credits in the browser at bankr.bot/llm?tab=credits.
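Auto top-up itself runs on Bankr's side, but the underlying behavior is simple threshold logic. A minimal sketch of that idea; the $5.00 threshold and $25.00 top-up amount here are hypothetical illustration values, not Bankr's actual auto top-up parameters:

```python
def topup_amount(balance_usd: float,
                 threshold_usd: float = 5.00,
                 topup_usd: float = 25.00) -> float:
    """Return how many USD of credits to add, or 0.0 if the balance is healthy.

    threshold_usd and topup_usd are illustrative defaults, not Bankr's
    real auto top-up parameters.
    """
    return topup_usd if balance_usd < threshold_usd else 0.0

print(topup_amount(3.20))   # low balance: would trigger a top-up
print(topup_amount(12.00))  # healthy balance: no top-up
```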
See LLM Gateway Overview for full credit management docs.
FAQ
Do I need Bankr Club to use Max Mode? No. Max Mode is an alternative to Bankr Club — either one unlocks the agent. You can use both together.
Does Max Mode work with automations? Yes. Your Max Mode setting applies globally, including scheduled automations and social platform interactions.
What happens if I run out of credits? The agent will fall back to the default model (Gemini 3 Flash) and your message will still be processed. Enable auto top-up to avoid interruptions.
Can I change models mid-conversation? Yes. Each message uses whatever model is active when you send it. You can switch models between messages freely.