โ† Back to Articles
General4 min read


ClawMakers Team

Mastering OpenClaw Model Selection: A Developer's Guide

Choosing the right AI model is crucial for getting optimal performance from your OpenClaw agent. OpenClaw's flexible model system allows you to configure primary models, fallbacks, and per-agent overrides to ensure your AI assistant works reliably across different scenarios.

How Model Selection Works

OpenClaw follows a clear hierarchy when selecting which model to use for processing your requests:

  1. Primary model - The default model specified in your configuration (agents.defaults.model.primary)
  2. Fallback models - A prioritized list of backup models that activate when the primary is unavailable
  3. Provider auth failover - Internal retries within the same provider before moving to the next model

This cascading approach ensures high availability and graceful degradation when service interruptions occur.

Configuring Your Model Setup

The core model configuration happens in your ~/.openclaw/openclaw.json file. Here's the essential structure:

{
  "agents": {
    "defaults": {
      "model": {
        "primary": "anthropic/claude-opus-4-6",
        "fallbacks": [
          "minimax/MiniMax-M2.1",
          "openrouter/qwen/qwen3-235b-a22b-2507"
        ]
      },
      "imageModel": {
        "primary": "openrouter/qwen/qwen-2.5-vl-72b-instruct:free",
        "fallbacks": [
          "openrouter/google/gemini-2.0-flash-vision:free"
        ]
      },
      "models": {
        "anthropic/claude-opus-4-6": { "alias": "opus" },
        "minimax/MiniMax-M2.1": { "alias": "minimax" }
      }
    }
  }
}

The agents.defaults.models section serves as your allowed model catalog (an allowlist) and lets you create convenient aliases for frequently used models.
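Once defined, an alias stands in for the full model reference; for example, the minimax alias from the configuration above can be used directly with the /model command:

```
/model minimax   # equivalent to /model minimax/MiniMax-M2.1
```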

Built-in Model Aliases

OpenClaw provides several convenient built-in aliases that map to popular models:

| Alias | Default Model |
|-------|---------------|
| opus | anthropic/claude-opus-4-6 |
| sonnet | anthropic/claude-sonnet-4-5 |
| gpt | openai/gpt-5.2 |
| gpt-mini | openai/gpt-5-mini |
| gemini | google/gemini-3-pro-preview |

These aliases work automatically unless you override them in your configuration. Your custom aliases always take precedence over the built-in ones.
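As a sketch of how an override works, defining the same alias name in your own catalog repoints it; here the built-in gpt alias is remapped to a smaller model (the target model is illustrative):

```json
{
  "agents": {
    "defaults": {
      "models": {
        "openai/gpt-5-mini": { "alias": "gpt" }
      }
    }
  }
}
```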

Switching Models at Runtime

You can dynamically change models during your session using the /model command:

/model                 # List available models with numbers
/model list           # Alternative listing command
/model 3              # Select model #3 from the list
/model openai/gpt-5.2 # Use full model reference
/model status         # Detailed view with auth and endpoint info

When typing model references directly, remember that OpenClaw splits them on the first /: the text before the slash is the provider, and everything after it is the model ID. Always use the complete provider/model format when specifying models.
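This matters for references that contain more than one slash, since only the first one delimits the provider:

```
/model openrouter/qwen/qwen3-235b-a22b-2507
# provider: openrouter
# model:    qwen/qwen3-235b-a22b-2507
```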

Testing Your Configuration

After configuring your models, test that everything works with these CLI commands:

# Check current model status
openclaw models status

# List all configured models
openclaw models list

# Switch to a specific model to verify it works
openclaw models set anthropic/claude-sonnet-4-5

The models status command is particularly useful as it shows your resolved primary model, fallbacks, image model, and provides an authentication overview of your configured providers.

Troubleshooting Common Issues

"Model is not allowed" Error

If you receive this error, it means the model you're trying to use isn't in your configured allowlist. You have three options:

  1. Add the missing model to your agents.defaults.models configuration
  2. Remove the agents.defaults.models section entirely to allow any model
  3. Choose a model from the /model list that appears in your allowed list
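For option 1, a minimal sketch of adding a missing model to the allowlist (the model and alias here are illustrative):

```json
{
  "agents": {
    "defaults": {
      "models": {
        "anthropic/claude-sonnet-4-5": { "alias": "sonnet" }
      }
    }
  }
}
```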

Missing Authentication

If models status reports missing authentication, ensure you've properly configured API keys or OAuth tokens for your providers. For Anthropic, the preferred method is using the claude setup-token command.

Pro Tips

  1. Use the openclaw onboard wizard to simplify initial model configuration
  2. Consider your use case when choosing models - GLM often performs better for coding tasks, while MiniMax excels at creative writing
  3. Set up appropriate fallbacks to maintain functionality during primary model outages
  4. Regularly check your model status, especially if you rely on time-limited authentication methods like OAuth tokens

By understanding and properly configuring OpenClaw's model selection system, you can ensure your AI assistant remains responsive and capable across various scenarios and use cases.
