How to Configure an LLM Provider in OpenClaw

This beginner-friendly guide explains how to configure an LLM provider in OpenClaw. Learn how to connect OpenAI or other model providers, add API credentials, and verify your AI agent is working.

Once your OpenClaw instance is running, the next important step is configuring a Large Language Model (LLM) provider. Without connecting a model provider, your OpenClaw agent cannot generate responses or process tasks.
In this guide, you’ll learn how to configure an LLM provider in OpenClaw, including selecting a provider, adding your API key, and verifying that your AI agent works correctly.
Let’s walk through the process step by step, starting from your OpenClaw dashboard.

Why You Need to Configure an LLM Provider

OpenClaw acts as the agent framework, but the intelligence comes from an external LLM provider.
These providers supply the AI model that generates responses, processes commands, and interacts with users across messaging channels.
Without configuring a provider, OpenClaw cannot:
  • Generate chat responses
  • Process automation tasks
  • Execute agent instructions
  • Respond through Telegram or WhatsApp
That’s why setting up an LLM provider is one of the most critical parts of your OpenClaw configuration.

Supported LLM Providers in OpenClaw

OpenClaw supports multiple AI model providers, giving you flexibility depending on cost, performance, and preference.
Some common providers include:
  • OpenAI
  • Anthropic
  • MiniMax
  • Moonshot AI (Kimi K2.5)
  • Google AI
  • xAI (Grok)
  • OpenRouter
  • Qwen
  • GLM 4.7
  • Copilot
  • Vercel AI Gateway
  • Venice AI
Most beginners start with OpenAI or Anthropic because both are widely supported and easy to configure.

Step 1: Access Your OpenClaw Instance

First, log in to your dashboard, then navigate to:
OpenClaw → Instances
You’ll see your instance panel with information such as:
  • Instance ID
  • Tier (Basic or other plan)
  • Status (Running)
  • Starter credits
  • Actions (Chat, Terminal, Restart)
Make sure your instance status shows Running before continuing.

Step 2: Open the Terminal

Next, click Terminal from the instance action panel.
This opens the command interface where you can manage your OpenClaw configuration.
If you have not run onboarding yet, start with:
openclaw onboard
If onboarding is already complete, you can update provider settings from the configuration prompts.

Step 3: Choose Your Model Provider

During the configuration process, OpenClaw will display the Model/Auth Provider selection.
You’ll see a list of supported providers.
For example:
  • OpenAI
  • Anthropic
  • MiniMax
  • Google
  • OpenRouter
Use the arrow keys or selection prompts to choose your preferred provider.
For beginners, OpenAI is usually the easiest option to start with.

Step 4: Add Your API Key

After selecting a provider, OpenClaw will ask for your API key.
This key authenticates your OpenClaw instance with the LLM provider.
For example, if you selected OpenAI:
  1. Go to your OpenAI dashboard
  2. Generate an API key
  3. Copy the key
  4. Paste it into the OpenClaw prompt
Once the key is added, the system will validate it automatically.
If the key is valid, OpenClaw will save the configuration.
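Before pasting, it can help to sanity-check the key format locally to catch copy/paste mistakes. The sketch below is illustrative only: the `sk-` prefix pattern applies to OpenAI keys (other providers use different formats), and it is no substitute for the server-side validation OpenClaw performs.

```python
import re

def looks_like_openai_key(key: str) -> bool:
    """Shallow format check for an OpenAI-style secret key.

    OpenAI keys begin with "sk-"; this only catches copy/paste mistakes
    such as truncation or stray whitespace, not revoked or unfunded keys.
    """
    return bool(re.fullmatch(r"sk-[A-Za-z0-9_\-]{20,}", key))
```

A key that fails this check was almost certainly pasted incompletely; a key that passes can still be rejected by the provider if it is revoked or out of credits.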

Step 5: Confirm Provider Configuration

After authentication, OpenClaw updates the configuration file and enables the model provider for your agent.
This usually updates files like:
  • openclaw.json
  • workspace settings
  • model configuration
At this stage, your OpenClaw agent can now generate AI responses.
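As an illustration, a provider entry in openclaw.json might look roughly like the sketch below. The field names and model name here are assumptions for illustration only; check the file your onboarding run actually writes before editing anything by hand.

```json
{
  "provider": "openai",
  "model": "gpt-4o",
  "apiKey": "sk-...your key here..."
}
```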

Step 6: Test the LLM Connection

Once the provider is configured, you should verify that everything is working correctly.
Return to the dashboard and open:
Chat
Type a simple message such as:
Hi
If the LLM provider is correctly configured, the agent will respond.
This confirms that your OpenClaw instance is successfully connected to the AI model.
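If the agent stays silent, you can verify the key outside OpenClaw by calling the provider directly. The sketch below (for OpenAI) builds the standard authenticated GET /v1/models request, a lightweight endpoint commonly used to confirm a key authenticates: sending it with a valid key returns HTTP 200, while a bad key returns 401.

```python
import urllib.request

OPENAI_MODELS_URL = "https://api.openai.com/v1/models"

def build_models_request(api_key: str) -> urllib.request.Request:
    # Builds (but does not send) an authenticated GET /v1/models request.
    # Listing models is a cheap way to confirm a key authenticates at all.
    return urllib.request.Request(
        OPENAI_MODELS_URL,
        headers={"Authorization": f"Bearer {api_key}"},
    )

# To actually send it (requires network access and a real key):
# with urllib.request.urlopen(build_models_request("sk-...")) as resp:
#     print(resp.status)  # 200 means the key is valid
```

If this direct check succeeds but the agent still does not respond, the problem is more likely in the OpenClaw configuration than in the key itself.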

Common Issues When Configuring LLM Providers

If the agent does not respond, the issue is usually related to the provider configuration.
Here are some common problems:
  • Invalid API key: the key may be incorrect or expired.
  • Insufficient credits: some providers require active billing or credits.
  • Incorrect provider selection: choosing the wrong provider can cause authentication errors.
  • Network configuration issues: custom gateway setups may block external API calls.
If needed, you can rerun the configuration by running openclaw onboard again.

Tips for Choosing the Right LLM Provider

Different providers offer different strengths.
For example:
  • OpenAI: reliable and widely supported.
  • Anthropic: strong reasoning and safe AI responses.
  • OpenRouter: access to multiple models through one API.
  • Google AI: good integration with Google ecosystem tools.
Choosing the right provider depends on your project needs, cost limits, and performance expectations.

What Happens After Configuring an LLM Provider

Once the provider is active, OpenClaw becomes fully functional as an AI agent platform.
You can then:
  • Connect Telegram bots
  • Enable WhatsApp messaging
  • Configure agent skills
  • Set automation hooks
  • Schedule cron jobs
  • Build advanced workflows
In short, configuring the LLM provider turns OpenClaw from a framework into a working AI assistant.

Final Thoughts

Learning how to configure an LLM provider in OpenClaw is a key step in getting your AI agent running.
The process is simple once you understand the flow:
  1. Access your OpenClaw instance
  2. Open the terminal
  3. Select your model provider
  4. Add your API key
  5. Confirm the configuration
  6. Test the agent in chat
Once this is done, your OpenClaw agent will be ready to generate responses, automate tasks, and interact across connected messaging channels.
And from there, the real possibilities begin.