Table of Contents
- Why Use Kimi K2.5 with OpenClaw
- Step 1: Open Your OpenClaw Instance
- Step 2: Open the Instance Terminal
- Step 3: Start the OpenClaw Configuration Wizard
- Step 4: Select the Model Configuration Section
- Step 5: Choose Moonshot AI (Kimi K2.5)
- Step 6: Add Your Moonshot API Key
- Step 7: Select the Kimi Model
- Step 8: Save and Exit the Configuration
- Step 9: Test the Kimi Model
- Common Setup Issues
- What You Can Do After Connecting Kimi K2.5
- Final Thoughts

AI agents become significantly more powerful when connected to the right model provider. One option supported in OpenClaw is Moonshot AI’s Kimi K2.5, a fast and capable large language model that can power automation, chat workflows, and AI-driven assistants.
In this guide, you'll learn how to set up Kimi K2.5 with OpenClaw, including selecting the provider, adding the API key, and verifying that the model works inside your agent environment.
If you’re running OpenClaw through Agent37, you can access your instance from the dashboard.
Why Use Kimi K2.5 with OpenClaw
OpenClaw allows you to switch between different model providers depending on your use case. Kimi K2.5 is useful when you want:
- fast responses
- strong reasoning ability
- lower operational cost compared to some models
- compatibility with agent workflows
Because OpenClaw supports multiple providers, you can experiment and choose whichever model works best for your AI agent.
Step 1: Open Your OpenClaw Instance
Start by logging into your OpenClaw workspace.
Navigate to your instance dashboard where you will see details such as:
- Instance ID
- Tier (Basic or other plan)
- Status (Running)
- Credits remaining
- Actions like Chat, Terminal, and Restart
Make sure your instance shows:
Status: Running
If it is not running, start or restart the instance before continuing.
Step 2: Open the Instance Terminal
Next, click Terminal from the actions column of your OpenClaw instance.
This opens the command interface where configuration tasks are performed.
Inside the terminal, you’ll see the OpenClaw CLI interface.
If you have not configured the agent yet, run the onboarding command:
```
openclaw onboard
```

If the agent is already configured, you can run the configuration wizard to update the model.
Step 3: Start the OpenClaw Configuration Wizard
Once the CLI opens, OpenClaw will detect the existing configuration.
You may see output similar to:
```
OpenClaw configure
Existing config detected
gateway.mode: local
```

This indicates that the environment already has a configuration and is ready for modification.
The system will ask where the gateway should run.
Most users should keep the default:
Local (this machine)
Step 4: Select the Model Configuration Section
Next, OpenClaw asks which configuration sections you want to modify.
You may see options such as:
- Workspace
- Model
- Web tools
- Gateway
- Channels
- Daemon
To set up Kimi K2.5, select:
Model
This opens the provider selection menu.
Step 5: Choose Moonshot AI (Kimi K2.5)
In the provider list you will see several supported model providers, including:
- OpenAI
- Anthropic
- MiniMax
- Moonshot AI (Kimi K2.5)
- Google
- OpenRouter
- Qwen
- GLM
- Copilot
Select:
Moonshot AI (Kimi K2.5)
This tells OpenClaw to use the Moonshot AI ecosystem for your agent.
Step 6: Add Your Moonshot API Key
After selecting the provider, OpenClaw will ask for authentication.
You will typically be prompted to enter an API key.
Steps usually look like this:
- Go to the Moonshot AI developer dashboard
- Generate an API key
- Copy the key
- Paste it into the OpenClaw terminal prompt
Once the key is entered, OpenClaw verifies the authentication automatically.
If the key is valid, the configuration is saved.
You may see confirmation similar to:
```
Updated ~/.openclaw/openclaw.json
```

Step 7: Select the Kimi Model
After authentication, OpenClaw lists the available models from the provider.
Choose the Kimi K2.5 model you want to run.
Once selected, OpenClaw sets it as the default model for your agent.
From this point forward, all AI responses generated by the agent will use the Kimi model.
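Under the hood, routing responses through Kimi means OpenClaw sends OpenAI-style chat-completion requests to Moonshot AI. The sketch below shows the likely shape of such a call; the base URL and model identifier are assumptions to verify against Moonshot's API documentation (OpenClaw builds these requests for you):

```python
import json
import os
import urllib.request

# Assumed values: check the base URL and model identifier against
# Moonshot AI's API documentation for your account/region.
MOONSHOT_BASE = "https://api.moonshot.ai/v1"
MODEL_NAME = "kimi-k2.5"  # placeholder model identifier

def build_chat_request(api_key: str, prompt: str) -> urllib.request.Request:
    """Build (without sending) an OpenAI-style chat-completions request,
    illustrating the kind of call OpenClaw makes on your behalf."""
    body = json.dumps({
        "model": MODEL_NAME,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{MOONSHOT_BASE}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request(os.environ.get("MOONSHOT_API_KEY", "sk-placeholder"), "Hello")
print(req.full_url)
```

Sending the request with `urllib.request.urlopen(req)` should return a JSON body containing the model's reply, assuming the key and endpoint are valid.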
Step 8: Save and Exit the Configuration
Once the model is selected, the configuration wizard finishes updating the environment.
OpenClaw automatically saves the settings to the configuration file.
You can now exit the wizard and return to the dashboard.
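For reference, the settings are written to `~/.openclaw/openclaw.json`. The wizard output only confirms `gateway.mode: local`; the rest of the fragment below is a hypothetical sketch of what such a file might contain, so inspect your own copy rather than relying on these field names:

```json
{
  "gateway": { "mode": "local" },
  "model": {
    "provider": "moonshot",
    "name": "kimi-k2.5",
    "apiKey": "sk-..."
  }
}
```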
Step 9: Test the Kimi Model
To verify the setup is working, open the Chat interface from your instance.
Send a simple prompt such as:
```
Hello
```

If the agent responds successfully, the Kimi model is correctly configured.
You can also check the terminal logs to confirm which provider is being used.
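If you prefer verifying from the terminal, a small script like this can read the saved configuration. The `model.provider` and `model.name` field names are assumptions, so adjust them to match the actual contents of your `openclaw.json`:

```python
import json
from pathlib import Path

def active_model(cfg: dict) -> str:
    # Field names here are hypothetical; the wizard output only
    # confirms that settings live in ~/.openclaw/openclaw.json.
    model = cfg.get("model", {})
    return f'{model.get("provider", "unknown")}/{model.get("name", "unknown")}'

cfg_path = Path.home() / ".openclaw" / "openclaw.json"
if cfg_path.exists():
    print(active_model(json.loads(cfg_path.read_text())))
```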
Common Setup Issues
Sometimes the model may not respond correctly. The most common causes include:
- Invalid API key
- Expired API credentials
- Incorrect provider selection
- Network restrictions
If needed, simply rerun the configuration wizard and update the model settings.
What You Can Do After Connecting Kimi K2.5
Once Kimi K2.5 is configured, your OpenClaw agent becomes much more capable.
You can now:
- run AI chat agents
- automate workflows
- connect messaging platforms
- build AI assistants
- integrate tools and APIs
Because OpenClaw supports multiple providers, you can always experiment and switch models later.
Final Thoughts
Learning how to set up Kimi K2.5 with OpenClaw allows you to expand the capabilities of your AI agent.
The process is straightforward:
- Open your instance terminal
- Launch the configuration wizard
- Select the model provider
- Add your API key
- Choose the Kimi model
- Test the agent
Once completed, your OpenClaw environment will begin using Kimi K2.5 as its active AI model.
If you are running OpenClaw through Agent37, you can manage everything directly from the dashboard.