Introduction
What is OpenClaw?
OpenClaw is a personal AI assistant that runs locally on your device, bridging messaging platforms (WhatsApp, Telegram, Slack, Discord, iMessage, and others) to AI coding agents through a single local gateway. This keeps your conversations and code private.
Why Install Locally with Ollama?
- 100% Local Execution: No API costs - everything runs on your machine
- Complete Privacy: Your data never leaves your computer
- Full Control: Choose your models and customize your experience
- Offline Capable: Works without internet connection after setup
- Cost Effective: No subscription fees or usage limits
What You'll Learn
This guide will walk you through:
- Installing Ollama on your system
- Installing OpenClaw
- Setting up compatible AI models
- Configuring WhatsApp integration
- Troubleshooting common issues
Prerequisites
System Requirements
- Operating System: macOS, Linux, or Windows
- RAM: Minimum 8GB (16GB+ recommended for larger models)
- Storage: At least 10GB free space for models
- Internet connection: Required for initial installation and model downloads
Required Software
- Node.js (v16 or higher) - for npm installation method
- Terminal/Command Prompt access
- WhatsApp account (for WhatsApp integration)
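You can sanity-check these prerequisites from a terminal before starting. A minimal sketch; the check_cmd helper is illustrative and not part of OpenClaw or Ollama:

```shell
# Report whether each required tool is on PATH (illustrative helper).
check_cmd() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1: found"
  else
    echo "$1: MISSING"
  fi
}

check_cmd node    # Node.js v16+ is needed for the npm install method
check_cmd npm
check_cmd curl    # used by the installer scripts
```

If anything prints MISSING, install it before continuing.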
Installing Ollama
Choose Your Platform
macOS Installation
Install Ollama on macOS using one of these methods:
Method 1: Official Installer
Download and run the macOS installer from ollama.com/download
Method 2: Homebrew
brew install ollama
Start Ollama Service
ollama serve
Linux Installation
Install Ollama on Linux:
curl -fsSL https://ollama.com/install.sh | sh
Start Ollama Service
ollama serve
Windows Installation
Install Ollama on Windows:
Method 1: Official Installer
Download and run the installer from ollama.com/download
Method 2: PowerShell
winget install Ollama.Ollama
Start Ollama Service
Ollama should start automatically on Windows. If not, run:
ollama serve
Verify Ollama Installation
Test that Ollama is installed and running correctly:
ollama --version
You should see the version number. If you get a "command not found" error, make sure Ollama is in your PATH or restart your terminal.
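Beyond the version check, you can confirm the Ollama server itself is answering on its default port (11434). A small sketch; the ollama_up helper name is ours:

```shell
# Probe the local Ollama API; prints a one-line status either way.
ollama_up() {
  if curl -fsS --max-time 2 http://localhost:11434/api/version >/dev/null 2>&1; then
    echo "Ollama API reachable"
  else
    echo "Ollama API not reachable (is 'ollama serve' running?)"
  fi
}

ollama_up
```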
Installing OpenClaw
Installation Methods
You can install OpenClaw using either npm or the official installer scripts.
Method 1: npm Installation (Recommended)
npm install -g openclaw@latest
Method 2: Official Installer Scripts
macOS/Linux Installer
curl -fsSL https://openclaw.ai/install.sh | bash
Windows Installer (PowerShell)
iwr -useb https://openclaw.ai/install.ps1 | iex
Post-Installation Setup
After installation, run the onboarding wizard to configure OpenClaw:
openclaw onboard --install-daemon
This will guide you through the initial configuration and install the daemon service.
Verify OpenClaw Installation
Check that OpenClaw is installed correctly:
openclaw --version
Model Setup
Model Requirements
OpenClaw requires models with a context length of at least 64k tokens. This ensures the AI can handle complex coding tasks and maintain context throughout conversations.
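For a rough sense of how much of that 64k-token budget a prompt uses, a common heuristic is about 4 characters per token. This is an approximation only, not Ollama's actual tokenizer:

```shell
# Very rough token estimate: ~4 characters per token (heuristic only;
# real tokenizers vary). Useful for sanity-checking a prompt against
# a 64k-token context window.
estimate_tokens() {
  printf '%s' "$1" | wc -c | awk '{print int($1 / 4)}'
}

estimate_tokens "Help me write a Python function to sort a list"
```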
Recommended Models
Here are the recommended models for OpenClaw, optimized for different use cases:
For Coding Tasks
- qwen3-coder - Optimized specifically for coding tasks
ollama pull qwen3-coder
General Purpose Models
- glm-4.7 - Strong general-purpose model with excellent performance
- glm-4.7-flash - Faster version with balanced performance
ollama pull glm-4.7
ollama pull glm-4.7-flash
High-Performance Models
- gpt-oss:20b - Balanced performance and capability
- gpt-oss:120b - Maximum capability (requires significant RAM)
ollama pull gpt-oss:20b
# or for maximum performance (requires 64GB+ RAM):
ollama pull gpt-oss:120b
If you're unsure where to start, pull glm-4.7-flash for a good balance of speed and performance. You can always switch models later.
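If you script your setup, you can make the pulls idempotent so re-running the script skips models you already have. A sketch; needs_pull is a local helper (not an Ollama command) that reads `ollama list` output on stdin and succeeds when the model is absent:

```shell
# Pull each recommended model only if it isn't already downloaded.
needs_pull() {
  ! grep -q "^$1" -
}

if command -v ollama >/dev/null 2>&1; then
  for model in qwen3-coder glm-4.7-flash; do
    if ollama list | needs_pull "$model"; then
      ollama pull "$model"
    fi
  done
fi
```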
Launching OpenClaw with Ollama
Once you have Ollama running and a model downloaded, launch OpenClaw:
Option 1: Launch and Start Immediately
ollama launch openclaw
Option 2: Configure Without Starting
ollama launch openclaw --config
Set Ollama API Key
Configure the Ollama provider by setting the API key:
export OLLAMA_API_KEY="ollama-local"
On Windows, use set OLLAMA_API_KEY=ollama-local in Command Prompt or $env:OLLAMA_API_KEY="ollama-local" in PowerShell.
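Note that an export only lasts for the current shell session. To persist it across sessions, add it to your shell profile (file names vary by shell; ~/.bashrc and ~/.zshrc are common defaults):

```shell
# bash (macOS/Linux): append to ~/.bashrc (use ~/.zshrc for zsh)
echo 'export OLLAMA_API_KEY="ollama-local"' >> ~/.bashrc
```

On Windows, setx OLLAMA_API_KEY "ollama-local" persists the variable for future sessions.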
Verify Model Setup
Check that your model is available:
ollama list
You should see your downloaded models listed. Test a model:
ollama run glm-4.7 "Hello, can you help me with coding?"
WhatsApp Integration Setup
Prerequisites
- OpenClaw installed and running
- Ollama configured with a compatible model
- WhatsApp account (mobile or web)
- OpenClaw web interface accessible
Step-by-Step WhatsApp Integration
Step 1: Access OpenClaw Configuration
Open your web browser and navigate to the OpenClaw configuration interface. This is typically available at:
- http://localhost:3000 (default)
- Or the URL provided during installation
Step 2: Navigate to Messaging Channels
In the OpenClaw interface, find the "Messaging Channels" or "Integrations" section. Look for WhatsApp in the list of available platforms.
Step 3: Connect WhatsApp
Click on the WhatsApp integration option. You'll be prompted to:
- Scan a QR code with your WhatsApp mobile app
- Or connect using WhatsApp Web
Step 4: Verify Connection
Once connected, you should see a confirmation message. Test the connection by sending a message to your WhatsApp number from another device, or send a test message to yourself.
Configuration Options
After connecting WhatsApp, you can configure:
- Auto-reply settings: Configure when OpenClaw should respond
- Model selection: Choose which Ollama model to use for WhatsApp conversations
- Response preferences: Set response style and behavior
- Privacy settings: Control data handling and storage
Testing Your Setup
To verify everything is working:
- Send a message to your connected WhatsApp number
- Wait for OpenClaw to process and respond
- Try asking a coding question or requesting help
- Check the OpenClaw logs if there are any issues
# Check OpenClaw logs (if running as service)
openclaw logs
# Or check system logs
journalctl -u openclaw # Linux
# Check Console.app on macOS
# Check Event Viewer on Windows
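When skimming a saved log, a quick keyword scan helps narrow things down. A throwaway helper; the keywords are illustrative, not OpenClaw's actual error strings:

```shell
# Flag lines that commonly indicate trouble in a saved log file.
scan_log() {
  grep -inE 'error|refused|timeout' "$1" || echo "no obvious errors"
}

# Example usage: openclaw logs > /tmp/openclaw.log && scan_log /tmp/openclaw.log
```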
Usage Examples
Once set up, you can interact with OpenClaw via WhatsApp:
- "Help me write a Python function to sort a list"
- "Explain how async/await works in JavaScript"
- "Create a REST API endpoint in Node.js"
- "Debug this code: [paste code]"
Troubleshooting
Common Installation Issues
OpenClaw command not found
Solution:
- Make sure Node.js and npm are installed: node --version and npm --version
- Restart your terminal after installation
- Check if OpenClaw is in your PATH: which openclaw (Linux/macOS) or where openclaw (Windows)
- Reinstall if necessary: npm uninstall -g openclaw then npm install -g openclaw@latest
Permission denied errors
Solution:
- Use sudo on Linux/macOS: sudo npm install -g openclaw@latest
- Run terminal as Administrator on Windows
- Or configure npm to use a different directory:
npm config set prefix ~/.npm-global
Installation script fails
Solution:
- Check your internet connection
- Verify the script URL is accessible
- Try the npm installation method instead
- Check system logs for specific error messages
Ollama Connection Problems
Ollama service not running
Solution:
- Start Ollama: ollama serve
- Check if it's running: curl http://localhost:11434/api/tags
- On Linux, check systemd: systemctl status ollama
- On macOS, check if it's running in the background
Cannot connect to Ollama API
Solution:
- Verify Ollama is running on the default port (11434)
- Check firewall settings
- Verify OLLAMA_API_KEY is set correctly
- Test connection:
curl http://localhost:11434/api/version
Model download fails or is slow
Solution:
- Check your internet connection speed
- Large models (120b) can take hours to download
- Try a smaller model first (glm-4.7-flash)
- Check available disk space: df -h (Linux/macOS)
- Verify Ollama has write permissions to its data directory
WhatsApp Connection Issues
QR code not appearing or expired
Solution:
- Refresh the OpenClaw configuration page
- Generate a new QR code
- Make sure you scan within 60 seconds
- Check that OpenClaw service is running
WhatsApp messages not being received
Solution:
- Verify WhatsApp connection status in OpenClaw dashboard
- Check OpenClaw logs for errors
- Restart OpenClaw service
- Reconnect WhatsApp if necessary
- Ensure your phone has internet connection
OpenClaw not responding to messages
Solution:
- Check if Ollama is running and accessible
- Verify the model is loaded: ollama list
- Check OpenClaw configuration for correct model selection
- Review OpenClaw logs for error messages
- Test Ollama directly:
ollama run glm-4.7 "test"
Model Loading Errors
Out of memory errors
Solution:
- Use a smaller model (glm-4.7-flash instead of gpt-oss:120b)
- Close other applications to free up RAM
- Check available memory: free -h (Linux) or Activity Monitor (macOS)
- Consider upgrading your system RAM
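If you work across machines, one snippet can cover both platforms (the tools are standard, but their output formats differ by OS):

```shell
# Print a short free-memory summary on Linux or macOS.
mem_check() {
  case "$(uname -s)" in
    Linux)  free -h 2>/dev/null || echo "free not available" ;;
    Darwin) vm_stat | head -n 5 ;;
    *)      echo "use your OS's memory monitor" ;;
  esac
}

mem_check
```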
Model context window too small
Solution:
- Verify you're using a model with 64k+ context window
- Check model specifications: ollama show [model-name]
- Switch to a recommended model from the list above
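To pull just the number out of the ollama show output, a small filter works. We assume the output contains a "context length" line, as in current Ollama releases:

```shell
# Extract the numeric context length from `ollama show` output on stdin.
ctx_len() {
  awk 'tolower($0) ~ /context length/ {print $NF; exit}'
}

# Example usage: ollama show glm-4.7 | ctx_len
```

Compare the printed value against the 64k (65536) minimum noted earlier.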
Performance Optimization Tips
- Use flash models: Models with "-flash" suffix are faster and use less memory
- Close unnecessary applications: Free up RAM for better model performance
- Use SSD storage: Faster disk I/O improves model loading times
- Monitor system resources: Use system monitoring tools to identify bottlenecks
- Update regularly: Keep Ollama and OpenClaw updated to the latest versions
Getting Additional Help
If you're still experiencing issues:
- Check the Ollama documentation
- Visit the OpenClaw documentation
- Review OpenClaw logs for detailed error messages
- Search for similar issues in community forums