What you need before you start
Let me save you some time. Before you install anything, make sure you have these things ready:
A Mac or Linux machine. OpenClaw doesn't run on Windows natively. If you're on Windows, you'll need WSL2. I run mine on a Mac Studio, but any recent Mac with 16GB+ of RAM will work for cloud-only setups. If you want to run local models (which I recommend, eventually), you'll want 32GB or more.
A Slack workspace. This is where your agents live. Create a fresh workspace for this. Don't pollute your work Slack with AI agent channels. A free Slack workspace is fine to start.
At least one API key. You need a model to power your agent. The options are Anthropic (Claude), OpenAI (GPT), Google (Gemini), or a local model through Ollama. I'd start with one cloud API key and add local models later.
Node.js installed. OpenClaw runs on Node. If you don't have it, install it through your package manager. I'm on v25, but anything recent should work.
That's it. No Kubernetes. No Docker. No cloud infrastructure. OpenClaw runs on your machine, connected to Slack, calling model APIs. Simple.
Installing OpenClaw
The install is straightforward. Pull it in globally through npm:
npm install -g openclaw
This gives you the openclaw CLI. Run openclaw init in an empty directory and it'll scaffold the basic configuration files. You'll get an openclaw.json file, an agents/ directory, and a workspace/ directory.
The openclaw.json file is the heart of everything. It defines your agents, their models, their Slack channel bindings, and your API keys. Don't put API keys directly in this file if you're going to version control it. Use environment variables instead.
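One common pattern is to export the keys in your shell profile and keep only references in the config (the exact syntax for referencing environment variables from openclaw.json depends on your OpenClaw version, so check the docs; the key values below are placeholders):

```shell
# Add these to ~/.zshrc or ~/.bashrc so the gateway process inherits them.
# Substitute your real keys for the placeholder values.
export ANTHROPIC_API_KEY="sk-ant-placeholder"
export OPENAI_API_KEY="sk-placeholder"

# Confirm they're set before starting the gateway.
echo "${ANTHROPIC_API_KEY:+anthropic key set}"
```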
Here's the thing the docs don't emphasize enough: the directory you run openclaw init in becomes your OpenClaw root. Every agent workspace, every memory file, every configuration lives under this directory. Pick a location you're comfortable with and stick with it. I use ~/.openclaw/.
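After openclaw init, the root might look roughly like this. The three entries are the ones named above; anything beyond that depends on your OpenClaw version:

```
~/.openclaw/
├── openclaw.json   # agents, models, channel bindings, key references
├── agents/         # one folder per agent
└── workspace/      # per-agent working files and memory
```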
Setting up Slack
You need a Slack app. Go to api.slack.com/apps, create a new app, and give it the permissions OpenClaw needs. The required scopes are in the docs, but the short version is: the app needs to read and write messages, join channels, and react to messages.
Install the app to your workspace and note the bot token. It starts with xoxb-. You'll need this for openclaw.json.
Now create a channel for your first agent. I'd call it something like #ai-assistant for now. Don't overthink the name. You can rename it later.
One thing that tripped me up: you need to invite the bot to the channel after creating it. The bot doesn't automatically join channels. Type /invite @your-bot-name in the channel. If you skip this step, messages will be sent to the channel but the bot won't see them, and you'll spend twenty minutes wondering why your agent isn't responding.
Creating your first agent
In your agents/ directory, create a folder for your agent. Let's call it assistant/. Inside that folder, you need a few files:
The models.json file tells OpenClaw which AI model to use. For your first agent, I'd recommend Claude Sonnet 4.6 if you have an Anthropic key, or GPT-4o if you have an OpenAI key. Sonnet is my default for most agents because it's a good balance of quality and cost.
{
"default": "claude-sonnet-4-6"
}
Then update openclaw.json to register this agent. You need to add it to the agents list and bind it to your Slack channel. The binding connects the Slack channel ID to the agent name, so OpenClaw knows which agent should respond in which channel.
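As an illustration only, the registration plus binding might look something like this. The field names here are my assumption, not the documented schema; the real shape is in the OpenClaw docs for your version:

```json
{
  "agents": [
    { "name": "assistant", "dir": "agents/assistant" }
  ],
  "bindings": [
    { "channel": "C0AD07W3W7L", "agent": "assistant" }
  ],
  "slack": {
    "botToken": "${SLACK_BOT_TOKEN}"
  }
}
```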
Use the channel ID, not the channel name. This matters. Slack channel IDs look like C0AD07W3W7L. You can find the ID by right-clicking the channel name in Slack and selecting "Copy link." The ID is the last segment of the URL.
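If you'd rather script the extraction, the ID is just the last path segment of the copied link. A minimal shell sketch (the workspace URL is a made-up example):

```shell
# "Copy link" on a channel yields a URL like this; the ID is the last segment.
link="https://yourworkspace.slack.com/archives/C0AD07W3W7L"
channel_id="${link##*/}"   # strip everything up to and including the final slash
echo "$channel_id"         # prints: C0AD07W3W7L
```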
Your agent's workspace
This is where it gets interesting. Create a workspace/ directory for your agent (depending on your setup, it may instead live under the global workspace/ directory, in a folder named after the agent). Inside it, create two files:
SOUL.md is your agent's identity. This is where you define who the agent is, how it should behave, what it knows. Think of it like a job description combined with a personality profile. Here's a stripped-down example:
# Assistant
You are a general-purpose AI assistant for [your name].
You are direct, concise, and honest. If you don't know something, say so.
You have access to the filesystem and can run shell commands.
Your workspace is at ~/.openclaw/workspace-assistant/.
## Rules
- Always read MEMORY.md at the start of a session
- Never make assumptions about file contents. Read them first.
- Be specific. Details matter.
Don't over-engineer this on day one. Write a few lines that describe how you want the agent to behave. You'll iterate on it. My Nyx agent's SOUL.md has been rewritten probably fifteen times by now.
MEMORY.md is long-term memory. This file is read by the agent at the start of every session. Put things here that you'd otherwise have to repeat every time: your project names, your tech stack, your preferences, important decisions you've made.
Start with a few bullet points. Add more as you go. The biggest mistake is leaving this file empty and wondering why the agent doesn't know anything about your work.
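A starting point might look like this. The contents are invented for illustration; use your own projects and preferences:

```markdown
# Memory

## Projects
- acme-api: TypeScript + Postgres (hypothetical example project)

## Preferences
- Keep answers short; code examples over prose
- Use pnpm, not npm, in suggested commands

## Decisions
- 2026-01: standardized on Postgres for all new services
```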
Sending your first message
Start the OpenClaw gateway:
openclaw start
This connects to Slack, loads your agent configuration, and starts listening for messages. You should see log output confirming the connection.
Now go to your agent's Slack channel and type a message. Something simple: "Hello, what can you do?"
The agent should respond within a few seconds. If it doesn't, check the gateway logs. The most common issues are:
The bot isn't in the channel. Invite it with /invite.
The channel ID in openclaw.json doesn't match the actual channel. Double-check.
Your API key is wrong or missing. The gateway log will tell you.
The model name in models.json has a typo. Model names are exact strings. claude-sonnet-4-6 is different from claude-sonnet-4.6.
If everything works, congratulations. You have a persistent AI agent running in Slack. Talk to it. Give it a task. See how it responds.
What will confuse you
I'm going to save you some headaches.
The agent doesn't have memory between sessions out of the box. It reads MEMORY.md, but it doesn't automatically write to it. If you want the agent to remember something permanently, you need to tell it to update MEMORY.md, or update the file yourself. Persistence isn't automatic; it's structured. This confused me for the first few days. I kept expecting the agent to "just remember" things, but it only knows what's in its files.
Tool access is configured separately. Your agent can respond to messages, but to read files, run commands, or access APIs, you need to grant tool access in the agent configuration. The default setup gives basic tools. Anything beyond that needs to be explicitly enabled.
Long conversations eat context window. If you have a marathon session with 50 back-and-forth messages, the early messages eventually fall out of context. This is a model limitation, not an OpenClaw limitation. The solution is to tell your agent to summarize important decisions into MEMORY.md before they scroll out of view.
The gateway needs to be running. If your machine sleeps or the process crashes, your agents go offline. I run mine as a background service that auto-restarts. For first-time setup, just keep a terminal window open.
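When you're ready to move past the open terminal window, a process supervisor can restart the gateway for you. One sketch, assuming a Linux machine with systemd user services (on macOS you'd reach for launchd or a process manager like pm2 instead; the paths are assumptions):

```ini
# ~/.config/systemd/user/openclaw.service
[Unit]
Description=OpenClaw gateway

[Service]
# Adjust the path if openclaw isn't on the PATH that services see.
ExecStart=/usr/bin/env openclaw start
WorkingDirectory=%h/.openclaw
Restart=always
RestartSec=5

[Install]
WantedBy=default.target
```

Enable it with systemctl --user enable --now openclaw.service.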
Making it actually useful
Your first agent will feel limited. That's normal. The power comes from configuration, not from the initial install.
Here are the first three things I'd do after getting the basic setup working:
Write a real MEMORY.md. Spend 15 minutes writing down the projects you're working on, the technologies you use, and the decisions you've made recently. This context is what transforms a generic chatbot into a useful assistant. Be specific. Include project names, file paths, tool versions.
Give it filesystem access. An agent that can only chat is a chatbot. An agent that can read your code, check git status, and run scripts is a collaborator. Enable the file and exec tools. This is where the "AI OS" feeling starts.
Create a decisions.md file. Every time the agent helps you make a decision, write it down. "Chose Supabase over Firebase for the events table because of real-time subscriptions." This creates an audit trail and feeds back into the agent's context over time.
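For the filesystem step above, the grant might look something like this in the agent's entry in openclaw.json. The tool names and schema here are my guesses, not the documented ones; the real knobs are in the OpenClaw docs:

```json
{
  "agents": [
    {
      "name": "assistant",
      "tools": {
        "read": true,
        "write": true,
        "exec": { "enabled": true, "confirm": true }
      }
    }
  ]
}
```

Keeping exec behind a confirmation prompt, if your version supports one, is a sane default while you build trust in the agent.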
What comes next
Once your first agent is running smoothly, you'll start noticing patterns. You'll ask it to do coding work and wish it had a specialized coding model. You'll ask it research questions and wish it had web access. You'll want it to help with email but realize that's a different skill set than code review.
That's when you create your second agent. And your third. I wrote about how to set up a multi-agent organization in a separate post, because it's its own topic. But the foundation is this: one agent, one channel, one memory file, running and useful.
Start there. Get comfortable with how it works. Understand the limitations. Then expand.
My setup grew from one agent to 26 over about six weeks. Yours doesn't need to. Some people run two or three agents and get enormous value. The number isn't the point. The persistence, specialization, and tool access are the point.
Install it, configure it, and send that first message. You'll know within an hour whether this way of working makes sense for you.