Build Your Own AI Workflows: Skills, Cron Jobs & Custom MCP Tools in OpenClaw
Most people use AI agents as fancy chatbots. You type, it responds, maybe it searches the web. Cool, but that's like buying a sports car and only driving it to the grocery store.
Here's what we actually do: we run 13 cron jobs, 78 custom MCP tools, and multiple skills in production. Our agent publishes blog posts in four languages, engages on social media, monitors our inbox, generates branded images, and manages content pipelines — all autonomously. Every single day.
This isn't a getting-started guide. If you need that, read our Complete OpenClaw Guide first. This post is about building real automation. You'll walk away with copy-paste examples for skills, cron jobs, and custom MCP tools — and the lessons we learned getting them to work in production.
Let's build.
1. Skills — Teaching Your Agent New Tricks
A skill in OpenClaw is deceptively simple: it's a Markdown file with YAML frontmatter. That's it. No special SDK, no compilation step, no deployment pipeline. You write a SKILL.md file, drop it in the right directory, and your agent suddenly knows how to do something new.
The Anatomy of a Skill
~/.openclaw/workspace/skills/my-skill/
├── SKILL.md          # Required: instructions + frontmatter
├── reference/        # Optional: additional docs the agent can read
│   └── api-docs.md
└── scripts/          # Optional: helper scripts
    └── validate.sh
The SKILL.md frontmatter tells OpenClaw what the skill is. The body tells the agent how to use it. Here's a real example — a simplified version of our content publishing skill:
---
name: content-publisher
description: Publish blog posts and social media content via MCP tools.
---
# Content Publisher Skill
## When to Use
- User asks to publish a blog post
- User wants to create social media content
## Workflow
1. Research topic with `web_search`
2. Generate keywords with MCP `generate_keywords` tool
3. Write content following SEO guidelines
4. Generate hero image
5. Publish to CMS
6. Blast to social media
## Important Rules
- ALWAYS verify blog URL returns 200 before social posting
- Include hero image in ALL social posts
- Write in the specified language — don't mix languages
- Social media posts go out in ENGLISH with the English blog URL
Precedence Matters
OpenClaw loads skills from three locations, in this order:
- Workspace (~/.openclaw/workspace/skills/) — your custom skills, highest priority
- Managed — skills installed via ClawHub
- Bundled — built-in skills that ship with OpenClaw
This means you can override any bundled skill by creating one with the same name in your workspace. We've done this to customize the built-in web browsing skill with our specific login flows.
Best Practices (Learned the Hard Way)
Be opinionated. Don't write "consider using web_search if appropriate." Write "ALWAYS research the topic with web_search before writing." The agent performs better with clear directives.
Include guardrails. Our content skill has a rule: "ALWAYS verify blog URL returns 200 before social posting." We added that after the agent posted broken links to Twitter. Twice.
Keep it concise. Skills are injected into the LLM's context window. A 5,000-word skill eats tokens and confuses the model. Our most effective skills are under 500 words.
Add a "When to Use" section. This helps OpenClaw's skill routing decide when to activate the skill. Without it, the agent might load your skill when it doesn't need to.
Pro tip: Skills are just structured prompts — the LLM reads them as context. Think of them as "expert knowledge on demand." The agent doesn't execute the skill like code; it reads the skill and follows the instructions using its own judgment. This is powerful: you can encode complex workflows in plain English.
2. Cron Jobs — Your Agent's Heartbeat
If skills give your agent knowledge, cron jobs give it a schedule. OpenClaw has a built-in scheduler that persists across restarts — no external cron daemon, no systemd timers, no third-party scheduling service. You define a job, and OpenClaw wakes up and executes it on time.
Two Execution Modes
This is where it gets interesting. Cron jobs can run in two modes:
systemEvent — Injects a message into your main session. The agent sees it like a notification and can respond in context. Great for reminders and checks that benefit from conversation history.
agentTurn — Spawns an isolated session. The agent wakes up, does the job, and goes back to sleep. No conversation history, no interference with your main chat. This is what you want for autonomous workflows.
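The distinction is easy to internalize as a discriminated union. This is an illustrative typing based on the payload fields shown in this post, not OpenClaw's official schema:

```typescript
// Illustrative typing of the two cron payload kinds. Field names mirror the
// JSON job definitions in this post; this is a sketch, not an official type.
type CronPayload =
  | { kind: "systemEvent"; text: string }                   // injected into the main session
  | { kind: "agentTurn"; message: string; model?: string }; // runs in an isolated session

// The `kind` discriminant lets a handler cover both cases exhaustively.
function describe(p: CronPayload): string {
  switch (p.kind) {
    case "systemEvent":
      return `notify main session: ${p.text}`;
    case "agentTurn":
      return `isolated run: ${p.message}`;
  }
}
```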
Example 1: Simple Reminder
{
"name": "Daily Standup Reminder",
"schedule": {
"kind": "cron",
"expr": "0 9 * * 1-5",
"tz": "Europe/Berlin"
},
"payload": {
"kind": "systemEvent",
"text": "Reminder: Check emails and calendar for today's meetings."
},
"sessionTarget": "main"
}
This fires Monday through Friday at 9:00 AM Berlin time. It drops a reminder into your main session, and the agent can then check your inbox and calendar using its available tools.
Example 2: Autonomous Engagement Pipeline
Here's what a real production cron job looks like — our morning social media engagement:
{
"name": "Social Media Engagement",
"schedule": {
"kind": "cron",
"expr": "0 10 * * *",
"tz": "Europe/Berlin"
},
"payload": {
"kind": "agentTurn",
"message": "Run the EU morning engagement round. Read memory/daily-intel.md for today's news. Find trending posts on X and LinkedIn. Reply to 5-8 posts with genuine, specific comments. Log results to memory/engagement-log.md.",
"model": "anthropic/claude-opus-4-6"
},
"sessionTarget": "isolated",
"delivery": {
"mode": "announce"
}
}
Notice a few things:
- sessionTarget: "isolated" — This runs in its own session. It won't pollute your main chat with engagement logs.
- model is explicit — We specify the model because defaults can change. You don't want your complex pipeline suddenly running on a smaller model after an update.
- delivery.mode: "announce" — When the job finishes, OpenClaw sends a summary to your main session. You stay informed without being in the loop.
Cron vs. Heartbeat
OpenClaw also has a heartbeat system — a periodic poll that checks if anything needs attention. When should you use which?
| Use Cron When | Use Heartbeat When |
|---|---|
| Exact timing matters ("9 AM sharp") | Multiple checks can batch together |
| Task needs isolation | You need conversation context |
| You want a specific model | Timing can drift (every ~30 min) |
| One-shot or recurring schedule | You want to reduce API calls |
Lessons Learned
Don't spawn sub-agents from cron jobs. We tried having our engagement cron job spawn sub-agents for parallel processing. They lost context, duplicated work, and occasionally replied to the same tweet twice. Keep cron jobs self-contained.
Always specify the model explicitly. Our content pipeline once ran on a smaller model because we relied on the default. The output quality dropped noticeably, and we didn't catch it for two days.
Use memory/ files for state. Cron jobs in isolated sessions can't see your conversation history. Instead, we read and write to files in the memory/ directory. The engagement job reads daily-intel.md and writes to engagement-log.md. Files are the shared state layer.
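That file-based handoff can be sketched in a few lines. The file names match the jobs above; the helper functions and the relative memory/ path are our own sketch, not OpenClaw APIs:

```typescript
import { mkdirSync, readFileSync, appendFileSync, existsSync } from "node:fs";

// Hypothetical helpers for cron-job state, assuming a memory/ directory
// relative to the workspace. The file names match the jobs described above.
const MEMORY_DIR = "memory";

function readIntel(): string {
  // The engagement job reads today's digest, if the intel job has written one.
  const path = `${MEMORY_DIR}/daily-intel.md`;
  return existsSync(path) ? readFileSync(path, "utf8") : "";
}

function logEngagement(entry: string): void {
  // Append-only log, so repeated runs never clobber earlier results.
  mkdirSync(MEMORY_DIR, { recursive: true });
  appendFileSync(`${MEMORY_DIR}/engagement-log.md`, `- ${new Date().toISOString()} ${entry}\n`);
}
```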
3. Custom MCP Tools — Give Your Agent Hands
Skills give knowledge. Cron jobs give a schedule. MCP tools give your agent the ability to actually do things in the real world.
What Is MCP?
MCP (Model Context Protocol) is an open standard for connecting AI models to external tools and data sources. Think of it as a universal plugin system: you define tools with a name, description, and input schema, and any MCP-compatible client (like OpenClaw) can discover and use them.
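In practice, a tool is just a name, a description, and a JSON Schema for its inputs. A minimal sketch of that shape (the interface is illustrative; the real protocol also defines transports and result types):

```typescript
// Minimal shape of an MCP tool definition. The field names mirror the
// protocol's tool-listing format; the echo tool itself is a toy example.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: {
    type: "object";
    properties: Record<string, unknown>;
    required?: string[];
  };
}

const echoTool: ToolDefinition = {
  name: "echo",
  description: "Echo back the provided text. Useful as a connectivity check.",
  inputSchema: {
    type: "object",
    properties: { text: { type: "string", description: "Text to echo" } },
    required: ["text"],
  },
};
```

Any MCP client that lists this server's tools sees the schema and knows exactly what arguments to send.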
Why Build Custom Tools?
Out of the box, OpenClaw gives you web search, file operations, browser control, and more. But eventually you'll need domain-specific actions:
- Publishing to your CMS
- Generating branded images from templates
- Querying your internal database
- Triggering CI/CD pipelines
- Posting to social media with your brand account
That's where custom MCP tools come in.
Architecture: Monolith vs. Microservices
We started with the idea of separate MCP servers for different domains — one for blog management, one for image generation, one for social media. After two weeks, we consolidated everything into a single server with 78 tools.
Why? Practical reasons:
- Fewer connections to manage — OpenClaw connects to one server, not ten
- Shared authentication — One API key, one auth flow
- Shared utilities — Image upload, error handling, logging — all reusable
- Easier deployment — One Vercel project, one set of environment variables
The downside is a bigger codebase, but for our scale (78 tools), it's very manageable. If you're building for a team with separate domains, microservices might make more sense.
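The single-server layout boils down to one registry that every domain registers into, so there is a single dispatch path for auth, logging, and error handling. A sketch, with names of our own invention:

```typescript
// Sketch of a single-server tool registry. Each domain module registers its
// handlers into one map, so cross-cutting helpers are shared. All names here
// are illustrative, not OpenClaw or MCP SDK APIs.
type Handler = (args: Record<string, unknown>) => Promise<unknown>;

const registry = new Map<string, Handler>();

function registerTool(name: string, handler: Handler): void {
  if (registry.has(name)) throw new Error(`duplicate tool: ${name}`);
  registry.set(name, handler);
}

// Domain modules call registerTool at startup:
registerTool("publish_post", async (args) => ({ success: true, slug: args.slug }));
registerTool("generate_hero_image", async () => ({ success: true, url: "https://example.com/x.png" }));

// One dispatch path means one place for auth checks and structured errors.
async function dispatch(name: string, args: Record<string, unknown>): Promise<unknown> {
  const handler = registry.get(name);
  if (!handler) throw new Error(`unknown tool: ${name}`);
  return handler(args);
}
```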
Real Example: Building a Hero Image Generator
Here's a simplified version of our actual hero image tool. It takes article metadata, renders an HTML template with Puppeteer, and uploads the screenshot:
const generateHeroImage: Tool = {
name: "generate_hero_image",
description: "Generate a branded hero image using HTML templates. Accepts article type, title, optional logos, and accent color. Returns a URL to the uploaded image.",
inputSchema: {
type: "object",
properties: {
type: {
type: "string",
enum: ["product-launch", "tutorial", "comparison", "news"],
description: "Template type — determines layout and styling"
},
title: {
type: "string",
description: "Article title to display on the image"
},
logos: {
type: "array",
items: { type: "string" },
description: "Logo IDs from the registry (e.g., 'openclaw', 'vercel')"
},
accentColor: {
type: "string",
description: "Hex color for brand theming (default: #3B82F6)"
}
},
required: ["type", "title"]
},
handler: async (args) => {
// 1. Load the HTML template for the given type
const template = await loadTemplate(args.type);
// 2. Inject data — title, logos, colors
const html = renderTemplate(template, {
title: args.title,
logos: await resolveLogos(args.logos || []),
accentColor: args.accentColor || "#3B82F6"
});
// 3. Screenshot with Puppeteer (1200x630 for social sharing)
const screenshot = await puppeteerScreenshot(html, {
width: 1200,
height: 630
});
// 4. Upload to storage
const url = await uploadToStorage(screenshot, "hero-images");
// 5. Return structured data
return {
success: true,
url: url,
dimensions: { width: 1200, height: 630 }
};
}
};
Tool Descriptions Are Everything
Here's something that isn't obvious: the LLM reads your tool descriptions to decide which tool to call. A vague description like "generates images" will get your tool called for the wrong reasons. A specific description like "Generate a branded hero image using HTML templates. Accepts article type, title, optional logos, and accent color. Returns a URL to the uploaded image." — that's what makes the agent use it correctly.
We spent more time refining tool descriptions than writing the actual tool logic. It's worth it.
Return Structured Data, Not Prose
Early on, our tools returned messages like "Successfully generated image at https://...". The agent would then have to parse that string to extract the URL. Now we return structured JSON:
{
"success": true,
"url": "https://storage.example.com/hero-images/abc123.png",
"dimensions": { "width": 1200, "height": 630 }
}
The agent can use the URL directly in subsequent tool calls. No parsing, no ambiguity.
Testing with mcporter
OpenClaw includes mcporter, a CLI tool for testing MCP servers directly:
mcporter call myserver.generate_hero_image \
type="tutorial" \
title="My Post" \
logos='["openclaw"]' \
accentColor="#EF4444"
This is invaluable during development. You can test tools without going through the full agent loop — faster iteration, easier debugging.
4. Putting It All Together
Here's where the magic happens. None of these pieces is impressive alone. A Markdown file? A cron expression? A JSON schema? Big deal. But composed together, they turn your agent from a chatbot into an autonomous co-worker.
Our Morning Content Pipeline
Here's how our actual production pipeline works, every single morning:
- 06:00 — A cron job triggers an isolated agent turn
- Agent reads the content skill — Now it knows the full publishing workflow: research → write → image → publish → social
- Searches trending news via MCP research tools — research_topic, search_knowledge_base
- Generates topic proposals with SEO keywords from generate_keywords
- Creates a hero image via our template-based MCP tool
- Sends proposals to Telegram with inline buttons — "Approve," "Edit," or "Skip"
- On approval — Writes the post in 4 languages (EN, DE, FR, IT), publishes all versions, generates social posts, and blasts to X, LinkedIn, and Facebook
The whole pipeline — from cron trigger to published, promoted blog post — takes about 8 minutes. Without human intervention (unless we want to review).
The Composition Pattern
Think of it as three layers:
- Skills = Knowledge ("here's how to publish a blog post")
- Cron = Schedule ("do it every morning at 6 AM")
- MCP Tools = Actions ("here's how to actually create the post, generate the image, publish to CMS")
Any one layer alone is useful. All three together create autonomous workflows that actually work.
What We'd Do Differently
If we were starting over:
- Start with one skill, one cron job, one MCP tool. Get the loop working end-to-end before scaling. We built 30 tools before testing the full pipeline and had to rewrite half of them.
- Log everything to files. Conversation history gets compacted. Files persist. Every pipeline step should write its output to disk.
- Use isolated sessions for cron jobs from day one. We started with main session events and quickly regretted it — the chat got noisy.
What's Next
OpenClaw is still young, and the ecosystem is growing. ClawHub is becoming a community repository for skills — you can share yours and use others'. The OpenClaw docs have detailed references for everything covered here.
If you want the full production setup story — how we configure OpenClaw, manage multi-agent workflows, and handle browser automation — check out our Complete OpenClaw Guide.
The pieces are all there. Skills for knowledge, cron for scheduling, MCP tools for actions. What will you build?
Drop us a line at Context Studios or find us on X @_contextstudios.
Frequently Asked Questions
What is the difference between a skill and a cron job in OpenClaw?
A skill is a Markdown-based instruction set that teaches your agent how to perform a task on demand — like publishing content or generating images. A cron job is a scheduled automation that runs at fixed intervals without human prompting, such as checking emails every 30 minutes. Skills are reactive (triggered by requests); cron jobs are proactive (triggered by time).
Do I need coding experience to build custom MCP tools?
Basic familiarity with JavaScript or Python helps, but deep programming expertise is not required. MCP tools are essentially API wrappers with a JSON schema definition. If you can write a function that calls an API and returns structured data, you can build an MCP tool. The article includes copy-paste examples to get started.
How many MCP tools and cron jobs can OpenClaw handle in production?
There is no hard limit imposed by OpenClaw itself. Context Studios runs 78 MCP tools and 13 cron jobs simultaneously in production without performance issues. The practical limit depends on your API rate limits, token budgets, and the complexity of each tool or job.
Can OpenClaw skills access external APIs and services?
Yes. Skills can reference any MCP tool, which in turn can call external APIs — REST endpoints, databases, third-party services like ElevenLabs, Vercel, or social media platforms. The skill defines the workflow logic; MCP tools handle the actual API communication.
What happens if a cron job fails in OpenClaw?
Failed cron jobs log their errors but do not crash the agent or block other jobs. Best practice is to build idempotent jobs that can safely retry, and to use file-based state tracking so jobs resume correctly after interruptions.
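A minimal sketch of such a guard, assuming a file-backed log of completed task IDs (the helper names are ours, not OpenClaw features):

```typescript
import { readFileSync, appendFileSync, existsSync } from "node:fs";

// Hypothetical idempotency guard: a retried job skips tasks it already logged,
// so replaying a failed run never duplicates work.
function alreadyDone(logPath: string, taskId: string): boolean {
  return existsSync(logPath) && readFileSync(logPath, "utf8").split("\n").includes(taskId);
}

function markDone(logPath: string, taskId: string): void {
  appendFileSync(logPath, taskId + "\n");
}

function runOnce(logPath: string, taskId: string, work: () => void): boolean {
  if (alreadyDone(logPath, taskId)) return false; // already handled on a previous run
  work();
  markDone(logPath, taskId);
  return true;
}
```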