How to Build an MCP Server for AutoRank: Run Your SEO Pipeline from Your AI Assistant
If you’ve ever managed a serious content operation, you know the drill. Google Search Console in one tab. Ahrefs in another. WordPress admin in a third. A spreadsheet tracking your content calendar. Another spreadsheet mapping keyword clusters.
And somewhere in the chaos, you’re trying to figure out which of the 200 keyword opportunities you identified last week actually deserve a 2,000-word article.
It’s death by context switching. Every tab is a silo. Every tool speaks its own language. Your actual strategic thinking — deciding what to write, when to publish, and how to outrank the competition — gets buried under operational overhead.
What if you could manage all of it from a single conversational interface?
In this guide, we’ll build an MCP server that connects AutoRank’s SEO intelligence directly to Claude Code. By the end, you’ll have a working server that turns natural language commands like “find content gaps against this competitor and queue articles for the top 5 opportunities” into real SEO actions.
What Is MCP (and Why Should SEOs Care)?
The Model Context Protocol (MCP) is an open standard from Anthropic that defines how AI assistants connect to external tools. Think of it as USB-C for AI — a universal interface that lets any AI assistant plug into any service without custom glue code.
The key insight: MCP separates AI reasoning from tool execution. Your AI assistant decides what needs to happen. Your MCP server handles the how — calling APIs, transforming data, returning results.
For SEO teams, this is transformative. Instead of bouncing between dashboards, you describe your intent:
- “Find content gaps against this competitor”
- “Queue an article targeting this keyword cluster”
- “What are my highest-impact keyword opportunities right now?”
The AI orchestrates the right API calls through your MCP server automatically.
Rank Tip: MCP servers run locally on your machine. Your API tokens never leave your environment, and you control exactly which capabilities to expose. Security-conscious teams can audit every line of the bridge code.
How the Architecture Works
The data flow is simple. Three components, two connections:
Claude Code  <--MCP/STDIO-->  Your MCP Server  <--HTTP-->  AutoRank API
(AI Client)                   (Bridge Code)                (autorank.so)
Claude Code sends structured requests over STDIO (standard input/output). Your server receives them, calls the AutoRank API over HTTP, and returns structured responses. The AI never touches the API directly.
This gives you three advantages worth noting:
- Security — API tokens stay on your machine, never exposed to the AI model.
- Control — You shape what data comes back. Filter, summarize, or combine responses before the AI sees them.
- Composability — One MCP tool can call multiple APIs in sequence. Analyze a blog, pull keywords, and check GSC data in a single tool invocation.
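To make the composability point concrete, here is a minimal TypeScript sketch of one tool handler chaining two calls. `analyzeBlog` and `fetchKeywords` are hypothetical stand-ins for real API requests, not actual AutoRank endpoints:

```typescript
// Hypothetical stand-ins for API calls (not real AutoRank endpoints):
async function analyzeBlog(url: string): Promise<{ topGap: string }> {
  return { topGap: "ai product photography" }; // pretend gap-analysis result
}
async function fetchKeywords(
  topic: string
): Promise<{ keyword: string; volume: number }[]> {
  return [{ keyword: topic, volume: 1200 }]; // pretend keyword data
}

// One MCP tool invocation can compose both calls and return a combined result:
async function contentGapTool(url: string) {
  const analysis = await analyzeBlog(url);
  const keywords = await fetchKeywords(analysis.topGap);
  return { gap: analysis.topGap, keywords };
}
```

The AI sees a single tool call; the sequencing and merging happen inside your server.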
The Three Tools We’re Building
MCP servers expose Tools (executable functions), Resources (read-only data), and Prompts (reusable templates). For this guide, we’re building three tools that cover the core SEO workflow:
| Tool | What It Does | When to Use It |
|---|---|---|
| `analyze_blog` | Content gap analysis on any URL | Competitor research, quarterly audits |
| `get_keyword_opportunities` | AI-ranked keyword clusters for a domain | Deciding what to write next |
| `generate_article` | Queue an article for AI generation + publishing | Turning research into content |
These three tools create a complete loop: research -> prioritize -> execute. That’s the workflow that actually moves rankings.
Rank Tip: The `analyze_blog` tool works on any public blog — not just your own. Use it to reverse-engineer competitor content strategies before building your own editorial calendar.
Step 1: Initialize the Project
You’ll need Node.js 18+ and an AutoRank API token (grab yours from Dashboard > Settings).
mkdir autorank-mcp-server && cd autorank-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod
npm install -D typescript @types/node
npx tsc --init
Update tsconfig.json for a modern Node runtime:
{
"compilerOptions": {
"target": "ES2022",
"module": "Node16",
"moduleResolution": "Node16",
"outDir": "./dist",
"rootDir": "./src",
"strict": true,
"esModuleInterop": true,
"skipLibCheck": true
}
}
Your project structure should now look like this:
autorank-mcp-server/
package.json
tsconfig.json
src/ <-- we'll create this next
Step 2: Set Up the Server Skeleton
Create src/index.ts. This is where everything lives — the server definition, the API helper, and all three tools.
Start with the imports and configuration:
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod"; // Zod: runtime input validation for tool schemas
// AutoRank API config
const API_BASE = "https://autorank.so/api";
const API_TOKEN = process.env.AUTORANK_API_TOKEN;
if (!API_TOKEN) {
console.error("AUTORANK_API_TOKEN environment variable is required");
process.exit(1);
}
// Initialize the MCP server
const server = new McpServer({
name: "autorank-seo",
version: "1.0.0",
});
The z import is Zod, which the MCP SDK uses for input validation. Every tool you register will define its parameters as a Zod schema — this gives you runtime type checking and auto-generated descriptions for the AI client.
Rank Tip: The `name` field in your server config is what shows up in Claude Code’s tool list. Pick something descriptive — `autorank-seo` immediately tells you what this server does.
Step 3: Build the API Helper
Before wiring up tools, we need a reusable function for authenticated requests to AutoRank. Every tool will call this instead of using fetch directly.
async function autorankRequest(
endpoint: string,
options: {
method?: "GET" | "POST";
body?: Record<string, unknown>;
params?: Record<string, string>;
} = {}
): Promise<unknown> {
const { method = "GET", body, params } = options;
// Build the URL with query params
const url = new URL(`${API_BASE}${endpoint}`);
if (params) {
Object.entries(params).forEach(([k, v]) => url.searchParams.set(k, v));
}
const response = await fetch(url.toString(), {
method,
headers: {
Authorization: `Bearer ${API_TOKEN}`,
"Content-Type": "application/json",
},
body: body ? JSON.stringify(body) : undefined,
});
if (!response.ok) {
const errorText = await response.text();
throw new Error(`AutoRank API error (${response.status}): ${errorText}`);
}
return response.json();
}
This handles authentication, query parameters, JSON serialization, and error reporting in one place. Every tool handler becomes a thin wrapper around this function.
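The query-parameter handling deserves a closer look, since hand-concatenating query strings is a common source of encoding bugs. A standalone illustration of the same `URL` + `searchParams` approach the helper uses (endpoint and values are just examples):

```typescript
const API_BASE = "https://autorank.so/api";
const params: Record<string, string> = { domain: "myblog.com", limit: "50" };

// URL + searchParams handles encoding and separators for you:
const url = new URL(`${API_BASE}/seo/opportunities`);
Object.entries(params).forEach(([k, v]) => url.searchParams.set(k, v));

console.log(url.toString());
// → https://autorank.so/api/seo/opportunities?domain=myblog.com&limit=50
```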
Step 4: Register the Tools
Here’s where MCP gets interesting. The server.tool() method takes four arguments:
- Name — what the AI calls the tool
- Description — the AI reads this to decide when to use the tool (write it like a product description, not a docstring)
- Zod schema — defines and validates the input parameters
- Handler — the async function that does the work
Tool 1: analyze_blog
This tool accepts any blog URL and returns a content gap analysis — keyword opportunities, competitive insights, and topics the target site is missing or underserving.
server.tool(
"analyze_blog",
"Analyze a blog URL for content gaps, keyword opportunities, and " +
"competitive insights. Works on any public blog -- use it for " +
"competitor research or auditing your own site.",
{
url: z.string().url()
.describe("Blog URL to analyze (e.g. https://competitor.com/blog)"),
include_serp_analysis: z.boolean().optional().default(true)
.describe("Include SERP competitor analysis for top opportunities"),
},
async ({ url, include_serp_analysis }) => {
try {
const result = await autorankRequest("/blog-analyzer/analyze", {
method: "POST",
body: { url, include_serp_analysis },
});
return {
content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }],
};
} catch (error) {
const msg = error instanceof Error ? error.message : String(error);
return {
content: [{ type: "text" as const, text: `Analysis failed: ${msg}` }],
isError: true,
};
}
}
);
Notice the pattern: call autorankRequest, return the JSON result as text content, catch errors gracefully. Every tool you build follows this exact same structure. The only things that change are the endpoint, HTTP method, and parameters.
Tool 2: get_keyword_opportunities
This tool pulls AI-ranked keyword clusters for a domain — sorted by estimated traffic impact, with search volume, difficulty scores, and current positions.
server.tool(
"get_keyword_opportunities",
"Retrieve AI-ranked keyword opportunities for a domain. Returns " +
"clusters sorted by traffic impact with volume and difficulty scores. " +
"Filter by cluster to focus on a specific topic area.",
{
domain: z.string()
.describe("Domain to analyze (e.g. myblog.com)"),
cluster: z.string().optional()
.describe("Filter by cluster name (e.g. 'product-photography')"),
min_volume: z.number().optional()
.describe("Minimum monthly search volume threshold"),
limit: z.number().optional().default(50)
.describe("Max opportunities to return (default: 50)"),
},
async ({ domain, cluster, min_volume, limit }) => {
try {
const params: Record<string, string> = { domain, limit: String(limit) };
if (cluster) params.cluster = cluster;
if (min_volume !== undefined) params.min_volume = String(min_volume);
const result = await autorankRequest("/seo/opportunities", { params });
return {
content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }],
};
} catch (error) {
const msg = error instanceof Error ? error.message : String(error);
return {
content: [{ type: "text" as const, text: `Failed to fetch: ${msg}` }],
isError: true,
};
}
}
);
Rank Tip: The `cluster` filter is powerful for focused content sprints. Instead of looking at all 200 keyword opportunities, narrow it to one topic cluster and dominate that silo before moving on. Topical authority compounds.
Tool 3: generate_article
This is where research becomes action. This tool queues an article for AI generation — AutoRank handles the outline, draft, SEO optimization, and scheduled WordPress publishing.
server.tool(
"generate_article",
"Queue an article for AI generation and WordPress publishing. " +
"AutoRank generates an SEO-optimized draft and publishes it on " +
"the scheduled date. Returns a job ID for tracking.",
{
topic: z.string()
.describe("Article topic or working title"),
focus_keyword: z.string()
.describe("Primary keyword to target (e.g. 'ai product photography')"),
cluster: z.string().optional()
.describe("Keyword cluster to assign this article to"),
scheduled_date: z.string().optional()
.describe("Publish date in ISO 8601 format (e.g. '2026-04-07')"),
tone: z.enum(["professional", "conversational", "technical", "casual"])
.optional().default("professional")
.describe("Writing tone for the article"),
word_count_target: z.number().optional().default(2000)
.describe("Target word count (default: 2000)"),
},
async ({ topic, focus_keyword, cluster, scheduled_date, tone, word_count_target }) => {
try {
const result = await autorankRequest("/content/queue", {
method: "POST",
body: { topic, focus_keyword, cluster, scheduled_date, tone, word_count_target },
});
return {
content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }],
};
} catch (error) {
const msg = error instanceof Error ? error.message : String(error);
return {
content: [{ type: "text" as const, text: `Failed to queue: ${msg}` }],
isError: true,
};
}
}
);
That’s all three tools registered. Notice how little code changes between them — the autorankRequest helper does the heavy lifting, and each handler is just configuration.
Step 5: Connect the Transport and Start the Server
The last piece: wire up STDIO transport so Claude Code can communicate with your server. Add this at the bottom of src/index.ts:
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error("AutoRank MCP server running on STDIO");
}
main().catch((error) => {
console.error("Fatal error:", error);
process.exit(1);
});
Now compile and verify:
npx tsc
What you should see: No errors. A dist/index.js file appears in your project directory. If TypeScript reports errors, double-check that your imports match the SDK version — the @modelcontextprotocol/sdk package includes Zod as a dependency.
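Before touching Claude Code, you can smoke-test the compiled server with the official MCP Inspector, which speaks STDIO to your process and lists its tools in a browser UI. This assumes `npx` is available and your token is set:

```shell
export AUTORANK_API_TOKEN="your-api-token-here"
npx @modelcontextprotocol/inspector node dist/index.js
```

If the three tools show up in the Inspector, the server itself is working and any remaining issues are in the client configuration.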
Step 6: Configure Claude Code
Tell Claude Code where to find your server by adding it to a `.mcp.json` file in your project root (Claude Code also supports registering servers with `claude mcp add`):
{
"mcpServers": {
"autorank": {
"command": "node",
"args": ["/absolute/path/to/autorank-mcp-server/dist/index.js"],
"env": {
"AUTORANK_API_TOKEN": "your-api-token-here"
}
}
}
}
Restart Claude Code. You should see the three AutoRank tools (analyze_blog, get_keyword_opportunities, generate_article) listed when you check available tools.
Rank Tip: Use an absolute path in the `args` field — relative paths break depending on where you launch Claude Code from. On macOS, that’s something like `/Users/yourname/projects/autorank-mcp-server/dist/index.js`.
Testing It: From Analysis to Published Content in One Conversation
Here’s where the payoff hits. Open Claude Code and start talking naturally. The AI maps your intent to tool calls automatically.
Competitive Research
Try this prompt:
“Analyze competitor.com for content gaps. Focus on topics where they rank on page 2 — those are the easiest to overtake.”
Claude calls analyze_blog, receives the gap analysis, and synthesizes it into prioritized recommendations. It might surface that the competitor has thin content on “AI product photography pricing” or is missing coverage on “bulk background removal for ecommerce” entirely.
Keyword Prioritization
“Find keyword opportunities in the product-photography cluster with search volume above 200.”
This hits get_keyword_opportunities with the cluster filter. The response comes back as ranked keyword groups — each with volume, difficulty, and your current position. The AI recommends which ones to target based on the opportunity score.
Content Scheduling
“Queue an article about AI headshots for LinkedIn targeting ‘ai headshot generator for linkedin’. Schedule it for next Monday, professional tone.”
The generate_article tool queues the job. AutoRank handles outline generation, drafting, SEO optimization, and WordPress publishing. You get a job ID back for tracking.
The Real Power: Chaining Operations
The magic is chaining all three in a single conversation:
“Analyze myblog.com, find the 5 highest-impact keyword gaps in the AI photography space, and queue articles for each one — spread across next week, Monday through Friday.”
One prompt. Five articles queued. Each targeted at a validated keyword opportunity, scheduled across the week. That’s content velocity you cannot achieve clicking through dashboards.
Rank Tip: Start with the `analyze_blog` -> `get_keyword_opportunities` -> `generate_article` chain as your weekly content workflow. Run it every Monday morning. In 10 minutes, you’ll have a data-backed editorial calendar for the entire week.
Extending Your Server
The three tools we built cover the core research-to-publish loop. But the pattern is identical for any new capability — adding a tool takes about 15 minutes once you have the scaffolding.
Ideas for additional tools:
| Tool | Purpose |
|---|---|
| `get_gsc_performance` | Pull Search Console data — impressions, clicks, CTR by query or page |
| `get_rank_tracking` | Monitor keyword position changes over time, flag drops |
| `manage_sites` | List and configure connected WordPress properties |
| `generate_pseo_pages` | Trigger programmatic SEO page generation at scale |
| `get_content_calendar` | View upcoming scheduled articles, check for publishing gaps |
Each one follows the same structure: define the Zod schema, write a thin handler that calls autorankRequest, return the result. The scaffolding you’ve already built does all the hard work.
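As a sketch, a hypothetical `get_content_calendar` registration might look like this. The endpoint path and parameters are assumptions (check the AutoRank API docs for the real ones), and the snippet slots into the same src/index.ts alongside the other tools, reusing the `server`, `z`, and `autorankRequest` already defined there:

```typescript
server.tool(
  "get_content_calendar",
  "List upcoming scheduled articles for a domain and flag publishing gaps.",
  {
    domain: z.string()
      .describe("Domain to inspect (e.g. myblog.com)"),
    days_ahead: z.number().optional().default(14)
      .describe("How many days forward to look (default: 14)"),
  },
  async ({ domain, days_ahead }) => {
    try {
      // Assumed endpoint -- verify against the AutoRank API reference
      const result = await autorankRequest("/content/calendar", {
        params: { domain, days_ahead: String(days_ahead) },
      });
      return {
        content: [{ type: "text" as const, text: JSON.stringify(result, null, 2) }],
      };
    } catch (error) {
      const msg = error instanceof Error ? error.message : String(error);
      return {
        content: [{ type: "text" as const, text: `Failed to fetch calendar: ${msg}` }],
        isError: true,
      };
    }
  }
);
```

Same shape as the three tools above: schema, thin handler, graceful error path.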
Why This Matters
The SEO industry doesn’t have a tools shortage — it has a tools fragmentation problem. You gather data in one tool, analyze it in another, make decisions in a spreadsheet, and execute in a CMS. Every handoff is a place where context gets lost and decisions get delayed.
MCP collapses that workflow. Your AI assistant becomes the orchestration layer — it holds the context of your entire conversation, understands your strategic goals, and executes across multiple tools in sequence. The gap between “we should write about this topic” and “the article is queued for Thursday” shrinks from hours to seconds.
For SEO professionals who code, building an MCP server is one of the highest-leverage investments you can make. You’re not automating tasks — you’re building an interface that lets you operate at the speed of your strategic thinking, rather than the speed of your tab-switching.
The code from this guide is a starting point. Extend it, connect additional data sources, and build the SEO command center that matches how you actually think about organic growth.
Start building: autorank.so
More MCP Server Guides
Building MCP servers for other workflows? Check out our companion guides:
- How to Build an MCP Server for AI Image Processing (PixelPanda) — Background removal, AI product photography, and virtual try-on, all driven by natural language from your AI assistant.
- How to Build an MCP Server for Twitter/X Growth (ShipPost) — Find the best tweets to reply to, draft voice-matched replies, and generate original content without leaving your editor.
