Fetch MCP Server

Anthropic · MCP

Anthropic's official Fetch MCP Server lets AI agents retrieve any web page and get back clean Markdown — the simplest way to give Claude live internet access via MCP.

Status: stable · Category: web content · Updated: 2026-01
install
  • uvx: uvx mcp-server-fetch
  • pip: pip install mcp-server-fetch
capabilities
  • Fetch any URL and return it as Markdown
  • Raw HTML/JSON retrieval (skip Markdown conversion)
  • Paginated content via start_index and max_length
  • Robots.txt compliance by default
  • Custom user-agent and proxy support
compatible with
Claude Desktop · Claude Code · Cursor · Windsurf · any MCP-compatible client

The Fetch MCP Server does one thing: it takes a URL and returns the contents as clean Markdown. That’s it. And that turns out to be enormously useful — because without it, Claude can’t access the web at all.

It’s Anthropic’s official reference implementation, part of the same modelcontextprotocol/servers monorepo as the Filesystem, GitHub, Memory, and PostgreSQL servers. It’s the #2 most-used MCP server globally by traffic, behind only the Filesystem server.

What It Does

The server exposes a single tool: fetch.

{
  "tool": "fetch",
  "parameters": {
    "url": "https://example.com/docs",
    "max_length": 5000,
    "start_index": 0,
    "raw": false
  }
}
Parameter    Type     Default     Purpose
url          string   (required)  The URL to fetch
max_length   integer  5000        Character limit for returned content
start_index  integer  0           Start extraction at this character position
raw          boolean  false       Skip Markdown conversion, return raw HTML/text

The HTML-to-Markdown conversion strips navigation, ads, and other chrome — returning the actual content in a format Claude can process efficiently. For APIs and JSON endpoints, set raw: true to get the response as-is.

Pagination via start_index: Content longer than max_length can be retrieved in chunks. Fetch once with the default start_index: 0, then increment by max_length on subsequent calls to page through long documents.
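The paging contract can be sketched client-side like this. Note that fetch_tool here is a hypothetical stand-in for issuing the actual MCP fetch call — it just slices a local string so the loop logic is runnable on its own:

```python
def fetch_tool(url, start_index=0, max_length=5000):
    """Hypothetical stand-in for an MCP `fetch` tool call.

    Slices a local string to demonstrate the start_index/max_length
    contract; a real client would send the tool call over MCP.
    """
    document = "x" * 12000  # pretend this is the page's converted Markdown
    return document[start_index:start_index + max_length]


def fetch_all(url, max_length=5000):
    """Page through a long document by incrementing start_index by max_length."""
    chunks = []
    start_index = 0
    while True:
        chunk = fetch_tool(url, start_index=start_index, max_length=max_length)
        if not chunk:
            break
        chunks.append(chunk)
        if len(chunk) < max_length:
            break  # a short chunk means we reached the end of the document
        start_index += max_length
    return "".join(chunks)
```

For a 12,000-character document with the default max_length of 5000, this makes three calls (5000 + 5000 + 2000 characters) and stops when a chunk comes back shorter than the limit.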

Robots.txt Handling

By default, the Fetch MCP Server respects robots.txt for model-initiated requests — it identifies itself with a ModelContextProtocol/1.0 user-agent and won’t fetch pages that disallow bots. Human-initiated requests (where the user explicitly specifies a URL) bypass this check.

To disable robots.txt enforcement entirely, pass --ignore-robots-txt when starting the server.
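The check itself works the way Python's standard-library urllib.robotparser does — a sketch of the idea, not the server's actual code, using a made-up robots.txt:

```python
from urllib.robotparser import RobotFileParser

# A sample robots.txt that blocks all bots from /private/ but allows the rest.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The server identifies itself with a ModelContextProtocol user-agent.
agent = "ModelContextProtocol/1.0"
print(parser.can_fetch(agent, "https://example.com/docs"))       # True
print(parser.can_fetch(agent, "https://example.com/private/x"))  # False
```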

Installation

The Fetch MCP Server is a Python package, not Node.js. The recommended install method is uvx (no global installation required):

uvx mcp-server-fetch

Or via pip:

pip install mcp-server-fetch
python -m mcp_server_fetch

Docker is also available: mcp/fetch.

Claude Desktop configuration (claude_desktop_config.json):

{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": ["mcp-server-fetch"]
    }
  }
}

Config file paths:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%\Claude\claude_desktop_config.json

With Claude Code (via --mcp-config flag or .claude/mcp.json):

{
  "fetch": {
    "command": "uvx",
    "args": ["mcp-server-fetch"]
  }
}

Optional flags:

  • --ignore-robots-txt: Skip robots.txt enforcement
  • --user-agent=YourAgent: Set a custom user-agent string
  • --proxy-url=http://proxy:port: Route requests through a proxy
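Flags go in the args array after the package name. A claude_desktop_config.json entry combining all three might look like this (the user-agent string and proxy URL are placeholders):

```json
{
  "mcpServers": {
    "fetch": {
      "command": "uvx",
      "args": [
        "mcp-server-fetch",
        "--ignore-robots-txt",
        "--user-agent=MyCompanyBot/1.0",
        "--proxy-url=http://proxy.internal:8080"
      ]
    }
  }
}
```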

What It’s Good For

Documentation access: Claude can read API docs, library references, and technical specs from URLs you provide — without copying and pasting manually.

Research workflows: Point Claude at news articles, blog posts, or reports. It fetches, converts to Markdown, and processes the content as part of a longer chain of reasoning.

Web API exploration: With raw: true, Claude can call JSON APIs and work with the responses directly — useful for building agents that query third-party services.
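A raw fetch of a JSON endpoint might look like this tool call (the URL is illustrative):

```json
{
  "tool": "fetch",
  "parameters": {
    "url": "https://api.example.com/v1/status",
    "raw": true
  }
}
```

With raw: true, the response body is returned unmodified, so Claude sees the JSON exactly as the API sent it rather than a Markdown rendering of it.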

/llms.txt support: Many sites now publish /llms.txt files — structured, LLM-friendly content summaries. Since /llms.txt is plain text at a predictable URL, the Fetch server retrieves it like any other resource, no special handling needed.

What It Doesn’t Do

No JavaScript rendering. The Fetch MCP Server makes a plain HTTP request. If a page requires JavaScript to load content (single-page apps, lazy-loaded content), you’ll get the initial HTML with empty placeholders. For JavaScript-heavy pages, you need a browser-based MCP server like Puppeteer MCP or Playwright MCP.

No session handling. There’s no cookie jar, login flow, or authentication management. Paywalled or login-required pages will return the login page, not the content.

No bulk crawling. Each fetch call retrieves one URL. Multi-page crawls require the agent to iterate manually.

Our Take

The Fetch MCP Server is the first MCP server most developers install after the Filesystem server. The use case is obvious, the setup is minimal (two lines of config), and the HTML-to-Markdown conversion works well for most public web content.

The Python runtime is the only friction — if your stack is Node.js, you need Python available. uvx handles this cleanly: it creates an isolated virtual environment automatically with no system-wide Python configuration required.

For JavaScript-rendered pages, you’ll need a complementary browser-based MCP server. For everything else, Fetch covers the use case.

Rating: 8.5/10