Webscan

Enables web content scanning and analysis: fetch pages and convert them to Markdown, extract and analyze links, crawl sites recursively, check for broken links, match URLs against patterns, and generate XML sitemaps.

bsmi021

Tools

extract-links

Extract and analyze all hyperlinks from a web page, organizing them into a structured format with URLs, anchor text, and contextual information. Performance-optimized with stream processing and worker threads for efficient handling of large pages. Works with either a direct URL or raw HTML content. Handles relative and absolute URLs properly by supporting an optional base URL parameter. Results can be limited to prevent overwhelming output for link-dense pages. Returns a comprehensive link inventory that includes destination URLs, link text, titles (if available), and whether links are internal or external to the source domain. Useful for site mapping, content analysis, broken link checking, SEO analysis, and as a preparatory step for targeted crawling operations.
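
For orientation, a single entry in that link inventory could be modeled as below. This is a hypothetical TypeScript shape inferred from the description above, not the server's actual schema:

// Hypothetical shape of one extract-links result entry; real field names may differ.
interface ExtractedLink {
  url: string;        // absolute destination URL (relative links resolved against the base URL)
  text: string;       // anchor text of the link
  title?: string;     // title attribute, when the page provides one
  internal: boolean;  // true when the link stays on the source page's domain
}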

crawl-site

Crawl a website and return a list of all the URLs found.

check-links

Check for broken links on a page.

fetch-page

Fetch a web page and convert it to Markdown.

find-patterns

Find all links that match a given pattern.

generate-site-map

Generate a sitemap for a website.

README

MCP Webscan Server

A Model Context Protocol (MCP) server for web content scanning and analysis. This server provides tools for fetching, analyzing, and extracting information from web pages.

<a href="https://glama.ai/mcp/servers/u0tna3hemh"><img width="380" height="200" src="https://glama.ai/mcp/servers/u0tna3hemh/badge" alt="Webscan Server MCP server" /></a>

Features

  • Page Fetching: Convert web pages to Markdown for easy analysis
  • Link Extraction: Extract and analyze links from web pages
  • Site Crawling: Recursively crawl websites to discover content
  • Link Checking: Identify broken links on web pages
  • Pattern Matching: Find URLs matching specific patterns
  • Sitemap Generation: Generate XML sitemaps for websites

Installation

Installing via Smithery

To install Webscan for Claude Desktop automatically via Smithery:

npx -y @smithery/cli install mcp-server-webscan --client claude

Manual Installation

# Clone the repository
git clone <repository-url>
cd mcp-server-webscan

# Install dependencies
npm install

# Build the project
npm run build

Usage

Starting the Server

npm start

The server runs on stdio transport, making it compatible with MCP clients like Claude Desktop.
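
For a quick smoke test outside Claude Desktop, the server can be driven from a small client script. This sketch assumes the official @modelcontextprotocol/sdk client API (exact import paths and signatures vary by SDK version) and must run as an ES module on Node >= 18:

// smoke-test.mjs: spawn the built server over stdio and call one tool.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const transport = new StdioClientTransport({
  command: "node",
  args: ["build/index.js"],
});

const client = new Client({ name: "webscan-smoke-test", version: "0.1.0" }, { capabilities: {} });
await client.connect(transport);

// List the advertised tools, then fetch a page as Markdown.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));

const result = await client.callTool({
  name: "fetch-page",
  arguments: { url: "https://example.com" },
});
console.log(result.content);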

Available Tools

  1. fetch-page

    • Fetches a web page and converts it to Markdown.
    • Parameters:
      • url (required): URL of the page to fetch.
      • selector (optional): CSS selector to target specific content.
  2. extract-links

    • Extracts all links from a web page with their text.
    • Parameters:
      • url (required): URL of the page to analyze.
      • baseUrl (optional): Base URL to filter links.
      • limit (optional, default: 100): Maximum number of links to return.
  3. crawl-site

    • Recursively crawls a website up to a specified depth.
    • Parameters:
      • url (required): Starting URL to crawl.
      • maxDepth (optional, default: 2): Maximum crawl depth (0-5).
  4. check-links

    • Checks for broken links on a page.
    • Parameters:
      • url (required): URL to check links for.
  5. find-patterns

    • Finds URLs matching a specific pattern.
    • Parameters:
      • url (required): URL to search in.
      • pattern (required): JavaScript-compatible regex pattern to match URLs against (see the example request after this list).
  6. generate-site-map

    • Generates a simple XML sitemap by crawling.
    • Parameters:
      • url (required): Root URL for sitemap crawl.
      • maxDepth (optional, default: 2): Maximum crawl depth for discovering URLs (0-5).
      • limit (optional, default: 1000): Maximum number of URLs to include in the sitemap.
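
To make the parameter shapes above concrete, here is what a tools/call request for find-patterns might carry on the wire, expressed as a TypeScript object using standard MCP JSON-RPC framing (the URL and pattern are illustrative):

const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "find-patterns",
    arguments: {
      url: "https://example.com/docs", // illustrative target page
      pattern: "\\.pdf$",              // string-escaped; the server receives the regex \.pdf$
    },
  },
};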

Example Usage with Claude Desktop

  1. Configure the server in your Claude Desktop settings:
{
  "mcpServers": {
    "webscan": {
      "command": "node",
      "args": ["path/to/mcp-server-webscan/build/index.js"],
      "env": {
        "NODE_ENV": "development",
        "LOG_LEVEL": "info"
      }
    }
  }
}

LOG_LEVEL here illustrates setting the server's log level via an environment variable.

  2. Use the tools in your conversations:
Could you fetch the content from https://example.com and convert it to Markdown?

Development

Prerequisites

  • Node.js >= 18
  • npm

Project Structure (Post-Refactor)

mcp-server-webscan/
├── src/
│   ├── config/
│   │   └── ConfigurationManager.ts
│   ├── services/
│   │   ├── CheckLinksService.ts
│   │   ├── CrawlSiteService.ts
│   │   ├── ExtractLinksService.ts
│   │   ├── FetchPageService.ts
│   │   ├── FindPatternsService.ts
│   │   ├── GenerateSitemapService.ts
│   │   └── index.ts
│   ├── tools/
│   │   ├── checkLinksTool.ts
│   │   ├── checkLinksToolParams.ts
│   │   ├── crawlSiteTool.ts
│   │   ├── crawlSiteToolParams.ts
│   │   ├── extractLinksTool.ts
│   │   ├── extractLinksToolParams.ts
│   │   ├── fetchPageTool.ts
│   │   ├── fetchPageToolParams.ts
│   │   ├── findPatterns.ts
│   │   ├── findPatternsToolParams.ts
│   │   ├── generateSitemapTool.ts
│   │   ├── generateSitemapToolParams.ts
│   │   └── index.ts
│   ├── types/
│   │   ├── checkLinksTypes.ts
│   │   ├── crawlSiteTypes.ts
│   │   ├── extractLinksTypes.ts
│   │   ├── fetchPageTypes.ts
│   │   ├── findPatternsTypes.ts
│   │   ├── generateSitemapTypes.ts
│   │   └── index.ts
│   ├── utils/
│   │   ├── errors.ts
│   │   ├── index.ts
│   │   ├── logger.ts
│   │   ├── markdownConverter.ts
│   │   └── webUtils.ts
│   ├── initialize.ts
│   └── index.ts    # Main server entry point
├── build/          # Compiled JavaScript
├── node_modules/
├── .clinerules
├── .gitignore
├── Dockerfile
├── LICENSE
├── mcp-consistant-servers-guide.md
├── package.json
├── package-lock.json
├── README.md
├── RFC-2025-001-Refactor.md
├── smithery.yaml
└── tsconfig.json
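
The tools/ and services/ split suggests each *Tool.ts file registers a handler that delegates to its service. A minimal sketch of that wiring with the MCP TypeScript SDK follows (the FetchPageService import and its fetchPage method are assumptions inferred from the file layout, not the actual code):

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";
import { FetchPageService } from "../services/index.js"; // hypothetical import

export function registerFetchPageTool(server: McpServer, service: FetchPageService) {
  server.tool(
    "fetch-page",
    "Fetch a web page and convert it to Markdown",
    {
      url: z.string().url().describe("URL of the page to fetch"),
      selector: z.string().optional().describe("CSS selector to target specific content"),
    },
    async ({ url, selector }) => {
      // Delegate to the service layer; method name is hypothetical.
      const markdown = await service.fetchPage(url, selector);
      return { content: [{ type: "text", text: markdown }] };
    }
  );
}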

Building

npm run build

Development Mode

npm run dev

Error Handling

The server implements comprehensive error handling:

  • Invalid parameters
  • Network errors
  • Content parsing errors
  • URL validation failures

All errors are properly formatted according to the MCP specification.
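
The MCP convention is to report tool-level failures in-band, as a result carrying isError: true, rather than as protocol errors. A minimal sketch of that formatting, assuming a handler built on Node's global fetch (illustrative, not the server's actual code):

// Illustrative only: map failures onto an MCP-style error result instead of throwing.
async function safeFetchPage(url: string) {
  try {
    new URL(url); // URL validation: throws a TypeError on malformed input
    const res = await fetch(url); // network errors reject here
    if (!res.ok) throw new Error(`HTTP ${res.status} fetching ${url}`);
    const text = await res.text(); // content parsing errors surface here
    return { content: [{ type: "text", text }] };
  } catch (err) {
    const message = err instanceof Error ? err.message : String(err);
    return { content: [{ type: "text", text: `Error: ${message}` }], isError: true };
  }
}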

Contributing

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

License

MIT License - see the LICENSE file for details
