5 Docker MCP Servers Every Frontend and Backend Developer Must Be Aware Of in 2025

Stop wasting hours setting up MCP servers. The Docker MCP Catalog provides 270+ enterprise-grade, containerized Model Context Protocol servers that install in seconds—no dependency hell, no environment conflicts, no cross-platform issues.

TL;DR - Install These 5 Containerized MCP Servers Today

Stop wasting time with complex environment setups. These Docker-containerized MCP servers from the Docker Hub MCP Catalog work out of the box:

  1. GitHub Official MCP Server → Automate repositories, issues, and PRs without leaving your AI assistant
  2. MongoDB MCP Server → Query databases with natural language, no schema hunting needed
  3. Brave Search MCP Server → Real-time web search directly in your AI workflow
  4. Context7 MCP Server → Live framework documentation that eliminates API hallucinations
  5. Playwright MCP Server → Browser automation and E2E testing with zero configuration

Why Docker MCP Gateway Changes Everything

It's 11 AM. Sprint planning just wrapped. You need to:

  • Check GitHub for blockers across 5 repos
  • Query MongoDB production data for a bug report
  • Test the new checkout flow in staging
  • Verify API documentation for a deprecated endpoint
  • Search for a solution to that obscure CORS issue

You open 12 tabs. Switch contexts 47 times. Copy-paste credentials. Fight with Node.js versions. Two hours evaporate. This is the daily tax developers pay for fragmented workflows.

The Model Context Protocol (MCP) promised to fix this by giving AI direct access to your tools. But traditional MCP server installation meant:

❌ Environment conflicts (Node 18 vs Node 20)
❌ Python version hell (3.9, 3.10, 3.11?)
❌ Manual dependency installation
❌ Cross-platform nightmares (works on Mac, breaks on Windows)
❌ Security risks (unrestricted host access)

Docker solved all of this.

Enter: Docker MCP Gateway

The Docker MCP Gateway is a Docker CLI plugin (docker mcp) that acts as a unified interface between AI clients (Claude Desktop, Claude Code, VS Code, Cursor, Zed) and multiple MCP servers running in isolated Docker containers.

Think of it as a reverse proxy for AI tools:

AI Client → Docker MCP Gateway → MCP Servers (Docker Containers)
           (single connection)      (isolated, containerized)
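
For the curious: the gateway speaks standard MCP over stdio, so any MCP client can talk to it. Here is a minimal sketch using the TypeScript MCP SDK; the client name and version are placeholders, and this is just to show the single-connection idea, not a required setup.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

// One connection: the transport spawns the gateway, which fans out to every enabled server.
const transport = new StdioClientTransport({
  command: "docker",
  args: ["mcp", "gateway", "run"],
});

const client = new Client({ name: "my-ai-client", version: "1.0.0" });
await client.connect(transport);

// Tools from all enabled servers show up in a single list.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));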

What makes it powerful:

🐳 Container-based isolation: Each MCP server runs in its own Docker container
🔧 Unified management: One gateway serves all your AI clients
🔐 Secrets management: Secure credential handling via Docker Desktop
🌐 OAuth integration: Built-in OAuth flows for service authentication
📋 Catalog system: Browse 270+ servers from the Docker MCP Catalog
🔍 Dynamic discovery: Automatic tool and resource discovery
🎯 Zero configuration: Install servers in seconds, not hours

The Docker MCP Catalog hosts 270+ enterprise-grade, containerized MCP servers from publishers like GitHub, MongoDB, Microsoft, Stripe, AWS, and Brave. Each server:

✅ Runs in complete isolation
✅ Works identically on Windows, Mac, Linux, ARM, x86
✅ Requires zero dependency installation
✅ Ships with Docker Scout security scanning
✅ Installs in seconds with a single command

As David Soria Parra from Anthropic stated: "Docker is one of the most widely used packaging solutions for developers. The same way it solved the packaging problem for the cloud, it now has the potential to solve the packaging problem for rich AI agents."

Let me show you 5 game-changing servers from the Docker MCP Catalog.


MCP Server #1: GitHub Official MCP Server - Your Repo Command Center

Docker Hub: mcp/github-official
Publisher: GitHub
Tools: 40+ tools

The Problem

You're reviewing PRs across 3 repositories. You need to:

  1. List all open issues labeled "bug"
  2. Check PR status for feature branches
  3. Add review comments
  4. Merge approved PRs
  5. Create a new issue from a bug report

You switch between GitHub tabs, copy-paste issue numbers into Slack, manually track merge status in a spreadsheet. 30 minutes gone.

Or you ask your AI assistant to help with GitHub tasks, but it can't actually do anything. It gives you code snippets. You become the execution layer. You're still doing the work.

The Solution

GitHub's official MCP server gives your AI assistant direct access to the GitHub API, with 40+ tools for repository automation.

What you can do (natural language → actual GitHub actions):

"List all open issues in docker/mcp-gateway labeled 'enhancement'"
"Create a pull request from feature/auth to main with title 'Add OAuth support'"  
"Show me the diff for PR #127 in our backend repo"
"Add a review comment on line 45: 'Consider using async/await here'"
"Merge PR #142 after all checks pass"

Your AI assistant executes these using the actual GitHub API. No copy-paste. No tab switching. No context loss.
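Under the hood, each tool wraps an ordinary GitHub REST call. As a rough illustration (not the server's actual source), the first prompt above maps to something like this Octokit call, with the owner and repo pulled from the prompt:

import { Octokit } from "octokit";

const octokit = new Octokit({ auth: process.env.GITHUB_PERSONAL_ACCESS_TOKEN });

// "List all open issues in docker/mcp-gateway labeled 'enhancement'"
const { data: issues } = await octokit.rest.issues.listForRepo({
  owner: "docker",
  repo: "mcp-gateway",
  labels: "enhancement",
  state: "open",
});

console.log(issues.map((issue) => `#${issue.number} ${issue.title}`));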

Installation with Docker MCP Gateway

Prerequisites:

  • Docker Desktop installed
  • MCP Toolkit enabled in Docker Desktop
  • GitHub Personal Access Token

Step 1: Enable the GitHub MCP Server

# Initialize the Docker MCP Catalog
docker mcp catalog init

# Enable the GitHub server
docker mcp server enable github-official

Step 2: Configure Authentication

# Set your GitHub token as a Docker secret
echo "ghp_your_token_here" | docker secret create GITHUB_PERSONAL_ACCESS_TOKEN -

# Or set via environment variable
docker mcp config write '
servers:
  github-official:
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: ghp_your_token_here
'

Step 3: Start the Gateway

# Run the gateway (stdio mode for Claude Desktop/Code)
docker mcp gateway run

# Or for VS Code/Cursor (HTTP mode)
docker mcp gateway run --port 8080 --transport streaming

That's it! Your AI clients can now use GitHub tools.

Setup time: 2 minutes
No Node.js required: Runs entirely in Docker
Security: Tokens managed via Docker secrets

Real-World Scenarios

Scenario 1: Sprint Planning Automation

You're leading a team of 8 developers. Every Monday morning, you need to:

  • Identify stale PRs (open > 7 days)
  • List high-priority issues without assignees
  • Check which features are blocked by open issues

Old way: Manual GitHub browsing, 20 minutes
With GitHub MCP: Single prompt, 30 seconds

"Show me all PRs in our repo that have been open for more than 7 days 
and list any issues labeled 'high-priority' without assignees"

Scenario 2: Code Review Workflow

You're reviewing a complex authentication refactor. You need to:

  • See the full diff
  • Add inline comments on specific lines
  • Request changes with a summary

Old way: GitHub web UI, manual clicking, 10 minutes
With GitHub MCP: Natural language commands

"Show me the diff for PR #245. Add a review comment on 
src/auth/middleware.ts line 67 saying 'Should we add rate limiting here?' 
Then request changes with summary: 'Needs rate limiting consideration'"

Scenario 3: Automated Issue Triage

50 new issues came in overnight from your open-source project. You need to:

  • Categorize by label
  • Assign to team members
  • Close obvious duplicates

Old way: Manual processing, 45 minutes
With GitHub MCP: Automated triage

"Analyze the last 50 issues. For any mentioning 'login fails', 
add label 'bug' and assign to @auth-team. For deployment questions, 
label 'docs' and assign to @devops. Close issues that duplicate #892"

Available Tools (40+)

  • Issues: create_issue, update_issue, list_issues, search_issues, add_comment, manage_labels
  • Pull Requests: create_pull_request, merge_pull_request, get_diff, add_review_comments, request_reviewers
  • Repositories: search_code, list_branches, create_branch, get_file_contents, push_files
  • Commits: list_commits, get_commit_details
  • Reviews: create_review, submit_pending_review, delete_pending_review

When to Use (and When Not To)

Best for:

  • Multi-repo workflows (microservices, monorepos)
  • Automated PR management and code review
  • Issue triage and sprint planning
  • Release automation
  • Cross-team repository access

Limitations:

  • Requires GitHub Personal Access Token (PAT)
  • Rate limited by GitHub API (5,000 requests/hour)
  • Can't trigger GitHub Actions directly (use workflow_dispatch instead)

MCP Server #2: MongoDB MCP Server - Natural Language Database Queries

Docker Hub: mcp/mongodb
Publisher: MongoDB
Downloads: 10,000+ pulls
Tools: 22 tools

The Problem

Your production database has 47 collections. A customer reports data inconsistency. You need to:

  1. Find the user document by email
  2. Check related orders in the last 30 days
  3. Verify payment status across 3 collections
  4. Aggregate revenue by product category

You open MongoDB Compass. Dig through schemas. Write aggregation pipelines. Test queries. Debug $lookup syntax for the 17th time this week. Meanwhile, your customer is still waiting.

Or you ask your AI assistant for MongoDB queries, but:

  • It doesn't know your schema
  • It hallucinates collection names
  • Aggregation pipelines are syntactically wrong
  • You can't execute queries directly—you're still copying and pasting

The Solution

The official MongoDB MCP server connects your AI assistant directly to your MongoDB clusters—with natural language querying and schema awareness.

What you can do:

"Find user with email john@example.com and show their last 5 orders"
"Count documents in users collection where created_at is after 2024-01-01"
"Create aggregation pipeline: total revenue by product category for Q1 2025"
"Show me the schema for the 'transactions' collection"
"Insert a new product: {name: 'Widget Pro', price: 29.99, category: 'tools'}"

Your AI translates natural language → correct MongoDB queries → executes them → returns results.
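To make that concrete, here is roughly the pair of driver queries behind the first prompt, sketched with the Node.js MongoDB driver. Collection and field names are illustrative, not the server's internals.

import { MongoClient } from "mongodb";

const client = new MongoClient(process.env.MONGODB_URI ?? "mongodb://localhost:27017/mydb");
await client.connect();
const db = client.db();

// "Find user with email john@example.com and show their last 5 orders"
const user = await db.collection("users").findOne({ email: "john@example.com" });
const orders = await db
  .collection("orders")
  .find({ user_id: user?._id })
  .sort({ created_at: -1 })
  .limit(5)
  .toArray();

console.log({ user, orders });
await client.close();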

Installation with Docker MCP Gateway

Step 1: Enable MongoDB MCP Server

docker mcp server enable mongodb

Step 2: Configure Connection

# For local MongoDB
docker mcp config write '
servers:
  mongodb:
    env:
      MONGODB_URI: mongodb://localhost:27017/mydb
'

# For MongoDB Atlas
docker mcp config write '
servers:
  mongodb:
    env:
      MONGODB_URI: mongodb+srv://user:password@cluster.mongodb.net/mydb
'

Step 3: Start Gateway

docker mcp gateway run

Setup time: 45 seconds
Works with: MongoDB Atlas, local MongoDB, replica sets, sharded clusters

Real-World Scenarios

Scenario 1: Production Debugging

Customer says their order is missing. Support sends you: order_id: "ORD-2024-98234"

Old way:

  1. Open MongoDB Compass
  2. Search orders collection
  3. Copy customer_id
  4. Search users collection
  5. Check payments collection
  6. Manually correlate data
  7. Time elapsed: 12 minutes

With MongoDB MCP:

"Find order ORD-2024-98234, show the associated user details 
and payment status, and list any related transactions"

Time elapsed: 20 seconds

The AI assistant executes multiple queries, joins data across collections, and presents a unified view.

Scenario 2: Analytics Queries

Your PM asks: "What's our average order value by country for users who signed up in Q4 2024?"

Old way:

// You write this complex aggregation pipeline:
db.orders.aggregate([
  {
    $lookup: {
      from: "users",
      localField: "user_id",
      foreignField: "_id",
      as: "user"
    }
  },
  { $unwind: "$user" },
  {
    $match: {
      "user.created_at": {
        $gte: ISODate("2024-10-01"),
        $lt: ISODate("2025-01-01")
      }
    }
  },
  {
    $group: {
      _id: "$user.country",
      avgOrderValue: { $avg: "$total_amount" }
    }
  }
])

Time: 8 minutes to write and debug

With MongoDB MCP:

"Average order value by country for users who signed up in Q4 2024"

Time: 15 seconds

The AI assistant generates the aggregation pipeline, executes it, and formats the results.

Scenario 3: Database Administration

You need to clean up old test data:

Old way:

// Count first
db.users.countDocuments({ email: /test.*@example\.com/ })
// Review
// Then delete
db.users.deleteMany({ email: /test.*@example\.com/ })

With MongoDB MCP:

"Count users with test emails (containing 'test' and '@example.com'). 
If less than 100, delete them and confirm the count after deletion"

The AI executes safely with confirmations.

Available Tools (22)

  • Query Operations: find, findOne, count, distinct
  • CRUD Operations: insertOne, insertMany, updateOne, updateMany, deleteOne, deleteMany
  • Aggregation: aggregate, mapReduce
  • Schema: listCollections, getCollectionSchema, createCollection
  • Index Management: listIndexes, createIndex, dropIndex
  • Admin: getDatabaseStats, getServerInfo

When to Use (and When Not To)

Best for:

  • Production debugging and data investigation
  • Quick analytics queries
  • Database administration tasks
  • Customer support data lookups
  • Schema exploration

Limitations:

  • Not suitable for write-heavy production automation (use with caution)
  • Complex transactions may need manual verification
  • No built-in query optimization suggestions
  • Always test on non-production first

MCP Server #3: Brave Search MCP Server - Real-Time Web Search for Your AI

Docker Hub: mcp/brave
Publisher: Brave
Downloads: 50,000+ pulls
Tools: 6 tools

The Problem

You're debugging a cryptic error message: Error: ECONNREFUSED connecting to Redis

Your options:

  1. Copy error → Google → Click through 5 Stack Overflow threads → Find the answer (8 minutes)
  2. Ask your AI assistant → It gives outdated advice from its training data (2023)
  3. Switch to browser → Search → Read → Switch back → Lose context

Or you're researching a new framework whose v2.0 shipped after your AI's knowledge cutoff. You get outdated syntax. You debug. Again.

The context switching alone kills productivity.

The Solution

Brave Search MCP server gives your AI assistant real-time web search capabilities. No context switching. No browser tabs. No copy-paste.

What you can do:

"Search for solutions to 'ECONNREFUSED connecting to Redis' error"
"Find the latest Next.js 15 documentation on server actions"
"What are the recent security vulnerabilities in Express.js?"
"Search for React 19 migration guides published this month"
"Find Docker Compose examples for MongoDB replica sets"

The AI searches the web, reads results, synthesizes information, and answers—all within your workflow.
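Behind the scenes the server calls the Brave Search API with your key. A rough sketch of that request follows; the query string is just an example and the response fields follow Brave's API documentation.

// What a single web search boils down to: one authenticated GET request.
const query = "ECONNREFUSED connecting to Redis";
const response = await fetch(
  "https://api.search.brave.com/res/v1/web/search?q=" + encodeURIComponent(query),
  {
    headers: {
      Accept: "application/json",
      "X-Subscription-Token": process.env.BRAVE_API_KEY ?? "",
    },
  }
);

const results = await response.json();
console.log(results.web?.results?.slice(0, 3));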

Installation with Docker MCP Gateway

Prerequisites:

  • A Brave Search API key (the free tier covers 2,000 queries/month)

Step 1: Enable Brave Search

docker mcp server enable brave

Step 2: Configure API Key

docker mcp config write '
servers:
  brave:
    env:
      BRAVE_API_KEY: BSA-your-api-key-here
'

Step 3: Start Gateway

docker mcp gateway run

Setup time: 60 seconds
Cost: Free tier includes 2,000 queries/month

Real-World Scenarios

Scenario 1: Debugging Production Issues

Your app crashes with: TypeError: Cannot read property 'map' of undefined

You're using a specific version of a library. Stack Overflow has 1,000 answers but none match your exact version.

Old way:

  1. Google search
  2. Filter by date
  3. Check Stack Overflow
  4. Try GitHub issues
  5. Read documentation
  6. Time: 15 minutes

With Brave Search MCP:

"Search for TypeError: Cannot read property 'map' of undefined 
in react-query v5.8.4 and show me recent solutions"

The AI searches, finds recent discussions, synthesizes answers, and suggests solutions.

Time: 45 seconds

Scenario 2: Learning New Technologies

You need to implement OAuth 2.0 with PKCE flow in your Next.js app.

Old way:

  1. Search for "Next.js OAuth 2.0 PKCE"
  2. Read 5 blog posts
  3. Check official docs
  4. Find a working example
  5. Adapt to your needs
  6. Time: 30 minutes

With Brave Search MCP:

"Search for recent Next.js 15 implementations of OAuth 2.0 with PKCE flow. 
Show me complete examples with code snippets"

The AI finds current examples, extracts code, explains implementation, and adapts to your stack.

Time: 3 minutes

Scenario 3: Security Research

Your security scan flagged a dependency. You need to know if there's a known exploit.

Old way:

  1. Check CVE databases
  2. Search GitHub security advisories
  3. Look for blog posts
  4. Check npm advisory
  5. Time: 10 minutes

With Brave Search MCP:

"Search for recent security vulnerabilities in axios version 0.21.1 
and check if there's a patch available"

The AI searches CVE databases, security advisories, and GitHub, then summarizes findings with severity and remediation.

Time: 30 seconds

Available Tools (6)

  • web_search: General web search with Brave
  • local_search: Location-based search results
  • news_search: Recent news articles
  • image_search: Image results
  • video_search: Video content
  • summarize: Summarize webpage contents

When to Use (and When Not To)

Best for:

  • Debugging errors with current context
  • Researching latest framework versions
  • Finding recent security vulnerabilities
  • Getting up-to-date documentation
  • Discovering new tools and libraries

Limitations:

  • Free tier limited to 2,000 queries/month
  • Paid tier required for high-volume usage
  • Search quality depends on query formulation
  • May include outdated content in results (AI helps filter)

MCP Server #4: Context7 MCP Server - Stop API Hallucinations

Docker Hub: mcp/context7
Publisher: Upstash
Downloads: 100,000+ pulls
Tools: 2 tools

The Problem

It's 3 PM. You ask your AI assistant for a simple Next.js middleware function. It confidently spits out code using a deprecated API. You spend the next 20 minutes in a debugging rabbit hole, questioning your life choices.

Or you're using Supabase. You ask for a realtime subscription. The AI gives you:

const subscription = supabase
  .from('messages')
  .on('INSERT', payload => console.log(payload))
  .subscribe()

Looks perfect. Except on() was deprecated in Supabase v2. The correct syntax is .channel().on().
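For reference, the working v2 version of that subscription uses the channel API; the channel name is arbitrary, and the table and event come from the example above.

import { createClient } from "@supabase/supabase-js";

const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);

// supabase-js v2: subscribe via a channel instead of .from().on()
const channel = supabase
  .channel("messages-inserts")
  .on(
    "postgres_changes",
    { event: "INSERT", schema: "public", table: "messages" },
    (payload) => console.log(payload)
  )
  .subscribe();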

This happens because LLM training data is historical. When frameworks update their APIs, the training data doesn't follow. Next.js 15 shipped in October 2024, React 19 in December 2024, and both keep evolving; if your AI's knowledge cutoff predates the release you're building against, the APIs it knows are already outdated.

The Solution

Context7 fixes this by injecting live documentation from 1,000+ libraries directly into your AI's context before answering.

What happens:

  1. You ask: "How do I create a Supabase realtime subscription?"
  2. Context7 fetches current Supabase docs
  3. Your AI gets live, up-to-date information
  4. You receive correct, current syntax

What you can do:

"Create a Next.js 15 server action for form submission"
→ Gets live Next.js 15 docs, returns current syntax

"Show me how to use Prisma with edge runtime"
→ Fetches Prisma docs, provides edge-compatible examples

"Implement tRPC v11 with Next.js app router"
→ Retrieves tRPC v11 docs, generates correct implementation
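
If you want to poke at it programmatically, you can call the server's tools through the gateway with the TypeScript MCP SDK. The tool and argument names below follow the list later in this section and should be treated as illustrative, not an exact contract.

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

const client = new Client({ name: "docs-check", version: "1.0.0" });
await client.connect(
  new StdioClientTransport({ command: "docker", args: ["mcp", "gateway", "run"] })
);

// Ask Context7 for current docs before generating code.
const result = await client.callTool({
  name: "search_docs",
  arguments: { query: "Next.js 15 server actions form submission" },
});

console.log(result.content);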

Installation with Docker MCP Gateway

Step 1: Enable Context7

docker mcp server enable context7

Step 2: Start Gateway (No API Key Needed!)

docker mcp gateway run

That's it. No API keys. No configuration. It just works.

Setup time: 20 seconds
Cost: Completely free
Coverage: 1,000+ popular libraries

Real-World Scenarios

Scenario 1: Framework Upgrades

You're upgrading from Next.js 14 to Next.js 15. App router patterns changed. Server actions have new syntax.

Old way:

  1. Read migration guide
  2. Ask AI for help
  3. AI gives Next.js 14 patterns
  4. You debug
  5. Check docs manually
  6. Try again
  7. Time: 25 minutes per file

With Context7 MCP:

"Convert this Next.js 14 page to Next.js 15 app router with server actions"

Context7 fetches Next.js 15 docs. AI uses current patterns. Migration works first try.

Time: 2 minutes per file

Scenario 2: New Library Integration

You need to integrate Stripe's new embedded checkout (released 2 months ago).

Old way:

  • AI doesn't know about new embedded checkout
  • Gives you old Checkout Session code
  • You waste time implementing deprecated patterns
  • Find out later you need to refactor
  • Time: 45 minutes + refactor time

With Context7 MCP:

"Implement Stripe embedded checkout in React"

Context7 fetches latest Stripe docs. AI provides current embedded checkout implementation. Works immediately.

Time: 5 minutes

Scenario 3: Breaking Changes

Prisma released v6 with breaking changes to relations syntax. Your AI was trained on v5.

Old way:

  • Ask AI for Prisma query
  • Get v5 syntax
  • Code breaks
  • Debug for 20 minutes
  • Check docs manually
  • Fix code
  • Time: 30 minutes of frustration

With Context7 MCP:

"Create a Prisma query with nested relations and filters"

Context7 gets Prisma v6 docs. AI uses correct v6 syntax. Query works perfectly.

Time: 1 minute

Available Tools (2)

  • search_docs: Search for documentation across 1,000+ libraries
  • get_content: Retrieve specific documentation pages

Covered Libraries (1,000+)

Frontend Frameworks:

  • React, Next.js, Remix, Astro, SvelteKit, Vue, Nuxt

Backend & APIs:

  • tRPC, Prisma, Drizzle, Supabase, Firebase, Express, Fastify

UI Libraries:

  • shadcn/ui, Radix, Chakra UI, Mantine, Tailwind CSS

Build Tools:

  • Vite, Turbopack, Webpack, Rollup

And 900+ more...

When to Use (and When Not To)

Best for:

  • Rapidly evolving frameworks (Next.js, React, Remix)
  • Libraries with frequent breaking changes (Prisma, Supabase, tRPC)
  • New features in popular tools (Tailwind, shadcn)
  • When your AI's training data is outdated
  • Framework migrations and upgrades

Limitations:

  • Covers ~1,000 popular libraries (niche packages won't have docs)
  • Not a replacement for deep-dive reading
  • Uses additional tokens (overkill for trivial queries)
  • Only works with well-documented libraries

MCP Server #5: Playwright MCP Server - Zero-Config Browser Automation

Docker Hub: mcp/playwright
Publisher: Microsoft
Downloads: 100,000+ pulls
Tools: 21 tools

The Problem

You need to test your login flow. You open Playwright docs. You write a test script. You configure selectors. You deal with authentication tokens. You fight with headless browser quirks. 30 minutes gone for a 2-minute test.

Or you need to scrape data from a competitor's site. You could:

  1. Write a Playwright script manually
  2. Handle cookie consent
  3. Deal with lazy loading
  4. Extract structured data
  5. Debug when things break
  6. Time: 1 hour minimum

Or your designer says: "Can you check if our checkout flow works on mobile?" You set up Playwright mobile emulation. Configure viewport. Test. Debug. Another 30 minutes.

The Solution

Playwright MCP server gives your AI assistant direct browser automation capabilities. Describe what you want to test or scrape—the AI writes and executes Playwright code automatically.

What you can do:

"Navigate to our staging site, fill the login form with test@example.com / password123, 
click submit, and verify we reach the dashboard"

"Go to example.com/pricing, extract all pricing tiers with their features as JSON"

"Take a screenshot of our homepage in mobile view (iPhone 14 Pro dimensions)"

"Check if the checkout button is visible after selecting a product"

"Navigate through our onboarding flow and report any broken links"

The AI writes Playwright code, runs it in a containerized browser, and returns results.

Installation with Docker MCP Gateway

Step 1: Enable Playwright

docker mcp server enable playwright

Step 2: Start Gateway

docker mcp gateway run

That's it! The Playwright container includes a full browser environment.

Setup time: 30 seconds
No browser installation needed: Everything runs in Docker
Supported browsers: Chromium, Firefox, WebKit

Real-World Scenarios

Scenario 1: E2E Testing

You just pushed a critical auth refactor. You need to verify the login flow works.

Old way:

// Write Playwright test manually:
const { test, expect } = require('@playwright/test');

test('login flow', async ({ page }) => {
  await page.goto('https://staging.example.com/login');
  await page.fill('input[name="email"]', 'test@example.com');
  await page.fill('input[name="password"]', 'password123');
  await page.click('button[type="submit"]');
  await expect(page).toHaveURL(/dashboard/);
  await expect(page.locator('h1')).toContainText('Dashboard');
});

Time: 10 minutes to write, run, debug

With Playwright MCP:

"Test the login flow on staging: use test@example.com / password123, 
verify we land on the dashboard with 'Dashboard' heading"

AI generates test, runs it, reports results with screenshots on failure.

Time: 30 seconds

Scenario 2: Competitive Analysis

Your PM asks: "What features does competitor X highlight on their pricing page?"

Old way:

  1. Manually visit site
  2. Copy/paste information
  3. Format into spreadsheet
  4. Time: 15 minutes

With Playwright MCP:

"Navigate to competitorx.com/pricing, extract all pricing tiers, 
features included in each tier, and prices. Return as structured JSON"

AI scrapes the page, structures data, returns clean JSON.

Time: 1 minute
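
For comparison, here is the kind of script the AI generates and runs for that prompt. The selectors are placeholders it would adapt to the real page structure.

import { chromium } from "playwright";

const browser = await chromium.launch();
const page = await browser.newPage();
await page.goto("https://competitorx.com/pricing");

// Extract each pricing card into a structured object (selectors are hypothetical).
const tiers = await page.$$eval(".pricing-tier", (cards) =>
  cards.map((card) => ({
    name: card.querySelector("h3")?.textContent?.trim(),
    price: card.querySelector(".price")?.textContent?.trim(),
    features: Array.from(card.querySelectorAll("li")).map((li) => li.textContent?.trim()),
  }))
);

console.log(JSON.stringify(tiers, null, 2));
await browser.close();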

Scenario 3: Visual Regression Testing

You just updated your CSS. You need to check if the homepage looks correct across devices.

Old way:

  1. Set up Playwright screenshot tests
  2. Configure viewports
  3. Run tests
  4. Compare manually
  5. Time: 20 minutes

With Playwright MCP:

"Take screenshots of example.com homepage in:
- Desktop (1920x1080)
- iPad Pro (1024x1366)
- iPhone 14 Pro (393x852)
Compare them and highlight any layout issues"

AI takes screenshots, analyzes layouts, reports issues.

Time: 2 minutes
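
A sketch of the generated screenshot pass, using the viewports from the prompt (output paths are arbitrary):

import { chromium } from "playwright";

const viewports = [
  { name: "desktop", width: 1920, height: 1080 },
  { name: "ipad-pro", width: 1024, height: 1366 },
  { name: "iphone-14-pro", width: 393, height: 852 },
];

const browser = await chromium.launch();
for (const vp of viewports) {
  // One isolated context per viewport so sizes don't leak between shots.
  const context = await browser.newContext({ viewport: { width: vp.width, height: vp.height } });
  const page = await context.newPage();
  await page.goto("https://example.com");
  await page.screenshot({ path: `homepage-${vp.name}.png`, fullPage: true });
  await context.close();
}
await browser.close();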

Scenario 4: Accessibility Audit

You need to check if your app is keyboard-navigable.

Old way:

  • Manually tab through interface
  • Document issues
  • Time: 30 minutes

With Playwright MCP:

"Navigate to example.com/app, tab through all interactive elements, 
and report any that aren't keyboard accessible"

AI simulates keyboard navigation, identifies issues, provides detailed report.

Time: 2 minutes

Available Tools (21)

  • Navigation: goto, goBack, goForward, reload
  • Interaction: click, fill, type, press, hover
  • Selectors: querySelector, querySelectorAll, waitForSelector
  • Assertions: isVisible, isEnabled, hasText, hasAttribute
  • Screenshots: screenshot, fullPageScreenshot
  • Mobile: emulateDevice (50+ device presets)
  • Network: setExtraHTTPHeaders, setCookie
  • Extraction: extractText, extractHTML, evaluate

When to Use (and When Not To)

Best for:

  • E2E testing during development
  • Quick smoke tests before deployment
  • Competitive research and data extraction
  • Visual regression testing
  • Accessibility audits
  • Form testing and validation

Limitations:

  • Not suitable for large-scale production test suites (use CI/CD)
  • Can't handle complex authentication flows (OAuth, 2FA)
  • Rate limiting concerns for scraping
  • No built-in retry logic for flaky tests

Setting Up Docker MCP Gateway: The Complete Guide

Now that you've seen the 5 essential servers, here's how to set up the Docker MCP Gateway to use them all.

Prerequisites

  • Docker Desktop 4.37+ (includes MCP Toolkit)
  • Enable MCP Toolkit in Docker Desktop settings
  • Claude Desktop, Claude Code, VS Code, or Cursor

Installation Steps

Step 1: Initialize the Docker MCP Catalog

# This downloads the official catalog of MCP servers
docker mcp catalog init

# Verify installation
docker mcp catalog ls

Step 2: Enable Your Desired Servers

# Enable all 5 servers we covered
docker mcp server enable github-official mongodb brave context7 playwright

# Verify enabled servers
docker mcp server ls

Step 3: Configure Server Credentials

# GitHub
docker mcp config write '
servers:
  github-official:
    env:
      GITHUB_PERSONAL_ACCESS_TOKEN: ghp_your_token_here
'

# MongoDB
docker mcp config write '
servers:
  mongodb:
    env:
      MONGODB_URI: mongodb://localhost:27017/mydb
'

# Brave Search
docker mcp config write '
servers:
  brave:
    env:
      BRAVE_API_KEY: BSA_your_key_here
'

# Context7 and Playwright need no configuration!

Step 4: Start the Gateway

# For Claude Desktop/Code (stdio mode)
docker mcp gateway run

# For VS Code/Cursor (HTTP mode)
docker mcp gateway run --port 8080 --transport streaming

Step 5: Connect Your AI Client

For Claude Desktop:

Edit claude_desktop_config.json:

{
  "mcpServers": {
    "docker-gateway": {
      "command": "docker",
      "args": ["mcp", "gateway", "run"]
    }
  }
}

For Claude Code:

# The gateway is automatically detected if running
claude mcp list

For VS Code/Cursor:

Add to settings:

{
  "mcp.servers": {
    "docker-gateway": {
      "url": "http://localhost:8080"
    }
  }
}

Verifying Everything Works

# List all available tools from enabled servers
docker mcp tools ls

# Test a tool
docker mcp tools call github:list_repositories

# Check tool count
docker mcp tools count

You should see tools from all 5 servers available!


Advanced: Managing Multiple Server Configurations

The Docker MCP Gateway supports working sets for managing different server configurations:

# Create a working set for frontend development
docker mcp workingset create frontend \
  --servers github-official,context7,playwright,brave

# Create a working set for backend development
docker mcp workingset create backend \
  --servers github-official,mongodb,brave

# List working sets
docker mcp workingset ls

# Run gateway with specific working set
docker mcp gateway run --working-set frontend

This lets you switch between different tool configurations without manually enabling/disabling servers.


Common Issues and Troubleshooting

Issue: "Server failed to start"

Solution:

# Check Docker logs
docker logs $(docker ps -q -f "ancestor=mcp/github-official")

# Verify configuration
docker mcp config read

# Reset and reconfigure
docker mcp server disable github-official
docker mcp server enable github-official

Issue: "Authentication failed"

Solution:

# Verify secrets are set
docker secret ls

# Re-create secret
docker secret rm GITHUB_PERSONAL_ACCESS_TOKEN
echo "ghp_new_token" | docker secret create GITHUB_PERSONAL_ACCESS_TOKEN -

Issue: "Tools not appearing in AI client"

Solution:

# Restart the gateway
docker mcp gateway stop
docker mcp gateway run

# Restart your AI client (Claude Desktop, VS Code, etc.)

# Verify tools are available
docker mcp tools ls

Issue: "High memory usage"

Solution:

# Some servers (Playwright) need more memory
# Increase Docker Desktop memory allocation in settings

# Or use resource limits:
docker mcp gateway run --memory 4g

The Bottom Line

You've just learned about 5 essential Docker MCP servers:

  1. GitHub Official - Automate repository workflows
  2. MongoDB - Natural language database queries
  3. Brave Search - Real-time web search
  4. Context7 - Current framework documentation
  5. Playwright - Browser automation without config

Each server:

  • Installs in under 60 seconds
  • Runs in isolated Docker containers
  • Requires zero dependency management
  • Works across all platforms
  • Integrates with your AI workflow

Most developers will read this and do nothing.

Don't be most developers.

Take Action Now

Pick one server. Just one.

  • Working on GitHub? → Enable github-official
  • Debugging databases? → Enable mongodb
  • Need current docs? → Enable context7
  • Testing workflows? → Enable playwright
  • Researching solutions? → Enable brave

Installation takes 2 minutes. The productivity gains last forever.

# Right now. Do it.
docker mcp catalog init
docker mcp server enable [your-choice]
docker mcp gateway run

Stop reading. Start building.


Additional Resources