Model Context Protocol: A CTO's Guide

26 Feb 2026

When REST APIs arrived in the early 2000s, they didn't make software smarter. They made it connected. Suddenly any frontend could talk to any backend using the same conventions. Any third-party service could be integrated in hours, not weeks. That standardisation didn't just improve developer productivity - it unlocked an entirely new category of products that simply weren't possible before.

AI is at that same inflection point right now. The models are already smart enough. ChatGPT, Claude, Gemini - the intelligence is there. What's missing is the layer that connects that intelligence to your actual systems: your database, your CRM, your support queue, your internal APIs.

That layer has a name. It's called MCP - Model Context Protocol. And just like REST APIs became the default way systems talked to each other, MCP is quickly becoming the default way AI talks to systems.

This guide is for CTOs and engineering teams who are ready to move beyond AI demos and build something that actually runs in production.

By the end, you'll understand what MCP is architecturally, when to build versus use existing servers, and how to implement a production-ready MCP server in Node.js - with real code, real patterns, and the mistakes to avoid before they cost your team time.

Where Most Engineering Teams Are Stuck

Before diving into MCP, it's worth being honest about where most companies actually are with AI right now. In our experience working with startups and SMEs, there are three distinct stages:

Stage 1 - AI as a Search Bar
The team uses ChatGPT or Claude for drafting emails,
summarising documents, and answering questions.
Useful. But isolated. No connection to your systems.

Stage 2 - AI with RAG
The team has connected AI to internal documents via
retrieval-augmented generation. Claude can answer
questions about your knowledge base.
Better. But still read-only. Still no actions.

Stage 3 - AI with MCP
AI is connected to live data and can act on it.
It can query your database, update records, trigger
workflows, and post to Slack - based on a single prompt.
This is where the productivity gap between companies opens.

Most companies today sit at Stage 1 or Stage 2. The jump to Stage 3 is not about better models or bigger budgets. It's about one architectural decision: giving your AI a standardised way to interact with your systems. That's exactly what MCP enables.

What is MCP? The Missing Layer in Your Stack

Model Context Protocol is an open standard created by Anthropic that defines how AI models communicate with external tools and data sources. It's already supported by Claude, Cursor, Zed, Windsurf, and a growing list of development tools - and the ecosystem is expanding fast.

The simplest way to think about it: if REST APIs are how systems talk to each other, MCP is how AI talks to systems.

Here's where MCP sits in your existing architecture:

┌──────────────────────────────────────┐
│         Your AI Interface            │
│     (Claude / Your Custom App)       │
├──────────────────────────────────────┤
│          MCP Protocol Layer          │  ← The standard bridge
├─────────────┬────────────┬───────────┤
│  MCP Server │ MCP Server │ MCP Server│
│    (DB)     │   (CRM)    │  (Slack)  │
└─────────────┴────────────┴───────────┘
        │            │           │
   PostgreSQL      HubSpot     Slack API

There are three components to understand:

MCP Host is the AI application your users interact with - Claude Desktop, your custom chat interface, or any MCP-compatible tool. It receives the user's prompt and decides which tools to use.

MCP Client is the coordination layer that lives inside the Host. It speaks the MCP protocol, manages connections to servers, and routes requests and responses. You don't build this - it's part of the Host.

MCP Server is what you build. It's a lightweight program that exposes specific capabilities to the AI. Each server is focused on one domain - your database, your CRM, your Slack workspace. Claude discovers what your server can do and calls it when relevant.

An MCP server can expose three types of capabilities:

  • Tools - Actions the AI can execute: get_overdue_invoices, create_support_ticket, post_to_slack

  • Resources - Read-only data the AI can access: files, database records, API responses

  • Prompts - Reusable prompt templates for structured interactions

For most production use cases, Tools are what you'll build first.

Architectural note - Context Window: Every tool call result is injected back into Claude's context window. This is a hard architectural constraint. If your tools return bloated responses - full DB rows, nested JSON objects, large lists - you will hit context limits quickly on any multi-step workflow. Design your tool responses to return only what Claude needs to reason with, not everything your DB can provide. This single decision has more impact on production reliability than almost anything else in your server design.
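To make the constraint concrete, here's a minimal sketch of the lean-response idea. The row shape and field names are hypothetical, but the pattern is the same one used in the invoice server later in this guide: project each row down to a single summary line before returning it to Claude.

```javascript
// Hypothetical shape of a SELECT * row - far more than Claude needs
const fullRow = {
  invoice_id: "INV-1042",
  client_id: "C-88",
  amount: "4200.00",
  currency: "USD",
  line_items: [{ sku: "CONSULTING", qty: 12, unit: 350 }],
  created_at: "2025-11-03T09:12:44Z",
  due_date: "2025-12-03",
  status: "unpaid",
  internal_notes: "Escalated twice. See ticket #4411.",
  days_overdue: 41,
};

// Lean projection: keep only the fields Claude needs to reason with
function toLeanSummary(row) {
  return `Invoice ${row.invoice_id} | Client ${row.client_id} | $${row.amount} | ${row.days_overdue} days overdue`;
}

// One compact line per row instead of ~10 fields of nested JSON
console.log(toLeanSummary(fullRow));
// → Invoice INV-1042 | Client C-88 | $4200.00 | 41 days overdue
```

With fifty overdue invoices, that's fifty short lines in the context window rather than fifty full JSON objects - the difference between a multi-step workflow that completes and one that runs out of context halfway through.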

Build vs Buy: The Decision Before You Write a Line of Code

Before your engineering team spends a day building a custom MCP server, answer these four questions:

1. Does a ready-made MCP server already exist for this tool?
   └── YES → Use it. No build needed.
   └── NO  → Go to question 2.

2. Is your data internal or proprietary?
   (Your own DB schema, internal APIs, custom business logic)
   └── YES → Build a custom server.
   └── NO  → Go to question 3.

3. Does this integration need custom business rules?
   (Filters, permissions, company-specific workflows)
   └── YES → Build a custom server.
   └── NO  → Go to question 4.

4. Is it a well-known public API?
   (Weather, Maps, GitHub, Slack, Notion)
   └── YES → Check the registry again - it almost certainly exists.
   └── NO  → Build a custom server.

A growing library of ready-made MCP servers already exists. Check the official registry at github.com/modelcontextprotocol/servers or the MCP server directory at modelcontextprotocol.io before building. Both are maintained by Anthropic and the community:

Tool            MCP Server
GitHub          Official
Slack           Official
Google Drive    Official
PostgreSQL      Official
Notion          Community
Stripe          Community

Build time estimates:

Scope                                                          Time
Focused server, basic tooling                                  2–4 hours
Production-ready (error handling, validation, logging, auth)   1–2 days

Scope your tools tightly from the start - it's easy to expand, harder to refactor a sprawling server later.

Building a Production MCP Server in Node.js

For this walkthrough, we're building a real accounts receivable automation server. Here's the scenario:

Every Friday, your finance team spends 45 minutes manually checking which clients have unpaid invoices older than 30 days, cross-referencing client contact details, and drafting individual follow-up emails. With this MCP server, a single Claude prompt replaces the entire workflow.

"Which clients have invoices overdue by more than 30 days? Draft a follow-up email for each one."

This is not a toy demo. It connects to a real PostgreSQL database, uses environment variables from line one, includes input validation, and has an audit trail - because production systems need one.

Setup

mkdir invoice-mcp-server && cd invoice-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk pg dotenv

Update package.json for ES modules:

{
  "type": "module",
  "scripts": {
    "start": "node server.js"
  }
}

Create a .env file:

DB_HOST=localhost
DB_PORT=5432
DB_NAME=your_database
DB_USER=your_user
DB_PASSWORD=your_password
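A missing .env value is the most common cause of silent startup failures later on. One option - a hypothetical fail-fast check, not part of the MCP SDK - is to verify the variables before the server ever touches the database:

```javascript
// Names must match the .env file above
const REQUIRED = ["DB_HOST", "DB_PORT", "DB_NAME", "DB_USER", "DB_PASSWORD"];

// Returns the names of any required variables that are unset or empty
function missingEnvVars(env, required = REQUIRED) {
  return required.filter((key) => !env[key]);
}

// In server.js, run this against process.env before creating the pool:
//   const missing = missingEnvVars(process.env);
//   if (missing.length > 0) {
//     console.error(`Missing env vars: ${missing.join(", ")}`); // stderr, never stdout
//     process.exit(1);
//   }
console.log(missingEnvVars({ DB_HOST: "localhost", DB_PORT: "5432" }));
// → [ 'DB_NAME', 'DB_USER', 'DB_PASSWORD' ]
```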

server.js

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import pg from "pg";
import dotenv from "dotenv";

dotenv.config();

// ── Database Connection ──────────────────────────────────────────
const db = new pg.Pool({
  host: process.env.DB_HOST,
  port: process.env.DB_PORT,
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
});

// ── MCP Server Setup ─────────────────────────────────────────────
const server = new Server(
  { name: "invoice-manager", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// ── Tool Definitions ─────────────────────────────────────────────
server.setRequestHandler(ListToolsRequestSchema, async () => ({
  tools: [
    {
      name: "get_overdue_invoices",
      description: `Fetches all unpaid invoices older than a specified
        number of days. Use this when the user asks about overdue
        payments, outstanding bills, late clients, or cash flow issues.`,
      inputSchema: {
        type: "object",
        properties: {
          days_overdue: {
            type: "number",
            description: "Minimum number of days past due date",
          },
        },
        required: ["days_overdue"],
      },
    },
    {
      name: "get_client_details",
      description: `Returns contact details and account history for a
        specific client. Use this after fetching overdue invoices to
        get the email address and name needed to draft follow-ups.`,
      inputSchema: {
        type: "object",
        properties: {
          client_id: {
            type: "string",
            description: "The unique client identifier",
          },
        },
        required: ["client_id"],
      },
    },
    {
      name: "log_followup_sent",
      description: `Records that a follow-up email was sent for a
        specific invoice. Always call this after drafting or sending
        a follow-up to maintain the audit trail.`,
      inputSchema: {
        type: "object",
        properties: {
          invoice_id: { type: "string" },
          note: { type: "string", description: "Optional note" },
        },
        required: ["invoice_id"],
      },
    },
  ],
}));

// ── Tool Handlers ────────────────────────────────────────────────
server.setRequestHandler(CallToolRequestSchema, async (req) => {
  const { name, arguments: args } = req.params;

  // Validate args exist before touching the DB
  if (!args) {
    return { content: [{ type: "text", text: "❌ No arguments provided." }] };
  }

  if (name === "get_overdue_invoices") {
    const days = Number(args.days_overdue);
    if (isNaN(days) || days < 0) {
      return { content: [{ type: "text", text: "❌ Invalid days_overdue value." }] };
    }

    const result = await db.query(
      `SELECT invoice_id, client_id, amount, due_date,
              NOW()::date - due_date AS days_overdue
       FROM invoices
       WHERE status = 'unpaid'
         AND due_date < NOW() - INTERVAL '1 day' * $1
       ORDER BY days_overdue DESC`,
      [days]
    );

    if (result.rows.length === 0) {
      return { content: [{ type: "text", text: "✅ No overdue invoices found." }] };
    }

    const summary = result.rows
      .map(r => `Invoice ${r.invoice_id} | Client ${r.client_id} | $${r.amount} | ${r.days_overdue} days overdue`)
      .join("\n");

    return { content: [{ type: "text", text: `📋 Overdue Invoices:\n\n${summary}` }] };
  }

  if (name === "get_client_details") {
    if (!args.client_id || typeof args.client_id !== "string") {
      return { content: [{ type: "text", text: "❌ Invalid client_id." }] };
    }

    const result = await db.query(
      `SELECT name, email, phone, account_since
       FROM clients WHERE client_id = $1`,
      [args.client_id]
    );

    if (result.rows.length === 0) {
      return { content: [{ type: "text", text: `❌ Client ${args.client_id} not found.` }] };
    }

    const c = result.rows[0];
    return {
      content: [{
        type: "text",
        text: `👤 ${c.name} | ${c.email} | Client since ${c.account_since}`,
      }],
    };
  }

  if (name === "log_followup_sent") {
    if (!args.invoice_id) {
      return { content: [{ type: "text", text: "❌ invoice_id is required." }] };
    }

    await db.query(
      `INSERT INTO followup_log (invoice_id, sent_at, note)
       VALUES ($1, NOW(), $2)`,
      [args.invoice_id, args.note || "Follow-up sent via Claude"]
    );

    return {
      content: [{
        type: "text",
        text: `✅ Follow-up logged for invoice ${args.invoice_id}.`,
      }],
    };
  }

  return { content: [{ type: "text", text: `❓ Unknown tool: ${name}` }] };
});

// ── Start Server ─────────────────────────────────────────────────
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // ⚠️ Always use console.error() in stdio servers.
  // console.log() writes to stdout and corrupts the JSON-RPC transport.
  console.error("✅ Invoice MCP Server running.");
}

main().catch(console.error);
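The if/else chain in the handler works fine at three tools. If the server grows, one option is a keyed handler map - a hypothetical refactor sketch, using the same response shape as above with the DB calls stubbed out:

```javascript
// Helper matching the MCP text-content response shape used in server.js
const textResult = (text) => ({ content: [{ type: "text", text }] });

// One async handler per tool name (DB logic stubbed for brevity)
const handlers = {
  get_overdue_invoices: async (args) =>
    textResult(`Fetching invoices overdue by ${args.days_overdue}+ days...`),
  get_client_details: async (args) =>
    textResult(`Fetching client ${args.client_id}...`),
  log_followup_sent: async (args) =>
    textResult(`✅ Follow-up logged for invoice ${args.invoice_id}.`),
};

// Single dispatch point replaces a growing if/else chain
async function dispatch(name, args) {
  const handler = handlers[name];
  if (!handler) return textResult(`❓ Unknown tool: ${name}`);
  return handler(args ?? {});
}
```

Inside setRequestHandler(CallToolRequestSchema, ...), the whole body then becomes return dispatch(name, args) - adding a fourth tool is one new entry in the map, not another branch.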

Connecting to Claude Desktop

Find your Claude Desktop config file:

Mac:     ~/Library/Application Support/Claude/claude_desktop_config.json
Windows: %APPDATA%\Claude\claude_desktop_config.json

Add your server:

{
  "mcpServers": {
    "invoice-manager": {
      "command": "node",
      "args": ["/absolute/path/to/invoice-mcp-server/server.js"]
    }
  }
}

Restart Claude Desktop. You'll see an indicator confirming the connection.

Common issues at this stage:

  • Wrong absolute path - must be the full path, not relative

  • JSON syntax error in the config file - validate it before saving

  • Server crash on startup - usually a missing .env value; check console.error output first

Tool Descriptions Are Prompts, Not Comments

Look at the description field on each tool in the code above. This is not documentation for your team - it is an instruction for Claude. It determines when Claude decides to call the tool, with what arguments, and in what order. Get this wrong and Claude either calls the wrong tool or misses it entirely.

// ❌ Bad - Claude has no idea when or why to use this
{
  name: "get_data",
  description: "Gets data from the system."
}

// ✅ Good - Claude knows exactly when to call this and what it does
{
  name: "get_overdue_invoices",
  description: `Fetches all unpaid invoices older than a specified number
    of days. Use this when the user asks about overdue payments,
    outstanding bills, late clients, or cash flow issues.`
}

A useful rule of thumb: write your description as if you're telling a new employee when to use this function. Include the trigger conditions - the phrases or contexts that should prompt Claude to reach for this tool. The more precise your description, the more reliably Claude makes the right call.

Things NOT to Do: Anti-Patterns That Will Cost You Time in Production

These are not theoretical concerns. Each one is a decision that teams make early and pay for later.

1. Using console.log() in stdio servers

This is the most common mistake and the hardest to debug. In stdio-based MCP servers, stdout is the communication channel between your server and the Host. Any console.log() call writes directly to that channel and corrupts the JSON-RPC messages. Your server will fail silently or produce cryptic errors. Always use console.error() - it writes to stderr, which is safe.
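Why it corrupts the channel is easy to simulate. MCP stdio messages are newline-delimited JSON-RPC, so a stray log line lands in the middle of the frame stream. The snippet below is illustrative only - not SDK code - but it shows the failure mode:

```javascript
// A valid JSON-RPC frame, as it would appear on stdout
const frame = '{"jsonrpc":"2.0","id":1,"result":{"ok":true}}';

// console.log() output lands on the same stream, ahead of the frame
const corrupted = "Debug: fetching invoices...\n" + frame;

// A naive line-by-line JSON-RPC reader, like the one the Host runs
function parseFrames(stream) {
  return stream.split("\n").map((line) => {
    try {
      return JSON.parse(line);
    } catch {
      return { parseError: line };
    }
  });
}

console.log(parseFrames(frame)[0].result); // clean stream parses fine
console.log(parseFrames(corrupted)[0]);    // parseError: the debug line
```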

2. Writing vague tool descriptions

Covered above, but worth repeating as a pattern: teams often treat the description field as a code comment and write things like "fetches invoice data." Claude will either skip the tool or use it unpredictably. Treat every description as a small prompt and test it - if you'd have to guess what the tool does from the description, Claude will too.

3. Exposing too many tools in one server

Claude's ability to select the right tool degrades meaningfully when a server exposes 15 or more tools. The solution is not fewer tools - it's better organisation. Group tools by domain and run separate focused servers: one for invoices, one for client management, one for Slack. Claude handles multiple connected servers well; it handles one overloaded server poorly.
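In practice that means several entries in the same Claude Desktop config, one server per domain. The server names and paths below are placeholders - only invoice-manager corresponds to the server built in this guide:

```json
{
  "mcpServers": {
    "invoice-manager": {
      "command": "node",
      "args": ["/absolute/path/to/invoice-mcp-server/server.js"]
    },
    "client-manager": {
      "command": "node",
      "args": ["/absolute/path/to/client-mcp-server/server.js"]
    },
    "slack": {
      "command": "node",
      "args": ["/absolute/path/to/slack-mcp-server/server.js"]
    }
  }
}
```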

4. Skipping input validation

Claude is remarkably accurate at constructing tool arguments, but it can hallucinate values - especially for IDs, dates, and numeric ranges. If you pass those values directly to a database query without validation, you introduce both bugs and potential security issues. Validate every argument before it touches your data layer, as shown in the code above.
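The validation in the invoice server can be factored into small guards that every handler shares - a hypothetical sketch, mirroring the checks already shown:

```javascript
// Rejects anything that isn't a finite, non-negative number
function asNonNegativeNumber(value) {
  const n = Number(value);
  return Number.isFinite(n) && n >= 0 ? n : null;
}

// Rejects missing, empty, or non-string identifiers
function asNonEmptyString(value) {
  return typeof value === "string" && value.trim().length > 0 ? value : null;
}

// Claude-supplied arguments are untrusted input, same as a public API
console.log(asNonNegativeNumber("30"));   // 30
console.log(asNonNegativeNumber("soon")); // null - hallucinated value rejected
console.log(asNonEmptyString("C-88"));    // "C-88"
console.log(asNonEmptyString(""));        // null
```

Combined with parameterised queries (as in the server above), these guards close off both the hallucinated-argument bugs and the injection surface in one place.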

5. Using stdio transport for multi-user production deployments

stdio transport is the right choice for local development and Claude Desktop integrations. The moment more than one person needs to use your MCP server - or you deploy it to a remote host - you need a network transport: Streamable HTTP in the current MCP spec (which replaced the earlier HTTP + SSE transport). This is a transport-level architectural decision, not a refactor you can defer.

Scenario                       Transport
Local dev / personal use       stdio
Claude Desktop integration     stdio
Team access / shared server    Streamable HTTP
Production app (multi-user)    Streamable HTTP
Remote / cloud deployment      Streamable HTTP

MTechZilla in Production: AI-Powered Infrastructure Cost Anomaly Detection

To make this concrete, here's how MTechZilla implemented an MCP-based system for a growth-stage SaaS company running significant infrastructure on AWS.

The situation

The engineering team had no consistent process for catching cost anomalies early. Cloud bills were reviewed at the end of the month - by which point an overlooked misconfiguration or runaway service had already compounded for weeks.

The data was available in real time via AWS Cost Explorer. Nobody was querying it regularly because building a custom dashboard had been on the backlog for six months.

The decision

The CTO considered a scheduled Lambda that would query AWS costs daily and email a summary to the team. It would have worked. But it would have answered only the questions whoever wrote the query thought to ask - and cost anomalies rarely announce themselves in predictable ways.

What the team needed was the ability to ask arbitrary questions about their infrastructure spend on demand.

The MTechZilla recommendation was an MCP server instead. Rather than a static daily email, Claude could answer "Which services have increased in cost by more than 20% compared to last week, and when did the spike start?" at any point - and then cross-reference it with recent deployment history to find the likely cause.

The architecture:

┌──────────────────────────────────────────────────┐
│              Claude Interface                    │
│         (Internal engineering dashboard)         │
├──────────────────────────────────────────────────┤
│              MCP Protocol Layer                  │
├─────────────────────┬────────────────────────────┤
│   Cost MCP Server   │   Deployment MCP Server    │
│  (get_cost_by_      │   (get_recent_deploys,     │
│   service,          │    get_deploy_by_date)     │
│   get_cost_spike,   │                            │
│   get_cost_trend)   │                            │
└──────────┬──────────┴───────────────┬────────────┘
           │                          │
   AWS Cost Explorer API        Internal Deploy DB

Stack: Node.js MCP server, AWS SDK (Cost Explorer), internal PostgreSQL deployments DB, Streamable HTTP transport for shared engineering team access.

Tools built: get_cost_by_service, get_cost_spike, get_cost_trend, get_recent_deploys, get_deploy_by_date

Outcome: The team caught a misconfigured auto-scaling policy within 36 hours of it being introduced - a mistake that in the previous month had gone unnoticed for 19 days before the bill landed. Over the first quarter, the team attributed $14,000 in avoided overspend directly to anomalies caught and resolved within the same week they appeared.

The key engineering decision: get_cost_spike returns only service name, percentage increase, and the date the spike began - nothing else.

Claude can reason across multiple tool calls without the context window filling up with raw cost data. The lean response design was what made multi-step workflows - "find the spike, then find what deployed that day" - reliable in production.

Moving to Production: stdio vs Streamable HTTP

The invoice server built above uses stdio transport - the right choice for getting started and for Claude Desktop integrations. But stdio has a hard constraint: it supports only a single local connection. For any production deployment where your team needs shared access, you need to switch to Streamable HTTP.

┌─────────────────────────────────────────┐
│         Claude / Your App               │
│         (Multiple users)                │
└─────────────────┬───────────────────────┘
                  │ Streamable HTTP
         ┌────────▼────────┐
         │   MCP Server    │  ← Deployed on Railway / Render / AWS
         │   (Node.js)     │
         └────────┬────────┘
                  │
           PostgreSQL DB

Switching transports is a small, contained change: replace StdioServerTransport with the Streamable HTTP transport and wrap it in an HTTP server. Note the 404 branch - without it, any request that isn't a POST to /mcp hangs with no response:

import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import http from "http";

const httpServer = http.createServer(async (req, res) => {
  if (req.method === "POST" && req.url === "/mcp") {
    // Stateless mode: a fresh transport per request, no session tracking
    const transport = new StreamableHTTPServerTransport({ sessionIdGenerator: undefined });
    res.on("close", () => transport.close());
    await server.connect(transport);
    await transport.handleRequest(req, res);
  } else {
    res.writeHead(404).end();
  }
});

httpServer.listen(3000);
console.error("MCP Server running on http://localhost:3000/mcp");

Deploy this to any Node.js hosting provider and your entire team can connect to the same MCP server - turning a personal productivity tool into shared infrastructure.

What to Do This Week

MCP is not a future consideration. The teams adopting it now are building systems that will be significantly ahead of those still copying data between tools manually six months from now.

Three questions worth answering before your next engineering sprint:

What repetitive data workflow costs your team the most time each week? Anything involving pulling data from one system and acting on it elsewhere is a strong MCP candidate.

Does an MCP server already exist for the tools involved? Check the official MCP server registry on GitHub (github.com/modelcontextprotocol/servers) before scoping any build.

Are you at Stage 2 or Stage 3? If your AI can answer questions about your data but can't act on it, MCP is the next architectural move.

The pattern is always the same: identify the workflow, expose the right tools with precise descriptions, validate inputs, and test Claude's tool selection before you ship. Start with one server, one workflow, and four tools or fewer. Expand from there.

MTechZilla helps engineering teams move from AI POC to production MCP integrations. If you're architecting your first MCP deployment or want a review of your current approach, let's talk.