How to Build an MCP Server

Knowing how to build an MCP server is quickly becoming a core skill for teams that want large language models (LLMs) to call internal tools and access data safely. Model Context Protocol (MCP) standardizes how an LLM connects to external capabilities through an MCP server, rather than embedding credentials or direct API access inside prompts or client apps. In practice, MCP servers expose tools (callable functions) and resources (structured data sources) over supported transports like HTTP or stdio, enabling integrations with clients such as Claude Desktop, Cursor IDE, and VS Code.
This tutorial walks through a production-oriented baseline: a TypeScript/Node.js MCP server that provides (1) to-do actions backed by SQLite and (2) a weather lookup tool. You will also learn design patterns for schemas, sessions, and deployment hardening.

What is an MCP Server (and Why It Matters)?
An MCP server acts as an intermediary between an LLM client and your external systems. Instead of letting the model directly call APIs, access databases, or read files, you provide a controlled interface:
Tools: Executable functions the LLM can invoke. Inputs are validated using schemas, commonly Zod in TypeScript.
Resources: Data sources exposed via URIs, such as files, database query results, or generated JSON views.
Transports: How clients connect, such as HTTP for production services or stdio for local development and IDE workflows.
The official @modelcontextprotocol/sdk provides core primitives like McpServer and transport implementations such as StreamableHTTPServerTransport and StdioServerTransport. As of early 2026, MCP is widely used for production-grade integrations, with templates supporting authentication (for example, GitHub OAuth), monitoring (for example, Sentry), and edge deployments (for example, Cloudflare).
Architecture Overview: Tools, Resources, and Transports
Tools: LLM-callable functions with Zod schemas
Tools are the most common integration pattern. You define a name, description, and a schema for inputs. Using Zod rejects malformed inputs before your handler runs, which reduces runtime errors and improves safety when tools are called repeatedly by different clients.
Resources: Data exposed via URIs
Resources are useful when you want the client to browse or fetch structured data (for example, "show me all to-dos") without turning everything into a tool call. This also simplifies permissioning, because resources can be scoped by URI patterns.
Transports: stdio for local, HTTP for production
Stdio: Ideal for local testing, CLI-driven workflows, and some IDE integrations.
HTTP: Preferred for production. HTTP transport typically includes session management so multiple clients can connect concurrently.
Step-by-Step: How to Build an MCP Server in TypeScript
This section implements a minimal MCP server with two tools and one resource. The code samples focus on MCP concepts; you can swap in your preferred database layer, secret manager, and API clients later.
Step 1: Project setup
Create a Node.js project with TypeScript, the MCP SDK, Zod, and Express for HTTP transport.
Commands:
npm init -y
npm install @modelcontextprotocol/sdk zod express
npm install -D typescript @types/node @types/express
Suggested structure:
src/index.ts (server bootstrap and transport)
src/tools.ts (tool registration)
src/resources.ts (resource registration)
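A minimal tsconfig.json that fits this layout might look like the following. This is a sketch assuming Node 18+ and ES modules; adjust the target and module settings to your environment.

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "dist",
    "rootDir": "src",
    "strict": true,
    "esModuleInterop": true
  },
  "include": ["src"]
}
```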
Step 2: Create the MCP server skeleton
Instantiate McpServer with a name, version, and declared capabilities. Capabilities tell clients what your server supports.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
const server = new McpServer(
{ name: "todo-weather-server", version: "1.0.0" },
{ capabilities: { tools: {}, resources: {} } }
);
Even if you start with only tools, declaring resources early lets you expand without changing the overall handshake expectations.
Step 3: Define and register tools (to-dos and weather)
Tools follow a consistent pattern: a clear name, an LLM-readable description, a Zod schema, and an async handler that returns content.
// src/tools.ts
import { z } from "zod";
import type { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
export function registerTools(server: McpServer) {
server.tool(
"add_todo",
"Add a new to-do item",
{ task: z.string().describe("Task description") },
async ({ task }) => {
// TODO: Insert into SQLite
return { content: [{ type: "text", text: `Added: ${task}` }] };
}
);
server.tool(
"get_weather",
"Get weather for a city",
{ city: z.string().describe("City name") },
async ({ city }) => {
// TODO: Fetch from a weather API
return { content: [{ type: "text", text: `Weather in ${city}: sunny` }] };
}
);
}
Tool design principles that improve accuracy and safety:
Use precise schemas: avoid open-ended string inputs when you can constrain with enums, regex, or min/max length.
Write descriptions for the model: state what the tool does, what inputs it requires, and what it returns.
Keep handlers deterministic: return predictable output formats to reduce hallucinated follow-up calls.
Step 4: Expose a resource (optional but recommended)
A simple resource can expose the current to-do list as JSON. The LLM client can fetch it like a document, which is more efficient than invoking a tool for read-only data.
// src/resources.ts
import type { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
export function registerResources(server: McpServer) {
server.resource(
"todo-list",
"todo://list",
async (uri) => {
async (uri) => {
// TODO: Query SQLite and return JSON
const todos = [{ id: 1, task: "Example", done: false }];
return {
contents: [
{
uri: uri.href,
mimeType: "application/json",
text: JSON.stringify(todos, null, 2)
}
]
};
}
);
}
In production, resources are a good place to enforce row-level filters, tenancy boundaries, and redaction rules for sensitive fields.
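The SQLite calls above are left as TODOs. As a stand-in, here is a minimal in-memory store with the same shape; the addTodo/listTodos interface is a hypothetical sketch that a real database layer would replace.

```typescript
// Minimal in-memory stand-in for the SQLite layer left as a TODO.
// A production server would swap this for a real database client.
interface Todo {
  id: number;
  task: string;
  done: boolean;
}

const todos = new Map<number, Todo>();
let nextId = 1;

export function addTodo(task: string): Todo {
  const todo: Todo = { id: nextId++, task, done: false };
  todos.set(todo.id, todo);
  return todo;
}

export function listTodos(): Todo[] {
  return [...todos.values()];
}

// Inside the resource handler, the TODO would become:
// text: JSON.stringify(listTodos(), null, 2)
```

Keeping the storage layer behind two small functions like this makes the later swap to SQLite a local change rather than a rewrite of every handler.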
Step 5: Add transport and start the server (HTTP and stdio)
For production usage, HTTP transport is typically the default. Many MCP deployments maintain stateful sessions and use a UUID session identifier in headers so multiple clients can connect reliably.
// src/index.ts
import express from "express";
import { randomUUID } from "node:crypto";
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StreamableHTTPServerTransport } from "@modelcontextprotocol/sdk/server/streamableHttp.js";
import { registerTools } from "./tools.js";
import { registerResources } from "./resources.js";
const server = new McpServer(
{ name: "todo-weather-server", version: "1.0.0" },
{ capabilities: { tools: {}, resources: {} } }
);
registerTools(server);
registerResources(server);
const app = express();
app.use(express.json());
const transports = new Map<string, StreamableHTTPServerTransport>();
app.all("/mcp", async (req, res) => {
const sessionId = req.headers["mcp-session-id"] as string | undefined;
if (sessionId && transports.has(sessionId)) {
await transports.get(sessionId)!.handleRequest(req, res, req.body);
return;
}
// New session: the transport generates the session ID and returns it
// to the client in the mcp-session-id response header.
const transport = new StreamableHTTPServerTransport({
sessionIdGenerator: () => randomUUID(),
onsessioninitialized: (id) => {
transports.set(id, transport);
}
});
await server.connect(transport);
await transport.handleRequest(req, res, req.body);
});
app.listen(3100, () => {
console.log("MCP server on http://localhost:3100/mcp");
});
For local testing, you can use StdioServerTransport instead. This is useful when running the server as a subprocess for development tools that prefer stdio connections.
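A stdio entry point is a sketch along these lines, reusing the same registration modules (the src/stdio.ts filename is an assumption):

```typescript
// src/stdio.ts - local development entry point over stdio.
// The client spawns this process and speaks JSON-RPC over stdin/stdout.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { registerTools } from "./tools.js";
import { registerResources } from "./resources.js";

const server = new McpServer(
  { name: "todo-weather-server", version: "1.0.0" },
  { capabilities: { tools: {}, resources: {} } }
);
registerTools(server);
registerResources(server);

const transport = new StdioServerTransport();
await server.connect(transport);
```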
Step 6: Test and integrate with clients
Run locally using your preferred TypeScript runner or compiled output:
npx tsx src/index.ts
or compile and run:
node dist/index.js
Client integration options commonly used in real workflows include Claude Desktop, Cursor IDE, and VS Code. Many developers also use the MCP Inspector for quick verification of tool schemas, resource visibility, and request/response handling.
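For Claude Desktop, a stdio server is typically registered in claude_desktop_config.json, roughly as follows. The server name and path are placeholders; point args at your compiled entry point.

```json
{
  "mcpServers": {
    "todo-weather": {
      "command": "node",
      "args": ["/absolute/path/to/your-server.js"]
    }
  }
}
```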
Security and Production Hardening Checklist
MCP is often adopted because it improves security posture compared to ad hoc agent tooling. For production deployments, prioritize the following:
Authentication: Add OAuth or service-to-service auth. Production templates often use GitHub OAuth as a starting point.
Authorization: Enforce per-tool and per-resource permissions tied to the session identity.
Input validation: Zod schemas are not optional in practice. Strong schemas block malformed inputs early and reduce tool-call errors substantially.
Observability: Add structured logs, tracing, and error monitoring (for example, Sentry) so you can debug tool failures under load.
Session management: Plan for concurrent clients. HTTP session IDs and transport maps are common building blocks, but implement timeouts and cleanup logic as well.
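The session-timeout point above can be sketched as a small TTL-backed registry, independent of the MCP SDK. The class and its 30-minute default are illustrative assumptions; in the HTTP server this would wrap the transports map.

```typescript
// Minimal session registry with idle-timeout eviction. A sketch of the
// "timeouts and cleanup logic" item; not part of the MCP SDK.
interface Session<T> {
  value: T;
  lastSeen: number;
}

export class SessionStore<T> {
  private sessions = new Map<string, Session<T>>();

  // ttlMs: how long a session may sit idle before eviction.
  constructor(private ttlMs: number = 30 * 60 * 1000) {}

  set(id: string, value: T): void {
    this.sessions.set(id, { value, lastSeen: Date.now() });
  }

  // Returns the session and refreshes its idle timer,
  // or undefined if the session is missing or expired.
  get(id: string): T | undefined {
    const s = this.sessions.get(id);
    if (!s) return undefined;
    if (Date.now() - s.lastSeen > this.ttlMs) {
      this.sessions.delete(id);
      return undefined;
    }
    s.lastSeen = Date.now();
    return s.value;
  }

  // Call periodically (e.g. via setInterval) to evict idle sessions
  // and close their transports.
  sweep(): void {
    const now = Date.now();
    for (const [id, s] of this.sessions) {
      if (now - s.lastSeen > this.ttlMs) this.sessions.delete(id);
    }
  }
}
```

On eviction you would also call the transport's close method so sockets and server-side state are released, not just the map entry.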
Real-World Use Cases for MCP Servers
Once you understand how to build an MCP server, the same pattern applies to higher-value workflows:
Internal data access: Query company databases or knowledge stores without exposing credentials to the model or end-user clients.
Developer productivity: Connect IDEs to internal tooling for ticket creation, build triggers, or repository insights.
Operational dashboards: Serve authenticated metrics and incident context alongside monitoring hooks.
API orchestration: Wrap external APIs as tools with strict schemas and rate limits.
Conclusion
Building an MCP server comes down to a repeatable blueprint: define tools with strong Zod schemas, expose resources where browsing is more appropriate than tool calls, and choose the right transport for your environment. With the official MCP SDK in TypeScript/Node.js, you can move from a skeleton server to working integrations with desktop and IDE clients quickly, then harden the deployment with authentication, monitoring, and robust session handling.
As MCP adoption grows through 2026, teams that standardize their LLM-to-tool connectivity with MCP will be better positioned to scale securely, integrate faster, and reduce the operational risk that comes with custom agent wrappers.