A Guide to Nico's MCP Servers

how I give Claude persistent memory of my brand, design system, and teaching work

Updated 7 Apr 2025
  • renamed servers to match actual deployments: writer-for-human, designer-for-human, writer-for-designer
  • builder-of-decks now targets Slidev (was Reveal.js) with a custom Apple HIG theme
  • updated tool names and descriptions throughout to match current specs

What Are MCPs?

Model Context Protocol — an open standard from Anthropic. It lets Claude call tools on external servers you control and pull back structured data.

Why build custom servers?


Figma Console MCP

Before getting into my custom servers — one off-the-shelf MCP worth knowing about. Figma Console MCP connects Claude to Figma so it can read your design files, inspect components, and pull tokens directly.

Quick start
See Figma Console MCP → NPX Setup for install instructions. Takes about two minutes.

My Custom Servers

Six servers. Each owns one domain of knowledge, nothing more. I built them for my day-to-day as a bilingual educator and freelance UI/UX designer.

Server                 Role
builder-of-mcps        Scaffolds new MCP servers (the “meta” server)
builder-of-decks       Slidev presentation toolkit with custom Apple HIG theme
writer-for-human       Linguistic style — tone, word choice, bilingual glossary
designer-for-human     Visual design system — colours, type, spacing, components
humanizer              Psycholinguistic rules for human-sounding writing
writer-for-designer    Client communication guide for freelance design

What Each Does


Architecture

Stack

Same pattern for every server:

  • Python with FastMCP as the server framework
  • JSON files in /data as the single source of truth
  • Git auto-commits on every write, pushed to GitHub
  • Docker on a NAS, exposed through a Cloudflare Tunnel
  • Cloudflare Access in front for authentication

Project Structure

plaintext
mcp-[name]/
├── CLAUDE.md              # Context file for Claude Code
├── docs/
│   ├── PRD.md             # Product requirements
│   ├── DATA_SCHEMA.md     # JSON data file schemas
│   ├── API_SPEC.md        # Tool specs
│   └── DEPLOYMENT.md      # Deployment runbook
├── src/
│   ├── __init__.py
│   ├── server.py          # FastMCP entry point
│   ├── tools/
│   │   ├── __init__.py
│   │   ├── read.py        # Read-only tools
│   │   └── updates.py     # Write tools (auto-commits to Git)
│   └── storage/
│       ├── __init__.py
│       └── json_store.py  # JSON I/O + Git commit helper
├── data/
│   └── *.json             # Domain data
├── Dockerfile
├── docker-compose.yml
├── entrypoint.sh          # Startup script — seeds data, initialises Git
├── requirements.txt
└── .gitignore

Data Flow

  1. Claude fires an MCP tool call over HTTPS
  2. Cloudflare Tunnel routes it to the NAS
  3. Cloudflare Access checks authorisation
  4. The server reads its JSON data and returns a response
  5. Write operations save to JSON, commit to Git, and push to GitHub automatically
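
Step 5 in miniature. A sketch of roughly what the write path inside json_store.py could do; the function name and the push behaviour here are my assumptions, not the actual implementation:

```python
import json
import subprocess
from pathlib import Path

def save_and_commit(path: Path, data: dict, message: str) -> None:
    """Write a JSON data file, then auto-commit it if the data
    directory is a Git repo. Sketch only; the real helper may differ."""
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(data, indent=2, ensure_ascii=False))
    repo = path.parent
    if (repo / ".git").exists():
        subprocess.run(["git", "-C", str(repo), "add", path.name], check=True)
        subprocess.run(["git", "-C", str(repo), "commit", "-m", message], check=True)
        # Push is best-effort: a missing remote shouldn't fail the tool call
        subprocess.run(["git", "-C", str(repo), "push"], check=False)

save_and_commit(Path("/tmp/mcp-demo/colours.json"),
                {"primary": "#2D5A3D"}, "update primary colour")
```

Because the write and the commit happen in one helper, every tool that mutates data gets version history for free.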

Data Sync

Each container runs an entrypoint.sh script on startup:

  1. seeds the /data directory from bundled defaults if files are missing
  2. initialises a local Git repo so write operations can auto-commit
  3. starts the server

For remote sync, add a git pull step to entrypoint.sh that fetches from your GitHub repo before starting the server. A daily scheduled container restart keeps data fresh without manual intervention.
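
The three startup steps could look something like this in entrypoint.sh. This is a sketch: it assumes the server starts with `python -m src.server` and that defaults ship in /app/data-defaults, matching the Dockerfile in the build steps below; adapt paths to your layout.

```sh
#!/bin/sh
set -e

DATA_DIR=/data
DEFAULTS=/app/data-defaults

# 1. Seed /data from bundled defaults if files are missing
mkdir -p "$DATA_DIR"
for f in "$DEFAULTS"/*.json; do
  dest="$DATA_DIR/$(basename "$f")"
  [ -f "$dest" ] || cp "$f" "$dest"
done

# 2. Initialise a local Git repo so write tools can auto-commit
if [ ! -d "$DATA_DIR/.git" ]; then
  git -C "$DATA_DIR" init
  git -C "$DATA_DIR" add .
  git -C "$DATA_DIR" -c user.name=mcp -c user.email=mcp@local \
    commit -m "seed data" || true
fi

# Optional remote sync: pull latest before starting
# git -C "$DATA_DIR" pull --ff-only || true

# 3. Start the server
exec python -m src.server
```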


In Practice

Teaching

“Create a 12-slide deck on narrative structure for Grade 8.”
  1. builder-of-decks → get_scaffold — a working Slidev deck with the brand theme
  2. builder-of-decks → get_compositions — picks a composition pattern for each slide
  3. designer-for-human → get_visual_for_task — colours, fonts, spacing
  4. writer-for-human → get_style_for_task — tone and language rules
  5. humanizer — keeps the speaker notes from sounding robotic

What comes back is a deck that actually looks and reads like the brand. Not a generic template with my logo slapped on top.

Design

“Draft a project proposal email about the website redesign.”
  1. writer-for-designer → get_communication_guide — proposal rules matched to client literacy
  2. humanizer — psycholinguistic rules so it reads like a person wrote it
  3. designer-for-human → get_brand_foundation — if the proposal references the practice itself

The output adjusts automatically. Non-designer client? No jargon. Design-literate? Precise terminology. Same principles either way.


Build Your Own

Prerequisites

Steps

1. Start with one server.

Pick the single domain that eats the most time. Brand guidelines, writing style, component library — whichever hurts most to re-explain. Don’t try to build six at once.

2. Pick a framework.

3. Put your knowledge in JSON.

One /data directory. One file per concern. This is your source of truth.

json
{
  "brand_colours": {
    "primary": "#2D5A3D",
    "secondary": "#8B6F47"
  },
  "tone": {
    "default": "warm, direct, encouraging",
    "formal": "respectful, precise, understated"
  }
}

4. Serve slices, not the whole file.

This is the design choice that matters most. Tools should return the minimum data Claude actually needs. Add filters so it can ask for just the piece it wants:

python
# Assumes a FastMCP server object and the load_json helper
# from src/storage/json_store.py shown in the project tree.
from fastmcp import FastMCP
from src.storage.json_store import load_json

mcp = FastMCP("designer-for-human")

@mcp.tool()
async def get_colours(role: str | None = None) -> dict:
    """Get brand colours, optionally filtered by role."""
    colours = load_json("colours.json")
    if role:
        return {k: v for k, v in colours.items()
                if v.get("role") == role}
    return colours

Keeps token usage low. Also makes responses noticeably faster.

5. Containerise and deploy.

dockerfile
FROM python:3.12-slim

RUN apt-get update && apt-get install -y git \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY src/ ./src/
COPY data/ ./data-defaults/
COPY entrypoint.sh .
RUN chmod +x entrypoint.sh

ENV PYTHONPATH=/app
ENTRYPOINT ["./entrypoint.sh"]
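
The docker-compose.yml from the project tree could pair with that Dockerfile along these lines; the service name, host port, and volume path are placeholders for your own setup:

```yaml
services:
  designer-for-human:
    build: .
    restart: unless-stopped
    ports:
      - "8301:8000"        # host:container; pick any free host port
    volumes:
      - ./data:/data       # persist JSON + Git history across restarts
```

Mounting /data as a volume matters: it is what the entrypoint seeds and what the write tools commit to, so it has to survive container rebuilds.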

6. Connect to Claude.
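
For Claude Desktop, one common approach is pointing a stdio proxy such as mcp-remote at the tunnel URL in claude_desktop_config.json. The server name and URL below are placeholders, and Cloudflare Access credentials are passed however your setup requires:

```json
{
  "mcpServers": {
    "designer-for-human": {
      "command": "npx",
      "args": ["mcp-remote", "https://designer.example.com/mcp"]
    }
  }
}
```

Claude Code users can register the same server from the terminal with `claude mcp add` instead.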

Lessons Learned

Things I figured out the hard way:


Further Reading