New Mac beta (2025.2.4) — MCP Server edition — out now!

Hey folks :waving_hand:

This is a quick one — but we have a new beta out today that adds an MCP server to Sketch.

If you already know what an MCP server is — great! Here are the docs, get yourself set up, go crazy. Share your feedback with us here in the forum, and show us what you’ve made with it, if you can.

If you’re wondering what an MCP server is — read on…

What is an MCP server?

Many AI apps today aren’t just answer-generating machines; they can actually do things: read files from your computer, search the internet, run CLI tools, and so on. This type of AI is dubbed “agentic AI” because, well, it has some level of agency to it.

Now, reading files and running standard CLI commands is straightforward – but what if you want your AI “agent” to, say, access your Sketch designs? Maybe you want it to export all of your icons for developer handoff. Maybe you want it to find issues and inconsistencies in your design system. Maybe you want it to take a template and generate new screens based on that template using existing components.

The problem is, your AI agent doesn’t know anything about Sketch out of the box, so we have to teach it somehow. That’s where Model Context Protocol (MCP) comes into play. Here’s how this protocol works:

  1. In the latest beta, there’s a built-in MCP server. It’s a very basic local-only web server that exposes a couple of MCP commands (aka tools) that an AI client, like Claude or ChatGPT, may call to access Sketch data or even make changes there.

  2. Once you connect your AI client to the Sketch MCP server, it will learn the available Sketch tools and their usage suggestions. These suggestions are supplied by the MCP server and are automatically injected into the LLM’s context.

  3. From then on, whenever you prompt the AI to, say, gather some information from a Sketch document, or make a change there – it will choose a tool that it finds suitable for the task, and call it.

  4. This MCP call is handled by the Sketch MCP server, i.e. by Sketch itself. The results are then sent back to the AI client for further processing.
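Under the hood, MCP clients and servers exchange JSON-RPC 2.0 messages. A tool call sent by the client looks roughly like this — `tools/call` is the standard MCP method, but the tool name and arguments below are made up for illustration, not actual Sketch tool names:

```javascript
// A JSON-RPC 2.0 request an MCP client might send to call a server tool.
// "tools/call" is the standard MCP method name; the tool name and its
// arguments here are hypothetical, just to show the shape of the message.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "get_selection",            // hypothetical Sketch tool name
    arguments: { includeSymbols: true }
  }
};

console.log(JSON.stringify(request, null, 2));
```

The server replies with a matching JSON-RPC response carrying the tool’s result, which the client then feeds back into the LLM’s context.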

What can I do with it?

Once you’re all set up, here are a few ideas for prompts you can give your AI client:

  • Explain the visual layout of the selected Sketch frame
  • Are there any symbol masters in my active Sketch document that don’t have any instances?
  • List all design tokens used in my Sketch selection
  • Create a vertical stack of 4 rectangles in Sketch, and add a unique gradient pattern to each rectangle
  • Fix grammar and spelling mistakes on the selected Sketch frame
  • Export all Sketch symbols prefixed with “icon/” from the current page as svgs to my Desktop
  • Show the full component hierarchy of my Sketch selection as a tree
  • Generate my current Sketch selection in React
  • Replace every text on the selected Sketch frame with a random Apple product name (including text in symbols)

Of course, good prompts tend to be more detailed than that, but these hopefully give you an idea of where to start!

Is it secure?

The MCP server is local-only and cannot be accessed remotely. Each call is handled locally by the MCP server — that is, by Sketch itself — and the results are sent back to the AI client for further processing. The server is off by default, and you have full control over when and how external AI clients connect to it.

In other words, this is entirely opt-in. You choose whether to use it, and which AI client you use it with.


This is a really interesting idea and a great opportunity to explore new workflows. Looking forward to seeing how the community puts this to use!


Nice! I just tested it, and this opens new opportunities to design in Sketch!


Does it support Stacks? I tried this on a bunch of stacks with symbols and color variables but got this message:

No design tokens (such as color variables, shared layer styles, or text styles) were found in your current Sketch selection. If you believe your selection should contain tokens, please ensure the correct layers are selected and try again.

Hey there,

Stacks shouldn’t be much of a problem to navigate when collecting design tokens like color variables and such, but let’s troubleshoot – can you share the code your AI tool has generated in order to collect color variables?

I found that the AI detected the wrong layer in the Sketch file. I mentioned the exact stack name in the prompt, and it worked well. Apparently the AI can’t detect the “Sketch selection” correctly.

Hmm, detecting the currently selected Sketch layers also shouldn’t be an issue for AI agents. I’ll be happy to review the code your AI has generated and tune our internal MCP prompt accordingly!


Its input and output are empty:

```javascript
const sketch = require('sketch');
const selectedLayers = sketch.getSelectedDocument().selectedLayers.layers;
let tokens = [];
selectedLayers.forEach(layer => {
  if (layer.style) {
    // Color variables
    layer.style.fills?.forEach(fill => {
      if (fill.swatch) {
        tokens.push({type: 'color', name: fill.swatch.name, library: fill.swatch.getLibrary()?.name});
      }
      if (fill.gradient) {
        fill.gradient.stops.forEach(stop => {
          if (stop.swatch) {
            tokens.push({type: 'color', name: stop.swatch.name, library: stop.swatch.getLibrary()?.name});
          }
        });
      }
    });
    // Borders
    layer.style.borders?.forEach(border => {
      if (border.swatch) {
        tokens.push({type: 'color', name: border.swatch.name, library: border.swatch.getLibrary()?.name});
      }
    });
    // Shadows
    layer.style.shadows?.forEach(shadow => {
      if (shadow.swatch) {
        tokens.push({type: 'color', name: shadow.swatch.name, library: shadow.swatch.getLibrary()?.name});
      }
    });
    // Layer style
    if (layer.sharedStyle) {
      tokens.push({type: 'layerStyle', name: layer.sharedStyle.name, library: layer.sharedStyle.getLibrary()?.name});
    }
  }
  // Text style
  if (layer.type === sketch.Types.Text && layer.sharedStyle) {
    tokens.push({type: 'textStyle', name: layer.sharedStyle.name, library: layer.sharedStyle.getLibrary()?.name});
  }
});
// Remove duplicates
const uniqueTokens = Array.from(new Set(tokens.map(t => JSON.stringify(t)))).map(s => JSON.parse(s));
console.log(uniqueTokens);
```


Oh, I see: it didn’t actually descend into child layers at all! This code only grabs design tokens from the directly selected styled layers, not from any of their descendants.

One possible workaround is to explicitly tell the LLM to generate code that iterates over the selected layers recursively.
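For example, the generated code could be restructured around a recursive walker. This is only a sketch of the traversal shape: it runs on plain mock objects that mimic the Sketch API (`layer.style.fills[].swatch`, `layer.layers`) rather than on real Sketch documents, and the layer and swatch names are invented:

```javascript
// Recursively collect swatch-based color tokens from a layer and all of
// its descendants. The objects below are plain mocks shaped like the
// Sketch API; in real generated code the same walk would run over the
// actual layer objects.
function collectTokens(layer, tokens = []) {
  layer.style?.fills?.forEach(fill => {
    if (fill.swatch) tokens.push({ type: 'color', name: fill.swatch.name });
  });
  // Descend into child layers — the step the generated code was missing.
  (layer.layers || []).forEach(child => collectTokens(child, tokens));
  return tokens;
}

// Mock selection: a group whose children (at different depths) carry the
// color variables.
const group = {
  name: 'Card',
  layers: [
    { name: 'Background', style: { fills: [{ swatch: { name: 'surface/primary' } }] } },
    {
      name: 'Inner group',
      layers: [
        { name: 'Accent', style: { fills: [{ swatch: { name: 'accent/blue' } }] } },
      ],
    },
  ],
};

console.log(collectTokens(group)); // tokens from every depth, not just the top layer
```

A flat `selectedLayers.forEach(...)` over the same selection would return nothing here, since the group itself has no fills — which matches the empty result reported above.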
