
Cursor Talk to Figma MCP

by paragdesai1

scan_text_nodes

Extract all text content from a selected Figma design element for analysis or processing.

Instructions

Scan all text nodes in the selected Figma node

Input Schema

| Name   | Required | Description            | Default |
|--------|----------|------------------------|---------|
| nodeId | Yes      | ID of the node to scan | (none)  |
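As a usage sketch, the tool takes a single argument object; the node ID below is a placeholder, not an ID from any real document:

```javascript
// Hypothetical arguments for scan_text_nodes. Figma node IDs take the
// "<number>:<number>" form; "123:456" here is just a placeholder.
const args = { nodeId: "123:456" };

// The server-side schema (z.string()) only requires a string:
console.log(typeof args.nodeId); // "string"
```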

Implementation Reference

  • Core handler that implements scan_text_nodes by recursively traversing the Figma node tree, collecting every TEXT node with its properties (id, name, characters, fonts, bounds, hierarchy path). Supports chunked processing for large designs, with progress reported via sendProgressUpdate.
    async function scanTextNodes(params) {
      console.log(`Starting to scan text nodes from node ID: ${params.nodeId}`);
      const {
        nodeId,
        useChunking = true,
        chunkSize = 10,
        commandId = generateCommandId(),
      } = params || {};
    
      const node = await figma.getNodeByIdAsync(nodeId);
    
      if (!node) {
        console.error(`Node with ID ${nodeId} not found`);
        // Send error progress update
        sendProgressUpdate(
          commandId,
          "scan_text_nodes",
          "error",
          0,
          0,
          0,
          `Node with ID ${nodeId} not found`,
          { error: `Node not found: ${nodeId}` }
        );
        throw new Error(`Node with ID ${nodeId} not found`);
      }
    
      // If chunking is not enabled, use the original implementation
      if (!useChunking) {
        const textNodes = [];
        try {
          // Send started progress update
          sendProgressUpdate(
            commandId,
            "scan_text_nodes",
            "started",
            0,
            1, // Not known yet how many nodes there are
            0,
            `Starting scan of node "${node.name || nodeId}" without chunking`,
            null
          );
    
          await findTextNodes(node, [], 0, textNodes);
    
          // Send completed progress update
          sendProgressUpdate(
            commandId,
            "scan_text_nodes",
            "completed",
            100,
            textNodes.length,
            textNodes.length,
            `Scan complete. Found ${textNodes.length} text nodes.`,
            { textNodes }
          );
    
          return {
            success: true,
            message: `Scanned ${textNodes.length} text nodes.`,
            count: textNodes.length,
            textNodes: textNodes,
            commandId,
          };
        } catch (error) {
          console.error("Error scanning text nodes:", error);
    
          // Send error progress update
          sendProgressUpdate(
            commandId,
            "scan_text_nodes",
            "error",
            0,
            0,
            0,
            `Error scanning text nodes: ${error.message}`,
            { error: error.message }
          );
    
          throw new Error(`Error scanning text nodes: ${error.message}`);
        }
      }
    
      // Chunked implementation
      console.log(`Using chunked scanning with chunk size: ${chunkSize}`);
    
      // First, collect all nodes to process (without processing them yet)
      const nodesToProcess = [];
    
      // Send started progress update
      sendProgressUpdate(
        commandId,
        "scan_text_nodes",
        "started",
        0,
        0, // Not known yet how many nodes there are
        0,
        `Starting chunked scan of node "${node.name || nodeId}"`,
        { chunkSize }
      );
    
      await collectNodesToProcess(node, [], 0, nodesToProcess);
    
      const totalNodes = nodesToProcess.length;
      console.log(`Found ${totalNodes} total nodes to process`);
    
      // Calculate number of chunks needed
      const totalChunks = Math.ceil(totalNodes / chunkSize);
      console.log(`Will process in ${totalChunks} chunks`);
    
      // Send update after node collection
      sendProgressUpdate(
        commandId,
        "scan_text_nodes",
        "in_progress",
        5, // 5% progress for collection phase
        totalNodes,
        0,
        `Found ${totalNodes} nodes to scan. Will process in ${totalChunks} chunks.`,
        {
          totalNodes,
          totalChunks,
          chunkSize,
        }
      );
    
      // Process nodes in chunks
      const allTextNodes = [];
      let processedNodes = 0;
      let chunksProcessed = 0;
    
      for (let i = 0; i < totalNodes; i += chunkSize) {
        const chunkEnd = Math.min(i + chunkSize, totalNodes);
        console.log(
          `Processing chunk ${chunksProcessed + 1}/${totalChunks} (nodes ${i} to ${
            chunkEnd - 1
          })`
        );
    
        // Send update before processing chunk
        sendProgressUpdate(
          commandId,
          "scan_text_nodes",
          "in_progress",
          Math.round(5 + (chunksProcessed / totalChunks) * 90), // 5-95% for processing
          totalNodes,
          processedNodes,
          `Processing chunk ${chunksProcessed + 1}/${totalChunks}`,
          {
            currentChunk: chunksProcessed + 1,
            totalChunks,
            textNodesFound: allTextNodes.length,
          }
        );
    
        const chunkNodes = nodesToProcess.slice(i, chunkEnd);
        const chunkTextNodes = [];
    
        // Process each node in this chunk
        for (const nodeInfo of chunkNodes) {
          if (nodeInfo.node.type === "TEXT") {
            try {
              const textNodeInfo = await processTextNode(
                nodeInfo.node,
                nodeInfo.parentPath,
                nodeInfo.depth
              );
              if (textNodeInfo) {
                chunkTextNodes.push(textNodeInfo);
              }
            } catch (error) {
              console.error(`Error processing text node: ${error.message}`);
              // Continue with other nodes
            }
          }
    
          // Brief delay to allow UI updates and prevent freezing
          await delay(5);
        }
    
        // Add results from this chunk
        allTextNodes.push(...chunkTextNodes);
        processedNodes += chunkNodes.length;
        chunksProcessed++;
    
        // Send update after processing chunk
        sendProgressUpdate(
          commandId,
          "scan_text_nodes",
          "in_progress",
          Math.round(5 + (chunksProcessed / totalChunks) * 90), // 5-95% for processing
          totalNodes,
          processedNodes,
          `Processed chunk ${chunksProcessed}/${totalChunks}. Found ${allTextNodes.length} text nodes so far.`,
          {
            currentChunk: chunksProcessed,
            totalChunks,
            processedNodes,
            textNodesFound: allTextNodes.length,
            chunkResult: chunkTextNodes,
          }
        );
    
        // Small delay between chunks to prevent UI freezing
        if (i + chunkSize < totalNodes) {
          await delay(50);
        }
      }
    
      // Send completed progress update
      sendProgressUpdate(
        commandId,
        "scan_text_nodes",
        "completed",
        100,
        totalNodes,
        processedNodes,
        `Scan complete. Found ${allTextNodes.length} text nodes.`,
        {
          textNodes: allTextNodes,
          processedNodes,
          chunks: chunksProcessed,
        }
      );
    
      return {
        success: true,
        message: `Chunked scan complete. Found ${allTextNodes.length} text nodes.`,
        totalNodes: allTextNodes.length,
        processedNodes: processedNodes,
        chunks: chunksProcessed,
        textNodes: allTextNodes,
        commandId,
      };
    }
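The chunk count and the 5–95% progress window in the handler above come from simple arithmetic over chunkSize; restated as a standalone sketch:

```javascript
// Sketch of the chunking math used by scanTextNodes above.
// 5% is reserved for the collection phase, 90% for chunk processing.
function chunkPlan(totalNodes, chunkSize = 10) {
  const totalChunks = Math.ceil(totalNodes / chunkSize);
  const progressAfterChunk = (chunksProcessed) =>
    Math.round(5 + (chunksProcessed / totalChunks) * 90);
  return { totalChunks, progressAfterChunk };
}

const plan = chunkPlan(25); // 25 collected nodes, chunks of up to 10
console.log(plan.totalChunks);           // 3
console.log(plan.progressAfterChunk(1)); // 35
console.log(plan.progressAfterChunk(3)); // 95 (the completion update then reports 100)
```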
  • MCP tool registration and thin handler that forwards the call to the Figma plugin over WebSocket via sendCommandToFigma, where the actual implementation runs. Input validation is handled by the Zod parameter schema.
    // Registration call (the `server` identifier is assumed from the MCP TypeScript SDK):
    server.tool(
      "scan_text_nodes",
      "Scan all text nodes in the selected Figma node",
      {
        nodeId: z.string().describe("ID of the node to scan"),
      },
      async ({ nodeId }) => {
        try {
          // Initial response to indicate we're starting the process
          const initialStatus = {
            type: "text" as const,
            text: "Starting text node scanning. This may take a moment for large designs...",
          };
    
          // Use the plugin's scan_text_nodes function with chunking flag
          const result = await sendCommandToFigma("scan_text_nodes", {
            nodeId,
            useChunking: true,  // Enable chunking on the plugin side
            chunkSize: 10       // Process 10 nodes at a time
          });
    
          // If the result indicates chunking was used, format the response accordingly
          if (result && typeof result === 'object' && 'chunks' in result) {
            const typedResult = result as {
              success: boolean,
              totalNodes: number,
              processedNodes: number,
              chunks: number,
              textNodes: Array<any>
            };
    
            const summaryText = `
            Scan completed:
            - Found ${typedResult.totalNodes} text nodes
            - Processed in ${typedResult.chunks} chunks
            `;
    
            return {
              content: [
                initialStatus,
                {
                  type: "text" as const,
                  text: summaryText
                },
                {
                  type: "text" as const,
                  text: JSON.stringify(typedResult.textNodes, null, 2)
                }
              ],
            };
          }
    
          // If chunking wasn't used or wasn't reported in the result format, return the result as is
          return {
            content: [
              initialStatus,
              {
                type: "text",
                text: JSON.stringify(result, null, 2),
              },
            ],
          };
        } catch (error) {
          return {
            content: [
              {
                type: "text",
                text: `Error scanning text nodes: ${error instanceof Error ? error.message : String(error)}`,
              },
            ],
          };
        }
      }
    );
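The 'chunks' in result branch above is plain duck typing; the same check restated in isolation, using made-up result objects shaped like the plugin's two return forms:

```javascript
// Duck-typing check mirroring the handler's chunked-result detection.
function isChunkedResult(result) {
  return result !== null && typeof result === "object" && "chunks" in result;
}

// Made-up example payloads for illustration only:
const chunked = { success: true, totalNodes: 42, processedNodes: 120, chunks: 12, textNodes: [] };
const unchunked = { success: true, count: 42, textNodes: [] };

console.log(isChunkedResult(chunked));   // true
console.log(isChunkedResult(unchunked)); // false
```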
  • Helper that scans text nodes without chunking; used when useChunking is false.
    async function findTextNodes(node, parentPath = [], depth = 0, textNodes = []) {
      // Skip invisible nodes
      if (node.visible === false) return;
    
      // Get the path to this node including its name
      const nodePath = [...parentPath, node.name || `Unnamed ${node.type}`];
    
      if (node.type === "TEXT") {
        try {
          // Safely extract font information to avoid Symbol serialization issues
          let fontFamily = "";
          let fontStyle = "";
    
          if (node.fontName) {
            if (typeof node.fontName === "object") {
              if ("family" in node.fontName) fontFamily = node.fontName.family;
              if ("style" in node.fontName) fontStyle = node.fontName.style;
            }
          }
    
          // Create a safe representation of the text node with only serializable properties
          const safeTextNode = {
            id: node.id,
            name: node.name || "Text",
            type: node.type,
            characters: node.characters,
            fontSize: typeof node.fontSize === "number" ? node.fontSize : 0,
            fontFamily: fontFamily,
            fontStyle: fontStyle,
            x: typeof node.x === "number" ? node.x : 0,
            y: typeof node.y === "number" ? node.y : 0,
            width: typeof node.width === "number" ? node.width : 0,
            height: typeof node.height === "number" ? node.height : 0,
            path: nodePath.join(" > "),
            depth: depth,
          };
    
          // Temporarily highlight the node as visual feedback on the canvas
          try {
            // Safe way to create a temporary highlight without causing serialization issues
            const originalFills = JSON.parse(JSON.stringify(node.fills));
            node.fills = [
              {
                type: "SOLID",
                color: { r: 1, g: 0.5, b: 0 },
                opacity: 0.3,
              },
            ];
    
            // Promise-based delay instead of setTimeout
            await delay(500);
    
            try {
              node.fills = originalFills;
            } catch (err) {
              console.error("Error resetting fills:", err);
            }
          } catch (highlightErr) {
            console.error("Error highlighting text node:", highlightErr);
            // Continue anyway, highlighting is just visual feedback
          }
    
          textNodes.push(safeTextNode);
        } catch (nodeErr) {
          console.error("Error processing text node:", nodeErr);
          // Skip this node but continue with others
        }
      }
    
      // Recursively process children of container nodes
      if ("children" in node) {
        for (const child of node.children) {
          await findTextNodes(child, nodePath, depth + 1, textNodes);
        }
      }
    }
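Both scanning paths await a delay helper that is not part of this excerpt; a minimal promise-based sketch of what it presumably looks like:

```javascript
// Minimal promise-based delay, as assumed by findTextNodes and the
// chunked scanner ("await delay(5)", "await delay(500)", etc.).
function delay(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Usage: pause without blocking the plugin's main thread.
(async () => {
  await delay(50);
  console.log("resumed after ~50 ms");
})();
```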
  • Helper for chunked mode: pre-collects all descendant nodes to process, building hierarchy paths.
    async function collectNodesToProcess(
      node,
      parentPath = [],
      depth = 0,
      nodesToProcess = []
    ) {
      // Skip invisible nodes
      if (node.visible === false) return;
    
      // Get the path to this node
      const nodePath = [...parentPath, node.name || `Unnamed ${node.type}`];
    
      // Add this node to the processing list
      nodesToProcess.push({
        node: node,
        parentPath: nodePath,
        depth: depth,
      });
    
      // Recursively add children
      if ("children" in node) {
        for (const child of node.children) {
          await collectNodesToProcess(child, nodePath, depth + 1, nodesToProcess);
        }
      }
    }
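Because the function pushes each node before recursing into its children, the collected list is in pre-order (depth-first) sequence, and invisible subtrees are skipped entirely; restated against plain mock objects (simplified stand-ins for Figma nodes):

```javascript
// Pre-order DFS collection, skipping invisible subtrees, with mock nodes.
async function collectNodesToProcess(node, parentPath = [], depth = 0, out = []) {
  if (node.visible === false) return out;
  const nodePath = [...parentPath, node.name];
  out.push({ name: node.name, path: nodePath.join(" > "), depth });
  if ("children" in node) {
    for (const child of node.children) {
      await collectNodesToProcess(child, nodePath, depth + 1, out);
    }
  }
  return out;
}

const tree = {
  name: "Frame",
  children: [
    { name: "Title" },
    { name: "Hidden", visible: false, children: [{ name: "Unreachable" }] },
    { name: "Group", children: [{ name: "Label" }] },
  ],
};

collectNodesToProcess(tree).then((nodes) => {
  console.log(nodes.map((n) => n.name)); // [ 'Frame', 'Title', 'Group', 'Label' ]
});
```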
  • Helper that safely extracts serializable TEXT node properties (avoiding Figma's mixed-value Symbols) and briefly highlights the node for visual feedback.
    async function processTextNode(node, parentPath, depth) {
      if (node.type !== "TEXT") return null;
    
      try {
        // Safely extract font information
        let fontFamily = "";
        let fontStyle = "";
    
        if (node.fontName) {
          if (typeof node.fontName === "object") {
            if ("family" in node.fontName) fontFamily = node.fontName.family;
            if ("style" in node.fontName) fontStyle = node.fontName.style;
          }
        }
    
        // Create a safe representation of the text node
        const safeTextNode = {
          id: node.id,
          name: node.name || "Text",
          type: node.type,
          characters: node.characters,
          fontSize: typeof node.fontSize === "number" ? node.fontSize : 0,
          fontFamily: fontFamily,
          fontStyle: fontStyle,
          x: typeof node.x === "number" ? node.x : 0,
          y: typeof node.y === "number" ? node.y : 0,
          width: typeof node.width === "number" ? node.width : 0,
          height: typeof node.height === "number" ? node.height : 0,
          path: parentPath.join(" > "),
          depth: depth,
        };
    
        // Highlight the node briefly (optional visual feedback)
        try {
          const originalFills = JSON.parse(JSON.stringify(node.fills));
          node.fills = [
            {
              type: "SOLID",
              color: { r: 1, g: 0.5, b: 0 },
              opacity: 0.3,
            },
          ];
    
          // Brief delay for the highlight to be visible
          await delay(100);
    
          try {
            node.fills = originalFills;
          } catch (err) {
            console.error("Error resetting fills:", err);
          }
        } catch (highlightErr) {
          console.error("Error highlighting text node:", highlightErr);
          // Continue anyway, highlighting is just visual feedback
        }
    
        return safeTextNode;
      } catch (nodeErr) {
        console.error("Error processing text node:", nodeErr);
        return null;
      }
    }
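The typeof guard on fontName exists because Figma reports mixed-style text properties as figma.mixed, a unique Symbol that cannot be JSON-serialized; the extraction logic restated against mock nodes, with a local MIXED symbol standing in for figma.mixed:

```javascript
// Figma reports mixed-style properties as figma.mixed, a unique Symbol.
// MIXED below is a mock stand-in for that sentinel.
const MIXED = Symbol("figma.mixed");

function extractFont(node) {
  let fontFamily = "";
  let fontStyle = "";
  if (node.fontName && typeof node.fontName === "object") {
    if ("family" in node.fontName) fontFamily = node.fontName.family;
    if ("style" in node.fontName) fontStyle = node.fontName.style;
  }
  return { fontFamily, fontStyle };
}

const uniform = { fontName: { family: "Inter", style: "Bold" } };
const mixed = { fontName: MIXED }; // typeof is "symbol", so the guard skips it

console.log(extractFont(uniform)); // { fontFamily: 'Inter', fontStyle: 'Bold' }
console.log(extractFont(mixed));   // { fontFamily: '', fontStyle: '' }
```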
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states what the tool does but lacks details on permissions, rate limits, output format, or whether it's read-only or destructive. This is insufficient for a tool that likely interacts with a design system.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's front-loaded and appropriately sized for a simple tool, earning full marks for conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the lack of annotations and output schema, the description is incomplete. It doesn't explain what 'scan' entails (e.g., returns text content, node IDs, or metadata) or address potential complexities like nested nodes or error handling, leaving significant gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with 'nodeId' clearly documented. The description adds no additional parameter semantics beyond implying scanning occurs within a selected node, which is already covered by the schema. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('scan') and target resource ('all text nodes in the selected Figma node'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'scan_nodes_by_types' or 'get_node_info', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for scanning text nodes, or how it differs from similar tools like 'scan_nodes_by_types' or 'get_node_info', leaving usage ambiguous.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
