updateFile

Modify metadata for existing Pinata files by updating file names and custom key-value pairs to organize content on IPFS.

Instructions

Update metadata for an existing file on Pinata including name and key-value pairs

Input Schema

Name        Required   Description                                         Default
network     No         Whether the file is in public or private storage    public
id          Yes        The unique ID of the file to update
name        No         New name for the file
keyvalues   No         Metadata key-value pairs to update
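
To make the parameters concrete, here is a hypothetical arguments object for an updateFile call; the id and metadata values below are made up for illustration:

```typescript
// Hypothetical example arguments for an updateFile call (illustrative values only).
const exampleArgs = {
  network: "public", // defaults to "public" when omitted
  id: "00000000-0000-0000-0000-000000000000", // required: the Pinata file ID
  name: "report-v2.pdf", // optional: new display name
  keyvalues: { project: "demo", status: "final" }, // optional metadata pairs
};

console.log(JSON.stringify(exampleArgs, null, 2));
```

Only id is required; name and keyvalues may each be omitted, in which case that field is left out of the update payload.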

Implementation Reference

  • src/index.ts:249-291 (registration)
    The updateFile tool is registered with the MCP server, allowing updates to file metadata including name and key-value pairs on Pinata IPFS storage.
    server.tool(
      "updateFile",
      "Update metadata for an existing file on Pinata including name and key-value pairs",
      {
        network: z
          .enum(["public", "private"])
          .default("public")
          .describe("Whether the file is in public or private storage"),
        id: z.string().describe("The unique ID of the file to update"),
        name: z.string().optional().describe("New name for the file"),
        keyvalues: z
          .record(z.any())
          .optional()
          .describe("Metadata key-value pairs to update"),
      },
      async ({ network, id, name, keyvalues }) => {
        try {
          const url = `https://api.pinata.cloud/v3/files/${network}/${id}`;
    
          const payload: { name?: string; keyvalues?: Record<string, unknown> } =
            {};
          if (name) payload.name = name;
          if (keyvalues) payload.keyvalues = keyvalues;
    
          const response = await fetch(url, {
            method: "PUT",
            headers: getHeaders(),
            body: JSON.stringify(payload),
          });
    
          if (!response.ok) {
            throw new Error(
              `Failed to update file: ${response.status} ${response.statusText}`
            );
          }
    
          const data = await response.json();
          return successResponse(data);
        } catch (error) {
          return errorResponse(error);
        }
      }
    );
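The registration above calls a getHeaders() helper that is not shown in this excerpt. A minimal sketch, assuming Pinata's standard JWT bearer-token authentication read from an environment variable (the variable name PINATA_JWT is an assumption, not confirmed by the excerpt):

```typescript
// Hypothetical sketch of getHeaders (not shown in the excerpt); assumes the
// Pinata JWT is supplied via the PINATA_JWT environment variable.
const getHeaders = (): Record<string, string> => ({
  Authorization: `Bearer ${process.env.PINATA_JWT ?? ""}`,
  "Content-Type": "application/json",
});
```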
  • Helper function that formats successful API responses into the standard MCP content format with JSON stringified data.
    // Helper for consistent success responses
    const successResponse = (data: unknown) => ({
      content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }],
    });
  • Helper function that formats error responses into the standard MCP content format with error message and isError flag.
    // Helper for consistent error responses
    const errorResponse = (error: unknown) => ({
      content: [
        {
          type: "text" as const,
          text: `Error: ${error instanceof Error ? error.message : String(error)}`,
        },
      ],
      isError: true,
    });
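
To make the response shape concrete, here is an illustrative run of both helpers, re-declared from the excerpt above so the snippet is self-contained:

```typescript
// Re-declared from the excerpt above so this snippet runs standalone.
const successResponse = (data: unknown) => ({
  content: [{ type: "text" as const, text: JSON.stringify(data, null, 2) }],
});

const errorResponse = (error: unknown) => ({
  content: [
    {
      type: "text" as const,
      text: `Error: ${error instanceof Error ? error.message : String(error)}`,
    },
  ],
  isError: true,
});

const ok = successResponse({ id: "abc", name: "report-v2.pdf" });
const err = errorResponse(new Error("Failed to update file: 404 Not Found"));

console.log(ok.content[0].text);  // pretty-printed JSON of the data
console.log(err.content[0].text); // "Error: Failed to update file: 404 Not Found"
```

Both helpers return the MCP text-content shape; only errorResponse sets the isError flag, which is how the client distinguishes failures.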
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It states this updates metadata, implying a mutation, but lacks critical behavioral details: whether authentication is required, if changes are reversible, rate limits, error conditions, or what happens to existing metadata not mentioned. The description is minimal and doesn't compensate for the absence of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose. It avoids redundancy and wastes no words, though it could be slightly more structured (e.g., separating scope from parameters). Every part earns its place, making it appropriately concise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a mutation tool with 4 parameters, the description is incomplete. It lacks behavioral context (e.g., permissions, side effects), output details, and usage guidelines. While the schema covers parameters well, the description doesn't add enough value to compensate for missing structured data, leaving gaps for an AI agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 4 parameters. The description mentions 'name and key-value pairs', aligning with the 'name' and 'keyvalues' parameters, but adds no additional meaning beyond what the schema provides (e.g., format examples, constraints). With high schema coverage, the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Update metadata'), target resource ('existing file on Pinata'), and specific fields ('name and key-value pairs'). It is distinguishable from siblings like 'uploadFile' (creation) and 'deleteFile' (removal), though it doesn't explicitly contrast with 'updateGroup' or 'updatePaymentInstruction', which update different resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing the file ID), exclusions (e.g., cannot update file content), or compare with similar tools like 'updateGroup' for group metadata. Usage is implied only by the verb 'Update'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
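
One way to address the gaps noted above would be a fuller tool description. The wording below is illustrative only; the claims about auth and content immutability should be verified against the Pinata API before adoption:

```typescript
// Illustrative, expanded tool description; not the project's actual text.
const expandedDescription =
  "Update the name and/or keyvalues metadata of an existing Pinata file by its " +
  "file ID (mutating; requires a Pinata JWT). Does not change file content; " +
  "re-upload with uploadFile for that. For group metadata, use updateGroup.";

console.log(expandedDescription);
```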
