Carbon Voice

by PhononX

run_ai_action

Execute AI prompts on specific Carbon Voice messages to generate responses, analyze content, or perform automated actions based on message IDs.

Instructions

Run an AI Action (Prompt) for a message. You can run an AI Action for a message by its ID or a list of message IDs.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| prompt_id | Yes | | |
| message_ids | Yes | | |
| channel_id | No | | |
| workspace_id | No | | |
| language | No | The language of the response. Defaults to original message. | |
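Only `prompt_id` and `message_ids` are required. For illustration, a call to this tool might pass arguments shaped like the following (the ID values are hypothetical placeholders, not real Carbon Voice identifiers):

```typescript
// Example run_ai_action arguments; IDs are hypothetical placeholders.
const args = {
  prompt_id: "prompt_123",             // required: the AI Action (prompt) to run
  message_ids: ["msg_abc", "msg_def"], // required: one or more message IDs
  language: "en",                      // optional: defaults to the original message's language
};
```

Omitting `channel_id` and `workspace_id` is valid, since both are optional in the schema.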

Implementation Reference

  • The handler function that implements the core logic of the 'run_ai_action' tool. It invokes the simplified API's aIResponseControllerCreateResponse method with the input arguments and authentication header, formats the response, and handles errors.
    async (args: CreateAIResponse, { authInfo }): Promise<McpToolResponse> => {
      try {
        return formatToMCPToolResponse(
          await simplifiedApi.aIResponseControllerCreateResponse(
            args,
            setCarbonVoiceAuthHeader(authInfo?.token),
          ),
        );
      } catch (error) {
        logger.error('Error running ai action:', { error });
        return formatToMCPToolResponse(error);
      }
    },
  • src/server.ts:861-885 (registration)
    Registers the 'run_ai_action' tool on the MCP server, specifying its name, description, input schema, annotations, and handler function.
    server.registerTool(
      'run_ai_action',
      {
        description:
          'Run an AI Action (Prompt) for a message. You can run an AI Action for a message by its ID or a list of message IDs.',
        inputSchema: aIResponseControllerCreateResponseBody.shape,
        annotations: {
          readOnlyHint: false,
          destructiveHint: false,
        },
      },
      async (args: CreateAIResponse, { authInfo }): Promise<McpToolResponse> => {
        try {
          return formatToMCPToolResponse(
            await simplifiedApi.aIResponseControllerCreateResponse(
              args,
              setCarbonVoiceAuthHeader(authInfo?.token),
            ),
          );
        } catch (error) {
          logger.error('Error running ai action:', { error });
          return formatToMCPToolResponse(error);
        }
      },
    );
  • Zod schema definition for the input body of the aIResponseControllerCreateResponse API call, used as inputSchema for the tool.
    export const aIResponseControllerCreateResponseBody = zod.object({
      "prompt_id": zod.string(),
      "message_ids": zod.array(zod.string()),
      "channel_id": zod.string().optional(),
      "workspace_id": zod.string().optional(),
      "language": zod.string().optional().describe('The language of the response. Defaults to original message.')
    })
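The schema accepts a body when the two required string fields are present and the optional fields, if supplied, are also strings. A dependency-free sketch of that same check (illustration only; the actual server validates with zod, not this function):

```typescript
// Hand-rolled validation mirroring the zod schema above (hypothetical helper,
// not part of the real codebase).
function isValidBody(input: unknown): boolean {
  if (typeof input !== "object" || input === null) return false;
  const body = input as Record<string, unknown>;
  // Required: prompt_id must be a string.
  if (typeof body.prompt_id !== "string") return false;
  // Required: message_ids must be an array of strings.
  if (
    !Array.isArray(body.message_ids) ||
    !body.message_ids.every((id) => typeof id === "string")
  ) {
    return false;
  }
  // Optional fields must be strings when present.
  for (const key of ["channel_id", "workspace_id", "language"]) {
    if (key in body && typeof body[key] !== "string") return false;
  }
  return true;
}
```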
  • TypeScript interface defining the shape of the input arguments for the 'run_ai_action' handler.
    export interface CreateAIResponse {
      prompt_id: string;
      message_ids: string[];
      channel_id?: string;
      workspace_id?: string;
      /** The language of the response. Defaults to original message. */
      language?: string;
    }
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is not read-only and not destructive, which the description doesn't contradict. The description adds that it runs an AI action 'for a message', implying it might generate or process content based on messages, but doesn't disclose behavioral traits like rate limits, authentication needs, response format, or side effects. With annotations covering basic safety, the description adds minimal context beyond them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that directly state the tool's purpose and parameter options. It's front-loaded with the core action and avoids unnecessary details. However, it could be slightly more structured by explicitly listing key parameters or use cases.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has 5 parameters with low schema coverage, no output schema, and annotations only cover basic hints, the description is incomplete. It doesn't explain what an 'AI Action' entails, the expected outcomes, error conditions, or how parameters like 'channel_id' and 'workspace_id' fit in. For a tool that likely performs non-trivial AI processing, this leaves significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is low at 20%, with only the 'language' parameter having a description. The tool description refers to 'message IDs' and, implicitly, to 'prompt_id', but doesn't explain what these parameters mean, their formats, or how they interact (e.g., whether multiple message_ids are processed together). It fails to compensate for the poor schema coverage, leaving most parameters semantically unclear.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Run an AI Action') and the target resource ('for a message'), specifying it can be done by message ID or list of message IDs. It distinguishes from sibling tools like 'list_ai_actions' or 'get_ai_action_responses' by focusing on execution rather than listing or retrieving results. However, it doesn't explicitly differentiate from 'run_ai_action_for_shared_link', which is a similar tool for a different context.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a valid prompt_id or message access), exclusions, or compare it to similar tools like 'run_ai_action_for_shared_link'. The agent must infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
