
Grok MCP Server

by BrewMyTech

create_embeddings

Generate embeddings for text using the Grok API, specifying model, input, encoding format, and dimensions for accurate vector representations.

Instructions

Create embeddings for text with the Grok API

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `dimensions` | No | The number of dimensions the resulting output embeddings should have | — |
| `encoding_format` | No | The format to return the embeddings in | — |
| `input` | Yes | Input text to get embeddings for | — |
| `model` | Yes | ID of the model to use | — |
| `user` | No | A unique user identifier | — |
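To make the parameter shapes concrete, here are two illustrative request payloads that satisfy the schema. The model ID and values are placeholders, not documented names:

```typescript
// Illustrative request payloads matching the input schema above.
// "grok-embedding" is a placeholder model ID, not a documented one.
const singleInput = {
  model: "grok-embedding",
  input: "The quick brown fox jumps over the lazy dog",
  encoding_format: "float" as const, // or "base64"
  dimensions: 256,                   // optional: size of the output vectors
};

const batchInput = {
  model: "grok-embedding",
  input: ["first document chunk", "second document chunk"], // array form
  user: "user-1234",                 // optional end-user identifier
};
```

Note that `input` also accepts token arrays (`number[]` or `number[][]`), per the union in the request schema below.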

Implementation Reference

  • Core handler function implementing the embedding creation logic by making a POST request to the Grok embeddings endpoint and parsing the response.
    export async function createEmbeddings(
      options: z.infer<typeof EmbeddingsRequestSchema>
    ): Promise<z.infer<typeof EmbeddingsResponseSchema>> {
      const response = await grokRequest("embeddings", {
        method: "POST",
        body: options,
      });
    
      return EmbeddingsResponseSchema.parse(response);
    }
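The handler delegates to a `grokRequest` helper that is not shown on this page. A minimal sketch of what it might look like, assuming the xAI base URL `https://api.x.ai/v1` and an `XAI_API_KEY` environment variable (both are assumptions, not confirmed by the source):

```typescript
// Sketch of a grokRequest helper; base URL and env-var name are assumptions.
const GROK_BASE_URL = "https://api.x.ai/v1";

interface GrokRequestInit {
  method: string;
  body?: unknown;
}

// Pure helper: assemble the URL and fetch options for an endpoint,
// so the request shape can be tested without touching the network.
export function buildGrokRequest(
  endpoint: string,
  init: GrokRequestInit,
  apiKey: string
): { url: string; options: { method: string; headers: Record<string, string>; body?: string } } {
  return {
    url: `${GROK_BASE_URL}/${endpoint}`,
    options: {
      method: init.method,
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: init.body !== undefined ? JSON.stringify(init.body) : undefined,
    },
  };
}

export async function grokRequest(endpoint: string, init: GrokRequestInit): Promise<unknown> {
  const { url, options } = buildGrokRequest(endpoint, init, process.env.XAI_API_KEY ?? "");
  const res = await fetch(url, options);
  if (!res.ok) {
    throw new Error(`Grok API error ${res.status}: ${await res.text()}`);
  }
  return res.json();
}
```

Splitting request construction from the network call keeps the auth header and body serialization testable in isolation.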
  • Zod schema defining the input parameters for the create_embeddings tool.
    export const EmbeddingsRequestSchema = z.object({
      model: z.string().describe("ID of the model to use"),
      input: z
        .union([
          z.string(),
          z.array(z.string()),
          z.array(z.number()),
          z.array(z.array(z.number())),
        ])
        .describe("Input text to get embeddings for"),
      encoding_format: z
        .enum(["float", "base64"])
        .optional()
        .describe("The format to return the embeddings in"),
      dimensions: z
        .number()
        .int()
        .positive()
        .optional()
        .describe(
          "The number of dimensions the resulting output embeddings should have"
        ),
      user: z.string().optional().describe("A unique user identifier"),
    });
  • index.ts:138-150 (registration)
    Registration of the create_embeddings tool on the MCP server, including the thin wrapper execute function that calls the core handler.
    server.addTool({
      name: "create_embeddings",
      description: "Create embeddings for text with the Grok API",
      parameters: embeddings.EmbeddingsRequestSchema,
      execute: async (args) => {
        try {
          const result = await embeddings.createEmbeddings(args);
          return JSON.stringify(result, null, 2);
        } catch (err) {
          handleError(err);
        }
      },
    });
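The `catch` branch hands the error to a `handleError` helper whose implementation is not shown; note that if `handleError` returned normally, `execute` would resolve to `undefined`. A plausible sketch, assuming it normalizes unknown errors and rethrows:

```typescript
// Hypothetical handleError; the real implementation is not shown in the
// source. Assumed to normalize unknown errors and rethrow, so the execute
// wrapper never silently resolves to undefined.
export function handleError(err: unknown): never {
  const message = err instanceof Error ? err.message : String(err);
  throw new Error(`Grok request failed: ${message}`);
}
```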
  • Zod schema for parsing the response from the embeddings API.
    export const EmbeddingsResponseSchema = z.object({
      object: z.literal("list"),
      data: z.array(EmbeddingObjectSchema),
      model: z.string(),
      usage: z.object({
        prompt_tokens: z.number(),
        total_tokens: z.number(),
      }),
    });
  • Zod schema for individual embedding objects in the response.
    export const EmbeddingObjectSchema = z.object({
      object: z.literal("embedding"),
      embedding: z.array(z.number()),
      index: z.number(),
    });
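Once a response passes validation, each vector lives at `data[i].embedding`. A common next step is comparing two returned vectors with cosine similarity; the response literal below uses made-up numbers purely for illustration:

```typescript
// Shape mirroring EmbeddingsResponseSchema, written as a plain TS type here.
type EmbeddingsResponse = {
  object: "list";
  data: { object: "embedding"; embedding: number[]; index: number }[];
  model: string;
  usage: { prompt_tokens: number; total_tokens: number };
};

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Fabricated sample response (placeholder model name and numbers).
const sample: EmbeddingsResponse = {
  object: "list",
  data: [
    { object: "embedding", embedding: [1, 0, 0], index: 0 },
    { object: "embedding", embedding: [0, 1, 0], index: 1 },
  ],
  model: "grok-embedding",
  usage: { prompt_tokens: 6, total_tokens: 6 },
};

const [first, second] = sample.data.map((d) => d.embedding);
// orthogonal vectors have similarity 0
const similarity = cosineSimilarity(first, second);
```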
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It states the tool creates embeddings but doesn't mention authentication requirements, rate limits, cost implications, or what the output looks like. For a tool that likely involves API calls and computational resources, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that gets straight to the point without any fluff. It's appropriately sized for a tool with a clear purpose and well-documented schema, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of an embedding tool with no annotations or output schema, the description is incomplete. It doesn't explain what embeddings are used for, potential limitations, error handling, or the return format, leaving significant gaps in an agent's understanding of the tool's full context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information beyond what's already in the schema, which has 100% coverage with detailed descriptions for all 5 parameters. The baseline score of 3 reflects that the schema adequately documents parameters, but the description doesn't enhance understanding with examples or contextual usage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('create embeddings') and target resource ('text with the Grok API'), making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like create_chat_completion or create_completion, which also involve the Grok API but for different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like create_chat_completion or create_completion. There's no mention of use cases, prerequisites, or exclusions that would help an agent decide between similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
