
MCP Server Boilerplate

by ricleedo

mongo-aggregate

Execute aggregation pipelines on MongoDB collections to process and analyze data through multi-stage operations.

Instructions

Execute aggregation pipeline on a MongoDB collection

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| database | Yes | Database name | |
| collection | Yes | Collection name | |
| pipeline | Yes | Aggregation pipeline as array of stage objects | |

Implementation Reference

  • Handler function that executes the MongoDB aggregation pipeline on the specified collection, formats the results, and returns them in the MCP response format (shown inline in the registration below).
  • Input schema using Zod that validates the database name, collection name, and aggregation pipeline array (shown inline in the registration below).
  • src/index.ts:233-262 (registration)
    Registration of the 'mongo-aggregate' tool using server.tool(), including name, description, schema, and inline handler.
    server.tool(
      "mongo-aggregate",
      "Execute aggregation pipeline on a MongoDB collection",
      {
        database: z.string().describe("Database name"),
        collection: z.string().describe("Collection name"),
        pipeline: z.array(z.record(z.any())).describe("Aggregation pipeline as array of stage objects"),
      },
      async ({ database: dbName, collection: collectionName, pipeline }) => {
        try {
          const db = await ensureConnection(dbName);
          const collection: Collection = db.collection(collectionName);
          
          const documents = await collection.aggregate(pipeline).toArray();
          
          const formattedOutput = formatJsonOutput(documents);
          
          return {
            content: [
              {
                type: "text",
                text: `Aggregation returned ${documents.length} document(s):\n\n${formattedOutput}`,
              },
            ],
          };
        } catch (error) {
          throw new Error(`Failed to execute aggregation: ${error instanceof Error ? error.message : 'Unknown error'}`);
        }
      }
    );
  • Helper function to ensure MongoDB client connection and return the database instance, used by all tools including mongo-aggregate.
    async function ensureConnection(dbName: string): Promise<Db> {
      if (!mongoClient) {
        const uri = getMongoUri();
        mongoClient = new MongoClient(uri);
        await mongoClient.connect();
      }
      
      if (!databases.has(dbName)) {
        databases.set(dbName, mongoClient.db(dbName));
      }
      
      return databases.get(dbName)!;
    }
  • Helper function to format and truncate large JSON outputs for tool responses, called by the handler.
    function formatJsonOutput(data: unknown): string {
      const truncatedData = truncateForOutput(data);
      let outputText = JSON.stringify(truncatedData, null, 2);
      
      outputText = outputText.replace(
        /"\.\.\.(\d+) more items"/g,
        "...$1 more items"
      );
      outputText = outputText.replace(
        /"\.\.\.(\d+) more properties": "\.\.\.?"/g,
        "...$1 more properties"
      );
      
      return outputText;
    }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While 'Execute aggregation pipeline' implies a read operation (not destructive), it doesn't clarify whether this requires specific permissions, has performance implications, returns results in a particular format, or handles errors. For a database query tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just seven words, with zero wasted language. It's front-loaded with the core action and target, making it immediately understandable. Every word earns its place by conveying essential information about what the tool does.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that this is a database query tool with no annotations, no output schema, and sibling tools that perform similar operations, the description is insufficiently complete. It doesn't explain what kind of results to expect, how data is returned, whether there are pagination considerations, or how this differs from simpler query operations available in sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all three parameters clearly documented in the schema itself. The description doesn't add any meaningful parameter semantics beyond what's already in the schema; it mentions 'aggregation pipeline', which corresponds to the 'pipeline' parameter, but provides no additional context about pipeline structure, stage types, or usage patterns.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Execute aggregation pipeline') and target resource ('on a MongoDB collection'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate this aggregation operation from sibling tools like mongo-find-documents or mongo-count-documents, which also query MongoDB collections but with different approaches.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance about when to use this tool versus alternatives. It doesn't mention that aggregation pipelines are for complex data processing, transformation, or analysis compared to simpler queries (mongo-find-documents) or counting operations (mongo-count-documents). There's no context about prerequisites or when-not-to-use scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ricleedo/mongo-boilerplate-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.