
Mendix Context Bridge

by EGorsel

inspect_local_microflow

Find a microflow by name and return its logical steps to understand local Mendix project structure and logic directly from the .mpr file.

Instructions

Searches for a microflow by name and returns its logical steps. (Original Dutch: "Zoekt een microflow op naam en geeft de logische stappen terug.")

Input Schema

JSON Schema

Name | Required | Description            | Default
-----|----------|------------------------|--------
name | Yes      | Name of the microflow  | —
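A call supplies only the required `name` field. A minimal sketch of valid arguments; the microflow name below is hypothetical, not taken from any real project:

```typescript
// Example arguments for a tools/call request to inspect_local_microflow.
// "ACT_Order_Calculate" is a made-up microflow name for illustration.
const exampleArgs: { name: string } = { name: "ACT_Order_Calculate" };
```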

Implementation Reference

  • Executes the inspect_local_microflow tool by extracting the microflow name from arguments, calling reader.getMicroflowJSON, and returning the JSON-formatted result.
    if (request.params.name === "inspect_local_microflow") {
        const name = String(request.params.arguments?.name);
        const data = reader.getMicroflowJSON(name);
        return {
            content: [{ type: "text", text: JSON.stringify(data, null, 2) }]
        };
    }
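Note that `String(request.params.arguments?.name)` silently turns a missing argument into the literal string `"undefined"`. A hedged sketch of stricter extraction; the helper name is illustrative and not part of the actual server code:

```typescript
// Sketch: stricter argument extraction than String(arguments?.name),
// which would coerce a missing argument into the string "undefined".
// extractMicroflowName is a hypothetical helper, not the server's API.
function extractMicroflowName(args: Record<string, unknown> | undefined): string {
  const name = args?.name;
  if (typeof name !== "string" || name.trim() === "") {
    throw new Error("inspect_local_microflow requires a non-empty string 'name'.");
  }
  return name;
}
```

The handler could then call this helper before touching the database, so a malformed request fails fast with a clear message instead of producing a lookup for "undefined".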
  • src/server.ts:54-67 (registration)
    Registers the inspect_local_microflow tool in the ListTools response, including its name, description, and input schema requiring a 'name' string.
    {
        name: "inspect_local_microflow",
        description: "Zoekt een microflow op naam en geeft de logische stappen terug.",
        inputSchema: {
            type: "object",
            properties: {
                name: {
                    type: "string",
                    description: "Naam van de microflow"
                }
            },
            required: ["name"]
        }
    },
  • Helper method in the MprReader class that attempts to retrieve a microflow's JSON by name from the .mpr SQLite database; because the schema is uncertain, it currently returns only the available columns and an explanatory message.
    getMicroflowJSON(name: string): any {
        if (!this.db) {
            throw new Error('Database not connected.');
        }
    try {
        // Look for a unit that might contain the name.
        // 'tree' is often the blob column; 'unitId' is the ID.
        // This query is speculative: we want a row where some text column
        // matches the name.
        // Use PRAGMA to list the Unit table's columns (ideally cached).
        const columns = this.db.pragma('table_info(Unit)') as any[];
        const colNames = columns.map(c => c.name);

        // Heuristic: is there a 'Name' column? Without one we cannot filter
        // by name in SQL short of scanning blobs, so return a message and
        // the raw column list instead.
    
            return {
                message: "To retrieve a specific microflow, we need to know the schema column for 'Name'.",
                availableColumns: colNames,
                instruction: "Please inspect 'getProjectSummary' output to identify the Name column."
            };
        } catch (error) {
            console.error(`Error retrieving microflow ${name}:`, error);
            return null;
        }
    }
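The helper stops at "is there a 'Name' column?" without acting on it. A hedged sketch of that heuristic as a pure function, taking the column names already obtained from `PRAGMA table_info(Unit)`; the candidate list is a guess, and `pickNameColumn` is a hypothetical helper, not part of the actual MprReader class:

```typescript
// Sketch of the "is there a 'Name' column?" heuristic the comments describe:
// given the column names from PRAGMA table_info(Unit), pick the most
// plausible name column, or null if none looks right.
// The candidate list is an assumption, not a documented .mpr schema.
function pickNameColumn(colNames: string[]): string | null {
  const candidates = ["name", "unitname", "documentname"];
  for (const candidate of candidates) {
    const match = colNames.find((c) => c.toLowerCase() === candidate);
    if (match) return match;
  }
  return null;
}
```

If this returned a column, `getMicroflowJSON` could build a parameterized `SELECT ... WHERE <column> = ?` query; if it returned null, the current fallback message would still apply.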
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool searches and returns logical steps, implying a read-only operation, but doesn't disclose critical traits like error handling (e.g., what happens if the microflow isn't found), performance considerations, or output format details. For a tool with no annotations, this leaves significant gaps in understanding its behavior beyond the basic purpose.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence in Dutch: 'Zoekt een microflow op naam en geeft de logische stappen terug.' It is front-loaded with the core action and outcome, with zero wasted words. Every part of the sentence directly contributes to understanding the tool's function, making it highly efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a search tool with no output schema and no annotations), the description is incomplete. It doesn't explain what 'logical steps' entail, how they are formatted, or any limitations (e.g., only works for local microflows). With no output schema to clarify return values and no annotations for behavioral context, the description should provide more detail to fully guide the agent, but it falls short.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with the parameter 'name' documented as 'Naam van de microflow' (Name of the microflow). The description adds no additional parameter semantics beyond what the schema provides, such as format examples or constraints. Since schema coverage is high, the baseline score of 3 is appropriate, as the schema adequately documents the parameter without needing extra details in the description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Zoekt een microflow op naam en geeft de logische stappen terug' (Searches for a microflow by name and returns the logical steps). It specifies the verb ('zoeken' - search), resource ('microflow'), and output ('logical steps'), which is specific and actionable. However, it doesn't explicitly differentiate from sibling tools like 'get_domain_model' or 'inspect_database_schema', which likely serve different purposes but share a similar inspection theme.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools, prerequisites, or contextual cues for selection. For example, it doesn't clarify if this is for debugging, documentation, or analysis, or how it differs from 'list_local_modules'. Without such guidance, the agent must infer usage from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

