
Regex Builder & Tester

build_regex

Generate regular expressions from natural language descriptions for emails, URLs, phones, dates, IPs, and 15+ patterns. Get patterns with JS/Python/TS code snippets and optional test results.

Instructions

Build and test regular expressions from natural language descriptions. Supports emails, URLs, phones, dates, IPs, colors, UUIDs, and 15+ more patterns. Returns the pattern, code snippets in JS/Python/TS, and optional test results.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| description | Yes | What to match (e.g., 'email addresses', 'hex color codes', 'semantic versions') | — |
| testStrings | No | Optional strings to test the regex against | — |
| flags | No | Regex flags | 'g' |
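A call that satisfies this schema might look like the following sketch (the argument values are illustrative, not taken from the source):

```typescript
// Hypothetical arguments for the build_regex tool, matching the input schema above.
// Only `description` is required; `flags` defaults to "g" when omitted.
const args = {
  description: "email addresses",
  testStrings: ["alice@example.com", "not an email"],
  flags: "gi",
};
```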

Implementation Reference

  • Registration and handler implementation for the 'build_regex' tool. It calls the 'regex-builder' endpoint of the Agent Toolbelt API.

```typescript
server.registerTool(
  "build_regex",
  {
    title: "Regex Builder & Tester",
    description:
      "Build and test regular expressions from natural language descriptions. " +
      "Supports emails, URLs, phones, dates, IPs, colors, UUIDs, and 15+ more patterns. " +
      "Returns the pattern, code snippets in JS/Python/TS, and optional test results.",
    inputSchema: {
      description: z
        .string()
        .describe("What to match (e.g., 'email addresses', 'hex color codes', 'semantic versions')"),
      testStrings: z
        .array(z.string())
        .optional()
        .describe("Optional strings to test the regex against"),
      flags: z
        .string()
        .default("g")
        .describe("Regex flags (default: 'g')"),
    },
  },
  async ({ description, testStrings, flags }) => {
    const result = await callToolApi("regex-builder", { description, testStrings, flags });
    const data = result as any;
    const r = data.result;

    const lines = [
      `**Pattern:** \`${r.regexLiteral}\``,
      `**Description:** ${r.description}`,
      "",
      "**Code snippets:**",
      "```javascript",
      r.codeSnippets.javascript,
      "```",
      "```python",
      r.codeSnippets.python,
      "```",
    ];

    if (r.testResults) {
      lines.push("", "**Test results:**");
      for (const t of r.testResults) {
        const status = t.matched ? "✓" : "✗";
        lines.push(`  ${status} "${t.input}" → ${t.matched ? t.matches.join(", ") : "no match"}`);
      }
    }

    return {
      content: [{ type: "text" as const, text: lines.join("\n") }],
    };
  }
);
```
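The handler's markdown assembly can be exercised in isolation. Below is a minimal sketch with a mocked API result; the shape of `r` (fields like `regexLiteral`, `codeSnippets`, `testResults`) is inferred from the field accesses in the handler and is an assumption, not a documented response schema:

```typescript
interface TestResult { input: string; matched: boolean; matches: string[]; }
interface RegexResult {
  regexLiteral: string;
  description: string;
  codeSnippets: { javascript: string; python: string };
  testResults?: TestResult[]; // present only when testStrings were supplied
}

// Replicates the handler's line-building logic as a pure function.
function formatRegexResult(r: RegexResult): string {
  const lines = [
    `**Pattern:** \`${r.regexLiteral}\``,
    `**Description:** ${r.description}`,
    "",
    "**Code snippets:**",
    "```javascript",
    r.codeSnippets.javascript,
    "```",
    "```python",
    r.codeSnippets.python,
    "```",
  ];
  if (r.testResults) {
    lines.push("", "**Test results:**");
    for (const t of r.testResults) {
      const status = t.matched ? "✓" : "✗";
      lines.push(`  ${status} "${t.input}" → ${t.matched ? t.matches.join(", ") : "no match"}`);
    }
  }
  return lines.join("\n");
}

// Mocked result with illustrative values (not a real API payload):
const mock: RegexResult = {
  regexLiteral: "/^[\\w.+-]+@[\\w-]+\\.[a-z]{2,}$/gi",
  description: "Matches email addresses",
  codeSnippets: { javascript: "const re = /.../;", python: "pattern = r'...'" },
  testResults: [
    { input: "alice@example.com", matched: true, matches: ["alice@example.com"] },
    { input: "not an email", matched: false, matches: [] },
  ],
};
const text = formatRegexResult(mock);
```

Running this against the mock shows both branches of the test-result rendering: a ✓ line listing the matches and a ✗ line ending in "no match".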
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, and it successfully discloses return values (pattern, JS/Python/TS snippets, test results). However, it omits error-handling behavior, determinism, and side effects. There are no contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences with zero waste. Front-loaded with core purpose, followed by capability enumeration (15+ patterns), and output specification. Every sentence earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description compensates by detailing return structure (pattern + code snippets + test results). With 100% schema coverage and 3 parameters, the description provides sufficient context for invocation despite lacking annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage (baseline 3). The description adds crucial context: the 'description' parameter accepts natural language (not regex syntax), and 'testStrings' is connected to the 'test results' in the return value, which aids agent reasoning about parameter selection.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verbs (build/test) with clear resource (regex) and explicitly distinguishes from sibling 'build_cron' by emphasizing 'natural language descriptions' and listing regex-specific patterns (emails, URLs, UUIDs) rather than cron schedules.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies a usage context ('from natural language descriptions') but lacks explicit when-to-use guidance, prerequisites, or named alternatives. It does not indicate when to supply testStrings versus omitting it, or how this compares to writing regexes manually.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/marras0914/agent-toolbelt'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.