
promptz.dev MCP Server

by cremich

list_rules

Retrieve available project rules from promptz.dev, optionally filtered by tags, so development guidelines can be browsed without switching contexts.

Instructions

List available project rules from promptz.dev

Input Schema

Name    Required  Description                                             Default
cursor  No        Pagination token for fetching the next set of results   —
tags    No        Filter rules by tags (e.g. ['CDK', 'React'])            —
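The two optional parameters can be sketched as call arguments an MCP client might send. This is an illustrative shape only; the `ListRulesArgs` name and the cursor value are made up here, not part of the server's code.

```typescript
// Hypothetical argument shapes mirroring the input schema above.
interface ListRulesArgs {
  cursor?: string; // pagination token from a previous response's nextCursor
  tags?: string[]; // e.g. ['CDK', 'React']
}

// First page: filter by tags, no cursor yet.
const firstPage: ListRulesArgs = { tags: ["CDK", "React"] };

// Subsequent page: pass back the nextCursor returned by the previous call.
// (The token below is a placeholder, not a real value.)
const nextPage: ListRulesArgs = {
  cursor: "opaque-token-from-previous-response",
  tags: ["CDK", "React"],
};
```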

Implementation Reference

  • The main handler function for the 'list_rules' tool. It extracts parameters from the MCP CallToolRequest, calls the listRules helper from graphql-client, maps the rules to a simplified format, and returns the result as a JSON string in CallToolResult format.
    export async function listRulesToolHandler(request: CallToolRequest): Promise<CallToolResult> {
      const cursor = request.params.arguments?.cursor as string | undefined;
      const tags = request.params.arguments?.tags as string[] | undefined;
      const response = await listRules(cursor, tags);
      const rules = response.searchProjectRules.results;
    
      const result = {
        rules: rules.map((rule) => ({
          name: rule.name,
          description: rule.description,
          tags: rule.tags || [],
        })),
        nextCursor: response.searchProjectRules.nextToken || undefined,
      };
    
      return {
        content: [
          {
            type: "text",
            text: JSON.stringify(result, null, 2),
          },
        ],
      };
    }
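Because the handler serializes the rule list into a JSON string inside a text content block, a caller has to parse it back out. A minimal sketch, assuming the result shapes shown above (the `parseListRulesResult` helper and interface names here are illustrative, not part of the server):

```typescript
interface SimplifiedRule {
  name: string;
  description: string;
  tags: string[];
}

interface ListRulesResult {
  rules: SimplifiedRule[];
  nextCursor?: string;
}

// Unwrap the first text content block and parse its JSON payload.
function parseListRulesResult(result: {
  content: { type: string; text: string }[];
}): ListRulesResult {
  const textBlock = result.content.find((c) => c.type === "text");
  if (!textBlock) {
    throw new Error("list_rules returned no text content");
  }
  return JSON.parse(textBlock.text) as ListRulesResult;
}
```

A caller would then read `rules` for the current page and pass `nextCursor`, if present, as the `cursor` argument of the next call.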
  • src/index.ts:64-83 (registration)
    Registration of the 'list_rules' tool in the ListToolsRequest handler, including the tool name, description, and input schema definition.
    {
      name: "list_rules",
      description: "List available project rules from promptz.dev",
      inputSchema: {
        type: "object",
        properties: {
          cursor: {
            type: "string",
            description: "Pagination token for fetching the next set of results",
          },
          tags: {
            type: "array",
            items: {
              type: "string",
            },
            description: "Filter rules by tags (e.g. ['CDK', 'React'])",
          },
        },
      },
    },
  • src/index.ts:114-116 (registration)
    Dispatch to the listRulesToolHandler in the CallToolRequest switch statement.
    case "list_rules": {
      return await listRulesToolHandler(request);
    }
  • TypeScript interface defining the response shape from the listRules GraphQL query.
    export interface ListRulesResponse {
      searchProjectRules: {
        results: ProjectRule[];
        nextToken?: string;
      };
    }
  • Helper function that executes the GraphQL query to list rules, handling pagination and tags, and returns the typed response.
    export async function listRules(nextToken?: string, tags?: string[]): Promise<ListRulesResponse> {
      try {
        logger.info("[API] Listing rules" + (tags ? ` with tags: ${tags.join(", ")}` : ""));
    
        const { data, error } = await client.query(
          gql`
            ${SEARCH_RULES}
          `,
          { nextToken, tags },
        );
    
        if (error) {
          throw error;
        }
    
        return data;
      } catch (error) {
        logger.error(`[Error] Failed to list project rules: ${error instanceof Error ? error.message : String(error)}`);
        throw new Error(`Failed to list project rules: ${error instanceof Error ? error.message : String(error)}`);
      }
    }
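Since each response carries an optional nextToken, fetching all rules means threading that token back into the next call until it is absent. A minimal pagination sketch, assuming a fetch function with the same signature and response shape as the listRules helper above (the `listAllRules` wrapper is hypothetical):

```typescript
interface ProjectRule {
  name: string;
  description: string;
  tags?: string[];
}

interface ListRulesResponse {
  searchProjectRules: {
    results: ProjectRule[];
    nextToken?: string;
  };
}

// Drain every page by feeding each response's nextToken into the next call.
async function listAllRules(
  fetchPage: (nextToken?: string, tags?: string[]) => Promise<ListRulesResponse>,
  tags?: string[],
): Promise<ProjectRule[]> {
  const all: ProjectRule[] = [];
  let cursor: string | undefined;
  do {
    const page = await fetchPage(cursor, tags);
    all.push(...page.searchProjectRules.results);
    cursor = page.searchProjectRules.nextToken;
  } while (cursor);
  return all;
}
```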
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but only states what the tool does without mentioning safety, permissions, rate limits, or response format. It lacks details on whether this is a read-only operation, what happens on errors, or how results are structured.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (list operation with filtering and pagination), no annotations, and no output schema, the description is minimally adequate but incomplete. It covers the basic purpose but lacks behavioral context and output details that would help an agent use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so parameters 'cursor' and 'tags' are well-documented in the schema itself. The description adds no additional parameter semantics, but the high schema coverage justifies the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('available project rules from promptz.dev'), making the purpose understandable. However, it doesn't differentiate this tool from its sibling 'list_prompts' or 'get_rule', which would be needed for a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'get_rule' or 'list_prompts'. There's no mention of prerequisites, context, or exclusions, leaving the agent with insufficient usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/cremich/promptz-mcp'
