writing_feedback

Analyze writing for clarity, grammar, style, and structure to improve essays, articles, and technical documentation with actionable feedback.

Instructions

Provides feedback on a piece of writing, such as an essay, article, or technical documentation, focusing on clarity, grammar, style, structure, and overall effectiveness.

Input Schema

Name          Required  Description                                                      Default
text          Yes       The text to review                                               (none)
writing_type  Yes       The type of writing (e.g., 'essay', 'article', 'documentation')  (none)

Implementation Reference

  • The main handler function that executes the tool logic: checks rate limits, validates and sanitizes inputs, constructs a specialized prompt using type-specific guidance, calls the Deepseek API, and formats the response or handles errors.
    export async function handler(args: unknown): Promise<ToolResponse> {
      // Check rate limit first
      if (!checkRateLimit()) {
        return {
          content: [
            {
              type: 'text',
              text: 'Rate limit exceeded. Please try again later.',
            },
          ],
          isError: true,
        };
      }
    
      // Validate arguments
      if (!args || typeof args !== 'object') {
        return {
          content: [
            {
              type: 'text',
              text: 'Invalid arguments provided.',
            },
          ],
          isError: true,
        };
      }
    
      try {
        // Type guard for WritingFeedbackArgs
        if (!('text' in args) || !('writing_type' in args) ||
            typeof args.text !== 'string' || typeof args.writing_type !== 'string') {
          return {
            content: [
              {
                type: 'text',
                text: 'Both text and writing_type are required and must be strings.',
              },
            ],
            isError: true,
          };
        }
    
        const typedArgs = args as WritingFeedbackArgs;
    
        // Sanitize inputs
        const sanitizedText = sanitizeInput(typedArgs.text);
        const sanitizedType = sanitizeInput(typedArgs.writing_type.toLowerCase());
    
        // Get type-specific prompts
        const typePrompts = WRITING_TYPE_PROMPTS[sanitizedType] || WRITING_TYPE_PROMPTS.default;
    
        // Create the complete prompt
        const prompt = createPrompt(
          {
            ...BASE_PROMPT_TEMPLATE,
            template: BASE_PROMPT_TEMPLATE.template.replace(
              '{type_specific_prompts}',
              typePrompts
            ),
          },
          {
            writing_type: sanitizedType,
            text: sanitizedText,
          }
        );
    
        // Make the API call
        const response = await makeDeepseekAPICall(prompt, SYSTEM_PROMPT);
    
        if (response.isError) {
          return {
            content: [
              {
                type: 'text',
                text: `Error generating writing feedback: ${response.errorMessage || 'Unknown error'}`,
              },
            ],
            isError: true,
          };
        }
    
        // Return the formatted response
        return {
          content: [
            {
              type: 'text',
              text: response.text,
            },
          ],
        };
      } catch (error) {
        console.error('Writing feedback tool error:', error);
        return {
          content: [
            {
              type: 'text',
              text: `Error processing writing feedback: ${error instanceof Error ? error.message : 'Unknown error'}`,
            },
          ],
          isError: true,
        };
      }
    }
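The handler above relies on `checkRateLimit` and `sanitizeInput` helpers whose implementations are not shown on this page. A minimal sketch of what they might look like, assuming a fixed-window rate limiter and basic control-character stripping (the window size and call budget are assumptions, not taken from the source):

```typescript
// Hypothetical sketch of the helpers referenced by the handler.
const WINDOW_MS = 60_000; // assumed 1-minute window
const MAX_CALLS = 10;     // assumed per-window call budget

let windowStart = Date.now();
let callCount = 0;

export function checkRateLimit(): boolean {
  const now = Date.now();
  // Reset the window once it has elapsed.
  if (now - windowStart >= WINDOW_MS) {
    windowStart = now;
    callCount = 0;
  }
  if (callCount >= MAX_CALLS) return false;
  callCount += 1;
  return true;
}

export function sanitizeInput(input: string): string {
  // Trim whitespace and strip ASCII control characters (keeping \t, \n, \r).
  return input.trim().replace(/[\u0000-\u0008\u000B\u000C\u000E-\u001F]/g, '');
}
```

The real server may use a different policy (e.g., a token bucket or per-client limits); this only illustrates the contract the handler depends on: a boolean gate and a string-to-string cleaner.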
  • ToolDefinition object including the inputSchema that defines the expected parameters (text and writing_type) for the writing_feedback tool.
    export const definition: ToolDefinition = {
      name: 'writing_feedback',
      description: 'Provides feedback on a piece of writing, such as an essay, article, or technical documentation, focusing on clarity, grammar, style, structure, and overall effectiveness.',
      inputSchema: {
        type: 'object',
        properties: {
          text: {
            type: 'string',
            description: 'The text to review',
          },
          writing_type: {
            type: 'string',
            description: "The type of writing (e.g., 'essay', 'article', 'documentation')",
          },
        },
        required: ['text', 'writing_type'],
      },
    };
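Given that schema, the arguments a client would pass in a tools/call request can be sketched as follows (the sample text and writing type are hypothetical, chosen only to satisfy the schema):

```typescript
// Hypothetical example of tools/call params matching the inputSchema above.
const callParams = {
  name: 'writing_feedback',
  arguments: {
    text: 'The quick brown fox jumps over the lazy dog.',
    writing_type: 'essay',
  },
};
```

Both `text` and `writing_type` are required strings; omitting either would fail the handler's type guard before any API call is made.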
  • src/server.ts:56-64 (registration)
    Registration of the writing_feedback tool by including its definition in the list returned for ListToolsRequest, making it discoverable by clients.
    this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        secondOpinion.definition,
        codeReview.definition,
        designCritique.definition,
        writingFeedback.definition,
        brainstormEnhancements.definition,
      ],
    }));
  • src/server.ts:114-122 (dispatch)
    Dispatch logic in CallToolRequest handler that validates args using type guard and invokes the writing_feedback handler.
    case "writing_feedback": {
      if (!isWritingFeedbackArgs(args)) {
        throw new McpError(
          ErrorCode.InvalidParams,
          "Invalid parameters for writing feedback"
        );
      }
      response = await writingFeedback.handler(args);
  break;
}
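For context, this case sits inside a larger CallToolRequest handler that switches on the tool name. An alternative, registry-based shape for that dispatch is sketched below; this is not the author's implementation (which uses a switch), just an illustration of the same contract with the `ToolResponse` shape inferred from the snippets on this page:

```typescript
// Hypothetical registry-style dispatch, assuming the ToolResponse shape
// used throughout the handler snippets above.
type ToolResponse = {
  content: { type: 'text'; text: string }[];
  isError?: boolean;
};

type Handler = (args: unknown) => Promise<ToolResponse>;

// Map of tool name to handler; real code would register all five tools.
const handlers: Record<string, Handler> = {
  writing_feedback: async () => ({
    content: [{ type: 'text', text: 'stub feedback' }],
  }),
};

async function dispatch(name: string, args: unknown): Promise<ToolResponse> {
  const handler = handlers[name];
  if (!handler) {
    // Unknown tool names surface as error responses rather than throwing.
    return {
      content: [{ type: 'text', text: `Unknown tool: ${name}` }],
      isError: true,
    };
  }
  return handler(args);
}
```

A lookup table avoids a growing switch statement as tools are added, at the cost of losing the per-case argument validation shown in the source, which would need to move into each handler.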
  • TypeScript interface defining the shape of arguments for the writing_feedback tool.
    export interface WritingFeedbackArgs {
      text: string;
      writing_type: string;
    }
  • Type guard function used in server.ts to validate incoming arguments before dispatching to the handler.
    export function isWritingFeedbackArgs(args: unknown): args is WritingFeedbackArgs {
      if (!args || typeof args !== 'object') return false;
      const a = args as Record<string, unknown>;
      
      return 'text' in a && 'writing_type' in a &&
             typeof a.text === 'string' && typeof a.writing_type === 'string';
    }
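A quick demonstration of how this guard behaves at the dispatch boundary, with the interface and guard reproduced so the example is self-contained (the sample argument objects are hypothetical):

```typescript
interface WritingFeedbackArgs {
  text: string;
  writing_type: string;
}

function isWritingFeedbackArgs(args: unknown): args is WritingFeedbackArgs {
  if (!args || typeof args !== 'object') return false;
  const a = args as Record<string, unknown>;
  return 'text' in a && 'writing_type' in a &&
         typeof a.text === 'string' && typeof a.writing_type === 'string';
}

const valid: unknown = { text: 'Draft intro paragraph.', writing_type: 'article' };
const invalid: unknown = { text: 42, writing_type: 'article' };

if (isWritingFeedbackArgs(valid)) {
  // Inside this branch the compiler narrows `valid` to WritingFeedbackArgs.
  console.log(valid.writing_type); // prints "article"
}
console.log(isWritingFeedbackArgs(invalid)); // prints false
```

Note the guard checks only types, not content: an empty `text` string passes, so any length or emptiness constraints would have to be enforced in the handler.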
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. While it states what the tool does (provides feedback), it doesn't describe how it behaves: no information about response format, depth of analysis, whether it's generative or evaluative, processing time, or any limitations. This is inadequate for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise: a single sentence that efficiently communicates the core functionality. It's front-loaded with the main purpose and includes relevant examples. There's no wasted verbiage or redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. For a feedback tool with 2 parameters, it should explain what kind of feedback to expect, response format, or any constraints. The description covers what the tool does but not how it works or what it returns, leaving significant gaps for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters fully. The description adds no additional parameter semantics beyond what's in the schema: it mentions writing types but doesn't elaborate on format expectations, length constraints, or special requirements. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Provides feedback on a piece of writing' with specific focus areas (clarity, grammar, style, structure, effectiveness). It distinguishes itself from sibling tools like code_review and design_critique by specifying writing domains (essay, article, technical documentation). However, it doesn't explicitly differentiate itself from second_opinion, which could also provide feedback.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to choose writing_feedback over brainstorm_enhancements, code_review, design_critique, or second_opinion. There are no explicit when/when-not statements or alternative recommendations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
