Glama

get_peer_review_response_prompt

Generate a professional point-by-point peer review response prompt. Produces respectful, data-driven replies to maximize acceptance.

Instructions

[FREE] Generate a prompt to draft a professional point-by-point reviewer response. Produces respectful, concise, data-driven responses that maximize acceptance chances. DATA SAFETY: Only reference published or approved data in supporting_data.

Input Schema

Name              Required  Description  Default
reviewer_comment  Yes       -            -
action_taken      Yes       -            -
supporting_data   No        -            ""
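As a sketch of how these fields might be populated (the argument values below are hypothetical, not taken from the server), a client-side payload matching the schema could look like:

```python
# Hypothetical arguments for get_peer_review_response_prompt, matching
# the input schema above; supporting_data is optional (defaults to "").
arguments = {
    "reviewer_comment": "The sample size in Table 2 appears underpowered.",
    "action_taken": "We added a post-hoc power analysis as Section 3.4.",
    "supporting_data": "Observed power = 0.84 at alpha = 0.05.",
}

# The schema marks only these two fields as required.
required = {"reviewer_comment", "action_taken"}
missing = required - arguments.keys()
print(sorted(missing))  # [] when the payload satisfies the schema
```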

Output Schema

Name    Required  Description  Default
result  Yes       -            -

Implementation Reference

  • The function that implements the get_peer_review_response_prompt tool. It takes reviewer_comment, action_taken, and optional supporting_data, and returns a formatted prompt string for drafting a point-by-point reviewer response.
    @mcp.tool()
    def get_peer_review_response_prompt(
        reviewer_comment: str,
        action_taken: str,
        supporting_data: str = ""
    ) -> str:
        """
        [FREE] Generate a prompt to draft a professional point-by-point reviewer response.
        Produces respectful, concise, data-driven responses that maximize acceptance chances.
        DATA SAFETY: Only reference published or approved data in supporting_data.
        """
        return f"""Draft a professional point-by-point response to the following reviewer comment.
    Be respectful, concise, and data-driven. If the comment requires a manuscript change,
    state exactly what was changed and where.
    
    Reviewer comment: {reviewer_comment}
    
    Our response/action taken: {action_taken}
    
    Relevant data supporting our position: {supporting_data if supporting_data else "N/A"}
    
    Always begin with 'We thank the reviewer for this insightful comment.' to set a collegial tone.
    
    ⚠️ DATA SAFETY: Only reference published or approved data in your response."""
  • server.py:89-89 (registration)
    The @mcp.tool() decorator registers this function as an MCP tool with the FastMCP server.
    @mcp.tool()
  • server.py:972-972 (registration)
    The tool is listed in the free_tools array inside the list_tools function for discovery.
    ("get_peer_review_response_prompt", "Draft a point-by-point reviewer response"),
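Calling the function directly (with the body reproduced from the listing above, minus the @mcp.tool() decorator so no FastMCP server is needed, and with hypothetical inputs) shows the shape of the returned prompt string:

```python
def get_peer_review_response_prompt(
    reviewer_comment: str,
    action_taken: str,
    supporting_data: str = "",
) -> str:
    # Body copied from the implementation reference above, without the
    # @mcp.tool() decorator, so it can run outside the FastMCP server.
    return f"""Draft a professional point-by-point response to the following reviewer comment.
Be respectful, concise, and data-driven. If the comment requires a manuscript change,
state exactly what was changed and where.

Reviewer comment: {reviewer_comment}

Our response/action taken: {action_taken}

Relevant data supporting our position: {supporting_data if supporting_data else "N/A"}

Always begin with 'We thank the reviewer for this insightful comment.' to set a collegial tone.

⚠️ DATA SAFETY: Only reference published or approved data in your response."""


# Hypothetical inputs; supporting_data is omitted to show the "N/A" fallback.
prompt = get_peer_review_response_prompt(
    reviewer_comment="Figure 3 lacks error bars.",
    action_taken="Error bars (95% CI) were added to Figure 3.",
)
print("N/A" in prompt)  # True: empty supporting_data falls back to "N/A"
```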
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Without annotations, the description bears the full burden. It provides a data safety warning ('Only reference published or approved data in supporting_data') and describes output qualities ('respectful, concise, data-driven'). However, it does not disclose whether the tool has side effects, auth requirements, or rate limits. The warning adds partial transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is three sentences: purpose, output qualities, and data safety. Every sentence adds unique value with no redundancy. It is front-loaded with the core verb and resource, and the structure is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Since an output schema exists (covering the return format) and the tool is a simple prompt generator with three parameters, the description is adequate but minimal. It does not explain how the parameters shape the output or what the generated prompt looks like, leaving some uncertainty about exact usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the tool description must compensate, but it does not explain the parameters. It mentions 'supporting_data' only in the safety note. There is no description of what 'reviewer_comment' or 'action_taken' are or how to format them, leaving the agent to infer meaning from parameter names alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool generates a prompt for drafting a point-by-point reviewer response, distinguishing it from sibling prompt generators like get_rebuttal_disagreement_prompt or get_author_review_request_prompt. The specific verb 'Generate a prompt' and resource 'reviewer response' make the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage when drafting a reviewer response but does not explicitly state when to use this tool versus alternatives or provide exclusions. For example, it could mention that this is for responsive comments, while get_rebuttal_disagreement_prompt handles disagreement rebuttals.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
