Dr. QuantMaster MCP Server

by seanshin0214

generate_r_code

Generate R code for statistical analysis, diagnostics, and visualization to support quantitative research workflows.

Instructions

R code generation (analysis, diagnostics, visualization)

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| analysis_type | Yes | Analysis type | — |
| variables | No | Variable information | — |
| options | No | Additional options | — |

Implementation Reference

  • The main handler function for the 'generate_r_code' tool. It generates R code templates for various analysis types (OLS, panel FE, DID, etc.) based on the input 'analysis_type'.
    function handleGenerateRCode(args: Record<string, unknown>) {
      const analysisType = args.analysis_type as string;
      const templates: Record<string, string> = {
        ols: `
    # OLS Regression
    library(tidyverse)
    library(fixest)
    library(modelsummary)
    
    # Estimate
    model <- lm(y ~ x1 + x2 + x3, data = df)
    
    # Robust SE
    model_robust <- feols(y ~ x1 + x2 + x3, data = df, vcov = "hetero")
    
    # Diagnostics
    car::vif(model)  # Multicollinearity
    lmtest::bptest(model)  # Heteroscedasticity
    
    # Results table
    modelsummary(list("OLS" = model, "Robust" = model_robust))
    `,
        panel_fe: `
    # Panel Fixed Effects
    library(fixest)
    library(modelsummary)
    
    # Entity FE
    model_fe <- feols(y ~ x1 + x2 | id, data = panel_df, vcov = ~id)
    
    # Entity + Time FE
    model_twfe <- feols(y ~ x1 + x2 | id + year, data = panel_df, vcov = ~id)
    
    # Results
    modelsummary(list("Entity FE" = model_fe, "TWFE" = model_twfe))
    `,
        did: `
    # Difference-in-Differences
    library(fixest)
    library(did)
    
    # Basic DID
    did_model <- feols(y ~ treat:post | id + time, data = df, vcov = ~id)
    
    # Event Study
    es_model <- feols(y ~ i(time, treat, ref = -1) | id + time, data = df, vcov = ~id)
    iplot(es_model)
    
    # Callaway-Sant'Anna (staggered)
    cs_did <- att_gt(yname = "y", tname = "time", idname = "id",
                      gname = "first_treat", data = df)
    aggte(cs_did, type = "dynamic") |> ggdid()
    `
      };
    
      return {
        analysis_type: analysisType,
        r_code: templates[analysisType] || "# Analysis template not found\n# Use search_stats_knowledge for guidance"
      };
    }
  • The tool registration in the exported tools array, including name, description, and input schema for validation.
      name: "generate_r_code",
      description: "R 코드 생성 (분석, 진단, 시각화)",  // "R code generation (analysis, diagnostics, visualization)"
      inputSchema: {
        type: "object",
        properties: {
          analysis_type: { type: "string", description: "분석 유형" },  // "Analysis type"
          variables: { type: "object", description: "변수 정보" },  // "Variable information"
          options: { type: "object", description: "추가 옵션" },  // "Additional options"
        },
        required: ["analysis_type"],
      },
    },
  • The switch case in handleToolCall that routes calls to 'generate_r_code' to the specific handler function.
    case "generate_r_code":
      return handleGenerateRCode(args);
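
Taken together, the pieces above can be exercised as follows. This is a hedged sketch, not the server's actual test code: only the dispatch and fallback logic is reproduced, and the template bodies are trimmed to one-line placeholders.

```typescript
// Minimal re-creation of the handler's dispatch logic for illustration.
// Template bodies are placeholders; the real server embeds full R scripts.
function handleGenerateRCode(args: Record<string, unknown>) {
  const analysisType = args.analysis_type as string;
  const templates: Record<string, string> = {
    ols: "# OLS Regression template",
    panel_fe: "# Panel Fixed Effects template",
    did: "# Difference-in-Differences template",
  };
  return {
    analysis_type: analysisType,
    r_code:
      templates[analysisType] ||
      "# Analysis template not found\n# Use search_stats_knowledge for guidance",
  };
}

// A known type returns its template...
const ok = handleGenerateRCode({ analysis_type: "ols" });
// ...while an unrecognized type falls through to the guidance stub,
// which is how the handler behaves for any value outside its template map.
const miss = handleGenerateRCode({ analysis_type: "logit" });
```

Note that `variables` and `options` are accepted by the schema but never read by the handler, so they cannot affect the generated code.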

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states that the tool generates R code but not how it behaves: whether it creates files, returns code as text, requires specific inputs, or has side effects such as saving outputs. (In the implementation above, the handler only returns a string and writes nothing, but an agent cannot learn that from the description.)

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
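
For comparison, the MCP specification defines optional behavioral annotations that could carry exactly this disclosure. The sketch below is a hypothetical annotated registration, not the server's actual code: the hint names come from the MCP tool-annotations spec, while the values are assumptions drawn from the handler above, which only returns text.

```typescript
// Hypothetical annotated registration (not the server's actual code).
// Hint semantics follow the MCP tool-annotations spec; the values are
// assumptions based on the handler, which only returns a string.
const annotatedTool = {
  name: "generate_r_code",
  description:
    "Generate an R code template for a given analysis type (ols, panel_fe, did). " +
    "Returns { analysis_type, r_code } as text; writes no files, calls no external services.",
  annotations: {
    readOnlyHint: true,     // produces text only, no environment changes
    destructiveHint: false, // nothing is modified or deleted
    idempotentHint: true,   // same input always yields the same template
    openWorldHint: false,   // no network or filesystem access
  },
};
```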

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is very concise, a single Korean phrase, with the core purpose front-loaded. The issue is not length but under-specification: a slightly fuller description would improve clarity without sacrificing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has three parameters, no annotations, no output schema, and performs code generation (a potentially complex operation), the description is incomplete. It does not explain what the tool returns, how to use the parameters effectively, or how the tool behaves, so an agent lacks the context to succeed on a first attempt even though the schema covers the parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the three parameters (analysis_type, variables, options) with basic descriptions. The description adds no additional meaning beyond implying these are used for analysis, diagnosis, and visualization, which aligns with but doesn't enrich the schema details. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
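
One concrete way to let the schema carry more intent is an enum on analysis_type. The sketch below is a suggestion, not the server's actual schema; the enum values are simply the keys the handler's template map supports.

```typescript
// Hypothetical tightened input schema (suggestion only).
// The enum mirrors the template keys the handler dispatches on.
const suggestedInputSchema = {
  type: "object",
  properties: {
    analysis_type: {
      type: "string",
      enum: ["ols", "panel_fe", "did"],
      description: "Analysis type; any other value returns a guidance stub.",
    },
    variables: {
      type: "object",
      description: "Variable information (currently not read by the handler).",
    },
    options: {
      type: "object",
      description: "Additional options (currently not read by the handler).",
    },
  },
  required: ["analysis_type"],
};
```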

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'R 코드 생성 (분석, 진단, 시각화)' translates to 'R code generation (analysis, diagnostics, visualization)', which states a general purpose but not what resources or data the tool operates on. Specifying R distinguishes it from siblings such as generate_python_code, but its exact scope relative to other code-related tools such as visualization_code or table_code remains vague.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. The description implies it's for generating R code, but it doesn't specify contexts, prerequisites, or exclusions, such as when to choose this over generate_python_code or other code-generation siblings like code_template or meta_code.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
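
A revised description along these lines might read as follows. The wording is a suggestion, not the server's actual text; generate_python_code and search_stats_knowledge are sibling tools mentioned elsewhere in this review.

```typescript
// Suggested replacement description (illustration only).
const revisedDescription =
  "Generate a ready-to-run R code template for econometric analysis " +
  "(analysis_type: ols | panel_fe | did). Returns the code as text only; " +
  "no files are written. Prefer generate_python_code for Python output, " +
  "and consult search_stats_knowledge when the analysis type is unsupported.";
```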
