
Dr. QuantMaster MCP Server

by seanshin0214

generate_python_code

Generate Python code for statistical analysis using statsmodels, sklearn, or linearmodels libraries to perform regression and quantitative research tasks.

Instructions

Generate Python code (statsmodels, sklearn)

Input Schema

Name           Required  Description           Default
analysis_type  Yes       Analysis type         -
library        No        Library               -
variables      No        Variable information  -
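
As a concrete illustration of the schema above, a call to this tool might pass arguments like the following (all values are hypothetical; only analysis_type is required):

```python
import json

# Hypothetical example arguments matching the tool's input schema.
# analysis_type is required; library and variables are optional.
example_args = {
    "analysis_type": "ols",
    "library": "statsmodels",
    "variables": {"dependent": "y", "independent": ["x1", "x2", "x3"]},
}
print(json.dumps(example_args, indent=2))
```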

Implementation Reference

  • The handler function implementing the tool logic. It generates Python code templates for statistical analyses (e.g., OLS, panel fixed effects) using libraries like statsmodels and linearmodels based on the analysis_type parameter.
    function handleGeneratePythonCode(args: Record<string, unknown>) {
      const analysisType = args.analysis_type as string;
      const templates: Record<string, string> = {
        ols: `
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    # OLS
    model = smf.ols('y ~ x1 + x2 + x3', data=df).fit()
    print(model.summary())

    # Robust SE
    model_robust = smf.ols('y ~ x1 + x2 + x3', data=df).fit(cov_type='HC3')

    # VIF
    from statsmodels.stats.outliers_influence import variance_inflation_factor
    X = df[['x1', 'x2', 'x3']]
    vif = pd.DataFrame({
        'Variable': X.columns,
        'VIF': [variance_inflation_factor(X.values, i) for i in range(X.shape[1])]
    })
    `,
        panel_fe: `
    import pandas as pd
    from linearmodels import PanelOLS

    # Set index
    df = df.set_index(['id', 'time'])

    # Entity FE
    model_fe = PanelOLS(df['y'], df[['x1', 'x2']], entity_effects=True)
    result_fe = model_fe.fit(cov_type='clustered', cluster_entity=True)

    # Two-way FE
    model_twfe = PanelOLS(df['y'], df[['x1', 'x2']],
                          entity_effects=True, time_effects=True)
    result_twfe = model_twfe.fit(cov_type='clustered', cluster_entity=True)
    `
      };

      return {
        analysis_type: analysisType,
        python_code: templates[analysisType] || "# Analysis template not found"
      };
    }
  • The input schema defining the parameters for the tool: analysis_type (required), library (statsmodels/sklearn/linearmodels), and variables.
    inputSchema: {
      type: "object",
      properties: {
        analysis_type: { type: "string", description: "분석 유형" },
        library: { type: "string", enum: ["statsmodels", "sklearn", "linearmodels"], description: "라이브러리" },
        variables: { type: "object", description: "변수 정보" },
      },
      required: ["analysis_type"],
    },
  • Tool object registration in the exported tools array.
    {
      name: "generate_python_code",
      description: "Python 코드 생성 (statsmodels, sklearn)",
      inputSchema: {
        type: "object",
        properties: {
          analysis_type: { type: "string", description: "분석 유형" },
          library: { type: "string", enum: ["statsmodels", "sklearn", "linearmodels"], description: "라이브러리" },
          variables: { type: "object", description: "변수 정보" },
        },
        required: ["analysis_type"],
      },
    },
  • Switch case in handleToolCall function that dispatches calls to the specific handler.
    case "generate_python_code":
      return handleGeneratePythonCode(args);
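
The handler's core pattern is a dictionary lookup with a placeholder fallback for unknown analysis types. A minimal Python sketch of that pattern (a hypothetical mirror of the handler; the template bodies are abbreviated, not the server's full scripts):

```python
# Minimal sketch of the handler's template-lookup pattern.
# Template bodies are abbreviated; the real handler embeds full scripts.
TEMPLATES = {
    "ols": "# OLS via statsmodels\nmodel = smf.ols('y ~ x1 + x2 + x3', data=df).fit()",
    "panel_fe": "# Panel FE via linearmodels\nmodel = PanelOLS(df['y'], df[['x1', 'x2']], entity_effects=True)",
}

def generate_python_code(args: dict) -> dict:
    analysis_type = args["analysis_type"]
    # Unknown analysis types fall back to a placeholder comment, mirroring
    # `templates[analysisType] || "# Analysis template not found"`.
    return {
        "analysis_type": analysis_type,
        "python_code": TEMPLATES.get(analysis_type, "# Analysis template not found"),
    }

print(generate_python_code({"analysis_type": "ols"})["python_code"].splitlines()[0])
# → # OLS via statsmodels
print(generate_python_code({"analysis_type": "logit"})["python_code"])
# → # Analysis template not found
```

Note that the fallback returns silently rather than raising an error, so a caller passing an unsupported analysis_type receives a comment string instead of a failure.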
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions generating code but doesn't disclose behavioral traits such as whether it creates files, outputs strings, requires authentication, has rate limits, or handles errors. This leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single concise phrase, so it is efficient and front-loaded. It avoids unnecessary verbosity, though it could still work in a few key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, no output schema, no annotations), the description is incomplete. It lacks details on what the generated code does, the output format, error handling, and how the tool relates to its sibling tools, making it inadequate for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema documents parameters well. The description adds minimal value by implying the code generation relates to statsmodels and sklearn, which loosely maps to the 'library' parameter, but doesn't elaborate on parameter meanings or usage beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool generates Python code for statsmodels and sklearn, which clarifies the verb (generate) and resource (Python code). However, it doesn't distinguish from sibling tools like generate_r_code or generate_stata_code beyond mentioning specific libraries, and the purpose remains somewhat vague regarding what kind of code is generated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives is provided. The description mentions libraries but doesn't specify contexts, prerequisites, or exclusions compared to siblings like code_template or write_analysis_file, leaving usage unclear.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
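
Taken together, the rubric's critiques suggest what a fuller registration might look like. The following is a hypothetical sketch in Python dict form; every string is illustrative wording informed by the review, not the server's actual source:

```python
# Hypothetical, fuller tool registration addressing the rubric's gaps:
# behavior disclosure, fallback semantics, and sibling-tool guidance.
improved_tool = {
    "name": "generate_python_code",
    "description": (
        "Generate a Python script template for a statistical analysis "
        "(e.g. 'ols', 'panel_fe'). Returns the code as a string; writes no "
        "files, requires no authentication, and has no rate limits. Unknown "
        "analysis types return a placeholder comment rather than an error. "
        "Prefer generate_r_code or generate_stata_code when the target "
        "language is not Python."
    ),
    "inputSchema": {
        "type": "object",
        "properties": {
            "analysis_type": {
                "type": "string",
                "description": "Analysis type, e.g. 'ols' or 'panel_fe'",
            },
            "library": {
                "type": "string",
                "enum": ["statsmodels", "sklearn", "linearmodels"],
                "description": "Preferred library for the generated code",
            },
            "variables": {
                "type": "object",
                "description": "Variable names to substitute into the template",
            },
        },
        "required": ["analysis_type"],
    },
}
```

A registration along these lines would directly address the Behavior, Completeness, and Usage Guidelines gaps the review identifies.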
