by taehojo

batch_modality_screen

Screen multiple genomic variants for specific regulatory effects like expression, splicing, TF binding, or chromatin changes. Use for targeted regulatory studies and modality-specific analysis.

Instructions

Screen variants across specific regulatory modalities.

Efficiently tests multiple variants for specific regulatory effects.

Perfect for: targeted regulatory screens, modality-specific studies.

Example: "Screen 20 variants for splicing effects"

Input Schema

Name      Required  Description                     Default
variants  Yes       —                               —
modality  Yes       Regulatory modality to screen   —

Implementation Reference

  • Core implementation of batch_modality_screen: maps modality to specific output types, processes each variant by calling predict_variant_effect, collects and returns results.
    import sys
    from typing import Any, Dict

    def batch_modality_screen(client, params: Dict[str, Any]) -> Dict[str, Any]:
        """Screen variants across specific modalities."""
        variants_data = params.get('variants', [])
        modality = params.get('modality', 'expression')
    
        # Map modality names to OutputType enums
        modality_map = {
            'expression': [dna_client.OutputType.RNA_SEQ, dna_client.OutputType.CAGE],
            'splicing': [dna_client.OutputType.SPLICE_SITES],
            'tf_binding': [dna_client.OutputType.CHIP_TF],
            'chromatin': [dna_client.OutputType.DNASE, dna_client.OutputType.ATAC]
        }
        modalities = modality_map.get(modality, [dna_client.OutputType.RNA_SEQ, dna_client.OutputType.SPLICE_SITES])
    
        results = []
        for v in variants_data:
            # Create a copy with output_types
            variant_params = v.copy()
            variant_params['output_types'] = modalities
            try:
                result = predict_variant_effect(client, variant_params)
                results.append({
                    'variant': result['variant'],
                    'predictions': result['predictions'],
                    'impact': result['interpretation']['impact_level']
                })
            except Exception as e:
                print(f"Warning: prediction failed for variant {v}: {e}", file=sys.stderr)
                continue
    
        return {
            'modalities_tested': [modality],  # Return string representation
            'total_variants': len(results),
            'results': results
        }
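The modality-to-output-type mapping above, including its silent fallback for unrecognized modality names, can be sketched in isolation. This is a minimal, self-contained illustration: the `OutputType` enum here is a hypothetical stand-in for `dna_client.OutputType`, not the real alphagenome API.

```python
from enum import Enum, auto
from typing import Dict, List

class OutputType(Enum):
    # Stand-in for dna_client.OutputType (illustrative, not the real enum)
    RNA_SEQ = auto()
    CAGE = auto()
    SPLICE_SITES = auto()
    CHIP_TF = auto()
    DNASE = auto()
    ATAC = auto()

MODALITY_MAP: Dict[str, List[OutputType]] = {
    'expression': [OutputType.RNA_SEQ, OutputType.CAGE],
    'splicing': [OutputType.SPLICE_SITES],
    'tf_binding': [OutputType.CHIP_TF],
    'chromatin': [OutputType.DNASE, OutputType.ATAC],
}

def resolve_modality(modality: str) -> List[OutputType]:
    """Mirror the handler's behavior: unknown modality names fall back
    to RNA_SEQ + SPLICE_SITES rather than raising an error."""
    return MODALITY_MAP.get(modality, [OutputType.RNA_SEQ, OutputType.SPLICE_SITES])

print(resolve_modality('splicing'))   # single-element list with SPLICE_SITES
print(resolve_modality('methylation'))  # unmapped name: fallback pair
```

Note the design consequence: because the schema's enum restricts `modality` at the MCP layer, the fallback branch should rarely trigger, but it does mean a typo in a direct Python call is silently screened against the default pair instead of failing fast.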
  • MCP server tool handler: invokes AlphaGenomeClient.batchModalityScreen and formats result as MCP content response.
    case 'batch_modality_screen': {
      const result = await getClient().batchModalityScreen(args);
      return {
        content: [{ type: 'text', text: JSON.stringify(result, null, 2) }],
      };
    }
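A well-formed `args` payload for this handler might look like the following. The coordinates are illustrative placeholders, not real study variants; field names follow the input schema.

```python
import json

# Hypothetical tool-call arguments for batch_modality_screen.
# Variant coordinates below are made up for illustration only.
args = {
    "variants": [
        {"chromosome": "chr17", "position": 43094464, "ref": "G", "alt": "A"},
        {"chromosome": "chr7", "position": 117559590, "ref": "C", "alt": "T"},
    ],
    "modality": "splicing",
}

# Both the request and the handler's response are plain JSON; the handler
# serializes its result with JSON.stringify(result, null, 2).
print(json.dumps(args, indent=2))
```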
  • Input schema defining variants array (with chromosome, position, ref, alt) and modality enum.
    inputSchema: {
      type: 'object',
      properties: {
        variants: {
          type: 'array',
          items: {
            type: 'object',
            properties: {
              chromosome: { type: 'string' },
              position: { type: 'number' },
              ref: { type: 'string' },
              alt: { type: 'string' },
            },
            required: ['chromosome', 'position', 'ref', 'alt'],
          },
          minItems: 1,
        },
        modality: {
          type: 'string',
          enum: ['expression', 'splicing', 'tf_binding', 'chromatin'],
          description: 'Regulatory modality to screen',
        },
      },
      required: ['variants', 'modality'],
    },
  • src/tools.ts:626-660 (registration)
    Tool definition and registration: exports BATCH_MODALITY_SCREEN_TOOL with name, description, and schema for MCP tool listing.
    export const BATCH_MODALITY_SCREEN_TOOL: Tool = {
      name: 'batch_modality_screen',
      description: `Screen variants across specific regulatory modalities.
    
    Efficiently tests multiple variants for specific regulatory effects.
    
    Perfect for: targeted regulatory screens, modality-specific studies.
    
    Example: "Screen 20 variants for splicing effects"`,
      inputSchema: {
        type: 'object',
        properties: {
          variants: {
            type: 'array',
            items: {
              type: 'object',
              properties: {
                chromosome: { type: 'string' },
                position: { type: 'number' },
                ref: { type: 'string' },
                alt: { type: 'string' },
              },
              required: ['chromosome', 'position', 'ref', 'alt'],
            },
            minItems: 1,
          },
          modality: {
            type: 'string',
            enum: ['expression', 'splicing', 'tf_binding', 'chromatin'],
            description: 'Regulatory modality to screen',
          },
        },
        required: ['variants', 'modality'],
      },
    };
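The required fields above can be checked before dispatch. The sketch below is a lightweight, hand-rolled check of the schema's `required`, `minItems`, and `enum` constraints, not a full JSON Schema validator; the function name `validate_args` is an assumption for illustration.

```python
from typing import Any, Dict, List

REQUIRED_VARIANT_KEYS = {"chromosome", "position", "ref", "alt"}
ALLOWED_MODALITIES = {"expression", "splicing", "tf_binding", "chromatin"}

def validate_args(args: Dict[str, Any]) -> List[str]:
    """Return a list of human-readable errors; empty list means valid."""
    errors: List[str] = []
    variants = args.get("variants")
    if not isinstance(variants, list) or len(variants) < 1:
        errors.append("variants must be a non-empty array")  # minItems: 1
    else:
        for i, v in enumerate(variants):
            missing = REQUIRED_VARIANT_KEYS - set(v)
            if missing:
                errors.append(f"variants[{i}] missing {sorted(missing)}")
    if args.get("modality") not in ALLOWED_MODALITIES:
        errors.append("modality must be one of: " + ", ".join(sorted(ALLOWED_MODALITIES)))
    return errors

ok = {"variants": [{"chromosome": "chr1", "position": 100, "ref": "A", "alt": "G"}],
      "modality": "tf_binding"}
print(validate_args(ok))  # empty list: payload satisfies the schema
```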
  • Bridge method in AlphaGenomeClient: calls Python bridge script with 'batch_modality_screen' action.
    async batchModalityScreen(params: any): Promise<any> {
      try {
        return await this.callPythonBridge('batch_modality_screen', params);
      } catch (error) {
        if (error instanceof ApiError) throw error;
        throw new ApiError(`Batch modality screen failed: ${error}`, 500);
      }
    }

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Efficiently tests multiple variants' and 'Perfect for: targeted regulatory screens', which implies batch processing and specific use cases, but fails to disclose critical behavioral traits such as whether this is a read-only or destructive operation, expected runtime, rate limits, authentication needs, or what the output looks like. For a tool with no annotations, this is a significant gap in transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized with four sentences: a purpose statement, an efficiency note, usage guidelines, and an example. Each sentence adds value without redundancy. It's front-loaded with the core purpose. There's minor room for improvement in tighter phrasing, but overall it's efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of batch screening with no annotations and no output schema, the description is incomplete. It lacks details on behavioral aspects (e.g., safety, performance), output format, error handling, or prerequisites. While it covers purpose and usage well, for a tool with 2 parameters and potential regulatory implications, more contextual information is needed to guide an AI agent effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 50% (only 'modality' has a description). The description adds some meaning by implying parameters through context: 'variants' are screened for 'regulatory effects' based on 'modality', and the example mentions 'splicing effects' which aligns with the enum. However, it doesn't explicitly explain the 'variants' parameter structure or provide additional semantics beyond what the schema minimally offers. With moderate schema coverage, the baseline of 3 is appropriate as the description compensates somewhat but not fully.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Screen variants across specific regulatory modalities' and 'Efficiently tests multiple variants for specific regulatory effects.' It specifies the verb ('screen', 'tests'), resource ('variants'), and scope ('regulatory modalities', 'regulatory effects'), making it distinct from siblings like 'batch_pathogenicity_filter' or 'predict_expression_impact'. However, it doesn't explicitly differentiate from all siblings, such as 'batch_score_variants', which might have overlapping functions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: 'Perfect for: targeted regulatory screens, modality-specific studies' and includes an example: 'Screen 20 variants for splicing effects.' This gives practical guidance on its intended use cases. However, it lacks explicit alternatives or exclusions, such as when not to use it compared to sibling tools like 'predict_splice_impact' or 'batch_tissue_comparison'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
