Glama
prashantgupta123

AWS FinOps MCP Server

find_target_groups_with_high_response_time

Identify AWS target groups exceeding response time thresholds to optimize application performance and reduce latency issues.

Instructions

Find target groups with high response times.

Args:
    region_name: AWS region name
    period: Lookback period in days (default: 7)
    response_time_threshold: Response time threshold in seconds (default: 1.0)
    profile_name: AWS profile name (optional)
    role_arn: IAM role ARN to assume (optional)
    access_key: AWS access key ID (optional)
    secret_access_key: AWS secret access key (optional)
    session_token: AWS session token for temporary credentials (optional)

Returns:
    Dictionary with target groups having high response times
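The behavior the docstring implies can be sketched as a threshold filter over per-target-group response-time samples (CloudWatch's `TargetResponseTime` metric, in seconds). This is a hypothetical illustration of the logic, not the server's actual implementation; the function name and the datapoint shape are assumptions.

```python
# Hypothetical sketch of the filtering logic implied by the docstring.
# Samples mimic CloudWatch TargetResponseTime datapoints (seconds).

def find_slow_target_groups(metrics, response_time_threshold=1.0):
    """Return target groups whose average response time exceeds the threshold.

    metrics: dict mapping target-group ARN -> list of response-time samples (s).
    """
    slow = {}
    for arn, samples in metrics.items():
        if not samples:
            continue  # no traffic during the lookback period
        avg = sum(samples) / len(samples)
        if avg > response_time_threshold:
            slow[arn] = {"average_response_time": round(avg, 3)}
    return slow


metrics = {
    "targetgroup/app-a": [0.2, 0.3, 0.25],
    "targetgroup/app-b": [1.4, 2.1, 1.8],
}
print(find_slow_target_groups(metrics, response_time_threshold=1.0))
# only app-b exceeds the 1.0 s threshold
```

With the default threshold of 1.0 s and a 7-day lookback, only groups whose average latency over the window exceeds the threshold would be reported.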

Input Schema

Name                      Required  Description  Default
region_name               No                     us-east-1
period                    No
response_time_threshold   No
profile_name              No
role_arn                  No
access_key                No
secret_access_key         No
session_token             No

Output Schema


No arguments

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool 'Find[s] target groups with high response times' and describes the return as a 'Dictionary with target groups having high response times,' but lacks critical details: it doesn't specify what 'high' means beyond the threshold parameter, whether this is a read-only operation (implied but not stated), how results are structured, or any rate limits or authentication requirements beyond the optional credential parameters. For a tool with 8 parameters and no annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and appropriately sized: a clear purpose statement followed by 'Args' and 'Returns' sections. Each sentence earns its place by explaining parameters and output. However, the 'Args' section is somewhat verbose due to listing all parameters individually, though this is necessary given the 0% schema coverage. It could be more front-loaded with usage context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (8 parameters, no annotations, output schema exists), the description is partially complete. It covers parameters well and notes the return type, but lacks behavioral context (e.g., safety, performance impact) and usage guidelines. Because an output schema exists, the description need not detail return values; overall it is adequate, though with clear gaps for a tool of this scope.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It provides a clear 'Args' section listing all 8 parameters with brief explanations (e.g., 'AWS region name,' 'Lookback period in days'), including defaults for 'period' and 'response_time_threshold.' This adds significant meaning beyond the schema's titles, though it doesn't detail constraints (e.g., valid region formats) or interactions between optional credential parameters. With 0% schema coverage, this is strong but not exhaustive.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
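The credential-parameter interaction the review flags could, for instance, be resolved by precedence. The ordering below (explicit keys, then role assumption, then named profile, then the default chain) is an assumption for illustration only, not documented behavior of this server.

```python
# Hypothetical precedence for the tool's optional credential parameters;
# the actual server may resolve them differently.

def resolve_credential_source(access_key=None, secret_access_key=None,
                              session_token=None, role_arn=None,
                              profile_name=None):
    """Pick one credential source from the tool's optional auth parameters."""
    if access_key and secret_access_key:
        # session_token is only meaningful alongside a temporary key pair
        return "temporary_keys" if session_token else "static_keys"
    if role_arn:
        return "assume_role"    # e.g. via STS AssumeRole
    if profile_name:
        return "named_profile"  # entry in the shared AWS config/credentials file
    return "default_chain"      # env vars, instance profile, etc.


print(resolve_credential_source(role_arn="arn:aws:iam::123456789012:role/ReadOnly"))
# → assume_role
```

Documenting such a precedence in the description would resolve the ambiguity the review notes about how the optional credential parameters interact.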

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Find target groups with high response times.' It specifies the verb ('Find') and resource ('target groups'), and distinguishes itself from sibling tools like 'find_target_groups_with_high_error_rate' by focusing on response time rather than error rate. However, it doesn't explicitly differentiate from other performance analysis tools in the list, such as 'analyze_api_gateway_performance' or 'analyze_rds_performance_insights', which slightly reduces specificity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., AWS permissions), context (e.g., performance troubleshooting), or exclusions (e.g., when to use other tools like 'find_target_groups_with_high_error_rate'). The agent must infer usage from the tool name and sibling list alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/prashantgupta123/aws-pillar-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.