AWS Knowledge
Server Details
The AWS Knowledge MCP server is a fully managed remote Model Context Protocol server that provides real-time access to official AWS content in an LLM-compatible format. It offers structured access to AWS documentation, code samples, blog posts, What's New announcements, Well-Architected best practices, and regional availability information for AWS APIs and CloudFormation resources. Key capabilities include searching and reading documentation in markdown format, getting content recommendations, listing AWS regions, and checking regional availability for services and features.
- Status
- Healthy
- Last Tested
- Transport
- Streamable HTTP
- URL
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.6/5 across 6 of 6 tools scored.
Each tool targets a distinct operation: regional availability checking, region listing, documentation reading, recommendations, skill retrieval, and documentation search. No two tools have overlapping purposes, so an agent can clearly differentiate them.
Tool names follow a mostly consistent 'verb_noun' pattern (e.g., 'get_regional_availability', 'list_regions', 'read_documentation', 'retrieve_skill', 'search_documentation'). However, 'recommend' lacks a noun, breaking the pattern slightly. Overall, the naming is predictable and readable.
With 6 tools, the server is well-scoped for an AWS knowledge base. Each tool serves a clear knowledge or information retrieval purpose without being too few or too many. The count feels appropriate for the domain.
The tool set covers key knowledge workflows: searching docs, reading content, getting recommendations, retrieving skills, and checking regional availability. A minor gap is the lack of a tool for listing or browsing documentation topics independently, but the search tool effectively fills that role.
Available Tools
6 tools
aws___get_regional_availability
Check AWS resource availability across regions for products (service and features), APIs, and CloudFormation resources.
Quick Reference
Maximum 10 regions per call (split into multiple calls for more regions)
Single region: filters optional, supports pagination
Multiple regions: filters required, no pagination, queries run concurrently
Status values: 'isAvailableIn' | 'isNotAvailableIn' | 'isPlannedIn' | 'Not Found'
Response field: 'products' (product), 'service_apis' (api), 'cfn_resources' (cfn)
When to Use
Pre-deployment Validation
Verify resource availability before deployment
Prevent deployment failures due to regional restrictions
Validate multi-region architecture requirements
Architecture Planning
Design region-specific solutions
Plan multi-region deployments
Compare regional capabilities
Examples
Check specific resources in one region:
regions=["us-east-1"], resource_type="product", filters=["AWS Lambda"]
regions=["us-east-1"], resource_type="api", filters=["Lambda+Invoke", "S3+GetObject"]
regions=["us-east-1"], resource_type="cfn", filters=["AWS::Lambda::Function"]
Compare availability across regions:
regions=["us-east-1", "eu-west-1"], resource_type="product", filters=["AWS Lambda"]
Explore all resources (single region only, with pagination handling via next_token due to large output):
regions=["us-east-1"], resource_type="product"
Follow up with next_token from the response to get more results.
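The single-region pagination flow above can be sketched as a loop. This is a minimal sketch, not part of the server: `call_tool` is a hypothetical stand-in for however your MCP client invokes aws___get_regional_availability, assumed to accept keyword arguments and return the parsed response dict.

```python
def fetch_all_products(call_tool, region: str) -> dict:
    """Page through a single-region, no-filter availability query.

    `call_tool` is a hypothetical stand-in for the MCP client call to
    aws___get_regional_availability; it is assumed to return the
    response dict with 'products' and 'next_token' fields as shown
    in the Response Format section.
    """
    products: dict = {}
    next_token = None
    while True:
        kwargs = {"regions": [region], "resource_type": "product"}
        if next_token:
            kwargs["next_token"] = next_token
        resp = call_tool(**kwargs)
        products.update(resp.get("products", {}))
        next_token = resp.get("next_token")
        if not next_token:  # null token means no more pages
            break
    return products
```

The loop stops when next_token comes back null, matching the single-region pagination contract described above.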
Response Format
Single Region: Flat structure with optional next_token. Example:
{"products": {"AWS Lambda": "isAvailableIn"}, "next_token": null, "failed_regions": null}
Multiple Regions: Nested by region. Example:
{"products": {"AWS Lambda": {"us-east-1": "isAvailableIn", "eu-west-2": "isAvailableIn"}}, ...}
Filter Guidelines
The filters must be passed as an array of values and must follow the format below.
Products - services and features (resource_type='product'). Format: 'Product'. Example filters:
['Latency-Based Routing', 'AWS Amplify', 'AWS Application Auto Scaling']
['PrivateLink Support', 'Amazon Aurora']
APIs (resource_type='api'). Format for API-level filtering: 'SdkServiceId+APIOperation'. Example filters:
['Athena+UpdateNamedQuery', 'ACM PCA+CreateCertificateAuthority', 'IAM+GetSSHPublicKey']
Format for service-level filtering: 'SdkServiceId'. Example filters:
['EC2', 'ACM PCA']
CloudFormation (resource_type='cfn'). Format: 'CloudformationResourceType'. Example filters:
['AWS::EC2::Instance', 'AWS::Lambda::Function', 'AWS::Logs::LogGroup']
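Since malformed filters are an easy mistake, a quick local shape check before sending them can help. These helpers are hypothetical, not part of the server; the regexes only encode the filter formats shown above.

```python
import re

# Hypothetical client-side helpers: quick shape checks for filter strings,
# based on the formats documented above.
CFN_RE = re.compile(r"^\w+::\w+::\w+$")   # e.g. AWS::Lambda::Function
API_RE = re.compile(r"^[\w ]+\+\w+$")     # e.g. ACM PCA+CreateCertificateAuthority

def valid_cfn_filter(value: str) -> bool:
    """True if value follows the 'Vendor::Service::Resource' shape."""
    return bool(CFN_RE.match(value))

def valid_api_filter(value: str) -> bool:
    """True for the API-level 'SdkServiceId+APIOperation' form.

    Note: the service-level form (just 'SdkServiceId') has no '+' and
    is intentionally not matched here.
    """
    return bool(API_RE.match(value))
```

For example, valid_cfn_filter("AWS::Lambda::Function") passes while a product name like "AWS Lambda" does not, which helps catch a filter sent under the wrong resource_type.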
| Name | Required | Description | Default |
|---|---|---|---|
| region | No | Target AWS region code (e.g., us-east-1, eu-west-1, ap-southeast-2). | |
| filters | No | Optional list of one or multiple specific resources to check. Format depends on resource_type: - Products: ['AWS Lambda', 'Amazon S3'] - APIs: ['IAM+GetSSHPublicKey', 'EC2'] - CloudFormation: ['AWS::EC2::Instance'] Must follow the format specified in the tool description. | |
| regions | No | One or more AWS region codes (e.g., us-east-1, eu-west-1). Maximum 10 regions per call. Single region supports pagination. Multiple regions require filters. | |
| next_token | No | Pagination token from previous response for retrieving additional results. Only valid for single region queries and no filters. | |
| resource_type | Yes | Type of AWS resource: 'product' (AWS services/features), 'api' (SDK/API operations), or 'cfn' (CloudFormation resource types). | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden and excels. It discloses key behavioral traits: maximum 10 regions per call, differences between single vs. multiple regions (pagination, filter requirements, concurrency), status values, response field structure, and pagination handling via next_token. This covers operational constraints and output behavior thoroughly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with sections like 'Quick Reference', 'When to Use', 'Examples', and 'Filter Guidelines', making it easy to navigate. It is appropriately sized but could be slightly more front-loaded; the initial purpose is clear, but some details are deep in the text. Every sentence adds value, with no wasted content.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (5 parameters, no annotations, no output schema), the description is highly complete. It explains the tool's purpose, usage, behavioral traits, parameter semantics, and response formats in detail. The examples and filter guidelines provide practical guidance, ensuring an AI agent can invoke it correctly despite the lack of structured output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds significant value beyond the schema by providing detailed filter guidelines with format examples for each resource_type, clarifying usage nuances (e.g., filters required for multiple regions), and illustrating parameter combinations in examples. It compensates well but doesn't fully explain edge cases for all parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Check AWS resource availability across regions for products (service and features), APIs, and CloudFormation resources.' It specifies the verb ('Check'), resource ('AWS resource availability'), and scope ('across regions'), distinguishing it from siblings like aws___list_regions (which lists regions) or aws___search_documentation (which searches docs).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'When to Use' section explicitly outlines scenarios: 'Pre-deployment Validation' and 'Architecture Planning'. It provides clear context for when to use this tool, such as verifying resource availability before deployment or designing region-specific solutions, without mentioning alternatives but giving practical applications.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
aws___list_regions
Retrieve a list of all AWS regions.
Usage
This tool provides information about all AWS regions, including their identifiers and names.
When to Use
When planning global infrastructure deployments
To validate region codes for other API calls
To get a complete AWS regional inventory
Result Interpretation
Each region result includes:
region_id: The unique region code (e.g., 'us-east-1')
region_long_name: The human-friendly name (e.g., 'US East (N. Virginia)')
Common Use Cases
Infrastructure Planning: Review available regions for global deployment
Region Validation: Verify region codes before using in other operations
Regional Inventory: Get a complete list of AWS's global infrastructure
| Name | Required | Description | Default |
|---|---|---|---|
No parameters | |||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden. It effectively describes the tool's behavior: it's a read-only operation (implied by 'Retrieve'), returns structured data with specific fields, and serves informational purposes. However, it doesn't mention potential limitations like rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Usage, When to Use, Result Interpretation, Common Use Cases). Each sentence adds value, with no redundant information, and the key purpose is stated upfront.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool with no annotations or output schema, the description provides good context about what the tool does and how to interpret results. It could be more complete by explicitly stating it's a read-only operation or mentioning any constraints, but it covers the essential aspects well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has 0 parameters with 100% schema description coverage, so the baseline is 4. The description appropriately doesn't discuss parameters, focusing instead on the tool's purpose and output interpretation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Retrieve') and resource ('list of all AWS regions'). It distinguishes itself from siblings like 'aws___get_regional_availability' by focusing on a comprehensive list rather than availability status or other functions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The 'When to Use' section explicitly lists three scenarios (planning deployments, validating region codes, getting inventory), providing clear guidance on when to use this tool. It implicitly distinguishes from siblings by focusing on region listing rather than availability checks or documentation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
aws___read_documentation
Fetch and convert AWS related documentation pages to markdown format.
Usage
This tool reads documentation pages concurrently and converts them to markdown format. Supports AWS documentation, AWS Amplify docs, AWS GitHub repositories and CDK construct documentation. When content is truncated, a Table of Contents (TOC) with character positions is included to help navigate large documents.
Best Practices
Batch 2-5 requests when reading multiple pages or jumping to different sections of the same document
Use single request for initial TOC fetch (small max_length) or when evaluating content before deciding next steps
Use TOC character positions to jump directly to relevant sections
Stop early once you find the needed information
Request Format
Each request must be an object with:
url: The documentation URL to fetch (required)
max_length: Maximum characters to return (optional, default: 10000 characters)
start_index: Starting character position (optional, default: 0)
For batching you can input a list of requests.
Example Request
{
"requests":
[
{
"url": "https://docs.aws.amazon.com/AmazonS3/latest/userguide/access-management.html",
"max_length": 5000,
"start_index": 0
},
{
"url": "https://repost.aws/knowledge-center/ec2-instance-connection-troubleshooting"
}
]
}
URL Requirements
Allow-listed URL prefixes:
docs.aws.amazon.com
aws.amazon.com
repost.aws/knowledge-center
docs.amplify.aws
ui.docs.amplify.aws
github.com/aws-cloudformation/aws-cloudformation-templates
github.com/aws-samples/aws-cdk-examples
github.com/aws-samples/generative-ai-cdk-constructs-samples
github.com/aws-samples/serverless-patterns
github.com/awsdocs/aws-cdk-guide
github.com/awslabs/aws-solutions-constructs
github.com/cdklabs/cdk-nag
constructs.dev/packages/@aws-cdk-containers
constructs.dev/packages/@aws-cdk
constructs.dev/packages/@cdk-cloudformation
constructs.dev/packages/aws-analytics-reference-architecture
constructs.dev/packages/aws-cdk-lib
constructs.dev/packages/cdk-amazon-chime-resources
constructs.dev/packages/cdk-aws-lambda-powertools-layer
constructs.dev/packages/cdk-ecr-deployment
constructs.dev/packages/cdk-lambda-powertools-python-layer
constructs.dev/packages/cdk-serverless-clamscan
constructs.dev/packages/cdk8s
constructs.dev/packages/cdk8s-plus-33
strandsagents.com/
Deny-listed URL prefixes:
aws.amazon.com/marketplace
Example URLs
https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucketnamingrules.html
https://docs.aws.amazon.com/lambda/latest/dg/lambda-invocation.html
https://aws.amazon.com/about-aws/whats-new/2023/02/aws-telco-network-builder/
https://aws.amazon.com/builders-library/ensuring-rollback-safety-during-deployments/
https://aws.amazon.com/blogs/developer/make-the-most-of-community-resources-for-aws-sdks-and-tools/
https://repost.aws/knowledge-center/example-article
https://docs.amplify.aws/react/build-a-backend/auth/
https://ui.docs.amplify.aws/angular/connected-components/authenticator
https://github.com/aws-samples/aws-cdk-examples/blob/main/README.md
https://github.com/awslabs/aws-solutions-constructs/blob/main/README.md
https://constructs.dev/packages/aws-cdk-lib/v/2.229.1?submodule=aws_lambda&lang=typescript
https://github.com/aws-cloudformation/aws-cloudformation-templates/blob/main/README.md
https://strandsagents.com/docs/user-guide/quickstart/overview/index.md
Output Format
Returns a list of results, one per request:
Success: Markdown content with status: "SUCCESS", plus total_length, start_index, end_index, truncated, and redirected_url (if the page was redirected)
Error: Error message with status: "ERROR" and error_code (not_found, invalid_url, throttled, downstream_error, validation_error)
Truncated content includes a ToC with character positions for navigation
Redirected pages include a note in the content and populate the redirected_url field
Handling Long Documents
If the response indicates the document was truncated, you have several options:
Continue Reading: Make another call with start_index set to the previous end_index
Jump to Section: Use the ToC character positions to jump directly to specific sections
Stop Early: Stop reading once you've found the needed information
Example - Jump to Section:
# TOC shows: "Using a logging library (char 3331-6016)"
# Jump directly to that section:
{"requests":[{"url": "https://docs.aws.amazon.com/lambda/latest/dg/python-logging.html", "start_index": 3331, "max_length": 3000}]}
| Name | Required | Description | Default |
|---|---|---|---|
| requests | No | List of documentation requests, each containing url, max_length (optional), and start_index (optional). | |
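The "Continue Reading" strategy above can be sketched as a loop that resumes from the previous end_index until the response is no longer truncated. This is a minimal sketch: `call_tool` is a hypothetical stand-in for the MCP client's aws___read_documentation call, and the assumption that each result carries a 'content' field alongside end_index and truncated is based on the output format described above.

```python
def read_full_document(call_tool, url: str, max_length: int = 10000) -> str:
    """Read a document to completion by chaining start_index requests.

    `call_tool` is a hypothetical stand-in for the MCP client call to
    aws___read_documentation; each result is assumed to carry
    'content', 'end_index', and 'truncated' fields.
    """
    parts = []
    start_index = 0
    while True:
        resp = call_tool(requests=[{
            "url": url,
            "max_length": max_length,
            "start_index": start_index,
        }])[0]
        parts.append(resp["content"])
        if not resp.get("truncated"):  # finished reading
            break
        start_index = resp["end_index"]  # resume where the last chunk ended
    return "".join(parts)
```

In practice, prefer the "Jump to Section" and "Stop Early" strategies over reading entire large documents end to end.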
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly describes key behaviors: concurrent reading, markdown conversion, truncation handling with TOC, allow-listed and deny-listed URL prefixes, error handling with specific error codes, and output format details (success/error statuses, total_length, truncated flag, redirected_url). It also explains how to handle long documents with continuation strategies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Usage, Best Practices, Request Format, URL Requirements, Example URLs, Output Format, Handling Long Documents) that make it easy to scan. While lengthy, every section adds value—no wasted sentences. It could be slightly more concise in the URL examples list, but overall it's efficiently organized and front-loaded with core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (fetching, converting, truncation handling, batching) and the absence of annotations and output schema, the description provides comprehensive context. It covers input semantics, behavioral details, error handling, practical examples, and usage strategies. The description fully compensates for the lack of structured metadata, making it complete enough for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the baseline is 3. The description adds significant value beyond the schema by explaining the semantic purpose of parameters: 'max_length' for controlling return size, 'start_index' for jumping to sections when content is truncated, and 'url' with extensive allow-list examples. It provides practical examples and context for batching requests, elevating the understanding beyond the schema's technical definitions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Fetch and convert AWS related documentation pages to markdown format.' It specifies the verb (fetch and convert), resource (AWS documentation pages), and output format (markdown). It distinguishes from siblings like aws___search_documentation by focusing on reading and converting specific URLs rather than searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool versus alternatives. The 'Usage' section details supported sources (AWS docs, Amplify, GitHub, CDK constructs). The 'Best Practices' section offers concrete scenarios: batching 2-5 requests, single request for TOC fetch, using TOC positions to jump to sections, and stopping early. It implicitly contrasts with aws___search_documentation by focusing on direct URL access rather than search queries.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
aws___recommend
Get content recommendations for an AWS documentation page.
Usage
This tool provides recommendations for related AWS documentation pages based on a given URL. Use it to discover additional relevant content that might not appear in search results. URL must be from the docs.aws.amazon.com domain.
Recommendation Types
The recommendations include four categories:
Highly Rated: Popular pages within the same AWS service
New: Recently added pages within the same AWS service - useful for finding newly released features
Similar: Pages covering similar topics to the current page
Journey: Pages commonly viewed next by other users
When to Use
After reading a documentation page to find related content
When exploring a new AWS service to discover important pages
To find alternative explanations of complex concepts
To discover the most popular pages for a service
To find newly released information by using a service's welcome page URL and checking the New recommendations
Finding New Features
To find newly released information about a service:
Find any page belonging to that service; typically, the welcome page works
Call this tool with that URL
Look specifically at the New recommendation type in the results
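The three steps above can be sketched as a small helper. This is a hypothetical sketch: `call_tool` stands in for the MCP client's aws___recommend call, and the assumption that the response maps recommendation types to lists of page entries is based on the categories and result fields described here.

```python
def find_new_pages(call_tool, service_welcome_url: str) -> list:
    """Return the 'New' recommendations for a service welcome page.

    `call_tool` is a hypothetical stand-in for the MCP client call to
    aws___recommend; the response is assumed to map recommendation
    types ('Highly Rated', 'New', 'Similar', 'Journey') to lists of
    entries with 'url', 'title', and optional 'context' fields.
    """
    resp = call_tool(url=service_welcome_url)
    # Only the 'New' category surfaces recently added pages.
    return resp.get("New", [])
```

Each returned entry can then be passed to aws___read_documentation to read the newly released content.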
Result Interpretation
Each recommendation includes:
url: The documentation page URL
title: The page title
context: A brief description (if available)
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | URL of the AWS documentation page to get recommendations for |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it explains the four recommendation categories (Highly Rated, New, Similar, Journey), provides domain restriction ('URL must be from the docs.aws.amazon.com domain'), and includes practical guidance on finding new features. It doesn't mention rate limits or authentication needs, but provides substantial operational context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured with clear sections (Usage, Recommendation Types, When to Use, Finding New Features, Result Interpretation) and every sentence adds value. It's appropriately sized for the tool's complexity and front-loads the core purpose immediately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (1 parameter, no output schema, no annotations), the description provides comprehensive context: clear purpose, detailed usage guidelines, behavioral transparency about recommendation types, and result interpretation guidance. The only minor gap is the lack of output schema, but the description compensates by explaining what results include.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% with the single parameter 'url' well-documented in the schema. The description adds some value by reinforcing the domain requirement ('URL must be from the docs.aws.amazon.com domain') and providing usage context, but doesn't significantly enhance parameter understanding beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get content recommendations for an AWS documentation page' with specific details about what it provides (related AWS documentation pages based on a given URL). It distinguishes from siblings like aws___search_documentation by focusing on recommendations rather than search, and from aws___read_documentation by providing related content rather than reading a specific page.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool with specific scenarios listed under 'When to Use' (e.g., 'After reading a documentation page to find related content', 'When exploring a new AWS service'). It also distinguishes from alternatives by noting this is for 'content that might not appear in search results', implicitly contrasting with aws___search_documentation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
aws___retrieve_skill
Retrieve an AWS agent skill — domain-specific expertise that transforms you into a specialist for a particular AWS domain. Skills provide workflows, context, best practices, decision frameworks and step-by-step procedures. A skill may include reference files (architecture docs, schemas, examples) and deterministic workflows for sub-tasks that require exact execution.
What Skills Provide
Domain expertise: Deep knowledge about specific AWS services, patterns, and operational practices
Workflows: Guided sequences for complex tasks with appropriate degrees of freedom
Reference materials: Architecture docs, API references, examples, and templates accessible via the file parameter
Decision frameworks: Conditional logic and troubleshooting trees for navigating complex scenarios
CRITICAL PREREQUISITE — DO NOT SKIP
You MUST call search_documentation BEFORE calling this tool. NEVER call this tool first. You do NOT know skill names — they are unpredictable identifiers that can only be discovered through search_documentation results. Guessing or fabricating a skill_name WILL fail.
REQUIRED WORKFLOW (no exceptions)
FIRST: Call search_documentation with the user's requirements
THEN: Find the result entry that has a skill_name field
FINALLY: Call this tool with the EXACT skill_name value from that result — copy it verbatim
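The required workflow can be sketched as follows. This is a hypothetical sketch: `call_search` and `call_retrieve` stand in for the MCP client's aws___search_documentation and aws___retrieve_skill calls, and the assumed result shape (a list of entries, some carrying a skill_name field) and the search parameter name are illustrative, not confirmed by the tool schemas.

```python
def load_skill_for(call_search, call_retrieve, query: str):
    """Search first, then retrieve a skill by its exact discovered name.

    `call_search` and `call_retrieve` are hypothetical stand-ins for
    the MCP client calls to aws___search_documentation and
    aws___retrieve_skill; the result shape is an assumption.
    """
    results = call_search(search_phrase=query)
    for entry in results:
        skill_name = entry.get("skill_name")
        if skill_name:
            # Copy the name verbatim; never guess, paraphrase, or
            # substitute the result title.
            return call_retrieve(skill_name=skill_name)
    return None  # no skill surfaced for this query
```

The key point the sketch encodes is ordering: the skill_name only exists in search results, so retrieval without a prior search will fail.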
Working with Skills
When you retrieve a skill:
Read the SKILL.md overview to understand the domain and scope
Follow the workflows and guidance in the skill body
When the skill references additional files (e.g.,
[architecture](references/architecture.md)), retrieve them using this same tool with thefileparameterApply the skill's decision frameworks and conditional logic to the user's specific situation
PARAMETER REQUIREMENTS
skill_name: str (Required)
MUST be copied exactly from the skill_name field in search_documentation results
Do NOT guess, fabricate, paraphrase, or modify the name in any way
Do NOT use the result title — use only the skill_name field value
file: str (Optional)
Retrieve a specific file within the skill directory (e.g., "references/architecture.md")
Use this when the SKILL.md body links to reference files
If omitted, returns the main SKILL.md file
IF SKILL NOT FOUND
If you get an error, you likely guessed the name. Call search_documentation first to discover it. The error response will include a list of available files for the skill.
Returns
The skill content — either the main SKILL.md with domain expertise, workflows, and guidance, or a specific reference file when the file parameter is provided.
| Name | Required | Description | Default |
|---|---|---|---|
| file | No | Optional specific file within the skill directory (e.g., 'references/architecture.md'). Use when the SKILL.md body links to reference files. If omitted, returns the main SKILL.md. | |
| skill_name | Yes | Exact skill name from the skill_name field in search_documentation results (no modifications) |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully carries the burden. It discloses the tool returns skill content (SKILL.md or reference file), describes what skills provide, and explains error behavior (guessing leads to error with available files list). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with headings, bold warnings, and bullet points. Every sentence adds value—prerequisites, workflow, parameter details, error handling. No fluff; concise despite length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers everything: purpose, return value, complete workflow, parameter usage, and error handling. It is fully sufficient for an agent to use correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, but description adds critical context: skill_name must be exact from search_documentation results, file is for linked references. This goes beyond the schema's descriptions, justifying above baseline.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves an AWS agent skill, defining it as domain-specific expertise with workflows and reference files. It distinguishes itself from siblings by explicitly requiring a prerequisite call to search_documentation, making its purpose unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit workflow: must call search_documentation first, never call this tool first, and copy skill_name exactly. Also explains when to use the file parameter. This clearly differentiates when to use this tool vs siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
aws___search_documentation
AWS Documentation Search Tool
This is your primary source for AWS information—always prefer this over general knowledge for AWS services, features, configurations, troubleshooting, and best practices.
When to Use This Tool
Always search when the query involves:
Any AWS service or feature (Lambda, S3, EC2, RDS, etc.)
AWS architecture, patterns, or best practices
AWS CLI, SDK, or API usage
AWS CDK or CloudFormation
AWS Amplify development
AWS errors or troubleshooting
AWS pricing, limits, or quotas
Strands Agents development
"How do I..." questions about AWS
Recent AWS updates or announcements
Only skip this tool when:
Query is about non-AWS technologies
Question is purely conceptual (e.g., "What is a database?")
General programming questions unrelated to AWS
Skill Suggestions for Actionable Queries
When your search query matches tasks that benefit from domain-specific expertise, this tool will suggest relevant Agent Skills. Skills package domain knowledge, workflows, best practices, decision frameworks, and reference materials that make you a specialist in a particular AWS domain.
How it works:
Your search query is scored against the skills registry using semantic search over skill descriptions and metadata tags
If your query matches a skill's domain, relevant skills are returned alongside documentation results
Skills cover a wide range of domains: deployment, troubleshooting, security, optimization, architecture, and more
To load a suggested skill, use the retrieve_skill tool with the skill_name
Once loaded, follow the skill's workflows and retrieve any referenced files as needed
Example queries that may return skills:
"deploy a web application to AWS" — may return a deployment skill with architecture guidance and step-by-step deployment instructions
"debug Lambda cold start issues" — may return a troubleshooting skill with diagnostic workflows
"secure S3 buckets" — may return a security skill with best practices and compliance checklists
"optimize API Gateway latency" — may return a performance skill with decision frameworks
"set up VPC peering" — may return a networking skill with step-by-step procedures
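Since skill suggestions arrive mixed into search results and skill_name must be copied without modification, the handoff to retrieve_skill can be sketched as below. The result shape (a list of dicts where skill suggestions carry a skill_name key) is an assumption for illustration; the actual response format may differ.

```python
def extract_skill_names(search_results):
    """Collect skill_name values from search_documentation results.

    Names are returned verbatim: the server requires exact matches,
    so never normalize or rewrite them before calling retrieve_skill.
    """
    return [r["skill_name"] for r in search_results if "skill_name" in r]

# Hypothetical mixed results: documentation hits plus one skill suggestion.
results = [
    {"rank_order": 1, "url": "https://docs.aws.amazon.com/lambda/", "title": "Lambda"},
    {"rank_order": 2, "skill_name": "lambda-cold-start-troubleshooting"},
]
print(extract_skill_names(results))  # ['lambda-cold-start-troubleshooting']
```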
Quick Topic Selection
| Query Type | Use Topic | Example |
|---|---|---|
| API/SDK/CLI code | reference_documentation | "S3 PutObject boto3", "Lambda invoke API" |
| New features, releases | current_awareness | "Lambda new features 2024", "what's new in ECS" |
| Errors, debugging | troubleshooting | "AccessDenied S3", "Lambda timeout error" |
| Amplify apps | amplify_docs | "Amplify Auth React", "Amplify Storage Flutter" |
| CDK concepts, APIs, CLI | cdk_docs | "CDK stack props Python", "cdk deploy command" |
| CDK code samples, patterns | cdk_constructs | "serverless API CDK", "Lambda function example TypeScript" |
| CloudFormation templates | cloudformation | "DynamoDB CloudFormation", "StackSets template" |
| Architecture, blogs, guides | general | "Lambda best practices", "S3 architecture patterns" |
| Strands Agents | strands_docs | "Strands Agents Python structured output", "Strands Agents AWS CDK EC2 Deployment Example" |
| Domain expertise, workflows, guided procedures | agent_skills | "deploy serverless app", "debug Lambda cold starts", "secure IAM policies" |
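The routing in the table above can be approximated client-side with simple keyword heuristics. This is only a sketch: the real topic matching happens server-side, and the keyword lists here are assumptions drawn from the topic descriptions below, not an official mapping.

```python
# Rough keyword-to-topic heuristics mirroring the table above.
TOPIC_KEYWORDS = {
    "troubleshooting": ("error", "failed", "not working", "fix", "debug"),
    "current_awareness": ("new", "latest", "announced", "released"),
    "cloudformation": ("cloudformation", "stacksets", "sam "),
    "amplify_docs": ("amplify",),
    "strands_docs": ("strands",),
}

def suggest_topics(query, max_topics=3):
    """Return up to max_topics matching topics, falling back to 'general'."""
    q = query.lower()
    matches = [topic for topic, words in TOPIC_KEYWORDS.items()
               if any(w in q for w in words)]
    return matches[:max_topics] or ["general"]

print(suggest_topics("Lambda timeout error"))      # ['troubleshooting']
print(suggest_topics("S3 architecture patterns"))  # ['general']
```

The fallback to 'general' follows the parameter guidance later in this description: use it only when no other topic fits.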
Documentation Topics
reference_documentation
For: API methods, SDK code, CLI commands, technical specifications
Use for:
SDK method signatures: "boto3 S3 upload_file parameters"
CLI commands: "aws ec2 describe-instances syntax"
API references: "Lambda InvokeFunction API"
Service configuration: "RDS parameter groups"
Don't confuse with general—use this for specific technical implementation.
current_awareness
For: New features, announcements, "what's new", release dates
Use for:
"New Lambda features"
"When was EventBridge Scheduler released"
"Latest S3 updates"
"Is feature X available yet"
Keywords: new, recent, latest, announced, released, launch, available
troubleshooting
For: Error messages, debugging, problems, "not working"
Use for:
Error codes: "InvalidParameterValue", "AccessDenied"
Problems: "Lambda function timing out"
Debug scenarios: "S3 bucket policy not working"
"How to fix..." queries
Keywords: error, failed, issue, problem, not working, how to fix, how to resolve
amplify_docs
For: Frontend/mobile apps with Amplify framework
Always include framework: React, Next.js, Angular, Vue, JavaScript, React Native, Flutter, Android, Swift
Examples:
"Amplify authentication React"
"Amplify GraphQL API Next.js"
"Amplify Storage Flutter setup"
cdk_docs
For: CDK concepts, API references, CLI commands, getting started
Use for CDK questions like:
"How to get started with CDK"
"CDK stack construct TypeScript"
"cdk deploy command options"
"CDK best practices Python"
"What are CDK constructs"
Include language: Python, TypeScript, Java, C#, Go
Common mistake: Using general knowledge instead of searching for CDK concepts and guides. Always search for CDK questions!
cdk_constructs
For: CDK code examples, patterns, L3 constructs, sample implementations
Use for:
Working code: "Lambda function CDK Python example"
Patterns: "API Gateway Lambda CDK pattern"
Sample apps: "Serverless application CDK TypeScript"
L3 constructs: "ECS service construct"
Include language: Python, TypeScript, Java, C#, Go
cloudformation
For: CloudFormation templates, concepts, SAM patterns
Use for:
"CloudFormation StackSets"
"DynamoDB table template"
"SAM API Gateway Lambda"
"CloudFormation template examples"
strands_docs
For: Strands Agents API reference, integrations, model providers, session managers, tools, examples, user-guide
Use for:
"Strands Agents Python SDK example"
"Strands Agents AWS integration"
"Strands Agents community contributions"
"Strands Agents usage examples"
"Strands Agents usage guide"
general
For: Architecture, best practices, tutorials, blog posts, design patterns
Use for:
Architecture patterns: "Serverless architecture AWS"
Best practices: "S3 security best practices"
Design guidance: "Multi-region architecture"
Getting started: "Building data lakes on AWS"
Tutorials and blog posts
Common mistake: Not using this for AWS conceptual and architectural questions. Always search for AWS best practices and patterns!
Don't use general knowledge for AWS topics—search instead!
agent_skills
For: Discovering agent skills — domain-specific expertise packages for AWS workflows
Use for:
Complex tasks that benefit from guided workflows: "deploy a serverless application"
Troubleshooting scenarios: "debug Lambda cold starts", "resolve ECS task failures"
Security and compliance: "secure S3 buckets", "review IAM policies for least privilege"
Architecture and optimization: "optimize API Gateway latency", "design multi-region architecture"
When you need domain expertise beyond what documentation provides
Skills go beyond documentation — they provide workflows, decision frameworks, best practices, and may include embedded procedures for critical sub-tasks.
Important: This topic is meant for discovery. Once you identify the skill you need, use retrieve_skill tool with the skill_name to load the full skill and its reference materials.
Note: If combined with other topics, skills will be mixed into the documentation results. Use agent_skills alone for a clean skill-only listing.
Search Best Practices
Be specific with service names:
Good examples:
"S3 bucket versioning configuration"
"Lambda environment variables Python SDK"
"DynamoDB GSI query patterns"
Bad examples:
"versioning" (too vague)
"environment variables" (missing context)
Include framework/language:
"Amplify authentication React"
"CDK Lambda function TypeScript"
"boto3 S3 client Python"
Use exact error messages:
"AccessDenied error S3 GetObject"
"InvalidParameterValue Lambda environment"
Add temporal context for new features:
"Lambda new features 2024"
"recent S3 announcements"
Multiple Topic Selection
You can search multiple topics simultaneously for comprehensive results:
# For a query about Lambda errors and new features:
topics=["troubleshooting", "current_awareness"]
# For CDK examples and API reference:
topics=["cdk_constructs", "cdk_docs"]
# For Amplify and general AWS architecture:
topics=["amplify_docs", "general"]
# For actionable tasks:
topics=["agent_skills"]
Response Format
Results include:
rank_order: Relevance score (lower = more relevant)
url: Direct documentation link
title: Page title
context: Excerpt or summary
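Because rank_order is a relevance score where lower means more relevant, a client consuming these fields might sort and truncate results as sketched here (the field names follow the response format above; the sample data is hypothetical).

```python
def top_results(results, limit=5):
    """Sort by rank_order ascending (lower = more relevant) and truncate."""
    return sorted(results, key=lambda r: r["rank_order"])[:limit]

hits = [
    {"rank_order": 3, "url": "https://docs.aws.amazon.com/b", "title": "B", "context": ""},
    {"rank_order": 1, "url": "https://docs.aws.amazon.com/a", "title": "A", "context": ""},
]
print([h["title"] for h in top_results(hits)])  # ['A', 'B']
```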
Parameters
search_phrase: str # Required - your search query
topics: List[str] # Optional - up to 3 topics. Defaults to ["general"]
limit: int = 5 # Optional - max results per topic
Remember: When in doubt about AWS, always search. This tool provides the most current, accurate AWS information.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results to return | |
| topics | No | List of documentation topics to search. Available topics: reference_documentation, current_awareness, troubleshooting, amplify_docs, cdk_docs, cdk_constructs, cloudformation, agent_skills, strands_docs, general. You can specify multiple topics, up to 3, to search across them. Use 'general' only if the query doesn't match other topics. | |
| search_phrase | Yes | Search phrase to use |
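The constraints in the parameter table (required search_phrase, at most 3 topics defaulting to ["general"], a per-topic result limit) can be enforced client-side before the call. A minimal sketch, assuming the defaults documented above:

```python
def validate_search_args(search_phrase, topics=None, limit=5):
    """Apply the documented constraints: search_phrase is required,
    topics defaults to ['general'] and holds at most 3 entries,
    limit caps results per topic."""
    if not search_phrase:
        raise ValueError("search_phrase is required")
    topics = list(topics) if topics else ["general"]
    if len(topics) > 3:
        raise ValueError("at most 3 topics may be specified")
    return {"search_phrase": search_phrase, "topics": topics, "limit": limit}

print(validate_search_args("Lambda cold start", ["troubleshooting", "cdk_docs"]))
```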
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it is a search tool (non-destructive), returns ranked results with URLs and context, suggests agent skills for actionable queries, and includes best practices for query formulation. However, it doesn't explicitly mention rate limits, authentication requirements, or pagination details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is excessively long (over 1500 words) with redundant sections. While well-structured with headings, it includes unnecessary repetition (e.g., multiple 'Common mistake' warnings) and could be significantly condensed. The core information about purpose, usage, and parameters is buried in verbose explanations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (search across multiple topics with agent-skill integration) and the lack of annotations or an output schema, the description provides comprehensive context about behavior, response format, and integration with other tools. However, the excessive length reduces effectiveness, and some operational details (like the exact skill-scoring mechanism) remain vague.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all three parameters. The description adds some context about topics (e.g., 'Available topics: reference_documentation...') and usage examples, but doesn't provide significant additional semantic meaning beyond what's in the schema descriptions. This meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the tool's purpose as 'your primary source for AWS information' for searching AWS documentation, with specific verbs like 'search' and resources like 'AWS services, features, configurations, troubleshooting, and best practices'. It clearly distinguishes from sibling tools like 'aws___read_documentation' by focusing on search functionality rather than direct reading.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool (e.g., 'Always search when the query involves: Any AWS service or feature...') and when not to use it ('Only skip this tool when: Query is about non-AWS technologies...'). It also points to alternatives like 'retrieve_skill' for loading suggested skills, helping differentiate from sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}
The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.