BoostedTravel
Server Details
Flight search & booking for AI agents. 400+ airlines, $20-50 cheaper than OTAs.
- Status: Healthy
- Transport: Streamable HTTP
- Repository: LetsFG/LetsFG
- GitHub Stars: 571
- Server Listing: LetsFG
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Score is being calculated. Check back soon.
Available Tools
8 tools

book_flight
Book an unlocked flight. Creates a real airline PNR with e-ticket.
REQUIREMENTS:
Offer must be unlocked first (call unlock_flight_offer)
Use passenger_id from search results
Use REAL passenger details — airline sends e-ticket to the email provided
Requires GitHub star verification.
| Name | Required | Description | Default |
|---|---|---|---|
| offer_id | Yes | Unlocked offer ID (off_xxx) | |
| passengers | Yes | Passenger details with id from search results | |
| contact_email | Yes | Contact email for booking confirmation | |
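A minimal sketch of the arguments an agent might assemble for book_flight. Only `offer_id`, `passengers[].id`, and `contact_email` are documented above; the name fields inside each passenger object, and all ID values, are illustrative assumptions.

```python
# Hypothetical book_flight arguments. The given_name/family_name fields are
# assumed -- only the `id` field (passenger_id from search results) is documented.
booking_args = {
    "offer_id": "off_abc123",            # must be unlocked via unlock_flight_offer first
    "passengers": [
        {
            "id": "pas_001",             # passenger_id returned by search_flights
            "given_name": "Ada",         # assumed field name
            "family_name": "Lovelace",   # assumed field name
        }
    ],
    "contact_email": "ada@example.com",  # the airline sends the e-ticket here
}

# Sanity checks an agent could run before spending the call
assert booking_args["offer_id"].startswith("off_")
assert all("id" in p for p in booking_args["passengers"])
```

Because the booking creates a real PNR, validating the offer ID prefix and passenger IDs locally before the call is cheap insurance against a failed or misdirected booking.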
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full disclosure burden. It reveals critical behavioral traits: creates 'real' PNR (implies irreversible commitment), triggers email side-effects ('airline sends e-ticket'), and requires external auth ('GitHub star verification'). Deducted one point for not explicitly stating cancellation policies or failure rollback behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with purpose statement, followed by structured REQUIREMENTS list, ending with auth constraint. Every sentence serves distinct purpose (action, prerequisites, consequences, permissions). No redundant or filler text despite covering complex workflow.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive for a booking mutation tool with nested passenger objects. Covers prerequisites, data lineage, side effects, and auth. Lacks description of return value or confirmation structure, though absence of output_schema reduces obligation here.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing baseline of 3. Description adds semantic value by contextualizing parameters within the workflow: 'passenger_id from search results' links to search_flights output, and 'REAL passenger details' establishes data quality requirements not captured in schema field descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with specific verb ('Book') and resource ('unlocked flight'), immediately distinguishing it from sibling 'search_flights' and 'unlock_flight_offer'. It clarifies the tangible outcome ('real airline PNR with e-ticket') beyond just the action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists three numbered requirements: prerequisite tool ('call unlock_flight_offer'), data source ('passenger_id from search results'), and data quality standards ('Use REAL passenger details'). This provides clear when-to-use sequencing and warns against synthetic data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_agent_profile
View your agent profile — payment status, GitHub star verification, and usage stats. Use this to check if you're verified.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses what data the profile contains (payment, GitHub stars, usage stats), which adds context. However, it lacks disclosure about operational aspects like whether this is a read-only operation, side effects, or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences total with zero waste. First sentence establishes function and return value; second provides usage context. Every word earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a zero-parameter tool without output schema or annotations, the description adequately covers what the tool retrieves. It could be improved by explicitly noting the read-only/safe nature of the operation since no annotations declare readOnlyHint.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has zero parameters, which per guidelines establishes a baseline of 4. No parameter description is needed, and the description correctly focuses on the tool's output rather than inventing parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses specific verb 'View' with resource 'agent profile' and explicitly lists returned data (payment status, GitHub star verification, usage stats). It clearly distinguishes from sibling tools which are travel/booking focused (book_flight, search_hotels, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit usage guidance: 'Use this to check if you're verified.' This establishes when to use the tool (verification checks). However, it doesn't explicitly state when NOT to use it or contrast with alternatives like link_github for verification purposes.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
link_github
Link your GitHub account to verify you've starred LetsFG/LetsFG.
HOW TO GET ACCESS:
Star https://github.com/LetsFG/LetsFG
Call this tool with your GitHub username
Once verified, all tools are unlocked — search, unlock, book.
This is required before using search_flights, search_hotels, unlock_flight_offer, book_flight, or setup_payment.
| Name | Required | Description | Default |
|---|---|---|---|
| github_username | Yes | Your GitHub username (the one that starred LetsFG/LetsFG) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral disclosure burden. It successfully explains the verification logic (checking for the star), the side effect (unlocking all tools), and the dependency chain. Minor gap: doesn't describe failure behavior if the user hasn't starred the repo or if called repeatedly.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear visual hierarchy: purpose statement, 'HOW TO GET ACCESS' header with numbered steps, and prerequisite list. Content is front-loaded. Slightly verbose but every sentence serves the critical function of explaining a complex authentication workflow to unlock the tool ecosystem.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a 1-parameter auth tool with no output schema, the description adequately covers the essential context: it explains the verification mechanism, the unlock side effects, and ecosystem integration. Missing only edge-case handling (errors, idempotency) to achieve a 5.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with the parameter 'github_username' fully described as 'Your GitHub username (the one that starred LetsFG/LetsFG)'. The description references this parameter in step 2 ('Call this tool with your GitHub username') but adds no semantic information beyond what's already in the schema. Baseline 3 appropriate for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific action and scope: 'Link your GitHub account to verify you've starred LetsFG/LetsFG.' It clearly distinguishes this authentication prerequisite from the travel-booking siblings (search_flights, book_flight, etc.) by framing it as an account verification step.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent explicit guidance provided via numbered 'HOW TO GET ACCESS' steps and a clear prerequisite statement: 'This is required before using search_flights...' The description establishes exactly when to use this tool (before any booking operations) and the specific workflow (star repo → call tool → unlock ecosystem).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resolve_location
Convert a city or airport name to IATA codes for use in flight search. Use when the user says 'London' instead of 'LON'.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | City or airport name (e.g. 'London', 'New York', 'Heathrow') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full behavioral burden. It explains the core transformation but omits critical operational details: whether it returns multiple codes for cities (e.g., London→LHR,LGW), error behavior for unknown locations, or the return data structure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two tightly constructed sentences with zero redundancy. First sentence establishes function; second provides usage condition. Perfectly front-loaded and appropriately sized for the tool's simplicity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description should disclose the return format (single code vs. array, handling of multi-airport cities). For a single-parameter lookup tool, the description is minimally adequate but leaves gaps in the contract regarding response structure.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, establishing a baseline of 3. The description adds value by providing the concrete 'London' vs 'LON' example, which clarifies the disambiguation semantics beyond the generic schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the conversion function (city/airport name to IATA codes) and mentions the domain (flight search). However, it could explicitly distinguish itself as a preprocessing/utility step versus the actual search_flights sibling tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a concrete usage trigger ('Use when the user says 'London' instead of 'LON''), effectively illustrating the disambiguation use case. Lacks explicit workflow guidance mentioning that the output feeds into search_flights or book_flight.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_flights
Search for flights between any two cities/airports worldwide. 400+ airlines via GDS/NDC + 140 local connectors fire in parallel. Returns offers with prices, airlines, times, conditions. FREE.
IMPORTANT: Response includes passenger_ids — save them for booking.
Requires GitHub star verification. If not verified, call link_github first.
| Name | Required | Description | Default |
|---|---|---|---|
| adults | No | Number of adult passengers (default: 1) | |
| origin | Yes | IATA departure code (e.g. LON, JFK). Use resolve_location if you only have a city name. | |
| children | No | Number of children (ages 2-11) | |
| currency | No | Currency code (EUR, USD, GBP) | EUR |
| date_from | Yes | Departure date YYYY-MM-DD | |
| cabin_class | No | M=economy, W=premium, C=business, F=first | |
| destination | Yes | IATA arrival code (e.g. BCN, LAX) | |
| max_results | No | Max offers to return (default: 10) | |
| return_from | No | Return date YYYY-MM-DD (omit for one-way) | |
| response_mode | No | 'summary' (default) — compact price/airline/route per offer, saves tokens. 'full' — includes segments, conditions, bags, durations. | summary |
| departure_time_to | No | Latest departure time (HH:MM, 24h). E.g. '14:00' for departures before 2pm | |
| departure_time_from | No | Earliest departure time (HH:MM, 24h). E.g. '08:00' for morning flights | |
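A sketch of a round-trip search_flights call using the parameters tabled above; all values are illustrative. Validating the date strings locally catches format mistakes before a tool call is spent.

```python
from datetime import date

# Hypothetical search_flights arguments; parameter names follow the table above.
search_args = {
    "origin": "LON",              # IATA code; use resolve_location for plain city names
    "destination": "BCN",
    "date_from": "2025-09-01",    # YYYY-MM-DD
    "return_from": "2025-09-08",  # omit this key entirely for a one-way search
    "adults": 1,
    "cabin_class": "M",           # M=economy, W=premium, C=business, F=first
    "currency": "EUR",
    "response_mode": "summary",   # 'full' adds segments, conditions, bags, durations
}

# Reject malformed dates before calling the tool (raises ValueError on bad input)
for key in ("date_from", "return_from"):
    date.fromisoformat(search_args[key])
```

Remember to save the `passenger_ids` from the response: book_flight requires them.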
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries significant behavioral disclosure burden, successfully indicating cost ('FREE'), implementation scope ('400+ airlines via GDS/NDC'), and output characteristics ('Response includes `passenger_ids`'). It also discloses the authentication requirement ('Requires GitHub star verification'), though it omits rate limits or error handling specifics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description efficiently organizes information into three distinct functional blocks: capability statement, critical output warning, and authentication prerequisite. Every sentence serves a specific purpose, with the 'IMPORTANT' flag appropriately highlighting the `passenger_ids` retention requirement without unnecessary verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the absence of an output schema, the description adequately compensates by detailing the return structure ('offers with prices, airlines, times, conditions') and specifically highlighting the critical `passenger_ids` field needed for subsequent booking. The authentication requirement and parallel search architecture provide sufficient context for a complex 10-parameter flight search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema achieves 100% description coverage with clear type hints and enum values (e.g., 'M=economy, W=premium'), so the description appropriately does not redundantly document parameters. The description adds minimal semantic value beyond the schema, mentioning only that it searches 'between any two cities/airports,' which aligns with the origin/destination parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly defines the tool's scope with 'Search for flights between any two cities/airports worldwide' and specifies the return values ('offers with prices, airlines, times, conditions'). It effectively distinguishes this from the booking sibling tool by noting that results are for searching/saving passenger_ids rather than completing transactions, establishing the workflow boundary.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit prerequisite guidance stating 'Requires GitHub star verification. If not verified, call `link_github` first,' establishing a clear conditional workflow. It also indicates the downstream workflow by noting that `passenger_ids` should be saved for booking, implicitly guiding the agent toward the `book_flight` tool, though it does not explicitly differentiate from `search_hotels`.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_hotels
Search for hotels in a city or near coordinates. 300,000+ properties via wholesale suppliers — cheaper than Booking.com. Returns hotels with rooms, prices, photos, cancellation policies. FREE.
Requires GitHub star verification. If not verified, call link_github first.
| Name | Required | Description | Default |
|---|---|---|---|
| rooms | No | Number of rooms (default: 1) | |
| adults | No | Number of adult guests (default: 2) | |
| checkin | Yes | Check-in date YYYY-MM-DD | |
| checkout | Yes | Check-out date YYYY-MM-DD | |
| currency | No | Currency code (EUR, USD, GBP) | EUR |
| location | Yes | City name, airport code, or 'lat,lon' coordinates | |
| max_results | No | Max hotels to return (default: 10) | |
| response_mode | No | 'summary' (default) — compact name/price/rating per hotel, saves tokens. 'full' — includes rooms, photos, policies, amenities. | summary |
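Per the table above, `location` accepts a city name, an airport code, or 'lat,lon' coordinates. This sketch uses the coordinate form (values illustrative) and shows a local range check before the call.

```python
# Hypothetical search_hotels arguments using 'lat,lon' coordinates.
hotel_args = {
    "location": "41.3874,2.1686",  # Barcelona city centre, as coordinates
    "checkin": "2025-09-01",
    "checkout": "2025-09-05",
    "adults": 2,
    "rooms": 1,
    "currency": "EUR",
    "response_mode": "summary",    # 'full' adds rooms, photos, policies, amenities
}

# When using the coordinate form, confirm it parses and is in range
lat, lon = map(float, hotel_args["location"].split(","))
assert -90 <= lat <= 90 and -180 <= lon <= 180
```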
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully indicates the read-only nature via 'Search', discloses the data source ('wholesale suppliers'), value proposition ('cheaper than Booking.com'), return payload contents ('rooms, prices, photos, cancellation policies'), cost ('FREE'), and authentication requirements. It misses explicit idempotency or rate limit statements, but covers the critical behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured with zero waste: core action (sentence 1), scope/value prop (sentence 2), return data (sentence 3), cost (sentence 4), and prerequisites (sentences 5-6). Every sentence earns its place and information is front-loaded appropriately.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and annotations, the description adequately compensates by listing the return payload fields (rooms, prices, photos, policies) and explaining the verification prerequisite. It provides sufficient context for an agent to invoke the tool correctly, though it could be improved by mentioning pagination or caching behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema fully documents all 8 parameters including location formats and response modes. The description adds minimal semantic value beyond the schema—primarily reinforcing that location accepts 'city or near coordinates' (already documented in schema). Baseline score appropriate given schema completeness.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description opens with a specific verb ('Search for hotels') and clearly defines the searchable resources ('city or near coordinates'). It distinguishes itself from sibling flight tools by explicitly mentioning 'hotels' and 'properties', and adds scope context ('300,000+ properties') that differentiates it from generic search tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit prerequisite guidance ('Requires GitHub star verification. If not verified, call `link_github` first'), clearly stating when to invoke a sibling tool before this one. It lacks explicit 'when not to use' guidance (e.g., vs. resolve_location), but the prerequisite instruction is highly specific and actionable.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
setup_payment
Attach a payment card. Required before booking.
For testing: {"token": "tok_visa"} For production: {"payment_method_id": "pm_xxx"} from Stripe.js
One-time setup — all future charges are automatic.
Requires GitHub star verification.
| Name | Required | Description | Default |
|---|---|---|---|
| token | No | Stripe token (e.g. tok_visa for testing) | |
| payment_method_id | No | Stripe PaymentMethod ID (pm_xxx) | |
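The two environments described above take different arguments. A sketch of both, assuming (as the description implies) that the parameters are mutually exclusive; the `pm_` value is illustrative and would come from Stripe.js in production.

```python
# Hypothetical setup_payment arguments. Supply one of the two, not both.
test_args = {"token": "tok_visa"}               # Stripe test token, per the description
prod_args = {"payment_method_id": "pm_abc123"}  # illustrative id; obtain a real one via Stripe.js

# The two argument sets share no keys -- an agent should pick exactly one
assert not (test_args.keys() & prod_args.keys())
```

Since all future charges are automatic after this one-time setup, an agent should call get_agent_profile first to avoid re-attaching a card that is already configured.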
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, but description carries the burden well: discloses 'all future charges are automatic' (side effects), 'One-time setup' (idempotency hint), and 'Requires GitHub star verification' (prerequisites). Missing only error state details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficiently packed with no waste. Front-loaded with purpose, followed by environment-specific examples, then behavioral notes. Slightly dense but all sentences earn their place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given simple 2-parameter schema and no output schema, description adequately covers prerequisites (GitHub star), environment differences, persistence model, and relationship to booking workflow.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with examples, but description adds critical usage context: mapping 'token' to testing environment and 'payment_method_id' to production with Stripe.js source. Explains the mutual exclusivity implicitly through environment separation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
States specific action ('Attach a payment card') and explicitly distinguishes from sibling booking tools by noting it's 'Required before booking'. Clear verb-resource combination with workflow context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use ('Required before booking', 'One-time setup') and distinguishes testing vs production environments. Implicitly indicates not to use if already set up or when not booking.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
unlock_flight_offer
Confirm live price and reserve a flight offer for 30 minutes. After unlocking, call book_flight to complete the booking.
Requires GitHub star verification.
| Name | Required | Description | Default |
|---|---|---|---|
| offer_id | Yes | Offer ID from search results (off_xxx) | |
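The search → unlock → book sequence these tools describe can be sketched as follows. `call_tool` stands in for whatever MCP client the agent uses, and the assumed response shapes (a list of offers, each with an `offer_id` key) are illustrative, not documented by the server.

```python
# Hedged sketch of the documented workflow:
#   search_flights -> unlock_flight_offer -> book_flight
def book_first_offer(call_tool, origin, destination, date_from,
                     passengers, contact_email):
    offers = call_tool("search_flights", origin=origin,
                       destination=destination, date_from=date_from)
    offer_id = offers[0]["offer_id"]                     # assumed response key
    call_tool("unlock_flight_offer", offer_id=offer_id)  # starts the 30-minute hold
    return call_tool("book_flight", offer_id=offer_id,   # must complete within the hold
                     passengers=passengers, contact_email=contact_email)
```

Keeping the three calls in one function makes the 30-minute constraint explicit: the unlock and the booking happen back-to-back rather than across separate agent turns.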
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden and successfully discloses key behavioral traits: the 30-minute reservation window, live price confirmation behavior, and GitHub star authentication requirement. Could improve by mentioning what happens when the reservation expires or if the offer becomes invalid.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three sentences with zero waste: first defines the action, second establishes the workflow relationship with book_flight, third states the authentication prerequisite. Information is front-loaded and appropriately sized for a single-parameter tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Complete for the tool's complexity (1 parameter, no output schema). Describes the temporal effect (30-minute hold), workflow position, and authentication barrier. Minor gap: does not describe the return value or confirmation structure, though no output schema exists to require this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% (the offer_id parameter is fully documented with type and format 'off_xxx'). The description does not add additional semantic context about the parameter, but with complete schema coverage, no additional description is necessary. Baseline score appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific actions performed: 'Confirm live price and reserve a flight offer for 30 minutes.' It uses distinct verbs (confirm, reserve) and distinguishes itself from siblings search_flights (finding offers) and book_flight (completing purchase).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Excellent workflow guidance: explicitly names sibling tool book_flight as the next step ('After unlocking, call book_flight to complete the booking'). Also states a critical prerequisite ('Requires GitHub star verification'), clearly defining when the tool can be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.
Discussions
No comments yet. Be the first to start the discussion!
Your Connectors
Sign in to create a connector for this server.