swisstrip-mcp
Server Details
Canonical SwissTrip MCP — independent SBB/CFF/FFS schedules, prices, and ticket links by SwissTrip.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: Fabsbags/swisstrip-mcp
- GitHub Stars: 0
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 6 of 6 tools scored.
Each tool has a distinct purpose: station search, connection search, trip details, pricing, purchase link, and pagination. No overlap between tools.
All tools follow a consistent verb_noun pattern using snake_case (get_more_connections, get_prices, get_ticket_link, get_trip_details, search_connections, search_stations). No mixing of styles.
Six tools cover the essential workflow of Swiss train travel: station lookup, connection search, details, pricing, purchase, and pagination. The count is well-scoped.
The tool set provides a complete lifecycle: search stations → search connections → get details/prices → purchase link, plus pagination. No obvious missing operations for the intended domain.
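The lifecycle above can be sketched as a sequence of MCP tool calls. This is a hypothetical illustration only: the `call(tool_name, arguments)` helper, the response field names (`connections`, `id`, `collection_id`), and all station/trip values are assumptions, not the server's documented response shape.

```python
def plan_trip(call):
    # 1. Optional disambiguation: resolve a name to UIC station IDs
    call("search_stations", {"query": "Zurich", "limit": 3})
    # 2. Initial search; station names are accepted directly
    result = call("search_connections", {"from": "Zürich HB", "to": "Bern"})
    trip_id = result["connections"][0]["id"]
    # 3. Inspect one connection, then batch-price several in a single call
    call("get_trip_details", {"trip_id": trip_id})
    call("get_prices", {"trip_ids": [c["id"] for c in result["connections"]]})
    # 4. Only when the user explicitly wants to buy a specific ticket
    call("get_ticket_link", {
        "trip_id": trip_id,
        "from_id": "8503000", "from_name": "Zürich HB",
        "to_id": "8507000", "to_name": "Bern",
        "date": "2025-06-01", "time": "09:02",
    })
    # 5. If needed, page to earlier or later connections
    call("get_more_connections", {
        "collection_id": result["collection_id"], "direction": "next",
    })
```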
Available Tools
6 tools

get_more_connections (Get More Connections): A, Read-only, Idempotent
Load earlier or later train connections for a previous search. Use the collection ID from search_connections results.
| Name | Required | Description | Default |
|---|---|---|---|
| direction | Yes | "next" for later trains, "previous" for earlier trains | |
| collection_id | Yes | Collection ID from search_connections results | |
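A minimal illustrative payload for this tool, following the table above; the collection ID value is a placeholder, not a real one.

```python
# Hypothetical get_more_connections arguments; collection_id would come
# verbatim from a prior search_connections result.
args = {
    "collection_id": "ctx-12345",   # placeholder from search_connections
    "direction": "next",            # "next" = later trains, "previous" = earlier
}
```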
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the safety profile is clear. The description adds that it loads connections from a previous search, which is consistent with annotations. No additional behavioral details are needed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description consists of two succinct sentences with no redundant words. It is front-loaded with the purpose and immediately provides the key requirement (collection_id).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (2 parameters, no output schema, good annotations), the description covers all necessary context: what it does, how to use it, and prerequisites. No gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters described in the schema (direction with enum, collection_id as string). The description does not add new meaning beyond what the schema provides, so baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool loads earlier or later train connections for a previous search, using a specific verb ('load') and resource ('train connections'). It distinguishes from siblings like search_connections which performs initial searches.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit context: it requires a collection_id from search_connections results and a direction. However, it does not explicitly state when not to use this tool or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_prices (Get Prices): A, Read-only, Idempotent
Get ticket prices in CHF for one or more train connections. Supports Half-Fare card (Halbtax) and GA travelcard discounts. Up to 10 trip_ids per call — batch them in a single request rather than calling once per connection. Use trip_ids from a recent search_connections result; do not invent IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| trip_ids | Yes | Trip IDs from search_connections results | |
| traveler_type | No | Traveler type | ADULT |
| reduction_card | No | Swiss reduction card: HALF_FARE (Halbtax), GA (General Abonnement), or NONE | HALF_FARE |
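An illustrative get_prices payload. The trip IDs are placeholders; the batching rule (at most 10 trip_ids per call) comes from the tool description above.

```python
# Hypothetical list of trip IDs, e.g. taken from search_connections results
trip_ids = [f"trip-{i}" for i in range(12)]

args = {
    "trip_ids": trip_ids[:10],        # stay within the 10-ID batch limit
    "traveler_type": "ADULT",         # default
    "reduction_card": "HALF_FARE",    # HALF_FARE (Halbtax), GA, or NONE
}
```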
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint true and idempotentHint true, so the description is not required to restate safety. It adds value by mentioning discount support and family pricing, but does not disclose additional behavioral traits like response format or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Front-loaded with the core purpose and free of redundancy; every sentence adds useful information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With good annotations and full schema coverage, the description adequately covers the tool's purpose and special cases (family pricing). No output schema exists, but the description implies what it returns (prices). Could mention return format but not critical.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% description coverage, so description adds limited value. It clarifies that traveler_names enables family pricing, which is not fully captured in schema. However, most parameter semantics are already provided by schema descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it retrieves ticket prices for train connections, distinguishing it from siblings like search_connections (which provides trip IDs) and get_trip_details. The verb 'get' with resource 'prices' is specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context for when to use traveler_names (when connected to SwissTrip with SWISSTRIP_TOKEN) and implicitly links to prior use of search_connections. Does not explicitly exclude alternatives, but the purpose is clear enough for the agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ticket_link (Get Ticket Link): A, Read-only
Get a direct purchase link to buy a train ticket on SBB.ch. Only call this when the user wants to buy a specific ticket. On mobile with SBB app installed, opens directly in the app with Halbtax/GA applied automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Travel date YYYY-MM-DD | |
| time | Yes | Departure time HH:MM | |
| to_id | Yes | Destination station ID (e.g. "8507000") | |
| from_id | Yes | Origin station ID (e.g. "8503000") | |
| to_name | Yes | Destination station name (e.g. "Bern") | |
| trip_id | Yes | Trip ID to purchase | |
| from_name | Yes | Origin station name (e.g. "Zürich HB") | |
| traveler_type | No | Traveler type | ADULT |
| reduction_card | No | Swiss reduction card: HALF_FARE (Halbtax), GA (General Abonnement), or NONE | HALF_FARE |
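An illustrative get_ticket_link payload. The station IDs and names reuse the examples from the table; the trip ID and date are placeholders.

```python
# Hypothetical arguments for a purchase link; values are illustrative only.
args = {
    "trip_id": "trip-abc",            # placeholder from search_connections
    "from_id": "8503000", "from_name": "Zürich HB",
    "to_id": "8507000",   "to_name": "Bern",
    "date": "2025-06-01",             # YYYY-MM-DD
    "time": "09:02",                  # HH:MM
    "reduction_card": "NONE",         # override the HALF_FARE default
}
```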
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Beyond the annotations (readOnlyHint, openWorldHint), the description adds useful behavioral context: on mobile with SBB app installed, it opens directly in the app with Halbtax/GA applied automatically; traveler_names requires SWISSTRIP_TOKEN. No contradictions with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three efficient sentences, front-loaded with the main purpose, and every sentence provides unique value without wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, no output schema), the description covers purpose, usage, and parameter context well. However, it does not describe what the tool returns (e.g., URL string) or behavior on invalid trip_id, which would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with descriptions for all parameters. The description adds semantics: traveler_names is for family tickets and requires a token, and traveler_type/reduction_card are used when traveler_names not given. This adds value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool gets a direct purchase link to buy a train ticket on SBB.ch, and specifies it should be called when the user wants to buy a specific ticket. It distinguishes itself from sibling tools by focusing on the purchase link action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use the tool ('Only call this when the user wants to buy a specific ticket') and provides context for mobile app behavior and traveler_names usage. However, it does not explicitly mention when not to use it or alternative sibling tools like get_prices or get_trip_details.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trip_details (Get Trip Details): A, Read-only, Idempotent
Get detailed information about a specific train connection including all intermediate stops, platforms, and occupancy. Use a trip ID from search_connections results.
| Name | Required | Description | Default |
|---|---|---|---|
| trip_id | Yes | Trip ID from search_connections results | |
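An illustrative call for this single-parameter tool; the trip ID must be taken verbatim from a search_connections result (a placeholder entry is shown), never invented.

```python
# Hypothetical search result entry; real IDs come from search_connections
trip = {"id": "trip-abc"}

args = {"trip_id": trip["id"]}
```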
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds value by detailing the return content (intermediate stops, platforms, occupancy), aiding the agent in understanding what to expect beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first states the purpose and content, second gives the source of the required parameter. No fluff, front-loaded, efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one well-described parameter and rich annotations, the description covers all necessary context. It mentions return details and ties to a sibling tool, making it complete for correct invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema already fully describes the single parameter (trip_id) with the same context given in the description. Schema coverage is 100%, so baseline is 3; description adds no new semantic information.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves detailed information about a specific train connection, listing included data (stops, platforms, occupancy). It distinguishes from siblings like search_connections by specifying it operates on a single trip ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly tells the agent to use a trip ID from search_connections results, providing clear context. While it doesn't mention when to avoid the tool or name alternatives, the instruction is sufficient for correct invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_connections (Search Connections): A, Read-only, Idempotent
Find train connections between two Swiss stations. Accepts station names directly (e.g. "Zürich HB", "Bern") or UIC IDs — name resolution happens internally. Returns live schedules with departure/arrival times, duration, transfers, and trip IDs for downstream pricing/details/ticket calls. Live data: includes delays and cancellations for trains departing within 30 min.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Destination station name or ID (e.g. "Bern" or "8507000") | |
| date | No | Travel date in YYYY-MM-DD format (default: today) | |
| from | Yes | Origin station name or ID (e.g. "Zurich HB" or "8503000") | |
| time | No | Time in HH:MM (Europe/Zurich local time, 24h). By default treated as DEPARTURE time. Default: now. | |
| arrival_time | No | Defaults to false (treat `time` as departure). Only set true when the user EXPLICITLY says they want to ARRIVE by a specific time ("I need to be in Bern by 9am", "arriving at 14:00"). For loose phrases like "around 9am", "morning", or "tomorrow at 9", leave this false — those mean departure time. | false |
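Two illustrative payloads showing the arrival_time rule from the table: loose phrasing like "around 9am" keeps the default departure semantics, while an explicit "be there by 9" flips arrival_time to true. Station values are illustrative.

```python
# "around 9am" -> time is treated as DEPARTURE (arrival_time omitted/false)
depart_around_nine = {
    "from": "Zurich HB", "to": "Bern",
    "time": "09:00",
}

# "I need to be in Bern by 9am" -> explicit arrival constraint
arrive_by_nine = {
    "from": "Zurich HB", "to": "Bern",
    "time": "09:00",
    "arrival_time": True,
}
```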
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint and idempotentHint, so there are no contradictions. The description adds the specific fields returned but does not disclose additional behavioral traits such as rate limits or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Opens with an immediate action verb and key details, with no redundancy. Information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations and full schema coverage, the description is sufficient for an agent to understand the tool's purpose and return format. The Swiss-only scope is stated explicitly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All 5 parameters have full descriptions in the schema (100% coverage). The description does not add new meaning beyond the schema, just summarizes the overall output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it finds train connections between Swiss stations and returns schedules with specific fields. Distinguishes itself from siblings like get_more_connections or get_trip_details by focusing on initial search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance. Context from sibling names implies it is for initial search, but no direct comparison or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stations (Search Stations): A, Read-only, Idempotent
Search for Swiss train stations, addresses, or points of interest by name. Returns UIC station IDs (e.g. "8503000" for Zürich HB) used by the other tools. Note: search_connections accepts station names directly, so this tool is only needed when the user explicitly asks for station info or when you need disambiguation between multiple matches.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results | |
| query | Yes | Station name to search for (e.g. "Zurich", "Bern", "Interlaken") | |
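An illustrative search_stations payload; per the description above, this tool is mainly useful for disambiguation, since search_connections accepts station names directly.

```python
# Hypothetical query arguments; "Interlaken" is an illustrative value.
args = {"query": "Interlaken", "limit": 5}
```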
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, destructiveHint. The description adds context about returning station IDs needed for other tools, which aids in understanding the tool's role without contradicting annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Concise, with no unnecessary words; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with 2 parameters and no output schema, the description covers the essential purpose, scope, and result. It is complete enough.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description reinforces that search is by name, but does not add substantial new meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search', the resource 'Swiss train stations, addresses, or points of interest', and adds value by mentioning the output 'station IDs needed for other tools'. This distinguishes it from sibling tools like search_connections.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating the output is needed for other tools, but does not explicitly provide when-to-use or when-not-to-use guidance or mention alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.