sbb-mcp
Server Details
Independent SBB/CFF/FFS MCP server for schedules, prices, and tickets. By SwissTrip; not the official SBB MCP.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: Fabsbags/sbb-mcp
- GitHub Stars: 0
- Server Listing: sbb-mcp
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: see which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 6 of 6 tools scored.
Each tool has a distinct purpose: station search, connection search, loading more connections, pricing, purchase link, and trip details. No overlap or ambiguity.
All tool names follow a consistent verb_noun pattern using snake_case (e.g., search_stations, get_prices). No deviations.
With 6 tools, the set is well-scoped for a Swiss railway information server. It covers the essential workflow without being bloated or incomplete.
The surface covers the full user journey: station lookup, connection search with pagination, pricing, purchase link, and trip details. No obvious gaps.
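That journey can be sketched as a sequence of MCP tool calls. The `call_tool` helper and the placeholder IDs below are hypothetical; the tool and argument names come from this listing:

```python
# Illustrative end-to-end flow through the six tools on this server.
# call_tool is a stand-in for an MCP client's tools/call request.
def call_tool(name: str, arguments: dict) -> dict:
    # A real client would send a JSON-RPC tools/call request here.
    return {"tool": name, "arguments": arguments}

steps = [
    call_tool("search_stations", {"query": "Zurich", "limit": 5}),
    call_tool("search_connections", {"from": "Zürich HB", "to": "Bern"}),
    call_tool("get_more_connections", {"direction": "next",
                                       "collection_id": "<from search>"}),
    call_tool("get_prices", {"trip_ids": ["<trip-id>"]}),
    call_tool("get_trip_details", {"trip_id": "<trip-id>"}),
    call_tool("get_ticket_link", {"trip_id": "<trip-id>",
                                  "date": "2025-01-15", "time": "09:04",
                                  "from_id": "8503000", "from_name": "Zürich HB",
                                  "to_id": "8507000", "to_name": "Bern"}),
]
```

Note that `search_stations` is optional in practice, since `search_connections` resolves station names internally.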
Available Tools (6)

get_more_connections (Get More Connections)
Read-only, Idempotent
Load earlier or later train connections for a previous search. Use the collection ID from search_connections results.
| Name | Required | Description | Default |
|---|---|---|---|
| direction | Yes | "next" for later trains, "previous" for earlier trains | |
| collection_id | Yes | Collection ID from search_connections results | |
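As a sketch of how a client might frame this call (the JSON-RPC envelope follows the MCP wire format; the request id and collection ID are placeholders):

```python
import json

# Build a JSON-RPC 2.0 tools/call request for get_more_connections.
# direction must be "next" (later trains) or "previous" (earlier trains).
def page_connections(collection_id: str, direction: str) -> str:
    if direction not in ("next", "previous"):
        raise ValueError("direction must be 'next' or 'previous'")
    return json.dumps({
        "jsonrpc": "2.0", "id": 1, "method": "tools/call",
        "params": {"name": "get_more_connections",
                   "arguments": {"collection_id": collection_id,
                                 "direction": direction}},
    })
```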
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnly, idempotent, and non-destructive. Description adds minimal extra behavioral context beyond the parameter dependency (collection ID). No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, 17 words, front-loaded with the core action. Every word earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Provides sufficient context for a simple pagination tool with no output schema. Could mention behavior when no more connections or invalid collection_id, but the existing information is adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already explains each parameter. The description reinforces the collection_id relationship but adds no new semantic meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the verb 'Load' and resource 'earlier or later train connections for a previous search'. Distinguishes from sibling 'search_connections' by specifying dependency on its results.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly instructs to use the collection ID from search_connections results, providing clear when-to-use context. Does not explicitly list alternatives or when-not-to-use, but the sibling set implies limited scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_prices (Get Prices)
Read-only, Idempotent
Get ticket prices in CHF for one or more train connections. Supports Half-Fare card (Halbtax) and GA travelcard discounts. Up to 10 trip_ids per call — batch them in a single request rather than calling once per connection. Use trip_ids from a recent search_connections result; do not invent IDs.
| Name | Required | Description | Default |
|---|---|---|---|
| trip_ids | Yes | Trip IDs from search_connections results | |
| traveler_type | No | Traveler type | ADULT |
| reduction_card | No | Swiss reduction card: HALF_FARE (Halbtax), GA (General Abonnement), or NONE | HALF_FARE |
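Since the description caps each call at 10 trip_ids, larger lists should be chunked into batches rather than priced one connection at a time. A minimal sketch (the request shape beyond the listed argument names is an assumption):

```python
# Chunk trip IDs into get_prices calls of at most 10 IDs each.
# Defaults mirror the parameter table: ADULT traveler, HALF_FARE card.
def price_requests(trip_ids, reduction_card="HALF_FARE", traveler_type="ADULT"):
    requests = []
    for i in range(0, len(trip_ids), 10):
        requests.append({
            "name": "get_prices",
            "arguments": {
                "trip_ids": trip_ids[i:i + 10],
                "traveler_type": traveler_type,
                "reduction_card": reduction_card,
            },
        })
    return requests
```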
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true. Description adds behavioral context: supports Half-Fare and GA discounts, conditional pricing via SwissTrip with traveler_names. No contradictions. Could mention error handling or default behavior when both parameter groups are provided.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with purpose, followed by key features. No redundant information. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers main usage scenarios and conditional logic. Minor omission: does not specify behavior when both traveler_names and reduction_card are provided, or response format. But given simplicity and no output schema, it is largely complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 100% coverage with descriptions. Description adds semantics: traveler_names requires SWISSTRIP_TOKEN and overrides reduction_card/traveler_type, clarifies reduction_card options are Swiss-specific. Adds context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states 'Get ticket prices for one or more train connections' with specific verb and resource. Distinguishes from siblings like search_connections (which returns connections) and get_trip_details (details of a trip). Includes discount support and conditional SwissTrip integration.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides context: use after having trip IDs from search_connections, and special case with SwissTrip token and traveler_names. However, lacks explicit when-not-to-use or comparisons to sibling tools. No exclusion criteria or alternative suggestions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_ticket_link (Get Ticket Link)
Read-only
Get a direct purchase link to buy a train ticket on SBB.ch. Only call this when the user wants to buy a specific ticket. On mobile with SBB app installed, opens directly in the app with Halbtax/GA applied automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Travel date YYYY-MM-DD | |
| time | Yes | Departure time HH:MM | |
| to_id | Yes | Destination station ID (e.g. "8507000") | |
| from_id | Yes | Origin station ID (e.g. "8503000") | |
| to_name | Yes | Destination station name (e.g. "Bern") | |
| trip_id | Yes | Trip ID to purchase | |
| from_name | Yes | Origin station name (e.g. "Zürich HB") | |
| traveler_type | No | Traveler type | ADULT |
| reduction_card | No | Swiss reduction card: HALF_FARE (Halbtax), GA (General Abonnement), or NONE | HALF_FARE |
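With seven required parameters, it is easy to omit one. A hypothetical guard (the `ticket_link_args` helper is not part of the server; required names and defaults come from the table above):

```python
# Required parameters for get_ticket_link, per the parameter table.
REQUIRED = {"date", "time", "from_id", "from_name", "to_id", "to_name", "trip_id"}

def ticket_link_args(**kwargs):
    # Fail fast if any required field is missing, then apply listed defaults.
    missing = REQUIRED - kwargs.keys()
    if missing:
        raise ValueError(f"missing required parameters: {sorted(missing)}")
    kwargs.setdefault("traveler_type", "ADULT")
    kwargs.setdefault("reduction_card", "HALF_FARE")
    return kwargs
```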
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true, and description aligns by indicating a link is returned (not a purchase). Adds behavioral details like automatic Halbtax/GA application on mobile and traveler_names logic. Does not mention link expiration (openWorldHint) or return format, but overall good.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Four sentences, each serving a purpose: purpose, usage condition, mobile behavior, parameter guidance. Front-loaded with core function. No wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Covers purpose, usage, and key parameters. However, lacks explanation of return value (e.g., URL format) and does not address error cases or expiration. Since no output schema exists, description should compensate more.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline 3. Description adds value by explaining traveler_names parameter for family tickets and implying reduction_card behavior via automatic application. Provides context beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it 'Get a direct purchase link to buy a train ticket on SBB.ch.' Distinguishes from sibling tools (e.g., search_connections, get_prices) by specifying it's for purchasing a specific ticket.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Only call this when the user wants to buy a specific ticket.' Also provides conditional guidance for traveler_names when connected to SwissTrip. Lacks explicit mention of alternative tools but the context is clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_trip_details (Get Trip Details)
Read-only, Idempotent
Get detailed information about a specific train connection including all intermediate stops, platforms, and occupancy. Use a trip ID from search_connections results.
| Name | Required | Description | Default |
|---|---|---|---|
| trip_id | Yes | Trip ID from search_connections results | |
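The only precondition is that the trip ID originates from a prior `search_connections` result. A small hypothetical guard illustrating that dependency (the helper is not part of the server):

```python
# Only call get_trip_details with a trip ID taken from a prior
# search_connections result, never a fabricated one.
def trip_details_call(trip_id: str, known_trip_ids: set) -> dict:
    if trip_id not in known_trip_ids:
        raise ValueError("trip_id must come from search_connections results")
    return {"name": "get_trip_details", "arguments": {"trip_id": trip_id}}
```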
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare read only and non-destructive. Description adds behavioral details on what is returned (stops, platforms, occupancy) beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with purpose first, then usage. No wasted words, well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, description covers core purpose and input source. Could mention return format but adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Parameter trip_id is fully described in schema. Description adds relational context linking trip_id to search_connections results, providing extra semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Get detailed information about a specific train connection' listing specific elements (intermediate stops, platforms, occupancy) and differentiates from siblings like search_connections and get_prices.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly says 'Use a trip ID from search_connections results', guiding when to invoke. Does not explicitly mention alternatives but context implies appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_connections (Search Connections)
Read-only, Idempotent
Find train connections between two Swiss stations. Accepts station names directly (e.g. "Zürich HB", "Bern") or UIC IDs — name resolution happens internally. Returns live schedules with departure/arrival times, duration, transfers, and trip IDs for downstream pricing/details/ticket calls. Live data: includes delays and cancellations for trains departing within 30 min.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | Destination station name or ID (e.g. "Bern" or "8507000") | |
| date | No | Travel date in YYYY-MM-DD format (default: today) | |
| from | Yes | Origin station name or ID (e.g. "Zurich HB" or "8503000") | |
| time | No | Time in HH:MM (Europe/Zurich local time, 24h). By default treated as DEPARTURE time. Default: now. | |
| arrival_time | No | When true, treat `time` as an arrival deadline. Only set true when the user EXPLICITLY says they want to ARRIVE by a specific time ("I need to be in Bern by 9am", "arriving at 14:00"). For loose phrases like "around 9am", "morning", or "tomorrow at 9", leave this false; those mean departure time. | false |
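The `arrival_time` rule is the subtlest part of this schema. A sketch of argument construction reflecting it (the `connection_args` helper is hypothetical; argument names and defaults come from the table):

```python
# Build search_connections arguments. date and time are omitted when not
# given, matching the schema defaults (today / now, Europe/Zurich time).
def connection_args(from_, to, date_=None, time_=None, arrival_time=False):
    args = {"from": from_, "to": to, "arrival_time": arrival_time}
    if date_ is not None:
        args["date"] = date_   # YYYY-MM-DD
    if time_ is not None:
        args["time"] = time_   # HH:MM, 24h
    return args

# "I need to be in Bern by 9am": explicit arrival constraint.
by_9 = connection_args("Zürich HB", "Bern", time_="09:00", arrival_time=True)
# "around 9am" is loose: keep arrival_time False (departure time).
around_9 = connection_args("Zürich HB", "Bern", time_="09:00")
```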
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate a safe read operation. The description adds that the tool returns schedules with specific details, but does not disclose limits, data freshness, or pagination behavior beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the core purpose. No redundant information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description lists returned fields but lacks details on result structure, pagination, or behavior of optional parameters like arrival_time. Given no output schema, more information would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100%, so baseline is 3. The description adds minimal context (e.g., 'Swiss stations') but does not elaborate on parameter semantics beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description ('Find train connections between two Swiss stations') clearly identifies the tool's primary purpose with a specific verb and resource. It distinguishes from sibling tools like get_prices and search_stations by focusing on connection search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for finding connections between stations but does not explicitly mention when to use alternatives (e.g., get_more_connections for pagination) or when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_stations (Search Stations)
Read-only, Idempotent
Search for Swiss train stations, addresses, or points of interest by name. Returns UIC station IDs (e.g. "8503000" for Zürich HB) used by the other tools. Note: search_connections accepts station names directly, so this tool is only needed when the user explicitly asks for station info or when you need disambiguation between multiple matches.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results | |
| query | Yes | Station name to search for (e.g. "Zurich", "Bern", "Interlaken") | |
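Per the description, this tool is mainly for disambiguation. A sketch of picking a UIC ID from results; note the listing documents no output schema, so the response shape below is an assumption:

```python
# Pick a UIC station ID from search_stations results, preferring an
# exact name match and falling back to the first (best-ranked) hit.
def pick_station_id(results, preferred_name):
    for station in results:
        if station["name"] == preferred_name:
            return station["id"]
    return results[0]["id"] if results else None

matches = [  # hypothetical response shape
    {"id": "8503000", "name": "Zürich HB"},
    {"id": "8503006", "name": "Zürich Oerlikon"},
]
```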
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and idempotentHint=true, so the description adds minimal behavioral context beyond 'returns station IDs'. No contradictions; the description is fine but not enhanced.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two succinct sentences: purpose first, outcome second. No filler, every word counts.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately covers purpose, scope (Swiss), and return value (station IDs). Lacks detail on result ordering or error handling, but minimal for a simple read-only tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema coverage is 100% with clear descriptions and examples for both parameters. The description adds no extra parameter details, achieving baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Search', the resource 'Swiss train stations, addresses, or points of interest', and the purpose 'Returns station IDs needed for other tools'. It distinguishes from sibling tools like search_connections by focusing on station lookup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for obtaining station IDs for subsequent calls, but does not explicitly state when not to use or name alternatives. The context is clear enough given the distinct sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail: every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control: enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management: store and rotate API keys and OAuth tokens in one place
- Change alerts: get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption: public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics: see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback: users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.