Glama · Server Details

Independent SBB/CFF/FFS MCP — schedules, prices, tickets. By SwissTrip; not the official SBB MCP.

Status: Healthy
Transport: Streamable HTTP
Repository: Fabsbags/sbb-mcp
GitHub Stars: 0
Server Listing: sbb-mcp

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: Grade A

Average 4.1/5 across 6 of 6 tools scored.

Server Coherence: Grade A
Disambiguation: 5/5

Each tool has a distinct purpose: station search, connection search, loading more connections, pricing, purchase link, and trip details. No overlap or ambiguity.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern using snake_case (e.g., search_stations, get_prices). No deviations.

Tool Count: 5/5

With 6 tools, the set is well-scoped for a Swiss railway information server. It covers the essential workflow without being bloated or incomplete.

Completeness: 5/5

The surface covers the full user journey: station lookup, connection search with pagination, pricing, purchase link, and trip details. No obvious gaps.
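For orientation, the journey these six tools cover can be sketched as an ordered sequence. This is a sketch inferred from the tool descriptions on this page, not an official workflow:

```python
# Typical call order for the sbb-mcp tool set, inferred from the
# tool descriptions on this page (not an official workflow).
WORKFLOW = [
    ("search_stations",      "optional: look up or disambiguate a station"),
    ("search_connections",   "find connections; returns trip IDs and a collection ID"),
    ("get_more_connections", "page to earlier/later trains via the collection ID"),
    ("get_prices",           "price up to 10 trip IDs per call"),
    ("get_trip_details",     "stops, platforms, occupancy for one trip ID"),
]

for step, (tool, role) in enumerate(WORKFLOW, start=1):
    print(f"{step}. {tool}: {role}")
```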

Available Tools (6 tools)
get_more_connections (Get More Connections): Grade A
Read-only · Idempotent

Load earlier or later train connections for a previous search. Use the collection ID from search_connections results.

Parameters:
- direction (required): "next" for later trains, "previous" for earlier trains
- collection_id (required): Collection ID from search_connections results
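To make the parameter shape concrete, here is a minimal sketch of the JSON-RPC 2.0 `tools/call` payload an MCP client would send for this tool; the collection ID value is a hypothetical placeholder:

```python
import json

# Page forward from a previous search_connections result.
# "col-abc123" is a hypothetical placeholder, not a real collection ID.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "get_more_connections",
        "arguments": {
            "direction": "next",          # "next" = later trains, "previous" = earlier
            "collection_id": "col-abc123",
        },
    },
}
print(json.dumps(request, indent=2))
```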
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly, idempotent, and non-destructive. Description adds minimal extra behavioral context beyond the parameter dependency (collection ID). No contradictions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, 17 words, front-loaded with the core action. Every word earns its place with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Provides sufficient context for a simple pagination tool with no output schema. Could mention behavior when no more connections or invalid collection_id, but the existing information is adequate.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already explains each parameter. The description reinforces the collection_id relationship but adds no new semantic meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the verb 'Load' and resource 'earlier or later train connections for a previous search'. Distinguishes from sibling 'search_connections' by specifying dependency on its results.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs to use the collection ID from search_connections results, providing clear when-to-use context. Does not explicitly list alternatives or when-not-to-use, but the sibling set implies limited scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_prices (Get Prices): Grade A
Read-only · Idempotent

Get ticket prices in CHF for one or more train connections. Supports Half-Fare card (Halbtax) and GA travelcard discounts. Up to 10 trip_ids per call — batch them in a single request rather than calling once per connection. Use trip_ids from a recent search_connections result; do not invent IDs.

Parameters:
- trip_ids (required): Trip IDs from search_connections results
- traveler_type (optional, default: ADULT): Traveler type
- reduction_card (optional, default: HALF_FARE): Swiss reduction card: HALF_FARE (Halbtax), GA (General Abonnement), or NONE
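A sketch of the arguments for a batched pricing call, following the description's advice to pass up to 10 trip IDs in one request; the trip ID strings are hypothetical placeholders:

```python
# Batch several trip IDs into one get_prices call instead of looping.
# The trip ID strings are hypothetical placeholders from a prior
# search_connections result.
trip_ids = ["trip-001", "trip-002", "trip-003"]
assert len(trip_ids) <= 10, "get_prices accepts at most 10 trip_ids per call"

arguments = {
    "trip_ids": trip_ids,
    "traveler_type": "ADULT",        # schema default
    "reduction_card": "HALF_FARE",   # Halbtax; alternatives: "GA", "NONE"
}
call = {"name": "get_prices", "arguments": arguments}
print(call)
```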
Behavior: 4/5

Annotations already declare readOnlyHint=true and idempotentHint=true. Description adds behavioral context: supports Half-Fare and GA discounts, conditional pricing via SwissTrip with traveler_names. No contradictions. Could mention error handling or default behavior when both parameter groups are provided.

Conciseness: 5/5

Two sentences, front-loaded with purpose, followed by key features. No redundant information. Every sentence adds value.

Completeness: 4/5

Covers main usage scenarios and conditional logic. Minor omission: does not specify behavior when both traveler_names and reduction_card are provided, or response format. But given simplicity and no output schema, it is largely complete.

Parameters: 4/5

Schema has 100% coverage with descriptions. Description adds semantics: traveler_names requires SWISSTRIP_TOKEN and overrides reduction_card/traveler_type, clarifies reduction_card options are Swiss-specific. Adds context beyond schema.

Purpose: 5/5

Clearly states 'Get ticket prices for one or more train connections' with specific verb and resource. Distinguishes from siblings like search_connections (which returns connections) and get_trip_details (details of a trip). Includes discount support and conditional SwissTrip integration.

Usage Guidelines: 3/5

Provides context: use after having trip IDs from search_connections, and special case with SwissTrip token and traveler_names. However, lacks explicit when-not-to-use or comparisons to sibling tools. No exclusion criteria or alternative suggestions.

get_trip_details (Get Trip Details): Grade A
Read-only · Idempotent

Get detailed information about a specific train connection including all intermediate stops, platforms, and occupancy. Use a trip ID from search_connections results.

Parameters:
- trip_id (required): Trip ID from search_connections results
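Concretely, the single-parameter call might look like this; the trip ID is a hypothetical placeholder taken from a prior search_connections result:

```python
# Fetch stops, platforms, and occupancy for one connection.
# "trip-001" is a hypothetical placeholder for a real trip ID.
call = {
    "name": "get_trip_details",
    "arguments": {"trip_id": "trip-001"},
}
print(call)
```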
Behavior: 4/5

Annotations already declare read-only and non-destructive. Description adds behavioral details on what is returned (stops, platforms, occupancy) beyond the annotations.

Conciseness: 5/5

Two efficient sentences with purpose first, then usage. No wasted words, well-structured.

Completeness: 4/5

For a simple tool with one parameter and no output schema, the description covers the core purpose and input source. It could mention the return format, but is adequate.

Parameters: 4/5

Parameter trip_id is fully described in schema. Description adds relational context linking trip_id to search_connections results, providing extra semantic value.

Purpose: 5/5

Description clearly states 'Get detailed information about a specific train connection', listing specific elements (intermediate stops, platforms, occupancy) and differentiating from siblings like search_connections and get_prices.

Usage Guidelines: 4/5

Explicitly says 'Use a trip ID from search_connections results', guiding when to invoke. Does not explicitly mention alternatives but context implies appropriate usage.

search_connections (Search Connections): Grade A
Read-only · Idempotent

Find train connections between two Swiss stations. Accepts station names directly (e.g. "Zürich HB", "Bern") or UIC IDs — name resolution happens internally. Returns live schedules with departure/arrival times, duration, transfers, and trip IDs for downstream pricing/details/ticket calls. Live data: includes delays and cancellations for trains departing within 30 min.

Parameters:
- from (required): Origin station name or ID (e.g. "Zurich HB" or "8503000")
- to (required): Destination station name or ID (e.g. "Bern" or "8507000")
- date (optional, default: today): Travel date in YYYY-MM-DD format
- time (optional, default: now): Time in HH:MM (Europe/Zurich local time, 24h). By default treated as DEPARTURE time.
- arrival_time (optional, default: false, meaning `time` is a departure): Only set true when the user EXPLICITLY says they want to ARRIVE by a specific time ("I need to be in Bern by 9am", "arriving at 14:00"). For loose phrases like "around 9am", "morning", or "tomorrow at 9", leave this false — those mean departure time.
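The departure-vs-arrival rule above can be illustrated with two argument sets. Values follow the parameter table; the station names come from the description's own examples and the date is a hypothetical placeholder:

```python
# "Leave around 09:00" -> time is a DEPARTURE time, arrival_time stays false.
depart_query = {
    "from": "Zürich HB",
    "to": "Bern",
    "date": "2025-01-15",      # YYYY-MM-DD; hypothetical example date
    "time": "09:00",           # HH:MM, Europe/Zurich local time
    "arrival_time": False,
}

# "I need to be in Bern by 09:00" -> explicit arrival, so arrival_time=True.
arrive_query = {**depart_query, "arrival_time": True}

print(depart_query["arrival_time"], arrive_query["arrival_time"])
```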
Behavior: 3/5

Annotations already indicate a safe read operation. The description adds that the tool returns schedules with specific details, but does not disclose limits, data freshness, or pagination behavior beyond the annotations.

Conciseness: 5/5

The description is extremely concise with only two sentences, front-loaded with the core purpose. No redundant information.

Completeness: 3/5

The description lists returned fields but lacks details on result structure, pagination, or behavior of optional parameters like arrival_time. Given no output schema, more information would improve completeness.

Parameters: 3/5

Input schema coverage is 100%, so baseline is 3. The description adds minimal context (e.g., 'Swiss stations') but does not elaborate on parameter semantics beyond what the schema provides.

Purpose: 5/5

The description ('Find train connections between two Swiss stations') clearly identifies the tool's primary purpose with a specific verb and resource. It distinguishes from sibling tools like get_prices and search_stations by focusing on connection search.

Usage Guidelines: 3/5

The description implies usage for finding connections between stations but does not explicitly mention when to use alternatives (e.g., get_more_connections for pagination) or when not to use this tool.

search_stations (Search Stations): Grade A
Read-only · Idempotent

Search for Swiss train stations, addresses, or points of interest by name. Returns UIC station IDs (e.g. "8503000" for Zürich HB) used by the other tools. Note: search_connections accepts station names directly, so this tool is only needed when the user explicitly asks for station info or when you need disambiguation between multiple matches.

Parameters:
- query (required): Station name to search for (e.g. "Zurich", "Bern", "Interlaken")
- limit (optional): Maximum number of results
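A sketch of a disambiguation call, capping results with `limit`; per the description above, this is only needed when a plain station name is ambiguous:

```python
# Ask for a few candidate matches for an ambiguous name like "Interlaken"
# (e.g. Interlaken Ost vs. Interlaken West).
call = {
    "name": "search_stations",
    "arguments": {
        "query": "Interlaken",
        "limit": 5,   # cap the number of candidate matches returned
    },
}
print(call)
```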
Behavior: 3/5

Annotations already declare readOnlyHint=true and idempotentHint=true, so the description adds minimal behavioral context beyond 'returns station IDs'. No contradictions; the description is fine but not enhanced.

Conciseness: 5/5

Two succinct sentences: purpose first, outcome second. No filler, every word counts.

Completeness: 4/5

Adequately covers purpose, scope (Swiss), and return value (station IDs). Lacks detail on result ordering and error handling, but that is a minor gap for a simple read-only tool.

Parameters: 3/5

Input schema coverage is 100% with clear descriptions and examples for both parameters. The description adds no extra parameter details, achieving baseline score.

Purpose: 5/5

The description clearly states the verb 'Search', the resource 'Swiss train stations, addresses, or points of interest', and the purpose 'Returns station IDs needed for other tools'. It distinguishes from sibling tools like search_connections by focusing on station lookup.

Usage Guidelines: 4/5

The description implies usage for obtaining station IDs for subsequent calls, but does not explicitly state when not to use or name alternatives. The context is clear enough given the distinct sibling tools.
