Glama

BART Real-Time Transit

Server Details

Real-time BART departures, trip planning, fares, stations, and advisories.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: srivastsh/bay-area-transit-mcp
GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

(Connection flow: MCP client → Glama MCP Gateway → MCP server)

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4/5 across 6 of 6 tools scored. Lowest: 3.3/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: advisories for service issues, departures for real-time schedules, fare for pricing, map for visualization, stations for reference data, and trip for journey planning. An agent can easily distinguish between them based on their specific functions.

Naming Consistency: 5/5

All tools follow a consistent 'bart_' prefix with descriptive nouns (advisories, departures, fare, map, stations, trip), creating a predictable and readable pattern. There are no deviations in naming conventions across the set.

Tool Count: 5/5

With 6 tools, this server is well-scoped for its transit domain, covering key aspects like real-time info, planning, and reference data without being overwhelming. Each tool earns its place by addressing a core user need in the BART ecosystem.

Completeness: 4/5

The toolset covers most essential transit operations: advisories, departures, fare calculation, station lookup, and trip planning. A minor gap exists in the lack of tools for managing user-specific data (e.g., saved trips or alerts), but core workflows are fully supported without dead ends.

Available Tools

6 tools
bart_advisories: BART Advisories (Grade: A)
Read-only, Idempotent

Get current BART service advisories, delays, and elevator/escalator status.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover key behavioral traits (read-only, open-world, idempotent, non-destructive), but the description adds context by specifying the types of status information retrieved (advisories, delays, elevator/escalator status). This enhances understanding beyond the annotations without contradicting them.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose ('Get current BART service advisories') and adds specific details without waste. Every word contributes to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (0 parameters, no output schema) and rich annotations, the description is largely complete for a read-only data retrieval tool. However, it could slightly improve by hinting at output format or update frequency, though annotations cover most behavioral aspects.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the baseline is high. The description compensates by implicitly indicating no filtering or input is needed, aligning with the empty schema. It adds semantic clarity about the data scope without redundant parameter details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get') and resources ('current BART service advisories, delays, and elevator/escalator status'), distinguishing it from siblings like bart_departures (real-time departures) or bart_fare (fare calculations). It precisely defines the scope of information retrieved.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for real-time service status, but does not explicitly state when to use this tool versus alternatives like bart_departures (for departure times) or bart_trip (for trip planning). No exclusions or prerequisites are mentioned, leaving some ambiguity about optimal use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bart_departures: BART Real-Time Departures (Grade: B)
Read-only, Idempotent

Get real-time departures from a BART station.

Args:

  • station: 4-letter abbreviation (e.g. EMBR, 24TH)

  • direction: Optional 'n' or 's'

Parameters (JSON Schema)

  • station (required): Station code (e.g. EMBR, 24TH)

  • direction (optional): Direction filter
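As a sketch of how an MCP client might invoke this tool, the snippet below builds the JSON-RPC 2.0 request envelope that the MCP `tools/call` convention uses. The payload is illustrative only, not captured from this server; the station code EMBR (Embarcadero) is a real BART abbreviation used as an example value.

```python
import json

def make_tool_call(name: str, arguments: dict, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request envelope for the MCP tools/call method."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    })

# Request northbound departures from Embarcadero (station code EMBR).
payload = make_tool_call("bart_departures", {"station": "EMBR", "direction": "n"})
print(payload)
```

The actual transport (here, Streamable HTTP) wraps this envelope; the `arguments` object simply mirrors the parameter table above.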
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds minimal behavioral context beyond this, such as implying real-time data retrieval, but doesn't detail rate limits, authentication needs, or response format. With annotations providing strong coverage, the description meets the lower bar but adds limited extra value.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise and well-structured: a clear purpose statement followed by brief parameter documentation. Every sentence earns its place with no wasted words, and it's front-loaded with the core functionality. This efficiency makes it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no output schema) and rich annotations, the description is adequate but incomplete. It covers the basic purpose and parameters but lacks details on output format, error handling, or integration with sibling tools. For a read-only tool with good annotations, this is minimally viable but leaves gaps in full contextual understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for both parameters (station and direction). The description adds examples (e.g., 'EMBR, 24TH' for station and 'n' or 's' for direction), which provides some semantic context beyond the schema. However, this is minimal enhancement, so it meets the baseline score for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get real-time departures from a BART station.' It specifies the verb ('Get') and resource ('real-time departures from a BART station'), making the function unambiguous. However, it doesn't explicitly differentiate from sibling tools like bart_trip or bart_stations, which reduces it from a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like bart_advisories for alerts or bart_trip for trip planning, nor does it specify prerequisites or exclusions. The only usage hint is the parameter documentation, which doesn't address tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bart_fare: BART Fare (Grade: A)
Read-only, Idempotent

Get fare between two BART stations.

Parameters (JSON Schema)

  • origin (required): Origin station code

  • destination (required): Destination station code
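A natural two-step workflow sketch, again using the MCP `tools/call` JSON-RPC envelope: list stations first (no arguments), then pass two of the returned codes to bart_fare. EMBR and SFIA are real BART station codes used here only as illustrative values, not output captured from this server.

```python
import json

# Step 1: fetch the station list (bart_stations takes no arguments).
# Step 2: use two station codes from that list to ask for a fare.
calls = [
    {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
     "params": {"name": "bart_stations", "arguments": {}}},
    {"jsonrpc": "2.0", "id": 2, "method": "tools/call",
     "params": {"name": "bart_fare",
                "arguments": {"origin": "EMBR", "destination": "SFIA"}}},
]
for call in calls:
    print(json.dumps(call))
```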
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint, openWorldHint, idempotentHint, and destructiveHint, covering safety and behavior. The description adds no additional behavioral context (e.g., rate limits, data freshness, or error handling), but it doesn't contradict annotations, so it meets the lower bar with minimal value added.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with zero waste, efficiently conveying the core purpose without unnecessary details, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity, rich annotations covering safety and behavior, and no output schema, the description is mostly complete. However, it lacks details on return values (e.g., fare format or units), which would be helpful since there's no output schema, slightly reducing completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with clear descriptions for origin and destination parameters. The description adds no extra meaning beyond the schema, such as station code formats or examples, so it meets the baseline of 3 without compensating for gaps.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get fare') and the resource ('between two BART stations'), distinguishing it from sibling tools like bart_advisories, bart_departures, bart_map, bart_stations, and bart_trip, which focus on different aspects of BART information.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for fare calculation between stations but does not explicitly state when to use this tool versus alternatives like bart_trip (which might provide broader trip details) or bart_stations (which lists stations). No exclusions or prerequisites are mentioned, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bart_map: Open BART Interactive Map (Grade: A)
Read-only, Idempotent

Open interactive BART route map. View departures, plan trips.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare this as read-only, non-destructive, idempotent, and open-world, so the description doesn't need to repeat those safety traits. However, it adds valuable context about the interactive nature of the map and the specific functionalities (view departures, plan trips), which goes beyond what annotations provide. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise with just two short phrases ('Open interactive BART route map' and 'View departures, plan trips'), front-loading the core purpose. Every word earns its place without any waste or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, no output schema) and rich annotations covering safety and behavior, the description is reasonably complete. It explains what the tool does in practical terms. However, it could be more comprehensive by clarifying the interactive map's scope or how it differs from sibling tools.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description appropriately doesn't discuss parameters, as none exist, and instead focuses on the tool's functionality. This meets the baseline of 4 for zero-parameter tools.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('Open', 'View', 'plan') and resource ('BART interactive map'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'bart_departures' or 'bart_trip' which also involve viewing departures and trip planning.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context for viewing departures and planning trips, but doesn't provide explicit guidance on when to use this tool versus alternatives like 'bart_departures' for departure info or 'bart_trip' for trip planning. No exclusions or clear alternatives are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bart_stations: List BART Stations (Grade: A)
Read-only, Idempotent

List all BART stations with abbreviation codes and locations. Use this to look up station codes needed by other tools.

Parameters (JSON Schema)

No parameters

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, openWorldHint=true, idempotentHint=true, and destructiveHint=false, covering safety and behavior. The description adds value by explaining the tool's role in providing reference data for other tools, which is useful context beyond what annotations provide. No contradictions exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first states the purpose, and the second provides usage guidance. It's front-loaded with essential information and appropriately sized for a simple tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only tool with 0 parameters, rich annotations, and no output schema, the description is complete. It explains what the tool does, when to use it, and its role in the broader context, covering all necessary aspects without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, maintaining focus on the tool's purpose and usage. Baseline is 4 for zero parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'List all BART stations with abbreviation codes and locations.' It specifies the verb ('List'), resource ('BART stations'), and scope ('all'), and distinguishes from siblings by focusing on station metadata rather than advisories, departures, fares, maps, or trips.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool: 'Use this to look up station codes needed by other tools.' This provides clear guidance that this tool is for obtaining reference data (station codes) that are prerequisites for other operations, distinguishing it from siblings that perform different functions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

bart_trip: Plan BART Trip (Grade: A)
Read-only, Idempotent

Plan a trip between two BART stations. Returns schedule, fares, and transfer info.

Parameters (JSON Schema)

  • date (optional): Date in MM/DD/YYYY format

  • time (optional): Departure time, e.g. '5:30pm'

  • origin (required): Origin station code

  • destination (required): Destination station code
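A call exercising all four documented parameters might look like the sketch below, using the MCP `tools/call` JSON-RPC envelope. The station codes (24TH, EMBR) are real BART abbreviations and the date is an arbitrary example; both the date (MM/DD/YYYY) and time ('5:30pm') values follow the formats the parameter table documents.

```python
import json

# Required origin/destination plus the optional date and time filters.
# All values are illustrative, in the documented formats.
arguments = {
    "date": "07/04/2025",
    "time": "5:30pm",
    "origin": "24TH",
    "destination": "EMBR",
}
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/call",
           "params": {"name": "bart_trip", "arguments": arguments}}
print(json.dumps(request))
```

Omitting `date` and `time` from `arguments` would plan a trip for the next available departure, per the optional markings above.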
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, openWorldHint=true, and idempotentHint=true, covering safety and idempotency. The description adds value by specifying the return types (schedule, fares, transfer info), which provides useful context beyond annotations. However, it doesn't disclose behavioral details like rate limits, authentication needs, or error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core purpose and output. Every word earns its place, with no redundant or vague phrasing, making it easy for an agent to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a read-only tool with full schema coverage and no output schema, the description is mostly complete—it clarifies the tool's purpose and output scope. However, it could improve by mentioning optional parameters (date/time) or potential limitations (e.g., real-time vs. scheduled data), given the lack of output schema details.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with all parameters (date, time, origin, destination) well-documented in the schema. The description doesn't add any parameter-specific semantics beyond what the schema provides, such as explaining station code formats or date/time constraints. Baseline 3 is appropriate given the comprehensive schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Plan a trip'), resource ('between two BART stations'), and output scope ('schedule, fares, and transfer info'). It distinguishes from siblings like bart_advisories (service alerts), bart_departures (real-time departures), bart_fare (fare-only queries), bart_map (station maps), and bart_stations (station listings).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for trip planning between stations, which provides clear context. However, it doesn't explicitly state when to use this tool versus alternatives like bart_departures (for real-time departures) or bart_fare (for fare-only queries), nor does it mention exclusions (e.g., for non-BART transit).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
