finance-mcp-server
Server Details
Stocks, crypto, FX, and portfolio math in one tool — no per-source API juggling.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across 5 of 5 tools scored.
Each tool targets a distinct financial domain: crypto prices, exchange rates, stock data, stock screening, and crypto portfolio tracking. The descriptions clearly differentiate them, even with some overlap between get_crypto_prices and track_crypto_portfolio.
All tool names follow a consistent verb_noun pattern with snake_case (e.g., get_crypto_prices, screen_stocks), making them predictable and easy to parse.
Five tools is an appropriate number for a finance server, covering core data retrieval and screening without being overwhelming or trivial.
The set covers basic stock, crypto, and forex data, plus stock screening. However, it lacks historical data, news, and order execution, which are common in finance servers, leaving minor gaps.
Available Tools
5 tools

get_crypto_prices (A, Read-only)
Fetch real-time cryptocurrency prices and 24-hour market data for multiple coins. Returns current price in USD, 24h price change percentage, market cap, and trading volume. Use for crypto portfolio tracking, price alerts, or comparing digital assets.
| Name | Required | Description | Default |
|---|---|---|---|
| coins | No | List of cryptocurrency symbols or names to fetch price data for | |
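The optional `coins` parameter maps onto a standard MCP `tools/call` request. A minimal sketch of the JSON-RPC payload an agent would send (the coin symbols are illustrative, not required values):

```python
import json

# Hypothetical MCP tools/call request for get_crypto_prices.
# The "coins" argument is optional; the symbols below are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_crypto_prices",
        "arguments": {"coins": ["BTC", "ETH", "SOL"]},
    },
}

payload = json.dumps(request)
print(payload)
```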
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Description adds value beyond annotations by specifying real-time data retrieval and the exact fields returned. Annotations already indicate readOnlyHint and openWorldHint, so no contradiction. The description enriches understanding without redundancy.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the primary action, then use cases. No wasted words; every sentence contributes to understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description adequately covers return values (price, change, market cap, volume) and usage context. No missing critical information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema covers the single parameter 'coins' with 100% description coverage, including examples. The description mentions 'multiple coins' but does not add semantic information beyond the schema, meeting the baseline of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states the tool fetches real-time cryptocurrency prices and market data, listing specific fields (current price, 24h change, market cap, volume). It distinguishes from sibling tools like get_stock_data and get_exchange_rates by focusing on crypto.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Description provides clear use cases: crypto portfolio tracking, price alerts, comparing digital assets. It does not explicitly state when not to use or mention alternatives, but the context is well-defined.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_exchange_rates (A, Read-only)
Look up current foreign exchange rates between major world currencies. Returns exchange rate, bid/ask spread, and last update timestamp. Use for travel planning, international business transactions, or forex trading decisions.
| Name | Required | Description | Default |
|---|---|---|---|
| base | No | Base currency code for rate calculation (e.g. 'USD', 'EUR', 'GBP', 'JPY') | USD |
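Because `base` defaults to USD, two USD-based quotes are enough to derive a cross rate client-side. A sketch of that arithmetic, using made-up rates rather than live data:

```python
# Deriving a cross rate from two USD-based quotes such as those
# get_exchange_rates returns. The rates below are made-up illustrative
# values (units of currency per 1 USD), not live data.
usd_rates = {"EUR": 0.92, "GBP": 0.79}

def cross_rate(rates, frm, to):
    """Rate for converting 1 unit of `frm` into `to`, via USD."""
    return rates[to] / rates[frm]

eur_gbp = cross_rate(usd_rates, "EUR", "GBP")
print(round(eur_gbp, 4))
```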
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and openWorldHint=true. The description adds value by detailing the return fields ('exchange rate, bid/ask spread, and last update timestamp'), which goes beyond the annotations. It does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences that front-load the core purpose and return information, followed by relevant use cases. No extraneous words or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one optional parameter, no output schema), the description adequately covers purpose, use cases, and return fields. It could mention whether rates are real-time or delayed, but that is a minor gap.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema covers the single parameter 'base' entirely (100% coverage) with a description and default. The description does not add additional parameter details beyond what the schema already provides, so it meets the baseline but does not enhance understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Look up current foreign exchange rates between major world currencies,' specifying the action (look up) and resource (exchange rates). It provides concrete use cases and easily distinguishes from sibling tools like get_crypto_prices or get_stock_data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit use cases ('Use for travel planning, international business transactions, or forex trading decisions'), which helps the agent understand when to apply the tool. However, it does not mention when not to use it or point to alternatives, though the context of sibling tools makes the distinction clear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_stock_data (A, Read-only)
Retrieve comprehensive stock market data for individual ticker symbols from Yahoo Finance. Returns current price, intraday high/low, 52-week range, trading volume, market capitalization, P/E ratio, earnings per share, dividend yield, and sector classification. Use for fundamental analysis, stock screening, or building financial dashboards.
| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock ticker symbol in uppercase format (e.g. 'AAPL' for Apple, 'MSFT' for Microsoft, 'GOOGL' for Alphabet) | |
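The schema requires the ticker in uppercase format, so an agent may want a pre-flight normalizer before calling the tool. A hypothetical sketch (the 1-5-letter pattern is an assumption; real exchanges also use suffixed symbols):

```python
import re

# Hypothetical pre-flight check for the `ticker` argument, which the
# schema requires in uppercase format (e.g. 'AAPL', 'MSFT', 'GOOGL').
# The 1-5-letter pattern is an illustrative assumption.
TICKER_RE = re.compile(r"^[A-Z]{1,5}$")

def normalize_ticker(raw: str) -> str:
    ticker = raw.strip().upper()
    if not TICKER_RE.fullmatch(ticker):
        raise ValueError(f"invalid ticker: {raw!r}")
    return ticker

print(normalize_ticker("aapl"))  # prints AAPL
```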
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, and description adds details on returned data (price, range, volume, etc.) and source (Yahoo Finance), providing useful behavioral context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: first sentence clearly states action and source, second lists returns and use cases. Front-loaded with purpose, no redundant words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given one parameter and no output schema, description covers purpose, return fields, and use cases. Could mention data freshness or limitations, but sufficient for this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has one parameter 'ticker' with a comprehensive description including examples. Schema coverage is 100%, so description does not need to add more; baseline 3 applies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states 'Retrieve comprehensive stock market data for individual ticker symbols from Yahoo Finance', specifying the verb, resource, and source. It distinguishes from siblings which cover crypto, exchange rates, screening, and crypto portfolios.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly suggests use cases: 'Use for fundamental analysis, stock screening, or building financial dashboards.' While it doesn't explicitly exclude alternatives, the sibling context implies appropriate choices.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
screen_stocks (A, Read-only)
Filter and screen stocks based on financial criteria like market cap range, sector, P/E ratio thresholds, dividend yield, or revenue growth. Returns matching ticker symbols with key metrics. Use for value investing, growth stock identification, or portfolio rebalancing analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| criteria | No | Filtering criteria as key-value pairs (e.g. {'sector': 'technology', 'market_cap_min': 1000000000, 'pe_ratio_max': 25}) | |
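The `criteria` keys above imply min/max threshold semantics. A sketch of that filtering logic applied to a small made-up universe (real data comes from the server; the matching rules here are an assumption based on the key names):

```python
# Sketch of the screening semantics implied by the `criteria` parameter,
# applied to a made-up universe. Key names ending in _min/_max are
# assumed to be inclusive thresholds.
universe = [
    {"ticker": "AAA", "sector": "technology", "market_cap": 2_000_000_000, "pe_ratio": 18},
    {"ticker": "BBB", "sector": "energy", "market_cap": 5_000_000_000, "pe_ratio": 9},
    {"ticker": "CCC", "sector": "technology", "market_cap": 500_000_000, "pe_ratio": 40},
]

criteria = {"sector": "technology", "market_cap_min": 1_000_000_000, "pe_ratio_max": 25}

def matches(stock, criteria):
    if "sector" in criteria and stock["sector"] != criteria["sector"]:
        return False
    if "market_cap_min" in criteria and stock["market_cap"] < criteria["market_cap_min"]:
        return False
    if "pe_ratio_max" in criteria and stock["pe_ratio"] > criteria["pe_ratio_max"]:
        return False
    return True

hits = [s["ticker"] for s in universe if matches(s, criteria)]
print(hits)  # only AAA passes all three filters
```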
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Adds context beyond annotations: returns matching ticker symbols with key metrics. No contradictions with readOnlyHint=true or openWorldHint=true. It does not detail permissions or rate limits, but that is acceptable for a read-only screener.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two-sentence description is concise and front-loaded. First sentence explains action, second sentence provides use cases. Could be slightly more structured but no waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately describes input (criteria) and output (ticker symbols with key metrics). Given no output schema, this is sufficient. Sibling tools are different, so no confusion.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema covers 100% of parameters; description enriches by providing concrete example criteria (sector, market cap, P/E ratio, etc.), adding meaning beyond the generic schema description.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it filters/screens stocks based on financial criteria, naming specific metrics and use cases, distinguishing it from sibling tools like get_stock_data (single stock) and get_crypto_prices.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly lists use cases (value investing, growth stock identification, portfolio rebalancing). Lacks explicit when-not-to-use or alternatives, but siblings are distinct enough to imply usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
track_crypto_portfolio (A, Read-only)
Monitor real-time prices and performance metrics for a portfolio of cryptocurrency holdings. Returns current price per coin, 24-hour percentage change, portfolio value summary, and top movers. Use for active portfolio management and price monitoring.
| Name | Required | Description | Default |
|---|---|---|---|
| coins | Yes | List of cryptocurrency codes in the portfolio to track in real time | |
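The "portfolio value summary" and "top movers" fields this tool returns come down to simple aggregation over per-coin quotes. A sketch of that math with invented holdings and prices (all numbers are illustrative):

```python
# Portfolio math sketched from the fields track_crypto_portfolio
# returns (price per coin, 24h change, value summary, top movers).
# All quantities and prices below are invented for illustration.
holdings = {"BTC": 0.5, "ETH": 4.0}  # coin -> quantity held
quotes = {
    "BTC": {"price": 60000.0, "change_24h": 2.5},
    "ETH": {"price": 3000.0, "change_24h": -1.2},
}

# Total portfolio value: sum of quantity * current price.
total_value = sum(qty * quotes[c]["price"] for c, qty in holdings.items())

# Top mover: the holding with the largest absolute 24h change.
top_mover = max(holdings, key=lambda c: abs(quotes[c]["change_24h"]))

print(total_value, top_mover)
```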
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only and open world. The description adds output details but omits behavioral specifics such as rate limits or authentication requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, concise and front-loaded with purpose. No waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple 1-parameter tool with no output schema, the description covers purpose, outputs, and usage context adequately.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with clear parameter description. The overall description adds context about outputs but doesn't significantly extend parameter meaning beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states it monitors real-time prices and performance for crypto holdings, specifying outputs. Slightly ambiguous relative to the sibling get_crypto_prices, but distinguished by its portfolio focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides a usage context ('active portfolio management and price monitoring') but does not mention when to avoid or contrast with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently

For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.