nordic-financial-mcp
Server Details
Company filings, reports, press releases, and macro indicators for Nordic markets: NO, SE, DK, FI.
- Status: Healthy
- Last Tested
- Transport: Streamable HTTP
- URL
- Repository: AIDataNordic/nordic_financial_mcp
- GitHub Stars: 0
- Server Listing: Nordic Economics MCP
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.9/5 across 4 of 4 tools scored. Lowest: 3.1/5.
Each tool has a clearly distinct scope: company registry lookup (Brønnøysund), live market data (Yahoo Finance), document/filing search, and health check. No functional overlap exists between get_company_info, get_market_data, and search_filings.
Three tools follow a consistent verb_noun pattern (get_company_info, get_market_data, search_filings). Only 'ping' deviates as a standalone verb without the noun component, though it is a standard convention for health checks.
Four tools is appropriate for this focused domain: company lookup, market data, document search, and connectivity test. While slightly minimal, each tool earns its place and covers distinct aspects of Nordic financial data retrieval without bloat.
Notable gap: get_company_info only supports Norwegian companies (Brønnøysund) despite the 'Nordic' server name, lacking Swedish, Danish, and Finnish registry lookups. Additionally, there is no company listing/browse capability—only direct lookup by organization number.
Available Tools
4 tools

get_company_info (quality grade: A)
Look up a Norwegian company in the Brønnøysund Register (Enhetsregisteret).
Args:
- orgnr: Norwegian organisation number without hyphens, e.g. 923609016.

Returns: Dict with company name, status, and registered business address.
| Name | Required | Description | Default |
|---|---|---|---|
| orgnr | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
No output parameters
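As a pre-flight check, a client can validate the orgnr locally before calling the tool. The sketch below applies the standard MOD11 checksum used for Norwegian organisation numbers; the helper name is illustrative and not part of this server.

```python
def is_valid_orgnr(orgnr: str) -> bool:
    """Check a 9-digit Norwegian organisation number via its MOD11 checksum."""
    if len(orgnr) != 9 or not orgnr.isdigit():
        return False
    weights = [3, 2, 7, 6, 5, 4, 3, 2]
    total = sum(int(d) * w for d, w in zip(orgnr[:8], weights))
    remainder = total % 11
    check = 0 if remainder == 0 else 11 - remainder
    # A remainder of 1 gives check digit 10, which no valid number can have.
    return check != 10 and check == int(orgnr[8])
```

The example orgnr 923609016 from the tool description passes this check, while a hyphenated form such as '923-609-016' is rejected, matching the "without hyphens" constraint.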
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses return value structure ('Dict with company name, status...'), but omits error handling (not found scenarios), rate limits, or authentication requirements common in registry APIs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Efficient Args/Returns structure with zero waste. Four sentences covering purpose, parameter detail, and output format. Front-loaded action verb in first sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a single-parameter lookup tool. Output schema exists, yet description helpfully summarizes return content. Minor gap: lacks error case documentation (e.g., invalid orgnr format or non-existent company handling).
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Excellent compensation for 0% schema coverage. Adds critical semantics: 'Norwegian organisation number without hyphens' explains format constraint, and example '923609016' clarifies expected pattern—information completely absent from schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Specific verb 'Look up' + resource 'Norwegian company' + scope 'Brønnøysund Register (Enhetsregisteret)'. Naming the specific registry distinguishes it from get_market_data and search_filings siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implies usage via specific registry domain (Enhetsregisteret), but lacks explicit when-to-use guidance or comparison to siblings like search_filings. Agent must infer appropriateness from registry name.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
parse_pdf_to_text (quality grade: A)
Download a PDF from a URL and extract all text as a single string, page by page.
This is useful for agents that need to read report attachments, press releases, or any PDF content that is not directly searchable in the main database.
Args:
- pdf_url: Direct URL to the PDF file (e.g. https://example.com/report.pdf)

Returns: All text from the PDF with page separators, or an error message.
| Name | Required | Description | Default |
|---|---|---|---|
| pdf_url | Yes | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
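The "single string, page by page" return shape can be sketched as a join over per-page text. The '--- Page N ---' separator below is an assumption for illustration; the server's actual separator format is not documented.

```python
def join_pages(pages: list[str]) -> str:
    """Join per-page text into one string with page markers.

    Assumed separator format; the real server output may differ.
    """
    parts = [f"--- Page {i} ---\n{text.strip()}"
             for i, text in enumerate(pages, start=1)]
    return "\n\n".join(parts)
```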
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full disclosure burden. It successfully documents the extraction method ('page by page'), output format ('single string' with 'page separators'), and error handling ('or an error message'). Minor gaps remain regarding operational constraints like file size limits, timeouts, or authentication requirements.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with logical flow (action → use case → args → returns). Each sentence adds value. The Args/Returns documentation style is clear, though the 'This is useful for...' clause could be more concise. Appropriately front-loaded with the core operation.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (single parameter, simple read operation), the description provides comprehensive coverage of intent, parameters, and return values. Minor improvements would include noting any file size limitations or timeout behaviors for network downloads.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Despite 0% schema description coverage, the description fully compensates by documenting the 'pdf_url' parameter with both semantic meaning ('Direct URL to the PDF file') and a concrete example ('https://example.com/report.pdf'), providing sufficient guidance for correct invocation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Download a PDF from a URL and extract all text as a single string, page by page') using distinct verbs and resources. It effectively differentiates from sibling search_filings by noting the tool is for content 'not directly searchable in the main database.'
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides clear positive guidance on when to use ('read report attachments, press releases, or any PDF content'), implicitly contrasting with database search tools. However, it does not explicitly name the alternative tool (search_filings) or state negative conditions ('do not use for...').
ping (quality grade: B)
Simple connectivity test. Returns a greeting to confirm the server is running.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | | world |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates the tool returns a greeting to confirm server status, but omits details about side effects, rate limits, or what the greeting format looks like.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two efficient sentences with zero waste. The first establishes purpose, the second explains the return value. Every word earns its place and the description is appropriately sized for a simple utility tool.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
While the core behavior is documented and an output schema exists (reducing the need to describe returns), the description is incomplete due to the undocumented 'name' parameter. For a simple 1-parameter tool, this gap prevents a higher score.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0% and the description fails to compensate by explaining the 'name' parameter (which customizes the greeting). The parameter's purpose and default value usage are not mentioned in the text.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly identifies this as a connectivity test with a specific return value (a greeting), distinguishing it from the data-oriented siblings. It does not explicitly contrast when to use this versus the data retrieval alternatives.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus the sibling data retrieval tools, nor are prerequisites mentioned. The phrase 'connectivity test' only vaguely implies usage context without specific direction.
search_filings (quality grade: A)
Search the Nordic financial database for company filings, press releases and macroeconomic summaries.
The database contains:
COMPANY FILINGS
Annual and quarterly reports (IR PDFs): SalMar (SALM), Mowi (MOWI), Lerøy (LSG), Grieg Seafood (GSF), Austevoll (AUSS), Bakkafrost (BAKKA), Aker BP (AKRBP), Odfjell (ODF)
SEC EDGAR filings (Form 20-F annual reports and 6-K current reports): Equinor (EQNR), Höegh Autoliners (HSHP), Okeanis Eco Tankers (ECO), BW LPG (BWLP), Flex LNG (FLNG), Hafnia (HAFN), Cadeler (CDLR), Scorpio Tankers (STNG), SFL Corporation (SFL), Golden Ocean (GOGL), Frontline (FRO), Golar LNG (GLNG), Nordic American Tankers (NAT), Atlas Corp (ATCO)
PRESS RELEASES
GlobeNewswire RSS (continuous, hourly updates Mon–Fri): Norwegian, Swedish, Danish, and Finnish listed companies (NO/SE/DK/FI)
MACROECONOMIC SUMMARIES
Quarterly macro summaries covering key indicators per country:
- Norway (NO): policy rate, FX rates, CPI, house prices, credit growth, electricity price, salmon price, GDP components
- Sweden (SE): policy rate, house price index, household credit
- Denmark (DK): policy rate, house price index, household loans, electricity price
- Finland (FI): house price index, household debt-to-income ratio, electricity price
Use report_type='macro_summary' and country='NO'/'SE'/'DK'/'FI' to filter. Use fiscal_year and a quarter reference in your query, e.g. "Norwegian housing market Q1 2024".
Args:
- query: What you are looking for, e.g. 'salmon price Q3', 'fleet utilization', 'dividend policy', 'Norwegian housing market 2024 Q1', 'Swedish policy rate inflation 2023'
- ticker: Optional. Filter by company ticker, e.g. 'SALM', 'EQNR'
- fiscal_year: Optional. Filter by year, e.g. 2024
- report_type: Optional. One of 'annual_report' (Nordic IR annual reports, PDF), 'quarterly_report' (quarterly/interim reports, PDF), 'annual_report_20f' (SEC Form 20-F), '6k' (SEC Form 6-K), 'press_release' (GlobeNewswire press releases), 'macro_summary' (quarterly macroeconomic summaries)
- sector: Optional. One of 'seafood' (seafood companies), 'energy' (energy / oil & gas), 'shipping' (shipping companies)
- country: Optional. Filter by country code: 'NO', 'SE', 'DK' or 'FI'
- limit: Number of results after reranking (default 5, max 20)
Returns: List of relevant text excerpts with metadata, reranked by relevance. Each result includes rerank_score, vector_score, company, ticker, country, fiscal_year, report_type, period and the full text chunk.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | | |
| query | Yes | | |
| sector | No | | |
| ticker | No | | |
| country | No | | |
| fiscal_year | No | | |
| report_type | No | | |
Output Schema
| Name | Required | Description |
|---|---|---|
| result | Yes | |
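Given the enumerated filter values above, a caller can validate arguments before invoking search_filings. This builder is a hypothetical client-side helper, not part of the server; the value sets are taken from the tool description.

```python
# Enumerations copied from the search_filings tool description.
REPORT_TYPES = {"annual_report", "quarterly_report", "annual_report_20f",
                "6k", "press_release", "macro_summary"}
COUNTRIES = {"NO", "SE", "DK", "FI"}
SECTORS = {"seafood", "energy", "shipping"}

def build_search_args(query, *, ticker=None, fiscal_year=None,
                      report_type=None, sector=None, country=None, limit=5):
    """Assemble a search_filings argument dict, rejecting values
    outside the enumerations documented by the tool."""
    if report_type is not None and report_type not in REPORT_TYPES:
        raise ValueError(f"unknown report_type: {report_type}")
    if country is not None and country not in COUNTRIES:
        raise ValueError(f"unknown country: {country}")
    if sector is not None and sector not in SECTORS:
        raise ValueError(f"unknown sector: {sector}")
    if not 1 <= limit <= 20:
        raise ValueError("limit must be between 1 and 20")
    args = {"query": query, "limit": limit}
    # Only include optional filters that were actually set.
    for key, value in [("ticker", ticker), ("fiscal_year", fiscal_year),
                       ("report_type", report_type), ("sector", sector),
                       ("country", country)]:
        if value is not None:
            args[key] = value
    return args
```

For the macro example in the description, build_search_args('Norwegian housing market Q1 2024', report_type='macro_summary', country='NO', fiscal_year=2024) yields a payload with the default limit of 5.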
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided, so description carries full burden. Discloses data scope (specific companies listed, Nordic countries only), update frequency ('hourly updates Mon–Fri'), and processing behavior ('reranked by relevance'). Returns section documents output structure including metadata fields. Missing rate limits or error conditions.
Is the description appropriately sized, front-loaded, and free of redundancy?
Well-structured with clear headers (COMPANY FILINGS, PRESS RELEASES, MACROECONOMIC SUMMARIES, Args, Returns). Front-loads purpose. Length is justified by the need to enumerate specific covered companies and macro indicators. Args section mimics Python docstring for readability.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Comprehensive given 7 parameters with 0% schema coverage and rich output. Covers: database contents (specific tickers, countries), query patterns, output format with metadata fields, and filtering combinations. With output schema present in Returns section, description appropriately completes the picture without redundancy.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema has 0% description coverage, requiring full compensation. Description excels: provides 5 usage examples for 'query', valid ticker examples, enumerated options with semantic meaning for report_type (6 options) and sector (3 options), country codes mapped to full names, and default/max values for limit. Fully compensates for empty schema.
Does the description clearly state what the tool does and how it differs from similar tools?
Opens with specific verb+resource: 'Search the Nordic financial database for company filings, press releases and macroeconomic summaries.' Immediately distinguishes from siblings (get_company_info, get_market_data) by focusing on historical document retrieval rather than current market data or static company info.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Extensive examples under Args (e.g., 'salmon price Q3', 'Norwegian housing market 2024 Q1') and explicit parameter combinations ('Use report_type='macro_summary' and country='NO'...'). Lacks explicit 'when not to use' or sibling alternatives naming, but content enumeration provides clear contextual boundaries.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this server lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
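Before publishing, the glama.json file can be sanity-checked locally. This is an ad-hoc check based only on the structure shown above, not an official Glama validator; it cannot confirm that the email matches your Glama account.

```python
import json

def check_glama_json(raw: str) -> list[str]:
    """Return a list of problems with a /.well-known/glama.json payload.

    Ad-hoc check based on the documented structure; not an
    official Glama tool.
    """
    try:
        data = json.loads(raw)
    except json.JSONDecodeError as exc:
        return [f"not valid JSON: {exc}"]
    problems = []
    if data.get("$schema") != "https://glama.ai/mcp/schemas/connector.json":
        problems.append("missing or unexpected $schema")
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("maintainers must be a non-empty list")
    elif not all(isinstance(m, dict) and "@" in str(m.get("email", ""))
                 for m in maintainers):
        problems.append("each maintainer needs an email address")
    return problems
```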
For users:
- Full audit trail: every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control: enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management: store and rotate API keys and OAuth tokens in one place
- Change alerts: get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption: public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics: see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback: users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.