
Server Details

Semantic search over Nordic filings, press releases, macro data and electricity prices.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: AIDataNordic/nordic_financial_mcp
GitHub Stars: 0
Server Listing
Nordic Financial MCP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.2/5 across 6 of 6 tools scored. Lowest: 3.1/5.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clear, distinct purpose: due_diligence_report orchestrates multi-query searches, get_company_info retrieves registry data, get_current_power_price fetches electricity prices, parse_pdf_to_text extracts PDF text, ping checks connectivity, and search_filings handles general financial database searches. No two tools overlap significantly.

Naming Consistency: 4/5

Four tools follow a verb_noun pattern (get_company_info, get_current_power_price, parse_pdf_to_text, search_filings), while due_diligence_report and ping deviate. This is mostly consistent but with minor outliers.

Tool Count: 5/5

With 6 tools covering registry lookup, power prices, PDF parsing, connectivity, primary search, and due diligence orchestration, the count is well-scoped for a Nordic financial MCP. No tool feels redundant or missing.

Completeness: 4/5

The tool set covers core financial filing search, registry info, and power prices. Minor gaps exist, such as lack of explicit tools for historical market data or company lists by sector, but search_filings can partially address these via queries.

Available Tools

7 tools
analyze_company (A)
Read-only

AI-powered company analysis using semantic search over Nordic financial data.

Orchestrates multiple searches internally and returns a synthesized narrative answer with source citations. Covers annual reports, quarterly reports, press releases and macroeconomic context for Nordic listed companies.

Use this when you want a synthesized answer rather than raw search chunks. For raw data access, use search_filings or due_diligence_report instead.

Args:
- company: Company name or ticker
- question: What you want to know about the company
- model: 'haiku' (default) or 'sonnet'

Parameters (JSON Schema)
Name | Required | Description | Default
model | No | Model: 'haiku' (default, fast, ~$0.07/call) or 'sonnet' (more capable, ~$0.24/call) | haiku
company | Yes | Company name or ticker, e.g. 'Equinor' or 'EQNR' |
question | Yes | Question to answer, e.g. 'How did margins develop 2022-2024?' or 'What are the main risk factors?' |

Output Schema

Name | Required | Description
result | Yes |
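
To make the call shape concrete, here is a minimal sketch of the JSON-RPC tools/call request an MCP client could send for this tool; the company, question and model values are illustrative placeholders rather than anything taken from the listing.

# Illustrative MCP tools/call request for analyze_company.
# Only 'company' and 'question' are required; 'model' defaults to 'haiku'.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "analyze_company",
        "arguments": {
            "company": "Equinor",  # company name or ticker, e.g. 'EQNR'
            "question": "How did margins develop 2022-2024?",
            "model": "haiku",  # optional; 'sonnet' is the more capable, pricier option
        },
    },
}
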
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, and the description adds context by stating it 'orchestrates multiple searches internally' and returns synthesized narrative. This goes beyond annotations, though it could mention error behavior or data recency. No contradiction observed.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Description is front-loaded with purpose, followed by usage guidelines and parameter details. Every sentence adds value, no fluff. Proper structure with paragraphs and bullet-like list.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (AI orchestration) and presence of output schema, the description covers purpose, usage, parameters, and behavior sufficiently. Lacks edge cases or error handling, but overall complete for agent selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, and the description's Args section adds value with example values for company and question (e.g., 'Equinor' or 'EQNR') and pricing hints for model. While partly redundant, it enhances usability beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'AI-powered company analysis using semantic search over Nordic financial data' and returns a 'synthesized narrative answer with source citations.' It specifies coverage of annual reports, quarterly reports, press releases, and macroeconomic context, distinguishing it from siblings like search_filings and due_diligence_report.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly guides when to use: 'Use this when you want a synthesized answer rather than raw search chunks.' It directly names alternatives: 'For raw data access, use search_filings or due_diligence_report instead.' This clearly differentiates usage context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

due_diligence_report (A)
Read-only

Run multiple targeted searches and return results grouped by section for due diligence.

The agent defines all sections and queries — this tool does not decide what is relevant. Before calling, reason about which topics and data sources matter for this specific company: financial metrics, risk factors, sector-specific macro drivers (e.g. freight rates for shipping, power prices for aluminium smelters), recent press releases, peer context, etc. Formulate one query per section.

Each query is run independently as a full hybrid search (dense + sparse + rerank).

IMPORTANT — use 'ticker' on company-specific sections to avoid false positives. Without a ticker filter, documents that merely mention the company (e.g. as a customer or competitor) can rank above actual filings from that company. Omit 'ticker' only for sections where cross-company results are intentional, such as sector macro context or peer comparisons.

Args:
- company: Company name, used for metadata only (not a filter).
- sections: Up to 8 sections. Example:
  [
    {"name": "financials", "query": "Equinor revenue EBITDA operating profit 2024", "ticker": "EQNR"},
    {"name": "risk", "query": "Equinor climate regulatory risk stranded assets", "ticker": "EQNR"},
    {"name": "macro", "query": "Brent crude oil price energy sector Norway 2024", "limit": 3},
    {"name": "news", "query": "Equinor press release dividend acquisition 2024", "ticker": "EQNR"}
  ]

Returns: Dict with 'company', 'generated_at', and 'sections' — one entry per requested section with its name and results (same format as search_filings). Sections with no results return an empty list.

Parameters (JSON Schema)
Name | Required | Description | Default
company | Yes | Company name to research, e.g. 'Equinor', 'Norsk Hydro', 'Aker BP' |
sections | Yes | List of section dicts. Each must have 'name' (str) and 'query' (str). Optional: 'ticker' (str, filters results to that company), 'limit' (int, default 5, max 10). Maximum 8 sections. |

Output Schema


No output parameters
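
As a sketch of how an agent might fill in the sections parameter, the example below reuses queries from the description above; the ticker filter is applied to company-specific sections and deliberately omitted from the macro section, as the guidance recommends.

# Illustrative arguments for due_diligence_report (sections are agent-defined).
arguments = {
    "company": "Equinor",  # metadata only, not a filter
    "sections": [
        {"name": "financials", "query": "Equinor revenue EBITDA operating profit 2024", "ticker": "EQNR"},
        {"name": "risk", "query": "Equinor climate regulatory risk stranded assets", "ticker": "EQNR"},
        # No ticker here: cross-company macro context is intentional.
        {"name": "macro", "query": "Brent crude oil price energy sector Norway 2024", "limit": 3},
    ],
}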

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond annotations (readOnlyHint, openWorldHint), the description discloses that each query runs independently as hybrid search (dense+sparse+rerank) and explains the return dict structure. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (purpose, guidance, args, returns). Each sentence adds value, though could be slightly more concise. Properly front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of multi-section due diligence, the description is thorough. Covers return format, query independence, and optional parameters. Output schema exists but dict description suffices.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but the description adds meaning: company is metadata-only, sections include optional ticker and limit with defaults and max. The example helps clarify usage beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool runs multiple targeted searches and returns results grouped by section for due diligence. It distinguishes from siblings like search_filings by emphasizing the aggregated, section-structured output and the agent's role in defining queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when to use, including formulation of queries, use of ticker to avoid false positives, and when to omit ticker. Does not explicitly compare to siblings but context implies it's for due diligence versus simpler searches.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_company_info (A)
Read-only

Look up a company in the official business registry for Norway, Denmark or Finland.

Use this to retrieve authoritative registration data (legal name, status, address) for a known organisation number. Do not use for Sweden (SE) — use search_filings with country='SE' instead, as Bolagsverket integration is not yet available. Do not use to discover tickers or ISIN codes — use search_filings for that.

Args:
- identifier: Organisation/business/CVR number. Format varies by country:
  NO: 9-digit organisation number, e.g. 923609016 (Equinor)
  DK: 8-digit CVR number, e.g. 22756214 (Maersk)
  FI: Business ID with hyphen, e.g. 0112038-9 (Nokia)
- country: Two-letter country code: 'NO' (default), 'DK', or 'FI'.

Returns: Dict with company name, status and registered business address. Returns {'error': ''} if the company is not found, the identifier format is invalid, or the upstream registry API is unavailable.

Parameters (JSON Schema)
Name | Required | Description | Default
country | No | Two-letter country code: NO (default), DK, or FI | NO
identifier | Yes | Organisation number (NO: 9 digits, DK: 8 digits CVR, FI: business ID with hyphen) |

Output Schema


No output parameters
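
A minimal sketch of the identifier formats in practice, using the three registry examples quoted in the description; Sweden is intentionally absent because the tool does not cover it.

# Illustrative get_company_info arguments; identifier format varies by country.
lookups = [
    {"identifier": "923609016", "country": "NO"},  # Equinor: 9-digit organisation number
    {"identifier": "22756214", "country": "DK"},   # Maersk: 8-digit CVR number
    {"identifier": "0112038-9", "country": "FI"},  # Nokia: business ID with hyphen
]
# For Swedish companies, fall back to search_filings with country='SE'.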

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses return value structure ('Dict with company name, status...'), but omits error handling (not found scenarios), rate limits, or authentication requirements common in registry APIs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Efficient Args/Returns structure with zero waste. Four sentences covering purpose, parameter detail, and output format. Front-loaded action verb in first sentence.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter lookup tool. Output schema exists, yet description helpfully summarizes return content. Minor gap: lacks error case documentation (e.g., invalid orgnr format or non-existent company handling).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Excellent compensation for 0% schema coverage. Adds critical semantics: 'Norwegian organisation number without hyphens' explains format constraint, and example '923609016' clarifies expected pattern—information completely absent from schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Look up' + resource 'Norwegian company' + scope 'Brønnøysund Register (Enhetsregisteret)'. Naming the specific registry distinguishes it from get_market_data and search_filings siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage via specific registry domain (Enhetsregisteret), but lacks explicit when-to-use guidance or comparison to siblings like search_filings. Agent must infer appropriateness from registry name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_current_power_price (A)
Read-only

Fetch today's hourly day-ahead electricity spot prices for a Nordic bidding zone.

Use this for current and near-term (today/tomorrow) price queries. Do not use for historical price analysis — use search_filings with report_type='macro_summary' and a date reference in the query for that purpose. Tomorrow's prices are published by NordPool around 13:00 CET; requests before that time will return "not yet available" for the tomorrow field.

All zones return prices in EUR/kWh (NordPool day-ahead, native currency). Norwegian zones (NO1–NO5) use hvakosterstrommen.no; all other zones use ENTSO-E.

Args:
- zone: Bidding zone code. Options: NO1 (East/Oslo), NO2 (Southwest), NO3 (Central/Trondheim), NO4 (North), NO5 (West/Bergen), SE1–SE4, DK1, DK2, FI.
- include_tomorrow: Set to True to also fetch tomorrow's hourly prices if already published (default False).

Returns: Dict containing zone, date, current_hour_utc, current price, and a 'today' summary with min/max/avg and the full hourly list. Includes a 'tomorrow' key if include_tomorrow=True. Returns {'error': ''} if price data is unavailable for the requested zone or date.

Parameters (JSON Schema)
Name | Required | Description | Default
zone | No | Bidding zone: NO1–NO5, SE1–SE4, DK1, DK2, or FI | NO1
include_tomorrow | No | Also fetch tomorrow's prices if available (published after 13:00 CET) |

Output Schema


No output parameters
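
A minimal sketch of a call and of reading the documented response keys; the exact min/max/avg field names inside 'today' are assumptions for illustration, since the listing only names them informally.

# Illustrative get_current_power_price call for the Southwest Norway zone.
arguments = {
    "zone": "NO2",             # NO1-NO5, SE1-SE4, DK1, DK2 or FI; default NO1
    "include_tomorrow": True,  # tomorrow's prices publish around 13:00 CET
}

def describe(result: dict) -> str:
    # 'error', 'zone', 'date' and 'today' are documented above;
    # 'avg' inside 'today' is an assumed field name.
    if "error" in result:
        return f"unavailable: {result['error']}"
    return f"{result['zone']} {result['date']}: avg {result['today']['avg']} EUR/kWh"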

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by disclosing key behavioral traits: it specifies the data format (EUR/kWh), timing constraints (tomorrow's prices availability), and return structure. It doesn't mention rate limits, authentication needs, or error conditions, but provides substantial operational context for a read-only data fetching tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with a clear opening statement, followed by important notes, then organized parameter explanations, and finally return value information. Every sentence adds value without redundancy, and the information is front-loaded with the core purpose stated first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but with output schema), the description provides complete context: it explains what the tool does, when to use it, parameter meanings, return structure, and operational constraints. The existence of an output schema means the description doesn't need to detail return value formats, and it covers all essential aspects for effective tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description comprehensively documents both parameters: it lists all valid zone codes with their geographic meanings and explains the include_tomorrow parameter's purpose and default behavior. This fully compensates for the schema's lack of descriptions and adds meaningful context beyond basic type definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb ('fetch') and resource ('today's hourly electricity spot prices'), including the geographic scope ('Nordic bidding zone') and currency details. It distinguishes itself from sibling tools like get_company_info or search_filings by focusing on real-time energy pricing data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use the tool (for electricity spot prices in Nordic zones) and includes a practical timing note about tomorrow's prices being available after 13:00 CET. However, it doesn't explicitly contrast with alternatives or specify when NOT to use it, though this is less critical given the distinct domain from sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

parse_pdf_to_text (A)
Read-only

Download a PDF from a URL and extract all text content, page by page.

Use this to read the full text of a specific document — for example, an annual report PDF linked from a search_filings result. Best combined with search_filings: use search_filings to locate the document, then parse_pdf_to_text for the full text. Do not use for PDFs that are already well-represented in the database — search_filings is faster and returns pre-ranked, relevant excerpts. Not suitable for scanned (image-only) PDFs without embedded text; those pages will be returned as "(no extractable text)".

Args:
- pdf_url: Direct HTTPS URL to the PDF file, e.g. https://example.com/report.pdf. Must be publicly accessible; authentication-protected URLs will fail.

Returns: All text from the PDF with "--- Page N ---" separators between pages. Returns an error string if the download fails, the URL does not point to a valid PDF, or the document exceeds the 60-second download timeout.

Parameters (JSON Schema)
Name | Required | Description | Default
pdf_url | Yes | Direct HTTPS URL to the PDF file |

Output Schema

Name | Required | Description
result | Yes |
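
A minimal sketch of the search-then-parse pattern the description recommends; call_tool stands in for whichever MCP client helper is in use, and the source_url field on the search result is a hypothetical name, since the listing does not spell out where the PDF link lives.

# Hypothetical two-step flow: locate a filing, then pull its full text.
def fetch_annual_report_text(call_tool, ticker: str) -> str:
    hits = call_tool("search_filings", {
        "query": "annual report 2024",
        "ticker": ticker,
        "report_type": "annual_report",
        "limit": 1,
    })
    pdf_url = hits[0]["source_url"]  # illustrative field name for the PDF link
    return call_tool("parse_pdf_to_text", {"pdf_url": pdf_url})
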
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden. It successfully documents the extraction method ('page by page'), output format ('single string' with 'page separators'), and error handling ('or an error message'). Minor gaps remain regarding operational constraints like file size limits, timeouts, or authentication requirements.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with logical flow (action → use case → args → returns). Each sentence adds value. The Args/Returns documentation style is clear, though the 'This is useful for...' clause could be more concise. Appropriately front-loaded with the core operation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single parameter, simple read operation), the description provides comprehensive coverage of intent, parameters, and return values. Minor improvements would include noting any file size limitations or timeout behaviors for network downloads.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Despite 0% schema description coverage, the description fully compensates by documenting the 'pdf_url' parameter with both semantic meaning ('Direct URL to the PDF file') and a concrete example ('https://example.com/report.pdf'), providing sufficient guidance for correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Download a PDF from a URL and extract all text as a single string, page by page') using distinct verbs and resources. It effectively differentiates from sibling search_filings by noting the tool is for content 'not directly searchable in the main database.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear positive guidance on when to use ('read report attachments, press releases, or any PDF content'), implicitly contrasting with database search tools. However, it does not explicitly name the alternative tool (search_filings) or state negative conditions ('do not use for...').

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

ping (B)
Read-only

Connectivity check that confirms the Nordic MCP server process is responding.

Use this at the start of a session to verify the server is reachable before making other calls. Do not use as a proxy for database health — the server can respond while the Qdrant vector database is temporarily unavailable. To confirm data availability, call search_filings directly.

Returns: A greeting string: "Hello {name}! Nordic MCP server is running."

Parameters (JSON Schema)
Name | Required | Description | Default
name | No | Arbitrary label included in the response, e.g. 'healthcheck' or 'agent-1' | world

Output Schema

Name | Required | Description
result | Yes |
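
A minimal sketch of a session-start check; note the caveat above that a successful ping does not prove the Qdrant database is reachable.

# Illustrative ping call; expects "Hello healthcheck! Nordic MCP server is running."
arguments = {"name": "healthcheck"}  # optional; defaults to 'world'
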
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It indicates the tool returns a greeting to confirm server status, but omits details about side effects, rate limits, or what the greeting format looks like.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero waste. The first establishes purpose, the second explains the return value. Every word earns its place and the description is appropriately sized for a simple utility tool.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

While the core behavior is documented and an output schema exists (reducing the need to describe returns), the description is incomplete due to the undocumented 'name' parameter. For a simple 1-parameter tool, this gap prevents a higher score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0% and the description fails to compensate by explaining the 'name' parameter (which customizes the greeting). The parameter's purpose and default value usage are not mentioned in the text.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies this as a connectivity test with a specific return value (a greeting), distinguishing it from the data-oriented siblings. It does not explicitly contrast when to use this versus the data retrieval alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance is provided on when to use this tool versus the sibling data retrieval tools, nor are prerequisites mentioned. The phrase 'connectivity test' only vaguely implies usage context without specific direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_filings (A)
Read-only

Search the Nordic financial database for company filings, press releases and macroeconomic summaries.

Use this as the primary tool for any question about Nordic listed companies, markets or macro conditions. Do not use to retrieve a full document — results are chunked text excerpts; use parse_pdf_to_text for the full original document. Do not use for Swedish company registration data — use get_company_info instead.

The database contains ~1 million vectors across four Nordic markets (NO/SE/DK/FI).

COMPANY FILINGS
Annual reports (XBRL/ESEF) and quarterly reports from ~1 500 listed companies across Oslo Børs, Nasdaq Stockholm, Nasdaq Helsinki, Nasdaq Copenhagen and First North markets. Covers 2020–present. Strong coverage for NO and SE; growing coverage for DK and FI.

EXCHANGE ANNOUNCEMENTS & PRESS RELEASES
Regulatory filings, exchange announcements and press releases from listed companies in NO, SE, DK and FI. Covers 2020–present.

MACROECONOMIC SUMMARIES
Quarterly macro summaries covering key indicators per country:
- Norway (NO): policy rate, FX rates, CPI, house prices, credit growth, electricity price, salmon price, GDP components
- Sweden (SE): policy rate, house price index, household credit
- Denmark (DK): policy rate, house price index, household loans, electricity price
- Finland (FI): house price index, household debt-to-income ratio, electricity price
Use report_type='macro_summary' and country='NO'/'SE'/'DK'/'FI' to filter. Use fiscal_year and a quarter reference in your query, e.g. "Norwegian housing market Q1 2024".

Args:
- query: What you are looking for, e.g. 'net interest margin outlook', 'salmon price Q3', 'dividend policy', 'fleet utilization', 'Norwegian housing market 2024 Q1', 'Swedish policy rate inflation 2023'
- ticker: Optional — filter by company ticker, e.g. 'SALM', 'EQNR', 'NDA'
- fiscal_year: Optional — filter by year, e.g. 2024
- report_type: Optional — one of:
  'annual_report' – Nordic XBRL/ESEF annual reports
  'quarterly_report' – Quarterly/interim reports
  'press_release' – Exchange announcements and press releases
  'macro_summary' – Quarterly macroeconomic summaries
- sector: Optional — filter by sector:
  'seafood' – seafood companies
  'energy' – energy / oil & gas
  'shipping' – shipping companies
- country: Optional — filter by country code: 'NO', 'SE', 'DK' or 'FI'
- limit: Number of results after reranking (default 5, max 20)

Returns: List of relevant text excerpts with metadata, reranked by relevance. Each result includes rerank_score, hybrid_score, vector_score, company, ticker, country, fiscal_year, report_type, period, filing_date and the full text chunk. Returns an empty list if no relevant results are found or if the Qdrant database is temporarily unreachable.

Parameters (JSON Schema)
Name | Required | Description | Default
limit | No | Number of results to return (1–20) |
query | Yes | Natural language search query, e.g. 'Equinor dividend 2024' or 'Norwegian housing market Q3' |
sector | No | Filter by sector, e.g. 'energy', 'financials', 'salmon' |
ticker | No | Filter by company ticker, e.g. 'EQNR', 'SALM', 'NDA' |
country | No | Filter by country: NO, SE, DK, or FI |
fiscal_year | No | Filter by fiscal year, e.g. 2024. Use 0 for no filter |
report_type | No | Filter by type: annual_report, quarterly_report, press_release, exchange_announcement, macro_summary |

Output Schema

Name | Required | Description
result | Yes |
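
Two illustrative filter combinations, drawn from the examples in the description: a company-scoped filing search and a country-scoped macro query. The values are placeholders rather than guaranteed hits.

# Illustrative search_filings argument sets.
company_query = {
    "query": "dividend policy",
    "ticker": "EQNR",                # scope to one company to avoid false positives
    "fiscal_year": 2024,
    "report_type": "annual_report",
    "limit": 5,
}

macro_query = {
    "query": "Norwegian housing market Q1 2024",
    "report_type": "macro_summary",  # quarterly macroeconomic summaries
    "country": "NO",
    "limit": 3,
}
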
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Discloses data scope (specific companies listed, Nordic countries only), update frequency ('hourly updates Mon–Fri'), and processing behavior ('reranked by relevance'). Returns section documents output structure including metadata fields. Missing rate limits or error conditions.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear headers (COMPANY FILINGS, PRESS RELEASES, MACROECONOMIC SUMMARIES, Args, Returns). Front-loads purpose. Length is justified by the need to enumerate specific covered companies and macro indicators. Args section mimics Python docstring for readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive given 7 parameters with 0% schema coverage and rich output. Covers: database contents (specific tickers, countries), query patterns, output format with metadata fields, and filtering combinations. With output schema present in Returns section, description appropriately completes the picture without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage, requiring full compensation. Description excels: provides 5 usage examples for 'query', valid ticker examples, enumerated options with semantic meaning for report_type (6 options) and sector (3 options), country codes mapped to full names, and default/max values for limit. Fully compensates for empty schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific verb+resource: 'Search the Nordic financial database for company filings, press releases and macroeconomic summaries.' Immediately distinguishes from siblings (get_company_info, get_market_data) by focusing on historical document retrieval rather than current market data or static company info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Extensive examples under Args (e.g., 'salmon price Q3', 'Norwegian housing market 2024 Q1') and explicit parameter combinations ('Use report_type='macro_summary' and country='NO'...'). Lacks explicit 'when not to use' or sibling alternatives naming, but content enumeration provides clear contextual boundaries.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
