
Finance MCP

by FlowLLM-AI

extract_entities_code

Extract financial entities and their codes from natural language queries to identify stocks, bonds, funds, cryptocurrencies, and other investment instruments for financial analysis.

Instructions

Extract financial entities from the query, including types such as "stock", "bond", "fund", "cryptocurrency", "index", "commodity", "etf", etc. For entities like stocks or ETF funds, search for their corresponding codes. Finally, return the financial entities appearing in the query, including their types and codes.

Input Schema

  • query (string, required): Natural language query about financial entities. No default.
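The op returns its result as a JSON string (see the `set_output` call in the implementation below), so a consumer parses it with `json.loads`. The query, entity names, and codes here are illustrative assumptions, not real server output:

```python
import json

# Illustrative output for the query "Compare Apple stock with a gold ETF".
# The list-of-entities shape mirrors the implementation's set_output call;
# the specific entities and codes are made up for this sketch.
raw_output = json.dumps([
    {"entity": "Apple", "type": "stock", "codes": ["AAPL"]},
    {"entity": "gold ETF", "type": "etf", "codes": ["GLD"]},
    {"entity": "gold", "type": "commodity"},
])

entities = json.loads(raw_output)
# Non-stock/ETF entities may have no "codes" key, so use .get() defensively.
coded = {e["entity"]: e.get("codes", []) for e in entities}
print(coded["Apple"])  # ['AAPL']
print(coded["gold"])   # []
```

Note that only stock-, ETF-, and fund-type entities are enriched with codes; agents should not assume every entity dict carries a `codes` key.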

Implementation Reference

  • The primary handler function that processes the input query by extracting financial entities using an LLM, resolving security codes asynchronously for relevant entity types, and outputting a JSON list of enriched entities.
    async def async_execute(self):
        """Run the main pipeline: extract entities then enrich them with codes.
    
        The method first prompts the LLM to return a JSON list of entities
        mentioned in the user ``query``. For supported financial types, it
        schedules parallel async tasks to fetch their security codes and
        merges the results back into the original entity list.
        """
    
        query = self.input_dict["query"]
        extract_entities_prompt: str = self.prompt_format(
            prompt_name="extract_entities_prompt",
            example=self.get_prompt(prompt_name="extract_entities_example"),
            query=query,
        )
    
        def callback_fn(message: Message):
            """Parse the assistant response as JSON content."""
    
            return extract_content(message.content, language_tag="json")
    
        assistant_result: List[dict] = await self.llm.achat(
            messages=[Message(role=Role.USER, content=extract_entities_prompt)],
            callback_fn=callback_fn,
        )
        logger.info(json.dumps(assistant_result, ensure_ascii=False))
    
        entity_list = []  # Track entities that will have codes resolved.
        for entity_info in assistant_result:
            # Only resolve codes for stock- or fund-like entities
            # ("股票" is Chinese for "stock", matching Chinese LLM output).
            if entity_info["type"] in ["stock", "股票", "etf", "fund"]:
                entity_list.append(entity_info["entity"])
                self.submit_async_task(
                    self.get_entity_code,
                    entity=entity_info["entity"],
                    entity_type=entity_info["type"],
                )
    
        # Wait for all async code-resolution tasks and merge results.
        for t_result in await self.join_async_task():
            entity = t_result["entity"]
            codes = t_result["codes"]
            for entity_info in assistant_result:
                if entity_info["entity"] == entity:
                    entity_info["codes"] = codes
    
        # Store JSON string as final op output.
        self.set_output(json.dumps(assistant_result, ensure_ascii=False))
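The final merge loop above can be modelled in isolation with plain dicts, outside the op machinery. This is a simplified sketch of the same name-matching logic, not the library code:

```python
import json

def merge_codes(entities: list[dict], resolved: list[dict]) -> list[dict]:
    """Attach each resolved code list to the entity dict with the same name."""
    codes_by_name = {r["entity"]: r["codes"] for r in resolved}
    for info in entities:
        if info["entity"] in codes_by_name:
            info["codes"] = codes_by_name[info["entity"]]
    return entities

entities = [
    {"entity": "Tesla", "type": "stock"},
    {"entity": "US CPI", "type": "index"},  # no code resolution for indices
]
resolved = [{"entity": "Tesla", "codes": ["TSLA"]}]
print(json.dumps(merge_codes(entities, resolved), ensure_ascii=False))
```

One caveat of matching on the entity name alone: if the LLM returns two distinct entities with the same name string, both would receive the same code list.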
  • Tool schema definition including the tool description (from prompt) and input schema requiring a 'query' string parameter.
    return ToolCall(
        **{
            "description": self.get_prompt("tool_description"),
            "input_schema": {
                "query": {
                    "type": "string",
                    "description": "Natural language query about financial entities.",
                    "required": True,
                },
            },
        },
    )
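The `input_schema` above uses the framework's own flat `{name: {type, description, required}}` layout. An MCP client would typically see it rendered as standard JSON Schema; the conversion below is an assumption about that mapping, not code from the server:

```python
def to_json_schema(flat_schema: dict) -> dict:
    """Convert the flat per-parameter layout into an object JSON Schema."""
    properties = {}
    required = []
    for name, spec in flat_schema.items():
        properties[name] = {
            "type": spec["type"],
            "description": spec.get("description", ""),
        }
        if spec.get("required"):
            required.append(name)
    return {"type": "object", "properties": properties, "required": required}

flat = {
    "query": {
        "type": "string",
        "description": "Natural language query about financial entities.",
        "required": True,
    },
}
print(to_json_schema(flat)["required"])  # ['query']
```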
  • Registration of the ExtractEntitiesCodeOp using the @C.register_op() decorator, which likely exposes it as the 'extract_entities_code' tool in the MCP framework.
    @C.register_op()
    class ExtractEntitiesCodeOp(BaseAsyncToolOp):
  • Helper method to resolve security codes for a specific entity by calling a downstream search op and extracting codes via LLM.
    async def get_entity_code(self, entity: str, entity_type: str):
        """Resolve security codes for a single entity using a search op.
    
        This helper method delegates to the first configured sub-op (usually
        a search op) to obtain raw search results, then prompts the LLM to
        extract one or more security codes from that text.
    
        Args:
            entity: Entity name, such as a company or fund name.
            entity_type: Entity type returned by the LLM, e.g. ``"stock"``
                or ``"fund"``.
    
        Returns:
            A mapping with the original ``entity`` and a list of resolved
            ``codes``.
        """
    
        # Currently we only expect a single configured downstream op.
        search_op = list(self.ops.values())[0]
        assert isinstance(search_op, BaseAsyncToolOp)
        await search_op.async_call(query=f"the {entity_type} code of {entity}")
    
        extract_code_prompt: str = self.prompt_format(
            prompt_name="extract_code_prompt",
            entity=entity,
            text=search_op.output,
        )
    
        def callback_fn(message: Message):
            """Return plain text content from the assistant message."""
    
            return extract_content(message.content)
    
        assistant_result = await self.llm.achat(
            messages=[Message(role=Role.USER, content=extract_code_prompt)],
            callback_fn=callback_fn,
        )
        logger.info(
            "entity=%s response=%s %s",
            entity,
            search_op.output,
            json.dumps(assistant_result, ensure_ascii=False),
        )
        return {"entity": entity, "codes": assistant_result}
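The `submit_async_task` / `join_async_task` pair used in `async_execute` is framework glue around a fan-out/fan-in pattern. The same shape can be sketched with plain `asyncio` and a stubbed resolver; the stub and its hard-coded codes stand in for the real search op and LLM call:

```python
import asyncio

async def resolve_code(entity: str, entity_type: str) -> dict:
    # Stand-in for the downstream search op + LLM extraction; the codes
    # here are hard-coded assumptions for the sketch.
    fake_codes = {"Nvidia": ["NVDA"], "Vanguard S&P 500 ETF": ["VOO"]}
    await asyncio.sleep(0)
    return {"entity": entity, "codes": fake_codes.get(entity, [])}

async def main() -> list[dict]:
    # Fan out one task per entity, then fan in with gather(), which
    # preserves submission order in its result list.
    tasks = [
        resolve_code("Nvidia", "stock"),
        resolve_code("Vanguard S&P 500 ETF", "etf"),
    ]
    return await asyncio.gather(*tasks)

results = asyncio.run(main())
print([r["codes"] for r in results])  # [['NVDA'], ['VOO']]
```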
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the extraction and code search behavior but lacks details on permissions, rate limits, error handling, or output format (e.g., structure of returned entities). For a tool with no annotations, this is a significant gap in behavioral disclosure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise and front-loaded, stating the core purpose in the first sentence. Both sentences earn their place by adding specific entity types and code search details. It could be slightly more structured but avoids redundancy and is efficiently sized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is incomplete. It explains what the tool does but omits critical behavioral aspects such as output structure, error conditions, and limitations. For a tool with a single parameter and no structured output support, more context is needed for an AI agent to succeed on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the single parameter 'query' documented as a natural language query about financial entities. The description adds context by specifying the types of financial entities (e.g., stock, bond) and the code search for stocks/ETFs, but doesn't provide additional syntax or format details beyond what the schema implies. Baseline 3 is appropriate given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: extracting financial entities from a query, identifying their types, and searching for codes for stocks/ETFs. It specifies the resource (financial entities) and verb (extract, search, return). However, it doesn't explicitly differentiate from sibling tools like dashscope_search or tavily_search, which might also process queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions (e.g., when not to use it), or compare it to sibling tools like dashscope_search or tavily_search, which might handle similar queries. Usage is implied only by the purpose statement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
