
elfatwitterintelligenceagent_search_mentions

Monitor influential Twitter discussions about specific cryptocurrency tokens or blockchain topics. Search using up to 5 keywords to retrieve relevant mentions from smart accounts within a defined time frame.

Instructions

Search for mentions of specific tokens or topics on Twitter. This tool finds discussions about cryptocurrencies, blockchain projects, or other topics of interest. It returns tweets and mentions from smart accounts (only influential ones) and does not cover all tweets. Use this when you want to understand what influential people are saying about a particular token or topic on Twitter. Each search keyword should be a single word or short phrase; one keyword should express one concept. At most 5 keywords are allowed. Never use long sentences as keywords.

Input Schema

Name      | Required | Description                                          | Default
----------|----------|------------------------------------------------------|--------
days_ago  | No       | Number of days to look back                          | -
keywords  | Yes      | List of keywords to search for                       | -
limit     | No       | Maximum number of results (minimum: 20, maximum: 30) | -
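A sketch of a well-formed argument payload for this tool, following the schema above and the keyword rules from the description. The specific keyword values are illustrative, not taken from the source:

```python
# Hypothetical example arguments for search_mentions: each keyword is one
# concept, at most 5 keywords, and limit within the documented 20-30 range.
arguments = {
    "keywords": ["ethereum", "restaking"],  # one concept per keyword
    "days_ago": 7,   # optional: look back one week
    "limit": 25,     # optional: between the documented min (20) and max (30)
}

# Client-side checks mirroring the documented constraints
assert 1 <= len(arguments["keywords"]) <= 5
assert 20 <= arguments["limit"] <= 30
```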

Implementation Reference

  • Configuration includes 'ElfaTwitterIntelligenceAgent' in DEFAULT_AGENTS, which enables its tools, including 'search_mentions', to be registered as 'elfatwitterintelligenceagent_search_mentions'.
    # Default supported agents
    DEFAULT_AGENTS = [
        "CoinGeckoTokenInfoAgent",
        "DexScreenerTokenInfoAgent",
        "ElfaTwitterIntelligenceAgent",
        "ExaSearchAgent",
        "TwitterInfoAgent",
        "AIXBTProjectInfoAgent",
        "EtherscanAgent",
        "EvmTokenInfoAgent",
        "FundingRateAgent",
        "UnifaiTokenAnalysisAgent",
        "YahooFinanceAgent",
        "ZerionWalletAnalysisAgent"
    ]
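A minimal sketch of how the supported-agent filter and the tool_id formula from the implementation reference combine to produce this tool's composite ID (the abridged agent list here is illustrative):

```python
# Sketch: deriving the composite tool ID, using the tool_id formula
# shown in the implementation reference below.
DEFAULT_AGENTS = ["CoinGeckoTokenInfoAgent", "ElfaTwitterIntelligenceAgent"]  # abridged

agent_id = "ElfaTwitterIntelligenceAgent"
tool_name = "search_mentions"

tool_id = None
if agent_id in DEFAULT_AGENTS:  # agent passes the supported-agent filter
    tool_id = f"{agent_id.lower()}_{tool_name}"

assert tool_id == "elfatwitterintelligenceagent_search_mentions"
```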
  • Dynamically registers tools from agent metadata using tool_id = f'{agent_id.lower()}_{tool_name}', creating 'elfatwitterintelligenceagent_search_mentions' from ElfaTwitterIntelligenceAgent's search_mentions tool.
    async def process_tool_metadata(self) -> Dict[str, Dict[str, Any]]:
        """Process agent metadata and extract tool information.
    
        Returns:
            Dictionary mapping tool IDs to tool information
        """
        agents_metadata = await self.fetch_agent_metadata()
        tool_registry = {}
    
        # Log filtering status
        if self.supported_agents is not None:
            logger.info(
                f"Filtering tools using supported agent list ({len(self.supported_agents)} agents specified)"
            )
        else:
            logger.info("Loading tools from all available agents (no filter applied)")
    
        for agent_id, agent_data in agents_metadata.items():
            # Skip agents not in our supported list (if a list is specified)
            if (
                self.supported_agents is not None
                and agent_id not in self.supported_agents
            ):
                continue
    
            # Process tools for this agent
            for tool in agent_data.get("tools", []):
                if tool.get("type") == "function":
                    function_data = tool.get("function", {})
                    tool_name = function_data.get("name")
    
                    if not tool_name:
                        continue
    
                    # Create a unique tool ID
                    tool_id = f"{agent_id.lower()}_{tool_name}"
    
                    # Get parameters or create default schema
                    parameters = function_data.get("parameters", {})
                    if not parameters:
                        parameters = {
                            "type": "object",
                            "properties": {},
                            "required": [],
                        }
    
                    # Store tool info
                    tool_registry[tool_id] = {
                        "agent_id": agent_id,
                        "tool_name": tool_name,
                        "description": function_data.get("description", ""),
                        "parameters": parameters,
                    }
    
        # Log which agents contributed tools
        agents_with_tools = set(info["agent_id"] for info in tool_registry.values())
        logger.info(f"Loaded tools from agents: {', '.join(sorted(agents_with_tools))}")
        logger.info(f"Successfully loaded {len(tool_registry)} tools")
    
        return tool_registry
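Under the registration logic above, the registry entry for this tool would look roughly as follows. The description and parameter schema below are abridged sketches reconstructed from the documented input schema, not literal values from the source:

```python
# Illustrative shape of one tool_registry entry produced by
# process_tool_metadata; field contents are abridged.
entry = {
    "agent_id": "ElfaTwitterIntelligenceAgent",
    "tool_name": "search_mentions",
    "description": "Search for mentions of specific tokens or topics on Twitter.",
    "parameters": {
        "type": "object",
        "properties": {
            "keywords": {"type": "array", "items": {"type": "string"}},
            "days_ago": {"type": "number"},
            "limit": {"type": "number", "minimum": 20, "maximum": 30},
        },
        "required": ["keywords"],
    },
}

# The registry key is the composite tool ID derived from the entry
tool_id = f"{entry['agent_id'].lower()}_{entry['tool_name']}"
assert tool_id == "elfatwitterintelligenceagent_search_mentions"
```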
  • Handler function that proxies tool execution to the remote Mesh API for the specific agent and tool.
    async def execute_tool(
        self, agent_id: str, tool_name: str, tool_arguments: Dict[str, Any]
    ) -> Dict[str, Any]:
        """Execute a tool on a mesh agent.
    
        Args:
            agent_id: ID of the agent to execute the tool on
            tool_name: Name of the tool to execute
            tool_arguments: Arguments to pass to the tool
    
        Returns:
            Tool execution result
    
        Raises:
            ToolExecutionError: If there's an error executing the tool
        """
        request_data = {
            "agent_id": agent_id,
            "input": {"tool": tool_name, "tool_arguments": tool_arguments},
        }
    
        # Add API key if available
        if Config.HEURIST_API_KEY:
            request_data["api_key"] = Config.HEURIST_API_KEY
    
        try:
            result = await call_mesh_api(
                "mesh_request", method="POST", json=request_data
            )
            return result.get("data", result)  # Prefer the 'data' field if it exists
        except MeshApiError as e:
            # Re-raise API errors with clearer context
            raise ToolExecutionError(str(e)) from e
        except Exception as e:
            logger.error(f"Error calling {agent_id} tool {tool_name}: {e}")
            raise ToolExecutionError(
                f"Failed to call {agent_id} tool {tool_name}: {str(e)}"
            ) from e
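The request body that execute_tool posts to the Mesh API has the shape shown below, taken from the code above; the argument values are illustrative, and the API key is attached only when one is configured:

```python
# Shape of the request body execute_tool builds before posting to the
# "mesh_request" endpoint; values are illustrative.
request_data = {
    "agent_id": "ElfaTwitterIntelligenceAgent",
    "input": {
        "tool": "search_mentions",
        "tool_arguments": {"keywords": ["bitcoin"], "limit": 20},
    },
}

HEURIST_API_KEY = None  # stands in for Config.HEURIST_API_KEY in this sketch
if HEURIST_API_KEY:
    request_data["api_key"] = HEURIST_API_KEY

assert "api_key" not in request_data  # no key configured in this sketch
```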
  • MCP call_tool handler that dispatches to execute_tool by looking the composite tool name (e.g. 'elfatwitterintelligenceagent_search_mentions') up in the tool registry to recover the agent ID and original tool name.
    @app.call_tool()
    async def call_tool(name: str, arguments: dict) -> List[types.TextContent]:
        """Call the specified tool with the given arguments."""
        try:
            if name not in self.tool_registry:
                raise ValueError(f"Unknown tool: {name}")
    
            tool_info = self.tool_registry[name]
            result = await self.execute_tool(
                agent_id=tool_info["agent_id"],
                tool_name=tool_info["tool_name"],
                tool_arguments=arguments,
            )
    
            # Convert result to TextContent
            return [types.TextContent(type="text", text=str(result))]
        except Exception as e:
            logger.error(f"Error calling tool {name}: {e}")
            raise ValueError(f"Failed to call tool {name}: {str(e)}") from e
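Note that dispatch recovers the agent ID and original tool name via a registry lookup rather than by splitting the composite name, which matters because tool names themselves contain underscores. A minimal sketch with an abridged registry:

```python
# Sketch: registry-based dispatch, as in the call_tool handler above.
# Splitting the composite name on "_" would be ambiguous, so the
# registry stores both parts explicitly.
tool_registry = {
    "elfatwitterintelligenceagent_search_mentions": {
        "agent_id": "ElfaTwitterIntelligenceAgent",
        "tool_name": "search_mentions",
    },
}

name = "elfatwitterintelligenceagent_search_mentions"
info = tool_registry[name]
assert info["agent_id"] == "ElfaTwitterIntelligenceAgent"
assert info["tool_name"] == "search_mentions"
```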
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It usefully adds that results are limited to 'influential ones' and 'does not contain all tweets,' which are important constraints not in the schema. However, it doesn't mention rate limits, authentication needs, or what the output format looks like (no output schema exists), leaving some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. The sentences about keyword formatting are necessary but could be more streamlined. Overall, it's efficient with minimal waste, though the keyword instructions are somewhat repetitive.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (3 parameters, no annotations, no output schema), the description provides adequate context about purpose, constraints, and usage. However, it lacks details on output format, error handling, or deeper behavioral traits that would be helpful for an agent to fully understand how to interpret results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters. The description adds some semantic context by specifying that keywords should be 'one word or phrase,' 'maximum of 5 keywords,' and 'one key word should be one concept,' which provides guidance beyond the schema's basic array description. This justifies a baseline 3 with slight enhancement.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for mentions of specific tokens or topics on Twitter, focusing on cryptocurrencies/blockchain and influential accounts. It distinguishes from sibling tools like 'search_account' or 'get_trending_tokens' by emphasizing content search rather than account lookup or trending analysis. However, it doesn't explicitly contrast with all siblings, keeping it at 4 rather than 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use it: 'when you want to understand what influential people are saying about a particular token or topic on Twitter.' It doesn't explicitly state when NOT to use it or name specific alternatives among siblings, but the context is sufficiently clear for an agent to infer appropriate usage scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
