
İhale MCP

by saidsurucu

get_recent_tenders

Retrieve recently published public tenders from Turkey's EKAP portal, filtered by date range, tender types, and result limits for procurement monitoring.

Instructions

Get recent tenders from last N days. Convenience function for recent tender activity.

Input Schema

| Name         | Required | Description                          | Default |
|--------------|----------|--------------------------------------|---------|
| days         | No       | Number of days back to search (1-30) | 7       |
| limit        | No       | Maximum number of results (1-100)    | 20      |
| tender_types | No       | Filter by tender types               | None    |

Output Schema

No output fields are documented for this tool.
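Although the output schema is empty, the handler's return statement (shown under Implementation Reference) implies a payload of roughly this shape; the values below are illustrative, not taken from a real response:

```python
# Illustrative shape of a get_recent_tenders result (values are made up)
example_output = {
    "recent_tenders": [],  # list of tender records returned by EKAP
    "total_count": 0,
    "date_range": {"start": "2024-01-01", "end": "2024-01-08", "days_back": 7},
    "filters_applied": {"tender_types": None, "limit": 20},
}
```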

Implementation Reference

  • The handler function for the 'get_recent_tenders' tool, decorated with @mcp.tool for registration. It fetches recent tenders from EKAP by calculating the date range for the past N days and invoking the underlying search_tenders method, then formats the response with additional metadata.
    # Assumes module-level imports: datetime, timedelta, typing helpers
    # (Annotated, Optional, List, Literal, Dict, Any), and the `mcp` /
    # `ekap_client` instances defined elsewhere in ihale_mcp.py.
    @mcp.tool
    async def get_recent_tenders(
        days: Annotated[int, "Number of days back to search (1-30)"] = 7,
        tender_types: Annotated[Optional[List[Literal[1, 2, 3, 4]]], "Filter by tender types"] = None,
        limit: Annotated[int, "Maximum number of results (1-100)"] = 20
    ) -> Dict[str, Any]:
        """
        Get recent tenders from last N days.
        Convenience function for recent tender activity.
        """
        
        if days > 30:
            days = 30
        elif days < 1:
            days = 1
            
        # Calculate date range
        end_date = datetime.now()
        start_date = end_date - timedelta(days=days)
        
        start_date_str = start_date.strftime("%Y-%m-%d")
        end_date_str = end_date.strftime("%Y-%m-%d")
        
        # Use the client to search for recent tenders
        result = await ekap_client.search_tenders(
            search_text="",
            tender_types=tender_types,
            announcement_date_start=start_date_str,
            announcement_date_end=end_date_str,
            order_by="ihaleTarihi",
            sort_order="desc",
            limit=limit
        )
        
        if result.get("error"):
            return result
            
        return {
            "recent_tenders": result.get("tenders", []),
            "total_count": result.get("total_count", 0),
            "date_range": {
                "start": start_date_str,
                "end": end_date_str,
                "days_back": days
            },
            "filters_applied": {
                "tender_types": tender_types,
                "limit": limit
            }
        }
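The clamping and date-window logic in the handler can be isolated as a small pure function; this is a sketch for illustration, not the server's actual code:

```python
from datetime import datetime, timedelta

def recent_window(days: int) -> tuple[str, str, int]:
    # Clamp to the documented 1-30 range, as the handler does.
    days = max(1, min(days, 30))
    end = datetime.now()
    start = end - timedelta(days=days)
    return start.strftime("%Y-%m-%d"), end.strftime("%Y-%m-%d"), days
```

For example, `recent_window(45)` silently clamps to a 30-day window rather than raising an error, which matches the handler's behavior.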
  • ihale_mcp.py:245-245 (registration)
    The @mcp.tool decorator registers the get_recent_tenders function as an MCP tool.
    @mcp.tool
  • Input schema defined via Annotated type hints in the function signature, providing parameter descriptions and types for the tool.
    async def get_recent_tenders(
        days: Annotated[int, "Number of days back to search (1-30)"] = 7,
        tender_types: Annotated[Optional[List[Literal[1, 2, 3, 4]]], "Filter by tender types"] = None,
        limit: Annotated[int, "Maximum number of results (1-100)"] = 20
    ) -> Dict[str, Any]:
Behavior — 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions 'Convenience function' which hints at simplicity, but fails to describe critical behaviors like whether this is a read-only operation, what authentication is needed, rate limits, pagination, or error handling. For a tool with no annotations, this leaves significant gaps.
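For context, the MCP specification defines optional behavioral hints that tools can declare to close exactly this gap. A hypothetical set for this read-only tool might look like the dict below; the server does not currently declare any of these:

```python
# Hypothetical MCP tool annotations for get_recent_tenders;
# the server does not currently declare any of these.
annotations = {
    "readOnlyHint": True,     # only reads data from EKAP; no side effects
    "destructiveHint": False,
    "idempotentHint": True,   # repeated calls with the same args are safe
    "openWorldHint": True,    # interacts with an external service (EKAP)
}
```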

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness — 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two short sentences that are front-loaded with the core purpose. There's no wasted text, though it could be slightly more structured by explicitly differentiating from siblings. Every sentence earns its place by clarifying scope and convenience.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness — 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (not provided here) and 100% schema coverage, the description doesn't need to explain return values or parameters. However, with no annotations and moderate complexity (3 parameters, filtering), the description should do more to cover behavioral aspects like safety and usage constraints, making it minimally adequate but with clear gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters — 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents the parameters (days, limit, tender_types). The description adds no additional semantic meaning beyond implying temporal filtering with 'last N days,' which is already covered in the schema. Baseline 3 is appropriate as the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose — 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Get recent tenders from last N days' which specifies the verb ('get'), resource ('tenders'), and temporal scope ('recent', 'last N days'). It distinguishes itself from siblings like 'search_tenders' by focusing on recency rather than general search, though the distinction could be more explicit.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines — 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context with 'Convenience function for recent tender activity,' suggesting it's for quick access to recent data rather than comprehensive searches. However, it doesn't explicitly state when to use this tool versus alternatives like 'search_tenders' or provide any exclusions, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/saidsurucu/ihale-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.