
Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Disambiguation 4/5

    Tools are generally well-differentiated by domain (climate vs. soil vs. country stats vs. food). The pair `climate_averages` and `climate_history` could cause slight confusion as both retrieve historical NASA POWER climate data for a location, though they are distinguished by monthly averages versus specific date ranges. Food tools are clearly separated by barcode lookup versus text search.

    Naming Consistency 4/5

    Most tools follow a consistent noun_phrase pattern (e.g., `climate_averages`, `soil_conditions`, `country_agriculture_profile`). However, `compare_countries` uses a verb_noun structure, creating a minor deviation from the otherwise consistent convention. All use snake_case consistently.

    Tool Count 5/5

    Eight tools is an appropriate count for this scope, covering field-level environmental data (climate averages, history, forecasts, soil), macro-level country agricultural statistics (profile and comparison), and food product databases (search and lookup) without redundancy or bloat.

    Completeness 4/5

    The toolset provides comprehensive read-only coverage across distinct agricultural data domains: temporal climate data (past averages, historical range, forecasts), soil conditions, country-level indicators, and food product information. No obvious CRUD gaps exist for a data-retrieval server, though it lacks analytical tools that might combine these datasets (e.g., crop suitability scoring).

  • Average 3.7/5 across 8 of 8 tools scored. Lowest: 3.1/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v0.1.2

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 8 tools.
  • No known security issues or vulnerabilities reported.


  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • compare_countries

    Behavior 2/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations, the description fails to disclose the return format, error handling for invalid country codes, data sources, or rate limits; it only provides input format hints.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Appropriately brief, front-loaded with a purpose statement, followed by a structured Args section with clear formatting guidance.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness 2/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Critical gap given that no output schema exists: the description fails to say what data structure is returned (time series, aggregates, comparison metrics) or what the data coverage limitations are.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Effectively compensates for 0% schema description coverage by providing concrete examples for all three parameters (country code format, indicator examples, year range syntax).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Clearly states the specific action (compare) and resource (agricultural indicators between countries), albeit in German, though it lacks explicit differentiation from the sibling 'country_agriculture_profile'.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides no guidance on when to use this versus 'country_agriculture_profile' or other alternatives, nor any exclusion criteria.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • food_search

    Behavior 2/5

    No annotations are provided, and the description lacks disclosure of return values, rate limits, or side effects beyond the basic search operation.

    Conciseness 4/5

    Front-loaded purpose statement followed by structured Args documentation; appropriately concise, though it could benefit from additional behavioral context.

    Completeness 3/5

    Covers basic invocation needs (purpose plus parameters), but gaps remain regarding the output format and sibling-tool differentiation, given the agricultural domain's complexity.

    Parameters 4/5

    Compensates effectively for 0% schema coverage by providing descriptions and examples for all three parameters (query, category, limit) in the Args section.

    Purpose 4/5

    States a clear action ('Lebensmittel suchen', i.e. search for foods) and resources (by name/category) but fails to differentiate from the sibling 'food_product_lookup'.

    Usage Guidelines 2/5

    No guidance on when to use this versus alternatives like 'food_product_lookup', or when not to use it.

  • crop_weather_forecast

    Behavior 3/5

    Discloses what data is returned (temperature, precipitation, wind, etc.) but lacks information on rate limits, caching, or side effects, given that no annotations exist.

    Conciseness 5/5

    Well-structured with the purpose front-loaded, followed by output details and an Args section; every sentence provides unique value without repetition.

    Completeness 4/5

    Appropriately complete for the tool's complexity; mentions agricultural optimization and specific output metrics to compensate for the missing output schema.

    Parameters 4/5

    Effectively compensates for 0% schema description coverage by defining all three parameters and adding the critical 1-16 day range constraint not present in the schema.

    Purpose 4/5

    Clearly states that it provides agricultural weather forecasts with specific agriculture-focused metrics (evapotranspiration, solar radiation) and distinguishes itself from siblings via 'Vorhersage' (forecast) versus the historical climate tools.

    Usage Guidelines 2/5

    Provides no guidance on when to use this versus climate_averages/climate_history or other weather-related siblings.

  • climate_history

    Behavior 3/5

    Discloses the data source (NASA POWER) and temporal coverage (from 1981 onward), but omits other behavioral traits such as rate limits, authentication requirements, and the return format.

    Conciseness 4/5

    Well-structured with the key information front-loaded (source, date range, metrics); the Args section efficiently documents parameters without redundancy.

    Completeness 3/5

    Adequate for the tool's low complexity, though the absence of an output schema could have been mitigated by describing the return structure beyond listing metrics.

    Parameters 5/5

    Excellent compensation for 0% schema description coverage: the Args section provides parameter meanings (German translations) and critical format examples (YYYYMMDD).

    Purpose 4/5

    Clearly states that it retrieves historical agricultural climate data from NASA POWER (since 1981) and lists specific metrics (temperature, precipitation, solar radiation, soil moisture, evapotranspiration); 'historical' distinguishes it from the sibling forecast tools.

    Usage Guidelines 3/5

    Mentions implied use cases (site evaluation, climate analysis) but lacks explicit guidance on when to use this versus climate_averages or other siblings.

  • country_agriculture_profile

    Behavior 3/5

    Discloses the data source (World Bank), which is valuable behavioral context given the absence of annotations, but omits error handling, data freshness, and rate limit information.

    Conciseness 5/5

    Extremely concise with no fluff; front-loaded purpose followed by data specifics, source attribution, and parameter definition.

    Completeness 4/5

    Adequately covers the simple single-parameter input and compensates for the missing output schema by enumerating the returned data categories.

    Parameters 5/5

    Excellently compensates for 0% schema description coverage by specifying the ISO3 format and providing concrete examples (DEU, USA, BRA, IND, CHN).

    Purpose 4/5

    Clearly states that it retrieves comprehensive agricultural profiles, listing specific metrics (yields, land use, GDP share) that distinguish it from weather-focused siblings like climate_averages and the product-focused food_search.

    Usage Guidelines 2/5

    Provides no explicit guidance on when to use it versus alternatives (e.g., when to use this instead of crop_weather_forecast for agricultural planning).

  • climate_averages

    Behavior 4/5

    Discloses the data source (NASA POWER), the specific time range (2001-2020), and the return content (monthly averages for four metrics) despite having no annotations to rely on.

    Conciseness 5/5

    Well-structured with a clear separation between the general description and the Args section; every sentence conveys essential information without redundancy.

    Completeness 4/5

    Appropriately complete for a simple two-parameter tool; explains the return values (necessary due to the missing output schema) and covers data provenance and the ideal use case.

    Parameters 4/5

    Compensates effectively for 0% schema description coverage by providing German translations and the semantic meaning of the lat/lon parameters in the Args section.

    Purpose 4/5

    Clearly states that it retrieves long-term climate averages (2001-2020) from NASA POWER for temperature, precipitation, solar radiation, and soil moisture, implicitly distinguishing it from the sibling 'climate_history' through the specific averaging period.

    Usage Guidelines 3/5

    Provides a specific use case ('Ideal für Standortbewertung für neue Anbauflächen', i.e. ideal for site evaluation of new growing areas) but lacks explicit guidance on when to prefer this over 'climate_history' or other siblings.

  • food_product_lookup

    Behavior 4/5

    Discloses the returned data (nutrition facts, Nutri-Score, Eco-Score, NOVA group) and the data source (Open Food Facts) despite having no annotations or output schema.

    Conciseness 5/5

    Well-structured with purpose, return value, source, and Args sections; every sentence adds value; appropriately brief.

    Completeness 4/5

    Explains the return values comprehensively given the lack of an output schema; the single parameter is adequately documented.

    Parameters 4/5

    Compensates for 0% schema description coverage by specifying the format (EAN/UPC) and giving a concrete example (the Nutella barcode).

    Purpose 5/5

    The specific verb-plus-resource phrasing ('Lebensmittel per Barcode nachschlagen', i.e. look up food by barcode) clearly distinguishes it from the sibling food_search and from the agriculture/climate tools.

    Usage Guidelines 3/5

    Implies the usage context through 'per Barcode' but lacks explicit when/when-not guidance relative to the food_search alternative.

  • soil_conditions

    Behavior 4/5

    Provides concrete behavioral details absent from annotations: the soil depth range (0-54 cm) and the forecast horizon (1-16 days), though it omits rate limiting and error behavior.

    Conciseness 5/5

    Well-structured with the purpose front-loaded, followed by specific metrics, use cases, and an Args section; no redundant or filler content.

    Completeness 4/5

    Given that no output schema exists, the description adequately specifies the return values (soil temperature, moisture, FAO evapotranspiration) for the tool's moderate complexity.

    Parameters 4/5

    Effectively compensates for 0% schema description coverage by providing concrete examples (Berlin coordinates) and constraint documentation (1-16 range, default 7) for all parameters.

    Purpose 5/5

    Clearly states that it retrieves current and forecasted soil conditions (temperature at 0-54 cm depth, moisture, evapotranspiration), specifically distinguishing it from the climate and crop weather siblings.

    Usage Guidelines 4/5

    Explicitly identifies ideal use cases (sowing decisions, irrigation planning) but lacks explicit guidance on when to prefer crop_weather_forecast or the climate tools instead.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

agriculture-mcp-server MCP server (card badge image; copy the embed snippet to your README.md)

Score Badge

agriculture-mcp-server MCP server (score badge image; copy the embed snippet to your README.md)

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%); the weighted mean is the tool's Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
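The weighting described above can be sketched as a short Python helper. The weights and tier threshold come from the text; the function names are illustrative, not Glama's actual implementation:

```python
# Illustrative sketch of the quality score formulas described above.
# Weights are taken from the text; names are our own.

def tdqs(purpose, usage, behavior, params, concise, complete):
    """Tool Definition Quality Score: weighted mean of six 1-5 dimensions."""
    return (0.25 * purpose + 0.20 * usage + 0.20 * behavior
            + 0.15 * params + 0.10 * concise + 0.10 * complete)

def overall_score(mean_tdqs, min_tdqs, coherence_dims):
    """Overall quality: 70% definition quality (60% mean + 40% min TDQS)
    plus 30% coherence (equal-weighted mean of the four dimensions)."""
    definition_quality = 0.6 * mean_tdqs + 0.4 * min_tdqs
    coherence = sum(coherence_dims) / len(coherence_dims)
    return 0.7 * definition_quality + 0.3 * coherence

# Using the numbers reported for this server (tool mean 3.7, lowest 3.1,
# coherence dimensions 4/4/5/4), this yields roughly 3.70, i.e. tier A (>= 3.5).
print(round(overall_score(3.7, 3.1, [4, 4, 5, 4]), 2))
```

Note how the 40% weight on the minimum TDQS makes the lowest-scoring tool (3.1 here) drag the definition-quality component well below the 3.7 mean.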


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/AiAgentKarl/agriculture-mcp-server'
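The same endpoint can be queried programmatically. A minimal Python sketch, using only the URL shown in the curl example; the shape of the JSON response is not documented here, so the caller simply pretty-prints whatever is returned:

```python
# Minimal sketch: query the Glama MCP directory API from Python.
# The endpoint is the one shown in the curl example above.
import json
import urllib.request

BASE = "https://glama.ai/api/mcp/v1/servers"

def server_api_url(owner: str, repo: str) -> str:
    """Build the directory API URL for a GitHub owner/repo slug."""
    return f"{BASE}/{owner}/{repo}"

def fetch_server(owner: str, repo: str) -> dict:
    """GET the server record and decode it as JSON (requires network access)."""
    with urllib.request.urlopen(server_api_url(owner, repo)) as resp:
        return json.load(resp)

# Example (network required):
#   print(json.dumps(fetch_server("AiAgentKarl", "agriculture-mcp-server"), indent=2))
```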

If you have feedback or need assistance with the MCP directory API, please join our Discord server.