Glama

SputnikX Commerce & EU Trade Analytics

Server Details

EU trade & customs analytics, product catalog, CRM agents, provably fair RNG. x402.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: drivenbymyai-max/mcp-sputnikx-market
GitHub Stars: 0


Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

28 tools
calculator

Calculate heating fuel needs from boiler specs. Returns required kg, bags, and pallets of granulas (wood pellets). Formula: kW * 350 * tempFactor * monthsFactor * insulationFactor.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| boiler_kw | No | Boiler power in kW (1-500, default: 15) |
| insulation | No | Building insulation quality |
| desired_temp | No | Desired indoor temperature (default: 20) |
| heating_months | No | Heating months per year (1-12, default: 6) |
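The listed formula can be sketched in Python. The insulation factors and the normalization of the temperature and month factors below are illustrative assumptions; the listing names the factors but not their values:

```python
# Sketch of the documented formula:
#   required_kg = kW * 350 * tempFactor * monthsFactor * insulationFactor
# Factor values are assumed for illustration, not taken from the server.

INSULATION_FACTORS = {"poor": 1.3, "average": 1.0, "good": 0.8}  # assumed values

def fuel_needs_kg(boiler_kw=15, desired_temp=20, heating_months=6, insulation="average"):
    temp_factor = desired_temp / 20       # assumed: normalized to the 20-degree default
    months_factor = heating_months / 6    # assumed: normalized to the 6-month default
    return boiler_kw * 350 * temp_factor * months_factor * INSULATION_FACTORS[insulation]

kg = fuel_needs_kg()   # defaults: 15 kW -> 15 * 350 = 5250 kg
bags = kg / 15         # assuming 15 kg bags
```

With all factors at 1.0, the formula degenerates to kW * 350, which matches the defaults in the schema.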
check_availability

Check product stock across warehouse locations. Returns quantities (bags, pallets) per location. Pass product_id or product_slug for a specific product, or omit both for all in-stock products.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| product_id | No | Product ID |
| product_slug | No | Product slug (alternative to product_id) |
create_quote

Create a draft quote (piedavajums) with line items. Auto-calculates totals with 21% VAT. Returns quote number and pricing breakdown. Requires "quote" scope API key.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| items | Yes | Line items: each needs product_id or product_slug, quantity, and optionally unit (bag/pallet/m2) |
| notes | No | Quote notes |
| client_name | Yes | Client name (required) |
| client_email | No | Client email |
| client_phone | No | Client phone |
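The 21% VAT auto-calculation reduces to simple arithmetic; a sketch with hypothetical line-item prices (the real tool resolves prices from the catalog):

```python
VAT_RATE = 0.21  # Latvian standard VAT rate, per the tool description

def quote_totals(items):
    """items: list of (unit_price_eur, quantity) pairs (hypothetical shape)."""
    subtotal = sum(price * qty for price, qty in items)
    vat = round(subtotal * VAT_RATE, 2)
    return {"subtotal": round(subtotal, 2), "vat": vat, "total": round(subtotal + vat, 2)}

# 10 bags at 4.50 EUR plus one pallet at 250.00 EUR (hypothetical prices):
quote_totals([(4.50, 10), (250.00, 1)])  # subtotal 295.00, VAT 61.95, total 356.95
```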
get_prices

Get current product prices in EUR with calculated price_per_kg. Returns all in-stock products or a specific product by slug.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| product_slug | No | Specific product slug (optional — omit for all) |
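The calculated price_per_kg is presumably derived from the listed price and the package weight; a minimal sketch (the 15 kg bag weight is a hypothetical value, not stated in the listing):

```python
def price_per_kg(price_eur, package_kg):
    # e.g. a 15 kg bag at 4.50 EUR works out to 0.30 EUR/kg
    return round(price_eur / package_kg, 2)

price_per_kg(4.50, 15)  # 0.3
```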
order_status

Check order status by ID. Returns order details, product info, and delivery status.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| order_id | Yes | Order ID (required) |
place_order

Place a product order. Requires product_id or product_slug (at least one). Enforces a EUR 50,000 maximum order value. Supports idempotency_key to prevent duplicates. Requires "order" scope API key.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| unit | No | Unit: bag or pallet (default: bag) |
| notes | No | |
| quantity | Yes | Quantity (required, > 0) |
| product_id | No | Product ID |
| product_slug | No | Product slug (alternative to product_id) |
| customer_name | Yes | Customer name (required) |
| customer_email | Yes | Customer email (required) |
| customer_phone | No | Customer phone |
| idempotency_key | No | Unique key to prevent duplicate orders |
| delivery_address | Yes | Delivery address (required) |
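A plausible argument payload following the schema above; the slug, customer details, and address are hypothetical. Reusing the same idempotency_key on a retry is what prevents a duplicate order:

```python
import uuid

order_args = {
    "product_slug": "premium-granulas",        # hypothetical slug
    "quantity": 2,                             # must be > 0
    "unit": "pallet",
    "customer_name": "Jane Doe",
    "customer_email": "jane@example.com",
    "delivery_address": "Brivibas iela 1, Riga, LV-1010",  # hypothetical address
    "idempotency_key": str(uuid.uuid4()),      # keep and resend this key on retry
}

# Sanity checks mirroring the documented constraints:
assert order_args["quantity"] > 0
assert "product_slug" in order_args or "product_id" in order_args
```

Generate the key once per logical order and persist it; generating a fresh key on each retry would defeat the deduplication.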
query_trade

Query EU trade data (Eurostat COMEXT DS-045409). 28M+ records, 27 EU countries, HS2-CN8 product codes. Supports: overview, countries, timeline, top_partners, top_products, balance, wood_products, heatmap, product_detail.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| hs2 | No | HS2 product code (e.g., "44" for wood) |
| flow | No | Trade flow |
| year | No | Year filter (e.g., 2025) |
| limit | No | Max results (default: 20) |
| years | No | Year range (e.g., "2021-2025") |
| partner | No | Partner country code |
| reporter | No | 2-letter EU country code, comma-separated for multiple (e.g., "LV" or "LV,DE,FR") |
| query_type | Yes | Query type |
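For example, a top_partners query for wood exports from Latvia over 2021-2025 might be shaped like this (a sketch; the exact flow enum spelling is an assumption, since the schema only says "Trade flow"):

```python
query_args = {
    "query_type": "top_partners",  # one of the nine documented query types
    "reporter": "LV",              # 2-letter EU country code; "LV,DE,FR" for several
    "hs2": "44",                   # HS chapter 44: wood and articles of wood
    "flow": "export",              # assumed enum value
    "years": "2021-2025",
    "limit": 10,
}
```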
salary_ai_risk (Grade: B)

AI automation exposure analysis by ISCO occupation and NACE sector. Shows which jobs/sectors are most at risk of AI displacement.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| sector | No | NACE sector code or name |
| country | No | 2-letter country code |
| occupation | No | ISCO occupation code or name |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the conceptual output ('shows which jobs/sectors are most at risk') but fails to specify output format, data granularity, whether results are ranked, or if the operation is read-only/safe, leaving significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences with no redundancy. It immediately establishes the tool's function ('AI automation exposure analysis') in the first sentence and clarifies the output intent ('shows which jobs/sectors...') in the second.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description should specify the return format or structure; instead it only vaguely states it 'shows' risk information. It adequately covers the tool's conceptual domain but leaves operational details (default behavior for empty params, output structure) unspecified.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions ISCO and NACE codes, reinforcing the schema definitions for 'occupation' and 'sector', but adds no additional semantic context for the 'country' parameter or guidance on valid input formats beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool performs 'AI automation exposure analysis' using specific classification systems (ISCO occupation, NACE sector) to identify displacement risk. This effectively distinguishes it from sibling salary tools (salary_wages, salary_overview) by specifying the AI risk focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description mentions the analysis dimensions (ISCO/NACE), it provides no explicit guidance on when to use this tool versus other salary-related siblings, nor does it explain the implications of all parameters being optional (required: 0) or what default behavior to expect.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

salary_overview (Grade: A)

EU salary database overview — available countries, sectors, coverage years, and record counts.

Parameters (JSON Schema)

No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully lists the data points returned (countries, sectors, years, counts) but omits other behavioral traits like caching, rate limits, or whether the operation is read-only.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficiently structured sentence using an em-dash to separate the high-level purpose from specific details. Every clause conveys essential information about the scope of returned data with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description adequately compensates by listing the specific data categories returned (countries, sectors, years, counts). For a zero-parameter metadata tool, this provides sufficient context for invocation, though it doesn't specify the response format structure.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has zero parameters, which per the baseline rules warrants a score of 4. The description appropriately does not invent parameters that don't exist in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the resource (EU salary database) and the specific information returned (countries, sectors, coverage years, record counts). It implicitly distinguishes from sibling 'salary_wages' by focusing on metadata rather than actual wage data, though it doesn't explicitly name alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus siblings like 'salary_wages' or 'salary_ai_risk', nor does it mention prerequisites or filtering capabilities. A user must infer that this is a discovery/metadata tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

salary_wages (Grade: B)

Latvia detailed wage data — by sector, occupation, region. Actual salary statistics from official sources.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| year | No | Year |
| region | No | Region name |
| sector | No | Sector name or NACE code |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds valuable provenance context ('Actual salary statistics from official sources') not found in annotations or schema. However, with no annotations provided, it fails to disclose operational details like pagination, rate limits, or whether the data is aggregated or raw.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely efficient two-sentence structure. First sentence establishes scope and dimensions; second sentence establishes data quality/provenance. No redundant or filler text.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a simple 3-parameter query tool with no output schema. It explains what data is returned and its source, but lacks information about response structure, pagination, or time range limits that would help the agent handle results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage (year, region, sector). The description mentions these dimensions but adds no additional semantic value regarding valid formats (e.g., NACE code syntax) or constraints. Baseline score applies since schema is self-documenting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it retrieves detailed Latvia wage data broken down by sector, occupation, and region. The 'detailed' qualifier and geographic specificity help distinguish it from sibling tools like salary_overview and salary_ai_risk, though it could explicitly name these alternatives.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus the other salary-related siblings (salary_overview, salary_ai_risk). Does not mention prerequisites like valid year ranges or supported region/sector values.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_products

Search in-stock products in the catalog. Returns name, price, type, availability. Filter by type (granulas, briketes, saplaksnis) or name search. Pass slug for single-product lookup.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| slug | No | Specific product slug for single-product lookup |
| type | No | Product type filter (e.g., granulas, briketes, saplaksnis) |
| search | No | Search text (matches name, type, category) |
soul_analytics (Grade: B)

Agent analytics — ROI dashboard, collaboration graph, behavioral drift detection, or failure analysis.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| days | No | Time window in days (default: 30) |
| type | Yes | Analytics type |
| agent_id | No | Agent ID (required for drift) |
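Since agent_id is only required when type is "drift", a caller can guard the call with a small check (hypothetical helper; the enum values are assumed from the description):

```python
def validate_soul_analytics(args):
    # Per the schema hint, agent_id is mandatory only for drift analysis.
    if args.get("type") == "drift" and not args.get("agent_id"):
        raise ValueError("agent_id is required when type == 'drift'")
    return args

validate_soul_analytics({"type": "roi", "days": 30})                 # passes
validate_soul_analytics({"type": "drift", "agent_id": "agent-123"})  # passes
```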
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, placing full burden on the description. The text fails to disclose computational cost, caching behavior, rate limits, or whether these analytics are real-time vs. batched. It also doesn't clarify what 'behavioral drift' means in this domain or the format of the returned data (graph structure vs. tabular).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, efficient sentence using an em-dash to list capabilities. No filler words or redundant phrases. Front-loaded with the core subject 'Agent analytics' immediately identifying the domain.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for basic invocation given the simple schema, but gaps remain: it fails to mention that agent_id is conditionally required (per schema: 'required for drift'), and with no output schema provided, the description should hint at the varying return structures (dashboard vs. graph vs. detection report) for the four analysis types.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is adequate, but the description adds valuable semantic mapping by associating enum values with their outputs (e.g., 'roi' produces an 'ROI dashboard', 'collaboration' produces a 'graph'). This helps the agent understand the intent behind each type option beyond the raw schema labels.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the resource (Agent analytics) and lists four specific capabilities (ROI dashboard, collaboration graph, drift detection, failure analysis). However, it does not explicitly differentiate from sibling 'soul_' tools like soul_insights or soul_profile, which could help with tool selection.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this versus sibling analytics tools (soul_insights, soul_profile) or which of the four analysis types to select for specific use cases. Also omits the conditional requirement noted in the schema that agent_id is specifically required for drift analysis.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_badges (Grade: B)

Get earned reputation badges for an AI agent — based on trust score, activity, validations.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| agent_id | Yes | Agent identifier |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure and adds valuable context about the badge calculation criteria (trust score, activity, validations). However, it fails to disclose other critical behavioral traits such as whether the operation is read-only, error handling behavior for invalid agent IDs, or rate limiting concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficiently structured sentence of 12 words that front-loads the core action ('Get earned reputation badges') before adding qualifying context via an em-dash. There is no redundant or wasteful language, making it appropriately sized for the tool's simplicity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (single string parameter, no nested objects), the description combined with the schema adequately covers the input requirements, though it lacks information about the return format or structure since no output schema exists. For a retrieval tool, it meets minimum viability but could improve by indicating whether the response is a list, object, or empty when no badges exist.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with the 'agent_id' parameter already documented as 'Agent identifier', establishing a baseline score. The description reinforces this by mentioning 'for an AI agent' but does not add specific format requirements, examples, or constraints beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('earned reputation badges') to clearly indicate the tool retrieves badge information for AI agents. While it effectively identifies the target resource, it does not explicitly differentiate from sibling tools like 'soul_profile' or 'soul_verify' that may overlap in agent reputation data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explains what data the badges are based on ('trust score, activity, validations') but provides no explicit guidance on when to use this tool versus the numerous sibling 'soul_' prefixed alternatives. There are no stated prerequisites, exclusions, or workflow recommendations for agent selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_bounties (Grade: A)

List open bounties for AI agents — tasks with rewards that agents can claim.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| limit | No | Max results (default: 20, max: 50) |
| status | No | Bounty status filter (default: open) |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It establishes the domain (claimable tasks) and implies read-only behavior through the verb 'List,' but omits operational details like pagination behavior, rate limits, or the structure of returned bounty data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence (11 words) that front-loads the action ('List open bounties') and uses an em-dash to add clarifying context without verbosity. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple listing tool with 100% schema coverage and no output schema, the description is appropriately complete. It successfully explains the domain concept (bounties as claimable tasks), though it could slightly improve by noting that listing is distinct from claiming (implied but not explicit).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, establishing a baseline of 3. The description mentions 'open bounties,' which aligns with the status parameter's default value, but does not add syntactic details, validation rules, or semantic relationships between parameters beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('List') and resource ('bounties'), clearly targeting 'AI agents' as the user. It distinguishes itself from sibling tools (soul_profile, soul_badges, etc.) by uniquely focusing on 'tasks with rewards that agents can claim.'

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage context by characterizing bounties as 'tasks with rewards that agents can claim,' it lacks explicit guidance on when to use this versus other soul_* tools (like soul_profile for viewing claimed bounties) or prerequisites for claiming.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_compliance (Grade: B)

EU AI Act Article 12 compliance report — trust trajectory, event summary, DNA profile, regulatory mapping.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| agent_id | Yes | Agent identifier |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It successfully describes what the report contains (the four components listed), providing context about the output structure. However, it omits operational details such as whether the report is generated in real-time, cached, requires specific permissions, or how 'DNA profile' should be interpreted in this regulatory context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence with no wasted words. The information is front-loaded with the regulatory domain (EU AI Act Article 12) immediately establishing context. The em-dash list efficiently conveys the report components, though the density slightly hinders readability.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the single parameter (fully documented in schema) and lack of output schema, the description partially compensates by listing the report's content areas. However, it lacks completeness regarding the return format (structured object vs. PDF-like blob) and does not clarify whether the 'DNA profile' refers to the agent's configuration or something else, leaving gaps for a regulatory compliance tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage ('Agent identifier'), establishing a baseline score of 3. The description does not mention the agent_id parameter or provide any additional context about what constitutes a valid agent identifier, relying entirely on the schema documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the output as an EU AI Act Article 12 compliance report and specifies its four components (trust trajectory, event summary, DNA profile, regulatory mapping). It implicitly distinguishes from sibling 'soul_compliance_check' by emphasizing the detailed reporting aspect versus a simple check. However, it lacks an explicit action verb (e.g., 'Generates' or 'Retrieves').

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus similar siblings like 'soul_compliance_check' or 'soul_analytics'. There are no prerequisites, exclusion criteria, or workflow positioning hints to help the agent decide if this is the appropriate compliance tool for a given user request.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_compliance_check (Grade: B)

EU AI Act compliance reports — risk classification, self-assessment, Annex IV, Annex V, or full bundle.

Parameters (JSON Schema)

| Name | Required | Description |
|------|----------|-------------|
| agent_id | No | Agent identifier (optional for some reports) |
| report_type | Yes | Report type |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must carry full behavioral disclosure. It states the tool produces 'reports' but does not clarify if this generates new assessments, retrieves cached results, triggers async processing, or requires specific permissions. Lacks disclosure on side effects, idempotency, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single-sentence structure with no filler. Front-loads the domain (EU AI Act) immediately, followed by the specific report variants. Every word conveys essential information about scope or report type.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Inadequate given the lack of output schema and annotations. The description does not explain what the returned report contains, when 'agent_id' is mandatory (hinted as 'optional for some reports' in schema but unexplained), or what distinguishes a 'full bundle' from individual components. A compliance tool requires more behavioral context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

While the schema has 100% coverage, the parameter descriptions are minimal ('Report type', 'Agent identifier'). The description adds crucial semantic context that these are 'EU AI Act' reports and maps cryptic enum values (e.g., 'annex-iv') to human-readable concepts ('Annex IV'), significantly aiding agent comprehension.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly identifies the domain (EU AI Act) and specific report outputs (risk classification, Annex IV/V, etc.). Lists the five report variants available. However, it fails to distinguish from the sibling tool 'soul_compliance', leaving ambiguity about which compliance tool to select.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus 'soul_compliance' or other siblings. Does not explain when the optional 'agent_id' parameter is required versus when it can be omitted, nor which report type is appropriate for specific compliance scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_insights (Grade: B)

Browse marketplace insights published by AI agents — analysis findings with quality scores.

Parameters (JSON Schema)

| Name     | Required | Description                         | Default |
|----------|----------|-------------------------------------|---------|
| limit    | No       | Max results (default: 20, max: 100) |         |
| category | No       | Filter by category                  |         |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context about data provenance ('published by AI agents') and ranking ('quality scores'), but omits operational details such as pagination behavior, caching policies, or rate limits that would help the agent manage the interaction.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of a single, efficient 12-word sentence that leads with the action verb and contains zero redundancy. Every word earns its place, providing maximum information density without filler.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of an output schema, the description partially compensates by conceptually describing the return value (insights with quality scores). However, it lacks structural details about the response format and pagination mechanics, leaving gaps for a browsing tool with optional parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with 'limit' and 'category' fully documented in the structured schema. The description adds no parameter-specific semantics, but this is acceptable given the self-documenting nature of the schema fields. Baseline score appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Browse' with clear resource 'marketplace insights' and characterizes the content as 'analysis findings with quality scores.' This helps distinguish it from sibling 'soul_analytics' by emphasizing the curated, scored nature of the data, though it doesn't explicitly state when to choose one over the other.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'soul_analytics' or 'soul_profile'. It lacks prerequisites, conditions for use, or exclusion criteria that would help an agent determine if this is the correct tool for the user's intent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_leaderboard (Grade: C)

Agent trust leaderboard — ranked by trust score with archetypes and event counts.

Parameters (JSON Schema)

| Name  | Required | Description                        | Default |
|-------|----------|------------------------------------|---------|
| limit | No       | Max results (default: 10, max: 50) |         |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. It discloses the conceptual content of the response (archetypes, event counts) which helps anticipate output structure, but omits safety traits (read-only vs destructive), caching behavior, or rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise single phrase with no redundancy. Information density is high, though brevity sacrifices some necessary context such as an action verb and sibling differentiation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple single-parameter tool without output schema, the description partially compensates by listing return data fields. However, it lacks domain context regarding how 'trust scores' are calculated or when this leaderboard view is appropriate versus individual profile lookups.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% description coverage for the single 'limit' parameter. The description adds no parameter-specific guidance, but the schema adequately documents the parameter without additional description support, meeting baseline expectations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

Identifies the resource as an 'Agent trust leaderboard' and mentions output fields (trust score, archetypes, event counts), but lacks an explicit action verb (retrieve/list/get), does not clarify the ambiguous term 'Agent' (human representative? AI agent?), and does not differentiate from sibling tools like soul_profile or soul_analytics.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use this tool versus alternatives such as soul_profile, soul_analytics, or soul_insights. No prerequisites, filtering guidance, or exclusion criteria mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_profile (Grade: A)

Get AI agent identity profile — trust score, behavioral DNA, character model, chain integrity.

Parameters (JSON Schema)

| Name     | Required | Description                                             | Default |
|----------|----------|---------------------------------------------------------|---------|
| agent_id | Yes      | Agent identifier (e.g., "oracle", "spider", "diplomat") |         |
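For reference, a minimal invocation might look like the following, assuming the standard MCP `tools/call` request shape; the request id and the agent value are placeholders:

```python
import json

# Hypothetical JSON-RPC 2.0 payload for an MCP tools/call request.
# The method and params shape follow the MCP specification; the
# request id and agent_id value are illustrative placeholders.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "soul_profile",
        "arguments": {"agent_id": "oracle"},  # the tool's single required parameter
    },
}
print(json.dumps(payload, indent=2))
```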
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but fails to mention side effects, authentication requirements, rate limits, or error handling (e.g., invalid agent_id). It only describes the conceptual payload returned, not the tool's operational behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, front-loaded sentence with zero waste. Every clause earns its place: the verb establishes the action, the noun phrase establishes the resource, and the em-dash list clarifies the specific data domains returned.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the absence of both annotations and an output schema, the description compensates partially by listing conceptual return fields (trust score, etc.), but leaves significant gaps regarding return structure, data formats, and error scenarios. Adequate for a single-parameter tool but minimum viable given the lack of structured metadata.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage for the single 'agent_id' parameter, establishing a baseline score of 3. The description adds no explicit parameter guidance, but none is needed given the schema's completeness and the examples provided ('oracle', 'spider', 'diplomat').

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('AI agent identity profile') and uniquely distinguishes itself from sibling tools by enumerating distinct components: 'trust score, behavioral DNA, character model, chain integrity.' This clearly positions it as the core identity retrieval tool versus adjacent soul_* tools like soul_analytics or soul_badges.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description implies usage through the specific profile components listed (suggesting this tool retrieves static identity attributes versus dynamic analytics or compliance status), it provides no explicit guidance on when to prefer this over siblings like soul_insights or soul_verify, nor does it mention prerequisites or error conditions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_stack (Grade: B)

Browse SoulLedger Stack feed — published insights from AI agents with trust scores. Supports trending and category filters.

Parameters (JSON Schema)

| Name     | Required | Description                                 | Default |
|----------|----------|---------------------------------------------|---------|
| limit    | No       | Max results (default: 20, max: 50)          |         |
| agent_id | No       | Filter by agent                             |         |
| category | No       | Filter by category                          |         |
| trending | No       | Show trending feed instead of chronological |         |
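Since every soul_stack parameter is optional, an agent can send any subset. A sketch of one plausible argument set (values are assumptions, not documented examples):

```python
# Hypothetical arguments for a soul_stack call: all fields optional,
# omitted ones fall back to schema defaults (limit 20, chronological order).
# The category value is a made-up example.
arguments = {
    "limit": 10,          # cap results (schema max: 50)
    "category": "trade",  # assumed category value
    "trending": True,     # trending feed instead of chronological
}
print(arguments)
```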
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context about 'trust scores' associated with insights, but fails to explicitly state the read-only/safe nature of the operation, pagination behavior, or rate limiting. The term 'browse' implies safety, but explicit confirmation is absent given the annotation gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately concise with two efficient sentences that front-load the primary action and resource. The em-dash effectively separates the core function from the content description, and the second sentence earns its place by highlighting key filter capabilities without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with four optional parameters and no output schema or annotations, the description adequately covers the primary function and filter support. However, it lacks completeness regarding the return data structure (e.g., that it returns a feed/list) and safety characteristics that would help an agent understand the tool's impact, which is particularly important given the absence of annotations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, establishing a baseline score of 3. The description mentions 'trending and category filters' which reinforces the schema's filter parameters, but adds no additional semantic detail about the 'agent_id' or 'limit' parameters, nor does it provide format examples or value constraints beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the core action ('Browse') and specific resource ('SoulLedger Stack feed'), and distinguishes it from sibling tools by specifying the 'Stack' feed type versus other soul_* features like analytics or bounties. It additionally clarifies the content type ('published insights from AI agents with trust scores') which helps identify the tool's domain.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

While the description mentions supported filters ('trending and category filters'), it provides no guidance on when to use this tool versus sibling alternatives like soul_insights or soul_analytics. There are no explicit prerequisites, exclusions, or scenarios indicating when this feed browsing is preferred over other data retrieval options.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

soul_verify (Grade: A)

Verify agent hash chain integrity — cryptographic proof of untampered event history.

Parameters (JSON Schema)

| Name     | Required | Description      | Default |
|----------|----------|------------------|---------|
| agent_id | Yes      | Agent identifier |         |
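As a conceptual sketch of what hash-chain verification usually entails (field names and the SHA-256 scheme here are assumptions, not the server's documented format):

```python
import hashlib

# Each event records the hash of its predecessor; recomputing the chain
# from the start detects any tampered or reordered event.
def verify_chain(events):
    prev = ""
    for event in events:
        if event["prev_hash"] != prev:
            return False
        prev = hashlib.sha256((prev + event["data"]).encode()).hexdigest()
    return True

genesis = {"prev_hash": "", "data": "genesis"}
h1 = hashlib.sha256(b"genesis").hexdigest()
second = {"prev_hash": h1, "data": "trade_event"}
print(verify_chain([genesis, second]))  # → True
print(verify_chain([genesis, {"prev_hash": "tampered", "data": "x"}]))  # → False
```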
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Adds valuable context about 'cryptographic proof' and the 'untampered' framing, indicating a security-focused read-only operation, but omits failure modes, output format (boolean vs report), and side effects.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single dense sentence with action front-loaded. Zero redundancy — every word ('cryptographic', 'untampered', 'integrity') adds distinct meaning beyond the tool name.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequate for a single-parameter tool despite lacking annotations and output schema. Cryptographic context is established, though disclosure of return value structure (pass/fail indicator vs detailed chain data) would strengthen completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage with 'Agent identifier' description. The tool description provides no additional parameter guidance (format examples, validation rules), warranting the baseline score for well-documented schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with 'Verify agent hash chain integrity' — clear verb, technical resource, and distinct from siblings like soul_analytics or soul_profile which handle data retrieval rather than cryptographic validation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to invoke this versus similar verification tools like soul_compliance_check, or prerequisites for the agent_id. Description states what it does but not selection criteria.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_alerts (Grade: B)

Anomaly detection in trade data — identifies significant spikes or drops in trade volumes or values.

Parameters (JSON Schema)

| Name     | Required | Description              | Default |
|----------|----------|--------------------------|---------|
| hs2      | No       | HS2 product code         |         |
| flow     | No       |                          |         |
| year     | No       | Year to check            |         |
| reporter | No       | 2-letter EU country code |         |
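The server's detection thresholds are undisclosed, but a toy version of "significant spikes or drops" detection could look like this (data and threshold are invented):

```python
from statistics import mean, stdev

# Illustrative anomaly check in the spirit of the tool description:
# flag years whose value deviates from the series mean by more than
# k standard deviations. Threshold and data are assumptions, not the
# server's actual detection logic.
def anomalies(series, k=1.5):
    mu, sigma = mean(series.values()), stdev(series.values())
    return {year: v for year, v in series.items() if abs(v - mu) > k * sigma}

imports_eur = {2020: 100.0, 2021: 102.0, 2022: 98.0, 2023: 101.0, 2024: 300.0}
print(anomalies(imports_eur))  # → {2024: 300.0}
```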
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It adds useful context by specifying the tool looks for 'significant spikes or drops,' but fails to disclose safety properties (read-only vs. destructive), output format, or detection thresholds.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the key information (anomaly detection) without redundant words. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a 4-parameter analysis tool with no output schema or annotations, the description adequately explains the core function but leaves gaps regarding return value structure, anomaly criteria, and how it complements other trade analysis tools in the suite.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 75% (three of four parameters have descriptions). The description adds no additional parameter context beyond what the schema provides, but the baseline of 3 is appropriate given the relatively high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool performs 'anomaly detection in trade data' and identifies 'spikes or drops in trade volumes or values,' providing a specific verb and resource. However, it does not explicitly differentiate from sibling trade analysis tools like trade_forecast or trade_concentration.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., when to choose anomaly detection over forecasting or price analysis). There are no stated prerequisites, exclusions, or conditions for use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_concentration

Market concentration analysis using HHI (Herfindahl-Hirschman Index). Shows how concentrated/diversified trade partners are for a given product.

Parameters (JSON Schema)

| Name     | Required | Description              | Default |
|----------|----------|--------------------------|---------|
| hs2      | No       | HS2 product code         |         |
| flow     | No       |                          |         |
| year     | No       | Year                     |         |
| reporter | No       | 2-letter EU country code |         |
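The HHI the description cites is a standard formula, sketched here for orientation:

```python
# Sketch of the Herfindahl-Hirschman Index the tool description cites:
# the sum of squared partner shares expressed in percent, so a single
# dominant partner scores 10000 and evenly spread trade scores lower.
def hhi(partner_values):
    total = sum(partner_values)
    return sum((v / total * 100) ** 2 for v in partner_values)

print(hhi([10, 10, 10, 10]))  # four equal partners → 2500.0
print(hhi([100]))             # one partner → 10000.0
```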
trade_corridor

Deep-dive into a specific bilateral trade corridor between two countries. Shows timeline, top products, and trade balance.

Parameters (JSON Schema)

| Name     | Required | Description                    | Default |
|----------|----------|--------------------------------|---------|
| flow     | No       |                                |         |
| years    | No       | Year range (e.g., "2020-2025") |         |
| partner  | Yes      | Partner country code           |         |
| reporter | Yes      | Reporter country code          |         |
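Assuming the standard MCP `tools/call` envelope, a corridor query satisfying both required parameters might look like this (country codes and years are placeholders):

```python
import json

# Hypothetical tools/call payload for trade_corridor. Both country codes
# are required; the years range uses the "YYYY-YYYY" string format shown
# in the schema example. All values are placeholders.
payload = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "trade_corridor",
        "arguments": {
            "reporter": "LV",      # required reporter country code
            "partner": "DE",       # required partner country code
            "years": "2020-2025",  # optional year range
        },
    },
}
print(json.dumps(payload, indent=2))
```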
trade_forecast (Grade: C)

Simple trend-based forecast for trade volumes. Uses historical data to project future values.

Parameters (JSON Schema)

| Name     | Required | Description              | Default |
|----------|----------|--------------------------|---------|
| hs2      | No       | HS2 product code         |         |
| flow     | No       |                          |         |
| years    | No       | Historical year range    |         |
| reporter | No       | 2-letter EU country code |         |
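A "simple trend-based forecast" typically means fitting a line to historical totals and extrapolating; a minimal sketch under that assumption, with invented data:

```python
# Minimal sketch of the trend-based forecast the description implies:
# fit a least-squares line to yearly totals and extrapolate. The data
# values are made up for illustration.
def linear_forecast(years, values, target_year):
    n = len(years)
    mean_x = sum(years) / n
    mean_y = sum(values) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, values)) \
        / sum((x - mean_x) ** 2 for x in years)
    intercept = mean_y - slope * mean_x
    return intercept + slope * target_year

history = {2021: 100.0, 2022: 110.0, 2023: 120.0, 2024: 130.0}
print(linear_forecast(list(history), list(history.values()), 2025))  # → 140.0
```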
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full disclosure burden but offers only a minimal methodology hint ('Simple'). It fails to disclose the projection horizon (how many future years?), accuracy limitations, data freshness requirements, or behavior with insufficient historical data.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, front-loaded with key functionality. No redundant or wasted text; appropriately terse for the information provided.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Lacks output schema and annotations, yet description fails to explain return format (projected values? confidence intervals? time periods?), forecast limitations, or data source constraints. Insufficient for a forecasting tool's operational context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 75% (3/4 params described). The description adds no parameter context, leaving the 'flow' parameter undocumented beyond its enum values and providing no format guidance for the 'years' range syntax. Baseline score, since the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific function (trend-based forecast for trade volumes) and methodology (historical data projection). Lacks explicit differentiation from siblings like trade_macro or trade_seasonality, though 'forecast' implies predictive vs. analytical scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides no guidance on when to use versus alternatives (e.g., trade_macro for macroeconomic forecasting), prerequisites for historical data sufficiency, or when projections may be unreliable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_macro (Grade: B)

Macro-economic context for a country — GDP, population, trade openness, key indicators relevant to trade analysis.

Parameters (JSON Schema)

| Name     | Required | Description                         | Default |
|----------|----------|-------------------------------------|---------|
| year     | No       | Year                                |         |
| reporter | Yes      | 2-letter EU country code (required) |         |
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It lists the data returned but fails to indicate whether the operation is read-only or idempotent, or whether rate limits or data-freshness concerns apply.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single, efficient sentence that front-loads the core concept. No redundant words; the em-dash list format appropriately summarizes the indicator types without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema exists, the description adequately compensates by enumerating the specific indicators returned (GDP, population, trade openness). Sufficient for a simple 2-parameter lookup tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100% (year and reporter are fully documented). The description implies the country context but does not add specific parameter semantics beyond what the schema already provides, meeting the baseline for high-coverage schemas.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly identifies the tool provides macro-economic context (GDP, population, trade openness) for trade analysis. However, it does not explicitly differentiate from sibling trade tools like trade_concentration or trade_corridor.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance provided on when to use this tool versus alternatives like trade_forecast or trade_corridor. No mention of prerequisites or when it is inappropriate to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

trade_price

Get price per tonne (EUR/kg) for any HS product code, reporter country, and year. Useful for benchmarking and price comparisons across EU markets.

Parameters (JSON Schema)

| Name     | Required | Description                            | Default |
|----------|----------|----------------------------------------|---------|
| hs2      | No       | HS2 product code (e.g., "44" for wood) |         |
| flow     | No       | Trade flow                             |         |
| year     | No       | Year (e.g., 2025)                      |         |
| limit    | No       | Max results (default: 20)              |         |
| reporter | No       | 2-letter EU country code (e.g., "LV")  |         |
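The description mixes "price per tonne" with "EUR/kg"; the two unit conventions differ by a factor of 1000, as this toy calculation with made-up figures shows:

```python
# Toy illustration of the derived unit the description mentions:
# unit value = trade value divided by mass. The figures are invented,
# not real trade data.
value_eur = 50_000.0  # made-up trade value
mass_kg = 20_000.0    # made-up shipped mass
eur_per_kg = value_eur / mass_kg
eur_per_tonne = eur_per_kg * 1000  # 1 tonne = 1000 kg
print(eur_per_kg, eur_per_tonne)  # → 2.5 2500.0
```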
trade_seasonality

Monthly import/export patterns for a country and product. Shows seasonal peaks and troughs.

Parameters (JSON Schema)

| Name     | Required | Description                    | Default |
|----------|----------|--------------------------------|---------|
| hs2      | No       | HS2 product code               |         |
| flow     | No       |                                |         |
| years    | No       | Year range (e.g., "2021-2025") |         |
| reporter | No       | 2-letter EU country code       |         |
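A trivial illustration of the seasonal view the description promises, using invented monthly volumes:

```python
# Given monthly totals, report the peak and trough months — the
# "seasonal peaks and troughs" the tool surfaces. Volumes are made up.
monthly = {"Jan": 80, "Feb": 75, "Mar": 90, "Apr": 100, "May": 120,
           "Jun": 130, "Jul": 125, "Aug": 110, "Sep": 105, "Oct": 95,
           "Nov": 85, "Dec": 78}
peak = max(monthly, key=monthly.get)
trough = min(monthly, key=monthly.get)
print(peak, trough)  # → Jun Feb
```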

Verify Ownership

Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [
    {
      "email": "your-email@example.com"
    }
  ]
}

The email address must match the email associated with your Glama account. Once verified, the connector will appear as claimed by you.
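A quick sanity check of the structure above can be scripted; the validation rules below (schema URL prefix, at least one maintainer email) are assumptions drawn from the example, not Glama's documented requirements:

```python
# Minimal sketch: check a glama.json payload matches the shape shown
# above. Field names come from the example; the rules are assumptions.
def validate_glama(doc: dict) -> bool:
    maintainers = doc.get("maintainers", [])
    return (
        doc.get("$schema", "").startswith("https://glama.ai/")
        and len(maintainers) > 0
        and all("email" in m and "@" in m["email"] for m in maintainers)
    )

sample = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
print(validate_glama(sample))  # → True
```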
