Glama

Server Details

Query cryptographically verified trading reputations for humans and AI agents. Responses are ed25519-signed.

Status: Healthy
Transport: Streamable HTTP
Repository: tradallo/reputation
GitHub Stars: 0
Server Listing: tradallo-reputation

Tool Descriptions: A

Average 4/5 across 5 of 5 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool has a clearly distinct purpose: fetching track records, raw trade receipts, version history, searching records by filters, and verifying receipts. No overlap or ambiguity.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern (get_track_record, get_utrs, get_versions, search_records, verify_utr), making them predictable and easy to understand.

Tool Count: 5/5

With 5 tools, the server is well-scoped for its purpose of querying and verifying trading reputation data. Neither sparse nor overwhelming.

Completeness: 5/5

The tool surface covers all necessary read operations: individual records, lists of receipts, version history, search, and cryptographic verification. No obvious gaps given the domain.

Available Tools

5 tools
get_track_record: A
Read-only, Idempotent

Fetch a verified trading track record for a Tradallo profile (human) or agent. Returns cryptographically-verified statistics (Sharpe, Sortino, win rate, max drawdown, PnL, trade count, expectancy, profit factor) across all-time and rolling 30/90/365-day windows. Response is a JCS-canonicalized + ed25519-signed envelope.

Examples:

  • get_track_record({ handle: "alpha-momentum-v3" }) // agent (default)

  • get_track_record({ handle: "aaronjordan", principal_type: "human" })

Parameters (JSON Schema)
- handle (required): Tradallo handle (lowercase letters, digits, hyphens, underscores; 1-40 chars)
- principal_type (optional): 'agent' (default) or 'human' — which namespace to look up the handle in

Output Schema (JSON Schema)
- ok (optional): Set to false on protocol-level errors. Successful envelopes do not include this.
- data (required): The signed payload. Shape depends on the tool — see each tool's description.
- served_at (optional): ISO-8601 timestamp at which the envelope was signed.
- signature (optional): ed25519 signature over the JCS-canonicalized envelope minus this field.
- schema_version (optional): Envelope schema version. Always '1' today.
- max_age_seconds (optional): Replay window — clients reject envelopes past served_at + this many seconds.
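A consumer can check envelope freshness and rebuild the signed bytes before handing them to an ed25519 verifier. A minimal sketch in Python, assuming the envelope is plain JSON whose canonical form matches sorted-key compact serialization (full RFC 8785 JCS has additional number and string rules) and that an ed25519 library such as PyNaCl handles the final signature check:

```python
import json
from datetime import datetime, timezone

def canonical_bytes(envelope: dict) -> bytes:
    # Drop the signature field, then serialize with sorted keys and no
    # whitespace -- a close approximation of JCS (RFC 8785) for envelopes
    # made of strings, integers, and nested objects.
    unsigned = {k: v for k, v in envelope.items() if k != "signature"}
    return json.dumps(unsigned, sort_keys=True, separators=(",", ":")).encode()

def is_fresh(envelope: dict, now=None) -> bool:
    # Enforce the replay window: reject envelopes older than
    # served_at + max_age_seconds.
    now = now or datetime.now(timezone.utc)
    served = datetime.fromisoformat(envelope["served_at"])
    return (now - served).total_seconds() <= envelope.get("max_age_seconds", 0)
```

The bytes from `canonical_bytes` would then be checked against the server's published ed25519 verify key (e.g. `VerifyKey(pubkey).verify(msg, sig)` in PyNaCl).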
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description must cover behavioral traits. It mentions cryptographic signing (envelope) but omits error handling, auth needs, rate limits, or success/failure responses. Partial transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise: two key sentences plus two examples. No wasted words, all information front-loaded. Ideal length.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema, description covers return format (signed envelope) and key stats. Missing error behavior (e.g., invalid handle) but otherwise complete enough for a fetch tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with descriptions for both handle (pattern) and principal_type (enum). Description adds examples and clarifies default ('agent' for handle-only calls), adding value beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states it fetches a verified trading track record for a Tradallo profile or agent, lists returned statistics, and distinguishes from siblings like get_utrs (likely multiple records) and search_records (search). Specific verb+resource+scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Examples show when to use 'human' vs 'agent', but no explicit comparison to sibling tools or conditions for not using. Context is clear, but lacking exclusions/alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_utrs: A
Read-only, Idempotent

Fetch raw Universal Trade Receipts for an agent. Each UTR is a v2 canonical receipt with its SHA-256 hash recomputed by Tradallo so consumers can spot-check individual receipts against verify_utr. Paginated cursor-style on closed_at — pass the prior response's next_since to advance.

Example:

  • get_utrs({ agent_handle: "alpha-momentum-v3", limit: 50 })

Parameters (JSON Schema)
- limit (optional): Page size (1-500, default 100)
- since (optional): ISO-8601 timestamp; only return UTRs closed at or after this. Defaults to the agent's registration anchor.
- agent_handle (required): Agent's Tradallo handle

Output Schema
Same signed-envelope schema as get_track_record: ok, data, served_at, signature, schema_version, max_age_seconds.
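The cursor-style pagination described above can be sketched as a client loop. Here `call_tool` stands in for whatever MCP client invocation is available, and the field names inside `data` (`utrs`, and `next_since` as a response field) are assumptions for illustration:

```python
def fetch_all_utrs(call_tool, agent_handle: str, limit: int = 100):
    # Walk the closed_at cursor: pass the previous response's next_since
    # as `since` until the server stops returning a cursor or results.
    since = None
    receipts = []
    while True:
        args = {"agent_handle": agent_handle, "limit": limit}
        if since:
            args["since"] = since
        page = call_tool("get_utrs", args)["data"]
        receipts.extend(page["utrs"])
        since = page.get("next_since")
        if not since or not page["utrs"]:
            break
    return receipts
```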
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations, so description carries full burden. Discloses pagination behavior and hash recomputation, but does not mention idempotency, error handling, or response structure.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences plus a concise example. No fluff, first sentence sets purpose, second provides key behavior details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers pagination and relationship to verify_utr, but lacks description of response structure (e.g., fields of UTR) and does not explain 'v2 canonical receipt' for newcomers. Adequate with minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% coverage, baseline 3. Description adds context about pagination cursor (next_since) but does not elaborate on 'since' or 'limit' beyond schema, and may cause confusion by mentioning next_since without clarifying it is a response field.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Fetch raw Universal Trade Receipts for an agent' with specific verb and resource. It differentiates from sibling 'verify_utr' by noting that hash is recomputed for spot-checking.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides context for pagination (cursor on closed_at, passing next_since) and an example, but lacks explicit when-to-use or alternatives compared to sibling tools beyond hinting at verify_utr.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_versions: A
Read-only, Idempotent

Fetch the full version history of an agent: semver tags, version_hash, policy_hash, when each version was deployed and superseded. Use to understand which version of an agent's policy produced a given track record before delegating capital.

Example:

  • get_versions({ agent_handle: "alpha-momentum-v3" })

Parameters (JSON Schema)
- agent_handle (required): Agent's Tradallo handle (lowercase letters, digits, hyphens, underscores)

Output Schema
Same signed-envelope schema as get_track_record: ok, data, served_at, signature, schema_version, max_age_seconds.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses that the tool is a read operation (fetch) and lists the returned fields. It does not mention side effects, permissions, or pagination, but for a simple fetch this is adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences plus an example, front-loaded with the core purpose. Every sentence adds value, with no redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with one parameter and no output schema, the description covers the main points: what is fetched, why to use it, and an example. It does not discuss pagination or ordering, but these are minor omissions.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so the schema already documents the one parameter. The description adds an example usage but does not provide additional semantic information beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states what the tool does: fetch full version history including semver tags, version_hash, policy_hash, and timestamps. It implies a use case (understanding which version produced a track record), but does not explicitly differentiate from siblings like get_track_record.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context: 'Use to understand which version of an agent's policy produced a given track record before delegating capital.' It offers an example, but does not specify when not to use it or compare to alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_records: A
Read-only, Idempotent

Discover verified trading records by performance filters. Returns a list of summary records sorted by the chosen metric. Useful when an agent is shopping for strategies meeting specific risk/return criteria.

Example:

  • search_records({ min_sharpe: 1.5, min_trades: 100, max_drawdown: 0.15, sort_by: "sharpe", limit: 10 })
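This "shopping" flow pairs naturally with get_track_record: discover candidates by filter, then pull each one's full signed record. A hypothetical two-step sketch, where `call_tool` is a stand-in MCP client and the `records` and `handle` field names inside `data` are assumptions:

```python
def shortlist(call_tool, min_sharpe: float = 1.5, top: int = 3):
    # Step 1: discover candidates that clear the Sharpe bar.
    found = call_tool("search_records",
                      {"min_sharpe": min_sharpe, "sort_by": "sharpe", "limit": top})
    # Step 2: fetch each candidate's full signed track record.
    return [call_tool("get_track_record", {"handle": r["handle"]})
            for r in found["data"]["records"]]
```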

Parameters (JSON Schema)
- limit (optional): Max results (1-100, default 25)
- venue (optional): Restrict to a specific venue (e.g. 'hyperliquid', 'dydx')
- sort_by (optional): Sort field (descending). Default 'net_pnl'
- min_sharpe (optional): Minimum annualized Sharpe ratio
- min_trades (optional): Minimum trade count
- max_drawdown (optional): Max drawdown as a fraction (0.25 = 25%)
- principal_type (optional): Restrict to humans or agents only

Output Schema
Same signed-envelope schema as get_track_record: ok, data, served_at, signature, schema_version, max_age_seconds.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided. Description indicates read operation ('Discover', 'Returns a list') but lacks details on side effects, rate limits, or authentication. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three focused sentences plus an illustrative example. No redundant information, front-loaded with purpose and usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Provides overview and example but lacks details on output format or pagination. With 7 parameters and no output schema, additional context about return records would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is high (86%) and descriptions are clear. Description adds a concrete example demonstrating parameter usage, which enhances understanding beyond schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states it discovers verified trading records by performance filters and returns a sorted summary list. Differentiates from sibling tools like get_track_record which retrieves a single record.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly says it's useful when shopping for strategies with specific criteria, providing clear usage context. Does not list exclusions but implies other tools for single-record access.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

verify_utr: A
Read-only, Idempotent

Look up a Universal Trade Receipt by SHA-256 hash. Returns whether Tradallo has anchored that hash on-chain via a Solana memo transaction, and if so the chain, signature, slot, posted_at, Solana Explorer URL, and notarizer pubkey for independent verification.

Example:

  • verify_utr({ utr_hash: "b81f2c7e9a4d5e6f8a3b1c2d3e4f5a6b7c8d9e0f1a2b3c4d5e6f7a8b9c0d1e2f" })
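The spot-check flow implied here is: canonicalize a receipt, hash it, and look the digest up with verify_utr. The exact byte layout of a v2 canonical receipt is not documented on this page, so sorted-key compact JSON is an assumption for illustration:

```python
import hashlib
import json

def utr_hash(receipt: dict) -> str:
    # SHA-256 over a canonical serialization of the receipt; the hex
    # digest is what verify_utr expects as utr_hash.
    blob = json.dumps(receipt, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(blob).hexdigest()
```

If the recomputed digest matches the hash returned by get_utrs, the receipt can then be checked for on-chain anchoring via verify_utr.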

Parameters (JSON Schema)
- utr_hash (required): 64-char hex SHA-256 hash of the UTR to look up

Output Schema
Same signed-envelope schema as get_track_record: ok, data, served_at, signature, schema_version, max_age_seconds.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Describes return fields but lacks explicit read-only statement; no annotations provided, so description carries full burden but omits side-effect clarity.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Concise single sentence plus example, front-loaded with purpose. No redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a hash-lookup tool: explains input, return fields (if found), and provides example. No output schema needed, schema coverage 100%.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema covers parameter fully with pattern and description; description adds example but no new semantic information beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Look up' and specific resource 'Universal Trade Receipt by SHA-256 hash'. Distinguishes from siblings like get_utrs (list) and search_records (search).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implied usage when you have a hash for on-chain anchoring verification, but no explicit guidance on when to use vs siblings or when not to use.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
