
ProofX - Content Protection for Creators

Server Details

Protect and verify digital content with cryptographic signing and proof of ownership.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Available Tools

6 tools
compute_hash
Read-only, Idempotent

Compute the SHA-256 hash of text content. Useful for creating a fingerprint of content for verification purposes.
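As an illustrative sketch of the operation this tool performs (the text encoding is an assumption; the description does not specify one, and UTF-8 is the most common choice):

```python
import hashlib

def compute_hash(text: str) -> str:
    """Return the SHA-256 hash of text as a 64-character hex string."""
    # Assumes UTF-8 encoding; the tool description does not state which
    # encoding ProofX uses, so treat this as a sketch, not a guarantee.
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

print(compute_hash("hello"))
# → 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

A SHA-256 digest is always 64 hex characters, regardless of input length, which is what makes it useful as a fixed-size content fingerprint.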

Parameters (JSON Schema)
text (required): The text content to hash

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/destructive hints; description adds valuable context by specifying SHA-256 algorithm specifically (not just 'hash') and explaining the fingerprinting use case. Does not contradict annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states the core action, second provides use case context. No redundancy or filler. Front-loaded with essential information; every sentence earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a simple utility tool: annotations cover behavioral safety, the schema fully documents the single input, and the description specifies the algorithm. With no output schema, the description could optionally mention that the return value is the hexadecimal hash string, but the tool name makes this sufficiently obvious.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear 'text' parameter description. Description mentions 'text content' but does not add syntax constraints, encoding requirements, or length limits beyond what the schema provides. Baseline score appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Compute' with specific resource 'SHA-256 hash of text content'. It distinguishes from sibling verify_hash by emphasizing 'creating a fingerprint' (generation) versus verification (checking), and clarifies the cryptographic algorithm used.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context for when to use ('for verification purposes'/'creating a fingerprint'), implying content integrity workflows. However, it does not explicitly name sibling verify_hash as the alternative for checking hashes against content, nor state when NOT to use (e.g., for non-text content).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_creator
Read-only, Idempotent

Look up a ProofX creator's profile, including their identity, certificate details, and content count. Use this when a user wants to learn about a content creator registered on ProofX.
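A client-side pre-flight check for the creator_id format could look like the sketch below. The validator itself is hypothetical (the schema documents the format but provides no validator), and it assumes lowercase hex, which the example 'c1c15c6c' suggests but the schema does not state:

```python
import re

# Hypothetical pre-flight check; assumes lowercase hex as in the
# documented example 'c1c15c6c'.
CREATOR_ID_RE = re.compile(r"^[0-9a-f]{8}$")

def is_valid_creator_id(creator_id: str) -> bool:
    """Return True if creator_id looks like an 8-character hex string."""
    return CREATOR_ID_RE.fullmatch(creator_id) is not None

print(is_valid_creator_id("c1c15c6c"))  # → True
print(is_valid_creator_id("C1C15C6C"))  # → False (uppercase rejected under this assumption)
```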

Parameters (JSON Schema)
creator_id (required): The ProofX creator ID (8-character hex string, e.g. 'c1c15c6c')

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare safety profile (readOnly/idempotent), freeing the description to document return value composition (identity, certificate details, content count). Adds valuable context about what data is returned that annotations cannot express.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste: first defines operation and return payload, second provides usage trigger. Perfectly front-loaded and appropriately sized for a single-parameter lookup tool.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple read-only lookup with complete schema annotations, description adequately covers return value structure. Minor gap: does not mention error behavior (e.g., 'returns null if creator not found').

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with detailed description of creator_id format (8-character hex string, example provided). Description adds no parameter details, but baseline 3 is appropriate when schema carries full documentation burden.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Strong specific verb 'Look up' paired with clear resource 'ProofX creator's profile'. The scope (identity, certificate details, content count) differentiates this from sibling 'my_account' (self-profile) and content verification tools.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'Use this when a user wants to learn about a content creator registered on ProofX.' However, lacks explicit 'when not to use' or named alternatives (e.g., 'use my_account instead for your own profile').

my_account
Read-only, Idempotent

Show the current user's ProofX account information including their creator ID, plan, and content count. Use this when a user wants to check their ProofX account status.

Parameters (JSON Schema)
No parameters

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish read-only, idempotent, non-destructive safety profile. The description adds value by disclosing the specific data fields returned (creator ID, plan, content count) which compensates for the missing output schema, but lacks details on auth requirements or rate limits.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences with zero waste. The first sentence front-loads the core action and return value; the second provides usage context. Every sentence earns its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a zero-parameter read-only tool, the description is appropriately complete. It compensates for the missing output schema by enumerating the specific account fields returned, providing sufficient information for an agent to decide invocation and handle results.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has zero parameters, which establishes a baseline score of 4. The description does not explicitly mention that no parameters are required, but this is evident from the empty properties object.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Show') and resource ('ProofX account information') and lists specific fields returned (creator ID, plan, content count). It implicitly distinguishes from 'get_creator' by emphasizing 'current user,' though it does not explicitly contrast with that sibling.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit positive guidance ('Use this when a user wants to check their ProofX account status'), clearly indicating the trigger condition. However, it lacks negative guidance or mention of when to use siblings like 'get_creator' instead.

protect_content
Destructive

Register and protect digital content with ProofX cryptographic signatures. Computes the SHA-256 hash and registers it with ProofX for tamper-proof protection. Use this when a user wants to protect their text, article, poem, script, code, or other content they created. The user must provide their creator_id (get one free at proofx.co.uk).
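For illustration, a call to this tool might look like the following, using the standard MCP tools/call JSON-RPC shape; the title, creator_id, and content values are made-up placeholders, not real registrations:

```python
import json

# Illustrative MCP tools/call request for protect_content.
# Argument names come from the schema below; the values are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "protect_content",
        "arguments": {
            "title": "My Poem",                  # optional, defaults to "Untitled"
            "creator_id": "c1c15c6c",            # optional 8-char hex; session is used if omitted
            "content_text": "Roses are red...",  # required
        },
    },
}
print(json.dumps(request, indent=2))
```

Note that only content_text is required by the schema; the description's claim that the user "must provide their creator_id" is discussed under Parameters below.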

Parameters (JSON Schema)
title (optional, default "Untitled"): A title for the content being protected
creator_id (optional): ProofX creator ID (8-char hex). If not provided, uses the authenticated session. Get one free at proofx.co.uk
content_text (required): The text content to protect (article, poem, script, code, lyrics, etc.)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description effectively supplements annotations by revealing technical implementation details (SHA-256 hashing, ProofX registration) and the value proposition ('tamper-proof protection'). It correctly aligns with openWorldHint=true by identifying the external ProofX service, and destructiveHint=true is reflected in the permanence of the cryptographic signature. However, it does not explicitly clarify what 'destructive' means in this context (e.g., irreversible registration, consumption of credits, or public visibility), which would aid agent decision-making.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The four-sentence structure is optimally organized: mechanism first (hashing/registering), value proposition second (tamper-proof), usage condition third, and authentication requirement fourth. Every sentence conveys distinct information without redundancy, making it easy for the model to parse intent and constraints quickly.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the high schema coverage (100%) and presence of safety annotations, the description adequately covers the input requirements and operational context. However, with destructiveHint=true and no output schema, the description should ideally clarify the irreversible nature of the registration and potential side effects (e.g., public blockchain entry, quota consumption). The absence of return value description is acceptable but leaves the agent uncertain about success confirmation.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is 3. The description adds concrete examples of content types ('poem, script, code') and acquisition instructions for the creator_id, which enhances usability. However, it contains a minor discrepancy with the schema: the description states the user 'must provide their creator_id,' while the schema indicates it is optional (falls back to authenticated session). This inconsistency could confuse parameter selection logic.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific action verbs ('Register and protect') and identifies the resource ('digital content') and method ('ProofX cryptographic signatures'). It implicitly distinguishes from siblings like compute_hash (which lacks registration) and verify_content (which checks rather than creates protection) by specifying the ProofX registration step, though it does not explicitly name those alternatives.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear activation criteria: 'Use this when a user wants to protect their text, article, poem, script, code, or other content they created.' It also establishes a clear prerequisite regarding the creator_id. It lacks explicit 'when not to use' guidance or named sibling alternatives (e.g., 'use compute_hash if you only need hashing without registration'), but the contextual clues provide sufficient boundaries for selection.

verify_content
Read-only, Idempotent

Check if digital content is protected with ProofX. Returns the creator, protection date, signature status, and content details. Use this when a user wants to verify if an image, video, or document has been registered with ProofX.

Parameters (JSON Schema)
content_id (required): The ProofX content ID (8-character hex string, e.g. 'a1b2c3d4')

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations correctly identify this as read-only, idempotent, and non-destructive, the description adds valuable behavioral context by detailing the specific return payload (creator, protection date, signature status, content details) despite the absence of an output schema. It does not mention authentication requirements or rate limits.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two highly efficient sentences: the first covers function and return values, the second covers usage context. Every word earns its place with no redundant or filler content, making it appropriately front-loaded.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's single-parameter simplicity and comprehensive annotations, the description adequately covers the verification workflow and return data structure. It could improve by briefly noting error behavior (e.g., 'returns null if not found'), but remains complete enough for agent selection.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the parameter 'content_id' is fully documented in the JSON schema including its format (8-character hex string). The description does not add additional semantic context about the parameter, meeting the baseline expectation for high-coverage schemas.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description specifies the exact action ('Check if digital content is protected') and mentions the external system (ProofX), providing clear intent. However, it does not explicitly differentiate from the sibling tool 'verify_hash', which could cause confusion when both are available.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to invoke the tool ('Use this when a user wants to verify...'), covering the target content types (image, video, document). It lacks explicit guidance on when NOT to use it or which sibling tool to use instead (e.g., versus verify_hash).

verify_hash
Read-only, Idempotent

Verify content authenticity by its SHA-256 hash. Use this when a user has a file and wants to check if it's been registered with ProofX without uploading it. The user should compute the SHA-256 hash of their file first.
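The prerequisite step (hashing a file locally before calling the tool) can be sketched as follows; the chunked read is a standard pattern for hashing large files without loading them into memory:

```python
import hashlib

def file_sha256(path: str) -> str:
    """Stream a file in 64 KiB chunks and return its SHA-256 hex digest."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()  # 64-character hex string, ready for verify_hash
```

Hashing locally means the file contents never leave the user's machine; only the digest is sent, which matches the "without uploading it" workflow the description highlights.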

Parameters (JSON Schema)
hash (required): The SHA-256 hash of the file (64-character hex string)

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent/non-destructive traits. The description adds valuable context beyond these: it specifies the operation checks against 'ProofX' registration database and clarifies this is an offline verification workflow (no upload). Does not mention auth requirements or rate limits, but covers the primary behavioral context.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with zero waste: purpose (sentence 1), usage context/differentiation (sentence 2), prerequisite workflow (sentence 3). Front-loaded with the core operation and perfectly sized for the tool's complexity.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given 100% schema coverage, strong annotations, and single-parameter simplicity, the description is remarkably complete. It covers the ProofX system context, upload-free workflow distinction, and hash computation prerequisite. No output schema exists, but the implied boolean/existence result (registered or not) is sufficient for agent reasoning.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with complete hash specification. The description adds semantic meaning by contextualizing the hash as derived from 'their file' and establishing the SHA-256 algorithm context within the workflow, exceeding baseline expectations for fully documented schemas.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description provides specific verb 'Verify' with resource 'content authenticity by its SHA-256 hash'. It effectively distinguishes from sibling verify_content by emphasizing hash-based verification 'without uploading it', and implicitly relates to compute_hash via the prerequisite workflow.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly states when to use: 'when a user has a file and wants to check if it's been registered with ProofX without uploading it'. Also provides clear prerequisite workflow distinguishing it from compute_hash: 'The user should compute the SHA-256 hash of their file first'.
