Glama

Server Details

Lightning-paid AI reasoning and decisions for agents. L402 Bitcoin payments, no subscriptions.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:
Repository: babyblueviper1/invinoveritas
GitHub Stars: 0
Server Listing
invinoveritas

Tool Descriptions: Grade C

Average 2.8/5 across 4 of 4 tools scored.

Server Coherence: Grade A

Disambiguation: 5/5

Each tool has a clearly distinct purpose: decide handles decision-making with risk assessment, memory_get and memory_store manage persistent memory retrieval and storage, and reason provides strategic reasoning with style control. There is no overlap in functionality, making tool selection unambiguous.

Naming Consistency: 4/5

The naming is mostly consistent with a noun_verb pattern (e.g., memory_get, memory_store), but 'decide' and 'reason' deviate by using bare verbs without nouns. This minor inconsistency does not hinder readability, but it breaks the pattern established by the memory tools.

Tool Count: 5/5

With 4 tools, the server is well-scoped for its purpose of decision intelligence and memory management. Each tool serves a specific role, and the count is neither too sparse nor excessive, fitting typical server scopes of 3-15 tools.

Completeness: 3/5

The tool set covers core functions like decision-making, reasoning, and memory storage/retrieval, but there are notable gaps. For a decision intelligence domain, tools for updating or deleting memories, or handling more complex data analysis, are missing, which could limit agent workflows.

Available Tools

4 tools
decide: Grade C

Structured decision intelligence with risk assessment and confidence scoring. Optimized for trading bots.

Parameters (JSON Schema)

Name             Required  Description                                                            Default
goal             Yes       Overall goal or objective
style            No                                                                               normal
context          No        Background context
question         Yes       Specific decision question
want_confidence  No        Include confidence score, risk level, and recommended position sizing
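
To make the parameter table concrete, here is a sketch of a JSON-RPC `tools/call` request for `decide`. The envelope follows the standard MCP `tools/call` convention; the argument values (goal, question, context) are invented for illustration:

```python
import json

# Hypothetical MCP tools/call request for the `decide` tool.
# Only `goal` and `question` are required; the rest are optional.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "decide",
        "arguments": {
            "goal": "Grow the portfolio 5% this quarter",            # required
            "question": "Should I enter a long BTC position now?",   # required
            "context": "BTC is up 3% today on thin volume",          # optional
            "style": "normal",                                       # optional; default
            "want_confidence": True,  # request confidence, risk level, position sizing
        },
    },
}
print(json.dumps(request, indent=2))
```

Because the tool publishes no output schema, an agent cannot know from this request alone what shape the confidence and risk fields in the response will take.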
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'risk assessment and confidence scoring' but doesn't explain what these entail, how decisions are made, whether there are rate limits, authentication needs, or what the output format looks like. For a decision-making tool with zero annotation coverage, this leaves significant behavioral gaps.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with two sentences that efficiently state the core functionality and optimization. However, the second sentence 'Optimized for trading bots' could be more integrated or clarified, and there's some room to improve structure by front-loading key details more explicitly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a decision-making tool with 5 parameters, no annotations, and no output schema, the description is incomplete. It lacks details on behavioral traits, output format, error handling, and how it differs from siblings. The 'trading bots' optimization hints at context but doesn't provide enough completeness for an agent to use it effectively without guesswork.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 80%, so the schema already documents most parameters well. The description adds no specific parameter information beyond what's in the schema, such as examples or clarifications for 'goal' vs 'question.' This meets the baseline of 3 when schema coverage is high, but doesn't compensate for the 20% gap or add extra meaning.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as 'Structured decision intelligence with risk assessment and confidence scoring,' which is specific and includes key capabilities. However, it doesn't distinguish from sibling tools like 'reason' or 'memory_get/store,' and the 'Optimized for trading bots' phrase suggests a specific domain application without clarifying if this is exclusive or just an example.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'reason' or the memory tools. The phrase 'Optimized for trading bots' implies a context but doesn't explicitly state when to use it, when not to, or what alternatives exist for non-trading scenarios.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

memory_get: Grade C

Retrieve previously stored memory for this agent.

Parameters (JSON Schema)

Name      Required  Description  Default
key       Yes
agent_id  Yes
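
Since neither the schema nor the description explains the two parameters, any invocation involves guesswork. A sketch of a `tools/call` request, assuming `key` is the identifier used at store time and `agent_id` is an opaque agent identifier (both formats are assumptions, not documented):

```python
import json

# Hypothetical MCP tools/call request for `memory_get`.
# Both parameters are required; their formats are undocumented.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "memory_get",
        "arguments": {
            "agent_id": "agent-7f3a",  # assumed: opaque unique agent identifier
            "key": "session-42",       # assumed: same key passed to memory_store
        },
    },
}
print(json.dumps(request, indent=2))
```

The description also does not say what a missing key returns (an error, null, or an empty result), so agents must handle all three cases defensively.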
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions retrieval but doesn't describe what happens if memory is missing, whether it's read-only (implied but not stated), authentication needs, rate limits, or response format. For a tool with zero annotation coverage, this leaves significant behavioral gaps that could affect agent decision-making.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized for a simple tool, though it could be more informative. The structure is front-loaded with the core action, but it doesn't waste space on redundant information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (2 parameters, no annotations, no output schema), the description is incomplete. It doesn't explain parameter meanings, behavioral traits, or what to expect upon retrieval. Without annotations or an output schema, the description should compensate more by detailing return values, error conditions, or usage examples, but it does not.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter details. The description doesn't explain the parameters 'agent_id' and 'key'—it doesn't clarify what they represent, their format, or how they relate to retrieving memory. With two required parameters and no semantic information in either the schema or description, the agent lacks crucial context for correct invocation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool's purpose ('Retrieve previously stored memory for this agent') with a clear verb ('Retrieve') and resource ('previously stored memory'), but it doesn't distinguish from sibling tools like 'memory_store' or 'reason'. The purpose is understandable but lacks specificity about what type of memory or how it differs from other memory-related operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'memory_store' (for storing) or 'reason' (which might involve memory retrieval). It implies usage for retrieving stored memory but doesn't specify prerequisites, exclusions, or contextual triggers. Without explicit when/when-not instructions, the agent must infer usage from the purpose alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

memory_store: Grade B

Store persistent memory/context for this agent (long-term state).

Parameters (JSON Schema)

Name      Required  Description                              Default
key       Yes       Memory key (e.g. 'goal', 'session-42')
value     Yes       The data to store
agent_id  Yes       Unique agent identifier
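
A sketch of a `tools/call` request for `memory_store`, using the documented parameters. The stored value and agent identifier are invented; whether storing to an existing key overwrites or errors is not documented, so the comment flags that as an open question:

```python
import json

# Hypothetical MCP tools/call request for `memory_store` (a mutation).
request = {
    "jsonrpc": "2.0",
    "id": 3,
    "method": "tools/call",
    "params": {
        "name": "memory_store",
        "arguments": {
            "agent_id": "agent-7f3a",               # unique agent identifier
            "key": "goal",                          # memory key, per schema example
            "value": "Accumulate BTC below $60k",   # the data to store
            # Undocumented: does re-using "goal" overwrite the prior value?
        },
    },
}
print(json.dumps(request, indent=2))
```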
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions 'persistent' and 'long-term state', which hints at durability, but lacks details on permissions, rate limits, data format constraints, or whether this overwrites existing keys. More behavioral context is needed for a mutation tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence with zero waste. It's front-loaded with the core purpose and includes a clarifying parenthetical ('long-term state'), making it appropriately sized and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description is minimal but covers the basic purpose. For a mutation tool with three parameters, it should ideally include more on behavioral aspects like error handling or return values, but it's adequate as a starting point.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all three parameters (agent_id, key, value) with descriptions. The description adds no additional parameter semantics beyond what the schema provides, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('store') and resource ('persistent memory/context for this agent'), specifying it's for long-term state. It distinguishes from 'memory_get' (retrieval) but doesn't explicitly differentiate from other siblings like 'decide' or 'reason'.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for storing long-term agent state, suggesting when to use it (persistent memory needs). However, it doesn't provide explicit guidance on when not to use it or mention alternatives like using 'memory_get' for retrieval or other storage mechanisms.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reason: Grade C

Premium strategic reasoning with style control and optional confidence scoring.

Parameters (JSON Schema)

Name             Required  Description                                     Default
style            No                                                        normal
question         Yes       The question to reason about
want_confidence  No        Include confidence score and reasoning quality
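
A sketch of a `tools/call` request for `reason`. Note that `style` has no schema description, so the value below simply echoes the default; what other styles exist is an open question:

```python
import json

# Hypothetical MCP tools/call request for the `reason` tool.
# Only `question` is required.
request = {
    "jsonrpc": "2.0",
    "id": 4,
    "method": "tools/call",
    "params": {
        "name": "reason",
        "arguments": {
            "question": "What are the risks of holding through a halving?",  # required
            "style": "normal",        # default; valid alternatives are undocumented
            "want_confidence": True,  # include confidence score and reasoning quality
        },
    },
}
print(json.dumps(request, indent=2))
```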
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but lacks behavioral details. It mentions 'premium' (possibly implying cost or access restrictions) and optional confidence scoring, but doesn't disclose rate limits, authentication needs, output format, or error handling. This is inadequate for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise with a single sentence that front-loads key features ('premium strategic reasoning'), but it could be more structured by explicitly stating the core action. It avoids redundancy, though it might benefit from slightly more detail given the lack of annotations.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and moderate schema coverage, the description is incomplete. It doesn't explain what 'strategic reasoning' outputs, how confidence scores are used, or behavioral traits like limitations. For a tool with three parameters and no structured support, this leaves significant gaps for an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 67%, with 'question' and 'want_confidence' having descriptions, but 'style' lacks one. The description adds minimal value beyond the schema by mentioning 'style control' and 'confidence scoring', which aligns with parameters but doesn't explain semantics like what 'strategic reasoning' means or how styles differ. Baseline is 3 due to moderate schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states the tool provides 'strategic reasoning' with 'style control' and 'confidence scoring', which gives a general purpose. However, it's vague about what 'strategic reasoning' entails compared to siblings like 'decide' or 'memory_get', and doesn't specify the resource or domain (e.g., problem-solving, analysis). It distinguishes somewhat by mentioning 'premium' and optional features but lacks clear differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'decide' or 'memory_get'. The description implies usage for reasoning tasks with style preferences, but doesn't specify contexts, prerequisites, or exclusions. It mentions 'premium' which might hint at advanced use, but this is not elaborated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
