
Rodin

Server Details

Find ~5 thinkers whose intellectual fingerprint matches a passage of text.

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.


Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions: A

Average 4.6/5 across 1 of 1 tools scored.

Server Coherence: A

Disambiguation: 5/5

With only one tool, there is no possibility of confusion with other tools. The tool's purpose is clearly distinct and unambiguous.

Naming Consistency: 5/5

The single tool follows a clear verb_noun pattern ('find_thinkers_like'), and consistency is trivially maintained.

Tool Count: 3/5

The server has only one tool, which is on the lower end of the range. While it serves a specific purpose, the count feels thin and could benefit from additional supporting tools.

Completeness: 4/5

The tool covers the primary use case of finding similar thinkers based on text. It returns sufficient information including a profile URL. However, missing features like browsing all thinkers or detailed profiles represent minor gaps.

Available Tools

1 tool
find_thinkers_like: "Find thinkers like this" (grade A)

Given a passage of text (essay, note, message, snippet, transcript), returns ~5 humans whose intellectual fingerprint matches it — recurring themes, mental models, archetypal stance, blind spots. Use when the principal asks for sparring partners, intellectual peers, "who else is wrestling with this," "who thinks like X," or "find me writers similar to this passage." Each result returns a name, three-word archetype, one-line summary, dominant themes, and a profile URL the principal can visit. The match runs over Voyage 3.5-lite text embeddings reranked by a proprietary 12-dimensional cognitive-style vector — so results align by how a mind reasons, not just topical overlap.
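Since the server speaks Streamable HTTP, a client invokes the tool with a standard MCP `tools/call` request. The sketch below builds that JSON-RPC 2.0 envelope; the method name and envelope shape follow the MCP convention, while the sample passage and request id are placeholders.

```python
import json


def build_tool_call(text: str, request_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request for the find_thinkers_like tool.

    The "tools/call" method and params shape follow the MCP convention;
    transport details (URL, headers) live in the client configuration.
    """
    payload = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {
            "name": "find_thinkers_like",
            "arguments": {"text": text},
        },
    }
    return json.dumps(payload)


# Placeholder passage; real input should be the principal's essay or note.
passage = "I keep returning to the idea that institutions drift when feedback loops decay..."
print(build_tool_call(passage))
```

The client then POSTs this body to the server's endpoint and reads the tool result from the JSON-RPC response.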

Parameters (JSON Schema)

Name    Required    Description    Default
text    Yes

Output Schema

Fields (JSON Schema)

Name       Required
rerank     Yes
matches    Yes
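To make the two output fields concrete, here is a hypothetical response body: the top-level `rerank` and `matches` keys come from the output schema, while the per-match keys follow the field list in the tool description (name, archetype, summary, themes, profile URL). The exact key names and sample values are assumptions, not the published schema.

```python
import json

# Hypothetical response body for illustration only.
raw = json.dumps({
    "rerank": True,
    "matches": [
        {
            "name": "Jane Example",
            "archetype": "systems pattern seeker",
            "summary": "Maps institutional drift through feedback loops.",
            "themes": ["institutions", "feedback", "decay"],
            "url": "https://example.com/thinkers/jane-example",
        }
    ],
})

result = json.loads(raw)
for match in result["matches"]:
    # Surface the fields the description promises: name, archetype, URL.
    print(f'{match["name"]} ({match["archetype"]}) -> {match["url"]}')
```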
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

In the absence of annotations, the description fully discloses behavioral traits: it details the matching process (Voyage 3.5-lite embeddings reranked by a proprietary 12-dimensional cognitive-style vector) and clarifies that results align by reasoning style. It also enumerates return fields (name, archetype, summary, themes, URL).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
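The two-stage pipeline the description discloses (embedding retrieval, then a cognitive-style rerank) can be illustrated with toy vectors. Everything here is an assumption for illustration: the field names, vector dimensions, and cosine metric are placeholders, and the server's actual 12-dimensional style vector and reranking logic are proprietary.

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length, nonzero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm


def two_stage_match(query_emb, query_style, corpus, k_first=20, k_final=5):
    """Stage 1: shortlist candidates by text-embedding similarity.
    Stage 2: rerank the shortlist by cognitive-style similarity,
    so the final ordering reflects how a mind reasons, not just topic."""
    shortlist = sorted(
        corpus, key=lambda t: cosine(query_emb, t["embedding"]), reverse=True
    )[:k_first]
    reranked = sorted(
        shortlist, key=lambda t: cosine(query_style, t["style"]), reverse=True
    )
    return reranked[:k_final]
```

With this structure, two thinkers who write about the same topic (similar embeddings) can still rank very differently if their style vectors diverge.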

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately long but front-loaded with the core purpose. Every sentence contributes value, though it could be slightly more concise. The structure is logical: purpose, use cases, output format, and methodology.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity, the description is complete: it explains input content, output details (name, archetype, summary, themes, URL), and the underlying methodology. With an existing output schema, the description does not need to reiterate return types.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema coverage, the description compensates by explaining the 'text' parameter: a passage of text such as an essay, note, message, snippet, or transcript. It adds contextual meaning but does not mention the length constraints (min 80, max 8000 characters) from the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
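Because the description omits the schema's length bounds, a client can enforce them before spending a call. A minimal pre-flight check, assuming the min-80 / max-8000 character constraints cited above:

```python
# Length bounds from the tool's input schema (min 80, max 8000 characters).
MIN_LEN, MAX_LEN = 80, 8000


def validate_text(text: str) -> str:
    """Reject passages outside the schema's length bounds before calling the tool."""
    n = len(text)
    if n < MIN_LEN:
        raise ValueError(f"passage too short: {n} < {MIN_LEN} characters")
    if n > MAX_LEN:
        raise ValueError(f"passage too long: {n} > {MAX_LEN} characters")
    return text


validate_text("x" * 80)  # exactly at the minimum: accepted
```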

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: given a passage of text, it returns approximately 5 humans whose intellectual fingerprint matches. The verb 'find' and resource 'thinkers' are specific, and the description differentiates it from hypothetical siblings by focusing on cognitive-style matching rather than topical overlap.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly lists use cases (e.g., sparring partners, intellectual peers) and explains what each result contains. While it does not provide negative examples or alternatives, it effectively guides the agent on when to invoke this tool.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
