intelligence
Server Details
Agent-callable creator intelligence: 952+ scored YouTube creators across 180 niches.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | byimprint/mcp-server |
| GitHub Stars | 0 |
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.1/5 across 7 of 7 tools scored. Lowest: 2.2/5.
Each tool targets a distinct operation: structured filtering, regex search, cross-creator querying, niche-level analysis, single creator deep dive, ranking for brands, and keyword search. No significant overlap in purpose.
All tools share the 'imprint-' prefix, but the suffixes vary in verb-noun clarity: most are verbs (filter, grep, map, search, rank-creators-for-brand), while 'niche' and 'profile' are nouns. This is a minor inconsistency, but the overall pattern is clear.
Seven tools is an ideal size for a specialized intelligence server. Each tool covers a distinct piece of needed functionality without unnecessary proliferation, fitting well within the recommended range of 3-15.
The tools cover the core workflow: discovery (search, filter, grep), analysis (profile, niche), comparison (map), and decision (rank). Minor gaps might include bulk export or direct comparison of two profiles, but the set is largely complete for its domain.
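Several tools accept a `from` parameter that chains calls together: it takes a handle ID (e.g. "rs_abc123") or "prev" for the most recent result set. The sketch below shows that discovery-then-refine flow under stated assumptions: it uses the TypeScript MCP SDK, an invented endpoint URL (the listing leaves the server URL blank), and illustrative argument values.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Hypothetical endpoint: the listing does not publish this server's URL.
const transport = new StreamableHTTPClientTransport(new URL("https://example.com/mcp"));
const client = new Client({ name: "example-agent", version: "1.0.0" });
await client.connect(transport);

// Discovery: keyword search (the query is illustrative).
await client.callTool({
  name: "imprint-search",
  arguments: { query: "woodworking", maxResults: 50 },
});

// Refine: filter the previous result set via the "prev" handle.
const shortlist = await client.callTool({
  name: "imprint-filter",
  arguments: { from: "prev", minScore: 7, sortBy: "score" },
});
console.log(shortlist.content);
```

The per-tool sketches below reuse this connected `client`.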
Available Tools
7 tools

imprint-filter (grade C)
Filter creators by structured criteria: niche, viability score, product readiness, subscriber count, partnerships, risk category, and more.
| Name | Required | Description | Default |
|---|---|---|---|
| from | No | Narrow results to a previous result set. Pass a handle ID (e.g. "rs_abc123") or "prev" for the most recent result. | |
| niche | No | Filter by niche name (exact match, case-insensitive) | |
| sortBy | No | Sort order | score |
| maxScore | No | Maximum viability score (0-10) | |
| minScore | No | Minimum viability score (0-10) | |
| maxResults | No | Maximum number of results to return | |
| riskCategory | No | Filter to creators with this risk category present | |
| analyzedAfter | No | ISO date string; only return creators analyzed after this date | |
| maxSubscribers | No | Maximum subscriber count | |
| minSubscribers | No | Minimum subscriber count | |
| hasPartnerships | No | Filter by whether creator has partnerships | |
| productReadiness | No | Filter by product readiness level(s) |
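A hedged invocation sketch, reusing the connected `client` from the setup above; the niche name and thresholds are invented for illustration:

```typescript
// Illustrative values; actual niche names are not documented in this listing.
const filtered = await client.callTool({
  name: "imprint-filter",
  arguments: {
    niche: "woodworking",   // exact match, case-insensitive
    minScore: 7,            // viability score floor (0-10)
    hasPartnerships: true,
    sortBy: "score",        // the documented default
    maxResults: 20,
  },
});
```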
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, and the description only states 'Filter creators' without disclosing behavioral traits like read-only nature, pagination, rate limits, or side effects. The description does not compensate for missing annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single efficient sentence that lists key criteria. It is front-loaded with the action, but could be improved by noting pagination or sorting defaults.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite 12 parameters and no output schema, the description is very brief. It does not explain what 'more' refers to, return format, or pagination. Incomplete for a complex filter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond listing criteria already covered in schema descriptions. It does not enhance understanding of parameter usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool filters creators by structured criteria and lists several filter types. It is a specific verb+resource, but does not differentiate from sibling tools like imprint-search or imprint-rank-creators-for-brand.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives. There is no mention of when to prefer filter over search or ranking, nor any exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
imprint-grep (grade B)
Regex search across all creator analysis text
| Name | Required | Description | Default |
|---|---|---|---|
| from | No | Narrow search to a previous result set. Pass a handle ID (e.g. "rs_abc123") or "prev" for the most recent result. | |
| fields | No | Limit search to specific field paths (e.g. purchaseContext, riskFactors) | |
| pattern | Yes | Regular expression pattern to search for | |
| maxResults | No | Maximum number of matches to return | 200 |
| caseSensitive | No | Whether the regex match is case-sensitive | false |
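A sketch of a regex query against the analysis text, again reusing the connected `client`; the pattern and field path are invented, and the `fields` type is an assumption:

```typescript
const matches = await client.callTool({
  name: "imprint-grep",
  arguments: {
    pattern: "sponsor(ship)?s?",  // case-insensitive unless caseSensitive: true
    fields: ["riskFactors"],      // assumed array of field paths; schema type not shown
    maxResults: 100,              // documented default is 200
  },
});
```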
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations present, the description carries full burden for behavioral disclosure but only states 'regex search across all creator analysis text'. It does not discuss output format, performance characteristics, authentication needs, or side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, concise sentence that front-loads the core action. It is efficient but could benefit from slightly more detail without becoming verbose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is too brief given the tool's complexity (5 parameters, no output schema, no annotations). It does not explain return values, pagination, or the exact meaning of 'creator analysis text', leaving significant gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for all 5 parameters, so the baseline is 3. The description adds no additional semantic meaning beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('regex search') and the resource ('all creator analysis text'), making the tool's purpose explicit. It distinguishes from siblings like 'imprint-filter' and 'imprint-search' by specifying the regex method and broad scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus its siblings. The description does not mention prerequisites, limitations, or alternatives, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
imprint-map (grade C)
Ask the same question across multiple creators
| Name | Required | Description | Default |
|---|---|---|---|
| from | No | Narrow to a previous result set. Pass a handle ID (e.g. "rs_abc123") or "prev" for the most recent result. | |
| question | Yes | Question to ask across creators | |
| synthesize | No | If true, sends each creator to Claude Haiku for a synthesized answer (costs tokens); false uses data extraction only. | false |
| maxCreators | No | Maximum number of creators to include | all |
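Because `synthesize: true` routes each creator through Claude Haiku and costs tokens, a cautious sketch keeps the default extraction mode; the question text is illustrative:

```typescript
const answers = await client.callTool({
  name: "imprint-map",
  arguments: {
    from: "prev",      // reuse the most recent result set
    question: "Which product categories does this creator already promote?",
    synthesize: false, // default; avoids the token cost of Claude Haiku
    maxCreators: 25,   // documented default is all
  },
});
```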
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description should fully disclose behavior. It fails to mention whether the tool is read-only, what happens to the results, or any side effects. The agent cannot infer safety or cost implications.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single short sentence, efficiently conveying the core purpose. However, it lacks any structural elements like headings or examples that might further aid an agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Without an output schema, the description should explain what the tool returns (e.g., a list of answers). It does not, leaving the agent uncertain about the result format.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, so baseline is 3. The description adds no extra meaning beyond the schema's property descriptions, but the schema itself is fairly clear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (ask the same question) and the target (across multiple creators). It is specific but does not distinguish itself from siblings like imprint-search which may also involve querying creators.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as imprint-filter or imprint-search. The description is too brief to indicate appropriate contexts or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
imprint-niche (grade C)
Niche-level intelligence landscape
| Name | Required | Description | Default |
|---|---|---|---|
| niche | Yes | Niche name to look up (exact, case-insensitive, or partial match) | |
| minScore | No | Minimum viability score to include creators in the list | |
| includeCreators | No | Include creator list in output | true |
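A sketch using the schema's partial, case-insensitive matching; the niche name is invented:

```typescript
const landscape = await client.callTool({
  name: "imprint-niche",
  arguments: {
    niche: "home improvement", // exact, case-insensitive, or partial match
    minScore: 6,               // floor for creators included in the list
    includeCreators: true,     // documented default
  },
});
```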
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are present, so the description carries the full burden. It does not disclose whether the tool is read-only or has side effects, nor does it describe any behavioral traits beyond the vague phrase 'intelligence landscape.'
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely short (3 words), but it lacks substance. Conciseness should be efficient, not minimal. The description fails to convey essential information, making it under-specified rather than concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given three parameters and no output schema, the description is insufficient. It does not explain what the output contains, how the niche is used (exact or partial match implied by schema but not described), or any related behavior, leaving significant gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for its three parameters, so the schema already defines their meaning. The description adds no additional context or usage nuances beyond what the schema provides, resulting in a baseline score of 3.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Niche-level intelligence landscape' is vague and lacks a specific verb or resource. It does not clearly state what action the tool performs (e.g., retrieve, analyze, or report), and it fails to distinguish from sibling tools like imprint-search or imprint-filter.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description does not mention any prerequisites, context, or exclusions, leaving the agent to guess based on the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
imprint-profile (grade A)
Full deep-dive on a single creator. Returns purchase context, risk factors, partnerships, audience, competition, content coverage, and product proposal.
| Name | Required | Description | Default |
|---|---|---|---|
| slug | Yes | Creator slug (e.g. woodshopmike) | |
| sections | No | Limit output to specific sections. Omit to get all sections. |
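A sketch built on the schema's own slug example; the `sections` values are assumed to mirror the returned categories, since valid section names are not documented:

```typescript
const profile = await client.callTool({
  name: "imprint-profile",
  arguments: {
    slug: "woodshopmike", // slug format from the schema's example
    // Assumed section names; omit "sections" to receive all sections.
    sections: ["riskFactors", "partnerships"],
  },
});
```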
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description could disclose more behavioral aspects (e.g., auth, rate limits, side effects). Currently it only lists return categories, which is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
A single, front-loaded sentence that efficiently conveys the tool's purpose and output without extraneous detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequately describes the returned sections and input parameters; missing optional details like slug format or prerequisites, but sufficient given lack of output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, and the description does not add significant parameter-level meaning beyond what the schema provides, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: a 'Full deep-dive on a single creator' returning multiple data categories, distinguishing it from sibling tools like imprint-filter or imprint-search.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Implicitly indicates use for detailed profiling of a single creator, but lacks explicit 'when to use vs alternatives' or exclusion criteria.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
imprint-rank-creators-for-brand (grade B)
Rank creators for a buyer brand brief and niche. Returns recommendations with evidence, confidence, caveats, and excluded creators.
| Name | Required | Description | Default |
|---|---|---|---|
| niche | Yes | Niche to rank creators within | |
| category | No | Brand category or product type | |
| minScore | No | Minimum Imprint Score | |
| brandName | Yes | Buyer brand name | |
| geography | No | Target geographies | |
| objective | No | Campaign objective | |
| budgetRange | No | Budget range for creator spend | |
| maxCreators | No | Maximum creators to rank | |
| dealbreakers | No | Dealbreakers or exclusion criteria | |
| targetCustomer | No | Brand ICP or target customer | |
| preferredPlatforms | No | Preferred creator platforms |
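A sketch with an entirely invented brand brief; only `brandName` and `niche` are required, and the free-text parameter types are assumptions:

```typescript
const ranking = await client.callTool({
  name: "imprint-rank-creators-for-brand",
  arguments: {
    brandName: "Acme Tools",               // required; invented brand
    niche: "woodworking",                  // required; invented niche
    objective: "product launch awareness", // type assumed to be free text
    budgetRange: "$10k-$50k",
    dealbreakers: "no unresolved brand-safety risks",
    maxCreators: 10,
  },
});
```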
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully convey behavioral traits. It discloses that the tool returns recommendations with evidence, confidence, caveats, and excluded creators, adding some transparency. However, it does not discuss side effects, authentication needs, rate limits, or mutation behavior. The description is adequate but not comprehensive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is front-loaded with the verb 'Rank' and immediately states the resource. No unnecessary words; every part adds value. Ideal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of 11 parameters and no output schema, the description partially covers what to expect (recommendations with evidence, confidence, caveats, excluded creators). However, it lacks details on return format, pagination, or sorting. The description is adequate but leaves gaps for a tool with many parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description does not add significant meaning beyond the schema; it briefly references 'brand brief and niche' which aligns with required parameters 'brandName' and 'niche', but no param-specific details are added.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool ranks creators for a buyer brand brief and niche, using a specific verb and resource. While it doesn't explicitly differentiate from siblings, the purpose is distinct from the listed sibling tools (e.g., filter, grep, map). Slight ambiguity remains about what distinguishes this from a potential similar ranking tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention contexts where this tool is preferable or when to avoid it. Users must infer from the tool name and brief description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
imprint-search (grade B)
Search creators by keyword. Matches against niche names, creator names, and analysis text (purchase context, risk factors, audience descriptions, etc.).
| Name | Required | Description | Default |
|---|---|---|---|
| from | No | Narrow search to a previous result set. Pass a handle ID (e.g. "rs_abc123") or "prev" for the most recent result. | |
| query | Yes | Search keyword or phrase | |
| minScore | No | Minimum viability score; exclude creators below this threshold | |
| maxResults | No | Maximum number of results to return | 50 |
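A keyword search sketch with invented values; compare it with imprint-grep, which takes a regex rather than a keyword:

```typescript
const hits = await client.callTool({
  name: "imprint-search",
  arguments: {
    query: "DIY furniture", // keyword or phrase, matched across names and analysis text
    minScore: 5,            // exclude creators below this viability score
    maxResults: 25,         // documented default is 50
  },
});
```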
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description implies a read-only search operation, but without annotations, it does not disclose any behavioral traits beyond the search behavior (e.g., no mention of rate limits, permissions, or side effects). It adds context on match fields but misses broader transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the core action, no unnecessary words. Efficiently conveys the purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Adequate for a search tool: describes what it searches and the matching fields. However, lacks explanation of return values (no output schema) and how parameters like from, minScore, maxResults affect the results. Could be more complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so baseline is 3. The description does not add new meaning to parameters; it just describes the search action. The parameters (from, query, minScore, maxResults) are already documented in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches creators by keyword, and specifies the fields it matches (niche names, creator names, analysis text). This is specific and distinguishes from general search tools, but does not explicitly differentiate from siblings like imprint-filter or imprint-grep.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus the sibling tools (imprint-filter, imprint-grep, etc.) or prerequisites. Implies keyword search but lacks exclusions or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.