microlink
Server Details
Microlink MCP — wraps the Microlink API (free tier, no auth required)
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: pipeworx-io/mcp-microlink
- GitHub Stars: 0
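Since the transport is Streamable HTTP and the free tier requires no auth, any generic MCP client should be able to connect. A minimal sketch using the official TypeScript SDK; the endpoint URL is a placeholder, since the listing above omits the actual value:

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint -- substitute the server's real URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"),
);

const client = new Client({ name: "microlink-demo", version: "1.0.0" });
await client.connect(transport);

// Enumerate the seven tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```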
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.8/5 across 7 of 7 tools scored. Lowest: 2.9/5.
Most tools have distinct purposes: ask_pipeworx for natural language queries, discover_tools for tool search, get_metadata for URL extraction, take_screenshot for webpage capture, and memory tools (remember, recall, forget) for session storage. However, ask_pipeworx and discover_tools could be confused, as both relate to finding information, though their scopes differ (direct answers vs. tool discovery).
Naming is mixed: some tools use verb_noun patterns (discover_tools, get_metadata, take_screenshot), while others use single verbs (ask, forget, recall, remember). This inconsistency reduces predictability, but the names are still readable and descriptive overall.
With 7 tools, the count is well-scoped for a utility server covering web metadata, screenshots, memory management, and query assistance. Each tool serves a clear function without bloat, fitting the typical server scope of 3–15 tools.
The toolset covers key areas: web interaction (metadata, screenshots), memory management (store, retrieve, delete), and information discovery (querying, tool search). Minor gaps exist, such as no direct web scraping beyond metadata or advanced screenshot options, but core workflows are supported.
Available Tools
7 tools

ask_pipeworx (Grade: A)
Ask a question in plain English and get an answer from the best available data source. Pipeworx picks the right tool, fills the arguments, and returns the result. No need to browse tools or learn schemas — just describe what you need. Examples: "What is the US trade deficit with China?", "Look up adverse events for ozempic", "Get Apple's latest 10-K filing".
| Name | Required | Description | Default |
|---|---|---|---|
| question | Yes | Your question or request in natural language | |
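A call sketch reusing the `client` from the connection example under Server Details; the shape of the result is an assumption, since the tool publishes no output schema:

```typescript
// Plain-English question; Pipeworx picks the data source and fills arguments.
const answer = await client.callTool({
  name: "ask_pipeworx",
  arguments: { question: "What is the US trade deficit with China?" },
});
console.log(answer.content); // content array shape is undocumented
```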
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it accepts natural language questions, automatically selects tools and fills arguments, and returns results. However, it doesn't mention limitations like rate limits, authentication needs, or error handling, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured: first sentence states the core functionality, second explains the automation benefit, third provides usage guidance, and examples illustrate scope. Every sentence adds value without redundancy, making it easy to parse and understand quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (natural language processing with automated tool selection) and lack of annotations/output schema, the description does well by explaining the workflow and providing examples. However, it doesn't detail what types of data sources are available or potential limitations, which could leave the agent uncertain about edge cases.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage for its single parameter, so the baseline is 3. The description adds value by explaining the parameter's purpose beyond the schema's 'natural language' note: it emphasizes plain English questions and provides concrete examples ('What is the US trade deficit with China?'), enhancing understanding of expected input format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Ask a question in plain English and get an answer from the best available data source.' It specifies the verb ('ask'), resource ('answer from data source'), and distinguishes it from siblings by emphasizing natural language interaction versus tool-specific schemas. The examples further clarify the scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: 'No need to browse tools or learn schemas — just describe what you need.' It contrasts with sibling tools like 'discover_tools' or 'recall' by positioning it as a high-level query interface. The examples provide concrete scenarios for appropriate usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
discover_tools (Grade: A)
Search the Pipeworx tool catalog by describing what you need. Returns the most relevant tools with names and descriptions. Call this FIRST when you have 500+ tools available and need to find the right ones for your task.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of tools to return (default 20, max 50) | |
| query | Yes | Natural language description of what you want to do (e.g., "analyze housing market trends", "look up FDA drug approvals", "find trade data between countries") | |
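A sketch of narrowing a large catalog before committing to a tool, again reusing the connected `client`; the `limit` value respects the schema's stated default of 20 and max of 50:

```typescript
// Search the catalog first, then call whichever tool comes back as relevant.
const matches = await client.callTool({
  name: "discover_tools",
  arguments: {
    query: "find trade data between countries",
    limit: 10, // within the documented max of 50
  },
});
console.log(matches.content);
```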
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: it's a search operation (implied read-only), returns relevant tools based on a query, and has a specific use case context (large tool catalogs). However, it doesn't mention potential limitations like rate limits, authentication needs, or error conditions.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded: three sentences that each earn their place. The first states the core functionality, the second what is returned, and the third provides crucial usage guidance, without wasted words or redundant information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search function with 2 parameters), no annotations, and no output schema, the description is mostly complete. It explains purpose, usage context, and what it returns, but lacks details on output format, error handling, or performance characteristics that would be helpful for a search tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents both parameters (query and limit). The description adds no additional parameter semantics beyond what's in the schema, such as query examples or limit implications. Baseline 3 is appropriate when the schema does all the parameter documentation work.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('Search', 'Returns') and resource ('Pipeworx tool catalog'), and distinguishes it from siblings by specifying it's for searching when many tools are available. It explicitly mentions what it returns ('most relevant tools with names and descriptions').
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Call this FIRST when you have 500+ tools available and need to find the right ones for your task'), including a clear trigger condition (500+ tools) that implies when the tool is unnecessary. It directly addresses when to use it versus alternatives.
forget (Grade: C)
Delete a stored memory by key.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key to delete | |
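Because the description leaves the missing-key case undocumented (a gap the assessment below calls out), a defensive sketch that checks the standard MCP `isError` flag instead of assuming success; the key is hypothetical:

```typescript
// Delete a stored memory; behavior for a nonexistent key is undocumented.
const result = await client.callTool({
  name: "forget",
  arguments: { key: "subject_property" }, // hypothetical key
});
if (result.isError) {
  console.warn("forget failed -- the key may not exist");
}
```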
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. While 'Delete' implies a destructive mutation, it doesn't specify whether the deletion is permanent or reversible, what permissions are required, error conditions (e.g., what happens if the key doesn't exist), or any rate limits. This leaves significant behavioral gaps for a destructive operation.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise at six words, front-loading the essential action ('Delete') and resource. Every word earns its place with zero waste or redundancy.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no annotations and no output schema, the description is incomplete. It doesn't address critical context like what constitutes a 'stored memory', whether deletion is permanent, what response to expect, or error handling. The agent lacks sufficient information to use this tool safely and effectively.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already fully documents the single 'key' parameter. The description adds no additional semantic context about the key parameter beyond what's in the schema (e.g., format examples, constraints, relationship to other tools). Baseline 3 is appropriate when the schema does all the parameter documentation work.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('a stored memory by key'), making the purpose immediately understandable. It doesn't explicitly distinguish from sibling tools like 'recall' or 'remember', but the verb 'Delete' provides clear differentiation from those read/create operations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing memory key), when not to use it, or how it relates to sibling tools like 'recall' (which presumably retrieves memories) or 'remember' (which presumably creates them).
get_metadata (Grade: C)
Extract metadata from any URL to preview page content. Returns title, description, image, author, publisher, logo, and structured data—useful when you need to understand a webpage without visiting it directly.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL to extract metadata from. | |
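A preview sketch with a hypothetical URL; the fields named in the comment come from the description above, since no output schema is published:

```typescript
// Preview a page without rendering it: expect title, description, image,
// author, publisher, logo, and structured data in the result content.
const meta = await client.callTool({
  name: "get_metadata",
  arguments: { url: "https://example.com/article" }, // hypothetical URL
});
console.log(meta.content);
```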
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It lists what metadata is extracted but doesn't disclose behavioral traits like rate limits, authentication requirements, or error handling. The description is functional but lacks the operational context an agent needs to use it effectively.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise - a single sentence that communicates the core functionality. It's front-loaded with the main purpose and includes relevant examples. There's no wasted verbiage, though it could be slightly more structured with clearer separation of metadata types.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (metadata extraction from URLs) and lack of annotations/output schema, the description is minimally adequate. It covers what the tool does but lacks important context about limitations, return format, error conditions, and how it differs from the sibling screenshot tool.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100% (the single parameter 'url' is fully described in the schema), so the baseline is 3. The description adds no additional parameter semantics beyond what's in the schema - it doesn't elaborate on URL format requirements, supported protocols, or validation rules.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Extract metadata from any URL' with specific examples of what metadata is extracted (title, description, image, etc.). It uses a specific verb ('extract') and resource ('metadata'), but doesn't explicitly distinguish it from its sibling tool 'take_screenshot' which serves a different function.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. While it's clear this is for metadata extraction, there's no mention of when to use it instead of 'take_screenshot' or other potential tools. No exclusions, prerequisites, or alternative scenarios are mentioned.
recall (Grade: A)
Retrieve a previously stored memory by key, or list all stored memories (omit key). Use this to retrieve context you saved earlier in the session or in previous sessions.
| Name | Required | Description | Default |
|---|---|---|---|
| key | No | Memory key to retrieve (omit to list all keys) | |
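A sketch of both documented modes; per the parameter note, omitting `key` lists all stored keys. The key itself is hypothetical:

```typescript
// List every stored memory key by omitting `key`...
const allKeys = await client.callTool({ name: "recall", arguments: {} });

// ...or retrieve a single memory saved earlier in the session.
const one = await client.callTool({
  name: "recall",
  arguments: { key: "subject_property" }, // hypothetical key
});
```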
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It clearly explains the dual functionality (retrieve by key or list all) and persistence across sessions, which are important behavioral traits. It doesn't mention error handling or performance characteristics, but covers the core behavior adequately.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence states the dual functionality, and the second provides usage context. No wasted words, and the most important information is front-loaded.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no annotations and no output schema, the description provides good coverage of what the tool does and how to use it. It explains the dual functionality, parameter semantics, and persistence across sessions. The main gap is the lack of information about return format or error conditions, which would be helpful given the absence of an output schema.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has 100% description coverage, so the baseline is 3. The description adds meaningful context by explaining the semantic implication of omitting the key parameter ('omit to list all keys'), which goes beyond the schema's technical description. This provides valuable guidance for proper tool invocation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('retrieve', 'list') and resources ('previously stored memory', 'all stored memories'). It distinguishes from siblings like 'remember' (store) and 'forget' (delete) by focusing on retrieval operations.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('retrieve context you saved earlier') and when to omit parameters ('omit key to list all keys'). It distinguishes from alternatives by specifying this is for retrieving previously stored memories rather than discovering new tools or taking screenshots.
remember (Grade: A)
Store a key-value pair in your session memory. Use this to save intermediate findings, user preferences, or context across tool calls. Authenticated users get persistent memory; anonymous sessions last 24 hours.
| Name | Required | Description | Default |
|---|---|---|---|
| key | Yes | Memory key (e.g., "subject_property", "target_ticker", "user_preference") | |
| value | Yes | Value to store (any text — findings, addresses, preferences, notes) | |
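A round-trip sketch tying the three memory tools together (key and value are hypothetical); per the description, the pair persists across sessions only for authenticated users, while anonymous sessions last 24 hours:

```typescript
// Save an intermediate finding...
await client.callTool({
  name: "remember",
  arguments: { key: "target_ticker", value: "AAPL" },
});

// ...read it back later in the session...
const saved = await client.callTool({
  name: "recall",
  arguments: { key: "target_ticker" },
});

// ...and clean up once it is no longer needed.
await client.callTool({ name: "forget", arguments: { key: "target_ticker" } });
```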
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behavioral traits: it's a write operation ('Store'), specifies persistence differences ('Authenticated users get persistent memory; anonymous sessions last 24 hours'), and implies session-scoped storage. It does not cover aspects like error handling or size limits, but adds substantial value beyond basic purpose.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence, followed by usage context and behavioral details. Every sentence earns its place by adding value (e.g., examples, persistence rules) without redundancy, making it efficiently structured and appropriately sized.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (write operation with persistence rules), no annotations, and no output schema, the description is largely complete. It covers purpose, usage, and key behavioral aspects like persistence. However, it lacks details on return values or error cases, which would be helpful for full completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, so the schema already documents both parameters ('key' and 'value') with examples. The description does not add any parameter-specific details beyond what the schema provides, such as formatting constraints or usage tips, meeting the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Store a key-value pair') and resource ('in your session memory'), distinguishing it from sibling tools like 'recall' (likely for retrieval) and 'forget' (likely for deletion). It provides concrete examples of what to store ('intermediate findings, user preferences, or context across tool calls'), making the purpose unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool ('to save intermediate findings, user preferences, or context across tool calls'), providing clear context. However, it does not mention when not to use it or name specific alternatives (e.g., 'recall' for retrieval), which prevents a perfect score.
take_screenshot (Grade: B)
Capture a screenshot of any webpage. Returns image URL showing the rendered page layout and visual content—use to verify page state or design.
| Name | Required | Description | Default |
|---|---|---|---|
| url | Yes | The URL of the webpage to screenshot. | |
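A capture sketch with a hypothetical URL; per the description, the result carries an image URL pointing at the rendered page rather than inline image data:

```typescript
// Capture the rendered page for visual verification.
const shot = await client.callTool({
  name: "take_screenshot",
  arguments: { url: "https://example.com" }, // hypothetical URL
});
console.log(shot.content); // expected to contain the screenshot's image URL
```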
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions returning an image URL but omits critical details like authentication needs, rate limits, error conditions, image format, or whether the operation is read-only or has side effects. This leaves significant gaps for agent understanding.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded and efficient: two short sentences that convey the core functionality and the return value without wasted words. It's appropriately sized for a simple tool with one parameter, making it easy for an agent to parse quickly.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of annotations and output schema, the description is incomplete for a tool that performs an external operation (screenshot capture). It fails to address behavioral aspects like permissions, rate limits, or output details (e.g., image format, URL validity), which are crucial for reliable agent use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 100%, with the single parameter 'url' well-documented in the schema. The description adds no additional semantic context beyond implying the URL is for a webpage, which is already covered. This meets the baseline for high schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Capture a screenshot') and resource ('any webpage'), and distinguishes from the sibling tool 'get_metadata' by focusing on visual capture rather than metadata extraction. It's precise and unambiguous.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as the sibling 'get_metadata' or other potential tools for webpage analysis. It lacks context about prerequisites, limitations, or typical use cases, offering only a basic functional statement.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming the connector lets you:

- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.