joboracle
Server Details
JobOracle Job Market Intelligence MCP
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: ToolOracle/joboracle
- GitHub Stars: 0
- Server Listing: JobOracle
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 2.8/5 across 8 of 8 tools scored.
Most tools have distinct purposes, but job_search and remote_jobs could overlap in functionality for remote job searches, potentially causing confusion. The other tools like company_jobs, job_compare, job_trends, and salary_insights are clearly differentiated by their specific focuses on company-specific listings, market comparisons, trend analysis, and salary data, respectively.
Tool names follow a consistent snake_case pattern throughout, such as company_jobs and job_search, which aids readability. The one slight outlier is health_check, which draws on server-health vocabulary rather than job-related terms, though it still follows the snake_case convention.
With 8 tools, the count is well-scoped for a job market analysis server, covering key areas like job searching, company insights, trends, and salary data. Each tool appears to serve a specific, useful function without redundancy, making the set manageable and comprehensive for the domain.
The tool surface covers core job market operations, including search, comparison, trends, and salary insights, with no major gaps. However, there might be minor omissions, such as tools for detailed job application processes or user-specific job tracking, but these are not essential for the server's apparent purpose of market analysis and insights.
Available Tools
8 tools

company_jobs
All open positions at a specific company.
| Name | Required | Description | Default |
|---|---|---|---|
| company | Yes | Company name | |
| country | No | | de |
| results_per_page | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries full burden for behavioral disclosure. It states it retrieves 'open positions' but doesn't mention whether this is a read-only operation, if authentication is required, rate limits, pagination behavior, or what format the results come in. For a tool with 3 parameters and no output schema, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that gets straight to the point with zero wasted words. It's appropriately sized for a simple retrieval tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters (with only 33% schema coverage), no annotations, and no output schema, the description is insufficient. It doesn't explain what the tool returns, how results are structured, or provide enough context about the undocumented parameters to make this tool usable without additional documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is only 33% (only 'company' parameter has a description). The description mentions 'specific company' which aligns with the 'company' parameter, but doesn't add any meaning for the undocumented 'country' and 'results_per_page' parameters. With low schema coverage, the description fails to compensate adequately.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states what the tool does ('All open positions at a specific company'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'job_search' or 'remote_jobs', which likely have overlapping functionality.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'job_search' or 'remote_jobs'. There's no mention of prerequisites, exclusions, or comparative context with sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
health_check
Server status.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. 'Server status' implies a read-only, non-destructive operation, but it doesn't disclose behavioral traits such as authentication needs, rate limits, response format, or error conditions. The description is minimal and lacks necessary context for safe invocation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise with 'Server status.' as a single phrase, which is appropriately sized for a simple tool. It's front-loaded and wastes no words, though it could benefit from slightly more detail without losing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, no output schema), the description is incomplete. It lacks context on what 'status' includes (e.g., health metrics, uptime), response format, or integration with sibling tools. Without annotations or output schema, more detail is needed for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so there is no parameter documentation to provide and the schema trivially covers the input requirements. The description adds no parameter semantics, but none are needed. Baseline is 4 for zero-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Server status' states what the tool does at a high level but lacks specificity. It identifies the resource (server) and action (checking status) but doesn't distinguish from siblings or provide details about what aspects of server status are checked. It's not tautological but remains vague about scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention context, prerequisites, or exclusions, and it doesn't relate to sibling tools like company_jobs or job_search. Usage is implied only by the tool name and generic description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
job_alerts_check
How many new jobs posted today/3 days/week for a query. Quick market pulse.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Keywords | |
| country | No | | de |
| location | No | City or region | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions timeframes (today/3 days/week) and the tool's quick nature, but lacks details on permissions, rate limits, data sources, or response format. For a tool with no annotations, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, consisting of two short sentences that directly state the tool's function and intent. Every sentence adds value without redundancy, though it could be slightly more structured (e.g., by explicitly listing timeframes).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (3 parameters, no annotations, no output schema), the description is incomplete. It doesn't cover behavioral aspects like data freshness, error handling, or output format, and with low schema coverage (67%), it fails to compensate for gaps in parameter understanding, making it inadequate for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (2 out of 3 parameters have descriptions), so the baseline is 3. The description adds no additional parameter semantics beyond what the schema provides (e.g., it doesn't explain how 'query' interacts with timeframes or what 'country' and 'location' defaults imply).
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: to count new jobs posted within specific timeframes (today/3 days/week) for a query, providing a 'quick market pulse.' It names the resource ('jobs') and the timeframes explicitly, though it doesn't differentiate from siblings like job_search or job_trends.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'quick market pulse,' which implies a high-level overview, but doesn't specify scenarios, exclusions, or compare it to siblings such as job_search (detailed listings) or job_trends (historical analysis).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
job_compare
Compare job markets across cities for a role: job count and salaries.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Job title | |
| country | No | | de |
| locations | Yes | List of cities to compare | |
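To make the parameter shapes concrete, here is a hypothetical MCP tools/call request for job_compare, built as a plain JSON-RPC payload. The envelope follows the standard MCP shape; the argument values are illustrative only, not taken from the server's documentation.

```python
import json

# Hypothetical tools/call request for job_compare. The JSON-RPC envelope
# is the standard MCP shape; argument values are made-up examples.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "job_compare",
        "arguments": {
            "query": "Data Engineer",                      # required: job title
            "country": "de",                               # optional, defaults to "de"
            "locations": ["Berlin", "Munich", "Hamburg"],  # required: cities to compare
        },
    },
}
print(json.dumps(request, indent=2))
```

Note that `locations` is a list, unlike the single `location` string used by most sibling tools; that distinction is easy to miss given the 67% schema coverage.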
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions the outputs (job count and salaries) but lacks details on data sources, freshness, rate limits, authentication needs, or error handling. For a tool with no annotations, this is a significant gap in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary words. Every element ('compare job markets', 'across cities', 'for a role', 'job count and salaries') contributes directly to understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations, no output schema, and moderate schema coverage, the description is incomplete. It doesn't address behavioral aspects like data reliability or usage constraints, nor does it explain output format or error cases, leaving gaps for a tool with 3 parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (2 out of 3 parameters described). The description adds minimal value beyond the schema, as it doesn't explain parameter interactions, format expectations for 'locations', or the role of 'country'. Baseline 3 is appropriate since the schema does moderate lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: comparing job markets across cities for a specific role, with metrics on job count and salaries. It uses specific verbs ('compare') and resources ('job markets', 'cities', 'role'), though it doesn't explicitly distinguish from siblings like 'salary_insights' or 'job_trends'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'salary_insights' or 'job_search'. It states what the tool does but offers no context on appropriate scenarios, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
job_search
Search jobs by title, skill, location. Filter by type and recency.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (starts at 1) | |
| query | Yes | Job title or keywords, e.g. 'Business Development Manager' | |
| country | No | Country code: de, gb, us, at, ch, nl, fr, it, pl, br, au, in | de |
| sort_by | No | Sort: relevance, date, salary | relevance |
| location | No | City or region, e.g. 'Bielefeld' or 'Berlin' | |
| full_time | No | Full-time only | |
| salary_min | No | Minimum annual salary filter | |
| max_days_old | No | Only jobs posted within N days | |
| results_per_page | No | Results per page (max 50) | |
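Since job_search carries the richest parameter surface, a worked example helps. The following is a hypothetical tools/call payload using the documented parameters; the JSON-RPC envelope is the standard MCP shape, and all argument values are illustrative.

```python
import json

# Hypothetical tools/call request for job_search; values are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "job_search",
        "arguments": {
            "query": "Business Development Manager",  # required keywords
            "country": "de",         # one of the listed country codes
            "location": "Berlin",    # optional city or region
            "sort_by": "date",       # relevance | date | salary
            "max_days_old": 7,       # only jobs posted within N days
            "results_per_page": 20,  # capped at 50
            "page": 1,               # pagination starts at 1
        },
    },
}
print(json.dumps(request, indent=2))
```

Only `query` is required; everything else falls back to the defaults in the table above.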
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries full burden. It mentions filtering capabilities but doesn't disclose important behavioral traits like pagination behavior (implied by 'page' parameter but not explained), rate limits, authentication requirements, or what the response format looks like. For a search tool with 9 parameters and no output schema, this leaves significant gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise - a single sentence that efficiently communicates the core functionality. Every word earns its place with no redundancy or unnecessary elaboration. It's front-loaded with the main purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 9 parameters, no annotations, and no output schema, the description is insufficiently complete. It doesn't explain what the tool returns (job listings format, structure, fields), doesn't mention pagination behavior despite having page parameters, and provides no context about data freshness, source limitations, or error conditions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents all 9 parameters thoroughly. The description adds minimal value beyond the schema - it mentions 'title, skill, location' and 'type and recency' filtering, which corresponds to some parameters but doesn't provide additional semantic context. Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Search jobs by title, skill, location' with specific verbs and resources. It distinguishes from some siblings like 'job_compare' or 'salary_insights' by focusing on search functionality, though it doesn't explicitly differentiate from 'remote_jobs' or 'company_jobs' which might also involve job searching.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'remote_jobs', 'company_jobs', or 'job_trends'. It mentions filtering by 'type and recency', but this is part of the functionality rather than usage context. No explicit when/when-not statements or sibling comparisons are included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
job_trends
Top companies hiring for a field. Rising demand signals.
| Name | Required | Description | Default |
|---|---|---|---|
| field | Yes | Field or job category, e.g. 'Vertrieb' or 'IT' | |
| country | No | | de |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It mentions 'rising demand signals' which hints at trend analysis, but fails to specify whether this is a read-only operation, what data sources are used, potential rate limits, or the format of the output. This leaves significant gaps in understanding the tool's behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise (two short sentences) and front-loaded with the core purpose. Every sentence contributes directly to understanding the tool's function, with no wasted words. However, it could be more structured by explicitly separating the two key outputs (top companies and demand signals).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (trend analysis with 2 parameters), lack of annotations, and no output schema, the description is incomplete. It doesn't explain what 'top companies' means (e.g., by volume, growth?), what 'rising demand signals' entails, or the return format. For a tool with no structured behavioral hints, this leaves too much undefined.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (only the 'field' parameter has a description). The description adds no parameter-specific information beyond what's implied by the tool's purpose. It doesn't clarify the semantics of 'country' or provide examples beyond the schema's 'field' description. With moderate schema coverage, the baseline is 3 as the description doesn't compensate for the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Top companies hiring for a field. Rising demand signals.' states a general purpose (identifying top companies and demand trends) but is vague about the specific resource or output format. It distinguishes from obvious siblings like 'salary_insights' or 'job_search' by focusing on companies and trends rather than salaries or job listings, but doesn't explicitly differentiate from 'company_jobs' or 'job_compare' which might overlap.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'job_search' or 'company_jobs'. It implies usage for analyzing hiring trends, but offers no explicit when/when-not instructions or prerequisites, leaving the agent to infer context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remote_jobs
Find remote-only positions for a role or skill.
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Keywords | |
| country | No | | de |
| results_per_page | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool finds remote-only positions but doesn't mention any behavioral traits such as data sources, rate limits, authentication requirements, or response format. This leaves significant gaps for a search tool with potential external dependencies.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose without any wasted words. It is appropriately sized and front-loaded, making it easy to understand at a glance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a job search tool with no annotations, low schema coverage, and no output schema, the description is incomplete. It lacks information on behavioral aspects, parameter details beyond the basic purpose, and output expectations, making it inadequate for effective tool selection and invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low at 33%, with only the 'query' parameter having a description ('Keywords'). The description adds no parameter semantics beyond what's implied by the tool's purpose (e.g., it doesn't explain 'country' or 'results_per_page' defaults or usage). This fails to compensate for the schema's lack of detail on two out of three parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Find') and resource ('remote-only positions'), and specifies the scope ('for a role or skill'). However, it doesn't explicitly differentiate from sibling tools like 'job_search' or 'company_jobs', which might also search for jobs but with different filters or scopes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With sibling tools like 'job_search' and 'company_jobs' available, there's no indication of when this tool is preferred (e.g., for remote-only filtering) or when other tools might be more appropriate, leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
salary_insights
Salary ranges for a role in a location.
| Name | Required | Description | Default |
|---|---|---|---|
| role | Yes | Job title, e.g. 'Sales Manager' | |
| country | No | | de |
| location | No | City or region | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions what data is retrieved (salary ranges) but doesn't cover critical aspects like data sources, accuracy, update frequency, rate limits, authentication needs, or error handling. For a data retrieval tool with zero annotation coverage, this leaves significant gaps in understanding its behavior.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded, consisting of a single, clear sentence that directly states the tool's purpose. There is no wasted language or unnecessary elaboration, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of salary data retrieval (which involves nuanced parameters and potential data limitations), the description is insufficient. With no annotations, no output schema, and only moderate parameter coverage, it fails to address key contextual elements like data format, currency, time periods, or reliability. The description alone doesn't provide enough information for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 67% (2 out of 3 parameters have descriptions), which is moderate. The description adds minimal value beyond the schema by implying that 'role' and 'location' are used together to scope the salary data, but it doesn't explain parameter interactions (e.g., how 'country' and 'location' relate) or provide examples. With partial schema coverage, the description doesn't fully compensate, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving salary ranges for a specific role in a specific location. It names the resource ('salary ranges') and its scope ('role', 'location'), making the function unambiguous even though the phrasing lacks an explicit verb. However, it doesn't explicitly differentiate from sibling tools like 'job_compare' or 'job_trends' which might also involve salary data, keeping it from a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With siblings like 'job_compare' and 'job_trends' that might overlap in functionality, there's no indication of when this tool is preferred or what distinguishes it from them. The description only states what it does, not when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
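As a quick local check before publishing, the claim file can be generated and round-tripped like this. This is a minimal sketch: the directory layout mirrors the required /.well-known/glama.json path, a temp directory stands in for your web root, and the email is a placeholder that must match your Glama account.

```python
import json
import tempfile
from pathlib import Path

# Sketch: build the claim file that would be served at
# /.well-known/glama.json on your server's domain.
# A temp dir stands in for the web root; the email is a placeholder.
claim = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}
web_root = Path(tempfile.mkdtemp())
well_known = web_root / ".well-known"
well_known.mkdir()
path = well_known / "glama.json"
path.write_text(json.dumps(claim, indent=2))
print(path.read_text())
```

Serving this file with a JSON content type at the matching path is all the verification step requires.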
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.