HelloTime MCP

Server Details

HelloTime MCP server for workforce management: time tracking, attendance, productivity, payroll, and timesheets.

- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.4/5 across 4 of 5 tools scored.
Each tool has a distinct purpose: country_support for per-country details, feature_search for free-text search, list_features for listing features, list_plans for pricing plans, and payroll_capabilities for payroll info. No overlapping functionality.
Tools use lowercase with underscores, but mix noun_noun (country_support, feature_search, payroll_capabilities) and verb_noun patterns (list_features, list_plans). Most are consistent, but two follow a different pattern.
With 5 tools covering product information, the count is well-scoped for the server's purpose. Each tool provides essential functionality without bloat or gaps.
The toolset covers core HelloTime product info: countries, features, plans, and payroll. A minor gap is the lack of direct tools for integrations or account details, but feature_search can compensate.
Available Tools (5 tools)

country_support (Grade: A)
Return per-country features, default currency, and product positioning for a supported country (IN, AU, GB, US, CA, AE, SG, NZ).
| Name | Required | Description | Default |
|---|---|---|---|
| country | No | Single ISO country code. Omit for the full matrix. | |
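The single optional parameter maps directly onto MCP's standard `tools/call` JSON-RPC method. A sketch of a request (the `id` and the chosen country code are illustrative; omitting `arguments.country` would request the full matrix):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "country_support",
    "arguments": { "country": "AU" }
  }
}
```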
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations provided; description implies a read-only query but lacks details on side effects, authorization, or rate limits. Adequate but minimal for a simple lookup.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Single sentence, front-loaded with action and resource, efficient with no wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple one-parameter tool with no output schema, the description covers purpose and supported countries. It could clarify what the 'full matrix' contains when the parameter is omitted.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes the single parameter well (with enum and note about omission). Description adds context on what is returned, adding value beyond schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
Description clearly states it returns per-country features, default currency, and product positioning for a specific set of countries. Verb 'Return' is specific and resource is well-defined.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool vs siblings like feature_search or list_features. The description does not mention alternatives or restrictions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
feature_search (Grade: C)
Free-text search across plan features, product features, country features, and payroll engines.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max results to return (default 20). | |
| query | Yes | Free-text query, e.g. "geofence clock-in", "PF ESI", or "screenshots". | |
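A corresponding `tools/call` request might look as follows; this sketch supplies an explicit `limit`, though leaving it out should fall back to the documented default of 20 (the `id` and query string are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "feature_search",
    "arguments": { "query": "geofence clock-in", "limit": 5 }
  }
}
```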
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, and the description only covers the search scope. It does not disclose data mutability, authentication needs, rate limits, or behavior on invalid queries, leaving the agent underinformed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence. It is concise and front-loads the core purpose, though it could be slightly more structured with additional details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With only two simple parameters and no output schema, the description adequately conveys the search scope but lacks details on result format, pagination, and error handling, leaving some gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema already describes both parameters fully (100% coverage), including examples. The overall description adds no additional parameter context, so baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a free-text search across plan features, product features, country features, and payroll engines. This distinguishes it from siblings like list_features (listing) and country_support (per-country details), but the term 'features' remains somewhat vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like list_features or payroll_capabilities. There is no indication of prerequisites or context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_features (Grade: C)
List HelloTime features (shifts, rosters, leave types, timesheets, time tracking, productivity, GPS / geofence, biometric kiosk, payroll, invoicing, analytics, projects, reports, integrations).
| Name | Required | Description | Default |
|---|---|---|---|
| plan | No | Only return features available in this plan tier. | |
| category | No | Filter to one feature category (shifts, rosters, leave, timesheets, gps-geofence, biometric-kiosk, etc.). | |
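Both optional filters can be combined in a single `tools/call` request. A sketch (the `id` is illustrative, and the `plan` value assumes the tier names listed under list_plans; `category` uses one of the values named in the schema):

```json
{
  "jsonrpc": "2.0",
  "id": 3,
  "method": "tools/call",
  "params": {
    "name": "list_features",
    "arguments": { "plan": "Pro", "category": "timesheets" }
  }
}
```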
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It does not disclose any behavioral traits like read-only nature, authentication needs, or rate limits. The tool is likely idempotent and safe, but this is not stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that efficiently lists all relevant categories. It is front-loaded with the verb and resource, and the list is complete. Slightly verbose due to the long list, but acceptable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema, the description does not explain what each feature entry contains (e.g., name, description, availability). With two optional parameters and no return format, the description is incomplete for an agent to fully understand the output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 100% description coverage for both parameters ('plan' and 'category' with enum values). The description lists all categories, reinforcing the schema but adding little new meaning. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action (List HelloTime features) and provides a comprehensive list of feature categories, distinguishing it from siblings like feature_search and list_plans. However, it does not explicitly differentiate from country_support or payroll_capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is given on when to use this tool versus alternatives such as feature_search for searching specific features or list_plans for plan information. The description implies listing all features but lacks explicit usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_plans (Grade: A)
List HelloTime pricing plans (Free, Attend, Track, Pro, Business) with launch + list prices per region, plus volume and annual prepay discounts. Free is permanent for teams up to 5 employees; paid tiers each include a 7-day free trial.
| Name | Required | Description | Default |
|---|---|---|---|
| plan | No | Restrict the response to a single plan tier. | |
| country | No | ISO country code. Filters prices to one country. Omit to return all 8 markets. | |
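A sketch of a `tools/call` request that filters pricing to one market while leaving `plan` unset, so all five tiers are returned (the `id` and country code are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 4,
  "method": "tools/call",
  "params": {
    "name": "list_plans",
    "arguments": { "country": "IN" }
  }
}
```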
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations exist, so the description carries the full burden. It discloses the output includes launch prices, list prices, region-based filtering, and discount details, which is informative for a read-only listing tool. However, it does not explicitly state that the operation is non-destructive or idempotent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, well-structured sentence that front-loads the core action ('list pricing plans') and efficiently appends the key output details. No superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description covers the main return components (plans, prices, discounts, region). It omits pagination or ordering, but the tool is simple and likely returns a small dataset. The optional parameters are adequately implied.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with both parameters having descriptions. The description adds minimal value beyond the schema, only reinforcing the 'region' concept and mentioning discounts not directly tied to parameters. Baseline 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool lists HelloTime pricing plans, naming its specific tiers (Free, Attend, Track, Pro, Business) and the details returned (launch/list prices, discounts per region). It distinguishes itself from sibling tools that cover support, features, or payroll.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided. The description implies it is for retrieving pricing plan information, but lacks alternatives or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
payroll_capabilities (Grade: A)
For a given country, return the supported payroll engines (e.g. AU STP2 + super, IN PF/ESI/TDS/Form 24Q, US W-2/1099) with status (live/beta/coming-soon).
| Name | Required | Description | Default |
|---|---|---|---|
| country | Yes | Required ISO country code. | |
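Unlike the siblings above, `country` is required here, so a `tools/call` request must always carry it. A sketch (the `id` and country code are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 5,
  "method": "tools/call",
  "params": {
    "name": "payroll_capabilities",
    "arguments": { "country": "AU" }
  }
}
```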
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool returns a list of engines with statuses, indicating a read operation, but lacks details on authorization, rate limits, or potential errors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single sentence that is concise and front-loaded with the action and resource. It includes illustrative examples without unnecessary words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple tool with one parameter and no output schema, the description adequately explains what is returned (engines with statuses) and gives examples. It could hint at the result format (e.g., list of objects), but overall it is sufficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with a well-described 'country' parameter. The description only rephrases the parameter need ('for a given country') and provides output examples, which do not add new semantic constraints to the parameter itself.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'return' and the resource 'supported payroll engines for a given country'. It includes concrete examples for multiple countries, making the purpose unambiguous and distinguishing it from siblings like 'country_support'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description does not provide any guidance on when to use this tool versus alternatives. It lacks explicit when-to-use, when-not-to-use, or references to sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
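Before publishing, the file's shape can be sanity-checked locally. The helper below is a hypothetical sketch, not part of Glama's tooling; it only verifies the structure shown above (a JSON object with a non-empty `maintainers` list whose entries carry an email):

```python
import json


def validate_glama_json(text: str) -> list[str]:
    """Return a list of problems found in a /.well-known/glama.json payload."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    if not isinstance(data, dict):
        return ["top-level value must be a JSON object"]
    problems = []
    maintainers = data.get("maintainers")
    if not isinstance(maintainers, list) or not maintainers:
        problems.append("'maintainers' must be a non-empty list")
    else:
        for i, entry in enumerate(maintainers):
            email = entry.get("email") if isinstance(entry, dict) else None
            if not isinstance(email, str) or "@" not in email:
                problems.append(f"maintainers[{i}] needs a valid 'email'")
    return problems


sample = (
    '{"$schema": "https://glama.ai/mcp/schemas/connector.json",'
    ' "maintainers": [{"email": "your-email@example.com"}]}'
)
print(validate_glama_json(sample))  # []
```

An empty list means the structural checks passed; actual verification still depends on Glama matching the email to your account.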
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:

- Full audit trail: every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control: enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management: store and rotate API keys and OAuth tokens in one place
- Change alerts: get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:

- Proven adoption: public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics: see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback: users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:

- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.