
Paylocity MCP Server

Server Quality Checklist

Profile completion: 58%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool targets a distinct domain concept with clear boundaries: employee CRUD (add/get/update/search), earnings management (add), payroll data (pay statements, direct deposit), and company insights. No overlapping functionality between tools like 'add_earnings' and 'update_employee'.

    Naming Consistency: 5/5

    Strict adherence to verb_noun snake_case convention throughout (add_earnings, get_employee, search_employees, update_employee). Pluralization choices logically match the resource type (earnings, statements as collections; employee, deposit as singular records).

    Tool Count: 5/5

    Eight tools provide a focused but sufficient surface for core HRIS operations without bloat. The scope covers employee lifecycle management, compensation adjustments, and payroll inquiry—appropriate for a Paylocity integration without attempting to wrap the entire API.

    Completeness: 4/5

    Solid CRUD coverage for employee records and read access to payroll data. Minor gaps exist: direct deposit can be viewed but not updated, and earnings can be added but not removed or modified. However, core 'find employee, view details, update info' workflows are fully supported.

  • Average score of 4/5 across all 8 tools.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 8 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • Are you the author?

  • Add related servers to improve discoverability.

Tool Scores

  • get_direct_deposit

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden of behavioral disclosure. It adds valuable context that 'Account and routing numbers are redacted' and lists what fields are visible (type, status, name). However, it omits other critical behavioral traits: error handling (what happens if employeeId is invalid), whether this is idempotent, or rate limiting concerns.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences, zero waste. The first sentence front-loads the purpose; the second sentence efficiently communicates data privacy/redaction behavior. Every word earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple read-only tool with 2 flat parameters and no output schema, the description is nearly complete. It explains what the tool retrieves and clarifies the redaction behavior. Minor gap: doesn't explicitly state the return cardinality (single vs. multiple accounts) or format, though 'accounts' plural implies a collection.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% (both companyId and employeeId have descriptions). The description itself does not mention parameters, but with the schema fully documented, the baseline is 3. The description neither adds parameter semantics nor contradicts the schema.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description explicitly states 'Get direct deposit accounts for an employee' — clear verb (Get), specific resource (direct deposit accounts), and scope (employee). It clearly distinguishes from siblings like get_employee (general info), get_pay_statements (pay history), and add_earnings (write operation).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 2/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides no explicit guidance on when to use this tool versus alternatives like get_employee, nor does it mention prerequisites (e.g., needing valid employeeId). It relies entirely on the tool name to imply appropriate usage context.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • add_earnings

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full disclosure burden. It clarifies the temporal behavior (one-time vs recurring) and prerequisites, but omits safety details like whether this creates permanent payroll records, potential validation failures, or idempotency characteristics.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Two sentences with zero waste: the first establishes purpose with examples, the second provides actionable prerequisite guidance. Information is front-loaded and every word serves the agent's selection decision.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 11 parameters, no output schema, and no annotations, the description meets minimum viability for tool selection but leaves gaps. It adequately covers the core earning addition logic but provides no semantics for cost center fields and does not describe success behaviors or return values.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 73%, establishing a baseline of 3. The description adds significant value by providing concrete examples for 'earningCode' (BONUS, COMM, OT) and framing the date parameters as 'one-time or recurring.' However, it fails to address the three costCenter parameters which lack schema descriptions.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the action ('Add') and resource ('earning') with specific examples (bonus, commission, stipend, overtime). It distinguishes from sibling 'add_employee' by specifying the target is an earning added 'to an employee' rather than the employee record itself.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit prerequisite guidance by referencing 'get_company_codes' to obtain valid earning codes before invocation. While it doesn't explicitly state when NOT to use the tool, it effectively guides the agent toward necessary preparatory steps.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_employee

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It effectively discloses the return payload scope ('Returns everything: personal info...') and critical data security behavior ('SSN, FEIN, and bank info are redacted'). It misses explicit read-only/safety declarations or error handling, but covers the key behavioral traits for HR data access.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core action in the first sentence, followed by a colon-delimited list of returned fields and a final security note. Every sentence earns its place with zero redundancy or filler.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the lack of output schema, the description appropriately compensates by detailing the comprehensive return payload and redaction rules. It adequately covers the tool's behavior for a simple two-parameter read operation, though it could note error cases (e.g., invalid ID).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, providing detailed parameter documentation including the environment variable default for companyId. The description adds minimal parameter-specific semantics beyond the schema, focusing instead on return values, which warrants the baseline score for high-coverage schemas.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') with a clear resource ('full details for an employee') and identifier method ('by ID'). It effectively distinguishes from siblings like 'search_employees' (which implies filtering without ID) and 'update_employee' (write vs read).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The phrase 'by ID' implies you must have the employee identifier, suggesting contrast with 'search_employees'. However, there is no explicit guidance like 'Use search_employees if you do not have the ID' or clarification on when to use 'get_direct_deposit' versus this comprehensive endpoint.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • get_pay_statements

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It effectively discloses the return payload structure—specifying both summary fields (gross/net pay, hours) and line-item details (taxes, deductions, earnings)—which compensates for the missing output_schema. However, it does not explicitly confirm this is read-only/safe or describe error cases (e.g., invalid year).

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description consists of three efficiently structured sentences: purpose (sentence 1), return value details (sentence 2), and optional filtering capability (sentence 3). There is no redundant or wasted text; every clause adds specific value about functionality or return structure.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description adequately compensates by detailing the return structure (summary vs. line-item data). With 100% schema coverage for inputs and clear required parameter indication, the definition is complete for invocation, though it could benefit from mentioning data availability constraints or error scenarios.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The input schema has 100% description coverage, documenting all four parameters including the optional nature of checkDate and companyId default behavior. The description reinforces the year-based retrieval and optional date filtering, but does not add semantic meaning beyond what the schema already provides, warranting the baseline score.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses a specific verb ('Get') and resource ('pay statement data') and clearly scopes it to 'by year'. It effectively distinguishes from siblings like get_direct_deposit (which retrieves setup/configuration, not pay statements) and get_employee (profile data) by specifying it returns payroll details including 'gross pay, net pay, hours' and line-item breakdowns.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description mentions the optional checkDate filter ('Optionally filter to a specific check date'), implying when to use that parameter. However, it lacks explicit guidance on when to choose this over get_direct_deposit or prerequisites like requiring an existing employeeId from get_employee/search_employees.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • update_employee

    Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full disclosure burden. It successfully notes the effective date requirement but fails to disclose mutation characteristics: idempotency, error behavior when employee missing, whether partial updates are supported, or what the response indicates.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Optimal structure: single opening sentence establishes purpose, followed by categorized field list, ending with critical constraint. No redundant words despite high parameter complexity. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given high schema coverage (100%) and complex nested structure, the description provides sufficient context by mapping schema fields to business concepts (cost center, supervisor, benefit class). Missing return value documentation is acceptable without output schema; only minor gap is lack of error scenario mention.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 4/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, baseline is 3. The description adds value by categorizing the nested 'updates' fields into logical groups (personal, job, compensation) and emphasizing the effective date constraint that applies across multiple nested objects, aiding comprehension of the complex schema structure.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'Update' with resource 'employee's information' and comprehensively lists updatable field categories (personal info, job details, compensation, status, benefits). The verb clearly distinguishes this from sibling 'add_employee' (create vs update semantics).

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 3/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides critical constraint that 'All changes require an effective date where applicable,' which guides temporal usage. However, lacks explicit when-not-to-use guidance (e.g., distinguishing from 'add_employee' for new hires) or prerequisite mentions (e.g., employee must exist).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    With no annotations provided, the description carries the full burden. It successfully discloses what data is returned (headcount, departments, job titles, reporting relationships), but does not mention safety characteristics (read-only nature), authentication requirements, or performance/caching behavior. Adequate but not comprehensive behavioral disclosure.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Excellent single-sentence structure that front-loads the action ('Get a high-level workforce summary') and efficiently lists the specific data components available. Zero redundancy or filler content.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the absence of an output schema, the description excellently compensates by detailing the exact structure and fields returned (headcount, status breakdown, department names, job titles, reporting relationships). For a simple single-parameter tool, this provides complete contextual information.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100% for the single companyId parameter, which includes default behavior (env var). The description adds no explicit parameter guidance, but with complete schema documentation, no additional parameter semantics are required. Baseline score appropriate for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description uses specific verb 'Get' and clearly defines the resource as a 'high-level workforce summary' with explicit details (headcount, department breakdowns, reporting relationships). It clearly distinguishes from siblings like get_employee (individual records) and search_employees (filtered search) by emphasizing aggregate company-wide data.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    While it does not explicitly name alternative tools, the description provides clear context that this is for aggregate company-level data ('high-level workforce summary', 'total headcount'), implicitly guiding the agent to use individual employee tools (get_employee, search_employees) for non-aggregate needs. Clear context without explicit exclusions.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • add_employee

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Discloses important security behavior (SSN encrypted, not stored in conversation) and return value ('Returns the new employee ID'). Does not mention error scenarios or reversibility.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Four sentences with zero waste. Front-loaded with purpose ('Add a new employee'), followed by scope, security-critical SSN handling, and return value. Every sentence earns its place.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given 23 parameters and mutation nature, description adequately covers essential security context (SSN encryption) and output (employee ID). Missing explicit required vs optional guidance and error handling, but sufficient for correct invocation given schema.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is low (43%), so description must compensate. Provides high-level categories ('personal info, job details, and compensation') mapping to parameter groups and highlights SSN specifically. However, does not detail the majority of undocumented parameters (firstName, lastName, address1, etc.).

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Description uses specific verbs ('Add', 'Creates') with clear resource ('employee'). Explicitly mentions 'onboarding' and 'new' to distinguish from sibling update_employee, and 'Creates' distinguishes from get/search siblings.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Provides clear context ('onboarding') indicating when to use for new hires. Includes critical usage instruction about SSN handling ('pass it directly'). Could improve by explicitly contrasting with update_employee for existing records.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • search_employees

    Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    No annotations provided, so description carries full burden. Compensates by disclosing return structure (basic info: name, title, department, status, supervisor) despite lacking output schema. Does not explicitly state read-only/safety properties or pagination behavior, preventing a 5.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    Three sentences with zero waste: purpose (sentence 1), return value (sentence 2), workflow guidance (sentence 3). Perfectly front-loaded with the most critical selection criteria first.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Comprehensive given constraints. Lacks output schema but description compensates by listing returned fields. Establishes clear relationship to sibling 'get_employee'. All 3 parameters fully documented in schema with description covering the semantic intent of the search.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema coverage is 100%, establishing baseline 3. Description lists searchable fields (name, title, email, employee ID) which aligns with but does not substantially augment the schema's parameter descriptions. No additional syntax or format guidance provided.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    Specific verb (Search) + resource (employee directory) + scope (by name, job title, email, or employee ID). Explicitly distinguishes from sibling 'get_employee' by describing this as directory search versus full record retrieval.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    Explicit workflow guidance: 'Use this first to find someone's employee ID, then use get_employee for full details.' Clearly establishes the two-step pattern and when to prefer this tool over the sibling retrieval tool.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

A card-style badge for the paylocity-mcp MCP server; copy its snippet from the server page into your README.md.

Score Badge

A compact score badge; copy its snippet from the server page in the same way.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
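The weighting described above can be sketched in code. This is an illustrative reimplementation, not Glama's actual scoring pipeline; the dimension keys and function names are invented for the example:

```python
def tdqs(dims: dict[str, float]) -> float:
    """Tool Definition Quality Score: weighted sum of six 1-5 dimension scores."""
    weights = {"purpose": 0.25, "usage": 0.20, "behavior": 0.20,
               "parameters": 0.15, "conciseness": 0.10, "completeness": 0.10}
    return sum(weights[d] * dims[d] for d in weights)

def overall_score(tool_dims: list[dict[str, float]], coherence: float) -> float:
    """Combine server-level definition quality (70%) with coherence (30%).

    Definition quality blends the mean TDQS with the minimum TDQS (60/40),
    so one poorly described tool drags the whole server down.
    """
    per_tool = [tdqs(d) for d in tool_dims]
    definition = 0.6 * (sum(per_tool) / len(per_tool)) + 0.4 * min(per_tool)
    return 0.7 * definition + 0.3 * coherence

def tier(score: float) -> str:
    """Map an overall score onto the A-F tiers."""
    for cutoff, name in [(3.5, "A"), (3.0, "B"), (2.0, "C"), (1.0, "D")]:
        if score >= cutoff:
            return name
    return "F"
```

For example, the get_direct_deposit scores above (purpose 5, usage 2, behavior 3, parameters 3, conciseness 5, completeness 4) yield a TDQS of 3.6 under these weights.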

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Lucid-Drone-Technologies/paylocity-mcp'
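The same endpoint can be called from code. A minimal sketch using only the Python standard library — the response schema is not documented here, so the body is decoded as generic JSON:

```python
import json
import urllib.request

API_BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, name: str) -> str:
    # Build the per-server endpoint, e.g. .../Lucid-Drone-Technologies/paylocity-mcp
    return f"{API_BASE}/{owner}/{name}"

def fetch_server(owner: str, name: str) -> dict:
    # Fetch the server profile and decode it as generic JSON.
    with urllib.request.urlopen(server_url(owner, name)) as resp:
        return json.load(resp)
```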

If you have feedback or need assistance with the MCP directory API, please join our Discord server.