invoiceflow-mcp
Server Quality Checklist
- Disambiguation 5/5
Each tool serves a clearly distinct purpose with non-overlapping boundaries. Manual payment marking (invoice_mark_paid) is clearly differentiated from external payment reconciliation (payment_reconcile), and analytical tools (invoice_risk, cashflow_report) are separated from workflow actions.
- Naming Consistency 3/5
While predominantly using snake_case, the set mixes naming patterns: most tools use noun_verb (invoice_create, payment_reconcile), but cashflow_report and invoice_risk use noun_noun, and client_manage employs a vague verb that obscures whether it creates or updates.
- Tool Count 4/5
Nine tools is an appropriate count for an invoice workflow system, covering the essential lifecycle from client onboarding through payment collection without excessive granularity, though the surface feels slightly lean due to missing retrieval operations.
- Completeness 3/5
The core invoice workflow is functional, but significant gaps hamper agent autonomy: the missing list_clients and get_client prevent discovering existing clients, while absent update_invoice and delete_invoice operations block error-correction workflows.
Average 3.5/5 across 9 of 9 tools scored. Lowest: 2.9/5.
See the tool scores section below for per-tool breakdowns.
This repository includes a README.md file.
This repository includes a LICENSE file.
Latest release: v1.0.0
No tool usage detected in the last 30 days.
Add a glama.json file to provide metadata about your server.
- This server provides 9 tools.
No known security issues or vulnerabilities reported.
This server has been verified by its author.
Tool Scores
client_manage
- Behavior 2/5
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Discloses upsert behavior (create or update), but with no annotations it fails to explain critical mechanics: how existing clients are identified for updates, idempotency, or side effects. The claimed update capability is unsupported by the schema, which lacks an ID parameter.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
- Conciseness 5/5
Is the description appropriately sized, front-loaded, and free of redundancy?
Extremely efficient: two sentences, front-loaded with action, no redundancy. Every word serves a purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
- Completeness 2/5
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Inadequate for the tool's complexity: 9 undocumented parameters, upsert behavior, no output schema, and no annotations. Missing essential details such as the identification logic for updates, return values, and field semantics.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
- Parameters 1/5
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Complete failure: the schema has 0% description coverage across 9 parameters, and the description adds no parameter context. No explanation of what distinguishes required fields (name/email) from optional ones, or semantic details like the expected tax_id format.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
- Purpose 4/5
Does the description clearly state what the tool does and how it differs from similar tools?
Clearly states the dual create/update capability with specific verbs, and distinguishes itself from sibling invoice tools by noting the prerequisite dependency ('Clients are needed before creating invoices').
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
- Usage Guidelines 3/5
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides implicit sequencing guidance (use before creating invoices) but lacks explicit when-to-use guidance, alternatives, prerequisites, or how to determine whether a call will create or update.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
invoice_mark_paid
- Behavior 2/5
With no annotations provided, the description carries the full disclosure burden but only minimally meets it. It mentions that the amount_paid and status fields are updated, but fails to address idempotency, reversibility, validation constraints (e.g., overpayment handling), or side effects such as triggered notifications.
- Conciseness 5/5
Two sentences with zero waste. The first front-loads the primary action and scope; the second explains the specific data mutations. Every word earns its place.
- Completeness 2/5
For a financial mutation tool with no annotations and no output schema, the description is insufficient. It omits critical context such as whether the operation is reversible, what status values result (paid vs. partial), and what the tool returns on success or failure.
- Parameters 3/5
Schema coverage is 100%, establishing a baseline of 3. The description adds slight value by explaining the 'fully or partially paid' concept, which maps to the amount parameter's default behavior, but does not elaborate on syntax, formats, or parameter relationships beyond what the schema provides.
- Purpose 4/5
Clearly states the core action (mark as paid) and scope (fully or partially), and distinguishes itself from siblings like invoice_send or invoice_create by naming the specific fields updated (amount_paid and status). It does not, however, explicitly differentiate itself from payment_reconcile.
- Usage Guidelines 2/5
No guidance on when to use this tool versus alternatives (particularly payment_reconcile), no prerequisites mentioned (e.g., the invoice must exist), and no warnings about irreversible actions or validation rules.
invoice_list
- Behavior 3/5
No annotations are provided, so the description carries the full disclosure burden. It adds valuable pagination behavior ('Supports pagination') beyond the structured data, but lacks a safety profile (read-only status), rate limits, the default behavior when no filters are applied, and the return value structure. 'List' implies read-only, but explicit confirmation would help.
- Conciseness 5/5
Two sentences with zero waste. The first front-loads the core functionality and all filter dimensions; the second adds the pagination capability. No filler or redundancy.
- Completeness 3/5
With 9 parameters, 0% schema coverage, complex validation patterns (UUID, date-time regex), and no output schema, the description covers the primary use cases but leaves significant gaps: no explanation of the return format, error conditions, or the relationship between the overdue_only filter and status=overdue.
- Parameters 3/5
Schema description coverage is 0%, so the description must compensate. It maps semantic meaning to most parameters by listing the filterable concepts (status, client, amount range, date range, overdue status) and mentioning pagination, but fails to explain data formats (UUID for client_id, ISO 8601 for dates), enum values, and defaults (limit=20, offset=0).
- Purpose 4/5
States a specific action (list and filter) and resource (invoices) with clear filtering capabilities (status, client, amount range, date range, overdue status). The list/filter verb effectively distinguishes it from sibling mutation tools (invoice_create, invoice_mark_paid), though it could explicitly clarify that it retrieves rather than modifies.
- Usage Guidelines 2/5
Provides no guidance on when to use this tool versus alternatives like cashflow_report (which may aggregate invoice data) or invoice_risk. No mention of prerequisites (e.g., obtaining a client_id from client_manage) or when filtering is preferable to an unfiltered listing.
invoice_remind
- Behavior 3/5
No annotations are provided, so the description carries the full burden. It discloses specific mutation side effects ('Increments reminder count and updates last_reminder_at'), which is valuable, but lacks information on idempotency, error conditions (e.g., if the invoice is already paid), permissions, and return values.
- Conciseness 5/5
Two sentences with zero waste: the first states the purpose, the second discloses side effects. Front-loaded and appropriately sized for the tool's complexity.
- Completeness 3/5
Adequate for a simple two-parameter mutation tool without an output schema. The description covers the primary behavioral impact (state updates) but does not clarify the return value or success/failure indicators, leaving a minor gap.
- Parameters 3/5
Schema description coverage is 100% (both invoice_id and message have descriptions). The description does not add parameter-specific semantics (e.g., format constraints for invoice_id, message length limits) beyond what the schema provides, meeting the baseline for high-coverage schemas.
- Purpose 4/5
A clear, specific action ('Send a payment reminder') and target resource ('unpaid invoice'). 'Reminder' distinguishes it from the sibling invoice_send (likely for initial delivery), and 'unpaid' from invoice_mark_paid, though these distinctions are never explicitly articulated.
- Usage Guidelines 2/5
No explicit guidance on when to use this tool versus siblings (e.g., when to remind vs. send the initial invoice), no prerequisites stated, and no alternative tools mentioned. Usage must be inferred solely from the 'unpaid invoice' qualifier.
invoice_create
- Behavior 3/5
With no annotations provided, the description carries the full burden. It successfully discloses the auto-calculation of financial fields and the invoice number format (INV-YYYY-NNNN), but omits safety information such as idempotency, error handling, and whether the operation is reversible.
- Conciseness 5/5
Three sentences with zero waste. Front-loaded with the core purpose ('Create a new invoice...'), followed by behavioral details (calculations, numbering). Every sentence earns its place.
- Completeness 3/5
For a tool with 7 parameters, nested line-item structures, and no output schema or annotations, the description covers the essential creation flow but leaves significant gaps. It does not describe return values, error conditions, or required vs. optional fields, forcing reliance on an undescribed schema.
- Parameters 2/5
Schema description coverage is 0%, so the description must compensate. While it implicitly references client_id and line_items, it fails to explain the other five parameters (currency, issue_date, due_date, notes, terms) or the structure of line-item objects (description, quantity, unit_price, etc.).
- Purpose 5/5
Uses a specific verb ('Create') with a clear resource ('invoice') and scope ('for a client with line items'). The explicit creation intent effectively distinguishes it from sibling tools like invoice_list, invoice_send, and invoice_mark_paid.
- Usage Guidelines 2/5
No guidance on when to use this tool versus siblings (e.g., when to create vs. update), nor any mention of prerequisites such as whether the client_id must already exist, or of validation rules for line items.
payment_reconcile
- Behavior 3/5
No annotations are provided, so the description carries the full burden. It discloses the critical mutation ('Auto-marks the invoice as paid') but omits failure modes (no match found, multiple matches, amount mismatch), authorization requirements, and the idempotency guarantees expected of financial operations.
- Conciseness 5/5
Two efficient sentences with zero waste. The first establishes the matching mechanism and key parameters; the second discloses side effects. Information is front-loaded and dense.
- Completeness 3/5
Adequate for a 4-parameter reconciliation tool, covering the core workflow and side effects. However, given the financial domain and the absence of an output schema, it lacks an output description (what identifier is returned?) and error-handling details.
- Parameters 3/5
Schema coverage is 75% (payment_method lacks a description). The description partially compensates by offering 'Stripe or PayPal' as payment_method examples and clarifying the matching logic combining amount and email. The baseline of 3 is appropriate given that the schema does the heavy lifting for the other parameters.
- Purpose 5/5
A clear, specific verb ('Match') with resources ('payment' to 'invoice') and exact matching criteria ('by amount and client email'). Distinguishes itself from the sibling invoice_mark_paid by emphasizing the matching workflow rather than a direct status update.
- Usage Guidelines 3/5
Implies usage context through the matching logic but lacks explicit guidance on when to use this tool versus invoice_mark_paid, or on how to handle cases where automatic matching fails. No mention of prerequisites (e.g., the invoice must exist).
invoice_send
- Behavior 3/5
With no annotations provided, the description carries the full burden. It successfully discloses the key side effects (PDF generation, status update to 'sent', email delivery) but omits critical mutation-safety details like idempotency, reversibility, error handling if email delivery fails, and required permissions.
- Conciseness 5/5
Three short sentences with zero waste. Front-loaded with the core action ('Send an invoice to the client via email') followed by implementation details (PDF generation, status update). Every sentence earns its place.
- Completeness 4/5
For a 2-parameter tool without an output schema, the description adequately covers the core functionality. However, given that this is a destructive mutation (status change, external email delivery), it should mention the state the invoice must be in before sending.
- Parameters 3/5
Schema description coverage is 100%, so the schema already fully documents both invoice_id and message. The description adds no additional parameter-specific context (e.g., format constraints, default behavior when message is omitted), warranting the baseline score for high-coverage schemas.
- Purpose 5/5
Uses specific verbs (send, generates, updates) and clearly identifies the resource (invoice) and scope (via email, status change to 'sent'). The email delivery and PDF generation details effectively distinguish it from siblings like invoice_create, invoice_list, and invoice_mark_paid.
- Usage Guidelines 3/5
While the description implies this tool handles initial invoice delivery, it lacks explicit guidance on when to use it versus invoice_remind (which likely sends reminder emails). No prerequisites are stated, such as the required invoice status (e.g., 'draft' vs. 'ready') before sending.
cashflow_report
- Behavior 3/5
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the output content (the eight metrics returned) but fails to declare operational characteristics like read-only safety, idempotency, or whether the projection uses cached or real-time data.
- Conciseness 5/5
A single dense sentence, front-loaded with the core action ('Generate a cash flow summary') followed by a colon-separated list of specific output components. Zero wasted words; every clause describes distinct report functionality.
- Completeness 4/5
Given the absence of an output schema, the description compensates effectively by detailing the specific report components (totals, rates, projections, breakdowns). Minor gap: it does not explicitly state the read-only/safe nature of the operation, given the missing readOnlyHint annotation.
- Parameters 4/5
The input schema contains zero parameters, establishing a baseline score of 4 per the rubric. No parameter semantics are required or provided.
- Purpose 5/5
Uses the specific verb 'Generate' and resource 'cash flow summary', then enumerates eight distinct metrics (collection rate, 30-day projection, breakdown by status/client) that clearly distinguish this analytical function from transactional siblings like invoice_create and invoice_mark_paid.
- Usage Guidelines 3/5
While the detailed list of aggregated metrics implies this tool is for financial analysis rather than individual record management, there is no explicit guidance on when to prefer it over invoice_list for detailed records, or on how it relates to payment_reconcile for collection data.
invoice_risk
- Behavior 4/5
With no annotations provided, the description carries the full burden. It successfully discloses the analysis methodology (invoice amount, client history, due date, reminder history) and the return structure (risk level, factors, recommended action), compensating for the missing output schema. It lacks mention of side effects or caching, but implies a read-only assessment.
- Conciseness 5/5
Three efficiently structured sentences: the first states the core function and scale, the second the analysis inputs, the third the outputs. Zero redundancy, front-loaded, and appropriately sized for the tool's complexity.
- Completeness 4/5
For a single-parameter tool without an output schema, the description compensates adequately by detailing the return values (risk level, factors, actions) and the analysis factors. It provides sufficient context for invocation without being verbose, though an explicit note that the invoice must already exist would strengthen it.
- Parameters 3/5
Schema coverage is 100%, with the single invoice_id parameter fully documented. The description adds no parameter-specific semantics, but with high schema coverage and one obvious parameter, none is needed. The baseline of 3 applies.
- Purpose 5/5
Uses specific verbs ('Predict', 'Analyzes', 'Returns') and clearly identifies the resource (late-payment risk for an invoice on a 0-100 scale). It distinguishes itself from siblings like invoice_remind and invoice_mark_paid by focusing on risk assessment rather than action or state management.
- Usage Guidelines 3/5
While the purpose is clear, the description lacks explicit guidance on when to use this tool versus alternatives like invoice_remind or cashflow_report. It implies use for pre-action risk assessment but never states 'use this before sending reminders' or specifies prerequisites.
GitHub Badge
Glama performs regular codebase and documentation scans to:
- Confirm that the MCP server is working as expected.
- Confirm that there are no obvious security issues.
- Evaluate tool definition quality.
Our badge communicates server capabilities, safety, and installation instructions.
Card Badge
Copy to your README.md:
Score Badge
Copy to your README.md:
How to claim the server?
If you are the author of the server, you simply need to authenticate using GitHub.
However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.
{
"$schema": "https://glama.ai/mcp/schemas/server.json",
"maintainers": [
"your-github-username"
]
}
Then, authenticate using GitHub.
Browse examples.
How to make a release?
A "release" on Glama is not the same as a GitHub release. To create a Glama release:
- Claim the server if you haven't already.
- Go to the Dockerfile admin page, configure the build spec, and click Deploy.
- Once the build test succeeds, click Make Release, enter a version, and publish.
This process allows Glama to run security checks on your server and enables users to deploy it.
How to add a LICENSE?
Please follow the instructions in the GitHub documentation.
Once GitHub recognizes the license, the system will automatically detect it within a few hours.
If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.
How to sync the server with GitHub?
Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.
To manually sync the server, click the "Sync Server" button in the MCP server admin interface.
How is the quality score calculated?
The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).
Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the score down.
Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).
Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
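To make the arithmetic concrete, here is a minimal sketch of the weighting described above. The dimension scores are taken from the first tool breakdown in this report (Purpose 4, Usage Guidelines 3, Behavior 2, Parameters 1, Conciseness 5, Completeness 2); the function names are illustrative, not part of the Glama API.

```python
# Percentage weights from the formula above: Purpose 25%, Usage Guidelines 20%,
# Behavioral Transparency 20%, Parameter Semantics 15%, Conciseness 10%,
# Contextual Completeness 10%.
WEIGHTS = {
    "purpose": 25, "usage_guidelines": 20, "behavior": 20,
    "parameters": 15, "conciseness": 10, "completeness": 10,
}

def tdqs(scores):
    """Weighted 1-5 tool definition quality score for a single tool."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS) / 100

def server_definition_quality(tool_scores):
    """Server-level roll-up: 60% mean TDQS + 40% minimum TDQS."""
    return 0.6 * (sum(tool_scores) / len(tool_scores)) + 0.4 * min(tool_scores)

# Dimension scores from the first (lowest-scoring) tool breakdown above.
lowest = tdqs({"purpose": 4, "usage_guidelines": 3, "behavior": 2,
               "parameters": 1, "conciseness": 5, "completeness": 2})
print(lowest)  # 2.85 -- the "Lowest: 2.9/5" figure, before rounding
```

Because the server-level score blends the mean with the minimum, the one weakly documented tool drags the whole surface down more than a simple average would.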
MCP directory API
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/enzoemir1/invoiceflow-mcp'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.