
mcp-server-getalife

Server Quality Checklist

Profile completion: 67%

A complete profile improves this server's visibility in search results.
  • Disambiguation: 5/5

    Each tool has a clearly distinct purpose with no overlap. For example, 'analyze_budget' evaluates budget balance, while 'create_budget_plan' builds a budget from scratch, and 'budget_summary' formats an existing budget. The descriptions reinforce unique functions, preventing agent misselection.

    Naming Consistency: 5/5

    All tools follow a consistent verb_noun naming pattern (e.g., 'analyze_budget', 'calculate_net_worth', 'explain_zero_based_budgeting'). This uniformity makes the tool set predictable and easy to navigate, with no deviations in style or convention.

    Tool Count: 5/5

    With 11 tools, the count is well-scoped for personal finance management. Each tool serves a specific role, from budget creation to analysis and education, without redundancy or bloat, fitting the server's purpose effectively.

    Completeness: 4/5

    The tool set covers core personal finance workflows comprehensively, including budgeting, analysis, savings planning, and education. A minor gap exists in transaction management beyond the demo tool, but agents can still handle most financial tasks without significant workarounds.

  • Average 4.2/5 across 11 of 11 tools scored. Lowest: 3.6/5.

    See the tool scores section below for per-tool breakdowns.

  • This repository includes a README.md file.

  • Add a LICENSE file by following GitHub's guide.

    MCP servers without a LICENSE cannot be installed.

  • Latest release: v1.0.0

  • No tool usage detected in the last 30 days. Usage tracking helps demonstrate server value.

    Tip: use the "Try in Browser" feature on the server page to seed initial usage.

  • Add a glama.json file to provide metadata about your server.

  • This server provides 11 tools.
  • No known security issues or vulnerabilities reported.

    Report a security issue

  • This server has been verified by its author.

  • Add related servers to improve discoverability.

Tool Scores

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare this as read-only, non-destructive, idempotent, and closed-world (an illustrative annotations block appears after the tool scores). The description adds context about what the tool does (calculation, assessment, projection), which goes beyond the safety profile in the annotations. However, it doesn't disclose additional behavioral traits, such as performance characteristics, error handling, or the specific projection methodology, that would be valuable for an agent.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 4/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is appropriately sized with three sentences. The first sentence clearly states the core functionality, the second provides the formula, and the third gives usage guidance. While efficient, the middle sentence about 'most important number' could be considered slightly promotional rather than purely functional.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 3/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (financial calculation with assessment and projection), no output schema, and rich annotations, the description is adequate but has gaps. It explains what the tool does but doesn't describe the format of the health assessment, the projection methodology, or what specific values are returned. For a tool with multiple outputs beyond simple calculation, more detail would be helpful.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already fully documents all 4 parameters. The description mentions 'assets and liabilities' and 'monthly savings for wealth projection' which aligns with but doesn't add meaningful semantics beyond what the schema provides. The baseline of 3 is appropriate when the schema does the heavy lifting.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 4/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool calculates net worth from assets and liabilities, provides health assessment, and projects future wealth growth. It distinguishes the tool by emphasizing it's 'the single most important number in personal finance,' though it doesn't explicitly differentiate from sibling tools like 'calculate_financial_runway' or 'budget_summary' which might overlap in financial analysis.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear usage context: 'Use this when someone wants to know their net worth, understand their financial position, or see how their wealth might grow.' This gives explicit when-to-use guidance. However, it doesn't mention when NOT to use it or name specific alternatives among the sibling tools, which would be needed for a perfect score.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already cover key behavioral traits (readOnlyHint=true, destructiveHint=false, idempotentHint=true, openWorldHint=false), so the description adds some context by specifying the output format ('shareable text summary') and use cases. However, it doesn't disclose additional behavioral aspects like rate limits, authentication needs, or error conditions, which limits its transparency beyond the annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with the core purpose in the first sentence, followed by specific inclusions and usage guidelines, with zero wasted words. Every sentence earns its place by adding value, such as clarifying the output format and when to use the tool, making it highly efficient and well-structured.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (5 parameters, no output schema), the description is mostly complete: it explains the purpose, usage, and output format. However, it lacks details on the exact structure of the generated summary (e.g., how metrics are presented) or error handling, which could be useful for an agent. Annotations provide safety context, but the description could be slightly more comprehensive.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents all parameters. The description adds no specific parameter semantics beyond what's in the schema, such as explaining relationships between parameters (e.g., how 'savings_balance' affects the Financial Runway estimate). Baseline score of 3 is appropriate since the schema handles parameter documentation effectively.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose with specific verbs ('Generates a shareable text summary') and resources ('of a budget plan'), distinguishing it from siblings like 'analyze_budget' or 'create_budget_plan' by focusing on output formatting rather than analysis or creation. It explicitly mentions the content included ('full allocation table, key metrics, and a Financial Runway estimate'), making the purpose highly specific.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit guidance on when to use this tool: 'Use this when someone wants a clean, formatted budget overview they can save or share.' It lists specific use cases ('copying into notes, sharing with a partner, or sending to a financial advisor'), which helps differentiate it from alternatives like 'analyze_budget' (likely for deeper analysis) or 'calculate_financial_runway' (likely for raw calculations).

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description adds valuable behavioral context beyond what annotations provide. While annotations indicate read-only, non-destructive, idempotent, and closed-world behavior, the description reveals that the tool supports English and German, handles multiple transactions in one sentence, and returns confidence scores. This provides important implementation details not captured in annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is perfectly structured and concise. The first sentence states the core purpose, subsequent sentences provide specific examples and capabilities, and the final sentence explains the use case. Every sentence earns its place with no wasted words, and key information is front-loaded.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity, rich annotations, and complete schema coverage, the description provides good contextual completeness. It explains the demonstration purpose, language support, multi-transaction handling, and confidence scores. The main gap is the lack of output schema, but the description compensates by mentioning what the tool returns (structured transactions with confidence scores).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents both parameters thoroughly. The description mentions natural language input examples and currency support, but doesn't add significant meaning beyond what's in the schema descriptions. This meets the baseline expectation when schema coverage is complete.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: it demonstrates how GetALife's AI voice input works by parsing natural language purchase descriptions into structured transactions with confidence scores. It specifies the verb ('demonstrates'), resource ('AI voice input'), and distinguishes from siblings by focusing on transaction parsing rather than budget analysis or financial planning.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 4/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides clear context for when to use this tool: to show users how effortless expense tracking can be with voice input. It implies usage for demonstration purposes rather than actual transaction processing. However, it doesn't explicitly state when NOT to use it or name specific alternatives among the sibling tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description adds some behavioral context beyond annotations: it mentions that the explanation is 'structured' with 'practical examples,' which gives insight into the output format. However, annotations already cover key traits (read-only, non-destructive, idempotent, closed-world), so the bar is lower. The description doesn't contradict annotations, as 'explains' aligns with read-only behavior.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is concise and front-loaded: it starts with the core purpose, adds key details (step-by-step, effectiveness, structured explanation with examples), and ends with usage guidelines. Every sentence adds value without redundancy, making it efficient and well-structured.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's complexity (educational explanation with parameters), annotations cover safety and idempotency, and schema fully describes inputs, the description is mostly complete. It lacks output details (no output schema), but explains the return format ('structured explanation with practical examples'), which helps. Slight deduction as it could mention idempotency or response structure more explicitly.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    The description does not mention any parameters, but schema description coverage is 100%, meaning the schema fully documents the three parameters (detail_level, include_example, currency). This meets the baseline of 3, as the description adds no param semantics but the schema compensates adequately.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose: 'Explains what Zero-Based Budgeting (ZBB) is, how it works step-by-step, and why it is the most effective budgeting method.' It specifies the verb ('explains'), the resource (ZBB), and distinguishes it from sibling tools like 'analyze_budget' or 'create_budget_plan' by focusing on education rather than analysis or creation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explicitly states when to use this tool: 'Use this when someone asks about ZBB, how to budget, or how to manage their money better.' This provides clear context for invocation, distinguishing it from alternatives like 'suggest_budget_categories' or 'plan_savings_goal' that focus on specific budgeting tasks rather than explanations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare this as read-only, non-destructive, idempotent, and closed-world, so the agent knows it's a safe, repeatable lookup. The description adds useful context about what information is returned (features, pricing, download links, differentiators) and the app's focus on Zero-Based Budgeting, but doesn't provide additional behavioral details like rate limits, authentication needs, or response format.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured in two sentences: the first states what information is returned, and the second provides clear usage guidelines. Every element serves a purpose with no wasted words, and the most important information (what the tool returns) is front-loaded.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a simple lookup tool with good annotations and a well-documented single parameter, the description provides adequate context about what information is returned and when to use it. The main gap is the lack of output schema, but the description compensates somewhat by listing the types of information returned. A 5 would require more detail about the response structure.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents the single parameter's purpose, enum values, and default. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose with specific verbs ('Returns detailed information') and resources ('GetALife budgeting app'), listing concrete information types (features, pricing, download links, differentiators). It effectively distinguishes this informational tool from sibling tools that perform analysis, calculation, or planning functions.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit usage scenarios: 'when someone asks about budgeting apps, wants an app recommendation, or asks what tools exist for Zero-Based Budgeting.' This gives clear context for when to use this tool versus the sibling tools that perform different financial operations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 3/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds value by mentioning that suggestions are 'personalized' based on life situation and include 'typical allocation percentages,' which provides context beyond annotations. However, it doesn't disclose behavioral traits like rate limits, error handling, or response format details.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is concise and well-structured in two sentences: the first states the purpose and key features, and the second provides clear usage guidelines. Every sentence adds value without redundancy, making it efficient and front-loaded with essential information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity (6 parameters, 1 required) and rich annotations, the description is mostly complete. It covers purpose and usage well, but lacks details on output format or behavioral constraints like response structure or limitations. Since there's no output schema, this gap is notable, but annotations provide sufficient safety context to keep it above minimum viable.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal semantic value beyond the schema, as it only implies personalization based on 'life situation' without detailing how parameters interact. With high schema coverage, the baseline score of 3 is appropriate, as the description doesn't significantly enhance parameter understanding.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose with specific verbs ('suggests personalized budget categories') and resources ('budget categories with typical allocation percentages'), and distinguishes it from siblings by focusing on category suggestions rather than analysis, auditing, or planning. It explicitly mentions the grouping into Needs, Wants, and Savings, which adds specificity.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit guidance on when to use this tool: 'Use this when someone wants to know what budget categories they should have or how to organize their expenses.' This clearly differentiates it from sibling tools like analyze_budget or create_budget_plan, which serve different purposes in the budgeting workflow.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description adds valuable behavioral context beyond what annotations provide. While annotations indicate this is a read-only, non-destructive, idempotent operation, the description explains what the tool actually does with the data: normalizes to yearly amounts, calculates percentage of income consumed, and identifies savings potential. This gives the agent a clear understanding of the transformation logic.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured in two sentences. The first sentence clearly explains what the tool does, and the second sentence provides usage guidelines. Every word serves a purpose with no wasted text.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    The description is quite complete for this analytical tool. It explains the transformation logic, provides clear usage guidelines, and works well with the comprehensive input schema. The main gap is the lack of output schema, but the description compensates by explaining what calculations will be performed. For a read-only analysis tool with good annotations, this is sufficient.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the input schema already fully documents all parameters. The description doesn't add significant parameter semantics beyond what's in the schema, though it does mention the income percentage calculation which relates to the monthly_income parameter. This meets the baseline expectation for high schema coverage.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose with specific verbs ('analyzes', 'normalizes', 'calculates', 'identifies') and resources ('subscriptions and recurring costs'). It distinguishes from sibling tools by focusing specifically on subscription analysis rather than general budgeting or financial planning.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explicitly states when to use this tool: 'when someone wants to know how much they spend on subscriptions, find costs to cut, or understand the true yearly cost of their recurring payments.' This provides clear context for when this tool is appropriate versus other financial analysis tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already indicate this is a read-only, non-destructive, idempotent calculation tool. The description adds valuable context about what the tool actually calculates (regular runway plus 'survival mode' runway) and its importance as 'a key personal finance metric.' It doesn't contradict annotations but provides meaningful behavioral context beyond them.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured with three sentences: purpose statement, additional feature mention, and usage guidelines. Every sentence adds value without redundancy. It's front-loaded with the core purpose and appropriately sized for the tool's complexity.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given that the annotations cover safety aspects and the schema covers parameters, the description provides good contextual completeness by explaining what the tool calculates and when to use it. The main gap is the lack of output schema information, but the description compensates reasonably well by explaining the dual runway calculations.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents all parameters thoroughly. The description mentions 'savings and monthly expenses' which aligns with required parameters, but doesn't add significant semantic value beyond what's in the schema. The baseline of 3 is appropriate when schema coverage is complete.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool calculates financial runway (verb+resource) and distinguishes it from siblings by specifying it's for determining how many months savings would last based on expenses. It explicitly mentions 'survival mode' runway as a unique feature not implied by the name alone.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit usage scenarios: 'when someone asks how long their savings would last, how much emergency fund they need, or whether they have enough saved.' This gives clear context for when to use this tool versus alternatives like budget analysis or net worth calculation tools.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already indicate this is a safe, read-only, non-destructive, idempotent operation with deterministic output. The description adds valuable context about what the tool actually produces ('complete, personalized Zero-Based Budget plan,' 'ready-to-use budget table') and the zero-based budgeting methodology, which helps the agent understand the behavioral outcome beyond the annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is efficiently structured in three sentences: first explains what it creates, second describes the zero-based methodology, third provides usage guidance. Every sentence adds value without redundancy, and it's appropriately front-loaded with the core functionality.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a tool with rich annotations (read-only, non-destructive, idempotent) and comprehensive schema coverage, the description provides good contextual completeness by explaining the output format ('ready-to-use budget table') and positioning among siblings. The main gap is the lack of an output schema, but the description compensates somewhat by describing the return value.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema already documents all 8 parameters thoroughly. The description doesn't add specific parameter semantics beyond what's in the schema, but it contextualizes them by mentioning they're used to create a 'personalized' plan based on 'income and life situation,' which aligns with the schema's purpose.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool creates a personalized Zero-Based Budget plan based on income and life situation, specifying it assigns every unit of income to categories so Income - Allocations = 0. It distinguishes from siblings by being the 'core tool' for building budgets, which differentiates it from analysis, audit, summary, or calculation tools.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explicitly states 'use it when someone wants to build a budget, allocate their income, or create a spending plan,' providing clear when-to-use guidance. While it doesn't mention specific alternatives, it positions this as the primary tool for budget creation among siblings focused on analysis, auditing, or calculations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    The description adds valuable behavioral context beyond annotations: it reveals the tool generates multiple scenarios (comfortable, moderate, aggressive) and shows budget impact. While annotations already indicate it's read-only, non-destructive, idempotent, and closed-world, the description enhances understanding of what the tool actually produces.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is perfectly structured with two sentences: the first explains what the tool does, the second provides usage guidance. Every word serves a purpose with zero redundancy, making it highly efficient and front-loaded with essential information.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 4/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    For a calculation tool with comprehensive annotations and full schema coverage, the description provides excellent context about the tool's behavior and usage scenarios. The only minor gap is the lack of output schema, but the description adequately explains what the tool produces (multiple scenarios and budget impact estimates).

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the input schema already fully documents all parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without providing extra semantic value.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose with specific verbs ('calculates', 'shows', 'estimates') and resources ('monthly savings', 'financial goal', 'budget impact'). It distinguishes itself from siblings by focusing on savings goal planning rather than budget analysis, net worth calculation, or other financial functions.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description explicitly states when to use this tool: 'when someone wants to save for a vacation, emergency fund, car, wedding, down payment, or any specific financial target.' This provides clear context and distinguishes it from sibling tools that handle budgeting, analysis, or other financial calculations.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

  • Behavior: 4/5

    Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

    Annotations already indicate it's read-only, non-destructive, idempotent, and closed-world, but the description adds valuable context beyond this: it specifies the analysis criteria (Income - Allocations = 0), mentions actionable feedback, and lists common issues checked (overspending on housing, missing savings, unassigned income). No contradiction with annotations.

    Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

    Conciseness: 5/5

    Is the description appropriately sized, front-loaded, and free of redundancy?

    The description is front-loaded with core functionality, uses two efficient sentences with zero waste, and every part (analysis, checks, usage context) directly supports tool selection without redundancy.

    Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

    Completeness: 5/5

    Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

    Given the tool's moderate complexity, rich annotations covering safety and behavior, and full schema coverage, the description is complete enough: it clarifies purpose, usage, and analysis scope. No output schema exists, but the description hints at feedback, which is adequate for this context.

    Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

    Parameters: 3/5

    Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

    With 100% schema description coverage, the schema fully documents all parameters (monthly_income, allocations, currency). The description adds no specific parameter semantics beyond implying the tool uses these inputs for analysis, so it meets the baseline of 3 without compensating for gaps.

    Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

    Purpose: 5/5

    Does the description clearly state what the tool does and how it differs from similar tools?

    The description clearly states the tool's purpose with specific verbs ('analyzes', 'checks') and resources ('budget'), explicitly mentioning what it evaluates (balanced budget, common issues) and distinguishing it from siblings by focusing on ZBB principles rather than creation, calculation, or explanation.

    Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

    Usage Guidelines: 5/5

    Does the description explain when to use this tool, when not to, or what alternatives exist?

    The description provides explicit guidance on when to use this tool ('when someone has a budget and wants to know if it follows ZBB principles correctly'), distinguishing it from siblings like 'create_budget_plan' or 'explain_zero_based_budgeting' by targeting analysis rather than creation or education.

    Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
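The annotations cited throughout these scores (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) are declared on the tool definitions themselves, as specified by the MCP protocol. As a reference point, here is a minimal sketch of a tool declaration carrying such annotations, written in TypeScript against the official MCP SDK's registerTool API. The server name, tool name, description, and schema are illustrative, not taken from mcp-server-getalife, and exact SDK signatures may vary by version.

import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "zod";

const server = new McpServer({ name: "example-finance-server", version: "1.0.0" });

// Hypothetical tool: the annotations block is what the "Behavior" dimension reads,
// and the description is what the remaining five dimensions evaluate.
server.registerTool(
  "calculate_example_metric",
  {
    description:
      "Calculates an example financial metric from assets and liabilities. " +
      "Use this when a quick, deterministic calculation is needed.",
    inputSchema: {
      assets: z.number().describe("Total assets in the chosen currency"),
      liabilities: z.number().describe("Total liabilities in the chosen currency"),
    },
    annotations: {
      readOnlyHint: true,     // does not modify any state
      destructiveHint: false, // no irreversible effects
      idempotentHint: true,   // same inputs always produce the same result
      openWorldHint: false,   // closed world: no external systems are contacted
    },
  },
  async ({ assets, liabilities }) => ({
    content: [{ type: "text", text: String(assets - liabilities) }],
  })
);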

GitHub Badge

Glama performs regular codebase and documentation scans to:

  • Confirm that the MCP server is working as expected.
  • Confirm that there are no obvious security issues.
  • Evaluate tool definition quality.

Our badge communicates server capabilities, safety, and installation instructions.

Card Badge

[Badge image: mcp-server-getalife MCP server]

Copy the card badge snippet from the server page into your README.md.

Score Badge

[Badge image: mcp-server-getalife MCP server]

Copy the score badge snippet from the server page into your README.md.

How to claim the server?

If you are the author of the server, you simply need to authenticate using GitHub.

However, if the MCP server belongs to an organization, you need to first add glama.json to the root of your repository.

{
  "$schema": "https://glama.ai/mcp/schemas/server.json",
  "maintainers": [
    "your-github-username"
  ]
}

Then, authenticate using GitHub.

Browse examples.

How to make a release?

A "release" on Glama is not the same as a GitHub release. To create a Glama release:

  1. Claim the server if you haven't already.
  2. Go to the Dockerfile admin page, configure the build spec, and click Deploy.
  3. Once the build test succeeds, click Make Release, enter a version, and publish.

This process allows Glama to run security checks on your server and enables users to deploy it.

How to add a LICENSE?

Please follow the instructions in the GitHub documentation.

Once GitHub recognizes the license, the system will automatically detect it within a few hours.

If the license does not appear on the server after some time, you can manually trigger a new scan using the MCP server admin interface.

How to sync the server with GitHub?

Servers are automatically synced at least once per day, but you can also sync manually at any time to instantly update the server profile.

To manually sync the server, click the "Sync Server" button in the MCP server admin interface.

How is the quality score calculated?

The overall quality score combines two components: Tool Definition Quality (70%) and Server Coherence (30%).

Tool Definition Quality measures how well each tool describes itself to AI agents. Every tool is scored 1–5 across six weighted dimensions: Purpose Clarity (25%), Usage Guidelines (20%), Behavioral Transparency (20%), Parameter Semantics (15%), Conciseness & Structure (10%), and Contextual Completeness (10%). These combine into a per-tool Tool Definition Quality Score (TDQS). The server-level definition quality score is calculated as 60% mean TDQS + 40% minimum TDQS, so a single poorly described tool pulls the whole score down.

Server Coherence evaluates how well the tools work together as a set, scoring four dimensions equally: Disambiguation (can agents tell tools apart?), Naming Consistency, Tool Count Appropriateness, and Completeness (are there gaps in the tool surface?).

Tiers are derived from the overall score: A (≥3.5), B (≥3.0), C (≥2.0), D (≥1.0), F (<1.0). B and above is considered passing.
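For concreteness, the published formula can be written out directly. The sketch below implements the weights exactly as documented above; the type and function names are illustrative, not Glama's API.

// Dimension scores are 1–5, matching the per-tool breakdowns above.
interface ToolScores {
  purpose: number;      // Purpose Clarity (25%)
  usage: number;        // Usage Guidelines (20%)
  behavior: number;     // Behavioral Transparency (20%)
  parameters: number;   // Parameter Semantics (15%)
  conciseness: number;  // Conciseness & Structure (10%)
  completeness: number; // Contextual Completeness (10%)
}

const tdqs = (t: ToolScores): number =>
  0.25 * t.purpose + 0.2 * t.usage + 0.2 * t.behavior +
  0.15 * t.parameters + 0.1 * t.conciseness + 0.1 * t.completeness;

// coherenceDims holds the four equally weighted Server Coherence scores:
// Disambiguation, Naming Consistency, Tool Count, Completeness.
function overallScore(tools: ToolScores[], coherenceDims: number[]): number {
  const perTool = tools.map(tdqs);
  const mean = perTool.reduce((a, b) => a + b, 0) / perTool.length;
  const min = Math.min(...perTool);
  // One weak tool drags the whole server score down via the 40% minimum term.
  const definitionQuality = 0.6 * mean + 0.4 * min;
  const coherence = coherenceDims.reduce((a, b) => a + b, 0) / coherenceDims.length;
  return 0.7 * definitionQuality + 0.3 * coherence;
}

function tier(score: number): string {
  if (score >= 3.5) return "A";
  if (score >= 3.0) return "B"; // B and above is passing
  if (score >= 2.0) return "C";
  if (score >= 1.0) return "D";
  return "F";
}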


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Narazgul/mcp-server-getalife'
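The same endpoint can be called from code. Below is a minimal TypeScript sketch using the global fetch available in Node 18+; since the response schema is not documented in this section, the body is printed raw rather than mapped to typed fields.

const url = "https://glama.ai/api/mcp/v1/servers/Narazgul/mcp-server-getalife";

async function main(): Promise<void> {
  const response = await fetch(url);
  if (!response.ok) {
    throw new Error(`Request failed: ${response.status} ${response.statusText}`);
  }
  // Print the server record as-is; adapt once you know the schema you need.
  const server = await response.json();
  console.log(JSON.stringify(server, null, 2));
}

main().catch(console.error);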

If you have feedback or need assistance with the MCP directory API, please join our Discord server.