
AdmitBase Admissions

Server Details

Admissions data & match scores for law, medical, dental, MBA, pharmacy, vet, optometry schools

Status: Healthy
Last Tested:
Transport: Streamable HTTP
URL:

Tool Descriptions: A

Average 4/5 across 5 of 5 tools scored.

Server Coherence: A
Disambiguation: 5/5

Each tool targets a distinct action: search schools, get stats, calculate match, save stats, compare applicants. No overlap in functionality.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern with snake_case (e.g., calculate_match_score, search_schools). No deviations.
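The snake_case claim above can be checked mechanically. A minimal sketch (the tool names are taken from this page; the regex is an assumed encoding of the convention):

```python
import re

# Tool names as listed on this page.
tools = [
    "calculate_match_score",
    "compare_to_applicants",
    "get_school_stats",
    "save_my_stats",
    "search_schools",
]

# Assumed convention: lowercase snake_case with at least two words,
# where the first word reads as the verb.
snake_verb_noun = re.compile(r"^[a-z]+(?:_[a-z]+)+$")

consistent = all(snake_verb_noun.fullmatch(name) for name in tools)
print(consistent)  # every listed name matches the pattern
```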

Tool Count: 5/5

Five tools is well-scoped for an admissions assistant: covers search, stats, match calculation, saving, and comparison. Not excessive or insufficient.

Completeness: 4/5

Covers the core workflows: search, stats, match, save, compare. A tool to retrieve saved stats is missing, but compare_to_applicants implicitly uses them, so the gap is minor.

Available Tools

5 tools
calculate_match_score: Calculate match score. Grade: A

Given a user's GPA and test score, calculate match category (Safety/Target/Reach/Far Reach), admission probability, and percentiles for one school or a ranked list. Public — no authentication required.

Parameters (JSON Schema)

Name | Required | Description
gpa | Yes | Cumulative GPA on a 4.0 scale
limit | No | Number of top schools to return when no school_slug is given
program | Yes | Program type
test_score | Yes | Test score: LSAT 120-180, MCAT 472-528, DAT 1-30, OAT 200-400, GMAT/GRE for MBA, PCAT for pharmacy
school_slug | No | If provided, calculate for this school only; otherwise returns a ranked list
work_experience_years | No | Years of full-time work experience (MBA programs only)
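Given the parameter table above, a tools/call request for this tool might look like the following sketch. The JSON-RPC envelope is the standard MCP shape; all argument values are illustrative, not taken from the listing:

```python
import json

# Hypothetical MCP tools/call payload for calculate_match_score.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "calculate_match_score",
        "arguments": {
            "program": "law",   # required: program type
            "gpa": 3.7,         # required: cumulative GPA on a 4.0 scale
            "test_score": 168,  # LSAT scores run 120-180
            "limit": 5,         # optional: top-N list when no school_slug
        },
    },
}

print(json.dumps(payload, indent=2))
```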
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool is public and requires no authentication, and outlines the core behavior (calculating match category, probability, percentiles). It does not cover error handling or rate limits, but given the lack of annotations, it provides adequate transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, no fluff. The first sentence captures the core purpose and outputs, the second adds access context. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema is provided, so the description should explain the return format more explicitly. While it mentions the concepts (match category, probability, percentiles), it lacks details on structure or edge cases. Adequate but not comprehensive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the input schema already explains each parameter. The description merely summarizes the inputs without adding new meaning beyond what the schema provides. A baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'calculate', specifies inputs (GPA, test score), outputs (match category, probability, percentiles), and scope (one school or ranked list). It distinguishes from siblings like compare_to_applicants and get_school_stats.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates public access with no authentication, but does not explicitly state when to use this tool versus alternatives like compare_to_applicants or search_schools. Usage context is implied but not contrasted.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

compare_to_applicants: Compare against community outcomes (requires Bearer auth). Grade: A

Compare a user's saved stats against anonymous community outcome reports (accepted/waitlisted/rejected). Requires Authorization: Bearer amb_... header.

Parameters (JSON Schema)

Name | Required | Description
program | Yes | Program to compare against
school_slug | No | If provided, compare against applicants at this specific school
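Since this tool requires a Bearer token and has one optional parameter, a client-side sketch might build the auth header and drop unset optionals rather than sending nulls. The token value is a placeholder; only the amb_ prefix comes from the description above:

```python
# Placeholder token: real AdmitBase keys start with "amb_".
token = "amb_XXXXXXXX"
headers = {"Authorization": f"Bearer {token}"}

# Only `program` is required; omit optionals that were never set.
raw_args = {"program": "medical", "school_slug": None}
arguments = {k: v for k, v in raw_args.items() if v is not None}

print(headers["Authorization"].startswith("Bearer amb_"), arguments)
```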
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses required authentication (Bearer header), which is a key behavioral trait. However, it does not indicate whether the tool is read-only, destructive, or rate-limited, and it omits what happens if the user has no saved stats.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences: first states purpose, second states auth requirement. No wasted words, front-loaded with key information. Ideal conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

There is no output schema, and the description does not explain the return value's structure or format. It also does not mention that this tool relies on prior use of save_my_stats to have saved stats to compare. Missing this crucial context about prerequisites and output makes it incomplete for effective use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, with both parameters described. The description does not add any extra meaning beyond the schema; it only restates the purpose. A baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the tool compares a user's saved stats against anonymous community outcome reports (accepted/waitlisted/rejected). It uses specific verbs ('Compare') and resource ('community outcome reports'), and the title includes 'requires Bearer auth' for clarity. Distinguishes from siblings by focusing on comparison rather than calculation (calculate_match_score) or stats retrieval (get_school_stats).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Description implies when to use (when user wants to compare stats against community outcomes) but does not explicitly state when not to use or mention alternatives. It lacks differentiation from sibling calculate_match_score, which might also involve comparison. No guidance on prerequisites like having saved stats.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_school_stats: Get detailed school stats. Grade: A

Get full admissions statistics for a specific school: GPA/test percentiles, acceptance rate, class size, tuition, employment outcomes. Public — no authentication required.

Parameters (JSON Schema)

Name | Required | Description
school_name | No | School name to search for (partial match). Used when the slug is unknown.
school_slug | No | School slug (e.g. "harvard-law-school"). Use search_schools to find slugs.
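Because either parameter can identify the school, a hypothetical helper could encode the documented preference for the exact slug over a partial-name match:

```python
def school_stats_args(school_slug=None, school_name=None):
    """Build arguments for get_school_stats, preferring the exact slug.

    Hypothetical helper: the slug (e.g. "harvard-law-school") is exact,
    while school_name does a partial match when the slug is unknown.
    """
    if school_slug:
        return {"school_slug": school_slug}
    if school_name:
        return {"school_name": school_name}
    raise ValueError("pass school_slug or school_name (use search_schools to find slugs)")

print(school_stats_args(school_slug="harvard-law-school"))
```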
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the burden. It mentions the tool is public but does not disclose idempotency, rate limits, or error handling behavior. For a read-only tool, this is adequate but not exhaustive.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with no extraneous text. Essential information is front-loaded, and every sentence contributes meaning.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description lists the types of data returned (GPA, test percentiles, etc.), which provides sufficient context. It could be slightly improved by specifying whether the output is a single object or an array.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with parameter descriptions. The description adds value by clarifying that 'school_name' is for partial match and that 'school_slug' can be obtained via 'search_schools', going beyond the schema alone.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves full admissions statistics for a specific school, listing specific data points. It distinguishes itself from sibling tools like 'search_schools' which is for finding slugs.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes 'Public — no authentication required' and suggests using 'search_schools' to find slugs, providing workflow context. However, it does not explicitly state when to use this tool versus alternatives like 'calculate_match_score'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

save_my_stats: Save my application stats (requires Bearer auth). Grade: A

Save a user's GPA and test score per program under their AdmitBase API key. Requires Authorization: Bearer amb_... header (generate one at https://admitbase.com/profile#api-keys).

Parameters (JSON Schema)

Name | Required | Description
gpa | Yes | Cumulative GPA on a 4.0 scale
notes | No | Any notes about your application profile
program | Yes | Program you are applying to
test_score | Yes | Your test score (LSAT, MCAT, DAT, OAT, GMAT/GRE, PCAT)
state_province | No | Your state or province (2-letter code, e.g. "CA", "NY")
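A client could sanity-check values against the documented constraints before calling the tool. This is a hypothetical validator: the GPA scale and LSAT range come from the parameter tables on this page; the rest is assumption:

```python
def validate_stats(gpa, test_score, program, state_province=None):
    """Client-side checks before calling save_my_stats (assumed ranges)."""
    errors = []
    if not 0.0 <= gpa <= 4.0:
        errors.append("gpa must be on a 4.0 scale")
    if program == "law" and not 120 <= test_score <= 180:
        errors.append("LSAT must be 120-180")
    if state_province is not None and not (
        len(state_province) == 2
        and state_province.isalpha()
        and state_province.isupper()
    ):
        errors.append('state_province must be a 2-letter code like "CA"')
    return errors

print(validate_stats(3.7, 168, "law", "CA"))  # []
```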
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It discloses the auth requirement and that it saves data, but does not explain behavior on duplicates, response format, or any side effects. Adequate but not thorough.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first states purpose, second states auth requirement. No unnecessary words, front-loaded with key info.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema and no annotations, the description covers core functionality and auth. Minor missing details like idempotency or whether it creates/updates, but sufficient for a simple save tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so all parameters have descriptions. The tool description adds no semantic value beyond summarizing the GPA and test score. A baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states 'Save a user's GPA and test score per program', specifying the verb (save), resource (stats), and scope. It distinguishes from siblings like calculate_match_score and search_schools which serve different purposes.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies when to use (when saving stats) but does not provide explicit when-not or alternatives among siblings. Since no sibling does saving, it's clear, but lacks guidance on conditions like overwriting.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_schools: Search professional schools. Grade: A

Search professional schools (law, medical, dental, MBA, pharmacy, veterinary, optometry) by program type, name, or ranking range. Returns admissions stats and AdmitBase links. Public — no authentication required.

Parameters (JSON Schema)

Name | Required | Description
limit | No | Maximum number of results (default 20, max 50)
query | No | School name search (partial match)
program | Yes | Type of professional school program
max_ranking | No | Only return schools ranked below this number
min_ranking | No | Only return schools ranked at or above this number
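The limit parameter documents both a default and a cap; a hypothetical helper can make that behavior explicit on the client side (the lower bound of 1 is an assumption):

```python
def effective_limit(limit=None):
    """Apply the documented default (20) and cap (50) for search_schools."""
    if limit is None:
        return 20
    return max(1, min(limit, 50))  # clamp into an assumed valid range

print(effective_limit(), effective_limit(200))  # 20 50
```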
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses that the tool is public and returns admissions stats and links. It does not cover rate limits or data freshness, but the core behavioral trait (no auth) is stated.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences, each serving a distinct purpose: the first defines the action and parameters, the second states output and access. No redundant information, front-loaded with key details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With no output schema and no annotations, the description adequately covers purpose, parameters, and return type. It could mention pagination or sorting behavior, but the limit parameter implies pagination. Sufficient for a straightforward search tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

All 5 parameters are described in the input schema (100% coverage). The description adds value by mapping the search criteria to the parameters and mentioning return results, which is not in the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it searches professional schools by specific criteria (program type, name, ranking range) and lists the program types. It distinguishes itself from sibling tools like calculate_match_score, which are clearly different operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description indicates it's public and requires no authentication, giving context for when to use it. However, it does not explicitly state when not to use it or provide alternatives to guide against similar tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
