AdmitBase Admissions
Server Details
Admissions data and match scores for law, medical, dental, MBA, pharmacy, veterinary, and optometry schools
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average score of 4/5 across all 5 tools.
Each tool targets a distinct action: search schools, get stats, calculate match, save stats, compare applicants. No overlap in functionality.
All tool names follow a consistent verb_noun pattern with snake_case (e.g., calculate_match_score, search_schools). No deviations.
Five tools is a well-scoped set for an admissions assistant: it covers search, stats, match calculation, saving, and comparison. Neither excessive nor insufficient.
Covers the core workflows: search, stats, match, save, compare. A tool to retrieve saved stats is missing, but compare_to_applicants implicitly uses them, so the gap is minor.
Available Tools
5 tools

calculate_match_score: Calculate match score
Given a user's GPA and test score, calculate match category (Safety/Target/Reach/Far Reach), admission probability, and percentiles for one school or a ranked list. Public — no authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| gpa | Yes | Cumulative GPA on a 4.0 scale | |
| limit | No | Number of top schools to return when no school_slug given | |
| program | Yes | Program type | |
| test_score | Yes | Test score: LSAT 120-180, MCAT 472-528, DAT 1-30, OAT 200-400, GMAT/GRE for MBA, PCAT for pharmacy | |
| school_slug | No | If provided, calculate for this school only. Otherwise returns a ranked list. | |
| work_experience_years | No | Years of full-time work experience (MBA programs only) | |
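For illustration, a minimal sketch of a `tools/call` request for this tool over Streamable HTTP, assuming the standard MCP JSON-RPC envelope. The server URL, the "law" program value, and all argument values are placeholders rather than values taken from this listing.

```typescript
// Minimal sketch, assuming the standard MCP JSON-RPC "tools/call" envelope.
// SERVER_URL is a placeholder; the listing's URL field is not shown here.
const SERVER_URL = "https://example.com/mcp";

const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "calculate_match_score",
    arguments: {
      gpa: 3.7,        // cumulative GPA on a 4.0 scale
      test_score: 168, // LSAT, valid range 120-180
      program: "law",  // assumed enum value; consult the input schema for accepted strings
      limit: 10,       // ranked list of 10 schools, since no school_slug is given
    },
  },
};

// Public tool, so no Authorization header is required.
const res = await fetch(SERVER_URL, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Accept: "application/json, text/event-stream", // Streamable HTTP may answer with JSON or SSE
  },
  body: JSON.stringify(request),
});
console.log(await res.text()); // plain JSON or an SSE stream, depending on the server
```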
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool is public and requires no authentication, and outlines the core behavior (calculating match category, probability, percentiles). It does not cover error handling or rate limits, but given the lack of annotations, it provides adequate transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, no fluff. The first sentence captures the core purpose and outputs, the second adds access context. Every word earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema is provided, so the description should explain the return format more explicitly. While it mentions the concepts (match category, probability, percentiles), it lacks details on structure or edge cases. Adequate but not comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the input schema already explains each parameter. The description merely summarizes the inputs without adding meaning beyond what the schema provides. A baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'calculate', specifies inputs (GPA, test score), outputs (match category, probability, percentiles), and scope (one school or ranked list). It distinguishes from siblings like compare_to_applicants and get_school_stats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates public access with no authentication, but does not explicitly state when to use this tool versus alternatives like compare_to_applicants or search_schools. Usage context is implied but not contrasted.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
compare_to_applicants: Compare against community outcomes (requires Bearer auth)
Compare a user's saved stats against anonymous community outcome reports (accepted/waitlisted/rejected). Requires Authorization: Bearer amb_... header.
| Name | Required | Description | Default |
|---|---|---|---|
| program | Yes | Program to compare against | |
| school_slug | No | If provided, compare against applicants at this specific school | |
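A hedged sketch of an authenticated call, under the same envelope assumption; the API key, server URL, and argument values are placeholders.

```typescript
// Minimal sketch of an authenticated tools/call; key and URL are placeholders.
const SERVER_URL = "https://example.com/mcp";
const API_KEY = "amb_..."; // generate at https://admitbase.com/profile#api-keys

const request = {
  jsonrpc: "2.0",
  id: 2,
  method: "tools/call",
  params: {
    name: "compare_to_applicants",
    arguments: {
      program: "law",                    // assumed enum value
      school_slug: "harvard-law-school", // optional; omit to compare without a school filter
    },
  },
};

const res = await fetch(SERVER_URL, {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Accept: "application/json, text/event-stream",
    Authorization: `Bearer ${API_KEY}`, // required: the tool reads stats saved under this key
  },
  body: JSON.stringify(request),
});
console.log(await res.text());
```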
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the required authentication (Bearer header), which is a key behavioral trait, but it does not indicate whether the tool is read-only, destructive, or rate-limited, and it omits what happens if the user has no saved stats.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences: the first states the purpose, the second the auth requirement. No wasted words, and the key information is front-loaded. Ideal conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
There is no output schema, and the description does not explain the return value's structure or format. It also does not mention that the tool relies on stats previously saved via save_my_stats. The missing context about prerequisites and output makes it incomplete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100%, with both parameters described. The description does not add any meaning beyond the schema; it only restates the purpose. A baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states that the tool compares a user's saved stats against anonymous community outcome reports (accepted/waitlisted/rejected). It uses a specific verb ('Compare') and resource ('community outcome reports'), and the title includes 'requires Bearer auth' for clarity. It is distinguished from its siblings by focusing on comparison rather than calculation (calculate_match_score) or stats retrieval (get_school_stats).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (when the user wants to compare stats against community outcomes) but does not explicitly state when not to use it or mention alternatives. It lacks differentiation from the sibling calculate_match_score, which might also be read as a comparison, and offers no guidance on prerequisites such as having saved stats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_school_stats: Get detailed school stats
Get full admissions statistics for a specific school: GPA/test percentiles, acceptance rate, class size, tuition, employment outcomes. Public — no authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| school_name | No | School name to search for (partial match). Used when slug is unknown. | |
| school_slug | No | School slug (e.g. "harvard-law-school"). Use search_schools to find slugs. | |
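As a sketch, a lookup by slug might be wrapped in the same assumed JSON-RPC envelope as the earlier examples; the slug below is the parameter table's own example value.

```typescript
// Request body only; POST it as in the earlier sketches. No auth needed.
const request = {
  jsonrpc: "2.0",
  id: 3,
  method: "tools/call",
  params: {
    name: "get_school_stats",
    arguments: {
      // Slug taken from the parameter table's example; if unknown, pass
      // school_name for a partial match or call search_schools first.
      school_slug: "harvard-law-school",
    },
  },
};
```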
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It mentions the tool is public but does not disclose idempotency, rate limits, or error handling behavior. For a read-only tool, this is adequate but not exhaustive.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with no extraneous text. Essential information is front-loaded, and every sentence contributes meaning.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, the description lists the types of data returned (GPA, test percentiles, etc.), which provides sufficient context. It could be slightly improved by specifying whether the output is a single object or an array.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 100% with parameter descriptions. The description adds value by clarifying that 'school_name' is for partial match and that 'school_slug' can be obtained via 'search_schools', going beyond the schema alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves full admissions statistics for a specific school, listing specific data points. It distinguishes itself from sibling tools like 'search_schools' which is for finding slugs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description notes 'Public — no authentication required' and suggests using 'search_schools' to find slugs, providing workflow context. However, it does not explicitly state when to use this tool versus alternatives like 'calculate_match_score'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_my_stats: Save my application stats (requires Bearer auth)
Save a user's GPA and test score per program under their AdmitBase API key. Requires Authorization: Bearer amb_... header (generate one at https://admitbase.com/profile#api-keys).
| Name | Required | Description | Default |
|---|---|---|---|
| gpa | Yes | Cumulative GPA on a 4.0 scale | |
| notes | No | Any notes about your application profile | |
| program | Yes | Program you are applying to | |
| test_score | Yes | Your test score (LSAT, MCAT, DAT, OAT, GMAT/GRE, PCAT) | |
| state_province | No | Your state or province (2-letter code, e.g. "CA", "NY") | |
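A minimal sketch of a save request, to be POSTed with the same Bearer header shown for compare_to_applicants; every value below is illustrative.

```typescript
// Request body only; requires the Authorization: Bearer amb_... header.
const request = {
  jsonrpc: "2.0",
  id: 4,
  method: "tools/call",
  params: {
    name: "save_my_stats",
    arguments: {
      gpa: 3.7,             // cumulative GPA on a 4.0 scale
      test_score: 168,      // LSAT in this example
      program: "law",       // assumed enum value
      state_province: "CA", // optional 2-letter code
      notes: "Reapplicant with an upward GPA trend", // optional free text
    },
  },
};
```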
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses the auth requirement and that the tool saves data, but it does not explain behavior on duplicates, the response format, or any other side effects. Adequate but not thorough.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences: the first states the purpose, the second the auth requirement. No unnecessary words, and the key information is front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema and no annotations, the description covers the core functionality and auth. Minor details are missing, such as idempotency or whether the call creates or updates a record, but it is sufficient for a simple save tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so all parameters have descriptions. The tool description adds no semantic value beyond summarizing the GPA and test-score inputs. A baseline score of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Save a user's GPA and test score per program', specifying the verb (save), resource (stats), and scope. It distinguishes from siblings like calculate_match_score and search_schools which serve different purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies when to use the tool (when saving stats) but does not explicitly state when not to use it or which alternatives exist. Since no sibling saves data, the choice is clear, but there is no guidance on conditions such as overwriting existing stats.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_schools: Search professional schools
Search professional schools (law, medical, dental, MBA, pharmacy, veterinary, optometry) by program type, name, or ranking range. Returns admissions stats and AdmitBase links. Public — no authentication required.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Maximum number of results (max 50) | 20 |
| query | No | School name search (partial match) | |
| program | Yes | Type of professional school program | |
| max_ranking | No | Only return schools ranked below this number | |
| min_ranking | No | Only return schools ranked at or above this number | |
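A sketch of a filtered search under the same envelope assumption; the program value and ranking bounds are illustrative.

```typescript
// Request body only; public tool, no Authorization header needed.
const request = {
  jsonrpc: "2.0",
  id: 5,
  method: "tools/call",
  params: {
    name: "search_schools",
    arguments: {
      program: "law",  // assumed enum value
      min_ranking: 1,  // schools ranked at or above 1...
      max_ranking: 20, // ...and below 20, per the parameter descriptions
      limit: 20,       // default 20, max 50
    },
  },
};
```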
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It discloses that the tool is public and returns admissions stats and links. It does not cover rate limits or data freshness, but the core behavioral trait (no auth) is stated.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, each serving a distinct purpose: the first defines the action and parameters, the second states output and access. No redundant information, front-loaded with key details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With no output schema and no annotations, the description adequately covers purpose, parameters, and return type. It could mention pagination or sorting behavior, but the limit parameter implies pagination. Sufficient for a straightforward search tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All 5 parameters are described in the input schema (100% coverage). The description adds value by mapping the search criteria to the parameters and by stating what the results contain, which the schema does not cover.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it searches professional schools by specific criteria (program type, name, ranking range) and lists the program types. It distinguishes itself from sibling tools like calculate_match_score, which are clearly different operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description indicates the tool is public and requires no authentication, giving some context for when to use it. However, it does not explicitly state when not to use it or name alternatives that would distinguish it from similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
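Putting the authenticated tools together, here is a hedged sketch of the save-then-compare workflow the quality notes imply (compare_to_applicants reads stats saved via save_my_stats under the same API key). The callTool helper, URL, and key are hypothetical.

```typescript
// Hypothetical helper around the assumed JSON-RPC envelope; URL and key are placeholders.
const SERVER_URL = "https://example.com/mcp";
const API_KEY = "amb_...";

async function callTool(name: string, args: Record<string, unknown>, id: number): Promise<string> {
  const res = await fetch(SERVER_URL, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "application/json, text/event-stream",
      Authorization: `Bearer ${API_KEY}`,
    },
    body: JSON.stringify({ jsonrpc: "2.0", id, method: "tools/call", params: { name, arguments: args } }),
  });
  return res.text(); // plain JSON or an SSE stream, depending on the server
}

// Save stats first, then compare against community outcomes under the same key.
await callTool("save_my_stats", { gpa: 3.7, test_score: 168, program: "law" }, 1);
console.log(await callTool("compare_to_applicants", { program: "law" }, 2));
```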
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.