Law.AI — Lawyer Search
Server Details
Verified lawyer and attorney search, discovery, and matching for AI — 991K+ US profiles.
| Field | Value |
|---|---|
| Status | Healthy |
| Last Tested | |
| Transport | Streamable HTTP |
| URL | |
| Repository | risk-ai/lawai-mcp-server |
| GitHub Stars | 0 |
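As a reference for the Streamable HTTP transport noted above, here is a minimal client sketch assuming the official TypeScript MCP SDK (@modelcontextprotocol/sdk). The endpoint URL, client name, and version are placeholders, since the listing does not show the actual server URL; the snippet also assumes an ESM module with top-level await.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: the listing above does not show the actual server URL.
const transport = new StreamableHTTPClientTransport(
  new URL("https://example.com/mcp"),
);

// Client name and version are illustrative placeholders.
const client = new Client({ name: "lawai-example-client", version: "0.1.0" });
await client.connect(transport);

// Enumerate the five tools documented below.
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name));
```

The connected `client` from this sketch is reused in the per-tool examples below.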
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.7/5 across 5 of 5 tools scored.
Each tool has a distinct purpose: name lookup, jurisdiction listing, profile retrieval by ID, practice area listing, and criteria-based search. No overlaps.
All tool names follow a consistent verb_noun pattern in snake_case (find_lawyer_by_name, get_jurisdictions, etc.) with verbs 'find', 'get', and 'search' appropriately chosen.
5 tools is well-scoped for a lawyer search API, covering lookup, search, and supporting enumerations without unnecessary extras.
Covers the core workflows: name verification, criteria search, profile retrieval, and dropdown data. A direct search by bar number is missing, but the overall surface is solid.
Available Tools (5)
find_lawyer_by_name (Grade: B)
Look up a lawyer by name to verify they exist and are licensed.
| Name | Required | Description | Default |
|---|---|---|---|
| name | Yes | Lawyer name to search | |
| state | No | US state to narrow results | |
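A hedged example call, assuming the connected `client` from the connection sketch above; the name and state values are illustrative only.

```typescript
// Verify that a lawyer exists and is licensed; name and state are illustrative.
const match = await client.callTool({
  name: "find_lawyer_by_name",
  arguments: { name: "Jane Doe", state: "California" },
});
console.log(match.content);
```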
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description must disclose all behavioral traits. It only mentions verification of existence and license, but omits details like matching precision, handling of partial names, jurisdiction checks, or what constitutes 'licensed'. Critical behaviors are undisclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that conveys the essential purpose without extraneous words. Every part earns its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The description is adequate for a simple lookup tool but lacks completeness given the absence of output schema or annotations. The agent receives no information about return format, multiple matches, or error handling, which could lead to misinvocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema descriptions already cover both 'name' and 'state' with basic descriptions (100% coverage). The tool description adds minimal new information beyond the schema, so a baseline of 3 is appropriate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Look up' and the resource 'lawyer by name', with a specific purpose 'to verify they exist and are licensed'. It distinctly differs from sibling tools like 'search_lawyers' (broader) and 'get_lawyer_profile' (more detailed).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus siblings or alternatives. The description lacks scenarios or exclusions, leaving the agent to infer usage from context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_jurisdictions (Grade: A)
List US states with lawyer counts.
No parameters.
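A minimal example, again assuming the connected `client` from the sketch above; get_practice_areas is called the same way, since both enumeration tools take no arguments.

```typescript
// List US states with lawyer counts; the tool takes no arguments.
const jurisdictions = await client.callTool({
  name: "get_jurisdictions",
  arguments: {},
});
console.log(jurisdictions.content);
```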
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the burden. It discloses a read operation ('List') but does not mention performance, error handling, or data currency. For a simple list, this is adequate but minimally transparent.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single concise sentence that conveys all necessary information without any extraneous words. It is front-loaded and efficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple listing tool with no parameters and no output schema, the description is sufficiently complete. It could optionally mention scope (e.g., includes DC) but is not deficient.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters, so per guidelines the baseline is 4. The description correctly adds no parameter information, as none is needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description uses a specific verb 'List' and clearly identifies the resource 'US states with lawyer counts'. It is unambiguous and distinct from sibling tools like find_lawyer_by_name and get_practice_areas.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives, but the context (sibling tools) makes it apparent that this is the only jurisdiction-listing tool. A simple use case requires no further clarification.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_lawyer_profile (Grade: A)
Get full profile for a specific lawyer by their Law.AI ID.
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Lawyer ID from Law.AI | |
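A sketch assuming the connected `client` from above; the ID is a placeholder, since real Law.AI IDs would come from find_lawyer_by_name or search_lawyers results.

```typescript
// Fetch a full profile; the ID is a placeholder from an earlier lookup or search.
const profile = await client.callTool({
  name: "get_lawyer_profile",
  arguments: { id: "LAWYER_ID_FROM_SEARCH" },
});
console.log(profile.content);
```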
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must fully disclose behavioral traits. However, it only says 'full profile' without specifying what data is returned, whether it is read-only, or any potential side effects.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that conveys the essential information without any superfluous words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (one required parameter, no nested objects, no output schema), the description is mostly complete. However, it could clarify what elements are included in the 'full profile'.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The sole parameter 'id' is described in the schema as 'Lawyer ID from Law.AI'. The description adds minimal value beyond the schema, which already has 100% coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly specifies the action ('get'), the resource ('full profile'), and the required input ('Law.AI ID'). It distinguishes from sibling tools like find_lawyer_by_name and search_lawyers, which do not require a specific ID.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you have a lawyer's Law.AI ID. It does not explicitly state when not to use it or list alternatives, but the context of sibling tools provides differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_practice_areas (Grade: A)
List all practice areas with lawyer counts.
No parameters.
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description fully discloses the read-only nature and the output format (practice areas with lawyer counts), which is sufficient for this simple tool; no contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, front-loaded sentence with no unnecessary information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no parameters and a simple output, the description completely covers functionality and expected return data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema has zero parameters, so the description correctly adds no extra parameter information; baseline score of 4 applies for 0-parameter tools.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'list' and the resource 'practice areas' with the specific included data 'lawyer counts', distinguishing it from sibling tools like search_lawyers or get_jurisdictions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit when-to-use or when-not-to-use guidance is provided, but the purpose is implied by the tool name and description; it is clear that this tool is for retrieving practice areas.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
search_lawyers (Grade: B)
Search verified lawyer profiles by practice area, state, city, and bar status.
| Name | Required | Description | Default |
|---|---|---|---|
| city | No | City name | |
| limit | No | Number of results to return (max 50) | 10 |
| state | No | US state (name or abbreviation) | |
| offset | No | Pagination offset | |
| bar_status | No | "Active", "Inactive", or "Any" | "Active" |
| practice_area | No | Practice area, e.g. "Criminal Defense" | |
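A sketch of a filtered, paginated search, assuming the connected `client` from above; the filter values are illustrative, and the offset simply follows the documented default limit of 10.

```typescript
// Second page of active Criminal Defense lawyers in Austin, TX (illustrative filters).
// With the default page size of 10, offset: 10 skips the first page of results.
const page2 = await client.callTool({
  name: "search_lawyers",
  arguments: {
    practice_area: "Criminal Defense",
    state: "TX",
    city: "Austin",
    bar_status: "Active",
    limit: 10,
    offset: 10,
  },
});
console.log(page2.content);
```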
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description must disclose behavioral traits. It only states 'Search', implying a read operation, but does not mention pagination behavior, default limit (10), result ordering, or any side effects. Minimal transparency beyond the schema fields.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no unnecessary words. It is front-loaded with the action and resource, followed by the key filters. Efficient and to the point.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The tool has 6 parameters and no output schema, but the description does not explain the return format, pagination mechanics, or any defaults. For a search tool, details about result structure and pagination are important for correct usage, and they are missing.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description lists the same filter criteria present in the schema, adding no additional meaning or examples. It does not help beyond what the schema already provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Search'), the resource ('verified lawyer profiles'), and the key filter criteria ('by practice area, state, city, and bar status'). However, it does not explicitly differentiate from the sibling 'find_lawyer_by_name', though the purpose is distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for searching lawyers by the listed criteria, but provides no explicit guidance on when to use this tool versus alternatives like 'find_lawyer_by_name' or 'get_lawyer_profile'. No exclusions or context are given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:

{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.