Lattis
Server Details
Search indexed websites, read raw page markdown, and score AI visibility for any site.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 4.1/5 across all 4 scored tools.
Each tool has a clearly distinct purpose with no overlap: get_page retrieves markdown content, get_site provides metadata and AI Visibility Score, list_sites enumerates all indexed sites, and search performs semantic search across content. The descriptions explicitly differentiate their functions and use cases.
All tools follow a consistent verb_noun pattern with the 'lattis_' prefix: get_page, get_site, list_sites, and search. The naming is uniform, predictable, and aligns with their actions, making the set easy to navigate and understand.
With 4 tools, the server is well-scoped for its purpose of indexing and querying websites. Each tool serves a distinct role in the workflow (discovery, metadata retrieval, content access, and search), and none feel redundant or missing for the domain.
The tool set provides complete coverage for the domain of website indexing and analysis: list_sites for discovery, get_site for metadata, search for content retrieval, and get_page for detailed content access. There are no obvious gaps, and the tools support a full workflow from exploration to in-depth reading.
Available Tools
4 tools

lattis_get_page
Get the full markdown of a specific page from an indexed website. Use this to zoom in after lattis_search points you at a URL.
| Name | Required | Description | Default |
|---|---|---|---|
| path | Yes | Page path starting with /, e.g. '/pricing' or '/docs/api' | |
| site | Yes | Domain ('posthog.com') or slug | |
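As a concrete illustration, a `tools/call` request for this tool might be assembled as in the Python sketch below. The JSON-RPC envelope follows the Model Context Protocol; the argument values ('posthog.com', '/pricing') are examples taken from the schema descriptions, not live data.

```python
import json

# Build a JSON-RPC 2.0 "tools/call" request for lattis_get_page, as
# defined by the Model Context Protocol. Send it to the connector URL
# from your Glama gateway (not shown here).
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lattis_get_page",
        "arguments": {
            "site": "posthog.com",  # domain or slug (required)
            "path": "/pricing",     # page path, must start with / (required)
        },
    },
}

body = json.dumps(request)
print(body)
```

The same envelope shape works for every tool on this server; only `params.name` and `params.arguments` change.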
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It mentions retrieving 'full markdown' which implies read-only behavior and output format, but lacks details on error handling, rate limits, or authentication needs. It adds some context but doesn't fully compensate for the absence of annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with zero waste: the first states the purpose, and the second provides usage guidance. It is front-loaded with essential information and appropriately sized, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (2 parameters, no output schema), the description is mostly complete. It covers purpose and usage well, but without annotations or output schema, it lacks details on behavioral traits like error responses or markdown structure, leaving minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description does not add any meaning beyond what the schema provides, such as examples or usage nuances, meeting the baseline for high coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Get the full markdown') and resource ('a specific page from an indexed website'), distinguishing it from siblings by mentioning its role after `lattis_search` points to a URL. It uses precise language that directly communicates the tool's function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly provides when to use this tool ('to zoom in after `lattis_search` points you at a URL'), offering clear context and distinguishing it from alternatives like `lattis_search` for initial discovery. This gives the agent direct guidance on sequencing and tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lattis_get_site
Get metadata and the AI Visibility Score for a single indexed site — name, description, category, subcategories, page counts, and which evaluation questions the site answers vs misses.
| Name | Required | Description | Default |
|---|---|---|---|
| site | Yes | Domain ('posthog.com') or slug ('posthog-com') | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations, the description carries full burden. It discloses the tool's read-only nature by stating 'Get' and lists return fields, but lacks details on permissions, rate limits, or error handling. It adds some context (evaluation questions answered vs missed) but is incomplete for behavioral traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, dense sentence that efficiently conveys purpose, scope, and return data without redundancy. It is front-loaded with key information and has zero wasted words, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no annotations and no output schema, the description partially compensates by listing return fields and behavioral context. However, for a tool with one parameter and complex output (e.g., AI Visibility Score, evaluation questions), it lacks details on output format, error cases, or prerequisites, leaving gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'site' parameter fully. The description does not add any parameter-specific details beyond what the schema provides, such as format examples or constraints, meeting the baseline for high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get metadata and the AI Visibility Score') and the resource ('a single indexed site'), listing specific data fields returned (name, description, category, etc.). It distinguishes from siblings by specifying 'single indexed site' versus list/search operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving detailed metadata and scores for one site, contrasting with 'lattis_list_sites' for multiple sites and 'lattis_search' for broader queries. However, it lacks explicit when-not-to-use guidance or named alternatives, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lattis_list_sites
List every website indexed by Lattis. Returns slug, domain, category, subcategories, AI Visibility Score, and page count for each site. Use this to discover what is available before searching.
| Name | Required | Description | Default |
|---|---|---|---|
| limit | No | Max sites to return | 200 |
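A typical follow-up is to filter the listing client-side, for example by AI Visibility Score. The sketch below assumes a result shape based only on the fields the description names (slug, domain, AI Visibility Score, page count); the real payload and field names may differ.

```python
# Hypothetical post-processing of a lattis_list_sites response. The
# field names below are assumptions inferred from the tool description,
# not a documented response schema.
sample_sites = [
    {"slug": "posthog-com", "domain": "posthog.com", "visibility_score": 87, "pages": 412},
    {"slug": "example-com", "domain": "example.com", "visibility_score": 34, "pages": 18},
]

def high_visibility(sites, threshold=50):
    """Keep only domains whose AI Visibility Score meets the threshold."""
    return [s["domain"] for s in sites if s["visibility_score"] >= threshold]

print(high_visibility(sample_sites))  # → ['posthog.com']
```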
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden. It describes the return format and the tool's purpose, but lacks details on behavioral traits like pagination, rate limits, or error handling. However, it does disclose the 'discovery' context, which adds some value beyond basic functionality.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states purpose and return data, the second provides usage guidance. It is front-loaded with key information and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and return data, but lacks output format details (e.g., structure of the returned list) and behavioral aspects like error cases, which would be helpful for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the schema already documents the 'limit' parameter fully. The description does not add any parameter-specific information beyond what the schema provides, such as default behavior or usage tips. Baseline 3 is appropriate when the schema handles parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('List every website indexed by Lattis') and resource ('website'), distinguishing it from siblings like 'lattis_get_site' (single site) and 'lattis_search' (searching). It provides concrete details about what data is returned (slug, domain, category, etc.).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Explicitly states when to use this tool ('Use this to discover what is available before searching'), providing clear context and distinguishing it from the 'lattis_search' sibling. It effectively guides the agent on the tool's role in the workflow.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
lattis_search
Semantic search across every indexed website. Returns the most relevant markdown passages (with source URL, heading path, and score) so the calling agent can read and reason about them. Pass site to scope results to one domain.
| Name | Required | Description | Default |
|---|---|---|---|
| site | No | Optional: scope to a single domain, e.g. 'posthog.com' or 'supabase.com' | |
| query | Yes | What you are trying to find, e.g. 'does PostHog have a SOC2 report' or 'stripe webhook signature verification' | |
| top_k | No | How many passages to return (max 50) | 20 |
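The tool descriptions suggest a two-step workflow: search first, then zoom in with lattis_get_page. The sketch below builds both calls; the query and the follow-up path ('/handbook/security') are invented examples, and the JSON-RPC envelope follows the MCP specification.

```python
import json

# Step 1: a scoped semantic search. Parameter names come from the
# schema above; values are illustrative only.
search_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "lattis_search",
        "arguments": {
            "query": "does PostHog have a SOC2 report",
            "site": "posthog.com",  # optional: scope to one domain
            "top_k": 5,             # default 20, max 50
        },
    },
}

# Step 2: suppose a returned passage points at a hypothetical
# '/handbook/security' page; read the full markdown with
# lattis_get_page.
follow_up = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "lattis_get_page",
        "arguments": {"site": "posthog.com", "path": "/handbook/security"},
    },
}

print(json.dumps(search_call["params"]["arguments"]))
```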
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes the core behavior (semantic search returning markdown passages with metadata) and output format, but lacks details on rate limits, authentication needs, or error handling, which are important for a search tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficiently structured in two sentences: the first explains the core functionality and output, the second provides a key parameter usage tip. Every sentence adds value without redundancy, making it appropriately sized and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (search with 3 parameters), no annotations, and no output schema, the description does a good job covering purpose, usage, and output format. However, it lacks details on behavioral aspects like rate limits or error cases, leaving some gaps for an agent to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so the baseline is 3. The description adds minimal value beyond the schema by mentioning the 'site' parameter for scoping, but does not provide additional context for 'query' or 'top_k' beyond what's already documented in the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('semantic search across every indexed website') and resources ('indexed website'), and distinguishes it from siblings by mentioning it returns 'relevant markdown passages' for reading and reasoning, unlike the more specific get_page, get_site, or list_sites tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context on when to use this tool ('semantic search across every indexed website') and includes a specific parameter usage tip ('Pass `site` to scope results to one domain'), but does not explicitly state when not to use it or name alternatives among the sibling tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
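Before publishing, you can sanity-check the file locally. The check below is based only on the structure shown above; Glama's actual schema at the `$schema` URL may enforce more.

```python
# Minimal local sanity check for a /.well-known/glama.json payload.
# This mirrors only the example structure above, not the full schema.
payload = {
    "$schema": "https://glama.ai/mcp/schemas/connector.json",
    "maintainers": [{"email": "your-email@example.com"}],
}

def looks_valid(doc):
    """Rough structural check: schema URL present, at least one maintainer email."""
    maintainers = doc.get("maintainers", [])
    return (
        doc.get("$schema", "").startswith("https://glama.ai/")
        and len(maintainers) > 0
        and all("@" in m.get("email", "") for m in maintainers)
    )

print(looks_valid(payload))  # → True
```

Remember to replace the placeholder email with the address tied to your Glama account.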
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.