
Lattis

Server Details

Search indexed websites, read raw page markdown, and score AI visibility for any site.

Status: Healthy
Transport: Streamable HTTP
Tool Descriptions: A

Average 4.1/5 across 4 of 4 tools scored.

Server Coherence: A

Disambiguation: 5/5

Each tool has a clearly distinct purpose with no overlap: get_page retrieves markdown content, get_site provides metadata and AI Visibility Score, list_sites enumerates all indexed sites, and search performs semantic search across content. The descriptions explicitly differentiate their functions and use cases.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with the 'lattis_' prefix: get_page, get_site, list_sites, and search. The naming is uniform, predictable, and aligns with their actions, making the set easy to navigate and understand.

Tool Count: 5/5

With 4 tools, the server is well-scoped for its purpose of indexing and querying websites. Each tool serves a distinct role in the workflow (discovery, metadata retrieval, content access, and search), and none feel redundant or missing for the domain.

Completeness: 5/5

The tool set provides complete coverage for the domain of website indexing and analysis: list_sites for discovery, get_site for metadata, search for content retrieval, and get_page for detailed content access. There are no obvious gaps, and the tools support a full workflow from exploration to in-depth reading.
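The discovery-to-reading workflow described above can be sketched as a sequence of MCP `tools/call` requests. The JSON-RPC envelope follows the MCP specification; the argument values, and the `query` parameter for `lattis_search` (whose schema is not shown on this page), are illustrative assumptions.

```python
import json

def tool_call(name: str, arguments: dict, request_id: int) -> dict:
    """Build a JSON-RPC 2.0 envelope for an MCP tools/call request."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# The full workflow: discover sites, inspect one, search, then read a page.
workflow = [
    tool_call("lattis_list_sites", {"limit": 50}, 1),
    tool_call("lattis_get_site", {"site": "posthog.com"}, 2),
    tool_call("lattis_search", {"query": "pricing tiers"}, 3),  # 'query' arg assumed
    tool_call("lattis_get_page", {"site": "posthog.com", "path": "/pricing"}, 4),
]

print(json.dumps(workflow[0], indent=2))
```

Each request would be POSTed to the server's Streamable HTTP endpoint; the sketch only shows the payloads, not the transport.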

Available Tools

4 tools
lattis_get_page: A

Get the full markdown of a specific page from an indexed website. Use this to zoom in after lattis_search points you at a URL.

Parameters (JSON Schema)

- path (required): Page path starting with /, e.g. '/pricing' or '/docs/api'
- site (required): Domain ('posthog.com') or slug
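The documented constraints on these two parameters can be checked client-side before calling the tool. A minimal sketch, assuming only what the schema states (path must start with '/'; site is a domain or slug):

```python
def validate_get_page_args(site: str, path: str) -> dict:
    """Validate lattis_get_page arguments against the documented schema."""
    if not path.startswith("/"):
        raise ValueError(f"path must start with '/', got {path!r}")
    if not site:
        raise ValueError("site is required (a domain like 'posthog.com' or a slug)")
    return {"site": site, "path": path}

args = validate_get_page_args("posthog.com", "/docs/api")
```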
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It mentions retrieving 'full markdown', which implies read-only behavior and the output format, but it lacks details on error handling, rate limits, or authentication needs. It adds some context but doesn't fully compensate for the absence of annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first states the purpose, and the second provides usage guidance. It is front-loaded with essential information and appropriately sized, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (2 parameters, no output schema), the description is mostly complete. It covers purpose and usage well, but without annotations or output schema, it lacks details on behavioral traits like error responses or markdown structure, leaving minor gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters thoroughly. The description adds nothing beyond the schema, such as examples or usage nuances, so it meets the baseline for high coverage without adding extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get the full markdown') and resource ('a specific page from an indexed website'), distinguishing it from siblings by mentioning its role after `lattis_search` points to a URL. It uses precise language that directly communicates the tool's function.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly provides when to use this tool ('to zoom in after `lattis_search` points you at a URL'), offering clear context and distinguishing it from alternatives like `lattis_search` for initial discovery. This gives the agent direct guidance on sequencing and tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lattis_get_site: A

Get metadata and the AI Visibility Score for a single indexed site — name, description, category, subcategories, page counts, and which evaluation questions the site answers vs misses.

Parameters (JSON Schema)

- site (required): Domain ('posthog.com') or slug ('posthog-com')
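The schema accepts either a domain or a slug, and the example pair ('posthog.com' vs. 'posthog-com') suggests the slug is the domain with dots replaced by hyphens. A hedged helper under that assumption, which is inferred from this single example and not confirmed by the server's documentation:

```python
def domain_to_slug(domain: str) -> str:
    """Convert a domain to the slug format shown in the schema example.

    Assumption: slugs are the lowercased domain with '.' replaced by '-',
    inferred only from the 'posthog.com' -> 'posthog-com' example.
    """
    return domain.lower().replace(".", "-")

slug = domain_to_slug("posthog.com")  # 'posthog-com'
```

Since the tool accepts both forms, passing the domain directly avoids relying on this inference at all.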
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description carries the full burden. It discloses the tool's read-only nature by stating 'Get' and lists the returned fields, but lacks details on permissions, rate limits, or error handling. It adds some context (evaluation questions answered vs. missed) but is incomplete on behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, dense sentence that efficiently conveys purpose, scope, and return data without redundancy. It is front-loaded with key information and has zero wasted words, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description partially compensates by listing return fields and behavioral context. However, for a tool with one parameter and complex output (e.g., AI Visibility Score, evaluation questions), it lacks details on output format, error cases, or prerequisites, leaving gaps in completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'site' parameter fully. The description does not add any parameter-specific details beyond what the schema provides, such as format examples or constraints, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Get metadata and the AI Visibility Score') and the resource ('a single indexed site'), listing specific data fields returned (name, description, category, etc.). It distinguishes from siblings by specifying 'single indexed site' versus list/search operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving detailed metadata and scores for one site, contrasting with 'lattis_list_sites' for multiple sites and 'lattis_search' for broader queries. However, it lacks explicit when-not-to-use guidance or named alternatives, leaving some ambiguity.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lattis_list_sites: A

List every website indexed by Lattis. Returns slug, domain, category, subcategories, AI Visibility Score, and page count for each site. Use this to discover what is available before searching.

Parameters (JSON Schema)

- limit (optional): Max sites to return (default 200)
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It describes the return format and the tool's purpose, but lacks details on behavioral traits like pagination, rate limits, or error handling. However, it does disclose the 'discovery' context, which adds some value beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with zero waste: the first states purpose and return data, the second provides usage guidance. It is front-loaded with key information and appropriately sized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 optional parameter, no output schema, no annotations), the description is mostly complete. It covers purpose, usage, and return data, but lacks output format details (e.g., structure of the returned list) and behavioral aspects like error cases, which would be helpful for full completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'limit' parameter fully. The description does not add any parameter-specific information beyond what the schema provides, such as default behavior or usage tips. Baseline 3 is appropriate when the schema handles parameter documentation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('List every website indexed by Lattis') and resource ('website'), distinguishing it from siblings like 'lattis_get_site' (single site) and 'lattis_search' (searching). It provides concrete details about what data is returned (slug, domain, category, etc.).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('Use this to discover what is available before searching'), providing clear context and distinguishing it from the 'lattis_search' sibling. It effectively guides the agent on the tool's role in the workflow.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
