
Polish Academic MCP

by asterixix

ludzie_search

Search Polish researcher profiles in the Ludzie Nauki registry to find scientists by name, scientific domain, or browse alphabetically. Retrieve profile IDs for detailed information.

Instructions

Search scientist profiles in Ludzie Nauki (ludzie.nauka.gov.pl), Poland's public researcher registry. Structured search with pagination (0-based page). Filter by surname, optional first name, optional scientific domain code (e.g. DZ0106N). Omit name filters to browse ordered results (large totalHits). Response includes profileId for ludzie_get_scientist and public profile URLs under /ln/profile/{id}.
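To make the description concrete, here is a minimal sketch of argument payloads an agent might send to this tool, plus the profile-URL construction it documents. This is illustrative only: the argument names come from the schema below, but the exact invocation mechanism depends on your MCP client.

```python
# Hypothetical argument payloads for ludzie_search, matching the schema below.

# Filtered search: surname plus domain code, first page of 20 results.
filtered = {
    "surname": "Kowalski",
    "domain_code": "DZ0106N",  # exact sciences
    "page": 0,                 # pagination is 0-based
    "size": 20,
}

# Browse mode: omit surname/first_name to list the registry alphabetically.
browse = {"page": 0, "size": 50, "include_deceased": True}

# A profileId from the response maps to a public profile URL:
def profile_url(profile_id: str) -> str:
    return f"https://ludzie.nauka.gov.pl/ln/profile/{profile_id}"

print(profile_url("abc123"))  # https://ludzie.nauka.gov.pl/ln/profile/abc123
```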

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| surname | No | Last name filter (partial match). Omit surname and first_name to browse the registry alphabetically. | |
| first_name | No | First name filter (optional; use with or without surname). | |
| domain_code | No | Scientific domain code from the Polish classification, e.g. DZ0106N (exact sciences), DZ0105N (social sciences). | |
| page | No | Page number (0-based). | |
| size | No | Results per page (1–50). | |
| include_deceased | No | When true, pass withTheDead=true to include posthumous profiles. | |
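Since the description mentions 0-based pages and a totalHits count, a paging loop is the natural usage pattern. The sketch below assumes a response shaped like {"results": [...], "totalHits": N}; that shape, and the call_tool helper, are assumptions standing in for your MCP client's actual invocation method (stubbed here with fake data so the loop runs standalone).

```python
TOTAL = 120  # pretend totalHits for the stub

def call_tool(name, args):
    # Stub: a real client would send the request to the MCP server.
    # Returns at most `size` fake results per page, capped at TOTAL.
    start = args["page"] * args["size"]
    count = max(0, min(args["size"], TOTAL - start))
    return {"results": [{"profileId": f"id-{start + i}"} for i in range(count)],
            "totalHits": TOTAL}

def collect_profile_ids(surname, size=50):
    ids, page = [], 0  # page numbering is 0-based per the tool description
    while True:
        resp = call_tool("ludzie_search",
                         {"surname": surname, "page": page, "size": size})
        ids.extend(r["profileId"] for r in resp["results"])
        if (page + 1) * size >= resp["totalHits"]:
            break
        page += 1
    return ids

print(len(collect_profile_ids("Nowak")))  # prints 120
```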
Behavior (3/5)

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: pagination (0-based), filtering options, that omitting name filters enables browsing, and that results include profileIds and URLs. However, it lacks details on rate limits, authentication needs, error handling, or the exact structure of the response (beyond mentioning 'large totalHits'), leaving some gaps for a mutation-free but complex search tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness (5/5)

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first covers purpose and key features (search, pagination, filters), and the second explains result usage and sibling tool linkage. Every word adds value, with no redundancy or fluff, making it easy to parse and front-loaded with essential information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness (3/5)

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, search functionality) and lack of annotations or output schema, the description is adequate but incomplete. It covers the basic operation, filtering logic, and result usage, but misses details like response format (beyond mentioning profileId and URLs), error cases, or performance considerations (e.g., handling of 'large totalHits'). For a search tool with no structured output, more behavioral context would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters (3/5)

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 6 parameters. The description adds minimal value beyond the schema: it mentions filtering by surname/first name/domain code and that omitting name filters enables browsing, but these are already implied or stated in the schema descriptions. No additional syntax, format, or usage nuances are provided, meeting the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose (5/5)

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Search scientist profiles'), target resource ('Ludzie Nauki (ludzie.nauka.gov.pl), Poland's public researcher registry'), and distinguishes it from siblings like 'ludzie_get_scientist' (which retrieves individual profiles) and 'ludzie_semantic_search' (which presumably uses different search logic). It's precise about what the tool does without being tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines (4/5)

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for structured searches with pagination and filtering. It explicitly mentions using 'ludzie_get_scientist' with the profileId from results, distinguishing it from that sibling. However, it doesn't explicitly state when NOT to use it (e.g., vs. 'ludzie_semantic_search') or detail prerequisites, keeping it from a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

