
Server Details

MCP server for Drosophila neuroscience data from VirtualFlyBrain

- Status: Healthy
- Transport: Streamable HTTP
- Repository: Robbie1977/VFB3-MCP
- GitHub Stars: 0

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4/5 across 3 of 3 tools scored. Lowest: 3.4/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

The three tools have clearly distinct purposes: get_term_info retrieves detailed information for specific VFB IDs, run_query executes predefined queries based on those IDs, and search_terms performs broad searches with filtering capabilities. There is no overlap in functionality; each tool serves a unique role in the workflow.

Naming Consistency: 5/5

All tool names follow a consistent verb_noun pattern (get_term_info, run_query, search_terms) with clear, descriptive verbs. The naming is uniform and predictable, making it easy for agents to understand the action each tool performs.

Tool Count: 5/5

With three tools, this server is well-scoped for its domain of querying and retrieving data from VirtualFlyBrain. Each tool is essential: get_term_info for foundational data, run_query for specific analyses, and search_terms for discovery. The count is appropriate and avoids bloat.

Completeness: 4/5

The toolset covers the core workflows effectively: retrieving term details, running queries, and searching. Minor gaps are conceivable, such as tools for updating or managing data if that applies to the domain, but the provided tools handle the primary use cases of data access and exploration without dead ends.

Available Tools

3 tools
get_term_info (Grade: A)

Get term information from VirtualFlyBrain using one or more VFB IDs. Supports batch requests — pass an array of IDs to fetch multiple terms in parallel. When multiple IDs are provided, results are returned as a JSON object keyed by ID. The Images field is keyed by template brain ID — use these to construct VFB browser URLs: https://v2.virtualflybrain.org/org.geppetto.frontend/geppetto?id=<VFB_ID>&i=<TEMPLATE_ID>,<IMAGE_ID1>,<IMAGE_ID2> where id= is the focus term and i= is a comma-separated list of image IDs for the 3D viewer (template ID must be first in the i= list to set the coordinate space).

Parameters (JSON Schema):
- id (required): One or more VFB IDs to look up
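The viewer-URL construction described above can be sketched as a small helper. This is a minimal illustration, not part of the server; the IDs in the example are hypothetical placeholders.

```python
def vfb_viewer_url(focus_id: str, template_id: str, image_ids: list[str]) -> str:
    """Build a VFB browser URL from a focus term, a template brain,
    and the image IDs to load in the 3D viewer. The template ID must
    come first in the i= list to set the coordinate space."""
    i_param = ",".join([template_id, *image_ids])
    return (
        "https://v2.virtualflybrain.org/org.geppetto.frontend/geppetto"
        f"?id={focus_id}&i={i_param}"
    )

# Hypothetical placeholder IDs, not real lookup results:
url = vfb_viewer_url("VFB_FOCUS_ID", "VFB_TEMPLATE_ID", ["VFB_IMAGE_ID"])
```

In practice the template and image IDs would come from the Images field of a get_term_info response, which is keyed by template brain ID.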
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and discloses key behavioral traits: batch processing with parallel fetching, JSON output structure keyed by ID, and detailed URL construction for images. It lacks information on rate limits, error handling, or authentication needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose and batch support. The URL construction details are necessary but slightly verbose, though every sentence adds value without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations and no output schema, the description compensates well by explaining output format and URL usage. It covers the essential context for a read-only lookup tool but could improve by mentioning error cases or response structure beyond the JSON keying.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the 'id' parameter thoroughly. The description adds value by explaining batch usage and result mapping, but does not provide additional semantic details beyond what the schema offers, meeting the baseline for high coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get term information') and resource ('from VirtualFlyBrain using one or more VFB IDs'), distinguishing it from sibling tools like 'run_query' and 'search_terms' by focusing on direct ID-based lookup rather than search or query execution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for usage, explaining batch capabilities and URL construction, but does not explicitly state when to use this tool versus alternatives like 'search_terms' or 'run_query', nor does it mention exclusions or prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

run_query (Grade: A)

Run a query on VirtualFlyBrain using a VFB ID and query type. Supports batch requests — pass an array of IDs to run the same query_type on all of them, or use the queries array for mixed ID/query_type combinations. When multiple queries are provided, results are returned as a JSON object keyed by "ID::query_type". IMPORTANT: Do NOT pass tool names (like "get_term_info" or "search_terms") as query_type — those are separate tools. Valid query_types are returned by get_term_info in the Queries array for each entity. Common query_types include: PaintedDomains, AllAlignedImages, AlignedDatasets, AllDatasets (for templates); SimilarMorphologyTo, NeuronInputsTo, NeuronNeuronConnectivityQuery (for neurons); ListAllAvailableImages, SubclassesOf, PartsOf, NeuronsPartHere, NeuronsSynaptic, ExpressionOverlapsHere (for classes). Available query_types vary by entity type — ALWAYS call get_term_info FIRST to see which queries are available for a given ID, as attempting invalid query types will result in an error message directing you to use get_term_info.

Parameters (JSON Schema):
- id (optional): One or more VFB IDs to query
- queries (optional): Array of {id, query_type} pairs for mixed batch queries. When provided, the id and query_type params are ignored.
- query_type (optional): A valid query type from the Queries array returned by get_term_info. Used for a single id or an array of ids.
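The "ID::query_type" keying of batch results can be illustrated with a small client-side sketch. The fan-out function and the stubbed result shape here are assumptions for illustration, not the server's actual implementation.

```python
def fan_out_queries(queries, run_one):
    """Run a list of {id, query_type} pairs and key the combined
    results as 'ID::query_type', mirroring the batch result format
    documented for run_query. run_one is any callable that executes
    a single (id, query_type) query."""
    results = {}
    for q in queries:
        key = f"{q['id']}::{q['query_type']}"
        results[key] = run_one(q["id"], q["query_type"])
    return results

# Hypothetical usage with a stubbed executor and a placeholder template ID:
batch = [
    {"id": "VFB_00101567", "query_type": "PaintedDomains"},
    {"id": "VFB_00101567", "query_type": "AllAlignedImages"},
]
keyed = fan_out_queries(batch, lambda vfb_id, query_type: {"status": "ok"})
# keyed now has keys "VFB_00101567::PaintedDomains" and
# "VFB_00101567::AllAlignedImages"
```

As the description stresses, a real executor should first call get_term_info for each ID and only use query_types listed in that entity's Queries array.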
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It effectively describes key behaviors: batch processing capabilities, result format for multiple queries, error handling for invalid query types, and the dependency on get_term_info for valid query types. It doesn't cover rate limits or authentication needs, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded with the core purpose. Every sentence adds value: batch processing details, result format, warnings about invalid usage, examples of query types, and the critical dependency on get_term_info. It could be slightly more concise in listing query type examples, but overall structure is effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (batch processing, dependency on another tool, variable query types) and no output schema, the description provides substantial context. It explains the result format for batch queries, error conditions, and the prerequisite of calling get_term_info. The main gap is lack of output schema details, but the description compensates well given the constraints.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, so the baseline is 3. The description adds significant value beyond the schema by explaining the interaction between parameters (e.g., 'When multiple queries are provided, results are returned as a JSON object keyed by "ID::query_type"' and 'When provided, id and query_type params are ignored'), providing examples of valid query types, and clarifying the relationship with get_term_info.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Run a query on VirtualFlyBrain using a VFB ID and query type.' It specifies the resource (VirtualFlyBrain), the action (run a query), and the required inputs (VFB ID and query type). It also distinguishes from sibling tools by explicitly warning not to pass tool names like 'get_term_info' or 'search_terms' as query_type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool versus alternatives. It states: 'ALWAYS call get_term_info FIRST to see which queries are available for a given ID' and warns that 'attempting invalid query types will result in an error message directing you to use get_term_info.' It also distinguishes from sibling tools by naming them and clarifying they are separate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

search_terms (Grade: B)

Search for VFB terms using the Solr search server. Results can be filtered, excluded, or boosted by entity type using facets_annotation values.

Available filter types: entity, anatomy, nervous_system, individual, has_image, adult, cell, neuron, vfb, has_neuron_connectivity, nblast, visual_system, cholinergic, class, secondary_neuron, expression_pattern, gabaergic, expression_pattern_fragment, glutamatergic, feature, sensory_neuron, neuronbridge, deprecated, larva, has_region_connectivity, nblastexp, gene, primary_neuron, flycircuit, mechanosensory_system, histaminergic, lineage_mbp, peptidergic, hasscrnaseq, chemosensory_system, split, has_subclass, olfactory_system, dopaminergic, fafb, l1em, pub, enzyme, motor_neuron, cluster, lineage_6, lineage_3, serotonergic, lineage_19, lineage_cm3, lineage_dm6, proprioceptive_system, gustatory_system, sense_organ, lineage_mbp4, lineage_mbp1, lineage_1, lineage_mbp2, lineage_all1, lineage_balc, lineage_cm4, lineage_dm4, muscle, lineage_13, lineage_8, lineage_mbp3, lineage_12, lineage_dm1, lineage_dpmm1, lineage_9, lineage_cp2, lineage_dl1, fanc, lineage_7, lineage_vpnd2, lineage_dm3, lineage_dpmpm2, lineage_14, lineage_4, lineage_blp1, lineage_dalv2, lineage_eba1, lineage_dm2, lineage_dpmpm1, auditory_system, lineage_16, lineage_blvp1, lineage_blav2, lineage_vlpl2, lineage_alad1, lineage_bamv3, lineage_bld6, lineage_vpnd1, synaptic_neuropil, lineage_23, lineage_17, lineage_10, lineage_dplpv, lineage_21, lineage_alv1

Multiple filter_types are ANDed (results must match ALL). Multiple exclude_types are ORed (any match excludes). boost_types soft-rank matching results higher without excluding others.

Parameters (JSON Schema):
- rows (optional): Number of results to return (default 150, max 1000); use smaller numbers for focused searches
- query (required): Search query (e.g., medulla)
- start (optional): Pagination start index (default 0); use to get results beyond the first page
- boost_types (optional): Boost the ranking of results matching these facets_annotation types without excluding others
- filter_types (optional): Filter results to only include items matching ALL of these facets_annotation types (AND logic)
- exclude_types (optional): Exclude results matching ANY of these facets_annotation types (OR logic)
- minimize_results (optional): When true, limit results to the top 10 for initial searches and add truncation metadata. For exact matches, return only the matching result.
- auto_fetch_term_info (optional): When true and an exact label match is found, automatically fetch and include term info in the response.
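The documented facet semantics (filter_types ANDed, exclude_types ORed, boost_types soft-ranking) can be demonstrated with a local sketch. This assumes each result carries a set of facets_annotation tags; it mimics the behavior for illustration and is not the server's Solr implementation.

```python
def apply_facets(results, filter_types=(), exclude_types=(), boost_types=()):
    """Mimic the documented facet semantics on a local result list.
    filter_types are ANDed (a result must carry all of them),
    exclude_types are ORed (any match removes the result), and
    boost_types re-rank matching results higher without excluding."""
    kept = [
        r for r in results
        if all(t in r["facets"] for t in filter_types)
        and not any(t in r["facets"] for t in exclude_types)
    ]
    # Stable sort: results matching more boost types rank first,
    # original order is preserved otherwise.
    return sorted(kept, key=lambda r: -sum(t in r["facets"] for t in boost_types))

# Toy results with hypothetical facet tags:
results = [
    {"id": "A", "facets": {"neuron", "adult"}},
    {"id": "B", "facets": {"neuron", "larva"}},
    {"id": "C", "facets": {"anatomy"}},
]
out = apply_facets(results, filter_types=["neuron"], boost_types=["adult"])
# A and B pass the filter; A ranks first because it also matches the boost.
```

A separate call with exclude_types=["larva"] would drop B while keeping A and C, since exclusion does not require the filter to match.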
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden and does well by explaining key behavioral traits: it specifies the search backend (Solr), describes filter logic (AND for filter_types, OR for exclude_types), explains boost behavior (soft-ranking without exclusion), and lists all available filter types. It doesn't cover rate limits, authentication needs, or error behaviors, but provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is overly long due to the exhaustive list of filter types (over 100 items), which could be summarized or referenced externally. The first sentence is clear, but the bulk of the text is a data dump that doesn't efficiently convey usage. It's not appropriately sized for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex search tool with 8 parameters and no output schema, the description provides good context on filtering logic and available types. However, it lacks information on response format, pagination details beyond schema hints, error cases, or performance considerations. Given the complexity, more guidance on result interpretation would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all 8 parameters thoroughly. The description adds value by explaining the semantics of filter_types, exclude_types, and boost_types (AND/OR logic, boosting behavior) and listing all possible filter type values. However, it doesn't add meaning for other parameters like 'query' or 'rows' beyond what the schema provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool searches for VFB terms using Solr, which is a specific verb (search) and resource (VFB terms). It distinguishes from sibling 'get_term_info' (which fetches info for a specific term) and 'run_query' (which is more generic). However, it doesn't explicitly contrast with 'run_query' beyond mentioning Solr.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through its detailed explanation of filter types and logic, suggesting when to use filtering/boosting features. However, it lacks explicit guidance on when to choose this tool over 'run_query' or when to use 'get_term_info' instead for known terms. No clear 'when-not' scenarios are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
