store

Store structured entities, file payloads, or both in a unified request. Supports content-addressed file storage with SHA-256 dedup, entity relationships, and optional interpretation provenance.

Instructions

Unified storing for structured, file-backed, or combined payloads in one request. Choose path by source: file- or resource-sourced (attachment/file to preserve) → use file_content+mime_type or file_path; conversation- or tool-sourced (chat or other MCP) → use entities. You may send both entities and file input in the same call. File bytes create a content-addressed sources row (SHA-256 dedup per user); the response includes source_id / content_hash for the unstructured leg. The server does not invent structured fields from opaque blobs without an explicit interpretation block or a separate interpretation flow. Agents should parse and extract entities first when they need structured data from a readable file, then send those entities alongside the raw file. IMPORTANT FOR STRUCTURED DATA: Include ALL fields from source data. Schema fields go to observations; non-schema fields go to raw_fragments for future schema expansion.
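The path-selection rules above can be sketched as a single combined call. The sketch below is hypothetical: the wrapper depends on your MCP client, and the entity shape (`entity_type`, the invoice field names) and the `relationship_type` value are illustrative assumptions, not part of the documented schema. Only the top-level argument names come from the input schema.

```typescript
// Hypothetical arguments for a combined structured + file store call.
// The agent parsed the file first, then sends entities alongside the raw bytes.
const pdfText = "%PDF-1.4 (stand-in for real file bytes)";

const storeArgs = {
  // Structured leg: fields the agent already extracted from the file.
  entities: [
    {
      entity_type: "invoice", // illustrative entity shape
      observations: { invoice_number: "INV-1042", total: 129.5 }, // schema fields
      raw_fragments: { delivery_note: "leave at rear entrance" }, // non-schema fields
    },
  ],
  // Relationship endpoints may mix an index into this request with an existing id.
  relationships: [
    { source_index: 0, target_entity_id: "ent_vendor_123", relationship_type: "issued_by" },
  ],
  idempotency_key: "store-invoice-inv-1042", // required on the structured path
  // Unstructured leg: raw bytes, content-addressed (SHA-256) server-side.
  file_content: Buffer.from(pdfText, "utf8").toString("base64"),
  mime_type: "application/pdf", // required whenever file_content is used
  file_idempotency_key: "file-invoice-inv-1042",
};
```

Note that all source fields are carried: schema fields land in `observations`, while the non-schema delivery note is preserved in `raw_fragments` for future schema expansion.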

Input Schema

- entities (optional)
- relationships (optional): Create relationships between entities in this request. Use `source_index` or `target_index` for entities in this request, and `source_entity_id` or `target_entity_id` for existing entities. Index and id endpoints may be mixed in one relationship.
- interpretation (optional): Interpretation provenance for source-derived structured extraction. Supplying this creates an interpretation row and links new observations to it. Omit for ordinary already-structured or chat-native facts, which keep `observations.interpretation_id` NULL.
- source_priority (optional)
- observation_source (optional): Classifies the *kind* of write being performed, orthogonal to `source_priority`. See `Observation.observation_source` for the full semantic contract. Defaults to `llm_summary` when unspecified. Applies to every observation created by this request.
- external_actor (optional): Upstream artifact author (e.g. a GitHub user) stamped into observation provenance alongside AAuth agent attribution. Matches `ExternalActorInputSchema` in `action_schemas.ts`.
- idempotency_key: Required for the structured path; optional for the unstructured-only path.
- file_idempotency_key (optional): Idempotency key for the file path when sending structured + unstructured in one call.
- file_content (optional): Base64-encoded file content. Prefer `file_path` for local files instead of base64 encoding.
- file_path (optional): Local file path (alternative to `file_content`). If provided, the file is read from the filesystem, and the MIME type is auto-detected from the extension when not supplied. Works in local environments (Cursor, Claude Code) where the MCP server has filesystem access; does NOT work in web-based environments (claude.ai, chatgpt.com), which must use `file_content`.
- mime_type (optional): MIME type (e.g. 'application/pdf', 'text/csv'). Required with `file_content`; optional with `file_path` (auto-detected from the extension).
- original_filename (optional): Original filename or source label. For the unstructured path, auto-detected from `file_path` if not provided. For the structured (entities) path, omit when data is agent-provided (no file origin); the source will have no filename. Pass only when mirroring a real file name or when a display label is desired.
- user_id (optional)
- commit (optional): When false, runs in plan/dry-run mode: resolves entities and returns planned actions ("would_create" / "would_match_existing") without persisting observations or source rows. Useful for previewing a structured store before committing.
- strict (optional): When true, refuses silent merges: an existing entity is matched only when its schema declares `canonical_name_fields` that the request matches, or when `target_id` is supplied. Prevents accidental coalescing into a pre-existing record.
- source_peer_id (optional): Neotoma peer id to stamp on observations for cross-instance sync loop prevention (Phase 5). Requires `observation_source: sync` in practice.
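The content-addressed dedup described above can be illustrated with a minimal sketch. This is not the server's implementation; the in-memory map, `sourceId` format, and function names are assumptions used only to show why identical bytes map to one sources row per user.

```typescript
import { createHash } from "node:crypto";

// Identical bytes yield the same SHA-256 hash, so a second upload of the
// same file dedupes to the existing sources row instead of creating a new one.
function contentHash(bytes: Buffer): string {
  return createHash("sha256").update(bytes).digest("hex");
}

// Simulated per-user source store (illustrative; not the server's schema).
const sources = new Map<string, { sourceId: string; hash: string }>();

function storeFile(userId: string, bytes: Buffer): { sourceId: string; deduped: boolean } {
  const hash = contentHash(bytes);
  const key = `${userId}:${hash}`; // dedup is scoped per user, not global
  const existing = sources.get(key);
  if (existing) return { sourceId: existing.sourceId, deduped: true };
  const sourceId = `src_${sources.size + 1}`;
  sources.set(key, { sourceId, hash });
  return { sourceId, deduped: false };
}
```

Storing the same bytes twice for one user returns the original source_id with `deduped: true`; another user storing the same bytes gets a fresh row, since the dedup key includes the user.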
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the key behaviors: file bytes create a content-addressed sources row with SHA-256 dedup per user, the response includes source_id/content_hash, and the server will not invent structured fields without an explicit interpretation block. This is thorough, though it omits error handling and rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately long but every sentence adds value. It is well-structured: purpose, path selection, file handling, interpretation, and structured data note. No redundancy, though minor trimming could improve brevity without losing content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (16 parameters, nested objects, no output schema), the description covers the main concepts (unified store, file vs entity paths, interpretation, structured data handling). It mentions response fields (source_id, content_hash) and critical constraints (dedup, no automatic extraction). While it does not detail every parameter, the schema covers those, making the description adequately complete.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is high (81%), so the baseline is 3. The description adds substantial meaning beyond the schema by explaining the unified design, how to choose between file_content and file_path, the role of interpretation, and the distinction between schema fields and raw_fragments. This helps agents select appropriate parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose as a unified store for structured, file-backed, or combined payloads in one request. It uses specific verbs ("store", "choose path") and distinguishes itself from sibling tools like parse_file and create_interpretation by emphasizing direct storage and the need for prior extraction.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly provides when-to-use guidance: 'Choose path by source' with concrete recommendations for file-sourced vs conversation-sourced data. It also advises against relying on the server for extraction, directing agents to parse and extract entities first, which clearly differentiates this tool from analysis/parsing siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
