
@prosodyai/mcp-docs

Official
by ProsodyAI

Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `PORT` | No | Port for the HTTP server. | `3333` |
| `PROSODYAI_REPO_ROOT` | No | Path to the prosodyai monorepo root, used to locate the content directory. If not set, the package walks up from its own directory. | (unset) |
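
A minimal sketch of how a Node process could resolve these variables, assuming the defaults listed above (the `resolveConfig` helper and `DocsServerConfig` type are illustrative, not part of the package):

```typescript
// Illustrative config resolution; variable names come from the table above.
// The fallback behaviour for PROSODYAI_REPO_ROOT is assumed, not confirmed.
interface DocsServerConfig {
  port: number;
  repoRoot?: string; // undefined → package walks up from its own directory
}

function resolveConfig(env: Record<string, string | undefined>): DocsServerConfig {
  return {
    port: Number(env.PORT ?? "3333"), // documented default
    repoRoot: env.PROSODYAI_REPO_ROOT, // optional override
  };
}
```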

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | `{ "listChanged": true }` |
| resources | `{ "listChanged": true }` |

Tools

Functions exposed to the LLM to take actions

- `search_docs`: Search ProsodyAI docs, SDK READMEs, recipes, and OpenAPI metadata. Returns a ranked list of matches with snippets and stable ids. Follow up with `read_doc` to fetch full content.
- `list_docs`: List every document in this server. Useful for browsing without a search query.
- `read_doc`: Fetch the full content of a doc, SDK README, recipe, or other entry by id (as returned by `search_docs` or `list_docs`).
- `list_endpoints`: List ProsodyAI REST API endpoints from the bundled OpenAPI spec. Optional filters by tag or path substring.
- `get_endpoint`: Get the full OpenAPI operation object (parameters, request body, responses, security) for a single REST endpoint.
- `get_openapi`: Return the full bundled OpenAPI 3 spec for the ProsodyAI REST API. Use sparingly; prefer `list_endpoints` + `get_endpoint` for targeted lookups.
- `list_recipes`: List curated end-to-end implementation recipes for common ProsodyAI integration tasks (e.g. add prosody to a LiveKit agent, stream from a browser, wire the LangChain tool, define KPIs).
- `get_overview`: Return a single-page overview of the ProsodyAI platform: what it is, what to use, and how the SDKs/API/recipes relate. Read this first when starting an integration.
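
The search-then-read flow above maps onto two MCP `tools/call` requests. A sketch of the JSON-RPC payloads a client would send (the `tools/call` method and `{ name, arguments }` shape come from the MCP specification; the argument names `query` and `id` are assumptions to verify against the server's `tools/list` schemas):

```typescript
// Hypothetical payload builder for MCP tools/call requests.
function toolCall(id: number, name: string, args: Record<string, unknown>) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "tools/call",
    params: { name, arguments: args },
  };
}

// 1. Search for docs about realtime streaming.
const search = toolCall(1, "search_docs", { query: "realtime streaming" });

// 2. Fetch one of the returned ids in full (the id value is illustrative).
const read = toolCall(2, "read_doc", { id: "recipes/browser-streaming" });
```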

Prompts

Interactive templates invoked by user choice

No prompts.

Resources

Contextual data attached and managed by the client

- `api/openapi`: Structured data: openapi
- `docs/API_CUSTOM_DOMAIN`: To serve the FastAPI at **https://api.prosodyai.app** and have env vars set on Cloud Run.
- `docs/KPI_EXAMPLES`: Use these when defining KPIs in the dashboard or when training the KPI head. Each needs **outcome labels** (post-call / post-session) to train the model; until then the API uses heuristic mapping from prosody.
- `docs/KPI_LABELED_DATA`: To train the KPI head you need **(audio or prosody features) + (actual KPI outcomes)**. Outcomes are sent via the feedback API; the same `session_id` links a conversation to its labels.
- `docs/NAMING`: Single source of truth so we don't confuse repos, packages, and the product.
- `docs/OVERVIEW`: ProsodyAI is real-time prosodic intelligence infrastructure for voice agents. It tells your agent *how* someone sounds, not just what they say. Audio flows through a frozen WavLM-Large backbone into Mamba selective scan blocks that output continuous Valence-Arousal-Dominance ...
- `docs/PAPER`: ProsodyAI is a speech analysis system that turns short audio chunks into affective and prosodic signals for voice agents, call analysis, and downstream business workflows. The current deployed system accepts base64-encoded audio, resamples it to 16 kHz when needed, runs a Pros...
- `docs/README`: **[STRUCTURE.md](STRUCTURE.md)** (DB, API, dashboard layout), **[SYSTEMS.md](SYSTEMS.md)** (topology, env contract, deployment), **[PAPER.md](PAPER.md)** (grounded technical paper for the current deployed system), **env.example** (copy to repo root as `.env` for local de...)
- `docs/schema/README`: **Source of truth: `website/prisma/schema.prisma`.** Run migrations from the dashboard.
- `docs/STRUCTURE`: One database. One backend API. One dashboard. **Configuration: [SYSTEMS.md](SYSTEMS.md).**
- `docs/SYSTEMS`: Single source of truth for topology, configuration, and deployment. All runtime config comes from environment variables; no hardcoded URLs or secrets.
- `docs/TECHNICAL_DEEP_DIVE`: 1. [System Overview](#1-system-overview) 2. [ProsodySSM Model Architecture](#2-prosodyssm-model-architecture) 3. [Training Pipeline](#3-training-pipeline) 4. [Inference & Deployment (Baseten)](#4-inference--deployment-baseten) 5. [API Layer (FastAPI on Cloud Run)](#5-api-layer...)
- `docs/TRAINING`: ProsodySSM training runs on **Baseten** (GPU). Data is read from **GCS**; checkpoints go to Baseten workspace (and optionally GCS). Inference uses the same Baseten stack: deploy a checkpoint as a Truss model.
- `recipes/browser-streaming`: Goal: capture mic audio in the browser, stream it to the ProsodyAI realtime endpoint, and react to escalation alerts in the UI (e.g. show a "calm down" indicator, switch the agent's persona, or surface a coach card).
- `recipes/kpi-flow`: ProsodyAI does **not** ship hard-coded "emotion" classes. Instead, you define the KPIs you actually care about (e.g. `retention_intent`, `clinician_handoff`, `buying_intent`, `authenticity_score`) in the dashboard, and the API returns predictions for *those* KPIs from raw pros...
- `recipes/langchain-agent`: Goal: give a LangChain agent the ability to listen to an audio file or live session and reason about how the speaker sounds, separately from what they said.
- `recipes/livekit-realtime-agent`: Goal: a LiveKit `Agent` that listens to the caller's audio in real time, streams it through ProsodyAI, and adapts its behaviour when the prosodic signal changes (e.g. caller becomes frustrated → switch to empathetic tone, trigger a de-escalation prompt, or hand off to a human).
- `recipes/rest-api-integration`: When you can't (or don't want to) install an SDK, e.g. inside an Edge Function, a different language, or a thin proxy, call the ProsodyAI REST API directly.
- `recipes/sdk-typescript-quickstart`: Use this when adding ProsodyAI to a Node, Next.js, or browser app (e.g. AureliaStudio's web client or a Vercel function).
- `sdks/api-fastapi`: Public REST API for the ProsodyAI speech emotion recognition service.
- `sdks/langchain`: ProsodyAI integration for LangChain. Includes speech emotion analysis, forward-looking conversation predictions, and feedback for continuous model improvement.
- `sdks/livekit`: The plugin can classify transcript turns alongside prosody events. Feed it text from your LiveKit STT/transcription pipeline.
- `sdks/python-core`: Core prosody/emotion model library (ProsodySSM). Pip package: **prosody-ssm** (import `prosody_ssm`).
- `sdks/typescript`: ProsodyAI SDK for speech emotion analysis with forward-looking conversation predictions. Supports files, buffers, real-time streaming, and feedback for continuous model improvement.
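
Resource names like those above double as identifiers in MCP `resources/read` requests. A hedged sketch (the `resources/read` method and `{ uri }` param come from the MCP specification, but the exact URI scheme this server uses is not shown in the listing, so the `uri` value here is an assumption; enumerate the real URIs first with `resources/list`):

```typescript
// Hypothetical resources/read payload builder. The URI value is assumed to
// match the listed resource name, which may not be exactly what this server
// expects.
function readResource(id: number, uri: string) {
  return {
    jsonrpc: "2.0" as const,
    id,
    method: "resources/read",
    params: { uri },
  };
}

const overviewRequest = readResource(1, "docs/OVERVIEW");
```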


MCP directory API

We provide all the information about MCP servers via our MCP directory API. For example:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/ProsodyAI/mcp-docs'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server.