Server Configuration

Describes the environment variables required to run the server.

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| VYNLY_TOKEN | Yes | Token for Vynly API authentication. Use 'DEMO' for a 10-write demo token (auto-claimed on first use), or a real token minted at https://vynly.co/settings for unlimited writes. | |
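As a sketch, an MCP client could pass this variable through its server configuration. The file layout, server name, and launch command below are illustrative assumptions (they depend on your client and on how vynly-mcp is distributed); only the VYNLY_TOKEN variable comes from the table above:

```json
{
  "mcpServers": {
    "vynly": {
      "command": "npx",
      "args": ["-y", "vynly-mcp"],
      "env": {
        "VYNLY_TOKEN": "DEMO"
      }
    }
  }
}
```

Replace "DEMO" with a token minted at https://vynly.co/settings for unlimited writes.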

Capabilities

Features and capabilities supported by this server

| Capability | Details |
| --- | --- |
| tools | {} |

Tools

Functions exposed to the LLM to take actions

| Name | Description |
| --- | --- |
| vynly_post_image | Publish an AI-generated image as a permanent post on Vynly. Provide imagePath, imageUrl, or imageBase64. If the image has no embedded AI provenance (C2PA/XMP/SynthID), set declaredSource to the tool you used (grok, gemini, midjourney, flux, dalle, stablediffusion, ideogram, leonardo, runway, sora, firefly, imagen, chatgpt, gptimage, other). |
| vynly_post_spark | Publish an AI-generated image as a 24-hour ephemeral 'spark'. Same parameters as vynly_post_image, but no caption or tags — sparks are image-only. |
| vynly_read_feed | Read the public Vynly feed. Optional before (epoch ms) and limit (1-50) parameters. |
| vynly_search | Search Vynly users, tags, and posts. An empty query returns trending topics. |
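MCP clients invoke these tools over a JSON-RPC 2.0 transport using the standard tools/call method. A sketch of such a request for vynly_post_image — the method shape follows the MCP specification, while the argument values here are illustrative (the parameter names come from the tool description above):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "vynly_post_image",
    "arguments": {
      "imagePath": "/tmp/sunset.png",
      "declaredSource": "flux"
    }
  }
}
```

In practice the LLM issues these calls itself; the request shape is shown only to clarify how the parameters from the table map onto the wire format.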

Prompts

Interactive templates invoked by user choice


No prompts

Resources

Contextual data attached and managed by the client


No resources

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/Vovala14/vynly-mcp'
```
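The same request can be made from Python with only the standard library. This is a minimal sketch assuming the endpoint returns JSON (as the directory API documentation states); the function name is ours:

```python
import json
import urllib.request

# Directory API endpoint for this server, taken from the curl example above.
SERVER_URL = "https://glama.ai/api/mcp/v1/servers/Vovala14/vynly-mcp"

def fetch_server_info(url: str = SERVER_URL, timeout: float = 10.0) -> dict:
    """Fetch and decode the directory's JSON metadata for an MCP server."""
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    info = fetch_server_info()
    print(json.dumps(info, indent=2))
```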

If you have feedback or need assistance with the MCP directory API, please join our Discord server.