## Server Configuration

The environment variables used to configure the server. Only two are strictly required; the rest fall back to the defaults shown.
| Name | Required | Description | Default |
|---|---|---|---|
| OPENAI_API_KEY | Yes | OpenAI API key required for AI-powered WordPress tasks. | |
| TANUKIMCP_MASTER_KEY | Yes | The master API key for the TanukiMCP server. | |
| PORT | No | The port the server listens on. | 3001 |
| HEADLESS | No | Enables headless mode for browser automation. | true |
| REQUIRE_API_KEY | No | Whether to require the master API key for requests (recommended for production). | true |
| CORS_ALLOW_ORIGIN | No | The CORS allowed origin. | * |
| CORS_ALLOW_METHODS | No | The CORS allowed HTTP methods. | GET,HEAD,PUT,PATCH,POST,DELETE,OPTIONS |
| CORS_ALLOW_HEADERS | No | The CORS allowed headers. | Content-Type,Authorization,Accept,Origin,X-Requested-With,X-Api-Key |
| OPENAI_MODEL | No | The primary OpenAI model to use. | gpt-4.1 |
| OPENAI_BASIC_MODEL | No | The model used for basic tasks. | gpt-4.1-mini |
| OPENAI_NANO_MODEL | No | The model used for nano tasks. | gpt-4.1-nano |
| OPENAI_ADVANCED_MODEL | No | The model used for advanced tasks. | gpt-4.1 |
| OPENAI_TEMPERATURE | No | The sampling temperature for OpenAI requests. | 0.7 |
| OPENAI_MAX_TOKENS | No | The maximum number of tokens per OpenAI response. | 4096 |
| OPENAI_MAX_CONTEXT_TOKENS | No | The maximum context window, in tokens, for OpenAI requests. | 128000 |
| ANALYTICS_DETAILED_LOGGING | No | Whether to enable detailed analytics logging. | false |
| ANALYTICS_SAVE_INTERVAL_MS | No | How often analytics are saved, in milliseconds. | 300000 |
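A minimal configuration might look like the following shell snippet. The key values are placeholders you must replace with your own; the optional variables are shown with their defaults from the table above, so you only need to export the ones you want to change.

```shell
# Required: without these the server cannot call OpenAI or
# authenticate clients (the values below are placeholders).
export OPENAI_API_KEY="sk-your-openai-key"
export TANUKIMCP_MASTER_KEY="change-me"

# Optional overrides, shown with their default values.
export PORT=3001
export HEADLESS=true
export REQUIRE_API_KEY=true        # keep enabled in production
export CORS_ALLOW_ORIGIN="*"       # tighten to your frontend's origin in production
export OPENAI_MODEL="gpt-4.1"
export OPENAI_TEMPERATURE=0.7
export OPENAI_MAX_TOKENS=4096
```

Since `CORS_ALLOW_HEADERS` includes `X-Api-Key`, clients presumably present the master key in that header, but confirm this against the server's request handling.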
## Capabilities

Server capabilities have not been inspected yet.
### Tools

Functions exposed to the LLM to take actions.

| Name | Description |
|---|---|
| *No tools* | |
### Prompts

Interactive templates invoked by user choice.

| Name | Description |
|---|---|
| *No prompts* | |
### Resources

Contextual data attached and managed by the client.

| Name | Description |
|---|---|
| *No resources* | |