Feature: Model Switching (Provider-Agnostic)
  As a user
  I want to switch between AI models easily
  So that I can use different LLMs with the Blender MCP server

  Background:
    Given the MCP server is running
    And MCP servers are provider-agnostic

  Scenario: Generate OpenRouter client config for Claude
    When I call generate_client_config with provider="openrouter", model="anthropic/claude-3.5-sonnet"
    Then I receive a JSON snippet for OpenRouter configuration
    And the snippet includes the API endpoint and model name
    And the config file path is returned
    And the documentation explains this is client-side configuration
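  # Illustrative only: one plausible shape for the generated OpenRouter snippet.
  # The endpoint URL is OpenRouter's real API base; "provider", "api_key_env",
  # and the overall key layout are assumptions, not a fixed schema.
  #
  #   {
  #     "provider": "openrouter",
  #     "api_base": "https://openrouter.ai/api/v1",
  #     "model": "anthropic/claude-3.5-sonnet",
  #     "api_key_env": "OPENROUTER_API_KEY"
  #   }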

  Scenario: Generate OpenRouter config for GLM-4
    When I call generate_client_config with provider="openrouter", model="zhipu/glm-4"
    Then I receive OpenRouter configuration for GLM-4
    And the model routing is handled by OpenRouter
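  # Same shape as the snippet above; only the "model" field changes, e.g.
  #   { ..., "model": "zhipu/glm-4" }
  # OpenRouter resolves the slug to the upstream provider, so nothing in the
  # MCP server itself needs to change. (The slug is taken verbatim from the
  # step; verify it against OpenRouter's current model list.)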

  Scenario: Generate Claude Desktop config
    When I call generate_client_config with provider="claude_desktop", model="claude-sonnet-4.5"
    Then I receive an updated claude_desktop_config.json snippet
    And the snippet includes the MCP server entry
    And instructions explain where to paste the config
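  # Illustrative claude_desktop_config.json fragment. "mcpServers" is the key
  # Claude Desktop actually reads; the server name "blender" and the uvx
  # invocation are assumptions consistent with the Cursor scenario below.
  #
  #   {
  #     "mcpServers": {
  #       "blender": {
  #         "command": "uvx",
  #         "args": ["blender-mcp"]
  #       }
  #     }
  #   }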

  Scenario: Generate Cursor config
    When I call generate_client_config with provider="cursor", model="anthropic/claude-3.5-sonnet"
    Then I receive a .cursor/mcp.json configuration snippet
    And the config includes the uvx blender-mcp command
    And platform-specific instructions are provided (Windows vs Mac/Linux)
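  # Illustrative .cursor/mcp.json entry. On Mac/Linux uvx is invoked directly;
  # on Windows it is commonly wrapped in cmd. Treat the exact wrapping as an
  # assumption to verify against the client docs.
  #
  #   Mac/Linux:
  #     { "mcpServers": { "blender": { "command": "uvx", "args": ["blender-mcp"] } } }
  #
  #   Windows:
  #     { "mcpServers": { "blender": { "command": "cmd", "args": ["/c", "uvx", "blender-mcp"] } } }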

  Scenario: Documentation clarifies provider-agnosticism
    Given the server documentation
    Then it clearly states MCP servers don't control which LLM the client uses
    And it explains that model selection is a client responsibility
    And it provides examples of OpenRouter for multi-model routing
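  # Sketch of why model selection is client-side: the client, not the MCP
  # server, names the model when it calls OpenRouter's OpenAI-compatible
  # endpoint. The request shape below is the standard chat-completions form.
  #
  #   POST https://openrouter.ai/api/v1/chat/completions
  #   { "model": "anthropic/claude-3.5-sonnet", "messages": [...] }
  #
  # Swapping "model" to "zhipu/glm-4" switches the LLM without touching the
  # blender-mcp server entry at all.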