mcp-run-python (official, by pydantic)
test_simple_completion.yaml (3.53 kB)
interactions:
- request:
    body: null
    headers:
      accept:
      - '*/*'
      accept-encoding:
      - gzip, deflate
      connection:
      - keep-alive
    method: GET
    uri: https://huggingface.co/api/models/Qwen/Qwen2.5-72B-Instruct?expand=inferenceProviderMapping
  response:
    headers:
      access-control-allow-origin:
      - https://huggingface.co
      access-control-expose-headers:
      - X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
      connection:
      - keep-alive
      content-length:
      - '703'
      content-type:
      - application/json; charset=utf-8
      cross-origin-opener-policy:
      - same-origin
      etag:
      - W/"2bf-bkSLwumMG89/DZCsDWwBvtIEsEs"
      referrer-policy:
      - strict-origin-when-cross-origin
      vary:
      - Origin
    parsed_body:
      _id: 66e81cefd1b1391042d0e47e
      id: Qwen/Qwen2.5-72B-Instruct
      inferenceProviderMapping:
        featherless-ai:
          providerId: Qwen/Qwen2.5-72B-Instruct
          status: live
          task: conversational
        fireworks-ai:
          providerId: accounts/fireworks/models/qwen2p5-72b-instruct
          status: live
          task: conversational
        hyperbolic:
          providerId: Qwen/Qwen2.5-72B-Instruct
          status: error
          task: conversational
        nebius:
          providerId: Qwen/Qwen2.5-72B-Instruct-fast
          status: live
          task: conversational
        novita:
          providerId: qwen/qwen-2.5-72b-instruct
          status: error
          task: conversational
        together:
          providerId: Qwen/Qwen2.5-72B-Instruct-Turbo
          status: live
          task: conversational
    status:
      code: 200
      message: OK
- request:
    body: null
    headers: {}
    method: POST
    uri: https://router.huggingface.co/nebius/v1/chat/completions
  response:
    headers:
      access-control-allow-credentials:
      - 'true'
      access-control-allow-origin:
      - '*'
      access-control-expose-headers:
      - X-Repo-Commit,X-Request-Id,X-Error-Code,X-Error-Message,X-Total-Count,ETag,Link,Accept-Ranges,Content-Range,X-Linked-Size,X-Linked-ETag,X-Xet-Hash
      connection:
      - keep-alive
      content-length:
      - '680'
      content-type:
      - application/json
      cross-origin-opener-policy:
      - same-origin
      referrer-policy:
      - strict-origin-when-cross-origin
      strict-transport-security:
      - max-age=31536000; includeSubDomains
      vary:
      - Origin
    parsed_body:
      choices:
      - finish_reason: stop
        index: 0
        logprobs: null
        message:
          audio: null
          content: Hello! How can I assist you today? Feel free to ask me any questions or let me know if you need help with anything specific.
          function_call: null
          reasoning_content: null
          refusal: null
          role: assistant
          tool_calls: []
        stop_reason: null
      created: 1751982153
      id: chatcmpl-d445c0d473a84791af2acf356cc00df7
      model: Qwen/Qwen2.5-72B-Instruct-fast
      object: chat.completion
      prompt_logprobs: null
      service_tier: null
      system_fingerprint: null
      usage:
        completion_tokens: 29
        completion_tokens_details: null
        prompt_tokens: 30
        prompt_tokens_details: null
        total_tokens: 59
    status:
      code: 200
      message: OK
version: 1
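The cassette records two HTTP calls: a GET that resolves the model's inference provider mapping, and a POST to the chosen provider's chat completions route on the Hugging Face router. Below is a minimal httpx sketch of the same two calls; the HF_TOKEN environment variable, the Authorization header, and the chat request payload are assumptions, since the cassette records neither credentials nor the request body.

import os

import httpx

# Assumed token handling; the cassette does not record credentials.
token = os.environ.get("HF_TOKEN", "")
headers = {"Authorization": f"Bearer {token}"} if token else {}

# 1. Look up which inference providers can serve the model (first recorded request).
mapping = httpx.get(
    "https://huggingface.co/api/models/Qwen/Qwen2.5-72B-Instruct",
    params={"expand": "inferenceProviderMapping"},
    headers=headers,
).json()["inferenceProviderMapping"]

# The recorded run went through the "nebius" provider, which maps the model to the
# provider-specific id "Qwen/Qwen2.5-72B-Instruct-fast".
provider = "nebius"
provider_model = mapping[provider]["providerId"]

# 2. Send an OpenAI-style chat completion through the Hugging Face router
#    (second recorded request; the prompt shown here is illustrative only).
completion = httpx.post(
    f"https://router.huggingface.co/{provider}/v1/chat/completions",
    headers=headers,
    json={"model": provider_model, "messages": [{"role": "user", "content": "hello"}]},
).json()

print(completion["choices"][0]["message"]["content"])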

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/pydantic/pydantic-ai'
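The same lookup from Python, using httpx as an illustration (the endpoint returns JSON; its exact schema is not shown here):

import httpx

# Fetch the directory entry for the pydantic/pydantic-ai server.
response = httpx.get("https://glama.ai/api/mcp/v1/servers/pydantic/pydantic-ai")
response.raise_for_status()
print(response.json())  # server metadata; field names are not documented in this snippet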

If you have feedback or need assistance with the MCP directory API, please join our Discord server.