# Examples for QML MCP Server

This directory contains examples demonstrating how to use the QML MCP server.

## Example 1: Running a Quantum Circuit

```python
# QASM3 circuit for a Bell state
qasm_circuit = """
OPENQASM 3.0;
include "stdgates.inc";

qubit[2] q;
bit[2] c;

h q[0];
cx q[0], q[1];
c[0] = measure q[0];
c[1] = measure q[1];
"""

# Arguments for the run_quantum_circuit tool
arguments = {
    "qasm": qasm_circuit,
    "shots": 1000
}

# Expected response:
# {
#     "counts": {"00": 498, "11": 502},
#     "shots": 1000,
#     "num_qubits": 2,
#     "success": true
# }
```

## Example 2: Computing a Quantum Kernel

```python
# Training data for kernel computation
arguments = {
    "train_data": [
        [0.1, 0.2],
        [0.3, 0.4],
        [0.5, 0.6]
    ]
}

# Expected response:
# {
#     "kernel_matrix": [
#         [1.0, 0.87, 0.65],
#         [0.87, 1.0, 0.89],
#         [0.65, 0.89, 1.0]
#     ],
#     "train_shape": [3, 2],
#     "test_shape": null,
#     "feature_dimension": 2,
#     "success": true
# }
```

## Example 3: Training a VQC

```python
# Binary classification dataset
arguments = {
    "X_train": [
        [0.1, 0.2],
        [0.2, 0.3],
        [0.8, 0.9],
        [0.9, 0.8]
    ],
    "y_train": [0, 0, 1, 1],
    "max_iter": 100
}

# Expected response:
# {
#     "model": "gASViGAAAAAAAACMMnFpc2tpdF9tYWNoaW5lX2xlYXJuaW5nLm...",
#     "train_score": 1.0,
#     "feature_dimension": 2,
#     "num_samples": 4,
#     "success": true
# }
```

## Example 4: Evaluating a Model

```python
# Use the model returned by Example 3
arguments = {
    "model": "gASViGAAAAAAAACMMnFpc2tpdF9tYWNoaW5lX2xlYXJuaW5nLm...",
    "X_test": [
        [0.15, 0.25],
        [0.85, 0.95]
    ],
    "y_test": [0, 1]
}

# Expected response:
# {
#     "predictions": [0, 1],
#     "num_samples": 2,
#     "success": true,
#     "score": 1.0,
#     "accuracy": 1.0
# }
```

## Example 5: Complete Workflow

Here's a complete example showing the entire workflow, from circuit execution to model evaluation:

```python
import asyncio


async def quantum_ml_workflow():
    """Complete quantum ML workflow."""
    # Step 1: Verify quantum circuit execution works
    print("Step 1: Running quantum circuit...")
    circuit_args = {
        "qasm": """
OPENQASM 3.0;
include "stdgates.inc";

qubit[2] q;
bit[2] c;

h q[0];
cx q[0], q[1];
c[0] = measure q[0];
c[1] = measure q[1];
""",
        "shots": 1000
    }
    # Call run_quantum_circuit with circuit_args

    # Step 2: Compute quantum kernel
    print("Step 2: Computing quantum kernel...")
    kernel_args = {
        "train_data": [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
    }
    # Call compute_quantum_kernel with kernel_args

    # Step 3: Train VQC
    print("Step 3: Training VQC classifier...")
    train_args = {
        "X_train": [
            [0.1, 0.2],
            [0.2, 0.3],
            [0.8, 0.9],
            [0.9, 0.8]
        ],
        "y_train": [0, 0, 1, 1],
        "max_iter": 100
    }
    # Call train_vqc with train_args
    # Save the returned model

    # Step 4: Evaluate model
    print("Step 4: Evaluating model...")
    eval_args = {
        "model": "<base64_model_from_step_3>",
        "X_test": [[0.15, 0.25], [0.85, 0.95]],
        "y_test": [0, 1]
    }
    # Call evaluate_model with eval_args

    print("Workflow complete!")


# Run the workflow
asyncio.run(quantum_ml_workflow())
```

## Error Handling

All tools return structured error responses:

```json
{
    "success": false,
    "error": "Circuit has 15 qubits, exceeds maximum of 10",
    "error_type": "ValueError",
    "traceback": "Traceback (most recent call last):\n  ..."
}
```

## Configuration

Configure the server via environment variables:

```bash
export QML_MCP_QUANTUM_MAX_SHOTS=50000
export QML_MCP_QUANTUM_MAX_QUBITS=5
export QML_MCP_LOG_LEVEL=DEBUG
```

## Tips

1. **Start Small**: Begin with small datasets (4-8 samples) for VQC training.
2. **Iterations**: Use 10-20 iterations for quick tests, 100+ for production.
3. **Feature Dimensions**: Keep feature dimensions ≤ `max_qubits`.
4. **Model Size**: Trained models can be large (several KB); ensure proper storage.
5. **Shots**: More shots mean more accurate results but slower execution.
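## Calling the Tools from a Client

The examples above show only the argument payloads passed to each tool. One way to actually invoke them is through the MCP Python SDK over stdio. The sketch below is illustrative, not part of this server's documented API, and the launch command (`python server.py`) is an assumption; substitute however you start the server:

```python
import asyncio


async def call_run_quantum_circuit(qasm: str, shots: int = 1000):
    """Sketch: invoke run_quantum_circuit via the MCP Python SDK.

    Requires `pip install mcp`. The server launch command below is an
    assumption; adjust it to your setup.
    """
    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    params = StdioServerParameters(command="python", args=["server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Tool arguments are the same dicts shown in the examples above
            result = await session.call_tool(
                "run_quantum_circuit",
                arguments={"qasm": qasm, "shots": shots},
            )
            return result
```

Running `asyncio.run(call_run_quantum_circuit(qasm_circuit))` starts the server as a subprocess, executes the circuit, and returns the tool result; the other tools are called the same way with their respective argument dicts.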
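## Validating Tool Responses

Since every tool reports a `success` flag, client code can funnel all responses through one check before using them. The helper below is a sketch (`check_response` is not part of the server API):

```python
def check_response(response: dict) -> dict:
    """Return the response unchanged if the tool succeeded, else raise.

    Hypothetical helper, not part of the server API.
    """
    if not response.get("success", False):
        # Surface the server-reported error type and message
        raise RuntimeError(
            f"{response.get('error_type', 'Error')}: "
            f"{response.get('error', 'unknown error')}"
        )
    return response


# A failed response raises with the server's error details
error_response = {
    "success": False,
    "error": "Circuit has 15 qubits, exceeds maximum of 10",
    "error_type": "ValueError",
}
try:
    check_response(error_response)
except RuntimeError as exc:
    print(exc)  # ValueError: Circuit has 15 qubits, exceeds maximum of 10
```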
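## Sanity-Checking a Kernel Matrix

Assuming the server computes a fidelity-style kernel (as in Qiskit Machine Learning), the training kernel matrix should be symmetric with ones on the diagonal, as in the Example 2 response. A quick pure-Python check:

```python
# Kernel matrix from the Example 2 response
kernel_matrix = [
    [1.0, 0.87, 0.65],
    [0.87, 1.0, 0.89],
    [0.65, 0.89, 1.0],
]


def is_valid_kernel(k, tol=1e-9):
    """Check symmetry and unit diagonal of a fidelity-style kernel matrix."""
    n = len(k)
    for i in range(n):
        if abs(k[i][i] - 1.0) > tol:
            return False
        for j in range(n):
            if abs(k[i][j] - k[j][i]) > tol:
                return False
    return True


print(is_valid_kernel(kernel_matrix))  # True
```

A failed check usually points at mismatched train/test data shapes or excessive shot noise rather than a bug in your arguments.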
