TekAutomate MCP Server
Proxies the OpenAI Responses API, manages assistant threads, and utilizes vector stores for retrieval-augmented generation and command indexing.
This server is the AI orchestration layer for TekAutomate. It accepts full workspace context from the app, runs tool-assisted reasoning (or deterministic shortcuts), validates output, and returns applyable ACTIONS_JSON.
What it does
- Hosts the AI chat endpoint used by TekAutomate (`/ai/chat`).
- Proxies the OpenAI Responses API with a server-owned key and vector store (`/ai/responses-proxy`).
- Loads and indexes local project knowledge:
  - `public/commands/*.json` (SCPI command truth source)
  - `public/rag/*.json` (retrieval chunks)
  - `public/templates/*.json` (workflow examples)
  - `mcp-server/policies/*.md` (behavior and output constraints)
- Exposes a tool catalog for retrieval, validation, and optional live instrument probing.
- Applies post-check and repair logic before returning final text.
- Stores request/debug artifacts for diagnostics.
High-level flow
TekAutomate sends `POST /ai/chat` with:
- `userMessage`
- provider/model/key
- full `flowContext` (steps, backend, model family, selected step, validation state)
- full `runContext` (logs/audit/exit code)
- optional `instrumentEndpoint` (code executor + VISA resource)

The MCP server runs `runToolLoop(...)`:
- deterministic shortcut path when eligible
- or provider path (OpenAI hosted Responses/tool loop, OpenAI chat-completions fallback, Anthropic)

Post-check validates and normalizes the response:
- `ACTIONS_JSON` structure
- step schema and IDs
- `saveAs` presence/deduplication
- SCPI verification pipeline
- prose truncation guard

The server returns a JSON payload:
- `text`, `displayText`, `openaiThreadId`, `errors`, `warnings`, `metrics`
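A request body consistent with the fields listed above might look like the sketch below. The field names come from this document; the exact TypeScript shape is an assumption, not the server's actual type definitions.

```typescript
// Hypothetical shape of a POST /ai/chat body; field names are taken from
// the flow description above, the types are assumed for illustration.
interface ChatRequest {
  userMessage: string;
  provider: "openai" | "anthropic";
  model: string;
  key?: string;                // per-user provider key, if supplied
  flowContext?: unknown;       // steps, backend, model family, selected step, validation state
  runContext?: unknown;        // logs / audit / exit code
  instrumentEndpoint?: string; // code executor + VISA resource
}

function buildChatRequest(userMessage: string, model: string): ChatRequest {
  return { userMessage, provider: "openai", model };
}

const req = buildChatRequest(
  "Find the SCPI command to measure peak-to-peak voltage on channel 1",
  "gpt-4.1", // illustrative model name
);
```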
Endpoints
- `GET /health` returns `{ ok: true, status: "ready" }` when indexes are loaded.
- `GET /ai/debug/last` returns the last debug bundle (prompts, timings, tool trace metadata).
- `POST /ai/chat` is the main orchestration endpoint for the TekAutomate assistant.
- `POST /ai/responses-proxy` is a streaming Responses proxy using `OPENAI_SERVER_API_KEY` and optional `COMMAND_VECTOR_STORE_ID`.
- `POST /ai/key-test` validates provider/key/model reachability (`openai` or `anthropic`).
- `POST /ai/models` lists available models for a given provider/key.
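A caller can gate traffic on the `/health` payload. A minimal readiness check, assuming the payload shape is exactly the `{ ok, status }` object shown above:

```typescript
// Readiness gate for GET /health; the { ok, status } shape comes from the
// endpoint list above. In an app, `payload` would come from
// fetch(`${mcpHost}/health`).then(r => r.json()).
interface HealthPayload {
  ok: boolean;
  status: string;
}

function isReady(payload: HealthPayload): boolean {
  return payload.ok === true && payload.status === "ready";
}
```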
Tooling surface
Server tools are grouped into retrieval, materialization, validation, and live-instrument calls.
Retrieval tools:
- `search_scpi`, `get_command_group`, `get_command_by_header`, `get_commands_by_header_batch`, `search_tm_devices`, `retrieve_rag_chunks`, `search_known_failures`, `get_template_examples`, `get_policy`, `list_valid_step_types`, `get_block_schema`

Materialization tools:
- `materialize_scpi_command`, `materialize_scpi_commands`, `finalize_scpi_commands`, `materialize_tm_devices_call`

Validation tools:
- `validate_action_payload`, `validate_device_context`, `verify_scpi_commands`

Live instrument tools (via code executor):
- `get_instrument_state`, `probe_command`, `get_visa_resources`, `get_environment`
Deterministic shortcut features
The server includes shortcut builders for common requests to produce fast, consistent actions without full model/tool loops when conditions match.
- Measurement shortcut (including scoped channel handling and standard measurement sets).
- FastFrame shortcut for pyvisa flows.
- Common pyvisa server shortcut for frequent setup/build patterns.
- `tm_devices` measurement shortcut.
- Planner-driven deterministic shortcut from parsed intent + command index.
These shortcuts still pass through post-check before response.
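To make the shortcut idea concrete, here is a toy gate in the spirit of the measurement shortcut: match a narrow, unambiguous request and emit commands directly, otherwise signal "no match" so the full loop runs. The function name, regex, and emitted SCPI are illustrative, not the server's actual shortcut logic.

```typescript
// Toy deterministic shortcut: only fires on a narrow measurement request.
// Anything it cannot match falls through to the full model/tool loop.
interface ShortcutResult {
  matched: boolean;
  commands: string[];
}

function measurementShortcut(text: string): ShortcutResult {
  const m = /\bmeasure\s+(peak-to-peak|frequency)\b.*\bchannel\s*(\d)\b/i.exec(text);
  if (!m) return { matched: false, commands: [] };
  const type = m[1].toLowerCase() === "frequency" ? "FREQUENCY" : "PK2PK";
  return {
    matched: true,
    commands: [
      `MEASUrement:MEAS1:SOUrce CH${m[2]}`,
      `MEASUrement:MEAS1:TYPe ${type}`,
    ],
  };
}
```

The payoff is consistency as much as speed: the same request always yields the same commands, with no model variance.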
Safety and output enforcement
- Strict action schema validation (`validate_action_payload`).
- Replace-flow hardening:
  - ensures step IDs are present and unique
  - can auto-group long flat flows into logical groups
  - enforces/repairs query `saveAs`
  - deduplicates save names
- SCPI verification and source-backed command handling.
- Python substitution guard in non-Python flows.
- Response prose truncation guard (`MCP_POSTCHECK_MAX_PROSE_CHARS`, default 1200).
- Prompt/policy-driven constraints loaded from `mcp-server/prompts/*.md` and `mcp-server/policies/*.md`.
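The prose truncation guard can be sketched as a small pure function. The cut strategy below (word boundary plus an explicit marker) is an assumption for illustration; the real post-check may cut differently.

```typescript
// Sketch of the response prose truncation guard: cap free-form prose at
// the MCP_POSTCHECK_MAX_PROSE_CHARS budget (default 1200).
const DEFAULT_MAX_PROSE_CHARS = 1200; // default of MCP_POSTCHECK_MAX_PROSE_CHARS

function truncateProse(prose: string, limit = DEFAULT_MAX_PROSE_CHARS): string {
  if (prose.length <= limit) return prose;
  const cut = prose.slice(0, limit);
  const lastSpace = cut.lastIndexOf(" ");
  // Prefer a word boundary, then mark the truncation explicitly.
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + " [truncated]";
}
```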
Data and indexes
At startup, the server initializes:
- Command index (`public/commands/*.json`)
- `tm_devices` index
- RAG indexes (`public/rag/*.json`)
- Template index (`public/templates/*.json`)
Command sources include modern and legacy scope families plus AFG, AWG, SMU, DPOJET, TekExpress, and RSA datasets.
Frontend integration
Current app integration resolves the MCP host from:
- `localStorage["tekautomate.mcp.host"]`, or
- `REACT_APP_MCP_HOST`, or
- the fallback `http://localhost:8787` (only on localhost app hosts)

Example: `localStorage.setItem('tekautomate.mcp.host', 'http://localhost:8787');`

Run locally

`npm install`
`npm run start`

The default port is 8787 unless `MCP_PORT` is set.
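The host resolution order from the Frontend integration notes above can be sketched as a pure function; the helper name and signature are hypothetical, not the app's actual code.

```typescript
// Resolution order: localStorage override → build-time env → localhost-only
// default. Returns null when no host applies (non-localhost app, no config).
function resolveMcpHost(
  stored: string | null,       // localStorage["tekautomate.mcp.host"]
  envHost: string | undefined, // REACT_APP_MCP_HOST
  isLocalhostApp: boolean,
): string | null {
  if (stored) return stored;
  if (envHost) return envHost;
  return isLocalhostApp ? "http://localhost:8787" : null;
}
```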
Deploy on Railway
This repo is ready to deploy directly as a standalone Node service.
Recommended Railway settings:
- Root directory: repo root
- Install command: `npm install`
- Start command: `npm start`
- Health check path: `/health`

Included deploy helper: `railway.json`

Useful hosted endpoints after deploy:
- `/`: simple browser status page with uptime and links
- `/health`: JSON health payload for Railway and monitoring
- `/status`: same JSON status payload for manual checks

Suggested Railway environment variables:
- `OPENAI_SERVER_API_KEY`
- `COMMAND_VECTOR_STORE_ID` (if using file search / vector retrieval)
- `OPENAI_ASSISTANT_ID` (if using assistant-thread routing)
- `MCP_ROUTER_ENABLED=true` (if you want router-backed tool hydration at boot)
- `NODE_ENV=production`
Environment variables
Copy `.env.example` to `.env` and set what you need.

Required for `/ai/responses-proxy`:
- `OPENAI_SERVER_API_KEY`

Optional retrieval augmentation:
- `COMMAND_VECTOR_STORE_ID`

OpenAI routing/model controls:
- `OPENAI_BASE_URL`, `OPENAI_DEFAULT_MODEL`, `OPENAI_FLOW_MODEL`, `OPENAI_REASONING_MODEL`, `OPENAI_ASSISTANT_MODEL`, `OPENAI_MAX_OUTPUT_TOKENS`

Hosted prompt controls:
- `OPENAI_PROMPT_ID`, `OPENAI_PROMPT_VERSION` (legacy fallback accepted: `OPENAI_ASSISTANT_ID`)

Prompt file overrides:
- `TEKAUTOMATE_STEPS_INSTRUCTIONS_FILE`, `TEKAUTOMATE_BLOCKLY_INSTRUCTIONS_FILE`

Post-check tuning:
- `MCP_POSTCHECK_MAX_PROSE_CHARS`

Server:
- `MCP_PORT`
Scripts and verification
- `npm run start` / `npm run dev`
- `npm run eval:comprehensive`
- `npm run eval:levels`
- `npm run verify:command-groups`

Reference benchmark: `mcp-server/reports/level-benchmark-2026-03-18.md` shows 40/40 PASS in that run.
Logs and debug artifacts
- Last debug state: `GET /ai/debug/last`
- Request logs are written under `mcp-server/src/logs/requests` (rotated, max 500 files).
- Additional logs and reports are under `mcp-server/logs` and `mcp-server/reports`.
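A "keep the newest 500 files" rotation policy, as described for the request logs, reduces to picking deletion candidates by modification time. This helper is a sketch, not the server's actual rotation code:

```typescript
// Given directory entries, return the names of files beyond the newest
// `keep` entries (candidates for deletion during rotation).
function filesToDelete(
  files: { name: string; mtimeMs: number }[],
  keep = 500,
): string[] {
  return [...files]
    .sort((a, b) => b.mtimeMs - a.mtimeMs) // newest first
    .slice(keep)
    .map((f) => f.name);
}
```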
Internals: Planner, Materializers, and AI Routing
Intent planner (`src/core/intentPlanner.ts`)
The planner is a deterministic parser + resolver layer used before (and sometimes instead of) LLM output.
Main responsibilities:
- Parse user text into structured intent fields (channels, trigger, measurements, bus decode, acquisition, save/recall, status, AFG/AWG/SMU/RSA).
- Detect device type and map the request to relevant command families.
- Resolve concrete SCPI candidates against the command index.
- Return unresolved intents when command mapping is ambiguous.
- Run conflict checks (resource collisions / inconsistent intent combinations).
Core exported functions:
- `parseIntent(...)`: builds a `PlannerIntent` from natural language.
- `planIntent(...)`: parse + resolve + conflict check, returning a `PlannerOutput`.
- `resolve*Commands(...)`: domain resolvers such as `resolveTriggerCommands`, `resolveMeasurementCommands`, `resolveBusCommands`, `resolveSaveCommands`, etc.
- `parse*Intent(...)`: focused parsers such as `parseChannelIntent`, `parseTriggerIntent`, `parseMeasurementIntent`, `parseBusIntent`, etc.
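A focused parser like `parseChannelIntent` can be illustrated with a toy version that extracts channel numbers from free text. This is a sketch of the idea only; the planner's real parser is far richer.

```typescript
// Toy channel-intent parser: pull distinct channel numbers ("CH1",
// "channel 3") out of a request string.
interface ChannelIntent {
  channels: number[];
}

function parseChannelIntentSketch(text: string): ChannelIntent {
  const channels = new Set<number>();
  for (const m of text.matchAll(/\bch(?:annel)?\s*(\d)\b/gi)) {
    channels.add(Number(m[1]));
  }
  return { channels: [...channels].sort((a, b) => a - b) };
}
```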
SCPI source of truth (`src/core/commandIndex.ts`)
- Loads command JSON files from `public/commands` once at startup.
- Normalizes heterogeneous command shapes (manual-entry rich format and flat format).
- Builds fast lookup structures for:
  - exact header lookup (`getByHeader`)
  - prefix lookup (`getByHeaderPrefix`)
  - ranked query search (`searchByQuery`)
- Supports placeholder-aware header normalization (`CH<x>`, `MEAS<x>`, `BUS<x>`, `{A|B}`, etc.).

Current local index size (measured): ~9307 normalized command records.
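Placeholder-aware normalization means a concrete header can be folded back to its canonical indexed form before lookup. A minimal sketch, assuming the placeholder spellings listed above:

```typescript
// Fold concrete headers (CH1, MEAS2, BUS3) back to placeholder form so
// they can hit an exact-lookup table keyed on canonical headers.
// Illustrative only; the real normalizer handles more shapes (e.g. {A|B}).
function normalizeHeaderSketch(header: string): string {
  return header
    .toUpperCase()
    .replace(/\bCH\d+/g, "CH<X>")
    .replace(/\bMEAS\d+/g, "MEAS<X>")
    .replace(/\bBUS\d+/g, "BUS<X>");
}
```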
SCPI retrieval functions
- `search_scpi` (`src/tools/searchScpi.ts`): query search merged with header-like direct matching.
- `get_command_by_header`: exact deterministic match for known headers.
- `get_commands_by_header_batch`: batch exact lookup for multiple headers.
- `get_command_group`: feature-area retrieval (group-level).
- `verify_scpi_commands` (`src/tools/verifyScpiCommands.ts`): validates commands (including exact-syntax mode).
Materializers
Materializers convert canonical records into concrete, applyable commands/calls.
- `materialize_scpi_command`:
  - selects set/query syntax
  - infers placeholder bindings from `concreteHeader`
  - applies explicit bindings + argument values
  - checks for unresolved placeholders
  - runs exact verification before returning success
- `materialize_scpi_commands`: batch wrapper around the single-command materializer.
- `finalize_scpi_commands`: batch materialize + verified output packaging, used as the endgame tool in hosted flows.
- `materialize_tm_devices_call`: builds an exact Python call from a verified `methodPath` and arguments.
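The binding and unresolved-placeholder steps can be sketched as a small substitution pass. The helper below is hypothetical; the real materializer also selects set/query syntax and verifies the result against the index.

```typescript
// Substitute <placeholder> slots with concrete values; report any slot
// that had no binding so the caller can refuse to emit the command.
function bindPlaceholders(
  template: string,
  bindings: Record<string, string>,
): { command: string; unresolved: string[] } {
  const unresolved: string[] = [];
  const command = template.replace(/<([A-Za-z]+)>/g, (whole: string, name: string) => {
    const value = bindings[name.toLowerCase()];
    if (value === undefined) {
      unresolved.push(whole);
      return whole; // leave the slot visible in the output
    }
    return value;
  });
  return { command, unresolved };
}
```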
Tool loop and when server goes to AI for more info
Routing is centralized in `src/core/toolLoop.ts`.
Deterministic path first (no external model):
- In `mcp_only` mode, the server tries deterministic shortcuts and planner synthesis first.
- If the planner fully resolves commands, it can return applyable `ACTIONS_JSON` directly.
- If intent is unresolved in `mcp_only`, the server returns findings/suggested fixes instead of calling external AI.
AI path (`mcp_ai` and normal hosted usage):
- If the deterministic path is not enough, the server calls a provider path:
  - OpenAI hosted Responses (preferred for structured build/edit)
  - OpenAI chat-completions fallback
  - Anthropic messages path
- For hosted structured build, the server preloads source-of-truth context via tools (`search_scpi`, `get_command_group`, `get_commands_by_header_batch`, or `search_tm_devices`) before/within the loop.
- Tool rounds are capped: 4 for hosted structured build, 3 by default, 8 in forced tool mode.
Reliability fallbacks after AI response:
- A post-check pass validates and normalizes the output.
- If the model returns non-actionable output, the server attempts hybrid planner gap-fill.
- If `ACTIONS_JSON` is malformed, the server retries once with a strict JSON-only instruction.
- If model output is weak in specific cases, the server can fall back to deterministic shortcut output.
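The malformed-`ACTIONS_JSON` retry can be sketched as a two-attempt loop, shown synchronously for clarity (the real provider call is asynchronous, and all names here are hypothetical):

```typescript
// First attempt uses the normal prompt; if the output doesn't parse,
// retry once with a strict JSON-only instruction, then give up (null).
type ModelCall = (prompt: string) => string;

function getActionsWithRetry(callModel: ModelCall, prompt: string): unknown | null {
  const strict = prompt + "\nReturn ONLY valid ACTIONS_JSON, no prose.";
  for (const p of [prompt, strict]) {
    try {
      return JSON.parse(callModel(p));
    } catch {
      // malformed output: fall through to the strict retry, then to null
    }
  }
  return null;
}
```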
Performance snapshot
From the checked-in benchmark report `mcp-server/reports/level-benchmark-2026-03-18.md`: 40/40 PASS. In that run, per-case `totalMs` ranged from about 1 ms to 254 ms.

Local micro-benchmark (quick developer run on this workspace; indicative, not a production SLA):
- `searchByQuery` average: ~0.54 ms per lookup.
- `getByHeader` average: ~0.009 ms per lookup (hot path).
- `materializeScpiCommand` average (single-command path with verification): ~25.4 ms.
- `finalizeScpiCommands` average for a 3-command batch: ~1.8 ms.

Use these as practical engineering baselines; real end-to-end latency depends more on provider/model calls than on local index lookups.
When to use MCP-only vs MCP+AI
Use `mcp_only` when:
- You want deterministic/local command resolution.
- You prefer speed and strictness over open-ended reasoning.
- The request is explicit enough for the planner/materializers.

Use `mcp_ai` when:
- The request is complex, ambiguous, or cross-domain.
- You need richer reasoning, explanation, or conflict tradeoffs.
- The deterministic planner reports unresolved intent and you want model help.