# skills-mcp-server

by amazingashis
## Server Configuration

Environment variables used to configure the server. Both are optional and fall back to documented defaults.
| Name | Required | Description | Default |
|---|---|---|---|
| SKILLS_ROOT | No | Path to the root directory containing skill packs (SKILL.md folders). Defaults to 'skills/' relative to current working directory. | skills/ |
| MCP_TRANSPORT | No | Transport mode: 'stdio' for local subprocess, 'streamable-http' for Streamable HTTP, or 'sse' for legacy SSE. | stdio |
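As a minimal sketch of the defaulting behavior described in the table, a process reading this configuration might resolve the two variables like so (the validation of transport values is an assumption based on the modes listed above):

```python
import os

# Both variables are optional; fall back to the defaults from the table.
skills_root = os.environ.get("SKILLS_ROOT", "skills/")
transport = os.environ.get("MCP_TRANSPORT", "stdio")

# Assumed guard: only the three documented transport modes are valid.
assert transport in ("stdio", "streamable-http", "sse"), (
    f"unsupported MCP_TRANSPORT: {transport!r}"
)
```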
## Capabilities

Features and capabilities supported by this server.
| Capability | Details |
|---|---|
| tools | `{ "listChanged": true }` |
| prompts | `{ "listChanged": true }` |
| resources | `{ "subscribe": false, "listChanged": true }` |
| completions | `{}` |
## Tools

Functions exposed to the LLM to take actions.
| Name | Description |
|---|---|
| list_skills | Return a manifest of available skills (id, title, description teaser). No filesystem paths are exposed. |
| get_skill | Load the full SKILL.md contents for a skill id. |
| search_skills | Filter skills by free-text query over id, title, and description. |
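Clients invoke these tools through the MCP `tools/call` JSON-RPC method. A sketch of the request shape for `get_skill` follows; the argument key `"id"` and the skill id value are assumptions for illustration, since the server's actual input schema is discovered via `tools/list`:

```python
import json

# Shape of an MCP "tools/call" request targeting the get_skill tool.
# The argument key "id" and its value are hypothetical; consult the
# tools/list response for the real input schema.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_skill",
        "arguments": {"id": "example-skill"},  # hypothetical skill id
    },
}
print(json.dumps(request, indent=2))
```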
## Prompts

Interactive templates invoked by user choice.
| Name | Description |
|---|---|
| skill_context | Build a user message that embeds a skill for the model. |
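Prompts are fetched with the MCP `prompts/get` method. A sketch of the request for `skill_context` is below; the argument name `"skill_id"` is an assumption, since the prompt's real argument list comes from the `prompts/list` response:

```python
import json

# Shape of an MCP "prompts/get" request for the skill_context prompt.
# The "skill_id" argument name is hypothetical; discover the actual
# arguments via prompts/list.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "prompts/get",
    "params": {
        "name": "skill_context",
        "arguments": {"skill_id": "example-skill"},  # hypothetical
    },
}
print(json.dumps(request))
```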
## Resources

Contextual data attached and managed by the client.
| Name | Description |
|---|---|
| Data engineering pipeline review | ## Scope Spark/Delta pipelines, file ingestion, MERGE/upsert, SCD-style patterns, small-vs-large table joins, and write semantics. ## Checklist - **Filter early**, **project early**: unnecessary columns increase shuffle and IO. - **Joins**: risk of skew; broadcast only when the small side is truly small and stable; avoid accidental Cartesian products. - **Actions**: `count()`, `collect()`, full scans—justify or flag for large data. - **Delta**: prefer explicit schema; MERGE for upserts; avoid blind `overwrite` on shared tables; document `replaceWhere` if used. - **Writes**: idempotent job r… |
| Databricks secrets and logging | ## Secrets - Read credentials via **Databricks secrets** (e.g. `dbutils.secrets.get(scope, key)`) or organization-approved injection—never commit secrets to repos, YAML, or notebooks. - Fail fast with a **clear, non-leaking** error when a required secret or scope is missing (message: which scope/key name is missing, not the value). ## Parameters - Use job/task **parameters** for non-secret run controls (dates, environment flags). - Do not pass secrets as default widget values or job parameters visible in UI history if policy forbids it. ## Logging - Structured, concise logs; **no** raw ro… |
| Databricks workflow from notebook paths | ## Goal Produce a **runnable-shaped** Databricks workflow where each task points at the correct notebook path, dependencies form a DAG, and compute settings are parameterized (not hardcoded secrets). ## Inputs to collect (ask if missing) - **Notebook paths**: Databricks workspace paths (e.g. `/Shared/etl/bronze_ingest`) and/or repo-relative paths if using Repos/Git folders—state which naming the output must use. - **Order / DAG**: Linear order, explicit `depends_on`, or parallel branches. - **Compute**: `job_cluster_key` reuse, existing cluster ID placeholder, or serverless if applicable to… |
| Delta table operations | ## Writes - **Append**: simple inserts; watch duplicate runs without dedupe keys. - **MERGE**: primary pattern for upserts; ensure match keys are selective and well-distributed. - **Overwrite**: use with explicit predicates (`replaceWhere`) when appropriate; dangerous on shared tables without guardrails. ## Maintenance - **OPTIMIZE**: reduces small files; costs IO; run on a schedule aligned with churn. - **ZORDER**: few columns, high filter benefit; not a substitute for good partition design. - **VACUUM**: understand retention vs time travel requirements before lowering retention. ## Schem… |
| NEPSE share price history | ## Goal When the user names **one or more NEPSE-listed stocks** (symbols like `NIFRA`, `HIDCL`, `NABIL`, company names, or “scrip”), **use web search** to find recent public information, then give **direct, readable output**: a short summary plus a **markdown table** when the results include enough structured numbers (date, close/LTP, and OHLC/volume if available). This is **market data retrieval**, not investment advice. End with a one-line disclaimer: data may be delayed or incomplete; verify on the exchange or broker platform before acting. ## Method (required) 1. **Web search** — Run o… |
| Domain created date (registry lookup only) | ## Goal Given a **URL or hostname**, output **only** the **domain registration / creation date** (the registry field commonly labeled *Created*, *Registered On*, *Registration Time*, or RDAP `events` registration). Include **one authoritative source link** the user can reopen. Do **not** summarize the website, fetch marketing copy, or report TLS, Archive, or HTTP “last modified” unless the user explicitly asks beyond this skill. ## Steps 1. **Normalize input:** Extract the **registrable domain** (e.g. `https://www.adhikariasis.com.np/path` → `adhikariasis.com.np`). Note punycode if relevan… |
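A client can enumerate these resources with the parameter-free MCP `resources/list` method; each entry in the response carries a `uri` that is then passed to `resources/read`. A minimal sketch of the listing request:

```python
import json

# Shape of an MCP "resources/list" request. The response entries each
# include a uri usable with "resources/read".
request = {"jsonrpc": "2.0", "id": 3, "method": "resources/list", "params": {}}
print(json.dumps(request))
```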
## MCP directory API

We provide all the information about MCP servers via our MCP API.
```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/amazingashis/mcp-deployment'
```
If you have feedback or need assistance with the MCP directory API, please join our Discord server.