# Domain Name QLoRA Training
This folder contains a minimal QLoRA fine-tuning setup for
`Qwen2.5-7B-Instruct` using the JSONL dataset generated by the MCP repo.
## Dataset
- Expected format: JSONL with `prompt` and `response` fields.
- Example line:
```json
{"prompt":"Generate 10 brandable names for a crypto task platform...","response":"- name.com — reason\n- name.ai — reason"}
```
- Default path: `data/domain-dataset-100k.jsonl`
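Before training, it can be worth checking that each line parses and carries both fields. A minimal sketch (the function name `validate_line` is illustrative, not part of the training script):

```python
import json

def validate_line(line: str) -> dict:
    """Parse one JSONL line and verify it has non-empty prompt/response fields."""
    record = json.loads(line)
    for field in ("prompt", "response"):
        if not isinstance(record.get(field), str) or not record[field]:
            raise ValueError(f"missing or empty field: {field}")
    return record

line = '{"prompt":"Generate 10 brandable names for a crypto task platform...","response":"- name.com — reason\\n- name.ai — reason"}'
record = validate_line(line)
print(record["response"].splitlines()[0])  # → - name.com — reason
```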
## Quick start (GPU box)
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r training/requirements.txt
cp training/.env.example training/.env
# Edit training/.env and set HF_TOKEN if required by the model host.
set -a
source training/.env
set +a
python training/qlora_train.py \
  --model Qwen/Qwen2.5-7B-Instruct \
  --data data/domain-dataset-100k.jsonl \
  --output training/output \
  --max_seq_len 512 \
  --batch_size 8 \
  --grad_accum 4 \
  --epochs 1
```
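For supervised fine-tuning, each prompt/response pair is typically rendered into the model's chat template before tokenization. A minimal sketch assuming ChatML, the format Qwen2.5 models use (the function `format_example` is hypothetical; the actual script may call the tokenizer's `apply_chat_template` instead):

```python
def format_example(record: dict) -> str:
    """Render one prompt/response pair as a ChatML conversation turn.
    ChatML is the chat format used by Qwen2.5-Instruct models."""
    return (
        "<|im_start|>user\n" + record["prompt"] + "<|im_end|>\n"
        "<|im_start|>assistant\n" + record["response"] + "<|im_end|>\n"
    )

example = {
    "prompt": "Generate 10 brandable names for a crypto task platform...",
    "response": "- name.com — reason\n- name.ai — reason",
}
text = format_example(example)
print(text)
```

With `--max_seq_len 512`, pairs whose formatted text exceeds 512 tokens would be truncated, so keep responses compact when regenerating the dataset.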
## Notes
- QLoRA uses 4-bit base weights + LoRA adapters.
- Output folder contains adapter weights and trainer logs.
- Naming styles and constraints are baked into the dataset prompts, so to change them, regenerate the dataset before retraining.
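The 4-bit-base-plus-adapters setup usually comes down to two configs: a `BitsAndBytesConfig` for quantizing the frozen base weights and a `LoraConfig` for the trainable adapters. A sketch of what such a configuration might look like; the specific `r`, `lora_alpha`, and `target_modules` values here are illustrative assumptions, not necessarily what `qlora_train.py` uses:

```python
import torch
from transformers import BitsAndBytesConfig
from peft import LoraConfig

# 4-bit NF4 quantization for the frozen base model (standard QLoRA recipe).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

# LoRA adapters on the attention projections; values are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```

Only the adapter weights produced by this setup land in the output folder; at inference time they are loaded on top of the same 4-bit base model.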