Got it, Michael. Here’s the entire, self-contained kit in the chat. Copy each file into your project as shown. No downloads needed. I’ve kept it clean, step-by-step, and minimal-click.

0) Structure

Create this folder layout anywhere (e.g., C:\tools\character_agent_kit or /opt/character_agent_kit):

    character_agent_kit/
      frontend/
        index.html
      backend/
        affogato_5views.py
        pipeline_poc.py
        requirements.txt
        .env.example
        client_secret.json   # you will add this file (YouTube OAuth)
      n8n/
        workflow.json
      README.md

1) Frontend (upload → n8n webhook)

Save as frontend/index.html

    <!doctype html>
    <html lang="en">
    <head>
      <meta charset="utf-8" />
      <meta name="viewport" content="width=device-width, initial-scale=1" />
      <title>Character Builder (PoC)</title>
      <style>
        body{font-family:system-ui,-apple-system,Segoe UI,Roboto,Arial,sans-serif;max-width:960px;margin:24px auto;padding:0 16px;}
        h1{margin-bottom:8px;}
        fieldset{border:1px solid #ddd;padding:16px;margin-bottom:16px;border-radius:12px;}
        label{display:block;margin:8px 0 4px;}
        input,select,textarea{width:100%;padding:8px;border:1px solid #ccc;border-radius:8px;}
        button{padding:10px 16px;border:0;border-radius:10px;background:#111;color:#fff;font-weight:600;cursor:pointer;}
        .row{display:grid;grid-template-columns:1fr 1fr;gap:12px;}
        .small{font-size:12px;color:#666;}
      </style>
    </head>
    <body>
      <h1>Character Builder</h1>
      <p class="small">Set your n8n Webhook URL below (replace the placeholder) before use.</p>
      <!-- Replace YOUR_N8N_WEBHOOK_URL with the Webhook node's Production URL -->
      <form id="f" method="POST" enctype="multipart/form-data" action="YOUR_N8N_WEBHOOK_URL">
        <fieldset>
          <legend>Upload</legend>
          <label>Input Photo (person/animal)</label>
          <input type="file" name="photo" accept="image/*" required />
        </fieldset>
        <fieldset>
          <legend>Character Details</legend>
          <div class="row">
            <div>
              <label>Character Name</label>
              <input name="name" placeholder="Milka" required />
            </div>
            <div>
              <label>Species/Type</label>
              <select name="species">
                <option>Human</option><option>Dog</option><option>Cat</option><option>Other</option>
              </select>
            </div>
          </div>
          <div class="row">
            <div>
              <label>Gender</label>
              <input name="gender" placeholder="Female" />
            </div>
            <div>
              <label>Age</label>
              <input name="age" type="number" min="0" />
            </div>
          </div>
          <label>Location</label>
          <input name="location" placeholder="Dublin, IE" />
          <label>Attitude (1–2 lines)</label>
          <input name="attitude" placeholder="Low-energy sarcasm" />
          <label>Personality (2–5 bullets)</label>
          <textarea name="personality" rows="3" placeholder="• Gruff but adored&#10;• Blunt honesty"></textarea>
          <label>Background</label>
          <textarea name="background" rows="3" placeholder="Rescue pup; Dublin."></textarea>
          <label>Style Tags (comma-separated)</label>
          <input name="style_tags" placeholder="Puppet, Cartoon" />
          <label>ElevenLabs Voice ID</label>
          <input name="voice_id" placeholder="JBFqnCBsd6RMkjVDRZzb" />
          <label>Short Script (8–12 lines)</label>
          <textarea name="script" rows="8" placeholder="Hello from Milka..."></textarea>
        </fieldset>
        <fieldset>
          <legend>Options</legend>
          <label>Style Hint (for image gen)</label>
          <input name="style_hint" value="puppet-like stylized character, soft shading, simple lines" />
        </fieldset>
        <button type="submit">Create Character</button>
      </form>
      <p id="status" class="small"></p>
      <script>
        document.getElementById('f').addEventListener('submit', () => {
          const s = document.getElementById('status');
          s.textContent = 'Uploading… this may take a minute.';
        });
      </script>
    </body>
    </html>

2) Backend: RenderNet/Affogato 5 views

Save as backend/affogato_5views.py

    import os, time, json, argparse, re
    from pathlib import Path
    import requests

    API_BASE = "https://api.rendernet.ai"
    TIMEOUT = 120
    POLL_SEC = 3

    def slugify(s):
        s = re.sub(r"[^a-z0-9]+", "-", s.lower().strip()).strip("-")
        return s or "item"

    def post(url, key, **kwargs):
        headers = kwargs.pop("headers", {})
        headers.update({"X-API-KEY": key})
        r = requests.post(url, headers=headers, timeout=TIMEOUT, **kwargs)
        r.raise_for_status()
        return r.json()

    def get(url, key, **kwargs):
        headers = kwargs.pop("headers", {})
        headers.update({"X-API-KEY": key})
        r = requests.get(url, headers=headers, timeout=TIMEOUT, **kwargs)
        r.raise_for_status()
        return r.json()

    def upload_asset_v2(x_api_key, file_path):
        with open(file_path, "rb") as f:
            r = requests.post(f"{API_BASE}/pub/v1/assets/v2/upload",
                              headers={"X-API-KEY": x_api_key},
                              files={"file": (Path(file_path).name, f, "application/octet-stream")},
                              timeout=TIMEOUT)
        r.raise_for_status()
        data = r.json()["data"]
        return data["id"], data.get("url")

    def create_character(x_api_key, asset_id, name, prompt, mode="balanced"):
        body = {"asset_id": asset_id, "character_type": "Custom", "name": name, "prompt": prompt, "mode": mode}
        data = post(f"{API_BASE}/pub/v1/characters", x_api_key,
                    headers={"Content-Type": "application/json"}, json=body)["data"]
        return data["id"]

    def generate_once(x_api_key, character_id, positive, negative, aspect="1:1", model="JuggernautXL",
                      steps=20, cfg_scale=7, quality="Plus", sampler="DPM++ 2M Karras", seed=None):
        body = [{
            "aspect_ratio": aspect,
            "batch_size": 1,
            "cfg_scale": cfg_scale,
            "character": {"character_id": character_id, "mode": "balanced"},
            "model": model,
            "prompt": {"positive": positive, "negative": negative},
            "quality": quality,
            "sampler": sampler,
            "steps": steps,
            "seed": seed
        }]
        resp = post(f"{API_BASE}/pub/v1/generations", x_api_key,
                    headers={"Content-Type": "application/json"}, json=body)
        return resp["data"]["generation_id"]

    def wait_for_media(x_api_key, generation_id):
        while True:
            data = get(f"{API_BASE}/pub/v1/generations/{generation_id}", x_api_key)["data"]
            status = data.get("status")
            if status in ("completed", "failed"):
                if status == "failed":
                    raise RuntimeError(f"Generation {generation_id} failed.")
                media = data.get("media", [])
                urls = []
                for m in media:
                    if m.get("url"):
                        urls.append(m["url"])
                    else:
                        md = get(f"{API_BASE}/pub/v1/media/{m['id']}", x_api_key)["data"]
                        if md.get("url"):
                            urls.append(md["url"])
                return urls
            time.sleep(POLL_SEC)

    def download(url, out_path):
        r = requests.get(url, timeout=TIMEOUT)
        r.raise_for_status()
        Path(out_path).write_bytes(r.content)
        return str(out_path)

    VIEWS = [
        ("front", "front view, facing camera"),
        ("left", "left profile view"),
        ("right", "right profile view"),
        ("back", "rear view, back of head and body"),
        ("expressive", "front view, expressive pose, lively eyes")
    ]

    NEGATIVE = "nsfw, deformed, extra limbs, bad anatomy, distorted face, text, watermark, worst quality, jpeg artifacts, duplicate, morbid, mutilated"

    def main():
        ap = argparse.ArgumentParser(description="Affogato 5-view generator")
        ap.add_argument("--x_api_key", required=True, help="Affogato X-API-KEY")
        ap.add_argument("--input_image", required=True, help="Path to input face image")
        ap.add_argument("--name", required=True, help="Character name")
        ap.add_argument("--style_hint", default="clean puppet-style character on plain background, evenly lit")
        ap.add_argument("--outdir", default="./outputs")
        args = ap.parse_args()

        out_root = Path(args.outdir) / slugify(args.name) / "images"
        out_root.mkdir(parents=True, exist_ok=True)

        asset_id, _ = upload_asset_v2(args.x_api_key, args.input_image)
        char_prompt = f"{args.style_hint}. Consistent features. Blank background. High clarity."
        char_id = create_character(args.x_api_key, asset_id, args.name, char_prompt, mode="balanced")

        saved = []
        for key, view in VIEWS:
            pos = f"{args.style_hint}. {view}. Blank background. Centered, full body if possible, no cropping."
            gen_id = generate_once(args.x_api_key, char_id, pos, NEGATIVE, aspect="1:1")
            urls = wait_for_media(args.x_api_key, gen_id)
            if not urls:
                raise RuntimeError(f"No media URLs for {key}")
            out_path = out_root / f"{key}.png"
            download(urls[0], out_path)
            saved.append(str(out_path))
            print(f"[ok] {key}: {out_path}")

        print("\nSaved files:")
        for s in saved:
            print(" -", s)
        print("\nDone.")

    if __name__ == "__main__":
        main()

3) Backend: TTS → simple MP4 → Notion → YouTube (Unlisted)

Save as backend/pipeline_poc.py

    import os, json, argparse, re
    from pathlib import Path
    from datetime import datetime
    import requests
    from dotenv import load_dotenv

    # Media
    from moviepy.editor import ImageClip, concatenate_videoclips, AudioFileClip
    # Note: avoiding TextClip to keep dependencies simple

    # YouTube
    import googleapiclient.discovery
    from googleapiclient.http import MediaFileUpload
    from google_auth_oauthlib.flow import InstalledAppFlow
    from google.oauth2.credentials import Credentials

    load_dotenv()

    # --- ENV
    NOTION_TOKEN = os.getenv("NOTION_TOKEN", "")
    CHAR_DB_ID = os.getenv("NOTION_CHAR_DB_ID", "")
    LOG_DB_ID = os.getenv("NOTION_LOG_DB_ID", "")
    ELEVEN_API_KEY = os.getenv("ELEVEN_API_KEY", "")
    YT_PRIVACY = os.getenv("YT_UPLOAD_PRIVACY", "unlisted")
    YT_CATEGORY_ID = os.getenv("YT_CATEGORY_ID", "24")
    BASE_DIR = Path(os.getenv("BASE_DIR", Path(__file__).resolve().parent))
    GOOGLE_TOKEN_FILE = BASE_DIR / "token.json"
    GOOGLE_CLIENT_FILE = BASE_DIR / "client_secret.json"
    SCOPES = ["https://www.googleapis.com/auth/youtube.upload"]

    HEADERS_NOTION = {
        "Authorization": f"Bearer {NOTION_TOKEN}",
        "Notion-Version": "2022-06-28",
        "Content-Type": "application/json"
    }

    def ensure_dir(p: Path):
        p.mkdir(parents=True, exist_ok=True)

    def slugify(s):
        s = re.sub(r"[^a-z0-9]+", "-", s.lower().strip()).strip("-")
        return s[:60] or "item"

    # ---------- ElevenLabs ----------
    def make_tts(text, out_mp3, voice_id, stability=0.3, similarity=0.7):
        url = f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}"
        payload = {
            "text": text,
            "model_id": "eleven_multilingual_v2",
            "voice_settings": {"stability": stability, "similarity_boost": similarity}
        }
        headers = {"xi-api-key": ELEVEN_API_KEY, "Content-Type": "application/json"}
        r = requests.post(url, headers=headers, json=payload, timeout=120)
        r.raise_for_status()
        Path(out_mp3).write_bytes(r.content)
        return str(out_mp3)

    # ---------- Simple video composer ----------
    def compose_video(images, audio_mp3, out_mp4, size=(1920, 1080)):
        audio = AudioFileClip(str(audio_mp3))
        duration = max(audio.duration, 1.0)
        # ensure 5 slots: cycle through the supplied images if fewer than 5
        n = len(images)
        while len(images) < 5:
            images.append(images[len(images) % n])
        beat = duration / 5.0
        clips = []
        for img_path in images[:5]:
            img_clip = ImageClip(img_path).resize(newsize=size).set_duration(beat)
            clips.append(img_clip)
        final = concatenate_videoclips(clips, method="compose", padding=-0.25).set_audio(audio)
        final.write_videofile(str(out_mp4), fps=30, codec="libx264", audio_codec="aac", threads=4)
        final.close()
        audio.close()
        return str(out_mp4)

    # ---------- Notion ----------
    def notion_create_or_get_character(slug, properties):
        qurl = f"https://api.notion.com/v1/databases/{CHAR_DB_ID}/query"
        qpayload = {"filter": {"property": "Slug", "rich_text": {"equals": slug}}}
        r = requests.post(qurl, headers=HEADERS_NOTION, json=qpayload, timeout=60)
        r.raise_for_status()
        data = r.json().get("results", [])
        if data:
            return data[0]["id"]
        url = "https://api.notion.com/v1/pages"
        payload = {"parent": {"database_id": CHAR_DB_ID}, "properties": properties}
        r = requests.post(url, headers=HEADERS_NOTION, json=payload, timeout=60)
        r.raise_for_status()
        return r.json()["id"]

    def notion_update_character_paths(page_id, image_paths_dict):
        props = {}
        # Use optional Text fields in your DB: "Image Front Path", etc.
        for label, path_val in image_paths_dict.items():
            props[f"{label} Path"] = {"rich_text": [{"text": {"content": str(path_val)}}]}
        url = f"https://api.notion.com/v1/pages/{page_id}"
        r = requests.patch(url, headers=HEADERS_NOTION, json={"properties": props}, timeout=60)
        r.raise_for_status()

    def notion_create_video_log(title, character_page_id, scene_slug, mp4_path, audio_path,
                                project="YouTube-Review", interactions=""):
        url = "https://api.notion.com/v1/pages"
        props = {
            "Title": {"title": [{"text": {"content": title}}]},
            "Project/Series": {"select": {"name": project}},
            "Scene/Script Slug": {"rich_text": [{"text": {"content": scene_slug}}]},
            "Interactions": {"rich_text": [{"text": {"content": interactions}}]},
            "Output MP4": {"rich_text": [{"text": {"content": str(mp4_path)}}]},
            "Output Audio": {"rich_text": [{"text": {"content": str(audio_path)}}]},
            "Date": {"date": {"start": datetime.utcnow().date().isoformat()}},
            "Character": {"relation": [{"id": character_page_id}]}
        }
        payload = {"parent": {"database_id": LOG_DB_ID}, "properties": props}
        r = requests.post(url, headers=HEADERS_NOTION, json=payload, timeout=60)
        r.raise_for_status()
        return r.json()["id"]

    # ---------- YouTube ----------
    def youtube_service():
        creds = None
        if GOOGLE_TOKEN_FILE.exists():
            creds = Credentials.from_authorized_user_file(str(GOOGLE_TOKEN_FILE), SCOPES)
        if not creds or not creds.valid:
            flow = InstalledAppFlow.from_client_secrets_file(str(GOOGLE_CLIENT_FILE), SCOPES)
            creds = flow.run_local_server(port=0)
            GOOGLE_TOKEN_FILE.write_text(creds.to_json())
        return googleapiclient.discovery.build("youtube", "v3", credentials=creds)

    def youtube_upload_unlisted(ytsvc, video_file, title, description, tags=None, category_id="24",
                                privacy_status="unlisted", thumbnail_path=None):
        body = {"snippet": {"title": title, "description": description, "tags": tags or [], "categoryId": category_id},
                "status": {"privacyStatus": privacy_status}}
        media = MediaFileUpload(video_file, chunksize=-1, resumable=True, mimetype="video/*")
        request =
            ytsvc.videos().insert(part="snippet,status", body=body, media_body=media)
        response = None
        while response is None:
            status, response = request.next_chunk()
        vid = response["id"]
        # thumbnail optional
        if thumbnail_path and Path(thumbnail_path).exists():
            ytsvc.thumbnails().set(videoId=vid, media_body=thumbnail_path).execute()
        return f"https://youtu.be/{vid}"

    # ---------- Main ----------
    def main():
        ap = argparse.ArgumentParser()
        ap.add_argument("--name", required=True)
        ap.add_argument("--species", default="Other")
        ap.add_argument("--gender", default="")
        ap.add_argument("--age", type=int, default=None)
        ap.add_argument("--location", default="")
        ap.add_argument("--attitude", default="")
        ap.add_argument("--personality", default="")
        ap.add_argument("--background", default="")
        ap.add_argument("--style_tags", default="Puppet")
        ap.add_argument("--voice_id", required=True)
        ap.add_argument("--script_file", required=True)
        ap.add_argument("--img_front", required=True)
        ap.add_argument("--img_left", required=True)
        ap.add_argument("--img_right", required=True)
        ap.add_argument("--img_back", required=True)
        ap.add_argument("--img_expr", required=True)
        ap.add_argument("--outdir", default=str(BASE_DIR / "outputs"))
        args = ap.parse_args()

        name = args.name.strip()
        slug = slugify(name)
        outdir = Path(args.outdir) / f"{slug}_{datetime.utcnow().strftime('%Y%m%d_%H%M%S')}"
        ensure_dir(outdir)

        # TTS
        script_text = Path(args.script_file).read_text(encoding="utf-8").strip()
        audio_mp3 = outdir / f"{slug}.mp3"
        make_tts(script_text, audio_mp3, args.voice_id)

        # Video
        video_mp4 = outdir / f"{slug}.mp4"
        images = [args.img_front, args.img_left, args.img_right, args.img_back, args.img_expr]
        compose_video(images, audio_mp3, video_mp4)

        # Notion upsert
        style_list = [t.strip() for t in args.style_tags.split(",") if t.strip()]
        props = {
            "Name": {"title": [{"text": {"content": name}}]},
            "Slug": {"rich_text": [{"text": {"content": slug}}]},
            "Species/Type": {"select": {"name": args.species}},
            "Gender": {"select": {"name": args.gender}} if args.gender else None,
            "Age": {"number": args.age} if args.age is not None else None,
            "Style Tag": {"multi_select": [{"name": t} for t in style_list]},
            "Personality": {"rich_text": [{"text": {"content": args.personality}}]},
            "Background": {"rich_text": [{"text": {"content": args.background}}]},
            "Attitude": {"rich_text": [{"text": {"content": args.attitude}}]},
            "Voice Code": {"rich_text": [{"text": {"content": args.voice_id}}]},
            "Status": {"select": {"name": "Approved"}},
            "First Use Date": {"date": {"start": datetime.utcnow().date().isoformat()}},
            "Last Updated": {"date": {"start": datetime.utcnow().date().isoformat()}}
        }
        props = {k: v for k, v in props.items() if v is not None}
        page_id = notion_create_or_get_character(slug, props)

        # Image paths (Text props in Notion)
        notion_update_character_paths(page_id, {
            "Image Front": args.img_front,
            "Image Left": args.img_left,
            "Image Right": args.img_right,
            "Image Back": args.img_back,
            "Image Expressive": args.img_expr
        })

        # Log entry
        log_title = f"{name} – Auto Render"
        scene_slug = f"{slug}-auto"
        notion_create_video_log(log_title, page_id, scene_slug, video_mp4, audio_mp3)

        # YouTube (Unlisted draft)
        ytsvc = youtube_service()
        yt_title = f"{name}: Short Scene"
        yt_desc = f"Character: {name}\nAuto-rendered proof."
        yt_tags = ["puppet", "automation", "PoC"]
        yt_url = youtube_upload_unlisted(ytsvc, str(video_mp4), yt_title, yt_desc, yt_tags,
                                         category_id=YT_CATEGORY_ID, privacy_status=YT_PRIVACY,
                                         thumbnail_path=args.img_front)

        summary = {
            "character": name,
            "slug": slug,
            "files": {"audio_mp3": str(audio_mp3), "video_mp4": str(video_mp4), "images": images},
            "notion": {"character_page_id": page_id},
            "youtube_unlisted_url": yt_url
        }
        (outdir / "run_summary.json").write_text(json.dumps(summary, indent=2), encoding="utf-8")
        print(json.dumps(summary, indent=2))

    if __name__ == "__main__":
        main()

4) Backend requirements

Save as backend/requirements.txt

    python-dotenv
    requests
    moviepy
    pydub
    google-auth-oauthlib
    google-api-python-client

Note: MoviePy requires ffmpeg on the machine (install via Chocolatey on Windows, apt/brew otherwise).

5) Backend env template

Save as backend/.env.example

    # Notion
    NOTION_TOKEN=secret_xxx
    NOTION_CHAR_DB_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
    NOTION_LOG_DB_ID=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

    # ElevenLabs
    ELEVEN_API_KEY=xxxxxxxx

    # YouTube
    YT_UPLOAD_PRIVACY=unlisted
    YT_CATEGORY_ID=24

    # App base dir (absolute path to backend folder)
    BASE_DIR=/absolute/path/to/character_agent_kit/backend

Also put your Google client_secret.json in backend/ (from Google Cloud Console with YouTube Data API v3 enabled).
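A missing token in .env otherwise only surfaces mid-run (e.g. as a Notion 401 after TTS has already billed). A small preflight like the sketch below, using the variable names from .env.example, fails fast instead. The `missing_settings` helper is an illustrative addition, not part of the kit.

```python
# Hypothetical preflight (not part of the kit): check the variables named
# in .env.example before starting a run.
import os

REQUIRED = ["NOTION_TOKEN", "NOTION_CHAR_DB_ID", "NOTION_LOG_DB_ID", "ELEVEN_API_KEY"]

def missing_settings(env=None):
    """Return the names of required variables that are unset or blank."""
    env = os.environ if env is None else env
    return [k for k in REQUIRED if not env.get(k, "").strip()]
```

Call it after `load_dotenv()` and abort with the returned list if it is non-empty.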
6) n8n workflow

Save as n8n/workflow.json

    {
      "name": "Character Agent – Webhook to Pipeline",
      "nodes": [
        {
          "parameters": { "httpMethod": "POST", "path": "agent/create-character", "options": { "binaryData": true } },
          "id": "Webhook_1",
          "name": "Webhook (Upload)",
          "type": "n8n-nodes-base.webhook",
          "typeVersion": 1,
          "position": [200, 300]
        },
        {
          "parameters": { "operation": "toFile", "binaryPropertyName": "photo", "fileName": "/tmp/upload_photo.png" },
          "id": "MoveBinary_1",
          "name": "Binary → File",
          "type": "n8n-nodes-base.moveBinaryData",
          "typeVersion": 1,
          "position": [430, 300]
        },
        {
          "parameters": {
            "command": "python3",
            "arguments": [
              "/ABS/PATH/TO/backend/affogato_5views.py",
              "--x_api_key", "$env.AFFOGATO_API_KEY",
              "--input_image", "/tmp/upload_photo.png",
              "--name", "={{$json[\"name\"]}}",
              "--style_hint", "={{$json[\"style_hint\"] || \"puppet-like stylized character, soft shading, simple lines\"}}",
              "--outdir", "/ABS/PATH/TO/backend/outputs"
            ]
          },
          "id": "Exec_5views",
          "name": "Execute: 5 Views",
          "type": "n8n-nodes-base.executeCommand",
          "typeVersion": 1,
          "position": [690, 300]
        },
        {
          "parameters": {
            "fileName": "/tmp/script.txt",
            "fileContent": "={{$json[\"script\"] || \"Hello from our new character.\"}}"
          },
          "id": "Write_Script",
          "name": "Write Script",
          "type": "n8n-nodes-base.writeBinaryFile",
          "typeVersion": 1,
          "position": [690, 420]
        },
        {
          "parameters": {
            "command": "python3",
            "arguments": [
              "/ABS/PATH/TO/backend/pipeline_poc.py",
              "--name", "={{$json[\"name\"]}}",
              "--species", "={{$json[\"species\"] || \"Other\"}}",
              "--gender", "={{$json[\"gender\"] || \"\"}}",
              "--age", "={{$json[\"age\"] || \"0\"}}",
              "--location", "={{$json[\"location\"] || \"\"}}",
              "--attitude", "={{$json[\"attitude\"] || \"\"}}",
              "--personality", "={{$json[\"personality\"] || \"\"}}",
              "--background", "={{$json[\"background\"] || \"\"}}",
              "--style_tags", "={{$json[\"style_tags\"] || \"Puppet\"}}",
              "--voice_id", "={{$json[\"voice_id\"]}}",
              "--script_file", "/tmp/script.txt",
              "--img_front", "/ABS/PATH/TO/backend/outputs/{{ $json[\"name\"].toLowerCase().replace(/[^a-z0-9]+/g,'-') }}/images/front.png",
              "--img_left", "/ABS/PATH/TO/backend/outputs/{{ $json[\"name\"].toLowerCase().replace(/[^a-z0-9]+/g,'-') }}/images/left.png",
              "--img_right", "/ABS/PATH/TO/backend/outputs/{{ $json[\"name\"].toLowerCase().replace(/[^a-z0-9]+/g,'-') }}/images/right.png",
              "--img_back", "/ABS/PATH/TO/backend/outputs/{{ $json[\"name\"].toLowerCase().replace(/[^a-z0-9]+/g,'-') }}/images/back.png",
              "--img_expr", "/ABS/PATH/TO/backend/outputs/{{ $json[\"name\"].toLowerCase().replace(/[^a-z0-9]+/g,'-') }}/images/expressive.png",
              "--outdir", "/ABS/PATH/TO/backend/outputs"
            ]
          },
          "id": "Exec_Pipeline",
          "name": "Execute: Pipeline (TTS+Video+Notion+YouTube)",
          "type": "n8n-nodes-base.executeCommand",
          "typeVersion": 1,
          "position": [940, 300]
        },
        {
          "parameters": { "responseBody": "={{$json}}" },
          "id": "Respond",
          "name": "Respond",
          "type": "n8n-nodes-base.respondToWebhook",
          "typeVersion": 1,
          "position": [1200, 300]
        }
      ],
      "connections": {
        "Webhook (Upload)": { "main": [[ { "node": "Binary → File", "type": "main", "index": 0 }, { "node": "Write Script", "type": "main", "index": 0 } ]] },
        "Binary → File": { "main": [[ { "node": "Execute: 5 Views", "type": "main", "index": 0 } ]] },
        "Write Script": { "main": [[ { "node": "Execute: Pipeline (TTS+Video+Notion+YouTube)", "type": "main", "index": 0 } ]] },
        "Execute: 5 Views": { "main": [[ { "node": "Execute: Pipeline (TTS+Video+Notion+YouTube)", "type": "main", "index": 0 } ]] },
        "Execute: Pipeline (TTS+Video+Notion+YouTube)": { "main": [[ { "node": "Respond", "type": "main", "index": 0 } ]] }
      },
      "active": false
    }

Replace /ABS/PATH/TO/ with the absolute path to your backend folder. Ensure your n8n host has python3 and ffmpeg in PATH, and set the environment variable AFFOGATO_API_KEY on your n8n host.
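One subtle coupling in the workflow above: the `--img_*` arguments rebuild the output folder name with the inline expression `name.toLowerCase().replace(/[^a-z0-9]+/g,'-')`, which must produce exactly the same slug as the Python `slugify()` that created the folder. The Python version also trims leading/trailing dashes, so a name like "Milka!" would diverge ("milka" vs "milka-"). A sketch of the backend's rule, usable to pre-compute and compare paths (the `expected_front_image` helper is mine, not part of the kit):

```python
# Reproduces the slug rule used by slugify() in backend/affogato_5views.py.
import re

def slugify(s):
    # lowercase, collapse runs of non-alphanumerics to "-", trim stray dashes
    s = re.sub(r"[^a-z0-9]+", "-", s.lower().strip()).strip("-")
    return s or "item"

def expected_front_image(name, outdir):
    """Path the 5-views script writes, e.g. for the --img_front argument."""
    return f"{outdir}/{slugify(name)}/images/front.png"
```

If you keep the n8n expression as-is, avoid character names that start or end with punctuation, or mirror the dash-trim in the expression.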
If your n8n doesn’t include the Write Binary File node, replace it with an “Execute Command” node that echoes the script into /tmp/script.txt.

7) README (concise)

Save as README.md

    # Character Agent Kit (PoC)

    Purpose: Upload a photo + metadata on a simple page → n8n → generate 5 views (Affogato)
    → TTS (ElevenLabs) → simple MP4 → Notion upsert → YouTube Unlisted draft.

    ## Prereqs
    - Notion API token + DB IDs (Characters, Sample Video Log)
    - ElevenLabs API key + voice_id
    - Affogato (RenderNet) API key
    - Google Cloud project with YouTube Data API v3 enabled + client_secret.json
    - Python 3.10+ and ffmpeg installed
    - n8n running

    ## Backend setup
    cd backend
    python -m venv .venv
    # Windows: .\.venv\Scripts\Activate.ps1
    # Linux/Mac: source .venv/bin/activate
    pip install --upgrade pip
    pip install -r requirements.txt
    cp .env.example .env
    # edit .env with your tokens + absolute BASE_DIR
    # put client_secret.json in backend/

    ## n8n
    - Import n8n/workflow.json
    - Replace /ABS/PATH/TO/ with your absolute backend path
    - Add environment var on the n8n host: AFFOGATO_API_KEY=your_key
    - Activate workflow; copy Webhook Production URL

    ## Frontend
    - Open frontend/index.html
    - Set action="YOUR_N8N_WEBHOOK_URL"
    - Open the file in your browser and submit

    ## Notion fields (Characters)
    Name (Title), Slug (Text), Species/Type (Select), Gender (Select), Age (Number),
    Style Tag (Multi-select), Personality (Rich text), Background (Rich text),
    Attitude (Text), Voice Code (Text), Status (Select), First Use Date (Date),
    Last Updated (Date)

    Optional Text fields: Image Front Path, Image Left Path, Image Right Path,
    Image Back Path, Image Expressive Path

    ## Outputs
    - backend/outputs/<slug_timestamp>/
    - Notion character and log entries
    - YouTube Unlisted link printed to console and saved in run_summary.json

    ## Troubleshooting
    - ffmpeg not found → install and restart shell
    - Notion 400 → property names don't match DB
    - ElevenLabs error → wrong API key or voice_id
    - Affogato drift → regenerate the bad view; tighten style_hint

8) Agent prompts (for “Claude Code local” + n8n validation)

A) “Local Claude Code” setup instruction (paste once)

    You are my setup assistant. Do the following exactly, in order:
    1) Create folders: backend/, frontend/, n8n/
    2) I will paste file contents; create each file at the given path with that exact content.
    3) In backend/: create a venv, activate it, pip install -r requirements.txt
    4) Print Python version, ffmpeg availability, and the absolute path to backend/
    5) Stop and wait for my secrets and client_secret.json

B) Payload validator (n8n LLM node – optional)

System prompt:

    You are a strict payload validator for a character pipeline.
    Validate JSON keys: {name, species, gender, age, location, attitude, personality,
    background, style_tags, voice_id, style_hint}
    If required fields are missing (name, voice_id, photo), respond:
    { "status": "fix_request", "message": "<missing or invalid fields>",
      "examples": {"name":"Milka", "voice_id":"JBF..."} }
    If valid, respond:
    { "status": "ok", "payload": { <cleaned values> } }

User template (map from Webhook JSON):

    {
      "name":"{{ $json.name }}",
      "species":"{{ $json.species }}",
      "gender":"{{ $json.gender }}",
      "age":"{{ $json.age }}",
      "location":"{{ $json.location }}",
      "attitude":"{{ $json.attitude }}",
      "personality":"{{ $json.personality }}",
      "background":"{{ $json.background }}",
      "style_tags":"{{ $json.style_tags }}",
      "voice_id":"{{ $json.voice_id }}",
      "style_hint":"{{ $json.style_hint }}"
    }

9) Quick test (proof-of-concept)
1. In backend/.env, set tokens + absolute BASE_DIR.
2. Put client_secret.json into backend/.
3. In n8n, set AFFOGATO_API_KEY.
4. Open frontend/index.html, set action to your Webhook URL.
5. Submit a clear face photo + short script.
6. Check: a) five PNGs under backend/outputs/<slug>/images, b) MP4 + MP3 + run_summary.json, c) Notion entries, d) YouTube Unlisted draft.

10) Known limitations / “needs verification”
- Five-view identity consistency depends on the model settings in your Affogato tenant.
- If drift occurs, regenerate the single bad view or tighten style_hint.
- Notion file previews: for this PoC, we store local paths. To see thumbnails in Notion, switch to hosted URLs or upload images as page blocks.
- The first YouTube run prompts an OAuth browser window on the machine running pipeline_poc.py.

If you want me to merge both Python scripts into a single “all-in-one” script and adjust the n8n workflow to call just one command, say “merge it,” and I’ll paste that full script plus an updated workflow JSON here.

You said:

My n8n is on my QNAP, and the agent files you've created here, plus the documentation and all that, are all on my local C drive. Should they be on the QNAP? Should they be somewhere else? Or should they stay here? I don't know, you tell me. I just want to copy and paste these prompts into code, let it do the work for me locally, and connect it to n8n on my QNAP so it does all the work there. I already have the APIs set up. I already have an env file.

ChatGPT said:

Michael — here’s the straight answer first, then the exact steps.

Short answer (where things live)
- n8n runs on your QNAP → any commands it runs (Execute Command nodes) must see the files on the QNAP filesystem (or inside the n8n container).
- Backend folder (backend/) → put this on the QNAP, not your C:\ drive.
- .env and API keys → keep them on the QNAP in backend/.env (you said you already have them — great).
- Frontend (frontend/index.html) → can live anywhere (your C:\ drive is fine). It just POSTs to the n8n Webhook URL on the QNAP.
- n8n workflow → on the QNAP (import into your n8n there). Update the absolute paths to point to the QNAP copy of backend/.

That’s it. Frontend can stay local. Backend + workflow paths must be on the QNAP.
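The host-vs-container split is the usual source of broken paths here: you copy files to the QNAP share, but the workflow must reference the path as seen inside the n8n container. A minimal sketch of that translation, assuming the example mount used below (/share/Container/character_agent_kit on the QNAP, mounted at /data/character_agent_kit in the container; both paths are illustrative, and the helper is mine, not part of the kit):

```python
# Sketch: rewrite a QNAP host path to the path the n8n container sees,
# assuming the host folder is bind-mounted at CONTAINER_ROOT.
from pathlib import PurePosixPath

HOST_ROOT = PurePosixPath("/share/Container/character_agent_kit")  # path on the QNAP itself
CONTAINER_ROOT = PurePosixPath("/data/character_agent_kit")        # mount point inside n8n

def to_container_path(host_path):
    """Return the in-container equivalent of a host path under HOST_ROOT."""
    rel = PurePosixPath(host_path).relative_to(HOST_ROOT)
    return str(CONTAINER_ROOT / rel)
```

Every /ABS/PATH/TO/ placeholder in the workflow should be the `to_container_path` form, never the /share/... form.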
Recommended layout on QNAP (simple + reliable)

On the QNAP create a folder for this, e.g.:

    /share/Container/character_agent_kit/
      backend/
      outputs/    # will be created by the scripts

Copy these files from the chat to the QNAP:
- backend/affogato_5views.py
- backend/pipeline_poc.py
- backend/requirements.txt
- backend/.env ← use your existing env (tokens, DB IDs, etc.)
- backend/client_secret.json ← YouTube OAuth file

If your n8n runs in a Docker container on QNAP (Container Station), mount /share/Container/character_agent_kit into the container (e.g., at /data/character_agent_kit). Then use absolute paths inside the container (like /data/character_agent_kit/backend/...) in the workflow.

Step-by-step (QNAP first run)
1. Put backend on QNAP: copy the backend/ files to /share/Container/character_agent_kit/backend.
2. Install the runtime in the n8n environment: your n8n container/host needs python3, pip, and ffmpeg. Create a venv inside /share/Container/character_agent_kit/backend and pip install -r requirements.txt.
3. Mount volumes (if using Docker n8n): mount /share/Container/character_agent_kit → /data/character_agent_kit in the n8n container. Set the env var in the container: AFFOGATO_API_KEY=<your key> (and anything else you want the Execute Command node to see).
4. Import the workflow (on QNAP n8n): import n8n/workflow.json (from the chat above). Replace every /ABS/PATH/TO/backend with your real container path, e.g. /data/character_agent_kit/backend. Set the Webhook node to Active, and copy its Production URL.
5. Front-end on your PC (C:\ drive): open frontend/index.html (from the chat) on your PC. Replace YOUR_N8N_WEBHOOK_URL with the Production URL you copied. Leave it on your C:\ drive; just open it in a browser to use it.
6. YouTube OAuth (headless note): the first YouTube upload needs OAuth. Because the QNAP is headless, do one of these:
   - Easy path: run pipeline_poc.py once on your PC (with your client_secret.json) to create a token.json, then copy that token.json into backend/ on the QNAP. After that, the QNAP refreshes automatically.
   - OR modify the script to use the device/console flow (I can paste that patch if you want).

Alternative (if you really want the backend to stay on your PC)

You'd have to remove Execute Command from n8n and instead have n8n call your PC over HTTP to trigger the backend. That means:
- running a small local API server on your PC,
- opening a tunnel/port so the QNAP can reach it,
- replacing the Execute Command nodes with HTTP Request nodes.

It's doable but more complex. The recommended path is keeping the backend with n8n on the QNAP.

Paste-ready edits you need (quick)

In n8n/workflow.json (already in the chat), change these arguments to your container path (example uses /data/character_agent_kit/backend):

    /data/character_agent_kit/backend/affogato_5views.py
    /data/character_agent_kit/backend/pipeline_poc.py
    --outdir /data/character_agent_kit/backend/outputs
    --img_front /data/character_agent_kit/backend/outputs/{{ slug }}/images/front.png
    ... (same for left/right/back/expressive)

In your QNAP backend/.env:
- Keep your existing keys/IDs.
- Set BASE_DIR=/data/character_agent_kit/backend (or the absolute path you mounted).

On the n8n container, ensure env: AFFOGATO_API_KEY=<your key>.

ADHD-friendly checklist
- Backend lives on QNAP (same “place” as n8n).
- Frontend can stay on C:\ (it only posts to the QNAP webhook).
- Update absolute paths in the workflow to the QNAP path.
- Put .env and client_secret.json in backend/ on the QNAP.
- Do YouTube OAuth once (PC → copy token.json to QNAP) or ask me for the device-flow patch.
- Test by opening frontend/index.html, filling the form, and submitting.

Likely failure points (and fixes)
- Execute Command fails: path wrong. Fix the absolute path in the node.
- Python/ffmpeg missing in container: install them in the container or base OS.
- YouTube OAuth fails headless: generate token.json on your PC first, then copy it to the QNAP.
- Notion 400: property names in your DB don't match; align them exactly.
- Images not found: make sure the outputs folder in the workflow matches --outdir.
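Most of the failure points above are checkable before triggering the webhook. A hypothetical preflight sketch (helper name and messages are mine, not part of the kit) that you could run inside the n8n container:

```python
# Sketch of a preflight check covering the failure points listed above.
import shutil
from pathlib import Path

def preflight(backend_dir):
    """Return a list of problems likely to break the Execute Command steps."""
    problems = []
    if shutil.which("ffmpeg") is None:
        problems.append("ffmpeg not on PATH (MoviePy export will fail)")
    if shutil.which("python3") is None and shutil.which("python") is None:
        problems.append("no Python interpreter on PATH for the Execute Command nodes")
    backend = Path(backend_dir)
    if not backend.is_dir():
        problems.append(f"backend folder missing: {backend}")
    else:
        if not (backend / ".env").is_file():
            problems.append("backend/.env missing")
        if not (backend / "client_secret.json").is_file():
            problems.append("client_secret.json missing (YouTube OAuth)")
    return problems
```

An empty return list means the environment-level failure points are covered; Notion property mismatches still have to be checked against the database itself.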
