# πŸš€ λΉ λ₯Έ μ‹œμž‘ κ°€μ΄λ“œ ## λ‹€λ₯Έ μ‚¬λžŒμ΄ 처음 μ‚¬μš©ν•  λ•Œ ### 1. μ €μž₯μ†Œ 클둠 (지식 베이슀 μžλ™ λ‹€μš΄λ‘œλ“œ) ```bash git clone https://github.com/seanshin0214/world-class-leadership-personas.git cd world-class-leadership-personas ``` **κ²°κ³Ό**: - βœ… 페λ₯΄μ†Œλ‚˜ 메타데이터 (community/) - βœ… MCP μ„œλ²„ μ½”λ“œ (src/) - βœ… **지식 베이슀 (knowledge-base/)** ← μžλ™ 포함! ### 2. 지식 베이슀 확인 ```bash # ν˜„μž¬ ν¬ν•¨λœ 지식 베이슀 ls knowledge-base/ # 좜λ ₯: 410-llm-engineer/ └── core-competencies/ └── transformer-architectures.md (20 pages) βœ… ``` ### 3. MCP μ„œλ²„ μ‹€ν–‰ ```bash npm install npm run dev ``` **이제 Claude Desktopμ—μ„œ μ‚¬μš© κ°€λŠ₯!** --- ## 지식 베이슀 μ—…λ°μ΄νŠΈ λ°›κΈ° ### λ‹€λ₯Έ μ‚¬λžŒμ΄ μƒˆ 지식 μΆ”κ°€ν–ˆμ„ λ•Œ ```bash # μ΅œμ‹  지식 베이슀 λ‹€μš΄λ‘œλ“œ git pull origin main # μ˜ˆμ‹œ 좜λ ₯: Updating 40a527a..52e3007 Fast-forward knowledge-base/410-llm-engineer/core-competencies/prompt-engineering.md | 1200 ++++ 1 file changed, 1200 insertions(+) ``` **μžλ™μœΌλ‘œ 동기화됨!** βœ… --- ## 지식 λ² μ΄μŠ€μ— κΈ°μ—¬ν•˜κΈ° ### μƒˆ λ¬Έμ„œ μΆ”κ°€ ```bash # 1. λ¬Έμ„œ μž‘μ„± mkdir -p knowledge-base/410-llm-engineer/case-studies cat > knowledge-base/410-llm-engineer/case-studies/my-case-study.md << 'EOF' # My LLM Case Study [Write your content...] EOF # 2. Git μΆ”κ°€ git add knowledge-base/ # 3. 컀밋 git commit -m "feat: Add my LLM case study" # 4. ν‘Έμ‹œ git push origin main ``` ### κΈ°μ‘΄ λ¬Έμ„œ μˆ˜μ • ```bash # 1. μˆ˜μ • code knowledge-base/410-llm-engineer/core-competencies/transformer-architectures.md # 2. 컀밋 git add knowledge-base/ git commit -m "docs: Update transformer-architectures with new benchmarks" # 3. 
ν‘Έμ‹œ git push origin main ``` --- ## ν˜„μž¬ μƒνƒœ ### μ™„μ„±λœ 지식 베이슀 ``` βœ… 410-llm-engineer/ └── core-competencies/ └── transformer-architectures.md - Multi-Head Attention (μˆ˜ν•™ 증λͺ…) - Positional Encodings (RoPE, ALiBi) - Flash Attention (μ½”λ“œ + 벀치마크) - μ‹€μ œ 사둀 (LLaMA-2-70B, GPT-4) - 20 pages, ν”„λ‘œλ•μ…˜κΈ‰ ν’ˆμ§ˆ ``` ### μž‘μ—… 쀑 ``` ⏳ 410-llm-engineer/ (μ§„ν–‰λ₯ : 20%) β”œβ”€β”€ core-competencies/ β”‚ β”œβ”€β”€ transformer-architectures.md βœ… β”‚ └── prompt-engineering.md ⏳ (λ‹€μŒ μž‘μ—…) β”œβ”€β”€ case-studies/ ⏳ β”œβ”€β”€ code-examples/ ⏳ └── research-papers/ ⏳ ``` --- ## RAG μž‘λ™ ν…ŒμŠ€νŠΈ ### μ‹œλ‚˜λ¦¬μ˜€ **질문**: "LLaMA-2-70B μΆ”λ‘  속도 μ΅œμ ν™” 방법?" **RAG 검색**: 1. Query μž„λ² λ”© 생성 2. knowledge-base/410-llm-engineer/ 검색 3. transformer-architectures.mdμ—μ„œ κ΄€λ ¨ μ„Ήμ…˜ 발견: - Flash Attention (3.8x speedup) - Multi-Query Attention (8x KV cache reduction) - Real benchmarks (A100 GPU) **응닡**: ``` Based on the knowledge base: 1. Use Flash Attention: 3.8x faster, reduces memory O(nΒ²) β†’ O(n) [Code example from transformer-architectures.md] 2. Apply Multi-Query Attention: 8x smaller KV cache [Specific implementation details] 3. Expected improvement: 9s β†’ 1.8s latency [Real benchmark data included] ``` **효과**: 일반적 μ‘°μ–Έ β†’ ꡬ체적, μ½”λ“œ 포함, μΈ‘μ • κ°€λŠ₯ν•œ λ‹΅λ³€ --- ## μ›Œν¬ν”Œλ‘œμš° μš”μ•½ ``` 둜컬 μž‘μ„± ↓ git add knowledge-base/ ↓ git commit -m "feat: ..." ↓ git push origin main ↓ λ‹€λ₯Έ μ‚¬μš©μž: git pull ↓ μžλ™ 동기화 βœ… ``` --- ## λ‹€μŒ 단계 ### 단기 (1주일) - [ ] prompt-engineering.md μž‘μ„± (30 pages) - [ ] case-studies/ μΆ”κ°€ (5개 사둀) ### 쀑기 (1κ°œμ›”) - [ ] 410-llm-engineer μ™„μ„± (100MB) - [ ] 10개 핡심 페λ₯΄μ†Œλ‚˜ μ‹œμž‘ ### μž₯κΈ° (6κ°œμ›”) - [ ] 142개 전체 페λ₯΄μ†Œλ‚˜ - [ ] Git LFS λ§ˆμ΄κ·Έλ ˆμ΄μ…˜ (1GB 도달 μ‹œ) --- **ν˜„μž¬**: Gitμ—μ„œ 지식 베이슀 직접 관리 쀑 βœ… **효과**: Clone ν•œ 번으둜 λͺ¨λ“  지식 μžλ™ λ‹€μš΄λ‘œλ“œ **μ—…λ°μ΄νŠΈ**: git pull둜 μ΅œμ‹  지식 μ¦‰μ‹œ 반영
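The three-step retrieval flow above can be sketched in Python. This is a minimal illustration only, not this repository's actual MCP implementation: it stands in for the embedding step with plain keyword-overlap scoring, and the section-splitting and scoring helpers (`split_sections`, `score`, `retrieve`) are hypothetical names introduced here.

```python
# Illustrative sketch of the RAG lookup over knowledge-base/ markdown files.
# A real setup would use vector embeddings; word overlap stands in here.
from pathlib import Path
import re

def split_sections(markdown: str) -> list[tuple[str, str]]:
    """Split a markdown file into (heading, body) sections."""
    parts = re.split(r"^(#+ .*)$", markdown, flags=re.MULTILINE)
    sections = []
    for i in range(1, len(parts) - 1, 2):
        sections.append((parts[i].strip(), parts[i + 1].strip()))
    return sections

def score(query: str, text: str) -> int:
    """Count query words that appear in the section (crude relevance proxy)."""
    query_words = set(re.findall(r"\w+", query.lower()))
    text_words = set(re.findall(r"\w+", text.lower()))
    return len(query_words & text_words)

def retrieve(query: str, base: str = "knowledge-base", top_k: int = 3):
    """Return the top_k most relevant (file, heading, score) triples."""
    hits = []
    for md_file in Path(base).rglob("*.md"):
        text = md_file.read_text(encoding="utf-8")
        for heading, body in split_sections(text):
            s = score(query, heading + " " + body)
            if s:
                hits.append((str(md_file), heading, s))
    return sorted(hits, key=lambda h: -h[2])[:top_k]
```

The returned sections would then be stuffed into the model's context to ground the answer, which is what turns a generic reply into the benchmark-backed response shown above.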