# Quick Start Guide
## First-Time Setup for New Users
### 1. Clone the repository (the knowledge base downloads automatically)
```bash
git clone https://github.com/seanshin0214/world-class-leadership-personas.git
cd world-class-leadership-personas
```
**Result**:
- ✅ Persona metadata (community/)
- ✅ MCP server code (src/)
- ✅ **Knowledge base (knowledge-base/)** ← included automatically!
### 2. Verify the knowledge base
```bash
# Knowledge base currently included
ls knowledge-base/

# Output:
# 410-llm-engineer/
# └── core-competencies/
#     └── transformer-architectures.md (20 pages) ✅
```
### 3. Run the MCP server
```bash
npm install
npm run dev
```
**Now ready to use from Claude Desktop!**
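Claude Desktop discovers MCP servers through its `claude_desktop_config.json` file. Below is a minimal sketch of registering this server, assuming a server name of `leadership-personas` and a `node`-runnable entry point at `src/index.js` (both are assumptions; check this repository's README for the exact command and arguments):

```shell
# Register the server in Claude Desktop's MCP config (macOS path shown;
# on Windows the file lives at %APPDATA%\Claude\claude_desktop_config.json).
# NOTE: the server name and entry-point path below are assumptions.
# WARNING: `cat >` overwrites the file; merge by hand if you already
# register other MCP servers there.
CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
mkdir -p "$(dirname "$CONFIG")"
cat > "$CONFIG" << 'EOF'
{
  "mcpServers": {
    "leadership-personas": {
      "command": "node",
      "args": ["/absolute/path/to/world-class-leadership-personas/src/index.js"]
    }
  }
}
EOF
```

Restart Claude Desktop after editing the config so it picks up the new server.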
---
## Getting Knowledge Base Updates
### When someone else has added new knowledge
```bash
# Download the latest knowledge base
git pull origin main

# Expected output:
# Updating 40a527a..52e3007
# Fast-forward
#  knowledge-base/410-llm-engineer/core-competencies/prompt-engineering.md | 1200 ++++
#  1 file changed, 1200 insertions(+)
```
**Synced automatically!** ✅
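If you want to preview what changed upstream before merging, `git fetch` followed by `git log` and `git diff` against `origin/main` shows the incoming commits without touching your working tree. The sketch below simulates an upstream update with two throwaway repositories so it runs anywhere; in the real checkout you would only run the last three commands from the repository root:

```shell
# Simulate an upstream knowledge-base update with throwaway repos.
set -e
tmp=$(mktemp -d)
git init -q -b main "$tmp/upstream"
git -C "$tmp/upstream" -c user.email=demo@example.com -c user.name=demo \
  commit -qm "init" --allow-empty
git clone -q "$tmp/upstream" "$tmp/local"

# Upstream gains a new knowledge document...
mkdir -p "$tmp/upstream/knowledge-base"
echo "new knowledge" > "$tmp/upstream/knowledge-base/new-doc.md"
git -C "$tmp/upstream" add knowledge-base
git -C "$tmp/upstream" -c user.email=demo@example.com -c user.name=demo \
  commit -qm "feat: Add new doc"

# ...and the local clone previews it before pulling:
cd "$tmp/local"
git fetch -q origin                                   # download upstream state only
git log --oneline HEAD..origin/main                   # commits you don't have yet
git diff --stat HEAD..origin/main -- knowledge-base/  # knowledge files they touch
```

Once the preview looks right, `git pull origin main` merges the changes as shown above.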
---
## Contributing to the Knowledge Base
### Adding a new document
```bash
# 1. Write the document
mkdir -p knowledge-base/410-llm-engineer/case-studies
cat > knowledge-base/410-llm-engineer/case-studies/my-case-study.md << 'EOF'
# My LLM Case Study
[Write your content...]
EOF

# 2. Stage the changes
git add knowledge-base/

# 3. Commit
git commit -m "feat: Add my LLM case study"

# 4. Push
git push origin main
```
### Editing an existing document
```bash
# 1. Edit the file
code knowledge-base/410-llm-engineer/core-competencies/transformer-architectures.md

# 2. Commit
git add knowledge-base/
git commit -m "docs: Update transformer-architectures with new benchmarks"

# 3. Push
git push origin main
```
---
## Current Status
### Completed knowledge base
```
✅ 410-llm-engineer/
└── core-competencies/
    └── transformer-architectures.md
        - Multi-Head Attention (mathematical proofs)
        - Positional Encodings (RoPE, ALiBi)
        - Flash Attention (code + benchmarks)
        - Real-world case studies (LLaMA-2-70B, GPT-4)
        - 20 pages, production-grade quality
```
### In Progress
```
⏳ 410-llm-engineer/ (progress: 20%)
├── core-competencies/
│   ├── transformer-architectures.md ✅
│   └── prompt-engineering.md ⏳ (up next)
├── case-studies/ ⏳
├── code-examples/ ⏳
└── research-papers/ ⏳
```
---
## Testing RAG
### Scenario
**Question**: "How can I optimize LLaMA-2-70B inference speed?"
**RAG retrieval**:
1. Generate the query embedding
2. Search knowledge-base/410-llm-engineer/
3. Find the relevant sections in transformer-architectures.md:
   - Flash Attention (3.8x speedup)
   - Multi-Query Attention (8x KV cache reduction)
   - Real benchmarks (A100 GPU)
**Response**:
```
Based on the knowledge base:
1. Use Flash Attention: 3.8x faster, reduces memory O(n²) → O(n)
[Code example from transformer-architectures.md]
2. Apply Multi-Query Attention: 8x smaller KV cache
[Specific implementation details]
3. Expected improvement: 9s → 1.8s latency
[Real benchmark data included]
```
**Effect**: generic advice → specific, measurable answers with code included
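The retrieval step above uses embeddings, but you can approximate it from the command line to sanity-check that the knowledge base actually contains the material. A keyword-level sketch (the `kb_search` helper name is made up here; real RAG ranking is semantic, not lexical):

```shell
# Rank knowledge-base markdown files by how often they mention a keyword.
# This is only a lexical stand-in for the embedding search described above.
kb_search() {
  # $1 = keyword, $2 = directory to search
  grep -ric --include='*.md' "$1" "$2" 2>/dev/null \
    | awk -F: '$2 > 0' | sort -t: -k2 -nr
}

# Example (run from the repository root):
# kb_search "flash attention" knowledge-base/410-llm-engineer/
```

Each output line is `path:match-count`, sorted so the most relevant document comes first.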
---
## Workflow Summary
```
Write locally
    ↓
git add knowledge-base/
    ↓
git commit -m "feat: ..."
    ↓
git push origin main
    ↓
Other users: git pull
    ↓
Synced automatically ✅
```
---
## Next Steps
### Short term (1 week)
- [ ] Write prompt-engineering.md (30 pages)
- [ ] Add case-studies/ (5 case studies)
### Mid term (1 month)
- [ ] Complete 410-llm-engineer (100MB)
- [ ] Finish the 10 core personas
### Long term (6 months)
- [ ] All 142 personas
- [ ] Migrate to Git LFS (when the repo reaches 1GB)
---
**Current**: Knowledge base managed directly in Git ✅
**Effect**: A single clone downloads all knowledge automatically
**Updates**: git pull applies the latest knowledge instantly