[//]: # (Author: Raveen Beemsingh)
# Startup Idioms and Development Principles
## Core Principles (ALWAYS FOLLOW)
### 1. "Ship is Better Than Done"
**Meaning:** Ship often rather than waiting for perfection
#### How We Apply This:
- **Ship working features daily** - Not polished, but functional
- **Iterate based on real usage** - Don't guess what users want
- **MVP over MLP** - Minimum Viable, not Minimum Lovable
- **Version 1 is never perfect** - That's what v2 is for
- **Feature flags over big bang releases** - Ship dark, test, then enable (see the sketch below)
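A minimal sketch of what "ship dark" can look like, assuming a plain environment-variable flag; the flag name `aiExtraction` and the variable `FEATURE_AI_EXTRACTION` are hypothetical:
```typescript
// featureFlags.ts - sketch only; the flag and env variable names are hypothetical
const flags = {
  // Shipped dark: the code path is deployed, but stays off until the variable flips it on
  aiExtraction: process.env.FEATURE_AI_EXTRACTION === "true",
} as const;

export type FlagName = keyof typeof flags;

export function isEnabled(flag: FlagName): boolean {
  return flags[flag];
}
```
The new path merges and deploys with the flag off, gets tested in place, and only becomes visible to users when the variable is flipped; a real setup might read flags from a database or a flag service instead.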
#### What This Means Today:
- Ship AI extraction that works 70% of the time TODAY
- Don't wait for 99% accuracy
- Basic UI that displays results > Beautiful UI that doesn't exist
- Manual fallbacks are OK for edge cases
#### What This DOESN'T Mean:
- Ship broken code
- Skip error handling
- Ignore security
- Deploy without testing
### 2. "Test, Test, and Test, Baby!"
**Meaning:** Don't skimp on testing. Don't break existing features when adding new ones.
#### How We Apply This:
- **Every PR must pass ALL existing tests** - No exceptions
- **New features = New tests** - Write tests as you code
- **Test the happy path AND edge cases**
- **CI/CD blocks on test failures** - Can't merge broken code
- **Manual testing before marking "done"** - Actually use the feature
#### Testing Hierarchy:
1. **Unit tests** - Core business logic (see the sketch after this list)
2. **Integration tests** - API endpoints
3. **E2E tests (Cypress)** - User workflows
4. **Manual smoke tests** - "Does it actually work?"
5. **Performance tests** - "Is it fast enough?"
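As a sketch of level 1, here is what a unit test for a small piece of business logic might look like, covering the happy path and one edge case. It assumes Vitest (Jest's API is nearly identical), and `formatBrandName` is a hypothetical helper defined inline for the example, not an existing function in this repo:
```typescript
// brand.test.ts - hypothetical example, assuming Vitest
import { describe, it, expect } from "vitest";

// Hypothetical unit under test; in a real repo this would live in its own module
function formatBrandName(raw: string): string {
  return raw.trim().replace(/\s+/g, " ");
}

describe("formatBrandName", () => {
  it("trims and collapses whitespace (happy path)", () => {
    expect(formatBrandName("  Acme   Labs ")).toBe("Acme Labs");
  });

  it("handles the empty-string edge case", () => {
    expect(formatBrandName("")).toBe("");
  });
});
```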
### 3. "Fast is Fine, Accuracy is Final"
**Origin:** Wyatt Earp of OK Corral fame
**Meaning:** Building features fast is no substitute for building the RIGHT things in the first place. Avoid wasting time on rework.
#### How We Apply This:
- **Plan before coding** - Design discussions save development time
- **Understand requirements fully** - Ask clarifying questions upfront
- **Right solution > Quick solution** - Don't rush into implementation
- **Measure twice, cut once** - Think through implications
- **Rework is waste** - Time spent redoing is time not spent on new value
#### What This Means in Practice:
- Take 30 minutes to plan, save 3 hours of rework
- Write design docs for complex features
- Get alignment on approach BEFORE coding
- Consider edge cases during planning, not after shipping
- Technical debt is OK, wrong architecture is not
#### Balance with "Ship is Better Than Done":
- Ship the RIGHT thing incrementally
- MVP should be Minimal but still VIABLE
- Quick iterations on the CORRECT path
- Fast feedback on RIGHT features
### 4. "In Doubt, Ask"
**Meaning:** If you're not sure whether the path or decision is right, it's OK and highly encouraged to ask for validation.
#### How We Apply This:
- **No ego in uncertainty** - Asking shows wisdom, not weakness
- **Quick check-ins save time** - 5-minute question prevents 5-hour mistake
- **Document decisions** - Share reasoning for future reference
- **Collaborative design** - Two minds better than one
- **Validate assumptions early** - Don't build on shaky foundations
#### When to Ask:
- Before major architectural decisions
- When requirements seem ambiguous
- Before significant refactoring
- When choosing between multiple valid approaches
- When something "feels wrong" but you can't articulate why
#### How to Ask Effectively:
1. State the problem clearly
2. Present options you've considered
3. Share your recommendation and why
4. Ask for specific feedback
5. Document the decision made
### 5. "Ego Has Killed More Startups Than Money or Tech"
**Meaning:** Accept that there will be blind spots. It's better to look out for them and correct them when found than to pretend they don't exist.
#### How We Apply This:
- **Embrace being wrong** - It's how we learn and improve
- **Blind spots are universal** - Everyone has them, including me
- **Feedback is gold** - Even criticism contains valuable truth
- **Change course when needed** - Pivoting isn't failure, it's intelligence
- **Kill your darlings** - Don't cling to ideas just because they're yours
#### What This Means in Practice:
- Regularly ask: "What am I missing?"
- Seek contradictory evidence actively
- Thank people who point out flaws
- Admit mistakes quickly and openly
- Focus on solving problems, not being right
#### Examples of Ego Traps to Avoid:
- "This is definitely the right approach" → "This seems like the right approach, what do you think?"
- "Users will love this feature" → "Let's test if users actually want this"
- "My code doesn't have bugs" → "Where might bugs be hiding in my code?"
- "This architecture is perfect" → "What could break this architecture?"
#### Healthy Ego Replacement:
- Pride in learning, not in being right
- Joy in team success, not individual credit
- Satisfaction in solving real problems
- Excitement about discovering blind spots
### 6. "Be the Kepler"
**Origin:** Inspired by Carl Sagan's description of Johannes Kepler in Cosmos
**Quote:** "He preferred the hard truth to his dearest delusions. That is the heart of science."
**Meaning:** Accept uncomfortable facts. Choose hard truths over comfortable delusions.
#### The Kepler Story:
Johannes Kepler spent years trying to prove planetary orbits were perfect circles (his "dearest delusion"). When data showed they were ellipses (uncomfortable fact), he accepted it and revolutionized astronomy. His willingness to abandon beautiful theories for ugly facts led to truth.
#### How We Apply This:
- **Data over opinions** - Let metrics tell the story
- **User feedback over our assumptions** - They know their needs better
- **Admit when something isn't working** - Don't polish a failed approach
- **Embrace uncomfortable metrics** - Low adoption means something's wrong
- **Question beautiful theories** - Elegant doesn't mean correct
#### What This Means in Practice:
- If users aren't using a feature, accept it failed
- If the architecture doesn't scale, redesign it
- If the AI extraction is only 60% accurate, don't pretend it's 80%
- If the MVP isn't viable, admit it and fix it
- If our assumptions were wrong, celebrate discovering the truth
#### Kepler Moments in Development:
- "This clever algorithm must be fast" → Test it → "It's actually slower"
- "Users will definitely want this" → Ship it → "They ignore it completely"
- "This abstraction makes everything cleaner" → Build it → "It makes things more complex"
- "AI will solve this perfectly" → Implement it → "It needs human oversight"
#### The Kepler Mindset:
1. Form hypotheses, but hold them lightly
2. Seek evidence that contradicts beliefs
3. When reality disagrees with theory, change theory
4. Find joy in discovering you were wrong
5. Truth is more beautiful than any delusion
### 7. "Occam's Razor" - Choose Simplicity
**Origin:** William of Ockham, 14th-century philosopher
**Meaning:** When choosing between solutions, prefer the one requiring fewer assumptions. The simplest explanation is usually correct.
#### The Principle:
"Entities should not be multiplied beyond necessity" - Don't add complexity unless absolutely required.
#### How We Apply This:
- **Simple architecture over complex** - Monolith before microservices
- **Fewer dependencies over more** - Less code to maintain and debug
- **Direct implementation over abstraction** - Unless you need the abstraction NOW
- **Built-in solutions over custom** - Use what exists before building new
- **Question every assumption** - Each assumption is a potential failure point
#### What This Means in Practice:
- **Database?** Do you actually need persistent storage, or is in-memory fine? (see the sketch after this list)
- **Caching layer?** Have you proven current performance is inadequate?
- **Message queue?** Could a simple function call work for your current scale?
- **Microservices?** Will your team really benefit from distributed complexity?
- **New library?** Can you use what's already in package.json?
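As one illustration of answering "is in-memory fine?" with yes until proven otherwise: a storage layer can start as a Map behind a tiny interface, so a real database gets swapped in later only if the data demands it. All names and types here are hypothetical:
```typescript
// A hypothetical draft store: start with a Map, add a database only when evidence demands it
interface DraftStore {
  get(id: string): string | undefined;
  set(id: string, content: string): void;
}

// Zero dependencies, zero migrations, zero ops - enough until the data says otherwise
class InMemoryDraftStore implements DraftStore {
  private drafts = new Map<string, string>();
  get(id: string) { return this.drafts.get(id); }
  set(id: string, content: string) { this.drafts.set(id, content); }
}

export const draftStore: DraftStore = new InMemoryDraftStore();
```
Callers only see the interface, so graduating to a real database later is a local change, not a rewrite.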
#### Architecture Decision Framework:
When evaluating solutions, count the assumptions:
**Example: User Authentication**
```
Complex Solution (OAuth proxy service):
✗ Assumption: Need custom OAuth flow
✗ Assumption: Built-in OAuth libraries insufficient
✗ Assumption: Need to manage tokens ourselves
✗ Assumption: Need separate service for auth
✗ Assumption: Team can maintain auth infrastructure
= 5 assumptions
Simple Solution (NextAuth.js):
✓ Assumption: NextAuth.js handles our use case
= 1 assumption
Winner: Simple (fewer assumptions = lower risk)
```
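To make the one-assumption option concrete, a minimal sketch of what it might look like, assuming NextAuth.js v4 with the Next.js pages router and a single GitHub provider; the env variable names are placeholders:
```typescript
// pages/api/auth/[...nextauth].ts - minimal sketch; env variable names are placeholders
import NextAuth from "next-auth";
import GitHubProvider from "next-auth/providers/github";

export default NextAuth({
  providers: [
    GitHubProvider({
      clientId: process.env.GITHUB_ID ?? "",
      clientSecret: process.env.GITHUB_SECRET ?? "",
    }),
  ],
  // Token handling, session storage, and the OAuth flow stay the library's problem, not ours
});
```
One file, one assumption; the five-assumption alternative would spread across a separate service, token storage, and ongoing ops work.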
#### Real Examples from This Project:
**Good - Chose Simplicity:**
- LinkedIn posting: Direct API calls > Building SDK wrapper
- Logger: Simple timestamp wrapper > Full logging framework (see the sketch after this list)
- Deployment: Docker Compose > Kubernetes (for current scale)
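For illustration, the "simple timestamp wrapper" can be as small as the sketch below; the level names and output format are assumptions, not the project's actual logger:
```typescript
// logger.ts - a sketch of a "simple timestamp wrapper"; levels and format are illustrative
type Level = "info" | "warn" | "error";

function log(level: Level, ...args: unknown[]): void {
  // One line per entry: ISO timestamp, level, then whatever the caller passed
  console[level](`[${new Date().toISOString()}] [${level.toUpperCase()}]`, ...args);
}

export const logger = {
  info: (...args: unknown[]) => log("info", ...args),
  warn: (...args: unknown[]) => log("warn", ...args),
  error: (...args: unknown[]) => log("error", ...args),
};
```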
**Warning Signs of Unnecessary Complexity:**
- "We might need this in the future"
- "This is more elegant/beautiful"
- "Everyone uses this pattern"
- "I read about this cool approach"
- Can't explain why simpler approach won't work
#### Occam's Razor in Code Reviews:
**Questions to ask:**
1. "What assumptions does this solution require?"
2. "Could we solve this with less code?"
3. "Do we need this abstraction right now?"
4. "What's the simplest thing that could work?"
5. "Why won't the simple approach work?" (Must have good answer!)
#### When Complexity IS Justified:
Simple isn't always right. Add complexity when:
- **Proven need exists** - Current simple solution actually failing
- **Scale requires it** - Data shows performance/capacity issues
- **Security demands it** - Encryption, isolation, etc.
- **User experience needs it** - Real users blocked by simple approach
- **Team velocity gains** - Abstraction provably speeds development
The key: **Complexity needs justification, simplicity doesn't.**
#### Balance with Other Principles:
**With "Fast is Fine, Accuracy is Final":**
- Simple solutions are usually the RIGHT solution
- Complex solutions often indicate unclear requirements
- If you're not sure which is right, start simple
**With "Be the Kepler":**
- When data shows simple approach isn't working, add complexity
- Until then, prefer simplicity
- "In theory complex is better" < "Simple works in practice"
**With "Ego Has Killed More Startups":**
- Complex solutions are often ego-driven
- Simple solutions require humility
- "Look how clever this is" = ego red flag
#### Startup Reality Check:
Most startups die from:
- ❌ Over-engineering (killed by complexity)
- ❌ Premature optimization (solved wrong problem)
- ❌ Technology gold-plating (built for imaginary scale)
Not from:
- ✅ Starting simple and iterating
- ✅ Using boring, proven technology
- ✅ Building exactly what's needed today
#### The Occam's Razor Test:
Before building anything, ask:
1. **What's the simplest version that could work?**
2. **What assumptions does that require?**
3. **What assumptions does the complex version require?**
4. **Do we have evidence the simple version will fail?**
5. **Can we start simple and add complexity only if needed?**
If you can't articulate why simple won't work, start simple.
#### Remember:
> "Simplicity is the ultimate sophistication." - Leonardo da Vinci
> "Make things as simple as possible, but not simpler." - Einstein
> "The best code is no code at all." - Jeff Atwood
**In software:** YAGNI (You Aren't Gonna Need It) is Occam's Razor applied to features.
## The Balance Framework
These principles work together:
```
Ship is Better Than Done + Fast is Fine, Accuracy is Final
= Ship the RIGHT thing incrementally
Test Test Test Baby + In Doubt, Ask
= Validate both code AND approach
Ego Has Killed More Startups + Be the Kepler
= Stay humble and follow the truth
Occam's Razor + Be the Kepler
= Choose simple solutions, let data justify complexity
All Seven Together:
= Build the right thing simply, ship it incrementally,
test thoroughly, ask when uncertain,
stay humble, choose simplicity,
and follow the truth wherever it leads
```
## Decision Framework
When facing any decision:
### 1. Planning Phase (Accuracy is Final + Be the Kepler + Occam's Razor)
- What does the data actually say?
- What assumptions am I making?
- What's the simplest solution? (Occam's Razor)
- What assumptions does each solution require? (Count them!)
- What might I be missing? (Ego check)
- Should I ask for input? (In Doubt, Ask)
### 2. Building Phase (Ship is Better + Test)
- Ship MVP of the RIGHT solution
- Test everything thoroughly
- Look for evidence I'm wrong
### 3. Validation Phase (Be the Kepler)
- What do metrics actually show?
- What is user feedback telling us?
- What uncomfortable truths must we accept?
- What delusions must we abandon?
## Red Flags That Should Trigger Principle Checks
### Ego Red Flags:
- "I'm certain this will work" → Check ego
- "This is obviously the best way" → Check blind spots
- "Users don't understand" → They understand fine, we're wrong
- "The problem is execution, not the idea" → Maybe the idea is wrong
### Kepler Red Flags:
- "The data must be wrong" → The data is probably right
- "Users are using it wrong" → We built it wrong
- "It works on my machine" → Reality differs from our environment
- "In theory, this should work" → Theory < Reality
### Occam's Razor Red Flags:
- "We might need this later" → Build it when you need it (YAGNI)
- "This is more elegant/sophisticated" → Elegance ≠ Right
- "Everyone uses this pattern" → Cargo cult engineering
- "Let me show you this cool approach" → Cool ≠ Simple
- Can't explain why simpler won't work → Try simple first
## Cultural Reinforcement
### In Code Reviews:
- **Praise:** "Thanks for pointing out that blind spot"
- **Model:** "I was wrong about this approach"
- **Question:** "What uncomfortable truth are we avoiding?"
- **Celebrate:** "Great job accepting that the first approach failed"
### In Planning:
- **First question:** "What do we actually know vs assume?"
- **Second question:** "What's the simplest solution?"
- **Third question:** "What evidence contradicts this?"
- **Fourth question:** "What are we missing?"
- **Fifth question:** "What would Kepler do with this data?"
## Application to Current Work
### For AI Extraction Feature:
1. **Ego Check:** Maybe our extraction approach isn't as smart as we think
2. **Kepler Test:** If extraction accuracy is 60%, accept it and plan accordingly
3. **Blind Spot Search:** What are we missing about how users create brands?
4. **Truth Seeking:** Test with real websites, accept real results
### Example Applied to Recent Work:
```
Initial Delusion: "I'll build AI extraction quickly and it'll be amazing"
Kepler Moment: "Actually, I should have planned first"
Ego Check: "I was wrong to rush into coding"
Course Correction: "Let's scrap it and design properly"
Result: Better solution through accepting hard truth
```
## The Ultimate Wisdom
**Ego says:** "I know the answer"
**Kepler says:** "Let me find the answer"
**Ego says:** "This will definitely work"
**Kepler says:** "Let's test and see if it works"
**Ego says:** "Users are wrong"
**Kepler says:** "The data is telling us something"
**Ego says:** "My solution is elegant"
**Kepler says:** "Does it actually solve the problem?"
**Complexity says:** "This sophisticated architecture shows expertise"
**Occam says:** "What's the simplest thing that could work?"
## Remember
> "The first principle is that you must not fool yourself — and you are the easiest person to fool." - Richard Feynman
> "He preferred the hard truth to his dearest delusions." - Carl Sagan on Kepler
> "Entities should not be multiplied beyond necessity." - William of Ockham
> "Strong opinions, loosely held" - The startup way
## The Path Forward
1. **Build hypotheses** (not certainties)
2. **Test rigorously** (seek disconfirming evidence)
3. **Accept results** (even when they hurt)
4. **Adapt quickly** (pivot based on truth)
5. **Stay humble** (there's always a blind spot)
6. **Be grateful** (for every discovered mistake)
---
*Last Updated: 2025-11-03*
*These principles guide every decision and every line of code*
*When in conflict, simplicity and truth win over complexity and ego, always*