from [[bayes_evol(andrew_josh)]]
# Bayesian Evolution Literature Classification (from our paper's "Uncertainty Promise Design" perspective)
## Based on the Double Reparameterization Framework: P(success) → φ(promise) → (μ, σ)
### Core Framework: Entrepreneurial Uncertainty Design
- **First reparameterization**: P(success) → φ(promise) + n (irreducible natural complexity)
- **Second reparameterization**: φ → (μ, σ), where μ = aspiration and σ = concentration
- **Strategic Ignorance**: σ* = max(0, V/ic - 1), so σ* = 0 when the cost of information (ic) exceeds its value (V)
- **From Player to Designer**: turning uncertainty from a constraint into a resource
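A minimal numerical sketch of the strategic-ignorance rule above; the function name `sigma_star` and the concrete V, i, c values are illustrative assumptions, not notation fixed elsewhere in the paper.

```python
def sigma_star(V: float, i: float, c: float) -> float:
    """Optimal promise concentration sigma* = max(0, V/(i*c) - 1).

    V: value of resolving the uncertainty
    i: integration ("digestion") cost of new information
    c: information acquisition cost
    When i*c >= V, strategic ignorance (sigma* = 0) is optimal.
    """
    return max(0.0, V / (i * c) - 1.0)

# Information costs more to digest than it is worth -> stay ignorant
assert sigma_star(V=1.0, i=2.0, c=1.0) == 0.0
# Cheap, valuable information -> commit to a precise promise
assert sigma_star(V=10.0, i=1.0, c=1.0) == 9.0
```

The max(0, ·) clip is what makes ignorance a corner solution rather than a bias.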
---
## Space Food Literature Classification
| Paper | Core Concept | AGREE | DISAGREE | Our Extension |
|-------|--------------|-------|----------|---------------|
| **[[ππΎ_vul14_one_done]]** | 1-3 samples sufficient for near-optimal decisions | **Strong Agreement**: Low σ (sparse sampling) = adaptive optimality | Oversampling is always better (our view: σ→∞ causes a learning trap) | Our σ* formula explains when to stop sampling |
| **[[ππΎ_stern24_model(beliefs, experimentation)]]** | Entrepreneurs test low-prior strategies first for better signals | Heterogeneous priors drive contrarian experiments | All experiments are equally informative | σ modulates experiment informativeness |
| **[[ππΎ_gans23_choose(entrepreneurship, experimentation)]]** | Entrepreneurial choice under uncertainty with strategic experiments | Experiments reveal both idea quality and strategy fit | Experiments are neutral (our view: σ affects bias) | Promise design (φ, σ) shapes what experiments reveal |
| **[[ππΎ_tenanbaum11_grow(minds, cognition)]]** | Hierarchical Bayesian models of cognitive development | **Deep Resonance**: Learning as hierarchical prior updates | Learning is passive reception | σ controls active forgetting vs. integration |
| **[[ππΎ_gershman15_compute(rationality, resources)]]** | Bounded rationality as optimal given computational constraints | **Perfect Match**: Resource-rational = our V/ic framework | More computation is always better | Strategic ignorance (σ = 0) can be optimal |
| **[[ππΎ_busenitz97_recognize(entrepreneurs, biases)]]** | Entrepreneurs use heuristics and biases more than managers | Biases are features, not bugs, when σ is low | Biases are mistakes to eliminate | "Biases" = rational low-σ strategies |
| **[[ππΎ_arrow69_classify(production, knowledge)]]** | Learning by doing creates knowledge spillovers | Production generates information (reduces n) | Knowledge always reduces uncertainty | Sometimes preserving uncertainty (low σ) is valuable |
| **[[ππΎ_meehl67_test(theory, method)]]** | Theory testing requires strong inference | Strong tests need precise predictions (high σ) | Always maximize test precision | Optimal σ depends on the V/ic ratio |
| **[[ππΎ_peng21_overload(information, decisions)]]** | Information overload degrades decision quality | **Strong Support**: High i (integration cost) → lower optimal σ | More information always helps | Rational ignorance when i > V/c |
| **[[ππΎ_johnston02_caution(startups, scaling)]]** | Premature scaling is the #1 cause of startup failure | High σ too early = scaling trap | Fast scaling is always good if funded | σ should increase gradually with V/ic |
| **[[ππΎ_nejad22_model(mentorship, accelerators)]]** | Accelerators help calibrate entrepreneurial beliefs | External calibration of μ and σ | One-size-fits-all mentorship | Mentors help optimize the personal σ* |
| **[[ππΎ_bhui21_optimize(decisions, resources)]]** | Resource-rational decision-making under constraints | Optimization given cognitive costs = our framework | Unbounded rationality as the ideal | Bounded optimality through the choice of σ |
| **[[ππΎ_mansinghka25_automate(formalization, programming)]]** | Probabilistic programming automates Bayesian inference | Reduces i (integration cost) dramatically | Automation eliminates uncertainty | Lower i → higher optimal σ, not elimination |
| **[[ππΎ_xuan24_plan(instruction, cooperation)]]** | Planning helps coordinate but constrains adaptation | Planning = high σ for coordination | Always plan thoroughly | σ* depends on coordination needs |
---
## Bayesian Statistical Methods Integration
| Method | Application to Promise Design | Our Innovation |
|--------|-------------------------------|----------------|
| **Prior Predictive Check** | Test whether φ ~ Beta(μσ, (1-μ)σ) generates realistic success rates | Before promising, simulate outcomes |
| **Posterior Predictive** | Validate that updated beliefs match observed pivots | σ controls update magnitude |
| **Simulation-Based Calibration** | Recover the true (μ, σ) from observed promises | Validates the double reparameterization |
| **Hierarchical Modeling** | Industry → Founder → Venture nested structure | σ varies across hierarchy levels |
| **Model Comparison** | Test double vs single reparameterization | WAIC shows double superior |
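The prior predictive check in the table above can be sketched as follows, assuming the mean-concentration Beta parameterization φ ~ Beta(μσ, (1−μ)σ); the (μ, σ) values, trial counts, and NumPy usage are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

rng = np.random.default_rng(0)

def prior_predictive_success(mu: float, sigma: float,
                             n_sims: int = 10_000,
                             n_trials: int = 20) -> np.ndarray:
    """Simulate the success rates a promise prior implies.

    Draw promise strength phi ~ Beta(mu*sigma, (1-mu)*sigma), then
    successes ~ Binomial(n_trials, phi). If the simulated rates look
    unrealistic, revise (mu, sigma) before making the promise.
    """
    phi = rng.beta(mu * sigma, (1.0 - mu) * sigma, size=n_sims)
    successes = rng.binomial(n_trials, phi)
    return successes / n_trials

rates = prior_predictive_success(mu=0.3, sigma=5.0)
# Low sigma -> wide spread of implied success rates (a vague promise);
# a high sigma would concentrate the rates tightly around mu.
```

The check is deliberately pre-data: it asks whether (μ, σ) encode a promise you would actually stand behind.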
---
## Synthesis: From Deciding Under Uncertainty to Deciding On Uncertainty
### The Past to Clear Away: The Tyranny of Information Maximization
**What We Must Destroy:**
- "More information = better decisions" dogma that created analysis paralysis
- Prediction-Based Prescription's rigid "predict then prescribe" sequence ignoring endogeneity
- Prior Predictive Checks that validate but never question the prior itself
- The delusion that uncertainty is always the enemy to be eliminated
- Better Place's $850M funeral: the price of information addiction
### The Near Future: The Dawn of Uncertainty Design
**What We Must Build:**
- **Bayesian Cringe** (Gelman): Healthy skepticism of over-precision
- **Strategic Ignorance**: σ* = max(0, V/ic - 1) mathematically defines when not knowing beats knowing
- **Endogenous PBP**: Prediction and prescription become one when σ is chosen
- **Prior as Design**: Not what you believe but what you choose to believe
- **Tesla's Triumph**: "Roughly 200 miles" beats "Exactly 5 minutes"
### Key Falsifiable Predictions
1. **Industries with higher n → lower average σ** (complexity forces flexibility)
2. **Lower i (e.g., the AI era) → bimodal σ distribution** (all-or-nothing strategies)
3. **V/ic ratio determines optimal promise precision** (not market maturity)
4. **Successful founders show a σ trajectory: low → high** (not a monotonic increase)
---
## Philosophical Foundation: Negative Capability
Building on Keats's "negative capability," the ability to remain comfortable in uncertainty:
**NC = 1/(σ+1)**
- High NC (low σ): Tesla's "roughly 200 miles"
- Low NC (high σ): Better Place's "exactly 5 minutes"
- Zero NC (σ→∞): Theranos's impossible precision
This quantifies what poets knew intuitively: **comfort with uncertainty is strength, not weakness**.
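The NC index can be made concrete in one line; the example σ values below are invented purely for illustration.

```python
def negative_capability(sigma: float) -> float:
    """NC = 1/(sigma + 1): comfort with uncertainty shrinks as
    promise concentration sigma grows."""
    return 1.0 / (sigma + 1.0)

assert negative_capability(0.0) == 1.0    # maximal comfort with uncertainty
assert negative_capability(9.0) == 0.1    # precise promise, little slack
assert negative_capability(1e9) < 1e-8    # sigma -> infinity: NC -> 0
```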
---
## Methodological Contributions
### For Bayesian Statistics (Andrew's Lens)
- **Endogenous uncertainty**: σ as a chosen parameter
- **Double reparameterization**: Computational elegance
- **Rational meaning-construction cost**: i as digestion cost
### For Innovation Policy (Josh's Lens)
- **Stage-appropriate σ**: Different policies for different V/ic
- **Market failures from σ mismatch**: Over- or under-specification
- **Policy as n-reducer, markets as σ-optimizer**: Clear roles
### For Entrepreneurship Theory (Scott's Lens)
- **Unifies the Planning vs. Action schools**: Both are right at different σ
- **Explains contrarian success**: Low σ preserves option value
- **Strategic ignorance as capability**: Not a bias but a feature
---
## The Promise Paradox Resolution
Our core paradox: **Why do precise promises fail while ambiguous promises succeed?**
The answer: **σ* = max(0, V/ic - 1)**
- Better Place: High σ despite high c → learning trap → failure
- Tesla: Low initial σ → adaptive evolution → success
- Optimal strategy: let σ grow with the V/ic ratio
**"Uncertainty is not a constraint to overcome but a resource to design."**
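The "let σ grow with V/ic" strategy can be illustrated with a hypothetical venture trajectory; every number below is invented for illustration, and `sigma_star` simply encodes σ* = max(0, V/ic − 1).

```python
def sigma_star(V: float, i: float, c: float) -> float:
    """Optimal promise concentration sigma* = max(0, V/(i*c) - 1)."""
    return max(0.0, V / (i * c) - 1.0)

# Hypothetical stages: value V grows as the venture de-risks, while
# integration cost i falls with experience (c held fixed here).
stages = [("pre-seed",  1.0, 2.0, 1.0),   # V/ic < 1 -> stay vague
          ("seed",      4.0, 1.0, 1.0),
          ("scale-up", 20.0, 0.5, 1.0)]

trajectory = [sigma_star(V, i, c) for _, V, i, c in stages]
# sigma* moves low -> high across stages in this illustration
assert trajectory == [0.0, 3.0, 39.0]
```

On these invented numbers, Better Place's error maps to choosing a scale-up σ at pre-seed V/ic, while Tesla's path tracks the trajectory.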
---
*Last updated: Based on deep synthesis of Space Food papers and our double reparameterization framework*