from [[bayes_evol(andrew_josh)]]

# Bayesian Evolution Literature Classification (from the perspective of our paper, "Designing Uncertain Promises")

## Based on the Double Reparameterization Framework: P(success) β†’ Ο†(promise) β†’ (ΞΌ, Ο„)

### Core Framework: The Entrepreneur's Design of Uncertainty

- **First reparameterization**: P(success) β†’ Ο†(promise) + n (nature's complexity)
- **Second reparameterization**: Ο† β†’ (ΞΌ, Ο„), where ΞΌ = aspiration and Ο„ = concentration
- **Strategic Ignorance**: Ο„* = max(0, V/ic - 1) when information cost exceeds information value
- **From Player to Designer**: turning uncertainty from a constraint into a resource

---

## πŸš€ Space Food Literature Classification

| Paper | Core Concept | 🟒 AGREE | πŸ”΄ DISAGREE | πŸ”΅ Our Extension |
|-------|--------------|----------|-------------|------------------|
| **[[πŸ“œπŸ‘Ύ_vul14_one_done]]** | 1–3 samples are sufficient for near-optimal decisions | **Strong Agreement**: low Ο„ (sparse sampling) = adaptive optimality | Oversampling is always better (ours: Ο„β†’βˆž causes a learning trap) | Our Ο„* formula explains when to stop sampling |
| **[[πŸ“œπŸ‘Ύ_stern24_model(beliefs, experimentation)]]** | Entrepreneurs test low-prior strategies first for better signals | Heterogeneous priors drive contrarian experiments | All experiments are equally informative | Ο„ modulates experiment informativeness |
| **[[πŸ“œπŸ‘Ύ_gans23_choose(entrepreneurship, experimentation)]]** | Entrepreneurial choice under uncertainty with strategic experiments | Experiments reveal both idea quality and strategy fit | Experiments are neutral (ours: Ο„ affects bias) | Promise design (Ο†, Ο„) shapes what experiments reveal |
| **[[πŸ“œπŸ‘Ύ_tenanbaum11_grow(minds, cognition)]]** | Hierarchical Bayesian models of cognitive development | **Deep Resonance**: learning as hierarchical prior updates | Learning is passive reception | Ο„ controls active forgetting vs. integration |
| **[[πŸ“œπŸ‘Ύ_gershman15_compute(rationality, resources)]]** | Bounded rationality as optimal given computational constraints | **Perfect Match**: resource-rational = our V/ic framework | More computation is always better | Strategic ignorance (Ο„=0) can be optimal |
| **[[πŸ“œπŸ‘Ύ_busenitz97_recognize(entrepreneurs, biases)]]** | Entrepreneurs use heuristics and biases more than managers do | Biases are features, not bugs, when Ο„ is low | Biases are mistakes to eliminate | "Biases" = rational low-Ο„ strategies |
| **[[πŸ“œπŸ‘Ύ_arrow69_classify(production, knowledge)]]** | Learning by doing creates knowledge spillovers | Production generates information (reduces n) | Knowledge always reduces uncertainty | Preserving uncertainty (low Ο„) is sometimes valuable |
| **[[πŸ“œπŸ‘Ύ_meehl67_test(theory, method)]]** | Theory testing requires strong inference | Strong tests need precise predictions (high Ο„) | Always maximize test precision | Optimal Ο„ depends on the V/ic ratio |
| **[[πŸ“œπŸ‘Ύ_peng21_overload(information, decisions)]]** | Information overload degrades decision quality | **Strong Support**: high i (integration cost) β†’ lower Ο„ optimal | More information always helps | Rational ignorance when i > V/c |
| **[[πŸ“œπŸ‘Ύ_johnston02_caution(startups, scaling)]]** | Premature scaling is the leading cause of startup failure | High Ο„ too early = scaling trap | Fast scaling is always good if funded | Ο„ should increase gradually with V/ic |
| **[[πŸ“œπŸ‘Ύ_nejad22_model(mentorship, accelerators)]]** | Accelerators help calibrate entrepreneurial beliefs | External calibration of ΞΌ and Ο„ | One-size-fits-all mentorship | Mentors help optimize the personal Ο„* |
| **[[πŸ“œπŸ‘Ύ_bhui21_optimize(decisions, resources)]]** | Resource-rational decision-making under constraints | Optimization given cognitive costs = our framework | Unbounded rationality as the ideal | Bounded optimality through the choice of Ο„ |
| **[[πŸ“œπŸ‘Ύ_mansinghka25_automate(formalization, programming)]]** | Probabilistic programming automates Bayesian inference | Reduces i (integration cost) dramatically | Automation eliminates uncertainty | Lower i β†’ higher optimal Ο„, not elimination of uncertainty |
| **[[πŸ“œπŸ‘Ύ_xuan24_plan(instruction, cooperation)]]** | Planning helps coordination but constrains adaptation | Planning = high Ο„ for coordination | Always plan thoroughly | Ο„* depends on coordination needs |

---

## 🎯 Bayesian Statistical Methods Integration

| Method | Application to Promise Design | Our Innovation |
|--------|-------------------------------|----------------|
| **Prior Predictive Check** | Test whether Ο† ~ Beta(ΞΌΟ„, (1-ΞΌ)Ο„) generates realistic success rates | Simulate outcomes before promising |
| **Posterior Predictive Check** | Validate that updated beliefs match observed pivots | Ο„ controls the update magnitude |
| **Simulation-Based Calibration** | Recover the true (ΞΌ, Ο„) from observed promises | Validates the double reparameterization |
| **Hierarchical Modeling** | Industry β†’ founder β†’ venture nested structure | Ο„ varies across hierarchy levels |
| **Model Comparison** | Test double vs. single reparameterization | WAIC favors the double version |

---

## 🌊 Synthesis: From Decision Under Uncertainty to Decision On Uncertainty

### 🀠 The Stick (the Past): The Tyranny of Information Maximization

**What We Must Destroy:**
- The "more information = better decisions" dogma that created analysis paralysis
- Prediction-Based Prescription's rigid "predict, then prescribe" sequence, which ignores endogeneity
- Prior predictive checks that validate but never question the prior itself
- The delusion that uncertainty is always an enemy to be eliminated
- Better Place's $850M funeral: the price of information addiction

### πŸ₯• The Carrot (the Future): The Dawn of Uncertainty Design

**What We Must Build:**
- **Bayesian Cringe** (Gelman): healthy skepticism of over-precision
- **Strategic Ignorance**: Ο„* = max(0, V/ic - 1) mathematically defines when not knowing beats knowing
- **Endogenous PBP**: prediction and prescription become one when Ο„ is chosen
- **Prior as Design**: not what you believe but what you choose to believe
- **Tesla's Triumph**: "Roughly 200 miles" beats "Exactly 5 minutes"
### Key Falsifiable Predictions

1. **Industries with higher n β†’ lower average Ο„** (complexity forces flexibility)
2. **Lower i (e.g., the AI era) β†’ a bimodal Ο„ distribution** (all-or-nothing strategies)
3. **The V/ic ratio determines optimal promise precision** (not market maturity)
4. **Successful founders show a Ο„ trajectory from low to high** (though not monotonically)

---

## πŸ’‘ Philosophical Foundation: Negative Capability

Building on Keats's "negative capability", the ability to remain comfortable in uncertainty:

**NC = 1/(Ο„+1)**

- High NC (low Ο„): Tesla's "roughly 200 miles"
- Low NC (high Ο„): Better Place's "exactly 5 minutes"
- Zero NC (Ο„β†’βˆž): Theranos's impossible precision

This quantifies what poets knew intuitively: **comfort with uncertainty is strength, not weakness**.

---

## πŸ”¬ Methodological Contributions

### For Bayesian Statistics (Andrew's Lens)
- **Endogenous uncertainty**: Ο„ as a chosen parameter
- **Double reparameterization**: computational elegance
- **Rational meaning-construction cost**: i as a digestion cost

### For Innovation Policy (Josh's Lens)
- **Stage-appropriate Ο„**: different policies for different V/ic
- **Market failures from Ο„ mismatch**: over- and under-specification
- **Policy as n-reducer, markets as Ο„-optimizer**: clear roles

### For Entrepreneurship Theory (Scott's Lens)
- **Unifies the Planning and Action schools**: both are right, at different Ο„
- **Explains contrarian success**: low Ο„ preserves option value
- **Strategic ignorance as capability**: a feature, not a bias

---

## 🎭 The Promise Paradox Resolution

Our central paradox: **why do precise promises fail while vague promises succeed?**

Answer: **Ο„* = max(0, V/ic - 1)**

- Better Place: high Ο„ despite high c β†’ learning trap β†’ failure
- Tesla: low initial Ο„ β†’ adaptive evolution β†’ success
- Optimal strategy: let Ο„ grow with the V/ic ratio

**"Uncertainty is not a constraint to overcome but a resource to design."**

---

*Last updated: based on a deep synthesis of the Space Food papers and our double reparameterization framework*
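
---

The core quantities above, the Ο„* = max(0, V/ic - 1) rule and the Ο† ~ Beta(ΞΌΟ„, (1-ΞΌ)Ο„) reparameterization together with a prior predictive check, can be sketched in a few lines of Python. This is a minimal illustrative sketch: the function names and the specific (ΞΌ, Ο„, V, i, c) values are assumptions chosen for demonstration, not estimates from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def tau_star(V: float, i: float, c: float) -> float:
    """Optimal promise concentration tau* = max(0, V/(i*c) - 1).

    V: value of information, i: integration (digestion) cost,
    c: information acquisition cost. All are illustrative scalars.
    """
    return max(0.0, V / (i * c) - 1.0)

def promise_prior(mu: float, tau: float, size: int = 10_000) -> np.ndarray:
    """Second reparameterization: draw phi ~ Beta(mu*tau, (1-mu)*tau),
    where mu is the aspiration (mean) and tau the concentration."""
    return rng.beta(mu * tau, (1 - mu) * tau, size=size)

# Strategic ignorance: when information cost exceeds value, tau* = 0.
print(tau_star(V=10, i=1, c=2))  # information worth acquiring
print(tau_star(V=1, i=2, c=2))   # rational ignorance region

# Prior predictive check: a vague promise (low tau) should generate a
# wide spread of success rates; a precise promise (high tau) should
# concentrate mass near the aspiration mu.
vague = promise_prior(mu=0.6, tau=2.0)      # "roughly 200 miles"
precise = promise_prior(mu=0.6, tau=200.0)  # "exactly 5 minutes"
print(f"std (low tau):  {vague.std():.3f}")
print(f"std (high tau): {precise.std():.3f}")
```

The design choice mirrors the note's two-step framework: Ο„ appears once as a decision variable (`tau_star`) and once as a prior hyperparameter (`promise_prior`), which is what makes uncertainty a designed resource rather than a nuisance.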
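Similarly, the negative-capability mapping NC = 1/(Ο„+1) can be made concrete. The Ο„ values assigned to the three cases below are hypothetical, chosen only to reproduce the high/low/zero NC ordering described in the Philosophical Foundation section.

```python
def negative_capability(tau: float) -> float:
    """NC = 1 / (tau + 1): comfort with uncertainty, in (0, 1]."""
    return 1.0 / (tau + 1.0)

# Hypothetical tau values, chosen only to rank the three cases:
cases = {
    "Tesla ('roughly 200 miles')": 1.0,          # low tau  -> high NC
    "Better Place ('exactly 5 minutes')": 50.0,  # high tau -> low NC
    "Theranos (impossible precision)": 1e12,     # tau -> inf, NC -> 0
}
for name, tau in cases.items():
    print(f"{name}: NC = {negative_capability(tau):.2e}")
```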