L2A algorithm
use case: starting from one hierarchical model of the Tesla Powerwall (algorithmic level, "A-lev") and three need-solution cases at the logic level ("L-lev": Powerwall, Model S, Roadster), I asked GPT to infer the process from logic to algorithm (OSC: option2considerationSet2Choice) and to apply it to hierarchical models of Tesla's cases
input: the prompts below; output: two trees
- 1. The attached tree is for Tesla's Powerwall. Using the consideration set we constructed previously, could you make a similar tree for the "Tesla Model S" case, where the solution was existing and the need was new?
- 2. Using the attached comparison table, could you infer the tree-construction process of the Powerwall case?
- Apply the outcome of 2 (the process from the logic level (need-solution search strategy) to the algorithmic level (options2considerationSet2evaluation)) to Tesla's Roadster case.
- Apply the outcome of 2 (the process from the logic level (need-solution search strategy) to the algorithmic level (options2considerationSet2evaluation)) to Tesla's Model S case.
- Make sure your outcome is two trees like the one attached.
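The OSC pipeline named above can be sketched as three stages. This is a minimal, hypothetical Python sketch of "options → consideration set → choice"; all function names, labels, and scoring rules are illustrative assumptions, not Tesla's or GPT's actual process.

```python
# Hypothetical OSC (options -> consideration set -> choice) sketch.
# Every name and rule here is an illustrative assumption.

def generate_options(needs, solutions):
    """Sampling stage: cross every need with every candidate solution."""
    return [(n, s) for n in needs for s in solutions]

def consideration_set(options, feasible):
    """Screening stage: keep only options passing a feasibility predicate."""
    return [o for o in options if feasible(o)]

def choose(cset, score):
    """Decision stage: pick the highest-scoring considered option."""
    return max(cset, key=score)

# Toy run with made-up labels.
options = generate_options(
    ["home backup power", "grid storage"],
    ["lithium-ion pack", "lead-acid pack"],
)
cset = consideration_set(options, lambda o: "lithium" in o[1])
best = choose(cset, lambda o: len(o[0]))  # arbitrary toy score
print(best)  # → ('home backup power', 'lithium-ion pack')
```

The point of the sketch is only the shape of the computation (enumerate, screen, evaluate), which is what the tree-construction prompts ask GPT to make explicit.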
| Level | Name | Description | e.g. BMEV Paper |
| ----- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 1 | 🎯Computational theory | - What are the inputs and outputs to the computation?<br>- What is its goal?<br>- What is the logic by which it is carried out? | - Inputs: historical equity-valuation data from Crunchbase/PitchBook (potentially with term sheets and cap tables)<br>- Outputs: probabilistic models of equity valuation, with suggested further work on combining them with Bayesian decision theory<br>- Goal: to build a baseline Bayesian statistical model for equity valuation that a follow-up paper will deconstruct via conversational inference using the MIT inference stack<br>- Logic: using Bayesian inference to drive action |
| 2 | 🧱Representation and algorithm | - How is information represented?<br>- How is information processed to achieve the computational goal? | - Representation: probability distributions over equity values and relevant factors<br><br>- Processing: Bayesian updating, MCMC sampling, hierarchical modeling |
| 3 | 💻Hardware implementation | - How is the computation realized in physical or biological hardware? | - Implementation: probabilistic-programming systems such as Stan or Gen |
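The "Bayesian updating" entry at level 2 can be made concrete with the simplest conjugate case. This is a hedged stand-in: the actual model would be a hierarchical one fit with MCMC in Stan or Gen, not this closed-form Beta-Bernoulli toy, and the numbers below are invented for illustration.

```python
# Minimal conjugate Beta-Bernoulli update, standing in for the
# "Bayesian updating" row of the Marr table. Numbers are illustrative.

def beta_update(alpha, beta, successes, failures):
    """Posterior parameters of a Beta(alpha, beta) prior after
    observing Bernoulli outcomes (conjugate update)."""
    return alpha + successes, beta + failures

# Flat Beta(1, 1) prior on, say, the chance a startup's next round
# is an up-round; observe 7 up-rounds and 3 down-rounds.
a, b = beta_update(1, 1, successes=7, failures=3)
posterior_mean = a / (a + b)
print(a, b, round(posterior_mean, 2))  # → 8 4 0.67
```

The same update-your-distribution-with-data logic is what MCMC performs for models too complex for closed forms.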
Applied to a social system [[krafftCogsci.pdf]]:
| Stage | Adaptation 🌱 | Co-opted Adaptation 🦅 | Co-opted Nonadaptation 🐟 |
|-------|--------------|----------------------|------------------------|
| **Sampling Stage**<br>(How options are gathered) | Looks at small changes to what already works | Looks at both old uses and possible new uses | Looks everywhere for new combinations |
| **Decision Stage**<br>(How choices are made) | Picks the best small improvement | Keeps old function while adding new one | Combines parts in completely new ways |
| **Core Strategy** | "Make it work better" | "Use it for something else too" | "Mix and match to make something new" |
| **Risk Level** | Very safe: small steps | Medium risk: keeps backup | High risk: major changes |
| **Example** | Bird feather gets better at keeping warm | Feather keeps warmth, adds flight | Swim bladder becomes totally different (lungs) |
Key Points:
- Adaptation 🌱: Takes safe, small steps to improve what works
- Co-opted Adaptation 🦅: Adds new uses while keeping the old
- Co-opted Nonadaptation 🐟: Creates something totally new by mixing parts
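The three sampling/decision strategies above can be sketched as toy operators over a "design" (a tuple of parts). This is purely illustrative; the Krafft paper does not specify this code, and the feather/bladder examples are just the table's own analogies.

```python
# Toy operators for the three strategies; all deterministic so the
# behavior is easy to check. Names and examples are illustrative.

def adaptation(design, i, tweak):
    """Adaptation 🌱: a small change to one existing part."""
    return design[:i] + (tweak(design[i]),) + design[i + 1:]

def coopted_adaptation(design, new_use):
    """Co-opted adaptation 🦅: keep the old parts, add a new use."""
    return design + (new_use,)

def coopted_nonadaptation(part_pool, picks):
    """Co-opted nonadaptation 🐟: recombine parts from anywhere."""
    return tuple(part_pool[i] for i in picks)

feather = ("insulation",)
print(adaptation(feather, 0, lambda p: p + "+"))   # → ('insulation+',)
print(coopted_adaptation(feather, "flight"))       # → ('insulation', 'flight')
print(coopted_nonadaptation(("bladder", "lung", "gill"), (1,)))  # → ('lung',)
```

Note how the first operator never changes the design's arity, the second only grows it, and the third ignores the original design entirely, mirroring the risk ordering in the table.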
The same three strategies, reframed in terms of epistemic (reducible) vs. aleatoric (irreducible) uncertainty:
| Aspect | Adaptation 🌱 | Co-opted Adaptation 🦅 | Co-opted Nonadaptation 🐟 |
|-------|--------------|----------------------|------------------------|
| **Information State** | High epistemic / Low aleatoric | Mixed epistemic / Mixed aleatoric | High aleatoric / Low epistemic |
| **Learning Process** | Systematic reduction of epistemic uncertainty in known domain | Balanced reduction of both uncertainties across domains | Embraces aleatoric uncertainty to find novel combinations |
| **Testing Approach** | Small, focused tests to reduce specific unknowns | Parallel tests across old and new domains | Wide exploration accepting high uncertainty |
| **Decision Rule** | Stop when epistemic uncertainty falls below threshold | Stop when both paths show viable certainty levels | Stop when novel combination proves viable |
| **Example** | Bird feather gets better at temperature control through incremental improvements | Feathers maintain temperature control while exploring flight capability | Fish air bladder transforms into lungs through uncertain recombination |
Key Insights:
1. Adaptation 🌱: Focuses on reducing epistemic uncertainty in known space
2. Co-opted Adaptation 🦅: Balances reduction of both uncertainty types
3. Co-opted Nonadaptation 🐟: Leverages high aleatoric uncertainty for innovation
This framework shows how different adaptation types handle the fundamental trade-off between reducible (epistemic) and irreducible (aleatoric) uncertainty in their search processes.
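The adaptation-style decision rule from the table ("stop when epistemic uncertainty falls below a threshold") can be written down directly. The 1/(1+n) decay of epistemic variance is a stylized assumption for illustration, not something claimed by the source.

```python
# Toy decomposition of predictive variance into a reducible
# (epistemic) part that shrinks with data and an irreducible
# (aleatoric) floor. The decay law is an illustrative assumption.

def epistemic_var(prior_var, n):
    """Reducible uncertainty: shrinks as observations accumulate."""
    return prior_var / (1 + n)

def total_var(prior_var, n, aleatoric):
    """Predictive variance = reducible part + irreducible floor."""
    return epistemic_var(prior_var, n) + aleatoric

def should_stop_adaptation(prior_var, n, threshold):
    """Adaptation 🌱 rule: stop once the *epistemic* component is
    below threshold; the aleatoric floor never shrinks."""
    return epistemic_var(prior_var, n) < threshold

print(total_var(2.0, n=3, aleatoric=0.5))               # → 1.0
print(should_stop_adaptation(2.0, n=3, threshold=0.6))  # → True
print(should_stop_adaptation(2.0, n=1, threshold=0.6))  # → False
```

The co-opted variants would need richer rules (tracking uncertainty per domain, or accepting a high aleatoric floor), but the same decomposition underlies all three columns.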