[[inspection paradox]]

**Table 1: Feedback Analysis by Role**

| Role & Person    | Key Feedback Points                                                |
| ---------------- | ------------------------------------------------------------------ |
| **Scientist**    | 1. Paper lacks sufficient decision-making components               |
|                  | 2. Need to better justify the epistemic/aleatoric distinction's value |
|                  | 3. Mathematical framework needs concrete entrepreneurial decisions |
| **Modeler**      | 1. Time cost ratio axis needs better explanation                   |
|                  | 2. Hierarchical model integration needs strengthening              |
|                  | 3. System dynamics connection needs clarification                  |
| **Practitioner** | 1. E/A ratio's practical implications are unclear                  |
|                  | 2. Knowledge evolution over time needs addressing                  |
|                  | 3. Optimal vs dynamic sampling strategies need clearer explanation |

**Table: Top Priority Action Items with Supporting Evidence**

| Priority | Action Item | Supporting Evidence | Process to act |
| -------- | ----------- | ------------------- | -------------- |
| 1 | **Strengthen Decision Framework Integration** | Scientist: "To me, this is a more comprehensive decision model, because you get decisions, right? Your decision is either I'm going to continue to experiment, I'm going to scale, or I'm going to fail, right? So there are end points and there's continuation." | 1. Frame the original pivoting model as a binary decision (commit or keep learning, i.e., combine the fail and experiment states)<br><br>2. Frame the sampling strategy itself as a decision (fixed k, optimal k, dynamic k), i.e., for problems with infinite r, keep solving subproblems until reaching a finite r, then build upward on that foundation<br><br>3. Add concrete examples of how sampling leads to go/no-go decisions<br><br>stashed: Show how the E/A ratio influences decision thresholds |
| 2 | **Divergence from traditional statistical decision theory** | 🚨TODO [[Space/Library/1논문용/textbook/📖tenenbaum24_bayes(cog)]] | Explain why the entrepreneur's optimal stopping is closer to the secretary problem than the parking problem |
| 3 | **Clarify Hierarchical Model Integration** | Modeler: "A lot of pure inference seems like it's concerned with something like measuring the effect of some new molecule on diabetes. And that, in itself, is not a decision. The decision, then, is what do you do with that information." | - Develop the secretary-hiring example showing multiple decision levels (uncertain goals)<br>- Show how hierarchical structure affects sampling strategy<br>- Connect Bayesian updating to practical decision-making<br>- Demonstrate how hierarchical models influence optimal sample sizes |
| 4 | **Add Practical Implementation Guidelines** | Practitioner: "Single optimal case, same scenarios... And what's with the dynamic case... I mean the fixed case or the dynamic case make a lot of sense. The single optimal case does not make a lot of sense."<br><br>Practitioner: "For example, you do one case. Everyone votes Texas, right? Now you have one optimal case, right? So when you do sampling again, what's your objective?" | - Create clear decision rules for choosing a sampling strategy<br>- Use the Tesla battery example to illustrate strategy selection<br>- Add specific thresholds for switching between strategies<br>- Include practical metrics for implementation |

**Table: Top Priority Action Items with Feedback Summarized by Role**

| Priority | Action Item | Supporting Evidence | Process to act |
| -------- | ----------- | ------------------- | -------------- |
| 1 | **Strengthen Decision Framework Integration** | Scientist: Emphasizes the need for a comprehensive decision model with clear endpoints (scale/fail/experiment)<br><br>Angie: Suggests reframing as a binary choice (commit vs keep learning)<br><br>Modeler: Points out the need to connect sampling strategies to system-level decisions<br><br>Practitioner: Questions how sampling leads to actual decisions | 1. Frame the original pivoting model as a binary decision (commit or keep learning, i.e., combine the fail and experiment states)<br><br>2. Frame the sampling strategy itself as a decision (fixed k, optimal k, dynamic k), i.e., for problems with infinite r, keep solving subproblems until reaching a finite r, then build upward on that foundation<br><br>3. Add concrete examples of how sampling leads to binary (commit or keep learning) decisions<br><br>stashed: Show how the E/A ratio influences decision thresholds |
| 2 | **Divergence from traditional statistical decision theory around the feedback loop between inference and action, i.e., active inference** | Scientist: Notes optimal stopping is traditionally based on the secretary problem<br><br>Angie: Highlights the key difference: in entrepreneurship, goals evolve during the search, unlike in the parking problem | Explain in the introduction why the entrepreneur's optimal stopping is closer to the secretary problem than the parking problem |
| 3 | **Clarify Hierarchical Model Integration** | Scientist: Questions how the hierarchical model connects to actual decisions<br><br>Angie: Explains hierarchical Bayes as a tool for modeling latent parameters<br><br>Modeler: Points out that pure inference needs a connection to decisions<br><br>Practitioner: Seeks clarification on the practical implementation of hierarchical models | - Develop the secretary-hiring example showing multiple decision levels (uncertain goals)<br>- Show how hierarchical structure affects sampling strategy<br>- Connect Bayesian updating to practical decision-making<br>- Demonstrate how hierarchical models influence optimal sample sizes |
| 4 | **Add Practical Implementation Guidelines** | Scientist: Asks for concrete implementation steps<br><br>Angie: Provides the Tesla example to illustrate sampling strategies<br><br>Modeler: Suggests the need for clear operational guidelines<br><br>Practitioner: Questions the distinction between optimal and dynamic sampling in practice | - Create clear decision rules for choosing a sampling strategy<br>- Use the Tesla battery example to illustrate strategy selection<br>- Add specific thresholds for switching between strategies<br>- Include practical metrics for implementation |
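The fixed-k vs dynamic-k distinction from priority 1 (and the practitioner's question in priority 4 about what dynamic stopping buys you) can be sketched as code. This is a minimal illustration, not the paper's model: `dynamic_k`, the Beta(1, 1) prior, and the posterior-width stopping threshold are all my assumptions, standing in for whatever stopping criterion the paper actually uses.

```python
def fixed_k(samples, k):
    """Fixed-k strategy: always pay for exactly k samples."""
    return samples[:k]

def dynamic_k(sample_source, max_k, width=0.1):
    """Dynamic strategy: keep drawing a Bernoulli signal until the
    posterior std of a Beta(1, 1) prior drops below `width` (epistemic
    uncertainty resolved), or the sampling budget max_k is exhausted."""
    a, b = 1.0, 1.0  # uniform Beta prior over the unknown success rate
    for n in range(1, max_k + 1):
        x = sample_source()          # one more market test (0 or 1)
        a, b = a + x, b + (1 - x)    # conjugate Bayesian update
        var = a * b / ((a + b) ** 2 * (a + b + 1))
        if var ** 0.5 < width:
            return n, a / (a + b)    # stop early: commit decision is ripe
    return max_k, a / (a + b)        # budget exhausted: decide anyway
```

The contrast the practitioner asked about falls out directly: with an unambiguous signal (every test succeeds) the dynamic rule stops after a handful of samples, while a noisy signal consumes the full budget; fixed k pays the same cost in both worlds.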
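Priority 2 contrasts the secretary problem with the parking problem. As a reference point for that introduction, the textbook cutoff rule for the secretary problem (observe roughly n/e candidates, then commit to the first one who beats them all) can be simulated; this is the classic formulation, not the paper's, and the function names are mine.

```python
import math
import random

def secretary_stop(values, explore_frac=1 / math.e):
    """Cutoff rule: observe the first ~n/e candidates without committing,
    then accept the first candidate who beats all of them."""
    n = len(values)
    cutoff = int(n * explore_frac)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for i in range(cutoff, n):
        if values[i] > best_seen:
            return i        # irrevocable commit, no recall of earlier candidates
    return n - 1            # forced to take the last candidate

def success_rate(n=50, trials=20_000, seed=0):
    """Estimate how often the cutoff rule picks the single best candidate."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        values = [rng.random() for _ in range(n)]
        hits += values[secretary_stop(values)] == max(values)
    return hits / trials
```

The empirical success rate hovers near the classical 1/e ≈ 0.37. The relevant divergence for the paper is that this rule assumes a fixed objective ("hire the best"); the entrepreneurial case noted by Angie, where the goal itself evolves during the search, breaks that assumption.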
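For priority 3, one concrete way hierarchical structure affects sampling strategy is shrinkage: sparsely sampled market segments borrow strength from the population-level estimate, so they need fewer samples before a defensible decision. The empirical-Bayes-style pooling below is a minimal sketch under my own assumptions (the `prior_strength` pseudo-count and the two-level scheme are illustrative, not the paper's model).

```python
def hierarchical_update(segment_data, prior_strength=4.0):
    """Two-level sketch: the pooled success rate across segments acts as
    a shared prior, pulling each segment's estimate toward the population
    mean -- thinly sampled segments are shrunk the most."""
    total = sum(sum(d) for d in segment_data)
    count = sum(len(d) for d in segment_data)
    pooled = total / count  # top-level (population) estimate
    posteriors = []
    for d in segment_data:
        # prior pseudo-counts centered on the pooled rate, plus the data
        a = prior_strength * pooled + sum(d)
        b = prior_strength * (1 - pooled) + len(d) - sum(d)
        posteriors.append(a / (a + b))  # posterior mean for this segment
    return pooled, posteriors
```

With four successes in one segment and a single failure in another, the raw estimate for the second segment is 0%, but the hierarchical posterior lands well above that, which is exactly the mechanism by which hierarchical models change optimal per-segment sample sizes.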