* **Appendix A** will outline the hierarchical Bayesian framework and random utility modeling via logit regression.
* **Appendix B** will distill the stakeholder coordination logic into its simplest operational form, based on the decision matrix and sigmoid evaluation components.
* **Appendix C** will present the full primal-dual optimization formulation, now expanded with detailed narrative on how it maps to the PRISM framework (perception 📽️, coordination 🔄, bottleneck breaking ⚡).
* **Appendix D** will prove the NP-completeness of the entrepreneurial decision-making model with a nonlinear, opportunity-dependent objective.

# Appendix A: Stakeholder 📽️Perception Modeling

Entrepreneurs can model stakeholder decision-making as a **hierarchical Bayesian random utility process**, capturing heterogeneity in perceptions and rational choice under uncertainty. I assume each stakeholder $i$ evaluates a venture’s observable signals $x$ (e.g. product features, team credentials) and infers an *unobserved* latent quality of the venture (sometimes called a *“phantom” attribute*). In other words, stakeholders interpret observable venture characteristics as noisy indicators of underlying venture quality. This inference is treated as *noisily rational*: stakeholders update their beliefs in a Bayesian manner given the signals, then choose actions that maximize their perceived utility, subject to error. Formally, let $U_{i,j}$ denote stakeholder $i$’s utility for a decision option $j$ (for example, $j=1$ might be “invest in the venture” and $j=0$ “decline”). I use a random utility model where: $ U_{i,j} \;=\; x_j^{T}\beta_i \;+\; \varepsilon_{i,j}\,, $ with $\beta_i$ a stakeholder-specific preference vector and $\varepsilon_{i,j}$ an idiosyncratic error term. If I assume $\varepsilon_{i,j}$ follows an extreme value type-I (Gumbel) distribution (i.e.
each stakeholder makes **logit**-style noisy decisions), then the probability that stakeholder $i$ chooses option $j$ is given by the logistic choice function: $ P(y_i = j) \;=\; \frac{\exp\!\big(x_j^{T}\beta_i\big)}{\sum_{k} \exp\!\big(x_k^{T}\beta_i\big)} \,, $ as in a multinomial logit model. The vector $\beta_i$ captures stakeholder $i$’s latent preferences or belief weights—how strongly they value each venture signal $x$—and I model these preferences in a **hierarchical Bayesian** manner. In particular, I place a prior on each stakeholder’s $\beta_i$ such that: $ \beta_i \sim \mathcal{N}(\bar{\beta},\, \Sigma_{\beta})\,, $ meaning stakeholders are drawn from a population with mean preference $\bar{\beta}$ and covariance $\Sigma_{\beta}$. This hierarchical structure allows the entrepreneur to account for heterogeneity: some stakeholders may be more team-focused, others more market-focused, etc., but all share a common underlying distribution. By observing stakeholder choices (or feedback) and updating the posterior of $\beta_i$, an entrepreneur can learn about an individual stakeholder’s particular biases and expectations. Notably, stakeholders may base their decisions on inferred qualities that are *not directly observable* to the entrepreneur. These latent perceptions are analogous to the *phantom attributes* described by Bell and Dotson (2022)—features of a product or venture that “influence choice but are latent artifacts of the decision process.” In our context, a stakeholder might infer an unobserved trait (e.g. the venture’s trustworthiness or long-term scalability) from observed signals like pricing, branding, or founder background. Entrepreneurs can incorporate such latent factors by extending the design matrix $x$ to include *unobserved* attributes and using Bayesian inference to estimate them (treating them as missing data to be learned). 
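The hierarchical logit choice process above can be sketched numerically. The following is a minimal simulation, with illustrative (not estimated) values for $\bar{\beta}$, $\Sigma_{\beta}$, and the venture signals $x$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Population-level preference distribution (illustrative values, not estimates):
# beta_bar is the population mean preference, sigma_beta the heterogeneity.
beta_bar = np.array([1.0, 0.5])        # mean weights on [team_strength, market_size]
sigma_beta = np.diag([0.3, 0.3])

# Venture signals for two options: row 0 = "decline" (baseline), row 1 = "invest".
X = np.array([[0.0, 0.0],
              [0.8, 1.2]])

def choice_probs(beta_i, X):
    """Multinomial logit: P(y=j) = exp(x_j' b) / sum_k exp(x_k' b)."""
    scores = X @ beta_i
    scores = scores - scores.max()     # subtract max for numerical stability
    w = np.exp(scores)
    return w / w.sum()

# Draw one heterogeneous stakeholder from the hierarchical prior and predict choice.
beta_i = rng.multivariate_normal(beta_bar, sigma_beta)
p = choice_probs(beta_i, X)
print(f"P(invest) for this stakeholder: {p[1]:.3f}")

# Population-average invest probability (integrating over the prior by Monte Carlo).
draws = rng.multivariate_normal(beta_bar, sigma_beta, size=5000)
avg = np.mean([choice_probs(b, X)[1] for b in draws])
print(f"Population-average P(invest): {avg:.3f}")
```

The Monte Carlo average over $\beta_i$ draws illustrates how the hierarchical prior lets the entrepreneur reason about both an individual stakeholder and the population at once.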
While detailed methods for identifying these latent attributes are beyond our scope, the key is that the hierarchical model can flexibly accommodate both observed and inferred signals. **Interpretation – Noisy Rational Inference:** This framework implies that a stakeholder’s decision is a *probabilistic, rational response* to the venture’s signals. Each stakeholder behaves as if updating their belief about the venture’s quality (the posterior distribution of the latent attribute) and then choosing the action that maximizes expected utility. The logistic choice model adds controlled “noise” to reflect uncertainty and idiosyncrasies in decision-making. For the entrepreneur, this means stakeholder decisions can be predicted (and influenced) by managing the signals $x$: providing clearer or more convincing venture data will shift the stakeholder’s $\beta_i$-weighted evaluation upward and increase the probability of a favorable decision. In summary, **stakeholder decisions are modeled as noisy rational inferences over projected venture signals** – each stakeholder is making the best decision they can given their perception of the venture, and the hierarchical Bayesian logit model formalizes this process mathematically. # Appendix B: Multi-Stakeholder 🔄Coordination Mechanics When multiple stakeholders are involved, their decisions often become **interdependent**. Entrepreneurs frequently encounter **circular dependencies** where each stakeholder’s commitment depends on others: for example, investors wait until there are confirmed customers; customers hesitate until the venture has reputable investors and a proven product; partners or regulators want to see signals of support from both investors *and* customers. These feedback loops can create *deadlock situations* in which no single stakeholder is willing to move first. 
Effective entrepreneurial strategy must therefore *coordinate* stakeholders – aligning their expectations and actions so that everyone is willing to commit in concert. To reason about coordination, it is useful to represent the stakeholders’ joint decisions in a **stakeholder decision matrix**. Consider a simple case of two stakeholders (A and B) each deciding whether to support a venture (Yes = 1) or not (No = 0). Each stakeholder has two possible actions, so the combined outcomes can be laid out in a $2\times2$ matrix: * **Both say No (0,0):** The venture fails to gain support. This outcome might occur if both stakeholders independently conclude the venture isn’t viable *or* if each is waiting for the other to make the first move. * **A says Yes, B says No (1,0):** Stakeholder A commits but B holds out. A’s support alone may be insufficient; A might later withdraw or incur loss if B never joins. This asymmetry is unstable – A acted on an expectation that B would follow, which didn’t happen. * **A says No, B says Yes (0,1):** Symmetrically, B commits while A does not. This is the flip side of the above, and just as unstable. * **Both say Yes (1,1):** The venture gets full support. This is the coordinated outcome needed for success (assuming the venture truly requires both A and B). In this example matrix, the **coordinated equilibrium** outcomes are the corners where decisions are aligned (either both support or both don’t). The off-diagonal cells (one supports, the other doesn’t) reflect *misalignment* – one stakeholder’s positive expectation wasn’t shared by the other. In practice, if the venture is promising, the goal is to move stakeholders toward the **(Yes, Yes)** outcome (everyone supports); if the venture is not viable, all should correctly settle on **(No, No)**. Either way, *consistency* is key. The entrepreneur’s role is to facilitate information flow and incentives such that stakeholders reach a consensus decision rather than acting at cross purposes. 
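The $2\times2$ decision matrix above can be checked mechanically. In this sketch the payoffs are illustrative assumptions (unilateral commitment costs the mover, joint commitment pays off, joint refusal is neutral); under them, the aligned corners are exactly the stable outcomes:

```python
from itertools import product

# Illustrative payoffs (A, B) for each (d_A, d_B); values are assumptions
# chosen to capture "unilateral commitment is costly, coordination pays off".
payoff = {
    (0, 0): (0, 0),     # neither commits: status quo
    (1, 0): (-1, 0),    # A commits alone: A bears a cost, B is unaffected
    (0, 1): (0, -1),    # B commits alone
    (1, 1): (3, 3),     # coordinated support: venture proceeds
}

def best_response(player, other_action):
    """Action maximizing `player`'s payoff, holding the other's action fixed."""
    if player == 0:  # stakeholder A
        return max((0, 1), key=lambda a: payoff[(a, other_action)][0])
    return max((0, 1), key=lambda b: payoff[(other_action, b)][1])

# Pure-strategy equilibria: each action is a best response to the other's action.
equilibria = [(a, b) for a, b in product((0, 1), repeat=2)
              if best_response(0, b) == a and best_response(1, a) == b]
print("Equilibria:", equilibria)
```

Under these payoffs the stable outcomes are the aligned corners (0,0) and (1,1), matching the matrix discussion: misaligned cells unravel because the lone committer would rather withdraw.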
I model each stakeholder’s individual decision process with a **sigmoid-based decision function**, which provides a smooth approximation of the threshold behavior in commitment. Let $d_i \in \{0,1\}$ indicate stakeholder $i$’s decision (0 = no support, 1 = support). I define the probability of support as a logistic function of that stakeholder’s perceived venture success likelihood (or utility) $u_i$: $ P(d_i = 1) \;=\; \sigma(u_i) \;=\; \frac{1}{1 + \exp(-\kappa\, u_i)} \,, $ where $u_i$ represents stakeholder $i$’s **confidence** in the venture (e.g. how strongly they believe the venture will succeed or meet their requirements), and $\kappa$ is a steepness parameter. If $u_i$ is high (the stakeholder is confident), $P(d_i=1)$ approaches 1; if $u_i$ is very low, $P(d_i=1)$ is near 0. For intermediate levels of confidence, the sigmoid curve captures the idea that the stakeholder might go either way, reflecting uncertainty. In the limit of $\kappa \to \infty$, this becomes a step function (hard threshold): $d_i=1$ if $u_i>0$, else $d_i=0$. Thus, the logistic form provides a principled, differentiable model of each stakeholder’s decision rule.

**Interdependence and Coordination:** The complication in a multi-stakeholder setting is that each $u_i$ (stakeholder’s confidence) is not formed in isolation. Stakeholder $i$’s confidence $u_i$ will generally depend on their **expectations of other stakeholders’ actions or beliefs**. For instance, if stakeholder A expects stakeholder B to invest (which increases the venture’s chance of success, providing capital or credibility), then A’s own $u_A$ will rise. Conversely, if A expects B to back out, $u_A$ may drop. I end up with a coupled system of equations: $u_i = f_i(\text{signals, and } d_{-i})$ where $d_{-i}$ indicates the actions of the other stakeholders. In a fully rational equilibrium, all these expectations are mutually consistent (each stakeholder’s expectation about others’ decisions is correct).
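The sigmoid decision rule $P(d_i=1)=\sigma(u_i)$ can be sketched directly; the $\kappa$ and $u_i$ values below are illustrative:

```python
import math

def commit_prob(u, kappa=4.0):
    """P(d=1) = 1 / (1 + exp(-kappa * u)); kappa is the steepness parameter."""
    return 1.0 / (1.0 + math.exp(-kappa * u))

# Confident stakeholder -> near-certain commitment; doubtful -> near-certain refusal.
print(commit_prob(1.5))    # high confidence
print(commit_prob(-1.5))   # low confidence
print(commit_prob(0.0))    # on the fence: exactly 0.5, could go either way

# As kappa grows, the sigmoid approaches the hard threshold at u = 0.
print(commit_prob(0.1, kappa=100.0))   # effectively a step function: near 1
```

Differentiability is what makes this form useful downstream: confidence shifts from new signals translate smoothly into commitment-probability shifts rather than abrupt jumps.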
Achieving this consistency is the essence of coordination. Mathematically, one can impose **consensus constraints** to enforce expectation alignment across stakeholders. One useful constraint is to require that all stakeholders (and the entrepreneur) share the **same predicted outcome** for the venture's next state or success metric. For example, using the expected outcomes $\mu_j(\textcolor{red}{a})$ from our primal-dual formulation, coordination can require: $ \mu_1(\textcolor{red}{a}) \;=\; \mu_2(\textcolor{red}{a}) \;=\; \cdots \;=\; \mu_N(\textcolor{red}{a}) \;=\; \mu_e(\textcolor{red}{a}) \,. $ In words, **everyone is on the same page** about the venture's prospects. This alignment of expected outcomes (which could be probabilistic beliefs about state transitions, revenue projections, etc.) means no stakeholder is significantly more optimistic or pessimistic than another -- a prerequisite for them to comfortably move forward together. If such equalities hold, then for any stakeholders $i$ and $j$, their confidence levels $u_i$ and $u_j$ should be compatible, leading to mutually reinforcing decisions. In the two-stakeholder example above, reaching the (Yes,Yes) cell requires that A and B both believe in the venture's success with high confidence, which in turn requires aligning their beliefs about the venture's fundamentals. **Coordination Update Rules:** Achieving expectation alignment in practice may require iterative updates as new information is shared. I outline a simple iterative mechanism by which an entrepreneur can drive stakeholders toward consensus: 1. **Signal Exchange:** The entrepreneur (or one of the key stakeholders) shares credible information with all parties. This could be new evidence of traction (e.g. a successful pilot, a signed customer contract) or a preliminary commitment (e.g. a lead investor agreeing to invest contingent on others). These signals serve as common knowledge inputs that can shift everyone's expectations. 2. 
**Belief Update:** Each stakeholder updates their internal model of the venture after receiving the new signal. In Bayesian terms, they revise their expected outcome $\mu_j(\textcolor{red}{a})$ using the evidence. For instance, if a pilot result shows the product works, both investors and customers raise their success estimates. Formally, stakeholder $j$ adjusts $u_j$ (their confidence utility) based on the signal; if I denote the signal by $\Delta$ (e.g. a change in expected growth), the update might be $u_j \leftarrow u_j + w_j \Delta$, where $w_j$ is stakeholder $j$'s weight on that evidence. 3. **Expectation Reconciliation:** The stakeholders and entrepreneur compare their updated expectations. If discrepancies remain (say one investor is still unconvinced while others are confident), further rounds of evidence or discussion occur. The entrepreneur might address specific concerns of the outlier stakeholder by providing targeted information (reducing that stakeholder's uncertainty). Through successive rounds, the goal is to **converge** the $\mu_j(\textcolor{red}{a})$ values across all stakeholders $j$. This is analogous to a consensus algorithm: each iteration should bring beliefs closer. Once all stakeholders' $\mu_j(\textcolor{red}{a})$ values (and the entrepreneur's $\mu_e(\textcolor{red}{a})$) are nearly equal, all stakeholders have a shared understanding of the venture's likely outcome. The expected outcomes $\mu_j(\textcolor{red}{a})$ here are the same quantities that appear in the primal-dual formulation, so belief alignment in this process maps directly onto the consensus constraints there. In practice, the above coordination process can be implemented in a decentralized way (each stakeholder adjusting based on observed actions of others) or centrally facilitated by the entrepreneur who orchestrates information flow.
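The three-step loop (signal exchange, belief update, reconciliation) can be sketched as a simple iteration. The stakeholder names, evidence weights $w_j$, mixing rate, and starting beliefs below are all illustrative assumptions:

```python
# A minimal consensus-style sketch: each round the entrepreneur shares a common
# signal, stakeholders update (u_j <- u_j + w_j * delta style), and beliefs are
# partially pulled toward the group mean (the reconciliation step).
mu = {"investor": 0.2, "customer": 0.6, "partner": 0.4}       # expected-outcome beliefs
weights = {"investor": 0.5, "customer": 0.8, "partner": 0.6}  # evidence sensitivity w_j

def round_update(mu, weights, delta, mix=0.5):
    """One coordination round: evidence shift w_j * delta, then partial averaging."""
    updated = {j: mu[j] + weights[j] * delta for j in mu}      # belief update
    consensus = sum(updated.values()) / len(updated)           # shared reference point
    return {j: (1 - mix) * updated[j] + mix * consensus for j in updated}

signal = 0.1   # e.g. a successful pilot shifts expected growth upward
for _ in range(6):
    mu = round_update(mu, weights, signal)

spread = max(mu.values()) - min(mu.values())
print(f"beliefs after 6 rounds: {mu}")
print(f"remaining disagreement: {spread:.4f}")   # shrinks toward zero over rounds
```

The key behavior the sketch shows is convergence: repeated shared evidence plus reconciliation drives the $\mu_j$ values together, which is the precondition for the simultaneous "yes" described next.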
The **sigmoid decision functions** ensure that as each stakeholder’s confidence $u_i$ grows (due to alignment on a positive outlook), their commitment probability $P(d_i=1)$ sharply increases. Eventually, a tipping point is reached where every stakeholder is willing to say “yes” because they expect everyone else to say “yes” as well. By the same token, if the venture truly does not warrant support, transparent sharing of negative signals will align stakeholders on saying “no,” avoiding wasted resources. The result of successful coordination is a **simultaneous, consistent decision set** – analogous to an equilibrium where each stakeholder’s decision is optimal given the others’. The entrepreneur’s coordination mechanics thus transform what could be a game of strategic waiting into a collaborative, information-driven convergence of decisions.

# Appendix C: Primal-Dual Optimization Narrative with Visual Mapping

## C.1 Mapping EDMNO Components onto the Primal-Dual Formulation

Each core component of the EDMNO framework--**Perception** (📽️), **Coordination** (🔄), and **Sequencing** (⚡)--corresponds to a particular aspect of the primal-dual optimization model. **Figure C.1** illustrates this mapping: the primal problem (uncertainty minimization) and its dual (likelihood maximization) are annotated to show which part emphasizes each component. In the primal formulation, the *entropy terms* in the objective function capture **perception** (📽️) by quantifying uncertainty in stakeholder beliefs. The *constraints* that enforce consistency across stakeholders' expectations encapsulate **coordination** (🔄), since they tie together interdependent stakeholder outcomes. Finally, the *resource budget constraint* and its Lagrange multiplier in the dual highlight **sequencing** (⚡), reflecting how scarce resources focus the entrepreneur on critical actions.
This formulation explicitly "connects all three components of our framework"--in perceptual modeling it optimizes information gathering (entropy reduction), in multi-stakeholder coordination it aligns predictive models across stakeholders, and in bottleneck-driven experimentation it prioritizes high information value per resource unit. [diagram: EDMNO primal-dual mapping. Perception (📽️) corresponds to maximizing entropy/information in the primal (highlighting terms like $H(p)$) and normalization via the partition function in the dual. Coordination (🔄) corresponds to multi-stakeholder constraints in the primal (coupling different $p_j$ via shared expectations $\textcolor{green}{\mu_j}$) and to the likelihood terms $\beta_j^T\textcolor{green}{\mu_j} - \log Z_j$ in the dual that enforce cross-stakeholder consistency. Bottleneck Breaking (⚡) corresponds to the resource constraint ($\sum_j c_j \textcolor{red}{a_j} \le \textcolor{skyblue}{R}$) in the primal and the dual variable $\gamma$ (with threshold condition for $\textcolor{red}{a_j^*}$) that drives action selection under resource limits.](#) *Figure C.1:* **EDMNO Components in the Primal-Dual Optimization Framework.** The primal problem minimizes total uncertainty (weighted entropy) subject to stakeholder expectation constraints and a resource budget, while the dual problem maximizes a weighted log-likelihood of stakeholder satisfaction minus resource cost. Icons mark the formulation pieces most associated with each EDMNO component: the entropy term ($H(p)$) for perception, coupling constraints for coordination, and the resource limit (with dual $\gamma$) for bottleneck-breaking. ## C.2 Primal and Dual Variables: From Mathematical Roles to Business Meaning The primal-dual formulation introduces decision variables ($p$, $\textcolor{red}{a}$) and Lagrange multipliers ($\lambda$, $\beta$, $\gamma$) that carry intuitive business interpretations. 
**Table C.1** translates each variable into plain-English meaning and provides a startup example (from our Entrepreneurship Optimization Proposal) to illustrate:

| **Variable** | **Optimization Role** | **Intuitive Meaning** | **Startup Example** (from proposal) |
| --- | --- | --- | --- |
| $p$ (probability distribution) | **Primal decision variable** -- the probability assigned to each possible outcome scenario for a stakeholder (subject to entropy maximization). | **Belief distribution for outcomes:** the entrepreneur's current bets on how a stakeholder's response or market outcome might turn out, given what actions have been taken. This reflects uncertainty about that stakeholder -- a spread-out $p$ means we're still very unsure. | For an eco-construction startup, $p$ could be the distribution over an **eco-builder's reactions** to a new cement product (e.g. 60% chance of mild interest, 30% chance of strong adoption, 10% chance of rejection) before running a pilot. |
| $\textcolor{red}{a}$ (action vector) | **Primal decision variable** -- indicates which actions/experiments are chosen (often binary or fractional) under resource limits. | **Chosen experiments or strategic moves:** each component of $\textcolor{red}{a}$ corresponds to an action the startup can take, such as a test or partnership, set to 1 if selected. This encodes the plan of attack the entrepreneur decides on to reduce uncertainty. | $\textcolor{red}{a} = [\text{segment}, \text{collaborate}, \text{capitalize}]$ might represent **(1) targeting a test with a lab**, **(2) approaching an eco-builder partner**, **(3) pitching to a VC**. If $\textcolor{red}{a_2}=1$ and the others are 0, the startup focuses on the eco-builder partnership first. |
| $\lambda$ (one per stakeholder $j$) | **Dual variable for normalization constraint** -- ensures each stakeholder's probability distribution $p_j$ sums to 1; appears in the dual as an additive term in the objective. | **Baseline log-likelihood / bias term:** $\lambda_j$ adjusts the "baseline" likelihood of stakeholder $j$ being fully satisfied, before considering specific evidence. In business terms, it is the inherent optimism or skepticism stakeholder $j$ has toward the venture **absent new data** (a calibration factor making the probabilities sum to 100%). A high (more positive) $\lambda_j$ means a stakeholder is inherently easier to satisfy; a low or negative $\lambda_j$ indicates a tougher crowd requiring evidence to even reach a neutral stance. | For an **investor** stakeholder, $\lambda_{\text{inv}}$ might start low (even negative) if the default stance is "not convinced" until proven otherwise. As the startup shows traction, $\lambda_{\text{inv}}$ effectively rises -- the investor's baseline confidence improves, boosting the investor's likelihood term in the dual objective. |
| $\beta$ (vector, per stakeholder $j$) | **Dual variable(s) for expectation constraint** -- Lagrange multipliers that match stakeholder $j$'s predicted outcome $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ to the distribution's expectation; they appear in the dual inside $\beta_j^T \textcolor{green}{\mu_j}(\textcolor{red}{a})$ and $\log Z_j(\beta_j)$. | **Stakeholder requirement weight:** $\beta_j$ captures how strongly stakeholder $j$'s outcome distribution must be **"tilted"** to meet their expected value $\textcolor{green}{\mu_j}$. Intuitively, $\beta_j$ reflects stakeholder $j$'s *demand level or sensitivity*: if $\beta_j$ is large in some direction, the stakeholder has stringent expectations on that metric, forcing the entrepreneur's plan to heavily favor outcomes meeting it. It is the "pressure" stakeholder $j$ exerts on the solution. | If a **customer** expects a certain product performance (say durability or cost savings), $\beta_{\text{cust}}$ adjusts so that $p_{\text{cust}}$ places sufficient weight on outcomes where that expectation is met. A higher $\beta_{\text{cust}}$ might mean the startup must **heavily skew its efforts to satisfy the customer's key requirement** (e.g. allocating R&D to meet a durability standard), since failing it drastically lowers the likelihood of customer adoption. |
| $\gamma$ (scalar) | **Dual variable for resource constraint** -- shadow price of the total resource $\textcolor{skyblue}{R}$; appears in the dual objective as $-\gamma \textcolor{skyblue}{R}$ and in the decision rule for $\textcolor{red}{a^*}$. | **Resource scarcity price / threshold factor:** $\gamma$ is the *opportunity cost of a unit of resource*. A higher $\gamma$ means resources (time, cash) are very tight, so each dollar or week must yield a high payoff in uncertainty reduction -- it sets a **threshold for action selection**: only actions whose information value per unit cost exceeds $\gamma$ are chosen. As resources become less scarce (more slack or funding), $\gamma$ drops, lowering the bar for which actions are worth doing. | Early in the project, with a tiny budget, $\gamma$ is high -- the startup only runs the most critical experiment (e.g. a small pilot addressing the biggest unknown). After raising funds, $\gamma$ falls, and more experiments (like scaling up a prototype or testing secondary features) clear the bar. In the **TAXIE** EV rideshare case, $\gamma$ was initially high, so they tested with just 2 cars (only that limited experiment had a justifiable info-per-cost ratio); with more capital, they could afford a broader rollout of 50 cars to tackle secondary uncertainties. |

*Table C.1:* **Primal and Dual Variables in Business Terms.** This table links each mathematical variable to an intuitive role in entrepreneurial decision-making, along with concrete examples.
For instance, $p$ and $\textcolor{red}{a}$ correspond to the startup's beliefs and action choices (which experiment to run), while the dual variables $\lambda$, $\beta$, $\gamma$ correspond to the underlying "pressures" of the problem: stakeholders' baseline attitudes ($\lambda$), their specific demands ($\beta$), and the scarcity of resources ($\gamma$).

## C.3 The Partition Function $Z_j(\beta_j)$: Normalization and the "Menu" of Outcomes

In the dual formulation, each stakeholder $j$ contributes a term $-\log Z_j(\beta_j)$, where $ Z_j(\beta_j) \;=\; \sum_{k} \exp\!\big(\beta_j^T f_{jk}\big)\,, $ summing over all possible outcome states $k$ for that stakeholder. This $Z_j(\beta_j)$ is the **partition function**, and its role is to ensure probabilities for stakeholder $j$ are properly normalized when I convert the constraints into a likelihood. In plain terms, $Z_j$ **"accounts for all possible outcomes"** when computing the likelihood of any one outcome.

**Intuition (Menu Analogy):** Imagine stakeholder $j$ has a "menu" of possible outcomes (for example, a customer might either love the product, like it, be neutral, or reject it). The exponentiated term $\exp(\beta_j^T f_{jk})$ can be thought of as the "score" or weight for outcome $k$ given the current dual parameters $\beta_j$. The partition function $Z_j(\beta_j)$ is like summing up the scores of **every item on the menu**. By dividing an individual outcome's score by this total $Z_j$, I get a probability (just as each item's popularity could be expressed as a fraction of all items' combined popularity). In other words, $Z_j$ is the normalizing denominator that makes all the outcome probabilities add up to 1. If stakeholder $j$ has many favorable possible outcomes (or a high uncertainty, meaning many outcomes get moderate weight), $Z_j$ will be larger; if $j$'s requirements and $\beta_j$ strongly favor only a few outcomes, $Z_j$ will be smaller (concentrating probability on those outcomes).
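The menu analogy can be made concrete. In this sketch the outcome features $f_{jk}$ and the dual weight $\beta_j$ are illustrative assumptions, not values derived from the formulation:

```python
import math

# Partition-function sketch for one stakeholder: each outcome k on the "menu"
# gets a score exp(beta' f_k); Z is the total score; dividing normalizes.
outcomes = ["rejects", "neutral", "likes", "loves"]
f = [[-1.0], [0.0], [0.5], [1.0]]   # one illustrative feature per outcome
beta = [2.0]                         # dual weight tilting toward high-fit outcomes

scores = [math.exp(sum(b * fk for b, fk in zip(beta, fv))) for fv in f]
Z = sum(scores)                      # Z_j(beta_j): sum over the whole menu
probs = [s / Z for s in scores]      # normalized outcome probabilities

for outcome, prob in zip(outcomes, probs):
    print(f"P({outcome}) = {prob:.3f}")
print(f"Z = {Z:.3f}, probabilities sum to {sum(probs):.6f}")

# A larger beta tilts mass further onto high-scoring outcomes, illustrating how
# stringent expectations concentrate the distribution.
beta_hi = [5.0]
scores_hi = [math.exp(sum(b * fk for b, fk in zip(beta_hi, fv))) for fv in f]
p_loves_hi = scores_hi[-1] / sum(scores_hi)
print(f"P(loves) with stronger tilt: {p_loves_hi:.3f}")
```

The last comparison shows the tilting effect discussed next: increasing $\beta_j$ concentrates probability on the outcomes that meet the stakeholder's expectations.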
Crucially, the partition function also influences the **likelihood** of satisfying stakeholder $j$. A smaller $Z_j(\beta_j)$ (all else equal) means the probability mass is concentrated on high-scoring outcomes -- effectively, stakeholder $j$'s expectations leave fewer "acceptable" outcomes, but those few are being targeted. A larger $Z_j$ means a wider spread of possible outcomes, implying more uncertainty or more ways things could go (not all of which are good for the venture). In the dual objective, I see $-\log Z_j(\beta_j)$: this term **rewards** the solution when $Z_j$ is lower (since $-\log Z_j$ is higher), i.e. when the outcomes are more narrowly focused on meeting stakeholder $j$'s expectations. Thus, minimizing $Z_j$ (without violating the mean outcome constraint) is equivalent to maximizing the likelihood that stakeholder $j$ ends up satisfied. The partition function is the mathematical vehicle for this normalization: it converts the constraint "meet stakeholder $j$'s expected value $\textcolor{green}{\mu_j}$" into a probabilistic likelihood of success for that stakeholder by considering all possible outcome scenarios consistent with that expectation.

## C.4 Dual Objective as Likelihood Maximization

The dual optimization problem can be interpreted as **maximizing a weighted likelihood** that all stakeholders will be satisfied given the chosen actions. In our formulation, the dual objective is: $ \max_{\lambda,\beta,\gamma}\;\;\sum_{j} \textcolor{violet}{w_j}\Big(\lambda_j + \beta_j^T \textcolor{green}{\mu_j}(\textcolor{red}{a}) - \log Z_j(\beta_j)\Big)\;-\;\gamma \textcolor{skyblue}{R}\,, $ where $\textcolor{violet}{w_j}$ is the weight of stakeholder $j$. Each term $\lambda_j + \beta_j^T \textcolor{green}{\mu_j} - \log Z_j(\beta_j)$ inside the sum is essentially the **log-likelihood for stakeholder $j$** under the optimal distribution (it derives from enforcing that $p_j$ matches the expected outcome $\textcolor{green}{\mu_j}$).
Multiplying by $\textcolor{violet}{w_j}$ just scales it by the stakeholder's importance. Summing across $j$ thus accumulates a kind of **"total log-likelihood" that all stakeholders' requirements are met**, up to the weighting. Finally, the term $-\gamma \textcolor{skyblue}{R}$ subtracts the "cost" of using resources $\textcolor{skyblue}{R}$ with shadow price $\gamma$. This dual can be understood in everyday terms: we are choosing the dual variables (which relate to stakeholder satisfaction criteria and resource tightness) to **maximize the probability of overall venture success** (subject to the resource limit). By strong duality, this is equivalent to the primal goal of minimizing uncertainty. But the dual viewpoint is particularly enlightening for narratives -- it frames the problem as *likelihood maximization*:

* **Plain English interpretation:** I want to make it as **likely as possible** that every stakeholder is happy with the outcome *at the same time*, given I can only spend $\textcolor{skyblue}{R}$ resources. The dual objective captures this by boosting the score when stakeholder $j$'s likelihood of satisfaction increases (through the $-\log Z_j$ term) and penalizing if I use too much resource (through $\gamma \textcolor{skyblue}{R}$). In essence, it's a balancing act: allocate effort (via $\beta_j$ adjustments and picking actions $\textcolor{red}{a}$ that influence $\textcolor{green}{\mu_j}$) such that the *joint likelihood* of satisfying all parties is maximized.
* **Startup scenario example:** Think of a founding team considering their next moves. They might say, "What course of action gives us the **best shot that both the customer and the investor end up happy?**" This is exactly what the dual is asking.
Suppose running a pilot project will greatly increase the chance customers love the product and also give investors data to be confident -- that corresponds to increasing the terms inside $\sum_j(\cdot)$ for those stakeholders (higher log-likelihood for each). If the pilot is expensive, $\gamma$ will rise to reflect the cost, but if it's the only way to significantly boost those success probabilities, the trade-off might still be worth it. The dual objective helps formalize this trade-off: an optimal solution would indicate if the likelihood gains (customers + investors being satisfied) per dollar spent are worth the cost. If yes, $\gamma$ adjusts such that the inequality $\textcolor{violet}{w_j}[\lambda_j + \beta_j^T\textcolor{green}{\mu_j}(1) - \log Z_j] > \gamma c_j$ holds for that action, meaning action $j$ (the pilot) is chosen. In practical terms, that inequality is the rule: *"Do action $j$ if its contribution to stakeholder-satisfaction likelihood (left side) exceeds the resource cost threshold (right side)."* When the dual objective is maximized, it corresponds to a point where **any feasible change would lower the combined likelihood of stakeholder satisfaction (or violate the resource limit)**. This is why I say the dual solution yields a **likelihood-maximizing plan**: it's the set of stakeholder probability distributions and chosen actions such that I couldn't make all-around stakeholder happiness any more probable without more resources. Entrepreneurs intuitively seek this outcome -- they often speak of *de-risking* the venture. In dual terms, de-risking means **increasing the likelihood of success** (for investors, customers, partners, etc.) by reducing uncertainty. For example, by doing a well-chosen pilot test, you reduce investors' uncertainty, which *increases the probability they will invest.* That is literally an increase in the "likelihood term" associated with the investor stakeholder. 
As uncertainty falls, the dual objective's value (summed log-likelihood) rises, reflecting a more confident, credible venture. To ground this in a scenario: imagine an early-stage clean-tech startup. Initially, the chance that **customers** will adopt the product might be low because they doubt its reliability; similarly, the **investor** might assign a low chance to the startup hitting cost targets. The entrepreneur can do a targeted experiment, say a prototype demonstration, that convinces both groups (customers see reliability, investor sees cost data). This single action will dramatically increase the likelihood of customer and investor satisfaction (their individual probabilities shoot up). In the dual formulation, $\log Z_{\text{cust}}$ and $\log Z_{\text{inv}}$ drop (since their distributions tighten around "success"), and thus the objective sum increases. The resource cost of the demo enters via $\gamma$, but if the demo is highly informative, the optimal $\gamma$ will adjust such that it is worthwhile to spend that resource. The end result is the **maximum likelihood** configuration: the venture has maximized the weighted log-probability of meeting all stakeholders' expectations, given its budget. ## C.5 Stakeholder Satisfaction as a Weighted Log-Likelihood I often refer to the dual objective as maximizing the "weighted log-likelihood of all stakeholders being satisfied/accurate." Let's unpack this phrase. Each stakeholder's "satisfaction" essentially means **their internal model of the venture matches the eventual reality**--in other words, the outcome met their expectations. For an investor, being satisfied might mean the company achieved the milestones or traction they expected; for a customer, it means the product performed as promised; for a regulator, it means compliance standards were met. 
If each stakeholder's expectations are like hypotheses about the venture, then stakeholder $j$ being satisfied is the event that hypothesis is confirmed by results. The dual formulation gives each stakeholder $j$ a log-likelihood contribution $\mathcal{L}_j = \lambda_j + \beta_j^T\textcolor{green}{\mu_j} - \log Z_j(\beta_j)$. Exponentiating (and ignoring weights $\textcolor{violet}{w_j}$ for a moment), $\exp(\mathcal{L}_j)$ is proportional to the probability that stakeholder $j$'s expectations hold true (since it's essentially the probability of the "successful" outcomes for stakeholder $j$ under the optimized distribution). If I consider all stakeholders at once, a simplifying (though conceptual) view is to imagine the probability that **every** stakeholder is satisfied as roughly the product of the individual probabilities $\prod_j P(\text{stakeholder }j \text{ satisfied})$. Maximizing this joint probability is equivalent to maximizing the sum of logs, $\sum_j \log P(j\text{ satisfied})$. Our dual exactly does a weighted version of this: $\sum_j \textcolor{violet}{w_j} \log P(j\text{ satisfied})$. The weights $\textcolor{violet}{w_j}$ allow the model to emphasize some stakeholders more (perhaps an investor's satisfaction is crucial, so its weight is higher, whereas a minor partner's satisfaction is less critical and might get a lower weight). In practice, **if one stakeholder is very important**, the solution will allocate actions to boost that stakeholder's likelihood of satisfaction even if it slightly hurts others, reflecting the weighting. The phrase "all stakeholders being satisfied" doesn't mean every stakeholder absolutely must be happy in the end, but that I am considering the combined likelihood (with weights) of satisfying everyone as much as possible. I am effectively maximizing a weighted geometric mean of stakeholder satisfaction probabilities. Importantly, **stakeholder satisfaction = their model matches reality**.
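A minimal numerical sketch of this weighted log-likelihood comparison, with purely illustrative (assumed) satisfaction probabilities and weights:

```python
import math

# Sketch: score each candidate plan by sum_j w_j * log P(j satisfied).
# Maximizing this is equivalent to maximizing a weighted geometric mean of
# the satisfaction probabilities. All probabilities/weights are assumptions.

def weighted_log_likelihood(probs, weights):
    return sum(w * math.log(p) for p, w in zip(probs, weights))

# Satisfaction probabilities for [investor, customer, regulator] under two
# hypothetical plans; the investor carries the highest weight.
weights = [2.0, 1.0, 1.0]
plan_a = [0.6, 0.7, 0.9]   # balanced plan
plan_b = [0.8, 0.5, 0.9]   # plan that favors the investor

score_a = weighted_log_likelihood(plan_a, weights)
score_b = weighted_log_likelihood(plan_b, weights)
print(score_b > score_a)  # True: the weighting tips the choice toward plan B
```

Note that plan B wins despite a lower customer probability, because the investor's weight of 2 amplifies its gain -- exactly the trade-off described above. Here each $P(j\text{ satisfied})$ stands for the probability that stakeholder $j$'s model of the venture matches reality.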
This is captured in the primal by the constraint that each stakeholder's expected value $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ is met by the distribution $p_j$, and in the dual by the terms involving $\beta_j$ and $Z_j$. When the dual objective is high, it means each stakeholder's predicted outcome is likely to occur (their distribution $p_j$ is peaked around the expectation). Thus a high dual objective corresponds to a state where **stakeholders' mental models are accurate** -- they guessed what would happen, and the venture delivers on those guesses. This is exactly what satisfaction means: no nasty surprises, only fulfilled expectations. From an entrepreneurial perspective, achieving this is gold: it means *every major backer or participant in your venture feels vindicated*. The investor saw the growth they hoped for, the customer got the value they were promised, the team met the technical goals -- all stakeholders' internal narratives align with the venture's actual trajectory. The weighted log-likelihood formulation mathematically encodes this alignment and rewards the entrepreneur for configurations that enhance it. It's worth noting that **disagreement or misalignment among stakeholders often signals innovation potential**. If stakeholders initially have very different beliefs (for instance, one investor thinks the idea is brilliant while another is skeptical), that indicates a novel, uncertain venture. Research in entrepreneurial finance and strategy has noted that such divergence in beliefs can motivate entrepreneurial action: for example, an overconfident entrepreneur (holding a belief much more optimistic than others) might pursue an idea that common consensus would skip. This "disagreement" can be a predictor of innovation as it means not everyone agrees on the outcome -- in other words, there's uncertainty to exploit. 
Our model starts with possibly high entropy (un-aligned stakeholder views) and through experiments and coordination, aims to align beliefs (reduce entropy), effectively **turning stakeholder disagreement into agreement via evidence**. Scholars have also observed that misaligned expectations can stall innovation if not addressed: a promising idea might languish if, say, regulators remain unconvinced while entrepreneurs forge ahead. Thus, maximizing the weighted log-likelihood of stakeholder satisfaction isn't just a mathematical exercise -- it corresponds to the very real task of **getting everyone on the same page**. When achieved, it indicates that the venture has resolved most uncertainties to the point that all key players believe in it. The dual optimum, therefore, represents a state of maximal consensus (weighted by importance) grounded in reality, which is precisely when a venture is most likely to succeed. *(Footnotes: As Eric Van den Steen and others have theorized, a strategic decision is often one about which competent people may disagree -- the presence of differing priors is what gives strategy (and innovation) its significance. And Bernardo & Welch (2001) argue that entrepreneurs' apparent overconfidence (beliefs differing from the crowd) can be what drives them to attempt breakthroughs. However, until those differences in expectation are reconciled by evidence, innovative projects face friction. The framework here can be seen as a way to systematically reconcile those expectations.)* ## C.6 From High $\gamma$ to Low $\gamma$, High $\lambda$ to Low $\lambda$: Increasing Certainty and Decreasing Interdependence Innovative decision-making often progresses through **phases** -- early on, uncertainty and interdependence are high, and later on they diminish as the venture "finds its groove." In our primal-dual terms, this corresponds to moving **from high to low values of certain dual variables ($\gamma$ and $\lambda$)**. 
* **Resource Scarcity ($\gamma$) High to Low:** In the earliest stage of a startup, resources are extremely limited -- the dual variable $\gamma$ starts high, meaning the threshold for taking an action is very stringent. The entrepreneur can only afford experiments that have a huge bang-for-buck in reducing uncertainty. This is the **"nail it" phase** (to use a common phrase), where you focus on the single biggest unknown. For example, in the Segway case (personal transporter innovation), early development funds were tight, so they might have only tackled the most critical question (perhaps technical feasibility) before anything else. As the venture proves aspects of the idea and perhaps raises more capital or generates revenue, the effective $\gamma$ drops. More resources become available, so the criterion for acceptable experiments eases. The startup enters a growth or **"scale it" phase** -- $\gamma$ now low -- where it can pursue second-order questions and optimizations (additional features, broader tests) that earlier would have been tabled. In short, **high $\gamma$** = extremely selective, only sure-win moves; **low $\gamma$** = can take moderate-risk or exploratory moves because resources allow. This trajectory from high to low $\gamma$ reflects *increasing certainty about the venture's core viability* (hence investors or revenues are providing slack) and thus the venture is less bottlenecked by each dollar. * **Stakeholder Alignment ($\lambda$) High to Low:** Early on, stakeholders are not on the same page. One way to think of $\lambda_j$ (from each stakeholder's normalization constraint) is as a measure of how much "baseline adjustment" was needed to calibrate that stakeholder's probability distribution. In an unaligned situation, some stakeholders might effectively have very low prior likelihood to be satisfied -- requiring a large $\lambda_j$ correction upward once evidence starts coming in (or vice versa). 
**High $|\lambda|$** in the beginning can indicate that, without evidence, stakeholders either wildly overestimate or underestimate the chances of success (differing priors). As the entrepreneur gathers data and demonstrates progress, stakeholders update their models. The need for a large offset $\lambda_j$ diminishes; stakeholders begin to share a more common expectation grounded in reality. Thus **$\lambda$ values tend toward zero or a moderate level** as alignment is achieved. In our context, "from high to low $\lambda$" means moving from a state where stakeholders had strongly different initial biases (requiring significant adjustment) to a state where stakeholders have been calibrated and their beliefs are close to the truth (minimal bias). Essentially, the venture goes from a **heterogeneous belief state** to a more **consensus belief state**. For instance, if an investor initially thought the market size was tiny (low prior) while the founder thought it was huge, the truth might be somewhere in between -- early on, the dual might need a large $\lambda_{\text{inv}}$ to satisfy the investor's constraint once some evidence is shown. Later, both founder and investor converge on a similar view of the market, so $\lambda_{\text{inv}}$ can relax. Lower $\lambda$ indicates less disagreement between stakeholder expectations and the achieved outcomes; it marks **increasing certainty and trust** among the players.

* **Environment: From Dynamic to Static Expected Outcomes:** In early stages, the expected outcomes $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ for each stakeholder $j$ are highly context-dependent, stochastic or rapidly changing: one action can have unpredictable ripple effects across stakeholders.
For example, when **Segway** was first introduced, an action like a public launch could cause significant changes in $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ for multiple stakeholders simultaneously in unpredictable ways -- media hype, government regulatory responses, consumer curiosity or backlash -- a highly dynamic, coupled system. Similarly, our material startup (e.g. a sustainable cement venture "Sublime Systems") in its pilot phase faced dynamic interactions: getting a test facility on board (action $\textcolor{red}{a_1}=1$) could suddenly change the expected outcomes $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ for customer interest and investor commitment in nonlinear ways (perhaps the expected values for two stakeholders jump significantly after one breakthrough). Over time, once the venture has "tipped" into wider acceptance (e.g., the expected outcomes $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ for most key stakeholders are sufficiently high), the system's evolution becomes more **static or predictable**. In later stages, $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ might respond in more deterministic ways to actions: for a mature product, doing a standard scale-up (action) leads to a fairly predictable result (expected outcomes change smoothly, e.g. sales grow in a known pattern). The interdependence between stakeholders decreases -- by then, each stakeholder has mostly committed, and their expected outcomes $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ are not so contingent on one another at every step. In the Segway story, after initial hurdles, if it had gained regulatory approval in major cities and some consumer adoption, further actions like marketing would have had straightforward effects on $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ (more sales, basically a traditional scenario). 
For the cement startup, once the testing authority, an eco-builder, and a major investor all have high $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ values (full acceptance), the next state transitions (like expanding production) don't involve one stakeholder's expected outcome dramatically altering another's stance; the stakeholders' expectations are now set, and the remaining work is executional (static in relative terms). These transitions--**high $\gamma$ to low $\gamma$, high $\lambda$ to low $\lambda$, dynamic to static expected outcomes**--all reflect the venture moving from a chaotic, uncertain **exploration phase to a more stable exploitation phase**. In doing so, **certainty increases** (I know more about what works, stakeholders have evidence and are confident) and **interdependence decreases** (decisions become more modular as stakeholder commitments firm up). To make this concrete, consider **Sublime Systems**, the sustainable cement startup example we've used. Initially, it's in "Nail-It" mode with three critical stakeholders: a testing lab (for validating the cement), an eco-conscious builder (customer), and a climate-focused VC (investor). At the very start, $\gamma$ is high -- they maybe only have funds for one major test. They choose the test that will break the biggest uncertainty bottleneck: getting the cement certified by the testing lab (setting $\textcolor{red}{a_1}=1$). At this point, the expected outcomes $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ are dynamic: if the lab test is successful, it drastically reduces the builder's uncertainty (maybe $\mu_{\text{builder}}(\textcolor{red}{a})$ jumps significantly as a result, because the builder was waiting on validation). This is a dynamic chain reaction. Also, initially the lab and builder may not have believed the cement would work (their $\lambda$ adjustments were large when evidence comes). 
Once the lab validation is secured, $\gamma$ drops a bit (they might get a bit more funding or at least they know they don't need to test that again) and now they can do a pilot with an eco-builder (setting $\textcolor{red}{a_2}=1$). The system still has some dynamics (perhaps that success convinces the investor, so $\mu_{\text{investor}}(\textcolor{red}{a})$ increases significantly after $\mu_{\text{builder}}(\textcolor{red}{a})$ does -- another dynamic jump). By the time the expected outcomes for two of the three stakeholders are high, they reach a critical mass (an inflection point). Now they enter "Scale-It": $\gamma$ is much lower (they raised a VC round, so more resources), $\lambda$ for all stakeholders is near a stable value (everyone's expectations are aligned that this will work and be profitable), and $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ becomes more predictable. If they invest resources (action $\textcolor{red}{a_3}=1$) to build a full production plant, the outcome is mostly deterministic growth in output and revenue -- not a wild card. The stakeholders are committed, so further actions don't have to juggle delicate interdependencies. In summary, the venture moved from a **highly-coupled, high-uncertainty regime** to a **decoupled, low-uncertainty regime**. The primal-dual model captures this as moving along the dual variables: $\gamma$ and $\lambda$ relaxing, and in the problem structure: $\textcolor{green}{\mu_j}(\textcolor{red}{a})$ effectively becoming more predictable and less interdependent as the expected outcomes for all stakeholders reach satisfactory levels. ## C.7 Prioritizing High-Value Experiments in Resource-Constrained Settings A recurring theme in this appendix is that entrepreneurs must choose carefully **which experiments to run**, especially when resources are limited. Academic work supports this idea: *not all experiments are equal*, and doing the ones with the greatest information yield or option value first is crucial. 
Kerr, Nanda, and Rhodes-Kropf (2014) describe entrepreneurship itself as a form of **experimentation under constraints**, noting that only a few experiments will succeed and that **costs and constraints govern how much experimentation can be done and even the trajectory of innovation**. In practical terms, this means a startup should tackle experiments that either validate the venture’s core hypotheses or unlock major stakeholder commitments before spending resources on smaller questions. Their work emphasizes that the distribution of outcomes is “extremely skewed” – most projects fail or give low returns, and a few give huge returns. Therefore, an entrepreneur with one shot (high $\gamma$ early on) should choose an experiment that, if it works, yields a disproportionately large leap in venture progress (for example, proving the technology works *and* that customers want it, in one go). Our framework’s bottleneck-driven approach (⚡) aligns with this logic. It effectively says: **find the bottleneck uncertainty (the experiment that has the highest ratio of uncertainty reduction to cost) and do that first**. This mirrors the recommendation in *Entrepreneurship as Experimentation* to focus on high “real option” value projects when financing is tight. When resources are constrained, entrepreneurs act like scientists with a very limited supply of lab reagents – they must design experiments that maximize learning per dollar. Oftentimes, this means designing tests that simultaneously address multiple stakeholder concerns (as our multi-stakeholder coordination 🔄 component advises). For instance, instead of testing a product feature in isolation, a startup might do an integrated pilot that tests the technology, gets customer feedback, and provides data to investors in one fell swoop. This consolidated experiment might be costly, but its **information value is enormous**, and if it succeeds it can justify further investment. 
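The bottleneck rule -- run the experiment with the highest ratio of uncertainty reduction to cost first -- can be sketched as follows; the candidate experiments and their numbers are purely illustrative assumptions:

```python
# Sketch of the bottleneck-breaking (zap) heuristic: rank candidate experiments
# by estimated uncertainty reduction per unit cost, descending, and run the
# top-ranked one first. The experiments and figures below are hypothetical.

experiments = {
    "feature_test":     {"uncertainty_reduction": 0.5, "cost": 20},  # marginal insight
    "integrated_pilot": {"uncertainty_reduction": 6.0, "cost": 60},  # tech + customer + investor data
    "survey":           {"uncertainty_reduction": 0.8, "cost": 10},  # cheap but shallow
}

def bottleneck_order(experiments):
    """Sort experiment names by learning-per-dollar, best first."""
    return sorted(experiments,
                  key=lambda e: experiments[e]["uncertainty_reduction"]
                                / experiments[e]["cost"],
                  reverse=True)

print(bottleneck_order(experiments))
# ['integrated_pilot', 'survey', 'feature_test']
```

Even though the integrated pilot is the most expensive option, its learning-per-dollar ratio (0.1) beats the cheap survey (0.08) and the isolated feature test (0.025), matching the consolidated-experiment logic above.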
Conversely, an experiment that yields only marginal insight should be deprioritized when every dollar counts. One way to formalize this prioritization is through a **stopping rule for experimentation**, derived from balancing the costs of testing with the risks of not testing (false positives vs false negatives). In our proposed “📦 Multiple Hypothesis Testing + Inventory Management” framework, the decision of how many experiments to run can be likened to an optimal order quantity problem balancing two types of errors:

* Type I error (false positive) = launching something that fails (analogous to overstocking inventory that doesn’t sell).
* Type II error (false negative) = failing to pursue something that would have succeeded (analogous to understocking and missing sales).

By assigning a cost to each type of error, one can derive an optimal number of tests $n^*$ that minimizes total expected “error cost”. A simplified quantitative stopping rule from that framework is: $ n^* \;=\; \sqrt{\frac{\alpha^2 \,\big(\mu - \phi_{\text{true}}\big)\,\big(\mu/\phi_{\text{true}}\big)}{c^y}}\;-\;\alpha\,, $ where each parameter is defined as follows:

* $\mu$ = the prior believed probability of success for the venture/hypothesis (before running new experiments).
* $\phi_{\text{true}}$ = the true probability of success (the actual outcome frequency if I knew it – in practice, I infer this after some testing).
* $\alpha$ = a prior confidence level (the strength of our initial belief, expressed as a multiplier – higher $\alpha$ means I was more confident in $\mu$ and thus require more evidence to change our mind).
* $c^y$ = the cost of one experiment (normalized as a fraction of total resources or in relative terms such that the formula is dimensionless).

This formula comes from equating the marginal benefit of reducing uncertainty with the marginal cost of additional tests, akin to the economic order quantity formula in inventory management.
Intuitively, $\alpha^2(\mu - \phi_{\text{true}})$ represents the squared error in our prior belief (weighted by prior strength) – a larger discrepancy between what I believed ($\mu$) and reality ($\phi_{\text{true}}$) increases the desired sample size. The factor $(\mu/\phi_{\text{true}})$ further inflates the required tests if I were overly optimistic ($\mu > \phi_{\text{true}}$) – essentially penalizing “overconfidence” by demanding more evidence. Meanwhile, dividing by $c^y$ means the cheaper the experiment, the more tests I can afford (so I increase $n^\*$ if cost per test is low). Finally, subtracting $\alpha$ accounts for the fact that if I already have strong prior evidence (high $\alpha$), I need fewer new tests beyond what’s implicit in that prior. **Example – TAXIE case:** TAXIE was an electric taxi startup used in our discussions, which needed to decide how many pilot cars to test. They had a prior belief of $\mu = 0.5$ that the concept would succeed (moderate optimism), but in reality the market success probability $\phi_{\text{true}}$ turned out to be 0.2 (quite low). Their prior confidence was $\alpha = 2$ (meaning the prior was based on limited information, a relatively weak prior). The cost per car test was estimated as $c^y = 0.15$ (perhaps 15% of their initial budget per car). Plugging these into the formula: $ n^* = \sqrt{\frac{2^2 \times (0.5 - 0.2) \times (0.5/0.2)}{0.15}}\;-\;2 = \sqrt{\frac{4 \times 0.3 \times 2.5}{0.15}}\;-\;2 = \sqrt{20}\;-\;2 \approx 4.47\;-\;2 \approx 2.47\,. $ They should run about $2.47$ experiments – in practice, of course, this means **2 to 3 test cars**. Indeed, TAXIE proceeded with a pilot of 2 cars, which our calculation suggests was near-optimal. Those tests yielded critical information: they confirmed some hypotheses (range was sufficient, drivers earned what was expected, customers were willing to pay a target price) and refuted others (the service wasn’t profitable at small scale). 
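For reference, the stopping rule and the TAXIE numbers above can be reproduced with a short script (a direct transcription of the formula as stated, not a general-purpose tool):

```python
import math

# Stopping rule: n* = sqrt(alpha^2 * (mu - phi_true) * (mu / phi_true) / c) - alpha,
# evaluated with the TAXIE parameters from the text
# (mu = 0.5, phi_true = 0.2, alpha = 2, c^y = 0.15).

def optimal_tests(mu, phi_true, alpha, cost):
    """Approximate optimal number of experiments before pausing to reassess."""
    return math.sqrt(alpha**2 * (mu - phi_true) * (mu / phi_true) / cost) - alpha

n_star = optimal_tests(mu=0.5, phi_true=0.2, alpha=2.0, cost=0.15)
print(round(n_star, 2))  # 2.47 -> run 2-3 pilot cars
```

Note the formula is only meaningful when $\mu > \phi_{\text{true}}$ (a positive prior-belief discrepancy); the TAXIE inputs satisfy this.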
Stopping after 2-3 cars was justified because the incremental learning from a 3rd or 4th car was not worth the cost at that early stage – better to pause and reassess with the data in hand (which is exactly what the startup did). This stopping rule exemplifies how an entrepreneur can quantitatively plan an experimental campaign. It says: “Run just enough tests such that the expected cost of remaining uncertainty equals the cost of the testing itself.” If you test too little, you risk a Type II error (missing out on a viable venture due to undetected potential, or not realizing a flaw before scaling – costly in hindsight). If you test too much, you risk Type I error or simply waste resources (diminishing returns – you might confirm what you already know while burning cash and time). The formula helps find a sweet spot. In sum, the entrepreneur should prioritize the highest-value experiment (largest $\mu - \phi_{\text{true}}$ impact per cost) and continue testing until the value of information drops off. This approach merges statistical thinking (hypothesis testing) with economic thinking (inventory/resource optimization), ensuring that in a resource-constrained setting, **every experiment is worth it** and the process stops at the right time to either pivot or double-down. 
# Appendix D: NP-Completeness of the Entrepreneurial Decision-Making Model

#### Definition (Entrepreneurial decision-making model with nonlinear and opportunity-dependent objective, EDMNO)

Given rational matrices $A_t$ ($M \times N$) and $R_t$ ($P \times N$), rational vectors $b_t$ (length $M$) and $c_t$ (length $P$) for $t = 1, \ldots, T$, a set of opportunity states $\textcolor{blue}{\Omega} = \{\textcolor{blue}{\omega_1}, \ldots, \textcolor{blue}{\omega_Q}\}$, a non-additive uncertainty function $\textcolor{blue}{U}: \mathbb{R}^{P \times T} \times \textcolor{blue}{\Omega} \rightarrow \mathbb{R}$, and a rational number $L$, does there exist a sequence of integral vectors $x_1, \ldots, x_T$ (each of length $N$) such that $A_t x_t \leq b_t$ for all $t$, and $\textcolor{blue}{U}(\{R_t x_t - c_t\}_{t=1}^T, \textcolor{blue}{\omega}) \leq L$ for some $\textcolor{blue}{\omega} \in \textcolor{blue}{\Omega}$, where $\textcolor{blue}{U}$ is non-additive in its first argument and represents uncertainty to be minimized?

#### Definition 5 (ILP)

Given a rational matrix $A$ and a rational vector $b$, does $Ax \leq b$ have an integral solution $x$?

#### Definition 6 (0-1 KNAPSACK)

Given integers $a_j$, $j = 1, \ldots, n$, and $K$, is there a subset $S \subseteq \{1, \ldots, n\}$ such that $\sum_{j \in S} a_j = K$?

## Formal Proof: Reduction from 0-1 KNAPSACK to EDMNO

**Proof.** We first show that EDMNO is in NP, then show that 0-1 KNAPSACK (henceforth BKS) polynomially transforms to EDMNO.

1. EDMNO is in NP: a candidate solution $x_1, \ldots, x_T$ can be verified in polynomial time by checking the constraints $A_t x_t \leq b_t$ for all $t$ (in $O(MNT)$ time) and evaluating $\textcolor{blue}{U}(\{R_t x_t - c_t\}_{t=1}^T, \textcolor{blue}{\omega})$ for each $\textcolor{blue}{\omega} \in \textcolor{blue}{\Omega}$ to compare with $L$ ($Q$ evaluations of $\textcolor{blue}{U}$, which is assumed polynomial-time computable).
2. Given an instance of BKS with items $\{1, \ldots, n\}$, values $a_j$, and target sum $K$, we construct an EDMNO instance as follows (with $N = n$):

- Time periods: $T = 1$ (a single period)
- Decision variables: $x = (x_1, \ldots, x_N)$, integral
- Constraint matrix $A_1 = \begin{pmatrix} I \\ -I \end{pmatrix}$, where $I$ is the $N \times N$ identity matrix (so $M = 2N$)
- Constraint vector $b_1 = (1, \ldots, 1, 0, \ldots, 0)^T$ of length $2N$; together with $A_1$ this enforces $0 \leq x_n \leq 1$, so every integral solution has $x_n \in \{0, 1\}$
- Return matrix $R_1 = \mathrm{diag}(a_1, \ldots, a_N)$, with the BKS values on the diagonal (so $P = N$)
- Cost vector $c_1 = (0, \ldots, 0)$, so that $y = R_1 x - c_1$ has components $y_n = a_n x_n$
- Opportunity states: $\textcolor{blue}{\Omega} = \{\textcolor{blue}{\omega_1}, \textcolor{blue}{\omega_2}\}$ (so $Q = 2$)
- Uncertainty function: $\textcolor{blue}{U}(y, \omega) = \begin{cases} \big(\sum_{n=1}^{N} y_n - K\big)^2 & \text{if } \omega = \textcolor{blue}{\omega_1} \\ \big|\sum_{n=1}^{N} y_n - K\big| & \text{if } \omega = \textcolor{blue}{\omega_2} \end{cases}$
- Target uncertainty $L = 0$

This construction ensures:

a) **Non-additivity**: $\textcolor{blue}{U}$ is non-additive in its first argument. Take $y = y' = (a_1, 0, \ldots, 0)$ with $a_1 \neq 0$. In state $\textcolor{blue}{\omega_1}$ the squared sum produces a cross term: $\textcolor{blue}{U}(y + y', \textcolor{blue}{\omega_1}) = (2a_1 - K)^2$, whereas $\textcolor{blue}{U}(y, \textcolor{blue}{\omega_1}) + \textcolor{blue}{U}(y', \textcolor{blue}{\omega_1}) = 2(a_1 - K)^2$; these differ for generic values (e.g. $a_1 = 1$, $K = 0$ gives $4 \neq 2$). Thus $\textcolor{blue}{U}$ is not additive in its first argument.

b) **Opportunity-dependence**: the uncertainty value explicitly depends on the opportunity state $\textcolor{blue}{\omega}$: the same input $y$ yields $\textcolor{blue}{U}(y, \textcolor{blue}{\omega_1}) = (\sum_n y_n - K)^2$ but $\textcolor{blue}{U}(y, \textcolor{blue}{\omega_2}) = |\sum_n y_n - K|$, and these differ whenever $|\sum_n y_n - K| \notin \{0, 1\}$.

**(If)** If the BKS instance has a solution $S$ with $\sum_{j \in S} a_j = K$, set $x_j = 1$ for $j \in S$ and $x_j = 0$ otherwise. Then $\sum_n y_n = \sum_{j \in S} a_j = K$, so $\textcolor{blue}{U}(\{R_1 x - c_1\}, \textcolor{blue}{\omega_1}) = 0 \leq L$, and the EDMNO instance has a solution.

**(Only if)** If the EDMNO instance has a solution $x$ with $\textcolor{blue}{U}(\{R_1 x - c_1\}, \textcolor{blue}{\omega}) \leq 0$ for some $\textcolor{blue}{\omega} \in \textcolor{blue}{\Omega}$, then, since $\textcolor{blue}{U}$ is nonnegative in both states, $\sum_n a_n x_n = K$, and $S = \{j : x_j = 1\}$ is a solution to the BKS instance.

The transformation is clearly computable in time polynomial in the size of the BKS instance. Therefore, EDMNO is NP-complete. $\blacksquare$
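The equivalence at the heart of the reduction can be spot-checked by brute force on a tiny instance. This sketch assumes the variant construction with $R_1 = \mathrm{diag}(a)$, $c_1 = 0$, $\textcolor{blue}{U}(y, \omega_1) = (\sum_n y_n - K)^2$, and $L = 0$; it is a toy verifier, not part of the proof:

```python
from itertools import product

# Brute-force check that a BKS (0-1 KNAPSACK / subset-sum) instance has a
# solution iff the constructed EDMNO instance does. Assumed construction:
# R_1 = diag(a), c_1 = 0, U(y, omega_1) = (sum(y) - K)^2, target L = 0.
# (Both opportunity states vanish together, so checking omega_1 suffices.)

def knapsack_has_solution(a, K):
    return any(sum(a[j] for j in range(len(a)) if x[j]) == K
               for x in product([0, 1], repeat=len(a)))

def edmno_has_solution(a, K):
    for x in product([0, 1], repeat=len(a)):  # A_1, b_1 force x_n in {0, 1}
        y = [a_n * x_n for a_n, x_n in zip(a, x)]
        if (sum(y) - K) ** 2 <= 0:            # U(y, omega_1) <= L = 0
            return True
    return False

a, K = [3, 5, 7], 10   # subset {3, 7} sums to 10
print(knapsack_has_solution(a, K) == edmno_has_solution(a, K))  # True
```

Running the same check with an unreachable target (e.g. $K = 2$) gives `False` on both sides, as the reduction requires.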
**Note on Single-Period Simplification**: The EDMNO problem presented here is a single-period simplification of the more general multi-period entrepreneurial decision-making problem, capturing essential elements while allowing for a clear reduction from 0-1 KNAPSACK. The NP-completeness of this single-period version implies that the multi-period version is at least NP-hard.