# 14.282 Fall 2025 ์ค‘๊ฐ„๊ณ ์‚ฌ ์ตœ์ข… ์ •๋ฆฌ / Midterm Final Summary

> **Modules I–III:** Incentives, Career Concerns, and Influence Activities
> Based on Holmström (1982/1999), Crawford & Sobel (1982), and BGP *Chapters 2 & 4*

---

## 🔹 ๋ฌธ์ œ 1: ๊ฒฝ๋ ฅ ๊ด€๋ฆฌ์™€ ์ธ์„ผํ‹ฐ๋ธŒ / Problem 1: Career Concerns and Incentives

### ์ด๋ก ์  ๋ฐฐ๊ฒฝ / Theoretical Foundation

**Model lineage:**
- **Holmström (1982/1999)** career concerns model (BGP § 2.4)
- Layered on **Holmström & Milgrom (1991)** multitask incentive framework (BGP § 2.1)

**Core theme:** When performance is **observable but not contractible**, reputational inference substitutes for explicit pay.

---

### ๋ฌธ์ œ 1(a): ์ˆœ์ˆ˜ ๊ฒฝ๋ ฅ ๊ด€๋ฆฌ / Pure Career Concerns ($p$ ๊ด€์ฐฐ ๊ฐ€๋Šฅ, ๊ณ„์•ฝ ๋ถˆ๊ฐ€๋Šฅ / $p$ observable, not contractible)

#### Setup: ๋ชจํ˜• ๊ตฌ์กฐ / Model Structure

**Production & Performance:**

$
\begin{align}
p_t &= g_1 a_{1t} + g_2 a_{2t} + \eta + \phi_t \quad \text{(๊ด€์ฐฐ ๊ฐ€๋Šฅ / observable)} \\
y_t &= f_1 a_{1t} + f_2 a_{2t} + \eta + \varepsilon_t \quad \text{(๊ณ„์•ฝ ๋ถˆ๊ฐ€๋Šฅ / not contractible)}
\end{align}
$

**Cost & Information:**
- Effort cost: $c(a_t) = \frac{1}{2}(a_{1t}^2 + a_{2t}^2)$
- Ability: $\eta \sim N(0, h^{-1})$ (unknown to all)
- Noise: $\varepsilon_t \sim N(0, h_\varepsilon^{-1})$, $\phi_t \sim N(0, h_\phi^{-1})$
- Signal precision: $\varphi = \frac{h_\phi}{h + h_\phi}$

---

#### ํ•ต์‹ฌ ์งˆ๋ฌธ 1: ์™œ $a_{11}, a_{21} \neq 0$์ธ๊ฐ€? / Why non-zero effort in period 1?

**ํ•œ๊ธ€ ์ง๊ด€:**

**์—ญ๋ฐฉํ–ฅ ๊ท€๋‚ฉ (Backward Induction):**
1. **2๋…„์ฐจ (t=2):** ๋ฏธ๋ž˜๊ฐ€ ์—†์Œ → ํ‰ํŒ ๋™๊ธฐ ์—†์Œ → $a_{12} = a_{22} = 0$
2. **1๋…„์ฐจ (t=1):** ์„ฑ๊ณผ $p_1$์ด ๋ฏธ๋ž˜ ์—ฐ๋ด‰ $w_2$์— ์˜ํ–ฅ → ํ‰ํŒ ์Œ“๊ธฐ ๋™๊ธฐ ๋ฐœ์ƒ

**๋ฒ ์ด์ง€์•ˆ ์ถ”๋ก  ๋ฉ”์ปค๋‹ˆ์ฆ˜:**

์‹œ์žฅ์€ ๊ด€์ฐฐํ•œ ์„ฑ๊ณผ๋ฅผ ๋ฐ”ํƒ•์œผ๋กœ ๋Šฅ๋ ฅ์„ ์—…๋ฐ์ดํŠธํ•ฉ๋‹ˆ๋‹ค:

$
E[\eta | p_1] = \varphi (p_1 - g \cdot \hat{a}_1)
$

์—ฌ๊ธฐ์„œ $\varphi$๋Š” ์‹ ํ˜ธ ์ •ํ™•๋„ (signal precision), $g \cdot \hat{a}_1$์€ ์˜ˆ์ƒ ๋…ธ๋ ฅ์ž…๋‹ˆ๋‹ค.

**๊ท ํ˜•์—์„œ์˜ ๋…ธ๋ ฅ ๊ฒฐ์ •:**

1๋…„์ฐจ ๋…ธ๋ ฅ์˜ ํ•œ๊ณ„ ํšจ๊ณผ:
- ๊ณผ์—… $i$์— ๋…ธ๋ ฅ 1๋‹จ์œ„ ์ฆ๊ฐ€ → ์„ฑ๊ณผ $p_1$์ด $g_i$๋งŒํผ ์ƒ์Šน
- ์‹œ์žฅ์˜ ๋Šฅ๋ ฅ ์ถ”์ • $E[\eta|p_1]$์ด $\varphi g_i$๋งŒํผ ์ƒ์Šน
- 2๋…„์ฐจ ์—ฐ๋ด‰์ด $\varphi g_i$๋งŒํผ ์ƒ์Šน, ํ˜„์žฌ๊ฐ€์น˜๋กœ๋Š” $\delta \varphi g_i$ (์—ฌ๊ธฐ์„œ $\delta$๋Š” ํ• ์ธ์ธ์ž)

**์ผ๊ณ„ ์กฐ๊ฑด (FOC):**

$
\frac{\partial c}{\partial a_{i1}} = \delta \varphi g_i \quad \Rightarrow \quad a_1 = \frac{\delta \varphi}{c} g
$

**English Intuition:**

**Backward Induction:**
1. **Period 2:** No future → no reputation motive → $a_{12} = a_{22} = 0$
2. **Period 1:** Performance $p_1$ affects future wage $w_2$ → reputation-building incentive exists

**Bayesian Inference Mechanism:**

The market updates its belief about ability based on observed performance:

$
E[\eta | p_1] = \varphi (p_1 - g \cdot \hat{a}_1)
$

where $\varphi$ is signal precision and $g \cdot \hat{a}_1$ is the expected (conjectured) effort.
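To make the posterior update and the period-1 incentive concrete, here is a minimal Python sketch. The parameter values (`h`, `h_phi`, `delta`, `g`, the realized shock) are illustrative assumptions, not taken from the problem set, and the quadratic effort-cost curvature is normalized to one so that $a_1 = \delta \varphi g$.

```python
import numpy as np

# Illustrative parameters (assumed values, not from the problem set)
h, h_phi = 1.0, 3.0            # prior precision of ability eta; precision of noise phi_t
delta = 0.9                    # discount factor on period-2 wages
g = np.array([1.0, 0.5])       # marginal effect of each task's effort on the measure p_t

varphi = h_phi / (h + h_phi)   # signal weight: varphi = h_phi / (h + h_phi)

# Period-1 effort from the FOC a_{i1} = delta * varphi * g_i (cost curvature normalized to 1);
# period-2 effort is zero because there is no future reputation to build.
a1 = delta * varphi * g
a2 = np.zeros_like(g)

# Market inference: conjecture a1_hat = a1 (correct in equilibrium), observe p1,
# and update E[eta | p1] = varphi * (p1 - g . a1_hat).
a1_hat = a1
p1 = g @ a1 + 0.4              # suppose realized ability + noise contributes +0.4
posterior_mean = varphi * (p1 - g @ a1_hat)

print(f"varphi = {varphi:.2f}")
print(f"a1 = {a1}, a2 = {a2}")
print(f"E[eta | p1] = {posterior_mean:.2f}")
```

With these assumed values, $\varphi = 0.75$ and only the unanticipated part of $p_1$ (here $+0.4$) moves the market's estimate of ability.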
**Equilibrium Effort:**

Marginal effect of period-1 effort:
- One more unit of effort on task $i$ → performance $p_1$ rises by $g_i$
- Market's ability estimate rises by $\varphi g_i$
- Period-2 wage rises by $\varphi g_i$, worth $\delta \varphi g_i$ today (where $\delta$ = discount factor)

**First Order Condition:**

$
\frac{\partial c}{\partial a_{i1}} = \delta \varphi g_i \quad \Rightarrow \quad a_1 = \frac{\delta \varphi}{c} g
$

---

#### ํ•ต์‹ฌ ์งˆ๋ฌธ 2: $\cos(\theta)$์˜ ์—ญํ•  / Role of $\cos(\theta)$

**ํ•œ๊ธ€:**

**์ •์˜:** $\cos(\theta) = \frac{f \cdot g}{\|f\| \|g\|}$ ๋Š” "์„ฑ๊ณผ์ง€ํ‘œ ๋ฐฉํ–ฅ"๊ณผ "ํšŒ์‚ฌ ๊ฐ€์น˜ ๋ฐฉํ–ฅ"์˜ ์ •๋ ฌ๋„ (alignment)

**์ง๊ด€์  ์ดํ•ด:**
- $\theta = 0$ (์™„๋ฒฝํ•œ ์ •๋ ฌ): ์„ฑ๊ณผ์ง€ํ‘œ๊ฐ€ ํšŒ์‚ฌ ๊ฐ€์น˜๋ฅผ ์™„๋ฒฝํžˆ ๋ฐ˜์˜
- $\theta = 90°$ (์™„์ „ํ•œ ๋ถˆ์ผ์น˜): ์„ฑ๊ณผ์ง€ํ‘œ์™€ ํšŒ์‚ฌ ๊ฐ€์น˜๊ฐ€ ๋ฌด๊ด€

**๊ฒฝ์ œ์  ํ•จ์˜:**

| $\cos(\theta)$ | ํ•ด์„ | ๋…ธ๋ ฅ ๋ฐฐ๋ถ„ |
|----------------|------|-----------|
| ๋†’์Œ (≈1) | ์„ฑ๊ณผ์ง€ํ‘œ = ์ง„์งœ ๊ฐ€์น˜ | ํšจ์œจ์  ๋…ธ๋ ฅ |
| ๋‚ฎ์Œ (≈0) | ์„ฑ๊ณผ์ง€ํ‘œ ≠ ์ง„์งœ ๊ฐ€์น˜ | "๋ณด์—ฌ์ฃผ๊ธฐ" ๋…ธ๋ ฅ (window dressing) |

**์‹ค์ œ ์˜ˆ์‹œ:**
- **๋†’์€ ์ •๋ ฌ:** ์ œ์กฐ์—…์˜ ํ’ˆ์งˆ ๋ถˆ๋Ÿ‰๋ฅ  ์ง€ํ‘œ
- **๋‚ฎ์€ ์ •๋ ฌ:** ๊ต์ˆ˜์˜ ์—ฐ๊ตฌ ์ธ์šฉ์ˆ˜ (๋‹จ๊ธฐ์ ์œผ๋กœ ๊ฒŒ์ž„ ๊ฐ€๋Šฅ)

**BGP ์—ฐ๊ฒฐ:**
- BGP Ch. 2 Fig. 2.2: ์‹คํ–‰ ๊ฐ€๋Šฅ ๋…ธ๋ ฅ ray๋Š” $g$; $f$๋ฅผ $g$์— ํˆฌ์˜ํ•˜๋ฉด ์ •๋ ฌ ํšจ๊ณผ๋ฅผ ์‹œ๊ฐํ™”

**English:**

**Definition:** $\cos(\theta) = \frac{f \cdot g}{\|f\| \|g\|}$ measures alignment between the "performance metric" direction and the "firm value" direction

**Intuitive Understanding:**
- $\theta = 0$ (perfect alignment): metric perfectly reflects firm value
- $\theta = 90°$ (complete misalignment): metric unrelated to firm value

**Economic Implications:**

| $\cos(\theta)$ | Interpretation | Effort Allocation |
|----------------|----------------|-------------------|
| High (≈1) | Metric = True Value | Efficient effort |
| Low (≈0) | Metric ≠ True Value | "Window dressing" effort |

**Real Examples:**
- **High alignment:** Manufacturing defect rates
- **Low alignment:** Professor citation counts (gameable in the short run)

**BGP Connection:**
- BGP Ch. 2 Fig. 2.2: Feasible effort ray is $g$; projecting $f$ onto $g$ visualizes the alignment effect
---

#### ํ•ต์‹ฌ ์งˆ๋ฌธ 3: $\varphi = \frac{h_\phi}{h + h_\phi}$์˜ ์—ญํ•  / Role of Signal Precision

**ํ•œ๊ธ€:**

**์ •์˜:** $\varphi$๋Š” ์„ฑ๊ณผ์ง€ํ‘œ $p_1$์ด ์ง„์งœ ๋Šฅ๋ ฅ $\eta$๋ฅผ ์–ผ๋งˆ๋‚˜ ์ •ํ™•ํžˆ ๋ฐ˜์˜ํ•˜๋Š”๊ฐ€๋ฅผ ์ธก์ •

**๋ฒ ์ด์ง€์•ˆ ํ•ด์„:**

$
\varphi = \frac{h_\phi}{h + h_\phi} = \frac{\text{์‹ ํ˜ธ์˜ ์ •๋ฐ€๋„}}{\text{์‹ ํ˜ธ์˜ ์ •๋ฐ€๋„} + \text{์‚ฌ์ „ ์ •๋ฐ€๋„}}
$

- $h_\phi \uparrow$ (๋…ธ์ด์ฆˆ $\phi_t$ ๊ฐ์†Œ) → $\varphi \uparrow$ → ์‹ ํ˜ธ๊ฐ€ ๋Šฅ๋ ฅ์„ ๋” ์ž˜ ๋ฐ˜์˜
- $h \uparrow$ (๋Šฅ๋ ฅ ๋ถ„์‚ฐ ๊ฐ์†Œ) → $\varphi \downarrow$ → ๋ชจ๋‘ ๋น„์Šทํ•˜๋‹ˆ ์‹ ํ˜ธ ๊ฐ€์น˜ ํ•˜๋ฝ

**๊ฒฝ์ œ์  ๋ฉ”์ปค๋‹ˆ์ฆ˜:**

| $\varphi$ ํฌ๊ธฐ | ์‹œ์žฅ ํ•ด์„ | 1๋…„์ฐจ ๋…ธ๋ ฅ | ๋…ผ๋ฆฌ |
|---------------|----------|-----------|------|
| ๋†’์Œ | "์„ฑ๊ณผ = ๋Šฅ๋ ฅ" | $a_1 \uparrow$ | ๋…ธ๋ ฅ์ด ํ‰ํŒ์— ํฐ ์˜ํ–ฅ |
| ๋‚ฎ์Œ | "์„ฑ๊ณผ = ์šด" | $a_1 \downarrow$ | "์–ด์ฐจํ”ผ ์šด์ด์•ผ" |

**์‹ค์ฆ์  ์˜ˆ์ธก:**
- ๋ณ€๋™์„ฑ ํฐ ์‚ฐ์—… (๋‚ฎ์€ $\varphi$): ๊ฒฝ๋ ฅ ์ดˆ๊ธฐ ๋…ธ๋ ฅ ๊ฐ์†Œ
- ์˜ˆ์ธก ๊ฐ€๋Šฅํ•œ ํ™˜๊ฒฝ (๋†’์€ $\varphi$): ๊ฐ•ํ•œ ๊ฒฝ๋ ฅ ๊ด€๋ฆฌ ์ธ์„ผํ‹ฐ๋ธŒ

**English:**

**Definition:** $\varphi$ measures how accurately performance $p_1$ reflects true ability $\eta$

**Bayesian Interpretation:**

$
\varphi = \frac{h_\phi}{h + h_\phi} = \frac{\text{signal precision}}{\text{signal precision} + \text{prior precision}}
$

- $h_\phi \uparrow$ (noise $\phi_t$ decreases) → $\varphi \uparrow$ → signal better reflects ability
- $h \uparrow$ (ability variance decreases) → $\varphi \downarrow$ → everyone is similar, so the signal is less valuable

**Economic Mechanism:**

| $\varphi$ Level | Market Interpretation | Period 1 Effort | Logic |
|----------------|----------------------|-----------------|-------|
| High | "Performance = Ability" | $a_1 \uparrow$ | Effort strongly affects reputation |
| Low | "Performance = Luck" | $a_1 \downarrow$ | "It's all luck anyway" |

**Empirical Predictions:**
- High-volatility industries (low $\varphi$): reduced early-career effort
- Predictable environments (high $\varphi$): strong career-concern incentives

---

### ๋ฌธ์ œ 1(b): ๋ช…์‹œ์  ๊ณ„์•ฝ ๊ฐ€๋Šฅ / Contractible Performance ($w_t = s_t + b_t p_t$)

#### Setup: ์„ ํ˜• ๊ณ„์•ฝ / Linear Contract

**์ƒˆ๋กœ์šด ๊ณ„์•ฝ ํ˜•ํƒœ:**

$
w_t = s_t + b_t p_t
$

where $b_t$ is the bonus rate (pay-for-performance sensitivity).

---

#### ํ•ต์‹ฌ ์งˆ๋ฌธ 1: ์™œ $b_1^* \neq b_2^*$์ธ๊ฐ€? / Why do bonus rates differ?

**ํ•œ๊ธ€:**

**2๋…„์ฐจ ๋ถ„์„ (์ˆœ์ˆ˜ ๋ช…์‹œ์  ์ธ์„ผํ‹ฐ๋ธŒ):**

๋ฏธ๋ž˜๊ฐ€ ์—†์œผ๋ฏ€๋กœ ์˜ค์ง ๊ณ„์•ฝ๋งŒ์ด ๋™๊ธฐ:

$
\max_{a_2} \quad b_2 p_2 - \frac{1}{2}(a_{12}^2 + a_{22}^2)
$

**FOC:**

$
a_2 = \frac{b_2}{c} g
$

์ตœ์  ๊ณ„์•ฝ ($f$ ๋ฐฉํ–ฅ ์œ ๋„๋ฅผ ์œ„ํ•ด):

$
b_2^* = \frac{f \cdot g}{\|g\|^2} = \frac{\|f\|}{\|g\|} \cos(\theta)
$

> **ํ•ด์„:** ์ •๋ ฌ๋„ $\cos(\theta)$๊ฐ€ ์ตœ์  ๋ณด๋„ˆ์Šค๋ฅผ ๊ฒฐ์ •ํ•ฉ๋‹ˆ๋‹ค.

**1๋…„์ฐจ ๋ถ„์„ (๋ช…์‹œ์  + ์•”๋ฌต์  ์ธ์„ผํ‹ฐ๋ธŒ):**

1๋…„์ฐจ์—๋Š” ๋‘ ๊ฐ€์ง€ ๋™๊ธฐ๊ฐ€ ๊ณต์กด:
1. ๋ช…์‹œ์  ๋ณด๋„ˆ์Šค: $b_1 p_1$
2. ์•”๋ฌต์  ํ‰ํŒ: $\delta \varphi g$ (๋ฏธ๋ž˜ ์—ฐ๋ด‰ ์ƒ์Šน ํšจ๊ณผ)

**์ด ํ•œ๊ณ„ ์ˆ˜์ต:**

$
\text{Total marginal return} = b_1 + \delta \varphi
$

๋”ฐ๋ผ์„œ:

$
a_1 = \frac{b_1 + \delta \varphi}{c} g
$

**ํ•ต์‹ฌ ํ†ต์ฐฐ: ๋ช…์‹œ์  vs ์•”๋ฌต์  ์ธ์„ผํ‹ฐ๋ธŒ๋Š” ๋Œ€์ฒด์žฌ (Substitutes)**

๊ฐ™์€ ๋…ธ๋ ฅ์„ ์œ ๋„ํ•˜๋ ค๋ฉด:

$
b_1^* + \delta \varphi = b_2^* \quad \Rightarrow \quad b_1^* = b_2^* - \delta \varphi
$

**Holmström์˜ "์•ฝํ•œ ์ธ์„ผํ‹ฐ๋ธŒ์˜ ์ง€ํ˜œ" (Wisdom of Weak Incentives):**

๋งŒ์•ฝ $\delta \varphi > b_2^*$์ด๋ฉด ($b_1 \geq 0$ ์ œ์•ฝ ํ•˜์—์„œ) $b_1^* = 0$:
- ํ‰ํŒ ๋™๊ธฐ๋งŒ์œผ๋กœ๋„ ์ถฉ๋ถ„ํ•œ ๋…ธ๋ ฅ์ด ์œ ๋„๋จ
- ํšŒ์‚ฌ๋Š” "์–ด์ฐจํ”ผ ์–˜๊ฐ€ ์—ด์‹ฌํžˆ ํ•  ๊ฑฐ์•ผ"๋ผ๊ณ  ํŒ๋‹จ
- ๋ช…์‹œ์  ๋ณด์ƒ ๋ถˆํ•„์š”

**English:**

**Period 2 Analysis (Pure Explicit Incentives):**

No future, so only the contract matters:

$
\max_{a_2} \quad b_2 p_2 - \frac{1}{2}(a_{12}^2 + a_{22}^2)
$

**FOC:**

$
a_2 = \frac{b_2}{c} g
$

Optimal contract (to induce effort along $f$):

$
b_2^* = \frac{f \cdot g}{\|g\|^2} = \frac{\|f\|}{\|g\|} \cos(\theta)
$

> **Interpretation:** Alignment $\cos(\theta)$ determines the optimal bonus.

**Period 1 Analysis (Explicit + Implicit Incentives):**

Two motivations coexist in period 1:
1. Explicit bonus: $b_1 p_1$
2. Implicit reputation: $\delta \varphi g$ (future wage gain)

**Total marginal return:**

$
\text{Total marginal return} = b_1 + \delta \varphi
$

Therefore:

$
a_1 = \frac{b_1 + \delta \varphi}{c} g
$

**Key Insight: Explicit and Implicit Incentives are Substitutes**

To induce the same effort:

$
b_1^* + \delta \varphi = b_2^* \quad \Rightarrow \quad b_1^* = b_2^* - \delta \varphi
$

**Holmström's "Wisdom of Weak Incentives":**

If $\delta \varphi > b_2^*$, then (given the constraint $b_1 \geq 0$) $b_1^* = 0$:
- The reputation motive alone already induces enough effort
- Firm thinks "they'll work hard anyway"
- Explicit reward unnecessary

---

#### ํ•ต์‹ฌ ์งˆ๋ฌธ 2: ์™œ $a_1^* \neq a_2^*$์ธ๊ฐ€? / Why do effort levels differ?

**ํ•œ๊ธ€:**

**์ง์ ‘์  ์›์ธ:**

$
\begin{align}
a_1^* &= \frac{b_1^* + \delta \varphi}{c} g = \frac{b_2^*}{c} g \quad \text{(if interior solution)} \\
a_2^* &= \frac{b_2^*}{c} g
\end{align}
$

์ตœ์  ๊ณ„์•ฝ ํ•˜์—์„œ๋Š” $a_1^* = a_2^*$ (๊ฐ™์€ ๋…ธ๋ ฅ ์ˆ˜์ค€)!

**ํ•˜์ง€๋งŒ ๋งŒ์•ฝ corner solution์ด๋ฉด ($\delta \varphi \geq b_2^*$):**
- $b_1^* = 0 < b_2^*$์ธ ๊ฒฝ์šฐ
- $a_1^* = \frac{\delta \varphi}{c} g > a_2^* = \frac{b_2^*}{c} g$

**๊ฒฝ์ œ์  ์˜๋ฏธ:**
1. **Interior solution:** ํšŒ์‚ฌ๊ฐ€ ๋ช…์‹œ์  ๋ณด์ƒ์„ ์กฐ์ ˆํ•˜์—ฌ ๊ฐ™์€ ์ด ๋…ธ๋ ฅ ์œ ๋„
2. **Corner solution:** ํ‰ํŒ ๋™๊ธฐ๊ฐ€ ๋„ˆ๋ฌด ๊ฐ•ํ•ด์„œ ๋ช…์‹œ์  ๋ณด์ƒ์ด ์—†์–ด๋„ 1๋…„์ฐจ ๋…ธ๋ ฅ์ด 2๋…„์ฐจ ์ˆ˜์ค€์„ ์ดˆ๊ณผ (๊ณผ์ž‰ ๋…ธ๋ ฅ)

**English:**

**Direct Cause:**

$
\begin{align}
a_1^* &= \frac{b_1^* + \delta \varphi}{c} g = \frac{b_2^*}{c} g \quad \text{(if interior)} \\
a_2^* &= \frac{b_2^*}{c} g
\end{align}
$

Under the optimal contract: $a_1^* = a_2^*$ (same effort level)!

**But at a corner solution ($\delta \varphi \geq b_2^*$):**
- When $b_1^* = 0 < b_2^*$
- Then $a_1^* = \frac{\delta \varphi}{c} g > a_2^* = \frac{b_2^*}{c} g$

**Economic Meaning:**
1. **Interior solution:** the firm adjusts the explicit reward to induce the same total effort
2. **Corner solution:** the reputation motive is so strong that even with no explicit reward, period-1 effort exceeds the period-2 level (over-provision of effort)

---

#### ํ•ต์‹ฌ ์งˆ๋ฌธ 3: ๊ณ„์•ฝ ๊ณต๊ฐœ์˜ ์ค‘์š”์„ฑ / Importance of Public Contracts

**ํ•œ๊ธ€:**

**์‹œ๋‚˜๋ฆฌ์˜ค ๋น„๊ต:**

**๋น„๊ณต๊ฐœ ๊ณ„์•ฝ (Private):**
- ์‹œ์žฅ์ด $b_1$์„ ๊ด€์ฐฐ ๋ชปํ•จ
- ์„ฑ๊ณผ $p_1 = 10$์„ ๋ณด๊ณ  → "์ด๊ฒŒ ๋Šฅ๋ ฅ์ธ๊ฐ€ ๋…ธ๋ ฅ์ธ๊ฐ€?" ๋ถ„ํ•ด ๋ถˆ๊ฐ€
- ๋†’์€ $p_1$์„ ๊ณผ๋„ํ•˜๊ฒŒ ๋Šฅ๋ ฅ์œผ๋กœ ํ•ด์„ → ๊ฐ•ํ•œ ์•”๋ฌต์  ์ธ์„ผํ‹ฐ๋ธŒ $\delta \varphi$
- ๊ฒฐ๊ณผ: $b_1^* \downarrow$ (๋‚ฎ์€ ๋ช…์‹œ์  ๋ณด์ƒ)

**๊ณต๊ฐœ ๊ณ„์•ฝ (Public):**
- ์‹œ์žฅ์ด $b_1 = 0.3$์„ ๊ด€์ฐฐ
- "์•„, ๋ณด๋„ˆ์Šค๊ฐ€ 0.3์ด์—ˆ๊ตฌ๋‚˜ → ์˜ˆ์ƒ ๋…ธ๋ ฅ = $\frac{0.3}{c} g$"
- ์„ฑ๊ณผ๋ฅผ "๋Šฅ๋ ฅ"๊ณผ "๋…ธ๋ ฅ"์œผ๋กœ ์ •ํ™•ํžˆ ๋ถ„ํ•ด
- ๊ฒฐ๊ณผ: ๋” ์ •ํ™•ํ•œ ๋Šฅ๋ ฅ ํ‰๊ฐ€ → ์•ฝํ•œ ์•”๋ฌต์  ์ธ์„ผํ‹ฐ๋ธŒ → $b_1^* \uparrow$

**์ˆ˜ํ•™์  ํ‘œํ˜„:**

- Private: $E[\eta | p_1]$ depends on $p_1$ without conditioning on $b_1$
- Public: $E[\eta | p_1, b_1] = \varphi (p_1 - g \cdot a_1(b_1))$

**์ •์ฑ…์  ํ•จ์˜:**

| ๊ณ„์•ฝ ์œ ํ˜• | ์‹œ์žฅ ์ถ”๋ก  | ์•”๋ฌต์  ์ธ์„ผํ‹ฐ๋ธŒ | ๋ช…์‹œ์  ๋ณด์ƒ | ์‚ฌํšŒ์  ํšจ์œจ์„ฑ |
|----------|----------|----------------|------------|--------------|
| ๋น„๊ณต๊ฐœ | ๋ถˆ์™„์ „ | ๊ฐ•ํ•จ | ๋‚ฎ์Œ | ์™œ๊ณก ๊ฐ€๋Šฅ |
| ๊ณต๊ฐœ | ์ •ํ™• | ์•ฝํ•จ | ๋†’์Œ | ํšจ์œจ์  |

**์‹ค์ œ ์˜ˆ์‹œ:**
- ์ž„์› ๋ณด์ƒ ๊ณต์‹œ ์ œ๋„ (Executive compensation disclosure)
- ํ•™๊ณ„ ์ฑ„์šฉ์‹œ ์ถ”์ฒœ์„œ์˜ ์—ญํ•  (hard evidence vs cheap talk)

**English:**

**Scenario Comparison:**

**Private Contract:**
- Market doesn't observe $b_1$
- Sees performance $p_1 = 10$ → can't decompose "ability" vs "effort"
- Over-attributes high $p_1$ to ability → strong implicit incentive $\delta \varphi$
- Result: $b_1^* \downarrow$ (low explicit reward)

**Public Contract:**
- Market observes $b_1 = 0.3$
- "Oh, the bonus was 0.3 → expected effort = $\frac{0.3}{c} g$"
- Accurately decomposes performance into "ability" and "effort"
- Result: more accurate ability assessment → weak implicit incentive → $b_1^* \uparrow$

**Mathematical Expression:**

- Private: $E[\eta | p_1]$ depends on $p_1$ without conditioning on $b_1$
- Public: $E[\eta | p_1, b_1] = \varphi (p_1 - g \cdot a_1(b_1))$

**Policy Implications:**

| Contract Type | Market Inference | Implicit Incentive | Explicit Reward | Social Efficiency |
|--------------|------------------|-------------------|-----------------|-------------------|
| Private | Imperfect | Strong | Low | Potentially distorted |
| Public | Accurate | Weak | High | Efficient |

**Real Examples:**
- Executive compensation disclosure regulations
- Recommendation letters in academic hiring (hard evidence vs cheap talk)

---

### ์ข…ํ•ฉ ์š”์•ฝํ‘œ / Summary Table: Problem 1

| ๊ฐœ๋… / Concept | ๋ฉ”์ปค๋‹ˆ์ฆ˜ / Mechanism | ํ•จ์˜ / Implication |
|---------------|---------------------|-------------------|
| $\cos(\theta)$ | ์„ฑ๊ณผ์ง€ํ‘œ-๊ฐ€์น˜ ์ •๋ ฌ / Metric-value alignment | ๋‚ฎ์€ ์ •๋ ฌ → ์•ฝํ•œ ์ธ์„ผํ‹ฐ๋ธŒ / Low → weak incentives |
| $\varphi$ | ์‹ ํ˜ธ ์ •ํ™•๋„ / Signal precision | $\varphi \uparrow$ → ์•”๋ฌต์  ๋ณด๋„ˆ์Šค $\delta \varphi \uparrow$ / $\varphi \uparrow$ → implicit bonus ↑ |
| $b_1$ vs $b_2$ | ๋ช…์‹œ์  vs ์•”๋ฌต์  / Explicit vs implicit | $b_1 = b_2 - \delta \varphi$ (๋Œ€์ฒด์žฌ / substitutes) |
| ๊ณต๊ฐœ ๊ณ„์•ฝ / Public contract | ๊ฒ€์ฆ ๊ฐ€๋Šฅ ์‹ ํ˜ธ / Verifiable signal | ๊ฒฝ๋ ฅ ์™œ๊ณก ๊ฐ์†Œ / Reduces career distortion |

---

## 🔹 ๋ฌธ์ œ 2: ํ‰ํŒ๊ณผ ์ •๋ณด ์ „๋‹ฌ / Problem 2: Reputation and Information Transmission

### ์ด๋ก ์  ๋ฐฐ๊ฒฝ / Theoretical Foundation

**Model lineage:**
- **Crawford & Sobel (1982)** cheap talk model (BGP § 4.1)
- Extended with reputation motive $\lambda \times \phi(m)$

**Core theme:** Reputation concerns can **block** information transmission in strategic communication.
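Before the advisor model is set up below, the following minimal Python sketch ties off Problem 1(b) numerically: it computes $b_2^*$, the substitution $b_1^* = \max\{0,\, b_2^* - \delta\varphi\}$, and the induced efforts, including the corner case in which career concerns alone over-supply period-1 effort. The values of `f`, `g`, `delta`, and `varphi` are illustrative assumptions, and the effort-cost curvature is again normalized to one.

```python
import numpy as np

# Illustrative parameters (assumed values, not from the problem set)
f = np.array([0.6, 0.3])       # marginal value of each task's effort to the firm (y_t)
g = np.array([1.0, 0.5])       # marginal effect of each task's effort on the measure p_t
delta, varphi = 0.9, 0.75      # discount factor and signal weight

# Period-2 bonus: b2* = (f.g)/||g||^2 = (||f||/||g||) cos(theta)
b2_star = (f @ g) / (g @ g)

# Period-1 bonus nets out the career-concern return delta*varphi, floored at zero
b1_star = max(0.0, b2_star - delta * varphi)

# Induced efforts (cost curvature normalized to one)
a1 = (b1_star + delta * varphi) * g
a2 = b2_star * g

print(f"b2* = {b2_star:.3f}, b1* = {b1_star:.3f}")
print(f"a1 = {a1}, a2 = {a2}")
# With these values b1* = 0 binds and a1 > a2: career concerns alone
# over-supply period-1 effort (the 'wisdom of weak incentives' corner case).
```

With these assumed numbers, $\delta\varphi = 0.675 > b_2^* = 0.6$, so the explicit period-1 bonus is zero and period-1 effort exceeds the period-2 benchmark.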
--- ### Setup: ์กฐ์–ธ์ž ๋ชจํ˜• / Advisor Model **Players:** - **Sender (Advisor):** Privately observes state $s \in \{0,1\}$, sends message $m \in \{0,1\}$ - **Receiver (Principal):** Updates belief $\phi(m) = Pr(\text{unbiased} | m)$, chooses decision $d$ **Sender Types:** - **Unbiased (u):** $U^u(d,s) = -(d-s)^2$ (aligned with principal) - **Biased (b):** $U^b(d,s) = -(d-1)^2$ (always prefers high $d$) - **Prior:** $Pr(\text{unbiased}) = q \in (0,1)$ **Reputation Motive:** All types value appearing unbiased: $+\lambda \phi(m)$ **Principal's Payoff:** $V(d,s) = -(d-s)^2$ --- ### ๋ฌธ์ œ 2(a): ์™œ ์™„์ „ ๋ถ„๋ฆฌ ๊ท ํ˜•์ด ๋ถˆ๊ฐ€๋Šฅํ•œ๊ฐ€? / Why No Full Separation? **ํ•œ๊ธ€:** **๋ชฉํ‘œ:** ์–‘ ํƒ€์ž… ๋ชจ๋‘ ์ง„์‹ค์„ ๋งํ•˜๋Š” ๊ท ํ˜• ์กด์žฌ ์—ฌ๋ถ€ (truthful revelation: $m = s$) **๋ชจ์ˆœ ์ฆ๋ช… (Proof by Contradiction):** **๊ฐ€์ •:** ๋ถ„๋ฆฌ ๊ท ํ˜• ์กด์žฌ, ์ฆ‰ - Unbiased: $m^u(0) = 0, m^u(1) = 1$ - Biased: $m^b(0) = 0, m^b(1) = 1$ **์ฃผ์žฅ์˜ Principal ๋ฐ˜์‘:** - $m = 0 \Rightarrow d_0 = 0$ (๋ฏฟ๊ณ  ๋‚ฎ์€ ๊ฒฐ์ •) - $m = 1 \Rightarrow d_1 = 1$ (๋ฏฟ๊ณ  ๋†’์€ ๊ฒฐ์ •) **Biased type์˜ ์ดํƒˆ ์œ ์ธ (Deviation Incentive):** ์ƒํƒœ $s = 0$์ผ ๋•Œ biased type์„ ๊ณ ๋ ค: **์ง„์‹ค (Truth-telling):** $ U^b(d_0=0, s=0) = -(0-1)^2 + \lambda \cdot 0.5 = -1 + 0.5\lambda $ (ํ‰ํŒ์€ ์ค‘๋ฆฝ์ , Bayesian updating์œผ๋กœ $\phi(0) = q = 0.5$ ๊ฐ€์ •) **๊ฑฐ์ง“๋ง (Lying to $m=1$):** $ U^b(d_1=1, s=0) = -(1-1)^2 + \lambda \cdot 0.5 = 0 + 0.5\lambda $ **๋น„๊ต:** $ 0 + 0.5\lambda > -1 + 0.5\lambda \quad \Rightarrow \quad \text{Lying strictly dominates} $ **๊ฒฐ๋ก :** Biased type์€ $s=0$์ผ ๋•Œ $m=1$๋กœ ์ดํƒˆ โ†’ ๊ท ํ˜• ๋ถ•๊ดด **Crawford-Sobel ๋ถˆ๊ฐ€๋Šฅ์„ฑ (BGP ยง 4.1.2):** - Cheap talk: ๊ฑฐ์ง“๋ง ๋น„์šฉ ์—†์Œ (no cost to lying) - Preference misalignment: Biased type์€ ์ง„์‹ค ๋งํ•  ์œ ์ธ ์—†์Œ - ๊ฒฐ๊ณผ: ์™„์ „ ๋ถ„๋ฆฌ ๋ถˆ๊ฐ€๋Šฅ (no full separation) **English:** **Goal:** Can both types truthfully reveal state? 
(i.e., $m = s$) **Proof by Contradiction:** **Assumption:** Separating equilibrium exists: - Unbiased: $m^u(0) = 0, m^u(1) = 1$ - Biased: $m^b(0) = 0, m^b(1) = 1$ **Principal's Belief-Based Response:** - $m = 0 \Rightarrow d_0 = 0$ (believes and makes low decision) - $m = 1 \Rightarrow d_1 = 1$ (believes and makes high decision) **Biased Type's Deviation Incentive:** Consider biased type when $s = 0$: **Truth-telling:** $ U^b(d_0=0, s=0) = -(0-1)^2 + \lambda \cdot 0.5 = -1 + 0.5\lambda $ **Lying to $m=1$:** $ U^b(d_1=1, s=0) = -(1-1)^2 + \lambda \cdot 0.5 = 0 + 0.5\lambda $ **Comparison:** $ 0 + 0.5\lambda > -1 + 0.5\lambda \quad \Rightarrow \quad \text{Lying strictly dominates} $ **Conclusion:** Biased type deviates to $m=1$ when $s=0$ โ†’ equilibrium breaks **Crawford-Sobel Impossibility (BGP ยง 4.1.2):** - Cheap talk: no cost to lying - Preference misalignment: biased type has no incentive to tell truth - Result: no full separation possible --- ### ๋ฌธ์ œ 2(b-c): ๋ถ€๋ถ„ ๋ถ„๋ฆฌ ๊ท ํ˜•๋„ ๋ถˆ๊ฐ€๋Šฅ / No Partial Separation Either **ํ•œ๊ธ€:** **๋ถ€๋ถ„ ๋ถ„๋ฆฌ (Partial Separation)๋ž€?** ์˜ˆ: "Unbiased๋Š” ๊ฐ€๋” ์ง„์‹ค, Biased๋Š” ํ•ญ์ƒ ํ’€๋ง" **์™œ ๋ถˆ๊ฐ€๋Šฅํ•œ๊ฐ€?** ํ‰ํŒ ์ธ์„ผํ‹ฐ๋ธŒ $\lambda$๊ฐ€ ์กด์žฌํ•˜๋ฉด: - ๋ชจ๋“  ํƒ€์ž…์ด "์ข‹์€ ํ‰ํŒ ๋ฐ›๋Š” ๋ฉ”์‹œ์ง€"๋กœ ์ˆ˜๋ ด - ์–ด๋–ค ๋ฉ”์‹œ์ง€๋“  ํ•œ์ชฝ์ด ๋” ๋†’์€ ํ‰ํŒ์„ ์ฃผ๋ฉด, ๋ชจ๋‘ ๊ทธ์ชฝ์œผ๋กœ ์ด๋™ - **๊ฒฐ๊ณผ:** ๋‹ค์‹œ ํ’€๋ง์œผ๋กœ ๋ถ•๊ดด (collapse to pooling) **์ˆ˜ํ•™์  ์ง๊ด€:** ๋งŒ์•ฝ $\phi(m_1) > \phi(m_0)$์ด๋ฉด: - ๋ชจ๋“  ํƒ€์ž…์ด $m_1$ ์„ ํƒ โ†’ ๋ถ€๋ถ„ ๋ถ„๋ฆฌ ๋ถˆ๊ฐ€๋Šฅ - ํ‰ํŒ ์ฐจ์ด๊ฐ€ ์ œ๋กœ์—ฌ์•ผ ๋ถ€๋ถ„ ๋ถ„๋ฆฌ ๊ฐ€๋Šฅ โ†’ ํ•˜์ง€๋งŒ preference misalignment๋กœ ๋ถˆ๊ฐ€๋Šฅ **English:** **What is Partial Separation?** E.g., "Unbiased sometimes tells truth, Biased always pools" **Why Impossible?** With reputation incentive $\lambda$: - All types converge to "good reputation message" - If any message gives higher reputation, everyone migrates there - **Result:** Collapses back to pooling **Mathematical Intuition:** If $\phi(m_1) > \phi(m_0)$: - All types choose $m_1$ โ†’ partial separation impossible - Need zero reputation difference for partial separation โ†’ but preference misalignment prevents this --- ### ๋ฌธ์ œ 2(d): ํ’€๋ง ๊ท ํ˜• - ๋ชจ๋‘ "1" / Pooling Equilibrium: All Say "1" **ํ•œ๊ธ€:** **๊ท ํ˜• ๊ตฌ์กฐ:** $ m^u(s) = m^b(s) = 1 \quad \forall s \in \{0,1\} $ **Principal์˜ ๋ฐ˜์‘:** - ๋ฉ”์‹œ์ง€๊ฐ€ ๋ฌด์ •๋ณด์  (uninformative) โ†’ ์‚ฌ์ „ ๋ฏฟ์Œ ์œ ์ง€ - $d^* = E[s] = 0.5$ (prior: $s=0$ ๋˜๋Š” $s=1$ ๊ฐ๊ฐ 50% ๊ฐ€์ •) **๊ท ํ˜• ์กฐ๊ฑด (Incentive Compatibility):** Unbiased type์ด $s=0$์„ ๋ดค์„ ๋•Œ ์™œ $m=1$์„ ์„ ํƒํ•˜๋‚˜? 
**์ง„์‹ค ๋งํ•˜๊ธฐ (Deviate to $m=0$):** - Principal์ด "์ด์ƒํ•˜๋‹ค, biased๊ฒ ์ง€"๋ผ๊ณ  ์ƒ๊ฐ โ†’ $\phi(0) = 0$ (off-equilibrium belief) - ํ•˜์ง€๋งŒ ์ •ํ™•ํ•œ ๊ฒฐ์ •: $d_0 = 0$ - Payoff: $U^u = -(0-0)^2 + \lambda \cdot 0 = 0$ **๊ฑฐ์ง“๋ง (Pool at $m=1$):** - ํ‰ํŒ ์œ ์ง€: $\phi(1) = q$ - ๋ถ€์ •ํ™•ํ•œ ๊ฒฐ์ •: $d_1 = 0.5$ - Payoff: $U^u = -(0.5-0)^2 + \lambda q = -0.25 + \lambda q$ **๊ท ํ˜• ์กฐ๊ฑด:** $ -0.25 + \lambda q \geq 0 \quad \Rightarrow \quad \lambda q \geq 0.25 $ **๊ฒฝ์ œ์  ํ•ด์„:** | ์กฐ๊ฑด | ์˜๋ฏธ | ๊ฒฐ๊ณผ | |------|------|------| | $\lambda$ ํผ | ํ‰ํŒ ๊ฐ€์น˜ ๋†’์Œ | Unbiased๋„ ๊ฑฐ์ง“๋ง | | $\lambda$ ์ž‘์Œ | ํ‰ํŒ ๊ฐ€์น˜ ๋‚ฎ์Œ | ์ง„์‹ค ๋งํ•  ์œ ์ธ (but ๊ท ํ˜• ๊นจ์ง) | **์‹ค์ œ ์˜ˆ์‹œ:** - **ํˆฌ์ž์€ํ–‰ ์• ๋„๋ฆฌ์ŠคํŠธ:** "๋งค๋„" ์ถ”์ฒœ ๊ฑฐ์˜ ์•ˆ ํ•จ (ํ‰ํŒ ๋ฆฌ์Šคํฌ) - **์ •์น˜ ์ปจ์„คํ„ดํŠธ:** "์œ„ํ—˜ํ•œ ์ง„์‹ค"๋ณด๋‹ค "์•ˆ์ „ํ•œ ๊ฑฐ์ง“" ์„ ํ˜ธ **English:** **Equilibrium Structure:** $ m^u(s) = m^b(s) = 1 \quad \forall s \in \{0,1\} $ **Principal's Response:** - Message is uninformative โ†’ maintains prior belief - $d^* = E[s] = 0.5$ (assuming prior: $s=0$ or $s=1$ each 50%) **Equilibrium Condition (Incentive Compatibility):** Why does unbiased type choose $m=1$ when seeing $s=0$? **Truth-telling (Deviate to $m=0$):** - Principal thinks "weird, must be biased" โ†’ $\phi(0) = 0$ (off-equilibrium belief) - But accurate decision: $d_0 = 0$ - Payoff: $U^u = -(0-0)^2 + \lambda \cdot 0 = 0$ **Lying (Pool at $m=1$):** - Maintain reputation: $\phi(1) = q$ - Inaccurate decision: $d_1 = 0.5$ - Payoff: $U^u = -(0.5-0)^2 + \lambda q = -0.25 + \lambda q$ **Equilibrium Condition:** $ -0.25 + \lambda q \geq 0 \quad \Rightarrow \quad \lambda q \geq 0.25 $ **Economic Interpretation:** | Condition | Meaning | Result | |-----------|---------|--------| | Large $\lambda$ | High reputation value | Even unbiased lies | | Small $\lambda$ | Low reputation value | Incentive to tell truth (but equilibrium breaks) | **Real Examples:** - **Investment bank analysts:** Rarely issue "sell" recommendations (reputation risk) - **Political consultants:** Prefer "safe lies" over "dangerous truths" --- ### ๋ฌธ์ œ 2(e): ๋Œ€์นญ์  ํ’€๋ง - ๋ชจ๋‘ "0" / Symmetric Pooling: All Say "0" **ํ•œ๊ธ€:** **๊ท ํ˜• ๊ตฌ์กฐ:** $ m^u(s) = m^b(s) = 0 \quad \forall s \in \{0,1\} $ **๊ท ํ˜• ์กฐ๊ฑด:** (d)์™€ ๋Œ€์นญ์  ๋…ผ๋ฆฌ Unbiased type์ด $s=1$์„ ๋ดค์„ ๋•Œ ์™œ $m=0$์„ ์„ ํƒํ•˜๋‚˜? $ y(1-d_1^2) \leq \lambda[q - 0] $ where $d_1$ is off-equilibrium belief response to $m=1$. **๊ฒฝ์ œ์  ์˜๋ฏธ:** - ์กฐ๊ฑด (d)๊ฐ€ ์„ฑ๋ฆฝ ์•ˆ ํ•˜๋ฉด ์ด์ชฝ ๊ท ํ˜• ์กด์žฌ - ์—ฌ์ „ํžˆ ๋ฌด์ •๋ณด์  (uninformative) - Principal์€ ์•„๋ฌด๊ฒƒ๋„ ๋ฐฐ์šฐ์ง€ ๋ชปํ•จ **ํ•ต์‹ฌ ๊ตํ›ˆ:** > **ํ‰ํŒ ๊ด€๋ฆฌ๊ฐ€ ์ •๋ณด ์ „๋‹ฌ์„ ํŒŒ๊ดดํ•ฉ๋‹ˆ๋‹ค (Reputation destroys information transmission)** **English:** **Equilibrium Structure:** $ m^u(s) = m^b(s) = 0 \quad \forall s \in \{0,1\} $ **Equilibrium Condition:** Symmetric logic to (d) Why does unbiased type choose $m=0$ when seeing $s=1$? $ y(1-d_1^2) \leq \lambda[q - 0] $ where $d_1$ is off-equilibrium belief response to $m=1$. 
**Economic Meaning:** - When condition (d) fails, this equilibrium exists - Still uninformative - Principal learns nothing **Key Lesson:** > **Reputation concerns destroy information transmission** --- ### Hard Evidence์˜ ์—ญํ•  / Role of Hard Evidence (BGP ยง 4.1.4) **ํ•œ๊ธ€:** **Hard Evidence vs Cheap Talk:** | ํŠน์„ฑ | Cheap Talk | Hard Evidence | |------|-----------|---------------| | ๊ฑฐ์ง“๋ง ๋น„์šฉ | ์—†์Œ (costless) | ๋ถˆ๊ฐ€๋Šฅ (impossible) | | ๊ฐ€๋Šฅ ๋ฉ”์‹œ์ง€ | ์ƒํƒœ ๋ฌด๊ด€ | ์ƒํƒœ ์˜์กด์  (state-dependent) | | ๊ท ํ˜• | ํ’€๋ง (pooling) | ๋ถ„๋ฆฌ (separating) | | ์ •๋ณด ์ „๋‹ฌ | ์‹คํŒจ | ์„ฑ๊ณต | **Unraveling ๋ฉ”์ปค๋‹ˆ์ฆ˜ (Grossman-Milgrom 1981):** Hard evidence ํ™˜๊ฒฝ์—์„œ: 1. ์ตœ๊ณ  ํƒ€์ž… ($s=1$)์ด ์ฆ๊ฑฐ ์ œ์‹œ โ†’ ๊ตฌ๋ถ„๋จ 2. ์ œ์‹œ ์•ˆ ํ•˜๋ฉด "๋‚ฎ์€ ํƒ€์ž…"์œผ๋กœ ๊ฐ„์ฃผ๋จ 3. $s=0$ ๊ทผ์ฒ˜ ํƒ€์ž…๋„ ์ œ์‹œ โ†’ ์—ฐ์‡„์  ๊ณต๊ฐœ (unraveling) 4. **๊ฒฐ๊ณผ:** ์™„์ „ ์ •๋ณด ๊ณต๊ฐœ (full revelation) **์‹ค์ œ ์‘์šฉ:** - ์žฌ๋ฌด์ œํ‘œ ๊ณต์‹œ (Financial disclosure) - ํ’ˆ์งˆ ์ธ์ฆ (Quality certification) - ๊ต์œก ํ•™์œ„ (Educational credentials) **English:** **Hard Evidence vs Cheap Talk:** | Feature | Cheap Talk | Hard Evidence | |---------|-----------|---------------| | Cost of Lying | Costless | Impossible | | Feasible Messages | State-independent | State-dependent | | Equilibrium | Pooling | Separating | | Information | Fails | Succeeds | **Unraveling Mechanism (Grossman-Milgrom 1981):** In hard evidence environment: 1. Highest type ($s=1$) presents evidence โ†’ gets distinguished 2. Not presenting โ†’ inferred as "low type" 3. Types near $s=0$ also present โ†’ cascading revelation (unraveling) 4. **Result:** Full revelation **Real Applications:** - Financial disclosure - Quality certification - Educational credentials --- ### ์ข…ํ•ฉ ์š”์•ฝํ‘œ / Summary Table: Problem 2 | ์š”์ธ / Force | ๋ฉ”์ปค๋‹ˆ์ฆ˜ / Mechanism | ํšจ๊ณผ / Effect | |-------------|---------------------|--------------| | ํ‰ํŒ ($\lambda$) | "๊ณต์ •ํ•œ ์ฒ™" ๊ฐ€์น˜ / Value of appearing unbiased | ๋™์กฐ ์œ ๋„, ํ’€๋ง / Drives conformity, pooling | | ํŽธํ–ฅ (Bias) | ๋†’์€ ๊ฒฐ์ • ์„ ํ˜ธ / Intrinsic desire for high $d$ | ์ง„์‹ค ์™œ๊ณก / Skews truthful reporting | | Cheap Talk | ๊ฑฐ์ง“๋ง ๋น„์šฉ ์—†์Œ / No lie cost | ๋ชจ๋ฐฉ ๊ฐ€๋Šฅ โ†’ ์ •๋ณด ์†์‹ค / Allows mimicry โ†’ info loss | | Hard Evidence | ์ƒํƒœ ์˜์กด ๊ฐ€๋Šฅ ๋ฉ”์‹œ์ง€ / State-dependent feasible set | ์ง„์‹ค ์ „๋‹ฌ ํšŒ๋ณต / Restores truthful communication | --- ## ๐Ÿ”น ์ „์ฒด ์ข…ํ•ฉ / Grand Synthesis ### ๋‘ ๋ฌธ์ œ์˜ ๊ณตํ†ต ์ฃผ์ œ / Common Themes Across Problems **1. ๋ช…์‹œ์  vs ์•”๋ฌต์  ์ธ์„ผํ‹ฐ๋ธŒ (Explicit vs Implicit Incentives)** | ๋ฌธ์ œ | ๋ช…์‹œ์  | ์•”๋ฌต์  | ๊ด€๊ณ„ | |------|-------|-------|------| | Problem 1 | ๋ณด๋„ˆ์Šค $b_t$ | ํ‰ํŒ $\delta \varphi$ | ๋Œ€์ฒด์žฌ (substitutes) | | Problem 2 | ์—†์Œ (N/A) | ํ‰ํŒ $\lambda$ | ์ •๋ณด ์™œ๊ณก (distorts info) | **2. ์ •๋ณด์™€ ์ธ์„ผํ‹ฐ๋ธŒ์˜ ์ƒํ˜ธ์ž‘์šฉ (Information-Incentive Interaction)** - **Problem 1:** ์ •๋ณด ํˆฌ๋ช…์„ฑ (๊ณ„์•ฝ ๊ณต๊ฐœ) โ†’ ํ‰ํŒ ์‹œ์Šคํ…œ ์ž‘๋™ - **Problem 2:** ์ •๋ณด ๋น„๋Œ€์นญ + ํ‰ํŒ โ†’ ์˜์‚ฌ์†Œํ†ต ์‹คํŒจ **3. 
์ •๋ ฌ์˜ ์ค‘์š”์„ฑ (Importance of Alignment)** - **Problem 1:** $\cos(\theta)$ โ†’ ์„ฑ๊ณผ์ง€ํ‘œ-๊ฐ€์น˜ ์ •๋ ฌ - **Problem 2:** Preference alignment โ†’ ์ •๋ณด ์ „๋‹ฌ ๊ฐ€๋Šฅ์„ฑ --- ### ๋‹น์‹ ์˜ ์—ฐ๊ตฌ์™€์˜ ์—ฐ๊ฒฐ / Connection to Your Research #### Strategic Ambiguity & OIL Framework **๋‹น์‹ ์˜ OIL ๊ณต์‹:** $ \tau^* = \max\left\{0, \sqrt{\frac{V}{4i}} - 1\right\} $ where: - $\tau$ = precision level (1 = vague, 0 = precise) - $V$ = variance of project value - $i$ = info gathering cost **Problem 1๊ณผ์˜ ์—ฐ๊ฒฐ:** **Career Concerns = Implicit Value of Ambiguity** ๊ธฐ์—…๊ฐ€๊ฐ€ precision์„ ์„ ํƒํ•˜๋Š” ๋ฌธ์ œ๋Š” ๋ฌธ์ œ 1์˜ "์–ด๋–ค performance measure๋ฅผ ๊ณต๊ฐœํ• ์ง€" ์„ ํƒ๊ณผ ์œ ์‚ฌ: | Precision Choice | Career Concerns Analog | Trade-off | |-----------------|----------------------|-----------| | ๋†’์€ precision (๋‚ฎ์€ $\tau$) | ๋งŽ์€ ์„ฑ๊ณผ์ง€ํ‘œ ๊ณต๊ฐœ | ์ ์‘๋ ฅ โ†“, ์ฑ…์ž„์„ฑ โ†‘ / Low adapt, High account | | ๋‚ฎ์€ precision (๋†’์€ $\tau$) | ์ ์€ ์„ฑ๊ณผ์ง€ํ‘œ ๊ณต๊ฐœ | ์ ์‘๋ ฅ โ†‘, ์ฑ…์ž„์„ฑ โ†“ / High adapt, Low account | **์ •๋ ฌ ํšจ๊ณผ ($\cos\theta$)์™€ Precision:** - ๋ถˆํ™•์‹ค์„ฑ ๋†’์€ ํ™˜๊ฒฝ (๋‚ฎ์€ $\cos\theta$) โ†’ ์• ๋งค๋ชจํ˜ธํ•œ ์•ฝ์†์ด ์œ ๋ฆฌ - ์˜ˆ์ธก ๊ฐ€๋Šฅํ•œ ํ™˜๊ฒฝ (๋†’์€ $\cos\theta$) โ†’ ์ •ํ™•ํ•œ ์•ฝ์†์ด ๊ฐ€๋Šฅ **Problem 2์™€์˜ ์—ฐ๊ฒฐ:** **Pooling Equilibrium = Strategic Ambiguity** ๋ฌธ์ œ 2์˜ ํ’€๋ง ๊ท ํ˜•์€ strategic ambiguity์˜ ํ•œ ํ˜•ํƒœ: | ๊ฐœ๋… | ๋ฌธ์ œ 2 ํ‘œํ˜„ | OIL ํ”„๋ ˆ์ž„์›Œํฌ | |------|-----------|--------------| | ์˜๋„์  ๋ชจํ˜ธ์„ฑ | ๋ชจ๋‘ ๊ฐ™์€ ๋ฉ”์‹œ์ง€ | ๋†’์€ $\tau$ ์„ ํƒ | | ํ‰ํŒ ๋™๊ธฐ | $\lambda$ ํผ | ์•”๋ฌต์  ์•ฝ์† ๊ฐ€์น˜ | | ์ •๋ณด ์†์‹ค | ํ’€๋ง โ†’ ๋ฌด์ •๋ณด | ์œ ์—ฐ์„ฑ โ†‘, ์‹ ๋ขฐ โ†“ | **๋‹น์‹ ์˜ H1 & H2 ์—ฐ๊ฒฐ:** - **H1: Vague promises โ†’ lower early funding** ($\alpha_1 < 0$) - Problem 1 parallel: ๋‚ฎ์€ signal precision ($\varphi$) โ†’ ์•ฝํ•œ career incentive - Mechanism: ํˆฌ์ž์ž๊ฐ€ ๋Šฅ๋ ฅ ํŒ๋‹จ ์–ด๋ ค์›€ - **H2: Vague promises โ†’ higher later success** ($\beta_1 > 0$) - Problem 2 parallel: Pooling equilibrium์ด ์œ ์—ฐ์„ฑ ์ œ๊ณต - Mechanism: ์ ์‘ ๊ฐ€๋Šฅ ๊ณต๊ฐ„ ํ™•๋ณด (option value) **ํ†ตํ•ฉ ์ง๊ด€:** > **๋ฌธ์ œ 1 + 2 โ†’ OIL Framework** > > - Career concerns (๋ฌธ์ œ 1): precision์ด ํ‰ํŒ๊ณผ ์ ์‘๋ ฅ ์‚ฌ์ด ํŠธ๋ ˆ์ด๋“œ์˜คํ”„ ์ƒ์„ฑ > - Cheap talk (๋ฌธ์ œ 2): ์ „๋žต์  ๋ชจํ˜ธ์„ฑ์ด ํ‰ํŒ ๊ด€๋ฆฌ ์ˆ˜๋‹จ > - **OIL:** ์ตœ์  ambiguity๋Š” reputation value์™€ information cost ๊ท ํ˜• > > $ > \text{Optimal } \tau^* = f\left(\underbrace{\lambda}_{\text{reputation}}, \underbrace{\varphi}_{\text{signal quality}}, \underbrace{V/i}_{\text{uncertainty/cost}}\right) > $ --- ## ๐Ÿ”น ํ•ต์‹ฌ ๊ตํ›ˆ / Key Takeaways ### 1. ๋ช…์‹œ์  & ์•”๋ฌต์  ์ธ์„ผํ‹ฐ๋ธŒ๋Š” ๋Œ€์ฒด์žฌ / Explicit & Implicit Incentives are Substitutes **์ˆ˜์‹:** $ b_1^* = b_2^* - \delta \varphi $ **์ง๊ด€:** "์–ด์ฐจํ”ผ ์–˜๊ฐ€ ์—ด์‹ฌํžˆ ํ•  ๊ฑฐ์•ผ" โ†’ ๋ณด๋„ˆ์Šค ๊ฐ์†Œ **์‘์šฉ:** ์Šคํƒ€ํŠธ์—… ์ดˆ๊ธฐ ํŒ€ (๋†’์€ ๊ฒฝ๋ ฅ ๊ด€๋ฆฌ ๋™๊ธฐ โ†’ ๋‚ฎ์€ ๋ช…์‹œ์  ๋ณด์ƒ ๊ฐ€๋Šฅ) --- ### 2. ํˆฌ๋ช…์„ฑ์ด ํ‰ํŒ ์‹œ์Šคํ…œ์„ ์ž‘๋™์‹œํ‚ด / Transparency Enables Reputation Systems **๋ฉ”์ปค๋‹ˆ์ฆ˜:** ๊ณ„์•ฝ ๊ณต๊ฐœ โ†’ ์‹œ์žฅ์ด "๋Šฅ๋ ฅ"๊ณผ "๋…ธ๋ ฅ" ๋ถ„ํ•ด โ†’ ์ •ํ™•ํ•œ ํ‰ํŒ **์‘์šฉ:** - ๊ธฐ์—… ํˆฌ๋ช…์„ฑ (ESG ๊ณต์‹œ) - ํ•™๊ณ„ ์—ฐ๊ตฌ ๊ณผ์ • ๊ณต๊ฐœ (Open Science) --- ### 3. 
ํ‰ํŒ์ด ์ •๋ณด ์ „๋‹ฌ์„ ํŒŒ๊ดดํ•  ์ˆ˜ ์žˆ์Œ / Reputation Can Destroy Information Transmission **์—ญ์„ค:** "์ข‹์€ ์‚ฌ๋žŒ์œผ๋กœ ๋ณด์ด๊ธฐ" โ†‘ โ†’ "์ง„์‹ค ๋งํ•˜๊ธฐ" โ†“ **Crawford-Sobel ํ†ต์ฐฐ:** - Cheap talk + reputation โ†’ pooling - Hard evidence โ†’ unraveling (Grossman-Milgrom) **์‘์šฉ:** - ํˆฌ์ž ์กฐ์–ธ ์‹œ์žฅ์˜ "herding" - ๊ธฐ์—… ๋‚ด๋ถ€ ์ •๋ณด ํ๋ฆ„ ์„ค๊ณ„ --- ### 4. ์ •๋ ฌ์ด ๋ชจ๋“  ๊ฒƒ / Alignment is Everything **๋ฌธ์ œ 1:** $\cos(\theta)$ โ†’ ์„ฑ๊ณผ์ง€ํ‘œ๊ฐ€ ๊ฐ€์น˜๋ฅผ ๋ฐ˜์˜ํ•˜๋Š”๊ฐ€? **๋ฌธ์ œ 2:** Preference alignment โ†’ ์ •๋ณด ์ „๋‹ฌ ๊ฐ€๋Šฅํ•œ๊ฐ€? **์ผ๋ฐ˜ ์›์น™:** ์ธ์„ผํ‹ฐ๋ธŒ ์„ค๊ณ„๋Š” ๋ชฉํ‘œ ์ •๋ ฌ์—์„œ ์‹œ์ž‘ --- ## ๐Ÿ”น ์ด์ˆœ์‹ ์˜ ์ง€ํ˜œ์™€ ์—ฐ๊ฒฐ / Connection to Yi Sun-sin's Strategy **ไธ‰้“ๆฐด่ป (์‚ผ๋„์ˆ˜๊ตฐ) ํ”„๋ ˆ์ž„์›Œํฌ:** | ํ•จ๋Œ€ / Fleet | ์กฐ์ง๊ฒฝ์ œํ•™ ๊ฐœ๋… / Org Econ Concept | ๋‹น์‹ ์˜ AI ์ฒด๊ณ„ / Your AI System | |-------------|--------------------------------|---------------------------| | **์ „๋ผ ์ขŒ์ˆ˜์˜** | Career Concerns (๋ช…์‹œ์  ์ธ์„ผํ‹ฐ๋ธŒ) / Explicit incentives | ChatGPT (ๅˆฉ) - ๋น ๋ฅธ ์‹คํ–‰ | | **์ „๋ผ ์šฐ์ˆ˜์˜** | Reputation System (์•”๋ฌต์  ์ธ์„ผํ‹ฐ๋ธŒ) / Implicit incentives | Claude (ๆ€) - ๊ตฌ์กฐํ™” | | **๊ฒฝ์ƒ ์šฐ์ˆ˜์˜** | Information Transmission (Hard evidence) / Info transmission | Gemini (็พฉ) - ๊ฒ€์ฆ | **์ด์ˆœ์‹ ์˜ ๊ตํ›ˆ:** > "์ •๋ณด์˜ ํˆฌ๋ช…์„ฑ + ๋ช…ํ™•ํ•œ ์ธ์„ผํ‹ฐ๋ธŒ + ํ‰ํŒ ๊ด€๋ฆฌ = ์Šน๋ฆฌ" > > โ†’ ๋ช…๋Ÿ‰ ํ•ด์ „: ํˆฌ๋ช…ํ•œ ์ „์ˆ  ๊ณต์œ  (hard evidence) + ๋ช…ํ™•ํ•œ ์„ฑ๊ณผ ์ง€ํ‘œ + ๊ฐ•ํ•œ ํ‰ํŒ ์‹œ์Šคํ…œ **๋‹น์‹ ์˜ ์—ฐ๊ตฌ ์ „๋žต:** > Career concerns์™€ cheap talk ์ด๋ก ์„ strategic ambiguity์— ํ†ตํ•ฉ > > โ†’ Precision choice๊ฐ€ reputation๊ณผ information transmission์„ ๋™์‹œ์— ๊ฒฐ์ •ํ•˜๋Š” ๋ฉ”์ปค๋‹ˆ์ฆ˜ --- ## ๐Ÿ”น ์ตœ์ข… ์ฒดํฌ๋ฆฌ์ŠคํŠธ / Final Checklist **์ˆ˜ํ•™์  ์ •ํ™•์„ฑ / Mathematical Precision:** - โœ“ ๋ชจ๋“  FOC ๋„์ถœ ์ •ํ™• - โœ“ ๋ฒ ์ด์ง€์•ˆ ์—…๋ฐ์ดํŠธ ๊ณต์‹ ์ •ํ™• - โœ“ ๊ท ํ˜• ์กฐ๊ฑด ๋ช…ํ™• **๋ฌธํ—Œ ์—ฐ๊ฒฐ / Literature Connection:** - โœ“ Holmstrรถm (1982/1999) - Career concerns - โœ“ Holmstrรถm & Milgrom (1991) - Multitask - โœ“ Crawford & Sobel (1982) - Cheap talk - โœ“ Grossman & Milgrom (1981) - Hard evidence - โœ“ BGP Chapters 2 & 4 ๋ช…์‹œ **์ง๊ด€ ์„ค๋ช… / Intuitive Explanation:** - โœ“ ํ•œ์˜ ๋ณ‘ํ–‰ ์„ค๋ช… - โœ“ ์‹ค์ œ ์˜ˆ์‹œ ํฌํ•จ - โœ“ ๋‹น์‹ ์˜ ์—ฐ๊ตฌ์™€ ์—ฐ๊ฒฐ **Charlie & Scott ์ˆ˜์ค€:** - โœ“ ์ด๋ก ์  rigor - โœ“ ์‹ค์ฆ์  ํ•จ์˜ - โœ“ ์ •์ฑ…์  ์‘์šฉ --- ## ๐Ÿ“š ์ถ”๊ฐ€ ์ฐธ๊ณ ๋ฌธํ—Œ / Additional References 1. **Holmstrรถm, B. (1999).** "Managerial Incentive Problems: A Dynamic Perspective." *Review of Economic Studies*, 66(1): 169-182. (Originally published 1982) 2. **Holmstrรถm, B., & Milgrom, P. (1991).** "Multitask Principal-Agent Analyses: Incentive Contracts, Asset Ownership, and Job Design." *Journal of Law, Economics, & Organization*, 7: 24-52. 3. **Crawford, V. P., & Sobel, J. (1982).** "Strategic Information Transmission." *Econometrica*, 50(6): 1431-1451. 4. **Grossman, S. J., & Milgrom, P. (1981).** "The Economics of Information." In *Handbook of Mathematical Economics*, Vol. III. 5. **Bolton, P., & Dewatripont, M.** *Contract Theory.* MIT Press. (BGP) --- **ๅฟ…ๆญปๅฝ็”Ÿ (ํ•„์‚ฌ์ฆ‰์ƒ)** *"์ฃฝ์„ ๊ฐ์˜ค๋กœ ์‹ธ์šฐ๋ฉด ๋ฐ˜๋“œ์‹œ ์‚ด ์ˆ˜ ์žˆ๋‹ค"* **Good luck on your midterm! ๐Ÿ’ฏ**