In many defense settings, agents are on the same team as the central decision-maker: they report their preferences truthfully, and the decision-maker aggregates those reported preferences to select an outcome. This departs from, and is easier to analyze than, the traditional mechanism design setting from microeconomics, in which agents are not assumed to be truthful. But what if those trusted agents’ true preferences are themselves manipulated by upstream actors (either adversaries or nature)? How should a decision-maker act when strategic or natural uncertainty distorts trusted agents’ beliefs?
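To make the setting concrete, the following is a minimal toy sketch, not the paper's model: the agent names, the three-alternative example, and the choice of the Borda rule as the aggregation method are all illustrative assumptions. The agents report truthfully, but an upstream actor has already altered one agent's underlying preferences, and that upstream manipulation flips the aggregated outcome.

```python
def borda_winner(profile, alternatives):
    """Aggregate truthfully reported rankings with the Borda rule.

    Each ranking lists alternatives from most to least preferred;
    with k alternatives, the top-ranked one earns k-1 points, the
    next k-2, and so on. Ties are broken by alternative order.
    """
    scores = {a: 0 for a in alternatives}
    for ranking in profile:
        for points, alt in enumerate(reversed(ranking)):
            scores[alt] += points
    return max(scores, key=scores.get)

# True preferences of three trusted agents over alternatives a, b, c.
true_profile = [
    ("a", "b", "c"),
    ("a", "c", "b"),
    ("b", "a", "c"),
]

# An upstream actor (adversary or nature) perturbs the first agent's
# beliefs before reporting; the agent still reports truthfully, but
# its "true" preferences are now the manipulated ones.
manipulated_profile = [("b", "c", "a")] + true_profile[1:]

print(borda_winner(true_profile, "abc"))         # -> a (scores: a=5, b=3, c=1)
print(borda_winner(manipulated_profile, "abc"))  # -> b (scores: a=3, b=4, c=2)
```

Even though every agent reports truthfully to the decision-maker, the upstream perturbation of a single agent's preferences changes the aggregate winner, which is exactly the vulnerability the questions above are asking about.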