Biased Recommendations from Unbiased Experts
Wonsuk Chung and Rick Harbaugh∗
This Version: August 2016
Abstract

When can an expert be trusted to provide useful advice? We develop and experimentally test a simplified recommendation game where an expert recommends one of two actions to a decision maker who can take one of the actions or neither. Consistent with predictions from the cheap talk literature, we find that recommendations are persuasive in that they induce actions benefiting the expert, but decision makers partially discount recommendations for the action a biased expert favors. If the decision maker is uncertain over whether the expert is biased toward an action, unbiased experts follow a political correctness strategy of recommending the opposite action in order to be more persuasive, which decision makers do not sufficiently discount. Even if experts are known to be unbiased, experts pander by recommending the action that the decision maker already favors, and decision makers discount the recommendation. The results highlight that transparency of expert incentives can improve communication, but need not ensure unbiased advice.
JEL Classification: D82, C92, M3.
Key Words: cheap talk, persuasion, transparency, pandering, political correctness

∗ Chung: Korea Insurance Research Institute, email@example.com; Harbaugh: Department of Business Economics and Public Policy, Kelley School of Business, Indiana University, firstname.lastname@example.org. For helpful comments we thank Alexei Alexandrov, Tim Cason, Archishman Chakraborty, Bill Harbaugh, Tarun Jain, Wooyoung Lim, Dmitry Lubensky, Stephen Morris, Michael Raith, Eric Rasmusen, Eric Schmidbauer, Irene Skricki, Joel Sobel, Yossi Spiegel, Lise Vesterlund, Jimmy Walker, Stephanie Wang, Alistair Wilson, and seminar and conference participants at the University of Pittsburgh, Michigan State University, Booth School of Business, Simon School of Business, Indian School of Business, the Western Economics Association meetings, the Fifth Conference on the Economics of Advertising and Marketing at Qinghua University's School of Management and Economics, and the 2013 Boulder Summer Conference on Consumer Financial Decision Making.
1 Introduction

When an expert advises a decision maker, the expert may benefit from some choices more than others, such as a salesperson who earns a higher commission on some products. Can an expert's recommendation still be persuasive with such a conflict of interest, or will it be completely discounted? How is communication affected if the expert is suspected to benefit more from one choice, but the decision maker is not sure? And what if the decision maker is already leaning toward a choice, such as a customer who is known to favor a particular product?
Understanding these issues is important to companies, institutions, and regulators that structure the incentive and information environment in which experts provide advice. In recent years the incentives of mortgage brokers to recommend high cost loans, of credit rating agencies to overrate risky bonds, of stock analysts to push their clients' stocks, of medical researchers to promote certain drugs, and of doctors to recommend expensive treatments have all come under scrutiny. Can such problems be resolved by requiring disclosure of any conflicts of interest, or is it necessary to eliminate biased incentives? And are unbiased incentives always sufficient to ensure unbiased advice?1

To gain insight into such questions, several recent papers have applied the cheap talk approach of Crawford and Sobel (1982) to discrete choice environments where an expert has private information about different options a decision maker may choose among, the expert has preferences over these different options, and the decision maker has an outside option that is the expert's least favored choice (e.g., Chakraborty and Harbaugh, 2010; Inderst and Ottaviani, 2012; Che, Dessein and Kartik, 2013).2 Recommending one option over another induces the decision maker to have a better impression of the recommended option but also a worse impression of other options. For instance, a salesperson's recommendation to buy one of several products on display can make a customer more favorably disposed toward that product, but also less likely to buy the other products. Because of this endogenous opportunity cost from recommending one option or another, such "comparative cheap talk" may be credible even without reputational or other constraints on lying.3

1 Regulations can impose more equal incentives, e.g., requirements for "firewalls" that limit the incentive of stock analysts to push their firm's clients, and incentives may also be adjusted voluntarily to increase credibility, e.g., Best Buy promotes its "Non-commissioned sales professionals" whose "first priority is to help you make the right purchasing decision". Similarly, conflict of interest disclosure may be imposed as the SEC does for investment advisors, or voluntarily adopted as many medical journals have done for authors.

2 Earlier models that analyze some of these issues or related issues include De Jaegher and Jegers (2001), Chakraborty and Harbaugh (2007), and Bolton, Freixas, and Shapiro (2007).

3 This opportunity cost arises only if the recommendation influences the decision maker in equilibrium, so the game is a cheap talk game rather than a (costly) signaling game (Chakraborty and Harbaugh, 2007).

In this paper we develop a simplified recommendation game based on this literature to test the literature's main findings in a laboratory experiment. We assume that an expert has private information on the values of two actions to a decision maker and recommends one of them.
The expert benefits to some extent if either action is taken, but the expert does not benefit if the decision maker takes neither action, e.g., if a customer does not buy any of a salesperson's products. Hence, as in the Crawford-Sobel model but in a discrete choice setting, the expert's and decision maker's preferences are neither completely aligned nor completely opposed. For simplicity we assume that one action is good and one action is bad, so that the only uncertainty is over which action is the good one. Despite its simplicity, the model captures several phenomena that are the focus of recent research on recommendations by biased experts.
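To fix ideas, the structure can be summarized compactly. The notation and normalization below are our own illustrative choices rather than the paper's exact parameterization:

\[
\begin{aligned}
&\theta \in \{1,2\} \text{ is the good action, with prior } \Pr(\theta = 1) = p;\\
&\text{the expert observes } \theta \text{ and sends a recommendation } m \in \{1,2\};\\
&\text{the decision maker chooses } a \in \{0,1,2\}, \text{ where } a = 0 \text{ is the outside option worth } v_0;\\
&\text{the decision maker gets } v_G \text{ if } a = \theta \text{ and } v_B < v_G \text{ if } a \in \{1,2\} \text{ with } a \neq \theta;\\
&\text{the expert gets } \pi_a > 0 \text{ if } a \in \{1,2\} \text{ (with } \pi_1 \neq \pi_2 \text{ for a biased expert) and } 0 \text{ if } a = 0.
\end{aligned}
\]

The message is cheap talk, entering no one's payoffs directly, so any credibility must come from the equilibrium tradeoff between the two recommendations.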
First, recommendations are not only credible in that they reveal information in equilibrium, but for sufficient payoff symmetry they are also "persuasive" in that they benefit the expert by reducing the probability that the decision maker walks away without taking either action. Even though the expert always wants the decision maker to avoid the outside option, persuasive communication is possible simply because there are two actions that the expert benefits from. A recommendation for one action raises the expected value of that action and at the same time lowers the expected value of the other action, but the expert still benefits since the higher expected value of one of the actions is now more likely to exceed the decision maker's outside option.4 For instance, a customer is more likely to make a purchase if a recommendation persuades him that at least one of two comparably priced products under consideration is of high quality. In our experimental results we find that recommendations are usually accepted, and are almost always accepted when the decision maker's outside option of taking neither action is poor.
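As a worked illustration with made-up numbers (not necessarily the experiment's parameters): let the good action be worth 10 and the bad action 0 to the decision maker, let each action be equally likely to be the good one, and let the outside option be worth 6. Without advice neither action beats the outside option, but a truthful recommendation lifts one action above it:

\[
E[v_a] = \tfrac{1}{2}(10) + \tfrac{1}{2}(0) = 5 < 6 = v_0 \quad \Rightarrow \quad \text{no action is taken without advice},
\]
\[
E[v_1 \mid m = 1] = 10 > 6 = v_0 \quad \Rightarrow \quad \text{the recommended action is taken}.
\]

The recommendation simultaneously drives $E[v_2 \mid m = 1]$ down to 0, but the expert does not care which action is taken, only that one of them is.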
Second, when the expert is biased in the sense of having a stronger incentive to push one action, a recommendation for that action becomes more suspicious, so in equilibrium it is "discounted" and less influential than a recommendation for the other action. Since the expert sometimes falsely claims that the more incentivized action is better, a recommendation for that action raises the updated expected value of that action less than a recommendation for the other action does. Therefore the decision maker is more likely to ignore the recommendation and stick with the outside option. In equilibrium the expert faces a tradeoff where one recommendation generates a higher payoff if it is accepted but is less likely to be accepted, while the other recommendation generates a lower payoff if it is accepted but is more likely to be accepted.5 Consistent with theoretical predictions, in our experimental results we find that experts are significantly more likely to lie in the direction of the more incentivized action, and that decision makers are significantly less likely to accept a recommendation for the more incentivized action.

4 In the standard uniform-quadratic version of the Crawford-Sobel model, communication benefits the expert by making the distribution of actions more closely match the true state, but the average action remains the same as without communication.

5 The Crawford-Sobel model also captures discounting in that equilibrium expected values of the state conditional on messages are below those implied by a symmetric, non-strategic interpretation of the messages. However, the equilibrium tradeoff in the Crawford-Sobel model is different as it is driven by the reluctance of the expert to recommend a higher action when the true state is sufficiently low.
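To illustrate the discounting logic above with the same hypothetical numbers: suppose the expert recommends action 1 truthfully whenever it is good, but also misrecommends it with probability $\lambda$ when action 2 is good. Bayes' rule gives

\[
\Pr(\theta = 1 \mid m = 1) = \frac{\tfrac{1}{2}}{\tfrac{1}{2} + \tfrac{1}{2}\lambda} = \frac{1}{1+\lambda}, \qquad E[v_1 \mid m = 1] = \frac{10}{1+\lambda},
\]

so with $\lambda = 1/4$ a recommendation for action 1 is worth only 8, while a recommendation for action 2, which is never sent falsely under this strategy, is worth the full 10. A decision maker whose outside option falls between 8 and 10 accepts only the recommendation for the less incentivized action, which is exactly the tradeoff between a higher payoff if accepted and a lower probability of acceptance.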
Third, when the prior distribution of values is asymmetric so that the decision maker is more impressed by a recommendation for one of the actions, the expert benefits by "pandering" to the decision maker and recommending that action even when the other action is better (Che, Dessein, and Kartik, 2013). Hence biased recommendations can result even when the expert's incentives for either action are the same. For instance, if it is known that one of the actions is for some reason preferred by the decision maker, then the expert has a better chance of getting a favorable action from the decision maker by recommending that action. The decision maker anticipates such pandering and, just as in the asymmetric incentives case, discounts a recommendation for that action. In our experimental results experts are significantly more likely to lie in favor of the preferred action, and decision makers are significantly more likely to discount such a recommendation than when the prior distribution is symmetric.
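A hypothetical asymmetric prior makes the pandering incentive concrete. Let $\Pr(\theta = 1) = 0.6$ and compare two mirror-image strategies, each misrecommending one action with probability $\lambda = 0.25$ when the other is good:

\[
\Pr(\theta = 1 \mid m = 1) = \frac{0.6}{0.6 + 0.4(0.25)} \approx 0.86, \qquad \Pr(\theta = 2 \mid m = 2) = \frac{0.4}{0.4 + 0.6(0.25)} \approx 0.73.
\]

The same intensity of lying leaves a recommendation for the ex ante favored action more persuasive, so even an expert with equal incentives leans toward recommending it, and in equilibrium the decision maker discounts that recommendation accordingly.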
Finally, we extend the model to examine the impact of transparency of expert incentives on communication. The expert might be a biased expert with a higher incentive for one action, or an unbiased expert with an equal incentive for either action, and the decision maker does not know which. Since a recommendation for the action favored by the biased expert is more suspicious and hence discounted by the decision maker, we find that an unbiased expert has an incentive to lie by recommending the opposite action from that favored by the biased expert. For instance, if a salesperson is suspected to benefit more from selling one product than another product but in fact has equal incentives, then pushing the other product is more likely to generate a sale. Or if an unbiased newspaper is perceived to have a possible liberal bias, then recommending to readers a more conservative rather than more liberal policy increases the odds of a policy recommendation being accepted. In our experimental results we find that, as predicted, both biased and unbiased experts are significantly more likely to lie than if incentives are symmetric or if incentives are biased and transparent. However, we find that decision makers do not sufficiently discount recommendations in the opposite direction of the suspected bias, suggesting that they do not fully anticipate how lack of transparency warps the incentives of even unbiased experts.
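A minimal numerical sketch of our own (with assumed parameters) shows why truthtelling by the unbiased type unravels. Suppose the decision maker believes the expert is biased toward action 1 with probability $\mu = 1/2$, the biased type always recommends action 1, the unbiased type is truthful, and values are as before (good action worth 10, bad worth 0, equal priors). Then

\[
\Pr(\theta = 2 \mid m = 2) = 1, \qquad \Pr(\theta = 1 \mid m = 1) = \frac{\Pr(\theta = 1)}{\mu + (1-\mu)\Pr(\theta = 1)} = \frac{1/2}{3/4} = \frac{2}{3},
\]

so $E[v_1 \mid m = 1] = 20/3 \approx 6.7$ while $E[v_2 \mid m = 2] = 10$. For outside options between these two values only $m = 2$ is accepted, and since the expert is paid whenever either action is taken, an unbiased expert who knows action 1 is better still prefers to recommend action 2. Truthtelling therefore fails as an equilibrium, and the unbiased type lies against the direction of the suspected bias, which is the political correctness effect the experiment tests.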
This result on transparency is closely related to the Morris (2001) analysis of how an unbiased expert has a "political correctness" concern to avoid making the same recommendation as a biased expert so as to maintain a reputation for not being biased. Similarly, discounting based on the expert's relative incentives is related to Sobel's (1985) result that a decision maker discounts a recommendation based on the strength of the expert's incentive to push the action relative to the reputational costs. And pandering based on the decision maker's preferences is related to Gentzkow and Shapiro's (2006) result that an expert panders to the decision maker's prior beliefs to make the decision maker more trusting of the expert for future advice. These classic results assume a binary decision in a repeated context, but in our approach qualitatively similar results hold in a simple one-period model without reputational concerns. Because there are two actions in addition to the option to not take either action, there is an immediate cost to being thought of as biased: the recommendation is less persuasive, so the decision maker might take no action. Hence the simple one-period model we test not only captures key insights from the recent literature, it also captures insights from these earlier models that address related concepts.6

6 Perhaps because of the extra complexity of implementing reputational effects in the laboratory, we are not aware of other experimental papers that test these concepts.

Recommendation games are also related to the early literature on credence goods, which examines recommendations to buy a cheap or expensive version of a product (Darby and Karni, 1973). Building on the Pitchik and Schotter (1987) model, De Jaegher and Jegers (2001) consider a doctor who recommends either a cheap or expensive treatment to a patient whose condition is severe or not, where an expensive treatment works for both conditions but a cheap treatment works only if the condition is not severe.7 For some parameter ranges their model has a mixed strategy equilibrium with aspects of pandering and discounting, since a cheap treatment is more appealing to the patient and an expensive treatment is more lucrative for the doctor.
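The structure of that credence-good setup can be summarized in a small table (our paraphrase, not the authors' notation):

\[
\begin{array}{l|cc}
& \text{condition not severe} & \text{condition severe} \\ \hline
\text{cheap treatment} & \text{works} & \text{fails} \\
\text{expensive treatment} & \text{works} & \text{works}
\end{array}
\]

The patient prefers the cheap treatment when it suffices while the doctor earns more from the expensive one, which is what generates a mixed strategy equilibrium combining pandering and discounting.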
Recommending the wrong action in our game has a natural interpretation as a lie. Based on long-established results from the experimental literature on communication games we expect subjects to be reluctant to lie,8 and based on recent research we also expect heterogeneity in the strength of this aversion across subjects (Gibson, Tanner, and Warner, 2013). Therefore, to capture this behavior and to make the model suitable for experimental testing, we depart from a "pure" cheap talk approach and assume that experts are lying averse with a lying cost