Working Paper

Similarity and Consistency in Algorithm-Guided Exploration

Yongping Bao, Ludwig Danwitz, Fabian Dvorak, Sebastian Fehrler, Lars Hornuf, Hsuan Yu Lin, Bettina von Helversen
CESifo, Munich, 2022

CESifo Working Paper No. 10188

Abstract

Algorithm-based decision support systems play an increasingly important role in decisions involving exploration tasks, such as product searches, portfolio choices, and human resource procurement. These tasks often involve a trade-off between exploration and exploitation, and how this trade-off is resolved can depend strongly on individual preferences. In an online experiment, we study whether participants' willingness to follow the advice of a reinforcement learning algorithm depends on the fit between their own exploration preferences and the algorithm's advice. We vary the weight that the algorithm places on exploration relative to exploitation, and we model participants' decision-making processes with a learning model comparable to the algorithm's. This allows us to measure the degree to which participants' willingness to accept the algorithm's advice depends on the weight it places on exploration and on the similarity between the exploration tendencies of the algorithm and the participant. We find that the algorithm's advice affects and improves participants' choices in all treatments. However, the degree to which participants are willing to follow the advice depends heavily on the algorithm's exploration tendency. Participants are more likely to follow an algorithm that is more exploitative than they are, possibly interpreting the algorithm's relative consistency over time as a signal of expertise. Similarity between human choices and the algorithm's recommendations does not increase participants' willingness to follow the recommendations. Hence, our results suggest that the consistency of an algorithm's recommendations over time is key to inducing people to follow algorithmic advice in exploration tasks.
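The abstract does not specify the paper's learning model or how the exploration weight enters the algorithm's policy; the sketch below is only one illustrative way such a weight can be parameterized in a multi-armed bandit recommender (a mean tracker with an uncertainty bonus scaled by the assumed parameter w_explore). All function names, update rules, and numerical values here are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Illustrative sketch: a bandit recommender whose appetite for exploration is
# governed by a single weight. Setting w_explore = 0 yields purely exploitative
# advice; larger values favor under-sampled, uncertain arms.
rng = np.random.default_rng(0)

n_arms = 4
true_means = rng.normal(0.0, 1.0, n_arms)   # hypothetical arm payoffs
obs_sd = 0.5                                 # hypothetical reward noise

def recommend(mu, sigma, w_explore):
    """Recommend the arm maximizing estimated value plus a weighted uncertainty bonus."""
    return int(np.argmax(mu + w_explore * sigma))

def update(mu, var, arm, reward, obs_var=obs_sd**2):
    """Kalman-style update of the chosen arm's posterior mean and variance."""
    gain = var[arm] / (var[arm] + obs_var)
    mu[arm] += gain * (reward - mu[arm])
    var[arm] *= (1.0 - gain)

# Compare the advice stream of a more exploitative vs. a more explorative algorithm.
for w in (0.0, 1.5):
    mu, var = np.zeros(n_arms), np.ones(n_arms)
    choices = []
    for t in range(50):
        arm = recommend(mu, np.sqrt(var), w)
        reward = rng.normal(true_means[arm], obs_sd)
        update(mu, var, arm, reward)
        choices.append(arm)
    print(f"w_explore={w}: distinct arms recommended = {len(set(choices))}")
```

Under these assumptions, the low-weight algorithm quickly settles on one arm and issues consistent recommendations, while the high-weight algorithm keeps switching among arms, which mirrors the exploitative-versus-explorative contrast the experiment manipulates.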

CESifo Category
Behavioural Economics
Economics of Digitization
Keywords: algorithms, decision support systems, recommender systems, advice-taking, multi-armed bandit, search, exploration-exploitation, cognitive modeling
JEL Classification: C910, D830