A longitudinal approach for understanding algorithm use

Date
2022
Publisher
Wiley
Abstract
Research suggests that algorithms (based on artificial intelligence or linear regression models) make better predictions than humans in a wide range of domains. Several studies have examined the degree to which people use algorithms. However, these studies have been mostly cross-sectional and thus have failed to address the dynamic nature of algorithm use. In the present paper, we examined algorithm use with a novel longitudinal approach outside the lab. Specifically, we conducted two ecological momentary assessment studies in which 401 participants made financial predictions in two tasks over 18 days. Relying on the judge-advisor system framework, we examined how time interacted with advice source (human vs. algorithm) and advisor accuracy to predict advice taking. Our results showed that when the advice was inaccurate, people used algorithm advice less than human advice across the period studied. Inaccurate algorithms were penalized logarithmically: the effect was strong at first and faded over time. This suggests that first impressions are crucial, producing large changes in advice taking early in the interaction that stabilize as the days go by. Inaccurate algorithms are therefore more likely than inaccurate humans to accrue a negative reputation, even when the two perform at the same level.
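In judge-advisor system studies, advice taking is commonly operationalized as the weight of advice (WOA). The abstract does not state the authors' exact measure, so the following is the standard formulation from that literature rather than a confirmed detail of this paper:

\[
\mathrm{WOA} = \frac{F - I}{A - I}
\]

where \(I\) is the judge's initial estimate, \(A\) the advisor's recommendation, and \(F\) the judge's final estimate; a WOA of 0 means the advice was ignored and 1 means it was fully adopted.

As a rough illustration of the kind of analysis the abstract describes (a logarithmic effect of time interacting with advice source and advisor accuracy, with repeated measures per participant), here is a minimal Python sketch on simulated data. The column names, the mixed-effects specification, and the data themselves are assumptions for illustration, not the authors' code or results:

```python
# Minimal sketch (not the authors' analysis): a mixed-effects model of
# advice taking with a source x accuracy x log(day) interaction and
# random intercepts per participant, fit to simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_days = 40, 18  # hypothetical sample, 18 daily sessions
n = n_participants * n_days

df = pd.DataFrame({
    "participant": np.repeat(np.arange(n_participants), n_days),
    "day": np.tile(np.arange(1, n_days + 1), n_participants),
    "source": rng.choice(["human", "algorithm"], size=n),
    "accurate": rng.choice([0, 1], size=n),
})
# Simulated weight-of-advice outcome, bounded to [0, 1].
df["woa"] = np.clip(rng.normal(0.4, 0.2, size=n), 0, 1)

# np.log(day) encodes the abstract's pattern: a penalty that is
# strongest early in the interaction and fades over time.
model = smf.mixedlm(
    "woa ~ source * accurate * np.log(day)",
    data=df,
    groups=df["participant"],
).fit()
print(model.summary())
```

On real data, a negative source x accuracy x log(day) coefficient for the algorithm condition would correspond to the reported pattern: inaccurate algorithms losing influence sharply at first, with the decline leveling off over the 18 days.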
Keywords
Advice, Algorithm appreciation, Algorithm aversion, Algorithms, Decision making, Forecasting, Decision-making, Multimodel inference, Model selection, Trust