Author | Chacón Hiriart, Álvaro Marcelo; Kausel, Edgar E.; Reyes Torres, Tomas Hernan |
Title | A longitudinal approach for understanding algorithm use |
Journal | Journal of Behavioral Decision Making |
ISSN | 0894-3257 |
Electronic ISSN | 1099-0771 |
Volume | 35 |
Issue | 4 |
Article number | e2275 |
Publication date | 2022 |
Abstract | Research suggests that algorithms (based on artificial intelligence or linear regression models) make better predictions than humans in a wide range of domains. Several studies have examined the degree to which people use algorithms. However, these studies have been mostly cross-sectional and thus have failed to address the dynamic nature of algorithm use. In the present paper, we examined algorithm use with a novel longitudinal approach outside the lab. Specifically, we conducted two ecological momentary assessment studies in which 401 participants made financial predictions for 18 days in two tasks. Relying on the judge-advisor system framework, we examined how time interacted with advice source (human vs. algorithm) and advisor accuracy to predict advice taking. Our results showed that when the advice was inaccurate, people tended to use algorithm advice less than human advice across the period studied. Inaccurate algorithms were penalized logarithmically; the effect was initially strong but tended to fade over time. This suggests that first impressions are crucial and produce significant changes in advice taking at the beginning of the interaction, which then stabilizes as days go by. Therefore, inaccurate algorithms are more likely to accrue a negative reputation than inaccurate humans, even when both have the same level of performance. |
Rights | restricted access |
Funding agency | Fondo Nacional de Desarrollo Cientifico y Tecnologico |
DOI | 10.1002/bdm.2275 |
Publisher | WILEY |
Link | |
Scopus publication ID | SCOPUS_ID:85123488500 |
WoS publication ID | WOS:000746449600001 |
Pagination | 15 pages |
Keywords | Advice; Algorithm appreciation; Algorithm aversion; Algorithms; Decision making; Forecasting; Decision-making; Multimodel inference; Model selection; Trust |
SDG theme | 03 Good Health and Well-being |
SDG theme (Spanish) | 03 Salud y bienestar |
Subject | Technology |
Document type | article |