How should intelligent agents apologize to restore trust? Interaction effects between anthropomorphism and apology attribution on trust repair

Authors

Abstract

• Investigated the trajectory of a fintech AI agent's trust repair process.
• An interaction effect of anthropomorphism and apology attribution on trust repair was found.
• For a human-like agent, an apology with internal attribution repaired trust better.
• For a machine-like agent, an apology with external attribution repaired trust better.
• A computer-like agent apologizing with internal attribution was most harmful to trust repair.

Trust is essential in individuals' perception, behavior, and evaluation of intelligent agents. Because it is a primary motive for people to accept new technology, repairing trust when it is damaged is crucial. This study investigated how intelligent agents should apologize to recover trust, comparing the effectiveness of different apologies based on two seemingly competing frameworks: the Computers-Are-Social-Actors paradigm and automation bias. A 2 (agent: human-like vs. machine-like) × 2 (apology attribution: internal vs. external) between-subjects experiment was conducted (N = 193) in the context of the stock market. Participants were presented with a scenario in which they make investment choices based on an artificial intelligence agent's advice. To capture the initial trust-building, trust violation, and trust repair process, we designed a game consisting of five rounds of eight investment decisions (40 in total). The results show that trust was repaired more efficiently when a human-like agent apologized with internal rather than external attribution. However, the opposite pattern was observed among participants who interacted with machine-like agents; for them, the external attribution condition showed better trust repair. Both theoretical and practical implications are discussed.
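To make the reported design concrete, below is a minimal sketch (in Python, on simulated data) of how a 2 × 2 anthropomorphism × apology-attribution interaction could be tested with a factorial ANOVA. The column names, cell sizes, and trust scale here are hypothetical illustrations, not the study's actual materials or analysis script.

```python
# Illustrative sketch (not the authors' analysis): a 2 x 2 between-subjects
# test of the anthropomorphism x apology-attribution interaction on trust.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
n_per_cell = 48  # roughly N = 193 split across the four cells

rows = []
for agent in ["human_like", "machine_like"]:
    for attribution in ["internal", "external"]:
        # Simulated post-apology trust scores on a 1-7 scale.
        trust = rng.normal(loc=4.0, scale=1.0, size=n_per_cell).clip(1, 7)
        rows.append(pd.DataFrame({
            "agent": agent,
            "attribution": attribution,
            "trust": trust,
        }))
data = pd.concat(rows, ignore_index=True)

# Full factorial OLS; the C(agent):C(attribution) term carries the
# interaction effect the abstract reports.
model = smf.ols("trust ~ C(agent) * C(attribution)", data=data).fit()
print(anova_lm(model, typ=2))
```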


Similar articles

How Much Should We Trust Differences-in-Differences Estimates?

Most papers that employ Differences-in-Differences estimation (DD) use many years of data and focus on serially correlated outcomes but ignore that the resulting standard errors are inconsistent. To illustrate the severity of this issue, we randomly generate placebo laws in state-level data on female wages from the Current Population Survey. For each law, we use OLS to compute the DD estimate o...
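A hedged illustration of the placebo exercise described above is sketched below, using simulated state-year data rather than the CPS sample. It contrasts conventional with state-clustered standard errors for a randomly assigned "law"; all variable names and parameter values are hypothetical.

```python
# Sketch of a placebo differences-in-differences estimate on simulated data:
# serially correlated state shocks make conventional OLS standard errors
# misleadingly small relative to state-clustered ones.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
states, years = 50, 21
panel = []
for s in range(states):
    ar_shock = 0.0
    for t in range(years):
        # AR(1) state-level shocks: the source of the serial correlation.
        ar_shock = 0.8 * ar_shock + rng.normal(scale=0.5)
        panel.append({"state": s, "year": t, "log_wage": 2.0 + 0.01 * t + ar_shock})
df = pd.DataFrame(panel)

# Placebo law: half the states "treated" from a mid-sample year onward.
treated = rng.choice(states, size=states // 2, replace=False)
df["law"] = (df["state"].isin(treated) & (df["year"] >= 10)).astype(int)

formula = "log_wage ~ law + C(state) + C(year)"
naive = smf.ols(formula, data=df).fit()
clustered = smf.ols(formula, data=df).fit(cov_type="cluster",
                                          cov_kwds={"groups": df["state"]})
print("DD estimate:", round(naive.params["law"], 3))
print("conventional SE:", round(naive.bse["law"], 3),
      "| state-clustered SE:", round(clustered.bse["law"], 3))
```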


The Mind in the Machine: Anthropomorphism Increases Trust in an Autonomous Vehicle

Sophisticated technology is increasingly replacing human minds to perform complicated tasks in domains ranging from medicine to education to transportation. We investigated an important theoretical determinant of people's willingness to trust such technology to perform competently—the extent to which a nonhuman agent is anthropomorphized with a humanlike mind—in a domain of practical importance...


Calibrating Trust to Integrate Intelligent Agents into Human Teams

...require that information and resources from distributed sources be exchanged and fused because no one individual or service has the collective expertise, information, or resources required. We are at the beginning of a major research program aimed at effectively incorporating intelligent agents into human teams. Our initial experiments use a low fidelity simulation of a target identifica...


How Much Should We Trust Estimates from Multiplicative Interaction Models? Simple Tools to Improve Empirical Practice

Regressions with multiplicative interaction terms are widely used in the social sciences to test whether the relationship between an outcome and an independent variable changes depending on a moderator. Despite much advice on how to use interaction models, two important problems are currently overlooked in empirical practice. First, multiplicative interaction models are based on the crucial ass...
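The core point about multiplicative terms can be shown with a short, hypothetical sketch on simulated data: in a model Y = b0 + b1·X + b2·D + b3·X·D, the effect of X is b1 + b3·D, so it has to be evaluated at specific moderator values rather than read off the lower-order coefficient alone.

```python
# Illustrative sketch (simulated data): conditional marginal effects of X
# in a multiplicative interaction model, evaluated at several moderator values.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 500
X = rng.normal(size=n)
D = rng.uniform(0, 1, size=n)          # continuous moderator
Y = 1.0 + 0.5 * X - 0.2 * D + 1.5 * X * D + rng.normal(scale=1.0, size=n)
df = pd.DataFrame({"Y": Y, "X": X, "D": D})

fit = smf.ols("Y ~ X * D", data=df).fit()
b = fit.params
for d in (0.1, 0.5, 0.9):
    marginal = b["X"] + b["X:D"] * d   # effect of X conditional on D = d
    print(f"effect of X at D={d}: {marginal:.2f}")
```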


How to trust systems

The owners and users of distributed systems need to trust components of the system from a security point of view. In this paper we investigate the possible methods for establishing trust in the security features of an IT product or system.



Journal

Journal title: Telematics and Informatics

Year: 2021

ISSN: 0736-5853, 1879-324X

DOI: https://doi.org/10.1016/j.tele.2021.101595