Published January 1, 2022 | Version v1
Conference paper · Open Access

Risk and Exposure of XAI in Persuasion and Argumentation: The case of Manipulation

  • 1. Luxembourg Institute of Science and Technology (LIST), Esch-sur-Alzette, Luxembourg
  • 2. University of Applied Sciences Western Switzerland, Delémont, Switzerland

Description

Over the last decades, Artificial Intelligence (AI) systems have been increasingly adopted in assistive (and possibly collaborative) decision-making tools. In particular, AI-based persuasive technologies are designed to steer or influence users' behaviour, habits, and choices so as to facilitate the achievement of their own (predetermined) goals. Nowadays, the inputs received by such assistive systems rely heavily on AI data-driven approaches. It is therefore imperative that both the recommendations and the process leading to them be transparent and understandable to the user. The Explainable AI (XAI) community has progressively contributed to "opening the black box", ensuring the effectiveness of the interaction and pursuing the safety of the individuals involved. However, principles and methods ensuring the efficacy of explanations and the retention of their information by the human have not yet been introduced. The risk is to underestimate the context dependency and subjectivity of how explanations are understood, interpreted, and judged relevant. Moreover, even a plausible (and possibly expected) explanation can lead to an imprecise or incorrect outcome, or to a misunderstanding of it. This can produce unbalanced and unfair circumstances, such as a financial advantage for the system owner/provider to the detriment of the user.
