Paper Title
Designing for Responsible Trust in AI Systems: A Communication Perspective
Paper Authors
Paper Abstract
Current literature and public discourse on "trust in AI" are often focused on the principles underlying trustworthy AI, with insufficient attention paid to how people develop trust. Given that AI systems differ in their level of trustworthiness, two open questions come to the fore: how should AI trustworthiness be responsibly communicated to ensure appropriate and equitable trust judgments by different users, and how can we protect users from deceptive attempts to earn their trust? We draw from communication theories and literature on trust in technologies to develop a conceptual model called MATCH, which describes how trustworthiness is communicated in AI systems through trustworthiness cues and how those cues are processed by people to make trust judgments. Besides AI-generated content, we highlight transparency and interaction as AI systems' affordances that present a wide range of trustworthiness cues to users. By bringing to light the variety of users' cognitive processes to make trust judgments and their potential limitations, we urge technology creators to make conscious decisions in choosing reliable trustworthiness cues for target users and, as an industry, to regulate this space and prevent malicious use. Towards these goals, we define the concepts of warranted trustworthiness cues and expensive trustworthiness cues, and propose a checklist of requirements to help technology creators identify appropriate cues to use. We present a hypothetical use case to illustrate how practitioners can use MATCH to design AI systems responsibly, and discuss future directions for research and industry efforts aimed at promoting responsible trust in AI.