Paper Title
Detecting Relevance during Decision-Making from Eye Movements for UI Adaptation
Paper Authors
Paper Abstract
This paper proposes an approach to detect information relevance during decision-making from eye movements in order to enable user interface adaptation. This is a challenging task because gaze behavior varies greatly across individual users and tasks, and ground-truth data is difficult to obtain. Thus, prior work has mostly focused on simpler target-search tasks or on establishing general interest, where gaze behavior is less complex. From the literature, we identify six metrics that capture different aspects of gaze behavior during decision-making and combine them in a voting scheme. We empirically show that this accounts for the large variations in gaze behavior and outperforms standalone metrics. Importantly, it offers an intuitive way to control the amount of detected information, which is crucial for different UI adaptation schemes to succeed. We show the applicability of our approach by developing a room-search application that changes the visual saliency of content detected as relevant. In an empirical study, we show that it detects up to 97% of relevant elements with respect to user self-reporting, which allows us to meaningfully adapt the interface, as confirmed by participants. Our approach is fast, needs no explicit user input, and can be applied independently of task and user.
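The abstract's core idea, combining several gaze metrics in a voting scheme with a threshold that controls how much content is flagged as relevant, can be illustrated with a minimal sketch. The metric names, cutoffs, and scores below are hypothetical placeholders, not the paper's actual six metrics or parameter values.

```python
def vote_relevance(metric_scores, cutoffs, min_votes):
    """Flag a UI element as relevant when at least `min_votes` of the
    gaze metrics exceed their per-metric cutoff. Raising `min_votes`
    detects fewer elements; lowering it detects more."""
    votes = sum(
        1 for name, score in metric_scores.items()
        if score >= cutoffs[name]
    )
    return votes >= min_votes


# Hypothetical per-element metric scores (normalized to [0, 1]).
scores = {
    "dwell_time": 0.9,
    "fixation_count": 0.7,
    "revisits": 0.2,
    "pupil_dilation": 0.8,
}
cutoffs = {name: 0.5 for name in scores}

print(vote_relevance(scores, cutoffs, min_votes=3))  # → True
print(vote_relevance(scores, cutoffs, min_votes=4))  # → False
```

A voting scheme like this is robust to any single metric misfiring for a given user, which matches the abstract's point that per-user variation makes standalone metrics unreliable.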