Paper Title
CNN Explainer: Learning Convolutional Neural Networks with Interactive Visualization
Paper Authors
Paper Abstract
Deep learning's great success motivates many practitioners and students to learn about this exciting technology. However, it is often challenging for beginners to take their first step due to the complexity of understanding and applying deep learning. We present CNN Explainer, an interactive visualization tool designed for non-experts to learn and examine convolutional neural networks (CNNs), a foundational deep learning model architecture. Our tool addresses key challenges that novices face while learning about CNNs, which we identify from interviews with instructors and a survey with past students. CNN Explainer tightly integrates a model overview that summarizes a CNN's structure, and on-demand, dynamic visual explanation views that help users understand the underlying components of CNNs. Through smooth transitions across levels of abstraction, our tool enables users to inspect the interplay between low-level mathematical operations and high-level model structures. A qualitative user study shows that CNN Explainer helps users more easily understand the inner workings of CNNs, and is engaging and enjoyable to use. We also derive design lessons from our study. Developed using modern web technologies, CNN Explainer runs locally in users' web browsers without the need for installation or specialized hardware, broadening the public's education access to modern deep learning techniques.
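The "low-level mathematical operations" the abstract refers to center on the sliding-window convolution that CNN Explainer animates. The following is a minimal, hedged sketch of that operation (valid padding, stride 1) using NumPy; the function name `conv2d` and the example inputs are illustrative, not from the paper or the tool's code.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D convolution (no padding, stride 1):
    slide the kernel over the image and sum the elementwise products
    at each position -- the core operation a CNN's convolutional
    layers perform on input pixels and intermediate feature maps."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 4x4 input convolved with a 2x2 averaging kernel yields a 3x3 output.
image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.full((2, 2), 0.25)
print(conv2d(image, kernel))
```

Each output cell is one kernel placement, which is exactly the step-by-step view the tool's dynamic explanation connects back to the high-level layer structure.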