Wexler James, Pushkarna Mahima, Bolukbasi Tolga, Wattenberg Martin, Viégas Fernanda, Wilson Jimbo
IEEE Trans Vis Comput Graph. 2020 Jan;26(1):56-65. doi: 10.1109/TVCG.2019.2934619. Epub 2019 Aug 20.
A key challenge in developing and deploying Machine Learning (ML) systems is understanding their performance across a wide range of inputs. To address this challenge, we created the What-If Tool, an open-source application that allows practitioners to probe, visualize, and analyze ML systems, with minimal coding. The What-If Tool lets practitioners test performance in hypothetical situations, analyze the importance of different data features, and visualize model behavior across multiple models and subsets of input data. It also lets practitioners measure systems according to multiple ML fairness metrics. We describe the design of the tool, and report on real-life usage at different organizations.