Huichen Will Wang, Jane Hoffswell, Sao Myat Thazin Thane, Victor S Bursztyn, Cindy Xiong Bearfield
IEEE Trans Vis Comput Graph. 2025 Jan;31(1):536-546. doi: 10.1109/TVCG.2024.3456378. Epub 2024 Nov 25.
Large Language Models (LLMs) have been adopted for a variety of visualization tasks, but how far are we from perceptually aware LLMs that can predict human takeaways? The graphical perception literature has shown that human chart takeaways are sensitive to visualization design choices, such as spatial layouts. In this work, we examine the extent to which LLMs exhibit such sensitivity when generating takeaways, using bar charts with varying spatial layouts as a case study. We conducted three experiments and tested four common bar chart layouts: vertically juxtaposed, horizontally juxtaposed, overlaid, and stacked. In Experiment 1, we identified the optimal configurations for generating meaningful chart takeaways by testing four LLMs, two temperature settings, nine chart specifications, and two prompting strategies. We found that even state-of-the-art LLMs struggled to generate semantically diverse and factually accurate takeaways. In Experiment 2, we used the optimal configurations to generate 30 chart takeaways each for eight visualizations across four layouts and two datasets, in both zero-shot and one-shot settings. Compared to human takeaways, we found that the takeaways LLMs generated often did not match the types of comparisons made by humans. In Experiment 3, we examined the effect of chart context and data on LLM takeaways. We found that LLMs, unlike humans, exhibited variation in takeaway comparison types across different bar charts that used the same bar layout. Overall, our case study evaluates the ability of LLMs to emulate human interpretations of data and points to challenges and opportunities in using LLMs to predict human chart takeaways.
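For readers unfamiliar with this kind of setup, the sketch below illustrates how chart takeaways might be elicited from an LLM under zero-shot and one-shot prompting at a given temperature, as the abstract describes. It is a minimal illustration only: the model name, chart specification, prompts, and helper function are assumptions for exposition, not the authors' actual pipeline or materials.

```python
# Minimal sketch of eliciting chart takeaways from an LLM (illustrative only;
# the paper's actual models, prompts, and chart encodings are not reproduced here).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical textual specification of a vertically juxtaposed bar chart.
CHART_SPEC = (
    "Bar chart, vertically juxtaposed layout. "
    "Sales by quarter for products A and B: "
    "Q1 A=120 B=90, Q2 A=140 B=95, Q3 A=110 B=130, Q4 A=150 B=160."
)

# Hypothetical in-context example used only in the one-shot condition.
ONE_SHOT_EXAMPLE = (
    "Example takeaway for a similar chart: "
    "'Product A outsold Product B in the first half of the year.'"
)

def generate_takeaway(one_shot: bool = False, temperature: float = 0.7) -> str:
    """Ask the model for one natural-language takeaway from the chart."""
    prompt = f"Here is a chart: {CHART_SPEC}\nState one key takeaway."
    if one_shot:
        prompt = ONE_SHOT_EXAMPLE + "\n" + prompt
    response = client.chat.completions.create(
        model="gpt-4o",          # placeholder model name
        temperature=temperature, # the study varied temperature settings
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Zero-shot vs. one-shot elicitation, as contrasted in Experiment 2.
print(generate_takeaway(one_shot=False))
print(generate_takeaway(one_shot=True))
```

Repeating such calls (e.g., 30 per chart, as in Experiment 2) yields a sample of takeaways whose comparison types can then be coded and compared against human-written takeaways for the same charts.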