Inferring Visualization Intent from Conversation
Haotian Li*, Nithin Chalapathi*, Huamin Qu, et al.
In Proceedings of the 33rd ACM International Conference on Information and Knowledge Management, Boise, ID, USA, 2024
During visual data analysis, users often explore visualizations one at a time, with each visualization opening up new directions of exploration. We consider a conversational approach to visualization, where users specify their needs in natural language at each step and receive a visualization in return. Prior work has shown that visualization generation can be decomposed into two sub-problems: identifying the visualization intent and determining the visual encodings. Recognizing that the latter is a well-studied problem with standard solutions, we focus on the former, i.e., identifying visualization intent during conversation. We develop Luna, a framework comprising a novel combination of language models adapted from BERT and rule-based inference that together predict various aspects of visualization intent. We compare Luna with other conversational NL-to-visualization and NL-to-SQL approaches (adapted to visualization intent), including GPT-3.5 and GPT-4, and demonstrate that Luna achieves 14.3% higher accuracy than the prior state-of-the-art. We also apply Luna to a usage scenario on a dataset of police misconduct, showcasing its benefits relative to other approaches.
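To make the intent-versus-encoding decomposition concrete, here is a minimal, hypothetical sketch; it is not Luna's actual implementation, and every name, field, and matching rule below is illustrative. It shows the general idea of mapping a conversational utterance to a structured visualization intent, carrying context over from the previous turn, after which a standard encoding recommender could render the result:

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical structured form of visualization intent:
# *what* data to show, independent of *how* to encode it visually.
@dataclass
class VisualizationIntent:
    measure: Optional[str] = None                      # e.g., "COUNT(complaints)"
    dimension: Optional[str] = None                    # e.g., "precinct"
    filters: list[str] = field(default_factory=list)   # e.g., ["year >= 2015"]

def infer_intent(utterance: str, prior: VisualizationIntent) -> VisualizationIntent:
    """Toy stand-in for learned models plus rules: carry over the prior
    turn's intent and update only the aspects the new utterance mentions."""
    intent = VisualizationIntent(prior.measure, prior.dimension, list(prior.filters))
    if "per precinct" in utterance:
        intent.dimension = "precinct"
    if "since 2015" in utterance:
        intent.filters.append("year >= 2015")
    return intent

# Turn 1 establishes an intent; turn 2 refines it conversationally.
turn1 = infer_intent("show complaint counts per precinct",
                     VisualizationIntent(measure="COUNT(complaints)"))
turn2 = infer_intent("only since 2015", turn1)
print(turn2)
# VisualizationIntent(measure='COUNT(complaints)', dimension='precinct',
#                     filters=['year >= 2015'])
```

The second turn illustrates why conversational intent inference is harder than single-shot parsing: "only since 2015" is meaningless on its own and must be resolved against the intent from the previous turn.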