Grespan Mattia Medina, Broadbent Meghan, Zhang Xinyao, Axford Katherine E, Kious Brent, Imel Zac, Srikumar Vivek
Kahlert School of Computing, University of Utah.
Department of Educational Psychology, University of Utah.
Proc Conf Assoc Comput Linguist Meet. 2023 Jul;2023:11704-11722. doi: 10.18653/v1/2023.acl-long.654.
Ensuring the effectiveness of text-based crisis counseling requires observing ongoing conversations and providing feedback, both labor-intensive tasks. Automatic analysis of conversations, at both the full-chat and utterance levels, may help support counselors and provide better care. While some session-level training data (e.g., ratings of patient risk) is often available from counselors, labeling utterances requires expensive post hoc annotation. However, the latter not only provides insights into conversation dynamics but can also support quality assurance efforts for counselors. In this paper, we examine whether inexpensive, and potentially noisy, session-level annotation can help improve utterance labeling. To this end, we propose a logic-based indirect supervision approach that exploits declaratively stated structural dependencies between the two levels of annotation to improve utterance modeling. We show that adding these rules yields a 3.5% F-score improvement over a strong multi-task baseline for utterance-level predictions. We demonstrate via ablation studies how indirect supervision via logic rules also improves the consistency and robustness of the system.
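To illustrate the general idea of logic-based indirect supervision described in the abstract, the following is a minimal, hypothetical sketch of how a declaratively stated dependency between session-level and utterance-level labels could be relaxed into a differentiable auxiliary loss. The rule, the noisy-or relaxation, and all names and weights are illustrative assumptions, not the paper's actual formulation or code.

```python
# Hypothetical sketch (PyTorch): turning a session->utterance dependency rule
# into a soft-logic penalty added to a multi-task objective. Not the authors'
# implementation; the rule and relaxation are illustrative assumptions.
import torch

def rule_loss(utterance_probs: torch.Tensor, session_label: torch.Tensor) -> torch.Tensor:
    """Soft penalty for the rule:
       "if the session is annotated as high-risk, then at least one
        utterance should be predicted as risk-related."
    utterance_probs: (num_utterances,) predicted P(risk) for each utterance.
    session_label:   scalar in {0., 1.}; 1. means the (possibly noisy)
                     session-level annotation says high-risk.
    """
    # Noisy-or relaxation of the existential quantifier:
    # P(at least one risky utterance) = 1 - prod_i (1 - p_i)
    exists_risk = 1.0 - torch.prod(1.0 - utterance_probs)
    # Penalize only when the session label asserts risk but no utterance
    # prediction carries the risk signal.
    return -session_label * torch.log(exists_risk + 1e-8)

# Usage sketch: combine with the usual supervised losses of a multi-task model.
utterance_probs = torch.tensor([0.05, 0.10, 0.02])  # illustrative model outputs
session_label = torch.tensor(1.0)                   # noisy session-level annotation
aux = rule_loss(utterance_probs, session_label)
# total_loss = utterance_loss + session_loss + lambda_rule * aux
```

The design choice illustrated here, relaxing a logical constraint into a differentiable term, lets inexpensive session-level annotation influence utterance-level predictions without requiring per-utterance labels; the specific relaxation and rule set used in the paper may differ.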