Authors

  • Siddhant Arora*, Hayato Futami, Emiru Tsunoo, Brian Yan*, Shinji Watanabe*
  • * External authors

Company

  • Sony Group Corporation

Venue

  • ICASSP

Date

  • 2023

Joint Modelling of Spoken Language Understanding Tasks with Integrated Dialog History

Abstract

Most human interactions occur in the form of spoken conversations, where the semantic meaning of a given utterance depends on the context. Each utterance in a spoken conversation can be represented by many semantic and speaker attributes, and there has been growing interest in building Spoken Language Understanding (SLU) systems that automatically predict these attributes. Recent work has shown that incorporating dialog history can improve SLU performance. However, separate models are used for each SLU task, increasing inference time and computation cost. Motivated by this, we ask: can we jointly model all the SLU tasks while incorporating context to facilitate low-latency and lightweight inference? To answer this, we propose a novel model architecture that learns dialog context to jointly predict the intent, dialog act, speaker role, and emotion of a spoken utterance. Note that our joint prediction is based on an autoregressive model, so we must decide the prediction order of the dialog attributes, which is not trivial. To mitigate this issue, we also propose an order-agnostic training method. Our experiments show that our joint model achieves results comparable to task-specific classifiers and can effectively integrate dialog context to further improve SLU performance.
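The order-agnostic training idea can be illustrated with a short sketch: for each training example, the attribute labels are serialized in a randomly sampled order and scored with a standard autoregressive cross-entropy loss, so no fixed prediction order is baked into the model. The PyTorch code below is a minimal, hypothetical illustration only; `ToyDecoder`, the shared attribute-token vocabulary, and all shapes are assumptions for the sketch, not the paper's actual architecture.

```python
# Hypothetical sketch of order-agnostic training for joint SLU
# attribute prediction. All component names are illustrative.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

ATTRIBUTES = ["intent", "dialog_act", "speaker_role", "emotion"]
VOCAB = 32  # assumed size of a shared attribute-token vocabulary
BOS = 0     # beginning-of-sequence token id (assumption)

class ToyDecoder(nn.Module):
    """Stand-in autoregressive decoder conditioned on encoder states."""
    def __init__(self, dim=64):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, VOCAB)

    def forward(self, enc_states, tokens):
        # Condition on a pooled summary of the speech encoder states.
        h0 = enc_states.mean(dim=1, keepdim=True).transpose(0, 1)  # (1, B, dim)
        x, _ = self.rnn(self.embed(tokens), h0.contiguous())
        return self.out(x)  # (B, T, VOCAB)

def order_agnostic_loss(decoder, enc_states, labels):
    """Teacher-forced loss under a randomly permuted attribute order."""
    # Sample a fresh prediction order per example, so the model learns
    # to predict the attributes in any order rather than a fixed one.
    order = random.sample(ATTRIBUTES, len(ATTRIBUTES))
    target = torch.tensor([[labels[a] for a in order]])            # (1, T)
    inp = torch.cat([torch.tensor([[BOS]]), target[:, :-1]], dim=1)
    logits = decoder(enc_states, inp)
    return F.cross_entropy(logits.view(-1, VOCAB), target.view(-1))

# Usage with random stand-in encoder states and dummy labels:
dec = ToyDecoder()
enc = torch.randn(1, 10, 64)  # (batch, frames, dim) from a speech encoder
labels = {"intent": 3, "dialog_act": 7, "speaker_role": 1, "emotion": 5}
loss = order_agnostic_loss(dec, enc, labels)
loss.backward()
```

Because the permutation is resampled every step, the decoder sees all attribute orderings during training, which is one plausible reading of how an order-agnostic objective sidesteps the need to pick a single canonical prediction order.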
