INDIGO Home

Towards Modeling Collaborative Task Oriented Multimodal Human-human Dialogues

Show full item record

Bookmark or cite this item: http://hdl.handle.net/10027/18981

Files in this item

File, Format, Description
Chen_Lin.pdf (4 MB), PDF (no description provided)
1_Chen_Lin_iThenticate_revisions.txt (599 bytes), Text file (no description provided)
Chen_Lin_iThenticate_revisions.txt (599 bytes), Text file (no description provided)
Title: Towards Modeling Collaborative Task Oriented Multimodal Human-human Dialogues
Author(s): Chen, Lin
Advisor(s): Di Eugenio, Barbara
Contributor(s): Zefran, Milos; Leigh, Jason; Gmytrasiewicz, Piotr; Chai, Joyce Y.
Department / Program: Computer Science
Graduate Major: Computer Science
Degree Granting Institution: University of Illinois at Chicago
Degree: PhD, Doctor of Philosophy
Genre: Doctoral
Subject(s): Multimodal Dialogue System; Coreference Resolution; Dialogue Act Classification
Abstract: This research took place in the larger context of building effective multimodal interfaces to help elderly people live independently. The final goal was to build a dialogue manager that could be deployed on a robot. The robot would help elderly people perform Activities of Daily Living (ADLs), such as cooking dinner and setting a table. In particular, I focused on building dialogue processing modules to understand such multimodal dialogues. Specifically, I investigated the functions of gestures (e.g., pointing gestures and Haptic-Ostensive actions, which involve force exchange) in dialogues concerning collaborative tasks in ADLs. This research employed an empirical approach: the machine-learning-based modules were built using collected human experiment data. The ELDERLY-AT-HOME corpus was built from a data collection of human-human collaborative interactions in the elderly-care domain. Multiple categories of annotations were then added to build the Find corpus, which contained only the experiment episodes in which two subjects collaboratively searched for objects (e.g., a pot or a spoon), a task essential to performing ADLs. This research developed three main modules: coreference resolution, Dialogue Act classification, and task state inference. The coreference resolution experiments showed that modalities other than language play an important role in bringing antecedents into the dialogue context. The Dialogue Act classification experiments showed that multimodal features, including gestures, Haptic-Ostensive actions, and subject location, significantly improve accuracy; they also showed that dialogue games improve performance even when the dialogue games are inferred dynamically. A heuristic rule-based task state inference system that uses the results of Dialogue Act classification and coreference resolution was designed and evaluated; the experiments showed reasonably good results.
Compared to previous work, the contributions of this research are as follows: 1) it built a multimodal corpus focusing on human-human collaborative task-oriented dialogues; 2) it investigated coreference resolution from language to objects in the real world; 3) it experimented with Dialogue Act classification using utterances, gestures, and Haptic-Ostensive actions; and 4) it implemented and evaluated a task state inference system.
Issue Date: 2014-10-28
Genre: thesis
URI: http://hdl.handle.net/10027/18981
Rights Information: Copyright 2014 Lin Chen
Date Available in INDIGO: 2014-10-28
Date Deposited: 2014-08
 



Statistics

Country, Views
United States of America 166
China 147
Russian Federation 29
Ukraine 18
Germany 7
