Utilization of multimodal interaction signals for automatic summarisation of academic presentations
Curtis, Keith
Multimedia archives are expanding rapidly, but there is a shortage of retrieval and summarisation techniques for accessing and browsing content whose main information lies in the audio stream. This thesis describes an investigation into the development of novel feature extraction and summarisation techniques for audio-visual recordings of academic presentations.

We report on the development of a multimodal dataset of academic presentations, labelled by human annotators for presentation ratings, audience engagement levels, speaker emphasis, and audience comprehension. We investigate the automatic classification of speaker ratings and audience engagement by extracting audio-visual features from video of the presenter and audience and training classifiers to predict speaker ratings and engagement levels. Following this, we investigate the automatic identification of areas of emphasised speech. By analysing all human-annotated areas of emphasised speech, minimum speech pitch and gesticulation, when occurring together, are identified as indicators of emphasis.

We then investigate the speaker's potential to be comprehended by the audience. Following crowdsourced annotation of comprehension levels during academic presentations, a set of audio-visual features considered most likely to affect comprehension is extracted. Classifiers trained on these features predict comprehension levels to an accuracy of 49% over a 7-class scale, and of 85% over a binary distribution.

Presentation summaries are built by segmenting speech transcripts into phrases and ranking segments using keywords extracted from the transcripts in conjunction with extracted paralinguistic features; the highest-ranking segments are then extracted to build the summary. Summaries are evaluated by performing eye-tracking experiments as participants watch presentation videos. Participants were found to be consistently more engaged with presentation summaries than with full presentations, and summaries were found to contain a higher concentration of new information than full presentations.
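The summarisation approach described in the abstract (segment the transcript, score segments by extracted keywords together with paralinguistic features, keep the highest-ranking segments) can be sketched roughly as follows. This is a minimal illustrative sketch, not the thesis's actual implementation: the function name `rank_segments`, the additive scoring, the weights, and the pre-computed per-segment emphasis scores are all assumptions made for the example.

```python
def rank_segments(segments, keywords, emphasis_scores, top_k=2,
                  keyword_weight=1.0, emphasis_weight=1.0):
    """Return the top_k segments by combined keyword/emphasis score,
    preserving their original order in the transcript.

    segments        -- list of transcript phrases (strings)
    keywords        -- keywords extracted from the transcript
    emphasis_scores -- one paralinguistic emphasis score per segment
                       (assumed here to be pre-computed elsewhere)
    """
    scored = []
    for idx, (text, emphasis) in enumerate(zip(segments, emphasis_scores)):
        words = set(text.lower().split())
        # Count how many extracted keywords appear in this segment.
        kw_score = sum(1 for kw in keywords if kw.lower() in words)
        total = keyword_weight * kw_score + emphasis_weight * emphasis
        scored.append((total, idx))
    # Pick the top_k by score, then restore transcript order.
    chosen = sorted(sorted(scored, reverse=True)[:top_k], key=lambda s: s[1])
    return [segments[idx] for _, idx in chosen]


segments = ["we introduce the dataset",
            "our classifier predicts engagement levels",
            "thanks for listening"]
keywords = ["classifier", "engagement"]
emphasis = [0.2, 0.9, 0.1]
summary = rank_segments(segments, keywords, emphasis, top_k=1)
# → ["our classifier predicts engagement levels"]
```

In this toy run the second segment wins both on keyword matches (2) and emphasis (0.9), so it alone forms the one-segment summary; sorting the chosen indices afterwards keeps multi-segment summaries in transcript order.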
Keyword(s): Interactive computer systems; Multimedia systems; Image processing; Digital video; Information retrieval; Video summarisation; Feature classification; Evaluation; Eye tracking
Publication Date: 2018
Type: Other
Peer-Reviewed: Unknown
Language(s): English
Contributor(s): Jones, Gareth; Campbell, Nick
Institution: Dublin City University
Citation(s): Curtis, Keith (2018) Utilization of multimodal interaction signals for automatic summarisation of academic presentations. PhD thesis, Dublin City University.
Publisher(s): Dublin City University. School of Computing
File Format(s): application/pdf
Related Link(s): http://doras.dcu.ie/22411/1/Thesis.pdf
First Indexed: 2018-11-17 06:05:37 Last Updated: 2019-02-09 06:09:21