Limited By: Author = Cullen, Charlie
40 items found
Displaying Results 1 - 25 of 40 on page 1 of 2
A Crowdsourcing Approach to Labelling a Mood Induced Speech Corpus
(2012)
Snel, John; Tarasov, Alexey; Cullen, Charlie; Delany, Sarah Jane
Abstract:
This paper demonstrates the use of crowdsourcing to accumulate ratings from naïve listeners as a means to provide labels for a naturalistic emotional speech dataset. In order to do so, listening tasks are performed with a rating tool, which is delivered via the web. The rating requirements are based on the classical dimensions, activation and evaluation, presented to the participant as two discretised 5-point scales. Great emphasis is placed on the participant’s overall understanding of the task, and on the ease-of-use of the tool so that labelling accuracy is reinforced. The accumulation process is ongoing with a goal to supply the research community with a publicly available speech corpus.
https://arrow.dit.ie/dmccon/97
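The abstract above describes collecting activation/evaluation ratings on two discretised 5-point scales. Purely as an illustration (not the paper's actual labelling pipeline), collapsing such crowdsourced ratings into a single mean label per speech asset might be sketched as follows; the function name `aggregate_ratings` and the use of a simple mean are assumptions:

```python
from statistics import mean

def aggregate_ratings(ratings):
    """Collapse crowdsourced (activation, evaluation) pairs, one per
    naive listener, into a mean label for one speech asset.
    Each rating is on a discretised 5-point scale."""
    activation = mean(r[0] for r in ratings)
    evaluation = mean(r[1] for r in ratings)
    return activation, evaluation

# three listeners rated the same asset
print(aggregate_ratings([(4, 2), (5, 1), (4, 3)]))
```

A real labelling scheme would also need to handle rater disagreement and outliers, which the simple mean glosses over.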
A Pilot Study of Comparison Gesture Analysis in Motion Driven Video Games
(2016)
Covone, Fabrizio Valerio; Vaughan, Brian; Cullen, Charlie
Abstract:
This study investigates whether there are significant differences in the gestures made by gamers and non-gamers whilst playing commercial games that employ gesture inputs. Specifically, the study focuses on testing a prototype of a multimodal capture tool that we used to obtain real-time audio, video and skeletal gesture data. Additionally, we developed an experimental design framework for the acquisition of spatio-temporal gesture data and analysed the vector magnitude of a gesture to compare the relative displacement of each participant whilst playing a game.
https://arrow.dit.ie/aaschmedcon/44
A VOWEL-STRESS EMOTIONAL SPEECH ANALYSIS METHOD
(2008)
Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros
Abstract:
The analysis of speech, particularly for emotional content, is an open area of current research. This paper documents the development of a vowel-stress analysis framework for emotional speech, which is intended to provide suitable assessment of the assets obtained in terms of their prosodic attributes. The consideration of different levels of vowel-stress provides means by which the salient points of a signal may be analysed in terms of their overall priority to the listener. The prosodic attributes of these events can thus be assessed in terms of their overall significance, in an effort to provide a means of categorising the acoustic correlates of emotional speech. The use of vowel-stress is performed in conjunction with the definition of pitch and intensity contours, alongside other micro-prosodic information relating to voice quality.
https://arrow.dit.ie/dmccon/109
Analysis of Data Sets Using Trio Sonification
(2004)
Cullen, Charlie; Coyle, Eugene
Abstract:
Recent advances in technology have suggested that sound and audio play a far greater part in our daily working lives than ever before. Mobile phone ring tones are now based upon polyphonic music sequences that allow relatively complex audio to be generated from a handset by way of conveying information (i.e. a call or message is incoming). This real world example of sonification suggests that far more could be made of sonification techniques for analysis, particularly in the business environment. One advantage of sonification is its relatively hands-free nature, in that once a sequence is being played it does not necessarily require further input from the user, and so the potential exists for applications that could deliver information while other tasks are being performed in tandem. For the definition of the basic principles of Trio sonification an application is being developed that will read in data sets of certain formats (.csv or .xml) and allow the various elements to be sonifie...
https://arrow.dit.ie/dmccon/24
Asynchronous Ultrasonic Trilateration for Indoor Positioning of Mobile Phones
(2012)
Filonenko, Viacheslav; Cullen, Charlie; Carswell, James
Abstract:
In this paper we discuss how the innate ability of mobile phone speakers to produce ultrasound can be used for accurate indoor positioning. The frequencies in question are in a range between 20 and 22 kHz, which is high enough to be inaudible by humans but still low enough to be generated by today’s mobile phone sound hardware. Our tests indicate that it is possible to generate the given range of frequencies without significant distortions, provided the signal volume is not turned up excessively high. In this paper we present and evaluate the accuracy of our asynchronous trilateration method (Lok8) for mobile positioning without requiring knowledge of the time the ultrasonic signal was sent. This approach shows that only the differences in time of arrival at multiple microphones (control points) placed throughout the indoor environment are sufficient. Consequently, any timing issues with client and server synchronization are avoided.
https://arrow.dit.ie/dmccon/94
CorpVis: an Online Emotional Speech Corpora Visualisation Interface
(2009)
Cullen, Charlie; Vaughan, Brian
Abstract:
Our research in emotional speech analysis has led to the construction of several dedicated high quality, online corpora of natural emotional speech assets. The requirements for querying, retrieval and organization of assets based on both their metadata descriptors and their analysis data led to the construction of a suitable interface for data visualization and corpus management. The CorpVis interface is intended to assist collaborative work between several speech research groups working with us in this area, allowing online collaboration and distribution of assets to be performed. This paper details the current CorpVis interface into our corpora, and the work performed to achieve this.
https://arrow.dit.ie/dmccon/19
Emotional Speech Corpora for Analysis and Media Production
(2008)
Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros
Abstract:
Research into the acoustic correlates of emotional speech as part of the SALERO project has led to the construction of high quality emotional speech corpora, which contain both IMDI metadata and acoustic analysis data for each asset. Research into semi-automated, re-usable character animation has considered the development of online workflows based on speech corpus assets that would provide a single point of origin for character animation in media production. In this paper, a brief description of the corpus design and construction is given. Further, a prototype workflow for semi-automated emotional character animation is also provided, alongside a description of current and future work.
https://arrow.dit.ie/dmccon/18
Emotional Speech Corpus Construction, Annotation and Distribution
(2008)
Vaughan, Brian; Cullen, Charlie; Kousidis, Spyros; McAuley, John
Abstract:
This paper details a process of creating an emotional speech corpus by collecting natural emotional speech assets, analysing and tagging them (for certain acoustic and linguistic features) and annotating them within an on-line database. The definition of specific metadata for use with an emotional speech corpus is crucial, in that poorly (or inaccurately) annotated assets are of little use in analysis. This problem is compounded by the lack of standardisation for speech corpora, particularly in relation to emotion content. The ISLE Metadata Initiative (IMDI) is the only cohesive attempt at corpus metadata standardisation performed thus far. Although not a comprehensive (or universally adopted) standard, IMDI represents the only current standard for speech corpus metadata available. The adoption of the IMDI standard allows the corpus to be re-used and expanded, in a clear and structured manner, ensuring its re-usability and usefulness as well as addressing issues of data-sparsity w...
https://arrow.dit.ie/dmccon/92
Emotional Speech Corpus Creation, Structure, Distribution and Re-Use
(2009)
Vaughan, Brian; Cullen, Charlie
Abstract:
This paper details the on-going creation of a natural emotional speech corpus, its structure, distribution, and re-use. Using Mood Induction Procedures (MIPs), high quality emotional speech assets are obtained, analysed, tagged (for acoustic features), annotated and uploaded to an online speech corpus. This method structures the corpus in a logical and coherent manner, allowing it to be utilized for more than one purpose, ensuring distribution via a URL and ease of access through a web browser.
https://arrow.dit.ie/aaconmuscon/9
Generation of High Quality Audio Natural Emotional Speech Corpus using Task Based Mood Induction
(2006)
Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros; Wang, Yi; McDonnell, Ciaran; Campbell, Dermot
Abstract:
Detecting emotional dimensions [1] in speech is an area of great research interest, notably as a means of improving human computer interaction in areas such as speech synthesis [2]. In this paper, a method of obtaining high quality emotional audio speech assets is proposed. The methods of obtaining emotional content are subject to considerable debate, with distinctions between acted [3] and natural [4] speech being made based on the grounds of authenticity. Mood Induction Procedures (MIPs) [5] are often employed to stimulate emotional dimensions in a controlled environment. This paper details experimental procedures based around MIP 4, using performance related tasks to engender activation and evaluation responses from the participant. Tasks are specified involving two participants, who must co-operate in order to complete a given task [6] within the allotted time. Experiments designed in this manner also allow for the specification of high quality audio assets (notably 24bit/192Kh...
https://arrow.dit.ie/dmccon/90
Harmonically Combined Contour Icons for Concurrent Auditory Display
(2006)
Cullen, Charlie; Coyle, Eugene
Abstract:
This paper considers the harmonic combination of basic melodic shapes known as contour icons in concurrent auditory displays. Existing work in the field (such as that concerning earcons) has considered the combination of patterns designed using low level cognitive features, and so effective streaming is difficult. This work investigates means by which musical patterns with high level cognitive features (such as contour) representing data values can be rendered concurrently, so that multiple data sets can be effectively conveyed using an auditory display. The detection and comprehension of harmonically combined contour icons was tested in comparison to those combined uniquely (non harmonically). Results suggest that significant improvement in pattern combination detection was made using harmonically combined contour icons, although limitations were observed due to the nature of the harmonic relations involved. Future work will investigate the most flexible methods of harmonic combina...
https://arrow.dit.ie/dmccon/30
Human Pattern Recognition in Data Sonification
(2016)
Cullen, Charlie; Coleman, William
Abstract:
Computational music analysis investigates the relevant features required for the detection and classification of musical content, features which do not always directly overlap with musical composition concepts. Human perception of music is also an active area of research, with existing work considering the role of perceptual schema in musical pattern recognition. Data sonification investigates the use of non-speech audio to convey information, and it is in this context that some potential guidelines for human pattern recognition are presented for discussion in this paper. Previous research into the role of musical contour (shape) in data sonification shows that it has a significant impact on pattern recognition performance, whilst investigation in the area of rhythmic parsing made a significant difference in performance when used to build structures in data sonifications. The paper presents these previous experimental results as the basis for a discussion around the potential for in...
https://arrow.dit.ie/fema/9
Indoor Positioning for Smartphones Using Asynchronous Ultrasound Trilateration
(2013)
Filonenko, Viacheslav; Cullen, Charlie; Carswell, James
Abstract:
Modern smartphones are a great platform for Location Based Services (LBS). While outdoor LBS for smartphones has proven to be very successful, indoor LBS for smartphones has not yet fully developed due to the lack of an accurate positioning technology. In this paper we present an accurate indoor positioning approach for commercial off-the-shelf (COTS) smartphones that uses the innate ability of mobile phones to produce ultrasound, combined with Time-Difference-of-Arrival (TDOA) asynchronous trilateration. We evaluate our indoor positioning approach by describing its strengths and weaknesses, and determine its absolute accuracy. This is accomplished through a range of experiments that involve variables such as position of control point microphones, position of phone within the room, direction speaker is facing and presence of user in the signal path. Test results show that our Lok8 (locate) mobile positioning system can achieve accuracies better than 10 cm in a real-world environment.
https://arrow.dit.ie/dmcart/52
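The abstract above states that Time-Difference-of-Arrival (TDOA) measurements at fixed control-point microphones suffice to position the phone, without knowing when the signal was sent. As an illustrative sketch only (not the Lok8 implementation), a brute-force 2D TDOA solver under assumed microphone positions might look like this; the names `tdoa_residual` and `locate`, the room geometry, and the grid search are all hypothetical:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, approximate value at room temperature

def tdoa_residual(p, mics, tdoas):
    """Sum of squared errors between predicted and measured TDOAs;
    all TDOAs are taken relative to the first microphone, so the
    emission time cancels out (the 'asynchronous' property)."""
    d = np.linalg.norm(mics - p, axis=1)
    pred = (d[1:] - d[0]) / SPEED_OF_SOUND
    return np.sum((pred - tdoas) ** 2)

def locate(mics, tdoas, extent=5.0, step=0.05):
    """Brute-force grid search over a square floor plan; a real
    system would use a closed-form or least-squares solver."""
    best, best_err = None, np.inf
    for x in np.arange(0, extent, step):
        for y in np.arange(0, extent, step):
            err = tdoa_residual(np.array([x, y]), mics, tdoas)
            if err < best_err:
                best, best_err = np.array([x, y]), err
    return best

# four control-point microphones in the corners of a 5 m x 5 m room
mics = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0], [5.0, 5.0]])
true_pos = np.array([1.2, 3.4])
d = np.linalg.norm(mics - true_pos, axis=1)
tdoas = (d[1:] - d[0]) / SPEED_OF_SOUND  # simulated arrival-time differences
print(locate(mics, tdoas))  # close to [1.2, 3.4]
```

The key point matching the abstract: only arrival-time *differences* enter the residual, so client/server clock synchronisation is unnecessary.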
Information Delivery on Mobile Devices Using Boolean Sonification Patterns
(2005)
Cullen, Charlie; Coyle, Eugene
Abstract:
Sonification is the means by which non-speech audio can be used to convey information. Existing work has produced methods for delivering information in a wide range of fields, and recent work has considered the huge potential of mobile devices for Sonification. Boolean Sonification is a method of defining two related musical patterns as boolean conditions (true/false, yes/no etc.), such that one is considered contrary to the other by the listener. The final pattern set ideally comprises two musical events that are closely enough related as to be considered a group, yet distinct enough to be perceived as separate entities. A Java user interface is under development to allow Sonification to be configured by the user on the handset itself. Live testing is currently being performed.
https://arrow.dit.ie/dmccon/27
Information Delivery on Mobile Devices Using Contour Icon Sonification
(2005)
Cullen, Charlie; Coyle, Eugene
Abstract:
This paper examines the use of musical patterns to convey information, specifically in the context of mobile devices. Existing mechanisms (such as the popularity of the Morse code SMS alert) suggest that the use of musical patterns on mobile devices can be a very efficient and powerful method of data delivery. Unique musical patterns based on templates known as Contour Icons are used to represent specific data variables, with the output rendering of these patterns being referred to as a Sonification of that data. Contour Icon patterns mimic basic shapes and structures, thus providing listeners with a means of categorising them in a high level manner. Potential Sonification applications involving mobile devices are already in testing, with the aim of delivering data to mobile users in a fast, efficient and hands-free manner. It is the goal of this research to provide greater functionality on mobile devices using Sonification.
https://arrow.dit.ie/dmccon/23
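The Contour Icon idea above maps data onto melodic shapes so listeners can categorise patterns by their contour. A minimal sketch of such a data-to-pitch mapping (hypothetical function `contour_pitches`, MIDI note output; not the authors' implementation) might be:

```python
def contour_pitches(values, base_note=60, span=12):
    """Map normalised data values onto a MIDI pitch range so that the
    melody's shape (contour) follows the shape of the data.
    base_note=60 is middle C; span=12 keeps the melody within one octave."""
    lo, hi = min(values), max(values)
    scale = span / (hi - lo) if hi != lo else 0  # flat data -> flat contour
    return [round(base_note + (v - lo) * scale) for v in values]

print(contour_pitches([0.0, 0.5, 1.0, 0.5, 0.0]))  # arch contour: [60, 66, 72, 66, 60]
```

Rendering these MIDI notes with a ring-tone synthesiser would then convey the rise-and-fall of the underlying data as a recognisable melodic shape.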
Investigating Ultrasonic Positioning on Mobile Phones
(2010)
Filonenko, Viacheslav; Cullen, Charlie; Carswell, James
Abstract:
In this paper we evaluate the innate ability of mobile phone speakers to produce ultrasound and the possible uses of this ability for accurate indoor positioning. The frequencies in question lie in a range between 20 and 22 kHz, which is high enough to be inaudible but low enough to be generated by standard sound hardware. A range of tones is generated at different volume settings on several popular modern mobile phones with the aim of finding points of failure. Our results indicate that it is possible to generate the given range of frequencies without significant distortions, provided the signal volume is not excessively high. This is preceded by the discussion of why such ability on off-the-shelf mobile devices is important for Location Based Services (LBS) applications research. Specifically, this ability could be used for indoor sound trilateration positioning. Such an approach is uniquely characterized by the high accuracy inherent to sound trilateration, with little computational...
https://arrow.dit.ie/dmccon/77
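To make the 20-22 kHz band discussed above concrete, here is a minimal sketch of synthesising a near-ultrasonic test tone at a moderate amplitude (the paper's failure mode was excessive volume); the function name `ultrasonic_tone` and the 44.1 kHz sample rate are assumptions, not taken from the paper:

```python
import numpy as np

SAMPLE_RATE = 44100  # samples/s; supported by standard phone audio hardware

def ultrasonic_tone(freq_hz, duration_s=0.1, amplitude=0.5):
    """Sine tone in the near-ultrasonic band. A moderate amplitude is
    used because driving the speaker too hard introduces the audible
    distortion products the paper warns about."""
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

tone = ultrasonic_tone(21000)  # 21 kHz: inaudible but still below Nyquist
```

At a 44.1 kHz sample rate the Nyquist limit is 22.05 kHz, which is exactly why the usable band tops out around 22 kHz on commodity hardware.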
Judging Emotion from Low-pass Filtered Naturalistic Emotional Speech
(2013)
Snel, John; Cullen, Charlie
Abstract:
In speech, low frequency regions play a significant role in paralinguistic communication such as the conveyance of emotion or mood. The extent to which lower frequencies signify or contribute to affective speech is still an area for investigation. To investigate paralinguistic cues, and remove interference from linguistic cues, researchers can low-pass filter the speech signal on the assumption that certain acoustic cues characterizing affect are still discernible. Low-pass filtering is a practical technique to investigate paralinguistic phenomena, and is used here to investigate the inference of naturalistic emotional speech. This paper investigates how listeners perceive the level of Activation, and evaluate negative and positive levels, on low-pass filtered naturalistic speech, which has been developed through the use of Mood Inducing Procedures.
https://arrow.dit.ie/dmccon/105
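The low-pass filtering (cue masking) described above can be sketched with a windowed-sinc FIR filter; the 400 Hz cutoff, the function names `lowpass_fir` and `cue_mask`, and the filter design are illustrative assumptions, not the authors' actual procedure:

```python
import numpy as np

def lowpass_fir(cutoff_hz, sample_rate, num_taps=201):
    """Windowed-sinc FIR low-pass kernel (Hamming window),
    normalised for unity gain at DC."""
    fc = cutoff_hz / sample_rate               # normalised cutoff
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n)           # ideal low-pass impulse response
    h *= np.hamming(num_taps)                  # taper to reduce ripple
    return h / h.sum()

def cue_mask(signal, sample_rate, cutoff_hz=400.0):
    """Low-pass filter a speech signal so intelligible linguistic
    detail is removed while low-frequency prosodic cues survive."""
    return np.convolve(signal, lowpass_fir(cutoff_hz, sample_rate), mode="same")
```

Listeners rating the filtered output then judge Activation and positive/negative evaluation from prosody alone, since the words themselves are no longer intelligible.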
LinguaTag: an Emotional Speech Analysis Application
(2008)
Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros
Abstract:
The analysis of speech, particularly for emotional content, is an open area of current research. Ongoing work has developed an emotional speech corpus for analysis, and defined a vowel stress method by which this analysis may be performed. This paper documents the development of LinguaTag, an open source speech analysis software application which implements this vowel stress emotional speech analysis method developed as part of research into the acoustic and linguistic correlates of emotional speech. The analysis output is contained within a file format combining SMIL and SSML markup tags, to facilitate search and retrieval methods within an emotional speech corpus database. In this manner, analysis performed using LinguaTag aims to combine acoustic, emotional and linguistic descriptors in a single metadata framework.
https://arrow.dit.ie/dmccon/17
Metadata Visualisation Techniques for Emotional Speech Corpora
(2008)
Cullen, Charlie; Vaughan, Brian; Kousidis, Spyros
Abstract:
Our research in emotional speech analysis has led to the construction of dedicated high quality, online corpora of natural emotional speech assets. Once obtained, the annotation and analysis of these assets was necessary in order to develop a database of both analysis data and metadata relating to each speech act. With annotation complete, the means by which this data may be presented to the user online for analysis, retrieval and organization is the current focus of our investigations. Building on an initial web interface developed in Ruby on Rails, we are now working towards a visually driven GUI built on Adobe Flex. This paper details our work towards this goal, defining the rationale behind development and also demonstrating work achieved to date.
https://arrow.dit.ie/dmccon/26
MobiLAudio – a Multimodal Content Delivery Platform for Geo-Services
(2016)
Carswell, James; Gardiner, Keith; Cullen, Charlie
Abstract:
Delivering high-quality context-relevant information in a timely manner is a priority for location-based services (LBS) where applications require an immediate response based on spatial interaction. Previous work in this area typically focused on ever more accurately determining this interaction and informing the user in the customary graphical way using the visual modality. This paper describes the research area of multimodal LBS and focuses on audio as the key delivery mechanism. This new research extends familiar graphical information delivery by introducing a geoservices platform for delivering multimodal content and navigation services. It incorporates a novel auditory user interface (AUI) that enables delivery of natural language directions and rich media content using audio. This unifying concept provides a hands-free modality for navigating a mapped space while simultaneously enjoying rich media content that is dynamically constructed using such mechanisms as algorithmic mus...
https://arrow.dit.ie/dmcart/56
mobiSurround: An Auditory User Interface for Geo-Service Delivery
(2015)
Gardiner, Keith; Cullen, Charlie; Carswell, James
Abstract:
This paper describes original research carried out in the area of Location-Based Services (LBS) with an emphasis on Auditory User Interfaces (AUI) for content delivery. Previous work in this area has focused on accurately determining spatial interactions and informing the user mainly by means of the visual modality. mobiSurround is new research that builds upon these principles with a focus on multimodal content delivery and navigation and in particular the development of an AUI. This AUI enables the delivery of rich media content and natural directions using audio. This novel approach provides a hands free method for navigating a space while experiencing rich media content dynamically constructed using techniques such as phrase synthesis, algorithmic music and 3D soundscaping. This paper outlines the innovative ideas employed in the design and development of the AUI that provides an overall immersive user experience.
https://arrow.dit.ie/dmccon/115
Musical Pattern Design Using Contour Icons
(2006)
Cullen, Charlie; Coyle, Eugene
Abstract:
This paper considers the use of Contour Icons in the design and implementation of musical patterns, for the purposes of detection and recognition. Research work had endeavoured to deliver musical patterns that were both distinct and memorable, and to this end a set of basic melodic shapes were introduced using a Sonification application called TrioSon that had been designed for the purpose. Existing work in the field (such as that concerning Earcon design) has considered the mechanisms by which patterns may be made distinctive, but it is argued that separate consideration must be given to the method of making such patterns memorable. This work suggests that while segregation and detection can best be facilitated by the individuality of a pattern's rhythm, the retention (and hence future recognition) of a musical pattern is concerned more with its melodic range and contour. The detection and comprehension of musical patterns based around basic shapes (known as Contour Icons) was teste...
https://arrow.dit.ie/dmccon/21
Obtaining Speech Assets for Judgement Analysis on Low-pass Filtered Emotional Speech
(2011)
Snel, John; Cullen, Charlie
Abstract:
Investigating the emotional content in speech from acoustic characteristics requires separating the semantic content from the acoustic channel. For natural emotional speech, a widely used method to separate the two channels is the use of cue masking. Our objective is to investigate the use of cue masking in non-acted emotional speech by analyzing the extent to which filtering impacts the perception of emotional content of the modified speech material. However, obtaining a corpus of emotional speech can be quite difficult, and verifying the emotional content is an issue discussed thoroughly here. Currently, speech research is showing a tendency toward constructing corpora of natural emotion expression. In this paper we outline the procedure used to obtain the corpus containing high audio quality and ‘natural’ emotional speech. We review the use of Mood Induction Procedures which provides a method to obtain spontaneous emotional speech in a controlled environment. Following this, we p...
https://arrow.dit.ie/dmccon/66
Orchestration within the Sonification of Basic Data Sets
(2004)
Cullen, Charlie; Coyle, Eugene
Abstract:
The use of sonification as a means of representing and analysing data has become a growing field of research in recent years and as such has become a far more accepted means of working with data. Existing work carried out as part of this research has focused primarily on the sonification of DNA/RNA sequences and their subsequent protein structures for the purposes of analysis. This sonification work raised many questions as regards the need for sequences to be set to music in a standard manner so that different strands could be analysed by comparison, and hence the orchestration and instrumentation used became of great importance. The basic principles of sonification can be rapidly extended to include many different data elements within a single rendering, and thus the importance of orchestration grows accordingly. Existing work on the use of rhythmic parsing within a sonification had suggested that far more information could be represented when orchestrated in a rhythmic manner tha...
https://arrow.dit.ie/dmccon/28
Prominence Driven Character Animation
(2010)
Cullen, Charlie; McGloin, Paula; Deegan, Anna; McCarthy, Evin
Abstract:
This paper details the development of a fully automated system for character animation implemented in Autodesk Maya. The system uses prioritised speech events to algorithmically generate head, body, arms and leg movements alongside eyeblinks, eyebrow movements and lip-synching. In addition, gaze tracking is also generated automatically relative to the definition of focus objects: contextually important objects in the character's worldview. The plugin uses an animation profile to store the relevant controllers and movements for a specific character, allowing any character to run with the system. Once a profile has been created, an audio file can be loaded and animated with a single button click. The average time to animate is between 2 and 3 minutes for 1 minute of speech, and the plugin can be used either as a first pass system for high quality work or as part of a batch animation workflow for larger amounts of content as exemplified in television and online dissemination channels.
https://arrow.dit.ie/dmccon/108
Item Type
Conference item (32)
Journal article (3)
Other (5)
Year
2016 (3)
2015 (2)
2014 (1)
2013 (2)
2012 (2)
2011 (3)
2010 (3)
2009 (6)
2008 (6)
2007 (1)
2006 (5)
2005 (3)
2004 (2)
2003 (1)