Language Resources
Data collected in an embedded environment (office): 352 speakers, each reading 107-108 phonetically rich words.
Language(s): Korean
This database consists of speech read by 400 speakers aged 10 to 49, each reading 104-105 sentences, for a total of about 20,800 sentences.
Language(s): Korean
AILLA is a database of audio and textual materials from the indigenous languages of Latin America.
Language(s): various indigenous languages of Latin America
The DoBeS archive is a collection of all kinds of material concerning endangered languages: sound material, video recordings, photos and various textual annotations.
Language(s): other
The BANCA database is a large multi-modal database in four European languages (English, French, Spanish and Italian) and in two modalities (face and voice).
Language(s): English - French - Italian - Spanish
The XM2VTS database is a large multi-modal database captured on high-quality digital video. It contains four recordings of 295 subjects, and each recording comprises a speaking head shot and a rotating head shot.
Language(s): English
The Belfast Naturalistic database contains recordings of discussions on emotive subjects and recorded extracts from television programs. Recordings were chosen to be as spontaneous as possible (interactive unscripted discourse), to sample genuine emotional states.
Language(s): English
This French database contains audiovisual material collected during game playing. Participants played Taboo, a game in which one person has to explain a ‘taboo’ concept or word to another person using gestures and body movement.
Language(s): French
This dataset contains emotional audiovisual recordings collected with various elicitation procedures (outdoor activities, Spaghetti method).
It comprises recordings of 50 female and 68 male participants; the total amount of data is approximately 187 minutes.
Language(s): English
This dataset consists of audiovisual recordings of 8 interactions lasting about 30 minutes each. In these interactions, one person tries to persuade another on a topic with multiple emotional overtones.
Language(s): English
The DRIVAWORK corpus was collected using a simulated driving task with 24 participants (a total of 15 hours). Recordings were made under three scenarios: relaxing, driving normally, or driving with an additional task (for example, mental arithmetic).
Language(s): German
This database contains recordings of simulated driving. The procedure consists of inducing subjects into a range of emotional states (neutral, angry and elated); participants had identified topics in advance as emotive for them.
Thirty people participated in the experiment, with sessions lasting 10 minutes each.
Language(s): English
This English SAL corpus consists of audiovisual recordings of human-computer conversations. SAL stands for ‘Sensitive Artificial Listener’, an interface designed to let users work through a range of emotional states. It is built around four personalities intended to draw the user into their own emotional state: happy, gloomy, angry and pragmatic.
Four people participated in the experiment (around 20 minutes each).
Language(s): English
This Hebrew SAL corpus consists of audiovisual recordings of human-computer conversations. SAL stands for ‘Sensitive Artificial Listener’, an interface designed to let users work through a range of emotional states. It is built around four personalities intended to draw the user into their own emotional state: happy, gloomy, angry and pragmatic.
Language(s): Hebrew
This Greek SAL corpus consists of audiovisual recordings of human-computer conversations. SAL stands for ‘Sensitive Artificial Listener’, an interface designed to let users work through a range of emotional states. It is built around four personalities intended to draw the user into their own emotional state: happy, gloomy, angry and pragmatic.
Language(s): Greek
The GEMEP corpus is a multimodal database of acted emotional utterances. It contains simultaneous recordings of facial expressions, body movements, gestures and speech by 10 different actors (5 female, 5 male).
Language(s): French
Humaine is a labelled multimodal database containing natural speech. It was designed to cover material showing a wide range of emotions in action and interaction, and in different contexts (static, dynamic, outdoor, ...).
Language(s): English - German - French - Hebrew
This is a verbal interaction corpus in French, created to compare native French speakers with Flemish-speaking learners of French in Belgium. It contains 39 hours of video recordings and about 18 hours of annotated transcriptions.
The recordings consist of role-plays by Flemish-speaking learners of French at school (29 hours), and of the same role-plays by Belgian Francophones (5 hours) and French Francophones (5 hours).
Language(s): French (Belgium) - French (France)
This is a multimodal database containing 430 recordings, designed for monomodal or multimodal evaluation of face recognition systems. Each session contains 3D facial data, talking-face videos, 2D stereoscopic images and iris images.
Language(s): French (France)
The AMI (Augmented Multi-party Interaction) Meeting Corpus is a multimodal resource containing 100 hours of recorded meetings. It consists of role-play sequences in which participants follow a predefined scenario as in a real meeting, and of some naturally occurring meetings. Participants are mostly non-native speakers.
This corpus contains audio and video resources, as well as transcriptions with a wide range of annotations.
Language(s): English