WLASL: A Large-Scale Dataset for Word-Level American Sign Language

D. Li, C. Opazo, X. Yu, H. Li

This dataset is introduced to facilitate word-level sign recognition research. Data were collected from two main Internet sources: educational sign language websites, such as ASLU and ASL-LEX, and ASL tutorial videos on YouTube. 21,083 videos performed by 119 signers were extracted, each containing a single ASL sign. For diversity, each sign is performed by at least 3 different signers.
Tags: Sign language, American Sign Language
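
As a starting point for working with this release, here is a minimal sketch of reading a word-level video index, assuming a JSON file that maps each gloss to its video instances; the file name and the field names (gloss, instances, signer_id) are assumptions modelled on the public release, not verified.

```python
import json

# "WLASL_v0.3.json" and the field names below are assumptions about
# the public release: one entry per gloss, each with video instances.
with open("WLASL_v0.3.json") as f:
    index = json.load(f)

signers = set()
n_videos = 0
for entry in index:
    n_videos += len(entry["instances"])
    signers.update(inst["signer_id"] for inst in entry["instances"])

print(f"{len(index)} glosses, {n_videos} videos, {len(signers)} signers")
```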

CADENCE Corpus

Maria K. Wolters, J. Kilgour, Sarah E. MacPherson, M. Dzikovska, Johanna D. Moore

Older people with and without cognitive impairment or dementia were recruited, and their spoken interactions with a voice interface were collected to inform the design of Intelligent Cognitive Assistants. The corpus consists of transcribed spoken interactions collected through a calendar app interface on an iPad 2 tablet.
Tags: Cognitive impairment, Dementia, Alzheimer's disease

ChicagoFSWild

B. Shi, A. Rio, J. Keane, J. Michaux, D. Brentari, G. Shakhnarovich, K. Livescu

The dataset is created for recognizing American Sign Language fingerspelling in the wild, that is, in naturally occurring video. Data consist of manually located fingerspelling clips from ASL videos on YouTube, aslized.org, and deafvideo.tv, annotated by in-house annotators.
Tags: Sign language, American Sign Language
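
A common first step with clip collections like this is decoding each clip into a frame sequence for a recognizer; a minimal OpenCV sketch, where the clip path is a placeholder rather than a file from the release:

```python
import cv2  # OpenCV

def read_clip_frames(path, size=(224, 224)):
    """Decode a fingerspelling clip into a list of resized RGB frames."""
    cap = cv2.VideoCapture(path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV decodes BGR; convert to RGB and resize for a model.
        frames.append(cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), size))
    cap.release()
    return frames

frames = read_clip_frames("clip0001.mp4")  # placeholder file name
print(len(frames), "frames")
```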

Extending the Public DGS Corpus in Size and Depth

T. Hanke, M. Schulder, R. Konrad, E. Jahn

This work releases new resources from the Public DGS Corpus, a long-term project that aims to build a reference corpus of German Sign Language along with research portals. The extension adds new data formats, including OpenPose pose information, as well as corrections to the subtitles and annotations.
Tags: Sign language, German sign language, German
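
Since the extension adds OpenPose output, a minimal sketch of parsing one per-frame OpenPose JSON file into a keypoint array may help; the file name is a placeholder, and the 25-keypoint shape assumes the BODY_25 model:

```python
import json
import numpy as np

def load_openpose_keypoints(path):
    """Parse one OpenPose per-frame JSON file into an (n_people, 25, 3)
    array of body keypoints (x, y, confidence)."""
    with open(path) as f:
        frame = json.load(f)
    return np.array([
        np.asarray(p["pose_keypoints_2d"], dtype=np.float32).reshape(-1, 3)
        for p in frame.get("people", [])
    ])

kps = load_openpose_keypoints("frame_000000_keypoints.json")  # placeholder
print(kps.shape)  # e.g. (1, 25, 3) for a single signer
```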

The TORGO database of acoustic and articulatory speech from speakers with dysarthria

F. Rudzicz, A. Namasivayam, T. Wolff

This database was originally created as a resource for developing advanced automatic speech recognition models better suited to the needs of people with dysarthria. It consists of aligned acoustics and measured 3D articulatory features from speakers with either cerebral palsy (CP) or amyotrophic lateral sclerosis (ALS).
Tags: speech, Articulation, Dysarthria
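
A typical acoustic front end for ASR experiments on a corpus like this is MFCC extraction; a minimal librosa sketch, where the speaker/session path is an assumed layout rather than a verified one:

```python
import librosa

# Assumed (not verified) per-speaker, per-session layout of the corpus.
wav_path = "TORGO/F01/Session1/wav_arrayMic/0001.wav"

# Load one prompt recording and compute 13 MFCCs, a common front end
# for dysarthric speech recognition experiments.
audio, sr = librosa.load(wav_path, sr=16000)
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
print(mfcc.shape)  # (13, n_frames)
```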

Low Vision Stroke-Gesture Dataset

R. Vatavu, B. Gheran, M. Schipor

A dataset of stroke gestures is collected to understand how people with low vision articulate gestures. The stroke gestures were collected on a touchscreen tablet.
Tags: Low vision, stroke
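
Analyses of gesture articulation usually start from simple geometric measures over the sampled touch points; a minimal sketch computing a stroke's path length, using a made-up stroke rather than data from this set:

```python
import math

def path_length(points):
    """Total Euclidean length of a stroke given as (x, y) touch samples,
    a basic measure when comparing how user groups articulate gestures."""
    return sum(math.dist(points[i - 1], points[i])
               for i in range(1, len(points)))

stroke = [(10.0, 10.0), (40.0, 50.0), (90.0, 50.0)]  # made-up samples
print(path_length(stroke))  # 100.0
```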

The Ishara-Lipi dataset

Md. Sanzidul Islam, S. Mousumi, Nazmul A. Jessan, S. Hossain

A dataset of Bangla sign language digits and characters is collected to enable research in automatic sign language recognition and to develop educational tools. A total of 2,800 images were captured from an unspecified number of native Bangla sign language signers. 10 digits and 36 characters were performed, with 100 and 50 images each, respectively.
Tags: Sign language, Bangladeshi Sign Language, Bangladeshi

The Toronto Rehab Stroke Pose Dataset to Detect Compensation during Stroke Rehabilitation Therapy

E. Dolatabadi, Y. Zhi, B. Ye, M. Coahran, G. Lupinacci, A. Mihailidis, R. Wang, B. Taati

3D pose estimates of people undergoing stroke rehabilitation and of healthy controls are collected to help build an automated model that can predict the effectiveness of a rehabilitation exercise. Kinect joint data were recorded while subjects performed tasks with a robotic stroke rehabilitation device, along with common clinical assessment scores.
Tags: stroke, Stroke rehabilitation
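
One simple feature derivable from such Kinect skeleton data is a joint angle, e.g. elbow flexion, often inspected when looking for compensatory movement; a minimal NumPy sketch with made-up joint positions:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c,
    e.g. shoulder-elbow-wrist from a Kinect skeleton."""
    a, b, c = np.asarray(a), np.asarray(b), np.asarray(c)
    ba, bc = a - b, c - b
    cos = np.dot(ba, bc) / (np.linalg.norm(ba) * np.linalg.norm(bc))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Made-up shoulder/elbow/wrist positions in metres, not corpus data.
print(joint_angle([0.0, 0.4, 0.0], [0.0, 0.0, 0.0], [0.3, 0.0, 0.0]))  # 90.0
```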

Grammatical Facial Expressions Data Set

F. Freitas, F. Barbosa, S. Peres

The study aims to model automated recognition of grammatical facial expressions in Brazilian Sign Language (Libras). Videos of a fluent Libras signer performing grammatical facial expressions were captured with a Microsoft Kinect sensor and represented by spatial and temporal information extracted from the image frames.
Tags: Sign language

GLASSENSE-VISION dataset

J. Sosa-Garcia, F. Odone

This work aims to create a system that identifies objects for blind users from visual input. Images from 7 different object categories were taken with wearable cameras and annotated manually.
Tags: Visual impairment

Dicta-Sign – Building a Multilingual Sign Language Corpus

S. Matthes, T. Hanke, A. Regen, J. Storz, S. Worseck, E. Efthimiou, A. Dimou, A. Braffort, J. Glauert, E. Safar

This work aims to build a multilingual sign language corpus: stereo cameras recorded the signing, and the iLex tool was used to annotate the corpus.
Tags: Sign language

Stroke-Gesture Input for People with Motor Impairments: Empirical Results & Research Roadmap

R. Vatavu, O. Ungurean

A dataset of stroke gestures is collected to understand how people with motor impairments articulate gestures. The stroke gestures were collected on a touchscreen tablet.
Tags: Cerebral Palsy, Spinal Cord Injury, Meningitis

ChicagoFSWild+

B. Shi, A. Rio, J. Keane, D. Brentari, G. Shakhnarovich, K. Livescu

Naturally occurring sign language videos are collected from online sources like YouTube and Deaf social media for American Sign Language (ASL) fingerspelling recognition. ChicagoFSWild+ contains 55,232 fingerspelling sequences signed by 260 signers, along with crowdsourced annotations.
Tags: Sign language, American Sign Language, English

How2Sign: A Large-scale Multimodal Dataset for Continuous American Sign Language

A. Duarte, K. DeHaan, S. Palaskar, L. Ventura, D. Ghadiyaram, F. Metze, J. Torres, X. Giro-i-Nieto

This multimodal and multiview continuous American Sign Language (ASL) dataset consists of a parallel corpus intended to facilitate progress in sign language recognition, translation, and production. A total of 80 hours of sign language videos were recorded from multiple angles, with gloss annotations and a coarse video categorization.
Tags: Sign language, American Sign Language, English

Swedish Sign Language Corpus

J. Mesch, L. Wallin

The Swedish Sign Language Corpus aims to document sign language discourses and is freely available with the web-based corpus tool STS-corpus, for use in teaching, research and sign language lexicography. The recordings consist of free conversations and storytelling, as well as retellings of, for example, “The Snow Man”. The recorded materials are annotated and transcribed.
Tags: Sign language, Swedish sign language, Swedish

SEP-28k: A Dataset for Stuttering Event Detection From Podcasts With People Who Stutter

C. Lea, V. Mitra, A. Joshi, S. Kajarekar, Jeffrey P. Bigham

The dataset contains stuttering event annotations to support automatic detection of stuttering events and speech recognition for people with atypical speech patterns. 28k annotated clips (23 hours) of speech were curated from public podcasts, largely consisting of people who stutter interviewing other people who stutter.
Tags: Dysfluencies, Stuttering, Atypical speech, Speech recognition
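
A minimal sketch of loading such clip-level labels with pandas; the CSV file name and the event-type column names are assumptions about the release, so the actual schema should be checked in the distribution:

```python
import pandas as pd

# Assumed file name and columns: one row per 3-second clip, with
# per-annotator counts for each stuttering event type.
labels = pd.read_csv("SEP-28k_labels.csv")

event_cols = ["Block", "Prolongation", "SoundRep", "WordRep", "Interjection"]
print(labels[event_cols].sum())  # rough prevalence of each event type
```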

A Corpus-Based Dictionary of Polish Sign Language (PJM)

J. Linde-Usiekniewicz, M. Czajkowska-Kisil, J. Łacheta, P. Rutkowski

This work collects video clips showing Deaf people using Polish Sign Language to create a corpus-based dictionary. Each recording involves two signers reacting to visual stimuli, e.g. retelling the content of picture-stories and video clips, naming an object, talking about themselves, and discussing topics of interest to them.
Tags: Sign language, Polish sign language, Polish

BdSL36: A Dataset for Bangladeshi Sign Letters Recognition

O. Hoque, M. Jubair, A. Akash, Md. Saiful Islam

This dataset of Bangladeshi Sign Language letters is released for deep learning classification and detection; it comes with raw images, background-removed and augmented images, and labelled bounding boxes. 1,200 images were captured by volunteers at their convenience in natural environments using their phone cameras or webcams.
Tags: Sign language, Bangladeshi Sign Language, Bangladeshi

Auslan Corpus

T. Johnston

Language recording sessions were conducted with deaf native or early-learner/near-native users of Australian Sign Language to identify the principles that need to be adhered to in the creation of signed language corpora. Unedited video footage was collected in which each participant took part in language-based activities: an interview, the production of narratives, responses to survey questions, free conversation, and other elicited linguistic responses to various stimuli such as a picture-book story, a filmed cartoon, and a filmed story told in Auslan.
Tags: Sign language, Australian Sign Language

Database for Signer-Independent Continuous Sign Language Recognition

U. von Agris

A dataset of video streams of isolated German sign language words and sentences is collected. 450 basic signs with 740 different meanings were performed by 25 native signers.
Tags: Sign language, German sign language, German