Content4All Open Research Sign Language Translation Datasets

N. Camgoz, B. Saunders, G. Rochette, M. Giovanelli, G. Inches, R. Nachtrab-Ribback, R. Bowden

This work releases six datasets containing 20 hours of broadcast footage annotated by Deaf experts and interpreters for automatic sign language translation research. Raw footage is anonymized and processed with OpenPose to extract 2D and 3D body pose information, and spoken language text is aligned to the sign language videos as part of the annotation process.
Tags: 
Sign language
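
A minimal sketch of how per-frame pose files like those produced by OpenPose could be parsed for such a dataset. This assumes the standard OpenPose JSON output (one file per frame, a "people" array with a flat "pose_keypoints_2d" list); the file naming pattern and the exact format of the released Content4All annotations are assumptions.

```python
import json
from pathlib import Path

def load_pose_keypoints(json_dir):
    """Load per-frame 2D body keypoints from OpenPose-style JSON output.

    Each detected person carries a flat "pose_keypoints_2d" list of
    (x, y, confidence) triples; we keep only the first person per frame.
    """
    frames = []
    for path in sorted(Path(json_dir).glob("*_keypoints.json")):  # assumed naming
        with open(path) as f:
            data = json.load(f)
        people = data.get("people", [])
        if not people:
            frames.append(None)  # no signer detected in this frame
            continue
        flat = people[0]["pose_keypoints_2d"]
        # reshape the flat list into (x, y, confidence) triples
        frames.append([tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)])
    return frames
```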

VizWiz dataset

D. Gurari, Q. Li, Abigale J. Stangl, A. Guo, C. Lin, K. Grauman, J. Luo, Jeffrey P. Bigham

This work highlights the technological needs of blind people and aims to attract more researchers to accessibility research. The visual question answering (VQA) dataset consists of images taken by blind people and labeled through crowdsourcing.
Tags: 
blind, blindness

ORBIT: A Real-World Few-Shot Dataset for Teachable Object Recognition

D. Massiceti, L. Zintgraf, J. Bronskill, L. Theodorou, M. Harris, E. Cutrell, C. Morrison, K. Hofmann, S. Stumpf

This dataset is a collection of videos of objects recorded by people who are blind/low-vision on their mobile phones to drive research in Teachable Object Recognisers (TORs) under few-shot, high-variation conditions. Collectors recorded and submitted videos to the ORBIT benchmark dataset via an accessible iOS app.
Tags: 
blindness, blind, Low vision

PD_MIT-CS1PD: Computer keyboard interaction as an indicator of early Parkinson's disease

L. Giancardo, A. Sánchez-Ferro, T. Arroyo-Gallego, I. Butterworth, C. S. Mendoza, P. Montero, M. Matarazzo, J. A. Obeso, M. L. Gray, R. San José Estépar

Two datasets of key press and release timings were collected to detect motor impairment in early-stage Parkinson's disease; this is the smaller of the two. Key hold times during typing tasks are recorded for building a machine learning model that detects motor impairment.
Tags: 
Parkinsons, physical
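
A minimal sketch of the kind of hold-time features such press/release timings support. The tuple-based input format and the statistics chosen are illustrative assumptions, not the dataset's actual file layout or the paper's feature set.

```python
import statistics

def hold_time_features(events):
    """Compute simple hold-time statistics from (press_time, release_time) pairs.

    `events` is a list of (press_s, release_s) tuples in seconds, one per keystroke.
    """
    holds = [release - press for press, release in events]
    return {
        "mean_hold": statistics.mean(holds),
        "std_hold": statistics.stdev(holds) if len(holds) > 1 else 0.0,
        "median_hold": statistics.median(holds),
    }

# Example: three keystrokes with hold times of 90, 120, and 110 ms
print(hold_time_features([(0.00, 0.09), (0.50, 0.62), (1.10, 1.21)]))
```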

MEDIAPI-SKEL

H. Bull, A. Braffort, M. Gouiffès

A database of 2D-skeleton videos of TV programs translated into French Sign Language, aligned with French subtitles. The videos capture journalists using sign language.
Tags: 
Sign language

Structured Analysis of the Retina

Michael H. Goldbaum

Retinal images are collected to support diagnosis of various retinal pathologies. The images cover healthy patients and patients with a variety of eye conditions; arteries and veins are labeled, and blood vessel segmentation is also provided.
Tags: 
Diabetes, Ageing, Retinopathy

Hands Holding Clues for Object Recognition in Teachable Machines

K. Lee, H. Kacorri

A dataset of hand-held objects is collected to build an object recognizer that could be used by people with visual impairments. The dataset contains photographs taken by people with and without visual impairments: a sighted and a blind individual collected images of objects using a smartphone camera.
Tags: 
blindness, Low vision, Visual impairment

PSL ToF 84

T. Kapuscinski, M. Oszust, M. Wysocki, D. Warchoł

A 3D video and point cloud dataset collected to explore sign language recognition using skeleton tracking and hand skin recognition. A single participant performed 84 signs 20 times each; the performances were recorded with 3D Kinect sensors and the point cloud data was retained.
Tags: 
Sign language, Polish, Polish sign language

Hypernasality detection

M. Novotný, J. Rusz, R. Čmejla, H. Růžičková, J. Klempíř, E. Růžička

This work analyzes hypernasality due to basal ganglia dysfunction in patients with Huntington's and Parkinson's disease. Participants' speech was recorded through a head-mounted condenser microphone and then analyzed and rated by 10 raters, including 1 speech-language pathologist, 6 acoustic speech specialists, and 3 clinicians.
Tags: 
Parkinsons, Huntingtons, Hypernasality

Parkinson Speech Dataset with Multiple Types of Sound Recordings Data Set

B. Sakar, M. Erdem Isenkul, C. Okan Sakar, A. Sertbas, F. Gurgen, S. Delil, H. Apaydin, O. Kursun

This work aims to understand the predictiveness of voice samples in Parkinson's disease diagnosis and to see how well central tendency and dispersion metrics represent all of a subject's sample recordings. Participants were given texts to read aloud, which were then recorded.
Tags: 
Parkinsons

ArASL: Arabic Alphabets Sign Language Dataset

G. Latif, N. Mohammad, J. Alghazo, R. AlKhalaf, R. AlKhalaf

A large, fully labelled dataset of Arabic Sign Language (ArSL) images is collected for research on automated machine learning and computer vision systems for deaf and hard-of-hearing people. A smart camera attached to a tripod was used to capture the images, and participants were asked to stand around 1 m away from the camera.
Tags: 
Sign language, arabic, Arabic sign language

Parkinson's Disease Observations

K. Thiyagarajan

This dataset provides different observation variables regarding Parkinson's disease. Some of these variables are acoustic characteristics of voice signals (jitter, shimmer) and are highly correlated with each other.
Tags: 
Parkinsons
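
A minimal sketch of how the reported correlation between the voice measures could be checked with pandas. The file name and the way columns are selected by keyword are illustrative assumptions; the released data may label the jitter/shimmer measures differently.

```python
import pandas as pd

# Hypothetical file name; adjust to the actual release.
df = pd.read_csv("parkinsons_observations.csv")

# Pick out the acoustic voice measures by keyword (assumed naming convention)
voice_cols = [c for c in df.columns if "jitter" in c.lower() or "shimmer" in c.lower()]

# Pairwise Pearson correlations between the voice measures
print(df[voice_cols].corr())
```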

PSL Kinect 30

M. Oszust, M. Wysocki

A 3D video and point cloud dataset collected to explore sign language recognition using skeleton tracking and hand skin recognition. A single participant performed 30 signs 10 times each; the performances were recorded with 3D Kinect sensors and the point cloud data was retained.
Tags: 
Sign language, Polish, Polish sign language

Early biomarkers of Parkinson’s disease based on natural connected speech

J. Hlavnička, R. Čmejla, T. Tykalová, K. Šonka, E. Růžička, J. Rusz

This work aims to predict neurodegeneration by analyzing the speech of patients with early, untreated Parkinson's disease and of those at high risk of developing it.
Tags: 
Parkinsons, RBD, Hypernasality

Vision-based assessment of parkinsonism and levodopa-induced dyskinesia with pose estimation

Michael H. Li, Tiago A. Mestre, Susan H. Fox, B. Taati

This work evaluates the feasibility of computer-vision-based assessment of parkinsonism and levodopa-induced dyskinesia (LID) using pose estimation. Participants performed assigned tasks that were recorded on video, while parkinsonism and LID were continuously assessed with clinical rating scales.
Tags: 
Parkinsons, Levodopa-induced dyskinesia

Dicta-Sign–LSF–v2: Remake of a Continuous French Sign Language Dialogue Corpus and a First Baseline for Automatic Sign Language Processing

V. Belissen, A. Braffort, M. Gouiffès

This work re-records a corpus of French Sign Language within the Dicta-Sign project using RGB cameras. The RGB recordings of the gestures were annotated, reusing the Dicta-Sign elicitation content.
Tags: 
Sign language

Depresjon: a motor activity database of depression episodes in unipolar and bipolar patients

E. Garcia-Ceja, M. Riegler, P. Jakobsen, J. Tørresen, T. Nordgreen, Ketil J. Oedegaard, O. Fasmer

A dataset of daily-activity accelerometer data is collected to understand how motor activity differs between patients with depression and healthy controls. Subjects wore actigraph sensors to measure daily activity.
Tags: 
Depression, Unipolar, Bipolar
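
A minimal sketch of comparing average daily activity between a patient and a control from per-participant actigraph files. The CSV file names and the 'timestamp'/'activity' column names are assumptions; adjust them to the released data.

```python
import pandas as pd

def mean_daily_activity(csv_path):
    """Average per-day actigraph activity counts for one participant.

    Assumes a per-participant CSV with 'timestamp' and 'activity' columns.
    """
    df = pd.read_csv(csv_path, parse_dates=["timestamp"])
    return df.groupby(df["timestamp"].dt.date)["activity"].mean()

# Hypothetical file names for a depressed participant and a healthy control
print(mean_daily_activity("condition_1.csv").mean())
print(mean_daily_activity("control_1.csv").mean())
```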

FluencyBank Voices-AWS Corpus

N. Ratner, C. Luckman, M. Baer

This work aims to capture participants' experience of stuttering to build a new instrument (OASES) for measuring the overall impact of stuttering. Each participant completed a reading task and an interview to learn more about the behaviors and affective/cognitive features of living with stuttering.
Tags: 
Stuttering, Dysfluencies, Fluency disorders

The Korean Sign Language Dataset for Action Recognition

S. Yang, S. Jung, H. Kang, C. Kim

A Korean Sign Language corpus of sign units is collected using RGB video cameras to train a convolutional neural network to predict the sign. An RGB camera recorded the gestures for 77 words in Korean Sign Language.
Tags: 
Sign language, Korean Sign Language
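
A minimal frame-level CNN sketch for a 77-class sign classifier, for orientation only. The architecture, input resolution, and use of raw RGB frames are assumptions; the paper's actual model and input representation may differ.

```python
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    """Toy CNN that maps a single RGB frame to logits over 77 sign classes."""
    def __init__(self, num_classes=77):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),  # global average pooling over spatial dims
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):          # x: (batch, 3, H, W) RGB frames
        h = self.features(x).flatten(1)
        return self.classifier(h)  # logits over the 77 sign classes

logits = SignClassifier()(torch.randn(2, 3, 128, 128))
print(logits.shape)  # torch.Size([2, 77])
```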

FluencyBank Voices-CWS Corpus

N. Ratner, C. Luckman, M. Baer

This work aims to capture children's experience of stuttering to build a new instrument (OASES) for measuring the overall impact of stuttering. Each child participant completed a grade-appropriate reading task and an interview to learn more about the behaviors and affective/cognitive features of living with stuttering.
Tags: 
Stuttering, Dysfluencies