Call for papers
There is ever-growing research interest in the computer vision and machine learning community in modeling human facial and gestural behavior for clinical applications. However, the current state of the art in computer vision and machine learning for face and gesture analysis has not yet achieved the goal of reliably using behavioral indicators in clinical contexts. One major challenge is the lack of available archives of behavioral observations of individuals who have clinically relevant conditions (e.g., pain, depression, autism spectrum disorder). Well-labeled recordings of clinically relevant conditions are necessary to train classifiers, and interdisciplinary efforts are needed to address this necessity.
The workshop aims to discuss the strengths and major challenges of using computer vision and machine learning for automatic face and gesture analysis in clinical research and healthcare applications. We invite scientists working in related areas of computer vision and machine learning for face and gesture analysis, affective computing, human behavior sensing, and cognitive behavior to share their expertise and achievements in the emerging field of computer vision and machine learning-based face and gesture analysis for health informatics.
Topics of interest include, but are not limited to:
- Deep learning-based face and gesture analysis for healthcare
- Deep learning-based facial expression recognition for healthcare
- Remote physiological sensing for healthcare
- Human-Computer Interaction systems for healthcare
- Deep learning-based multi-modal (visual and verbal) fusion for healthcare applications
- Protocols for face and gesture analysis and modeling in clinical contexts
- Applications including, but not limited to, automatic pain intensity measurement, automatic depression severity assessment, and autism screening