Multi-modal analysis of infant cry types characterization: Acoustics, body language and brain signals

Computers in Biology and Medicine

Comput Biol Med. 2023 Oct 25;167:107626. doi: 10.1016/j.compbiomed.2023.107626. Online ahead of print.

ABSTRACT

BACKGROUND: Infant crying is babies' first means of communication during their initial months of life. Misunderstanding the cry message can compromise infant care and subsequent neurodevelopment.

METHODS: An exploratory study collecting multimodal data (i.e., crying, electroencephalography (EEG), near-infrared spectroscopy (NIRS), facial expressions, and body movements) from 38 healthy full-term newborns was conducted. Cry types were defined based on different conditions (i.e., hunger, sleepiness, fussiness, need to burp, and distress). Statistical analysis, Machine Learning (ML), and Deep Learning (DL) techniques were used to identify relevant features for cry type classification and to evaluate a robust DL algorithm named Acoustic MultiStage Interpreter (AMSI).
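As a rough illustration of the classification step described above (not the paper's AMSI algorithm, whose architecture is not given in this abstract), the sketch below classifies cry segments into the five condition labels with a nearest-centroid rule over synthetic acoustic feature vectors; the feature values and class structure are invented for demonstration only.

```python
import numpy as np

# Hypothetical sketch: nearest-centroid cry-type classification on
# synthetic "acoustic" features (stand-ins for e.g. F0, intensity,
# MFCC statistics). All data here is fabricated for illustration.

rng = np.random.default_rng(0)
CRY_TYPES = ["hunger", "sleepiness", "fussiness", "burp", "distress"]

def make_features(label_idx, n, dim=4):
    # Synthetic feature vectors clustered around one point per cry type
    return rng.normal(loc=label_idx, scale=0.3, size=(n, dim))

# Toy training set: 20 segments per cry type
X = np.vstack([make_features(i, 20) for i in range(len(CRY_TYPES))])
y = np.repeat(np.arange(len(CRY_TYPES)), 20)

# "Training" = mean feature vector (centroid) per cry type
centroids = np.array([X[y == i].mean(axis=0) for i in range(len(CRY_TYPES))])

def classify(x):
    # Assign the cry type whose centroid is closest in feature space
    d = np.linalg.norm(centroids - x, axis=1)
    return CRY_TYPES[int(np.argmin(d))]

print(classify(make_features(4, 1)[0]))
```

In practice, a study like this would extract real acoustic descriptors from cry recordings and use a trained ML or DL model rather than this toy rule; the sketch only shows the mapping from feature vectors to the five condition labels.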

RESULTS: Significant differences were found across cry types based on acoustics, EEG, NIRS, facial expressions, and body movements. Acoustics and body language were identified as the ML features most relevant for identifying the cause of crying. The DL AMSI algorithm achieved an accuracy rate of 92%.

CONCLUSIONS: This study sets a precedent for cry-analysis research by highlighting the complexity of newborn cry expression and by strengthening the case for infant cry analysis as an objective, reliable, accessible, and non-invasive tool for cry interpretation, thereby improving the infant-parent relationship and supporting family well-being.

PMID:37918262 | DOI:10.1016/j.compbiomed.2023.107626