AI solution created to answer perennial educator question: Why is this baby crying?
Any early childhood educator who has worked with pre-verbal children knows the challenge of wanting to assist a crying child without always being clear about the reason for the child's cries.
Is the child hungry, wet, tired, lonely, in pain? The reasons children communicate through tears are many and varied, and finding the cause can be one of the more complex parts of an educator's day.
A group of US-based researchers has devised an artificial intelligence method to identify and distinguish between normal cry signals and abnormal ones, such as those resulting from underlying illness or pain. The method, based on a cry language recognition algorithm, is believed by the researchers to be of tremendous benefit to educators, parents and health care settings, where doctors may use it to identify the cries of sick children.
While each baby’s cry is unique, cries share some common features when they result from the same cause. Identifying the hidden patterns in the cry signal has been a major challenge, and artificial intelligence has now been shown to be an appropriate solution in this context. The new research uses an algorithm based on automatic speech recognition to detect and recognise the features of infant cries. To analyse and classify those signals, the team used compressed sensing as a way to process big data more efficiently.
Compressed sensing is a process that rebuilds a signal based on minimal data, and is especially useful when sounds are recorded in noisy environments – such as early childhood settings.
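The core idea of compressed sensing, rebuilding a signal from minimal data, can be shown in a toy example. This is a sketch only: the matrix, sizes and single-component signal below are illustrative assumptions, not the researchers' actual setup, which handles far richer audio.

```python
# Toy compressed-sensing demo: recover a sparse length-6 signal from
# only 4 measurements, using a greedy correlation search.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# Measurement matrix, stored column by column. Columns are distinct
# +/-1 patterns, so any single active entry leaves a unique signature.
columns = [
    [1, 1, 1, 1],
    [1, 1, 1, -1],
    [1, 1, -1, 1],
    [1, -1, 1, 1],
    [-1, 1, 1, 1],
    [1, 1, -1, -1],
]

def measure(x):
    """y = Phi @ x: compress 6 signal entries into 4 measurements."""
    y = [0.0] * 4
    for j, col in enumerate(columns):
        for i in range(4):
            y[i] += col[i] * x[j]
    return y

def recover_one_sparse(y):
    """Greedy recovery: the active column correlates most strongly with y."""
    scores = [abs(dot(col, y)) for col in columns]
    j = scores.index(max(scores))
    coef = dot(columns[j], y) / dot(columns[j], columns[j])
    return j, coef

x_true = [0.0] * 6
x_true[3] = 2.5                      # one active component
index, value = recover_one_sparse(measure(x_true))
print(index, value)                  # 3 2.5
```

Because only a few components carry energy, four numbers suffice to pinpoint both where the signal lives and how strong it is, which is why the approach tolerates noisy, data-poor recordings.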
For the recent study, researchers designed a new cry language recognition algorithm that can distinguish the meanings of both normal and abnormal cry signals in a noisy environment. The algorithm is independent of the individual crier, meaning it can be applied broadly in practical scenarios to recognise and classify cry features, better understand why babies are crying, and gauge how urgent the cries are.
Lichuan Liu, a member of the research group, said that crying sounds are a special language.
“The differences between sound signals actually carry the information. These differences are represented by different features of the cry signals. To recognise and leverage the information, we have to extract the features and then obtain the information in it,” Ms Liu said.
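To make "extracting features" concrete, here is a minimal sketch computing two classic acoustic features, RMS energy (loudness) and zero-crossing rate (noisiness/pitch proxy), from a synthetic tone. These simple features are assumptions chosen for illustration; the study's algorithm draws on richer speech-recognition features.

```python
# Two basic acoustic features extracted from one frame of audio.
import math

def rms_energy(frame):
    """Loudness proxy: root mean square of the samples."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zero_crossing_rate(frame):
    """Noisiness proxy: fraction of sample pairs that change sign."""
    crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
    )
    return crossings / (len(frame) - 1)

# Stand-in "cry" frame: a 440 Hz tone sampled at 8 kHz for 50 ms.
rate, freq = 8000, 440.0
frame = [math.sin(2 * math.pi * freq * n / rate) for n in range(400)]

features = (rms_energy(frame), zero_crossing_rate(frame))
```

A classifier never sees the raw waveform; it sees a vector of numbers like these, and differences in those numbers between cries are what carry the information Liu describes.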
Researchers are hopeful their findings will apply in many settings, especially medical care, where decision making relies heavily on the experience of the healthcare provider, disadvantaging those with limited experience working with infants and children.
Ms Liu said the ultimate goal of introducing AI technology to this space is to ensure pre-verbal children are healthier and happier, and that caregivers are less pressured and stressed.