For several years now, Mon4t has been extracting various movement-related endpoints, such as tremor, sway, and step cadence. Using an AI-based model, Mon4t's technology predicts a score on one of the standard rating scales used for Parkinson's disease (UPDRS), Huntington's disease (UHDRS), multiple sclerosis (EDSS), and other conditions. In parallel, we have been collecting voice samples, which are currently offered as an additional endpoint when needed.

Voice reflects many aspects of the human condition. Our voices can convey mood, intent, and emotion, and irregularities in them can be signs of certain medical conditions. One of the most common voice changes, easily noticed even by a layperson, is hoarseness, or what most people call "losing your voice". Because motor neurons control the vocal cords, discrepancies in voice, much like a tremor of the hand, can indicate underlying neurological conditions such as Parkinson's disease, multiple sclerosis, myasthenia gravis, and amyotrophic lateral sclerosis (ALS). Thus, just as neurological endpoints can be derived from the accelerometers by analyzing how a patient walks, they can also be derived from the microphone by analyzing how the patient talks.

Mon4t records the patient's voice in various settings and performs feature extraction, with the signal processing done in the cloud, yielding four main categories of voice features. Frequency features include pitch and formant bandwidths. Prosody features relate to the rate of speech and the lengths of voiced segments and pauses. Energy features capture loudness, harmonics-to-noise ratio, and shimmer. Finally, spectral features include spectral flux, the Hammarberg index, and Mel-frequency cepstral coefficients (MFCCs).
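To make these categories concrete, here is a minimal NumPy sketch of how three of them could be computed from a raw waveform: a pitch estimate via autocorrelation (frequency), RMS level per frame (energy), and spectral flux (spectral). This is an illustrative reimplementation under simple assumptions, not Mon4t's actual cloud pipeline; frame sizes, the pitch search range, and the synthetic test tone are all choices made here for demonstration.

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def pitch_autocorr(frame, sr, fmin=50.0, fmax=500.0):
    """Frequency feature: estimate f0 of one frame via autocorrelation."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1 :]
    lo, hi = int(sr / fmax), int(sr / fmin)   # search plausible voice pitch lags
    lag = lo + np.argmax(ac[lo:hi])
    return sr / lag

def rms_loudness(frames):
    """Energy feature: root-mean-square level per frame."""
    return np.sqrt(np.mean(frames ** 2, axis=1))

def spectral_flux(frames):
    """Spectral feature: positive change in magnitude spectrum between frames."""
    mags = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    diff = np.diff(mags, axis=0)
    return np.sum(np.maximum(diff, 0.0), axis=1)

# Synthetic "voice": a steady 220 Hz tone standing in for a recording.
sr = 16000
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 220 * t)

frames = frame_signal(signal, frame_len=1024, hop=512)
f0 = pitch_autocorr(frames[0], sr)   # frequency feature, close to 220 Hz
loudness = rms_loudness(frames)      # energy feature, ~0.707 for a unit sine
flux = spectral_flux(frames)         # spectral feature, near zero for a steady tone
```

Per-frame features like these are typically summarized (mean, variance, percentiles) into a fixed-length vector per recording before being fed to a model.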

These voice endpoints can be used like any other motor endpoints, but we have now also used them to track the condition of patients who suffer from schizophrenia. The most common rating scale for this condition is the Positive and Negative Syndrome Scale (PANSS). A first-of-its-kind study was conducted under the supervision of Prof. Avi Peled: several dozen patients were rated by a psychiatrist over the course of several months, in parallel with recordings of their voices. In this case, rather than teaching the model to predict the absolute PANSS score (as we do with other rating scales), we taught it to measure the relative change (ΔPANSS), as this is the key question to answer: is the patient stable or not?

An example of the results is shown in the accompanying figure. In the upper plot, the x-axis marks the ΔPANSS between two PANSS scores provided by the psychiatrist (the same one in all cases), and the y-axis marks the ΔPANSS calculated by the model from several voice samples taken on the same dates. The lower plot shows the absolute error between the true ΔPANSS and the modeled one. These results suggest that after a few baseline assessments by a psychiatrist, a simple voice-based tool could remotely monitor the patient's condition in a quantitative and reliable manner and trigger an alert if anything goes wrong. While this tool requires further validation, it could dramatically reduce the economic burden of managing psychiatric conditions and lead to healthier, better lives for patients and caregivers.
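The relative-change idea above can be sketched as a regression on feature *differences*: each training pair is the change in the voice-feature vector between two visits, and the target is the psychiatrist's ΔPANSS for those same dates. The code below is a toy illustration with synthetic data and a plain ridge regression; the feature count, noise level, and alert threshold are all hypothetical stand-ins, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each visit yields a summary vector of voice
# features (pitch statistics, speech rate, loudness, MFCC summaries, ...).
# Training examples are differences between two visits; the target is
# the psychiatrist's delta-PANSS. All data here is synthetic.
n_pairs, n_features = 200, 8
true_w = rng.normal(size=n_features)                       # unknown "true" relation

delta_features = rng.normal(size=(n_pairs, n_features))    # visit2 - visit1
delta_panss = delta_features @ true_w + rng.normal(scale=0.5, size=n_pairs)

# Ridge regression fit via the closed-form normal equations.
lam = 1.0
A = delta_features.T @ delta_features + lam * np.eye(n_features)
w = np.linalg.solve(A, delta_features.T @ delta_panss)

# For a new pair of recordings, predict the change and flag instability
# against a hypothetical threshold.
new_delta = rng.normal(size=n_features)
predicted_change = new_delta @ w
alert = abs(predicted_change) > 2.0
```

Modeling the change rather than the absolute score sidesteps between-patient baseline differences: each patient effectively serves as their own control, which is why a few psychiatrist-rated visits suffice as a baseline.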