Typically, when someone is depressed, they will seek help from a doctor. The doctor will ask questions about their mood, mental health, lifestyle, and personal history in order to make a diagnosis. But now it might be easier to identify when someone is depressed. Researchers at MIT have created a model that can detect depression without needing responses to these specific questions, based instead on a person's natural conversational and writing style.
Project lead Tuka Alhanai indicates that we can tell a lot about someone based on their speech. But is that true? As someone who suffers from depression, I wonder whether someone would be able to tell that I'm depressed based on the way that I speak. I realize that technology might have the ability to detect something that an individual couldn't, but I also think I do a pretty good job at hiding it. (This isn't a cry for help. This is me managing it, without letting everyone around me know what's happening 24/7.)
The researchers call the model "context-free" because there are no constraints on the types of questions that can be asked, or on the types of responses that will be looked for. Using a technique called sequence modeling, the researchers fed the model text and audio from conversations with both depressed and non-depressed individuals. As the sequences accumulated, patterns emerged, such as the natural use of words like "sad" or "down", and audio signals that are flatter and more monotone.
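To make that idea concrete, here is a toy sketch of scoring a conversation turn by turn. This is purely illustrative: the MIT model is a trained neural sequence model, not a hand-written rule, and the cue words, pitch-variance feature, and thresholds below are all my own assumptions.

```python
# Toy illustration of accumulating text and audio cues across a
# conversation. Stand-in for a learned sequence model; the cue set,
# pitch-variance feature, and thresholds are hypothetical.

CUE_WORDS = {"sad", "down", "tired", "low"}  # assumed cue vocabulary

def score_sequence(turns, pitch_floor=20.0):
    """Average evidence per speaking turn.

    turns: list of (text, pitch_variance) pairs, one per turn.
    A pitch_variance below `pitch_floor` is treated as flat/monotone.
    """
    score = 0.0
    for text, pitch_variance in turns:
        words = [w.strip(".,!?") for w in text.lower().split()]
        score += sum(w in CUE_WORDS for w in words)  # text cue hits
        if pitch_variance < pitch_floor:             # monotone audio cue
            score += 0.5
    return score / max(len(turns), 1)

def predict_depressed(turns, threshold=0.75):
    """Binary prediction once the accumulated evidence passes a threshold."""
    return score_sequence(turns) >= threshold

# Example: cue words plus flat pitch push the score over the threshold.
convo = [("I've been feeling sad and down lately.", 12.0),
         ("Mostly tired, nothing helps.", 15.0)]
print(predict_depressed(convo))  # True for this toy conversation
```

The point of the sketch is only the shape of the computation: evidence from each turn accumulates over the sequence, rather than hinging on answers to fixed clinical questions.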
While I'm not trying to make it sound like I can beat this system, I would also like to point out that I always sound down and monotone. I shouldn't say always. But even when I'm happy, I tend to be sarcastic, and thus monotone. The model uses this information to determine patterns that are more likely to be seen in people who are depressed. If it then sees the same sequences in new subjects, it can predict whether they're depressed too. In tests, the model demonstrated a 77% success rate in identifying depression. This outperforms almost all other models, most of which rely on heavily structured questions and answers.
While I've made it sound like this isn't a good testing tool, I actually think it is. The team says the model is intended to be a helpful tool for clinicians, since every patient speaks differently. They're not wrong. One day I can go into my counselor's office in good spirits, and on other days the floodgates open. In the future, the model could also power mobile apps that monitor a user's text and voice for mental distress and send alerts. This could be especially useful for those who can't get to a clinician for an initial diagnosis, due to distance, cost, or a lack of awareness that something may be wrong.
I also think this is great, as it integrates technology into modern medicine and gives clinicians tools that they might not otherwise have had. Spotting a depressed person might not be that easy, but if there are more tools available to help, I think it would be better all around.