Increases in the prevalence and global impact of mental illness have made the prevention and treatment of mental health problems a public health priority. To address the need for greater access to mental health treatment, digital psychotherapy programs, such as internet-delivered Cognitive Behavioral Therapy (iCBT), have evolved and have achieved clinical outcomes comparable to traditional face-to-face therapy in meeting the needs of clients who are experiencing mental health problems.
However, to attain desired outcomes, sustaining good client engagement with iCBT programs remains a key challenge. Research has demonstrated that including a trained coach, who provides tailored guidance and encouragement to the client throughout their otherwise self-guided treatment journey, improves client engagement and leads to better mental health outcomes than unsupported interventions. From the evidence, it’s clear that forming a strong therapeutic alliance in these situations is critical to client engagement, making clients feel listened to and actively supported by coaches who care about their well-being. But a key question remains: What makes a coach’s support more or less successful in achieving positive client outcomes?
In a paper accepted at the ACM CHI Conference on Human Factors in Computing Systems (CHI 2020), called “Understanding Client Support Strategies to Improve Clinical Outcomes in an Online Mental Health Intervention,” our team of researchers aims to understand how we can most effectively leverage and further enhance the support provided by trained mental health coaches. We apply machine learning (ML) methods to gain more in-depth insights into how coaches’ specific support behaviors may come to benefit clients the most and how such effects could be maximized. The team includes researchers from Microsoft Research, SilverCloud Health, Trinity College Dublin, and Prerna Chikersal, PhD student at Carnegie Mellon University, who conducted a substantial amount of this work as part of her internship at Microsoft Research Cambridge last summer.
This research investigation is part of Project Talia, which follows a human-centered approach for identifying how ML applications can meaningfully assist in the detection, diagnosis, monitoring, and treatment of mental health problems. The project is a collaboration with SilverCloud Health, the leading digital therapeutics platform for mental and behavioral health.
Key focuses of our research include identifying:
- how support needs may vary across clients.
- how the type and frequency of coaching support could be better tailored to each client’s unique mental health and treatment needs.
- how to best improve health outcomes.
Dataset analysis of support messages using ML and data-mining methods
As a first step towards understanding which support behaviors are potentially predictive of better client outcomes, we employed a set of ML and data-mining methods to analyze the feedback messages that iCBT coaches send to their clients. Our analysis is based on a dataset of 234,735 messages that were sent by 3,481 coaches to an unprecedentedly large-scale clinical sample of 54,104 mental health clients. To protect full anonymity of both clients and coaches, only non-person identifiable, high-level interaction data was used for the analysis. The fully anonymized data was derived from SilverCloud Health’s iCBT program, Space from Depression and Anxiety, one of their most frequently used treatments. The program presents a self-guided intervention with seven core interactive psycho-educational and psycho-therapeutic modules. Clients work through the program at their own pace and time, and they receive regular, personalized feedback messages sent by a trained mental health coach.
Identifying successful support strategies in the feedback messages of iCBT mental health coaches
To better understand what characteristics in the coaches’ support messages may be linked with better client outcomes, we first needed to identify what constitutes successful support messages based on improved clinical outcomes. To this end, we computed four clinical outcome scores by averaging post-message change in client scores for clinical assessment of PHQ-9 (for depression) and GAD-7 (for anxiety) across all the messages that were sent by each mental health coach, resulting in eight outcome scores in total.
The four categories of clinical outcome scores include (more details on these in our paper):
- Message-level Change (MC): This score is the difference between actual change and the expected change in the client’s clinical score given the client’s score before the message, averaged across all messages sent by a coach.
- Message-level Improvement Rate (MR): This score is the percentage of messages sent by a coach with an actual change in client’s score higher than the expected change given the client’s clinical score before the message.
- Client-level Change (CC): This score is the sum of MC calculated for each client of a coach, divided by the total number of clients. It helps quantify consistency in improvements across different clients.
- Client-level Improvement Rate (CR): This score is the sum of MR calculated for each client of a coach divided by the total number of clients.
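The four scores above can be illustrated with a minimal sketch. The field names (`coach`, `client`, `actual_change`, `expected_change`) are hypothetical stand-ins for the study’s message records, and the paper’s exact computation of expected change may differ:

```python
from collections import defaultdict

def coach_outcome_scores(messages):
    """messages: list of dicts with keys
    'coach', 'client', 'actual_change', 'expected_change'."""
    per_coach = defaultdict(list)  # coach -> [(client, delta, improved)]
    for m in messages:
        delta = m["actual_change"] - m["expected_change"]
        improved = m["actual_change"] > m["expected_change"]
        per_coach[m["coach"]].append((m["client"], delta, improved))

    scores = {}
    for coach, rows in per_coach.items():
        # Message-level Change: mean of (actual - expected) over all messages
        mc = sum(d for _, d, _ in rows) / len(rows)
        # Message-level Improvement Rate: share of messages beating expectation
        mr = 100 * sum(i for _, _, i in rows) / len(rows)
        # Client-level scores: compute MC and MR per client, then average
        by_client = defaultdict(list)
        for client, d, i in rows:
            by_client[client].append((d, i))
        cc = sum(sum(d for d, _ in v) / len(v)
                 for v in by_client.values()) / len(by_client)
        cr = sum(100 * sum(i for _, i in v) / len(v)
                 for v in by_client.values()) / len(by_client)
        scores[coach] = {"MC": mc, "MR": mr, "CC": cc, "CR": cr}
    return scores
```

In the study, this computation is done separately for PHQ-9 and GAD-7 assessments, yielding the eight scores per coach.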
We then used these outcomes as features in K-means clustering, through which we obtained K=3 clusters of coaches whose messages are generally linked with either high, medium, or low improvements in client outcomes. We hypothesized that there are differences between the messages sent by coaches that our clustering identified as generally achieving higher client outcomes and those whose clients tended to have lower outcomes. Comparing the characteristics of the messages sent by coaches in the high and low outcome clusters enables us to identify potentially more effective support strategies.
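A sketch of this clustering step, assuming a feature matrix with one row per coach and the eight outcome scores as columns. Standardizing the features before K-means is our assumption here; the paper may use different preprocessing:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def cluster_coaches(outcome_matrix, k=3, seed=0):
    """outcome_matrix: array of shape (n_coaches, 8) holding the
    MC, MR, CC, CR scores for both PHQ-9 and GAD-7."""
    X = StandardScaler().fit_transform(outcome_matrix)
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(X)
    # Rank clusters by their mean standardized outcome so the labels read
    # low / medium / high rather than arbitrary K-means indices.
    order = np.argsort([X[km.labels_ == c].mean() for c in range(k)])
    names = {c: name for c, name in zip(order, ["low", "medium", "high"])}
    return [names[c] for c in km.labels_]
```

Each coach then carries a cluster label, and message characteristics can be compared between the high- and low-outcome groups.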
Using text mining to understand language features that impact client outcomes
To analyze the content of the coaching messages, we used lexicon-based text-mining approaches that extract text characteristics without the risk of identifying any text content to preserve anonymity.
This included analyzing the following features:
- sentiment and emotional tone of a message by extracting the percentages of positive and negative words, and words related to eight emotion categories, such as fear, joy, and anger
- extraction of first-person plural pronouns (like we, us, and our) as indicators of a supportive therapeutic alliance
- use of encouraging phrases, such as “well done” or “good job”
- the percentages of words related to mental processes, such as abstraction (like know or thought) or social behavior (like call, say, or tell)
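A toy illustration of lexicon-based feature extraction, with tiny hypothetical word lists standing in for the full lexicons used in the study (which cover far richer vocabulary and emotion categories):

```python
# Hypothetical miniature lexicons for illustration only.
POSITIVE = {"great", "well", "progress", "glad"}
NEGATIVE = {"sad", "worried", "afraid", "difficult"}
WE_PRONOUNS = {"we", "us", "our", "ours"}
ENCOURAGING = ("well done", "good job", "keep it up")

def message_features(text):
    """Extract lexicon-based features without storing any message content."""
    words = text.lower().split()
    n = max(len(words), 1)
    return {
        "pct_positive": 100 * sum(w in POSITIVE for w in words) / n,
        "pct_negative": 100 * sum(w in NEGATIVE for w in words) / n,
        "pct_we": 100 * sum(w in WE_PRONOUNS for w in words) / n,
        "n_encouraging": sum(p in text.lower() for p in ENCOURAGING),
    }
```

Because only aggregate counts and percentages leave this function, the original message text never needs to be retained, which is how this style of analysis preserves anonymity.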
Among others, our semantic analysis revealed statistically significant findings that more successful support messages consistently used more positive and fewer negative words; included fewer words conveying negative emotions such as sadness and fear; used more first-person plural pronouns; and contained significantly more encouraging phrases. For our mental process variables, we found that more successful messages consistently employed more words associated with social behavior and fewer words associated with abstraction. (For more details, see our paper.)
While this approach enables us to derive specific characteristics of support messages that generally tend to correlate with improved client outcomes, it does not account for how the relevance of those characteristics may vary with a client’s specific circumstances at the time of support, such as their current mental health, their level of engagement with the iCBT program, and other context variables.
Therefore, as a next step, we wanted to better understand the more complex relationship that likely exists between the support strategies used by coaches in their messages and multiple client context variables to assess if certain linguistic characteristics may be more or less important for different client contexts. We believe that identifying such relationship patterns could enable a more effective tailoring of support strategies to the specific circumstances of each client.
Towards personalization: Client context–specific support strategies
To identify multi-dimensional context-related strategy patterns, we needed to discover associations between multiple client context variables and each of the previously identified support strategies (like positive words, pronouns, and others mentioned above). For this, we used the Apriori algorithm, well known for its use in frequent item set mining and association rule mining. It first generates a set of frequent items that occur together, and then it extracts association rules that explain the relationship between those items.
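The two Apriori stages can be sketched in miniature over transactions that pair client-context items with the support strategy observed in a message. The item names are hypothetical, and a real implementation would mine itemsets of arbitrary size rather than stopping at pairs:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    """transactions: list of sets of items. Returns itemsets (as tuples)
    whose support meets the threshold; limited to size <= 2 for brevity."""
    n = len(transactions)
    items = sorted({i for t in transactions for i in t})
    freq = {}
    for size in (1, 2):
        for combo in combinations(items, size):
            support = sum(set(combo) <= t for t in transactions) / n
            if support >= min_support:
                freq[combo] = support
    return freq

def rules(freq, min_confidence):
    """Extract single-antecedent association rules from frequent itemsets."""
    out = {}
    for itemset, support in freq.items():
        if len(itemset) < 2:
            continue
        for a in itemset:
            antecedent = (a,)
            if antecedent in freq:
                conf = support / freq[antecedent]  # P(rest | antecedent)
                if conf >= min_confidence:
                    consequent = tuple(i for i in itemset if i != a)
                    out[(antecedent, consequent)] = conf
    return out
```

A rule such as (“client disengaged”) → (“high use of first-person plural pronouns”) with high confidence would indicate that, among messages sent in that context, the strategy co-occurs frequently.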
We extracted association rules separately for both our “more” and “less” successful outcome clusters, and then we calculated the salience of each identified rule as the absolute difference between its confidence in the two clusters (more salient rules are used more frequently by coaches in either the “more” or “less” successful cluster). Salience reveals how prominent each rule is when compared to others. We derived the 1,584 most salient rules, visualized as a heatmap (Figure 3) comprising eight support strategies and 66 multi-dimensional contexts. In other words, salience here reflects the degree to which a certain strategy is used more by the “more” or “less” successful cluster in each client context.
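The salience computation itself is simple. A sketch, assuming each cluster’s rules are held in a dict mapping rule keys to confidence values, and treating a rule absent from one cluster as having zero confidence there (our assumption for illustration):

```python
def salience(rules_more, rules_less):
    """Absolute confidence difference of each rule between the
    'more' and 'less' successful clusters."""
    all_rules = set(rules_more) | set(rules_less)
    return {
        r: abs(rules_more.get(r, 0.0) - rules_less.get(r, 0.0))
        for r in all_rules
    }
```

Sorting by salience and keeping the top rules then yields the strategy-by-context matrix visualized in the heatmap.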
From this heatmap we can see, for example, that support messages that use few words of fear (“Words of fear” [low] on the y-axis) and more first-person plural pronouns (labelled “1st Person plural pronouns” [high] on the y-axis) have high salience among more successful messages when a client hasn’t engaged much with the intervention, as is reflected through the first six context variables from the left on the x-axis (“Program page views” [none] and “Contents shared with coach” [none]). This means that messages with fewer words related to fear, as well as those that use more first-person plural pronouns, are strongly associated with more successful support messages. This effect is particularly prevalent in situations where clients are disengaged.
While there remain many more rules and patterns to unpack, this analysis begins to show how we can use ML to identify support strategies that are important for a particular client context, paving the way for more personalized coaching support and interventions. Building on these findings, our on-going research expands explorations of how we can help effectively personalize human support. More specifically, we hope to provide mental health coaches with data-derived insights into support strategies that may be most beneficial to employ for specific clients.