Mental health struggles are a serious problem, and they can strike at any time without warning - you never know whether you will wake up in the morning happy or to a mix of emotions. As more people struggle with their mental health, they seek out platforms where they can vent those feelings, relate to others, and feel like someone truly understands them. This is often done anonymously - many people do not want those around them to know they are struggling.

A variety of digital mental health communities have appeared over the years, ranging from VC-backed companies to small groups of friends trying to change how digital mental health support works.

It takes an incredible amount of trust to open up to a random stranger online and discuss your issues with them, and I commend the people who are willing - and able - to share these details in a safe environment. As the era of digital mental health support matures, we need to ensure our systems are designed privacy-first. Not only does this make users feel safe, it also helps protect our systems. Obviously, this is not a perfect world - if a malicious actor gains access to the hosting server, it is game over, whether you are the biggest company in the world or the smallest fish in the ocean - but there are reasonable measures that every company entrusted with some of our arguably most sensitive information should take.

In my spare time I decided to see how secure and private these platforms really are. Here I take a look at the top digital mental health provider and the security and privacy issues I have noticed and reported over two years. I would like to note that working with the vendor has, on all occasions, been an enjoyable experience; they have always been prompt, upfront, and reasonable.

In summary, I believe there should be more oversight, transparency, and regulation around digital mental health platforms. We, as users, deserve to know exactly how our information is used and stored, and what security mechanisms are in place to prevent staff or malicious users from impersonating us and stealing our sensitive conversations.

It is important to note that every security or privacy concern mentioned here has been previously reported to the vendor; unless otherwise noted, no patch has been supplied. I reiterate that this is by no means malicious.

7 Cups of Tea

7 Cups of Tea is currently the largest mental health support platform, with over 100,000 registered members and volunteers - now including registered, licensed therapists. Some of the bugs listed here still exist: they were responsibly disclosed over two years ago, and no patch has been supplied.

Disclosure: I reached out to the vendor over the last few weeks and attempted to find a middle ground for publishing this article. The vendor expressed that the article does not sit well with them due to the nature of the disclosures below. I have published a modified version, removing some of the more sensitive information at this time.

Password Change Advisory

If you have ever had an account on 7 Cups of Tea, I recommend you change your password immediately, and if you have used said password on other sites, you should update those passwords as well.

Full User Information Disclosure

During my research, I found an unauthenticated endpoint that accepted a "userName" parameter and returned a JSON structure containing the entire user object (including MD5-hashed passwords). Reported June 25th, 2017, and fixed within 25 minutes of the initial report.

A sample of what it returned has been provided below:

{
    "screenName": "[REDACTED]",
    "email": "[REDACTED]",
    "passwd": "[REDACTED]",
    "fName": "[REDACTED]",
    "mName": "",
    "lName": "[REDACTED]",
    "gender": "[M/F]",
    "facebookToken": "[REDACTED]",
    "socialMedia1": "[REDACTED]",
    "socialMedia2": "[REDACTED]",
    "primaryPhone": [REDACTED],
    "primaryPhConf": "0",
    "phoneConfCode": null,
    "DOB": "YYYY-MM-DD",
    "DOBlistingOptOut": "0",
    "bgCheck": "NO",
    "productionFlag": "1",
    "stripeID": null,
    "paypalEmail": null,
    "adminNotes": "[REDACTED]",
    "listenerIP": "[REDACTED MD5 HASH]",
    "lastKnownIP": "[REDACTED]",
    "lastUserAgent": "Mozilla/5.0 (X11; CrOS x86_64 9460.73.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/59.0.3071.134 Safari/537.36",
    "lastTrackerCheckIn": "[REDACTED]",
    "therapist_credentials": [REDACTED],
    "therapist_license": [REDACTED],
    "therapist_academicProgram": [REDACTED],
    "therapist_insurance": [REDACTED],
    "age": "[REDACTED]",
    ...
}

As you can tell, this gives away an alarming amount of information. Passwords are hashed with unsalted MD5; full dates of birth, background checks, emails, admin notes, names, genders, and even therapist credentials are exposed. And this is only an excerpt - other fields included badge information, nominations, followers, and more - effectively, everything they know about you. This has been fixed; however, the API endpoint in question is still leaking some information it should not.

Plaintext Credentials over the wire

This bug still exists, and it is easily fixable. The fundamental issue with sending credentials over the wire in clear text is that they end up in logs on the server. Even a long, complex password will not matter if someone gets access to the server's log files (or gets an RCE into the application) - it is sitting right there in plain text.

Plaintext credentials over the wire.

And yes, it happens when you make an account too.

Creation of User Account (Mobile Application)
Application sending request over the wire

By allowing plaintext credentials over the wire, any man-in-the-middle can access your account, see your chat history, and interact as you. Furthermore, a malicious user with access to those logs on the backend could access everyone's account with ease. This may sound oddly familiar - like Facebook exposing passwords internally.
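The storage side of the fix is well understood: never write credentials to logs, and persist only a salted, deliberately slow hash so that nobody with backend access can read passwords back. A sketch using only Python's standard library (the iteration count is an illustrative choice, not the vendor's):

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative; tune to your hardware budget

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2 digest; store both the salt and the digest."""
    salt = os.urandom(16)  # unique per user, so identical passwords differ
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # -> True
print(verify_password("wrong guess", salt, digest))                   # -> False
```

The per-user salt defeats precomputed tables, and the iteration count turns each guess from nanoseconds into a meaningful cost.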

Account Enumeration

An endpoint currently exists that allows you to brute-force usernames, discovering users that are visible as well as those that are invisible, banned, or deactivated - and it has no rate limit.

Example with Username taken
(1 = available)
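Fixing this does not require redesigning the endpoint; per-source rate limiting alone makes brute-force enumeration impractical. A minimal token-bucket sketch (the rate and burst values are illustrative, not a recommendation for this vendor):

```python
import time
from collections import defaultdict

class TokenBucket:
    """Per-IP token bucket: refill `rate` tokens/second, burst up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = defaultdict(lambda: float(capacity))
        self.stamp = defaultdict(time.monotonic)

    def allow(self, ip: str) -> bool:
        last = self.stamp[ip]  # new IPs are seeded with the current time
        now = time.monotonic()
        self.stamp[ip] = now
        # Refill tokens for the elapsed interval, capped at the burst size.
        self.tokens[ip] = min(self.capacity, self.tokens[ip] + (now - last) * self.rate)
        if self.tokens[ip] >= 1:
            self.tokens[ip] -= 1
            return True
        return False

bucket = TokenBucket(rate=1.0, capacity=5)
results = [bucket.allow("203.0.113.7") for _ in range(10)]
print(results.count(True))  # -> 5 (the burst is allowed, the rest are denied)
```

At one request per second, walking even a modest username list takes days instead of minutes - and the attempts become obvious in monitoring.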

Persistent XSS in AI Chatbot (Noni)

Noni was introduced as a machine-learning chat bot that uses basic neural networks to learn how to hold conversations. Simple enough. Right now, all of Noni's messages bypass the XSS filter (the JavaScript "strip_tags" function): if you remove the function in your browser, ask Noni to set a reminder, and hand it an XML payload, you get yourself a persistent XSS. The topping on the cake is that it will send you a "reminder" containing the XSS payload at intervals. This persistent XSS has been patched.

Persistent XSS
It appears their regular expression breaks due to my XSS
Some variants of XSS with proof of persistence.
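The durable fix for this class of bug is escaping user input on output, on the server, rather than relying on a client-side filter the user can simply remove in their browser. A sketch of the idea in Python (the function and markup are hypothetical, not 7 Cups code):

```python
import html

def render_reminder(user_text: str) -> str:
    """Escape user-supplied text before interpolating it into markup.

    html.escape converts &, <, > (and quotes) into entities, so any
    script tag the user typed renders as inert text.
    """
    return f"<li class='reminder'>{html.escape(user_text)}</li>"

print(render_reminder("<script>alert(1)</script>"))
# -> <li class='reminder'>&lt;script&gt;alert(1)&lt;/script&gt;</li>
```

Because escaping happens where the markup is produced, it holds even if a client strips its own filters - exactly the attack described above.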

That's all for now, I hope you enjoyed this post!



