We trained our Social Media Account Classifier to identify nefarious automated bots, targeted harassment and abuse, mis/disinformation, toxic discourse, and other problematic social media activity. The model is trained on millions of samples and updated weekly to remain accurate and relevant.


Our technology analyzes text-based content, so it works with any text-based social media platform. Because conversational patterns are broadly similar across platforms, classification accuracy remains consistent from one platform to the next.


Our classifier employs a straightforward scoring system from 0 to 100: the higher the score, the more likely the account is engaging in problematic activity.


Example:

Normal activity: 0 - 24

Satisfactory activity: 25 - 49

Disruptive activity: 50 - 74

Problematic activity: 75 - 100
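The banding above can be sketched as a simple lookup. This is a minimal illustration only: the function name and band labels mirror the example ranges, not an official API of the classifier.

```python
def classify_score(score: int) -> str:
    """Map a 0-100 classifier score to its activity band.

    Band boundaries follow the example ranges above; the function
    name is hypothetical and for illustration only.
    """
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score <= 24:
        return "Normal activity"
    if score <= 49:
        return "Satisfactory activity"
    if score <= 74:
        return "Disruptive activity"
    return "Problematic activity"
```

For instance, an account scoring 62 would fall into the "Disruptive activity" band.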