Researchers from Caltech have created an artificial intelligence (AI) algorithm that can monitor online social media conversations for trolling. The algorithm automatically detects offensive, harassing and negative posts, and its rapid detection could help prevent online harassment.
Current methods for detecting trolling are either fully automated but not interpretable, or they rely on a static set of keywords. Neither approach has proved effective. Both scale poorly because they ultimately depend on humans to operate, and human error and bias undermine the results. Keyword lists also go stale quickly: new terms appear constantly, and existing terms can shift in meaning.
During the study, the Caltech team used GloVe (Global Vectors for Word Representation), a word-embedding model that can discover new and relevant keywords. GloVe represents words as points in a vector space, where the distance between two words reflects how closely they are related linguistically. Starting from a single seed keyword, the model can find other keywords that sit nearby, revealing clusters of relevant terms that can then be monitored for trolling. The result is an ever-evolving keyword search that continuously scans social media. Context matters as well: GloVe indicates the extent to which certain keywords are related, which provides insight into how the words are actually being used.
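To illustrate the keyword-expansion idea, the minimal sketch below uses the gensim library and its publicly available pretrained GloVe vectors to find terms that sit close to a seed keyword and to measure how related two terms are. This is not the study's own code; the model name, seed words, and helper function are assumptions made for the example.

```python
# Minimal sketch of GloVe-based keyword expansion, assuming the gensim
# library and its pretrained "glove-wiki-gigaword-100" vectors; the
# Caltech study's own model, data, and seed terms are not reproduced here.
import gensim.downloader as api

# Load pretrained GloVe vectors (each word maps to a 100-dimensional vector).
glove = api.load("glove-wiki-gigaword-100")

def expand_keywords(seed_word, top_n=10):
    """Return the words whose vectors lie closest to the seed keyword,
    i.e. candidate additions to an evolving monitoring list."""
    return [word for word, _score in glove.most_similar(seed_word, topn=top_n)]

# Start from one seed term and surface a cluster of related terms to monitor.
print(expand_keywords("harassment"))

# Cosine similarity indicates how strongly two keywords are related,
# which hints at the context in which they tend to be used.
print(glove.similarity("troll", "harassment"))
```

Because new candidate terms come from the embedding space itself rather than a hand-maintained list, this kind of nearest-neighbor expansion can keep the monitored vocabulary current without constant manual curation.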
The team’s goal is to give social media platforms a powerful tool to help stop online harassment.
The study can be accessed here.