When it comes to social media, harassment is unfortunately part of the experience for many users.
Social media companies have made efforts to block users or filter trigger words that lead to online abuse, but much of that abuse is subtle enough that an algorithm alone may miss the relevant cues.
Now, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has created Squadbox, a crowdsourcing tool that enables people who have been targets of harassment to coordinate “squads” of friends to filter messages and support them during attacks.
Drawing on interviews with YouTubers, scientists, and activists, the CSAIL team found that many people who are harassed rely on friends and family to shield them from abusive messages.
“If you just give moderators the keys to your inbox, how does the moderator know when there’s an email to moderate, and which email has already been handled by other moderators?” said David Karger, an MIT professor and leader of the research. “Squadbox allows users to customize how incoming email is handled, divvying up the work to make sure there's no duplication of effort.”
The tool lets the owner of an inbox set up filters that automatically forward incoming content to its moderation pipeline. When an email arrives, a moderator decides whether it is harassment or can be forwarded back to the owner’s inbox.
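The flow described above — queue each incoming message, have exactly one moderator claim and decide it, and avoid duplicated effort — can be sketched roughly as follows. This is a minimal illustration, not Squadbox's actual code; all function and field names here are assumptions.

```python
def route_incoming(message, squad):
    """Forward an incoming message into the squad's moderation queue.

    Hypothetical sketch: each task starts unclaimed, so moderators
    can see which emails still need attention.
    """
    task = {"message": message, "status": "pending", "claimed_by": None}
    squad["queue"].append(task)
    return task


def moderate(task, moderator, is_harassment):
    """A moderator claims a pending task and decides its fate.

    Claiming prevents duplicated effort: once one moderator has
    handled a task, others see it is already done.
    """
    if task["claimed_by"] not in (None, moderator):
        return "already handled"  # another moderator got there first
    task["claimed_by"] = moderator
    # Harassing mail is rejected; everything else goes back to the inbox.
    task["status"] = "rejected" if is_harassment else "delivered"
    return task["status"]
```

A benign email would be claimed by one friend and marked `delivered`; a second moderator checking the same task would be told it was already handled.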
“Previous solutions depended entirely on automated techniques, or were overly dependent on social solutions like simply giving one’s account information to a friend,” said Clifford Lampe, a professor of information at the University of Michigan. “This line of work helps provide a map for one hybrid solution to harassment that augments human support with tools in a meaningful way.”
While the system currently works with email, the team plans to extend Squadbox to social media platforms.
The system also lets users create “whitelists” and “blacklists” of senders whose emails will be automatically approved or rejected without moderation. Users can also deactivate and reactivate the system, view toxicity scores for messages, and even respond to harassers.
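The whitelist/blacklist triage described above amounts to a simple three-way decision made before any human moderation. The sketch below illustrates that logic under assumed names; it is not Squadbox's implementation.

```python
def triage(sender, whitelist, blacklist):
    """Decide whether a sender's email skips moderation entirely.

    Hypothetical sketch: whitelisted senders are delivered directly,
    blacklisted senders are dropped, and everyone else is queued
    for the squad to review.
    """
    if sender in whitelist:
        return "auto-approve"   # delivered without moderation
    if sender in blacklist:
        return "auto-reject"    # rejected without moderation
    return "moderate"           # queued for a squad member
```

An email from a trusted colleague would bypass the squad entirely, while an unknown sender would still go through human review.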
In testing, the team found that having friends moderate eased privacy concerns and allowed decisions better tailored to each victim. However, users in those tests worried that friends might be slow to respond, and that the system could simply be “spreading” the burden of harassment. Having multiple moderators per squad could help with this issue, MIT says.