
Online discussion sites provide a valuable resource for millions of users to exchange ideas and information on a variety of topics. However, the freedom these sites provide to their content creators makes them inherently difficult to govern. The utility of these sites is often undermined by the presence of various types of unwanted content, such as spam and abusive or off-topic postings. Platforms try to combat this problem by implementing processes that determine which posts to allow on the site and which to remove. We refer to the sociotechnical practices that constitute this task as "regulation mechanisms." Efficient regulation mechanisms ensure that low-quality contributions do not drown out worthy posts on the site or exhaust the limited attention of users. More broadly, these mechanisms help platforms address issues like illegal content, copyright violations, terrorism and extremist content, revenge porn, online harassment, hate speech, and disinformation. It is important for platforms to have such mechanisms in place in order to protect their brand, prevent their users from attacking one another, and keep discussions productive and civil. To carry out these regulation tasks, platforms rely on paid or volunteer human workers, called moderators. Although moderators on different platforms adopt differing strategies to curate their content, automated tools are increasingly playing an important role in the regulation of posts across these sites.
