Identifying the good and the bad: How machine learning is applied and applicable in the moderation of user comments on news

User commentary on news sites is intensely discussed and disputed: harmful effects are feared, and the gap appears wide between normative conceptions of ideal use and users' actual practices. Moderation is a promising lever, but given limited resources, outlets mostly focus on policing and banning "bad" commentary rather than encouraging "good" posts, even though highlighting and engaging with valuable contributions could serve as a form of user gratification and thus feed a positive loop. Resource-rich outlets such as the New York Times and the Washington Post, however, have started to explore ways to automatically identify and highlight "good" contributions as well. In this talk, we provide an overview of available ideas and applications and discuss criteria, both previously tested and ready to be implemented, borrowed from deliberation theory and the (in)civility literature, for machine learning in the realm of user-comment moderation.
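As an illustration only (not part of the original abstract), the following minimal sketch shows what one such machine-learning classifier for "good" versus "bad" comments could look like, here using TF-IDF features and logistic regression in scikit-learn; all comment texts, labels, and parameters are invented for the example.

```python
# Illustrative sketch only: a toy "good"/"bad" comment classifier using
# TF-IDF features and logistic regression in scikit-learn.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented training set: 1 = "good" (constructive), 0 = "bad" (uncivil).
comments = [
    "Thanks for the report; here is a source that adds useful context.",
    "I disagree with the author, and here is my argument why.",
    "You are all idiots and this article is trash.",
    "Typical lying journalists, go away.",
]
labels = [1, 1, 0, 0]

# Bag-of-words pipeline: TF-IDF weighting followed by a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(comments, labels)

# Score a new comment; in practice such scores would only rank candidates
# for human moderators, not replace them.
print(model.predict_proba(["Here is a constructive counterargument."]))
```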

Springer, N. & Haim, M. (3/2018). Identifying the good and the bad: How machine learning is applied and applicable in the moderation of user comments on news. Invited presentation at ThursdAI, MIT Media Lab, Cambridge, MA.