Alexander von Humboldt Institute for Internet and Society (HIIG)

When scholars sprint, bad algorithms are on the run

The first research sprint of the Ethics of Digitalisation project, funded by Stiftung Mercator, has reached the finish line. Thirteen international fellows tackled pressing issues concerning the use of AI in content moderation. Looking back at ten intense weeks of interdisciplinary research, we share highlights and key outcomes.

In response to increasing public pressure to tackle hate speech and other problematic content, platform companies have turned to algorithmic content moderation systems. These automated tools promise to identify potentially illegal or unwanted material more effectively and efficiently. But algorithmic content moderation also raises many questions, none of which admit simple answers. Where is the line between hate speech and freedom of expression, and how can drawing it be automated on a global scale? Should platforms restrict the use of AI tools to illegal online speech, such as the promotion of terrorism, or extend them to everyday content governance? Are platforms' algorithms over-enforcing against legitimate speech, or are they instead failing to limit hateful content on their sites? And how can policymakers ensure an adequate level of transparency and accountability in platforms' algorithmic content moderation processes?
