Emerging Regulations on Content Moderation and Misinformation Policies of Online Media Platforms: Accommodating the Duty of Care into Intermediary Liability Models.

Abstract:
Disinformation, hate speech, and political polarization are problems made evident by the growing relevance of information and communication technologies (ICTs) in contemporary societies. To address these issues, decision-makers and regulators worldwide are debating the role of digital platforms in moderating content and curtailing harmful material produced by third parties. Intermediary liability rules, however, require a balance between two risks: the circulation of harmful content at scale, and censorship if excessive burdens push content providers toward a risk-averse posture in content moderation. This piece examines the trend of altering intermediary liability models to include 'duty of care' provisions, describing three models in Europe, North America, and South America. We discuss how these models are being modified to impose greater monitoring and takedown burdens on internet content providers. We conclude with a word of caution regarding the balance between censorship and freedom of expression.