Content-filtering AI systems: limitations, challenges and regulatory approaches

Abstract:

Online service providers, and even governments, have increasingly relied on Artificial Intelligence ('AI') to regulate content on the internet. In some jurisdictions, the law has incentivised, if not obligated, service providers to adopt measures to detect, track, and remove objectionable content such as terrorist propaganda. Consequently, service providers are being pushed to use AI to moderate online content. However, content-filtering AI systems are subject to limitations that affect their accuracy and transparency. These limitations open the possibility for legitimate content to be removed and for objectionable content to remain online. Such an outcome could endanger human well-being and the exercise of human rights. In view of these challenges, we argue that the design and use of content-filtering AI systems should be regulated. AI ethics principles such as transparency, explainability, fairness, and human-centricity should guide such regulatory efforts.