AUTOMATED DECISION-MAKING AND THE ROLE OF ARTIFICIAL INTELLIGENCE IN RESOLVING DISPUTES OVER ILLEGAL CONTENT ON DIGITAL PLATFORMS
Regina Hučková


Introduction

This article offers a legal analysis of artificial intelligence (AI) in automated dispute resolution concerning illegal content on digital platforms. It explores the EU Digital Services Act (DSA), user rights, transparency, and redress mechanisms.


Abstract

The increasing volume of digital content in the virtual world of digital platforms, and its potential to contain illegal elements, creates significant challenges for moderation and for the resolution of related disputes. Automated decision-making, supported by artificial intelligence (AI), is becoming a key tool in the initial identification and processing of illegal content. This article explores the legal aspects of using AI in this context, with an emphasis on its role in the first stage of the decision-making process; the author also touches peripherally on technical aspects. The article analyses the legal framework of the European Union, in particular the new Digital Services Act (DSA). Our investigation focuses on the extent to which automated decision-making is compatible with the requirements of transparency, fairness and the protection of users’ fundamental rights. The paper also offers insights into redress mechanisms that allow users to challenge AI decisions and, where necessary, escalate disputes to a higher level, where human moderators or independent entities adjudicate them. Finally, we propose recommendations for legislators and digital platforms on harmonising the use of AI in content moderation, balancing the protection of fundamental rights with the efficiency and transparency of the entire process. The ambition of this article is to contribute to the ongoing debate on the balance between automation and human intervention in content moderation and dispute resolution, identifying the opportunities and risks associated with the use of AI on digital platforms.


Review

This article addresses a highly pertinent and complex issue at the intersection of digital governance, artificial intelligence, and fundamental rights: the use of automated decision-making in moderating illegal content on digital platforms. The abstract clearly articulates the significant challenges posed by the increasing volume of digital content and the critical role AI is assuming in its initial identification and processing. The stated focus on the legal aspects, particularly within the framework of the European Union's Digital Services Act (DSA), positions this work as a timely and relevant contribution to the ongoing global discourse on online content moderation. The article's ambition to explore the compatibility of automated decisions with transparency, fairness, and fundamental rights, alongside offering insights into redress mechanisms, is commendable.

The strengths of this proposed article lie in its critical legal analysis of AI's role in a contentious and rapidly evolving domain. By centering its investigation on the DSA, the paper promises to provide valuable insights into one of the most comprehensive regulatory frameworks governing digital services. The intention to address both the efficiency gains and the fundamental rights implications of AI in content moderation signifies a balanced approach, crucial for developing sustainable solutions. Furthermore, the commitment to propose recommendations for both legislation and digital platforms suggests a practical and forward-looking contribution, aiming to harmonise technological advancement with ethical and legal safeguards. The explicit goal to contribute to the debate on the balance between automation and human intervention underscores the paper's relevance to both academic and policy audiences.

To further enrich its contribution, the article could consider a deeper integration of the "peripherally touched" technical aspects. While the primary focus is legal, a more robust understanding of AI's technical limitations, biases, and capabilities could provide a more nuanced foundation for the legal analysis and the proposed recommendations. Additionally, while "illegal content" is a broad category, a more granular exploration of specific types (e.g., hate speech vs. copyright infringement vs. child sexual abuse material) might reveal distinct legal and ethical considerations that warrant different approaches. Finally, while the focus on EU law is appropriate, a brief comparative glance at approaches in other major jurisdictions could contextualise the DSA's innovations and challenges, strengthening the broader applicability of the article's insights and recommendations.


Full Text

Published in the EU and comparative law issues and challenges series (ECLIC).
