Liability in a digitalized workplace. This paper examines fragmented EU legislation on civil liability for AI in the digitalized workplace, addressing the challenges of AI opacity, causation, and accountability, and using Croatia as a case study for safe AI integration.
The AI Act, adopted by EU lawmakers in 2024, marks a major step in harmonizing AI regulation across Europe by introducing a risk-based framework for AI system oversight. However, it does not directly address civil liability arising from the implementation of AI-based systems and technologies. To bridge this gap, the EU has updated the Product Liability Directive (PLD) and negotiated the proposed Artificial Intelligence Liability Directive (AILD). The PLD now covers damage caused by products that incorporate AI systems, while the AILD remains under discussion. Despite these efforts, liability rules remain fragmented, particularly in workplace settings. This paper explores how emerging EU legislation affects liability for AI-related damage at work, where clear rules are essential to ensure accountability and build trust in automation. It addresses challenges such as AI opacity, causation, and the balance between innovation and legal responsibility. Using examples such as medical diagnostics and autonomous machines, the paper highlights the difficulties of assigning liability among employers, employees, and AI developers. It focuses on the intersection of the contractual distribution of responsibilities, the requirements under the AI Act, and the special obligations of employers towards their employees. Croatia's legal framework on contractual liability serves as a case study, showing how EU regulations intersect with national law and worker protection rules. The analysis reveals gaps in liability allocation, shows how guidelines for AI use will grow in importance, and suggests how national and EU laws must adapt to support safe and lawful AI use in the workplace.
This paper addresses a highly timely and critical issue: the evolving landscape of civil liability in a digitalized workplace, particularly concerning the implementation of AI-based systems. Against the backdrop of recent EU legislative developments – including the AI Act, the updated Product Liability Directive, and the proposed AI Liability Directive – the author meticulously highlights the fragmentation and lacunae in liability rules, especially pertinent to workplace settings. The paper commendably frames the discussion around core challenges such as AI opacity, establishing causation, and the delicate balance between fostering innovation and ensuring legal responsibility. The use of practical examples like medical diagnostics and autonomous machines effectively illustrates the complexities involved in assigning liability among key stakeholders: employers, employees, and AI developers.

A significant strength of this work lies in its specific focus on the *workplace* context, differentiating it from broader discussions on AI liability. By examining the intersection of contractual responsibilities, the requirements under the AI Act, and the special obligations of employers, the paper provides a nuanced understanding of the existing legal architecture. The inclusion of Croatia's legal framework as a case study is particularly valuable, demonstrating how EU regulations interact with national law and worker protection rules, and revealing practical implications for Member States. This approach effectively uncovers specific gaps in liability allocation and underscores the increasing importance of comprehensive guidelines for AI use, pushing towards the adaptation of both national and EU laws to foster safe and lawful AI integration in professional environments.

While the paper clearly identifies the problems and highlights the need for adaptation, future iterations could benefit from a more prescriptive discussion of potential solutions.
For instance, further exploration into specific policy mechanisms – beyond merely "adapting laws" – that could effectively bridge the identified liability gaps would enhance its impact. This could include examining the role of mandatory insurance schemes, the development of industry-specific best practices, or specific proposals for legal amendments that clarify responsibility in multi-stakeholder AI deployments. Additionally, while Croatia serves as an excellent case study, a brief comparative glance at how other EU nations are beginning to grapple with these workplace liability issues could further contextualize the findings and strengthen the arguments for a harmonized EU approach. Nonetheless, this paper provides a robust and essential analysis, offering a foundational understanding for policymakers and legal professionals grappling with the complexities of AI liability in the modern workplace.
Published in the EU and Comparative Law Issues and Challenges Series (ECLIC).
By Sciaria