Open Review of Management, Banking and Finance

«They say things are happening at the border, but nobody knows which border» (Mark Strand)

Artificial Intelligence and the Degree of Liability of Producers

by Marina Romano* and David García Guerrero**

ABSTRACT: This study aims to provide an overview of the current situation, highlighting the main areas of debate and uncertainty. Scenarios are explored in which current legislation can be extended or adapted to better understand and regulate the challenges posed by AI. Possible future directions are envisaged for effective regulation that does not stifle innovation but, on the contrary, guides it responsibly, ensuring the protection of individual and collective rights. The goal is to stimulate a broader reflection on the future of AI regulation, a rapidly developing field that requires close scrutiny and ongoing dialogue among technology experts, regulators, AI producers, and civil society.

SUMMARY: 1. Introduction. – 2. The spread of AI systems and open issues. – 3. The regulatory framework and the question of the legal subjectivity of AI systems. – 4. The set of rules based on the distinction of risks. – 5. The representation of a “chain of responsibilities”. – 6. The strengthening of obligations and producer responsibility. – 7. Concluding remarks.

1. In the technological era in which we live, Artificial Intelligence (AI) is not only a frontier of science and innovation, but also a phenomenon that is redefining the way we live, work and interact. Its influence extends to a variety of fields, from medicine to industrial automation, from finance to education[1]. However, with these transformations come unprecedented and complex challenges, especially in the legal field. This article aims to analyze some issues related to the liability of producers of AI systems, a topic of growing importance in light of the ubiquity and constant evolution of these technologies.

The focus of the analysis is on understanding how current law interfaces with the challenges posed by AI[2]. It involves an examination of the existing legal framework, highlighting its gaps and potential in effectively regulating AI systems. One of the most critical issues in this context is the legal subjectivity of such systems. The fundamental question is how legal responsibilities can be assigned to systems that operate with an ever-increasing degree of autonomy and complexity[3].

In addition, the paper dives into the discussion of the delicate balance between promoting technological innovation and ensuring adequate legal protection. The rapid evolution of AI poses unique challenges in terms of adaptability and anticipation of regulations. The responsibility of AI producers, therefore, becomes a central topic, reflecting on the importance of establishing a regulatory framework that is at once clear, fair, and flexible, capable of evolving in step with technological advances[4].

This study, therefore, aims to provide an overview of the current situation, highlighting the main areas of debate and uncertainty. Scenarios are explored in which current legislation can be extended or adapted to better understand and regulate the challenges posed by AI. Possible future directions are envisaged for effective regulation that does not stifle innovation but, on the contrary, guides it responsibly, ensuring the protection of individual and collective rights[5].

The goal is to stimulate a broader reflection on the future of AI regulation, a rapidly developing field that requires close scrutiny and ongoing dialogue among technology experts, regulators, AI producers, and civil society.

2. The increasing development of AI systems[6] and their use in more and more areas of everyday life have long confronted the jurist with new interpretive problems concerning the suitability of traditional categories to represent this complex phenomenon, including that of liability arising from a possible harmful action of AI.

As a first approximation, an AI system is a complex technology with ancient roots[7], based on algorithms that, during their processing activities, simultaneously “self-learn” from the very data they process. In this mechanism, until recently, it was possible to say that human input remained central and irreplaceable with reference to stages requiring a so-called creative approach or the examination of “new” situations[8]. The state of scientific knowledge regarding this technology showed, in essence, AI systems that, while very high-performing in terms of their ability to analyse huge amounts of data and the speed with which they process them, were still dependent on human imprint[9].

Because of its characteristics, the use of AI systems has proven to be increasingly useful for humans and in more and more areas: from applications in health care in the search for increasingly personalized medical therapies or the performance of surgeries, to the field of trading financial instruments, to the judiciary, to self-driving cars.

The scenario seems to be changing with the advent of AIs capable of generating content on user demand, such as ChatGPT, i.e., language models capable of producing persuasive text and answering user questions, or MidJourney, which can create images from text descriptions. This is a new generation of AI capable of generating content that is durable over time and therefore potentially capable of influencing the culture and thinking of a community.

It is also reasonable to expect that in the future the scenario may change further and AI systems may become even more pervasive in human life. It is therefore necessary to address the many questions[10] that also arise on the legal level[11], including, as mentioned, that of liability in the case of harmful events caused to third parties by AI.

In this specific area, concrete cases can arise either from negotiating activity attributable to an AI system, from which hypotheses of contractual liability can arise, or from extra-contractual torts caused to third parties. In both cases, there may be numerous issues to be resolved; a number that increases once the chain of subjects involved in the activity of these systems is also taken into consideration: from the user who appropriates the results of the activity carried out, and who may be a public[12] or private[13] subject, to the maintainer of the system, to the one who markets it, up to the producer/trainer to whom the authorship of the algorithm and of the initial input underlying its operation is essentially traceable.

The diversity of the issues implies that each must be analytically and specifically investigated; nevertheless, the position taken by the producer of the AI system deserves special attention, since it would seem to acquire, as will emerge below, an importance that we might describe as strategic in the pursuit of the regulatory objectives animating the debate of the Union bodies.

3. It should be pointed out, first, that with regard to the legal framework of reference, at present, the interpreter cannot but register a certain difficulty in the face of a system that is still somewhat uncertain, despite the fact that AI systems are already widespread in so many areas.

Second, shifting the field of observation to the production of the effects of the activity carried out by AI systems, it is necessary to clarify as a preliminary matter whether or not it is possible to attribute subjective autonomy to the AI system.

The issue of the possible attribution of legal subjectivity[14] to AI systems has been addressed by the doctrine[15] which concluded that, at present, there would be no conditions for its solution in a positive sense, given the impossibility of assimilating the AI system to a legal subject, being rather still considered a “thing”[16].

This conclusion was also shared by the European legislator[17] who, in principle[18], confirmed the notion that the world of robotics should be aimed at assisting human activity and not replacing it, and that the development of these technologies should take place from a so-called anthropocentric perspective.

With regard to the domestic legal system, and on the basis of the exclusion of legal subjectivity for AI systems, the doctrine has pointed out first of all that, as far as liability for damage caused to third parties is concerned, there would be no room for the applicability of Article 2043 of the Civil Code, since in this case there is no subject to whom the conduct that caused the damage could be attributed[19].

Numerous arguments have then been made for applying the various hypotheses of strict liability[20], including recourse to Article 2050 of the Civil Code on damage caused in the exercise of dangerous activities[21], on the basis that the AI system could be included in the category of dangerous products. In this regard, however, the doctrine has pointed out that such a solution could, if anything, be valid only in cases of defects in the construction and conception of the system itself, and that in any case it would presuppose that the injured party succeeds in providing proof of the damage, the defect in the product and the causal link; proof that would be difficult to provide, precisely because of the complexity of the AI system with regard to its operation[22].

On the other hand, with regard to the use of Article 2054 of the Civil Code on damage caused by the circulation of vehicles, its application seemed possible on the basis of the circumstance that AI systems were initially used precisely in the driving of driverless vehicles[23], an area where the question of liability for any damage caused to third parties was immediately raised.

On this subject, a possible solution, in the hypothesis of damage caused to third parties, would have been admissible by referring to the compulsory civil liability regime envisaged for the sector, consisting of an insurance policy to be taken out by those who take advantage of this mode of circulation[24]; a solution that would, however, raise further doubts of application regarding the identification of the parameters on which to calculate the amount of the relevant insurance premium[25].

The possibility of bringing the possible harmful activity caused by an AI system within the scope of Directive 85/374/EEC, implemented by Presidential Decree No. 224 of May 24, 1988 on product liability, has also been explored, provided, however, that the AI system can be brought within the conceptual notion of a product[26]. The attempt seemed, at the current state of the law[27], rather arduous because of the inherent characteristics of AI based on highly advanced information technology.

The issue of AI regulation, precisely because of its a-territorial character, should be more correctly framed in the European context, where in recent years a broad debate has been developing about the need to give birth to the world’s first regulation of AI, one that takes into account, on the one hand, the need to promote research and develop investment in this area so as not to fall behind in the global challenge with major powers such as China and the United States, and, on the other, the need to lay solid foundations and precise rules allowing the development of these technologies while safeguarding privacy and respecting fundamental freedoms.

The path firmly pursued was to dictate rules valid for the entire Union, in the opposite direction, therefore, to allowing the self-regulation of AI by companies, which would supposedly themselves prevent the harmful consequences of innovation. This would have meant allowing companies producing AI systems to dictate their own rules through codes of conduct and thus evade control by official bodies and competent authorities.

In fact, this second option has been deemed too risky in light of the new versions of so-called generative AI that are proving capable of astonishing performance, but also very dangerous in terms of protecting the rights of users[28].

Numerous measures have followed thus far[29], up to the three most recent proposals, whose completion is still awaited. Specifically, the debate is focusing on reaching a shared text of the European Commission’s proposed Regulation of April 21, 2021[30], as amended on June 14, 2023, the so-called Artificial Intelligence Act.

Then there are the two proposed directives, both of September 28, 2022: the first on Product Liability[31] and the second on AI Liability[32].

4. While awaiting the completion of the Artificial Intelligence Act’s approval process, it is possible to draw some firm points from which to delineate the position occupied by the producer of an AI system, that is, the party who gave the initial input and to whom the operating algorithm can be traced.

In the presence of an inherently complex technology such as that of an AI system, a distinction among the possible risks associated with the use of an AI system was already envisaged in the Parliament Resolution of October 20, 2020 (2020/2014(INL))[33].

Specifically, it is possible to infer from it a kind of dual system based on the division between “high-risk” systems, in which the AI is characterized by a high degree of decision-making autonomy that makes the consequences of its actions substantially unpredictable[34], and “other systems” referred to in Article 8.

This distribution of possible risks corresponds, in terms of liability, to a dual regime. First, strict liability for so-called high-risk AI systems, in which the operator, according to the provision of Art. 4(3), could give exonerating evidence only in the case of force majeure[35].

For “other systems,” i.e., those other than the ones mentioned in Art. 3, Art. 8 below prescribes instead a fault-based liability regime, where the operator could give exculpatory evidence by showing that he or she took all appropriate measures to prevent the occurrence of the harmful event, or that due diligence was observed. The latter is stated verbatim to consist in the operator’s having selected “an AI system suitable to the task and expertise, duly commissioning the AI system, monitoring its activities and maintaining its operational reliability by periodically installing all available updates”[36].

On the other hand, with regard to the Proposal for a Regulation of April 21, 2021, as amended by the European Parliament on June 14, 2023, the latter envisages a somewhat different distribution of risks. This proposal, first and foremost, does not directly address the issue of liability, but rather lays the groundwork for a European regulatory framework aimed at reliable and safe AI, so as to ensure a level playing field and protection for all citizens of the Union, while at the same time strengthening Europe’s competitiveness and industrial base in this area.

It is stressed, as mentioned above, that the development of AI systems should be done with respect for the fundamental rights and freedoms of the Union[37].

More specifically, Article 5 first outlines a number of AI systems that are considered prohibited because they involve, for example, techniques of manipulation and profiling of individuals implemented without their knowledge, or AI systems capable of affecting the behaviour of groups or classes of individuals.

This is followed by the identification of so-called high-risk systems in Art. 6; for these systems, subsequent provisions set out the requirements that must be met and the establishment of a risk management system in Art. 9, even going so far as to provide, in Art. 14, for human oversight throughout their operation.

Thus, it is a more structured, risk-based system that distinguishes between unacceptable risk, which makes the system prohibited, and high risk, which is followed by a set of timely prescriptions.

The other systems, which are not included in these provisions, are consequently qualified as low risk and are exempt from the specific requirements mentioned above.

Instead, with the proposed 2022 AI liability directive, the legislature aimed to remedy the possible regulatory uncertainty regarding liability that could result from the different regimes adopted by individual domestic legislatures.

Specifically, this uncertainty concerns the member states to which companies will export or operate their products and services related to AI systems. Indeed, Recital 6 points out that in the absence of harmonized EU-wide rules on compensation for damages caused by AI systems, providers, operators, and users of AI systems, on the one hand, and injured parties, on the other, would be faced with different liability regimes, resulting in unequal levels of protection and a distortion of competition between companies in different member states.

Also in this proposal, reference is made to “high-risk” systems, for the definition of which, in Article 2, explicit reference is made to the proposed regulation reviewed above.

With regard to damage caused by AI systems, the directive aims to provide an effective basis for claims in relation to fault consisting of non-compliance with a duty of care under Union or national law. Establishing a causal link between such non-compliance and the output produced by the AI system, or the failure of the AI system to produce an output that caused the harm in question, can be difficult for claimants. Therefore, a rebuttable presumption of causation has been established in Article 4(1). This presumption is the least burdensome measure likely to meet the need for fair compensation of the injured party.

The defendant’s fault must be proven by the plaintiff in accordance with applicable Union or national rules and may be established, for example, by the failure to comply with a duty of care under the AI Act or other rules established at the Union level.

Such fault may also be presumed by the court on the basis of failure to comply with a judicial order for disclosure or preservation of evidence under Article 3(5). However, it is appropriate to introduce such presumption of causation only when it can be considered probable that the fault in question affected the relevant output of the AI system or its failure to produce it, which can be assessed by the court on the basis of the general circumstances of the case[38].

Recital 10 of the proposal clarifies that “in order to comply with the principle of proportionality, only fault-based liability rules governing the burden of proof on those seeking compensation for damage caused by AI systems should be harmonized in a targeted manner. This directive should not harmonize general aspects of tort liability regulated in different ways by national tort rules, such as the definition of fault or causation, the different types of damage justifying claims for compensation, the distribution of liability among multiple tortfeasors, contributory negligence, the calculation of damages, or limitation periods.”

Recital 14 below also notes that “this Directive should follow a minimum harmonization approach. This approach allows the plaintiff to invoke more favorable national rules in cases where the damage was caused by AI systems. National laws may thus retain, for example, the reversal of the burden of proof under national fault-based liability rules, or national regimes of no-fault liability (referred to as ‘strict liability’), of which there are already many variants in national laws, possibly applicable to damage caused by AI systems.”

The provision for the intervention of the court, in order to facilitate access to information useful in supporting the injured party’s claim for compensation, is related to the principle of transparency of AI systems, namely the need to be able to access all the information required to understand the functioning of the algorithm underlying the specific system, as well as its logic[39].

The system of liability that emerges from this proposal is thus different from that envisaged in the 2020 Resolution, which provided for a dual liability regime. Here, in fact, it is a system based on a rebuttable presumption of causation in the presence of a breach of a duty of care[40]. This conceptual framework is also secured by the judicial procedure described above, which is aimed at a real understanding of the functioning of the AI system in order to identify exactly the perimeter of liability.

5. In light of the aforementioned representation of the risks associated with an AI system, in the various proposals made so far by the European legislature, it is possible to trace a kind of “chain of responsibilities” of the parties who are involved in a given AI system that has caused harm to a third party.

As a preliminary point, the parties involved can be both those who use or market an AI system and those who may to some extent affect its operation, namely those who designed the system or those who intervene in the stages of updating its basic instructions.

However, an examination of the provisions of the respective proposals above does not allow an unambiguous identification of these parties.

Specifically, Resolution 2020/2014, in Article 3, refers to the figure of the “operator,” distinguishing two subcategories. First, the “front-end” operator, that is, the natural or legal person who exercises a degree of control over a risk related to the operation and functioning of the AI system and benefits from its operation. This figure differs from the “back-end” operator, that is, the natural or legal person who, on an ongoing basis, defines the characteristics of the technology, provides the data and the essential back-end support service, and therefore also exercises a high degree of control over a risk related to the operation and functioning of the system itself. It is also envisaged that the latter entity may coincide with the producer, as defined in Article 3 of Directive 85/374/EEC on liability for defective products, and would therefore be subject to the relevant discipline.

For the purpose of representing the “chain of responsibility,” Article 11 below provides verbatim that “where there is more than one operator of an AI system, such parties are jointly and severally liable,” so that recourse action will be available against the other liable parties to the one who has fully compensated the injured party.

Then, with regard to the applicable discipline, a distinction is still made in the case where the front-end operator is also the producer of the AI system; in this case it is stipulated that the provisions of the resolution prevail over those of the product liability directive. If, on the other hand, the back-end operator is also the producer, then the regulations dictated by Directive 85/374/EEC apply. Finally, if the only party is the producer, the provisions dictated by the resolution take precedence.

Regarding the proposed Regulation 2021/206, in its updated version of June 14, 2023, Article 3 sets out a more articulated distinction among the parties involved in an AI system, in that it includes the following figures: the supplier[41], the operator/user[42], the authorized representative[43], the importer[44], the distributor[45] and the operator[46].

With regard to the proposed Directive 2022/496 on AI liability, for the identification of the entities covered by an AI system, Article 2 expressly refers to the proposed regulation mentioned above and thus to the definitions of these entities given in the aforementioned Article 3, as updated following the amendments of June 14, 2023.

Ultimately, if the producer is to be understood as the person who authored the algorithm, that is, the one who gave the system the initial input and imparted its operational purpose to it, then on the basis of the above proposals one should probably refer to the figure of the back-end operator, referred to in Resolution 2020/2014, or to that of the supplier, which the proposed Regulation 2021/206 conceptually identifies as the developer of the algorithmic technology, a concept that, as seen, is also taken up by the proposed directive examined.

6. On the basis of what has been noted so far, given that an AI system is an inherently complex technology, especially the latest generative technology characterized by a high degree of autonomy[47] and capable of producing unpredictable results, and considering, also, the present impossibility of attributing legal subjectivity to the AI system itself, it can be noted that, in the event of damage caused to third parties by the processing activity carried out by the system, such damage must always be attributed to a human person.

Specifically, the doctrine is inclined to consider as liable the person on whom the effects of any negotiating activity carried out by the AI would fall[48]; in the case of a non-contractual tort caused by an AI system, liability could instead fall on the user of the system, or on the operator/user, i.e. the one who has appropriated the result of the AI’s activity, without prejudice, however, to the concurrent liability of the producer of the algorithm[49].

The latter would ultimately be the party to whom all those involved in the activity of the system, as users in various capacities, would turn, and to whom, in the event of damage caused by activity attributable to the AI system, the injured party’s claim would ultimately be directed.

In fact, already in the proposed regulation on the AIA, in the 2021 version, the figure of the producer, as the developer of the system, was given central importance in the chain of responsibilities, as it was the entity on which a number of specific obligations were placed with reference to systems qualified as high-risk.

This is attested by Title III of the proposal, which devotes the entire Chapter 3 to the obligations incumbent on the supplier/manufacturer. Thus, Articles 16 et seq. describe what the latter must ensure with regard to the security of systems of this type, up to and including, in Article 23, a specific obligation to cooperate with the competent national authority, including access to the logs automatically generated by their AI systems, insofar as these logs are under their control.

It is also stipulated in Article 43 below that suppliers/manufacturers of these types of AI systems must ensure that they have undergone an appropriate conformity assessment procedure before they are placed on the market or put into operation.

Following the amendments of June 14, 2023, with the introduction of Article 29a into the text, the apparatus of safeguards to which the provider is bound has been further enriched by the “fundamental rights impact assessment,” again for high-risk systems. It basically requires, among many other elements, an analytical description of the purpose of the system, an indication of the categories of stakeholders and, above all, information on the consequences that the use of the system could have on fundamental rights, as well as an organized plan to curb any negative effects in this area. This amounts to a real assumption of responsibility by this party, which also includes, as provided in paragraph 3, a monitoring activity to be carried out subsequently during the operation phase of the system.

The recent approval of a shared text by the Union bodies also shows further innovations affecting the figure of the supplier/producer.

Among the objectives that were achieved at the end of the extensive debate that took place among the Union bodies on the AIA was that of the inclusion in the text of so-called foundation models, i.e., generative models that process large masses of data. For these models, a discipline has been provided that is structured on two levels, again based on the degree of riskiness of the model itself.

Specifically, in the case of models evaluated as having a high impact on the basis of the computational power with which they are equipped, an ex ante evaluation placed on the supplier has been provided for, which must address IT security, transparency and the commitment to share all technical documentation related to the system; all of this, as mentioned, before the system itself is placed on the market.

For models that do not qualify as high-impact, on the other hand, only transparency requirements apply, which are nonetheless necessary for their dissemination in the market.

7. As pointed out, this apparatus of rules is still being defined. However, it would seem possible to infer from it that a framework is emerging in which the figure of the AI producer/developer comes to play a central role in terms of responsibility, and thus a strategic one in the pursuit of the Union’s objectives, namely to support the development of these new technologies while respecting the fundamental values and freedoms that represent the common European heritage.

In essence, the guiding principle behind the above measures would seem to be that the few major producers of these expensive AI systems, which may be large IT companies or research institutions, must also act as their guarantors in terms of compliance with the whole articulated and detailed set of requirements that have been dictated to make AI transparent in terms of purpose, impact on the community, and security. AI systems, then, must be trustworthy “upstream” and be perceived as such by the entire community of users, that is, by all those located “downstream” of this process, whether they are users, investors or anyone else who, with respect to AI, is in any case unable to manage its risks.

Thus, one can understand the justifying reason for the apparatus of obligations on the producer, namely that the system of rules should be an opportunity for the controlled and transparent development of new technologies.

The effort of the European legislator has certainly been considerable so far, and the commitment to arrive soon at an adequate regulation is attested by the extensive debate still underway. In light of what has been noted so far, however, there would remain some critical aspects that should be taken into adequate consideration in the future final regulation.

It should be noted that, on the basis of the above classification of AI-related risks, and considering individually each party involved in the use of an AI system in order to delineate their respective liability, the position of parties other than producers should also be considered more carefully. These parties can turn directly to the producer in all those cases of damage that can be linked to the technical system of operation underlying the AI system.

In other words, in the case of damages that may have been caused by what we might refer to as the operation of the “initial algorithmic structure” of the AI[50], these would be attributable to its producer/developer who may have violated some of the obligations laid down, and it would therefore be possible to charge this party, according to the chain of responsibilities, with compensation.

Producers/developers, however, should be held harmless in the case of damage caused by the misuse of AI by other parties who, while fulfilling their own obligations[51], used the system abnormally with respect to the operating information received or distorted the processed data.

To be sure, ascertaining when and to what extent the use of AI was improper certainly presents further complexity; nevertheless, such a possibility should not be neglected a priori, but should be given due consideration in the issuance of an organic and comprehensive text of AI regulation.

From the considerations above, it can be noted that the operation of an AI system would, therefore, involve a plurality of subjects and different planes of inquiry: from the protection of personal data, to the protection of copyright or of the market, in view of the fact that concentration in the hands of a few large IT companies, which are showing themselves to be the only ones capable of bearing the huge costs of building and operating these complex systems, may have significant repercussions on competition.

Thus, the approaches of investigation can be multiple, but the starting point is always to have to clearly establish the perimeter of respective responsibilities.

In this regard, it seems that the path taken by the European legislator[52], based on the distinction of risks, on the one hand, and on a broad apparatus of obligations, on the other, is leading toward a regime of strict liability on the part of the producer; along these lines is also the proposed Product Liability directive[53], which in Article 4 includes software in the notion of product and which confirms a regime of strict producer liability.

The question arises in completely new and disruptive terms regarding the new generation of high-impact generative models that have achieved unprecedented levels of self-determination. These models can be fully functional, properly maintained and updated and yet still cause harm that in purely logical terms is attributable solely to AI activity.

In such hypotheses, the link between human action and the damage produced is essentially severed; the traditional categories of liability suitable for describing the phenomenon show all their inadequacy, having lost their traditional functions of sanction and deterrence. In addition, the distinction among the actors involved in an AI system (producer, developer and user) is no longer representative of the phenomenon; rather, the spread of increasingly high-performance generative models is making even the roles of each individual actor opaque.

There remains, however, at present, the impossibility of attributing legal subjectivity to the AI; reference will then have to be made in any case to a subject, probably to the large information technology companies that are investing enormous resources in these projects[54], since they expect great benefits in return, not only economic, and will therefore be able to bear the burden of an aggravated system of liability, in relation to which, moreover, an adequate insurance system will certainly develop. In the case of damage to third parties, it is on these subjects that a sort of “new generation” strict liability should be placed, adapted to the new needs and concretely capable of resolving the hypotheses of damage; probably, because of the complexity of the phenomenon, this could be configured as a liability with a clear restorative connotation and without admissibility of proof to the contrary.


[1] For a review of the use of Artificial Intelligence in different fields, vid. S. J. Russell and P. Norvig, Artificial Intelligence: A Modern Approach, 3rd ed., Prentice Hall, Hoboken, 2016, in which a comprehensive overview of AI is given, touching on topics ranging from its practical application to its ethical and social implications.

[2] An exploration of the legal challenges posed by AI is covered in R. Susskind and D. Susskind, The Future of the Professions: How Technology Will Transform the Work of Human Experts, Oxford University Press, Oxford, 2019. The Authors discuss the impact of AI on the legal professions and other fields, emphasizing the importance of updating the legal environment to keep pace with rapid technological change.

[3] For a discussion of the legal subjectivity and accountability of AI systems, refer to: S. Chopra and L. F. White, A Legal Theory for Autonomous Artificial Agents, University of Michigan Press, Michigan, 2011. The monograph addresses the legal issues associated with the autonomy of AI systems and proposes regulatory frameworks to manage them.

[4] Vid. R. Calo – A. M. Froomkin – I. Kerr (eds.), Robot Law, Edward Elgar Publishing, Cheltenham, 2016, which examines how Law can and should respond to the challenges posed by robotic innovation and AI.

[5] Refer to: N. Bostrom, Superintelligence: Paths, Dangers, Strategies, Oxford University Press, Oxford, 2014, where the Author analyzes the potential risks of advanced AI and discusses strategies to ensure the safe and controlled development of these technologies.

[6] Contributions on AI are now very numerous; among many, for a first insight: G. Alpa, Prefazione, p. 13 in Id. (ed.), Diritto e intelligenza artificiale, Pisa, 2020; U. Salanitro, Intelligenza artificiale e responsabilità: la strategia della Commissione europea, in Riv. dir. civ., 2020, p. 1247 ff.; M. Gabbrielli, Dalla logica al deep learning: una breve riflessione sull’intelligenza artificiale, in U. Ruffolo (ed.), XXVI Lezioni di diritto dell’intelligenza artificiale, Turin, 2021, p. 22 ff.

[7] For references on the origin of the phenomenon, see: A. Astone, Sistemi intelligenti e regole di responsabilità, in Persona e mercato, 3/2023, p. 485 ff.

[8] G. Sartor – F. Lagioia, Decisioni algoritmiche tra etica e diritto, in U. Ruffolo (ed.), Intelligenza artificiale. Il diritto, I diritti, l’etica, Milan, 2020, p. 69 ff.

[9] E. Battelli, Necessità di un umanesimo tecnologico: sistemi di intelligenza artificiale e diritti della persona, in D. Buzzelli – M. Palazzo (eds.), Intelligenza artificiale e diritti della persona, Pisa, 2022, p. 92 ff.

[10] The topic of AI systems is also cross-cutting and of interest from ethical and philosophical perspectives. On this topic, we refer for all to: P. Perlingieri, Sul trattamento algoritmico dei dati, in Tecnologie e diritto, 2020, p. 191.

In this direction, in the European context, one can frame the 2016 Ethically Aligned Design (EAD), in which the need emerged for their development to be functional to the well-being of society, for there to be transparency in the design phase of the system, and for their operation to be programmed with respect for human rights. Also raised was the question of the risk of human dependence on machines, a risk that can be curbed only by the prior, and above all shared, enunciation of a set of principles common to as many countries as possible, precisely in order to create a block against those that have so far pursued a policy of developing AI systems that is insensitive to any rules. This resulted in the draft ethical guidelines for reliable AI systems, published on December 18, 2018 by the High-Level Expert Group on Artificial Intelligence. This draft was written in light of the Union Charter of Fundamental Rights and the Treaty principles on respect for freedom, human dignity, equality and the protection of minorities. In essence, the guidelines are concerned with ensuring that AI systems maintain their instrumental character with respect to human activity and the welfare of citizens, and that their use complies with the criterion of justice, thus implying compensation if harm occurs. On the subject, it has been pointed out that although these principles are not legal norms, they nevertheless serve as indications for the legislative interventions to follow, given the goal of protecting the fundamental rights of the individual. Thus: G. Finocchiaro, Intelligenza artificiale contratto e responsabilità, in Contr. impr., 2020, p. 720.

[11] On the legal level, the application issues are numerous; in addition to the aspects of liability arising from activities attributable to AI systems, of relevant importance are also those of the protection of subjects’ personal data. In this regard, on this topic, for an initial approach, please refer to: C. Colapietro, Gli algoritmi tra trasparenza e protezione dei dati personali, in federalismi.it, 5, 2023, p. 151 ff.; A. Viglianisi Ferraro, Le nuove frontiere dell’intelligenza artificiale ed i potenziali rischi per il diritto alla privacy, in Persona e Mercato, 2, 2021, p. 393 ff.

It should also be pointed out that the algorithm on which the operation of an AI is based is covered by copyright.

The use of AI continually raises new issues; recently there has been discussion of the unauthorized use of a well-known person’s likeness reproduced through AI, with consequences in terms of infringement of image rights even when the use is not for commercial purposes. News has also come of a major new legal dispute between the well-known New York Times newspaper and OpenAI concerning the unauthorized use of copyrighted content in the training of its ChatGPT generative models.

[12] For example, the Public Administration itself, which makes use of intelligent systems in judicial activity or in the collection and management of fines for traffic violations. On the use of AI systems by the Public Administration, see for all: D. U. Galetta, Il diritto dell’amministrazione pubblica digitale, Giappichelli, Turin, 2020; F. Romano, Intelligenza artificiale e amministrazioni pubbliche: tra passato e presente, in Ciberspazio e diritto, 21, 1, 2020, p. 69 ff.; G. Avanzini, Decisioni amministrative e algoritmi informatici: predeterminazione, analisi predittiva e nuove forme di intellegibilità, ESI, Naples, 2019.

[13] Such as in the case of large pharmaceutical joint-stock companies that thanks to artificial intelligence systems were able to prepare, in a relatively short time, the vaccine to counter the Covid 19 epidemic.

[14] The issue goes back a long way in the thinking of the European legislator, who already in the Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), in recitals Q and T, raised the issue of the legal liability of AI systems, especially those with greater decision-making autonomy, for which it would be increasingly difficult to consider them mere tools in the hands of their manufacturer or user. Nevertheless, the proposal makes clear that at present it would still be impossible to consider these robots as legal entities to which direct liability could be attributed for the consequences of their actions. The consequences of their actions (or omissions) would therefore still be referable to their producer, user or owner.

[15] A. D’Alessio, La responsabilità civile dell’intelligenza artificiale antropocentrica, in Persona e mercato, 2, 2022, p. 249 ff. In a positive sense is expressed instead: G. Sartor, L’intenzionalità dei sistemi informatici e il diritto, in Riv. trim. dir. proc. civ., 1, 2003, p. 23 ff.

[16] L. M. Lucarelli Tonini, L’IA tra trasparenza e nuovi profili di responsabilità: la nuova proposta di “IA Liability Directive,” in Dir. informaz. inform., 2, 2023, p. 327 ff.

[17] A first significant step in this direction was taken with the European Parliament Resolution of February 16, 2017 (2015/2103(INL)). For an overview of the interventions of the European legislator: D. Chiappini, Intelligenza artificiale e responsabilità civile: nuovi orizzonti di regolamentazione alla luce dell’Artificial Intelligence Act dell’Unione Europea, in Riv. it. inf. e dir., 2, 2022, p. 95 ff.

[18] However, the aforementioned Resolution includes the idea that in the future, at least with reference to more advanced systems capable of some sort of autonomous decision-making, the possibility of electronic subjectivity may also be envisaged. Indeed, in the hypothesis of more sophisticated AI systems, given their recognized capacity for autonomous, and therefore unpredictable, reasoning, the Commission is invited to consider the possibility of treating these systems as “electronic” legal persons, and as such endowed with a degree of subjectivity such that they themselves may be liable for the harm caused to third parties. Hence the proposed introduction of guidelines and codes of conduct, as well as the provision of a compulsory insurance system, with the establishment of a fund for damage not covered. It would also follow that a system of registration, at the Union level, of the most advanced robots should be envisaged, in order to establish a sort of database.

[19] Thus: L. M. Lucarelli Tonini, L’IA tra trasparenza e nuovi profili di responsabilità cit., p. 334.

[20] For the use of Article 2047 of the Civil Code related to damages caused by an incapacitated person, see: A. Santosuosso – M. Tomasi, Diritto. Scienza. Nuove tecnologie, Padua, 2016, p. 338 ff. For recourse to Article 2048 of the Civil Code on the subject of damage caused by minor children, see: M. Costanza, Robots e impresa, in U. Ruffolo (ed.), Intelligenza artificiale e responsabilità, Milan, 2017, p. 112 ff. For the use of Article 2049 of the Civil Code on the liability of masters and principals, see: M. Costanza, Robots e impresa, cit., p. 112 ff.

The use of both Article 2051 of the Civil Code concerning damage caused by things in custody and Article 2052 of the Civil Code concerning damage caused by animals was also raised. 

[21] In a favourable sense: L. Coppini, Robotica e intelligenza artificiale: questioni di responsabilità civile, in Pol. Dir., 4, 2018, p. 735 ff.

[22] Thus: A. Astone, Sistemi intelligenti, cit. p. 487.

[23] On issues related to driverless traffic systems, see for all: A. Albanese, La responsabilità civile per i danni da circolazione di veicoli ad elevata automazione, in Eur. e dir. priv., 4, 2019, p. 995 ff.

[24] A. Davola – R. Pardolesi, In viaggio con il robot: verso nuovi orizzonti della r.c. auto (“driveless”)? in Danno e resp., 5, 2017, p. 625 ff.

[25] Thus: L. Coppini, Robotica e intelligenza artificiale, cit. p. 737.

[26] Thus: A. Santosuosso – C. Boscarato – F. Caroleo, Robot e diritto: una prima ricognizione, in Nuova giur. civ. comm., II, 7-8, 2012, p. 494 ff.; N. F. Frattari, Robotica e responsabilità da algoritmo. Il processo di produzione dell’intelligenza artificiale, in Contr. impr., 1, 2020, p. 458 ff.

[27] However, the argument could be reconsidered considering the proposed Product Liability Directive.

[28] On the discussion that is animating the European Council, Commission and Parliament, aimed at reaching a regulatory agreement on the Artificial Intelligence Act, refer to: L. De Biase, L’intelligenza artificiale e i nodi dell’approvazione di un regolamento europeo, Il Sole 24 Ore, November 28, 2023. It is also reported that many scientists and researchers, sharing concerns about the self-regulation of AI systems, including those such as ChatGPT, are strongly urging the European bodies to approve the AI Act.

[29] The Communication from the Commission to the European Parliament of April 25, 2018 (COM 2018/237) and the Communication from the Commission of December 7, 2018 (COM 2018/795): the former is “Artificial Intelligence for Europe,” and the latter concerns the “Coordinated Plan on Artificial Intelligence.”

With them, the Commission has signaled the need for the deployment of artificial intelligence systems to be encouraged in the territory of the Union through a coordinated system of investment in order to be internationally competitive. However, a warning emerges that this development must always take place while respecting the fundamental values of the Union and the Charter of Fundamental Rights.

Subsequently there were: the February 19, 2020 Security Report – COM 2020/64 -, the February 19, 2020 AI White Paper – COM 2020/65 – and the October 20, 2020 European Parliament Resolution – 2020/2014INL – with recommendations to the Commission on an AI liability regime. In detail, it is stated that only specific and coordinated adjustments to the respective internal liability systems would be needed. 

[30] COM/2021/206. It should be noted that on December 8, 2023, the European Parliament and the Council reached a compromise political agreement on the AIA, mediating between the need for the development of these technologies, which are certainly recognized as having great potential for the well-being of humanity, and the need for this development to be in line with the protection of citizens. A highly controversial issue was the permissibility of remote biometric identification by law enforcement authorities in public spaces. On this point, the agreement reached states that such activity is permitted provided that safeguards are in place for citizens. Another issue was the inclusion of so-called foundation models such as ChatGPT; for these there will be a preliminary assessment regarding their impact.

The legislative text of the outline agreement will need to be further detailed before it is submitted to the Parliament and Council for a final vote, and its approval process is expected to be completed in February 2024.

[31] COM/2022/495

[32] COM/2022/496

[33] For a commentary on the goals of Resolution 2020/2014INL, refer to: A. D’Alessio, La responsabilità civile dell’intelligenza antropocentrica, cit., p. 257.

[34] Article 3(c) of the resolution verbatim qualifies “high risk” as “a significant potential in an AI system that operates autonomously to cause harm or injury to one or more persons in a random manner and that is beyond what could reasonably be expected; the significance of the potential depends on the interaction between the severity of the possible harm or injury, the degree of decision autonomy, the likelihood of the risk materializing, and the manner and context of use of the AI system.”

[35] Article 5 of the resolution also indicates a detailed graduation in the amount of compensation depending on the type of damage and its severity.

[36] Thus Art. 8(b).

[37] Specifically, Recital 4a of the proposed regulation, as amended as a result of the amendments adopted on June 14, 2023, verbatim provides that “in view of the significant impact artificial intelligence can have on society and the need to build greater trust, it is essential that artificial intelligence and its regulatory framework be developed in accordance with the values of the Union as enshrined in Article 2 TEU, fundamental rights and freedoms as enshrined in the Treaties, the Charter, and international human rights law. As a prerequisite, artificial intelligence should be an anthropocentric technology. It should not replace human autonomy or assume the loss of individual freedom and should primarily serve society and the common good. Measures should be provided to ensure the development and use of ethically integrated artificial intelligence that respects the values of the Union and the Charter.”

[38] Recital 25 states verbatim that the rebuttable presumption of causation can operate only on condition “that the negligent conduct affected the output produced by the AI system or the failure of that system to produce an output, which in turn caused the damage.”

[39] On this topic, refer to: L. M. Lucarelli Tonini, L’IA tra trasparenza e nuovi profili di responsabilità cit., p. 330.

[40] Article 3 of the proposed directive refers to “a duty of care under Union or national law that is directly intended to protect against the harm that has occurred.”

[41] Article 3(2) of the proposed regulation verbatim clarifies that it is “a natural or legal person, public authority, agency or other body that develops an AI system or has an AI system developed for the purpose of placing it on the market or putting it into service under its own name or trademark, whether in return for payment or free of charge.”

[42] Following the amendments of June 14, 2023 (Amendment 172), the proposed regulation in Article 3(4) defines operator as: “any natural or legal person, public authority, agency or other body that uses an AI system under its authority, except when the AI system is used in the course of a personal, non-professional activity.”

[43] Article 3(5) of the proposed regulation textually defines an authorized representative as “any natural or legal person established in the Union who has received a written mandate from an AI system provider for the purpose, respectively, of fulfilling and executing on its behalf the obligations and procedures established by this Regulation.”

[44] Article 3(6) of the proposed regulation defines importer verbatim as “any natural or legal person established in the Union who places on the market or puts into service an AI system bearing the name or trademark of a natural or legal person established outside the Union.”

[45] Article 3(7) of the proposed regulation defines distributor verbatim as “any natural or legal person in the supply chain, other than the supplier or importer, who makes an AI system available on the Union market without changing its properties.”

[46] Article 3(8), following the amendments of June 14, 2023 (Amendment 173), defines operator as “the supplier, operator, authorized representative, importer and distributor.”

[47] ChatGPT is a language model developed by OpenAI; it is based on the Generative Pre-trained Transformer (GPT) architecture. It is an AI program designed to understand and generate text in a coherent and contextually relevant manner. The goal is to provide consistent answers to user questions or to generate text based on specific prompts. Then there is the Gemini model developed by Google, which can understand and operate on different types of information, including text, code, audio, images, and video. Ernie Bot, on the other hand, is Chinese-made and can converse in text, like ChatGPT, answer questions, and solve mathematical problems.

[48] In this regard, even in the case of smart contracts, where the entire contractual process is automated and regulated by software, the activity would be attributable to the human who made the initial choice. On the subject: M. Maugeri, Smart contracts e disciplina dei contratti, Bologna, 2021, p. 28; G. Salito, voce Smart contract, in Digesto disc. priv., Torino, 2019, p. 5.

[49] Thus: A. Astone, Sistemi intelligenti e responsabilità, cit., p. 496.

[50] Directive 85/374/EEC on defective products would be unsuitable to handle this hypothesis because of the inherent characteristics of AI, in which the damage might have been produced by the autonomous combination that the AI system makes between the data it acquires and the operation of the algorithm; the AI system would not be a product within this scope of discipline at present. However, the approval of the new Product Liability directive (COM/2022/495) is awaited.

[51] As noted above, the proposed Regulation dedicates Article 25 to the authorized representative, Article 26 to the importer, and Article 27 to the distributor.

[52] On this topic, refer to: C. Del Federico, Intelligenza artificiale e responsabilità civile. Alcune osservazioni sulle attuali proposte europee, in Jus civ., 5/2023.

[53] The proposed Product Liability Directive (COM/2022/495).

[54] It is reported in the press that OpenAI has reached an agreement with Microsoft to develop generative AI systems, under which the latter has pledged to support investments of $13 billion, an astronomical sum which even the investment planned by the entire European Union for the coming years does not come close to; this denotes the strong interest in future AI developments.

Author

* Marina Romano is Assistant Professor of Private Law at Parthenope University of Naples.

** David García Guerrero is Profesor de Derecho Financiero y Tributario at National University of Distance Education (UNED).

Paragraphs nos. 2,3,4,5,6 were written by Marina Romano. Paragraph no. 1 was written by David García Guerrero. Paragraph no. 7 was written by Marina Romano and David García Guerrero.
