«They say things are happening at the border, but nobody knows which border» (Mark Strand)
by Giuseppe Desiderio*
ABSTRACT: The essay, after some introductory remarks on the phenomenon of artificial intelligence and machine learning, examines the reasons for the spread of AI applications not only among banks but also among supervisors (RegTech and SupTech) and the problems related to the in-house production of technologies as an alternative to using outsourcers. Corporate reporting also falls within the perimeter of these reflections on AI, not only with regard to accounting reporting but also in the CorpTech perspective, that is, with regard to internal reporting, a possible tool to facilitate the fulfillment of the board’s monitoring duties, in particular by non-executive directors, as well as the obligations of the same board to prepare adequate organizational structures for joint stock companies, in the specific perspective of Italian company law.
Possible lines of development of AI applications in banks and beyond are then hypothesized, including in the perspective of the enactment of the AI Act, now at the proposal stage, of which it should be noted that it focuses on the protection of rights vis-à-vis public authorities, leaving uncovered the area in which so-called “surveillance capitalism” moves and thrives.
It is then considered that AI tools can be used to develop indicators for ESG goals, which see banks and the financial system at the forefront of their effective deployment. On the other hand, the problems associated with nudging and with accountability for decisions in which “machines” participate are considered, the operation of such machines being sometimes opaque (the so-called black box).
The essay closes with a general and brief reflection on the risks of inadequately controlled spread of AI systems and the impacts they may have on the very shape of our society.
SUMMARY: 1. Foreword – CorpTech. – 2. Banking supervision and AI: RegTech and SupTech. – 3. AI and external and above all internal reporting. The board of directors’ monitoring activity. – 4. AI and organizational structures. – 5. Assisted and augmented AI and decision-making processes, inter-organic dynamics of the board and responsibility. – 6. Autonomous AI, futurism and futuristic AI.
1. In recent times and, to a greater extent, in the last three years, Italy has seen a growing production of articles and books dedicated to the investigation of issues relating to the placement of legal experience in the “infosphere” ([1]) and, in particular, to the legal dimension of technological innovations connected to Artificial Intelligence (AI) and its evolution represented by machine learning systems. The reasons why interest has now reached much higher levels than in the past can be identified in the convergence of the effects of different economic and scientific-technological phenomena: the growing reality of Big Data ([2]) and the developments of computer science, on the one hand, and the developments of machine learning and deep learning algorithms (including natural language processing), on the other hand, all catalyzed also by the aim of economic exploitation of the results of these developments, which makes increasing resources available to finance those same developments.
Given that the very notion of the AI phenomenon and, therefore, the conditions of use of this term would deserve clarifications and insights that are not possible here, it seems sufficient to start from the definition of artificial intelligence contained in Article 3 of the EU Regulation Proposal on AI (the EU AI Act) ([3]), under which «‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with», where machine learning, inductive programming and the various statistical approaches are indicated in that annex ([4]). The recurring taxonomy, substantially adopted by the Proposal for a Regulation, will therefore be kept in mind, which distinguishes between «the characteristics of assisted AI (supporting specific tasks), or augmented AI (supporting and strengthening human decision making), which can develop up to AI amplified by real man-machine co-decision or, in perspective, even to autonomous AI (of integral replacement of the human being, up to the extreme case of autopoietic AI, of self-managed development of artificial intelligence itself)» ([5]). Moreover, it seems appropriate to focus largely on phenomena included in so-called narrow AI (i.e. systems that, through functions of perception, classification and understanding, up to abstraction and reasoning, support or integrate human intelligence) and not on so-called general AI, that is, the autonomous one, to which only a few closing considerations will be dedicated.
However, the distinctive feature of AI systems that use machine learning is that of “learning” as they are used and, even earlier, in the training phase of the algorithms, which constitutes an essential and indispensable moment in the development of these systems and which in turn requires a database that must then be transformed into information on which to train. In fact, it has been duly noted that «the data is the engine of artificial intelligence tools and digital services are the product of the use of the latter for commercial purposes» ([6]).
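Since the training phase mentioned above is central to the argument, a minimal and purely illustrative sketch may help the non-technical reader: a model’s parameters are progressively adjusted on a (here invented) data set before the system is ever deployed. The function and figures are hypothetical and stand in for the far more complex procedures actually used.

```python
# Minimal, purely illustrative sketch of the "training phase": a model's
# parameters are adjusted against a data set before the system is used.
# All data and names here are invented for illustration.

def train(samples, labels, epochs=200, lr=0.05):
    """Fit a one-feature linear model y = w*x + b by gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            err = (w * x + b) - y   # prediction error on this example
            w -= lr * err * x       # adjust weight against the error
            b -= lr * err           # adjust bias against the error
    return w, b

# Toy "database transformed into information": y is roughly 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.1, 2.9, 5.2, 7.0, 9.1]
w, b = train(xs, ys)
print(round(w, 1), round(b, 1))  # learned parameters approximate 2 and 1
```

The point of the sketch is only that the “knowledge” of the system resides in the parameters produced by training, which is why the question of who owns the training results matters.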
The perspective in which I intend to develop some considerations here is further restricted in two respects. First, it focuses on the application of these technologies to the corporate reality, the impacts on the related corporate governance ([7]), and in particular on the fulfillment of the directors’ duties and on the inter-organic relationships between the board and the CEO. A neologism has been coined to refer briefly to the phenomena of inclusion of technology in the reality of companies: CorpTech (crasis between Corporate and Technology) ([8]). Second, the perspective of this work is further restricted to the consideration of CorpTech with specific reference to the banking sector. On the other hand, in this, as indeed in the contiguous financial and insurance sectors, banking sector regulation (EU and national) ([9]) defines specific governance relations as binding regulatory paradigms, with respect to which the AI impact assessment takes on specific importance.
2. A specific need that has favored and above all will favor in the future the implementation of the use of AI applications by banks is represented by the really large, perhaps excessive, proliferation of sector regulations, which implies an increasingly articulated, organized and burdensome compliance activity by the banks themselves. In particular, after the 2008 financial crisis, the reaction to the crisis itself led to a dramatic increase in the obligations placed on banks, and not only in the EU, which further complicated the compliance framework for global players, since they also have to deal with the different obligations provided for in the different jurisdictions in which they operate. This overall phenomenon has also earned the neologism RegTech (crasis of Regulatory and Technology), which for banks and financial institutions corresponds to the «digitisation and automation of reporting processes to supervisory authorities and compliance with current regulations» ([10]).
On the other hand, the Bank of Italy, the central bank that in Italy performs the function of supervisory authority over banks (but CONSOB has also expressed itself in the same direction) ([11]), has clearly expressed interest in the development and use of AI-related instruments ([12]). The comparison with the experiences of other central banks within and outside the EU then confirms the interest of these institutions in the use of AI and machine learning to improve their ability to perform their institutional tasks (monetary policy, banking supervision, payment systems, anti-money laundering) ([13]). This is complemented by the transversal profile of cybersecurity, which concerns the protection of the environment within which the digital reality operates and evolves. It is also clear that the economic data collected by the Bank of Italy are being expanded, including, for example, data derived from the digital analysis of texts and images, always in order to create a data set to be then reworked so as to draw predictive indications ([14]). To identify these new operational models of banking supervision (but not only) here is yet another neologism: SupTech (crasis between Supervisory and Technology), defined by the EBA as «any range of applications of technology‐enabled innovation for regulatory, compliance and reporting requirements implemented by a regulated institution (with or without the assistance of RegTech provider)» ([15]). In fact, technology has long been considered an integral part of the instrumentation that supervisory authorities must have at their disposal ([16]), but only today is it indicated as one of the priorities for supervisors: on this point the position of the EBA leaves no doubt when it laconically states that «[t]he use of SupTech will be in the focus of the EBA analysis in the near future» ([17]).
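To give a concrete, if deliberately simplified, idea of how texts can be “digitally analyzed” to feed a predictive data set, the sketch below reduces a document to a few numeric indicators; the keyword lists and the sample report are invented for illustration and bear no relation to the Bank of Italy’s actual tools.

```python
# Hypothetical sketch of deriving numeric indicators from text: a report
# is reduced to a small feature vector that could feed a predictive
# data set. Lexicons and the sample document are invented.

import re
from collections import Counter

RISK_TERMS = {"loss", "default", "breach", "arrears"}       # assumed lexicon
GROWTH_TERMS = {"growth", "profit", "expansion", "surplus"}  # assumed lexicon

def text_features(document: str) -> dict:
    words = Counter(re.findall(r"[a-z]+", document.lower()))
    total = sum(words.values()) or 1
    return {
        "risk_share": sum(words[t] for t in RISK_TERMS) / total,
        "growth_share": sum(words[t] for t in GROWTH_TERMS) / total,
        "length": total,
    }

report = "Profit fell after a breach of covenants; arrears and loss widened."
features = text_features(report)
print(features)
```

Indicators of this kind, computed over many documents, are what the predictive algorithms mentioned in the text would then be trained on.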
Equally relevant is the other statement of the EBA that, in the specific area of its competence, considers essential the convergence between RegTech and SupTech, where it states that «[t]he financial sector differs from others by its large amount of data and because it is highly regulated. As a result, RegTech and SupTech solutions stand ready to become key for financial market participants and regulators to ensure an effective, safe, and sustainable market» ([18]). It is relevant to note at this point the importance of the orientation expressed in particular by the Bank of Italy with regard to the intention to develop in house, rather than outsource to third parties, the technologies (i.e. the algorithms and software, and hence the AI) to be used as SupTech tools. This is a delicate step: it raises, in fact, what has been referred to as «the risk of capture of automated regulation by manufacturers» ([19]). The risk must be avoided or, as far as possible, minimized, in order to avoid the emergence of conflicts of interest that could arise from the fact that the very subjects being regulated (or entities connected to them in some way) develop the tools for governing their own regulation. One way to minimize the effects of capture by AI service providers may be to deal with those that allow the “ownership” of the training results of the licensed AI and machine learning system to remain with the user ([20]). The purpose is that the predictive ability developed by the AI algorithm remains in the legal sphere of those who have “trained” it. Hence, a rethinking of the very scope of copyright is necessary for the protection of the results deriving from the use of AI systems, through the identification of the characteristics of originality of an AI-generated work, its author and therefore its owner ([21]).
Of course, this will imply that the Supervisory Authorities, in our country the Bank of Italy, persevere in the intent to equip themselves with structures and, above all, with increasingly qualified professionals, making them capable of developing their own SupTech tools and giving rise to interactions, in some ways unprecedented but necessary, between different professions, such as those of lawyers, computer scientists and data scientists. A subordinate hypothesis to be considered may be that of dealing with third-party AI providers that have a profile of potential conflict of interest significantly lower than market participants, such as university research centres or other research institutes which are not directly dependent on (or linked to) undertakings operating on the market and in various ways interested in the development of AI applications in the banking and financial field. This type of relationship is already used by some authorities with regard to the development of analytical tools for anti-money laundering control activity ([22]).
This is in line with the EU Commission’s goal of creating conditions for the use of innovative technologies, including RegTech and SupTech, for supervisory reporting by 2024 ([23]). The very first step in this process in Italy has been the activation, in July 2020, of the new website «Cooperazione PUMA», where PUMA is the procedure for the supervisory reporting of Italian banks ([24]).
But this first objective, relating to supervisory reporting, could in the future be joined by a second one, namely that the Supervisory Authorities become able to make available to banks also tools for analyzing the data reported, so that intermediaries can immediately have the (or a first) supervisory evaluation with some sort of real-time feedback. Here the connection with governance can be identified: this, in fact, would allow the bodies with strategic supervision and management functions to incorporate the evaluative inputs of supervision into their assessment processes on a continuous basis, and no longer only on the occasion of episodic or periodic interactions between supervisor and supervised.
Of course, any sanctions by the Bank of Italy against banks, or rather against their corporate representatives, could not be adopted on the basis of the automated processing of SupTech systems alone, as this is precluded by Article 22 of Regulation (EU) 2016/679 (the GDPR), pursuant to which «[t]he data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her» ([25]).
3. Closely connected with the compliance profiles are those of corporate reporting, in the different meanings that it can assume, that is, with reference to both external and internal “communications” and to the three moments in which they are articulated: production, distribution and use. Under the first profile, and therefore with regard to the sphere of accounting documents, it has rightly been remarked that an impulse to the use of AI systems is also owed to the introduction of the European Single Electronic Format starting from 1st January 2020 ([26]). Equally interesting is that an institutional source such as the British Financial Reporting Council has stressed a series of benefits that the use of AI applications could offer in the field of corporate reporting ([27]). These include not only the guarantee of greater transparency and the consequent increased confidence of the external recipients of this information in the results recorded by the company, but also the impulse toward an optimization of the reporting function itself, considering that reporting through AI surely facilitates the underlying accounting processes and shortens their timing ([28]). Also relevant in this regard is the financial statement information addressed to shareholders, who are not strictly third parties, it being expected that «the information concerning the exercise of social rights and, in particular, the modalities and any other elements necessary to exercise the voting right at the shareholders’ meeting, are, in good time, made available to shareholders in a clear and easily accessible form» ([29]).
But the profile most relevant to the corporate governance of the board of directors of banks is that of internal reporting, which is inextricably linked to the monitoring activity of the board itself and to the information flows it receives from the CEOs. In fact, the secondary legislation provides that the «corporate governance project», which each bank or parent company of a banking group must draw up, contains a reference to the information flow system, this being the presupposition of the exercise of the powers and the fulfilment of the duties of the members of the body with the function of strategic supervision, that is, of the board ([30]). Everything must take place in the civil-law framework outlined by Article 2381 of the Italian Civil Code, on the basis of which a differentiation of attributions is established with respect to the organizational, administrative and accounting structure of the company, the delegated bodies having the tasks of “care” and the board those of “evaluation” ([31]). This regulatory framework implies the establishment of a dense network of transitive and reflexive information obligations ([32]), to be framed in the perspective of the closing standard of the company’s liability system, represented by the directors’ duty to «act in an informed manner», provided for by Article 2381, paragraph 6, of the Italian Civil Code ([33]).
Going into more detail, the regulatory discipline, in the section dedicated to «tasks and powers of the corporate bodies», provides that the body with the function of strategic supervision is «called to deliberate on the strategic orientations of the bank and to verify their implementation on a continuous basis» and to «ensure an effective dialogue with the management function and with those responsible for the main corporate functions and verify over time the choices and decisions taken by them» ([34]). In another respect, that is, in the section dedicated to «composition and appointment of corporate bodies», the same regulatory discipline provides that it is «fundamental that the non-executive directors also possess and express adequate knowledge of the banking business […]» ([35]). Now, in the extensive case law of the Italian Supreme Court (Cassazione) formed in the matter of opposition to the administrative sanctions imposed by the Bank of Italy (and by CONSOB) on the corporate representatives of banks (and of financial market intermediaries such as financial firms and management companies), the argument is deployed that even non-executive directors «must possess and express constant and adequate knowledge of the banking business», with the related «obligation to contribute to ensuring effective risk management in all areas of the bank and to take action in order to be able to effectively monitor the choices made by the executive bodies» ([36]).
At this point it is clear why one cannot agree with the use that this case-law makes of a “modified” version of the regulatory provision, to which the adjective «constant» has been added, so as to be able to use the overall phrase «constant and adequate knowledge of the banking business» in a context different from that for which it was formulated and therefore, on closer inspection, improperly. That is, not with regard to the essentially “professional” profile of the non-executive director, implying ex ante knowledge of the characteristics of the banking business, but with regard to the action of the non-executive director. This implies postulating a sort of knowledge of the concrete and daily unfolding of the affairs of the bank or the financial firm. This conclusion, which proves that it is not a question of nominalism, has sometimes been accompanied by reference to a «monitoring of the choices made by the executive bodies through a constant flow of information» ([37]) and even more frequently by the statement that «[t]he duty to act informed of non-executive directors of brokerage companies, sanctioned by art. 2381, paragraphs 3 and 6, and 2392 c.c. should not be remitted, in its concrete operation, to the reports coming from the reports of the CEOs, since even the first must possess and express constant and adequate knowledge of the […]» ([38]).
The point is that this approach seems out of line with the conclusions reached by the Court of Cassation itself with regard to companies governed by ordinary law (i.e. other than banks, financial firms or management companies), for which it is stated that alarm signals are needed to trigger the initiative of further investigation by the directors without delegation ([39]). In other words, with regard to banks a different activism is postulated for non-executive directors than that prefigured by the information dynamics outlined by art. 2381 c.c., as it is based on a particular diligence, such as to allow them a “constant” knowledge of what is happening in the bank so as to be aware of possible anomalies, regardless of the information that, dutifully, the delegated bodies owe to the plenum. On closer inspection, however, the Judges of Legitimacy are not in a position to indicate in substance what the alternative sources of this investigation “independent” from the information of the delegated bodies should be, limiting themselves rather to indicating initiatives to be taken, which logically come later because they presuppose the knowledge or knowability of the management anomalies ([40]). Elsewhere I have argued at length that there is no legal basis for the alleged peculiarity of the position of banks’ non-executive directors, neither in the rules of the general corporate regulation governing joint stock companies nor in banking regulation ([41]).
That said, it seems reasonable to consider that the use of AI systems can, to a progressively significant extent, constitute the normal form through which intra-board information is conveyed, both in the perspective of Article 2381, para. 1, of the Italian Civil Code (with regard to information to be provided before the board meeting) and in the perspective of paragraphs 5 and 4 of the same Article (with regard to the information to be respectively provided by the delegated bodies and received by the directors). Of course, digitalization encompasses different phenomena, ranging from simple systems of access to business documentation in an efficient paperless-governance perspective, to expert systems (usable as virtual assistants), up to more complex algorithms able to process, through AI functionality, a high amount of data, organizing it and rationalizing its use by directors ([42]). The use of such systems, included in CorpTech, would make less arbitrary and less distant from reality the jurisprudential approach seen above, which presupposes a continuity of information flows and of board monitoring, which today conflicts with the necessary periodicity of the board’s meetings, and an access to information which is postulated as disconnected from the information provided by the delegated bodies.
4. In relation to these perspectives, two sets of considerations may be made. The first concerns the benefit that non-executive directors (including those of banks) can reasonably derive from tools of access to documentation (the so-called board online portal ([43])), if not also of further processing, organization and elaboration of this documentation, so as to reduce the information gap that separates them from the delegated bodies and the management, since it is difficult to deny that they, beyond the wishful formulas conjugated by the case-law (v. supra para. 3), to date cannot but receive “second-hand” information ([44]). It must be kept in mind that, particularly in complex companies, and even more where the corporate reporting system is itself complex (as for banks), non-executive directors are at risk of being flooded with information (before meetings or periodically) that is hardly usable in practice with due attention, and such as to induce phenomena of skim reading, for reasons both of the quantity of the submitted documents and of the time available for their examination. This situation can be traced back to the phenomenon that has been defined «censura additiva» (“additive censorship”) ([45]). In fact, this kind of censorship has an effect much more subtle than excluding censorship, given that additive censorship conceals, so to speak, the very act of concealment by way of addition. It is clear that the availability of digital tools, especially if equipped with AI functionality suitable to rework and organize in a more user-friendly way the material offered in communication, may help to reduce, if not minimize, the information-deficit condition that may characterize the position of the members of the plenum of the board, i.e., the non-executive directors.
However, we are here in the field of digital expert systems or non-autonomous AI, which are already available or within the reach of current scientific-technological developments and, in any case, can be reasonably contained. These tools will increasingly be within the reach of boards as AI systems increase their ability to process even natural language data or graphic representations. The use of such systems implies that their functionalities must be governed, which means that they must be considered in the context of the organizational structure of the bank. The availability of these information tools then corresponds to an additional duty for the board of directors. The corporate governance project and the internal regulations must therefore incorporate the discipline of these systems, having regard to the profiles of their architecture and their reliability. With regard to the first aspect, it is necessary to place AI systems in the context of the organizational, administrative and accounting structures, also in the perspective outlined by the new version of Article 2086, para. 2, and by Article 2380-bis, para. 1, of the Italian Civil Code, with regard to joint stock companies ([46]). One wonders whether the inclusion of CorpTech tools as a building block of organizational arrangements is essential to qualify such arrangements as “adequate”, i.e., whether their non-adoption already constitutes in itself a breach of the duties outlined in Article 2086, para. 2, Civil Code. This conclusion is probably too drastic and should be rejected, especially since the same civil-law provision predicates adequacy on the nature and size of the enterprise, so it seems reasonable to assume that in small or, in any case, due to the type of business, not very complex corporate realities, the adoption of CorpTech tools is not to be regarded as the norm.
Therefore, it seems preferable today to consider the adoption of AI systems the subject not so much of an obligation as of a burden, which can help directors prove that they have established adequate arrangements ([47]). Clearly, to the extent that CorpTech tools become widespread, their use may gradually identify a kind of professional benchmark of the organizational diligence of corporate management bodies, which will progressively lead to having to justify non-use rather than the reverse ([48]). Of course, it would be entirely possible for the benchmark to be introduced by regulation, in order to push banks toward the adoption of CorpTech tools, perhaps indicating those that are necessary and those that are only appropriate, in which case the profile of the dutifulness of adoption would be normatively fixed.
But the inclusion of CorpTech tools, especially when based on AI technologies, as increasingly important building blocks of the organizational arrangements of banks, as of any corporation, raises the further question of their reliability. This aspect is all the more important the more these tools are innervated into the organization to support the essential monitoring activity proper to the board, constituting, that is, dashboards for information, updating and also alerting on any detectable anomalies in the company’s asset and financial performance (this responds to the need to functionalize organizational arrangements also to the timely detection of signs of crisis, according to Article 2086, para. 2, Civil Code) ([49]). In any case, the profile of the validation of AI functionalities – which is ultimately the responsibility of the company’s governing bodies themselves, for whom, of course, reliability is a prerequisite for the adoption and deployment of CorpTech tools – assumes decisive importance. In the specific banking perspective, therefore, opportunities for intervention also arise for regulatory discipline concerning the various aspects of the application of banking CorpTech. So, it is reasonable to foresee, and even to hope, that secondary (if not primary, European and domestic) regulation will intervene with regard to the identification of professional profiles beyond the usual ones (such as, e.g., statisticians, data scientists and computer scientists), whose presence will have to be considered (increasingly) essential in the board structure in order to govern CorpTech choices. It is, therefore, likely that there will be an expansion of the “suitability” criteria – ex Article 26 Legislative Decree No.
385 of 1st September 1993, the Consolidated Law on Banking – sub specie of professionalism and competence, which define the profile of the fit and proper bank director, so as to enrich the cultural pluralism of the collegiate body by ensuring that it also has available professionals who can adequately support the board in the selection of organizational choices of CorpTech solutions, in short to form a “tech-savvy” board ([50]). In fact, it would be desirable for these “new” professional skills to be stably and structurally embedded within the board by expanding, again by regulation, its internal articulation: in this perspective, the establishment of “tech committees” (to be placed side by side with the already envisaged nomination, risk and remuneration committees) should be contemplated, with an indication, albeit in principle, of their powers and professional requirements, so as to give regulatory relevance to “experiments” already recorded in the banking sector, if anything accentuating their characteristics and functionality ([51]). Along these lines, it is desirable, in parallel, that banking regulation also identify a specific corporate function to be placed alongside those already provided for (compliance, risk management and internal audit), with skills and capabilities aimed at a specific organizational focus on CorpTech profiles, specifically with regard to the processes of verification and validation of systems as well as, more generally, data governance issues.
A non-secondary aspect that should also be taken care of by the overall organizational restyling envisaged so far is the quantitative and qualitative identification of banks’ investments in AI technologies. They are already large, but will take on increasing weight in the profit and loss account of banking intermediaries. Thus, support from both the bank’s operational structure and the internal articulation of its governing bodies will be able to make an essential contribution to the quality of investments in this area and to the bank’s underlying strategy.
In other respects, it is likely that these monitoring and alerting dashboards, even before they are available to board members, will first be used and developed by the chief executive officers, as well as by the heads of the bank’s various operational areas. Thus, the board, when assessing the adequacy of organizational arrangements, should express guidance on the adoption and implementation of these AI systems, again in fulfillment of its duty to care for the adequacy of organizational arrangements. In the regulatory framework – or even, at first, in the self-regulatory framework – it could then be determined whether and to what extent it is appropriate or necessary for all or part of these digital tools to be made available also to directors without delegated powers (i.e., non-executive directors), perhaps indicating which functionalities are instrumental to the strategic supervision function, so as also to give greater concreteness to the indication, already present in the regulatory framework, to avoid involving the board, and therefore the strategic supervision body, in matters concerning the day-to-day management of the bank.
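By way of purely hypothetical illustration of the alerting functionality such dashboards could perform, the sketch below flags figures that deviate sharply from the recent series; the window, threshold and figures are invented and do not reflect any supervisory or regulatory standard.

```python
# Hedged sketch of a dashboard's alerting function: flag data points
# that deviate sharply from the recent series. Parameters and figures
# are illustrative only.

from statistics import mean, stdev

def alerts(series, window=4, threshold=2.0):
    """Return indices where a value deviates by more than `threshold`
    standard deviations from the mean of the preceding `window` values."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]
        mu, sigma = mean(past), stdev(past)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# Toy monthly net-liquidity figures with a sudden drop at index 6
figures = [100, 102, 99, 101, 100, 103, 60, 101]
print(alerts(figures))  # the drop at index 6 is flagged
```

The legal point carried by the sketch is that the alert only surfaces an anomaly: the assessment of its significance, and any consequent initiative, remains with the directors.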
5. Looking to a broader time horizon, AI systems more advanced than those previously considered can be examined, namely those characterized by machine learning or deep learning. The development curve of AI already makes it possible to hypothesize its application in banks and, in general, in companies, with functionalities that go far beyond the efficiency of reporting and monitoring: uses are looming that can affect, for several reasons, the very decision-making activity of the strategic supervisory bodies and the management bodies themselves, i.e., the plenum and the delegated bodies (i.e., executive officers). Alluded to here are developments of algorithms with astonishing predictive capacity, powered by a gigantic harvest of massively collected data and supported by hardware with ever-increasing computational power. This is assisted AI and, in its most advanced applications, augmented AI. These systems use increasingly sophisticated software employing, among others, statistical models ([52]) that have the characteristic of “learning”, rectius of adjusting their estimates based on the data (structured and unstructured) with which they are first trained and then used, giving rise to unprecedented perspectives of interaction between man and machine or, rather, between evaluations of future choices processed by the human mind and predictions processed by AI systems. Thus, we are talking about systems capable of processing very large data sets – commercial, financial and otherwise, including media and social media coverage of the company and its competitors – within time frames that would be impossible for a human agent.
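The “learning”, rectius parameter-adjustment, just described can be rendered with a deliberately minimal sketch (purely illustrative, with invented data, and in no way a real banking application): a one-parameter model whose estimate is repeatedly corrected against training data by gradient descent.

```python
# Illustrative toy (not from the essay): a system that "learns" not by
# following fixed rules but by adjusting its estimate to fit training data.

def train(samples, lr=0.01, epochs=200):
    """Fit y ~ w * x by repeatedly nudging w to reduce the squared error."""
    w = 0.0
    for _ in range(epochs):
        for x, y in samples:
            error = w * x - y          # how wrong the current estimate is
            w -= lr * error * x        # adjust the parameter toward the data
    return w

# Training data drawn from the (hidden, never-programmed) relationship y = 2x.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
w = train(data)
print(round(w, 2))
```

The point of the toy is that the rule y = 2x is never written into the program; the parameter converges to it only because the data impose it, which is the sense in which such systems are first “trained” and then used.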
In this context, the position of the legal observer becomes less easy, but he or she is not prevented from taking note of how much the existing legal framework needs to change in order to govern the modernity of AI-induced interactions with the human person. Of course, the considerations that follow lose, though not entirely, their specific reference to AI developments in the banking sector, and specifically within the corporate governance bodies of banks, and will be as general (except for some reference to the specificity of banks) as they will be synthetic and idiosyncratic. Preliminarily, following a continental jurist’s approach, the proposal for an EU regulation on artificial intelligence (the EU AI Act), mentioned at the outset, is considered; it turns out to be only one piece of a complex framework of policy proposals already submitted within EU regulatory projects such as, to mention only a few, the “Decision” of the EU Parliament and Council on the “Path to the Digital Decade” ([53]), the proposed regulations on data governance ([54]), digital services ([55]) and digital markets ([56]), and the communication on cybersecurity ([57]). Thus, the regulatory framework is in great turmoil and is, overall, in tune with the pursuit of the goal of “anthropocentric” development of digital innovation.
With specific regard to the AI Act proposal, it should immediately be noted that it is a first but appreciable step of elaboration and synthesis ([58]), which shows coordination with the GDPR; it is desirable that the same coordination be maintained with the other lines of legislative evolution in the field of data governance and digital market services. However, it must be noted that the AI Act, which also aspires to become a regulatory benchmark at the global level (and this is still a good thing), is characterized by a partial approach, in the sense that it seems to be focused mainly on the protection of fundamental rights against the use of AI by public authorities (in fields such as, for example, justice and public security), as if the private sector’s commercial use of, for example, customer classification outputs produced by AI posed no risks – risks that are general and in some respects disturbing, that is, such as to jeopardize the integrity of the human sphere in its confrontation and interaction with the machine. This requires a “holistic” intervention to protect man from conditioning and manipulation and, ultimately, to preserve his overall freedom, even when fundamental rights are not directly affected. Perhaps it is an opportunity to rethink the catalog of these rights. In fact, it was promptly highlighted that a «new system of “surveillance capitalism” put in place by large digital companies, by defining and monetizing the preferences and behavioral traits of consumers, ends up affecting not only their economic freedom – inducing them, for example, to buy one product rather than another – but also their personal freedoms, first of all the freedom of self-determination» ([59]).
Now, given that machine learning systems cannot work without training data, the issue of data – its collection and trade, to date regulated by the GDPR ([60]) – is conditioned by the choices made at the time by that regulatory source in terms of the selection of protected personal data, its marketability and the freedom to collect other data. It follows that today, even when intervention is decided upon – as in some ways the proposed data governance regulation would seem to want to do – there is no hiding the fact that any additional restrictions on data collection and trade would have the effect of consolidating the primacy of a few colossal global players who have so far fed their big data deposits, with an effect not unlike that produced at the time by the introduction in Europe of the mandatory takeover bid rules, which affected the market for corporate control in favor of those who were then in control of listed corporations. After all, the GDPR itself, which came into force on May 25, 2018, has already led to an increase in requirements and costs in the collection of data – from the request for consent to processing, to its deletion and related documentation, not to mention the appointment of data controllers and the related technical and organizational safeguards – that has benefited those who had, until then, collected data in complete freedom. The imposition of additional obligations would have a similar effect of further consolidating the competitive position of big data corporations: the trade-off between the different requirements poses issues of no simple solution. Of course, such issues could be overcome by technological innovation itself: think of the possible developments of the Solid (Social Linked Data) project, which is “aimed at the decentralization of data on the Web, controlled entirely by users rather than by the companies that now dominate this market” ([61]).
At present, however, the issue of the concentration of the holders of big data – to the point of representing for newcomers a barrier to entry into this market(s) ([62]) – is topical, and projects such as the one reported testify precisely to the keen need to overcome it.
That said, it has been widely believed that the use of AI tools can facilitate to some degree the process of metabolizing the Environmental, Social and Governance (ESG) goals identified by the Paris Agreement ([63]) and forming the subject of the strategic planning for energy and ecological transformation under the banner of sustainability contained in the European Green Deal, presented in December 2019 ([64]). The goals are as ambitious as they are, by now, inescapable: not to undermine the competitiveness of the European economy – indeed, to foster the development of new forms of competition – and at the same time to decouple economic growth from the use of environmental resources, in order to achieve climate neutrality, that is, net-zero greenhouse gas emissions, by 2050. Thus, a comprehensive regulation of the securities market sector has been defined through the issuance of a series of directives and delegated regulations aimed at creating a regulatory framework that induces investment choices inspired by the principles of environmental sustainability ([65]). Within this framework, banks play an essential role in two respects. On the one hand, they are among the main players in the securities market providing investment services and activities, as managers or advisers to clients; as a result, banks’ boards are required to introduce organizational and management tools to comply with the new obligations. On the other hand, banks are equally essential when carrying out their typical and traditional lending activities, since in the assessment of borrowers the commitment and ability of borrowers to pursue, in turn, ESG objectives will have to take on specific importance. This profile, in fact, will be an integral part of the credit appraisal, since the demonstration by credit applicants of their capacity to pursue sustainable development in a long-term perspective will converge with proof of their ability to maintain the correlated capacity to repay the financing received.
Banks, in the final analysis, position themselves as an essential engine for promoting sustainable development in a twofold way: by financing companies committed to the pursuit of ESG objectives – and not only in the granting phase but also throughout the development of the credit relationship – and because this credit policy can be reflected in an improvement in the quality of the ESG profile of the bank itself and, therefore, in its shareholder value, as empirical studies already show ([66]).
All this is a prerequisite for increasing efforts by bank boards to establish policies, procedures and organizational safeguards geared toward improving ESG performance. Well, as mentioned, there are those who believe that AI systems can be valuable tools for identifying and interpreting sustainability factors ([67]), having the ability to probe, with specialized algorithms and high computational power, the reality of the entire infosphere, also in order to improve the quality of ESG performance indicators and thereby spread the use of “specialized” ratings. Certainly this function of supporting the activity of bank boards, additional to non-financial reporting ([68]), will also have to be accompanied by measures – which in the banking sector may also be of a regulatory nature – regarding the linking, to some extent, of directors’ remuneration to the achievement of ESG objectives (according to parameters that, in this respect, could also lead the clear distinction between directors with and without delegated powers to be reviewed). Again, within this framework, the composition of the company’s governing bodies itself takes on importance, which should lead not only to pushing toward gender diversity (as long as it is not endemic) but also to emphasizing the professional profiles, experience and age of bank directors, criteria that will likely favor newcomers to directorships, perhaps to be channeled into special ESG committees ([69]). It may be pointed out that AI has been thought capable of effective use in the very selection of directors, which is all the more true with regard to those for whom ESG-friendly profiles are sought ([70]), profiles that should be “fished”, at least in part, from outside the circles that traditionally supply candidates for seats on bank boards.
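To fix ideas, a deliberately naive sketch of a text-based ESG indicator (the terms, weights and sample disclosure are all invented for illustration; real AI systems would apply large-scale natural language processing to the whole infosphere rather than keyword counts):

```python
# Purely illustrative toy: a crude ESG "signal" computed as a weighted count
# of sustainability-related terms per 100 words of a disclosure text.
# The term list and weights are invented, not taken from any real rating.

import re

ESG_TERMS = {"renewable": 2, "emissions": 1, "diversity": 2, "audit": 1}

def crude_esg_signal(text):
    """Weighted ESG-term frequency per 100 words (0 for empty text)."""
    words = re.findall(r"[a-z]+", text.lower())
    score = sum(ESG_TERMS.get(w, 0) for w in words)
    return 100.0 * score / max(len(words), 1)

report = ("The bank increased renewable energy financing and cut financed "
          "emissions, strengthening board diversity and internal audit.")
print(round(crude_esg_signal(report), 1))
```

The gulf between this keyword counter and a genuine machine learning rating is, of course, exactly the point: the specialized algorithms evoked in the text would learn which linguistic and numerical signals actually track ESG performance instead of relying on a hand-made list.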
That said, we merely point out two additional problems: one primarily cognitive and the other exquisitely legal. The first concerns the very serious risk of nudging, that is, the risk of AI “capturing” the human decision maker. In fact, confrontation with machine and deep learning systems, which can plumb huge deposits of data in depth and process them very quickly to produce natural language output, concretely exposes human intelligence to the risk of being invariably conditioned by that output, if only because the human mind is systematically anticipated by machine prediction, backed by the incomparable computational power of the computers on which AI software runs. Thus, human intelligence is confronted with an outcome that precedes the formation of its “spontaneous” determinations and that may condition it – the probability of its being conditioned being indeed high. The possible solutions are intertwined with the second problem, that of responsibility. Indeed, one might think of not allowing a director to place himself under the umbrella of the Business Judgement Rule – the cornerstone of the discipline of directors’ liability – except when he has “appropriated” the result, in a kind of conscious sharing. However, things are more complicated. First, it must be considered that the most advanced AI systems, such as machine and deep learning systems, have been and will increasingly be developed by combining various algorithms (statistical and otherwise) and multi-level connected neural networks designed to mimic those of the human brain. At this level of complexity a so-called black box can arise, in which «the very transformative and adaptive functioning of the machine gives rise to what has been effectively termed the transparency fallacy, meaning the technical impossibility of accessing the decision criteria of a system that changes logical paths autonomously» ([71]).
This does not mean that there is an inscrutable thinker within an AI black box; rather, its unknowability is due to the fact that the system was put into use when it was only partly “built”, so that it lacks a diagnostic component able to make its outcomes explicit ([72]). In other words, an AI black box is built by focusing not on the optimality criteria of the parameters but on the ability to produce quality predictions, while the models and their parameters are “adjusted” by the algorithm’s subsequent processing of the data, so that from the outside it is not immediately intelligible how the variables are progressively combined to formulate the predictions – not even to those who initially programmed the system, since it has evolved by changing its parameters through use. It follows that in cases of black box AI the “sharing” with humans of the criteria that led to the results of the predictive evaluation produced by the machine does not yet seem possible, not even if, to supplement the technical expertise of the board, expert data scientists or statisticians were present, and not even if the very programmer who designed and/or “trained” the system in the training phase were present. This, it should be pointed out, holds true at the current state of technology, so much so that developments are underway precisely in the direction of AI systems that may in turn be able to decode the paths of another black box AI system, and it would probably not be an exercise in easy optimism to predict that this will happen, although it cannot be said how soon ([73]). For now, the fact remains that the human decision maker is highly likely to be captured by the AI system, a condition amplified by the possible, albeit mistaken, perception that AI possesses “intelligence” superior to that of humans.
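The unreadability of “adjusted” parameters can be illustrated with a toy example (a drastic simplification, not a real deep learning system): two one-hidden-unit “networks” with entirely different internal weights compute identical predictions, so the decision criterion cannot be recovered simply by inspecting the parameters.

```python
# Illustrative toy of the black-box problem: different internal parameters,
# identical external behavior. Reading the weights tells you little about
# the "criterion" the model applies.

def relu(z):
    """Standard rectifier activation used in neural networks."""
    return max(0.0, z)

def predict(x, hidden_w, output_w):
    """One hidden unit, one output: output_w * relu(hidden_w * x)."""
    return output_w * relu(hidden_w * x)

# Two parameterizations that implement the same function on all inputs:
net_a = (2.0, 0.5)   # large hidden weight, small output weight
net_b = (0.5, 2.0)   # small hidden weight, large output weight

for x in (-1.0, 0.5, 1.0, 3.0):
    assert predict(x, *net_a) == predict(x, *net_b)
```

With millions of parameters across many layers, each individually meaningless redescription of the same function, the “transparency fallacy” quoted above becomes concrete: there is no readable decision criterion to extract, only behavior to observe.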
Indeed, the writer, at the risk of manifesting a cognitive bias, believes that, the Turing test notwithstanding, machines – at the current state of their development – do not learn in the proper sense, but are capable, with very high efficiency, of recognizing correlations and patterns, which is quite different from understanding cause/effect relationships; nor do they learn rules, but rather explore, again with great speed and efficiency, possibilities within the limits that are set for them. Woe, then, if these limits are not properly and comprehensively set, because machines, with their millions of attempts, are able to find the gap in the limit and pass through it, perhaps precisely because the person who set the limit (whether a programmer or a legislator) did not foresee the possibility of its being exceeded in a given context.
Thus, another aspect bearing on the issue of liability comes back to the fore: as mentioned above (see supra para. 4), the reliability of a predictive AI system. It seems problematic, however, to establish a priori – that is, as a yardstick for liability – a criterion for determining from what point in time a machine or deep learning system can be said to be reliable or, if preferred, adequately “trained”. It is therefore arduous to determine when a director may include, even if only as a supporting element, the outcome of an AI system in his or her cognitive horizon and be exempt from liability for relying on its reliability ([74]). From this point of view, recognizing the responsibility of the AI service provider does not seem able to solve the problem exhaustively. Then, the issue of whether or not directors need “expertise” will again have to be addressed in relation to the new front of possible liability opened up by the use of the active support represented by the predictions of an advanced AI system. It is, moreover, quite easy to predict that, in any case, precisely as a result of the phenomenon of “capture” referred to earlier, natural person directors might find it more convenient, from a liability standpoint, to adopt the indication coming from the machine rather than take on the burden of having to justify conduct to the contrary.
Then there are other general legal profiles affected by the AI phenomenon that would deserve to be highlighted in a minimal catalog, but this is not the place: we confine ourselves to recalling only two of them. The first has already been briefly referred to: intellectual property rights, whose criteria for attributing rights and granting protection probably deserve to be revisited in order to allow their use also with reference to matters related to AI development. It should be kept in mind that a misguided formulation of principles and rules in this context could further hinder the “opening” of an AI black box. The second deserves at least a mention: consider the (extreme) hypothesis of the same AI system, and/or the same data set used for its training, being employed as decision-support tools by two or more competing companies, e.g., banks that operate in the same market, develop services and offer aligned contracts, economic conditions and rates: the serious question evidently arises of the existence of what has been called “algorithmic collusion”, or rather tacit algorithmic collusion or, in oligopolistic markets, a configuration also conceivable as a collective dominant position under Article 102 TFEU ([75]). Knowledge of whether the training of a deep learning AI system is apt to significantly differentiate its outcomes seems a relevant profile for detecting a discontinuity in the collusive link, but there may be some doubt that this can be established given the current state of enforcement practice under EU and Italian antitrust rules. This, too, implies the board’s assumption of significant responsibilities, so a point of clarity from the regulatory framework would be of great help.
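The mechanics of the hypothesis can be sketched in a few lines (the pricing rule and all figures are invented for illustration): two competitors that independently apply the same trained model to the same market data arrive at aligned rates without any communication between them.

```python
# Illustrative toy of tacit algorithmic alignment: a shared model plus shared
# inputs yields identical outputs at two "competing" banks, with no contact.
# The pricing formula below is a stand-in, not a real rate-setting model.

def trained_rate_model(base_rate, demand_index):
    """Stand-in for a pre-trained pricing model both banks have adopted."""
    return round(base_rate + 0.5 * demand_index, 4)

market_data = {"base_rate": 3.0, "demand_index": 1.2}

bank_a_rate = trained_rate_model(**market_data)  # computed independently
bank_b_rate = trained_rate_model(**market_data)  # computed independently

assert bank_a_rate == bank_b_rate  # aligned conditions, no agreement needed
```

Precisely because the alignment emerges from identical tools rather than from any exchange between the firms, the classical evidentiary apparatus of collusion struggles to grasp it, which is the essay’s point about Article 102 TFEU.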
6. When we move on to consider the evolutionary perspectives of AI, as autonomous or autopoietic systems, the path of the legal observer becomes arduous, as it is already very difficult to bring into focus the possible phenomena to be regulated, although it cannot be ruled out that this is only a limitation of the writer. It is, therefore, very uncomfortable to discuss the replacement of the human agent by highly evolved artificial intelligences even in corporate realities, thus considering phenomena that, in a futuristic crescendo, are denoted as fused board, robo-board, self-driving corporations, algorithmic entity and leader-less entity. The question is not only how to allow the unfolding of technological evolution, e.g., through the appointment to the office of director – alongside individuals or instead of them, as board service providers (subject to antitrust criticalities similar to those of algorithmic collusion) – of corporations whose assets consist (even exclusively) of evolved neural systems or whatever other form autonomous AI may take, proceeding further down the road of fiction related to the very notion of legal person and further dehumanizing it ([76]), or, going further, how to attribute legal capacity or even legal personality to autonomous AI systems. Companies could perhaps be configured in which there are only partners and machines, giving rise to new partnerships 2.0 in which the partners, having chosen to rely on the machines, would be exposed to full responsibility for this choice. But, as mentioned above, the question does not seem to be whether this is possible, but rather whether it would be desirable and whether policy choices should be geared to foster these developments without limit.
Then it is as spontaneous as it is trivial to call for great caution, attuned to sound ethical principles, and for attention to the framework of rules necessary to guide and, if necessary, even limit the process by which machines, which seem to think, become innervated in all aspects of human life – in economic, social, empathic and, again, ethical relations, that is, in the environment in which the lives of human beings are expressed and unfold ([77]). In fact, humans evolve; machines, on the other hand, predict, recognizing and replicating patterns uncovered in data and validated according to statistical procedures: machines are thus oriented toward past experience. It would be a paradox if the height of technological evolution were to set the stage for the development of a conservative society. But these are reflections that must be made before the positive jurist goes to work. Without some constant, the legal equation presents too many variables for technical discourse, whereupon a more general and generic pre-juridical musing is favored. And in this different register, the writer confesses two very human fears – it could not be otherwise. The first is that of not wanting to find oneself having to rely too heavily on adherence to Asimov’s three laws of robotics (later to become four) ([78]), because there would always remain the risk that the machines, by dint of “humanizing” themselves, would do so to such an extent that they would “learn” to violate the law (and robotic warfare seems a terrifying reminder). It may be superstition, but in the first literary narrative in which a robot appeared – indeed, in which the term was first used to denote an artificial agent – humanity comes to a bad end ([79]).
The second is the fear that humans, who use machines, will end up wanting to be machines themselves, as was also hoped for in futurist enthusiasm ([80]), in a forerunner fusion of cyborgs that science fiction imagery has subsequently developed and amplified, instilling unease and disorientation. Among the possible scenarios to be averted at all costs – but it is politics that must do so – is that whereby machines, imitating humans, and humans, trying to imitate machines, chase the chimera of a highly efficient economy ([81]), which ultimately proves useless because citizens are reduced to buyers, buyers to slaves (perhaps unwitting but happy slaves) and slaves to poor slaves, all before human beings stop behaving like a virus to the Planet. It then seems impossible to believe – perhaps it is a mistake, but so it seems – that there is no difference between man and learning machine. It is plausible that if a computer is “told […] to be creative, the result is machine learning” ([82]), but – unless the ultimate algorithm forces a retraction – it again seems impossible to believe that, even knowing all the words, a highly evolved neural system could ever find the verse with which Homer spoke of the spear of Odysseus – who, still a youth, goes boar hunting with his grandfather Autolycus on Parnassus, thereby procuring the scar that his nurse Eurycleia will recognize upon his return to Ithaca – calling it δολιχόσκιον ἔγχος, i.e., long-shadowed spear, instead of simply δολιχός ἔγχος, i.e., long spear ([83]), the machine obviously being placed in Homer’s position, that is, before any handbook of rhetorical figures, and therefore any data to be ingested, had been drawn up.
Perhaps our humanity is all here, but it seems enough to claim the right to maintain it and to arouse commitment to improve ourselves and to govern – this, yes – our sublime and at once dangerous irrationality, beginning, for example (and it would be a very good start), by banishing war from the spectrum of possible forms of human interaction, on the assumption that no bomb or weapon can be considered truly intelligent, with or without AI.
[1] This neologism was already in use on April 12, 1971, when, reviewing a short story by C.F. Gravenson (The Sweetmeat Saga) in Time Magazine, R.Z. Sheppard wrote: “In much the way that fish cannot conceptualize water or birds the air, man barely understands his infosphere, that encircling layer of electronic and typographical smog composed of cliches from journalism, entertainment, advertising and government” (now available at http://content.time.com/time/subscriber/article/0,33009,905004-1,00.html), highlighting that we are dealing with what we understand better today as the world of digital information and relationships in which we are, even unconsciously, immersed. The neologism was taken up in Italy by L. Floridi, Infosfera, in Internet & Net Economy, a cura di Vito di Bari, Il Sole 24 Ore Libri, 2002, with the following meaning (in translation): “the semantic space constituted by the totality of documents, agents and their operations”; more recently, ID., La quarta rivoluzione – Come l’infosfera sta trasformando il mondo, Milano, Raffaello Cortina Editore, 2017 and ID., Pensare l’infosfera, Milano, Raffaello Cortina Editore, 2020.
[2] For more information on this subject see, ex multis, M. Maggiolino, I big data e il diritto antitrust, Milan, Egea, 2018, pp. 1 ff., 28 ff. and 37 ff., where further references can be found. However, the European institutions use the expression “big data” to refer to situations in which «high volumes of different types of data produced with high velocity from a high number of various types of sources are processed, often in real time, by IT tools (powerful processors, software and algorithms)», with reference to the three “Vs”, i.e. “Volume”, “Variety” and “Velocity”: so the EU Commission, Communication on Data Driven Economy, July 2014, COM(2014)442 final.
[3] It is the “Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act)”, COM(2021) 206 final, 21 April 2021.
[4] More precisely, in Annex I of the Proposal the techniques/methodologies indicated are the following: «(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods.». With regard to machine learning, one of the most eminent data scientists, P. Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, Penguin Books, 2017, begins his prologue with the following aphorism: «You may not know it, but machine learning is all about you». The Author is searching for the master algorithm, that is, a sort of AI “theory of everything”, and along the way he identifies five “tribes” of machine learning, that is, five different approaches to learning grounded in different fields of knowledge. The schools of thought identified are those (i) of the symbolists, who are inspired by philosophy, psychology and logic, and for whom learning is the inverse of deduction, that is, induction; (ii) of the connectionists, who take their cue from neuroscience and physics to perform a sort of reverse engineering of the brain; (iii) of the evolutionists, who are inspired by genetics and evolutionary biology to carry out numerical simulations of evolution; (iv) of the Bayesians, who base their theses on statistics, on the assumption that learning is a form of probabilistic inference; and, finally, (v) of the analogists, who are influenced by psychology and mathematical optimization, hence by logic, believing that we learn from extrapolations based on symbolic and therefore analogical criteria of similarity. Each of these tribes then has its own master algorithm or learner, as the learning algorithms of machine learning are called.
The Master Algorithm – that is to say, a multipurpose learner, which would constitute the principle of the automation of knowledge – is the goal of the Author’s research, which has not yet been achieved even if the results already reached are significant. It seems, however, that traces of the taxonomy proposed by Pedro Domingos are scattered through the methodologies mentioned in Annex I of the Proposal.
[5] This is the translation of what is written by A. Sacco Ginevri, Intelligenza artificiale e corporate governance, in Il diritto nell’era digitale – Persona, Amministrazione, Giustizia, a cura di R. Giordano – A. Panzarola – A. Police – S. Preziosi – M. Proto, Milano, Giuffrè Francis Lefebvre, 2022, p. 419; in the same sense, more widely, N. Abriani – G. Schneider, Diritto delle imprese e intelligenza artificiale, Bologna, Il Mulino, 2021, pp. 26 ff.; E. Hickman – M. Petrin, Trustworthy AI and Corporate Governance: The EU’s Ethics Guidelines for Trustworthy Artificial Intelligence from a Company Law Perspective, in European Business Organization L. Rev., vol. 22 (2021), pp. 593 ff., at p. 600.
[6] N. Abriani – G. Schneider, op. cit., p. 78 (in translation).
[7] On the general profiles of the impact of AI on company law, just to mention some of the most recent contributions from Italian Authors see G.D. Mosco, Roboboard. L’intelligenza artificiale nei consigli di amministrazione, in Analisi giuridica dell’economia, 2019, 1, p. 247 ff.; P. Tullio, Diritto societario degli algoritmi. E se i robot diventassero imprenditori commerciali?, ivi, pp. 225 ff.; N. Abriani – G. Schneider, op. cit.; M.L. Montagnani, Flussi informativi e doveri degli amministratori di società per azioni ai tempi dell’intelligenza artificiale, in Persona e Mercato, 2020/2, pp. 86 ff.; EAD, Il ruolo dell’intelligenza artificiale nel funzionamento del consiglio di amministrazione delle società per azioni, Milano, Egea, 2021; M.L. Montagnani – M.L. Passador, Il consiglio di amministrazione nell’era dell’intelligenza artificiale: tra corporate reporting, composizione e responsabilità, in Riv. soc., 2021, fasc. 1, pp. 121 ff.; A. Sacco Ginevri, op. cit., passim. Among foreign Authors see J. Armour – H. Eidenmüller, Self-Driving Corporations?, in Harv. Business L. Rev., vol. 10 (2020), pp. 87 ff.; S.A. Gramitto Ricci, Artificial Agents in Corporate Boardrooms, in Cornell L. Rev., vol. 105 (2020), Issue 3, pp. 869 ff. For the sake of clarity, even the very reference to corporate governance deserves to be appropriately defined (see recently A. Zattoni, Corporate Governance, Milan, Egea, 2021), however, considering the aims of this paper, it does not seem appropriate to dwell on this profile.
[8] This phrase was coined by L. Enriques – D.A. Zetzsche, Corporate Technologies and the Tech Nirvana Fallacy, in Hastings L. Journal, vol. 72 (2020), Issue 1, p. 59.
[9] Reference is made, in particular, to Article 53, paragraph 1, lett. d), Legislative Decree no. 385 of September 1, 1993, the so-called Consolidated Banking Act, and to Bank of Italy, Disposizioni di vigilanza per le banche, Circular No. 285 of 17 December 2013, Adj. 36 of 20 July 2021, Part I, Title IV.
[10] This is, in translation, the definition given by N. Linciano – P. Soccorso, FinTech e RegTech: approcci di supervisione e regolamentazione, in FinTech – Introduzione ai profili giuridici di un mercato unico tecnologico dei servizi finanziari, edited by M.-T. Paracampo, Torino, Giappichelli, 2017, p. 44, who in turn quote the definition of RegTech as «technological solutions to regulatory processes» given by the Institute for International Finance, RegTech in Financial Services: Technology Solutions for Compliance and Reporting, Washington, DC, 2016. FinTech and RegTech thus stand in a genus-to-species relationship, the former having already been defined as «technology enabled innovation in financial services that could result in new business models, applications, processes or products with an associated material effect on the provision of financial services»: so the Financial Stability Board – FSB, Financial Stability Implications from FinTech Supervisory and Regulatory Issues that Merit Authorities’ Attention, Basel, 27 June 2017 (at http://www.fsb.org/wp-content/uploads/R270617.pdf). In the same vein see the Financial Conduct Authority – FCA, Call for input on supporting the development and adopters of RegTech, at https://www.fca.org.uk/publication/feedback/fs-16-04.pdf, according to which «RegTech is a sub-set of FinTech that focuses on technologies that may facilitate the delivery of regulatory requirements more efficiently and effectively than existing capabilities». For a review of the origin and evolutionary routes of RegTech see B.M. Cremona, RegTech 3.0: verso un Regulatory Sandbox europeo?, in Mercato concorrenza regole, 2019, Issue 3, pp. 547 ff.; A. Perrone, La nuova vigilanza. Regtech e capitale umano, in Banca, borsa titoli di credito, 2020, Issue 4, I, pp. 516 ff.; P. Siciliani, The Disruption of the Prudential Regulatory Framework, in Journal of Financial Regulation, 2019, Issue 5, pp. 220 ff.
For the distinction between FinTech and TechFin, where the second term refers to generic digital services only subsequently applied to the world of finance, see N. Abriani – G. Schneider, op. cit., p. 124. For the correct notation about the relevance of compliance in all complex corporate realities see P. Benazzo, Organizzazione e gestione dell’«impresa complessa»: compliance, adeguatezza ed efficienza. E pluribus unum, in Rivista delle società, 2020, fasc. 4, pp. 1197 ff.
[11] See Consob, Consob day, Incontro annuale con il mercato finanziario. Discorso del Presidente Mario Nava, 11 June 2018. Consob – Commissione Nazionale per le Società e la Borsa – is the Italian supervisory authority on financial markets.
[12] See the «Relazione sulla gestione e sulle attività della Banca d’Italia» (Report on the management and activities of the Bank of Italy) for 2018 (published in 2019), which gives an account of the start of a process of evolution of the processing centers towards innovative models, stressing that «The research activity focused on frontier technologies (machine learning, blockchain and big data) to evaluate their application to the functions of the Institute» (in translation) (p. 14); in the next report, on the year 2019 (published in 2020), not only does it refer to the progress of research in the field of AI, but it takes into account the fact that «the applications of the new methodologies have included the activities of: (a) analysing the complaints of users of banking and financial services; (b) automatic classification of operations suspected of money laundering or terrorist financing; (c) control of access to IT resources; (d) identification of advanced solutions for the anonymization of microdata» (in translation) (p. 15). So too in the 2020 report (published in 2021), the continuing commitment to research and development of the applications mentioned above is noted, in addition to projects aimed at exploiting the vast amount of data available to the Supervisory Authority, always to support the supervision activity (pp. 19-20) and in particular the supervision of FinTech (p. 73), as well as the use of blockchain within the Eurosystem of payments (the Reports are accessible in the directory https://www.bancaditalia.it/publication/relations&management/).
[13] This is what emerges clearly from the interventions at the Workshop on the theme «Big Data & Machine Learning Applications for Central Banks», held in Rome on 21-22 October 2019, at the Bank of Italy (the material can be consulted at https://www.bancaditalia.it/pubblicazioni/altri-atti-convegni/2019-bigdata/index.html).
[14] With special attention to the problems arising from the collection of this type of data see M.J. Denny – A. Spirling, Text Preprocessing For Unsupervised Learning: Why It Matters, When It Misleads, And What To Do About It, in Political Analysis, vol. 26 (2018), Issue 2, pp. 168-189.
[15] European Banking Authority – EBA, EBA Analysis of RegTech in the EU Financial Sector, EBA/REP/2021/17, June 2021, p. 7.
[16] This is the statement of Principle 9 of the «Core Principles for Effective Banking Supervision» of the Basel Committee, which refers to the use by supervisory authorities of an «appropriate range of techniques and tools» (Basel Committee on Banking Supervision – BCBS, Core Principles for Effective Banking Supervision, 2012).
[17] EBA, id., p. 9.
[18] EBA, id.
[19] N. Abriani – G. Schneider, op. cit., p. 138 (in translation).
[20] J. Armour – H. Eidenmüller, op. cit., p. 98, note 35, report that «[s]ome vendors, such as IBM, permit users to license their AI products such that the results of training are proprietary to the user».
[21] For a focus on these problems and the various options available see N. Selvadurai – R. Matulionyte, Reconsidering creativity: copyright protection for works generated using artificial intelligence, in Journal of Intellectual Property Law & Practice, 2020, vol. 15, n. 7, pp. 536 ff.
[22] For more details see R. Coelho – M. De Simoni – J. Prenio, Suptech applications for anti-money laundering, in Quaderni dell’antiriciclaggio – Analisi e studi, No. 14, October 2019, who with regard to the approach of the AML/CFT Authorities note that «[s]ome are building in-house capabilities, while others are taking advantage of ready solutions in the market. Some are also actively collaborating with the academic community and promoting research in this field» (§ 32, p. 16).
[23] See «COMMUNICATION FROM THE COMMISSION TO THE EUROPEAN PARLIAMENT, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL COMMITTEE AND THE COMMITTEE OF THE REGIONS on a Digital Finance Strategy for the EU – COM(2020) 591 final», 24 September 2020, under para. 4.3, 2nd bullet, where it is pointed out that «the Commission, together with the ESAs will develop a strategy on supervisory data in 2021 to help ensuring that (i) supervisory reporting requirements (including definitions, formats, and processes) are unambiguous, aligned, harmonised and suitable for automated reporting, (ii) full use is made of available international standards and identifiers including the Legal Entity Identifier, and (iii) supervisory data is reported in machine-readable electronic formats and is easy to combine and process. This will facilitate the use of RegTech tools for reporting and SupTech tools for data analysis by authorities.».
[24] «PUMA» is the acronym of «Procedura Unificata Matrici Aziendali» («Unified Corporate Matrices Procedure»). The website addresses intermediaries by preparing and disseminating technical documentation to support the production of the statistical and supervisory reports to be forwarded to the Bank of Italy; it meets criteria of flexibility and usability and allows a more direct interaction with reporting entities, which may raise questions related to the documentation.
[25] On this topic see A. Caia, commento sub art. 22, in GDPR e normativa privacy – Commentario, edited by G.M. Riccio, G. Scorza and E. Belisario, Milano, Wolters Kluwer, 2018, pp. 221 ff.
[26] M.L. Montagnani – M.L. Passador, op. cit., p. 125, recall that Directive 2013/50/EU established that from 1 January 2020 annual financial reports must be drawn up in the single electronic reporting format established by ESMA – European Securities and Markets Authority.
[27] See M.L. Montagnani – M.L. Passador, op. loc. ultt. citt., who indicate the document drawn up by the Financial Reporting Council – FRC, Artificial Intelligence and corporate reporting – How does it measure up?, January 2019 (at https://www.frc.org.uk/getattachment/e213b335-927b-4750-90db-64139aee44f2/AI-and-Corporate-Reporting-Jan.pdf).
[28] M.L. Montagnani – M.L. Passador, op. cit., p. 126, who also indicate the prospects for improving the quality of corporate communications, including in terms of their effectiveness, which according to the FRC could result from the use of AI sentiment analysis techniques, able to evaluate beforehand how a given report will be read and then to suggest preferable formulations so as, e.g., to better align it with the intentions of the management (ivi, p. 127).
[29] So, in translation, the regulatory discipline of Banca d’Italia, Disposizioni di vigilanza per le banche, cit., Part I, Tit. IV, sec. V, § 1.1 (p. 25).
[30] Banca d’Italia, Disposizioni di vigilanza per le banche, quoted in the previous note, Part I, Tit. IV, sec. II, § 2(b) and (c) for the parent company; and above all sec. V, § 1.1, where it is specifically provided that «[t]he proper and efficient functioning of the company’s bodies requires not only an adequate composition as provided for in the previous paragraphs, but also the preparation of equally adequate information flows, procedures, working methods and timing of meetings. Therefore, the identification and formalization of operational practices (procedures for convening, periodicity and duration of meetings, participation) that ensure effective and timely action of the bodies and their committees are of particular importance» (in translation).
[31] This regulatory framework is clearly borne in mind and expressly recalled by Banca d’Italia, Disposizioni di vigilanza per le banche, quoted at note 29, Part I, Tit. IV, sec. III, § 2.1, note 1 (p. 8).
[32] The expressions used here were coined by G.M. Zamperetti, Il dovere di informazione degli amministratori nella governance della società per azioni, Milano, Giuffrè, 2005, pp. 177 ff. and 259 ff.
[33] See generally on the application of this Article to joint stock companies’ boards, and also with regard to banks’ boards, P. Montalenti, Amministrazione e controllo nella società per azioni tra codice civile e ordinamento bancario, in Banca, borsa tit. cred., 2015, fasc. 6, I, pp. 709 ff.; G. Desiderio, Poteri individuali degli amministratori non esecutivi di società per azioni di diritto comune, bancarie e finanziarie (a sistema tradizionale), Milano, Giuffrè Francis Lefebvre, 2021, p. 13, where further references can be found.
[34] Banca d’Italia, Disposizioni di vigilanza per le banche, cit., Part I, Tit. IV, sec. III, § 2.1 (p. 8) and § 2.2(b) (p. 9) (in translation).
[35] Banca d’Italia, Disposizioni di vigilanza per le banche, cit., Part I, Tit. IV, sec. IV, § 1 (p. 17) (in translation).
[36] See, ex multis, Cassazione, 18 September 2020, n. 19560, at http://www.italgiure.giustizia.it/sncass, p. 25 (in translation). See G. Desiderio, op. cit., pp. 161 ff. for a detailed account of the history of this case-law and of the strands that can be identified in it, even in the constancy of the argument mentioned above in the text.
[37] Cassazione, 5 February 2013, n. 2737, at http://www.italgiure.giustizia.it/sncass, p. 13 (in translation, emphasis added).
[38] Cassazione, 10 July 2020, n. 14713, at http://www.italgiure.giustizia.it/sncass, pp. 13-14 (in translation).
[39] See the leading case of Cassazione, 31 August 2016, n. 17441, in Giur. it., 2017, fasc. 2, p. 386, where it is stated that «the mere power to “request that the delegated bodies provide the board with information on the management of the company” is triggered, so as to become a positive obligation of conduct, only by information such as to put the directors on notice according to the “due diligence required by the nature of the task and their specific expertise”: otherwise one would fall back into the configuration of a general supervisory obligation that the reform has deliberately eliminated» (in translation). For a better understanding of the issue, it must be taken into account that in 2003 the Italian Civil Code was amended so as to cancel the provision on the duty of directors to supervise the general conduct of the management and to introduce the provision under which «[t]he directors are required to act in an informed fashion; each director may request the delegated bodies to provide the board with information about the management of the company» (Article 2381, para. 6, Italian Civil Code, in translation).
[40] See the decision of Cassazione, 9 November 2015, n. 22848, at http://www.italgiure.giustizia.it/sncass, p. 7, indicating the following initiatives of the directors: the request to the chairman to convene a board meeting, the reminder to the delegated body to revoke the unlawful resolution or the revocation of the delegated powers, the sending of written requests to the delegated body to desist from the harmful activity, the challenge of the resolution pursuant to Article 2391 of the Italian Civil Code, the reporting to the public prosecutor or to the supervisory authority. In the same sense, ex multis, Cassazione, 12 January 2017, n. 604, ivi, p. 8; Cassazione, 18 April 2018, n. 9546, ivi, p. 6.
[41] See G. Desiderio, op. cit., pp. 167 ff., for the argumentative path and references to literature.
[42] See M.L. Montagnani, Flussi informativi e doveri degli amministratori, cit., p. 191, also for the indication of board management software already available on the market (notes 32, 34-38).
[43] This is the expression used by R.J. Thomas – M. Schrage – J.B. Bellin – G. Marcotte, How Boards Can Be Better — a Manifesto, in MIT Sloan Management Rev., vol. 50 (2009), n. 2, pp. 72-73, who also advocate the use of «interactive, mobile and social networking» tools through which to deliver control dashboards and scoreboards, i.e. tools for the graphical visualization of trends (such as heat maps) suited, in particular, to raising alerts for circumstances that are critical or in any case deserve attention, on the basis of the metrics that the board has previously identified; they are also mentioned by K. Kastiel – Y. Nili, “Captured Boards”: The Rise of “Super Directors” and the Case of Board Suite, in Wisc. L. Rev., 2017, n. 1, p. 10.
[44] For the Italian discipline this is inferred from the formulation of Article 2381, para. 4, of the Civil Code, which refers to the «information received» by the board as the basis of its monitoring and evaluation activity. For statements of the same sign in North American literature see those who refer to the «knowledge deficit» (L.M. Fairfax, The Uneasy Case for the Inside Director, in Iowa L. Rev., vol. 96 (2010)), or even to the «board informational capture» by delegated bodies (K. Kastiel – Y. Nili, “Captured Boards”, cit., pp. 5 ff.). For further references see G. Desiderio, op. cit., p. 247, text and notes.
[45] U. Eco, Sette anni di desiderio, 2nd ed., Milano, Bompiani, 1983, pp. 129 ff., in which the notion of censorship is extended even to cases where a piece of information is placed precisely side by side with others (and the same applies a fortiori when it is diluted and buried in the middle), so as not to give it prominence.
[46] It should be noted, incidentally, that the latter provision, already amended by Article 375, paragraph 2, of Legislative Decree No. 14/2019 as a corollary of the crisis and insolvency code, is the result of a completely inappropriate intervention by a subsequent corrective measure (Article 40, paragraph 2, of Legislative Decree No. 147/2020), which further reformulated Article 2086, paragraph 2, of the Italian Civil Code, cancelling the reference to the attribution to the board of the exclusive power of “business management”, so that the attribution of exclusive power is now literally predicated only with reference to the establishment of the organizational structures under Article 2086 of the Civil Code. Despite this pernicious attempt to mutilate the joint stock company paradigm, however, the interpretative outcome of the invariance of the allocation of managerial powers in the joint stock company seems inevitable, given the continuing validity of Article 2364, paragraph 1, No. 5), of the Italian Civil Code, which leaves the shareholders’ meeting with merely authorizing powers (where provided for by the bylaws), unsuitable for changing the management risk profile of the directors.
[47] See N. Abriani – G. Schneider, op. cit., pp. 153 ff.
[48] An example may clarify the point: a bank which today intended to keep its accounts without computer support, manually or in any case in ways involving steps with repeated manual intervention, would in all probability be exposed to sanctioning interventions by the supervisory authority for organizational deficiencies exposing the bank to errors that portend financial and reputational damage, and that must now be considered avoidable, as a matter of duty, through the use of appropriate IT technologies.
[49] From this point of view, in the banking sector prudential supervision itself can be identified as a gigantic apparatus aimed at enabling the early detection of situations that endanger the sound and prudent management of banks.
[50] The expression is borrowed from M.L. Montagnani, Flussi informativi, cit., p. 105.
[51] With regard to all the companies that use AI, see M.L. Montagnani, Il consiglio di amministrazione cit., pp. 132 ff.; M.L. Montagnani – M.L. Passador, Artificial Intelligence for Companies in a Post Covid World: An Empirical Analysis of Tech Committees in the EU and US, in Transatlantic Technology Law Forum (a joint initiative of Stanford Law School and the University of Vienna School of Law) – TTLF Working Papers, No. 70 (2020), at https://www-cdn.law.stanford.edu/wp-content/uploads/2020/12/montagnani_passador_wp70.pdf, whose accurate investigation revealed that the banking sector already has experience in this sense: one of the first banks to set up a tech committee was Banco Santander SA, already in 1999 (which, however, concluded the experience in 2015), while in 2019, among the 28 listed companies in the EU with such a committee (out of 2,783 listed), there were Banco Bilbao Vizcaya Argentaria SA and Deutsche Bank AG. In the North American experience, see Morgan Stanley (with the Operations and Technology Committee, whose characteristics are described at https://www.morganstanley.com/about-us-governance/otcchart) and Bank of New York Mellon (with the Technology Committee, on which see https://www.bnymellon.com/us/en/investor-relations/corporate-governance/technology-committee.html); M.L. Montagnani – M.L. Passador, Toward an Enhanced Level of Corporate Governance: Tech Committees as a Game Changer for the Board of Directors (January 9, 2021), Journal of Business, Entrepreneurship and the Law, forthcoming; Bocconi Legal Studies Research Paper No. 3728946, available on SSRN: https://ssrn.com/abstract=3728946 or http://dx.doi.org/10.2139/ssrn.3728946. For the statement that all advanced companies must already consider equipping themselves with these specialized committees today, see A. Sacco Ginevri, op. cit., pp. 433-434.
[52] The reference to statistical models makes relevant the definition of probability, which varies in relation to different approaches and the related methods for calculating it. One can distinguish between (i) the so-called classical approach, in which probability is the ratio of the number of cases favorable to the event to the number of possible cases (an approach difficult to extend to continuous variables and in any case limited to a finite number of events); (ii) the so-called frequentist approach, in which the probability of a (repeatable) event is the ratio of the number of successes to the number of trials performed, assuming the latter number to be sufficiently large; (iii) the Bayesian approach, in which the value of the probability of an event is determined as a subject’s level of confidence that a given event will occur or a given proposition is true. In the latter, subjective considerations (i.e., the credibility estimate of a hypothesis) are used to assign the probability of a given event a priori, i.e., before performing the experiment; later, i.e., on the basis of observation, Bayes’ theorem is applied to “adjust” the a priori probability and arrive at the a posteriori probability: since the former rests on prior information, it is not an absolute probability but always conditional on prior knowledge of the observed phenomenon. The Bayesian approach is thus the result of an epistemic conception of probability, in which probability reflects the ignorance of the observer rather than the intrinsic uncertainty of the observed phenomenon, and it is this conception that is incorporated into one of the algorithms that can be used by AI (see supra note 4).
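The a priori/a posteriori “adjustment” mentioned in this note is Bayes’ theorem; written out for a hypothesis H and observed evidence E (the notation P(H), P(E | H), P(H | E) is introduced here only for illustration), it reads:

```latex
% Bayes' theorem: the a posteriori probability of H given the evidence E
P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)},
\qquad
P(E) = \sum_{i} P(E \mid H_i)\, P(H_i)
```

where P(H) is the a priori probability assigned before the experiment, P(E | H) the likelihood of the evidence under the hypothesis, and P(H | E) the a posteriori probability, conditional on what has been observed; the denominator sums over the set of mutually exclusive hypotheses considered.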
[53] The reference is made to the «Proposal for a DECISION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL establishing the 2030 Policy Programme “Path to the Digital Decade”» – COM(2021) 574 final, of 15 September 2021.
[54] «Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on European data governance (Data Governance Act)», COM(2020) 767 final.
[55] «Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC», COM(2020) 825 final.
[56] «Proposal for a REGULATION OF THE EUROPEAN PARLIAMENT AND OF THE COUNCIL on contestable and fair markets in the digital sector (Digital Markets Act)», COM(2020) 842 final.
[57] «JOINT COMMUNICATION TO THE EUROPEAN PARLIAMENT AND THE COUNCIL – The EU’s Cybersecurity Strategy for the Digital Decade», JOIN(2020) 18 final.
[58] Among other things, its content distinguishes between systems characterized by: (i) an unacceptable risk (Article 5), and as such prohibited (when they have a manipulative effect on human behavior), (ii) high-risk systems (Article 6 and Annex III), admitted but subject to compliance with regulatory requirements and the preventive conformity assessment, conducted by a third party, and (iii) the residual category of low or minimal risk systems.
[59] N. Abriani – G. Schneider, op. cit., pp. 112-113 (in translation), who, in another passage, again underline how the attention of the proposal is concentrated «on risks of a personal nature, relegating the economic risks connected to the use of artificial intelligence systems to a distinct and marginal level» (p. 186). The Authors borrow the expression «surveillance capitalism» from S. Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, New York, Public Affairs, 2019.
[60] As correctly noted by G. D’Acquisto – F. Pizzetti, Regulation of the data economy and protection of personal data, in Analisi Giuridica dell’Economia, 2019, fasc. 1, p. 89, the GDPR «today constitutes the architrave on which the whole construction of the European vision of the development of the digital economy is based» (in translation).
[61] N. Abriani – G. Schneider, op. cit., p. 70, who report that this project is being developed by Tim Berners-Lee, the World Wide Web inventor, in collaboration with MIT. For more specific information about the characteristics of Solid see A.V. Sambra et al. (including T. Berners-Lee), Solid: A Platform for Decentralized Social Applications Based on Linked Data, 2016, at http://cs.brown.edu/courses/csci2390/2021/readings/solid.pdf.
[62] On the concentration in digital markets and big data see M. D’Alberti, Competition and social justice, in Mercato concorrenza regole, 2020, fasc. 2, pp. 241 ff.
[63] See the «Council Decision (EU) 2016/1841 of 5 October 2016 on the conclusion, on behalf of the European Union, of the Paris Agreement adopted under the United Nations Framework Convention on Climate Change».
[64] See the «Communication from the Commission – The European Green Deal», COM(2019) 640 final.
[65] These are specifically 1) Delegated Regulation (EU) 2021/1253 of April 21, 2021 amending Delegated Regulation (EU) 2017/565 with regard to the integration of sustainability factors, sustainability risks and sustainability preferences into certain organizational requirements and operating conditions for the activities of investment firms; 2) Delegated Regulation (EU) 2021/1255 of April 21, 2021 amending Delegated Regulation (EU) No. 231/2013 with regard to sustainability risks and sustainability factors to be taken into account by alternative investment fund managers; 3) Delegated Regulation (EU) 2021/1256 of April 21, 2021 amending Delegated Regulation (EU) 2015/35 as regards the integration of sustainability risks into the governance of insurance and reinsurance undertakings; 4) Delegated Regulation (EU) 2021/1257 of April 21, 2021 amending Delegated Regulations (EU) 2017/2358 and (EU) 2017/2359 with regard to the integration of sustainability factors, sustainability risks, and sustainability preferences into product control and governance requirements for insurance undertakings and distributors of insurance products and into conduct of business rules and investment advice for insurance investment products; 5) Delegated Directive (EU) 2021/1269 of Apr. 21, 2021 amending Delegated Directive (EU) 2017/593 as regards the integration of sustainability factors into product governance requirements; 6) Delegated Directive (EU) 2021/1270 of Apr. 21, 2021 amending Directive 2010/43/EU as regards sustainability risks and sustainability factors to be taken into account for undertakings for collective investment in transferable securities (UCITS). These regulatory interventions were preceded, with reference to the non-financial statement of listed companies, by the Directive 2014/95/EU, later transposed in Italy by Legislative Decree No. 254/2016.
[66] J.F. Houston – H. Shan, Corporate ESG Profiles and Banking Relationships (February 2019; available at https://ssrn.com/abstract=3331617).
[67] For an assessment along these lines, set with regard to the primarily economic benefits that sustainability-informed management can bring, among many, see P. Gompers – J. Ishii – A. Metrick, Corporate Governance and Equity Prices, in Quarterly Journal of Economics, vol. 118, Issue 1 (2003), pp. 107 ff. More recently see PwC, How AI can enable a sustainable future (April 8, 2019) (available at https://www.pwc.co.uk/sustainability-climate-change/assets/pdf/how-ai-can-enable-asustainable-future.pdf), where it is stated that «[u]sing AI for environmental applications has the potential to boost global GDP by 3.1 – 4.4% while also reducing global greenhouse gas emissions by around 1.5 – 4.0% by 2030 relative to Business as Usual […]. Economic benefits could be predominantly captured by Europe, East Asia and North America regions as they each achieve GDP gains in excess of US$ 1 trillion.». From the legal perspective the same approach is taken by M.L. Montagnani – M.L. Passador, Artificial Intelligence for Companies in a Post Covid World, cit., pp. 5 ff. and 67, where further references can be found; N. Abriani – G. Schneider, op. cit., pp. 240 ff. For an “out-of-the-chorus” view (at least in the dominant European perspective) regarding policy on ESG objectives and, more generally, on the shift of companies toward objectives other than shareholder value maximization see L.A. Bebchuk – R. Tallarita, The Illusory Promise of Stakeholder Governance, in Cornell L. Rev., vol. 106 (2020), pp. 91 ff., who, at the outcome of a conceptual and in concreto investigation of stakeholderism, conclude that this approach is counterproductive to the interests of the stakeholders themselves, as it creates the illusory impression that there is no need for «external law, regulations, and policies» and that the initiative of corporate leaders is sufficient to protect the interests of stakeholders (p. 176), with the consequence that «acceptance of stakeholderism would impede or delay legal, regulatory, and policy reforms that could provide real, meaningful protection to the stakeholders» (p. 177).
For a catalog of ESG risks and criteria for tracing them back to specific impacts on company strategy and board monitoring activity see V. Ramani – H. Saltman, Running the Risks: How Corporate Boards Can Oversee Environmental, Social And Governance Issues, (November 25, 2019), available at https://corpgov.law.harvard.edu/2019/11/25/running-the-risks-how-corporate-boards-can-oversee-environmental-social-and-governance-issues/#more-124429.
[68] T. Quaadman – E. Rust, ESG Reporting Best Practices (2 December 2019), at https://corpgov.law.harvard.edu/2019/12/02/esg-reporting-best-practices/#more-124487.
[69] J. Wilcox, A Common-Sense Approach to Corporate Purpose, ESG and Sustainability, (October 26th 2019), at https://corpgov.law.harvard.edu/2019/10/26/a-common-sense-approach-to-corporate-purpose-esg-and-sustainability/; B.M. Cremona – M.L. Passador, What About the Future of European Banks? Board Characteristics and ESG Impact, in Securities Regulation L. Journal, vol. 47, Issue 4 (2019), pp. 319 ff. (also available at https://ssrn.com/abstract=3441784).
[70] On this point in detail see I. Erel – L.H. Stern – C. Tan – M.S. Weisbach, Selecting Directors Using Machine Learning, in Rev. of Financial Studies, vol. 34 (2021), pp. 3226 ff.
[71] N. Abriani – G. Schneider, op. cit., p. 56; A. Dignam, Artificial intelligence, tech corporate governance and the public interest regulatory response, in Cambridge Journal of Regions, Economy and Society, 2020, 13, pp. 37 ff., at p. 41.
[72] A. Dignam, op. loc. ultt. citt.
[73] See P. Voosen, How AI detectives are cracking open the black box of deep learning, 2017, in https://www.science.org/content/article/how-ai-detectives-are-cracking-open-black-box-deep-learning.
[74] Investigating this problem are B. Liu – J. Selby, Directors’ defense of reliance on recommendations made by artificial intelligence systems: comparing the approaches in Delaware, New Zealand and Australia, in Australian J. of Corporate L., vol. 34, n. 2, 2019, pp. 141 ff., who conclude by recommending that directors carry out an Algorithmic Impact Assessment before “trusting” the recommendations provided by AI systems.
[75] On this see first and foremost the early and seminal contribution by A. Ezrachi – M.E. Stucke, Virtual Competition: The Promise and Perils of the Algorithm-driven Economy, Cambridge, Harvard University Press, 2016. From an institutional perspective see OECD, Executive Summary of the Roundtable on Algorithms and Collusion – Annex to the Summary Record of the 127th meeting of the Competition Committee, June 21-23, 2017, at https://one.oecd.org/document/DAF/COMP/M(2017)1/ANN3/FINAL/en/pdf. For more on the topic see L. Calzolari, La collusione fra algoritmi nell’era dei big data: l’imputabilità alle imprese delle “intese 4.0” ai sensi dell’art. 101 TFUE, in MediaLaws, no. 3/2018, pp. 1 ff.; N. Abriani – G. Schneider, op. cit., p. 73. Of great significance in this framework is the ruling of the Bundesgerichtshof, Beschluss vom 23. Juni 2020, KVR 69/19 Facebook, which rejected, on remand, Facebook’s application for suspension of the order issued by the Bundeskartellamt, on the assumption that in that case the GDPR violation constituted an abuse of dominant position: on the ruling see A. Giannaccari, Facebook and exploitative abuse under scrutiny by the Bundesgerichtshof, in Mercato concorrenza regole, 2020, fasc. 2, pp. 403 ff. Currently pending before the EU Court of Justice is a request for a preliminary ruling in the Facebook case (Case C-252/21).
[76] A. Sacco Ginevri, op. cit., p. 432, speaks of a “dehumanization” of the legal person.
[77] See ex multis A. Celotto, Come regolare gli algoritmi. Il difficile bilanciamento fra scienza, etica e diritto, in Analisi Giuridica dell’Economia, 2019, fasc. 1, pp. 47 ff.
[78] On these “laws” see G. Lemme, Gli smart contracts e le tre leggi della robotica, in Analisi Giuridica dell’Economia, 2019, fasc. 1, pp. 130 ff.
[79] As G. Lemme, op. cit., p. 129, reminds us, «[i]n 1920, the Czech playwright Karel Čapek published R.U.R. (Rossumovi Univerzálni Roboti), a dystopian work in which beings artificially created to work in place of humans (robota in the Czech language means “work”) end up rebelling and exterminating humanity.» (in translation)
[80] Futurism was an Italian literary, artistic and political movement, founded in 1909 by F.T. Marinetti. Reconnecting with philosophical irrationalism, and through a whole series of ‘manifestos’ and resounding polemics, it advocated an art and a way of life that would make a tabula rasa of the past and of all traditional forms of expression, drawing inspiration from the dynamism of modern life and of mechanical civilization, and projecting itself into the future, thereby providing the model for all subsequent avant-gardes. In this regard E. Gentile, “La nostra sfida alle stelle”. Futuristi in politica, Roma-Bari, Laterza, 2009, p. 4, writes, «Futurism was the first artistic movement of the twentieth century that proposed an anthropological revolution to create the new man of modernity, identified with the triumph of the machine and technology, the mighty new forces unleashed by the creative power of man, destined to radically change man himself, until he arrived at a kind of mechanical anthropoid, an inhuman and superhuman being at the same time, born of the symbiosis between man and machine.» (in translation). Indeed, the movement’s founder, F.T. Marinetti [Come si seducono le donne (1916), Firenze, Vallecchi, 2003, pp. 102-103], emphatically declares, «Women, the mutilated man whom you will kiss will never appear to you sluggish, defeated, skeptical and dull […]. This is not romanticism that despises the body in the name of an ascetic abstraction. This is futurism that glorifies the body modified and embellished by war […]. Surgery has already begun the great transformation. After Carrel, surgical warfare fulminates the physiological revolution. Fusion of Steel and Flesh. Humanization of steel and metallization of flesh in the multiplied man. Motor body of different parts interchangeable and replaceable. Immortality of man!» (in translation).
[81] For the statement that «the idea of efficiency – of which there is a formal definition only for certain classes of procedures – materializes in a myriad of applications and specific calculations, from which it is difficult to derive a single notion» (in translation) see P. Zellini, La dittatura del calcolo, Milano, Adelphi, 2018, p. 30; and it is a mathematician who says so.
[82] P. Domingos, op. cit., p. 15.
[83] The hunting episode is recounted in Odyssey, XIX, verses 428-466.
Author
* Giuseppe Desiderio is Associate Professor of Economic Law at the Parthenope University of Naples.