Open Review of Management, Banking and Finance

«They say things are happening at the border, but nobody knows which border» (Mark Strand)

Creditworthiness Assessment and Algorithmic Responsibility: A New Paradigm of Consumer Protection in the Digital Credit Market.

by Alma Agnese Rinaldi and Antonio Uricchio

Abstract: The essay examines creditworthiness assessment as a fundamental instrument for safeguarding both consumer protection and the stability of the banking and financial system, standing at the intersection of credit law, banking transparency, and data protection regulation. Within the evolving European legal framework, the analysis highlights the transition from a creditor-oriented perspective—focused on risk mitigation for financial intermediaries—to a consumer-oriented approach aimed at preventing over-indebtedness and promoting a model of responsible lending. Through a systematic reading of Directives 2008/48/EC and (EU) 2023/2225, the study shows how credit assessment today performs a public interest and substantive legitimacy function, combining objectives of economic efficiency with those of social protection, in the higher interest of trust between the parties and the markets. Particular attention is devoted to the growing impact of technological innovation and automated credit scoring systems which, while enhancing the efficiency of evaluation processes, raise significant issues of opacity, algorithmic bias, and the processing of personal data. The analysis ultimately advocates a systemic and multi-level regulatory approach capable of reconciling technological innovation with algorithmic accountability, thereby ensuring the effective protection of consumers’ fundamental rights within the digital credit market.

Summary: 1. Introduction. – 2. The evolution of the duty to assess creditworthiness from a consumer protection perspective. – 3. Towards a new assessment paradigm: creditworthiness between prudential rules and technological innovation. – 4. The technological evolution of credit scoring: between automation and discretionary assessment. – 5. Automated decision-making and liability profiles: opacity, bias and explainability in credit scoring. – 6. The principle of human control in automated decision-making processes: Article 22 of the GDPR between prohibition and derogations. – 7. Preventive function and systemic dimension of creditworthiness assessment.

1. In the modern European legal system, creditworthiness assessment is a fundamental safeguard aimed at ensuring the fairness, sustainability and transparency of financing relationships between lending financial institutions, intermediaries and consumers. It stands at the intersection of two structural needs of the financial market: on the one hand, the protection of consumers as vulnerable parties structurally exposed to the risk of over-indebtedness; on the other, the protection of savings and, with it, the stability of the banking and credit system, guaranteed by Article 47 of the Constitution.

From this perspective, the assessment of creditworthiness is not merely a technical requirement, but a function of public importance, as it guarantees the proper functioning of the credit market and the implementation of the principle of contractual fairness, preventing both the risk of debtor insolvency and the more subtle risk of irresponsible lending by intermediaries, in a context where easy access to finance can lead to an aggravation of individual and collective economic fragility.

Directive 2008/48/EC[1] first, and the more recent Directive (EU) 2023/2225[2] later, have progressively redefined the scope of the creditworthiness assessment obligation, shifting the focus from a perspective centred on the protection of the interests of the credit intermediary to a broader vision oriented towards the substantive protection of the consumer.

The result is a concept of credit assessment as a tool for the ex ante prevention of over-indebtedness, which requires the lender to verify not only the probability of repayment, but also the compatibility of the debt with the actual economic and social conditions of the borrower[3].

This regulatory evolution reflects a profound change in the philosophy of credit regulation: the objective is no longer simply to reduce the risk for the intermediary, but to ensure that credit is granted in the best possible interest of the consumer, with a view to “responsible lending”[4].

Creditworthiness assessment, understood in this way, becomes a condition of substantive legitimacy of credit activity, striking a balance between the lender’s entrepreneurial freedom and the protection of the debtor’s fundamental rights, including the right to a dignified economic life free from unfair or predatory practices.

In the current digital ecosystem, the centrality of this function is even more evident: the use of automated techniques and credit scoring algorithms amplifies the efficiency of assessment processes, but at the same time increases, as will be explained in more detail below, the risks of discrimination, opacity and unfair treatment of personal data.

Consequently, creditworthiness assessment is now a point of convergence between consumer law, banking law and personal data protection regulations, requiring a systemic approach that combines technological innovation, social responsibility and effective consumer protection.

2. The obligation imposed on creditors to carry out an adequate assessment of the creditworthiness of potential debtors fulfils a dual systemic function: on the one hand, it aims to protect consumers against the risk of over-indebtedness and the granting of unsustainable credit; on the other, it is aimed at preserving the financial soundness of financial intermediaries and, more generally, at ensuring the overall stability of the banking and financial market.

The balancing function that this assessment performs is therefore part of a framework of shared responsibility between creditor and debtor, aimed at ensuring that credit is granted under sustainable and transparent conditions, avoiding opportunistic or purely speculative practices.

Article 8 of Directive 2008/48/EC had already introduced an initial set of assessment obligations, which were, however, formulated in general terms and such as to give credit institutions broad discretion in defining the criteria for economic reliability.

This approach, which was mainly geared towards the interests of the intermediary, has over time highlighted a structural asymmetry between the position of the creditor — focused on the profitability of the transaction — and that of the borrower, who is more exposed to the risk of excessive debt.

This raises the eminently systematic question of the prevailing purpose of the assessment: whether it should be understood as a “creditor-oriented” tool, designed to mitigate credit risk for the intermediary, or as a “debtor-oriented” mechanism, aimed at ensuring the sustainability of the consumer’s commitment.

The contrast between these two perspectives has been effectively outlined by the UK Financial Conduct Authority (FCA)[5] and extensively explored in legal theory.

When creditworthiness is assessed from the creditor’s perspective, the focus is on analysing the risk of insolvency and protecting the economic balance of the transaction.

Conversely, from a perspective geared towards protecting the debtor-consumer, the assessment must extend to the actual ability of the individual to fulfil the obligations undertaken, as well as the overall impact that the loan may have on their economic situation.

In other words, the creditor’s activity aimed at minimising risk and maximising return does not necessarily coincide with the consumer’s interest in avoiding disproportionate forms of indebtedness.

It is quite possible that a credit agreement may prove economically advantageous for the intermediary but, at the same time, detrimental to the consumer, who is exposed to costs, ancillary charges and penalties that make the debt effectively unsustainable.

Emblematic in this sense are “Buy Now, Pay Later” (BNPL) credit products, in which the creditor can also profit from the debtor’s default, thanks to the accrual of interest and penalties, regardless of the repayment of the capital.

In light of these critical issues, legal scholars[6] and the European Commission have long highlighted the inadequacy of Article 8 of the 2008 Directive in effectively pursuing consumer protection objectives.

In response to these concerns, Article 18 of Directive (EU) 2023/2225 introduced more detailed and comprehensive rules, marking a paradigm shift in the approach to this issue.

The new provision clarifies that creditworthiness assessments must be carried out in the interests of the consumer and not primarily in the interests of the creditor, with the aim of preventing irresponsible lending practices and over-indebtedness, in accordance with the principle of responsible lending.

The assessment must take adequate account of factors relevant to the verification of the consumer’s prospects of fulfilment and risk.

The assessment must now be based on up-to-date, accurate and verifiable information on the consumer’s income, expenditure and overall economic and financial situation, proportionate to the nature, duration and value of the credit requested.

In particular, the Directive requires the creditor to obtain and verify independent documentation attesting to the applicant’s economic circumstances, excluding the possibility that the applicant’s mere statements may be considered sufficient in the absence of appropriate supporting documents, in line with the established case law of the European Union.

In accordance with the principles of proportionality and protection of personal dignity, the directive also prohibits the processing of sensitive data — such as data relating to health, religious beliefs or sexual orientation — and the use of information from social networks for credit assessment purposes.

Member States may, however, authorise access to relevant credit databases, provided that the assessment is not based solely on such references (Article 18(11)).

Compared to the previous legislation, there is a strengthening of the duty of care on the part of the creditor, who may only grant credit if the assessment is positive.

Otherwise, the intermediary has no discretion and must refuse the loan, in accordance with the principles of legal certainty and consumer protection.

The new directive also extends these obligations to the internal organisation of the creditor, which must have structured and verifiable procedures in place, with particular attention to the automated assessment processes governed by Article 18(3).

Despite these advances, a certain degree of conceptual uncertainty remains in relation to the assessment criteria and the notions of “consumer interest”, “irresponsible practices” and “relevant factors”, leaving room for differences in interpretation between national legal systems and, consequently, possible distortions of competition.

This calls for clarification by the European Banking Authority (EBA), aimed at promoting uniform application of the rules and consolidating harmonisation in the internal market, as has already been done in the area of consumer mortgage credit, thereby raising the degree of uniformity of the requirements imposed by national transposing legislation.

The European Parliament has already expressed its opinion in this regard[7].

Finally, the 2023 Directive recognises the consumer’s right, in line with the principles of Regulation (EU) 2016/679 (GDPR), to human intervention in cases of fully automated assessments, as well as the right to obtain a clear and comprehensible explanation of the algorithmic decision and to request a review of the creditworthiness assessment.

These guarantees, reiterated by the recent ruling of the Court of Justice[8], represent the point of convergence between privacy regulations and banking law, reaffirming the centrality of the individual in the context of the digitisation of credit granting processes.

As regards enforcement, the directive confirms the autonomy of Member States in setting up control and sanction mechanisms, while requiring that they be “effective, proportionate and dissuasive” (Article 44(1)).

The Court of Justice, in line with established case law, has also clarified that the burden of proof of compliance with the assessment obligations lies with the creditor, as the professionally qualified party required to document the fulfilment of its information and assessment duties.

However, this autonomy, in substantive and procedural terms, is subject to the limits deriving from the principles of equivalence and effectiveness, according to which the methods of protecting rights recognised by EU law must not be less favourable than domestic methods or make it excessively difficult to exercise those rights[9].

Furthermore, the severity of the penalties must be commensurate with the seriousness of the infringements and capable of producing a genuinely dissuasive effect, in accordance with the principle of proportionality.

Applying these principles to Article 8 of the 2008 Directive, the Court stated that purely public enforcement — through administrative penalties — does not ensure effective consumer protection, as it does not affect the individual position of those who have taken out a loan in breach of the rule.

This gives rise to an obligation for Member States to recognise consumers as having a genuine subjective right of a private nature to compliance with the obligations imposed on the creditor, supplementing public sanctions with effective civil remedies[10].

In this sense, the sanction provided for in Article 44 of the new Directive must necessarily also include civil law consequences in order to ensure the full effectiveness of the protection.

At the procedural level, the Court has also ruled that once the national court has the necessary factual and legal elements at its disposal, it is required to verify ex officio compliance with the obligations imposed on the creditor.

On the other hand, a rule that makes the consumer’s right to take action subject to an excessively short limitation period, such as the three-year period provided for the declaration of nullity of the contract and the return of the capital, is incompatible with the principle of effectiveness.

Finally, EU case law has recognised the legitimacy, in national legal systems, of private law sanctions such as the nullity of the contract or the forfeiture of the creditor’s right to interest, clarifying that, in order to be truly dissuasive, the sanction must entail an economic loss, even if only in terms of lost earnings[11].

It is therefore necessary that the interest retained by the negligent creditor be significantly lower than that which he would have received in the event of diligent performance[12].

Furthermore, the Court has admitted the possibility of sanctioning the creditor even ex post, i.e. after the contract has been fully performed and even in the absence of actual harm to the consumer, in view of the public policy objective of the legislation, which aims not only to protect the individual debtor but also, in parallel, to safeguard the proper functioning of the consumer credit market by making intermediaries accountable and preventing “irresponsible” lending practices.

Ultimately, Directive (EU) 2023/2225 marks the transition from a purely prudential approach to creditworthiness assessment to a model of ethical and sustainable credit, in which the lender’s responsibility takes on a clear public dimension.

The European legislator places the assessment function within the broader context of the protection of fundamental consumer rights, elevating the principle of responsible lending to a substantive parameter of intermediaries’ actions.

The result is a framework in which the professional diligence of the creditor is not limited to the prevention of insolvency risk, but extends to the positive duty to ensure that the financing is compatible with the economic capacity, needs and life objectives of the debtor.

The assessment therefore becomes an expression of a broader social responsibility of financial intermediation, which is called upon to combine economic efficiency, sustainability and respect for human dignity, in the knowledge that only credit granted in a correct and proportionate manner can ensure the stability of the system and public confidence in the financial markets.

3. In this perspective, reflection on the content and methods of creditworthiness assessment cannot ignore an analysis of the operational models actually used by intermediaries.

While European legislation has gradually oriented credit granting towards a logic of responsibility and sustainability, it is also true that the effectiveness of these objectives depends to a large extent on the techniques used to measure risk in practice.

Technological developments and the growing availability of digital data have profoundly transformed the assessment function, which has gradually shifted from a predominantly empirical procedure to an algorithmic and predictive analysis process capable of estimating consumer creditworthiness more accurately, but also more complexly.

In this context, it therefore appears necessary to distinguish between traditional assessment systems, based on linear statistical models and the processing of economic and financial data, and innovative systems, which use artificial intelligence and machine learning techniques, as well as a much broader information base, including behavioural and digital elements.

A comparative analysis of these models allows us to understand how technology has, on the one hand, expanded the potential of credit scoring but, on the other, introduced new regulatory and protection challenges related to the use of personal data and the transparency of automated decision-making processes[13].

It has been said that the assessment of consumer creditworthiness — to which the legislator had previously devoted limited and fragmented attention — is now subject to comprehensive and systematic regulation contained in Articles 124-bis and 120-undecies of the Consolidated Banking Law (TUB), dedicated respectively to consumer credit[14] and consumer real estate credit[15].

These provisions incorporate the evolution of European regulations on responsible lending, orienting them towards a model that emphasises prior verification of solvency as a means of protecting not only the intermediary but also — and above all — the consumer.

The assessment of creditworthiness consists of a technical and financial prognosis aimed at estimating the ability of the borrower to fulfil their obligations by repaying the sum received in accordance with the agreed terms and conditions[16].

This judgement is expressed through the attribution of an individual risk score (credit score), placed within a predefined scale of values, which allows the customer’s degree of economic reliability to be represented in summary terms.

The assessment, which usually takes place in the pre-contractual phase, serves a dual purpose: on the one hand, it allows the conditions for granting credit to be verified; on the other, it serves to determine the economic and contractual conditions of the transaction, affecting the amount of the loan, the applicable rates and the amount of collateral required.

It is therefore an essential part of the granting process, in which the requirements of economic prudence, protection of the weaker party and stability of the financial system are intertwined.

There are essentially two key elements that contribute to the determination of the credit score:

1. the nature and quality of the data used for the assessment;

2. the methods of processing and interpreting such data.

Traditional assessment systems, which are still prevalent among supervised intermediaries, are based on the analysis of objective economic and financial information concerning the consumer’s assets and income: identity, credit history, punctuality of payments, income level, account movements and debt composition.

These elements — known as hard data — are characterised by verifiability, measurability and translatability into numerical indicators, and are acquired either directly from the applicant, from sources internal to the intermediary, or, above all, through credit databases, among which public credit bureaus are of primary importance.

These data are traditionally processed using linear statistical models, which correlate the available variables with the probability of insolvency, generating a risk score proportional to the customer’s level of reliability.

These systems are based on simple algorithms, defined by operators during the design phase and calibrated on a relatively small number of variables, allowing for a clear understanding of the decision-making process and relative stability of results.
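To make this concrete, the following is a minimal sketch of such a "traditional" scorecard: a logistic regression over a handful of verifiable hard variables. The data are synthetic and all figures and feature names are illustrative, not drawn from any real model.

```python
# Minimal sketch of a traditional scorecard: logistic regression on hard data.
# All values are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 1_000
X = np.column_stack([
    rng.normal(35_000, 10_000, n),   # annual income
    rng.integers(0, 5, n),           # late payments in the last 24 months
    rng.uniform(0.0, 0.8, n),        # debt-to-income ratio
])
# Synthetic default flag, loosely tied to the three variables above.
logit = -2.0 - 0.00004 * X[:, 0] + 0.6 * X[:, 1] + 2.5 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
pd_hat = model.predict_proba(X)[:, 1]   # estimated probability of default

# The decision logic is fully inspectable: one weight per variable.
print(dict(zip(["income", "late_payments", "dti"], model[-1].coef_[0].round(3))))
```

The point of the example is precisely the property the text describes: with so few variables and a linear model, the contribution of each input to the outcome can be read directly from the fitted weights.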

However, relentless technological evolution and the digitalisation of financial markets have profoundly transformed assessment methods, exponentially increasing both the amount of usable information and the complexity of processing techniques.

The emergence of new credit scoring models is closely linked, on the one hand, to the growing availability of so-called big data, i.e. vast sets of heterogeneous and unstructured information, and, on the other, to the spread of automated and predictive analysis methodologies based on the use of artificial intelligence and machine learning algorithms[17].

Based on the now widely accepted assumption that “all data is credit data”[18], innovative systems take a radically different approach from traditional models: creditworthiness assessment is no longer limited to “classic” economic and financial parameters, but extends to a much broader set of information, including so-called alternative data, which is often non-numerical and not immediately verifiable.

Such data can be collected directly from the applicant, through the completion of digital forms or the sharing of electronic documentation, or indirectly, by tracking the user’s digital behaviour, both online and offline.

This category includes data relating to web browsing, consumption habits, purchasing preferences, lifestyles, and interactions on social networks, which make it possible to outline a complex behavioural profile of the individual.

In addition, operators specialising in commercial and reputational profiling can provide further information derived from the reprocessing of digital behaviour and computer tracking, contributing to the formation of increasingly detailed individual risk profiles[19].

The new generation of credit scoring models therefore combines two categories of information (a minimal illustrative sketch follows the list):

· “hard data”, relating to the applicant’s economic and financial aspects and income capacity;

· “soft data”, i.e. information extracted from the extra-financial digital traces left by the individual in the context of their daily life.
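A hypothetical sketch of how a scoring pipeline might merge the two categories into a single feature matrix follows; the column names and "soft" signals are invented purely for illustration.

```python
# Hypothetical sketch: merging hard (financial) and soft (behavioural) data
# into one model input. Column names and soft signals are illustrative only.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

applications = pd.DataFrame({
    "income": [28_000, 52_000],          # hard data: verifiable, numeric
    "dti": [0.45, 0.20],                 # hard data: debt-to-income ratio
    "device_type": ["android", "ios"],   # soft data: digital-footprint signal
    "night_sessions_ratio": [0.7, 0.1],  # soft data: behavioural signal
})

preprocess = ColumnTransformer([
    ("hard", StandardScaler(), ["income", "dti"]),
    ("soft_num", StandardScaler(), ["night_sessions_ratio"]),
    ("soft_cat", OneHotEncoder(handle_unknown="ignore"), ["device_type"]),
])
X = preprocess.fit_transform(applications)
print(X.shape)  # one unified feature matrix feeding the scoring model
```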

This integration of traditional and alternative data is now one of the most significant features of innovation in the credit sector, but at the same time it raises important legal and regulatory issues concerning the processing of personal data, the transparency of assessment processes and the responsibility of intermediaries in the use of increasingly sophisticated algorithmic tools[20].

As noted in a study promoted by the Bank of Italy, there has been a progressive expansion of the data sources used for creditworthiness assessment at the global level: from structured financial data (capital and profitability indicators, current account and payment data, market data) to structured non-financial data (socio-demographic information, including from third-party sources) and unstructured data, both financial (transactional and open banking information) and non-financial (digital footprint, online behaviour, social information)[21].

These models are capable of processing far greater quantities of data than traditional statistical systems, identifying non-linear correlations and recurring behavioural patterns that would be difficult to detect using conventional analysis tools.

A case in point is a FinTech start-up operating in the credit sector, which has patented proprietary software based on machine learning algorithms capable of combining and integrating traditional financial information with a wide range of alternative data from users’ digital activity[22].

Through adaptive learning procedures, the system develops customised predictive models that estimate the applicant’s probability of fulfilment with increasing accuracy as the algorithm is exposed to new information inputs.

The elements taken into consideration include, for example, web search activity, the time spent reading the terms and conditions, consumption and spending habits, geolocation data (GPS), as well as information obtained through web crawling and web scraping techniques or derived from user interactions on social media platforms.

Once normalised and integrated with economic and financial variables, these data contribute to building a dynamic creditworthiness profile, updated in real time and potentially more representative of actual consumer behaviour than traditional models based on static data.
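The adaptive logic described above can be sketched, under simplifying assumptions, with an incrementally trained linear model (scikit-learn's partial_fit interface). This is a toy illustration of dynamic score updating, not the patented system referred to in the text.

```python
# Sketch of adaptive scoring: a model whose estimates are refreshed as new
# repayment outcomes arrive. Data and dimensions are purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
rng = np.random.default_rng(1)
classes = np.array([0, 1])  # 1 = default

for batch in range(5):                        # each batch = newly observed outcomes
    X_new = rng.normal(size=(200, 4))
    y_new = (X_new[:, 0] + rng.normal(size=200) > 1).astype(int)
    model.partial_fit(X_new, y_new, classes=classes)

# The same applicant can receive a different score after each update.
applicant = rng.normal(size=(1, 4))
print(float(model.predict_proba(applicant)[0, 1]))
```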

The introduction of these systems has had far-reaching transformative effects on the credit market, leading to the entry of non-traditional operators — such as peer-to-peer lending platforms and new digital intermediaries — which, despite not having the wealth of information historically held by banks and not being connected to public credit bureaus, are now able to carry out credit assessments with a level of predictive accuracy similar to, if not superior to, that of traditional institutions[23].

This phenomenon is one of the most obvious manifestations of the financial disintermediation produced by the digital revolution, which has enabled technologically advanced entities without a traditional banking structure to access the credit market by exploiting the ability of algorithms to process information from a variety of sources.

This development has broadened the range of financial services on offer, increasing the competitiveness of the sector and promoting, at least potentially, greater credit inclusion, thanks to the possibility of assessing individuals who do not have a traditional banking history or who were previously excluded from the financial circuit.

However, the innovative scope of these tools requires broader reflection on the systemic implications of their use, as the gradual replacement of human judgement with algorithmic analysis brings not only advantages in terms of efficiency and speed of decision-making, but also significant risks in relation to the transparency of decision-making processes, the protection of personal data and the legal liability of the operators involved.

In this perspective, the growing interconnection between FinTech technologies, automated scoring systems and traditional lending activities highlights the urgent need for regulatory adaptation capable of combining innovation with the protection of the reliability, fairness and equity of assessments.

The introduction of automated credit rating systems has led to profound changes in the structure of the financial market and in the relationship between intermediaries and consumers, generating significant advantages but also new areas of legal and regulatory risk.

In terms of benefits, these tools have proved decisive in promoting greater financial inclusion, as they make it possible to broaden the pool of eligible borrowers to include those who, due to a lack of credit history or limited banking operations, would traditionally have been excluded from credit circuits.

Through the use of algorithms capable of analysing a wider set of variables, including behavioural and digital ones, operators can now identify reliability profiles even in individuals who do not have traditional collateral, thus making credit accessible to categories of consumers previously “invisible” to the banking system.

From the point of view of financial intermediaries, the use of automated models has led to a marked improvement in the granularity of risk assessment, allowing for a more accurate distinction between different levels of solvency.

This translates into more accurate customer segmentation, the ability to tailor contractual terms to the risk profile of the individual applicant and, more generally, increased operational efficiency, as data collection and analysis take place in a significantly shorter time than with traditional methods.

It also results in greater personalisation of financial services, in line with the customer-centric innovation approach that characterises contemporary digital finance.

However, the advantages in terms of speed, accuracy and accessibility are accompanied by a series of structural critical issues that require in-depth consideration from a legal, ethical and regulatory perspective.

The main risk lies in the heterogeneous and unstructured nature of the data used by scoring algorithms, which increasingly include information that is not directly relevant to the applicant’s economic and financial situation but relates to extra-financial aspects of their private life or behaviour.

This includes, for example, online browsing data, social network interactions, consumption habits, geographical location and even communication styles, which are used as indirect indicators of creditworthiness.

The use of such information, while potentially useful in expanding the predictive capacity of models, raises sensitive issues in terms of the proportionality and relevance of data processing, as well as respect for the right to privacy and non-discrimination.

Where the algorithm bases its assessment on variables that reflect social behaviour, cultural preferences or personal characteristics that are not economically relevant, there is a risk of generating arbitrary or discriminatory decisions, in contrast to the principles of fairness, equity and transparency that should govern credit activities.

Added to this is the difficulty for the data subject to understand and challenge the way in which their profile has been assessed, due to the so-called algorithmic opacity (black box problem), which makes the decision-making criteria used by artificial intelligence systems difficult to understand.

As a result, the adoption of increasingly sophisticated scoring tools requires the definition of a clear regulatory and ethical framework that balances business freedom and technological innovation with the need to protect the dignity, freedom and informational self-determination of consumers[24].

4. Credit scoring is an objective procedure for assessing an individual’s creditworthiness, aimed at determining the level of risk associated with granting a loan or credit line.

It is based on statistical analyses and complex predictive models, which process a variety of financial and behavioural data — such as credit history[25], current debt exposure, disposable income and employment status — with the aim of constructing a synthetic index of economic reliability.

The process culminates in the assignment of a numerical creditworthiness score, proportional to the probability that the applicant will be able to fulfil the obligations assumed in the contractual terms.

This score therefore constitutes a probabilistic representation of the risk of insolvency and is fully integrated into the broader duty of creditworthiness assessment imposed on financial intermediaries by European and national legislation.
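By way of illustration, one widespread industry convention (details vary by provider and are not prescribed by law) expresses the estimated probability of default on a points scale through a log-odds transformation, in which a fixed number of points doubles the odds of repayment:

```python
# Illustrative mapping from probability of default (PD) to a points scale:
# score = offset + factor * ln(odds of repayment). Parameter values are
# conventional examples, not values mandated by any regulation.
import math

def pd_to_score(pd_, base_score=600, base_odds=50, pdo=20):
    """pdo = points needed to double the odds of repayment."""
    factor = pdo / math.log(2)
    offset = base_score - factor * math.log(base_odds)
    odds = (1 - pd_) / pd_          # odds of repaying vs. defaulting
    return offset + factor * math.log(odds)

for p in (0.01, 0.05, 0.20):
    print(f"PD={p:.0%} -> score={pd_to_score(p):.0f}")
```

The mapping is monotonic: a higher estimated probability of default always yields a lower score, which is what allows the score to function as the probabilistic representation of insolvency risk described in the text.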

Through scoring, the intermediary not only quantifies the risk, but also fulfils a specific obligation of diligence and proper management, which serves to ensure that credit is granted in a sustainable manner, both for the customer and for the stability of the financial system as a whole[26].

Unlike rating, which is also based on qualitative elements and the experiential judgement of the financial analyst, credit scoring focuses on entirely quantifiable variables, excluding any subjective component and entrusting the assessment to a purely statistical-mathematical process.

This results in a substantial distinction not only in terms of the subjects being assessed and the assessors, but also in relation to the recipients of the assessment and the economic and legal function of the tool: while ratings aim to express an overall judgement on the financial reliability of an issuer or a security, scoring pursues immediate operational objectives, resulting in a summary index intended to guide credit granting decisions[27].

Compared to traditional rating systems, scoring models are characterised by lower operating costs, speed of processing and greater standardisation of results, which makes them particularly useful tools for the mass activity of financial intermediaries[28].

For some time now, creditworthiness assessment has been using automated systems based on linear regression models and traditional statistical techniques, capable of correlating economic and behavioural variables with the risk of insolvency[29].

However, the advent of FinTech has marked a profound change, affecting not only operating methods but also the very philosophy of credit assessment.

The introduction of artificial intelligence and machine learning technologies has enabled a radically different approach to data management, based on progressive learning mechanisms and predictive capabilities founded on the identification of non-linear statistical correlations.

These models, trained on large credit datasets, learn autonomously to recognise recurring patterns and estimate the probability of default of new applicants, automating not only the processing phase, but also the collection and selection of information.

At the same time, the use of big data analytics has exponentially expanded the available information base, allowing data from heterogeneous sources — including non-traditional ones — to be integrated and more detailed assessments to be made.

The most recent empirical studies show that the use of artificial intelligence in advanced credit scoring systems is expanding rapidly, arousing growing interest among financial operators. However, despite the potential of these tools, their use remains limited, and traditional statistical methods continue to be the main operational reference for most supervised intermediaries.

From an operational point of view, the complete replacement of human intervention does not appear, at present, to be either desirable or fully achievable. In particular, when the results of the automated assessment place the applicant in intermediate risk bands, discretionary screening by a qualified operator is necessary, capable of integrating the algorithmic result with qualitative elements that cannot be inferred from the data.

It is, therefore, a “hybrid” model in which the automatic and human components coexist, giving rise to a new dialectic between calculation and judgement, in which statistical analysis guides, but does not replace, prudential assessment[30].

From a regulatory perspective, there are no specific limitations on the degree of digitisation of the scoring process, which can be fully automated using mathematical-statistical models.

This possibility is based on the “Code of Conduct for Information Systems Managed by Private Entities in the Field of Consumer Credit, Reliability and Timeliness of Payments”[31], which defines “scoring processing” in Article 2, paragraph 2, letter g), outlining a framework aimed at ensuring fairness, proportionality and security in the processing of personal data.

The advantages of automation are manifold: speed, accuracy and consistency of assessments, reduction of the margin of human error, as well as expansion of the range of data that can be assessed and greater uniformity of decision-making criteria.

These tools, which are characterised by low operating and transaction costs, are functional to economic growth and financial inclusion, as they allow access to credit to be extended to traditionally excluded individuals, reducing information asymmetries and improving the overall quality of the financial market.

Despite the undoubted potential of automated credit rating systems, the risks associated with their implementation require careful and thorough consideration.

As effectively observed in doctrine, ‘credit scores can make or break the fate of millions of individuals’[32], emphasising how a single algorithmic score can have a decisive impact on an individual’s economic and social opportunities.

One of the most significant critical issues is the risk that machine learning algorithms, in their process of learning from data, absorb and reproduce pre-existing biases in training datasets, generating discriminatory decisions. Emblematic in this sense is the phenomenon known as creditworthiness by associations[33], in which the ability of algorithms to identify statistical correlations between personal characteristics and credit risk leads to indirect discrimination based on arbitrary and statistically spurious associations[34].

This mechanism risks penalising categories of individuals who share socio-demographic or behavioural traits, without there being any logical link between these elements and their actual solvency.
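A simple screening for this kind of effect can be sketched as follows: comparing approval rates across a hypothetical protected attribute. The 0.8 threshold echoes the US "four-fifths" heuristic from employment-testing practice; EU law sets no such bright line, so the figure is only a warning signal, not a legal test.

```python
# Minimal fairness probe (illustrative): compare approval rates across a
# hypothetical protected attribute. Data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
group = rng.choice(["A", "B"], size=5_000)
approved = rng.random(5_000) < np.where(group == "A", 0.62, 0.48)

rates = {g: approved[group == g].mean() for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Potential indirect discrimination: inspect the model's inputs.")
```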

Added to this is a second critical issue: the tendency of automated systems to not consider individual or exceptional circumstances that could affect an individual’s ability to fulfil their obligations.

The algorithmic model, being rigidly parameterised on quantitative variables, is unable to assess human or situational factors — such as family events, health conditions or temporary work difficulties — which a human assessor could instead weigh up in an equitable manner.

These problems are intertwined with the broader issue of algorithmic opacity (black box problem), which characterises much of the FinTech ecosystem[35].

The complexity of the models, combined with the need to protect the intellectual and industrial property of the developing companies, often makes it impossible to understand the internal mechanisms of the algorithm.

This results in serious shortcomings in transparency and accountability, aggravated by the impossibility for those concerned and for the supervisory authorities to verify the correctness of the assessment process and to identify any hidden discrimination.

In this context, the difficulty of ascertaining the subjective element of discriminatory intent constitutes a further obstacle to the effective protection of consumer rights.

In view of these risks, there is a strong need to intervene in the design and functioning of algorithms, placing three fundamental principles at the heart of the system: risk management, disclosure and accountability.

In this sense, it is essential to ensure qualified human intervention in the decision-making process, allowing automated decisions to be validated, corrected or supplemented, restoring rationality and proportionality to the entire scoring system.

Another important issue concerns the protection of personal data: in the collection, programming and validation of models, data plays a central and strategic role, making it essential to comply with the principles of lawfulness, fairness and minimisation enshrined in the GDPR.

The sensitive nature of much of the information processed — often relating to behaviour, preferences or lifestyle habits — amplifies the risk of privacy violations and misuse of data for economic or predictive purposes.

At the regulatory level, Italian law contains no comprehensive general regulation of creditworthiness assessment, let alone of its automated forms.

There is a lack of uniform provisions capable of comprehensively outlining the duties of conduct and organisation of operators, as well as the legal remedies available in the event of violations.

The result is a fragmented regulatory framework, in which heterogeneous sectoral regulations coexist, addressing the issue from different perspectives — banking, consumer, technology and privacy — often overlapping or leaving room for interpretative uncertainty.

This lack of coherence makes it difficult to identify the protection measures actually available to individuals subject to scoring, who often find themselves in a position of informational and legal weakness.

Currently, the protection of data subjects is mainly indirect, as it is achieved through the imposition of organisational and conduct obligations — subject to administrative sanctions — on entities that use automated credit assessment tools.

A first significant regulatory reference can be found in Article 10, paragraph 1, letter c) of the aforementioned “Code of Conduct for Information Systems Managed by Private Entities in the Field of Consumer Credit, Reliability and Timeliness of Payments”, which highlights the centrality of the principles of diligence, fairness and professionalism, and the duty to ensure a comprehensive understanding of the customer’s situation and objectives.

The use of automated tools cannot, therefore, constitute an exception to the requirements of independence, transparency and organisational adequacy, which must characterise all financial assessment activities.

The algorithms used for profiling must be subject to periodic technical checks to verify their reliability, consistency and continued compliance with the principles of operational fairness[36].
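By way of example, such a periodic check might combine a measure of discriminatory power (AUC) on recent outcomes with a Population Stability Index (PSI) on the score distribution, two standard credit-risk monitoring tools. The thresholds in the sketch are common rules of thumb, not values mandated by the Code of Conduct.

```python
# Sketch of a periodic model check: AUC on recent outcomes plus a Population
# Stability Index on the score distribution. Data are synthetic; thresholds
# (AUC ~0.7, PSI 0.25) are rules of thumb, not regulatory values.
import numpy as np
from sklearn.metrics import roc_auc_score

def psi(expected, actual, bins=10):
    cuts = np.quantile(expected, np.linspace(0, 1, bins + 1))
    cuts[0], cuts[-1] = -np.inf, np.inf
    e = np.histogram(expected, cuts)[0] / len(expected)
    a = np.histogram(actual, cuts)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(3)
scores_dev = rng.beta(2, 5, 10_000)      # scores at development time
scores_now = rng.beta(2.5, 5, 10_000)    # scores on the current portfolio
y_now = (rng.random(10_000) < scores_now).astype(int)

print("AUC:", round(roc_auc_score(y_now, scores_now), 3))
print("PSI:", round(psi(scores_dev, scores_now), 3))  # > 0.25 = major shift
```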

However, as this is a code of conduct with no binding effect, it does not give the subjects assessed any enforceable rights vis-à-vis the intermediary, nor does it offer any effective means of responding to automated decisions that determine access to — or denial of — credit.

A more comprehensive and protective regulatory approach therefore appears essential, capable of ensuring effective forms of protection for the recipients of the most advanced credit scoring practices.

This requirement responds to the objectives that the European Commission is pursuing with increasing insistence as part of the European Digital Finance Strategy, which — with a view to open finance — aims to promote a systemic evolution of the financial sector, aimed at increasing financial inclusion and opportunities for access to credit, in particular for consumers and small and medium-sized enterprises[37].

In view of possible future developments, there is therefore a need for regulation that takes into account the specific risks that credit scoring automation may entail, not only for individual users, but also for the stability and overall efficiency of the financial system[38].

With regard to the use of credit scoring by banks and other financial intermediaries in the context of credit granting, “hard law” regulations apply, aimed essentially at protecting the interests of consumers, for whom creditworthiness assessment has long been an established obligation in European legal systems.

As regards the duties of conduct incumbent on financial intermediaries, these must be interpreted in the light of the fundamental principle of “sound and prudent management” of the intermediary enshrined in Article 5 of the Consolidated Banking Law (TUB), a principle that the supervisory authorities have progressively enriched with content through secondary sectoral legislation[39].

Following the transposition of the European directives on consumer credit and credit agreements relating to residential immovable property, the regulations have now been incorporated into Articles 124-bis and 120-undecies of the TUB, respectively, which require lenders to obtain adequate information — either directly from the consumer or by consulting databases such as credit bureaus[40].

In particular, for real estate loans, intermediaries are required to assess the consumer’s prospects of fulfilment on the basis of necessary, proportionate and verified economic and financial information concerning income, assets and repayment capacity.

Consumer protection is therefore enshrined in legislation governing the management and control of credit risk, which stipulates that creditworthiness must be assessed in a manner proportionate to the information provided by the customer or collected independently by the intermediary.

However, there are no specific restrictions on the type of data that can be used for profiling and analysis, with the result that operators enjoy wide discretion in choosing the assessment methods and level of automation to be adopted[41].

Although there is no regulation that operationally defines the procedures to be followed to fulfil the creditworthiness assessment obligation, the legislation expressly considers the possibility of automated processing of the data collected, providing in such cases for the consumer’s right to be informed if the credit application is rejected[42].

As clarified by the Court of Justice of the European Union, the burden of proof regarding the correct fulfilment of the obligation to assess creditworthiness lies with the intermediary, who cannot simply produce mere statements signed by the consumer, but must demonstrate that they have actually carried out a concrete and documented assessment of creditworthiness.

The verification obligation has been further extended with the adoption of Directive (EU) 2023/2225, which, in Article 18, specifies the need to consider relevant and accurate information relating to the consumer’s economic and financial situation, with particular reference to income and expenditure.

The Directive allows the use of alternative or heterogeneous data, but within strict limits imposed by the sensitive nature of the information processed, in accordance with Regulation (EU) 2016/679 (GDPR), which we will discuss in more detail below, and absolutely excludes the use of data from social networks.

The directive itself, from its recitals onwards, places strong emphasis on the consumer’s right, where creditworthiness is assessed by automated processing, to obtain human intervention and to receive a clear and comprehensible explanation of how the decision was made, the variables used, the underlying logic and the risks involved[43].

Consumers may also express their point of view and request a review of the assessment and the decision to grant credit (Recital 56 and Article 18(8)).

These safeguards introduce effective protection tools, allowing consumers to opt out of a purely automated assessment and obtain clarification on the reasons for the score assigned, as well as to challenge the final decision.

This is a matter of great practical importance: for consumers who are refused credit, it is essential to have effective means of redress, especially when the negative decision is based on inaccurate or incorrectly collected data.

The Financial Banking Arbitrator (ABF) has intervened on these issues on several occasions, emphasising, on the one hand, that there is no general obligation to grant credit, as intermediaries have broad technical and negotiating discretion in conducting their assessments; but, on the other hand, that the customer has the right to receive adequate, clear and comprehensible explanations regarding the reasons for the refusal, even if it is based on automated credit scoring systems[44].

Therefore, although consumers cannot be granted a subjective right to the granting of credit — which remains at the discretion of the intermediary — it is contrary to the principles of transparency, fairness and good faith to refuse to provide explanations regarding the assessment procedures adopted.

The intermediary cannot therefore invoke reasons related to the protection of intellectual property or the confidentiality of the model to avoid the duty to clarify the decision-making methods followed by the application processing system[45].

In light of these rulings, it must be considered that, if — following a request for human intervention, the acquisition of further information or objections raised by the consumer — it emerges that the decision to reject the application is based on an assessment that does not comply with the legislative criteria, the intermediary is not obliged to approve the granting of credit[46].

However, negligent or inaccurate behaviour in assessing creditworthiness may expose the intermediary to liability for damages, if such conduct results in actual damage to the consumer[47].

In such cases, the intermediary is required, at the very least, to rectify the scoring and re-examine the case in good faith and fairness, with the possibility of modifying the original decision.

A further problematic issue concerns cases where the assessment is carried out by persons not authorised to grant credit, who operate solely for the purpose of granting loans and therefore remain outside the traditional categories of supervised intermediaries.

Such situations are not governed by the special rules mentioned above, but are based on common law, and in particular on the duties of diligence and fairness that permeate every contractual relationship.

With regard to scoring and the concept of “responsible lending”, the principles of sound and prudent management continue to be a point of reference for regulators and interpreters in order to outline a framework for credit automation that is consistent with the protection of applicants’ interests and the ethical use of financial technologies, in line with the objectives of European and national legislators.

It therefore seems desirable for the European supervisory authorities to take concrete action by issuing guidelines on the use of new technologies, aimed at promoting high standards of transparency and accountability towards consumers with reference to the decision-making processes adopted and the criteria underlying automated assessments (see Expert Group on Regulatory Obstacles to Financial Innovation (ROFIEG), 30 Recommendations on Regulation, Innovation and Finance, Final Report of the European Commission, 2019)[48].

5. One of the most widely discussed critical issues is the phenomenon of so-called algorithmic opacity, or the black box problem, which refers to the impossibility, even for operators themselves, of fully understanding the logical sequence through which the algorithm arrives at a given result[49].

In such cases, the automated decision is “obscured” not by the will of the controller, but by the intrinsic complexity of the predictive model, which processes information according to non-linear and not immediately explainable dynamics.

This opacity has significant consequences both legally, as it hinders the possibility for the data subject to effectively challenge the decision, and ethically and socially, as it undermines the principle of trust that must underpin the relationship between financial institutions and users[50].

The situation is exacerbated by the risk of algorithmic bias, i.e. systematic distortions arising from the data on which the models are trained or from the mathematical rules that guide their learning.

Since machine learning algorithms learn from historical data, they tend to reproduce and consolidate any biases present in the source datasets[51].

As a result, credit decisions could be indirectly discriminatory, penalising specific categories of individuals — for example, on the basis of age, geographical area, profession or even consumption patterns — even in the absence of any discriminatory intent on the part of the operator.

These seemingly technical distortions take on substantial legal significance, as they conflict with the principles of substantive equality, non-discrimination and proportionality enshrined in both European Union law and national legal systems.

In other words, the risk is that the apparent neutrality of the algorithm will result in de facto automated discrimination, which is difficult to identify and even more difficult to challenge, precisely because of the difficulty of accessing the internal logic of the decision-making process[52].

In this perspective, the principle of algorithmic explainability takes on importance, representing a concrete application of the more general principle of transparency enshrined in the GDPR.

It requires that decisions based on automated processing are not only justified, but also intelligible, i.e. understandable in terms of the logic and determinants that generated them.

This requirement is based on Articles 13, 14 and 22 of the GDPR, which recognise the data subject’s right to obtain “meaningful information” about the logic used and to request human intervention, a review and an explanation of the assessment methods.
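For a linear scorecard, such "meaningful information" can be rendered quite directly, as the illustrative sketch below shows with invented coefficients: each variable's contribution to the log-odds is simply its coefficient multiplied by its value, so the explanation can be presented as a ranked list. Non-linear models require dedicated tools such as SHAP.

```python
# Sketch of a per-applicant explanation for a linear scorecard. Coefficients
# and applicant values are invented for illustration; for non-linear models
# one would reach for post-hoc tools such as SHAP instead.
import numpy as np

features = ["income (std)", "late_payments (std)", "dti (std)"]
coef = np.array([-0.9, 1.1, 1.4])        # illustrative fitted coefficients
intercept = -2.2
applicant = np.array([-0.5, 2.0, 1.2])   # standardised applicant values

contrib = coef * applicant               # per-feature contribution to log-odds
order = np.argsort(-np.abs(contrib))
print("log-odds of default:", round(intercept + contrib.sum(), 2))
for i in order:
    print(f"{features[i]:>22}: {contrib[i]:+.2f}")
```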

Algorithmic explainability is now at the nexus of technology, law and trust, serving as a substantial guarantee of the legitimacy of automated decisions. In other words, it is an essential condition for ensuring that the use of algorithms in decision-making processes does not result in the opaque and uncontrollable exercise of technical power, but remains anchored to the principles of transparency, proportionality and democratic control.

From this perspective, explainability makes it possible to transform transparency from a merely declarative concept into an effective tool for protection, enabling understanding of the logic underlying automated processes and verification of the criteria used in the assessment.

It ensures that the data subject is not reduced to a mere object of a probabilistic calculation, but remains an informed and active participant in the process that concerns them, in line with the principles of dignity and informational self-determination enshrined in the Charter of Fundamental Rights of the European Union.

This gives rise to the increasingly pressing need for an integrated regulatory framework, capable of reconciling technological innovation with the protection of fundamental rights, through the introduction of legal and technical safeguards that ensure accountability, auditability and effective human supervision.

In particular, the adoption of algorithmic auditing mechanisms, the provision of enhanced transparency obligations and the promotion of the “human in the loop” principle are essential tools for bringing automated credit scoring systems within a framework of legal, ethical and social responsibility, in line with the founding values of the European legal system[53].

From this perspective, the combination of the GDPR and the AI Act marks a crucial step towards a model of “responsible algorithm governance”, in which automation does not replace human decision-making but integrates it within a framework of legality, proportionality and respect for the individual, restoring the balance between innovation and substantive justice in the financial markets.

This is sufficient to affirm that personal data protection regulations are now a tool for the substantial regulation of digital markets, capable of profoundly affecting the economic dynamics and organisational structures of operators. Through the rigorous regulation of the entire data processing chain, the setting of limits on the secondary use of personal information and the imposition of obligations of transparency and fairness of information aimed at reducing information asymmetries between data controllers and data subjects, the framework outlined by Regulation (EU) 2016/679 (General Data Protection Regulation – GDPR) now represents a fundamental safeguard for the governance of a complex and highly fragmented economic sector such as profiling.

However, this regulation does not exhaust its function in ensuring the individual protection of the digital person: it also proves to be a valid ally of consumer law, competition law and ex ante economic regulation.

In particular, the restrictions placed on the free movement and uncontrolled accumulation of personal data help to prevent the concentration of information power in the hands of a few dominant operators, promoting a more balanced and competitive market structure. From this perspective, personal data protection legislation acts as a factor of systemic rebalancing, capable of ensuring effective competition and preserving the decision-making autonomy of economic operators and consumers.

At the same time, ex ante regulation at European level — such as the recent measures relating to the Artificial Intelligence Act, the Digital Markets Act and the Digital Services Act — aims to strengthen the “stability” of the relationship between artificial intelligence and personal data protection, laying the foundations for a model of technological innovation that respects fundamental rights and constitutional guarantees.

The impact of this relationship is particularly significant in the field of financial profiling, where the use of artificial intelligence systems and predictive algorithms can deeply affect the legal and economic sphere of individuals, influencing access to credit, digital reputation and, more generally, the consumer’s negotiating autonomy.

Furthermore, the risk that these same technologies could be exploited for manipulative or fraudulent purposes cannot be underestimated, as is the case with so-called deep fakes or predictive psychological profiling, which are capable of generating sophisticated and deceptive forms of disinformation that can influence economic or social behaviour.

In this scenario, there is a clear need for an integrated and interdisciplinary approach, capable of bridging the traditional gap between the communities of experts in artificial intelligence and those in data protection, who are often inclined to address these issues independently and with different perspectives depending on the jurisdictions and legal systems of reference.

The absence of a common language and a shared systemic vision not only generates misinterpretations, but also contributes to increasing the complexity of the regulatory framework’s application, risking weakening the effectiveness of the safeguards put in place to protect fundamental human rights and fair competition in digital markets[54].

In the absence of such safeguards, the use of advanced credit scoring techniques risks producing distorted results, with incorrect or discriminatory assessments, sometimes even unintentional ones, caused by opaque, self-referential algorithmic models or by models lacking adequate control mechanisms.

The phenomenon of algorithmic discrimination, an expression of structural bias inherent in the training data or inferential logic of the model, is one of the most significant critical issues in modern scoring systems: it occurs when automated decisions reproduce or amplify pre-existing disparities, without it being possible to clearly identify the determinants of the decision-making process.

This highlights the need to promote a model of responsible algorithmic governance, based on principles of transparency, verifiability and auditability of automated decision-making processes, in order to ensure that technological evolution does not compromise, but rather strengthens, the safeguards of fairness and impartiality that must guide assessment activities in the credit market[55].

6. In the current regulatory and technological context, the essential coordinates concerning the data that may be used for creditworthiness assessment, and the ways in which such information may be processed, are to be found primarily in the GDPR.

Although general in character, the Regulation is directly applicable in the financial sector as well, and it outlines a set of principles and rules that apply whenever economic or commercial activity involves the processing of personal data.

It is therefore clear that creditworthiness assessment falls fully within its scope, as it is a process intrinsically based on the acquisition, analysis and processing of information relating to natural persons.

Even with regard to the use of output generated by an automated system in a decision-making process that significantly affects the individual, Regulation (EU) 2016/679 (GDPR) offers a general framework, structured on several levels and based on the principle of the centrality of human intervention.

The starting point is Article 22(1) of the GDPR, which establishes the general rule that the data subject has the right not to be subject to a decision that produces legal effects concerning him or her or similarly significantly affects him or her, where that decision is based solely on automated processing, including profiling.

This provision essentially codifies the principle of the so-called “human in the loop”, i.e. the requirement that there must always be a margin for effective human intervention in the decision-making process, capable of critically evaluating the result generated by the algorithm and exercising substantial control over its outcome.
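A minimal sketch, assuming an invented scoring interface, may help to visualise the principle: the algorithm merely proposes an outcome, while any decision with legal or similarly significant effects is routed to a human reviewer with genuine power to override it.

```python
from dataclasses import dataclass

@dataclass
class ScoreResult:
    applicant_id: str
    score: float   # model output in [0, 1]; higher means lower risk
    drivers: list  # human-readable factors behind the score

def route(result: ScoreResult) -> str:
    """The model proposes; a credit officer with authority to confirm,
    amend or reject the proposal decides. The adverse outcome is never
    issued on the model output alone."""
    proposal = "approve" if result.score >= 0.70 else "decline"
    print(f"{result.applicant_id}: model proposes '{proposal}' "
          f"(score={result.score:.2f}, drivers={result.drivers})")
    return "pending_human_review"

print(route(ScoreResult("A-001", 0.55, ["2 late payments", "high debt ratio"])))
```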

However, the right granted to the data subject by Article 22(1) is not absolute.

The following paragraph 2 provides for a series of exceptions that allow automated decisions to be taken under certain conditions.

In particular, the European legislator allows for three cases in which automated processing is lawful:

(a) where the decision is necessary for entering into, or performing, a contract between the data subject and the controller (Article 22(2)(a));

(b) where it is authorised by Union or Member State law, provided that appropriate safeguards are in place to protect the rights, freedoms and legitimate interests of the data subject (Article 22(2)(b));

(c) where the data subject has given explicit consent to the processing (Article 22(2)(c)).
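For didactic purposes only, the structure of the prohibition and its derogations can be condensed into the following sketch; it is a deliberately simplified reading of Article 22(2), not a compliance tool.

```python
def automated_decision_permitted(necessary_for_contract: bool,
                                 authorised_by_law: bool,
                                 safeguards_in_place: bool,
                                 explicit_consent: bool) -> bool:
    """The Article 22(1) prohibition is lifted only if one of the three
    derogations applies; the law-based derogation additionally
    presupposes suitable safeguards."""
    return (necessary_for_contract                          # Art. 22(2)(a)
            or (authorised_by_law and safeguards_in_place)  # Art. 22(2)(b)
            or explicit_consent)                            # Art. 22(2)(c)

# A legal basis without safeguards does not suffice; explicit consent does.
print(automated_decision_permitted(False, True, False, False))  # False
print(automated_decision_permitted(False, False, False, True))  # True
```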

It follows that the right referred to in Article 22(1), although formulated in terms of a prohibition, is neither peremptory nor non-waivable.

The data subject may, in fact, waive the guarantee of human control, either implicitly, by contractually accepting a clause providing for automated decision-making, or explicitly, by giving express and unambiguous consent to the processing[56].

This reflects a modern conception of the right to data protection, understood not as a rigid limitation on technology, but as a space for informed self-determination, within which the data controller can operate legitimately only if consent is effectively informed, free and specific.

Finally, paragraph 4 of Article 22 reiterates the structure of the prohibition and exceptions, emphasising that even in cases where automated decisions are permitted, they may not concern personal data belonging to particularly sensitive categories within the meaning of Article 9 of the Regulation — such as, for example, those relating to health, sexual orientation or religious or political beliefs.

However, this prohibition may be waived in two exceptional circumstances: where there is explicit consent from the data subject (Article 9(2)(a)) or where the processing is justified by reasons of substantial public interest, based on Union or Member State law (Article 9(2)(g))[57].

The logic behind the provision is clear: the GDPR aims to prevent the automation of decision-making, even when legitimate, from leading to forms of discriminatory profiling or decisions that affect the dignity, identity and autonomy of the individual.

The principle of “human in the loop” is therefore not simply a technical safeguard, but a corollary of the principle of human dignity that permeates the entire European personal data protection system[58].

Having thus outlined the general structure of the prohibition and its exceptions, the Regulation introduces, by way of completion, a specific set of rules that supplements the safeguards already provided for each processing of personal data, providing for additional information and procedural obligations in favour of the data subject.

In particular, the data controller must ensure that the data subject is able to understand the decision-making logic of the algorithm, to request a human review of the decision and to challenge the outcome if they consider it to be flawed or discriminatory.

This approach highlights how the GDPR pursues a balance between technological innovation and the protection of individuals, not as an obstacle to the development of artificial intelligence applied to credit, but as a regulatory model of algorithmic responsibility, which requires transparency, verifiability and human control in automated decisions that affect the economic lives of citizens[59].

From this perspective, the Regulation acts not only as a safeguard for individuals, but also as a tool for rationalising the entire digital financial ecosystem.

Its function, in fact, goes beyond the merely protective dimension to take on a systemic value: through the codification of general principles – lawfulness, fairness, proportionality and transparency – the GDPR guides the operating methods of intermediaries, imposing a model of data governance based on responsibility and technological sustainability.

The importance of the GDPR in the field of credit scoring is twofold: on the one hand, it represents a substantial guarantee for the protection of fundamental consumer rights, namely the right to privacy and the protection of personal data; on the other hand, it requires financial operators to undertake a thorough review of their credit risk management methods, requiring automated assessment to comply with criteria of lawfulness, fairness, proportionality and transparency[60].

It is not, therefore, a mere framework of ancillary rules, but a genuine parameter for the regulation of credit activities, which has a structural impact on the organisational procedures of intermediaries and on the very legitimacy of scoring processes.

The Regulation establishes, in general, that personal data must be processed in a lawful, fair and transparent manner in relation to the data subject (Article 5(1)(a)).

The lawfulness of the processing requires that it be based on an adequate legal basis: typically, the consent of the data subject (Article 6(1)(a)) or the legitimate interest of the data controller (Article 6(1)(f)), provided that the latter does not override the fundamental rights and freedoms of the data subject.

Fairness, on the other hand, concerns the way in which data is managed and implies that the intermediary acts in accordance with the principles of good faith, informing the consumer in a clear and comprehensible manner about the use that will be made of the information provided, especially when it contributes to the formation of a creditworthiness assessment.

This information aspect is of substantial value, as it is not merely a formal requirement but represents the minimum condition for the data subject to be able to exercise their rights of informational self-determination in an informed manner.

In the context of credit scoring, the principle of transparency therefore translates into a duty for the intermediary to ensure that the customer is fully aware of the logic underlying the processing and the effects that the processing of data may have on their legal and economic position.

Another pillar of the European regulatory architecture is the principle of data quality, which is divided into several positive obligations.

Article 5(1)(c) stipulates that the data collected must be adequate, relevant and limited to what is strictly necessary for the purposes of the processing.

This principle of minimisation imposes a criterion of strict proportionality: the intermediary may not collect information that is merely potential or of general interest, but must limit the acquisition to data that is actually useful for assessing the customer’s creditworthiness.
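For illustration, the minimisation principle can be rendered as a simple allow-list filter applied before any data reaches the scoring model; which fields are in fact proportionate to the purpose is a legal question that the sketch deliberately leaves open, the list below being a mere assumption.

```python
# Fields assumed, for illustration only, to be proportionate to the
# creditworthiness purpose; the lawful perimeter depends on the case.
PERMITTED_FIELDS = {"net_income", "existing_debt",
                    "payment_history", "employment_status"}

def minimise(raw_record: dict) -> dict:
    """Article 5(1)(c) GDPR in code: only data adequate, relevant and
    limited to what is necessary survives; merely 'potentially
    interesting' information is discarded at the gate."""
    return {k: v for k, v in raw_record.items() if k in PERMITTED_FIELDS}

applicant = {"net_income": 2400, "existing_debt": 300,
             "payment_history": "regular", "employment_status": "permanent",
             "social_media_handle": "@someone"}  # not necessary -> dropped
print(minimise(applicant))
```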

Furthermore, the information must be accurate, up to date and regularly verified (Article 5(1)(d)), so that the risk assessment reflects a true and current representation of the data subject’s financial situation.

Failing this, the creditworthiness assessment would risk being based on erroneous assumptions, resulting in an arbitrary and potentially discriminatory decision.

No less important is the principle of storage limitation (Article 5(1)(e)), according to which data must be kept in a form that allows the identification of the data subject only for as long as is strictly necessary for the purposes of the processing.

This means that intermediaries are required to set up internal procedures for the periodic deletion and review of the database, in order to avoid the prolonged use of information that is no longer relevant.

This principle not only protects the privacy of the data subject, but also helps to ensure the predictive quality of scoring systems, which would otherwise risk being based on obsolete or unrepresentative data.
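A hypothetical retention sweep of this kind might look as follows; the three-year period is an assumption chosen purely for illustration, not a figure drawn from the Regulation or from sectoral codes of conduct.

```python
from datetime import datetime, timedelta

RETENTION = timedelta(days=3 * 365)  # illustrative period only

def purge_stale(records: list, now: datetime) -> list:
    """Periodic review under Article 5(1)(e) GDPR: entries older than
    the retention period are removed so that scoring never relies on
    information that is no longer relevant."""
    return [r for r in records if now - r["collected_at"] <= RETENTION]

db = [{"id": 1, "collected_at": datetime(2021, 1, 10)},
      {"id": 2, "collected_at": datetime(2024, 6, 1)}]
print(purge_stale(db, now=datetime(2025, 1, 1)))  # only record 2 survives
```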

Particularly sensitive is the regime relating to special categories of personal data (Article 9), i.e. so-called sensitive data, the collection and processing of which is, in general, prohibited.

This category includes information concerning racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, genetic and biometric data, as well as data concerning an individual’s health or sex life and sexual orientation.

The processing of such data is only permitted with the explicit, specific and informed consent of the data subject, which makes its use in the context of credit assessment exceptional and residual[61].

The rationale behind this prohibition lies in the need to prevent forms of automated discrimination and to avoid access to credit or the economic conditions applied being influenced by factors unrelated to the individual’s economic capacity.

In a context characterised by the growing use of artificial intelligence techniques and automated decision-making processes, this guarantee takes on even greater significance: the inclusion of sensitive data in a predictive model could, in fact, amplify algorithmic biases, producing decisions that violate the principle of substantive equality and human dignity[62].
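Purely by way of example, a first technical line of defence consists in screening the model’s features against the special categories; as the sketch itself notes, such a filter cannot intercept proxy variables, which is precisely where algorithmic bias tends to re-enter.

```python
SPECIAL_CATEGORIES = {
    "racial_or_ethnic_origin", "political_opinions",
    "religious_or_philosophical_beliefs", "trade_union_membership",
    "genetic_data", "biometric_data", "health",
    "sex_life", "sexual_orientation",
}

def screen_features(feature_names: set, explicit_consent: bool = False) -> set:
    """Drop Article 9 special-category features unless the narrow
    explicit-consent exception applies. This filter cannot catch proxies
    of sensitive attributes (e.g. a postcode correlating with ethnic
    origin), so it is a necessary but far from sufficient safeguard."""
    if explicit_consent:
        return set(feature_names)
    return set(feature_names) - SPECIAL_CATEGORIES

print(screen_features({"net_income", "health", "postcode"}))
# 'health' is removed; 'postcode' passes, yet may still act as a proxy
```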

Ultimately, the GDPR seeks to strike a complex and dynamic balance between the economic interest of intermediaries in having complete and reliable information and the need to preserve the essential core of consumers’ personal rights.

From this perspective, the right to data protection is not a static limitation on technological innovation, but rather a criterion for the legitimisation and rationalisation of economic activity, serving to ensure that the automation of credit decisions develops within the bounds of substantive legality and respect for the individual.

With regard to the methods of processing data collected in the context of creditworthiness assessments, the provisions of the GDPR dedicated to automated decision-making processes are of paramount importance. This category undoubtedly includes modern credit scoring systems based on the use of machine learning algorithms and artificial intelligence techniques[63].

Aware of the potential impact that fully automated decisions can have on the legal and personal sphere of individuals, the European legislator has introduced a comprehensive set of transparency and guarantee measures.

Firstly, Article 13 of the GDPR, with a view to full information accountability, requires the data controller to inform the data subject, from the outset, of the existence of an automated decision-making process that affects them, clearly and comprehensibly indicating the logic used by the algorithmic system, as well as the significance and legal or economic consequences that may result from it.

This information obligation, far from being a mere formality, constitutes a substantial safeguard of the right to transparency, enabling the data subject to understand the structure and scope of the processing, thus preventing forms of information asymmetry and technological opacity.

In other words, Article 13 does not merely require notification of the existence of an automated process, but requires a meaningful explanation of its operating logic, the degree of impact of the processing and the foreseeable effects on the position of the data subject.
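What a meaningful explanation might look like in practice can be suggested by a toy example: for a linear score the per-feature contributions are exact, which is one reason inherently interpretable models are often preferred where such explanations are owed. All weights and figures below are invented.

```python
# Invented weights for a toy linear score (weight * value = contribution).
WEIGHTS = {"net_income": 0.0004, "existing_debt": -0.0020,
           "late_payments_12m": -0.1500}

def explain(applicant: dict) -> list:
    """Return per-feature contributions ranked by absolute impact, as a
    candidate rendering of 'meaningful information about the logic
    involved' for the data subject."""
    contributions = {f: w * applicant[f] for f, w in WEIGHTS.items()}
    ranked = sorted(contributions.items(),
                    key=lambda kv: abs(kv[1]), reverse=True)
    return [f"{feature}: {value:+.3f}" for feature, value in ranked]

for line in explain({"net_income": 2400, "existing_debt": 500,
                     "late_payments_12m": 2}):
    print(line)
# existing_debt: -1.000, net_income: +0.960, late_payments_12m: -0.300
```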

Article 22 of the Regulation then introduces an even more significant protection, establishing the right of the data subject not to be subject to a decision based solely on automated processing, including profiling, where such a decision produces legal effects or similarly significantly affects the person.

This is a particularly innovative provision, which places the individual at the centre as a subject of law and not merely an object of algorithmic analysis[64].

Article 22, in fact, aims to restore a balance between automation and human intervention, preventing statistical calculation from completely replacing the discretionary assessment and personal judgement that must characterise the adoption of decisions that are potentially capable of affecting the fundamental rights of citizens.

From this perspective, automated creditworthiness assessment systems fall fully within the scope of the regulation, since the determination of an individual score, calculated without direct human involvement and used to grant or deny a loan, constitutes in all respects a decision based solely on automated processing within the meaning of the GDPR.

This means that, in the financial sector, intermediaries using such techniques must not only comply with the information requirements set out in Article 13, but also ensure, pursuant to Article 22, corrective mechanisms aimed at preserving the human dimension of the decision.

The Court of Justice of the European Union recently ruled in this regard, clarifying that the prohibition laid down in Article 22 applies whenever the final decision — even if formally taken by a third party, such as a credit institution or financial service provider — depends decisively on the result of an algorithmic scoring process, which affects the very possibility of concluding, continuing or terminating a contractual relationship with the customer[65].

The EU court thus recognised that the calculation of a reliability index, if used as the sole or predominant parameter for assessing access to credit, constitutes an automated decision-making process and, as such, must be subject to the constraints and safeguards provided for in the Regulation.

This interpretation, in addition to strengthening consumer protection, has a significant systemic impact: it requires intermediaries to rethink the design and governance of their algorithmic models, ensuring that effective human control remains at every stage of the decision-making process, capable of critically evaluating the results provided by the system and correcting any distortions.

It follows that the legitimacy of credit scoring cannot be assessed solely in terms of predictive efficiency or technical performance, but must also be measured in light of its compatibility with fundamental human rights and the principles of accountability, proportionality and non-discrimination that permeate European data protection law.

From this perspective, the GDPR does not merely regulate the use of data, but is a true standard of technological civilisation, aimed at ensuring that digital progress does not translate into a reduction in individual freedom or an unjustified delegation of economic decision-making to the automatism of machines.

At issue was the interpretation of Article 22 of the GDPR and, in particular, the question whether the automated calculation carried out by a company providing commercial information, aimed at determining a probability index based on an individual’s personal data and referring to their ability to fulfil financial obligations in the future, can be qualified as an “automated decision relating to natural persons” within the meaning of that provision, where the index is a decisive factor in the conclusion, execution or termination of a contractual relationship with a third party to whom the information is supplied[66].

Only in the presence of an automated decision-making process do the specific safeguards provided for in the GDPR to protect the data subject apply, including the right not to be subject to a decision based solely on automated processing, including profiling, where such a decision produces legal effects or similarly significantly affects the individual[67].

As is well known, case law and doctrine have clarified that the applicability of Article 22 requires the concurrent fulfilment of three conditions:

– that a decision exists;

– that the decision is based exclusively on automated processing, including profiling;

– that it produces legal effects or otherwise significantly affects the data subject.

In light of these assumptions, the interpretative problem concerns the delimitation of the concept of “decision” and the need to adapt it to a context characterised by the use of increasingly complex technologies.

The introduction of AI systems has not only made it possible to process quantities of data of a size and speed unimaginable to humans, but has also broadened the range of individuals involved in the decision-making process, making it more complex and fragmented[68].

It is no coincidence that the AI Act itself recognises the existence of a genuine AI “value chain”, in which a multitude of subjects are intertwined and, in various ways, contribute to the final decision.

This observation highlights one of the most significant challenges facing the European regulator, namely the difficulty of identifying, in a complex, multi-level decision-making system, the entity that should be held responsible for the harmful conduct.

In this new paradigm, the final “decision” can only represent the epilogue of a sequence of heterogeneous operations, each of which constitutes an essential link in the overall process.

To consider only the final act as relevant — sometimes devoid of real discretion, understood as an effective choice between possible alternatives — would mean underestimating the preliminary stages, which are often decisive in terms of the infringement of the rights of the person concerned.

Emblematic in this sense is the case referred to in the ruling, in which the infringement of the legal position of the applicant for the loan occurred well before the formal refusal of the loan by the intermediary, i.e. when the rating agency made its automatic assessment of creditworthiness.

In light of this finding, the EU Court considered it necessary to adopt a broad interpretation of the concept of “decision” referred to in Article 22 of the GDPR, consistent with Recital 71, according to which the provision must be understood as referring not only to a formal decision, but also to a “measure” capable of producing legal effects or affecting the data subject “in a similar way”.

From this perspective, cases such as the automatic rejection of an online credit application or the adoption of electronic recruitment procedures without human intervention may also fall within the scope of Article 22, if the result of the automated assessment has substantial consequences in the legal or economic sphere of the data subject.

7. In light of the above, it is clear that the rules outlined in the GDPR, in emphasising the principle of human control as a safeguard against automated decision-making, are not limited to protecting the individual sphere of the data subject, but rise to the level of a systemic criterion for regulating economic and financial processes.

From this perspective, the centrality of human intervention, as a corollary to the principle of dignity and informational self-determination, is closely linked to the preventive and social function of creditworthiness assessment.

Article 22 of the Regulation does not operate in a regulatory vacuum, but is part of a broader plan aimed at combining technological efficiency with the ethical sustainability of credit, guiding the activities of intermediaries towards models of responsible algorithmic governance and responsible lending[69].

The “human in the loop” guarantee is therefore the link between personal data protection and the prevention of over-indebtedness, as it allows substantial control to be maintained over the dynamics of financial inclusion or exclusion generated by automated processes.

As authoritatively observed in doctrine[70], the transparency and verifiability of algorithmic decisions are essential conditions for creditworthiness assessment to fulfil not only an economic function, but also a function of contractual justice and systemic balance.

It is precisely from this perspective that we can understand the logical and teleological link between the regulation of automated data processing and European and domestic consumer credit legislation: both contribute to outlining a market model based on shared responsibility, in which technology is at the service of the individual and not vice versa.

In the current economic and financial climate, the phenomenon of over-indebtedness is one of the most serious manifestations of structural consumer vulnerability, as well as one of the main challenges for the regulation of credit markets in the digital age[71].

It represents the point of convergence between economic, social and legal needs, as it affects both individual freedom of negotiation and the overall stability of the financial system, raising fundamental questions about the role of credit in contemporary society.

Far from being a mere technical requirement, creditworthiness assessment is, in this perspective, an essential safeguard for prevention and accountability.

Its purpose is not limited to protecting the intermediary against the risk of insolvency, but extends to protecting the debtor against the even more insidious risk of incurring debts that are disproportionate to their economic capacity.

The logic underlying European legislation on consumer credit and consumer mortgage credit is based precisely on this dynamic balance: ensuring, on the one hand, market stability through prudent risk management and, on the other, debt sustainability as a condition of contractual fairness and economic inclusion.

In today’s technological context, as we have seen, this balance is being tested by the growing disintermediation of decision-making processes.

FinTech platforms, microcredit apps and Buy Now, Pay Later models have introduced new paradigms of access to credit, often based on predictive profiling and highly personalised offers, which stimulate immediate consumption but compromise consumers’ financial awareness.

The apparent democratisation of access to finance is thus accompanied by a real risk of systemic over-indebtedness, facilitated by the automation of granting procedures and the progressive marginalisation of human intervention in the assessment process.

The use of algorithmic credit scoring models can, in the absence of adequate safeguards, lead to a new form of reverse information asymmetry: no longer to the detriment of the intermediary, but of the consumer, whose capacity for economic self-determination is compromised by the opaque and self-referential nature of automated decision-making mechanisms. Individuals are thus assessed, classified and potentially excluded from access to credit on the basis of criteria that they themselves are unable to know or effectively challenge. From this perspective, the prevention of over-indebtedness takes on a significance that transcends individual protection, affecting fundamental rights and social cohesion.

From a systematic point of view, it is necessary to recognise that creditworthiness assessment rules operate as a form of ex ante market regulation, aimed at ensuring the overall sustainability of credit flows and preventing the formation of structural debt bubbles.

The European Union, as mentioned above, has reinforced this approach with Directive (EU) 2023/2225, establishing creditworthiness assessment as an obligation that is functional to the implementation of the principle of responsible lending and the construction of an ethically oriented and socially sustainable credit system.

This implies a cultural shift in the very way we understand the credit relationship: credit not as a mere economic transaction, but as a fiduciary relationship based on proportionality, transparency and mutual responsibility[72].

In this sense, the digitisation of assessment processes and the use of artificial intelligence tools must be brought within the scope of responsible algorithmic governance, which guarantees an effective balance between technological efficiency and human dignity.

The fight against over-indebtedness, therefore, cannot be entrusted solely to private law instruments or ex post protection measures: it requires an integrated approach, based on synergy between different disciplines — banking law, consumer law, data protection and artificial intelligence regulation — and on a logic of systemic prevention.

Only in this way can creditworthiness assessment retain its function as a technical, ethical and constitutional safeguard, capable of translating the principles enshrined in Article 47 of the Constitution and the Charter of Fundamental Rights of the European Union into operational terms.

Under current regulations, consumers are considered to be in a state of over-indebtedness if their cash flows are structurally inadequate in relation to their obligations, i.e. if they are unable to meet their debts on a regular basis.
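To give the notion of structurally inadequate cash flows a quantitative flavour, consider the debt-service ratio, a crude but widely used indicator; the 40% warning threshold in the sketch is an illustrative convention of ours, not a figure drawn from the CCII or from EU legislation.

```python
def debt_service_ratio(monthly_debt_payments: float,
                       monthly_net_income: float) -> float:
    """Share of monthly income absorbed by debt service."""
    return monthly_debt_payments / monthly_net_income

ratio = debt_service_ratio(monthly_debt_payments=950.0,
                           monthly_net_income=1800.0)
print(f"DSR = {ratio:.0%}")  # about 53% of income goes to debt service
print("structurally inadequate" if ratio > 0.40 else "sustainable")
```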

This condition, which represents a pathological situation in terms of the individual’s economic capacity, triggers a set of remedies aimed at restoring financial balance and promoting a “second chance” for deserving debtors.

In this perspective, the Code of Business Crisis and Insolvency (Legislative Decree No. 14/2019, hereinafter CCII) provides, in Article 67, for the possibility for over-indebted consumers to propose to creditors, with the assistance of the Crisis Settlement Body (OCC), a debt restructuring plan aimed at defining the timing, methods and content of the satisfaction of claims.

The proposal of the over-indebted consumer, the structure of which remains based on broad freedom of content, may be articulated in a variety of solutions, including partial or differentiated ones, provided that they are aimed at rebalancing the financial situation and effectively overcoming the crisis. However, access to the procedure requires the debtor to provide comprehensive documentation, giving a complete and transparent picture of their financial position and income. To this end, they must attach a detailed list of creditors, indicating the amounts owed and any causes of priority, a description of the size and composition of their assets, and an indication of any extraordinary administrative acts carried out in the last five years.

In addition, tax return forms for the previous three years must be produced, along with any information useful for representing the income capacity and sources of livelihood of the debtor and their family, with specific indication of the sums necessary for the maintenance of the latter.

This information is necessary to ensure maximum transparency in reconstructing the debtor’s financial situation, allowing the crisis resolution body and creditors to verify the accuracy and completeness of the data on which the proposal is based.

It is precisely in this perspective of fairness and reliability of the procedure that Article 69, paragraph 1, CCII defines the cases of inadmissibility of the procedure, excluding access for those who have already benefited from debt relief in the previous five years or twice in total, or who have caused their own over-indebtedness through wilful misconduct, gross negligence or fraud.

To protect the debtor’s good faith and the fairness of the credit market, paragraph 2 of the same provision also stipulates that creditors who have culpably contributed to causing or aggravating the indebtedness — or who have violated the principles set out in Article 124-bis of the TUB — are not entitled to file an opposition or complaint during the approval process, nor to contest the appropriateness of the restructuring proposal.

The case law on the matter has clarified that the provision of Article 69, paragraph 1, CCII marks a significant evolution with respect to the previous requirement of merit referred to in Article 12-bis of Law No. 3/2012.

In particular, it follows the logic of the “second chance”, establishing less restrictive access criteria and focusing solely on the impediments identified by the legislator[73]. The judge is therefore required to limit himself to verifying the existence of these negative requirements, denying approval only if the situation of over-indebtedness is attributable to gross negligence, bad faith or fraud[74].

This interpretation is further supported by the need to coordinate Article 69 of the CCII with Article 124-bis of the TUB, which requires intermediaries to carry out an accurate assessment of creditworthiness before granting a loan. Although the latter is formally referred to only in the second paragraph of Article 69, the functional correlation between the two provisions is clear: failure by the lender to carry out such an assessment may affect the assessment of the degree of fault of the debtor, given their position of information asymmetry and their lesser technical discernment compared to the intermediary.

In this perspective, part of the case law on the merits has highlighted that, for the purposes of assessing the consumer’s gross negligence, the behaviour of the financial intermediary must also be taken into account, as a professionally qualified person in the assessment of solvency[75].

The lender’s failure to carry out or negligent conduct of the preliminary investigation may, in fact, mitigate or even exclude the debtor’s fault if it has contributed significantly to the formation of the state of over-indebtedness.

This approach, in addition to strengthening the internal consistency of the system, contributes to the creation of a model of shared responsibility between debtor and creditor, based on the principles of fairness, proportionality and contractual solidarity. It reaffirms the public function of creditworthiness assessment, no longer understood as a mere tool for the protection of the lender, but as a safeguard of the balance and sustainability of the entire credit market.

Authors:

Alma Agnese Rinaldi is Adjunct professor of public economic law at the “Libera Università Maria Santissima Assunta” (LUMSA)

Antonio Uricchio is Full professor of tax law at the University of Bari Aldo Moro

Although this paper is the result of the authors’ joint reflection, Antonio Uricchio wrote paragraphs 1-2 and Alma Agnese Rinaldi wrote paragraphs 3-7.

 


[1] Directive 2008/48/EC of the European Parliament and of the Council of 23 April 2008 on credit agreements for consumers and repealing Council Directive 87/102/EEC, in OJEU, 22 May 2008, L 133, 66.

[2] Directive (EU) 2023/2225 of the European Parliament and of the Council of 18 October 2023 on credit agreements for consumers and repealing Directive 2008/48/EC, in OJEU 30 October 2023, L, 1.

[3] Both the “old” (Directive 2008/48/EC) and the new EU Directive (2023/2225) pursue the same objectives; what differentiates them is the scope and content of the regulatory requirements imposed on Member States (and indirectly on creditors) for their implementation. The objectives are the smooth functioning of the internal market for consumer credit and consumer protection. Promoting the development of the internal market for consumer credit means reducing barriers to the supply and demand of cross-border credit within the European Union.

[4] The European Court of Justice, in its 2014 judgment in Case C-565/12, called upon to rule on the adequacy of penalty systems in the event of a breach of the responsible lending obligations imposed in consumer credit agreements, recognised that the directive aims to protect consumers not only from abuse by lenders, but also from over-indebtedness and insolvency. This makes clear the function that creditworthiness checks perform in the view of the EU legislator, namely to ensure a high level of consumer protection. See CJEU, judgment of 11 January 2024, case C-755/22, Nárokuj s.r.o. v EC Financial Services, a.s., which clarified the cases of declaration of nullity of consumer credit agreements in the event of a breach of the creditworthiness assessment obligation.

[5] Financial Conduct Authority, Assessing creditworthiness in consumer credit: Proposed changes to our rules and guidance, July 2017.

[6] See O. Cherednychenko, On the Bumpy Road of Responsible Lending in the Digital Marketplace: the new EU Consumer Credit Directive, in Journal of Consumer Policy, 2024, p. 241 ff.

[7] European Parliament, Final compromise amendments on the draft report on the proposal for a directive of the European Parliament and of the Council on consumer credits, 2021/00171 (COD), 50.

[8] CJEU, Case C-634/21, SCHUFA Holding (Scoring): Judgment of the Court (First Chamber) of 7 December 2023, in eur-lex.europa.eu, which established that the SCHUFA score cannot be used as the sole criterion for automated decisions on creditworthiness and credit. In particular, the decision requires banks and companies that use these systems to ensure human oversight and additional individual assessment, limiting the use of decisions based solely on algorithms.

[9] Conclusions of the Advocate General, 14 November 2019, C-679/18, OPR-Finance s.r.o. v GK, pt. 82, referred to by the CJEU, 5 March 2020, pt. 36.

[10] See M. Zappatore, Creditworthiness assessment to protect the general interest: beyond the relationship between consumer and intermediary, Commentary on EU Court of Justice, judgment of 11 January 2024, case C-755/22, 29 October 2024, in rivistapactum.it.

[11] On this subject, see M. Nicoletti, Virtual nullity, creditworthiness assessment and violation of criminal law in light of Cass. No. 26248/2024, in ristrutturazioniaziendali.it, 30 January 2025.

[12] On the remedy of compensation, attributable to pre-contractual liability, applicable in the Italian legal system, see G. Liace, Il credito al consumo (Consumer Credit), Milan, Giuffrè, 2022, p. 83 ff.

[13] See F. Mancioppi, Automated credit scoring: Note on the Judgment of the European Court of Justice of 7 December 2023, in Riv. Trim. dell’Econ., no. 3, 2024, p. 174 ff.

[14] On this subject, see, most recently, M. Ortino, Il Credito ai consumatori, in (edited by) F. Greco – G. Liace, Trattato di diritto bancario, Lefebvre Giuffrè, 2025, p. 143 ff.

[15] For more on this topic, see E. Cecchinato, Il credito immobiliare al consumo, in (edited by) F. Greco – G. Liace, Trattato di diritto bancario, op. cit., p. 212 ff.

[16] See, in case law, Court of Rimini, 1 March 2019, in unijuris.it, which, having to verify the merits of the applicant, found that the debtor had been induced to take out a loan that was disproportionate to his ability to repay by financial companies that had not carried out a proper credit assessment, and that he should therefore be admitted to the debt restructuring procedure; Court of Macerata, 24 May 2018, in Nuova giur. civ. comm., 2018, p. 1430 ff., in which the bank that granted an irresponsible loan was ordered to pay compensation to the injured consumer, commensurate with the contractual interest and default interest provided for in the contract. According to the judges, the main purpose of the provisions in question is to directly protect the consumer as the weaker party to the contract rather than to protect the market, which has a subjective right to assess creditworthiness.

[17] See also G. Mattarella, Big Data and access to credit for immigrants: algorithmic discrimination and consumer protection, cit., 704 ff. More recently, on this topic, M. Rabitti, Technological discrimination and Fin-tech, cit., 467 ff. On this subject, see also L. Ammanati – G.L. Greco, “Intelligent” credit scoring: experiences, risks and new rules, in this Review, 2023, 471 ff.; F. Mattasoglio, The “innovative” assessment of consumer creditworthiness and the challenges for the regulator, cit., 205 ff. In foreign doctrine, K. Langenbucher, Consumer Credit in The Age of AI – Beyond Anti-Discrimination Law (9 December 2022), SAFE Working Paper No. 369, available at https://ssrn.com/abstract=429826; C. O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York, 2016; S. Barocas – A.D. Selbst, Big Data’s Disparate Impact, in California Law Review, vol. 104, no. 3, 671 ff.; C.J. Havard, “On the Take”: The Black Box of Credit Scoring and Mortgage Discrimination, in Boston University Public Interest Law Journal, vol. 20, no. 2, p. 241 ff.

[18] See N. Collado-Rodriguez – U. Kohl, “All Data Is Credit Data”: Personalised Consumer Credit Score and Anti-Discrimination Law, in (edited by) U. Kohl – J. Eisler, Data-Driven Personalisation in Markets, Politics and Law, Cambridge University Press, 9 July 2021. See also N. Aggarwal, When All Data Is Credit Data: Consumer Credit Markets, Technological Development and Distributive Justice, University of Oxford, 2023, in irpa.eu.

[19] In today’s digital economy, profiling activities are based on highly technology-intensive analysis techniques, which differ significantly from traditional information collection and processing tools. Profiling today takes on a complex and multifaceted form, characterised by a plurality of compositions, functional components and operational purposes, making it extremely difficult for interpreters to provide a unified and generalisable reconstruction of the phenomenon on a technical-legal level. With a reasonable degree of approximation, however, it can be said that profiling in the contemporary digital context consists of three main stages: a) data collection, which is the starting point of the process and makes use of heterogeneous sources — data provided directly by the data subject, digital tracking, or information obtained from third parties; b) the analysis and correlation phase, normally conducted automatically using learning algorithms and statistical inference, aimed at identifying recurring patterns and constructing predictive models of behaviour; c) finally, the application phase, in which the profile thus developed is referred to a specific natural person in order to infer, in probabilistic terms, current or future characteristics, habits or behavioural inclinations. Although it is a significant aspect of the contemporary debate, the issue of the use of similar profiling technologies for the purposes of surveillance, control or social conditioning, i.e. to guide citizens’ behaviour through reward or punishment mechanisms, which is eminently ethical and public in nature, will not be the subject of this investigation. This phenomenon, known as social scoring, has systemic and value-related implications that go beyond the scope of this paper, which will focus primarily on profiling techniques used for economic and financial purposes, with particular reference to automated credit scoring. See, on this subject, F. Lagioia – G. Sartor, Profiling and algorithmic decision-making: from the market to the public sphere, in Federalismi.it, 24 April 2020, p. 85 ff.; G. Cerrina Feroni, Artificial intelligence and social scoring systems. Between dystopia and reality, in Il Diritto dell’Informazione e dell’Informatica, 2023, p. 1 ff.

[20] See M. Rabitti, Credit scoring via machine learning and responsible lending, in Riv. dir. banc., 2023, p. 182. On this point, see A. Davola, Decision-making algorithms and banking transparency, Turin, 2020, p. 136 ff.

[21] See, a.a.v.v., Artificial intelligence in credit scoring. Analysis of some experiences in the Italian financial system, in Economic and Financial Issues (Bank of Italy), no. 721, 2022, p. 15. For an analytical list of data (structured and unstructured) that is released in the digital environment and processed in the provision of online financial services, please refer to the list contained in EBA, Discussion Paper on innovative uses of consumer data by financial institutions, DP/2016/01, 4 June 2016, p. 9 ff.

[22] The reference is to ZestFinance (now Zest Ai), one of the first and best-known Fintech credit start-ups, which has now ceased its credit activities to focus exclusively on the development and licensing of proprietary software. See, in this regard, the analysis conducted by D. Robinson – H. Yu, Knowing the Score: New Data, Underwriting, and Marketing in the Consumer Credit Marketplace. A Guide for Financial Inclusion Stakeholders, Ford Foundation, 2014, which also discusses other credit scoring systems with similar characteristics.

[23] In this regard, see L. Modica, Peer to Consumer Lending, in Osservatorio del diritto civile e commerciale, 2022, p. 81 ff.; A. Sciarrone Alibrandi – G. Borello – R.G. Ferretti – F. Lenoci – E. Macchiavello – F. Mattasoglio – F. Panisi, Marketplace lending. Towards new forms of financial intermediation? Quaderno FinTech no. 5 – July 2019; G. Biferali, Big data and creditworthiness assessment for access to peer-to-peer lending, in Information and IT Law, 2018, 487 ff. As regards the banking system, research can start from M. Pellegrini, Cybernetic law in its impact on banking and finance, in (edited by) G. Alpa – F. Capriglione, Liber amicorum, Milan, 2019, p. 351 ff.

[24] See J. Pfeifer, Algorithmic Opacity Meeting Organizational Opacity: Challenges of AI Deployment in Organizations, in Carolina Digital Repository, cdr.lib.unc.edu/concern/master papers, 10 May 2024; P. Radanliev, AI Ethics: Integrating Transparency, Fairness, and Privacy, in papers.ssrn.com, 20 June 2025.

[25] For a careful and essential examination of the history of credit, see F. Capriglione – G. Morelli, entry “Credito”, in Enc. It., V Appendix, 1991, in treccani.it; M. Onado, entry “Credito”, in Enc. delle scienze soc., 1992, in treccani.it

[26] In this regard, it is worth noting that financial intermediaries have already begun to benefit from the opportunities offered by the innovative services introduced by Directive (EU) 2015/2366, commonly known as Payment Services Directive 2 (PSD2). This regulation, in the context of the progressive digitisation of payment services, has introduced new tools for interconnection between operators, promoting broader, more timely and standardised access to customers’ financial information. In particular, intermediaries can now use the Account Information Service, which allows them to acquire, including through authorised third parties — known as Third Party Providers (TPPs) — data and information relating to transactions and payment accounts held by consumers at different institutions. This mechanism, based on the principle of interoperability and open data (open banking), allows lenders to reconstruct a more complete and accurate picture of the customer’s financial profile, broadening the information base on which to base their creditworthiness assessment. The result is a framework in which PSD2 and the GDPR operate in a logic of systematic integration: the former by promoting the regulated sharing of banking data for the purposes of innovation and competition; the latter by ensuring that such sharing takes place in full compliance with the fundamental rights of the data subject, in accordance with the principles of transparency, proportionality and purpose limitation. See A. Burchi – S. Mezzacapo – P. Musile Tanzi – V. Troiano, Financial Data Aggregation and Account Information Services. Regulatory issues and business profiles. Consob Fintech Notebooks, no. 4, March 2019; L. Ammanati – G.L. Greco, Digital platforms, algorithms and big data: the case of credit scoring, in Riv. trim. dir. econ., 2021, p. 298 ff.

[27] On the functions of ratings and profiles of responsibility towards investors, see F. Raffaele, Il Rating e la trasparenza, in (edited by) F. Greco – G. Liace, Trattato di diritto bancario, cit., p. 715 ff.

[28] The difference between rating and scoring assessments is significant and relates to how the judgement is formed:

– in the case of scores, the process of assigning a judgement is entirely automated (or automatable) and is essentially the result of applying certain credit risk algorithms to the entity being assessed, which produce creditworthiness scores based on the processing of primarily quantitative data and variables (financial statement data, sector data, macroeconomic data) without any substantial intervention by analysts;

– in the case of ratings, however, the rating process must follow specific methodologies that involve substantial human intervention, which may prevail over the automated assessment attributable to scoring. It is therefore quite possible that the starting point for a rating assessment is provided by a credit scoring algorithm, which can be expressed either numerically (e.g. a score within a range) or on the basis of a predefined scale of values similar to that used for the final rating assignment. However, it is essential that this automated judgement be supplemented by human intervention on the part of a rating analyst who can express their own assessments, both on the same variables and data previously examined by the algorithm and on different and additional information, typically of a qualitative nature. Therefore, the rating judgement is enriched by the element of human assessment, which is completely absent in scoring assessments. See G. Zorzi – G. Soldi, L’analisi e la gestione del rischio di credito nelle imprese non finanziarie (Credit risk analysis and management in non-financial companies), in F. Beltrame – G. Soldi – G. Zorzi, Merito creditizio e finanza d’Impresa (Creditworthiness and corporate finance), Giuffrè Lefebvre, 2023, p. 88 ff.

[29] For a comparison between the types of data used in the traditional credit scoring system and in the algorithmic decision-based system, see World Bank Group, Credit Scoring Approaches Guidelines, 2019, p. 9 ff. For the use of big data and advanced analytics, see EBA, Report on big data and advanced analytics (EBA/Rep/2020/01), in which the EBA identifies the fundamental principles for the responsible use of data by intermediaries (see, in particular, paragraph 3). For a detailed analysis of the interventions of supervisory authorities in these matters, see F. Bagni, Use of algorithms in the credit market: national and European dimension, in Osservatorio sulle fonti, 2021, p. 917 ff. See also, in a non-legally binding supporting document, the views expressed by the European Central Bank, in ECB, Guide to assessment of fintech credit institutions licence applications, p. 10, where a certain mistrust of non-traditional data for credit scoring purposes is expressed: where credit scoring based on non-traditional data is outsourced, appropriate credit risk controls and adequate documentation of sources, which must be understood by the bank, are considered necessary. On the problems posed by this possibility, see, in doctrine, G. Mattarella, Big Data and access to credit for immigrants: algorithmic discrimination and consumer protection, in Giur. comm., no. 4, 2020, p. 696 ff., who concludes in favour of the legitimacy of the use of social data only if it is actually relevant to the customer’s solvency, as well as F. Ferretti, Consumer Access to Capital in the Age of Fintech and Big Data: the limits of EU Law, in Maastricht Journal of European and Comparative Law, 2018, 25, p. 476 ff.

[30] For these data, see G.L. Greco, Credit scoring 5.0 between the Artificial Intelligence Act and the Consolidated Banking Act, in Riv. trim. dell’econ., no. 3, 2021, p. 80 ff.

[31] 2019 provision, available at garanteprivacy.it, which establishes the rules for credit information systems (SIC), such as those managed by private entities such as CRIF. Approved by the Data Protection Authority, the code defines the guarantees for the proper functioning of the credit market, the processing of personal data, retention periods, and methods of information and complaint management. Among the main objectives are to ensure the proper functioning of the credit financial market, while respecting the rights of data subjects; to regulate the processing of personal data and content in SICs in order to assess the financial situation and creditworthiness of data subjects, as well as the reliability and punctuality of payments; and to explicitly prohibit the use of data for other purposes, such as marketing.

[32] Thus D.K. Citron – F. Pasquale, The Scored Society: Due Process for Automated Predictions, in Wash. L. Rev., 89, 2014, p. 1 ff., spec. p. 8 ff.

[33] See M. Franchi, The role of creditworthiness in the renewed regulations on the settlement of over-indebtedness crises: closing the circle? in Riv. dir. banc., 2021, III, p. 501 ff.

[34] On this subject, see M. Hurley – J. Adebayo, Credit scoring in the Era of Big Data, in Yale Journal of Law Technology, 2017, p. 151.

[35] See, on this topic, the interesting analysis by G. Lo Sapio, La black box: l’esplicabilità delle scelte algoritmiche quale garanzia di buona amministrazione (The black box: the explainability of algorithmic choices as a guarantee of good administration), in federalismi.it, 30 June 2021 and, more recently, C. Sabelli, Dentro la Black box: i dati nell’epoca del machine learning (Inside the Black Box: data in the age of machine learning), in orizzonti.polito.it, April 2025.

[36] The issue of profiling has long been a privileged laboratory for observing the dynamics of interaction between the financial sector and personal data protection regulations, as well as a testing ground for legislation, measuring the ability of the law to govern technological innovation without stifling its potential for evolution. As in other productive sectors, the driving force of digital transformation has provided banks, financial intermediaries and FinTech companies with a set of unprecedented tools for analysing, managing and exploiting data, fostering the emergence of economic models based on interconnection and information sharing. At the same time, this process has fuelled a profound rethinking of the operational and organisational models of market players, orienting them towards more flexible, dynamic and interoperable structures. In this context, the profiling business has been able to act as a catalyst for this evolution, transforming itself into a highly profitable sector thanks to its ability to extract economic value from customer data and integrate this information into open and competitive digital ecosystems. The logic underlying these new models — commonly referred to as open business models — is based on the circulation of data as a strategic asset, shared between operators, intermediaries and third parties (think of the paradigm of open banking and the data economy), in a context of increasing technological and functional interoperability. From a legal perspective, the regulatory boundaries of profiling in the banking and financial sector are still being defined and have only been partially explored. The traditional approach, according to which the only regulation applicable to profiling is that laid down in Regulation (EU) 2016/679 (General Data Protection Regulation — GDPR), now appears outdated. In a context of integration between data regulation and financial regulation, the thesis of the “exclusive speciality” of the GDPR can no longer be upheld, as its automatic and isolated application is not sufficient to exclude the relevance of other sectoral regulatory sources — in particular those relating to transparency, fairness in contractual relations and prudential supervision. Rather, it must be recognised that profiling, while generally constituting an optional and ancillary activity to banking and financial services, is now taking on increasing systemic relevance, especially in cases where it is functional to the provision of personalised services, risk assessment or the definition of contractual conditions. The only exceptions are cases where it is made mandatory for reasons of public interest, such as in anti-money laundering or credit monitoring procedures. In a competitive market, profiling is therefore a key competitive factor, enabling operators to develop highly predictive commercial strategies and to exploit the results of data analysis to maximise profitability. It thus stands at the crossroads between technological innovation, competition law and personal data protection, requiring interpreters to face the far from simple challenge of balancing economic freedom and personal protection in an increasingly pervasive and interconnected digital ecosystem. On this subject, please refer to a series of wide-ranging contributions. See, among others, a.a.v.v., in (edited by) V. Falce – U. Morera, Dall’Open Banking all’Open Finance. Profili di diritto dell’economia, Turin, Giappichelli, 2024, passim; a.a.v.v., in (edited by) V. Falce, Strategia dei dati e intelligenza artificiale. Verso un nuovo ordine giuridico del mercato (Towards a new legal order for the market), Turin, Giappichelli, 2023, passim; various authors, in (edited by) V. Falce, Financial Innovation tra Disintermediazione e Mercato (Financial Innovation between Disintermediation and the Market), Turin, Giappichelli, 2021, passim; V. Falce, Data Strategy and Artificial Intelligence, in (edited by) M. Passalacqua, Rights and Markets in the Digital Ecological Transition, Padua, Cedam, 2021; V. Falce – G. Finocchiaro, Fintech: Rights, Competition, Rules, Bologna, Zanichelli, 2019; V. Falce – G. Ghidini – G. Olivieri, Information and Big Data between Innovation and Market, Milan, Giuffrè, 2018.

[37] See, in this regard, M.T. Paracampo, Digital transformation of the financial sector and open finance: what prospects for “sustainable” credit? Initial reflections, in MediaLaws, 2023, p. 188 ff.; L. Ammanati – G.L. Greco, Smart credit scoring: experiences, risks and new rules, in Riv. dir. banc., 3, 2023, p. 465 ff.

[38] See G.L. Greco, Credit scoring 5.0 between the Artificial Intelligence Act and the Consolidated Banking Law, in Riv. trim. dell’econ., cit., p. 74 ff.

[39] Article 5 of the Consolidated Banking Act has been defined as “a medium specifying general principles directly intended to regulate current relations between private individuals” (A.A. Dolmetta, Valutazione del merito creditizio, op. cit., p. 1582). On the nature of the general principle with which not only the actions of supervisory authorities but also those of banks must comply, see F. Sartori, Disciplina dell’impresa e statuto contrattuale: il criterio della ‘sana e prudente gestione’ (Company regulation and contractual status: the criterion of ‘sound and prudent management’), in Banca borsa titoli di credito, 2017, 2, p. 131 ff., who writes: “the principle of sound and prudent management is not limited to regulatory supervision, but becomes itself a criterion that sets rules of conduct”; F. Ciraolo, Small loans backed by public guarantees (Article 13, paragraph 1, letter m), liquidity decree), credit refusal and bank liability, in Riv. dir. banc., 2, 2021, p. 313 ff., according to whom “the assessment of creditworthiness is a specific obligation for banking institutions, which is based on primary and secondary sector legislation, with a clear link to the general principle of sound and prudent management”. On this point, see also Cass., Sez. I, ord. no. 18610/2022. For a critical reinterpretation of the nature of the general clause in Article 5 of the Consolidated Banking Act, see M. De Poli, Sound and prudent management of financial companies, in (edited by) R. Lener – A.S. Alibrandi – M. Rabitti et al., General clauses in economic law, Giappichelli, Turin, 2024.

[40] On this point, see C. De Rosa, Dal vaglio del merito creditizio al credito alla cosa. Il ruolo della garanzia immobiliare nell’erogazione del credito (From creditworthiness assessment to secured credit: the role of real estate collateral in lending), in Osservatorio dir. civ. e comm. (Civil and Commercial Law Observatory), 2021, p. 385 ff.

[41] See C. Rinaldo, Il Credit scoring, in (edited by) M. Cian – C. Sandei, Diritto del Fintech, Cedam, 2024, pp. 458-461.

[42] On this subject, see A.A. Dolmetta, Creditworthiness assessment for access to the service. The perspective of the business contract, in Banca, borsa e tit. cred., 2023, p. 310 ff., which analyses the duty laid down by these articles, in particular within the broader context of relations between customers and banks.

[43] See R. Santagata, La concessione abusiva del credito al consumo (The abusive granting of consumer credit), Turin, 2020, p. 41; in the same vein, with specific reference to credit scoring, L. Ammanati – G.L. Greco, Smart credit scoring, cit., p. 487 ff.; D. Di Sabato – G. Alfano, The use of AI to influence and evaluate people between limitations and prohibitions: some critical considerations on the proposed regulation drawn up by the European Commission, in Riv. dir. impresa, 2022, p. 290 ff. See also M. Chironi, The liability of the creditor bank for incorrect creditworthiness assessment, in Resp. civ. e prev., 2023, p. 941, who considers these rules inapplicable to credit scoring.

[44] The Court of Justice of the European Union, in its judgment of 27 February 2025 in Case C-203/22, ruled on automated credit scoring, with particular reference to the data subject’s right to an explanation of the logic underlying the decision to grant or refuse credit, which allows him or her to understand and challenge the automated decision. The principles of law affirmed are as follows:

– Article 15(1)(h) of Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) must be interpreted as meaning that, in the case of automated decision-making, including profiling, within the meaning of Article 22(1) of that regulation, the data subject may require the controller, by way of ‘meaningful information about the logic involved’, to explain to him, in a concise, transparent, intelligible and easily accessible form, the procedure and principles applied to use, by automated means, personal data relating to that data subject in order to obtain a specific result, such as a creditworthiness profile (the Regulation is available at eur-lex.europa.eu/legal-content).

– Article 15(1)(h) of Regulation 2016/679 must be interpreted as meaning that, where the controller considers that the information to be provided to the data subject in accordance with that provision contains third-party data protected by that regulation or trade secrets, within the meaning of Article 2(1) of Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure, that controller is required to communicate that allegedly protected information to the competent supervisory authority or court, which is responsible for weighing the rights and interests at stake in order to determine the scope of the data subject’s right of access under Article 15 of that regulation.

According to the Court, in summary, the data controller must describe the procedure and principles actually applied in assessing the customer’s creditworthiness, so that the data subject can understand which of his or her personal data were used, and how, in the automated decision-making process (such as credit scoring): for example, by being informed whether and how a change in the personal data taken into account would have led to a different result. Merely communicating an algorithm would not, however, constitute a sufficiently concise and comprehensible explanation. Furthermore, if the data controller considers that the information to be provided contains protected data of third parties or trade secrets, it must communicate the allegedly protected information to the supervisory authority or the competent court, which must weigh the rights and interests at stake in order to determine the scope of the data subject’s right of access: the GDPR therefore precludes the application of a national provision which, as a rule, excludes that right of access where it would compromise a trade secret of the data controller or of a third party.

With regard to the right to an explanation of the automated decision, the Court points out that, in a decision-making process based exclusively on automated processing (such as credit scoring), the purpose of the information provided for in Article 15(1)(h) of the GDPR is to enable the data subject to exercise effectively the rights granted by Article 22(3) of that regulation, namely to express his or her point of view on the decision and to contest it. If the persons affected by an automated decision, including profiling, could not understand the reasons for that decision before expressing their point of view or contesting it, those rights could not fully serve their purpose of protecting such persons from the specific risks to their rights and freedoms arising from the automated processing of their personal data. An examination of the purposes of Article 15 shows that the right to obtain ‘meaningful information about the logic involved’ in an automated decision-making process must therefore be understood as a right to an explanation of the procedure and principles actually applied in order to use, by automated means, the data subject’s personal data to obtain a specific result, such as a creditworthiness profile; to enable the data subject to exercise effectively the rights granted by the GDPR and, in particular, by Article 22(3), that explanation must be provided in a concise, transparent, intelligible and easily accessible form.

For the Court, neither the simple communication of a complex mathematical formula, such as an algorithm, nor a detailed description of all the steps in an automated decision-making process such as credit scoring can satisfy these requirements, since neither would constitute a sufficiently concise and comprehensible explanation. As is apparent from the guidelines on automated individual decision-making and profiling for the purposes of Regulation (EU) 2016/679, adopted on 3 October 2017 by the working party established under Article 29 of Directive 95/46 (as amended and adopted on 6 February 2018), on the one hand, the controller should find simple ways to communicate to the data subject the logic or criteria on which the decision is based; on the other hand, the GDPR requires the controller to provide meaningful information about the logic involved, “but not necessarily a complex explanation of the algorithms used or disclosure of the complete algorithm”. With specific regard to profiling, the referring court could, in particular, consider it sufficiently transparent and comprehensible to inform the data subject of how a change in the personal data taken into account would have led to a different result.
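
By way of illustration only, the following minimal Python sketch shows what such a counterfactual explanation might look like for a toy scoring model: the applicant is told whether, and how, a change in one of the personal data items taken into account would have led to a different result. Every feature name, weight and threshold here is an assumption invented for the example; none reflects a real scoring system or the model at issue before the Court.

import math

# Toy linear credit-scoring model: all weights, features and the
# approval threshold are hypothetical values chosen for illustration.
WEIGHTS = {"income_keur": 0.08, "existing_debt_keur": -0.10, "late_payments": -0.9}
BIAS = 0.4
APPROVAL_THRESHOLD = 0.5

def score(applicant):
    """Return a logistic score in [0, 1] for the applicant's data."""
    z = BIAS + sum(w * applicant[name] for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

def counterfactual(applicant, feature, new_value):
    """Say whether changing a single input would flip the credit decision."""
    before = score(applicant) >= APPROVAL_THRESHOLD
    after = score({**applicant, feature: new_value}) >= APPROVAL_THRESHOLD
    if before == after:
        return f"Changing {feature} to {new_value} would not alter the decision."
    outcome = "granted" if after else "refused"
    return f"With {feature} = {new_value}, the credit would have been {outcome}."

applicant = {"income_keur": 30, "existing_debt_keur": 25, "late_payments": 2}
print(round(score(applicant), 2))                     # 0.18 -> credit refused
print(counterfactual(applicant, "late_payments", 0))  # a clean record flips it

An explanation of this kind communicates the effect of a change in the input data in a concise and intelligible form, without disclosing the complete algorithm: precisely the balance that the Court and the guidelines recalled above describe.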

[45] On this subject, see R. Raimo, Access to credit and creditworthiness assessment, in (edited by) G. Conte, Banking and Financial Arbitrator, Milan, 2021, p. 211 ff.

[46] On this subject, see A.A. Dolmetta, Creditworthiness assessment for access to the service, cit., p. 329 ff., who considers the opposite case, in which sums of credit have been disbursed on the basis of an incorrect assessment of creditworthiness.

[47] See A.A. Dolmetta, Creditworthiness assessment for access to the service, cit., p. 311, in particular on the preparatory and functional, yet autonomous, nature of the assessment phase with respect to the decision-making phase.

[48] See Expert Group on Regulatory Obstacles to Financial Innovation (ROFIEG), 30 Recommendations on Regulation, Innovation and Finance, Final Report to the European Commission, 2019.

[49] See, on this subject, F. Pasquale, The Black Box Society. The Secret Algorithms That Control Money and Information, Harvard University Press, 2015, a text that explores the social, political and legal implications of algorithmic opacity in digital economies, with many references to the financial sector.

[50] See Guidelines on automated individual decision-making and profiling for the purposes of Regulation (EU) 2016/679 – WP251, available at garanteprivacy.it.

[51] For further information on this point, see R. Basili, Intelligent systems and data: ethical opportunities and risks, in (edited by) A. Morace Pinelli, The circulation of personal data: person, contract and market, Pacini giuridica, 2023, p. 129, where the author observes that ‘polarised data – characterised, for example, by social stereotypes (such as race in matters of justice or advertising) – correspond to algorithmic choices, that is, responses such as those of search engines or advertising recommendations, that are biased and therefore exert a strongly prejudicial influence on the masses of users’. Indeed, algorithms may generate discriminatory predictions against individuals or groups of individuals, since they are “affected” by a bias intrinsic to the collection of historical data; this makes it necessary to analyse the precautions required in data processing in order to detect and counterbalance such distortions.

[52] For a more in-depth analysis of the risks and benefits associated with the adoption of new creditworthiness assessment systems, see F. Mattasoglio, La valutazione ‘innovativa’ del merito creditizio del consumatore e le sfide per il regolatore (The “innovative” assessment of consumer creditworthiness and the challenges for the regulator), cit., p. 187 ff., esp. p. 200 ff. More recently, on this topic, see M. Rabitti, Credit scoring via machine learning and responsible lending, in Riv. dir. banc., 2023, p. 175 ff. In foreign doctrine, in addition to the contributions already mentioned, see, from a critical perspective, D.K. Citron – F. Pasquale, The Scored Society: Due Process for Automated Predictions, in Washington Law Review, 2014, vol. 89, p. 1 ff.

[53] See EDPB, Opinion 28/2024 on Certain Data Protection Aspects Related to the Processing of Personal Data in the Context of AI Models, adopted on 17 December 2024. Along the same lines, see European Data Protection Supervisor (EDPS), Generative AI and the EUDPR. First EDPS Orientations for Ensuring Data Protection Compliance when Using Generative AI Systems, 3 June 2024.

[54] For more on the subject of profiling, see N.M.F. Faraone, Spunti ricostruttivi in materia di profilazione e valutazione del merito creditizio nella nuova strategia europea dei dati (Reconstructive insights on profiling and creditworthiness assessment in the new European data strategy), in Analisi giur. dell’econ., no. 1, 2025, p. 267 ff.

[55] In the absence of such requirements, the use of advanced credit scoring techniques risks leading to distorted results, with incorrect or discriminatory assessments, sometimes even unintentional, produced by opaque, self-referential algorithmic models or by models lacking adequate control mechanisms. The phenomenon of algorithmic discrimination, an expression of structural bias inherent in the training data or in the inferential logic of the model, is one of the most significant critical issues in modern scoring systems: it occurs when automated decisions reproduce or amplify pre-existing disparities, without it being possible to identify clearly the determinants of the decision-making process. This highlights the need to promote a model of responsible algorithmic governance, based on principles of transparency, verifiability and auditability of automated decision-making processes, so as to ensure that technological evolution does not compromise, but rather strengthens, the safeguards of fairness and impartiality that must guide assessment activities in the credit market.
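
To make the point concrete, the following minimal Python sketch illustrates one elementary form of such auditability: comparing approval rates across groups of applicants and flagging a marked disparity for human review. The sample data, the group labels and the 0.8 benchmark (borrowed from the “four-fifths” rule of thumb used in US employment practice) are assumptions for the example, not standards laid down by the GDPR or the AI Act.

from collections import defaultdict

# Hypothetical logged decisions as (group, approved) pairs. In practice the
# protected attribute would be used only for auditing, never for scoring.
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rates(records):
    """Compute the approval rate for each group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

rates = approval_rates(decisions)
impact_ratio = min(rates.values()) / max(rates.values())
print(rates)                   # {'A': 0.75, 'B': 0.25}
print(round(impact_ratio, 2))  # 0.33
if impact_ratio < 0.8:         # assumed audit benchmark, not a legal test
    print("Disparity flagged: route the model to human review.")

Routine checks of this kind do not by themselves establish discrimination, but they give concrete content to the requirements of verifiability and auditability recalled above.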

[56] See D. Imbruglia, Le presunzioni delle macchine e il consenso dell’interessato (Machine presumptions and the data subject’s consent), in Riv. trim. proc. civ., 3, 2023, p. 921. See also E. Falletti, Automated decisions and the right to an explanation: some comparative reflections, in Dir. inf., 2, 2019.

[57] On the traceability of the obligation to assess the consumer’s creditworthiness to both the economic public policy of protection and the public policy of direction, see A. Tucci – M. Semeraro, Consumer credit, in (edited by) E. Capobianco, Banking contracts, Wolters Kluwer, Milan 2021, p. 1846 ff.

[58] See B. Marchetti, La garanzia dello Human in loop alla prova della decisione algoritmica amministrativa (The guarantee of the human in the loop put to the test of administrative algorithmic decision-making), in BioLaw Journal, 2, 2021, p. 367 ff.; E. Pellecchia, Profiling automated decisions in the age of the black box society: data quality and algorithm readability in the context of responsible research and innovation, in Le nuove Leggi Civili Commentate, 5, 2018, p. 1224; C. Tabarrini, Explainability Due Process: Legal Guidelines for AI-Based Business Decisions, in (edited by) R. Senigaglia – C. Irti – A. Bernes, Privacy and Data Protection in Software Services, Singapore, Springer, 2022; R. Messinetti, The protection of the human person versus artificial intelligence. Decision-making power of technological apparatus and the right to an explanation of automated decisions, in Contr. impr., 2019, p. 877 ff.

[59] See M. Bianco, Artificial intelligence in credit scoring: analysis of some experiences in the Italian financial system, 12 October 2022, p. 2, in bancaditalia.it.

[60] From this perspective, the Regulation acts not only as a safeguard for individuals, but also as a tool for rationalising the entire digital financial ecosystem. Its function goes beyond the merely protective dimension to take on a systemic value: through the codification of general principles – lawfulness, fairness, proportionality and transparency – the GDPR guides the operating methods of intermediaries, imposing a model of data governance based on responsibility and technological sustainability. The importance of the GDPR in the field of credit scoring is therefore twofold: on the one hand, it represents a substantial guarantee for the protection of fundamental consumer rights, namely the right to privacy and the protection of personal data; on the other, it requires financial operators to undertake a thorough review of their credit risk management methods, since automated assessment must comply with those same general principles.

[61] See B. Bagni, Use of algorithms in the credit market: national and European dimension, in Osservatorio sulle Fonti, 2021, p. 914 ff.

[62] See C.N. Pehlivan, The EU Artificial Intelligence (AI) Act: An Introduction, in Global Privacy L. Rev., 2024, p. 1 ff.; on this topic, see also M. Ebers, Standardising AI. The Case of the European Commission’s Proposal for an “Artificial Intelligence Act”, in (edited by) L.A. DiMatteo – C. Poncibò – M. Cannarsa, The Cambridge Handbook of Artificial Intelligence, Cambridge, 2022, p. 321 ff.

[63] For further information on this point, see again R. Basili, Intelligent systems and data: ethical opportunities and risks, cit., p. 129, and the considerations set out in footnote 51 above.

[64] See F. Mattasoglio, La valutazione “innovativa” del merito creditizio del consumatore e le sfide per il regolatore (The “innovative” assessment of consumer creditworthiness and the challenges for the regulator), in Dir. banc. merc. fin., 2020, p. 201. See also OECD, AI Data Governance and Privacy: Synergies and Areas of International Co-Operation, in OECD Artificial Intelligence Papers, 22, Paris, 2024, OECD Publishing; Roundtable of G7 Data Protection and Privacy Authorities, Promoting Enforcement Cooperation, 11 October 2024, p. 6.

[65] Court of Justice of the European Union, 7 December 2023, Schufa Holding (Scoring), Case C‑634/21; see the comments by F. D’Orazio, Credit scoring and Article 22 of the GDPR under review by the Court of Justice, in Nuova giur. comm., 2024, p. 410 ff., and E. Falletti, Some reflections on the applicability of Article 22 GDPR in relation to credit scoring, in Dir. inf., 2024, p. 110 ff. On this point, see also M. Rabitti, Technological discrimination and Fin-tech, in Riv. dir. impresa, Esi, no. 3, 2023, p. 478 ff.

[66] See C. Rinaldo, Il Credit scoring, cit., p. 468 ff.

[67]  In this sense, see also Article 9 of the Council of Europe Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data (Convention 108+), as amended by the Protocol adopted in 2018 and ratified by Italy in 2021. In this regard, the considerations contained in paragraph 77 of the explanatory report to that Convention are also significant, according to which: ‘data subjects have the right to obtain knowledge of the reasoning behind the processing of data, including the consequences of such reasoning and the conclusions that may have been drawn from it, in particular in the case of the use of algorithms for the purpose of automated decision-making, especially in the context of profiling. For example, in the case of a creditworthiness assessment system, borrowers have the right to know the logic used in the processing of their data that leads to the granting or refusal of the loan, rather than simply being informed of the decision itself. Understanding these elements contributes to the effective exercise of other fundamental safeguards, such as the right to object and the right to appeal to a competent authority’.

[68] See N.M.F. Faraone, Spunti ricostruttivi in materia di profilazione e valutazione del merito creditizio nella nuova strategia europea dei dati, cit., p. 270 ff.

[69] See G. Sartor, AI, Transparency and the Rule of Law, in European Journal of Law and Technology, 2024, p. 1 ff.

[70] R. Caponigro, Algorithms and responsible credit: profiles of compatibility with European law, in Riv. dir. banc., 2023, p. 521 ff.

[71] See G. Passarelli, The assessment of consumer creditworthiness between over-indebtedness and the new Crisis and Insolvency Code, in nuovodirittodellesocietà.it

[72] See V. Cangemi, The transparency of automated decision-making systems in the workplace. A jurisprudential analysis, in Dir. rel. ind., 2024, p. 928, who observes how the “technical nature” of machine learning models hinders the intelligibility of the automated decision-making process by interested third parties.

[73] See Court of Nola, 8 May 2024, no. 41, in tribunale.nola.giustizia.it.

[74] On this subject, see App. Bologna, decree of 9 February 2024, in dirittodellacrisi.it, according to which the request for access to the over-indebtedness procedure is admissible even in the presence of a crisis and therefore of the so-called danger of insolvency, which indicates a critical situation with an actual probability, and not the mere possibility, of future insolvency. The probabilistic assessment requires a projection of the debtor’s financial evolution and of the prospective ability to fulfil obligations regularly, even those that have not yet arisen but are likely to arise. See also Court of Reggio Calabria, decree of 25 January 2024, in tribunale-reggiocalabria.giustizia.it.
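
A deliberately simplified numerical sketch (in Python, with invented figures) may help to visualise what the prospective assessment described above involves: projecting income, expenses and obligations falling due, including those that have not yet arisen, and asking whether the debtor’s balance turns negative within the horizon considered. It illustrates the reasoning only, not any criterion actually adopted by the courts cited.

# Purely illustrative projection: all amounts and the 12-month horizon
# are hypothetical values invented for this example.
monthly_income = 1800.0
monthly_expenses = 1100.0
balance = 500.0                        # liquid assets today
instalments = {6: 5000.0, 12: 4000.0}  # obligations falling due later

insolvent_month = None
for month in range(1, 13):
    balance += monthly_income - monthly_expenses  # monthly surplus
    balance -= instalments.get(month, 0.0)        # debts as they fall due
    if balance < 0:
        insolvent_month = month
        break

if insolvent_month:
    print(f"Projected inability to pay from month {insolvent_month}.")
else:
    print("Obligations appear sustainable over the projected horizon.")

Run on these invented figures, the projection flags month 6: despite a monthly surplus, an obligation not yet due renders future insolvency an actual probability rather than a mere possibility, which is the distinction drawn in the decisions cited above.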

[75] See, in case law, Court of Turin, 31 May 2023, in tribunale.torino.giustizia.it; Court of Santa Maria Capua Vetere, 16 October 2023, in unijuris.it; and, more recently, Court of Appeal of Bari, Section I, 30 April 2025, no. 626, in simpliciter.ai, which focuses on certain decisive issues in the context of over-indebtedness proceedings in general and consumer debt restructuring proceedings in particular.

The Apulian court dwells, in fact, on the effects of the creditor’s failure to assess creditworthiness and on the criteria for assessing the subjective requirement of the debtor’s merit, clarifying the discrimen between “slight fault” and “serious fault”, as well as specifying the criteria for verifying the economic convenience of the proposed plan compared with the liquidation alternative and its possible duration of more than five years.
