«They say things are happening at the border, but nobody knows which border» (Mark Strand)
by Antonio Davola
Abstract: Consumer scores describe individuals or groups in order to predict, on the basis of their data, behaviors and outcomes. Scores use information about consumer characteristics and attributes by means of statistical models that produce a range of numeric scores, and they proliferate in day-to-day interactions: in the US alone, roughly 140 scoring algorithms are implemented for a wide range of services, and the most advanced of them can elaborate up to 8,000 individual variables.
In particular, credit-scoring systems are used to evaluate individuals’ creditworthiness for access to finance. Credit scores are implemented by both institutional operators and emerging P2P lending platforms: a positive credit score represents an essential means for access to credit, and therefore a tool for individual and social development.
At the same time, not everyone can be allowed to access credit under the same conditions: individuals and businesses must be distinguished based on predictions regarding their likelihood to repay loans, and the characteristics of their credit must be determined accordingly. When effective, good evaluations enable lenders to respond promptly to market conditions and customer needs: both lenders and borrowers stand to benefit.
These new forms of scoring provide an opportunity to access credit for “transparent” and unbanked individuals without a consistent credit history, promoting forms of inclusion. Nevertheless, they entail significant risks: lenders should be attentive to avoiding disparate impact and unfair outcomes, while at the same time considering how to comply with the obligations of disclosure and transparency towards consumers. Lastly, how these new systems (e.g. scores implementing aggregated data and scores based on indirect proxies for sensitive factors) fit in the current European regulatory framework is still largely uncertain.
Striking the balance between conflicting interests and reaching the optimal level of access to credit poses a fundamental challenge for regulators.
In order to provide a normative response to these concerns, the paper provides an overview of credit scoring algorithms and their role in promoting access to credit and financial inclusion, comparing them with already existing tools (such as the FICO score). It then investigates the similarities between consumer-scoring systems and the activity conducted by credit rating agencies – subject to careful examination by regulators in the US and in Europe following the 2008 financial crisis – to develop a regulatory proposal.
In particular (arguing in favor of an intervention by analogy), the research illustrates a tentative regulatory model that takes advantage of the framework delineated by Regulation (EU) No 462/2013 for credit rating agencies as a matrix to develop specific obligations for companies involved in the development and use of consumer-scoring algorithms, ultimately allowing information on credit scoring methodologies and relevant data to be monitored and audited by consumers and supervisory agencies.
Summary: 1. Prelude: tales of consumer scoring. – 2. Some preliminary considerations on Fintech, access to credit and innovation as a tool for promoting financial inclusion. – 3. Traditional scoring systems and their shortcomings: the FICO score. – 4. Credit scoring 2.0. – 5. Risks and opportunities in the use of credit scoring 2.0. – 6. The normative framework for creditworthiness assessment, and the uncertainties it unfolds. – 7. Looking for true novelty in soft-data analysis, and the analogy between Consumer Reporting Agencies and Credit Rating Agencies. – 8. A tentative regulatory proposal. – 9. Conclusive Remarks.
1. In 2014, the People’s Republic of China officially started the implementation of its Social Credit System (社会信用体系, shehui xinyong tixi, “SCS”): each Chinese citizen is classified in a national database and – on the basis of the information available to the government – is assigned a “social credit score”. Although only recently implemented, the idea of using a social credit system to represent the social and economic condition of citizens (as a parameter for attributing benefits or exclusions) dates back to the early ‘90s. In 2018, 40 Chinese local governments had already introduced pilot – privately or publicly managed – versions of the SCS under their jurisdictions, and a full implementation of the SCS program on a national scale is expected to be completed by 2020.
On the other side of the ocean, during the 2017 international conference Money 20/20, PayPal President and CEO Dan Schulman made a passionate speech to thousands about the globe’s working poor and their need for access to banking and credit. Schulman underlined the role of algorithmic credit scoring – where payments and social media data coupled with machine learning are exploited to make lending decisions – as the primary tool for lending activity in the future, given the capacity of these technologies to promote access to credit for those individuals who lack traditional assets and guarantees and are therefore excluded from investments.
These episodes are symptomatic of the growing attention that credit scoring algorithms for the assessment of individuals’ creditworthiness are receiving in contemporary society.
The creditworthiness assessment is defined, in general terms, as the probabilistic evaluation that a professional operator in the credit sector performs in order to predict the applicant’s future conduct, with particular regard to her solvency prospects. Such an evaluation is preliminary and prodromal to the granting – or denial – of a loan.
When automated tools are used in this process, the assessment of creditworthiness is performed by decision-making algorithms, which elaborate different variables (depending on the specific model considered) in order to classify applicants into tiers, provide a numeric score, or even state a binary answer regarding their suitability for a loan.
The ultimate goal of this process is to offer effective resources for identifying the risk of insolvency, to assess the probability of repayment, and to provide the credit institution with information allowing it to better differentiate loan rates and conditions amongst applicants.
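The three modes of output just described – numeric score, tier classification, binary answer – can be illustrated with a minimal sketch. Every variable, weight, and threshold below is hypothetical and chosen purely for exposition; no actual lender’s model is reproduced.

```python
# Purely illustrative sketch: an automated assessment mapping hypothetical
# applicant variables to a numeric score, a coarse tier, and a binary decision.
import math

def credit_score(features, weights, bias=0.0):
    """Toy logistic model: estimated repayment probability scaled to 0-1000."""
    z = bias + sum(weights[k] * v for k, v in features.items())
    probability = 1.0 / (1.0 + math.exp(-z))  # estimated repayment probability
    return round(probability * 1000)

def tier(score):
    """Classify the numeric score into coarse creditworthiness tiers."""
    if score >= 800:
        return "A"
    if score >= 600:
        return "B"
    if score >= 400:
        return "C"
    return "D"

def decision(score, cutoff=500):
    """Binary answer: grant the loan only above the cutoff score."""
    return score >= cutoff

# Hypothetical applicant and weights, for illustration only.
weights = {"income_ratio": 2.0, "missed_payments": -1.5, "years_history": 0.3}
applicant = {"income_ratio": 1.2, "missed_payments": 1.0, "years_history": 4.0}
s = credit_score(applicant, weights)
```

The same underlying probability estimate can thus be surfaced to the lender in any of the three forms the text mentions, which matters later for the transparency discussion: a binary answer discards far more information than the score it is derived from.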
Consumer credit scoring constitutes one of the most significant areas of Fintech: between 2014 and 2017, global investments in the use of IoT and ICT for the customization and provision of financial services grew from 19.9 to 39.4 billion dollars, and they exceeded 41.7 billion dollars in the first half of 2018. In this field, a major share of these investments is directed towards the improvement of services connected to retail consumer credit.
If, on the one hand, algorithmic scoring is a resource for credit institutions, it nevertheless creates new risks for individuals: an unfortunate example occurred in 2009 to the businessman Kevin Johnson, who was unilaterally subjected by his bank to a forced reduction of his credit limit (of around 65 percent) due to the outcome of his profiling. It goes without saying that Johnson was unaware of these tracking and elaboration activities being performed on his account.
The use of algorithmic technologies for the elaboration of credit scoring lies at the crossroads of different principles and interests: critical assessments of the various facets of the issue are therefore needed in order to contextualize and address both the microeconomic (e.g. consumers’ autonomy, freedom and privacy) and macroeconomic aspects (e.g. the duality between responsible borrowing/lending, and financial inclusion) of this heterogeneous phenomenon.
2. In order to properly investigate the implications of technological innovations for consumers’ access to credit, it is first and foremost necessary to make some preliminary considerations on the wider topic of financial inclusion. Such a necessity arises from the fact that the concept of financial inclusion lies at the interpretative core of the role that operators in the credit market play in the contemporary market economy and, therefore, constitutes a major guideline in balancing their rights and duties towards debtors.
On one side, promoting access to credit represents a fundamental resource to ensure individuals’ economic citizenship; on the other, it is widely acknowledged that an equally relevant goal lies in ensuring responsible access to finance and (then) repayment: an excessive relaxation of the system would undermine the stability of professional operators, with a substantive detrimental effect on the general welfare, exacerbating public debt.
Empirical studies have widely demonstrated that financial inclusion represents a pivotal resource in developing countries, reducing social inequality and local poverty; in addition, numerous studies suggest that the beneficial effects arising from greater financial inclusion can be experienced also in more advanced economies. This is likely to happen, in particular, in those countries where – despite the general sophistication of the economic framework – significant groups of individuals are nevertheless not fully integrated into the financial system: in such situations, introducing alternative modes of access to credit and savings usually increases investment levels, consumption, and the employment rate, and (more generally) allows for a better management of market shocks.
This research proves particularly significant in light of the profound re-interpretation of the main European normative corpora that was conducted after the 2008 financial crisis – and the subsequent tightening of the credit offer across the Union territory: amongst the main regulatory novelties, particular reference shall be made to the enactment of Directive 2008/48/EC on credit agreements for consumers and Directive 2014/17/EU on credit agreements for consumers relating to residential immovable property. These changes were, as a matter of fact, symptomatic of the will to introduce a set of expected conducts for both consumers and creditors, by embracing an amphibious perspective apt to reach a high level of consumer protection and stimulate the credit market. In a ‘co-responsibility framework’, creditors are required to check the creditworthiness of their clients on an individual, casuistic basis, operating a thoughtful evaluation of all the information that might be relevant to predict the applicant’s (in)solvency.
As a consequence of the joint effect of normative efforts and technological developments, operators in the credit market started to consider and implement alternative models of calculation: in particular, scoring algorithms based on big data analysis, machine learning, and other forms of predictive modelling offered a valuable opportunity to introduce into the creditworthiness assessment new – previously unconsidered – variables, related to consumers’ behavioural, environmental, and social characteristics. These innovations were initially employed by emerging private actors in the credit market, such as P2P lending platforms, and progressively met the favour of institutional creditors.
Scoring algorithms are supposed to overcome the shortcomings that traditional assessment mechanisms present, considering both the precision of the issued consumer rating and the capacity to allow “transparent” individuals to access credit.
3. A major divide between the “new” scoring systems and the old ones is related to the kind of data they elaborate: traditional information used by credit institutions in order to quantify and supply loans is based – ever since the first calculation tools began to operate – on hard data, which are objective, quantitative in nature and easily verifiable. Amongst the software using hard data for the creditworthiness assessment, the most popular is the Fair Isaac Corporation (FICO) score. Despite the long-lasting persistence of trade secret agreements on the characteristics of the calculation algorithm, it is nowadays publicly disclosed that the FICO score classifies the information related to the applicant into five categories (payment history; amounts owed; length of the credit history; new credit; type of credit used); each category has its own individual value and a specific weight in the overall score: the combination of the different values contributes to the calculation of an overall score ranging between 300 and 850. Lastly, the score places the applicant within one of five “creditworthiness levels”.
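The weighted-category mechanism can be sketched as follows. The exact FICO algorithm is proprietary; the weights below are the commonly reported approximate category weights (35/30/15/10/10), while the normalization of category values and the division into five equal level bands are hypothetical simplifications for illustration.

```python
# Illustrative sketch of a FICO-style weighted aggregation. The real algorithm
# is a trade secret; weights are the commonly reported approximations, and the
# 0..1 category normalization and equal-width level bands are hypothetical.
CATEGORY_WEIGHTS = {
    "payment_history": 0.35,
    "amounts_owed": 0.30,
    "length_of_history": 0.15,
    "new_credit": 0.10,
    "credit_mix": 0.10,
}
SCORE_MIN, SCORE_MAX = 300, 850

def overall_score(category_values):
    """Combine per-category values (each normalized to 0..1) into one score."""
    combined = sum(CATEGORY_WEIGHTS[c] * v for c, v in category_values.items())
    return round(SCORE_MIN + combined * (SCORE_MAX - SCORE_MIN))

def creditworthiness_level(score):
    """Place the applicant into one of five levels across the score range."""
    band = (SCORE_MAX - SCORE_MIN) / 5
    return min(5, 1 + int((score - SCORE_MIN) // band))
```

Even this simplified form makes the transparency critique discussed below concrete: knowing the five category names and weights still says nothing about how each category value is computed from the underlying records.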
Despite its wide use in professional practice, significant criticism concerns (on the one hand) the lack of transparency of the FICO algorithm and (on the other) its difficulty in adequately assessing the creditworthiness of individuals with a short credit history.
It was noted, first and foremost, that the generic mention of the five categories contributing to the overall score is inadequate to properly inform applicants (and supervisory entities) regarding which specific information is considered in the assessment, and how it is actually processed by the algorithm. As a counterargument, representatives of the industry stressed the need to protect the secrecy and industrial value of the scoring algorithm; in recent times, the debate was further exacerbated in the light of the sub-prime crisis.
Secondly, empirical studies observed that the use of the FICO score embodies an intrinsic risk of systematically excluding some segments of the population from access to credit: reliance on FICO parameters as proxies to evaluate creditworthiness ends up creating economic “credit deserts”, where individuals are precluded from investments and financial services. This issue is further aggravated by the uncertainties surrounding the effective correlation between the FICO score and individuals’ solvency, due to the chance of erroneous inferences drawn from the applicants’ data.
Further risks of exclusion due to the use of FICO arise from the potential absence of traditional data for specific groups of a community. Due to the changes that affected consumption dynamics and socioeconomic relations in contemporary society, many consumers (e.g. gig-economy workers, immigrants or young professionals) cannot be properly evaluated through the FICO categories: they do not have a long-enough credit history, conduct atypical businesses, and obtain economic resources outside traditional credit channels.
As far as these cases are concerned, using quantitative data as the sole proxy to determine individuals’ creditworthiness might be misleading and determine unfair refusals or unjust conditions for a loan, since the outcome has no real connection with the applicants’ solvency or welfare status.
Against this background, it shall be defended that reliance on hard data entails significant benefits for both credit institutions and applicants: as far as the former are concerned, credit history still represents extremely reliable information and, since quantitative hard data are usually held by operators themselves (e.g. banks), using them as the main proxy for the creditworthiness assessment creates a significant competitive advantage for traditional operators and constitutes a major entry barrier for competitors. From the consumers’ perspective, the main advantage of relying exclusively on hard data – given that this sort of information usually pertains to the applicants’ financial activity – is that no sensitive data on personal or social habits are acquired or elaborated by their counterparties. Furthermore, the use of (relatively) simple calculation algorithms with few, known, variables allows consumers to monitor – up to a certain extent – the fairness and accuracy of the evaluation, in order for them to pinpoint and correct errors, and eventually to react to privacy violations.
Still, the persistent existence of groups who are unable to access credit under traditional quantitative credit scoring systems, together with the growing availability of behavioural and environmental data, pushes towards the inclusion of alternative systems for the creditworthiness assessment, leading to significant normative uncertainties.
4. As preliminarily underlined, the availability of a vast amount of information in the digital environment introduced new opportunities for credit operators, reshaping their relations with consumers and promoting significant changes in the modes of access to finance.
In the so-called data society, individuals operate as “informative agents”, and digital technologies are exploited to gather and process the information that consumers furnish – through their use of ICTs and IoT – on the different channels, to identify features that are relevant to the users, and to develop personalized commercial strategies based on individuals’ preferences.
In particular, through ICTs, social media analysis, wearable devices and app tracking, credit institutions can collect and connect psychological, environmental, behavioural, and social data, profile people (being even capable of deducing their emotions by means of affective computing analysis) and modify their commercial strategy and product offering through algorithmic elaborations.
This phenomenon has had pervasive implications in all the areas connected to B2C transactions and, in the field of credit scoring, led to the utilization of creditworthiness algorithms based on big data analysis and techniques such as Knowledge Discovery in Databases as resources to overcome the shortcomings of traditional decision-making protocols.
Scoring algorithms “2.0” promise to improve the analysis of loan applications and to effectively profile consumers, using data from social media or other sources of information: these data (usually acquired from a third party operating as a data broker) range from consumption habits to buying preferences, and more generally exploit the social ecosystem of individuals as a proxy to evaluate their solvency expectancy. By creating a virtual image of the applicant’s personality, these new tools jointly use hard and soft data; they profoundly alter the assessment methodologies, going past the mere historical analysis of consumers’ economic resources and transactions.
These systems exploit different and heterogeneous types of data related to consumers: there are currently credit scoring algorithms on the market that analyse consumers’ scrolling rate through apps’ terms and conditions, or their presence on different social media platforms. Other systems monitor the purchase rate for high-tech products, the number and characteristics of the apps that the user has on her smartphone and the device’s operating system; relevant information might also be inferred from location data obtained by GPS tracking, habitual relationships and encounters, use of health and energy services, and even from the applicant’s physical appearance. Recently, some credit operators introduced assessment systems based on the simultaneous elaboration of historical and present data – e.g. by requesting candidates to record a short video-interview to acquire their psychometrics, or to take an online test in order to evaluate their probability and calculation skills, self-control, and capacity to make choices under pressure.
Besides these very sophisticated strategies, experimental studies underline that some sets of generic data – e.g. the analysis of the consumer’s digital footprint (that is, her access and registration to websites) – are already sufficient to significantly increase the creditworthiness assessment’s predictive power. Ultimately, by processing these data, credit scoring algorithms are able to develop and identify previously unknown meta-variables, i.e. sets of decisions that can be traced to specific aspects of the applicant’s personality and consumption attitude. Based on these elements, the reliability of the quantitative information provided by the consumer can be re-evaluated, and the conditions for granting or denying a loan determined.
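The way soft “digital footprint” signals can refine an estimate built on hard data may be sketched as follows. The footprint variables, weights, and intercept are entirely hypothetical and only serve to show the mechanism: soft features shift an otherwise hard-data-based probability, which is the sense in which they are complementary rather than substitutive.

```python
# Illustrative sketch (all features and weights hypothetical): digital
# footprint signals used alongside hard data in a single logistic estimate.
import math

HARD_WEIGHTS = {"on_time_payment_rate": 2.5, "debt_to_income": -1.8}
SOFT_WEIGHTS = {
    "completed_profile": 0.4,       # e.g. filled every registration field
    "read_terms_fully": 0.6,        # e.g. scrolled through terms & conditions
    "paid_email_domain": 0.3,       # e.g. non-free email provider
}

def repayment_probability(hard, soft=None):
    """Estimated repayment probability from hard data, optionally refined
    with digital-footprint variables."""
    z = -1.0  # hypothetical intercept
    z += sum(HARD_WEIGHTS[k] * v for k, v in hard.items())
    if soft:
        z += sum(SOFT_WEIGHTS[k] * v for k, v in soft.items())
    return 1.0 / (1.0 + math.exp(-z))

hard = {"on_time_payment_rate": 0.9, "debt_to_income": 0.5}
p_hard = repayment_probability(hard)
p_both = repayment_probability(hard, {"completed_profile": 1.0,
                                      "read_terms_fully": 1.0,
                                      "paid_email_domain": 0.0})
```

In a real system the weights would be learned from repayment data rather than fixed by hand; the sketch only shows how the two data layers are combined in one estimate.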
5. According to the vast majority of scholars who have analysed the use of innovative data for credit scoring, it is highly unlikely that purely soft-data-based algorithms will become dominant on the market and entirely replace hard, quantitative data: soft data will, most reasonably, provide complementary information to be elaborated in conjunction with traditional elements.
Yet, even in such a correlative fashion, these elements are going to radically impact the dynamics of access to credit; they therefore call for a normative effort to assess their impact on the existing regulatory framework and on financial intermediaries’ operative protocols.
Scoring algorithms based on soft data (as stand-alone or combined resources) entail opportunities and risks for the credit market, from both micro- and macroeconomic perspectives.
In addition to the benefits already suggested in the previous sections of this work, the use of soft data for credit scoring purposes can improve the stability of the financial system overall, offering additional information to effectively prevent and react to consumers’ indebtedness; at the same time, soft data are significant in promoting wider access to credit and financial inclusion.
The availability of environmental and behavioural data – as long as it proves effective in ensuring a high predictability of insolvency conditions – is useful to tailor offering conditions for loans and investments, and to allocate the risk of default efficiently amongst applicants, therefore reducing the likelihood of systemic crises; all these aspects are particularly significant in light of the debate on product governance in the financial market.
Soft data might play a pivotal role in reducing existing entry barriers for private agents in the credit market as well: by indirectly tackling the information oligopoly that institutional credit operators built over time, alternative assessment methodologies reduce concentration in the credit market – and its subsequent costs for consumers – allowing new P2P lending platforms to offer credit to previously excluded groups.
The utilization of algorithms that are able to discriminate amongst clients and predict insolvency with significant precision is also likely to result in a reduction of transaction costs for credit operators and, therefore, in more favourable conditions for (virtuous) consumers, considering both the an of a loan – i.e. the ability to evaluate the creditworthiness of “transparent” individuals – and its quomodo.
As a counterbalance to these opportunities, the unregulated use of scoring algorithms exploiting soft data entails significant risks as well.
Firstly, innovative credit scoring systems raise significant questions in terms of the (potential absence of) transparency in the creditworthiness evaluation. By using data that are not directly connected to an individual’s financial sphere as proxies for the assessment, they entail the risk of making the whole operation “opaque” for applicants, providing a mere “acceptance or refusal” answer to their request. This problem, on more general grounds, exists for every form of creditworthiness assessment. Yet, the proliferation of big data and the increasing availability of behavioural and psychological information further exacerbate the issue: in traditional systems, the use of data related to consumers’ properties and financial activity (especially when the weighting rate is made public) provides a general understanding of the dynamics behind the elaboration of a score. On the contrary, the introduction of multiple new variables and the exploitation of algorithmic training techniques based on deep learning protocols and non-linear neural networks offers little or no clarification to the applicants, who are merely exposed to the outcome of the elaboration.
In such cases, the transparency principle is undermined both in its substantive and procedural function: if consumers are radically unable to understand how a score is determined, as a necessary consequence they have no opportunity to challenge an outcome that they deem discriminatory or unfair.
The lack of transparency in automated decision-making is also problematic in terms of monitoring the significant variables used for the assessment, in order to avoid disparate impact amongst applicants. Many elements that are used for profiling purposes (e.g. geographical and demographic data, or information on interactions or social media activities) can operate as indirect proxies for sensitive data, which shall not constitute the basis for the decision (e.g. religious opinion, race, etc.). Along with this aspect, there is the risk of biased algorithms: erroneous scoring might be determined by a biased dataset or by the imposition of biased directives in the algorithm’s design, be the outcome of unsupervised training, or even emerge from the machine’s progressive interactions with users. Considering all these potential causes and variables, it is extremely arduous (even for software developers) to reverse-engineer the elaboration and identify the problem that caused the unfair outcome, and this is even more true for a public authority or an individual consumer subject to the decision.
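Even when the inner workings of the model cannot be reverse-engineered, disparate impact can be monitored on its outcomes. One simple check, borrowed from the “four-fifths rule” used in US disparate-impact analysis, compares approval rates across groups; the group labels and outcomes below are invented for illustration.

```python
# Illustrative outcome-level disparate-impact check ("four-fifths rule"):
# a group's approval rate should be at least 80% of the best-treated
# group's approval rate; groups falling below that ratio are flagged.
def approval_rates(decisions):
    """decisions: {group_label: list of booleans (loan granted or not)}."""
    return {g: sum(d) / len(d) for g, d in decisions.items()}

def disparate_impact(decisions, threshold=0.8):
    """Return the groups whose approval rate falls below `threshold` times
    the highest group approval rate."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

# Hypothetical outcomes for two groups of applicants.
outcomes = {
    "group_a": [True, True, True, False],    # 75% approved
    "group_b": [True, False, False, False],  # 25% approved
}
flagged = disparate_impact(outcomes)
```

Such output auditing treats the scoring algorithm as a black box, which is precisely why it remains available to supervisory authorities even where the elaboration itself cannot be inspected; it does not, however, identify which proxy variable caused the disparity.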
Persistent verification of the quality of the dataset, and of the appropriateness of the elaboration protocol, is significant also in order to ensure that – given the reticence (or even the inability) of professional operators to disclose the characteristics and methodologies of their assessments – using too many data does not hinder the rigor of the system, leading to systematic errors. Despite the (abovementioned) idea that soft data increase the algorithm’s precision, there is currently no standardized procedure allowing for the control of their robustness. Furthermore, since soft data are usually purchased by credit operators from third parties (e.g. the social media platform), it is difficult for them to directly appreciate the quality of an externally provided dataset.
In the absence of stringent legal obligations regarding the characteristics that soft data must have, and how they shall be evaluated, the major predictive power of algorithms using qualitative information could be counterbalanced by the inner uncertainty in the quality of the information itself. It should be considered, as well, that soft data are often determined and created by the same subjects who will later apply for credit (e.g. by making declarations on social networks): an inherent risk of manipulation therefore exists and should be taken into appropriate consideration.
Lastly, the risk of potential unfair exploitation of credit scoring algorithms by private operators should be considered: since there is currently no substantive control system to ensure fairness in the software elaboration, developers might use these tools (not to offer better conditions to the best applicants, but rather) to identify those consumers who show a tendency towards major expenditure and indebtedness. Such conduct would contrast with the responsible lending principle and undermine the robustness of financial markets in general.
6. In order to evaluate whether the existing regulatory framework is appropriate to address the challenges that the use of soft-data credit scoring algorithms raises for consumer protection and market stability, an overview of the main European law on creditworthiness assessment and its general provisions is opportune.
The most recent document on the issue is the 2018 European Central Bank Guide to assessments of fintech credit institution licence applications: according to the Guide (when conducting the suitability assessment of the governance structure of the credit institution), national banking authorities and the European Central Bank are required to verify the feasibility of the applicant’s credit-scoring model, considering both in-house credit-scoring models and models using data to validate credit scores obtained from third-party providers.
Note that the Guide constitutes a soft law initiative, and this is consistent with the general trend of the EU institutions in the regulation of Fintech, which is to wait for the phenomenon to further develop before enacting binding acts. Furthermore, the Guide affects only a minor part of the actors in the credit market: according to the Guide, rules on Fintech banks apply only to credit institutions as defined in Article 4(1)(1) of the Capital Requirements Regulation (CRR).
In the absence of specific rules, the main binding provisions regulating algorithmic credit scoring are to be found in Directive 2008/48/EC on credit agreements for consumers and in Directive 2014/17/EU on credit agreements for consumers relating to residential immovable property. According to Art. 8(1) of Directive 2008/48/EC, before the conclusion of a credit agreement, “Member States shall ensure that […] the creditor assesses the consumer’s creditworthiness on the basis of sufficient information, where appropriate obtained from the consumer and, where necessary, on the basis of a consultation of the relevant database”. If the credit agreement relates to immovable property, Art. 18 of Directive 2014/17/EU mandates similar obligations. As for the main differences between the two provisions, Art. 18 qualifies the creditworthiness assessment as “thorough”; in addition, Art. 18 explicitly underlines that “the creditor only makes the credit available to the consumer where the result of the creditworthiness assessment indicates that the obligations resulting from the credit agreement are likely to be met in the manner required under that agreement”, therefore creating an ontological relation between the favourable outcome of the creditworthiness assessment and the provision of the loan. This aspect is not present in Directive 2008/48/EC.
None of these provisions explicitly refers to the characteristics that algorithmic methodologies for the creditworthiness assessment should have, nor do they offer clear indications on the quality of the data. These aspects of the regulation have not been addressed by the follow-up activity of the European Banking Authority either, since the most recent Guidelines on creditworthiness assessment merely identify some factors (e.g. servicing obligations; evidence of any missed payments; relevant taxes and insurance) to be kept in consideration throughout the assessment.
Art. 9 of Directive 2008/48/EC, regulating “database access”, might offer some indications: the main goal of the rule is to promote non-discriminatory access to cross-border databases by EU creditors; yet, Art. 9(2) states that “if the credit application is rejected on the basis of consultation of a database, the creditor shall inform the consumer immediately and without charge of the result of such consultation and of the particulars of the database consulted”. By embracing an evolutionary interpretation of Art. 9(2), it might then be possible to encompass the features of big and soft data sets within the notion of “particulars of the database”, setting the ground for stringent control over the characteristics of the algorithmic elaboration.
Despite these (indirect) inferences, though, the rules on the creditworthiness assessment provide neither clear indications nor requirements for conducting the scoring activity, and therefore lack significant prescriptive force.
Besides the loans and mortgages regulation, interesting elements can be inferred from the recent General Data Protection Regulation (GDPR): Art. 22(1) GDPR regulates automated individual decision-making, providing that “the data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”. This provision has been one of the most debated amongst scholars and professionals, given the uncertainty in establishing the proper level of supervision for the automated processing to be lawfully operated. Furthermore, the question of regulating automated decision-making has been heightened by the recent indications provided by the High-Level Expert Group on Artificial Intelligence created by the European Commission: in its recent Ethics Guidelines for Trustworthy AI, the Group defended the idea that human agency and oversight represent two fundamental principles that should guide the implementation of artificial intelligence technologies. With regard to human agency, it was clarified that “users should be able to make informed autonomous decisions regarding AI systems” and that “they should be given the knowledge and tools to comprehend and interact with AI systems to a satisfactory degree and, where possible, be enabled to reasonably self-assess or challenge the system”. As for the oversight principle, the Guidelines state that “oversight may be achieved through governance mechanisms such as a human-in-the-loop (HITL), human-on-the-loop (HOTL), or human-in-command (HIC) approach. HITL refers to the capability for human intervention in every decision cycle of the system, which in many cases is neither possible nor desirable. HOTL refers to the capability for human intervention during the design cycle of the system and monitoring the system’s operation.
HIC refers to the capability to oversee the overall activity of the AI system (including its broader economic, societal, legal and ethical impact) and the ability to decide when and how to use the system in any particular situation. This can include the decision not to use an AI system in a particular situation, to establish levels of human discretion during the use of the system, or to ensure the ability to override a decision made by a system”.
In such an uncertain framework, a comparative overview of the main relevant provisions of U.S. legislation does not provide clear solutions either. The debate over the opportunity to permit automated credit scoring systems in the United States has been vigorous, especially considering the high level of individual debt and the oligopolistic nature of the American credit market. In addition, it is currently unclear how emerging credit scoring systems should operate in accordance with American regulation, and whether the elaboration of aggregated data by means of indirect proxies can be regulated under existing law.
Under the Fair Credit Reporting Act, consumer-reporting agencies are required to “adopt reasonable procedures for meeting the needs of commerce for consumer credit, personnel, insurance, and other information in a manner which is fair and equitable to the consumer, with regard to the confidentiality, accuracy, relevancy, and proper utilization of such information”; furthermore, they – as well as any potential third party involved in the acquisition, furnishing, or processing of information used in the assessment reports – shall retain negative information about an applicant’s solvency for a maximum of seven years. Upon these foundations, the Equal Credit Opportunity Act introduced an express prohibition of discrimination on the basis of the applicant’s race, color, religion, national origin, sex or marital status, or age. Companies operating in consumer credit scoring are also subject to the obligations set in the Gramm-Leach-Bliley Act and to the periodic reporting obligations arising from the Fair and Accurate Credit Transactions Act.
It should be noted that, after the Equifax scandal, the Consumer Financial Protection Bureau inaugurated a process of rethinking and redefining the obligations concerning the use and acquisition of data for credit reporting and credit scoring: in light of a renewed interest in improving the three areas of (a) data accuracy, (b) dispute handling and resolution between consumers and credit institutions, and (c) general reporting, integrations and amendments were introduced through the Economic Growth, Regulatory Relief, and Consumer Protection Act in 2018.
Lastly, in March 2019 the reform bill Fair Lending for All Act was presented to the House of Representatives: amongst its main innovations, the Bill adds specifications to the existing classes of data in the Equal Credit Opportunity Act and identifies new types of data (unknown to previous legislation), such as those related to the applicant’s geographical location.
Despite these efforts, both the US and the European frameworks lack a systematic, critical overview of the phenomenon of automated credit scoring, and do not tackle its most controversial implications – e.g. the use of indirect proxies or aggregate data for the individual assessment. With regard to this aspect, identifying types of data or information to be qualified as protected looks like a flawed solution, without any substantive effect in terms of consumer protection. This is particularly significant in light of one consideration: in the classifying society, the use of personal identification, or of the distinction between ordinary and “sensitive” data, as the main criterion to impose diligence obligations is bound to disappear, since big data-based algorithms operate their inferences on clusters of consumers’ information, and therefore do not need to relate the datum to the individual person it refers to.
7. In order to manage the risks that the emerging methodologies for credit scoring create for consumer protection while, at the same time, preserving the incentives and potentials of these technologies in the credit market, some considerations are necessary.
First and foremost, a historical analysis of the credit rating phenomenon reveals that the use of soft data for the creditworthiness assessment has far deeper roots than the Fintech phenomenon, being even older than the normative concept of credit scoring itself.
Consumer credit began to spread in a structured form in the late Middle Ages, and the loan system was based on local monitoring conducted within each community: creditors obtained information about the debtor’s diligence and reputation from her family or from other members of the village. Lenders looked for data regarding her personal reliability, which was considered more significant than her economic standing: in other terms, qualitative and soft data had been used long before the econometric systems of data elaboration developed in the credit sector.
We might deduce that the main critical aspect in the development of new methodologies for assessing consumers’ creditworthiness is not strictly related to the use of applicants’ personal, behavioural, or environmental data (as long as this information is not used for deliberately discriminatory purposes): what is relevant is that in the digital environment the amount of data available to creditors has significantly increased. Information is elaborated in massive and aggregate form, and then circulates amongst sector operators and software developers. These elements are at the core of the difficulties consumers face in understanding which information is relevant for their solvency and, therefore, in drawing causal inferences regarding the outcome of the assessment in order to challenge it – if deemed unfair – or to change their conduct and improve their rating in the future.
The problem is, once again, intertwined with the scope of the transparency principle. Yet, any regulatory proposal should consider that the risk is technology- rather than quality-related: it does not arise from the characteristics of the data, but from their modes of elaboration and aggregation.
Along this line, it has been empirically observed that the absence of transparency in the credit sector is the main driver of consumers’ distrust towards retail credit. This is particularly acute in countries where the population exhibits low levels of financial literacy, and represents a major obstacle to the development of the market economy: even an extremely efficient credit system cannot properly operate if consumers are not willing to request loans in the first place.
Lastly, in developing a policy proposal to regulate the phenomenon, similar regulatory experiences should be taken into consideration, in order to verify whether a comparative approach might lead to significant improvements of the system.
In embracing this perspective, we considered the similarities and differences between the activity of consumer rating operators (agencies, institutional entities, etc.) and credit rating agencies (CRAs): the latter assign credit ratings, which measure a debtor’s ability to pay back debt by making timely principal and interest payments, as well as her general likelihood of default. CRAs play a pivotal part in the allocation of monetary assets, and ratings have a significant impact on public operators and governments’ policies.
The role that CRAs played in the Lehman Brothers collapse leading to the 2008 financial crisis – and, earlier, in the 2001 Enron case – pushed regulators in Europe towards the adoption of innovative regulations in order to hold CRAs responsible for erroneous ratings. Before 2009, the main set of obligations regulating CRAs’ conduct was found in the International Organization of Securities Commissions (IOSCO) Statement of Principles Regarding the Activities of Credit Rating Agencies; after the financial crisis, the European institutions addressed the topic by enacting the so-called First Regulation, Regulation (EC) No 1060/2009 on credit rating agencies. The Regulation was later followed by two amending Regulations, in 2011 (the so-called First Reform) and 2013 (the so-called Second Reform).
These modifications shift the balance in the allocation of rights and obligations of CRAs by embracing a precautionary perspective: whereas, in the past, CRAs mostly relied on self-regulation and institutional ex post supervision, the new strand of regulation stresses that processing and assessment activities should be conducted according to the principles of quality, transparency, and integrity of the data. Furthermore, a constant exchange of information between CRAs and supervisory entities is deemed essential to ensure the responsible elaboration of ratings.
CRAs’ rating activity and credit institutions’ creditworthiness assessment are similar from both an operative and a substantive perspective: both entities assess the robustness and reliability of market agents in order to predict their solvency and provide information to potential investors, and both are considered essential for the financial market to properly operate.
Relevant differences are present as well: the most intuitive one pertains to the nature of those who are subject to evaluation and assessment (companies and their financial products for CRAs, individual retail consumers for credit institutions). In addition, the specific characteristics of the different subjects have an impact on the methodologies that market operators exploit, and on the weight that qualitative and quantitative data have in the rating – e.g. it is traditionally defended that behavioural and environmental data have less significance than quantitative ones in CRAs’ assessment.
Furthermore, some regulatory concerns in the governance of CRAs are not equally relevant – or not present at all – when consumer credit institutions are considered: for example, conflicts of interest are a fundamental concern for CRAs, while they are less prominent when consumer scoring is considered.
Still, these two types of entities show common functional features: they both aim at promoting sustainable equilibria between the supply and demand of monetary resources, and they both do so by providing assessments that – due to the high level of reliability of their issuers – can reduce transaction costs and allow for the rationalization and efficient allocation of assets. In both cases, this reliability is supposed to be grounded in the robustness and transparency of the criteria and protocols used for the elaboration of the ratings: by abiding by these principles, these entities qualify themselves as worthy of investors’ trust, and as essential operators to promote economic initiative, savings, and the responsible investment that is instrumental to general social welfare.
8. Building on the abovementioned considerations, we can conclusively hypothesize a normative model to regulate consumer-scoring activities based on algorithms exploiting big and soft data.
What I suggest, in particular, is to develop a regulatory proposal based on a neutral model (virtually applicable to any scoring activity) compliant with the principles of reliability, fairness, and transparency. The matrix of obligations for this model shall be developed starting from the existing regulation of rating agencies.
It is, first and foremost, necessary to evaluate which aspects of the current CRA regulation could (and should) be adapted to (and adopted in) the consumer scoring activity. In particular, some provisions of Regulation 1060/2009 – in its current structure, as drafted after the Second Reform – are of particular interest.
The first provision is Art. 14, according to which each company willing to obtain the status of credit rating agency shall apply to the Committee of European Securities Regulators (CESR) for registration: the application shall include information regarding the company and its activity, and approval automatically makes the agency subject to the whole Regulation.
Such a duty is not unknown to the financial sector, and a similar provision already applies to Fintech banks, even though the subjective qualification of Fintech banks remains a major vulnus of the regulatory framework in the field. In our hypothesis, on the contrary, the rules would apply inter alia to non-Fintech banks and to P2P lending platforms, which currently operate in an uncertain regulatory condition: the tentative regulation would, in fact, focus on the activity (creditworthiness assessment) rather than on the subjective characteristics of the operator (Fintech bank or otherwise).
A second significant provision is Art. 10(2) on “Disclosure and presentation of credit ratings”: according to the rule, CRAs are required to indicate, when a rating is issued, “(a) all substantially material sources, including the rated entity or, where appropriate, a related third party, which were used to prepare the credit rating […]; (b) the principal methodology or version of methodology that was used in determining the rating […], with a reference to its comprehensive description […]; (c) the meaning of each rating category, the definition of default or recovery and any appropriate risk warning, including a sensitivity analysis of the relevant key rating assumptions, such as mathematical or correlation assumptions […]; (d) the date at which the credit rating was first released for distribution and when it was last updated”. Furthermore, any agency is required to inform the rated entity, at least 12 hours before publication, of the credit rating and of the principal grounds on which the rating is based, in order to give the entity the opportunity to draw the agency’s attention to any factual errors.
In addition to the information that credit rating agencies must provide when a rating is issued, according to Art. 12 of the Regulation, CRAs shall publish an annual Transparency Report illustrating the supervision mechanisms implemented to ensure and safeguard the rating quality.
Throughout this process, the European Securities and Markets Authority (ESMA) has the power to verify that methodologies used by credit rating agencies are rigorous, systematic, continuous and subject to validation based on historical experience, including back-testing.
Lastly, according to Art. 11a of the Regulation, ESMA shall publish the individual credit ratings on a website (the European Rating Platform): through such a centralized platform, any investor, issuer, or financial market operator can compare the ratings issued for a relevant company and evaluate its historical performance.
By adapting these rules to the specific characteristics of consumer credit scoring, it seems possible to lay the ground for a first, general, operational framework. These rules would result in: a) the creation of a public register; b) the identification of an authority with a general supervision power (not limited, therefore, to institutional banks); c) the introduction of periodic disclosure and communication duties on credit operators towards consumers; and d) the creation of an open platform with information related to the rating and (in the specific declination we hypothesized) data regarding the protocol used to determine consumer scores.
The introduction of these new rules, though, constitutes just one part of the proposal: it is equally significant to look for technological solutions that would make these obligations not only legally, but also practically, binding.
Amongst existing tools, an effective solution to promote both reliability and transparency in the setting up of the platform, and in its interaction with the consumer, is a semi-permissioned blockchain system (or, alternatively, a centralized database) established by the supervisory authority: the platform would allow, on the one hand, the introduction of score-related data by supervised entities and, on the other, constant and effective monitoring by the public administration.
Within a tentative regulatory framework for credit scoring activities, a blockchain platform could be structured according to the following three levels of permission.
– Level 1: general management and supervision of the platform, to be entrusted to the public supervisory authority.
– Level 2: granted to credit operators (banking institutions or P2P platforms). After approval of their application, credit operators are assigned a Legal Entity Identifier (LEI) code. By using the LEI code, they can access the platform and enter all required information on the typology and characteristics of the data used for the issuance of creditworthiness ratings. These data are not visible to competitors, in order to safeguard competition and operators’ confidentiality. In particular, credit scoring entities will be asked to provide information regarding: (a) classes and categories of data gathered pertaining to consumers, including, but not limited to, details of existing credit accounts, credit status and activity, salary and employment data, retail purchase data, location data, and social media data; (b) the types of sources from which each data category is obtained and the collection methods used to gather such data, including the collection methods used by any third-party data vendors; and (c) a complete list of all individual data points and combinations of data points that a credit score or credit assessment tool treats as significant. Data are then grouped in clusters based on their source – e.g. data obtained from social media, apps, etc. – and their relative weight in the assessment, in order to draft simplified outlines for consumers to consult.
– Level 3: access to and monitoring of the information on the platform (permission for consumers). Consumers are allowed to inspect the categories used by credit operators: when an assessment regarding their creditworthiness is made, they will be able to evaluate which datasets have been considered, use this information when making their choices, and assess the consistency between the calculation algorithm and their score.
By implementing such a system, the platform would be updated in real time, and the supervisory authority could continuously monitor the conduct of credit operators. Given the different levels of permission, the public authority could access and investigate the entirety of the information provided by market operators on their methodologies and data, whereas consumers would be allowed to examine the categories of data and – eventually – their relative weight. This design allows for a sufficient level of data sharing without prejudicing credit operators’ proprietary interests, thereby safeguarding both consumers and competition.
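To make the three-tier permission design concrete, the access rules can be rendered as a minimal sketch. The following Python fragment is purely illustrative: the class names, fields, and access logic are our own assumptions about how such a platform could be modelled, not part of any existing regulation or system. It shows how a single disclosure record, keyed by the operator’s LEI code, could expose different views to the authority (Level 1), the submitting operator (Level 2), and consumers (Level 3).

```python
from dataclasses import dataclass
from enum import Enum


class Role(Enum):
    """Hypothetical permission levels mirroring the three-tier design."""
    AUTHORITY = 1   # Level 1: supervisory authority, full access
    OPERATOR = 2    # Level 2: credit operator, submits its own disclosures
    CONSUMER = 3    # Level 3: read-only access to simplified outlines


@dataclass
class Disclosure:
    """One operator's submission, keyed by its LEI code (all fields assumed)."""
    lei: str          # Legal Entity Identifier of the operator
    categories: list  # (a) classes and categories of consumer data used
    sources: list     # (b) sources and collection methods
    weights: dict     # relative weight of each data cluster
    methodology: str  # full scoring protocol, visible to the authority only


class ScoringPlatform:
    """Illustrative registry enforcing the three permission levels."""

    def __init__(self):
        self._disclosures = {}

    def submit(self, role, disclosure):
        # Only registered operators (Level 2) may write to the platform.
        if role is not Role.OPERATOR:
            raise PermissionError("only registered credit operators may submit")
        self._disclosures[disclosure.lei] = disclosure

    def view(self, role, lei):
        record = self._disclosures[lei]
        if role is Role.AUTHORITY:
            # Level 1: the full record, including the scoring methodology.
            return record
        if role is Role.CONSUMER:
            # Level 3: simplified outline of data clusters and their weights.
            return {"categories": record.categories, "weights": record.weights}
        # Level 2 operators cannot inspect competitors' submissions.
        raise PermissionError("operators may not view other operators' data")
```

On this sketch, the confidentiality constraint discussed above corresponds to the narrowing of the returned view as the permission level decreases; a production system would of course add authentication, audit logging, and the update mechanics of the underlying ledger or database.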
9. Designing the best regulation for consumers’ creditworthiness assessments requires a careful account of multiple, intertwined, and concurring (public and private) interests: on the one hand, access to credit is essential for effective economic citizenship and to incentivise consumption; on the other, if the system is opaque or excessively loose, the stability of institutional operators is undermined, entry barriers for vulnerable groups emerge, and unlawful and unconstitutional effects related to the use of soft and big data by automated algorithms ultimately arise.
In such a context, the joint use of technological tools and legal reforms can stimulate P2P commerce and introduce original operative paradigms in the credit sector; when they operate in accordance with the principle of substantive transparency, these tools are likely to facilitate access to information by consumers and supervisory authorities, without hindering progress in credit-scoring systems. As a consequence, they create the conditions for operators to improve their analytic methodologies in the creditworthiness assessment, reduce transaction costs, promote responsiveness by professional operators and create trust in the retail credit sector.
We hold that a revision of the existing regulatory framework based on: a) the reinterpretation of credit rating agencies’ law and b) the responsible technological integration into the operative praxis of credit scoring agencies, can efficiently mediate amongst the conflicting interests at stake in this area, with the ultimate result of promoting socially responsible lending with personalized conditions for each applicant. In order for this to happen, though, normative argumentation and the reform of legal corpora should converge with the implementation of technological resources, as two essential, complementary aspects of the Fintech phenomenon.
Author: Antonio Davola is Adjunct Professor at Luiss Guido Carli University, Rome
 Relevant data used to form social credit scores range from information concerning citizens’ loan repayment rates to preferences, behavioural data, and personal relationships. The official document for the creation of the SCS system [2014-2020] can be consulted at https://chinacopyrightandmedia.wordpress.com/2014/06/14/planning-outline-for-the-construction-of-a-social-credit-system-2014-2020/.
 A high credit score can facilitate access to credit and loans, while a low one might preclude access to private schools or highly qualified workplaces. See ex multis Meissner M., China’s Social Credit System: A big data-enabled approach to market regulation with broad implications for doing business in China, in Merics China Monitor, 2017, 39; see also The General Office of the Central Committee of the Communist Party of China and the General Office of the State Council issued Opinions concerning Accelerating the Construction of Credit Supervision, Warning and Punishment Mechanisms for Persons Subject to Enforcement for Trust-Breaking, available at http://www.xinhuanet.com/politics/2016-09/25/c_1119620719.htm (in Chinese), 2016. An English version of the document can be consulted in Creemers R., Opinions concerning Accelerating the Construction of Credit Supervision, Warning and Punishment Mechanisms for Persons Subject to Enforcement for Trust-Breaking, in China Copyright and Media, available at: https://chinacopyrightandmedia.wordpress.com/2016/09/25/opinions-concerning-accelerating-the-construction-of-credit-supervision-warning-and-punishment-mechanisms-for-persons-subject-to-enforcement-for-trust-breaking/.
 Liang F. – Das V. – Kostyuk N. – Hussain M.M., Constructing a Data‐Driven Society: China’s Social Credit System as a State Surveillance Infrastructure, in Policy & Internet, 2018, vol. 10, iss. 4, 415.
 See https://www.creditchina.gov.cn/home/index.htm for a list of the municipalities currently adopting the SCS. Cfr. also Kostka G., China’s Social Credit Systems and Public Opinion: Explaining High Levels of Approval, https://www.researchgate.net/publication/326625329_China’s_Social_Credit_Systems_and_Public_Opinion_Explaining_High_Levels_of_Approval, 2018, for an analysis of the general public’s reactions to the SCS in China.
 On this aspect, see critically O’Dwyer R., Algorithms are making the same mistakes assessing credit scores that humans did a century ago, 2018, https://qz.com/1276781/algorithms-are-making-the-same-mistakes-assessing-credit-scores-that-humans-did-a-century-ago/.
 The topic will be critically addressed in the following Sections of this Article. For a preliminary overview, see also Robinson D. – Yu H., Knowing the Score: New Data, Underwriting, and Marketing in the Consumer Credit Marketplace. A Guide for Financial Inclusion Stakeholders, 2014, Ford Foundation, passim; Langley P., Financialization and the Consumer Credit Boom, in Competition & Change, 2008, vol. 12, 133-147.
 See Citron D.K. – Pasquale F., The scored society: Due process for automated prediction, in Washington Law Review, 2014, vol. 89, 1 ss.; Gross K., Expanding the Use of Credit Report and Credit Scores: The Need for Caution and Empiricism, in Twigg-Flesner C.- Parry D.- Howells G.- Nordhausen A. (eds.), The Yearbook of Consumer Law, Aldershot, Ashgate, 2008, 327-336.
 See infra §§4-5.
 See Mansilla-Fernandez, J.M., “Numbers”, in European Economy: Banks, Regulation and the Real Sector, 2017, 3, 31-40; cfr. also Thakor A.V., Fintech and Banking, 2019, https://ssrn.com/abstract=3332550. Recently, European Authorities drafted a plurality of documents stressing the essential role of Fintech in the development of the Digital Single Market: see e.g. European Commission, Communication from the Commission to the European Parliament, the Council, the European Central Bank, the European Economic and Social Committee and the Committee of the Regions. FinTech Action plan: For a more competitive and innovative European financial sector, 8 Mar. 2018, COM(2018) 109 final; European Banking Authority, The EBA’s Fintech Roadmap. Conclusions from Consultation on the EBA’s Approach to Financial Technology (Fintech), 2018, www.eba.europa.com; European Parliament, Draft Report on Fintech: the influence of technology on the future of financial sector, 2017, www.europarl.europa.eu; World Economic Forum, Beyond Fintech: A Pragmatic Assessment of Disruptive Potential in Financial Services, 2017, in www.weforum.org. Recently, see also the Report by the European Securities and Market Authority, FinTech: Regulatory sandboxes and innovation hubs, JC 2018/74, 2019.
 Welltrado, Global Blockchain-Backed Loans Marketplace ICO, White Paper, 2018.
 Lieber R., American Express Kept a (Very) Watchful Eye on Charges, N.Y. Times, 30 Jan 2009, https://www.nytimes.com/2009/01/31/your-money/credit-and-debit-cards/31money.html. See also Hurley M.- Adebayo J., Credit Scoring in the Era of Big Data, in Yale Journal of Law and Technology, 2017, vol. 18, iss. 1, 151.
 For a general overview, see World Bank, Financial Inclusion Overview, 2017, http://www.worldbank.org/en/topic/financialinclusion/overview. See also Ardic, O. P.- Heimann, M.- Mylenko, N., Access to Financial Services and the Financial Inclusion Agenda Around the World: A Cross-Country Analysis with a New Data Set, http://www.elibrary.worldbank.org, 2011; Kabakova, O.- Plaksenkov, E. Analysis of Factors Affecting Financial Inclusion: Ecosystem View, in Journal of Business Research, 2018, 89, 198–205; Corrado G.- Corrado L., Inclusive finance for inclusive growth and development, in Current Opinion in Environmental Sustainability, 2017, 24, 19-22; Sha’ban M.- Girardone C.- Sarkisyan A., Financial Inclusion: Trends and Determinants, in Gualandri E.- Venturelli V.- Sclip A. (eds.), Frontier Topics in Banking. Investigating New Trends and Recent Developments in the Financial Industry, Palgrave, 2019, 119-136.
 See Pagliantini S., Il sovraindebitamento del consumatore. Studio critico sull’esdebitazione, Turin, 2018, 19; World Bank, UFA2020 Overview: Universal Financial Access by 2020, https://www.worldbank.org/en/topic/financialinclusion/brief/achieving-universal-financial-access-by-2020; see also Demirgüç-Kunt A.- Klapper L.- Singer D.- Ansar S.-Hess J., The Global Findex Database 2017: Measuring Financial Inclusion and the Fin-tech Revolution, Washington, World Bank, 2018, passim.
 Karlan D.- Kendall J.- Mann R. – Pande R.- Suri T.- Zinman J., Research and Impacts of Digital Financial Services, in NBER Working Paper 22633, National Bureau of Economic Research, Cambridge, 2016; Demirgüç-Kunt A.- Klapper L.- Singer D., Financial Inclusion and Inclusive Growth: A Review of Recent Empirical Evidence, in Policy Research Working Paper 8040, World Bank, Washington DC, 2017.
 Patwardhan A., Financial Inclusion in the Digital Age, in Handbook of Blockchain, Digital Finance, and Inclusion, 2018, vol. 1, Elsevier, 58.
 Cfr. European Central Bank, The Euro Area Bank Lending Survey. First Quarter of 2019, 2019, 23.
 Directive 2008/48/EC of the European Parliament and of the Council of 23 April 2008 on credit agreements for consumers and repealing Council Directive 87/102/EEC OJ L 133, 22 May 2008.
 Directive 2014/17/EU of the European Parliament and of the Council of 4 February 2014 on credit agreements for consumers relating to residential immovable property and amending Directives 2008/48/EC and 2013/36/EU and Regulation (EU) No 1093/2010 Text with EEA relevance, OJ L 60, 28 Feb. 2014.
 See Directive 2008/48/EC Recital 26.
 See infra §4.
 Berger F.- Gleisner S., The Emergence of Financial Intermediaries in Electronic Markets: The Case of Online P2P Lending, in Business Research, 2009, 2, 1, 39–65; Havrylchyk O.- Verdier M. The Financial Intermediation Role of the P2P Lending Platforms, in Comparative Economic Studies, 2018, 60, 1, 115–130.
 Pottow J.A.E., Private Liability for Reckless Consumer Lending, in University of Illinois Law Review, 2007, 413; Ferretti F., The “Credit Scoring pandemic” and the European Vaccine: Making Sense of EU Data Protection Legislation, in European Journal of Law and Technology, 2009, 3 ff.
 In addition to hard data, as we will see, modern credit scoring algorithms usually implement soft data as well: soft data are qualitative, subjective information – usually acquired by monitoring individuals’ activities and interactions – that, in the past, were difficult to quantify in terms of numeric factors, and subsequently to weigh within the creditworthiness assessment. See ex multis Cornée S., Soft Information and Default Prediction in Cooperative and Social Banks, in Jeod, 2014, vol. 3, iss. 1, 90; also María Liberti J.- Petersen M.A., Information: Hard and Soft, in The Review of Corporate Finance Studies, 2019, vol. 8, iss. 1, 1-41.
 The percentage weight of each element in the overall score is 35% (payment history), 30% (amounts owed), 15% (length of the credit history), 10% (new credit) and 10% (types of credit used). For an analysis of the FICO score functioning and characteristics see Arya S.- Eckel C.- Wichman C., Anatomy of the credit score, in Journal of Economic Behavior & Organization, 2013, 95, 175-185; Poon M., Scorecards as devices for consumer credit: the case of Fair, Isaac & Company Incorporated, in The Sociological Review, 2007, 55, 284-306.
 See Pasquale F., The Black Box Society. The Secret Algorithms That Control Money and Information, Cambridge, 2015.
 See Mendoza I., The Right Not to be Subject to Automated Decisions Based on Profiling, in Synodinou T.E.- Jougleux P.- Markou C.- Prastitou T. (eds.), EU Internet Law. Regulation and Enforcement, Springer, 2017, 89.
 Cfr. Rona Tas A.- Hib S., Consumer and Corporate Credit Ratings and the Subprime Crisis in the U.S., with Some Lessons from Germany, in Micklitz H.W. (ed.), Consumer Loans and the role of the Credit Bureaus in Europe, in EUI Working Papers, RSCAS, 2010, 44, 9.
 KEAR M., Governing homo subprimicus: Beyond financial citizenship, exclusion, and rights, in Antipode, 2013, vol. 45, iss. 4, 926-946.
 Cfr. Turner M.A.- Walker P., Changing The Lending Landscape: Credit Deserts, The Credit Invisible, And Data Gaps In Silicon Valley, 2017, http://www.perc.net/publications/changing-lending-landscape-credit-deserts-credit-invisible-data-gaps-silicon-valley/.
 This issue was experimented both in the US and in the EU market. See Ellis B., Millions without credit scores not so risky after all, 2013, https://money.cnn.com/2013/08/14/pf/credit-scores/index.html; and Pwc, Sizing the UK Near Prime Credit Market, 2016, https://newday.co.uk/media/11895/pwc-near-prime-market-report.pdf.
 See European Banking Authority, Discussion Paper on innovative uses of consumer data by financial institutions (EBA/DP/2016/01), 2016, https://eba.europa.eu/regulation-and-policy/consumer-protection-and-financial-innovation/; also Henry N.- Morris J., Scaling Up Affordable Lending: Inclusive Credit Scoring, in Coventry University Research Centre in Business and Society Research Paper, 2018, 8.
 Patwardhan A., Financial Inclusion in the Digital Age, 63.
 Mattassoglio, F., Innovazione tecnologica e valutazione del merito creditizio dei consumatori. Verso un Social Credit System?, Milano, 2018, 39.
 Floridi L., La quarta rivoluzione. Come l’infosfera sta trasformando il mondo, Milano, 2017, 44.
 Internet Society, The Internet of Things: An Overview. Understanding the Issues and Challenges of a More Connected World, Oct. 2015, https://www.internetsociety.org/resources/doc/2015/iot-overview.
 See ex multis Germani E.- Ferda L., Il wearable computing e gli orizzonti futuri della privacy, in Dir. dell’inf. e dell’inform., 2014, 75. Also Woo J.- Lee J.H., The Limitations of “Information Privacy” in the Network Environment, in Journal of Technology Law and Policy, Pittsburgh School of Law, 2006, 7.
 LexisNexis, Alternative Credit Decision Tools: Auto & Credit Lending. White Paper, 2013, https://risk.lexisnexis.com/insights-resources/white-paper/alternative-credit-decision-tools.
 Stein J.C., Information Production and Capital Allocation: Decentralized vs. Hierarchical Firms, in Journal of Finance, 2002, 57, 1891.
 Hardy Q., Big Data for the Poor, N.Y. Times, 5 Jul. 2012, https://bits.blogs.nytimes.com/2012/07/05/big-data-for-the-poor/.
 Bertrand M.- Kamenica M., Coming Apart? Lives of the rich and poor over time in the United States, in University of Chicago Booth School of Business Working Paper Series, 2018.
 McClanahan A., Bad Credit. The Character of Credit Scoring, in Representations, 2014, vol. 126, iss. 1, 31-57.
 Duarte J.- Siegel S.- Young L., Trust and Credit: The Role of Appearance in Peer-to-Peer Lending, in The Review of Financial Studies, 2012, vol. 25, iss. 8, 2455-2484.
 See Strömbäcka C.- Lind T.- Skagerlund K.- Västfjäll D.- Tinghög G., Does self-control predict financial behavior and financial well-being?, in Journal of Behavioral and Experimental Finance, 2017, 14, 30–38; also Tokunaga H., The use and abuse of consumer credit: Application of psychological theory and research, in Journal of Economic Psychology, 1993, 14, 285-316; see also Hertzberg A.- Liberman A.- Paravisini D., Adverse Selection on Maturity: Evidence from On-line Consumer Credit, 2016, http://web.mit.edu/cjpalmer/www/discussions/Palmer_Sept2015_Adverseselection_Presentation.pdf.
 Berg T.- Burg V.- Gombovic A.- Puri M., On the Rise of FinTechs – Credit Scoring using Digital Footprints, in FDIC – CFR Working Paper Series, 2018, 4, 3.
 Hurley M.- Adebayo J., Credit Scoring in the Era of Big Data, 176.
 Milne A.- Parboteeah P., The Business Models and Economics of Peer-to-Peer Lending, in ECRI Research Report, 2016, 17; PricewaterhouseCoopers, Peer Pressure: How Peer-to-Peer Lending Platforms are Transforming the Consumer Lending Industry. Research report, 2015, http://www.pwc.com; see also Fuster A.- Plosser M.- Schnabl P.- Vickery J., The Role of Technology in Mortgage Lending, in The Review of Financial Studies, 2019, vol. 32, iss. 5, 1854-1899.
 See §2.
 See Troiano V.- Motroni R. (eds.), La MiFID II. Rapporti con la clientela – regole di governance – mercati, Cedam, 2016, 213.
 See Bonini S.- Dell’Acqua A.- Fungo M.- Kysucky V., Credit market concentration, relationship lending and the cost of debt, in International Review of Financial Analysis, 2016, 45, 172–179; also Lemieux C.- Jagtiani J., Fintech lending: Financial Inclusion, Risk Pricing, and Alternative Information, in Federal Reserve Bank of Philadelphia Working Papers, 2017, 17, 19.
 Policy and Economic Research Council, Research Consensus Confirms Benefits of Alternative Data, 2015, http://www.perc.net/wp-content/uploads/2015/03/ResearchConsensus.pdf; Demyanyk Y.- Kolliner D., Peer-to-Peer Lending Is Poised to Grow, 2014, https://www.clevelandfed.org/newsroom-and-events/publications/economic-trends/2014-economic-trends/et-20140814-peer-to-peer-lending-is-poised-to-grow.aspx.
 See European Banking Authority, Consumer Trends Report 2017, https://eba.europa.eu/documents/10180/1720738/Consumer+Trends+Report+2017.pdf.
 Citron D.K. – Pasquale F., The scored society, 10.
 See West D., Neural network credit scoring models, in Computers & Operations Research, 2000, 27, 1131-1152. Recently, see Nazari M., Measuring Credit Risk of Bank Customers Using Artificial Neural Network, in Journal of Management Research, 2013, 5, 2; Munkhdalai L.- Munkhdalai T.- Namsrai O.E.- Lee Y.J.- Ryu K.H., An Empirical Comparison of Machine-Learning Methods on Bank Client Credit Assessments, in Sustainability, 2019, 11, 699.
 It shall be noted that this issue is significant for supervisory authorities as well. See e.g. U.S. Congressional Research Service, Consumer Credit Reporting, Credit Bureaus, Credit Scoring, and Related Policy Issues, 28 Mar. 2019, https://fas.org/sgp/crs/misc/R44125.pdf; also Baer G.- Prakash N., Machine Learning and Consumer Banking: An Appropriate Role for Regulation, http://www.bpi.com, 2019.
 Note that this is a general problem affecting any automated decision-making algorithm, and one also addressed by Art. 22 of the EU General Data Protection Regulation (GDPR). See ex multis Pasquale F., Restoring Transparency to Automated Authority, in Journal On Telecommunications & High Tech Law, 2011, 9, 235–236; Selbst A.- Powles J., Meaningful information and the right to explanation, in International Data Privacy Law, 2017, vol. 7, iss. 4, 233–242; Brkan M., Do algorithms rule the world? Algorithmic decision-making in the framework of the GDPR and beyond, in Terminator or the Jetsons? The Economics and Policy Implications of Artificial Intelligence, Technology Policy Institute, Washington, 2018; Mendoza I.- Bygrave L., The right not to be subject to automated decisions based on profiling, in EU Internet Law, Springer, 2017, 77–98; Almada M., Human Intervention in Automated Decision-Making: Toward the Construction of Contestable Systems, International Conference on Artificial Intelligence and Law (ICAIL 2019), forthcoming and currently available at https://ssrn.com/abstract=3264189; Burrell J., How the Machine “Thinks”: Understanding Opacity in Machine Learning Algorithms, in Big Data and Society, 2016, 3, 1.
 Barocas S.- Selbst A., Big Data’s Disparate Impact, in California Law Review, 2016, vol. 104, 671; Selbst A., Disparate Impact in Big Data Policing, in Georgia Law Review, 2017, 52, 109; Leese M., The New Profiling: Algorithms, Black Boxes, and the Failure of Anti-Discriminatory Safeguards in the European Union, in Security Dialogue, 2014, vol. 45, iss. 5, 494–511.
 In general, on algorithmic bias in computer systems, see Friedman B.- Nissenbaum H., Bias in Computer Systems, in ACM Transactions on Information Systems, 1996, iss. 14, 3; also Pedreschi D.- Ruggieri S.- Turini F., Discrimination-aware data mining, in Proceedings of KDD, 2008. On the specific topic, see Angelini R., Intelligenza Artificiale e governance. Alcune riflessioni di sistema, in Pizzetti F. (ed.), Intelligenza artificiale, protezione dei dati personali e regolazione, Turin, 2018, 295; Kuner C.- Svantesson D.J.B.- Cate F.H.- Lynskey O.- Millard C., Machine learning with personal data: is data protection law smart enough to meet the challenge?, in International Data Privacy Law, 2017, 7, 1; Danks D.- London A., Algorithmic Bias in Autonomous Systems, in Proceedings of the 26th International Joint Conference on Artificial Intelligence, 2017.
 More extensively, Havard C.J., “On The Take”: The Black Box of Credit Scoring and Mortgage Discrimination, in Public Interest Law Journal, 2011, 20, 241; Solove D.J., Data Mining and the Security-Liberty Debate, in University of Chicago Law Review, 2008, 75, 343.
 Zarsky T., Transparent Predictions, in Illinois Law Review, 2013, 1503, 1512; Id. Transparency in Data Mining: From Theory to Practice, in Custers B.- Calders T.- Schermer B.- Zarsky T. (eds.), Discrimination and Privacy in the Information Society. Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 3. Springer, 2013, 301-324.
 Citron D.K., Technological Due Process, in Washington University Law Review, 2008, 85, 1249.
 Libertini M., La tutela della libertà di scelta del consumatore e i prodotti finanziari, in Grillo (ed.), Mercati finanziari e protezione del consumatore, Milan, 2010, 21.
 Guide, 9.
 See Financial Stability Board, Artificial intelligence and machine learning in financial services, 2017, http://www.fsb.org/wpcontent/uploads/P011117.pdf; European Banking Authority, EBA Report on the Prudential Risks and Opportunities Arising for Institutions from FinTech, 2018, https://eba.europa.eu/documents/10180/2270909/Report+on+prudential+risks+and+opportunities+arising+for+institutions+from+FinTech.pdf; ESMA, EBA and EIOPA, Joint Committee Discussion Paper on the Use of Big Data by Financial Institutions, 2018, https://www.esma.europa.eu/press-news/consultations/joint-committee-discussion-paper-use-big-data-financial-institutions.
 As a consequence, many operators, such as P2P lending platforms operating as mere intermediaries, are not subject to the Guide.
 European Banking Authority, Guidelines on creditworthiness assessment, EBA/GL/2015/11, 2015, https://eba.europa.eu/regulation-and-policy/consumer-protection-and-financial-innovation/guidelines-on-creditworthiness-assessment#, 5.
 Ex multis see Wachter S.- Mittelstadt B.- Floridi L., Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation, in International Data Privacy Law, 2017; Malgieri G.- Comandé G., Why a Right to Legibility of Automated Decision-Making Exists in the General Data Protection Regulation, in International Data Privacy Law, 2017, vol. 7, iss. 3; Wachter S.- Mittelstadt B.- Russell C., Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR, in Harvard Journal of Law & Technology, 2018, vol. 31, iss. 2; Kamarinou D.- Millard C.- Singh J., Machine Learning with Personal Data, in Queen Mary School of Law Legal Studies, Research Paper No. 247, 2016, https://ssrn.com/abstract=2865811; Brkan M., Do Algorithms Rule the World? Algorithmic Decision-Making in the Framework of the GDPR and Beyond, in International Journal of Law and Information Technology, 2019, https://ssrn.com/abstract=3124901; Zarsky T., The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making, in Science, Technology, & Human Values, 2016, 41, 118-132; Mendoza I., The Right Not to be Subject to Automated Decisions Based on Profiling, in Synodinou T.E.- Jougleux P.- Markou C.- Prastitou T. (eds.), EU Internet Law. Regulation and Enforcement, Springer, 2017.
 See Weiss E., The Equifax Data Breach: An Overview and Issues for Congress, in Congressional Research Service Insight IN10792; Jentzsch N., The Economics and Regulation of Financial Privacy: An International Comparison of Credit Reporting Systems, Physica-Verlag, 2006, 88.
 See 15 U.S.C., §1681(b).
 Consumer Financial Protection Bureau, How long does negative information remain on my credit report?, 4 Aug. 2016, https://www.consumerfinance.gov/ask-cfpb/how-long-does-negative-information-remain-on-my-credit-report-en-323/.
 See 15 U.S.C., §1691.
 According to § 501(b), they are required, in particular, to establish appropriate standards for the financial institutions subject to their jurisdiction relating to administrative, technical, and physical safeguards: (1) to insure the security and confidentiality of customer records and information; (2) to protect against any anticipated threats or hazards to the security or integrity of such records; and (3) to protect against unauthorized access to or use of such records or information which could result in substantial harm or inconvenience to any customer.
 Pub. L. No. 108-159, 117 Stat. 1952.
 See Hao K., The complete guide to the Equifax breach, in Quartz, 2017, https://qz.com/1079253/the-complete-guide-to-the-equifax-breach/.
 Consumer Financial Protection Bureau, Supervisory Highlights Consumer Reporting Special Edition, 2017, 14, https://files.consumerfinance.gov/f/documents/201703_cfpb_Supervisory-Highlights-Consumer-Reporting-Special-Edition.pdf.
 P.L. 115-174.
 The bill can be examined at https://www.congress.gov/bill/116th-congress/house-bill/166.
 E.g. the addition of sexual orientation and gender identity as classes protected against discrimination with respect to credit transactions.
 H.R. 166, §3. “Prohibition On Credit Discrimination”.
 Comandé G., Regulating Algorithm’s Regulation? First Ethico-Legal Principles, Problems, and Opportunities of Algorithms, in Cerquitelli T.- Quercia D.- Pasquale F. (eds.), Transparent Data Mining for Big and Small Data, Springer, 2017, 182; also Purtova N., The law of everything. Broad concept of personal data and future of EU data protection law, in Law, Innovation and Technology, 2018, vol. 10, iss. 1, 40-81; Schwartz P.- Solove D., The PII Problem: Privacy and a New Concept of Personally Identifiable Information, in N.Y.U. Law Review, 2011, 86, 1814; Zuiderveen Borgesius F., Singling Out People Without Knowing Their Names – Behavioural Targeting, Pseudonymous Data, and the new Data Protection Regulation, in Computer Law & Security Review, 2016, 32, 256–271.
 See Calder L., Financing the American Dream: A Cultural History of Consumer Credit, Princeton University Press, 1999. Also Lipartito K., The Narrative and the Algorithm: Genres of Credit Reporting from the Nineteenth Century to Today, 2011, https://ssrn.com/abstract=1736283.
 See Salvatore A.- Franceschi F.- Neri A.- Zanichelli F., Measuring the financial literacy of the adult population: the experience of Banca d’Italia, in Questioni di Economia e Finanza della Banca d’Italia, 2018, 435.
 See Reifner U., Responsible Credit in European Law, in The Italian Law Journal, 2018, vol. 4, n. 2, 432.
 Chiu I., Regulating Credit Rating Agencies in the EU: In Search of a Coherent Regulatory Regime, in European Business Law Review, 2014, vol. 25, iss. 2, 269–294.
 See Coffee J., Understanding Enron: It’s About the Gatekeepers, Stupid, in Columbia Law School Working Paper No 207, 2002.
 IOSCO, Technical Committee, Statement of principles regarding the activities of credit rating agencies, 2003, http://www.iosco.org/library/pubdocs/pdf/IOSCOPD151.pdf. In 2015, a new version of the IOSCO code of conduct was enacted: see IOSCO Code of Conduct Fundamentals for Credit Rating Agencies, Final Report, 2015, https://www.iosco.org/library/pubdocs/pdf/IOSCOPD482.pdf.
 Regulation (EC) No 1060/2009 of the European Parliament and of the Council of 16 September 2009 on credit rating agencies, OJ L 302, 17 Nov. 2009, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=uriserv:OJ.L_.2009.302.01.0001.01.ENG.
 Regulation (EU) No 513/2011 of the European Parliament and of the Council of 11 May 2011 amending Regulation (EC) No 1060/2009 on credit rating agencies, OJ L 145, 31 May 2011, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32011R0513.
 Regulation (EU) No 462/2013 of the European Parliament and of the Council of 21 May 2013 amending Regulation (EC) No 1060/2009 on credit rating agencies, OJ L 146, 31 May 2013, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A32013R0462.
 A general overview of the reform of the CRA regulation is provided in Deipenbrock G., Trying or failing better next time? – The European legal framework for credit rating agencies after its second reform, in European Business Law Review, 2014, vol. 25, iss. 2, 207-225.
 Developing this path further, some legal scholars have argued that these entities should be considered “semi-public” in terms of their obligations towards the general public, in light of the general trust that communities and governments place in the quality and reliability of their assessments. The topic has been developed, more generally, with regard to social media platforms: see Balkin J., Information Fiduciaries and the First Amendment, in UC Davis Law Review, 2016, vol. 49, iss. 4, 1183 et seq.; Id., Free Speech in the Algorithmic Society: Big Data, Private Governance, and New School Speech Regulation, ibidem, 2018, 51, 1151. Critically, on this theory, Khan L.- Pozen D.E., A Skeptical View of Information Fiduciaries, in Harvard Law Review, 2019, vol. 133.
 Recently, though, this perspective has been critically reexamined: see Tor A., The Market, the Firm, and Behavioral Antitrust, in Zamir E.- Teichman D. (eds.), The Oxford Handbook of Behavioral Economics and the Law, Oxford University Press, 2015, 532; Armstrong M.- Huck S., Behavioral economics as applied to firms: A primer, in Competition Policy International, 2010, 6, 3–45; Engel C., The behaviour of corporate actors: How much can we learn from the experimental literature?, in Journal of Institutional Economics, 2010, 6, 445–475.
 Crockett A.- Harris T.- Mishkin F.- White E., Conflicts of Interest in the Financial Services Industry: What Should We Do About Them?, London, 2003, 41 et seq.; also Bai L., On Regulating Conflict of Interests in the Credit Rating Industry, in New York University Journal of Legislation and Public Policy, 2010, 13.
 According to Annex II of Regulation 1060/2009, the application shall include: 1. Full name of the credit rating agency, address of the registered office within the Community; 2. Name and contact details of a contact person and of the compliance officer; 3. Legal status; 4. Class of credit ratings for which the credit rating agency is applying to be registered; 5. Ownership structure; 6. Organisational structure and corporate governance; 7. Financial resources to perform credit rating activities; 8. Staffing of credit rating agency and its expertise; 9. Information regarding subsidiaries of credit rating agency; 10. Description of the procedures and methodologies used to issue and review credit ratings; 11. Policies and procedures to identify, manage and disclose any conflicts of interests; 12. Information regarding rating analysts; 13. Compensation and performance evaluation arrangements; 14. Services other than credit rating activities, which the credit rating agency intends to provide; 15. Programme of operations, including indications of where the main business activities are expected to be carried out, branches to be established, and setting out the type of business envisaged; 16. Documents and detailed information related to the expected use of endorsement; 17. Documents and detailed information related to the expected outsourcing arrangements including information on entities assuming outsourcing functions.
 See the European Central Bank Guidelines, Note 89.
 See European Banking Authority, Opinion of the European Banking Authority on lending-based crowdfunding, EBA/Op/2015/03, 2015, https://eba.europa.eu/documents/10180/983359/EBA-Op-2015-03+%28EBA+Opinion+on+lending+based+Crowdfunding%29.pdf; Linciano N.- Soccorso P., FinTech e RegTech: approcci di regolamentazione e di supervisione, in Paracampo M.T. (ed.), FinTech. Introduzione ai profili giuridici di un mercato unico tecnologico dei servizi finanziari, Torino, 2017, 42; Naidji C., Regulation of European peer-to-peer lending Fintechs. Regulatory framework to improve SME’s access to capital, in KU Leuven Working Paper, 2017, 38.
 Regulation 462/2013, Recital 27.
 The European Rating Platform was introduced in August 2017, and the register can be consulted at
 See Peters G.W., Panayi E., Understanding Modern Banking Ledgers through Blockchain Technologies: Future of Transaction Processing and Smart Contracts on the Internet of Money, in Tasca P.- Aste T.- Pellizzon L.- Perony N. (eds.), Banking Beyond Banks and Money. A Guide to Banking Services in the Twenty-First Century, Springer, 2016, 239-278.
 See ESMA, Brief on Legal Entity Identifiers, ESMA70-145-238, 2017.
 The set of relevant variables for disclosure obligations in credit scoring is presented in Hurley M.- Adebayo J., Credit Scoring in the Era of Big Data, 204.
 A similar solution has already been hypothesized for financial intermediaries’ reporting duties: see Peters G.W.- Vishnia G.R., Blockchain Architectures for Electronic Exchange Reporting Requirements: EMIR, Dodd Frank, MiFID I/II, MiFIR, REMIT, Reg NMS and T2S, in Kuo Chuen (ed.), Handbook of Blockchain, Digital Finance, and Inclusion, Volume 2: ChinaTech, Mobile Security, and Distributed Ledger, Academic Press, 2017.
 See Jappelli T.- Pagano M., Public Credit Information: A European Perspective, in Miller (ed.), Credit Reporting Systems and the International Economy, MIT Press, 2003, 81-114. See also De Almeida A.M.- Damia V., Challenges and prospects for setting-up a European Union shared system on credit, in IFC Bulletin, 2014, 37, 7-10.
 See World Bank Group, Cryptocurrencies and Blockchain, 2018, http://documents.worldbank.org/curated/en/293821525702130886/Cryptocurrencies-and-blockchain, 21, 34-36.