ON THE OBSOLESCENCE OF EMPIRICAL KNOWLEDGE IN DEFINING THE RISK/RIGHTS-BASED APPROACH TO AI REGULATION IN THE EUROPEAN UNION

Author: Grozdanovski, Ljupcho
I. INTRODUCTION
II. THE EVOLUTION OF THE EU'S REGULATORY APPROACH TO AI
    A. The Institutional Incentives for AI Regulation
    B. The Divergence Between Established and Regulated AI-Related Risks
        1. The Discovery of Evidence of AI-Related Risks
        2. The Regulatory Response to the Evidence Discovered
III. SEEKING KNOWLEDGE OF FACTS FOR THE PURPOSE OF POLICY: THE EPISTEMIC CHALLENGES OF RISK IDENTIFICATION
    A. Identifying the Loci of Uncertainty
    B. Selecting Relevant Risks Warranting Further Exploration (and Regulation)
        1. Relevance Induced from 'Bare' Facts and Shared Perceptions
        2. Relevance Inferred from Policy Objectives
IV. TRANSLATING KNOWLEDGE OF FACTS INTO POLICY: THE IMPACT OF RISK CHARACTERIZATION ON THE DESIGN OF 'ADEQUATE' REGULATORY FRAMEWORKS
    A. Knowledge of Facts, Paramount in Shaping 'Standard' Risk Regulation ...
        1. Probative and Consistent Premises ...
        2. ... Yielding 'Acceptable' (and Regulation-Worthy) Knowledge of Risks
    B. Knowledge of Facts, Ancillary in Designing the AI Act
        1. The ratio legis: Reasons for the Fact-Neutrality of the AI Act
            i. 'Fact-Neutrality' Explained by the Specific Nature of the AI Act as a 'Risk-Regulating' Instrument
            ii. 'Fact-Neutrality' Justified by a Specific Definition of the Notion of 'Risk'
        2. The explanatio legis: Normative Coherence with Existing EU Law
V. A PEEK INTO THE FUTURE: CAN THE AI ACT PASS THE PROPORTIONALITY TEST?
VI. CONCLUDING REMARKS

I. INTRODUCTION

    In his 1992 Risk Society, (1) Beck analysed the modes of production and distribution in a globalized economy, with scientific and technological progress as driving forces of overall social organization. He argued that the post-industrial risk society is a concept "based on the importance of bads" (2) and is characterized by "the distribution of bads that flow within various territories and are not confined within the borders of a single society." (3) Artificial Intelligence (hereafter, AI), (4) as the latest offspring of technological innovation, has certainly triggered global debates on a series of risks ('bads') that States and regional organizations like the European Union (hereafter, EU) were quick to identify and caution against. Unsurprisingly, the regulation of AI became a focal point for scholars (5) and regulators alike. (6)

    As with the technologies preceding AI, (7) regulators were confronted with a familiar 'the old meets the new' scenario, typically experienced when new technologies challenge the scope of application of existing regulatory instruments. (8) Indeed, once it became clear that existing regulation was somewhat ill-adapted (9) to resolving what became topical risks (e.g. algorithmic biases) associated with the deployment and use of intelligent systems, the need for AI-specific, tailor-made instruments naturally arose.

    Unlike previous technologies, AI (as a class of intelligent rather than merely automated devices) raised never-before-seen challenges, as regulators sought to strike a balance between two competing objectives (and corresponding rationalities): market gains and the protection of rights and values. The difficulty in balancing the two stems largely from the diversity and complexity of AI technologies. As stressed in a 2021 Expert Report on AI in Japan, "on the one hand, laws and regulations face difficulties in keeping up with the speed and complexity of AI innovation and deployment (...) On the other hand, prescriptive regulation or rule-based regulation can hinder innovation. To address these conflicting problems, it is necessary to change governance models from conventional rule-based ones to goal-based ones that can guide entities such as companies to the value to be attained. Because our society shares the Social Principles of Human-Centric AI, which state the goals for the use of AI, and because principles on AI are slowly but steadily reaching a consensus globally, it can be said that we are finalizing the building of a foundation for goal-based governance." (10)

    The cited Report reveals a key feature of what seems to have become the dominant method in AI regulation: since AI is ever-evolving and never fully knowable (in the sense of conclusive evidentiary discovery (11)), the most regulators can aim for are broadly defined regulatory principles. These would establish general frameworks within which subsequent, more specific regulation could be enacted. While there has been much debate on which principles ought to serve as a 'gold standard' for AI regulation, one - though not the only (12) - candidate for a relatively complete list of such principles is that of Asilomar, (13) which groups them into three main clusters: research, ethics and long-term goals. The 'Research' cluster includes strategies and funding, science-policy links, the development of research cultures and so-called race avoidance. (14) The 'Ethics' cluster includes the principles of safety, failure transparency, judicial transparency, responsibility, value alignment, the protection of human values, the protection of personal privacy as well as liberty and privacy, the pursuit of shared benefits and prosperity, the preponderance of human control, the non-subversion of social checks schemes and the avoidance of an AI arms race. (15) Finally, the 'Long-term goals' cluster includes AI capability caution, the importance and impact of AI on various types of future global developments, the careful identification and management of risks, recursive self-improvement and the pursuit of the common good. (16)

    The principles and objectives highlighted in many national AI strategies are variations of the Asilomar principles, the general trend being an emphasis on human-centric, ethical values coupled with the fostering of economic efficiency and growth through, for instance, strategic investment and the development of innovation.

    This trend of framing ambitious market strategies with strong ethical values is inter alia visible in the 2018 German AI Strategy (17) as well as the Swiss Digital Strategy. (18) By contrast, the 'leadership claim' in the so-called AI race (for markets) is especially strong in the AI strategy of the United States (hereafter, the US). The Executive Order of 11 February 2019 (19) expresses the ambition of maintaining American leadership in AI through a concerted effort to promote advancement in technology and innovation "while protecting American technology, economic and national security, civil liberties, privacy, American values and enhancing international and industrial collaboration with foreign partners and allies." (20) To this end, the American strategy is guided by five principles: drive technological breakthroughs through the promotion of scientific discovery, economic competitiveness and national security; develop appropriate technical standards and reduce the barriers to the safe testing and deployment of AI technologies; train current and future generations of US workers with the skills to develop and apply AI technologies; foster public trust and confidence in AI technologies and protect civil liberties, privacy and American values; and promote an international environment that supports American AI research and innovation and opens markets for American AI industries, while protecting the US technological advantage in AI and protecting US critical technologies from acquisition by strategic competitors and adversarial nations. (21)

    The EU's regulatory trajectory follows the same general trends, since it also strives to strike the 'right' balance between risk-preventing principles of ethics without, at the same time, hindering the economic gains that AI innovation and use promise to deliver. In the 2020 White Paper on AI, the European Commission (hereafter, EC) clearly supported a "regulatory and investment-oriented approach with the twin objectives of promoting the uptake of AI and addressing the risk associated." (22) One could argue that the White Paper provided the two-pronged regulatory framework within which subsequent EU regulation on AI would take shape. The driving ambition of the EC is to realise, on the one hand, an ecosystem of trust ensuring "compliance with EU rules, including the rules protecting fundamental rights" and, on the other hand, an ecosystem of excellence that supports "the development and uptake of AI across the EU economy and public administration." (23) The latter objective is purely economic and aims at "harnessing the capacity of the EU to invest in next generation technologies and infrastructures." (24) The EC thus sought to increase "Europe's technological sovereignty in key enabling technologies and infrastructures for the data economy." (25) This twofold (excellence/trust) ecosystem echoes the Asilomar principles insofar as it places the emphasis on human-centric AI while aiming to foster investment and innovation. Similarly, the European Parliament (hereafter, EP) also acknowledged that AI systems "have the potential to generate opportunities for business and benefits for citizens" while simultaneously flagging the need for a regulatory framework "protecting citizens from the potential risks of such technologies." (26) With the Proposal for an Artificial Intelligence Act (hereafter, AI Act), the EC responded to that call, emphasizing that AI "can provide key competitive advantages to companies and support socially and environmentally beneficial outcomes," but also "may generate risks and cause harm to public interests and rights that are protected by Union law." (27)

    The interesting and often overlooked question - explored in this study - is the following: which real-life experiences (and evidence thereof) can be relied on to design the axiological and regulatory shields that regulators should raise to protect their citizens from AI-related risks? Considering the current state and foreseeable development of AI innovation, establishing an exhaustive and...
