Date: 22 September 2023
Author: Cowger, Alfred R., Jr.

CONTENTS

I. A Brief Explanation of Algorithms and Artificial Intelligence for the Corporate Community
II. How AI Works--The Devil is in the Details
  a. Step One: The Design Process
  b. Step Two: The Hidden Minefields in Data Selection and Use
  c. Step Three: Understanding (or Not) How Algorithmic Processes Work
    i. How Algorithms Analyze Data and Draw Conclusions
    ii. Beware AI "Black Box" Processing
III. A General Synopsis of the Current State of Corporate Fiduciary Duties
  a. The Two Corporate Fiduciary Duties--Due Care and Loyalty
  b. The Business Judgment Rule
  c. Statutory Immunities
  d. The Implications of Modern Corporate Fiduciary Jurisprudence When Applied to the Use of Artificial Intelligence
IV. A Deeper Analysis of the Duty of Due Care and the Use of AI
  a. How AI Can Help Meet the Duty of Due Care in the Age of Algorithms, and Thus, Might Become a Necessary Tool of All Corporate Fiduciaries
  b. How Deference to and Over-Reliance on AI Can Lead to a Breach of the Duty of Due Care
  c. Why the Business Judgment Rule Should Not Apply to a Fiduciary's Deference to AI
  d. Why Immunity-Granting Statutes Do Not Apply to the Use of AI by a Fiduciary
V. A Deeper Analysis of the Duty of Loyalty and the Use of AI
  a. How AI Can Fulfill the Duty of Loyalty and, In Fact, Become a Necessary Tool for Corporate Fiduciaries
  b. How Deference to and Over-Reliance on AI Can Lead to a Breach of the Duty of Good Faith and Loyalty
    i. Breaching the Underlying Concepts of the Duty of Good Faith
    ii. Why Relying on AI is not the Same as Relying on Human Experts When Fulfilling the Duty of Good Faith and Loyalty
    iii. How Reliance on AI Can Violate the "Sub-Duties" of Monitoring and Disclosure Falling Under the Duty of Loyalty
    iv. Deferring to Algorithms Is an Improper Delegation of Authority by the Corporate Fiduciary
    v. As Artificial Intelligence Becomes More Sophisticated, The Liability of Corporate Fiduciaries Will Become Greater
VI. New Legal Standards and Practices to Meet Corporate Fiduciary Duties in the Age of Algorithms
  a. New Legislation for New AI Tools
  b. New Concepts of Fiduciaries in the Age of Algorithms
  c. New Standards for Using AI Tools--What is Expected of the "Reasonable" Corporate Fiduciary and Their Corporation
VII. Conclusion

I. A Brief Explanation of Algorithms and Artificial Intelligence for the Corporate Community

To anyone who is not intimately involved in the design and sale of AI-based products--which means virtually every corporate officer, director, and legal counsel--"algorithms," "artificial intelligence" (or "AI"), "robotics," and similar technical terms seem immediately incomprehensible. Yet, for purposes of legal analysis, these terms can be considered interchangeable and definable. (1) According to the Merriam-Webster Dictionary, an algorithm is "a step-by-step procedure for solving a problem or accomplishing some end." (2) A decision or action that is the result of one or more algorithms constitutes the use of artificial intelligence, or "AI." (3) "Robotics" simply refers to "embodied material objects that interact with their environment" (4)--in other words, devices or machines powered by AI, such as a robot, a self-driving vehicle, or, perhaps in the worst-case scenario, an assassin drone. Perhaps the easiest way to remember the interaction of algorithms, AI, and robotics is that algorithms are the individual software programs that collectively comprise what is called artificial intelligence, while robotics is the hardware run by artificial intelligence. Since algorithms make up the artificial intelligence that controls robotic devices, the fiduciary ramifications of using algorithms, AI, and robotics in corporate decision-making are so interwoven that, for purposes of discussing those ramifications, they can simply be referred to as "artificial intelligence," "AI," or "AI tools."
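To make the dictionary definition concrete, the following is a minimal sketch of an "algorithm" in the Merriam-Webster sense: a fixed, step-by-step procedure for accomplishing some end. The supplier-ranking scenario, the function name, and the weights are purely hypothetical illustrations, not drawn from any actual AI product discussed in this article.

```python
# Illustrative only: a toy "algorithm" -- a step-by-step procedure
# for accomplishing some end (here, ranking hypothetical suppliers).
# An AI tool would chain many such procedures, often with weights
# learned from data rather than fixed by a human designer.

def rank_suppliers(suppliers):
    """Rank candidate suppliers by a simple weighted score.

    Step 1: score each supplier (lower price and faster delivery are better).
    Step 2: sort the suppliers by score, best (lowest) first.
    """
    def score(s):
        # Hypothetical fixed weights chosen by the designer.
        return 0.7 * s["price"] + 0.3 * s["delivery_days"]
    return sorted(suppliers, key=score)

candidates = [
    {"name": "Acme", "price": 100.0, "delivery_days": 5},
    {"name": "Birch", "price": 90.0, "delivery_days": 20},
    {"name": "Cedar", "price": 120.0, "delivery_days": 2},
]
ranked = rank_suppliers(candidates)
print([s["name"] for s in ranked])  # best-scoring supplier first
```

Even this trivial procedure hints at the fiduciary questions discussed below: the designer's choice of weights silently embeds a policy judgment that a board relying on the output may never see.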

Though the definitions may be simple, the ramifications of AI for corporate fiduciary decision-making will likely become substantive and pervasive. Algorithms and AI are valuable tools whenever a decision requires, or is improved by, analyzing large datasets to draw multi-layered or otherwise complicated correlations and conclusions. A fiduciary can use those correlations and conclusions to make determinations about a wide range of corporate operational and strategic matters. However, most managers, directors, and corporate consultants do not understand the full potential of these AI tools, let alone the pitfalls and limitations of using them.

One reason that corporate managers have limited practical knowledge about the AI tools available to them is that, until recently, AI tools have been developed predominantly for sectors outside the corporate boardroom, including healthcare, energy, public safety and policing, communications and social media, and government intelligence and security. (5) Nonetheless, AI tools for corporate managers are already being marketed to meet a wide range of operational, marketing, and production needs. One need only run a web search for "how corporations can use artificial intelligence" to see the AI products and AI consultants promising to improve the decision-making of corporate senior managers regarding product pricing, supply chain management, brand management and marketing, and hiring and other HR determinations. (6) Corporate directors can find their own AI tools to advise them on a wide range of issues, such as strategic planning, financial assessments, market predictions, risk management, and corporate governance. (7) As such, regardless of whether corporate fiduciaries are aware of the inevitable impact of AI on their fiduciary duties, AI will fundamentally change the roles and functions of corporate fiduciaries.

  II. How AI Works--The Devil is in the Details

    Given the inevitable pervasiveness of AI tools for corporate fiduciaries, those who fail to educate themselves about the existence of these tools and how to use them properly may soon find that such failures constitute a breach of their fiduciary duties. On the other hand, corporate fiduciaries who rely too heavily or too passively on AI and algorithms to make corporate decisions could find that such reliance is also a breach of their fiduciary duties. To better understand these benefits and risks, corporate fiduciaries must have a basic understanding of how algorithms are created and incorporated into AI products, and how AI renders the conclusions upon which corporate managers will base their own decisions and conclusions.

    As the more detailed explanations below show, at least for the foreseeable future, artificial intelligence has inherent limitations. Thus, humans, whose natural tendency is to be in awe of the power of artificial intelligence, should lower their expectations and exercise careful caution when turning to it. Studies have repeatedly shown that humans exhibit "automation bias" in favor of AI, meaning they tend to accept an algorithmic outcome even when they intuitively suspect something is wrong with it. (8) Even experts, who should have enough knowledge and experience to know when an algorithmic answer is wrong, tend to reject their own self-doubt in favor of the erroneous algorithmic results. (9) Thus, corporate fiduciaries must always be on guard against assuming that the AI products providing them guidance are infallible, and must rely on their own expertise and experience to question and reject any guidance they reasonably believe is wrong.

    a. Step One: The Design Process

    Like any other software, or any other product for that matter, algorithms and AI tools begin with a design process initiated and undertaken by humans. (10) Those human designers can create deficient algorithms that produce bad results. (11) Designers may be experts at crafting algorithms from software code, but they will never be the combination of lawyers, CPAs, business administrators, logistics experts, and HR managers needed to make a good algorithm for corporate use. Without proper expert consultation at the design phase, including input from clients about their specific goals and needs, even a good designer is doomed to make a bad AI tool. The risk also exists that a designer might be "too smart" and include correlations that seem perfectly rational to the designer but rest on factors and considerations that a board may not want to consider or, as discussed below, may not be legally permitted to consider. (12) The end result in either situation will be algorithms that, though created by highly competent designers, are nonetheless defective products that corporate fiduciaries use at their peril. (13)

    Even well-intentioned designers might succumb to market pressures that result in an underperforming AI product. Due to cost and marketing considerations, designers may be forced to create AI products that do not live up to the promises of their manufacturer bosses. (14) In order to gain market share, AI tool manufacturers may try to market their products to a wider range of customers than the product is actually designed to serve. (15) Furthermore, the more complicated an algorithm is, the more time is needed to perfect it, so designers might be pressured to cut corners during the design phase. (16) Additionally, the computer systems running those algorithms require astounding amounts of energy; for example, the AI used to run bitcoin businesses can consume more energy than entire nations. (17) Thus, a designer who wants to create an algorithm that is a viable product in terms of pricing and operation will face limitations on the sophistication of the final AI-driven product, which in turn means a higher risk of error in the end result. The dual consumer warnings of "you...
