Artificial Intelligence Oversight Risks: Smart board-level questions to ask about AI.

Author: Peterson, Brad
Position: THE CHARACTER OF THE CORPORATION: Ethics of Technology

Artificial intelligence, or "AI," raises legal and ethical issues beyond those generally found in investments in technology. Due to the rapid growth in this area, the lack of standards for evaluation and oversight, and the risks associated with AI use, AI projects would particularly benefit from board inquiry and oversight.

Board members should ask the following questions as their company evaluates its use of AI.

Will AI be replacing human judgment?

As board members well know, our legal system relies fundamentally on human judgment in the areas of greatest importance. No board would simply turn over the question of whether a buyout offer is in the best interests of shareholders to an AI system, for example. Each board needs to inquire about whether sufficient consideration has been given to the potential uses of AI, particularly for businesses where legal compliance, fairness and adapting to new situations are important.

AI-based decisions must satisfy the laws and regulations that apply to your business. Of particular concern is that AI-based decisions may discriminate because they rely on data that reflects a discriminatory past or look only at correlation instead of causal factors. Companies that use AI tools in hiring, for example, need to ensure that these tools do not discriminate against protected classes of applicants or employees. In regulated areas like insurance, AI tools used for underwriting decisions will have to follow recently issued requirements from the New York Department of Financial Services on the use of "unconventional sources or types of external data" to address the risk of unlawful discrimination and a lack of data transparency.

Companies can mitigate these AI risks by applying oversight, risk management and controls to meet legal compliance and ethical objectives. Data scientists who understand the AI tools and the context of the data, and who implement controls designed to eliminate bias, inaccuracies and coincidence, can reduce the chance of these unintended consequences. A simple illustration of such a control appears below.
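As one illustration of the kind of control a data science team might implement, the sketch below applies the "four-fifths rule" that U.S. regulators use as a screen for disparate impact in selection outcomes. The applicant data, group labels and threshold here are hypothetical; an actual control would be designed with counsel and tailored to the company's own hiring data.

```python
# Hypothetical sketch of a disparate impact screen using the "four-fifths rule."
# The outcome data and group names below are illustrative only.

def selection_rate(outcomes):
    """Fraction of applicants in a group who received a favorable outcome."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

def four_fifths_check(outcomes_by_group, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times
    the highest group's selection rate."""
    rates = {group: selection_rate(o) for group, o in outcomes_by_group.items()}
    best = max(rates.values())
    return {group: (rate, rate >= threshold * best) for group, rate in rates.items()}

# Hypothetical favorable-hire outcomes (1 = advanced, 0 = rejected) by group.
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

for group, (rate, passes) in four_fifths_check(outcomes).items():
    print(f"{group}: selection rate {rate:.2f} -> {'OK' if passes else 'REVIEW'}")
```

A control like this does not prove a tool is fair, but it gives the board and management an auditable, repeatable check that can trigger review before a biased outcome becomes a compliance problem.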

In addition, AI systems will need to produce output that is transparent, auditable and capable of being explained, sometimes called "Explainable AI." For the AI hiring tool example above, a company will need to be able to demonstrate that an applicant's favorable hiring qualification score is based on legitimate criteria and not on machine-determined prohibited factors such as race or gender.
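One way a company might demonstrate that a score rests on legitimate criteria is to make each factor's contribution to the score visible. The sketch below uses a deliberately simple additive scoring model with hypothetical, job-related features and weights; production explainability tooling would be far more sophisticated, but the goal is the same: every factor behind a score can be named and audited.

```python
# Hedged sketch: a transparent, additive scoring model whose per-feature
# contributions can be reported for each applicant. Feature names and
# weights are hypothetical and chosen to be job-related criteria.

WEIGHTS = {
    "years_experience": 0.5,
    "relevant_certifications": 1.0,
    "skills_test_score": 0.03,
}

def score_with_explanation(applicant):
    """Return the total score and each feature's contribution to it."""
    contributions = {
        feature: WEIGHTS[feature] * applicant.get(feature, 0)
        for feature in WEIGHTS
    }
    return sum(contributions.values()), contributions

total, explanation = score_with_explanation(
    {"years_experience": 6, "relevant_certifications": 2, "skills_test_score": 85}
)
print(f"score = {total:.2f}")
for feature, contribution in explanation.items():
    print(f"  {feature}: {contribution:+.2f}")
```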

What are the concerns around the...
