The Board's Role in Setting Up AI's Ethical 'Guardrails': Boardroom discussions on artificial intelligence and machine learning's impact on humanity need to be fast-tracked.

Author: Tahmincioglu, Eve
Position: THE CHARACTER OF THE CORPORATION: Ethics of Technology

Whenever there are discussions in the boardroom about artificial intelligence (AI) and how the technology may fit in the company's strategy, Linda Goodspeed likes to bring up two questions: "What decisions are we going to allow machines to make, and how are we going to audit those decisions?"
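Her second question, auditability, is concrete enough to sketch. The snippet below is a minimal, hypothetical illustration of what "auditing those decisions" can mean in practice: every automated decision is written to an append-only log with enough context to reconstruct it later. The names here (DecisionRecord, AuditLog, the "credit-model-v3" example) are assumptions for illustration, not any particular vendor's tooling.

```python
# Hypothetical sketch: logging automated decisions so a board or auditor
# can review them after the fact. All names and values are illustrative.
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class DecisionRecord:
    model_version: str   # which model made the call
    inputs: dict         # what the model saw
    decision: str        # what it decided
    confidence: float    # how confident it was
    timestamp: float     # when the decision was made

class AuditLog:
    """Append-only log of machine decisions, one JSON record per line."""
    def __init__(self, path="decisions.log"):
        self.path = path

    def record(self, rec: DecisionRecord):
        with open(self.path, "a") as f:
            f.write(json.dumps(asdict(rec)) + "\n")

log = AuditLog()
log.record(DecisionRecord(
    model_version="credit-model-v3",
    inputs={"income": 52_000, "tenure_months": 18},
    decision="approve",
    confidence=0.81,
    timestamp=time.time(),
))
```

The design choice worth noticing is that the log captures inputs and model version alongside the decision itself; without those, a later audit cannot ask the question boards care about, namely why the machine decided what it did.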

Goodspeed, a former CIO who sits on the boards of AEP, AutoZone and Darling Ingredients, says AI and machine learning (ML) implementation is still in the early stages at the companies she serves, but she believes "thoughtful discussions" about how to carefully adopt the technology are critical.

For many companies and their boards, it is still early days for AI adoption, and for ML in particular. But the recent rush to implement the latest and greatest technology for fear of being left behind has some worried that corporate leaders are not taking enough time to consider the potential impacts on employees, communities and society at large.

AI can have "a very positive impact on people and society with greater efficiencies, sustainability and better ways of living," says Martin Fiore, the northeast tax leader for EY, who has spearheaded the firm's Humans Inc. initiative to raise awareness around the ethics-conscious pursuit of new technologies like AI. "But if you don't build trust, it can be very negative. We're looking at how we preserve and maximize humanity."

Boards, he explains, are just beginning to consider these issues.

The best starting point, he stresses, is figuring out whether "there should be guardrails governing these issues and, if there should be, who should make the decision on what those guardrails look like? What guardrails do you have in place? Do you understand what all functions in the business are doing? Everyone is innovating. Do you know the inventory of innovation you have at your organization?"

The AI innovation that has most technology experts worried is machine learning.

"The primary ethical problem with machine learning right now is that because it programs itself by analyzing existing data, and because existing data almost always reflects existing biases, ML can replicate or even amplify those biases," explains David Weinberger, senior researcher at the Harvard Berkman Center for Internet & Society who wrote Everyday Chaos: Technology, Complexity, and How We're Thriving in a New World of Possibility.

ML essentially programs itself, he continues, "by building insanely large, complex 'neural networks' based on the data it has analyzed...
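What bias replication means in practice can be shown with a toy example. The following is a minimal, hypothetical sketch using synthetic data and a deliberately simple frequency "model" standing in for a neural network: a system that learns its rules from historical decisions reproduces whatever disparity those decisions contain.

```python
# Hypothetical sketch of the bias-replication problem Weinberger describes:
# a model "programs itself" from historical data, so if that data encodes a
# biased pattern, the learned rule inherits it. All data here is synthetic.
import random

random.seed(0)

def make_historical_records(n=10_000):
    """Synthetic past hiring decisions with a built-in bias:
    equally qualified candidates from group B were approved less often."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.5
        if qualified:
            # The biased historical process: group B approved less often.
            approved = random.random() < (0.90 if group == "A" else 0.60)
        else:
            approved = random.random() < 0.10
        records.append((group, qualified, approved))
    return records

def train_frequency_model(records):
    """'Learns' by memorizing the approval rate per (group, qualified) cell,
    a stand-in for the patterns a real ML model extracts from the same data."""
    counts, approvals = {}, {}
    for group, qualified, approved in records:
        key = (group, qualified)
        counts[key] = counts.get(key, 0) + 1
        approvals[key] = approvals.get(key, 0) + int(approved)
    return {key: approvals[key] / counts[key] for key in counts}

model = train_frequency_model(make_historical_records())

# The learned rule reproduces the historical disparity: for equally
# qualified candidates, predicted approval still differs by group.
print(f"P(approve | qualified, group A) = {model[('A', True)]:.2f}")
print(f"P(approve | qualified, group B) = {model[('B', True)]:.2f}")
```

Running the sketch prints roughly 0.90 for group A and 0.60 for group B: nothing in the training step introduced new unfairness, yet the model faithfully carries the old unfairness forward, which is exactly why auditing such systems matters.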
