Are Tech Ethics Too Complex for the Board?

EDITOR'S NOTE

Last year, I interviewed Kai-Fu Lee, the former head of Google China and author of AI Superpowers: China, Silicon Valley, and the New World Order. Since then, I've been thinking about what role directors should play when it comes to the ethics of adopting innovation.

Lee offered advice on the questions boards should ask when implementing artificial intelligence (AI) to bolster the business, but when it came to the discussion of AI ethics, he wasn't sure it was the best topic for boards to take up.

If board members start talking about ethical questions, he said, things could become "very abstract. We've seen so many people fall into the trap of thinking they understand AI."

The topic of ethics, he noted, should be left to the compliance folks who can provide boards with high-level summaries, adding that "the executing teams should be held responsible" so innovations like AI "don't cause ethical issues, security problems and lawsuits."

But who better than the board to take a 10,000-foot view of the wide-ranging impact of AI and automation in general?

I spent many years covering the auto industry as a reporter, and I don't recall much talk from the executing teams behind factory automation about the community impact of losing thousands of jobs.

I saw that impact firsthand: an assembly-line worker with more than 20 years in, crying at his kitchen table during an interview because he was going to have to uproot his family from Delaware to a plant in the Midwest, a plant that also eventually closed. And that was just a shadow of the devastation that followed the plant's closure.

Clearly, the ethical issues behind AI and other advances...
