Boards Need to Keep an Eye on the Ethics of AI: Balancing algorithms, business and humanity.

By Mike Ananny

Elon Musk, CEO of Tesla and SpaceX, recently tweeted:

"AI will be the best or worst thing ever for humanity, so let's get it right."

For years, only academics, technologists and science fiction writers talked about artificial intelligence (AI). It was the stuff of research, engineering and fantasy. But especially in the last five years, as computers have become increasingly powerful, invisible, connected to vast amounts of data and embedded in the daily lives of millions, AI has quickly become indispensable to consumers and companies alike, and it is not going away.

Apple's Siri, Amazon's recommendations, Facebook's newsfeed and Gmail's filters all use AI to gather and analyze vast amounts of human speech, social media data and search queries to mimic the best listener, friend, news organization or assistant you could possibly imagine. Insurance companies use AI to set rates, universities use it to admit students and doctors use it to diagnose patients.

In truth, AI is all of these things: computer code, big data, new products, marketing campaigns and consumer expectations.

But AI is also always about people and judgment.

There is no such thing as a purely objective or neutral AI that doesn't, in some way, require or make assumptions about people. It's the people whose behaviors and opinions are surveyed, surveilled and captured in the large databases that algorithms analyze. It's the engineers who write the rules and tests that make one AI seem better than another. It's the companies that build their business models on controversial applications of AI. There is no escaping the fact that it is now more important than ever to consider the human ethics of artificial intelligence.

Consider these examples:

* HP created cameras with AI-driven facial recognition systems, but because the systems were tested only on people with white skin, the cameras failed to detect people with dark skin and were labeled "racist" in the mainstream press.

* Google attempted to use AI to encourage more civil online conversations but fell flat when its algorithms mistook all instances of profanity for hate speech, mislabeling supportive comments like "you're the f@%king best!" as bullying that needed to be automatically censored.

In each case, the companies built and tested their AI-driven products on biased or flawed data, creating technologies that worked only for white people or misread the intent behind profanity.

...
