Welcoming our new algorithmic overlords? Algocracy and its effect on government decision making.

By Ronald Bailey (Columns)

Algorithms are everywhere. You can't see them, but these procedures or formulas for solving problems help computers sift through enormous databases to reveal compatible lovers, products that please, faster commutes, news of interest, stocks to buy, and answers to queries.


Dud dates or boring book recommendations are no big deal. But John Danaher, a lecturer in the law school at the National University of Ireland, warns that algorithms take on a very different profile when they're employed to guide government behavior. He worries that encroaching algorithmic governance, or what he calls algocracy, could "create problems for the moral or political legitimacy of our public decision making processes."

And employ them government agencies do. The Social Security Administration uses algorithms to help its agents evaluate benefits claims; the Internal Revenue Service uses them to select taxpayers for audit; the Food and Drug Administration uses them to study patterns of foodborne illness; the Securities and Exchange Commission uses them to detect trading misconduct; and local police departments employ their insights to predict the emergence of crime hotspots.

Conventional algorithms are rule-based systems constructed by programmers to make automated decisions. Because each rule is explicit, it is possible to understand how and why the algorithm produces its outputs, although the continual addition of rules and exceptions over time can make keeping track of what the system is doing difficult in practice.
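To make the contrast concrete, here is a minimal sketch of a rule-based screener. The rules, names, and thresholds are entirely invented for illustration and resemble no actual agency's system; the point is that each decision traces back to an explicit, human-written rule:

```python
# Hypothetical rule-based benefits screener: every rule is explicit,
# so any decision can be traced back to the rule that produced it.

def screen_claim(age, years_worked, disabled):
    """Return (decision, reason) for a simplified, made-up benefits claim."""
    if age >= 67:
        return ("approve", "retirement age reached")
    if disabled and years_worked >= 10:
        return ("approve", "disability with sufficient work history")
    if years_worked < 10:
        return ("deny", "insufficient work history")
    return ("refer", "needs manual review")

print(screen_claim(70, 5, False))  # ('approve', 'retirement age reached')
print(screen_claim(30, 2, False))  # ('deny', 'insufficient work history')
```

Auditing such a system means reading the rules; the difficulty Danaher's critics of conventional systems point to is only that real rule sets accumulate hundreds of such branches over decades.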

Alternatively, so-called machine-learning algorithms (which are increasingly being deployed to deal with the growing flood and complexity of data that needs crunching) are a type of artificial intelligence that gives computers the ability to discover rules for themselves--without being explicitly programmed. These algorithms are usually trained to organize and extract information after being exposed to relevant data sets. It's often hard to discern exactly how the algorithm is devising the rules it's using to make predictions.
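A toy sketch of the difference: instead of a programmer writing the threshold, a learner infers it from labeled examples. The training data below is invented, and a one-feature "decision stump" is about the simplest possible learner, but it shows the inversion of authorship (the rule comes out of the data, not the coder):

```python
# Toy machine learning: a one-feature decision stump that discovers its
# own threshold from labeled training data instead of a programmer's rule.

def fit_stump(values, labels):
    """Pick the threshold on a single feature that best separates the labels."""
    best_t, best_correct = None, -1
    for t in sorted(set(values)):
        # Count how many examples the rule "flag if value >= t" gets right.
        correct = sum((v >= t) == bool(y) for v, y in zip(values, labels))
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

# Invented training set: feature value -> label (1 = flag, 0 = don't flag)
values = [1, 2, 3, 8, 9, 10]
labels = [0, 0, 0, 1, 1, 1]
print(fit_stump(values, labels))  # 8 -- the learner found the split itself
```

Even here the "why" of the learned rule lives in the data, not the code, and in real systems with millions of parameters rather than one threshold, that opacity is what makes the resulting decisions hard to explain.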

While machine learning is highly efficient at digesting data, the answers it supplies can be skewed. In a recent New York Times op-ed titled "Artificial Intelligence's White Guy Problem," Kate Crawford, a researcher at Microsoft who serves as co-chairwoman of the White House Symposium on Society and Artificial Intelligence, cited several instances of these algorithms getting something badly wrong...
