Opinion: Impersonation Bots and Kansas Law

87 J. Kan. Bar Ass'n No. 5, 20 (2018)
Kansas Bar Journal
May 2018

by Jay Van Blaricum

NOTE: Opinions and positions expressed herein are those of the author(s) and not necessarily those of the Kansas Bar Association, the Journal, or its Board of Editors. The material within this publication is presented as information for attorneys to use and consider, in conjunction with other research they deem necessary, in the exercise of their independent judgment. The Board of Editors does not independently research the content of submitted articles approved for publication.

I. Introduction

Robots disguised as humans are everywhere online, and many of them are not our friends. Malicious software programs known as "impersonation bots" pose a serious threat to Kansans, but current law neither prohibits their deployment nor requires them to disclose their true nature. This article discusses the existing legal framework and proposes additional legal tools for fighting this particularly troublesome form of cyber-crime.

A 2017 report by the security firm Imperva found that online software programs known as "bots" generated an estimated 51.8 percent of all website traffic in 2016.[1] Although many bots on the internet are designed to operate in a neutral or benevolent manner (for example, bots used as customer service interfaces, or bots that automatically refresh one's Facebook feed), a sizeable portion are malicious in nature, or "malbots" for short. Generally, malbots are programs infected with malicious code that autonomously perform work on behalf of their creators. Among the many types of malbots being deployed is a particularly deceptive and increasingly popular variety known as impersonation bots, which appear to have complete and legitimate human credentials. Online, malicious impersonation bots are indistinguishable from human users; in actuality, they merely pose as humans by hiding behind one of the estimated 1.9 billion stolen identities available on the internet. Many of these highly advanced, fraudulent programs are equipped with artificial intelligence and are known in computer science as "autonomous intelligent agents," or simply "intelligent agents,"[2] meaning they can operate independently to do anything technologically possible on the internet. Because malicious impersonation bots increasingly imitate the intricacies of human online behavior, even detecting them poses a unique challenge.
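To give a concrete sense of how low the technical bar for basic impersonation is, consider the minimal Python sketch below. It is purely illustrative (the URL and header values are hypothetical, and it reproduces no actual bot's code), but it shows that presenting a web server with an ordinary browser's identity takes only a few lines, which is why superficial server-side checks cannot separate bots from human visitors.

```python
import urllib.request

# Headers mimicking a typical desktop browser session. A server that
# trusts these values alone cannot tell this script apart from a human
# visitor. (The specific values and URL are illustrative only.)
BROWSER_HEADERS = {
    "User-Agent": (
        "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
        "AppleWebKit/537.36 (KHTML, like Gecko) "
        "Chrome/65.0.3325.181 Safari/537.36"
    ),
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
}

def fetch_as_browser(url: str) -> bytes:
    """Request a page while presenting an ordinary browser's identity."""
    request = urllib.request.Request(url, headers=BROWSER_HEADERS)
    with urllib.request.urlopen(request) as response:
        return response.read()

if __name__ == "__main__":
    page = fetch_as_browser("https://example.com/")
    print(len(page), "bytes retrieved")
```

Real impersonation bots layer stolen account credentials, residential proxy addresses, and scripted mouse and keyboard behavior on top of this basic disguise, which is what makes them so hard to unmask.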

In a July 2017 New York Times op-ed, "Please Prove You Are Not a Robot," Professor Tim Wu of Columbia University School of Law highlighted the mass proliferation of malicious impersonation bots flooding the internet.[3] More recently, Cambridge University released a comprehensive report from an international team of 26 authors from various fields entitled "The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation."[4] Focusing on threats posed by malicious artificial intelligence systems of all kinds, including bots, the report called for coordination between the A.I. development community and other institutions, including policy and legal institutions, to develop standards, policies, and laws to address the potential for chaos that malicious A.I. poses.

To illustrate the scale of the problem, malicious impersonation bots used for distributed denial of service ("DDoS") attacks[5] alone accounted for about 24 percent of overall web traffic in 2016.[6] Impersonation bots also exploit vulnerabilities online to stealthily deliver weaponized software, steal data of every kind, manipulate the market for event ticket sales, spread disinformation, inflate product ratings, self-propagate, and fraudulently manipulate web traffic to generate ad revenue, among other nefarious uses. These bots are capable of defeating the formerly reliable defense of online "Captcha" and Turing tests, those familiar hurdles websites employ to confound robots. The underlying technology of malicious impersonation bots is not new, but the sophistication, effectiveness, pervasiveness, and destructive capabilities of these malbots are steadily advancing. Artificial intelligence technology is also developing at a breakneck pace and will grow exponentially more powerful as the nascent field of quantum computing improves and becomes widely adopted.[7] A.I. software is already capable of combining video clips to create relatively realistic fakes[8] and of creating believable images of people who do not exist.[9] Soon, authentic humans may be unable to determine whether text, photos, audio, and video reflect reality or are frauds fabricated by A.I.-operated impersonation bots.[10]
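The detection problem can be made concrete with a toy example. The Python sketch below implements a naive rate-limiting heuristic of the kind early bot defenses relied on; the threshold and client names are illustrative assumptions, not values from any real system. A bot that paces its requests at human speed and rotates stolen identities slips under the threshold entirely.

```python
from collections import defaultdict

# Naive server-side heuristic: flag any client that sends requests
# faster than a human plausibly could. The threshold is an
# illustrative assumption, not an industry standard.
MAX_REQUESTS_PER_MINUTE = 30

def flag_suspected_bots(request_log):
    """request_log: iterable of (client_id, minute_bucket) pairs."""
    counts = defaultdict(int)
    for client_id, minute in request_log:
        counts[(client_id, minute)] += 1
    return {client for (client, _), n in counts.items()
            if n > MAX_REQUESTS_PER_MINUTE}

# A patient impersonation bot (25 requests/minute) stays under the
# limit; only the crude, fast scraper is caught.
log = [("patient-bot", 0)] * 25 + [("crude-scraper", 0)] * 200
print(flag_suspected_bots(log))  # {'crude-scraper'}
```

This is the arms race in miniature: each time defenders encode an assumption about human behavior, well-resourced bot operators script around it.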

Much malicious impersonation bot activity in recent years has focused on popular social media sites. This was demonstrated in dramatic form in early 2018 with the discovery of a company called Devumi, which allegedly sold celebrities and politicians millions of fake social media followers; many of these followers used stolen profiles to appear legitimate and went on to exhibit bizarre, malevolent behavior, the hallmarks of malicious impersonation bots.[11] But these bots are also capable of committing serious crimes on their creators' behalf. For example, working with police and security agencies around the world in 2010, the FBI cracked a major cyber-crime network as part of Operation Trident Breach. The cyber criminals targeted 390 computers belonging to small and medium-sized companies, municipalities, churches, and individuals, infecting them with a version of the ZeuS botnet.[12] As part of this complex operation, a network of malbots operated covertly for two years, stealing $74 million on behalf of its creators before being detected.[13] In light of advances in technology since 2010, including the increasing refinement and affordability of malicious artificial intelligence, the events uncovered in Operation Trident Breach could be just the tip of the iceberg. Given the scale of these malicious activities, Kansans are undoubtedly affected by the proliferation of malicious impersonation bots.

II. Limitations with Current Laws

There are limitations to existing laws combating malicious impersonation. For example, the federal Better Online Ticket Sales Act of 2016[14] addresses the online ticket-scalping aspect of bot activity and provides for enforcement by state attorneys general. Similarly, thirteen states currently prohibit ticket-scalping bots, including California,[15] New York,[16] Oregon,[17] Pennsylvania,[18] and Tennessee.[19] However, those laws do not address malicious bot activities beyond online ticket scalping.

Moreover, while certain law enforcement strategies have proven successful in several high-profile cases and serve as one of several means of deterring this type of cybercrime,[20] enforcement of prohibitions against bots and botnets at both the federal and state level has generally proven notoriously challenging, given the difficulty of detection and the massive size, reach, and complexity of the criminal networks behind them.[21] Indeed, successful prosecution of bot herders[22] and subsequent cleansing of infected machines are often followed by resurrection of the criminal operation within minutes.[23]

The main problem with existing laws relates to the autonomous nature of impersonation bots: who is culpable for a criminal act carried out by an unpredictable A.I.-operated bot? To illustrate the problem, consider the Kansas computer crime statute, K.S.A. 2017 Supp. § 21-5839(a)(2), which touches on fraudulent software. That subsection makes it unlawful to:

[U]se a computer, computer system, computer network or any other property for the purpose of devising or executing a scheme or artifice with the intent to defraud or to obtain money, property, services or any...
