Cybersecurity Stovepiping

96 Nebraska L. Rev. 339


David Thaw(fn*)


TABLE OF CONTENTS


I. Introduction

II. The Concept of Stovepiping

III. Stovepiping in Cybersecurity
   A. Policy Making, Complexity, and Change
   B. Complex Passwords: A Case Study
      1. Fundamentals of Password Complexity
      2. "Guessability": The False Assumption
         a. Password Guessing Via Authentication (Login) Interfaces
         b. Password Guessing Via Unprotected/Unsanitized Service
         c. Offline Password Attacks
      3. "Defense in Depth": Measuring Marginal Benefit

IV. Implications of the Stovepiping Disjuncture
   A. Addressing the Same Question
   B. Overcoming Policy Entrenchment
   C. Risk-Analytic Framework for Cybersecurity

V. Conclusion


I. INTRODUCTION

Most readers of this Article probably have encountered, and been frustrated by, password-complexity requirements. Such requirements have become a mainstream part of contemporary culture: the more complex your password is, the more secure you are, right? So the cybersecurity experts tell us. Moreover, policy makers have accepted this "expertise" and have even adopted such requirements into law and regulation.(fn1)

This Article asks two questions. First, it examines whether complex passwords actually achieve the goals many experts claim. Does using the password "Tr0ub4dor&3" or the passphrase "correcthorsebatterystaple" actually protect one's account? Concluding that complex passwords are a red herring, as recently confirmed by the federal standards makers,(fn2) this Article then examines why such requirements became so widespread.
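The intuition behind that conclusion can be made concrete with a back-of-the-envelope entropy calculation. The sketch below is illustrative only: it assumes an attacker who already knows the password's structural pattern, and its parameters (word-list sizes, substitution counts) are assumptions in the spirit of the well-known comparison from which these two example passwords are drawn, not figures taken from this Article.

```python
import math

# Entropy, in bits, is log2 of the number of equally likely candidates
# an attacker must search. Both models assume the attacker already
# knows the structural pattern, which is the often-overlooked premise.

def bits(choices: int) -> float:
    return math.log2(choices)

# "Tr0ub4dor&3": an uncommon dictionary word dressed up with
# capitalization, letter-to-number substitutions, and a suffix.
complex_password = (
    bits(65_000)  # uncommon base word          (~16 bits)
    + bits(2)     # capitalize first letter?    (  1 bit)
    + bits(4)     # which common substitutions  (  2 bits)
    + bits(10)    # trailing digit              ( ~3 bits)
    + bits(32)    # trailing symbol             (  5 bits)
)

# "correcthorsebatterystaple": four words chosen uniformly at random
# from a 2,048-word list contribute 11 bits apiece.
passphrase = 4 * bits(2_048)

print(f"pattern-based complex password: ~{complex_password:.0f} bits")
print(f"four random common words:       ~{passphrase:.0f} bits")
```

Under these assumed parameters the memorable passphrase (about 44 bits) is on the order of 100,000 times harder to guess than the "complex" password (about 27 bits), which illustrates why complexity rules that push users toward predictable patterns add little security.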

Through analysis of historical computer-science and related literature, this Article reveals a fundamental disconnect between the best available scientific knowledge and the application of that knowledge to password policy(fn3) development. Discussions with computer scientists who were leading experts in computer security during the period when password-complexity policies developed suggest that the disconnect between scientific literature and policy outcomes cannot be fully explained by a simple failure of computer-security researchers to identify the shortcomings of complex passwords. Nor can it be fully explained by a failure of computer-science research to consider the user-design implications of password complexity and associated research in psychology. This Article proposes the alternative hypothesis that the disconnect resulted from a "stovepiping" failure of a different type: the failure to convey relevant scientific knowledge in a framework that could drive a shift in policy direction.(fn4)

A common approach to arguing for policy reversal or change is the presentation of new evidence. However, this Article posits that in certain contexts mere contrary or "corrective" evidence may be insufficient. This is because of the effects of policy inertia, which result in a state where more evidence is required to reverse a policy than would have been required to implement it in the first place. Frequent vacillation of policy positions can have destabilizing effects on economic markets and social structures, and within certain highly technological or scientific contexts, this differential may be even more harmful. Using cybersecurity as an example, this Article proposes the hypothesis that in such cases, the same (high) level of evidence should be required for implementation of policy in the first instance as is required for subsequent reversal or revision of policy.

Under such a hypothesis, the higher standard of evidence for reversal was not met in the context of complex passwords even after computer science came to understand that password complexity did not address the problems it claimed to solve. Thus, what was required for policy reversal was not merely new computer-science evidence but the characterization of that evidence within a framework demonstrating that continuing the original course of action was actually producing a worse condition than had originally existed. In fact, this type of net benefit/loss economic framing was largely missing from the discourse regarding authentication at the time and, indeed, remains deeply undertheorized in contemporary discourse on cybersecurity policy.(fn5)

The implications of these results are compelling. If the assertions in this Article are correct, the technical complexity of society has vastly outstripped our policy-making process's ability to keep pace. A dystopian view of this result suggests we are heading toward technocracy. (How did you feel the last time Facebook or Google implemented a major overhaul?) A perhaps more optimistic view, however, suggests that such technical complexity is not a new concept in relative terms and that historical context can provide some guidance as to how to adapt.

This optimistic view proposes that regulatory history may offer guidance for regulating rapidly changing fields like cybersecurity. Fields such as medicine and aviation, whose technologies developed faster than the policy makers of their time could follow, can provide such insight for the Information Age. Each of these fields had to develop a scientific knowledge base upon which to build policy frameworks and evaluate subsequent policy changes. Developing a science of cybersecurity and requiring evidence-based policy making can provide solutions to the specific problems presented in this Article, and that process of establishing a scientific base as a regulatory prerequisite may also benefit other highly technical subjects faced by an increasingly complex society.

Simply put, cybersecurity policy making must, as with other technical fields, move toward requiring evidence-based policy making in the first instance. To do otherwise in such a highly technical and rapidly evolving field undermines the very purposes of the regulatory process itself, particularly in the context of delegation to "expert" administrative agencies. This Article examines that concept through the lens of the specific problem of password complexity and offers a policy-making prescription by way of example: the myth of "risk prevention" must be replaced with the empirically founded calculus of risk management. And the primary question to be addressed must not be "Is your system secure?" but rather "Do your risk mitigation techniques match your risk tolerance?"
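That risk-management calculus can be expressed as a simple expected-loss comparison. The sketch below is a minimal illustration of the framing rather than a method from this Article; every probability and dollar figure in it is a hypothetical assumption chosen for clarity.

```python
# Minimal risk-management framing: compare the expected annual loss a
# control avoids against the cost of operating the control. All figures
# are hypothetical and exist only to illustrate the calculus.

def expected_annual_loss(incident_probability: float, impact: float) -> float:
    """Annualized expected loss: the chance of an incident in a given
    year times the loss suffered if it occurs."""
    return incident_probability * impact

baseline     = expected_annual_loss(0.10, 500_000)  # no control in place
with_control = expected_annual_loss(0.02, 500_000)  # control cuts likelihood
control_cost = 30_000                               # annual operating cost

risk_reduction = baseline - with_control
net_benefit = risk_reduction - control_cost

print(f"residual risk: ${with_control:,.0f}/year")
print(f"net benefit:   ${net_benefit:,.0f}/year")

# The policy question is not "is the system secure?" but whether the
# residual risk falls within the organization's stated risk tolerance
# and whether the control's net benefit is positive.
```

On these assumed numbers the control is worth adopting (a $40,000 annual risk reduction for $30,000 in cost), but the same arithmetic can show a password-complexity mandate producing a negative net benefit once induced costs, such as users writing passwords down, are counted.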

II. THE CONCEPT OF STOVEPIPING

Academic research departments often debate the concept of "stovepiping": the idea that intense focus within their own discipline, while beneficial to depth-oriented research, decreases the likelihood that broader questions will be investigated. In recent years, many major U.S. research institutions have increased the number of interdisciplinary or cross-disciplinary initiatives.(fn6) While a promising trend, examples like those presented in this Article suggest the trend may not be adequate to address the complex technical problems facing an increasingly interconnected society. This Article explores, through the example of information-security regulations, the disconnect between how legal and policy communities, on the one hand, and technical and scientific communities, on the other, formulate questions. That disconnect, combined with the increasing interconnectivity and resulting complexity of global society, suggests that traditional policy-making processes may be inadequate to address certain contemporary challenges.

This project began with the hypothesis that a lack of interdisciplinary integration within computer-science research explains the failure to identify the shortcomings of complex passwords discussed in section III.A of this Article. While only a small percentage of the literature on password complexity has addressed this concern,(fn7) discussions with computer scientists(fn8) involved in early password policy making, together with feedback received during the development of this Article, suggest that hypothesis is incomplete. It is important to note that little, if any, formal literature documents this history. Substantial credit is owed to Professors Steven Bellovin, David Clark, and Matthew Blaze for their direct recitation of relevant history concerning the development of early password-authentication practices during the 1980s and early 1990s.

The initial inquiry of this Article began from the hypothesis that computer-science research failed to account for certain human factors, thus producing incomplete evidence that formed the basis of policy recommendations. This hypothesis seemed unsatisfying, however, because substantial research into human-computer interaction and human factors (in technology) has been ongoing for several decades.(fn9) Furthermore, discussions with leading computer scientists active during the period when password-complexity policies first were developed indicated that, even if this understanding went unpublished, at least some key segments of the computer-security community recognized that raw password complexity was not a simple tradeoff. Based on this knowledge, this Article posits a revised hypothesis: notwithstanding an understanding of these factors, the policy-making processes (at the legislative, regulatory, and organizational levels) were mismatched with the processes for developing and refining technical knowledge to adapt to changing situations, particularly in the context of matters of "safety and security."

Thus, the "stovepiping...
