Please use the given PDF as a reference, and add references at the bottom of the page.
Chapter 9 discusses the concept of correlation. Assume that an agency has focused its system development and critical infrastructure data collection efforts on separate engineering management systems for different types of assets and is now working to integrate those systems. In this case, the agency has focused its data collection on two types of assets: water treatment facilities and natural gas delivery management facilities. Identify what type of critical infrastructure data collection is needed for water treatment and natural gas delivery management facilities.
Two paragraphs minimum.
To complete this assignment, you must do the following:
A) Create a new thread. As indicated above, identify what type of critical infrastructure data collection is needed for water treatment and natural gas delivery management facilities.
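As a hedged sketch of what such collection might look like in practice, the record types below model representative telemetry for the two asset classes; every class name, field name, and unit here is an illustrative assumption, not drawn from the book or from any real SCADA product or standard:

```python
from dataclasses import dataclass, field
from typing import Dict

# Hypothetical telemetry records for the two asset types; all names
# and units are illustrative assumptions.
@dataclass
class WaterTreatmentReading:
    facility_id: str
    timestamp: float
    flow_rate_gpm: float           # influent/effluent flow
    turbidity_ntu: float           # water clarity
    chlorine_residual_mg_l: float  # disinfection level
    pump_states: Dict[str, bool] = field(default_factory=dict)

@dataclass
class GasDeliveryReading:
    facility_id: str
    timestamp: float
    line_pressure_psi: float       # pipeline pressure
    flow_rate_scfh: float          # standard cubic feet per hour
    valve_states: Dict[str, bool] = field(default_factory=dict)
    leak_sensor_ppm: float = 0.0   # methane concentration at sensors

# A correlation system (Chapter 9's topic) would first normalize both
# record types into a common schema before cross-asset analysis.
def normalize(reading) -> Dict[str, object]:
    return {"facility": reading.facility_id,
            "time": reading.timestamp,
            "type": type(reading).__name__}

r = WaterTreatmentReading("wt-01", 1700000000.0, 1250.0, 0.3, 1.1)
print(normalize(r)["type"])  # prints "WaterTreatmentReading"
```

The point of the sketch is that integration across asset types requires a shared normalization step, since the raw telemetry schemas differ.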
“Dr. Amoroso’s fifth book Cyber Attacks: Protecting National Infrastructure outlines the challenges of protecting our nation’s infrastructure from cyber attack using security techniques established to protect much smaller and less complex environments. He proposes a brand new type of national infrastructure protection methodology and outlines a strategy presented as a series of ten basic design and operations principles ranging from deception to response. The bulk of the text covers each of these principles in technical detail. While several of these principles would be daunting to implement and practice, they provide the first clear and concise framework for discussion of this critical challenge. This text is thought-provoking and should be a ‘must read’ for anyone concerned with cybersecurity in the private or government sector.”
— Clayton W. Naeve, Ph.D., Senior Vice President and Chief Information Officer,
Endowed Chair in Bioinformatics, St. Jude Children’s Research Hospital,
Memphis, TN
“Dr. Ed Amoroso reveals in plain English the threats and weaknesses of our critical infrastructure, balanced against practices that reduce those exposures. This is an excellent guide to understanding the cyber-scape that the security professional navigates. The book takes complex concepts of security and simplifies them into coherent and simple-to-understand concepts.”
— Arnold Felberbaum, Chief IT Security & Compliance Officer,
Reed Elsevier
“The national infrastructure, which is now vital to communication, commerce and entertainment in everyday life, is highly vulnerable to malicious attacks and terrorist threats. Today, it is possible for botnets to penetrate millions of computers around the world in a few minutes, and to attack the valuable national infrastructure.
“As the New York Times reported, the growing number of threats by botnets suggests that this cyber security issue has become a serious problem, and we are losing the war against these attacks.
“While computer security technologies will be useful for network systems, the reality tells us that this conventional approach is not effective enough for the complex, large-scale national infrastructure.

“Not only does the author provide comprehensive methodologies based on 25 years of experience in cyber security at AT&T, but he also suggests ‘security through obscurity,’ which attempts to use secrecy to provide security.”
— Byeong Gi Lee, President, IEEE Communications Society, and
Commissioner of the Korea Communications Commission (KCC)
Cyber Attacks
Protecting National Infrastructure
Edward G. Amoroso
AMSTERDAM • BOSTON • HEIDELBERG • LONDON • NEW YORK • OXFORD • PARIS • SAN DIEGO • SAN FRANCISCO • SINGAPORE • SYDNEY • TOKYO

Butterworth-Heinemann is an imprint of Elsevier
Acquiring Editor: Pam Chester
Development Editor: Gregory Chalson
Project Manager: Paul Gottehrer
Designer: Alisa Andreola
Butterworth-Heinemann is an imprint of Elsevier 30 Corporate Drive, Suite 400, Burlington, MA 01803, USA
© 2011 Elsevier Inc. All rights reserved
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein).
Notices

Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods or professional practices may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information or methods described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein.
Library of Congress Cataloging-in-Publication Data
Amoroso, Edward G.
Cyber attacks : protecting national infrastructure / Edward Amoroso.
p. cm.
Includes index.
ISBN 978-0-12-384917-5
1. Cyberterrorism—United States—Prevention. 2. Computer security—United States. 3. National security—United States. I. Title.
HV6773.2.A47 2011
363.325′90046780973—dc22
2010040626
British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library.
Printed in the United States of America 10 11 12 13 14 10 9 8 7 6 5 4 3 2 1
For information on all BH publications visit our website at www.elsevierdirect.com/security
CONTENTS

Preface
Acknowledgment

Chapter 1 Introduction
    National Cyber Threats, Vulnerabilities, and Attacks
    Botnet Threat
    National Cyber Security Methodology Components
    Deception
    Separation
    Diversity
    Consistency
    Depth
    Discretion
    Collection
    Correlation
    Awareness
    Response
    Implementing the Principles Nationally

Chapter 2 Deception
    Scanning Stage
    Deliberately Open Ports
    Discovery Stage
    Deceptive Documents
    Exploitation Stage
    Procurement Tricks
    Exposing Stage
    Interfaces Between Humans and Computers
    National Deception Program

Chapter 3 Separation
    What Is Separation?
    Functional Separation
    National Infrastructure Firewalls
    DDOS Filtering
    SCADA Separation Architecture
    Physical Separation
    Insider Separation
    Asset Separation
    Multilevel Security (MLS)

Chapter 4 Diversity
    Diversity and Worm Propagation
    Desktop Computer System Diversity
    Diversity Paradox of Cloud Computing
    Network Technology Diversity
    Physical Diversity
    National Diversity Program

Chapter 5 Commonality
    Meaningful Best Practices for Infrastructure Protection
    Locally Relevant and Appropriate Security Policy
    Culture of Security Protection
    Infrastructure Simplification
    Certification and Education
    Career Path and Reward Structure
    Responsible Past Security Practice
    National Commonality Program

Chapter 6 Depth
    Effectiveness of Depth
    Layered Authentication
    Layered E-Mail Virus and Spam Protection
    Layered Access Controls
    Layered Encryption
    Layered Intrusion Detection
    National Program of Depth

Chapter 7 Discretion
    Trusted Computing Base
    Security Through Obscurity
    Information Sharing
    Information Reconnaissance
    Obscurity Layers
    Organizational Compartments
    National Discretion Program

Chapter 8 Collection
    Collecting Network Data
    Collecting System Data
    Security Information and Event Management
    Large-Scale Trending
    Tracking a Worm
    National Collection Program

Chapter 9 Correlation
    Conventional Security Correlation Methods
    Quality and Reliability Issues in Data Correlation
    Correlating Data to Detect a Worm
    Correlating Data to Detect a Botnet
    Large-Scale Correlation Process
    National Correlation Program

Chapter 10 Awareness
    Detecting Infrastructure Attacks
    Managing Vulnerability Information
    Cyber Security Intelligence Reports
    Risk Management Process
    Security Operations Centers
    National Awareness Program

Chapter 11 Response
    Pre- Versus Post-Attack Response
    Indications and Warning
    Incident Response Teams
    Forensic Analysis
    Law Enforcement Issues
    Disaster Recovery
    National Response Program

Appendix Sample National Infrastructure Protection Requirements
    Sample Deception Requirements (Chapter 2)
    Sample Separation Requirements (Chapter 3)
    Sample Diversity Requirements (Chapter 4)
    Sample Commonality Requirements (Chapter 5)
    Sample Depth Requirements (Chapter 6)
    Sample Discretion Requirements (Chapter 7)
    Sample Collection Requirements (Chapter 8)
    Sample Correlation Requirements (Chapter 9)
    Sample Awareness Requirements (Chapter 10)
    Sample Response Requirements (Chapter 11)

Index
PREFACE
Man did not enter into society to become worse than he was before, nor to have fewer rights than he had before, but to have those rights better secured.
— Thomas Paine, Common Sense
Before you invest any of your time with this book, please take a moment and look over the following points. They outline my basic philosophy of national infrastructure security. I think that your reaction to these points will give you a pretty good idea of what your reaction will be to the book.

1. Citizens of free nations cannot hope to express or enjoy their freedoms if basic security protections are not provided. Security does not suppress freedom—it makes freedom possible.
2. In virtually every modern nation, computers and networks power critical infrastructure elements. As a result, cyber attackers can use computers and networks to damage or ruin the infrastructures that citizens rely on.
3. Security protections, such as those in security books, were designed for small-scale environments such as enterprise computing environments. These protections do not extrapolate to the protection of massively complex infrastructure.
4. Effective national cyber protections will be driven largely by cooperation and coordination between commercial, industrial, and government organizations. Thus, organizational management issues will be as important to national defense as technical issues.
5. Security is a process of risk reduction, not risk removal. Therefore, concrete steps can and should be taken to reduce, but not remove, the risk of cyber attack to national infrastructure.
6. The current risk of catastrophic cyber attack to national infrastructure must be viewed as extremely high, by any realistic measure. Taking little or no action to reduce this risk would be a foolish national decision.

The chapters of this book are organized around ten basic principles that will reduce the risk of cyber attack to national infrastructure in a substantive manner. They are driven by experiences gained managing the security of one of the largest, most complex infrastructures in the world, by years of learning from various commercial and government organizations, and by years of interaction with students and academic researchers in the security field. They are also driven by personal experiences dealing with a wide range of successful and unsuccessful cyber attacks, including ones directed at infrastructure of considerable value. The implementation of the ten principles in this book will require national resolve and changes to the way computing and networking elements are designed, built, and operated in the context of national infrastructure. My hope is that the suggestions offered in these pages will make this process easier.
ACKNOWLEDGMENT
The cyber security experts in the AT&T Chief Security Office, my colleagues across AT&T Labs and the AT&T Chief Technology Office, my colleagues across the entire AT&T business, and my graduate and undergraduate students in the Computer Science Department at the Stevens Institute of Technology have had a profound impact on my thinking and on the contents of this book. In addition, many prominent enterprise customers of AT&T whom I’ve had the pleasure of serving, especially those in the United States Federal Government, have been great influencers in the preparation of this material.
I’d also like to extend a great thanks to my wife Lee, daughter Stephanie (17), son Matthew (15), and daughter Alicia (9) for their collective patience with my busy schedule.
Edward G. Amoroso Florham Park, NJ September 2010
Cyber Attacks. DOI: 10.1016/B978-0-12-384917-5.00001-9 © 2011 Elsevier Inc. All rights reserved.
INTRODUCTION

Somewhere in his writings—and I regret having forgotten where—John Von Neumann draws attention to what seemed to him a contrast. He remarked that for simple mechanisms it is often easier to describe how they work than what they do, while for more complicated mechanisms it was usually the other way round.

— Edsger W. Dijkstra [1]
National infrastructure refers to the complex, underlying delivery and support systems for all large-scale services considered absolutely essential to a nation. These services include emergency response, law enforcement databases, supervisory control and data acquisition (SCADA) systems, power control networks, military support services, consumer entertainment systems, financial applications, and mobile telecommunications. Some national services are provided directly by government, but most are provided by commercial groups such as Internet service providers, airlines, and banks. In addition, certain services considered essential to one nation might include infrastructure support that is controlled by organizations from another nation. This global interdependency is consistent with the trends referred to collectively by Thomas Friedman as a “flat world.” [2]
National infrastructure, especially in the United States, has always been vulnerable to malicious physical attacks such as equipment tampering, cable cuts, facility bombing, and asset theft. The events of September 11, 2001, for example, are the most prominent and recent instance of a massive physical attack directed at national infrastructure. During the past couple of decades, however, vast portions of national infrastructure have become reliant on software, computers, and networks. This reliance typically includes remote access, often over the Internet, to the systems that control national services. Adversaries thus can initiate cyber attacks on infrastructure using worms, viruses, leaks, and the like. These attacks indirectly target national infrastructure through their associated automated control systems (see Figure 1.1).

[1] E.W. Dijkstra, Selected Writings on Computing: A Personal Perspective, Springer-Verlag, New York, 1982, pp. 212–213.
[2] T. Friedman, The World Is Flat: A Brief History of the Twenty-First Century, Farrar, Straus, and Giroux, New York, 2007. (Friedman provides a useful economic backdrop to the global aspect of the cyber attack trends suggested in this chapter.)
A seemingly obvious approach to dealing with this national cyber threat would involve the use of well-known computer security techniques. After all, computer security has matured substantially in the past couple of decades, and considerable expertise now exists on how to protect software, computers, and networks. In such a national scheme, safeguards such as firewalls, intrusion detection systems, antivirus software, passwords, scanners, audit trails, and encryption would be directly embedded into infrastructure, just as they are currently in small-scale environments. These national security systems would be connected to a centralized threat management system, and incident response would follow a familiar sort of enterprise process. Furthermore, to ensure security policy compliance, one would expect the usual programs of end-user awareness, security training, and third-party audit to be directed toward the people building and operating national infrastructure. Virtually every national infrastructure protection initiative proposed to date has followed this seemingly straightforward path. [3]
While well-known computer security techniques will certainly be useful for national infrastructure, most practical experience to date suggests that this conventional approach will not be sufficient. A primary reason is the size, scale, and scope inherent in complex national infrastructure. For example, where an enterprise might involve manageably sized assets, national infrastructure will require unusually powerful computing support with the ability to handle enormous volumes of data. Such volumes
[Figure 1.1 National infrastructure cyber and physical attacks: indirect cyber attacks (“worms, viruses, leaks”) and direct physical attacks (“tampering, cuts, bombs”) target national infrastructure through its automated control elements (software, computers, networks).]
[3] Executive Office of the President, Cyberspace Policy Review: Assuring a Trusted and Resilient Information and Communications Infrastructure, U.S. White House, Washington, D.C., 2009 (http://handle.dtic.mil/100.2/ADA501541).
will easily exceed the storage and processing capacity of typical enterprise security tools such as a commercial threat management system. Unfortunately, this incompatibility conflicts with current initiatives in government and industry to reduce costs through the use of common commercial off-the-shelf products.
In addition, whereas enterprise systems can rely on manual intervention by a local expert during a security disaster, large-scale national infrastructure generally requires a carefully orchestrated response by teams of security experts using predetermined processes. These teams of experts will often work in different groups, organizations, or even countries. In the worst cases, they will cooperate only if forced by government, often sharing just the minimum amount of information to avoid legal consequences. An additional problem is that the complexity associated with national infrastructure leads to the bizarre situation where response teams often have partial or incorrect understanding about how the underlying systems work. For these reasons, seemingly convenient attempts to apply existing small-scale security processes to large-scale infrastructure attacks will ultimately fail (see Figure 1.2).
As a result, a brand-new type of national infrastructure protection methodology is required—one that combines the best elements of existing computer and network security techniques with the unique and difficult challenges associated with complex, large-scale national services. This book offers just such a protection methodology for national infrastructure. It is based on a quarter century of practical experience designing, building, and operating
[Figure 1.2 Differences between small- and large-scale cyber security (large-scale attributes complicate cyber security):

Attribute   | Small-Scale     | Large-Scale
Collection  | Small Volume    | High Volume
Emergency   | Possibly Manual | Process-Based
Expertise   | Local Expert    | Distributed Expertise
Knowledge   | High            | Partial or Incorrect
Analysis    | Focused         | Broad]
National infrastructure databases far exceed the size of even the largest commercial databases.
cyber security systems for government, commercial, and consumer infrastructure. It is represented as a series of protection principles that can be applied to new or existing systems. Because of the unique needs of national infrastructure, especially its massive size, scale, and scope, some aspects of the methodology will be unfamiliar to the computer security community. In fact, certain elements of the approach, such as our favorable view of “security through obscurity,” might appear in direct conflict with conventional views of how computers and networks should be protected.
National Cyber Threats, Vulnerabilities, and Attacks

Conventional computer security is based on the oft-repeated taxonomy of security threats which includes confidentiality, integrity, availability, and theft. In the broadest sense, all four diverse threat types will have applicability in national infrastructure. For example, protections are required equally to deal with sensitive information leaks (confidentiality), worms affecting the operation of some critical application (integrity), botnets knocking out an important system (availability), or citizens having their identities compromised (theft). Certainly, the availability threat to national services must be viewed as particularly important, given the nature of the threat and its relation to national assets. One should thus expect particular attention to availability threats to national infrastructure. Nevertheless, it makes sense to acknowledge that all four types of security threats in the conventional taxonomy of computer security must be addressed in any national infrastructure protection methodology.
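The four-part taxonomy can be sketched as a small lookup from the chapter's example incidents to threat categories (a minimal illustration; the dictionary keys are paraphrases of the text's examples, not quotations):

```python
from enum import Enum

# The conventional four-part threat taxonomy the chapter describes.
class Threat(Enum):
    CONFIDENTIALITY = "confidentiality"
    INTEGRITY = "integrity"
    AVAILABILITY = "availability"
    THEFT = "theft"

# Map the chapter's example incidents to the four threat types.
EXAMPLES = {
    "sensitive information leak": Threat.CONFIDENTIALITY,
    "worm affecting a critical application": Threat.INTEGRITY,
    "botnet knocking out an important system": Threat.AVAILABILITY,
    "citizens having identities compromised": Threat.THEFT,
}

print(EXAMPLES["botnet knocking out an important system"].value)  # prints "availability"
```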
Vulnerabilities are more difficult to associate with any taxonomy. Obviously, national infrastructure must address well-known problems such as improperly configured equipment, poorly designed local area networks, unpatched system software, exploitable bugs in application code, and locally disgruntled employees. The problem is that the most fundamental vulnerability in national infrastructure involves the staggering complexity inherent in the underlying systems. This complexity is so pervasive that many times security incidents uncover aspects of computing functionality that were previously unknown to anyone, including sometimes the system designers. Furthermore, in certain cases, the optimal security solution involves simplifying and cleaning up poorly conceived infrastructure. This is bad news, because most large organizations are inept at simplifying much of anything.
Any of the most common security concerns (confidentiality, integrity, availability, and theft) threaten our national infrastructure.

The best one can do for a comprehensive view of the vulnerabilities associated with national infrastructure is to address their relative exploitation points. This can be done with an abstract national infrastructure cyber security model that includes three types of malicious adversaries: external adversary (hackers on the Internet), internal adversary (trusted insiders), and supplier adversary (vendors and partners). Using this model, three exploitation points emerge for national infrastructure: remote access (Internet and telework), system administration and normal usage (management and use of software, computers, and networks), and supply chain (procurement and outsourcing) (see Figure 1.3).
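Since the model pairs each of the three adversaries with each of the three exploitation points, the space of candidate attack paths can be enumerated directly (a minimal sketch of the abstract model; the tuple labels are shorthand, not terminology from the text):

```python
from itertools import product

ADVERSARIES = ("external", "internal", "supplier")
EXPLOITATION_POINTS = ("remote access",
                       "system administration and normal usage",
                       "supply chain")

# Every adversary/exploitation-point pairing is a candidate attack
# path under the abstract model: nine pairs in total.
attack_paths = list(product(ADVERSARIES, EXPLOITATION_POINTS))
print(len(attack_paths))  # prints 9
```

Enumerating the pairs makes the coverage argument concrete: a protection methodology has to account for all nine combinations, not just the familiar external-adversary cases.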
These three exploitation points and three types of adversaries can be associated with a variety of possible motivations for initiating either a full or test attack on national infrastructure.
[Figure 1.3 Adversaries and exploitation points in national infrastructure: three adversaries (external adversary, internal adversary, supplier adversary) reach the software, computers, and networks of national infrastructure through three exploitation points (remote access; system administration and normal usage; supply chain).]
Five Possible Motivations for an Infrastructure Attack
● Country-sponsored warfare —National infrastructure attacks sponsored and funded by enemy countries must be considered the most signifi cant potential motivation, because the intensity of adversary capability and willingness to attack is potentially unlimited.
● Terrorist attack —The terrorist motivation is also signifi cant, especially because groups driven by terror can easily obtain suffi cient capability and funding to perform signifi cant attacks on infrastructure.
● Commercially motivated attack —When one company chooses to utilize cyber attacks to gain a commercial advantage, it becomes a national infrastructure incident if the target company is a purveyor of some national asset.
● Financially driven criminal attack —Identity theft is the most common example of a financially driven attack by criminal groups, but other cases exist, such as companies being extorted to avoid a cyber incident.
● Hacking —One must not forget that many types of attacks are still driven by the motivation of hackers, who are often just mischievous youths trying to learn or to build a reputation within the hacking community. This is a much less sinister motivation, and national leaders should try to identify better ways to tap this boundless capability and energy.
Each of the three exploitation points might be utilized in a cyber attack on national infrastructure. For example, a supplier might use a poorly designed supply chain to insert Trojan horse code into a software component that controls some national asset, or a hacker on the Internet might take advantage of some unprotected Internet access point to break into a vulnerable service. Similarly, an insider might use trusted access for either system administration or normal system usage to create an attack. The potential also exists for an external adversary to gain valuable insider access through patient, measured means, such as gaining employment in an infrastructure-supporting organization and then becoming trusted through a long process of work performance. In each case, the possibility exists that a limited type of engagement might be performed as part of a planned test or exercise. This seems especially likely if the attack is country or terrorist sponsored, because it is consistent with past practice.
At each exploitation point, the vulnerability being used might be a well-known problem previously reported in an authoritative public advisory, or it could be a proprietary issue kept hidden by a local organization. It is entirely appropriate for a recognized authority to make a detailed public vulnerability advisory if the benefits of notifying the good guys outweigh the risks of alerting the bad guys. This cost–benefit result usually occurs when many organizations can directly benefit from the information and can thus take immediate action. When the reported vulnerability is unique and isolated, however, then reporting the details might be irresponsible, especially if the notification process does not enable a more timely fix. This is a key issue, because many government authorities continue to consider new rules for mandatory reporting. If the information being demanded is not properly protected, then the reporting process might result in more harm than good.
When to issue a vulnerability risk advisory and when to keep the risk confidential must be determined on a case-by-case basis, depending on the threat.

Botnet Threat

Perhaps the most insidious type of attack that exists today is the botnet.4 In short, a botnet involves remote control of a collection of compromised end-user machines, usually broadband-connected PCs. The controlled end-user machines, which are referred to as bots, are programmed to attack some target that is designated by the botnet controller. The attack is tough to stop because end-user machines are typically administered in an ineffective manner. Furthermore, once the attack begins, it occurs from sources potentially scattered across geographic, political, and service provider boundaries. Perhaps worse, bots are programmed to take commands from multiple controller systems, so any attempts to destroy a given controller result in the bots simply homing to another one.

4 Much of the material on botnets in this chapter is derived from work done by Brian Rexroad, David Gross, and several others from AT&T.
The Five Entities That Comprise a Botnet Attack

● Botnet operator —This is the individual, group, or country that creates the botnet, including its setup and operation. When the botnet is used for financial gain, it is the operator who will benefit. Law enforcement and cyber security initiatives have found it very difficult to identify the operators. The press, in particular, has done a poor job reporting on the presumed identity of botnet operators, often suggesting sponsorship by some country when little supporting evidence exists.
● Botnet controller —This is the set of servers that command and control the operation of a botnet. Usually these servers have been maliciously compromised for this purpose. Many times, the real owner of a server that has been compromised will not even realize what has occurred. The type of activity directed by a controller includes all recruitment, setup, communication, and attack activity. Typical botnets include a handful of controllers, usually distributed across the globe in a non-obvious manner.
● Collection of bots —These are the end-user, broadband-connected PCs infected with botnet malware. They are usually owned and operated by normal citizens, who become unwitting and unknowing dupes in a botnet attack. When a botnet includes a concentration of PCs in a given region, observers often incorrectly attribute the attack to that region. The use of smart mobile devices in a botnet will grow as upstream capacity and device processing power increase.
● Botnet software drop —Most botnets include servers designed to store software that might be useful for the botnets during their lifecycle. Military personnel might refer to this as an arsenal . Like controllers, botnet software drop points are usually servers compromised for this purpose, often unknown to the normal server operator.
● Botnet target —This is the location that is targeted in the attack. Usually, it is a website, but it can really be any device, system, or network that is visible to the bots. In most cases, botnets target prominent and often controversial websites, simply because they are visible via the Internet and generally have a great deal at stake in terms of their availability. This increases gain and leverage for the attacker. Logically, however, botnets can target anything visible.
The way a botnet works is that the controller is set up to communicate with the bots via some designated protocol, most often Internet Relay Chat (IRC). This is done via malware inserted into the end-user PCs that comprise the bots. A great challenge in this regard is that home PCs and laptops are so poorly administered. Amazingly, over time, the day-to-day system and security administration task for home computers has gravitated to the end user.
This obligation results in both a poor user experience and general dissatisfaction with the security task. For example, when a typical computer buyer brings a new machine home, it has probably been preloaded with security software by the retailer. From this point onward, however, that home buyer is then tasked with all responsibility for protecting the machine. This includes keeping firewall, intrusion detection, antivirus, and antispam software up to date, as well as ensuring that all software patches are current. When these tasks are not well attended, the result is a more vulnerable machine that is easily turned into a bot. (Sadly, even if a machine is properly managed, expert bot software designers might find a way to install the malware anyway.)
Once a group of PCs has been compromised into bots, attacks can thus be launched by the controller via a command to the bots, which would then do as they are instructed. This might not occur instantaneously with the infection; in fact, experience suggests that many botnets lie dormant for a great deal of time. Nevertheless, all sorts of attacks are possible in a botnet arrangement, including the now-familiar distributed denial of service (DDOS) attack. In such a case, the bots create more inbound traffic than the target gateway can handle. For example, if some theoretical gateway allows for 1 Gbps of inbound traffic, and the botnet creates an inbound stream larger than 1 Gbps, then a logjam results at the inbound gateway, and a denial of service condition occurs (see Figure 1.4).
Figure 1.4 Sample DDOS attack from a botnet.

Home PC users may never know they are being used for a botnet scheme.

A DDOS attack is like a cyber traffic jam.

Any serious present study of cyber security must acknowledge the unique threat posed by botnets. Virtually any Internet-connected system is vulnerable to major outages from a botnet-originated DDOS attack. The physics of the situation are especially depressing; that is, a botnet that might steal 500 Kbps of upstream capacity from each bot (which would generally allow for concurrent normal computing and networking) would only need three bots to collapse a target T1 connection. Following this logic, only 16,000 bots would be required theoretically to fill up a 10-Gbps connection. Because most of the thousands of botnets that have been observed on the Internet are at least this size, the threat is obvious; however, many recent and prominent botnets such as Storm and Conficker are much larger, comprising as many as several million bots, so the threat to national infrastructure is severe and immediate.
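This arithmetic can be checked with a short sketch (using the chapter's illustrative figure of 500 Kbps of stolen upstream capacity per bot; note that exact division yields slightly larger counts than the rounded numbers quoted in the text, which are the same order of magnitude):

```python
import math

def bots_to_saturate(link_bps: float, per_bot_bps: float = 500_000) -> int:
    """Minimum number of bots whose combined upstream traffic
    meets or exceeds the target link's inbound capacity."""
    return math.ceil(link_bps / per_bot_bps)

T1_BPS = 1.544e6   # a T1 line carries 1.544 Mbps
TEN_GBPS = 10e9

print(bots_to_saturate(T1_BPS))    # 4 (the chapter rounds this to "three bots")
print(bots_to_saturate(TEN_GBPS))  # 20000 (the chapter cites roughly 16,000)
```

Either way, the conclusion stands: botnets of thousands of machines are routine, so saturating even a 10-Gbps link is well within observed botnet sizes.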
National Cyber Security Methodology Components

Our proposed methodology for protecting national infrastructure is presented as a series of ten basic design and operation principles. The implication is that, by using these principles as a guide for either improving existing infrastructure components or building new ones, the security result will be desirable, including a reduced risk from botnets. The methodology addresses all four types of security threats to national infrastructure; it also deals with all three types of adversaries to national infrastructure, as well as the three exploitation points detailed in the infrastructure model. The list of principles in the methodology serves as a guide to the remainder of this chapter, as well as an outline for the remaining chapters of the book:

● Chapter 2: Deception —The openly advertised use of deception creates uncertainty for adversaries because they will not know if a discovered problem is real or a trap. The more common hidden use of deception allows for real-time behavioral analysis if an intruder is caught in a trap. Programs of national infrastructure protection must include the appropriate use of deception, especially to reduce the malicious partner and supplier risk.
● Chapter 3: Separation —Network separation is currently accomplished using firewalls, but programs of national infrastructure protection will require three specific changes. Specifically, national infrastructure must include network-based firewalls on high-capacity backbones to throttle DDOS attacks, internal firewalls to segregate infrastructure and reduce the risk of sabotage, and better tailoring of firewall features for specific applications such as SCADA protocols.5
5 R. Krutz, Securing SCADA Systems, Wiley, New York, 2006. (Krutz provides an excellent overview of SCADA systems and the current state of the practice in securing them.)
● Chapter 4: Diversity —Maintaining diversity in the products, services, and technologies supporting national infrastructure reduces the chances that one common weakness can be exploited to produce a cascading attack. A massive program of coordinated procurement and supplier management is required to achieve a desired level of national diversity across all assets. This will be tough, because it conflicts with most cost-motivated information technology procurement initiatives designed to minimize diversity in infrastructure.
● Chapter 5: Commonality —The consistent use of security best practices in the administration of national infrastructure ensures that no infrastructure component is either poorly managed or left completely unguarded. National programs of standards selection and audit validation, especially with an emphasis on uniform programs of simplification, are thus required. This can certainly include citizen end users, but one should never rely on high levels of security compliance in the broad population.
● Chapter 6: Depth —The use of defense in depth in national infrastructure ensures that no critical asset is reliant on a single security layer; thus, if any layer should fail, an additional layer is always present to mitigate an attack. Analysis is required at the national level to ensure that all critical assets are protected by at least two layers, preferably more.
● Chapter 7: Discretion —The use of personal discretion in the sharing of information about national assets is a practical technique that many computer security experts find difficult to accept because it conflicts with popular views on “security through obscurity.” Nevertheless, large-scale infrastructure protection cannot be done properly unless a national culture of discretion and secrecy is nurtured. It goes without saying that such discretion should never be put in place to obscure illegal or unethical practices.
● Chapter 8: Collection —The collection of audit log information is a necessary component of an infrastructure security scheme, but it introduces privacy, size, and scale issues not seen in smaller computer and network settings. National infrastructure protection will require a data collection approach that is acceptable to the citizenry and provides the requisite level of detail for security analysis.
● Chapter 9: Correlation —Correlation is the most fundamental of all analysis techniques for cyber security, but modern attack methods such as botnets greatly complicate its use for attack-related indicators. National-level correlation must be performed using all available sources and the best available technology and algorithms. Correlating information around a botnet attack is one of the more challenging present tasks in cyber security.
● Chapter 10: Awareness —Maintaining situational awareness is more important in large-scale infrastructure protection than in traditional computer and network security because it helps to coordinate the real-time aspect of multiple infrastructure components. A program of national situational awareness must be in place to ensure proper management decision-making for national assets.
● Chapter 11: Response —Incident response for national infrastructure protection is especially difficult because it generally involves complex dependencies and interactions between disparate government and commercial groups. It is best accomplished at the national level when it focuses on early indications, rather than on incidents that have already begun to damage national assets.

The balance of this chapter will introduce each principle, with discussion on its current use in computer and network security, as well as its expected benefits for national infrastructure protection.
Deception

The principle of deception involves the deliberate introduction of misleading functionality or misinformation into national infrastructure for the purpose of tricking an adversary. The idea is that an adversary would be presented with a view of national infrastructure functionality that might include services or interface components that are present for the sole purpose of fakery. Computer scientists refer to this functionality as a honey pot, but the use of deception for national infrastructure could go far beyond this conventional view. Specifically, deception can be used to protect against certain types of cyber attacks that no other security method will handle. Law enforcement agencies have been using deception effectively for many years, often catching cyber stalkers and criminals by spoofing the reported identity of an end point. Even in the presence of such obvious success, however, the cyber security community has yet to embrace deception as a mainstream protection measure.
Deception in computing typically involves a layer of cleverly designed trap functionality strategically embedded into the internal and external interfaces for services. Stated more simply, deception involves fake functionality embedded into real interfaces. An example might be a deliberately planted trap link on a website that would lead potential intruders into an environment designed to highlight adversary behavior. When the deception is open and not secret, it might introduce uncertainty for adversaries in the exploitation of real vulnerabilities, because the adversary might suspect that the discovered entry point is a trap. When it is hidden and stealthy, which is the more common situation, it serves as the basis for real-time forensic analysis of adversary behavior. In either case, the result is a public interface that includes real services, deliberate honey pot traps, and the inevitable exploitable vulnerabilities that unfortunately will be present in all nontrivial interfaces (see Figure 1.5).

Deception is an oft-used tool by law enforcement agencies to catch cyber stalkers and predators.
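As an illustrative sketch of the trap-link idea (all route names here are invented, not from the text), a web front end might expose both real routes and a planted decoy route, logging any touch of the decoy for forensic analysis:

```python
# Minimal sketch of a trap link embedded among real routes.
# "/admin-backup" is a hypothetical decoy path; nothing legitimate
# links to it, so any request for it is treated as suspicious.

REAL_ROUTES = {"/", "/services", "/contact"}
TRAP_ROUTES = {"/admin-backup"}

forensic_log = []  # in practice this would feed real-time behavioral analysis

def handle_request(path: str, client: str) -> str:
    if path in TRAP_ROUTES:
        forensic_log.append((client, path))  # record adversary behavior
        return "200 OK (decoy content)"      # keep the intruder engaged
    if path in REAL_ROUTES:
        return "200 OK"
    return "404 Not Found"

handle_request("/services", "10.0.0.5")         # normal user, nothing logged
handle_request("/admin-backup", "203.0.113.9")  # tripped the trap
print(forensic_log)  # [('203.0.113.9', '/admin-backup')]
```

Note that the decoy responds with plausible content rather than an error; as discussed below, believability of the trap is the hard part.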
Only relatively minor tests of honey pot technology have been reported to date, usually in the context of a research effort. Almost no reports are available on the day-to-day use of deception as a structural component of a real enterprise security program. In fact, the vast majority of security programs for companies, government agencies, and national infrastructure would include no such functionality. Academic computer scientists have shown little interest in this type of security, as evidenced by the relatively thin body of literature on the subject. This lack of interest might stem from the discomfort associated with using computing to mislead. Another explanation might be the relative ineffectiveness of deception against the botnet threat, which is clearly the most important security issue on the Internet today. Regardless of the cause, this tendency to avoid the use of deception is unfortunate, because many cyber attacks, such as subtle break-ins by trusted insiders and Trojan horses being maliciously inserted by suppliers into delivered software, cannot be easily remedied by any other means.
Figure 1.5 Components of an interface with deception.

Deception is less effective against botnets than other types of attack methods.

The most direct benefit of deception is that it enables forensic analysis of intruder activity. By using a honey pot, unique insights into attack methods can be gained by watching what is occurring in real time. Such deception obviously works best in a hidden, stealth mode, unknown to the intruder, because if the intruder realizes that some vulnerable exploitation point is a fake, then no exploitation will occur. Honey pot pioneers Cliff Stoll, Bill Cheswick, and Lance Spitzner have provided a majority of the reported experience in real-time forensics using honey pots. They have all suggested that the most difficult task involves creating believability in the trap. It is worth noting that connecting a honey pot to real assets is a terrible idea.
An additional potential benefit of deception is that it can introduce the clever idea that some discovered vulnerability might instead be a deliberately placed trap. Obviously, such an approach is only effective if the use of deception is not hidden; that is, the adversary must know that deception is an approved and accepted technique used for protection. It should therefore be obvious that the major advantage here is that an accidental vulnerability, one that might previously have been an open door for an intruder, will suddenly look like a possible trap. A further profound notion, perhaps for open discussion, is whether just the implied statement that deception might be present (perhaps without real justification) would actually reduce risk. Suppliers, for example, might be less willing to take the risk of Trojan horse insertion if the procuring organization advertises an open research and development program of detailed software test and inspection against this type of attack.
Separation

The principle of separation involves enforcement of access policy restrictions on the users and resources in a computing environment. Access policy restrictions result in separation domains, which are arguably the most common security architectural concept in use today. This is good news, because the creation of access-policy-based separation domains will be essential in the protection of national infrastructure. Most companies today will typically use firewalls to create perimeters around their presumed enterprise, and access decisions are embedded in the associated rules sets. This use of enterprise firewalls for separation is complemented by several other common access techniques:

● Authentication and identity management —These methods are used to validate and manage the identities on which separation decisions are made. They are essential in every enterprise but cannot be relied upon solely for infrastructure security. Malicious insiders, for example, will be authorized under such systems. In addition, external attacks such as DDOS are unaffected by authentication and identity management.
Do not connect honey pots to real assets!
● Logical access controls —The access controls inherent in operating systems and applications provide some degree of separation, but they are also weak in the presence of compromised insiders. Furthermore, underlying vulnerabilities in applications and operating systems can often be used to subvert these methods.
● LAN controls —Access control lists on local area network (LAN) components can provide separation based on information such as Internet Protocol (IP) or media access control (MAC) address. In this regard, they are very much like firewalls but typically do not extend their scope beyond an isolated segment.
● Firewalls —For large-scale infrastructure, firewalls are particularly useful, because they separate one network from another. Today, every Internet-based connection is almost certainly protected by some sort of firewall functionality. This approach worked especially well in the early years of the Internet, when the number of Internet connections to the enterprise was small. Firewalls do remain useful, however, even with the massive connectivity of most groups to the Internet. As a result, national infrastructure should continue to include the use of firewalls to protect known perimeter gateways to the Internet.

Given the massive scale and complexity associated with national infrastructure, three specific separation enhancements are required, and all are extensions of the firewall concept.
Required Separation Enhancements for National Infrastructure Protection
1. The use of network-based firewalls is absolutely required for many national infrastructure applications, especially ones vulnerable to DDOS attacks from the Internet. This use of network-based mediation can take advantage of high-capacity network backbones if the service provider is involved in running the firewalls.
2. The use of firewalls to segregate and isolate internal infrastructure components from one another is a mandatory technique for simplifying the implementation of access control policies in an organization. When insiders have malicious intent, any exploit they might attempt should be explicitly contained by internal firewalls.
3. The use of commercial off-the-shelf firewalls, especially for SCADA usage, will require tailoring of the firewall to the unique protocol needs of the application. It is not acceptable for national infrastructure protection to retrofit the use of a generic, commercial, off-the-shelf tool that is not optimized for its specific use (see Figure 1.6).
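A toy sketch of the internal-segregation idea in enhancement 2 (zone names and rules here are invented for illustration): traffic between infrastructure domains is denied unless an explicit rule allows it, so a malicious insider in one zone is contained rather than given free run of the organization.

```python
# Default-deny policy between internal zones; only listed flows pass.
# Zone names and the SCADA protocol pairing are hypothetical examples.

ALLOWED_FLOWS = {
    ("corporate", "dmz", "https"),
    ("operations", "scada", "modbus"),  # tailored SCADA rule (cf. enhancement 3)
}

def permit(src_zone: str, dst_zone: str, protocol: str) -> bool:
    """Internal firewall check: deny by default (enhancement 2)."""
    return (src_zone, dst_zone, protocol) in ALLOWED_FLOWS

print(permit("operations", "scada", "modbus"))  # True: explicitly allowed
print(permit("corporate", "scada", "modbus"))   # False: insider exploit contained
```

The design choice here is default-deny between zones: an exploit launched from the corporate zone simply has no permitted path into the SCADA zone.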
With the advent of cloud computing, many enterprise and government agency security managers have come to acknowledge the benefits of network-based firewall processing. The approach scales well and helps to deal with the uncontrolled complexity one typically finds in national infrastructure. That said, the reality is that most national assets are still secured by placing a firewall at each of the hundreds or thousands of presumed choke points. This approach does not scale and leads to a false sense of security. It should also be recognized that the firewall is not the only device subjected to such scale problems. Intrusion detection systems, antivirus filtering, threat management, and denial of service filtering also require a network-based approach to function properly in national infrastructure.
An additional problem that exists in current national infrastructure is the relative lack of architectural separation used in an internal, trusted network. Most security engineers know that large systems are best protected by dividing them into smaller systems. Firewalls or packet filtering routers can be used to segregate an enterprise network into manageable domains. Unfortunately, the current state of the practice in infrastructure protection rarely includes a disciplined approach to separating internal assets. This is unfortunate, because it allows an intruder in one domain to have access to a more expansive view of the organizational infrastructure. The threat increases when the firewall has not been optimized for applications such as SCADA that require specialized protocol support.
Figure 1.6 Firewall enhancements for national infrastructure.
Parceling a network into manageable smaller domains creates an environment that is easier to protect.
Diversity

The principle of diversity involves the selection and use of technology and systems that are intentionally different in substantive ways. These differences can include technology source, programming language, computing platform, physical location, and product vendor. For national infrastructure, realizing such diversity requires a coordinated program of procurement to ensure a proper mix of technologies and vendors. The purpose of introducing these differences is to deliberately create a measure of non-interoperability so that an attack cannot easily cascade from one component to another through exploitation of some common vulnerability. Certainly, it would be possible, even in a diverse environment, for an exploit to cascade, but the likelihood is reduced as the diversity profile increases.
This concept is somewhat controversial, because so much of computer science theory and information technology practice in the past couple of decades has been focused on maximizing interoperability of technologies. This might help explain the relative lack of attentiveness that diversity considerations receive in these fields. By way of analogy, however, cyber attacks on national infrastructure are mitigated by diversity technology just as disease propagation is reduced by a diverse biological ecosystem. That is, a problem that originates in one area of infrastructure with the intention of automatic propagation will only succeed in the presence of some degree of interoperability. If the technologies are sufficiently diverse, then the attack propagation will be reduced or even stopped. As such, national asset managers are obliged to consider means for introducing diversity in a cost-effective manner to realize its security benefits (see Figure 1.7).
Figure 1.7 Introducing diversity to national infrastructure.
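The cascade argument can be sketched as a toy simulation (component chain and platform labels are invented for illustration): an exploit for one platform spreads along connected components only while they run that same platform, so a mixed deployment halts the cascade.

```python
# Toy cascade model: an exploit hops to the next component in a chain
# only when that component runs the vulnerable platform.

def compromised(platforms: list[str], exploit_platform: str) -> int:
    """Count how far an attack starting at component 0 cascades
    along a chain of components."""
    count = 0
    for platform in platforms:
        if platform != exploit_platform:
            break  # diversity stops the propagation here
        count += 1
    return count

monoculture = ["os-a", "os-a", "os-a", "os-a"]
diverse     = ["os-a", "os-b", "os-a", "os-b"]

print(compromised(monoculture, "os-a"))  # 4: attack propagates end to end
print(compromised(diverse, "os-a"))      # 1: cascade stops at the first os-b
```

This mirrors the figure's contrast: in the non-diverse chain the attack propagates through every component, while in the diverse chain propagation stops at the first dissimilar component.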
Diversity is especially tough to implement in national infrastructure for several reasons. First, it must be acknowledged that a single, major software vendor tends to currently dominate the personal computer (PC) operating system business landscape in most government and enterprise settings. This is not likely to change, so national infrastructure security initiatives must simply accept an ecosystem lacking in diversity in the PC landscape. The profile for operating system software on computer servers is slightly better from a diversity perspective, but the choices remain limited to a very small number of available sources. Mobile operating systems currently offer considerable diversity, but one cannot help but expect to see a trend toward greater consolidation.
Second, diversity conflicts with the often-found organizational goal of simplifying supplier and vendor relationships; that is, when a common technology is used throughout an organization, day-to-day maintenance, administration, and training costs are minimized. Furthermore, by purchasing in bulk, better terms are often available from a vendor. In contrast, the use of diversity could result in a reduction in the level of service provided in an organization. For example, suppose that an Internet service provider offers particularly secure and reliable network services to an organization. Perhaps the reliability is even measured to some impressive quantitative availability metric. If the organization is committed to diversity, then one might be forced to actually introduce a second provider with lower levels of reliability.
In spite of these drawbacks, diversity carries benefits that are indisputable for large-scale infrastructure. One of the great challenges in national infrastructure protection will thus involve finding ways to diversify technology products and services without increasing costs and losing business leverage with vendors.
Enforcing diversity of products and services might seem counterintuitive if you have a reliable provider.

Consistency

The principle of consistency involves uniform attention to security best practices across national infrastructure components. Determining which best practices are relevant for which national asset requires a combination of local knowledge about the asset, as well as broader knowledge of security vulnerabilities in generic infrastructure protection. Thus, the most mature approach to consistency will combine compliance with relevant standards, such as the Sarbanes–Oxley controls in the United States, with locally derived security policies that are tailored to the organizational mission. This implies that every organization charged with the design or operation of national infrastructure must have a local security policy. Amazingly, some large groups do not have such a policy today.
The types of best practices that are likely to be relevant for national infrastructure include well-defined software lifecycle methodologies, timely processes for patching software and systems, segregation of duty controls in system administration, threat management of all collected security information, security awareness training for all system administrators, operational configurations for infrastructure management, and use of software security tools to ensure proper integrity management. Most security experts agree on which best practices to include in a generic set of security requirements, as evidenced by the inclusion of a common core set of practices in every security standard. Attentiveness to consistency is thus one of the less controversial of our recommended principles.
The greatest challenge in implementing best practice consistency across infrastructure involves auditing. The typical audit process is performed by an independent third-party entity doing an analysis of target infrastructure to determine consistency with a desired standard. The result of the audit is usually a numeric score, which is then reported widely and used for management decisions. In the United States, agencies of the federal government are audited against a cyber security standard known as FISMA (Federal Information Security Management Act). While auditing does lead to improved best practice coverage, there are often problems. For example, many audits are done poorly, which results in confusion and improper management decisions. In addition, with all the emphasis on numeric ratings, many agencies focus more on their score than on good security practice.
Today, organizations charged with protecting national infrastructure are subjected to several types of security audits. Streamlining these standards would certainly be a good idea, but some additional items for consideration include improving the types of common training provided to security administrators, as well as including past practice in infrastructure protection in common audit standards. The most obvious practical consideration for national infrastructure, however, would be national-level agreement on which standard or standards would be used to determine competence to protect national assets. While this is a straightforward concept, it could be tough to obtain wide concurrence among all national participants. A related issue involves commonality in national infrastructure operational configurations; this reduces the chances that a rogue configuration is installed for malicious purposes, perhaps by compromised insiders.
A good audit score is important but should not replace good security practices.
A national standard of competence for protecting our assets is needed.
Depth
The principle of depth involves the use of multiple security layers of protection for national infrastructure assets. These layers protect assets from both internal and external attacks via the familiar "defense in depth" approach; that is, multiple layers reduce the risk of attack by increasing the chances that at least one layer will be effective. This should appear to be a somewhat sketchy situation, however, from the perspective of traditional engineering. Civil engineers, for example, would never be comfortable designing a structure with multiple flawed supports in the hopes that one of them will hold the load. Unfortunately, cyber security experts have no choice but to rely on this flawed notion, perhaps highlighting the relative immaturity of security as an engineering discipline.
One hint as to why depth is such an important requirement is that national infrastructure components are currently controlled by software, and everyone knows that the current state of software engineering is abysmal. Compared to other types of engineering, software stands out as the only one that accepts the creation of knowingly flawed products. The result is that all nontrivial software has exploitable vulnerabilities, so the idea that one should create multiple layers of security defense is unavoidable. It is worth mentioning that the degree of diversity in these layers will also have a direct impact on their effectiveness (see Figure 1.8).
Software engineering standards do not contain the same level of quality as civil and other engineering standards.
Figure 1.8 National infrastructure security through defense in depth.
To maximize the usefulness of defense layers in national infrastructure, it is recommended that a combination of functional and procedural controls be included. For example, a common first layer of defense is to install an access control mechanism for the admission of devices to the local area network. This could involve router controls in a small network or firewall access rules in an enterprise. In either case, this first line of defense is clearly functional. As such, a good choice for a second layer of defense might involve something procedural, such as the deployment of scanning to determine if inappropriate devices have gotten through the first layer. Such diversity reduces the chances that the cause of failure in one layer will cause a similar failure in another layer.
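The multi-layer argument can be made quantitative with a small sketch. Assuming (as an idealization) that layers fail independently — which is exactly what diversity between layers tries to approximate — the chance an attack penetrates every layer is the product of the per-layer miss rates. The function name and rates below are illustrative, not from the text.

```python
def penetration_probability(miss_rates):
    """Probability an attack gets through every defense layer,
    under the idealized assumption that layers fail independently.

    miss_rates: per-layer probability that an attack slips past
    that layer (e.g., 0.2 means the layer stops 80% of attacks).
    """
    p = 1.0
    for m in miss_rates:
        p *= m  # attack must slip past this layer AND all previous ones
    return p

# Three individually flawed layers, each missing 20% of attacks,
# combine to let only about 0.8% of attacks through:
p_through = penetration_probability([0.2, 0.2, 0.2])
```

The caveat in the surrounding text applies directly: if two layers share a common failure cause (no diversity), the independence assumption collapses and the real penetration probability is closer to the worst single layer's miss rate.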
A great complication in national infrastructure protection is that many layers of defense assume the existence of a defined network perimeter. For example, the presence of many flaws in enterprise security found by auditors is mitigated by the recognition that intruders would have to penetrate the enterprise perimeter to exploit these weaknesses. Unfortunately, for most national assets, finding a perimeter is no longer possible. The assets of a country, for example, are almost impossible to define within some geographic or political boundary, much less a network one. Security managers must therefore be creative in identifying controls that will be meaningful for complex assets whose properties are not always evident. The risk of getting this wrong is that in providing multiple layers of defense, one might misapply the protections and leave some portion of the asset base with no layers in place.
Discretion
The principle of discretion involves individuals and groups making good decisions to obscure sensitive information about national infrastructure. This is done by combining formal mandatory information protection programs with informal discretionary behavior. Formal mandatory programs have been in place for many years in the U.S. federal government, where documents are associated with classifications, and policy enforcement is based on clearances granted to individuals. In the most intense environments, such as top-secret compartments in the intelligence community, violations of access policies could be interpreted as espionage, with all of the associated criminal implications. For this reason, prominent breaches of highly classified government information are not common.
In commercial settings, formal information protection programs are gaining wider acceptance because of the increased need to protect personally identifiable information (PII) such as credit card numbers. Employees of companies around the world are starting to understand the importance of obscuring certain aspects of corporate activity, and this is healthy for national infrastructure protection. In fact, programs of discretion for national infrastructure protection will require a combination of corporate and government security policy enforcement, perhaps with custom-designed information markings for national assets. The resultant discretionary policy serves as a layer of protection to prevent national infrastructure-related information from reaching individuals who have no need to know such information.
Naturally, top-secret information within the intelligence community is at great risk for attack or infiltration.
A barrier in our recommended application of discretion is the maligned notion of "security through obscurity." Security experts, especially cryptographers, have long complained that obscurity is an unacceptable protection approach. They correctly reference the problems of trying to secure a system by hiding its underlying detail. Inevitably, an adversary discovers the hidden design secrets and the security protection is lost. For this reason, conventional computer security correctly dictates an open approach to software, design, and algorithms. An advantage of this open approach is the social review that comes with widespread advertisement; for example, software is unlikely ever to be correct without a significant amount of intense review by experts. So, the general computer security argument against "security through obscurity" is largely valid in most cases.
Nevertheless, any manager charged with the protection of nontrivial, large-scale infrastructure will tell you that discretion and, yes, obscurity are indispensable components in a protection program. Obscuring details around technology used, software deployed, systems purchased, and configurations managed will help to avoid, or at least slow down, certain types of attacks. Hackers often claim that by discovering this type of information about a company and then advertising the weaknesses they are actually doing the local security team a favor. They suggest that such advertisement is required to motivate a security team toward a solution, but this is actually nonsense. Programs around proper discretion and obscurity for infrastructure information are indispensable and must be coordinated at the national level.
"Security through obscurity" may actually leave assets more vulnerable to attack than an open approach would.
Collection
The principle of collection involves automated gathering of system-related information about national infrastructure to enable security analysis. Such collection is usually done in real time and involves probes or hooks in applications, system software, network elements, or hardware devices that gather information of interest. The use of audit trails in small-scale computer security is an example of a long-standing collection practice that introduces very little controversy among experts as to its utility. Security devices such as firewalls produce log files, and systems purported to have some degree of security usefulness will also generate an audit trail output. The practice is so common that a new type of product, called a security information management system (SIMS), has been developed to process all this data.
The primary operational challenge in setting up the right type of collection process for computers and networks has been twofold: First, decisions must be made about what types of information are to be collected. If this decision is made correctly, then the information collected should correspond to exactly the type of data required for security analysis, and nothing else. Second, decisions must be made about how much information is actually collected. This might involve the use of existing system functions, such as enabling the automatic generation of statistics on a router, or it could involve the introduction of some new type of function that deliberately gathers the desired information. Once these considerations are handled, appropriate mechanisms for collecting data from national infrastructure can be embedded into the security architecture (see Figure 1.9).
Figure 1.9 Collecting national infrastructure-related security information.
The technical and operational challenges associated with the collection of logs and audit trails are heightened in the protection of national assets. Because national infrastructure is so complex, determining what information should be collected turns out to be a difficult exercise. In particular, the potential arises with large-scale collection to intrude on the privacy of individuals and groups within a nation. As such, any initiative to protect infrastructure through the collection of data must include at least some measure of privacy policy determination. Similarly, the volumes of data collected from large infrastructure can exceed practical limits. Telecommunications collection systems designed to protect the integrity of a service provider backbone, for example, can easily generate many terabytes of data in hours of processing.
In both cases, technical and operational expertise must be applied to ensure that the appropriate data is collected in the proper amounts. The good news is that virtually all security protection algorithms require no deep, probing information of the type that might generate privacy or volumetric issues. The challenge arises instead when collection is done without proper advance analysis, which often results in the collection of more data than is needed. This can easily lead to privacy problems in some national collection repositories, so planning is particularly necessary. In any event, a national strategy of data collection is required, with the usual sorts of legal and policy guidance on who collects what and under which circumstances. As we suggested above, this exercise must be guided by the requirements for security analysis, and nothing else.
Correlation
The principle of correlation involves a specific type of analysis that can be performed on factors related to national infrastructure protection. The goal of correlation is to identify whether security-related indicators might emerge from the analysis. For example, if some national computing asset begins operating in a sluggish manner, then other factors would be examined for a possible correlative relationship. One could imagine the local and wide area networks being analyzed for traffic that might be of an attack nature. In addition, similar computing assets might be examined to determine if they are experiencing a similar functional problem. Also, all software and services embedded in the national asset might be analyzed for known vulnerabilities. In each case, the purpose of the correlation is to combine and compare factors to help explain a given security issue. This type of comparison-oriented analysis is indispensable for national infrastructure because of its complexity.
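The comparison-oriented analysis described above can be illustrated, in its simplest form, with a plain Pearson correlation between two observed series — say, a sluggish asset's response time against inbound traffic volume. The data values below are hypothetical; real fusion-center correlation would span many more factors.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical measurements: response time climbs as inbound traffic
# climbs, a correlative hint that the sluggishness is traffic-related
# (perhaps an attack) rather than, say, a local disk failure.
response_ms = [20, 22, 25, 40, 85, 160]
traffic_mbps = [100, 110, 120, 300, 700, 1400]
r = pearson(response_ms, traffic_mbps)  # strongly positive
```

A strong coefficient does not explain *why* the two factors move together — which is exactly why the text insists that human judgment remain part of the analysis.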
Interestingly, almost every major national infrastructure protection initiative attempted to date has included a fusion center for real-time correlation of data. A fusion center is a physical security operations center with means for collecting and analyzing multiple sources of ingress data. It is not uncommon for such a center to include massive display screens with colorful, visualized representations, nor is it uncommon to find such centers in the military with teams of enlisted people performing the manual chores. This is an important point, because, while such automated fusion is certainly promising, best practice in correlation for national infrastructure protection must include the requirement that human judgment be included in the analysis. Thus, regardless of whether resources are centralized into one physical location, the reality is that human beings will need to be included in the processing (see Figure 1.10).
What and how much data to collect is an operational challenge.
Only collect as much data as is necessary for security purposes.
Monitoring and analyzing networks and data collection may reveal a hidden or emerging security threat.
Figure 1.10 National infrastructure high-level correlation approach.
In practice, fusion centers and the associated processes and correlation algorithms have been tough to implement, even in small-scale environments. Botnets, for example, involve the use of source systems that are selected almost arbitrarily. As such, the use of correlation to determine where and why the attack is occurring has been useless. In fact, correlating geographic information with the sources of botnet activity has even led to many false conclusions about who is attacking whom. Countless hours have been spent by security teams poring through botnet information trying to determine the source, and the best one can hope for might be information about controllers or software drops. In the end, current correlation approaches fall short.
What is needed to improve present correlation capabilities for national infrastructure protection involves multiple steps.
Three Steps to Improve Current Correlation Capabilities
1. The actual computer science around correlation algorithms needs to be better investigated. Little attention has been placed in academic computer science and applied mathematics departments on multifactor correlation of real-time security data. This could be changed with appropriate funding and grant emphasis from the government.
2. The ability to identify reliable data feeds needs to be greatly improved. Too much attention has been placed on ad hoc collection of volunteered feeds, and this complicates the ability of analysts to perform meaningful correlation.
3. The design and operation of a national-level fusion center must be given serious consideration. Some means must be identified for putting aside political and funding problems in order to accomplish this important objective.
Awareness
The principle of awareness involves an organization understanding the differences, in real time and at all times, between observed and normal status in national infrastructure. This status can include risks, vulnerabilities, and behavior in the target infrastructure. Behavior refers here to the mix of user activity, system processing, network traffic, and computing volumes in the software, computers, and systems that comprise infrastructure. The implication is that the organization can somehow characterize a given situation as being either normal or abnormal. Furthermore, the organization must have the ability to detect and measure differences between these two behavioral states. Correlation analysis is usually inherent in such determinations, but the real challenge is less the algorithms and more the processes that must be in place to ensure situational awareness every hour of every day. For example, if a new vulnerability arises that has impact on the local infrastructure, then this knowledge must be obtained and factored into management decisions immediately.
Managers of national infrastructure generally do not have to be convinced that situational awareness is important. The big issue instead is how to achieve this goal. In practice, real-time awareness requires attentiveness and vigilance rarely found in normal computer security. Data must first be collected and enabled to flow into a fusion center at all times so correlation can take place. The results of the correlation must be used to establish a profiled baseline of behavior so differences can be measured. This sounds easier than it is, because so many odd situations have the ability to mimic normal behavior (when there is really a problem) or a problem (when there is really nothing). Nevertheless, national infrastructure protection demands that managers of assets create a locally relevant means of commenting accurately on the state of security at all times. This allows for proper management decisions about security (see Figure 1.11).
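The "profiled baseline" idea can be sketched as comparing a current observation against the mean and spread of recent history. This is a minimal z-score illustration under simplifying assumptions (a single metric, a stable baseline); the three-standard-deviation threshold is a conventional default, not a value from the text.

```python
import statistics

def is_abnormal(history, observation, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    away from the historical baseline (a simple z-score sketch)."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # Perfectly flat baseline: any change at all is abnormal.
        return observation != mean
    return abs(observation - mean) / stdev > threshold

# Hypothetical baseline: logins per hour on some managed system.
baseline = [100, 104, 98, 101, 97, 103, 99, 102]
quiet = is_abnormal(baseline, 105)  # within normal variation
spike = is_abnormal(baseline, 400)  # far outside the baseline
```

The hard part flagged in the text — abnormal behavior mimicking normal and vice versa — is exactly what a one-metric z-score cannot resolve; it is one input to the correlation and human-judgment process, not a verdict.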
Interestingly, situational awareness has not been considered a major component of the computer security equation to date. The concept plays no substantive role in small-scale security, such as in a home network, because when the computing base to be protected is simple enough, characterizing real-time situational status is just not necessary. Similarly, when a security manager puts in place security controls for a small enterprise, situational awareness is not the highest priority. Generally, the closest one might expect to some degree of real-time awareness for a small system might be an occasional review of system log files. So, the transition from small-scale to large-scale infrastructure protection does require a new attentiveness to situational awareness that is not well developed. It is also worth noting that the general notion of "user awareness" of security is also not the principle specified here. While it is helpful for end users to have knowledge of security, any professionally designed program of national infrastructure security must presume that a high percentage of end users will always make the wrong sorts of security decisions if allowed. The implication is that national infrastructure protection must never rely on the decision-making of end users through programs of awareness.
Awareness builds on collection and correlation, but is not limited to those areas alone.
A further advance that is necessary for situational awareness involves enhancements in approaches to security metrics reporting. Where the non-cyber national intelligence community has done a great job developing means for delivering daily intelligence briefs to senior government officials, the cyber security community has rarely considered this approach. The reality is that, for situational awareness to become a structural component of national infrastructure protection, valid metrics must be developed to accurately portray status, and these must be codified into a suitable type of regular intelligence report that senior officials can use to determine security status. It would not be unreasonable to expect this cyber security intelligence to flow from a central point such as a fusion center, but in general this is not a requirement.
Large-scale infrastructure protection requires a higher level of awareness than most groups currently employ.
Figure 1.11 Real-time situational awareness process flow.
Response
The principle of response involves assurance that processes are in place to react to any security-related indicator that becomes available. These indicators should flow into the response process primarily from the situational awareness layer. National infrastructure response should emphasize indicators rather than incidents. In most current computer security applications, the response team waits for serious problems to occur, usually including complaints from users, applications running poorly, and networks operating in a sluggish manner. Once this occurs, the response team springs into action, even though by this time the security game has already been lost. For essential national infrastructure services, the idea of waiting for the service to degrade before responding does not make logical sense.
An additional response-related change for national infrastructure protection is that the maligned concept of "false positive" must be reconsidered. In current small-scale environments, a major goal of the computer security team is to minimize the number of response cases that are initiated only to find that nothing was wrong after all. This is an easy goal to reach by simply waiting for disasters to be confirmed beyond a shadow of a doubt before response is initiated. For national infrastructure, however, this is obviously unacceptable. Instead, response must follow indicators, and the concept of minimizing false positives must not be part of the approach. The only quantitative metric that must be minimized in national-level response is risk (see Figure 1.12).
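The indicator-versus-incident tradeoff shown in Figure 1.12 can be sketched as a threshold decision: lowering the alert threshold produces more alerts (and more false positives) but shrinks the chance that a genuine pre-attack indicator is ignored. The scores and thresholds below are hypothetical illustration only.

```python
def triage(indicator_scores, threshold):
    """Partition indicator scores into (alerts, ignored).

    A lower threshold means more alerts — a higher false-positive
    rate — but less risk that a genuine pre-attack indicator goes
    unanswered, which is the tradeoff recommended for national
    infrastructure in the surrounding text.
    """
    alerts = [s for s in indicator_scores if s >= threshold]
    ignored = [s for s in indicator_scores if s < threshold]
    return alerts, ignored

scores = [0.1, 0.3, 0.35, 0.6, 0.9]   # hypothetical indicator scores
pre_attack = triage(scores, 0.3)      # aggressive: four alerts raised
post_attack = triage(scores, 0.8)     # conservative: one alert raised
```

With the aggressive threshold, most of the four alerts may well be benign; the point of the text is that, for national assets, that cost is acceptable because the minimized quantity is risk, not false positives.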
Figure 1.12 National infrastructure security response approach.
A challenge that must be considered in establishing response functions for national asset protection is that relevant indicators often arise long before any harmful effects are seen. This suggests that infrastructure protection must have accurate situational awareness that considers much more than just visible impacts such as users having trouble, networks being down, or services being unavailable. Instead, often subtle indicators must be analyzed carefully, which is where the challenges arise with false positives. When response teams agree to consider such indicators, it becomes more likely that a given indicator will turn out to be benign. A great secret to proper incident response for national infrastructure is that a higher false-positive rate might actually be a good sign.
It is worth noting that the principles of collection, correlation, awareness, and response are all consistent with the implementation of a national fusion center. Clearly, response activities are often dependent on a real-time, ubiquitous operations center to coordinate activities, contact key individuals, collect data as it becomes available, and document progress in the response activities. As such, it should come as no surprise that national-level response for cyber security should include some sort of centralized national center. The creation of such a facility should be the centerpiece of any national infrastructure protection program and should involve the active participation of all organizations with responsibility for national services.
Implementing the Principles Nationally
To effectively apply this full set of security principles in practice for national infrastructure protection, several practical implementation considerations emerge:
● Commissions and groups —Numerous commissions and groups have been created over the years with the purpose of national infrastructure protection. Most have had some minor positive impact on infrastructure security, but none has had sufficient impact to reduce present national risk to acceptable levels. An observation here is that many of these commissions and groups have become the end rather than the means toward a cyber security solution. When this occurs, their likelihood of success diminishes considerably. Future commissions and groups should take this into consideration.
● Information sharing —Too much attention is placed on information sharing between government and industry, perhaps because information sharing would seem on the surface to carry much benefit to both parties. The advice here is that a comprehensive information sharing program is not easy to implement, simply because organizations prefer to maintain a low profile when fighting a vulnerability or attack. In addition, the presumption that some organization, government or commercial, might have some nugget of information that could solve a cyber attack or reduce risk is not generally consistent with practice. Thus, the motivation for a commercial entity to share vulnerability or incident-related information with the government is low; very little value generally comes from such sharing.
A higher rate of false positives must be tolerated for national infrastructure protection.
● International cooperation —National initiatives focused on creating government cyber security legislation must acknowledge that the Internet is global, as are shared services such as the domain name system (DNS) that all national and global assets are so dependent upon. Thus, any program of national infrastructure protection must include provisions for international cooperation, and such cooperation implies agreements between participants that will be followed as long as everyone perceives benefit.
● Technical and operational costs —To implement the principles described above, considerable technical and operational costs will need to be covered across government and commercial environments. While it is tempting to presume that the purveyors of national infrastructure can simply absorb these costs into normal business budgets, this has not been the case in the past. Instead, the emphasis should be on rewards and incentives for organizations that make the decision to implement these principles. This point is critical because it suggests that the best possible use of government funds might be as straightforward as helping to directly fund initiatives that will help to secure national assets.
The bulk of our discussion in the ensuing chapters is technical in nature; that is, programmatic and political issues are conveniently ignored. This does not diminish their importance, but rather is driven by our decision to separate our concerns and focus in this book on the details of "what" must be done, rather than "how."
Cyber Attacks. DOI: 10.1016/B978-0-12-384917-5.00002-0. © 2011 Elsevier Inc. All rights reserved.
DECEPTION
Create a highly controlled network. Within that network, you place production systems and then monitor, capture, and analyze all activity that happens within that network. Because this is not a production network, but rather our Honeynet, any traffic is suspicious by nature.
— The Honeynet Project 1
The use of deception in computing involves deliberately misleading an adversary by creating a system component that looks real but is in fact a trap. The system component, sometimes referred to as a honey pot, is usually functionality embedded in a computing or networking system, but it can also be a physical asset designed to trick an intruder. In both cases, a common interface is presented to an adversary who might access real functionality connected to real assets, but who might also unknowingly access deceptive functionality connected to bogus assets. In a well-designed deceptive system, the distinction between real and trap functionality should not be apparent to the intruder (see Figure 2.1).
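The common-interface idea in Figure 2.1 can be sketched in a few lines: some names route to real functionality, others are traps wired to bogus assets, and touching a trap yields a plausible response while silently logging the access. All names here (the class, the `scada_admin_backup` bait, the users) are hypothetical illustration, not from the text.

```python
class DeceptiveInterface:
    """Sketch of a common interface mixing real and trap functionality.

    From the caller's perspective real and trap names behave
    identically; only the hidden alert log distinguishes them.
    """

    def __init__(self, real, traps):
        self.real = real      # name -> callable returning real output
        self.traps = traps    # name -> canned, bogus-but-plausible output
        self.alerts = []      # trap accesses, for real-time analysis

    def call(self, name, user):
        if name in self.real:
            return self.real[name]()
        if name in self.traps:
            # Trap touched: log it, but answer as if nothing happened.
            self.alerts.append({"user": user, "target": name})
            return self.traps[name]
        raise KeyError(name)

iface = DeceptiveInterface(
    real={"billing_report": lambda: "Q3 totals: ..."},
    traps={"scada_admin_backup": "root:x:0:0:..."},  # bogus bait content
)
iface.call("billing_report", user="alice")        # normal use, no alert
iface.call("scada_admin_backup", user="mallory")  # trap access, logged
```

Because `scada_admin_backup` has no production purpose, any access to it is suspicious by definition — the same property the Honeynet epigraph relies on.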
The purpose of deception, ultimately, is to enhance security, so in the context of national infrastructure it can be used for large-scale protection of assets. The reason why deception works is that it helps accomplish any or all of the following four security objectives:
● Attention —The attention of an adversary can be diverted from real assets toward bogus ones.
● Energy —The valuable time and energy of an adversary can be wasted on bogus targets.
● Uncertainty —Uncertainty can be created around the veracity of a discovered vulnerability.
● Analysis —A basis can be provided for real-time security analysis of adversary behavior.
The fact that deception diverts the attention of adversaries, while also wasting their time and energy, should be familiar to anyone who has ever used a honey pot on a network. As long as the trap is set properly and the honey pot is sufficiently realistic, adversaries might direct their time, attention, and energy toward something that is useless from an attack perspective. They might even plant time bombs in trap functionality that they believe will be of subsequent use in targeting real assets. Obviously, in a honey pot, this is not the case. This type of deception is a powerful deterrent, because it defuses a cyber attack in a way that could fool an adversary for an extended period of time.
1 The Honeynet Project, Know Your Enemy: Revealing the Security Tools, Tactics, and Motives of the Blackhat Community, Addison–Wesley Professional, New York, 2002. (I highly recommend this amazing and original book.) See also B. Cheswick and S. Bellovin, Firewalls and Internet Security: Repelling the Wily Hacker, 1st ed., Addison–Wesley Professional, New York, 1994; C. Stoll, The Cuckoo's Egg: Tracking a Spy Through the Maze of Computer Espionage, Pocket Books, New York, 2005.
The possibility that deception might create uncertainty around the veracity of a discovered vulnerability has been poorly explored to date. The idea here is that when an intruder inevitably stumbles onto an exploitable hole, it would be nice if that intruder were led to believe that the hole might be a trap. Thus, under the right circumstances, the intruder might actually choose to avoid exploitation of a vulnerability for fear that it has been intentionally planted. While this might seem difficult to implement in many settings, the concept is powerful because it allows security managers to defuse existing vulnerabilities without even knowing about them. This is a significant enough concept that it deserves repeating: The use of deception in computing allows system security managers to reduce the risk of vulnerabilities that they might not even know are present.
The fact that real-time analysis can be performed on a honey pot is reasonably well known in the computing community today.
[Figure 2.1 Use of deception in computing. Through a common interface, a normal user's access reaches real computing functionality connected to real assets, while a malicious user's access is routed to deceptive computing functionality connected to bogus assets.]
Deception is a powerful security tool, as it protects even unknown vulnerabilities.
Perhaps this is because it is a widely accepted best practice that security administrators should try to observe the behavior of intruders that have been detected. Most intrusion detection systems, for example, include threat management back-end systems that are designed to support such an objective. In the best case, the forensic analysis gathered during deception is sufficiently detailed to allow for identification of the adversary and possibly even prosecution. In the most typical case, however, accurate traceability to the original human source of a problem is rarely accomplished.
Luckily, the success of deceptive traps is assisted by the fact that intruders will almost always view designers and operators of national assets as being sloppy in their actions, deficient in their training, and incompetent in their knowledge. This extremely negative opinion of the individuals running national infrastructure is a core belief in virtually every hacking community in the world (and is arguably justified in some environments). Ironically, this low expectation is an important element that helps make stealth deception much more feasible, because honey pots do not always have to mimic a perfectly managed environment. Instead, adversaries can generally be led to find a system environment that is poorly administered, and they will not bat an eyelash. This helps the deception designer.
The less well-understood case of openly advertised deception relies on the adversary believing that designers and operators of national assets are competent enough to plant a believable trap into a national asset. This view represents a hurdle, because the hacking community will need to see convincing evidence before they will ever believe that anyone associated with a large organization would be competent enough to manage a complex program of deceptive computing. This is too bad, because open use of deception carries great advantages, as we will explain in more detail below. In any event, the psychology of understanding and managing adversary views is not straightforward. This soft issue must become part of the national infrastructure protection equation but will obviously require a new set of skills among security experts.
The most common implementation of deception involves the insertion of fake attack entry points, such as open service ports, that adversaries might expect to see in a normal system. The hope is that an adversary would discover (perhaps with a scanner) and then connect to these open service ports, which would in turn lead to a honey pot. As suggested above, creating realism in a honey pot is not an easy task, but several design options do exist. One approach involves routing inbound open port connections to physically separate bogus systems that are isolated from real assets. This allows for a "forklift"-type copying of real functionality (perhaps with sensitive data sanitized) to an isolated, safe location where no real damage can be done.

Honey pots should not necessarily mimic perfect environments.

Effective cyber deception involves understanding your adversary.
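The routing idea above can be sketched in a few lines of code. The following is a minimal illustration of my own (not from the book): a trap listener on a deliberately open port that serves a fake service banner, records every connection attempt for forensic analysis, and never touches a real asset. The banner text and log fields are invented for the example.

```python
# Minimal sketch of a deceptive open-port listener (illustrative, not a
# production honey pot): every inbound connection gets a fake banner,
# is recorded for forensic analysis, and is never routed to a real asset.
import socket
import threading
from datetime import datetime, timezone

FAKE_BANNER = b"220 corp-ftp-01 FTP server ready.\r\n"  # bogus service banner
forensic_log = []  # in a real deployment: hidden, append-only storage

def serve_trap(listener: socket.socket) -> None:
    """Accept one connection on the trap port, log it, send the fake banner."""
    conn, addr = listener.accept()
    with conn:
        forensic_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "source": addr[0],
            "port": listener.getsockname()[1],
        })
        conn.sendall(FAKE_BANNER)  # respond like a plausible (sloppy) FTP server

# Demonstration against localhost with an ephemeral port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(1)
t = threading.Thread(target=serve_trap, args=(listener,))
t.start()

client = socket.create_connection(listener.getsockname())
banner = client.recv(1024)  # what the "adversary" sees
client.close()
t.join()
listener.close()

print(banner.decode().strip())
print(len(forensic_log))  # one connection attempt was recorded
```

Note that the listener here is fully isolated: nothing behind the trap port connects onward to anything real, which is the essential property of the "forklift" approach.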
Recall that, if the deception is advertised openly, the possibility arises that an adversary will not bother to attempt an attack. Admittedly, this scenario is a stretch, but the possibility does arise and is worth mentioning. Nevertheless, we will assume for the balance of this discussion that the adversary finds the deceptive entry point, presumes that it is real, and decides to move forward with an attack. If the subsequent deception is properly managed, then the adversary should be led down a controlled process path with four distinct attack stages: scanning, discovery, exploitation, and exposing (see Figure 2.2).
During the initial scanning stage, an adversary is searching through whatever means is available for exploitable entry points. The presumption in this stage is that the service interface includes trap functionality, such as bogus links on proxied websites that lead to a honey pot for collecting information. It is worth noting, however, that this "searching" process does not always imply the use of a network by an adversary. Instead, the adversary might be searching for exploitable entry points in contracts, processes, locked cabinets, safes, or even relationships with national infrastructure personnel. In practice, one might even expect a combination of computing and noncomputing searches for information about exploitable entry points. The deception must be designed accordingly.
During the discovery phase, an adversary finds an exploitable entry point, which might be real or fake. If the vulnerability is real, then one hopes that good back-end security is in place to avoid an infrastructure disaster. Nevertheless, the decision on the part of the intruder to exploit a discovered vulnerability, real or fake, is an important trigger point. Good infrastructure security systems would need to connect this exploitation point to a threat management system that would either open a security trouble ticket or alert a security administrator that an intruder has either started an attack or fallen for the deceptive bait. Obviously, such alerts should not signal an intruder that a trap is present.

[Figure 2.2 Stages of deception for national infrastructure protection. An adversary moves through four stages (scanning, discovery, exploitation, and exposing) against a national asset interface whose trap leads to a honey pot; forensics is performed in the exposing stage.]

Bear in mind that a cyber honey pot might require coordination with a tangible exploitable point outside the cyber world.
During the exploitation stage, the adversary makes use of the discovered vulnerability for whatever purposes they might have. If the vulnerability is real, then the usual infrastructure break-in scenario results. If the vulnerability is a trap, however, then its effectiveness will be directly related to the realism of the honey pot. For both stealth and non-stealth deception, this is the initial stage during which data becomes available for forensic analysis. A design consideration is that the actual asset must never become compromised as a result of the trap. This requirement will likely result in deceptive functionality running on computing "islands" that are functionally separated from the real assets.
During the exposing stage in deception, adversary behavior becomes available for observation. Honey pots should include sufficient monitoring to expose adversary technique, intent, and identity. This is generally the stage during which management decisions are made about whether response actions are warranted. It is also a stage where real-time human actions are often required to help make the deceptive functionality look real. As we stated above, a great advantage that arises here is the low expectation the adversary will have regarding system administrative competency on the part of the infrastructure team. This allows the security team to use the excuse of poor setup to cover functional gaps that might exist in the deception.
Any one of the four stages of deception can raise significant legal and social issues, so any program of national infrastructure protection must have participation from the national legal community to determine what is considered acceptable. The difference between a passive trap and an active lure, for example, is subtle and must be clarified before a live deployment is made into infrastructure. From a social perspective, one might hope that the acceptance that exists for using deception to catch online stalkers would be extended to the cyber security community for catching adversaries targeting national infrastructure.
Actual assets must remain separate and protected so they are not compromised by a honey pot trap.

Monitoring honey pots takes security to the next level: potential for responsive action.

Scanning Stage

In this first stage, the presumption is that an adversary is scanning whatever is available to find exploitation points to attack national infrastructure. This scanning can include online searches for web-based information, network scans to determine port availability, and even offline searches of documents for relevant information. Deception can be used to divert these scanning attempts by creating false entry points with planted vulnerabilities. To deal with the offline case, the deception can extend to noncomputing situations such as intentionally leaving a normally locked cabinet or safe door open, with bogus documents inserted to deceive a malicious insider.
The deceptive design goal during scanning is to make available an interface with three distinct components: authorized services, real vulnerabilities, and bogus vulnerabilities. In a perfect world, there would be no vulnerabilities, only authorized services. Unfortunately, given the extreme complexity associated with national infrastructure services, this is an unrealistic expectation, so real vulnerabilities will always be present in some way, shape, or form. When deception is used, these real vulnerabilities are complemented by fake ones, and the two should be indistinguishable. Thus, an adversary will see three components when presented with a national asset interface with deception (see Figure 2.3).
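The three-component interface can be modeled in a few lines. This is a toy illustration of my own (the entry-point names and tags are invented): internally the defender knows which entry points are authorized, real vulnerabilities, or planted ones, but the externally visible view carries no distinguishing information at all.

```python
# Toy model of the three-component service interface: the defender-side
# "kind" tag never leaks into the external view, so to an adversary the
# bogus vulnerability is indistinguishable from the real one.
from dataclasses import dataclass

@dataclass(frozen=True)
class EntryPoint:
    name: str  # what an adversary can observe
    kind: str  # defender-only tag: "authorized" | "real_vuln" | "bogus_vuln"

interface = [
    EntryPoint("https login page", "authorized"),
    EntryPoint("legacy file upload", "real_vuln"),    # known, not yet fixed
    EntryPoint("debug admin console", "bogus_vuln"),  # planted trap -> honey pot
]

def external_view(entries):
    """What the adversary sees: names only, in a fixed order, no tags."""
    return sorted(e.name for e in entries)

def route(entry: EntryPoint) -> str:
    """Defender-side routing: only planted traps lead to the honey pot."""
    return "honey_pot" if entry.kind == "bogus_vuln" else "real_asset"

print(external_view(interface))
print([route(e) for e in interface])
```

The design point is that indistinguishability is enforced structurally: the function producing the adversary-facing view simply has no access to the tag.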
Bogus vulnerabilities will generally be inserted based on the usual sorts of problems found in software. This is one of the few cases where the deficiencies of the software engineering discipline can actually be put to good use for security. One might imagine situations where newly discovered vulnerabilities are immediately implemented as traps in systems that require protection. Nevertheless, planted holes do not always have to be based on such exploitable software bugs or system misconfigurations. In some cases, they might correspond to properly administered functionality that simply would not be considered acceptable for local use.
[Figure 2.3 National asset service interface with deception. A valid user and an adversary both face a service interface with three components: authorized services, real vulnerabilities, and bogus vulnerabilities connected to honey pots. The adversary is left uncertain about which vulnerabilities are real.]
Honey Pots can be Built into Websites

A good example of a trap based on properly administered functionality might be a promiscuous tab on a website that openly solicits leaks of information, something found on some of the more controversial blog sites. If legal and policy acceptance is given, then these links might be connected in a local proxied intranet to a honey pot collection site. Insiders to an organization might then consider leaking information directly using this link to the seemingly valid Internet site, only to be duped into providing the leak to the local security team. Again, this should only be considered for deployment if all legal and policy requirements are met, but the example does help illustrate the possibilities.
A prominent goal of deception is to observe the adversary in action. This is done via real-time collection of data about intruder activity, along with reasoned analysis about intent. For example, if the intruder seems to be guessing passwords over and over again to gain access to a honey pot system, the administrator might decide in real time to simply grant access. A great challenge is that the automation possibilities of such response are not currently well understood and are barely included in security research programs. This is too bad, because such cases could really challenge and ultimately improve the skills of a good security administrator. One could even imagine national groups sponsoring contests between live intruders and live administrators who are battling against each other in real time in a contrived honey pot.
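The "just let them in" tactic described above can be sketched directly. This is a hypothetical illustration (the threshold and password list are invented, and a real system would make this an operator decision rather than a fixed rule): a honey pot login that, after enough failed guesses, deliberately accepts the next attempt and records every guess for analysis.

```python
# Hypothetical honey pot login: after GRANT_AFTER failed guesses, the next
# attempt is deliberately "successful," admitting the intruder into the bogus
# environment so the deception team can observe their behavior.
GRANT_AFTER = 4  # operator-chosen threshold (an assumed value for illustration)

class HoneypotLogin:
    def __init__(self):
        self.failures = 0
        self.observed_guesses = []  # forensic record of every attempt

    def attempt(self, password: str) -> str:
        self.observed_guesses.append(password)
        if self.failures >= GRANT_AFTER:
            # Grant access to the honey pot and flag the session for monitoring.
            return "access-granted-to-honeypot"
        self.failures += 1
        return "denied"

login = HoneypotLogin()
results = [login.attempt(p) for p in ["admin", "root", "letmein", "12345", "qwerty"]]
print(results)  # four denials, then deliberate "success"
```

Note that every guess, including the denied ones, lands in the forensic record; the guesses themselves are often more valuable intelligence than the session that follows.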
Deliberately Open Ports

Intruders routinely search the Internet for servers that allow connections to exploitable inbound services. These services are exploitable generally because they contain some weakness, such as a buffer overflow condition that can be tripped to gain privileged access. Once privileged access is obtained, the intruder can perform administrative tasks such as changing system files, installing malware, and stealing sensitive information. All good system administrators understand the importance of hardening servers by disabling all exploitable and unnecessary services. The problem is that hardening is a complex process that is made more difficult in environments where the operating system is proprietary and less transparent. Amazingly, most software and server vendors still deliver their products in configurations with most services enabled by default.
Allowing an intruder access increases your risk level but also allows the security administrator to monitor the intruder's moves.

The deliberate insertion of open service ports on an Internet-facing server is the most straightforward of all deceptive computing practices. The deliberately open ports are connected to back-end honey pot functionality, which is in turn connected to monitoring systems for the purpose of observation and analysis. The result is that servers present adversaries of national infrastructure with three different views of open service ports: (1) valid open ports one might expect, such as HTTP, DNS, and SMTP; (2) ports that are inadvertently left open and might correspond to exploitable software; and (3) ports that are deliberately opened and connected to bogus assets in a honey pot. As long as it is generally understood that deception could potentially be deployed, there will be some uncertainty on the part of the adversary about which open ports are deliberate and which are inadvertent (see Figure 2.4).
Security managers who use port scanners as part of a normal program of enterprise network protection often cringe at this use of deception. What happens is that their scanners will find these open ports, resulting in reports that highlight the presumed vulnerabilities to managers, users, and auditors. Certainly, the output can be manually cropped to avoid such exposure, but this might not scale well to a large enterprise. Unfortunately, solutions are not easily identified that resolve this incompatibility between the authorized use of port scanners and the deliberate use of open ports as traps. It represents yet another area for research and development in deceptive computing.
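One possible mitigation for the scanner clash described above (an assumption on my part, not a technique the book prescribes) is to keep a tightly held allowlist of the deliberate trap ports and crop them out of scan reports before the reports reach managers and auditors. The port numbers and findings below are invented for illustration.

```python
# Sketch: crop deliberate trap ports out of a scan report so only genuine
# findings are escalated. The allowlist itself must be tightly held, since
# it reveals which "vulnerabilities" are traps.
TRAP_PORTS = {1820, 1830}  # deliberately open, honey-pot-backed (hypothetical)

scan_report = [
    {"port": 80,   "finding": "HTTP open"},
    {"port": 1820, "finding": "unknown UDP service open"},  # trap
    {"port": 1830, "finding": "unknown UDP service open"},  # trap
    {"port": 1334, "finding": "unknown UDP service open"},  # genuinely inadvertent
]

def crop_report(report, trap_ports):
    """Remove deliberate trap ports; everything else is still flagged."""
    return [row for row in report if row["port"] not in trap_ports]

cropped = crop_report(scan_report, TRAP_PORTS)
print([row["port"] for row in cropped])  # the inadvertent port 1334 survives
```

As the text notes, this kind of manual cropping does not scale well; the sketch only shows why the allowlist itself becomes a sensitive asset.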
[Figure 2.4 Use of deceptive open ports to bogus assets. System A exposes valid open ports routed to real assets (TCP 80 HTTP, TCP 53 DNS, TCP 25 SMTP) plus deliberately opened trap ports routed to bogus assets (UDP 1820, UDP 1830). System B exposes the same valid ports plus inadvertently open ports (UDP 1334, UDP 1862). The intruder cannot tell which open ports are deliberate and which are inadvertent.]
Another challenge is for security managers to knowingly keep open ports after running scanners that highlight these vulnerabilities.
An additional consideration with the deliberate use of open ports is that care must be taken on the back end to ensure that real assets cannot be exploited. Not surprisingly, practical techniques for doing this are not well known. For example, if the back-end deceptive software connected to deliberately open ports shares resources with valid assets, then the potential exists for negative side effects. The only reasonable approach today would involve deliberately open ports on bogus servers that are honey pots with no valid resources. These servers should be subtly embedded into server complexes so they look normal, but they should be hardwired to separate honey pot assets. This reduces the likelihood of negative side effects on normal servers (see Figure 2.5).
In practice, the real challenge to the deceptive use of open ports is creating port-connected functionality that is sufficiently valid to fool an expert adversary but also properly separated from valid services so no adversary could make use of the honey pot to advance an attack. Because computer science does not currently offer much foundational assistance in this regard, national infrastructure protection initiatives must include immediate programs of research and development to push this technique forward.
[Figure 2.5 Embedding a honey pot server into a normal server complex. Within a complex of Internet-connected servers, a subtly embedded honey pot server should look the same as the normal servers.]

Discovery Stage

The discovery stage corresponds to the adversary finding and accepting the security bait embedded in the trap. The two corresponding security goals during this stage are to make an intruder believe that real vulnerabilities could be bogus and that bogus vulnerabilities could be real. The first of these goals is accomplished by making the deception program well established and openly known. Specific techniques for doing this include the following:
● Sponsored research —The use of deception in national infrastructure could become generally presumed through the open sponsorship and funding of unclassified research and development work in this area.
● Published case studies —The open publication of case studies where deception has been used effectively in national asset protection increases the likelihood that an adversary might consider a found vulnerability to be deliberate.
● Open solicitations —Requests for Information (RFIs) and Requests for Proposals (RFPs) should be openly issued by national asset protectors. This implies that funding must be directed toward security projects that would actually use deceptive methods.

Interestingly, the potential that an adversary will hesitate before exploiting a real vulnerability increases only when the use of deception appears to be a real possibility. It would seem a hollow goal, for example, to simply announce that deception is being used without honest efforts to really deploy such deceptions in national infrastructure. This is akin to placing a home protection sign in the landscaping without ever installing a real security system. For openly advertised deception to work, the national infrastructure team must be fully committed to actually doing the engineering, deployment, and operation.
The second goal of making bogus vulnerabilities look real will be familiar to computer security experts who have considered the use of honey pots. The technique of duplication is often used in honey pot design, where a bogus system is a perfect copy of a real one but without the back-end connectivity to the real asset being protected. This is generally done by duplicating the front-end interface to a real system and placing the duplicate next to a back-end honey pot. Duplication greatly increases realism and is actually quite easy to implement in practice (see Figure 2.6).
As suggested above, the advantage of duplication in honey pot design is that it maximizes authenticity. If one finds, for example, a real vulnerability in some front-end server, then an image of that vulnerable server could be used in future deceptive configurations. Programs of national infrastructure protection should thus find ways to effectively connect vulnerability discovery processes to honey pot design. Thus, when a truly interesting vulnerability is found, it can become the front end to a future deceptive trap.
Openly advertised use of deception may cause adversaries to question whether a discovered vulnerability is valid or bogus.
Turn discovered vulnerabilities into advantages by mimicking them in honey pot traps.
Deceptive Documents

The creation and special placement of deceptive documents is an example method for tricking adversaries during discovery. This technique, which can be done electronically or manually, is especially useful for detecting the presence of a malicious insider and will only work under two conditions:
● Content —The bogus document must include information that is convincingly realistic. Duplication of a valid document, with changes to the most sensitive components, is a straightforward means for doing this.
● Protection —The placement of the bogus document should include sufficient protections to make the document appear truly realistic. If the protection approach is thin, then this will raise immediate suspicion. Sabotage can be detected by protecting the bogus document in an environment that cannot be accessed by anyone other than trusted insiders.

An illustrative approach for national infrastructure protection would follow these steps: First, a document is created with information that references a specially created bogus asset, such as a phone number, physical location, or server. The information should never be real, but it should be very realistic. Next, the document is stored in a highly protected location, such as a locked safe (computer or physical). The presumption is that under normal circumstances the document should sit idly in the locked safe, as it should have no real purpose to anyone. Finally, the specially created bogus asset is monitored carefully for any attempted compromise. If someone finds and grabs the document, then one can conclude that some insider is not to be trusted.
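The monitoring step above amounts to a simple correlation check, which can be sketched as follows. This is my own minimal illustration, not a tool from the book; the bogus phone number, address, and location are invented placeholder values. The core idea: references to the planted bogus assets should never appear in any activity log, so any hit implies the planted document was read.

```python
# Sketch of correlative monitoring for planted bogus assets: scan activity
# logs for any reference to the invented assets named in the bogus document.
# A single hit means the "idle" document was found and used by an insider.
BOGUS_ASSETS = {"555-0100", "203.0.113.77", "room 1a, 23 main st"}  # invented

def check_activity(log_lines, bogus_assets):
    """Return every (line, asset) pair where a bogus asset is referenced."""
    hits = []
    for line in log_lines:
        for asset in bogus_assets:
            if asset in line.lower():
                hits.append((line, asset))
    return hits

normal_logs = ["user alice read quarterly report", "backup completed"]
suspect_logs = ["inbound call from 555-0100 asking for access codes"]

print(check_activity(normal_logs, BOGUS_ASSETS))   # empty: document sat idle
print(check_activity(suspect_logs, BOGUS_ASSETS))  # non-empty: likely compromise
```

Because the assets are fabricated and referenced nowhere else, the monitor can run indefinitely with an expected hit rate of exactly zero, which is what makes any alert so significant.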
[Figure 2.6 Duplication in honey pot design. Off-line duplication makes a copy of the real front-end interface: the real front end connects to the real back-end asset, while an identical front end connects to a back-end honey pot. There should be no obvious or visible differences to an adversary.]
It should be obvious that the example scheme shown in Figure 2.7 works as well for an electronic document protected by encryption and access control as for a manual paper document locked in a protected safe. In both cases, one would expect that no one would ever correlate these bogus references. If it turns out that the monitoring shows access to these bogus assets in some related way, then one would have to assume that the protected enclave has been compromised. (Monitoring a hotel might require complex logistics, such as the use of hidden cameras.) In any event, these assets would provide a platform for subsequent analysis of exploitation activity by the adversary.
Steps to Planting a Bogus Document

To effectively plant a bogus document, consider following these steps:
1. Create a file with instructions for obtaining what would appear to be extremely sensitive information. The file could include a phone number, an Internet address for a server, and perhaps a room location in some hotel.
2. Encrypt the file and store it on a server (or print and lock it in a safe) that one would presume to be protected from inside or outside access.
3. Put monitoring of the server or safe in place, with no expectation of a time limit. In fact, the monitoring might go on indefinitely, because one would expect to see no correlative behavior on these monitored assets (see Figure 2.7).

[Figure 2.7 Planting a bogus document in a protected enclave. A bogus document in a protected enclave (which should prevent normal access) contains in-line references to bogus assets: a telephone number, (987) 654-3210; an Internet address, 192.123.4567; and a hotel room, 1a, 23 main st. Correlative monitoring of those assets is invoked only if the bogus document is used, because the adversary believes the document is real.]

Exploitation Stage

The third stage of the deception lifecycle for an adversary involves exploitation of a discovered vulnerability. This is a key step in the decision process for an adversary because it is usually the first stage in which policy rules or even laws are actually violated. That is, when an intruder begins to create a cyber attack, the initial steps are preparatory and generally do not violate any specific policy rules or laws. Sometimes security experts refer to this early activity as low radar actions, and when such actions are detected they are referred to as indications and warnings. Determining whether to respond to indications and warnings is a challenge, because response requires time and energy. If the track record of the security team involves many response actions to indications and warnings that are largely false positives, then the organization is often tempted to reduce its responsiveness at this trigger point. This is a bad idea for national infrastructure, because the chances increase that a real event will occur that is not responded to promptly.
As such, the protection of national infrastructure should involve a mind shift away from trying to reduce false positive responses to indications and warnings. Instead, the goal should be to deal with all instances in which indication and warning actions appear to be building up to the threshold at which exploitation begins. This is especially important, because this threshold marks the first stage during which real assets, if targeted, might actually be damaged (see Figure 2.8).
The key requirement at this decision point is that any exploitation of a bogus asset must not cause disclosure, integrity, theft, or availability problems with any real asset. Such noninterference between bogus and real assets is easiest to accomplish when these assets are kept as separate as possible. Physical separation of assets is straightforward; a real software application with real data, for example, could be separated from a bogus application with fake data by simply hosting each on different servers, perhaps even on different networks. This is how most honey pots operate, and the risk of interference is generally low.

[Figure 2.8 Pre- and post-attack stages at the exploitation stage. Scanning and discovery are pre-attack stages; exploitation begins the post-attack stages. The transition occurs when a discovered vulnerability triggers the decision to exploit it.]

Responding to a large number of false positives is necessary to adequately protect national infrastructure.
Achieving noninterference in an environment where resources are shared between real and fake assets is more challenging. To accomplish this goal, the deception designer must be creative. For example, if some business process is to be shared by both real and fake functionality, then care must be taken by the deception operators to ensure that real systems are not degraded in any way. Very little research has been done in this area, especially for availability threats. Allowing a malicious adversary to execute programs on a live, valid system, for example, would provide opportunities for malicious resource exhaustion. Nevertheless, the general approach has considerable promise and deserves more attention.
A related issue involves the possibility that intrusion detection and incident response systems might be fooled during exploitation into believing that trap functionality is real. White hat teams in companies have dealt with this problem for years, and they must coordinate with security teams to ensure that their activity does not cause a false alarm. This can be accomplished in several ways:
● Process coordination —This involves the honey pot team coordinating their activities in advance with the infrastructure security teams. The problem is that it tends to highlight the deception and might destroy some hidden transparency, especially if the deception is designed to detect malicious insiders.
● Trap isolation —This involves making certain that security systems, such as intrusion detection probes, are not in the deceptive path. Ongoing architectural analysis is required in this case to make sure that the condition holds throughout the system lifecycle.
● Back-end insiders —If the security team includes some trusted insiders involved in response activities, then they might be able to ensure that a response to deceptive action does not waste time and resources. This works best if the insider is a decision-maker.
● Process allowance —In this case, the detection and response activities are allowed to proceed. This is the recommended case if the deception is considered sensitive and must be totally hidden from insiders.

The exploitation stage is the most dangerous part of the deceptive lifecycle because it involves real malicious activity from an adversary. Infrastructure teams must learn to understand and respect the potential for real damage if this stage is not managed carefully.
When bogus and real assets reside on the same server, vulnerability risk increases dramatically.
Procurement Tricks

One way to understand adversary behavior is to compare it in different environments. An example involves a deception designer creating two separate supplier solicitations for a given product or service. One solicitation would be for a benign, noncritical, nonsensitive application; the other would be for an obviously sensitive, nationally critical application. In both cases, exactly the same product or service would be requested, but when the products are delivered to each application, a detailed examination would be initiated to identify differences. Any differences found between the products delivered to the intentionally planted "critical buyer" and "benign buyer" would be viewed as potential exploits (see Figure 2.9).
The deception described above only works if sufficient forensic capability exists to compare the two delivered products. For any product or service, this could include comparison of relative software size, system performance, product documentation, service functionality, or technical support. One could even imagine a second level of deception using social engineering, where an impassioned plea would be made to the supplier for some undocumented type of emergency support, usually remote administration. If either of the delivered products is set up for such remote administration, then the national asset manager would know that something is wrong.
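For software deliveries, the forensic comparison this trick depends on can start with something as simple as hashing every delivered file in both copies and reporting any divergence. The sketch below is my own illustration (file names and contents are invented); real comparisons would extend to size, performance, documentation, and support behavior as the text describes.

```python
# Sketch of delivery comparison for the procurement trick: fingerprint the
# files delivered to the "benign buyer" and the "critical buyer" and flag
# any file that exists in only one copy or differs between them.
import hashlib

def fingerprint(delivery: dict) -> dict:
    """Map each delivered file name to a SHA-256 digest of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in delivery.items()}

benign_copy = {"app.bin": b"\x01\x02\x03", "README": b"v1.0"}
critical_copy = {"app.bin": b"\x01\x02\x03", "README": b"v1.0",
                 "support.cfg": b"remote-admin=enabled"}  # extra and suspicious

def differences(a: dict, b: dict) -> set:
    """File names present in only one copy, or with mismatched digests."""
    fa, fb = fingerprint(a), fingerprint(b)
    return {name for name in fa.keys() | fb.keys() if fa.get(name) != fb.get(name)}

print(differences(benign_copy, critical_copy))  # every entry must be explained
```

Identical deliveries produce an empty difference set; anything else, such as the extra remote-administration configuration here, is exactly the kind of divergence the planted buyers exist to surface.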
[Figure 2.9 Using deception against malicious suppliers. The deception designer creates two buyers, a benign buyer (e.g., a school) and a critical buyer (e.g., the military), whose order specifications to the supplier are kept exactly the same. Supplier activity is monitored, the delivered products are carefully compared, and any differences between them must be explained.]
The procurement lifecycle is one of the most underestimated components in national infrastructure protection from an attack perspective. Generally, security teams focus on selecting, testing, installing, and operating functionality, with seemingly mundane procurement tasks left to the supply chain team. This is a huge mistake, and adversaries understand this point well. Thus, national infrastructure protection initiatives must extend to the procurement process, and the clever use of deception is a powerful tool in this regard.
Exposing Stage

The final stage in the deception lifecycle involves the adversary exposing behavior to the deception operator. Presumably, in this stage, the adversary is now hacking away at the trap functionality, convinced that all systems and assets are real. All sorts of possibilities arise in terms of how this hacking will proceed. It could be a flurry of intense activity in a short period of time, or it could be a drawn-out process of low and slow actions, so the deception team must have patience. Also, during this stage, the adversary might expose the use of well-known hacking techniques and tools or, alternatively, could demonstrate use of techniques not previously seen by the security team (see Figure 2.10).
The challenge in this stage is that the deception must allow a window for observation of intruder activity, but must also be hidden. It must provide a convenient interface for the deception team to collect data but also must provide a way for commands to be issued or changes to be made in real time. Unfortunately, few commercial products exist that are designed to support these features. Specific functional requirements for the monitoring environment during the exposing stage of deception include the following:
● Sufficient detail—The monitoring environment must provide sufficient detail so the deception operator can determine what is going on. For example, overly cryptic audit logs in terse format with gaps in certain areas would not be the best approach. The usual administrative interface to an operating system (generally through a command interface) is often the most desirable approach. One should not expect fancy, colorful displays for the monitoring task because most security analysts prefer the convenience of a terse command line interface.
● Hidden probes—Monitoring in this stage of the deception works only if it is completely hidden. If an adversary figures out that someone is watching, then behavior modification would occur immediately. Simple tasks must therefore be implemented, such as suppressed listing of any processes launched by the deception team (unless desired). The art of creating realistic functionality to hide probes requires support and nurturing in the security community.
● Real-time observation—The deception operator should have access to information about exposed behavior as it happens. The degree of real time for such monitoring (e.g., instantaneous, within seconds, within minutes) would depend on the local circumstances. In most cases, this observation is simply done by watching system logs, but more advanced tools are required to record and store information about intruder behavior.

Figure 2.10 Adversary exposing stage during deception (scanning, discovery of a vulnerability, the decision to exploit it, and exploitation/exposing, with real-time forensic analysis of adversary behavior).

National infrastructure protection must extend from procurement to operating functionality in order to be truly effective.

As we suggested above, in all cases of deception monitoring the key design goal should be to ensure a believable environment. No suspicious or unexplainable processes should be present that could tip off an intruder that logging is ongoing. Fake audit logs are also a good way to create believability; if a honey pot is developed using an operating system with normal audit logging, then this should be enabled. A good adversary will likely turn it off. The idea is that hidden monitoring would have to be put in place underneath the normal logging—and this would be functionality that the adversary could not turn off.
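The layered-logging idea can be illustrated with a small sketch. The class below is purely hypothetical (the names and structure are invented for the example): it models a visible audit log that the adversary can switch off, and a hidden probe underneath it that keeps recording no matter what.

```python
import time

class HoneypotSession:
    """Sketch of layered honey pot monitoring: a visible audit log the
    intruder can disable, plus a hidden probe the intruder cannot reach."""

    def __init__(self):
        self.audit_enabled = True   # the normal, visible audit logging
        self.audit_log = []         # what the adversary can inspect or clear
        self.hidden_log = []        # stands in for out-of-band hidden storage

    def disable_audit(self):
        """An adversary who gains control will typically try this first."""
        self.audit_enabled = False

    def record(self, command: str):
        entry = {"ts": time.time(), "cmd": command}
        if self.audit_enabled:
            self.audit_log.append(entry)
        # The hidden probe records unconditionally, beneath normal logging.
        self.hidden_log.append(entry)
```

The design point is that disabling the visible log is itself an observable event, while the hidden record survives it.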
Interfaces Between Humans and Computers

The gathering of forensic evidence during the analysis of intruder behavior in a honey pot often relies on detailed understanding of how systems, protocols, and services interact. Specifically, this type of communication can be performed in four different ways: human-to-human, human-to-computer, computer-to-human, and computer-to-computer. If we take the first term (human or computer) to mean the intruder and we take the second term to mean the honey pot manager, then we can make some logical distinctions.

Observing intruder activity can be an informative but risky process during the exposure stage.
First, it should be obvious that, in an automated attack such as a botnet, the real-time behavior of the attack system will not change based on some subjective observation of honey pot functionality. Certainly, the interpretation of the results of the botnet could easily affect the thinking of the botnet operator, but the real-time functionality is not going to be affected. As such, the most powerful cases in real-time forensic analysis of honey pot behavior will be the cases where human-to-human and human-to-computer interactions are being attempted by an intruder. Let's examine each in turn.
The most common human-to-human interaction in national infrastructure involves help desk or customer care support functions, and the corresponding attack approach involves social engineering of such activity. The current state of the art in dealing with this vulnerability is to train operators and customer care personnel to detect attempts at social engineering and to report them to the security team. Deception, however, introduces a more interesting option. If the likelihood is high that social engineering is being attempted, then an advanced approach to protection might involve deceiving the adversary into believing that they have succeeded. This can be accomplished quite easily by simply training operators to divert social engineering attempts to specially established help desks that are phony. The operators at these phony desks would reverse social engineer such attackers to get them to expose their identity or motivation (see Figure 2.11).
The most common human-to-computer interaction occurs when an intruder is trying to gain unauthorized access through a series of live, interactive commands. The idea is that intruders should be led to believe that their activity is invoking services on the target system, as in the usual type of operating system hacking. A good example might involve an intruder repeatedly trying to execute some command or operation in a trap system. If the security team notices this intent and can act quickly enough, the desired command or operation could be deliberately led to execute. This is a tricky engagement, because an expert adversary might notice that the target configuration is changing, which obviously is not normal.

Figure 2.11 Deceptively exploiting the human-to-human interface (an attempt to social engineer the real help desk is diverted, as a suspicious call, to a deceptive help desk, where reverse social engineering attempts to determine the attacker's identity).

Real-time forensic analysis is not possible for every scenario, such as a botnet attack.
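One way to picture this engagement is a trap shell that counts repeated attempts at the same command and then deliberately lets it appear to succeed. The sketch below is an invented illustration (the threshold and response strings are arbitrary assumptions); nothing real is ever executed.

```python
class DeceptiveShell:
    """Trap-shell sketch: once an intruder has retried the same blocked
    command enough times, let it appear to succeed (nothing real runs)."""

    def __init__(self, succeed_after: int = 3):
        self.succeed_after = succeed_after  # arbitrary illustrative threshold
        self.attempts = {}                  # per-command retry counter

    def execute(self, command: str) -> str:
        n = self.attempts.get(command, 0) + 1
        self.attempts[command] = n
        if n < self.succeed_after:
            return "permission denied"
        return "ok"  # deliberately led to (appear to) execute
```

The text's caveat applies directly here: an expert adversary may notice that the target's behavior changed between attempts, which is exactly the abnormality a careless trap exposes.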
National Deception Program

One might hope that some sort of national deception program could be created based on a collection of traps strategically planted across national infrastructure components, tied together by some sort of deception analysis backbone. Such an approach is unlikely, because deception remains a poorly understood security approach, and infrastructure managers would be very hesitant to allow traps to be implanted in production systems. These traps, if they malfunction or do not work as advertised, could trick authorized users or impede normal operations.
Any realistic assessment of current security and information technology practice suggests that large-scale adoption of deception for national infrastructure protection would not be widely accepted today. As a result, programs of national deception would be better designed based on the following assumptions:
● Selective infrastructure use—One must assume that certain infrastructure components are likely to include deceptive traps but that others will not. At the time of this writing, many infrastructure teams are still grappling with basic computer security concepts; the idea that they would agree to install traps is not realistic. As such, any program of national deception must assume that not all components would utilize honey pots in the same manner.
● Sharing of results and insights—Programs of national deception can and should include a mechanism for the sharing of results and insights gained through operational use of traps and honey pots. Certainly, insight obtained through forensic analysis of adversary behavior can be shared in a structured manner.
● Reuse of tools and methods—National deception programs could serve as means for making honey pot and trap software available for deployment. In some cases, deception tools and methods that work in one infrastructure area can be reused in another.

The most common criticism of deception in large-scale national security is that automated tools such as botnets are not affected by trap functionality. While it is true that botnets attack infrastructure in a blindly automated manner regardless of whether the target is real or fake, the possibility remains that trap functionality might have some positive impact. A good example might be national coordination of numerous bogus endpoints that might be ready and willing to accept botnet software. If these endpoints are designed properly, one could imagine them being deliberately designed to disrupt the botnet communication, perhaps by targeting the controllers themselves. This approach is often referred to as a tarpit, and one might imagine this method being quite interesting for degrading the effectiveness of a botnet.

An expert adversary may become aware of the security team observing the attempted intrusion.
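A tarpit can be sketched in a few lines: accept a bot's connection and feed it one meaningless byte at a time with long pauses, tying up the bot's resources for as long as it stays connected. The code below is a simplified illustration (the port, interval, and byte values are arbitrary choices for the example, not a hardened design).

```python
import socket
import threading
import time

def drip(conn, interval=10.0, max_bytes=None):
    """Send one meaningless byte at a time with long pauses between them,
    keeping the remote end (a bot) tied up; returns the bytes sent."""
    sent = 0
    try:
        while max_bytes is None or sent < max_bytes:
            conn.sendall(b"\x00")
            sent += 1
            time.sleep(interval)
    except OSError:
        pass  # remote end gave up or the connection reset
    finally:
        try:
            conn.close()
        except OSError:
            pass
    return sent

def tarpit(host="127.0.0.1", port=2323, interval=10.0):
    """Accept connections forever and hand each one to a dripping thread."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(64)
    while True:
        conn, _addr = srv.accept()
        threading.Thread(target=drip, args=(conn, interval), daemon=True).start()
```

The design choice is deliberate slowness: each connected bot burns its own time and sockets while revealing its source address to the deception team.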
Cyber Attacks. DOI: 10.1016/B978-0-12-384917-5.00003-2
© 2011 Elsevier Inc. All rights reserved.
SEPARATION
A limitation of firewalls is that they can only be as good as their access controls and filters. They might fail to detect subversive packets. In some situations, they might be bypassed altogether. For example, if a computer behind a firewall has a dial-up port, as is all too common, an intruder can get access by dialing the machine.
Dorothy Denning 1
The separation of network assets from malicious intruders using a firewall is perhaps the most familiar protection approach in all of computer security. Today, you will find some sort of firewall deployed in or around virtually every computer, application, system, and network in the world. Firewalls serve as the centerpiece of most organizations' security functionality, including intrusion detection, antivirus filtering, and even identity management. An enormous firewall industry has emerged to support such massive deployment and use, and this industry has done nothing but grow for years.
In spite of this widespread adoption, firewalls as separation mechanisms for large-scale infrastructure have worked to only a limited degree. The networks and systems associated with national infrastructure assets tend to be complex, with a multitude of different entry points for intruders through a variety of Internet service providers. In addition, the connectivity requirements for complex networks often result in large rule sets that permit access for many different types of services and source addresses. Worse, the complexity of large-scale networks often leads to unknown, unprotected entry points into and out of the enterprise (see Figure 3.1).
Certainly, the use of traditional perimeter firewalls will continue to play a role in the protection of national assets, as we will describe below. Egress filtering, for example, is often most efficiently performed at the perceived perimeter of an organization. Similarly, when two or more organizations share a private connection, the connection endpoints are often the most natural place to perform firewall filtering, especially if traditional circuit-switched connections are involved. To achieve optimal separation in the protection of large-scale national assets, however, three new firewall approaches will be required:
● Network-based separation—Because the perimeter of any complex national infrastructure component will be difficult to define accurately, the use of separation methods such as network-based firewalls is imperative. Such cloud-based functionality allows a broader, more accurate view of the egress and ingress activity for an organization. It also provides a richer environment for filtering high-capacity attacks. The filtering of denial of service attacks aimed at infrastructure, for example, can only be stopped with special types of cloud-based filtering firewalls strategically placed in the network.
● Internal separation—National infrastructure protection will require a program of internal asset separation using firewalls strategically placed in infrastructure. This type of separation of internal assets using firewalls or other separation mechanisms (such as operating system access controls) is not generally present in most infrastructure environments. Instead, the idea persists that insiders should have unrestricted access to internal resources and that perimeter firewalls should protect resources from untrusted, external access. This model breaks down in complex infrastructure environments because it is so easy to plant insiders or penetrate complex network perimeters.
● Tailored separation—With the use of specialized protocols in national infrastructure management, especially supervisory control and data acquisition (SCADA), tailoring firewalls to handle unique protocols and services is a requirement. This is a challenge because commercial firewalls are generally designed for generic use in a wide market, and tailoring will require a more focused effort. The result will be more accurate firewall operation without the need to open large numbers of service ports to enable SCADA applications.

Figure 3.1 Firewalls in simple and complex networks (a simple network behind a single firewall with a single Internet service provider, versus a complex network with multiple Internet service providers, large non-uniform rule bases, complex connectivity to the firewall, and unknown, unprotected links).

1 D. Denning, Information Warfare and Security, Addison-Wesley, New York, 1999, p. 354.

Firewalls are valuable and frequently employed but may not provide enough protection to large-scale networks.

The reader might be amused to consider the irony presented today by network connectivity and security separation. Twenty years ago, the central problem in computer networking was the lack of interoperability between systems. Making two computers connect over a network was a significant challenge, one that computer scientists worked hard to overcome. In some instances, large projects would be initiated with the goal of connecting systems together over networks. Amazingly, the challenge we deal with today is not one of connectivity, but rather one of separation. This comes from the ubiquity of the Internet Protocol (IP), which enables almost every system on the planet to be connected with trivial effort. Thus, where previously we did not know how to interconnect systems, today we don't know how to separate them!
What Is Separation?

In the context of national infrastructure protection, separation is viewed as a technique that accomplishes one of the following security objectives:
● Adversary separation—The first separation goal involves separating an asset from an adversary to reduce the risk of direct attack. Whatever implementation is chosen should result in the intruder having no direct means for accessing national assets.
● Component distribution—The second separation goal involves architecturally separating components in an infrastructure to distribute the risk of compromise. The idea here is that a compromise in one area of infrastructure should not be allowed to propagate directly.
Now that we are able to connect systems with ease, we must learn to separate them for protection!
Commercially available firewalls are not designed for the large-scale complexity of our national infrastructure networks.
54 Chapter 3 SEPARATION
The access restrictions that result from either of these separation approaches can be achieved through functional or physical means. Functional means involve software, computers, and networks, whereas physical means include tangible separations such as locks, safes, and cabinets. In practice, most separation access restrictions must be designed to focus on either the insider or the outsider threat. The relationship between these different separation options can be examined based on the three primary factors involved in the use of separation for protecting infrastructure (see box).
A Working Taxonomy of Separation Techniques

The three primary factors involved in the use of separation for protecting infrastructure include the source of the threat (insider or outsider), the target of the security control (adversary or asset), and the approach used in the security control (functional or physical). We can thus use these three factors to create a separation taxonomy that might help to compare and contrast the various options for separating infrastructure from adversaries (see Figure 3.2).
The first column in the taxonomy shows that separation controls are focused on keeping either insiders or outsiders away from some asset. The key difference here is that insiders would typically be more trusted and would have more opportunity to gain special types of access. The second column indicates that the separation controls are focused on either keeping an adversary away from some asset or inherently separating components of the actual asset, perhaps through distribution. The third column identifies whether the separation approach uses computing functionality or would rely instead on some tangible, physical control.
Threat     Target      Approach     Example

Functional adversary techniques:
Insider    Adversary   Functional   Internal access control
Outsider   Adversary   Functional   Internet-facing firewall

Functional asset techniques:
Insider    Asset       Functional   Application separation
Outsider   Asset       Functional   Application distribution

Physical adversary and asset techniques:
Insider    Adversary   Physical     Project compartmentalization
Outsider   Adversary   Physical     Information classification
Insider    Asset       Physical     Internal network diversity
Outsider   Asset       Physical     Physical host distribution

Figure 3.2 Taxonomy of separation techniques.
Functional Separation

Functional separation of an adversary from any computing asset is most commonly achieved using an access control mechanism with the requisite authentication and identity management. Access controls define which users can perform which actions on which entities. The access rules should be predetermined in a security policy. They should specify, for example, which users can access a given application, and, obviously, the validation of user identity must be accurate. In some cases, security policy rules must be more dynamic, as in whether a new type of traffic stream is allowed to proceed to some Internet ingress point. This might be determined by real-time analysis of the network flow.
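A minimal default-deny mediation check might look like the following sketch, where the users, actions, and entities in the policy are hypothetical examples invented for illustration:

```python
# Hypothetical policy: predetermined (user, action, entity) triples.
POLICY = {
    ("alice", "read", "scada-console"),
    ("alice", "write", "scada-console"),
    ("bob", "read", "web-portal"),
}

def is_permitted(user: str, action: str, entity: str) -> bool:
    """Access mediation: permit only what the policy predetermines
    (everything not explicitly listed is denied by default)."""
    return (user, action, entity) in POLICY
```

Real systems layer authentication and dynamic flow analysis on top of this, but the core question at mediation time is exactly this triple lookup.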
An access policy thus emerges for every organization that identifies desired allowances for users requesting to perform actions on system entities. Firewall policies are the most common example; users trying to connect to a web server, for instance, might be subjected to an access control policy that determines whether the connection is permitted. Similarly, the IP addresses of some organization might be keyed into a firewall rule to allow access to some designated system. A major problem that occurs in practice with firewalls is that the rule base can grow to an enormous size, with perhaps thousands of rules. The result is complexity and a high potential for error. National infrastructure initiatives must identify rewards and incentives for organizations to keep their firewall rule bases as small as possible. Some organizations have used optimization tools for this purpose, and this practice should be encouraged for national assets.
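One task such optimization tools perform is detecting shadowed rules: rules that can never match because an earlier, broader rule always fires first. The sketch below assumes an invented, simplified rule format (source network, destination port, action) with first-match-wins semantics; real firewall rule languages are far richer.

```python
from ipaddress import ip_network

def shadowed_rules(rules):
    """Report indices of rules that can never match because an earlier rule
    with a superset source network and the same port always fires first.
    Each rule is a (source_network, dest_port, action) tuple; first match wins."""
    shadowed = []
    for i, (net_i, port_i, _act_i) in enumerate(rules):
        for net_j, port_j, _act_j in rules[:i]:
            if port_j == port_i and ip_network(net_i).subnet_of(ip_network(net_j)):
                shadowed.append(i)  # rule i is unreachable dead weight
                break
    return shadowed
```

Pruning such dead rules is one concrete way a rule base stays small enough to audit.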
From the first two rows of the taxonomy, it should be clear that internal access controls provide a functional means for separating insider adversaries from an asset, whereas Internet firewalls achieve roughly the same end for outside adversaries. These firewalls might be traditional devices, as one might find in an enterprise, or special filtering devices placed in the network to throttle volume attacks. The third and fourth rows show that logical separation of an application is a good way to complicate an insider attack; a comparable effect is achieved for outsiders by distributing the application across different Internet-facing hosts. The last four rows in Figure 3.2 demonstrate different ways to use physical means to protect infrastructure, ranging from keeping projects and people separate from an asset to maintaining diversity and distribution of infrastructure assets. The following sections provide more detail on these separation taxonomy elements.
Two broad categories of security can be followed when trying to achieve functional separation of adversaries from any type of national infrastructure asset. The first involves distributing the responsibility for access mediation to the owners of smaller asset components, such as individual computers or small networks; the second involves deployment of a large, centralized mediation mechanism through which all access control decisions would be made (see Figure 3.3).
The distributed approach has had considerable appeal for the global Internet community to date. It avoids the problem of having to trust a large entity with mediation decisions, it allows commercial entities to market their security tools on a large scale to end users, and it places control of access policy close to the asset, which presumably should increase the likelihood that the policy is appropriate. The massive global distribution of computer security responsibility to every owner of a home personal computer is an example of this approach. End users must decide how to protect their assets, rather than relying on some centralized authority.
Unfortunately, in practice, the distributed approach has led to poor results. Most end users are unqualified to make good decisions about security, and even if a large percentage make excellent decisions, the ones who do not create a big enough vulnerability to place the entire scheme at risk. Botnets, for example, prey on poorly managed end-user computers on broadband connections. When a home computer is infected with malware, there really is no centralized authority for performing a cleansing function. This lack of centralization on the Internet thus results in a huge security risk. Obviously, the Internet will never be redesigned to include centralized control; that would be impractical, if not impossible.
Figure 3.3 Distributed versus centralized mediation (multiple firewalls placed near individual assets versus one firewall mediating all Internet access).
In large networks, firewall rules can become so numerous that they actually increase the margin for error.
For national infrastructure, however, the possibility does exist for more centralized control. The belief here is that an increased reliance on centralized protection, especially in conjunction with the network service provider, will improve overall national asset protection methods. This does not imply, however, that distributed protection is unnecessary. In fact, in most environments, skilled placement of both centralized and distributed security will be required to avoid national infrastructure attack.
National Infrastructure Firewalls

The most common application of a firewall involves its placement between a system or enterprise to be protected and some untrusted network such as the Internet. In such an arrangement for the protection of a national asset, the following two possibilities immediately arise:
● Coverage—The firewall might not cover all paths between the national asset to be protected and the untrusted network such as the Internet. This is a likely case given the general complexity associated with most national infrastructure.
● Accuracy—The firewall might be forced to allow access to the national asset in a manner that also provides inadvertent, unauthorized access to certain protected assets. This is common in large-scale settings, especially because specialized protocols such as those in SCADA systems are rarely supported by commercial firewalls. As a result, the firewall operator must compensate by leaving certain ports wide open for ingress traffic.

To address these challenges, the design of national security infrastructure requires a skillful placement of separation functionality to ensure that all relevant traffic is mediated and that no side effects occur when access is granted to a specific asset. The two most effective techniques include aggregation of protections in the wide area network and segregation of protections in the local area network (see Figure 3.4).
Aggregating firewall functionality at a defined gateway is not unfamiliar to enterprise security managers. It helps ensure coverage of untrusted connections in more complex environments. It also provides a means for focusing the best resources, tools, and staff on one aggregated security complex. Segregation in a local area network is also familiar, albeit perhaps less practiced. It is effective in reducing the likelihood that external access to System A has the side effect of providing external access to System B. It requires management of more devices and does generally imply higher cost. Nevertheless, both of these techniques will be important in national infrastructure firewall placement.

Centralized control versus multiple, independent firewalls—both have their advantages, so which is best for national infrastructure?
A major challenge to national infrastructure comes with the massive increase in wireless connectivity that must be presumed for all national assets in the coming years. Most enterprise workers now carry around some sort of smart device that is ubiquitously connected to the Internet. Such smart devices have begun to resemble computers in that they can support browsing, e-mail access, and even virtual private network (VPN) access to applications that might reside behind a firewall. As such, the ease with which components of infrastructure can bypass defined firewall gateways will increase substantially. The result of this increased wireless connectivity, perhaps via 4G deployment, will be that all components of infrastructure will require some sort of common means for ensuring security.
Massive distribution of security to smart wireless endpoint devices may not be the best option, for all the reasons previously cited. It would require massive distribution, again, of the security responsibility to all owners of smart devices. It also requires vigilance on the part of every smart device owner, and this is not a reasonable expectation. An alternative approach involves identifying a common transport infrastructure to enforce desired policy. This might best be accomplished via the network transport carrier. Network service providers offer several advantages with regard to centralized security:
● Vantage point—The network service provider has a wide vantage point that includes all customers, peering points, and gateways. Thus, if some incident is occurring on the Internet, the service provider will observe its effects.
● Operations—Network service providers possess the operational capability to ensure up-to-date coverage of signatures, updates, and new security methods, in contrast to the inability of most end users to keep their security software current.
● Investment—Where most end users, including enterprise groups, are unlikely to have funds sufficient to install multiple types of diverse or even redundant security tools, service providers can often support a business case for such investment.

Figure 3.4 Wide area firewall aggregation and local area firewall segregation.

Effective protection of national infrastructure will undoubtedly be expensive due to the increased management of devices.

Smart devices have added another layer of complexity to network protection.

For these reasons, a future view of firewall functionality for national infrastructure will probably include a new aggregation point—namely, the concept of implementing a network-based firewall in the cloud (see Figure 3.5).
In the protection of national infrastructure, the use of network-based firewalls that are embedded in the service provider fabric will require a new partnership between carriers and end-user groups. Unfortunately, most current telecommunications service level agreements (SLAs) are not compatible with this notion, focusing instead on packet loss and latency issues rather than policy enforcement. This results in too many current cases of a national infrastructure provider being attacked, with the service provider offering little or no support during the incident.
Figure 3.5 Carrier-centric network-based firewall (a provider-managed, network-based firewall in the service provider fabric mediating both wired and wireless (3G/4G) connections to the Internet).
A firewall in the cloud may be the future of firewall functionality.
Obviously, this situation must change for the protection of national assets.
DDOS Filtering

A major application of the network-based firewall concept includes a special type of mediation device embedded in the wide area network for the purpose of throttling distributed denial of service (DDOS) attacks. This device, which can be crudely referred to as a DDOS filter, is essential in modern networking, given the magnified risk of DDOS attacks from botnets. Trying to filter DDOS attacks at the enterprise edge does not make sense given the physics of network ingress capacity. If, for example, an enterprise has a 1-Gbps ingress connection from the Internet, then a botnet directing an inbound volume of anything greater than 1 Gbps will overwhelm the connection.
The solution to this volume problem is to move the filtering upstream into the network. Carrier infrastructure generally provides the best available option here. The filtering would work as follows: volumetric increases in ingress traffic would cause a real-time redirection of traffic to a DDOS filtering complex charged with separating botnet-originating traffic from valid traffic. Algorithms for performing such filtering generally key on the type of traffic being sent, the relative size of the traffic, and any other hint that might point to the traffic being of an attack nature. Once the traffic has been filtered, it is then funneled to the proper ingress point. The result is like a large safety valve or shock absorber in the wide area network that turns on when an attack is under way toward some target enterprise (see Figure 3.6).
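The redirection trigger can be as simple as a volumetric threshold against a traffic baseline. The sketch below illustrates only that trigger (the threshold multiplier and labels are arbitrary assumptions); production filters also weigh the traffic type, relative size, and other attack hints described above.

```python
def route_decision(inbound_gbps: float, baseline_gbps: float,
                   threshold: float = 3.0) -> str:
    """Volumetric trigger sketch: detour ingress traffic through the DDOS
    filtering complex when it swells far beyond the normal baseline."""
    if inbound_gbps > threshold * baseline_gbps:
        return "redirect-to-scrubbing"
    return "deliver-direct"
```

The "safety valve" behavior follows from this being evaluated continuously: the detour engages only while an attack-scale surge is under way.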
Quantitative analysis associated with DDOS protection of national infrastructure is troubling. If, for example, we assume that bots can easily steal 500 Kbps of broadband egress from the unknowing infected computer owner, then it would require only three bots to overwhelm a T1 (1.5-Mbps) connection. Carrying this argument forward, a botnet with 20,000 bots is sufficient to overwhelm a 10-Gbps connection. Given the existence of prominent botnets such as Storm and Conficker, which some experts suggest could have as many as 2 or 3 million bots, the urgency associated with putting DDOS filtering in place cannot be overstated. An implication is that national infrastructure protection initiatives must include some measure of DDOS filtering to reduce the risk of DDOS attacks on national assets.
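The arithmetic is easy to check. Under the stated assumption of 500 Kbps of stolen egress per bot:

```python
import math

def bots_to_saturate(link_mbps: float, bot_egress_kbps: float = 500.0) -> int:
    """Bots needed to fill a link, at an assumed per-bot upstream rate."""
    return math.ceil((link_mbps * 1000) / bot_egress_kbps)

print(bots_to_saturate(1.5))      # T1 link -> 3 bots
print(bots_to_saturate(10_000))   # 10-Gbps link -> 20000 bots
```

Against botnets in the millions of nodes, the gap between attack capacity and any single ingress link is several orders of magnitude, which is the whole argument for upstream filtering.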
Chapter 3 SEPARATION 61

The risk of DDOS attacks must be effectively addressed.

Moving the filtering functionality into the network allows legitimate traffic to pass through while potential DDOS attacks are discovered.

A serious problem that must be addressed in current DDOS attacks on infrastructure, however, involves a so-called amplification approach. Modern DDOS attacks are generally designed in recognition of the fact that DDOS filters exist to detect large inbound streams of unusual traffic. Thus, to avoid inbound filtering in carrier infrastructure, adversaries have begun to follow two design heuristics. First, they design DDOS traffic to mimic normal system behavior, often creating transactions that look perfectly valid. Second, they design their attack to include small inbound traffic that utilizes some unique aspect of the target software to create larger outbound responses. The result is a smaller, less obvious inbound stream which then produces much larger outbound response traffic that can cause the DDOS condition.
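The leverage an attacker gains from this second heuristic can be expressed as an amplification factor. The byte sizes below are illustrative assumptions (roughly DNS-like), not figures from the text.

```python
# Amplification sketch: a small inbound request elicits a much larger
# outbound response, so the attacker's visible inbound stream stays
# small while the victim-side volume is large. Sizes are assumptions.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Ratio of response size to request size."""
    return response_bytes / request_bytes

def outbound_gbps(inbound_gbps: float, factor: float) -> float:
    """Outbound response volume produced by a given inbound attack volume."""
    return inbound_gbps * factor

factor = amplification_factor(60, 3_000)  # a plausible DNS-style ratio
assert factor == 50.0
# A modest 0.5-Gbps inbound stream yields 25 Gbps of outbound responses.
assert outbound_gbps(0.5, factor) == 25.0
```

This is why a stream small enough to slip under volumetric filters can still produce a crippling DDOS condition on the response side.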
Figure 3.6 DDOS filtering of inbound attacks on target assets. (Diagram: bots aim >1 Gbps of DDOS traffic at Target A; the carriers redirect the DDOS traffic to filters, and the remaining valid traffic is tunneled through Target A's designated carrier to the target's 1-Gbps ingress.)
The Great Challenge of Filtering Out DDOS Attacks

The great challenge regarding current DDOS attacks is that the only way to avoid the sort of problem mentioned in the text is through nontrivial changes in target infrastructure. Two of these nontrivial changes are important to mention here:
1. Stronger authentication of inbound inquiries and transactions from users is imperative. Such authentication is unattractive, however, for e-commerce sites designed to attract users from the Internet while minimizing any procedures that might scare away customers.
2. To minimize the amplification effects of some target system, great care must go into analyzing the behavior of Internet-visible applications to determine if small inquiries can produce much larger responses. This is particularly important for public shared services such as the domain name system, which is quite vulnerable to amplification attacks.
These types of technical considerations must be included in modern national infrastructure protection initiatives.
Modern DDOS attacks take into account a more advanced filtering system and thus design the DDOS traffic accordingly.
SCADA Separation Architecture

Many critical national infrastructure systems include supervisory control and data acquisition (SCADA) functionality. These systems can be viewed as the set of software, computers, and networks that provide remote coordination of control systems for tangible infrastructures such as power generation systems, chemical plants, manufacturing equipment, and transportation systems. The general structure of SCADA systems includes the following components:
● Human-machine interface (HMI)—The interface between the human operator and the commands relevant to the SCADA system
● Master terminal unit (MTU)—The client system that gathers data locally and transmits it to the remote terminal unit
● Remote terminal unit (RTU)—The server that gathers data remotely and sends control signals to field control systems
● Field control systems—Systems that have a direct interface to field data elements such as sensors, pumps, and switches
The primary security separation issue in a SCADA system architecture is that remote access from an MTU to a given RTU must be properly mediated according to a strong access control policy.2 The use of firewalls between MTUs and RTUs is thus imperative in any SCADA system architecture. This separation must also enforce policy from any type of untrusted network, such as the Internet, into the RTUs. If this type of protection is not present, then the obvious risk emerges that an adversary can remotely access and change or influence the operation of a field control system.
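The MTU-to-RTU mediation described above amounts to a default-deny access control check. The sketch below is a toy policy table; the MTU/RTU identifiers and command names are invented for illustration and do not correspond to any real SCADA protocol.

```python
# Toy firewall policy mediating MTU -> RTU access in a SCADA
# architecture: only enumerated (MTU, RTU) pairs and command types
# pass; everything else, including any source on an untrusted network,
# is denied by default. All identifiers are illustrative assumptions.

ALLOWED = {
    ("mtu-ops-1", "rtu-plant-a"): {"read-telemetry", "set-valve"},
    ("mtu-ops-1", "rtu-plant-b"): {"read-telemetry"},
}

def mediate(source: str, rtu: str, command: str) -> bool:
    """Default-deny access control between MTUs and RTUs."""
    return command in ALLOWED.get((source, rtu), set())

assert mediate("mtu-ops-1", "rtu-plant-a", "set-valve")
assert not mediate("mtu-ops-1", "rtu-plant-b", "set-valve")
assert not mediate("internet-host", "rtu-plant-a", "read-telemetry")
```

The essential property is the default in the last line: an unlisted source, such as a host on the Internet, matches no policy entry and is refused.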
As one might expect, all the drawbacks associated with large-scale firewall deployment are also present in SCADA systems. Coverage and accuracy issues must be considered, as well as the likelihood that individual components have direct or wireless connections to the Internet through unknown or unapproved channels. This implies that protection of RTUs from unauthorized access will require a combination of segregated local area firewalls, aggregated enterprise-wide firewalls, and carrier-hosted network-based firewalls (see Figure 3.7).
The biggest issue for SCADA separation security is that most of the associated electromechanical systems were designed and evolved in an environment largely separate from conventional computing and networking. Few computing texts explain the subtle details of SCADA system architecture; in fact, computer scientists can easily complete an advanced program of study without the slightest exposure to SCADA issues. Thus, in far too many SCADA environments, the computerized connections between tangible systems and their control networks have occurred in an ad hoc manner, often as a result of establishing local convenience such as remote access. For this reason, the likelihood is generally low that state-of-the-art protection mechanisms are in place to protect a given SCADA system from cyber attack.

2. R. Krutz, Securing SCADA Systems, John Wiley & Sons, New York, 2006.

Remote access from MTUs to RTUs opens a door that adversaries can exploit unless it is properly mediated.
An additional problem that emerges for SCADA firewall usage is that commercial firewalls do not generally support SCADA protocols. When this occurs, the firewall operator must examine which types of ports are required for usage of the protocol, and these would have to be opened. Security experts have long known that one of the great vulnerabilities in a network is the inadvertent opening of ports that can be attacked. Obviously, national infrastructure protection initiatives must be considered that would encourage and enable new types of firewall functionality, such as special proxies that could be embedded in SCADA architecture to improve immediate functionality.
Figure 3.7 Recommended SCADA system firewall architecture. (Diagram: an adversary on the Internet is blocked by a provider-managed, network-based SCADA firewall in the SCADA service provider fabric; a SCADA enterprise firewall fronts the SCADA enterprise LAN of MTUs, which reach RTUs over wired and wireless (3G/4G) connections, with each RTU connecting to field data elements.)

Protection mechanisms must be updated to effectively protect a SCADA system from cyber attack.

Opening ports, although necessary, is a risky endeavor, as it subjects the SCADA system to increased vulnerabilities.

Physical Separation

One separation technique that is seemingly obvious, but amazingly underrepresented in the computer security literature, is the physical isolation of one network from another. On the surface, one would expect that nothing could be simpler for separating one network from any untrusted environment than just unplugging all external connections. The process is known as air gapping, and it has the great advantage of not requiring any special equipment, software, or systems. It can be done to separate enterprise networks from the Internet or components of an enterprise network from each other.
The problem with physical separation as a security technique is that as complexity increases in some system or network to be isolated, so does the likelihood that some unknown or unauthorized external connection will arise. For example, a small company with a modest local area network can generally enjoy high confidence that external connections to the Internet are well known and properly protected. As the company grows, however, and establishes branch offices with diverse equipment, people, and needs, the likelihood that some generally unrecognized external connectivity will arise is high. Physical separation of networks thus becomes more difficult.
So how does one go about creating a truly air-gapped network? The answer lies in the following basic principles:
● Clear policy—If a network is to be physically isolated, then clear policy must be established around what is and what is not considered an acceptable network connection. Organizations would thus need to establish policy checks as part of the network connection provision process.
● Boundary scanning—Isolated networks, by definition, must have some sort of identifiable boundary. Although this can certainly be complicated by firewalls embedded in the isolated network, a program of boundary scanning will help to identify leaks.
● Violation consequences—If violations occur, clear consequences should be established. Government networks in the U.S. military and intelligence communities, such as SIPRNet and Intelink, are protected by laws governing how individuals must use these classified networks. The consequences of violation are not pleasant.
● Reasonable alternatives—Leaks generally occur in an isolated network because someone needs to establish some sort of communication with an external environment. If a network connection is not a reasonable means to achieve this goal, then the organization must provide or support a reasonable work-around alternative.

Air gapping allows for physical separation of the network from untrusted environments.

As a company grows, physical separation as a protection feature becomes increasingly complex.

Perhaps the biggest threat to physical network isolation involves dual-homing a system to both an enterprise network and some external network such as the Internet. Such dual-homing can easily arise where an end user utilizes the same system to access both the isolated network and the Internet. As laptops have begun to include native 3G wireless access, the likelihood of dual-homing increases. Regardless of the method, if any sort of connectivity is enabled simultaneously to both systems, then the end user creates an inadvertent bridge (see Figure 3.8).
It is worth mentioning that the bridge referenced above does not necessarily have to be established simultaneously. If a system connects to one network and is infected with some sort of malware, then this can be spread to another network upon subsequent connectivity. For this reason, laptops and other mobile computing devices need to include some sort of native protection to minimize this problem. Unfortunately, the current state of the art for preventing malware downloads is poor.
A familiar technique for avoiding bridges between networks involves imposing strict policy on end-user devices that can be used to access an isolated system. This might involve preventing certain laptops, PCs, and mobile devices from being connected to the Internet; instead, they would exist solely for isolated network usage. This certainly reduces risk, but is an expensive and cumbersome alternative. The advice here is that for critical systems, especially those involving safety and life-critical applications, if such segregation is feasible then it is probably worth the additional expense. In any event, additional research in multimode systems that ensure avoidance of dual-homing between networks is imperative and recommended for national infrastructure protection.
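The dual-homing condition itself is easy to state precisely: a device bridges the gap when it is attached to both an isolated network and an external one at the same time. The sketch below is a toy check over named network attachments; the network names are invented for illustration, and a real control would inspect actual interface state rather than a label set.

```python
# Sketch of a dual-homing check for an end-user device: if its active
# attachments span both an isolated (air-gapped) network and an
# external network, the device forms an inadvertent bridge.
# Network names are illustrative assumptions.

ISOLATED_NETS = {"scada-isolated", "ops-enclave"}
EXTERNAL_NETS = {"internet", "3g-wireless"}

def is_bridge(attached: set[str]) -> bool:
    """True when the device touches both sides at once."""
    return bool(attached & ISOLATED_NETS) and bool(attached & EXTERNAL_NETS)

assert not is_bridge({"scada-isolated"})
assert not is_bridge({"internet"})
assert is_bridge({"scada-isolated", "3g-wireless"})  # laptop with native 3G
```

Note that, as the text observes, passing this check at every instant is still not sufficient: malware carried across sequential connections defeats a purely simultaneous-attachment test.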
Dual-homing creates another area of vulnerability for enterprise networks.

Figure 3.8 Bridging an isolated network via a dual-homing user. (Diagram: an end user inside an isolated environment connects simultaneously to the isolated network and the Internet, creating a leak.)

Imposing strict policies regarding connection of laptops, PCs, and mobile devices to a network is both cumbersome and expensive but necessary.

Insider Separation

The insider threat in national infrastructure protection is especially tough to address because it is relatively easy for determined adversaries to obtain trusted positions in groups with responsibility for national assets. This threat has become even more difficult to counter as companies continue to partner, purchase, and outsource across political boundaries. Thus, the ease with which an adversary in one country can gain access to the internal, trusted infrastructure systems of another country is both growing and troubling.
Traditionally, governments have dealt with this challenge through strict requirements on background checking of any individuals who require access to sensitive government systems. This practice continues in many government procurement settings, especially ones involving military or intelligence information. The problem is that national infrastructure includes so much more than just sensitive government systems. It includes SCADA systems, telecommunications networks, transportation infrastructure, financial networks, and the like. Rarely, if ever, are requirements embedded in these commercial environments to ensure some sort of insider controls against unauthorized data collection, inappropriate access to customer records, or administrative access to critical applications. Instead, it is typical for employees to be granted access to the corporate intranet, from which virtually anything can be obtained.
Techniques for reducing the risk of unauthorized insider access do exist and can be embedded in the design and operation of national infrastructure. These techniques include the following:
● Internal firewalls—Internal firewalls separating components of national assets can reduce the risk of insider access. Insiders with access to component A, for example, would have to successfully negotiate through a firewall to gain access to component B. Almost every method for separating insiders from assets will include some sort of internal firewall. They can be implemented as fully configured firewalls or as packet-filtering routers; regardless, the method of separating insiders from assets using firewalls must become a pervasive control in national infrastructure.
● Deceptive honey pots—As we discussed in Chapter 2, internal honey pots can help identify malicious insiders. If the deception is openly advertised, then malicious insiders might be more uncertain in their sabotage activity; if the deception is stealth, however, then operators might observe malicious behavior and potentially identify the internal source.
● Enforcement of data markings—Many organizations with responsibility for national infrastructure do not properly mark their information. Every company and government agency must identify, define, and enforce clearly visible data markings on all information that could be mishandled. Without such markings, the likelihood of proprietary information being made available inadvertently to adversaries increases substantially. Some companies have recently begun to use new data markings for personally identifiable information (PII).
● Data leakage protection (DLP) systems—Techniques for sniffing gateway traffic for sensitive or inappropriate materials are becoming common. Tools called DLP systems are routinely deployed in companies and agencies. At best, they provide weak protection against insider threats, but they do help identify erroneous leaks. Once deployed, they provide statistics on where and how insiders might be using corporate systems to spill information. In practice, however, no knowledgeable insider would ever be caught by a data leakage tool; the leak would instead be done using non-company-provided computers and networks.

An adversarial threat may come from a trusted partner.

The commercially run components of our national infrastructure do not have the same stringent personnel requirements as the government-run components.

One of the more effective controls against insider threats involves a procedural practice that can be embedded into virtually every operation of an organization. The technique is known as segregation of duties, and it should be familiar to anyone who has dealt with Sarbanes-Oxley requirements in the United States. Security researchers will recognize the related separation of duties notion introduced in the Clark-Wilson integrity model. In both cases, critical work functions are decomposed so that work completion requires multiple individuals to be involved. For example, if a financial task requires two different types of activities for completion, then a segregation of duties requirement would ensure that no one individual could ever perform both operations.
The purpose of this should be obvious. By ensuring that multiple individuals are involved in some sensitive or critical task, the possibility of a single insider committing sabotage is greatly reduced. Of course, multiple individuals could still collude to create an internal attack, but this is more difficult and less likely in most cases. If desired, the risk of multiple individuals creating sabotage can be reduced by more complex segregation of duty policies, perhaps supported by the use of security architectural controls, probably based on internally positioned firewalls. In fact, for network-based segregation tasks, the use of internal firewalls is the most straightforward implementation.
In general, the concept of segregation of duties can be represented via a work function ABC that is performed either by a single operator A or as a series of work segments by multiple operators. This general schema supports most instances of segregation of duties, regardless of the motivation or implementation details (see Figure 3.9).
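The work-function-ABC schema above reduces to a simple check: no single operator may be assigned more than one segment of the decomposed function. A minimal sketch, with invented segment and operator names:

```python
# Toy segregation-of-duties check for the work function ABC described
# in the text: the decomposed function satisfies the policy only if
# its segments are performed by distinct operators, so no single
# insider can execute the whole task alone.

def violates_segregation(assignments: dict[str, str]) -> bool:
    """True if any one operator is assigned more than one work segment."""
    operators = list(assignments.values())
    return len(set(operators)) < len(operators)

# Operator A performing all of ABC violates the policy; splitting the
# segments across operators A, B, and C satisfies it.
assert violates_segregation({"A": "op-a", "B": "op-a", "C": "op-a"})
assert not violates_segregation({"A": "op-a", "B": "op-b", "C": "op-c"})
```

A real deployment would enforce this at the workflow or access-control layer (for network-based tasks, via the internal firewalls mentioned above) rather than as an after-the-fact audit, but the invariant being enforced is exactly this one.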
Segregation of duties offers another layer of protection.
Internal firewalls create a straightforward de facto separation of duties.
The idea of breaking down work functions into components is certainly not new. Managers have decomposed functions into smaller tasks for many years; this is how assembly lines originated. Unfortunately, most efforts at work function decomposition result in increased bureaucracy and decreased worker (and end-user) satisfaction. The stereotyped image arises of the government bureau where customers must stand in line at this desk for this function and then stand in line at that desk for that function, and so on. The process is clearly infuriating but, ironically, is also difficult to sabotage by a malicious insider.
The challenge for national infrastructure protection is to integrate segregation of duty policies into all aspects of critical asset management and operation, but to do so in a manner that minimizes the increased bureaucracy. This will be especially difficult in government organizations where the local culture always tends to nurture and embrace new bureaucratic processes.
Figure 3.9 Decomposing work functions for segregation of duty. (Diagram: an original work function ABC performed by one operator A is decomposed, under segregation of duties, into work functions A, B, and C performed by operators A, B, and C.)

How to effectively separate duties without increasing the unwieldy bureaucracy is a challenge that must be addressed.

Asset Separation

Asset separation involves the distribution, replication, decomposition, or segregation of national assets to reduce the risk of an isolated compromise. Each of these separation techniques can be described as follows:
● Distribution involves creating functionality using multiple cooperating components that work together as a distributed system. The security advantage is that if the distributed system is designed properly, then one or more of the components can be compromised without breaking the overall system function.
● Replication involves copying assets across disparate components so that if one asset is broken, then replicated versions will continue to be available. Database systems have been protected in this way for many years. Obviously, no national asset should exist without a degree of replication to reduce risk.
● Decomposition is the breaking down of complex assets into individual components so that isolated compromise of a component will be less likely to break the overall asset. A common implementation of a complex business process, for example, generally includes some degree of decomposition into smaller parts.
● Segregation is the logical separation of assets through special access controls, data markings, and policy enforcement. Operating systems, unfortunately, provide weak controls in this regard, largely because of the massive deployment of single-user machines over the past couple of decades. Organizations thus implement logical separation of data by trying to keep it on different PCs and laptops. This is a weak implementation.
Each of these techniques is common in modern infrastructure management. For example, content distribution networks (CDNs) are rarely cited as having a positive impact on national infrastructure security, but the reality is that the distribution and replication inherent in CDNs for hosting are powerful techniques for reducing risk. DDOS attacks, for example, are more difficult to complete against CDN-hosted content than for content resident only on an origination host. Attackers have a more difficult time targeting a single point of failure in a CDN (see Figure 3.10).
Segregation is one method of separation.

Figure 3.10 Reducing DDOS risk through CDN-hosted content. (Diagram: bots aim a DDOS attack through the carriers at the origination host on Target A's designated carrier; CDN replicated hosts carrying the same content are possibly unaffected by the attack.)

It is important to emphasize that the use of a CDN certainly does not ensure protection against a DDOS attack, but the replication and distribution inherent in a CDN will make the attack more difficult. By having the domain name system (DNS) point to CDN-distributed assets, the content naturally becomes more robust. National infrastructure designers and operators are thus obliged to ensure that CDN hosting is at least considered for all critically important content, especially multimedia content (streaming and progressive download) and any type of critical software download.
This is becoming more important as multimedia provision becomes more commonly embedded into national assets. In the recent past, the idea of providing video over the Internet was nothing more than a trivial curiosity. Obviously, the massive proliferation of video content on sites such as YouTube.com has made these services more mainstream. National assets that rely on video should thus utilize CDN services to increase their robustness. Additional DDOS protection of content from the backbone service provider would also be recommended.
Multilevel Security (MLS)

A technique for logical separation of assets that was popular in the computer security community during the 1980s and 1990s is known as multilevel security (MLS). MLS operating systems and applications were marketed aggressively to the security community during that time period. A typical implementation involved embedding mandatory access controls and audit trail hooks into the underlying operating system kernel. Assurance methods would then be used to ensure that the trusted component of the kernel was correct, or at least as correct as could be reasonably verified. Today, for reasons largely economic, MLS systems are no longer available, except in the most esoteric classified government applications.
The idea behind MLS was that, by labeling the files and directories of a computer system with meaningful classifications, and by also labeling the users of that system with meaningful clearances, a familiar security policy could be enforced. This scheme, which was motivated largely by paper methods used to protect information in government, produced a logical separation of certain assets from certain users, based on the existing policy. For example, files marked "secret" could only be read by users with sufficient clearances. Similarly, users not cleared to the level of "top secret" would not be allowed to read files that were so labeled. The result was an enforced policy on requesting users and protected assets (see Figure 3.11).
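The read rule described above is the "simple security" (no read up) property of mandatory access control. A minimal sketch over the two levels discussed in the text:

```python
# Sketch of MLS "no read up" enforcement as described in the text: a
# read is allowed only when the requester's clearance dominates the
# object's classification. A two-level lattice keeps the toy simple.

LEVELS = {"secret": 1, "top-secret": 2}

def may_read(clearance: str, classification: str) -> bool:
    """Simple-security property: read allowed iff clearance >= label."""
    return LEVELS[clearance] >= LEVELS[classification]

assert may_read("top-secret", "secret")      # read down: allowed
assert may_read("secret", "secret")          # same level: allowed
assert not may_read("secret", "top-secret")  # read up: blocked
```

A fuller Bell-La Padula treatment would add the *-property constraining writes (no write down) and compartments alongside levels, but the dominance comparison is the core of the enforcement.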
The increase in multimedia components within national infrastructure networks argues for increased reliance on CDN services.

The familiar notion of "top-secret clearance" comes from MLS systems.

Several models of computer system behavior with such MLS functionality were developed in the early years of computer security. The Bell-La Padula disclosure and Biba integrity models are prominent examples. Each of these models stipulated policy rules that, if followed, would help to ensure certain desirable security properties. Certainly, there were problems, especially as networking was added to isolated secure systems, but, unfortunately, most research and development in MLS dissolved mysteriously in the mid-1990s, perhaps as a result of the economic pull of the World Wide Web. This is unfortunate, because the functionality inherent in such MLS separation models would be valuable in today's national infrastructure landscape. A renewed interest in MLS systems is thus strongly encouraged to improve protection of any nation's assets.
Figure 3.11 Using MLS logical separation to protect assets. (Diagram: under MLS policy enforcement, a top secret-cleared requesting user may read both top secret and secret classified assets, while a secret-cleared user may read secret assets but is blocked from reading top secret assets; MLS thus produces a logical separation of assets.)
Implementing a National Separation Program

Implementation of a national separation program would involve verification and validation of certain design goals in government agencies and companies with responsibility for national infrastructure. These goals, related to policy enforcement between requesting users and the protected national assets, would include the following:
● Internet separation—Certain critical national assets simply should not be accessible from the Internet. One would imagine that the control systems for a nuclear power plant, for example, would be good candidates for separation from the Internet. Formal national programs validating such separation would be a good idea. If this requires changes in business practice, then assistance and guidance would be required to transition from open Internet connectivity to something more private.
● Network-based firewalls—National infrastructure systems should be encouraged to utilize network-based firewalls, preferably ones managed by a centralized group. The likelihood is higher in such settings that signatures will be kept up to date and that security systems will be operated properly on a 24/7 basis. Procurement programs in government, in particular, must begin to routinely include the use of network-based security in any contract with an Internet service provider.
● DDOS protection—All networks associated with national assets should have a form of DDOS protection arranged before an attack occurs. This protection should be provided on a high-capacity backbone that will raise the bar for attackers contemplating a capacity-based cyber attack. If some organization, such as a government agency, does not have a suitable DDOS protection scheme, this should be likened to having no disaster recovery program.
● Internal separation—Critical national infrastructure settings must have some sort of incentive to implement an internal separation policy to prevent sabotage. The Sarbanes-Oxley requirements in the United States attempted to enforce such separation for financial systems. While the debate continues about whether this was a successful initiative, some sort of program for national infrastructure seems worth considering. Validation would be required that internal firewalls exist to create protection domains around critical assets.
● Tailoring requirements—Incentives must be put in place for vendors to consider building tailored systems, such as firewalls for specialized SCADA environments. This would greatly reduce the need for security administrators in such settings to configure their networks in an open position.

Obviously, once a national program is in place, consideration of how one might separate assets between different cooperating nations would seem a logical extension. Certainly, this would seem a more distant goal given the complexity and difficulty of creating validated policy enforcement in one nation.

MLS systems seem to have gone by the wayside but should be revived as another weapon in the national infrastructure protection arsenal.
Cyber Attacks. DOI: 10.1016/B978-0-12-384917-5.00004-4. © 2011 Elsevier Inc. All rights reserved.

4 DIVERSITY
We are looking at computers the way a physician would look at genetically related patients, each susceptible to the same disorder.
— Mike Reiter, professor of electrical and computer engineering and computer science at Carnegie-Mellon University1
Making national infrastructure more diverse in order to create greater resilience against cyber attack seems to be a pretty sensible approach. For example, natural scientists have known for years that a diverse ecosystem is always more resilient to disease than a monoculture. When a forest includes only one type of tree, the possibility arises that a single disease could wipe out the entire ecosystem. This type of situation arises even in business. Certain airlines, for example, have decided to use only one model of aircraft. This reduces the cost of maintenance and training but does create a serious risk if that particular aircraft were grounded for some reason. The airline would be out of business—a risk that is avoided by a diversity approach.
So it would stand to reason that the process of securing any set of national assets should always include some sort of diversity strategy. This diversity should extend to all applications, software, computers, networks, and systems. Unfortunately, with the exception of familiar geographic requirements on network routes and data centers, diversity is not generally included in infrastructure protection. In fact, the topic of deliberately introducing diversity into national infrastructure to increase its security has not been well explored by computer scientists. Only recently have some researchers begun to investigate the benefits of diversity in software deployment.
Diversity in national infrastructure involves the introduction of intentional differences into systems. Relevant differences include the vendor source, deployment approach, network connectivity, targeted standards, programming language, operating system, application base, software version, and so on. Two systems are considered diverse if their key attributes differ, and nondiverse otherwise (see Figure 4.1).

1. Quoted in "Taking Cues from Mother Nature to Foil Cyber Attacks" (press release), Office of Legislative and Public Affairs, National Science Foundation, Washington, D.C., 2003 (http://www.nsf.gov/od/lpa/news/03/pr03130.htm).

Introducing diversity at all levels of functionality has not been properly explored as a protection strategy.
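The diverse-versus-nondiverse comparison can be sketched as a count of differing key attributes. Note an assumption in this sketch: in Figure 4.1, systems B and C are labeled nondiverse even though their vendor sources differ, so a single differing attribute is evidently not enough; the majority threshold used below is an illustrative guess at the intended rule, not something the text states.

```python
# Toy diversity comparison over the attributes of Figure 4.1. The
# attribute values mirror the figure; the majority-differ threshold is
# an illustrative assumption.

SYSTEM_A = {"vendor": "Company X", "deployment": "off-the-shelf",
            "connectivity": "IP", "standards": "IPsec",
            "language": "C++", "os": "Windows"}
SYSTEM_B = {"vendor": "Company Y", "deployment": "custom",
            "connectivity": "TDM", "standards": "none",
            "language": "Java", "os": "Unix"}
SYSTEM_C = dict(SYSTEM_B, vendor="Company Z")  # differs only in vendor

def diverse(s1: dict[str, str], s2: dict[str, str]) -> bool:
    """Diverse when most key attributes differ between the two systems."""
    differing = sum(1 for k in s1 if s1[k] != s2[k])
    return differing > len(s1) / 2

assert diverse(SYSTEM_A, SYSTEM_B)      # all six attributes differ
assert not diverse(SYSTEM_B, SYSTEM_C)  # only the vendor differs
```

The security intuition is the same either way: each differing attribute is one more assumption an attacker must get right for an attack to reach both systems.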
The general idea is that an adversary will make assumptions about each of the relevant attributes in a target system. In the absence of diversity, a worst-case scenario results if the adversary makes the right assumptions about each attribute. If, for example, the adversary creates an attack on a set of computers that assumes an underlying Microsoft® operating system environment, and the national asset at risk employs only these types of systems, then the effect could be significant. In the presence of diversity, however, it becomes much more difficult for an adversary to create an attack with maximal reach. This is especially relevant for attacks that are designed to automatically propagate. Eventually, the attack will reach a point where it can no longer copy itself or remotely execute, and the process will cease.
Why, then, is diversity so underrepresented in national infrastructure protection? To understand this, one must first recognize the near-obsessive goal of enforcing sets of common standards that the information technology and security communities have attempted to achieve. In nearly every facet of computing, sets of standard, auditable practices have been defined and backed by powerful organizations. In the United States, the Sarbanes-Oxley standard has had a profound influence on the operation of every major corporation in the country, leading to more common approaches to financial systems operation. Commonality, as we discuss in the next chapter, is somewhat at odds with diversity.
This focus on maintaining common, standard operating envi- ronments should not come as a surprise. The rise of the Internet, for example, was driven largely by the common acceptance of a single protocol suite. Even the provision of Internet-based ser- vices such as websites and mail servers requires agreement among system administrators to follow common port assign- ments. Chaos would ensue if every administrator decided to
System
A
B
C
Company X
Company Y
Company Z
Off-the-shelf
Custom
Custom
IP
TDM
TDM
IP sec
None
None
C++
Java
Java
Windows
Unix
Unix
Vendor Source
Deployment Approach
Network Connectivity
Targeted Standards
Programming Language
Operating System Attributes
A and B: Diverse
B and C: Non diverse
Figure 4.1 Diverse and nondiverse components through attribute differences.
Diversity increases the number of assumptions an adversary has to make about the system and creates more potential for an adversary’s plan to fail.
Standardized operations are important for compliance but are at odds with diversity.
Chapter 4 DIVERSITY 75
assign random ports to their Internet services; end users would not be able to easily locate what they need, and the Internet would be a mess (although this would certainly complicate broad types of attacks). So, the result is general agreement on common computing confi gurations.
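The pairwise attribute comparison behind Figure 4.1 can be expressed as a minimal sketch. The attribute names and values mirror the figure; the strict test that every key attribute must differ is one reasonable reading of the text's definition, not a rule stated by the author.

```python
# Attribute profiles mirroring Figure 4.1 (values taken from the figure).
SYSTEMS = {
    "A": {"vendor": "Company X", "deployment": "Off-the-shelf",
          "connectivity": "IP", "standards": "IPsec",
          "language": "C++", "os": "Windows"},
    "B": {"vendor": "Company Y", "deployment": "Custom",
          "connectivity": "TDM", "standards": "None",
          "language": "Java", "os": "Unix"},
    "C": {"vendor": "Company Z", "deployment": "Custom",
          "connectivity": "TDM", "standards": "None",
          "language": "Java", "os": "Unix"},
}

def shared_attributes(x, y):
    """Return the attribute names on which two systems agree."""
    return {k for k in x if x[k] == y[k]}

def diverse(x, y):
    """Two systems are diverse if their key attributes all differ."""
    return len(shared_attributes(x, y)) == 0

print(diverse(SYSTEMS["A"], SYSTEMS["B"]))  # True: no attribute is shared
print(diverse(SYSTEMS["B"], SYSTEMS["C"]))  # False: only the vendor differs
```

Note that B and C differ in vendor alone; under this strict reading they remain nondiverse, matching the figure's conclusion.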
Another key motivation to avoid diversity for most system managers is the costs involved. Typical computing and networking management teams have created programs focused on removing differences in enterprise systems in order to reduce operating expenses. Clearly, nondiverse information technology systems simplify platform deployment, end-user training, system administrative practices, and system documentation. For these cost-related reasons, diversity is generally not a prominent goal in most current national infrastructure settings. The result is less secure infrastructure.
Diversity and Worm Propagation

The self-propagation of a computer worm is a good example of an attack that relies on a nondiverse target environment to function properly. The box shows how relatively simple an attack can be.
Diversity currently competes with commonality and cost savings.
Worm Functionality in Three Easy Steps

The functionality of a typical, generic computer worm is quite straightforward (only three steps) and can be described in simple pseudo-code terms as follows:
    Program Worm:
        Step 1. Find a target system on the network for propagation of Program Worm.
        Step 2. Copy Program Worm to that target system.
        Step 3. Remotely execute Program Worm on that target system.
        Repeat Steps 1 through 3.
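As a concrete (and entirely benign) illustration, the three-step loop can be rendered as a short simulation over a toy in-memory "network." The host graph, the platform labels, and the rule that a copy succeeds only on a matching platform are illustrative assumptions added here to connect the pseudo-code to the diversity argument; they are not part of the pseudo-code itself.

```python
from collections import deque

# "Network" is a dict of host -> reachable neighbours. A copy succeeds only
# when the target runs the platform the worm was built for (an assumption
# used to illustrate how diversity halts propagation).

def spread(network, platforms, start, target_platform):
    infected = {start}
    queue = deque([start])
    while queue:
        host = queue.popleft()
        for neighbour in network[host]:        # Step 1: find a target system
            if neighbour in infected:
                continue
            if platforms[neighbour] != target_platform:
                continue                       # diverse host: copy/execute fails
            infected.add(neighbour)            # Step 2: copy the worm
            queue.append(neighbour)            # Step 3: remote execution repeats the loop
    return infected

network = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
platforms = {"a": "win", "b": "win", "c": "unix", "d": "win"}
print(sorted(spread(network, platforms, "a", "win")))  # ['a', 'b', 'd']
```

With a mixed-platform host map, the loop stalls at every host running a different platform, which is exactly the cessation effect the chapter attributes to diversity.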
As you can see, a worm program relies on the ability to find common, reachable, interoperable systems on the network that will accept and execute a copy of the worm program. In the early days of the Internet, this would be accomplished by checking a local file that would include a list of systems that were reachable. Today, it’s done by creating batches of Internet Protocol addresses. Also, in those early days, it was quite easy to copy and execute programs from one system to another, because no one had yet invented the firewall.
One would have hoped that the global deployment of firewalls would have stopped the ability of adversaries to create worms, but sadly it has not. Instead, vulnerabilities or services open through the firewalls are used as the basis for worms. Nondiversity in such setups is also the norm. This is unfortunate, because if a worm operates in a diverse environment, and thus cannot find systems that consistently meet one or more of these criteria, then its propagation will cease more rapidly. This can be depicted in a simple reachability diagram showing the point of initiation for the worm through its propagation to the final point at which the activity ceases as a result of diversity. As the worm tries to propagate, diversity attributes that reduce its ability to locate reachable systems, make copies, and remotely execute are the most effective (see Figure 4.2).
Figure 4.2 Mitigating worm activity through diversity. (The diagram traces a worm from initiation to cessation as successive hosts prove unreachable, unable to accept a copy of the worm, or unable to remotely execute it.)

A worm propagates by finding interoperable systems to target.

Obviously, all worms will eventually cease to propagate, regardless of the degree of diversity in a given network. The security advantage one gains with diversity is that the worm is likely to cease more quickly and perhaps without human intervention. Empirical experience in the global security community dealing with worms such as the SQL/Slammer and Blaster worms of 2003 and the Sasser worm of 2004 suggests that significant human intervention is required to halt malicious operation. During the early hours of the SQL/Slammer worm, most of the security incident response calls involved people trying to figure out what to do. Eventually, the most effective solution involved putting local area network blocks in place to shut down the offending traffic. By the time the event died down, many millions of hours of global labor had been expended working on the problem. By increasing diversity, one should expect to reduce response costs around the world associated with fighting worms.
The real challenge here is that both the Internet and the networks and systems being run by companies and agencies charged with national infrastructure are simply not diverse—and there is little discussion in place to alter this situation. As we suggested earlier, this is driven largely by the goal to maximize interoperability. There are some exceptions in the broader computing community, such as digital rights management (DRM)-based systems that have tended to limit the execution of certain content applications to very specific devices such as the iPod® and iPhone®. The general trend, however, is toward more open, interoperable computing. What this means is that, for national infrastructure components that must be resilient against automated attacks such as worms, the threat will remain as long as the networking environment is a monoculture.
Desktop Computer System Diversity

Typical individual computer users in the home or office, regardless of their location in the world, are most likely to be using a commercial operating system running on a standard processor platform and utilizing one of a couple of popular browsers to perform searches on a popular search engine. This might seem an obvious statement, but in the early days of computing there were many users on home-grown or proprietary systems using all sorts of software that might only be known locally.
Today, however, the most likely configuration would be a Windows®-based operating system on an Intel® platform with Internet Explorer® being used for Google® searches. We can say this confidently, because almost all current estimates of market share list these products as dominant in their respective fields. Certainly, competing platforms and services from Apple® and others have made inroads, but for the most part, especially in business and government environments, the desktop configuration is highly predictable (see Figure 4.3).
This dominant position for these few companies has admittedly led to a number of positive results. It has, for instance, pushed a deeper common understanding of computing among individuals around the world. Different people from different cultures around the world can share their experiences, recommendations, and suggestions about operating systems, search engines, CPUs, and browsers, and the likelihood of applicability is high. The dominant position of these respective products has also helped the software development industry by creating rich and attractive common target markets. Developers generally love to see a dominant platform configuration, because it increases their potential profits through maximal usage. So, computing certainly has moved forward as a result of commonality; not much disagreement exists on this point.

Although introducing security can seem expensive, one should expect to save money on response costs with an effective diverse environment.

The average home PC user is working in a highly predictable computing environment.
The drawback from a national infrastructure perspective, however, is that adversaries will have an easier time creating attacks with significant reach and implication. Just as a game of dominoes works best when each domino is uniformly designed and positioned, so does common infrastructure become easier to topple with a single, uniform push. In some cases, the effect is significant; the operating system market on desktop PCs, for example, is dominated by Microsoft® to the point where a well-designed Windows®-based attack could be applicable to 90% of its desktop targets.
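A back-of-envelope way to see the reach argument: an attack written for a single platform can touch at most the fraction of desktops running that platform. The shares below echo the Figure 4.3 estimates and are illustrative, not current market data.

```python
# Illustrative platform shares (echoing Figure 4.3; "macos"/"other" are
# assumed placeholders for the remainder of the market).
shares = {"windows": 0.90, "macos": 0.07, "other": 0.03}

def max_reach(platform, population):
    """Upper bound on hosts reachable by an attack written for one platform."""
    return int(shares[platform] * population)

print(max_reach("windows", 1_000_000))  # 900000
```

Splitting an estate evenly across two platforms halves this bound, which is the quantitative core of the diversity argument that follows.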
More likely, however, is the situation where the creation of a botnet becomes much easier given the nondiversity of PC configurations. When a botnet operator conceptualizes the design of a new botnet, the most important design consideration involves reach. That is, the botnet operator will seek to create malware that has the maximal likelihood of successfully infecting the largest number of target PCs. As such, the nondiversity of end-user configurations plays right into the hands of the botnet operator. Combine this with the typically poor system administrative practices on most PCs, and the result is lethal. Worse, many security managers in business and government do not understand this risk. When trying to characterize the risk of attack, they rarely understand that the problem stems from a global set of nondiverse end-user PCs being mismanaged by home and office workers.
Figure 4.3 Typical PC configuration showing nondiversity.

    Most likely configuration for a home or office PC user:
    Operating System:  Windows – 90%  (nine-tenths of all PCs)
    Search Engine:     Google – 81%
    CPU:               Intel – 79%
    Browser:           Internet Explorer – 67%  (two-thirds of all PCs)
Targeting the most popular operating system software with a worm attack could bring the majority of PCs to a standstill.
Security managers are unlikely to consider the home PC user when assessing risk.
In response to this threat, national infrastructure protection requires a deliberate and coordinated introduction of diversity into the global desktop computing environment. Enterprise attention is obviously different than that of individuals in homes, but the same principle applies. If the desktop computing assets that can reach a national asset must be maximally resilient, then desktop diversity is worth considering. The most obvious challenge here is related to the consumer marketplace for PCs; that is, the reason why consumers use the same platform is because they prefer it and have chosen to purchase it. If Microsoft® and Intel®, for example, were not providing value in their products, then people would buy something else. The biggest hurdle, therefore, involves enabling diversity without altering the ability of companies to provide products that people like to use. Perhaps this goal could be accomplished via diversity elements coming from within the existing vendor base.
Desktop Diversity Considerations

Additional issues that arise immediately with respect to desktop diversity programs include the following:
● Platform costs—By introducing multiple, diverse platforms into a computing environment, the associated hardware and software costs might increase. This is a common justification by information technology (IT) managers for avoiding diversity initiatives. Certainly, the procurement of larger volumes of a given product will reduce the unit cost, but introducing competition into the PC procurement arena might somewhat offset the increased costs.
● Application interoperability —Multiple, diverse platforms will complicate organizational goals to ensure common interoperability of key applications across all platforms. This can be managed by trying to match the desktop platform to local needs, but the process is not trivial. The good news is that most web-based applications behave similarly on diverse platforms.
● Support and training —Multiple, diverse platforms will complicate support and training processes by adding a new set of vendor concerns. In practical terms, this often means introducing a platform such as Mac OS ® to a more traditional Windows ® -based environment. Because many consumers are comfortable with both platforms, especially youngsters who tend to be more diverse in their selections, the problem is not as intense as it might be.
For national infrastructure protection, desktop diversity initiatives that are focused on ensuring enterprise differences in companies and agencies have a good chance of success. Rewards and incentives can be put in place to mix up the desktop platforms in a given enterprise. The problem is that this will have only limited usefulness from the perspective of botnet design and recruitment. The real advantage would come from diversity in broadband-connected PCs run by consumers around the world. Unfortunately, this is not something that can be easily controlled via an initiative in any country, including the United States.
Interestingly, a related problem that emerges is the seemingly widespread software piracy one finds in certain areas of the globe. Software piracy on the desktop introduces the problem of security updates; that is, depending on the specifics of the theft, it is often difficult for pirated PCs to be properly protected with required patches. When many millions of PCs are in this state, the problem of nondiversity becomes all the more severe.
Diversity Paradox of Cloud Computing

To better understand how diversity goals can be accomplished, it helps to introduce a simple model of desktop computing systems. The model is represented as a linear spectrum of options related to the degree to which systems are either diverse or nondiverse. As such, the two ends of the model spectrum are easy to identify for a given environment. On one side of the spectrum would be the option of complete nondiversity, where every desktop system in the organization, enterprise, or group is exactly the same. On the other side of the spectrum would be the option of complete diversity across the organization, where no two desktop systems are the same. In the middle of the spectrum would be the usual types of settings, where some minor degree of diversity exists, but with a clearly dominant platform.
The model spectrum is useful because it allows illustration of our basic infrastructure security proposition around PCs—namely, as diversity increases, desktop attacks, including the use of worms to create a local denial of service condition, are more difficult to accomplish. One might also suggest that the creation and use of botnets would also be more difficult, but this benefit might be more modest (see Figure 4.4).
Figure 4.4 Spectrum of desktop diversity options. (The spectrum runs from "Desktops Same," through the typical enterprise of mostly same with some different, to "Desktops Different"; desktop attack difficulty increases along the way.)
Global diversity in broadband-connected home PCs would stymie many botnet attacks.
In fact, diverse desktops are tougher to uniformly compromise, because they are less conducive as a group to a scalable, self-propagating attack. For example, if a company has half of its PCs running Windows®-based operating systems and half running Mac OS®-based operating systems, then this will clearly be more challenging for an automatically propagating attack. Hence, the level of diversity and the associated difficulty of attack appear to correlate. A challenge with this view, however, is that it does not properly characterize the optimal choice in reducing desktop attack risk—namely, the removal of desktops from the target environment. After all, one cannot attack systems that are not even there. This suggests a new (and admittedly theoretical) diversity and attack difficulty spectrum (see Figure 4.5).
This suggests that the ultimate (albeit impossible) option for making desktops more secure involves their removal. Obviously, this is not a practical goal, but computer security objectives are often made more tractable via clear statements of the ideal condition. So, while current enterprise or home computing architectures do not include the option of having no desktop computers, older readers will remember the days when desktops did not exist. Rather, people used computer terminals to access information on mainframes, and security benefits were certainly present in such a setup. This included no need for end-user software patching, as well as no end-user platform for targeted malware. One great irony in the present deployment of desktops to every man, woman, and child on the planet is that most people really do not need such computing power. It is likely that they would be just fine with a keyboard, screen, and mouse connected to network-resident applications that are ubiquitously available via the Internet.
Figure 4.5 Diversity and attack difficulty with option of removal. (The extended spectrum adds "Desktops Removed" beyond "Desktops Different" and "Desktops Same"; attack difficulty is highest when desktops are removed.)

As the level of diversity increases, the level of difficulty for an attack likewise increases.

The global proliferation of home PCs has increased the risk of malware attacks.

In modern computing, the closest thing we have to this arrangement is virtualized, cloud-based computing. In such a setup, computing power and application intelligence move to a centralized complex of servers, accessible via light clients. In fact, handheld mobile devices provide the equivalent of a desktop computer in such a cloud environment. One should therefore presume, from the diagram in Figure 4.5, that cloud computing would provide considerable security benefits by removing nondiverse desktops from the environment. This is most likely true, as long as the infrastructure supporting the cloud applications is properly secured, as per the various principles described in this book. If this is not the case, then one is simply moving nondiversity vulnerabilities from the desktops to the servers.
Network Technology Diversity

Modern telecommunications network systems can be viewed as consisting of the following two basic types of technologies:
● Circuit-switched—This includes legacy, circuit-switched systems that support traditional plain old telephone services (POTS) and related voice and data services. The public switched telephone network (PSTN) is the most significant example of deployed circuit-switched technology.
● Packet-switched—This includes more modern, packet-switched systems that support Internet Protocol (IP) and related voice, data, and multimedia services. In addition to the Internet as the most obvious example of packet switching, the signaling network controlling the PSTN is itself a packet-switched system.
For the most part, both logical and physical diversity naturally exist between these two types of services, largely due to the lack of interoperability between the two technologies. That is, the vast majority of equipment, software, processes, and related infrastructure for these services are fundamentally different. Packets cannot accidentally or intentionally spill into circuits, and vice versa.
From a networking perspective, what this means is that a security event that occurs in one of these technologies will generally not have any effect on the other. For example, if a network worm is unleashed across the Internet, as the global community experienced so severely in the 2003–2004 time frame, then the likelihood that this would affect traditional time-division multiplexed (TDM) voice and data services is negligible. Such diversity is of significant use in protecting national infrastructure, because it becomes so much more difficult for a given attack such as a worm to scale across logically separate technologies (see Figure 4.6).
Even with the logical diversity inherent in these different technologies, one must be careful in drawing conclusions. A more accurate view of diverse telecommunications, for example, might expose the fact that, at lower levels, shared transport infrastructure might be present. For example, many telecommunications companies use the same fiber for their circuit-switched delivery as they do for IP-based services. Furthermore, different carriers often use the same right-of-way for their respective fiber delivery. What this means is that in many locations such as bridges, tunnels, and major highways, a physical disaster or targeted terrorist attack could affect networks that were designed to be carrier diverse.

Cloud computing may offer home PC users the diverse, protected environment they cannot otherwise access.

Circuit-switched and packet-switched systems automatically provide diversity when compared to one another.
While sharing of fiber and right-of-way routes makes sense from an operational implementation and cost perspective, one must be cognizant of the shared infrastructure, because it does change the diversity profile. As suggested, it complicates any reliance on a multivendor strategy for diversity, but it also makes it theoretically possible for an IP-based attack, such as one producing a distributed denial of service (DDOS) effect, to have negative implications for non-IP-based transport due to volume. This has not happened in practical settings to date, but because so much fiber is shared it is certainly a possibility that must be considered (see Figure 4.7).
Figure 4.6 Worm nonpropagation benefit from diverse telecommunications. (A worm circulating among end-user computers and intranets on the modern packet-switched side, through IP routing and network management, does not propagate to the traditional circuit-switched side of electronic switching, switch signaling, and end-user phones and circuits, because of logical diversity.)

Unfortunately, vulnerabilities will always be present in IP-based and circuit-switched systems.

A more likely scenario is that a given national service technology, such as modern 2G and 3G wireless services for citizens and business, could see security problems stemming from either circuit- or packet-switched-based attacks. Because a typical carrier wireless infrastructure, for example, will include both a circuit- and packet-switched core, attacks in either area could cause problems. Internet browsing and multimedia messaging could be hit by attacks at the serving and gateway systems for these types of services; similarly, voice services could be hit by attacks on the mobile switching centers supporting this functionality. So, while it might be a goal to ensure some degree of diversity in these technology dependencies, in practice this may not be possible.
What this means from a national infrastructure protection perspective is that maximizing diversity will help to throttle large-scale attacks, but one must be certain to look closely at the entire architecture. In many cases, deeper inspection will reveal that infrastructure advertised as diverse might actually have components that are not. This does not imply that sufficient mitigations are always missing in nondiverse infrastructure, but rather that designers must take the time to check. When done properly, however, network technology diversity remains an excellent means for reducing risk. Many a security officer will report, for example, the comfort of knowing that circuit-switched voice services will generally survive worms, botnets, and viruses on the Internet.
Figure 4.7 Potential for impact propagation over shared fiber. (A worm circulating on the packet-switched side can have a possible impact on the circuit-switched side because both ride the same fiber, a point of physical nondiversity.)
Diversity may not always be a feasible goal.
Physical Diversity

The requirement for physical diversity in the design of computing infrastructure is perhaps the most familiar of all diversity-related issues. The idea is that any computing or networking asset that serves as an essential component of some critical function must include physical distribution to increase its survivability. The approach originated in the disaster recovery community with primary emphasis on natural disasters such as hurricanes and fires, but, as the security threat has matured, infrastructure managers have come to recognize the value of providing some degree of physical diversity. This reduces, for example, reliance on a single local power grid, which is a valued cyber attack target for adversaries. It also greatly reduces the chances of a physical or premise-based attack, simply because multiple facilities would be involved.
These issues are not controversial. In fact, for many years, procurement projects for national asset systems, in both government and industry, have routinely included the demand that the following physical diversity issues be considered:
● Backup center diversity—If any major center for system, network, or application management is included in a given infrastructure component, then it is routinely required that a backup center be identified in a physically diverse location. Few would argue with this approach; if properly applied, it would ensure that the two centers are in different weather patterns and power grid segments.
● Supplier/vendor diversity—Many organizations dictate that for critical infrastructure components, some degree of diversity must be present in the supplier and vendor mix. This reduces the likelihood that any given firm would have too much influence on the integrity of the infrastructure. It also reduces the likelihood of a cascading problem that might link back to some common element, such as a software routine or library, embedded in one vendor’s product portfolio.
● Network route diversity—When network infrastructure is put in place to support national infrastructure, it is not uncommon to demand a degree of network route diversity from the provider or providers. This helps reduce the likelihood of malicious (or nonmalicious) problems affecting connectivity. As mentioned above, this is complicated by common use of bridges, tunnels, or highways for physical network media deployments from several different vendors.
Physical diversity adds another important layer of protection against cascading effects.
Physical diversity has been incorporated into the national asset system for many years.
Achieving Physical Diversity via Satellite Data Services

A good example application that demonstrates physical diversity principles is the provision of certain types of SCADA systems using IP over satellite (IPoS). Satellite data services have traditionally had the great advantage of being able to operate robustly via the airwaves in regions around the globe where terrestrial network construction would be difficult. Generally, in such regions commercial wireless coverage is less ubiquitous or even completely unavailable. Some SCADA applications have thus taken advantage of this robust communication feature in satellite systems to connect remote end-user terminals to the SCADA host system, but the requirement remains that some degree of diversity be utilized. As suggested above, most of this diversity emphasis has been driven largely by concerns over natural and physical disasters, but a clear cyber security benefit exists as well.

Generally, the setup for satellite-connected SCADA involves end users connecting to a collection of physically diverse hubs via IPoS. These diverse hubs are then connected in a distributed manner to the SCADA hosts. An adversary seeking to attack these hubs would have to use either logical or electronic means, and a great degree of logistic effort would be required, especially if the hubs are located in different parts of the world. The Hughes Corporation, as an example, has been aggressive in marketing these types of configurations for SCADA customers. Their recommended remote access configuration for diverse SCADA system control is shown in Figure 4.8.
Figure 4.8 Diverse hubs in satellite SCADA configurations. (Remote terminals at infrastructure components connect through space-based and terrestrial access networks to geographically diverse hubs, which in turn reach the SCADA hosts.)
The advantage of diverse hubs is obvious; if any should be directly compromised, flooded, or attacked (physically or logically), then the SCADA hosts are still accessible to end users. In addition, attacks on local infrastructure components on which the SCADA operation depends, such as power, will not have a cascading effect. Such an approach only works, however, if all diverse components operate at a common service level. For example, if one service provider offers highly reliable, secure services with historical compliance to advertised service level agreements (SLAs), then introducing a diverse provider with poor SLA compliance might not be such a good idea. This is a key notion, because it is not considered reasonable to take a highly functioning system and make it diverse by introducing an inferior counterpart. In any event, this general concept of diverse relay between users and critical hosts should be embedded into all national infrastructure systems.

National Diversity Program

The development of a national diversity program would require coordination between companies and government agencies in the following areas:
● Critical path analysis—An analysis of national infrastructure components must be made to determine certain critical paths that are required for essential services. For example, if a military group relies on a specific critical path to complete some logistic mission, then assurance should exist that this critical path is supported by diverse vendors, suppliers, support teams, and technology.
● Cascade modeling—A similar analysis is required to identify any conditions in a national infrastructure component where a cascading effect is possible due to nondiversity. If, for example, 100% of the PCs in an organization are running in exactly the same configuration, then this poses a risk. Admittedly, the organization might choose to accept the risk, but this should be done explicitly after a security investigation, rather than by default.
● Procurement discipline—The selection and procurement of technology by organizations charged with critical infrastructure should include a degree of diversity requirements. This generally occurs naturally in most large organizations, so the urgency here might not be as intense, but the security benefits are obvious.
The decision of whether to provide rewards and incentives for diversity versus a stricter approach of requiring evidence of some targeted percentage of diversity must be driven by the local environment and culture. The threat environment in a military setting is considerably different than one might find in telecommunications or transportation, so it would seem prudent to make such implementation decisions locally.
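The cascade-modeling analysis described above can be reduced to a minimal sketch: under a worst-case assumption that one working exploit compromises every machine sharing the targeted configuration, the exposure equals the largest monoculture share. The share values used are hypothetical.

```python
# Worst-case cascade exposure: if a fraction p of an organization's machines
# share one configuration, a single working exploit for that configuration
# compromises p of the estate. Shares are hypothetical examples.

def worst_case_cascade(config_shares):
    """Worst-case fraction lost to one exploit = largest monoculture share."""
    return max(config_shares)

print(worst_case_cascade([1.0]))        # fully uniform estate: 1.0
print(worst_case_cascade([0.5, 0.5]))   # two equal configurations: 0.5
print(worst_case_cascade([0.9, 0.1]))   # dominant platform: 0.9
```

Even this crude bound makes the point of the cascade-modeling bullet: an organization running 100% identical configurations has accepted, explicitly or by default, a worst case of total loss from a single exploit.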
Chapter 5 COMMONALITY
The only truly secure system is one that is powered off, cast in a block of concrete, and sealed in a lead-lined room with armed guards—and even then I have my doubts .
—Eugene Spafford, Executive Director of the Purdue University Center for Education and Research in Information Assurance and Security (CERIAS)1
Now that we have outlined our proposal in the previous chapter for national infrastructure systems to include diversity, we can discuss the seemingly paradoxical requirement that infrastructure systems must also demonstrate a degree of commonality. In particular, certain desirable security attributes must be present in all aspects and areas of national infrastructure to ensure maximal resilience against cyber attack. Anyone who has worked in the security field understands this statement and is likely to agree with its basis. The collection of desirable security attributes is usually referred to collectively as security best practices. Example best practices include routine scanning of systems, regular penetration testing of networks, programs for security awareness, and integrity management checking on servers.
When security best practices are easily identified and measurable, they can become the basis for what is known as a security standard. A security standard then becomes the basis for a process known as a security audit, in which an unbiased third-party observer determines based on evidence whether the requirements in the standard are met. The key issue for national infrastructure protection is that best practices, standards, and audits establish a low-water mark for all relevant organizations (see Figure 5.1).
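The low-water-mark idea can be sketched in code. This is a hypothetical illustration, not part of any real standard: the practice names, the 1–5 maturity scale, and the threshold of 3 are all assumptions.

```python
# Hypothetical sketch: a security audit as a "low-water mark" check.
# Practice names, scores, and the threshold are illustrative assumptions.

MINIMALLY_ACCEPTABLE = 3  # assumed bar on an invented 1-5 maturity scale

def audit(practices: dict[str, int]) -> dict[str, list[str]]:
    """Split practices into those meeting the minimal bar and those below it."""
    result: dict[str, list[str]] = {"meets_bar": [], "new_practices_needed": []}
    for name, score in practices.items():
        if score >= MINIMALLY_ACCEPTABLE:
            result["meets_bar"].append(name)
        else:
            # The audit "introduces new practices" where the org falls short.
            result["new_practices_needed"].append(name)
    return result

org_a = {"system scanning": 4, "penetration testing": 2,
         "security awareness": 1, "integrity checking": 3}
report = audit(org_a)
print(report["new_practices_needed"])  # → ['penetration testing', 'security awareness']
```

The point of the sketch is only that an audit measures against a fixed minimum, not against world-class practice.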
Organizations that are below a minimally acceptable security best practices level will find that security standards audits introduce new practices, in addition to revisiting existing practices. The desired effect is that the pre-audit state will transition to an improved post-audit state for all practices. This does not always happen, especially for organizations that have a poor environment
1 Quoted in A. K. Dewdney, “Computer recreations: of worms, viruses and Core War,” Sci. Am. , 260(3), 90–93, 1989.
90 Chapter 5 COMMONALITY
for introducing new security practices, but it is the goal. For organizations that are already above the minimally acceptable level, perhaps even with world-class features, the audit will rarely introduce new practices but will instead revisit existing ones. The desired effect here is that these practices would be strengthened, but, again, this does not always work perfectly, especially if the auditors are less familiar with the world-class security features already in place. Some common security-related best practices standards that one will find in national infrastructure settings are listed in the box.
Figure 5.1 Illustrative security audits for two organizations. (The original figure plots pre-audit and post-audit security practice levels for two organizations against “minimally acceptable” and “world class” marks; the audit introduces new practices for the organization below the bar and revisits existing practices in both cases.)
The purpose of a security audit is to raise the level of security features currently in place.
Common Security-Related Best Practices Standards
● Federal Information Security Management Act (FISMA)—FISMA sets minimal standards for security best practices in federal environments. It is enforced by congressional legislation and involves an annual letter grade being assigned to individual agencies. The following departmental agencies received an “F” for their FISMA rating in 2007: Defense, Commerce, Labor, Transportation, Interior, Treasury, Veterans Affairs, and Agriculture (so did the Nuclear Regulatory Commission).
● Health Insurance Portability and Accountability Act (HIPAA)—Title II of HIPAA includes recommended standards for security and privacy controls in the handling of health-related information for American citizens. It is also enforced by congressional legislation.
● Payment Card Industry Data Security Standard (PCI DSS)—This security standard was developed by the PCI Security Council, which includes major credit card companies such as Visa®, Discover®, American Express®, and MasterCard®. It includes requirements for encrypting sensitive customer data.
With such redundancy in security standards and compliance, one would guess that the principle of commonality would be largely met in national infrastructure protection. For example, some organizations might be required to demonstrate compliance to dozens of different security standards. One would expect that such intense and focused attention on security would lead to a largely common approach to security around the globe. Sadly, the belief here is that in spite of the considerable audit and compliance activity around the world, most of it does not address the type of security commonality that will make a positive difference in national infrastructure protection. The activity instead tends to focus on requirements that have some value but do not address the most critical issues. In fact, the practices that do address the most critical issues exist in the category of state-of-the-art security, far beyond the minimally acceptable levels addressed in most audits.
The audit problem stems from the inherent differences between meaningful and measurable security best practices. There’s an old dumb joke about a man looking for his lost money on 42nd and Eighth. When a passerby asks whether the money was actually lost at that spot, the man looks up and says that the money was actually lost over on 41st and Tenth, but the light is much better here. Security audit of best practices is often like this; the only practices that can be audited are ones where the light is good and measurable metrics can be established. This does not, however, imply that such metrics are always meaningful (see Figure 5.2).
The example requirements shown in Figure 5.2 provide a hint as to the types of requirements that are likely to be included in each category. One can easily levy a measurable requirement on password length, for example, even though this is generally a less useful constraint. This could be viewed as an example that
● ISO/IEC 27000 Standard (ISO27K)—The International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) evolved a British security standard known as BS-7799 into an internationally recognized set of auditable security best practices. Some security experts believe that the ISO27K family of security standards is the most global and generally agreed-upon set of best practices.
All of these standards, and the many additional ones that are not mentioned above, include a large subset of security and functional requirements that are virtually the same. For example, each standard requires carefully documented policies and procedures, authentication and authorization controls, data collection systems, and embedded encryption. Each standard also requires management oversight, ongoing security monitoring, compliance scores issued by designated auditors, and some form of fines or punishment if the standard best practices are not met.
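The overlap among these standards can be illustrated as a set intersection. The requirement lists below are tiny, hand-picked stand-ins, not actual excerpts from FISMA, HIPAA, PCI DSS, or ISO27K.

```python
# Illustrative only: toy requirement sets standing in for four standards.
# Real standards contain hundreds of controls; these names are assumptions.

fisma  = {"documented policy", "access control", "monitoring", "incident response"}
hipaa  = {"documented policy", "access control", "monitoring", "encryption"}
pci    = {"documented policy", "access control", "monitoring", "encryption"}
iso27k = {"documented policy", "access control", "monitoring", "risk assessment"}

# The "large subset ... that are virtually the same" described in the text:
common = fisma & hipaa & pci & iso27k
print(sorted(common))  # → ['access control', 'documented policy', 'monitoring']
```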
is measurable but not meaningful. Conversely, one can levy the important requirement that a strong culture of security be present in an environment. This is a meaningful condition but almost impossible to measure. The example requirement that a security policy be present is both meaningful and measurable. It demonstrates that there are certainly some requirements that reside in both categories.
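One way to make the meaningful-versus-measurable distinction concrete is as two boolean attributes per requirement. The classification below follows the chapter’s three examples; the data structure itself is an assumption for illustration.

```python
# Sketch of the two axes in Figure 5.2 as data. The True/False labels
# follow the text's three examples; everything else is hypothetical.

requirements = {
    "documented security policy": {"meaningful": True,  "measurable": True},
    "culture of security":        {"meaningful": True,  "measurable": False},
    "password length constraint": {"meaningful": False, "measurable": True},
}

def auditable(req: str) -> bool:
    """An auditor can only check what is measurable ('where the light is good')."""
    return requirements[req]["measurable"]

def protective(req: str) -> bool:
    """Protection depends on what is meaningful, measurable or not."""
    return requirements[req]["meaningful"]

# The audit problem in one line: the two sets do not coincide.
assert auditable("password length constraint") and not protective("password length constraint")
```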
Meaningful Best Practices for Infrastructure Protection
A provocative implication here is that the ability to audit a given best practice does not determine or influence whether it is useful for infrastructure protection. In fact, the primary motivation for proper infrastructure protection should not be one’s audit score; rather, the motivation should be success based and economic. The fact is that companies, agencies, and groups with responsibility for infrastructure protection will eventually fail if they do not follow the best available recommendations for security best practices. Unfortunately, the best recommendations come not from the security standards and audit community but from practical experience.
If you do not agree, then please consider that security standards backed by powerful and authoritative groups have existed
Figure 5.2 Relationship between meaningful and measurable requirements. (The original figure shows two overlapping sets of requirements: a documented security policy is both meaningful and measurable; a culture of security is meaningful but hard to measure, the focus of protection; a constraint on password length is measurable but less meaningful, the focus of the auditor.)
Ideally, security practices are both meaningful and measurable.
A great audit score does not necessarily guarantee successful infrastructure protection.
for many decades. In addition, security auditors have been in business for decades, performing diligent analysis and issuing embarrassing failure grades to security teams around the world. Our earlier reference to FISMA, for example, included failing grades for many prominent government agencies in the United States. In spite of all this activity and reporting, however, nothing truly material has changed during these past decades in the way computer and network systems are secured. In fact, one could easily make the claim that national infrastructure is more vulnerable to attack today than it was 20 years ago. What makes one think that more stringent security standards and audit processes are going to change this now?
Based on this author’s experiences managing the security of major critical infrastructure components for many years, the answer lies in a two-step methodology:
● Step 1. Standard audit—The first step is conventional, in that it recommends that every organization submit to a standard audit to ensure that no group is operating below the minimally acceptable threshold. While most organizations would claim to already have this step ongoing, the goal here is to be given a desirable rating or score, rather than a failing one. So, even if a company or agency has ongoing audits, the goal here is to pass these audits. Any one of the major audit standards mentioned above is probably acceptable; they all roughly direct the same sort of minimal practices.
● Step 2. World-class focus—The second step involves a more intense focus on a set of truly meaningful national infrastructure protection practices. These practices are derived largely from experience. They are consistent with the material presented in this book, and they will only be present in pieces in most existing security audit standards. The greatest success will typically come from organizations self-administering this new focus, especially because these practices are not easy to measure and audit (see Figure 5.3).
For the first step, an important issue involves ensuring that the audit does not cause more harm than good. For example, suppose that a competent and trustworthy system administrator has been charged with a bevy of responsibilities for an infrastructure component and that she has demonstrated excellent results over a long period of time, with no security problems. This is a common situation, especially in companies and agencies that take system administration seriously. Unfortunately, a security auditor would look at such a setup with horror and would deem it a clear violation of least privilege, separation of duties, and so on.
A successful protection strategy should start with at least a passing score on a standard security audit.
Sometimes security audit standards and best practices proven through experience are in conflict.
In the United States, if the component being administered was a financial one in a public company, then this would be a violation of the Sarbanes-Oxley segregation of duties requirements. The auditor would typically require that the single competent administrator be replaced by a bureaucratic process involving a team of potentially inferior personnel who would each only see a portion of the total task. It is not difficult to imagine the component being more poorly managed and, hence, less secure. This is the worst case in any audit and must be explicitly avoided for national infrastructure protection.
For the second step, the box lists specific meaningful security best practices, six in total, for national infrastructure protection. These six best practices do not contradict current auditing processes and standards, but they are certainly not designed for easy audit application; for example, it is difficult to validate whether something is “appropriate” or “simplified.” Nevertheless, our strong advice is that attentiveness to ensuring commonality across national infrastructure with these six practices will yield significant benefits.
Figure 5.3 Methodology to achieve world-class infrastructure protection practices. (The original figure shows an existing set of practices raised to the minimally acceptable set by a standard audit (FISMA, ISO, etc.), and then to world-class infrastructure protection by self-administering the six best practices.)
Six Best Practices for National Infrastructure Protection
● Practice 1. Locally relevant and appropriate security policy—Every organization charged with the design or operation of national infrastructure must have a security policy that is locally relevant to the environment and appropriate to the organizational mission. This implies that different organizations should expect to have different security policies. The good news is that this policy requirement is largely consistent with most standards and should be one of the more straightforward practices to understand.
● Practice 2. Organizational culture of security protection—Organizations charged with national infrastructure must develop and nurture a culture of security protection. The culture must pervade the organization and must include
Readers familiar with standards and audits will recognize immediately the challenges with the subjective notions introduced in the box. For this reason, the only way they can be applied appropriately is for security managers to understand the purpose and intent of the requirements, and to then honestly self-administer a supporting program. This is not optimal for third-party assurance, but it is the only reasonable way to reach the level of world-class security best practices.
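Since the six practices are meant to be self-administered rather than audited, one hypothetical way to track them is a plain checklist of honest yes/no judgments. The helper below is an illustration, not a scoring scheme from the text.

```python
# Hypothetical self-assessment for the six practices. Because these are
# hard to measure, values are honest yes/no judgments, not audit scores.

SIX_PRACTICES = [
    "locally relevant and appropriate security policy",
    "organizational culture of security protection",
    "commitment to infrastructure simplification",
    "certification and education program for decision-makers",
    "career path and reward structure for security teams",
    "evidence of responsible past security practice",
]

def gaps(self_assessment: dict[str, bool]) -> list[str]:
    """Return the practices a team judges itself not yet to meet."""
    return [p for p in SIX_PRACTICES if not self_assessment.get(p, False)]

# Example: a team meeting everything except infrastructure simplification.
sample = {p: True for p in SIX_PRACTICES}
sample["commitment to infrastructure simplification"] = False
print(gaps(sample))  # → ['commitment to infrastructure simplification']
```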
Locally Relevant and Appropriate Security Policy
Any commercial or government organization that is currently developing or managing national infrastructure already has some sort of security policy. So the question of whether to develop a policy is not relevant; every organization has something. The real question instead for most organizations in national infrastructure roles is how to make the policy more relevant and
great incentives for positive behavior, as well as unfortunate consequences for negative. No security standard currently demands cultural attentiveness to security, simply because it cannot be measured.
● Practice 3. Commitment to infrastructure simplification—Because complexity is arguably the primary cause of security problems in most large-scale environments, a commitment to simplifying infrastructure is critical to ensuring proper security. Determining what “simplification” means is a subjective, local concept that is dependent on the specifics of the target environment. No current security standards demand infrastructure simplification.
● Practice 4. Certification and education program for decision-makers—A program of professional certification and security education must be present for those who are making decisions about national infrastructure or who are directly charged with their implementation. Ideally, this should not have to include end users, because this greatly reduces the chances of proper coverage.
● Practice 5. Career path and reward structure for security teams—Those performing security in national infrastructure environments must have clearly defined career paths and desirable rewards as part of their professional journey. In the absence of these enticements, important security work is often handled by people who are untrained and poorly motivated. This requirement is generally more meaningful in larger organizations.
● Practice 6. Evidence of responsible past security practice—Just as most craftsmen go through a period of apprenticeship to learn and to demonstrate proper skills, so should an organization have to demonstrate a period of learning and attainment of proper skills before being charged with national infrastructure protection. It is amazing that existing security audits generally do not include a careful inspection of past security practices in dealing with live cyber attacks.
appropriate to the local environment. Specifically, four basic security policy considerations are highly recommended for national infrastructure protection:
● Enforceable—Most security policies are easy to write down but are not easy to enforce. Organizations must therefore spend a great deal of time on the issue of security policy enforcement. The local threat environment must be a consideration here, because the employees of some companies and agencies are more apt to follow security policy rules than others. Nevertheless, a policy is only as good as its degree of enforceability, so every organization should be able to explicitly describe their enforcement strategy.
● Small—Most security policies are too large and complex. If there is one exercise that would be the healthiest for national infrastructure teams, it would be to go through existing policy language to prune out old references, obsolete statements, and aged examples. Large, complex security policies with too much detail are to be avoided. A key issue is the direction in which one’s policy is headed; it is either staying the same (stagnant), getting more complex (unhealthy), or becoming smaller and more compact (healthy).
● Online—Policy language must be online and searchable for it to be truly useful in national infrastructure settings. Teams must be able to find relevant requirements easily and should have the ability to cut and paste the relevant statements into their project or process documentation. The old days of printing and distributing a security policy with a fancy cover should be long gone.
● Inclusive—Policy must be inclusive of the proper computing and networking elements in the local national infrastructure environment. This can only be determined by an analysis. Unfortunately, this analysis can be somewhat time consuming and tedious, and without proper attention it could result in an overly complex policy. Considerable skill is required to write policy that is inclusive but not too complicated.
These four requirements for security policies in groups charged with national infrastructure can be subjected to a simple decision analysis that would help determine if the local policy is relevant and appropriate to the mission of the organization; this decision process is shown in Figure 5.4.
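The four-question decision analysis can be sketched directly in code. The `Policy` fields and remediation strings below paraphrase the questions and branches of Figure 5.4; the data structure itself is an assumption.

```python
# Sketch of Figure 5.4's decision process. Field names and remediation
# strings paraphrase the figure; the structure is a hypothetical choice.
from dataclasses import dataclass

@dataclass
class Policy:
    enforceable: bool  # can the policy be enforced?
    compact: bool      # is the policy tight and compact?
    online: bool       # is the policy online and searchable?
    inclusive: bool    # does it address all relevant local issues?

def review(policy: Policy) -> list[str]:
    """Walk the four questions; collect a remediation for each 'No' answer."""
    actions = []
    if not policy.enforceable:
        actions.append("spend time with local teams on enforcement (cultural issue)")
    if not policy.compact:
        actions.append("prune old references and remove obsolete statements")
    if not policy.online:
        actions.append("use a documentation tool to place requirements online")
    if not policy.inclusive:
        actions.append("inventory locally relevant technologies, tools, and processes")
    return actions  # an empty list corresponds to a much better policy

print(review(Policy(enforceable=False, compact=True, online=False, inclusive=True)))
```

A policy that answers yes to all four questions produces no remediation actions, the “much better policy” outcome of the figure.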
It’s worth mentioning that, as will be seen in the next section, the culture of the local environment can really have an impact on the development of security policy. In an environment where technology change is not dramatic and operational skills are mature (e.g., traditional circuit-switched telephony), policy
The question is not whether to develop a security policy, but rather what that policy will entail.
language can be less detailed and used to identify unexpected procedures that might be required for security. In an environment where technology change is dramatic and operational skills might be constantly changing (e.g., wireless telephony), then policy language might have to be much more specific. In either case, the issue is not whether the policy has certain required elements, but rather whether the policy is locally relevant and appropriate.
Culture of Security Protection
Our second recommended common practice involves creation of an organizational culture of security protection. When an organization has such a culture of security protection, the potential for malicious exploitation of some vulnerability is greatly reduced for two reasons: First, the likelihood for the vulnerability itself to be present is reduced, as local diligence will weigh in favor of more secure decision-making. Second, real-time human vigilance in such a culture often helps avoid exploitation. Time after time, the alertness of human beings in a culture of security is effective in helping to avoid malicious attacks. (Readers will remember that the only effective security measures that took place on September 11, 2001, were the ones initiated by human beings.)
Here’s a simple test to determine if a given organization has a culture of security protection. Go to that organization’s local facility and observe how carefully the physical premises are policed for unauthorized entry. If an electronic door is used to authenticate entry, followed by a guard eyeballing every visitor, then chances are pretty good that the culture is one of protection.
Figure 5.4 Decision process for security policy analysis. (The original figure walks an existing security policy through four questions: Can the policy be enforced? Is the policy tight and compact? Is the policy online? Does the policy address all relevant local issues? Each “No” answer triggers a remediation: spend significant time with local teams, since enforcement is more a cultural than a technical issue; go through the policy to prune old references, simplify language, and remove obsolete statements; consider a documentation tool to place requirements online, a good opportunity for pruning; or perform an inventory analysis of locally relevant technologies, tools, systems, and processes. A policy passing all four questions emerges as a much better policy.)
If, however, the person in front of you holds the door open for you to enter without bothering to check for your credentials or, worse, the door itself is propped open, then the culture is probably more open. A culture of security certainly does not imply that things will be perfectly secure, but such a culture is essential in the protection of national assets.
Unfortunately, most of us tend to equate an organizational culture of security with a rigid, paranoid, authoritative, perhaps even military environment. Furthermore, a culture of security is generally associated with managers who avoid risks, stay away from the media, dislike remote access or telecommuting, and demonstrate little comfort with new technologies such as social networking. Similarly, one would equate a nonculture of security with a young, dynamic, creative, open, and egalitarian environment. In such a culture, managers are generally viewed to be comfortable with risk, open in speaking to outsiders about their work, in love with every new technology that comes along, and supportive of remote access and telecommuting.
The reality is that neither stereotype is accurate. Instead, the challenge in promoting a culture of security is to combine the best elements of each management approach, without the corresponding weaknesses. The idea is to nurture any positive environmental attributes, but in a way that also allows for sensible protection of national assets; that is, each local environment must have a way to adapt the various adjectives just cited to their own mission. For example, no group generally wants to be referred to as closed and paranoid, but a military intelligence group might have no choice. Similarly, no group wants to be referred to as being loose with security, but certain creative organizations, such as some types of colleges and universities, make this decision explicitly.
As such, organizations must consider the spectrum of options in developing a suitable local culture. This spectrum acknowledges how straightforward it can be to assume an inverse relationship between organizational rigidity and security. It’s easy to just make everything rigid and authoritative and hope that a culture of increased security will develop. The challenge, however, lies in trying to break up this relationship by allowing open, creative activity in a way that does not compromise security. This might result in some aspects of the environment being more secure and others being less so. Such a combined cultural goal should be viewed as a common requirement for all groups involved with national assets (see Figure 5.5).
So an obvious question one might ask from the perspective of national infrastructure protection is why the highest level of
An organization with a culture of security is one in which standard operating procedures work to provide a secure environment.
An ideal security environment can marry creativity and interest in new technologies with caution and healthy risk aversion.
security culture should not be required in all cases, regardless of any cultural goals of being open, creative, and willing to interact publicly. The U.S. military, for example, might exemplify such a level of rigid cultural commitment to security. One answer, as we’ve discussed above, is that it is difficult to require that a culture be in place in an organization. Specific aspects of a culture might be required, such as strong policy, tough enforcement, and so on, but the presence of a culture itself is not easy to confirm. Nevertheless, the premise is correct; that is, for national infrastructure, certain security standards are required that can only be met in an environment where a culture of security protection is present. This demands the uncomfortable situation in which local managers must honestly work to create the appropriate culture, which in some cases might require decades of attention.
An important element of security culture is the symbolism that management can create by its own behavior. This means that when senior executives are given passes that allow policy violations, this is a serious error, as it detracts from the cultural objectives. Unfortunately, the most senior executives almost always outrank security staff, and this practice of senior exemption is all too common. Perhaps major national infrastructure solicitations should include questions about this type of senior executive practice before contracts can be granted to an organization. This might give the security team more concrete ammunition to stop such exemptions.
Infrastructure Simplification
Our third recommended common practice involves an explicit organizational commitment to infrastructure simplification. Defining what we mean by simplification in the context of
Figure 5.5 (labels recovered from the original figure): a spectrum running from more rigid and more secure to more open and less secure. Simply equating rigidity with security is the straightforward cultural option; combining openness with security is the challenging cultural option.