CompTIA Security+ All-In-One Exam Guide, Second Edition


Product Description

A CompTIA Security+ Exam Guide and An On-the-Job Reference–All-in-One

Get complete coverage of all the material included on the CompTIA Security+ exam inside this fully up-to-date, comprehensive resource. Written by network security experts, this authoritative exam guide features learning objectives at the beginning of each chapter, exam tips, practice questions, and in-depth explanations. Designed to help you pass the CompTIA Security+ exam with ease, this definitive volume also serves as an essential on-the-job reference. Get full details on all exam topics, including how to:

  • Combat viruses, Trojan horses, spyware, logic bombs, and worms

  • Defend against DDoS, spoofing, replay, TCP/IP hijacking, and other attacks

  • Apply best practices for access control methods

  • Implement authentication using Kerberos, CHAP, biometrics, and other methods

  • Use cryptography and PKI

  • Secure remote access, wireless, and virtual private networks (VPNs)

  • Harden networks, operating systems, and applications

  • Manage incident response and follow forensic procedures

Note: the Kindle edition does not come with a CD at this time.

About the Author

Greg White is an Associate Professor in the Department of Computer Science at the University of Texas at San Antonio. He is the author of the first edition of this book.

Wm. Arthur Conklin, CompTIA Security+, is an Assistant Professor in the Information and Logistics Technology department at the University of Houston.

Author: Chuck Cothren, Gregory White, Wm. Arthur Conklin, Dwayne Williams, Roger Davis
Language: English
Published: 2009-01-02
ISBN: 0071601279




ALL IN ONE
CompTIA Security+



EXAM GUIDE
Second Edition




Gregory White

Wm. Arthur Conklin
Dwayne Williams

Roger Davis

Chuck Cothren




Copyright © 2009 by The McGraw-Hill Companies. All rights reserved. Except as permitted under the United States Copyright Act of 1976, no part of this publication may be reproduced or distributed in any form or by any means, or stored in a database or retrieval system, without the prior written permission of the publisher.


ISBN: 978-0-07-164384-9


MHID: 0-07-164384-2


The material in this eBook also appears in the print version of this title: ISBN: 978-0-07-160127-6, MHID: 0-07-160127-9.


All trademarks are trademarks of their respective owners. Rather than put a trademark symbol after every occurrence of a trademarked name, we use names in an editorial fashion only, and to the benefit of the trademark owner, with no intention of infringement of the trademark. Where such designations appear in this book, they have been printed with initial caps.


McGraw-Hill eBooks are available at special quantity discounts to use as premiums and sales promotions, or for use in corporate training programs. To contact a representative please visit the Contact Us page at www.mhprofessional.com.


TERMS OF USE


This is a copyrighted work and The McGraw-Hill Companies, Inc. ("McGraw-Hill") and its licensors reserve all rights in and to the work. Use of this work is subject to these terms. Except as permitted under the Copyright Act of 1976 and the right to store and retrieve one copy of the work, you may not decompile, disassemble, reverse engineer, reproduce, modify, create derivative works based upon, transmit, distribute, disseminate, sell, publish or sublicense the work or any part of it without McGraw-Hill’s prior consent. You may use the work for your own noncommercial and personal use; any other use of the work is strictly prohibited. Your right to use the work may be terminated if you fail to comply with these terms.


THE WORK IS PROVIDED “AS IS.” McGRAW-HILL AND ITS LICENSORS MAKE NO GUARANTEES OR WARRANTIES AS TO THE ACCURACY, ADEQUACY OR COMPLETENESS OF OR RESULTS TO BE OBTAINED FROM USING THE WORK, INCLUDING ANY INFORMATION THAT CAN BE ACCESSED THROUGH THE WORK VIA HYPERLINK OR OTHERWISE, AND EXPRESSLY DISCLAIM ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. McGraw-Hill and its licensors do not warrant or guarantee that the functions contained in the work will meet your requirements or that its operation will be uninterrupted or error free. Neither McGraw-Hill nor its licensors shall be liable to you or anyone else for any inaccuracy, error or omission, regardless of cause, in the work or for any damages resulting therefrom. McGraw-Hill has no responsibility for the content of any information accessed through the work. Under no circumstances shall McGraw-Hill and/or its licensors be liable for any indirect, incidental, special, punitive, consequential or similar damages that result from the use of or inability to use the work, even if any of them has been advised of the possibility of such damages. This limitation of liability shall apply to any claim or cause whatsoever whether such claim or cause arises in contract, tort or otherwise.


CompTIA Authorized Quality Curriculum


The logo of the CompTIA Authorized Quality Curriculum (CAQC) program and the status of this or other training material as “Authorized” under the CompTIA Authorized Quality Curriculum program signifies that, in CompTIA’s opinion, such training material covers the content of CompTIA’s related certification exam.

The contents of this training material were created for the CompTIA Security+ exam covering CompTIA certification objectives that were current as of 2008.

CompTIA has not reviewed or approved the accuracy of the contents of this training material and specifically disclaims any warranties of merchantability or fitness for a particular purpose.

CompTIA makes no guarantee concerning the success of persons using any such “Authorized” or other training material in order to prepare for any CompTIA certification exam.


How to become CompTIA certified:


This training material can help you prepare for and pass a related CompTIA certification exam or exams. In order to achieve CompTIA certification, you must register for and pass a CompTIA certification exam or exams.

In order to become CompTIA certified, you must


 
  1. Select a certification exam provider. For more information please visit http://www.comptia.org/certification/general_information/exam_locations.aspx.
  2. Register for and schedule a time to take the CompTIA certification exam(s) at a convenient location.
  3. Read and sign the Candidate Agreement, which will be presented at the time of the exam(s). The text of the Candidate Agreement can be found at http://www.comptia.org/certification/general_information/candidate_agreement.aspx.
  4. Take and pass the CompTIA certification exam(s).

For more information about CompTIA’s certifications, such as its industry acceptance, benefits, or program news, please visit www.comptia.org/certification.

CompTIA is a not-for-profit information technology (IT) trade association. CompTIA’s certifications are designed by subject matter experts from across the IT industry. Each CompTIA certification is vendor-neutral, covers multiple technologies, and requires demonstration of skills and knowledge widely sought after by the IT industry.

To contact CompTIA with any questions or comments, please call (1) (630) 678 8300 or email [email protected].


This book is dedicated to the many security professionals who daily work to

ensure the safety of our nation’s critical infrastructures.

We want to recognize the thousands of dedicated individuals who strive to

protect our national assets but who seldom receive praise and often are only

noticed when an incident occurs.

To you, we say thank you for a job well done!


ABOUT THE AUTHORS


Dr. Gregory White has been involved in computer and network security since 1986. He spent 19 years on active duty with the United States Air Force and is currently in the Air Force Reserves assigned to the Air Force Information Warfare Center. He obtained his Ph.D. in computer science from Texas A&M University in 1995. His dissertation topic was in the area of computer network intrusion detection, and he continues to conduct research in this area today. He is currently the Director for the Center for Infrastructure Assurance and Security (CIAS) and is an associate professor of information systems at the University of Texas at San Antonio (UTSA). Dr. White has written and presented numerous articles and conference papers on security. He is also the coauthor of three textbooks on computer and network security and has written chapters for two other security books. Dr. White continues to be active in security research. His current research initiatives include efforts in high-speed intrusion detection, infrastructure protection, and methods to calculate a return on investment and the total cost of ownership from security products.

Dr. Wm. Arthur Conklin is an assistant professor in the College of Technology at the University of Houston. Dr. Conklin’s research interests lie in software assurance and the application of systems theory to security issues. His dissertation was on the motivating factors for home users in adopting security on their own PCs. He has coauthored four books on information security and has written and presented numerous conference and academic journal papers. A former U.S. Navy officer, he was also previously the Technical Director at the Center for Infrastructure Assurance and Security at the University of Texas at San Antonio.

Chuck Cothren, CISSP, is the president of Globex Security, Inc., and applies a wide array of network security experience to consulting and training. This experience includes controlled penetration testing, network security policy development, network intrusion detection systems, firewall configuration and management, and wireless security assessments. He has analyzed security methodologies for Voice over Internet Protocol (VoIP) systems and supervisory control and data acquisition (SCADA) systems. Mr. Cothren was previously employed at The University of Texas Center for Infrastructure Assurance and Security. He has also worked as a consulting department manager, performing vulnerability assessments and other security services for Fortune 100 clients. He is coauthor of the book Voice and Data Security as well as Principles of Computer Security. Mr. Cothren holds a B.S. in Industrial Distribution from Texas A&M University.

Roger L. Davis, CISSP, CISM, CISA, is Program Manager of ERP systems at the Church of Jesus Christ of Latter-day Saints, managing the Church’s global financial system in over 140 countries. He has served as president of the Utah chapter of the Information Systems Security Association (ISSA) and has held various board positions for the Utah chapter of the Information Systems Audit and Control Association (ISACA). He is a retired Air Force lieutenant colonel with 30 years of military and information systems/security experience. Mr. Davis served on the faculty of Brigham Young University and the Air Force Institute of Technology. He coauthored McGraw-Hill’s Principles of Computer Security and Voice and Data Security. He holds a master’s degree in computer science from George Washington University and a bachelor’s degree in computer science from Brigham Young University, and he performed post-graduate studies in electrical engineering and computer science at the University of Colorado.

Dwayne Williams is Associate Director, Special Projects, for the Center for Infrastructure Assurance and Security at the University of Texas at San Antonio and has over 18 years of experience in information systems and network security. Mr. Williams’s experience includes six years of commissioned military service as a Communications-Computer Information Systems Officer in the United States Air Force, specializing in network security, corporate information protection, intrusion detection systems, incident response, and VPN technology. Prior to joining the CIAS, he served as Director of Consulting for SecureLogix Corporation, where he directed and provided security assessment and integration services to Fortune 100, government, public utility, oil and gas, financial, and technology clients. Mr. Williams graduated in 1993 from Baylor University with a Bachelor of Arts in Computer Science. Mr. Williams is a Certified Information Systems Security Professional (CISSP) and coauthor of Voice and Data Security, Security+ Certification, and Principles of Computer Security.


About the Technical Editor


Glen E. Clarke, MCSE/MCSD/MCDBA/MCT/CEH/SCNP/CIWSA/A+/Security+, is an independent trainer and consultant, focusing on network security assessments and educating IT professionals on hacking countermeasures. Mr. Clarke spends most of his time delivering certified courses on Windows Server 2003, SQL Server, Exchange Server, Visual Basic .NET, ASP.NET, Ethical Hacking, and Security Analysis. He has authored and technically edited a number of certification titles, including The Network+ Certification Study Guide, 4th Edition. You can visit Mr. Clarke online at http://www.gleneclarke.com or contact him at [email protected].


CONTENTS AT A GLANCE



     Part I Security Concepts


Chapter 1 General Security Concepts


Chapter 2 Operational Organizational Security


Chapter 3 Legal Issues, Privacy, and Ethics


     Part II Cryptography and Applications


Chapter 4 Cryptography


Chapter 5 Public Key Infrastructure


Chapter 6 Standards and Protocols


     Part III Security in the Infrastructure


Chapter 7 Physical Security


Chapter 8 Infrastructure Security


Chapter 9 Authentication and Remote Access

Chapter 10 Wireless Security


     Part IV Security in Transmissions

Chapter 11 Intrusion Detection Systems

Chapter 12 Security Baselines

Chapter 13 Types of Attacks and Malicious Software

Chapter 14 E-Mail and Instant Messaging

Chapter 15 Web Components


     Part V Operational Security

Chapter 16 Disaster Recovery and Business Continuity

Chapter 17 Risk Management

Chapter 18 Change Management

Chapter 19 Privilege Management

Chapter 20 Computer Forensics


     Part VI Appendixes

Appendix A About the CD

Appendix B OSI Model and Internet Protocols

                  Glossary

                  Index


CONTENTS


               Acknowledgments

               Preface

               Introduction


    Part I Security Concepts


Chapter 1 General Security Concepts

                The Security+ Exam

                Basic Security Terminology

                           Security Basics

                           Access Control

                          Authentication

                Chapter Review

                           Quick Tips

                           Questions

                           Answers


Chapter 2 Operational Organizational Security

                Policies, Standards, Guidelines, and Procedures

                The Security Perimeter

                Logical Access Controls

                           Access Control Policies

                Social Engineering

                           Phishing

                           Vishing

                           Shoulder Surfing

                           Dumpster Diving

                           Hoaxes

                Organizational Policies and Procedures

                           Security Policies

                           Privacy

                           Service Level Agreements

                           Human Resources Policies

                           Code of Ethics

                Chapter Review

                           Questions

                           Answers


Chapter 3 Legal Issues, Privacy, and Ethics

                Cybercrime

                           Common Internet Crime Schemes

                           Sources of Laws

                           Computer Trespass

                           Significant U.S. Laws

                           Payment Card Industry Data Security Standards (PCI DSS)

                           Import/Export Encryption Restrictions

                           Digital Signature Laws

                           Digital Rights Management

                Privacy

                           U.S. Privacy Laws

                           European Laws

                Ethics

                           SANS Institute IT Code of Ethics

                Chapter Review

                           Questions

                           Answers


Part II Cryptography and Applications


Chapter 4 Cryptography

                Algorithms

                Hashing

                           SHA

                           Message Digest

                           Hashing Summary

                Symmetric Encryption

                           DES

                           3DES

                           AES

                           CAST

                           RC

                           Blowfish

                           IDEA

                           Symmetric Encryption Summary

                Asymmetric Encryption

                           RSA

                           Diffie-Hellman

                           ElGamal

                           ECC

                           Asymmetric Encryption Summary

                Steganography

                Cryptography Algorithm Use

                           Confidentiality

                           Integrity

                           Nonrepudiation

                           Authentication

                           Digital Signatures

                           Key Escrow

                           Cryptographic Applications

                Chapter Review

                           Questions

                           Answers


Chapter 5 Public Key Infrastructure

                The Basics of Public Key Infrastructures

                Certificate Authorities

                Registration Authorities

                           Local Registration Authorities

                Certificate Repositories

                Trust and Certificate Verification

                Digital Certificates

                           Certificate Attributes

                           Certificate Extensions

                           Certificate Lifecycles

                Centralized or Decentralized Infrastructures

                           Hardware Storage Devices

                Private Key Protection

                           Key Recovery

                           Key Escrow

                Public Certificate Authorities

                In-house Certificate Authorities

                Outsourced Certificate Authorities

                Tying Different PKIs Together

                           Trust Models

                Chapter Review

                           Questions

                           Answers


Chapter 6 Standards and Protocols

                PKIX/PKCS

                           PKIX Standards

                           PKCS

                           Why You Need to Know

                X.509

                SSL/TLS

                ISAKMP

                CMP

                XKMS

                S/MIME

                           IETF S/MIME v3 Specifications

                PGP

                           How PGP Works

                           Where Can You Use PGP?

                HTTPS

                IPsec

                CEP

                FIPS

                Common Criteria (CC)

                WTLS

                WEP

                           WEP Security Issues

                ISO/IEC 27002 (Formerly ISO 17799)

                Chapter Review

                           Questions

                           Answers


Part III Security in the Infrastructure


Chapter 7 Physical Security

                The Security Problem

                Physical Security Safeguards

                           Walls and Guards

                           Policies and Procedures

                           Access Controls and Monitoring

                           Environmental Controls

                           Authentication

                Chapter Review

                           Questions

                           Answers


Chapter 8 Infrastructure Security

                Devices

                           Workstations

                           Servers

                           Network Interface Cards

                           Hubs

                           Bridges

                           Switches

                           Routers

                           Firewalls

                           Wireless

                           Modems

                           Telecom/PBX

                           RAS

                           VPN

                           Intrusion Detection Systems

                           Network Access Control

                           Network Monitoring/Diagnostic

                           Mobile Devices

                Media

                           Coaxial Cable

                           UTP/STP

                           Fiber

                           Unguided Media

                Security Concerns for Transmission Media

                           Physical Security

                Removable Media

                           Magnetic Media

                           Optical Media

                           Electronic Media

                Security Topologies

                           Security Zones

                           Telephony

                           VLANs

                           NAT

                Tunneling

                Chapter Review

                           Questions

                           Answers


Chapter 9 Authentication and Remote Access

                The Remote Access Process

                           Identification

                           Authentication

                           Authorization

                 IEEE 802.1x

                RADIUS

                           RADIUS Authentication

                           RADIUS Authorization

                           RADIUS Accounting

                           DIAMETER

                TACACS+

                           TACACS+ Authentication

                           TACACS+ Authorization

                           TACACS+ Accounting

                L2TP and PPTP

                           PPTP

                           PPP

                           CHAP

                           PAP

                           EAP

                           L2TP

                NT LAN Manager

                Telnet

                SSH

                IEEE 802.11

                VPNs

                IPsec

                           Security Associations

                           IPsec Configurations

                           IPsec Security

                Vulnerabilities

                Chapter Review

                           Questions

                           Answers


Chapter 10 Wireless Security

                Wireless Networking

                           Mobile Phones

                           Bluetooth

                           802.11

                Chapter Review

                           Questions

                           Answers


Part IV Security in Transmissions


Chapter 11 Intrusion Detection Systems

                History of Intrusion Detection Systems

                IDS Overview

                Host-based IDSs

                           Advantages of HIDSs

                           Disadvantages of HIDSs

                           Active vs. Passive HIDSs

                           Resurgence and Advancement of HIDSs

                PC-based Malware Protection

                           Antivirus Products

                           Personal Software Firewalls

                           Pop-up Blocker

                           Windows Defender

                Network-based IDSs

                           Advantages of a NIDS

                           Disadvantages of a NIDS

                           Active vs. Passive NIDSs

                Signatures

                False Positives and Negatives

                IDS Models

                Intrusion Prevention Systems

                Honeypots and Honeynets

                Firewalls

                Proxy Servers

                Internet Content Filters

                Protocol Analyzers

                Network Mappers

                Anti-spam

                Chapter Review

                           Questions

                           Answers


Chapter 12 Security Baselines

                Overview Baselines

                Password Selection

                           Password Policy Guidelines

                           Selecting a Password

                           Components of a Good Password

                           Password Aging

                Operating System and Network Operating System Hardening

                           Hardening Microsoft Operating Systems

                           Hardening UNIX- or Linux-Based Operating Systems

                Network Hardening

                           Software Updates

                           Device Configuration

                           Ports and Services

                           Traffic Filtering

                Application Hardening

                           Application Patches

                           Patch Management

                           Web Servers

                           Mail Servers

                           FTP Servers

                           DNS Servers

                           File and Print Services

                           Active Directory

                Group Policies

                           Security Templates

                Chapter Review

                           Questions

                           Answers


Chapter 13 Types of Attacks and Malicious Software

                Avenues of Attack

                           The Steps in an Attack

                           Minimizing Possible Avenues of Attack

                Attacking Computer Systems and Networks

                           Denial-of-Service Attacks

                           Backdoors and Trapdoors

                           Null Sessions

                           Sniffing

                           Spoofing

                           Man-in-the-Middle Attacks

                           Replay Attacks

                           TCP/IP Hijacking

                           Attacks on Encryption

                           Address System Attacks

                           Password Guessing

                           Software Exploitation

                           Malicious Code

                           War-Dialing and War-Driving

                           Social Engineering

                Auditing

                Chapter Review

                           Questions

                           Answers


Chapter 14 E-Mail and Instant Messaging

                Security of E-Mail

                Malicious Code

                Hoax E-Mails

                Unsolicited Commercial E-Mail (Spam)

                Mail Encryption

                Instant Messaging

                Chapter Review

                           Questions

                           Answers


Chapter 15 Web Components

                Current Web Components and Concerns

                Protocols

                           Encryption (SSL and TLS)

                           The Web (HTTP and HTTPS)

                           Directory Services (DAP and LDAP)

                           File Transfer (FTP and SFTP)

                           Vulnerabilities

                Code-Based Vulnerabilities

                           Buffer Overflows

                           Java and JavaScript

                           ActiveX

                           Securing the Browser

                           CGI

                           Server-Side Scripts

                           Cookies

                           Signed Applets

                           Browser Plug-ins

                Application-Based Weaknesses

                           Open Vulnerability and Assessment Language (OVAL)

                Chapter Review

                           Questions

                           Answers


Part V Operational Security


Chapter 16 Disaster Recovery and Business Continuity

                Disaster Recovery

                           Disaster Recovery Plans/Process

                           Backups

                           Utilities

                           Secure Recovery

                           High Availability and Fault Tolerance

                Chapter Review

                           Questions

                           Answers


Chapter 17 Risk Management

                An Overview of Risk Management

                           Example of Risk Management at the International Banking Level

                           Key Terms for Understanding Risk Management

                What Is Risk Management?

                Business Risks

                           Examples of Business Risks

                           Examples of Technology Risks

                Risk Management Models

                           General Risk Management Model

                           Software Engineering Institute Model

                           Model Application

                Qualitatively Assessing Risk

                Quantitatively Assessing Risk

                Qualitative vs. Quantitative Risk Assessment

                Tools

                Chapter Review

                           Questions

                           Answers


Chapter 18 Change Management

                Why Change Management?

                The Key Concept: Separation (Segregation) of Duties

                Elements of Change Management

                Implementing Change Management

                           The Purpose of a Change Control Board

                           Code Integrity

                The Capability Maturity Model Integration

                Chapter Review

                           Questions

                           Answers


Chapter 19 Privilege Management

                User, Group, and Role Management

                           User

                           Groups

                           Role

                Password Policies

                           Domain Password Policy

                Single Sign-On

                Centralized vs. Decentralized Management

                           Centralized Management

                           Decentralized Management

                           The Decentralized, Centralized Model

                Auditing (Privilege, Usage, and Escalation)

                           Privilege Auditing

                           Usage Auditing

                           Escalation Auditing

                Logging and Auditing of Log Files

                           Common Logs

                           Periodic Audits of Security Settings

                Handling Access Control (MAC, DAC, and RBAC)

                           Mandatory Access Control (MAC)

                           Discretionary Access Control (DAC)

                           Role-based Access Control (RBAC)

                           Rule-based Access Control (RBAC)

                           Account Expiration

                Permissions and Rights in Windows Operating Systems

                Chapter Review

                           Questions

                           Answers


Chapter 20 Computer Forensics

                Evidence

                           Standards for Evidence

                           Types of Evidence

                           Three Rules Regarding Evidence

                Collecting Evidence

                           Acquiring Evidence

                           Identifying Evidence

                           Protecting Evidence

                           Transporting Evidence

                           Storing Evidence

                           Conducting the Investigation

                Chain of Custody

                Free Space vs. Slack Space

                           Free Space

                           Slack Space

                Message Digest and Hash

                Analysis

                Chapter Review

                           Questions

                           Answers

Part VI Appendixes

Appendix A About the CD

                System Requirements

                LearnKey Online Training

                Installing and Running MasterExam

                           MasterExam

                Electronic Book

                Help

                Removing Installation(s)

                Technical Support

                           LearnKey Technical Support

Appendix B OSI Model and Internet Protocols

                Networking Frameworks and Protocols

                OSI Model

                           Application Layer

                           Presentation Layer

                           Session Layer

                           Transport Layer

                           Network Layer

                           Data-Link Layer

                           Physical Layer

                Internet Protocols

                           TCP

                           UDP

                           IP

                           Message Encapsulation

                Review

                Glossary

                Index


ACKNOWLEDGMENTS


We, the authors of CompTIA Security+ Certification All-in-One Exam Guide, have many individuals whom we need to acknowledge—individuals without whom this effort would not have been successful.

The list needs to start with those folks at McGraw-Hill who worked tirelessly with the project’s multiple authors and contributors and led us successfully through the minefield that is a book schedule and who took our rough chapters and drawings and turned them into a final, professional product we can be proud of. We thank all the good people from the Acquisitions team, Tim Green, Jennifer Housh, and Carly Stapleton; from the Editorial Services team, Jody McKenzie; and from the Illustration and Production team, George Anderson, Peter Hancik, and Lyssa Wald. We also thank the technical editor Glen Clarke; the project editor, LeeAnn Pickrell; the copyeditor, Lisa Theobald; the proofreader, Susie Elkind; and the indexer, Karin Arrigoni for all their attention to detail that made this a finer work after they finished with it.

We also need to acknowledge our current employers who, to our great delight, have seen fit to pay us to work in a career field that we all find exciting and rewarding. There is never a dull moment in security because it is constantly changing.

We would like to thank Art Conklin for herding the cats on this one.

Finally, we would each like to individually thank those people who—on a personal basis—have provided the core support for us individually. Without these special people in our lives, none of us could have put this work together.

I would like to thank my wife, Charlan, for the tremendous support she has always given me. It doesn’t matter how many times I have sworn that I’ll never get involved with another book project only to return within months to yet another one; through it all, she has remained supportive.

I would also like to publicly thank the United States Air Force, which provided me numerous opportunities since 1986 to learn more about security than I ever knew existed.

To whoever it was who decided to send me as a young captain—fresh from completing my master’s degree in artificial intelligence—to my first assignment in computer security: thank you, it has been a great adventure!

—Gregory B. White, Ph.D.

To Susan, my muse and love, for all the time you suffered as I worked on books.

—Art Conklin

Special thanks to Josie for all her support.

—Chuck Cothren

Geena, thanks for being my best friend and my greatest support. Anything I am is because of you. Love to my kids and grandkids!

—Roger L. Davis

To my wife and best friend Leah for your love, energy, and support—thank you for always being there. Here’s to many more years together.

—Dwayne Williams


PREFACE


Information and computer security has moved from the confines of academia to mainstream America in the last decade. The CodeRed, Nimda, and Slammer attacks were heavily covered in the media and broadcast into the average American’s home. It has become increasingly obvious to everybody that something needs to be done in order to secure not only our nation’s critical infrastructure but also the businesses we deal with on a daily basis. The question is, “Where do we begin?” What can the average information technology professional do to secure the systems that he or she is hired to maintain? One immediate answer is education and training. If we want to secure our computer systems and networks, we need to know how to do this and what security entails.

Complacency is not an option in today’s hostile network environment. While we once considered the insider to be the major threat to corporate networks, and the “script kiddie” to be the standard external threat (often thought of as only a nuisance), the highly interconnected networked world of today is a much different place. The U.S. government identified eight critical infrastructures a few years ago that were thought to be so crucial to the nation’s daily operation that if one were to be lost, it would have a catastrophic impact on the nation. To this original set of eight sectors, more have recently been added. A common thread throughout all of these, however, is technology—especially technology related to computers and communication. Thus, if an individual, organization, or nation wanted to cause damage to this nation, it could attack not just with traditional weapons but also with computers through the Internet. It is not surprising to hear that among the other information seized in raids on terrorist organizations, computers and Internet information are usually seized as well. While the insider can certainly still do tremendous damage to an organization, the external threat is again becoming the chief concern among many.

So, where do you, the IT professional seeking more knowledge on security, start your studies? The IT world is overflowing with certifications that can be obtained by those attempting to learn more about their chosen profession. The security sector is no different, and the CompTIA Security+ exam offers a basic level of certification for security. In the pages of this exam guide, you will find not only material that can help you prepare for taking the CompTIA Security+ examination but also the basic information that you will need in order to understand the issues involved in securing your computer systems and networks today. In no way is this exam guide the final source for learning all about protecting your organization’s systems, but it serves as a point from which to launch your security studies and career.

One thing is certainly true about this field of study—it never gets boring. It constantly changes as technology itself advances. Something else you will find as you progress in your security studies is that no matter how much technology advances and no matter how many new security devices are developed, at its most basic level, the human is still the weak link in the security chain. If you are looking for an exciting area to delve into, then you have certainly chosen wisely. Security offers a challenging blend of technology and people issues. We, the authors of this exam guide, wish you luck as you embark on an exciting and challenging career path.

—Gregory B. White, Ph.D.


INTRODUCTION


Computer security is becoming increasingly important today as the number of security incidents steadily climbs. Many corporations now spend significant portions of their budget on security hardware, software, services, and personnel. They are spending this money not because it increases sales or enhances the product they provide, but because of the possible consequences should they not take protective actions.


Why Focus on Security?


Security is not something that we want to have to pay for; it would be nice if we didn’t have to worry about protecting our data from disclosure, modification, or destruction from unauthorized individuals, but that is not the environment we find ourselves in today. Instead, we have seen the cost of recovering from security incidents steadily rise along with the number of incidents themselves. Since September 11, 2001, this has taken on an even greater sense of urgency as we now face securing our systems not just from attack by disgruntled employees, juvenile hackers, organized crime, or competitors; we now also have to consider the possibility of attacks on our systems from terrorist organizations. If nothing else, the events of September 11, 2001, showed that anybody is a potential target. You do not have to be part of the government or a government contractor; being an American is sufficient reason to make you a target to some, and with the global nature of the Internet, collateral damage from cyber attacks on one organization could have a worldwide impact.


A Growing Need for Security Specialists


In order to protect our computer systems and networks, we will need a significant number of new security professionals trained in the many aspects of computer and network security. This is not an easy task as the systems connected to the Internet become increasingly complex with software whose lines of code number in the millions. Understanding why this is such a difficult problem to solve is not hard if you consider just how many errors might be present in a piece of software that is several million lines long. When you add the additional factor of how fast software is being developed—from necessity as the market is constantly changing—understanding how errors occur is easy.

Not every “bug” in the software will result in a security hole, but it doesn’t take many to have a drastic effect on the Internet community. We can’t just blame the vendors for this situation because they are reacting to the demands of government and industry. Most vendors are fairly adept at developing patches for flaws found in their software, and patches are constantly being issued to protect systems from bugs that may introduce security problems. This introduces a whole new problem for managers and administrators—patch management. How important this has become is easily illustrated by how many recent security events have occurred as a result of a security bug that was discovered months before the incident and for which a patch was available but had not been correctly installed, thus making the incident possible. One of the reasons this happens is that many of the individuals responsible for installing patches are not trained to understand the security implications surrounding the hole or the ramifications of not installing the patch; they simply lack the necessary training.

Because of the need for an increasing number of security professionals who are trained to some minimum level of understanding, certifications such as the Security+ have been developed. Prospective employers want to know that the individual they are considering hiring knows what to do in terms of security. The prospective employee, in turn, wants to have a way to demonstrate his or her level of understanding, which can enhance the candidate’s chances of being hired. The community as a whole simply wants more trained security professionals.


Preparing Yourself for the Security+ Exam


CompTIA Security+ Certification All-in-One Exam Guide is designed to help prepare you to take the CompTIA Security+ certification exam. When you pass it, you will demonstrate you have that basic understanding of security that employers are looking for. Passing this certification exam will not be an easy task, for you will need to learn many things to acquire that basic understanding of computer and network security.


How This Book Is Organized


The book is divided into sections and chapters to correspond with the objectives of the exam itself. Some of the chapters are more technical than others—reflecting the nature of the security environment where you will be forced to deal with not only technical details but also other issues such as security policies and procedures as well as training and education. Although many individuals involved in computer and network security have advanced degrees in math, computer science, information systems, or computer or electrical engineering, you do not need this technical background to address security effectively in your organization. You do not need to develop your own cryptographic algorithm, for example; you simply need to be able to understand how cryptography is used, along with its strengths and weaknesses. As you progress in your studies, you will learn that many security problems are caused by the human element. The best technology in the world still ends up being placed in an environment where humans have the opportunity to foul things up—and all too often do.

Part I: Security Concepts The book begins with an introduction of some of the basic elements of security.

Part II: Cryptography and Applications Cryptography is an important part of security, and this part covers this topic in detail. The purpose is not to make cryptographers out of readers but to instead provide a basic understanding of how cryptography works and what goes into a basic cryptographic scheme. An important subject in cryptography, and one that is essential for the reader to understand, is the creation of public key infrastructures, and this topic is covered as well.

Part III: Security in the Infrastructure The next part concerns infrastructure issues. In this case, we are not referring to the critical infrastructures identified by the White House several years ago (identifying sectors such as telecommunications, banking and finance, oil and gas, and so forth) but instead the various components that form the backbone of an organization’s security structure.

Part IV: Security in Transmissions This part discusses communications security. This is an important aspect of security because, for years now, we have connected our computers together into a vast array of networks. Various protocols in use today and that the security practitioner needs to be aware of are discussed in this part.

Part V: Operational Security This part addresses operational and organizational issues. This is where we depart from a discussion of technology again and will instead discuss how security is accomplished in an organization. Because we know that we will not be absolutely successful in our security efforts—attackers are always finding new holes and ways around our security defenses—one of the most important topics we will address is the subject of security incident response and recovery. Also included is a discussion of change management (addressing the subject we alluded to earlier when addressing the problems with patch management), security awareness and training, incident response, and forensics.

Part VI: Appendixes There are two appendixes in CompTIA Security+ Certification All-in-One Exam Guide. Appendix A explains how best to use the CD-ROM included with this book, and Appendix B provides an additional in-depth explanation of the OSI model and Internet protocols, should this information be new to you.

Glossary Located just before the Index, you will find a useful glossary of security terminology, including many related acronyms and their meaning. We hope that you use the Glossary frequently and find it to be a useful study aid as you work your way through the various topics in this exam guide.


Special Features of the All-in-One Certification Series


To make our exam guides more useful and a pleasure to read, we have designed the All-in-One Certification series to include several conventions.


Icons


To alert you to an important bit of advice, a shortcut, or a pitfall, you’ll occasionally see Notes, Tips, Cautions, and Exam Tips peppered throughout the text.



NOTE Notes offer nuggets of especially helpful information, background explanations, and occasional definitions of terms.



TIP Tips provide suggestions and nuances to help you learn to finesse your job. Take a tip from us and read the Tips carefully.



CAUTION When you see a Caution, pay special attention. Cautions appear when you have to make a crucial choice or when you are about to undertake something that may have ramifications you might not immediately anticipate. Read them now so you don’t have regrets later.



EXAM TIP Exam Tips give you special advice or may provide information specifically related to preparing for the exam itself.


End-of-Chapter Reviews and Chapter Tests


An important part of this book comes at the end of each chapter where you will find a brief review of the high points along with a series of questions followed by the answers to those questions. Each question is in multiple-choice format. The answers provided also include a small discussion explaining why the correct answer actually is the correct answer.

The questions are provided as a study aid to you, the reader and prospective Security+ exam taker. We obviously can’t guarantee that if you answer all of our questions correctly you will absolutely pass the certification exam. Instead, what we can guarantee is that the questions will provide you with an idea about how ready you are for the exam.


The CD-ROM


CompTIA Security+ Certification All-in-One Exam Guide also provides you with a CD-ROM of even more test questions and their answers to help you prepare for the certification exam. Read more about the companion CD-ROM in Appendix A.


Onward and Upward


At this point, we hope that you are now excited about the topic of security, even if you weren’t in the first place. We wish you luck in your endeavors and welcome you to the exciting field of computer and network security.


PART I
Security Concepts


Chapter 1 General Security Concepts

Chapter 2 Operational Organizational Security

Chapter 3 Legal Issues, Privacy, and Ethics



CHAPTER 1
General Security Concepts


  • Learn about the Security+ exam
  • Learn basic terminology associated with computer and information security
  • Discover the basic approaches to computer and information security
  • Discover various methods of implementing access controls
  • Determine methods used to verify the identity and authenticity of an individual

Why should you be concerned with taking the Security+ exam? The goal of taking the Computing Technology Industry Association (CompTIA) Security+ exam is to prove that you’ve mastered the worldwide standards for foundation-level security practitioners. With a growing need for trained security professionals, the CompTIA Security+ exam gives you a perfect opportunity to validate your knowledge and understanding of the computer security field. The exam is an appropriate mechanism for many different individuals, including network and system administrators, analysts, programmers, web designers, application developers, and database specialists, to show proof of professional achievement in security. The exam’s objectives were developed with input and assistance from industry and government agencies, including such notable examples as the Federal Bureau of Investigation (FBI), the National Institute of Standards and Technology (NIST), the U.S. Secret Service, the Information Systems Security Association (ISSA), the Information Systems Audit and Control Association (ISACA), Microsoft Corporation, RSA Security, Motorola, Novell, Sun Microsystems, VeriSign, and Entrust.


The Security+ Exam


The Security+ exam is designed to cover a wide range of security topics—subjects about which a security practitioner would be expected to know. The test includes information from six knowledge domains:

Knowledge Domain                 Percent of Exam
Systems Security                 21%
Network Infrastructure           20%
Access Control                   17%
Assessments & Audits             15%
Cryptography                     15%
Organizational Security          12%

The Systems Security knowledge domain covers the security threats to computer systems and addresses the mechanisms that systems use to address these threats. A major portion of this domain concerns the factors that go into hardening the operating system as well as the hardware and peripherals. The Network Infrastructure domain examines the security threats introduced when computers are connected in local networks and with the Internet. It is also concerned with the various elements of a network as well as the tools and mechanisms put in place to protect networks. Since a major security goal is to prevent unauthorized access to computer systems and the data they process, the third domain examines the many ways that we attempt to control who can access our systems and data. Since security is a difficult goal to obtain, we must constantly examine the ever-changing environment in which our systems operate. The fourth domain, Assessments & Audits, covers things individuals can do to check that security mechanisms that have been implemented are adequate and are sufficiently protecting critical data and resources. Cryptography has long been part of the basic security foundation of any organization, and an entire domain is devoted to its various aspects. The last domain, Organizational Security, takes a look at what an organization should be doing after all the other security mechanisms are in place. This domain covers incident response and disaster recovery, in addition to topics more appropriately addressed at the organizational level.

The exam consists of a series of questions, each designed to have a single best answer or response. The other available choices are designed to provide options that an individual might choose if he or she had an incomplete knowledge or understanding of the security topic represented by the question. The exam questions are chosen from the more detailed objectives listed in the outline shown in Figure 1-1, an excerpt from the 2008 objectives document obtainable from the CompTIA web site at http://certification.comptia.org/resources/objectives.aspx.

The Security+ exam is designed for individuals who have at least two years of networking experience and who have a thorough understanding of TCP/IP with a focus on security. Originally administered only in English, the exam is now offered in testing centers around the world in the English, Japanese, Korean, and German languages. Consult the CompTIA web site at www.comptia.org to determine a location near you.

The exam consists of 100 questions to be completed in 90 minutes. A minimum passing score is considered 764 out of a possible 900 points. Results are available immediately after you complete the exam. An individual who fails to pass the exam the first time will be required to pay the exam fee again to retake the exam, but no mandatory waiting period is required before retaking it the second time. If the individual again fails the exam, a minimum waiting period of 30 days is required for each subsequent retake. For more information on retaking exams, consult CompTIA’s retake policy, which can be found on its web site.
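
To put those figures in perspective, here is a small back-of-the-envelope sketch in Python. It is purely illustrative: the domain weights come from the table earlier in this section, the score figures come from the paragraph above, and the per-domain question counts are only estimates, since CompTIA does not publish an exact per-domain breakdown.

    # Rough numbers for the Security+ exam (2008 objectives); illustrative only.
    domain_weights = {
        "Systems Security": 0.21,
        "Network Infrastructure": 0.20,
        "Access Control": 0.17,
        "Assessments & Audits": 0.15,
        "Cryptography": 0.15,
        "Organizational Security": 0.12,
    }

    total_questions = 100
    passing_score, max_score = 764, 900

    # 764 out of 900 works out to roughly 84.9 percent of the scaled score.
    print(f"Passing threshold: {passing_score / max_score:.1%}")

    # Estimated question count per domain (an approximation, not an official figure).
    for domain, weight in domain_weights.items():
        print(f"{domain:<25} ~{round(weight * total_questions)} questions")

Running the sketch shows a passing threshold of roughly 85 percent of the scaled score and, if the weights translated directly into question counts, about 21 of the 100 questions coming from the Systems Security domain.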

This All-in-One Security+ Certification Exam Guide is designed to assist you in preparing for the Security+ exam. It is organized around the same objectives as the exam and attempts to cover the major areas the exam includes. Using this guide in no way guarantees that you will pass the exam, but it will greatly assist you in preparing to meet the challenges posed by the Security+ exam.



Figure 1-1 The CompTIA Security+ objectives



Basic Security Terminology


The term hacking is used frequently in the media. A hacker was once considered an individual who understood the technical aspects of computer operating systems and networks. Hackers were individuals you turned to when you had a problem and needed extreme technical expertise. Today, largely as a result of media use, the term is used more often to refer to individuals who attempt to gain unauthorized access to computer systems or networks. While some would prefer to use the terms cracker and cracking when referring to this nefarious type of activity, the terminology generally accepted by the public is that of hacker and hacking. A related term that is sometimes used is phreaking, which refers to the “hacking” of computers and systems used by the telephone company.


Security Basics


Computer security is a term with many meanings and related terms. At its core, computer security entails the methods used to ensure that a system is secure. Broadly speaking, it addresses the ability to control who has access to a computer system and its data and what they can do with those resources.

Seldom in today’s world are computers not connected to other computers in networks. This then introduces the term network security to refer to the protection of the multiple computers and other devices that are connected together in a network. Related to these two terms are two others, information security and information assurance, which place the focus of the security process not on the hardware and software being used but on the data that is processed by them. Assurance also introduces another concept, that of the availability of the systems and information when users want them.

Since the late 1990s, much has been published about specific lapses in security that have resulted in the penetration of a computer network or in denying access to or the use of the network. Over the last few years, the general public has become increasingly aware of its dependence on computers and networks and consequently has also become interested in their security.

As a result of this increased attention by the public, several new terms have become commonplace in conversations and print. Terms such as hacking, virus, TCP/IP, encryption, and firewalls now frequently appear in mainstream news publications and have found their way into casual conversations. What was once the purview of scientists and engineers is now part of our everyday life.

With our increased daily dependence on computers and networks to conduct everything from making purchases at our local grocery store to driving our children to school (any new car these days probably uses a small computer to obtain peak engine performance), ensuring that computers and networks are secure has become of paramount importance. Medical information about each of us is probably stored in a computer somewhere. So is financial information and data relating to the types of purchases we make and store preferences (assuming we have and use a credit card to make purchases). Making sure that this information remains private is a growing concern to the general public, and it is one of the jobs of security to help with the protection of our privacy. Simply stated, computer and network security is essential for us to function effectively and safely in today’s highly automated environment.


The “CIA” of Security


Almost from its inception, the goals of computer security have been threefold: confidentiality, integrity, and availability—the “CIA” of security. Confidentiality ensures that only those individuals who have the authority to view a piece of information may do so. No unauthorized individual should ever be able to view data to which they are not entitled. Integrity is a related concept but deals with the modification of data. Only authorized individuals should be able to change or delete information. The goal of availability is to ensure that the data, or the system itself, is available for use when the authorized user wants it.

As a result of the increased use of networks for commerce, two additional security goals have been added to the original three in the CIA of security. Authentication deals with ensuring that an individual is who he claims to be. The need for authentication in an online banking transaction, for example, is obvious. Related to this is nonrepudiation, which deals with the ability to verify that a message has been sent and received so that the sender (or receiver) cannot refute sending (or receiving) the information.



EXAM TIP Expect questions on these concepts as they are basic to the understanding of what we hope to guarantee in securing our computer systems and networks.


The Operational Model of Security


For many years, the focus of security was on prevention. If you could prevent somebody from gaining access to your computer systems and networks, you assumed that they were secure. Protection was thus equated with prevention. While this basic premise was true, it failed to acknowledge the realities of the networked environment of which our systems are a part. No matter how well you think you can provide prevention, somebody always seems to find a way around the safeguards. When this happens, the system is left unprotected. What is needed is a combination of prevention techniques, technology to alert you when prevention has failed, and ways to address the problem. This results in a modification to the original security equation with the addition of two new elements: detection and response. The security equation thus becomes

Protection = Prevention + (Detection + Response)

This is known as the operational model of computer security. Every security technique and technology falls into at least one of the three elements of the equation. Examples of the types of technology and techniques that represent each are depicted in Figure 1-2.


Security Principles


An organization can choose to address the protection of its networks in three ways: ignore security issues, provide host security, and approach security at a network level. The last two, host and network security, have prevention as well as detection and response components.


Figure 1-2 Sample technologies in the operational model of computer security


If an organization decides to ignore security, it has chosen to utilize the minimal amount of security that is provided with its workstations, servers, and devices. No additional security measures will be implemented. Each “out-of-the-box” system has certain security settings that can be configured, and they should be. To protect an entire network, however, requires work in addition to the few protection mechanisms that come with systems by default.

Host Security Host security takes a granular view of security by focusing on protecting each computer and device individually instead of addressing protection of the network as a whole. When host security is implemented, each computer is expected to protect itself. If an organization decides to implement only host security and does not include network security, it will likely introduce or overlook vulnerabilities. Many environments involve different operating systems (Windows, UNIX, Linux, Macintosh), different versions of those operating systems, and different types of installed applications. Each operating system has security configurations that differ from other systems, and different versions of the same operating system can in fact have variations among them. Trying to ensure that every computer is “locked down” to the same degree as every other system in the environment can be overwhelming and often results in an unsuccessful and frustrating effort.

Host security is important and should always be addressed. Security, however, should not stop there, as host security is a complementary process to be combined with network security. If individual host computers have vulnerabilities embodied within them, network security can provide another layer of protection that will hopefully stop intruders getting that far into the environment. Topics covered in this book dealing with host security include bastion hosts, host-based intrusion detection systems (devices designed to determine whether an intruder has penetrated a computer system or network), antivirus software (programs designed to prevent damage caused by various types of malicious software), and hardening of operating systems (methods used to strengthen operating systems and to eliminate possible avenues through which attacks can be launched).

Network Security In some smaller environments, host security alone might be a viable option, but as systems become connected into networks, security should include the actual network itself. In network security, an emphasis is placed on controlling access to internal computers from external entities. This control can be through devices such as routers, firewalls, authentication hardware and software, encryption, and intrusion detection systems (IDSs).

Network environments have a tendency to be unique entities because usually no two networks have exactly the same number of computers, the same applications installed, the same number of users, the exact same configurations, or the same available servers. They will not perform the same functions or have the same overall architecture. Because networks have so many differences, they can be protected and configured in many different ways. This chapter covers some foundational approaches to network and host security. Each approach can be implemented in myriad ways.


Least Privilege


One of the most fundamental approaches to security is least privilege. This concept is applicable to many physical environments as well as network and host security. Least privilege means that an object (such as a user, application, or process) should have only the rights and privileges necessary to perform its task, with no additional permissions. Limiting an object’s privileges limits the amount of harm that can be caused, thus limiting an organization’s exposure to damage. Users may have access to the files on their workstations and a select set of files on a file server, but they have no access to critical data that is held within the database. This rule helps an organization protect its most sensitive resources and helps ensure that whoever is interacting with these resources has a valid reason to do so.

Different operating systems and applications have different ways of implementing rights, permissions, and privileges. Before operating systems are actually configured, an overall plan should be devised and standardized methods developed to ensure that a solid security baseline is implemented. For example, a company might want all of the accounting department employees, but no one else, to be able to access employee payroll and profit margin spreadsheets stored on a server. The easiest way to implement this is to develop an Accounting group, put all accounting employees in this group, and assign rights to the group instead of each individual user.

As another example, a company might implement a hierarchy of administrators who perform different functions and require specific types of rights. Two people could be tasked with performing backups of individual workstations and servers; thus they do not need administrative permissions with full access to all resources. Three people could be in charge of setting up new user accounts and password management, which means they do not need full, or perhaps any, access to the company’s routers and switches. Once these baselines are delineated, indicating which subjects require which rights and permissions, it is much easier to configure settings that provide the least privileges for different subjects.
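To make the idea concrete, the following short Python sketch, using hypothetical group, user, and file names, shows rights granted to groups rather than to individual users, so each user ends up with only the access his or her group requires:

# Illustrative sketch of group-based least privilege (hypothetical names).
# Rights are assigned to groups; users inherit only their group's rights.

GROUP_RIGHTS = {
    "Accounting": {"payroll.xls": {"read", "write"}, "margins.xls": {"read"}},
    "BackupOps": {"payroll.xls": {"read"}},  # backup operators need read only
}

USER_GROUPS = {
    "alice": {"Accounting"},
    "bob": {"BackupOps"},
}

def allowed(user, resource, right):
    """Grant access only if one of the user's groups holds the right."""
    return any(right in GROUP_RIGHTS.get(group, {}).get(resource, set())
               for group in USER_GROUPS.get(user, set()))

print(allowed("alice", "payroll.xls", "write"))  # True
print(allowed("bob", "payroll.xls", "write"))    # False: least privilege

Adding or removing a user from a group changes everything that user can reach in one step, which is exactly why group-based assignment is easier to keep consistent than per-user permissions.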

The concept of least privilege applies to more network security issues than just providing users with specific rights and permissions. When trust relationships are created, they should not be implemented in such a way that everyone trusts each other simply because it is easier to set it up that way. One domain should trust another for very specific reasons, and the implementers should have a full understanding of what the trust relationship allows between two domains. If one domain trusts another, do all of the users automatically become trusted, and can they thus easily access any and all resources on the other domain? Is this a good idea? Can a more secure method provide the same functionality? If a trusted relationship is implemented such that users in one group can access a plotter or printer that is available on only one domain, for example, it might make sense to purchase another plotter so that other more valuable or sensitive resources are not accessible by the entire group.

Another issue that falls under the least privilege concept is the security context in which an application runs. All applications, scripts, and batch files run in the security context of a specific user on an operating system. These objects will execute with specific permissions as if they were a user. The application could be Microsoft Word and be run in the space of a regular user, or it could be a diagnostic program that needs access to more sensitive system files and so must run under an administrative user account, or it could be a program that performs backups and so should operate within the security context of a backup operator. The crux of this issue is that programs should execute only in the security context that is needed for that program to perform its duties successfully. In many environments, people do not really understand how to make programs run under different security contexts, or it just seems easier to have them all run under the administrator account. If attackers can compromise a program or service running under the administrative account, they have effectively elevated their access level and have much more control over the system and many more possibilities to cause damage.



EXAM TIP The concept of least privilege is fundamental to many aspects of security. Remember the basic idea is to give people access only to the data and programs that they need to do their job. Anything beyond that can lead to a potential security problem.


Separation of Duties


Another fundamental approach to security is separation of duties. This concept is applicable to physical environments as well as network and host security. Separation of duties ensures that for any given task, more than one individual needs to be involved. The task is broken into different duties, each of which is accomplished by a separate individual. By implementing a task in this manner, no single individual can abuse the system for his or her own gain. This principle has been implemented in the business world, especially in financial institutions, for many years. A simple example is a system in which one individual is required to place an order and a separate person is needed to authorize the purchase.

While separation of duties provides a certain level of checks and balances, it is not without its own drawbacks. Chief among these is the cost required to accomplish the task. This cost is manifested in both time and money. More than one individual is required when a single person could accomplish the task, thus potentially increasing the cost of the task. In addition, with more than one individual involved, a certain delay can be expected as the task must proceed through its various steps.


Implicit Deny


What has become the Internet was originally designed as a friendly environment where everybody agreed to abide by the rules implemented in the various protocols. Today, the Internet is no longer the friendly playground of researchers that it once was. This has resulted in different approaches that might at first seem less than friendly but that are required for security purposes. One of these approaches is implicit deny.

Frequently in the network world, decisions concerning access must be made. Often a series of rules will be used to determine whether or not to allow access. If a particular situation is not covered by any of the other rules, the implicit deny approach states that access should not be granted. In other words, if no rule would allow access, then access should not be granted. Implicit deny applies to situations involving both authorization and access.

The alternative to implicit deny is to allow access unless a specific rule forbids it. Another example of these two approaches is in programs that monitor and block access to certain web sites. One approach is to provide a list of specific sites that a user is not allowed to access. Access to any site not on the list would be implicitly allowed. The opposite approach (the implicit deny approach) would block all access to sites that are not specifically identified as authorized. As you can imagine, depending on the specific application, one or the other approach would be appropriate. Which approach you choose depends on the security objectives and policies of your organization.
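The difference between the two approaches can be summarized in a few lines of Python. This is only an illustrative sketch with made-up rule entries; real rule sets live in firewalls, ACLs, and filtering software:

# Sketch contrasting implicit deny with its default-allow alternative.
# The rule entries are hypothetical.

ALLOW_RULES = {("alice", "web_server"), ("bob", "mail_server")}
DENY_RULES = {("guest", "payroll_db")}

def implicit_deny(subject, resource):
    # Grant access only when a rule explicitly allows it.
    return (subject, resource) in ALLOW_RULES

def default_allow(subject, resource):
    # Grant access unless a rule explicitly forbids it.
    return (subject, resource) not in DENY_RULES

print(implicit_deny("carol", "payroll_db"))  # False: no rule allows it
print(default_allow("carol", "payroll_db"))  # True: no rule forbids it

The same request produces opposite results under the two policies, which is why the choice of default matters so much.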



EXAM TIP Implicit deny is another fundamental principle of security, and students need to be sure they understand it. Similar to least privilege, this principle states that if you haven’t specifically been allowed access, then access should be denied.


Job Rotation


An interesting approach to enhancing security that is gaining increasing attention is job rotation. The benefits of rotating individuals through various jobs in an organization’s IT department have been discussed for a while. By rotating through jobs, individuals gain a better perspective of how the various parts of IT can enhance (or hinder) the business. Since security is often a misunderstood aspect of IT, rotating individuals through security positions can result in a much wider understanding of the security problems throughout the organization. It can also have the side benefit of reducing reliance on any one individual for security expertise. When all security tasks are the domain of one employee, security at the organization could suffer if that individual were to leave suddenly. On the other hand, if security tasks are understood by many different individuals, the loss of any one individual has less of an impact on the organization.

One significant drawback to job rotation is relying on it too heavily. The IT world is very technical and often expertise in any single aspect takes years to develop. This is especially true in the security environment. In addition, the rapidly changing threat environment with new vulnerabilities and exploits routinely being discovered requires a level of understanding that takes considerable time to acquire and maintain.


Layered Security


A bank does not protect the money that it stores only by placing it in a vault. It uses one or more security guards as a first defense to watch for suspicious activities and to secure the facility when the bank is closed. It probably uses monitoring systems to watch various activities that take place in the bank, whether involving customers or employees. The vault is usually located in the center of the facility, and layers of rooms or walls also protect access to the vault. Access control ensures that the people who want to enter the vault have been granted the appropriate authorization before they are allowed access, and the systems, including manual switches, are connected directly to the police station in case a determined bank robber successfully penetrates any one of these layers of protection.

Networks should utilize the same type of layered security architecture. No system is 100 percent secure and nothing is foolproof, so no single specific protection mechanism should ever be trusted alone. Every piece of software and every device can be compromised in some way, and every encryption algorithm can be broken by someone with enough time and resources. The goal of security is to make the effort of actually accomplishing a compromise more costly in time and effort than it is worth to a potential attacker.

Consider, for example, the steps an intruder has to take to access critical data held within a company’s back-end database. The intruder will first need to penetrate the firewall and use packets and methods that will not be identified and detected by the IDS (more on these devices in Chapter 11). The attacker will have to circumvent an internal router performing packet filtering and possibly penetrate another firewall that is used to separate one internal network from another. From here, the intruder must break the access controls on the database, which means performing a dictionary or brute-force attack to be able to authenticate to the database software. Once the intruder has gotten this far, he still needs to locate the data within the database. This can in turn be complicated by the use of access control lists (ACLs) outlining who can actually view or modify the data. That’s a lot of work.

This example illustrates the different layers of security many environments employ. It is important that several different layers are implemented, because if intruders succeed at one layer, you want to be able to stop them at the next. The redundancy of different protection layers helps ensure that the network’s security does not hinge on any single point of failure. If a network used only a firewall to protect its assets, an attacker able to penetrate this device would find the rest of the network open and vulnerable. And because a firewall usually does not protect against viruses attached to e-mail, a second layer of defense is needed, perhaps in the form of an antivirus program.

Every network environment must have multiple layers of security. These layers can employ a variety of methods such as routers, firewalls, network segments, IDSs, encryption, authentication software, physical security, and traffic control. The layers need to work together in a coordinated manner so that one does not impede another’s functionality and introduce a security hole. Security at each layer can be very complex, and putting different layers together can increase the complexity exponentially.

Although having layers of protection in place is very important, it is also important to understand how these different layers interact either by working together or in some cases by working against each other. One example of how different security methods can work against each other occurs when firewalls encounter encrypted network traffic. An organization can use encryption so that an outside customer communicating with a specific web server is assured that sensitive data being exchanged is protected. If this encrypted data is encapsulated within Secure Sockets Layer (SSL) packets and is then sent through a firewall, the firewall will not be able to read the payload information in the individual packets. This could enable the customer, or an outside attacker, to send undetected malicious code or instructions through the SSL connection. Other mechanisms can be introduced in similar situations, such as designing web pages to accept information only in certain formats and having the web server parse through the data for malicious activity. The important piece is to understand the level of protection that each layer provides and how each layer can be affected by activities that occur in other layers.

These layers are usually depicted starting at the top, with more general types of protection, and progress downward through each layer, with increasing granularity at each layer as you get closer to the actual resource, as you can see in Figure 1-3. The top-layer protection mechanism is responsible for looking at an enormous amount of traffic, and it would be overwhelming and cause too much of a performance degradation if each aspect of the packet were inspected here. Instead, each layer usually digs deeper into the packet and looks for specific items. Layers that are closer to the resource have to deal with only a fraction of the traffic that the top-layer security mechanism considers, and thus looking deeper and at more granular aspects of the traffic will not cause as much of a performance hit.


Diversity of Defense


Diversity of defense is a concept that complements the idea of various layers of security; layers are made dissimilar so that even if an attacker knows how to get through a system making up one layer, she might not know how to get through a different type of layer that employs a different system for security.

If, for example, an environment has two firewalls that form a demilitarized zone (a DMZ is the area between the two firewalls that provides an environment where activities can be more closely monitored), one firewall can be placed at the perimeter, between the Internet and the DMZ. This firewall analyzes traffic that passes through that specific access point and enforces certain types of restrictions. The other firewall can be placed between the DMZ and the internal network. When applying the diversity of defense concept, you should set up these two firewalls to filter for different types of traffic and provide different types of restrictions. The first firewall, for example, can make sure that no File Transfer Protocol (FTP), Simple Network Management Protocol (SNMP), or Telnet traffic enters the network, but allow Simple Mail Transfer Protocol (SMTP), Secure Shell (SSH), Hypertext Transfer Protocol (HTTP), and SSL traffic through.

Figure 1-3 Various layers of security



The second firewall may not allow SSL or SSH through and can interrogate SMTP and HTTP traffic to make sure that certain types of attacks are not part of that traffic.
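A rough Python sketch of these two dissimilar filter policies follows. The protocol names come from the example above; the data structures and traffic sample are hypothetical and only illustrate how the two layers screen for different things:

# Two dissimilar firewall policies (illustrative only).
OUTER_FIREWALL_BLOCKS = {"FTP", "SNMP", "Telnet"}  # Internet -> DMZ
INNER_FIREWALL_BLOCKS = {"SSL", "SSH"}             # DMZ -> internal network

def passes(protocol, blocked):
    return protocol not in blocked

for proto in ["HTTP", "SSH", "FTP", "SMTP"]:
    reaches_dmz = passes(proto, OUTER_FIREWALL_BLOCKS)
    reaches_lan = reaches_dmz and passes(proto, INNER_FIREWALL_BLOCKS)
    print(proto, "->", "internal network" if reaches_lan
          else "DMZ only" if reaches_dmz else "dropped at perimeter")

Because each layer blocks a different set of protocols, traffic that slips past one screen can still be caught by the other.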

Another type of diversity of defense is to use products from different vendors. Every product has its own security vulnerabilities that are usually known to experienced attackers in the community. A Check Point firewall, for example, has different security issues and settings than a Sidewinder firewall; thus, different exploits can be used to crash or compromise them in some fashion. Combining this type of diversity with the preceding example, you might use the Check Point firewall as the first line of defense. If attackers are able to penetrate it, they are less likely to get through the next firewall if it is a Cisco PIX or Sidewinder firewall (or another maker’s firewall).

You should consider an obvious trade-off before implementing diversity of defense using different vendors’ products. This setup usually increases operational complexity, and security and complexity are seldom a good mix. When implementing products from more than one vendor, security staff must know how to configure two different systems, the configuration settings will be totally different, the upgrades and patches will be released at different times and contain different changes, and the overall complexity of maintaining these systems can cause more problems than the added security is worth. This does not mean that you should not implement diversity of defense by installing products from different vendors, but you should understand the implications of this decision.


Security Through Obscurity


With security through obscurity, security is considered effective if the environment and protection mechanisms are confusing or supposedly not generally known. Security through obscurity uses the approach of protecting something by hiding it—out of sight, out of mind. Noncomputer examples of this concept include hiding your briefcase or purse if you leave it in the car so that it is not in plain view, hiding a house key under a ceramic frog on your porch, or pushing your favorite ice cream to the back of the freezer so that nobody else will see it. This approach, however, does not provide actual protection of the object. Someone can still steal the purse by breaking into the car, lift the ceramic frog and find the key, or dig through the items in the freezer to find the ice cream. Security through obscurity may make someone work a little harder to accomplish a task, but it does not prevent anyone from eventually succeeding.

Similar approaches occur in computer and network security when attempting to hide certain objects. A network administrator can, for instance, move a service from its default port to a different port so that others will not know how to access it as easily, or a firewall can be configured to hide specific information about the internal network in the hope that potential attackers will not obtain the information for use in an attack on the network.

In most security circles, security through obscurity is considered a poor approach, especially if it is the organization’s only approach to security. An organization can use security through obscurity measures to try to hide critical assets, but other security measures should also be employed to provide a higher level of protection. For example, if an administrator moves a service from its default port to a more obscure port, an attacker can still find this service; thus a firewall should be used to restrict access to the service.


Keep It Simple


The terms security and complexity are often at odds with each other, because the more complex something is, the more difficult it is to understand, and you cannot truly secure something if you do not understand it. Another reason complexity is a problem within security is that it usually allows too many opportunities for something to go wrong. An application with 4000 lines of code has far fewer places for buffer overflows, for example, than an application with 2 million lines of code.

As with any other type of technology, when something goes wrong with security mechanisms, a troubleshooting process is used to identify the problem. If the mechanism is overly complex, identifying the root of the problem can be overwhelming if not impossible. Security is already a very complex issue because many variables are involved, many types of attacks and vulnerabilities are possible, many different types of resources must be secure, and many different ways can be used to secure them. You want your security processes and tools to be as simple and elegant as possible. They should be simple to troubleshoot, simple to use, and simple to administer.

Another application of the principle of keeping things simple concerns the number of services that you allow your system to run. Default installations of computer operating systems often leave many services running. The keep-it-simple principle tells us to eliminate those services that we don’t need. This is also a good idea from a security standpoint because it results in fewer applications that can be exploited and fewer services that the administrator is responsible for securing. The general rule of thumb should be to eliminate all nonessential services and protocols. This of course leads to the question, how do you determine whether a service or protocol is essential or not? Ideally, you should know what your computer system or network is being used for, and thus you should be able to identify those elements that are essential and activate only them. For a variety of reasons, this is not as easy as it sounds. Alternatively, a stringent security approach that you can take is to assume that no service is necessary (which is obviously absurd) and activate services and ports only as they are requested. Whatever approach you take, it’s a never-ending struggle to try to strike a balance between providing functionality and maintaining security.


Access Control


The term access control describes a variety of protection schemes. It sometimes refers to all security features used to prevent unauthorized access to a computer system or network. In this sense, it may be confused with authentication. More properly, access is the ability of a subject (such as an individual or a process running on a computer system) to interact with an object (such as a file or hardware device). Authentication, on the other hand, deals with verifying the identity of a subject.

To understand the difference, consider the example of an individual attempting to log in to a computer system or network. Authentication is the process used to verify to the computer system or network that the individual is who he claims to be. The most common method to do this is through the use of a user ID and password. Once the individual has verified his identity, access controls regulate what the individual can actually do on the system—just because a person is granted entry to the system does not mean that he should have access to all data the system contains.
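The separation between the two steps can be sketched in a few lines of Python. The credentials and permissions below are hypothetical, and the plaintext password is used only to keep the example short (real systems store only password hashes):

# Illustrative only: authentication verifies identity; access control then
# decides what the authenticated user may do. Names and values are made up,
# and the plaintext password is used only to keep the sketch short.

CREDENTIALS = {"jdoe": "s3cret!"}                 # who the user claims to be
PERMISSIONS = {"jdoe": {"timesheet": {"read"}}}   # what the user may then do

def authenticate(user, password):
    return CREDENTIALS.get(user) == password

def authorized(user, obj, action):
    return action in PERMISSIONS.get(user, {}).get(obj, set())

if authenticate("jdoe", "s3cret!"):                 # identity verified
    print(authorized("jdoe", "timesheet", "read"))  # True
    print(authorized("jdoe", "payroll", "write"))   # False: logged in, not entitled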

Consider another example. When you go to your bank to make a withdrawal, the teller at the window will verify that you are indeed who you claim to be by asking you to provide some form of identification with your picture on it, such as your driver’s license. You might also have to provide your bank account number. Once the teller verifies your identity, you will have proved that you are a valid (authorized) customer of this bank. This does not, however, mean that you have the ability to view all information that the bank protects, such as your neighbor’s account balance. The teller will control what information, and funds, you can access and will grant you access only to information that you are authorized to see. In this example, your identification and bank account number serve as your method of authentication, and the teller serves as the access control mechanism.

In computer systems and networks, access controls can be implemented in several ways. An access control matrix provides the simplest framework for illustrating the process and is shown in Table 1-1. In this matrix, the system is keeping track of two processes, two files, and one hardware device. Process 1 can read both File 1 and File 2 but can write only to File 1. Process 1 cannot access Process 2, but Process 2 can execute Process 1. Both processes have the ability to write to the printer.

While simple to understand, the access control matrix is seldom used in computer systems because it is extremely costly in terms of storage space and processing. Imagine the size of an access control matrix for a large network with hundreds of users and thousands of files. The actual mechanics of how access controls are implemented in a system vary, though access control lists (ACLs) are common. An ACL is nothing more than a list that contains the subjects that have access rights to a particular object. The list identifies not only the subject but also the specific access granted to the subject for the object. Typical types of access include read, write, and execute, as indicated in the example access control matrix.
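As a minimal sketch, the matrix described above can be represented as a nested mapping from subjects to objects to rights, and an object’s ACL is simply the corresponding column of that matrix (the code below is illustrative only):

# The example access control matrix as a nested mapping
# (subject -> object -> set of rights); illustrative only.

MATRIX = {
    "Process 1": {"File 1": {"read", "write"}, "File 2": {"read"},
                  "Printer": {"write"}},
    "Process 2": {"Process 1": {"execute"}, "Printer": {"write"}},
}

def acl_for(obj):
    """Derive an object's ACL: the subjects that hold rights to it."""
    return {subj: rights[obj] for subj, rights in MATRIX.items() if obj in rights}

print(acl_for("Printer"))                                   # both processes: write
print("write" in MATRIX["Process 1"].get("File 2", set()))  # False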

No matter what specific mechanism is used to implement access controls in a computer system or network, the controls should be based on a specific model of access. Several different models are discussed in security literature, including discretionary access control (DAC), mandatory access control (MAC), role-based access control (RBAC), and rule-based access control (also RBAC).


Table 1-1 An Access Control Matrix



Discretionary Access Control


Both discretionary access control and mandatory access control are terms originally used by the military to describe two different approaches to controlling an individual’s access to a system. As defined by the “Orange Book,” a Department of Defense document that at one time was the standard for describing what constituted a trusted computing system, DACs are “a means of restricting access to objects based on the identity of subjects and/or groups to which they belong. The controls are discretionary in the sense that a subject with a certain access permission is capable of passing that permission (perhaps indirectly) on to any other subject.” While this might appear to be confusing “government-speak,” the principle is rather simple. In systems that employ DACs, the owner of an object can decide which other subjects can have access to the object and what specific access they can have. One common method to accomplish this is the permission bits used in UNIX-based systems. The owner of a file can specify what permissions (read/write/execute) members in the same group can have and also what permissions all others can have. ACLs are also a common mechanism used to implement DAC.
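For instance, a file owner on a UNIX-like system can grant read access to members of the file’s group while denying everyone else, a discretionary decision made by the owner rather than by a system-wide policy. The short Python sketch below illustrates this with a hypothetical file name and assumes a UNIX-like host, where the permission bits are enforced:

# Illustrative DAC example: the owner sets the permission bits at his or her
# discretion. Assumes a UNIX-like system and a file the current user owns.
import os
import stat

path = "payroll.txt"              # hypothetical file
open(path, "w").close()           # create it so the example is self-contained

# Owner: read/write; group: read only; others: no access.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

print(oct(stat.S_IMODE(os.stat(path).st_mode)))  # 0o640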


Mandatory Access Control


A less frequently employed system for restricting access is mandatory access control. This system, generally used only in environments in which different levels of security classifications exist, is much more restrictive regarding what a user is allowed to do. Referring to the “Orange Book,” a mandatory access control is “a means of restricting access to objects based on the sensitivity (as represented by a label) of the information contained in the objects and the formal authorization (i.e., clearance) of subjects to access information of such sensitivity.” In this case, the owner or subject can’t determine whether access is to be granted to another subject; it is the job of the operating system to decide.

In MAC, the security mechanism controls access to all objects, and individual subjects cannot change that access. The key here is the label attached to every subject and object. The label will identify the level of classification for that object and the level to which the subject is entitled. Think of military security classifications such as Secret and Top Secret. A file that has been identified as Top Secret (has a label indicating that it is Top Secret) may be viewed only by individuals with a Top Secret clearance. It is up to the access control mechanism to ensure that an individual with only a Secret clearance never gains access to a file labeled as Top Secret. Similarly, a user cleared for Top Secret access will not be allowed by the access control mechanism to change the classification of a file labeled as Top Secret to Secret or to send that Top Secret file to a user cleared only for Secret information. The complexity of such a mechanism can be further understood when you consider today’s windowing environment. The access control mechanism will not allow a user to cut a portion of a Top Secret document and paste it into a window containing a document with only a Secret label. It is this separation of differing levels of classified information that results in this sort of mechanism being referred to as multilevel security.
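A highly simplified sketch of such label comparisons appears below. The numeric ordering of the levels and the read/write checks only approximate the behavior just described; they are not a complete multilevel security model:

# Simplified multilevel (MAC) checks based on labels; illustrative only.
LEVELS = {"Unclassified": 0, "Secret": 1, "Top Secret": 2}

def can_read(clearance, label):
    # A subject may view an object only at or below its clearance.
    return LEVELS[clearance] >= LEVELS[label]

def can_write(clearance, label):
    # A subject may not place content into a lower-level container,
    # so Top Secret material cannot be pasted into a Secret document.
    return LEVELS[clearance] <= LEVELS[label]

print(can_read("Secret", "Top Secret"))   # False: insufficient clearance
print(can_write("Top Secret", "Secret"))  # False: no write-down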

Finally, just because a subject has the appropriate level of clearance to view a document, that does not mean that she will be allowed to do so. The concept of “need to know,” which is a DAC concept, also exists in MAC mechanisms. “Need to know” means that a person is given access only to information that she needs in order to accomplish her job or mission.



EXAM TIP If trying to remember the difference between MAC and DAC, just remember that MAC is associated with multilevel security.


Role-Based Access Control


ACLs can be cumbersome and can take time to administer properly. Another access control mechanism that has been attracting increased attention is the role-based access control (RBAC). In this scheme, instead of each user being assigned specific access permissions for the objects associated with the computer system or network, each user is assigned a set of roles that he or she may perform. The roles are in turn assigned the access permissions necessary to perform the tasks associated with the role. Users will thus be granted permissions to objects in terms of the specific duties they must perform—not according to a security classification associated with individual objects.
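A brief sketch of the idea follows; the roles, users, and permissions are hypothetical:

# Illustrative RBAC: permissions attach to roles, and users gain permissions
# only through the roles assigned to them.
ROLE_PERMS = {
    "payroll_clerk": {("payroll", "read"), ("payroll", "update")},
    "auditor": {("payroll", "read"), ("audit_log", "read")},
}
USER_ROLES = {"dana": {"auditor"}}

def permitted(user, obj, action):
    return any((obj, action) in ROLE_PERMS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(permitted("dana", "payroll", "read"))    # True, via the auditor role
print(permitted("dana", "payroll", "update"))  # False

When an employee changes positions, the administrator simply changes the roles assigned to that user rather than editing permissions object by object.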


Rule-Based Access Control


The first thing you might notice is the ambiguity introduced by this access control method sharing the acronym RBAC. Rule-based access control again uses mechanisms such as ACLs to help determine whether access should be granted. In this case, a series of rules is contained in the ACL, and the determination of whether to grant access is made based on these rules. An example of such a rule is one that states that no employee may have access to the payroll file after hours or on weekends. As with MAC, users are not allowed to change the access rules; administrators are relied on for this. Rule-based access control can actually be used in addition to, or as a method of implementing, other access control methods. For example, MAC methods can utilize a rule-based approach for implementation.
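The time-of-day rule above can be sketched as follows. This is an illustrative fragment only; the business hours are an assumption, and a real system would enforce such a rule inside the operating system or application rather than in a standalone script:

# Illustrative rule-based check: deny payroll access after hours or on weekends.
# The 8:00-17:00 business hours are an assumption for this sketch.
from datetime import datetime

def payroll_access_allowed(now=None):
    now = now or datetime.now()
    is_weekday = now.weekday() < 5      # Monday = 0 ... Friday = 4
    in_hours = 8 <= now.hour < 17
    return is_weekday and in_hours      # anything else is implicitly denied

print(payroll_access_allowed(datetime(2009, 1, 5, 10, 0)))  # Monday morning: True
print(payroll_access_allowed(datetime(2009, 1, 3, 10, 0)))  # Saturday: False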



EXAM TIP Do not become confused between rule-based and role-based access controls, even though they both have the same acronym. The name of each is descriptive of what it entails and will help you distinguish between them.


Authentication


Access controls define what actions a user can perform or what objects a user can access. These controls assume that the identity of the user has already been verified. It is the job of authentication mechanisms to ensure that only valid users are admitted. Described another way, authentication uses some mechanism to prove that you are who you claim to be. Three general methods are used in authentication. To verify your identity, you can provide the following:


 
  • Something you know
  • Something you have
  • Something you are (something unique about you)

The most common authentication mechanism is to provide something that only you, the valid user, should know. The most frequently used example of this is the common user ID (or username) and password. In theory, since you are not supposed to share your password with anybody else, only you should know your password, and thus by providing it you are proving to the system that you are who you claim to be. In theory, this should be a fairly decent method to provide authentication. Unfortunately, for a variety of reasons, such as the fact that people have a tendency to choose very poor and easily guessed passwords, this technique is not as reliable as it should be. Other authentication mechanisms are consequently always being developed and deployed.
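Because passwords are the weak link just described, systems at least avoid storing them directly. The sketch below shows one common way a stored, salted hash can be checked against a supplied password; it is illustrative only and uses Python’s standard library rather than any particular product’s mechanism:

# Illustrative "something you know" check: store a salted hash, never the
# password itself, and compare in constant time.
import hashlib
import hmac
import os

def enroll(password):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return salt, digest

def verify(password, salt, stored):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)
    return hmac.compare_digest(candidate, stored)

salt, stored = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, stored))  # True
print(verify("password123", salt, stored))                   # False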

Another method to provide authentication involves the use of something that only valid users should have in their possession. A physical-world example of this would be a simple lock and key. Only those individuals with the correct key will be able to open the lock and thus provide admittance to a house, car, office, or whatever the lock was protecting. A similar method can be used to authenticate users for a computer system or network (though the key may be electronic and may reside on a smart card or similar device). The problem with this technology is that people will lose their keys (or cards), which means they can’t log in to the system and somebody else who finds the key can then access the system, even though that person is not authorized. To address this problem, a combination of the something-you-know/something-you-have methods is often used so that the individual with the key can also be required to provide a password or passcode. The key is useless unless you know this code. An example of this is the ATM card most of us carry. The card is associated with a personal identification number (PIN), which only you should know. Knowing the PIN without having the card is useless, just as having the card without knowing the PIN will not give you access to your account.

The third general method to provide authentication involves something that is unique about you. We are used to this concept in our physical world, where people’s fingerprints or a sample of their DNA can be used to identify them. This same concept can be used to provide authentication in the computer world. The field of authentication that uses something about you or something that you are is known as biometrics. A number of different mechanisms can be used to accomplish this type of authentication, such as a voice or fingerprint, a retinal scan, or hand geometry. All of these methods obviously require some additional hardware in order to operate.

While these three approaches to authentication appear to be easy to understand and in most cases easy to implement, authentication is not to be taken lightly, since it is such an important component of security. Potential attackers are constantly searching for ways to get past the system’s authentication mechanism, and some fairly ingenious methods have been employed to do so. Consequently, security professionals are constantly devising new methods, building on these three basic approaches, to provide authentication mechanisms for computer systems and networks. A more in-depth discussion of various authentication schemes is covered in Chapter 9.


Chapter Review


In this chapter, you became acquainted with the objectives that will be tested on the Security+ exam as well as the expected format for the exam. You were also introduced to a number of basic security concepts and terms. The operational model of computer security was described, and examples were provided for each of its components (prevention, detection, and response). The difference between authentication and access control was also discussed. Authentication is the process of verifying to the computer system or network that you are who you claim to be, and access controls are the mechanisms the system uses to decide what you can do once your identity has been verified. Authentication generally comes in one of three forms: something you know, something you have, or something you are/something about you. Biometrics is an example of an authentication method, but the most common authentication mechanism is the simple username and password combination. Several approaches to access control were discussed, including discretionary access control, mandatory access control, rule-based access control, and role-based access control.


Quick Tips


 
  • Information assurance and information security place the security focus on the information and not the hardware or software used to process it.
  • The original goal of computer and network security was to provide confidentiality, integrity, and availability—the “CIA” of security.
  • As a result of the increased reliance on networks for commerce, authentication and nonrepudiation have been added to the original CIA of security.
  • The operational model of computer security tells us that protection is provided by prevention, detection, and response.
  • Host security focuses on protecting each computer and device individually instead of addressing protection of the network as a whole.
  • Least privilege means that an object should have only the necessary rights and privileges to perform its task, with no additional permissions.
  • Separation of duties requires that a given task be broken into different parts that must be accomplished by different individuals. This means that no single individual can accomplish the task without another individual knowing about it.
  • Diversity of defense is a concept that complements the idea of various layers of security. It requires that the layers are dissimilar so that if one layer is penetrated, the next layer can’t also be penetrated using the same method.
  • Access is the ability of a subject to interact with an object. Access controls are devices and methods used to limit which subjects may interact with specific objects.
  • Authentication mechanisms ensure that only valid users are provided access to the computer system or network.
  • The three general methods used in authentication involve the users providing either something they know, something they have, or something unique about them (something they are).


Questions


To further help you prepare for the Security+ exam, and to test your level of preparedness, answer the following questions and then check your answers against the list of correct answers at the end of the chapter.


 
  1. Which access control mechanism provides the owner of an object the opportunity to determine the access control permissions for other subjects?
     A. Mandatory
     B. Role-based
     C. Discretionary
     D. Token-based

  2. What is the most common form of authentication used?
     A. Biometrics
     B. Tokens
     C. Access card
     D. Username/password

  3. A retinal scan device is an example of what type of authentication mechanism?
     A. Something you know
     B. Something you have
     C. Something about you/something you are
     D. Multifactor authentication

  4. Which of the following is true about the security principle of implicit deny?
     A. In a given access control situation, if a rule does not specifically allow the access, it is by default denied.
     B. It incorporates both access-control and authentication mechanisms into a single device.
     C. It allows for only one user to an object at a time; all others are denied access.
     D. It bases access decisions on the role of the user, as opposed to using the more common access control list mechanism.

  5. From a security standpoint, what are the benefits of job rotation?
     A. It keeps employees from becoming bored with mundane tasks that might make it easier for them to make a mistake without noticing.
     B. It provides everybody with a better perspective of the issues surrounding security and lessens the impact of losing any individual employee since others can assume their duties.
     C. It keeps employees from learning too many details related to any one position thus making it more difficult for them to exploit that position.
     D. It ensures that no employee has the opportunity to exploit a specific position for any length of time without risk of being discovered.

  6. What was described in the chapter as being essential in order to implement mandatory access controls?
     A. Tokens
     B. Certificates
     C. Labels
     D. Security classifications

  7. The CIA of security includes
     A. Confidentiality, integrity, authentication
     B. Certificates, integrity, availability
     C. Confidentiality, inspection, authentication
     D. Confidentiality, integrity, availability

  8. Security through obscurity is an approach to security that is sometimes used but that is dangerous to rely on. It attempts to do the following:
     A. Protect systems and networks by using confusing URLs to make them difficult to remember or find.
     B. Protect data by relying on attackers not being able to discover the hidden, confusing, or obscure mechanisms being used as opposed to employing any real security practices or devices.
     C. Hide data in plain sight through the use of cryptography.
     D. Make data hard to access by restricting its availability to a select group of users.

  9. The fundamental approach to security in which an object has only the necessary rights and privileges to perform its task with no additional permissions is a description of
     A. Layered security
     B. Least privilege
     C. Role-based security
     D. Kerberos

  10. Which access control technique discussed relies on a set of rules to determine whether access to an object will be granted or not?
     A. Role-based access control
     B. Object and rule instantiation access control
     C. Rule-based access control
     D. Discretionary access control

  11. The security principle that ensures that no critical function can be executed by any single individual (by dividing the function into multiple tasks that can’t all be executed by the same individual) is known as
     A. Discretionary access control
     B. Security through obscurity
     C. Separation of duties
     D. Implicit deny

  12. The ability of a subject to interact with an object describes
     A. Authentication
     B. Access
     C. Confidentiality
     D. Mutual authentication

  13. Information security places the focus of security efforts on
     A. The system hardware
     B. The software
     C. The user
     D. The data

  14. In role-based access control, which of the following is true?
     A. The user is responsible for providing both a password and a digital certificate in order to access the system or network.
     B. A set of roles that the user may perform will be assigned to each user, thus controlling what the user can do and what information he or she can access.
     C. The focus is on the confidentiality of the data the system protects and not its integrity.
     D. Authentication and nonrepudiation are the central focus.

  15. Using different types of firewalls to protect various internal subnets is an example of
     A. Layered security
     B. Security through obscurity
     C. Diversity of defense
     D. Implementing least privilege for access control

Answers


 
  1. C. Discretionary access control provides the owner of an object the opportunity to determine the access control permissions for other subjects.
  2. D. Username/password is the single most common authentication mechanism in use today.
  3. C. A retinal scan is an example of a biometric device, which falls into the category of something about you/something you are.
  4. A. The basic premise of implicit deny is that an action is allowed only if a specific rule states that it is acceptable, making A the most correct answer.
  5. B. While both C and D may indeed bear a semblance of truth, they are not the primary reasons given as benefits of rotating employees through jobs in an organization. The reasons discussed included ensuring that no single individual alone can perform security operations, plus the benefit of having more employees understand the issues related to security.
  6. C. Labels were discussed as being required for both objects and subjects in order to implement mandatory access controls. D is not the correct answer, because mandatory access controls are often used to implement various levels of security classification but security classifications are not needed in order to implement MAC.
  7. D. Don’t forget that even though authentication was described at great length in this chapter, the A in the CIA of security represents availability, which refers to the hardware and data being accessible when the user wants it.
  8. B. Answer B describes the more general definition of this flawed approach, which relies on attackers not being able to discover the mechanisms being used in the belief that if it is confusing or obscure enough, it will remain safe. The problem with this approach is that once the confusing or obscure technique is discovered, the security of the system and data can be compromised. Security must rely on more than just obscurity to be effective. A does at some level describe activity that is similar to the concept of security through obscurity, but it is not the best answer.
  9. B. This description describes least privilege. Layered security refers to using multiple layers of security (such as at the host and network layers) so that if an intruder penetrates one layer, they still will have to face additional security mechanisms before gaining access to sensitive information.
  10. C. Rule-based access control relies on a set of rules to determine whether access to an object will be granted or not.
  11. C. The separation of duties principle ensures that no critical function can be executed by any single individual.
  12. B. Access is the ability of a subject to interact with an object.
  13. D. Information security places the focus of the security efforts on the data (information).
  14. B. In role-based access controls, roles are assigned to the user. Each role will describe what the user can do and the data or information that can be accessed to accomplish that role.
  15. C. This is an example of diversity of defense. The idea is to provide different types of security and not rely too heavily on any one type of product.


CHAPTER 2
Operational Organizational Security


In this chapter, you will


 
  • Learn about the various operational aspects of security in your organization
  • Confront social engineering as a means to gain access to computers and networks and determine how your organization should deal with it
  • Identify and explain the benefits of organizational security policies
  • Describe and compare logical access control methods

To some, the solution to securing an organization’s computer systems and network is simply the implementation of various security technologies. Prevention technologies are designed to keep individuals from gaining access to systems or data they are not authorized to use; in other words, they are intended to prevent unauthorized access. A common prevention technology is the implementation of logical access controls. Although an important element of security, the implementation of any technological solution should be based upon an organizational security policy. In this chapter you will learn about various organizational and operational elements of security. Some of these, such as the establishment of security policies, standards, guidelines, and procedures, are activities that fall in the prevention category of the operational model of computer security. Others, such as the discussion of social engineering, come under the category of detection. All of these components, no matter which part of the operational model they fall under, need to be combined in a cohesive operational security program for your organization.


Policies, Standards, Guidelines, and Procedures


A security program (the total of all technology, processes, procedures, metrics, training, and personnel that are part of the organization’s approach to addressing security) should be based on an organization’s security policies, procedures, standards, and guidelines that specify what users and administrators should be doing to maintain the security of the systems and network. Collectively, these documents provide the guidance needed to determine how security will be implemented in the organization. Given this guidance, the specific technology and security mechanisms required can be planned for.

Policies are high-level, broad statements of what the organization wants to accomplish. Standards are mandatory elements regarding the implementation of a policy. Some standards can be externally driven. Government regulations for banking and financial institutions, for example, require that certain security measures be taken. Other standards may be set by the organization to meet its own security goals. Guidelines are recommendations relating to a policy. The key term in this case is recommendation—guidelines are not mandatory steps. Procedures are the step-by-step instructions on how to implement policies in the organization.

Just as the network itself constantly changes, the policies, standards, guidelines, and procedures should be included in living documents that are periodically evaluated and changed as necessary. The constant monitoring of the network and the periodic review of the relevant documents are part of the process that is the operational model. This operational process consists of four basic steps:


 
  1. Plan (adjust) for security
  2. Implement the plans
  3. Monitor the implementation
  4. Evaluate the effectiveness

In the first step, you develop the policies, procedures, and guidelines that will be implemented and design the security components that will protect your network. Once these are designed and developed, you can implement the plans. Next, you monitor to ensure that both the hardware and the software as well as the policies, procedures, and guidelines are working to secure your systems. Finally, you evaluate the effectiveness of the security measures you have in place. The evaluation step can include a vulnerability assessment (an attempt to identify and prioritize the list of vulnerabilities within a system or network) and penetration test (a method to check the security of a system by simulating an attack by a malicious individual) of your system to ensure the security is adequate. After evaluating your security posture, you begin again with step one, this time adjusting the security mechanisms you have in place, and then continue with this cyclical process.


The Security Perimeter


The discussion to this point has not mentioned the specific technology used to enforce operational and organizational security or a description of the various components that constitute the organization’s security perimeter. If the average administrator were asked to draw a diagram depicting the various components of her network, the diagram would probably look something like Figure 2-1.

This diagram includes the major components typically found in a network. A connection to the Internet generally has some sort of protection attached to it such as a

Figure 2-1 Basic diagram of an organization’s network



firewall. An intrusion detection system (IDS), also often a part of the security perimeter for the organization, can be on the inside of the firewall, or the outside, or it may in fact be on both sides. The specific location depends on the company and what it seeks to protect against (that is, insider threats or external threats). Beyond this security perimeter is the corporate LAN. Figure 2-1 is obviously a simple depiction—an actual network can have numerous subnets and extranets—but the basic components are present. Unfortunately, if this were the diagram provided by the administrator to show the organization’s basic network structure, the administrator would have missed a very important component. A more astute administrator would provide a diagram more like Figure 2-2.

This diagram includes the other important network found in every organization, the telephone network that is connected to the public switched telephone network (PSTN), otherwise known as the phone company. The organization may or may not

Figure 2-2 A more complete diagram of an organization’s network



have any authorized modems, but the savvy administrator would realize that because the potential exists for unauthorized modems, the telephone network must be included as a possible source of access for the network. When considering the policies, procedures, and guidelines needed to implement security for the organization, both networks need to be considered.

While Figure 2-2 provides a more comprehensive view of the various components that need to be protected, it is still incomplete. Most experts will agree that the biggest danger to any organization does not come from external attacks but rather from the insider—a disgruntled employee or somebody else who has physical access to the facility. Given physical access to an office, a knowledgeable attacker will quickly be able to find the information he needs to gain access to the organization’s computer systems and network. Consequently, every organization also needs security policies, procedures, and guidelines that cover physical security, and every security administrator should be concerned with these as well. While physical security (which can include such things as locks, cameras, guards and entry points, alarm systems, and physical barriers) will probably not fall under the purview of the security administrator, the operational state of the organization’s physical security measures is just as important as many of the other network-centric measures.


Logical Access Controls


Access control lists (ACLs) are as important to logical access controls as they are to the control of physical access to the organization and its resources. An ACL is simply a list of the individuals (or groups) that are granted access to a specific resource. It can also include the type of access they have (that is, what actions they can perform on or with the resource). Logical access controls refer to those mechanisms that are used to control who may gain electronic access (access to data or resources from a computer system or network as opposed to physical access to the system itself) to the organization’s computer systems and networks. Before setting the system’s access controls, you must establish the security policies that the settings will be based upon.
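
To make the idea concrete, an ACL can be pictured as a mapping from each resource to the subjects granted access and the actions each subject may perform. The short Python sketch below only illustrates this structure; the resource names, users, groups, and permissions are hypothetical and do not reflect any particular operating system's ACL format.

# Minimal sketch of an access control list (ACL): each resource maps to the
# subjects (users or groups) granted access and the actions each may perform.
# Resource names, users, and permissions here are hypothetical examples.
acl = {
    "payroll.xlsx": {"alice": {"read", "write"}, "hr_group": {"read"}},
    "public_site": {"everyone": {"read"}},
}

def is_allowed(resource, subject, action):
    """Return True if the subject is granted the requested action on the resource."""
    return action in acl.get(resource, {}).get(subject, set())

print(is_allowed("payroll.xlsx", "alice", "write"))    # True
print(is_allowed("payroll.xlsx", "hr_group", "write")) # False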


Access Control Policies


As mentioned, policies are statements of what the organization wants to accomplish. The organization needs to identify goals and intentions for many different aspects of security. Each aspect will have associated policies and procedures.


Group Policy


Operating systems such as Windows and Linux allow administrators to organize users into groups. This is used to create categories of users for which similar access policies can be established. Using groups saves the administrator time, as adding a new user will not require that he create a completely new user profile; instead the administrator would determine to which group the new user belongs and then add the user to that group. Examples of groups commonly found include administrator, user, and guest. Take care when creating groups and assigning users to them so that you do not provide more access than is absolutely required for members of that group. It would be simple to make everybody an administrator—it would cut down on the number of requests users might make of beleaguered administrators, but this is not a wise choice, as it also provides users the ability to modify the system in ways that could impact security. Establishing the correct levels of access for the various groups up front will save you time and eliminate potential problems that might be encountered later on.
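
The time savings from groups can be illustrated with a small sketch in which permissions are attached to groups rather than to individual users, so adding a new user is just a matter of assigning a group. This is a simplified illustration, not how any specific operating system stores group policy; the group names and permissions are hypothetical.

# Sketch: permissions are assigned to groups, not to individual users, so adding
# a new user only requires placing her in the right group. Group and permission
# names are hypothetical.
group_permissions = {
    "administrator": {"read", "write", "install_software", "manage_users"},
    "user":          {"read", "write"},
    "guest":         {"read"},
}
user_groups = {"bob": "user", "carol": "administrator"}

def permissions_for(user):
    return group_permissions.get(user_groups.get(user, "guest"), set())

# Adding a new hire is a single group assignment rather than a new profile:
user_groups["dave"] = "user"
print(permissions_for("dave"))  # {'read', 'write'}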


Password Policy


Since passwords are the most common authentication mechanism, it is imperative that organizations have a policy addressing them. The list of authorized users will form the basis of the ACL for the computer system or network that the passwords will help control. The password policy should address the procedures used for selecting user passwords (specifying what is considered an acceptable password in the organization in terms of the character set and length, for example), the frequency with which they must be changed, and how they will be distributed. Procedures for creating new passwords should an employee forget her old password also need to be addressed, as well as the acceptable handling of passwords (for example, they should not be shared with anybody else, they should not be written down, and so on). It might also be useful to have the policy address the issue of password cracking by administrators, in order to discover weak passwords selected by employees.

Note that the developer of the password policy and associated procedures can go overboard and create an environment that negatively impacts employee productivity and leads to poorer security, not better. If, for example, the frequency with which passwords are changed is too great, users might write them down or forget them. Neither of these is a desirable outcome, as the one makes it possible for an intruder to find a password and gain access to the system, and the other leads to too many people losing productivity as they have to wait for a new password to be created to allow them access again.



EXAM TIP A password policy is one of the most basic policies that an organization can have. Make sure you understand the basics of what constitutes a good password along with the other issues that surround password creation, expiration, sharing, and use.


Domain Password Policy


Domains are logical groups of computers that share a central directory database. The database contains information about the user accounts and security information for all resources identified within the domain. Each user within the domain is assigned her own unique account (that is, a domain is not a single account shared by multiple users), which is then assigned access to specific resources within the domain. In operating systems that provide domain capabilities, the password policy is set in the root container for the domain and will apply to all users within that domain. Setting a password policy for a domain is similar to setting other password policies in that the same critical elements need to be considered (password length, complexity, life, and so on). If a change to one of these elements is desired for a group of users, a new domain will need to be created. In a Microsoft Windows operating system that employs Active Directory, the domain password policy can be set in the Active Directory Users and Computers menu in the Administrative Tools section of the Control Panel.


Usernames and Passwords


Policies regarding selection of usernames and passwords must weigh usability versus security. At one end of the spectrum is usability, which would dictate that the username be simple and easy to remember, such as the user’s first and last name separated by a period or the user’s first initial followed by the last name. This makes it easy for the user to remember the user (account) name and makes it easy for other individuals to remember a user’s username (since the username and e-mail name are generally similar). At the same time, however, adhering to a simple policy such as this also makes it easy for a potential attacker to guess a valid account name, which can then be used in an attempt to guess a username/password combination. At the other end of the spectrum is the generation of a completely random series of characters (such as xzf258) to be assigned to a user for a username. Aliases can be used for e-mail so that the more common first name/last name format can still be used for communication with users. The advantage of this random assignment is that it will be more difficult for an attacker to guess a valid username; however, it has the disadvantage of being difficult for the user to remember.

Most operating systems now include a password generation utility that helps users select their passwords. Such utilities use parameters that affect a password’s complexity, which in turn affects how easily it can be guessed as well as how easily the user can remember it. Generally, the easier a password is to remember, the easier it will be to guess. Again, it is possible to generate completely random passwords, but these are difficult for users to remember. Restrictions on password generation can be eased so that the user can select a password that is easier to remember, but some general rules should still be followed. Passwords should contain a mix of uppercase and lowercase characters, special characters, and numbers. They should be at least eight characters in length, and they should not be related to the username.
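
As a rough illustration of how such general rules can be checked, the following Python sketch tests a candidate password against the guidelines just listed (length, mixed case, digits, special characters, and no relation to the username). It is a naive example, not a replacement for the password tools built into an operating system, and the example passwords and username are hypothetical.

import string

def meets_policy(password, username):
    # Naive check against the general rules described above; not a substitute
    # for an organization's actual password policy tooling.
    if len(password) < 8:
        return False
    if username and username.lower() in password.lower():
        return False  # password should not be related to the username
    return (any(c.islower() for c in password)
            and any(c.isupper() for c in password)
            and any(c.isdigit() for c in password)
            and any(c in string.punctuation for c in password))

print(meets_policy("Spring!2024", "jsmith"))  # True
print(meets_policy("jsmith99", "jsmith"))     # False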


Time of Day Restrictions


Some systems allow for the specification of time of day restrictions in their access control policies. This means that a user’s access to the system or specific resources can be restricted to certain times of the day and days of the week. If a user normally accesses certain resources during normal business hours, an attempt to access these resources outside this time period (either at night or on the weekend) might indicate an attacker has gained access to the account. Specifying time of day restrictions can also serve as a mechanism to enforce internal controls of critical or sensitive resources. Obviously, a drawback to enforcing time of day restrictions is that it means that a user can’t go to work outside of normal hours in order to “catch up” with work tasks. As with all security policies, usability and security must be balanced in this policy decision.
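
A time of day restriction can be thought of as a simple check of the current day and hour against an allowed window for the user, as in the hypothetical sketch below; real systems enforce this in the operating system or directory service rather than in application code, and the user and hours shown are invented examples.

from datetime import datetime

# Sketch of a time-of-day restriction check: access is allowed only during the
# window defined for the user. The user and hours are hypothetical.
access_windows = {
    # user: (allowed weekdays, start hour, end hour) -- Monday is 0
    "jsmith": ({0, 1, 2, 3, 4}, 8, 18),
}

def access_permitted(user, when=None):
    when = when or datetime.now()
    days, start, end = access_windows.get(user, (set(), 0, 0))
    return when.weekday() in days and start <= when.hour < end

print(access_permitted("jsmith", datetime(2009, 1, 5, 9, 30)))   # Monday morning: True
print(access_permitted("jsmith", datetime(2009, 1, 10, 23, 0)))  # Saturday night: False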


Account and Password Expiration


Another common restriction that can be enforced in many access control mechanisms is either (or both) an account expiration or password expiration feature. This allows administrators to specify a period of time for which a password or an account will be active. For password expiration, when the expiration date is reached, the user will generally be asked to create a new password. This means that if the password (and thus the account) has been compromised when the expiration date is reached and a new password is set, the attacker will again (hopefully) be locked out of the system. The attacker can’t change the password himself since the user would then be locked out and would contact an administrator to have the password reset, thus again locking out the attacker.

The attacker could set a new password, and then attempt to reset it to the original password. This would mean that a new expiration time would be set for the account but would keep the same password and would not lock the user out. This is one reason why a password history mechanism should be used. The history is used to keep track of previously used passwords so that they cannot be reused. An account expiration is similar, except that it is generally put in place because a specific account is intended for a specific purpose of limited duration. When an account has expired, it cannot be used unless the expiration deadline is extended.
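
The interplay between password expiration and a password history can be sketched as follows. The expiration period, history depth, and hashing shown are hypothetical simplifications (a real system would use salted, purpose-built password hashing), but the sketch shows why reuse of a recent password can be rejected.

from datetime import datetime, timedelta
import hashlib

MAX_AGE = timedelta(days=90)   # hypothetical expiration period
HISTORY_DEPTH = 5              # number of previous passwords remembered

def _digest(password):
    # Illustration only; real systems use salted, purpose-built password hashing.
    return hashlib.sha256(password.encode()).hexdigest()

account = {
    "password": _digest("OldPass!1"),
    "set_on": datetime(2008, 9, 1),
    "history": [_digest("OldPass!1")],
}

def is_expired(acct, now=None):
    return (now or datetime.now()) - acct["set_on"] > MAX_AGE

def change_password(acct, new_password, now=None):
    """Reject any password found in the recent history, then record the change."""
    if _digest(new_password) in acct["history"]:
        raise ValueError("password was used recently and cannot be reused")
    acct["password"] = _digest(new_password)
    acct["set_on"] = now or datetime.now()
    acct["history"] = (acct["history"] + [acct["password"]])[-HISTORY_DEPTH:]

print(is_expired(account, datetime(2009, 1, 2)))  # True -- more than 90 days old
change_password(account, "NewPass!2", datetime(2009, 1, 2))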


File and Print Resources


The desire for a collaborative work environment often results in file sharing on servers. In a similar manner, print resources are also often shared so that many users can access high-cost resources. In the past, the potential for security problems associated with shared resources (it was often difficult to isolate who could or could not use the resource if it was opened for sharing) had led to some security administrators simply prohibiting sharing. With some of the more current operating systems, however, sharing can be accomplished with a reasonable balance between it and security. Strict policies regarding sharing need to be established. Some files should not be shared (such as a user’s profile folder, for example), so allowing for a blanket sharing of files between users should be avoided. Instead, specific files within folders should be designated and managed through group policies. Similar care should be taken when deciding what print resources should be shared.


Logical Tokens


A token is an object that a user must have and present to the system to gain access to some resource or the system itself. Special hardware devices can be used as tokens that need to be inserted into the machine or a special reader, or that can provide some information (such as a one-time code) that must be supplied to the system to obtain access. A problem with all of these methods is that they require that the user have the physical device on hand to gain access. If the user loses the token or forgets it, she will be unable to access the resource.

Considered less secure but not suffering from the same problem is the use of logical or software tokens. These can take the form of a shared secret that only the user and the system know. The user is required to supply the secret when attempting to access the resource. As with passwords, policies should govern how logical tokens are generated, stored, and shared. With a hardware token, a user could give the device to another individual, but only one device is assigned to the user. With a software token, a user could share a token with another individual (along with any other identification information required) and that individual could in turn share it with somebody else. Once shared, there is no real way to control the dissemination of the software token.
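
One common way a one-time code can be derived from a shared secret is with a keyed hash over a time counter, which is the general idea behind the TOTP scheme standardized in RFC 6238. The Python sketch below illustrates that idea; it is not a complete or hardened implementation of the standard, and the example secret is obviously hypothetical.

import hmac, hashlib, struct, time

def one_time_code(shared_secret, interval=30, digits=6, now=None):
    """Derive a short-lived code from a shared secret using an HMAC over a time
    counter (the general idea behind TOTP); a sketch, not a full implementation."""
    counter = int((now if now is not None else time.time()) // interval)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

secret = b"correct horse battery staple"  # hypothetical shared secret
print(one_time_code(secret))              # both user and system can compute this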


Social Engineering


Social engineering is the process of convincing an authorized individual to provide confidential information or access to an unauthorized individual. Social engineering takes advantage of what continually turns out to be the weakest point in our security perimeter—the humans. Kevin Mitnick, a convicted cybercriminal turned security consultant, once stated, “Don’t rely on network safeguards and firewalls to protect your information. Look to your most vulnerable spot. You’ll usually find that vulnerability lies in your people.” In 2000, after being released from jail, Mitnick testified before Congress and spoke on several other occasions about social engineering and how effective it is. He stated that he “rarely had to resort to a technical attack” because of how easily information and access could be obtained through social engineering.

Individuals who are attempting to social engineer some piece of information generally rely on two aspects of human nature. First, most people generally want to help somebody who is requesting help. Second, people generally want to avoid confrontation. The knowledgeable social engineer might call a help desk pretending to be a new employee needing help to log on to the organization’s network. By doing so, valuable information can be obtained as to the type of system or network that is being employed. After making this call, a second call may be made that uses the information from the first call to provide background for the second call so that the next individual the attacker attempts to obtain information from will not suspect it is an unauthorized individual asking the questions. This works because people generally assume that somebody is who they claim to be, especially if they have information that would be known by the individual they claim to be.

If the pleasant approach doesn’t work, a more aggressive approach can be attempted. People will normally want to avoid unpleasant confrontations and will also not want to get into trouble with their superiors. An attacker, knowing this, may attempt to obtain information by threatening to go to the individual’s supervisor or by claiming that he is working for somebody who is high up in the organization’s management structure. Because employees want to avoid both a confrontation and a possible reprimand, they might provide the information requested even though they realize that it is against the organization’s policies or procedures.

The goal of social engineering is to gradually obtain the pieces of information necessary to make it to the next step. This is done repeatedly until the ultimate goal is reached. If social engineering is such an effective means of gaining unauthorized access to data and information, how can it be stopped? The most effective means is through the training and education of users, administrators, and security personnel. All employees should be instructed in the techniques that attackers might use and trained to recognize when a social engineering attack is being attempted. One important aspect of this training is for employees to recognize the type of information that should be protected and also how seemingly unimportant information can be combined with other pieces of information to potentially divulge sensitive information. This is known as data aggregation.

In addition to the direct approach to social engineering, attackers can use other indirect means to obtain the information they are seeking. These include phishing, vishing, shoulder surfing, and dumpster diving and are discussed in the following sections. Again, the first defense against any of these methods to gather information to be used in later attacks is a strong user education and awareness training program.



EXAM TIP Social engineering attacks can come in many different forms. Taken as a whole, they are the most common attacks facing users. Be sure to understand the differences among the different types of social engineering attacks.


Phishing


Phishing (pronounced “fishing”) is a type of social engineering in which an individual attempts to obtain sensitive information from a user by masquerading as a trusted entity in an e-mail or instant message sent to the user. The types of information that the attacker attempts to obtain include usernames, passwords, credit card numbers, or details on the user’s bank account. The message sent often encourages the user to go to a web site that appears to be for a reputable entity such as PayPal or eBay, both of which have frequently been used in phishing attempts. The web site the user actually visits will not be owned by the reputable organization, however, and will ask the user to supply information that can be used in a later attack. Often the message sent to the user will tell a story about the user’s account having been compromised, and for security purposes the user is encouraged to enter account information to verify the details.

The e-mails and web sites generated by the attackers often appear to be legitimate. A few clues, however, can tip off the user that the e-mail might not be what it claims to be. The e-mail may contain grammatical and typographical errors, for example. Organizations that are used in these phishing attempts (such as eBay and PayPal) are careful about their images and will not send a security-related e-mail to users containing obvious errors. In addition, almost unanimously, organizations tell their users that they will never ask for sensitive information (such as a password or account number) via an e-mail. Despite the increasing media coverage concerning phishing attempts, some Internet users still fall for them, which results in attackers continuing to use this method to gain the information they are seeking.
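
Some of these clues can even be checked mechanically, at least in a crude way. The following sketch flags messages that combine urgency or credential requests with links that do not match the claimed sender; the phrase list and example message are hypothetical, and real mail filters use far more sophisticated analysis.

# Naive sketch of flagging messages that show the classic phishing clues discussed
# above: urgency, requests for credentials, and links that do not match the
# claimed sender. Real mail filters are far more sophisticated.
SUSPICIOUS_PHRASES = ["verify your account", "password", "urgent", "account suspended"]

def looks_like_phishing(subject, body, claimed_sender_domain, link_domains):
    hits = sum(p in (subject + " " + body).lower() for p in SUSPICIOUS_PHRASES)
    mismatched_links = any(d != claimed_sender_domain for d in link_domains)
    return hits >= 2 or (hits >= 1 and mismatched_links)

print(looks_like_phishing(
    "Urgent: verify your account",
    "Your account was suspended. Enter your password here.",
    "paypal.com",
    ["paypa1-security.example"],
))  # True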


Vishing


Vishing is a variation of phishing that uses voice communication technology to obtain the information the attacker is seeking. Vishing takes advantage of the trust that most people place in the telephone network. Users are unaware that attackers can spoof calls from legitimate entities using voice over IP (VoIP) technology. Voice messaging can also be compromised and used in these attempts. Generally, the attackers are hoping to obtain credit card numbers or other information that can be used in identity theft. The user may receive an e-mail asking him to call a number that is answered by a potentially compromised voice message system. Users may also receive a recorded message that appears to come from a legitimate entity. In both cases, the user will be encouraged to respond quickly and provide the sensitive information so that access to an account is not blocked. If a user ever receives a message that claims to be from a reputable entity and is asking for sensitive information, he should not provide it but instead use the Internet or examine a legitimate account statement to find a phone number that can be used to contact the entity. The user can then verify that the message received was legitimate or report the vishing attempt.


Shoulder Surfing


Shoulder surfing does not involve direct contact with the user, but instead involves the attacker directly observing the target entering sensitive information on a form, keypad, or keyboard. The attacker may simply look over the shoulder of the user at work, or the attacker can set up a camera or use binoculars to view users entering sensitive data. The attacker can attempt to obtain information such as a PIN at an automated teller machine, an access control entry code at a secure gate or door, or calling card or credit card numbers. Some locations now use a small shield to surround a keypad so that it is difficult to observe somebody entering information. More sophisticated systems can actually scramble the location of the numbers so that the top row at one time includes the numbers 1, 2, and 3 and the next time 4, 8, and 0. While this makes it a bit slower for the user to enter information, it does mean that a person attempting to observe what numbers are pressed will not be able to press the same buttons/pattern, since the locations of the numbers have changed.

Although methods such as these can help make shoulder surfing more difficult, the best defense is for users to be aware of their surroundings and to not allow individuals to get into a position from which they can observe what the user is entering. A related security comment can be made at this point: It should now be obvious why a person should not use the same PIN for all of their different accounts, gate codes, and so on, since an attacker who learns the PIN for one could then use it for all of the other places requiring a PIN that was also generated by the user.


Dumpster Diving


Dumpster diving is not a uniquely computer security-related activity. It refers to the activity of sifting through an individual’s or organization’s trash for things that the dumpster diver might find valuable. In the nonsecurity realm, this can be anything from empty aluminum cans to articles of clothing or discarded household items. From a computer security standpoint, the diver is looking for information that can be obtained from listings or printouts, manuals, receipts, or even yellow sticky notes. The information can include credit card or bank account numbers, user IDs or passwords, details about the type of software or hardware platforms that are being used, or even company sensitive information. In most locations, trash is no longer considered private property after it has been discarded (and even where dumpster diving is illegal, little enforcement occurs). An organization should have policies about discarding materials. Sensitive information should be shredded and the organization should consider securing the trash receptacle so that individuals can’t forage through it. People should also consider shredding personal or sensitive information that they wish to discard in their own trash. A reasonable quality shredder is inexpensive and well worth the price when compared with the potential loss that could occur as a result of identity theft.


Hoaxes


At first glance, it might seem that a hoax related to security would be considered a nuisance and not a real security issue. This might be the case for some hoaxes, especially those of the urban legend type, but the reality of the situation is that a hoax can be very damaging if it causes users to take some sort of action that weakens security. One real hoax, for example, told the story of a new, highly destructive piece of malicious software. It instructed users to check for the existence of a certain file and to delete it if the file was found. In reality, the file mentioned was an important file that was used by the operating system, and deleting it caused problems the next time the system was booted. The damage caused by users modifying security settings can be serious. As with other forms of social engineering, training and awareness are the best and first line of defense for users. Users should be trained to be suspicious of unusual e-mails and stories and should know whom to contact in the organization to verify the validity of such messages if they are received.


Organizational Policies and Procedures


Policies are high-level statements created by management that lay out the organization’s positions on particular issues. Policies are mandatory but are not specific in their details. Policies are focused on the result, not the methods for achieving that result. Procedures are generally step-by-step instructions that prescribe exactly how employees are expected to act in a given situation or to accomplish a specific task. Although standard policies can be described in general terms that will be applicable to all organizations, standards and procedures are often organization-specific and driven by specific organizational policies.

Regarding security, every organization should have several common policies in place in addition to those already discussed relative to access control methods. These policies include acceptable use policies, due care, separation of duties, and policies governing the protection of personally identifiable information (PII), and they are addressed in the following sections. Other important policy-related issues covered here include privacy, service level agreements, human resources policies, codes of ethics, and policies governing incident response.


Security Policies


In keeping with the high-level nature of policies, the security policy is a high-level statement produced by senior management that outlines what security means to the organization and the organization’s goals for security. The main security policy can then be broken down into additional policies that cover specific topics. Statements such as “this organization will exercise the principle of least access in its handling of client information” would be an example of a security policy. The security policy can also describe how security is to be handled from an organizational point of view (such as describing which office and corporate officer or manager oversees the organization’s security program).

In addition to policies related to access control, the organization’s security policy should include the specific policies described in the next sections. All policies should be reviewed on a regular basis and updated as needed. Generally, policies should be updated less frequently than the procedures that implement them, since the high-level goals will not change as often as the environment in which they must be implemented. All policies should be reviewed by the organization’s legal counsel, and a plan should be outlined describing how the organization will ensure that employees will be made aware of the policies. Policies can also be made stronger by including references to the authority who made the policy (whether this policy comes from the CEO or is a department-level policy) and also refer to any laws or regulations that are applicable to the specific policy and environment.


Change Management


The purpose of change management is to ensure proper procedures are followed when modifications to the IT infrastructure are made. These modifications can be prompted by a number of different reasons including new legislation, updated versions of software or hardware, implementation of new software or hardware, or improvements to the infrastructure. The term “management” implies that this process should be controlled in some systematic way, and that is indeed the purpose. Changes to the infrastructure can have a detrimental impact on operations. New versions of operating systems or application software can be incompatible with other software or hardware the organization is using. Without a process to manage the change, an organization can suddenly find itself unable to conduct business. A change management process should include various stages including a method to request a change to the infrastructure, a review and approval process for the request, an examination of the consequences of the change, resolution (or mitigation) of any detrimental effects the change might incur, implementation of the change, and documentation of the process as it relates to the change.
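
The stages just described can be tracked as a simple record that moves through an ordered list of states, as in the hypothetical sketch below; in practice, most organizations use a dedicated change management or ticketing system rather than ad hoc code, and the stage names and example request are invented.

# Sketch of tracking a change request through the stages described above.
# Field names and the example request are hypothetical.
STAGES = ["requested", "reviewed", "approved", "impact_assessed",
          "mitigation_planned", "implemented", "documented"]

def new_change_request(summary, requester):
    return {"summary": summary, "requester": requester, "stage": "requested", "log": []}

def advance(change, note=""):
    # Move the request to the next stage and record a note for the audit trail.
    i = STAGES.index(change["stage"])
    if i + 1 < len(STAGES):
        change["stage"] = STAGES[i + 1]
        change["log"].append((change["stage"], note))
    return change

req = new_change_request("Upgrade mail server OS", "jsmith")
advance(req, "Reviewed by change advisory board")
print(req["stage"])  # reviewed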


Classification of Information


A key component of IT security is the protection of the information processed and stored on the computer systems and network. Organizations deal with many different types of information, and they need to recognize that not all information is of equal importance or sensitivity. This prompts a classification of information into various categories, each with its own requirements for its handling. Factors that affect the classification of specific information include its value to the organization (what will be the impact to the organization if it loses this information?), its age, and laws or regulations that govern its protection. The most widely known classification of information is that implemented by the government and military, which classifies information into categories such as confidential, secret, and top secret. Businesses have similar desires to protect information but can use categories such as publicly releasable, proprietary, company confidential, or for internal use only. Each policy for a classification of information should describe how it should be protected, who may have access to it, who has the authority to release it and how, and how it should be destroyed. All employees of the organization should be trained in the procedures for handling the information that they are authorized to access. Discretionary and mandatory access control techniques use classifications as a method to identify who may have access to what resources.
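
One way to picture a classification policy is as a table pairing each category with its handling requirements, as in the sketch below. The categories echo the business labels mentioned above, but the specific access and destruction rules are hypothetical examples.

# Sketch pairing each information classification with its handling requirements.
# The access and destruction rules shown are hypothetical examples.
handling_rules = {
    "publicly releasable":  {"access": "anyone",            "destruction": "none required"},
    "internal use only":    {"access": "all employees",     "destruction": "recycle bin"},
    "company confidential": {"access": "need to know",      "destruction": "shred"},
    "proprietary":          {"access": "named individuals", "destruction": "shred and log"},
}

def handling_for(label):
    # Default to the strictest known category if the label is unrecognized.
    return handling_rules.get(label.lower(), handling_rules["proprietary"])

print(handling_for("Company Confidential")["destruction"])  # shred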


Acceptable Use


An acceptable use policy (AUP) outlines what the organization considers to be the appropriate use of company resources, such as computer systems, e-mail, Internet, and networks. Organizations should be concerned with the personal uses of organizational assets that do not benefit the company.

The goal of the policy is to ensure employee productivity while limiting organizational liability through inappropriate use of the organization’s assets. The policy should clearly delineate what activities are not allowed. Issues such as the use of resources to conduct personal business, installation of hardware or software, remote access to systems and networks, the copying of company-owned software, and the responsibility of users to protect company assets, including data, software, and hardware should be addressed. Statements regarding possible penalties for ignoring any of the policies (such as termination) should also be included.

Related to appropriate use of the organization’s computer systems and networks by employees is the appropriate use by the organization. The most important of such issues is whether the organization will consider it appropriate to monitor the employee’s use of the systems and network. If monitoring is considered appropriate, the organization should include a statement to this effect in the banner that appears at login. This repeatedly warns employees, and possible intruders, that their actions are subject to monitoring and that any misuse of the system will not be tolerated. Should the organization need to use any information gathered during monitoring in a civil or criminal case, the issue of whether the employee had an expectation of privacy, or whether it was even legal for the organization to be monitoring, is simplified if the organization can point to a statement that is always displayed, stating that use of the system constitutes consent to monitoring. Before any monitoring is conducted, or the actual wording on the warning message is created, the organization’s legal counsel should be consulted to determine the appropriate way to address this issue in the particular location.



EXAM TIP A second very common and also very important policy is the acceptable use policy. Make sure you understand how this policy outlines what is considered acceptable behavior for a computer system’s users. This policy often goes hand-in-hand with an organization’s Internet usage policy.


Internet Usage Policy


In today’s highly connected environment, employee access to the Internet is of particular concern. The goal for the Internet usage policy is to ensure maximum employee productivity and to limit potential liability to the organization from inappropriate use of the Internet in the workplace. The Internet provides a tremendous temptation for employees to waste hours as they surf the Web for the scores of the important games from the previous night, conduct quick online stock transactions, or read the review of the latest blockbuster movie everyone is talking about. Obviously, every minute they spend conducting this sort of activity is time they are not productively engaged in the organization’s business and their jobs. In addition, allowing employees to visit sites that may be considered offensive to others (such as pornographic or hate sites) can open the company to accusations of condoning a hostile work environment and result in legal liability.

The Internet usage policy needs to address what sites employees are allowed to visit and what sites they are not to visit. If the company allows them to surf the Web during non-work hours, the policy needs to clearly spell out the acceptable parameters, in terms of when they are allowed to do this and what sites they are still prohibited from visiting (such as potentially offensive sites). The policy should also describe under what circumstances an employee would be allowed to post something from the organization’s network on the Web (on a blog, for example). A necessary addition to this policy would be the procedure for an employee to follow to obtain permission to post the object or message.


E-Mail Usage Policy


Related to the Internet usage policy is the e-mail usage policy, which deals with what the company will allow employees to send in terms of e-mail. This policy should spell out whether non-work e-mail traffic is allowed at all or is at least severely restricted. It needs to cover the type of message that would be considered inappropriate to send to other employees (for example, no offensive language, no sex-related or ethnic jokes, no harassment, and so on). The policy should also specify any disclaimers that must be attached to an employee’s message sent to an individual outside the company.


Due Care and Due Diligence


Due care and due diligence are terms used in the legal and business community to address issues where one party’s actions might have caused loss or injury to another. Basically, the law recognizes the responsibility of an individual or organization to act reasonably relative to another, with diligence being the degree of care and caution exercised. Reasonable precautions need to be taken that indicate that the organization is being responsible. In terms of security, it is expected that organizations will take reasonable precautions to protect the information they maintain on other individuals. Should a person suffer a loss as a result of negligence on the part of an organization in terms of its security, a legal suit can be brought against the organization.

The standard applied—reasonableness—is extremely subjective and will often be determined by a jury. The organization will need to show how it had taken reasonable precautions to protect the information, and despite these precautions, an unforeseen security event occurred that caused the injury to the other party. Since this is so subjective, it is hard to describe what would be considered reasonable, but many sectors have “security best practices” for their industry, which provides a basis for organizations in that sector to start from. If the organization decides not to follow any of the best practices accepted by the industry, it needs to be prepared to justify its reasons in court should an incident occur. If the sector the organization is in has regulatory requirements, explanations on why the mandated security practices were not followed will be much more difficult (and possibly impossible) to justify.

Another element that can help establish due care from a security standpoint is developing and implementing the security policies discussed in this chapter. As the policies outlined become more generally accepted, the level of diligence and care that an organization will be expected to maintain will increase.


Due Process


Due process is concerned with guaranteeing fundamental fairness, justice, and liberty in relation to an individual’s legal rights. In the United States, due process is concerned with the guarantee of an individual’s rights as outlined by the Constitution and Bill of Rights. Procedural due process is based on the concept of what is “fair.” Also of interest is the recognition by courts of a series of rights that are not explicitly specified by the Constitution but that the courts have decided are implicit in the concepts embodied by the Constitution. An example of this is an individual’s right to privacy. From an organization’s point of view, due process may come into play during an administrative action that adversely affects an employee. Before an employee is terminated, for example, were all of the employee’s rights protected? An actual example pertains to the rights of privacy regarding employees’ e-mail messages. As the number of cases involving employers examining employee e-mails grows, case law is established and the courts eventually settle on what rights an employee can expect. The best thing an employer can do if faced with this sort of situation is to work closely with HR staff to ensure that appropriate policies are followed and that those policies are in keeping with current laws and regulations.


Separation of Duties


Separation of duties is a principle employed in many organizations to ensure that no single individual has the ability to conduct transactions alone. This means that the level of trust in any one individual is lessened, and the ability for any individual to cause catastrophic damage to the organization is also lessened. An example might be an organization in which one person has the ability to order equipment, but another individual makes the payment. An individual who wants to make an unauthorized purchase for his own personal gain would have to convince another person to go along with the transaction.

Separating duties as a security tool is a good practice, but it is possible to go overboard and break up transactions into too many pieces or require too much oversight. This results in inefficiency and can actually be less secure, since individuals may not scrutinize transactions as thoroughly because they know others will also be reviewing them. The temptation is to hurry something along and assume that somebody else will examine or has examined it.



EXAM TIP Another aspect of the separation of duties principle is that it spreads responsibilities out over an organization so no single individual becomes the indispensable individual with all of the “keys to the kingdom” or unique knowledge about how to make everything work. If enough tasks have been distributed, assigning a primary and a backup person for each task will ensure that the loss of any one individual will not have a disastrous impact on the organization.


Need to Know and Least Privilege


Two other common security principles are that of need to know and least privilege. The guiding factor here is that each individual in the organization is supplied with only the absolute minimum amount of information and privileges she needs to perform her work tasks. To obtain access to any piece of information, the individual must have a justified need to know. In addition, she will be granted only the bare minimum number of privileges that are needed to perform her job.

A policy spelling out these two principles as guiding philosophies for the organization should be created. The policy should also address who in the organization can grant access to information or may assign privileges to employees.


Disposal and Destruction


Many potential intruders have learned the value of dumpster diving. Not only should an organization be concerned with paper trash and discarded objects, but it must also be concerned with the information stored on discarded objects such as computers. Several government organizations have been embarrassed when old computers sold to salvagers proved to contain sensitive documents on their hard drives. It is critical for every organization to have a strong disposal and destruction policy and related procedures.

Important papers should be shredded, and important in this case means anything that might be useful to a potential intruder. It is amazing what intruders can do with what appears to be innocent pieces of information.

Magnetic storage media discarded in the trash (such as disks or tapes) or sold for salvage should have all files deleted, and then the media should be overwritten at least three times with all 1s, all 0s, and then random characters. Commercial products are available to destroy files using this process. It is not sufficient simply to delete all files and leave it at that, since the deletion process affects only the pointers to where the files are stored and doesn’t actually get rid of all of the bits in the file. This is why it is possible to “undelete” files and recover them after they have been deleted.
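
The three-pass overwrite described above can be sketched for a single file as follows. This is only an illustration; real media sanitization uses purpose-built tools that address the entire device (file-level overwrites can be defeated by how the file system or drive remaps data), and the file name in the usage comment is hypothetical.

import os

def overwrite_file(path, passes=(b"\xff", b"\x00", None)):
    """Overwrite a file in place with all 1s, all 0s, and then random bytes,
    mirroring the three-pass approach described above. A sketch for a single
    file only; purpose-built wiping tools should be used for real disposal."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for fill in passes:
            f.seek(0)
            f.write(os.urandom(size) if fill is None else fill * size)
            f.flush()
            os.fsync(f.fileno())
    os.remove(path)

# overwrite_file("old_customer_list.csv")  # hypothetical file name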

A safer method for destroying files from a storage device is to destroy the data magnetically using a strong magnetic field to degauss the media. This effectively destroys all data on the media. Several commercial degaussers can be purchased for this purpose. Another method that can be used on hard drives is to use a file on them (the sort of file you’d find in a hardware store) and actually file off the magnetic material from the surface of the platter. Shredding floppy media is normally sufficient, but simply cutting a floppy into a few pieces is not enough—data has been successfully recovered from floppies that were cut into only a couple of pieces. CDs and DVDs also need to be disposed of appropriately. Many paper shredders now have the ability to shred these forms of storage media. In some highly secure environments, the only acceptable method of disposing of hard drives and other storage devices is the actual physical destruction of the devices.


Privacy


Customers place an enormous amount of trust in organizations to which they provide personal information. These customers expect their information to be kept secure so that unauthorized individuals will not gain access to it and so that authorized users will not use the information in unintended ways. Organizations should have a privacy policy that explains what their guiding principles will be in guarding personal data to which they are given access. In many locations, customers have a legal right to expect that their information is kept private, and organizations that violate this trust may find themselves involved in a lawsuit. In certain sectors, such as health care, federal regulations have been created that prescribe stringent security controls on private information.

It is a general practice in most organizations to have a policy that describes explicitly how information provided to the organization will be used (for example, it will not be sold to other organizations). Watchdog organizations monitor the use of individual information by organizations, and businesses can subscribe to services that will vouch for the organization to consumers, stating that the company has agreed to protect and keep private any information supplied to it. The organization is then granted permission to display a seal or certification on its web site where customers can see it. Organizations that misuse the information they promised to protect will find themselves subject to penalties from the watchdog organization.

A special category of private information that is becoming increasingly important today is personally identifiable information (PII). This category of information includes any data that can be used to uniquely identify an individual. This would include an individual’s name, address, driver’s license number, and other details. With the proliferation of e-commerce on the Internet, this information is used extensively and its protection has become increasingly important. You would not have to look far to find reports in the media of data compromises that have resulted in the loss of information that has led to issues such as identity theft. An organization that collects PII on its employees and customers must make sure that it takes all necessary measures to protect the data from compromise.


Service Level Agreements


Service level agreements (SLAs) are contractual agreements between entities describing specified levels of service that the servicing entity agrees to guarantee for the customer. These agreements clearly lay out expectations in terms of the service provided and support expected, and they also generally include penalties should the described level of service or support not be provided. An organization contracting with a service provider should remember to include in the agreement a section describing the service provider’s responsibility in terms of business continuity and disaster recovery. The provider’s backup plans and processes for restoring lost data should also be clearly described.


Human Resources Policies


It has been said that the weakest links in the security chain are the humans. Consequently, it is important for organizations to have policies in place relative to their employees. Policies that relate to the hiring of individuals are of primary importance. The organization needs to make sure that it hires individuals who can be trusted with the organization’s data and that of its clients. Once employees are hired, they should be kept from slipping into the category of “disgruntled employee.” Finally, policies must be developed to address the inevitable point in the future when an employee leaves the organization—either on his own or with the “encouragement” of the organization itself. Security issues must be considered at each of these points.


Employee Hiring and Promotions


It is becoming common for organizations to run background checks on prospective employees and check the references they supply. Drug tests, checks for any past criminal activity, claimed educational backgrounds, and reported work history are all frequently checked today. For highly sensitive environments, security background checks can also be required. Your policies should be designed to ensure that your organization hires the most capable and trustworthy employees.

After an individual has been hired, your organization needs to minimize the risk that the employee will ignore company rules that could affect security. Periodic reviews by supervisory personnel, additional drug checks, and monitoring of activity during work may all be considered by the organization. If the organization chooses to implement any of these reviews, this must be specified in the organization’s policies, and prospective employees should be made aware of these policies before being hired. What an organization can do in terms of monitoring and requiring drug tests, for example, can be severely restricted if not spelled out in advance as terms of employment. New hires should be made aware of all pertinent policies, especially those applying to security, and documents should be signed by them indicating that they have read and understood them.

Occasionally an employee’s status will change within the company. If the change can be construed as a negative personnel action (such as a demotion), supervisors should be alerted to watch for changes in behavior that might indicate unauthorized activity is being contemplated or conducted. It is likely that the employee will be upset, and whether he acts on this to the detriment of the company is something that needs to be guarded against. In the case of a demotion, the individual may also lose certain privileges or access rights, and these changes should be made quickly so as to lessen the likelihood that the employee will destroy previously accessible data if he becomes disgruntled and decides to take revenge on the organization. On the other hand, if the employee is promoted, privileges may still change, but the need to make the change to access privileges may not be as urgent, though it should still be accomplished as quickly as possible. If the move is a lateral one, changes may also need to take place, and again they should be accomplished as quickly as possible. The organization’s goals in terms of making changes to access privileges should be clearly spelled out in its policies.


Retirement, Separation, or Termination of an Employee


An employee leaving an organization can be either a positive or a negative action. Employees who are retiring by their own choice may announce their planned retirement weeks or even months in advance. Limiting their access to sensitive documents the moment they announce their intention may be the safest thing to do, but it might not be necessary. Each situation should be evaluated individually. Should the situation be a forced retirement, the organization must determine the risk to its data if the employee becomes disgruntled as a result of the action. In this situation, the wisest choice might be to cut off their access quickly and provide them with some additional vacation time. This might seem like an expensive proposition, but the danger to the company of having a disgruntled employee can justify it. Again, each case should be evaluated individually.

When an employee decides to leave a company, generally as a result of a new job offer, continued access to sensitive information should be carefully considered. If the employee is leaving as a result of hard feelings for the company, it might be the wise choice to quickly revoke her access privileges. If she is leaving as a result of a better job offer, you may decide to allow her to gracefully transfer her projects to other employees, but the decision should be considered very carefully, especially if the new company is a competitor.

If the employee is leaving the organization because she is being terminated, you should plan on her becoming disgruntled. While it may not seem the friendliest thing to do, an employee in this situation should immediately have her access privileges to sensitive information and facilities revoked. It is better to give somebody several weeks of paid vacation rather than have a disgruntled employee trash sensitive files to which she has access. Combinations and access codes should also be changed quickly once the employee has been informed of her termination. Access cards, keys, and badges should be collected; the employee should be escorted to her desk and watched as she packs personal belongings; and then she should be escorted from the building.

No matter what the situation, the organization should have policies that describe the intended goals, and procedures should detail the process to be followed for each of the described situations.



EXAM TIP It is not uncommon for organizations to neglect having a policy that covers the removal of an individual’s computer access upon termination. The policy should also include the procedures to reclaim and “clean” a terminated employee’s computer system and accounts.


Mandatory Vacations


Organizations have provided vacation time for their employees for many years. Few, however, force employees to take this time if they don’t want to. Some employees are given a “use it or lose it” choice: if they do not take all of their vacation time, they will lose at least a portion of it. Many arguments can be made as to the benefit of taking time off, but more importantly from a security standpoint, an employee who never takes time off is a potential indicator of nefarious activity. Employees who never take any vacation time could be involved in activity such as fraud or embezzlement and might be afraid that if they leave on vacation, the organization would discover their illicit activities. As a result, requiring employees to use their vacation time through a policy of mandatory vacations can be a security protection mechanism.


Code of Ethics


Numerous professional organizations have established codes of ethics for their members. Each of these describes the expected behavior of its members from a high-level standpoint. Organizations can adopt this idea as well. For organizations, a code of ethics can set the tone for how employees will be expected to act and to conduct business. The code should demand honesty from employees and should require that they perform all activities in a professional manner. The code could also address principles of privacy and confidentiality and state how employees should treat client and organizational data. Conflicts of interest can often cause problems, so these, too, can be covered in the code of ethics.

By outlining a code of ethics, the organization can encourage an environment that is conducive to integrity and high ethical standards. For additional ideas on possible codes of ethics, check professional organizations such as the Institute of Electrical and Electronics Engineers (IEEE), the Association for Computing Machinery (ACM), or the Information Systems Security Association (ISSA).


Chapter Review


In this chapter, the organizational aspects of computer security were reviewed, along with the role that policies, procedures, standards, and guidelines play in it. Taken together, these documents outline the security plan for the organization. Various factors that affect the security of the organization were discussed, including logical access controls and organizational security policies. Social engineering was discussed, along with both the direct and indirect methods used. The best defense against all social engineering attacks consists of an active training and awareness program for employees.


Questions


To further help you prepare for the Security+ exam, and to test your level of preparedness, answer the following questions and then check your answers against the list of correct answers at the end of the chapter.


 
  1. Which type of social engineering attack utilizes voice messaging to conduct the attack?
     A. Phishing
     B. War dialing
     C. Vishing
     D. War driving
  2. Social engineering attacks work well because the individual who is the target of the attack
     A. Is often not very intelligent and can't recognize that a social engineering attempt is being made.
     B. Often either genuinely wants to help or is trying to avoid a confrontation, depending on the attacker's specific tack.
     C. Is new to the organization and can't tell that the story he is being fed is bogus.
     D. Knows the attacker.
  3. From a security standpoint, why should an organization consider a policy of mandatory vacations?
     A. To ensure that employees are not involved in illicit activity that they are attempting to hide.
     B. Because employees who are tired are more prone to making errors.
     C. To provide an opportunity for security personnel to go through their desks and computer systems.
     D. To keep from having lawsuits filed against the organization for adverse working conditions.
  4. Select all of the following that are examples of personally identifiable information:
     A. An individual's name
     B. A national identification number
     C. A license plate number
     D. A telephone number
     E. A street address
  5. A hoax can still be a security concern because
     A. It may identify a vulnerability that others can then decide to use in an attack.
     B. It shows that an attacker has the contact information for an individual who might be used in a later attack.
     C. It can result in a user performing some action that could lead to a compromise or that might adversely affect the system or network.
     D. A hoax is never a security concern, which is why it is called a hoax.
  6. How should CDs and DVDs be disposed of?
     A. By shredding using a paper shredder designed also to shred CDs and DVDs.
     B. By using a commercial grade degausser.
     C. By overwriting the disk with 0s, then 1s, and then a random character.
     D. There is no approved way of disposing of this type of media, so they must be archived in a secure facility.
  7. What type of attack consists of looking through an individual's or organization's trash for sensitive information?
     A. Phishing
     B. Vishing
     C. Shoulder surfing
     D. Dumpster diving
  8. What type of attack can involve an attacker setting up a camera to record the entries individuals make on keypads used for access control?
     A. Phishing
     B. Shoulder surfing
     C. Dumpster diving
     D. Vishing
  9. Which of the following should be included in a password policy?
     A. An explanation of how complex the password should be (i.e., what types of characters a password should be made up of)
     B. The length of time the password will be valid before it expires
     C. A description of how passwords should be distributed and protected
     D. All of the above
  10. What is the best method of preventing successful phishing attacks?
     A. Firewalls that can spot and eliminate the phishing e-mails.
     B. Blocking sites where phishing originates.
     C. A viable user training and awareness program.
     D. There is no way to prevent successful phishing attacks.
  11. What type of attack uses e-mails with a convincing story to encourage users to provide account or other sensitive information?
     A. Vishing
     B. Shoulder surfing
     C. Dumpster diving
     D. Phishing
  12. The reason for providing a group access control policy is
     A. It provides a mechanism for individual users to police the other members of the group.
     B. It provides an easy mechanism to identify common user restrictions for members of the group. This means that individual profiles for each user don't have to be created; instead, each user is identified as a member of the group with its associated group profile/policies.
     C. It is the only way to identify individual user access restrictions.
     D. It makes it easier for abnormal behaviors to be identified, as a group norm can be established.
  13. Which of the following is a high-level, broad statement of what the organization wants to accomplish?
     A. Policy
     B. Procedure
     C. Guideline
     D. Standard

Answers


 
  1. C. Vishing is basically a variation of phishing that uses voice communication technology to obtain the information the attacker is seeking. Vishing takes advantage of the trust that most people place in the telephone network. The users are unaware that using Voice over IP (VoIP) technology, attackers can spoof calls from legitimate entities. Voice messaging can be compromised and used in these attempts.
  2. B. Social engineering works because people generally truly want to help an individual asking for assistance or because they are trying to avoid a confrontation. They also work because people generally want to believe that the individual really is who he claims to be, even if that's not actually the case. The target's intelligence isn't an important factor; anybody can fall prey to an adept social engineer. Being new to an organization can certainly make it easier for an attacker to convince a target that he is entitled to the information requested, but it is not a requirement. Long-time employees can just as easily provide sensitive information to a talented social engineer. The target and attacker generally do not know each other in a social engineering attack, so D is not a good answer.
  3. A. A frequent characteristic of employees who are involved in illicit activities is their reluctance to take a vacation. A prime security reason to require mandatory vacations is to discourage illicit activities in which employees are engaged.
  4. A, B, C, D, E. All of these are examples of personally identifiable information. Any information that can be used to identify an individual uniquely falls into this category.
  5. C. A hoax can cause a user to perform some action, such as deleting a file that the operating system needs. Because of this, hoaxes can be considered legitimate security concerns.
  6. A. Shredders that are designed to destroy CDs and DVDs are common and inexpensive. A degausser is designed for magnetic media, not optical. Writing over with 0s, 1s, and a random character is a method that can be used for other magnetic media but not CDs or DVDs.
  7. D. This is a description of dumpster diving. From a security standpoint, you should be concerned with an attacker being able to locate information that can help in an attack on the organization. From an individual perspective, you should be concerned about the attacker obtaining information such as bank account or credit card numbers.
  8. B. This is a description of a shoulder surfing method. Other methods include simply looking over a person's shoulder as she enters a code or using binoculars to watch from a distance.
  9. D. All three of these were mentioned as part of what a password policy should include.
  10. C. While research is being conducted to support spotting and eliminating phishing e-mails, no effective method is currently available to do this. It may be possible to block some sites that are known to be hostile, but again this is not effective at this time since an e-mail could come from anywhere and its address can be spoofed anyway. There might be some truth to the statement (D) that there is no way to prevent successful phishing attacks, because users continue to fall for them. The best way to prevent this is an active and viable user training and awareness program.
  11. D. This is a description of phishing, which is a type of social engineering attack, as are the other options. Vishing employs the use of the telephone network. Shoulder surfing involves the attacker attempting to observe a user entering sensitive information on a form, keypad, or keyboard. Dumpster diving involves the attacker searching through the trash of an organization or individual to find useful and sensitive information.
  12. B. Groups and domains provide a mechanism to organize users in a logical way. Individuals with similar access restrictions can be placed within the same group or domain. This greatly eases the process of account creation for new employees.
  13. A. This is the definition of a policy. Procedures are the step-by-step instructions on how to implement policies in an organization.


CHAPTER 3
Legal Issues, Privacy, and Ethics


In this chapter, you will


 
  • Learn about the laws and rules concerning importing and exporting encryption software
  • Know the laws that govern computer access and trespass
  • Understand the laws that govern encryption and digital rights management
  • Learn about the laws that govern digital signatures
  • Learn about the laws that govern privacy in various industries with relation to computer security
  • Explore ethical issues associated with information security

Computer security is no different from any other subject in our society; as it changes our lives, laws are enacted to enable desired behaviors and prohibit undesired behaviors. The one substantial difference between this aspect of our society and others is that the speed of advancement in the information systems world as driven by business, computer network connectivity, and the Internet is much greater than in the legal system of compromise and law-making. In some cases, laws have been overly restrictive, limiting business options, such as in the area of importing and exporting encryption technology. In other cases, legislation has been slow in coming and this fact has stymied business initiatives, such as in digital signatures. And in some areas, it has been both too fast and too slow, as in the case of privacy laws. One thing is certain: you will never satisfy everyone with a law, but it does delineate the rules of the game.

The cyber-law environment has not been fully defined by the courts. Laws have been enacted, but until they have been fully tested and explored by cases in court, the exact limits are somewhat unknown. This makes some aspects of interpretation more challenging, but the vast majority of the legal environment is known well enough that effective policies can be enacted to navigate this environment properly. Policies and procedures are tools you use to ensure understanding and compliance with laws and regulations affecting cyberspace.


Cybercrime


One of the many ways to examine cybercrime involves studying how the computer is involved in the criminal act. Three types of computer crimes commonly occur: computer-assisted crime, computer-targeted crime, and computer-incidental crime. The differentiating factor is in how the computer is specifically involved from the criminal’s point of view. Just as crime is not a new phenomenon, neither are computers, and cybercrime has a history of several decades.

What is new is how computers are involved in criminal activities. The days of simple teenage hacking activities from a bedroom have been replaced by organized-crime-controlled botnets (groups of computers commandeered by a malicious hacker) and acts designed to attack specific targets. The legal system has been slow to react, and law enforcement has been hampered by its own challenges in responding to the new threats posed by high-tech crime.

What comes to mind when most people think about cybercrime is a computer that is targeted and attacked by an intruder. The criminal attempts to benefit from some form of unauthorized activity associated with a computer. In the 1980s and ’90s, cybercrime was mainly virus and worm attacks, each exacting some form of damage, yet the gain for the criminal was usually negligible. Enter the 21st century, with new forms of malware, rootkits, and targeted attacks; criminals can now target individual users and their bank accounts. In the current environment it is easy to predict where this form of attack will occur: if money is involved, a criminal will attempt to obtain what he considers his own fair share! A common method of criminal activity is computer-based fraud. Advertising on the Internet is big business, and hence the “new” crime of click fraud is now a concern. Click fraud uses malware to generate fraudulent clicks on online advertisements, defrauding the systems that count clicks to calculate advertising revenue.

eBay, the leader in the Internet auction space, and its companion PayPal are frequent targets of fraud. Whether the fraud occurs through fraudulent listings, fraudulent bidding, or outright stealing of merchandise, the results are the same: a crime is committed. As users move toward online banking and stock trading, so moves the criminal element. Malware designed to install a keystroke logger and then watch for bank or brokerage logins is already making the rounds of the Internet. Once the attacker finds the targets, he can begin looting accounts. His risk of getting caught and prosecuted is exceedingly low. Walk into a bank in the United States and rob it, and the odds are better than 95 percent that you will be doing time in federal prison after the FBI hunts you down and slaps the cuffs on your wrists. Commit the same crime via a computer, and the odds are reversed: less than 1 percent of these attackers are caught and prosecuted.

The low risk of being caught is one of the reasons that criminals are turning to computer crime. Just as computers have become easy for ordinary people to use, the trend continues for the criminal element. Today’s cyber criminals use computers as tools to steal intellectual property or other valuable data and then subsequently market these materials through underground online forums. Using the computer to physically isolate the criminal from the direct event of the crime has made the investigation and prosecution of these crimes much more challenging for authorities.

The last way computers are involved with criminal activities is through incidental involvement. Back in 1931, the U.S. government used accounting records and tax laws to convict Al Capone of tax evasion. Today, similar records are kept on computers. Computers are also used to traffic child pornography and other illicit activities—these computers act more as storage devices than as actual tools to enable the crime. Because child pornography existed before computers made its distribution easier, the computer is actually incidental to the crime itself.

With the three forms of computer involvement in crimes, coupled with increased criminal involvement, multiplied by the myriad of ways a criminal can use a computer to steal or defraud, added to the indirect connection mediated by the computer and the Internet, computer crime of the 21st century is a complex problem indeed. Technical issues are associated with all the protocols and architectures. A major legal issue is the education of the entire legal system as to the serious nature of computer crimes. All these factors are further complicated by the use of the Internet to separate the criminal and his victim geographically. Imagine this defense: “Your honor, as shown by my client’s electronic monitoring bracelet, he was in his apartment in California when this crime occurred. The victim claims that the money was removed from his local bank in New York City. Now, last time I checked, New York City was a long way from Los Angeles, so how could my client have robbed the bank?"



EXAM TIP Computers are involved in three forms of criminal activity: the computer as a tool of the crime, the computer as a victim of a crime, and the computer that is incidental to a crime.


Common Internet Crime Schemes


To find crime, just follow the money. In the United States, the FBI and the National White Collar Crime Center (NW3C) have joined forces in developing the Internet Crime Complaint Center, an online clearinghouse that communicates issues associated with cybercrime. One of the items provided to the online community is a list of common Internet crimes and explanations (www.ic3.gov/crimeschemes.aspx). A separate list offers advice on how to prevent these crimes through individual actions (www.ic3.gov/preventiontips.aspx).

Here’s a list of common Internet crimes from the site:


 
  • Auction Fraud
  • Auction Fraud—Romania
  • Counterfeit Cashier’s Check
  • Credit Card Fraud
  • Debt Elimination
  • Parcel Courier Email Scheme
  • Employment/Business Opportunities
  • Escrow Services Fraud
  • Identity Theft
  • Internet Extortion
  • Investment Fraud
  • Lotteries
  • Nigerian Letter or “419”
  • Phishing/Spoofing
  • Ponzi/Pyramid Scheme
  • Reshipping
  • Spam
  • Third Party Receiver of Funds


Sources of Laws


In the United States, three primary sources of laws and regulations affect our lives and govern our actions. Statutory laws are passed by the legislative branches of government, be it the Congress or a local city council. Other laws and regulations come from administrative bodies given rule-making power by other legislation. The authority of government-sponsored agencies, such as the Environmental Protection Agency (EPA), the Federal Aviation Administration (FAA), the Federal Communications Commission (FCC), and others, lies in this ability to enforce behaviors through administrative rule making. The last source of law in the United States is common law, which is based on previous events or precedent. This source of law comes from the judicial branch of government: judges decide on the applicability of laws and regulations.

All three sources have an involvement in computer security. Specific statutory laws, such as the Computer Fraud and Abuse Act, govern behavior. Administratively, the FCC and Federal Trade Commission (FTC) have made their presence felt in the Internet arena with respect to issues such as intellectual property theft and fraud. Common law cases are now working their way through the judicial system, cementing the issues of computers and crimes into the system of precedents and constitutional basis of laws.



EXAM TIP Three types of laws are commonly associated with cybercrime: statutory law, administrative law, and common law.


Computer Trespass


With the advent of global network connections and the rise of the Internet as a method of connecting computers between homes, businesses, and governments across the globe, a new type of criminal trespass can now be committed. Computer trespass is the unauthorized entry into a computer system via any means, including remote network connections. These crimes have introduced a new area of law that has both national and international consequences. For crimes that are committed within a country’s borders, national laws apply. For cross-border crimes, international laws and international treaties are the norm. Computer-based trespass can occur even if countries do not share a physical border.

Computer trespass is treated as a crime in many countries. National laws exist in many jurisdictions, including the EU member states, Canada, and the United States. These laws vary by country, but they all have similar provisions defining the unauthorized entry into and use of computer resources for criminal activities. Whether called computer mischief, as in Canada, or computer trespass, as in the United States, unauthorized entry into and use of computer resources is treated as a crime with significant punishments. With the globalization of the computer network infrastructure, or Internet, issues that cross national boundaries have arisen and will continue to grow in prominence. Some of these issues are dealt with through the application of national laws upon request of another government. In the future, an international treaty may pave the way for closer cooperation.


Convention on Cybercrime


The Convention on Cybercrime is the first international treaty on crimes committed via the Internet and other computer networks. The convention is the product of four years of work by Council of Europe experts, with participation by the United States, Canada, Japan, and other countries that are not members of the Council of Europe. The convention entered into force in 2004, once the required five member states had ratified it, and additional countries have ratified or acceded to it since.

The main objective of the convention, set out in the preamble, is to pursue a common criminal policy aimed at the protection of society against cybercrime, especially by adopting appropriate legislation and fostering international cooperation. This has become an important issue with the globalization of network communication. The ability to create a virus anywhere in the world and escape prosecution because of lack of local laws has become a global concern.

The convention deals particularly with infringements of copyright, computer-related fraud, child pornography, and violations of network security. It also contains a series of powers and procedures covering, for instance, searches of computer networks and interception. It will be supplemented by an additional protocol making any publication of racist and xenophobic propaganda via computer networks a criminal offense.


Significant U.S. Laws


The United States has been a leader in the development and use of computer technology. As such, it has a longer history with computers and with cybercrime. Because legal systems tend to be reactive and move slowly, this leadership position has translated into a leadership position from a legal perspective as well. The one advantage of this legal leadership position is the concept that once an item is identified and handled by the legal system in one jurisdiction, subsequent adoption in other jurisdictions is typically quicker.


Electronic Communications Privacy Act (ECPA)


The Electronic Communications Privacy Act (ECPA) of 1986 was passed by Congress and signed by President Reagan to address a myriad of legal privacy issues that resulted from the increasing use of computers and other technology specific to telecommunications. Sections of this law address e-mail, cellular communications, workplace privacy, and a host of other issues related to communicating electronically. A major provision was the prohibition against an employer's monitoring an employee's computer usage, including e-mail, unless consent is obtained. Other provisions protect electronic communications from wiretapping and outside eavesdropping, as users were assumed to have a reasonable expectation of privacy and are therefore afforded protection under the Fourth Amendment to the Constitution.

A common practice with respect to computer access today is the use of a warning banner. These banners are typically displayed whenever a network connection occurs and serve four main purposes. First, from a legal standpoint, they establish the level of expected privacy (usually none on a business system) and serve as consent to real-time monitoring, which can be conducted for security, business, or technical network performance reasons; the banner tells users that their connection to the network signals their consent to monitoring. Second, consent can also be obtained to examine files and records. Third, in the case of government systems, consent is needed to prevent direct application of Fourth Amendment protections. Finally, the warning banner can establish the system or network administrator's common authority to consent to a law enforcement search.
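Nothing in the ECPA or elsewhere prescribes how a banner must be implemented; the following is only a minimal Python sketch of the idea, a toy TCP service that pushes a warning banner to the client before any login prompt. The port number and banner wording are illustrative placeholders, not legal advice.

```python
import socketserver

# Hypothetical banner text; real wording should come from legal counsel.
WARNING_BANNER = (
    "*** WARNING ***\r\n"
    "This system is for authorized use only. Users have no expectation of\r\n"
    "privacy. All activity may be monitored, recorded, and disclosed to\r\n"
    "law enforcement. Continued use constitutes consent to monitoring.\r\n"
)

class BannerHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Present the banner at connection time, before authentication,
        # so consent to monitoring is established up front.
        self.wfile.write(WARNING_BANNER.encode("ascii"))
        self.wfile.write(b"login: ")
        # ... authentication and the rest of the session would follow here ...

if __name__ == "__main__":
    # Port 2323 is an arbitrary choice for the example.
    with socketserver.TCPServer(("0.0.0.0", 2323), BannerHandler) as server:
        server.serve_forever()
```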


Computer Fraud and Abuse Act (1986)


The Computer Fraud and Abuse Act (CFAA) of 1986, amended in 1994, 1996, and in 2001 by the Patriot Act, serves as the current foundation for criminalizing unauthorized access to computer systems. The CFAA makes it a crime to knowingly access a computer or computer system that is a government computer or a computer involved in interstate or foreign communication, which in today's Internet-connected age can be almost any machine. The act sets financial thresholds, which were lowered by the Patriot Act, but in light of today's investigation costs, these are easily met. The act also makes it a crime to knowingly transmit a program, code, or command that results in damage. Trafficking in passwords or similar access information is also criminalized. This is a wide-sweeping act, but the challenge of proving a case still exists.


Patriot Act


The Patriot Act of 2001, passed in response to the September 11 terrorist attacks on the World Trade Center in New York, substantially changed the levels of checks and balances in laws related to privacy in the United States. This law extends the trap and trace provisions of existing wiretap statutes to the Internet and mandated certain technological modifications at ISPs to facilitate electronic wiretaps on the Internet. The act also permitted the Justice Department to proceed with its rollout of the Carnivore program, an eavesdropping program for the Internet. Much controversy exists over Carnivore, but until it's changed, the Patriot Act mandates that ISPs cooperate and facilitate monitoring. The Patriot Act also permits federal law enforcement personnel to investigate computer trespass (intrusions) and enacts civil penalties for trespassers.


Gramm-Leach-Bliley Act (GLB)


In November 1999, President Clinton signed the Gramm-Leach-Bliley Act, a major piece of legislation affecting the financial industry that included significant privacy provisions for individuals. The key privacy tenet enacted in GLB was the establishment of an opt-out method for individuals to maintain some control over the use of the information provided in a business transaction with a member of the financial community. GLB is enacted through a series of rules governed by state law, federal law, securities law, and federal rules. These rules cover a wide range of financial institutions, from banks and thrifts, to insurance companies, to securities dealers. Some internal information sharing between affiliated companies is permitted under the Fair Credit Reporting Act (FCRA), but GLB restricted sharing with external third-party firms.


Sarbanes-Oxley (SOX)


In the wake of several high-profile corporate accounting and financial scandals in the United States, the federal government in 2002 passed sweeping legislation, the Sarbanes-Oxley Act, overhauling the financial accounting standards for publicly traded firms in the United States. These changes were comprehensive, touching most aspects of business in one way or another. With respect to information security, one of the most prominent changes is the Section 404 controls, which specify that all processes associated with the financial reporting of a firm must be controlled and audited on a regular basis. Since the majority of firms use computerized systems, this placed internal auditors into the IT shops, verifying that the systems had adequate controls to ensure the integrity and accuracy of financial reporting. These controls have generated controversy over the cost of maintaining them versus the risk of not using them.

Section 404 requires firms to establish a control-based framework designed to detect or prevent fraud that would result in misstatement of financials. In simple terms, these controls should detect insider activity that would defraud the firm. This has significant impacts on the internal security controls, because a system administrator with root level access could perform many if not all tasks associated with fraud and would have the ability to alter logs and cover his or her tracks. Likewise, certain levels of power users of financial accounting programs would also have significant capability to alter records.
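Section 404 mandates outcomes rather than specific technology, but one control of the kind auditors look for is a tamper-evident audit trail that even a privileged administrator cannot silently rewrite. The following Python sketch of a hash-chained log is purely illustrative and is an assumption about how such a control might be built, not anything specified by the act.

```python
import hashlib
import json
import time

def append_entry(log, actor, action):
    """Append an entry whose hash covers the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {"time": time.time(), "actor": actor, "action": action, "prev": prev_hash}
    entry["hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify(log):
    """Re-walk the chain; any edited or deleted entry breaks verification."""
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if entry["prev"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "jdoe", "posted journal entry 4711")   # sample activity
append_entry(log, "jdoe", "approved payment 0815")
print(verify(log))          # True: chain intact
log[0]["action"] = "edited" # simulate an administrator rewriting history
print(verify(log))          # False: tampering is detectable
```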


Payment Card Industry Data Security Standards (PCI DSS)


The payment card industry, including the powerhouses of MasterCard and Visa, designed a private sector initiative to protect payment card information between banks and merchants. This is a voluntary, private sector initiative that is prescriptive in its security guidance. Merchants and vendors can choose not to adopt these measures, but the standard carries a steep price for noncompliance: the transaction fee for noncompliant vendors can be significantly higher, fines of up to $500,000 can be levied, and in extreme cases the ability to process credit cards can be revoked. The PCI DSS is a set of six control objectives, containing a total of twelve requirements (a brief illustrative sketch of Requirement 4 follows the list):


 
  1. Build and Maintain a Secure Network
     Requirement 1: Install and maintain a firewall configuration to protect cardholder data
     Requirement 2: Do not use vendor-supplied defaults for system passwords and other security parameters
  2. Protect Cardholder Data
     Requirement 3: Protect stored cardholder data
     Requirement 4: Encrypt transmission of cardholder data across open, public networks
  3. Maintain a Vulnerability Management Program
     Requirement 5: Use and regularly update anti-virus software
     Requirement 6: Develop and maintain secure systems and applications
  4. Implement Strong Access Control Measures
     Requirement 7: Restrict access to cardholder data by business need-to-know
     Requirement 8: Assign a unique ID to each person with computer access
     Requirement 9: Restrict physical access to cardholder data
  5. Regularly Monitor and Test Networks
     Requirement 10: Track and monitor all access to network resources and cardholder data
     Requirement 11: Regularly test security systems and processes
  6. Maintain an Information Security Policy
     Requirement 12: Maintain a policy that addresses information security for all employees and contractors
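To make one of these requirements concrete, the short Python sketch below illustrates Requirement 4, encrypting data in transit, using the standard library's `ssl` module with certificate verification left at its defaults. The host name is just a public test site standing in for a payment endpoint; real PCI DSS compliance involves far more than opening one TLS connection.

```python
import socket
import ssl

HOST = "example.com"  # placeholder endpoint, not a real payment processor

# create_default_context() enables certificate and host-name verification,
# so the channel is both encrypted and authenticated.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443), timeout=10) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls_sock:
        print("Negotiated protocol:", tls_sock.version())
        # Cardholder data would be written only to tls_sock,
        # never to the underlying plaintext socket.
```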

Import/Export Encryption Restrictions


Encryption technology has been controlled by governments for a variety of reasons. The level of control varies from outright banning to little or no regulation. The reasons behind the control vary as well, and control over import and export is a vital method of maintaining a level of control over encryption technology in general. The majority of the laws and restrictions are centered on the use of cryptography, which was until recently used mainly for military purposes. The advent of commercial transactions and network communications over public networks such as the Internet has expanded the use of cryptographic methods to include securing of network communications. As is the case in most rapidly changing technologies, the practice moves faster than law. Many countries still have laws that are outmoded in terms of e-commerce and the Internet. Over time, these laws will be changed to serve these new uses in a way consistent with each country’s needs.


U.S. Law


Export controls on commercial encryption products are administered by the Bureau of Industry and Security (BIS) in the U.S. Department of Commerce. Responsibility for export control and jurisdiction was transferred from the State Department to the Commerce Department in 1996, and the regulations were most recently updated on June 6, 2002. Rules governing exports of encryption are found in the Export Administration Regulations (EAR), 15 C.F.R. Parts 730–774. Sections 740.13, 740.17, and 742.15 are the principal references for the export of encryption items.

Needless to say, violation of encryption export regulations is a serious matter and is not an issue to take lightly. Until recently, encryption export was accorded the same level of attention as the export of weapons for war. With the rise of the Internet, widespread personal computing, and the need for secure connections for e-commerce, this position has relaxed somewhat. The United States updated its encryption export regulations to provide treatment consistent with regulations adopted by the EU, easing export and re-export restrictions among the 15 EU member states and Australia, the Czech Republic, Hungary, Japan, New Zealand, Norway, Poland, and Switzerland. The member nations of the Wassenaar Arrangement agreed to remove key length restrictions on encryption hardware and software that is subject to certain reasonable levels of encryption strength. This action effectively removed “mass-market” encryption products from the list of dual-use items controlled by the Wassenaar Arrangement.

The U.S. encryption export control policy continues to rest on three principles: review of encryption products prior to sale, streamlined post-export reporting, and license review of certain exports of strong encryption to foreign government end users. The current set of U.S. rules requires notification to the BIS for export in all cases, but the restrictions are significantly lessened for mass-market products, defined as products meeting all of the following criteria:


 
  • They are generally available to the public by being sold, without restriction, from stock at retail selling points by any of these means:
    • Over-the-counter transactions
    • Mail-order transactions
    • Electronic transactions
    • Telephone call transactions
 
  • The cryptographic functionality cannot easily be changed by the user.
  • They are designed for installation by the user without further substantial support by the supplier.
    • When necessary, details of the items are accessible and will be provided, upon request, to the appropriate authority in the exporter’s country in order to ascertain compliance with export regulations.

Mass-market commodities and software employing a key length greater than 64 bits for the symmetric algorithm must be reviewed in accordance with BIS regulations. Restrictions on exports by U.S. persons to terrorist-supporting states (Cuba, Iran, Iraq, Libya, North Korea, Sudan, or Syria), their nationals, and other sanctioned entities are not changed by this rule.

As you can see, this is a very technical area, with significant rules and significant penalties for infractions. The best rule is that whenever you are faced with a situation involving the export of encryption-containing software, consult an expert and get the appropriate permission, or a statement that permission is not required, first. This is one case where it is better to be safe than sorry.


Non-U.S. Laws


Export control rules for encryption technologies fall under the Wassenaar Arrangement, an international arrangement on export controls for conventional arms and dual-use goods and technologies. The Wassenaar Arrangement has been established in order to contribute to regional and international security and stability, by promoting transparency and greater responsibility in transfers of conventional arms and dual-use goods and technologies, thus preventing destabilizing accumulations. Participating states, of which the United States is one of 33, will seek, through their own national policies and laws, to ensure that transfers of these items do not contribute to the development or enhancement of military capabilities that undermine these goals, and are not diverted to support such capabilities.

Many nations have more restrictive policies than those agreed upon as part of the Wassenaar Arrangement. Australia, New Zealand, the United States, France, and Russia go further than is required under Wassenaar and restrict general-purpose cryptographic software as dual-use goods through national laws. The Wassenaar Arrangement has had a significant impact on cryptography export controls, and there seems little doubt that some of the nations represented will seek to use the next round to move toward a more repressive cryptography export control regime based on their own national laws. There are ongoing campaigns to attempt to influence other members of the agreement toward less restrictive rules, and in some cases no rules. These lobbying efforts are based on e-commerce and privacy arguments.

In addition to export controls, some countries significantly restrict the use and possession of cryptographic technology. In China, a license from the state is required for cryptographic use. In some other countries, including Russia, Pakistan, Venezuela, and Singapore, tight restrictions apply to cryptographic uses. France relinquished tight state control over the possession of the technology in 1999. One of the driving points behind France's action is the fact that more and more Internet technologies have built-in cryptography. Digital rights management, secure USB solutions, digital signatures, and Secure Sockets Layer (SSL)-secured connections are examples of common behind-the-scenes uses of cryptographic technologies. In 2007, a new legal requirement took effect in the United Kingdom mandating that, when requested by UK authorities, either police or military, encryption keys must be provided to permit decryption of information associated with a terror or criminal investigation. Failure to deliver either the keys or decrypted data can result in an automatic prison sentence of two to five years. Although this seems reasonable, it has been argued that such a rule will drive certain financial entities offshore, as it applies only to data housed in the UK. As for deterrence, a two-year sentence may be preferable to a conviction for trafficking in child pornography; hence the law may not be as useful as it first appears.


Digital Signature Laws


On October 1, 2000, the Electronic Signatures in Global and National Commerce Act (commonly called the E-Sign law) went into effect in the United States. This law implements a simple principle: a signature, contract, or other record may not be denied legal effect, validity, or enforceability solely because it is in electronic form. Another source of law on digital signatures is the National Conference of Commissioners on Uniform State Laws’ Uniform Electronic Transactions Act (UETA), which has been adopted in more than 20 states. A number of states have adopted a nonuniform version of UETA, and the precise relationship between the federal E-Sign law and UETA has yet to be resolved and will most likely be worked out through litigation in the courts over complex technical issues.

Many states have adopted digital signature laws, the first being Utah in 1995. The Utah law, which has been used as a model by several other states, confirms the legal status of digital signatures as valid signatures, provides for use of state-licensed certification authorities, endorses the use of public key encryption technology, and authorizes online databases called repositories, where public keys would be available. The Utah act specifies a negligence standard regarding private encryption keys and places no limit on liability. Thus, if a criminal uses a consumer’s private key to commit fraud, the consumer is financially responsible for that fraud, unless the consumer can prove that he or she used reasonable care in safeguarding the private key. Consumers assume a duty of care when they adopt the use of digital signatures for their transactions, not unlike the care required for PINs on debit cards.

From a practical standpoint, the existence of the E-Sign law and UETA has enabled e-commerce transactions to proceed, and the resolution of the technical details via court actions will probably have little effect on consumers. It is worth noting that consumers will have to exercise reasonable care over their signature keys, much as they must over PINs and other private numbers. For the most part, software will handle these issues for the typical user.
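The statutes are deliberately technology-neutral, but the underlying mechanism is ordinary public key cryptography. As a minimal sketch only, and assuming the third-party Python `cryptography` package (nothing in E-Sign or UETA mandates a particular algorithm or library), signing and verification look roughly like this:

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# The signer keeps the private key secret; safeguarding it is the
# "reasonable care" duty the Utah-style statutes place on the key holder.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

contract = b"I agree to pay $100 for the goods described in this order."

signature = private_key.sign(
    contract,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# verify() returns None on success and raises InvalidSignature if either
# the contract text or the signature has been altered.
public_key.verify(
    signature,
    contract,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified")
```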


Non-U.S. Signature Laws


The United Nations has a mandate to further harmonize international trade. With this in mind, the UN General Assembly adopted the United Nations Commission on International Trade Law (UNCITRAL) Model Law on E-Commerce. To implement specific technical aspects of this model law, more work on electronic signatures was needed. The General Assembly then adopted the United Nations Commission on International Trade Law (UNCITRAL) Model Law on Electronic Signatures. These model laws have become the basis for many national and international efforts in this area.


Canadian Laws


Canada was an early leader in the use of digital signatures. Singapore, Canada, and the U.S. state of Pennsylvania were the first governments to have digitally signed an interstate contract. This contract, digitally signed in 1998, concerned the establishment of a Global Learning Consortium between the three governments (source: Krypto-Digest Vol. 1 No. 749, June 11, 1998). Canada went on to adopt a national model bill for electronic signatures to promote e-commerce. This bill, the Uniform Electronic Commerce Act (UECA), allows the use of electronic signatures in communications with the government. The law contains general provisions for the equivalence between traditional and electronic signatures (source: BNA ECLR, May 27, 1998, p. 700) and is modeled after the UNCITRAL Model Law on E-Commerce (source: BNA ECLR, September 13, 2000, p. 918). The UECA is similar to Bill C-54 in authorizing governments to use electronic technology to deliver services and communicate with citizens.

Individual Canadian provinces have passed similar legislation defining digital signature provisions for e-commerce and government use. These laws are modeled after the UNCITRAL Model Law on E-Commerce to enable widespread use of e-commerce transactions. These laws have also modified the methods of interactions between the citizens and the government, enabling electronic communication in addition to previous forms.


European Laws


The European Commission adopted a Communication on Digital Signatures and Encryption: “Towards a European Framework for Digital Signatures and Encryption.” This communication states that a common framework at the EU level is urgently needed to stimulate “the free circulation of digital signature related products and services within the Internal market” and “the development of new economic activities linked to electronic commerce” as well as “to facilitate the use of digital signatures across national borders.” Community legislation should address common legal requirements for certificate authorities, legal recognition of digital signatures, and international cooperation. This communication was debated, and a common position was presented to the member nations for incorporation into national laws.

On May 4, 2000, the European Parliament and Council approved the common position adopted by the council. In June 2000, the final version Directive 2000/31/EC was adopted. The directive is now being implemented by member states. To implement the articles contained in the directive, member states will have to remove barriers, such as legal form requirements, to electronic contracting, leading to uniform digital signature laws across the EU.


Digital Rights Management


The ability to make flawless copies of digital media has led to another “new” legal issue. For years, the music and video industry has relied on technology to protect its rights with respect to intellectual property. It has been illegal for decades to copy information, such as music and videos, protected by copyright. Even with the law, for years people have made copies of music and videos to share, violating the law. This had not had a significant economic impact in the eyes of the industry, as the copies made were of lesser quality and people would pay for original quality in sufficient numbers to keep the economics of the industry healthy. As such, legal action against piracy was typically limited to large-scale duplication and sale efforts, commonly performed overseas and subsequently shipped to the United States as counterfeit items.

The ability of anyone with a PC to make a perfect copy of digital media has led to industry fears that individual piracy actions could cause major economic issues in the recording industry. To protect the rights of the recording artists and the economic health of the industry as a whole, the music and video recording industry lobbied the U.S. Congress for protection, which was granted under the Digital Millennium Copyright Act (DMCA) on October 20, 1998. This law states the following: “To amend title 17, United States Code, to implement the World Intellectual Property Organization Copyright Treaty and Performances and Phonograms Treaty, and for other purposes.” The majority of this law was well crafted, but one section has drawn considerable comment and criticism. A section of the law makes it illegal to develop, produce, and trade any device or mechanism designed to circumvent technological controls used in copy protection.

Although on the surface this seems a reasonable requirement, the methods used in most cases are cryptographic in nature, and this provision had the potential to eliminate or severely limit research into encryption and the strengths and weaknesses of specific methods. A provision, Section 1201(g) of the DMCA, was included to provide for specific relief and allow exemptions for legitimate research. With this section, the law garnered industry support from several organizations such as the Software & Information Industry Association (SIIA), the Recording Industry Association of America (RIAA), and the Motion Picture Association of America (MPAA). Based on these inputs, the U.S. Copyright Office issued a report supporting the DMCA in a required report to the Congress. This seemed to settle the issues until the RIAA threatened to sue an academic research team headed by Professor Felten of Princeton University. The issue behind the threatened suit was the potential publication of results demonstrating that several copy protection methods were flawed in their application. This research came in response to an industry-sponsored challenge to break the methods. After breaking the methods developed and published by the industry, Felten and his team prepared to publish their findings. The RIAA objected and threatened a suit under provisions of the DMCA. After several years of litigation and support of Felten by the Electronic Frontier Foundation (EFF), the case was eventually resolved in the academic team's favor, although no case law to prevent further industry-led threats was developed.

This might seem a remote issue, but industries have been subsequently using the DMCA to protect their technologically inspired copy protection schemes for such products as laser-toner cartridges and garage-door openers. It is doubtful that the U.S. Congress intended the law to have such effects, yet until these issues are resolved in court, the DMCA may have wide-reaching implications. The act has specific exemptions for research provided four elements are satisfied:

(A) the person lawfully obtained the encrypted copy, phonorecord, performance, or display of the published work;

(B) such act is necessary to conduct such encryption research;

(C) the person made a good faith effort to obtain authorization before the circumvention; and

(D) such act does not constitute infringement under this title or a violation of applicable law other than this section, including section 1030 of title 18 and those provisions of title 18 amended by the Computer Fraud and Abuse Act of 1986.

Additional exemptions are scattered through the law, although many were pasted in during various deliberations on the act and do not make sense when the act is viewed in whole. The effect of these exemptions upon people in the software and technology industry is not clear, and until restrained by case law, the DMCA gives large firms with deep legal pockets a potent weapon to use against parties who disclose flaws in encryption technologies used in various products. Actions have already been initiated against individuals and organizations who have reported security holes in products. This will be an active area of legal contention as the real issues behind digital rights management have yet to be truly resolved.


Privacy


The advent of interconnected computer systems has enabled businesses and governments to share and integrate information. This has led to a resurgence in the importance of privacy laws worldwide. Governments in Europe and the United States have taken different approaches in attempts to control privacy via legislation. Many social and philosophical factors have led to these different approaches, but as the world becomes interconnected, understanding and resolving them will be important.

Privacy can be defined as the power to control what others know about you and what they can do with this information. In the computer age, personal information forms the basis for many decisions, from the credit card transactions used to purchase goods to the ability to buy an airplane ticket and fly domestically. Although it is theoretically possible to live an almost anonymous existence today, the price for doing so is high: from higher prices at the grocery store (no frequent shopper discount), to higher credit costs, to challenges with air travel, opening bank accounts, and seeking employment.


U.S. Privacy Laws


Identity privacy and the establishment of identity theft crimes are governed by the Identity Theft and Assumption Deterrence Act, which makes it a violation of federal law to knowingly use another's identity. The collection of information necessary to do this is also governed by GLB, which makes it illegal to gather identity information on another person under false pretenses. In the education area, privacy laws have existed for years. Student records have significant protections under the Family Educational Rights and Privacy Act (FERPA) of 1974, including significant restrictions on information sharing. These records operate on an opt-in basis, as the student must approve the disclosure of information prior to the actual disclosure.


Health Insurance Portability & Accountability Act (HIPAA)


Medical and health information also has privacy implications, which is why the U.S. Congress enacted the Health Insurance Portability & Accountability Act (HIPAA) of 1996. HIPAA calls for sweeping changes in the way health and medical data is stored, exchanged, and used. From a privacy perspective, significant restrictions of data transfers to ensure privacy are included in HIPAA, including security standards and electronic signature provisions. HIPAA security standards mandate a uniform level of protections regarding all health information that pertains to an individual and is housed or transmitted electronically. The standard mandates safeguards for physical storage, maintenance, transmission, and access to individuals’ health information. HIPAA mandates that organizations that use electronic signatures will have to meet standards ensuring information integrity, signer authentication, and nonrepudiation. These standards leave to industry the task of specifying the specific technical solutions and mandate compliance only to significant levels of protection as provided by the rules being released by industry.


Gramm-Leach-Bliley Act (GLB)


In the financial arena, GLB introduced the U.S. consumer to privacy notices, where firms must disclose what they collect, how they protect the information, and with whom they will share it. Annual notices are required as well as the option for consumers to opt out of the data sharing. The primary concept behind U.S. privacy laws in the financial arena is the notion that consumers be allowed to opt-out. This was strengthened in GLB to include specific wording and notifications as well as the appointment of a privacy officer for the firm.


California Senate Bill 1386 (SB 1386)


California Senate Bill 1386 (SB 1386) was a landmark law concerning information disclosures. It mandates that Californians be notified whenever personally identifiable information is lost or disclosed. Since the passage of SB 1386, numerous other states have modeled legislation on this bill, and although national legislation has been blocked by political procedural moves, it will eventually be passed.


European Laws


The EU has developed a comprehensive concept of privacy administered via a set of statutes known as data protection laws. These privacy statutes cover all personal data, whether collected and used by government or private firms. These laws are administered by state and national data protection agencies in each country. With the advent of the EU, this common comprehensiveness stands in distinct contrast to the patchwork of laws in the United States.

Privacy laws in Europe are built around the concept that privacy is a fundamental human right that demands protection through government administration. When the EU was formed, many laws were harmonized across the 15 member nations, and data privacy was among those standardized. One important aspect of this harmonization is the Data Protection Directive, adopted by EU members, which has a provision allowing the European Commission to block transfers of personal data to any country outside the EU that has been determined to lack adequate data protection policies. The differences in approach between the U.S. and the EU with respect to data protection led to the EU issuing expressions of concern about the adequacy of data protection in the U.S., a move that could pave the way to the blocking of data transfers. After negotiation, it was determined that U.S. organizations that voluntarily joined an arrangement known as Safe Harbor would be considered adequate in terms of data protection.

Safe Harbor is a mechanism for self-regulation that can be enforced through trade practice law via the FTC. A business joining the Safe Harbor Consortium must make commitments to abide by specific guidelines concerning privacy. Safe Harbor members also agree to be governed by certain self-enforced regulatory mechanisms, backed ultimately by FTC action.

Another major difference between U.S. and European regulation lies in where the right of control is exercised. In European directives, the right of control over privacy is balanced in such a way as to favor consumers. Rather than having to pay to opt-out, as in unlisted phone numbers, consumers have such services for free. Rather than having to opt-out at all, the default privacy setting is deemed to be the highest level of data privacy, and users have to opt-in to share information. This default setting is a cornerstone of the EU Data Protection Directive and is enforced through national laws in all member nations.


Ethics


Ethics has been a subject of study by philosophers for centuries. It might be surprising to note that ethics associated with computer systems has a history dating back to the beginning of the computing age. The first examinations of cybercrime occurred in the late 1960s, when the professional conduct of computer professionals in the workplace came under scrutiny. If we consider ethical behavior to be behavior consistent with existing social norms, it can be fairly easy to see what is considered right and wrong. But with the globalization of commerce, and the globalization of communications via the Internet, questions are raised about what the appropriate social norm is. Cultural issues can have wide-ranging effects on this, and although the idea of an appropriate code of conduct for the entire world is appealing, it is as yet an unachieved objective.

The issue of globalization has significant local effects. If a user wishes to express free speech via the Internet, is this protected behavior or criminal behavior? Different locales have different sets of laws to deal with items such as free speech, with some recognizing the right, while others prohibit it. With the globalization of business, what are the appropriate controls for intellectual property when some regions support this right, while others do not even recognize intellectual property as something of value, but rather something owned by the collective of society? The challenge in today’s business environment is to establish and communicate a code of ethics so that everyone associated with an enterprise can understand the standards of expected performance.

A great source of background information on all things associated with computer security, the SANS Institute, published a set of IT ethical guidelines in April 2004: see www.sans.org/resources/ethics.php?ref=3781.


SANS Institute IT Code of Ethics 1



Version 1.0 - April 24, 2004


The SANS Institute


I will strive to know myself and be honest about my capability.



 
  • I will strive for technical excellence in the IT profession by maintaining and enhancing my own knowledge and skills. I acknowledge that there are many free resources available on the Internet and affordable books and that the lack of my employer’s training budget is not an excuse nor limits my ability to stay current in IT.
  • When possible I will demonstrate my performance capability with my skills via projects, leadership, and/or accredited educational programs and will encourage others to do so as well.
  • I will not hesitate to seek assistance or guidance when faced with a task beyond my abilities or experience. I will embrace other professionals’ advice and learn from their experiences and mistakes. I will treat this as an opportunity to learn new techniques and approaches. When the situation arises that my assistance is called upon, I will respond willingly to share my knowledge with others.
  • I will strive to convey any knowledge (specialist or otherwise) that I have gained to others so everyone gains the benefit of each other’s knowledge.
  • I will teach the willing and empower others with Industry Best Practices (IBP). I will offer my knowledge to show others how to become security professionals in their own right. I will strive to be perceived as and be an honest and trustworthy employee.
  • I will not advance private interests at the expense of end users, colleagues, or my employer.
  • I will not abuse my power. I will use my technical knowledge, user rights, and permissions only to fulfill my responsibilities to my employer.


1 © 2000-2008 The SANS™ Institute. Reprinted with permission.



 
  • I will avoid and be alert to any circumstances or actions that might lead to conflicts of interest or the perception of conflicts of interest. If such circumstance occurs, I will notify my employer or business partners.
  • I will not steal property, time or resources.
  • I will reject bribery or kickbacks and will report such illegal activity.
  • I will report on the illegal activities of myself and others without respect to the punishments involved. I will not tolerate those who lie, steal, or cheat as a means of success in IT.


I will conduct my business in a manner that assures the IT profession is considered one of integrity and professionalism.


 
  • I will not injure others, their property, reputation, or employment by false or malicious action.
  • I will not use availability and access to information for personal gains through corporate espionage.
  • I distinguish between advocacy and engineering. I will not present analysis and opinion as fact.
  • I will adhere to Industry Best Practices (IBP) for system design, rollout, hardening and testing.
  • I am obligated to report all system vulnerabilities that might result in significant damage.
  • I respect intellectual property and will be careful to give credit for other’s work. I will never steal or misuse copyrighted, patented material, trade secrets or any other intangible asset.
  • I will accurately document my setup procedures and any modifications I have done to equipment. This will ensure that others will be informed of procedures and changes I’ve made.


I respect privacy and confidentiality.


 
  • I respect the privacy of my co-workers’ information. I will not peruse or examine their information including data, files, records, or network traffic except as defined by the appointed roles, the organization’s acceptable use policy, as approved by Human Resources, and without the permission of the end user.
  • I will obtain permission before probing systems on a network for vulnerabilities.
  • I respect the right to confidentiality with my employers, clients, and users except as dictated by applicable law. I respect human dignity.
  • I treasure and will defend equality, justice and respect for others.
  • I will not participate in any form of discrimination, whether due to race, color, national origin, ancestry, sex, sexual orientation, gender/sexual identity or expression, marital status, creed, religion, age, disability, veteran’s status, or political ideology.


Chapter Review


From a system administrator’s position, complying with cyber-laws is fairly easy. Add warning banners to all systems that enable consent to monitoring as a condition of access. This will protect you and the firm during normal routine operation of the system. Safeguard all personal information obtained in the course of your duties and do not obtain unnecessary information merely because you can get it. With respect to the various privacy statutes that are industry specific—GLB, FCRA, ECPA, FERPA, HIPAA—refer to your own institution’s guidelines and policies. When confronted with aspects of the U.S. Patriot Act, refer to your company’s general counsel, for although the act may absolve you and the firm of responsibility, this act’s implications with respect to existing law are still unknown. And in the event that your system is trespassed upon (hacked), you can get federal law enforcement assistance in investigating and prosecuting the perpetrators.


Questions


To further help you prepare for the Security+ exam, and to test your level of preparedness, answer the following questions and then check your answers against the list of correct answers at the end of the chapter.


 
  1. 1. The VP of IS wants to monitor user actions on the company’s intranet. What is the best method of obtaining the proper permissions?
    1. A. A consent banner displayed upon login
    2. B. Written permission from a company officer
    3. C. Nothing, because the system belongs to the company
    4. D. Written permission from the user
  2. 2. Your Social Security number and other associated facts kept by your bank are protected by what law against disclosure?
    1. A. The Social Security Act of 1934
    2. B. The Patriot Act of 2001
    3. C. The Gramm-Leach-Bliley Act
    4. D. HIPAA
 
  1. 3. Breaking into another computer system in the United States, even if you do not cause any damage, is regulated by what laws?
    1. A. State law, as the damage is minimal
    2. B. Federal law under the Identity Theft and Assumption Deterrence Act
    3. C. Federal law under Electronic Communications Privacy Act (ECPA) of 1986
    4. D. Federal law under the Patriot Act of 2001
  2. 4. Export of encryption programs is regulated by the
    1. A. U.S. State Department
    2. B. U.S. Commerce Department
    3. C. U.S. Department of Defense
    4. D. National Security Agency
  3. 5. For the FBI to install and operate Carnivore on an ISP’s network, what is required?
    1. A. A court order specifying specific items being searched for
    2. B. An official request from the FBI
    3. C. An impact statement to assess recoverable costs to the ISP
    4. D. A written request from an ISP to investigate a computer trespass incident
  4. 6. True or false: Digital signatures are equivalent to notarized signatures for all transactions in the United States.
    1. A. True for all transactions in which both parties agree to use digital signatures
    2. B. True only for non-real property transactions
    3. C. True only where governed by specific state statute
    4. D. False, as the necessary laws have not yet passed
  5. 7. The primary factor(s) behind data sharing compliance between U.S. and European companies is/are
    1. A. Safe Harbor Provision
    2. B. European Data Privacy Laws
    3. C. U.S. FTC enforcement actions
    4. D. All of the above
  6. 8. True or false: Writing viruses and releasing them across the Internet is a violation of law.
    1. A. Always true. All countries have reciprocal agreements under international law.
    2. B. Partially true. Depends on laws in country of origin.
    3. C. False. Computer security laws do not cross international boundaries.
    4. D. Partially true. Depends on the specific countries involved, for the author of the virus and the recipient.
  7. 9. Publication of flaws in encryption used for copy protection is a potential violation of
    1. A. HIPAA
    2. B. U.S. Commerce Department regulations
    3. C. DMCA
    4. D. National Security Agency regulations
 
  1. 10. Violation of DMCA can result in
    1. A. Civil fine
    2. B. Jail time
    3. C. Activity subject to legal injunctions
    4. D. All of the above

Answers


 
  1. 1. A. A consent banner consenting to monitoring resolves issues of monitoring with respect to the Electronic Communications Privacy Act (ECPA) of 1986.
  2. 2. C. The Gramm-Leach-Bliley Act governs the sharing of privacy information with respect to financial institutions.
  3. 3. D. The Patriot Act of 2001 made computer trespass a felony.
  4. 4. B. Export controls on commercial encryption products are administered by the Bureau of Industry and Security (BIS) in the U.S. Department of Commerce.
  5. 5. B. The Patriot Act of 2001 mandated ISP compliance with the FBI Carnivore program.
  6. 6. A. Electronic digital signatures are considered valid for transactions in the United States since the passing of the Electronic Signatures in Global and National Commerce Act (E-Sign) in 2000.
  7. 7. D. All of the above. The primary driver is European data protection laws, as enforced on U.S. firms by the FTC through the Safe Harbor provision mechanism.
  8. 8. D. This is partially true, for not all countries share reciprocal laws. Some common laws and reciprocity issues exist in certain international communities—for example, European Union—so some cross-border legal issues have been resolved.
  9. 9. C. This is a potential violation of the Digital Millennium Copyright Act of 1998 unless an exemption provision is met.
 
  1. 10. D. All of the above have been attributed to DMCA, including the jailing of a Russian programmer who came to the United States to speak at a security conference. See w2.eff.org/IP/DMCA/?f=20010830_eff_dmca_op-ed.html.

PART II
Cryptography and Applications


Chapter 4 Cryptography

Chapter 5 Public Key Infrastructure

Chapter 6 Standards and Protocols



CHAPTER 4
Cryptography


In this chapter, you will


 
  • Learn about the different types of cryptography
  • Learn about the current cryptographic algorithms
  • Understand how cryptography is applied for security

Cryptography is the science of encrypting, or hiding, information—something people have sought to do since they began using language. Although language allowed them to communicate with one another, people in power attempted to hide information by controlling who was taught to read and write. Eventually, more complicated methods of concealing information by shifting letters around to make the text unreadable were developed.

The Spartans of ancient Greece would write on a ribbon wrapped around a specific gauge cylinder. When the ribbon was unwrapped, it revealed a strange string of letters. The message could be read only when the ribbon was wrapped around the same gauge cylinder. This is an example of a transposition cipher, where the same letters are used but the order is changed.

The Romans typically used a different method known as a shift cipher. In this case, one letter of the alphabet is shifted a set number of places in the alphabet for another letter. A common modern-day example of this is the ROT13 cipher, in which every letter is rotated 13 positions in the alphabet: n is written instead of a, o instead of b, and so on.

These ciphers were simple to use and also simple to break. Because hiding information was still important, more advanced transposition and substitution ciphers were required. As systems and technology became more complex, ciphers were frequently automated by some mechanical or electromechanical device. A famous example of a modern encryption machine is the German Enigma machine from World War II. This machine used a complex series of substitutions to perform encryption, and interestingly enough it gave rise to extensive research in computers.

Cryptanalysis, the process of analyzing available information in an attempt to return the encrypted message to its original form, required advances in computer technology for complex encryption methods. The birth of the computer made it possible to easily execute the calculations required by more complex encryption algorithms. Today, the computer almost exclusively powers how encryption is performed. Computer technology has also aided cryptanalysis, allowing new methods to be developed, such as linear and differential cryptanalysis. Differential cryptanalysis is done by comparing the input plaintext to the output ciphertext to try and determine the key used to encrypt the information. Linear cryptanalysis is similar in that it uses both plaintext and ciphertext, but it puts the plaintext through a simplified cipher to try and deduce what the key is likely to be in the full version of the cipher.

This chapter examines the most common symmetric and asymmetric algorithms in use today, as well as some uses of encryption on computer networks.


Algorithms


Every current encryption scheme is based upon an algorithm, a step-by-step, recursive computational procedure for solving a problem in a finite number of steps. The cryptographic algorithm—what is commonly called the encryption algorithm or cipher—is made up of mathematical steps for encrypting and decrypting information. Figure 4-1 shows a diagram of the encryption and decryption process and its parts.

The best algorithms are always public algorithms that have been published for peer review by other cryptographic and mathematical experts. Publication is important, as any flaws in the system can be revealed by others before actual use of the system. Several proprietary algorithms have been reverse-engineered, exposing the confidential data the algorithms try to protect. Examples of this include the decryption of Nikon’s proprietary RAW format white balance encryption, and the cracking of the Exxon Mobil SpeedPass RFID encryption. The use of a proprietary system can actually be less secure than using a published system. While proprietary systems are not made available to be tested by potential crackers, public systems are made public for precisely this purpose.

A system that maintains its security after public testing can be reasonably trusted to be secure. A public algorithm can be more secure because good systems rely on the encryption key to provide security, not the algorithm itself. The actual steps for encrypting data can be published, because without the key, the protected information cannot be accessed. A key is a special piece of data used in both the encryption and decryption processes. The algorithms stay the same in every implementation, but a different key is used for each, which ensures that even if someone knows the algorithm you use to protect your data, he cannot break your security. A classic example of this is the early shift cipher, known as Caesar’s cipher.

Figure 4-1 Diagram of the encryption and decryption process



Caesar’s cipher uses an algorithm and a key: the algorithm specifies that you offset the alphabet either to the right (forward) or to the left (backward), and the key specifies how many letters the offset should be. For example, if the algorithm specified offsetting the alphabet to the right, and the key was 3, the cipher would substitute an alphabetic letter three to the right for the real letter, so d would be used to represent a, f would be c, and so on. In this example, both the algorithm and key are simple, allowing for easy cryptanalysis of the cipher and easy recovery of the plaintext message.

The ease with which shift ciphers were broken led to the development of substitution ciphers, which were popular in Elizabethan England and more complex than shift ciphers. They work on the principle of substituting a different letter for every letter: A becomes G, B becomes D, and so on. This system permits 26 possible values for every letter in the message, making the cipher many times more complex than a standard shift cipher. Simple analysis of the cipher could be performed to retrieve the key, however. By looking for common letters such as e and patterns found in words such as ing, you can determine which cipher letter corresponds to which plaintext letter. Making educated guesses about words will eventually allow you to determine the system’s key value.

To correct this problem, more complexity had to be added to the system. The Vigenère cipher works as a polyalphabetic substitution cipher that depends on a password. This is done by setting up a substitution table: the Vigenère square, a 26 × 26 grid in which each row holds the alphabet shifted one position further than the row above.


Then the password is matched up to the text it is meant to encipher. If the password is not long enough, the password is repeated until one character of the password is matched up with each character of the plaintext. For example, if the plaintext is Sample Message and the password is password, the resulting match is

SAMPLEMESSAGE

PASSWORDPASSW

The cipher letter is determined by use of the grid, matching the plaintext character’s row with the password character’s column, resulting in a single ciphertext character where the two meet. Consider the first letters S and P: when plugged into the grid they output a ciphertext character of H. This process is repeated for every letter of the message. Once the rest of the letters are processed, the output is HAEHHSDHHSSYA.

In this example, the key in the encryption system is the password. It also illustrates that an algorithm can be simple and still provide strong security. If someone knows about the table, she can determine how the encryption was performed, but she still will not know the key to decrypting the message.
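
The table lookup amounts to adding the plaintext letter and the password letter modulo 26, so the cipher can be sketched in a few lines of Python. This illustration assumes the message and password contain only letters; it reproduces the result from the example above.

def vigenere_encrypt(plaintext, password):
    # Each ciphertext letter = (plaintext letter + password letter) mod 26.
    ciphertext = []
    for i, ch in enumerate(plaintext.upper()):
        p = ord(ch) - ord('A')
        k = ord(password.upper()[i % len(password)]) - ord('A')
        ciphertext.append(chr((p + k) % 26 + ord('A')))
    return "".join(ciphertext)

assert vigenere_encrypt("SAMPLEMESSAGE", "password") == "HAEHHSDHHSSYA"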

The more complex the key, the greater the security of the system. The Vigenère cipher system and systems like it make the algorithms rather simple but the key rather complex, with the best keys being very long and very random data. Key complexity is achieved by giving the key a large number of possible values. The keyspace is the set of all possible key values; its size determines how hard the key is to guess. When an algorithm lists a certain number of bits as a key, it is defining the keyspace. Note that because the keyspace is a numeric value, it is very important to ensure that comparisons are done using similar key types. Comparing a key made of 1 bit (2 possible values) and a key made of 1 letter (26 possible values) would not yield accurate results. Fortunately, the widespread use of computers has made almost all algorithms state their keyspace values in terms of bits.

It is easy to see how key complexity affects an algorithm when you look at some of the encryption algorithms that have been broken. The Data Encryption Standard (DES) uses a 56-bit key, allowing 72,000,000,000,000,000 possible values, but it has been broken by modern computers. The modern variant of DES, Triple DES (3DES), provides an effective key length of 112 or 168 bits, and newer ciphers such as AES use keys of 128 bits or more; a 128-bit key allows roughly 340,000,000,000,000,000,000,000,000,000,000,000,000 possible values. You can see the difference in the possible values, and why 128 bits is generally accepted as the minimum required to protect sensitive information.
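
To get a feel for what these numbers mean in practice, the rough Python sketch below estimates the average time to brute-force keys of various sizes. The guessing rate of 10 billion keys per second is an assumption for illustration, not a measured figure.

GUESSES_PER_SECOND = 1e10            # assumed attacker speed
SECONDS_PER_YEAR = 3600 * 24 * 365

for bits in (40, 56, 128):
    average_tries = 2 ** (bits - 1)  # on average, half the keyspace is searched
    years = average_tries / GUESSES_PER_SECOND / SECONDS_PER_YEAR
    print(f"{bits}-bit key: roughly {years:.3g} years on average")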

Because the security of the algorithm relies on the key, key management is of critical concern. Key management includes anything having to do with the exchange, storage, safeguarding, and revocation of keys. It is most commonly associated with asymmetric encryption, since asymmetric encryption uses both public and private keys. To be used properly for authentication, a key must be current and verified. If you have an old or compromised key, you need a way to check to see that the key has been revoked.

Key management is also important for symmetric encryption, however, as keys must be shared and exchanged easily. They must also be securely stored to provide appropriate confidentiality of the encrypted information. While keys can be stored in many different ways, new PC hardware often includes the Trusted Platform Module (TPM), which provides a hardware-based key storage location that is used by many applications, including the BitLocker drive encryption featured in Microsoft Windows Vista. (More specific information about the management of keys is provided in Chapter 5.)

The same algorithms cannot be used indefinitely; eventually they lose their ability to secure information. When an algorithm is known to be broken, it could be a result of the algorithm being faulty or having been based on poor math; more likely, the algorithm has been rendered obsolete by advancing technology. All encryption ciphers other than a "one-time pad" cipher are susceptible to brute-force attacks, in which a cracker attempts every possible key until he gains access. With a very small key, such as a 2-bit key, trying every possible value is a simple matter, with only four possibilities: 00, 01, 10, or 11. 56-bit DES, on the other hand, has 72 quadrillion possible keys, and while that seems like a lot, today's computers can attempt billions of keys every second. This makes brute-forcing a key only a matter of time; keys must be large enough that a brute-force attack takes longer than the period for which the enciphered information retains its value. One-time pad ciphers are interesting, because their keys are equal in length to the messages they protect, and completely random characters must be used for the keys. This allows the keyspace to be unlimited, therefore making a brute-force attack practically impossible.



EXAM TIP A one-time pad with a good random key is considered unbreakable. In addition, since keys are never reused, even if a key is broken, no information can be accessed using the key other than the message used by that key.

Computers in cryptography and cryptanalysis must handle all this data in bit format. They would have difficulty in using the substitution table shown earlier, so many encryption functions use a logical function to perform the encipherment. This function is typically XOR, which is the bitwise exclusive OR. XOR is used because

if (P XOR K) = C then (C XOR K) = P

If P is the plaintext and K is the key, then C is the ciphertext, making a simple symmetric key cipher in the case where the sender and the receiver both have a shared secret (key) to encrypt and decrypt data.
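
The XOR relationship is easy to demonstrate in Python. The short sketch below uses the standard secrets module to generate a random key as long as the message, which also illustrates the one-time pad idea mentioned earlier; the message text is arbitrary.

import secrets

def xor_bytes(data, key):
    # C = P XOR K, and P = C XOR K, applied byte by byte.
    return bytes(d ^ k for d, k in zip(data, key))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))   # random key as long as the message
ciphertext = xor_bytes(plaintext, key)
assert xor_bytes(ciphertext, key) == plaintext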

While symmetric encryption is the most common type of encryption, other types are used as well, such as public key (asymmetric) encryption and hashing, or one-way, functions. Each is best suited for particular situations.


Hashing


Hashing functions are commonly used encryption methods. A hashing function is a special mathematical function that performs one-way encryption, which means that once the algorithm is processed, there is no feasible way to use the ciphertext to retrieve the plaintext that was used to generate it. Also, ideally, there is no feasible way to generate two different plaintexts that compute to the same hash value. Figure 4-2 shows a generic hashing process.

Common uses of hashing functions are storing computer passwords and ensuring message integrity. The idea is that hashing can produce a unique value that corresponds to the data entered, but the hash value is also reproducible by anyone else running the

Figure 4-2 How hashes work



same algorithm against the data. So you could hash a message to get a message authentication code (MAC), and the computed digest of the message would show that no intermediary has modified the message. This process works because hashing methods are typically public, and anyone can hash data using the specified method. It is computationally simple to generate the hash, so it is simple to check the validity or integrity of something by matching the given hash to one that is locally generated.
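
As a small illustration (not a complete protocol), the Python sketch below uses the standard hashlib and hmac modules to compute a digest, verify integrity by recomputing it, and produce a keyed digest; the message and the shared secret are arbitrary values.

import hashlib
import hmac

message = b"Transfer $100 to account 42"

# Sender publishes the digest alongside the message.
digest = hashlib.sha256(message).hexdigest()

# Receiver recomputes the digest; a match indicates the message was not altered.
assert hashlib.sha256(b"Transfer $100 to account 42").hexdigest() == digest

# If an attacker could replace both the message and the digest in transit,
# a keyed digest (HMAC) over a shared secret is used instead of a bare hash.
tag = hmac.new(b"shared secret key", message, hashlib.sha256).hexdigest()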

A hash algorithm can be compromised with what is called a collision attack, in which an attacker finds two different messages that hash to the same value. This type of attack is very difficult and requires generating a separate algorithm that will attempt to find a text that will hash to the same value as a known hash. This must occur faster than simply editing characters until you hash to the same value, which is a brute-force type attack. The consequence of a hash function that suffers from collisions is that integrity is lost. If an attacker can make two different inputs purposefully hash to the same value, she might trick people into running malicious code and cause other problems. Two popular hash algorithm families are the Secure Hash Algorithm (SHA) series and the Message Digest (MD) hashes of varying versions (MD2, MD4, MD5).



EXAM TIP The hashing algorithms in common use are MD2, MD4, MD5, and SHA-1, SHA-256, SHA-384, and SHA-512.



SHA


Secure Hash Algorithm (SHA) refers to a set of four hash algorithms designed and published by the National Institute of Standards and Technology (NIST) and the National Security Agency (NSA). These algorithms are included in the SHA standard Federal Information Processing Standards (FIPS) 180-2. Individually, each standard is named SHA-1, SHA-256, SHA-384, and SHA-512. The latter variants are occasionally referred to as SHA-2.


SHA-1


SHA-1, developed in 1993, was designed as the algorithm to be used for secure hashing in the U.S. Digital Signature Standard (DSS). It is modeled on the MD4 algorithm and implements fixes in that algorithm discovered by the NSA. It creates message digests 160 bits long that can be used by the Digital Signature Algorithm (DSA), which can then compute the signature of the message. This is computationally simpler, as the message digest is typically much smaller than the actual message—smaller message, less work.

SHA-1 works, as do all hashing functions, by applying a compression function to the data input. It accepts an input of up to 2^64 bits and then compresses it down to a hash of 160 bits. SHA-1 works in block mode, separating the data into words first, and then grouping the words into blocks. The words are 32-bit strings converted to hex; grouped together as 16 words, they make up a 512-bit block. If the data input to SHA-1 is not a multiple of 512 bits, the message is padded with zeros and an integer describing the original length of the message.

Once the message has been formatted for processing, the actual hash can be generated. The 512-bit blocks are taken in order (B1, B2, B3, …, Bn) until the entire message has been processed. The computation uses eighty 32-bit words, labeled W0 through W79, and two 5-word buffers. The words of the first 5-word buffer are labeled A, B, C, D, and E, and the words of the second are labeled H0, H1, H2, H3, and H4. A single-word buffer, TEMP, also exists. Before processing any blocks, the Hi values are initialized as follows:

H0 = 67452301

H1 = EFCDAB89

H2 = 98BADCFE

H3 = 10325476

H4 = C3D2E1F0

The first block then gets processed by dividing the first block into 16 words:

W0 through W15

For  t = 16 through 79


Wt  = S1(Wt-3 XOR Wt-8 XOR Wt-14 XOR Wt-16)


Let A = H0, B = H1, C = H2, D = H3, E = H4

For t  = 0 through 79

Let TEMP = S5(A) + ft(B,C,D) + E + Wt + Kt;

             E = D; D = C; C = S30(B); B = A; A = TEMP

Let H0 = H0 + A; H1 = H1 + B; H2 = H2 + C; H3 = H3 + D; H4 = H4 + E

After this has been completed for all blocks, the entire message is now represented by the 160-bit string H0H1H2H3H4.

At one time, SHA-1 was one of the more secure hash functions, but it has been found vulnerable to a collision attack. Thus, most people are suggesting that implementations of SHA-1 be moved to one of the other SHA versions. These longer versions, SHA-256, SHA-384, and SHA-512, all have longer hash results, making them more difficult to attack successfully. Their added security and resistance to attack do require more processing power to compute the hash.
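
The differing digest lengths are easy to see with Python's standard hashlib module. This short sketch simply prints the digest size and value of each SHA variant for the same arbitrary input.

import hashlib

data = b"example input"
for name in ("sha1", "sha256", "sha384", "sha512"):
    h = hashlib.new(name, data)
    print(f"{name}: {h.digest_size * 8}-bit digest -> {h.hexdigest()}")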


SHA-256


SHA-256 is similar to SHA-1, in that it will also accept input of less than 2^64 bits and reduces that input to a hash. This algorithm reduces to 256 bits instead of SHA-1's 160. Defined in FIPS 180-2 in 2002, SHA-256 is listed as an update to the original FIPS 180 that defined SHA. Similar to SHA-1, SHA-256 will accept up to 2^64 bits of input and uses 32-bit words and 512-bit blocks. Padding is added until the entire message is a multiple of 512 bits. SHA-256 uses sixty-four 32-bit words, eight working variables, and results in a hash value of eight 32-bit words, hence 256 bits.

SHA-256 is more secure than SHA-1, but the attack basis for SHA-1 can produce collisions in SHA-256 as well since they are similar algorithms. The SHA standard does have two longer versions, however.


SHA-384


SHA-384 is also similar to SHA-1, but it handles larger sets of data. SHA-384 will accept up to 2^128 bits of input, which it pads into 1024-bit blocks. SHA-384 also uses 64-bit words instead of SHA-1's 32-bit words. It uses six 64-bit words to produce the 384-bit hash value.


SHA-512


SHA-512 is structurally similar to SHA-384. It will accept the same 2^128 bits of input and uses the same 64-bit word size and 1024-bit block size. SHA-512 does differ from SHA-384 in that it uses eight 64-bit words for the final hash, resulting in 512 bits.


Message Digest


Message Digest (MD) is the generic version of one of several algorithms that are designed to create a message digest or hash from data input into the algorithm. MD algorithms work in the same manner as SHA in that they use a secure method to compress the file and generate a computed output of a specified number of bits. They were all developed by Ronald L. Rivest of MIT.


MD2


MD2 was developed in 1989 and is in some ways an early version of the later MD5 algorithm. It takes a data input of any length and produces a hash output of 128 bits. It is different from MD4 and MD5 in that MD2 is optimized for 8-bit machines, whereas the other two are optimized for 32-bit machines. As with SHA, the input data is padded to become a multiple—in this case a multiple of 16 bytes. After padding, a 16-byte checksum is appended to the message. The message is then processed in 16-byte blocks. After initialization, the algorithm invokes a compression function.

The compression function operates as shown here:

      T = 0

For J = 0 through 17

For k = 0 through 47

     T = Xk XOR St

   Xk = T

     T = (T + J) mod 256

After the function has been run for every 16 bytes of the message, the output result is a 128-bit digest. The only known attack that is successful against MD2 requires that the checksum not be appended to the message before the hash function is run. Without a checksum, the algorithm can be vulnerable to a collision attack. Some collision attacks are based upon the algorithm’s initialization vector (IV).


MD4


MD4 was developed in 1990 and is optimized for 32-bit computers. It is a fast algorithm, but it can be subject to more attacks than more secure algorithms like MD5. Like MD2, it takes a data input of some length and outputs a digest of 128 bits. The message is padded to become a multiple of 512, which is then concatenated with the representation of the message’s original length.

As with SHA, the message is then divided into blocks and also into 16 words of 32 bits. All blocks of the message are processed in three distinct rounds. The digest is then computed using a four-word buffer. The final four words remaining after compression are the 128-bit hash.

An extended version of MD4 computes the message in parallel and produces two 128-bit outputs—effectively a 256-bit hash. Even though a longer hash is produced, security has not been improved because of basic flaws in the algorithm. Cryptographer Hans Dobbertin has shown how collisions in MD4 can be found in under a minute using just a PC. This vulnerability to collisions applies to 128-bit MD4 as well as 256-bit MD4. Most people are moving away from MD4 to MD5 or a robust version of SHA.


MD5


MD5 was developed in 1991 and is structured after MD4 but with additional security to overcome the problems in MD4. Therefore, it is very similar to the MD4 algorithm, only slightly slower and more secure.

MD5 creates a 128-bit hash of a message of any length. Like MD4, it segments the message into 512-bit blocks and then into sixteen 32-bit words. First, the original message is padded to be 64 bits short of a multiple of 512 bits. Then a 64-bit representation of the original length of the message is added to the padded value to bring the entire message up to a 512-bit multiple.

After padding is complete, four 32-bit variables, A, B, C, and D, are initialized. A, B, C, and D are copied into a, b, c, and d, and then the main function begins. This has four rounds, each using a different nonlinear function 16 times. Each step applies the function to three of a, b, c, and d, then adds the result to the fourth variable, a sub-block of the text, and a constant, and then rotates that sum left by a variable number of bits specified by the step of the algorithm. The rotated value is then added to one of a, b, c, and d, and that sum replaces one of a, b, c, and d. After the four rounds are completed, a, b, c, and d are added to A, B, C, and D, and the algorithm moves on to the next block. After all blocks are completed, A, B, C, and D are concatenated to form the final output of 128 bits.

MD5 has been a fairly common integrity standard and was most commonly used as part of the NTLM (NT LAN Manager) challenge-response authentication protocol. Recently, successful attacks on the algorithm have occurred. Cryptanalysis has displayed weaknesses in the compression function; however, this weakness does not lend itself to an attack on MD5 itself. Czech cryptographer Vlastimil Klíma published work showing that MD5 collisions can be computed in about eight hours on a standard home PC. In November 2007, researchers published the ability to create two entirely different Win32 executables with different functionality but the same MD5 hash. This discovery has obvious implications for the development of malware. The combination of these problems with MD5 has pushed people to adopt a strong SHA version for security reasons.


Hashing Summary


Hashing functions are very common, and they play an important role in the way information, such as passwords, is stored securely and the way in which messages can be signed. By computing a digest of the message, less data needs to be signed by the more complex asymmetric encryption, and this still maintains assurances about message integrity. This is the primary purpose for which the protocols were designed, and their success will allow greater trust in electronic protocols and digital signatures.


Symmetric Encryption


Symmetric encryption is the older and simpler method of encrypting information. The basis of symmetric encryption is that both the sender and the receiver of the message have previously obtained the same key. This is, in fact, the basis for even the oldest ciphers: the Spartans needed the exact same size cylinder, making the cylinder the "key" to the message, and in shift ciphers both parties need to know the direction and amount of shift being performed. All symmetric algorithms are based upon this shared secret principle, including the unbreakable one-time pad method.

Figure 4-3 is a simple diagram showing the process that a symmetric algorithm goes through to provide encryption from plaintext to ciphertext. This ciphertext message is, presumably, transmitted to the message recipient who goes through the process to decrypt the message using the same key that was used to encrypt the message. Figure 4-3 shows the keys to the algorithm, which are the same value in the case of symmetric encryption.

Unlike with hash functions, a cryptographic key is involved in symmetric encryption, so there must be a mechanism for key management. Managing the cryptographic keys is critically important in symmetric algorithms because the key unlocks the data that is being protected. However, the key also needs to be known by, or transmitted in a secret way to, the party with which you wish to communicate. Key management covers everything that could happen to a key: securing it on the local computer, securing it on the remote one, protecting it from data corruption, protecting it from loss, and, probably the most important step, protecting it while it is transmitted between the two parties. Later in the chapter we will look at public key cryptography, which greatly eases the key management issue, but for symmetric algorithms the most important lesson is to store and send the key only by known secure means.

Some of the more popular symmetric encryption algorithms in use today are DES, 3DES, AES, and IDEA.



EXAM TIP Common symmetric algorithms are DES, 3DES, AES, IDEA, Blowfish, CAST, RC2, RC4, RC5, and RC6.


DES


DES, the Data Encryption Standard, was developed in response to the National Bureau of Standards (NBS), now known as the National Institute of Standards and Technology (NIST), issuing a request for proposals for a standard cryptographic algorithm in 1973. NBS received a promising response in an algorithm called Lucifer, originally developed by IBM. The NBS and the NSA worked together to analyze the algorithm’s security, and eventually DES was adopted as a federal standard in 1976.

NBS specified that the DES standard had to be recertified every five years. While DES passed without a hitch in 1983, the NSA said it would not recertify it in 1987. However, since no alternative was available for many businesses, many complaints ensued, and the NSA and NBS were forced to recertify it. The algorithm was then recertified in 1993. NIST has now certified the Advanced Encryption Standard (AES) to replace DES.

DES is what is known as a block cipher; it segments the input data into blocks of a specified size, typically padding the last block to make it a multiple of the block size required. In the case of DES, the block size is 64 bits, which means DES takes a 64-bit input and outputs 64 bits of ciphertext. This process is repeated for all 64-bit blocks in the message. DES uses a key length of 56 bits, and all security rests within the key. The same algorithm and key are used for both encryption and decryption.
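
Block formation and padding can be sketched independently of the cipher itself. The Python example below splits a message into 64-bit blocks using a PKCS#7-style pad; this is one common padding style, offered only as an illustration rather than the scheme any particular DES implementation mandates.

BLOCK = 8  # 8 bytes = 64 bits, the DES block size

def pad_and_split(message):
    # Pad the final block out to a full 8 bytes (each pad byte holds the pad length),
    # then split the padded message into 8-byte blocks.
    pad_len = (-len(message)) % BLOCK or BLOCK
    padded = message + bytes([pad_len]) * pad_len
    return [padded[i:i + BLOCK] for i in range(0, len(padded), BLOCK)]

blocks = pad_and_split(b"exactly sixteen!")
assert all(len(b) == BLOCK for b in blocks)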

Figure 4-3 Layout of a symmetric algorithm



At the most basic level, DES performs a substitution and then a permutation (a form of transposition) on the input, based upon the key. This action is called a round, and DES performs this 16 times on every 64-bit block. It works in three stages:


 
  1. 1. The algorithm accepts plaintext, P, and performs an initial permutation, IP, on P producing P0. The block is then broken into left and right halves, the left (L0) being the first 32 bits of P0 and the right (R0) being the last 32 bits of P0.
  2. 2. With L0 and R0, 16 rounds are performed until L16 and R16 are generated.
  3. 3. The inverse permutation, IP^-1, is applied to L16R16 to produce ciphertext C.

The round executes 16 times, and these rounds are where the bulk of the encryption is performed. The individual rounds work with the following computation:

Where i represents the current round,

Li = Ri-1

Ri = Li-1 XOR f(Ri-1,Ki)

Ki represents the current round’s 48-bit string derived from the 56-bit key, and f represents the diffusion function. This function operates as follows:


 
  1. 1. 48 bits are selected from the 56-bit key.
  2. 2. The right half is expanded from 32 bits to 48 bits via an expansion permutation.
  3. 3. Those 48 bits are combined via XOR with the 48 selected key bits.
  4. 4. This result is then sent through eight S-boxes, producing 32 new bits, and then it is permuted again.

After all 16 rounds have been completed and the inverse permutation has been applied, the ciphertext is output as 64 bits. Then the algorithm picks up the next 64 bits and starts all over again. This is carried on until the entire message has been encrypted with DES. As mentioned, the same algorithm and key are used to decrypt and encrypt with DES. The only difference is that the sequence of key permutations is used in reverse order.
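
The Feistel structure behind these rounds can be sketched generically, which also shows why running the same rounds with the subkeys in reverse order decrypts. In the Python sketch below, the round function and the demonstration subkeys are toy stand-ins, not the real DES f function with its S-boxes and permutations.

def feistel_rounds(left, right, subkeys, f):
    # One pass of a generic Feistel structure:
    # L(i) = R(i-1); R(i) = L(i-1) XOR f(R(i-1), K(i))
    for k in subkeys:
        left, right = right, left ^ f(right, k)
    return left, right

def toy_f(half, key):
    # Toy round function operating on 32-bit values (NOT the real DES f).
    return ((half * 31 + key) ^ (half >> 3)) & 0xFFFFFFFF

def encrypt(l, r, subkeys):
    l, r = feistel_rounds(l, r, subkeys, toy_f)
    return r, l          # final swap of the halves

def decrypt(l, r, subkeys):
    l, r = feistel_rounds(l, r, list(reversed(subkeys)), toy_f)
    return r, l

subkeys = [0x0F1571C9, 0x47D9E859, 0x2B7E1516, 0x28AED2A6]   # arbitrary demo subkeys
cl, cr = encrypt(0x01234567, 0x89ABCDEF, subkeys)
assert decrypt(cl, cr, subkeys) == (0x01234567, 0x89ABCDEF)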

Over the years that DES has been a cryptographic standard, a lot of cryptanalysis has occurred, and while the algorithm has held up very well, some problems have been encountered. Weak keys are keys that are less secure than the majority of keys allowed in the keyspace of the algorithm. In the case of DES, because of the way the initial key is modified to get the subkey, certain keys are weak keys. The weak keys equate in binary to having all 1s or all 0s, or where half the key is all 1s and the other half is all 0s, like those shown in Figure 4-4.

Semi-weak keys, with which two keys will encrypt plaintext to identical ciphertext, also exist, meaning that either key will decrypt the ciphertext. The total number of possibly weak keys is 64, which is very small compared with the 2^56 possible keys in DES.

Figure 4-4 Weak DES keys



In addition, multiple successful attacks have been mounted against DES variants that use fewer than 16 rounds. Any DES with fewer than 16 rounds can be analyzed more efficiently with differential cryptanalysis, using chosen plaintext, than via a brute-force attack. With 16 rounds and a key that is not weak, DES is reasonably secure and, amazingly, has been for more than two decades. In 1999, a distributed effort consisting of a supercomputer and 100,000 PCs over the Internet was used to break a 56-bit DES key. By attempting more than 240 billion keys per second, the effort was able to retrieve the key in less than a day. This demonstrates impressive resistance to cracking for a 20-year-old algorithm, but it also demonstrates that more stringent algorithms are needed to protect data today.


3DES


Triple DES (3DES) is a variant of DES. Depending on the specific variant, it uses either two or three keys instead of the single key that DES uses. It also spins through the DES algorithm three times via what’s called multiple encryption.

Multiple encryption can be performed in several different ways. The simplest method is just to stack algorithms on top of each other: taking plaintext, encrypting it with DES, then encrypting the first ciphertext with a different key, and then encrypting the second ciphertext with a third key. 3DES instead encrypts with one key, then decrypts with a second, and then encrypts with a third, as shown in Figure 4-5; when all the keys are the same, this construction reduces to single DES, preserving compatibility with older systems.

Figure 4-5 Diagram of 3DES



This greatly increases the number of attempts needed to retrieve the key and is a significant enhancement of security. The additional security comes with a price, however. It can take up to three times longer to compute 3DES than to compute DES. However, the advances in memory and processing power in today’s electronics should make this problem irrelevant in all devices except for very small low-power handhelds.

The only weaknesses of 3DES are those that already exist in DES. Because different keys are used with the same algorithm, the effective key length is longer, and the resulting greater resistance to brute-force attack makes 3DES stronger. 3DES is a good interim step before the new encryption standard, AES, is fully implemented to replace DES.
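
The encrypt-decrypt-encrypt composition can be sketched with any single-block cipher standing in for DES. In the Python illustration below, the "cipher" is simple modular addition, chosen only so the composition is easy to follow; it also shows how EDE remains compatible with single encryption when keys repeat.

MASK = (1 << 64) - 1   # toy 64-bit block

def toy_encrypt(key, block):
    # Placeholder single-block cipher (modular addition), standing in for DES.
    return (block + key) & MASK

def toy_decrypt(key, block):
    return (block - key) & MASK

def ede_encrypt(k1, k2, k3, block):
    # 3DES-style composition: encrypt with k1, decrypt with k2, encrypt with k3.
    return toy_encrypt(k3, toy_decrypt(k2, toy_encrypt(k1, block)))

def ede_decrypt(k1, k2, k3, block):
    # Reverse the composition with the keys applied in reverse order.
    return toy_decrypt(k1, toy_encrypt(k2, toy_decrypt(k3, block)))

assert ede_decrypt(3, 7, 11, ede_encrypt(3, 7, 11, 123456789)) == 123456789
# With k1 == k2 (or all keys equal), EDE reduces to a single encryption,
# which is how 3DES stays compatible with single DES.
assert ede_encrypt(5, 5, 9, 42) == toy_encrypt(9, 42)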


AES


Because of the advancement of technology and the progress being made in quickly retrieving DES keys, NIST put out a request for proposals for a new Advanced Encryption Standard (AES). It called for a block cipher using symmetric key cryptography and supporting key sizes of 128, 192, and 256 bits. After evaluation, NIST selected five finalists:


 
  • MARS, from IBM
  • RC6, from RSA
  • Rijndael, from Joan Daemen and Vincent Rijmen
  • Serpent, from Ross Anderson, Eli Biham, and Lars Knudsen
  • Twofish, from Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson

In the fall of 2000, NIST picked Rijndael to be the new AES. It was chosen for its overall security as well as its good performance on limited-capacity devices. Rijndael's design was influenced by Square, also written by Joan Daemen and Vincent Rijmen. Like Square, Rijndael is a block cipher separating data input into 128-bit blocks. Rijndael can also be configured to use blocks of 192 or 256 bits, but AES has standardized on 128-bit blocks. AES can have key sizes of 128, 192, and 256 bits, with the size of the key affecting the number of rounds used in the algorithm.

Like DES, AES works in three steps on every block of input data:


 
  1. 1. Add round key, performing an XOR of the block with a subkey.
  2. 2. Perform the number of normal rounds required by the key length.
  3. 3. Perform a regular round without the mix-column step found in the normal round.

After these steps have been performed, a 128-bit block of plaintext produces a 128-bit block of ciphertext. As mentioned in step 2, AES performs multiple rounds. This is determined by the key size. A key size of 128 bits requires 9 rounds, 192-bit keys will require 11 rounds, and 256-bit keys use 13 rounds. Four steps are performed in every round:


 
  1. 1. Byte sub. Each byte is replaced by its S-box substitute.
  2. 2. Shift row. Bytes are arranged in a rectangle and shifted.
  3. 3. Mix column. Matrix multiplication is performed based upon the arranged rectangle.
  4. 4. Add round key. This round's subkey is XORed in.

These steps are performed until the final round has been completed, and when the final step has been performed, the ciphertext is output.

The Rijndael algorithm is well thought-out and has suitable key length to provide security for many years to come. While no efficient attacks currently exist against AES, more time and analysis will tell if this standard can last as long as DES has.
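
In practice, AES is used through a vetted library rather than implemented by hand. The sketch below is illustrative only: it assumes the third-party Python package cryptography is installed (pip install cryptography), which is not something the exam covers, and the key size, nonce length, and message are arbitrary choices.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit AES key
nonce = os.urandom(12)                      # must be unique per message
aesgcm = AESGCM(key)

ciphertext = aesgcm.encrypt(nonce, b"attack at dawn", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"attack at dawn"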


CAST


CAST is an encryption algorithm similar to DES in its structure. It was designed by Carlisle Adams and Stafford Tavares. CAST uses a 64-bit block size for 64- and 128-bit key versions, and a 128-bit block size for the 256-bit key version. Like DES, it divides the plaintext block into a left half and a right half. The right half is then put through function f and then is XORed with the left half. This value becomes the new right half, and the original right half becomes the new left half. This is repeated for eight rounds for a 64-bit key, and the left and right output is concatenated to form the ciphertext block.

CAST supports longer key lengths than the original 64 bits. Changes to the key length affect the number of rounds: CAST-128 specifies 16 rounds and CAST-256 has 48 rounds. This algorithm in CAST-256 form was submitted for the AES standard but was not chosen. CAST has undergone thorough analysis with only minor weaknesses discovered that are dependent on low numbers of rounds. Currently, no better way is known to break high-round CAST than by brute-forcing the key, meaning that with sufficient key length, CAST should be placed with other trusted algorithms.


RC


RC is a general term for several ciphers all designed by Ron Rivest—RC officially stands for Rivest Cipher. RC1, RC2, RC3, RC4, RC5, and RC6 are all ciphers in the series. RC1 and RC3 never made it to release, but RC2, RC4, RC5, and RC6 are all working algorithms.


RC2


RC2 was designed as a DES replacement, and it is a variable-key-size block-mode cipher. The key size can be from 8 bits to 1024 bits, with the block size being fixed at 64 bits. RC2 breaks up the input blocks into four 16-bit words and then puts them through 18 rounds of one of two operations. The two operations are mix and mash. The sequence in which the algorithm works is as follows:


 
  1. 1. Initialize the input block to words R0 through R3.
  2. 2. Expand the key into K0 through K63.
  3. 3. Initialize j = 0.
  4. 4. Five mix rounds.
  5. 5. One mash round.
  6. 6. Six mix rounds.
  7. 7. One mash round.
  8. 8. Five mix rounds.

This outputs 64 bits of ciphertext for 64 bits of plaintext. The individual operations are performed as follows, with rol in this description meaning to rotate the word left.

This is the mix operation:

Ri = Ri + Kj + (Ri-1 & Ri-2) + ((~Ri-1) & Ri-3)


j = j + 1


Ri= Ri rol si

This is the mash operation:

Ri = Ri + K[Ri-1 & 63]

According to RSA, RC2 is up to three times faster than DES. RSA maintained RC2 as a trade secret for a long time, with the source code eventually being illegally posted on the Internet. The ability of RC2 to accept different key lengths is one of the larger vulnerabilities in the algorithm. Any key length below 64 bits can be easily retrieved by modern computational power.


RC5


RC5 is a block cipher, written in 1994. It has multiple variable elements: the number of rounds, the key size, and the block size. The algorithm starts by separating the input block into two words, A and B.

A = A + S0

B = B + S1

For i = 1 to r

A = ((A XOR B) <<< B) + S2i

B = ((B XOR A) <<< A) + S2i+1

A and B represent the ciphertext output. This algorithm is relatively new, but if configured to run enough rounds, RC5 seems to provide adequate security for current brute-forcing technology. Rivest recommends using at least 12 rounds. With 12 rounds in the algorithm, cryptanalysis in a linear fashion proves less effective than brute-force against RC5, and differential analysis fails for 15 or more rounds. A newer algorithm is RC6.


RC6


RC6 is based on the design of RC5. It uses a 128-bit block size, separated into four words of 32 bits each. It uses a round count of 20 to provide security, and it has three possible key sizes: 128, 192, and 256 bits. The four words are named A, B, C, and D, and the algorithm works like this:

B = B + S0

D = D + S1

            For i = 1 – 20

                 [t = (B * (2B + 1)) <<< 5

                 u = (D * (2D + 1)) <<< 5

                 A = ((A XOR t) <<< u) + S2i

                 C = ((C XOR u) <<< t) + S2i+1

          (A, B, C, D) = (B, C, D, A)]

A = A + S42

C = C + S43

The output of A, B, C, and D after 20 rounds is the ciphertext.

RC6 is a modern algorithm that runs well on 32-bit computers. With a sufficient number of rounds, the algorithm makes both linear and differential cryptanalysis infeasible. The available key lengths make brute-force attacks extremely time-consuming. RC6 should provide adequate security for some time to come.


RC4


RC4 was created before RC5 and RC6, but it differs in operation. RC4 is a stream cipher, whereas all the symmetric ciphers we have looked at so far have been block-mode ciphers. A stream-mode cipher works by enciphering the plaintext in a stream, usually bit by bit. This makes stream ciphers faster than block-mode ciphers. Stream ciphers accomplish this by performing a bitwise XOR with the plaintext stream and a generated key-stream.

RC4 operates in this manner. It was developed in 1987 and remained a trade secret of RSA until it was posted to the Internet in 1994. RC4 can use a key length of 8 to 2048 bits, though the most common versions use 128-bit keys, or if subject to the old export restrictions, 40-bit keys. The key is used to initialize a 256-byte state table. This table is used to generate the pseudo-random stream that is XORed with the plaintext to generate the ciphertext.

The operation is performed as follows:

i = 0

j = 0

i = (i + 1) mod 256

j = (j + Si) mod 256

Swap Si and Sj

t = (Si + Sj) mod 256

K = St

K is then XORed with the plaintext. Alternatively, K is XORed with the ciphertext to produce the plaintext.

The algorithm is fast, sometimes ten times faster than DES. The most vulnerable point of the encryption is the possibility of weak keys. One key in 256 can generate bytes closely correlated with key bytes.
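
Because the state table and keystream generation are so compact, the whole of RC4 fits in a short sketch. The Python version below is offered for illustration only, not for protecting real data; the key and message are arbitrary.

def rc4(key, data):
    # Key-scheduling: the key initializes the 256-byte state table.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation: the keystream byte is XORed with each data byte.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

ct = rc4(b"Key", b"Plaintext")
assert rc4(b"Key", ct) == b"Plaintext"   # the same operation decrypts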


Blowfish


Blowfish was designed in 1994 by Bruce Schneier. It is a block-mode cipher using 64-bit blocks and a variable key length from 32 to 448 bits. It was designed to run quickly on 32-bit microprocessors and is optimized for situations with few key changes. Encryption is done by separating the 64-bit input block into two 32-bit words, and then a function is executed every round. Blowfish has 16 rounds. Once the input has been split into left and right words, the following function is performed:

For I = 1 − 16

   XL = XL XOR Pi

   XR = F(XL) XOR XR

Swap XL and XR

After the final round, swap XL and XR again (undoing the last swap)

XR = XR XOR P17

XL = XL XOR P18

The two words are then recombined to form the 64-bit output ciphertext.

The only successful cryptanalysis to date against Blowfish has been against variants that used reduced rounds. There does not seem to be a weakness in the full 16-round version.


IDEA


IDEA (International Data Encryption Algorithm) started out as PES, the Proposed Encryption Standard, in 1990; it was modified to improve its resistance to differential cryptanalysis, and its name was changed to IDEA in 1992. It is a block-mode cipher using a 64-bit block size and a 128-bit key. The input plaintext is split into four 16-bit segments, A, B, C, and D. The process uses eight rounds, with each round performing the following function:

        A * S1 = X1

        B + S2 = X2

        C + S3 = X3

        D * S4 = X4

X1 XOR X3 = X5

X2 XOR X4 = X6

        X5 * S5 = X7

       X6 + X7 = X8

        X8 * S6 = X9

           X7 + X9 = X10

     X1 XOR X9 = X11

     X3 XOR X9 = X12

   X2 XOR X10 = X13

   X4 XOR X10 = X14

                X11 = A

                X13 = B

                X12 = C

                X14 = D

Then the next round starts. After eight rounds are completed, four more steps are done:

X11 * S49 = C1

X12 + S50 = C2

X13 + S51 = C3

X14 + S52 = C4

The output of the last four steps is then concatenated to form the ciphertext.

This algorithm is fairly new, but all current cryptanalysis on full, eight-round IDEA shows that the most efficient attack would be to brute-force the key. The 128-bit key would prevent this attack from being accomplished, given current computer technology. The only known issue is that IDEA is susceptible to a weak key, one made of all 0s. This weak key is easy to check for, and the weakness is simple to mitigate.


Symmetric Encryption Summary


Symmetric algorithms are important because they are comparatively fast and have few computational requirements. Their main weakness is that two geographically distant parties both need to have a key that matches exactly. In the past, keys could be much simpler and still be secure, but with today’s computational power, simple keys can be brute-forced very quickly. This means that larger and more complex keys must be used and exchanged. This key exchange is difficult because the key cannot be simple, such as a word, but must be shared in a secure manner. It might be easy to exchange a 4-bit key such as b in hex, but exchanging the 128-bit key 4b36402c5727472d5571373d22675b4b is far more difficult to do securely. This exchange of keys is greatly facilitated by our next subject, asymmetric, or public key, cryptography.


Asymmetric Encryption


Asymmetric cryptography is in many ways completely different than symmetric cryptography. While both are used to keep data from being seen by unauthorized users, asymmetric cryptography uses two keys instead of one. It was invented by Whitfield Diffie and Martin Hellman in 1975. Asymmetric cryptography is more commonly known as public key cryptography. The system uses a pair of keys: a private key that is kept secret and a public key that can be sent to anyone. The system’s security relies upon resistance to deducing one key, given the other, and thus retrieving the plaintext from the ciphertext.

Public key systems typically work by using hard math problems. One of the more common methods is through the difficulty of factoring large numbers. These functions are often called trapdoor functions, as they are difficult to process without the key, but easy to process when you have the key—the trapdoor through the function. For example, given a prime number, say 293, and another prime, such as 307, it is an easy function to multiply them together to get 89,951. Given 89,951, it is not simple to find the factors 293 and 307 unless you know one of them already. Computers can easily multiply very large primes with hundreds or thousands of digits but cannot easily factor the product.
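
To make the asymmetry concrete, the short Python sketch below multiplies the two primes from the example instantly, then recovers them only by searching for a divisor. Trial division is fine at this toy size; for a modulus hundreds of digits long, no known search of this kind finishes in any useful amount of time.

p, q = 293, 307
n = p * q                        # the easy direction: 89,951 in one step

def factor(n):
    # the hard direction: hunt for a divisor by trial division
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

print(factor(n))                 # (293, 307)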

The strength of these functions is very important: Because an attacker is likely to have access to the public key, he can run tests of known plaintext and produce ciphertext. This allows instant checking of guesses that are made about the keys of the algorithm. RSA, Diffie-Hellman, Elliptic curve cryptography (ECC), and ElGamal are all popular asymmetric protocols. We will look at all of them and their suitability for different functions.



EXAM TIP Popular asymmetric encryption algorithms are RSA, Diffie-Hellman, ElGamal, and ECC.


RSA


RSA is one of the first public key cryptosystems ever invented. It can be used for both encryption and digital signatures. RSA is named after its inventors, Ron Rivest, Adi Shamir, and Leonard Adleman, and was first published in 1977.

This algorithm uses the product of two very large prime numbers and works on the principle of difficulty in factoring such large numbers. It's best to choose large primes, from 100 to 200 digits long and of equal length. These two primes will be P and Q. Randomly choose an encryption key, E, so that E is greater than 1, less than P * Q, and odd. E must also be relatively prime to (P − 1) and (Q − 1). Then compute the decryption key D:

D = E^-1 mod ((P − 1)(Q − 1))

Now that the encryption key and decryption key have been generated, the two prime numbers can be discarded, but they should not be revealed. To encrypt a message, it should be divided into blocks less than the product of P and Q. Then,

Ci = Mi^E mod (P * Q)

C is the output block of ciphertext matching the block length of the input message, M. To decrypt the message, take the ciphertext, C, and use this function:

Mi = Ci^D mod (P * Q)

The use of the second key retrieves the plaintext of the message.
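
A toy run of these steps in Python, using the small primes from the earlier factoring example, looks like the following. Real keys use primes hundreds of digits long, and the exponent 7 is simply an illustrative value that is relatively prime to (P − 1)(Q − 1); the pow(E, -1, phi) form of the modular inverse requires Python 3.8 or later.

P, Q = 293, 307
N = P * Q                        # the public modulus
phi = (P - 1) * (Q - 1)
E = 7                            # public encryption exponent, relatively prime to phi
D = pow(E, -1, phi)              # private decryption exponent: D = E^-1 mod phi

M = 42                           # a message block smaller than N
C = pow(M, E, N)                 # encryption: C = M^E mod N
assert pow(C, D, N) == M         # decryption: M = C^D mod N recovers the plaintext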

This is a simple function, but its security has withstood the test of more than 20 years of analysis. Considering the effectiveness of RSA’s security and the ability to have two keys, why are symmetric encryption algorithms needed at all? The answer is speed. RSA in software can be 100 times slower than DES, and in hardware it can be even slower.

RSA can be used to perform both regular encryption and digital signatures. Digital signatures try to duplicate the functionality of a physical signature on a document using encryption. Typically RSA and the other public key systems are used in conjunction with symmetric key cryptography. Public key, the slower protocol, is used to exchange the symmetric key (or shared secret), and then the communication uses the faster symmetric key protocol. This process is known as electronic key exchange.

Since the security of RSA is based upon the supposed difficulty of factoring large numbers, the main weaknesses are in the implementations of the protocol. Until recently, RSA was a patented algorithm, but it was a de facto standard for many years.


Diffie-Hellman


Diffie-Hellman was created in 1976 by Whitfield Diffie and Martin Hellman. This protocol is one of the most common encryption protocols in use today. It plays a role in the electronic key exchange method of the Secure Sockets Layer (SSL) protocol. It is also used by the SSH and IPsec protocols. Diffie-Hellman is important because it enables the sharing of a secret key between two people who have not contacted each other before.

The protocol, like RSA, uses large prime numbers to work. Two users agree on two numbers, P and G, with P being a sufficiently large prime number and G being the generator. Each user picks a secret number, a and b respectively. Then both users compute their public number:

User 1 X = G^a mod P, with X being the public number

User 2 Y = G^b mod P, with Y being the public number

The users then exchange public numbers. User 1 knows P, G, a, X, and Y.

User 1 computes Ka = Y^a mod P

User 2 computes Kb = X^b mod P

With Ka = Kb = K, both users now know the shared secret K.
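
A toy exchange in Python shows how both sides arrive at the same value without ever transmitting it; the numbers here are deliberately tiny and illustrative, whereas real deployments use primes of 2048 bits or more.

P, G = 23, 5                     # public prime and generator, agreed on in the open
a, b = 6, 15                     # each user's secret number
X = pow(G, a, P)                 # user 1's public number: G^a mod P
Y = pow(G, b, P)                 # user 2's public number: G^b mod P

Ka = pow(Y, a, P)                # user 1 computes Y^a mod P
Kb = pow(X, b, P)                # user 2 computes X^b mod P
assert Ka == Kb                  # both now hold the same shared secret K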

This is the basic algorithm, and although there have been methods created to strengthen it, Diffie-Hellman is still in wide use. It remains very effective because of the nature of what it is protecting—a temporary, automatically generated secret key that is good only for a single communication session.


ElGamal


ElGamal can be used for both encryption and digital signatures. Taher ElGamal designed the system in the early 1980s. The system was never patented and is free for use. A variant of it forms the basis of the U.S. government's Digital Signature Algorithm (DSA) standard.

The system is based upon the difficulty of calculating discrete logarithms in a finite field. Three numbers are needed to generate a key pair: User 1 chooses a prime, P, and two random numbers, F and D, both less than P. The public key A is then calculated:

A = D^F mod P

Then A, D, and P are shared with the second user, with F being the private key. To encrypt a message, M, a random key, k, is chosen that is relatively prime to P − 1. Then,

C1 = D^k mod P

C2 = A^k * M mod P

C1 and C2 together make up the ciphertext. Decryption is done by

M = C2 / C1^F mod P
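
The following Python sketch runs these encryption and decryption steps with deliberately small, illustrative numbers (real parameters are far larger); the modular inverse via pow(x, -1, P) requires Python 3.8 or later.

P, D, F = 467, 2, 127            # prime P, generator D, private key F (toy values)
A = pow(D, F, P)                 # public key: A = D^F mod P

M, k = 100, 213                  # message block and a random k relatively prime to P - 1
C1 = pow(D, k, P)                # C1 = D^k mod P
C2 = (pow(A, k, P) * M) % P      # C2 = A^k * M mod P

recovered = (C2 * pow(pow(C1, F, P), -1, P)) % P   # M = C2 / C1^F mod P
assert recovered == M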

ElGamal uses a different function for digital signatures. To sign a message, M, once again choose a random value k that is relatively prime to P − 1. Then,

C1 = D^k mod P

C2 = (M − C1 * F) / k mod (P − 1)

C1 concatenated to C2 is the digital signature.

ElGamal is an effective algorithm and has been in use for some time. It is used primarily for digital signatures. Like all asymmetric cryptography, it is slower than symmetric cryptography.


ECC


Elliptic curve cryptography (ECC) works on the basis of elliptic curves. An elliptic curve is a simple function that is drawn as a gently looping curve in the X,Y plane and is defined by an equation of the form:

y^2 = x^3 + ax^2 + b

Elliptic curves work because they have a special property—you can add two points on the curve together and get a third point on the curve.

For cryptography, the elliptic curve works as a public key algorithm. Users agree on an elliptic curve and a fixed curve point, F. This information is not a shared secret, and these points can be made public without compromising the security of the system. User 1 then chooses a secret random number, K1, and computes a public key based upon a point on the curve:

P1 = K1 * F

User 2 performs the same function and generates P2. Now user 1 can send user 2 a message by generating a shared secret:

S = K1 * P2

User 2 can generate the same shared secret independently:

S = K2 * P1

This is true because

K1 * P2 = K1 * (K2 * F) = (K1 * K2) * F = K2 * (K1 * F) = K2 * P1
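
A toy elliptic curve exchange of this kind can be coded in a few lines. The sketch below uses the widely cited textbook curve y^2 = x^3 + 2x + 2 over the integers modulo 17 with fixed point F = (5, 1); the curve, the point, and the secret numbers are illustrative only, and the point-at-infinity case is omitted because these tiny values never reach it. (pow(x, -1, p) needs Python 3.8 or later.)

p, a = 17, 2                               # field modulus and curve coefficient a
F = (5, 1)                                 # the agreed-upon fixed curve point

def ec_add(P1, P2):
    # add two curve points (doubling when they are equal); no point-at-infinity handling
    (x1, y1), (x2, y2) = P1, P2
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def ec_mul(k, P):
    # k * P by repeated addition (fine for tiny k)
    R = P
    for _ in range(k - 1):
        R = ec_add(R, P)
    return R

K1, K2 = 3, 5                              # each user's secret number
P1 = ec_mul(K1, F)                         # user 1's public point
P2 = ec_mul(K2, F)                         # user 2's public point
assert ec_mul(K1, P2) == ec_mul(K2, P1)    # both compute the same shared secret point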

The security of elliptic curve systems has been questioned, mostly because of a lack of analysis. However, all public key systems rely on the difficulty of certain math problems. It would take a breakthrough in math for any of the mentioned systems to be weakened dramatically, but research into these problems to date suggests that the elliptic curve problem has been among the more resistant to incremental advances. Again, as with all cryptography algorithms, only time will tell how secure they really are.


Asymmetric Encryption Summary


Asymmetric encryption creates the possibility of digital signatures and also corrects the main weakness of symmetric cryptography. The ability to send messages securely without the sender and receiver having had prior contact has become one of the basic requirements of secure communication. Digital signatures will enable faster and more efficient exchange of all kinds of documents, including legal documents. With strong algorithms and good key lengths, security can be assured.


Steganography


Steganography, an offshoot of cryptography technology, gets its meaning from the Greek steganos meaning covered. Invisible ink placed on a document hidden by innocuous text is an example of a steganographic message. Another example is a tattoo placed on the top of a person’s head, visible only when the person’s hair is shaved off.

Hidden writing in the computer age relies on a program to hide data inside other data. The most common application is the concealing of a text message in a picture file. The Internet contains billions of image files, allowing a hidden message to be located almost anywhere without being discovered. The nature of image files also makes a hidden message difficult to detect. While it is most common to hide messages inside images, they can also be hidden in video and audio files.

The advantage of steganography over cryptography is that the messages do not attract attention, and this difficulty in detecting the hidden message provides an additional barrier to analysis. The data that is hidden in a steganographic message is frequently also encrypted, so should it be discovered, the message will remain secure. Steganography has many uses, but the most publicized are hiding illegal material, often pornography, and allegedly enabling covert communication by terrorist networks. While there is no direct evidence that terrorists use steganography, the techniques have been documented in some of their training materials.

Steganographic encoding can be used in many ways and through many different media. Covering them all is beyond the scope of this book, but we will discuss one of the most common ways to encode into an image file: LSB encoding. LSB, or least significant bit, is a method of encoding information into an image while altering the actual visual image as little as possible. A computer image is made up of thousands or millions of pixels, all defined by 1s and 0s. If an image is composed of Red Green Blue (RGB) values, each pixel has an RGB value represented numerically from 0 to 255. For example, 0,0,0 is black, and 255,255,255 is white, which can also be represented as 00000000, 00000000, 00000000 for black and 11111111, 11111111, 11111111 for white. Given a white pixel, editing the least significant bit of each value to produce 11111110, 11111110, 11111110 changes the color. The change in color is undetectable to the human eye, but in an image with a million pixels, this creates a 125KB area in which to store a message.
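
A bare-bones LSB encoder can be written in a few lines of Python. The sketch below hides the bits of a short message in the least significant bit of each color value of a list of white RGB pixels; a real tool would read and write an actual image file, which is omitted here to keep the example self-contained.

message = "Hi"
bits = ''.join(format(byte, '08b') for byte in message.encode())
pixels = [(255, 255, 255)] * ((len(bits) + 2) // 3)      # enough white pixels to hold the bits

stego, i = [], 0
for r, g, b in pixels:
    new_pixel = []
    for value in (r, g, b):
        if i < len(bits):
            value = (value & 0xFE) | int(bits[i])         # overwrite the least significant bit
            i += 1
        new_pixel.append(value)
    stego.append(tuple(new_pixel))

# Extraction simply reads the LSBs back out and regroups them into bytes.
read_bits = ''.join(str(v & 1) for px in stego for v in px)[:len(bits)]
assert bytes(int(read_bits[j:j + 8], 2) for j in range(0, len(read_bits), 8)).decode() == "Hi"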


Cryptography Algorithm Use


The use of cryptographic algorithms grows every day. More and more information becomes digitally encoded and placed online, and all of this data needs to be secured. The best way to do that with current technology is to use encryption. This section considers some of the tasks cryptographic algorithms accomplish and those for which they are best suited. Security is typically defined as a product of five components: confidentiality, integrity, availability, authentication, and nonrepudiation. Encryption addresses four of these five components: confidentiality, integrity, nonrepudiation, and authentication.


Confidentiality


Confidentiality typically comes to mind when the term security is brought up. Confidentiality is the ability to keep some piece of data a secret. In the digital world, encryption excels at providing confidentiality.

Confidentiality is used on stored data and on transmitted data. In both cases, symmetric encryption is favored because of its speed and because some asymmetric algorithms can significantly increase the size of the object being encrypted. In the case of a stored item, a public key is typically unnecessary, as the item is being encrypted to protect it from access by others. In the case of transmitted data, public key cryptography is typically used to exchange the secret key, and then symmetric cryptography is used to ensure the confidentiality of the data being sent.

Asymmetric cryptography does protect confidentiality, but its size and speed make it more efficient at protecting the confidentiality of small units for tasks such as electronic key exchange. In all cases, the strength of the algorithms and the length of the keys ensure the secrecy of the data in question.


Integrity


Integrity is better known as message integrity, and it is a crucial component of message security. When a message is sent, both the sender and recipient need to know that the message was not altered in transmission. This is especially important for legal contracts—recipients need to know that the contracts have not been altered. Signers also need a way to validate that a contract they sign will not be altered in the future.

Integrity is provided with one-way hash functions and digital signatures. The hash functions compute the message digests, and this guarantees the integrity of the message by allowing easy testing to determine whether any part of the message has been changed. If the digest computed by the recipient does not match the digest that was sent, the users know the message was intercepted and altered, and they can have it resent.
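
A quick illustration of this check, using Python's standard hashlib module with SHA-256 (any strong hash would serve the same role):

import hashlib

original = b"Pay the contractor $100"
digest = hashlib.sha256(original).hexdigest()         # sender computes and sends this digest

received = b"Pay the contractor $900"                 # message altered in transit
if hashlib.sha256(received).hexdigest() != digest:
    print("Integrity check failed; request retransmission")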

This hash value is combined with asymmetric cryptography by taking the message’s hash value and encrypting it with the user’s private key. This lets anyone with the user’s public key decrypt the hash and compare it to the locally computed hash, ensuring not only the integrity of the message but positively identifying the sender.


Nonrepudiation


An item of some confusion, the concept of nonrepudiation is actually fairly simple. Nonrepudiation means that the message sender cannot later deny that she sent the message. This is important in electronic exchanges of data, because of the lack of face-to-face meetings. Nonrepudiation is based upon public key cryptography and the principle of only you knowing your private key. The presence of a message signed by you, using your private key, which nobody else should know, is an example of nonrepudiation. When a third party can check your signature using your public key, that disproves any claim that you were not the one who actually sent the message. Nonrepudiation is tied to asymmetric cryptography and cannot be implemented with symmetric algorithms.


Authentication


Authentication lets you prove you are who you say you are. Authentication is similar to nonrepudiation, except that authentication often occurs as communication begins, not after. Authentication is also typically used in both directions as part of a protocol.

Authentication can be accomplished in a multitude of ways, the most basic being the use of a simple password. Every time you sign in to check your e-mail, you authenticate yourself to the server. This process can grow to need two or three identifying factors, such as a password, a token (such as a digital certificate), and a biometric (such as a fingerprint).

Digital certificates are a form of token. Digital certificates are public encryption keys that have been verified by a trusted third party. When you log in to a secure web site, one-way authentication occurs. You want to know that you are logging into the server that you intend to log into, so your browser checks the server’s digital certificate. This token is digitally signed by a trusted third party, assuring you that the server is genuine. This authentication is one way because the server does not need to know that you are who you say you are—it will authenticate your credit card later on. The other option, two-way authentication, can work the same way: you send your digital certificate signed by a third party, and the other entity with which you are communicating sends its certificate.

While symmetric encryption can be used as a simple manner of authentication (only the authorized user should know the secret, after all), asymmetric encryption is better suited to show, via digital signatures and certificates, that you are who you say you are.


Digital Signatures


Digital signatures have been touted as the key to truly paperless document flow, and they do have promise for improving the system. Digital signatures are based on both hashing functions and asymmetric cryptography. Both encryption methods play an important role in signing digital documents.

Unprotected digital documents are very easy for anyone to change. If a document is edited after an individual signs it, it is important that any modification can be detected. To protect against document editing, hashing functions are used to create a digest of the message that is unique and easily reproducible by both parties. This ensures that the message integrity is complete.

Protection must also be provided to ensure that the intended party actually did sign the message, and that someone did not edit the message and the hash of the message. This is done by asymmetric encryption. The properties of asymmetric encryption allow anyone to use a person’s public key to generate a message that can be read only by that person, as this person is theoretically the only one with access to the private key. In the case of digital signatures, this process works exactly in reverse. When a user can decrypt the hash with the public key of the originator, that user knows that the hash was encrypted by the corresponding private key. This use of asymmetric encryption is a good example of nonrepudiation, because only the signer would have access to the private key. This is how digital signatures work, by using integrity and nonrepudiation to prove not only that the right people signed, but also what they signed.
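
Putting the two pieces together, the toy Python sketch below signs a message by encrypting its SHA-256 digest with the small RSA-style private key used earlier in the chapter and verifies it with the matching public key. Real signature schemes add padding and use full-size keys, so this shows only the bare mechanism.

import hashlib

P, Q, E = 293, 307, 7                                  # toy key material from the RSA example
N, phi = P * Q, (P - 1) * (Q - 1)
D = pow(E, -1, phi)                                    # private exponent (Python 3.8+)

message = b"I agree to the terms of the contract."
digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
signature = pow(digest, D, N)                          # "sign": encrypt the digest with the private key

# Anyone with the public key (E, N) can verify by decrypting the signature
# and comparing it to a locally computed digest of the received message.
verified = pow(signature, E, N) == int.from_bytes(hashlib.sha256(message).digest(), "big") % N
assert verified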


Key Escrow


The impressive growth of the use of encryption technology has led to new methods for handling keys. Encryption is adept at hiding secrets, and with computer technology being affordable to everyone, criminals and other ill-willed people began using it to conceal communications and business dealings from law enforcement agencies. Because they could not break the encryption, government agencies began asking for key escrow. Key escrow is a system by which your private key is kept both by you and by the government. This allows people with a court order to retrieve your private key to gain access to anything encrypted with your public key. The data is essentially encrypted by your key and the government key, giving the government access to your plaintext data.

Key escrow can negatively impact the security provided by encryption, because the government requires a huge, complex infrastructure of systems to hold every escrowed key, and the security of those systems is less effective than simply memorizing your own key. However, there are two sides to the key escrow coin. Without a practical way to recover a key if or when it is lost or the key holder dies, for example, some important information will be lost forever. Such issues will affect the design and security of encryption technologies for the foreseeable future.



EXAM TIP Key escrow can solve many problems resulting from an inaccessible key; the nature of cryptography makes the data inaccessible without the key.


Cryptographic Applications


A few applications can be used to encrypt data conveniently on your personal computer. (This is by no means a complete list of every application.)

Pretty Good Privacy (PGP) is mentioned in this book because it is a useful protocol suite. Created by Philip Zimmermann in 1991, it passed through several versions that were available for free under a noncommercial license. PGP applications can be plugged into popular e-mail programs to handle the majority of day-to-day encryption tasks using a combination of symmetric and asymmetric encryption protocols. One of the unique features of PGP is its ability to use both symmetric and asymmetric encryption methods, accessing the strengths of each method and avoiding the weaknesses of each as well. Symmetric keys are used for bulk encryption, taking advantage of the speed and efficiency of symmetric encryption. The symmetric keys are passed using asymmetric methods, capitalizing on the flexibility of this method. PGP is now sold as a commercial application with home and corporate versions. Depending on the version, PGP can perform file encryption, whole disk encryption, and public key encryption to protect e-mail.

TrueCrypt is an open source solution for encryption. It is designed for symmetric disk-based encryption of your files. It features AES ciphers and the ability to create a deniable volume (encryption stored within encryption) so that the hidden volume cannot be reliably detected. TrueCrypt can perform file encryption and whole disk encryption. Whole disk encryption encrypts the entire hard drive of a computer, including the operating system.

FreeOTFE is similar to TrueCrypt. It offers “on-the-fly” disk encryption as an open source freely downloadable application. It can encrypt files up to entire disks with several popular ciphers including AES.

GnuPG, or GNU Privacy Guard, is an open source implementation of the OpenPGP standard. This command line–based tool is a public key encryption program designed to protect electronic communications such as e-mail. It operates similarly to PGP and includes a method for managing public/private keys.

File system encryption is becoming a standard means of protecting data while in storage. Even hard drives are available with built-in AES encryption. Microsoft expanded its encrypting file system (EFS), available since the NT operating system, with BitLocker, a boot-sector encryption method that protects data on the Vista operating system. BitLocker utilizes AES encryption to encrypt every file on the hard drive automatically. All encryption occurs in the background, and decryption occurs seamlessly when data is requested. The decryption key can be stored in the Trusted Platform Module (TPM) or on a USB key.


Chapter Review


Cryptography is in many ways the key to security in modern systems. The progression of technology has allowed systems to be built to retrieve the secrets of others. More and more information is being digitized and then stored and sent via computers. Storing and transmitting valuable data while keeping it secure is best accomplished with encryption.

In this chapter, you have seen the message digest one-way functions for passwords and message integrity checks. You have also examined the symmetric encryption algorithms used for encrypting data at high speeds. Finally, you have learned about the operation of asymmetric cryptography that is used for key management and digital signatures. These are three distinct types of encryption with different purposes.

The material presented in this chapter is based on current algorithms and techniques. When implemented properly, they will improve security; however, they need to be updated as encryption strength decays. Encryption is based on traditionally difficult mathematical problems, and it can keep data secure only for a limited amount of time, as technology for solving those problems improves—for example, encryption that was incredibly effective 50 years ago is now easily broken. However, current encryption methods can provide a reasonable assurance of security.


Questions


To further help you prepare for the Security+ exam, and to test your level of preparedness, answer the following questions and then check your answers against the list of correct answers at the end of the chapter.


 
  1. What is the biggest drawback to symmetric encryption?
     A. It is too easily broken.
     B. It is too slow to be easily used on mobile devices.
     C. It requires a key to be securely shared.
     D. It is available only on UNIX.
  2. What is Diffie-Hellman most commonly used for?
     A. Symmetric encryption key exchange
     B. Signing digital contracts
     C. Secure e-mail
     D. Storing encrypted passwords
  3. What is AES meant to replace?
     A. IDEA
     B. DES
     C. Diffie-Hellman
     D. MD5
  4. What kind of encryption cannot be reversed?
     A. Asymmetric
     B. Hash
     C. Linear cryptanalysis
     D. Authentication
  5. What is public key cryptography a more common name for?
     A. Asymmetric encryption
     B. SHA
     C. An algorithm that is no longer secure against cryptanalysis
     D. Authentication
  6. How many bits are in a block of the SHA algorithm?
     A. 128
     B. 64
     C. 512
     D. 1024
  7. How does elliptical curve cryptography work?
     A. It multiplies two large primes.
     B. It uses the geometry of a curve to calculate three points.
     C. It shifts the letters of the message in an increasing curve.
     D. It uses graphs instead of keys.
  8. A good hash function is resistant to what?
     A. Brute-forcing
     B. Rainbow tables
     C. Interception
     D. Collisions
  9. How is 3DES an improvement over normal DES?
     A. It uses public and private keys.
     B. It hashes the message before encryption.
     C. It uses three keys and multiple encryption and/or decryption sets.
     D. It is faster than DES.
  10. What is the best kind of key to have?
     A. Easy to remember
     B. Long and random
     C. Long and predictable
     D. Short
  11. What makes asymmetric encryption better than symmetric encryption?
     A. It is more secure.
     B. Key management is part of the algorithm.
     C. Anyone with a public key could decrypt the data.
     D. It uses a hash.
  12. What kinds of encryption does a digital signature use?
     A. Hashing and asymmetric
     B. Asymmetric and symmetric
     C. Hashing and symmetric
     D. All of the above
  13. What does differential cryptanalysis require?
     A. The key
     B. Large amounts of plaintext and ciphertext
     C. Just large amounts of ciphertext
     D. Computers able to guess at key values faster than a billion times per second
  14. What is a brute-force attack?
     A. Feeding certain plaintext into the algorithm to deduce the key
     B. Capturing ciphertext with known plaintext values to deduce the key
     C. Sending every key value at the algorithm to find the key
     D. Sending two large men to the key owner’s house to retrieve the key
  15. What is key escrow?
     A. Printing out your private key
     B. How Diffie-Hellman exchanges keys
     C. When the government keeps a copy of your key
     D. Rijndael

Answers


 
  1. C. In symmetric encryption, the key must be securely shared. This can be complicated because long keys are required for good security.
  2. A. Diffie-Hellman is most commonly used to protect the exchange of keys used to create a connection using symmetric encryption. It is often used in Transport Layer Security (TLS) implementations for protecting secure web pages.
  3. B. AES, or Advanced Encryption Standard, is designed to replace the old U.S. government standard DES.
  4. B. Hash functions are one-way and cannot be reversed to provide the original plaintext.
  5. A. Asymmetric encryption is another name for public key cryptography.
  6. C. 512 bits make up a block in SHA.
  7. B. Elliptical curve cryptography uses two points to calculate a third point on the curve.
  8. D. A good hash algorithm is resistant to collisions, or two different inputs hashing to the same value.
  9. C. 3DES uses multiple keys and multiple encryption or decryption rounds to improve security over regular DES.
  10. B. The best encryption key is one that is long and random, to reduce the predictability of the key.
  11. B. In public key cryptography, only the private keys are secret, so key management is built into the algorithm.
  12. A. Digital signatures use hashing and asymmetric encryption.
  13. B. Differential cryptanalysis requires large amounts of plaintext and ciphertext.
  14. C. Brute-forcing is the attempt to use every possible key to find the correct one.
  15. C. When the government keeps a copy of your private key, this is typically referred to as key escrow.


CHAPTER 5
Public Key Infrastructure


 
  • Learn the basics of public key infrastructures
  • Understand certificate authorities and repositories
  • Understand registration authorities
  • Understand the relationship between trust and certificate verification
  • Understand how to use digital certificates
  • Understand centralized and decentralized infrastructures
  • Understand public and in-house certificate authorities

Public key infrastructures (PKIs) are becoming a central security foundation for managing identity credentials in many companies. The technology manages the issue of binding public keys and identities across multiple applications. The other approach, without PKIs, is to implement many different security solutions and hope for interoperability and equal levels of protection.

PKIs comprise components that include certificates, registration and certificate authorities, and a standard process for verification. PKI is about managing the sharing of trust and using a third party to vouch for the trustworthiness of a claim of ownership over a credential document, called a certificate.


The Basics of Public Key Infrastructures


A PKI provides all the components necessary for different types of users and entities to be able to communicate securely and in a predictable manner. A PKI is made up of hardware, applications, policies, services, programming interfaces, cryptographic algorithms, protocols, users, and utilities. These components work together to allow communication to take place using public key cryptography and asymmetric keys for digital signatures, data encryption, and integrity. (Refer to Chapter 4 if you need a refresher on these concepts.) Although many different applications and protocols can provide the same type of functionality, constructing and implementing a PKI boils down to establishing a level of trust.

If, for example, John and Diane want to communicate securely, John can generate his own public/private key pair and send his public key to Diane, or he can place his public key in a directory that is available to everyone. If Diane receives John’s public key, either from him or from a public directory, how does she know it really came from John? Maybe another individual, Katie, is masquerading as John and has replaced John’s public key with her own, as shown in Figure 5-1. If this took place, Diane would believe that her messages could be read only by John and that the replies were actually from him. However, she would actually be communicating with Katie. What is needed is a way to verify an individual’s identity, to ensure that a person’s public key is bound to their identity and thus ensure that the previous scenario (and others) cannot take place.

In PKI environments, entities called registration authorities and certificate authorities (CAs) provide services similar to those of the Department of Motor Vehicles (DMV). When John goes to register for a driver’s license, he has to prove his identity to the DMV by providing his passport, birth certificate, or other identification documentation. If the DMV is satisfied with the proof John provides (and John passes a driving test), the DMV will create a driver’s license that can then be used by John to prove his identity. Whenever John needs to identify himself, he can show his driver’s license. Although many people may not trust John to identify himself truthfully, they do trust the third party, the DMV.

Figure 5-1 Without PKIs, individuals could spoof others’ identities.



In the PKI context, while some variations exist in specific products, the registration authority will require proof of identity from the individual requesting a certificate and will validate this information. The registration authority will then advise the CA to generate a certificate, which is analogous to a driver’s license. The CA will digitally sign the certificate using its private key. The use of the private key ensures to the recipient that the certificate came from the CA. When Diane receives John’s certificate and verifies that it was actually digitally signed by a CA that she trusts, she will believe that the certificate is actually John’s—not because she trusts John, but because she trusts the entity that is vouching for his identity (the CA).

This is commonly referred to as a third-party trust model. Public keys are components of digital certificates, so when Diane verifies the CA’s digital signature, this verifies that the certificate is truly John’s and that the public key the certificate contains is also John’s. This is how John’s identity is bound to his public key.

This process allows John to authenticate himself to Diane and others. Using the third-party certificate, John can communicate with her, using public key encryption without prior communication or a preexisting relationship. Once Diane is convinced of the legitimacy of John’s public key, she can use it to encrypt and decrypt messages between herself and John, as illustrated in Figure 5-2.

Numerous applications and protocols can generate public/private key pairs and provide functionality similar to what a PKI provides, but no trusted third party is available for both of the communicating parties. For each party to choose to communicate this way without a third party vouching for the other’s identity, the two must choose to trust each other and the communication channel they are using. In many situations, it

Figure 5-2 Public keys are components of digital certificates.



is impractical and dangerous to arbitrarily trust an individual you do not know, and this is when the components of a PKI must fall into place—to provide the necessary level of trust you cannot, or choose not to, provide on your own.

What does the “infrastructure” in “public key infrastructure” really mean? An infrastructure provides a sustaining groundwork upon which other things can be built. So an infrastructure works at a low level to provide a predictable and uniform environment that allows other higher level technologies to work together through uniform access points. The environment that the infrastructure provides allows these higher level applications to communicate with each other and gives them the underlying tools to carry out their tasks.


Certificate Authorities


The CA is the trusted authority that certifies individuals’ identities and creates electronic documents indicating that individuals are who they say they are. The electronic document is referred to as a digital certificate, and it establishes an association between the subject’s identity and a public key. The private key that is paired with the public key in the certificate is stored separately. As noted in Chapter 4, it is important to safeguard the private key, and it typically never leaves the machine or device where it was created.

The CA is more than just a piece of software, however; it is actually made up of the software, hardware, procedures, policies, and people who are involved in validating individuals’ identities and generating the certificates. This means that if one of these components is compromised, it can negatively affect the CA overall and can threaten the integrity of the certificates it produces.

Every CA should have a certification practices statement (CPS) that outlines how identities are verified; the steps the CA follows to generate, maintain, and transmit certificates; and why the CA can be trusted to fulfill its responsibilities. It describes how keys are secured, what data is placed within a digital certificate, and how revocations will be handled. If a company is going to use and depend on a public CA, the company’s security officers, administrators, and legal department should review the CA’s entire CPS to ensure that it will properly meet the company’s needs, and to make sure that the level of security claimed by the CA is high enough for their use and environment. A critical aspect of a PKI is the trust between the users and the CA, so the CPS should be reviewed and understood to ensure that this level of trust is warranted.

The certificate server is the actual service that issues certificates based on the data provided during the initial registration process. The server constructs and populates the digital certificate with the necessary information and combines the user’s public key with the resulting certificate. The certificate is then digitally signed with the CA’s private key. (To learn more about how digital signatures are created and verified, review Chapter 4.)



How Do We Know We Can Actually Trust a CA?

This question is part of the continuing debate on how much security PKIs actually provide. Overall, people put a lot of faith in a CA. The companies that provide CA services understand this and also understand that their business is based on their reputation. If a CA were compromised or did not follow through on its various responsibilities, word would get out and it would quickly lose customers and business. CAs work to ensure the reputation of their products and services by implementing very secure facilities, methods, procedures, and personnel. But it is up to the company or individual to determine what degree of trust can actually be given and what level of risk is acceptable.



Registration Authorities


The registration authority (RA) is the component that accepts a request for a digital certificate and performs the necessary steps of registering and authenticating the person requesting the certificate. The authentication requirements differ depending on the type of certificate being requested.

The types of certificates available can vary between different CAs, but usually at least three different types are available, and they are referred to as classes:


 
  • Class 1 A Class 1 certificate is usually used to verify an individual’s identity through e-mail. A person who receives a Class 1 certificate can use his public/private key pair to digitally sign e-mail and encrypt message contents.
  • Class 2 A Class 2 certificate can be used for software signing. A software vendor would register for this type of certificate so it could digitally sign its software. This provides integrity for the software after it is developed and released, and it allows the receiver of the software to verify from where the software actually came.
  • Class 3 A Class 3 certificate can be used by a company to set up its own CA, which will allow it to carry out its own identification verification and generate certificates internally.

Each higher class of certificate can carry out more powerful and critical tasks than the one before it. This is why the different classes have different requirements for proof of identity. If you want to receive a Class 1 certificate, you may only be asked to provide your name, e-mail address, and physical address. For a Class 2 certification, you may need to provide the RA with more data, such as your driver’s license, passport, and company information that can be verified. To obtain a Class 3 certificate, you will be asked to provide even more information and most likely will need to go to the RA’s office for a face-to-face meeting. Each CA will outline the certification classes it provides and the identification requirements that must be met to acquire each type of certificate.

In most situations, when a user requests a Class 1 certificate, the registration process will require the user to enter specific information into a web-based form. The web page will have a section that accepts the user’s public key, or it will step the user through creating a public/private key pair, which will allow the user to choose the size of the keys to be created. Once these steps have been completed, the public key is attached to the certificate registration form and both are forwarded to the RA for processing. The RA is responsible only for the registration process and cannot actually generate a certificate. Once the RA is finished processing the request and verifying the individual’s identity, the RA will send the request to the CA. The CA will use the RA-provided information to generate a digital certificate, integrate the necessary data into the certificate fields (user identification information, public key, validity dates, proper use for the key and certificate, and so on), and send a copy of the certificate to the user. These steps are shown in Figure 5-3. The certificate may also be posted to a publicly accessible directory so that others can access it.

Note that a 1:1 correspondence does not necessarily exist between identities and certificates. An entity can have multiple key pairs, using separate public keys for separate purposes. Thus, an entity can have multiple certificates, each attesting to separate public key ownership. It is also possible to have different classes of certificates, again with different keys. This flexibility allows entities total discretion in how they manage


Figure 5-3 Steps for obtaining a digital certificate


their keys, and the PKI manages the complexity by using a unified process that allows key verification through a common interface.



EXAM TIP The RA verifies the identity of the certificate requestor on behalf of the CA. The CA generates the certificate using information forwarded by the RA.

If an application creates a key store that can be accessed by other applications, it will provide a standardized interface, called the application programming interface (API). In Netscape and UNIX systems, this interface is usually PKCS #11, and in Microsoft applications the interface is Crypto API (CAPI). As an example, Figure 5-4 shows that application A went through the process of registering a certificate and generating a key pair. It created a key store that provides an interface to allow other applications to communicate with it and use the items held within the store.

The local key store is just one location where these items can be held. Often the digital certificate and public key are also stored in a certificate repository (as discussed in the “Certificate Repositories” section of this chapter) so that it is available to a subset of individuals.



Sharing Stores

Different applications from the same vendor may share key stores. Microsoft applications keep a user’s keys and certificates in a Registry entry within that particular user’s profile. The applications save and retrieve them from this single location, or key store.


Figure 5-4 Some key stores can be shared by different applications.




Local Registration Authorities


A local registration authority (LRA) performs the same functions as an RA, but the LRA is closer to the end users. This component is usually implemented in companies that have their own internal PKIs and have distributed sites. Each site has users that need RA services, so instead of requiring them to communicate with one central RA, each site can have its own LRA. This reduces the amount of traffic that would be created by several users making requests across wide area network (WAN) lines. The LRA will perform identification, verification, and registration functions. It will then send the request, along with the user’s public key, to a centralized CA so that the certificate can be generated. It acts as an interface between the users and the CA. LRAs simplify the RA/CA process for entities that desire certificates only for in-house use.


Certificate Repositories


Once the requestor’s identity has been proven, a certificate is registered with the public side of the key pair provided by the requestor. Public keys must be available to anybody who requires them to communicate within a PKI environment. These keys, and their corresponding certificates, are usually held in a publicly available repository. Repository is a general term that describes a centralized directory that can be accessed by a subset of individuals. The directories are usually Lightweight Directory Access Protocol (LDAP)-compliant, meaning that they can be accessed and searched via LDAP.

When an individual initializes communication with another, the sender can send her certificate and public key to the receiver, which will allow the receiver to communicate with the sender using encryption or digital signatures (or both) without needing to track down the necessary public key in a certificate repository. This is equivalent to the sender saying, “If you would like to encrypt any future messages you send to me, or if you would like the ability to verify my digital signature, here are the necessary components.” But if a person wants to encrypt the first message sent to the receiver, the sender will need to find the receiver’s public key in a certificate repository. (For a refresher on how public and private keys come into play with encryption and digital signatures, refer to Chapter 4.)

A certificate repository is a holding place for individuals’ certificates and public keys that are participating in a particular PKI environment. The security requirements for repositories themselves are not as high as those needed for actual CAs and for the equipment and software used to carry out CA functions. Since each certificate is digitally signed by the CA, if a certificate stored in the certificate repository is modified, the recipient would be able to detect this change and not accept the certificate as valid.


Trust and Certificate Verification


We need to use a PKI if we do not automatically trust individuals we do not know. Security is about being suspicious and being safe, so we need a third party that we do trust to vouch for the other individual before confidence can be instilled and sensitive communication can take place. But what does it mean that we trust a CA, and how can we use this to our advantage?



Distinguished Names

A distinguished name is a label that follows the X.500 standard. This standard defines a naming convention that can be employed so that each subject within an organization has a unique name. An example is {Country = US, Organization = Real Secure, Organizational Unit = R&D, Location = Washington}. CAs use distinguished names to identify the owners of specific certificates.


When a user chooses to trust a CA, she will download that CA’s digital certificate and public key, which will be stored on her local computer. Most browsers have a list of CAs configured to be trusted by default, so when a user installs a new web browser, several of the most well-known and most trusted CAs will be trusted without any change of settings. An example of this listing is shown in Figure 5-5.

In the Microsoft CAPI environment, the user can add and remove CAs from this list as needed. In production environments that require a higher degree of protection, this list will be pruned, and possibly the only CAs listed will be the company’s internal CAs. This ensures that digitally signed software will be automatically installed only if it was signed by the company’s CA. Other products, such as Entrust, use centrally controlled policies to determine which CAs are to be trusted instead of expecting the user to make these critical decisions.

A number of steps are involved in checking the validity of a message. Suppose, for example, that Maynard receives a digitally signed message from Joyce, whom he does not know or trust. Joyce has also included her digital certificate with her message, which has her public key embedded within it. Before Maynard can be sure of the authenticity of this message, he has some work to do. The steps are illustrated in Figure 5-6.

Figure 5-5 Browsers have a long list of CAs configured to be trusted by default.




Figure 5-6 Steps for verifying the authenticity and integrity of a certificate


First, Maynard will see which CA signed Joyce’s certificate and compare it to the list of CAs he has configured within his computer. He trusts the CAs in his list and no others. (If the certificate was signed by a CA he does not have in the list, he would not accept the certificate as being valid, and thus he could not be sure that this message was actually sent from Joyce or that the attached key was actually her public key.)

Maynard sees that the CA that signed Joyce’s certificate is indeed in his list of trusted CAs, so he now needs to verify that the certificate has not been altered. Using the CA’s public key and the digest of the certificate, Maynard can verify the integrity of the certificate. Then Maynard can be assured that this CA did actually create the certificate, so he can now trust the origin of Joyce’s certificate. The use of digital signatures allows certificates to be saved in public directories without the concern of them being accidentally or intentionally altered. If a user extracts a certificate from a repository and creates a message digest value that does not match the digital signature embedded within the certificate itself, that user will know that the certificate has been modified by someone other than the CA, and he will know not to accept the validity of the corresponding public key. Similarly, an attacker could not create a new message digest, encrypt it, and embed it within the certificate because he would not have access to the CA’s private key.

But Maynard is not done yet. He needs to be sure that the issuing CA has not revoked this certificate. The certificate also has start and stop dates, indicating a time during which the certificate is valid. If the start date hasn’t happened yet, or the stop date has been passed, the certificate is not valid. Maynard reviews these dates to make sure the certificate is still deemed valid.

Another step Maynard may go through is to check whether this certificate has been revoked for any reason, so he will refer to a list of revoked certificates to see if Joyce’s certificate is listed. The revocation list could be checked directly with the CA that issued the certificate or via a specialized online service that supports the Online Certificate Status Protocol (OCSP). (Certificate revocation and list distribution are explained in the “Certificate Lifecycles” section, later in this chapter.)

To recap, the following steps are required for validating a certificate:


 
  1. Compare the CA that digitally signed the certificate to a list of CAs that have already been loaded into the receiver’s computer.
  2. Calculate a message digest for the certificate.
  3. Use the CA’s public key to decrypt the digital signature and recover what is claimed to be the original message digest embedded within the certificate (validating the digital signature).
  4. Compare the two resulting message digest values to ensure the integrity of the certificate.
  5. Review the identification information within the certificate, such as the e-mail address.
  6. Review the validity dates.
  7. Check a revocation list to see if the certificate has been revoked.

Maynard now trusts that this certificate is legitimate and that it belongs to Joyce. Now what does he need to do? The certificate holds Joyce’s public key, which he needs to validate the digital signature she appended to her message, so Maynard extracts Joyce’s public key from her certificate, runs her message through a hashing algorithm, and calculates a message digest value of X. He then uses Joyce’s public key to decrypt her digital signature (remember that a digital signature is just a message digest encrypted with a private key). This decryption process provides him with another message digest of value Y. Maynard compares values X and Y, and if they are the same, he is assured that the message has not been modified during transmission. Thus he has confidence in the integrity of the message. But how does Maynard know that the message actually came from Joyce? Because he can decrypt the digital signature using her public key, this indicates that only the associated private key could have been used. There is a miniscule risk that someone could create an identical key pair, but given the enormous keyspace for public keys, this is impractical. The public key can only decrypt something that was encrypted with the related private key, and only the owner of the private key is supposed to have access to it. Maynard can be sure that this message came from Joyce.

After all of this he reads her message, which says, “Hi. How are you?” All of that work just for this message? Maynard’s blood pressure would surely go through the roof if he had to do all of this work only to end up with a short and not very useful message. Fortunately, all of this PKI work is performed without user intervention and happens behind the scenes. Maynard didn’t have to exert any energy. He simply replies, “Fine. How are you?”


Digital Certificates


A digital certificate binds an individual’s identity to a public key, and it contains all the information a receiver needs to be assured of the identity of the public key owner. After an RA verifies an individual’s identity, the CA generates the digital certificate, but how does the CA know what type of data to insert into the certificate?

The certificates are created and formatted based on the X.509 standard, which outlines the necessary fields of a certificate and the possible values that can be inserted into the fields. As of this writing, X.509 version 3 is the most current version of the standard. X.509 is a standard of the International Telecommunication Union (www.itu.int). The IETF’s Public-Key Infrastructure (X.509) working group, commonly referred to as PKIX, has adapted the X.509 standard to the more flexible organization of the Internet, as specified in RFC 3280.

The following fields are included within an X.509 digital certificate (a short sketch showing how these fields can be read programmatically follows the list):


 
  • Version number Identifies the version of the X.509 standard that was followed to create the certificate; indicates the format and fields that can be used.
  • Subject Specifies the owner of the certificate.
  • Public key Identifies the public key being bound to the certified subject; also identifies the algorithm used to create the private/public key pair.
  • Issuer Identifies the CA that generated and digitally signed the certificate.
  • Serial number Provides a unique number identifying this one specific certificate issued by a particular CA.
  • Validity Specifies the dates through which the certificate is valid for use.
  • Certificate usage Specifies the approved use of the certificate, which dictates the intended use of this public key.
  • Signature algorithm Specifies the hashing and digital signature algorithms used to digitally sign the certificate.
  • Extensions Allow additional data to be encoded into the certificate to expand the functionality of the certificate. Companies can customize the use of certificates within their environments by using these extensions. X.509 version 3 has extended the extension possibilities.
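
These fields can also be inspected programmatically. The following Python sketch assumes a recent version of the third-party cryptography package is installed and that a PEM-encoded certificate has been saved locally as cert.pem (an illustrative filename); it simply prints the fields listed above.

from cryptography import x509

with open("cert.pem", "rb") as f:                      # an illustrative local certificate file
    cert = x509.load_pem_x509_certificate(f.read())

print("Version:        ", cert.version)
print("Subject:        ", cert.subject.rfc4514_string())
print("Issuer:         ", cert.issuer.rfc4514_string())
print("Serial number:  ", cert.serial_number)
print("Valid from/to:  ", cert.not_valid_before, "/", cert.not_valid_after)
print("Signature hash: ", cert.signature_hash_algorithm.name)
print("Public key:     ", cert.public_key())
for ext in cert.extensions:                            # key usage and other extensions
    print("Extension:      ", ext.oid, "critical =", ext.critical)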

Figure 5-7 shows the actual values of these different certificate fields for a particular certificate in Internet Explorer. The version of this certificate is V3 (X.509 v3), and the serial number is also listed—this number is unique for each certificate that is created by a specific CA. The CA used the MD5 hashing algorithm to create the message digest value and then signed it with its private key using the RSA algorithm. The actual CA that issued the certificate is Root SGC Authority, and the valid dates indicate how long this certificate is valid. The subject is MS SGC Authority, which is the entity that registered this certificate and is the entity that is bound to the embedded public key. The actual public key is shown in the lower window and is represented in hexadecimal.

The subject of a certificate is commonly a person, but it does not have to be. The subject can be a network device (router, web server, firewall, and so on), an application, a department, a company, or a person. Each has its own identity that needs to be verified and proven to another entity before secure, trusted communication can be initiated. If a network device is using a certificate for authentication, the certificate may contain the network address of that device. This means that if the certificate has a network address of 10.0.0.1, the receiver will compare this to the address from which it received the certificate to make sure a man-in-the-middle attack is not being attempted.


Certificate Attributes


Four main types of certificates are used:


 
  • End-entity certificates
  • CA certificates
  • Cross-certification certificates
  • Policy certificates

Figure 5-7 Fields within a digital certificate



End-entity certificates are issued by a CA to a specific subject, such as Joyce, the Accounting department, or a firewall, as illustrated in Figure 5-8. An end-entity certificate is the identity document provided by PKI implementations.

A CA certificate can be self-signed, in the case of a standalone or root CA, or it can be issued by a superior CA within a hierarchical model. In the model in Figure 5-8, the superior CA gives the authority and allows the subordinate CA to accept certificate requests and generate the individual certificates itself. This may be necessary when a company needs to have multiple internal CAs, and different departments within an organization need to have their own CAs servicing their specific end-entities in their sections. In these situations, a representative from each department requiring a CA registers with the higher trusted CA and requests a Certificate Authority certificate. (Public and private CAs are discussed in the “Public Certificate Authorities” and “In-house Certificate Authorities” sections later in this chapter, as are the different trust models that are available for companies.)

Cross-certificates, or cross-certification certificates, are used when independent CAs establish peer-to-peer trust relationships. Simply put, they are a mechanism through which one CA can issue a certificate allowing its users to trust another CA.


Figure 5-8 End-entity and CA certificates


Within sophisticated CAs used for high-security applications, a mechanism is required to provide centrally controlled policy information to PKI clients. This is often done by placing the policy information in a policy certificate.


Certificate Extensions


Certificate extensions allow for further information to be inserted within the certificate, which can be used to provide more functionality in a PKI implementation. Certificate extensions can be standard or private. Standard certificate extensions are implemented for every PKI implementation. Private certificate extensions are defined for specific organizations (or domains within one organization), and they allow companies to further define different, specific uses for digital certificates to best fit their business needs.

Several different extensions can be implemented, one being key usage extensions, which dictate how the public key held within the certificate can be used. Remember that public keys can be used for different functions: encrypting symmetric keys, encrypting data, verifying digital signatures, and more. Following are some examples of key usage extensions:


 
  • DigitalSignature The key used to verify a digital signature
  • KeyEncipherment The key used to encrypt other keys used for secure key distribution
  • DataEncipherment The key used to encrypt data, which cannot be used to encrypt other keys
  • CRLSign The key used to verify a CA signature on a revocation list
  • KeyCertSign The key used to verify CA signatures on certificates
  • NonRepudiation The key used when a nonrepudiation service is being provided
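
To see how these key usage values appear in an actual certificate, the following sketch (Python, assuming the cryptography package and an already-loaded certificate object named cert, a hypothetical variable) reads the KeyUsage extension and reports which of the uses listed above are asserted. Note that the library exposes the NonRepudiation bit under the name content_commitment.

# Sketch: report which key usage bits are set in an already-loaded
# certificate object "cert" (hypothetical variable). NonRepudiation is
# exposed by the library as content_commitment.
from cryptography import x509

try:
    usage = cert.extensions.get_extension_for_class(x509.KeyUsage).value
except x509.ExtensionNotFound:
    usage = None

if usage is not None:
    print("DigitalSignature:", usage.digital_signature)
    print("KeyEncipherment: ", usage.key_encipherment)
    print("DataEncipherment:", usage.data_encipherment)
    print("KeyCertSign:     ", usage.key_cert_sign)
    print("CRLSign:         ", usage.crl_sign)
    print("NonRepudiation:  ", usage.content_commitment)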

A nonrepudiation service can be provided by a third-party notary. In this situation, the sender’s digital signature is verified and then signed by the notary so that the sender cannot later deny signing and sending the message. This is basically the same function performed by a traditional notary using paper—validate the sender’s identity and validate the time and date of an item being signed and sent. This is required when the receiver needs to be really sure of the sender’s identity and wants to be legally protected against possible fraud or forgery.

If a company needs to be sure that accountable nonrepudiation services will be provided, a trusted time source needs to be used, which can be a trusted third party called a time stamp authority. Using a trusted time source gives users a higher level of confidence as to when specific messages were digitally signed. For example, suppose Barry sends Ron a message and digitally signs it, and Ron later civilly sues Barry over a dispute. This digitally signed message may be submitted by Ron as evidence pertaining to an earlier agreement that Barry now is not fulfilling. If a trusted time source was not used in their PKI environment, Barry could claim that his private key had been compromised before that message was sent. If a trusted time source was implemented, then it could be shown that the message was signed before the date on which Barry claims his key was compromised. If a trusted time source is not used, no activity that was carried out within a PKI environment can be truly proven because it is so easy to change system and software time settings.


Critical and Noncritical Extensions


Certificate extensions are considered either critical or noncritical, which is indicated by a specific flag within the certificate itself. When this flag is set to critical, it means that the extension must be understood and processed by the receiver. If the receiver is not configured to understand a particular extension marked as critical, and thus cannot process it properly, the certificate cannot be used for its proposed purpose. If the flag does not indicate that the extension is critical, the certificate can be used for the intended purpose, even if the receiver does not process the appended extension.

So how does this work? When an extension is marked as critical, it means that the CA is certifying the key for only that specific purpose. If Joe receives a certificate with a DigitalSignature key usage extension and the critical flag is set, Joe can use the public key in that certificate only to validate digital signatures, and nothing more. If the extension is marked as noncritical, the key can be used for purposes outside of those listed in the extensions, so in this case it is up to Joe (and his applications) to decide how the key will be used.
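
As an illustration of this rule, the short sketch below (Python, assuming the cryptography package; the set of understood extensions is a hypothetical choice for this application) shows how a relying application might refuse to use a certificate that carries a critical extension it does not recognize, while ignoring unrecognized noncritical ones.

# Sketch: a certificate is unusable if it carries a critical extension the
# receiving application does not understand; noncritical ones may be ignored.
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

UNDERSTOOD = {ExtensionOID.KEY_USAGE,
              ExtensionOID.BASIC_CONSTRAINTS,
              ExtensionOID.EXTENDED_KEY_USAGE}   # hypothetical list for this app

def certificate_is_usable(cert: x509.Certificate) -> bool:
    for ext in cert.extensions:
        if ext.critical and ext.oid not in UNDERSTOOD:
            return False      # critical extension we cannot process
    return True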


Certificate Lifecycles


Keys and certificates should have lifetime settings that will force the user to register for a new certificate after a certain amount of time. Determining the proper length of these lifetimes is a trade-off: Shorter lifetimes limit the ability of attackers to crack them, but longer lifetimes lower system overhead. More sophisticated PKI implementations perform automated and often transparent key updates to avoid the time and expense of having users register for new certificates when old ones expire.

This means that the certificate and key pair has a lifecycle that must be managed. Certificate management involves administrating and managing each of these phases, including registration, certificate and key generation, renewal, and revocation.


Registration and Generation


A key pair (public and private keys) can be generated locally by an application and stored in a local key store on the user’s workstation. The key pair can also be created by a central key-generation server, which will require secure transmission of the keys to the user. The key pair that is created on the centralized server can be stored on the user’s workstation or on the user’s smart card, which will allow for more flexibility and mobility.

In most modern PKI implementations, users have two key pairs. One key pair is often generated by a central server and used for encryption and key transfers. This allows the corporate PKI to retain a copy of the encryption key pair for recovery, if necessary. The second key pair, a digital signature key pair, is usually generated by the user to make sure that she is the only one with a copy of the private key. Nonrepudiation can be challenged if there is any doubt about someone else obtaining a copy of an individual’s signature private key. If the key pair was created on a centralized server, that could weaken the case that the individual was the only one who had a copy of her private key. If a copy of a user’s signature private key is stored anywhere other than in her possession, or if there is a possibility of someone obtaining the user’s key, then true nonrepudiation cannot be provided.

The act of verifying that an individual indeed has the corresponding private key for a given public key is referred to as proof of possession. Not all public/private key pairs can be used for digital signatures, so asking the individual to sign a message and return it to prove that she has the necessary private key will not always work. If a key pair is used for encryption, the RA can send a challenge value to the individual, who, in turn, can use her private key to encrypt that value and return it to the RA. If the RA can successfully decrypt this value with the public key that was provided earlier, the RA can be confident that the individual has the necessary private key and can continue through the rest of the registration phase.
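
The challenge exchange described above can be modeled with a few lines of code. The following sketch (Python, assuming the cryptography package; all names are hypothetical) has the RA encrypt a random challenge to the applicant's claimed public key and the applicant return the decrypted value. This is the reverse direction of the private-key encryption described in the text, which modern libraries normally express as a digital signature, but it proves possession of the private key in the same way.

# Sketch of a proof-of-possession challenge using RSA-OAEP. The RA encrypts a
# random value to the applicant's claimed public key; only the holder of the
# matching private key can return the original value.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# Applicant side: key pair generated during registration.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()        # sent to the RA with the request

# RA side: encrypt a random challenge to the claimed public key.
oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
challenge = os.urandom(32)
ciphertext = public_key.encrypt(challenge, oaep)

# Applicant side: decrypt the challenge and return it to the RA.
response = private_key.decrypt(ciphertext, oaep)

# RA side: possession of the private key is proven if the values match.
assert response == challenge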

The PKI administrator usually configures the minimum required key size that users must use to have a key generated for the first time, and then for each renewal. In most applications, a drop-down list shows possible algorithms from which to choose, and possible key sizes. The key size should provide the necessary level of security for the current environment. The lifetime of the key should be long enough that continual renewal will not negatively affect productivity, but short enough to ensure that the key cannot be successfully compromised.


Renewal


The certificate itself has its own lifetime, which can be different from the key pair's lifetime. The certificate's lifetime is specified by the validity dates inserted into the digital certificate. These are beginning and ending dates indicating the time period during which the certificate is valid. The certificate cannot be used before the start date, and once the end date is reached, the certificate expires and a new certificate will need to be issued.

A renewal process is different from the registration phase in that the RA assumes that the individual has already successfully completed one registration round. If the certificate has not actually been revoked, the original keys and certificate can be used to provide the necessary authentication information and proof of identity for the renewal phase.



Approaches to Protection

Good key management and proper key replacement intervals protect keys from being compromised through human error. Choosing a large key size makes a brute-force attack more difficult.


The certificate may or may not need to change during the renewal process; this usually depends on why the renewal is taking place. If the certificate just expired and the keys will still be used for the same purpose, a new certificate can be generated with new validity dates. If, however, the key pair functionality needs to be expanded or restricted, new attributes and extensions may need to be integrated into the new certificate. These new functionalities may require more information to be gathered from the individual renewing the certificate, especially if the class changes or the new key uses allow for more powerful abilities.

This renewal process is required when the certificate has fulfilled its lifetime and its end validity date has been met. This situation differs from that of a certificate revocation.


Revocation


A certificate can be revoked when its validity needs to be ended before its actual expiration date is met, and this can occur for many reasons: for example, a user may have lost a laptop or a smart card that stored a private key, an improper software implementation may have been uncovered that directly affected the security of a private key, a user may have fallen victim to a social engineering attack and inadvertently given up a private key, data held within the certificate may no longer apply to the specified individual, or perhaps an employee left a company and should not be identified as a member of an in-house PKI any longer. In the last instance, the certificate, which was bound to the user’s key pair, identified the user as an employee of the company, and the administrator would want to ensure that the key pair could not be used in the future to validate this person’s affiliation with the company. Revoking the certificate does this.

If any of these things happens, a user's private key has been compromised or should no longer be mapped to the owner's identity. A different individual may have access to that user's private key and could use it to impersonate and authenticate as the original user. If the impersonator used the key to digitally sign a message, the receiver would verify its authenticity by checking the signature with the original user's public key, the verification would go through perfectly, and the receiver would believe the message came from the proper sender rather than the impersonator. If receivers could consult a list of revoked certificates before verifying a digital signature, however, they would know not to trust signatures made with the keys of certificates on that list. Because of the issues associated with a compromised private key, revocation is permanent and final—once revoked, a certificate cannot be reinstated. If reinstatement were allowed and a user revoked his certificate, the unauthorized holder of the private key could use it to restore the certificate's validity.

For example, if Joe stole Mike’s laptop, which held, among other things, Mike’s private key, Joe might be able to use it to impersonate Mike. Suppose Joe writes a message, digitally signs it with Mike’s private key, and sends it to Stacy. Stacy communicates with Mike periodically and has his public key, so she uses it to verify the digital signature. It computes properly, so Stacy is assured that this message came from Mike, but in truth it did not. If, before validating any certificate or digital signature, Stacy could check a list of revoked certificates, she might not fall victim to Joe’s false message.

The CA provides this type of protection by maintaining a certificate revocation list (CRL), a list of serial numbers of certificates that have been revoked. The CRL also contains a statement indicating why the individual certificates were revoked and a date when the revocation took place. The list usually contains all certificates that have been revoked within the lifetime of the CA. Certificates that have expired are not the same as those that have been revoked. If a certificate has expired, it means that its end validity date was reached.

The CA is the entity that is responsible for the status of the certificates it generates; it needs to be told of a revocation, and it must provide this information to others. The CA is responsible for maintaining the CRL and posting it in a publicly available directory.



EXAM TIP The Certificate Revocation List is an essential item to ensure a certificate is still valid. CAs post CRLs in publicly available directories to permit automated checking of certificates against the list before certificate use by a client. A user should never trust a certificate that has not been checked against the appropriate CRL.

What if Stacy wants to get back at Joe for trying to trick her earlier, and she attempts to revoke Joe’s certificate herself? If she is successful, Joe’s participation in the PKI can be negatively affected because others will not trust his public key. Although we might think Joe may deserve this, we need to have some system in place to make sure people cannot arbitrarily have others’ certificates revoked, whether for revenge or for malicious purposes.

When a revocation request is submitted, the individual submitting the request must be authenticated. Otherwise, this could permit a type of denial-of-service attack, in which someone has another person’s certificate revoked. The authentication can involve an agreed-upon password that was created during the registration process, but authentication should not be based on the individual proving that he has the corresponding private key, because it may have been stolen, and the CA would be authenticating an imposter.

The CRL’s integrity needs to be protected to ensure that attackers cannot remove a revoked certificate’s entry from the list. If this were allowed to take place, anyone who stole a private key could simply delete the corresponding serial number from the CRL and continue to use the private key fraudulently. The integrity of the list also needs to be protected to ensure that bogus data is not added to it. Otherwise, anyone could add another person’s certificate to the list and effectively revoke that person’s certificate. The only entity that should be able to modify any information on the CRL is the CA.

The mechanism used to protect the integrity of a CRL is a digital signature. The CA’s revocation service creates a digital signature for the CRL, as shown in Figure 5-9. To validate a certificate, the user accesses the directory where the CRL is posted, downloads the list, and verifies the CA’s digital signature to ensure that the proper authority signed the list and that it was not modified in an unauthorized manner. The user then looks through the list to determine whether the serial number of the certificate that he is trying to validate is listed. If the serial number is on the list, the

Figure 5-9 The CA digitally signs the CRL to protect its integrity.



private key should no longer be trusted, and the public key should no longer be used. This can be a cumbersome process, so it has been automated in several ways that are described in the next section.

One concern is how up-to-date the CRL is—how often is it updated and does it actually reflect all the certificates currently revoked? The actual frequency with which the list is updated depends upon the CA and its certification practices statement (CPS). It is important that the list is updated in a timely manner so that anyone using the list has the most current information.
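
The basic client-side check described above can be sketched as follows (Python, assuming the cryptography package; the file names cert.pem, ca.pem, and ca.crl are hypothetical). A real client would also confirm that the list is current by examining its next-update field.

# Sketch: verify the CA's signature on a downloaded CRL, then check whether a
# certificate's serial number appears on the list. File names are hypothetical.
from cryptography import x509

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())
with open("ca.pem", "rb") as f:
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("ca.crl", "rb") as f:
    crl = x509.load_der_x509_crl(f.read())

# Verify the CA's digital signature over the list (see Figure 5-9).
assert crl.is_signature_valid(ca_cert.public_key())

entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
if entry is not None:
    print("Certificate was revoked on", entry.revocation_date)
else:
    print("Serial number is not on this CRL")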


CRL Distribution


CRL files can be requested by individuals who need to verify and validate a newly received certificate, or the files can be periodically pushed down (sent) to all users participating within a specific PKI. This means the CRL can be pulled (downloaded) by individual users when needed or pushed down to all users within the PKI on a timed interval.

The actual CRL file can grow substantially, and transmitting this file and requiring PKI client software on each workstation to save and maintain it can use a lot of resources, so the smaller the CRL is, the better. It is also possible to first push down the full CRL, and after that initial load, the following CRLs pushed down to the users are delta CRLs, meaning that they contain only the changes to the original or base CRL. This can greatly reduce the amount of bandwidth consumed when updating CRLs.

In implementations where the CRLs are not pushed down to individual systems, the users’ PKI software needs to know where to look for the posted CRL that relates to the certificate it is trying to validate. The certificate might have an extension that points the validating user to the necessary CRL distribution point. The network administrator sets up the distribution points, and one or more points can exist for a particular PKI. The distribution point holds one or more lists containing the serial numbers of revoked certificates, and the user’s PKI software scans the list(s) for the serial number of the certificate the user is attempting to validate. If the serial number is not present, the user is assured that it has not been revoked. This approach helps point users to the right resource and also reduces the amount of information that needs to be scanned when checking that a certificate has not been revoked.
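
As a brief illustration, the sketch below (Python, assuming the cryptography package and an already-loaded certificate object named cert, a hypothetical variable) reads the CRL distribution point extension to discover where the relevant CRL is posted.

# Sketch: read the CRL Distribution Points extension, if present, to find
# where the CRL covering this certificate is published.
from cryptography import x509

try:
    points = cert.extensions.get_extension_for_class(
        x509.CRLDistributionPoints).value
except x509.ExtensionNotFound:
    points = []

for point in points:
    for name in (point.full_name or []):
        if isinstance(name, x509.UniformResourceIdentifier):
            print("CRL available at:", name.value)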

One last option for checking distributed CRLs is an online service. When a client user needs to validate a certificate and ensure that it has not been revoked, he can communicate with an online service that will query the necessary CRLs available within the environment. This service can query the lists for the client instead of pushing down the full CRL to each and every system. So if Joe receives a certificate from Stacy, he can contact an online service and send it the serial number listed in the certificate Stacy sent. The online service would query the necessary revocation lists and respond to Joe indicating whether that serial number was listed as being revoked or not.

One of the protocols used for online revocation services is the Online Certificate Status Protocol (OCSP), a request and response protocol that obtains the serial number of the certificate being validated and reviews revocation lists for the client. The protocol has a responder service that reports the status of the certificate back to the client, indicating whether it has been revoked, is valid, or has an unknown status. This protocol and service save the client from having to find, download, and process the right lists.
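
A hedged sketch of an OCSP exchange follows (Python, assuming the cryptography package for building and parsing the messages and the requests package for HTTP transport; cert and issuer are already-loaded certificate objects for the certificate being checked and its issuing CA, and the responder URL is hypothetical).

# Sketch: query an OCSP responder for the status of "cert", issued by "issuer".
# The responder URL would normally be read from the certificate's Authority
# Information Access extension; it is hard-coded here as a hypothetical value.
import requests
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.x509 import ocsp

request = (ocsp.OCSPRequestBuilder()
           .add_certificate(cert, issuer, hashes.SHA1())
           .build())

raw = requests.post(
    "http://ocsp.example.com",                       # hypothetical responder
    data=request.public_bytes(serialization.Encoding.DER),
    headers={"Content-Type": "application/ocsp-request"},
).content

response = ocsp.load_der_ocsp_response(raw)
if response.response_status == ocsp.OCSPResponseStatus.SUCCESSFUL:
    # certificate_status is GOOD, REVOKED, or UNKNOWN
    print("OCSP says:", response.certificate_status)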


Suspension


Instead of being revoked, a certificate can be suspended, meaning it is temporarily put on hold. If, for example, Bob is taking an extended vacation and wants to ensure that his certificate will not be used during that time, he can make a suspension request to the CA. The CRL would list this certificate and its serial number, and in the field that describes why the certificate is revoked, it would instead indicate a hold state. Once Bob returns to work, he can make a request to the CA to remove his certificate from the list.

Another reason to suspend a certificate is if an administrator is suspicious that a private key might have been compromised. While the issue is under investigation, the certificate can be suspended to ensure that it cannot be used.
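
Continuing the CRL sketch shown earlier (the crl and cert objects are the hypothetical variables used there), a client can distinguish a suspended certificate from a permanently revoked one by examining the reason code attached to the CRL entry:

# Sketch: a suspended certificate appears on the CRL with a reason code of
# certificate_hold rather than a permanent revocation reason.
from cryptography import x509

entry = crl.get_revoked_certificate_by_serial_number(cert.serial_number)
if entry is not None:
    try:
        reason = entry.extensions.get_extension_for_class(
            x509.CRLReason).value.reason
    except x509.ExtensionNotFound:
        reason = None
    if reason == x509.ReasonFlags.certificate_hold:
        print("Certificate is suspended (on hold)")
    else:
        print("Certificate is revoked; reason:", reason)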


Key Destruction


Key pairs and certificates have set lifetimes, meaning that they will expire at some specified time. It is important that the certificates and keys are properly destroyed when that time comes, wherever the keys are stored (on users’ workstations, centralized key servers, USB token devices, smart cards, and so on).



Authority Revocation Lists

In some PKI implementations, a separate revocation list is maintained for CA keys that have been compromised or should no longer be trusted. This list is known as an authority revocation list (ARL). In the event that a CA’s private key is compromised or a cross certification is cancelled, the relevant certificate’s serial number is included in the ARL. A client can review an ARL to make sure the CA’s public key can still be trusted.


The goal is to make sure that no one can gain access to a key after its lifetime has ended and use this key for malicious purposes. An attacker might use the key to digitally sign or encrypt a message with the hopes of tricking someone else about his identity (this would be an example of a man-in-the-middle attack). Also, if the attacker is performing some type of brute-force attack on your cryptosystem, trying to figure out specific keys that were used for encryption processes, obtaining an old key could give him more insight into how your cryptosystem generates keys. The less information you supply to potential hackers, the better.

Note that in modern PKIs, encryption key pairs usually must be retained long after they expire so that users can decrypt information that was encrypted with the old keys. For example, if Bob encrypts a document using his current key and the keys are updated three months later, Bob’s software must maintain a copy of the old key so he can still decrypt the document. In the PKI world, this issue is referred to as key history maintenance.


Centralized or Decentralized Infrastructures


Keys used for authentication and encryption within a PKI environment can be generated in a centralized or decentralized manner. In a decentralized approach, software on individual computers generates and stores cryptographic keys local to the systems themselves. In a centralized infrastructure, the keys are generated and stored on a central server, and the keys are transmitted to the individual systems as needed. You might choose one type over the other for several reasons.

If a company uses an asymmetric algorithm that is resource-intensive to generate the public/private key pair, and if large (and resource-intensive) key sizes are needed, then the individual computers may not have the necessary processing power to produce the keys in an acceptable fashion. In this situation, the company can choose a centralized approach in which a very high-end server with powerful processing abilities is used, probably along with a hardware-based random number generator.

Central key generation and storage offers other benefits as well. For example, it is much easier to back up the keys and implement key recovery procedures with central storage than with a decentralized approach. Implementing a key recovery procedure on each and every computer holding one or more key pairs is difficult, and many applications that generate their own key pairs do not usually interface well with a centralized archive system. This means that if a company chooses to allow its individual users to create and maintain their own key pairs on their separate workstations, no real key recovery procedure can be put in place. This puts the company at risk. If an employee leaves the organization or is unavailable for one reason or another, the company may not be able to access its own business information that was encrypted by that employee.

So a centralized approach seems like the best approach, right? Well, the centralized method has some drawbacks to consider, too. If the keys will be generated on a server, they need to be securely transmitted to the individual clients that require them. This can be more difficult than it sounds. A technology needs to be employed that will send the keys in an encrypted manner, ensure the keys’ integrity, and make sure that only the intended user is receiving the key.

Also, the server that centrally stores the keys needs to be highly available and represents a single point of failure, so some type of fault tolerance or redundancy mechanism may need to be put into place. If that one server goes down, users cannot access their keys, which might prevent them from properly authenticating to the network, resources, and applications. And since all the keys are in one place, the server is a prime target for an attacker—if the central key server is compromised, the whole environment is compromised.

One other issue pertains to how the keys will actually be used. If a public/private key pair is being generated for digital signatures, and if the company wants to ensure that it can be used to provide true authenticity and nonrepudiation, the keys should not be generated at a centralized server. This would introduce doubt that only the one person had access to a specific private key.

If a company uses smart cards to hold users’ private keys, each private key often has to be generated on the card itself and cannot be copied for archiving purposes. This is a disadvantage of the centralized approach. In addition, some types of applications have been developed to create their own public/private key pairs and do not allow other keys to be imported and used. This means the keys would have to be created locally by these applications, and keys from a central server could not be used. These are just some of the considerations that need to be evaluated before any decision is made and implementation begins.


Hardware Storage Devices


PKIs can be constructed in software without special cryptographic hardware, and this is perfectly suitable for many environments. But software can be vulnerable to viruses, hackers, and other attacks. If a company requires a higher level of protection than a purely software-based solution can provide, several hardware-based solutions are available.

In most situations, hardware key-storage solutions are used only for the most critical and sensitive keys, which are the root and possibly the intermediate CA private keys. If those keys are compromised, the whole security of the PKI is gravely threatened. If a person obtained a root CA private key, she could digitally sign any certificate, and that certificate would be quickly accepted by all entities within the environment. Such an attacker might be able to create a certificate that has extremely high privileges, perhaps allowing her to modify bank account information in a financial institution, and no alerts or warnings would be initiated because the ultimate CA, the root CA, signed it.



Random Number Generators

In most cases, software- and hardware-based generators are actually considered pseudo-random number generators because they have a finite number of values to work from. They usually extract these values from their surroundings, which are predictable in nature—the values can come from the system’s time or from CPU cycles. If the starting values are predictable, the numbers they generate cannot be truly random. An example of a true random number generator would be a system that collects radiation from a radioactive item. The elements that escape from the radioactive item do so in an unpredictable manner, and the results are used as seed values for key generation.



Private Key Protection


Although a PKI implementation can be complex, with many different components and options, a critical concept common to all PKIs must be understood and enforced: the private key needs to stay private. A digital signature is created solely for the purpose of proving who sent a particular message by using a private key. This rests on the assumption that only one person has access to this private key. If an imposter obtains a user’s private key, authenticity and nonrepudiation can no longer be claimed or proven.

When a private key is generated for the first time, it must be stored somewhere for future use. This storage area is referred to as a key store, and it is usually created by the application registering for a certificate, such as a web browser, smart card software, or other application. In most implementations, the application will prompt the user for a password, which will be used to create an encryption key that protects the key store. So, for example, if Cheryl used her web browser to register for a certificate, her private key would be generated and stored in the key store. Cheryl would then be prompted for a password, which the software would use to create a key that will encrypt the key store. When Cheryl needs to access this private key later that day, she will be prompted for the same password, which will decrypt the key store and allow her access to her private key.

Unfortunately, many applications do not require that a strong password be created to protect the key store, and in some implementations the user can choose not to provide a password at all. The user still has a private key available, and it is bound to the user’s identity, so why is a password even necessary? If, for example, Cheryl decided not to use a password, and another person sat down at her computer, he could use her web browser and her private key and digitally sign a message that contained a nasty virus. If Cheryl’s coworker Cliff received this message, he would think it came from Cheryl, open the message, and download the virus. The moral to this story is that users should be required to provide some type of authentication information (password, smart card, PIN, or the like) before being able to use private keys. Otherwise, the keys could be used by other individuals or imposters, and authentication and nonrepudiation would be of no use.
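
The password-protected key store Cheryl uses can be approximated in a few lines. The sketch below (Python, assuming the cryptography package; the file name and password are hypothetical) encrypts a newly generated private key under a password-derived key and requires the same password to unlock it later.

# Sketch: protect a private key at rest with a password-derived key, and
# require the password again before the key can be used.
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

pem = private_key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"Str0ngPassphrase!"),
)
with open("keystore.pem", "wb") as f:
    f.write(pem)

# Later: the key store cannot be opened without the same password.
with open("keystore.pem", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=b"Str0ngPassphrase!")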

Because a private key is a crucial component of any PKI implementation, the key itself should contain the necessary characteristics and be protected at each stage of its life. The following list sums up the characteristics and requirements of proper private key use:


 
  • The key size should provide the necessary level of protection for the environment.
  • The lifetime of the key should correspond with how often it is used and the sensitivity of the data it is protecting.
  • The key should be changed and not used past its allowed lifetime.
  • Where appropriate, the key should be properly destroyed at the end of its lifetime.
  • The key should never be exposed in clear text.
  • No copies of the private key should be made if it is being used for digital signatures.
  • The key should not be shared.
  • The key should be stored securely.
  • Authentication should be required before the key can be used.
  • The key should be transported securely.
  • Software implementations that store and use the key should be evaluated to ensure they provide the necessary level of protection.

If digital signatures will be used for legal purposes, these points and others may need to be audited to ensure that true authenticity and nonrepudiation are provided.


Key Recovery


One individual could have one, two, or many key pairs that are tied to his or her identity. That is because users can have different needs and requirements for public/private key pairs. As mentioned earlier, certificates can have specific attributes and usage requirements dictating how their corresponding keys can and cannot be used. For example, David can have one key pair he uses to encrypt and transmit symmetric keys. He can also have one key pair that allows him to encrypt data and another key pair to perform digital signatures. David can also have a digital signature key pair for his work-related activities and another pair for personal activities, such as e-mailing his friends. These key pairs need to be used only for their intended purposes, and this is enforced through certificate attributes and usage values.

If a company is going to perform and maintain a key recovery system, it will generally back up only the key pair used to encrypt data, not the key pairs that are used to generate digital signatures. The reason that a company archives keys is to ensure that if a person leaves the company, falls off a cliff, or for some reason is unavailable to decrypt important company information, the company can still get to its company-owned data. This is just a matter of the organization protecting itself. A company would not need to be able to recover a key pair that is used for digital signatures, since those keys are to be used only to prove the authenticity of the individual who sent a message. A company would not benefit from having access to those keys and really should not have access to them, since they are tied to one individual for a specific purpose.



CA Private Key

The most sensitive and critical public/private key pairs are those used by CAs to digitally sign certificates. These need to be highly protected, because if they were compromised, the trust relationship between the CA and all of the end-entities would be threatened. In high-security environments, these keys are often kept in a tamper-proof hardware encryption store, accessible only to individuals with a legitimate need for access.


Two systems are important for backing up and restoring cryptographic keys: key archiving and key recovery. The key archiving system is a way of backing up keys and securely storing them in a repository; key recovery is the process of restoring lost keys to the users or the company.

If keys are backed up and stored in a centralized computer, this system must be tightly controlled, because if it were compromised, an attacker would have access to all keys for the entire infrastructure. Also, it is usually unwise to authorize a single person to be able to recover all the keys within the environment, because that person could use this power for evil purposes instead of just recovering keys when they are needed for legitimate purposes. In security systems, it is best not to fully trust anyone.



EXAM TIP Key Archiving is the process of storing a set of keys to be used as a backup should something happen to the original set. Key recovery is the process of using the backup keys.

Dual control can be used as part of a system to back up and archive data encryption keys. PKI systems can be configured to allow multiple individuals to be involved in any key recovery process. When a key recovery is required, at least two people can be required to authenticate by the key recovery software before the recovery procedure is performed. This enforces separation of duties, which means that one person cannot complete a critical task by himself. Requiring two individuals to recover a lost key together is called dual control, which simply means that two people have to be present to carry out a specific task.

This approach to key recovery is referred to as m of n authentication, where n people are authorized to participate in the key recovery process, but at least m of them (where m is smaller than n) must be involved before the task can be completed. The goal is to minimize fraudulent or improper use of access and permissions. A company would not require all authorized individuals to be involved in every recovery, because getting everyone together at the same time could be impossible considering meetings, vacations, sick time, and travel; instead, only a subset m of the n authorized people must be available to participate. This form of secret splitting can increase security by requiring multiple people to perform a specific function, but the numbers must be balanced: requiring too many people creates availability problems, while requiring too few increases the risk that a small group could compromise the secret. A minimal sketch of such an m of n split appears after the exam tip that follows.



EXAM TIP Secret splitting using m of n authentication schemes can improve security by requiring that multiple people perform critical functions, preventing a single party from compromising a secret.
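
The m of n idea can be illustrated with Shamir's threshold secret-sharing scheme, sketched below in Python. This is an illustration of the concept only, not the mechanism used by any particular key recovery product; it splits an integer secret (for example, a key-encrypting key) into n shares such that any m of them reconstruct it.

# Minimal m of n secret splitting (Shamir's scheme over a prime field).
# Any m of the n shares recover the secret; fewer than m reveal nothing.
import secrets

PRIME = 2**127 - 1    # a prime larger than the secret being protected

def split(secret, m, n):
    # Random polynomial of degree m-1 whose constant term is the secret.
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(m - 1)]
    return [(x, sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(secret=123456789, m=3, n=5)   # five recovery officers, any three suffice
assert recover(shares[:3]) == 123456789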

All key recovery procedures should be highly audited. The audit logs should capture at least what keys were recovered, who was involved in the process, and the time and date. Keys are an integral piece of any encryption cryptosystem and are critical to a PKI environment, so you need to track who does what with them.


Key Escrow


Key recovery and key escrow are terms that are often used interchangeably, but they actually describe two different things. You should not use them interchangeably after you have read this section.

Key recovery is a process that allows for lost keys to be recovered. Key escrow is a process of giving keys to a third party so that they can decrypt and read sensitive information when this need arises. Key escrow almost always pertains to handing over encryption keys to the government, or to another higher authority, so that the keys can be used to collect evidence during investigations. A key pair used in a person’s place of work may be required to be escrowed by the employer for obvious reasons. First, the keys are property of the enterprise, issued to the worker for use. Second, the firm may have need for them after an employee leaves the firm.

Several movements, supported by parts of the U.S. government, would require all or many people residing in the United States to hand over copies of the keys they use to encrypt communication channels. The movement in the 1990s behind the Clipper Chip is the most well-known effort to implement this requirement and procedure. It was suggested that all American-made communication devices should have a hardware encryption chip within them. The chip could be used to encrypt data going back and forth between two individuals, but if a government agency decided that it should be able to eavesdrop on this dialog, it would just need to obtain a court order. If the court order was approved, the law enforcement agent would take the order to two escrow agencies, each of which would hold a piece of the key necessary to decrypt this communication. The agent would obtain both pieces of the key and combine them, which would allow the agent to listen in on the encrypted communication outlined in the court order.



EXAM TIP Key escrow, allowing another trusted party to hold a copy of a key, has long been a controversial topic. This essential business process provides continuity should the authorized key holding party leave an organization without disclosing keys. The security of the escrowed key is a concern, and it needs to be managed at the same security level as for the original key.

This was a standard that never saw the light of day because it seemed too “Big Brother” to many American citizens. But the idea was that the encryption keys would be escrowed to two agencies, meaning that each agency would hold one piece of the key. One agency could not hold the whole key, because it could then use this key to wiretap people’s conversations illegally. Splitting up the key is an example of separation of duties, put into place to try and prevent fraudulent activities. The current issue of governments demanding access to keys to decrypt information is covered in Chapter 3.


Public Certificate Authorities


An individual or company may decide to rely on a CA that is already established and being used by many other individuals and companies—this would be a public CA. A company, on the other hand, may decide that it needs its own CA for internal use, which gives the company more control over the certificate registration and generation process and allows it to configure items specifically for its own needs. This second type of CA is referred to as a private CA (or in-house CA).

A public CA specializes in verifying individual identities and creating and maintaining their certificates. These companies issue certificates that are not bound to specific companies or intercompany departments. Instead, their services are to be used by a larger and more diversified group of people and organizations. If a company uses a public CA, the company will pay the CA organization for individual certificates and for the service of maintaining these certificates. Some examples of public CAs are VeriSign (including GeoTrust and thawte), Entrust, and Go Daddy.

One advantage of using a public CA is that it is usually well known and easily accessible to many people. Most web browsers have a list of public CAs installed and configured by default, along with their corresponding root certificates. This means that if you install a web browser on your computer, it is already configured to trust certain CAs, even though you might have never heard of them before. So, if you receive a certificate from Bob, and his certificate was digitally signed by a CA listed in your browser, you can automatically trust the CA and can easily walk through the process of verifying Bob’s certificate. This has raised some eyebrows among security professionals, however, since trust is installed by default, but the industry has deemed this is a necessary approach that provides users with transparency and increased functionality. Users can remove these CAs from their browser list if they want to have more control over who their system trusts and who it doesn’t.

Earlier in the chapter, the different certificate classes and their uses were explained. No global standard defines these classes, the exact requirements for obtaining these different certificates, or their uses. Standards are in place, usually for a particular country or industry, but this means that public CAs can define their own certificate classifications. This is not necessarily a good thing for companies that depend on public CAs, because it does not provide enough control to the company over how it should interpret certificate classifications and how they should be used.

This means another component needs to be carefully developed for companies that use and depend on public CAs, and this component is referred to as the certificate policy (CP). This policy allows the company to decide what certification classes are acceptable and how they will be used within the organization. This is different from the CPS, which explains how the CA verifies entities, generates certificates, and maintains these certificates. The CP is generated and owned by an individual company that uses an external CA, and it allows the company to enforce its security decisions and control how certificates are used with its applications.


In-house Certificate Authorities


An in-house CA is implemented, maintained, and controlled by the company that implemented it. This type of CA can be used to create certificates for internal employees, devices, applications, partners, and customers. This approach gives the company complete control over how individuals are identified, what certification classifications are created, who can and cannot have access to the CA, and how the certifications can be used.

In-house CAs also provide more flexibility for companies, which often integrate them into current infrastructures and into applications for authentication, encryption, and nonrepudiation purposes. If the CA is going to be used over an extended period of time, this can be a cheaper method of generating and using certificates than having to purchase them through a public CA.

When the decision between an in-house and public CA is made, various factors need to be identified and accounted for. Many companies have embarked upon implementing an in-house PKI environment, which they estimated would be implemented within x number of months and would cost approximately y amount in dollars. Without doing the proper homework, companies might not understand the current environment, might not completely hammer out the intended purpose of the PKI, and might not have enough skilled staff supporting the project; time estimates can double or triple and the required funds and resources can become unacceptable. Several companies have started on a PKI implementation, only to quit halfway through, resulting in wasted time and money, with nothing to show for it except heaps of frustration and many ulcers.

In some situations, it is better for a company to use a public CA, since public CAs already have the necessary equipment, skills, and technologies. In other situations, companies may decide it is a better business decision to take on these efforts themselves. This is not always a strictly monetary decision—a specific level of security might be required. Some companies do not believe that they can trust an outside authority to generate and maintain their users’ and company’s certificates. In this situation, the scale may tip toward an in-house CA.

Each company is unique, with various goals, security requirements, functionality needs, budgetary restraints, and ideologies. The decision to use a private or in-house CA depends on the expansiveness of the PKI within the organization, how integrated it will be with different business needs and goals, its interoperability with a company’s current technologies, the number of individuals who will be participating, and how it will work with outside entities. This could be quite a large undertaking that ties up staff, resources, and funds, so a lot of strategic planning is required, and what will and won’t be gained from a PKI should be fully understood before the first dollar is spent on the implementation.


Outsourced Certificate Authorities


The last available option for using PKI components within a company is to outsource different parts of it to a specific service provider. Usually, the more complex parts are outsourced, such as the CA, RA, CRL, and key recovery mechanisms. This occurs if a company does not have the necessary skills to implement and carry out a full PKI environment.

An outsourced CA is different from a public CA in that it provides dedicated services, and possibly equipment, to an individual company. A public CA, in contrast, can be used by hundreds or thousands of companies—the CA doesn’t maintain specific servers and infrastructures for individual companies.

Although outsourced services might be easier for your company to implement, you need to review several factors before making this type of commitment. You need to determine what level of trust the company is willing to give to the service provider and what level of risk it is willing to accept. Often a PKI and its components serve as large security components within a company’s enterprise, and allowing a third party to maintain the PKI can introduce risks and liabilities that your company is not willing to undertake. The liabilities the service provider is willing to accept, the security precautions and procedures the outsourced CAs provide, and the surrounding legal issues need to be examined before this type of agreement is made.

Some large vertical markets have their own outsourced PKI environments set up because they share similar needs and usually have the same requirements for certification types and uses. This allows several companies within the same market to split the costs of the necessary equipment, and it allows for industry-specific standards to be drawn up and followed. For example, although many medical facilities work differently and have different environments, they have a lot of the same functionality and security needs. If several of them came together, purchased the necessary equipment to provide CA, RA, and CRL functionality, employed one person to maintain it, and then each connected its different sites to the centralized components, the participating organizations could save a lot of money and resources. In this case, not every facility would need to strategically plan its own full PKI, and each would not need to purchase redundant equipment or employ redundant staff members. Figure 5-10 illustrates how one outsourced service provider can offer different PKI components and services to different companies, and how companies within one vertical market can share the same resources.

A set of standards can be drawn up about how each different facility should integrate its own infrastructure and how they should integrate with the centralized PKI components. This also allows for less complicated intercommunication to take place between the different medical facilities, which will ease information-sharing attempts.


Figure 5-10 A PKI service provider (represented by the four boxes) can offer different PKI components to companies.



Tying Different PKIs Together


In some cases, more than one CA can be needed for a specific PKI to work properly, and several requirements must be met for different PKIs to intercommunicate. Here are some examples:


 
  • A company wants to be able to communicate seamlessly with its suppliers, customers, or business partners via PKI.
  • One department within a company has higher security requirements than all other departments and thus needs to configure and control its own CA.
  • One department needs to have specially constructed certificates with unique fields and usages.
  • Different parts of an organization want to control their own pieces of the network and the CA that is encompassed within it.
  • The number of certificates that need to be generated and maintained would overwhelm one CA, so multiple CAs must be deployed.
  • The political culture of a company inhibits one department from being able to control elements of another department.
  • Enterprises are partitioned geographically, and different sites need their own local CA.

These situations can add much more complexity to the overall infrastructure, intercommunication capabilities, and procedures for certificate generation and validation. To control this complexity properly from the beginning, these requirements need to be understood, addressed, and planned for. Then the necessary trust model needs to be chosen and molded for the company to build upon. Selecting the right trust model will give the company a solid foundation from the beginning, instead of trying to add structure to an inaccurate and inadequate plan later on.


Trust Models


There is more involved in potential scenarios than just having more than one CA—each of the companies or each department of an enterprise can actually represent a trust domain itself. A trust domain is a construct of systems, personnel, applications, protocols, technologies, and policies that work together to provide a certain level of protection. All of these components can work together seamlessly within the same trust domain because they are known to the other components within the domain and are trusted to some degree. Different trust domains are usually managed by different groups of administrators, have different security policies, and restrict outsiders from privileged access.

Most trust domains (whether individual companies or departments) are not usually islands cut off from the world—they need to communicate with other less-trusted domains. The trick is to figure out how much two different domains should trust each other, and how to implement and configure an infrastructure that would allow these two domains to communicate in a way that will not allow security compromises or breaches. This can be more difficult than it sounds.

In the nondigital world, it is difficult to figure out who to trust, how to carry out legitimate business functions, and how to ensure that one is not being taken advantage of or lied to. Jump into the digital world and add protocols, services, encryption, CAs, RAs, CRLs, and differing technologies and applications, and the business risks can become overwhelming and confusing. So start with a basic question: What criteria will we use to determine who we trust and to what degree?

One example of trust considered earlier in the chapter is the driver’s license issued by the DMV. Suppose, for example, that Bob is buying a lamp from Carol and he wants to pay by check. Since Carol does not know Bob, she does not know if she can trust him or have much faith in his check. But if Bob shows Carol his driver’s license, she can compare the name to what appears on the check, and she can choose to accept it. The trust anchor (the agreed-upon trusted third party) in this scenario is the DMV, since both Carol and Bob trust it more than they trust each other. Since Bob had to provide documentation to prove his identity to the DMV, that organization trusted him enough to generate a license, and Carol trusts the DMV, so she decides to trust Bob’s check.

Consider another example of a trust anchor. If Joe and Stacy need to communicate through e-mail and would like to use encryption and digital signatures, they will not trust each other’s certificate alone. But when each receives the other’s certificate and sees that they both have been digitally signed by an entity they both do trust—the CA—then they have a deeper level of trust in each other. The trust anchor here is the CA. This is easy enough, but when we need to establish trust anchors between different CAs and PKI environments, it gets a little more complicated.

When two companies need to communicate using their individual PKIs, or if two departments within the same company use different CAs, two separate trust domains are involved. The users and devices from these different trust domains will need to communicate with each other, and they will need to exchange certificates and public keys. This means that trust anchors need to be identified, and a communication channel must be constructed and maintained.

A trust relationship must be established between two issuing authorities (CAs). This happens when one or both of the CAs issue a certificate for the other CA’s public key, as shown in Figure 5-11. This means that each CA registers for a certificate and public key from the other CA. Each CA validates the other CA’s identification information and generates a certificate containing a public key for that CA to use. This establishes a trust path between the two entities that can then be used when users need to verify other users’ certificates that fall within the different trust domains. The trust path can be unidirectional or bidirectional, so either the two CAs trust each other (bidirectional) or only one trusts the other (unidirectional).


Figure 5-11 A trust relationship can be built between two trust domains to set up a communication channel.


As illustrated in Figure 5-11, all the users and devices in trust domain 1 trust their own CA 1, which is their trust anchor. All users and devices in trust domain 2 have their own trust anchor, CA 2. The two CAs have exchanged certificates and trust each other, but they do not have a common trust anchor between them.

The trust models describe and outline the trust relationships between the different CAs and different environments, which will indicate where the trust paths reside. The trust models and paths need to be thought out before implementation to restrict and control access properly and to ensure that as few trust paths as possible are used. Several different trust models can be used: the hierarchical, peer-to-peer, and hybrid models are discussed in the following sections.


Hierarchical Trust Model


The first type of trust model we’ll examine is a basic hierarchical structure that contains a root CA, intermediate CAs, leaf CAs, and end-entities. The configuration is that of an inverted tree, as shown in Figure 5-12. The root CA is the ultimate trust anchor for all other entities in this infrastructure, and it generates certificates for the intermediate CAs, which in turn generate certificates for the leaf CAs, and the leaf CAs generate certificates for the end-entities (users, network devices, and applications).

Intermediate CAs function to transfer trust between different CAs. These CAs are referred to as subordinate CAs because they are subordinate to the CA that they reference. The path of trust is walked up from the subordinate CA to the higher-level CA; in essence, the subordinate CA is using the higher CA as a reference.

As shown in Figure 5-12, no bidirectional trusts exist—they are all unidirectional trusts as indicated by the one-way arrows. Since no other entity can certify and generate certificates for the root CA, it creates a self-signed certificate. This means that the certificate’s issuer and subject fields hold the same information, both representing the root CA, and the root CA’s public key will be used to verify this certificate when that time comes. This root CA certificate and public key are distributed to all entities within this trust model.
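The issuer/subject relationship can be checked directly. The following is a minimal sketch, assuming Python’s third-party cryptography library and an RSA-signed root certificate; the file name root_ca.pem is only an example. It confirms that the certificate’s issuer and subject fields hold the same name and that the certificate’s own public key verifies its signature, which is what "self-signed" means in practice.

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

# Load a PEM-encoded certificate from disk (the file name is a placeholder).
with open("root_ca.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# In a self-signed certificate the issuer and subject fields hold the same name.
print("Self-signed:", cert.issuer == cert.subject)

# The root CA's own public key verifies the signature on its certificate.
# verify() raises InvalidSignature if the check fails (RSA PKCS #1 v1.5 case).
cert.public_key().verify(
    cert.signature,
    cert.tbs_certificate_bytes,
    padding.PKCS1v15(),
    cert.signature_hash_algorithm,
)
print("Signature verifies with the certificate's own public key.")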

Figure 5-12 The hierarchical trust model outlines trust paths.





Root CA

If the root CA’s private key was ever compromised, all entities within the hierarchical trust model would be drastically affected, because this is their sole trust anchor. The root CA usually has a small amount of interaction with the intermediate CAs and end-entities, and can therefore be taken offline much of the time. This provides a greater degree of protection for the root CA, because when it is offline it is basically inaccessible.



Walking the Certificate Path


When a user in one trust domain needs to communicate with another user in another trust domain, one user will need to validate the other’s certificate. This sounds simple enough, but what it really means is that each certificate for each CA, all the way up to a shared trusted anchor, also must be validated. If Debbie needs to validate Sam’s certificate, as shown in Figure 5-12, she actually also needs to validate the Leaf D CA and Intermediate B CA certificates, as well as Sam’s.

So in Figure 5-12, we have a user, Sam, who digitally signs a message and sends it and his certificate to Debbie. Debbie needs to validate this certificate before she can trust Sam’s digital signature. Included in Sam’s certificate is an issuer field, which indicates that the certificate was issued by Leaf D CA. Debbie has to obtain Leaf D CA’s digital certificate and public key to validate Sam’s certificate. Remember that Debbie validates the certificate by verifying its digital signature. The digital signature was created by the certificate issuer using its private key, so Debbie needs to verify the signature using the issuer’s public key.

Debbie tracks down Leaf D CA’s certificate and public key, but she now needs to verify this CA’s certificate, so she looks at the issuer field, which indicates that Leaf D CA’s certificate was issued by Intermediate B CA. Debbie now needs to get Intermediate B CA’s certificate and public key.

Debbie’s client software tracks this down and sees that the issuer for the Intermediate B CA is the root CA, for which she already has a certificate and public key. So Debbie’s client software had to follow the certificate path, meaning it had to continue to track down and collect certificates until it came upon a self-signed certificate. A self-signed certificate indicates that it was signed by a root CA, and Debbie’s software has been configured to trust this entity as her trust anchor, so she can stop there. Figure 5-13 illustrates the steps Debbie’s software had to carry out just to be able to verify Sam’s certificate.
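The path walk Debbie’s software performs can be sketched in a few lines of Python. This is an illustrative outline only, assuming the third-party cryptography library and RSA-signed certificates; the names walk_path and candidates are made up for this example, and a real validator also checks validity periods, revocation status, and critical extensions.

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

def verify_issued_by(cert, issuer_cert):
    """Confirm that issuer_cert's public key verifies cert's signature (RSA case)."""
    issuer_cert.public_key().verify(
        cert.signature,
        cert.tbs_certificate_bytes,
        padding.PKCS1v15(),
        cert.signature_hash_algorithm,
    )

def walk_path(end_entity_cert, candidates, trust_anchor):
    """Walk from an end-entity certificate up to the trust anchor, verifying each link.

    candidates maps issuer names to the CA certificates the client has collected
    (Leaf D CA, Intermediate B CA, and so on).
    """
    cert = end_entity_cert
    while cert.issuer != cert.subject:              # stop at a self-signed certificate
        issuer = candidates.get(cert.issuer, trust_anchor)
        verify_issued_by(cert, issuer)              # raises InvalidSignature on failure
        cert = issuer
    if cert != trust_anchor:
        raise ValueError("path does not terminate at the configured trust anchor")
    return True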

This type of simplistic trust model works well within an enterprise that easily follows a hierarchical organizational chart, but many companies cannot use this type of trust model because different departments or offices require their own trust anchors. These demands can be derived from direct business needs or from interorganizational politics. This hierarchical model might not be possible when two or more companies need to communicate with each other. Neither company will let the other’s CA be the root CA, because each does not necessarily trust the other entity to that degree. In these situations, the CAs will need to work in a peer-to-peer relationship instead of in a hierarchical relationship.


Figure 5-13 Verifying each certificate in a certificate path



Peer-to-Peer Model


In a peer-to-peer trust model, one CA is not subordinate to another CA, and no established trusted anchor between the CAs is involved. The end-entities will look to their issuing CA as their trusted anchor, but the different CAs will not have a common anchor.

Figure 5-14 illustrates this type of trust model. The two different CAs will certify the public key for each other, which creates a bidirectional trust. This is referred to as cross certification, since the CAs are not receiving their certificates and public keys from a superior CA, but instead they are creating them for each other.

One of the main drawbacks to this model is scalability. Each CA must certify every other CA that is participating, and a bidirectional trust path must be implemented, as shown in Figure 5-15. If one root CA were certifying all the intermediate CAs, scalability would not be as much of an issue. Figure 5-15 represents a fully connected mesh architecture, meaning that each CA is directly connected to and has a bidirectional trust relationship with every other CA. As you can see in this illustration, the complexity of this setup can become overwhelming.
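A quick back-of-the-envelope calculation shows why this becomes unmanageable. With n participating CAs in a full mesh, every CA must issue a certificate for each of the other n - 1 CAs, so the number of unidirectional cross-certificates grows as n * (n - 1). The short sketch below (plain Python, purely illustrative) prints the count for a few values of n.

# Full-mesh cross certification: each of n CAs certifies the other n - 1 CAs.
for n in (2, 5, 10, 20):
    print(f"{n:>2} CAs -> {n * (n - 1):>3} unidirectional cross-certificates")
# Output: 2 -> 2, 5 -> 20, 10 -> 90, 20 -> 380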


Hybrid Trust Model


A company can be complex within itself, and when the need arises to communicate properly with outside partners, suppliers, and customers in an authorized and secured manner, sticking to either the hierarchical or peer-to-peer trust model can become difficult, if not impossible.

Figure 5-14 Cross certification creates a peer-to-peer PKI model.



Figure 5-15 Scalability is a drawback in cross-certification models.



In many implementations, the different model types have to be combined to provide the necessary communication lines and levels of trust. In a hybrid trust model, the two companies have their own internal hierarchical models and are connected through a peer-to-peer model using cross certification.

Another option in this hybrid configuration is to implement a bridge CA. Figure 5-16 illustrates the role that a bridge CA could play—it is responsible for issuing cross certificates for all connected CAs and trust domains. The bridge is not considered a root or trust anchor, but merely the entity that generates and maintains the cross certification for the connected environments.


Figure 5-16 A bridge CA can control the cross-certification procedures.




EXAM TIP Three trust models exist: hierarchical, peer-to-peer, and hybrid. Hierarchical trust is like an upside down tree. Peer-to-peer is a lateral series of references, and hybrid is a combination of hierarchical and peer-to-peer trust.


Chapter Review


Public key infrastructures can be complex beasts, as this chapter has shown. They have many different components that must work together seamlessly to provide the expected protection and functionality. A PKI is implemented to provide users and devices with the ability to communicate securely and to provide them with trust anchors, since they do not directly trust each other.

Certificate registration requests are validated by a registration authority (RA), and the certificate is then generated by a certificate authority (CA). The digital certificate binds an individual’s identity to the public key that is within the certificate.

Certificates can expire, be revoked, or be suspended. When a user receives a certificate from another user, the other user must be validated, which means that the CA’s digital signature that is embedded within the certificate itself must be validated. This can require that the receiving user validate a whole string of certificates and digital signatures, referred to as a certificate path. This path must be followed until a self-signed trusted root certificate is reached.

Certificate authorities can be public, private (in-house), or outsourced, depending on a company’s needs. Internal PKIs can follow different trust models, which will dictate their trust paths and anchors.

PKIs have been waiting in the wings for several years—waiting for the time when they would finally be accepted and implemented. That time has come, and more and more companies are putting them into place. This also means more and more companies have experienced the pain of implementing such a complex framework into a preexisting working environment. All the aspects of a PKI must be understood before you fill out the first purchase order, which also means determining exactly what a PKI will do for you and what it won’t. In any security activity, understanding the reality of any protection mechanism is necessary, but this is especially true for a PKI because it can drastically affect the whole production environment in both good and bad ways.

Finally, it is important that you understand that a majority of these authentication activities take place behind the scenes for the users—the technology and intelligence have been programmed into the software itself. So, in this chapter, when we said that users need to see if their system has been configured to trust a specific CA, or that they need to validate a digital signature or obtain a higher-level CA certificate, the user’s client software is actually carrying out these tasks. A majority of what was discussed in this chapter happens transparently to the users.


Questions


 
  1. 1. When a user wants to participate in a PKI, what component does he or she need to obtain, and how does that happen?
    1. A. The user submits a certification request to the CA.
    2. B. The user submits a key pair request to the CRL.
    3. C. The user submits a certification request to the RA.
    4. D. The user submits proof of identification to the CA.
  2. 2. How does a user validate a digital certificate that is received from another user?
    1. A. The user will first see whether her system has been configured to trust the CA that digitally signed the other user’s certificate and will then validate that CA’s digital signature.
    2. B. The user will calculate a message digest and compare it to the one attached to the message.
    3. C. The user will first see whether her system has been configured to trust the CA that digitally signed the certificate and then will validate the public key that is embedded within the certificate.
    4. D. The user will validate the sender’s digital signature on the message.
  3. 3. What is the purpose of a digital certificate?
    1. A. It binds a CA to a user’s identity.
    2. B. It binds a CA’s identity to the correct RA.
    3. C. It binds an individual to an RA.
    4. D. It binds an individual to a public key.
  4. 4. What steps does a user take to validate a CA’s digital signature on a digital certificate?
    1. A. The user’s software creates a message digest for the digital certificate and decrypts the encrypted message digest included within the digital certificate. If the decryption performs properly and the message digest values are the same, the certificate is validated.
    2. B. The user’s software creates a message digest for the digital signature and encrypts the message digest included within the digital certificate. If the encryption performs properly and the message digest values are the same, the certificate is validated.
    3. C. The user’s software creates a message digest for the digital certificate and decrypts the encrypted message digest included within the digital certificate. If the user can encrypt the message digest properly with the CA’s private key and the message digest values are the same, the certificate is validated.
    4. D. The user’s software creates a message digest for the digital signature and encrypts the message digest with its private key. If the decryption performs properly and the message digest values are the same, the certificate is validated.
  5. 5. What is a bridge CA, and what is its function?
    1. A. It is a hierarchical trust model that establishes a root CA, which is the trust anchor for all other CAs.
    2. B. It is an entity that creates and maintains the CRL for several CAs at one time.
    3. C. It is a CA that handles the cross-certification certificates for two or more CAs in a peer-to-peer relationship.
    4. D. It is an entity that validates the user’s identity information for the RA before the request goes to the CA.
  6. 6. Why would a company implement a key archiving and recovery system within the organization?
    1. A. To make sure all data encryption keys are available for the company if and when it needs them
    2. B. To make sure all digital signature keys are available for the company if and when it needs them
    3. C. To create session keys for users to be able to access when they need to encrypt bulk data
    4. D. To back up the RA’s private key for retrieval purposes
  7. 7. Within a PKI environment, where does the majority of the trust actually lie?
    1. A. All users and devices within an environment trust the RA, which allows them to indirectly trust each other.
    2. B. All users and devices within an environment trust the CA, which allows them to indirectly trust each other.
    3. C. All users and devices within an environment trust the CRL, which allows them to indirectly trust each other.
    4. D. All users and devices within an environment trust the CPS, which allows them to indirectly trust each other.
  8. 8. Which of the following properly explains the m of n control?
    1. A. This is the process a user must go through to properly register for a certificate through the RA.
    2. B. This ensures that a certificate has to be fully validated by a user before he can extract the public key and use it.
    3. C. This is a control in key recovery to enforce separation of duties.
    4. D. This is a control in key recovery to ensure that the company cannot recover a user’s key without the user’s consent.
 
  1. 9. Which of the following is not a valid field that could be present in an X.509 version 3 digital certificate?
    1. A. Validity dates
    2. B. Serial number
    3. C. Extensions
    4. D. Symmetric key
 
  1. 10. To what does a certificate path pertain?
    1. A. All of the digital certificates that need to be validated before a received certificate can be fully validated and trusted
    2. B. All of the digital certificates that need to be validated before a sent certificate can be properly encrypted
    3. C. All of the digital certificates that need to be validated before a user trusts her own trust anchor
    4. D. All of the digital certificates that need to be validated before a received certificate can be destroyed
 
  1. 11. Which of the following certificate characteristics was expanded upon with version 3 of the X.509 standard?
    1. A. Subject
    2. B. Extensions
    3. C. Digital signature
    4. D. Serial number
 
  1. 12. What is a certification practices statement (CPS), and what is its purpose?
    1. A. A CPS outlines the steps a CA goes through to validate identities and generate certificates. Companies should review this document to ensure that the CA follows the necessary steps the company requires and provides the necessary level of protection.
    2. B. A CPS outlines the steps a CA goes through to communicate with other CAs in other states. Companies should review this document to ensure that the CA follows the necessary steps the company requires and provides the necessary level of protection.
    3. C. A CPS outlines the steps a CA goes through to set up an RA at a company’s site. Companies should review this document to ensure that the CA follows the necessary steps the company requires and provides the necessary level of protection.
    4. D. A CPS outlines the steps a CA goes through to become a business within a vertical market. Companies should review this document to ensure that the CA follows the necessary steps the company requires and provides the necessary level of protection.
 
  1. 13. Which of the following properly describes what a public key infrastructure (PKI) actually is?
    1. A. A protocol written to work with a large subset of algorithms, applications, and protocols
    2. B. An algorithm that creates public/private key pairs
    3. C. A framework that outlines specific technologies and algorithms that must be used
    4. D. A framework that does not specify any technologies, but provides a foundation for confidentiality, integrity, and availability services
 
  1. 14. Once an individual validates another individual’s certificate, what is the use of the public key that is extracted from this digital certificate?
    1. A. The public key is now available to use to create digital signatures.
    2. B. The user can now encrypt session keys and messages with this public key and can validate the sender’s digital signatures.
    3. C. The public key is now available to encrypt future digital certificates that need to be validated.
    4. D. The user can now encrypt private keys that need to be transmitted securely.
 
  1. 15. Why would a digital certificate be added to a certificate revocation list (CRL)?
    1. A. If the public key had become compromised in a public repository
    2. B. If the private key had become compromised
    3. C. If a new employee joined the company and received a new certificate
    4. D. If the certificate expired
 
  1. 16. What is an online CRL service?
    1. A. End-entities can send a request containing a serial number of a specific certificate to an online CRL service. The online service will query several CRL distribution points and respond with information about whether the certificate is still valid or not.
    2. B. CAs can send a request containing the expiration date of a specific certificate to an online CRL service. The online service will query several other RAs and respond with information about whether the certificate is still valid or not.
    3. C. End-entities can send a request containing a public key of a specific certificate to an online CRL service. The online service will query several end-entities and respond with information about whether the certificate is still valid or not.
    4. D. End-entities can send a request containing a public key of a specific CA to an online CRL service. The online service will query several RA distribution points and respond with information about whether the CA is still trustworthy or not.
 
  1. 17. If an extension is marked as critical, what does this indicate?
    1. A. If the CA is not programmed to understand and process this extension, the certificate and corresponding keys can be used for their intended purpose.
    2. B. If the end-entity is programmed to understand and process this extension, the certificate and corresponding keys cannot be used.
    3. C. If the RA is not programmed to understand and process this extension, communication with the CA is not allowed.
    4. D. If the end-entity is not programmed to understand and process this extension, the certificate and corresponding keys cannot be used.
 
  1. 18. How can users have faith that the CRL was not modified to present incorrect information?
    1. A. The CRL is digitally signed by the CA.
    2. B. The CRL is encrypted by the CA.
    3. C. The CRL is open for anyone to post certificate information to.
    4. D. The CRL is accessible only to the CA.
 
  1. 19. When would a certificate be suspended, and where is that information posted?
    1. A. It would be suspended when an employee leaves the company. It is posted on the CRL.
    2. B. It would be suspended when an employee changes his or her last name. It is posted on the CA.
    3. C. It would be suspended when an employee goes on vacation. It is posted on the CRL.
    4. D. It would be suspended when a private key is compromised. It is posted on the CRL.
 
  1. 20. What does cross certification pertain to in a PKI environment?
    1. A. When a company uses an outsourced service provider, it needs to modify its CPS to allow for cross certification to take place between the RA and CA.
    2. B. When two end-entities need to communicate in a PKI, they need to exchange certificates.
    3. C. When two or more CAs need to trust each other so that their end-entities can communicate, they will create certificates for each other.
    4. D. A RA needs to perform a cross certification with a user before the certificate registration is terminated.

Answers


 
  1. 1. C. The user must submit identification data and a certification request to the registration authority (RA). The RA validates this information and sends the certification request to the certificate authority (CA).
  2. 2. A. A digital certificate is validated by the receiver by first determining whether her system has been configured to trust the CA that digitally signed the certificate. If this has been configured, the user’s software uses the CA’s public key and validates the CA’s digital signature that is embedded within the certificate.
  3. 3. D. A digital certificate vouches for an individual’s identity and binds that identity to the public key that is embedded within the certificate.
  4. 4. A. The user’s software calculates a message digest for the digital certificate and decrypts the encrypted message digest value included with the certificate, which is the digital signature. The message digest is decrypted using the CA’s public key. If the two message digest values match, the user knows that the certificate has not been modified in an unauthorized manner, and since the encrypted message digest can be decrypted properly with the CA’s public key, the user is assured that this CA created the certificate.
  5. 5. C. A bridge CA is set up to handle all of the cross-certification certificates and traffic between different CAs and trust domains. A bridge CA is used instead of requiring all of the CAs to authenticate to each other and create certificates with one another, which would end up in a full mesh configuration.
  6. 6. A. To protect itself, the company will make backups of the data encryption keys its employees use for encrypting company information. If an employee is no longer available, the company must make sure that it still has access to its own business data. Companies should not need to back up digital signature keys, since they are not used to encrypt data.
  7. 7. B. The trust anchor for a PKI environment is the CA. All users and devices trust the CA, which allows them to indirectly trust each other. The CA verifies and vouches for each user’s and device’s identity, so these different entities can have confidence that they are communicating with specific individuals.
  8. 8. C. The m of n control is the part of the key recovery software that enforces separation of duties by requiring that more than one person be involved in recovering and reconstructing a lost or corrupted key. A larger group of people (m) is entrusted with this capability, and a certain number of them (n) must authenticate to the software before the key recovery process can proceed; the larger pool is used because not all of the entrusted people may be available at any one time. The system should not allow only one person to carry out key recovery, because that person could then use the keys for fraudulent purposes.
  9. 9. D. The first three values are valid fields that are used in digital certificates. Validity dates indicate how long the certificate is good for, the serial number is a unique value used to identify individual certificates, and extensions allow companies to expand the use of their certificates. A public key is included in the certificate, which is an asymmetric key, not a symmetric key.
  10. 10. A. The certificate path is all of the certificates that must be validated before the receiver of a certificate can validate and trust the newly received certificate. When a user receives a certificate, she must obtain the certificate and public key of all of the CAs until she comes to a self-signed certificate, which is the trusted anchor. So the user must validate each of these certificates until the trusted anchor is reached. The path between the receiver and a trusted anchor is referred to as the certificate path. This is a hierarchical model of trust, and each rung of the trust model must be verified before the end user’s certificate can be validated and trusted.
  11. 11. B. The X.509 standard is currently at version 3, which added more extension capabilities to digital certificates and which added more flexibility for companies using PKIs. Companies can define many of these extensions to mean specific things that are necessary for their proprietary or customized environment and software.
  12. 12. A. The CPS outlines the certificate classes the CA uses and the CA’s procedures for verifying end-entity identities, generating certificates, and maintaining the certificates throughout their lifetimes. Any company that will be using a specific CA needs to make sure it is going through these procedures with the level of protection the company would require of itself. The company will be putting a lot of trust in the CA, so the company should do some homework and investigate how the CA actually accomplishes its tasks.
  13. 13. D. A PKI is a framework that allows several different types of technologies, applications, algorithms, and protocols to be plugged into it. The goal is to provide a foundation that can provide a hierarchical trust model, which will allow end-entities to indirectly trust each other and allow for secure and trusted communications.
  14. 14. B. Once a receiver validates a digital certificate, the embedded public key can be extracted and used to encrypt symmetric session keys, encrypt messages, and validate the sender’s digital signatures.
  15. 15. B. Certificates are added to a CRL when the public/private key pair should no longer be bound to a specific person’s identity. This can happen if a private key is compromised, meaning that it was stolen or captured—this would mean someone else could be using the private key instead of the original user, so the CRL is a protection mechanism that will alert others in the PKI of this incident. Certificates can be added to the CRL if an employee leaves the company or is no longer affiliated with the company for one reason or another. Expired certificates are not added to CRLs.
  16. 16. A. Actually getting the data on the CRLs to end-entities is a huge barrier for many PKI implementations. The environment can have distribution points set up, which provide centralized places that allow the users’ systems to query to see whether a certificate has been revoked or not. Another approach is to push down the CRLs to each end-entity or to use an online service. The online service will do the busy work for the end-entity by querying all the available CRLs and returning a response to the end-entity indicating whether the certificate has been revoked or not.
  17. 17. D. Digital certificates have extensions that allow companies to expand the use of certificates within their environments. When a CA creates a certificate, it is certifying the key pair to be used for a specific purpose (for digital signatures, data encryption, validating a CA’s digital signature, and so on). If a CA adds a critical flag to an extension, it is stating that the key pair can be used only for the reason stated in the extension. If an end-entity receives a certificate with this critical flag set and cannot understand and process the marked extension, the key pair cannot be used at all. The CA is stating, “I will allow the key pair to be used only for this purpose and under these circumstances.” If an extension is marked noncritical, the end-entity does not have to be able to understand and process that extension.
  18. 18. A. The CRL contains all of the certificates that have been revoked. Only the CA can post information to this list. The CA then digitally signs the list to ensure that any modifications will be detected. When an end-entity receives a CRL, it verifies the CA’s digital signature, which tells the end-entity whether the list has been modified in an unauthorized manner and guarantees that the correct CA signed the list.
  19. 19. C. A certificate can be suspended if it needs to be temporarily taken out of production for a period of time. If an employee goes on vacation and wants to make sure no one can use his certificate, he can make a suspension request to the CA, which will post the information to the CRL. The other answers in this question would require the certificate to be revoked, not suspended, and a new certificate would need to be created for the user.
  20. 20. C. Cross certification means that two or more CAs create certificates for each other. This takes place when two trust domains, each with their own CA, need to be able to communicate—a trusted path needs to be established between these domains. Once the first CA validates the other CA’s identity and creates a certificate, it then trusts this other CA, which creates a trusted path between the different PKI environments. The trust can be bidirectional or unidirectional.


CHAPTER 6
Standards and Protocols


 
  • Learn about the standards involved in establishing an interoperable Internet PKI
  • Understand interoperability issues with PKI standards
  • Discover how the common Internet protocols use and implement the PKI standards

One of the biggest growth industries since the 1990s has been the commercial use of the Internet. None of this still steadily growing Internet commerce would be possible without standards and protocols that provide a common, interoperable environment for exchanging information securely. Because Internet users and businesses are so widely distributed, the most practical solution to date has been the commercial implementation of public key infrastructures (PKIs).

This chapter examines the standards and protocols involved in secure Internet transactions and e-business using a PKI. Although you may use only a portion of the related standards and protocols on a daily basis, you should understand how they interact to provide the services that are critical for security: confidentiality, integrity, authentication, and nonrepudiation.

Chapter 5 introduced the algorithms and techniques used to implement a PKI, but as you probably noticed, there is a lot of room for interpretation. Various organizations have developed and implemented standards and protocols that have been accepted as the basis for secure interaction in a PKI environment. These standards fall into three general categories:


 
  • Standards that define the PKI These standards define the data and data structures exchanged and the means for managing that data to provide the functions of the PKI (certificate issuance, storage, revocation, registration, and management).
  • Standards that define the interface between applications and the underlying PKI These standards use the PKI to establish the services required by applications.
  • Other standards These standards don’t fit neatly in either of the other two categories. They provide bits and pieces that glue everything together; they can address not only the PKI structure and the methods and protocols for using it, but they can also provide an overarching business process environment for PKI implementation (for example, ISO/IEC 27002, Common Criteria, and the Federal Information Processing Standards Publications (FIPS PUBS)). Figure 6-1 shows the relationships between these standards and protocols.

Figure 6-1 conveys the interdependence of the standards and protocols discussed in this chapter. The Internet PKI relies on three main standards for establishing interoperable PKI services: PKI X.509 (PKIX), Public Key Cryptography Standards (PKCS), and X.509. Other protocols and standards help define the management and operation of the PKI and related services—Internet Security Association and Key Management Protocol (ISAKMP) and XML Key Management Specification (XKMS) are both key management protocols, Certificate Management Protocol (CMP) is used for managing certificates, and Wired Equivalent Privacy (WEP) is used to encrypt wireless communications in 802.11 environments. These lower-level standards in turn support some of the more application-oriented standards and protocols: Secure/Multipurpose Internet Mail Extensions (S/MIME) for e-mail; Secure Sockets Layer (SSL), Transport Layer Security (TLS), and Wireless Transport Layer Security (WTLS) for secure packet transmission; and IP Security (IPsec) and Point-to-Point Tunneling Protocol (PPTP) to support virtual private networks. ISO/IEC 27002 and FIPS PUBS each address security at the business process, application, protocol, and PKI implementation levels. Finally, Pretty Good Privacy (PGP) provides an alternative method spanning the protocol and application levels.

This chapter examines each standard from the bottom up, starting with building an infrastructure through protocols and applications, and finishing with some of the inherent weaknesses of and potential attacks on a PKI.

Figure 6-1 Relationships between PKI standards and protocols




PKIX/PKCS


Two main standards have evolved over time to implement PKI on a practical level on the Internet. Both are based on the X.509 certificate standard (discussed shortly in the “X.509” section) and establish complementary standards for implementing PKI. PKIX and PKCS intertwine to define the most commonly used set of standards.

PKIX was produced by the Internet Engineering Task Force (IETF) and defines standards for interactions and operations for four component types: the user (end-entity), certificate authority (CA), registration authority (RA), and the repository for certificates and certificate revocation lists (CRLs). PKCS defines many of the lower level standards for message syntax, cryptographic algorithms, and the like. The PKCS set of standards is a product of RSA Security.

The PKIX working group was formed in 1995 to develop the standards necessary to support PKIs. At the time, the X.509 Public Key Certificate (PKC) format was proposed as the basis for a PKI. X.509 includes information regarding data formats and procedures used for CA-signed PKCs, but it doesn’t specify values or formats for many of the fields within the PKC. X.509 v1 (version 1) was originally defined in 1988 as part of the X.500 Directory standard. After the Internet community co-opted X.509 for implementing certificates for secure Internet communications, its shortcomings became apparent. The current version, X.509 v3, was adopted in 1996. X.509 is very complex, allowing a great deal of flexibility in implementing certificate features. PKIX provides standards for extending and using X.509 v3 certificates and for managing them, enabling interoperability between PKIs following the standards.

PKIX uses the model shown in Figure 6-2 for representing the components and users of a PKI. The user, called an end-entity, is not part of the PKI, but end-entities are either users of the PKI certificates, the subject of a certificate (an entity identified by it), or both. The CA is responsible for issuing, storing, and revoking certificates—both PKCs and Attribute Certificates (ACs). The RA is responsible for management activities designated by the CA.

Figure 6-2 The PKIX model



The RA can, in fact, be a component of the CA rather than a separate component. The final component of the PKIX model is the repository, a system or group of distributed systems that provides certificates and certificate revocation lists to the end-entities.


PKIX Standards


Now that we have looked at how PKIX views the world, let’s take a look at what PKIX does. Using X.509 v3, the PKIX working group addresses five major areas:


 
  • PKIX outlines certificate extensions and content not covered by X.509 v3 and the format of version 2 CRLs, thus providing compatibility standards for sharing certificates and CRLs between CAs and end-entities in different PKIs. The PKIX profile of the X.509 v3 PKC describes the contents, required extensions, optional extensions, and extensions that need not be implemented. The PKIX profile suggests a range of values for many extensions. In addition, PKIX provides a profile for version 2 CRLs, allowing different PKIs to share revocation information. (For more information on PKIX, see “Internet X.509 Public Key Infrastructure Certificate and CRL Profile” [RFC 5280].)
  • PKIX provides certificate management message formats and protocols, defining the data structures, management messages, and management functions for PKIs. The working group also addresses the assumptions and restrictions of their protocols. This standard identifies the protocols necessary to support online interactions between entities in the PKIX model. The management protocols support functions for entity registration, initialization of the certificate (possibly key-pair generation), issuance of the certificate, key-pair update, certificate revocation, cross-certification (between CAs), and key-pair recovery if available.
  • PKIX outlines certificate policies and certification practices statements (CPSs), establishing the relationship between policies and CPSs. A policy is a set of rules that helps determine the applicability of a certificate to an end-entity. For example, a certificate for handling routine information would probably have a policy on creation, storage, and management of key pairs quite different from a policy for certificates used in financial transactions, due to the sensitivity of the financial information. A CPS explains the practices used by a CA to issue certificates. In other words, the CPS is the method used to get the certificate, while the policy defines some characteristics of the certificate and how it will be handled and used.
  • PKIX specifies operational protocols, defining the protocols for certificate handling. In particular, protocol definitions are specified for using File Transfer Protocol (FTP) and Hypertext Transfer Protocol (HTTP) to retrieve certificates from repositories. These are the most common protocols for applications to use when retrieving certificates.
  • PKIX includes time-stamping and data certification and validation services, which are areas of interest to the PKIX working group, and which will probably grow in use over time. A time stamp authority (TSA) certifies that a particular entity existed at a particular time. A Data Validation and Certification Server certifies the validity of signed documents, PKCs, and the possession or existence of data. These capabilities support nonrepudiation requirements and are considered building blocks for a nonrepudiation service.

PKCs are the most commonly used certificates, but the PKIX working group has been working on two other types of certificates: Attribute Certificates and Qualified Certificates.

An Attribute Certificate (AC) is used to grant permissions using rule-based, role-based, and rank-based access controls. ACs are used to implement a privilege management infrastructure (PMI). In a PMI, an entity (user, program, system, and so on) is typically identified as a client to a server using a PKC. There are then two possibilities: either the identified client pushes an AC to the server, or the server can query a trusted repository to retrieve the attributes of the client. This situation is modeled in Figure 6-3.

The client push of the AC has the effect of improving performance, but no independent verification of the client’s permissions is initiated by the server. The alternative is to have the server pull the information from an AC issuer or a repository. This method is preferable from a security standpoint, because the server or server’s domain determines the client’s access rights. The pull method has the added benefit of requiring no changes to the client software.

The Qualified Certificate (QC) is based on the term used within the European Commission to identify certificates with specific legislative uses. This concept is generalized in the PKIX QC profile to indicate a certificate used to identify a specific individual (a single human rather than the entity of the PKC) with a high level of assurance in a non-repudiation service.

Table 6-1 summarizes the Internet Requests for Comment (RFCs) that have been produced by the PKIX working group for each of these five areas.

Figure 6-3 The PKIX PMI model








Table 6-1 PKIX Subjects and Related RFCs


Other documents have been produced by the IETF PKIX working group, but those listed in Table 6-1 cover the major implementation details for PKIX. For a complete list of current and pending documents, see the Internet draft for the PKIX working group roadmap (https://datatracker.ietf.org/drafts/draft-ietf-pkix-roadmap/).


PKCS


RSA Laboratories created the Public Key Cryptography Standards (PKCS) to fill some of the gaps in the standards that existed in PKI implementation. As they have with the PKIX standards, PKI developers have adopted many of these standards as a basis for achieving interoperability between different certificate authorities. PKCS is composed of a set of (currently) 13 active standards, with 2 other standards that are no longer active. The standards are referred to as PKCS #1 through PKCS #15, as listed in Table 6-2. The standards combine to establish a common base for services required in a PKI.

Though adopted early in the development of PKIs, some of these standards are being phased out. For example, PKCS #6 is being replaced by X.509 v3 (covered shortly in the “X.509” section) and PKCS #7 and PKCS #10 are used less, as their PKIX counterparts are being adopted.

  • PKCS #1 RSA Cryptography Standard: Definition of the RSA encryption standard.
  • PKCS #2 No longer active; it covered RSA encryption of message digests and was incorporated into PKCS #1.
  • PKCS #3 Diffie-Hellman Key Agreement Standard: Definition of the Diffie-Hellman key-agreement protocol.
  • PKCS #4 No longer active; it covered RSA key syntax and was incorporated into PKCS #1.
  • PKCS #5 Password-Based Cryptography Standard: Definition of a password-based encryption (PBE) method for generating a secret key.
  • PKCS #6 Extended-Certificate Syntax Standard: Definition of an extended certificate syntax that is being replaced by X.509 v3.
  • PKCS #7 Cryptographic Message Syntax Standard: Definition of the cryptographic message standard for encoded messages, regardless of encryption algorithm. Commonly replaced with PKIX Cryptographic Message Syntax.
  • PKCS #8 Private-Key Information Syntax Standard: Definition of a private key information format, used to store private key information.
  • PKCS #9 Selected Attribute Types: Definition of attribute types used in other PKCS standards.
  • PKCS #10 Certification Request Syntax Standard: Definition of a syntax for certification requests.
  • PKCS #11 Cryptographic Token Interface Standard: Definition of a technology-independent programming interface for cryptographic devices (such as smart cards).
  • PKCS #12 Personal Information Exchange Syntax Standard: Definition of a format for storage and transport of user private keys, certificates, and other personal information.
  • PKCS #13 Elliptic Curve Cryptography Standard: Description of methods for encrypting and signing messages using elliptic curve cryptography.
  • PKCS #14 A standard for pseudo-random number generation.
  • PKCS #15 Cryptographic Token Information Format Standard: Definition of a format for storing cryptographic information in cryptographic tokens.

Table 6-2 PKCS Standards
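As a concrete illustration of how a couple of these standards show up in practice, the sketch below (assuming Python’s third-party cryptography library; the key size and subject names are arbitrary examples) generates an RSA key pair, covered by PKCS #1, and builds a PKCS #10 certification request that binds the public key to a subject name, ready to be submitted to an RA or CA.

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate an RSA key pair for the end-entity (the RSA primitives are defined in PKCS #1).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Build a PKCS #10 certification request binding the public key to a subject name.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
    ]))
    .sign(key, hashes.SHA256())
)

# PEM-encode the request for submission to an RA/CA.
print(csr.public_bytes(serialization.Encoding.PEM).decode())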



Why You Need to Know


If you or your company are planning to use one of the existing certificate servers to support e-commerce, you may not need to know the specifics of these standards (except perhaps for your exam). However, if you plan to implement a private PKI to support secure services within your organization, you will need to understand what standards are out there and how the decision to use a particular PKI implementation (either home grown or commercial) may lead to incompatibilities with other certificate-issuing entities. Your business-to-business requirements must be considered when deciding how to implement a PKI within your organization.



EXAM TIP All of these standards and protocols are the “vocabulary” of the computer security industry. You should be well versed in all these titles and their purposes and operations.


X.509


What is a certificate? A certificate is merely a data structure that binds a public key to a subject (a unique name, DNS entry, or e-mail address) and is used to authenticate that a public key indeed belongs to the subject. In the late 1980s, the X.500 OSI Directory Standard was defined by the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU). It was developed for implementing a network directory system, and part of this directory standard was the concept of authentication of entities within the directory. X.509 is the portion of the X.500 standard that addresses the structure of certificates used for authentication.

Several versions of the certificates have been created, with version 3 being the current version (as this is being written). Each version has extended the contents of the certificates to include additional information necessary to use certificates in a PKI. The original ITU X.509 definition was published in 1988, was formerly referred to as CCITT X.509, and is sometimes referred to as ISO/IEC/ITU 9594-8. The 1988 certificate format, version 1, was revised in 1993 as the ITU-T X.509 definition when two more fields were added to support directory access control. ITU-T, the Telecommunication Standardization Sector of the ITU, was created in 1992.

The 1993 version 2 specification was revised following lessons learned from implementing Internet Privacy Enhanced Mail (PEM). Version 3 added additional optional extensions for more subject identification information, key attribute information, policy information, and certification path constraints. In addition, version 3 allowed additional extensions to be defined in standards or to be defined and registered by organizations or communities. Table 6-3 gives a description of the fields in an X.509 certificate.
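To see these fields on a real certificate, the short sketch below (using Python’s third-party cryptography library; the file name is a placeholder) prints the version, serial number, issuer, subject, validity dates, signature algorithm, and any version 3 extensions along with their critical flags.

from cryptography import x509

# Load a PEM-encoded certificate (the file name is a placeholder).
with open("server_cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print("Version:       ", cert.version)                    # e.g. Version.v3
print("Serial number: ", cert.serial_number)
print("Issuer:        ", cert.issuer.rfc4514_string())
print("Subject:       ", cert.subject.rfc4514_string())
print("Not before:    ", cert.not_valid_before)
print("Not after:     ", cert.not_valid_after)
print("Signature alg: ", cert.signature_algorithm_oid)
for ext in cert.extensions:                                # version 3 extensions
    print("Extension:     ", ext.oid, "critical =", ext.critical)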

Certificates are used to encapsulate the information needed to authenticate an entity. The X.509 specification defines a hierarchical certification structure that relies on a root certification authority that is self-certifying (meaning it issues its own certificate). All other certificates can be traced back to such a root through a path. A CA issues a certificate to a uniquely identifiable entity (person, corporation, computer, and so on)—issuing a certificate to “John Smith” would cause some real problems if that were all the information the CA had when issuing the certificate. We are saved somewhat by the requirement that the CA determines what identifier is unique (the distinguished name), but when certificates and trust are extended between CAs, the unique identification becomes critical.

Some other extensions to the X.509 certificate have been proposed for use in implementing a PKI. For example, PKIX identified several extensions for use in the certificate policy framework (see RFC 2527). It is essential that you ensure that your PKI ignores extensions that it is not prepared to handle.


Table 6-3 X.509 Certificate Fields



SSL/TLS


Secure Sockets Layer (SSL) and Transport Layer Security (TLS) provide the most common means of interacting with a PKI and certificates. The older SSL protocol was introduced by Netscape as a means of providing secure connections for web transfers using encryption. These two protocols provide secure connections between the client and server for exchanging information. They also provide server authentication (and optionally, client authentication) and confidentiality of information transfers. See Chapter 15 for a detailed explanation.

The IETF established the TLS Working Group in 1996 to develop a standard transport layer security protocol. The working group began with SSL version 3.0 as its basis and released RFC 2246, TLS Protocol Version 1.0, in 1999 as a proposed standard. The working group also published RFC 2712, “Addition of Kerberos Cipher Suites to Transport Layer Security (TLS),” as a proposed standard, and two RFCs on the use of TLS with HTTP. Like its predecessor, TLS is a protocol that ensures privacy between communicating applications and their users on the Internet. When a server and client communicate, TLS ensures that no third party can eavesdrop or tamper with any message.

TLS is composed of two parts: the TLS Record Protocol and the TLS Handshake Protocol. The TLS Record Protocol provides connection security by using supported encryption methods. The TLS Record Protocol can also be used without encryption. The TLS Handshake Protocol allows the server and client to authenticate each other and to negotiate a session encryption algorithm and cryptographic keys before data is exchanged.

Though TLS is based on SSL and is sometimes referred to as SSL, they are not interoperable. However, the TLS protocol does contain a mechanism that allows a TLS implementation to back down to SSL 3.0. The difference between the two is the way they perform key expansion and message authentication computations. TLS uses the MD5 and SHA1 hashing algorithms XORed together to determine the session key. The most recent browser versions support TLS. Though SSL also uses both hashing algorithms, SSL is considered less secure because the way it uses them forces a reliance on MD5 rather than SHA1.
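The key-expansion difference can be made concrete. The following is a simplified sketch of the TLS 1.0 pseudorandom function described in RFC 2246, written for illustration with Python’s standard hmac and hashlib modules: the secret is split in half, each half drives an HMAC-based expansion (one keyed run using MD5 and one using SHA-1), and the two output streams are XORed together to produce the keying material.

import hashlib
import hmac

def p_hash(hash_func, secret, seed, length):
    """TLS 1.0 P_hash expansion: chained HMACs until enough bytes are produced."""
    out = b""
    a = seed                                                   # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hash_func).digest()            # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hash_func).digest()  # HMAC(secret, A(i) + seed)
    return out[:length]

def tls10_prf(secret, label, seed, length):
    """TLS 1.0 PRF: expand each half of the secret and XOR the MD5 and SHA-1 streams."""
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]
    md5_stream = p_hash(hashlib.md5, s1, label + seed, length)
    sha1_stream = p_hash(hashlib.sha1, s2, label + seed, length)
    return bytes(x ^ y for x, y in zip(md5_stream, sha1_stream))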

The TLS Record Protocol is a layered protocol. At each layer, messages may include fields for length, description, and content. The Record Protocol takes messages to be transmitted, fragments the data into manageable blocks, optionally compresses the data, applies a message authentication code (MAC) to the data, encrypts it, and transmits the result. Received data is decrypted, verified, decompressed, and reassembled, and then delivered to higher-level clients.

The TLS Handshake Protocol involves the following steps, which are summarized in Figure 6-4:


 
  1. Exchange hello messages to agree on algorithms, exchange random values, and check for session resumption.
  2. Exchange the necessary cryptographic parameters to allow the client and server to agree on a pre-master secret.
  3. Exchange certificates and cryptographic information to allow the client and server to authenticate themselves.
  4. Generate a master secret from the pre-master secret and exchanged random values.
  5. Provide security parameters to the record layer.
  6. Allow the client and server to verify that their peer has calculated the same security parameters and that the handshake occurred without tampering by an attacker.

Figure 6-4 TLS Handshake Protocol



Though it has been designed to minimize this risk, TLS still has potential vulnerabilities to a man-in-the-middle attack. A highly skilled and well-placed attacker can force TLS to operate at lower security levels. Regardless, through the use of validated and trusted certificates, a secure cipher suite can be selected for the exchange of data.

Once established, a TLS session remains active as long as data is being exchanged. If sufficient inactive time has elapsed for the secure connection to time out, it can be reinitiated.
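From the application’s point of view, nearly all of this is handled by the TLS library. The minimal sketch below uses Python’s standard ssl module (the host name is only an example): create_default_context() loads the system trust anchors and enables certificate validation, wrap_socket() runs the handshake described above, and the negotiated protocol version and cipher suite can then be inspected.

import socket
import ssl

hostname = "www.example.com"             # placeholder host
context = ssl.create_default_context()   # loads trust anchors, enables validation

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls:
        print("Protocol:", tls.version())            # e.g. 'TLSv1.3'
        print("Cipher:  ", tls.cipher())             # negotiated cipher suite
        print("Peer:    ", tls.getpeercert()["subject"])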


ISAKMP


The Internet Security Association and Key Management Protocol (ISAKMP) provides a method for implementing a key exchange protocol and for negotiating a security policy. It defines procedures and packet formats to negotiate, establish, modify, and delete security associations. Because it is a framework, it doesn’t define implementation-specific details, such as the key exchange protocol or hash functions. A widely used example built on ISAKMP is the Internet Key Exchange (IKE) protocol, which is deployed with IPsec throughout the industry.

An important definition for understanding ISAKMP is the term security association. A security association (SA) is a relationship in which two or more entities define how they will communicate securely. ISAKMP is intended to support SAs at all layers of the network stack. For this reason, ISAKMP can be implemented on the transport level using TCP or User Datagram Protocol (UDP), or it can be implemented on IP directly.

Negotiation of a SA between servers occurs in two stages. First, the entities agree on how to secure negotiation messages (the ISAKMP SA). Once the entities have secured their negotiation traffic, they then determine the SAs for the protocols used for the remainder of their communications. Figure 6-5 shows the structure of the ISAKMP header. This header is used during both parts of the ISAKMP negotiation.

The initiator cookie is set by the entity requesting the SA, and the responder sets the responder cookie. The payload byte indicates the type of the first payload to be encapsulated.


Figure 6-5 ISAKMP header format


Payload types include security associations, proposals, key transforms, key exchanges, vendor identities, and other things. The major and minor revision fields refer to the major version number and minor version number for the ISAKMP. The exchange type helps determine the order of messages and payloads. The flag bits indicate options for the ISAKMP exchange, including whether the payload is encrypted, whether the initiator and responder have "committed" to the SA, and whether the packet is to be authenticated only (and is not encrypted). The final fields of the ISAKMP header indicate the message identifier and a message length. Payloads encapsulated within ISAKMP use a generic header, and each payload has its own header format.
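To make the layout concrete, the sketch below unpacks the fixed 28-byte ISAKMP header defined in RFC 2408 from raw bytes, using only Python’s standard library; the field order follows the description above, and the helper name is made up for this example.

import struct

def parse_isakmp_header(data: bytes) -> dict:
    """Unpack the fixed 28-byte ISAKMP header (RFC 2408) from a raw datagram."""
    (init_cookie, resp_cookie, next_payload, version,
     exchange_type, flags, message_id, length) = struct.unpack("!8s8sBBBBII", data[:28])
    return {
        "initiator_cookie": init_cookie.hex(),
        "responder_cookie": resp_cookie.hex(),
        "next_payload": next_payload,        # type of the first encapsulated payload
        "major_version": version >> 4,       # high nibble of the version byte
        "minor_version": version & 0x0F,     # low nibble of the version byte
        "exchange_type": exchange_type,
        "flags": flags,                      # encryption / commit / authentication-only bits
        "message_id": message_id,
        "length": length,                    # total message length in bytes
    }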

Once the ISAKMP SA is established, multiple protocol SAs can be established using the single ISAKMP SA. This feature is valuable due to the overhead associated with the two-stage negotiation. SAs are valid for specific periods of time, and once the time expires, the SA must be renegotiated. Many resources are also available for specific implementations of ISAKMP within the IPsec protocol.


CMP


The PKIX Certificate Management Protocol (CMP) is specified in RFC 4210. This protocol defines the messages and operations required to provide certificate management services within the PKIX model. Though part of the IETF PKIX effort, CMP provides a framework that works well with other standards, such as PKCS #7 and PKCS #10.

CMP provides for the following certificate operations:


 
  • CA establishment, including creation of the initial CRL and export of the public key for the CA
  • Certification of an end-entity, including the following:
    • Initial registration and certification of the end-entity (registration, certificate issuance, and placement of the certificate in a repository)
    • Updates to the key pair for end-entities, required periodically and when a key pair is compromised or keys cannot be recovered
 
  • End-entity certificate updates, required when a certificate expires
  • Periodic CA key-pair update, similar to end-entity key-pair updates
  • Cross-certification requests, placed by other CAs
  • Certificate and CRL publication, performed under the appropriate conditions of certificate issuance and certificate revocation
  • Key-pair recovery, a service to restore key-pair information for an end-entity; for example, if a certificate password is lost or the certificate file is lost
  • Revocation requests, supporting requests by authorized entities to revoke a certificate

CMP also defines mechanisms for performing these operations, either online or offline using files, e-mail, tokens, or web operations.


XKMS


The XML Key Management Specification (XKMS) defines services to manage PKI operations within the Extensible Markup Language (XML) environment. These services are provided for handling PKI keys and certificates automatically. Developed by the World Wide Web Consortium (W3C), XKMS is intended to simplify integration of PKIs and management of certificates in applications. In addition to addressing the authentication and verification of electronic signatures, XKMS allows certificates to be managed, registered, or revoked.

XKMS services reside on a separate server that interacts with an established PKI. The services are accessible via a simple XML protocol. Developers can rely on the XKMS services, making it less complex to interface with the PKI. The services provide for retrieving key information (owner, key value, key issuer, and the like) and for key registration and management (such as registration and revocation).

Retrieval operations rely on the XML signature for the necessary information. Three tiers of service are based on the client requests and application requirements. Tier 0 provides a means of retrieving key information by embedding references to the key within the XML signature. The signature contains an element called a retrieval method that indicates ways to resolve the key. In this case, the client sends a request, using the retrieval method, to obtain the desired key information. For example, if the verification key contained a long chain of X.509 v3 certificates, a retrieval method could be included to avoid sending the certificates with the document. The client would use the retrieval method to obtain the chain of certificates. For tier 0, the server indicated in the retrieval method responds directly to the request for the key, possibly bypassing the XKMS server. The tier 0 process is shown in Figure 6-6.

Figure 6-6 XKMS tier 0 retrieval


With tier 1 operations, the client forwards the key information portions of the XML signature to the XKMS server, relying on the server to perform the retrieval of the desired key information. The desired information can be local to the XKMS server, or it can reside on an external PKI system. The XKMS server provides no additional validation of the key information, such as checking to see whether the certificate has been revoked and is still valid. Just as in tier 0, the client performs final validation of the document. Tier 1 is called the locate service because it locates the appropriate key information for the client, as shown in Figure 6-7.

Tier 2 is called the validate service, and it is illustrated in Figure 6-8. In this case, just as in tier 1, the client relies on the XKMS service to retrieve the relevant key information from the external PKI. The XKMS server also performs a data validation on a portion of the key information provided by the client for this purpose. This validation verifies the binding of the key information with the data indicated by the key information contained in the XML signature.

The primary difference between tier 1 and tier 2 is the level of involvement of the XKMS server. In tier 1, it can serve only as a relay or gateway between the client and the PKI. In tier 2, the XKMS server is actively involved in verifying the relation between the PKI information and the document containing the XML signature.

XKMS relies on the client or underlying communications mechanism to provide for the security of the communications with the XKMS server. The specification suggests using one of three methods for ensuring server authentication, response integrity, and relevance of the response to the request: digitally signed correspondence, a transport layer security protocol (such as SSL, TLS, or WTLS), or a packet layer security protocol (such as IPsec). Obviously, digitally signed correspondence introduces its own issues regarding validation of the signature, which is the purpose of XKMS.

It is possible to define other tiers of service. Tiers 3 and 4, an assertion service and an assertion status service, respectively, are mentioned in the defining XKMS specification, but they are not defined. The specification states they “could” be defined in other documents.

XKMS also provides services for key registration, key revocation, and key recovery. Authentication for these actions is based on a password or passphrase, which is provided when the keys are registered and when they must be recovered.

Figure 6-7 XKMS tier 1 locate service



Figure 6-8 XKMS tier 2 validate service




S/MIME


The Secure/Multipurpose Internet Mail Extensions (S/MIME) message specification is an extension to the MIME standard that provides a way to send and receive signed and encrypted MIME data. RSA Security created the first version of the S/MIME standard, using the RSA encryption algorithm and the PKCS series of standards. The second version dates from 1998 but had a number of serious restrictions, including a restriction to weak 40-bit encryption. The current version of the IETF standard is dated July 2004 and requires the use of Advanced Encryption Standard (AES).

The changes to the S/MIME standard have been frequent enough to make the standard difficult to implement. Rather than having a stable standard for several years that product manufacturers could gain experience with, implementers have had to track repeated changes to the encryption algorithms being used. Just as importantly, and not immediately clear from the IETF documents, the standard relies on more than one other standard in order to function. Key among these is the format of a public key certificate as expressed in the X.509 standard.

The S/MIME v2 specifications outline a basic strategy for providing security services for e-mail but lack many security features required by the Department of Defense (DoD) for use by the military. In early 1996, the Internet Mail Consortium (IMC) was formed as a technical trade association pursuing cooperative use and enhancement of Internet e-mail and messaging. An early goal of the IMC was to bring together the DoD (along with its vendor community) and commercial industry in order to devise a standard security protocol acceptable to both. Several existing security protocols were considered, including MIME Object Security Services (MOSS), Pretty Good Privacy (PGP), and S/MIME v2. After examining these protocols, the group determined that none met the requirements of both the military and commercial communities. Rather than develop an entirely new set of specifications, however, the group decided that with certain enhancements the S/MIME set of specifications could be used. It also decided that, since the discussion was about a common set of specifications to be used throughout the Internet community, the resulting specification should be brought under the control of the IETF.

Shortly after the decision was made to revise the S/MIME version 2 specifications, the DoD, its vendor community, and commercial industry met to begin development of the enhanced specifications. These new specifications would be known as S/MIME v3. Participants agreed that backward compatibility between S/MIME v3 and v2 should be preserved; otherwise, S/MIME v3-compatible applications would not be able to work with older S/MIME v2-compatible applications.

A minimum set of cryptographic algorithms was mandated so that different implementations of the new S/MIME v3 set of specifications could be interoperable. This minimum set must be implemented in an application for it to be considered S/MIME-compliant. Applications can implement additional cryptographic algorithms to meet their customers’ needs, but the minimum set must also be present in the applications for interoperability with other S/MIME applications. Thus, users are not forced to use only the S/MIME-specified algorithms; they can choose their own, but if the application is to be considered S/MIME-compliant, the standard algorithms must also be present.


IETF S/MIME v3 Specifications


Building upon the original work by the IMC organized group, the IETF has worked hard to enhance the S/MIME v3 specifications. The ultimate goal is to have the S/MIME v3 specifications receive recognition as an Internet standard. The current IETF S/MIME v3 set of specifications includes the following:


 
  • Cryptographic Message Syntax (CMS)
  • S/MIME v3 message specification
  • S/MIME v3 certificate handling specification
  • Enhanced security services (ESS) for S/MIME

The CMS defines a standard syntax for transmitting cryptographic information about contents of a protected message. Originally based on the PKCS #7 version 1.5 specification, the CMS specification was enhanced by the IETF S/MIME Working Group to include optional security components. Just as the S/MIME v3 provides backward compatibility with v2, CMS provides backward compatibility with PKCS #7, so applications will be interoperable even if the new components are not implemented in a specific application.

Integrity, authentication, and nonrepudiation are provided through digital signatures, using the SignedData syntax described by the CMS. CMS also describes what is known as the EnvelopedData syntax to provide confidentiality of the message’s content through the use of encryption. The PKCS #7 specification supports key encryption algorithms, such as RSA. Algorithm independence is promoted through the addition of several fields to the EnvelopedData syntax in CMS, which is the major difference between the PKCS #7 and CMS specifications. The goal was to be able to support specific algorithms such as Diffie-Hellman and the Key Exchange Algorithm (KEA), which is implemented on the Fortezza Crypto Card developed for the DoD. One final significant change to the original specifications is the ability to include X.509 Attribute Certificates in the SignedData and EnvelopedData syntaxes for CMS.


CMS Triple Encapsulated Message


An interesting feature of CMS is the ability to nest security envelopes to provide a combination of security features. As an example, a CMS triple-encapsulated message can be created in which the original content and associated attributes are signed and encapsulated within the inner SignedData object. The inner SignedData is in turn encrypted and encapsulated within an EnvelopedData object. The resulting EnvelopedData object is then also signed and finally encapsulated within a second SignedData object, the outer SignedData object. Usually the inner SignedData object is signed by the original user and the outer SignedData is signed by another entity such as a firewall or a mail list agent providing an additional level of security.

This triple encapsulation is not required of every CMS object. All that is required is a single SignedData object created by the user to sign a message, or an EnvelopedData object if the user desires to encrypt a message.
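As a rough illustration of the layering just described, the following Python sketch uses simple stand-in functions in place of a real CMS implementation; only the nesting order is meaningful.

    # Illustration of the layering only: these stand-in functions just label the data,
    # where a real CMS implementation would produce SignedData and EnvelopedData objects.
    def sign(data, signer):
        return {"SignedData": data, "signer": signer}

    def encrypt(data, recipient):
        return {"EnvelopedData": data, "recipient": recipient}

    inner_signed = sign("original content and attributes", "original user")
    enveloped = encrypt(inner_signed, "recipient certificate")
    outer_signed = sign(enveloped, "mail gateway")   # the outer SignedData object
    print(outer_signed)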


PGP


Pretty Good Privacy (PGP) is a popular program that is used to encrypt and decrypt e-mail and files. It also provides the ability to digitally sign a message so the receiver can be certain of the sender’s identity. Taken together, encrypting and signing a message allows the receiver to be assured of who sent the message and to know that it was not modified during transmission. Public domain versions of PGP have been available for years as well as inexpensive commercial versions. PGP is one of the most widely used programs and is frequently used by both individuals and businesses to ensure data and e-mail privacy. It was developed by Philip R. Zimmermann in 1991 and quickly became a de facto standard for e-mail security.


How PGP Works


PGP uses a variation of the standard public key encryption process. In public key encryption, an individual (here called the creator) uses the encryption program to create a pair of keys. One key is known as the public key and is designed to be given freely to others. The other key is called the private key and is designed to be known only by the creator. Individuals wanting to send a private message to the creator will encrypt the message using the creator’s public key. The algorithm is designed such that only the private key can decrypt the message, so only the creator will be able to decrypt it.

This method, known as public key or asymmetric encryption, is time consuming. Symmetric encryption uses only a single key and is generally faster. It is because of this that PGP is designed the way it is. PGP uses a symmetric encryption algorithm to encrypt the message to be sent. It then encrypts the symmetric key used to encrypt this message with the public key of the intended recipient. Both the encrypted key and message are then sent. The receiver’s version of PGP will first decrypt the symmetric key with the private key supplied by the recipient and will then use the resulting decrypted key to decrypt the rest of the message.
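The following Python sketch illustrates the same hybrid pattern. It is not PGP itself; it assumes the third-party cryptography package is installed and simply pairs a random symmetric key (used to encrypt the message) with the recipient's public key (used to encrypt that symmetric key).

    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Recipient's key pair (the "creator" in the text); in practice these already exist.
    recipient_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    recipient_public = recipient_private.public_key()

    # Sender: encrypt the message with a fresh symmetric key, then encrypt that key
    # with the recipient's public key. Both pieces are transmitted together.
    session_key = Fernet.generate_key()
    ciphertext = Fernet(session_key).encrypt(b"The message to protect")
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    wrapped_key = recipient_public.encrypt(session_key, oaep)

    # Receiver: recover the symmetric key with the private key, then decrypt the message.
    recovered_key = recipient_private.decrypt(wrapped_key, oaep)
    assert Fernet(recovered_key).decrypt(ciphertext) == b"The message to protect"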

PGP can use two different public key algorithms—Rivest-Shamir-Adleman (RSA) and Diffie-Hellman. The RSA version uses the International Data Encryption Algorithm (IDEA) algorithm to generate a short symmetric key to be used to encrypt the message and RSA to encrypt the short IDEA key. The Diffie-Hellman version uses the Carlisle Adams and Stafford Tavares (CAST) algorithm to encrypt the message and the Diffie-Hellman algorithm to encrypt the CAST key.

To generate a digital signature, PGP takes advantage of another property of public key encryption schemes. Normally, the sender encrypts using the receiver’s public key and the message is decrypted at the other end using the receiver’s private key. The process can be reversed so that the sender encrypts with his own private key and the receiver then decrypts the message with the sender’s public key. Since the sender is the only individual who holds the private key that corresponds to the sender’s public key, the receiver knows that the message was created by the sender who claims to have sent it. The way PGP accomplishes this task is to generate a hash value from the message and other signature information. This hash value is then encrypted with the sender’s private key, which is known only by the sender. The receiver uses the sender’s public key, which is available to everyone, to decrypt the hash value. If the decrypted hash value matches the hash value computed for the received message, then the receiver is assured that the message was sent by the sender who claims to have sent it.
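A minimal illustration of the sign-the-hash idea follows; it again assumes the third-party cryptography package and uses RSA directly rather than PGP's own message format.

    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    # Sender's key pair; the private key stays with the sender, the public key is published.
    sender_private = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    sender_public = sender_private.public_key()

    message = b"Meet at noon on Tuesday"

    # Sender: hash the message and sign (encrypt) the hash with the private key.
    signature = sender_private.sign(message, padding.PKCS1v15(), hashes.SHA256())

    # Receiver: recompute the hash and check it against the signature using the public key;
    # verify() raises InvalidSignature if the message was altered or the key does not match.
    sender_public.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())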

Typically, versions of PGP will contain a user interface that works with common e-mail programs such as Microsoft Outlook. If you want others to be able to send you an encrypted message, you will need to register your public key that was generated by your PGP program with a PGP public-key server. Alternatively, you will have to send your public key to all those who want to send you an encrypted message or post your key to some location from which they can download it, such as your web page. Note that using a public-key server is the better method for all of the reasons of trust described in the discussions of PKIs in Chapter 5.


Where Can You Use PGP?


For many years the U.S. government waged a fight over the exportation of PGP technology, and for many years its exportation was illegal. Today, however, PGP encrypted e-mail can be exchanged with most users outside the United States, and many versions of PGP are available from numerous international sites. Of course, being able to exchange PGP-encrypted e-mail requires that the individuals on both sides of the communication have valid versions of PGP. Interestingly, international versions of PGP are just as secure as domestic versions—a feature that is not true of other encryption products. It should be noted that the freeware versions of PGP are not licensed for commercial purposes.


HTTPS


Most web activity occurs using the Hypertext Transfer Protocol (HTTP), but this protocol is prone to interception. HTTPS uses the Secure Sockets Layer (SSL) to transfer information. Originally developed by Netscape Communications and implemented in its browser, HTTPS has since been incorporated into most common browsers. It uses the open standard SSL to encrypt data at the application layer. In addition, HTTPS uses the standard TCP port 443 for TCP/IP communications rather than the standard port 80 used for HTTP. Early HTTPS implementations made use of the 40-bit RC4 encryption algorithm, but with the relaxation of export restrictions, most implementations now use 128-bit encryption.
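As a small illustration of the underlying mechanics, the following Python standard-library sketch opens a TLS-protected connection to port 443 and sends a plain HTTP request inside it; www.example.com is simply a placeholder host.

    import socket
    import ssl

    host = "www.example.com"   # placeholder; any HTTPS-enabled web server listens on TCP 443

    context = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw_sock:
        with context.wrap_socket(raw_sock, server_hostname=host) as tls:
            print("Negotiated protocol:", tls.version())
            print("Cipher suite:", tls.cipher())
            tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\nConnection: close\r\n\r\n")
            print(tls.recv(300))   # first bytes of the reply, protected in transit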


IPsec


IPsec is a collection of IP security features designed to introduce security at the network or packet-processing layer in network communication. Other approaches have attempted to incorporate security at higher levels of the TCP/IP suite, such as at the application layer. IPsec is designed to provide secure virtual private network capability over the Internet. In essence, IPsec provides a secure version of IP by introducing authentication and encryption to protect the layer 4 protocols carried in IP packets. IPsec is optional for IPv4 but is required for IPv6. Obviously, both ends of the communication need to use IPsec for the encryption/decryption process to occur.

IPsec provides two types of security service to ensure authentication and confidentiality for either the data alone (referred to as IPsec transport mode) or for both the data and header (referred to as tunnel mode). See Chapter 9 for more detail on tunneling and IPsec operation. IPsec introduces several new protocols including the Authentication Header (AH), which basically provides authentication of the sender, and the Encapsulating Security Payload (ESP), which adds encryption of the data to ensure confidentiality. IPsec also provides for payload compression before encryption using the IP Payload Compression Protocol (IPcomp). Frequently, encryption negatively impacts the ability of compression algorithms to fully compress data for transmission. By providing the ability to compress the data before encryption, IPsec addresses this issue.


CEP


Certificate Enrollment Protocol (CEP) was originally developed by VeriSign for Cisco Systems. It was designed to support certificate issuance, distribution, and revocation using existing technologies. Its use has grown in client and CA applications. The operations supported include CA and RA public key distribution, certificate enrollment, certificate revocation, certificate query, and CRL query.

One of the key goals of CEP was to use existing technology whenever possible. It uses both PKCS #7 (Cryptographic Message Syntax Standard) and PKCS #10 (Certification Request Syntax Standard) to define a common message syntax. It supports access to certificates and CRLs using either Lightweight Directory Access Protocol (LDAP) or the CEP-defined certificate query.


FIPS


The Federal Information Processing Standards Publications (FIPS PUBS or simply FIPS) describe various standards for data communication issues. These documents are issued by the U.S. government through the National Institute of Standards and Technology (NIST), which is tasked with their development. NIST creates these publications when a compelling government need requires a standard for use in areas such as security or system interoperability and no recognized industry standard exists. Three categories of FIPS PUBS are currently maintained by NIST:


 
  • Hardware and software standards/guidelines
  • Data standards/guidelines
  • Computer security standards/guidelines

These documents require that products sold to the U.S. government comply with one (or more) of the FIPS standards. The standards can be obtained from www.itl.nist.gov/fipspubs.


Common Criteria (CC)


The Common Criteria (CC) are the result of an effort to develop a joint set of security processes and standards that can be used by the international community. The major contributors to the CC are the governments of the United States, Canada, France, Germany, the Netherlands, and the United Kingdom. The CC also provide a listing of laboratories that apply the criteria in testing security products. Products that are evaluated by one of the approved laboratories receive an Evaluation Assurance Level (EAL) of EAL1 through EAL7, with EAL7 being the highest level. EAL4, for example, is designed for environments requiring a moderate to high level of independently assured security, while EAL1 is designed for environments in which some confidence in the correct operation of the system is required but the threats to the system are not considered serious. The CC also provide a listing of products by function that have performed at a specific EAL.


WTLS


The Wireless Transport Layer Security (WTLS) protocol is based on the Transport Layer Security (TLS) protocol. WTLS provides reliability and security for wireless communications using the Wireless Application Protocol (WAP). WTLS is necessary due to the limited memory and processing abilities of WAP-enabled phones.

WTLS can be implemented in one of three classes: Class 1 is called anonymous authentication but is not designed for practical use. Class 2 is called server authentication and is the most common model. The clients and server may authenticate using different means. Class 3 is server and client authentication. In Class 3 authentication, the client’s and server’s WTLS certificates are authenticated. Class 3 is the strongest form of authentication and encryption.


WEP


The Wired Equivalent Privacy (WEP) algorithm is part of the 802.11 standard and is used to protect wireless communications from interception. A secondary function is to prevent unauthorized access to a wireless network. WEP relies on a secret key that is shared between a mobile station and an access point. In most installations, a single key is used by all of the mobile stations and access points.


WEP Security Issues


In modern corporate environments, it’s common for wireless networks to be created in which systems with 802.11 network interface cards communicate with wireless access points that connect the computer to the corporation’s network. WEP is an optional security protocol specified in the 802.11 standard and is designed to address the security needs in this wireless environment. It uses a 24-bit initialization vector (IV) as a seed value to begin the security association. This, in itself, is a security problem, because only about 16.7 million IVs are possible with 24 bits. At the speeds at which modern networks operate, it does not take long for initialization vectors to repeat. The secret key is only 40 bits in length (for 64-bit encryption; 104 bits for 128-bit encryption), another problem since it does not take too long to brute-force encryption schemes using key lengths this short.
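A quick back-of-the-envelope calculation shows how quickly the 24-bit IV space can be consumed; the frame rate used here is an assumed figure for a busy access point, not a measured value.

    # Back-of-the-envelope look at the 24-bit IV space; the frame rate is an assumption.
    iv_space = 2 ** 24                  # 16,777,216 possible initialization vectors
    frames_per_second = 500             # assumed load on a busy access point
    hours_to_exhaust = iv_space / frames_per_second / 3600
    print(iv_space, round(hours_to_exhaust, 1), "hours")   # about 9.3 hours at this rate
    # Repeated IVs typically appear far sooner, since many cards pick IVs randomly
    # or reset them to zero on restart.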

Some vendors provide 128-bit WEP 2 keys in their products to overcome the short encryption key length, but that only increases the work required of an attacker in a linear manner, because the underlying IV weakness remains, so the longer keys are almost equally vulnerable. In addition, the WEP keys are static. It is up to the system administrator to change WEP keys manually.

One final problem with WEP is that many wireless network implementations do not even ship with WEP enabled. Due to the rapid growth of the wireless industry, standards have not been strongly implemented. WPA and WPA2, based on the 802.11i standard, provide significantly increased wireless security. See Chapter 10 for more details on WPA and WPA2.


ISO/IEC 27002 (Formerly ISO 17799)


ISO/IEC 27002 is a very popular and detailed standard for creating and implementing security policies. ISO/IEC 27002 was formerly ISO 17799, which was based on version 2 of the British Standard 7799 (BS7799) published in May 1999. With the increased emphasis placed on security in both the government and industry over the last few years, many organizations are now training their audit personnel to evaluate their organizations against the ISO/IEC 27002 standard. The standard is divided into 12 sections, each containing more detailed statements describing what is involved for that topic:


 
  • Risk assessment
  • Security policy
  • Organization of information security
  • Asset management
  • Human resources security
  • Physical and environmental security: protection of the computer facilities
  • Communications and operations management: management of technical security controls in systems and networks
  • Access control: restriction of access rights to networks, systems, applications, functions, and data
  • Information systems acquisition, development, and maintenance: building security into applications
  • Information security incident management: anticipating and responding appropriately to information security breaches
  • Business continuity management: protecting, maintaining, and recovering business-critical processes and systems
  • Compliance: ensuring conformance with information security policies, standards, laws, and regulations


Chapter Review


Chapter 5 discussed the various components of a public key infrastructure (PKI). This chapter continued the discussion with the many different standards and protocols that have been implemented to support PKI. Standards and protocols are important because they define the basis for how communication will take place. Without these protocols, two entities may each independently develop their own methods for implementing the various components of a PKI, as described in Chapter 5, and the two will not be compatible. On the Internet, not being compatible and not being able to communicate is not an option.

Three main standards have evolved over time to implement PKI on the Internet. Two are based on a third standard, the X.509 standard, and establish complementary standards for implementing PKI. These two standards are Public Key Infrastructure X.509 (PKIX) and Public Key Cryptography Standards (PKCS). PKIX defines standards for interactions and operations for four component types: the user (end-entity), certificate authority (CA), registration authority (RA), and the repository for certificates and certificate revocation lists (CRLs). PKCS defines many of the lower level standards for message syntax, cryptographic algorithms, and the like.

Other protocols and standards can help define the management and operation of the PKI and related services, such as ISAKMP, XKMS, and CMP. WEP is used to encrypt wireless communications in an 802.11 environment and S/MIME for e-mail; SSL, TLS, and WTLS are used for secure packet transmission; and IPsec and PPTP are used to support virtual private networks.

The Common Criteria (CC) establishes a series of criteria from which security products can be evaluated. The ISO/IEC 27002 standard provides a point from which security policies and practices can be developed in 12 areas. Various types of publications are available from NIST such as those found in the FIPS series.


Questions


 
  1. Which organization created PKCS?
    A. RSA
    B. IEEE
    C. OSI
    D. ISO
  2. Which of the following is not part of a public key infrastructure?
    A. Certificates
    B. Certificate revocation list (CRL)
    C. Substitution cipher
    D. Certificate authority (CA)
  3. Which of the following is used to grant permissions using rule-based, role-based, and rank-based access controls?
    A. Attribute Certificate
    B. Qualified Certificate
    C. Control Certificate
    D. Operational Certificate
  4. Transport Layer Security consists of which two protocols?
    A. TLS Record Protocol and TLS Certificate Protocol
    B. TLS Certificate Protocol and TLS Handshake Protocol
    C. TLS Key Protocol and TLS Handshake Protocol
    D. TLS Record Protocol and TLS Handshake Protocol
  5. Which of the following provides connection security by using common encryption methods?
    A. TLS Certificate Protocol
    B. TLS Record Protocol
    C. TLS Layered Protocol
    D. TLS Key Protocol
  6. Which of the following provides a method for implementing a key exchange protocol?
    A. EISA
    B. ISA
    C. ISAKMP
    D. ISAKEY
  7. A relationship in which two or more entities define how they will communicate securely is known as what?
    A. Security association
    B. Security agreement
    C. Three-way agreement
    D. Three-way handshake
  8. The entity requesting an SA sets what?
    A. Initiator cookie
    B. Process ID
    C. Session number
    D. Session ID
  9. What protocol is used to establish a CA?
    A. Certificate Management Protocol
    B. Internet Key Exchange Protocol
    C. Secure Sockets Layer
    D. Public Key Infrastructure
  10. What is the purpose of XKMS?
    A. Encapsulates session associations over TCP/IP
    B. Extends session associations over many transport protocols
    C. Designed to replace SSL
    D. Defines services to manage heterogeneous PKI operations via XML
  11. Which of the following is a secure e-mail standard?
    A. POP3
    B. IMAP
    C. S/MIME
    D. SMTP
  12. Secure Sockets Layer uses what port to communicate?
    A. 143
    B. 80
    C. 443
    D. 53

Answers


 
  1. A. RSA Laboratories created the Public Key Cryptography Standards (PKCS).
  2. C. The substitution cipher is not a component of PKI. The substitution cipher is an elementary alphabet-based cipher.
  3. A. An Attribute Certificate (AC) is used to grant permissions using rule-based, role-based, and rank-based access controls.
  4. D. Transport Layer Security consists of the TLS Record Protocol, which provides security, and the TLS Handshake Protocol, which allows the server and client to authenticate each other.
  5. B. The TLS Record Protocol provides connection security by using common encryption methods, such as DES.
  6. C. The Internet Security Association and Key Management Protocol (ISAKMP) provides a method for implementing a key exchange protocol and for negotiating a security policy.
  7. A. During a security association, the client and the server list the types of encryption of which they are capable and choose the most secure encryption standard that they have in common.
  8. A. The entity requesting a security association sets the initiator cookie.
  9. A. The Certificate Management Protocol is used to establish a CA.
  10. D. The XML Key Management Specification (XKMS) defines services to manage PKI operations via XML, which is interoperable across different vendor platforms.
  11. C. Secure/Multipurpose Internet Mail Extensions (S/MIME) is a secure e-mail standard. Other popular standards include Pretty Good Privacy (PGP) and OpenPGP.
  12. C. SSL’s well-known port is 443. SSL was developed by Netscape.

PART III
Security in the Infrastructure


Chapter 7 Physical Security

Chapter 8 Infrastructure Security

Chapter 9 Remote Access and Authentication

Chapter 10 Infrastructure



CHAPTER 7
Physical Security


 
  • Describe how physical security directly affects computer and network security
  • Discuss steps that can be taken to help mitigate risks
  • Understand electronic access controls and the principles of convergence

For most American homes, locks are the primary means of achieving physical security, and almost every American locks the doors to his or her home upon leaving the residence. Some go even further and set up intrusion alarm systems in addition to locks. All these precautions are considered necessary because people believe they have something significant inside the house that needs to be protected, such as important possessions and important people.

Physical security is an important topic for businesses dealing with the security of information systems. Businesses are responsible for securing their profitability, which requires a combination of several aspects: They need to secure employees, product inventory, trade secrets, and strategy information. These and other important assets affect the profitability of a company and its future survival. Companies therefore perform many activities to attempt to provide physical security—locking doors, installing alarm systems, using safes, posting security guards, setting access controls, and more.

Most companies today have put a large amount of effort into network security and information systems security. In this chapter, you will learn how these two security efforts are linked, and you’ll learn several methods by which companies can minimize their exposure to physical security events that can diminish their network security.


The Security Problem


The problem that faces professionals charged with securing a company’s network can be stated rather simply: Physical access negates all other security measures. No matter how impenetrable the firewall and intrusion detection system (IDS), if an attacker can find a way to walk up to and touch a server, he can break into it. The more remarkable thing is that gaining physical access to a number of machines is not that difficult.

Consider that most network security measures are, from necessity, directed at protecting a company from the Internet. This fact results in a lot of companies allowing any kind of traffic on the local area network (LAN). So if an attacker attempts to gain access to a server over the Internet and fails, he may be able to gain physical access to the receptionist’s machine, and by quickly compromising it, he can use it as a remotely controlled zombie to attack what he is really after. Physically securing information assets doesn’t mean just the servers; it means protecting the physical access to all the organization’s computers and its entire network infrastructure.

Physical access to a corporation’s systems can allow an attacker to perform a number of interesting activities, starting with simply plugging into an open Ethernet jack. The advent of handheld devices with the ability to run operating systems with full networking support has made this attack scenario even more feasible. Prior to handheld devices, the attacker would have to work in a secluded area with dedicated access to the Ethernet for a time. The attacker would sit down with a laptop and run a variety of tools against the network, and working internally typically put the attacker behind the firewall and IDS. Today’s capable PDAs can assist these efforts by allowing attackers to place the small device onto the network to act as a wireless bridge. The attacker can then use a laptop to attack a network remotely via the bridge from outside the building. If power is available near the Ethernet jack, this type of attack can also be accomplished with an off-the-shelf access point. The attacker’s only challenge is finding an Ethernet jack that isn’t covered by furniture or some other obstruction.

Another simple attack that can be used when an attacker has physical access is called a bootdisk. Before bootable CD-ROMs or DVD-ROMs were available, a boot floppy was used to start the system and prepare the hard drives to load the operating system. Since many machines still have floppy drives, boot floppies can still be used. These floppies can contain a number of programs, but the most typical ones would be NTFSDOS or a floppy-based Linux distribution that can be used to perform a number of tasks, including mounting the hard drives and performing at least read operations. Once an attacker is able to read a hard drive, the password file can be copied off the machine for offline password cracking attacks. If write access to the drive is obtained, the attacker could alter the password file or place a remote control program to be executed automatically upon the next boot, guaranteeing continued access to the machine.

Bootable CD-ROMs and DVD-ROMs are a danger for the same reason—perhaps even more so, because they can carry a variety of payloads such as malware or even entire operating systems. An operating system designed to run the entire machine from an optical disc without using the hard drive is commonly referred to as a LiveCD. LiveCDs contain a bootable version of an entire operating system, typically a variant of Linux, complete with drivers for most devices. LiveCDs give an attacker a greater array of tools than could be loaded onto a floppy disk. For example, an attacker would likely have access to the hard disk and also to an operational network interface that would allow him to send the drive data over the Internet if properly connected. These bootable operating systems could also be custom built to contain any tool that runs under Linux, allowing an attacker a standard bootable attack image or a standard bootable forensics image, or something customized for the tools he likes to use.

The use of bootdisks of all types leads to the next area of concern: creating an image of the hard drive for later investigation. Some form of bootable media is often used to load the imaging software.

Drive imaging is the process of copying the entire contents of a hard drive to a single file on a different media. This process is often used by people who perform forensic investigations of computers. Typically, a bootable media is used to start the computer and load the drive imaging software. This software is designed to make a bit-by-bit copy of the hard drive to a file on another media, usually another hard drive or CD-R/DVD-R media. Drive imaging is used in investigations to make an exact copy that can be observed and taken apart, while keeping the original exactly as it was for evidence purposes.
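Conceptually, imaging is a sequential bit-for-bit copy plus a hash for later verification, as the following Python sketch suggests; the device and file names are placeholders, and real forensic tools add write blocking and far more careful handling.

    import hashlib

    # Copy a source device (or file) to an image file in fixed-size chunks and hash it
    # so the copy can later be verified against the original.
    def image_drive(source="/dev/sdb", destination="evidence.img", chunk=1024 * 1024):
        digest = hashlib.sha256()
        with open(source, "rb") as src, open(destination, "wb") as dst:
            while True:
                block = src.read(chunk)
                if not block:
                    break
                dst.write(block)
                digest.update(block)
        return digest.hexdigest()   # record this hash alongside the image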

From an attacker’s perspective, drive imaging software is useful because it pulls all information from a computer’s hard drive while still leaving the machine in its original state. The information contains every bit of data that was on this computer: any locally stored documents, locally stored e-mails, and every other piece of information that the hard drive contained. This data could be very valuable if the machine held sensitive information about the company.

Physical access is the most common way of imaging a drive, and the biggest benefit for the attacker is that drive imaging leaves absolutely no trace of the crime. While you can do very little to prevent drive imaging, you can minimize its impact. The use of encryption even for a few important files will provide protection. Full encryption of the drive will protect all files stored on it. Alternatively, placing files on a centralized file server will keep them from being imaged from an individual machine, but if an attacker is able to image the file server, the data will be copied.



EXAM TIP Drive imaging is a threat because all existing access controls to data can be bypassed and all the data once stored on the drive can be read from the image.

An even simpler version of the drive imaging attack is to steal the computer outright. Computer theft typically occurs for monetary gain—the thief later selling his prize. We’re concerned with the theft of a computer to obtain the data it holds, however. While physical thievery is not a technical attack, it is often carried out in conjunction with a bit of social engineering—for example, the thief might appear to be a legitimate computer repair person and may be allowed to walk out of the building with a laptop or other system in his possession. For anyone who discounts this type of attack, consider this incident: In Australia, two individuals entered a government computer room and managed to walk off with two large servers. They not only escaped with two valuable computers, but they got the data they contained as well.

Many of the methods mentioned so far can be used to perform a denial-of-service (DoS) attack. Physical access to the computers can be much more effective than a network-based DoS. The theft of a computer, using a bootdisk to erase all data on the drives, or simply unplugging computers are all effective DoS attacks. Depending on the company’s quality and frequency of backing up critical systems, a DoS attack can have lasting effects.

Physical access can negate almost all the security that the network attempts to provide. Considering this, you must determine the level of physical access that attackers might obtain. Of special consideration are persons with authorized access to the building but who are not authorized users of the systems. Janitorial personnel and others have authorized access to many areas, but they do not have authorized system access. An attacker could pose as one of these individuals or attempt to gain access to the facilities through them.


Physical Security Safeguards


While it is difficult, if not impossible, to be totally secure, many steps can be taken to mitigate the risk to information systems from a physical threat. The following sections discuss policies and procedures as well as access control methods. Then the chapter explores various authentication methods and how they can help protect against physical threats.


Walls and Guards


The primary defenses against a majority of physical attacks are the barriers between the assets and a potential attacker—walls and doors. Some organizations also employ full- or part-time private security staff to attempt to protect their assets. These barriers provide the foundation upon which all other security initiatives are based, but the security must be designed carefully, as an attacker has to find only a single gap to gain access.

Walls may have been one of the first inventions of man. Once he learned to use natural obstacles such as mountains to separate him from his enemy, he next learned to build his own mountain for the same purpose. Hadrian’s Wall in England, the Great Wall of China, and the Berlin Wall are all famous examples of such basic physical defenses. The walls of any building serve the same purpose, but on a smaller scale: they provide barriers to physical access to company assets. In the case of information assets, as a general rule the most valuable assets are contained on company servers. To protect the physical servers, you must look in all directions: Doors and windows should be safeguarded and a minimum number of each should be used in a server room. Less obvious entry points should also be considered: Is a drop ceiling used in the server room? Do the interior walls extend to the actual roof, raised floors, or crawlspaces? Access to the server room should be limited to the people who need access, not to all employees of the organization. If you are going to use a wall to protect an asset, make sure no obvious holes appear in that wall.



EXAM TIP All entry points to server rooms and wiring closets should be closely controlled and if possible have access logged through an access control system.

Guards provide an excellent security measure, because a visible person has a direct responsibility for security. Other employees expect security guards to behave a certain way with regard to securing the facility. Guards typically monitor entrances and exits and can maintain access logs of who has visited and departed from the building. Everyone who passes through security as a visitor signs the log, which can be useful in tracing who was at what location and why.

Security personnel can be helpful in securing information assets, but they must be given the proper training and preparation. Security guards are typically not computer security experts, so they need to be educated about network security as well as physical security involving users. They are the company’s eyes and ears for suspicious activity, so the network security department needs to train them to notice suspicious network activity as well. Multiple extensions ringing in sequence during the night, computers rebooting all at once, or strange people parked in the parking lot with laptop computers are all indicators of a network attack that might be missed. Many traditional physical security tools such as access controls and CCTV camera systems are transitioning from closed hardwired systems to Ethernet- and IP-based systems. This transition opens up the devices to network attacks traditionally performed on computers. With physical security systems being implemented using the IP network, everyone in physical security must become smarter about network security.


Policies and Procedures


A policy’s effectiveness depends on the culture of an organization, so all of the policies mentioned here should be followed up by functional procedures that are designed to implement them. Physical security policies and procedures relate to two distinct areas: those that affect the computers themselves and those that affect users.

To mitigate the risk to computers, physical security needs to be extended to the computers themselves. To combat the threat of bootdisks, the simplest answer is to remove or disable floppy drives from all desktop systems that do not require them. The continued advance of hard drive capacity has pushed file sizes beyond what floppies can typically hold. LANs with constant Internet connectivity have made network services the focus of how files are moved and distributed. These two factors have reduced floppy usage to the point where computer manufacturers are making floppy drives accessory options instead of standard features.

The second boot device to consider is the CD-ROM/DVD-ROM drive. This device can probably also be removed from or disabled on a number of machines. A DVD can not only be used as a boot device, but it can be exploited via the autorun feature that some operating systems support. Autorun was designed as a convenience for users, so that when a CD containing an application is inserted, the computer will instantly prompt for input versus having to explore the CD filesystem and find the executable file. Unfortunately, since the autorun file runs an executable, it can be programmed to do anything an attacker wants. If autorun is programmed maliciously, it could run an executable that installs malicious code that could allow an attacker to later gain remote control of the machine.

Disabling autorun is an easy task: In Windows XP, you simply right-click the DVD drive icon and set all media types to No Action. This ability can also be disabled by Active Directory settings. Turning off the autorun feature is an easy step that improves security; however, disabling autorun is only half the solution. Since the optical drive can be used as a boot device, a CD loaded with its own operating system (called a LiveCD) could be used to boot the computer with malicious system code. This separate operating system will bypass any passwords on the host machine and can access locally stored files.
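The registry value behind these settings is the commonly documented NoDriveTypeAutoRun policy value; a value of 0xFF disables AutoRun for all drive types. The sketch below sets it directly with Python's standard winreg module and must be run with administrative rights on the Windows host; in a domain, the Group Policy route mentioned above remains the more manageable option.

    import winreg

    # NoDriveTypeAutoRun = 0xFF turns off AutoRun for every drive type; run as administrator.
    policy_key = r"SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer"
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, policy_key, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "NoDriveTypeAutoRun", 0, winreg.REG_DWORD, 0xFF)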

Some users will undoubtedly insist on having DVD drives in their machines, but, if possible, the drives should be removed from every machine. If removal is not feasible, particularly on machines that require CD-ROM/DVD use, you can remove the optical drive from the boot sequence in the computer’s BIOS.

To prevent an attacker from editing the boot order, BIOS passwords should be set. These passwords should be unique to the machine and, if possible, complex, using multiple uppercase and lowercase characters as well as numerics. Considering how often these passwords will be used, it is a good idea to list them all in an encrypted file so that a master passphrase will provide access to them.

As mentioned, floppy drives are being eliminated from manufacturers’ machines because of their limited usefulness, but new devices are being adopted in their place, such as USB devices. USB ports have greatly expanded users’ ability to connect devices to their computers. USB ports automatically recognize a device plugging into the system and usually work without the user needing to add drivers or configure software. This has spawned a legion of USB devices, from MP3 players to CD burners.

The most interesting of these, for security purposes, are the USB flash memory–based storage devices. USB drive keys, which are basically flash memory with a USB interface in a device about the size of your thumb, provide a way to move files easily from computer to computer. When plugged into a USB port, these devices automount and behave like any other drive attached to the computer. Their small size and relatively large capacity, coupled with instant read-write ability, present security problems. They can easily be used by an individual with malicious intent to conceal the removal of files or data from the building or to bring malicious files into the building and onto the company network.

In addition, well-intentioned users could accidentally introduce malicious code from USB devices by using them on an infected home machine and then bringing the infected device to the office, allowing the malware to bypass perimeter protections and possibly infect the organization. If USB devices are allowed, aggressive virus scanning should be implemented throughout the organization. The devices can be disallowed via Active Directory settings or with a Windows registry key entry. They could also be disallowed by unloading and disabling the USB drivers from users’ machines, which will stop all USB devices from working—however, doing this can create more trouble if users have USB keyboards and mice. Editing the registry key is probably the most effective solution for users who are not authorized to use these devices. Users who do have authorization for USB drives must be educated about the potential dangers of their use.
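One widely documented registry technique is to mark the USB mass-storage driver as disabled so that storage devices no longer mount; keyboards, mice, and other USB device classes are unaffected. The Python sketch below shows the idea (administrative rights required); Group Policy or endpoint-management tooling is usually the better way to deploy it.

    import winreg

    # Start = 4 (disabled) keeps the USB mass-storage driver from loading; run as administrator.
    usbstor_key = r"SYSTEM\CurrentControlSet\Services\USBSTOR"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, usbstor_key, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "Start", 0, winreg.REG_DWORD, 4)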



EXAM TIP USB devices can be used to inject malicious code onto any machine to which they are attached. They can be used to download malicious code from machine to machine without using the network.

The outright theft of a computer is a simple physical attack. This attack can be mitigated in a number of ways, but the most effective method is to lock up equipment that contains important data. Insurance can cover the loss of the physical equipment, but this can do little to get a business up and running again quickly after a theft. Therefore, special access controls for server rooms, as well as simply locking the racks when maintenance is not being performed, are good ways to secure an area. From a data standpoint, mission-critical or high-value information should be stored on a server only. This can mitigate the risk of a desktop or laptop being stolen for the data it contains. Laptops are popular targets for thieves and should be locked inside a desk when not in use, or special computer lockdown cables can be used to secure them. If desktop towers are used, use computer desks that provide a space in which to lock the computer. All of these measures can improve the physical security of the computers themselves, but most of them can be defeated by attackers if users are not knowledgeable about the security program and do not follow it.

Users are often mentioned as the “weakest link in the security chain,” and that can also apply to physical security. Fortunately, in physical security, users are often one of the primary beneficiaries of the security itself. A security program protects a company’s information assets, but it also protects the people of the organization. A good security program will provide tangible benefits to employees, helping them to support and reinforce the security program. Users need to be aware of security issues, and they need to be involved in security enforcement. A healthy company culture of security will go a long way toward assisting in this effort. If, for example, workers in the office notice a strange person visiting their work areas, they should challenge the individual’s presence—this is especially important if visitor badges are required for entry to the facility. A policy of having a visible badge with the employee’s photo on it also assists everyone in recognizing people who do not belong.

Users should be briefed on the proper departments or personnel to contact when they suspect a security violation. Users can perform one of the most simple, yet important, information security tasks: locking a workstation immediately before they step away from it. While a locking screensaver is a good policy, setting it to less than 15 minutes is often counter-productive to active use on the job. An attacker only needs to be lucky enough to catch a machine that has been left alone for 5 minutes.

It is also important to know about workers typically overlooked in the organization. New hires should undergo a background check before being given access to network resources. This policy should also apply to all personnel who will have unescorted physical access to the facility, including janitorial and maintenance workers.


Access Controls and Monitoring


Access control means control of doors and entry points. The design and construction of all types of access control systems as well as the physical barriers to which they are most complementary are fully discussed in other texts. Here, we explore a few important points to help you safeguard the information infrastructure, especially where it meets with the physical access control system. This section talks about layered access systems, as well as electronic door control systems. It also discusses closed circuit television (CCTV) systems and the implications of different CCTV system types.

Locks have been discussed as a primary element of security. Although locks have been used for hundreds of years, their design has not changed much: a metal “token” is used to align pins in a mechanical device. As all mechanical devices have tolerances, it is possible to sneak through these tolerances by “picking” the lock.

As we humans are always trying to build a better mousetrap, high security locks have been designed to defeat attacks; these locks are more sophisticated than a standard home deadbolt system. Typically found in commercial applications that require high security, these locks are produced by two primary manufacturers: Medeco and ASSA. (Medeco’s locks, for example, require that the pins in the lock not only be set to a specific depth, but also individually rotated to set direction: left, right, or center.) High-end lock security is more important now that attacks such as “bump keys” are well known and widely available. A bump key is a key cut with all notches to the maximum depth, also known as “all nines.” This key uses a technique that has been around a long time, but has recently gained a lot of popularity. The key is inserted into the lock and then sharply struck, bouncing the lock pins up above the shear line and allowing the lock to open.

Layered access is an important concept in security. It is often mentioned in conversations about network security perimeters, but in this chapter it relates to the concept of physical security perimeters. To help prevent an attacker from gaining access to important assets, these assets should be placed inside multiple perimeters. Servers should be placed in a separate secure area, ideally with a separate authentication mechanism. For example, if an organization has an electronic door control system using contactless access cards, a combination of the card and a separate PIN code would be required to open the door to the server room. Access to the server room should be limited to staff with a legitimate need to work on the servers. To layer the protection, the area surrounding the server room should also be limited to people who need to work in that area.

Many organizations use electronic access control systems to control the opening of doors. Doorways are electronically controlled via electronic door strikes and magnetic locks. These devices rely on an electronic signal from the control panel to release the mechanism that keeps the door closed. These devices are integrated into an access control system that controls and logs entry into all the doors connected to it, typically through the use of access tokens. Security is improved by having a centralized system that can instantly grant or refuse access based upon a token that is given to the user. This kind of system also logs user access, providing nonrepudiation of a specific user’s presence in a controlled environment. The system will allow logging of personnel entry, auditing of personnel movements, and real-time monitoring of the access controls.

One caution about these kinds of systems is that they usually work with a software package that runs on a computer, and as such this computer should not be attached to the company network. While attaching it to the network can allow easy administration, the last thing you want is for an attacker to have control of the system that allows physical access to your facility. With this control, an attacker could input the ID of a badge that she owns, allowing full legitimate access to an area the system controls. Another problem with such a system is that it logs only the person who initially used the card to open the door—so no logs exist for doors that are propped open to allow others access, or of people “tailgating” through a door opened with a card. The implementation of a mantrap is one way to combat this problem. A mantrap comprises two doors closely spaced that require the user to card through one and then the other sequentially. Mantraps make it nearly impossible to trail through a doorway undetected—if you happen to catch the first door, you will be trapped in by the second door.



EXAM TIP A mantrap door arrangement can prevent unauthorized people from following authorized users through an access-controlled door, which is also known as “tailgating.”

CCTVs are similar to the door control systems—they can be very effective, but how they are implemented is an important consideration. The use of CCTV cameras for surveillance purposes dates back to at least 1961, when the London Transport train station installed cameras. The development of smaller camera components and lower costs has caused a boom in the CCTV industry since then.

Traditional cameras are analog-based and require a video multiplexer to combine all the signals and make multiple views appear on a monitor. IP-based cameras are changing that, as most of them are standalone units viewable through a web browser. These IP-based systems add useful functionality, such as the ability to check on the building from the Internet. This network functionality, however, makes the cameras subject to normal IP-based network attacks. The last thing anyone would want is a DoS attack launched at the CCTV system just as a break-in is planned. For this reason, IP-based CCTV cameras should be placed on their own physically separate network that can be accessed only by security personnel. The same physical separation applies to any IP-based camera infrastructure. Older time-lapse tape recorders are slowly being replaced with digital video recorders. While the advance in technology is significant, be careful if and when these devices become IP-enabled, since they will become a security issue, just like everything else that touches the network. If you depend on the CCTV system to protect your organization’s assets, carefully consider camera placement and the type of cameras used. Different iris types, focal lengths, and color or infrared capabilities are all options that make one camera superior to another in a specific location.

The issues discussed so far are especially prevalent when physical access control devices are connected to network resources. But no access controls, network or physical, would work without some form of authentication.


Environmental Controls


While the confidentiality of information is important, so is its availability. Sophisticated environmental controls are needed for current data centers. Fire suppression is also an important consideration when dealing with information systems.

Heating, ventilation, and air conditioning (HVAC) systems are critical for keeping data centers cool, because typical servers put out between 1,000 and 2,000 BTUs of heat per hour. Enough servers in a confined area will create conditions too hot for the machines to continue to operate. The failure of HVAC systems for any reason is cause for concern. Properly securing these systems is important in helping prevent an attacker from performing a physical DoS attack on your servers.
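
As a rough illustration of the cooling arithmetic (the server count and wattage below are illustrative assumptions, not figures from this chapter): electrical power drawn by servers is released as heat at roughly 3.41 BTU per hour per watt, and air-conditioning capacity is commonly rated in “tons,” where one ton equals 12,000 BTU per hour. A minimal Python sketch:

# Rough data-center heat-load estimate (illustrative numbers only).
WATTS_TO_BTU_PER_HR = 3.412    # 1 watt dissipated is about 3.412 BTU/hr
BTU_PER_HR_PER_TON = 12_000    # 1 "ton" of cooling = 12,000 BTU/hr

servers = 40                   # assumed server count
watts_per_server = 400         # assumed average electrical draw per server

total_btu_hr = servers * watts_per_server * WATTS_TO_BTU_PER_HR
cooling_tons = total_btu_hr / BTU_PER_HR_PER_TON

print(f"Heat load: {total_btu_hr:,.0f} BTU/hr (about {cooling_tons:.1f} tons of cooling)")

At the assumed 400 watts, each server works out to roughly 1,400 BTU per hour, consistent with the range quoted above.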

Fire suppression systems should be specialized for the data center. Standard sprinkler-based systems are not optimal for data centers because water will ruin large electrical infrastructures and most integrated circuit–based devices—that is, computers. Gas-based systems are a good alternative, though they also carry special concerns. Halon was used for many years, and any existing installations may still have it for fire suppression in data centers. Halon displaces oxygen, and any people caught in the gas when the system goes off will need a breathing apparatus to survive. Halon is being replaced with other gas-based suppression systems, such as argon and nitrogen mixing systems or carbon dioxide, but the same danger to people exists, so these systems should be carefully implemented.


Authentication


Authentication is the process by which a user proves that she is who she says she is. Authentication is performed to allow or deny a person access to a physical space. The heart of any access control system is to allow access to authorized users and to make sure access is denied to unauthorized people. Authentication is required because many companies have grown so large that not every employee knows every other employee, so it can be difficult to tell by sight who is supposed to be where. Electronic access control systems were spawned from the need to have more logging and control than provided by the older method of metallic keys. Most current electronic systems use a token-based card: when the card is passed near a reader and the system confirms that you have permission, the door strike unlocks and lets you pass into the area. Newer technology attempts to make the authentication process easier and more secure.

The following sections discuss how tokens and biometrics are used for authentication and look at how multiple-factor authentication can be applied to physical access.


Access Tokens


An access token is a physical object that identifies specific access rights; in authentication it falls into the “something you have” factor. Your house key, for example, is a basic physical access token that allows you access into your home. Although keys have been used to unlock devices for centuries, they do have several limitations. Keys are paired exclusively with a lock or a set of locks, and they are not easily changed. It is easy to add an authorized user by giving the user a copy of the key, but it is far more difficult to give that user selective access unless the area in question is already keyed separately. It is also difficult to take access away from a single key or key holder, which usually requires a rekey of the whole system.

In many businesses, physical access authentication has moved to contactless radio frequency cards and readers. When passed near a card reader, the card sends out a code using radio waves. The reader picks up this code and transmits it to the control panel. The control panel checks the code against the reader from which it is being read and the type of access the card has in its database. The advantages of this kind of token-based system include the fact that any card can be deleted from the system without affecting any other card or the rest of the system. In addition, all doors connected to the system can be segmented in any form or fashion to create multiple access areas, with different permissions for each one. The tokens themselves can also be grouped in multiple ways to provide different access levels to different groups of people. All of the access levels or segmentation of doors can be modified quickly and easily if building space is retasked. Newer technologies are adding capabilities to the standard token-based systems. The advent of smart cards (cards that contain integrated circuits) has enabled cryptographic types of authentication.
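
The control-panel logic just described can be sketched in a few lines of Python. This is a minimal illustration only; the card codes, door names, and log format are invented for the example, and a real system would add schedules, anti-passback, and tamper handling.

from datetime import datetime

# Hypothetical card database: card code -> set of doors the card may open.
CARD_PERMISSIONS = {
    "C1001": {"lobby", "server-room"},
    "C1002": {"lobby"},
}

access_log = []  # central log that supports auditing and nonrepudiation

def request_access(card_code, door):
    """Return True (release the strike) only if the card is valid for this door; log every attempt."""
    allowed = door in CARD_PERMISSIONS.get(card_code, set())
    access_log.append((datetime.now().isoformat(), card_code, door, allowed))
    return allowed

print(request_access("C1002", "server-room"))  # False: door stays locked, attempt logged
print(request_access("C1001", "server-room"))  # True: strike releases, entry logged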

The primary drawback of token-based authentication is that only the token is being authenticated. Therefore, the theft of the token could grant anyone who possessed the token access to what the system protects. The risk of theft of the token can be offset by the use of multiple-factor authentication. One of the ways that people have tried to achieve multiple-factor authentication is to add a biometric factor to the system.


Biometrics


Biometrics use the measurements of certain biological factors to identify one specific person from others. These factors are based on parts of the human body that are unique. The most well-known of these unique biological factors is the fingerprint. However, many others can be used—for instance, the retina or iris of the eye, the geometry of the hand, and the geometry of the face. When these are used for authentication, there is a two-part process: enrollment and then authentication. During enrollment, a computer takes the image of the biological factor and reduces it to a numeric value. When the user attempts to authenticate, this feature is scanned by the reader, and the computer compares the numeric value being read to the one stored in the database. If they match, access is allowed. Since these physical factors are unique, theoretically only the actual authorized person would be allowed access.

In the real world, however, the theory behind biometrics breaks down. Tokens that have a digital code work very well because everything remains in the digital realm. A computer checks your code, such as 123, against the database; if the computer finds 123 and that number has access, the computer opens the door. Biometrics, however, take an analog signal, such as a fingerprint or a face, attempt to digitize it, and then match the result against the value stored in the database. The problem with an analog signal is that it might not encode the exact same way twice. For example, if you came to work with a bandage on your chin, would the face-based biometrics grant you access or deny it?

Engineers who designed these systems understood that if a system was set to exact checking, an encoded biometric might never grant access since it might never scan the biometric exactly the same way twice. Therefore, most systems have tried to allow a certain amount of error in the scan, while not allowing too much. This leads to the concepts of false positives and false negatives. A false positive occurs when a biometric is scanned and allows access to someone who is not authorized—for example, two people who have very similar fingerprints might be recognized as the same person by the computer, which grants access to the wrong person. A false negative occurs when the system denies access to someone who is actually authorized—for example, a user at the hand geometry scanner forgot to wear a ring he usually wears and the computer doesn’t recognize his hand and denies him access. For biometric authentication to work properly, and also be trusted, it must minimize the existence of both false positives and false negatives. To do that, a balance between exactness and error tolerance must be struck so that the machine allows a little physical variance—but not too much.
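
The balance between the two error types comes down to a match threshold. The sketch below uses made-up similarity scores purely to show the trade-off; real systems use far richer templates and scoring.

def matches(stored_template, live_scan, threshold):
    """Toy similarity check: fraction of matching positions in equal-length templates."""
    same = sum(1 for a, b in zip(stored_template, live_scan) if a == b)
    return same / len(stored_template) >= threshold

enrolled = "10110110"          # numeric template captured at enrollment
same_user_today = "10110100"   # the same user, with slight variation in today's scan
impostor = "10010100"          # a different person with a similar feature

# A strict threshold rejects the legitimate user (false negative);
# a loose threshold accepts the impostor (false positive).
for threshold in (1.0, 0.85, 0.6):
    print(threshold,
          "genuine:", matches(enrolled, same_user_today, threshold),
          "impostor:", matches(enrolled, impostor, threshold))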

Another concern with biometrics is that if someone is able to steal the uniqueness factor that the machine scans—your fingerprint from a glass, for example—and is able to reproduce that factor in a substance that fools the scanner, that person now has your access privileges. This problem is compounded by the fact that it is impossible for you to change your fingerprint if it gets stolen. It is easy to replace a lost or stolen token and delete the missing one from the system, but it is far more difficult to replace a human hand. Another problem with biometrics is that parts of the human body can change. A human face can change, through scarring, weight loss or gain, or surgery. A fingerprint can be changed through damage to the fingers. Eye retinas can be affected by some types of diabetes or pregnancy. All of these changes force the biometric system to allow a higher tolerance for variance in the biometric being read. This has led high-security installations to move toward multiple-factor authentication.


Multiple-factor Authentication


Multiple-factor authentication is simply the combination of two or more types of authentication. Three broad categories of authentication can be used: what you are (for example, biometrics), what you have (for instance, tokens), and what you know (passwords and other information). Two-factor authentication combines any two of these before granting access. An example would be a card reader that then turns on a fingerprint scanner—if your fingerprint matches the one on file for the card, you are granted access. Three-factor authentication would combine all three types, such as a smart card reader that asks for a PIN before enabling a retina scanner. If all three correspond to a valid user in the computer database, access is granted.
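
A minimal sketch of these combinations, in the style of the earlier examples (the card code, PIN, and template records are invented for illustration):

# Hypothetical records binding a token to its owner's PIN and biometric template.
USERS = {
    "C1001": {"pin": "4821", "template": "10110110"},
}

def biometric_match(stored, scanned, threshold=0.85):
    same = sum(1 for a, b in zip(stored, scanned) if a == b)
    return same / len(stored) >= threshold

def two_factor_access(card_code, scanned_template):
    """Something you have (the card) plus something you are (the biometric)."""
    record = USERS.get(card_code)
    return record is not None and biometric_match(record["template"], scanned_template)

def three_factor_access(card_code, pin, scanned_template):
    """Adds something you know (the PIN) to the two factors above."""
    record = USERS.get(card_code)
    return (record is not None
            and record["pin"] == pin
            and biometric_match(record["template"], scanned_template))

print(two_factor_access("C1001", "10110100"))            # True: card plus matching biometric
print(three_factor_access("C1001", "0000", "10110100"))  # False: wrong PIN blocks access

Because the template is looked up by the presented card, the biometric comparison is made against a single stored record rather than the entire database.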



EXAM TIP Two-factor authentication combines any two methods, matching items such as a token with a biometric. Three-factor authentication combines any three, such as a passcode, biometric, and a token.

Multiple-factor authentication methods greatly enhance security by making it very difficult for an attacker to obtain all the correct materials for authentication. They also protect against the risk of stolen tokens, as the attacker must have the correct biometric, password, or both. More important, it enhances the security of biometric systems. Multiple-factor authentication does this by protecting against a stolen biometric. Changing the token makes the biometric useless unless the attacker can steal the new token. It also reduces false positives by trying to match the supplied biometric with the one that is associated with the supplied token. This prevents the computer from seeking a match using the entire database of biometrics. Using multiple factors is one of the best ways to ensure proper authentication and access control.


Chapter Review


Physical security is required to maintain the security of information systems. Any person with malicious intent who gains physical access to a computer system can cause significant damage. If a person can gain physical access, almost no information security safeguard can truly protect valuable information.

You have seen how access controls can provide legitimate access while denying intruders. However, you have also seen how these systems are increasingly computer- and network-based, which can cause a separate path of attack to be generated. Physical access can be compromised through the use of information systems. As the tendency to use the IP network increases for every device in the organization, more and more interlinked systems will require interlinked security requirements. This is the concept of convergence, which can apply to security as well as voice, video, and data.


Questions


 
  1. The feature that could allow a CD to load malicious code is called what?
    A. A false negative
    B. A CD-Key
    C. An MBR, or Master Boot Record
    D. Auto-run

  2. Why is water not used for fire suppression in data centers?
    A. It would cause a flood.
    B. Water cannot put out an electrical fire.
    C. Water would ruin all the electronic equipment.
    D. Building code prevents it.

  3. Which one is not a unique biometric?
    A. Fingerprint
    B. Eye retina
    C. Hand geometry
    D. Shoulder-to-waist geometry

  4. Why is physical security so important to good network security?
    A. Because encryption is not involved
    B. Because physical access defeats nearly all network security measures
    C. Because an attacker can steal biometric identities
    D. Authentication

  5. How does multiple-factor authentication improve security?
    A. By using biometrics, no other person can authenticate.
    B. It restricts users to smaller spaces.
    C. By using a combination of authentications, it is more difficult for someone to gain illegitimate access.
    D. It denies access to an intruder multiple times.

  6. Why is access to an Ethernet jack a risk?
    A. A special plug can be used to short out the entire network.
    B. An attacker can use it to make a door entry card for himself.
    C. Wireless traffic can find its way onto the local area network.
    D. It allows access to the internal network.

  7. When a biometric device has a false positive, it has done what?
    A. Generated a positive charge to the system for which compensation is required
    B. Allowed access to a person who is not authorized
    C. Denied access to a person who is authorized
    D. Failed, forcing the door it controls to be propped open

  8. Why does an IP-based CCTV system need to be implemented carefully?
    A. Camera resolutions are lower.
    B. They don’t record images; they just send them to web pages.
    C. The network cables are more easily cut.
    D. They could be remotely attacked via the network.

  9. Which of the following is a very simple physical attack?
    A. Using a custom RFID transmitter to open a door
    B. Accessing an Ethernet jack to attack the network
    C. Outright theft of the computers
    D. Installing a virus on the CCTV system

  10. A perfect bit-by-bit copy of a drive is called what?
    A. Drive picture
    B. Drive image
    C. Drive copy
    D. Drive partition

  11. What about physical security makes it more acceptable to other employees?
    A. It is more secure.
    B. Computers are not important.
    C. It protects the employees themselves.
    D. It uses encryption.

  12. On whom should a company perform background checks?
    A. System administrators only
    B. Contract personnel only
    C. Background checks are not needed outside of the military
    D. All individuals who have unescorted physical access to the facility

  13. What is a common threat to token-based access controls?
    A. The key
    B. Demagnetization of the strip
    C. A system crash
    D. Loss or theft of the token

  14. Why should security guards get cross-training in network security?
    A. They are the eyes and ears of the corporation when it comes to security.
    B. They are the only people in the building at night.
    C. They are more qualified to know what a security threat is.
    D. They have the authority to detain violators.

  15. Why can a USB flash drive be a threat?
    A. They use too much power.
    B. They can bring malicious code past other security mechanisms.
    C. They can be stolen.
    D. They can be encrypted.

Answers


 
  1. D. Auto-run allows CDs to execute code automatically.
  2. C. Electronic components would be ruined by a water-based fire-suppression system.
  3. D. Shoulder-to-waist geometry is not unique. All the other examples are biometrics that are unique.
  4. B. Physical access to a computer system will almost always defeat any security measures put in place on the system.
  5. C. Multiple-factor authentication gives an attacker several systems to overcome, making the unauthorized access of systems much more difficult.
  6. D. An exposed Ethernet jack available in a public place can allow access to the internal network, typically bypassing most of the network’s security systems.
  7. B. A false positive means the system granted access to an unauthorized person based on a biometric being close to an authorized person’s biometric.
  8. D. Any device attached to the IP network can be attacked using a traditional IP-based attack.
  9. C. The theft of a computer is a very simple attack that can be carried out surprisingly effectively. This allows an attacker to compromise the stolen machine and its data at his leisure.
  10. B. A drive image is a perfect copy of a drive that can then be analyzed on another computer.
  11. C. Physical security protects the people, giving them a vested interest in its support.
  12. D. All unescorted people entering the facility should be background checked.
  13. D. The loss or theft of the token is the most common and most serious threat to the system; anyone with a token can access the system.
  14. A. Security guards are the corporation’s eyes and ears and have a direct responsibility for security information.
  15. B. USB drives have large storage capacities and can carry some types of malicious code past traditional virus filters.


CHAPTER 8
Infrastructure Security


 
  • Learn about the types of network devices used to construct networks
  • Discover the types of media used to carry network signals
  • Explore the types of storage media used to store information
  • Become acquainted with basic terminology for a series of network functions related to information security
  • Explore NAC/NAP methodologies

Infrastructure security begins with the design of the infrastructure itself. The proper use of components improves not only performance but security as well. Network components are not isolated from the computing environment and are an essential aspect of a total computing environment. From the routers, switches, and cables that connect the devices, to the firewalls and gateways that manage communication, from the network design to the protocols employed, all of these items play essential roles in both performance and security.

In the CIA of security, the A for availability is often overlooked. Yet it is availability that has moved computing into this networked framework, and this concept has played a significant role in security. A failure in security can easily lead to a failure in availability and hence a failure of the system to meet user needs.

Security failures can occur in two ways. First, a failure can allow unauthorized users access to resources and data they are not authorized to use, compromising information security. Second, a failure can prevent a user from accessing resources and data the user is authorized to use. This second failure is often overlooked, but it can be as serious as the first. The primary goal of network infrastructure security is to allow all authorized use and deny all unauthorized use of resources.


Devices


A complete network computer solution in today’s business environment consists of more than just client computers and servers. Devices are needed to connect the clients and servers and to regulate the traffic between them. Devices are also needed to expand this network beyond simple client computers and servers to include yet other devices, such as wireless and handheld systems. Devices come in many forms and with many functions, from hubs and switches, to routers, wireless access points, and special-purpose devices such as virtual private network (VPN) devices. Each device has a specific network function and plays a role in maintaining network infrastructure security.


Workstations


Most users are familiar with the client computers used in the client/server model called workstation devices. The workstation is the machine that sits on the desktop and is used every day for sending and reading e-mail, creating spreadsheets, writing reports in a word processing program, and playing games. If a workstation is connected to a network, it is an important part of the security solution for the network. Many threats to information security can start at a workstation, but much can be done in a few simple steps to provide protection from many of these threats.

Workstations are attractive targets for crackers as they are numerous and can serve as entry points into the network and the data that is commonly the target of an attack. Although safety is a relative term, following these basic steps will increase workstation security immensely:


 
  • Remove unnecessary protocols such as Telnet, NetBIOS, IPX.
  • Remove modems unless needed and authorized.
  • Remove all shares that are not necessary.
  • Rename the administrator account, securing it with a strong password.
  • Remove unnecessary user accounts.
  • Install an antivirus program and keep abreast of updates.
  • If the floppy drive is not needed, remove or disconnect it.
  • Consider disabling USB ports via CMOS to restrict data movement to USB devices.
  • If no corporate firewall exists between the machine and the Internet, install a firewall.
  • Keep the operating system (OS) patched and up to date.


Antivirus Software for Workstations


Antivirus packages are available from a wide range of vendors. Running a network of computers without this basic level of protection will be an exercise in futility. Even though a virus attack is rare, the time and money you spend cleaning it up will more than equal the cost of antivirus protection. Even more important, once connected by networks, computers can spread a virus from machine to machine with an ease that’s even greater than simple floppy disk transfer. One unprotected machine can lead to problems throughout a network as other machines have to use their antivirus software to attempt to clean up a spreading infection.

Even secure networks can fall prey to virus and worm contamination, and infection has been known to come from commercial packages. As important as antivirus software is, it is even more important to keep the virus definitions for the software up to date. Out-of-date definitions can lead to a false sense of security, and many of the most potent virus and worm attacks are the newest ones being developed. The risk associated with a new virus is actually higher than for many of the old ones, which have been eradicated to a great extent by antivirus software.

A virus is a piece of software that must be introduced to the network and then executed on a machine. Workstations are the primary mode of entry for a virus into a network. Although a lot of methods can be used to introduce a virus to a network, the two most common are the transfer of an infected file from another networked machine and infection via e-mail. A lot of work has gone into software to clean e-mail while in transit and at the mail server. But transferred files are a different matter altogether. People bring files from home, from friends, and from places unknown and then execute them on a PC for a variety of purposes. It doesn’t matter whether it is a funny executable, a game, or even an authorized work application—the virus doesn’t care what the original file is, it just uses it to gain access. Even sharing of legitimate work files and applications can introduce viruses.

Once considered by many users to be immune, Apple Macintosh computers had very few examples of malicious software in the wild. This was not due to anything other than a low market share, and hence the devices were ignored by the malware community as a whole. As Mac has increased in market share, so has its exposure, and today a variety of Mac OS X malware steals files and passwords and is even used to take users’ pictures with the computer’s built-in webcam. All user machines need to install antivirus software in today’s environment, because any computer can become a target.

The form of transfer is not an issue either: whether via a USB device, CD/DVD, or FTP doesn’t matter. When the transferred file is executed, the virus is propagated. Simple removal of a CD/DVD drive or disabling USB ports will not adequately protect against this threat; nor does training, for users will eventually justify a transfer. The only real defense is an antivirus program that monitors all file movements.


Additional Precautions for Workstations


Personal firewalls are a necessity if a machine has an unprotected interface to the Internet. These are seen less often in commercial networks, as it is more cost effective to connect through a firewall server. With the advent of broadband connections for homes and small offices, this needed device is frequently missed. This can result in penetration of a PC from an outside hacker or a worm infection. Worst of all, the workstation can become part of a larger attack against another network, unknowingly joining forces with other compromised machines in a distributed denial-of-service (DDoS) attack.

The practice of disabling or removing unnecessary devices and software from workstations is also a sensible precaution. If a particular service, device, or account is not needed, disabling or removing it will prevent its unauthorized use by others. Having a standard image of a workstation and duplicating it across a bunch of identical workstations will reduce the workload for maintaining these requirements and reduce total cost of operations. Proper security at the workstation level can increase availability of network resources to users, enabling the business to operate as effectively as possible.

The primary method of controlling the security impact of a workstation on a network is to reduce the available attack surface area. Turning off all services that are not needed or permitted by policy will reduce the number of vulnerabilities. Removing methods of connecting additional devices to a workstation to move data—such as CD/DVD drives and USB ports—assists in controlling the movement of data into and out of the device. User-level controls, such as limiting e-mail attachment options, screening all attachments at the e-mail server level, and reducing network shares to needed shares only, can be used to limit the excessive connectivity that can impact security.


Servers


Servers are the computers in a network that host applications and data for everyone to share. Servers come in many sizes, from small single-CPU boxes that can be less powerful than a workstation, to multiple-CPU monsters, up to and including mainframes. The operating systems used by servers range from Windows Server, to Linux/UNIX, to Multiple Virtual Storage (MVS) and other mainframe operating systems. The OS on a server tends to be more robust than the OS on a workstation system and is designed to service multiple users over a network at the same time. Servers can host a variety of applications, including web servers, databases, e-mail servers, file servers, print servers, and application servers for middleware applications.

The key management issue behind running a secure server setup is to identify the specific needs of a server for its proper operation and enable only items necessary for those functions. Keeping all other services and users off the system improves system throughput and increases security. Reducing the attack surface area associated with a server reduces the vulnerabilities now and in the future as updates are required.



TIP Specific security needs can vary depending on the server’s specific use, but as a minimum, the following are beneficial:


 
  • Remove unnecessary protocols such as Telnet, NetBIOS, Internetwork Packet Exchange (IPX), and File Transfer Protocol (FTP).
  • Remove all shares that are not necessary.
  • Rename the administrator account, securing it with a strong password.
  • Remove unnecessary user accounts.
  • Keep the OS patched and up to date.
  • Control physical access to servers.

Once a server has been built and is ready to place into operation, the recording of MD5 hash values on all of its crucial files will provide valuable information later in case of a question concerning possible system integrity after a detected intrusion. The use of hash values to detect changes was first developed by Gene Kim and Eugene Spafford at Purdue University in 1992. The concept became the product Tripwire, which is now available in commercial and open source forms. The same basic concept is used by many security packages to detect file level changes.
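
A minimal sketch of this hash-baseline idea using Python’s standard hashlib module (the file list is an assumption for illustration; production tools such as Tripwire also record sizes, permissions, and timestamps, and current practice favors stronger algorithms such as SHA-256 alongside or instead of MD5):

import hashlib

CRITICAL_FILES = ["/etc/passwd", "/etc/hosts"]   # assumed list of crucial files to baseline

def file_hash(path, algorithm="md5"):
    """Hash a file in chunks so large files do not need to fit in memory."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def build_baseline(paths):
    """Record the known-good hash of each crucial file at build time."""
    return {p: file_hash(p) for p in paths}

def check_integrity(baseline):
    """Return the files whose current hash no longer matches the recorded value."""
    return [p for p, recorded in baseline.items() if file_hash(p) != recorded]

baseline = build_baseline(CRITICAL_FILES)   # store this baseline off the system
print(check_integrity(baseline))            # run later; an empty list means no detected changes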


Antivirus Software for Servers


The need for antivirus protection on servers depends a great deal on the use of the server. Some types of servers, such as e-mail servers, can require extensive antivirus protection because of the services they provide. Other servers (domain controllers and remote access servers, for example) may not require any antivirus software, as they do not allow users to place files on them. File servers will need protection, as will certain types of application servers. There is no general rule, so each server and its role in the network will need to be examined for applicability of antivirus software.


Network Interface Cards


To connect a server or workstation to a network, a device known as a network interface card (NIC) is used. A NIC is a card with a connector port for a particular type of network connection, either Ethernet or Token Ring. The most common network type in use for local area networks is the Ethernet protocol, and the most common connector is the RJ-45 connector. Figure 8-1 shows an RJ-45 connector (lower) compared to a standard telephone connector (upper). Additional connector types include coaxial cable connectors, frequently used to run the connection from the wall to a cable modem.

The purpose of a NIC is to provide lower level protocol functionality from the OSI (Open System Interconnection) model. A NIC is the physical connection between a computer and the network. As the NIC defines the type of physical layer connection, different NICs are used for different physical protocols. NICs come as single-port and multiport, and most workstations use only a single-port NIC, as only a single network connection is needed. For servers, multiport NICs are used to increase the number of network connections, increasing the data throughput to and from the network.


Figure 8-1 Comparison of RJ-45 (lower) and phone connectors (upper)


NICs are serialized with a unique code, referred to as a Media Access Control address (MAC address). These are created by the manufacturer, with one portion identifying the manufacturer and the other serving as a serial number, guaranteeing uniqueness. MAC addresses are used in the addressing and delivery of network packets to the correct machine and in a variety of security situations. Unfortunately, these addresses can be changed, or “spoofed,” rather easily. In fact, it is common for personal routers to clone a MAC address to allow users to use multiple devices over a network connection that expects a single MAC.
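
The manufacturer/serial split can be seen in the address itself: the first three octets are the Organizationally Unique Identifier (OUI) assigned to the vendor, and the last three are the vendor-assigned serial number. A small sketch (the address shown is a made-up example):

def split_mac(mac):
    """Split a MAC address into its OUI (vendor) and NIC-specific portions."""
    octets = mac.lower().replace("-", ":").split(":")
    if len(octets) != 6 or not all(len(o) == 2 for o in octets):
        raise ValueError("expected six two-digit hex octets")
    return ":".join(octets[:3]), ":".join(octets[3:])

oui, serial = split_mac("00-1A-2B-3C-4D-5E")   # hypothetical address
print("vendor OUI:", oui, "| NIC-specific portion:", serial)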


Hubs


Hubs are networking equipment that connect devices using the same protocol at the physical layer of the OSI model. A hub allows multiple machines in an area to be connected together in a star configuration with the hub as the center. This configuration can save significant amounts of cable and is an efficient method of configuring an Ethernet backbone. All connections on a hub share a single collision domain, a small cluster in a network where collisions occur. As network traffic increases, it can become limited by collisions. The collision issue has made hubs obsolete in newer, higher performance networks, with low-cost switches and switched Ethernet keeping costs low and usable bandwidth high. Hubs also create a security weakness in that all connected devices see all traffic, enabling sniffing and eavesdropping to occur.


Bridges


Bridges are networking equipment that connect network segments using the same protocol. A bridge operates at the data link layer of the OSI model, filtering traffic based on MAC addresses. Bridges can reduce collisions by separating pieces of a network into two separate collision domains, but this only cuts the collision problem in half. Although bridges are useful, a better solution is to use switches for network connections.


Switches


Switches form the basis for connections in most Ethernet-based local area networks (LANs). Although hubs and bridges still exist, in today’s high-performance network environment switches have replaced both. A switch has separate collision domains for each port. This means that for each port, two collision domains exist: one from the port to the client on the downstream side and one from the switch to the network upstream. When full duplex is employed, collisions are virtually eliminated from the two nodes, host and client. This also acts as a security factor in that a sniffer can see only limited traffic, as opposed to a hub-based system, where a single sniffer can see all of the traffic to and from connected devices.

Switches operate at the data link layer, while routers act at the network layer. For intranets, switches have become what routers are on the Internet—the device of choice for connecting machines. As switches have become the primary network connectivity device, additional functionality has been added to them. A switch is usually a layer 2 device, but layer 3 switches incorporate routing functionality.

Switches can also perform a variety of security functions. Switches work by moving packets from inbound connections to outbound connections. While moving the packets, it is possible to inspect the packet headers and enforce security policies. Port address security based on MAC addresses can determine whether a packet is allowed or blocked from a connection. This is the very function that a firewall uses for its determination, and this same functionality is what allows an 802.1x device to act as an “edge device.”

One of the security concerns with switches is that, like routers, they are intelligent network devices and are therefore subject to hijacking by hackers. Should a hacker break into a switch and change its parameters, he might be able to eavesdrop on specific or all communications, virtually undetected. Switches are commonly administered using the Simple Network Management Protocol (SNMP) and Telnet protocol, both of which have a serious weakness in that they send passwords across the network in clear text. A hacker armed with a sniffer that observes maintenance on a switch can capture the administrative password. This allows the hacker to come back to the switch later and configure it as an administrator. An additional problem is that switches are shipped with default passwords, and if these are not changed when the switch is set up, they offer an unlocked door to a hacker. Commercial quality switches have a local serial console port for guaranteed access to the switch for purposes of control. Some products in the marketplace enable an out-of-band network, connecting these serial console ports to enable remote, secure access to programmable network devices.



CAUTION To secure a switch, you should disable all access protocols other than a secure serial line or a secure protocol such as Secure Shell (SSH). Using only secure methods to access a switch will limit the exposure to hackers and malicious users. Maintaining secure network switches is even more important than securing individual boxes, for the span of control to intercept data is much wider on a switch, especially if it’s reprogrammed by a hacker.


Virtual Local Area Networks


The other security feature that can be enabled in some switches is the concept of virtual local area networks (VLANs). Cisco defines a VLAN as a “broadcast domain within a switched network,” meaning that information is carried in broadcast mode only to devices within a VLAN. Switches that allow multiple VLANs to be defined enable broadcast messages to be segregated into the specific VLANs. If each floor of an office, for example, were to have a single switch and you had accounting functions on two floors, engineering functions on two floors, and sales functions on two floors, then separate VLANs for accounting, engineering, and sales would allow separate broadcast domains for each of these groups, even those that spanned floors. This configuration increases network segregation, increasing throughput and security.

Unused switch ports can be preconfigured into empty VLANs that do not connect to the rest of the network. This significantly increases security against unauthorized network connections. If, for example, a building is wired with network connections in all rooms, including multiple connections for convenience and future expansion, these unused ports become open to the network. One solution is to disconnect the connection at the switch, but this merely moves the network opening into the switch room. A better solution is to disable the unused port in the switch or, more manageably, to assign all unused ports to a VLAN that isolates them from the rest of the network, as sketched below.
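
The broadcast-domain behavior, including the practice of parking unused ports in an isolated VLAN, can be sketched as follows (the port-to-VLAN assignments are invented for illustration):

# Hypothetical port-to-VLAN assignments; VLAN 999 is the isolated "parking" VLAN.
PORT_VLAN = {
    1: 10,    # accounting
    2: 10,    # accounting
    3: 20,    # engineering
    4: 999,   # unused wall jack
    5: 999,   # unused wall jack
}

def broadcast(from_port):
    """Return the ports that receive a broadcast frame sent into from_port."""
    vlan = PORT_VLAN[from_port]
    return [p for p, v in PORT_VLAN.items() if v == vlan and p != from_port]

print(broadcast(1))   # [2]: stays within the accounting VLAN
print(broadcast(4))   # [5]: unused ports see only each other, never the production VLANs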

Additional aspects of VLANs are explored in the “Security Topologies” section later in this chapter.


Routers


Routers are network traffic management devices used to connect different network segments together. Routers operate at the network layer of the OSI model, routing traffic using the network address (typically an IP address) utilizing routing protocols to determine optimal routing paths across a network. Routers form the backbone of the Internet, moving traffic from network to network, inspecting packets from every communication as they move traffic in optimal paths.

Routers operate by examining each packet, looking at the destination address, and using algorithms and tables to determine where to send the packet next. This process of examining the header to determine the next hop can be done in quick fashion.

Routers use access control lists (ACLs) as a method of deciding whether a packet is allowed to enter the network. With ACLs, it is also possible to examine the source address and determine whether or not to allow a packet to pass. This allows routers equipped with ACLs to drop packets according to rules built into the ACLs. This can be a cumbersome process to set up and maintain, and as the ACL grows in size, routing efficiency can be decreased. It is also possible to configure some routers to act as quasi–application gateways, performing stateful packet inspection and using contents as well as IP addresses to determine whether or not to permit a packet to pass. This can tremendously increase the time for a router to pass traffic and can significantly decrease router throughput. Configuring ACLs and other aspects of setting up routers for this type of use are beyond the scope of this book.
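
A sketch of how ACL evaluation typically proceeds: rules are checked in order, the first match decides, and an implicit deny applies when nothing matches. The addresses and ports below are illustrative only and do not correspond to any particular router’s configuration syntax.

# Each rule: (action, source-address prefix, destination port or None for "any").
ACL = [
    ("deny",   "203.0.113.",  None),   # drop everything from a known-bad range
    ("permit", "198.51.100.", 25),     # allow a partner's mail traffic
    ("permit", "",            80),     # allow web traffic from anywhere
]

def evaluate(src_ip, dst_port):
    """First matching rule wins; unmatched packets are implicitly denied."""
    for action, prefix, port in ACL:
        if src_ip.startswith(prefix) and (port is None or port == dst_port):
            return action
    return "deny"

print(evaluate("203.0.113.7", 80))   # deny: blocked source range
print(evaluate("192.0.2.10", 80))    # permit: web traffic
print(evaluate("192.0.2.10", 23))    # deny: implicit deny catches Telnet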



NOTE ACLs can be a significant effort to establish and maintain. Creating them is a straightforward task, but their judicious use will yield security benefits with a limited amount of maintenance. This can be very important in security zones such as a DMZ and at edge devices, blocking undesired outside contact while allowing known inside traffic.

One serious operational security concern regarding routers is access to the router and control of its internal functions. Like a switch, a router can be accessed using SNMP and Telnet and programmed remotely. Because of the geographic separation of routers, this can become a necessity, for many routers in the world of the Internet can be hundreds of miles apart, in separate locked structures. Physical control over a router is absolutely necessary, for if any device, be it server, switch, or router, is physically accessed by a hacker, it should be considered compromised and thus such access must be prevented. As with switches, it is important to ensure that the administrative password is never passed in the clear, that only secure mechanisms are used to access the router, and that all of the default passwords are reset to strong passwords.

Just like switches, the most assured point of access for router management control is via the serial control interface port. This allows access to the control aspects of the router without having to deal with traffic related issues. For internal company networks, where the geographic dispersion of routers may be limited, third-party solutions to allow out-of-band remote management exist. This allows complete control over the router in a secure fashion, even from a remote location, although additional hardware is required.

Routers are available from numerous vendors and come in sizes big and small. A typical small home office router for use with cable modem/DSL service is shown in Figure 8-2. Larger routers can handle traffic of up to tens of gigabits per second per channel, using fiber-optic inputs and moving tens of thousands of concurrent Internet connections across the network. These routers can cost hundreds of thousands of dollars and form an essential part of e-commerce infrastructure, enabling large enterprises such as Amazon and eBay to serve many customers concurrently.


Firewalls


A firewall can be hardware, software, or a combination whose purpose is to enforce a set of network security policies across network connections. It is much like a wall with a window: the wall serves to keep things out, except those permitted through the window (see Figure 8-3). Network security policies act like the glass in the window; they permit some things to pass, such as light, while blocking others, such as air. The heart of a firewall is the set of security policies that it enforces. Management determines what is allowed in the form of network traffic between devices, and these policies are used to build rule sets for the firewall devices used to filter network traffic across the network.

Security policies are rules that define what traffic is permissible and what traffic is to be blocked or denied. These are not universal rules, and many different sets of rules are created for a single company with multiple connections. A web server connected to the Internet may be configured to allow traffic only on port 80 for HTTP and have all other ports blocked, for example. An e-mail server may have only necessary ports for e-mail open, with others blocked. The network firewall can be programmed to block all traffic to the web server except for port 80 traffic, and to block all traffic bound to the mail server except for port 25. In this fashion, the firewall acts as a security filter, enabling control over network traffic, by machine, by port, and in some cases based on application-level detail. A key to setting security policies for firewalls is the same as has been seen for other security policies—the principle of least access. Allow only the necessary access for a function; block or deny all unneeded functionality. How a firm deploys its firewalls determines what is needed for security policies for each firewall.


Figure 8-2 A small home office router for cable modem/DSL use


Figure 8-3 How a firewall works

As will be discussed later, the security topology will determine what network devices are employed at what points in a network. At a minimum, the corporate connection to the Internet should pass through a firewall. This firewall should block all network traffic except that specifically authorized by the firm. This is actually easy to do: Blocking communications on a port is simple—just tell the firewall to close the port. The issue comes in deciding what services are needed and by whom, and thus which ports should be open and which should be closed. This is what makes a security policy useful. The perfect set of network security policies, for a firewall, is one that the end user never sees and that never allows even a single unauthorized packet to enter the network. As with any other perfect item, it will be rare to find the perfect set of security policies for firewalls in an enterprise.

To develop a complete and comprehensive security policy, it is first necessary to have a complete and comprehensive understanding of your network resources and their uses. Once you know how the network will be used, you will have an idea of what to permit. In addition, once you understand what you need to protect, you will have an idea of what to block. Firewalls are designed to block attacks before they reach a target machine. Common targets are web servers, e-mail servers, DNS servers, FTP services, and databases. Each of these has separate functionality, and each has unique vulnerabilities. Once you have decided who should receive what type of traffic and what types should be blocked, you can administer this through the firewall.
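
A minimal sketch of such a least-access rule set, using the web and mail servers from the example above (the host names and rule-table format are invented; real firewalls also match on source, protocol, and direction):

# Per-destination allow lists; anything not listed is blocked (principle of least access).
ALLOWED = {
    "web-server":  {80},    # HTTP only
    "mail-server": {25},    # SMTP only
}

def firewall_permits(dst_host, dst_port):
    """Default deny: permit only explicitly authorized host/port combinations."""
    return dst_port in ALLOWED.get(dst_host, set())

print(firewall_permits("web-server", 80))    # True
print(firewall_permits("web-server", 23))    # False: Telnet to the web server is blocked
print(firewall_permits("mail-server", 25))   # True
print(firewall_permits("db-server", 1433))   # False: not in the policy at all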


How Do Firewalls Work?


Firewalls enforce the established security policies through a variety of mechanisms, including the following:


 
  • Network Address Translation (NAT)
  • Basic packet filtering
  • Stateful packet filtering
  • ACLs
  • Application layer proxies

One of the most basic security functions provided by a firewall is NAT, which allows you to mask significant amounts of information from outside of the network. This allows an outside entity to communicate with an entity inside the firewall without truly knowing its address. NAT is a technique used in IPv4 to link private IP addresses to public ones. Private IP addresses are sets of IP addresses that can be used by anyone and by definition are not routable across the Internet. NAT can assist in security by preventing direct access to devices from outside the firm, without first having the address changed at a NAT device. The benefit is that fewer public IP addresses are needed, and from a security point of view the internal address structure is not known to the outside world. If a hacker attacks the source address, he is simply attacking the NAT device, not the actual sender of the packet. NAT is described in detail in the “Security Topologies” section later in this chapter.
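
A sketch of the translation idea, assuming hosts in the private 192.168.0.0/16 range inside and a single made-up public address on the NAT device (real devices also rewrite ports in both directions and track much more state; this simplified variant is closest to port address translation):

PUBLIC_IP = "198.51.100.1"      # assumed public address of the NAT device
nat_table = {}                  # (private IP, private port) -> public port
next_public_port = 40000

def translate_outbound(src_ip, src_port):
    """Rewrite an outbound packet's source so only the NAT device is visible outside."""
    global next_public_port
    key = (src_ip, src_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return PUBLIC_IP, nat_table[key]

def translate_inbound(public_port):
    """Map a reply arriving at a public port back to the internal host, if a mapping exists."""
    for (ip, port), pub in nat_table.items():
        if pub == public_port:
            return ip, port
    return None   # no mapping: unsolicited traffic from outside is dropped

print(translate_outbound("192.168.1.20", 51515))   # ('198.51.100.1', 40000)
print(translate_inbound(40000))                     # ('192.168.1.20', 51515)
print(translate_inbound(40999))                     # None: nothing inside requested this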

NAT was conceived to resolve an address shortage associated with IPv4 and is considered by many to be unnecessary for IPv6. The added security features of enforcing traffic translation and hiding internal network details from direct outside connections will give NAT life well into the IPv6 timeframe.

Basic packet filtering, the next most common firewall technique, involves looking at packets, their ports, protocols, and source and destination addresses, and checking that information against the rules configured on the firewall. Telnet and FTP connections may be prohibited from being established to a mail or database server, but they may be allowed to the servers that actually provide those services. This is a fairly simple method of filtering based on information in each packet header, such as IP addresses and TCP/UDP ports. Packet filtering will not detect and catch all undesired packets, but it is fast and efficient.

To look at all packets and determine the need for each and its data requires stateful packet filtering. Stateful means that the firewall maintains, or knows, the context of a conversation. In many cases, rules depend on the context of a specific communication connection. For instance, traffic from an outside server to an inside server may be allowed if it is requested but blocked if it is not. A common example is a request for a web page. This request is actually a series of requests to multiple servers, each of which can be allowed or blocked. Advanced firewalls employ stateful packet filtering to prevent several types of undesired communications. Should a packet come from outside the network, in an attempt to pretend that it is a response to a message from inside the network, the firewall will have no record of it being requested and can discard it, blocking the undesired external access attempt. As many communications will be transferred to high ports (above 1023), stateful monitoring will enable the system to determine which sets of high-port communications are permissible and which should be blocked. A disadvantage of stateful monitoring is that it takes significant resources and processing to perform this type of monitoring, and this reduces efficiency and requires more robust and expensive hardware.
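
The state-tracking idea can be sketched as a connection table: outbound requests create an entry, and inbound packets are accepted only when they match one. The tuple format and addresses are illustrative; real firewalls also track protocol flags, sequence numbers, and timeouts.

connections = set()   # conversations the firewall has seen initiated from inside

def outbound(src, sport, dst, dport):
    """Record an outbound request so the expected reply can be recognized."""
    connections.add((src, sport, dst, dport))

def inbound_allowed(src, sport, dst, dport):
    """Permit an inbound packet only if it answers a tracked outbound request."""
    return (dst, dport, src, sport) in connections

outbound("192.168.1.20", 51515, "203.0.113.80", 80)                 # user requests a web page
print(inbound_allowed("203.0.113.80", 80, "192.168.1.20", 51515))   # True: the expected reply
print(inbound_allowed("203.0.113.80", 80, "192.168.1.20", 60000))   # False: unsolicited packet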



EXAM TIP Firewalls operate by examining packets and selectively denying some based on a set of rules. Firewalls act as gatekeepers or sentries at select network points, segregating traffic and allowing some to pass and blocking others.

Some high-security firewalls also employ application layer proxies. Packets are not allowed to traverse the firewall, but data instead flows up to an application that in turn decides what to do with it. For example, a Simple Mail Transfer Protocol (SMTP) proxy may accept inbound mail from the Internet and forward it to the internal corporate mail server. While proxies provide a high level of security by making it very difficult for an attacker to manipulate the actual packets arriving at the destination, and while they provide the opportunity for an application to interpret the data prior to forwarding it to the destination, they generally are not capable of the same throughput as stateful packet inspection firewalls. The trade-off between security and performance is a common one and must be evaluated with respect to security needs and performance requirements.


Wireless


Wireless devices bring additional security concerns. There is, by definition, no physical connection to a wireless device; radio waves or infrared carry data, which allows anyone within range access to the data. This means that unless you take specific precautions, you have no control over who can see your data. Placing a wireless device behind a firewall does not do any good, because the firewall stops only physically connected traffic from reaching the device. Outside traffic can come literally from the parking lot directly to the wireless device.

The point of entry from a wireless device to a wired network is performed at a device called a wireless access point. Wireless access points can support multiple concurrent devices accessing network resources through the network node they provide. A typical wireless access point is shown here:


A typical wireless access point


Several mechanisms can be used to add wireless functionality to a machine. For PCs, this can be done via an expansion card. For notebooks, a PCMCIA adapter for wireless networks is available from several vendors. For both PCs and notebooks, vendors have introduced USB-based wireless connectors. The following illustration shows one vendor’s card—note the extended length used as an antenna. Not all cards have the same configuration, although they all perform the same function: to enable a wireless network connection. The numerous wireless protocols (802.11a, b, g, i, and n) are covered in Chapter 10. Wireless access points and cards must be matched by protocol for proper operation.


A typical PCMCIA wireless network card




NOTE To prevent unauthorized wireless access to the network, configuration of remote access protocols to a wireless access point is common. Forcing authentication and verifying authorization is a seamless method of performing basic network security for connections in this fashion. These protocols are covered in Chapter 10.


Modems


Modems were once a slow method of remote connection that was used to connect client workstations to remote services over standard telephone lines. Modem is a shortened form of modulator/demodulator, covering the functions actually performed by the device as it converts analog signals to digital and vice versa. To connect a digital computer signal to the analog telephone line required one of these devices. Today, the use of the term has expanded to cover devices connected to special digital telephone lines—DSL modems—and to cable television lines—cable modems. Although these devices are not actually modems in the true sense of the word, the term has stuck through marketing efforts directed to consumers. DSL and cable modems offer broadband high-speed connections and the opportunity for continuous connections to the Internet. Along with these new desirable characteristics come some undesirable ones, however. Although they both provide the same type of service, cable and DSL modems have some differences. A DSL modem provides a direct connection between a subscriber’s computer and an Internet connection at the local telephone company’s switching station. This private connection offers a degree of security, as it does not involve others sharing the circuit. Cable modems are set up in shared arrangements that theoretically could allow a neighbor to sniff a user’s cable modem traffic.

Cable modems were designed to share a party line in the terminal signal area, and the cable modem standard, the Data Over Cable Service Interface Specification (DOCSIS), was designed to accommodate this concept. DOCSIS includes built-in support for security protocols, including authentication and packet filtering. Although this does not guarantee privacy, it prevents ordinary subscribers from seeing others’ traffic without using specialized hardware.

Both cable and DSL services are designed for a continuous connection, which brings up the question of IP address life for a client. Although some services originally used a static IP arrangement, virtually all have now adopted the Dynamic Host Configuration Protocol (DHCP) to manage their address space. A static IP address has the advantage of remaining constant, which enables convenient DNS entries for outside users. As cable and DSL services are primarily designed for client services as opposed to host services, this is not a relevant issue. A security issue of a static IP is that it is a stationary target for hackers. The move to DHCP has not significantly lessened this threat, however, because the typical DHCP lease on a cable modem connection lasts for days. This is still relatively stationary, and some form of firewall protection needs to be employed by the user.


Cable/DSL Security


The modem equipment provided by the subscription service converts the cable or DSL signal into a standard Ethernet signal that can then be connected to a NIC on the client device. This is still just a direct network connection, with no security device separating the two. The most common security device used in cable/DSL connections is a firewall. The firewall needs to be installed between the cable/DSL modem and client computers.

Two common methods exist for this in the marketplace. The first is software on each client device. Numerous software companies offer Internet firewall packages, which can cost under $50. Another solution is the use of a cable/DSL router with a built-in firewall. These are also relatively inexpensive, in the $100 range, and can be combined with software for an additional level of protection. Another advantage to the router solution is that most such routers allow multiple clients to share a common Internet connection, and most can also be enabled with other networking protocols such as VPN. A typical small home office cable modem/DSL router was shown earlier in Figure 8-2. The bottom line is simple: Even if you connect only occasionally and you disconnect between uses, you need a firewall between the client and the Internet connection. Most commercial firewalls for cable/DSL systems come preconfigured for Internet use and require virtually no maintenance other than keeping the system up to date.


Telecom/PBX


Private branch exchanges (PBXs) are an extension of the public telephone network into a business. Although typically considered a separate entity from data systems, they are frequently interconnected and have security requirements as part of this interconnection as well as requirements of their own. PBXs are computer-based switching equipment designed to connect telephones into the local phone system. Basically digital switching systems, they can be compromised from the outside and used by phone hackers (phreakers) to make phone calls at the business's expense. Although this type of hacking has decreased as long-distance costs have fallen, it has not gone away, and as several firms learn every year, voice mail boxes and PBXs can be compromised and the long-distance bills can get very high, very fast.

Another problem with PBXs arises when they are interconnected to the data systems, either by corporate connection or by rogue modems in the hands of users. In either case, a path exists for connection to outside data networks and the Internet. Just as a firewall is needed for security on data connections, one is needed for these connections as well. Telecommunications firewalls are a distinct type of firewall designed to protect both the PBX and the data connections. The functionality of a telecommunications firewall is the same as that of a data firewall: it is there to enforce security policies. Telecommunication security policies can be enforced even to cover hours of phone use to prevent unauthorized long-distance usage through the implementation of access codes and/or restricted service hours.


RAS


Remote Access Service (RAS) is a portion of the Windows OS that allows the connection between a client and a server via a dial-up telephone connection. Although slower than cable/DSL connections, this is still a common method for connecting to a remote network. When a user dials into the computer system, authentication and authorization are performed through a series of remote access protocols, described in Chapter 9. For even greater security, a callback system can be employed, where the server calls back to the client at a set telephone number for the data exchange. RAS can also mean Remote Access Server, a term for a server designed to permit remote users access to a network and to regulate their access. A variety of protocols and methods exist to perform this function; they are described in detail in Chapter 9.


VPN


A virtual private network (VPN) is a construct used to provide a secure communication channel between users across public networks such as the Internet. As described in Chapter 10, a variety of techniques can be employed to instantiate a VPN connection. The use of encryption technologies allows either the data in a packet to be encrypted or the entire packet to be encrypted. If the data is encrypted, the packet header can still be sniffed and observed between source and destination, but the encryption protects the contents of the packet from inspection. If the entire packet is encrypted, it is then placed into another packet and sent via tunnel across the public network. Tunneling can protect even the identity of the communicating parties.

The most common implementation of VPN is via IPsec, a protocol for IP security. IPsec is mandated in IPv6 and is optionally back-fitted into IPv4. IPsec can be implemented in hardware, software, or a combination of both.
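The difference between encrypting only the data and encrypting the entire packet can be pictured in a few lines of Python. In this sketch the Fernet construction from the third-party cryptography package stands in for whatever cipher the VPN endpoints negotiate; the addresses are placeholders, and this is a conceptual illustration, not an IPsec implementation.

    # Conceptual sketch: payload-only protection vs. whole-packet (tunnel) protection.
    # Fernet stands in for the negotiated cipher; addresses are placeholders.
    from cryptography.fernet import Fernet

    cipher = Fernet(Fernet.generate_key())
    packet = {"src": "10.1.1.5", "dst": "10.2.2.9", "payload": b"confidential data"}

    # Encrypt only the data: the header remains visible between source and destination.
    payload_encrypted = {"src": packet["src"], "dst": packet["dst"],
                         "payload": cipher.encrypt(packet["payload"])}

    # Encrypt the entire packet and place it inside a new outer packet addressed
    # between the tunnel endpoints, hiding even the original communicating parties.
    outer_packet = {"src": "198.51.100.1", "dst": "203.0.113.1",
                    "payload": cipher.encrypt(repr(packet).encode())}

    print(payload_encrypted["dst"])  # inner addresses still observable
    print(outer_packet["dst"])       # only the tunnel endpoints are visible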


Intrusion Detection Systems


Intrusion detection systems (IDSs) are designed to detect, log, and respond to unauthorized network or host use, both in real time and after the fact. IDSs are available from a wide selection of vendors and are an essential part of network security. These systems are implemented in software, but in large systems, dedicated hardware is required as well. IDSs can be divided into two categories: network-based systems and host-based systems. Two primary methods of detection are used: signature-based and anomaly-based. IDSs are covered in detail in Chapter 11.
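As a rough illustration of the two detection approaches, the short Python sketch below flags traffic that matches a known attack signature and, separately, traffic volumes that deviate sharply from a learned baseline. The signatures, baseline, and threshold are invented for the example and do not come from any real IDS product.

    # Illustrative sketch of signature-based vs. anomaly-based detection.
    # Signatures, baseline, and threshold are invented for the example.
    SIGNATURES = [b"/etc/passwd", b"cmd.exe", b"' OR 1=1"]

    def signature_alert(payload: bytes) -> bool:
        """Flag traffic containing any known attack pattern."""
        return any(sig in payload for sig in SIGNATURES)

    def anomaly_alert(packets_per_second: float, baseline: float, factor: float = 5.0) -> bool:
        """Flag traffic volume far outside the learned baseline."""
        return packets_per_second > baseline * factor

    print(signature_alert(b"GET /index.html"))         # False: benign request
    print(signature_alert(b"GET /../../etc/passwd"))   # True: matches a signature
    print(anomaly_alert(12000, baseline=800))          # True: volume anomaly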


Network Access Control


Networks comprise connected workstations and servers. Managing security on a network involves managing a wide range of issues, from the various connected hardware devices to the software operating them. Assuming that the network is secure, each additional connection involves risk. Managing the endpoints on a case-by-case basis as they connect is a security methodology known as network access control. Two main competing methodologies exist: Network Access Protection (NAP) is a Microsoft technology for controlling network access of a computer host, and Network Admission Control (NAC) is Cisco's technology for controlling network admission.

Microsoft’s NAP system is based on measuring the system health of the connecting machine, including patch levels of the OS, antivirus protection, and system policies. NAP is first utilized in Windows XP Service Pack 3, Windows Vista, and Windows Server 2008, and it requires additional infrastructure servers to implement the health checks. The system includes enforcement agents that interrogate clients and verify admission criteria. Response options include rejection of the connection request or restriction of admission to a subnet.
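At its core, the admission decision is a policy evaluation against the reported health of the client. The Python sketch below mirrors that logic in a generic way; the health criteria, field names, and remediation subnet are assumptions made for illustration and are not Microsoft's actual NAP interfaces.

    # Generic sketch of an admission decision based on reported client health.
    # Field names, criteria, and the remediation subnet are illustrative assumptions.
    def admission_decision(client):
        healthy = (client.get("os_patch_level", 0) >= 3 and
                   client.get("antivirus_current", False) and
                   client.get("firewall_enabled", False))
        if healthy:
            return "ADMIT to production network"
        return "RESTRICT to remediation subnet 192.168.254.0/24"

    laptop = {"os_patch_level": 2, "antivirus_current": True, "firewall_enabled": True}
    print(admission_decision(laptop))   # restricted until the OS is patched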

Cisco’s NAC system is built around an appliance that enforces policies chosen by the network administrator. A series of third-party solutions can interface with the appliance, allowing the verification of a whole host of options including client policy settings, software updates, and client security posture. The use of third-party devices and software makes this an extensible system across a wide range of equipment.

Both the Cisco NAC and Microsoft NAP are in their early stages of implementation. The concept of automated admission checking based on client device characteristics is here to stay, as it provides timely control in the ever-changing network world of today’s enterprises.


Network Monitoring/Diagnostic


The computer network itself can be considered a large computer system, with performance and operating issues. Just as a computer needs management, monitoring, and fault resolution, so do networks. SNMP was developed to perform this function across networks. The idea is to enable a central monitoring and control center to maintain, configure, and repair network devices, such as switches and routers, as well as other network services such as firewalls, IDSs, and remote access servers. SNMP has some security limitations, and many vendors have developed software solutions that sit on top of SNMP to provide better security and better management tool suites.
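For example, a management station can read a device's description or interface counters with a single SNMP GET. The sketch below uses the synchronous high-level API of the third-party pysnmp package (as found in its classic releases) with the default public community string and a placeholder address; it is an illustration only, and production systems should prefer SNMPv3 with authentication and encryption.

    # Illustrative SNMPv2c GET using the third-party pysnmp package (classic hlapi).
    # The target address is a placeholder; prefer SNMPv3 with authentication in practice.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    error_indication, error_status, error_index, var_binds = next(
        getCmd(SnmpEngine(),
               CommunityData('public', mpModel=1),        # SNMPv2c community string
               UdpTransportTarget(('192.0.2.1', 161)),    # placeholder router address
               ContextData(),
               ObjectType(ObjectIdentity('SNMPv2-MIB', 'sysDescr', 0)))
    )

    if error_indication:
        print(error_indication)
    else:
        for name, value in var_binds:
            print(name, "=", value)                       # e.g., the device's system description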

The concept of a network operations center (NOC) comes from the old phone company network days, when central monitoring centers monitored the health of the telephone network and provided interfaces for maintenance and management. This same concept works well with computer networks, and companies with midsize and larger networks employ the same philosophy. The NOC allows operators to observe and interact with the network, using the self-reporting and in some cases self-healing nature of network devices to ensure efficient network operation. Although generally a boring operation under normal conditions, when things start to go wrong, as in the case of a virus or worm attack, the center can become a busy and stressful place as operators attempt to return the system to full efficiency while not interrupting existing traffic.

As networks can be spread out literally around the world, it is not feasible to have a person visit each device for control functions. Software enables controllers at NOCs to measure the actual performance of network devices and make changes to the configuration and operation of devices remotely. The ability to make remote connections with this level of functionality is both a blessing and a security issue. Although this allows efficient network operations management, it also provides an opportunity for unauthorized entry into a network. For this reason, a variety of security controls are used, from secondary networks to VPNs and advanced authentication methods with respect to network control connections.

Network monitoring is an ongoing concern for any significant network. In addition to monitoring traffic flow and efficiency, monitoring of security is necessary. IDSs act merely as alarms, indicating the possibility of a breach associated with a specific set of activities. These indications still need to be investigated and appropriate responses initiated by security personnel. Simple items such as port scans may be ignored by policy, but an actual unauthorized entry into a network router, for instance, would require NOC personnel to take specific actions to limit the potential damage to the system. The coordination of system changes, dynamic network traffic levels, potential security incidents, and maintenance activities is a daunting task requiring numerous personnel working together in any significant network. Software has been developed to help manage the information flow required to support these tasks. Such software can enable remote administration of devices in a standard fashion, so that the control systems can be devised in a hardware vendor–neutral configuration.

SNMP is the main standard embraced by vendors to permit interoperability. Although SNMP has received a lot of security-related attention of late due to various security holes in its implementation, it is still an important part of a security solution associated with network infrastructure. Many useful tools have security issues; the key is to understand the limitations and to use the tools within correct boundaries to limit the risk associated with the vulnerabilities. Blind use of any technology will result in increased risk, and SNMP is no exception. Proper planning, setup, and deployment can limit exposure to vulnerabilities. Continuous auditing and maintenance of systems with the latest patches is a necessary part of operations and is essential to maintaining a secure posture.


Mobile Devices


Mobile devices such as personal digital assistants (PDAs) and mobile phones are the latest devices to join the corporate network. These devices can perform significant business functions, and in the future, more of them will enter the corporate network and more work will be performed with them. These devices add several challenges for network administrators. When they synchronize their data with that on a workstation or server, the opportunity exists for viruses and malicious code to be introduced to the network. This can be a major security gap, as a user may access two separate e-mail accounts, a personal one without antivirus protection and a corporate one. Whenever data is moved from one network to another via the PDA, the opportunity exists to load a virus onto the workstation. Although the virus may not affect the PDA or phone, these devices can act as transmission vectors. Currently, at least one vendor offers antivirus protection for PDAs, and similar protection for phones is not far away.


Media


The base of communications between devices is the physical layer of the OSI model. This is the domain of the actual connection between devices, whether by wire, fiber, or radio frequency waves. The physical layer separates the definitions and protocols required to transmit the signal physically between boxes from higher level protocols that deal with the details of the data itself. Four common methods are used to connect equipment at the physical layer:


 
  • Coaxial cable
  • Twisted-pair cable
  • Fiber-optics
  • Wireless


Coaxial Cable


Coaxial cable is familiar to many households as a method of connecting televisions to VCRs or to satellite or cable services. It is used because of its high bandwidth and shielding capabilities. Compared to standard twisted-pair lines such as telephone lines, “coax” is much less prone to outside interference. It is also much more expensive to run, both from a cost-per-foot measure and from a cable-dimension measure. Coax costs much more per foot than standard twisted pair and carries only a single circuit for a large wire diameter.


A coax connector


An original design specification for Ethernet connections, coax was used from machine to machine in early Ethernet implementations. The connectors were easy to use and ensured good connections, and the limited distance of most office LANs did not carry a large cost penalty. The original ThickNet specification for Ethernet called for up to 100 connections over 500 meters at 10 Mbps.

Today, almost all of this older Ethernet specification has been replaced by faster, cheaper twisted-pair alternatives and the only place you’re likely to see coax in a data network is from the cable box to the cable modem.


UTP/STP


Twisted-pair wires have all but completely replaced coaxial cables in Ethernet networks. Twisted-pair wires use the same technology used by the phone company for the movement of electrical signals. Single pairs of twisted wires reduce electrical crosstalk and electromagnetic interference. Multiple groups of twisted pairs can then be bundled together in common groups and easily wired between devices.

Twisted pairs come in two types, shielded and unshielded. Shielded twisted-pair (STP) has a foil shield around the pairs to provide extra shielding from electromagnetic interference. Unshielded twisted-pair (UTP) relies on the twist to eliminate interference. UTP has a cost advantage over STP and is usually sufficient for connections, except in very noisy electrical areas.


A typical 8-wire UTP line



A typical 8-wire STP line



A bundle of UTP wires


Twisted-pair lines are categorized by the level of data transmission they can support. Three current categories are in use:


 
  • Category 3 (Cat 3) minimum for voice and 10 Mbps Ethernet
  • Category 5 (Cat 5/Cat5e) for 100 Mbps Fast Ethernet; Cat 5e is an enhanced version of the Cat 5 specification to address Far End Crosstalk
  • Category 6 (Cat 6) for Gigabit Ethernet

The standard method for connecting twisted-pair cables is via an 8-pin connector, called an RJ-45 connector, that looks like a standard phone jack connector but is slightly larger. One nice aspect of twisted-pair cabling is that it's easy to splice and change connectors. Many a network administrator has made Ethernet cables from stock Cat 5 wire, two connectors, and a crimping tool. This ease of connection is also a security issue: twisted-pair cables are easy to splice into, and rogue connections for sniffing could be made without detection in cable runs. Coax and fiber are much more difficult to splice, because each needs a tap to connect, and taps are easier to detect.


Fiber


Fiber-optic cable uses beams of laser light to connect devices over a thin glass wire. The biggest advantage to fiber is its bandwidth, with transmission capabilities into the terabits per second range. Fiber-optic cable is used to make high-speed connections between servers and is the backbone medium of the Internet and large networks. For all of its speed and bandwidth advantages, fiber has one major drawback—cost.

The cost of using fiber is a two-edged sword. Measured by bandwidth, fiber is cheaper than competing wired technologies: the length of runs can be much longer, and the data capacity is much higher. But connections to a fiber are difficult and expensive, and fiber is nearly impossible to splice. Making the precise connection on the end of a fiber-optic line is a highly skilled job, done by specially trained professionals who maintain a level of proficiency. Several forms of connectors and blocks are used to terminate the fiber, as shown in the images that follow.


A typical fiber optic fiber and terminator



Another type of fiber terminator



A connector block for fiber optic lines


Splicing fiber-optic cable is practically impossible; the solution is to add connectors and connect through a repeater. This adds to the security of fiber in that unauthorized connections are all but impossible to make. The high cost of connections to fiber and the higher cost of fiber per foot also make it less attractive for the final mile in public networks, where users are connected to the public switching systems. For this reason, cable companies use coax and DSL providers use twisted pair to handle the "last-mile" scenario.


Unguided Media


Electromagnetic waves have been used to convey signals since the inception of radio. Unguided media is a phrase used to cover all transmission media not guided by wire, fiber, or other constraints; it includes radio frequency (RF), infrared (IR), and microwave methods. Unguided media have one attribute in common: because the signal is not confined to a wire or fiber, it can reach many machines simultaneously. Transmission patterns can be shaped by antennas, but the target machine can be one of many in a reception zone. As such, security principles are even more critical, as they must assume that unauthorized users have access to the signal.


Infrared


Infrared (IR) is a band of electromagnetic energy just beyond the red end of the visible color spectrum. IR has been used in remote control devices for years; it cannot penetrate walls but instead bounces off them. IR made its debut in computer networking as a wireless method to connect to printers. Now that wireless keyboards, wireless mice, and PDAs exchange data via IR, it seems to be everywhere. IR can also be used to connect devices in a network configuration, but it is slow compared to other wireless technologies, and it cannot penetrate solid objects, so stack a few items in front of the transceiver and the signal is lost.


RF/Microwave


The use of radio frequency (RF) waves to carry communication signals goes back to the beginning of the twentieth century. RF waves are a common method of communicating in a wireless world. They use a variety of frequency bands, each with special characteristics. The term microwave is used to describe a specific portion of the RF spectrum that is used for communication as well as other tasks, such as cooking.

Point-to-point microwave links have been installed by many network providers to carry communications over long distances and rough terrain. Microwave communications of telephone conversations were the basis for forming the telecommunication company MCI. Many different frequencies are used in the microwave bands for many different purposes. Today, home users can use wireless networking throughout their house and enable laptops to surf the Web while they move around the house. Corporate users are experiencing the same phenomenon, with wireless networking enabling corporate users to check e-mail on laptops while riding a shuttle bus on a business campus. These wireless solutions are covered in detail in Chapter 10.

One key feature of microwave communications is that microwave RF energy can penetrate reasonable amounts of building structure. This allows you to connect network devices in separate rooms, and it can remove the constraints on equipment location imposed by fixed wiring. Another key feature is broadcast capability. By its nature, RF energy is unguided and can be received by multiple users simultaneously. Microwaves allow multiple users access in a limited area, and microwave systems are seeing application as the last mile of the Internet in dense metropolitan areas. Point-to-multi-point microwave devices can deliver data communication to all the business users in a downtown metropolitan area through rooftop antennas, reducing the need for expensive building-to-building cables. Just as microwaves carry cell phone and other data communications, the same technologies offer a method to bridge the last-mile solution.

The “last mile” problem is the connection of individual consumers to a backbone, an expensive proposition because of the sheer number of connections and unshared lines at this point in a network. Again, cost is an issue, as transceiving equipment is expensive, but in densely populated areas, such as apartments and office buildings in metropolitan areas, the user density can help defray individual costs. Speed on commercial microwave links can exceed 10 Gbps, so speed is not a problem for connecting multiple users or for high-bandwidth applications.


Security Concerns for Transmission Media


The primary security concern for a system administrator has to be preventing physical access to a server by an unauthorized individual. Such access will almost always spell disaster, for with direct access and the correct tools, any system can be infiltrated. One of the administrator's next major concerns should be preventing unfettered access to a network connection. Access to switches and routers is almost as bad as direct access to a server, and access to network connections would rank third in terms of worst-case scenarios. Preventing such access is costly, yet so is replacing a server lost to theft.


Physical Security


A balanced approach is the most sensible approach when addressing physical security, and this applies to transmission media as well. Keeping network switch rooms secure and cable runs secure seems obvious, but cases of using janitorial closets for this vital business purpose abound. One of the keys to mounting a successful attack on a network is information. Usernames, passwords, server locations—all of these can be obtained if someone has the ability to observe network traffic in a process called sniffing. A sniffer can record all the network traffic, and this data can be mined for accounts, passwords, and traffic content, all of which can be useful to an unauthorized user. Many common scenarios exist when unauthorized entry to a network occurs, including these:


 
  • Inserting a node and functionality that is not authorized on the network, such as a sniffer device or unauthorized wireless access point
  • Modifying firewall security policies
  • Modifying ACLs for firewalls, switches, or routers
  • Modifying network devices to echo traffic to an external node

One starting point for many intrusions is the insertion of an unauthorized sniffer into the network, with the fruits of its labors driving the remaining unauthorized activities. The best first effort is to secure the actual network equipment to prevent this type of intrusion.

Network devices and transmission media become targets because they are dispersed throughout an organization, and physical security of many dispersed items can be difficult to manage. This work is not glamorous and has been likened to guarding plumbing. The difference is that in the case of network infrastructure, unauthorized physical access strikes at one of the most vulnerable points and, in many cases, is next to impossible to detect. Locked doors and equipment racks are easy to implement, yet this step is frequently overlooked. Shielding of cable runs, including the use of concrete conduit outside buildings to prevent accidental breaches, may have high initial costs but typically pays off in the long run in terms of reduced downtime. Raised floors, cable runs, closets—there are many places to hide an unauthorized device. Add to this the fact that a large percentage of unauthorized users have a direct connection to the target of the unauthorized use—they are employees, students, or the like. Twisted-pair and coax make it easy for an intruder to tap into a network without notice. A vampire tap is the name given to a spike tap that pierces the center conductor of a coax cable. A person with talent can make such a tap without interrupting network traffic, merely by splicing in a parallel connection. This splits the information flow into two, enabling a second destination.

Although limiting physical access is difficult, it is essential. The least level of skill is still more than sufficient to accomplish unauthorized entry into a network if physical access to the network signals is allowed. This is one factor driving many organizations to use fiber-optics, for these cables are much more difficult to tap. Although many tricks can be employed with switches and VLANs to increase security, it is still essential that you prevent unauthorized contact with the network equipment.

Wireless networks make the intruder’s task even easier, as they take the network to the users, authorized or not. A technique called war-driving involves using a laptop and software to find wireless networks from outside the premises. A typical use of war-driving is to locate a wireless network with poor (or no) security and obtain free Internet access, but other uses can be more devastating. Methods for securing even the relatively weak Wired Equivalent Privacy (WEP) protocol are not difficult; they are just typically not followed. A simple solution is to place a firewall between the wireless access point and the rest of the network and authenticate users before allowing entry. Home users can do the same thing to prevent neighbors from “sharing” their Internet connections. To ensure that unauthorized traffic does not enter your network through a wireless access point, you must either use a firewall with an authentication system or establish a VPN.


Removable Media


One concept common to all computer users is data storage. Sometimes storage occurs on a file server and sometimes on movable media, allowing it to be transported between machines. Moving storage media represents a security risk from a couple of angles, the first being the potential loss of control over the data on the moving media. Second is the risk of introducing unwanted items, such as a virus or a worm, when the media are attached back to a network. Both of these issues can be remedied through policies and software; the key is to ensure that those policies and controls are actually applied. To describe media-specific issues, the media can be divided into three categories: magnetic, optical, and electronic.


Magnetic Media


Magnetic media store data through the rearrangement of magnetic particles on a non-magnetic substrate. Common forms include hard drives, floppy disks, zip disks, and magnetic tape. Although the specific format can differ, the basic concept is the same, and all these devices share some common characteristics: each is sensitive to external magnetic fields (attach a floppy disk to the refrigerator door with a magnet if you want to test this sensitivity), and all are affected by the high temperatures of fires and by exposure to water.


Hard Drives


Hard drives were once large devices found only in mainframe installations. Now they are small enough to attach to PDAs and handheld devices. The concept remains the same among all of them: a spinning platter rotates the magnetic media beneath heads that read the patterns in the oxide coating. As drives have gotten smaller and rotation speeds have increased, capacities have also grown. Today, gigabytes can be stored in a device slightly larger than a bottle cap. Portable hard drives in the 120 to 320GB range are now available and affordable.

One of the latest advances is full drive encryption built into the drive hardware. Using a key that is controlled, through a Trusted Platform Module (TPM) interface for instance, this technology protects the data if the drive itself is lost or stolen. This may not be important if a thief takes the whole PC, but in larger storage environments, drives are placed in separate boxes and remotely accessed. In the specific case of notebook machines, this layer can be tied to smart card interfaces to provide more security. As this is built into the controller, encryption protocols such as Advanced Encryption Standard (AES) and Triple Data Encryption Standard (3DES) can be performed at full drive speed.
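Although the drive hardware performs this work transparently, the underlying idea of encrypting each block with a symmetric cipher can be sketched briefly. The example below uses AES in CTR mode from the third-party cryptography package only to keep the illustration short; real self-encrypting drives keep the key inside the controller (released, for example, via the TPM) and use disk-oriented modes such as XTS.

    # Conceptual sketch of sector-level encryption as performed by self-encrypting drives.
    # AES-CTR is used only for brevity; real drives use disk modes such as XTS and
    # never expose the key outside the controller.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(32)        # in a real drive this key never leaves the hardware
    nonce = os.urandom(16)      # per-sector nonce/tweak

    def encrypt_sector(plaintext: bytes) -> bytes:
        encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
        return encryptor.update(plaintext) + encryptor.finalize()

    sector = b"payroll.xlsx contents..." + b"\x00" * 8
    print(encrypt_sector(sector)[:16])   # ciphertext is what reaches the platter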



Diskettes


Floppy disks were the computer industry's first attempt at portable magnetic media. The movable medium was placed in a protective sleeve, and the drive remained in the machine. Capacities up to 1.44MB were achieved, but the fragility of the device as sizes increased, as well as competing media, has rendered floppies almost obsolete. A better alternative, the Zip disk from Iomega Corporation, improved on the floppy with a stronger case and higher capacity (250MB) and has been a common backup and file transfer medium. But even 250MB is not large enough for some multimedia files, and recordable optical (CD-R) drives have arrived to fill the gap; they are discussed shortly.



Tape


Magnetic tape has held a place in computer centers since the beginning of computing. Its primary use has been bulk offline storage and backup. Tape functions well in this role because of its low cost. The disadvantage of tape is its nature as a serial access medium, making it slow to work with for large quantities of data. Several types of magnetic tape are in use today, ranging from quarter-inch cartridges to digital linear tape (DLT) and digital audio tape (DAT). These cartridges can hold upward of 60GB of compressed data.

Tapes are still a major concern from a security perspective, as they are used to back up many types of computer systems. The physical protection afforded the tapes is of concern, because if a tape is stolen, an unauthorized user could restore the data to his own system; everything needed to do so is stored on the tape. Offsite storage is needed for proper disaster recovery protection, but what is really needed is secure offsite storage and transport. This important issue is frequently overlooked in many facilities. The simple solution for maintaining control over the data, even when you cannot control the tape itself, is encryption. Backup utilities can secure backups with encryption, but this option is frequently not used for a variety of reasons. Whatever the rationale, once a tape is lost, not having used the encryption option becomes a lamented decision.



Optical Media


Optical media involve the use of a laser to read data stored on a physical device. Rather than a magnetic head picking up magnetic marks on a disk, a laser picks up deformities embedded in the media that contain the information. As with magnetic media, optical media can be read-write, although the read-only version is still more common.


CD-R/DVD


The compact disc (CD) took the music industry by storm, and then it took the computer industry by storm as well. A standard CD holds more than 640MB of data, in some cases up to 800MB. The digital video disc (DVD) can hold almost 4GB of data. These devices operate as optical storage, with little marks burned into them to represent 1s and 0s on a microscopic scale. The most common type of CD is the read-only version, in which the data is written to the disc once and only read afterward. This has become a popular method for distributing computer software, although higher-capacity DVDs have begun to replace CDs for program distribution.


A second-generation device, the recordable compact disc (CD-R), allows users to create their own CDs using a burner device in their PC and special software. Users can now back up data, make their own audio CDs, and use CDs as high-capacity storage. Their relatively low cost has made them economical to use. Mass-produced CDs have a thin layer of aluminum inside the plastic, with pits pressed into it during manufacturing that the laser reads back. CD-Rs instead use a reflective layer, such as gold, coated with a dye that changes when struck by the recording laser. A newer type, CD-RW, uses a different dye that allows discs to be erased and reused. The cost of the media increases from CD, to CD-R, to CD-RW.

DVDs will eventually occupy the same role that CDs have in the recent past, except that they hold more than seven times the data of a CD. This makes full-length movie recording possible on a single disc. The increased capacity comes from finer tolerances and the fact that DVDs can hold data on both sides. The wide range of DVD formats includes DVD+R, DVD-R, dual layer, and the high-definition formats, HD-DVD and Blu-ray. This variety is due to competing "standards" and can result in confusion. DVD+R and -R are distinguishable only when recording, and most devices made since 2004 should read both. Dual layers add additional space but require dual-layer-capable drives. HD-DVD and Blu-ray are competing formats in the high-definition arena, with current discs holding up to 50GB and research prototypes promising up to 1TB on a disc. In 2008, Toshiba, the leader of the HD-DVD format, announced it was ceasing production, casting doubt on the format's future, although an HD-DVD add-on drive was offered for gaming systems such as the Xbox 360.


Electronic Media


The latest form of removable media is electronic memory. Electronic circuits of static memory, which can retain data even without power, fill a niche where high density and small size are needed. Originally used in audio devices and digital cameras, these electronic media come in a variety of vendor-specific types, such as smart cards, SmartMedia, flash cards, memory sticks, and CompactFlash devices. Several recent photo-quality color printers have been released with ports to accept the cards directly, meaning that a computer is not required for printing. Computer readers are also available to permit storing data from the card onto hard drives and other media in a computer. The size of storage on these devices ranges from 256MB to 32GB and higher.


Although they are used primarily for photos and music, these devices can be used to move any digital information from one machine to another. To a machine equipped with a connector port, these devices look like any other file storage location. They can be connected to a system through a special reader or directly via a USB port. In newer PC systems, a USB boot device has replaced the older floppy drive. These devices are small, can hold a significant amount of data—up to 32GB at the time of writing—and are easy to move from machine to machine. Another novel interface is a mouse with a slot for a memory stick. This dual-purpose device conserves space, conserves USB ports, and is easy to use: the memory stick is placed in the mouse, which can then be used normally, and the stick is easily removable and transportable. The mouse works with or without the memory stick; it simply provides a convenient portal.

The advent of large-capacity USB sticks has enabled users to build entire systems, OSs, and tools onto them to ensure the security and integrity of the OS and tools. With the expanding use of virtualization, a user could carry an entire system on a USB stick and boot it on virtually any hardware. The only downside to this form of mobile computing is the slower speed of the USB 2.0 interface, currently limited to 480 Mbps.


Security Topologies


Networks are different from single servers; networks exist as connections of multiple devices. A key characteristic of a network is its layout, or topology. A proper network topology takes security into consideration and assists in "building security" into the network. Security-related topologies include separating portions of the network by use and function, strategically designing in monitoring points for IDSs, building in redundancy, and adding fault-tolerant aspects.


Security Zones


The first aspect of security is a layered defense. Just as a castle has a moat, an outside wall, an inside wall, and even a keep, so, too, does a modern secure network have different layers of protection. Different zones are designed to provide layers of defense, with the outermost layers providing basic protection and the innermost layers providing the highest level of protection. A constant issue is that accessibility tends to be inversely related to level of protection, so it is difficult to provide complete protection and unfettered access at the same time. Trade-offs between access and security are handled through zones, with successive zones guarded by firewalls enforcing increasingly strict security policies. The outermost zone is the Internet, a free area beyond any specific controls. Between the inner secure corporate network and the Internet is an area where machines are considered at risk. This zone has come to be called the DMZ, after its military counterpart, the demilitarized zone, where neither side has any specific controls. Once inside the inner secure network, separate branches are frequently carved out to provide specific functionality; under this heading, we will discuss intranets, extranets, and virtual LANs (VLANs).


DMZ


The DMZ is a military term for ground separating two opposing forces, by agreement and for the purpose of acting as a buffer between the two sides. A DMZ in a computer network is used in the same way; it acts as a buffer zone between the Internet, where no controls exist, and the inner secure network, where an organization has security policies in place (see Figure 8-4). To demarcate the zones and enforce separation, a firewall is used on each side of the DMZ. The area between these firewalls is accessible from either the inner secure network or the Internet. Figure 8-4 illustrates the zones created by this firewall placement. The firewalls are specifically designed to prevent direct access across the DMZ, from the Internet to the inner secure network.

Special attention should be paid to the security settings of network devices placed in the DMZ, which should be treated at all times as being exposed to compromise by unauthorized users. A common industry term, hardened operating system, applies to machines whose functionality is locked down to preserve security. This approach needs to be applied to the machines in the DMZ, and although it means that their functionality is limited, such precautions ensure that the machines will work properly in a less-secure environment.


Figure 8-4 The DMZ and zones of trust


Many types of servers belong in this area, including web servers that are serving content to Internet users, as well as remote access servers and external e-mail servers. In general, any server directly accessed from the outside, untrusted Internet zone needs to be in the DMZ. Other servers should not be placed in the DMZ. Domain name servers for your inner trusted network and database servers that house corporate databases should not be accessible from the outside. Application servers, file servers, print servers—all of the standard servers used in the trusted network—should be behind both firewalls, as should the routers and switches used to connect these machines.

The idea behind the use of the DMZ topology is to force an outside user to make at least one hop in the DMZ before he can access information inside the trusted network. If the outside user makes a request for a resource from the trusted network, such as a data element from a database via a web page, then this request needs to follow this scenario:


 
  1. A user from the untrusted network (the Internet) requests data via a web page from a web server in the DMZ.
  2. The web server in the DMZ requests the data from the application server, which can be in the DMZ or in the inner trusted network.
  3. The application server requests the data from the database server in the trusted network.
  4. The database server returns the data to the requesting application server.
  5. The application server returns the data to the requesting web server.
  6. The web server returns the data to the requesting user from the untrusted network.

This separation accomplishes two specific, independent tasks. First, the user is separated from the request for data on the secure network. By having intermediaries do the requesting, this layered approach allows significant security levels to be enforced: users do not have direct access to or control over their requests, and this filtering process can put controls in place. Second, scalability is more easily realized. The multiple-server solution can be made very scalable, literally to millions of users, without slowing down any particular layer.
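The layered flow above can be pictured as a chain of calls, each hop being a natural point to validate and filter the request. The Python sketch below is purely conceptual; the function names and the authorization check are assumptions used to show where the controls sit, not a real application architecture.

    # Conceptual sketch of the layered DMZ request path; names and checks are illustrative.
    def database_server(query):                   # lives in the trusted network
        return {"customer": "Acme", "balance": 1200}

    def application_server(request):              # DMZ or trusted network
        if not request.get("authorized"):          # controls are enforced at each hop
            raise PermissionError("request rejected before reaching the database")
        return database_server(request["query"])

    def web_server(user_request):                  # DMZ: the only host the outside user touches
        return application_server({"authorized": True, "query": user_request})

    # The untrusted user talks only to the web server and never queries the database directly.
    print(web_server("account summary"))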



EXAM TIP DMZs act as a buffer zone between unprotected areas of a network (the Internet) and protected areas (sensitive company data stores), allowing for the monitoring and regulation of traffic between these two zones.


Internet


The Internet is a worldwide connection of networks and is used to transport e-mail, files, financial records, remote access—you name it—from one network to another. The Internet is not a single network, but a series of interconnected networks that allow protocols to operate so that data can flow across them. This means that even if your network doesn't have direct contact with a resource, as long as a neighbor, or a neighbor's neighbor, and so on, can get there, so can you. This large web gives users an almost unlimited ability to communicate between systems.

Because everything and everyone can access this interconnected web and it is outside of your control and ability to enforce security policies, the Internet should be considered an untrusted network. A firewall should exist at any connection between your trusted network and the Internet. This is not to imply that the Internet is a bad thing—it is a great resource for all networks and adds significant functionality to our computing environments.

The term World Wide Web (WWW) is frequently used synonymously to represent the Internet, but the WWW is actually just one set of services available via the Internet. WWW refers more specifically to the Hypertext Transfer Protocol (HTTP)-based services that are made available over the Internet. These can include a variety of actual services and content, including text files, pictures, streaming audio and video, and even viruses and worms.


Intranet


Intranet is a term used to describe a network that has the same functionality as the Internet for its users but lies completely inside the trusted area of a network and is under the security control of the system and network administrators. Typically referred to as campus or corporate networks, intranets are used every day in companies around the world. An intranet allows developers and users the full set of protocols—HTTP, FTP, instant messaging, and so on—that is offered on the Internet, but with the added advantage of operating inside the network's trusted security boundary. Content on intranet web servers is not available over the Internet to untrusted users. This layer of security offers a significant amount of control and regulation, allowing users to fulfill business functionality while ensuring security.

Two methods can be used to make intranet information available to outside users. The first is to duplicate the information onto machines in the DMZ; proper security checks and controls should be performed before the material is duplicated, to ensure that security policies concerning specific data availability are followed. Alternatively, extranets, described in the next section, can be used to publish material to trusted partners.

Should users inside the intranet require access to information from the Internet, a proxy server can be used to mask the requestor’s location. This helps secure the intranet from outside mapping of its actual topology. All Internet requests go to the proxy server. If a request passes filtering requirements, the proxy server, assuming it is also a cache server, looks in its local cache of previously downloaded web pages. If it finds the page in its cache, it returns the page to the requestor without needing to send the request to the Internet. If the page is not in the cache, the proxy server, acting as a client on behalf of the user, uses one of its own IP addresses to request the page from the Internet. When the page is returned, the proxy server relates it to the original request and forwards it on to the user. This masks the user’s IP address from the Internet. Proxy servers can perform several functions for a firm; for example, they can monitor traffic requests, eliminating improper requests, such as inappropriate content for work. They can also act as a cache server, cutting down on outside network requests for the same object. Finally, proxy servers protect the identity of internal IP addresses, although this function can also be accomplished through a router or firewall using Network Address Translation (NAT).
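A toy version of this flow captures the essentials: filter the request, check the cache, fetch on a miss using the proxy's own address, and return the page. The Python sketch below uses only the standard library; the blocked keywords are invented for the example, and the code shows the flow rather than serving as production proxy software.

    # Toy caching-proxy flow using only the standard library; illustrative, not production code.
    from urllib.request import urlopen

    cache = {}                                     # URL -> previously downloaded content

    def proxy_fetch(url, blocked=("gambling", "malware")):
        if any(word in url for word in blocked):   # policy filtering of improper requests
            raise PermissionError("request blocked by policy")
        if url in cache:                           # cache hit: no outside request needed
            return cache[url]
        page = urlopen(url).read()                 # proxy fetches using its own IP address
        cache[url] = page
        return page

    # page = proxy_fetch("http://example.com/")    # the internal client's address never reaches the Internet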


Extranet


An extranet is an extension of a selected portion of a company’s intranet to external partners. This allows a business to share information with customers, suppliers, partners, and other trusted groups while using a common set of Internet protocols to facilitate operations. Extranets can use public networks to extend their reach beyond a company’s own internal network, and some form of security, typically VPN, is used to secure this channel. The use of the term extranet implies both privacy and security. Privacy is required for many communications, and security is needed to prevent unauthorized use and events from occurring. Both of these functions can be achieved through the use of technologies described in this chapter and other chapters in this book. Proper firewall management, remote access, encryption, authentication, and secure tunnels across public networks are all methods used to ensure privacy and security for extranets.


Telephony


Data and voice communications have coexisted in enterprises for decades. The recent interconnection of Voice over IP and traditional PBX solutions inside the enterprise increases both functionality and security risk. Specific firewalls that protect against unauthorized traffic over telephony connections are available to counter the increased risk.


VLANs


A local area network (LAN) is a set of devices with similar functionality and similar communication needs, typically co-located and operated off a single switch. This is the lowest level of a network hierarchy and defines the domain for certain protocols at the data link layer for communication. A virtual LAN (VLAN) divides a single switch into multiple broadcast domains or network segments; VLANs can also be carried between switches over shared links, a technique known as trunking. This very powerful technique allows significant network flexibility, scalability, and performance.


Trunking


Trunking is the process of spanning a single VLAN across multiple switches. A trunk-based connection between switches allows packets from a single VLAN to travel between switches, as shown in Figure 8-5. Two trunks are shown in the figure: VLAN 10 is implemented with one trunk and VLAN 20 is implemented with the other. Hosts on different VLANs cannot communicate with each other over a trunk; the trunk simply carries each VLAN's traffic across the switch network. Trunks enable network administrators to set up VLANs across multiple switches with minimal effort. With a combination of trunks and VLANs, network administrators can subnet a network by user functionality without regard to host location on the network or the need to recable machines.


Figure 8-5 VLANs and trunks



Security Implications


VLANs are used to divide a single network into multiple subnets based on functionality. This permits engineering and accounting, for example, to share a switch because of proximity and yet have separate traffic domains. The physical placement of equipment and cables is logically and programmatically separated, so adjacent ports on a switch can reference separate subnets. This prevents unauthorized use of physically adjacent devices, because even though they share the same equipment, they reside on separate subnets. VLANs also allow a network administrator to define a VLAN that has no users and map all of the unused ports to this VLAN, as pictured in the sketch that follows. Then if an unauthorized user should gain access to the equipment, he will be unable to use the unused ports, as those ports will be securely mapped to a VLAN that connects to nothing. Both a purpose and a security strength of VLANs is that systems on separate VLANs cannot directly communicate with each other.
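The port-to-VLAN mapping that delivers this separation can be pictured as a simple table. The Python sketch below is an illustration of the idea only; the port numbers and VLAN IDs are invented, and real switches implement this in hardware and configuration, not application code.

    # Illustrative port-to-VLAN map with unused ports parked on a dead VLAN.
    DEAD_VLAN = 999                                   # carries no users and connects to nothing
    port_vlan = {1: 10, 2: 10, 3: 20, 4: 20}          # e.g., engineering on VLAN 10, accounting on VLAN 20
    port_vlan.update({p: DEAD_VLAN for p in range(5, 25)})   # every unused port -> dead VLAN

    def same_broadcast_domain(port_a, port_b):
        """Hosts can exchange traffic directly only when their ports share a VLAN."""
        return port_vlan[port_a] == port_vlan[port_b]

    print(same_broadcast_domain(1, 2))   # True: both on the engineering VLAN
    print(same_broadcast_domain(1, 3))   # False: separate VLANs, even on the same switch
    print(port_vlan[17])                 # 999: an intruder plugging in here reaches nothing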



CAUTION Trunks and VLANs have security implications that need to be heeded so that firewalls and other segmentation devices are not breached through their use. They also require understanding of their use to prevent an unauthorized user from reconfiguring them to gain undetected access to secure portions of a network.


NAT


Network Address Translation (NAT) uses two sets of IP addresses for resources—one for internal use and another for external (Internet) use. NAT was developed as a solution to the rapid depletion of IP addresses in the IPv4 address space; it has since become an Internet standard (see RFC 1631 for details). NAT is used to translate between the two addressing schemes and is typically performed at a firewall or router. This permits enterprises to use the nonroutable private IP address space internally and reduces the number of external IP addresses used across the Internet.

Three sets of IP addresses are defined as nonroutable, which means that these addresses will not be routed across the Internet. The addresses are routable internally, and routers can be set to route them, but routers across the Internet are set to discard packets sent to these addresses. This approach enables a separation of internal and external traffic and allows these addresses to be reused by anyone and everyone who wishes to do so. The three address spaces are


 
  • Class A 10.0.0.0 – 10.255.255.255
  • Class B 172.16.0.0 – 172.31.255.255
  • Class C 192.168.0.0 – 192.168.255.255

The use of these addresses inside a network is unrestricted, and they function like any other IP addresses. When outside—that is, Internet-provided—resources are needed for one of these addresses, NAT is required to produce a valid external IP address for the resource. NAT operates by translating the address when traffic passes the NAT device, such as a firewall. The external addresses used are not externally mappable 1:1 to the internal addresses, for this would defeat the purpose of reuse and address-space conservation. Typically, a pool of external IP addresses is used by the NAT device, with the device keeping track of which internal address is using which external address at any given time. This provides a significant layer of security, as it makes it difficult to map the internal network structure behind a firewall and directly address it from the outside. NAT is one of the methods used for enforcing perimeter security by forcing users to access resources through defined pathways such as firewalls and gateway servers.
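Python's standard ipaddress module already knows these reserved ranges, so a quick check shows whether a given address belongs to the private space and should never appear on the public Internet. The addresses below are chosen only to illustrate the three ranges.

    # Checking the nonroutable (RFC 1918) ranges with the standard library.
    import ipaddress

    for addr in ("10.3.7.9", "172.20.1.1", "192.168.0.42", "8.8.8.8"):
        print(addr, ipaddress.ip_address(addr).is_private)
    # The first three report True (nonroutable); 8.8.8.8 reports False (publicly routable).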

Several techniques are used to accomplish NAT. Static NAT offers a 1:1 binding of external address to internal address; it is needed for services for which external sources reference internal sources, such as web servers or e-mail servers. For DMZ resources that reference outside resources, addresses can be shared through dynamic NAT, in which a table is constructed and used by the edge device to manage the translation. As the address translation can change over time, the table changes as well. Even finer-grained control can be obtained through port address translation (PAT), in which the actual TCP/UDP ports are translated as well. This enables a single external IP address to serve many internal IP addresses through the use of ports. Resources that need long-running NAT, but only on specific ports—such as a web server on port 80 or e-mail on port 25—can share a single external IP, conserving resources.
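The bookkeeping a PAT device performs can be pictured as a small translation table keyed on internal address and port. The Python sketch below is a simplified illustration of the idea; the external address and starting port are invented, and it omits timeouts, inbound mapping, and everything else a real firewall must handle.

    # Simplified sketch of port address translation (PAT) bookkeeping.
    EXTERNAL_IP = "203.0.113.10"
    nat_table = {}              # (internal_ip, internal_port) -> external_port
    next_port = 40000

    def translate_outbound(internal_ip, internal_port):
        """Map an internal socket to a unique port on the shared public address."""
        global next_port
        key = (internal_ip, internal_port)
        if key not in nat_table:
            nat_table[key] = next_port
            next_port += 1
        return EXTERNAL_IP, nat_table[key]

    print(translate_outbound("192.168.1.20", 51515))  # ('203.0.113.10', 40000)
    print(translate_outbound("192.168.1.21", 51515))  # same public IP, different external port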


Tunneling


Tunneling is a method of packaging packets so that they can traverse a network in a secure, confidential manner. Tunneling involves encapsulating packets within packets, enabling dissimilar protocols to coexist in a single communication stream, as in IP traffic routed over an Asynchronous Transfer Mode (ATM) network. Tunneling also can provide significant measures of security and confidentiality through encryption and encapsulation methods. The best example of this is a VPN that is established over a public network through the use of a tunnel, as shown in Figure 8-6, connecting a firm’s Boston office to its New York City (NYC) office.

Assume, for example, that a company has multiple locations and decides to use the public Internet to connect the networks at these locations. To make these connections secure from outside unauthorized use, the company can employ a VPN connection between the different networks. On each network, an edge device, usually a router, connects to another edge device on the other network. Then using IPsec protocols, these routers establish a secure, encrypted path between them. This securely encrypted set of packets cannot be read by outside routers; only the addresses of the edge routers are visible. This arrangement acts as a tunnel across the public Internet and establishes a private connection, secure from outside snooping or use.

Because of ease of use, low-cost hardware, and strong security, tunnels and the Internet are a combination that will see more use in the future. IPsec, VPN, and tunnels will become a major set of tools for users requiring secure network connections across public segments of networks.


Chapter Review


This chapter covered a wide range of topics—from devices, to media, to topologies—and showed you how to use them together to create secure networks. These complementary items can each support the other in an effort to build a secure network structure. Designing a secure network begins with defining a topology and then laying out the necessary components. Separate the pieces using firewalls with clearly defined security policies. Use devices and media to the advantage of the overall network design and implement usable subnets with VLANs. Use encryption and encapsulation to secure communications of public segments to enable extranets and cross-Internet company traffic. Use items such as intrusion detection systems and firewalls to keep unauthorized


Figure 8-6 Tunneling across a public network


users out and monitor activity. Taken together, these pieces can make a secure network that is efficient, manageable, and effective.


Questions


To further help you prepare for the Security+ exam, and to test your level of preparedness, answer the following questions and then check your answers against the list of correct answers at the end of the chapter.


 
  1. Switches operate at which layer of the OSI model?
    A. Physical layer
    B. Network layer
    C. Data link layer
    D. Application layer
  2. UTP cables are terminated for Ethernet using what type of connector?
    A. A BNC plug
    B. An Ethernet connector
    C. A standard phone jack connector
    D. An RJ-45 connector
  3. Coaxial cable carries how many physical channels?
    A. Two
    B. Four
    C. One
    D. None of the above
  4. The purpose of a DMZ in a network is to
    A. Provide easy connections to the Internet without an interfering firewall
    B. Allow server farms to be divided into similar functioning entities
    C. Provide a place to lure and capture hackers
    D. Act as a buffer between untrusted and trusted networks
  5. Network access control is associated with which of the following?
    A. NAP
    B. IPsec
    C. IPv6
    D. NAT
  6. The purpose of twisting the wires in twisted-pair circuits is to
    A. Increase speed
    B. Increase bandwidth
    C. Reduce crosstalk
    D. Allow easier tracing
  7. The shielding in STP acts as
    A. A physical barrier strengthening the cable
    B. A way to reduce interference
    C. An amplifier allowing longer connections
    D. None of the above
  8. Microsoft NAP permits
    A. Restriction of connections to a restricted subnet only
    B. Checking of a client OS patch level before a network connection is permitted
    C. Denial of a connection based on client policy settings
    D. All of the above
  9. One of the greatest concerns addressed by physical security is preventing unauthorized connections having what intent?
    A. Sniffing
    B. Spoofing
    C. Data diddling
    D. Free network access
  10. SNMP is a protocol used for which of the following functions?
    A. Secure e-mail
    B. Secure encryption of network packets
    C. Remote access to user workstations
    D. Remote access to network infrastructure
  11. Firewalls can use which of the following in their operation?
    A. Stateful packet inspection
    B. Port blocking to deny specific services
    C. NAT to hide internal IP addresses
    D. All of the above
  12. SMTP is a protocol used for which of the following functions?
    A. E-mail
    B. Secure encryption of network packets
    C. Remote access to user workstations
    D. None of the above
  13. Microwave communications are limited by
    A. Speed—the maximum for microwave circuits is 1 Gbps
    B. Cost—microwaves take a lot of energy to generate
    C. Line of sight—microwaves don't propagate over the horizon
    D. Lack of standard operation protocols for widespread use
  14. USB-based flash memory is characterized by
    A. Expensive
    B. Low capacity
    C. Slow access
    D. None of the above
  15. Mobile devices connected to networks include what?
    A. Smart phones
    B. Laptops
    C. MP3 music devices
    D. All of the above

Answers


 
  1. C. Switches operate at layer 2, the data link layer of the OSI model.
  2. D. The standard connector for UTP in an Ethernet network is the RJ-45 connector. An RJ-45 is larger than a standard phone connector.
  3. C. A coaxial connector carries one wire, one physical circuit.
  4. D. A DMZ-based topology is designed to manage the different levels of trust between the Internet (untrusted) and the internal network (trusted).
  5. A. NAP (Network Access Protection) is one form of network access control.
  6. C. The twist in twisted-pair wires reduces crosstalk between wires.
  7. B. The shielding on STP is for grounding and reducing interference.
  8. D. Microsoft Network Access Protection (NAP) enables the checking of a system’s health and other policies prior to allowing connection.
  9. A. Sniffing is the greatest threat, for passwords and accounts can be captured and used later.
  10. D. The Simple Network Management Protocol is used to control network devices from a central location.
  11. D. Firewalls can do all of these things.
  12. A. SMTP, the Simple Mail Transfer Protocol, is used to move e-mail across a network.
  13. C. Microwave energy is a line-of-sight transmission medium; hence, towers must not be spaced too far apart or the horizon will block transmissions.
  14. D. USB-based flash memory is low cost, fast, and high capacity—currently 32GB.
  15. D. Almost any digital memory-containing device can find its way onto a network.


CHAPTER 9
Authentication and Remote Access


In this chapter, you will


 
  • Learn about the methods and protocols for remote access to networks
  • Discover authentication, authorization, and accounting (AAA) protocols
  • Be introduced to authentication methods and the security implications in their use
  • Cover virtual private networks (VPNs) and their security aspects
  • Explore Internet Protocol Security (IPsec) and its use in securing communications

Remote access enables users outside a network to have network access and privileges as if they were inside the network. Being outside a network means that the user is working on a machine that is not physically connected to the network and must therefore establish a connection through a remote means, such as dialing in, connecting via the Internet, or connecting through a wireless connection. A user accessing resources from the Internet through an Internet service provider (ISP) is also connecting remotely to the resources via the Internet.

Authentication is the process of establishing a user’s identity to enable the granting of permissions. To establish network connections, a variety of methods are used, which depend on network type, the hardware and software employed, and any security requirements. Microsoft Windows has a specific server component called the Remote Access Service (RAS) that is designed to facilitate the management of remote access connections through dial-up modems. Cisco has implemented a variety of remote access methods through its networking hardware and software. UNIX systems also have built-in methods to enable remote access.


The Remote Access Process


The process of connecting by remote access involves two elements: a temporary network connection and a series of protocols to negotiate privileges and commands. The temporary network connection can occur via a dial-up service, the Internet, wireless access, or any other method of connecting to a network. Once the connection is made, the primary issue is authenticating the identity of the user and establishing proper privileges for that user. This is accomplished using a combination of protocols and the operating system on the host machine.

The three steps in the establishment of proper privileges are authentication, authorization, and accounting (AAA). Authentication is the matching of user-supplied credentials to previously stored credentials on a host machine, and it usually involves an account username and password. Once the user is authenticated, the authorization step takes place. Authorization is the granting of specific permissions based on the privileges held by the account. Does the user have permission to use the network at this time, or is her use restricted? Does the user have access to specific applications, such as mail and FTP, or are some of these restricted? These checks are carried out as part of authorization, and in many cases this is a function of the operating system in conjunction with its established security policies. A last function, accounting, is the collection of billing and other detail records. Network access is often a billable function, and a log of how much time, bandwidth, file transfer space, or other resources were used needs to be maintained. Other accounting functions include keeping detailed security logs to maintain an audit trail of tasks being performed. All of these standard functions are part of normal and necessary overhead in maintaining a computer system, and the protocols used in remote access provide the necessary input for these functions.
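
To make the separation of the three AAA functions concrete, the following minimal sketch models authentication, authorization, and accounting as distinct checks performed in order. It is purely illustrative: the data stores, function names, and credentials are invented for the example, and a real system would rely on the operating system, a directory service, and a proper accounting database.

```python
import hashlib
import time

# Illustrative stores; a production system would use a directory service and a database.
USERS = {"alice": hashlib.sha256(b"correct horse").hexdigest()}
PERMISSIONS = {"alice": {"mail", "ftp"}}
ACCOUNTING_LOG = []

def authenticate(username, password):
    """Match supplied credentials against previously stored credentials."""
    stored = USERS.get(username)
    return stored is not None and stored == hashlib.sha256(password.encode()).hexdigest()

def authorize(username, service):
    """Grant or deny a specific permission held by the account."""
    return service in PERMISSIONS.get(username, set())

def account(username, service, allowed):
    """Record who requested what, and when, for billing and audit trails."""
    ACCOUNTING_LOG.append((time.time(), username, service, allowed))

def remote_access_request(username, password, service):
    if not authenticate(username, password):
        account(username, service, False)
        return "access denied"
    allowed = authorize(username, service)
    account(username, service, allowed)
    return "access granted" if allowed else "not authorized for this service"

print(remote_access_request("alice", "correct horse", "mail"))    # access granted
print(remote_access_request("alice", "correct horse", "telnet"))  # not authorized for this service
```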

By using encryption, remote access protocols can securely authenticate and authorize a user according to previously established privilege levels. The authorization phase can keep unauthorized users out, but after that, encryption of the communications channel becomes very important in preventing nonauthorized users from breaking in on an authorized session and hijacking an authorized user’s credentials. As more and more networks rely on the Internet for connecting remote users, the need for and importance of remote access protocols and secure communication channels will continue to grow.

When a user dials in to the Internet through an ISP, this is similarly a case of remote access—the user is establishing a connection to her ISP’s network, and the same security issues apply. Authentication, the matching of user-supplied credentials to previously stored credentials on a host machine, is usually performed with an account name and password; once the user is authenticated, the authorization step takes place.

Access controls define what actions a user can perform or what objects a user is allowed to access. Access controls are built upon the foundation of elements designed to facilitate the matching of a user to a process. These elements are identification, authentication, and authorization.


Identification


Identification is the process of ascribing a computer ID to a specific user, computer, network device, or computer process. The identification process is typically performed only once, when a user ID is issued to a particular user. User identification enables authentication and authorization to form the basis for accountability. For accountability purposes, user IDs should not be shared, and for security purposes, they should not be descriptive of job function. This practice enables you to trace activities to individual users or computer processes so that they can be held responsible for their actions. Identification usually takes the form of a logon ID or user ID. A required characteristic of such IDs is that they must be unique.


Authentication


Authentication is the process of binding a specific ID to a specific computer connection. Historically, three categories are used to authenticate the identity of a user. Originally published by the U.S. government in one of the Rainbow series manuals on computer security, these categories are


 
  • What users know (such as a password)
  • What users have (such as tokens)
  • What users are (static biometrics such as fingerprints or iris pattern)

Today, because of technological advances, a new category has emerged, patterned after subconscious behavior:


 
  • What users do (dynamic biometrics such as typing patterns or gait)

These methods can be used individually or in combination. These controls assume that the identification process has been completed and the identity of the user has been verified. It is the job of authentication mechanisms to ensure that only valid users are admitted. Described another way, authentication is using some mechanism to prove that you are who you claimed to be when the identification process was completed.

The most common method of authentication is the use of a password. For greater security, you can add an element from a separate group, such as a smart card token—something a user has in her possession. Passwords are common because they are one of the simplest forms of authentication and rely on the user’s memory as their prime component. Because of their simplicity, passwords have become ubiquitous across a wide range of systems.

Another method to provide authentication involves the use of something that only valid users should have in their possession. A physical-world example of this would be a simple lock and key. Only those individuals with the correct key will be able to open the lock and thus gain admittance to a house, car, office, or whatever the lock was protecting. A similar method can be used to authenticate users for a computer system or network (though the key may be electronic and could reside on a smart card or similar device). The problem with this technology, however, is that people do lose their keys (or cards), which means they can’t log in to the system and somebody else who finds the key may then be able to access the system, even though they are not authorized. To address this problem, a combination of the something-you-know/something-you-have methods is often used so that the individual with the key can also be required to provide a password or passcode. The key is useless unless you know this code.

The third general method to provide authentication involves something that is unique about you. We are accustomed to this concept in our physical world, where our fingerprints or a sample of our DNA can be used to identify us. This same concept can be used to provide authentication in the computer world. The field of authentication that uses something about you or something that you are is known as biometrics. A number of different mechanisms can be used to accomplish this type of authentication, such as a fingerprint, iris scan, retinal scan, or hand geometry. All of these methods obviously require some additional hardware in order to operate. The inclusion of fingerprint readers on laptop computers is becoming common as the additional hardware is becoming cost effective.

While these three approaches to authentication appear to be easy to understand and in most cases easy to implement, authentication is not to be taken lightly, since it is such an important component of security. Potential attackers are constantly searching for ways to get past the system’s authentication mechanism, and they have employed some fairly ingenious methods to do so. Consequently, security professionals are constantly devising new methods, building on these three basic approaches, to provide authentication mechanisms for computer systems and networks.


Kerberos


Developed as part of MIT’s Project Athena, Kerberos is a network authentication protocol designed for a client/server environment. The current version is Kerberos Version 5, release 1.6.3, and it is supported by all major operating systems. Kerberos securely passes a symmetric key over an insecure network using the Needham-Schroeder symmetric key protocol. Kerberos is built around the idea of a trusted third party, termed a key distribution center (KDC), which consists of two logically separate parts: an authentication server (AS) and a ticket-granting server (TGS). Kerberos communicates via “tickets” that serve to prove the identity of users.

Taking its name from the three-headed dog of Greek mythology, Kerberos is designed to work across the Internet, an inherently insecure environment. Kerberos uses strong encryption so that a client can prove its identity to a server and the server can in turn authenticate itself to the client. A complete Kerberos environment is referred to as a Kerberos realm. The Kerberos server contains user IDs and hashed passwords for all users that will have authorizations to realm services. The Kerberos server also has shared secret keys with every server to which it will grant access tickets.

The basis for authentication in a Kerberos environment is the ticket. Tickets are used in a two-step process with the client. The first ticket is a ticket-granting ticket issued by the AS to a requesting client. The client can then present this ticket to the Kerberos server with a request for a ticket to access a specific server. This client-to-server ticket is used to gain access to a server’s service in the realm. Since the entire session can be encrypted, this will eliminate the inherently insecure transmission of items such as a password that can be intercepted on the network. Tickets are time-stamped and have a lifetime, so attempting to reuse a ticket will not be successful.



EXAM TIP Kerberos is a third-party authentication service that uses a series of tickets as tokens for authenticating users. The six steps involved are protected using strong cryptography: 1.) The user presents his credentials and requests a ticket from the key distribution center (KDC). 2.) The KDC verifies the credentials and issues a ticket-granting ticket (TGT). 3.) The user presents the TGT and a request for service to the KDC. 4.) The KDC verifies authorization and issues a client-to-server ticket. 5.) The user presents the request and the client-to-server ticket to the desired service. 6.) If the client-to-server ticket is valid, service is granted to the client.
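
As a rough illustration of the six steps above, the sketch below simulates the ticket exchange using HMAC-signed dictionaries as stand-ins for encrypted tickets. It is a teaching simplification, not the real Kerberos wire protocol; the principal names, keys, and lifetimes are invented for the example.

```python
import hmac, hashlib, json, time

def seal(key, payload):
    """Stand-in for an encrypted ticket: a payload plus an HMAC under the holder's key."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return blob, hmac.new(key, blob, hashlib.sha256).hexdigest()

def verify(key, blob, tag):
    return hmac.compare_digest(tag, hmac.new(key, blob, hashlib.sha256).hexdigest())

# Long-term keys known to the KDC (and to the respective principals).
TGS_KEY = b"tgs-secret"
FILE_SERVER_KEY = b"file-server-secret"
USER_PASSWORD_HASHES = {"alice": hashlib.sha256(b"alice-password").hexdigest()}

# Steps 1-2: user presents credentials; the KDC issues a ticket-granting ticket (TGT).
def kdc_issue_tgt(user, password):
    if USER_PASSWORD_HASHES[user] != hashlib.sha256(password).hexdigest():
        raise PermissionError("bad credentials")
    return seal(TGS_KEY, {"user": user, "expires": time.time() + 3600})

# Steps 3-4: user presents the TGT; the KDC issues a client-to-server ticket.
def kdc_issue_service_ticket(tgt):
    blob, tag = tgt
    if not verify(TGS_KEY, blob, tag):
        raise PermissionError("invalid TGT")
    claims = json.loads(blob)
    if claims["expires"] < time.time():
        raise PermissionError("TGT expired")
    return seal(FILE_SERVER_KEY, {"user": claims["user"], "service": "files",
                                  "expires": time.time() + 600})

# Steps 5-6: the service validates the client-to-server ticket and grants access.
def file_server_accept(ticket):
    blob, tag = ticket
    if not verify(FILE_SERVER_KEY, blob, tag):
        raise PermissionError("invalid service ticket")
    return f"service granted to {json.loads(blob)['user']}"

tgt = kdc_issue_tgt("alice", b"alice-password")
print(file_server_accept(kdc_issue_service_ticket(tgt)))  # service granted to alice
```

Because tickets carry an expiration time and are bound to keys that the user never sees in the clear, a captured ticket cannot simply be replayed later.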

To illustrate how the Kerberos authentication service works, think about the common driver’s license. You have received a license that you can present to other entities to prove you are who you claim to be. Because other entities trust the state in which the license was issued, they will accept your license as proof of your identity. The state in which the license was issued is analogous to the Kerberos authentication service realm and the license acts as a client to server ticket. It is the trusted entity both sides rely on to provide valid identifications. This analogy is not perfect, because we all probably have heard of individuals who obtained a phony driver’s license, but it serves to illustrate the basic idea behind Kerberos.


Certificates


Certificates are a method of establishing authenticity of specific objects such as an individual’s public key or downloaded software. A digital certificate is generally an attachment to a message and is used to verify that the message did indeed come from the entity it claims to have come from. The digital certificate can also contain a key that can be used to encrypt future communication. For more information on this subject, refer to Chapter 5.


Tokens


A token is a hardware device that can be used in a challenge/response authentication process. In this way, it functions as both a something-you-have and something-you-know authentication mechanism. Several variations on this type of device exist, but they all work on the same basic principles. The device has an LCD screen and may or may not have a numeric keypad. Devices without a keypad will display a password (often just a sequence of numbers) that changes at a constant interval, usually about every 60 seconds. When an individual attempts to log in to a system, he enters his own user ID number and then the number that is showing on the LCD. These two numbers are either entered separately or concatenated. The user’s own ID number is secret and this prevents someone from using a lost device. The system knows which device the user has and is synchronized with it so that it will know the number that should have been displayed. Since this number is constantly changing, a potential attacker who is able to see the sequence will not be able to use it later, since the code will have changed.


Devices with a keypad work in a similar fashion (and may also be designed to function as a simple calculator). The individual who wants to log in to the system will first type his personal identification number into the calculator. He will then attempt to log in. The system will then provide a challenge; the user must enter that challenge into the calculator and press a special function key. The calculator will then determine the correct response and display it. The user provides the response to the system he is attempting to log in to, and the system verifies that this is the correct response. Since each user has a different PIN, two individuals receiving the same challenge will have different responses. The device can also use the date or time as a variable for the response calculation so that the same challenge at different times will yield different responses, even for the same individual.
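
The following minimal sketch shows the idea behind the keypad-less device: both the token and the server derive the same short-lived code from a shared secret and the current 60-second interval. It is an illustrative assumption, not any vendor's actual algorithm, and the names and seed values are invented.

```python
import hmac, hashlib, time

def token_code(shared_secret: bytes, at: float = None, interval: int = 60) -> str:
    """Derive a six-digit code from the shared secret and the current time window."""
    window = int((at if at is not None else time.time()) // interval)
    digest = hmac.new(shared_secret, str(window).encode(), hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

# The server knows which device (and therefore which seed) the user holds
# and is time-synchronized with it.
DEVICE_SECRETS = {"alice": b"device-1234-seed"}

def login(user_id: str, displayed_code: str) -> bool:
    secret = DEVICE_SECRETS.get(user_id)
    return secret is not None and hmac.compare_digest(displayed_code, token_code(secret))

now = time.time()
print(token_code(b"device-1234-seed", now))                   # what the LCD shows this minute
print(login("alice", token_code(b"device-1234-seed", now)))   # True
```

Because the code changes every interval, a code observed by an attacker is of little use a minute later.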


Multifactor


Multifactor is a term that describes the use of more than one authentication mechanism at the same time. An example of this is the hardware token, which requires both a personal identification number (PIN) or password and the device itself to determine the correct response in order to authenticate to the system. This means that both the something-you-have and something-you-know mechanisms are used as factors in verifying authenticity of the user. Biometrics are also often used in conjunction with a PIN so that they, too, can be used as part of a multifactor authentication scheme, in this case something you are as well as something you know. The purpose of multifactor authentication is to increase the level of security, since more than one mechanism would have to be spoofed in order for an unauthorized individual to gain access to a computer system or network. The most common example of multifactor security is the ATM card most of us carry in our wallets. The card is associated with a PIN that only the authorized cardholder should know. Knowing the PIN without having the card is useless, just as having the card without knowing the PIN will also not provide you access to your account.



EXAM TIP The required use of more than one authentication system is known as multifactor authentication. The most common example is the combination of password with a hardware token. For high security, three factors can be used: password, token, and biometric.
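
As a small illustration of two-factor verification, the sketch below grants access only when both a stored password hash (something you know) and the current token code (something you have) check out. The names, seed, and simplified token scheme are invented for the example.

```python
import hashlib, hmac, time

STORED_PASSWORD_HASH = hashlib.sha256(b"s0mething-i-know").hexdigest()
TOKEN_SEED = b"s0mething-i-have"   # seed held by the hardware token and by the server

def current_token_code(seed: bytes, interval: int = 60) -> str:
    """Simplified time-based code shared by the token and the server."""
    window = str(int(time.time() // interval)).encode()
    digest = hmac.new(seed, window, hashlib.sha256).digest()
    return f"{int.from_bytes(digest[:4], 'big') % 1_000_000:06d}"

def multifactor_login(password: str, token_code: str) -> bool:
    knows = hmac.compare_digest(hashlib.sha256(password.encode()).hexdigest(),
                                STORED_PASSWORD_HASH)
    has = hmac.compare_digest(token_code, current_token_code(TOKEN_SEED))
    return knows and has   # both factors must succeed

print(multifactor_login("s0mething-i-know", current_token_code(TOKEN_SEED)))  # True
print(multifactor_login("s0mething-i-know", "000000"))                        # fails: token factor missing
```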


Single Sign-on


Single sign-on is a form of authentication that involves the transferring of credentials between systems. As more and more systems are combined in daily use, users are forced to maintain multiple sets of credentials; a user may have to log in to three, four, five, or even more systems every day just to do her job. Single sign-on allows a user to transfer her credentials, so that logging into one system acts to log her into all of them. This has the advantage of reducing login hassles for the user. It also has the disadvantage that, because the authentication systems are linked, the compromise of one login compromises them all for that user.


Mutual Authentication


Mutual authentication describes a process in which each side of an electronic communication verifies the authenticity of the other. We are accustomed to the idea of having to authenticate ourselves to our ISP before we access the Internet, generally through the use of a user ID/password pair, but how do we actually know that we are really communicating with our ISP and not some other system that has somehow inserted itself into our communication (a man-in-the-middle attack)? Mutual authentication addresses this issue by providing a mechanism for each side of a client/server relationship to verify the authenticity of the other.
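
A minimal sketch of the idea, assuming a pre-shared secret between client and server (the names and the challenge scheme are illustrative, not a specific protocol): each side challenges the other with a random nonce and checks the HMAC response, so both ends prove knowledge of the secret.

```python
import hmac, hashlib, os

SHARED_SECRET = b"pre-shared-secret"   # established out of band

def respond(secret: bytes, challenge: bytes) -> bytes:
    """Prove knowledge of the secret without sending it."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

# The client authenticates to the server...
server_challenge = os.urandom(16)
client_response = respond(SHARED_SECRET, server_challenge)
server_accepts = hmac.compare_digest(client_response, respond(SHARED_SECRET, server_challenge))

# ...and the server proves itself to the client the same way.
client_challenge = os.urandom(16)
server_response = respond(SHARED_SECRET, client_challenge)
client_accepts = hmac.compare_digest(server_response, respond(SHARED_SECRET, client_challenge))

print(server_accepts and client_accepts)  # True only when both sides hold the secret
```

A man-in-the-middle that does not hold the secret cannot produce either response, which is the property one-way authentication alone does not give you.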


Authorization


Authorization is the process of permitting or denying access to a specific resource. Once identity is confirmed via authentication, specific actions can be authorized or denied. Many types of authorization schemes are used, but the purpose is the same: determine whether a given user who has been identified has permissions for a particular object or resource being requested. This functionality is frequently part of the operating system and is transparent to users.

The separation of tasks, from identification to authentication to authorization, has several advantages. Many methods can be used to perform each task, and on many systems several methods are concurrently present for each task. Separation of these tasks into individual elements allows combinations of implementations to work together. Any system or resource that requires authorization, be it hardware (a router or workstation) or a software component (a database system), can use its own authorization method once authentication has occurred. This makes for efficient and consistent application of these principles.
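
A small sketch of authorization as its own step, assuming identification and authentication have already produced a confirmed user ID; the permission table, resource names, and actions are invented for illustration.

```python
# Permissions per confirmed identity; in practice this lives in the OS or the application.
ACL = {
    "payroll.db": {"alice": {"read"}, "bob": {"read", "write"}},
    "router01":   {"admin": {"read", "write"}},
}

def is_authorized(user_id: str, resource: str, action: str) -> bool:
    """Permit or deny a specific action on a specific resource for an authenticated user."""
    return action in ACL.get(resource, {}).get(user_id, set())

print(is_authorized("alice", "payroll.db", "read"))   # True
print(is_authorized("alice", "payroll.db", "write"))  # False: identified and authenticated, but not permitted
```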


IEEE 802.1X


IEEE 802.1X is an authentication standard that supports communications between a user and an authorization device, such as an edge router. IEEE 802.1X is used by all types of networks, including Ethernet, Token Ring, and wireless. The standard describes methods used to authenticate a user, by way of an authentication server such as a RADIUS server, before access to the network is granted. 802.1X acts through an intermediate device, such as an edge switch, enabling ports to carry normal traffic only if the connection is properly authenticated. This prevents unauthorized clients from accessing the publicly available ports on a switch, keeping unauthorized users out of a LAN. Until a client has successfully authenticated itself to the device, only Extensible Authentication Protocol over LAN (EAPOL) traffic is passed by the switch.

EAPOL is an encapsulated method of passing EAP messages over 802 frames. EAP is a general protocol that can support multiple methods of authentication, including one-time passwords, Kerberos, public keys, and security device methods such as smart cards. Once a client successfully authenticates itself to the 802.1X device, the switch opens ports for normal traffic. At this point, the client can communicate with the system’s AAA method, such as a RADIUS server, and authenticate itself to the network.
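
The sketch below models the switch-port behavior just described: until the client authenticates, only EAPOL frames are processed, and everything else is dropped; after a successful exchange the port opens for normal traffic. It is a conceptual simulation rather than switch configuration, and the class name, frame labels, and mock authentication-server check are invented for the example.

```python
def authentication_server_accepts(credential: str) -> bool:
    """Stand-in for the real EAP exchange relayed to a RADIUS or other AAA server."""
    return credential == "valid-credential"

class AuthenticatorPort:
    """Simplified 802.1X edge-switch port: closed to everything except EAPOL until authenticated."""

    def __init__(self):
        self.authorized = False

    def receive(self, frame_type: str, payload: str) -> str:
        if frame_type == "EAPOL":
            # Relay the EAP conversation to the authentication server (mocked above).
            self.authorized = authentication_server_accepts(payload)
            return "authenticated" if self.authorized else "authentication failed"
        if self.authorized:
            return "forwarded"   # normal traffic is allowed once the port is open
        return "dropped"         # unauthenticated clients never reach the LAN

port = AuthenticatorPort()
print(port.receive("IP", "web traffic"))           # dropped
print(port.receive("EAPOL", "valid-credential"))   # authenticated
print(port.receive("IP", "web traffic"))           # forwarded
```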


RADIUS


Remote Authentication Dial-In User Service (RADIUS) is an AAA protocol originally developed by Livingston Enterprises (later acquired by Lucent). It was submitted to the Internet Engineering Task Force (IETF) as a series of RFCs: RFC 2058 (the RADIUS specification) and RFC 2059 (the RADIUS accounting standard); the updated RFCs 2865–2869 are now the standard protocols. The IETF AAA Working Group has proposed extensions to RADIUS (RFC 2882) and a replacement protocol, DIAMETER (Internet Draft, DIAMETER Base Protocol).

RADIUS is designed as a connectionless protocol utilizing User Datagram Protocol (UDP) as its transport level protocol. Connection type issues, such as timeouts, are handled by the RADIUS application instead of the transport layer. RADIUS utilizes UDP ports 1812 for authentication and authorization and 1813 for accounting functions (see Table 9-1 in the “Chapter Review” section).
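
To give a feel for how the shared secret between the RADIUS client and server (discussed next) is used on the wire, the sketch below implements the User-Password hiding scheme described in RFC 2865, section 5.2: the password is padded and XORed with MD5 digests derived from the shared secret and the packet's Request Authenticator. Treat it as an illustration of that one mechanism rather than a complete RADIUS client; packet construction and attribute encoding are omitted, and the example values are invented.

```python
import hashlib
import os

def hide_user_password(password: bytes, shared_secret: bytes, request_authenticator: bytes) -> bytes:
    """Obscure the User-Password attribute as described in RFC 2865, section 5.2."""
    # Pad the password with NULs to a multiple of 16 octets.
    padded = password + b"\x00" * (-len(password) % 16)
    result = b""
    previous = request_authenticator            # 16-octet random value from the packet header
    for i in range(0, len(padded), 16):
        digest = hashlib.md5(shared_secret + previous).digest()
        block = bytes(p ^ d for p, d in zip(padded[i:i + 16], digest))
        result += block
        previous = block                        # chaining: the next digest uses the previous block
    return result

shared_secret = b"radius-shared-secret"         # configured on both the NAS and the RADIUS server
request_authenticator = os.urandom(16)
hidden = hide_user_password(b"user-password", shared_secret, request_authenticator)
print(hidden.hex())
```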

RADIUS is a client/server protocol. The RADIUS client is typically a network access server (NAS). The RADIUS server is a process or daemon running on a UNIX or Windows Server machine. Communications between a RADIUS client and RADIUS server are encrypted using a shared secret that is manually configured into each entity and not shared over a connection. Hence, communications between a RADIUS client (typically a NAS) and a RADIUS server are secure, but the communications between a user (typically a PC) and the RADIUS client are subject to compromise. This is important to note, for if the user’s machine (the PC) is not the RADIUS client (the NAS), then communications between the PC and the NAS are typically