Microsoft® Encyclopedia of Security
Author: Mitch Tulloch
Pages: 480
Level: All Levels
Published: 06/18/2003
ISBN: 9780735618770
Price: $39.99

Chapter: D



D

DAC

Stands for discretionary access control, a mechanism for controlling access by users to computing resources.

See: discretionary access control (DAC)

DACL

Stands for discretionary access control list, the most common type of access control list (ACL) used to control access to computer and network resources.

See: discretionary access control list (DACL)

Data Encryption Algorithm (DEA)

The name used by the American National Standards Institute (ANSI) for the Data Encryption Standard (DES).

See: Data Encryption Standard (DES)

Data Encryption Standard (DES)

An encryption standard used for many years by the U.S. federal government.

Overview

The Data Encryption Standard (DES) has been used since 1977 by federal agencies for protecting the confidentiality and integrity of sensitive information both during transmission and when in storage. DES is a secret key encryption algorithm defined by Federal Information Processing Standard FIPS 46-3. A stronger form of DES called 3DES or TDES (Triple DES) is also sometimes used by government agencies, but requires additional processing power because of the extra computation involved.

DES was cracked, however, in 1997, launching a search for a more secure replacement that would be faster than 3DES. The result of this process was the new Advanced Encryption Standard (AES), which is gradually being introduced in government agencies to phase out DES and 3DES.

Implementation

DES uses a 64-bit key, of which only 56 bits are used for encryption; the remaining 8 bits are parity bits used for error detection. The algorithm transforms 64-bit blocks of plaintext into ciphertext blocks of the same size. Since DES is a symmetric key algorithm, both the sender and the receiver require the same key in order for secure communications to be implemented. To exchange a DES session key between two parties, an asymmetric key algorithm such as Diffie-Hellman (DH) or RSA can be employed.
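The 56/8 split in a DES key can be sketched in Python (illustrative only; the key bytes are hypothetical). Each of the 8 key bytes carries 7 bits of key material plus 1 odd-parity bit in its least-significant position:

```python
def set_odd_parity(key: bytes) -> bytes:
    """Force the low bit of each byte so every byte has odd parity."""
    out = bytearray()
    for b in key:
        high7 = b & 0xFE                      # the 7 bits that carry key material
        ones = bin(high7).count("1")
        out.append(high7 | (0 if ones % 2 else 1))  # parity bit makes the count odd
    return bytes(out)

key = set_odd_parity(bytes(range(8)))         # hypothetical key bytes 0x00..0x07
```

Because the 8 parity bits are derived from the other 56, only 2^56 distinct DES keys exist.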

DES can operate in several different modes, including cipher block chaining (CBC) and Electronic Codebook (ECB) mode. ECB uses DES directly to encrypt and decrypt information, while CBC chains blocks of ciphertext together.
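The difference between the two modes can be illustrated with a toy cipher, where a single XOR stands in for the real DES transformation (this is not secure encryption; the blocks and keys are made up):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def ecb_encrypt(blocks, key):
    return [xor(b, key) for b in blocks]          # each block encrypted independently

def cbc_encrypt(blocks, key, iv):
    out, prev = [], iv
    for b in blocks:
        c = xor(xor(b, prev), key)                # chain in the previous ciphertext
        out.append(c)
        prev = c
    return out

blocks = [b"SAMEDATA", b"SAMEDATA"]               # two identical 8-byte blocks
key, iv = b"\x5a" * 8, b"\x00" * 8

ecb = ecb_encrypt(blocks, key)
cbc = cbc_encrypt(blocks, key, iv)
print(ecb[0] == ecb[1])   # True  -- ECB leaks repeated plaintext blocks
print(cbc[0] == cbc[1])   # False -- chaining hides the repetition
```

This is why CBC is generally preferred over ECB for messages longer than one block.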

Notes

The American National Standards Institute (ANSI) refers to DES as the Data Encryption Algorithm (DEA).

See Also: 3DES, Advanced Encryption Standard (AES), asymmetric key algorithm, Diffie-Hellman (DH), RSA, symmetric key algorithm

data integrity

The validity of data that is transmitted or stored.

Overview

Maintaining data integrity is essential to the privacy, security, and reliability of critical business data. There are many ways in which this integrity can be compromised:

  • Corruption of data resulting from software bugs or the actions of malicious users
  • Viruses infecting computer systems and Trojans masquerading as genuine applications
  • Hardware failures caused by age, accident, or natural disasters
  • Human error in entering, storing, or transmitting data over a network

To minimize these threats to data integrity, you should implement the following procedures:

  • Back up important data regularly and store backups in a safe location.
  • Use access control lists (ACLs) to control who is allowed to access data.
  • Maintain and replace aging hardware to prevent unexpected failure.
  • Include code in your applications for validating data input.
  • Use digital signatures to ensure data has not been tampered with during storage or in transmission.
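The last point can be sketched with a cryptographic hash (note that against a malicious user a bare hash is insufficient, since the attacker can recompute it; a digital signature or keyed hash closes that gap). The data values here are made up:

```python
import hashlib

original = b"Invoice #1001: pay $500 to Acme Corp"
digest = hashlib.sha256(original).hexdigest()   # stored alongside the data

# Later, verify the data before trusting it:
tampered = b"Invoice #1001: pay $900 to Acme Corp"
print(hashlib.sha256(original).hexdigest() == digest)  # True  -- data intact
print(hashlib.sha256(tampered).hexdigest() == digest)  # False -- integrity lost
```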

See Also: backup plan, disaster recovery plan (DRP), Trojan, virus

Data Protection API (DPAPI)

An application programming interface that is part of Microsoft CryptoAPI (CAPI) on Microsoft Windows platforms.

Overview

Data Protection API (DPAPI) implements Microsoft Windows Data Protection on Windows 2000, Windows XP, and Windows Server 2003 platforms. DPAPI is an operating system-level password-based data protection service that applications can use to encrypt and decrypt information. DPAPI uses the 3DES encryption algorithm and strong keys generated from user passwords, typically the password of the currently logged-on user. Since multiple applications running under the same account might use the same password and have access to such encrypted data, DPAPI also allows an application to provide an additional "secret," called secondary entropy, to ensure only that application can decrypt information it has previously encrypted. The process by which DPAPI generates a cryptographic key from a password is called Password-Based Key Derivation and is defined in the Public Key Cryptography Standards (PKCS) #5 standard.
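Password-Based Key Derivation as defined in PKCS #5 can be sketched with Python's standard library. This illustrates the general technique the entry describes, not DPAPI's actual implementation; the password and parameters are made up:

```python
import hashlib
import os

password = b"correct horse battery staple"   # hypothetical user password
salt = os.urandom(16)                        # random salt defeats precomputed dictionaries

# Derive a 24-byte key (e.g., suitable for 3DES) by iterating an HMAC
# many times, which slows down brute-force guessing of the password.
key = hashlib.pbkdf2_hmac("sha256", password, salt, 100_000, dklen=24)
```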

Notes

DPAPI does not store encrypted information, and applications that use it must implement their own storage mechanisms for this purpose.

See Also: 3DES, password

DCS-1000

Formerly known as Carnivore, a surveillance technology used by the FBI for monitoring e-mail.

Overview

Few actual details are known about DCS-1000 apart from the fact that it can be installed at an Internet service provider and configured to monitor various aspects of traffic in transit through the provider's network. The Electronic Privacy Information Center (EPIC), concerned about the privacy of businesses and the public, has employed the Freedom of Information Act (FOIA) to force disclosure of some information concerning the platform, but the FBI has assured the public that it only uses the system to capture e-mail authorized for seizure by a court order, as opposed to unrestrictively capturing all online traffic.

For More Information

Further information can be found on the FBI Web site at www.fbi.gov/hq/lab/carnivore/carnivore.htm.

See Also: privacy

DDoS

Stands for distributed denial of service, a type of denial of service (DoS) attack that leverages the power of multiple intermediary hosts.

See: distributed denial of service (DDoS)

DEA

Stands for Data Encryption Algorithm, the name used by the American National Standards Institute (ANSI) for Data Encryption Standard (DES).

See: Data Encryption Standard (DES)

decryption

The process of converting ciphertext into plaintext.

Overview

Encryption and decryption are complementary aspects of cryptography. The first involves transforming plaintext (digital information containing human-readable content) into ciphertext (scrambled information that cannot be directly read by humans). Decryption is the reverse process, which recovers the meaning of an encrypted message by transforming it from ciphertext back into plaintext.

The approach used for decrypting messages depends on the method used to encrypt them. For example, in a symmetric (or secret) key algorithm, both the sender and the recipient use the same shared secret key to encrypt and decrypt the message. In asymmetric key algorithms such as those used by public key cryptography systems, two keys are used, one to encrypt the message and the other to decrypt it.
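A toy symmetric-key round trip shows how the same shared key both encrypts and decrypts (a repeating XOR is used only for illustration and is not a secure cipher; the key and message are hypothetical):

```python
from itertools import cycle

def xor_with_key(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the repeating key -- applying it twice undoes it."""
    return bytes(d ^ k for d, k in zip(data, cycle(key)))

shared_key = b"secret!"
ciphertext = xor_with_key(b"attack at dawn", shared_key)   # sender encrypts
plaintext = xor_with_key(ciphertext, shared_key)           # receiver decrypts
print(plaintext)  # b'attack at dawn'
```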

See Also: asymmetric key algorithm, cryptography, encryption, public key cryptography, symmetric key algorithm

Defcon

A popular hackers' convention held each summer in Las Vegas.

Overview

Defcon has been referred to by its organizers as the "annual computer underground party for hackers." In addition to papers and presentations on everything from how to hack a system to how to secure a system against attack, topics discussed include phone phreaking, privacy issues, demonstrations of new hacking and security tools, recently discovered vulnerabilities and how to exploit and correct them, advances in Trojan and remote-control technologies, and so on.

Defcon is generally well attended by hackers, security professionals, and representatives of government, law enforcement, and media agencies. Fun activities are usually included such as a capture-the-flag type of contest in which groups of hackers are pitted against each other to try to hack each other's networks while simultaneously defending their own networks against attack. Awards are often given; for instance, one was given at Defcon 9 to an individual who hacked the conference network itself in order to gain admission to the conference without a pass.

Defcon was founded by Jeff "Dark Tangent" Moss and had its 10th annual conference in August 2002, with attendance running around 5000 and some sessions being standing room only. Defcon has evolved somewhat from its early freewheeling days and has become more "respectable" as it began to attract IS managers concerned about their growing network security needs. Defcon immediately follows another conference called Black Hat Briefings, which brings legitimate and underground security experts together to discuss the latest network security issues and methodologies.

For More Information

Visit Defcon at www.defcon.org for information about upcoming conferences and archived information from previous ones.

See Also: Black Hat Briefings, hacker, phreaking

defense in depth

A layered approach to implementing network security.

Overview

The goal of defense in depth is to provide multiple barriers for attackers attempting to compromise the security of your network. These layers provide extra hurdles for the attacker to overcome, thus slowing down the attack and providing extra time for detecting, identifying, and countering the attack. For example, the first layer of defense against passive attacks such as eavesdropping might be implementing link- or network-layer encryption, followed by security-enabled applications as a backup defense. Defense against insider attacks can consist of layers such as physical security, authenticated access control, and regular analysis of audit logs.

From a more general perspective, the first line of defense for a network occurs at its perimeter where firewalls block unwanted traffic and intrusion detection systems (IDSs) monitor traffic passed through the firewall. Additional layers behind this can include host-based firewalls and IDSs, proper access control lists (ACLs) on server resources, strong password policies, and so on.

See Also: access control list (ACL), firewall, intrusion detection system (IDS), password

demilitarized zone (DMZ)

An isolated network segment at the point where a corporate network meets the Internet.

Overview

The demilitarized zone (DMZ) is a critical part of securing your network against attack. The term originated in the Korean War to refer to an area that both sides agreed to stay out of, which acted as a buffer zone to prevent hostilities from flaring up again.

In a networking scenario, the DMZ is used to segregate the private and public networks from each other while allowing essential network services such as Web site hosting, electronic messaging, and name resolution to function properly. To accomplish this, the DMZ is typically the location where hardened hosts such as Web, mail, and DNS servers are placed so they can handle traffic from both the internal and the external networks. This reduces the attack surface of both these hosts in particular and your network in general, for if these hosts were located on the public side of the DMZ they would be more easily subject to attack, while if they were located on the private network, compromising such a host could lead to penetration of your entire network.

Implementation

There are a variety of ways of implementing a DMZ, with two of the more popular being the following:

  • Dual-firewall DMZ: Here, both the private and public networks terminate with firewalls, and the DMZ is the network segment connecting the two firewalls together. This approach is probably the most popular one in use today for implementing a DMZ.
  • Single-firewall DMZ: This was the earliest approach to implementing a DMZ and consisted of a single firewall with three interfaces, one each for the private network, public Internet, and DMZ network segment.


Demilitarized zone (DMZ). Single- and dual-firewall DMZ configurations.

Notes

The term perimeter network is more commonly used instead of DMZ in Microsoft networking environments.

See Also: firewall

denial of service (DoS)

A type of attack that tries to prevent legitimate users from accessing network services.

Overview

In a denial of service (DoS) attack, the attacker tries to prevent access to a system or network by several possible means, including the following:

  • Flooding the network with so much traffic that traffic from legitimate clients is overwhelmed
  • Flooding the network with so many requests for a network service that the host providing the service cannot receive similar requests from legitimate clients
  • Disrupting communications between hosts and legitimate clients by various means, including alteration of system configuration information or even physical destruction of network servers and components

The earliest form of DoS attack was the SYN flood, which first appeared in 1996 and exploits a weakness in Transmission Control Protocol (TCP). Other attacks exploited vulnerabilities in operating systems and applications to bring down services or even crash servers. Numerous tools were developed and freely distributed on the Internet for conducting such attacks, including Bonk, LAND, Smurf, Snork, WinNuke, and Teardrop.

TCP attacks are still the most popular form of DoS attack. This is because other types of attack such as consuming all disk space on a system, locking out user accounts in a directory, or modifying routing tables in a router generally require networks to be penetrated first, which can be a difficult task when systems are properly hardened.

Defenses against DoS attacks include these:

  • Disabling unneeded network services to limit the attack surface of your network
  • Enabling disk quotas for all accounts including those used by network services
  • Implementing filtering on routers and patching operating systems to reduce exposure to SYN flooding
  • Baselining normal network usage to help identify such attacks in order to quickly defeat them
  • Regularly backing up system configuration information and ensuring strong password policies

See Also: distributed denial of service (DDoS), SYN flooding

Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP)

A standardized approach for certifying the security of IT (information technology) systems.

Overview

The Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP) was developed to provide U.S. Department of Defense (DoD) agencies with standardized guidance for certifying and accrediting IT systems. DITSCAP is a four-stage process involving

  • Defining and documenting mission, function, requirements, and capabilities
  • Recommending changes and documenting them in a system security authorization agreement (SSAA), which summarizes specifications for the system being developed
  • Validating the SSAA using vulnerability and penetration testing, resulting in full, interim, or withheld accreditation
  • Postaccreditation monitoring and maintenance to ensure continued security

The goal of DITSCAP is to introduce integrated security into the life cycle of IT systems to minimize risks in shared infrastructures. DITSCAP was developed as a joint effort by the DoD, the Defense Information Systems Agency (DISA), and the National Security Agency (NSA). A related standard called National Information Assurance Certification and Accreditation Process (NIACAP) is employed for similar purposes between U.S. government agencies and contractors and consultants.

See Also: National Information Assurance Certification and Accreditation Process (NIACAP)

DES

Stands for Data Encryption Standard, an encryption standard used for many years by the U.S. federal government.

See: Data Encryption Standard (DES)

DESX

An enhanced version of the Data Encryption Standard (DES).

Overview

DESX, which stands for "DES XORed," is a variant of DES developed by Ron Rivest in the 1980s. DESX performs similarly to DES but has greater resistance to exhaustive key search attacks. This is accomplished by XORing the input plaintext file with 64 bits of additional key material prior to encrypting the text using DES, a process sometimes called whitening, which is now implemented in other encryption schemes. Once DES has been applied to the whitened text, the result is again XORed with the same amount of additional key material.
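The whitening idea can be sketched with a toy block cipher standing in for the real DES transformation (the keys and plaintext here are hypothetical and the "cipher" is not secure):

```python
def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def toy_cipher(block: bytes, key: bytes) -> bytes:
    return xor(block, key)                   # stand-in for the real DES permutation

def desx_encrypt(block, k_pre, k_des, k_post):
    whitened = xor(block, k_pre)             # pre-whitening: 64 extra key bits
    core = toy_cipher(whitened, k_des)       # the ordinary DES step
    return xor(core, k_post)                 # post-whitening: 64 more key bits

c = desx_encrypt(b"PLAINTXT", b"\x11" * 8, b"\x22" * 8, b"\x44" * 8)
```

The extra key material enlarges the effective key space an attacker must search, without changing the core cipher.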

See Also: Data Encryption Standard (DES)

DH

Stands for Diffie-Hellman, an algorithm used in public key cryptography schemes.

See: Diffie-Hellman (DH)

dictionary attack

A technique for cracking passwords.

Overview

The simplest but least efficient method for cracking passwords is the brute-force attack, which systematically tries all possible values in an attempt to guess the password. The dictionary attack is an improvement on this; it uses a dictionary (database) of common passwords derived from shared experiences of password crackers. Dictionary attacks can be performed online or offline, and readily available tools exist on the Internet for automating such attacks. A combination of a dictionary attack and a brute-force attack is called a hybrid attack.
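An offline dictionary attack against an unsalted password hash can be sketched as follows (the stolen hash and word list are made up for illustration):

```python
import hashlib

# Hash obtained from a hypothetical breached account database:
stolen_hash = hashlib.sha256(b"letmein").hexdigest()

# Tiny demo dictionary of common passwords:
wordlist = ["password", "123456", "qwerty", "letmein", "dragon"]

cracked = next((guess for guess in wordlist
                if hashlib.sha256(guess.encode()).hexdigest() == stolen_hash), None)
print("cracked:", cracked)   # cracked: letmein
```

Salting each stored hash with a random value is the standard defense, since it forces the attacker to rehash the entire dictionary per account.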

In addition to cracking passwords, dictionary attacks have been used in other scenarios such as guessing community names on a network that uses Simple Network Management Protocol (SNMP). Once these names are guessed, the attacker can use SNMP to profile services on the targeted network.

See Also: brute-force attack, hybrid attack

Diffie-Hellman (DH)

An algorithm used in public key cryptography schemes.

Overview

Diffie-Hellman (DH) was the first algorithm developed for public key cryptography. It is used for key exchange by a variety of security protocols, including Internet Protocol Security (IPSec), Secure Sockets Layer (SSL), and Secure Shell (SSH), as well as many popular public key infrastructure (PKI) systems.

DH was developed by Whitfield Diffie and Martin Hellman in 1976 and was the first protocol developed for enabling users to exchange a secret over an insecure medium without an existing shared secret between them. DH is not an encryption algorithm but a protocol for exchanging secret keys to be used for sending encrypted transmissions between users using Data Encryption Standard (DES), Blowfish, or some other symmetric encryption scheme.
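The exchange can be sketched with toy-sized numbers (real deployments use primes of 2048 bits or more; the private values here are arbitrary):

```python
p, g = 23, 5                    # public modulus and generator, agreed in the open

a = 6                           # Alice's private value, never transmitted
b = 15                          # Bob's private value, never transmitted

A = pow(g, a, p)                # Alice sends A = g^a mod p
B = pow(g, b, p)                # Bob sends   B = g^b mod p

alice_secret = pow(B, a, p)     # Alice computes B^a mod p
bob_secret = pow(A, b, p)       # Bob computes   A^b mod p
print(alice_secret == bob_secret)  # True -- both derive g^(ab) mod p
```

An eavesdropper sees only p, g, A, and B; recovering the shared secret from these is the discrete logarithm problem, which is computationally hard at real key sizes.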

Issues

DH in its simplest form is susceptible to man-in-the-middle attacks, though this can be mitigated by requiring all parties to use digital signatures. The Station-to-Station (STS) protocol is an authenticated version of DH developed in 1992 that uses keys certified by certificate authorities (CAs) to prevent such attacks.

See Also: public key cryptography

diffing

A technique used by hackers that compares different versions of files to look for differences.

Overview

The word diffing derives from the diff utility on UNIX systems that performs bytewise comparison between two files. A variety of diffing tools exist that work at the file, database, and disk levels. These tools are sometimes used by hackers to compare a new version of a file with an earlier version for various reasons, including the following:

  • Discovering where an application stores password information by entering a password, taking a bit-image snapshot of the application, changing the password, taking another snapshot, and diffing the two file images. This operation can show exactly where within the compiled code the password information is stored, and this may be of use in cracking other users' passwords.
  • Determining what effects a patch has when applied to an application. When vendors create patches, they may not fully disclose the vulnerabilities corrected, and by diffing the application before and after the patch and examining the result, a hacker may learn more about the original vulnerabilities. Using this information, the hacker can then proceed to attack unpatched versions of the application on other systems.

Examples of tools used for diffing include the Windows fc and UNIX diff commands. Once a file has been diffed to locate the section of code that has changed, the hacker can then use a hex editor such as Hackman to make bytewise modifications to the file if desired.
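A bytewise diff can be sketched in a few lines (the file contents are hypothetical):

```python
def diff_offsets(old: bytes, new: bytes):
    """Return the byte offsets at which two file images differ."""
    n = min(len(old), len(new))
    offsets = [i for i in range(n) if old[i] != new[i]]
    offsets += list(range(n, max(len(old), len(new))))  # any length-mismatch tail
    return offsets

before = b"user=alice;pw=hunter2;theme=dark"
after  = b"user=alice;pw=hunter3;theme=dark"
print(diff_offsets(before, after))   # [20] -- the single changed byte
```

Locating the changed offsets is exactly what tells an attacker where in a file the interesting data (such as a stored password) lives.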

See Also: hex editor

Digest authentication

A Hypertext Transfer Protocol (HTTP) authentication scheme based on challenge-response authentication.

Overview

Digest authentication is a method used by Web servers to authenticate users trying to access sites. Digest authentication was proposed in RFC 2617 as a more secure method than Basic authentication, which passes user credentials across the connection in cleartext. Instead, Digest authentication transmits user credentials as an MD5 hash to prevent credential theft by malicious users eavesdropping on the network.

Digest authentication is supported by Internet Information Services (IIS) on Microsoft Windows server platforms, the open source Apache Web server, the Jigsaw Web server developed by the World Wide Web Consortium (W3C), and many other platforms. Digest authentication can also be incorporated directly into Microsoft .NET-managed code, bypassing the version included in IIS on Microsoft Windows platforms.

Implementation

When a client browser tries to access a Web site on which Digest authentication is configured, the client begins by making an unauthenticated HTTP request to the server. The server responds with an HTTP 401 Unauthorized status code, sending a token called a nonce to the client and telling the client in the HTTP response header that it must use Digest authentication to access the site. The client then opens a dialog box to obtain the user's name and password, hashes the password together with the nonce, and sends the username and hash to the server requesting authentication.

The server then generates the same hash using the copy of the user's password stored in its security accounts database and compares this hash with the one received from the client. If the two hashes match, the client is allowed to download the requested resource from the server.
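The hash computation both sides perform follows RFC 2617's original (non-qop) form and can be sketched as follows (the credentials and nonce are made up):

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def digest_response(user, realm, password, method, uri, nonce):
    # response = MD5( MD5(user:realm:password) : nonce : MD5(method:uri) )
    ha1 = md5_hex(f"{user}:{realm}:{password}")   # the secret half; never sent raw
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

# Client and server compute the same value from their own copies of the password:
resp = digest_response("mitch", "example.com", "s3cret", "GET", "/index.html", "abc123")
```

Because only the hash crosses the wire, an eavesdropper never sees the password itself, though capturing the response enables the replay concerns discussed below.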


Digest authentication. How Digest authentication works.

Issues

Digest authentication is susceptible to replay attacks, but this can be minimized by time-limiting nonce values or using different values for each connection. While Digest authentication is more secure than Basic authentication, it is not as secure as Kerberos authentication or authentication based on client certificates. Another issue with the security of Digest authentication is that it requires passwords to be retrievable as cleartext.

See Also: authentication, Basic authentication, challenge response authentication, MD5, replay attack

DigiCrime

A Web site that humorously draws attention to information security issues.

Overview

DigiCrime (www.digicrime.com) is the brainchild of mathematician and computer scientist Kevin McCurley, and since 1996 this site has entertained the security community and informed the general public about potential issues in computer and online security. The site humorously promotes itself as offering "a full range of criminal services and products to our customers." These "services" include identity theft, money laundering, airline ticket rerouting, telephone wiretapping, spamming, and more. The idea behind these "services" is to educate and inform the general public of potential dangers in blindly trusting online transactions and to challenge the security community and software vendors to take these dangers more seriously. The site includes a community of real individuals with tongue-in-cheek titles like Director of Disinformation, Chief of Insecurity, Illegal Counsel, and Chief Arms Trafficker, many of whom are security professionals or cryptography experts and who help contribute to the site.

digital certificate

Digitally signed information that guarantees that an encryption key belongs to a particular user.

Overview

Sometimes simply called certificates, digital certificates are specially formatted digital information that is used in secure messaging systems that employ public key cryptography. Certificates are used to verify the identity of the message sender to the recipient by generating a digital signature that can be used to sign the message. They are also used for providing the recipient of an encrypted message with a copy of the sender's public key.

Digital certificates are issued by a certificate authority (CA) that is trusted by both the sender and recipient. The most common format used for certificates is the X.509 standard, which contains the user's name and public key, a serial number, expiration date, the name and digital signature of the CA that issued the certificate, and other information. When a recipient receives a message with a certificate attached, the recipient uses the CA's public key to verify the CA's signature on the certificate and thereby confirm the sender's identity.

See Also: digital signature, public key cryptography, X.509

digital fingerprinting

Another name for digital watermarking, a Digital Rights Management (DRM) antipiracy and copy-protection technology.

See: digital watermarking

digital forensics

The science of applying digital technologies to legal questions arising from criminal investigations.

Overview

Traditional forensic methods used in criminal investigations include looking for footprints, fingerprints, hair, fiber, and other physical evidence of an intruder's presence. In computer crime, the evidence left behind is of a digital nature and can include data on hard drives, logs of Web server visits or router activity, and so on. Digital forensics is the science of mining computer hardware and software to find evidence that can be used in a court of law to identify and prosecute cybercriminals.

Many companies have deployed an intrusion detection system (IDS) on their network to monitor and detect possible breaches of network security. When a breach has occurred, these companies may not have the necessary expertise to determine the extent of the breach or how the exploit was performed. In serious cases in which significant business loss has resulted, companies must establish an evidence trail to identify and prosecute the individuals responsible. In such cases, companies may enlist the services of digital forensic experts who can send in an incident response team to collect evidence, perform a "postmortem" by piecing together the evidence trail, help recover deleted files and other lost data, and perform "triage" to help restore compromised systems as quickly as possible.

Marketplace

Examples of companies offering digital forensics services include @stake, Computer Forensics, DigitalMedix, ESS Data Recovery, Guidance Software, Vigilinx, and others. Computer Sciences Corporation and Veridian share a significant portion of the digital forensics market for the U.S. federal government.

See Also: intrusion detection system (IDS)

Digital Millennium Copyright Act (DMCA)

Legislation that extends U.S. copyright law to cover digital content.

Overview

The Digital Millennium Copyright Act (DMCA) was enacted in 1998 to implement treaties of the World Intellectual Property Organization (WIPO), a United Nations agency based in Geneva, Switzerland. The provisions of the DMCA include the following:

  • Outlawing the circumvention of antipiracy measures such as Digital Rights Management (DRM) technologies built into commercial software. The law also outlaws the manufacture, sale, or distribution of devices or software to illegally crack or copy such software. Exceptions are allowed for those who conduct research and development of encryption and antipiracy technologies and for libraries and other nonprofit organizations in certain circumstances.
  • Requiring Internet service providers to remove any information on users' Web sites that may constitute copyright infringement. Liability for simple transmission of such information by third parties is limited for these service providers, however, and for educational institutions hosting student Web sites.
  • Requiring Web sites broadcasting copyrighted digital audio or video to pay licensing fees to companies producing such content.
  • Upholding generally accepted "fair use" exemptions mandated by previous copyright legislation.

Issues

The DMCA has been widely praised by the entertainment and software industry but generally criticized by academics, librarians, and civil libertarians as part of larger issues surrounding the purposes and means of implementing DRM technologies in the consumer marketplace. A notable application of the DMCA was the arrest in 2001 of Russian programmer Dmitry Sklyarov, who was apprehended after a Defcon conference at which he presented a paper on how to circumvent copyright protection technology built into Adobe eBooks software.

See Also: Digital Rights Management (DRM)

Digital Rights Management (DRM)

Any technology used to protect the interests of copyright holders of commercial digital information products and services.

Overview

The last decade has seen the advent of consumer digital information products and services such as CD audio, DVD video, CD- and DVD-ROM software, and digital television. The potential for making illegal copies of digital products using standard computer hardware and software or through online file-sharing services has been viewed by the entertainment and software industries as potentially reducing their revenues by opening a floodgate of copyright circumvention and software piracy. This danger is enhanced by the nature of digitized information, which allows such copies to contain exactly the same information as the original.

In response to this issue, companies such as Microsoft and others have developed various Digital Rights Management (DRM) technologies to protect commercial digital products and services. These technologies may control access to such products and services by preventing the sharing or copying of digital content, limiting the number of times content can be viewed or used, and tying the use or viewing of content to specific individuals, operating systems, or hardware.

Implementation

There are two general methods for implementing DRM:

  • Encrypting the information so that only authorized users or devices can use it. An example is Microsoft Windows Media DRM, an end-to-end DRM system that provides content providers and retailers with the tools to encrypt Microsoft Windows Media files for broadcast or distribution.
  • Including a "digital watermark" to secretly identify the product or service as copyrighted and to signal to the hardware displaying the content that the material is copy protected. A Federal Communications Commission (FCC) proposal to incorporate a "broadcast flag" into digital television signals is one example of this approach.

Various industry groups are working toward DRM standards, including the Internet Engineering Task Force (IETF), the MPEG Group, the OpenEBook Forum, and several others. Microsoft Corporation's next-generation secure computing base, part of its Trustworthy Computing initiative, includes the incorporation of DRM technologies into the Microsoft Windows operating system platforms.

Issues

Critics of the encryption approach to DRM suggest that such technologies weaken the privacy of consumers by requiring them to provide personal information before content can be viewed or used. Such collected information may then be used to profile consumer purchase patterns for marketing purposes and price discrimination, to limit access to certain kinds of material to certain classes of consumers, or to push users toward a pay-per-view licensing model to enhance the revenue stream for content providers.

For More Information

For information about Microsoft Windows Media DRM, see www.microsoft.com/windows/windowsmedia/drm.aspx.

See Also: digital watermarking, next-generation secure computing base

digital signature

Digital information used for purposes of identification of electronic messages or documents.

Overview

Digital signatures are a way of authenticating the identity of creators or producers of digital information. A digital signature is like a handwritten signature and can have the same legal authority in certain situations, such as buying and selling online or signing legal contracts. Digital signatures can also be used to ensure that signed information has not been tampered with in transit and that the sender cannot later repudiate having sent it.

Digital signatures are dependent on public key cryptography algorithms for their operation. There are three public key algorithms that are approved Federal Information Processing Standards (FIPS) for purposes of generating and validating digital signatures:

  • Digital Signature Algorithm (DSA)
  • Elliptic Curve DSA (ECDSA)
  • RSA algorithm

Implementation

To create a digital signature, the document or message to be transmitted is first mathematically hashed to produce a message digest. The hash is then encrypted using the sender's private key to form the digital signature, which is appended to or embedded within the message.

When the message is received, the recipient decrypts the signature using the sender's public key to recover the original hash. The recipient then hashes the received message and compares the result with the recovered hash; a match verifies both the sender's identity and the integrity of the message. Nonrepudiation is supported by the fact that the sender's public key has itself been digitally signed by the certificate authority (CA) that issued it.
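The hash-sign-verify flow described above can be sketched with textbook RSA. This is a toy illustration only: the tiny key, raw-digest signing, and lack of padding are all insecure, and real implementations use vetted cryptographic libraries.

```python
import hashlib

# Toy RSA parameters for illustration only -- real systems use keys of
# 2048 bits or more generated by a cryptographic library.
p, q = 61, 53
n = p * q            # public modulus (3233)
e = 17               # public exponent
d = 413              # private exponent (inverse of e mod lcm(p-1, q-1))

def sign(message: bytes) -> int:
    # Hash the message, then transform the digest with the private key.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(digest, d, n)

def verify(message: bytes, signature: int) -> bool:
    # Recover the digest with the public key and compare against a
    # freshly computed hash of the received message.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == digest

msg = b"Pay Alice $100"
sig = sign(msg)
print(verify(msg, sig))                  # True
print(verify(b"Pay Alice $1000", sig))   # almost certainly False (tiny toy modulus)
```

Note that only the hash, not the whole message, is signed; this is why signing scales to documents of any size.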

Click to view graphic

Digital signature. Creating a digital signature.

Notes

Digital signatures are not the same as digital certificates. A digital certificate is like a driver's license you can use to identify yourself, issued by a trusted third party, in this case called a certificate authority (CA). Included in your digital certificate is your public key, which others can use to encrypt messages to you and to verify your signatures; the corresponding private key is kept secret by you and never appears in the certificate. Because that private key is what creates your digital signature, a digital certificate is a prerequisite for digitally signing documents.

See Also: certificate authority (CA), digital certificate, Digital Signature Algorithm (DSA), Digital Signature Standard (DSS), Elliptic Curve Digital Signature Algorithm (ECDSA), hashing algorithm, public key cryptography, RSA

Digital Signature Algorithm (DSA)

A public key cryptography algorithm used to generate digital signatures.

Overview

The Digital Signature Algorithm (DSA) is a public key algorithm used for creating digital signatures to verify the identity of individuals in electronic transactions. Signatures created using DSA can be used in place of handwritten signatures in scenarios such as legal contracts, electronic funds transfers, software distribution, and other uses. Although DSA is a public key algorithm, it is used mainly for digitally signing documents and not for encrypting them.

DSA is patented by the National Institute of Standards and Technology (NIST) and forms the basis of the Digital Signature Standard (DSS).

See Also: digital signature, Digital Signature Standard (DSS), Federal Information Processing Standard (FIPS), National Institute of Standards and Technology (NIST), public key cryptography

Digital Signature Standard (DSS)

A U.S. federal government standard defining how digital signatures are generated.

Overview

The Digital Signature Standard (DSS) is defined in Federal Information Processing Standard (FIPS) 186, first issued in 1994 and since revised as FIPS 186-2. The goal of the standard is to promote electronic commerce by providing a way for documents and messages to be electronically signed using digital signatures. DSS employs two cryptographic algorithms for this purpose:

  • DSA: A public key algorithm patented by the National Institute of Standards and Technology (NIST)
  • SHA-1: A hashing algorithm standardized by NIST as FIPS 180

DSS is widely used in federal government and defense agencies for transmission of unclassified information.

See Also: digital signature, public key cryptography, Secure Hash Algorithm (SHA-1)

digital watermarking

A Digital Rights Management (DRM) antipiracy and copy-protection technology.

Overview

Digital watermarking enables digital content producers to insert hidden information in digital products and data streams to prevent them from being illegally used or copied. Such watermarks can be embedded into any form of commercially sold digital content, including audio CDs, DVD movies, software on CD- or DVD-ROMs, streaming audio and video, digital television, and so on. Watermarks can include information for copyright protection and authentication information to control who can use content and how such content can be used.

Implementation

There are two basic types of digital watermarks: visible and invisible. Visible watermarks resemble those formerly used to identify vendors of high-quality bond paper and are generally used to discourage copying of digital content. Visible watermarks do not prevent such copying from occurring, but instead may deter such copying by potentially providing legal evidence of copyright infringement through illegal copying of digital media. Invisible watermarks, on the other hand, can be used both for legal evidence and to implement invisible copy-protection schemes for media players designed to read them.

Most watermarking techniques involve manipulating digital content in the spatial or frequency domain using a mathematical procedure called fast Fourier transforms (FFT). Images of text can also be watermarked by subtly altering line and character spacing according to fixed rules.
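As an illustration of the spatial-domain approach, the following sketch embeds a copyright string into the least significant bits of raw pixel bytes. This is a toy scheme with hypothetical function names; unlike the frequency-domain methods described above, LSB marks do not survive compression or resampling.

```python
def embed_watermark(pixels: bytes, mark: bytes) -> bytes:
    """Hide `mark` in the least significant bits of `pixels` (a toy
    spatial-domain scheme for illustration only)."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("cover data too small for watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit   # overwrite the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> bytes:
    """Read `length` bytes of watermark back out of the low bits."""
    bits = [p & 1 for p in pixels[: length * 8]]
    return bytes(
        sum(bit << (7 - i) for i, bit in enumerate(bits[b * 8 : b * 8 + 8]))
        for b in range(length)
    )

cover = bytes(range(200))              # stand-in for image pixel data
marked = embed_watermark(cover, b"(c) 2003")
print(extract_watermark(marked, 8))    # b'(c) 2003'
```

Because only the lowest bit of each byte changes, the marked data is visually indistinguishable from the original, which is the defining property of an invisible watermark.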

Marketplace

A leading provider of digital-watermarking technologies and products is Digimarc (www.digimarc.com).

Notes

Another name used to refer to this procedure is digital fingerprinting.

See Also: Digital Rights Management (DRM)

disaster recovery plan (DRP)

A plan that helps a company recover data and restore services after a disaster.

Overview

Digital information is the lifeblood of today's companies, and loss of data means loss of business services and loss of revenue. Disasters that can destroy data can take many forms:

  • Natural disasters such as floods and earthquakes
  • Manufactured disasters such as terrorist attacks and criminal network intrusions
  • Disasters caused by hardware failures or buggy software
  • Accidental disasters from human error

Guarding against such disasters is important, but it's prudent to expect the worst and plan accordingly. Essential to the success of any company's IT (information technology) operations is a disaster recovery plan (DRP) to enable it to recover quickly after a disaster and restore services to customers. This can range from a simple plan to create a backup of the server every night in a small company, to the kind of technological redundancies and procedures that enabled Wall Street to recover from 9/11 after only a week. Clearly, a DRP is not a bandage you apply after things go wrong but a fundamental business practice a company should consider from day one of implementing its IT systems.

Implementation

Creating a good DRP begins with risk assessment and planning. Risk assessment determines the likelihood and scale of potential disasters, which aids in planning which technologies to implement and how much to budget. Planning involves determining which systems and data need to be backed up, how often they should be backed up, and where backed up data should be securely stored.

Selecting an appropriate backup technology and developing an appropriate backup plan for using it is important to avoid excessive costs and ensure reliable recovery after a disaster. Backup technologies can include tape backup systems, recordable CDs and DVDs, backup to remote storage area networks (SANs) over secure virtual private network (VPN) connections, and backup to service provider networks. Outsourcing backups is another option for a company whose IT department is too small to manage them in-house. The addition of hot-standby systems can greatly simplify the recovery process if financially feasible.

If your company uses IT services from service providers, it is essential to have service level agreements (SLAs) from these providers to help guarantee business continuity after a disaster. Establishing suitable information security policies and procedures is also essential to making a DRP work.

Once your DRP is up and running, it needs to be regularly tested and monitored to be sure it works. Verification of backups ensures information truly is being backed up, and periodic restores on test machines ensure that the DRP will work should it ever need to be implemented. If such monitoring and testing find weaknesses or problems in your plan, you need to modify the plan accordingly.
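The backup-verification step described above can be sketched as a hash comparison between a source tree and its backup. This is an illustrative script with hypothetical function names, not a substitute for periodic restore testing.

```python
import hashlib
from pathlib import Path

def file_digest(path: Path) -> str:
    """SHA-256 hash of a file, read in chunks to handle large files."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_backup(source: Path, backup: Path) -> list:
    """Return relative paths that are missing from or differ in the backup."""
    problems = []
    for src in source.rglob("*"):
        if not src.is_file():
            continue
        rel = src.relative_to(source)
        dst = backup / rel
        if not dst.is_file():
            problems.append(f"missing: {rel}")
        elif file_digest(src) != file_digest(dst):
            problems.append(f"corrupt: {rel}")
    return problems
```

A scheduled job running such a check catches silently failing backups long before a disaster forces the question.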

Having an external audit of your DRP by a company with expertise in this area can also be valuable. ISO 17799 is a recognized standard in IT security best practices, and auditing on this basis can be advantageous on a legal liability basis if your company provides information services to others.

Another essential component of a DRP is a business resumption plan (BRP), sometimes called a business continuity plan (BCP). This is a detailed step-by-step plan on how to quickly resume normal business after a disaster occurs.

Fundamentally, however, your DRP will never be fully tested until a significant disaster occurs.

See Also: backup plan, business resumption plan (BRP)

discretionary access control (DAC)

A mechanism for controlling access by users to computing resources.

Overview

Discretionary access control (DAC) is one of two basic approaches to implementing access control on computer systems, the other being mandatory access control (MAC). DAC specifies who can access a resource and which level of access each user or group of users has to the resource. DAC is generally implemented through the use of an access control list (ACL), a data structure that contains a series of access control entries (ACEs). Each ACE includes the identity of a user or group and a list of which operations that user or group can perform on the resource being secured.
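The ACL-of-ACEs structure just described can be sketched in a few lines. This is a hypothetical model for illustration, not the actual Windows security descriptor API.

```python
from dataclasses import dataclass, field

@dataclass
class AccessControlEntry:
    """One ACE: a trustee (user or group) plus the operations it covers."""
    trustee: str
    allow: bool
    operations: frozenset   # e.g. {"read", "write", "delete"}

@dataclass
class DiscretionaryACL:
    """An ordered list of ACEs; by convention deny entries come first."""
    entries: list = field(default_factory=list)

    def check(self, user: str, groups: set, operation: str) -> bool:
        principals = {user} | groups
        for ace in self.entries:
            if ace.trustee in principals and operation in ace.operations:
                return ace.allow    # first matching entry wins
        return False                # implicit deny if nothing matches

dacl = DiscretionaryACL([
    AccessControlEntry("Interns", allow=False, operations=frozenset({"write"})),
    AccessControlEntry("Staff", allow=True, operations=frozenset({"read", "write"})),
])
print(dacl.check("alice", {"Staff"}, "read"))             # True
print(dacl.check("bob", {"Interns", "Staff"}, "write"))   # False: deny matched first
```

The "first matching entry wins" rule is why ACE ordering matters on real systems: a deny entry placed after a broad allow entry would never be reached.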

Most computing platforms, including Microsoft Windows, Linux, and different flavors of UNIX, implement some form of DAC mechanism for controlling access to file system and other types of resources.

See Also: access control, access control entry (ACE), access control list (ACL), mandatory access control (MAC)

discretionary access control list (DACL)

The most common type of access control list (ACL) used to control access to computer and network resources.

Overview

Discretionary access control lists (DACLs) are one of two forms of ACLs, the other being system access control lists (SACLs). DACLs are the more general of the two types and are assigned to file system and other computing resources to specify who can access them and which level of access each user or group has. In fact, when the term ACL is used without qualification, it can usually be assumed to mean DACL unless system auditing is under discussion. Using DACLs, an operating system can implement discretionary access control (DAC) for enforcing what users can or cannot do with system resources.

See Also: access control, access control list (ACL), discretionary access control (DAC), system access control list (SACL)

distributed denial of service (DDoS)

A type of denial of service (DoS) attack that leverages the power of multiple intermediary hosts.

Overview

Classic DoS attacks are one-to-one attacks in which a more powerful host generates traffic that swamps the network connection of the target host, thus preventing legitimate clients from accessing network services on the target. The distributed denial of service (DDoS) attack takes this one step further by amplifying the attack manyfold, with the result that server farms or entire network segments can be rendered useless to clients.

Click to view graphic

Distributed denial of service. How a DDoS attack works.

DDoS attacks first appeared in 1999, just three years after DoS attacks using SYN flooding brought Web servers across the Internet to their knees. In early February 2000, a major attack took place on the Internet, bringing down popular Web sites such as Amazon, CNN, eBay, and Yahoo! for several hours. A more recent attack of some significance occurred in October 2002 when 9 of the 13 root DNS servers were crippled by a massive and coordinated DDoS attack called a ping flood. At the peak of the attack, some of these servers received more than 150,000 Internet Control Message Protocol (ICMP) requests per second. Fortunately, because of caching by top-level Domain Name System (DNS) servers and because the attack lasted only a half hour, traffic on the Internet was not severely disrupted by the attack.

Implementation

The theory and practice behind performing DDoS attacks is simple:

  1. Run automated tools to find vulnerable hosts on other networks connected to the Internet. Once a vulnerable host is found, such tools can compromise the host and install a DDoS Trojan, turning the host into a zombie that can be controlled remotely by a master station that the attacker uses to launch the attack. Popular tools for launching such DDoS attacks include TFN, TFN2K, Trinoo, and Stacheldraht, all of which are readily available on the Internet.
  2. Once enough hosts have been compromised, the attacker uses the master station to signal the zombies to commence the attack against the target host or network. This attack is usually some form of SYN flood or other simple DoS attack scheme, but the fact that hundreds or even thousands of zombie hosts are used in the attack creates a massive amount of network traffic that can quickly consume all Transmission Control Protocol (TCP) resources on the target and may even swamp the target's network connection to the Internet.

Almost all computer platforms are susceptible to being hijacked as zombies to conduct such an attack, including Solaris, Linux, Microsoft Windows, and flavors of UNIX. The best way to defend against such attacks involves modifying router configurations at Internet service providers (ISPs), specifically:

  • Filtering all RFC 1918 private Internet Protocol (IP) addresses using router access control lists
  • Applying RFC 2267 ingress and egress filtering on all edge routers so that the client's side of the connection rejects incoming packets that have addresses originating within their own network, while the ISP's side accepts only packets that have addresses originating from the client's network
  • Rate-limiting all ICMP and SYN packets on all router interfaces

For these practices to be most effective, the cooperation of the whole Internet community is required.
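The first two router measures above can be illustrated as a simple filtering decision. This is a sketch of the stated RFC 1918/RFC 2267 rules with hypothetical function names; real routers express this logic as interface ACLs, not application code.

```python
import ipaddress

# RFC 1918 private ranges that should never appear as source addresses
# on packets arriving from the public Internet.
PRIVATE_NETS = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def should_drop(source_ip: str, customer_net: str, from_customer: bool) -> bool:
    """RFC 2267-style filtering decision at an ISP edge router.

    Packets from the customer side must carry a source address inside the
    customer's own prefix (egress filtering); packets from the Internet
    side must not claim a source inside that prefix (ingress filtering);
    RFC 1918 sources are dropped in either direction."""
    src = ipaddress.ip_address(source_ip)
    net = ipaddress.ip_network(customer_net)
    if any(src in p for p in PRIVATE_NETS):
        return True                      # private source on a public link: drop
    if from_customer:
        return src not in net            # spoofed outbound source: drop
    return src in net                    # inbound packet claiming our prefix: drop

print(should_drop("10.1.2.3", "203.0.113.0/24", False))    # True: RFC 1918 source
print(should_drop("203.0.113.7", "203.0.113.0/24", True))  # False: legitimate
print(should_drop("198.51.100.9", "203.0.113.0/24", True)) # True: spoofed egress
```

Universal egress filtering of this kind would prevent zombies from forging source addresses, which is why community-wide cooperation matters.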

For More Information

A good resource on DDoS is the staff page of Dave Dittrich, senior security engineer at the University of Washington; see staff.washington.edu/dittrich/misc/ddos/.

See Also: denial of service (DoS), SYN flooding, zombie

DITSCAP

Stands for Department of Defense Information Technology Security Certification and Accreditation Process, a standardized approach for certifying the security of IT (information technology) systems.

See: Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP)

DMCA

Stands for Digital Millennium Copyright Act, legislation that extends U.S. copyright law to cover digital content.

See: Digital Millennium Copyright Act (DMCA)

DMZ

Stands for demilitarized zone, an isolated network segment at the point where a corporate network meets the Internet.

See: demilitarized zone (DMZ)

DNS cache poisoning

Another name for Domain Name System (DNS) spoofing, a method used for attacking DNS servers.

See: DNS spoofing

DNS spoofing

A method used for attacking Domain Name System (DNS) servers.

Overview

DNS spoofing feeds false information to DNS servers or their clients in order to impersonate legitimate DNS servers. It can enable malicious users to deny access to authentic DNS servers, redirect users to different Web sites, or collect and read e-mail addressed to or sent from a given domain.

There are two basic approaches to DNS spoofing:

  • By modifying a name server to provide false authoritative records in response to a recursive query, a malicious user can redirect all requests to a certain domain to an illicit DNS server. The result is that a user trying to access a popular site may be directed to a different site that looks the same but that has been set up to capture any personal information the user submits. A notorious example of this occurred in 1997 when Eugene Kashpureff used DNS spoofing to redirect users trying to access the InterNIC domain name registry to his own AlterNIC name registry.
  • Another approach is to sniff a network connection over which DNS traffic regularly travels and spoof User Datagram Protocol (UDP) packets used in DNS queries. The attacker predicts the next query ID number and inserts this into a spoofed packet, thus hijacking the DNS query and redirecting the user to an illicit look-alike Web site.

The general approach to prevent such attacks includes patching DNS servers with the latest fixes, restricting zone transfers and dynamic updates, and turning off recursion if necessary. However, the real solution to the problem of DNS spoofing involves developing cryptographically authenticated DNS and deploying it across the Internet.

DNS spoofing can also be considered a form of denial of service (DoS) attack since it prevents users from accessing genuine DNS servers.

See Also: denial of service (DoS), spoofing

DoS

Stands for denial of service, a type of attack that tries to prevent legitimate users from accessing network services.

See: denial of service (DoS)

dot bug vulnerability

A type of coding vulnerability.

Overview

The dot bug vulnerability first appeared in 1997 when someone discovered that by appending two extra periods to the end of a Uniform Resource Locator (URL) requesting an Active Server Page (ASP) file from a Microsoft Internet Information Server (IIS) 3 Web server, you could view the ASP code instead of executing it. For example, browsing the URL http://www.northwindtraders.com/somepage.asp would cause the page to execute normally, while browsing http://www.northwindtraders.com/somepage.asp.. would display the ASP code instead. Other exploits with similar effect soon followed, including substituting %2e for the period in somepage.asp and appending ::$DATA to the end of the URL. A similar dot bug vulnerability that allowed scripts residing in cookies to run and read information in other cookies was discovered in Microsoft Internet Explorer in February 2002.
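One of these variants worked because %2e is simply the URL-encoded form of the period: vulnerable servers matched the file extension before decoding the URL, so the encoded request slipped past the handler that normally executes .asp files. A quick check with Python's standard library shows the equivalence:

```python
from urllib.parse import unquote

# %2e decodes to ".", so "somepage%2easp" names the same file as
# "somepage.asp" -- but an extension check performed before decoding
# sees no ".asp" suffix and serves the raw source instead.
print(unquote("somepage%2easp"))   # somepage.asp
```

The general lesson, which applies well beyond IIS, is to canonicalize a request fully before making any security decision about it.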

Similar vulnerabilities have been found in other platforms and products. For instance, a dot bug vulnerability just like one found in ASP was later discovered in PHP, another scripting platform for creating dynamic Web sites. A vulnerability was also discovered in the Hypertext Transfer Protocol (HTTP) server on the IBM AS/400 platform, whereby appending a forward slash (/) to the end of a URL would display the source code of the page.

Improved coding practices have generally resulted in fewer such bugs in the last few years.

See Also: vulnerability

DPAPI

Stands for Data Protection API, an application programming interface (API) that is part of CryptoAPI on Microsoft Windows platforms.

See: Data Protection API (DPAPI)

DRM

Stands for Digital Rights Management, any technology used to protect the interests of copyright holders of commercial digital information products and services.

See: Digital Rights Management (DRM)

DRP

Stands for disaster recovery plan, a plan that helps a company recover data and restore services after a disaster.

See: disaster recovery plan (DRP)

DSA

Stands for Digital Signature Algorithm, a public key cryptography algorithm used to generate digital signatures.

See: Digital Signature Algorithm (DSA)

Dsniff

A popular set of tools for network auditing and penetration testing.

Overview

Dsniff is a collection of tools used on UNIX/Linux platforms developed by Dug Song of the Center for Information Technology Integration at the University of Michigan. These tools are popular with network security professionals and hackers alike and, as of version 2.3 of Dsniff, consist of the following:

  • Passive network monitoring tools: Dsniff, Filesnarf, Mailsnarf, Msgsnarf, Urlsnarf, and Webspy
  • Traffic interception tools: Arpspoof, Dnsspoof, and Macof
  • Man-in-the-middle (MITM) attack tools: sshmitm (for Secure Shell, SSH) and webmitm (for Hypertext Transfer Protocol Secure, HTTPS)

For More Information

See monkey.org/~dugsong/dsniff for more information.

See Also: sniffer

DSS

Stands for Digital Signature Standard, a U.S. federal government standard defining how digital signatures are generated.

See: Digital Signature Standard (DSS)

dynamic packet filtering

An advanced packet-filtering technology used by firewalls and some routers.

Overview

Packet filtering is used by routers and firewalls for filtering out undesired packets. Early routers employed static packet filtering, commonly called packet filtering, which allows routers to be manually configured to allow or block incoming or outgoing packets based on Internet Protocol (IP) address and port information found in packet headers. Dynamic packet filtering takes this a step further by opening ports only when required and closing them when no longer needed. Dynamic packet filtering thus minimizes exposed ports and provides better security than static filtering.

Dynamic packet filtering is managed by creating policies that define rules for when and for how long different ports should be opened or closed. All packets passing through the router or firewall are compared with these rules to determine whether to forward or drop them.

In addition to examining the packet header, some firewalls implementing dynamic packet filtering can inspect deeper layers of the TCP/IP protocol within each packet to create a state table containing information about each established connection. This allows them to filter packets not only by rules but also by state information concerning previous packets for that connection. This process is commonly called stateful inspection.
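The state table idea can be sketched as follows. This is a minimal illustration of connection tracking with hypothetical function names, not a working firewall: it admits inbound TCP packets only when they belong to a flow an inside host already initiated, which is how dynamic filtering avoids leaving ports open permanently.

```python
# Established connections, keyed as (src, sport, dst, dport).
state_table = set()

def record_outbound(src, sport, dst, dport):
    """Record an outbound connection so replies can be matched to it."""
    state_table.add((src, sport, dst, dport))

def allow_inbound(src, sport, dst, dport) -> bool:
    """Permit an inbound packet only if it is a reply to a tracked flow,
    i.e. its addresses and ports mirror a recorded outbound connection."""
    return (dst, dport, src, sport) in state_table

record_outbound("192.168.1.5", 40000, "203.0.113.9", 80)       # inside host opens a connection
print(allow_inbound("203.0.113.9", 80, "192.168.1.5", 40000))  # True: reply to tracked flow
print(allow_inbound("203.0.113.9", 80, "192.168.1.5", 40001))  # False: unsolicited packet
```

A production firewall also tracks TCP flags and sequence numbers and expires idle entries, but the table lookup above is the core of stateful inspection.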

Marketplace

Microsoft Internet Security and Acceleration Server (ISA Server) supports policy-based dynamic packet filtering of IP traffic to enhance the security of your network. Most commercial firewalls also support some kind of dynamic packet filtering in their operation.

See Also: firewall, packet filtering, stateful inspection

dynamic proxy

Another name for adaptive proxy, an enhanced form of application-level gateway.

See: adaptive proxy



Last Updated: June 19, 2003