Chapter: D
See: discretionary access control (DAC)
See: discretionary access control list (DACL)
See: Data Encryption Standard (DES)
The Data Encryption Standard (DES) has been used since 1977 by federal agencies for protecting the confidentiality and integrity of sensitive information both during transmission and when in storage. DES is a secret key encryption algorithm defined by Federal Information Processing Standard FIPS 46-3. A stronger form of DES called 3DES or TDES (Triple DES) is also sometimes used by government agencies, but it requires additional processing power because of the extra computation involved.
DES was cracked, however, in 1997, launching a search for a more secure replacement that would be faster than 3DES. The result of this process was the new Advanced Encryption Standard (AES), which is gradually being introduced in government agencies to phase out DES and 3DES.
DES uses a 64-bit key, of which only 56 bits are used for encryption; the remaining 8 bits are parity bits used for error checking. The algorithm transforms 64-bit blocks of plaintext into ciphertext blocks of the same size. Since DES is a symmetric key algorithm, both the sender and the receiver require the same key for secure communication. To exchange a DES session key between two parties, an asymmetric key algorithm such as Diffie-Hellman (DH) or RSA can be employed.
DES can operate in several different modes, including cipher block chaining (CBC) and Electronic Codebook (ECB) mode. ECB uses DES directly to encrypt and decrypt information, while CBC chains blocks of ciphertext together.
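The difference between the two modes can be sketched in a few lines of Python, assuming the third-party pycryptodome package (the key, initialization vector, and plaintext here are illustrative only):

from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes

key = get_random_bytes(8)        # 64-bit key (56 effective bits plus parity)
plaintext = b"EIGHTBYT" * 4      # DES operates on 64-bit (8-byte) blocks

# ECB applies DES to each block independently.
ecb = DES.new(key, DES.MODE_ECB)
ecb_ciphertext = ecb.encrypt(plaintext)

# CBC XORs each plaintext block with the previous ciphertext block
# (or the IV for the first block) before encrypting it.
iv = get_random_bytes(8)
cbc = DES.new(key, DES.MODE_CBC, iv)
cbc_ciphertext = cbc.encrypt(plaintext)

# Identical plaintext blocks yield identical ciphertext blocks under ECB
# but not under CBC, which is why CBC is usually preferred.
print(ecb_ciphertext[:8] == ecb_ciphertext[8:16])   # True
print(cbc_ciphertext[:8] == cbc_ciphertext[8:16])   # False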
The American National Standards Institute (ANSI) refers to DES as the Data Encryption Algorithm (DEA).
See Also: 3DES, Advanced Encryption Standard (AES), asymmetric key algorithm, Diffie-Hellman (DH), RSA, symmetric key algorithm
Maintaining data integrity is essential to the privacy, security, and reliability of critical business data. There are many ways in which this integrity can be compromised:
To minimize these threats to data integrity, you should implement the following procedures:
See Also: backup plan, disaster recovery plan (DRP), Trojan, virus
Data Protection API (DPAPI) implements Microsoft Windows Data Protection on Windows 2000, Windows XP, and Windows Server 2003 platforms. DPAPI is an operating system-level, password-based data protection service that applications can use to encrypt and decrypt information. DPAPI uses the 3DES encryption algorithm and strong keys generated from user passwords, typically the password of the currently logged-on user. Since multiple applications running under the same account might use the same password and have access to such encrypted data, DPAPI also allows an application to provide an additional "secret," called secondary entropy, to ensure that only that application can decrypt information it has previously encrypted. The process by which DPAPI generates a cryptographic key from a password is called Password-Based Key Derivation and is defined in the Public Key Cryptography Standards (PKCS) #5 standard.
DPAPI does not store encrypted information, and applications that use it must implement their own storage mechanisms for this purpose.
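As a rough illustration, the following Python sketch calls DPAPI through the win32crypt module of the third-party pywin32 package on a Windows system; the secret, entropy value, and file name are illustrative, and the application itself is responsible for storing the resulting blob:

import win32crypt

secret = b"connection string or other sensitive data"
entropy = b"application-specific secondary entropy"   # optional extra secret

# Encrypt under the logged-on user's credentials plus the secondary entropy.
blob = win32crypt.CryptProtectData(secret, "demo", entropy, None, None, 0)

# DPAPI stores nothing itself, so the application persists the blob.
with open("secret.bin", "wb") as f:
    f.write(blob)

# Later, only code running as the same user and supplying the same entropy
# can recover the plaintext.
with open("secret.bin", "rb") as f:
    blob = f.read()
description, recovered = win32crypt.CryptUnprotectData(blob, entropy, None, None, 0)
assert recovered == secret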
See Also: 3DES, password
Few actual details are known about DCS-1000 apart from the fact that it can be installed at an Internet service provider and configured to monitor various aspects of traffic in transit through the provider's network. The Electronic Privacy Information Center (EPIC), concerned about the privacy of businesses and the public, has employed the Freedom of Information Act (FOIA) to force disclosure of some information concerning the platform, but the FBI has assured the public that it uses the system only to capture e-mail authorized for seizure by a court order, rather than indiscriminately capturing all online traffic.
For More Information
Further information can be found on the FBI Web site at www.fbi.gov/hq/lab/carnivore/carnivore.htm.
See Also: privacy
See: distributed denial of service (DDoS)
See: Data Encryption Standard (DES)
Encryption and decryption are complementary aspects of cryptography. The first involves transforming plaintext (digital information containing human-readable content) into ciphertext (scrambled information that cannot be directly read by humans). Decryption is the reverse process, which recovers the meaning of an encrypted message by transforming it from ciphertext back into plaintext.
The approach used for decrypting messages depends on the method used to encrypt them. For example, in a symmetric (or secret) key algorithm, both the sender and the recipient use the same shared secret key to encrypt and decrypt the message. In asymmetric key algorithms such as those used by public key cryptography systems, two keys are used, one to encrypt the message and the other to decrypt it.
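The asymmetric case can be illustrated with a short Python sketch, assuming the third-party cryptography package; in practice the recipient's public key would be obtained from a digital certificate rather than generated on the spot:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The recipient owns the key pair and publishes only the public half.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

plaintext = b"meet at noon"

# Anyone can encrypt with the recipient's public key...
ciphertext = public_key.encrypt(
    plaintext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)

# ...but only the holder of the private key can decrypt the result.
recovered = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None),
)
assert recovered == plaintext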
See Also: asymmetric key algorithm, cryptography, encryption, public key cryptography, symmetric key algorithm
Defcon has been referred to by its organizers as the "annual computer underground party for hackers." In addition to papers and presentations on everything from how to hack a system to how to secure one against attack, topics include phone phreaking, privacy issues, demonstrations of new hacking and security tools, recently discovered vulnerabilities and how to exploit and correct them, advances in Trojan and remote-control technologies, and so on.
Defcon is generally well attended by hackers, security professionals, and representatives of government, law enforcement, and media agencies. Fun activities are usually included, such as a capture-the-flag contest in which groups of hackers are pitted against one another, each trying to hack the others' networks while defending its own against attack. Awards are often given; for instance, one was given at Defcon 9 to an individual who hacked the conference network itself in order to gain admission to the conference without a pass.
Defcon was founded by Jeff "Dark Tangent" Moss and had its 10th annual conference in August 2002, with attendance running around 5000 and some sessions being standing room only. Defcon has evolved somewhat from its early freewheeling days and has become more "respectable" as it began to attract IS managers concerned about their growing network security needs. Defcon immediately follows another conference called Black Hat Briefings, which brings legitimate and underground security experts together to discuss the latest network security issues and methodologies.
For More Information
Visit Defcon at www.defcon.org for information about upcoming conferences and archived information from previous ones.
See Also: Black Hat Briefings, hacker, phreaking
The goal of defense in depth is to provide multiple barriers for attackers attempting to compromise the security of your network. These layers provide extra hurdles for the attacker to overcome, thus slowing down the attack and providing extra time for detecting, identifying, and countering the attack. For example, the first layer of defense against passive attacks such as eavesdropping might be implementing link- or network-layer encryption, followed by security-enabled applications as a backup defense. Defense against insider attacks can consist of layers such as physical security, authenticated access control, and regular analysis of audit logs.
From a more general perspective, the first line of defense for a network occurs at its perimeter where firewalls block unwanted traffic and intrusion detection systems (IDSs) monitor traffic passed through the firewall. Additional layers behind this can include host-based firewalls and IDSs, proper access control lists (ACLs) on server resources, strong password policies, and so on.
See Also: access control list (ACL), firewall, intrusion detection system (IDS), password
The demilitarized zone (DMZ) is a critical part of securing your network against attack. The term originated in the Korean War to refer to an area that both sides agreed to stay out of, which acted as a buffer zone to prevent hostilities from flaring up again.
In a networking scenario, the DMZ is used to segregate the private and public networks from each other while allowing essential network services such as Web site hosting, electronic messaging, and name resolution to function properly. To accomplish this, the DMZ is typically the location where hardened hosts such as Web, mail, and DNS servers are placed so they can handle traffic from both the internal and the external networks. This reduces the attack surface of both these hosts in particular and your network in general: if these hosts were located outside the DMZ on the public network, they would be more easily subject to attack, while if they were located on the internal network, compromising such a host could lead to penetration of your entire network.
There are a variety of ways of implementing a DMZ, with two of the more popular being the following:
Demilitarized zone (DMZ). Single- and dual-firewall DMZ configurations.
The term perimeter network is more commonly used instead of DMZ in Microsoft networking environments.
See Also: firewall
In a denial of service (DoS) attack, the attacker tries to prevent access to a system or network by several possible means, including the following:
The earliest form of DoS attack was the SYN flood, which first appeared in 1996 and exploits a weakness in Transmission Control Protocol (TCP). Other attacks exploited vulnerabilities in operating systems and applications to bring down services or even crash servers. Numerous tools were developed and freely distributed on the Internet for conducting such attacks, including Bonk, LAND, Smurf, Snork, WinNuke, and Teardrop.
TCP attacks are still the most popular form of DoS attack. This is because other types of attack such as consuming all disk space on a system, locking out user accounts in a directory, or modifying routing tables in a router generally require networks to be penetrated first, which can be a difficult task when systems are properly hardened.
Defenses against DoS attacks include these:
See Also: distributed denial of service (DDoS), SYN flooding
Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP)
A standardized approach for certifying the security of IT (information technology) systems.
The Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP) was developed to guide U.S. Department of Defense (DoD) agencies through the certification and accreditation of IT systems. DITSCAP is a four-stage process involving
The goal of DITSCAP is to introduce integrated security into the life cycle of IT systems to minimize risks in shared infrastructures. DITSCAP was developed as a joint effort by the DoD, the Defense Information Systems Agency (DISA), and the National Security Agency (NSA). A related standard called National Information Assurance Certification and Accreditation Process (NIACAP) is employed for similar purposes between U.S. government agencies and contractors and consultants.
See Also: National Information Assurance Certification and Accreditation Process (NIACAP)
See: Data Encryption Standard (DES)
DESX, which stands for "DES XORed," is a variant of DES developed by Ron Rivest in the 1980s. DESX performs similarly to DES but has greater resistance to exhaustive key search attacks. This is accomplished by XORing the input plaintext file with 64 bits of additional key material prior to encrypting the text using DES, a process sometimes called whitening, which is now implemented in other encryption schemes. Once DES has been applied to the whitened text, the result is again XORed with the same amount of additional key material.
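The construction is simple enough to sketch in Python, using the pycryptodome package for the underlying DES block operation (the keys here are random and purely illustrative):

from Crypto.Cipher import DES
from Crypto.Random import get_random_bytes

def xor64(a, b):
    # XOR two 8-byte (64-bit) blocks together.
    return bytes(x ^ y for x, y in zip(a, b))

des_key = get_random_bytes(8)   # ordinary DES key
k1 = get_random_bytes(8)        # pre-whitening key material
k2 = get_random_bytes(8)        # post-whitening key material

def desx_encrypt_block(block):
    core = DES.new(des_key, DES.MODE_ECB)
    whitened = xor64(block, k1)               # whiten the plaintext
    return xor64(core.encrypt(whitened), k2)  # encrypt, then whiten again

def desx_decrypt_block(block):
    core = DES.new(des_key, DES.MODE_ECB)
    return xor64(core.decrypt(xor64(block, k2)), k1)

block = b"PLAINTXT"
assert desx_decrypt_block(desx_encrypt_block(block)) == block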
See Also: Data Encryption Standard (DES)
See: Diffie-Hellman (DH)
The simplest but least efficient method for cracking passwords is the brute-force attack, which systematically tries all possible values in an attempt to guess the password. The dictionary attack is an improvement on this; it uses a dictionary (database) of common passwords derived from shared experiences of password crackers. Dictionary attacks can be performed online or offline, and readily available tools exist on the Internet for automating such attacks. A combination of a dictionary attack and a brute-force attack is called a hybrid attack.
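The idea can be sketched in a few lines of Python; the unsalted MD5 hash and the wordlist file name are illustrative only:

import hashlib

stolen_hash = hashlib.md5(b"letmein").hexdigest()   # hash captured by the attacker

def dictionary_attack(target_hash, wordlist_path="wordlist.txt"):
    with open(wordlist_path, encoding="utf-8") as wordlist:
        for line in wordlist:
            candidate = line.strip()
            if hashlib.md5(candidate.encode()).hexdigest() == target_hash:
                return candidate     # password found in the dictionary
    return None                      # fall back to a brute-force or hybrid attack

print(dictionary_attack(stolen_hash))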
In addition to cracking passwords, dictionary attacks have been used in other scenarios such as guessing community names on a network that uses Simple Network Management Protocol (SNMP). Once these names are guessed, the attacker can use SNMP to profile services on the targeted network.
See Also: brute-force attack, hybrid attack
Diffie-Hellman (DH) was the first algorithm developed for public key cryptography. It is used for key exchange by a variety of security protocols, including Internet Protocol Security (IPSec), Secure Sockets Layer (SSL), and Secure Shell (SSH), as well as many popular public key infrastructure (PKI) systems.
DH was developed by Whitfield Diffie and Martin Hellman in 1976 and was the first protocol developed for enabling users to exchange a secret over an insecure medium without an existing shared secret between them. DH is not an encryption algorithm but a protocol for exchanging secret keys to be used for sending encrypted transmissions between users using Data Encryption Standard (DES), Blowfish, or some other symmetric encryption scheme.
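The arithmetic behind the exchange can be sketched in Python using the textbook toy parameters p = 23 and g = 5; real implementations use primes of 1024 bits or more:

import secrets

p, g = 23, 5                       # public values: a prime modulus and a generator

a = secrets.randbelow(p - 2) + 1   # Alice's private value
b = secrets.randbelow(p - 2) + 1   # Bob's private value

A = pow(g, a, p)                   # Alice sends A = g^a mod p to Bob
B = pow(g, b, p)                   # Bob sends B = g^b mod p to Alice

# Each side raises the other's public value to its own private exponent;
# both arrive at the same shared secret without ever transmitting it.
assert pow(B, a, p) == pow(A, b, p)
shared_secret = pow(B, a, p)       # feed this into key derivation for DES, AES, etc.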
DH in its simplest form is susceptible to man-in-the-middle attacks, though this can be mitigated by requiring all parties to use digital signatures. The Station-to-Station (STS) protocol is an authenticated version of DH developed in 1992 that uses keys certified by certificate authorities (CAs) to prevent such attacks.
See Also: public key cryptography
The word diffing derives from the diff utility on UNIX systems that performs bytewise comparison between two files. A variety of diffing tools exist that work at the file, database, and disk levels. These tools are sometimes used by hackers to compare a new version of a file with an earlier version for various reasons, including the following:
Examples of tools used for diffing include the Windows fc and UNIX diff commands. Once a file has been diffed to locate the section of code that has changed, the hacker can then use a hex editor such as Hackman to make bytewise modifications to the file if desired.
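A bytewise comparison of the kind these tools perform can be sketched in Python (the file names are illustrative):

def bytewise_diff(path_old, path_new):
    # Report each offset at which the two files' bytes differ.
    with open(path_old, "rb") as f1, open(path_new, "rb") as f2:
        old, new = f1.read(), f2.read()
    for offset in range(max(len(old), len(new))):
        byte_old = old[offset] if offset < len(old) else None
        byte_new = new[offset] if offset < len(new) else None
        if byte_old != byte_new:
            print(f"offset {offset:#010x}: {byte_old!r} -> {byte_new!r}")

bytewise_diff("file.before", "file.after")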
See Also: hex editor
Digest authentication is a method used by Web servers to authenticate users trying to access sites. Digest authentication was proposed in RFC 2617 as a more secure method than Basic authentication, which passes user credentials across the connection in cleartext. Instead, Digest authentication sends user credentials as an MD5 hash so that they cannot be stolen by malicious users eavesdropping on the network.
Digest authentication is supported by Internet Information Services (IIS) on Microsoft Windows server platforms, the open source Apache Web server, the Jigsaw Web server developed by the World Wide Web Consortium (W3C), and many other platforms. Digest authentication can also be incorporated directly into Microsoft .NET-managed code, bypassing the version included in IIS on Microsoft Windows platforms.
When a client browser tries to access a Web site on which Digest authentication is configured, the client begins by making an unauthenticated HTTP request to the server. The server responds with an HTTP 401 Unauthorized status code, sending a token called a nonce to the client and telling the client in the HTTP response header that it must use Digest authentication to access the site. The client then opens a dialog box to obtain the user's name and password, hashes the password together with the nonce, and sends the username and hash to the server requesting authentication.
The server then generates the same hash using the copy of the user's password stored in its security accounts database and compares this hash with the one received from the client. If the two hashes match, the client is allowed to download the requested resource from the server.
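The hash computation defined in RFC 2617 (in its simplest form, without the optional qop and cnonce fields) can be sketched in Python; all of the values shown are illustrative:

import hashlib

def md5_hex(text):
    return hashlib.md5(text.encode()).hexdigest()

username, realm, password = "alice", "example.com", "s3cret"
method, uri = "GET", "/protected/page.html"
nonce = "dcd98b7102dd2f0e8b11d0f600bfb0c093"     # sent by the server in the 401 response

ha1 = md5_hex(f"{username}:{realm}:{password}")  # hash of the user's credentials
ha2 = md5_hex(f"{method}:{uri}")                 # hash of the request
response = md5_hex(f"{ha1}:{nonce}:{ha2}")       # value the client returns to the server

# The server repeats the same computation from its stored copy of the password
# (or of HA1) and grants access if the two response values match.
print(response)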
Digest authentication. How Digest authentication works.
Digest authentication is susceptible to replay attacks, but this can be minimized by time-limiting nonce values or using different values for each connection. While Digest authentication is more secure than Basic authentication, it is not as secure as Kerberos authentication or authentication based on client certificates. Another issue with the security of Digest authentication is that it requires passwords to be retrievable as cleartext.
See Also: authentication, Basic authentication, challenge response authentication, MD5, replay attack
DigiCrime (www.digicrime.com) is the brainchild of mathematician and computer scientist Kevin McCurley, and since 1996 this site has entertained the security community and informed the general public about potential issues in computer and online security. The site humorously promotes itself as offering "a full range of criminal services and products to our customers." These "services" include identity theft, money laundering, airline ticket rerouting, telephone wiretapping, spamming, and more. The idea behind these "services" is to educate and inform the general public of potential dangers in blindly trusting online transactions and to challenge the security community and software vendors to take these dangers more seriously. The site includes a community of real individuals with tongue-in-cheek titles like Director of Disinformation, Chief of Insecurity, Illegal Counsel, and Chief Arms Trafficker, many of whom are security professionals or cryptography experts and who help contribute to the site.
Sometimes simply called certificates, digital certificates are specially formatted digital information that is used in secure messaging systems that employ public key cryptography. Certificates are used to verify the identity of the message sender to the recipient by generating a digital signature that can be used to sign the message. They are also used for providing the recipient of an encrypted message with a copy of the sender's public key.
Digital certificates are issued by a certificate authority (CA) that is trusted by both the sender and recipient. The most common format used for certificates is the X.509 standard, which contains the user's name and public key, a serial number, expiration date, the name and digital signature of the CA that issued the certificate, and other information. When a recipient receives a message with a certificate attached, the recipient uses the CA's public key to verify the CA's signature on the certificate and thereby confirm the sender's identity.
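As an illustration, the following Python sketch reads the main fields of an X.509 certificate, assuming a recent version of the third-party cryptography package and a PEM-encoded certificate stored in a file named server.pem (illustrative):

from cryptography import x509

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

print(cert.subject)             # whom the certificate identifies
print(cert.issuer)              # the CA that signed it
print(cert.serial_number)       # serial number assigned by the CA
print(cert.not_valid_after)     # expiration date
public_key = cert.public_key()  # the subject's public key, used to verify
                                # the subject's signatures or encrypt messages to it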
See Also: digital signature, public key cryptography, X.509
See: digital watermarking
Traditional forensic methods used in criminal investigations include looking for footprints, fingerprints, hair, fiber, and other physical evidence of an intruder's presence. In computer crime, the evidence left behind is of a digital nature and can include data on hard drives, logs of Web server visits or router activity, and so on. Digital forensics is the science of mining computer hardware and software to find evidence that can be used in a court of law to identify and prosecute cybercriminals.
Many companies have deployed an intrusion detection system (IDS) on their network to monitor and detect possible breaches of network security. When a breach has occurred, these companies may not have the necessary expertise to determine the extent of the breach or how the exploit was performed. In serious cases in which significant business loss has resulted, companies must establish an evidence trail to identify and prosecute the individuals responsible. In such cases, companies may enlist the services of digital forensic experts who can send in an incident response team to collect evidence, perform a "postmortem" by piecing together the evidence trail, help recover deleted files and other lost data, and perform "triage" to help restore compromised systems as quickly as possible.
Examples of companies offering digital forensics services include @stake, Computer Forensics, DigitalMedix, ESS Data Recovery, Guidance Software, Vigilinx, and others. Computer Sciences Corporation and Veridian share a significant portion of the digital forensics market for the U.S. federal government.
See Also: intrusion detection system (IDS)
The Digital Millennium Copyright Act (DMCA) was enacted in 1998 as a vehicle for complying with treaties of the World Intellectual Property Organization (WIPO), a United Nations agency based in Geneva, Switzerland. The provisions of the DMCA include the following:
The DMCA has been widely praised by the entertainment and software industry but generally criticized by academics, librarians, and civil libertarians as part of larger issues surrounding the purposes and means of implementing DRM technologies in the consumer marketplace. A notable application of the DMCA was the arrest in 2001 of Russian programmer Dmitry Sklyarov, who was apprehended after a Defcon conference at which he presented a paper on how to circumvent copyright protection technology built into Adobe eBooks software.
See Also: Digital Rights Management (DRM)
The last decade has seen the advent of consumer digital information products and services such as CD audio, DVD video, CD- and DVD-ROM software, and digital television. The potential for making illegal copies of digital products using standard computer hardware and software or through online file-sharing services has been viewed by the entertainment and software industries as potentially reducing their revenues by opening a floodgate of copyright circumvention and software piracy. This danger is enhanced by the nature of digitized information, which allows such copies to contain exactly the same information as the original.
In response to this issue, companies such as Microsoft and others have developed various Digital Rights Management (DRM) technologies to protect commercial digital products and services. These technologies may control access to such products and services by preventing the sharing or copying of digital content, limiting the number of times content can be viewed or used, and tying the use or viewing of content to specific individuals, operating systems, or hardware.
There are two general methods for implementing DRM:
Various industry groups are working toward DRM standards, including the Internet Engineering Task Force (IETF), the MPEG Group, the OpenEBook Forum, and several others. Microsoft Corporation's next-generation secure computing base, part of its Trustworthy Computing initiative, includes the incorporation of DRM technologies into the Microsoft Windows operating system platforms.
Critics of the encryption approach to DRM suggest that such technologies weaken the privacy of consumers by requiring them to provide personal information before content can be viewed or used. Such collected information may then be used to profile consumer purchase patterns for marketing purposes and price discrimination, to limit access to certain kinds of material to certain classes of consumers, or to push users toward a pay-per-view licensing model to enhance the revenue stream for content providers.
For More Information
For information about Microsoft Windows Media DRM, see www.microsoft.com/windows/windowsmedia/drm.aspx.
See Also: digital watermarking, next-generation secure computing base
Digital signatures are a way of authenticating the identity of creators or producers of digital information. A digital signature is like a handwritten signature and can have the same legal authority in certain situations, such as buying and selling online or signing legal contracts. Digital signatures can also be used to ensure that the information signed has not been tampered with during transmission or repudiated after being received.
Digital signatures are dependent on public key cryptography algorithms for their operation. There are three public key algorithms that are approved Federal Information Processing Standards (FIPS) for purposes of generating and validating digital signatures:
To create a digital signature, the document or message to be transmitted is first mathematically hashed to produce a message digest. The hash is then encrypted using the sender's private key to form the digital signature, which is appended to or embedded within the message.
Once the encrypted message is received, it is decrypted using the sender's public key. The recipient can then hash the original message and compare it with the hash included in the signature to verify the sender's identity. Nonrepudiation is guaranteed by the fact that the sender's public key has itself been digitally signed by the certificate authority (CA) that issued it.
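The sign-then-verify sequence can be sketched in Python, assuming the third-party cryptography package; in practice the sender's private key would be protected by the operating system or a smart card rather than generated on the spot:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"Pay the bearer one hundred dollars."

# Sender: hash the message and encrypt the digest with the private key.
signature = private_key.sign(message, padding.PKCS1v15(), hashes.SHA256())

# Recipient: recompute the hash and check it against the signature using the
# sender's public key (normally taken from the sender's digital certificate).
try:
    public_key.verify(signature, message, padding.PKCS1v15(), hashes.SHA256())
    print("Signature valid: the message is authentic and unmodified.")
except InvalidSignature:
    print("Signature invalid: the message was altered or the key is wrong.")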
Digital signature. Creating a digital signature.
Digital signatures are not the same as digital certificates. A digital certificate is like a driver's license: a credential you can use to identify yourself, issued by a trusted third party, in this case a certificate authority (CA). Your digital certificate contains your public key, which others can use to send you encrypted messages and to verify your signatures; the matching private key, which you keep secret, is what actually creates your digital signature, so a digital certificate is a prerequisite for digitally signing documents.
See Also: certificate authority (CA), digital certificate, Digital Signature Algorithm (DSA), Digital Signature Standard (DSS), Elliptic Curve Digital Signature Algorithm (ECDSA), hashing algorithm, public key cryptography, RSA
The Digital Signature Algorithm (DSA) is a public key algorithm used for creating digital signatures to verify the identity of individuals in electronic transactions. Signatures created using DSA can be used in place of handwritten signatures in scenarios such as legal contracts, electronic funds transfers, software distribution, and other uses. Although DSA is a public key algorithm, it is used mainly for digitally signing documents and not for encrypting them.
DSA is patented by the National Institute of Standards and Technology (NIST) and forms the basis of the Digital Signature Standard (DSS).
See Also: digital signature, Digital Signature Standard (DSS), Federal Information Processing Standard (FIPS), National Institute of Standards and Technology (NIST), public key cryptography
The Digital Signature Standard (DSS) is defined in Federal Information Processing Standard (FIPS) 186, first issued in 1994 and since revised as FIPS 186-2. The goal of the standard is to promote electronic commerce by providing a way for documents and messages to be electronically signed using digital signatures. DSS employs two cryptographic algorithms for this purpose:
DSS is widely used in federal government and defense agencies for transmission of unclassified information.
See Also: digital signature, public key cryptography, Secure Hash Algorithm (SHA-1)
Digital watermarking enables digital content producers to insert hidden information in digital products and data streams to prevent them from being illegally used or copied. Such watermarks can be embedded into any form of commercially sold digital content, including audio CDs, DVD movies, software on CD- or DVD-ROMs, streaming audio and video, digital television, and so on. Watermarks can include information for copyright protection and authentication information to control who can use content and how such content can be used.
There are two basic types of digital watermarks: visible and invisible. Visible watermarks resemble those formerly used to identify vendors of high-quality bond paper and are generally used to discourage copying of digital content. Visible watermarks do not prevent such copying from occurring, but instead may deter such copying by potentially providing legal evidence of copyright infringement through illegal copying of digital media. Invisible watermarks, on the other hand, can be used both for legal evidence and to implement invisible copy-protection schemes for media players designed to read them.
Most watermarking techniques involve manipulating digital content either in the spatial domain or in the frequency domain, the latter typically by means of a mathematical procedure called the fast Fourier transform (FFT). Images of text can also be watermarked by subtly altering line and character spacing according to fixed rules.
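The simplest spatial-domain approach, least-significant-bit (LSB) embedding, can be sketched in Python with the numpy package; it is shown here only to illustrate the idea of hiding data in pixel values, and commercial watermarking schemes are considerably more robust:

import numpy as np

def embed_lsb(pixels, watermark_bits):
    # Overwrite the least significant bit of each pixel with a watermark bit.
    marked = pixels.copy()
    flat = marked.ravel()
    n = len(watermark_bits)
    flat[:n] = (flat[:n] & 0xFE) | watermark_bits
    return marked

def extract_lsb(pixels, length):
    # Read the embedded bits back out of the least significant bits.
    return pixels.ravel()[:length] & 1

image = np.random.randint(0, 256, size=(8, 8), dtype=np.uint8)   # stand-in image
mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)        # watermark bits

watermarked = embed_lsb(image, mark)
assert np.array_equal(extract_lsb(watermarked, len(mark)), mark)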
A leading provider of digital-watermarking technologies and products is Digimarc (www.digimarc.com).
Another name used to refer to this procedure is digital fingerprinting.
See Also: Digital Rights Management (DRM)
Digital information is the lifeblood of today's companies, and loss of data means loss of business services and loss of revenue. Disasters that can destroy data can take many forms:
Guarding against such disasters is important, but it's prudent to expect the worst and plan accordingly. Essential to the success of any company's IT (information technology) operations is a disaster recovery plan (DRP) to enable it to recover quickly after a disaster and restore services to customers. This can range from a simple plan to create a backup of the server every night in a small company, to the kind of technological redundancies and procedures that enabled Wall Street to recover from 9/11 after only a week. Clearly, a DRP is not a bandage you apply after things go wrong but a fundamental business practice a company should consider from day one of implementing its IT systems.
Creating a good DRP begins with risk assessment and planning. Risk assessment determines the likelihood and scale of potential disasters, which aids in planning which technologies to implement and how much to budget. Planning involves determining which systems and data need to be backed up, how often they should be backed up, and where backed up data should be securely stored.
Selecting an appropriate backup technology, and developing a suitable backup plan for using it, is important to avoid excessive costs and ensure reliable recovery after a disaster. Backup technologies can include tape backup systems, recordable CDs and DVDs, backup to remote storage area networks (SANs) over secure virtual private network (VPN) connections, and backup to service provider networks. Outsourcing of backup needs is another option a company may consider if its IT department is small and can't manage such needs. The addition of hot-standby systems can greatly simplify the recovery process if financially feasible.
If your company uses IT services from service providers, it is essential to have service level agreements (SLAs) from these providers to help guarantee business continuity after a disaster. Establishing suitable information security policies and procedures is also essential to making a DRP work.
Once your DRP is up and running, it needs to be regularly tested and monitored to be sure it works. Verification of backups ensures information truly is being backed up, and periodic restores on test machines ensure that the DRP will work should it ever need to be implemented. If such monitoring and testing find weaknesses or problems in your plan, you need to modify the plan accordingly.
Having an external audit of your DRP by a company with expertise in this area can also be valuable. ISO 17799 is a recognized standard in IT security best practices, and auditing on this basis can be advantageous on a legal liability basis if your company provides information services to others.
Another essential component of a DRP is a business resumption plan (BRP), sometimes called a business continuity plan (BCP). This is a detailed step-by-step plan on how to quickly resume normal business after a disaster occurs.
Fundamentally, however, your DRP will never be fully tested until a significant disaster occurs.
See Also: backup plan, business resumption plan (BRP)
Discretionary access control (DAC) is one of two basic approaches to implementing access control on computer systems, the other being mandatory access control (MAC). DAC specifies who can access a resource and which level of access each user or group of users has to the resource. DAC is generally implemented through the use of an access control list (ACL), a data structure that contains a series of access control entries (ACEs). Each ACE includes the identity of a user or group and a list of which operations that user or group can perform on the resource being secured.
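A DACL-style check might be modeled with a short Python sketch; the trustee names and operations are illustrative only:

from dataclasses import dataclass

@dataclass
class AccessControlEntry:
    trustee: str          # user or group the entry applies to
    allowed: frozenset    # operations that trustee may perform

payroll_acl = [
    AccessControlEntry("Administrators", frozenset({"read", "write", "delete"})),
    AccessControlEntry("PayrollClerks", frozenset({"read", "write"})),
    AccessControlEntry("Everyone", frozenset()),   # explicitly no access
]

def access_check(acl, user, groups, operation):
    # Grant access if any entry for the user or one of its groups allows it.
    for ace in acl:
        if ace.trustee == user or ace.trustee in groups:
            if operation in ace.allowed:
                return True
    return False

print(access_check(payroll_acl, "bob", {"PayrollClerks"}, "write"))   # True
print(access_check(payroll_acl, "eve", {"Everyone"}, "read"))         # False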
Most computing platforms, including Microsoft Windows, Linux, and different flavors of UNIX, implement some form of DAC mechanism for controlling access to file system and other types of resources.
See Also: access control, access control entry (ACE), access control list (ACL), mandatory access control (MAC)
Discretionary access control lists (DACLs) are one of two forms of ACLs, the other being system access control lists (SACLs). DACLs are the more general of the two types and are assigned to file system and other computing resources to specify who can access them and which level of access each user or group has. In fact, when the term ACL is used in discussion, it can usually be assumed to mean DACL unless system auditing is involved. Using DACLs, an operating system can implement discretionary access control (DAC) for enforcing what users can or cannot do with system resources.
See Also: access control, access control list (ACL), discretionary access control (DAC), system access control list (SACL)
Classic DoS attacks are one-to-one attacks in which a more powerful host generates traffic that swamps the network connection of the target host, thus preventing legitimate clients from accessing network services on the target. The distributed denial of service (DDoS) attack takes this one step further by amplifying the attack manyfold, with the result that server farms or entire network segments can be rendered useless to clients.
Distributed denial of service. How a DDoS attack works.
DDoS attacks first appeared in 1999, just three years after DoS attacks using SYN flooding brought Web servers across the Internet to their knees. In early February 2000, a major attack took place on the Internet, bringing down popular Web sites such as Amazon, CNN, eBay, and Yahoo! for several hours. A more recent attack of some significance occurred in October 2002 when 9 of the 13 root DNS servers were crippled by a massive and coordinated DDoS attack called a ping flood. At the peak of the attack, some of these servers received more than 150,000 Internet Control Message Protocol (ICMP) requests per second. Fortunately, because of caching by top-level Domain Name System (DNS) servers and because the attack lasted only a half hour, traffic on the Internet was not severely disrupted by the attack.
The theory and practice behind performing DDoS attacks is simple:
Almost all computer platforms are susceptible to being hijacked as zombies to conduct such an attack, including Solaris, Linux, Microsoft Windows, and flavors of UNIX. The best way to defend against such attacks involves modifying router configurations at Internet service providers (ISPs), specifically:
For these practices to be most effective, the cooperation of the whole Internet community is required.
For More Information
A good resource on DDoS is the staff page of Dave Dittrich, senior security engineer at the University of Washington; see staff.washington.edu/dittrich/misc/ddos/.
See Also: denial of service (DoS), SYN flooding, zombie
See: Department of Defense Information Technology Security Certification and Accreditation Process (DITSCAP)
See: Digital Millennium Copyright Act (DMCA)
See: demilitarized zone (DMZ)
See: DNS spoofing
DNS spoofing involves feeding DNS servers false information in order to impersonate legitimate DNS servers. DNS spoofing can enable malicious users to deny access to authentic DNS servers, redirect users to different Web sites, or collect and read e-mail addressed to or sent from a given domain.
There are two basic approaches to DNS spoofing:
The general approach to prevent such attacks includes patching DNS servers with the latest fixes, restricting zone transfers and dynamic updates, and turning off recursion if necessary. However, the real solution to the problem of DNS spoofing involves developing cryptographically authenticated DNS and deploying it across the Internet.
DNS spoofing can also be considered a form of denial of service (DoS) attack since it prevents users from accessing genuine DNS servers.
See Also: denial of service (DoS), spoofing
See: denial of service (DoS)
The dot bug vulnerability first appeared in 1997 when someone discovered that by appending two extra periods to the end of a Uniform Resource Locator (URL) requesting an Active Server Pages (ASP) file from a Microsoft Internet Information Server (IIS) 3 Web server, you could view the ASP code instead of executing it. For example, browsing the URL http://www.northwindtraders.com/somepage.asp would cause the page to execute normally, while browsing http://www.northwindtraders.com/somepage.asp.. would display the ASP code instead. Other exploits with similar effect soon followed, including substituting %2e for the period in somepage.asp and appending ::$DATA to the end of the URL. A similar dot bug vulnerability that allowed scripts residing in cookies to be run and to read information in other cookies was discovered in Microsoft Internet Explorer in February 2002.
Similar vulnerabilities have been found in other platforms and products. For instance, a dot bug vulnerability just like one found in ASP was later discovered in PHP, another scripting platform for creating dynamic Web sites. A vulnerability was also discovered in the Hypertext Transfer Protocol (HTTP) server on the IBM AS/400 platform, whereby appending a forward slash (/) to the end of a URL would display the source code of the page.
Improved coding practices have generally resulted in fewer such bugs in the last few years.
See Also: vulnerability
See: Data Protection API (DPAPI)
See: Digital Rights Management (DRM)
See: disaster recovery plan (DRP)
See: Digital Signature Algorithm (DSA)
Dsniff is a collection of tools used on UNIX/Linux platforms developed by Dug Song of the Center for Information Technology Integration at the University of Michigan. These tools are popular with network security professionals and hackers alike and in version 2.3 of Dsniff consist of the following:
For More Information
See monkey.org/~dugsong/dsniff for more information.
See Also: sniffer
See: Digital Signature Standard (DSS)
Packet filtering is used by routers and firewalls for filtering out undesired packets. Early routers employed static packet filtering, commonly called packet filtering, which allows routers to be manually configured to allow or block incoming or outgoing packets based on Internet Protocol (IP) address and port information found in packet headers. Dynamic packet filtering takes this a step further by opening ports only when required and closing them when no longer needed. Dynamic packet filtering thus minimizes exposed ports and provides better security than static filtering.
Dynamic packet filtering is managed by creating policies (rules) that specify when and for how long different ports should be opened or closed. All packets passing through the router or firewall are compared with these rules to determine whether to forward or drop them.
In addition to examining the packet header, some firewalls implementing dynamic packet filtering can inspect deeper layers of the TCP/IP protocol within each packet to create a state table containing information about each established connection. This allows them to filter packets not only by rules but also by state information concerning previous packets for that connection. This process is commonly called stateful inspection.
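The bookkeeping behind this can be sketched in Python; the class, field names, and port policy are illustrative only:

class DynamicPacketFilter:
    def __init__(self, allowed_outbound_ports):
        self.allowed_outbound_ports = allowed_outbound_ports
        self.state_table = set()   # established (src, sport, dst, dport) tuples

    def outbound(self, src, sport, dst, dport):
        # Outbound packet: allow it if policy permits, and record the connection.
        if dport in self.allowed_outbound_ports:
            self.state_table.add((src, sport, dst, dport))
            return "forward"
        return "drop"

    def inbound(self, src, sport, dst, dport):
        # Inbound packet: allow only replies that match an existing connection.
        if (dst, dport, src, sport) in self.state_table:
            return "forward"
        return "drop"              # no matching state, so the port stays closed

fw = DynamicPacketFilter(allowed_outbound_ports={80, 443})
fw.outbound("10.0.0.5", 49152, "203.0.113.7", 443)         # client opens a session
print(fw.inbound("203.0.113.7", 443, "10.0.0.5", 49152))   # matching reply: "forward"
print(fw.inbound("203.0.113.9", 443, "10.0.0.5", 49152))   # unrelated packet: "drop"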
Microsoft Internet Security and Acceleration Server (ISA Server) supports policy-based dynamic packet filtering of IP traffic to enhance the security of your network. Most commercial firewalls also support some kind of dynamic packet filtering in their operation.
See Also: firewall, packet filtering, stateful inspection
See: adaptive proxy