Jeffrey Richter and Luis Felipe Cabrera
A File System for the 21st Century: Previewing the Windows NT 5.0 File System
This article assumes you're familiar with C++ and Win32.
Code for this article: Nov98NTFS.exe (5KB)
Jeffrey Richter wrote Advanced Windows, Third Edition (Microsoft Press, 1997) and Windows 95: A Developer's Guide (M&T Books, 1995). Jeff can be reached at www.JeffreyRichter.com.
Luis Felipe Cabrera is an architect in the Windows NT Base Development group at Microsoft. His responsibilities are in
Windows NT 5.0 storage management.
Many of your
programming tasks will be simplified when you take advantage of the new innovations in the Windows NT® 5.0 file system (NTFS). Let's go on a whirlwind tour of these new features. Remember, we are discussing software that is in beta, so everything is subject to change. Please check Microsoft's most recent documentation before writing any code based on this information.
Let's begin with an overview of the NTFS file system layout on disk. While this information is programmatically off-limits to the application developer, a high-level explanation will make it much easier for you to understand many of the new NTFS features.
At the heart of the NTFS file system is a special file called the master file table (MFT). This file is created when you format a volume for NTFS. The MFT consists of an array of 1KB entries; each entry identifies a single file on the volume. When you create a file, NTFS must first locate an empty entry within the MFT array (growing the array if necessary); then it fills the 1KB entry with information about the file. A file's information consists of a collection of attributes. Figure 1 shows a list of standard attributes that can be associated with a single file (or directory).
When a file is created, the system creates the set of attributes for the MFT's file entry and attempts to place them inside the 1KB block. But there are two problems: most attributes are variable length, and many attributes (like Name, Data, and Named Data) can be much larger than 1KB. So NTFS can't just simply throw all the attributes inside an MFT entry. Instead, NTFS must examine the attributes; if the length of an attribute's value is small, the attribute's value is placed inside the MFT entry. This is called a resident attribute. If the attribute's value is large, then the system places the attribute value in another location on the disk (making this a nonresident attribute), and simply places a pointer to the attribute's value inside the MFT entry.
Today, everybody has lots and lots of small files stored throughout their hard drives. We all have lots of shortcut (.LNK) files and probably lots of DESKTOP.INI files sprinkled around. Because of the way that NTFS stores attributes in an MFT entry, it is possible for all of the attributes of a single file, including its Data attribute, to
be resident. This greatly improves performance when accessing small files. In addition, NTFS also stores the
most common attributes of a file in the directory entry
that represents the file. This means that when the system does a FindFirstFile/FindNextFile operation to retrieve
a file's name or basic attributes, the data for these attributes is found in the directory entry, so no other disk access is needed.
Prior to Windows NT 4.0, an MFT entry in NTFS was 4KB in size. This, of course, allowed files with slightly more data to have their data resident. In Windows NT 4.0, Microsoft pared the size of an MFT entry to 1KB. Microsoft studied the number of files and their sizes on many typical systems and saw that NTFS was wasting a lot of space in MFT entries and that it would be more efficient to make the MFT entry 1KB.
Now let's go over some of the features offered by
NTFS that software developers can (and should) take advantage of.
It's little known that NTFS allows a single file to have multiple data streams. This feature has actually been in NTFS since its very first version (in Windows NT 3.1) but has been downplayed by Microsoft. This is unfortunate because streams can be incredibly useful in many situations.
For instance, let's say that you are developing a bitmap-editor application. When the user saves their data, you create a BMP file on the hard disk. You'd also like to store a thumbnail version of the image as well. Thumbnails are typically stored at the end of a file, after the main bitmap image. To show the thumbnail image, you must open the file, parse the header information, seek to the bytes following the main image's data, read in the thumbnail image's data, and then display the thumbnail. You could store the thumbnail data in a separate file, but it's not a good idea because it's too easy for the main image file and the thumbnail file to get separated.
An NTFS named stream offers the best of both worlds. When your application creates its file, you can write the main image's data to the default (unnamed) stream and then create another (named) data stream inside the same file for the thumbnail image's data. You have only one file, but it contains two data streams.
To understand how this works, let's perform an experiment. On a Windows NT-based machine (any version) open a command shell. Then change to an NTFS partition and enter the following:
C:\>ECHO "Hi Reader" > XX.TXT:MyStream
When you execute this command, the system creates a file called XX.TXT. This file contains two streams: an unnamed stream that contains 0 bytes and a named stream (called MyStream) that contains the text "Hi Reader". If you haven't guessed by now, you access a file's named stream by placing a colon after the file name, followed by the name of the stream. As with file names, Win32® functions treat stream names as case-preserved, and searches are case-insensitive.
Unfortunately, the tools supplied with the system treat streams as second-class citizens at best. For example, execute the following command:
C:\>DIR XX.TXT
Volume in drive C is Wizard
Volume Serial Number is 40E5-92D4
Directory of C:\
03/18/98 08:36a 0 XX.TXT
1 File(s) 0 bytes
0 Dir(s) 3,399,192,576 bytes free
As you can see, DIR reports that the file size is 0 bytes, but this is not true. The DIR command only reports to the user the size of a file's unnamed stream; the sizes of named streams within the file are not shown to the user. By the way, Explorer also reports a file size of 0 bytes. This allows for some geeky party games where you can allocate a large stream in a file on a friend's disk. The friend won't be able to discover where all the disk space has gone because all of the tools report that the file occupies only 0 bytes! When working with streams, remember that it's only the tools that don't treat streams with the respect that they deserve; NTFS has full support for streams (they even count against your storage quota).
Now, to see the contents of the stream, execute this command:
C:\>MORE < XX.TXT:MyStream
Here's another way to use streams. Say that you are writing a word-processing application. When the user opens up an existing document, you will probably create a temporary file that holds all of the user's changes. Then, when the user decides to save the changes, you will write all of the updated information to the temporary file, delete the original file, and finally move the temporary file back to the original file's location while renaming the file.
This sounds fairly simple and straightforward, but you're probably forgetting a few things. The final file should have the same creation timestamp as the original, so you'll have to fix that. The final file should also have the same file attributes and security information as the original. It is very easy to miss properly updating some of these attributes during this file-save operation.
If you use streams, all of these problems go away. All streams within a single file share the file's attributes (timestamp, security, and so on). You should revise your application so that the user's temporary information is written to a named stream within the file. Then, when the user saves the data, rename the temporary named data stream to the unnamed data stream, and NTFS will delete the old unnamed data stream and do the rename in an all-or-nothing manner. You won't have to do anything to the file's attributes at all; they'll all just be the same.
Before we leave streams, let us just point out a few more things. First, if you copy a file containing streams to a file system that doesn't support streams (like the FAT file system used on floppy disks), only the data in the unnamed stream is copied over; the data in any named streams does not get copied.
Second, named data streams can also be associated with a directory. Directories never have an unnamed data stream associated with them but they certainly can have named streams. Some of you may be familiar with the DESKTOP.INI file used by the Explorer. If the Explorer sees this file in a directory, it knows to load a shell namespace extension and allows the shell namespace extension to parse the contents of the directory. The system uses this for folders such as My Documents, Fonts, Internet Channels, and many more. Since the DESKTOP.INI file describes how the Explorer should display the contents of a directory, wouldn't it make more sense for Microsoft to place the DESKTOP.INI data into a named stream within a directory?
The reason Microsoft doesn't do this is backward compatibility. Streams are implemented only on NTFS drives; they do not exist on FAT file systems or on CD-ROM drives. For the same reason, streams may not be good for your application. But if your application can require NTFS, you should certainly take advantage of this feature.
The code in Figure 2 demonstrates how an application can work with streams. The code is well-commented, so we won't describe it here. After you compile the code, step through it in the debugger. As you reach each TEST comment line, execute what it says in a command shell to see the results.
Suppose you have a header file that you include in all of your programming projects. Every time you create a new Visual C++® project, you copy this header file into the project's source code directory, add the header file to your project, and then build the project. There are two problems with this. If you have lots and lots of projects on your drive, this means that you have lots and lots of copies of this same header file sprinkled all over the drive; each copy taking up valuable disk space. Also, you may occasionally make changes to this header file; when you do, you have to find every copy of this file on the drive and replace the existing files with the updated copy. This is very time consuming and really inconvenient.
NTFS hard links solve both of these problems. A hard link allows a single file to have multiple (path) names within a single volume. You can create your common header file and place it in some directory. Then, instead of copying the file to a new project's directory, you tell the system to create a hard link to the file. At the time of writing, Windows NT 5.0 had no user tool for creating a hard link; there is a new function exported from Kernel32.DLL that allows you to programmatically create a hard link:
BOOL CreateHardLink(
   LPCTSTR lpFileName,                         // new link's path name
   LPCTSTR lpExistingFileName,                 // existing file's path name
   LPSECURITY_ATTRIBUTES lpSecurityAttributes);
When calling CreateHardLink, you must pass it the path name of an existing file and the path name of a nonexistent file. This function finds the existing file's entry in the MFT and adds another file name attribute (whose data identifies the new file's name) to this entry. It also increments the entry's hard link count attribute. If the lpSecurityAttributes parameter is not NULL, the security descriptor associated with the file is changed to the security descriptor passed in.
After CreateHardLink returns, the directory where you created the hard link will show a new file name. Open this file to access the data within the original file. In fact, you can create several hard links to a single file. The file is actually on the drive once but has several path names on the drive that all access the exact same file data. When you open the file by one path name and change its contents, and then open the same file later using one of its other path names, you will see the change.
Since all of these hard links are contained inside a single MFT entry, all of them share the exact same attributes (timestamp, security, streams, and file sizes). We mentioned that every time you create a hard link, the system adds a new Name attribute and increments a reference count to the file's MFT entry. Each time you delete a hard link, you are simply removing the corresponding Name attribute and decrementing this reference count. When you delete the last hard link to the file, the reference count goes to 0. Then NTFS will actually delete the file's contents and free the file's MFT entry. You can determine how many hard links a file has by calling GetFileInformationByHandle and examining the BY_HANDLE_FILE_INFORMATION structure's nNumberOfLinks member.
Like streams, hard links have been a part of NTFS since its inception because the POSIX subsystem required them. The new CreateHardLink function now exposes this capability to programmers using Win32. You should note that hard links are for files only; you cannot create a hard link of a directory. Figure 3 shows a simple utility that allows you to easily create hard links on your NTFS volume.
Since Windows NT 3.51, NTFS has offered the ability to compress file streams. If there were no drawbacks to stream compression, it wouldn't be optional; NTFS would compress streams all the time. But there is a downside: CPU processing cost. To compress a sequence of bytes, an algorithm must make a pass over the data and produce an alternate set of bytes. If this algorithm takes a long time, accessing a stream's data is too slow and the benefit of compression comes with too high a cost. NTFS had to implement compression in such a way that it saves a reasonable amount of disk space without compromising
I/O speed too much. Keep this in mind as we explain NTFS's compression algorithm.
To understand compression, imagine that you have an existing file that contains a 120KB stream. You first open the stream (with CreateFile) and then tell the system to compress the stream by calling DeviceIoControl:
HANDLE hstream = CreateFile("SomeFile:SomeStream",
   GENERIC_READ | GENERIC_WRITE, 0, NULL, OPEN_EXISTING, 0, NULL);
USHORT uCompressionState = COMPRESSION_FORMAT_DEFAULT;
DWORD dw;
DeviceIoControl(hstream, FSCTL_SET_COMPRESSION, &uCompressionState,
   sizeof(uCompressionState), NULL, 0, &dw, NULL);
To compress the stream, NTFS logically divides the data stream into a set of compression units. A compression unit is 16 clusters long (32KB, assuming 2KB per cluster). Each compression unit is read into memory, the algorithm is run over the data, the data is compressed, and the resulting data is written back out to disk. If the compressed data saves at least one cluster, then the no-longer-needed clusters are freed and given back to the file system. If the compression doesn't save any clusters, then the original data is left on the disk uncompressed.
So, for our 120KB stream, it is possible that the first 32KB compresses down to 20KB, saving 12KB (6 clusters); the second 32KB might not compress down at all, saving 0KB; and the third 32KB might compress down to 24KB, saving 8KB (4 clusters). So far, NTFS has compressed the first 96KB of the stream. The stream contains another 24KB. Since 24KB is smaller than a compression unit (32KB), NTFS doesn't even touch the end of the file at all; it simply stays on the disk uncompressed. As NTFS compresses this file, it builds a table that looks much like Figure 4.
Now, you might look at this compression algorithm and think that it could compress the data much better. For example, the stream would be much smaller if NTFS would compress the whole 120KB stream and then write the compressed data back to the stream. But there is an enormous cost in performance associated with this. In addition, if NTFS did this and an application wanted to randomly seek to an offset that is 40KB into the stream and start reading, NTFS would have to decompress the whole stream on the fly in order to accommodate the application's request.
By breaking up the file into compression units, NTFS can read the clusters associated with the second compression unit within the stream, decompress this unit into the system's cache, and return the decompressed bytes to the application. The end result is a nice tradeoff between compression and speed.
NTFS can also compress streams as you write to them. When an application writes data to a stream, the data is actually cached in memory and is not immediately written to disk. Periodically, the system's lazy-writer thread awakes, figures out how the data bytes fit into a compression unit, compresses the data, and finally writes the compression unit out to the disk.
You can determine whether a stream is compressed by calling DeviceIoControl passing the FSCTL_GET_COMPRESSION control code. You can also determine if any stream within a file has ever been compressed by calling GetFileAttributes and checking the FILE_ATTRIBUTE_COMPRESSED bit flag. If you want to figure out whether a specific stream is compressed, call GetFileInformationByHandle and check the FILE_ATTRIBUTE_COMPRESSED bit flag. The GetFileSize function returns the full size of a stream assuming no compression, while the GetCompressedFileSize function returns the actual number of bytes that a file's stream requires.
You can also call DeviceIoControl passing FSCTL_SET_COMPRESSION for a directory. When you do this, any new file streams and subdirectories created within this directory are automatically compressed. No change occurs to any existing file streams or directories; you would have to explicitly call DeviceIoControl with FSCTL_SET_COMPRESSION to compress any existing streams or directories. Finally, you can tell if the file system supports compression by calling GetVolumeInformation and checking to see if the FS_FILE_COMPRESSION bit flag is on.
Sparse streams are one of our absolute favorite features of NTFS 5.0. With sparse streams, you can have really large streams with enormous holes. These "holes" don't require any disk space. Say that you need to implement a persistent queue. Client applications write request records to the end of the queue, and the server application extracts the data request records from the front of the queue.
To implement this without sparse streams, allocate an arbitrary block of a file for the queue, and use the storage in a circular fashion. When you attempt to write to the end of the storage, you'll have code that wraps pointers around to the beginning of the block, and you hope that the server has already read the request record at the beginning so that a client's request is not destroyed.
This is much easier with sparse streams. Create a new file on the disk. Then write client requests to the end of this file (we're assuming you're using the file's default unnamed stream here, but it could be any stream within the file). The server application starts reading request records from offset 0 in the stream and advances the offset just before each record is processed. But the server has another task. Just after the server advances its offset within the stream, the server calls a special function that tells NTFS to assume that all data in the stream from offset 0 to the new offset (minus 1) is not needed. When NTFS gets this call, it frees up the disk space that belongs to the beginning of the
file! This means that both the client's code and the server's code always advance the file offset. There is almost no possibility of wrapping around and losing a client's data request record.
Since a stream can hold as many as 16 billion billion bytes, it is very unlikely that you will ever need to wrap around within the stream. For example, if a record consisted of 1KB and clients added records at a rate of 100 per second, it would take more than five and a half million years before you reached the end of the stream!
The code in Figure 5 demonstrates manipulation of sparse streams. To make working with sparse streams easier, there's a C++ class that wraps the awkward Win32 calls. Build the program and step through it in a debugger to see what the various function calls return.
The program begins by creating a file that looks like it's 50MB but really contains just a single byte. It then clears the file's contents and adjusts the file size back to 0 bytes. The program then implements a sparse queue as discussed.
Internally, NTFS implements sparse streams exactly the same way it implements compressed streams. When NTFS is ready to write a compression unit's worth of data to the disk, it checks to see if all the bytes are zero (0) bytes. If the bytes are all zeroes, NTFS doesn't write any data to the disk. Remember the 120KB file we talked about before? Let's say that the second 32KB and the last 24KB of this stream are filled with nothing but zeroes. In this case, NTFS would create an internal table that looks something like Figure 6 .
In Figure 6, the stream is both compressed and sparse. You can certainly have a file that is sparse without using compression, but using both of these mechanisms, you can conserve huge amounts of disk space for really big files.
You can determine if a file system supports sparse streams by calling GetVolumeInformation and checking to see if the FILE_SUPPORTS_SPARSE_FILES bit flag is on. When you create a stream, it is not sparse by default, meaning that zero bytes are always written to clusters on the disk. For NTFS to treat it as a sparse stream, you must call DeviceIoControl on the stream, passing the FSCTL_SET_SPARSE control code. You can determine if any stream in a file is sparse by calling GetFileAttributes and checking the FILE_ATTRIBUTE_SPARSE_FILE bit flag; you can figure out if a specific stream is sparse by calling GetFileInformationByHandle and checking the same bit flag. Once a stream is sparse, it can never be converted back to a nonsparse stream; you must destroy the stream and create a new one. Just like compressed files, GetFileSize returns the logical size of a sparse stream, while GetCompressedFileSize returns the physical size of a sparse/compressed stream.
To tell NTFS that data within the stream is no longer necessary, an application calls DeviceIoControl passing the FSCTL_SET_ZERO_DATA control code, as well as the starting and ending offsets of the data to be freed. If you simply write zeroes to the stream, NTFS writes the zero bytes to the disk. If you want the clusters to be freed, you must use FSCTL_SET_ZERO_DATA. To convert an existing stream to a sparse stream, you must write code that scans a file for contiguous runs of zero bytes, then call DeviceIoControl with FSCTL_SET_ZERO_DATA for these runs. Also note that NTFS doesn't coalesce adjacent blocks that have been FSCTL_SET_ZERO_DATAed. This is why 0 is always specified as the starting offset when calling FSCTL_SET_ZERO_DATA in the persistent queue code.
When an application attempts to read from an offset within a sparse stream where no clusters exist, the file system knows to return zeroes into the buffer; an error does not occur and no disk activity occurs (this is consistent with C2 security). If you write bytes to an area of a stream where no clusters exist, NTFS will allocate a compression unit's worth of clusters from the disk (possibly less if compression is turned on). These clusters get allocated if you write just 1 byte, even if that byte is a zero byte.
For any given stream, you can determine which ranges within the file actually have clusters of disk space allocated for them. You do this by passing the FSCTL_QUERY_ALLOCATED_RANGES control code to DeviceIoControl. Figure 5 shows a proper call to this function.
Here is another example that demonstrates the coolness of sparse streams. Build and execute the following code:
DWORD dw;
HANDLE hfile = CreateFile("SparseFile", GENERIC_WRITE,
   0, NULL, CREATE_ALWAYS, 0, NULL);
DeviceIoControl(hfile, FSCTL_SET_SPARSE, NULL, 0, NULL,
   0, &dw, NULL);

// Move the file pointer out to 64GB (16 * 4GB) and set the
// end of file there.
LONG DistanceToMoveHigh = 16;
SetFilePointer(hfile, 0, &DistanceToMoveHigh, FILE_BEGIN);
SetEndOfFile(hfile);
CloseHandle(hfile);
After this code executes, take a look at this file in Explorer; you will see that it is 64GB in size even though your hard drive may be much smaller than that (my hard drive is only 4.5GB). In the Streams section, we mentioned a party game in which you create streams with lots of data in them, for which Explorer showed file sizes of 0. Now you have a new party game, where you can create streams with no data in them and Explorer shows enormous file sizes. What fun!
Encryption is another new feature of NTFS 5.0 that gives you the ability to protect your data from other users who have physical access to the machine. Encryption protects your data when people share a single computer, when your computer is stolen, or when your pesky coworker starts messing around on your machine. While encryption can't stop a user from opening your file, it does render the contents of the file's streams unintelligible.
NTFS encryption takes advantage of the CryptoAPI to create public keys. The encryption keys are stored in the nonpaged pool so that they are never written to disk, where they could possibly be "stolen." In addition, keys can be stored on secure devices such as smart cards. While files on a remote server can be encrypted, the data itself is not encrypted as it goes over the network. If this is of concern to you, you must use something like Secure Sockets Layer (SSL). Encryption can be performed on a per-file (all streams within the file) or per-directory basis, and the encrypting/decrypting of a stream's data is transparent to an application.
If an employee leaves the company or loses an encryption key, NTFS has built-in recovery support so that the encrypted data can be accessed. In fact, NTFS won't allow files to be encrypted unless the system is configured to have at least one recovery key. For a domain environment, the recovery keys are defined at the domain controller and are enforced on all machines within the domain. For home users, NTFS automatically generates recovery keys and saves them as machine keys. You can then use command-line tools to recover data from an administrator's account.
To determine if the file system supports encryption, call GetVolumeInformation and check to see if the FILE_SUPPORTS_ENCRYPTION bit flag is on. To encrypt or decrypt a file or directory, you simply call EncryptFile or DecryptFile. These functions operate on all streams within a file, or they turn encryption on/off for all files within a subdirectory. You can determine if a file is encrypted by calling GetFileAttributes and checking the FILE_ATTRIBUTE_ENCRYPTED bit flag.
Windows NT 5.0 comes with a command-line tool, Cipher.exe, that makes it easy for you to work with encrypted file streams. With Cipher, you can encrypt and decrypt files/directories and set the recovery policy. The new Win32 functions OpenEncryptedFileRaw, ReadEncryptedFileRaw, WriteEncryptedFileRaw, and CloseEncryptedFileRaw allow an application to open the encrypted contents of a file and read/write it exactly as is to somewhere else.
Reparse points, another new feature of NTFS 5.0, allow a piece of code to execute when a directory or file is opened. A reparse point is a system-controlled attribute that can be associated with any directory or file. The value of a reparse attribute is user-controlled data that can be up to 16KB. The reparse data begins with a 32-bit reparse tag (defined by Microsoft) indicating which file system filter is to be notified that the file or directory with the reparse attribute is being accessed. The file system filter can then execute any code to control access to the directory/file. Beyond the tag, the remaining reparse data can have any meaning to the file system filter. Since file system filters can totally change the appearance of a file's data, Windows NT only allows administrators to install new file system filters. If for some reason the system can't find the file system filter identified by a reparse tag, the directory/file cannot be accessed; however, it can always be deleted.
Reparse points are used to create NTFS directory junctions. An NTFS directory junction allows you to redirect the directory/file request to another location. For example, let's say that you have a directory called C:\CDROM on your hard drive. If you place an NTFS directory junction on C:\CDROM so that it points to your X: drive (your real CD-ROM drive) and issue a DIR command from a command shell, you'll actually get a directory of what's on your CD-ROM drive. Only empty directories can have a reparse point associated with them, and once a directory has a reparse point on it, you will not be able to create any subdirectories or files within it.
An NTFS directory junction allows you to take a single directory that exists on your system and make it accessible from multiple locations throughout your system. This is similar to what a hard link does for a file, except that there is no guarantee that the junction's target directory exists on the system. For security reasons, the system will not allow you to create a directory junction that refers to a UNC path or a mapped drive. If you want to accomplish this, you can use the Distributed File System (DFS) facilities that map volumes from different machines into one namespace. For local volumes, the volume mount point functions such as SetVolumeMountPoint enable grafting all the volumes present in one computer into one local namespace. (See the Platform SDK for more information.)
There is no Win32 function for creating an NTFS directory junction and there is no UI or tool that allows users to create them. There is a tool that comes with the Windows NT DDK, LINKD.EXE, that allows you to create directory junctions. We hope Microsoft will expose directory junctions better in future versions of Windows NT.
As always, you can determine if the file system supports reparse points by calling GetVolumeInformation and checking the FILE_SUPPORTS_REPARSE_POINTS bit flag. You can determine if a directory or file has a reparse point associated with it by calling GetFileAttributes and checking the FILE_ATTRIBUTE_REPARSE_POINT bit flag. You set/get a file/directory's reparse point data by calling CreateFile to open the file/directory, then calling DeviceIoControl with either the FSCTL_SET_REPARSE_POINT or FSCTL_GET_REPARSE_POINT control code. You can also delete the reparse point attribute by passing the FSCTL_DELETE_REPARSE_POINT control code. By the way, to open a directory, don't forget to specify the FILE_FLAG_BACKUP_SEMANTICS flag when calling CreateFile.
Normally, when you open a file that has a reparse point associated with it, the file system notifies the proper file system filter, which then alters or modifies the file-open process. An application can open a file and explicitly disable the reparse point modifications by passing the FILE_FLAG_OPEN_REPARSE_POINT flag to the CreateFile function. This allows an application to get to the raw data in the file/directory's stream. If you write code to set, get, or delete a file's reparse point data, you may want to include the FILE_FLAG_OPEN_REPARSE_POINT flag when calling CreateFile; otherwise the file system filter might deny your request to access the reparse point data.
Reparse points are only useful in conjunction with a file system filter, so we didn't write a sample program that demonstrates how to set, get, and delete reparse points. It is useful to understand reparse points because Windows NT 5.0 is scheduled to ship with a number of features (implemented as file system filters) that require reparse points.
For example, when the new Hierarchical Storage Management (HSM) service decides to move a file from the user's disk to an auxiliary storage device, the file's storage is removed from the disk file (freeing the clusters), but the file's entry remains on the disk and gets a reparse point attribute. When the user attempts to modify the file, the HSM file system filter copies the file from auxiliary storage back to the hard disk, removes the reparse point, and allows the application to modify the file as normal.
Another service that uses reparse points is the Native Structured Storage (NSS) service. The NSS file system filter makes a file on the disk look like an OLE structured storage file. An OLE document may contain data for a Word document, a spreadsheet, a PowerPoint® presentation, and so on. Historically, the OLE libraries have placed all of the data for these different embedded "objects" in a single file. This makes it very easy to copy a file from one place to another and take all of the embedded objects with it. But if you modify any of these objects, the updated objects are appended to the end of the file and the old objects are not removed. This is done for performance: removing the old objects would force OLE to rewrite the entire file to disk, which takes a lot of time. As a consequence, OLE document files are much larger than they need to be and waste a lot of disk space.
By taking advantage of NTFS 5.0 with its support for streams, reparse points, and NSS, OLE documents no longer waste precious disk clusters, and they no longer pay this performance penalty. Each embedded object's data now resides in its own stream within a file. Updating an object means that a new stream is created for the new object and the original stream for the object is destroyed, causing the file system to reclaim the disk space. The NSS file system filter makes all of this transparent to an application. The NSS filter also converts an NSS file to the old file format when it is copied to a floppy, and vice versa.
Another service that uses reparse points is the Single Instance Storage (SIS) service. This service allows a file to exist on the disk once but be accessed via several different names. It is much like a hard link, except that each reference to the file has its own set of attributes.
Hierarchical Storage Management, Native Structured Storage, and Single Instance Storage will be covered in a future article.
Disk quota support in NTFS 5.0 allows administrators to control how much disk space each user can consume on an NTFS volume. Disk quotas are completely transparent to users. If a user attempts to exceed the quota, the system indicates that the disk is full. The user can reclaim disk space by deleting files, by having another user take ownership of some files, or by having the administrator increase the quota. When a file is created, the owner's security ID (SID) is associated with the file, and the storage occupied by the file's streams is charged against that owner.
Quotas are based on the logical size of a file. This means that a file that is logically 10MB but compressed down to 8MB still counts as 10MB for quota purposes. Likewise, a sparse file that is logically 10MB but occupies zero bytes of actual storage also counts as 10MB. This behavior is by design: it allows quota settings on one volume to be compared meaningfully with quota settings on another volume.
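You can see the difference between the two sizes yourself: GetFileSize reports the logical size (the number charged against the quota), while GetCompressedFileSize reports the bytes actually allocated on disk for a compressed or sparse file. A minimal sketch (our own helper, abbreviated error handling, low-order 32 bits only for brevity):

```cpp
// Sketch: compare a file's logical size (charged to quota) with its
// actual on-disk allocation.
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

void ReportQuotaCharge(LPCTSTR pszFile) {
    HANDLE hFile = CreateFile(pszFile, GENERIC_READ, FILE_SHARE_READ,
        NULL, OPEN_EXISTING, 0, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return;

    DWORD dwHigh = 0;
    DWORD dwLogical = GetFileSize(hFile, &dwHigh);     // charged to quota
    CloseHandle(hFile);

    DWORD dwHighDisk = 0;
    DWORD dwOnDisk = GetCompressedFileSize(pszFile, &dwHighDisk);

    _tprintf(TEXT("%s: logical %lu bytes, on disk %lu bytes\n"),
             pszFile, dwLogical, dwOnDisk);
}
```

For a compressed or sparse file the second number is typically smaller than the first, yet the quota system charges the first.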
An administrator determines how to configure an NTFS volume for quota tracking/enforcement via the Quota tab on an NTFS volume's Property page (see Figure 7).
|Figure 7 Quota Tab|
By default, quota tracking is disabled when an NTFS volume is first set up. An administrator can change this and tell NTFS to keep track of a user's quota without actually enforcing any limits on the user. This allows an administrator to monitor how much storage each user owns. Turning this on for a volume causes the system to scan every existing file on the drive to build up the quota information; this can take a long time but only needs to be done once.
An administrator can also configure NTFS to track each user's storage and enforce a limit. If limits are enforced, NTFS can also be told to log an event when a user approaches his or her quota, and administrators can have usage reports generated. The Quota Entries button on the property page runs a tool that lets the administrator see each user's quota usage statistics for a particular volume (see Figure 8).
|Figure 8 Quota Entries Tool|
You can determine whether a file system supports disk quotas by calling the GetVolumeInformation function and examining the FILE_VOLUME_QUOTAS bit flag. An application can call GetDiskFreeSpaceEx, which returns both the number of free bytes available to the caller and the number of free bytes available on the entire volume; your application can then decide which of these two values makes sense to use. The system checks whether a user has exhausted his or her quota whenever a write extends a file's size. If extending the file would exceed the user's quota, the write operation fails.
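Both calls can be combined in a short quota-aware free-space check. This is a sketch (our own function name, minimal error handling); note that under quotas the caller's available bytes can be smaller than the volume's total free bytes:

```cpp
// Sketch: report quota support and the two free-space figures that
// GetDiskFreeSpaceEx returns.
#include <windows.h>
#include <tchar.h>
#include <stdio.h>

void ShowQuotaAwareFreeSpace(LPCTSTR pszRoot) {        // e.g. TEXT("C:\\")
    DWORD dwFlags = 0;
    if (GetVolumeInformation(pszRoot, NULL, 0, NULL, NULL,
                             &dwFlags, NULL, 0) &&
        (dwFlags & FILE_VOLUME_QUOTAS))
        _tprintf(TEXT("Volume supports disk quotas\n"));

    ULARGE_INTEGER ulAvailToCaller, ulTotal, ulFreeTotal;
    if (GetDiskFreeSpaceEx(pszRoot, &ulAvailToCaller,
                           &ulTotal, &ulFreeTotal)) {
        _tprintf(TEXT("Free bytes available to this user: %I64u\n"),
                 ulAvailToCaller.QuadPart);
        _tprintf(TEXT("Free bytes on the entire volume:   %I64u\n"),
                 ulFreeTotal.QuadPart);
    }
}
```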
Figure 9 lists the COM interfaces that allow an application to interact with disk quotas. (See the Platform SDK for more information.)
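As a taste of those interfaces, the following sketch queries a volume's default per-user quota limit. It assumes the CLSID_DiskQuotaControl coclass and IDiskQuotaControl interface declared in dskquota.h, as documented in the Platform SDK; error handling is abbreviated:

```cpp
// Sketch: read the default per-user quota limit for drive C via the
// disk-quota COM interfaces (dskquota.h, Platform SDK).
#include <windows.h>
#include <dskquota.h>
#include <stdio.h>

void ShowDefaultQuotaLimit(void) {
    if (FAILED(CoInitialize(NULL)))
        return;

    IDiskQuotaControl* pdqc = NULL;
    if (SUCCEEDED(CoCreateInstance(CLSID_DiskQuotaControl, NULL,
            CLSCTX_INPROC_SERVER, IID_IDiskQuotaControl,
            (void**) &pdqc))) {
        if (SUCCEEDED(pdqc->Initialize(L"C:\\", FALSE))) {  // read-only
            LONGLONG llLimit = 0;
            if (SUCCEEDED(pdqc->GetDefaultQuotaLimit(&llLimit)))
                printf("Default per-user limit: %I64d bytes\n", llLimit);
        }
        pdqc->Release();
    }
    CoUninitialize();
}
```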
Microsoft is working on other features for NTFS 5.0, which we'll write about in future issues of MSJ. Look for articles about property sets, link tracking, volume mount points, content indexing, and a few others. Of particular interest is the Reliable Change Journal, which keeps a database of all changes that are occurring to files/directories on a volume. We strongly encourage you to see how these new features can simplify your development efforts.
From the November 1998 issue of Microsoft Systems Journal.