January 1999



Code for this article: Jan99BugSlayer.exe (52KB)
John Robbins is a software engineer at NuMega Technologies Inc. who specializes in debuggers. He can be reached at

Recently I have been doing some wild and wacky development that has my head spinning. You guessed it—Windows NT® device drivers. If you have ever had to work on device drivers or looked at them just out of curiosity, they almost seem like a whole different operating system. The main problem is that the learning curve between the toy examples and a real driver is essentially vertical.
This month I will talk about the trials and tribulations that I went through so, hopefully, you will not have to reinvent the wheel on your own drivers. At this point, I am sure many of you are thinking that you might just skip on past the ol' Bugslayer and see what Paul DiLascia is talking about this month because you never plan to develop device drivers. Don't flip the page so quickly!
I have to admit that I thought there was not much in the way of ideas that would cross from driver land into user land. As I learned more about drivers, many things that you have to do in user land started making much more sense. I also learned some interesting things that actually gave me more insight into my user mode development.
I want to concentrate on a few areas in this inaugural Bugslayer device driver column. The first is getting your driver built with BUILD.EXE. There is nothing wrong with BUILD.EXE per se, but it is basically undocumented and I would like to clear up some of the confusion surrounding it. The second hazy area usually involves using WinDBG as a kernel debugger. The documentation tells you the rudiments of establishing a connection, but once you have that connection, you are on your own. After doing a little debugging, I'll talk about a kernel mode library, Track, that's proven extremely useful in my driver development. I will also show you some of the coding tips and techniques that saved me a great deal of time. For this column, I have used both Visual C++® 5.0 and 6.0, as well as the Windows NT 4.0 Device Driver Kit (DDK) and the Windows 2000 beta 2 DDK.

BUILD.EXE is, Uhhh, Different

The very first time I installed the DDK, I skipped reading the actual documentation, found the INSTDRV sample source code, took a glance at the README.TXT, and typed BUILD. The disk whirred, some things appeared on the screen, and the BUILD appeared to produce the binaries. Unfortunately, I could not find the SYS and EXE it produced. After scratching my head, I started searching my hard disk and noticed that INSTDRV.EXE and SIMPLDRV.SYS were in my DDK directory on a completely different disk drive. This was my first inkling that BUILD.EXE was going to be very different. After fumbling around with BUILD.EXE for a while, I finally had this Zen epiphany: BUILD.EXE is not one of those tools that conforms to your way of thinking—you absolutely conform to its way of thinking. Once I realized this, working with BUILD.EXE became easier.
You might be wondering why I'm spending more than two sentences on BUILD.EXE. The essential idea is getting perfectly reproducible and understandable builds. You cannot even begin to debug any of your problems until you know exactly how your binaries are produced, because different options mean wildly different things. Top-notch developers (and thus, top-notch bugslayers) can tell you the effect each command-line switch has on the compiler and linker.
In a nutshell, BUILD.EXE is a preprocessor to the Microsoft® make utility, NMAKE.EXE. The BUILD.EXE claim to fame is that it does an automatic dependency scan of all the headers included by your source files so NMAKE.EXE can rebuild the affected source files. The second half of BUILD.EXE is MAKEFILE.DEF, the huge and almost indecipherable collection of macros and rules that controls compilation and linking through NMAKE.EXE.
The main file device driver writers use with BUILD.EXE is the SOURCES file. This, as the name implies, is where the particular device-specific settings and source files are placed. SOURCES is used both by BUILD.EXE and NMAKE.EXE through MAKEFILE.DEF. You can think of SOURCES as a file that lets you specify single-line NMAKE.EXE macros. According to the scant documentation, the only three macros you must specify are TARGETNAME (the name of your driver), TARGETTYPE (the type of driver you are building), and SOURCES (the list of source files your driver uses). The documentation is fuzzy or just stops when you start using more than these three macros. After much trial and error (as well as suffering through MAKEFILE.DEF), I found most of the macros you need to set to get the most out of BUILD.EXE. Figure 1 shows my sample SOURCES file. In addition to SOURCES, there are other files that BUILD.EXE uses and they all can be found in this month's source code distribution.
At the beginning of the SOURCES file is a tip from the documentation that, if I had been paying better attention, would have saved me some grief: omit spaces anywhere around the equal signs when setting the macros in SOURCES. The mandatory macros are listed in the first section, after the header. In addition to the macros I mentioned previously, TARGETPATH is mandatory in my book. TARGETPATH sets the directory where the output files are placed because you do not want to drag your files out of the DDK installation directory on each build. Setting this macro only determines the first part of the path because BUILD.EXE always puts the files two directories below what you set. For example, if for an Intel compile you set TARGETPATH=..\Output, the checked-build files will actually go to ..\Output\i386\checked and the free-build files will go to ..\Output\i386\free. There is no way to change these defaults in BUILD.EXE, so you have to live with it.
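To make this concrete, a minimal SOURCES file along the lines of Figure 1 might look like the following; the driver name, source list, and output path are placeholders, not the column's actual sample:

```makefile
# Mandatory macros -- note: no spaces around the equal signs.
TARGETNAME=MyDriver
TARGETTYPE=DRIVER
TARGETPATH=..\Output
SOURCES=MyDriver.cpp

# With this TARGETPATH, an Intel checked build lands in
# ..\Output\i386\checked and a free build in ..\Output\i386\free.
```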
After the mandatory section, you'll find the optional settings. For the most part, they are self explanatory from the comments in the file. For settings like C_DEFINES that are expanded on the right-hand side of the expression, you need to make sure they expand that way so you do not lose any macro settings that MAKEFILE.DEF sets before including your SOURCES file directly. The uncommented macro settings are those that I always wanted set, so I left them in. I have also included some of the common settings for user mode programs.
While I did use BUILD.EXE to build the user mode driver portion of this month's code sample, I really would not recommend BUILD.EXE for more complicated applications. If you peruse through MAKEFILE.DEF, there are a million macros for building things with MFC and whatnot. I honestly did not even begin to understand their gyrations. For my real-world development, I use BUILD.EXE for the drivers and Visual C++ makefiles or hand-built makefiles for the user mode portions.
Toward the end of Figure 1 are some more interesting settings. The first is CHECKED_ALT_DIR=1. One of the things that initially bothered me about BUILD.EXE was that it put both the checked-build and free-build intermediate files into the same directory. This is a very bad thing because checked builds and free builds should never mix; sharing a directory means you either accidentally mix and match OBJs or you are forced into a complete rebuild each time you switch, when the goal is clean and consistent builds. Fortunately, the CHECKED_ALT_DIR setting tells BUILD.EXE and NMAKE.EXE to put the checked-build intermediate files into a different directory. By setting this, your .OBJs will be put in the .\objd directory instead of the .\obj directory under your source file directory.
When I first spied the USE_PDB flag in MAKEFILE.DEF, I assumed that I just needed to set it to have full symbolic PDB files produced. Those of you who've been reading my column for the last year know that my number one bugslaying tip is building with full debug symbols for all of your builds. While I know Intel assembler, I would much rather debug a release/free-build problem with as much symbolic information as possible than wade through a morass of bit twiddling. Unfortunately, setting USE_PDB only produces full symbol information in checked builds. In free builds, a PDB is produced, but it contains essentially only public symbols.
I was determined to get full release symbols, and the good news is that with a little research in MAKEFILE.DEF, BUILD.EXE can produce them. The result is the three lines in the conditional macro statement. I do not know if you are supposed to put conditional compilation in a SOURCES file, but it did not seem to cause any harm. I tried to put these settings in the MAKEFILE.INC, but it is included too late to change the PDB settings. I hope that Microsoft will fix USE_PDB to produce full PDB files in free builds.
After CHECKED_ALT_DIR=1, the time- and frustration-saving parts come in. If you have ever dealt with the message compiler or used another custom build step, the documentation and examples show those being done in a file called MAKEFILE.INC. This file is where you can set up additional makefile targets and set custom items. There are three macro settings that define when you can set new targets. Any targets in NTTARGETFILE0 are taken care of first, after the initial dependency scan. Those in NTTARGETFILE1 are completed before linking, and NTTARGETFILES are completed after the link. If you are curious to see the actual order in which things get built, look for the "all:" target in MAKEFILE.DEF.
I define NTTARGETFILE0=MakeDirs in my sample SOURCES file. This is a target in my MAKEFILE.INC that makes the output directories. For some reason, BUILD.EXE makes the intermediate directories but does not make the output directories, without which the compilation will fail. I put the MakeDirs target along with some others into a common file I called DDKCOMMON.MAK. You can include this file in your MAKEFILE.INC with the statement

 !include <DDKCOMMON.MAK>
provided DDKCOMMON.MAK is in a directory specified by your INCLUDE environment variable. By including the line NTTARGETFILE0=MakeDirs I automatically get the output directories. In addition to the MakeDirs target, CreateDBG is another useful target in my DDKCOMMON.MAK for setting in the NTTARGETFILES macro. This target will automatically create and place the DBG file for your driver into your symbols directory so it is ready for WinDBG.
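As a sketch of what such a target can look like (this is an illustration, not the actual contents of my DDKCOMMON.MAK; $(TARGETPATH) and $(TARGET_DIRECTORY) are the build environment's output macros):

```makefile
# Hypothetical MakeDirs target for MAKEFILE.INC.  Hook it up in SOURCES
# with NTTARGETFILE0=MakeDirs so it runs right after the dependency scan,
# before anything tries to write to the output directories.  The leading
# '-' tells NMAKE.EXE to ignore the error if a directory already exists;
# '@' suppresses echoing the command.
MakeDirs:
    -@md $(TARGETPATH)\$(TARGET_DIRECTORY)\checked 2>NUL
    -@md $(TARGETPATH)\$(TARGET_DIRECTORY)\free 2>NUL
```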
I also set the flags in the BUILD_DEFAULT environment variable differently than the default DDK settings. This environment variable contains the default command-line options to BUILD.EXE. The default settings from the DDK's SETENV.BAT are -ei -nmake -i. The -e causes BUILD.EXE to generate BUILD.LOG (the log of all commands executed, with their command lines), BUILD.WRN (the list of all compilation warnings), and BUILD.ERR (the list of all compilation errors). The -i tells BUILD.EXE to ignore extraneous compiler warning messages. Finally, -nmake -i tells NMAKE.EXE to ignore any commands that fail.
The default options made me a little nervous. If a command fails or there is a compiler warning, I want to know about it. Figure 2 lists all the settings that I use in my BUILD_DEFAULT. These settings provide better information for tracking down problems than the defaults. Before I move on to WinDBG.EXE, I must mention that while BUILD.EXE indicates that there is a clean target, it is not defined. This month's source code contains a few batch files for creating brand new projects and setting my default environment variables for BUILD.EXE. Hopefully, Microsoft will be documenting BUILD.EXE better in the future and fixing some of its existing limitations. While BUILD.EXE might seem idiosyncratic, it is quite powerful—it is used to build Windows NT itself.

Doing the WinDBG Dance

While there are no secrets to getting WinDBG set up for kernel debugging, you must follow the rules exactly as specified in Chapter 4 of the DDK Programmer's Guide. The toughest thing about setting up is determining if the NULL modem connection is really working. To see if the machines can talk to one another, start HyperTerminal on both machines and have them use the communication ports connected with the NULL modem cable you will be using for the kernel debugger. If you can type on the host machine and the output appears on the target machine, you have a good connection. The host machine is probably your faster machine where you develop your code, while the target machine is the machine on which your driver runs.
While the documentation mentions it briefly, you do not even need to have the checked build on your target machine because the full kernel debugger is available in the free or retail build. It is toggled with the /DEBUG switch in BOOT.INI, and is very valuable when you have problems that only appear on a free build. Of course, you should still do almost all of your testing on the checked build because it does much more parameter validation than the free build.
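For reference, a BOOT.INI entry with the kernel debugger enabled looks something like the following; the ARC path, COM port, and baud rate are machine-specific assumptions, so copy your own boot line and add the switches:

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows NT 4.0 [debugger]" /debug /debugport=com2 /baudrate=19200
```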
Originally I thought that the same version of Windows NT® had to run on both the host and the target machine. Fortunately, this is not the case: you can use Windows NT 4.0 on the host to debug Windows 2000 on the target.
After you have the host and target ready to connect, it is time to get WinDBG installed and set up. You will want to make sure you get the latest WinDBG available. At the time I wrote this, the Windows 2000 beta 2 SDK and DDKs had just shipped in the MSDN Subscription. That version of WinDBG has had some bug fixes and has been upgraded to support Visual C++ 6.0 PDB files. I will cover the WinDBG extensions in a moment, but if you need to debug a Windows NT 4.0 target machine and are using the Windows 2000 beta 2 WinDBG, you will need to copy the Windows NT 4.0-specific extension DLLs from your <SDK DIR>\BIN\NT4 directory and put them in the <SDK DIR>\BIN directory. As you would expect, the Windows 2000 beta 2 WinDBG is out-of-the-box ready for Windows 2000. If you will also have Windows 2000 on your target machine, the Windows 2000-specific extensions are in <SDK DIR>\BIN\NT5, so make sure to copy them over the Windows NT 4.0 versions when you are going to debug Windows 2000.
It is also important to get the supplied Windows NT symbols set up correctly on the host machine so WinDBG can find the proper versions. The symbols must be copied to a directory that ends in SYMBOLS. On my host machine, the free-build symbols are in E:\WINNT\SYMBOLS, and the checked-build symbols are in E:\CHECKED\SYMBOLS. When copying the symbols to the appropriate place, make sure to put the symbols for a binary into a directory that matches the extension under SYMBOLS. For example, checked-build EXE binaries go into E:\CHECKED\SYMBOLS\EXE, and checked-build SYS binaries go into E:\CHECKED\SYMBOLS\SYS. When you set the SYMBOLS directory in the WinDBG Symbols Options tab, all you need to specify is the top part of the SYMBOLS directory. So for my checked-build setup, I specified E:\CHECKED.
While you can put all the supplied symbols into your directories, you probably do not need to waste the disk space. When WinDBG runs as a kernel debugger, it cannot debug into user mode programs, so all you need are HAL.DBG, NTOSKRNL.DBG, and the DBGs for any SYS files you interact with. To debug a user mode program on the target machine, you can just run it under a debugger on the target machine.
The checked build always uses the multiprocessor kernel. To get the correct checked symbols for NTOSKRNL.EXE, rename the existing NTOSKRNL.DBG and copy NTKRNLMP.DBG to NTOSKRNL.DBG. If you are debugging the checked build of Windows 2000, DBG files are still supplied, but all the symbols are in PDB files. For Windows 2000, rename the DBG, but not the PDB. The DBG has an internal pointer to the correct PDB file. You will also need to copy the PDB files for each DBG file to your SYMBOLS directories.
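Using my checked-build directory from above as an example, the shuffle amounts to something like this at a command prompt (the paths are from my machine, so adjust to taste):

```bat
REM The checked build boots the multiprocessor kernel, so the
REM NTKRNLMP symbols must masquerade as NTOSKRNL.DBG for WinDBG.
cd /d E:\CHECKED\SYMBOLS\EXE
ren NTOSKRNL.DBG NTOSKRNL.UP.DBG
copy NTKRNLMP.DBG NTOSKRNL.DBG
```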
Once everything is set up to start debugging, you will quickly find that WinDBG offers some powerful informational commands that let you get at everything in the kernel. I have to admit that every time I go back and use the Visual C++ debugger, I really wish it had a command window and the ability to have WinDBG-type extensions. The best part about WinDBG extensions is that you can write your own! I plan to cover this in a future Bugslayer column, but if you are really interested, search for WDBGEXTS.C on MSDN for an example.
In addition to its extensibility, WinDBG has excellent breakpoint support. Often, when you set a breakpoint, you know exactly what you want to look at when the breakpoint goes off. In either the WinDBG breakpoint dialog or in the command window, you can specify the commands that you would like to execute when the breakpoint fires. If you set a breakpoint and the module you want to set it in is not loaded, WinDBG is smart enough to defer the breakpoint until that module loads, and it will then set it for you automatically. These deferred breakpoints also work with user mode programs.
While WinDBG is better than the Visual C++ debugger in some areas, it is not the most stable debugger in the world. WinDBG is prone to hanging and crashing at the most inopportune times. When I first started trying to debug device drivers, I wondered if I was ever going to get the connection established. I would start WinDBG on the host and as soon as the connection to the target was established, WinDBG would just hang. This problem really had me scratching my head: I could make the initial connection without a workspace, but as soon as I saved an NTOSKRNL.EXE workspace, I could never connect with that workspace again; the only way to connect was to not open the workspace at all. After duplicating the problem on a completely different set of systems and hearing that others were having the same problem, I figured this was a good sign that it was a WinDBG bug.
I finally figured out that this problem cropped up when I had more than just the command window open in my workspace. This was with any window—a source window or just the register window. The moral here is to only use workspaces that just have the command window open when connecting. While this is annoying, at least your carefully constructed breakpoints and symbol search paths will persist across connections. Unfortunately, once you have a workspace open, you cannot open another one, so you need to manually reopen your source and other windows. I hope that Microsoft can have this fixed in the next release of the Platform SDK.
WinDBG is odd in that it seems to be an actual debug build because ASSERT message boxes fire off occasionally. When WinDBG crashes or hangs, close down the non-responding instance, immediately start another instance, and reestablish a connection to the target machine. When the connection is reestablished, you can use the !reload command to get all of your symbol tables back into WinDBG.
One other problem I have encountered (but only occasionally) is that after using a large number of the informational commands, the target machine will bugcheck with an UNEXPECTED_KERNEL_MODE_TRAP. Fortunately, I have never encountered this when doing normal driver debugging, only when I have been playing around following things through the system, like handles and processes.
While kernel debugging has a few problems, for the most part WinDBG works for your average day-to-day driver development. Especially when you consider the price! Now that I have talked about getting your driver built, and about dealing with WinDBG, I want to talk about a very helpful kernel mode library I wrote: Track.

Track This

When I first started working on device drivers, I became nervous because many of the tools that I relied on for so long in user mode development were not available for kernel mode. It seemed to me that every time I loaded my driver, the only debugging technique I had was to read hexadecimal numbers off a pretty blue screen. I quickly figured out that I needed to develop some techniques to give me a little more information and let me find problems proactively. I rolled these ideas up and developed the Track library. Track is for drivers that link against NTOSKRNL.EXE. However, the techniques used in Track could be applied easily to display and NDIS drivers. The first time I applied Track to a commercial project, I immediately found a whopper of a memory leak.
These were my initial requirements for Track. Track must allow me to watch all paged and nonpaged memory allocations my driver makes. Tracking means I can perform the following actions on the allocated memory:
  • I can find out, at any time, how much allocated memory I have in the system.
  • I know where each block of memory was allocated and who allocated it.
  • At any time, I can run through the allocated memory and see if the memory has had any underwrites off the front of the block or overwrites off the end of the block.
  • I can look for memory that has not been deallocated.
Track must also allow me to see other general resource allocations and deletions. For example, I want to know if I forget to unlock some memory pages my driver locked into memory. Like memory tracking, I want general resource tracking to find out at any time how many resource allocations are active and allow me to look for resource leaks.
Track must be easy to use and you should not have to completely change your style of development to use it. Track also needs to be relatively easy to extend. As new resources come along that I want to track, it should be straightforward to add that tracking to the whole system.
Overall, the requirements list for Track is probably not much different than the requirements and features of the user mode debug runtime library CrtDbg for tracking memory. As I mentioned earlier, the key is information. While the checked build of Windows NT will assert when you forget to free the memory associated with your device object, it does not automatically report any memory leaks when your driver unloads. The only way to find memory leaks is to use WinDBG's !memusage, !pool, and !poolfind commands, which you have to remember to run. One of the key rules of bugslaying is to not rely on manual means of finding problems, but to automate them as much as possible.

A Small Aside

Before I jump into the Track usage and implementation, I need to talk about some of the general techniques I use in developing drivers. The first thing you might notice when you start looking at my code is that all the samples are CPP files. I am not doing any full-blown class libraries; I am simply using C++ as a better C. I wanted to use some of the C++ constructs like inline functions, improved variable placement, and its much better type checking. I like to let the compiler do as much error detection as possible because it is easier to fix a potential problem during compile time instead of looking at a bugcheck on a dead machine.
All of my files include a nice and relatively simple debugging header called DrvDiagnostics.h (see Figure 3). The first thing I do is set up a couple of macros, TRACE and VERIFY, because I use them heavily in my overall development and would be deprived without them in kernel mode.
Probably the most useful function or macro in Figure 3 is BreakIfKdPresent. I was poking through a disassembly of NTOSKRNL.EXE and I noticed this very curious export called KdDebuggerEnabled. The name alone piqued my interest. Looking at the code, I realized that KdDebuggerEnabled is a variable export, and sure enough it is defined in NTDDK.H. Unfortunately, KdDebuggerEnabled is defined as a BOOLEAN and not what it really is: a pointer to a BOOLEAN. If the value is TRUE, Windows NT is running with a kernel debugger attached. As you can see, I wrapped a nifty function/macro, BreakIfKdPresent, around checking KdDebuggerEnabled so that I issue a DebugBreak if the kernel debugger is running. With WinDBG's instability, I found it best just to put the call to BreakIfKdPresent as the first statement in my DriverEntry so I can gain control cleanly with WinDBG. As you would expect, BreakIfKdPresent defines away if building a free build.
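Since the DDK pieces obviously will not compile outside kernel mode, here is a user-land sketch of the BreakIfKdPresent pattern. The KdDebuggerEnabled variable and DbgBreakPoint stub below are stand-ins (the real export lives in NTOSKRNL.EXE and the real DbgBreakPoint traps into WinDBG), and DBG is the DDK's checked-build flag, defined here so the macro is live:

```c
#define DBG 1   /* pretend this is a checked build */

typedef unsigned char BOOLEAN;

/* Stand-ins for the kernel pieces used by the real macro. */
static BOOLEAN KdDebuggerEnabled = 0;  /* TRUE when a kernel debugger is attached */
static int g_breaksIssued = 0;

static void DbgBreakPoint(void)        /* the real one stops in WinDBG */
{
    g_breaksIssued++;
}

#if DBG
#define BreakIfKdPresent() \
    do { if (KdDebuggerEnabled) DbgBreakPoint(); } while (0)
#else
#define BreakIfKdPresent() ((void)0)   /* defines away in free builds */
#endif
```

Calling BreakIfKdPresent() as the first statement of DriverEntry then stops cleanly in the debugger only when one is actually attached.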
After BreakIfKdPresent in Figure 3 are all sorts of IRQL ASSERT macros. These have proven invaluable at the beginning of each function. It took me a while to get the hang of the IRQL restrictions, so I was crashing when I least expected to. Now I look at each API function I might be calling and take the lowest level IRQL of all the functions and ASSERT it right in the beginning. I would strongly encourage you to use these macros heavily as well.

Back On Track

When first thinking about Track, I thought I could just do something that was similar to what I presented in my February 1997 column: patch the import address table (IAT). While kernel mode drivers are Win32® PE files, the support system needed to safely and easily patch IATs is just not there in kernel mode. While I thought it might be doable, I felt that the risk of a total system crash was not worth fiddling around in kernel mode. This meant that I needed to redirect the functions I wanted to track to my tracking routines. Since I value safety above everything else in kernel mode, I decided to do the redirection with preprocessor macros. If you want a function tracked, prefix its normal name with Track. Therefore, ExAllocatePoolWithTag becomes TrackExAllocatePoolWithTag, and the wrapper function Track_ExAllocatePoolWithTag is where the actual allocation takes place.
Figure 4 lists all the functions that I handle with Track. In addition to the new names for the functions and including the main header file, TRACK.H, you also need to call the TRACKINITIALIZE macro as soon as you can in your DriverEntry function, and call the TRACKCLOSE macro as late in your driver's lifetime as possible.
Track has a couple of other macros that you can call at various times to check on the current state of your driver. The first is TRACKSTATS, which dumps a small report calling DbgPrint so you can see it in the kernel debugger. The report itself is rather small, but it shows you the amount of memory that you have allocated and the total calls to general resources and handle resources. TRACKDUMPALLOCATED will dump all of your currently allocated resources.
If you happen to have outstanding allocations when you call TRACKCLOSE, you will see a dump that looks like the following in the WinDBG command window:

 Track Reports Allocation/Resource Leaks:
 AllocatorFn : RtlAnsiStringToUnicodeString
    Source   : d:\dev\column\jan99\sourcecode\tracktes
    Line     : 448
 AllocatorFn : MmAllocateNonCachedMemory
    Source   : d:\dev\column\jan99\sourcecode\tracktes
    Line     : 443
    Size     : 20
 AllocatorFn : ExAllocatePoolWithTag
    Source   : d:\dev\column\jan99\sourcecode\tracktes
    Line     : 433
    Size     : 20
 AllocatorFn : IoCreateDevice
    Source   : d:\dev\column\jan99\sourcecode\tracktes
    Line     : 123
The file names are cut off because I wanted to minimize the amount of nonpaged memory used by Track, so I compromised on a 40-character name for the source file.
The final function, TRACKVALIDATEALLOCATED, checks all memory allocations for underwrites and overwrites. If Track encounters a memory problem, it reports the type of corruption and where the memory was allocated. Immediately after the reports, Track will trigger an ASSERT so you can gain control in WinDBG to start looking at the problem. In the checked version of my drivers, I have a special IOCTL that does nothing more than call TRACKVALIDATEALLOCATED so I can check it at will from my user mode application. Since TRACKVALIDATEALLOCATED can only be called at PASSIVE_LEVEL IRQL, this works well. Sample output in WinDBG looks like the following:

 Starting to validate allocations.  This could crash if your driver
 attempts to allocate/free resources while validating.
 Track Error ****************
 Memory Underwrite and Overwrite corruption
 Allocation point: d:\dev\column\jan99\sourcecode\tracktes, line 536
 Track Error ****************
 *** Assertion failed: Corrupt MemoryFALSE
 ***Source File: d:\dev\column\jan99\sourcecode\track\trackmemory.cpp,
 line 212
 Break, Ignore, Terminate Process or Terminate Thread (bipt)?
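The column does not spell out how the validation works internally, but a common way to catch underwrites and overwrites (and presumably close to what a checker like TRACKVALIDATEALLOCATED has to do) is to bracket each block with sentinel bytes at allocation time and re-scan them on demand. A user-land sketch with hypothetical names:

```c
#include <stdlib.h>
#include <string.h>

#define GUARD_SIZE 8
#define GUARD_BYTE 0xFC   /* arbitrary sentinel value */

/* Layout: [size_t size][front guard][user bytes][rear guard] */
void *GuardedAlloc(size_t size)
{
    unsigned char *raw = malloc(sizeof(size_t) + 2 * GUARD_SIZE + size);
    if (NULL == raw) return NULL;
    memcpy(raw, &size, sizeof(size_t));
    memset(raw + sizeof(size_t), GUARD_BYTE, GUARD_SIZE);
    memset(raw + sizeof(size_t) + GUARD_SIZE + size, GUARD_BYTE, GUARD_SIZE);
    return raw + sizeof(size_t) + GUARD_SIZE;
}

/* Returns 0 if intact, bit 0 set on an underwrite, bit 1 on an overwrite. */
int GuardedValidate(void *block)
{
    unsigned char *user = block;
    size_t size;
    int result = 0, i;
    memcpy(&size, user - GUARD_SIZE - sizeof(size_t), sizeof(size_t));
    for (i = 0; i < GUARD_SIZE; i++)
        if (GUARD_BYTE != user[i - GUARD_SIZE]) { result |= 1; break; }
    for (i = 0; i < GUARD_SIZE; i++)
        if (GUARD_BYTE != user[size + i]) { result |= 2; break; }
    return result;
}

void GuardedFree(void *block)
{
    free((unsigned char *)block - GUARD_SIZE - sizeof(size_t));
}
```

The real kernel version would allocate from the appropriate pool and record the __FILE__ and __LINE__ of the caller so the corruption report can say where the bad block came from.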
From an implementation standpoint, Track is straightforward and does not introduce any patentable algorithms. The bulk of the work is done in the TrackInternal.cpp source file. The main data structure that stores all the individual allocation items is a lookaside list. However, since I have to protect the Track internals with a spinlock anyway, I wondered if I might want to shift to an array-based implementation for storing the items. Fortunately, I set up the internal interface such that you could change it in TrackInternal.cpp and the rest of the files would not need to change.
The grunt work in the implementation is writing the wrapper functions. Figure 5 shows the implementation of Track_MmMapIoSpace (the allocation function) and Track_MmUnmapIoSpace (the deallocation function). One of the first bugs I had in the implementation was in the deallocation function's __finally block: I was calling the wrapped function inside the __finally block and only then releasing the internal spinlock protection. Can you guess what the problem was? Since I had acquired the spinlock, the IRQL was raised to DISPATCH_LEVEL and, as you know, many functions cannot be called at a level that high. After the Windows NT checked build asserted on me and bugchecked with an IRQL_NOT_LESS_OR_EQUAL error, I quickly saw my problem.
Earlier, I mentioned that the TRACKVALIDATEALLOCATED function can only be called at PASSIVE_LEVEL. My original function acquired the spinlock, called the validation functions, and released the spinlock. In my initial testing, everything worked well, and I thought I was done. When I wrote the test driver for this column, I again ended up looking at a bugcheck with an IRQL_NOT_LESS_OR_EQUAL error when I ran the driver. I noticed that this only happened when I went to check a paged pool memory allocation. Yet again, acquiring the spinlock raised the IRQL, so the paged memory was not available at DISPATCH_LEVEL. That was easy enough to fix by not acquiring the spinlock, but it meant that I could only validate for underwrites and overwrites at PASSIVE_LEVEL.
I need to point out one other thing about Track: be very careful if you are using multiple kernel threads in your driver. If you follow the normal rules of driver development and do your allocations in DriverEntry and your deallocations in an unload routine, you should not have any problem. However, I have not done any testing on drivers that have multiple threads allocating and deallocating memory at the same time. This will probably be problematic because Track is using one spinlock to protect everything. Fortunately, I doubt that many of you are writing heavily multithreaded drivers. If you are, let me know.


While some aspects of device driver development are either weird or undocumented, I hope that now you'll be able to spend more time developing your driver instead of wondering what one of the DDK tools is doing (or not doing). I also encourage you to put Track to work so that you can start getting a little more information about your allocations and when they go bad. As I add more tracked DDK APIs to Track, I will post updated versions to my Web site. Yes, I do eat my own dog food! If you add APIs, please consider sending the sources to me and I will post them to my Web site so others can use them.


Tip 15 When debugging drivers, especially those that can be unloaded, I swap back and forth between the free and checked builds to ensure that the free build behaves the same as the checked build. To make this easier, I have my INI registration file register two drivers: the free build is the normal driver name Foo, and the checked build is FooC. I also set the ImagePath field for each to point to the appropriate driver. For example:

 ImagePath = \??\d:\dev\foo\obj\i386\free\foo.sys
Tip 16 In my August 1998 column I showed you how to write crash handlers that display a dialog with crash information for the user. Reader Chris Kirmse suggests having your crash handler fill out bug reports for you. When Chris's application crashes, he has the crash handler show a dialog with a Report Error button. When the user presses the button, Chris has their browser jump to his Web site with the crash data. At the Web site he asks for additional information to help him track down the problem. Chris says he gets much better information from his users because the crash is definitely fresh in their mind. Another benefit is that users are impressed that he's so proactive in handling their problems.

Have a tricky issue dealing with bugs? Send your questions or bug slaying tips via email to John Robbins:

From the January 1999 issue of Microsoft Systems Journal. Get it at your local newsstand, or better yet, subscribe.

© 1998 Microsoft Corporation. All rights reserved.