Microsoft Systems Journal, March 1999

Make Your Windows 2000 Processes Play Nice Together With Job Kernel Objects

Jeffrey Richter

Windows 2000 offers a new job kernel object that allows you to group processes together and create a sandbox that restricts what these processes are allowed to do. Using jobs that contain a single process lets you place restrictions on that process that you normally wouldn't be able to apply.

This article assumes you're familiar with C++, Win32

Code for this article: jobobject.exe (2KB)
Jeffrey Richter wrote Advanced Windows, Third Edition (Microsoft Press, 1997) and Windows 95: A Developer's Guide (M&T Books, 1995). Jeff is a consultant and teaches Win32 programming courses ( http://www.solsem.com). He can be reached at http://www.JeffreyRichter.com

There are many times when you need to treat a group of processes as a single entity. For example, when you tell Microsoft® Developer Studio® to build a project, it spawns CL.EXE, which may have to spawn additional processes (like the individual passes of the compiler). But if the user wants to prematurely stop the build, then Developer Studio must somehow be able to terminate CL.EXE and all its child processes. Solving this simple (and common) problem in Windows® has been notoriously difficult because Windows doesn't maintain a parent/child relationship between processes. In particular, child processes continue to execute even after their parent process has been terminated. This behavior is quite different in many other operating systems.
When designing a server, it is often necessary to treat a set of processes as a single group. For instance, a client may request that a server execute some application (which may spawn children of its own) and return the results to the client. Many clients may be connected to this server, and it would be nice if the server could somehow place restrictions on what a client can request. This prevents any single client from monopolizing all of the server's resources. These restrictions could come in many forms: maximum CPU time that can be allocated to the client's request, minimum and maximum working set sizes, restricting the client's application from shutting down the computer, security considerations, and so on.
Windows 2000 offers a new job kernel object that allows you to group processes together and create a sandbox that restricts what these processes are allowed to do. It is best to think of a job object as a container of processes. However, it is also useful to create jobs that contain a single process because you can place restrictions on that process that you normally wouldn't be able to apply.
My StartRestrictedProcess function (see Figure 1) demonstrates how to place a process in a job that restricts the process's ability to do certain things. The first thing that I do is create a new job kernel object by calling
 HANDLE CreateJobObject(LPSECURITY_ATTRIBUTES pJobAttributes,
                        LPCTSTR pName);
As with all kernel objects, the first parameter allows you to associate security information with the new job object and tell the system whether you want the returned handle to be inheritable. The last parameter allows you to name the job object so that it can be accessed by another process via the OpenJobObject function:
 HANDLE OpenJobObject(DWORD dwDesiredAccess,
                      BOOL fInheritHandle,
                      LPCTSTR pName);
When you know that you will no longer access the job object in your code, you must close its handle by calling CloseHandle. You can see this demonstrated at the end of my StartRestrictedProcess function. Be aware that closing a job object does not force all the processes in the job to be terminated. The job object is actually marked for deletion and will be destroyed automatically only after all of the processes within the job have terminated.
Note that closing the job's handle makes the job inaccessible to all processes even though the job still exists. To make this clear, examine the following code:
 // Create a named job object
 HANDLE hjob = CreateJobObject(NULL, "Jeff");

 // Put our own process in the job (this function is
 // explained more later in this article)
 AssignProcessToJobObject(hjob, GetCurrentProcess());

 // Closing the job does not kill our process or the job
 // But, the name ("Jeff") is immediately disassociated
 // from the job
 CloseHandle(hjob);

 // Try to open the existing job
 hjob = OpenJobObject(JOB_OBJECT_ALL_ACCESS, FALSE,
                      "Jeff");
 // OpenJobObject fails and returns NULL here because
 // the name ("Jeff") was disassociated from the job
 // when CloseHandle was called. There is no way to
 // get a handle to this job now. 

Placing Restrictions on a Job's Processes

After creating a job, you will typically want to set up the sandbox—restricting what processes can do within the job. A job has several different types of restrictions that can be placed on it:

  • Basic (and extended) limits restrict processes within a job from monopolizing the system's resources.
  • Basic UI restrictions restrict processes within a job from altering the user interface.
  • Security limits restrict processes within a job from accessing secure resources (files, registry subkeys, and so on).

You place restrictions on a job by calling
 BOOL SetInformationJobObject(HANDLE hJob,
     JOBOBJECTINFOCLASS JobObjectInformationClass,
     LPVOID pJobObjectInformation,
     DWORD cbJobObjectInformationLength);
The first parameter identifies which job you want to restrict. The second parameter is an enumerated type and indicates what type of restriction you want to apply to the job. The third parameter is the address of a data structure containing the restriction settings. The fourth parameter indicates the size of this structure (used for versioning). Figure 2 summarizes how to set restrictions.
In my StartRestrictedProcess function, I only set some basic limit restrictions on the job. To do this, I allocate a JOB_OBJECT_BASIC_LIMIT_INFORMATION structure, initialize it, and then call SetInformationJobObject. A JOB_OBJECT_BASIC_LIMIT_INFORMATION structure looks like this:
 typedef struct _JOBOBJECT_BASIC_LIMIT_INFORMATION {
    LARGE_INTEGER PerProcessUserTimeLimit;
    LARGE_INTEGER PerJobUserTimeLimit;
    DWORD LimitFlags;
    DWORD MinimumWorkingSetSize;
    DWORD MaximumWorkingSetSize;
    DWORD ActiveProcessLimit;
    DWORD Affinity;
    DWORD PriorityClass;
    DWORD SchedulingClass;
 } JOBOBJECT_BASIC_LIMIT_INFORMATION,
 *PJOBOBJECT_BASIC_LIMIT_INFORMATION;
Figure 3 briefly describes the members.
I'd like to explain a few things about this structure that I don't think are clear from the Platform SDK documentation. You set bits in the LimitFlags member to indicate which restrictions you want applied to the job. For example, in my StartRestrictedProcess function I set the JOB_OBJECT_LIMIT_PRIORITY_CLASS and JOB_OBJECT_LIMIT_JOB_TIME bits. This means that these are the only two restrictions that I'm placing on the job. No restriction has been made on CPU affinity, working set size, per-process CPU time, and so on.
As the job runs, it maintains accounting information such as how much CPU time has been used by processes in the job. Each time you set the basic limits using the JOB_OBJECT_LIMIT_JOB_TIME flag, the job resets the CPU time accounting information back to 0. This allows you to see how much CPU time is required as different stages of the job execute. But a problem occurs: what if you want to change the affinity of the job and don't want to reset the CPU time accounting information? To do this, you would have to set a new basic limit using the JOB_OBJECT_LIMIT_AFFINITY flag, and you'd have to leave off the JOB_OBJECT_LIMIT_JOB_TIME flag. But if you do this, you are telling the job that you no longer want to enforce a CPU time restriction. This is not what you want.
What you actually want is to change the affinity restriction and to keep the existing CPU time restriction; you just don't want the CPU time accounting information reset. To solve this problem, there is a special flag: JOB_OBJECT_LIMIT_PRESERVE_JOB_TIME. This flag and the JOB_OBJECT_LIMIT_JOB_TIME flag are mutually exclusive. The JOB_OBJECT_LIMIT_PRESERVE_JOB_TIME flag indicates that you want to change the restrictions without resetting the CPU time accounting information to 0.
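To make the flag juggling concrete, here is a minimal, platform-neutral sketch. The helper name UpdateFlagsForAffinityChange is mine, and the flag values are the ones winnt.h is understood to define; verify them against your SDK headers before relying on them:

```c
#include <assert.h>

/* LimitFlags bit values as winnt.h is understood to define them;
   verify against your SDK headers. */
#define JOB_OBJECT_LIMIT_JOB_TIME          0x00000004
#define JOB_OBJECT_LIMIT_AFFINITY          0x00000010
#define JOB_OBJECT_LIMIT_PRESERVE_JOB_TIME 0x00000040

/* Hypothetical helper: given the LimitFlags currently in effect,
   compute the flags to pass to SetInformationJobObject when changing
   affinity without resetting the job's CPU-time accounting. Because
   JOB_TIME and PRESERVE_JOB_TIME are mutually exclusive, the former
   is swapped for the latter. */
unsigned UpdateFlagsForAffinityChange(unsigned currentFlags) {
    unsigned newFlags = currentFlags | JOB_OBJECT_LIMIT_AFFINITY;
    if (newFlags & JOB_OBJECT_LIMIT_JOB_TIME) {
        /* Drop JOB_TIME (re-setting it would reset accounting to 0)
           and keep the existing CPU-time restriction instead. */
        newFlags &= ~JOB_OBJECT_LIMIT_JOB_TIME;
        newFlags |= JOB_OBJECT_LIMIT_PRESERVE_JOB_TIME;
    }
    return newFlags;
}
```

The result would then go in the LimitFlags member of the structure passed to SetInformationJobObject.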
I should also talk about the JOBOBJECT_BASIC_LIMIT_INFORMATION structure's SchedulingClass member. Imagine that you have two jobs running and you set the priority class of both jobs to NORMAL_PRIORITY_CLASS. But maybe you want processes in one job to get more CPU time than processes in the other job. You can use the SchedulingClass member to accomplish this. This member lets you change the relative scheduling of jobs that have the same priority class. You can set the SchedulingClass member to a value from 0 to 9. The default is 5. On Windows 2000, a higher value tells the system to give a longer time quantum to threads in processes in a particular job; a lower value reduces the threads' time quantum.
For example, let's say that I have two normal priority class jobs. Each job contains one process, and each process has just one (normal priority) thread. Under ordinary circumstances, these two threads would be scheduled in the normal round-robin fashion and each would get the same time quantum. However, if I set the SchedulingClass member of the first job to 3, then when threads in this job are scheduled CPU time, their quantum will be shorter than for threads that are in the second job.
If you use the SchedulingClass member, you will want to avoid using large numbers and hence larger time quantums because they reduce the overall responsiveness of the other jobs, processes, and threads in the system. Also, while this is what happens on Windows 2000, Microsoft has plans to make more significant changes to the thread scheduler in future versions of Windows. Microsoft recognizes a need for the operating system to offer a wider range of thread scheduling scenarios to jobs, processes, and threads.
In addition to the basic limits, you can also set extended limits on a job using the JOBOBJECT_EXTENDED_LIMIT_INFORMATION structure:
 typedef struct _JOBOBJECT_EXTENDED_LIMIT_INFORMATION {
     JOBOBJECT_BASIC_LIMIT_INFORMATION
         BasicLimitInformation;
     IO_COUNTERS IoInfo;
     SIZE_T ProcessMemoryLimit;
     SIZE_T JobMemoryLimit;
     SIZE_T PeakProcessMemoryUsed;
     SIZE_T PeakJobMemoryUsed;
 } JOBOBJECT_EXTENDED_LIMIT_INFORMATION,
 *PJOBOBJECT_EXTENDED_LIMIT_INFORMATION;
As you can see, this structure contains a JOBOBJECT_BASIC_LIMIT_INFORMATION structure, making it a superset of the basic limits. This structure is a little strange in that it includes members that have nothing to do with setting limits on a job. For example, the whole IoInfo structure
 typedef struct _IO_COUNTERS {
    ULONGLONG ReadOperationCount;
    ULONGLONG WriteOperationCount;
    ULONGLONG OtherOperationCount;
    ULONGLONG ReadTransferCount;
    ULONGLONG WriteTransferCount;
    ULONGLONG OtherTransferCount;
 } IO_COUNTERS; 
tells you the number of read, write, and non-read/write operations (as well as total bytes transferred during those operations) that have been performed by processes in the job. This information is not a limit you can set but is statistical information that you can retrieve (by calling QueryInformationJobObject, discussed later). It doesn't make sense to me why Microsoft placed the IoInfo member in the JOBOBJECT_EXTENDED_LIMIT_INFORMATION structure. By the way, the new GetProcessIoCounters function allows you to obtain this information for processes that are not in jobs:
 BOOL GetProcessIoCounters(HANDLE hProcess,
                          PIO_COUNTERS pIoCounters);
In addition, the PeakProcessMemoryUsed and PeakJobMemoryUsed members are also read-only and tell you the maximum amounts of committed storage that have been required for any one process and for all processes within the job. The two remaining members, ProcessMemoryLimit and JobMemoryLimit, allow you to restrict the amount of committed storage used by any one process or by all processes in the job, respectively.
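As a sketch of how the memory limits would be filled in, the following platform-neutral fragment uses local stand-ins for the SDK types; the flag names JOB_OBJECT_LIMIT_PROCESS_MEMORY and JOB_OBJECT_LIMIT_JOB_MEMORY and their values are assumed from winnt.h, and InitMemoryLimits is a hypothetical helper. The point to notice is that even extended limits are enabled through the LimitFlags member of the embedded basic structure:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Local stand-ins so this sketch compiles anywhere; on Windows you
   would include <windows.h> and use the real
   JOBOBJECT_EXTENDED_LIMIT_INFORMATION. Flag values are assumed from
   winnt.h; verify against your headers. */
typedef unsigned long DWORD;
typedef size_t SIZE_T;
#define JOB_OBJECT_LIMIT_PROCESS_MEMORY 0x00000100
#define JOB_OBJECT_LIMIT_JOB_MEMORY     0x00000200

typedef struct {
    DWORD LimitFlags;               /* only the member this sketch needs */
} BASIC_LIMITS;

typedef struct {
    BASIC_LIMITS BasicLimitInformation;
    SIZE_T ProcessMemoryLimit;      /* max committed bytes per process */
    SIZE_T JobMemoryLimit;          /* max committed bytes, whole job  */
} EXTENDED_LIMITS;

/* Hypothetical helper: prepare extended limits that cap committed
   storage. The enabling flags go in the embedded basic structure's
   LimitFlags member, even though the limits themselves are
   extended-limit members. */
void InitMemoryLimits(EXTENDED_LIMITS *el,
                      SIZE_T perProcessBytes, SIZE_T perJobBytes) {
    memset(el, 0, sizeof(*el));
    el->ProcessMemoryLimit = perProcessBytes;
    el->JobMemoryLimit     = perJobBytes;
    el->BasicLimitInformation.LimitFlags =
        JOB_OBJECT_LIMIT_PROCESS_MEMORY | JOB_OBJECT_LIMIT_JOB_MEMORY;
    /* On Windows, the limits would then be applied with:
       SetInformationJobObject(hJob, JobObjectExtendedLimitInformation,
                               el, sizeof(*el)); */
}
```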
Now, let's turn our attention back to other restrictions that you can place on a job. A JOBOBJECT_BASIC_UI_RESTRICTIONS structure looks like this:
 typedef struct _JOBOBJECT_BASIC_UI_RESTRICTIONS {
     DWORD UIRestrictionsClass;
 } JOBOBJECT_BASIC_UI_RESTRICTIONS,
 *PJOBOBJECT_BASIC_UI_RESTRICTIONS; 
This structure has only one data member, UIRestrictionsClass, which holds a set of bit flags (described in Figure 4). The last flag, JOB_OBJECT_UILIMIT_HANDLES, is particularly interesting. When you place this restriction on a job, it means that no processes in the job will be able to access USER objects created by processes that are outside the job. So if you try to run Spy++ inside a job, you won't see any windows except for the windows that Spy++ itself creates.
Figure 5 Using JOB_OBJECT_UILIMIT_HANDLES

Figure 5 shows Spy++ with two MDI child windows open. Notice that the Threads 1 window contains a list of threads in the system. Only one of those threads, 000006AC SPYXX, seems to have created any windows. This is because I ran Spy++ in its own job and restricted its use of UI handles. In the same window, you can see the MSDEV and EXPLORER threads, but it appears that they have not created any windows. I assure you that these threads have definitely created windows, but Spy++ is unable to access them in any way. On the right-hand side, you see the Windows 3 window. In this window, Spy++ shows the hierarchy of all windows existing on the desktop. Notice that there is only one entry, 00000000. Spy++ must place it there as a placeholder.
Note that this UI restriction is only one-way. That is, processes that are outside of a job can see USER objects that are created by processes within a job. For example, if I were to run Notepad in a job and Spy++ outside of a job, Spy++ would be able to see Notepad's window even if the job that Notepad was in specified the JOB_OBJECT_UILIMIT_HANDLES flag. Also, if Spy++ were in its own job, it would be able to see Notepad's window unless the Spy++ job had the JOB_OBJECT_UILIMIT_HANDLES flag specified.
Restricting UI handles is awesome if you want to create a really secure sandbox for your job's processes to play in. However, it is sometimes useful to have a process that is part of the job communicate with a process that is outside of the job. One easy way to accomplish this is to use window messages, but if the job's processes can't access UI handles, then a process in the job can't send or post a window message to a window created by a process outside of the job. Fortunately, there is a way to solve this problem using a new function:
 BOOL UserHandleGrantAccess(HANDLE hUserObj,
                            HANDLE hjob, BOOL fGrant);
The hUserObj parameter indicates a single USER object whose access you want to either grant or deny to processes within the job. This will almost always be a window handle, but it may be other USER objects such as a desktop, hook, icon, menu, and so on. The last two parameters, hjob and fGrant, indicate which job you are granting or denying access to. Note that this function fails if it is called from a process within the job identified by hjob. This prevents a process within a job from granting itself access to an object.
The last type of restriction that can be placed on a job is related to security. A JOBOBJECT_SECURITY_LIMIT_INFORMATION structure looks like this:
 typedef struct _JOBOBJECT_SECURITY_LIMIT_INFORMATION {
    DWORD SecurityLimitFlags;
    HANDLE JobToken;
    PTOKEN_GROUPS SidsToDisable;
    PTOKEN_PRIVILEGES PrivilegesToDelete;
    PTOKEN_GROUPS RestrictedSids;
 } JOBOBJECT_SECURITY_LIMIT_INFORMATION,
 *PJOBOBJECT_SECURITY_LIMIT_INFORMATION;
The table in Figure 6 describes the members.
Naturally, once you have placed restrictions on a job, you may want to query those restrictions. You can do so easily by calling
 BOOL QueryInformationJobObject(
     HANDLE hJob,
     JOBOBJECTINFOCLASS JobObjectInformationClass,
     LPVOID lpJobObjectInformation,
     DWORD cbJobObjectInformationLength,
     LPDWORD lpReturnLength);
Like SetInformationJobObject, you pass this function the handle of the job, an enumerated type indicating what restriction information you want, the address of the data structure to be initialized by the function, and the length of the data block containing that structure. The last parameter, lpReturnLength, points to a DWORD that is filled in by the function telling you how many bytes were placed in the buffer. You can (and usually will) pass NULL for this parameter if you don't care.

Placing a Process in a Job

OK, that's it for setting and querying restrictions. Now let's get back to my StartRestrictedProcess function. After I place some restrictions on the job, I spawn the process that I intend to place in the job by calling CreateProcess. However, notice that I use the CREATE_SUSPENDED flag when calling CreateProcess. This creates the new process, but doesn't allow it to execute any code.
Since the StartRestrictedProcess function is being executed from a process that is not part of a job, the child process will also not be part of a job. If I allowed the child process to immediately start executing code, it would be running out of my sandbox and could do things successfully that I want it to be restricted from doing. So after I create the child process and before I allow it to start running, I must explicitly place the process in my newly created job. I do that by calling

 BOOL AssignProcessToJobObject(HANDLE hJob,
                               HANDLE hProcess); 
This function tells the system to treat the process (identified by hProcess) as part of an existing job (identified by hJob). Note that this function only allows a process that is not assigned to any job to be assigned to a job. Once a process is part of a job, it cannot be moved to another job or become jobless (so to speak).
Also note that when a process that is part of a job spawns another process, the new process automatically becomes part of the parent's job. However, there are two mechanisms that allow you to alter this behavior. First, you can turn on the JOB_OBJECT_BREAKAWAY_OK flag in JOBOBJECT_BASIC_LIMIT_INFORMATION's LimitFlags member. This flag tells the system that a newly spawned process can execute outside the job. But to make this happen, CreateProcess must be called with the new CREATE_BREAKAWAY_FROM_JOB flag. If CreateProcess is called with the CREATE_BREAKAWAY_FROM_JOB flag, but the job does not have the JOB_OBJECT_BREAKAWAY_OK limit flag turned on, CreateProcess fails. This mechanism is useful if the newly spawned process also controls jobs.
Second, you can turn on the JOB_OBJECT_SILENT_BREAKAWAY_OK flag in the JOBOBJECT_BASIC_LIMIT_INFORMATION LimitFlags member. This flag also tells the system that newly spawned processes should not be part of the job. However, the difference is that there is no need to pass any additional flags to CreateProcess. In fact, this flag forces new processes not to be part of the job. This flag is useful for processes that were originally designed knowing nothing about job objects.
As for my StartRestrictedProcess function, after I call AssignProcessToJobObject my new process is part of my restricted job. I now call ResumeThread so that the process's thread can now execute code under the job's restrictions. At this point, I also close the handle to the thread since I won't be using it.

Terminating All Processes in a Job

One of the most popular things that you will want to do with a job is kill all of the processes within it. Earlier, I mentioned that Microsoft Developer Studio doesn't have an easy way to stop a build in progress because it would have to somehow know which processes were spawned from the first process that it spawned. This is very tricky, and I explained how Developer Studio accomplishes this task in my June 1998 Win32 Q & A column. I suspect that in future versions of Developer Studio, Microsoft will use jobs because the code is a lot easier to write and there's much more that you can do with them.
To kill all the processes within a job, you simply call

 BOOL TerminateJobObject(HANDLE hJob, UINT uExitCode);
This function is similar to calling TerminateProcess for every process contained within the job, setting all their exit codes to uExitCode.

Querying Job Statistics

I've already discussed the QueryInformationJobObject function and how you can use it to get the current restrictions on a job. You can also use this function to get statistical information about a job. For example, to get basic accounting information you call QueryInformationJobObject, passing JobObjectBasicAccountingInformation for the second parameter and the address of a JOBOBJECT_BASIC_ACCOUNTING_INFORMATION structure, defined as follows:

 typedef struct _JOBOBJECT_BASIC_ACCOUNTING_INFORMATION {
     LARGE_INTEGER TotalUserTime;
     LARGE_INTEGER TotalKernelTime;
     LARGE_INTEGER ThisPeriodTotalUserTime;
     LARGE_INTEGER ThisPeriodTotalKernelTime;
     DWORD TotalPageFaultCount;
     DWORD TotalProcesses;
     DWORD ActiveProcesses;
     DWORD TotalTerminatedProcesses;
 } JOBOBJECT_BASIC_ACCOUNTING_INFORMATION,
 *PJOBOBJECT_BASIC_ACCOUNTING_INFORMATION;
The members of this structure are described in Figure 7. In addition to querying this basic accounting information, you can make a single call to query both basic and I/O accounting information. To do this, you pass JobObjectBasicAndIoAccountingInformation for the second parameter and the address of a JOBOBJECT_BASIC_AND_IO_ACCOUNTING_INFORMATION structure, defined as follows:
 typedef struct JOBOBJECT_BASIC_AND_IO_ACCOUNTING_INFORMATION {
     JOBOBJECT_BASIC_ACCOUNTING_INFORMATION BasicInfo;
     IO_COUNTERS IoInfo;
 } JOBOBJECT_BASIC_AND_IO_ACCOUNTING_INFORMATION;
As you can see, this structure simply contains both JOBOBJECT_BASIC_ACCOUNTING_INFORMATION and IO_COUNTERS structures.
In addition to accounting information, you may also call QueryInformationJobObject at any time to get the set of process IDs for processes that are currently running in the job. To do this, you must first make a guess as to how many processes you expect to see in the job, then allocate a block of memory large enough to hold an array of these process IDs plus the size of a JOBOBJECT_BASIC_PROCESS_ID_LIST structure:
 typedef struct _JOBOBJECT_BASIC_PROCESS_ID_LIST {
     DWORD NumberOfAssignedProcesses;
     DWORD NumberOfProcessIdsInList;
     DWORD ProcessIdList[1];
 } JOBOBJECT_BASIC_PROCESS_ID_LIST,
 *PJOBOBJECT_BASIC_PROCESS_ID_LIST; 
So, to get the set of process IDs currently in a job, execute code similar to that shown in Figure 8.
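The size calculation for that variable-length structure trips people up, so here is a small platform-neutral sketch. PID_LIST is a local stand-in for JOBOBJECT_BASIC_PROCESS_ID_LIST, and PidListBytes is a hypothetical helper:

```c
#include <assert.h>
#include <stddef.h>

typedef unsigned long DWORD;

/* Local stand-in for JOBOBJECT_BASIC_PROCESS_ID_LIST (the real
   definition lives in winnt.h). ProcessIdList[1] is the classic
   variable-length array idiom: the structure is allocated with
   enough trailing space for as many IDs as you expect. */
typedef struct {
    DWORD NumberOfAssignedProcesses; /* how many the job holds      */
    DWORD NumberOfProcessIdsInList;  /* how many fit in this buffer */
    DWORD ProcessIdList[1];          /* first of N entries          */
} PID_LIST;

/* Hypothetical helper: bytes needed to hold nPids process IDs.
   One array slot is already inside the structure, so only the
   remaining nPids - 1 slots add to the size. */
size_t PidListBytes(DWORD nPids) {
    return sizeof(PID_LIST) +
           (nPids > 1 ? (nPids - 1) * sizeof(DWORD) : 0);
}
```

On Windows you would allocate PidListBytes(guess) bytes, pass the block to QueryInformationJobObject with JobObjectBasicProcessIdList, and, if NumberOfAssignedProcesses comes back larger than NumberOfProcessIdsInList, grow the buffer and retry.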
This is all of the information you get using these functions, but the operating system is actually keeping a lot more information about jobs. The additional information is kept in performance counters and can be retrieved using the functions in the Performance Data Helper function library (PDH.DLL). You can also use the Microsoft Management Console (MMC) System Monitor Control snap-in to view the job information. The dialog box in Figure 9 shows some of the counters available for job objects in the system. Figure 10 shows some of the Job Object Details counters available. You can also see that Jeff's Job has four processes in it: calc, cmd, notepad, and wordpad.
Figure 9 Job Objects in MMC

Note that performance counter information can only be obtained for jobs that were assigned names when CreateJobObject was called. For this reason, you may want to create job objects with names even though you do not intend to share these objects across process boundaries by name.
Figure 10 Job Details in MMC

Job Notifications

At this point, you certainly have the basics. There is only one thing left to cover about job objects: notifications. For example, wouldn't you like to know when all of the processes in the job terminate, or if all the allotted CPU time has expired? Or maybe you'd like to know when a new process is spawned within the job or when a process in the job terminates. If you don't care about these notifications, and many applications won't care, then working with jobs is truly as easy as I've already described. If you do care about these events, then there is a little more that you have to do.
If all you care about is whether all the allotted CPU time has expired, then there is an easy way for you to get this notification. Job objects are non-signaled while the processes within the job have not used up the allotted CPU time. Once all the allotted CPU time has been used, Windows forcibly kills all processes in the job and signals the job object. You can easily trap this event by calling WaitForSingleObject (or a similar function). Incidentally, you can reset the job object to the non-signaled state if you later call SetInformationJobObject, granting the job more CPU time.
When I first started working with jobs, it seemed that the job object should be signaled when there are no processes running within it. After all, process and thread objects are signaled when they stop running; a job should be signaled when it stops running. This way, you could easily determine when a job had run to completion. However, Microsoft chose to signal the job when the allotted time expires instead because that signals an error condition. Since many jobs will start off with one parent process that hangs around until all its children are finished, you can simply wait on the parent process's handle to know when the entire job is finished. My StartRestrictedProcess function shows you when the job's allotted time has expired or when the parent process in the job has terminated.
Well, I've just described how to get some simple notifications, but I haven't explained what you need to do to get more advanced notifications such as process creation/termination. If you want these additional notifications, you must put a lot more infrastructure into your application. In particular, you must create an I/O completion port kernel object and associate your job objects with the completion port. Then, you must have one or more threads that wait on the completion port for job notifications to arrive so that they can be processed.
The completion port is a very complex kernel object that has many cool uses, but it is far too involved to go into here. Instead, I urge you to see the Platform SDK documentation or Chapter 15, Device I/O, of my Advanced Windows book for a full explanation of I/O completion ports.
Once you've created the I/O completion port, you associate a job with it by calling SetInformationJobObject as follows:

 JOBOBJECT_ASSOCIATE_COMPLETION_PORT joacp;

 // Any value to uniquely identify this job
 joacp.CompletionKey = 1;

 // Handle of the completion port that receives notifications
 joacp.CompletionPort = hIOCP;

 SetInformationJobObject(hJob,
     JobObjectAssociateCompletionPortInformation,
     &joacp, sizeof(joacp));
After this code executes, the system will monitor the job. As events occur, it will post events to the I/O completion port. (By the way, you can call QueryInformationJobObject to retrieve the completion key and completion port handle, but it is very unlikely that you would ever have to do this.)
Threads monitor an I/O completion port by calling GetQueuedCompletionStatus:
 BOOL GetQueuedCompletionStatus(HANDLE hIOCP,
     PDWORD pNumBytesTransferred,
     PULONG_PTR pCompletionKey,
     LPOVERLAPPED *lpOverlapped,
     DWORD dwMilliseconds);
When this function returns a job event notification, *pCompletionKey will contain the completion key value set when SetInformationJobObject was called to associate the job with the completion port. This lets you know which job had an event. The value in *pNumBytesTransferred indicates which event occurred (see Figure 11). Depending on the event, the value in *lpOverlapped will indicate a process ID.
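A monitoring thread's dispatch over these notification codes might look like the sketch below. The JOB_OBJECT_MSG_* values are assumed from winnt.h (verify against your headers), and DescribeJobMessage is a hypothetical helper:

```c
#include <assert.h>
#include <string.h>

/* Notification codes delivered through *pNumBytesTransferred, as
   winnt.h is understood to define them; verify against your SDK
   headers. */
#define JOB_OBJECT_MSG_END_OF_JOB_TIME       1
#define JOB_OBJECT_MSG_END_OF_PROCESS_TIME   2
#define JOB_OBJECT_MSG_ACTIVE_PROCESS_LIMIT  3
#define JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO   4
#define JOB_OBJECT_MSG_NEW_PROCESS           6
#define JOB_OBJECT_MSG_EXIT_PROCESS          7
#define JOB_OBJECT_MSG_ABNORMAL_EXIT_PROCESS 8

/* Hypothetical helper: the kind of switch a monitoring thread would
   run over each notification pulled from the completion port. */
const char *DescribeJobMessage(unsigned msg) {
    switch (msg) {
    case JOB_OBJECT_MSG_END_OF_JOB_TIME:       return "job CPU time exhausted";
    case JOB_OBJECT_MSG_END_OF_PROCESS_TIME:   return "per-process CPU time exhausted";
    case JOB_OBJECT_MSG_ACTIVE_PROCESS_LIMIT:  return "active process limit exceeded";
    case JOB_OBJECT_MSG_ACTIVE_PROCESS_ZERO:   return "no processes left in job";
    case JOB_OBJECT_MSG_NEW_PROCESS:           return "process created in job";
    case JOB_OBJECT_MSG_EXIT_PROCESS:          return "process in job exited";
    case JOB_OBJECT_MSG_ABNORMAL_EXIT_PROCESS: return "process in job exited abnormally";
    default:                                   return "unknown notification";
    }
}
```

In a real monitoring loop, this switch would run on the value returned through pNumBytesTransferred after each call to GetQueuedCompletionStatus.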
Just one last note about this: by default, a job object is configured so that when the job's allotted CPU time expires, all the job's processes are terminated automatically and the JOB_OBJECT_MSG_END_OF_JOB_TIME notification does not get posted. If you want to prevent the job object from killing the processes and instead just notify you that the time has been exceeded, you must execute code like this:
 // Create a JOBOBJECT_END_OF_JOB_TIME_INFORMATION
 // structure and initialize its only member
  JOBOBJECT_END_OF_JOB_TIME_INFORMATION joeojti;
  joeojti.EndOfJobTimeAction = JOB_OBJECT_POST_AT_END_OF_JOB;

 // Tell the job object what we want it to do when the
 // job time is exceeded
 SetInformationJobObject(hJob,
      JobObjectEndOfJobTimeInformation,
      &joeojti, sizeof(joeojti)); 
The only other value you can specify for an end-of-job-time action is JOB_OBJECT_TERMINATE_AT_END_OF_JOB, which is the default when jobs are created anyway.

Conclusion

Prior to Windows 2000, Microsoft did not allow nearly enough control over processes. While it has been a long time coming, the job object certainly addresses many of the things that developers care about and have spent countless hours trying to get the operating system to do. The job object comes with the bonus that you can now apply restrictions to a single process or to a set of processes all at once. If you find yourself requiring more control over a process's execution, make sure you check the latest job object documentation to see if Microsoft has added the abilities you need. My guess is that Microsoft will add many more capabilities to the job object as new versions of Windows appear.


From the March 1999 issue of Microsoft Systems Journal. Get it at your local newsstand, or better yet, subscribe.


For related information see:
Job Objects at http://premium.microsoft.com/msdn/library/sdkdoc/winbase/prothred_9joz.htm.
Also check http://msdn.microsoft.com for daily updates on developer programs, resources and events.

© 1999 Microsoft Corporation. All rights reserved.