December 1996

Microsoft Systems Journal Homepage

First Aid for the Thread-Impaired: Using Multiple Threads with MFC

Russell Weisz

Russell Weisz is a software developer at The Windward Group, a Los Gatos, California-based company that provides software research and development, quality assurance, and documentation services.

For years, Windows programmers have been able to avoid using threads. With many applications going the Internet way, time is running out for applications that are not thread-based. After all, the Internet is not as fast as, er, a snail. But the Internet is just one of many places where threads are required for smooth-running programs. Because it's been years since a lot of us have been near a textbook that explains threads, I'm going to cover some of the groundwork for writing thread-based applications, and then show you how all of this is done with MFC. I will then provide an MFC class that will let you jump on the thread bandwagon as quickly as possible.

There are many ways of defining threads. A common definition of a thread is the basic entity to which the operating system allocates CPU time. You can also think of a thread in application terms. Every application has at least one thread-the primary thread that begins with the program's first executed instruction. In addition, the program can create secondary threads, which can be thought of as independent agents with instructions like "gather this data from the Internet and let me know when you're finished" or "monitor disk usage on the file server and alert me if it exceeds 90 percent busy."

So a thread, then, is an executing stream of code within an application that runs concurrently with the application's other threads, and shares an address space with them, along with access to the application's variables, file handles, device contexts, objects, and other resources. Threads are different from processes, which typically don't share resources or an address space and communicate only through the mechanisms that the operating system provides for interprocess communication, such as pipes, queues, COM, and DDE. Threads often use simpler, less resource-intensive forms of communication like semaphores, mutexes, and events.

Why and how do you use multiple threads in your projects? First, I'll explain why you would want to use threads; then to show you why, I'll create a new class, CMultiThread, that acts as a wrapper around the MFC CWinThread class. CMultiThread encapsulates many of the common details required for CWinThread applications, and makes it easier to create safe and readable multithreaded code.

Why Multithread?

If you've programmed without using multiple threads for all this time, why should you complicate your life with multithreading now? Threads can improve the responsiveness, structure, and efficiency of your code. In addition, some programs containing concurrent threads may run significantly faster on multiprocessor computers under multiprocessor operating systems like Windows NT® since each thread could get full use of its own respective CPU.

Consider an application that monitors the performance and available capacity of a network file server. When the server is heavily loaded, it may take several minutes to get utilization measurements from a critical disk drive. A single-threaded application loop that checks the usage of network links, disk drives, CPU, and memory in sequence would be unable to display any data for minutes at a time. A multithreaded application, on the other hand, with one thread dispatched to measure each device and another thread to display the results, would be able to work around the overloaded drive and promptly update the figures for any devices that were able to report their status in a timely fashion. The result is that the total elapsed time until a set of tasks is accomplished can be the maximum of the individual task times rather than their sum, which can result in significant performance improvements. In cases like this, where partial results are meaningful and complete results may be difficult to obtain, multithreading can mean the difference between a useful application and a useless one.

Figure 1 illustrates the performance improvement that multithreading can provide in a situation where two lengthy disk queries must be performed.

Figure 1 Multithreading and Performance

Threads can also improve application responsiveness by managing the priorities of background tasks (a thread's priority can be changed while it is executing, incidentally). A spreadsheet function could, for instance, create a low-priority thread to recalculate in the background without affecting the speed of scrolling or data entry.

Multithreading provides a framework for managing the asynchronous activities in an application. Each thread can focus on the activities specific to that thread. Powerful synchronization mechanisms, like those provided with MFC, allow the coordination of these separate activities.

On the other hand, it's not wise to use threads indiscriminately. Each thread entails a certain amount of system overhead, so it's a good idea to use them only when they significantly improve your program.

Where They're Useful

Certain applications seem made for multithreading. Monitoring applications are practically archetypes. In fact, necessity being the mother of invention, a surgical monitoring system for anesthesiologists that I helped develop for a client inspired this article. The system consists of a LAN-attached PC equipped with a sensor card and a touch-sensitive screen for data entry. Its main job is to present a number of graphs showing patient vital signs, types and quantities of drugs administered, and volumes of input and output fluids. It also displays the status of the network connection, which it uses to back up data. Most displayed items are read automatically from the sensor card, although information about drugs administered is entered by an assistant at a touchscreen or keyboard.

The following activities were each encapsulated in a class running its own thread:

  • Graph current and historical data on the screen
  • Acquire data from the sensor card and filter, format, and write it to disk
  • Collect critical measurements every two seconds and display them numerically
  • Check on network availability and monitor the power to the sensor card
  • Back up data to the network
  • Log errors encountered by the other threads to disk

These threads were encapsulated in objects like CMultiThread to localize the details of managing multiple threads (see Figure 2 for a simplified example). The multithreading used in this application enabled all these activities to proceed roughly in parallel, allowed the system to be responsive to changes and user input, and resulted in good overall performance.

Client/server applications using distributed server data are also ideally suited for multithreading. A query of a distributed database can dispatch threads to each of the file servers containing a piece of the database, and another thread can display the unsorted results of the query to the user as they come in. This enables the user to start thinking about the results early, which can be quite useful-especially if the user is the person monitoring the EKG machine! For instance, if a flood of unexpected records is returned, the user can cancel the query at an early stage. Meanwhile, another thread can wait for all the query threads to complete to begin sorting the results. It's easy to see how this generalizes to Internet queries as well. This can be accomplished with low processor overhead, because thread execution can easily be suspended (blocked) until certain events cause the system to wake them up again. The thread does not have to periodically check to see whether it's time to run again.

Generalizing these examples, there are at least four types of activities where multithreading makes sense and can deliver performance benefits:

  • Scheduled (timer-driven) activities. Like the data-acquisition thread in my surgical information system, timer-driven activities are blocked until the system determines that the timeout period is up. Minimum times can be set with millisecond precision in Win32®.
  • Event-driven activities. The threads may be triggered by signals from other threads. In the surgical monitoring system, the error-logging thread is inactive until one of the other threads alerts it to an error condition.
  • Distributed activities. When data must be collected from (or distributed to) several computers, it makes sense to create a thread for each request so that these naturally asynchronous tasks can proceed in parallel, as in the example of a query of distributed server data.
  • Prioritized activities. To improve a program's responsiveness, it's sometimes useful to divide its work into a high-priority thread for the user interface and a lower-priority thread for background work. In the surgical information system, the user-interface thread that enables the surgical assistant to enter information about administered drugs is a high-priority thread to preserve its responsiveness, while the network-backup thread has a lower priority.
How to Multithread

MFC distinguishes two types of threads: worker threads, for background tasks like spreadsheet recalculation, and user-interface (UI) threads, for gathering input and displaying output. Every program's main thread is a UI thread. Threads are activated with calls to AfxBeginThread, which, through parameter overloading, has a form corresponding to each of the two types of threads. Worker threads are simple to start and use because they utilize instances of MFC's CWinThread class (created automatically for you by the call to AfxBeginThread), in contrast to UI threads, which require you to derive a class from CWinThread and override certain functions. In operation, UI threads differ from worker threads in that they spend time in their Run method, which does message pumping (that is, calls to TranslateMessage, DispatchMessage, and OnIdle). Worker threads rely on the main thread or another UI thread for message pumping.

Typically, worker threads are used for background tasks that run once and then terminate, while UI threads are used for threads that interact with the user. You might use UI threads, for instance, in an application that creates several independent windows, each of which occasionally needs to do extensive processing. In a single-threaded situation, this could interfere with the responsiveness of the user interface. A UI thread for each window would create independent message processing and ensure that each remained responsive.

In many cases, neither of these forms of thread is precisely what you need. There's often a need for repetitive scheduled or event-driven activities, and neither worker threads nor UI threads contain the requisite synchronization or timing mechanisms. What's called for is a persistent thread that waits with little or no overhead until the time or situation is right for it to spring into action. Internal message processing isn't generally necessary because you can rely on the application's main UI thread for that.

To address these and other considerations, I created CMultiThread, which I'll discuss in detail later. First, I'll discuss worker and UI threads as provided by MFC and some of the issues that arise when using them or CMultiThread.

Since both worker threads and UI threads are derived from CWinThread, there is little difference in core functionality; the main differences are convenience and overhead. Creating a worker thread can be as simple as creating a function and passing it to AfxBeginThread:

CWinThread* AfxBeginThread
  (AFX_THREADPROC pfnThreadProc, LPVOID pParam, 
   int nPriority = THREAD_PRIORITY_NORMAL, 
   UINT nStackSize = 0, DWORD dwCreateFlags = 0,
   LPSECURITY_ATTRIBUTES lpSecurityAttrs = NULL);

The simplest, most common usage looks like this:

myWinThread = AfxBeginThread(pMyFunction, pParam); 

where pMyFunction points to a thread control function of the form:

UINT MyFunction( LPVOID pParam );

The pParam passed to AfxBeginThread is the app-defined parameter to this control function.

As mentioned, you don't have to create a CWinThread object or derive a class from CWinThread when you start a worker thread-AfxBeginThread does that for you. If you wish, you can specify a priority. (You'll probably want to do this if you want your worker thread to run at a lower priority than UI threads, since the default is THREAD_PRIORITY_NORMAL; lower settings such as THREAD_PRIORITY_BELOW_NORMAL or THREAD_PRIORITY_LOWEST are more typical values for background printing or recalculation threads.) If you don't specify otherwise, the thread will terminate when your function exits.

Let's look at the other parameters to AfxBeginThread. nStackSize controls the size of the thread's local stack. There's little need to specify this, since the default is the size of the application's stack, and Windows 95 and Windows NT will enlarge it automatically up to 1MB. If you're starting many threads, you might want to specify a low value for each to avoid using memory unnecessarily.

dwCreateFlags specifies whether the thread will run immediately or will be created in a suspended state (to be started with a call to the ResumeThread member function). The default is zero, which indicates an immediate start. If you want a later start, supply a value of CREATE_SUSPENDED for this parameter.

lpSecurityAttrs points to a structure that specifies the security attributes for the thread, which determine the kind of access to files and certain other system resources (Windows NT only). The default value of NULL means that the thread inherits the security attributes of the calling thread.

For an example of exactly how this would be implemented in the case of background spreadsheet recalculation, examine the MTRECALC sample application included with Visual C++® 4.x.

Creating a UI thread is more complex. Before you call AfxBeginThread to start a UI thread, you must derive a class from CWinThread and override certain members. The call to AfxBeginThread to start a UI thread is like the call for worker threads, except that the controlling function and its parameters are replaced by a pointer to a CWinThread-derived object. Instead of running a C function, this version of AfxBeginThread will call the InitInstance and Run members of the supplied object.

When you derive a class for this use, you must override InitInstance, if only to return a value of TRUE (or you can use it to do actual work). You may also override ExitInstance to clean up at the termination of the thread. You might want to override OnIdle if you have thread-specific background activity like garbage collection; however, your main thread may have its own OnIdle processing to take care of.

Your Run method gets messages for the windows created by the thread. It sends them to PreTranslateMessage, which you may want to override, then sends them through MFC's standard message routing.

CWinThread has data members that contain the thread ID, a thread handle, and a pointer to the application's main window. It also has an m_bAutoDelete member, which determines whether or not the object should be destroyed when the thread terminates.

CWinThread has methods to get and set the thread's priority, and to get a pointer to the thread's main window. In terms of external control, the members that you're most likely to use are SuspendThread and ResumeThread. The first increments a thread's suspend count, and the second decrements it. When the OS dispatches threads, it declines to dispatch any with a nonzero suspend count. Obvious consequences are that calls to SuspendThread and ResumeThread must be paired, and a single call to ResumeThread will not start up a thread if it has been suspended more than once.

You must declare and implement your thread class using the DECLARE_DYNCREATE and IMPLEMENT_DYNCREATE macros. When you're ready to start your thread, use the following form of AfxBeginThread:

CWinThread* AfxBeginThread
  (CRuntimeClass* pThreadClass, 
   int nPriority = THREAD_PRIORITY_NORMAL, 
   UINT nStackSize = 0, DWORD dwCreateFlags = 0,
   LPSECURITY_ATTRIBUTES lpSecurityAttrs = NULL);

To see how this can be used to paint a window with multiple bouncing balls, rectangles, or colored lines, each in its own thread, examine the MTGDI sample application included with Visual C++ 4.x.

How to Synchronize

It's ironic that the word "synchronization" is used when discussing multithreading to mean almost exactly the opposite of what it means in everyday usage. Ordinarily, when two events (or two streams of media data, like sound and images) are synchronized, they are meant to occur, as nearly as possible, simultaneously. In a multithreading context, synchronization is still about coordinating events even though its purpose is often to avoid simultaneous use of shared resources.

Win32 provides a set of synchronization mechanisms. MFC wraps each of them into a parallel set of synchronization classes (see Figure 3). Their purpose is resource protection and interthread communication. All synchronization classes are derived from CSyncObject.

The synchronization classes are used in conjunction with classes that lock resources or wait for events. In MFC, a thread will call the Lock member of either a CSingleLock or CMultiLock object with a synchronization object as an argument. If the lock succeeds, then the thread continues. If the lock fails, the system suspends the thread until either the lock can succeed or a timeout period (specified as a parameter in the Lock call) has elapsed. At that point, the thread continues and proceeds in accordance with the result of the lock.

In Win32, most synchronization is accomplished with calls to WaitForSingleObject or WaitForMultipleObjects, using handles of mutexes, semaphores, or events as arguments. Alternatively, you can create a CRITICAL_SECTION data structure and use calls to InitializeCriticalSection, EnterCriticalSection, and LeaveCriticalSection to protect resources from simultaneous access. One handy example is the ReadFile function, which can obtain data from a disk, COM port, or pipe. ReadFile takes a structure containing an event handle as an argument. If you use a WaitForSingleObject call to block on that event, ReadFile will call SetEvent to notify your routine when data is available. WriteFile works in a similar fashion.

Threads can come into conflict if they attempt to simultaneously modify a virtualized physical resource, such as a file, an I/O port, or a CWnd. They can also conflict if they attempt to simultaneously modify a global or class data resource, such as a class static variable. The best solution is to wrap all shared resources in appropriate objects so that they become threadsafe. A threadsafe resource object is one that takes care of the details of synchronization internally, allowing threads with a need to access the resource to do so as if they were in a single-threaded environment (see Figure 4).

Figure 4 is adapted from the Visual C++ sample application MTGDI. MTGDI uses Win32 calls to achieve synchronization, while my adaptation shows the same functionality using the MFC wrapper classes. In this application, multiple balls bounce around the screen (see Figure 5). Each ball is controlled and displayed by its own thread, which means that all threads need access to a common CWnd. To implement this, I created a new class, CThreadSafeWnd, which contains a pointer to the CWnd and a CCriticalSection object as private members. Threads draw their balls in the CWnd by calling the PaintBall member.

Figure 5 Each ball is a single thread

The first statement in CThreadSafeWnd::PaintBall creates a CSingleLock object named csl, and associates the CCriticalSection member with it via the constructor parameter.

CSingleLock csl(&m_CSect);

The next statement tries to achieve a lock on csl.

if (csl.Lock())

If the csl object is locked by one thread, all other threads will be prevented from proceeding into the display update code until the lock is released (which happens automatically when control passes out of the PaintBall member and csl is destroyed).

Because csl.Lock is called without a parameter, the function will wait forever (or until it is stopped by external control) until it gets a lock. I could have provided a timeout parameter (in milliseconds) that would cause csl.Lock to return FALSE if a lock could not be obtained in that time. A thread with a strict time budget might need to do this.
In systems that are not heavily loaded, though, the only reason a thread will wait a long time is because of a bug (or a deadlock situation, which is, after all, a type of bug).

You'll note in my example the following commented-out line:

//AFX_MANAGE_STATE(AfxGetStaticModuleState());

CWinThread and MFC automatically create separate instances of MFC internal shared data. In certain cases, such as exported functions in a DLL, member functions of OLE/COM interfaces, and window procedures, it may be necessary to invoke this macro to ensure that state data is handled correctly.

Multithreading Do's and Don'ts

Let's go over the main pitfalls to avoid and practices to follow to help you multithread safely and efficiently. Do create threadsafe resource classes when resources need to be shared by different threads. Then you can encapsulate access synchronization to the resource. This may be sufficient to avoid reentrancy problems with shared resources or stateless variables.

When there is a persistent state involved (such as a series of calculations), if the threads are not designed to cooperate, you need per-thread instances of the resource or variable. You can use the Win32 calls TlsAlloc, TlsSetValue, TlsGetValue, and TlsFree to allocate memory on a per-thread basis. These functions support dynamic allocation of thread-specific memory for shared data. The compiler also provides static support for thread-specific data. This was mentioned above in regard to state data and AFX_MANAGE_STATE. You can also use the CThreadLocal template class to provide thread local storage on data derived from CNoTrackObject:

struct CMyThreadData : public CNoTrackObject
{
    CString strThread;
};
CThreadLocal<CMyThreadData> threadData;

Don't create threads with the C runtime functions _beginthread or _beginthreadex if the threads will access any MFC objects. The framework uses CWinThread to manage thread-specific data, so you should use AfxBeginThread to start your threads (or call the CWinThread constructor and its CreateThread method). CWinThread::CreateThread eventually calls _beginthreadex after taking care of some setup and synchronization.

Don't expect to be able to access any MFC objects not created within your thread. Each thread gets its own handle map within MFC, which means, for instance, that a handle to a CWnd passed to a thread method may be invalid inside the thread. Instead, pass handles to threads as their native HANDLE type, and then use the FromHandle or Attach methods to obtain a handle to an MFC object. Sometimes you don't have to go to this much trouble. CThreadSafeWnd::PaintBall, for instance, takes a pointer to a CWnd as an argument and works fine. It would take delving deeply into Windows internals to determine which situations require special handling and which don't. I suggest that you try the simple approach first and then, if that doesn't work, resort to FromHandle or Attach.

Do plan each thread's access to objects created by other threads. A thread can only access objects that it creates or that are passed to it. For an example of what I mean, refer back to CThreadSafeWnd. The threads responsible for the balls need to call the PaintBall member of the CThreadSafeWnd object (which I'll call m_tsw) to display the balls. Since they didn't create the m_tsw object, they need another way of getting at it. Make m_tsw a member of the application class and use the AfxGetApp macro to obtain a pointer, as shown in Figure 6. Generally speaking, having clearly defined flows for any data shared between threads will make life easier.

Do avoid deadlock. This well-known problem occurs when several threads compete for multiple resources and each obtains a lock on one or more of them. If you don't plan for this situation, the threads may end up waiting forever, as shown in Figure 7.


Figure 7 Deadlock

Threads that need multiple resources can request them with a CMultiLock object, which I'll discuss shortly. One solution to the deadlock problem is to use CMultiLock::Lock's timeout parameter, which allows at least one of the requesting threads to give up and release its resources so other threads can proceed. Another solution, if you have a common set of resources that are always required as a group, is to encapsulate them in a single threadsafe class and allocate them in an all-or-nothing fashion.

Additional Design Issues

When designing an object-oriented application that uses multiple threads, several issues should be considered. You must decide where to localize control and visibility of each thread. Control of a thread means the ability to stop it, start it, synchronize it, and modify its behavior. Visibility concerns which objects can see the thread and its properties. If you're writing object-oriented code, you will want to localize thread control and visibility within a class as much as possible, and that is the approach I'll take in the code that follows. This allows you to hide some complexity and keep the fact that your object is multithreaded within the object. On the other hand, there are sure to be cases when threads need to be visible or controlled at the global level. These could be handled with a more traditional approach.

There are a number of design questions to answer regarding the construction of an object with an embedded thread. Will the thread be timer or event driven? How will you implement that? If the thread is timer driven, will the first execution be when the thread is started or after one delay cycle? What size stack does the thread need? Once you've made those decisions, you are ready to start designing your thread.

Care must be taken to ensure that an embedded thread is properly terminated when the containing object's destructor is called. This means that the thread must unblock, free up any resources that it is holding, and end before the destructor's code is completed. If the thread unblocks after the object is destroyed, a GP fault will likely occur.

You must also provide a means by which objects can control embedded threads. When one thread needs to change the behavior of another thread, it must do so in a clean fashion. This can include stopping or starting the second thread, or changing its priority or timer interval.

Introducing CMultiThread

CMultiThread, shown in Figure 8, is a base class that addresses many of the design issues discussed here. It provides for the safe creation and destruction of a secondary thread. It provides control mechanisms by which other objects can start, stop, and modify the behavior of the embedded thread. Classes derived from CMultiThread must override the DoWork method to accomplish their real work. (DoWork is not pure virtual because this conflicts with the use of IMPLEMENT_DYNCREATE.)

CMultiThread uses a synchronization class I haven't yet discussed in detail: CEvent. A CEvent object has an internal state which can be "set" (also known as "signaled" or "available") or "not set" (also known as "not signaled," "unavailable," or "reset"). The SetEvent, ResetEvent, and PulseEvent methods manipulate this state. When you construct a CEvent, you can specify whether you wish it to be auto-reset or manual-reset. An auto-reset CEvent has its state automatically set to unavailable whenever an object obtains a lock on it, whereas a manual-reset event requires an explicit call to ResetEvent or PulseEvent to change state. When a CSingleLock or CMultiLock object attempts to get a lock on a CEvent, the lock will succeed if the event is available and fail otherwise.

CMultiThread is derived from CWinThread. When an application wishes to start up a thread using CMultiThread, it calls the CreateThread method with almost the same parameters as would be used in a direct call to CWinThread::CreateThread. There is one additional parameter: nMilliSecs is the minimum number of milliseconds to
wait between successive executions of DoWork (or a time-out interval).

The CMultiThread's member m_pWorkEvent is a pointer to an internal CEvent created by CMultiThread's constructor. Because this CEvent's constructor is called without any parameters, it is initially unavailable and auto-reset. To understand how it is used, look at the Run method. The first statement associates the CSingleLock object csl with m_pWorkEvent, and then the method enters the while loop (assuming that m_bEndThread is FALSE, which it is until CMultiThread's destructor is called). It then tries to obtain a lock on csl. Because m_pWorkEvent initially points to a nonsignaled CEvent, the lock attempt will time out after nMilliSecs milliseconds (assuming that m_nCycleTime is not infinite). At that point, the lock is released, the event is reset, and the overridden DoWork is executed. Then the cycle will be repeated until CMultiThread's destructor is called. The effect is to initiate a cycle of executions of DoWork at intervals of nMilliSecs. As you've seen, the first call to DoWork will not occur immediately, but will wait until the initial interval has passed.

To let DoWork execute immediately, call GetEvent()->SetEvent immediately after CreateThread. This will make m_pWorkEvent available and enable the lock attempt to succeed without waiting for the timeout. Because m_pWorkEvent is auto-reset, subsequent executions of DoWork will still occur at the specified interval unless the calling thread demands them sooner with additional calls to GetEvent()->SetEvent.

For clean destruction of CMultiThread, you must ensure that csl is not waiting for a lock when CMultiThread is destroyed. Therefore, the CMultiThread destructor sets the m_bEndThread member to TRUE and sets m_pWorkEvent, ensuring that csl.Lock succeeds at once and DoWork is executed one last time. Then the destructor waits for m_pExitEvent to be set so it will know when DoWork has completed. Once DoWork is finished, the Run method drops out of the while loop and sets m_pExitEvent, enabling the destructor to continue.

There is one final and subtle twist to note about CMultiThread. The final statement of the Run method calls the Run method of the base class, CWinThread, which doesn't do much except watch for messages. When the destructor process is finished, the object is destroyed and the routine is exited cleanly. If instead Run were to simply return without calling the base class, MFC would attempt to initiate destruction again, causing problems and usually a GP fault.

Derived Classes

Figure 9 demonstrates how CMultiThread hides the messy multithreading details in classes derived from CMultiThread. I've derived CAsynchGarbageCollector, a class to collect garbage, from CMultiThread. When another object calls the CollectGarbage member, CAsynchGarbageCollector adds the buffer pointed to by pBuf to the garbage list, using a CCriticalSection to synchronize access to m_BuffList. Then, if there are enough items in the garbage list to exceed the value specified by the variable CleanTime, it sets CMultiThread's work event, which lets DoWork collect the garbage (an example of DoWork execution on demand rather than timed execution). CollectGarbage runs in the thread of the calling object, while DoWork runs asynchronously in the embedded thread.

Additional Observations

Debugging a multithreaded application is harder than debugging a single-threaded application because of the synchronization issues and interactions. An advantage of CMultiThread is that its basic threading functionality has already been debugged by me, so you can derive from it and only have to debug the DoWork method. Alternatively, if you choose to develop your own base class using a similar approach, you only have to debug the threading functionality once. Reuse of the class is easy, as you've seen.

Performance tuning your threads by varying the intervals at which they're unblocked is straightforward. You can either call the CMultiThread constructor with a different interval parameter, or call CMultiThread::SetCycleTime to accomplish this. You also might find it advantageous to change a thread's priority while it's running. SetThreadPriority can take a number of values provided by constants in the Visual C++ header file winbase.h, ranging from THREAD_PRIORITY_TIME_CRITICAL down to THREAD_PRIORITY_IDLE.

CMultiThread is a class with a single embedded thread. There are several ways to extend this concept to create classes containing multiple embedded threads. This might be desirable if, for example, you want to query multiple Internet sites concurrently. The simplest way to accomplish this is to create a duplicate of CMultiThread called CMultiThread2 with an overrideable method called DoWork2. Your new class can then inherit from both CMultiThread and CMultiThread2. You can repeat this as many times as you want, as long as each class has a different DoWork signature. Admittedly, this method is not elegant, but it works and is simple.

Another method for creating such a class is to modify CMultiThread to provide additional embedded threads. This modified class, though, would have to do housekeeping for each of its threads and would need to call DoWork with a thread ID as a parameter. This would result in a significantly more complicated class.

As you've seen, multithreading is a useful tool in many applications. When you're developing such an application with an object-oriented approach, it's often useful to embed multithreading in the objects that require it. Once you've developed and debugged a multithreaded base class like CMultiThread, you can use it for any number of derived classes with ease and safety.

From the December 1996 issue of Microsoft Systems Journal.

© 2017 Microsoft Corporation. All rights reserved.