Critical Section objects are visible only to the threads of a single process. The other types of synchronization objects may also be used in a single-process application; however, the Critical Section type provides a faster mechanism for mutual-exclusion synchronization. A separate set of functions implements the initialize, enter, leave, and delete operations for Critical Section objects. These objects exist in memory allocated by the process and are released automatically when the process terminates. You can, however, delete a Critical Section object when it is no longer needed, to release the system resources allocated for it.
Like a Mutex object, a Critical Section object can be owned by only one thread at a time, which makes it useful for protecting a shared resource from simultaneous access. For example, a Critical Section object could be used to prevent more than one thread at a time from modifying a global variable. A thread must enter the Critical Section to request ownership, and leave the Critical Section to release ownership. A thread can repeatedly enter a Critical Section that it already owns without blocking, but it must leave once for each time that it entered. The EnterCriticalSection function is the equivalent of the wait functions for the other synchronization objects, with the limitations that you cannot wait for more than one object and you cannot specify a timeout interval after which to abandon the wait. If this functionality is needed, you will have to use the other object types.
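For example, a minimal sketch of repeated entry (the variable name csSharedData and the surrounding code are illustrative, and the Critical Section is assumed to have been initialized already with InitializeCriticalSection):
CRITICAL_SECTION csSharedData;    /* assumed already initialized */

EnterCriticalSection(&csSharedData);    /* first entry; may block */
EnterCriticalSection(&csSharedData);    /* owner re-enters without blocking */
.
.   /* access the protected data */
.
LeaveCriticalSection(&csSharedData);    /* one leave per enter */
LeaveCriticalSection(&csSharedData);    /* ownership released here */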
Each of the interprocess synchronization objects is designed to handle a particular type of synchronization. This section describes the use of each type. Also covered are the wait functions and the procedures for manipulating the object handles that are common to all three types.
Any thread that wants to use an Event, Mutex, or Semaphore object needs an open handle to the object. If the thread will be waiting for the object, the handle must have synchronization access. If the thread will be modifying the state of the object, the handle must have modify state access. The handle returned by the creation function always has both synchronization and modify state access. All threads of the creating process can share the same handle. If the handle was created with an object name, other processes can use the name to open a handle. For unnamed objects, the creating process must transmit information to any other processes that need to access the synchronization object. This can be done by passing the handle through inheritance to a child process, or by duplicating the handle for an unrelated process.
Named objects provide an easy way for processes to share object handles. The object name specified by the creating process is limited to MAX_PATH bytes (MAX_PATHW for Unicode); and it can include any character except the null character and the path name separator character ('\'). The names for each type of object exist in their own flat name space, so an Event object could have the same name as a Mutex object without collision. After one process has created a named object, other processes can use the name in the appropriate Open function to get a handle to the object. Each opening process must specify the desired access to the object. The following code fragments illustrate this procedure for a Mutex object:
/********** creating process **********/

HANDLE hMutex;

hMutex = CreateMutex(NULL,                  // no security descriptor
                     FALSE,                 // mutex not owned
                     "NameOfMutexObject");  // object name
if (!hMutex) {
    /* check for error */
}
.
.
.

/********** other processes **********/

HANDLE hMutex;

hMutex = OpenMutex(MUTEX_ALL_ACCESS,      // synchronize, modify access
                   FALSE,                 // handle not inherited
                   "NameOfMutexObject");  // object name
if (!hMutex) {
    /* check for error */
}
.
.
.
The next two sections show how to pass object handles to other processes without using named objects. This can be useful in situations where you want to ensure that a name collision does not occur.
A child process can inherit an open handle to a synchronization object if the bInheritHandle attribute (in the SECURITY_ATTRIBUTES structure) was set when the handle was created. The handle inherited by the child process has the same access as the parent's handle. The parent can pass the value of the handle to the child as a command-line argument via the CreateProcess function. The following code fragments illustrate this procedure:
char CommandLine[80];
HANDLE hEventObj;
BOOL Success;
SECURITY_ATTRIBUTES SecurityAttributes;
STARTUPINFO StartupInfo;
PROCESS_INFORMATION ProcessInfo;

/* create event object that can be inherited */
SecurityAttributes.bInheritHandle = TRUE;
SecurityAttributes.lpSecurityDescriptor = NULL;
SecurityAttributes.nLength = sizeof(SECURITY_ATTRIBUTES);

hEventObj = CreateEvent(&SecurityAttributes,
                        FALSE,   // auto reset event
                        FALSE,   // initial state = not signalled
                        NULL);   // no name

/* pass handle as a string in command line for child process */
sprintf(CommandLine, "%s %d",
        "childproc",   /* pathname of executable file */
        hEventObj);    /* object handle */

/*
 * spawn child; pass handle in command line;
 * set inherit handles to TRUE
 */
memset(&StartupInfo, 0, sizeof(StartupInfo));
StartupInfo.cb = sizeof(StartupInfo);

Success = CreateProcess(NULL,
                        CommandLine,   /* args with handle string */
                        NULL,
                        NULL,
                        TRUE,          /* inherit handles */
                        0,
                        NULL,
                        NULL,
                        &StartupInfo,
                        &ProcessInfo);
The child process can then use the GetCommandLine function to retrieve the command-line string and convert the handle argument back into a usable handle.
char *CommandLine;
char ChildProcName[80];
HANDLE hEventObj;

CommandLine = GetCommandLine();
sscanf(CommandLine, "%s %d", ChildProcName, &hEventObj);
To share an unnamed object between unrelated processes, the creating process must communicate the information necessary for the other process to duplicate the handle. The duplicating process will need a handle to the creating process and the creating process's handle to the object to be duplicated. Any of the methods of interprocess communication described in other chapters can be used (e.g., named pipe, shared file, shared memory). The duplicating process can open its handle with the same access as the original handle by specifying DUPLICATE_SAME_ACCESS in the DuplicateHandle call. Or it can specify a subset of the original handle's access.
The following code fragment shows the steps to be taken by the creating process:
HANDLE hMutexObj;
DWORD dwCreatingProcessID;

/* create a mutex object */
hMutexObj = CreateMutex(NULL, FALSE, NULL);

/* get the ID of the creating process */
dwCreatingProcessID = GetCurrentProcessId();
.
.   /* communicate pid and Mutex handle to other process */
.
Then the duplicating process opens its handle with the same access as the creator:
HANDLE hMutexSrcHandle, hCreatingProcess;
HANDLE hMutexDupedHandle;
.
.   /* get communicated handle and pid from creating process */
.
hCreatingProcess = OpenProcess(PROCESS_DUP_HANDLE,
                               FALSE,                 /* handle not inherited */
                               dwCreatingProcessID);  /* ID of creating process */

DuplicateHandle(hCreatingProcess,      /* source process */
                hMutexSrcHandle,       /* handle value in source process */
                GetCurrentProcess(),   /* target process */
                &hMutexDupedHandle,    /* receives the duplicated handle */
                0,                     /* access ignored with DUPLICATE_SAME_ACCESS */
                FALSE,                 /* not inherited */
                DUPLICATE_SAME_ACCESS);
Two generic functions, WaitForSingleObject and WaitForMultipleObjects, are used by threads to wait for the state of a waitable object to be Signalled. In addition to Event, Mutex, and Semaphore objects, these functions may be used to wait for process and thread objects. The process must have an open handle with synchronize access to any object for which it is waiting. These functions are not used to wait for Critical Section objects.
The WaitForSingleObject function waits for a single instance of any of these object types. If the state of the object is Signalled (or becomes Signalled before the timeout period has elapsed), the function returns WAIT_OBJECT_0 (zero) and the calling process may continue its execution. If the timeout interval elapses before the object is Signalled, the nonzero value WAIT_TIMEOUT is returned. If a Mutex object is being waited for, the function may instead return WAIT_ABANDONED, indicating that the Mutex had been owned by another thread that terminated without releasing its ownership. In most situations, WaitForSingleObject also modifies the object: the count of a Semaphore object is decremented; an Auto Reset Event object is reset to Not-Signalled; and a Mutex object becomes owned (Not-Signalled). The state of a Manual Reset Event object is not changed by this function. If the waitable object is not Signalled, the function blocks until some other thread changes the state of the object, or the timeout period has elapsed. Note that if the timeout interval is set to INFINITE (-1), the function will wait indefinitely.
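For example, a minimal sketch of checking the possible return values (hObject here stands for any hypothetical handle, opened with synchronize access, to one of the waitable object types):
DWORD waitresult;

/* wait up to 10 seconds for the object to become Signalled */
waitresult = WaitForSingleObject(hObject, 10000L);

switch (waitresult) {
case WAIT_OBJECT_0:     /* object was Signalled; a Mutex is now owned */
    break;
case WAIT_ABANDONED:    /* Mutex only: owner terminated without releasing it */
    break;
case WAIT_TIMEOUT:      /* 10 seconds elapsed; object still Not-Signalled */
    break;
case WAIT_FAILED:       /* call failed; call GetLastError for details */
    break;
}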
The WaitForMultipleObjects function allows a thread to wait on more than one object at the same time. The objects may be a mixture of different types of waitable objects. For example, you could wait for an Event to be signalled and for a Mutex to be unowned. The function may be used to wait for any one or for all of the objects to be Signalled.
If the function's WaitAll parameter is true, the wait is not successful unless all of the objects attain the Signalled state at the same time. For example, suppose a thread requires access to several Mutex-protected regions of shared memory at the same time. The function will block until all of the Mutex objects are unowned, at which time the thread will acquire ownership of them all and the function will return zero. The thread can then access the shared memory while access by other threads is prevented. If one of the objects is an abandoned Mutex, the function will instead return (WAIT_ABANDONED_0 + index), where index identifies the abandoned Mutex.
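As a sketch of the WaitAll case, suppose hMutexA and hMutexB (hypothetical handles) protect two such regions of shared memory:
HANDLE hMutexes[2];
DWORD waitresult;

hMutexes[0] = hMutexA;
hMutexes[1] = hMutexB;

/* block until both Mutexes are unowned, then take ownership of both */
waitresult = WaitForMultipleObjects(2, hMutexes, TRUE, INFINITE);

if (waitresult == WAIT_OBJECT_0) {
    .
    .   /* access both shared regions */
    .
    ReleaseMutex(hMutexes[1]);
    ReleaseMutex(hMutexes[0]);
}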
If the WaitAll parameter is false, the wait is satisfied when any one of the objects is Signalled. A process could use this to wait on a group of Event objects so that the function blocks until an event of interest occurs, at which time the function returns (WAIT_OBJECT_0 + index), where index is the array index of the object that satisfied the wait. If more than one object is Signalled when the wait function is called, the one with the lowest array index will satisfy the wait. If the object that satisfied the wait is an abandoned Mutex, the function will return (WAIT_ABANDONED_0 + index).
The following code fragment creates five Event objects and then waits for one of them to be Signalled.
HANDLE hEventObjs[5];
ULONG i;
DWORD event;

// create 5 event objects
for (i = 0; i < 5; i++) {
    hEventObjs[i] = CreateEvent(NULL, FALSE, FALSE, NULL);
    if (!hEventObjs[i]) {
        /* deal with error */
    }
}
.
.
.
while (TRUE) {
    event = WaitForMultipleObjects(5,           // number of objects
                                   hEventObjs,  // array of objects
                                   FALSE,       // wait for any
                                   500L);       // wait for 1/2 sec
    switch (event) {
    case WAIT_OBJECT_0 + 0:   // hEventObjs[0] was signalled
    case WAIT_OBJECT_0 + 1:   // hEventObjs[1] was signalled
    .
    .
    .
    case WAIT_TIMEOUT:        // deal with timeout
        break;
    }
}
In general, you should be aware of the danger of deadlocking a process due to a wait that is never satisfied. For example, if one thread fails to release its ownership of a Mutex, another waiting thread could be blocked indefinitely if the timeout interval is infinite.
You can use an Event object to trigger execution of other processes or of other threads within a process. This is useful if one process provides data to many other processes. Using an Event object frees the other processes from the trouble of polling to determine when new data is available. For example, the Comm functions use an Event object to notify a thread that an event of interest has occurred on a device that is being monitored.
The CreateEvent function creates either a Manual Reset Event or an Auto Reset Event, depending on the value of its bManualReset parameter. CreateEvent also sets the initial state of the Event to either Signalled (TRUE) or Not-Signalled (FALSE). When an event's state is Not-Signalled, any thread waiting on the Event will block.
An Event may be set to the Signalled state by calling SetEvent. For Manual Reset Events, this releases all threads that are waiting on the Event; and the Event remains Signalled until it is explicitly reset to the Not-Signalled state by calling ResetEvent. For Auto Reset Events, the SetEvent function causes the Event to remain Signalled until one waiting thread is released, at which time the Event is automatically reset to the Not-Signalled state. PulseEvent sets the Event to the Signalled state, releases waiting threads (one thread for an Auto Reset Event, all waiting threads for a Manual Reset Event), and then immediately resets the Event to the Not-Signalled state; the reset occurs even if there were no waiting threads to be released. If an Event is pulsed, the Event will remain set long enough for a thread using WaitForMultipleObjects with WaitAll set to true to determine whether the other objects are Signalled.
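The following sketch illustrates the three calls with a hypothetical unnamed Manual Reset Event (error checking omitted):
HANDLE hEvent;

/* manual reset, initial state = Not-Signalled */
hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);

SetEvent(hEvent);     /* releases all waiting threads; Event stays Signalled */
ResetEvent(hEvent);   /* back to Not-Signalled; new waiters will block */
PulseEvent(hEvent);   /* releases any current waiters, then resets immediately */

CloseHandle(hEvent);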
In the following code fragments, a master process repeatedly writes new data to a shared memory buffer; and several worker processes wait for their turn to process a batch of data. The master process creates two Auto Reset Event objects to synchronize access to the shared memory. The WriteEvent object blocks the workers while the master writes; and the ReadEvent object notifies the master that a worker has read the data and it is safe to write again.
HANDLE hWriteEvent, hReadEvent;
DWORD waitresult, errcode;

/* create event object to notify workers when writing is done */
hWriteEvent = CreateEvent(NULL,           // no security attributes
                          FALSE,          // auto reset (release one waiter)
                          FALSE,          // initial state = Not-Signalled
                          "WriteEvent");  // object name
if (!hWriteEvent) {
    /* error exit */
}

/* create event object to notify master that reading is done */
hReadEvent = CreateEvent(NULL,          // no security attributes
                         FALSE,         // auto reset (release one waiter)
                         FALSE,         // initial state = Not-Signalled
                         "ReadEvent");  // object name
if (!hReadEvent) {
    /* error exit */
}

while (TRUE) {
    .
    .   /* write new data to shared memory */
    .
    // set hWriteEvent to Signalled state to release one worker;
    // hWriteEvent is automatically reset when a worker thread is released
    if (!SetEvent(hWriteEvent)) {   // error exit
        errcode = GetLastError();
        break;
    }

    /* wait on hReadEvent for data to be read; break if error */
    waitresult = WaitForSingleObject(hReadEvent, 500L);
    if (waitresult != WAIT_OBJECT_0)
        break;
}
Using the WaitForSingleObject function, the workers wait for the WriteEvent to be signalled. When the Event is signalled, one worker is released while the others continue to wait. When a worker is released to process the data, it first reads from shared memory and then signals the master before going on to complete its task.
HANDLE hWriteEvent, hReadEvent;
DWORD waitresult;

hWriteEvent = OpenEvent(EVENT_ALL_ACCESS, FALSE, "WriteEvent");
hReadEvent = OpenEvent(EVENT_ALL_ACCESS, FALSE, "ReadEvent");

while (TRUE) {
    /* wait indefinitely on hWriteEvent for data to be written */
    waitresult = WaitForSingleObject(hWriteEvent, INFINITE);
    if (waitresult != WAIT_OBJECT_0)
        break;
    .
    .   /* read data from shared memory */
    .
    /*
     * set hReadEvent to Signalled state to release the master process;
     * hReadEvent is automatically reset after the master is released
     */
    if (!SetEvent(hReadEvent)) {
        /* error exit */
    }
    .
    .   /* process data */
    .
}
You can use a Mutex object to protect a shared resource from simultaneous access by multiple threads or processes. It does this by requiring each thread to wait for ownership of the Mutex before it can execute the code in which the shared resource is accessed. For example, if several processes need to write to the same disk file, the Mutex object can be used to permit only one process at a time to write to the file.
The CreateMutex function creates a Mutex object and sets its initial state to either unowned or owned by the creating process. The handle returned to the creating process has synchronize and modify access to the Mutex object.
Before entering the section of code that accesses the shared resource, you need to request ownership of the Mutex by calling either WaitForSingleObject or WaitForMultipleObjects. If another thread already owns the Mutex, these functions will block until the Mutex has been released or the timeout period has elapsed. If the Mutex is currently unowned, the system grants ownership to the requesting thread and it can execute the protected code. When it has finished using the shared resource, the thread uses the ReleaseMutex function to relinquish ownership of the Mutex, thereby allowing another thread to become owner. While a thread has ownership of a Mutex, it can make additional wait calls on the same Mutex object without blocking. However, to relinquish ownership of the Mutex, ReleaseMutex must be called once for each time that a wait was satisfied.
The following code fragment shows how a thread creates a Mutex object, requests ownership, and after writing to a shared file, releases ownership:
HANDLE hFileMutex;
DWORD waitresult;

/* create an initially unowned mutex */
hFileMutex = CreateMutex(NULL, FALSE, NULL);
if (!hFileMutex) {
    /* check for error */
}
.
.
.
/* request ownership of Mutex */
waitresult = WaitForSingleObject(hFileMutex, 5000L);

switch (waitresult) {
case WAIT_OBJECT_0:
    try {
        .
        .   /* write data to shared file */
        .
    }
    finally {
        /* release ownership of Mutex */
        if (!ReleaseMutex(hFileMutex)) {
            /* deal with error */
        }
    }
    break;
case WAIT_TIMEOUT:
    /* unable to get ownership of mutex due to timeout */
    break;
case WAIT_ABANDONED:
    /* got ownership of abandoned mutex */
    break;
}
The example uses the try . . . finally structured exception handling syntax to ensure that a thread properly releases a Mutex. The finally block of code is executed no matter how the try block terminates (unless the try block includes a call to TerminateThread). This prevents the Mutex from being inadvertently abandoned. Either of the wait functions can return WAIT_ABANDONED if a Mutex has been abandoned, which occurs if a thread terminates without releasing its ownership of the Mutex. Only Mutex objects can be abandoned, since Event and Semaphore objects cannot be owned. The waiting thread will be given ownership of the abandoned Mutex, but you should probably assume that an abandoned Mutex means that the shared resource is in an undefined state and the process should terminate. If the thread proceeds normally as though the Mutex had not been abandoned, the WAIT_ABANDONED flag is cleared so future waits are satisfied normally.
A Semaphore object is useful in controlling the number of threads that are simultaneously using a shared resource. It acts like a gate that counts the threads as they enter and exit the controlled area.
The CreateSemaphore function creates a Semaphore object, specifying the initial count and the maximum count. The handle returned to the creating process has synchronize and modify access to the Semaphore object. Processes that open, inherit, or duplicate the Semaphore should duplicate this access so they will be able to wait for the object (synchronize access) as well as release it (modify access).
When a thread wants to pass through a Semaphore gate, it calls either WaitForSingleObject or WaitForMultipleObjects. If the count of the Semaphore is greater than 0, the count is decremented and the wait function returns so the thread can execute the protected code. If the count of the Semaphore is 0, the thread will block until the timeout period has elapsed or some other thread increments the count by releasing the Semaphore. When it has finished using the shared resource, the thread exits the gate with the ReleaseSemaphore call. This increments the count of the Semaphore by a specified amount. Typically, you would use an increment of 1 when releasing the Semaphore, but you could specify a larger increment as long as the resulting count is not greater than the maximum count. For example, a Semaphore might be created with an initial count of 0 to block access during an initialization phase of the program. Then after the initialization, the creating process could use ReleaseSemaphore to increment the count to the maximum.
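That initialization pattern might look something like the following sketch (the handle name and counts are illustrative):
HANDLE hGate;
LONG lMaxCount = 5;

/* initial count of 0 keeps the gate closed during initialization */
hGate = CreateSemaphore(NULL, 0, lMaxCount, NULL);
.
.   /* perform initialization while any waiting threads block */
.
/* open the gate by raising the count from 0 to the maximum */
ReleaseSemaphore(hGate, lMaxCount, NULL);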
There is no ownership of Semaphore objects, so if a thread repeatedly enters the Semaphore gate, the count will be decremented each time and the thread will block when the count gets to 0. If a thread wants to decrement a Semaphore's count more than once, it must do multiple waits rather than calling WaitForMultipleObjects with multiple occurrences of the same handle.
The following code fragment creates a Semaphore object, waits for it, and then releases it:
HANDLE hSemaphore;
LONG lMaxCount = 10;
LONG PreviousCount;

/* create a semaphore with initial and maximum counts of 10 */
hSemaphore = CreateSemaphore(NULL, lMaxCount, lMaxCount, NULL);
if (!hSemaphore) {
    /* check for error */
}
.
.
.
/* enter the semaphore gate */
switch (WaitForSingleObject(hSemaphore, 5000L)) {
case WAIT_OBJECT_0:
    .
    .   /* use shared resource */
    .
    /* exit the semaphore gate */
    if (!ReleaseSemaphore(hSemaphore, 1, &PreviousCount)) {
        /* deal with error */
    }
    break;
case WAIT_TIMEOUT:
    /* deal with timeout */
    break;
}
You can use a Critical Section object to protect a shared resource from simultaneous access by multiple threads of a single process. For example, if several threads need to use a global variable, a Critical Section object could be used to control execution of the code in which the variable is accessed.
Typically, the Critical Section object would be declared as a global variable and the main thread of the process would use the InitializeCriticalSection function to initialize it. This function initializes the object's state to unowned, leaving it ready to be used by the threads of the process. When a thread needs to access the resource that is protected by the Critical Section, it calls the EnterCriticalSection function to request ownership of the object. If another thread already owns the Critical Section, this function will block until the Critical Section has been released. If the Critical Section is currently unowned, the system grants ownership to the requesting thread and it can access the resource. When it has finished executing the protected code, the thread uses the LeaveCriticalSection function to relinquish ownership of the Critical Section, thereby allowing another thread to become owner. While a thread has ownership of a Critical Section, it can make additional EnterCriticalSection calls on the same Critical Section object without blocking. However, to relinquish ownership of the Critical Section, LeaveCriticalSection must be called once for each time that the Critical Section was entered. Refer to the section above on Mutexes for a discussion of using the try . . . finally structured exception handling syntax. This is a good idea with Critical Sections as well, to ensure that a thread properly leaves the Critical Section.
The following code fragment shows a thread initializing, entering, and leaving a Critical Section:
CRITICAL_SECTION GlobalCriticalSection;
.
.
.
/* initialize the critical section */
InitializeCriticalSection(&GlobalCriticalSection);
/* request ownership of the critical section */
EnterCriticalSection(&GlobalCriticalSection);
.
. /* access the shared resource */
.
/* release ownership of the critical section */
LeaveCriticalSection(&GlobalCriticalSection);
When an application is through using a Critical Section object, it may delete the object using the DeleteCriticalSection function. This function deallocates all system resources stored in the Critical Section object. A Critical Section can only be deleted when it is unowned; and once deleted, it cannot be used in EnterCriticalSection or LeaveCriticalSection.
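Putting these pieces together, a sketch of guarding the access with the try . . . finally syntax and deleting the object at shutdown might look like this:
EnterCriticalSection(&GlobalCriticalSection);
try {
    .
    .   /* access the shared resource */
    .
}
finally {
    /* executed however the try block exits */
    LeaveCriticalSection(&GlobalCriticalSection);
}
.
.
.
/* when the critical section is no longer needed (and unowned) */
DeleteCriticalSection(&GlobalCriticalSection);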
The following are the Win32 functions used with synchronization objects.
CreateEvent
OpenEvent
PulseEvent
ResetEvent
SetEvent
CreateMutex
OpenMutex
ReleaseMutex
CreateSemaphore
OpenSemaphore
ReleaseSemaphore
WaitForMultipleObjects
WaitForSingleObject
DeleteCriticalSection
EnterCriticalSection
InitializeCriticalSection
LeaveCriticalSection