Shared Resources

Let's say you have a precious resource, such as a restroom on a 747. You want only one person to have access to that room at a time. Having someone interrupted in the middle of his process by a second passenger would be inconvenient. You solve this problem by providing access to the restroom to the first passenger, and then locking the resource. Other passengers who want access will have to "wait". They may wait patiently, or they may "time out" after a while and return to their seats to resume what they were doing, without having had their turn. The question of how long they wait, and whether they can return to what they were doing or must wait indefinitely, is up to them.

Once the first passenger has finished using the resource, he unlocks it and provides access to the next passenger. He is guaranteed that once he begins using the resource, he will have uninterrupted access until he is finished. Because this produces an enormous inefficiency (all those other passengers are blocked from getting any useful work done until they either get access to the resource or give up waiting and return to their seats), you want to be careful to lock folks out only when absolutely necessary, and for as short a time as possible.
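To make the analogy concrete, here is a minimal sketch of that wait-or-time-out behavior using standard C++ threads (an assumption of convenience; the same idea applies to any threading API). The restroom becomes a timed mutex, and a passenger who cannot acquire it within a couple of seconds gives up and returns to his seat.

#include <chrono>
#include <iostream>
#include <mutex>
#include <thread>

std::timed_mutex restroom;                       // the shared resource

void passenger(int id)
{
    using namespace std::chrono_literals;

    if (restroom.try_lock_for(2s))               // wait, but only for so long
    {
        std::cout << "Passenger " << id << " is using the restroom\n";
        std::this_thread::sleep_for(3s);         // occupy the resource
        restroom.unlock();                       // let the next passenger in
    }
    else
    {
        std::cout << "Passenger " << id << " timed out and went back to his seat\n";
    }
}

int main()
{
    std::thread p1(passenger, 1);
    std::thread p2(passenger, 2);
    p1.join();
    p2.join();
}

Because the first passenger holds the lock for longer than the second is prepared to wait, one of the two will always time out and go back to his seat, just as described above.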

Failure to lock a resource can lead to data corruption. Because you can't predict when the activity will be interrupted, you must guard all access to shared data with some form of synchronization.

Data corruption occurs when various aspects of your data are inconsistent with one another.

Synchronization means the coordination of various parts of the program to ensure orderly access to shared resources.

Consider the following example: your program is walking through a list of employees, giving each one a raise. One thread accesses Employee 1, increases Employee 1's salary by 10%, and then writes it back to the database. Meanwhile, another thread is busy updating everyone's zip code with the new ZIP+4 designations. It accesses an employee, looks up their address, updates their zip code, and then writes it back. The problem is this: what if both threads access Employee 1 at the same time?

Thread 1 accesses Employee 1 whose record looks like this:

Name: John Q. Employee
ID: 12453
Salary: 50,000
Address: 1 Maple Lane
City: Anytown
State: XZ
Zip: 00010

It then changes the salary to 55,000, but while it has the record out, thread 2 accesses it and changes the zip to 00010-0010. Thread 1 writes the record back to the database, so that it looks like this:

Name: John Q. Employee
ID: 12453
Salary: 55,000
Address: 1 Maple Lane
City: Anytown
State: XZ
Zip: 00010

Subsequently, thread 2 writes it back to the database, and it looks like this:

Name: John Q. Employee
ID: 12453
Salary: 50,000
Address: 1 Maple Lane
City: Anytown
State: XZ
Zip: 00010-0010

Oops, the update to the salary has been overwritten. Not good. Both threads completed successfully, but the result is corrupted data. There are any number of variants on this theme, all of which leave the data in a bad state.
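The scenario just described can be sketched in a few lines. The following illustration (hypothetical names, standard C++ threads) has both threads copy the whole record, modify their own copy, and write the whole record back; the sleeps merely force the interleaving to match the story above, but in real code the timing would be unpredictable.

#include <chrono>
#include <iostream>
#include <string>
#include <thread>

struct Employee
{
    std::string name   = "John Q. Employee";
    int         id     = 12453;
    int         salary = 50000;
    std::string zip    = "00010";
};

Employee database;   // the shared record -- no locking at all, which is the point

int main()
{
    using namespace std::chrono_literals;

    std::thread raise([] {
        Employee copy = database;                    // thread 1 reads the record
        copy.salary = copy.salary * 110 / 100;       // 10% raise: 55,000
        std::this_thread::sleep_for(100ms);
        database = copy;                             // writes back salary 55,000, zip 00010
    });

    std::thread rezip([] {
        std::this_thread::sleep_for(50ms);
        Employee copy = database;                    // thread 2 reads the record
        copy.zip = "00010-0010";                     // the new ZIP+4 code
        std::this_thread::sleep_for(100ms);
        database = copy;                             // overwrites the raise: salary back to 50,000
    });

    raise.join();
    rezip.join();

    std::cout << "Salary: " << database.salary       // 50000 -- the raise was lost
              << "  Zip: " << database.zip << "\n";
}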

In a sophisticated database, you would be able to update a single field at a time, but there are conditions where two or more threads might need to update the same field, and the same set of problems arises. In short, you need to be able to synchronize access to your data to ensure that one thread does not overwrite the work of another thread in a way that might corrupt the data.

Most commercial database systems will manage thread safety for you, but other aspects of your code will need to be managed by your own synchronization objects. There are two great dangers: the first is that you won't protect your data, leading to data corruption. The second is that you'll overprotect your data, adding enormous inefficiencies into your project and slowing your system to the point where it is unusable.

In the above example, thread 1 could lock Employee 1 the moment it gets the record. It could then read the salary, compute the new salary, update it, and then unlock it. All this time, thread 2 might be waiting for access. Suppose computing the new salary involved looking up survey information on what similar managers earn nationwide. Such a computation could take many minutes. Certainly, we don't want thread 2 to be waiting all that time.
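That "lock first, compute later" approach might look like the following sketch (again standard C++ threads, with a mutex standing in for whatever locking your database or runtime provides; computeNewSalary is a hypothetical stand-in for the minutes-long survey lookup). The slow computation runs while the lock is held, so the zip-code thread is blocked for the entire time.

#include <iostream>
#include <mutex>
#include <string>
#include <thread>

struct Employee { int salary = 50000; std::string zip = "00010"; };

Employee   employee1;
std::mutex employee1Lock;        // guards every access to employee1

// Hypothetical stand-in for the nationwide salary survey; imagine it takes minutes.
int computeNewSalary(int current) { return current * 110 / 100; }

void giveRaise()
{
    std::lock_guard<std::mutex> guard(employee1Lock);       // lock the moment we get the record
    employee1.salary = computeNewSalary(employee1.salary);  // everyone else waits throughout
}                                                            // unlocked on scope exit

void updateZip()
{
    std::lock_guard<std::mutex> guard(employee1Lock);       // blocked until giveRaise finishes
    employee1.zip = "00010-0010";
}

int main()
{
    std::thread t1(giveRaise);
    std::thread t2(updateZip);
    t1.join();
    t2.join();
    std::cout << employee1.salary << " " << employee1.zip << "\n";   // 55000 00010-0010
}

The data stays consistent, but at the price of making every other thread wait for the slowest operation.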

A more efficient algorithm might be to read Employee 1's record, compute the new salary, and only then lock the record, update it, and unlock it. This may drive the lock time down from many minutes to a few milliseconds, greatly enhancing the performance of the system overall. On the other hand, this kind of locking may not sufficiently protect your data from corruption by other threads, because the data may have been changed after you read the record but before you locked and updated it. In this case the solution may be to track whether the data has changed between the time you accessed it and the time you locked it. If it has, you may need to flush your data (that is, remove it from memory) and then get it again, this time with the record locked. Each solution creates new problems, and finding the right balance between high performance and absolute data integrity is what makes writing multi-threaded applications so interesting.
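One way to sketch that read-first, lock-briefly, check-and-retry idea is with a version number on the record, as below. The version field and helper names are illustrative rather than part of any particular library, and every committed write must bump the version for the check to work.

#include <mutex>
#include <string>

struct Employee
{
    int         salary  = 50000;
    std::string zip     = "00010";
    unsigned    version = 0;         // bumped on every committed write
};

Employee   employee1;
std::mutex employee1Lock;

// Hypothetical stand-in for the minutes-long survey lookup.
int computeNewSalary(int current) { return current * 110 / 100; }

void giveRaiseOptimistically()
{
    for (;;)
    {
        Employee snapshot;
        {
            std::lock_guard<std::mutex> guard(employee1Lock);
            snapshot = employee1;                             // short lock: just copy the record
        }

        int newSalary = computeNewSalary(snapshot.salary);    // slow work, no lock held

        std::lock_guard<std::mutex> guard(employee1Lock);     // short lock: check and write
        if (employee1.version != snapshot.version)
            continue;                                         // someone changed it: flush our copy and retry
        employee1.salary = newSalary;
        ++employee1.version;
        return;
    }
}

int main()
{
    giveRaiseOptimistically();
}

The lock is now held only while copying the record and for the final check-and-write, so other threads wait milliseconds rather than minutes. The paragraph above suggests holding the lock during the second attempt; this sketch simply retries the same optimistic path, which achieves the same end as long as conflicts are rare.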

The process view concentrates on the question of how you solve these problems, trading off performance considerations against data integrity.
