Synchronization: Background

  • A cooperating process is one that can affect or be affected by other processes executing in the system.
  • Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages.
  • The former case is achieved through the use of threads. Concurrent access to shared data may result in data inconsistency; however, we discuss various mechanisms to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained.
  • We developed a model of a system consisting of cooperating sequential processes or threads, all running asynchronously and possibly sharing data. We illustrated this model with the producer-consumer problem, which is representative of many synchronization problems in operating systems. Specifically, we described how a bounded buffer could be used to enable processes to share memory.
  • Let's return to our consideration of the bounded buffer. As we pointed out, our original solution allowed at most BUFFER_SIZE - 1 items in the buffer at the same time.
  • Suppose we want to modify the algorithm to remedy this deficiency.
  • One possibility is to add an integer variable counter, initialized to 0. The variable counter is incremented every time we add a new item to the buffer and decremented every time we remove one item from the buffer.
  • The code for the producer process can be modified as follows:

while (true) {
    /* produce an item in nextProduced */

    while (counter == BUFFER_SIZE)
        ; /* do nothing: buffer is full */

    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    counter++;
}


  • The code for the consumer process can be modified as follows:

while (true) {
    while (counter == 0)
        ; /* do nothing: buffer is empty */

    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    counter--;

    /* consume the item in nextConsumed */
}


  • Although both the producer and consumer routines shown above are correct separately, they may not function correctly when executed concurrently.
  • As an illustration, suppose that the value of the variable counter is currently 5 and that the producer and consumer processes execute the statements "counter++" and "counter--" concurrently.
  • Following the execution of these two statements, the value of the variable counter may be 4, 5, or 6! The only correct result, though, is counter == 5, which is generated correctly if the producer and consumer execute separately.
  • We can show that the value of counter may be incorrect as follows. Note that the statement "counter++" may be implemented in machine language (on a typical machine) as

register1 = counter

register1 = register1 + 1

counter = register1

  • where register1 is one of the local CPU registers. Similarly, the statement "counter--" is implemented as follows:

register2 = counter

register2 = register2 - 1

counter = register2

  • where again register2 is one of the local CPU registers. Even though register1 and register2 may be the same physical register (an accumulator, say), remember that the contents of this register will be saved and restored by the interrupt handler.
  • The concurrent execution of "counter++" and "counter--" is equivalent to a sequential execution in which the lower-level statements presented previously are interleaved in some arbitrary order (but the order within each high-level statement is preserved). One such interleaving is

T0:  producer  execute  register1 = counter        {register1 = 5}

T1:  producer  execute  register1 = register1 + 1  {register1 = 6}

T2:  consumer  execute  register2 = counter        {register2 = 5}

T3:  consumer  execute  register2 = register2 - 1  {register2 = 4}

T4:  producer  execute  counter = register1        {counter = 6}

T5:  consumer  execute  counter = register2        {counter = 4}

  • Notice that we have arrived at the incorrect state "counter == 4", indicating that four buffers are full, when, in fact, five buffers are full.
  • If we reversed the order of the statements at T4 and T5, we would arrive at the incorrect state "counter == 6".
  • We would arrive at this incorrect state because we allowed both processes to manipulate the variable counter concurrently.
  • A situation like this, where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place, is called a race condition.
  • To guard against the race condition above, we need to ensure that only one process at a time can be manipulating the variable counter.
  • To make such a guarantee, we require that the processes be synchronized in some way.
  • Situations such as the one just described occur frequently in operating systems as different parts of the system manipulate resources.
  • Furthermore, with the growth of multicore systems, there is an increased emphasis on developing multithreaded applications in which several threads, quite possibly sharing data, run in parallel on different processing cores.
  • Clearly, we want any changes that result from such activities not to interfere with one another. Because of the importance of this issue, a major portion of this chapter is concerned with process synchronization and coordination amongst cooperating processes.


  1. Silberschatz, Galvin, and Gagne, Operating System Concepts, 8th Edition
  2. Andrew S. Tanenbaum, Modern Operating Systems, 2nd Edition
