A Study Guide for Concurrent Programming in Java™

Version: 1.1
Author: Nik Boyd
Started: February, 2002
Updated: August, 2002

This guide provides a structured introduction to the book:

Doug Lea. Concurrent Programming in Java™ 2nd Edition. Addison-Wesley Publishing Co., Inc., 2000. ISBN 0-201-31009-0.

Study Group Goals

For an introduction to study groups, please see A Learning Guide to Design Patterns, by Joshua Kerievsky.

For this study group on concurrent programming, we want to:

Proposed Discussions

We plan to study the entire book in order, sometimes combining two sections into a single session, sometimes splitting a section over two sessions.

1.1 Using Concurrency Constructs (1-18) Session 1
Introduces the basic Java concurrency support constructs and the principal methods of class Thread.
1.2 Objects and Concurrency (19-36)  
Describes the primary uses of concurrency, the basic units used to implement concurrent execution, and how these map onto object models.
1.3 Design Forces (37-56) Session 2
Surveys the design concerns that arise in concurrent software development, including safety, liveness, performance, and reusability.
1.4 Before/After Patterns (57-68) Session 3
Discusses the primary design patterns used to implement symmetrical control mechanisms.
2.1 Immutability (69-74) Session 4
Discusses the uses of immutable and partially immutable objects in concurrent programs.
2.2 Synchronization (75-98)  
Discusses how synchronization serializes the execution of Java threads and the impact this has on objects.
2.3 Confinement (99-116) Session 5
Discusses encapsulation techniques that structurally guarantee that only a single thread will ever access some object(s) at a given time.
2.4 Structuring and Refactoring Classes (117-146) Sessions 6 + 7
Discusses strategies for removing unnecessary synchronization, synchronization splitting, read-only operation export, state isolation, and lock reuse.
2.5 Using Lock Utilities (147-158) Session 8
Discusses the implementations of commonly used lock utility classes.
3.1 Dealing with Failure (159-178) Session 9
Discusses exceptions and cancellations.
3.2 Guarded Methods (179-198) Session 10
Introduces the guard constructions used in conservative designs.
3.3 Structuring and Refactoring Classes (199-218) Sessions 11 + 12
Presents structural patterns for classes employing concurrency control.
3.4 Using Concurrency Control Utilities (219-236) Session 13
Shows how utility classes reduce complexity while improving reliability, performance, and flexibility.
3.5 Joint Actions (237-248) Session 14
Shows how to control actions that depend on the states of multiple participants.
3.6 Transactions (249-264) Session 15
Provides a brief overview of transactional concurrency control.
3.7 Implementing Utilities (265-280) Session 16
Illustrates the techniques used in the construction of some common utilities.
4.1 Oneway Messages (281-304) Session 17
Presents some basic options for implementing oneway messages.
4.2 Composing Oneway Messages (305-324) Session 18
Discusses the use of oneway messages in component networks.
4.3 Services in Threads (325-342) Session 19
Presents alternatives for implementing service threads.
4.4 Parallel Decomposition (343-366) Session 20
Examines techniques used to improve performance with multiple processors.
4.5 Active Objects (367-376) Session 21
Provides an overview of constructs and frameworks for systems of active objects.

Study Questions

Disclaimer: These questions can and should be augmented and/or replaced by questions raised by the participating study group members. Please bring your own questions to each study session!

1.1 Using Concurrency Constructs (1-18) Session 1
  1. When would you want to create a daemon Thread?
  2. Why have suspend, resume, stop, destroy been deprecated?
  3. When would it be useful to create a ThreadGroup?
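As background for the first question, here is a minimal sketch of creating a daemon thread. The class and method names are hypothetical; the point is only that setDaemon must be called before start(), and that daemon threads do not keep the JVM alive.

```java
public class DaemonDemo {
    // Starts a daemon thread and reports whether it really was a daemon.
    // join() is used here only to make the demo deterministic.
    public static boolean startDaemonHousekeeper() {
        Thread housekeeper = new Thread(() -> {
            // periodic background work would go here; the JVM terminates
            // daemon threads automatically when the last non-daemon
            // thread exits
        });
        housekeeper.setDaemon(true);   // must precede start(); afterwards
                                       // setDaemon throws IllegalThreadStateException
        housekeeper.start();
        try {
            housekeeper.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return housekeeper.isDaemon();
    }
}
```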
1.2 Objects and Concurrency (19-36)  
  1. What classes of problems can concurrent programs help solve (hint: optimization)?
  2. What benefits do concurrent programs offer over sequential programs?
  3. How can we decide whether a system needs separate machines, processors, processes, or threads?
  4. What are the trade-offs (costs v. benefits) of each kind of separation?
  5. What thread scheduling strategies does Java support?
1.3 Design Forces (37-56) Session 2
  1. How are performance and reusability related to safety and liveness? What other qualities contribute to safety and liveness?
  2. What other kinds of safety concerns are there besides type safety and multithread safety (hint: consider transactions and ACID properties)?
  3. Doug Lea lists several qualities that can be used to measure performance. How can we measure reuse and reusability?
1.4 Before/After Patterns (57-68) Session 3
  1. What are the benefits and costs of the various approaches to implementing before/after patterns?
  2. Is the cost of instantiating a method adapter worth the benefit of factoring out the lifecycle guarantees into a single location (also see: Resource Manager)?
2.1 Immutability (69-74) Session 4
  1. How can the fields of an object become accessible before its construction is complete?
  2. Why is it important to ensure that flyweights are completely constructed before they are published?
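As a concrete reference point for these questions, the following is a minimal sketch of a fully immutable class (the Fraction example is my own, not the book's): all fields are final and assigned in the constructor, there are no setters, and "mutating" operations return new instances. Once construction completes, such objects can be shared freely across threads without locking.

```java
public final class Fraction {           // final: no mutable subclasses
    private final int numerator;        // final fields: fixed after construction
    private final int denominator;

    public Fraction(int numerator, int denominator) {
        this.numerator = numerator;
        this.denominator = denominator;
    }

    // Instead of changing state, arithmetic returns a fresh instance.
    public Fraction plus(Fraction other) {
        return new Fraction(
            numerator * other.denominator + other.numerator * denominator,
            denominator * other.denominator);
    }

    public int numerator()   { return numerator; }
    public int denominator() { return denominator; }
}
```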
2.2 Synchronization (75-98)  
  1. When would it be useful to know which of several threads owns a lock?
  2. When and why is fairness important during lock acquisition?
  3. What are the benefits of reentrant locking? Are there any drawbacks?
  4. When would it be appropriate to use a volatile field instead of synchronized methods or blocks?
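Bearing on the last question, here is a small sketch (class name hypothetical) contrasting the two mechanisms: a volatile field suffices for a simple flag that is read and written atomically, but a compound read-modify-write action such as an increment still needs synchronization.

```java
public class Counter {
    private volatile boolean shutdownRequested = false; // visibility only
    private int count = 0;                              // guarded by "this"

    public void requestShutdown()        { shutdownRequested = true; }
    public boolean isShutdownRequested() { return shutdownRequested; }

    // ++count is a read-modify-write, so volatile alone would not be safe
    public synchronized void increment() { ++count; }
    public synchronized int count()      { return count; }
}
```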
2.3 Confinement (99-116) Session 5
  1. What are the trade-offs associated with tail-call hand-offs?
  2. Given the consequences of thread-based confinement (page 106), which of the available options seems most flexible?
  3. When would thread-based confinement be inappropriate?
  4. On page 112, the take protocol has often been called orphan and the put protocol often called adopt (see Taligent's Guide to Designing Programs). How can these two protocols (however named) be combined to implement a clean and confined resource transfer? (hint: use the stack, e.g., x.adoptResource(y.orphanResource()))
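The hand-off in the last question might be sketched as follows (the Holder class and method names are hypothetical, matching the orphan/adopt vocabulary): the orphaning holder surrenders its reference, the adopting holder assumes it, and during the hand-off the reference lives only on the caller's stack, so it never has two owners.

```java
public class Holder {
    private Object resource;

    public Holder(Object resource) { this.resource = resource; }

    // Orphan: surrender ownership, leaving this holder empty.
    public synchronized Object orphanResource() {
        Object r = resource;
        resource = null;
        return r;
    }

    // Adopt: assume ownership of a previously orphaned resource.
    public synchronized void adoptResource(Object r) {
        if (resource != null)
            throw new IllegalStateException("holder already owns a resource");
        resource = r;
    }

    public synchronized boolean owns() { return resource != null; }
}
```

A transfer then reads x.adoptResource(y.orphanResource()), as in the hint.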
2.4 Structuring and Refactoring Classes (117-146) Sessions 6 + 7
  1. What are semantic guarantees (hint: class invariants, method post-conditions)? How can they be weakened?
  2. Given the fragility of double-checked locks and the likelihood that they will break or be used inappropriately, would you ever use them? If so, when?
  3. Does a "single giant lock" offer a significant opportunity for refactoring?
  4. Should the use of open calls always be documented to clarify the contractual obligations of the called methods?
  5. Is splitting a class (via refactoring) preferable to merely splitting a lock within the class? When would one be preferable to the other?
  6. Are explicitly immutable interfaces preferable to runtime immutability (hint: catching contract violations during compilation rather than execution)?
  7. How can the Resource Manager pattern be adapted to enforce the correct usage of resources in open containers?
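Relevant to the double-checked locking question, one commonly recommended alternative is the initialization-on-demand holder idiom, sketched below (class names hypothetical). The JVM's class-initialization guarantees make the singleton both lazy and thread-safe without any explicit locking to get wrong.

```java
public class Service {
    private Service() {}                 // expensive setup would go here

    private static class Lazy {          // not loaded until first use
        static final Service INSTANCE = new Service();
    }

    public static Service instance() {
        return Lazy.INSTANCE;            // triggers Lazy's one-time class init
    }
}
```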
2.5 Using Lock Utilities (147-158) Session 8
  1. How can the awkward before/after construction on page 148 be simplified (hint: make acquire and release reentrant)?
  2. What kind of method adapter signature will support passed parameters and returned results?
  3. Can shared resource allocations be pre-empted, or must Java threads always coordinate and cooperate?
  4. Do coupled locks introduce a greater likelihood of races and deadlocks between competing threads?
  5. If so, how can that be resolved (hint: coupled lock managers)?
  6. What kind(s) of fairness policy(s) will ensure that read-write locks prevent starvation between contending readers and writers?
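For reference while discussing these questions, here is the standard before/after discipline around a lock utility, sketched with java.util.concurrent's ReentrantLock (a later descendant of the book's hand-rolled utilities; the Account class is my own example). The acquire precedes the protected action and the release sits in a finally block, so the lock is freed even when the action throws; ReentrantLock is also reentrant, bearing on the first question.

```java
import java.util.concurrent.locks.Lock;
import java.util.concurrent.locks.ReentrantLock;

public class Account {
    private final Lock lock = new ReentrantLock();
    private long balance;

    public void deposit(long amount) {
        lock.lock();              // "before" part
        try {
            balance += amount;    // protected action
        } finally {
            lock.unlock();        // "after" part always runs
        }
    }

    public long balance() {
        lock.lock();
        try { return balance; } finally { lock.unlock(); }
    }
}
```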
3.1 Dealing with Failure (159-178) Session 9
  1. When is each of the six general responses to failed actions appropriate (termination, continuation, rollback, recovery, retry, handlers)?
  2. When are thread interruption checks automatically performed within the Java library?
  3. When are thread interruption checks not performed, i.e., when are threads dormant while waiting for a resource?
  4. Why are lock utilities useful for cancellation protocols (hint: reduced dormancy)?
  5. What kinds of safety concerns arise in relation to Thread.stop, i.e., why was it deprecated?
  6. What (relatively) safe alternatives are there for terminating a thread and diminishing its use of system resources if the thread fails to terminate?
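Touching on the last two questions, here is a sketch (names hypothetical) of cooperative cancellation via interruption, the usual safe alternative to the deprecated Thread.stop: the worker re-checks its interrupt status between units of work and exits cleanly when asked.

```java
public class CancelDemo {
    // Starts a worker that spins until interrupted, cancels it, and
    // returns true if the worker actually terminated.
    public static boolean runAndCancel() {
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                // a unit of work goes here; interrupt status is
                // re-checked between units
            }
            // clean-up (releasing resources, etc.) would go here
        });
        worker.start();
        worker.interrupt();            // request cancellation
        try {
            worker.join(5000);         // wait (bounded) for the worker to exit
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return !worker.isAlive();
    }
}
```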
3.2 Guarded Methods (179-198) Session 10
  1. How do guarded methods extend synchronized methods? How do guards differ from traditional conditionals?
  2. How does concurrent constraint programming help solve state-based design problems?
  3. What are the commonly used alternatives for representing state?
  4. How do predicate states differ from enumerated states? What are their benefits?
  5. When can starvation become an issue in state-based concurrent program designs?
  6. How can slipped conditions and missed signals be avoided?
  7. Why is notifyAll used more often than notify to awaken waiting threads?
  8. Why is it better to avoid busy waits? What are better alternatives?
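Several of these questions are illustrated by the canonical guarded-method shape, sketched below with a hypothetical LatchedValue class: wait() is always called inside a loop that re-tests the guard (protecting against missed signals and slipped conditions), and notifyAll rather than notify wakes every waiter so each can re-evaluate its own guard.

```java
public class LatchedValue {
    private Object value;               // guard condition: value != null

    public synchronized void set(Object v) {
        value = v;
        notifyAll();                    // state changed: wake all waiters
    }

    public synchronized Object get() throws InterruptedException {
        while (value == null)           // guard re-checked after every wakeup
            wait();
        return value;
    }
}
```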
3.3 Structuring and Refactoring Classes (199-218) Sessions 11 + 12
  1. In general, how can logical state analysis help determine the optimal usage of wait and notify during state transition operations?
  2. The state table on page 200 defines the states and the legal transitions for a BoundedBuffer. What additional information is needed to determine the optimal usage of wait and notify during the state transition operations?
  3. How can conflicts between a large set of operation pairs be resolved with a minimum of custom code per each pair?
  4. How can the examples for tracked states and conflict sets be improved with refactoring (hint: extract methods)?
  5. When can starvation become an issue with readers and writers?
  6. Why is it useful to separate functionality as non-public methods from concurrency control as public methods?
  7. How can lockouts be avoided in nested monitors?
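As a concrete companion to the BoundedBuffer state table discussed above, here is a minimal wait/notifyAll implementation (a common textbook shape, not a copy of the book's code): put waits while the buffer is full, take waits while it is empty, and each state-changing operation calls notifyAll so waiters can re-test their guards.

```java
public class BoundedBuffer {
    private final Object[] items;
    private int putIndex, takeIndex, count;

    public BoundedBuffer(int capacity) { items = new Object[capacity]; }

    public synchronized void put(Object x) throws InterruptedException {
        while (count == items.length)       // guard: wait while full
            wait();
        items[putIndex] = x;
        putIndex = (putIndex + 1) % items.length;
        ++count;
        notifyAll();                        // no longer empty
    }

    public synchronized Object take() throws InterruptedException {
        while (count == 0)                  // guard: wait while empty
            wait();
        Object x = items[takeIndex];
        items[takeIndex] = null;
        takeIndex = (takeIndex + 1) % items.length;
        --count;
        notifyAll();                        // no longer full
        return x;
    }

    public synchronized int size() { return count; }
}
```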
3.4 Using Concurrency Control Utilities (219-236) Session 13
  1. How are semaphores related to mutual exclusion locks, resource pools, bounded buffers, and synchronous channels?
  2. In what kind of applications will fairness issues usually arise?
  3. What is priority inversion, and how can it be countered?
  4. What are the typical applications of binary latches?
  5. What rarely useful mechanism finds an appropriate usage in latching variables?
  6. When are exchangers useful (hint: double buffering)?
  7. When are condition variables useful (hint: legacy code conversion)?
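The first question's relationships can be made concrete with a counting semaphore, sketched here using java.util.concurrent.Semaphore (a later descendant of the book's util.concurrent library; the PoolGate wrapper is hypothetical): initialized to 1 it acts as a mutual-exclusion lock, initialized to N it meters access to a pool of N resources.

```java
import java.util.concurrent.Semaphore;

public class PoolGate {
    private final Semaphore available;

    // true => fair (FIFO) ordering of blocked acquirers
    public PoolGate(int poolSize) { available = new Semaphore(poolSize, true); }

    public boolean tryEnter() { return available.tryAcquire(); } // non-blocking
    public void leave()       { available.release(); }
    public int freeSlots()    { return available.availablePermits(); }
}
```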
3.5 Joint Actions (237-248) Session 14
  1. What are some of the factors to consider when designing joint actions?
  2. What is the main goal of joint action designs?
  3. What are the general structure and behavior of classes involved in joint actions?
  4. What are some of the conflict resolution strategies used to prevent deadlocks in joint action designs?
  5. What is the best way to avoid design issues associated with joint actions?
3.6 Transactions (249-264) Session 15
  1. What are the four steps in the basic transaction protocol?
  2. What are the two complementary sets of policies that can be applied to transaction protocols?
  3. How do these two policies affect the design of transactional interfaces and implementations?
  4. When would it make sense to support both optimistic and conservative transaction policies?
  5. How can the detailed analysis of the structure of transaction policies help determine which to choose (hint: cost comparison)?
  6. What is the relationship between a property constraint and the ability (right) to veto a property change?
3.7 Implementing Utilities (265-280) Session 16
  1. Given the complexity of the methods in Semaphore, how would you refactor them with Extract Method?
  2. What are the potential performance costs of using notifyAll versus notify?
  3. Given that single-threaded notification designs usually increase design complexity, will it usually be worth the performance gain to pursue such designs?
  4. Why might collapsing the classes within a design that splits state-dependent actions make them more efficient?
  5. Given the increased complexity from collapsing classes, when (if ever) would this approach be warranted?
  6. Is there an appropriate utility class that could be used in place of WaitQueue for the FIFO semaphore?
  7. In comparison with the suggested FIFO semaphore implementation, how could a priority semaphore be implemented?
  8. How could a task-oriented semaphore for resolving conflict sets be designed?
  9. How could the queuing policy for the various kinds of queue-based semaphores be factored out?
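As a starting point for the refactoring questions above, here is a minimal counting semaphore built directly on wait/notify, in the spirit of the implementations this section studies (it omits the explicit wait queue of the book's FIFO version, so waiters may be awakened in any order).

```java
public class SimpleSemaphore {
    private long permits;

    public SimpleSemaphore(long initial) { permits = initial; }

    public synchronized void acquire() throws InterruptedException {
        while (permits <= 0)    // guard: wait until a permit is available
            wait();
        --permits;
    }

    public synchronized void release() {
        ++permits;
        notify();               // at most one waiter can proceed per release
    }

    public synchronized long permits() { return permits; }
}
```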
4.1 Oneway Messages (281-304) Session 17
  1. What data characteristics distinguish the various kinds of message formats described in section 4.1.1 (hint: binary v. text, instance v. class, event v. 1-way request)?
  2. In an open call design, what will happen if the request arrival rate exceeds the request acceptance rate (which is determined by the local state update latency)?
  3. What are some of the reasons one might use a thread-per-message design? What kind of limitations will usually be encountered with thread-per-message designs?
  4. What trade-offs do thread pools introduce? How can thread pool saturation be addressed? How do web servers and app servers typically address saturation?
  5. Given that a Swing event queue is single-threaded, can it be easily saturated? What are the observable consequences of such saturation? How can such saturation be eliminated?
  6. How does the JDK Timer framework in java.util compare with that suggested in this section? How might a Schedule be represented independently of tasks, threads, Timers, and the system clock?
  7. When using a busy-wait loop to manage event-driven tasks, would it be beneficial to use the number of tasks (or the average per some time period) to control the sleep / wait duration?
  8. When commands do not arrive as units, a worker thread can stall. Is there an alternative to using a buffering scheme to prevent stalling (hint: test for available() >= N, where N is a fixed command length)?
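Relevant to the thread-per-message and thread-pool questions, here is a sketch of the pooled alternative using the later java.util.concurrent executors (descendants of the book's thread-pool designs; the demo class is my own): a fixed pool bounds resource use, at the cost of possible saturation when oneway requests arrive faster than workers serve them.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class OnewayDemo {
    // Dispatches the given number of oneway messages to a 2-worker pool
    // and returns how many were handled.
    public static int serve(int messages) {
        AtomicInteger handled = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < messages; i++)
            pool.execute(handled::incrementAndGet); // oneway: no reply awaited
        pool.shutdown();                            // stop accepting new work
        try {
            pool.awaitTermination(10, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return handled.get();
    }
}
```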
4.2 Composing Oneway Messages (305-324) Session 18
  1. What are some of the goals satisfied by flow network designs (hint: see the end of this section)?
  2. What are some examples of systems from your development practice that either did benefit or would benefit from the use of a flow network design?
  3. What were the idioms, patterns, and metaphors relevant to your designs?
  4. How do they compare with those described in this section?
4.3 Services in Threads (325-342) Session 19
  1. How have you (or would you) use Thread.join and futures in your own concurrent program designs?
  2. If the Callable interface in this section is awkward, how would you improve upon it or generate a design based on this idea?
  3. How does the "elevator algorithm" from section 4.3.4 work? Is there any situation from your own coding practice where you (could) have applied this?
4.4 Parallel Decomposition (343-366) Session 20
  1. What are the primary goals we're trying to satisfy with task granularity and structure in fork/join decomposition solutions?
  2. How do the forces represented by these goals trade off against each other?
  3. What design techniques can be applied to balancing these forces, i.e., what frameworks and design steps can be used?
  4. How can the number of forked subtasks be varied dynamically?
  5. When are callback-based fork/join designs typically used?
  6. When and how can trees improve the efficiency of fork/join designs?
  7. Is there any situation from your own coding practice where you (could) have used a cyclic barrier?
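The granularity and structure questions above can be grounded in a small recursive decomposition, sketched here with the java.util.concurrent fork/join framework that grew out of this chapter's designs (the SumTask example and its threshold are my own): a task below the granularity threshold computes sequentially; otherwise it splits in two, forks one half, and joins the results.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

public class SumTask extends RecursiveTask<Long> {
    private static final int THRESHOLD = 1000; // granularity: tune per problem
    private final long[] data;
    private final int lo, hi;

    public SumTask(long[] data, int lo, int hi) {
        this.data = data; this.lo = lo; this.hi = hi;
    }

    @Override protected Long compute() {
        if (hi - lo <= THRESHOLD) {            // small enough: do it directly
            long sum = 0;
            for (int i = lo; i < hi; i++) sum += data[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        SumTask left = new SumTask(data, lo, mid);
        SumTask right = new SumTask(data, mid, hi);
        left.fork();                           // run left half in parallel
        return right.compute() + left.join();  // compute right, await left
    }

    public static long parallelSum(long[] data) {
        return new ForkJoinPool().invoke(new SumTask(data, 0, data.length));
    }
}
```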
4.5 Active Objects (367-376) Session 21
  1. What differentiates active objects from other kinds of objects?
  2. Is there any situation from your own coding practice where you (could) have used active objects?
  3. How do CSP processes and channels behave?
  4. What benefits can be gained from using CSP in designs?
  5. Is there any situation from your own coding practice where you (could) have used CSP?

Supplemental Readings

Taligent. Taligent's Guide to Designing Programs: Well-Mannered Object-Oriented Design in C++. Addison-Wesley Publishing Co., Inc., 1994. ISBN 0-201-40888-0.
Peter Haggar. Java Q & A: Does Java Guarantee Thread Safety? Dr. Dobb's Journal, June 2002.
Peter Haggar. Excerpts from Practical Java. IBM developerWorks, 2000.
Peter Haggar. Practical Java Programming Language Guide. Addison-Wesley Publishing Co., Inc., 2000. ISBN 0-201-61646-7.

Java™ is a trademark of Sun Microsystems, Inc.