Monthly Archives: March 2008

The Problem With Threads

Multithreaded (concurrent) programming is a valuable tool in my toolbox. I don’t hesitate to use it when I need it. I can’t imagine getting by without it.*

So the SQLite project got my attention when they said, “Threads are evil. Avoid them.”

They referred me to a very interesting paper by Edward A. Lee of UC Berkeley: The Problem With Threads. From the abstract:

Although threads seem to be a small step from sequential computation, in fact, they represent a huge step. They discard the most essential and appealing properties of sequential computation: understandability, predictability, and determinism.
Threads, as a model of computation, are wildly nondeterministic, and the job of the programmer becomes one of pruning that nondeterminism. Although many research techniques improve the model by offering more effective pruning, I argue that this is approaching the problem backwards. Rather than pruning nondeterminism, we should build from essentially deterministic, composable components. Nondeterminism should be explicitly and judiciously introduced where needed, rather than removed where not needed. The consequences of this principle are profound …

Well put. Threads can create intractable problems, bugs can be incredibly subtle, and there’s no way to prove you’ve thought things through well enough.

Section 3 gets into some heavy theory to say, essentially, that we can’t compare two multithreaded programs for equivalence (i.e., determine whether two programs will yield the same result). He concludes with this interesting analogy:

To offer a third analogy, a folk definition of insanity is to do the same thing over and over again and to expect the results to be different. By this definition, we in fact require that programmers of multithreaded systems be insane. Were they sane, they could not understand their programs.

I’ll take that as a compliment. I see his point, though.

Here’s a very interesting piece:

An anecdote from the Ptolemy Project is telling (and alarming). In the early part of the year 2000, my group began developing the kernel of Ptolemy II [20], a modeling environment supporting concurrent models of computation. …
The portion of the kernel that ensured a consistent view of the program structure was written in early 2000, design reviewed …, and code reviewed … The reviewers included concurrency experts, not just inexperienced graduate students … We wrote regression tests that achieved 100 percent code coverage. … The Ptolemy II system itself began to be widely used, and every use of the system exercised this code. No problems were observed until the code deadlocked on April 26, 2004, four years later.

It is certainly true that our relatively rigorous software engineering practice identified and fixed many concurrency bugs. But the fact that a problem as serious as a deadlock that locked up the system could go undetected for four years despite this practice is alarming. How many more such problems remain? How long do we need to test before we can be sure to have discovered all such problems? Regrettably, I have to conclude that testing may never reveal all the problems in nontrivial multithreaded code.

Very sobering. I have to say their threading is much more ambitious than anything I do.

The more I think about it, the more I realize that I do “prune” my nondeterminism very aggressively, and introduce it only where needed. For me, concurrency is usually a private attribute of the classes I write (though C++ can’t enforce anything in the time domain). The concurrency must be warranted by the complexity it introduces, and it must avoid non-private interactions (such as calls into other classes) that might cause deadlocks.
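A minimal sketch of what I mean by “concurrency as a private attribute,” written in modern C++ syntax for illustration (in 2008 this would have been pthreads or Boost.Threads; the class name and behavior are hypothetical):

```cpp
#include <atomic>
#include <mutex>
#include <thread>

// BackgroundCounter is a made-up example: the worker thread is a private
// implementation detail, created in the constructor and joined in the
// destructor, so it can never outlive the object or interact with other
// classes in ways that could deadlock.
class BackgroundCounter {
public:
    BackgroundCounter() : stop_(false), count_(0) {
        worker_ = std::thread([this] {
            while (!stop_.load()) {
                std::lock_guard<std::mutex> lock(mutex_);
                ++count_;
            }
        });
    }

    ~BackgroundCounter() {
        stop_.store(true);  // ask the private thread to finish,
        worker_.join();     // and wait for it before we go away
    }

    long count() const {    // the only shared state, behind one lock
        std::lock_guard<std::mutex> lock(mutex_);
        return count_;
    }

private:
    std::atomic<bool> stop_;
    long count_;
    mutable std::mutex mutex_;
    std::thread worker_;    // concurrency as a private attribute
};
```

Callers see an ordinary synchronous interface; the nondeterminism is confined to one lock around one variable, which is about as aggressively “pruned” as it gets.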

Another typical application is a user interface and a core process. The user interface runs concurrently, allowing us to watch the arbitrarily complicated core process, and poke at it only in well-defined ways (like cancelling it). The process notifies the interface of significant events through a straightforward mechanism like a message queue. (I’ve never needed to implement a full-blown MVC.)
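The “straightforward mechanism like a message queue” above can be sketched as a small thread-safe queue; this is a generic illustration, not my actual code, and uses modern C++ threading primitives for brevity:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// The core process pushes events; the user interface thread pops them.
// All sharing is confined to this one class.
template <typename T>
class EventQueue {
public:
    void push(T msg) {
        {
            std::lock_guard<std::mutex> lock(mutex_);
            queue_.push(std::move(msg));
        }
        ready_.notify_one();  // wake the UI thread if it's waiting
    }

    T pop() {  // blocks until a message arrives
        std::unique_lock<std::mutex> lock(mutex_);
        ready_.wait(lock, [this] { return !queue_.empty(); });
        T msg = std::move(queue_.front());
        queue_.pop();
        return msg;
    }

private:
    std::mutex mutex_;
    std::condition_variable ready_;
    std::queue<T> queue_;
};
```

The UI simply pops events in a loop; the worker never touches UI state directly, so the well-defined interaction points (push, pop, and perhaps a cancel flag) are the only places nondeterminism can enter.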

The author makes excellent points. As multi-core computers become more and more common, we would do well to harness their power. But with caution.

* Ok: I do get by without multithreading when it’s not available, such as on 8-bit microcontrollers that can’t support it. A polling, event-driven system can do a lot in parallel and emulate a multithreaded system pretty well, though its sequencing can be difficult to track.
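For the curious, that polling style reduces to a cooperative “super loop” that runs each task in turn; here’s a hypothetical sketch (the Task structure and tick-based scheduling are illustrative, not from any particular firmware):

```cpp
#include <functional>
#include <vector>

// Each task is polled when its time comes; nothing preempts anything,
// so the tasks appear concurrent without threads.
struct Task {
    int period;               // poll every 'period' ticks
    int next;                 // next tick at which to run
    std::function<void()> run;
};

void tick(std::vector<Task>& tasks, int now) {
    for (auto& t : tasks) {
        if (now >= t.next) {
            t.run();
            t.next = now + t.period;
        }
    }
}
```

Because every task runs to completion before the next is polled, the sequencing is deterministic, which is exactly why this works on hardware too small for a real scheduler, and also why long-running tasks must be broken into short steps.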

Rvalue References

Here’s a new C++ feature: rvalue references.

An rvalue reference behaves just like an lvalue reference, except that it can bind to a temporary (an rvalue), whereas you cannot bind a (non-const) lvalue reference to an rvalue.

A&  a_ref3 = A(); // Error! a non-const lvalue reference can't bind to a temporary
A&& a_ref4 = A(); // Ok

Question: Why on Earth would we want to do this?!

It turns out that the combination of rvalue references and lvalue references is just what is needed to easily code move semantics. The rvalue reference can also be used to achieve perfect forwarding, a heretofore unsolved problem in C++. From a casual programmer’s perspective, what we get from rvalue references is more general and better performing libraries.
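To show what move semantics buys us, here’s a minimal sketch of a resource-owning class with both a copy constructor and a move constructor (Buffer is a made-up example; the syntax is what the proposal describes, though no shipping compiler supports it yet):

```cpp
#include <cstddef>
#include <cstring>

// Buffer owns a heap array. Copying duplicates the array; moving
// simply steals the pointer from a temporary that's about to die.
class Buffer {
public:
    explicit Buffer(std::size_t n) : size_(n), data_(new char[n]) {}

    Buffer(const Buffer& other)          // copy: O(n) duplication
        : size_(other.size_), data_(new char[other.size_]) {
        std::memcpy(data_, other.data_, size_);
    }

    Buffer(Buffer&& other)               // move: O(1) pointer steal
        : size_(other.size_), data_(other.data_) {
        other.data_ = nullptr;           // leave the source empty but valid
        other.size_ = 0;
    }

    ~Buffer() { delete[] data_; }

    std::size_t size() const { return size_; }

private:
    std::size_t size_;
    char* data_;
};
```

The move constructor is only ever chosen for rvalues, so the “theft” is safe: the source was a temporary nobody else could observe. This is what makes returning large objects from functions, and libraries built on them, cheap.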

I’m sure it will be a while before we see a compiler supporting it, but it’s a good concept to know.

More: Here’s the Forwarding Problem, which rvalue references solve (“perfect forwarding”). Via Boost.Range.
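A quick sketch of the perfect forwarding idea, using the proposed syntax (the Tracer class and make_wrapped function are hypothetical, just to make the lvalue/rvalue distinction observable):

```cpp
#include <string>
#include <utility>

// Tracer records which constructor overload was selected.
struct Tracer {
    bool constructed_from_rvalue;
    Tracer(std::string&&)      : constructed_from_rvalue(true)  {}
    Tracer(const std::string&) : constructed_from_rvalue(false) {}
};

// make_wrapped forwards its argument to T's constructor, preserving
// whether the caller passed an lvalue or an rvalue. With only lvalue
// references this information was lost; that's the Forwarding Problem.
template <typename T, typename Arg>
T make_wrapped(Arg&& arg) {
    return T(std::forward<Arg>(arg));  // rvalues stay rvalues
}
```

An lvalue argument reaches the const-reference constructor, while a temporary reaches the rvalue-reference constructor, so a single template replaces the combinatorial pile of const/non-const overloads the Forwarding Problem paper describes.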