(Post 23/10/2007) In terms of performance,
the overhead with all Wait Handles typically runs in the few-microseconds
region. Rarely is this of consequence in the context in which they are
used.
The lock statement (aka Monitor.Enter / Monitor.Exit)
is one example of a thread synchronization construct. While lock is suitable
for enforcing exclusive access to a particular resource or section of
code, there are some synchronization tasks for which it's clumsy or inadequate,
such as signaling a waiting worker thread to begin a task.
The Win32 API has a richer set of synchronization constructs,
and these are exposed in the .NET framework via the EventWaitHandle, Mutex
and Semaphore classes. Some are more useful than others: the Mutex class,
for instance, mostly doubles up on what's provided by lock, while EventWaitHandle
provides unique signaling functionality.
All three classes are based on the abstract WaitHandle
class, although behaviorally, they are quite different. One of the things
they do all have in common is that they can, optionally, be "named",
allowing them to work across all operating system processes, rather than
across just the threads in the current process.
EventWaitHandle has two subclasses: AutoResetEvent and
ManualResetEvent (neither being related to a C# event or delegate). Both
classes derive all their functionality from their base class: their only
difference being that they call the base class's constructor with a different
argument.
AutoResetEvent is the
most useful of the WaitHandle classes, and is a staple synchronization
construct, along with the lock statement.
AutoResetEvent
An AutoResetEvent is much like a ticket turnstile: inserting
a ticket lets exactly one person through. The "auto" in the
class's name refers to the fact that an open turnstile automatically closes
or "resets" after someone is let through. A thread waits, or
blocks, at the turnstile by calling WaitOne (wait at this "one"
turnstile until it opens) and a ticket is inserted by calling the Set
method. If a number of threads call WaitOne, a queue builds up behind
the turnstile. A ticket can come from any thread – in other words, any
(unblocked) thread with access to the AutoResetEvent object can call Set
on it to release one blocked thread.
If Set is called when no thread is waiting, the handle
stays open until some thread calls WaitOne.
This behavior helps avoid a race between a thread heading for the turnstile
and a thread inserting a ticket ("oops, inserted the ticket a microsecond
too soon, bad luck, now you'll have to wait indefinitely!"). However,
calling Set repeatedly on a turnstile at which no one is waiting doesn't
allow a whole party through when they arrive: only the next single person
is let through and the extra tickets are "wasted".
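To illustrate, here's a minimal sketch: two consecutive calls to Set store only a single "ticket", which zero-timeout polls on the handle reveal:

```csharp
using System;
using System.Threading;

class WastedTickets {
  static void Main() {
    EventWaitHandle wh = new AutoResetEvent (false);
    wh.Set();    // First ticket opens the turnstile...
    wh.Set();    // ...the second is wasted.
    // A zero-timeout WaitOne polls the handle without blocking.
    Console.WriteLine (wh.WaitOne (0, false));   // True  - consumes the signal
    Console.WriteLine (wh.WaitOne (0, false));   // False - no signal left
  }
}
```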
WaitOne accepts an optional timeout parameter – the method
then returns false if the wait ended because of a timeout rather than
obtaining the signal. WaitOne can also be instructed to exit the current
synchronization context for the duration of the wait (if an automatic
locking regime is in use) in order to prevent excessive blocking.
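For example, a wait with a one-second timeout might look like this (a minimal sketch; the timeout value is illustrative):

```csharp
using System;
using System.Threading;

class TimeoutDemo {
  static void Main() {
    EventWaitHandle wh = new AutoResetEvent (false);
    // No thread ever calls Set, so the wait gives up after one
    // second and WaitOne returns false.
    bool signaled = wh.WaitOne (1000, false);
    Console.WriteLine (signaled ? "Signaled" : "Timed out");
  }
}
```

This prints "Timed out".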
A Reset method is also provided that closes the turnstile,
should it be open, without any waiting or blocking.
An AutoResetEvent can be created in one of two ways.
The first is via its constructor:
EventWaitHandle wh = new AutoResetEvent (false);
If the boolean argument is true, the handle's Set method
is called automatically, immediately after construction. The other method
of instantiation is via its base class, EventWaitHandle:
EventWaitHandle wh = new EventWaitHandle (false, EventResetMode.AutoReset);
EventWaitHandle's constructor also allows a ManualResetEvent
to be created (by specifying EventResetMode.ManualReset).
One should call Close on a Wait Handle to release operating
system resources once it's no longer required. However, if a Wait Handle
is going to be used for the life of an application (as in most of the
examples in this section), one can be lazy and omit this step as it will
be taken care of automatically during application domain tear-down.
In the following example, a thread is started whose job
is simply to wait until signaled by another thread.
class BasicWaitHandle {
  static EventWaitHandle wh = new AutoResetEvent (false);

  static void Main() {
    new Thread (Waiter).Start();
    Thread.Sleep (1000);                  // Wait for some time...
    wh.Set();                             // OK - wake it up
  }

  static void Waiter() {
    Console.WriteLine ("Waiting...");
    wh.WaitOne();                         // Wait for notification
    Console.WriteLine ("Notified");
  }
}
Waiting... (pause) Notified.
Creating a Cross-Process EventWaitHandle
EventWaitHandle's constructor also allows a "named"
EventWaitHandle to be created – capable of operating across multiple processes.
The name is simply a string – and can be any value that doesn't unintentionally
conflict with someone else's! If the name is already in use on the computer,
one gets a reference to the same underlying EventWaitHandle, otherwise
the operating system creates a new one. Here's an example:
EventWaitHandle wh = new EventWaitHandle (false, EventResetMode.AutoReset,
"MyCompany.MyApp.SomeName");
If two applications each ran this code, they would be
able to signal each other: the wait handle would work across all threads
in both processes.
Acknowledgement
Suppose we wish to perform tasks in the background
without the overhead of creating a new thread each time we get a task.
We can achieve this with a single worker thread that continually loops
– waiting for a task, executing it, and then waiting for the next task.
This is a common multithreading scenario. As well as cutting the overhead
in creating threads, task execution is serialized, eliminating the potential
for unwanted interaction between multiple workers and excessive resource
consumption.
We have to decide what to do, however, if the worker's
already busy with a previous task when a new task comes along. Suppose in
this situation we choose to block the caller until the previous task is
complete. Such a system can be implemented using two AutoResetEvent objects:
a "ready" AutoResetEvent that's Set by the worker when it's
ready, and a "go" AutoResetEvent that's Set by the calling thread
when there's a new task. In the example below, a simple string field is
used to describe the task (declared using the volatile keyword to ensure
both threads always see the same version):
class AcknowledgedWaitHandle {
  static EventWaitHandle ready = new AutoResetEvent (false);
  static EventWaitHandle go = new AutoResetEvent (false);
  static volatile string task;

  static void Main() {
    new Thread (Work).Start();

    // Signal the worker 5 times
    for (int i = 1; i <= 5; i++) {
      ready.WaitOne();                // First wait until worker is ready
      task = "a".PadRight (i, 'h');   // Assign a task
      go.Set();                       // Tell worker to go!
    }

    // Tell the worker to end using a null-task
    ready.WaitOne(); task = null; go.Set();
  }

  static void Work() {
    while (true) {
      ready.Set();                    // Indicate that we're ready
      go.WaitOne();                   // Wait to be kicked off...
      if (task == null) return;       // Gracefully exit
      Console.WriteLine (task);
    }
  }
}
Notice that we assign a null task to signal the worker
thread to exit. Calling Interrupt or Abort on the worker's thread in this
case would work equally well – provided we first called ready.WaitOne.
This is because after calling ready.WaitOne we can be certain of the location
of the worker – either on or just before the go.WaitOne statement – and
thereby avoid the complications of interrupting arbitrary code. Calling
Interrupt or Abort would also require that we caught the consequential
exception in the worker.
Producer/Consumer Queue
Another common threading scenario is to have a background
worker process tasks from a queue. This is called a Producer/Consumer
queue: the producer enqueues tasks; the consumer dequeues tasks on a worker
thread. It's rather like the previous example, except that the caller
doesn't get blocked if the worker's already busy with a task.
A Producer/Consumer queue is scalable, in that multiple
consumers can be created – each servicing the same queue, but on a separate
thread. This is a good way to take advantage of multi-processor systems
while still restricting the number of workers so as to avoid the pitfalls
of unbounded concurrent threads (excessive context switching and resource
contention).
In the example below, a single AutoResetEvent is used
to signal the worker, which waits only if it runs out of tasks (when the
queue is empty). A generic collection class is used for the queue, whose
access must be protected by a lock to ensure thread-safety. The worker
is ended by enqueuing a null task:
using System;
using System.Threading;
using System.Collections.Generic;
class ProducerConsumerQueue : IDisposable {
  EventWaitHandle wh = new AutoResetEvent (false);
  Thread worker;
  object locker = new object();
  Queue<string> tasks = new Queue<string>();

  public ProducerConsumerQueue() {
    worker = new Thread (Work);
    worker.Start();
  }

  public void EnqueueTask (string task) {
    lock (locker) tasks.Enqueue (task);
    wh.Set();
  }

  public void Dispose() {
    EnqueueTask (null);     // Signal the consumer to exit.
    worker.Join();          // Wait for the consumer's thread to finish.
    wh.Close();             // Release any OS resources.
  }

  void Work() {
    while (true) {
      string task = null;
      lock (locker)
        if (tasks.Count > 0) {
          task = tasks.Dequeue();
          if (task == null) return;
        }
      if (task != null) {
        Console.WriteLine ("Performing task: " + task);
        Thread.Sleep (1000);          // simulate work...
      }
      else
        wh.WaitOne();                 // No more tasks - wait for a signal
    }
  }
}
Here's a main method to test the queue:
class Test {
  static void Main() {
    using (ProducerConsumerQueue q = new ProducerConsumerQueue()) {
      q.EnqueueTask ("Hello");
      for (int i = 0; i < 10; i++) q.EnqueueTask ("Say " + i);
      q.EnqueueTask ("Goodbye!");
    }
    // Exiting the using statement calls q's Dispose method, which
    // enqueues a null task and waits until the consumer finishes.
  }
}
Performing task: Hello
Performing task: Say 0
Performing task: Say 1
Performing task: Say 2
...
Performing task: Say 9
Performing task: Goodbye!
Note that in this example we explicitly close the Wait
Handle when our ProducerConsumerQueue is disposed – since we could potentially
create and destroy many instances of this class within the life of the
application.
ManualResetEvent
A ManualResetEvent is a variation on AutoResetEvent.
It differs in that it doesn't automatically reset after a thread is let
through on a WaitOne call, and so functions like a gate: calling Set opens
the gate, allowing any number of threads that WaitOne at the gate through;
calling Reset closes the gate, causing, potentially, a queue of waiters
to accumulate until it's next opened.
One could simulate this functionality with a boolean
"gateOpen" field (declared with the volatile keyword) in combination
with "spin-sleeping" – repeatedly checking the flag, and then
sleeping for a short period of time.
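Such a spin-sleep gate might be sketched as follows (a minimal sketch: the gateOpen name comes from the paragraph above, and the 10 ms sleep interval is an arbitrary choice):

```csharp
using System;
using System.Threading;

class SpinSleepGate {
  static volatile bool gateOpen;

  static void Main() {
    new Thread (delegate() {
      // Spin-sleep: poll the flag, sleeping briefly between checks.
      // This works, but wastes CPU and adds latency compared to
      // blocking on a real ManualResetEvent.
      while (!gateOpen) Thread.Sleep (10);
      Console.WriteLine ("Through the gate");
    }).Start();

    Thread.Sleep (100);
    gateOpen = true;   // The equivalent of calling Set
  }
}
```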
ManualResetEvents are sometimes used to signal that a
particular operation is complete, or that a thread's completed initialization
and is ready to perform work.
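Here's a minimal sketch of the gate behavior; the thread count and sleep duration are arbitrary:

```csharp
using System;
using System.Threading;

class ManualGate {
  static EventWaitHandle gate = new ManualResetEvent (false);

  static void Main() {
    for (int i = 0; i < 3; i++)
      new Thread (delegate() {
        gate.WaitOne();                         // Queue at the gate
        Console.WriteLine ("Through the gate");
      }).Start();

    Thread.Sleep (500);
    gate.Set();   // Open the gate: all three threads are released
  }
}
```

Because the event is never reset, all three waiters pass through on the single Set call, and "Through the gate" is printed three times.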
Mutex
Mutex provides the same functionality as C#'s lock statement,
making Mutex mostly redundant. Its one advantage is that it can work across
multiple processes – providing a computer-wide lock rather than an application-wide
lock.
While Mutex is reasonably fast, lock is a
hundred times faster again. Acquiring a Mutex takes a few microseconds;
acquiring a lock takes tens of nanoseconds (assuming no blocking).
With the Mutex class, the WaitOne method obtains the exclusive
lock, blocking if it's contended. The exclusive lock is then released
with the ReleaseMutex method. Just like with C#'s lock statement, a Mutex
can only be released from the same thread that obtained it.
A common use for a cross-process Mutex is to ensure that
only one instance of a program can run at a time. Here's how it's done:
class OneAtATimePlease {
  // Use a name unique to the application (eg include your company URL)
  static Mutex mutex = new Mutex (false, "oreilly.com OneAtATimeDemo");

  static void Main() {
    // Wait 5 seconds if contended – in case another instance
    // of the program is in the process of shutting down.
    if (!mutex.WaitOne (TimeSpan.FromSeconds (5), false)) {
      Console.WriteLine ("Another instance of the app is running. Bye!");
      return;
    }
    try {
      Console.WriteLine ("Running - press Enter to exit");
      Console.ReadLine();
    }
    finally { mutex.ReleaseMutex(); }
  }
}
A good feature of Mutex is that if the application terminates
without ReleaseMutex first being called, the CLR will release the Mutex
automatically.
Semaphore
A Semaphore is like a nightclub: it has a certain capacity,
enforced by a bouncer. Once full, no more people can enter the nightclub
and a queue builds up outside. Then, for each person that leaves, one
person can enter from the head of the queue. The constructor requires
a minimum of two arguments – the number of places currently available
in the nightclub, and the nightclub's total capacity.
A Semaphore with a capacity of one is similar to a Mutex
or lock, except that the Semaphore has no "owner" – it's thread-agnostic.
Any thread can call Release on a Semaphore, while with Mutex and lock,
only the thread that obtained the resource can release it.
In the following example, ten threads execute a loop
with a Sleep statement in the middle. A Semaphore ensures that not more
than three threads can execute that Sleep statement at once:
class SemaphoreTest {
  static Semaphore s = new Semaphore (3, 3);   // Available=3; Capacity=3

  static void Main() {
    for (int i = 0; i < 10; i++) new Thread (Go).Start();
  }

  static void Go() {
    while (true) {
      s.WaitOne();
      Thread.Sleep (100);   // Only 3 threads can get here at once
      s.Release();
    }
  }
}
WaitAny, WaitAll and SignalAndWait
In addition to the Set and WaitOne methods, there are
static methods on the WaitHandle class to crack more complex synchronization
nuts.
The WaitAny, WaitAll and SignalAndWait methods facilitate
waiting across multiple Wait Handles, potentially of differing types.
SignalAndWait is perhaps the most useful: it calls WaitOne on one WaitHandle,
while calling Set on another WaitHandle – in an atomic operation. One
can use this method on a pair of EventWaitHandles to set up two threads
so they "meet" at the same point in time, in a textbook fashion.
Either AutoResetEvent or ManualResetEvent will do the trick. The first
thread does the following:
WaitHandle.SignalAndWait (wh1, wh2);
while the second thread does the opposite:
WaitHandle.SignalAndWait (wh2, wh1);
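Putting the two calls together, a minimal rendezvous sketch (with wh1 and wh2 as in the snippets above; the 500 ms delay is illustrative) could look like this:

```csharp
using System;
using System.Threading;

class Rendezvous {
  static EventWaitHandle wh1 = new AutoResetEvent (false);
  static EventWaitHandle wh2 = new AutoResetEvent (false);

  static void Main() {
    new Thread (delegate() {
      Console.WriteLine ("Second thread at the meeting point");
      WaitHandle.SignalAndWait (wh2, wh1);   // Signal wh2, wait on wh1
      Console.WriteLine ("Second thread released");
    }).Start();

    Thread.Sleep (500);                      // Arrive late, on purpose
    Console.WriteLine ("First thread at the meeting point");
    WaitHandle.SignalAndWait (wh1, wh2);     // Signal wh1, wait on wh2
    Console.WriteLine ("First thread released");
  }
}
```

Whichever thread arrives first blocks until the other signals; neither thread proceeds past its SignalAndWait call until both have reached the meeting point.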
WaitHandle.WaitAny waits for any one of an array of wait
handles; WaitHandle.WaitAll waits on all of the given handles. Using the
ticket turnstile analogy, these methods are like simultaneously queuing
at all the turnstiles – going through at the first one to open (in the
case of WaitAny), or waiting until they all open (in the case of WaitAll).
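A minimal WaitAny sketch (the 100 ms delay and the two-handle array are arbitrary choices) might look like this:

```csharp
using System;
using System.Threading;

class WaitAnyDemo {
  static WaitHandle[] handles = { new AutoResetEvent (false),
                                  new AutoResetEvent (false) };

  static void Main() {
    new Thread (delegate() {
      Thread.Sleep (100);
      ((EventWaitHandle) handles [1]).Set();   // Open the second turnstile
    }).Start();

    // WaitAny blocks until any one handle is signaled,
    // returning the index of that handle in the array.
    int index = WaitHandle.WaitAny (handles);
    Console.WriteLine ("Handle " + index + " signaled");
  }
}
```

This prints "Handle 1 signaled".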
WaitAll is actually of dubious value because of a weird
connection to apartment threading – a throwback from the legacy COM architecture.
WaitAll requires that the caller be in a multi-threaded apartment – which
happens to be the apartment model least suitable for interoperability
– particularly for Windows Forms applications, which need to perform tasks
as mundane as interacting with the clipboard!
Fortunately, the .NET framework provides a more advanced
signaling mechanism for when Wait Handles are awkward or unsuitable –
Monitor.Wait and Monitor.Pulse.
(Collected)