Threading in C#

http://www.albahari.com/threading/

PART 1: GETTING STARTED

Introduction and Concepts

C# supports parallel execution of code through multithreading. A thread is an independent execution path, able to run simultaneously with other threads.

A C# client program (Console, WPF, or Windows Forms) starts in a single thread created automatically by the CLR and operating system (the “main” thread), and is made multithreaded by creating additional threads. Here’s a simple example and its output:

All examples assume the following namespaces are imported:

using System;
using System.Threading;
class ThreadTest
{
  static void Main()
  {
    Thread t = new Thread (WriteY);          // Kick off a new thread
    t.Start();                               // running WriteY()

    // Simultaneously, do something on the main thread.
    for (int i = 0; i < 1000; i++) Console.Write ("x");
  }

  static void WriteY()
  {
    for (int i = 0; i < 1000; i++) Console.Write ("y");
  }
}
xxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyy
yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyxxxxxxxxxxxxxxxxxxxxxx
xxxxxxxxxxxxxxxxxxxxxxyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy
yyyyyyyyyyyyyxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
...

The main thread creates a new thread t on which it runs a method that repeatedly prints the character “y”. Simultaneously, the main thread repeatedly prints the character “x”.

Once started, a thread’s IsAlive property returns true, until the point where the thread ends. A thread ends when the delegate passed to the Thread’s constructor finishes executing. Once ended, a thread cannot restart.
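
Here’s a minimal sketch (not part of the original example) demonstrating these rules; Join, covered shortly, simply waits for the thread to end:

Thread t = new Thread (() => Thread.Sleep (100));
Console.WriteLine (t.IsAlive);   // False: not yet started
t.Start();
Console.WriteLine (t.IsAlive);   // True (almost certainly, since it sleeps 100 ms)
t.Join();                        // Wait for the thread to end
Console.WriteLine (t.IsAlive);   // False: the delegate has finished
// t.Start();                    // Would now throw ThreadStateException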

The CLR assigns each thread its own memory stack so that local variables are kept separate. In the next example, we define a method with a local variable, then call the method simultaneously on the main thread and a newly created thread:

static void Main()
{
  new Thread (Go).Start();      // Call Go() on a new thread
  Go();                         // Call Go() on the main thread
}
 
static void Go()
{
  // Declare and use a local variable - 'cycles'
  for (int cycles = 0; cycles < 5; cycles++) Console.Write ('?');
}
??????????

A separate copy of the cycles variable is created on each thread's memory stack, and so the output is, predictably, ten question marks.

Threads share data if they have a common reference to the same object instance. For example:

class ThreadTest
{
  bool done;
 
  static void Main()
  {
    ThreadTest tt = new ThreadTest();   // Create a common instance
    new Thread (tt.Go).Start();
    tt.Go();
  }
 
  // Note that Go is now an instance method
  void Go()
  {
     if (!done) { done = true; Console.WriteLine ("Done"); }
  }
}

Because both threads call Go() on the same ThreadTest instance, they share the done field. This results in "Done" being printed once instead of twice:

Done

Static fields offer another way to share data between threads. Here’s the same example with done as a static field:

class ThreadTest
{
  static bool done;    // Static fields are shared between all threads
 
  static void Main()
  {
    new Thread (Go).Start();
    Go();
  }
 
  static void Go()
  {
    if (!done) { done = true; Console.WriteLine ("Done"); }
  }
}

Both of these examples illustrate another key concept: that of thread safety (or rather, lack of it!) The output is actually indeterminate: it’s possible (though unlikely) that “Done” could be printed twice. If, however, we swap the order of statements in the Go method, the odds of “Done” being printed twice go up dramatically:

static void Go()
{
  if (!done) { Console.WriteLine ("Done"); done = true; }
}
Done
Done   (usually!)

The problem is that one thread can be evaluating the if statement right as the other thread is executing the WriteLine statement — before it’s had a chance to set done to true.

The remedy is to obtain an exclusive lock while reading and writing to the common field. C# provides the lock statement for just this purpose:

class ThreadSafe
{
  static bool done;
  static readonly object locker = new object();
 
  static void Main()
  {
    new Thread (Go).Start();
    Go();
  }
 
  static void Go()
  {
    lock (locker)
    {
      if (!done) { Console.WriteLine ("Done"); done = true; }
    }
  }
}

When two threads simultaneously contend a lock (in this case, locker), one thread waits, or blocks, until the lock becomes available. In this case, it ensures only one thread can enter the critical section of code at a time, and “Done” will be printed just once. Code that's protected in such a manner — from indeterminacy in a multithreading context — is called thread-safe.

Shared data is the primary cause of complexity and obscure errors in multithreading. Although often essential, it pays to keep it as simple as possible.

A thread, while blocked, doesn't consume CPU resources.

Join and Sleep

You can wait for another thread to end by calling its Join method. For example:

static void Main()
{
  Thread t = new Thread (Go);
  t.Start();
  t.Join();
  Console.WriteLine ("Thread t has ended!");
}

static void Go()
{
  for (int i = 0; i < 1000; i++) Console.Write ("y");
}

This prints “y” 1,000 times, followed by “Thread t has ended!” immediately afterward. You can include a timeout when calling Join, either in milliseconds or as a TimeSpan. It then returns true if the thread ended or false if it timed out.
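
Here’s a small sketch of Join with a timeout (the timings are arbitrary):

Thread t = new Thread (() => Thread.Sleep (2000));
t.Start();
bool ended = t.Join (1000);                   // Give up after one second
Console.WriteLine (ended);                    // False: timed out
ended = t.Join (TimeSpan.FromSeconds (5));    // Wait again, longer this time
Console.WriteLine (ended);                    // True: the thread has now ended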

Thread.Sleep pauses the current thread for a specified period:

Thread.Sleep (TimeSpan.FromHours (1));  // sleep for 1 hour
Thread.Sleep (500);                     // sleep for 500 milliseconds

While waiting on a Sleep or Join, a thread is blocked and so does not consume CPU resources.

Thread.Sleep(0) relinquishes the thread’s current time slice immediately, voluntarily handing over the CPU to other threads. Framework 4.0’s new Thread.Yield() method does the same thing — except that it relinquishes only to threads running on the same processor.

Sleep(0) or Yield is occasionally useful in production code for advanced performance tweaks. It’s also an excellent diagnostic tool for helping to uncover thread safety issues: if inserting Thread.Yield() anywhere in your code makes or breaks the program, you almost certainly have a bug.

How Threading Works

Multithreading is managed internally by a thread scheduler, a function the CLR typically delegates to the operating system. A thread scheduler ensures all active threads are allocated appropriate execution time, and that threads that are waiting or blocked (for instance, on an exclusive lock or on user input)  do not consume CPU time.

On a single-processor computer, a thread scheduler performs time-slicing — rapidly switching execution between each of the active threads. Under Windows, a time-slice is typically in the tens-of-milliseconds region — much larger than the CPU overhead in actually switching context between one thread and another (which is typically in the few-microseconds region).

On a multi-processor computer, multithreading is implemented with a mixture of time-slicing and genuine concurrency, where different threads run code simultaneously on different CPUs. It’s almost certain there will still be some time-slicing, because of the operating system’s need to service its own threads — as well as those of other applications.

A thread is said to be preempted when its execution is interrupted due to an external factor such as time-slicing. In most situations, a thread has no control over when and where it’s preempted.

Threads vs Processes

A thread is analogous to the operating system process in which your application runs. Just as processes run in parallel on a computer, threads run in parallel within a single process. Processes are fully isolated from each other; threads have just a limited degree of isolation. In particular, threads share (heap) memory with other threads running in the same application. This, in part, is why threading is useful: one thread can fetch data in the background, for instance, while another thread can display the data as it arrives.

Threading’s Uses and Misuses

Multithreading has many uses; here are the most common:

Maintaining a responsive user interface
By running time-consuming tasks on a parallel “worker” thread, the main UI thread is free to continue processing keyboard and mouse events.
Making efficient use of an otherwise blocked CPU
Multithreading is useful when a thread is awaiting a response from another computer or piece of hardware. While one thread is blocked performing the task, other threads can take advantage of the otherwise unburdened computer.
Parallel programming
Code that performs intensive calculations can execute faster on multicore or multiprocessor computers if the workload is shared among multiple threads in a “divide-and-conquer” strategy (see Part 5).
Speculative execution
On multicore machines, you can sometimes improve performance by predicting something that might need to be done, and then doing it ahead of time. LINQPad uses this technique to speed up the creation of new queries. A variation is to run a number of different algorithms in parallel that all solve the same task. Whichever one finishes first “wins” — this is effective when you can’t know ahead of time which algorithm will execute fastest.
Allowing requests to be processed simultaneously
On a server, client requests can arrive concurrently and so need to be handled in parallel (the .NET Framework creates threads for this automatically if you use ASP.NET, WCF, Web Services, or Remoting). This can also be useful on a client (e.g., handling peer-to-peer networking — or even multiple requests from the user).

With technologies such as ASP.NET and WCF, you may be unaware that multithreading is even taking place — unless you access shared data (perhaps via static fields) without appropriate locking, running afoul of thread safety.

Threads also come with strings attached. The biggest is that multithreading can increase complexity. Having lots of threads does not in and of itself create much complexity; it’s the interaction between threads (typically via shared data) that does. This applies whether or not the interaction is intentional, and can cause long development cycles and an ongoing susceptibility to intermittent and nonreproducible bugs. For this reason, it pays to keep interaction to a minimum, and to stick to simple and proven designs wherever possible. This article focuses largely on dealing with just these complexities; remove the interaction and there’s much less to say!

A good strategy is to encapsulate multithreading logic into reusable classes that can be independently examined and tested. The Framework itself offers many higher-level threading constructs, which we cover later.

Threading also incurs a resource and CPU cost in scheduling and switching threads (when there are more active threads than CPU cores) — and there’s also a creation/tear-down cost. Multithreading will not always speed up your application — it can even slow it down if used excessively or inappropriately. For example, when heavy disk I/O is involved, it can be faster to have a couple of worker threads run tasks in sequence than to have 10 threads executing at once. (In Signaling with Wait and Pulse, we describe how to implement a producer/consumer queue, which provides just this functionality.)

Creating and Starting Threads

As we saw in the introduction, threads are created using the Thread class’s constructor, passing in a ThreadStart delegate which indicates where execution should begin. Here’s how the ThreadStart delegate is defined:

public delegate void ThreadStart();

Calling Start on the thread then sets it running. The thread continues until its method returns, at which point the thread ends. Here’s an example, using the expanded C# syntax for creating a ThreadStart delegate:

class ThreadTest
{
  static void Main()
  {
    Thread t = new Thread (new ThreadStart (Go));
    t.Start();   // Run Go() on the new thread.
    Go();        // Simultaneously run Go() in the main thread.
  }

  static void Go()
  {
    Console.WriteLine ("hello!");
  }
}

In this example, thread t executes Go() — at (much) the same time the main thread calls Go(). The result is two near-instant hellos.

A thread can be created more conveniently by specifying just a method group — and allowing C# to infer the ThreadStart delegate:

Thread t = new Thread (Go);    // No need to explicitly use ThreadStart

Another shortcut is to use a lambda expression or anonymous method:

static void Main()
{
  Thread t = new Thread ( () => Console.WriteLine ("Hello!") );
  t.Start();
}

Passing Data to a Thread

The easiest way to pass arguments to a thread’s target method is to execute a lambda expression that calls the method with the desired arguments:

static void Main()
{
  Thread t = new Thread ( () => Print ("Hello from t!") );
  t.Start();
}

static void Print (string message)
{
  Console.WriteLine (message);
}

With this approach, you can pass in any number of arguments to the method. You can even wrap the entire implementation in a multi-statement lambda:

new Thread (() =>
{
  Console.WriteLine ("I'm running on another thread!");
  Console.WriteLine ("This is so easy!");
}).Start();

You can do the same thing almost as easily in C# 2.0 with anonymous methods:

new Thread (delegate()
{
  ...
}).Start();

Another technique is to pass an argument into Thread’s Start method:

static void Main()
{
  Thread t = new Thread (Print);
  t.Start ("Hello from t!");
}

static void Print (object messageObj)
{
  string message = (string) messageObj;   // We need to cast here
  Console.WriteLine (message);
}

This works because Thread’s constructor is overloaded to accept either of two delegates:

public delegate void ThreadStart();
public delegate void ParameterizedThreadStart (object obj);

The limitation of ParameterizedThreadStart is that it accepts only one argument. And because it’s of type object, it usually needs to be cast.
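
One way around the single-argument limit is to bundle several values into one object and pass that (a sketch; MessageInfo is a hypothetical wrapper type, not part of the Framework):

class MessageInfo                     // Hypothetical wrapper for multiple values
{
  public string Text;
  public int Repeat;
}

class WrapperDemo
{
  static void Main()
  {
    var info = new MessageInfo { Text = "Hello", Repeat = 3 };
    new Thread (Print).Start (info);  // One object carries both values
  }

  static void Print (object o)
  {
    var info = (MessageInfo) o;       // Cast back to the wrapper type
    for (int i = 0; i < info.Repeat; i++) Console.WriteLine (info.Text);
  }
}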

Lambda expressions and captured variables

As we saw, a lambda expression is the most powerful way to pass data to a thread. However, you must be careful about accidentally modifying captured variables after starting the thread, because these variables are shared. For instance, consider the following:

for (int i = 0; i < 10; i++)
  new Thread (() => Console.Write (i)).Start();

The output is nondeterministic! Here’s a typical result:

0223557799

The problem is that the i variable refers to the same memory location throughout the loop’s lifetime. Therefore, each thread calls Console.Write on a variable whose value may change as it is running!

This is analogous to the problem we describe in “Captured Variables” in Chapter 8 of C# 4.0 in a Nutshell. The problem is less about multithreading and more about C#'s rules for capturing variables (which are somewhat undesirable in the case of for and foreach loops).

The solution is to use a temporary variable as follows:

for (int i = 0; i < 10; i++)
{
  int temp = i;
  new Thread (() => Console.Write (temp)).Start();
}

Variable temp is now local to each loop iteration. Therefore, each thread captures a different memory location and there’s no problem. We can illustrate the problem in the earlier code more simply with the following example:

string text = "t1";
Thread t1 = new Thread ( () => Console.WriteLine (text) );
text = "t2";
Thread t2 = new Thread ( () => Console.WriteLine (text) );
t1.Start();
t2.Start();

Because both lambda expressions capture the same text variable, t2 is printed twice:

t2
t2

Naming Threads

Each thread has a Name property that you can set for the benefit of debugging. This is particularly useful in Visual Studio, since the thread’s name is displayed in the Threads Window and Debug Location toolbar. You can set a thread’s name just once; attempts to change it later will throw an exception.

The static Thread.CurrentThread property gives you the currently executing thread. In the following example, we set the main thread’s name:

class ThreadNaming
{
  static void Main()
  {
    Thread.CurrentThread.Name = "main";
    Thread worker = new Thread (Go);
    worker.Name = "worker";
    worker.Start();
    Go();
  }

  static void Go()
  {
    Console.WriteLine ("Hello from " + Thread.CurrentThread.Name);
  }
}

Foreground and Background Threads

By default, threads you create explicitly are foreground threads. Foreground threads keep the application alive for as long as any one of them is running, whereas background threads do not. Once all foreground threads finish, the application ends, and any background threads still running abruptly terminate.

A thread’s foreground/background status has no relation to its priority or allocation of execution time.

You can query or change a thread’s background status using its IsBackground property. Here’s an example:

class PriorityTest
{
  static void Main (string[] args)
  {
    Thread worker = new Thread ( () => Console.ReadLine() );
    if (args.Length > 0) worker.IsBackground = true;
    worker.Start();
  }
}

If this program is called with no arguments, the worker thread assumes foreground status and will wait on the ReadLine statement for the user to press Enter. Meanwhile, the main thread exits, but the application keeps running because a foreground thread is still alive.

On the other hand, if an argument is passed to Main(), the worker is assigned background status, and the program exits almost immediately as the main thread ends (terminating the ReadLine).

When a process terminates in this manner, any finally blocks in the execution stack of background threads are circumvented. This is a problem if your program employs finally (or using) blocks to perform cleanup work such as releasing resources or deleting temporary files. To avoid this, you can explicitly wait out such background threads upon exiting an application. There are two ways to accomplish this:

  • If you’ve created the thread yourself, call Join on the thread.
  • Otherwise, use an event wait handle.

In either case, you should specify a timeout, so you can abandon a renegade thread should it refuse to finish for some reason. This is your backup exit strategy: in the end, you want your application to close — without the user having to enlist help from the Task Manager!
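
Here’s a sketch of the Join approach (the 500 ms of simulated work and two-second timeout are arbitrary):

static void Main()
{
  Thread worker = new Thread (() =>
  {
    try { Thread.Sleep (500); }                    // Simulated cleanup work
    finally { Console.WriteLine ("Cleaned up"); }  // Runs only if we wait it out
  });
  worker.IsBackground = true;
  worker.Start();
  // ... main work ...
  if (!worker.Join (2000))                         // The back-up exit strategy
    Console.WriteLine ("Abandoning renegade worker");
}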

If a user uses the Task Manager to forcibly end a .NET process, all threads “drop dead” as though they were background threads. This is observed rather than documented behavior, and it could vary depending on the CLR and operating system version.

Foreground threads don’t require this treatment, but you must take care to avoid bugs that could cause the thread not to end. A common cause for applications failing to exit properly is the presence of active foreground threads.

Thread Priority

A thread’s Priority property determines how much execution time it gets relative to other active threads in the operating system, on the following scale:

enum ThreadPriority { Lowest, BelowNormal, Normal, AboveNormal, Highest }

This becomes relevant only when multiple threads are simultaneously active.

Think carefully before elevating a thread’s priority — it can lead to problems such as resource starvation for other threads.
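
For example, here’s a sketch of lowering a compute-bound worker’s priority so that it readily yields to other threads:

Thread worker = new Thread (() => { for (long i = 0; i < 1000000000; i++); });
worker.Priority = ThreadPriority.BelowNormal;   // Prefer other threads when contended
worker.Start();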

Elevating a thread’s priority doesn’t make it capable of performing real-time work, because it’s still throttled by the application’s process priority. To perform real-time work, you must also elevate the process priority using the Process class in System.Diagnostics (we didn’t tell you how to do this):

using (Process p = Process.GetCurrentProcess())
  p.PriorityClass = ProcessPriorityClass.High;

ProcessPriorityClass.High is actually one notch short of the highest priority: Realtime. Setting a process priority to Realtime instructs the OS that you never want the process to yield CPU time to another process. If your program enters an accidental infinite loop, you might find even the operating system locked out, with nothing short of the power button left to rescue you! For this reason, High is usually the best choice for real-time applications.

If your real-time application has a user interface, elevating the process priority gives screen updates excessive CPU time, slowing down the entire computer (particularly if the UI is complex). Lowering the main thread’s priority in conjunction with raising the process’s priority ensures that the real-time thread doesn’t get preempted by screen redraws, but doesn’t solve the problem of starving other applications of CPU time, because the operating system will still allocate disproportionate resources to the process as a whole. An ideal solution is to have the real-time worker and user interface run as separate applications with different process priorities, communicating via Remoting or memory-mapped files. Memory-mapped files are ideally suited to this task; we explain how they work in Chapters 14 and 25 of C# 4.0 in a Nutshell.

Even with an elevated process priority, there’s a limit to the suitability of the managed environment in handling hard real-time requirements. In addition to the issues of latency introduced by automatic garbage collection, the operating system may present additional challenges — even for unmanaged applications — that are best solved with dedicated hardware or a specialized real-time platform.

Exception Handling

Any try/catch/finally blocks in scope when a thread is created are of no relevance to the thread when it starts executing. Consider the following program:

public static void Main()
{
  try
  {
    new Thread (Go).Start();
  }
  catch (Exception ex)
  {
    // We'll never get here!
    Console.WriteLine ("Exception!");
  }
}

static void Go() { throw null; }   // Throws a NullReferenceException

The try/catch statement in this example is ineffective, and the newly created thread will be encumbered with an unhandled NullReferenceException. This behavior makes sense when you consider that each thread has an independent execution path.

The remedy is to move the exception handler into the Go method:

public static void Main()
{
   new Thread (Go).Start();
}

static void Go()
{
  try
  {
    // ...
    throw null;    // The NullReferenceException will get caught below
    // ...
  }
  catch (Exception ex)
  {
    // Typically log the exception, and/or signal another thread
    // that we've come unstuck
    // ...
  }
}

You need an exception handler on all thread entry methods in production applications — just as you do (usually at a higher level, in the execution stack) on your main thread. An unhandled exception causes the whole application to shut down. With an ugly dialog!

In writing such exception handling blocks, rarely would you ignore the error: typically, you’d log the details of the exception, and then perhaps display a dialog allowing the user to automatically submit those details to your web server. You then might shut down the application — because it’s possible that the error corrupted the program’s state. However, the cost of doing so is that the user will lose his recent work — open documents, for instance.

The “global” exception handling events for WPF and Windows Forms applications (Application.DispatcherUnhandledException and Application.ThreadException) fire only for exceptions thrown on the main UI thread. You still must handle exceptions on worker threads manually.

AppDomain.CurrentDomain.UnhandledException fires on any unhandled exception, but provides no means of preventing the application from shutting down afterward.
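
A sketch of subscribing to this event for last-chance logging (the exception and message here are illustrative):

static void Main()
{
  AppDomain.CurrentDomain.UnhandledException += (sender, e) =>
  {
    // Last-chance logging: we cannot stop the process from terminating.
    var ex = (Exception) e.ExceptionObject;
    Console.WriteLine ("Unhandled: " + ex.Message);
  };
  new Thread (() => { throw new InvalidOperationException ("oops"); }).Start();
  Thread.Sleep (1000);   // Give the worker time to fail
}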

There are, however, some cases where you don’t need to handle exceptions on a worker thread, because the .NET Framework does it for you. These are covered in upcoming sections, and are:

  • Asynchronous delegates
  • BackgroundWorker
  • The Task Parallel Library (conditions apply)

Thread Pooling

Whenever you start a thread, a few hundred microseconds are spent organizing such things as a fresh private local variable stack. Each thread also consumes (by default) around 1 MB of memory. The thread pool cuts these overheads by sharing and recycling threads, allowing multithreading to be applied at a very granular level without a performance penalty. This is useful when leveraging multicore processors to execute computationally intensive code in parallel in “divide-and-conquer” style.

The thread pool also keeps a lid on the total number of worker threads it will run simultaneously. Too many active threads throttle the operating system with administrative burden and render CPU caches ineffective. Once a limit is reached, jobs queue up and start only when another finishes. This makes arbitrarily concurrent applications possible, such as a web server. (The asynchronous method pattern is an advanced technique that takes this further by making highly efficient use of the pooled threads; we describe this in Chapter 23 of C# 4.0 in a Nutshell).

There are a number of ways to enter the thread pool:

  • Via the Task Parallel Library (from Framework 4.0)
  • By calling ThreadPool.QueueUserWorkItem
  • Via asynchronous delegates
  • Via BackgroundWorker

The following constructs use the thread pool indirectly:

  • WCF, Remoting, ASP.NET, and ASMX Web Services application servers
  • System.Timers.Timer and System.Threading.Timer
  • Framework methods that end in Async, such as those on WebClient (the event-based asynchronous pattern), and most BeginXxx methods (the asynchronous programming model pattern)
  • PLINQ

The Task Parallel Library (TPL) and PLINQ are sufficiently powerful and high-level that you’ll want to use them to assist in multithreading even when thread pooling is unimportant. We discuss these in detail in Part 5; right now, we'll look briefly at how you can use the Task class as a simple means of running a delegate on a pooled thread.

There are a few things to be wary of when using pooled threads:

  • You cannot set the Name of a pooled thread, making debugging more difficult (although you can attach a description when debugging in Visual Studio’s Threads window).
  • Pooled threads are always background threads (this is usually not a problem).
  • Blocking a pooled thread may trigger additional latency in the early life of an application unless you call ThreadPool.SetMinThreads (see Optimizing the Thread Pool).

You are free to change the priority of a pooled thread — it will be restored to normal when released back to the pool.

You can query if you’re currently executing on a pooled thread via the property Thread.CurrentThread.IsThreadPoolThread.
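
For example (a sketch; assumes System.Threading.Tasks is also imported):

Console.WriteLine (Thread.CurrentThread.IsThreadPoolThread);      // False: main thread
Task.Factory.StartNew (() =>
  Console.WriteLine (Thread.CurrentThread.IsThreadPoolThread));   // True
Console.ReadLine();   // Keep the process alive long enough to see the output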

Entering the Thread Pool via TPL

You can enter the thread pool easily using the Task classes in the Task Parallel Library. The Task classes were introduced in Framework 4.0: if you’re familiar with the older constructs, consider the nongeneric Task class a replacement for ThreadPool.QueueUserWorkItem, and the generic Task<TResult> a replacement for asynchronous delegates. The newer constructs are faster, more convenient, and more flexible than the old.

To use the nongeneric Task class, call Task.Factory.StartNew, passing in a delegate of the target method:

static void Main()    // The Task class is in System.Threading.Tasks
{
  Task.Factory.StartNew (Go);
}

static void Go()
{
  Console.WriteLine ("Hello from the thread pool!");
}

Task.Factory.StartNew returns a Task object, which you can then use to monitor the task — for instance, you can wait for it to complete by calling its Wait method.

Any unhandled exceptions are conveniently rethrown onto the host thread when you call a task's Wait method. (If you don’t call Wait and instead abandon the task, an unhandled exception will shut down the process as with an ordinary thread.)
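
A minimal sketch of the rethrow (as we’ll see shortly, the exception arrives wrapped in an AggregateException):

Task task = Task.Factory.StartNew (() => { throw null; });
try
{
  task.Wait();                     // The worker's exception is rethrown here
}
catch (AggregateException ex)      // Wait wraps it in an AggregateException
{
  Console.WriteLine (ex.InnerException is NullReferenceException);   // True
}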

The generic Task<TResult> class is a subclass of the nongeneric Task. It lets you get a return value back from the task after it finishes executing. In the following example, we download a web page using Task<TResult>:

static void Main()
{
  // Start the task executing:
  Task<string> task = Task.Factory.StartNew<string>
    ( () => DownloadString ("http://www.linqpad.net") );

  // We can do other work here and it will execute in parallel:
  RunSomeOtherMethod();

  // When we need the task's return value, we query its Result property:
  // If it's still executing, the current thread will now block (wait)
  // until the task finishes:
  string result = task.Result;
}

static string DownloadString (string uri)
{
  using (var wc = new System.Net.WebClient())
    return wc.DownloadString (uri);
}

(The <string> type argument is for clarity: it would be inferred if we omitted it.)

Any unhandled exceptions are automatically rethrown when you query the task's Result property, wrapped in an AggregateException. However, if you fail to query its Result property (and don’t call Wait) any unhandled exception will take the process down.

The Task Parallel Library has many more features, and is particularly well suited to leveraging multicore processors. We’ll resume our discussion of TPL in Part 5.

Entering the Thread Pool Without TPL

You can't use the Task Parallel Library if you're targeting an earlier version of the .NET Framework (prior to 4.0). Instead, you must use one of the older constructs for entering the thread pool: ThreadPool.QueueUserWorkItem and asynchronous delegates. The difference between the two is that asynchronous delegates let you return data from the thread. Asynchronous delegates also marshal any exception back to the caller.

QueueUserWorkItem

To use QueueUserWorkItem, simply call this method with a delegate that you want to run on a pooled thread:

static void Main()
{
  ThreadPool.QueueUserWorkItem (Go);
  ThreadPool.QueueUserWorkItem (Go, 123);
  Console.ReadLine();
}

static void Go (object data)   // data will be null with the first call.
{
  Console.WriteLine ("Hello from the thread pool! " + data);
}
Hello from the thread pool!
Hello from the thread pool! 123

Our target method, Go, must accept a single object argument (to satisfy the WaitCallback delegate). This provides a convenient way of passing data to the method, just like with ParameterizedThreadStart. Unlike with Task, QueueUserWorkItem doesn't return an object to help you subsequently manage execution. Also, you must explicitly deal with exceptions in the target code — unhandled exceptions will take down the program.
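
So a production-ready target method would wrap its body in a handler, along these lines (a sketch):

static void Go (object data)
{
  try
  {
    throw null;   // Simulate a failure in the real work
  }
  catch (Exception ex)
  {
    // Without this handler, the exception would take down the process.
    Console.WriteLine ("Worker error: " + ex.GetType().Name);
  }
}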

Asynchronous delegates

ThreadPool.QueueUserWorkItem doesn’t provide an easy mechanism for getting return values back from a thread after it has finished executing. Asynchronous delegate invocations (asynchronous delegates for short) solve this, allowing any number of typed arguments to be passed in both directions. Furthermore, unhandled exceptions on asynchronous delegates are conveniently rethrown on the original thread (or more accurately, the thread that calls EndInvoke), and so they don’t need explicit handling.

Don’t confuse asynchronous delegates with asynchronous methods (methods starting with Begin or End, such as File.BeginRead/File.EndRead). Asynchronous methods follow a similar protocol outwardly, but they exist to solve a much harder problem, which we describe in Chapter 23 of C# 4.0 in a Nutshell.

Here’s how you start a worker task via an asynchronous delegate:

  1. Instantiate a delegate targeting the method you want to run in parallel (typically one of the predefined Func delegates).
  2. Call BeginInvoke on the delegate, saving its IAsyncResult return value.

    BeginInvoke returns immediately to the caller. You can then perform other activities while the pooled thread is working.

  3. When you need the results, call EndInvoke on the delegate, passing in the saved IAsyncResult object.

In the following example, we use an asynchronous delegate invocation to execute concurrently with the main thread, a simple method that returns a string’s length:

static void Main()
{
  Func<string, int> method = Work;
  IAsyncResult cookie = method.BeginInvoke ("test", null, null);
  //
  // ... here's where we can do other work in parallel...
  //
  int result = method.EndInvoke (cookie);
  Console.WriteLine ("String length is: " + result);
}

static int Work (string s) { return s.Length; }

EndInvoke does three things. First, it waits for the asynchronous delegate to finish executing, if it hasn’t already. Second, it receives the return value (as well as any ref or out parameters). Third, it throws any unhandled worker exception back to the calling thread.

If the method you’re calling with an asynchronous delegate has no return value, you are still (technically) obliged to call EndInvoke. In practice, this is open to debate; there are no EndInvoke police to administer punishment to noncompliers! If you choose not to call EndInvoke, however, you’ll need to consider exception handling on the worker method to avoid silent failures.

You can also specify a callback delegate when calling BeginInvoke — a method accepting an IAsyncResult object that’s automatically called upon completion. This allows the instigating thread to “forget” about the asynchronous delegate, but it requires a bit of extra work at the callback end:

static void Main()
{
  Func<string, int> method = Work;
  method.BeginInvoke ("test", Done, method);
  // ...
  //
}

static int Work (string s) { return s.Length; }

static void Done (IAsyncResult cookie)
{
  var target = (Func<string, int>) cookie.AsyncState;
  int result = target.EndInvoke (cookie);
  Console.WriteLine ("String length is: " + result);
}

The final argument to BeginInvoke is a user state object that populates the AsyncState property of IAsyncResult. It can contain anything you like; in this case, we’re using it to pass the method delegate to the completion callback, so we can call EndInvoke on it.

Optimizing the Thread Pool

The thread pool starts out with one thread in its pool. As tasks are assigned, the pool manager “injects” new threads to cope with the extra concurrent workload, up to a maximum limit. After a sufficient period of inactivity, the pool manager may “retire” threads if it suspects that doing so will lead to better throughput.

You can set the upper limit of threads that the pool will create by calling ThreadPool.SetMaxThreads; the defaults are:

  • 1023 in Framework 4.0 in a 32-bit environment
  • 32768 in Framework 4.0 in a 64-bit environment
  • 250 per core in Framework 3.5
  • 25 per core in Framework 2.0

(These figures may vary according to the hardware and operating system.) The reason there are that many is to ensure progress should some threads be blocked (idling while awaiting some condition, such as a response from a remote computer).
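
You can query the current limits with ThreadPool.GetMaxThreads; for example:

int workerThreads, completionPortThreads;
ThreadPool.GetMaxThreads (out workerThreads, out completionPortThreads);
Console.WriteLine (workerThreads);           // e.g., 1023 on 32-bit Framework 4.0
Console.WriteLine (completionPortThreads);   // The I/O completion port limit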

You can also set a lower limit by calling ThreadPool.SetMinThreads. The role of the lower limit is subtler: it’s an advanced optimization technique that instructs the pool manager not to delay in the allocation of threads until reaching the lower limit. Raising the minimum thread count improves concurrency when there are blocked threads (see sidebar).

The default lower limit is one thread per processor core — the minimum that allows full CPU utilization. On server environments, though (such as ASP.NET under IIS), the lower limit is typically much higher — as much as 50 or more.

How Does the Minimum Thread Count Work?

Increasing the thread pool’s minimum thread count to x doesn’t actually force x threads to be created right away — threads are created only on demand. Rather, it instructs the pool manager to create up to x threads the instant they are required. The question, then, is why would the thread pool otherwise delay in creating a thread when it’s needed?

The answer is to prevent a brief burst of short-lived activity from causing a full allocation of threads, suddenly swelling an application’s memory footprint. To illustrate, consider a quad-core computer running a client application that enqueues 40 tasks at once. If each task performs a 10 ms calculation, the whole thing will be over in 100 ms, assuming the work is divided among the four cores. Ideally, we’d want the 40 tasks to run on exactly four threads:

  • Any less and we’d not be making maximum use of all four cores.
  • Any more and we’d be wasting memory and CPU time creating unnecessary threads.

And this is exactly how the thread pool works. Matching the thread count to the core count allows a program to retain a small memory footprint without hurting performance — as long as the threads are efficiently used (which in this case they are).

But now suppose that instead of working for 10 ms, each task queries the Internet, waiting half a second for a response while the local CPU is idle. The pool manager’s thread-economy strategy breaks down; it would now do better to create more threads, so all the Internet queries could happen simultaneously.

Fortunately, the pool manager has a backup plan. If its queue remains stationary for more than half a second, it responds by creating more threads — one every half-second — up to the capacity of the thread pool.

The half-second delay is a two-edged sword. On the one hand, it means that a one-off burst of brief activity doesn’t make a program suddenly consume an extra unnecessary 40 MB (or more) of memory. On the other hand, it can needlessly delay things when a pooled thread blocks, such as when querying a database or calling WebClient.DownloadFile. For this reason, you can tell the pool manager not to delay in the allocation of the first x threads, by calling SetMinThreads, for instance:

ThreadPool.SetMinThreads (50, 50);

(The second value indicates how many threads to assign to I/O completion ports, which are used by the APM, described in Chapter 23 of C# 4.0 in a Nutshell.)

The default value is one thread per core.
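
A rough way to observe this behavior (a sketch; exact timings vary by machine and CLR version) is to enqueue a batch of blocking work items and print when each one starts:

static void Main()
{
  var sw = System.Diagnostics.Stopwatch.StartNew();
  for (int i = 0; i < 20; i++)
    ThreadPool.QueueUserWorkItem (_ =>
    {
      Console.WriteLine (sw.ElapsedMilliseconds + " ms");   // When this item started
      Thread.Sleep (5000);                                  // Simulate a blocking call
    });
  Console.ReadLine();
  // With default settings, the start times climb in roughly half-second steps once
  // the initial threads are exhausted; after ThreadPool.SetMinThreads (20, 20),
  // all 20 items start almost immediately.
}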

Part 2 >>
