What is Threading?

Threading in C# allows you to run multiple operations concurrently, making effective use of the available CPU cores. It harnesses the power of the hardware and the operating system to improve the performance of our software.

C# provides the System.Threading namespace for working with threads. The operating system allocates processor time to threads, and a component of the operating system called the thread scheduler manages the threads independently.

Every process in Windows can have one or more threads. Now, you might ask: what is a process?

A process is an execution unit that runs whenever you use an application in Windows. For example, open Microsoft Edge (or any other browser) and then go to Task Manager’s Processes tab: you’ll see a process listed for every currently running application.

Tip: Press SHIFT + CTRL + ESC to open Task Manager.


Let’s look at an example now.
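Here’s a minimal sketch of that example (the loop count of five iterations is an illustrative choice; the method name and the 500 ms delay follow the explanation below):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // Create a thread and pass the work (a method) to its constructor.
        Thread thread = new Thread(PrintNumbers);
        thread.Start(); // runs PrintNumbers on the new worker thread

        // Call the same method directly, i.e. on the main thread.
        PrintNumbers();
    }

    static void PrintNumbers()
    {
        for (int i = 1; i <= 5; i++)
        {
            Console.WriteLine($"Thread {Thread.CurrentThread.ManagedThreadId}: {i}");
            Thread.Sleep(500); // simulate a time-consuming task
        }
    }
}
```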

In the example above, you create a thread with the Thread class. Whatever work you want the thread to perform, you pass to its constructor. In this case, we pass the PrintNumbers method because we want our thread to execute it.

thread.Start() starts the thread, which then executes the PrintNumbers method in parallel with the main thread.

In the Main method body, you can see we are calling the PrintNumbers method directly. This means the PrintNumbers method will be called on the Main thread as well.

Therefore, the PrintNumbers method gets called on both threads (the main thread and the newly created thread).

Thread.CurrentThread.ManagedThreadId gives the unique ID of the thread running this method. This allows you to distinguish between the main thread and the new thread.

Thread.Sleep(500) pauses the thread for 500 milliseconds to simulate doing some time-consuming task.

Each loop iteration prints the thread ID and the current number, allowing you to see how both threads run concurrently.

The two threads won’t run in any particular order. Every time you run this program, you’ll see different output. The operating system decides which thread executes when; it’s outside the developer’s control.

Whenever you run a C# program, it starts on the main thread. Any new threads that you create are called worker threads. Threading in C# lets you create such worker threads, execute whatever work you want on them, and eventually dispose of them. The main thread remains active as long as the application is running.

For better readability, you can name your threads whatever you want while initializing the thread as shown below.
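Something like this (a minimal sketch; any descriptive name works):

```csharp
Thread thread = new Thread(PrintNumbers)
{
    Name = "My worker thread" // shows up in debuggers and in Thread.CurrentThread.Name
};
thread.Start();
```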

How to Manage Shared Resources in Multi-Threading?

As we know, the .NET CLR delegates multi-threading to the operating system (OS). The OS then uses the thread scheduler to manage the threads.

Each thread has its own local stack, and this local memory is not shared between threads. For example, in the code above, no matter how many threads call the PrintNumbers method, each thread keeps its own copy of the i variable to track the loop.

Now the interesting question: when there are multiple threads, what happens to shared data (data that multiple threads can access)? How does the OS decide how multiple threads access it?

Let’s answer this question.

But before that, we need to understand how multi-threading works on single-processor and multi-processor computers.

Single-processor computers use something called time slicing to allocate execution time to multiple threads. The thread scheduler context-switches between the threads to let each one perform its work, which means a thread can be preempted, i.e. interrupted partway through its execution. A thread can’t control when it will be preempted, because the thread scheduler decides which thread gets how much execution time. In simple terms, a single-processor computer has only one active thread at any given moment.

On multi-processor computers, by contrast, multiple threads execute truly in parallel. For example, if a computer has 4 processors, there can be 4 active threads at any given time. In this situation, accessing shared resources (variables, properties, etc.) can be a bit tricky.

To safely share common data among threads, we use something called locks. A lock limits access to a certain part of your code to a single thread at a time, i.e. any block of code wrapped in a lock can be executed by only one thread at once.

Let’s see this in action.
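Here’s a minimal sketch of that example (the class name HelloWorldPrinter is illustrative; the flag, the lock object, and the thread name follow the explanation below):

```csharp
using System;
using System.Threading;

public class HelloWorldPrinter
{
    private static bool _isCompleted = false;
    private static readonly object _lock = new object();

    public HelloWorldPrinter()
    {
        // Two simultaneous calls to PrintHelloWorld:
        Thread thread = new Thread(PrintHelloWorld) { Name = "My worker thread" };
        thread.Start();    // one from the worker thread...
        PrintHelloWorld(); // ...and one directly from the main thread.
    }

    private void PrintHelloWorld()
    {
        lock (_lock) // only one thread at a time may enter this block
        {
            if (!_isCompleted)
            {
                Console.WriteLine($"Hello World! (printed by {Thread.CurrentThread.Name ?? "Main thread"})");
                _isCompleted = true; // the second thread will now skip the if block
            }
        }
    }

    static void Main() => new HelloWorldPrinter();
}
```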

In the example code above, our aim is to print Hello World! just once. But in the constructor, we make two simultaneous calls to the PrintHelloWorld method: one via thread.Start() and the other by calling PrintHelloWorld directly.

Both the main thread and the newly created worker thread will call this method, but we want the Hello World! text printed only once. To achieve this, we use two things: a boolean flag _isCompleted and a lock object.

Because the if block is wrapped in the lock scope, no matter which thread calls PrintHelloWorld first, the lock ensures the other thread waits until the first one finishes execution. During the first run, _isCompleted becomes true, so when the second thread enters the lock scope the if condition fails and we don’t see another Hello World!

This proves that the lock statement ensured only one thread entered the scope at a time. In technical terms, the code inside the lock scope is called a critical section.

If you comment out the lock scope, you can see Hello World! printed twice on the console.

As you can see, we have named our worker thread My worker thread. The name shows which thread printed the Hello World! message on the console. You may have to run the program multiple times before the worker thread gets the opportunity to reach the critical section first; I had to run it 5-6 times to see this.

You can perform heavy operations in the critical section as well; the other thread will still wait until the active one finishes. To simulate this, add a Thread.Sleep(3000) statement inside the critical section to introduce a 3-second delay.

Creating a thread, along with its local memory stack, puts load on the CPU. Therefore, creating, destroying, and recreating threads is expensive.

Wouldn’t it be better if a thread could be created once and then reused again and again for various tasks? Yes, that’s possible using a thread pool, which you are going to learn about in the next section.

What is Thread Pool in C#?

A thread pool in C# is a collection of background worker threads that are managed by the .NET runtime. These threads are used to perform tasks in the background, such as executing asynchronous operations, without the overhead of creating and destroying threads manually.

A background thread is identical to any other thread, with one key difference: it does not keep the application alive. When the main thread (and any other foreground threads) finish, the remaining background threads are terminated along with the process. A worker thread becomes a background thread when its IsBackground property is set to true.

The thread pool limits the number of threads. For example, if the limit is 10, then only 10 threads can be active in the pool at any given time. Any additional work items wait until the current threads finish their tasks.

You can use the Thread.CurrentThread.IsThreadPoolThread property to check whether a thread belongs to the thread pool. For example:
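A snippet (it requires using System.Threading):

```csharp
// Prints True when executed on a thread-pool thread, False otherwise.
Console.WriteLine($"Is thread-pool thread: {Thread.CurrentThread.IsThreadPoolThread}");
```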

Benefits of Thread Pool.

  1. Efficient Resource Management: Creating and destroying threads is resource-intensive. The thread pool reuses existing threads, reducing the overhead.
  2. Scalability: The thread pool can dynamically adjust the number of threads based on the workload, improving application performance.
  3. Simplified Programming Model: Developers can focus on the tasks rather than managing threads, making the code cleaner and easier to maintain.

Example: Using Thread Pool in C#.

Let’s create a simple example using the thread pool in C#.

  1. Create a Console Application: Open Visual Studio Code and create a new console application.
  2. Write the Code: Add the following code to your Program.cs file.
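A minimal sketch of such a program (queueing three named tasks is an illustrative choice; the structure follows the step-by-step explanation below):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Console.WriteLine("Main thread started.");

        // Queue tasks to the thread pool. QueueUserWorkItem takes a
        // WaitCallback delegate and an optional state object (the task name here).
        ThreadPool.QueueUserWorkItem(DoWork, "Task 1");
        ThreadPool.QueueUserWorkItem(DoWork, "Task 2");
        ThreadPool.QueueUserWorkItem(DoWork, "Task 3");

        // Keep the console open so the pool threads can finish.
        Console.ReadLine();
    }

    static void DoWork(object state)
    {
        Console.WriteLine($"{state} started on thread {Thread.CurrentThread.ManagedThreadId}");
        Thread.Sleep(2000); // simulate 2 seconds of work
        Console.WriteLine($"{state} completed on thread {Thread.CurrentThread.ManagedThreadId}");
    }
}
```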

Step-by-Step Explanation

  1. Main Method: The Main method is the entry point of the application. It starts by printing a message to the console.
  2. QueueUserWorkItem: This method queues a task to the thread pool. It takes a WaitCallback delegate that points to the method to be executed (DoWork in this case) and an optional state object (task name).
  3. DoWork Method: This method is executed by the thread pool threads. It takes an object parameter, which is the state passed from QueueUserWorkItem. The method prints the task name and the thread ID, simulates work by sleeping for 2 seconds, and then prints a completion message.
  4. Thread.Sleep: This simulates some work being done by the thread.
  5. Console.ReadLine: This keeps the console window open until the user presses Enter, allowing you to see the output.

Output

When you run the application, you will see an output similar to this:
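The exact thread IDs and ordering will vary from run to run; with the sketch above it looks roughly like this:

```
Main thread started.
Task 1 started on thread 4
Task 2 started on thread 5
Task 3 started on thread 6
Task 1 completed on thread 4
Task 2 completed on thread 5
Task 3 completed on thread 6
```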

This demonstrates that the tasks are executed on different threads from the thread pool, and the threads are reused efficiently.

Using the thread pool, you can execute background tasks without the complexity of manually managing threads, leading to more efficient and scalable applications.

What is the Join method in threading?

The Join method in C# is used in threading to make one thread wait until another thread has finished its execution. This is useful when you need to ensure that a particular thread completes its work before the main thread or another thread continues.

Here’s a simple example to illustrate how the Join method works:

  1. Create a new thread that performs some work.
  2. Use the Join method to make the Main thread wait until the new thread completes its execution.
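A minimal sketch of those two steps (the messages are illustrative; the 5-second sleep matches the explanation below):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        // 1. Create and start a thread that performs some work.
        Thread workerThread = new Thread(DoWork);
        workerThread.Start();

        // 2. Block the main thread until workerThread finishes.
        workerThread.Join();

        Console.WriteLine("Worker thread has finished; main thread resumes.");
    }

    static void DoWork()
    {
        Console.WriteLine("Worker thread doing some work...");
        Thread.Sleep(5000); // simulate 5 seconds of work
        Console.WriteLine("Worker thread done.");
    }
}
```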
  1. Creating a Thread: We create a new thread workerThread that will run the DoWork method.
  2. Starting the Thread: We start the workerThread using the Start method.
  3. Using Join: The Join method is called on workerThread. This makes the main thread (the thread that runs the Main method) wait until workerThread has finished executing.
  4. Simulating Work: Inside the DoWork method, we simulate some work by making the thread sleep for 5 seconds using Thread.Sleep(5000).
  5. Completion: After the workerThread completes its work, the main thread resumes and prints a message indicating that the worker thread has finished.

In this example, you can see how you can use the Join method to synchronize threads and ensure that one thread completes before another continues.

Exception handling in C# multi-threading.

Exception handling in C# multi-threading involves managing errors that occur within individual threads. When a thread encounters an exception, it can disrupt the execution flow and potentially cause the entire application to crash if not properly handled.

Key points to understand:

  1. Isolated Handling: Each thread should handle its own exceptions using try-catch blocks to prevent unhandled exceptions from terminating the application.
  2. Communication: Threads can communicate exceptions back to the main thread or other threads, often using shared variables, events, or callback methods.
  3. Thread Safety: Ensure that exception handling does not introduce race conditions or other concurrency issues.

By effectively managing exceptions in multi-threaded applications, you can ensure that your application remains robust and can recover gracefully from errors.

Let’s use the Thread class to demonstrate exception handling in a multi-threaded environment.

Here’s an example:

  1. Create a thread that performs some work and may throw an exception.
  2. Use a try-catch block within the thread’s method to catch and handle exceptions.
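A minimal sketch (the exception type and messages are illustrative):

```csharp
using System;
using System.Threading;

class Program
{
    static void Main()
    {
        Thread thread = new Thread(DoWork);
        thread.Start();

        // Wait for the worker thread to complete.
        thread.Join();

        Console.WriteLine("Main thread finished.");
    }

    static void DoWork()
    {
        try
        {
            Console.WriteLine("Working...");
            Thread.Sleep(3000); // simulate 3 seconds of work
            throw new InvalidOperationException("Something went wrong in the worker thread.");
        }
        catch (Exception ex)
        {
            // Handle the exception inside the thread; an unhandled
            // exception on a thread would crash the whole process.
            Console.WriteLine($"Caught exception: {ex.Message}");
        }
    }
}
```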

In this example:

  • The DoWork method simulates some work, waits for 3 seconds, and throws an exception.
  • The Thread class is used to create and start a new thread that runs the DoWork method.
  • The try-catch block within the DoWork method catches and handles any exceptions that occur in the thread.
  • The thread.Join() method ensures the main thread waits for the worker thread to complete.

This demonstrates how to handle exceptions within individual threads using the Thread class and try-catch blocks.

Challenges of Using Threads.

Although threads enable parallel programming in C#, they bring their own challenges as well.

  1. Complexity: Managing threads manually can be complex. You need to handle thread creation, synchronization, and termination explicitly.
  2. Resource Management: Threads consume system resources. Creating too many threads can lead to resource exhaustion.
  3. Error Handling: Handling exceptions in threads can be tricky, especially when you need to communicate errors back to the main thread.
  4. Synchronization: Ensuring thread safety and avoiding race conditions requires careful use of synchronization primitives like locks, which can lead to deadlocks if not managed correctly.
  5. Scalability: Manually managing threads does not scale well with increasing workloads or more complex applications.

Tasks, on the other hand, address most (if not all) of these challenges.

What are Tasks?

Tasks in C# are part of the Task Parallel Library (TPL) and provide a higher-level abstraction for managing asynchronous operations. Tasks simplify the process of running code asynchronously and handling the results or exceptions.

How Tasks Solve These Challenges

  1. Simplified Syntax: Tasks provide a simpler and more intuitive syntax for running asynchronous operations.
  2. Automatic Resource Management: The TPL manages the underlying threads, optimizing resource usage and improving performance.
  3. Built-in Exception Handling: Tasks have built-in mechanisms for handling exceptions, making it easier to manage errors.
  4. Synchronization: Tasks provide methods like ContinueWith and await (with async/await keywords) to handle synchronization more easily.
  5. Scalability: The TPL efficiently manages the thread pool, allowing applications to scale better with increasing workloads.

Benefits of Tasks

  1. Ease of Use: Tasks are easier to use and understand compared to manually managing threads.
  2. Better Resource Management: The TPL optimizes resource usage, reducing the risk of resource exhaustion.
  3. Improved Error Handling: Tasks provide better mechanisms for handling exceptions and propagating errors.
  4. Enhanced Scalability: Tasks scale better with increasing workloads, making them suitable for complex applications.
  5. I/O-Bound Operations: Tasks enable you to work with I/O-bound operations efficiently, such as interacting with databases, external web APIs, etc.

Let’s compare Tasks and threads in the context of exception handling. You’ll see how Tasks can simplify your code and make your life as a developer easier.

First, recall how you would manage exceptions using threads; we discussed this in the previous section.

See below how using Tasks can simplify the code.
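A minimal sketch of the Task-based version (it mirrors the thread example from the previous section; async Main requires C# 7.1 or later):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        try
        {
            // Task.Run schedules the work on a thread-pool thread;
            // await re-throws any exception here, on the calling side.
            await Task.Run(() =>
            {
                Console.WriteLine("Working...");
                Thread.Sleep(3000); // simulate 3 seconds of work
                throw new InvalidOperationException("Something went wrong in the task.");
            });
        }
        catch (Exception ex)
        {
            Console.WriteLine($"Caught exception: {ex.Message}");
        }

        Console.WriteLine("Main method finished.");
    }
}
```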

  • Using Threads: You must manually create, start, and join the thread. Exception handling is done within the thread method, and communicating errors back to the main thread can be complex.
  • Using Tasks: Tasks provide a simpler syntax with Task.Run and await. Exception handling is more straightforward, and the TPL manages the underlying threads, optimizing resource usage and improving scalability.

Using Tasks, you can write cleaner, more maintainable, and scalable code, making managing asynchronous operations in your applications easier.

What is Continuation in TPL?

In the context of asynchronous programming, a continuation is a piece of code executed after a task is completed. Continuations allow you to specify what should happen next once an asynchronous operation finishes. This is particularly useful for managing complex workflows where multiple asynchronous operations must be coordinated.

Benefits of Continuations.

  1. Improved Readability: Continuations can make asynchronous code easier to read and maintain by avoiding deeply nested callbacks.
  2. Error Handling: Continuations can be used to handle errors more gracefully.
  3. Composability: Continuations allow you to chain multiple asynchronous operations together clearly and concisely.

Pitfalls to Avoid.

  1. Complexity: Overusing continuations can lead to complex and hard-to-follow code.
  2. Exception Handling: Ensure proper exception handling within continuations to avoid unhandled exceptions.
  3. Resource Management: Be cautious about resource management, especially when dealing with long-running tasks.

Here is a simple example to demonstrate the concept of continuations using the Task class in C#:
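A minimal sketch (the 2-second delay and the result value 42 are illustrative):

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // 1. Create a task that simulates some work and returns a result.
        Task<int> task = Task.Run(() =>
        {
            Thread.Sleep(2000); // simulate 2 seconds of work
            return 42;
        });

        // 2. Attach a continuation that runs once the task has completed.
        task.ContinueWith(completedTask =>
        {
            Console.WriteLine($"Task result: {completedTask.Result}");
        });

        // 3. Keep the console open so the task and its continuation can finish.
        Console.ReadLine();
    }
}
```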

  1. Task Creation: A task is created using Task.Run which simulates some work and returns a result.
  2. Continuation: The ContinueWith method is used to specify a continuation that will run after the task is completed. This continuation simply prints the result of the task.
  3. Main Method: The Main method completes, but the console remains open so the task and its continuation can finish.

What is Synchronization in Multithreading?

When working with multithreading in C#, synchronization is a crucial concept to understand. Synchronization ensures that multiple threads can safely access shared resources without causing data corruption or inconsistencies. Without proper synchronization, you might encounter issues like race conditions, deadlocks, and data corruption.

Why is Synchronization Important?

Imagine you have multiple threads trying to update a shared variable simultaneously. Without synchronization, these threads might interfere with each other, leading to unpredictable results. Synchronization mechanisms help coordinate the access to shared resources, ensuring that only one thread can modify the resource at a time.

Multiple Ways to Achieve Synchronization in C#.

C# provides several synchronization primitives to help manage access to shared resources. Let’s explore some of the most commonly used ones with beginner-friendly examples.

Lock Statement.

The lock statement is the simplest way to achieve synchronization. It ensures that only one thread can enter a critical section of code at a time.

Let’s look at the example below to see how the lock statement can help.
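A minimal sketch (incrementing 1,000 times per thread is an illustrative choice to make the race visible):

```csharp
using System;
using System.Threading;

class Program
{
    private static int counter = 0;
    private static readonly object lockObject = new object();

    static void Main()
    {
        Thread thread1 = new Thread(IncrementCounter);
        Thread thread2 = new Thread(IncrementCounter);

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();

        Console.WriteLine("Final counter value: " + counter);
    }

    static void IncrementCounter()
    {
        for (int i = 0; i < 1000; i++)
        {
            lock (lockObject) // only one thread may increment at a time
            {
                counter++;
            }
        }
    }
}
```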

In the code above:

  • private static readonly object lockObject = new object(); declares a static read-only object lockObject used for synchronization to prevent race conditions.
  • Thread thread1 = new Thread(IncrementCounter); creates a new thread thread1 that will execute the IncrementCounter method.
  • Thread thread2 = new Thread(IncrementCounter); creates another thread thread2 that will also execute the IncrementCounter method.
  • thread1.Start(); and thread2.Start(); start the execution of both threads concurrently.
  • thread1.Join(); and thread2.Join(); block the main thread until thread1 and thread2 complete their execution.
  • Console.WriteLine("Final counter value: " + counter); prints the final value of the counter after both threads have finished.
  • lock (lockObject) in IncrementCounter method ensures that only one thread can execute the code inside the lock block at a time, preventing race conditions by acquiring a mutual exclusion lock on lockObject.

Monitor Class.

The Monitor class provides more control over synchronization compared to the lock statement. It allows you to enter and exit critical sections explicitly.

Here’s an example showing how you can use the Monitor class in your code.
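A minimal sketch, identical to the lock version except that the critical section is guarded with explicit Monitor.Enter and Monitor.Exit calls:

```csharp
using System;
using System.Threading;

class Program
{
    private static int counter = 0;
    private static readonly object lockObject = new object();

    static void Main()
    {
        Thread thread1 = new Thread(IncrementCounter);
        Thread thread2 = new Thread(IncrementCounter);

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();

        Console.WriteLine("Final counter value: " + counter);
    }

    static void IncrementCounter()
    {
        for (int i = 0; i < 1000; i++)
        {
            Monitor.Enter(lockObject); // explicitly acquire the lock
            try
            {
                counter++;
            }
            finally
            {
                Monitor.Exit(lockObject); // released even if an exception occurs
            }
        }
    }
}
```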

In the code above:

  • private static readonly object lockObject = new object(); declares a static read-only object lockObject used for synchronization to prevent race conditions.
  • Thread thread1 = new Thread(IncrementCounter); creates a new thread thread1 that will execute the IncrementCounter method.
  • Thread thread2 = new Thread(IncrementCounter); creates another thread thread2 that will also execute the IncrementCounter method.
  • thread1.Start(); and thread2.Start(); start the execution of both threads concurrently.
  • thread1.Join(); and thread2.Join(); block the main thread until thread1 and thread2 complete their execution.
  • Console.WriteLine("Final counter value: " + counter); prints the final value of the counter after both threads have finished.
  • Monitor.Enter(lockObject); in the IncrementCounter method acquires a lock on lockObject to ensure that only one thread can execute the code inside the try block at a time, preventing race conditions.
  • try { counter++; } finally { Monitor.Exit(lockObject); } ensures that the lock is released even if an exception occurs, maintaining proper synchronization.

Mutex Class.

The Mutex class is used for synchronization across multiple processes. It can be used to ensure that only one instance of a resource is accessed at a time, even if the resource is shared across different applications.

Example:
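A minimal sketch following the points below (a local, unnamed Mutex is used here; cross-process scenarios would need a named Mutex):

```csharp
using System;
using System.Threading;

class Program
{
    private static int counter = 0;
    private static readonly Mutex mutex = new Mutex();

    static void Main()
    {
        Thread thread1 = new Thread(IncrementCounter);
        Thread thread2 = new Thread(IncrementCounter);

        thread1.Start();
        thread2.Start();

        thread1.Join();
        thread2.Join();

        Console.WriteLine("Final counter value: " + counter);
    }

    static void IncrementCounter()
    {
        for (int i = 0; i < 1000; i++)
        {
            mutex.WaitOne(); // acquire the mutex before touching the counter
            try
            {
                counter++;
            }
            finally
            {
                mutex.ReleaseMutex(); // always release, even on exceptions
            }
        }
    }
}
```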

  • The code above uses multithreading to increment a shared counter variable.
  • A Mutex is used to ensure that only one thread can increment the counter at a time, preventing race conditions.
  • Two threads (thread1 and thread2) are created and started, both running the IncrementCounter method.
  • Each thread increments the counter 1000 times.
  • The mutex.WaitOne() method is called to acquire the mutex before incrementing the counter.
  • The counter++ operation is enclosed in a try block to ensure the mutex is released in the finally block, even if an exception occurs.
  • The thread1.Join() and thread2.Join() methods are called to wait for both threads to finish before printing the final counter value.

Semaphore and SemaphoreSlim Classes.

The Semaphore and SemaphoreSlim classes limit the number of threads that can access a resource concurrently. SemaphoreSlim is a lightweight version of Semaphore.

Let’s understand this with an example.
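A minimal sketch (Console.ReadLine at the end simply keeps the program alive while the threads finish):

```csharp
using System;
using System.Threading;

class Program
{
    // Allow up to 2 threads in the critical section at once.
    private static readonly SemaphoreSlim semaphore = new SemaphoreSlim(2);

    static void Main()
    {
        for (int i = 1; i <= 5; i++)
        {
            int id = i; // capture the loop variable for the lambda
            new Thread(() => AccessResource(id)).Start();
        }

        Console.ReadLine(); // keep the console open until the threads finish
    }

    static void AccessResource(int id)
    {
        Console.WriteLine($"Thread {id} is waiting...");
        semaphore.Wait(); // decrements the count; blocks when it reaches 0
        try
        {
            Console.WriteLine($"Thread {id} entered the critical section.");
            Thread.Sleep(1000); // simulate 1 second of work
        }
        finally
        {
            Console.WriteLine($"Thread {id} is leaving.");
            semaphore.Release(); // increments the count again
        }
    }
}
```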

  • The code above demonstrates the use of a SemaphoreSlim to control access to a shared resource by multiple threads.
  • A SemaphoreSlim object is created with an initial count of 2, allowing up to 2 threads to enter the critical section simultaneously.
  • In the Main method, 5 threads are created and started, each running the AccessResource method.
  • Each thread attempts to enter the critical section by calling semaphore.Wait().
  • If the semaphore count is greater than 0, the thread enters the critical section and the count is decremented.
  • Inside the critical section, the thread simulates work by sleeping for 1 second.
  • After completing the work, the thread releases the semaphore by calling semaphore.Release(), which increments the semaphore count.
  • The console output shows when each thread is waiting, entering, and leaving the critical section.

Try running the code above to see the output you get.

Synchronization is essential in multithreading to ensure safe access to shared resources. C# provides various synchronization primitives like lock, Monitor, Mutex, and Semaphore to help manage concurrent access. You can write robust and thread-safe applications by understanding and using these synchronization mechanisms.